2009/5/12 Jerry <[email protected]>:
> Hi,
>
> In my environment, I found that gmetad consumes nearly 3G virtual memory.
> Below is the detail.
>
> [r...@abc ~]# top -b -n 1 | grep gmetad
> 23867 nobody    20   0 2793m 1424  644 S    0  0.0   0:49.27 gmetad
>
> Does anyone see this before? Any idea about this?
>
> Thanks,
>
> Jerry
>
>
>
> ________________________________
> [Translated from Chinese: "Crossing the earthquake zone: commemorating the first anniversary of the Wenchuan earthquake"]
> ------------------------------------------------------------------------------
> The NEW KODAK i700 Series Scanners deliver under ANY circumstances! Your
> production scanning environment may not be a perfect world - but thanks to
> Kodak, there's a perfect scanner to get the job done! With the NEW KODAK
> i700
> Series Scanner you'll get full speed at 300 dpi even with all image
> processing features enabled. http://p.sf.net/sfu/kodak-com
> _______________________________________________
> Ganglia-general mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/ganglia-general
>
>

Jerry,

I can't offer a solution, but I can help collect some
information so others don't have to ask.

What is your ganglia version?
What is the cluster scale? (How many monitored nodes do you have?)
How long has the cluster been up?
And what is the OS of your head node?
Does the memory usage always stay at this level, or does it grow over time?
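
For reference, a quick way to gather most of that information on a typical
Linux head node (the package name, the `--version` flag, and gmetad's
interactive port 8651 are assumptions that may differ on your distro or
Ganglia build):

```shell
# Ganglia version (package name varies by distro; try the binary itself too)
rpm -q ganglia-gmetad 2>/dev/null || gmetad --version 2>/dev/null || true

# OS and kernel of the head node
uname -a
cat /etc/*-release 2>/dev/null | head -n 2

# How long the machine has been up
uptime

# Current gmetad footprint: VIRT (virtual) vs. RES (resident) memory
top -b -n 1 | grep gmetad || true

# Rough node count: hosts reported on gmetad's interactive port (default 8651)
# telnet localhost 8651 | grep -c '<HOST '
```

Note that `top` reports both virtual size and resident set size; a large
VIRT with a small RES (as in your output, 2793m vs. 1424k) usually means
address space reserved but not actually backed by physical memory.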

Please kindly provide answers to the questions above; I hope they
will help others investigate.
-- 
Regards,
Simon Yan

