On Wed, Mar 12, 2008 at 10:52:03AM -0500, Seth Graham wrote:
> Martin Hicks wrote:
> 
> >The configuration of gmetad has been modified to store the rrds in
> >/dev/shm, but this directory gets very large so I'd like to move away
> >from that.
> 
> Using tmpfs is pretty much your only option. As you discovered, the disk 
> I/O will bring most machines to their knees.

:( Seems like a pretty crappy use of that much memory.

cct506-1:~ # du -s --si /dev/shm/rrds/
477M    /dev/shm/rrds/

I've seen the rrds directory at 1.5GB in production clusters.
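
For reference, the relevant bits of our setup look roughly like this (the
tmpfs size below is an illustrative number, not something we've tuned):

    # /etc/ganglia/gmetad.conf
    rrd_rootdir "/dev/shm/rrds"

    # A dedicated tmpfs with a size cap (in /etc/fstab) would at least
    # bound how much memory the RRDs can eat:
    tmpfs  /var/lib/ganglia/rrds  tmpfs  size=2g,mode=0755  0 0

and then rrd_rootdir would point at that mount instead of /dev/shm.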

> 
> >Is there a way that I should be architecting the configuration files
> >to make ganglia scale to work on this cluster?
> >
> >I think I want to run gmetad on each head node, and to use that RRD
> >data without regenerating it on the admin node.  Is that possible?
> 
> This is definitely possible, though I don't think it's necessary. I have 
>  machines handling 1500 reporting nodes without problems, writing the 
> rrds to a tmpfs.
> 
> The downside of setting up ganglia with head nodes is that you have to 
> set up some way to make the rrds available to a central web server. 
> Several ways to do that too, but they introduce their own headaches.

Right.  So I'd have to use NFS or something similar.
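
For what it's worth, the hierarchical arrangement I had in mind is the
usual one where the admin node's gmetad polls the head-node gmetads
instead of the individual gmonds (host and cluster names below are made
up):

    # gmetad.conf on a head node
    data_source "subcluster-a" localhost:8649
    xml_port 8651

    # gmetad.conf on the admin node
    data_source "subcluster-a" head-a:8651
    data_source "subcluster-b" head-b:8651

As I understand it, though, the admin-node gmetad still writes its own
copy of the RRDs for everything it polls, which is exactly the
duplication I was hoping to avoid -- hence the NFS idea.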

After I wrote the first e-mail I started wondering about the updates to
__SummaryInfo__.  How expensive would those updates be if all of the
sub-cluster RRD files were NFS-mounted?

Does that summary info get regenerated on the same poll interval as
everything else?
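
For context, the tree I'm worried about looks roughly like this (from
memory of what gmetad writes under rrd_rootdir; names abbreviated):

    rrds/
      __SummaryInfo__/          grid-level summary RRDs
      subcluster-a/
        __SummaryInfo__/        per-cluster summary RRDs
        node001/
          cpu_user.rrd
          ...

If the summaries are rewritten on every cycle, then each poll touches
the summary RRDs as well as the per-host ones, which is what makes me
nervous about putting them on NFS.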

Thanks,
mh

