Joseph Syu <[EMAIL PROTECTED]> writes:

> We have noticed that the memory usage on this gmetad server is mostly
> consumed by the Apache services (each Apache process takes ~30MB of
> memory), not by gmetad itself (only 6KB each), so I assume it is just
> a matter of an under-powered machine. Am I on the right track here?
> Or what might be a better way to architect a system like this?
> Please advise, thanks,

Ganglia is not a big memory hog. Rather, it tends to be I/O-bound,
since it constantly has to update thousands of small RRD files.

What I did when our monitoring server had trouble keeping up was to
put in a couple of fast disks (7200 rpm Ultra-ATA/133), stripe them
for performance, and put a reiserfs file system on them to use as /var.
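For reference, the setup above can be sketched roughly like this.
This is a minimal sketch assuming Linux software RAID via mdadm; the
device names are hypothetical, and these commands will destroy any
existing data on the disks involved:

```shell
# Stripe two disks into a RAID0 array (device names are assumptions;
# substitute your own). RAID0 gives no redundancy, only throughput.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hde /dev/hdg

# Put a reiserfs file system on the array.
mkreiserfs /dev/md0

# Copy the existing /var across and remount the array in its place.
mount /dev/md0 /mnt
cp -a /var/. /mnt/
umount /mnt
mount /dev/md0 /var

# Make the mount permanent.
echo '/dev/md0  /var  reiserfs  defaults  0 2' >> /etc/fstab
```

The point is simply to spread the many small RRD writes across two
spindles; any equivalent striping setup would do.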

This was almost certainly overkill, but it solved the problem:
the RRD database files are once more updated correctly, and the
web interface is, if not blazingly fast, at least usable.

-- 
Leif Nixon                                    Systems expert
------------------------------------------------------------
National Supercomputer Centre           Linköping University
------------------------------------------------------------