Remember that RRD files are a fixed size. In other words, they should never grow beyond the size they were created with. That's why they call 'em round-robin databases. :)

So the only reason new RRDs would be created is if new metrics were added for existing hosts or if new hosts were added to the cluster.

Theoretically you could write a startup script that would precisely allocate the size of the ramdisk based on the size of the gmetad/rrds directory (size plus maybe 256k for temp files?). That'd be nice and fun...
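
Something along those lines, maybe (just a sketch, assuming the default
/usr/local/gmetad/rrds path and a tmpfs mount rather than the loopback image
Matt describes below; the doubling and the 256k padding are guesses):

   #!/bin/sh
   # sketch only -- paths, sizing and the tmpfs choice are assumptions, not a tested script
   SAVED=/usr/local/gmetad/rrds.orig    # on-disk copy of the RRDs
   LIVE=/usr/local/gmetad/rrds          # where gmetad actually writes
   USED_KB=`du -sk $SAVED | awk '{print $1}'`
   SIZE_KB=`expr $USED_KB \* 2 + 256`   # double it, plus ~256k for temp files
   mount -t tmpfs -o size=${SIZE_KB}k tmpfs $LIVE
   (cd $SAVED && tar -cf - .) | (cd $LIVE && tar -xf -)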

matt massie wrote:
mark-

i've seen this behavior on the machine running the ganglia demo page but it's just a p2 with 128 mbs of memory (soon to be upgraded).

i'm rewriting gmetad in C right now and will be incorporating it into the monitoring-core distribution soon. the biggest bottleneck with gmetad right now is disk I/O. keep in mind that load on linux counts not just running processes but also processes stuck in I/O wait. gmetad writes to about 25 files per host every 15 seconds or so. the next generation of gmetad will not be nearly as i/o intensive.
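
just to put rough numbers on that (assuming ~25 rrds per host, a 15 second
cycle, and borrowing the 117-host count from the example below):

   # back-of-the-envelope write rate -- illustrative only
   echo $((25 * 117))        # files touched per cycle: 2925
   echo $((25 * 117 / 15))   # approximate writes per second: 195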

as a trick to make gmetad work better for you.. create a ramdisk to write the round-robin databases to.

here are the steps (i'm assuming you installed gmetad in the default location)

1. find out how much space your round-robin databases are taking right now
   by doing the following
      a. # cd /usr/local/gmetad/rrds
      b. # du -sk .
           80384   .
it's important to note that the size of the round-robin databases remains constant over time; they don't grow. of course, if you increase the number of databases (by monitoring more hosts or metrics) then this total will increase. in this example (taken from the ganglia demo machine), we are monitoring 117 hosts for a total of over 3000 rrd files in only 78 mbs of disk space.

2. create a ramdisk image file at least as big as the space you need (i'd
   double it... 80384*2= 160768)

   dd if=/dev/zero of=/root/rrd-ramdisk.img bs=1k count=160768

3. mke2fs -F -vm0 /root/rrd-ramdisk.img

4. /etc/rc.d/init.d/gmetad stop

5. mv /usr/local/gmetad/rrds /usr/local/gmetad/rrds.orig

6. mkdir /usr/local/gmetad/rrds

7. mount -o loop /root/rrd-ramdisk.img /usr/local/gmetad/rrds

8. copy your round-robin databases to the new RAM disk...
   (cd /usr/local/gmetad/rrds.orig; tar -cf - .)  | \
   (cd /usr/local/gmetad/rrds; tar -xvf -)

9. /etc/rc.d/init.d/gmetad start
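
once gmetad is back up, it doesn't hurt to double-check that the loopback
filesystem is really mounted and that the copy in step 8 is complete (this
check isn't part of the original steps):

   df -k /usr/local/gmetad/rrds     # should show the ~160768k loopback filesystem
   ls /usr/local/gmetad/rrds        # should list the same cluster directories as rrds.orig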

if you want to see a site which uses this RAM disk trick (and invented it, too), take a look at http://meta.rocksclusters.org/. they are monitoring over 450 hosts this way quite comfortably.

one important note... since the data is being written to RAM and not the disk.. it will of course be lost on reboot. if you want to keep the round-robin databases long-term.. you will need to set up a cron job that periodically saves the data from the RAM disk to the physical disk, plus a boot-time script that copies it back after a reboot.
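
a minimal sketch of that cron job, assuming the default paths and keeping the
on-disk copy in rrds.orig (the file name and schedule are made up; the restore
at boot is the same pipe run in the other direction from an init script before
gmetad starts):

   #!/bin/sh
   # e.g. /etc/cron.hourly/save-rrds -- copy the RAM disk contents back to real disk
   # paths and schedule are assumptions; adjust for your install
   (cd /usr/local/gmetad/rrds && tar -cf - .) | \
      (cd /usr/local/gmetad/rrds.orig && tar -xf -)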

i hope this helps. i'm going to focus much more attention on gmetad in the next few days and i'm sure you'll find the C version of gmetad much easier to install and much more efficient to run.

good luck!
-matt

Yesterday, markp wrote forth saying...


Is anyone experiencing a high load with gmetad?  I've run this daemon on
a high end intel 933mhz dual proc machine with 1gb of memory and RH
7.2.   Loads get and stay as high as 3.  I get worse results on single
processor machines, with loads as high as 6.7.  Kill the daemon and it drops
back to normal.  Is it supposed to be such a resource hog?  I ran the
old web-frontend and didn't have any problems.



