Hi Alex:

Actually, I wonder if we should check your code into trunk as "contrib" code... what do others think?
Cheers,
Bernard

> -----Original Message-----
> From: Alex Balk [mailto:[EMAIL PROTECTED]
> Sent: Friday, June 09, 2006 12:32
> To: Bernard Li
> Cc: Stackpole, Chris; [email protected]
> Subject: Re: [Ganglia-general] Ganglia Alert and Tracking
>
>
> Bernard Li wrote:
>
> >> I am trying to write a script that pulls the info from netcat
> >> and averages out some numbers, but I believe there is an
> >> easier way. Does Ganglia store data in such a way that I
> >> could pull this type of information? This seems so useful
> >> to me that I am sure others have tried it. Are there any
> >> ideas or suggestions?
> >>
> >
> > Sorry for hijacking your thread, Chris, but your question leads me to
> > think that there is some interesting data stored in the RRD database.
> > Perhaps we could write a script to mine this data and provide some
> > interesting historical reports?
> >
>
> Actually, my patch for "custom graphs" accomplishes exactly what you're
> talking about.
> It allows you to create a template and then load it for whatever view
> (meta, cluster, host) you desire. Couple this with gmetric and you can
> pretty much generate a graph for anything (read: visually represent any
> aspect of your data). It also supports rrdtool's CDEFs, so you can do
> data transformations as well.
> Oh, and the rendering backend may be called from within an <IMG SRC=...>,
> which allows creating "customized dashboards". I've started working on
> one where customers can view different utilization graphs based on the
> cluster specialty (batch, interactive, infrastructure), NFS statistics,
> parallel job utilization (how much a process named X consumes across
> multiple hosts), etc.
>
> What I'm really missing is a method to "generate" aggregate data on the
> fly. Something like "take these 3 hosts, all from different clusters,
> and show me their aggregate CPU consumption".
>
> Cheers,
> Alex
>
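For anyone who wants to experiment with mining the RRDs directly in the meantime, here is a rough sketch of the kind of averaging script Chris describes. It assumes the stock gmetad layout of <rrd_rootdir>/<cluster>/<host>/<metric>.rrd with a single data source named "sum"; the directory, cluster, host, and metric names below are just illustrative placeholders.

#!/usr/bin/env python
# Rough sketch only: average a Ganglia metric over the last hour by reading
# the RRD that gmetad writes for it.  The rrd_rootdir, cluster, host and
# metric names are assumptions -- adjust them for your installation.
import subprocess

RRD_DIR = "/var/lib/ganglia/rrds"       # common gmetad rrd_rootdir (assumed)
CLUSTER = "mycluster"
HOST    = "node01.example.com"
METRIC  = "load_one"

rrd = "%s/%s/%s/%s.rrd" % (RRD_DIR, CLUSTER, HOST, METRIC)

# "rrdtool fetch <file> AVERAGE --start -1h" prints a header, a blank line,
# then one "timestamp: value" line per step.
out = subprocess.check_output(
    ["rrdtool", "fetch", rrd, "AVERAGE", "--start", "-1h"]).decode()

values = []
for line in out.splitlines()[2:]:
    parts = line.split(":")
    if len(parts) == 2 and "nan" not in parts[1].lower():
        values.append(float(parts[1].split()[0]))  # first (and only) DS, "sum"

if values:
    print("%s on %s averaged %.2f over the last hour"
          % (METRIC, HOST, sum(values) / len(values)))

Since gmetad keeps consolidated data at several resolutions in the same RRDs, the same fetch works for daily or weekly reports by changing --start.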
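On the "aggregate data on the fly" point: rrdtool itself can sum series from several hosts' RRDs at graph time with one DEF per host plus a CDEF, the same DEF/CDEF mechanism the custom-graphs patch exposes. A hypothetical sketch follows; the host, cluster, and path names are made up.

#!/usr/bin/env python
# Hypothetical sketch of the "aggregate CPU across three hosts from
# different clusters" idea: sum each host's cpu_user RRD at graph time
# with a CDEF.  All names and paths below are made-up examples.
import subprocess

RRD_DIR = "/var/lib/ganglia/rrds"       # assumed gmetad rrd_rootdir
hosts = [("clusterA", "web01.example.com"),
         ("clusterB", "db01.example.com"),
         ("clusterC", "batch01.example.com")]

cmd = ["rrdtool", "graph", "aggregate_cpu.png",
       "--start", "-4h",
       "--title", "Aggregate cpu_user, 3 hosts",
       "--vertical-label", "percent"]

# One DEF per host, then a CDEF that adds the three series together (RPN).
for i, (cluster, host) in enumerate(hosts):
    cmd.append("DEF:c%d=%s/%s/%s/cpu_user.rrd:sum:AVERAGE"
               % (i, RRD_DIR, cluster, host))
cmd.append("CDEF:total=c0,c1,+,c2,+")
cmd.append("AREA:total#4477AA:aggregate cpu_user")

subprocess.call(cmd)

The resulting PNG could just as easily be produced by the rendering backend and dropped into an <IMG SRC=...> on a dashboard page, as Alex describes.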

