guys

i just uploaded a new snapshot for you to take a look at.  you can 
download all the latest source from 

   http://ganglia.sf.net/g3blog/ganglia-3.0.0.tar.gz

this snapshot is a good representation of how the distributed wide-area 
side of ganglia will work.  g3 will have only a single daemon, gangliad, 
which will do the work of gmond, gmetad, the web frontend and apache/php, 
making g3 easier to install and manage.

to install your own gangliad for testing...

% wget http://ganglia.sourceforge.net/g3blog/ganglia-3.0.0.tar.gz
% gunzip < ganglia-3.0.0.tar.gz | tar xvf -
% cd ganglia-3.0.0/apps

you will need to modify the gangliad source directly since the 
configuration file parser doesn't exist just yet.  at the very top of 
gangliad.c is a short configuration section.  put in your g2 data sources 
there.

% cd ..
% ./configure
% make
% cd apps
% ./gangliad

i have a new demo web site running the latest snapshot for you to take a 
look at.  you will need the adobe svg viewer plugin to view the svg 
graphs... you can download it from http://www.adobe.com/svg/ .. it 
currently works on windows, macos, linux and solaris.

which reminds me... this snapshot runs on linux and cygwin.  if you have
any problems compiling it on your respective os.. let me know.

once you have the SVG viewer installed... point your web browser to

   http://see.millennium.berkeley.edu:8652/

the response that you receive is from a gangliad running with 3 data 
sources comprising ~190 hosts total.

the three main url roots on gangliad are:

   http://see.millennium.berkeley.edu:8652/browse/
   http://see.millennium.berkeley.edu:8652/query/
   http://see.millennium.berkeley.edu:8652/history/

the browse root allows you to view all the data available for a particular 
gangliad in html.  the browse pages let you refine and test a query to 
make sure it returns exactly the data that you are interested in.  there 
are currently four main query variables: filter, age, depth and history.
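if it helps, here's a rough python sketch of building these urls from the 
four variables.  the query-string layout (plain name=value pairs) is 
assumed from the urls in this mail; gangliad's actual parser may differ:

```python
from urllib.parse import urlencode

# base url of the demo gangliad (from this mail)
BASE = "http://see.millennium.berkeley.edu:8652"

def build_url(root, **params):
    """Build a gangliad url for one of the roots (browse/query/history).
    The variable names (filter, age, depth, history) come from this mail;
    the name=value query-string layout is an assumption."""
    query = urlencode(params)
    return f"{BASE}/{root}/?{query}" if query else f"{BASE}/{root}/"

print(build_url("browse", filter="load_one"))
print(build_url("query", filter="load_one", history=3600))
```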

the query root is a mirror of the browse root but the data is returned in
text/xml instead of text/html.  say you want the xml for the data you find 
at 

   http://see.millennium.berkeley.edu:8652/browse/?filter=load_one 
                              go to
   http://see.millennium.berkeley.edu:8652/query/?filter=load_one

or click the "[Get the XML for this query]" link at the top of the page.
the query root is how gangliad exports its data to other gangliad or
interested clients.
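since the two roots mirror each other, turning a browse url into its 
query (xml) twin is just a root swap.  a tiny python sketch:

```python
def to_query_url(browse_url):
    """Swap the /browse/ root for /query/ to get the text/xml version
    of the same data (the two roots mirror each other per this mail)."""
    return browse_url.replace("/browse/", "/query/", 1)

print(to_query_url("http://see.millennium.berkeley.edu:8652/browse/?filter=load_one"))
```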

the history root returns svg graphs of historical data.  for example, to 
see the 1-minute load over the last hour on every machine this gangliad 
is monitoring, point your browser to

http://see.millennium.berkeley.edu:8652/history/?filter=[^:]/load_one&history=3600

(note: the filter is a case-insensitive extended regular expression.  the 
expression [^:]/load_one will NOT match summary records.)
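if you want to sanity-check a filter before typing it into a url, 
python's re module handles this particular pattern the same way an 
extended regex does.  the host name below is made up; the summary path 
is shaped like the ":g3 demo:/load_one" example later in this mail:

```python
import re

# the filter from the url above; re.I supplies the case-insensitivity
pattern = re.compile(r"[^:]/load_one", re.I)

# a made-up per-host record path, and a summary record path shaped
# like the ":g3 demo:/load_one" example from this mail
host_record    = "host01/load_one"
summary_record = ":g3 demo:/load_one"

print(bool(pattern.search(host_record)))     # per-host records match
print(bool(pattern.search(summary_record)))  # summary records do not
```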

svg images are actually xml.. they are small, interactive and scalable 
(hence the name scalable vector graphics (svg)).  the graph above might be 
a little busy... if you want to zoom in on a particular part of the 
graph.. right click on the graph and choose "Zoom In".  also, if you click 
on any of the data lines in the graph... you will be directed to the 
history record for that particular data line.  try it. (it's way cool).

in the future we can use very simple javascript to build interactivity 
into the graphs.  for now, we just have hyperlinked graphs.

also.. in the future, we'll need to have a graph builder that allows you 
to tweak the filter, age, depth, history, image_height, image_width, 
image_max and image_min variables to get exactly the graph you are looking 
for.  for now, you will have to hand-type them into your browser address 
bar.

i'm pretty sure that i showed you this before.. but.. you can also very 
easily get the historical information for a particular metric in xml 
format (instead of svg xml).

for example,

http://see.millennium.berkeley.edu:8652/query/?filter=%3Ag3+demo%3A/load_one&history=3600

would give you the history of the overall load_one for all sources 
monitored by see.millennium.  the xml format is ideal for making graphs.  
it has the overall maximum and minimum, the step for the x-axis (time) and 
then follows with a list of time samples with their age, value, min, max 
and number of samples.  it would be trivial to use this history xml stream 
to create graphs other than the svg graphs gangliad makes.  

i just realized.. the whole max/min thing might be confusing.  the 
<history min max> are the maximum and minimum values over the entire 
history.  the <t min max> are the max and min values encountered for a 
single sample at that time point.

you can take the value and divide it by the number of samples if you want 
the average since the value (for summary records) is the sum.  
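to make the sum/average bit concrete, here's a python sketch that walks a 
made-up history snippet.  the <history> and <t> element names come from 
above, but the attribute names are my guesses for the fields (age, value, 
min, max, sample count) this mail describes:

```python
import xml.etree.ElementTree as ET

# a made-up snippet in the shape described above -- only the fields
# themselves (age, value, min, max, samples) come from this mail
snippet = """
<history min="0.0" max="4.0" step="60">
  <t age="60"  value="12.0" min="0.1" max="2.5" samples="6"/>
  <t age="120" value="9.0"  min="0.2" max="1.9" samples="6"/>
</history>
"""

root = ET.fromstring(snippet)
averages = []
for t in root.findall("t"):
    # for summary records the value is the sum over the samples, so
    # dividing by the sample count gives the average
    averages.append(float(t.get("value")) / int(t.get("samples")))

print(averages)
```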

i think the "gmeta" part of gangliad is firming up.  i'm going to try and 
finish up the "gmon" part next so we can start plugging in modules.  the 
modules will likely talk HTTP over UDP (with no server response).  i need 
to do a little testing to see what performance i get.  more details later.
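just to sketch what a module speaking http over udp (with no server 
response) might look like... the request shape, port and metric below are 
made up for illustration, not the real module protocol:

```python
import socket

def send_udp_http(host, port, path, body=""):
    """Send an http-formatted request as a single udp datagram.  No
    response is read -- per the plan above, the server never replies."""
    request = (
        f"POST {path} HTTP/1.0\r\n"
        f"Content-Length: {len(body)}\r\n"
        f"\r\n"
        f"{body}"
    )
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(request.encode(), (host, port))

# e.g. a module reporting a (made-up) metric to a local gangliad:
# send_udp_http("127.0.0.1", 8649, "/metric", "load_one=0.42")
```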

by the way..
it's easy to install gangliad on a host already running the old gmetad.  
you can just set a single data source 127.0.0.1:8651 in gangliad and away 
you go.

hope you all have a great week
-matt



