See comments below, though they may not be entirely right.

> -----Original Message-----
> From: [EMAIL PROTECTED] 
> [mailto:[EMAIL PROTECTED] On 
> Behalf Of Matthias Blankenhaus
> Sent: 16 February 2007 04:29
> To: [email protected]
> Subject: [Ganglia-general] GRID / CLUSTER
> 
> 
> Howdy !
> 
> I am a newbie and have some questions about concepts and 
> their mapping to config identifiers:
> 
> 1. In gmetad.conf one can define a "gridname".  What is the concept 
>    behind this and what does this actually do ?

In itself the grid name is just a string; it does nothing if all your
clusters report to a single server. So as you are starting out, set it
and forget it.

Later you can federate ganglia servers together in a hierarchical way,
which is where the grid name comes in: you build a grid of grids. There
are examples of this on the web.
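
As a sketch (hostnames are made up; 8651 is gmetad's default xml_port),
a top-level gmetad.conf in such a federation might look like:

```
# Top-level gmetad.conf: a grid of grids.
# Each data_source points at a lower-level gmetad's xml_port (8651),
# not at a gmond (8649).
gridname "TOP"
data_source "Site A" sitea-gmetad:8651
data_source "Site B" siteb-gmetad:8651
```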

> 
> 2. In gmond.conf one can define a "name" for a cluster. What is the 
>    concept behind this and what does this actually do ?  What is
>    the difference between a Ganglia grid and a Ganglia cluster ?

Err... the cluster name is just a label for the cluster. You may think
that ganglia would use the cluster label to work out which hosts belong
in which cluster. No: membership is determined by which gmonds talk to
each other over the configured udp channels, not by the name.

Clusters contain hosts, grids contain clusters. They are treated
differently in the php code, but they are structurally similar.
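
To illustrate (a sketch assuming gmond 3.x config syntax and the
default port; the hostname is made up):

```
/* gmond.conf, one copy per host in the cluster.
   The cluster name below is only a display label;
   membership comes from the udp channels. */
cluster {
  name = "Rack 1"
}
udp_send_channel {
  host = cluster_head01   /* every node sends here (unicast) */
  port = 8649
}
udp_recv_channel {
  port = 8649             /* only needed on the headnode */
}
tcp_accept_channel {
  port = 8649             /* gmetad polls this */
}
```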

> 
> I wanted to create a cluster (grid ?) consisting of two sub-cluster 
> (cluster ?).  I have tried the following two configuration 
> without seeing a difference.  So what is the difference ?
> 
> Also, I have noticed that the identifier in gmetad.conf after 
> data_source is completely independent from the actual naming 
> of the cluster.  The cluster name then is the one that is 
> presented in the GUI and also reflected in the RRD DB.  What 
> is the id in the data_source clause for ?

Yes, the cluster name in gmetad.conf is irrelevant.
The cluster name shown is whatever gmond returns when
gmetad polls it: cluster_head01 from below.

As for the configs below:
Ouch! I have a headache. What I would suggest is that you start simple.
So:
1) Use unicast, not multicast.
2) Start with a single headnode, and configure all hosts of
   the cluster to point to it (udp_send_channel).
3) Start with a single ganglia server (one gmetad instance).
   Do not run a gmetad on cluster headnodes.
   Configure gmetad (on what you call master_node)
   with a single data_source entry for each cluster
   (you configure gmetad with the headnode names).
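
The simple layout above might look like this (a sketch; hostnames
taken from your example, 8649 being gmond's default port):

```
# gmetad.conf on master_node only.
# One data_source per cluster, pointing at that cluster's
# headnode gmond (tcp_accept_channel, port 8649).
gridname "CARLSBAD"
data_source "OSCAR 1" cluster_head01:8649
data_source "OSCAR 2" cluster_head02:8649
```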

> I have noticed that the number of detected CPUs is
> inconsistent.  It seems to depend on the order in which I
> start the gmond's / gmetad's.  Is there an order and if
> so, which one is the correct order ?
> 
> 
> -----------------
> Configuration I
> -----------------
> 
> MASTER NODE
> -----------
> 
> gmetad.conf
> 
> gridname "CARLSBAD"
> data_source "OSCAR 1" cluster_head01:8651   # running gmetad
> data_source "OSCAR 2" cluster_head02:8651   # running gmetad

If you really wanted to do this, accessing port 8651 is right
(see above): 8651 is the gmetad xml port. Port 8649 is for the
gmond, not the gmetad.

> 
> cluster_head01
> --------------
> 
> gmetad.conf:
> 
> data_source "Rack 1" 11.0.0.5 11.0.0.4
> 
> 
> gmond.conf:  cluster_head01
> 
> cluster_head02
> --------------
> 
> gmetad.conf:
> 
> data_source "Rack 1" 11.0.0.5 11.0.0.4
> 
> gmond.conf:  cluster_head01
> 
> 
> ----------------
> Configuration II
> ----------------
> MASTER NODE
> -----------
> 
> gmetad.conf
> 
> gridname "CARLSBAD"
> data_source "OSCAR 1" cluster_head01   # running gmetad
> data_source "OSCAR 2" cluster_head02   # running gmetad
> 
> 
> cluster_head01
> --------------
> 
> gmetad.conf:
> gridname "OSCAR 1"
> data_source "Rack 1" 11.0.0.5 11.0.0.4
> 
> gmond.conf:  cluster_head01
> 
> cluster_head02
> --------------
> 
> gmetad.conf:
> 
> gridname "OSCAR 2"
> data_source "Rack 1" 11.0.0.5 11.0.0.4
> 
> 
> gmond.conf:  cluster_head01
> 
> ------------------------------------
> 
> 
> 
> Your answers are greatly appreciated.
> 
> 
> Thanx,
> Matthias
> 
> 
_______________________________________________
Ganglia-general mailing list [email protected]
https://lists.sourceforge.net/lists/listinfo/ganglia-general