My goal is to have multiple nodes reporting to a central location (10.50.54.31),
which also runs gmond and reports on itself. To accomplish this, wouldn't I
configure the clients that will be sending data with something to this effect:
----------------------------------
/* Feel free to specify as many udp_send_channels as you like. Gmond
   used to only support having a single channel. */
udp_send_channel {
  host = 10.50.54.31
  port = 8649
  ttl = 1
  mcast_if = en2
}
----------------------------------
10.50.54.31 is the host from the original gmond.conf that was posted. I'm
assuming the gmetad server (10.50.54.48) reaches out to the nodes defined in
gmetad.conf for information. Is there a way for the nodes to be aware of only
their own information?
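(For reference, gmetad polls each host listed on a data_source line over TCP,
port 8649 by default, and reads the XML served by that host's
tcp_accept_channel. A sketch of the relevant gmetad.conf line, with a
hypothetical cluster name:
----------------------------------
data_source "my-cluster" 10.50.54.31:8649
----------------------------------
Only the hosts named on the data_source line are contacted directly; what each
gmond knows about depends on the udp send/receive topology.)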
Lastly, I have a few nodes where HACMP is in place using IP aliasing on a
single interface. In these cases, I need to bind gmond to a particular IP.
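(A sketch of what I have in mind for the HACMP nodes, using the bind option of
udp_recv_channel as in the earlier config; 10.50.54.60 is a hypothetical
aliased service address, not one of the real hosts in this thread:
----------------------------------
udp_recv_channel {
  port = 8649
  /* listen only on the aliased service IP, not the base address */
  bind = 10.50.54.60
}
----------------------------------
)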
Thanks again for everyone's support.
On Tue, Sep 9, 2008 at 9:44 AM, Carlo Marcelo Arenas Belon <
[EMAIL PROTECTED]> wrote:
> On Mon, Sep 08, 2008 at 01:42:16PM -0500, Ryan Robertson wrote:
> > I too am having trouble getting the gmond collector report data of
> > itself.
>
> I presume that you are referring to some other report about ganglia 3.1 not
> being able to get its own data (based on the subject), but the behaviour
> observed here is from 3.0 (based on the body). Could you provide a reference
> to the original report, as it might be an unrelated problem?
>
> Running `gmond -d10` should generate a log of what is going on that could
> help trace the problem, but from the configuration shown below it might be
> just an unintended misconfiguration.
>
>
> > I've tried multiple variations on the gmond.conf, but can't seem
> > to find a combination that works. This is on power5 AIX 6.1 running
> > ganglia-gmond-3.0.7-1.
> >
> > /* Feel free to specify as many udp_send_channels as you like. Gmond
> >    used to only support having a single channel. */
> > udp_send_channel {
> >   host = 127.0.0.1
> >   port = 8649
> >   ttl = 1
> > }
> >
> > /* You can specify as many udp_recv_channels as you like as well. */
> > udp_recv_channel {
> >   mcast_join = 239.2.11.71
> >   port = 8649
> >   // bind = 239.2.11.71
> >   bind = 10.50.54.31
> > }
> >
> > /* You can specify as many tcp_accept_channels as you like to share
> >    an xml description of the state of the cluster */
> > tcp_accept_channel {
> >   port = 8649
> > }
>
> You have to match the mode used (multicast or unicast) between
> udp_send_channel and udp_recv_channel. We do allow mismatched
> configurations, which can result in problems like the one you are observing
> (that is a bug), mainly because the flexibility of the configuration allows
> for some strange settings that we would otherwise not be able to predict
> (like having additional unicast messages sent somewhere other than a gmond
> for reporting).
>
> The following should work in your case:
>
> * Plain multicast configuration, as used by default (you need multicast
>   support working on your system and enabled/routed correctly):
>
> udp_send_channel {
>   mcast_join = 239.2.11.71
>   port = 8649
>   ttl = 1
> }
>
> udp_recv_channel {
>   mcast_join = 239.2.11.71
>   port = 8649
> }
>
> * Plain unicast configuration through localhost (not what you want in the
>   long run, as it will use "localhost" as the node name):
>
> udp_send_channel {
>   host = 127.0.0.1
>   port = 8649
> }
>
> udp_recv_channel {
>   port = 8649
> }
>
> * Plain unicast configuration through a working interface (assuming
>   10.50.54.31 is configured on one of your interfaces):
>
> udp_send_channel {
>   host = 10.50.54.31
>   port = 8649
> }
>
> udp_recv_channel {
>   port = 8649
> }
>
> Carlo
>
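As a quick sanity check after applying any of the variants above, the XML
served by the tcp_accept_channel can be pulled and inspected programmatically.
A minimal Python sketch (the host argument and both helper function names are
illustrative, not part of ganglia itself):

```python
import socket
import xml.etree.ElementTree as ET

def fetch_cluster_xml(host, port=8649, timeout=5.0):
    """Connect to a gmond tcp_accept_channel and read the whole XML dump.

    gmond writes its cluster state and closes the connection, so we just
    read until EOF.
    """
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def host_names(xml_text):
    """Return the NAME attribute of every HOST element in a gmond XML dump."""
    root = ET.fromstring(xml_text)
    return [h.get("NAME") for h in root.iter("HOST")]
```

If the collector is reporting on itself correctly, its own name should appear
in `host_names(fetch_cluster_xml("10.50.54.31"))`.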
_______________________________________________
Ganglia-general mailing list
Ganglia-general@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ganglia-general