-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

David Zaltron wrote:
> Probably you have a gmond configuration on each node that multicasts the
> cluster status to every node.
>
> For example, if you have a configuration like this in the nodes:
>
> -----
> cluster {
>    name = "dummy_cluster"
> }
>
> udp_send_channel {
>    mcast_join = 239.2.11.71
>    port = 8649
> }
>
> udp_recv_channel {
>    mcast_join = 239.2.11.71
>    port = 8649
>    bind = 239.2.11.71
> }
> ----
>
> This means that every node knows it belongs to "dummy_cluster", and
> every gmond can return the status of the entire cluster, because the
> nodes all talk to each other on the same multicast channel. Any gmond
> will report that state when queried on the default TCP port 8649.
>
> The solution is to unicast the traffic from the node to itself:
>
> ----
> udp_send_channel {
>    host = <hostname of 127.0.0.1>
>    port = 8649
> }
>
> udp_recv_channel {
>    port = 8649
> }
> ---
>
> In this way you "simulate" a cluster of a single node, so each gmond
> reports only the node it is actually running on.
>
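One detail worth noting (an assumption, not stated in the thread): the unicast sketch above has no tcp_accept_channel section, and gmetad polls gmond over TCP, so a remote poller may get nothing back. Assuming gmond's standard configuration syntax, the missing section would look like this:

----
tcp_accept_channel {
   port = 8649
}
----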
Okay, I did that and that /sort of/ fixed it, except that now I do not
see the nodes in my web interface.  Keep in mind the web interface is
running on a completely separate box that is neither newton nor
winterstar.  So, how do I get the nodes showing up in the web interface now?
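
A likely next step (a sketch, not confirmed in the thread): the web box
runs gmetad, and once the gmonds stop aggregating over multicast,
gmetad.conf has to list every node explicitly in its data_source line.
Assuming the hosts are newton and winterstar, the default port, and
David's placeholder cluster name:

----
data_source "dummy_cluster" newton:8649 winterstar:8649
----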

(And David, I apologize for sending to you and not the list, my fingers
got ahead of me today.)




- --
Fere libenter homines id quod volunt credunt.

Mark Haney
Sr. Systems Administrator
ERC Broadband
(828) 350-2415
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFEhDXZYQhnfRtc0AIRAj07AJwNaTsNHM02oJaznXnO0qECZEPZUwCfa6JR
0rLX5KWkRW9MjL/5/J/Igj0=
=iIJp
-----END PGP SIGNATURE-----
