Ganglia has to use eth0; eth1 is enabled only on the server for user access,
and the MPI application will be running over InfiniBand.

The other thing I wanted to ask: do I also need to add this to gmond.conf
on the cluster nodes?
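
To be concrete, I am guessing the stanza on each node would look something
like this (assuming the Ganglia 3.x gmond.conf channel syntax; the multicast
group and port below are just the stock defaults):

  udp_send_channel {
    mcast_join = 239.2.11.71  # default Ganglia multicast group
    port = 8649               # default gmond port
    mcast_if = eth0           # pin multicast traffic to the cluster network
  }

  udp_recv_channel {
    mcast_join = 239.2.11.71
    port = 8649
    bind = 239.2.11.71
    mcast_if = eth0
  }

Is that the right approach for the nodes, or is the head-node change alone
enough?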


On 7/24/06, Martin Knoblauch <[EMAIL PROTECTED]> wrote:

Hi Toney,

my first guess would be that:

a) you are using multicast,
b) your default gateway goes via eth0, and
c) your compute nodes are on the 192.168.180.x network.

After the change, the multicast packets are still expected via eth0, but
they now come in from eth1.
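
If you want to verify that first, and assuming tcpdump is available on the
head node, watch the group's traffic on both interfaces:

  # the gmond packets should be arriving here ...
  tcpdump -n -i eth1 host 239.2.11.71
  # ... and, after your network change, no longer here
  tcpdump -n -i eth0 host 239.2.11.71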

Try adding this, from the documentation:

mcast_if=eth1 in your head node's gmond.conf, and

route add -host 239.2.11.71 dev eth1
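
Two caveats: restart gmond after editing gmond.conf, and note that a route
added like that is lost on reboot, so persist it in your distribution's
network scripts once it works. You can check which interface the kernel
joined the multicast group on with, for example:

  netstat -gn   # per-interface multicast group memberships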

Hope this helps
Martin

--- toney samuel <[EMAIL PROTECTED]> wrote:

> I have a 4-node cluster. My head node has two gigabit cards and an
> InfiniBand card. My cluster IPs are:
>
> eth0   192.168.180.17/255.255.252.0
> ipoib0 192.168.0.1/255.255.255.0
>
> I had installed Ganglia with this configuration, and it was working
> properly.
>
> Later I changed my network configuration to this:
>
> eth0   192.168.1.1/255.255.255.0
> eth1   192.168.180.17/255.255.252.0
> ipoib0 192.168.0.1/255.255.255.0
>
>
> Now I can't see any information on my web page.
>
> Please guide me on how to resolve this issue.
>
> Regards.
>

------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www:   http://www.knobisoft.de
