Hi Branimir,
apparently Rick pushed you in the right direction already :-) Just a
few comments.
Martin
--- Branimir Ackovic [EMAIL PROTECTED] wrote:
Thank you Rick and Martin for the quick response!
I already tried the configuration that Rick suggested, but it doesn't
work. In that
All,
against all probability, but for reasonable historical reasons, we run
Windows-based HPC applications.
We also have large networks of Windows farms with similar functions (e.g. web
farms). We want to improve
the visibility of the state of our estate, we like ganglia (rollups and
all that),
and
Hi Richard,
--- [EMAIL PROTECTED] wrote:
All,
against all probability, but for reasonable historical reasons, we
run Windows-based HPC applications.
What kind of HPC stuff is a financial institution running? Just
curious :-)
If we were to cheat, and create a Windows agent that only
Hi,
Could someone explain what my configuration directives should be for the
following setup?
Total of 18 compute nodes:
5 compute nodes with eth0 connected to a switch (192.168.2.* network)
6 compute nodes with eth1 connected to this switch (192.168.2.* network)
and eth0 connected to a
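For multi-homed groups like this, one common approach (a sketch only, not taken from this thread) is to have every node send unicast to a single collector on the shared 192.168.2.* network in gmond.conf. The collector address 192.168.2.1 and port 8649 below are assumptions for illustration:

```
/* Hypothetical gmond.conf fragment: each compute node sends its metrics
   via unicast to one collector on the shared network. The collector
   address is an assumption; port 8649 is gmond's default. */
udp_send_channel {
  host = 192.168.2.1   /* assumed collector node on 192.168.2.* */
  port = 8649
}

/* On the collector node only: listen for the incoming unicast packets. */
udp_recv_channel {
  port = 8649
}
```

With unicast there is no dependence on multicast routing across the different interfaces, which is why it is often suggested for setups where nodes sit on several networks.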
Hi Prakash,
Basically, what you describe is the expected behaviour. Without the
extra routing information, the multicast packet will be sent through
the default gateway interface, which is eth0 for all three groups.
As a result group 2 and 3 end up disconnected from group 1.
You should use a
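The advice is cut off here, but it presumably continues with a static multicast route. A minimal sketch, assuming Ganglia's default multicast group 239.2.11.71 and that eth1 is the interface facing the other gmond nodes (both are assumptions, since the thread's actual command is truncated):

```
# Hypothetical fix for groups whose default route goes out eth0:
# pin the Ganglia multicast group (default 239.2.11.71) to eth1 so
# gmond's multicast packets leave on the intended interface.
route add -host 239.2.11.71 dev eth1

# Equivalent with the newer iproute2 tooling:
# ip route add 239.2.11.71/32 dev eth1
```

This only affects where the multicast traffic is sent; it does not change the default gateway used for ordinary unicast traffic.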
For some reason, only the route solution works for me. The unicast
packets do not seem to reach the collection agent in the first group of
nodes.
The route solution works though, giving some relief.
Thanks,
Prakash
Martin Knoblauch [EMAIL PROTECTED] 11/07/05 6:49 PM
Hi Prakash,
basically
On 11/7/05, Prakash Velayutham [EMAIL PROTECTED] wrote:
For some reason, only the route solution works for me. The unicast
packets do not seem to reach the collection agent in the first group of
nodes.
The route solution works though, giving some relief.
Would the nodes normally be able to