a.  Definitely a.

When a host opens a multicast socket and joins a group, the kernel sends an IGMP join *for that group IP* and should start receiving all traffic addressed to that group from that point on. Splitting clusters across different ports on the same group didn't do what I wanted the last time I tried it, and I'm pretty sure the packets still reach the kernel either way - the port only decides which socket (if any) the payload gets handed to.
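For reference, here's roughly where that join happens at the sockets level. This is a minimal sketch of my own (not lifted from gmond's source - the function name and error handling are mine):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Join a multicast group and return a socket bound to the given port,
 * or -1 on error. */
int join_group(const char *group, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(port);   /* port only matters for socket demux */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr(group);  /* e.g. "239.2.11.71" */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    /* This is the call that makes the kernel send the IGMP join for the
     * group IP; from then on the host accepts ALL traffic addressed to
     * that group, no matter which UDP port it was sent to. */
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}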

So if you want to cut down on the multicast traffic each node has to process, use a different multicast IP per cluster - that's the way to do it.
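In gmond.conf terms that's your option (a): give each cluster its own mcast_channel and let them all share the default port. Something along these lines per cluster - the channel addresses below are just examples, and the exact option syntax may vary between gmond versions, so check the gmond.conf documentation for yours:

# Penguin cluster
name          "Penguin"
mcast_channel 239.2.11.71
mcast_port    8649

# Marvin cluster
name          "Marvin"
mcast_channel 239.2.11.72
mcast_port    8649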

Kent IV, William (WW) wrote:
My IP multicasting background is a bit lacking, so I'll pose the question here.

Background:
I have a flat network space with 4 clusters of various sizes (16 - 72 
nodes/ea.).  I'm monitoring each cluster individually (i.e. Penguin = nodes p1 - p16, Marvin = nodes m1 - m24, etc.)

Question:
Which configuration option (if either) would reduce the CPU interrupts on my 
cluster nodes?
        a.  Configuring each cluster to use its own multicast channel 
(mcast_channel) through /etc/gmond.conf
        b.  Configuring each cluster to use its own multicast port 
(mcast_port) on one shared channel.

I'm currently doing (b), but I think (a) might help.

This isn't really a Ganglia question, rather an IP Multicast question. Does the CPU get interrupted for all multicasts, only those for channels it's participating in, or only for channels/ports it's participating in?
My multicast knowledge all derives from DECnet and I'm trying to block that 
out.  Thanks for any assistance or advice anyone might be able to provide.

Bill

-----------------------------------------------------------------------------------
Bill Kent
The Dow Chemical Company


