On Fri, May 06, 2005 at 12:11:45PM -0700, David Mathog wrote:
> > Multicast != Broadcast. If the clients don't sign up for the multicast
> > channel they won't see any packets.
>
> As I understand it that's because the kernel sees the multicast
> packet and drops it, not because the packets weren't broadcast
> to all nodes. The result being that the multicast burns
> up the same amount of network bandwidth as a true broadcast for
> all nodes - whether or not they are processing the multicast
> data. That is, the network card may be saturated even if the
> OS is ignoring all of those packets. Perhaps some routers
> or switches can be configured to block multicast packets
> from going out every port, but I'm pretty sure my little
> D-Link switch can't do that.
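As an aside, the "sign up" step the quoted text mentions is an IGMP group join, which a receiver does with the IP_ADD_MEMBERSHIP socket option. A minimal sketch in Python (the group address 224.1.1.1 and port 5007 are arbitrary examples, not anything from the thread; the join can fail on a host without a multicast-capable interface or route):

```python
# Sketch: how a client "signs up" for a multicast channel. Until the
# process joins the group (which causes the kernel to send an IGMP
# membership report), the kernel drops that group's datagrams.
import socket
import struct

MCAST_GRP = "224.1.1.1"   # example group address (assumption)
MCAST_PORT = 5007         # example port (assumption)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# The join itself: an IGMP-snooping switch sees the membership report
# and forwards the group's traffic only to ports with subscribers; a
# dumb switch floods it out every port regardless.
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("joined", MCAST_GRP)
    # From here, sock.recvfrom(1500) would deliver the group's
    # datagrams; without the join, the kernel silently drops them.
except OSError as e:
    # Needs a multicast-capable interface/route on this host.
    print("join failed:", e)
finally:
    sock.close()
```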
No, your little D-Link probably doesn't handle multicast other than dumping it out everywhere, though I've never tested that. A more enterprise-class switch will: multicast is supposed to be dealt with at Layer 2, and that is what IGMP is all about.

At the same time, running tee algorithms on a little D-Link may cause all sorts of pain, as you are sending the same packets down and back through the switch for every node. So while each node only sees 80 Mb/s, the switch has to deal with 80 Mb/s * 2 * N (where N is the number of nodes). That will either melt a switch of that class or drop your throughput really dramatically, I would guess. Whether that means tee vs. multicast is faster probably depends on a lot of things about your network, your server, and your clients.

	-Sean

--
__________________________________________________________________
Sean Dague                                 Mid-Hudson Valley
sean at dague dot net                      Linux Users Group
http://dague.net                           http://mhvlug.org

There is no silver bullet.  Plus, werewolves make better neighbors
than zombies, and they tend to keep the vampire population down.
__________________________________________________________________
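[For reference, the back-of-envelope switch-load estimate in the reply works out as follows; the 80 Mb/s per-node stream rate is the figure from the thread, and the node counts are just illustrative:]

```python
# Aggregate traffic a switch must carry for a tee/chain copy, per the
# estimate above: each of N nodes receives the ~80 Mb/s stream and
# forwards it back through the switch to the next node in the chain,
# so the switch carries roughly 2 * N times the stream rate.
STREAM_MBPS = 80  # per-node stream rate from the post (Mb/s)

def switch_load(n_nodes, rate=STREAM_MBPS):
    """Approximate aggregate Mb/s through the switch for a tee chain."""
    return rate * 2 * n_nodes

for n in (4, 8, 16):
    print(n, "nodes ->", switch_load(n), "Mb/s through the switch")
# 16 nodes already means ~2560 Mb/s of aggregate switching capacity,
# which is why a cheap desktop switch chokes even though each port
# individually only sees 80 Mb/s.
```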