On Mon, 23 Dec 2002 23:58:39 -0500 "Lester Vecsey" <[EMAIL PROTECTED]> wrote:
Which switch do you have? We have a Foundry FastIron II+ (loaded with
enough 10/100 blades for a 128-port flat network, plus servers on
gigabit), and HP ProCurve switches with the same configuration, but
only 64 ports. Is there something that can be done with the switch
configuration to ease the traffic?

> I'd really like to see the ganglia community take a proactive stance
> on this and actually suggest / recommend working solutions for gmond
> being 'friendly' to switches. Also I like the idea of having all the
> gmonds in a cluster set to unicast their data to just a select few
> others, with the intention that these select few that would receive
> them would be redundantly set up to be queried by gmetad. Also the
> gmonds would have to unicast to themselves to have them update
> themselves again, which would be a gross hack, with the preferred
> solution to just have it update itself without transmitting in this
> case. It was an elegant solution to just multicast out and then have
> it listen back on its own channel for updates, but I think there's a
> legitimate need here for unicasting out to just a select few gmonds.

I completely agree: in an HPCC arrangement it is unnecessary for one
node to know about another when neither of them will act on the data.
Some rough, untested sketches of what this could look like follow at
the end of this message.

Nic

> By having the option to configure gmonds to unicast their data out
> to just a select few IPs, wouldn't this potentially allow clusters
> to scale up even larger? Sub-clusters of gmonds would be unicasting,
> and then the regular 'full blown' gmonds would be queried or set to
> unicast again out to an even more senior and select group of
> gmonds.. it's these final gmonds that would then be queried by
> gmetad for cluster stats to be archived away.
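On the switch question: the usual switch-side knob is IGMP snooping,
if your switches support it. With snooping enabled, the switch
forwards a multicast group only to the ports that have issued an IGMP
join, rather than flooding it out every port, and gmond helps by doing
a proper join on its channel. A minimal, untested sketch of such a
receiver (using gmond's default group 239.2.11.71 and port 8649; error
handling trimmed for brevity):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8649);              /* gmond's default port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* The IGMP membership report triggered here is what lets a
       snooping switch learn which ports actually want this group. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.2.11.71"); /* default */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[1500];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0)
            break;
        /* ... decode the metric packet here ... */
    }
    close(fd);
    return 0;
}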

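To make the unicast idea concrete, here is a minimal, untested sketch
in plain C sockets, not anything lifted from the gmond source; the
aggregator addresses, apply_local_update(), and the packet contents
are all made up for illustration. Each node unicasts its metric packet
to a select few aggregators and applies the update to its own table
directly, instead of the gross hack of unicasting to itself:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* hypothetical "select few" aggregator gmonds */
static const char *aggregators[] = { "10.0.0.10", "10.0.0.11" };

static void apply_local_update(const char *pkt, size_t len)
{
    /* update our own in-memory metric table directly,
       without any self-unicast over the network */
    (void)pkt; (void)len;
}

static void send_metric(int fd, const char *pkt, size_t len)
{
    struct sockaddr_in to;
    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_port = htons(8649);

    /* fan the same packet out to the few aggregators only */
    for (size_t i = 0; i < sizeof(aggregators) / sizeof(*aggregators); i++) {
        to.sin_addr.s_addr = inet_addr(aggregators[i]);
        sendto(fd, pkt, len, 0, (struct sockaddr *)&to, sizeof(to));
    }

    /* the "update itself without transmitting" case */
    apply_local_update(pkt, len);
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    const char msg[] = "load_one 0.12";  /* stand-in for a real packet */
    send_metric(fd, msg, sizeof(msg) - 1);
    close(fd);
    return 0;
}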

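And the top of such a hierarchy could look the same to gmetad as
things do today: it would just open a TCP connection to one of the
'senior' gmonds and read back the XML dump of cluster state. A rough
sketch of that polling side (the host address is hypothetical; 8649 is
gmond's default listen port):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in to;
    memset(&to, 0, sizeof(to));
    to.sin_family = AF_INET;
    to.sin_port = htons(8649);
    to.sin_addr.s_addr = inet_addr("10.0.0.10"); /* a "senior" gmond */

    if (connect(fd, (struct sockaddr *)&to, sizeof(to)) != 0) {
        perror("connect");
        return 1;
    }

    /* gmond dumps its XML state and then closes the connection */
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    return 0;
}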