Steffen,

Thanks for the test.

The differences between our test setups are the NICs and the CPUs: your NICs are GLDv3 while my cards are legacy, and your systems are SPARC while mine are Intel. The difference in NICs is probably the primary reason for the different results.

Two more questions:

1. Where can I find the list of NICs that support GLDv3?
2. If a NIC is not GLDv3 based, can I turn it into a GLDv3 NIC just by updating its driver?

Andrew

On Tue, 2006-10-17 at 15:55 -0400, Steffen Weiberle wrote:
> In anticipation of your response, I ran some tests using netperf. One
> system is a v210 with two CPUs and gbe interfaces. The other is an
> x4100 with e1000g interfaces. It turns out both are GLDv3, since
> show-link lists the status as non-vlan.
>
> I ran two instances of netperf, and forced traffic (as best as I could)
> to two interfaces. In both directions I saw interrupt distribution
> across CPUs. I was moving around 400 Mbps of data, using a shared
> Netgear 5-port gigabit switch.
>
> Steffen
>
> Andrew Wenlang Zhu wrote On 10/17/06 15:29,:
> > Steffen,
> >
> > The traffic is multiple streams.
> >
> > The card I am using is a Broadcom NetXtreme II Gigabit Ethernet,
> > driver v2.8.5, and the system is an Intel Woodcrest multicore, but
> > Solaris recognizes it as i386:
> >
> > Status of virtual processor 0 as of: 10/17/2006 19:25:28
> >   on-line since 10/17/2006 02:30:43.
> >   The i386 processor operates at 2992 MHz,
> >   and has an i387 compatible floating point processor.
> >
> > Andrew
> >
> > On Tue, 2006-10-17 at 07:25 -0400, Steffen Weiberle wrote:
> >
> >> Hi Andrew,
> >>
> >> Is the traffic a single stream or are there multiple ones? I am under
> >> the impression that with multiple streams, incoming processing will
> >> get distributed (except for interrupts). Also, I thought the stack
> >> would switch from interrupts to polling under heavy load (if data has
> >> already arrived).
> >>
> >> I'm trying to understand the limitations, if any, based on
> >> instruction set, NIC type, etc.
> >>
> >> Thanks
> >> Steffen
> >>
> >> Andrew Wenlang Zhu wrote On 10/16/06 23:03,:
> >>
> >>> Hi,
> >>>
> >>> I have a four-CPU system with two gigabit NICs. The system acts as an
> >>> application gateway: traffic comes in on one NIC and goes out
> >>> through the other. The throughput rate is about 600 Mbps. The OS is
> >>> Solaris Express SNV-46 on an Intel platform.
> >>>
> >>> Through mpstat, I find that CPU utilization is unbalanced. One CPU is
> >>> about 90% busy, while the other three are about 50%. For the one at
> >>> 90%, mpstat shows its "intr" at about 24798, while the other three
> >>> are around 2000. Most of the time is spent in "sys".
> >>>
> >>> Using DTrace, I find that the most frequent calls are for networking
> >>> I/O.
> >>>
> >>> It appears that Solaris sends all NIC interrupts from both NICs to a
> >>> single CPU. I tried "ndd -set /dev/ip ip_squeue_fanout 1", but it
> >>> seems to make little difference. Do I need to reboot to make the
> >>> fanout take effect?
> >>>
> >>> HP-UX can bind a NIC to a particular CPU, so that one CPU handles all
> >>> I/O for that NIC and gets good data locality, while the burden of
> >>> serving interrupts is shared among several CPUs.
> >>>
> >>> Does Solaris have something similar? Or how can the network load be
> >>> balanced across several CPUs to alleviate the CPU bottleneck and
> >>> achieve a higher overall throughput?
> >>>
> >>> Thanks,
> >>>
> >>> Andrew

_______________________________________________
networking-discuss mailing list
[email protected]
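[Editorial note] On the GLDv3 question above: the check Steffen describes can be done with dladm. On Nevada builds of this vintage, my understanding is that `dladm show-link` reports a type of "non-vlan" (or "vlan") for links whose drivers use the GLDv3 framework, while pre-GLDv3 drivers are reported as "legacy". A minimal sketch, with illustrative (not actual) output:

```shell
# List data links and check the reported type.
# GLDv3 drivers show type "non-vlan" or "vlan";
# legacy DLPI drivers show type "legacy".
dladm show-link

# Illustrative output (interface names are assumptions,
# not taken from the systems in this thread):
#   e1000g0  type: non-vlan  mtu: 1500  device: e1000g0   <- GLDv3
#   bnx0     type: legacy    mtu: 1500  device: bnx0      <- legacy
```

A driver reported as legacy would need a GLDv3-based driver release, not just any update, before it could use GLDv3 features.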

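[Editorial note] On Andrew's ip_squeue_fanout question: as I understand the squeue fanout behavior of that era, the ndd setting influences how squeues are assigned to new TCP connections, so already-established connections keep their existing CPU binding, and the ndd value itself does not persist across a reboot. To have it apply from boot, the documented /etc/system tunable can be used. A sketch, assuming snv-era behavior:

```shell
# Runtime change: tends to affect only connections created
# after the setting is made; existing connections keep their
# current squeue/CPU assignment.
ndd -set /dev/ip ip_squeue_fanout 1

# Persistent form: add this line to /etc/system and reboot.
#   set ip:ip_squeue_fanout=1
```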