I'm testing out an AMD Opteron 6128 platform as a router with two igb interfaces. The kernel is 2.6.34.1 (32-bit), with the igb 2.2.9 driver.
We're seeing lower performance than we expect when testing it with denial-of-service-style traffic (randomly spoofed source IPs). The traffic is generated with the "mz" tool: short UDP packets with spoofed sources.

I have gc_interval and gc_elasticity set to 1 to keep route-cache garbage collection aggressive under the random-source load, and have set the igb module parameters to:

    IntMode=2,2 QueuePairs=0,0 RSS=4,4 InterruptThrottleRate=3,3

The RX ring is set to the maximum of 4096, and the TX ring to 512.

The CPUs are nowhere near maxed out. I set the affinity of the four RX queues on the interface receiving the traffic to CPUs 0-3, and those CPUs run at around 50% utilization; CPUs 8-11 handle the queues on the interface forwarding the traffic and sit at around 10%. Nonetheless, we're dropping 2,000-3,000 packets per second (rx_fifo_errors according to ethtool).

I suspect I'm running into a NUMA issue or some other bottleneck. Any suggestions for how to get the maximum throughput out of this platform?
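
For reference, this is roughly how the sysctls, module parameters, and ring sizes are applied. It's reconstructed from my notes, and "eth0" here just stands in for whichever port is receiving, so treat it as a sketch rather than a verbatim transcript:

    # aggressive route-cache garbage collection
    sysctl -w net.ipv4.route.gc_interval=1
    sysctl -w net.ipv4.route.gc_elasticity=1

    # reload igb with the parameters above (two ports, hence the paired values)
    rmmod igb
    modprobe igb IntMode=2,2 QueuePairs=0,0 RSS=4,4 InterruptThrottleRate=3,3

    # RX ring at the 4096 maximum, TX ring at 512
    ethtool -G eth0 rx 4096
    ethtool -G eth0 tx 512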
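
The RX-queue affinity is pinned along these lines. The IRQ numbers below are placeholders (the real ones come from /proc/interrupts on this box); the masks put the receiving port's four queues on CPUs 0-3 and the forwarding port's queues on CPUs 8-11:

    # receiving interface: four RX queues -> CPUs 0-3 (hex masks 1,2,4,8)
    echo 1 > /proc/irq/45/smp_affinity
    echo 2 > /proc/irq/46/smp_affinity
    echo 4 > /proc/irq/47/smp_affinity
    echo 8 > /proc/irq/48/smp_affinity

    # forwarding interface: queues -> CPUs 8-11 (hex masks 100,200,400,800)
    echo 100 > /proc/irq/53/smp_affinity
    echo 200 > /proc/irq/54/smp_affinity
    echo 400 > /proc/irq/55/smp_affinity
    echo 800 > /proc/irq/56/smp_affinity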
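
The drop figures come from sampling the fifo counter once a second, something like this (again with eth0 as a stand-in for the receiving port):

    # watch rx_fifo_errors to estimate the per-second drop rate
    while true; do
        ethtool -S eth0 | grep rx_fifo_errors
        sleep 1
    done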
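
On the NUMA suspicion, one thing I can check is which node each port hangs off of versus where its RX queues are pinned; something like the following should show it (interface names are again placeholders):

    # NUMA node of each port's PCI device (-1 means no locality info)
    cat /sys/class/net/eth0/device/numa_node
    cat /sys/class/net/eth1/device/numa_node

    # CPU-to-node layout of the box
    numactl --hardware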
