Hi Anna,

On Tue, 14 Apr 2009, Fischer, Anna wrote:
> I am running ixgbe 1.3.56.5-NAPI on a Linux 2.6.18 kernel (SMP, two CPUs):
> 
> filename:       /lib/modules/2.6.18.8/kernel/drivers/net/ixgbe/ixgbe.ko
> version:        1.3.56.5-NAPI

<snip>

> I load the driver with InterruptType=2,2 MQ=1,1 RSS=2,2 VMDQ=2,2 and I 
> have a two-port 82598 NIC:

Why are you loading with the VMDQ= setting?  I'm almost positive you don't 
want to be enabling that on this driver, unless you're trying to test or 
develop for Fibre Channel over Ethernet.

I would suggest that you don't add any driver load options at all, since 
the driver's defaults will enable MSI-X and multiple queues up to your 
CPU count.
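
For example (just a sketch, assuming you can briefly unload the driver 
without losing your management connection):

    # unload and reload ixgbe with no options; the defaults enable
    # MSI-X and one RX/TX queue pair per CPU
    rmmod ixgbe
    modprobe ixgbe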

> My OS is mainly routing packets from one NIC port (inbound) to the other 
> NIC port (outbound). When I analyze the CPU utilization statistics I 
> figured that one CPU is running close to 100% utilization while the 
> other CPU is idle a lot:
> 

<snip>

> However, I was hoping that by using multiple TX/RX queues I would be 
> able to equally scale across both CPUs. When I look at the interrupt 
> allocation, then it looks as if ixgbe allocates all queues to only the 
> first CPU:

After disabling VMDQ as suggested above, RSS should automatically steer 
different flows across both receive queues.  But you need enough variation 
in the IP/TCP/UDP headers for the RSS hash to produce different results 
(usually varying the source port is enough).
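
A quick way to test that (a rough sketch, assuming a netcat variant that 
supports -p for the local source port, and a placeholder target address):

    # send a few short UDP flows that differ only in source port, then
    # check whether both RX queue counters move in /proc/interrupts
    for sport in 5001 5002 5003 5004; do
        echo test | nc -u -w1 -p $sport 192.0.2.1 9 &
    done
    grep eth1-rx /proc/interrupts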
 
>            CPU0       CPU1
> 1270:          2          0        Phys-irq  eth1:lsc
> 1271:        541          0        Phys-irq  eth1-tx-1
> 1272:        541          0        Phys-irq  eth1-tx-0
> 1273:       1076          0        Phys-irq  eth1-rx-1
> 1274:    3266645          0        Phys-irq  eth1-rx-0
> 1275:          2          0        Phys-irq  eth2:lsc
> 1276:        541          0        Phys-irq  eth2-tx-1
> 1277:    3184524          0        Phys-irq  eth2-tx-0
> 1278:       1076          0        Phys-irq  eth2-rx-1
> 1279:       1690          0        Phys-irq  eth2-rx-0
> 
> How can I get ixgbe to scale across both CPUs for RX and TX ? I thought 
> that this should happen automatically with the configuration options I 
> pass to ixgbe on module loading.

I think you need to enable the irqbalance daemon and/or set IRQ affinity 
manually, e.g. echo 2 > /proc/irq/1277/smp_affinity (and likewise for 
IRQ 1273).
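
Based on the interrupt counts you posted, a sketch might look like this 
(the values are CPU bitmasks: 1 = CPU0, 2 = CPU1):

    # keep eth1's busy RX queue on CPU0, move eth2's busy TX queue to CPU1
    echo 1 > /proc/irq/1274/smp_affinity   # eth1-rx-0 stays on CPU0
    echo 2 > /proc/irq/1277/smp_affinity   # eth2-tx-0 moves to CPU1

Note that irqbalance may rewrite these masks, so stop it first if you pin 
IRQs by hand.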

For routing we usually recommend manually setting IRQ affinity.  Do both 
of your cores share a cache?  I hope so, or you probably won't get much of 
a performance boost.
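
You can check with something like this (a sketch; I'm not certain all of 
the sysfs cache topology files are present on a 2.6.18 kernel):

    # show which CPUs share each cache level
    grep . /sys/devices/system/cpu/cpu*/cache/index*/shared_cpu_map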

To use the other TX queues you need a newer kernel (or custom driver 
changes) that allows you to pick your transmit queue.  2.6.27+ is 
recommended because it has the simple_tx_hash function, which lets the 
networking stack use all of the TX queues.
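
Once you're on such a kernel, one way to confirm it (a sketch, using the 
vector names from your /proc/interrupts output):

    # under forwarding load, both eth2-tx-* counters should now increase
    watch -n1 'grep eth2-tx /proc/interrupts'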

Right now the best you can do is split the receive work across multiple 
processors, but you'll still be bottlenecked on the single transmit queue.

We might be able to help you modify your driver to select the other 
transmit queues in the TX routine, but then there are still the locking 
problems in the kernel (a single TX lock).

Your best bet is to run a newer kernel if at all possible.
