I am running ixgbe 1.3.56.5-NAPI on a Linux 2.6.18 kernel (SMP, two CPUs):

filename:       /lib/modules/2.6.18.8/kernel/drivers/net/ixgbe/ixgbe.ko
version:        1.3.56.5-NAPI
license:        GPL
description:    Intel(R) 10 Gigabit PCI Express Network Driver
author:         Intel Corporation, <linux.n...@intel.com>
srcversion:     15BF8D99B34C1663F5A262F
alias:          pci:v00008086d000010DBsv*sd*bc*sc*i*
alias:          pci:v00008086d000010F4sv*sd*bc*sc*i*
alias:          pci:v00008086d000010E1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010F1sv*sd*bc*sc*i*
alias:          pci:v00008086d000010ECsv*sd*bc*sc*i*
alias:          pci:v00008086d000010DDsv*sd*bc*sc*i*
alias:          pci:v00008086d000010C8sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C7sv*sd*bc*sc*i*
alias:          pci:v00008086d000010C6sv*sd*bc*sc*i*
alias:          pci:v00008086d000010B6sv*sd*bc*sc*i*
depends:
vermagic:       2.6.18.8 SMP mod_unload 686 REGPARM gcc-4.1
parm:           InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)
parm:           MQ:Disable or enable Multiple Queues, default 1 (array of int)
parm:           RSS:Number of Receive-Side Scaling Descriptor Queues, default 1=number of cpus (array of int)
parm:           VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable (default), 2-16 enable (array of int)
parm:           InterruptThrottleRate:Maximum interrupts per second, per vector, (100-500000), default 8000 (array of int)
parm:           LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)
parm:           LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)
parm:           LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)
parm:           RxBufferMode:0=1 descriptor per packet,
                        1=use packet split, multiple descriptors per jumbo frame
                        2 (default)=use 1buf mode for 1500 mtu, packet split for jumbo (array of int)

I load the driver with InterruptType=2,2 MQ=1,1 RSS=2,2 VMDQ=2,2 on a two-port 82598 NIC:

ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 1.3.56.5-NAPI
ixgbe: Interrupt Type set to 2
ixgbe: Multiple Queue Support Enabled
ixgbe: Receive-Side Scaling (RSS) set to 2
ixgbe: Virtual Machine Device Queues (VMDQ) set to 2
ixgbe: 0000:02:00.0: ixgbe_init_interrupt_scheme: Multiqueue Enabled: Rx Queue count = 2, Tx Queue count = 2
ixgbe: eth0: ixgbe_probe: (PCI Express:2.5Gb/s:Width x8) 
ixgbe: eth0: ixgbe_probe: MAC: 1, PHY: 0
ixgbe: eth0: ixgbe_probe: LRO is disabled
ixgbe: eth0: ixgbe_probe: Intel(R) 10 Gigabit Network Connection
ixgbe: Interrupt Type set to 2
ixgbe: Multiple Queue Support Enabled
ixgbe: Receive-Side Scaling (RSS) set to 2
ixgbe: Virtual Machine Device Queues (VMDQ) set to 2
ixgbe: 0000:02:00.1: ixgbe_init_interrupt_scheme: Multiqueue Enabled: Rx Queue count = 2, Tx Queue count = 2
ixgbe: eth1: ixgbe_probe: (PCI Express:2.5Gb/s:Width x8) 
ixgbe: eth1: ixgbe_probe: MAC: 1, PHY: 0
ixgbe: eth1: ixgbe_probe: LRO is disabled
ixgbe: eth1: ixgbe_probe: Intel(R) 10 Gigabit Network Connection
ixgbe: eth1: ixgbe_watchdog_task: NIC Link is Up 10 Gbps, Flow Control: None
ixgbe: eth2: ixgbe_watchdog_task: NIC Link is Up 10 Gbps, Flow Control: None
ixgbe: eth1: ixgbe_watchdog_task: NIC Link is Up 10 Gbps, Flow Control: None
ixgbe: eth2: ixgbe_watchdog_task: NIC Link is Up 10 Gbps, Flow Control: None
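For reference, the equivalent persistent configuration (the way I would express these options in /etc/modprobe.conf; the exact path may differ per distro) is:

```shell
# /etc/modprobe.conf fragment: two comma-separated values per option,
# one per 82598 port (MSI-X, multiqueue on, RSS=2, VMDQ=2)
options ixgbe InterruptType=2,2 MQ=1,1 RSS=2,2 VMDQ=2,2
```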

My OS is mainly routing packets from one NIC port (inbound) to the other NIC port (outbound). When I analyzed the CPU utilization statistics, I noticed that one CPU is running close to 100% utilization while the other CPU is almost completely idle:

05:30:11 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
05:31:11 AM  all    0.01    0.00    0.02    0.00    0.00   36.42    0.16   63.39   8849.89
05:31:11 AM    0    0.00    0.00    0.03    0.02    0.00   72.84    0.15   26.96      0.25
05:31:11 AM    1    0.02    0.00    0.00    0.00    0.00    0.00    0.18   99.78      0.25

05:31:11 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
05:32:11 AM  all    0.00    0.00    0.03    0.01    0.00   36.46    0.16   63.34   8840.64
05:32:11 AM    0    0.00    0.00    0.03    0.00    0.00   72.93    0.18   26.85 71594720.54
05:32:11 AM    1    0.00    0.00    0.02    0.00    0.00    0.00    0.13   99.87 71594720.54

05:32:11 AM  CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
05:33:11 AM  all    0.01    0.00    0.02    0.01    0.00   36.22    0.14   63.59   8853.31
05:33:11 AM    0    0.00    0.00    0.03    0.03    0.00   72.46    0.15   27.32 71594720.70
05:33:11 AM    1    0.00    0.00    0.02    0.00    0.00    0.00    0.12   99.88 71594720.70
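The imbalance shows most clearly in the %soft column. For anyone who wants to pull that column out of sar-style output, here is a small sketch (extract_soft is my own hypothetical helper; field positions are assumed from the layout above):

```shell
# Print the per-CPU %soft column from sar-style per-CPU lines.
# Assumed layout: $1/$2 = timestamp, $3 = CPU id, $9 = %soft.
# "all" rows and header rows are skipped because $3 is not purely numeric.
extract_soft() {
    awk '$3 ~ /^[0-9]+$/ { print "CPU" $3, $9 "% soft" }' "$1"
}

# Usage: extract_soft sar.log
```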

However, I was hoping that by using multiple TX/RX queues the load would scale equally across both CPUs. Looking at the interrupt allocation, it appears that ixgbe assigns all queue interrupts to the first CPU only:

           CPU0       CPU1
1270:          2          0        Phys-irq  eth1:lsc
1271:        541          0        Phys-irq  eth1-tx-1
1272:        541          0        Phys-irq  eth1-tx-0
1273:       1076          0        Phys-irq  eth1-rx-1
1274:    3266645          0        Phys-irq  eth1-rx-0
1275:          2          0        Phys-irq  eth2:lsc
1276:        541          0        Phys-irq  eth2-tx-1
1277:    3184524          0        Phys-irq  eth2-tx-0
1278:       1076          0        Phys-irq  eth2-rx-1
1279:       1690          0        Phys-irq  eth2-rx-0
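As I understand it, the CPU that services each of these interrupts is controlled by the hex CPU bitmask in /proc/irq/&lt;n&gt;/smp_affinity. A minimal sketch of computing and applying such a mask (cpu_mask is my own helper, not part of any tool; applying it needs root, and irqbalance, if running, may rewrite the masks):

```shell
# smp_affinity takes a hex CPU bitmask: bit N selects CPU N.
cpu_mask() { printf '%x\n' $((1 << $1)); }

cpu_mask 0    # prints 1 (CPU0 only)
cpu_mask 1    # prints 2 (CPU1 only)

# Pinning a queue's IRQ to a CPU (IRQ numbers as in the table above):
# echo "$(cpu_mask 1)" > /proc/irq/1274/smp_affinity   # eth1-rx-0 -> CPU1
```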

How can I get ixgbe to scale RX and TX across both CPUs? I thought this would happen automatically with the configuration options I pass to ixgbe at module load time.

Thanks,
Anna

------------------------------------------------------------------------------
_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel