Todd,

Thanks for the reply.  I also received an off-thread response
recommending the set_irq_affinity.sh script, so that's probably how
I'll proceed.

In terms of upgrading the driver, is there a recommended version to
use assuming we're sticking with the stock RHEL 6.0 kernel?  Is 3.12.6
(latest stable) the way to go?
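
In case it helps the archives, the out-of-tree build I'd be attempting is
roughly the following (tarball name and paths are assumptions based on the
usual e1000.sourceforge.net release layout; requires the kernel-devel
package matching the running kernel):

```shell
# Sketch: build and load the out-of-tree ixgbe driver on RHEL/CentOS 6.
# Assumes ixgbe-3.12.6.tar.gz downloaded from e1000.sourceforge.net and
# kernel-devel for the running kernel installed.
tar xzf ixgbe-3.12.6.tar.gz
cd ixgbe-3.12.6/src
make install      # builds against /lib/modules/$(uname -r)/build
# WARNING: unloading the driver drops the link; don't do this over
# the interface you're logged in through.
rmmod ixgbe       # remove the in-tree 2.0.62-k2 module
modprobe ixgbe    # load the freshly installed module
```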

brian


On Mon, Jan 28, 2013 at 1:38 PM, Fujinaka, Todd <[email protected]> wrote:
> The latest ixgbe driver from e1000.sourceforge.net has the 
> set_irq_affinity.sh script included. I would certainly suggest you use a 
> newer driver because we're constantly improving the driver for performance 
> and fixing bugs we've found. The version you have is a couple of years old.
>
> As far as your questions regarding improving your RX performance, there's no 
> quick answer. Just running the latest driver and affinity script is the only 
> general answer I can give you and everything else is usually specific to what 
> you're doing.
>
> If you have specific questions, I'd suggest contacting your Intel 
> representative and we can discuss things.
>
> Todd Fujinaka
> Software Applications Engineer
> Networking Division (ND)
> Intel Corporation
> [email protected]
> (503) 712-4565
>
> -----Original Message-----
> From: Brian Fallik [mailto:[email protected]]
> Sent: Monday, January 28, 2013 6:46 AM
> To: [email protected]
> Subject: [E1000-devel] ixgbe rx throughput - IRQs vs CPUs
>
> Hi,
>
> I'm trying to diagnose a UDP receive throughput bottleneck using the ixgbe 
> driver and the 82599EB NIC.  Under ingress load we're seeing packet drops at 
> the NIC (via `watch ifconfig eth4`).
>
> Digging deeper, I noticed that /proc/interrupts reported that only
> CPU0 was servicing IRQs generated by the NIC.  By manually adjusting CPU 
> affinity in /proc/irq/*/smp_affinity I was able to spread the load across 
> more cores and avoid the packet drops.
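
A sketch of that manual pinning, for anyone following along (the "eth4"
match and the one-queue-per-core policy are assumptions; read the actual
IRQ names and numbers from /proc/interrupts on your own box first):

```shell
#!/bin/sh
# Sketch: pin each eth4 queue IRQ to its own CPU via smp_affinity.
# Assumes MSI-X queue interrupts show up in /proc/interrupts with
# "eth4" in their name; adjust the pattern for your system.

# cpu_mask N -> hex bitmask for /proc/irq/*/smp_affinity (bit N = CPU N)
cpu_mask() {
    printf '%x\n' $((1 << $1))
}

cpu=0
for irq in $(awk -F: '/eth4/ {gsub(/ /, "", $1); print $1}' \
        /proc/interrupts 2>/dev/null); do
    # Writing the mask requires root; each queue goes to the next core.
    cpu_mask "$cpu" > "/proc/irq/$irq/smp_affinity"
    cpu=$((cpu + 1))
done
```

Note that irqbalance, if running, may rewrite these masks; the usual
approach is to stop it (or configure it to skip these IRQs) before pinning
by hand.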
>
> See below for some basic info about our platform.
>
> My first question is whether this is the recommended approach to improve Rx 
> throughput.  Is there some driver setting or script (e.g.
> http://code.google.com/p/ntzc/source/browse/trunk/zc/ixgbe/set_irq_affinity.sh?r=16)
> that handles this automatically, or should we be manually assigning queues to 
> CPUs?  Also, should we assign each queue to a different CPU or use a subset 
> of CPUs?  How should I weigh physical processors, cores, and hyperthreads 
> when making this assignment?  And, lastly, would this be addressed if we 
> upgraded to a more modern driver?
>
> Thanks,
> brian
>
>
> === platform info ===
>
> # cat /proc/version
> Linux version 2.6.32-71.el6.x86_64 ([email protected]) (gcc version 
> 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) ) #1 SMP Fri May 20
> 03:51:51 BST 2011
> # cat /etc/issue
> CentOS Linux release 6.0 (Final)
> Kernel \r on an \m
> # ethtool -i eth4
> driver: ixgbe
> version: 2.0.62-k2
> firmware-version: 0.9-3
> bus-info: 0000:05:00.0
> # lspci -v -v -s 05:00.1
> 05:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit Network 
> Connection (rev 01)
>         Subsystem: Intel Corporation Ethernet Server Adapter X520-2
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
> Stepping- SERR- FastB2B- DisINTx+
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> <TAbort- <MAbort- >SERR- <PERR- INTx-
>         Latency: 0, Cache Line Size: 64 bytes
>         Interrupt: pin B routed to IRQ 50
>         Region 0: Memory at df380000 (64-bit, non-prefetchable) [size=512K]
>         Region 2: I/O ports at dce0 [size=32]
>         Region 4: Memory at df2fc000 (64-bit, non-prefetchable) [size=16K]
>         Expansion ROM at d0000000 [disabled] [size=512K]
>         Capabilities: [40] Power Management version 3
>                 Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
> PME(D0+,D1-,D2-,D3hot+,D3cold-)
>                 Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
>         Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
>                 Address: 0000000000000000  Data: 0000
>                 Masking: 00000000  Pending: 00000000
>         Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
>                 Vector table: BAR=4 offset=00000000
>                 PBA: BAR=4 offset=00002000
>         Capabilities: [a0] Express (v2) Endpoint, MSI 00
>                 DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s 
> <512ns, L1 <64us
>                         ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
>                 DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ 
> Unsupported+
>                         RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
>                         MaxPayload 256 bytes, MaxReadReq 512 bytes
>                 DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- 
> TransPend-
>                 LnkCap: Port #2, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 
> <1us, L1 <8us
>                         ClockPM- Surprise- LLActRep- BwNot-
>                 LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- 
> CommClk+
>                         ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>                 LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ 
> DLActive-
> BWMgmt- ABWMgmt-
>                 DevCap2: Completion Timeout: Range ABCD, TimeoutDis+
>                 DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-
>                 LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- 
> SpeedDis-, Selectable De-emphasis: -6dB
>                          Transmit Margin: Normal Operating Range, 
> EnterModifiedCompliance-
> ComplianceSOS-
>                          Compliance De-emphasis: -6dB
>                 LnkSta2: Current De-emphasis Level: -6dB
>         Capabilities: [e0] Vital Product Data
>                 Unknown small resource type 00, will not decode more.
>         Capabilities: [100] Advanced Error Reporting
>                 UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- 
> RxOF-
> MalfTLP- ECRC- UnsupReq- ACSViol-
>                 UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+ UnxCmplt+ 
> RxOF-
> MalfTLP- ECRC- UnsupReq- ACSViol-
>                 UESvrt: DLP+ SDES- TLP+ FCP+ CmpltTO+ CmpltAbrt- UnxCmplt- 
> RxOF+
> MalfTLP+ ECRC+ UnsupReq- ACSViol-
>                 CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- 
> NonFatalErr+
>                 CEMsk:  RxErr+ BadTLP+ BadDLLP+ Rollover+ Timeout+ 
> NonFatalErr+
>                 AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ 
> ChkEn-
>         Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-d7-c9-5c
>         Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
>                 ARICap: MFVC- ACS-, Next Function: 0
>                 ARICtl: MFVC- ACS-, Function Group: 0
>         Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
>                 IOVCap: Migration-, Interrupt Message Number: 000
>                 IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy-
>                 IOVSta: Migration-
>                 Initial VFs: 64, Total VFs: 64, Number of VFs: 64, Function 
> Dependency Link: 01
>                 VF offset: 128, stride: 2, Device ID: 10ed
>                 Supported Page Size: 00000553, System Page Size: 00000001
>                 Region 0: Memory at 0000000000000000 (64-bit, 
> non-prefetchable)
>                 Region 3: Memory at 0000000000000000 (64-bit, 
> non-prefetchable)
>                 VF Migration: offset: 00000000, BIR: 0
>         Kernel driver in use: ixgbe
>         Kernel modules: ixgbe

_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired