Hi Dick, we need to know exactly what you are expecting to happen here. When 
you saturate transmit and queue large amounts of data on all Tx queues, any 
ping or other traffic you inject has to contend with everything already 
queued ahead of it. 

There is a simple test you can do: try disabling TSO using ethtool:

ethtool -K ethX tso off

If that helps, then we know we need to pursue ways to get your high-priority 
traffic onto its own queue, which, by the way, is why the single-thread iperf 
works: ping lands on a different queue (by luck) and gets out sooner because 
it is not stuck behind other traffic.
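
If you want to see which queue the ping actually lands on, most multiqueue 
drivers expose per-queue counters via ethtool. Something along these lines 
(sketch only; the interface name eth1 and the tx_queue_N_packets counter 
naming are assumptions and vary by driver):

```shell
# Turn off TSO on the interface under test
ethtool -K eth1 tso off

# Watch per-queue Tx counters; run a ping and see which queue's count moves.
# The tx_queue_N_packets naming is ixgbe-style and may differ elsewhere.
watch -d "ethtool -S eth1 | grep -E 'tx_queue_[0-9]+_packets'"
```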
--
Jesse Brandeburg


On Sep 12, 2012, at 8:29 AM, "Dick Snippe" <dick.sni...@tech.omroep.nl> wrote:

> Hello,
> 
> we are running into an issue with Intel 82599EB 10-Gigabit NICs, where ping
> round-trip times on a local network increase from 0.1 ms to >100 ms when
> the NICs are put under a moderate load. This causes all sorts of
> latency-related problems.
> Interestingly, there seems to be no packet loss; ping reports 0% loss.
> 
> When testing with iperf I'm unable to reproduce the issue; iperf measures
> 9.5 Gbit bandwidth and ping reports a 0.1 ms RTT.
> However, the problem is reproducible with an Apache/ab combination.
> When running Apache (geared for large downloads) on the server and
> ab on a client in the same network, it is easy to trigger the problem.
> It should be noted, however, that the problem only occurs when ab
> is run with a largish concurrency (-c 50).
> 
> Our test setup is as follows:
> - host1 runs a webserver (dltest.omroep.nl); serving a 100Mbyte test file
> - host2 acts as the ab client. The ab command line is
>    ab -n 100 -c 50 http://dltest.omroep.nl/100m
> - host3 acts as an observer, performing ping:
>    sudo ping -c 100 -i 0.001 -q dltest.omroep.nl
> 
> The ping output (when host1 is under load) is consistently >100 ms:
> $ sudo ping -c 100 -i 0.001 -q dltest.omroep.nl
> PING dltest1afp.omroep.nl (145.58.52.132) 56(84) bytes of data.
> --- dltest1afp.omroep.nl ping statistics ---
> 100 packets transmitted, 100 received, 0% packet loss, time 731ms
> rtt min/avg/max/mdev = 87.425/121.739/150.212/17.075 ms, pipe 21
> 
> ab isn't happy either:
> Transfer rate:          138158.10 [Kbytes/sec] received
> 
> and host1 typically shows (sar -n DEV) ~100 Mbyte/sec throughput:
> 16:04:26    IFACE   rxpck/s   txpck/s    rxkB/s     txkB/s   rxcmp/s   txcmp/s  rxmcst/s
> 16:04:27     eth1  37055.00  98042.00   2171.21  144954.92      0.00      0.00      0.00
> 
> When testing with lower concurrency (ab -c 8) all is well; we typically get
> close to 10 Gbit throughput:
> Transfer rate:          939407.67 [Kbytes/sec] received
> 
> So far I've tried different kernel versions (3.0.x, 3.5.x) and various
> ethtool and sysctl tunings, but nothing has improved performance.
> Could it perhaps be a driver issue?
> 
> Details:
> 
> Hardware: IBM HS22 blades with dual 10G mezzanine adapter, connected
> via a Cisco Catalyst Switch 3110G for IBM BladeCenter.
> 
> Software: We've tested using a vanilla linux 3.5.3 kernel.
> smp_affinity follows the hints set by the driver; i.e.
> /proc/irq/<NN>/smp_affinity is set to /proc/irq/<NN>/affinity_hint
> for the relevant interrupts.
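> For reference, the pinning was applied with a loop along these lines (a
> sketch; IRQ numbers differ per host, and affinity_hint is only present
> for drivers that provide it):

```shell
# Copy the driver-provided affinity hint into smp_affinity for each
# interrupt belonging to eth1 (run as root; IRQs found via /proc/interrupts)
for irq in $(awk -F: '/eth1/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
    [ -r /proc/irq/$irq/affinity_hint ] &&
        cat /proc/irq/$irq/affinity_hint > /proc/irq/$irq/smp_affinity
done
```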
> 
> $ dmesg|grep ixgbe
> [    6.509784] ixgbe: Intel(R) 10 Gigabit PCI Express Network Driver - version 3.9.15-k
> [    6.517510] ixgbe: Copyright (c) 1999-2012 Intel Corporation.
> [    6.617763] ixgbe 0000:15:00.0: irq 74 for MSI/MSI-X
> [    6.617770] ixgbe 0000:15:00.0: irq 75 for MSI/MSI-X
> [    6.617776] ixgbe 0000:15:00.0: irq 76 for MSI/MSI-X
> [    6.617782] ixgbe 0000:15:00.0: irq 77 for MSI/MSI-X
> [    6.617789] ixgbe 0000:15:00.0: irq 78 for MSI/MSI-X
> [    6.617795] ixgbe 0000:15:00.0: irq 79 for MSI/MSI-X
> [    6.617801] ixgbe 0000:15:00.0: irq 80 for MSI/MSI-X
> [    6.617807] ixgbe 0000:15:00.0: irq 81 for MSI/MSI-X
> [    6.617813] ixgbe 0000:15:00.0: irq 82 for MSI/MSI-X
> [    6.617819] ixgbe 0000:15:00.0: irq 83 for MSI/MSI-X
> [    6.617825] ixgbe 0000:15:00.0: irq 84 for MSI/MSI-X
> [    6.617830] ixgbe 0000:15:00.0: irq 85 for MSI/MSI-X
> [    6.617836] ixgbe 0000:15:00.0: irq 86 for MSI/MSI-X
> [    6.617842] ixgbe 0000:15:00.0: irq 87 for MSI/MSI-X
> [    6.617848] ixgbe 0000:15:00.0: irq 88 for MSI/MSI-X
> [    6.617854] ixgbe 0000:15:00.0: irq 89 for MSI/MSI-X
> [    6.617860] ixgbe 0000:15:00.0: irq 90 for MSI/MSI-X
> [    6.617910] ixgbe 0000:15:00.0: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16
> [    6.626456] ixgbe 0000:15:00.0: (PCI Express:5.0GT/s:Width x8) a0:36:9f:02:94:04
> [    6.633920] ixgbe 0000:15:00.0: MAC: 2, PHY: 1, PBA No: E46189-012
> [    6.643912] ixgbe 0000:15:00.0: Intel(R) 10 Gigabit Network Connection
> [    6.747566] ixgbe 0000:15:00.1: irq 91 for MSI/MSI-X
> [    6.747574] ixgbe 0000:15:00.1: irq 92 for MSI/MSI-X
> [    6.747580] ixgbe 0000:15:00.1: irq 93 for MSI/MSI-X
> [    6.747586] ixgbe 0000:15:00.1: irq 94 for MSI/MSI-X
> [    6.747595] ixgbe 0000:15:00.1: irq 95 for MSI/MSI-X
> [    6.747601] ixgbe 0000:15:00.1: irq 96 for MSI/MSI-X
> [    6.747610] ixgbe 0000:15:00.1: irq 97 for MSI/MSI-X
> [    6.747616] ixgbe 0000:15:00.1: irq 98 for MSI/MSI-X
> [    6.747622] ixgbe 0000:15:00.1: irq 99 for MSI/MSI-X
> [    6.747628] ixgbe 0000:15:00.1: irq 100 for MSI/MSI-X
> [    6.747634] ixgbe 0000:15:00.1: irq 101 for MSI/MSI-X
> [    6.747640] ixgbe 0000:15:00.1: irq 102 for MSI/MSI-X
> [    6.747645] ixgbe 0000:15:00.1: irq 103 for MSI/MSI-X
> [    6.747651] ixgbe 0000:15:00.1: irq 104 for MSI/MSI-X
> [    6.747657] ixgbe 0000:15:00.1: irq 105 for MSI/MSI-X
> [    6.747663] ixgbe 0000:15:00.1: irq 106 for MSI/MSI-X
> [    6.747669] ixgbe 0000:15:00.1: irq 107 for MSI/MSI-X
> [    6.747714] ixgbe 0000:15:00.1: Multiqueue Enabled: Rx Queue count = 16, Tx Queue count = 16
> [    6.756260] ixgbe 0000:15:00.1: (PCI Express:5.0GT/s:Width x8) a0:36:9f:02:94:05
> [    6.763723] ixgbe 0000:15:00.1: MAC: 2, PHY: 1, PBA No: E46189-012
> [    6.773610] ixgbe 0000:15:00.1: Intel(R) 10 Gigabit Network Connection
> [    6.780135] ixgbevf: Intel(R) 10 Gigabit PCI Express Virtual Function Network Driver - version 2.6.0-k
> [    6.789419] ixgbevf: Copyright (c) 2009 - 2012 Intel Corporation.
> 
> $ sudo ethtool -i eth1
> driver: ixgbe
> version: 3.9.15-k
> firmware-version: 0x613e0001
> bus-info: 0000:15:00.1
> 
> $  sudo lspci -vv|grep -A 35 82599EB
> 15:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit KX4 Network Connection (rev 01)
>        Subsystem: Intel Corporation Ethernet Mezzanine Adapter X520-KX4-2
>        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ 
> Stepping- SERR- FastB2B- DisINTx+
>        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> <TAbort- <MAbort- >SERR- <PERR- INTx-
>        Latency: 0, Cache Line Size: 64 bytes
>        Interrupt: pin A routed to IRQ 24
>        Region 0: Memory at fb000000 (64-bit, prefetchable) [size=8M]
>        Region 2: I/O ports at 1020 [size=32]
>        Region 4: Memory at fb804000 (64-bit, prefetchable) [size=16K]
>        Expansion ROM at 90000000 [disabled] [size=8M]
>        Capabilities: [40] Power Management version 3
>                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
> PME(D0+,D1-,D2-,D3hot+,D3cold+)
>                Status: D0 PME-Enable- DSel=0 DScale=1 PME-
>        Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ 
> Queue=0/0 Enable-
>                Address: 0000000000000000  Data: 0000
>                Masking: 00000000  Pending: 00000000
>        Capabilities: [70] MSI-X: Enable+ Mask- TabSize=64
>                Vector table: BAR=4 offset=00000000
>                PBA: BAR=4 offset=00002000
>        Capabilities: [a0] Express (v2) Endpoint, MSI 00
>                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, 
> L1 <64us
>                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
>                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ 
> Unsupported+
>                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
>                        MaxPayload 256 bytes, MaxReadReq 4096 bytes
>                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ 
> TransPend-
>                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 
> <1us, L1 <8us
>                        ClockPM- Suprise- LLActRep- BwNot-
>                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
>                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>                LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ 
> DLActive- BWMgmt- ABWMgmt-
>        Capabilities: [100] Advanced Error Reporting <?>
>        Capabilities: [140] Device Serial Number 04-94-02-ff-ff-9f-36-a0
>        Capabilities: [150] #0e
>        Capabilities: [160] #10
>        Kernel driver in use: ixgbe
> --
> 15:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit KX4 Network Connection (rev 01)
>        Subsystem: Intel Corporation Ethernet Mezzanine Adapter X520-KX4-2
>        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ 
> Stepping- SERR- FastB2B- DisINTx+
>        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> <TAbort- <MAbort- >SERR- <PERR- INTx-
>        Latency: 0, Cache Line Size: 64 bytes
>        Interrupt: pin B routed to IRQ 34
>        Region 0: Memory at fa800000 (64-bit, prefetchable) [size=8M]
>        Region 2: I/O ports at 1000 [size=32]
>        Region 4: Memory at fb800000 (64-bit, prefetchable) [size=16K]
>        Expansion ROM at 90800000 [disabled] [size=8M]
>        Capabilities: [40] Power Management version 3
>                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
> PME(D0+,D1-,D2-,D3hot+,D3cold+)
>                Status: D0 PME-Enable- DSel=0 DScale=1 PME-
>        Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ 
> Queue=0/0 Enable-
>                Address: 0000000000000000  Data: 0000
>                Masking: 00000000  Pending: 00000000
>        Capabilities: [70] MSI-X: Enable+ Mask- TabSize=64
>                Vector table: BAR=4 offset=00000000
>                PBA: BAR=4 offset=00002000
>        Capabilities: [a0] Express (v2) Endpoint, MSI 00
>                DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, 
> L1 <64us
>                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
>                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ 
> Unsupported+
>                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
>                        MaxPayload 256 bytes, MaxReadReq 4096 bytes
>                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ 
> TransPend-
>                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Latency L0 
> <1us, L1 <8us
>                        ClockPM- Suprise- LLActRep- BwNot-
>                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
>                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>                LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ 
> DLActive- BWMgmt- ABWMgmt-
>        Capabilities: [100] Advanced Error Reporting <?>
>        Capabilities: [140] Device Serial Number 04-94-02-ff-ff-9f-36-a0
>        Capabilities: [150] #0e
>        Capabilities: [160] #10
>        Kernel driver in use: ixgbe
> 
> 
> 
> -- 
> Dick Snippe, internetbeheerder     \ fight war
> beh...@omroep.nl, +31 35 677 3555   \ not wars
> NPO ICT, Sumatralaan 45, 1217 GP Hilversum, NPO Gebouw A
> 
> ------------------------------------------------------------------------------
> Live Security Virtual Conference
> Exclusive live event will cover all the ways today's security and 
> threat landscape has changed and how IT managers can respond. Discussions 
> will include endpoint security, mobile security and the latest in malware 
> threats. http://www.accelacomm.com/jaw/sfrnl04242012/114/50122263/
> _______________________________________________
> E1000-devel mailing list
> E1000-devel@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/e1000-devel
> To learn more about Intel&#174; Ethernet, visit 
> http://communities.intel.com/community/wired
