Hi,

I have an Intel DQ67EP desktop board with the following onboard NIC (lspci -vv output):
00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
        Subsystem: Intel Corporation Device 200f
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 43
        Region 0: Memory at fe600000 (32-bit, non-prefetchable) [size=128K]
        Region 1: Memory at fe628000 (32-bit, non-prefetchable) [size=4K]
        Region 2: I/O ports at f080 [size=32]
        Capabilities: [c8] Power Management version 2
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
                Address: 00000000fee0100c  Data: 41d9
        Capabilities: [e0] PCI Advanced Features
                AFCap: TP+ FLR+
                AFCtrl: FLR-
                AFStatus: TP-
        Kernel driver in use: e1000e
        Kernel modules: e1000e
# ethtool -i eth0
driver: e1000e
version: 1.6.3-NAPI
firmware-version: 0.13-4
bus-info: 0000:00:19.0
I am observing a weird packet loss issue that's trivial to reproduce:
hostA = DQ67EP board (10.0.0.221)
hostB = Other linux box #1
hostC = Other linux box #2
hostB# ping -f hostA -c 10000
PING 10.0.0.221 (10.0.0.221) 56(84) bytes of data.
...............
--- 10.0.0.221 ping statistics ---
10000 packets transmitted, 9985 received, 0% packet loss, time 1535ms
rtt min/avg/max/mdev = 0.077/0.114/0.354/0.024 ms, ipg/ewma 0.153/0.112 ms
In general I see 15-20 drops per 10k packets on this ping flood test
(ping rounds the loss percentage down, hence the "0% packet loss"
above despite only 9985 of 10000 packets being received).
The weird thing is that if I run the following hping3 flood from hostC
at the same time as the ping flood, the packet loss goes away entirely
(a rough script that reproduces both cases back to back is sketched
after the output below):
hostC# hping3 hostA --udp -p 1000 --faster -d 1492
HPING 10.0.0.221 (eth0 10.0.0.221): udp mode set, 28 headers + 1328 data bytes
hostB# ping -f hostA -c 10000
PING 10.0.0.221 (10.0.0.221) 56(84) bytes of data.
--- 10.0.0.221 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 2500ms
rtt min/avg/max/mdev = 0.082/0.235/0.364/0.026 ms, ipg/ewma 0.250/0.238 ms
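
For reference, both cases can be reproduced back to back with something
like the following (rough, untested sketch; run it on hostB as root since
ping -f requires it, and it assumes passwordless ssh from hostB to hostC
to drive the hping3 flood):

#!/bin/sh
TARGET=10.0.0.221    # hostA, the DQ67EP board

echo "== case 1: ping flood alone (typically 15-20 drops) =="
ping -f -c 10000 "$TARGET" | tail -n 2

echo "== case 2: same flood while hostC blasts UDP (no drops) =="
ssh hostC "hping3 $TARGET --udp -p 1000 --faster -d 1492 >/dev/null 2>&1" &
sleep 1                          # give the UDP flood time to ramp up
ping -f -c 10000 "$TARGET" | tail -n 2
ssh hostC "pkill hping3"         # stop the background flood
wait                             # reap the backgrounded ssh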
Here is the interface state on hostA after the tests:

eth0      Link encap:Ethernet  HWaddr 00:22:4d:50:fd:1d
          inet addr:10.0.0.221  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::222:4dff:fe50:fd1d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9433085 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50280 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:7389143658 (7.3 GB)  TX bytes:4991608 (4.9 MB)
          Interrupt:20 Memory:fe600000-fe620000
# ethtool -S eth0
NIC statistics:
rx_packets: 12658127
tx_packets: 50318
rx_bytes: 9977857378
tx_bytes: 5209746
rx_broadcast: 92
tx_broadcast: 6
rx_multicast: 0
tx_multicast: 43
rx_errors: 0
tx_errors: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_no_buffer_count: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
tx_abort_late_coll: 0
tx_deferred_ok: 0
tx_single_coll_ok: 0
tx_multi_coll_ok: 0
tx_timeout_count: 0
tx_restart_queue: 0
rx_long_length_errors: 0
rx_short_length_errors: 0
rx_align_errors: 0
tx_tcp_seg_good: 0
tx_tcp_seg_failed: 0
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0
rx_long_byte_count: 9977857378
rx_csum_offload_good: 286
rx_csum_offload_errors: 0
rx_header_split: 0
alloc_rx_buff_failed: 0
tx_smbus: 0
rx_smbus: 50105
dropped_smbus: 1542
rx_dma_failed: 0
tx_dma_failed: 0
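
The only drop-type counter that is non-zero above is dropped_smbus. To
see whether it (or any of the rx drop counters) actually increments
while the floods are running, the stats can be watched live with
something like this (one-liner sketch; assumes watch(1) and GNU grep
are available on hostA):

# watch -d -n1 'ethtool -S eth0 | grep -E "smbus|missed|no_buffer|dropped|flow_control"'

The -d flag highlights counters that changed between refreshes, which
makes a slowly leaking drop counter easy to spot.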
Thanks,
jim