Hello,

I'm still trying to fix this issue; I hope someone has some ideas.

Ilya
-----Original Message-----
From: Ilya Schanikov [mailto:[email protected]] 
Sent: Thursday, April 04, 2013 7:31 PM
To: [email protected]
Subject: [E1000-devel] rx_fifo_errors during peak load
Importance: High

Hello,

Please help me get rid of rx_fifo_errors.

I have a couple of LVS servers: Intel I350-T4 adapter, all 4 interfaces bonded 
together, 1 Gbit/s RX at peak load, 800k RX packets/s, 30% CPU load, 100 
errors/s, kernel 2.6.39.4, openSUSE 10.2. I usually start seeing errors when 
traffic reaches 600 Mbit/s and 500k packets/s.
I used iperf to test what is wrong: even 1.5 Gbit/s RX and 1M packets/s were 
not enough to produce a single rx_fifo_error.
I found a useful e-mail
(http://www.mail-archive.com/[email protected]/msg04299.html),
but it didn't help in my case.
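A small sketch for quantifying the problem (the interface name eth2 and the
ethtool counter name are taken from the output below; adjust as needed):
sample rx_fifo_errors from ethtool -S once per second and print the delta.

```shell
#!/bin/sh
# Print rx_fifo_errors per second for one interface (Ctrl-C to stop).
# eth2 is an assumption -- substitute whichever slave is under load.
IFACE=${1:-eth2}
prev=$(ethtool -S "$IFACE" | awk '/rx_fifo_errors:/ {print $2}')
while sleep 1; do
    cur=$(ethtool -S "$IFACE" | awk '/rx_fifo_errors:/ {print $2}')
    echo "rx_fifo_errors/s: $((cur - prev))"
    prev=$cur
done
```

Running this during the traffic ramp shows at what packet rate the drops start.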

modinfo igb
filename:       /lib/modules/2.6.39.4-net-07/kernel/drivers/net/igb/igb.ko
version:        4.1.2
license:        GPL
description:    Intel(R) Gigabit Ethernet Network Driver
author:         Intel Corporation, <[email protected]>
srcversion:     DBABE4876BBED027F305396


lspci | grep -i eth
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
08:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection 
(rev 01)
08:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection 
(rev 01)
08:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection 
(rev 01)
08:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection 
(rev 01)


cat /etc/modprobe.conf.local
#
# please add local extensions to this file
#
options igb RSS=6,6,6,6,6,6

08:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection 
(rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter I350-T4
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR+ FastB2B-
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- >SERR- <PERR-
        Latency: 0, Cache Line Size: 256 bytes
        Interrupt: pin A routed to IRQ 30
        Region 0: Memory at fb800000 (32-bit, non-prefetchable) [size=1M]
        Region 3: Memory at fb77c000 (32-bit, non-prefetchable) [size=16K]
        Expansion ROM at fb780000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA 
PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [50] Message Signalled Interrupts: Mask+ 64bit+ Queue=0/0 
Enable-
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [70] MSI-X: Enable+ Mask- TabSize=10
                Vector table: BAR=3 offset=00000000
                PBA: BAR=3 offset=00002000
        Capabilities: [a0] Express Endpoint IRQ 0
                Device: Supported: MaxPayload 512 bytes, PhantFunc 0, ExtTag-
                Device: Latency L0s <512ns, L1 <64us
                Device: AtnBtn- AtnInd- PwrInd-
                Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported-
                Device: RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop+
                Device: MaxPayload 256 bytes, MaxReadReq 512 bytes
                Link: Supported Speed unknown, Width x4, ASPM L0s L1, Port 0
                Link: Latency L0s <4us, L1 <32us
                Link: ASPM Disabled RCB 64 bytes CommClk+ ExtSynch-
                Link: Speed unknown, Width x4

ethtool -S eth2
NIC statistics:
     rx_packets: 447885957564
     tx_packets: 452967126934
     rx_bytes: 67873379483451
     tx_bytes: 68756350830290
     rx_broadcast: 428921
     tx_broadcast: 0
     rx_multicast: 1551532
     tx_multicast: 277100
     multicast: 1551532
     collisions: 0
     rx_crc_errors: 0
     rx_no_buffer_count: 22993
     rx_missed_errors: 15207708
     tx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_window_errors: 0
     tx_abort_late_coll: 0
     tx_deferred_ok: 0
     tx_single_coll_ok: 0
     tx_multi_coll_ok: 0
     tx_timeout_count: 0
     rx_long_length_errors: 0
     rx_short_length_errors: 0
     rx_align_errors: 0
     tx_tcp_seg_good: 120
     tx_tcp_seg_failed: 0
     rx_flow_control_xon: 0
     rx_flow_control_xoff: 0
     tx_flow_control_xon: 0
     tx_flow_control_xoff: 0
     rx_long_byte_count: 67873379483451
     tx_dma_out_of_sync: 0
     lro_aggregated: 0
     lro_flushed: 0
     lro_recycled: 0
     tx_smbus: 0
     rx_smbus: 0
     dropped_smbus: 0
     os2bmc_rx_by_bmc: 0
     os2bmc_tx_by_bmc: 0
     os2bmc_tx_by_host: 0
     os2bmc_rx_by_host: 0
     rx_errors: 0
     tx_errors: 0
     tx_dropped: 0
     rx_length_errors: 0
     rx_over_errors: 0
     rx_frame_errors: 0
     rx_fifo_errors: 15207708
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     tx_queue_0_packets: 90661746199
     tx_queue_0_bytes: 13121025270748
     tx_queue_0_restart: 7195
     tx_queue_1_packets: 87116922206
     tx_queue_1_bytes: 12606476905526
     tx_queue_1_restart: 6373
     tx_queue_2_packets: 87119613756
     tx_queue_2_bytes: 12611199722139
     tx_queue_2_restart: 6420
     tx_queue_3_packets: 87117808078
     tx_queue_3_bytes: 12613371344837
     tx_queue_3_restart: 6600
     tx_queue_4_packets: 87921390354
     tx_queue_4_bytes: 12653843205269
     tx_queue_4_restart: 7046
     tx_queue_5_packets: 13029864072
     tx_queue_5_bytes: 1885102419867
     tx_queue_5_restart: 0
     rx_queue_0_packets: 76985209776
     rx_queue_0_bytes: 11360654634336
     rx_queue_0_drops: 0
     rx_queue_0_csum_err: 499288
     rx_queue_0_alloc_failed: 0
     rx_queue_1_packets: 76992526831
     rx_queue_1_bytes: 11362461767970
     rx_queue_1_drops: 0
     rx_queue_1_csum_err: 495089
     rx_queue_1_alloc_failed: 0
     rx_queue_2_packets: 73491590861
     rx_queue_2_bytes: 10841294846552
     rx_queue_2_drops: 0
     rx_queue_2_csum_err: 478361
     rx_queue_2_alloc_failed: 0
     rx_queue_3_packets: 73476036502
     rx_queue_3_bytes: 10841575376811
     rx_queue_3_drops: 0
     rx_queue_3_csum_err: 470166
     rx_queue_3_alloc_failed: 0
     rx_queue_4_packets: 73472725484
     rx_queue_4_bytes: 10839168689362
     rx_queue_4_drops: 0
     rx_queue_4_csum_err: 471490
     rx_queue_4_alloc_failed: 0
     rx_queue_5_packets: 73468408695
     rx_queue_5_bytes: 10836758716694
     rx_queue_5_drops: 0
     rx_queue_5_csum_err: 479575
     rx_queue_5_alloc_failed: 0
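Worth noting in the statistics above: rx_missed_errors and rx_fifo_errors are
identical (15207708), which suggests the "FIFO" errors are really missed
packets, i.e. the adapter's internal packet buffer filled before the host
drained it. Relative to total RX traffic the loss is tiny:

```shell
# Drop ratio: rx_missed_errors / rx_packets, as a percentage
awk 'BEGIN { printf "%.4f%%\n", 15207708 / 447885957564 * 100 }'
# prints 0.0034%
```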

ethtool -k eth2
Offload parameters for eth2:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on

ethtool -g eth2
Ring parameters for eth2:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
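The RX rings above are already at their 4096 maximum, so the next knob I would
look at is interrupt moderation. These commands only illustrate the idea; the
rx-usecs value is a made-up starting point, not a recommendation:

```shell
# Show current interrupt coalescing settings
ethtool -c eth2
# Example only: raise rx-usecs so each interrupt drains a larger batch
ethtool -C eth2 rx-usecs 100
```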

cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 4
        Actor Key: 17
        Partner Key: 9
        Partner Mac Address: c4:71:fe:2b:ba:00

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: a0:36:9f:0c:b8:b8
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: a0:36:9f:0c:b8:b9
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: a0:36:9f:0c:b8:ba
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: a0:36:9f:0c:b8:bb
Aggregator ID: 1
Slave queue ID: 0
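Since the drops start at only ~30% average CPU load, one core may be
saturating while the rest idle. A quick check (the eth2-* interrupt names are
an assumption based on the usual igb naming) whether the per-queue IRQs are
pinned to distinct CPUs:

```shell
# List eth2's per-queue interrupts and their CPU affinity masks
grep eth2 /proc/interrupts
for irq in $(grep eth2 /proc/interrupts | awk -F: '{gsub(/ /, "", $1); print $1}'); do
    printf 'IRQ %s affinity: ' "$irq"
    cat /proc/irq/"$irq"/smp_affinity
done
```

If several queues share one CPU mask, spreading them manually (or via
irqbalance) is worth trying.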


C-states:

<4>using polling idle threads.
<4>WARNING: polling idle and HT enabled, performance may degrade.



Thanks in advance!

Ilya Schanikov
odnoklassniki.ru
_______________________________________________
E1000-devel mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
