>-----Original Message-----
>From: Stathis Gkotsis [mailto:stathisgot...@hotmail.com]
>Sent: Friday, October 10, 2014 4:31 AM
>To: e1000-devel@lists.sourceforge.net
>Subject: [E1000-devel] Packet drops with ixgbe 3.22.3 and
>Intel 82599EB
>
>Hello,
>
>(Sorry for sending this message twice, my previous message
>was in HTML and not plain text, please ignore it)
>
>My setup is the following:
>
>CPU: Intel(R) Xeon(R) CPU           X5687  @ 3.60GHz
>NIC: Intel 82599EB
>
>OS: Ubuntu 10.04 64-bit with kernel: 2.6.32-40
>driver: ixgbe 3.22.3 , which I compiled and installed from
>source.
>
>There are two physical interfaces on the NIC: eth0 receives
>3 Gbps and eth1 receives 5 Gbps of traffic (maximum
>traffic). No traffic is transmitted.
>The ixgbe driver is loaded with all the default parameters.
>
>I see packet drops: the rx_missed_errors counter in "ethtool
>-S eth1" is increasing. This happens only on the eth1
>interface, from time to time, when the total traffic exceeds
>7 Gbps. At those times, the CPU usage of the ksoftirqd
>process rises to 100%.

If your traffic is mostly receive (which the counters below suggest), it is 
possible that the driver is switching into polling (NAPI) mode. You can tell 
by monitoring the number of interrupts/sec - you should see it drop 
significantly compared to lighter traffic.
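A quick way to watch that (the interrupt line names in /proc/interrupts depend 
on how your kernel registered the eth1 MSI-X vectors, so adjust the pattern if 
needed):

```shell
# Watch the per-CPU interrupt counts for eth1's MSI-X vectors once a
# second; under NAPI polling the counters advance much more slowly
# than under interrupt-driven receive, even though traffic is high.
watch -n 1 'grep eth1 /proc/interrupts'

# Alternatively, vmstat reports total system interrupts/sec in the
# "in" column:
vmstat 1
```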

>Also, ethtool -a eth1:
>#:/usr/src# ethtool -a eth1
>Pause parameters for eth1:
>Autonegotiate:  off
>RX:             on
>TX:             on
>
>and ethtool -S eth1:
>
>NIC statistics:
>     rx_packets: 37835863761
>     tx_packets: 3
>     rx_bytes: 22762288726713
>     tx_bytes: 230
>     rx_errors: 0
>     tx_errors: 0
>     rx_dropped: 0
>     tx_dropped: 0
>     multicast: 23495130
>     collisions: 0
>     rx_over_errors: 0
>     rx_crc_errors: 0
>     rx_frame_errors: 0
>     rx_fifo_errors: 0
>     rx_missed_errors: 405689925

Rx missed errors are reported when the HW cannot keep up with the incoming 
traffic for some reason - usually insufficient receive buffers, or 
insufficient bandwidth on the bus. The counter comes from a HW register 
(MPC, Missed Packets Count).
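To get a feel for how fast MPC is climbing relative to the traffic bursts, you 
can sample the counter over a window (a sketch; assumes the "ethtool -S" 
output format shown below):

```shell
# Measure the rate of rx_missed_errors over a 10-second window.
iface=eth1
interval=10
a=$(ethtool -S "$iface" | awk '/rx_missed_errors/ {print $2}')
sleep "$interval"
b=$(ethtool -S "$iface" | awk '/rx_missed_errors/ {print $2}')
echo "rx_missed_errors: $(( (b - a) / interval )) pkts/sec missed on $iface"
```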

>     tx_aborted_errors: 0
>     tx_carrier_errors: 0
>     tx_fifo_errors: 0
>     tx_heartbeat_errors: 0
>     rx_pkts_nic: 37836344162
>     tx_pkts_nic: 3
>     rx_bytes_nic: 23171493680710
>     tx_bytes_nic: 242
>     lsc_int: 5
>     tx_busy: 0
>     non_eop_descs: 0
>     broadcast: 3604
>     rx_no_buffer_count: 0
>     tx_timeout_count: 0
>     tx_restart_queue: 0
>     rx_long_length_errors: 0
>     rx_short_length_errors: 0
>     tx_flow_control_xon: 102949
>     rx_flow_control_xon: 0
>     tx_flow_control_xoff: 1692189
>     rx_flow_control_xoff: 0

Here you can see that the interface is transmitting XOFF (pause) frames - 
which means that it gets overwhelmed at times.

You can also try disabling flow control - this will at least give a clue as 
to where the bottleneck is (through the counters - DMA vs. rx buffer).
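For example (note that with pause frames off the drops may increase, but the 
counters will then show where the backpressure originates):

```shell
# Turn off flow-control pause frames on eth1, then watch which drop
# counter climbs: rx_no_dma_resources points at the PCIe/DMA path,
# while alloc_rx_buff_failed / rx_no_buffer_count point at host
# buffer refill.
ethtool -A eth1 autoneg off rx off tx off

# Re-check the pause settings and the relevant counters:
ethtool -a eth1
ethtool -S eth1 | egrep 'rx_no_dma_resources|alloc_rx_buff_failed|rx_no_buffer_count|rx_missed_errors'
```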

>     rx_csum_offload_errors: 0
>     alloc_rx_page_failed: 0
>     alloc_rx_buff_failed: 0
>     lro_aggregated: 0
>     lro_flushed: 0
>     rx_no_dma_resources: 0
>     hw_rsc_aggregated: 0
>     hw_rsc_flushed: 0
>     fdir_match: 0
>     fdir_miss: 33343469891

Flow Director is missing on essentially every packet here (fdir_miss is close 
to the total rx packet count). You can try disabling it, which will switch 
receive steering back to RSS.
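A sketch of how that might be done with the out-of-tree driver - the module 
parameter names vary between driver versions, so check "modinfo ixgbe" first, 
and note that reloading the module takes both ports down briefly:

```shell
# See which Flow Director / ATR parameters this ixgbe build supports.
modinfo ixgbe | grep -iE 'fdir|atr'

# If AtrSampleRate is supported, setting it to 0 disables ATR
# sampling, so receive steering falls back to RSS.
rmmod ixgbe
modprobe ixgbe AtrSampleRate=0

# Verify: fdir_match/fdir_miss should stop incrementing.
ethtool -S eth1 | grep fdir
```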

Thanks,
Emil


