On 11/04/2012 03:15 AM, Geoge.Q wrote:
> Intel 82599 Driver issue
>
> 1. Background:
>    1) Linux 2.6.32
>    2) ixgbe-2.9.7
>    3) Linux box with 8-core CPU and with two Intel 82599 NIC;

I don't believe we ever released a 2.9.7 version of the driver. Do you
mean perhaps 3.9.17?

> 2. Topology:
>    client-> [Switch A] -802.1q---[Linux BOX]--802.1q---[Switch B]--->Server
>
>    1) Both Intel 82599 NICs are installed in the Linux box (kernel
> version 2.6.32), which has an 8-core CPU.
>    2) Using brctl to create a bridge and add both 10G NICs to it;
>    3) Using vconfig to add tags to support trunking on the Linux box;
>    4) Switch A and switch B have trunking (802.1q) enabled.

I'm not sure what you mean here about using vconfig to add tags to
support trunking. The port should support VLANs automatically as soon as
you put it in promiscuous mode by adding it to the bridge. Are switch A
and switch B using the same VLAN tags or different ones? I'm just
wondering whether the Linux box is supposed to be acting as a bridge
between two VLANs.
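For reference, a plain 802.1q pass-through usually needs nothing beyond
bridging the raw ports; vconfig only comes into it if you want to bridge
a single VLAN. A sketch of both setups (eth13/eth14 and VLAN ID 100 are
my assumptions, not values from your report):

```shell
# Option A: transparent trunk -- bridge the raw ports; tagged frames
# pass through untouched and no vconfig is needed:
brctl addbr br0
brctl addif br0 eth13
brctl addif br0 eth14
ip link set br0 up

# Option B: bridge one specific VLAN -- create tagged sub-interfaces
# and bridge those instead (VLAN ID 100 is hypothetical):
vconfig add eth13 100
vconfig add eth14 100
brctl addbr br100
brctl addif br100 eth13.100
brctl addif br100 eth14.100
ip link set br100 up
```

If you are doing Option A, the vconfig step shouldn't be necessary at all.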

>
> 3. Issue
>
>    When we send traffic from the client to the server, we see packet
> loss.
>    I checked the statistics of the 82599 NICs and found that
> tx_queue_7_packets is always 0;
>    please see the data below:
>    ---------------------------------------------------------------------
>
>    Linux:~# ethtool -S eth13
> NIC statistics:
>      rx_packets: 5505658
>      tx_packets: 4830304
>      rx_bytes: 2256659484
>      tx_bytes: 1826777635
>      rx_errors: 0
>      tx_errors: 0
>      rx_dropped: 0
>      tx_dropped: 0
>      multicast: 0
>      collisions: 0
>      rx_over_errors: 0
>      rx_crc_errors: 0
>      rx_frame_errors: 0
>      rx_fifo_errors: 0
>      rx_missed_errors: 0
>      tx_aborted_errors: 0
>      tx_carrier_errors: 0
>      tx_fifo_errors: 0
>      tx_heartbeat_errors: 0
>      rx_pkts_nic: 5505660
>      tx_pkts_nic: 4830306
>      rx_bytes_nic: 2293776305
>      tx_bytes_nic: 1868174364
>      lsc_int: 3
>      tx_busy: 0
>      non_eop_descs: 0
>      broadcast: 40
>      rx_no_buffer_count: 0
>      tx_timeout_count: 0
>      tx_restart_queue: 0
>      rx_long_length_errors: 0
>      rx_short_length_errors: 0
>      tx_flow_control_xon: 0
>      rx_flow_control_xon: 0
>      tx_flow_control_xoff: 0
>      rx_flow_control_xoff: 0
>      rx_csum_offload_errors: 0
>      alloc_rx_page_failed: 0
>      alloc_rx_buff_failed: 0
>      lro_aggregated: 0
>      lro_flushed: 0
>      rx_no_dma_resources: 0
>      hw_rsc_aggregated: 0
>      hw_rsc_flushed: 0
>      fdir_match: 47
>      fdir_miss: 5550448
>      fdir_overflow: 19
>      os2bmc_rx_by_bmc: 0
>      os2bmc_tx_by_bmc: 0
>      os2bmc_tx_by_host: 0
>      os2bmc_rx_by_host: 0
>      tx_queue_0_packets: 1210725
>      tx_queue_0_bytes: 445558545
>      tx_queue_1_packets: 576769
>      tx_queue_1_bytes: 224664767
>      tx_queue_2_packets: 609933
>      tx_queue_2_bytes: 236731490
>      tx_queue_3_packets: 614933
>      tx_queue_3_bytes: 234892246
>      tx_queue_4_packets: 594526
>      tx_queue_4_bytes: 215556014
>      tx_queue_5_packets: 613356
>      tx_queue_5_bytes: 233560194
>      tx_queue_6_packets: 610064
>      tx_queue_6_bytes: 235815879
>      tx_queue_7_packets: 0                  <------------- It is zero!!!
>      tx_queue_7_bytes: 0
>      rx_queue_0_packets: 695195
>      rx_queue_0_bytes: 278977936
>      rx_queue_1_packets: 701734
>      rx_queue_1_bytes: 292600357
>      rx_queue_2_packets: 686360
>      rx_queue_2_bytes: 275718791
>      rx_queue_3_packets: 696852
>      rx_queue_3_bytes: 289093338
>      rx_queue_4_packets: 678927
>      rx_queue_4_bytes: 277568190
>      rx_queue_5_packets: 676590
>      rx_queue_5_bytes: 282294878
>      rx_queue_6_packets: 680791
>      rx_queue_6_bytes: 271963862
>      rx_queue_7_packets: 689211            <--------------It is NOT zero!!!
>      rx_queue_7_bytes: 288442287
> Linux:~# ethtool -S eth14
> NIC statistics:
>      rx_packets: 6372704
>      tx_packets: 4650711
>      rx_bytes: 2483325372
>      tx_bytes: 1881251140
>      rx_errors: 0
>      tx_errors: 0
>      rx_dropped: 0
>      tx_dropped: 0
>      multicast: 36938
>      collisions: 0
>      rx_over_errors: 0
>      rx_crc_errors: 0
>      rx_frame_errors: 0
>      rx_fifo_errors: 0
>      rx_missed_errors: 0
>      tx_aborted_errors: 0
>      tx_carrier_errors: 0
>      tx_fifo_errors: 0
>      tx_heartbeat_errors: 0
>      rx_pkts_nic: 6372709
>      tx_pkts_nic: 4650714
>      rx_bytes_nic: 2534303747
>      tx_bytes_nic: 1921360904
>      lsc_int: 1
>      tx_busy: 0
>      non_eop_descs: 0
>      broadcast: 11009
>      rx_no_buffer_count: 0
>      tx_timeout_count: 0
>      tx_restart_queue: 0
>      rx_long_length_errors: 0
>      rx_short_length_errors: 0
>      tx_flow_control_xon: 0
>      rx_flow_control_xon: 0
>      tx_flow_control_xoff: 0
>      rx_flow_control_xoff: 0
>      rx_csum_offload_errors: 0
>      alloc_rx_page_failed: 0
>      alloc_rx_buff_failed: 0
>      lro_aggregated: 0
>      lro_flushed: 0
>      rx_no_dma_resources: 0
>      hw_rsc_aggregated: 0
>      hw_rsc_flushed: 0
>      fdir_match: 32
>      fdir_miss: 6485646
>      fdir_overflow: 46
>      os2bmc_rx_by_bmc: 0
>      os2bmc_tx_by_bmc: 0
>      os2bmc_tx_by_host: 0
>      os2bmc_rx_by_host: 0
>      tx_queue_0_packets: 1180153
>      tx_queue_0_bytes: 476955048
>      tx_queue_1_packets: 579879
>      tx_queue_1_bytes: 229890239
>      tx_queue_2_packets: 587840
>      tx_queue_2_bytes: 240345235
>      tx_queue_3_packets: 573652
>      tx_queue_3_bytes: 231200121
>      tx_queue_4_packets: 571382
>      tx_queue_4_bytes: 235358528
>      tx_queue_5_packets: 575149
>      tx_queue_5_bytes: 227035305
>      tx_queue_6_packets: 582659
>      tx_queue_6_bytes: 240468055
>      tx_queue_7_packets: 0                  <--------------It is zero!!!
>      tx_queue_7_bytes: 0
>      rx_queue_0_packets: 824893
>      rx_queue_0_bytes: 306518386
>      rx_queue_1_packets: 779150
>      rx_queue_1_bytes: 302348706
>      rx_queue_2_packets: 762524
>      rx_queue_2_bytes: 305554678
>      rx_queue_3_packets: 809460
>      rx_queue_3_bytes: 320845361
>      rx_queue_4_packets: 805899
>      rx_queue_4_bytes: 317410011
>      rx_queue_5_packets: 784274
>      rx_queue_5_bytes: 295367729
>      rx_queue_6_packets: 805542
>      rx_queue_6_bytes: 317140027
>      rx_queue_7_packets: 800967            <--------------It is NOT zero!!!
>      rx_queue_7_bytes: 318146743

Take a look at your Tx queue 0: it is transmitting nearly double what
any of the other Tx queues are. I suspect that is where your missing
Tx traffic went.
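You can sanity-check that with a quick sum of the per-queue counters
against tx_pkts_nic. A sketch using the eth13 numbers copied from your
ethtool -S listing:

```shell
# Sum the per-queue Tx packet counters (sample data from the eth13
# output above) and compare with tx_pkts_nic.
awk '/tx_queue_[0-9]+_packets/ { sum += $2 } END { print sum }' <<'EOF'
     tx_queue_0_packets: 1210725
     tx_queue_1_packets: 576769
     tx_queue_2_packets: 609933
     tx_queue_3_packets: 614933
     tx_queue_4_packets: 594526
     tx_queue_5_packets: 613356
     tx_queue_6_packets: 610064
     tx_queue_7_packets: 0
EOF
# prints 4830306, which matches your tx_pkts_nic: 4830306
```

On a live box you could pipe `ethtool -S eth13` straight into the same
awk. The total accounts for every transmitted packet, so nothing is
being dropped on Tx; the load is just spread over queues 0-6 only.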

>
> 4. My questions
>     1) Can anyone tell me how to adjust the NIC?
>     2) I re-compiled the driver with LRO disabled, per the Intel
> website; the issue persists.
>     3) I tried the latest driver (ixgbe-3.11.33); the issue persists.
>     4) The fdir_miss count is huge; is it related to 802.1q packets?
>
>
> Appreciate your help.
> Nexthop

This isn't a NIC issue. What looks to be happening is that your queue 7
traffic is most likely ending up on queue 0.

This isn't an LRO issue either. All LRO would do is cause traffic to not
be delivered if the resulting frame was too large to send out the
transmitting interface.

This looks like an issue with the stack, not an issue with the driver.
For whatever reason your kernel is only transmitting on queues 0-6 and
is ignoring queue 7.
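One way to confirm it is the stack and not the driver is to check how
many queue vectors the driver actually registered (eth13 is an
assumption; substitute your interface name):

```shell
# ixgbe names its MSI-X vectors per queue pair; count them:
grep eth13 /proc/interrupts
# You should see one eth13-TxRx-N line per queue pair. If vectors 0-7
# are all present, the driver set up 8 queues and the queue selection
# in the stack is what's leaving queue 7 idle.
```

The output is hardware-dependent, so treat this as a diagnostic sketch
rather than an expected listing.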

The flow director code doesn't do much in switching/routing situations;
it is meant to be used for application targeted routing. This is why you
are seeing the high fdir_miss rate. You may just want to turn off ATR
and instead enable ntuple filtering, if that is supported by your
kernel. You can do that by running "ethtool -K ethX ntuple on".
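For example (eth13, the TCP port, and the queue number below are
placeholders, not values from your setup):

```shell
# Enabling ntuple filtering turns off ATR in the ixgbe driver:
ethtool -K eth13 ntuple on

# Then flows can be steered explicitly; e.g. send TCP traffic with
# destination port 5001 to Rx queue 2 (all values hypothetical):
ethtool -U eth13 flow-type tcp4 dst-port 5001 action 2

# List the filters currently programmed:
ethtool -u eth13
```

Unlike ATR, ntuple filters only match what you program, so for pure
bridging traffic you may simply leave the filter table empty.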

Thanks,

Alex

_______________________________________________
E1000-devel mailing list
E1000-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/e1000-devel
To learn more about Intel® Ethernet, visit 
http://communities.intel.com/community/wired
