On 30.10.2012 0:58, Skidmore, Donald C wrote:
> Hi Andrey,
>
Hi Donald!

> We haven't seen this in our in-house testing so I have a few questions:
>
> - Where is the system you're pinging, connected back to back to eliminate other
> network latency?

No, it isn't connected back to back. Normal latency across this network is about 2 ms:

# mtr ya.ru
                                 My traceroute  [v0.82]
s4 (0.0.0.0)                                             Tue Oct 30 03:18:14 2012
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
                                           Packets               Pings
 Host                                    Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 176.58.32.1                           0.0%     6    1.8   2.1   1.5   3.8   0.8
 2. vl-998.r1-m9.mnogobyte.net           33.3%     6    0.3   0.5   0.3   0.6   0.1
 3. msk-ix-m10.yandex.net                 0.0%     6    1.5   1.5   1.4   1.8   0.2
 4. l3-s3600-marionetka.yandex.net        0.0%     6    2.0   4.4   1.8  15.9   5.6
 5. www.yandex.ru                         0.0%     5    1.9   1.9   1.7   2.1   0.2

> - What is your interrupt spread looking like when you do both of these tests
> (cat /proc/interrupts)?

One interrupt per core. Right now it looks like this:

 91: 2837781482 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-0
 92: 2 2671735458 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-1
 93: 2 0 2630881398 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-2
 94: 2 0 0 2622081651 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-3
 95: 2 0 0 0 2636865564 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-4
 96: 2 0 0 0 0 2628479017 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-5
 97: 2 0 0 0 0 0 2647708166 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-6
 98: 2 0 0 0 0 0 0 2594546620 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0   PCI-MSI-edge   eth2-TxRx-7
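
If it helps, this is a quick way to summarize the spread; just a sketch, assuming the
queue name is the last field of each /proc/interrupts line and the last two fields are
not counters:

# Sketch: print IRQ number, total interrupt count and queue name for each vector.
grep 'eth2-TxRx' /proc/interrupts | \
    awk '{ total = 0; for (i = 2; i <= NF - 2; i++) total += $i; print $1, total, $NF }'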

> - Is irqbalance running or did you run set_irq_affinity.sh?

Yes, I have a script:

#!/bin/bash
eth2rx0=`/bin/grep "eth2-TxRx-0$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx1=`/bin/grep "eth2-TxRx-1$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx2=`/bin/grep "eth2-TxRx-2$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx3=`/bin/grep "eth2-TxRx-3$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx4=`/bin/grep "eth2-TxRx-4$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx5=`/bin/grep "eth2-TxRx-5$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx6=`/bin/grep "eth2-TxRx-6$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx7=`/bin/grep "eth2-TxRx-7$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx8=`/bin/grep "eth2-TxRx-8$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx9=`/bin/grep "eth2-TxRx-9$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx10=`/bin/grep "eth2-TxRx-10$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx11=`/bin/grep "eth2-TxRx-11$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx12=`/bin/grep "eth2-TxRx-12$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx13=`/bin/grep "eth2-TxRx-13$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx14=`/bin/grep "eth2-TxRx-14$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx15=`/bin/grep "eth2-TxRx-15$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx16=`/bin/grep "eth2-TxRx-16$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx17=`/bin/grep "eth2-TxRx-17$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx18=`/bin/grep "eth2-TxRx-18$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx19=`/bin/grep "eth2-TxRx-19$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx20=`/bin/grep "eth2-TxRx-20$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx21=`/bin/grep "eth2-TxRx-21$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx22=`/bin/grep "eth2-TxRx-22$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
eth2rx23=`/bin/grep "eth2-TxRx-23$" /proc/interrupts | /usr/bin/awk '{ print $1 }' | tr -d ':'`
echo "1" > /proc/irq/$eth2rx0/smp_affinity
echo "2" > /proc/irq/$eth2rx1/smp_affinity
echo "4" > /proc/irq/$eth2rx2/smp_affinity
echo "8" > /proc/irq/$eth2rx3/smp_affinity
echo "10" > /proc/irq/$eth2rx4/smp_affinity
echo "20" > /proc/irq/$eth2rx5/smp_affinity
echo "40" > /proc/irq/$eth2rx6/smp_affinity
echo "80" > /proc/irq/$eth2rx7/smp_affinity
echo "100" > /proc/irq/$eth2rx8/smp_affinity
echo "200" > /proc/irq/$eth2rx9/smp_affinity
echo "400" > /proc/irq/$eth2rx10/smp_affinity
echo "800" > /proc/irq/$eth2rx11/smp_affinity
echo "1000" > /proc/irq/$eth2rx12/smp_affinity
echo "2000" > /proc/irq/$eth2rx13/smp_affinity
echo "4000" > /proc/irq/$eth2rx14/smp_affinity
echo "8000" > /proc/irq/$eth2rx15/smp_affinity
echo "10000" > /proc/irq/$eth2rx16/smp_affinity
echo "20000" > /proc/irq/$eth2rx17/smp_affinity
echo "40000" > /proc/irq/$eth2rx18/smp_affinity
echo "80000" > /proc/irq/$eth2rx19/smp_affinity
echo "100000" > /proc/irq/$eth2rx20/smp_affinity
echo "200000" > /proc/irq/$eth2rx21/smp_affinity
echo "400000" > /proc/irq/$eth2rx22/smp_affinity
echo "800000" > /proc/irq/$eth2rx23/smp_affinity
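
The same pinning can also be written as a short loop; this is only a sketch, assuming
24 queues named eth2-TxRx-0..23 pinned to CPUs 0..23 in order:

#!/bin/bash
# Sketch: pin eth2-TxRx-0..23 to CPUs 0..23, one queue per core.
# Assumes the queue name is the last field of its /proc/interrupts line,
# so the trailing "$" anchor keeps e.g. TxRx-1 from also matching TxRx-10..19.
for q in $(seq 0 23); do
    irq=$(grep "eth2-TxRx-${q}$" /proc/interrupts | awk '{ print $1 }' | tr -d ':')
    mask=$(printf '%x' $((1 << q)))   # smp_affinity expects a hexadecimal CPU bitmask
    echo "$mask" > /proc/irq/$irq/smp_affinity
done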

> - Anything interesting in the NIC's statistics (ethtool -S) or in the syslog?

In syslog there are only the messages from when I changed the SFP+ module
(supported and unsupported):

root@s4:~# grep ixgbe /var/log/syslog.1
Oct 28 11:20:47 s4 kernel: [58172.860115] ixgbe 0000:05:00.0: eth2: NIC Link is Down
Oct 28 11:20:48 s4 kernel: [58173.811723] ixgbe 0000:05:00.0: eth2: Reset adapter
Oct 28 11:20:48 s4 kernel: [58174.180037] ixgbe 0000:05:00.0: eth2: WARNING: Intel (R) Network Connections are quality tested using Intel (R) Ethernet Optics. Using untested modules is not supported and may cause unstable operation or damage to the module or the adapter. Intel Corporation is not responsible for any harm caused by using untested modules.
Oct 28 11:20:48 s4 kernel: [58174.239033] ixgbe 0000:05:00.0: eth2: detected SFP+: 5
Oct 28 11:21:08 s4 kernel: [58193.881021] ixgbe 0000:05:00.0: eth2: detected SFP+: 5
Oct 28 11:21:22 s4 kernel: [58207.580083] ixgbe 0000:05:00.0: eth2: NIC Link is Up 10 Gbps, Flow Control: RX/TX

I only have statistics from the current setup with 8 interrupts :(

root@s4:~# ethtool -i eth2
driver: ixgbe
version: 3.11.33
firmware-version: 0x61b70001
bus-info: 0000:05:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: no

root@s4:~# ethtool -S eth2
NIC statistics:
     rx_packets: 26630609528
     tx_packets: 50883978148
     rx_bytes: 1918303303877
     tx_bytes: 75208937754502
     rx_errors: 0
     tx_errors: 0
     rx_dropped: 0
     tx_dropped: 0
     multicast: 4
     collisions: 0
     rx_over_errors: 0
     rx_crc_errors: 0
     rx_frame_errors: 0
     rx_fifo_errors: 0
     rx_missed_errors: 0
     tx_aborted_errors: 0
     tx_carrier_errors: 0
     tx_fifo_errors: 0
     tx_heartbeat_errors: 0
     rx_pkts_nic: 26652334928
     tx_pkts_nic: 50883978080
     rx_bytes_nic: 2026345049027
     tx_bytes_nic: 75412838203936
     lsc_int: 5
     tx_busy: 0
     non_eop_descs: 17497791
     broadcast: 11
     rx_no_buffer_count: 0
     tx_timeout_count: 1
     tx_restart_queue: 320094553
     rx_long_length_errors: 1
     rx_short_length_errors: 0
     tx_flow_control_xon: 60
     rx_flow_control_xon: 0
     tx_flow_control_xoff: 561
     rx_flow_control_xoff: 0
     rx_csum_offload_errors: 56599
     alloc_rx_page_failed: 0
     alloc_rx_buff_failed: 0
     lro_aggregated: 0
     lro_flushed: 0
     rx_no_dma_resources: 0
     hw_rsc_aggregated: 36104186
     hw_rsc_flushed: 14378831
     fdir_match: 26543075238
     fdir_miss: 178851649
     fdir_overflow: 1367
     fcoe_bad_fccrc: 0
     fcoe_last_errors: 0
     rx_fcoe_dropped: 0
     rx_fcoe_packets: 0
     rx_fcoe_dwords: 0
     fcoe_noddp: 0
     fcoe_noddp_ext_buff: 0
     tx_fcoe_packets: 0
     tx_fcoe_dwords: 0
     os2bmc_rx_by_bmc: 0
     os2bmc_tx_by_bmc: 0
     os2bmc_tx_by_host: 0
     os2bmc_rx_by_host: 0
     tx_queue_0_packets: 7154914703
     tx_queue_0_bytes: 10575617703741
     tx_queue_1_packets: 6362341774
     tx_queue_1_bytes: 9404457469419
     tx_queue_2_packets: 6229744087
     tx_queue_2_bytes: 9209298018808
     tx_queue_3_packets: 6189723044
     tx_queue_3_bytes: 9141488573119
     tx_queue_4_packets: 6229022055
     tx_queue_4_bytes: 9204992369860
     tx_queue_5_packets: 6211813397
     tx_queue_5_bytes: 9176843645258
     tx_queue_6_packets: 6394697956
     tx_queue_6_bytes: 9461427220497
     tx_queue_7_packets: 6111721185
     tx_queue_7_bytes: 9034812830832
     rx_queue_0_packets: 3726561689
     rx_queue_0_bytes: 271761713827
     rx_queue_1_packets: 3335343328
     rx_queue_1_bytes: 246597310031
     rx_queue_2_packets: 3272123955
     rx_queue_2_bytes: 236609943833
     rx_queue_3_packets: 3243408791
     rx_queue_3_bytes: 233167596640
     rx_queue_4_packets: 3261868061
     rx_queue_4_bytes: 236444041166
     rx_queue_5_packets: 3251908825
     rx_queue_5_bytes: 237653331265
     rx_queue_6_packets: 3338343921
     rx_queue_6_bytes: 231715253076
     rx_queue_7_packets: 3201051010
     rx_queue_7_bytes: 224354117279
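
If it is useful, I can watch a few of these counters while reproducing the latency with
something like the sketch below (the interface name and the chosen counters are just
examples, not a fixed list):

#!/bin/bash
# Sketch: dump a handful of ethtool counters once a second so it is easy to see
# which ones are increasing while the latency spikes are being reproduced.
IF=eth2
while sleep 1; do
    date
    ethtool -S "$IF" | egrep 'tx_restart_queue|rx_missed_errors|rx_no_buffer_count|flow_control_xoff|fdir_miss'
done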

> Thanks,
> -Don Skidmore <donald.c.skidm...@intel.com>
>
> -----Original Message-----
> From: Андрей Василишин [mailto:a.vasilis...@kpi.ua]
> Sent: Saturday, October 27, 2012 4:12 PM
> To: e1000-devel@lists.sourceforge.net
> Subject: Re: [E1000-devel] 100+ ms latency when 82599EB put under moderate load
>
> Hello!
> I have had the same problem with an X520-LR1 card:
>
> root@s4:~# ping ya.ru
> PING ya.ru (213.180.204.3) 56(84) bytes of data.
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=1 ttl=59 time=2.44 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=2 ttl=59 time=113 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=3 ttl=59 time=311 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=4 ttl=59 time=137 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=5 ttl=59 time=264 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=6 ttl=59 time=48.3 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=7 ttl=59 time=143 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=8 ttl=59 time=3.53 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=9 ttl=59 time=26.4 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=10 ttl=59 time=63.1 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=11 ttl=59 time=260 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=12 ttl=59 time=109 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=13 ttl=59 time=363 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=14 ttl=59 time=6.53 ms
>
> There were 24 queues (2xE5645 with HT).
> I just compiled the latest module, v3.11.33, but it didn't help.
> I also have one server with the same card, but with 2xE5620 and HT off
> (8 queues), and it works fine. I just added "options ixgbe RSS=8 DCA=2
> LLIPort=80 allow_unsupported_sfp=1" to /etc/modprobe.d/aliases-m-i-t.conf
> (Debian), and now:
>
> root@s4:~# ping ya.ru
> PING ya.ru (213.180.204.3) 56(84) bytes of data.
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=1 ttl=59 time=2.13 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=2 ttl=59 time=2.37 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=3 ttl=59 time=2.21 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=4 ttl=59 time=2.41 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=5 ttl=59 time=2.41 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=6 ttl=59 time=2.54 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=7 ttl=59 time=2.56 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=8 ttl=59 time=2.09 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=9 ttl=59 time=2.09 ms
> 64 bytes from www.yandex.ru (213.180.204.3): icmp_req=10 ttl=59 time=2.19 ms
>
> RSS=8 helped me!
>
> --
> WBR, Andrey Vasilishin CDIG1-UANIC, CDIG1-RIPE

--
WBR, Andrey Vasilishin CDIG1-UANIC, CDIG1-RIPE