Yes, thank you for your help.
On Wed, Jan 30, 2013 at 5:28 PM, Ronciak, John <[email protected]> wrote:
> So, to close the loop on the mailing list: after offline discussions,
> this turned out to be a matter of configuring bonding and VLANs. It was
> not an issue with the drivers or the HW.
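>
> (We did not spell out the exact change on-list, so as a pointer for
> anyone finding this in the archives: the usual first check for ARP
> oddities on a bond-plus-VLAN setup is the bonding mode, e.g.:
>
>     cat /proc/net/bonding/bond0
>
> The balance-alb mode in particular rewrites ARP replies to do receive
> load balancing and can interact badly with VLANs stacked on the bond,
> while active-backup and 802.3ad generally do not.)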
>
> Thanks.
>
> Cheers,
> John
>
> From: Jekels, Donny [mailto:[email protected]]
> Sent: Wednesday, January 30, 2013 9:47 AM
> To: Ronciak, John
> Cc: [email protected]
> Subject: Re: [E1000-devel] ARPING does not work
>
> John,
>
> Here is what I am talking about: all basic network functionality works.
>
> We have also set arp_announce to 1 in the kernel settings, which is
> supposed to fix this behavior, and it does, but not on these NICs.
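>
> (For reference, roughly how we set it; I am quoting from memory, so
> whether it was scoped to "all" or per-interface is an assumption:
>
>     sysctl -w net.ipv4.conf.all.arp_announce=1
>
> made persistent with the equivalent line in /etc/sysctl.conf.)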
>
> The IP that I used for this testing is 10.8.198.250.
>
> When I plumb the IP up and arping from it, the IP does not respond to
> pings.
>
> It ONLY starts working after I initiate "ping -I 10.8.198.250 10.11.11.11".
>
> Then the IP on the interface works.
>
> BUT arping fails completely.
>
> djekels@postgres19:~$ ip route
> 10.33.0.0/24
>         nexthop via 10.4.42.1 dev eth4 weight 1
>         nexthop via 10.4.42.1 dev eth5 weight 1
>         nexthop via 10.4.42.1 dev eth6 weight 1
>         nexthop via 10.4.42.1 dev eth7 weight 1
> 10.8.198.0/24 dev vlan198 proto kernel scope link src 10.8.198.119
> 10.8.199.0/24 dev vlan199 proto kernel scope link src 10.8.199.39
> 10.32.0.0/24
>         nexthop via 10.4.42.1 dev eth4 weight 1
>         nexthop via 10.4.42.1 dev eth5 weight 1
>         nexthop via 10.4.42.1 dev eth6 weight 1
>         nexthop via 10.4.42.1 dev eth7 weight 1
> 10.4.42.0/24 dev eth4 proto kernel scope link src 10.4.42.96
> 10.4.42.0/24 dev eth5 proto kernel scope link src 10.4.42.97
> 10.4.42.0/24 dev eth6 proto kernel scope link src 10.4.42.98
> 10.4.42.0/24 dev eth7 proto kernel scope link src 10.4.42.99
> 172.23.16.0/23 dev vlan4000 proto kernel scope link src 172.23.16.211
> 172.23.0.0/16 via 172.23.16.1 dev vlan4000
> default via 10.8.198.1 dev vlan198
> djekels@postgres19:~$
>
> djekels@postgres19:~$ ip addr
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
>     link/ether bc:30:5b:f0:a3:80 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
>     link/ether bc:30:5b:f0:a3:80 brd ff:ff:ff:ff:ff:ff
> 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>     link/ether bc:30:5b:f0:a3:82 brd ff:ff:ff:ff:ff:ff
> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>     link/ether bc:30:5b:f0:a3:83 brd ff:ff:ff:ff:ff:ff
> 6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
>     link/ether a0:36:9f:08:09:c8 brd ff:ff:ff:ff:ff:ff
>     inet 10.4.42.96/24 brd 10.4.42.255 scope global eth4
>     inet6 fe80::a236:9fff:fe08:9c8/64 scope link
>        valid_lft forever preferred_lft forever
> 7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
>     link/ether a0:36:9f:08:09:c9 brd ff:ff:ff:ff:ff:ff
>     inet 10.4.42.97/24 brd 10.4.42.255 scope global eth5
>     inet6 fe80::a236:9fff:fe08:9c9/64 scope link
>        valid_lft forever preferred_lft forever
> 8: eth6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
>     link/ether a0:36:9f:08:09:ca brd ff:ff:ff:ff:ff:ff
>     inet 10.4.42.98/24 brd 10.4.42.255 scope global eth6
>     inet6 fe80::a236:9fff:fe08:9ca/64 scope link
>        valid_lft forever preferred_lft forever
> 9: eth7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP qlen 1000
>     link/ether a0:36:9f:08:09:cb brd ff:ff:ff:ff:ff:ff
>     inet 10.4.42.99/24 brd 10.4.42.255 scope global eth7
>     inet6 fe80::a236:9fff:fe08:9cb/64 scope link
>        valid_lft forever preferred_lft forever
> 10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether bc:30:5b:f0:a3:80 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::be30:5bff:fef0:a380/64 scope link
>        valid_lft forever preferred_lft forever
> 11: vlan4000@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether bc:30:5b:f0:a3:80 brd ff:ff:ff:ff:ff:ff
>     inet 172.23.16.211/23 brd 172.23.17.255 scope global vlan4000
>     inet6 fe80::be30:5bff:fef0:a380/64 scope link
>        valid_lft forever preferred_lft forever
> 12: vlan198@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether bc:30:5b:f0:a3:80 brd ff:ff:ff:ff:ff:ff
>     inet 10.8.198.119/24 brd 10.8.198.255 scope global vlan198
>     inet 10.8.198.134/24 brd 10.8.198.255 scope global secondary vlan198
>     inet 10.8.198.167/24 brd 10.8.198.255 scope global secondary vlan198
>     inet 10.8.198.250/24 brd 10.8.198.255 scope global secondary vlan198
>     inet6 fe80::be30:5bff:fef0:a380/64 scope link
>        valid_lft forever preferred_lft forever
> 13: vlan199@bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>     link/ether bc:30:5b:f0:a3:80 brd ff:ff:ff:ff:ff:ff
>     inet 10.8.199.39/24 brd 10.8.199.255 scope global vlan199
>     inet 10.8.199.105/24 brd 10.8.199.255 scope global secondary vlan199
>     inet 10.8.199.108/24 brd 10.8.199.255 scope global secondary vlan199
>     inet6 fe80::be30:5bff:fef0:a380/64 scope link
>        valid_lft forever preferred_lft forever
>
> djekels@postgres19:~$ netstat -s
> Ip:
>     71751543 total packets received
>     166 with invalid addresses
>     0 forwarded
>     0 incoming packets discarded
>     71706086 incoming packets delivered
>     75075912 requests sent out
>     10 dropped because of missing route
> Icmp:
>     3645 ICMP messages received
>     6 input ICMP message failed.
>     ICMP input histogram:
>         destination unreachable: 1716
>         echo requests: 1851
>         echo replies: 78
>     56066 ICMP messages sent
>     0 ICMP messages failed
>     ICMP output histogram:
>         destination unreachable: 53169
>         echo request: 1046
>         echo replies: 1851
> IcmpMsg:
>         InType0: 78
>         InType3: 1716
>         InType8: 1851
>         OutType0: 1851
>         OutType3: 53169
>         OutType8: 1046
> Tcp:
>     708305 active connections openings
>     3975 passive connection openings
>     431 failed connection attempts
>     33564 connection resets received
>     14 connections established
>     68956581 segments received
>     72313084 segments send out
>     24468 segments retransmited
>     0 bad segments received.
>     30446 resets sent
> Udp:
>     2671621 packets received
>     74239 packets to unknown port received.
>     0 packet receive errors
>     2682294 packets sent
> UdpLite:
> TcpExt:
>     396 resets received for embryonic SYN_RECV sockets
>     1075 packets pruned from receive queue because of socket buffer overrun
>     174110 TCP sockets finished time wait in fast timer
>     93 time wait sockets recycled by time stamp
>     1547391 delayed acks sent
>     47 delayed acks further delayed because of locked socket
>     Quick ack mode was activated 887 times
>     723398 packets directly queued to recvmsg prequeue.
>     5496 bytes directly in process context from backlog
>     4362730 bytes directly received in process context from prequeue
>     49343700 packet headers predicted
>     17131 packets header predicted and directly queued to user
>     1638076 acknowledgments not containing data payload received
>     31754543 predicted acknowledgments
>     181 times recovered from packet loss by selective acknowledgements
>     52 congestion windows recovered without slow start by DSACK
>     1023 congestion windows recovered without slow start after partial ack
>     191 TCP data loss events
>     TCPLostRetransmit: 15
>     114 timeouts after SACK recovery
>     2 timeouts in loss state
>     314 fast retransmits
>     15 forward retransmits
>     54 retransmits in slow start
>     12748 other TCP timeouts
>     13 SACK retransmits failed
>     385169 packets collapsed in receive queue due to low socket buffer
>     874 DSACKs sent for old packets
>     236 DSACKs received
>     8 connections reset due to unexpected data
>     29985 connections reset due to early user close
>     91 connections aborted due to timeout
>     TCPDSACKIgnoredOld: 168
>     TCPDSACKIgnoredNoUndo: 9
>     TCPSackShifted: 259
>     TCPSackMerged: 503
>     TCPSackShiftFallback: 530
> IpExt:
>     InBcastPkts: 37803
>     InOctets: -1264740945
>     OutOctets: -592937989
>     InBcastOctets: 9434134
>
> On Wed, Jan 30, 2013 at 11:34 AM, Ronciak, John <[email protected]>
> wrote:
>
> So the NICs are sending and receiving packets. What is the stack doing
> ('netstat -s')? My guess is that it's not doing the right thing with
> broadcast packets. Do you have a resolv.conf file configured to do what
> you want? Maybe it's not correct?
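>
> (By configured I just mean the basics, e.g. something like the
> following in /etc/resolv.conf; the values here are placeholders only:
>
>     nameserver 10.8.198.1
>     search example.com
> )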
>
> Cheers,
> John
>
>
> From: Jekels, Donny [mailto:[email protected]]
> Sent: Wednesday, January 30, 2013 9:10 AM
> To: Ronciak, John
> Cc: [email protected]
> Subject: Re: [E1000-devel] ARPING does not work
>
> John,
>
> The NIC works for TCP and UDP sessions; however, neither broadcast nor
> ARP seems to work.
>
> We move IPs around between servers, and part of this move is to send a
> gratuitous ARP (GARP) for the source IP so the switches can update
> their MAC tables.
>
> Without this functionality the NICs are useless to us.
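>
> (Concretely, the move does roughly this; the interface name here is
> illustrative and we use the iputils arping:
>
>     ip addr add 10.8.198.250/24 dev vlan198
>     arping -U -c 3 -I vlan198 10.8.198.250
>
> where -U sends unsolicited broadcast ARP so switches and neighbors
> refresh their tables.)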
>
> Please help.
>
> root@postgres19:~# ethtool -S eth7
> NIC statistics:
>      rx_packets: 11
>      tx_packets: 6
>      rx_bytes: 704
>      tx_bytes: 492
>      rx_broadcast: 11
>      tx_broadcast: 0
>      rx_multicast: 0
>      tx_multicast: 6
>      multicast: 0
>      collisions: 0
>      rx_crc_errors: 0
>      rx_no_buffer_count: 0
>      rx_missed_errors: 0
>      tx_aborted_errors: 0
>      tx_carrier_errors: 0
>      tx_window_errors: 0
>      tx_abort_late_coll: 0
>      tx_deferred_ok: 0
>      tx_single_coll_ok: 0
>      tx_multi_coll_ok: 0
>      tx_timeout_count: 0
>      rx_long_length_errors: 0
>      rx_short_length_errors: 0
>      rx_align_errors: 0
>      tx_tcp_seg_good: 0
>      tx_tcp_seg_failed: 0
>      rx_flow_control_xon: 0
>      rx_flow_control_xoff: 0
>      tx_flow_control_xon: 0
>      tx_flow_control_xoff: 0
>      rx_long_byte_count: 704
>      tx_dma_out_of_sync: 0
>      lro_aggregated: 0
>      lro_flushed: 0
>      lro_recycled: 0
>      tx_smbus: 0
>      rx_smbus: 0
>      dropped_smbus: 0
>      os2bmc_rx_by_bmc: 0
>      os2bmc_tx_by_bmc: 0
>      os2bmc_tx_by_host: 0
>      os2bmc_rx_by_host: 0
>      rx_errors: 0
>      tx_errors: 0
>      tx_dropped: 0
>      rx_length_errors: 0
>      rx_over_errors: 0
>      rx_frame_errors: 0
>      rx_fifo_errors: 0
>      tx_fifo_errors: 0
>      tx_heartbeat_errors: 0
>      tx_queue_0_packets: 6
>      tx_queue_0_bytes: 468
>      tx_queue_0_restart: 0
>      rx_queue_0_packets: 11
>      rx_queue_0_bytes: 660
>      rx_queue_0_drops: 0
>      rx_queue_0_csum_err: 0
>      rx_queue_0_alloc_failed: 0
>
> root@postgres19:~# ethtool -i eth7
> driver: igb
> version: 4.1.2
> firmware-version: 1.6, 0x80000816
> bus-info: 0000:04:00.3
>
>
> root@postgres19:~# arping 10.11.12.13
> ARPING 10.11.12.13
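>
> (No replies ever come back. For completeness, some arping variants
> want the interface named explicitly; the choice of eth7 here is
> illustrative:
>
>     arping -I eth7 10.11.12.13
> )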
>
> On Wed, Jan 30, 2013 at 11:05 AM, Ronciak, John <[email protected]>
> wrote:
>
> Hi Donny,
>
> Sorry to hear you are having problems. This is most likely not a
> NIC/driver issue; something is probably not configured correctly on
> these systems. Did you use ethtool to look at the HW stats of the
> interfaces you are trying to ping out of? If not, please do so. Also,
> is each port/interface on a different subnet? The stack does not do
> well (without extra configuration) with multiple ports on the same
> subnet.
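>
> (The "extra configuration" I am referring to is the arp_filter and
> arp_ignore sysctls; a minimal sketch:
>
>     sysctl -w net.ipv4.conf.all.arp_filter=1
>
> With arp_filter=1, an interface only answers ARP for addresses that
> the kernel would route out of that same interface, which avoids the
> usual "ARP flux" between ports on one subnet.)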
>
> The reason Debian Squeeze doesn't have support is that they didn't pick up
> an updated driver before they released it. This is not a problem with the
> Intel drivers, it's a timing thing for Debian releases and how they update
> kernels and drivers.
>
> Cheers,
> John
>
>
>
> > -----Original Message-----
> > From: Jekels, Donny [mailto:[email protected]]
> > Sent: Tuesday, January 29, 2013 8:35 PM
> > To: [email protected]
> > Subject: [E1000-devel] ARPING does not work
> >
> > We have a handful of these Intel quad-port NICs with driver 4.1.7, and
> > arping does not work on them.
> >
> > We run stock Debian Squeeze and had to compile the igb driver from
> > Intel to get the NIC ordering correct.
> >
> > Now arping is also not working.
> >
> > Can you give me some advice on how to get this issue resolved?
> >
> > thanks,
> > Donny