Hi,
I have the following setup:
* Virtualized OpenStack environment with Intel X520 NICs
* Hypervisor using the ixgbe driver
* Virtual machine using the ixgbevf driver (version 4.6.1) on Red Hat
Enterprise Linux 7.6, running VPP 18.01 and DPDK 17.11.4
* VM interfaces bonded in active-standby mode (mode 1) on both ingress and
egress; a sketch of the bonding configuration follows this list
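For reference, the bonds are created through DPDK's bonding PMD in the VPP
startup.conf, roughly as below. This is an illustrative sketch rather than my
exact file; the PCI addresses are assumptions inferred from the
device_0/6/0 .. device_0/9/0 interface names in the output further down.

    dpdk {
      # mode=1 is active-backup in the DPDK bonding PMD
      vdev eth_bond0,mode=1,slave=0000:00:06.0,slave=0000:00:07.0
      vdev eth_bond1,mode=1,slave=0000:00:08.0,slave=0000:00:09.0
    }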
In the normal state everything is fine and the bond interfaces are operational.
However, when one of the physical interfaces on the hypervisor is brought down,
failover to the standby interfaces on the bonds does not work. The second
interface in each bond does become primary, but the original primary is still
reported as UP by VPP even though it should be DOWN. At the same time the
device stats reported by VPP jump to values just below 2^32 per slave, and
traffic no longer works through the bond interfaces:
              Name                Idx   Link  Hardware
BondEthernet0                      5     up   Slave-Idx: 1 2
  Ethernet address fa:16:3e:20:2c:ae
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0
    tx frames ok                                    8589934243
    tx bytes ok                                   137438924646
    rx frames ok                                    8589849574
    rx bytes ok                                   137433171720
    extended stats:
      rx good packets                               8589849574
      tx good packets                               8589934243
      rx good bytes                               137433171720
      tx good bytes                               137438924646
BondEthernet1                      6     up   Slave-Idx: 3 4
  Ethernet address fa:16:3e:f2:3c:af
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0
    tx frames ok                                    8589934273
    tx bytes ok                                   137438926918
    rx frames ok                                    8589849579
    rx bytes ok                                   137433172132
    extended stats:
      rx good packets                               8589849579
      tx good packets                               8589934273
      rx good bytes                               137433172132
      tx good bytes                               137438926918
device_0/6/0                       1    slave device_0/6/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0
    tx frames ok                                    4294966950
    tx bytes ok                                    68719448136
    rx frames ok                                    4294882284
    rx bytes ok                                    68713695344
device_0/7/0                       2    slave device_0/7/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0
    tx frames ok                                    4294967293
    tx bytes ok                                    68719476510
    rx frames ok                                    4294967290
    rx bytes ok                                    68719476376
device_0/8/0                       3    slave device_0/8/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0
    tx frames ok                                    4294966980
    tx bytes ok                                    68719450408
    rx frames ok                                    4294882289
    rx bytes ok                                    68713695756
device_0/9/0                       4    slave device_0/9/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0
    tx frames ok                                    4294967293
    tx bytes ok                                    68719476510
    rx frames ok                                    4294967290
    rx bytes ok                                    68719476376
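Incidentally, those counter values look like underflow rather than real
traffic. Here is a quick check of the numbers (plain arithmetic on the figures
above, nothing VPP-specific):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        /* "tx frames ok" values copied from the output above */
        uint64_t slave_a = 4294966950; /* device_0/6/0 */
        uint64_t slave_b = 4294967293; /* device_0/7/0 */
        uint64_t bond    = 8589934243; /* BondEthernet0 */

        /* Each slave counter is a small negative value wrapped mod 2^32... */
        printf("slave_a = 2^32 - %" PRIu64 "\n", (UINT64_C(1) << 32) - slave_a); /* 346 */
        printf("slave_b = 2^32 - %" PRIu64 "\n", (UINT64_C(1) << 32) - slave_b); /* 3 */

        /* ...and the bond counter is just the sum of its slaves. */
        printf("bond == slave_a + slave_b: %s\n",
               bond == slave_a + slave_b ? "yes" : "no"); /* yes */
        return 0;
    }

So each slave counter appears to have gone slightly negative (by 346 and 3
frames respectively) and wrapped, which is why the bond totals sit near 2^33.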
There are no specific errors reported in /var/log/messages on either the VM or
the hypervisor.
Any ideas on this issue?
Thanks
Greg O'Rawe