I'm not really sure what you are trying to do, but here are some comments. The 
lacp_status is "negotiated", so the other side is healthy and sending LACP 
PDUs, which means the bond has not fallen back to active-backup; yet it seems 
you are trying to use individual links of the bundle. Another useful command is 
"ovs-appctl lacp/show". Also, PXE is probably not going to work with 
VLAN-tagged packets.
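
For what it's worth, a quick way to inspect both views from the OVS side (bond name "bond17" taken from your config below):

```shell
# Per-slave LACP detail: actor/partner system IDs, keys, and state flags
ovs-appctl lacp/show bond17

# The bond's overall view, including lacp_status and the active slave
ovs-appctl bond/show bond17
```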

Darragh.

From: Nguyen, Minh-Nghia [mailto:[email protected]]
Sent: 09 June 2017 16:25
To: [email protected]
Cc: O'Reilly, Darragh <[email protected]>
Subject: RE: LACP fallback support

Hi,

We are aware of this config and have tried to use it, but are running into a 
problem.

Here is the bond port config:

_uuid               : 22808a9c-0ab3-4aff-881d-92573fc17912
bond_active_slave   : "52:54:00:e6:c4:99"
bond_downdelay      : 31000
bond_fake_iface     : false
bond_mode           : balance-tcp
bond_updelay        : 31000
external_ids        : {}
fake_bridge         : false
interfaces          : [0fc925f7-0166-4455-873b-3f3355b672b2, 
2859937c-25ed-472e-81ff-bb1913196640, 600d7d5f-f25f-405f-b190-f1f5c0db6a5c, 
7a71a3b1-7e0e-4239-b933-03433e665587]
lacp                : active
mac                 : []
name                : "bond17"
other_config        : {bond-detect-mode=miimon, bond-miimon-interval="100", 
host_id=none, lacp-fallback-ab="true", lacp-time=fast}
qos                 : []
rstp_statistics     : {}
rstp_status         : {}
statistics          : {}
status              : {}
tag                 : 400
trunks              : []
vlan_mode           : []

    Bridge "br0"
        Port "bond17"
            tag: 400
            Interface "eth2"
            Interface "eth3"
            Interface "eth5"
            Interface "eth4"
        Port "eth1"
            Interface "eth1"
        Port "br0"
            Interface "br0"
                type: internal

The current bond/show returns:

---- bond17 ----
bond_mode: balance-tcp
bond may use recirculation: yes, Recirc-ID : 44
bond-hash-basis: 0
updelay: 31000 ms
downdelay: 31000 ms
next rebalance: 9816 ms
lacp_status: negotiated
active slave mac: 52:54:00:e6:c4:99(eth4)

slave eth2: enabled
                may_enable: true

slave eth3: enabled
                may_enable: true

slave eth4: enabled
                active slave
                may_enable: true

slave eth5: enabled
                may_enable: true

All the eth interfaces are connected to VMs. On eth1 we have a VLAN 400 
interface with IP 199.166.40.3; on eth4 the address is 199.166.40.11. Here is 
the tcpdump on eth1 after a ping attempt.

16:18:40.755586 52:54:00:6e:a9:48 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 64: vlan 400, p 0, ethertype ARP, Request who-has 199.166.40.3 
tell 199.166.40.11, length 46
16:18:40.756877 52:54:00:8d:b0:eb > 52:54:00:6e:a9:48, ethertype 802.1Q 
(0x8100), length 64: vlan 400, p 0, ethertype ARP, Reply 199.166.40.3 is-at 
52:54:00:8d:b0:eb, length 46
16:18:41.754281 52:54:00:6e:a9:48 > ff:ff:ff:ff:ff:ff, ethertype 802.1Q 
(0x8100), length 64: vlan 400, p 0, ethertype ARP, Request who-has 199.166.40.3 
tell 199.166.40.11, length 46
16:18:41.755169 52:54:00:8d:b0:eb > 52:54:00:6e:a9:48, ethertype 802.1Q 
(0x8100), length 64: vlan 400, p 0, ethertype ARP, Reply 199.166.40.3 is-at 
52:54:00:8d:b0:eb, length 46

The tcpdump on eth4, however, shows no reply messages, and the ping failed. 
Basically, traffic cannot get back into the bond at this point.

16:18:30.388649 52:54:00:6e:a9:48 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 60: Request who-has 199.166.40.3 tell 199.166.40.11, length 46
16:18:31.387577 52:54:00:6e:a9:48 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 60: Request who-has 199.166.40.3 tell 199.166.40.11, length 46
16:18:32.385825 52:54:00:6e:a9:48 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 60: Request who-has 199.166.40.3 tell 199.166.40.11, length 46
16:18:33.403071 52:54:00:6e:a9:48 > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), 
length 60: Request who-has 199.166.40.3 tell 199.166.40.11, length 46

Could you have a look? Any help would be appreciated.

Thanks in advance,

Nghia

From: O'Reilly, Darragh [mailto:[email protected]]
Sent: 09 June 2017 13:44
To: Nguyen, Minh-Nghia <[email protected]>; [email protected]
Subject: RE: LACP fallback support

There is fallback to active-backup:

ovs-vsctl set port bond0 other_config:lacp-fallback-ab=true

See http://openvswitch.org/support/dist-docs/ovs-vswitchd.conf.db.5.html
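
A minimal sketch of setting and then verifying fallback, assuming the bond is named bond0:

```shell
# Enable fallback to active-backup when no LACP partner is detected
ovs-vsctl set port bond0 other_config:lacp-fallback-ab=true

# With no partner responding, lacp_status should read "configured"
# rather than "negotiated", and the bond behaves as active-backup
ovs-appctl bond/show bond0
```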

Darragh.

From: [email protected] [mailto:[email protected]] On Behalf Of Nguyen, Minh-Nghia
Sent: 09 June 2017 13:28
To: [email protected]<mailto:[email protected]>
Subject: [ovs-discuss] LACP fallback support

Hi,

We are trying to build a testbed for node provisioning. In the real setup, our 
baremetal nodes are connected to a physical Arista switch with LAGs and LACP 
fallback individual mode is set to support PXE booting.

We want to emulate those behaviours with openvswitch built inside a VM and a 
provisioning VM that has network interfaces connected to the OVS VM via veth 
pairs, and use OVS bonding for LACP. However, as far as we are aware, there is 
no LACP fallback mechanism in OVS. We would love to know whether there is any 
way to work around this, or whether it has ever been considered.
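
In case it is useful, here is roughly how we wire it up inside the OVS VM (interface and bridge names here are just examples):

```shell
# Create two veth pairs; one end of each goes to the provisioning VM,
# the other end stays on the OVS side to be bonded
ip link add veth0 type veth peer name veth0-ovs
ip link add veth1 type veth peer name veth1-ovs
ip link set veth0-ovs up
ip link set veth1-ovs up

# Bridge plus an active-mode LACP bond over the OVS-side ends
ovs-vsctl add-br br0
ovs-vsctl add-bond br0 bond0 veth0-ovs veth1-ovs lacp=active
```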

Many thanks.

Minh Nghia Nguyen

HANA Cloud Computing, Systems Engineering
SAP (UK) Limited | The Concourse | Queen's Road | Queen's Island | Belfast BT3 9DT
E: [email protected]<mailto:[email protected]>


_______________________________________________
discuss mailing list
[email protected]
https://mail.openvswitch.org/mailman/listinfo/ovs-discuss
