This bug was fixed in the package neutron - 2:12.1.1-0ubuntu4

---------------
neutron (2:12.1.1-0ubuntu4) bionic; urgency=medium
  * Fix interrupt of VLAN traffic on reboot of neutron-ovs-agent:
    - d/p/0001-ovs-agent-signal-to-plugin-if-tunnel-refresh-needed.patch
      (LP: #1853613)
    - d/p/0002-Do-not-block-connection-between-br-int-and-br-phys-o.patch
      (LP: #1869808)
    - d/p/0003-Ensure-that-stale-flows-are-cleaned-from-phys_bridge.patch
      (LP: #1864822)
    - d/p/0004-DVR-Reconfigure-re-created-physical-bridges-for-dvr-.patch
      (LP: #1864822)
    - d/p/0005-Ensure-drop-flows-on-br-int-at-agent-startup-for-DVR.patch
      (LP: #1887148)
    - d/p/0006-Don-t-check-if-any-bridges-were-recrected-when-OVS-w.patch
      (LP: #1864822)
    - d/p/0007-Not-remove-the-running-router-when-MQ-is-unreachable.patch
      (LP: #1871850)

 -- Edward Hope-Morley <[email protected]>  Mon, 22 Feb 2021 16:55:40 +0000

** Changed in: neutron (Ubuntu Bionic)
       Status: Fix Committed => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864822

Title:
  Openvswitch Agent - Connexion openvswitch DB Broken

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Incomplete
Status in neutron source package in Bionic:
  Fix Released

Bug description:
  (For SRU template, please see bug 1869808, as the SRU info there
  applies to this bug also)

  Hi all,

  We have deployed several OpenStack platforms in my company, using
  kolla-ansible. Here is the configuration that we applied:

  kolla_base_distro: "centos"
  kolla_install_type: "binary"
  openstack_version: "stein"

  Neutron architecture:
  - HA L3 enabled
  - DVR enabled
  - SNAT enabled
  - multiple vlan provider: True

  Note: our platforms are multi-region.

  Recently, we upgraded a master region from Rocky to Stein with the
  kolla-ansible upgrade procedure. Since the upgrade, the openvswitch
  agent sometimes loses its connection to ovsdb. We found this error in
  neutron-openvswitch-agent.log:

  "tcp:127.0.0.1:6640: send error: Broken pipe"
  And we found these errors in ovsdb-server.log:

  2020-02-24T23:13:22.644Z|00009|reconnect|ERR|tcp:127.0.0.1:50260: no response to inactivity probe after 5 seconds, disconnecting
  2020-02-25T04:10:55.893Z|00010|reconnect|ERR|tcp:127.0.0.1:58544: no response to inactivity probe after 5 seconds, disconnecting
  2020-02-25T07:21:12.301Z|00011|reconnect|ERR|tcp:127.0.0.1:34918: no response to inactivity probe after 5 seconds, disconnecting
  2020-02-25T09:21:45.533Z|00012|reconnect|ERR|tcp:127.0.0.1:37782: no response to inactivity probe after 5 seconds, disconnecting

  When we experience this issue, no traffic matching the "NORMAL"-type
  flows on br-ex gets out. Example of stuck flows:

  (neutron-openvswitch-agent)[root@cnp69s12p07 /]# ovs-ofctl dump-flows br-ex | grep NORMAL
  cookie=0x7adbd675f988912b, duration=72705.077s, table=0, n_packets=185, n_bytes=16024, idle_age=65534, hard_age=65534, priority=0 actions=NORMAL
  cookie=0x7adbd675f988912b, duration=72695.007s, table=2, n_packets=11835702, n_bytes=5166123797, idle_age=0, hard_age=65534, priority=4,in_port=5,dl_vlan=1 actions=mod_vlan_vid:12,NORMAL
  cookie=0x7adbd675f988912b, duration=72694.928s, table=2, n_packets=4133243, n_bytes=349654412, idle_age=0, hard_age=65534, priority=4,in_port=5,dl_vlan=9 actions=mod_vlan_vid:18,NORMAL

  Workaround to solve this issue:
  - stop the containers: openvswitch_db, openvswitch_vswitchd,
    neutron_openvswitch_agent, neutron_l3_agent
  - start the containers: openvswitch_db, openvswitch_vswitchd
  - start the containers: neutron_l3_agent, neutron_openvswitch_agent

  Note: we have kept the OVS connection timeout options at their
  defaults:
  - of_connect_timeout: 300
  - of_request_timeout: 300
  - of_inactivity_probe: 10

  Thank you in advance for your help.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864822/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
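[Editor's note] The reporter's workaround restarts the containers in a
specific order: agents and OVS down, then the OVS database and vswitchd
back up before the agents, so the agents reconnect to a healthy ovsdb.
A minimal sketch of that sequence, assuming the kolla-ansible containers
listed in the report are driven with plain `docker` (the `step` helper
and the APPLY guard are illustrative additions, not from the report; by
default the script only prints the commands):

```shell
#!/bin/sh
# Sketch of the workaround from the bug report. Set APPLY=1 to actually
# run the docker commands; otherwise each step is only printed.

step() {
    # Echo the command that would run; execute it only when APPLY=1.
    echo "+ docker $*"
    if [ "${APPLY:-0}" = "1" ]; then
        docker "$@"
    fi
}

# 1. Stop OVS and the Neutron agents (order as given in the report).
step stop openvswitch_db openvswitch_vswitchd \
          neutron_openvswitch_agent neutron_l3_agent

# 2. Bring the OVS database and vswitchd back first...
step start openvswitch_db openvswitch_vswitchd

# 3. ...then the agents, so they reconnect to a running ovsdb.
step start neutron_l3_agent neutron_openvswitch_agent
```

Restarting ovsdb before the agents matters here because the reported
failure mode is the agent's ovsdb connection breaking ("Broken pipe" /
inactivity probe timeouts); starting the agents last gives them a live
ovsdb endpoint to connect to.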

