[Yahoo-eng-team] [Bug 1816698] Re: DVR-HA: Removing a router from an agent, does not clear the namespaces on the agent
Reviewed:  https://review.openstack.org/638566
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=d9e0bab6acf6a608ed5bf32524c7d362f052a3ac
Submitter: Zuul
Branch:    master

commit d9e0bab6acf6a608ed5bf32524c7d362f052a3ac
Author: Swaminathan Vasudevan
Date:   Thu Feb 21 17:14:03 2019 -0800

    DVR-HA: Unbinding a HA router from agent does not clear HA interface

    Removing an active or a standby HA router from an agent that has a
    valid DVR serviceable port (such as DHCP) does not remove the HA
    interface associated with the router in the SNAT namespace. When we
    try to add the HA router back to the agent, more than one HA
    interface is added to the SNAT namespace, causing further problems,
    and we sometimes also see multiple active routers.

    This bug might have been introduced by patch [1].

    Fix the problem by adding the router namespaces without HA
    interfaces when HA is not enabled, and re-inserting the HA
    interfaces into the namespace when the HA router is bound to the
    agent.

    [1] https://review.openstack.org/#/c/522362/

    Closes-Bug: #1816698
    Change-Id: Ie625abcb73f8185bb2bee06dcd26a01d8af0b0d1

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1816698

Title:
  DVR-HA: Removing a router from an agent, does not clear the
  namespaces on the agent

Status in neutron:
  Fix Released

Bug description:
  Removing an active or a standby ha-router from an agent does not
  clear the router namespace and the SNAT namespaces. This sometimes
  leads to two active HA routers and two 'ha-' interfaces in the SNAT
  namespace for DVR routers.

  This can be reproduced very easily:

  1. Create an HA-DVR router (minimum two-node setup with 'dvr_snat'
     agent mode).
  2. Attach an interface to the router.
  3. Attach a gateway to the router.
  4. Check l3-agent-list-hosting-router for the router.
  5. Remove the router from one of the agents (l3-agent-router-remove).
  6. The expected result is for the router namespace and SNAT namespace
     to be removed, but they are not removed.
  7. At a minimum we should clear the HA interfaces for that agent so
     that the HA router does not get into active mode again.

  This bug might have been introduced by this patch:
  https://review.openstack.org/#/c/522362/7

  This bug is seen since Ocata/Pike and probably also in the master
  branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1816698/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
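For reference, the reproduction steps above can be sketched as a CLI session. All names and IDs here are hypothetical, and this assumes a cloud with at least two network nodes running the l3-agent in 'dvr_snat' mode:

```shell
# 1. Create an HA-DVR router (names are placeholders).
openstack router create --distributed --ha demo-router

# 2-3. Attach an interface and a gateway.
openstack router add subnet demo-router demo-subnet
openstack router set --external-gateway public demo-router

# 4. List the l3 agents hosting the router.
neutron l3-agent-list-hosting-router demo-router

# 5. Remove the router from one of the agents, using an agent ID
#    taken from the previous listing.
neutron l3-agent-router-remove <l3-agent-id> demo-router

# 6-7. On that agent's node the qrouter-/snat- namespaces and the
#      'ha-' interface should now be gone; with this bug they remain:
ip netns | grep -E 'qrouter-|snat-'
```

These commands only make sense against a live deployment; the final `ip netns` check is run on the node whose agent was unbound.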
[Yahoo-eng-team] [Bug 1819299] [NEW] Keystone Installation Tutorial for Red Hat Enterprise Linux and CentOS in keystone
Public bug reported:

This bug tracker is for errors with the documentation; use the
following as a template and remove or add fields as you see fit.
Convert [ ] into [x] to check boxes:

- [x] This doc is inaccurate in this way: "This guide documents the
  OpenStack Queens release" (it should say Rocky).
- [ ] This is a doc addition request.
- [ ] I have a fix to the document that I can paste below, including
  example input and output.

If you have a troubleshooting or support issue, use the following
resources:

- Ask OpenStack: http://ask.openstack.org
- The mailing list: http://lists.openstack.org
- IRC: 'openstack' channel on Freenode

---
Release: on 2019-01-07 15:31
SHA: 718f4a9c4c55f5766895eff94eda66d420451235
Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-rdo.rst
URL: https://docs.openstack.org/keystone/rocky/install/index-rdo.html

** Affects: keystone
   Importance: Undecided
       Status: New

** Tags: doc

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1819299

Title:
  Keystone Installation Tutorial for Red Hat Enterprise Linux and
  CentOS in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:
  This bug tracker is for errors with the documentation; use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: "This guide documents the
    OpenStack Queens release" (it should say Rocky).
  - [ ] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below, including
    example input and output.

  If you have a troubleshooting or support issue, use the following
  resources:

  - Ask OpenStack: http://ask.openstack.org
  - The mailing list: http://lists.openstack.org
  - IRC: 'openstack' channel on Freenode

  ---
  Release: on 2019-01-07 15:31
  SHA: 718f4a9c4c55f5766895eff94eda66d420451235
  Source: https://git.openstack.org/cgit/openstack/keystone/tree/doc/source/install/index-rdo.rst
  URL: https://docs.openstack.org/keystone/rocky/install/index-rdo.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1819299/+subscriptions
[Yahoo-eng-team] [Bug 1818613] Re: Functional QoS related tests fail often
Reviewed:  https://review.openstack.org/641117
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=92f1281b696c79133609d3c04b467ac7ea9f4337
Submitter: Zuul
Branch:    master

commit 92f1281b696c79133609d3c04b467ac7ea9f4337
Author: Rodolfo Alonso Hernandez
Date:   Tue Mar 5 18:37:44 2019 +

    Add a more robust method to check OVSDB values in BaseOVSTestCase

    Sometimes, when the OVSDB is too loaded (which can happen during
    the functional tests), there is a delay between the OVSDB post
    transaction end and when the register (new or updated) can be
    read. Although this is something that should not happen
    (considering the OVSDB is transactional), tests should deal with
    this inconvenience and provide a robust method to retrieve a value
    and at the same time check the value. This new method provides a
    retrieval mechanism that reads the value again in case of
    discordance.

    In order to solve the gate problem ASAP, another bug is fixed in
    this patch: the QoS removal is skipped when the OVS agent is
    initialized during functional tests.

    When executing functional tests, several OVS QoS policies specific
    to minimum bandwidth rules are created [1]. Because several
    threads can create more than one minimum bandwidth QoS policy
    during the functional test execution (something that cannot happen
    in a production environment), the OVS QoS driver must skip the
    execution of [2] to avoid removing other QoS policies created in
    parallel by other tests.

    This patch marks "test_min_bw_qos_policy_rule_lifecycle" and
    "test_bw_limit_qos_port_removed" as unstable. Those tests will be
    investigated once the CI gates are stable.

    [1] Those QoS policies are created only to hold minimum bandwidth
        rules; they are marked with
        external_ids: {'_type'='minimum_bandwidth'}
    [2] https://github.com/openstack/neutron/blob/d6fba30781c5f4e63beeda04d065226660fc92b6/neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py#L43

    Closes-Bug: #1818613
    Closes-Bug: #1818859
    Related-Bug: #1819125
    Change-Id: Ia725cc1b36bc3630d2891f86f76b13c16f6cc37c

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818613

Title:
  Functional QoS related tests fail often

Status in neutron:
  Fix Released

Bug description:
  Various QoS related tests have been failing often recently. In all
  cases the reason is the same:

    ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find Port
    with name=cc566ab0-4201-44b5-ae89-d342284ffdd6

  during "_minimum_bandwidth_initialize". Stacktrace:

  ft1.1: neutron.tests.functional.agent.l2.extensions.test_ovs_agent_qos_extension.TestOVSAgentQosExtension.test_policy_rule_delete(ingress)_StringException: Traceback (most recent call last):
    File "neutron/tests/base.py", line 174, in func
      return f(self, *args, **kwargs)
    File "neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py", line 354, in test_policy_rule_delete
      port_dict = self._create_port_with_qos()
    File "neutron/tests/functional/agent/l2/extensions/test_ovs_agent_qos_extension.py", line 172, in _create_port_with_qos
      self.setup_agent_and_ports([port_dict])
    File "neutron/tests/functional/agent/l2/base.py", line 375, in setup_agent_and_ports
      ancillary_bridge=ancillary_bridge)
    File "neutron/tests/functional/agent/l2/base.py", line 116, in create_agent
      ext_mgr, self.config)
    File "neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py", line 256, in __init__
      self.connection, constants.EXTENSION_DRIVER_TYPE, agent_api)
    File "neutron/agent/agent_extensions_manager.py", line 54, in initialize
      extension.obj.initialize(connection, driver_type)
    File "neutron/agent/l2/extensions/qos.py", line 207, in initialize
      self.qos_driver.initialize()
    File "neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py", line 57, in initialize
      self._minimum_bandwidth_initialize()
    File "neutron/plugins/ml2/drivers/openvswitch/agent/extension_drivers/qos_driver.py", line 52, in _minimum_bandwidth_initialize
      self.br_int.clear_minimum_bandwidth_qos()
    File "neutron/agent/common/ovs_lib.py", line 1006, in clear_minimum_bandwidth_qos
      self.ovsdb.db_destroy('QoS', qos_id).execute(check_error=True)
    File "/opt/stack/new/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/command.py", line 40, in execute
      txn.add(self)
    File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
      self.gen.next()
    File "/opt/stack/new/neutron/.tox/dsvm-functional-python27/local/lib/python2.7/site-packages/ovsdbapp/api.py", line 112, in transaction
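The "robust method to retrieve a value and at the same time check the value" that the commit message describes can be sketched generically: instead of asserting on a single read that may race with the OVSDB transaction becoming visible, re-read until the value matches or a retry budget runs out. This is an illustrative helper, not the actual neutron test code; the function name and signature are invented for the sketch:

```python
import time


def assert_ovsdb_value(read_fn, expected, retries=5, interval=0.1):
    """Poll read_fn() until it returns `expected` or retries run out.

    Mimics the retry-read pattern from the commit message: a register
    written by a committed OVSDB transaction may not be visible to an
    immediate read when the DB is loaded, so read again on discordance
    before declaring the test failed.
    """
    value = read_fn()
    for _ in range(retries):
        if value == expected:
            return value
        time.sleep(interval)
        value = read_fn()
    raise AssertionError('expected %r, last read was %r' % (expected, value))


# Simulate a read that only becomes consistent on the third attempt.
_reads = iter([None, None, 'qos-min-bw'])
print(assert_ovsdb_value(lambda: next(_reads), 'qos-min-bw', interval=0))
# prints: qos-min-bw
```

The key design point is that the first read happens before the loop, so a value that is already consistent costs no sleep at all; only discordant reads pay the retry interval.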
[Yahoo-eng-team] [Bug 1818859] Re: neutron functional job intermittent failures with ovsdbapp.backend.ovs_idl.idlutils.RowNotFound
*** This bug is a duplicate of bug 1818613 ***
    https://bugs.launchpad.net/bugs/1818613

Reviewed:  https://review.openstack.org/641117
Committed: https://git.openstack.org/cgit/openstack/neutron/commit/?id=92f1281b696c79133609d3c04b467ac7ea9f4337
Submitter: Zuul
Branch:    master

commit 92f1281b696c79133609d3c04b467ac7ea9f4337
Author: Rodolfo Alonso Hernandez
Date:   Tue Mar 5 18:37:44 2019 +

    Add a more robust method to check OVSDB values in BaseOVSTestCase

    (Commit message identical to the one quoted above for bug 1818613.)

    Closes-Bug: #1818613
    Closes-Bug: #1818859
    Related-Bug: #1819125
    Change-Id: Ia725cc1b36bc3630d2891f86f76b13c16f6cc37c

** Changed in: neutron
   Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818859

Title:
  neutron functional job intermittent failures with
  ovsdbapp.backend.ovs_idl.idlutils.RowNotFound

Status in neutron:
  Fix Released

Bug description:
  It appears the neutron functional job started failing with errors
  related to:

    ovsdbapp.backend.ovs_idl.idlutils.RowNotFound

  For example [1][2]. Based on logstash [3] it looks like this issue
  may have cropped up around March 5th.

  [1] http://logs.openstack.org/34/639034/7/check/neutron-functional/d32644a/job-output.txt.gz
  [2] http://logs.openstack.org/04/637004/2/check/neutron-functional-python27/5dd04c3/job-output.txt.gz#_2019-03-06_13_27_21_193728
  [3] http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22ovsdbapp.backend.ovs_idl.idlutils.RowNotFound%3A%20Cannot%20find%20Port%20with%20name%3D%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1818859/+subscriptions
[Yahoo-eng-team] [Bug 1819260] [NEW] Adding floating IP fails with SQLA 1.3.0
Public bug reported:

Hi,

Trying to evaluate if we can upgrade Buster to SQLAlchemy 1.3.0, doing
this in my PoC:

  openstack server add floating ip demo-server 192.168.105.101

leads to the stack dump below. Obviously, there's something wrong that
needs fixing; best would be before Stein is out.

2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource [req-2ca7dd4c-515f-4958-964c-8506811c0b5a a498c39ddde54be4aafa7b3ded5563e6 9e0e0a4c736a4687ade8c5e765353bd7 - default default] update failed: No details.: sqlalchemy.exc.InvalidRequestError: Can't determine which FROM clause to join from, there are multiple FROMS which can join to this entity. Try adding an explicit ON clause to help resolve the ambiguity.
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource Traceback (most recent call last):
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron/api/v2/resource.py", line 98, in resource
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     result = method(request=request, **args)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron/api/v2/base.py", line 626, in update
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     return self._update(request, id, body, **kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 140, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     setattr(e, '_RETRY_EXCEEDED', True)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     self.force_reraise()
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     raise value
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 136, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 154, in wrapper
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     ectxt.value = e.inner_exc
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     self.force_reraise()
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     raise value
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 142, in wrapper
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     return f(*args, **kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 183, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     LOG.debug("Retry wrapper got retriable exception: %s", e)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     self.force_reraise()
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     raise value
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "/usr/lib/python3/dist-packages/neutron_lib/db/api.py", line 179, in wrapped
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource     return f(*dup_args, **dup_kwargs)
2019-03-09 10:31:26.785 624233 ERROR neutron.api.v2.resource   File "
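The exception in the traceback is SQLAlchemy refusing to guess a join when more than one entity already in the FROM list could join to the target. A minimal sketch reproducing the error and the explicit-ON-clause fix that the message suggests (hypothetical models, not neutron's actual schema; assumes SQLAlchemy is installed — the exact error wording varies between 1.3 and later releases):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.exc import InvalidRequestError
from sqlalchemy.orm import sessionmaker

try:  # SQLAlchemy >= 1.4
    from sqlalchemy.orm import declarative_base
except ImportError:  # SQLAlchemy 1.3
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Router(Base):
    __tablename__ = 'routers'
    id = Column(Integer, primary_key=True)


class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    router_id = Column(Integer, ForeignKey('routers.id'))


class FloatingIP(Base):
    __tablename__ = 'floatingips'
    id = Column(Integer, primary_key=True)
    router_id = Column(Integer, ForeignKey('routers.id'))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Both Port and FloatingIP carry a foreign key to routers, so with two
# FROM entities SQLAlchemy cannot infer which one to join from:
try:
    session.query(Port, FloatingIP).join(Router).all()
except InvalidRequestError as exc:
    print(exc)  # e.g. "Can't determine which FROM clause to join from, ..."

# An explicit ON clause establishes the left side and resolves it:
rows = session.query(Port, FloatingIP).join(
    Router, Port.router_id == Router.id).all()
```

`select_from(Port)` would work equally well here; the point is that any construct making the left side explicit removes the ambiguity that SQLAlchemy 1.3 stopped silently resolving.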