[Yahoo-eng-team] [Bug 2060029] [NEW] [devstack-tobiko-neutron] stable/2023.1 periodic failing stateless secgroup tests

2024-04-02 Thread yatin
Public bug reported:

One of the tests fails like:-
testtools.testresult.real._StringException: Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 227, in test_security_group_stateful_to_stateless_switch
self._check_sg_rules_in_ovn_nb_db(sg, neutron.STATELESS_OVN_ACTION)
  File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 139, in _check_sg_rules_in_ovn_nb_db
self._check_sg_rule_in_ovn_nb_db(sg_rule_id, expected_action)
  File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 127, in _check_sg_rule_in_ovn_nb_db
self._assert_acl_action(acl_rule, expected_action)
  File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 131, in _assert_acl_action
self.assertEqual(
  File 
"/home/zuul/src/opendev.org/x/tobiko/.tox/py3/lib/python3.8/site-packages/testtools/testcase.py",
 line 393, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/zuul/src/opendev.org/x/tobiko/.tox/py3/lib/python3.8/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'allow-stateless' != 'allow-related'

Example failure:-
https://606e7073949a555d6ce7-a5a94d16cd1e63fdf610099df3afaf88.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/2023.1/devstack-
tobiko-
neutron/e20c66f/tobiko_results_02_create_neutron_resources_neutron.html?sort=result

Builds:- https://zuul.openstack.org/builds?job_name=devstack-tobiko-
neutron=openstack%2Fneutron=stable%2F2023.1=0

Issue triggered with https://review.opendev.org/c/x/devstack-plugin-tobiko/+/912588,
which enabled those tests. The 2023.1 jobs run on ubuntu-focal.
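
A hedged way to inspect what the test is checking, directly against the OVN NB
DB (this assumes the standard neutron:security_group_rule_id external_ids key
that the OVN driver sets on ACLs; the rule UUID is a placeholder):

  SG_RULE_ID=<security-group-rule-uuid>
  ovn-nbctl --bare --columns=action find ACL \
      external_ids:\"neutron:security_group_rule_id\"=$SG_RULE_ID
  # a stateless security group should report 'allow-stateless'; the failing
  # runs report 'allow-related' instead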

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060029

Title:
  [devstack-tobiko-neutron] stable/2023.1 periodic failing stateless
  secgroup tests

Status in neutron:
  New

Bug description:
  One of the tests fails like:-
  testtools.testresult.real._StringException: Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 227, in test_security_group_stateful_to_stateless_switch
  self._check_sg_rules_in_ovn_nb_db(sg, neutron.STATELESS_OVN_ACTION)
File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 139, in _check_sg_rules_in_ovn_nb_db
  self._check_sg_rule_in_ovn_nb_db(sg_rule_id, expected_action)
File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 127, in _check_sg_rule_in_ovn_nb_db
  self._assert_acl_action(acl_rule, expected_action)
File 
"/home/zuul/src/opendev.org/x/tobiko/tobiko/tests/scenario/neutron/test_security_groups.py",
 line 131, in _assert_acl_action
  self.assertEqual(
File 
"/home/zuul/src/opendev.org/x/tobiko/.tox/py3/lib/python3.8/site-packages/testtools/testcase.py",
 line 393, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/zuul/src/opendev.org/x/tobiko/.tox/py3/lib/python3.8/site-packages/testtools/testcase.py",
 line 480, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 'allow-stateless' != 'allow-related'

  Example failure:-
  
https://606e7073949a555d6ce7-a5a94d16cd1e63fdf610099df3afaf88.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/2023.1/devstack-
  tobiko-
  
neutron/e20c66f/tobiko_results_02_create_neutron_resources_neutron.html?sort=result

  Builds:- https://zuul.openstack.org/builds?job_name=devstack-tobiko-
  neutron=openstack%2Fneutron=stable%2F2023.1=0

  Issue triggered with https://review.opendev.org/c/x/devstack-plugin-tobiko/+/912588,
  which enabled those tests. The 2023.1 jobs run on ubuntu-focal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060029/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2058378] [NEW] [devstack-tobiko-neutron] stable/2023.2 periodic broken with ERROR: Cannot install tox==4.13.0, tox==4.14.0 and tox==4.14.1

2024-03-19 Thread yatin
Public bug reported:

Fails like:-
2024-03-17 03:08:11.011222 | controller | ++ 
/opt/stack/devstack-plugin-tobiko/devstack/plugin.sh:install_tobiko_deps:14 :   
pip_install 'tox>=4.13'
2024-03-17 03:08:11.036778 | controller | Using python 3.10 to install tox>=4.13
2024-03-17 03:08:11.040275 | controller | ++ inc/python:pip_install:216 
  :   env http_proxy= https_proxy= no_proxy= PIP_FIND_LINKS= 
/opt/stack/data/venv/bin/pip install -c 
/opt/stack/requirements/upper-constraints.txt 'tox>=4.13'
2024-03-17 03:08:11.951193 | controller | Looking in indexes: 
https://mirror.ca-ymq-1.vexxhost.opendev.org/pypi/simple, 
https://mirror.ca-ymq-1.vexxhost.opendev.org/wheel/ubuntu-22.04-x86_64
2024-03-17 03:08:12.099362 | controller | Collecting tox>=4.13
2024-03-17 03:08:12.100739 | controller |   Using cached 
https://mirror.ca-ymq-1.vexxhost.opendev.org/pypifiles/packages/ed/4c/4c7daf604fe4b136a1e25d41eb7cb2d644d1d8d4d6694eb6ffa7f7dd60cd/tox-4.14.1-py3-none-any.whl.metadata
 (5.0 kB)
2024-03-17 03:08:12.162454 | controller | INFO: pip is looking at multiple 
versions of tox to determine which version is compatible with other 
requirements. This could take a while.
2024-03-17 03:08:12.175333 | controller |   Using cached 
https://mirror.ca-ymq-1.vexxhost.opendev.org/pypifiles/packages/dd/48/57d7f9338669686e5f60794b2291684eb89e49bbf01f695bad651a14e4b5/tox-4.14.0-py3-none-any.whl.metadata
 (5.0 kB)
2024-03-17 03:08:12.211821 | controller |   Using cached 
https://mirror.ca-ymq-1.vexxhost.opendev.org/pypifiles/packages/14/1a/19c783781c4638bc076f21c76c1c55d2a7bef7381b7c47911e1e65c6e340/tox-4.13.0-py3-none-any.whl.metadata
 (5.0 kB)
2024-03-17 03:08:12.237786 | controller | ERROR: Cannot install tox==4.13.0, 
tox==4.14.0 and tox==4.14.1 because these package versions have conflicting 
dependencies.
2024-03-17 03:08:12.238263 | controller |
2024-03-17 03:08:12.238298 | controller | The conflict is caused by:
2024-03-17 03:08:12.238312 | controller | tox 4.14.1 depends on 
cachetools>=5.3.2
2024-03-17 03:08:12.238324 | controller | tox 4.14.0 depends on 
cachetools>=5.3.2
2024-03-17 03:08:12.238343 | controller | tox 4.13.0 depends on 
cachetools>=5.3.2
2024-03-17 03:08:12.238355 | controller | The user requested (constraint) 
cachetools===5.3.0
2024-03-17 03:08:12.238366 | controller |
2024-03-17 03:08:12.238377 | controller | To fix this you could try to:
2024-03-17 03:08:12.238391 | controller | 1. loosen the range of package 
versions you've specified
2024-03-17 03:08:12.238610 | controller | 2. remove package versions to allow 
pip attempt to solve the dependency conflict
2024-03-17 03:08:12.238646 | controller |
2024-03-17 03:08:12.238666 | controller | ERROR: ResolutionImpossible: for help 
visit 
https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
2024-03-17 03:08:12.313244 | controller | + inc/python:pip_install:1
 :   exit_trap


Example failures:- 
https://a854d101f2ba9a65b7e6-41ecee2e9c4e7ff71d14a9d7f8c78625.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/2023.2/devstack-tobiko-neutron/46ee578/job-output.txt

Builds:- https://zuul.openstack.org/builds?job_name=devstack-tobiko-
neutron=openstack%2Fneutron=stable%2F2023.2=0

The job works fine on other branches, so we need to check what's missing for
stable/2023.2.

Failing since:- https://review.opendev.org/c/x/devstack-plugin-
tobiko/+/911092
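
The conflict itself is straightforward: the stable/2023.2 upper-constraints
file pins cachetools===5.3.0 while every tox release >=4.13 requires
cachetools>=5.3.2. A hedged reproduction/workaround sketch (paths as in the
log; installing without the constraints file is one possible workaround, not
necessarily the fix that will be applied):

  # reproduces the ResolutionImpossible error on stable/2023.2
  pip install -c /opt/stack/requirements/upper-constraints.txt 'tox>=4.13'
  # possible workaround: install tox outside the constraints file
  pip install 'tox>=4.13'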

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058378

Title:
  [devstack-tobiko-neutron] stable/2023.2 periodic broken with ERROR:
  Cannot install tox==4.13.0, tox==4.14.0 and tox==4.14.1

Status in neutron:
  New

Bug description:
  Fails like:-
  2024-03-17 03:08:11.011222 | controller | ++ 
/opt/stack/devstack-plugin-tobiko/devstack/plugin.sh:install_tobiko_deps:14 :   
pip_install 'tox>=4.13'
  2024-03-17 03:08:11.036778 | controller | Using python 3.10 to install 
tox>=4.13
  2024-03-17 03:08:11.040275 | controller | ++ inc/python:pip_install:216   
:   env http_proxy= https_proxy= no_proxy= PIP_FIND_LINKS= 
/opt/stack/data/venv/bin/pip install -c 
/opt/stack/requirements/upper-constraints.txt 'tox>=4.13'
  2024-03-17 03:08:11.951193 | controller | Looking in indexes: 
https://mirror.ca-ymq-1.vexxhost.opendev.org/pypi/simple, 
https://mirror.ca-ymq-1.vexxhost.opendev.org/wheel/ubuntu-22.04-x86_64
  2024-03-17 03:08:12.099362 | controller | Collecting tox>=4.13
  2024-03-17 03:08:12.100739 | controller |   Using cached 
https://mirror.ca-ymq-1.vexxhost.opendev.org/pypifiles/packages/ed/4c/4c7daf604fe4b136a1e25d41eb7cb2d644d1d8d4d6694eb6ffa7f7dd60cd/tox-4.14.1-py3-none-any.whl.metadata
 (5.0 kB)
  2024-03-17 03:08:12.162454 | controller | INFO: pip is looking at multiple 
versions of tox to determine which version is compatible 

[Yahoo-eng-team] [Bug 2057492] [NEW] [devstack-tobiko-neutron] periodic broken running on ubuntu focal

2024-03-12 Thread yatin
Public bug reported:

The job is broken in stable/zed and stable/2023.1, where it runs on
ubuntu focal, since patch https://review.opendev.org/c/x/tobiko/+/910589

Fails like:-
2024-03-12 02:33:20.806847 | controller | interpreter = 
self.creator.interpreter
2024-03-12 02:33:20.806858 | controller |   File 
"/usr/local/lib/python3.8/dist-packages/tox/tox_env/python/virtual_env/api.py", 
line 127, in creator
2024-03-12 02:33:20.806868 | controller | return self.session.creator
2024-03-12 02:33:20.806880 | controller |   File 
"/usr/local/lib/python3.8/dist-packages/tox/tox_env/python/virtual_env/api.py", 
line 108, in session
2024-03-12 02:33:20.806890 | controller | self._virtualenv_session = 
session_via_cli(env_dir, options=None, setup_logging=False, env=env)
2024-03-12 02:33:20.806900 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/__init__.py", 
line 49, in session_via_cli
2024-03-12 02:33:20.806912 | controller | parser, elements = 
build_parser(args, options, setup_logging, env)
2024-03-12 02:33:20.806922 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/__init__.py", 
line 82, in build_parser
2024-03-12 02:33:20.806932 | controller | CreatorSelector(interpreter, 
parser),
2024-03-12 02:33:20.806942 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/plugin/creators.py",
 line 24, in __init__
2024-03-12 02:33:20.806953 | controller | creators, self.key_to_meta, 
self.describe, self.builtin_key = self.for_interpreter(interpreter)
2024-03-12 02:33:20.806963 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/plugin/creators.py",
 line 31, in for_interpreter
2024-03-12 02:33:20.806973 | controller | for key, creator_class in 
cls.options("virtualenv.create").items():
2024-03-12 02:33:20.806991 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/plugin/base.py", 
line 45, in options
2024-03-12 02:33:20.843299 | controller | cls._OPTIONS = 
cls.entry_points_for(key)
2024-03-12 02:33:20.843338 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/plugin/base.py", 
line 24, in entry_points_for
2024-03-12 02:33:20.843350 | controller | return OrderedDict((e.name, 
e.load()) for e in cls.entry_points().get(key, {}))
2024-03-12 02:33:20.843361 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/plugin/base.py", 
line 24, in 
2024-03-12 02:33:20.843495 | controller | return OrderedDict((e.name, 
e.load()) for e in cls.entry_points().get(key, {}))
2024-03-12 02:33:20.843511 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/setuptools/_vendor/importlib_metadata/__init__.py",
 line 210, in load
2024-03-12 02:33:20.843522 | controller | return functools.reduce(getattr, 
attrs, module)
2024-03-12 02:33:20.843532 | controller | AttributeError: module 
'virtualenv.create.via_global_ref.builtin.cpython.mac_os' has no attribute 
'CPython2macOsArmFramework'


Example failure:- 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_2c4/periodic/opendev.org/openstack/neutron/stable/2023.1/devstack-tobiko-neutron/2c413ae/job-output.txt

Builds:- https://zuul.openstack.org/builds?job_name=devstack-tobiko-
neutron=openstack%2Fneutron=stable%2Fzed=stable%2F2023.1=0
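
The traceback shows two different virtualenv installs being mixed on the focal
nodes (tox under /usr/local dist-packages, virtualenv under the zuul user's
~/.local). A hedged way to confirm the mismatch (the suggested remedy is an
assumption, not verified against this job):

  python3.8 -m pip list --user | grep -i virtualenv
  python3.8 -m pip list | grep -iE 'virtualenv|setuptools'
  # if the user-site virtualenv predates the entry points tox expects,
  # upgrading or removing the user-site copy may avoid the missing
  # CPython2macOsArmFramework attribute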

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2057492

Title:
  [devstack-tobiko-neutron] periodic broken running on ubuntu focal

Status in neutron:
  New

Bug description:
  The job is broken in stable/zed and stable/2023.1, where it runs on
  ubuntu focal, since patch
  https://review.opendev.org/c/x/tobiko/+/910589

  Fails like:-
  2024-03-12 02:33:20.806847 | controller | interpreter = 
self.creator.interpreter
  2024-03-12 02:33:20.806858 | controller |   File 
"/usr/local/lib/python3.8/dist-packages/tox/tox_env/python/virtual_env/api.py", 
line 127, in creator
  2024-03-12 02:33:20.806868 | controller | return self.session.creator
  2024-03-12 02:33:20.806880 | controller |   File 
"/usr/local/lib/python3.8/dist-packages/tox/tox_env/python/virtual_env/api.py", 
line 108, in session
  2024-03-12 02:33:20.806890 | controller | self._virtualenv_session = 
session_via_cli(env_dir, options=None, setup_logging=False, env=env)
  2024-03-12 02:33:20.806900 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/__init__.py", 
line 49, in session_via_cli
  2024-03-12 02:33:20.806912 | controller | parser, elements = 
build_parser(args, options, setup_logging, env)
  2024-03-12 02:33:20.806922 | controller |   File 
"/home/zuul/.local/lib/python3.8/site-packages/virtualenv/run/__init__.py", 
line 82, in 

[Yahoo-eng-team] [Bug 2056276] [NEW] unmaintained/yoga periodic jobs broken

2024-03-05 Thread yatin
Public bug reported:

A couple of jobs are failing in the unmaintained/yoga periodic pipeline.

https://zuul.openstack.org/buildsets?project=openstack%2Fneutron=unmaintained%2Fyoga=stable%2Fyoga=periodic=0

Before switching to unmaintained/yoga the jobs were green:-
https://zuul.openstack.org/buildset/eab6ceae73d7437186cb7432e7ad8897

Currently the following jobs are broken:-
- devstack-tobiko-neutron
- neutron-ovn-tempest-slow
- neutron-ovs-tempest-slow

All fails with:-
2024-03-04 02:30:28.791273 | controller | + ./stack.sh:main:230 
 :   
SUPPORTED_DISTROS='bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03'
2024-03-04 02:30:28.793551 | controller | + ./stack.sh:main:232 
 :   [[ ! jammy =~ 
bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03
 ]]
2024-03-04 02:30:28.795963 | controller | + ./stack.sh:main:233 
 :   echo 'WARNING: this script has not been tested on jammy'


builds:- 
https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron_name=neutron-ovn-tempest-slow_name=neutron-ovs-tempest-slow=openstack%2Fneutron=%09unmaintained%2Fyoga=0

This bug is to track the fixes and also to discuss what to do with similar
issues going forward in unmaintained branches.
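
For reference, the check that fails is devstack's distro guard in stack.sh,
roughly as sketched below; the usual override when a distro is intentionally
run unsupported is FORCE=yes, which stack.sh honours:

  SUPPORTED_DISTROS='bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03'
  if [[ ! "jammy" =~ $SUPPORTED_DISTROS ]]; then
      echo "WARNING: this script has not been tested on jammy"
      # stack.sh aborts here unless FORCE=yes is set in the environment/local.conf
  fi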

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2056276

Title:
  unmaintained/yoga periodic jobs broken

Status in neutron:
  New

Bug description:
  A couple of jobs are failing in the unmaintained/yoga periodic pipeline.

  
https://zuul.openstack.org/buildsets?project=openstack%2Fneutron=unmaintained%2Fyoga=stable%2Fyoga=periodic=0

  Before switching to unmaintained/yoga the jobs were green:-
  https://zuul.openstack.org/buildset/eab6ceae73d7437186cb7432e7ad8897

  Currently the following jobs are broken:-
  - devstack-tobiko-neutron
  - neutron-ovn-tempest-slow
  - neutron-ovs-tempest-slow

  All fails with:-
  2024-03-04 02:30:28.791273 | controller | + ./stack.sh:main:230   
   :   
SUPPORTED_DISTROS='bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03'
  2024-03-04 02:30:28.793551 | controller | + ./stack.sh:main:232   
   :   [[ ! jammy =~ 
bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03
 ]]
  2024-03-04 02:30:28.795963 | controller | + ./stack.sh:main:233   
   :   echo 'WARNING: this script has not been tested on jammy'

  
  builds:- 
https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron_name=neutron-ovn-tempest-slow_name=neutron-ovs-tempest-slow=openstack%2Fneutron=%09unmaintained%2Fyoga=0

  This bug is to track the fixes and also to discuss what to do with similar
  issues going forward in unmaintained branches.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2056276/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052509] [NEW] [CI][ovs/ovn sqlalchemy master] jobs broken with Exception: Not enough arguments given

2024-02-06 Thread yatin
Public bug reported:

Fails like:-
2024-02-06 02:28:34.618579 | controller | ++ 
inc/python:_setup_package_with_constraints_edit:400 :   cd /opt/stack/sqlalchemy
2024-02-06 02:28:34.621804 | controller | ++ 
inc/python:_setup_package_with_constraints_edit:400 :   pwd
2024-02-06 02:28:34.624859 | controller | + 
inc/python:_setup_package_with_constraints_edit:400 :   
project_dir=/opt/stack/sqlalchemy
2024-02-06 02:28:34.627646 | controller | + 
inc/python:_setup_package_with_constraints_edit:402 :   '[' -n 
/opt/stack/requirements ']'
2024-02-06 02:28:34.630146 | controller | + 
inc/python:_setup_package_with_constraints_edit:406 :   local name
2024-02-06 02:28:34.633640 | controller | ++ 
inc/python:_setup_package_with_constraints_edit:407 :   awk '/^name.*=/ {print 
$3}' /opt/stack/sqlalchemy/setup.cfg
2024-02-06 02:28:34.639338 | controller | + 
inc/python:_setup_package_with_constraints_edit:407 :   name=
2024-02-06 02:28:34.641840 | controller | + 
inc/python:_setup_package_with_constraints_edit:409 :   
/opt/stack/requirements/.venv/bin/edit-constraints 
/opt/stack/requirements/upper-constraints.txt --
2024-02-06 02:28:34.784286 | controller | Traceback (most recent call last):
2024-02-06 02:28:34.784331 | controller |   File 
"/opt/stack/requirements/.venv/bin/edit-constraints", line 10, in 
2024-02-06 02:28:34.784449 | controller | sys.exit(main())
2024-02-06 02:28:34.784484 | controller |   File 
"/opt/stack/requirements/.venv/lib/python3.10/site-packages/openstack_requirements/cmds/edit_constraint.py",
 line 66, in main
2024-02-06 02:28:34.784629 | controller | _validate_options(options, args)
2024-02-06 02:28:34.784661 | controller |   File 
"/opt/stack/requirements/.venv/lib/python3.10/site-packages/openstack_requirements/cmds/edit_constraint.py",
 line 47, in _validate_options
2024-02-06 02:28:34.784783 | controller | raise Exception("Not enough 
arguments given")
2024-02-06 02:28:34.784821 | controller | Exception: Not enough arguments given
2024-02-06 02:28:34.805546 | controller | + 
inc/python:_setup_package_with_constraints_edit:1 :   exit_trap
2024-02-06 02:28:34.808305 | controller | + ./stack.sh:exit_trap:549
 :   local r=1

Example log:-
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_22d/periodic/opendev.org/openstack/neutron/master/neutron-
ovs-tempest-with-sqlalchemy-master/22dfd7f/job-output.txt

Builds:- https://zuul.openstack.org/builds?job_name=neutron-ovs-tempest-
with-sqlalchemy-master_name=neutron-ovn-tempest-with-sqlalchemy-
master


Broken since 
https://github.com/sqlalchemy/sqlalchemy/commit/a8dbf8763a8fa2ca53cc01033f06681a421bf60b
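
The empty "name=" in the log explains the exception: devstack extracts the
package name from setup.cfg with

  awk '/^name.*=/ {print $3}' /opt/stack/sqlalchemy/setup.cfg

which only yields a value when the file contains a "name = <value>" line with
that exact spacing. After the linked sqlalchemy commit this no longer matches
(presumably the metadata layout changed), so edit-constraints is invoked with
no package argument and raises "Not enough arguments given".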

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052509

Title:
  [CI][ovs/ovn sqlalchemy master] jobs broken with Exception: Not enough
  arguments given

Status in neutron:
  New

Bug description:
  Fails like:-
  2024-02-06 02:28:34.618579 | controller | ++ 
inc/python:_setup_package_with_constraints_edit:400 :   cd /opt/stack/sqlalchemy
  2024-02-06 02:28:34.621804 | controller | ++ 
inc/python:_setup_package_with_constraints_edit:400 :   pwd
  2024-02-06 02:28:34.624859 | controller | + 
inc/python:_setup_package_with_constraints_edit:400 :   
project_dir=/opt/stack/sqlalchemy
  2024-02-06 02:28:34.627646 | controller | + 
inc/python:_setup_package_with_constraints_edit:402 :   '[' -n 
/opt/stack/requirements ']'
  2024-02-06 02:28:34.630146 | controller | + 
inc/python:_setup_package_with_constraints_edit:406 :   local name
  2024-02-06 02:28:34.633640 | controller | ++ 
inc/python:_setup_package_with_constraints_edit:407 :   awk '/^name.*=/ {print 
$3}' /opt/stack/sqlalchemy/setup.cfg
  2024-02-06 02:28:34.639338 | controller | + 
inc/python:_setup_package_with_constraints_edit:407 :   name=
  2024-02-06 02:28:34.641840 | controller | + 
inc/python:_setup_package_with_constraints_edit:409 :   
/opt/stack/requirements/.venv/bin/edit-constraints 
/opt/stack/requirements/upper-constraints.txt --
  2024-02-06 02:28:34.784286 | controller | Traceback (most recent call last):
  2024-02-06 02:28:34.784331 | controller |   File 
"/opt/stack/requirements/.venv/bin/edit-constraints", line 10, in 
  2024-02-06 02:28:34.784449 | controller | sys.exit(main())
  2024-02-06 02:28:34.784484 | controller |   File 
"/opt/stack/requirements/.venv/lib/python3.10/site-packages/openstack_requirements/cmds/edit_constraint.py",
 line 66, in main
  2024-02-06 02:28:34.784629 | controller | _validate_options(options, args)
  2024-02-06 02:28:34.784661 | controller |   File 
"/opt/stack/requirements/.venv/lib/python3.10/site-packages/openstack_requirements/cmds/edit_constraint.py",
 line 47, in _validate_options
  2024-02-06 02:28:34.784783 | controller | raise Exception("Not enough 
arguments given")
  

[Yahoo-eng-team] [Bug 2052508] [NEW] [doc][troubleshooting] how to enable ovs vswitchd debug logs

2024-02-06 Thread yatin
Public bug reported:

It was raised while evaluating ovs-vswitchd debug logs in CI[1] that we
should also document in the neutron docs the steps for enabling/disabling
debug logs for ovs-vswitchd.


[1] https://review.opendev.org/c/openstack/neutron/+/907037
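
As a starting point for that doc, a hedged sketch of the runtime commands
involved (standard ovs-appctl vlog controls; treating "info" as the level to
restore is an assumption):

  ovs-appctl -t ovs-vswitchd vlog/set dbg    # enable debug logging
  ovs-appctl -t ovs-vswitchd vlog/list       # inspect current per-module levels
  ovs-appctl -t ovs-vswitchd vlog/set info   # back to the usual level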

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052508

Title:
  [doc][troubleshooting] how to enable ovs vswitchd debug logs

Status in neutron:
  New

Bug description:
  It was raised while evaluating ovs-vswitchd debug logs in CI[1] that
  we should also document in the neutron docs the steps for enabling/disabling
  debug logs for ovs-vswitchd.

  
  [1] https://review.opendev.org/c/openstack/neutron/+/907037

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052508/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051845] [NEW] [ovn][source-main] job fails with error: too many arguments to function mcast_snooping_add_group4

2024-01-31 Thread yatin
Public bug reported:

Fails as:-
2024-01-31 03:02:06.467122 | controller | controller/pinctrl.c: In function 
‘pinctrl_ip_mcast_handle_igmp’:
2024-01-31 03:02:06.468027 | controller | controller/pinctrl.c:5478:54: error: 
‘MCAST_GROUP_IGMPV1’ undeclared (first use in this function)
2024-01-31 03:02:06.468683 | controller |  5478 |   
port_key_data, MCAST_GROUP_IGMPV1);
2024-01-31 03:02:06.469444 | controller |   |   
   ^~
2024-01-31 03:02:06.469472 | controller | controller/pinctrl.c:5478:54: note: 
each undeclared identifier is reported only once for each function it appears in
2024-01-31 03:02:06.469493 | controller | controller/pinctrl.c:5477:13: error: 
too many arguments to function ‘mcast_snooping_add_group4’
2024-01-31 03:02:06.469774 | controller |  5477 | 
mcast_snooping_add_group4(ip_ms->ms, ip4, IP_MCAST_VLAN,
2024-01-31 03:02:06.469799 | controller |   | 
^
2024-01-31 03:02:06.469821 | controller | In file included from 
controller/ip-mcast.h:19,
2024-01-31 03:02:06.502161 | controller |  from 
controller/pinctrl.c:64:
2024-01-31 03:02:06.502217 | controller | 
/opt/stack/ovs/lib/mcast-snooping.h:190:6: note: declared here
2024-01-31 03:02:06.502255 | controller |   190 | bool 
mcast_snooping_add_group4(struct mcast_snooping *ms, ovs_be32 ip4,
2024-01-31 03:02:06.502286 | controller |   |  ^
2024-01-31 03:02:06.502324 | controller | controller/pinctrl.c:5483:54: error: 
‘MCAST_GROUP_IGMPV2’ undeclared (first use in this function)
2024-01-31 03:02:06.503464 | controller |  5483 |   
port_key_data, MCAST_GROUP_IGMPV2);
2024-01-31 03:02:06.503491 | controller |   |   
   ^~
2024-01-31 03:02:06.503513 | controller | depbase=`echo controller/local_data.o 
| sed 's|[^/]*$|.deps/&|;s|\.o$||'`;\
2024-01-31 03:02:06.503936 | controller | gcc -DHAVE_CONFIG_H -I.   -I 
./include  -I ./include -I ./ovn -I ./include -I ./lib -I ./lib -I 
/opt/stack/ovs/include -I /opt/stack/ovs/include -I /opt/stack/ovs/lib -I 
/opt/stack/ovs/lib -I /opt/stack/ovs -I /opt/stack/ovs-Wstrict-prototypes 
-Wall -Wextra -Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security 
-Wswitch-enum -Wunused-parameter -Wbad-function-cast -Wcast-align 
-Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes 
-Wmissing-field-initializers -fno-strict-aliasing -Wswitch-bool 
-Wlogical-not-parentheses -Wsizeof-array-argument -Wbool-compare 
-Wshift-negative-value -Wduplicated-cond -Wshadow -Wmultistatement-macros 
-Wcast-align=strict   -g -O2 -MT controller/local_data.o -MD -MP -MF 
$depbase.Tpo -c -o controller/local_data.o controller/local_data.c &&\
2024-01-31 03:02:06.503963 | controller | mv -f $depbase.Tpo $depbase.Po
2024-01-31 03:02:06.504000 | controller | controller/pinctrl.c:5482:13: error: 
too many arguments to function ‘mcast_snooping_add_group4’
2024-01-31 03:02:06.504288 | controller |  5482 | 
mcast_snooping_add_group4(ip_ms->ms, ip4, IP_MCAST_VLAN,
2024-01-31 03:02:06.504314 | controller |   | 
^
2024-01-31 03:02:06.504335 | controller | In file included from 
controller/ip-mcast.h:19,

Example failure:-
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3c5/periodic/opendev.org/openstack/neutron/master/neutron-
ovn-tempest-ipv6-only-ovs-master/3c5404e/job-output.txt

Failing since
https://github.com/ovn-org/ovn/commit/dc34b4d9f7f3efb4e7547f9850f6086a7e1a2338;
we need to update OVS_BRANCH in these jobs.
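
A hedged sketch of the kind of override implied here, using the devstack
variables that control the source builds (the exact OVS branch/commit to pin
is an assumption and would be whatever current OVN main expects):

  # local.conf / job variables
  OVN_BUILD_FROM_SOURCE=True
  OVN_BRANCH=main
  OVS_BRANCH=<ovs branch or commit compatible with OVN main>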

** Affects: neutron
 Importance: High
     Assignee: yatin (yatinkarel)
 Status: Triaged

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051845

Title:
  [ovn][source-main] job fails with error: too many arguments to
  function mcast_snooping_add_group4

Status in neutron:
  Triaged

Bug description:
  Fails as:-
  2024-01-31 03:02:06.467122 | controller | controller/pinctrl.c: In function 
‘pinctrl_ip_mcast_handle_igmp’:
  2024-01-31 03:02:06.468027 | controller | controller/pinctrl.c:5478:54: 
error: ‘MCAST_GROUP_IGMPV1’ undeclared (first use in this function)
  2024-01-31 03:02:06.468683 | controller |  5478 | 
  port_key_data, MCAST_GROUP_IGMPV1);
  2024-01-31 03:02:06.469444 | controller |   | 
 ^~
  2024-01-31 03:02:06.469472 | controller | controller/pin

[Yahoo-eng-team] [Bug 2051831] [NEW] [ovn] neutron-ovn-tempest-slow job fail tests relying on FIP

2024-01-31 Thread yatin
Public bug reported:

Example failure:-
https://2f4a32f753edcd6fd518-38c49964a79149719549049b602122d6.ssl.cf5.rackcdn.com/906628/1/experimental/neutron-
ovn-tempest-slow/1b35fb8/testr_results.html

Fails as:-
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/scenario/test_security_groups_basic_ops.py", 
line 191, in setUp
self._deploy_tenant(self.primary_tenant)
  File "/opt/stack/tempest/tempest/scenario/test_security_groups_basic_ops.py", 
line 354, in _deploy_tenant
self._set_access_point(tenant)
  File "/opt/stack/tempest/tempest/scenario/test_security_groups_basic_ops.py", 
line 321, in _set_access_point
self._assign_floating_ips(tenant, server)
  File "/opt/stack/tempest/tempest/scenario/test_security_groups_basic_ops.py", 
line 325, in _assign_floating_ips
floating_ip = self.create_floating_ip(
  File "/opt/stack/tempest/tempest/scenario/manager.py", line 1132, in 
create_floating_ip
result = client.create_floatingip(**floatingip_kwargs)
  File 
"/opt/stack/tempest/tempest/lib/services/network/floating_ips_client.py", line 
30, in create_floatingip
return self.create_resource(uri, post_data)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 62, in 
create_resource
resp, body = self.post(req_uri, req_post_data)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 300, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 852, in 
_error_checker
raise exceptions.NotFound(resp_body, resp=resp)
tempest.lib.exceptions.NotFound: Object not found
Details: {'type': 'ExternalGatewayForFloatingIPNotFound', 'message': 'External 
network 43cd92cd-4957-4770-9945-584e8d4da9e3 is not reachable from subnet 
e354d5ec-5be0-4536-9348-fd819e3f7464.  Therefore, cannot associate Port 
63869e58-05b8-4a18-be25-77500966df61 with a Floating IP.', 'detail': ''}

neutron-server trace:-
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn [None 
req-1674312b-7443-4d41-bd9b-5f69f5173083 
tempest-TestNetworkAdvancedServerOps-1277687553 
tempest-TestNetworkAdvancedServerOps-1277687553-project-member] Unable to add 
router interface to lrouter 31bdd6ba-45cf-45bd-aa3d-907a217ce2a3. Interface 
info: {'id': '31bdd6ba-45cf-45bd-aa3d-907a217ce2a3', 'tenant_id': 
'f7ebd951642c4987ad034d6180f81784', 'port_id': 
'f8606824-f658-4634-b9ce-2f8e2ce0d1c3', 'network_id': 
'2f74ef7b-8374-4f36-a4bf-fbf225b87c48', 'subnet_id': 
'd61fb992-2d87-4198-82f1-a687609c3e7c', 'subnet_ids': 
['d61fb992-2d87-4198-82f1-a687609c3e7c']}: 
neutron_lib.exceptions.SubnetNotFound: Subnet 
a283b869-2e54-44da-b109-06da46933d06 could not be found.
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn Traceback (most recent call last):
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn   File 
"/opt/stack/neutron/neutron/services/ovn_l3/service_providers/ovn.py", line 
122, in _process_add_router_interface
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn 
self.l3plugin._ovn_client.create_router_port(context, router.id,
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1754, in create_router_port
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn 
self._update_lrouter_port(context, router_port,
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1762, in _update_lrouter_port
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn 
self._get_nets_and_ipv6_ra_confs_for_router_port(context, port))
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1236, in _get_nets_and_ipv6_ra_confs_for_router_port
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn subnet = 
self._plugin.get_subnet(context, subnet_id)
Jan 26 15:10:35.544611 np0036547416 neutron-server[63495]: ERROR 
neutron.services.ovn_l3.service_providers.ovn   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/neutron_lib/db/api.py", line 
223, in wrapped
Jan 26 15:10:35.544611 np0036547416 

[Yahoo-eng-team] [Bug 2049488] [NEW] [ovn][source-main] job fails with error: too many arguments to function flow_compose

2024-01-16 Thread yatin
Public bug reported:

Fails as:-
2024-01-16 03:02:53.651183 | controller | gcc -DHAVE_CONFIG_H -I.   -I 
./include  -I ./include -I ./ovn -I ./include -I ./lib -I ./lib -I 
/opt/stack/ovs/include -I /opt/stack/ovs/include -I /opt/stack/ovs/lib -I 
/opt/stack/ovs/lib -I /opt/stack/ovs -I /opt/stack/ovs-Wstrict-prototypes 
-Wall -Wextra -Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security 
-Wswitch-enum -Wunused-parameter -Wbad-function-cast -Wcast-align 
-Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes 
-Wmissing-field-initializers -fno-strict-aliasing -Wswitch-bool 
-Wlogical-not-parentheses -Wsizeof-array-argument -Wbool-compare 
-Wshift-negative-value -Wduplicated-cond -Wshadow -Wmultistatement-macros 
-Wcast-align=strict   -g -O2 -MT controller/ovn-controller.o -MD -MP -MF 
$depbase.Tpo -c -o controller/ovn-controller.o controller/ovn-controller.c &&\
2024-01-16 03:02:53.651243 | controller | mv -f $depbase.Tpo $depbase.Po
2024-01-16 03:02:53.651272 | controller | controller/ofctrl.c: In function 
‘ofctrl_inject_pkt’:
2024-01-16 03:02:53.651946 | controller | controller/ofctrl.c:3048:5: error: 
too many arguments to function ‘flow_compose’
2024-01-16 03:02:53.652293 | controller |  3048 | flow_compose(, 
, NULL, 64, false);
2024-01-16 03:02:53.652320 | controller |   | ^~~~
2024-01-16 03:02:53.652342 | controller | In file included from 
/opt/stack/ovs/lib/dp-packet.h:34,
2024-01-16 03:02:53.682892 | controller |  from 
controller/ofctrl.c:21:
2024-01-16 03:02:53.682967 | controller | /opt/stack/ovs/lib/flow.h:129:6: 
note: declared here
2024-01-16 03:02:53.682996 | controller |   129 | void flow_compose(struct 
dp_packet *, const struct flow *,
2024-01-16 03:02:53.683024 | controller |   |  ^~~~
2024-01-16 03:02:53.683059 | controller | make[1]: *** [Makefile:2325: 
controller/ofctrl.o] Error 1
2024-01-16 03:03:02.115087 | controller | make[1]: *** Waiting for unfinished 
jobs
2024-01-16 03:03:02.115190 | controller | make[1]: Leaving directory 
'/opt/stack/ovn'
2024-01-16 03:03:02.115584 | controller | make: *** [Makefile:1505: all] Error 2
2024-01-16 03:03:02.120930 | controller | + 
lib/neutron_plugins/ovn_agent:compile_ovn:1 :   exit_trap

Example failure:-
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c5a/periodic/opendev.org/openstack/neutron/master/neutron-
ovn-tempest-ipv6-only-ovs-master/c5a8aa0/job-output.txt

builds:-
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ipv6-only-ovs-master_name=neutron-ovn-tempest-ovs-master-centos-9-stream=0

Broken since
https://github.com/ovn-org/ovn/commit/66ef6709678486f7abf88db10eed15fb72edcc4a;
we need to update OVS_BRANCH in these jobs.

** Affects: neutron
 Importance: High
 Status: Triaged


** Tags: ovn

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => High

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049488

Title:
  [ovn][source-main] job fails with error: too many arguments to
  function flow_compose

Status in neutron:
  Triaged

Bug description:
  Fails as:-
  2024-01-16 03:02:53.651183 | controller | gcc -DHAVE_CONFIG_H -I.   -I 
./include  -I ./include -I ./ovn -I ./include -I ./lib -I ./lib -I 
/opt/stack/ovs/include -I /opt/stack/ovs/include -I /opt/stack/ovs/lib -I 
/opt/stack/ovs/lib -I /opt/stack/ovs -I /opt/stack/ovs-Wstrict-prototypes 
-Wall -Wextra -Wno-sign-compare -Wpointer-arith -Wformat -Wformat-security 
-Wswitch-enum -Wunused-parameter -Wbad-function-cast -Wcast-align 
-Wstrict-prototypes -Wold-style-definition -Wmissing-prototypes 
-Wmissing-field-initializers -fno-strict-aliasing -Wswitch-bool 
-Wlogical-not-parentheses -Wsizeof-array-argument -Wbool-compare 
-Wshift-negative-value -Wduplicated-cond -Wshadow -Wmultistatement-macros 
-Wcast-align=strict   -g -O2 -MT controller/ovn-controller.o -MD -MP -MF 
$depbase.Tpo -c -o controller/ovn-controller.o controller/ovn-controller.c &&\
  2024-01-16 03:02:53.651243 | controller | mv -f $depbase.Tpo $depbase.Po
  2024-01-16 03:02:53.651272 | controller | controller/ofctrl.c: In function 
‘ofctrl_inject_pkt’:
  2024-01-16 03:02:53.651946 | controller | controller/ofctrl.c:3048:5: error: 
too many arguments to function ‘flow_compose’
  2024-01-16 03:02:53.652293 | controller |  3048 | flow_compose(, 
, NULL, 64, false);
  2024-01-16 03:02:53.652320 | controller |   | ^~~~
  2024-01-16 03:02:53.652342 | controller | In file included from 
/opt/stack/ovs/lib/dp-packet.h:34,
  2024-01-16 03:02:53.682892 | controller |  from 
controller/ofctrl.c:21:
  2024-01-16 03:02:53.682967 | controller | /opt/stack/ovs/lib/flow.h:129:6: 
note: declared here
  2024-01-16 03:02:53.682996 | controller |   129 | 

[Yahoo-eng-team] [Bug 2046196] Re: Unicast ICMPv6 Router Advertisement packets to VM's dropped by OVS firewall driver

2023-12-17 Thread yatin
*** This bug is a duplicate of bug 1958643 ***
https://bugs.launchpad.net/bugs/1958643

Thanks @Stanislav for the confirmation, will close it as Duplicate.

** This bug has been marked a duplicate of bug 1958643
   Unicast RA messages for a VM are filtered out by ovs rules

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2046196

Title:
  Unicast ICMPv6 Router Advertisement packets to VM's dropped by OVS
  firewall driver

Status in neutron:
  Incomplete

Bug description:
  When use Open vSwitch Firewall Driver we noticed that ICMPv6 RA
  unicast packets are not reaching the VM.

  
  =Troubleshooting 
flow:===

  1) Catch ICMPv6 RA package on physical hypervisor bonding interface:

  #tcpdump -XXvpnei bond0 -Q in -c 1 "icmp6[0] = 134 and ether host 
fa:16:3e:68:e8:19"
  dropped privs to tcpdump
  tcpdump: listening on bond0, link-type EN10MB (Ethernet), capture size 262144 
bytes
  21:56:43.669691 26:28:b0:96:c0:c7 > fa:16:3e:68:e8:19, ethertype 802.1Q 
(0x8100), length 162: vlan 3254, p 0, ethertype IPv6, (flowlabel 0x7cc6b, hlim 
255, next-header ICMPv6 (58) payload length: 104) fe80::2428:b0ff:fe96:c0c7 > 
fe80::f816:3eff:fe68:e819: [icmp6 sum ok] ICMP6, router advertisement, length 
104
hop limit 64, Flags [managed], pref medium, router lifetime 6s, 
reachable time 0ms, retrans timer 0ms
  prefix info option (3), length 32 (4): 2a05:fc1:200::/64, Flags 
[onlink, auto], valid time 2592000s, pref. time 14400s
  rdnss option (25), length 40 (5):  lifetime 2s, addr: 2a05:fc1::2 
addr: 2a05:fc1::3
  source link-address option (1), length 8 (1): 26:28:b0:96:c0:c7
  advertisement interval option (7), length 8 (1):  2000ms
0x:  fa16 3e68 e819 2628 b096 c0c7 8100 0cb6  ..>h..&(
0x0010:  86dd 6007 cc6b 0068 3aff fe80    ..`..k.h:...
0x0020:   2428 b0ff fe96 c0c7 fe80    ..$(
0x0030:   f816 3eff fe68 e819 8600 10d2 4080  >..h..@.
0x0040:  0006     0304 40c0 0027  @..'
0x0050:  8d00  3840   2a05 0fc1 0200  8@*.
0x0060:       1905    
0x0070:  0002 2a05 0fc1       ..*.
0x0080:  0002 2a05 0fc1       ..*.
0x0090:  0003 0101 2628 b096 c0c7 0701    &(..
0x00a0:  07d0 ..

  2) Trace the package using "ofproto/trace":

  #ovs-appctl ofproto/trace br-int in_port=1 
fa163e68e8192628b096c0c781000cb686dd6007cc6b00683afffe802428b0fffe96c0c7fe80f8163efffe68e819860010d24086030440c000278d0038402a050fc10200190500022a050fc100022a050fc1000301012628b096c0c7070107d0
  Flow: 
icmp6,in_port=1,dl_vlan=3254,dl_vlan_pcp=0,vlan_tci1=0x,dl_src=26:28:b0:96:c0:c7,dl_dst=fa:16:3e:68:e8:19,ipv6_src=fe80::2428:b0ff:fe96:c0c7,ipv6_dst=fe80::f816:3eff:fe68:e819,ipv6_label=0x7cc6b,nw_tos=0,nw_ecn=0,nw_ttl=255,icmp_type=134,icmp_code=0

  bridge("br-int")
  
   0. in_port=1,dl_vlan=3254, priority 3, cookie 0x4ef408376e507615
  set_field:4097->vlan_vid
  goto_table:60
  60. dl_vlan=1,dl_dst=fa:16:3e:68:e8:19, priority 90, cookie 0x4ef408376e507615
  set_field:0x9e->reg5
  set_field:0x1->reg6
  pop_vlan
  resubmit(,81)
  81. ct_state=-trk,ipv6,reg5=0x9e, priority 90, cookie 0x4ef408376e507615
  ct(table=82,zone=NXM_NX_REG6[0..15])
  drop
   -> A clone of the packet is forked to recirculate. The forked pipeline 
will be resumed at table 82.
   -> Sets the packet to an untracked state, and clears all the conntrack 
fields.

  Final flow: 
icmp6,reg5=0x9e,reg6=0x1,in_port=1,vlan_tci=0x,dl_src=26:28:b0:96:c0:c7,dl_dst=fa:16:3e:68:e8:19,ipv6_src=fe80::2428:b0ff:fe96:c0c7,ipv6_dst=fe80::f816:3eff:fe68:e819,ipv6_label=0x7cc6b,nw_tos=0,nw_ecn=0,nw_ttl=255,icmp_type=134,icmp_code=0
  Megaflow: 
recirc_id=0,ct_state=-trk,eth,icmp6,in_port=1,dl_vlan=3254,dl_vlan_pcp=0,dl_dst=fa:16:3e:68:e8:19,nw_frag=no,icmp_type=0x86/0xff
  Datapath actions: pop_vlan,ct(zone=1),recirc(0xc30495)

  
===
  recirc(0xc30495) - resume conntrack with default ct_state=trk|new (use 
--ct-next to customize)
  
===

  Flow:
  

[Yahoo-eng-team] [Bug 2045725] Re: scenario jobs randomly fails downloading ubuntu image

2023-12-12 Thread yatin
It's not an issue in neutron itself but was just a tracking bug for the issue in
our jobs.
https://review.opendev.org/q/I7163aea4d121cb27620e4f2a083a543abfc286bf handles
the random failure, so closing this for neutron now.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045725

Title:
  scenario jobs randomly fails downloading ubuntu image

Status in neutron:
  Invalid

Bug description:
  Jobs fail as:-
  2023-11-30 10:17:01.182 | + 
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:36
 :   wget --progress=dot:giga -c 
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
 -O /opt/stack/devstack/files/ubuntu-20.04-minimal-cloudimg-amd64.img
  2023-11-30 10:17:01.187 | --2023-11-30 10:17:01--  
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
  2023-11-30 10:17:01.241 | Resolving cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)... 2620:2d:4000:1::17, 2620:2d:4000:1::1a, 
185.125.190.37, ...
  2023-11-30 10:17:01.251 | Connecting to cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)|2620:2d:4000:1::17|:443... connected.
  2023-11-30 10:17:01.287 | HTTP request sent, awaiting response... 200 OK
  2023-11-30 10:17:01.287 | Length: 278331392 (265M) [application/octet-stream]
  2023-11-30 10:17:01.287 | Saving to: 
‘/opt/stack/devstack/files/ubuntu-20.04-minimal-cloudimg-amd64.img’
  2023-11-30 10:17:01.342 | 
  2023-11-30 10:17:01.613 |  0K     12%  
101M 2s
  2023-11-30 10:17:01.895 |  32768K     24%  
113M 2s
  2023-11-30 10:17:02.176 |  65536K     36%  
113M 2s
  2023-11-30 10:17:02.507 |  98304K     48% 
96.9M 1s
  2023-11-30 10:17:02.796 | 131072K     60%  
110M 1s
  2023-11-30 10:17:02.855 | 163840K 63%  
131M=1.6s
  2023-11-30 10:17:02.855 | 
  2023-11-30 10:17:02.855 | 2023-11-30 10:17:02 (107 MB/s) - Read error at byte 
176537600/278331392 (Connection reset by peer). Retrying.
  2023-11-30 10:17:02.856 | 
  2023-11-30 10:17:03.856 | --2023-11-30 10:17:03--  (try: 2)  
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
  2023-11-30 10:17:03.864 | Connecting to cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)|2620:2d:4000:1::17|:443... connected.
  2023-11-30 10:17:13.240 | Unable to establish SSL connection.
  2023-11-30 10:17:13.249 | + 
/opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :  
 exit_trap

  Example build:-
  https://zuul.opendev.org/t/openstack/build/aa4ca4bdfd6441a79276334703c6e3c1/
  https://zuul.opendev.org/t/openstack/build/66a45d9eaff44d4e83a225ec268c993c/

  Opensearch(Creds openstack/openstack):-
  
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22Unable%20to%20establish%20SSL%20connection.%22'),sort:!())

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045725/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1961740] Re: Network interface not found in namespace

2023-12-12 Thread yatin
Still seeing a similar issue, as below, so reopening the bug.

-
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ed8/901717/1/gate/neutron-
functional-with-uwsgi/ed8e900/testr_results.html

Fails as:-
neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network
interface sg-aeb1f7f1-24 not found in namespace
snat-89cb4e33-917d-4ba3-b89f-48963ea3d024.

Device added/deleted/added quickly:-
2023-11-29T10:17:44.507Z|06104|bridge|INFO|bridge test-br8645578e: added 
interface sg-aeb1f7f1-24 on port 4
2023-11-29T10:17:44.865Z|06115|bridge|INFO|bridge test-br8645578e: deleted 
interface sg-aeb1f7f1-24 on port 4
2023-11-29T10:17:44.877Z|06117|bridge|INFO|bridge test-br8645578e: added 
interface sg-aeb1f7f1-24 on port 4
2023-11-29T10:17:45.336Z|06131|bridge|INFO|bridge test-br8645578e: deleted 
interface sg-aeb1f7f1-24 on port 4


- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_03d/901718/1/gate/neutron-functional-with-uwsgi/03d59c7/testr_results.html

Fails as:-
neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network
interface sg-44024bfc-76 not found in namespace
snat-0afef646-ef9d-406e-b479-6cc7861f8804.

Device added/deleted/added quickly:-
2023-11-29T10:53:03.124Z|03237|bridge|INFO|bridge test-brf59c6dc7: added 
interface sg-44024bfc-76 on port 4
2023-11-29T10:53:03.804Z|03253|bridge|INFO|bridge test-brf59c6dc7: deleted 
interface sg-44024bfc-76 on port 4
2023-11-29T10:53:03.817Z|03256|bridge|INFO|bridge test-brf59c6dc7: added 
interface sg-44024bfc-76 on port 4
2023-11-29T10:53:05.215Z|03282|bridge|INFO|bridge test-brf59c6dc7: deleted 
interface sg-44024bfc-76 on port 4


- 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_109/periodic/opendev.org/openstack/neutron/master/neutron-functional-with-sqlalchemy-master/10928da/testr_results.html

Fails as:- AssertionError: Device name: fg-1f3bd0ce-36, expected MAC:
fa:16:3e:80:8d:89, expected CIDRs: ['19.4.5.8/24'], device MAC:
0e:d2:9d:26:57:e6

Device added/deleted/added quickly
2023-12-11T03:51:16.045Z|05000|bridge|INFO|bridge test-br4d6080f6: added 
interface fg-1f3bd0ce-36 on port 6
2023-12-11T03:51:16.437Z|05008|bridge|INFO|bridge test-br4d6080f6: deleted 
interface fg-1f3bd0ce-36 on port 6
2023-12-11T03:51:16.446Z|05010|bridge|WARN|could not add network device 
fg-1f3bd0ce-36 to ofproto (No such device)
2023-12-11T03:51:16.556Z|05013|bridge|INFO|bridge test-br4d6080f6: added 
interface fg-1f3bd0ce-36 on port 8
2023-12-11T03:51:24.190Z|05053|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on 
fg-1f3bd0ce-36 device failed: No such device
2023-12-11T03:51:29.670Z|05098|bridge|INFO|bridge test-br4d6080f6: deleted 
interface fg-1f3bd0ce-36 on port 8

** Changed in: neutron
   Status: Fix Released => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1961740

Title:
  Network interface not found in namespace

Status in neutron:
  New

Bug description:
  This is intermittent error in the functional tests job. Failure
  example:
  
https://89b1b88fa362b409cfb1-2a70ac574f4ba34d12afc72df211f1b3.ssl.cf5.rackcdn.com/828687/1/gate/neutron-
  functional-with-uwsgi/a5b844b/testr_results.html

  Stacktrace:
  ft1.47: 
neutron.tests.functional.agent.l3.extensions.test_port_forwarding_extension.TestL3AgentFipPortForwardingExtensionDVR.test_dvr_non_ha_router_updatetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 1708, in test_dvr_non_ha_router_update
  router2 = self._create_dvr_ha_router(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 1445, in _create_dvr_ha_router
  router = self.manage_router(agent, r_info)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 195, in manage_router
  return super(TestDvrRouter, self).manage_router(agent, router)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/framework.py",
 line 420, in manage_router
  agent._process_added_router(router)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/l3/agent.py", line 
663, in _process_added_router
  self._cleanup_failed_router(router['id'],
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

[Yahoo-eng-team] [Bug 2045785] [NEW] test_unshelve_to_specific_host fails randomly at _shelve_offload_then_unshelve_to_host otherhost

2023-12-06 Thread yatin
Public bug reported:

test_unshelve_to_specific_host fails randomly at 
_shelve_offload_then_unshelve_to_host otherhost like:-
Traceback (most recent call last):
  File 
"/opt/stack/tempest/tempest/api/compute/admin/test_servers_on_multinodes.py", 
line 178, in test_unshelve_to_specific_host
self.assertEqual(otherhost, self.get_host_for_server(server['id']))
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 394, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 'np0035920758' != None

The operation seems to succeed on the destination host, but the check still
fails because the server show output doesn't have the expected host set.

Some failure builds:-
https://402b9b635a0653670c41-889023abcee881d48f07f1c8f07a15d7.ssl.cf1.rackcdn.com/901474/4/check/neutron-ovs-tempest-multinode-full/03ca936/testr_results.html
https://67fbdc27dc1cab269631-9795c6ee70a51795a4d498c996d13136.ssl.cf5.rackcdn.com/901478/2/gate/neutron-ovs-tempest-multinode-full/e661940/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_59c/884711/9/check/neutron-ovs-tempest-dvr-ha-multinode-full/59c1ef4/testr_results.html
https://85bf7a1c249d0971baf7-33c96ff533edc27f55461123fdf39567.ssl.cf5.rackcdn.com/884407/9/check/neutron-ovs-tempest-multinode-full/07522b9/testr_results.html

Opensearch(creds: openstack/openstack):-
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22line%20178,%20in%20test_unshelve_to_specific_host%22'),sort:!())

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2045785

Title:
  test_unshelve_to_specific_host fails randomly at
  _shelve_offload_then_unshelve_to_host otherhost

Status in OpenStack Compute (nova):
  New

Bug description:
  test_unshelve_to_specific_host fails randomly at 
_shelve_offload_then_unshelve_to_host otherhost like:-
  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/admin/test_servers_on_multinodes.py", 
line 178, in test_unshelve_to_specific_host
  self.assertEqual(otherhost, self.get_host_for_server(server['id']))
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 394, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 'np0035920758' != None

  The operation seems to succeed on the destination host, but the check
  still fails because the server show output doesn't have the expected host
  set.

  Some failure builds:-
  
https://402b9b635a0653670c41-889023abcee881d48f07f1c8f07a15d7.ssl.cf1.rackcdn.com/901474/4/check/neutron-ovs-tempest-multinode-full/03ca936/testr_results.html
  
https://67fbdc27dc1cab269631-9795c6ee70a51795a4d498c996d13136.ssl.cf5.rackcdn.com/901478/2/gate/neutron-ovs-tempest-multinode-full/e661940/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_59c/884711/9/check/neutron-ovs-tempest-dvr-ha-multinode-full/59c1ef4/testr_results.html
  
https://85bf7a1c249d0971baf7-33c96ff533edc27f55461123fdf39567.ssl.cf5.rackcdn.com/884407/9/check/neutron-ovs-tempest-multinode-full/07522b9/testr_results.html

  Opensearch(creds: openstack/openstack):-
  
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22line%20178,%20in%20test_unshelve_to_specific_host%22'),sort:!())

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2045785/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045725] [NEW] scenario jobs randomly fails downloading ubuntu image

2023-12-05 Thread yatin
Public bug reported:

Jobs fail as:-
2023-11-30 10:17:01.182 | + 
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:36
 :   wget --progress=dot:giga -c 
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
 -O /opt/stack/devstack/files/ubuntu-20.04-minimal-cloudimg-amd64.img
2023-11-30 10:17:01.187 | --2023-11-30 10:17:01--  
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
2023-11-30 10:17:01.241 | Resolving cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)... 2620:2d:4000:1::17, 2620:2d:4000:1::1a, 
185.125.190.37, ...
2023-11-30 10:17:01.251 | Connecting to cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)|2620:2d:4000:1::17|:443... connected.
2023-11-30 10:17:01.287 | HTTP request sent, awaiting response... 200 OK
2023-11-30 10:17:01.287 | Length: 278331392 (265M) [application/octet-stream]
2023-11-30 10:17:01.287 | Saving to: 
‘/opt/stack/devstack/files/ubuntu-20.04-minimal-cloudimg-amd64.img’
2023-11-30 10:17:01.342 | 
2023-11-30 10:17:01.613 |  0K     12%  101M 
2s
2023-11-30 10:17:01.895 |  32768K     24%  113M 
2s
2023-11-30 10:17:02.176 |  65536K     36%  113M 
2s
2023-11-30 10:17:02.507 |  98304K     48% 96.9M 
1s
2023-11-30 10:17:02.796 | 131072K     60%  110M 
1s
2023-11-30 10:17:02.855 | 163840K 63%  
131M=1.6s
2023-11-30 10:17:02.855 | 
2023-11-30 10:17:02.855 | 2023-11-30 10:17:02 (107 MB/s) - Read error at byte 
176537600/278331392 (Connection reset by peer). Retrying.
2023-11-30 10:17:02.856 | 
2023-11-30 10:17:03.856 | --2023-11-30 10:17:03--  (try: 2)  
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
2023-11-30 10:17:03.864 | Connecting to cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)|2620:2d:4000:1::17|:443... connected.
2023-11-30 10:17:13.240 | Unable to establish SSL connection.
2023-11-30 10:17:13.249 | + 
/opt/stack/neutron-tempest-plugin/devstack/customize_image.sh:upload_image:1 :  
 exit_trap

Example build:-
https://zuul.opendev.org/t/openstack/build/aa4ca4bdfd6441a79276334703c6e3c1/
https://zuul.opendev.org/t/openstack/build/66a45d9eaff44d4e83a225ec268c993c/

Opensearch(Creds openstack/openstack):-
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22Unable%20to%20establish%20SSL%20connection.%22'),sort:!())

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045725

Title:
  scenario jobs randomly fails downloading ubuntu image

Status in neutron:
  New

Bug description:
  Jobs fail as:-
  2023-11-30 10:17:01.182 | + 
/opt/stack/neutron-tempest-plugin/devstack/functions.sh:overridden_upload_image:36
 :   wget --progress=dot:giga -c 
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
 -O /opt/stack/devstack/files/ubuntu-20.04-minimal-cloudimg-amd64.img
  2023-11-30 10:17:01.187 | --2023-11-30 10:17:01--  
https://cloud-images.ubuntu.com/minimal/releases/focal/release/ubuntu-20.04-minimal-cloudimg-amd64.img
  2023-11-30 10:17:01.241 | Resolving cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)... 2620:2d:4000:1::17, 2620:2d:4000:1::1a, 
185.125.190.37, ...
  2023-11-30 10:17:01.251 | Connecting to cloud-images.ubuntu.com 
(cloud-images.ubuntu.com)|2620:2d:4000:1::17|:443... connected.
  2023-11-30 10:17:01.287 | HTTP request sent, awaiting response... 200 OK
  2023-11-30 10:17:01.287 | Length: 278331392 (265M) [application/octet-stream]
  2023-11-30 10:17:01.287 | Saving to: 
‘/opt/stack/devstack/files/ubuntu-20.04-minimal-cloudimg-amd64.img’
  2023-11-30 10:17:01.342 | 
  2023-11-30 10:17:01.613 |  0K     12%  
101M 2s
  2023-11-30 10:17:01.895 |  32768K     24%  
113M 2s
  2023-11-30 10:17:02.176 |  65536K     36%  
113M 2s
  2023-11-30 10:17:02.507 |  98304K     48% 
96.9M 1s
  2023-11-30 10:17:02.796 | 131072K     60%  
110M 1s
  2023-11-30 10:17:02.855 | 163840K 63%  
131M=1.6s
  2023-11-30 10:17:02.855 | 
  2023-11-30 10:17:02.855 | 2023-11-30 10:17:02 (107 MB/s) - Read error at byte 
176537600/278331392 (Connection reset by peer). Retrying.
  2023-11-30 10:17:02.856 | 
  2023-11-30 10:17:03.856 | 

[Yahoo-eng-team] [Bug 2045549] [NEW] OVS jobs randomly fails as Guest VMs not(or delayed) configured with DHCP

2023-12-04 Thread yatin
Public bug reported:

Seen a couple of hits recently; tests fail as:-
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in 
_get_ssh_connection
ssh.connect(self.host, port=self.port, username=self.username,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/paramiko/client.py",
 line 409, in connect
raise NoValidConnectionsError(errors)
paramiko.ssh_exception.NoValidConnectionsError: [Errno None] Unable to connect 
to port 22 on 172.24.5.37

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
return f(*func_args, **func_kwargs)
  File 
"/opt/stack/tempest/tempest/api/compute/servers/test_attach_interfaces.py", 
line 278, in test_create_list_show_delete_interfaces_by_fixed_ip
server, ifs, _ = self._create_server_get_interfaces()
  File 
"/opt/stack/tempest/tempest/api/compute/servers/test_attach_interfaces.py", 
line 88, in _create_server_get_interfaces
self._wait_for_validation(server, validation_resources)
  File 
"/opt/stack/tempest/tempest/api/compute/servers/test_attach_interfaces.py", 
line 73, in _wait_for_validation
linux_client.validate_authentication()
  File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 31, in wrapper
return function(self, *args, **kwargs)
  File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 123, in validate_authentication
self.ssh_client.test_connection_auth()
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 245, in 
test_connection_auth
connection = self._get_ssh_connection()
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 155, in 
_get_ssh_connection
raise exceptions.SSHTimeout(host=self.host,
tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.37 via SSH timed 
out.
User: cirros, Password: password

From the console logs of the VM, the eth0 interface is not configured correctly as it
has an IPv4LL address instead:-
Nov 25 08:44:45 cirros daemon.info dhcpcd[316]: eth0: using IPv4LL address 
169.254.203.238

From the DHCP agent logs, there were DHCPDISCOVER/DHCPOFFER but no
DHCPREQUEST/DHCPACK:
Nov 25 08:44:35.898432 np0035864589 dnsmasq-dhcp[98241]: 
DHCPDISCOVER(tapb331ed5f-9e) fa:16:3e:fa:8b:d7
Nov 25 08:44:35.898464 np0035864589 dnsmasq-dhcp[98241]: 
DHCPOFFER(tapb331ed5f-9e) 10.1.0.14 fa:16:3e:fa:8b:d7

In other failures it was seen differently: DHCP took time to get configured and in
the meanwhile the metadata requests failed (failed 20/20: up 56.38. request failed):-
Nov 22 06:08:28.142795 np0035837630 dnsmasq-dhcp[104598]: 
DHCPDISCOVER(tap2f7f2c03-6d) fa:16:3e:10:67:5f no address available
Nov 22 06:08:33.071307 np0035837630 dnsmasq-dhcp[104598]: 
DHCPDISCOVER(tap2f7f2c03-6d) fa:16:3e:10:67:5f no address available
Nov 22 06:08:42.063921 np0035837630 dnsmasq-dhcp[104598]: 
DHCPDISCOVER(tap2f7f2c03-6d) fa:16:3e:10:67:5f no address available
Nov 22 06:09:29.752568 np0035837630 dnsmasq-dhcp[104598]: 
DHCPDISCOVER(tap2f7f2c03-6d) fa:16:3e:10:67:5f
Nov 22 06:09:29.752593 np0035837630 dnsmasq-dhcp[104598]: 
DHCPOFFER(tap2f7f2c03-6d) 10.1.0.26 fa:16:3e:10:67:5f
Nov 22 06:09:29.756191 np0035837630 dnsmasq-dhcp[104598]: 
DHCPREQUEST(tap2f7f2c03-6d) 10.1.0.26 fa:16:3e:10:67:5f
Nov 22 06:09:29.756218 np0035837630 dnsmasq-dhcp[104598]: 
DHCPACK(tap2f7f2c03-6d) 10.1.0.26 fa:16:3e:10:67:5f 
tempest-server-test-761246100


Example builds:-
https://zuul.openstack.org/build/3e27967397a64fb587d8aae2ff215d10
https://zuul.openstack.org/build/fb288858b1e14d4893bc3710abca38d8
https://zuul.openstack.org/build/3f16434fef6a4ae5846bd11d49aab8ad
https://zuul.openstack.org/build/b7722266e9134fb2ae38484d284bacd3
https://zuul.openstack.org/build/abd7aada5fd84371a78182effb7f0050

Opensearch:-
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22:%20using%20IPv4LL%20address%22'),sort:!())

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045549

Title:
  OVS jobs randomly fails as Guest VMs not(or delayed) configured with
  DHCP

Status in neutron:
  New

Bug description:
  Seen a couple of hits recently; tests fail as:-
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in 
_get_ssh_connection
  ssh.connect(self.host, port=self.port, username=self.username,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/paramiko/client.py",
 line 409, in connect
  raise NoValidConnectionsError(errors)
  

[Yahoo-eng-team] [Bug 2045383] Re: [xena] nftables periodic jobs fails with RETRY_LIMIT

2023-12-03 Thread yatin
Thanks folks, jobs are back to green now:-
https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables&job_name=neutron-ovs-tempest-plugin-scenario-iptables_hybrid-nftables&project=openstack%2Fneutron&branch=stable%2Fxena&skip=0

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045383

Title:
  [xena] nftables periodic jobs fails with RETRY_LIMIT

Status in neutron:
  Invalid

Bug description:
  Since 18th Nov nftables jobs are failing with RETRY_LIMIT and no logs are
  available. These jobs fail quickly and the last task seen running in the
  console is "Preparing job workspace".

  This looks like a regression with the zuul change
  https://review.opendev.org/c/zuul/zuul/+/900489.

  @fungi checked the executor logs and it fails as:-
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: [e: 
a4e70e9b208f479e98334c09305d2013] [build: 3c41d75624274d2e8d7fd62ae332c31d] 
Exception while executing job
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   Traceback (most recent call 
last):
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/git/util.py", line 1125, in __getitem__
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   return getattr(self, 
index)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:  

  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/git/util.py", line 1114, in __getattr__
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   return 
list.__getattribute__(self, attr)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:  
^
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   AttributeError: 
'IterableList' object has no attribute '2.3.0'
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   The above exception was the 
direct cause of the following exception:
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   Traceback (most recent call 
last):
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 1185, 
in do_execute
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   self._execute()
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 1556, 
in _execute
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.preparePlaybooks(args)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2120, 
in preparePlaybooks
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.preparePlaybook(jobdir_playbook, playbook, args)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2177, 
in preparePlaybook
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.prepareRole(jobdir_playbook, role, args)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2367, 
in prepareRole
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.prepareZuulRole(jobdir_playbook, role, args, role_info)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2423, 
in prepareZuulRole
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   path = 
self.checkoutUntrustedProject(project, branch, args)
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:  

  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2295, 
in checkoutUntrustedProject
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
repo.getBranchHead(branch).hexsha
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
^^
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/merger/merger.py", line 465, in 
getBranchHead
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   branch_head = 
repo.heads[branch]
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: 
~~
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/git/util.py", line 1127, in __getitem__
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   raise IndexError("No 
item found with id %r" % (self._prefix + index)) from e
  2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   IndexError: No item found 
with id '2.3.0'

  The parent job[1], which pins neutron-tempest-plugin to 2.3.0, also
  passes, but not the 

[Yahoo-eng-team] [Bug 2045383] [NEW] [xena] nftables periodic jobs fails with RETRY_LIMIT

2023-11-30 Thread yatin
Public bug reported:

Since 18th Nov nftables jobs are failing with RETRY_LIMIT and no logs are
available. These jobs fail quickly and the last task seen running in the
console is "Preparing job workspace".

This looks like a regression with the zuul change
https://review.opendev.org/c/zuul/zuul/+/900489.

@fungi checked the executor logs and it fails as:-
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: [e: 
a4e70e9b208f479e98334c09305d2013] [build: 3c41d75624274d2e8d7fd62ae332c31d] 
Exception while executing job
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   Traceback (most recent call 
last):
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/git/util.py", line 1125, in __getitem__
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   return getattr(self, index)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:  
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/git/util.py", line 1114, in __getattr__
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   return 
list.__getattribute__(self, attr)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:  
^
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   AttributeError: 'IterableList' 
object has no attribute '2.3.0'
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   The above exception was the 
direct cause of the following exception:
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   Traceback (most recent call 
last):
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 1185, 
in do_execute
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   self._execute()
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 1556, 
in _execute
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   self.preparePlaybooks(args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2120, 
in preparePlaybooks
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.preparePlaybook(jobdir_playbook, playbook, args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2177, 
in preparePlaybook
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.prepareRole(jobdir_playbook, role, args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2367, 
in prepareRole
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
self.prepareZuulRole(jobdir_playbook, role, args, role_info)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2423, 
in prepareZuulRole
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   path = 
self.checkoutUntrustedProject(project, branch, args)
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:  

2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/executor/server.py", line 2295, 
in checkoutUntrustedProject
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   
repo.getBranchHead(branch).hexsha
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   ^^
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/zuul/merger/merger.py", line 465, in 
getBranchHead
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   branch_head = 
repo.heads[branch]
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: 
~~
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob: File 
"/usr/local/lib/python3.11/site-packages/git/util.py", line 1127, in __getitem__
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   raise IndexError("No item 
found with id %r" % (self._prefix + index)) from e
2023-11-30 14:08:14,253 ERROR zuul.AnsibleJob:   IndexError: No item found with 
id '2.3.0'

The parent job[1], which pins neutron-tempest-plugin to 2.3.0, does pass,
but not the inherited ones; the same can be seen in the test patch[2].

Builds:- https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables&job_name=neutron-ovs-tempest-plugin-scenario-iptables_hybrid-nftables&project=openstack%2Fneutron&branch=stable%2Fxena&skip=0

[1] 
https://zuul.openstack.org/builds?job_name=neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-xena
[2] https://review.opendev.org/c/openstack/neutron/+/902296
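
For reference, a small GitPython sketch of what the traceback boils down to
(the repository path is a placeholder; the point is that '2.3.0' exists as a
tag, not as a branch, so the branch lookup raises the same IndexError):

    import git  # GitPython, as used by the zuul executor per the traceback

    repo = git.Repo('/path/to/neutron-tempest-plugin')  # hypothetical clone with a 2.3.0 tag

    try:
        repo.heads['2.3.0']          # branch lookup
    except IndexError as exc:
        print('branch lookup failed:', exc)  # No item found with id '2.3.0'

    print(repo.tags['2.3.0'].commit.hexsha)  # the same name resolves fine as a tag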

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.

[Yahoo-eng-team] [Bug 2045270] [NEW] [xena/yoga/zed] devstack-tobiko-neutron job broken

2023-11-30 Thread yatin
Public bug reported:

These jobs are broken as they run on focal and run tests which shouldn't run
on focal as per https://review.opendev.org/c/openstack/neutron/+/871982
(included in antelope+).


Example failure 
https://d5a9a78dc7c742990ec8-242e23f01f4a3c50d38acf4a24e0c600.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/yoga/devstack-tobiko-neutron/21d79ad/tobiko_results_02_create_neutron_resources_neutron.html

Builds:- https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron&project=openstack%2Fneutron&branch=stable%2Fxena&branch=stable%2Fyoga&branch=stable%2Fzed&skip=0

That patch is not available before zed, so for older branches those tests
need to be skipped in some other way.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045270

Title:
  [xena/yoga/zed] devstack-tobiko-neutron job broken

Status in neutron:
  New

Bug description:
  These jobs are broken as they run on focal and run tests which shouldn't run
  on focal as per https://review.opendev.org/c/openstack/neutron/+/871982
  (included in antelope+).

  
  Example failure 
https://d5a9a78dc7c742990ec8-242e23f01f4a3c50d38acf4a24e0c600.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/yoga/devstack-tobiko-neutron/21d79ad/tobiko_results_02_create_neutron_resources_neutron.html

  Builds:- https://zuul.openstack.org/builds?job_name=devstack-tobiko-neutron&project=openstack%2Fneutron&branch=stable%2Fxena&branch=stable%2Fyoga&branch=stable%2Fzed&skip=0

  That patch is not available before zed, so for older branches those tests
  need to be skipped in some other way.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045270/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2043959] [NEW] Unit test jobs with neutron-lib master(3.9.0) failing

2023-11-19 Thread yatin
Public bug reported:

Since https://review.opendev.org/c/openstack/neutron-lib/+/895940 the job fails 
as:-
ft1.2: neutron.tests.unit.plugins.ml2.drivers.agent.test_capabilities.CapabilitiesTest.test_register
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
return f(self, *args, **kwargs)
   
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/plugins/ml2/drivers/agent/test_capabilities.py",
 line 47, in test_register
self._mgr.subscribe.assert_called_with(*args)
  File "/usr/lib/python3.11/unittest/mock.py", line 933, in assert_called_with
raise AssertionError(_error_message()) from cause
AssertionError: expected call not found.
Expected: subscribe(, , 
'after_init', )
Actual: subscribe(, , 
'after_init', , False)


ft3.1: neutron.tests.unit.services.logapi.drivers.test_manager.TestHandleResourceCallback.test_subscribe_resources_cb
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
return f(self, *args, **kwargs)
   
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/services/logapi/drivers/test_manager.py",
 line 189, in test_subscribe_resources_cb
self._cb_mgr.subscribe.assert_has_calls(assert_calls)
  File "/usr/lib/python3.11/unittest/mock.py", line 970, in assert_has_calls
raise AssertionError(
AssertionError: Calls not found.

ft4.1: neutron.tests.unit.services.trunk.rpc.test_backend.ServerSideRpcBackendTest.test___init__
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
return f(self, *args, **kwargs)
   
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/services/trunk/rpc/test_backend.py",
 line 60, in test___init__
self._mgr.subscribe.assert_has_calls(calls, any_order=True)
  File "/usr/lib/python3.11/unittest/mock.py", line 986, in assert_has_calls
raise AssertionError(
AssertionError: 'subscribe' does not contain all of (call(>, 'trunk', 'after_create', ), call(>, 'trunk', 'after_delete', ), call(>, 'subports', 'after_create', ), call(>, 'subports', 'after_delete', )) in its call list, 
found [call(>, 'trunk', 'after_create', , False), call(>, 'subports', 'after_create', , False), call(>, 'trunk', 'after_delete', , False), call(>, 'subports', 'after_delete', , False)] instead

Need to make these unit tests compatible with neutron-lib 3.9.0.
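
A hedged sketch of one way to do that (the callback, resource and priority
values below are placeholders, not the ones from the failing tests): match
the new trailing argument with mock.ANY instead of hard-coding it.

    from unittest import mock

    mgr = mock.Mock()
    cb = object()

    # What newer neutron-lib effectively records: one extra trailing
    # positional argument, shown as False in the failures above.
    mgr.subscribe(cb, 'process', 'after_init', 0, False)

    # The old 4-argument expectation no longer matches:
    #   mgr.subscribe.assert_called_with(cb, 'process', 'after_init', 0)
    # Accepting the extra argument without pinning its value does:
    mgr.subscribe.assert_called_with(cb, 'process', 'after_init', 0, mock.ANY)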

Example failure:-
https://25f3b33717093a9f4f4e-a759d6b54561529b072782a6b0052389.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/master/openstack-tox-py311-with-neutron-lib-master/87a570a/testr_results.html

Builds:-
https://zuul.openstack.org/builds?job_name=openstack-tox-py311-with-neutron-lib-master&job_name=openstack-tox-py311-with-sqlalchemy-master&project=openstack%2Fneutron&skip=0

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2043959

Title:
  Unit test jobs with neutron-lib master(3.9.0) failing

Status in neutron:
  New

Bug description:
  Since https://review.opendev.org/c/openstack/neutron-lib/+/895940 the job 
fails as:-
  ft1.2: neutron.tests.unit.plugins.ml2.drivers.agent.test_capabilities.CapabilitiesTest.test_register
  testtools.testresult.real._StringException: Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
  return f(self, *args, **kwargs)
 
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/plugins/ml2/drivers/agent/test_capabilities.py",
 line 47, in test_register
  self._mgr.subscribe.assert_called_with(*args)
File "/usr/lib/python3.11/unittest/mock.py", line 933, in assert_called_with
  raise AssertionError(_error_message()) from cause
  AssertionError: expected call not found.
  Expected: subscribe(, , 
'after_init', )
  Actual: subscribe(, , 
'after_init', , False)

  
  ft3.1: neutron.tests.unit.services.logapi.drivers.test_manager.TestHandleResourceCallback.test_subscribe_resources_cb
  testtools.testresult.real._StringException: Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
  return f(self, *args, **kwargs)
 
File 

[Yahoo-eng-team] [Bug 2042947] [NEW] [stable branches] nftables job inherits from openvswitch-iptables_hybrid master job

2023-11-07 Thread yatin
Public bug reported:

These jobs run in the periodic pipeline and are broken[1]; they inherit from
the OVS master jobs instead of the stable variant. This needs to be fixed.


[1] 
https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042947

Title:
  [stable branches] nftables job inherits from openvswitch-
  iptables_hybrid master job

Status in neutron:
  New

Bug description:
  These jobs run in the periodic pipeline and are broken[1]; they inherit from
  the OVS master jobs instead of the stable variant. This needs to be fixed.

  
  [1] 
https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042947/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2042941] [NEW] neutron-{ovn, ovs}-tempest-with-sqlalchemy-master jobs not installing sqlalchemy/alembic from source

2023-11-07 Thread yatin
Public bug reported:

neutron-ovn-tempest-with-sqlalchemy-master and
neutron-ovs-tempest-with-sqlalchemy-master jobs are expected to install
sqlalchemy and alembic from the main branch as defined in required-projects,
but they install released versions instead:-
required-projects:
  - name: github.com/sqlalchemy/sqlalchemy
override-checkout: main
  - openstack/oslo.db
  - openstack/neutron-lib
  - name: github.com/sqlalchemy/alembic
override-checkout: main


Builds:-
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-with-sqlalchemy-master&job_name=neutron-ovs-tempest-with-sqlalchemy-master&skip=0

Noticed it when other jobs running with sqlalchemy master are broken but
not these https://bugs.launchpad.net/neutron/+bug/2042939

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042941

Title:
  neutron-{ovn,ovs}-tempest-with-sqlalchemy-master jobs not installing
  sqlalchemy/alembic from source

Status in neutron:
  New

Bug description:
  neutron-ovn-tempest-with-sqlalchemy-master and
  neutron-ovs-tempest-with-sqlalchemy-master jobs are expected to install
  sqlalchemy and alembic from the main branch as defined in required-projects,
  but they install released versions instead:-
  required-projects:
- name: github.com/sqlalchemy/sqlalchemy
  override-checkout: main
- openstack/oslo.db
- openstack/neutron-lib
- name: github.com/sqlalchemy/alembic
  override-checkout: main

  
  Builds:-
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-with-sqlalchemy-master&job_name=neutron-ovs-tempest-with-sqlalchemy-master&skip=0

  Noticed it when other jobs running with sqlalchemy master are broken
  but not these https://bugs.launchpad.net/neutron/+bug/2042939

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042941/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2042939] [NEW] sqlalchemy-master jobs broken with ValueError: invalid literal for int() with base 10: '0b1'

2023-11-07 Thread yatin
Public bug reported:

Broken since https://github.com/sqlalchemy/sqlalchemy/commit/e93a5e89.

Fails like:-
2023-11-07 10:51:53.328284 | ubuntu-jammy | Failed to import test module: 
neutron.tests.unit.agent.common.test_ovs_lib
2023-11-07 10:51:53.328310 | ubuntu-jammy | Traceback (most recent call last):
2023-11-07 10:51:53.328335 | ubuntu-jammy |   File 
"/usr/lib/python3.10/unittest/loader.py", line 436, in _find_test_path
2023-11-07 10:51:53.328360 | ubuntu-jammy | module = 
self._get_module_from_name(name)
2023-11-07 10:51:53.328385 | ubuntu-jammy |   File 
"/usr/lib/python3.10/unittest/loader.py", line 377, in _get_module_from_name
2023-11-07 10:51:53.328411 | ubuntu-jammy | __import__(name)
2023-11-07 10:51:53.328436 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/agent/common/test_ovs_lib.py",
 line 18, in 
2023-11-07 10:51:53.328461 | ubuntu-jammy | from neutron_lib import 
exceptions
2023-11-07 10:51:53.328486 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/neutron_lib/__init__.py",
 line 17, in 
2023-11-07 10:51:53.328511 | ubuntu-jammy | from neutron_lib.db import api  
# noqa
2023-11-07 10:51:53.328537 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 22, in 
2023-11-07 10:51:53.328569 | ubuntu-jammy | from oslo_db.sqlalchemy import 
enginefacade
2023-11-07 10:51:53.328916 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 28, in 
2023-11-07 10:51:53.328952 | ubuntu-jammy | from oslo_db.sqlalchemy import 
engines
2023-11-07 10:51:53.328979 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/oslo_db/sqlalchemy/engines.py",
 line 36, in 
2023-11-07 10:51:53.329004 | ubuntu-jammy | from oslo_db.sqlalchemy import 
compat
2023-11-07 10:51:53.329030 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/oslo_db/sqlalchemy/compat/__init__.py",
 line 18, in 
2023-11-07 10:51:53.329055 | ubuntu-jammy | _vers = 
versionutils.convert_version_to_tuple(__version__)
2023-11-07 10:51:53.329080 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 90, in convert_version_to_tuple
2023-11-07 10:51:53.329105 | ubuntu-jammy | return tuple(int(part) for part 
in version_str.split('.'))
2023-11-07 10:51:53.329130 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/oslo_utils/versionutils.py",
 line 90, in 
2023-11-07 10:51:53.329162 | ubuntu-jammy | return tuple(int(part) for part 
in version_str.split('.'))
2023-11-07 10:51:53.329469 | ubuntu-jammy | ValueError: invalid literal for 
int() with base 10: '0b1'
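
The last frame reduces to a one-liner (the exact version string is an
assumption; any pre-release version such as '2.0.0b1' has a component that
int() cannot parse):

    # oslo_utils.versionutils.convert_version_to_tuple, per the traceback above,
    # effectively does: tuple(int(part) for part in version_str.split('.'))
    version_str = '2.0.0b1'   # hypothetical sqlalchemy main pre-release version

    try:
        print(tuple(int(part) for part in version_str.split('.')))
    except ValueError as exc:
        print(exc)   # invalid literal for int() with base 10: '0b1'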

Example failure:-
https://46050a1a8fa787c1d655-839a87b8a49ad5b15d7c39aaacbfa49e.ssl.cf5.rackcdn.com/900087/2/check/openstack-
tox-py310-with-sqlalchemy-master/429b0c2/job-output.txt

Builds:- https://zuul.openstack.org/builds?job_name=openstack-tox-py310-with-sqlalchemy-master&job_name=neutron-functional-with-sqlalchemy-master&skip=0

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042939

Title:
  sqlalchemy-master jobs broken with ValueError: invalid literal for
  int() with base 10: '0b1'

Status in neutron:
  New

Bug description:
  Broken since https://github.com/sqlalchemy/sqlalchemy/commit/e93a5e89.

  Fails like:-
  2023-11-07 10:51:53.328284 | ubuntu-jammy | Failed to import test module: 
neutron.tests.unit.agent.common.test_ovs_lib
  2023-11-07 10:51:53.328310 | ubuntu-jammy | Traceback (most recent call last):
  2023-11-07 10:51:53.328335 | ubuntu-jammy |   File 
"/usr/lib/python3.10/unittest/loader.py", line 436, in _find_test_path
  2023-11-07 10:51:53.328360 | ubuntu-jammy | module = 
self._get_module_from_name(name)
  2023-11-07 10:51:53.328385 | ubuntu-jammy |   File 
"/usr/lib/python3.10/unittest/loader.py", line 377, in _get_module_from_name
  2023-11-07 10:51:53.328411 | ubuntu-jammy | __import__(name)
  2023-11-07 10:51:53.328436 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/agent/common/test_ovs_lib.py",
 line 18, in 
  2023-11-07 10:51:53.328461 | ubuntu-jammy | from neutron_lib import 
exceptions
  2023-11-07 10:51:53.328486 | ubuntu-jammy |   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/neutron_lib/__init__.py",
 line 17, in 
  2023-11-07 10:51:53.328511 | ubuntu-jammy | from neutron_lib.db import 
api  # noqa
  

[Yahoo-eng-team] [Bug 2040517] [NEW] host not removed from table ml2_vxlan_endpoints with the agent delete

2023-10-25 Thread yatin
Public bug reported:

After deleting an agent, there is a stale entry for the host in the table
'ml2_vxlan_endpoints'. A use case is node scale down: an agent is deleted,
but the host entry is not removed from ml2_vxlan_endpoints.

I have not checked other topologies, but the same should apply to the other
similar tables 'ml2_gre_endpoints' and 'ml2_geneve_endpoints'.
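
A hedged way to confirm the stale row directly in the neutron database (the
connection URL is a placeholder and the column names are taken from the ml2
vxlan type driver's endpoints table, so treat them as assumptions):

    from sqlalchemy import create_engine, text

    engine = create_engine('mysql+pymysql://root:secret@127.0.0.1/neutron')  # placeholder URL

    with engine.connect() as conn:
        rows = conn.execute(
            text('SELECT ip_address, udp_port, host FROM ml2_vxlan_endpoints '
                 'WHERE host = :host'),
            {'host': 'ykarel-temp3'},
        ).fetchall()

    # A non-empty result after the agent has been deleted is the stale entry.
    print(rows)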

# Ensure agent is stopped or node is removed.

$ openstack network agent show 338d13fc-3483-414f-bc55-5b2cbb0db189 --fit-width
+---++
| Field | Value 
 |
+---++
| admin_state_up| UP
 |
| agent_type| Open vSwitch agent
 |
| alive | XXX   
 |
| availability_zone | None  
 |
| binary| neutron-openvswitch-agent 
 |
| configuration | {'arp_responder_enabled': True, 'baremetal_smartnic': 
False, 'bridge_mappings': {'public': 'br-ex'}, 'datapath_type': 'system',   
 |
|   | 'devices': 0, 'enable_distributed_routing': True, 
'extensions': [], 'in_distributed_mode': True, 'integration_bridge': 'br-int',  
 |
|   | 'l2_population': True, 'log_agent_heartbeats': False, 
'ovs_capabilities': {'datapath_types': ['netdev', 'system'], 'iface_types': 
 |
|   | ['bareudp', 'erspan', 'geneve', 'gre', 'gtpu', 
'internal', 'ip6erspan', 'ip6gre', 'lisp', 'patch', 'stt', 'system', 'tap', 
'vxlan']},  |
|   | 'ovs_hybrid_plug': False, 'resource_provider_bandwidths': 
{'br-ex': {'egress': 100, 'ingress': 100}},  |
|   | 'resource_provider_hypervisors': {'br-ex': 
'ykarel-temp3', 'rp_tunnelled': 'ykarel-temp3'}, 
'resource_provider_inventory_defaults':|
|   | {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 
'reserved': 0}, 'resource_provider_packet_processing_inventory_defaults': |
|   | {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 
'reserved': 0}, 'resource_provider_packet_processing_with_direction': {}, |
|   | 'resource_provider_packet_processing_without_direction': 
{}, 'tunnel_types': ['vxlan'], 'tunneling_ip': '10.0.109.173',|
|   | 'vhostuser_socket_dir': '/var/run/openvswitch'}   
 |
| created_at| 2023-10-25 14:30:17   
 |
| description   | None  
 |
| ha_state  | None  
 |
| host  | ykarel-temp3  
 |
| id| 338d13fc-3483-414f-bc55-5b2cbb0db189  
 |
| last_heartbeat_at | 2023-10-25 14:30:17   
 |
| resources_synced  | None  
 |
| started_at| 2023-10-25 14:30:17   
 |
| topic | N/A   
 |
+---++

$ openstack network agent 

[Yahoo-eng-team] [Bug 2039940] [NEW] test_resize_volume_backed_server_confirm which fails randomly with Kernel panic

2023-10-20 Thread yatin
Public bug reported:

tempest.api.compute.servers.test_server_actions.ServerActionsTestOtherA.test_resize_volume_backed_server_confirm
fails randomly with:-

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in 
_get_ssh_connection
ssh.connect(self.host, port=self.port, username=self.username,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/paramiko/client.py",
 line 386, in connect
sock.connect(addr)
TimeoutError: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/decorators.py", line 106, in wrapper
raise exc
  File "/opt/stack/tempest/tempest/lib/decorators.py", line 98, in wrapper
return f(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
return f(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", 
line 510, in test_resize_volume_backed_server_confirm
linux_client.validate_authentication()
  File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 31, in wrapper
return function(self, *args, **kwargs)
  File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 123, in validate_authentication
self.ssh_client.test_connection_auth()
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 245, in 
test_connection_auth
connection = self._get_ssh_connection()
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 155, in 
_get_ssh_connection
raise exceptions.SSHTimeout(host=self.host,
tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.65 via SSH timed 
out.
User: cirros, Password: None

Guest console log says it's a kernel panic:-
info: initramfs: up at 7.36
[8.403290] virtio_blk virtio2: [vda] 2097152 512-byte logical blocks (1.07 
GB/1.00 GiB)
[8.440219] GPT:Primary header thinks Alt. header is not at the end of the 
disk.
[8.440726] GPT:229375 != 2097151
[8.440967] GPT:Alternate GPT header not at the end of the disk.
[8.441252] GPT:229375 != 2097151
[8.441503] GPT: Use GNU Parted to correct GPT errors.
[8.974064] virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not 
called
[9.068224] random: crng init done
currently loaded modules: 8021q 8139cp 8390 9pnet 9pnet_virtio ahci cec drm 
drm_kms_helper e1000 e1000e failover fb_sys_fops garp hid hid_generic 
ip6_udp_tunnel ip_tables isofs libahci libcrc32c llc mii mrp ne2k_pci 
net_failover nls_ascii nls_iso8859_1 nls_utf8 pcnet32 qemu_fw_cfg rc_core sctp 
stp syscopyarea sysfillrect sysimgblt udp_tunnel usbhid virtio_blk 
virtio_dma_buf virtio_gpu virtio_input virtio_net virtio_rng virtio_scsi 
x_tables 
info: initramfs loading root from /dev/vda1
/sbin/init: can't load library 'libtirpc.so.3'
[   11.288963] Kernel panic - not syncing: Attempted to kill init! 
exitcode=0x1000
[   11.290203] CPU: 0 PID: 1 Comm: init Not tainted 5.15.0-71-generic #78-Ubuntu
[   11.290952] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 
1.15.0-1 04/01/2014
[   11.291870] Call Trace:
[   11.292973]  
[   11.293458]  show_stack+0x52/0x5c
[   11.294280]  dump_stack_lvl+0x4a/0x63
[   11.294720]  dump_stack+0x10/0x16
[   11.295179]  panic+0x15c/0x334
[   11.295587]  ? exit_to_user_mode_prepare+0x37/0xb0
[   11.296118]  do_exit.cold+0x15/0xa0
[   11.296460]  __x64_sys_exit+0x1b/0x20
[   11.296880]  do_syscall_64+0x5c/0xc0
[   11.297283]  ? ksys_write+0x67/0xf0
[   11.297672]  ? exit_to_user_mode_prepare+0x37/0xb0
[   11.298172]  ? syscall_exit_to_user_mode+0x27/0x50
[   11.298683]  ? __x64_sys_write+0x19/0x20
[   11.299151]  ? do_syscall_64+0x69/0xc0
[   11.299644]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
[   11.300611] RIP: 0033:0x7f147a37555e
[   11.301938] Code: 05 d7 2a 00 00 4c 89 f9 bf 02 00 00 00 48 8d 35 fb 0d 00 
00 48 8b 10 31 c0 e8 50 d2 ff ff bf 10 00 00 00 b8 3c 00 00 00 0f 05 <48> 8d 15 
f3 2a 00 00 f7 d8 89 02 48 83 ec 20 49 8b 8c 24 b8 00 00
[   11.312012] RSP: 002b:7fff85488500 EFLAGS: 0207 ORIG_RAX: 
003c
[   11.318360] RAX: ffda RBX: 7fff854897b0 RCX: 7f147a37555e
[   11.324215] RDX: 0002 RSI: 1000 RDI: 0010
[   11.331344] RBP: 7fff85489790 R08: 7f147a36e000 R09: 7f147a36e01a
[   11.338406] R10: 0001 R11: 0207 R12: 7f147a36f040
[   11.347090] R13: 004bae50 R14:  R15: 00403d66
[   11.354220]  
[   11.362227] Kernel Offset: 0x3640 from 0x8100 (relocation 
range: 0x8000-0xbfff)
[   11.369248] ---[ end Kernel panic - not syncing: Attempted to kill init! 
exitcode=0x1000 ]---

As per opensearch[1] there are 16 hits in the last 12 days across multiple
jobs in the master/stable/2023.2 branches.

Jobs:-
tempest-integrated-networking 31.3%

[Yahoo-eng-team] [Bug 2039417] [NEW] [master][stable/2023.2][functional] test_maintenance.TestMaintenance tests fails randomly

2023-10-16 Thread yatin
Public bug reported:

The functional test fails randomly as:-
ft1.3: neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_maintenance.TestMaintenance.test_port
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py",
 line 306, in test_port
neutron_net = self._create_network('network1')
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py",
 line 68, in _create_network
return self.deserialize(self.fmt, res)['network']
KeyError: 'network'

neutron server log when it fails:-
2023-10-06 08:16:33.966 37713 DEBUG neutron.db.ovn_revision_numbers_db [None 
req-34e699fe-0fdc-4373-8b91-5b7dd1ac60cb - 46f70361-ba71-4bd0-9769-3573fd227c4b 
- - - -] create_initial_revision uuid=6ab7bd53-277b-4133-ac32-52b1d0c90f78, 
type=security_groups, rev=-1 create_initial_revision 
/home/zuul/src/opendev.org/openstack/neutron/neutron/db/ovn_revision_numbers_db.py:108
2023-10-06 08:16:33.973 37713 ERROR ovsdbapp.backend.ovs_idl.transaction [-] 
OVSDB Error: The transaction failed because the IDL has been configured to 
require a database lock but didn't get it yet or has already lost it
2023-10-06 08:16:33.974 37713 ERROR ovsdbapp.backend.ovs_idl.transaction [None 
req-34e699fe-0fdc-4373-8b91-5b7dd1ac60cb - 46f70361-ba71-4bd0-9769-3573fd227c4b 
- - - -] Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 118, in run
txn.results.put(txn.do_commit())
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 123, in do_commit
raise RuntimeError(msg)
RuntimeError: OVSDB Error: The transaction failed because the IDL has been 
configured to require a database lock but didn't get it yet or has already lost 
it

2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager [None 
req-34e699fe-0fdc-4373-8b91-5b7dd1ac60cb - 46f70361-ba71-4bd0-9769-3573fd227c4b 
- - - -] Error during notification for 
neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver._create_security_group-11049860
 security_group, after_create: RuntimeError: OVSDB Error: The transaction 
failed because the IDL has been configured to require a database lock but 
didn't get it yet or has already lost it
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron_lib/callbacks/manager.py",
 line 181, in _notify_loop
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, payload=payload)
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 409, in _create_security_group
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
self._ovn_client.create_security_group(context,
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 2328, in create_security_group
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager with 
self._nb_idl.transaction(check_error=True) as txn:
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.10/contextlib.py", line 142, in __exit__
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
next(self.gen)
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py",
 line 272, in transaction
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager with 
super(OvsdbNbOvnIdl, self).transaction(*args, **kwargs) as t:
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.10/contextlib.py", line 142, in __exit__
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager 
next(self.gen)
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/ovsdbapp/api.py",
 line 114, in transaction
2023-10-06 08:16:33.975 37713 ERROR neutron_lib.callbacks.manager with 

[Yahoo-eng-team] [Bug 2039066] [NEW] openstacksdk-functional-devstack-networking periodic job broken in xena/yoga/zed

2023-10-11 Thread yatin
Public bug reported:

These periodic jobs are failing since
https://review.opendev.org/c/openstack/openstacksdk/+/874695 merged as
it added some new tests for designate zone share api.

Tests fail as:-
ft2.2: openstack.tests.functional.dns.v2.test_zone_share.TestZoneShare.test_find_zone_share
testtools.testresult.real._StringException: Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/tests/functional/dns/v2/test_zone_share.py",
 line 89, in test_find_zone_share
orig_zone_share = self.operator_cloud.dns.create_zone_share(
  File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/dns/v2/_proxy.py", 
line 631, in create_zone_share
return self._create(
  File "/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/proxy.py", 
line 605, in _create
return res.create(self, base_path=base_path)
  File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/resource.py", line 
1535, in create
self._translate_response(response, **response_kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/resource.py", line 
1287, in _translate_response
exceptions.raise_from_response(response, error_message=error_message)
  File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/exceptions.py", 
line 250, in raise_from_response
raise cls(
openstack.exceptions.HttpException: HttpException: 405: Client Error for url: 
https://158.69.78.71/dns/v2/zones/5f647760-ebd2-4e3b-84c1-e2f4e7bd4dea/shares, 
METHOD NOT ALLOWED

zone sharing is supported in designate only since 2023.1
https://review.opendev.org/c/openstack/designate/+/726334

We do run this job in wallaby as well but it is not impacted because
openstacksdk stable/wallaby is used in those jobs, as done in
https://review.opendev.org/c/openstack/openstacksdk/+/885898

Builds:- https://zuul.openstack.org/builds?job_name=openstacksdk-functional-devstack-networking&project=openstack%2Fneutron&branch=stable%2Fxena&branch=stable%2Fyoga&branch=stable%2Fzed

** Affects: neutron
 Importance: Medium
 Assignee: yatin (yatinkarel)
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2039066

Title:
  openstacksdk-functional-devstack-networking periodic job broken in
  xena/yoga/zed

Status in neutron:
  Triaged

Bug description:
  These periodic jobs are failing since
  https://review.opendev.org/c/openstack/openstacksdk/+/874695 merged as
  it added some new tests for designate zone share api.

  Tests fail as:-
  ft2.2: openstack.tests.functional.dns.v2.test_zone_share.TestZoneShare.test_find_zone_share
  testtools.testresult.real._StringException: Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/tests/functional/dns/v2/test_zone_share.py",
 line 89, in test_find_zone_share
  orig_zone_share = self.operator_cloud.dns.create_zone_share(
File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/dns/v2/_proxy.py", 
line 631, in create_zone_share
  return self._create(
File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/proxy.py", line 
605, in _create
  return res.create(self, base_path=base_path)
File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/resource.py", line 
1535, in create
  self._translate_response(response, **response_kwargs)
File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/resource.py", line 
1287, in _translate_response
  exceptions.raise_from_response(response, error_message=error_message)
File 
"/home/zuul/src/opendev.org/openstack/openstacksdk/openstack/exceptions.py", 
line 250, in raise_from_response
  raise cls(
  openstack.exceptions.HttpException: HttpException: 405: Client Error for url: 
https://158.69.78.71/dns/v2/zones/5f647760-ebd2-4e3b-84c1-e2f4e7bd4dea/shares, 
METHOD NOT ALLOWED

  zone sharing is supported in designate only since 2023.1
  https://review.opendev.org/c/openstack/designate/+/726334

  We do run this job in wallaby as well but it is not impacted because
  openstacksdk stable/wallaby is used in those jobs, as done in
  https://review.opendev.org/c/openstack/openstacksdk/+/885898

  Builds:- https://zuul.openstack.org/builds?job_name=openstacksdk-functional-devstack-networking&project=openstack%2Fneutron&branch=stable%2Fxena&branch=stable%2Fyoga&branch=stable%2Fzed

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039066/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2039012] [NEW] [fwaas] Scenario job fails randomly

2023-10-11 Thread yatin
Public bug reported:

Seen a couple of occurrences across different releases:-

Failures:-
- https://zuul.opendev.org/t/openstack/build/30c625cd86aa40e6b6252689a7e88910 
neutron-tempest-plugin-fwaas-2023-1
Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 269, in test_create_show_delete_firewall_group
body = self.firewall_groups_client.create_firewall_group(
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 25, in create_firewall_group
return self.create_resource(uri, post_data)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 62, in 
create_resource
resp, body = self.post(req_uri, req_post_data)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 300, in post
return self.request('POST', url, extra_headers, headers, body, chunked)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 922, in 
_error_checker
raise exceptions.ServerFault(resp_body, resp=resp,
tempest.lib.exceptions.ServerFault: Got server fault
Details: Request Failed: internal server error while processing your request.

-
https://zuul.opendev.org/t/openstack/build/0d0fbfc009cb4142920ddb96e9695ec0
neutron-tempest-plugin-fwaas-2023-2

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/scenario/test_fwaas_v2.py",
 line 220, in test_icmp_reachability_scenarios
self.check_vm_connectivity(
  File "/opt/stack/tempest/tempest/scenario/manager.py", line 983, in 
check_vm_connectivity
self.assertTrue(self.ping_ip_address(ip_address,
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Timed out waiting for 172.24.5.235 to 
become reachable


- https://zuul.opendev.org/t/openstack/build/5ed8731220654e9fb67ba910a3f08c25 
neutron-tempest-plugin-fwaas-zed
Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 344, in test_update_firewall_group
self.firewall_groups_client.delete_firewall_group(fwg_id)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 38, in delete_firewall_group
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 'FirewallGroupInUse', 'message': 'Firewall group 
a20ead7e-40ca-488b-a751-a36e5fb4119a is still active.', 'detail': ''}


- https://zuul.opendev.org/t/openstack/build/fbe68294562a43b492dd2ba66dec9d43 
neutron-tempest-plugin-fwaas-2023-2
Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/scenario/test_fwaas_v2.py",
 line 220, in test_icmp_reachability_scenarios
self.check_vm_connectivity(
  File "/opt/stack/tempest/tempest/scenario/manager.py", line 983, in 
check_vm_connectivity
self.assertTrue(self.ping_ip_address(ip_address,
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true : Timed out waiting for 172.24.5.126 to 
become reachable

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2039012

Title:
  [fwaas] Scenario job fails randomly

Status in neutron:
  New

Bug description:
  Seen a couple of occurrences across different releases:-

  Failures:-
  - https://zuul.opendev.org/t/openstack/build/30c625cd86aa40e6b6252689a7e88910 
neutron-tempest-plugin-fwaas-2023-1
  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 269, in test_create_show_delete_firewall_group
  body = self.firewall_groups_client.create_firewall_group(
File 

[Yahoo-eng-team] [Bug 2038541] [NEW] LinuxBridgeARPSpoofTestCase functional tests fails with latest jammy kernel 5.15.0-86.96

2023-10-05 Thread yatin
Public bug reported:

Tests fail while running ebtables(['-D', chain] + rule.split()) with:-
2023-10-05 12:09:19.307 41358 ERROR neutron.agent.linux.utils [None 
req-defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd: ['ip', 
'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3fe37', 'ebtables', '-t', 
'nat', '--concurrent', '-D', 'neutronMAC-test-veth09e6dc', '-i', 
'test-veth09e6dc', '--among-src', 'fa:16:3e:ac:fd:b6', '-j', 'RETURN']; Stdin: 
; Stdout: ; Stderr: ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid 
argument): rule in chain neutronMAC-test-veth09e6dc

2023-10-05 12:09:29.576 41358 ERROR neutron.agent.linux.utils [None req-
defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd:
['ip', 'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3fe37',
'ebtables', '-t', 'nat', '--concurrent', '-D', 'neutronMAC-test-
veth09e6dc', '-i', 'test-veth09e6dc', '--among-src',
'fa:16:3e:ac:fd:b6', '-j', 'RETURN']; Stdin: ; Stdout: ; Stderr:
ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid argument):
rule in chain neutronMAC-test-veth09e6dc

2023-10-05 12:09:50.099 41358 ERROR neutron.agent.linux.utils [None req-
defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd:
['ip', 'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3fe37',
'ebtables', '-t', 'nat', '--concurrent', '-D', 'neutronMAC-test-
veth09e6dc', '-i', 'test-veth09e6dc', '--among-src',
'fa:16:3e:ac:fd:b6', '-j', 'RETURN']; Stdin: ; Stdout: ; Stderr:
ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid argument):
rule in chain neutronMAC-test-veth09e6dc

The new kernel includes the below changes, which triggered this; they are described in
https://launchpad.net/ubuntu/+source/linux/5.15.0-86.96:-
- netfilter: nf_tables: disallow element updates of bound anonymous sets
- netfilter: nf_tables: reject unbound anonymous set before commit phase
- netfilter: nf_tables: reject unbound chain set before commit phase
- netfilter: nf_tables: disallow updates of anonymous sets

The following two tests fail:-
- test_arp_protection_update
- test_arp_fails_incorrect_mac_protection

** Affects: neutron
 Importance: Critical
 Status: Triaged


** Tags: functional-tests gate-failure

** Tags added: functional-tests gate-failure

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => Critical

** Description changed:

  Tests fails while running ebtables(['-D', chain] + rule.split()) with:-
  2023-10-05 12:09:19.307 41358 ERROR neutron.agent.linux.utils [None 
req-defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd: ['ip', 
'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3fe37', 'ebtables', '-t', 
'nat', '--concurrent', '-D', 'neutronMAC-test-veth09e6dc', '-i', 
'test-veth09e6dc', '--among-src', 'fa:16:3e:ac:fd:b6', '-j', 'RETURN']; Stdin: 
; Stdout: ; Stderr: ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid 
argument): rule in chain neutronMAC-test-veth09e6dc
  
  2023-10-05 12:09:29.576 41358 ERROR neutron.agent.linux.utils [None req-
  defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd:
  ['ip', 'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3fe37',
  'ebtables', '-t', 'nat', '--concurrent', '-D', 'neutronMAC-test-
  veth09e6dc', '-i', 'test-veth09e6dc', '--among-src',
  'fa:16:3e:ac:fd:b6', '-j', 'RETURN']; Stdin: ; Stdout: ; Stderr:
  ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid argument):
  rule in chain neutronMAC-test-veth09e6dc
  
  2023-10-05 12:09:50.099 41358 ERROR neutron.agent.linux.utils [None req-
  defd197a-c4e2-4761-a4cc-cc960a3ff71a - - - - - -] Exit code: 4; Cmd:
  ['ip', 'netns', 'exec', 'test-b58b5cf9-5018-4801-aacb-8b00fae3fe37',
  'ebtables', '-t', 'nat', '--concurrent', '-D', 'neutronMAC-test-
  veth09e6dc', '-i', 'test-veth09e6dc', '--among-src',
  'fa:16:3e:ac:fd:b6', '-j', 'RETURN']; Stdin: ; Stdout: ; Stderr:
  ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid argument):
  rule in chain neutronMAC-test-veth09e6dc
  
- The new kernel includes below changes which have triggered this:-
- - netfilter: nf_tables: disallow element updates of bound anonymous sets
- - netfilter: nf_tables: reject unbound anonymous set before commit phase
- - netfilter: nf_tables: reject unbound chain set before commit phase
- - netfilter: nf_tables: disallow updates of anonymous sets
+ The new kernel includes below changes which have triggered this, described in 
https://launchpad.net/ubuntu/+source/linux/5.15.0-86.96:-
+ - netfilter: nf_tables: disallow element updates of bound anonymous sets
+ - netfilter: nf_tables: reject unbound anonymous set before commit phase
+ - netfilter: nf_tables: reject unbound chain set before commit phase
+ - netfilter: nf_tables: disallow updates of anonymous sets
  
  Following two test fails:-
  - test_arp_protection_update
  - test_arp_fails_incorrect_mac_protection

-- 
You 

[Yahoo-eng-team] [Bug 2035578] [NEW] [stable branches] devstack-tobiko-neutron job Fails with InvocationError('could not find executable python', None)

2023-09-14 Thread yatin
Public bug reported:

It started failing[1] since the job switched to ubuntu-jammy[2].

Fails as below:-
2023-09-13 16:46:18.124882 | TASK [tobiko-tox : run sanity test cases before 
creating resources]
2023-09-13 16:46:19.463567 | controller | neutron_sanity create: 
/home/zuul/src/opendev.org/x/tobiko/.tox/py3
2023-09-13 16:46:20.518574 | controller | neutron_sanity installdeps: 
-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt
2023-09-13 16:46:20.519390 | controller | ERROR: could not install deps 
[-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
2023-09-13 16:46:20.520263 | controller | ___ 
summary 
2023-09-13 16:46:20.555843 | controller | ERROR:   neutron_sanity: could not 
install deps [-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
2023-09-13 16:46:21.141713 | controller | ERROR
2023-09-13 16:46:21.142024 | controller | {
2023-09-13 16:46:21.142117 | controller |   "delta": "0:00:01.484351",
2023-09-13 16:46:21.142197 | controller |   "end": "2023-09-13 16:46:20.556249",
2023-09-13 16:46:21.142276 | controller |   "failed_when_result": true,
2023-09-13 16:46:21.142353 | controller |   "msg": "non-zero return code",
2023-09-13 16:46:21.142688 | controller |   "rc": 1,
2023-09-13 16:46:21.142770 | controller |   "start": "2023-09-13 
16:46:19.071898"
2023-09-13 16:46:21.142879 | controller | }
2023-09-13 16:46:21.142972 | controller | ERROR: Ignoring Errors


Example failures zed/stable2023.1:-
- https://zuul.opendev.org/t/openstack/build/591dae67122444daa35195f7458ffafe
- https://zuul.opendev.org/t/openstack/build/5838bf0704b247dc8f1eb12367b1d33e
- https://zuul.opendev.org/t/openstack/build/8d2e22ff171944b0b549c12e1aaac476

Wallaby/Xena/Yoga builds started failing with:-
++ functions:write_devstack_version:852 :   git log '--format=%H %s %ci' -1
+ ./stack.sh:main:230  :   
SUPPORTED_DISTROS='bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03'
+ ./stack.sh:main:232  :   [[ ! jammy =~ 
bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03
 ]]
+ ./stack.sh:main:233  :   echo 'WARNING: this script has 
not been tested on jammy'

Example:-
- https://zuul.opendev.org/t/openstack/build/0bd0421e30804b7aa9b6ea032d271be7
- https://zuul.opendev.org/t/openstack/build/8e06dfc0ccd940f3ab71edc0ec93466c
- https://zuul.opendev.org/t/openstack/build/899634e90ee94e0294985747075fb26c

Even before this change these jobs were broken, but then it was the tests that
failed rather than the test setup; that can be handled once the current issues
are cleared.


[1] 
https://zuul.opendev.org/t/openstack/builds?job_name=devstack-tobiko-neutron=stable%2F2023.1
[2] https://review.opendev.org/c/x/devstack-plugin-tobiko/+/893662?usp=search

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035578

Title:
  [stable branches] devstack-tobiko-neutron job Fails with
  InvocationError('could not find executable python', None)

Status in neutron:
  New

Bug description:
  It started failing[1] since the job switched to ubuntu-jammy[2].

  Fails as below:-
  2023-09-13 16:46:18.124882 | TASK [tobiko-tox : run sanity test cases before 
creating resources]
  2023-09-13 16:46:19.463567 | controller | neutron_sanity create: 
/home/zuul/src/opendev.org/x/tobiko/.tox/py3
  2023-09-13 16:46:20.518574 | controller | neutron_sanity installdeps: 
-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt
  2023-09-13 16:46:20.519390 | controller | ERROR: could not install deps 
[-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
  2023-09-13 16:46:20.520263 | controller | 

[Yahoo-eng-team] [Bug 1949606] Re: QEMU >= 5.0.0 with -accel tcg uses a tb-size of 1GB causing OOM issues in CI

2023-09-12 Thread yatin
Fixed with https://review.opendev.org/c/openstack/nova/+/868419

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1949606

Title:
  QEMU >= 5.0.0 with -accel tcg uses a tb-size of 1GB  causing OOM
  issues in CI

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a Nova tracker for a set of issues being seen in OpenStack CI
  jobs using QEMU >= 5.0.0 caused by the following change in defaults
  within QEMU:

  https://github.com/qemu/qemu/commit/600e17b26

  https://gitlab.com/qemu-project/qemu/-/issues/693

  At present most of the impacted jobs are being given an increased
  amount of swap with lower Tempest concurrency settings to avoid the
  issue, for example for CentOS 8 stream:

  https://review.opendev.org/c/openstack/devstack/+/803706

  https://review.opendev.org/c/openstack/tempest/+/797614

  Longer term a libvirt RFE has been raised to allow Nova to control the
  size of the cache:

  https://gitlab.com/libvirt/libvirt/-/issues/229

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1949606/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2012731] Re: [CI] "neutron-ovs-grenade-multinode-skip-level" and "neutron-ovn-grenade-multinode-skip-level" failing always

2023-09-12 Thread yatin
The required patch merged a while back and the jobs are green; closing it:-
* 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-multinode-skip-level=0
* 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-grenade-multinode-skip-level=0

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2012731

Title:
  [CI] "neutron-ovs-grenade-multinode-skip-level" and "neutron-ovn-
  grenade-multinode-skip-level" failing always

Status in neutron:
  Fix Released

Bug description:
  Logs:
  * 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-multinode-skip-level=0
  * 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-grenade-multinode-skip-level=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2012731/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2033683] Re: openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore', '-n']

2023-09-06 Thread yatin
Hi Alex,

<< Can someone take a look why the above patch
https://review.opendev.org/c/openstack/kolla/+/761182 mentioned here has
been excluded from the neutron image?

It would have just been missed; since the Train release, TripleO builds
container images natively and does not use kolla. You can propose a patch in
tripleo-common to fix it.

As said, I was more interested to know why the issue is seen only now, as
/usr/sbin/update-alternatives has been the path for a long time.

But considering you are using CentOS8-stream containers on a CentOS9-stream host,
I think you are hitting a recent iptables issue in CentOS8-stream[1]. You can
check the iptables version in your running container; if it matches iptables-1.8.5-8
you can downgrade it to resolve the issue temporarily, as the fix for it is not yet
merged.
If there is no real reason to use CentOS8 images, you can move to CentOS
9-Stream based images[2].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2236501
[2] 
https://quay.io/repository/tripleowallabycentos9/openstack-neutron-server?tab=tags

Again marking it as invalid for neutron; feel free to reopen, but please share
what fix is expected in the neutron project.

** Bug watch added: Red Hat Bugzilla #2236501
   https://bugzilla.redhat.com/show_bug.cgi?id=2236501

** Changed in: neutron
   Status: New => Invalid

** Changed in: tripleo
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033683

Title:
  openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore',
  '-n']

Status in neutron:
  Invalid
Status in tripleo:
  Confirmed

Bug description:
  Description
  ===
  Wallaby deployment via undercloud/overcloud started to fail recently on 
overcloud node provision
  Neutron constantly reports an inability to update iptables, which in turn makes
baremetal fail to boot from PXE
  From the review it seems that /usr/bin/update-alternatives set to legacy
fails since the neutron user doesn't have sudo to run it
  In the info I can see that neutron user has the following subset of commands 
it's able to run:
  ...
  (root) NOPASSWD: /usr/bin/update-alternatives --set iptables 
/usr/sbin/iptables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --set ip6tables 
/usr/sbin/ip6tables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --auto iptables
  (root) NOPASSWD: /usr/bin/update-alternatives --auto ip6tables

  But the issue is the fact that the command isn't found, as it was moved to
  /usr/sbin/update-alternatives

  Steps to reproduce
  ==
  1. Deploy undercloud
  2. Deploy networks and VIP
  3. Add and introspect a node
  4. Execute overcloud node provision ... that will time out

  Expected result
  ===
  Successful overcloud node baremetal provisioning

  Logs & Configs
  ==
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-18d52177-9c93-401c-b97d-0334e488a257 - - - - -] Error while processing VIF 
ports: neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: 
['iptables-restore', '-n']; Stdin: # Generated by iptables_manager

  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent COMMIT
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent # Completed by 
iptables_manager
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ; Stdout: ; 
Stderr: iptables-restore: line 23 failed

  Environment
  ===
  Centos 9 Stream and undercloud deployment tool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033683/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2034096] [NEW] OVN source(main branch) deploy jobs fails with make: *** No rule to make target '/opt/stack/ovs/python/ovs_build_helpers/soutil.py', needed by 'manpages.mk'. Stop

2023-09-04 Thread yatin
Public bug reported:

Fails as below:-
2023-09-05 02:43:36.168367 | controller | checking that generated files are 
newer than configure... done
2023-09-05 02:43:36.169129 | controller | configure: creating ./config.status
2023-09-05 02:43:36.994278 | controller | config.status: creating lib/libovn.sym
2023-09-05 02:43:37.014160 | controller | config.status: creating Makefile
2023-09-05 02:43:37.035733 | controller | config.status: creating tests/atlocal
2023-09-05 02:43:37.056014 | controller | config.status: creating 
include/ovn/version.h
2023-09-05 02:43:37.076031 | controller | config.status: creating config.h
2023-09-05 02:43:37.093144 | controller | config.status: executing 
tests/atconfig commands
2023-09-05 02:43:37.096437 | controller | config.status: executing depfiles 
commands
2023-09-05 02:43:37.502518 | controller | config.status: executing libtool 
commands
2023-09-05 02:43:37.522280 | controller | config.status: executing 
include/openflow/openflow.h.stamp commands
2023-09-05 02:43:37.529722 | controller | config.status: executing 
utilities/bugtool/dummy commands
2023-09-05 02:43:37.537607 | controller | config.status: executing 
utilities/dummy commands
2023-09-05 02:43:37.546272 | controller | config.status: executing ipsec/dummy 
commands
2023-09-05 02:43:37.623678 | controller | ++ 
lib/neutron_plugins/ovn_agent:compile_ovn:340 :   nproc
2023-09-05 02:43:37.628062 | controller | + 
lib/neutron_plugins/ovn_agent:compile_ovn:340 :   make -j9
2023-09-05 02:43:37.651547 | controller | make: *** No rule to make target 
'/opt/stack/ovs/python/ovs_build_helpers/soutil.py', needed by 'manpages.mk'.  
Stop.
2023-09-05 02:43:37.654863 | controller | + 
lib/neutron_plugins/ovn_agent:compile_ovn:1 :   exit_trap
2023-09-05 02:43:37.657653 | controller | + ./stack.sh:exit_trap:551
 :   local r=2
2023-09-05 02:43:37.660729 | controller | ++ ./stack.sh:exit_trap:552   
  :   jobs -p
2023-09-05 02:43:37.664619 | controller | + ./stack.sh:exit_trap:552
 :   jobs=
2023-09-05 02:43:37.667207 | controller | + ./stack.sh:exit_trap:555
 :   [[ -n '' ]]
2023-09-05 02:43:37.669954 | controller | + ./stack.sh:exit_trap:561
 :   '[' -f '' ']'
2023-09-05 02:43:37.672302 | controller | + ./stack.sh:exit_trap:566
 :   kill_spinner

Example failure:-
https://f73d480b17c2b34dc38c-939d58bab8c2106dbf157cafeea8359a.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-
ovn-tempest-ipv6-only-ovs-master/7c297ec/job-output.txt

Builds:-  https://zuul.openstack.org/builds?job_name=neutron-ovn-
tempest-ipv6-only-ovs-master_name=neutron-ovn-tempest-ovs-master-
centos-9-stream=openstack%2Fneutron=0

Broken with https://github.com/ovn-
org/ovn/commit/558da0cd21ad0172405f7d93c5d0e7533edbf653

Need to update OVS_BRANCH in jobs to fix it.

** Affects: neutron
 Importance: High
 Assignee: yatin (yatinkarel)
 Status: Triaged


** Tags: ovn

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => High

** Tags added: ovn

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2034096

Title:
  OVN source(main branch) deploy jobs fails with make: *** No rule to
  make target '/opt/stack/ovs/python/ovs_build_helpers/soutil.py',
  needed by 'manpages.mk'.  Stop.

Status in neutron:
  Triaged

Bug description:
  Fails as below:-
  2023-09-05 02:43:36.168367 | controller | checking that generated files are 
newer than configure... done
  2023-09-05 02:43:36.169129 | controller | configure: creating ./config.status
  2023-09-05 02:43:36.994278 | controller | config.status: creating 
lib/libovn.sym
  2023-09-05 02:43:37.014160 | controller | config.status: creating Makefile
  2023-09-05 02:43:37.035733 | controller | config.status: creating 
tests/atlocal
  2023-09-05 02:43:37.056014 | controller | config.status: creating 
include/ovn/version.h
  2023-09-05 02:43:37.076031 | controller | config.status: creating config.h
  2023-09-05 02:43:37.093144 | controller | config.status: executing 
tests/atconfig commands
  2023-09-05 02:43:37.096437 | controller | config.status: executing depfiles 
commands
  2023-09-05 02:43:37.502518 | controller | config.status: executing libtool 
commands
  2023-09-05 02:43:37.522280 | controller | config.status: executing 
include/openflow/openflow.h.stamp commands
  2023-09-05 02:43:37.529722 | controller | config.status: executing 
utilities/bugtool/dummy commands
  2023-09-05 02:43:37.537607 | controller | config.status: executing 
utilities/dummy commands
  2023-09-05 02:43:37.546272 | controller | config.status: executing 
ipsec/dummy commands
  2023-09-05 02:43:37.623678 | controller | ++ 
lib/neutron_plugins/ovn_agent:compile_ovn:340 :   nproc
  2

[Yahoo-eng-team] [Bug 2034016] [NEW] openstack-tox-py310-with-sqlalchemy-master broken with latest alembic commit

2023-09-04 Thread yatin
Public bug reported:

The job runs with the latest alembic/sqlalchemy commits and is broken by a
recent alembic commit[1].

Test 
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_directives 
fails as below:-
ft1.27: 
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_directivestesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
return f(self, *args, **kwargs)
  File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
return func(*newargs, **newkeywargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_migration.py",
 line 708, in test_autogen_process_directives
self.assertThat(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: "# ### commands auto generated by 
Alembic - please adjust! ###\nop.drop_constraint('user', 'uq_user_org')\n   
 op.drop_column('user', 'organization_name')\n# ### end Alembic commands 
###" does not match /(# )?### commands\ auto\ generated\ by\ Alembic\ \-\ 
please\ adjust!\ \#\#\#\\n\ \ \ \ op\.drop_constraint\('user',\ 'uq_user_org',\ 
type_=None\)\\n\ \ \ \ op\.drop_column\('user',\ 'organization_name'\)\\n\ \ \ 
\ (# )?### end\ Alembic\ commands\ \#\#\#/
 
Example failure:-
https://76069f8f7ed36a9de08a-3a08f5dd30775d36f5503265ac08cfa4.ssl.cf2.rackcdn.com/886988/10/check/openstack-tox-py310-with-sqlalchemy-master/70a282c/testr_results.html

[1]
https://github.com/sqlalchemy/alembic/commit/733197e0e794ed6aec3e21542257d6cec5bb7d1f
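
The mismatch comes from newer alembic no longer emitting the type_=None argument
in the autogenerated op.drop_constraint() call, while the test's expected pattern
still requires it. As an illustration only (not the actual fix), a relaxed
pattern could make that argument optional:-

import re

# Hypothetical relaxed expectation: ", type_=None" becomes optional so the
# pattern matches both the old and the new alembic autogenerate output.
expected = re.compile(
    r"(# )?### commands auto generated by Alembic - please adjust! ###\n"
    r"    op\.drop_constraint\('user', 'uq_user_org'(, type_=None)?\)\n"
    r"    op\.drop_column\('user', 'organization_name'\)\n"
    r"    (# )?### end Alembic commands ###")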

** Affects: neutron
 Importance: Critical
 Status: Triaged


** Tags: gate-failure unittest

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Triaged

** Tags added: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2034016

Title:
  openstack-tox-py310-with-sqlalchemy-master broken with latest alembic
  commit

Status in neutron:
  Triaged

Bug description:
  The job runs with latest alembic/sqlalchemy commits and is broken with
  recent alembic commit[1].

  Test 
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_directives 
fails as below:-
  ft1.27: 
neutron.tests.unit.db.test_migration.TestCli.test_autogen_process_directivestesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 178, in func
  return f(self, *args, **kwargs)
File "/usr/lib/python3.10/unittest/mock.py", line 1379, in patched
  return func(*newargs, **newkeywargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_migration.py",
 line 708, in test_autogen_process_directives
  self.assertThat(
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py310/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: "# ### commands auto generated by 
Alembic - please adjust! ###\nop.drop_constraint('user', 'uq_user_org')\n   
 op.drop_column('user', 'organization_name')\n# ### end Alembic commands 
###" does not match /(# )?### commands\ auto\ generated\ by\ Alembic\ \-\ 
please\ adjust!\ \#\#\#\\n\ \ \ \ op\.drop_constraint\('user',\ 'uq_user_org',\ 
type_=None\)\\n\ \ \ \ op\.drop_column\('user',\ 'organization_name'\)\\n\ \ \ 
\ (# )?### end\ Alembic\ commands\ \#\#\#/
   
  Example failure:-
  
https://76069f8f7ed36a9de08a-3a08f5dd30775d36f5503265ac08cfa4.ssl.cf2.rackcdn.com/886988/10/check/openstack-tox-py310-with-sqlalchemy-master/70a282c/testr_results.html

  [1]
  
https://github.com/sqlalchemy/alembic/commit/733197e0e794ed6aec3e21542257d6cec5bb7d1f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2034016/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2033752] Re: test_reboot_server_hard fails with AssertionError: time.struct_time() not greater than time.struct_time()

2023-09-03 Thread yatin
Closed for tempest and neutron, as the regression is fixed in nova with
https://review.opendev.org/c/openstack/nova/+/893502; jobs are back to
green.

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: tempest
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033752

Title:
  test_reboot_server_hard fails with  AssertionError: time.struct_time()
  not greater than time.struct_time()

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Invalid

Bug description:
  Seen many occurrences recently, fails as below:-

  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
259, in test_reboot_server_hard
  self._test_reboot_server('HARD')
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
127, in _test_reboot_server
  self.assertGreater(new_boot_time, boot_time,
File "/usr/lib/python3.10/unittest/case.py", line 1244, in assertGreater
  self.fail(self._formatMessage(msg, standardMsg))
File "/usr/lib/python3.10/unittest/case.py", line 675, in fail
  raise self.failureException(msg)
  AssertionError: time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, 
tm_hour=7, tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) not 
greater than time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, 
tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) : 
time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, tm_min=26, 
tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) > time.struct_time(tm_year=2023, 
tm_mon=9, tm_mday=1, tm_hour=7, tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, 
tm_isdst=0)

  Example logs:-
  
https://1e11be38b60141dbb290-777f110ca49a5cd01022e1e8aeff1ed5.ssl.cf1.rackcdn.com/893401/5/check/neutron-ovn-tempest-ovs-release/f379752/testr_results.html
  
https://1b9f88b068db0ff45f98-b11b73e0c31560154dece88f25c72a10.ssl.cf2.rackcdn.com/893401/5/check/neutron-linuxbridge-tempest/0bf1039/testr_results.html
  
https://30b3c23edbff5d871c4c-595cfa47540877e41ce912cd21563e42.ssl.cf1.rackcdn.com/886988/10/check/neutron-ovs-tempest-multinode-full/e57a62a/testr_results.html
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0e5/886988/10/check/neutron-ovn-tempest-ipv6-only-ovs-release/0e538d1/testr_results.html

  Opensearch:-
  
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22not%20greater%20than%20time.struct_time%22'),sort:!())

  As per opensearch it's started to be seen just few hours back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033752/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2033752] [NEW] test_reboot_server_hard fails with AssertionError: time.struct_time() not greater than time.struct_time()

2023-09-01 Thread yatin
Public bug reported:

Seen many occurrences recently, fails as below:-

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", 
line 259, in test_reboot_server_hard
self._test_reboot_server('HARD')
  File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", 
line 127, in _test_reboot_server
self.assertGreater(new_boot_time, boot_time,
  File "/usr/lib/python3.10/unittest/case.py", line 1244, in assertGreater
self.fail(self._formatMessage(msg, standardMsg))
  File "/usr/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, 
tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) not greater than 
time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, tm_min=26, 
tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) : time.struct_time(tm_year=2023, 
tm_mon=9, tm_mday=1, tm_hour=7, tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, 
tm_isdst=0) > time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, 
tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0)
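
For illustration, the assertion fails because the guest reports exactly the same
boot time (to one-second granularity) after the hard reboot, and two equal
time.struct_time values are never "greater than" each other. A standalone sketch,
assuming the boot time string is parsed with time.strptime as the struct_time
values above suggest:-

import time

fmt = '%Y-%m-%d %H:%M:%S'
# Both the old and the new boot time come back identical down to the second,
# so assertGreater(new_boot_time, boot_time) raises exactly this failure.
boot_time = time.strptime('2023-09-01 07:26:33', fmt)
new_boot_time = time.strptime('2023-09-01 07:26:33', fmt)
assert not (new_boot_time > boot_time)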

Example logs:-
https://1e11be38b60141dbb290-777f110ca49a5cd01022e1e8aeff1ed5.ssl.cf1.rackcdn.com/893401/5/check/neutron-ovn-tempest-ovs-release/f379752/testr_results.html
https://1b9f88b068db0ff45f98-b11b73e0c31560154dece88f25c72a10.ssl.cf2.rackcdn.com/893401/5/check/neutron-linuxbridge-tempest/0bf1039/testr_results.html
https://30b3c23edbff5d871c4c-595cfa47540877e41ce912cd21563e42.ssl.cf1.rackcdn.com/886988/10/check/neutron-ovs-tempest-multinode-full/e57a62a/testr_results.html
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0e5/886988/10/check/neutron-ovn-tempest-ipv6-only-ovs-release/0e538d1/testr_results.html

Opensearch:-
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(),index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22not%20greater%20than%20time.struct_time%22'),sort:!())

As per opensearch, it started to be seen just a few hours back.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033752

Title:
  test_reboot_server_hard fails with  AssertionError: time.struct_time()
  not greater than time.struct_time()

Status in neutron:
  New
Status in tempest:
  New

Bug description:
  Seen many occurrences recently, fails as below:-

  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
259, in test_reboot_server_hard
  self._test_reboot_server('HARD')
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
127, in _test_reboot_server
  self.assertGreater(new_boot_time, boot_time,
File "/usr/lib/python3.10/unittest/case.py", line 1244, in assertGreater
  self.fail(self._formatMessage(msg, standardMsg))
File "/usr/lib/python3.10/unittest/case.py", line 675, in fail
  raise self.failureException(msg)
  AssertionError: time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, 
tm_hour=7, tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) not 
greater than time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, 
tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) : 
time.struct_time(tm_year=2023, tm_mon=9, tm_mday=1, tm_hour=7, tm_min=26, 
tm_sec=33, tm_wday=4, tm_yday=244, tm_isdst=0) > time.struct_time(tm_year=2023, 
tm_mon=9, tm_mday=1, tm_hour=7, tm_min=26, tm_sec=33, tm_wday=4, tm_yday=244, 
tm_isdst=0)

  Example logs:-
  
https://1e11be38b60141dbb290-777f110ca49a5cd01022e1e8aeff1ed5.ssl.cf1.rackcdn.com/893401/5/check/neutron-ovn-tempest-ovs-release/f379752/testr_results.html
  
https://1b9f88b068db0ff45f98-b11b73e0c31560154dece88f25c72a10.ssl.cf2.rackcdn.com/893401/5/check/neutron-linuxbridge-tempest/0bf1039/testr_results.html
  
https://30b3c23edbff5d871c4c-595cfa47540877e41ce912cd21563e42.ssl.cf1.rackcdn.com/886988/10/check/neutron-ovs-tempest-multinode-full/e57a62a/testr_results.html
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0e5/886988/10/check/neutron-ovn-tempest-ipv6-only-ovs-release/0e538d1/testr_results.html

  Opensearch:-
  

[Yahoo-eng-team] [Bug 2028285] [NEW] [unit test][xena+] test_port_deletion_prevention fails when runs in isolation

2023-07-20 Thread yatin
Public bug reported:

Can be reproduced by just running:-
tox -epy3 -- test_port_deletion_prevention
or by running any of the below tests individually:-
neutron.tests.unit.extensions.test_l3.L3NatDBSepTestCase.test_port_deletion_prevention_handles_missing_port
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBSepTestCase.test_port_deletion_prevention_handles_missing_port

Fails as below:-
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBSepTestCase.test_port_deletion_prevention_handles_missing_port


Captured traceback:
~~~
Traceback (most recent call last):

  File "/home/ykarel/work/openstack/neutron/neutron/tests/base.py", line 
178, in func
return f(self, *args, **kwargs)

  File "/home/ykarel/work/openstack/neutron/neutron/tests/base.py", line 
178, in func
return f(self, *args, **kwargs)

  File 
"/home/ykarel/work/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", 
line 4491, in test_port_deletion_prevention_handles_missing_port
pl.prevent_l3_port_deletion(context.get_admin_context(), 'fakeid')

  File "/home/ykarel/work/openstack/neutron/neutron/db/l3_db.py", line 
1742, in prevent_l3_port_deletion
port = port or self._core_plugin.get_port(context, port_id)

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 223, in wrapped
return f_with_retry(*args, **kwargs,

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 137, in wrapped
with excutils.save_and_reraise_exception():

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
self.force_reraise()

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
raise self.value

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 135, in wrapped
return f(*args, **kwargs)

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/api.py",
 line 144, in wrapper
with excutils.save_and_reraise_exception() as ectxt:

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
self.force_reraise()

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
raise self.value

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/api.py",
 line 142, in wrapper
return f(*args, **kwargs)

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 183, in wrapped
with excutils.save_and_reraise_exception():

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
self.force_reraise()

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
raise self.value

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 181, in wrapped
return f(*dup_args, **dup_kwargs)

  File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 1022, in wrapper
return fn(*args, **kwargs)

  File 
"/home/ykarel/work/openstack/neutron/neutron/db/db_base_plugin_v2.py", line 
1628, in get_port
lazy_fields = [models_v2.Port.port_forwardings,

AttributeError: type object 'Port' has no attribute
'port_forwardings'

It's reproducible from Xena onwards, since the inclusion of patch
https://review.opendev.org/c/openstack/neutron/+/790691

It does not reproduce if other tests (from the test class) run before this
test and involve other requests (like network get/create etc.) apart from the
ones modified in the above patch.

Considering the above point, if this test is modified to run other requests
like the ones below first, then it succeeds:-
self.plugin.get_ports_count(context.get_admin_context())
or
self.plugin.get_networks_count(context.get_admin_context())
or
with self.port/network():
    self.assertIsNone(
        pl.prevent_l3_port_deletion(context.get_admin_context(), 'fakeid')
    )
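
Put together, a minimal sketch of such a modified test (hypothetical body; it only
illustrates issuing any other plugin request first so the ORM mappers, including
Port.port_forwardings, are configured before prevent_l3_port_deletion() is called,
and it assumes the directory/plugin_constants/context helpers already used in the
test module):-

def test_port_deletion_prevention_handles_missing_port(self):
    pl = directory.get_plugin(plugin_constants.L3)
    ctx = context.get_admin_context()
    # Any prior plugin request (here get_ports_count) configures the ORM
    # relationships before prevent_l3_port_deletion() looks them up.
    self.plugin.get_ports_count(ctx)
    self.assertIsNone(
        pl.prevent_l3_port_deletion(ctx, 'fakeid'))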


The issue was originally noticed in a downstream job where unit tests were
executed as part of a package build. As the test suite was executed with 100+
concurrency, by chance this test got executed in a worker before any other
test of the class.

I am not sure whether there are other test cases which will fail like this
when run in isolation.

** 

[Yahoo-eng-team] [Bug 2028123] Re: Jobs setting up more than 1 image fails with ValueError: invalid literal for int() with base 10: ''

2023-07-19 Thread yatin
Fix proposed https://review.opendev.org/c/openstack/devstack/+/888906
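
As an illustration only (not the proposed devstack fix above): the failure happens
because an empty size string reaches int(); a defensive variant of the
size-to-GiB helper could tolerate the empty CLI output, e.g.:-

import math
import sys

def image_size_in_gib(raw):
    # Hypothetical defensive variant: an empty size (the "openstack image show"
    # call failed, so nothing was printed) is treated as 0 instead of crashing
    # on int('').
    raw = raw.strip()
    if not raw:
        return 0
    return int(math.ceil(int(raw) / 1024.0 ** 3))

if __name__ == "__main__":
    print(image_size_in_gib(sys.stdin.read()))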

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028123

Title:
  Jobs setting up more than 1 image fails with ValueError: invalid
  literal for int() with base 10: ''

Status in devstack:
  In Progress
Status in neutron:
  New

Bug description:
  Happening since the devstack patch
  https://review.opendev.org/c/openstack/devstack/+/886795 merged.

  Fails as below:-
  2023-07-19 01:12:44.183383 | controller | +++ 
lib/tempest:configure_tempest:301:   image_size_in_gib
  2023-07-19 01:12:44.186369 | controller | +++ 
lib/tempest:image_size_in_gib:119:   local size
  2023-07-19 01:12:44.190924 | controller |  
lib/tempest:image_size_in_gib:120:   oscwrap --os-cloud devstack-admin 
image show -c size -f value
  2023-07-19 01:12:45.055357 | controller | usage: openstack image show [-h] 
[-f {json,shell,table,value,yaml}]
  2023-07-19 01:12:45.055438 | controller | [-c 
COLUMN] [--noindent] [--prefix PREFIX]
  2023-07-19 01:12:45.055450 | controller | 
[--max-width ] [--fit-width]
  2023-07-19 01:12:45.055460 | controller | 
[--print-empty] [--human-readable]
  2023-07-19 01:12:45.055469 | controller | 
  2023-07-19 01:12:45.055479 | controller | openstack image show: error: the 
following arguments are required: 
  2023-07-19 01:12:45.170317 | controller |  functions-common:oscwrap:2427  
  :   return 2
  2023-07-19 01:12:45.174387 | controller | +++ 
lib/tempest:image_size_in_gib:120:   size=
  2023-07-19 01:12:45.178301 | controller | +++ 
lib/tempest:image_size_in_gib:121:   echo
  2023-07-19 01:12:45.178524 | controller | +++ 
lib/tempest:image_size_in_gib:121:   python3 -c 'import math; 
print(int(math.ceil(float(int(input()) / 1024.0 ** 3'
  2023-07-19 01:12:45.197214 | controller | Traceback (most recent call last):
  2023-07-19 01:12:45.197249 | controller |   File "", line 1, in 

  2023-07-19 01:12:45.197343 | controller | ValueError: invalid literal for 
int() with base 10: ''
  2023-07-19 01:12:45.205412 | controller | ++ 
lib/tempest:configure_tempest:301:   disk=
  2023-07-19 01:12:45.209824 | controller | + lib/tempest:configure_tempest:1   
   :   exit_trap
  2023-07-19 01:12:45.213259 | controller | + ./stack.sh:exit_trap:547  
   :   local r=1
  2023-07-19 01:12:45.218464 | controller | ++ ./stack.sh:exit_trap:548 
:   jobs -p
  2023-07-19 01:12:45.224425 | controller | + ./stack.sh:exit_trap:548  
   :   jobs=
  2023-07-19 01:12:45.227487 | controller | + ./stack.sh:exit_trap:551  
   :   [[ -n '' ]]
  2023-07-19 01:12:45.232326 | controller | + ./stack.sh:exit_trap:557  
   :   '[' -f /tmp/tmp.phuFS9io2I ']'
  2023-07-19 01:12:45.236057 | controller | + ./stack.sh:exit_trap:558  
   :   rm /tmp/tmp.phuFS9io2I
  2023-07-19 01:12:45.241367 | controller | + ./stack.sh:exit_trap:562  
   :   kill_spinner
  2023-07-19 01:12:45.245280 | controller | + ./stack.sh:kill_spinner:457   
   :   '[' '!' -z '' ']'
  2023-07-19 01:12:45.249184 | controller | + ./stack.sh:exit_trap:564  
   :   [[ 1 -ne 0 ]]
  2023-07-19 01:12:45.254924 | controller | + ./stack.sh:exit_trap:565  
   :   echo 'Error on exit'
  2023-07-19 01:12:45.254945 | controller | Error on exit
  2023-07-19 01:12:45.258724 | controller | + ./stack.sh:exit_trap:567  
   :   type -p generate-subunit
  2023-07-19 01:12:45.261436 | controller | + ./stack.sh:exit_trap:568  
   :   generate-subunit 1689727847 1318 fail
  2023-07-19 01:12:45.402879 | controller | + ./stack.sh:exit_trap:570  
   :   [[ -z /opt/stack/logs ]]
  2023-07-19 01:12:45.406897 | controller | + ./stack.sh:exit_trap:573  
   :   /usr/bin/python3.10 /opt/stack/devstack/tools/worlddump.py -d 
/opt/stack/logs
  2023-07-19 01:12:45.440788 | controller | Traceback (most recent call last):
  2023-07-19 01:12:45.440856 | controller |   File 
"/opt/stack/devstack/tools/worlddump.py", line 271, in 
  2023-07-19 01:12:45.440891 | controller | sys.exit(main())
  2023-07-19 01:12:45.440911 | controller |   File 
"/opt/stack/devstack/tools/worlddump.py", line 248, in main
  2023-07-19 01:12:45.440933 | controller | with io.open(fname, 'w') as f:
  2023-07-19 01:12:45.440958 | controller | PermissionError: [Errno 13] 
Permission denied: '/opt/stack/logs/worlddump-2023-07-19-011245.txt'
  2023-07-19 01:12:45.447843 | controller | *** FINISHED ***
  2023-07-19 01:12:46.151871 | controller | ERROR

  
  Example failure:-
  

[Yahoo-eng-team] [Bug 1996594] Re: OVN metadata randomly stops working

2023-07-18 Thread yatin
There have been many fixes related to the reported issue in neutron and ovs
since this bug report; some that I quickly caught are:-
- 
https://patchwork.ozlabs.org/project/openvswitch/patch/20220819230810.2626573-1-i.maxim...@ovn.org/
- https://review.opendev.org/c/openstack/ovsdbapp/+/856200
- https://review.opendev.org/c/openstack/ovsdbapp/+/862524
- https://review.opendev.org/c/openstack/neutron/+/857775
- https://review.opendev.org/c/openstack/neutron/+/871825

Closing it based on the above and Comment #5. If the issues are still seen
with python-ovs>=2.17 and the above fixes included, please feel free to reopen
the issue along with ovsdb-server SB logs and neutron server and
metadata agent debug logs.

** Bug watch added: Red Hat Bugzilla #2214289
   https://bugzilla.redhat.com/show_bug.cgi?id=2214289

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1996594

Title:
  OVN metadata randomly stops working

Status in neutron:
  Fix Released

Bug description:
  We found that OVN metadata will randomly stop working when OVN is writing
  a snapshot.

  1, At 12:30:35, OVN started to transfer leadership to write a snapshot

  $ find sosreport-juju-2752e1-*/var/log/ovn/* |xargs zgrep -i -E 'Transferring 
leadership'
  
sosreport-juju-2752e1-6-lxd-24-xxx-2022-08-18-entowko/var/log/ovn/ovsdb-server-sb.log:2022-08-18T12:30:35.322Z|80962|raft|INFO|Transferring
 leadership to write a snapshot.
  
sosreport-juju-2752e1-6-lxd-24-xxx-2022-08-18-entowko/var/log/ovn/ovsdb-server-sb.log:2022-08-18T17:52:53.024Z|82382|raft|INFO|Transferring
 leadership to write a snapshot.
  
sosreport-juju-2752e1-7-lxd-27-xxx-2022-08-18-hhxxqci/var/log/ovn/ovsdb-server-sb.log:2022-08-18T12:30:35.330Z|92698|raft|INFO|Transferring
 leadership to write a snapshot.

  2, At 12:30:36, neutron-ovn-metadata-agent reported OVSDB Error

  $ find sosreport-srv1*/var/log/neutron/* |xargs zgrep -i -E 'OVSDB Error'
  
sosreport-srv1xxx2d-xxx-2022-08-18-cuvkufw/var/log/neutron/neutron-ovn-metadata-agent.log:2022-08-18
 12:30:36.103 75556 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: 
no error details available
  
sosreport-srv1xxx6d-xxx-2022-08-18-bgnovqu/var/log/neutron/neutron-ovn-metadata-agent.log:2022-08-18
 12:30:36.104 2171 ERROR ovsdbapp.backend.ovs_idl.transaction [-] OVSDB Error: 
no error details available

  3, At 12:57:53, we saw the error 'No port found in network', then we
  hit the problem that OVN metadata randomly stops working

  2022-08-18 12:57:53.800 3730 ERROR neutron.agent.ovn.metadata.server
  [-] No port found in network 63e2c276-60dd-40e3-baa1-c16342eacce2 with
  IP address 100.94.98.135

  After the problem occurs, restarting neutron-ovn-metadata-agent or
  restarting haproxy instance as follows can be used as a workaround.

  /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec
  ovnmeta-63e2c276-60dd-40e3-baa1-c16342eacce2 haproxy -f
  /var/lib/neutron/ovn-metadata-
  proxy/63e2c276-60dd-40e3-baa1-c16342eacce2.conf

  One lp bug #1990978 [1] is trying to reduce the frequency of transfers; it
should be beneficial for this problem.
  But it only reduces the occurrence of the problem rather than completely avoiding it.
I wonder if we need to add some retry logic on the neutron side

  NOTE: The openstack version we are using is focal-xena, and
  openvswitch's version is 2.16.0-0ubuntu2.1~cloud0

  [1] https://bugs.launchpad.net/ubuntu/+source/openvswitch/+bug/1990978

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1996594/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1969592] Re: [OVN] Frequent DB leader changes causes 'VIF creation failed' on nova side

2023-07-18 Thread yatin
There have been many fixes related to the reported issue in neutron and ovs
since this bug report; some that I quickly caught are:-
- 
https://patchwork.ozlabs.org/project/openvswitch/patch/20220819230810.2626573-1-i.maxim...@ovn.org/
- https://review.opendev.org/c/openstack/ovsdbapp/+/856200
- https://review.opendev.org/c/openstack/ovsdbapp/+/862524
- https://review.opendev.org/c/openstack/neutron/+/857775
- https://review.opendev.org/c/openstack/neutron/+/871825

One of the issues that I know is still open, though it happens very rarely:
https://bugzilla.redhat.com/show_bug.cgi?id=2214289

Closing it based on the above and Comments #2 and #3. If the issues are still
seen with python-ovs>=2.17 and the above fixes included, please feel free to
reopen the issue along with neutron and nova debug logs.


** Bug watch added: Red Hat Bugzilla #2214289
   https://bugzilla.redhat.com/show_bug.cgi?id=2214289

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1969592

Title:
  [OVN] Frequent DB leader changes causes 'VIF creation failed' on nova
  side

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Hello guys.

  As I found, in 2021 there was a commit to OVS [since OVS v2.16, or 2.15 with
that backport] that changed behavior during OVN DBs snapshotting.
  Now before the leader creates a snapshot it will transfer leadership to 
another node. 

  We've run tests with rally and tempest, and it looks like there is a problem now
when there is interaction between nova and neutron.
  For example, simple rally test like 'create {network,router,subnet} -> add 
interface to router' looks okay even with 256 concurrent same tests/threads. 
But something like 'neutron.create_subnet -> nova.boot_server -> 
nova.attach_interface' will fail in time when transfer leadership happens. 
  Since it happens too often [ each 10m + rand(10) ] we will get a lot of
errors.

  This problem can be observed on all versions where OVS 2.16 [or the
  backport] or higher is included :)

  
  Some tracing from logs [neutron, nova, ovn-sb-db]:

  CONTROL-NODES:

  
  ctl01-ovn-sb-db.log:2022-04-19T12:30:03.089Z|01002|raft|INFO|Transferring 
leadership to write a snapshot.
  ctl01-ovn-sb-db.log:2022-04-19T12:30:03.099Z|01003|raft|INFO|server 1c5f is 
leader for term 42

  ctl03-ovn-sb-db.log:2022-04-19T12:30:03.090Z|00938|raft|INFO|received 
leadership transfer from 1f46 in term 41
  ctl03-ovn-sb-db.log:2022-04-19T12:30:03.092Z|00940|raft|INFO|term 42: elected 
leader by 2+ of 3 servers
  
ctl03-ovn-sb-db.log:2022-04-19T12:30:10.941Z|00941|jsonrpc|WARN|tcp:xx.yy.zz.26:41882:
 send error: Connection reset by peer
  
ctl03-ovn-sb-db.log:2022-04-19T12:30:27.324Z|00943|jsonrpc|WARN|tcp:xx.yy.zz.26:41896:
 send error: Connection reset by peer
  
ctl03-ovn-sb-db.log:2022-04-19T12:30:27.325Z|00945|jsonrpc|WARN|tcp:xx.yy.zz.26:41880:
 send error: Connection reset by peer
  
ctl03-ovn-sb-db.log:2022-04-19T12:30:27.325Z|00947|jsonrpc|WARN|tcp:xx.yy.zz.26:41892:
 send error: Connection reset by peer
  
ctl03-ovn-sb-db.log:2022-04-19T12:30:27.327Z|00949|jsonrpc|WARN|tcp:xx.yy.zz.26:41884:
 send error: Connection reset by peer
  
ctl03-ovn-sb-db.log:2022-04-19T12:31:49.244Z|00951|jsonrpc|WARN|tcp:xx.yy.zz.25:40260:
 send error: Connection timed out
  
ctl03-ovn-sb-db.log:2022-04-19T12:31:49.244Z|00953|jsonrpc|WARN|tcp:xx.yy.zz.25:40264:
 send error: Connection timed out
  
ctl03-ovn-sb-db.log:2022-04-19T12:31:49.244Z|00955|jsonrpc|WARN|tcp:xx.yy.zz.24:37440:
 send error: Connection timed out
  
ctl03-ovn-sb-db.log:2022-04-19T12:31:49.244Z|00957|jsonrpc|WARN|tcp:xx.yy.zz.24:37442:
 send error: Connection timed out
  
ctl03-ovn-sb-db.log:2022-04-19T12:31:49.245Z|00959|jsonrpc|WARN|tcp:xx.yy.zz.24:37446:
 send error: Connection timed out
  
ctl03-ovn-sb-db.log:2022-04-19T12:32:01.533Z|01001|jsonrpc|WARN|tcp:xx.yy.zz.67:57586:
 send error: Connection timed out

  
  2022-04-19 12:30:08.898 27 INFO neutron.db.ovn_revision_numbers_db 
[req-7fcfdd74-482d-46b2-9f76-07190669d76d ff1516be452b4b939314bf3864a63f35 
9d3ae9a7b121488285203b0fdeabc3a3 - default default] Successfully bumped 
revision number for resource be178a9a-26d7-4bf0-a4e8-d206a6965205 (type: ports) 
to 1

  2022-04-19 12:30:09.644 27 INFO neutron.db.ovn_revision_numbers_db
  [req-a8278418-3ad9-450c-89bb-e7a5c1c0a06d
  a9864cd890224c079051b3f56021be64 72db34087b9b401d842b66643b647e16 -
  default default] Successfully bumped revision number for resource
  be178a9a-26d7-4bf0-a4e8-d206a6965205 (type: ports) to 2

  
  2022-04-19 12:30:10.235 27 INFO neutron.wsgi 
[req-571b53cc-ca04-46f7-89f9-fdf8e5931f4c a9864cd890224c079051b3f56021be64 
72db34087b9b401d842b66643b647e16 - default default] xx.yy.zz.68,xx.yy.zz.26 
"GET 
/v2.0/ports?tenant_id=9d3ae9a7b121488285203b0fdeabc3a3_id=7560fbb7-3ec7-41ef-b7a5-5e955ca4ff34
 

[Yahoo-eng-team] [Bug 2027817] [NEW] neutron slow jobs before xena broken with ERROR: unknown environment 'slow'

2023-07-14 Thread yatin
Public bug reported:

Broken since https://review.opendev.org/c/openstack/tempest/+/887237
merged, as neutron slow jobs inherit from tempest-slow-py3. These fail
with ERROR: unknown environment 'slow'

In releases before Xena, tempest is pinned and doesn't have the patch[1]
which added the slow toxenv. The jobs need to be fixed so that they use an
available toxenv.

Build failures:-
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-
slow-py3=openstack/neutron

Example failure:-
https://1aa32956600eed195a6f-1e93f2912422e0b9805cacc7d195dee2.ssl.cf1.rackcdn.com/887279/1/gate/neutron-
tempest-slow-py3/c9ff905/job-output.txt

[1]
https://github.com/openstack/tempest/commit/6bb98c2aa478f7ad32838fec4b59c4acb73ccf21

** Affects: neutron
 Importance: Critical
 Status: Triaged

** Affects: tempest
 Importance: Critical
 Assignee: Ghanshyam Mann (ghanshyammann)
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2027817

Title:
  neutron slow jobs before xena broken with  ERROR: unknown environment
  'slow'

Status in neutron:
  Triaged
Status in tempest:
  New

Bug description:
  Broken since https://review.opendev.org/c/openstack/tempest/+/887237
  merged as neutron slow jobs inherit from tempest-slow-py3. These fails
  with  ERROR: unknown environment 'slow'

  In releases before xena tempest is pinned and don't have the patch[1]
  which added slow toxenv. The job needs to be fixed such that these use
  available toxenv.

  Build failures:-
  https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-
  slow-py3=openstack/neutron

  Example failure:-
  
https://1aa32956600eed195a6f-1e93f2912422e0b9805cacc7d195dee2.ssl.cf1.rackcdn.com/887279/1/gate/neutron-
  tempest-slow-py3/c9ff905/job-output.txt

  [1]
  
https://github.com/openstack/tempest/commit/6bb98c2aa478f7ad32838fec4b59c4acb73ccf21

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2027817/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1932043] Re: nova-ceph-multistore test_resize_server_revert fails with rbd.ReadOnlyImage: [errno 30] RBD read-only image

2023-06-28 Thread yatin
Seen it today in
https://c83b20527acf2b0f8494-4a0455790e56cb733d68b35ced7c28e7.ssl.cf5.rackcdn.com/886250/2/check/nova-
ceph-multistore/2231918/testr_results.html

** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1932043

Title:
  nova-ceph-multistore test_resize_server_revert fails with
  rbd.ReadOnlyImage: [errno 30] RBD read-only image

Status in OpenStack Compute (nova):
  New

Bug description:
  
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/api/compute/base.py", line 222, in 
server_check_teardown
  waiters.wait_for_server_status(cls.servers_client,
File "/opt/stack/tempest/tempest/common/waiters.py", line 75, in 
wait_for_server_status
  raise exceptions.BuildErrorException(body['fault'],
  tempest.exceptions.BuildErrorException: Server 
a1fd599b-909a-4e77-b69d-253b0995bd3d failed to build and is in ERROR status
  Details: {'code': 500, 'created': '2021-06-15T14:47:21Z', 'message': 
'ReadOnlyImage'}

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_037/796269/3/check/nova-
  ceph-multistore/0373da6/controller/logs/screen-n-cpu.txt

  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [None req-ac59ec60-ec07-465d-a82b-ae72be0f2454 
tempest-ServerActionsTestJSON-45354220 
tempest-ServerActionsTestJSON-45354220-project] [instance: 
a1fd599b-909a-4e77-b69d-253b0995bd3d] Setting instance vm_state to ERROR: 
rbd.ReadOnlyImage: [errno 30] RBD read-only image (error creating snapshot 
b'nova-resize' from b'a1fd599b-909a-4e77-b69d-253b0995bd3d_disk')
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
Traceback (most recent call last):
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/opt/stack/nova/nova/compute/manager.py", line 10171, in 
_error_out_instance_on_exception
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
yield
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/opt/stack/nova/nova/compute/manager.py", line 5862, in 
_finish_resize_helper
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
network_info = self._finish_resize(context, instance, migration,
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/opt/stack/nova/nova/compute/manager.py", line 5800, in _finish_resize
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
self._set_instance_info(instance, old_flavor)
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 227, 
in __exit__
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
self.force_reraise()
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/usr/local/lib/python3.8/dist-packages/oslo_utils/excutils.py", line 200, 
in force_reraise
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
raise self.value
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/opt/stack/nova/nova/compute/manager.py", line 5783, in _finish_resize
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d] 
self.driver.finish_migration(context, migration, instance,
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR nova.compute.manager [instance: a1fd599b-909a-4e77-b69d-253b0995bd3d]   
File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 10939, in 
finish_migration
  Jun 15 14:47:20.281985 ubuntu-focal-ovh-bhs1-0025123352 nova-compute[114395]: 
ERROR 
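  The key failure is librbd refusing to create a snapshot on an image handle
  that was opened read-only. A minimal reproduction sketch with the librbd
  Python bindings (the conffile path, pool and image names below are
  illustrative placeholders, not values taken from the job):

{{{
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')   # placeholder conf path
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vms')                    # placeholder pool name
    try:
        # read_only=True mirrors the handle the compute node ends up with
        image = rbd.Image(ioctx, 'instance-disk', read_only=True)
        try:
            image.create_snap('nova-resize')             # snapshot on a RO handle
        except rbd.ReadOnlyImage as exc:
            # [errno 30] RBD read-only image, as seen in the job log above
            print('snapshot refused: %s' % exc)
        finally:
            image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
}}}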

[Yahoo-eng-team] [Bug 2024674] [NEW] Unit tests fails with oslo_db.exception.DBNonExistentTable: (sqlite3.OperationalError) no such table: ml2_geneve_allocations when run with low concurrency or on lo

2023-06-22 Thread yatin
Public bug reported:

The issue is noticed in the RDO openstack-neutron package build[1]; the package
build fails as unit tests fail randomly with the traceback below:-
DEBUG: 
neutron.tests.unit.services.trunk.test_utils.UtilsTestCase.test_is_driver_compatible_multiple_drivers
DEBUG: 
-
DEBUG: Captured traceback:
DEBUG: ~~~
DEBUG: Traceback (most recent call last):
DEBUG:   File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in 
_execute_context
DEBUG: self.dialect.do_execute(
DEBUG:   File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in 
do_execute
DEBUG: cursor.execute(statement, parameters)
DEBUG: sqlite3.OperationalError: no such table: ml2_geneve_allocations
DEBUG: 
DEBUG: The above exception was the direct cause of the following exception:
DEBUG: Traceback (most recent call last):
DEBUG:   File "/usr/lib/python3.9/site-packages/fixtures/fixture.py", line 
196, in setUp
DEBUG: self._setUp()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/tests/unit/plugins/ml2/test_plugin.py",
 line 110, in _setUp
DEBUG: self.parent_setup()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/tests/unit/db/test_db_base_plugin_v2.py",
 line 166, in setUp
DEBUG: self.api = router.APIRouter()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/api/v2/router.py", 
line 21, in APIRouter
DEBUG: return pecan_app.v2_factory(None, **local_config)
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/pecan_wsgi/app.py", 
line 47, in v2_factory
DEBUG: startup.initialize_all()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/pecan_wsgi/startup.py",
 line 39, in initialize_all
DEBUG: manager.init()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/manager.py", line 301, 
in init
DEBUG: NeutronManager.get_instance()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/manager.py", line 252, 
in get_instance
DEBUG: cls._create_instance()
DEBUG:   File 
"/usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py", line 414, in 
inner
DEBUG: return f(*args, **kwargs)
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/manager.py", line 238, 
in _create_instance
DEBUG: cls._instance = cls()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/manager.py", line 126, 
in __init__
DEBUG: plugin = self._get_plugin_instance(CORE_PLUGINS_NAMESPACE,
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/manager.py", line 162, 
in _get_plugin_instance
DEBUG: plugin_inst = plugin_class()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/quota/resource_registry.py",
 line 124, in wrapper
DEBUG: return f(*args, **kwargs)
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/plugins/ml2/plugin.py",
 line 282, in __init__
DEBUG: self.type_manager.initialize()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/plugins/ml2/managers.py",
 line 205, in initialize
DEBUG: driver.obj.initialize()
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/plugins/ml2/drivers/type_geneve.py",
 line 47, in initialize
DEBUG: self._initialize(cfg.CONF.ml2_type_geneve.vni_ranges)
DEBUG:   File 
"/builddir/build/BUILD/neutron-23.0.0.0b3.dev123/neutron/plugins/ml2/drivers/type_tunnel.py",
 line 131, in _initialize
DEBUG: self.sync_allocations()
DEBUG:   File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", 
line 139, in wrapped
DEBUG: setattr(e, '_RETRY_EXCEEDED', True)
DEBUG:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", 
line 227, in __exit__
DEBUG: self.force_reraise()
DEBUG:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", 
line 200, in force_reraise
DEBUG: raise self.value
DEBUG:   File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", 
line 135, in wrapped
DEBUG: return f(*args, **kwargs)
DEBUG:   File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, 
in wrapper
DEBUG: ectxt.value = e.inner_exc
DEBUG:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", 
line 227, in __exit__
DEBUG: self.force_reraise()
DEBUG:   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", 
line 200, in force_reraise
DEBUG: raise self.value
DEBUG:   File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, 
in wrapper
DEBUG: return f(*args, **kwargs)
DEBUG:   File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", 
line 187, in wrapped
DEBUG: context_reference.session.rollback()
DEBUG:   File 

[Yahoo-eng-team] [Bug 2019802] [NEW] [master] Not all fullstack tests running in CI

2023-05-16 Thread yatin
Public bug reported:

From the last couple of days only a few tests (just 6) are running in the
neutron-fullstack-with-uwsgi job.

Example:-
https://d16311159baa9c9fc692-58e8a805a242f8a07eac2fd1c3f6b11b.ssl.cf1.rackcdn.com/880867/6/gate/neutron-
fullstack-with-uwsgi/6b4c3ba/testr_results.html

Builds:- https://zuul.opendev.org/t/openstack/builds?job_name=neutron-
fullstack-with-uwsgi=openstack%2Fneutron=master=0

The job logs have below Traceback:-
2023-05-16 06:18:13.539943 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron Traceback (most recent call last):
2023-05-16 06:18:13.539958 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
 line 1900, in _execute_context
2023-05-16 06:18:13.539983 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron self.dialect.do_execute(
2023-05-16 06:18:13.539997 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/sqlalchemy/engine/default.py",
 line 736, in do_execute
2023-05-16 06:18:13.540008 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron cursor.execute(statement, parameters)
2023-05-16 06:18:13.540021 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/cursors.py",
 line 158, in execute
2023-05-16 06:18:13.540032 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron result = self._query(query)
2023-05-16 06:18:13.540044 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/cursors.py",
 line 325, in _query
2023-05-16 06:18:13.540056 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron conn.query(q)
2023-05-16 06:18:13.540067 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 549, in query
2023-05-16 06:18:13.540078 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2023-05-16 06:18:13.540090 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 779, in _read_query_result
2023-05-16 06:18:13.540118 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron result.read()
2023-05-16 06:18:13.540131 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 1157, in read
2023-05-16 06:18:13.540143 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron first_packet = self.connection._read_packet()
2023-05-16 06:18:13.540154 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 729, in _read_packet
2023-05-16 06:18:13.540165 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron packet.raise_for_error()
2023-05-16 06:18:13.540177 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/protocol.py",
 line 221, in raise_for_error
2023-05-16 06:18:13.540188 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron err.raise_mysql_exception(self._data)
2023-05-16 06:18:13.540199 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/err.py",
 line 143, in raise_mysql_exception
2023-05-16 06:18:13.540210 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron raise errorclass(errno, errval)
2023-05-16 06:18:13.540221 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron pymysql.err.OperationalError: (1049, "Unknown database 'ybptypesko'")
2023-05-16 06:18:13.540233 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron
2023-05-16 06:18:13.540244 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron The above exception was the direct cause of the following exception:
2023-05-16 06:18:13.540255 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron
2023-05-16 06:18:13.540266 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron Traceback (most recent call last):
2023-05-16 06:18:13.540277 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 

[Yahoo-eng-team] [Bug 2017992] Re: Jobs running on vexxhost provider failing with Mirror Issues

2023-05-16 Thread yatin
We got an update[1] from the vexxhost team and the vexxhost node provider is now
re-enabled[2].
Since it was re-enabled we are not seeing the issue. So far the jobs have run on
3 of the impacted nodes and passed, so the issue can be considered resolved.
Closing the bug.

[1] We have performed some optimizations on our compute pool. You can re-enable 
for now and see how it goes.
[2] https://review.opendev.org/c/openstack/project-config/+/882787

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017992

Title:
  Jobs running on vexxhost provider failing with Mirror Issues

Status in neutron:
  Invalid

Bug description:
  Fails as below:-
  2023-04-27 12:36:47.535483 | controller | ++ 
functions-common:apt_get_update:1155 :   timeout 300 sh -c 'while ! sudo 
http_proxy= https_proxy= no_proxy=  apt-get update; do sleep 30; done'
  2023-04-27 12:36:50.877357 | controller | Err:1 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal InRelease
  2023-04-27 12:36:50.877408 | controller |   Could not connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:443 (2604:e100:1:0:f816:3eff:fe0c:e2c0). - 
connect (113: No route to host) Could not connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:443 (199.204.45.149). - connect (113: No 
route to host)
  2023-04-27 12:36:50.877419 | controller | Err:2 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal-updates InRelease
  2023-04-27 12:36:50.877428 | controller |   Unable to connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:https:
  2023-04-27 12:36:50.877437 | controller | Err:3 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal-backports InRelease
  2023-04-27 12:36:50.877446 | controller |   Unable to connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:https:
  2023-04-27 12:36:50.877454 | controller | Err:4 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal-security InRelease
  2023-04-27 12:36:50.877462 | controller |   Unable to connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:https:
  2023-04-27 12:36:50.892401 | controller | Reading package lists...
  2023-04-27 12:36:50.895427 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal/InRelease  
Could not connect to mirror.ca-ymq-1.vexxhost.opendev.org:443 
(2604:e100:1:0:f816:3eff:fe0c:e2c0). - connect (113: No route to host) Could 
not connect to mirror.ca-ymq-1.vexxhost.opendev.org:443 (199.204.45.149). - 
connect (113: No route to host)
  2023-04-27 12:36:50.895462 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal-updates/InRelease
  Unable to connect to mirror.ca-ymq-1.vexxhost.opendev.org:https:
  2023-04-27 12:36:50.895539 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal-backports/InRelease
  Unable to connect to mirror.ca-ymq-1.vexxhost.opendev.org:https:
  2023-04-27 12:36:50.895604 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal-security/InRelease
  Unable to connect to mirror.ca-ymq-1.vexxhost.opendev.org:https:
  2023-04-27 12:36:50.895677 | controller | W: Some index files failed to 
download. They have been ignored, or old ones used instead.

  
  2023-04-27 12:36:51.007845 | controller |
  2023-04-27 12:36:51.008040 | controller | E: Unable to locate package apache2
  2023-04-27 12:36:51.008098 | controller | E: Unable to locate package 
apache2-dev
  2023-04-27 12:36:51.008157 | controller | E: Unable to locate package bc
  2023-04-27 12:36:51.008218 | controller | E: Package 'bsdmainutils' has no 
installation candidate
  2023-04-27 12:36:51.008321 | controller | E: Unable to locate package gawk
  2023-04-27 12:36:51.008382 | controller | E: Unable to locate package gettext
  2023-04-27 12:36:51.008438 | controller | E: Unable to locate package graphviz

  Example failures:-
  - 
https://db3cff29bdb713e861e1-7db0c1fa1bd98a0adf758f7c1d49f672.ssl.cf5.rackcdn.com/881735/4/check/neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults/278dce1/job-output.txt
  - 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_944/881742/4/check/neutron-tempest-plugin-designate-scenario/9444d54/job-output.txt

  
  It's happening randomly since the last couple of weeks; it seems to be a general
issue with IPv6 external connectivity.
  From opensearch[1] it can be seen that the following host_ids are impacted:-
  c984fb897502bc826ccaf0e258b6071e76c29b305bc5b31b301de76a
  1b1e841d5cdc8c40a480da993c1cbf0cd64900baff612378898e72ab
  6cc97bc57f540569368fcc47255180c5d21ed00a22cad83eeb600cec
  86c687840cd74cd63cbb095748afa5c9cd0f6fcea898d90aa030cc68
  94cd367e7821f5d74cf44c5ebafd9af18d2b6dff64a9bee067337cf6
  8926aa5796637312bf5e46a0671a88021c208235fafdfcf22931eb01
  70670f45d0dc4eaae28e6553525eec409dfb6f80e8d6c8dcef7d7bf5

  And 

[Yahoo-eng-team] [Bug 2018967] [NEW] [fwaas] test_update_firewall_group fails randomly

2023-05-09 Thread yatin
Public bug reported:

Seen twice recently:-
- 
https://a78793e982809689fe25-25fa16d377ec97c08c4e6ce3af683bd9.ssl.cf5.rackcdn.com/881232/1/check/neutron-tempest-plugin-fwaas/b0730f9/testr_results.html
- 
https://53a7c53d508ecea7485c-f8ccc2b7c32dd8ba5caab7dc1c36a741.ssl.cf5.rackcdn.com/881232/1/gate/neutron-tempest-plugin-fwaas/5712826/testr_results.html

Fails as below:-
traceback-1: {{{
Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 130, in _try_delete_firewall_group
self.firewall_groups_client.delete_firewall_group(fwg_id)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 38, in delete_firewall_group
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 'FirewallGroupInUse', 'message': 'Firewall group 
57266ed6-c39c-4be2-80d8-649469adf7eb is still active.', 'detail': ''}
}}}

traceback-2: {{{
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 87, in 
call_and_ignore_notfound_exc
return func(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 100, in delete_firewall_policy
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 'FirewallPolicyInUse', 'message': 'Firewall policy 
0e23c50e-28a9-41e5-829c-9a67d058bafd is being used.', 'detail': ''}
}}}

traceback-3: {{{
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 87, in 
call_and_ignore_notfound_exc
return func(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 100, in delete_firewall_policy
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 'FirewallPolicyInUse', 'message': 'Firewall policy 
a47d0031-65cb-40bf-86e1-ba3842c295aa is being used.', 'detail': ''}
}}}

traceback-4: {{{
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 87, in 
call_and_ignore_notfound_exc
return func(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 75, in delete_firewall_rule
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 
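All of the conflicts above are teardown races: the firewall group is still
ACTIVE, or the policy/rule is still referenced, when the delete request is
issued. A minimal sketch of a tolerant cleanup that retries until the resource
is free, using tempest's test_utils helper (the client method name follows the
v2_client calls in the tracebacks; the wrapper itself is illustrative, not the
actual fix):

{{{
from tempest.lib.common.utils import test_utils
from tempest.lib import exceptions as lib_exc


def delete_fwg_when_free(fwg_client, fwg_id, timeout=60, interval=5):
    """Retry the delete until the firewall group is no longer in use."""
    def _try_delete():
        try:
            fwg_client.delete_firewall_group(fwg_id)
            return True
        except lib_exc.Conflict:     # FirewallGroupInUse: still ACTIVE
            return False
        except lib_exc.NotFound:     # already cleaned up elsewhere
            return True

    if not test_utils.call_until_true(_try_delete, timeout, interval):
        raise lib_exc.TimeoutException(
            'Firewall group %s still in use after %ss' % (fwg_id, timeout))
}}}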

[Yahoo-eng-team] [Bug 2018130] [NEW] [master][functional] test_cascading_del_in_txn fails with ovsdbapp=2.3.0

2023-04-29 Thread yatin
Public bug reported:

With the ovsdbapp==2.3.0 [1] release, the functional test
neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_cascading_del_in_txn
fails consistently[2] as below:-

ft1.7: 
neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_cascading_del_in_txntesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/test_ovs_lib.py",
 line 493, in test_cascading_del_in_txn
self.assertRaises((RuntimeError, idlutils.RowNotFound),
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 468, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: .del_port_mod_iface at 
0x7f35bbe8af80> returned None

Seems to be caused by [3], as db_set will no longer fail if the row is not
found (if_exists=True).

We need to either add if_exists=False to keep the same behavior expected by the
test, or update the test so it succeeds with the current behavior.

[1] https://review.opendev.org/c/openstack/requirements/+/881736
[2] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi=openstack%2Fneutron=master=0
[3] 
https://github.com/openstack/ovsdbapp/commit/6ab3f75388f583c99ac38d9c651f2529d68b617f
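A rough sketch of the two options, assuming db_set accepts the if_exists
keyword introduced by [3] (the port name, column and value below are
placeholders, and 'ovsdb' stands for an ovsdbapp API instance):

{{{
def del_port_and_mod_iface(ovsdb, port_name):
    # Option 1: restore the pre-2.3.0 behaviour in the caller -- ask db_set
    # to fail when the Interface row is already gone, so the transaction
    # still raises as the test expects.
    with ovsdb.transaction(check_error=True) as txn:
        txn.add(ovsdb.del_port(port_name))
        txn.add(ovsdb.db_set('Interface', port_name,
                             ('type', 'internal'), if_exists=False))


def assert_iface_gone(ovsdb, port_name):
    # Option 2: keep the new if_exists=True default and make the test assert
    # the row's absence explicitly instead of expecting db_set to raise.
    if ovsdb.db_find('Interface', ('name', '=', port_name)).execute():
        raise RuntimeError('Interface %s unexpectedly still exists' % port_name)
}}}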

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: functional-tests gate-failure

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2018130

Title:
  [master][functional] test_cascading_del_in_txn fails with
  ovsdbapp=2.3.0

Status in neutron:
  Confirmed

Bug description:
  With the ovsdbapp==2.3.0 [1] release, the functional test
  
neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_cascading_del_in_txn
  fails consistently[2] as below:-

  ft1.7: 
neutron.tests.functional.agent.test_ovs_lib.OVSBridgeTestCase.test_cascading_del_in_txntesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/test_ovs_lib.py",
 line 493, in test_cascading_del_in_txn
  self.assertRaises((RuntimeError, idlutils.RowNotFound),
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 468, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional-gate/lib/python3.10/site-packages/testtools/testcase.py",
 line 481, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: .del_port_mod_iface at 
0x7f35bbe8af80> returned None

  Seems to be caused by [3], as db_set will no longer fail if the row is not
  found (if_exists=True).

  We need to either add if_exists=False to keep the same behavior expected by
  the test, or update the test so it succeeds with the current behavior.

  [1] https://review.opendev.org/c/openstack/requirements/+/881736
  [2] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi=openstack%2Fneutron=master=0
  [3] 
https://github.com/openstack/ovsdbapp/commit/6ab3f75388f583c99ac38d9c651f2529d68b617f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2018130/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017992] [NEW] Jobs running on vexxhost provider failing with Mirror Issues

2023-04-28 Thread yatin
Public bug reported:

Fails as below:-
2023-04-27 12:36:47.535483 | controller | ++ 
functions-common:apt_get_update:1155 :   timeout 300 sh -c 'while ! sudo 
http_proxy= https_proxy= no_proxy=  apt-get update; do sleep 30; done'
2023-04-27 12:36:50.877357 | controller | Err:1 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal InRelease
2023-04-27 12:36:50.877408 | controller |   Could not connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:443 (2604:e100:1:0:f816:3eff:fe0c:e2c0). - 
connect (113: No route to host) Could not connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:443 (199.204.45.149). - connect (113: No 
route to host)
2023-04-27 12:36:50.877419 | controller | Err:2 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal-updates InRelease
2023-04-27 12:36:50.877428 | controller |   Unable to connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:https:
2023-04-27 12:36:50.877437 | controller | Err:3 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal-backports InRelease
2023-04-27 12:36:50.877446 | controller |   Unable to connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:https:
2023-04-27 12:36:50.877454 | controller | Err:4 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu focal-security InRelease
2023-04-27 12:36:50.877462 | controller |   Unable to connect to 
mirror.ca-ymq-1.vexxhost.opendev.org:https:
2023-04-27 12:36:50.892401 | controller | Reading package lists...
2023-04-27 12:36:50.895427 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal/InRelease  
Could not connect to mirror.ca-ymq-1.vexxhost.opendev.org:443 
(2604:e100:1:0:f816:3eff:fe0c:e2c0). - connect (113: No route to host) Could 
not connect to mirror.ca-ymq-1.vexxhost.opendev.org:443 (199.204.45.149). - 
connect (113: No route to host)
2023-04-27 12:36:50.895462 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal-updates/InRelease
  Unable to connect to mirror.ca-ymq-1.vexxhost.opendev.org:https:
2023-04-27 12:36:50.895539 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal-backports/InRelease
  Unable to connect to mirror.ca-ymq-1.vexxhost.opendev.org:https:
2023-04-27 12:36:50.895604 | controller | W: Failed to fetch 
https://mirror.ca-ymq-1.vexxhost.opendev.org/ubuntu/dists/focal-security/InRelease
  Unable to connect to mirror.ca-ymq-1.vexxhost.opendev.org:https:
2023-04-27 12:36:50.895677 | controller | W: Some index files failed to 
download. They have been ignored, or old ones used instead.


2023-04-27 12:36:51.007845 | controller |
2023-04-27 12:36:51.008040 | controller | E: Unable to locate package apache2
2023-04-27 12:36:51.008098 | controller | E: Unable to locate package 
apache2-dev
2023-04-27 12:36:51.008157 | controller | E: Unable to locate package bc
2023-04-27 12:36:51.008218 | controller | E: Package 'bsdmainutils' has no 
installation candidate
2023-04-27 12:36:51.008321 | controller | E: Unable to locate package gawk
2023-04-27 12:36:51.008382 | controller | E: Unable to locate package gettext
2023-04-27 12:36:51.008438 | controller | E: Unable to locate package graphviz

Example failures:-
- 
https://db3cff29bdb713e861e1-7db0c1fa1bd98a0adf758f7c1d49f672.ssl.cf5.rackcdn.com/881735/4/check/neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults/278dce1/job-output.txt
- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_944/881742/4/check/neutron-tempest-plugin-designate-scenario/9444d54/job-output.txt


It's happening randomly since the last couple of weeks; it seems to be a general
issue with IPv6 external connectivity.
From opensearch[1] it can be seen that the following host_ids are impacted:-
c984fb897502bc826ccaf0e258b6071e76c29b305bc5b31b301de76a
1b1e841d5cdc8c40a480da993c1cbf0cd64900baff612378898e72ab
6cc97bc57f540569368fcc47255180c5d21ed00a22cad83eeb600cec
86c687840cd74cd63cbb095748afa5c9cd0f6fcea898d90aa030cc68
94cd367e7821f5d74cf44c5ebafd9af18d2b6dff64a9bee067337cf6
8926aa5796637312bf5e46a0671a88021c208235fafdfcf22931eb01
70670f45d0dc4eaae28e6553525eec409dfb6f80e8d6c8dcef7d7bf5

And it seems like it started happening when the nodes were upgraded as part of
fixing the nested-virt issue[2]

[1] 
https://opensearch.logs.openstack.org/_dashboards/app/discover/?security_tenant=global#/?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-30d,to:now))&_a=(columns:!(_source),filters:!(('$state':(store:appState),meta:(alias:!n,disabled:!f,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',key:filename,negate:!f,params:(query:job-output.txt),type:phrase),query:(match_phrase:(filename:job-output.txt,index:'94869730-aea8-11ec-9e6a-83741af3fdcd',interval:auto,query:(language:kuery,query:'message:%22Package%20bsdmainutils%20is%20not%20available,%20but%20is%20referred%20to%20by%20another%20package%22'),sort:!())
[2] https://bugs.launchpad.net/neutron/+bug/1999249/comments/3

** Affects: neutron
 Importance: 

[Yahoo-eng-team] [Bug 2017478] [NEW] openstack-tox-py38 broken with tooz==4.0.0

2023-04-24 Thread yatin
Public bug reported:

openstack-tox-py38 is broken[1] since the tooz update in upper-constraints [2];
tooz-4.0.0 requires python>=3.9[3] and fails as below:-
2023-04-23 22:45:20.084152 | ubuntu-focal | The conflict is caused by:
2023-04-23 22:45:20.084169 | ubuntu-focal | The user requested tooz>=1.58.0
2023-04-23 22:45:20.084184 | ubuntu-focal | The user requested (constraint) 
tooz===4.0.0

Looking at [4] we can drop py38 jobs as min supported python version is
3.9 in 2023.2.

[1] 
https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-py38=openstack%2Fneutron=master=0
[2] https://review.opendev.org/c/openstack/requirements/+/880738
[3] https://review.opendev.org/c/openstack/tooz/+/879930
[4] https://review.opendev.org/c/openstack/governance/+/872232

** Affects: neutron
 Importance: Critical
 Assignee: yatin (yatinkarel)
 Status: Triaged


** Tags: gate-failure unittest

** Tags added: gate-failure unittest

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017478

Title:
  openstack-tox-py38 broken with tooz==4.0.0

Status in neutron:
  Triaged

Bug description:
  openstack-tox-py38 is broken[1] since the tooz update in upper-constraints [2];
tooz-4.0.0 requires python>=3.9[3] and fails as below:-
  2023-04-23 22:45:20.084152 | ubuntu-focal | The conflict is caused by:
  2023-04-23 22:45:20.084169 | ubuntu-focal | The user requested 
tooz>=1.58.0
  2023-04-23 22:45:20.084184 | ubuntu-focal | The user requested 
(constraint) tooz===4.0.0

  Looking at [4] we can drop py38 jobs as min supported python version
  is 3.9 in 2023.2.

  [1] 
https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-py38=openstack%2Fneutron=master=0
  [2] https://review.opendev.org/c/openstack/requirements/+/880738
  [3] https://review.opendev.org/c/openstack/tooz/+/879930
  [4] https://review.opendev.org/c/openstack/governance/+/872232

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017478/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017131] [NEW] neutron-loki job fails tests with ORM session: SQL execution without transaction in progress

2023-04-20 Thread yatin
Public bug reported:

The job (neutron-ovn-tempest-with-uwsgi-loki) running the neutron-loki service[1]
fails tempest tests randomly with tracebacks like the one below:-

 ORM session: SQL execution without transaction in progress, traceback:
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/greenthread.py", line 221, in 
main
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: result = 
function(*args, **kwargs)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 837, in 
process_request
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: 
proto.__init__(conn_state, self)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 350, in 
__init__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: self.handle()
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 383, in handle
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: 
self.handle_one_request()
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 459, in 
handle_one_request
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: 
self.handle_one_response()
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py", line 569, in 
handle_one_response
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: result = 
self.application(self.environ, start_response)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/paste/urlmap.py", line 216, in __call__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: return 
app(environ, start_response)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 129, in __call__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: resp = 
self.call_func(req, *args, **kw)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 193, in call_func
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: return 
self.func(req, *args, **kwargs)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/oslo_middleware/base.py", line 124, in 
__call__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: response = 
req.get_response(self.application)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/request.py", line 1313, in send
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: status, headers, 
app_iter = self.call_application(
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/request.py", line 1278, in 
call_application
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: app_iter = 
application(self.environ, start_response)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 129, in __call__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: resp = 
self.call_func(req, *args, **kw)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 193, in call_func
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: return 
self.func(req, *args, **kwargs)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/oslo_middleware/base.py", line 124, in 
__call__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: response = 
req.get_response(self.application)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/request.py", line 1313, in send
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: status, headers, 
app_iter = self.call_application(
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/request.py", line 1278, in 
call_application
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: app_iter = 
application(self.environ, start_response)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 129, in __call__
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]: resp = 
self.call_func(req, *args, **kw)
Apr 17 12:12:44.386611 np0033759257 neutron-server[57996]:   File 
"/usr/local/lib/python3.10/dist-packages/webob/dec.py", line 193, in call_func
Apr 17 12:12:44.386611 
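The warning flags ORM statements executed while the session has no
transaction explicitly in progress. A generic illustration of the difference
with plain SQLAlchemy (this is not neutron code; the in-memory engine and the
SELECT 1 query are placeholders):

{{{
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite://')          # placeholder in-memory engine
Session = sessionmaker(bind=engine)

# Implicit scope: the session starts a transaction behind the scenes on the
# first execute, and nothing in the calling code marks its boundaries.
session = Session()
session.execute(text('SELECT 1'))
session.close()

# Explicit scope: the transaction boundary is visible to the caller, which is
# what neutron's reader/writer context managers are meant to guarantee.
session = Session()
with session.begin():
    session.execute(text('SELECT 1'))
session.close()
}}}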

[Yahoo-eng-team] [Bug 2015364] Re: [skip-level] OVN tests constantly failing

2023-04-14 Thread yatin
** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015364

Title:
  [skip-level] OVN tests constantly failing

Status in devstack:
  In Progress
Status in neutron:
  Confirmed

Bug description:
  In the new Zed-Bobcat skip-level jobs [1], the OVN job has 4 tests constantly 
failing (1 fail is actually a setup class method):
  
*tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops
  
*tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.test_add_remove_fixed_ip
  *setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)
  
*tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops

  Logs:
  
*https://fd50651997fbb0337883-282d0b18354725863279cd3ebda4ab44.ssl.cf5.rackcdn.com/878632/6/experimental/neutron-ovn-grenade-multinode-skip-level/baf4ed5/controller/logs/grenade.sh_log.txt
  
*https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_607/878632/6/experimental/neutron-ovn-grenade-multinode-skip-level/6072d85/controller/logs/grenade.sh_log.txt

  [1]https://review.opendev.org/c/openstack/neutron/+/878632

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2015364/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015728] [NEW] ovs/ovn source(with master branch) deployments broken

2023-04-10 Thread yatin
Public bug reported:

With [1], ovn/ovs jobs running with OVS_BRANCH=master, OVN_BRANCH=main are
broken; they fail as below:-

utilities/ovn-dbctl.c: In function ‘server_loop’:
utilities/ovn-dbctl.c:1105:5: error: too few arguments to function 
‘daemonize_start’
 1105 | daemonize_start(false);
  | ^~~
In file included from utilities/ovn-dbctl.c:22:
/opt/stack/ovs/lib/daemon.h:170:6: note: declared here
  170 | void daemonize_start(bool access_datapath, bool access_hardware_ports);
  |  ^~~
make[1]: *** [Makefile:2374: utilities/ovn-dbctl.o] Error 1
make[1]: *** Waiting for unfinished jobs


Example failure:- 
https://zuul.openstack.org/build/b7b1700e2e5941f7a52b57ca411db722

Builds:-
- 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ipv6-only-ovs-master
- 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-full-multinode-ovs-master
- https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master
- 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-centos-9-stream
- 
https://zuul.openstack.org/builds?job_name=ovn-octavia-provider-functional-master
- https://zuul.openstack.org/builds?job_name=ovn-octavia-provider-tempest-master


Until the ovn main branch is adapted to this change we need to pin OVS_BRANCH to
a working commit or, better, a stable branch (as done with [2])

Also I noticed some of these jobs running on neutron/ovn-octavia stable
branches; they likely do not need to run there, so this should be checked and
cleaned up.

[1] https://github.com/openvswitch/ovs/commit/07cf5810de 
[2] https://github.com/ovn-org/ovn/commit/b61e819bf9673

** Affects: neutron
 Importance: High
 Assignee: yatin (yatinkarel)
 Status: Confirmed


** Tags: gat

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

** Tags added: gat

** Changed in: neutron
   Importance: Critical => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015728

Title:
  ovs/ovn source(with master branch) deployments broken

Status in neutron:
  Confirmed

Bug description:
  With [1], ovn/ovs jobs running with OVS_BRANCH=master, OVN_BRANCH=main
  are broken; they fail as below:-

  utilities/ovn-dbctl.c: In function ‘server_loop’:
  utilities/ovn-dbctl.c:1105:5: error: too few arguments to function 
‘daemonize_start’
   1105 | daemonize_start(false);
| ^~~
  In file included from utilities/ovn-dbctl.c:22:
  /opt/stack/ovs/lib/daemon.h:170:6: note: declared here
170 | void daemonize_start(bool access_datapath, bool 
access_hardware_ports);
|  ^~~
  make[1]: *** [Makefile:2374: utilities/ovn-dbctl.o] Error 1
  make[1]: *** Waiting for unfinished jobs

  
  Example failure:- 
https://zuul.openstack.org/build/b7b1700e2e5941f7a52b57ca411db722

  Builds:-
  - 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ipv6-only-ovs-master
  - 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-full-multinode-ovs-master
  - https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master
  - 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-centos-9-stream
  - 
https://zuul.openstack.org/builds?job_name=ovn-octavia-provider-functional-master
  - 
https://zuul.openstack.org/builds?job_name=ovn-octavia-provider-tempest-master

  
  Until the ovn main branch is adapted to this change we need to pin OVS_BRANCH
to a working commit or, better, a stable branch (as done with [2])

  Also I noticed some of these jobs running on neutron/ovn-octavia
  stable branches; they likely do not need to run there, so this should be
  checked and cleaned up.

  [1] https://github.com/openvswitch/ovs/commit/07cf5810de 
  [2] https://github.com/ovn-org/ovn/commit/b61e819bf9673

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2015728/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463631] Re: 60_nova/resources.sh:106:ping_check_public fails intermittently

2023-02-22 Thread yatin
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1463631

Title:
  60_nova/resources.sh:106:ping_check_public fails intermittently

Status in grenade:
  Confirmed
Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  http://logs.openstack.org/12/186112/17/gate/gate-grenade-
  dsvm/4da364e/logs/grenade.sh.txt.gz#_2015-06-09_22_42_15_929

  2015-06-09 22:42:13.960 | --- 172.24.5.1 ping statistics ---
  2015-06-09 22:42:13.960 | 1 packets transmitted, 0 received, 100% packet 
loss, time 0ms
  2015-06-09 22:42:13.960 | 
  2015-06-09 22:42:15.929 | + [[ True = \T\r\u\e ]]
  2015-06-09 22:42:15.929 | + die 67 '[Fail] Couldn'\''t ping server'
  2015-06-09 22:42:15.929 | + local exitcode=0
  2015-06-09 22:42:15.929 | [Call Trace]
  2015-06-09 22:42:15.929 | 
/opt/stack/new/grenade/projects/60_nova/resources.sh:134:verify
  2015-06-09 22:42:15.929 | 
/opt/stack/new/grenade/projects/60_nova/resources.sh:101:verify_noapi
  2015-06-09 22:42:15.929 | 
/opt/stack/new/grenade/projects/60_nova/resources.sh:106:ping_check_public
  2015-06-09 22:42:15.929 | /opt/stack/new/grenade/functions:67:die
  2015-06-09 22:42:15.931 | [ERROR] /opt/stack/new/grenade/functions:67 [Fail] 
Couldn't ping server
  2015-06-09 22:42:16.933 | 1 die /opt/stack/old/devstack/functions-common
  2015-06-09 22:42:16.933 | 67 ping_check_public 
/opt/stack/new/grenade/functions
  2015-06-09 22:42:16.933 | 106 verify_noapi 
/opt/stack/new/grenade/projects/60_nova/resources.sh
  2015-06-09 22:42:16.933 | 101 verify 
/opt/stack/new/grenade/projects/60_nova/resources.sh
  2015-06-09 22:42:16.933 | 134 main 
/opt/stack/new/grenade/projects/60_nova/resources.sh
  2015-06-09 22:42:16.933 | Exit code: 1
  2015-06-09 22:42:16.961 | World dumping... see 
/opt/stack/old/worlddump-2015-06-09-224216.txt for details
  2015-06-09 22:42:26.139 | [Call Trace]
  2015-06-09 22:42:26.139 | ./grenade.sh:250:resources
  2015-06-09 22:42:26.139 | /opt/stack/new/grenade/inc/plugin:82:die
  2015-06-09 22:42:26.141 | [ERROR] /opt/stack/new/grenade/inc/plugin:82 Failed 
to run /opt/stack/new/grenade/projects/60_nova/resources.sh verify

  I wonder if there is a race in setting up security groups.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiW0ZhaWxdIENvdWxkbid0IHBpbmcgc2VydmVyXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImN1c3RvbSIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJmcm9tIjoiMjAxNS0wNS0yN1QwMDozMDoxNiswMDowMCIsInRvIjoiMjAxNS0wNi0xMFQwMDozMDoxNiswMDowMCIsInVzZXJfaW50ZXJ2YWwiOiIwIn0sInN0YW1wIjoxNDMzODk2MjUwNTAyfQ==

  This hits in nova-network and neutron grenade jobs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1463631/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007986] Re: [neutron-lib][stable] Wallaby CI, tempest failing: unknown environment 'integrated-full'

2023-02-21 Thread yatin
Being fixed in Tempest
https://review.opendev.org/c/openstack/tempest/+/874704

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2007986

Title:
  [neutron-lib][stable] Wallaby CI, tempest failing: unknown environment
  'integrated-full'

Status in neutron:
  Invalid
Status in tempest:
  In Progress

Bug description:
  Neutron-lib Wallaby CI, job "tempest-full-py3" is failing with error:
controller | ERROR: unknown environment 'integrated-full'

  Logs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_cf8/874412/1/check/tempest-
  full-py3/cf87e9c/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2007986/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2006683] [NEW] fwaas scenario job fails randomly

2023-02-08 Thread yatin
Public bug reported:

The fwaas scenario job fails randomly on random tests

Example failures:-
test test_update_firewall_group fails:-
  - 
https://2c1464b5b351d9aa5e93-14387ee33f74a27c93fba699ca02403e.ssl.cf1.rackcdn.com/869152/3/gate/neutron-tempest-plugin-fwaas/d9df246/testr_results.html
  - 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_26d/873060/1/check/neutron-tempest-plugin-fwaas/26d3325/testr_results.html

traceback-5: {{{
Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 87, in 
call_and_ignore_notfound_exc
return func(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 75, in delete_firewall_rule
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 330, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 720, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 841, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 'FirewallRuleInUse', 'message': 'Firewall rule 
ddaf801f-b279-41f4-a793-bd02e37c886e is being used.', 'detail': ''}
}}}

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 341, in test_update_firewall_group
self.firewall_groups_client.delete_firewall_group(fwg_id)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 38, in delete_firewall_group
return self.delete_resource(uri)
  File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
resp, body = self.delete(req_uri)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 330, in 
delete
return self.request('DELETE', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 720, in 
request
self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 841, in 
_error_checker
raise exceptions.Conflict(resp_body, resp=resp)
tempest.lib.exceptions.Conflict: Conflict with state of target resource
Details: {'type': 'FirewallGroupInUse', 'message': 'Firewall group 
0845629f-cb8a-4796-8bb5-e8df06c6919f is still active.', 'detail': ''}


Fails as:-
Feb 07 22:44:15.544863 np0033003110 neutron-server[57277]: INFO 
neutron.api.v2.resource [None req-3d968b8a-c24c-43c1-9d82-ded509555450 
tempest-FWaaSv2ExtensionTestJSON-1939451253 
tempest-FWaaSv2ExtensionTestJSON-1939451253-project] create failed (client 
error): There was a conflict when trying to complete your request.
Feb 07 22:44:15.545878 np0033003110 neutron-server[57277]: INFO neutron.wsgi 
[None req-3d968b8a-c24c-43c1-9d82-ded509555450 
tempest-FWaaSv2ExtensionTestJSON-1939451253 
tempest-FWaaSv2ExtensionTestJSON-1939451253-project] 
149.202.171.72,149.202.171.72 "POST /networking/v2.0/fwaas/firewall_groups 
HTTP/1.1" status: 409  len: 378 time: 0.0383568


test_create_show_delete_firewall_group:-
  - 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_a1d/873060/1/check/neutron-tempest-plugin-fwaas/a1da5c4/testr_results.html
  - 
https://c4dea4e2ebaa3412b0a2-051d59c98d54ce1c08087cd83140c470.ssl.cf1.rackcdn.com/873060/1/check/neutron-tempest-plugin-fwaas-zed/7ade93d/testr_results.html


Traceback as:-
Feb 09 01:35:54.898784 np0033019570 neutron-server[57239]: DEBUG 
neutron_fwaas.db.firewall.v2.firewall_db_v2 [None 
req-f56695ab-2f6b-4035-8b61-74bcaa43898f 
tempest-FWaaSv2ExtensionTestJSON-845501349 
tempest-FWaaSv2ExtensionTestJSON-845501349-project] Default FWG was 
concurrently created {{(pid=57239) _ensure_default_firewall_group 
/opt/stack/neutron-fwaas/neutron_fwaas/db/firewall/v2/firewall_db_v2.py:940}}
Feb 09 01:35:54.920152 np0033019570 neutron-server[57239]: ERROR 
neutron.api.v2.resource [None req-f56695ab-2f6b-4035-8b61-74bcaa43898f 
tempest-FWaaSv2ExtensionTestJSON-845501349 
tempest-FWaaSv2ExtensionTestJSON-845501349-project] create failed: No details.: 
sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled 
back due to a previous exception during flush. To begin a new transaction with 
this Session, first issue Session.rollback(). Original exception was: 
(pymysql.err.IntegrityError) (1062, "Duplicate entry 
'c119661f432d4bc5aaf2308e8a8bd47a' for key 'default_firewall_groups.PRIMARY'")
Feb 09 
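The failure mode above is a create-default-resource race: two API workers try
to insert the default firewall group for the same project, the loser hits the
duplicate-key error, and the session is then reused without a rollback, which
produces the PendingRollbackError. A minimal sketch of the tolerant pattern,
assuming neutron_lib's enginefacade context managers and placeholder
create/get helpers (create_fn/get_fn stand in for the real firewall_db_v2
code):

{{{
from neutron_lib.db import api as db_api
from oslo_db import exception as db_exc


def ensure_default_fwg(context, project_id, create_fn, get_fn):
    try:
        with db_api.CONTEXT_WRITER.using(context):
            return create_fn(context, project_id)
    except db_exc.DBDuplicateEntry:
        # Another worker won the race; the failed transaction was rolled back
        # by the context manager, so just read the row the winner created.
        with db_api.CONTEXT_READER.using(context):
            return get_fn(context, project_id)
}}}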

[Yahoo-eng-team] [Bug 2002629] Re: devstack build in the gate fails with: ovnnb_db.sock: database connection failed

2023-01-12 Thread yatin
a/CA/int-ca/devstack-cert.crt 
/opt/stack/data/CA/int-ca/ca-chain.pem
2023-01-12 10:45:50.244116 | controller | ovn-nbctl: 
unix:/var/run/ovn/ovnnb_db.sock: database connection failed (No such file or 
directory)

● ovn-ovsdb-server-nb.service - Open vSwitch database server for OVN 
Northbound database
 Loaded: loaded (/lib/systemd/system/ovn-ovsdb-server-nb.service; enabled; 
vendor preset: enabled)
 Active: active (running) since Thu 2023-01-12 10:45:49 UTC; 1min 4s ago
   Main PID: 77511 (ovsdb-server)
  Tasks: 1 (limit: 9247)
 Memory: 2.8M
 CGroup: /system.slice/ovn-ovsdb-server-nb.service
 └─77511 ovsdb-server -vconsole:off -vfile:info 
--log-file=/var/log/ovn/ovsdb-server-nb.log 
--remote=punix:/var/run/ovn/ovnnb_db.sock --pidfile=/var/run/ovn/ovnnb_db.pid 
--unixctl=/var/run/ovn/ovnnb_db.ctl 
--remote=db:OVN_Northbound,NB_Global,connections 
--private-key=db:OVN_Northbound,SSL,private_key 
--certificate=db:OVN_Northbound,SSL,certificate 
--ca-cert=db:OVN_Northbound,SSL,ca_cert 
--ssl-protocols=db:OVN_Northbound,SSL,ssl_protocols 
--ssl-ciphers=db:OVN_Northbound,SSL,ssl_ciphers /var/lib/ovn/ovnnb_db.db


Jan 12 10:45:49 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 systemd[1]: 
ovn-ovsdb-server-nb.service: Succeeded.
Jan 12 10:45:49 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 systemd[1]: 
Stopped Open vSwitch database server for OVN Northbound database.


Jan 12 10:45:49 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 systemd[1]: 
Started Open vSwitch database server for OVN Northbound database.
Jan 12 10:45:50 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 ovn-ctl[77511]:  * 
/var/lib/ovn/ovnnb_db.db does not exist
Jan 12 10:46:05 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 ovn-ctl[77511]:  * 
Creating empty database /var/lib/ovn/ovnnb_db.db
Jan 12 10:46:05 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 
ovsdb-server[77511]: ovs|1|vlog|INFO|opened log file 
/var/log/ovn/ovsdb-server-nb.log
Jan 12 10:46:05 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 
ovsdb-server[77511]: ovs|2|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 
2.13.8
Jan 12 10:46:15 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 
ovsdb-server[77511]: ovs|3|memory|INFO|7272 kB peak resident set size after 
10.0 seconds
Jan 12 10:46:15 nested-virt-ubuntu-focal-ovh-bhs1-0032709305 
ovsdb-server[77511]: ovs|4|memory|INFO|cells:31 monitors:2 sessions:1


Ensuring the sock files are absent before the services start, just like the db 
files, should handle this issue. Will push a patch to devstack.

** Changed in: neutron
   Importance: Undecided => High

** Also affects: devstack
   Importance: Undecided
   Status: New

** Changed in: devstack
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2002629

Title:
  devstack build in the gate fails with: ovnnb_db.sock: database
  connection failed

Status in devstack:
  New
Status in neutron:
  New

Bug description:
  Recently we seem to hit the same devstack build failure in many
  different gate jobs. The usual error message is:

  + lib/neutron_plugins/ovn_agent:start_ovn:714 :   wait_for_db_file 
/var/lib/ovn/ovnsb_db.db
  + lib/neutron_plugins/ovn_agent:wait_for_db_file:175 :   local count=0
  + lib/neutron_plugins/ovn_agent:wait_for_db_file:176 :   '[' '!' -f 
/var/lib/ovn/ovnsb_db.db ']'
  + lib/neutron_plugins/ovn_agent:start_ovn:716 :   is_service_enabled tls-proxy
  + functions-common:is_service_enabled:2089 :   return 0
  + lib/neutron_plugins/ovn_agent:start_ovn:717 :   sudo ovn-nbctl 
--db=unix:/var/run/ovn/ovnnb_db.sock set-ssl 
/opt/stack/data/CA/int-ca/private/devstack-cert.key 
/opt/stack/data/CA/int-ca/devstack-cert.crt 
/opt/stack/data/CA/int-ca/ca-chain.pem
  ovn-nbctl: unix:/var/run/ovn/ovnnb_db.sock: database connection failed (No 
such file or directory)
  + lib/neutron_plugins/ovn_agent:start_ovn:1 :   exit_trap

  A few example logs:

  https://zuul.opendev.org/t/openstack/build/ec852d75c8094afcb4140871bc9ffa36
  https://zuul.opendev.org/t/openstack/build/eae988aa8cd24c78894a3d3438392357

  The search expression 'message:"ovnnb_db.sock: database connection
  failed"' gives me 1200+ hits in https://opensearch.logs.openstack.org
  for the last 2 weeks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2002629/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999238] Re: [FT] Error in "test_router_port_binding"

2022-12-09 Thread yatin
Specific to patch https://review.opendev.org/c/openstack/neutron/+/867075, as 
older ovs/ovn versions are set up:-
OVN_BRANCH: v20.06.1
OVS_BRANCH: 0047ca3a0290f1ef954f2c76b31477cf4b9755f5

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999238

Title:
  [FT] Error in "test_router_port_binding"

Status in neutron:
  Invalid

Bug description:
  Log:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ce0/867075/3/check/neutron-
  functional-with-uwsgi/ce08f60/testr_results.html

  Snippet: https://paste.opendev.org/show/bJ4hUYzj0tkqcjMU7o0f/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999238/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999239] Re: [FT] Error in "test_agent_list"

2022-12-09 Thread yatin
Specific to patch https://review.opendev.org/c/openstack/neutron/+/867075, as 
older ovs/ovn versions are set up:-
OVN_BRANCH: v20.06.1
OVS_BRANCH: 0047ca3a0290f1ef954f2c76b31477cf4b9755f5

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999239

Title:
  [FT] Error in "test_agent_list"

Status in neutron:
  Invalid

Bug description:
  Log:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ce0/867075/3/check/neutron-
  functional-with-uwsgi/ce08f60/testr_results.html

  Snippet: https://paste.opendev.org/show/bnHiQN58tZoFIHtjMhTi/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999239/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999240] Re: [FT] Error in "test_sg_stateful_toggle_updates_ovn_acls"

2022-12-09 Thread yatin
Specific to patch https://review.opendev.org/c/openstack/neutron/+/867075, as 
older ovs/ovn versions are set up:-
OVN_BRANCH: v20.06.1
OVS_BRANCH: 0047ca3a0290f1ef954f2c76b31477cf4b9755f5

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999240

Title:
  [FT] Error in "test_sg_stateful_toggle_updates_ovn_acls"

Status in neutron:
  Invalid

Bug description:
  Logs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ce0/867075/3/check/neutron-
  functional-with-uwsgi/ce08f60/testr_results.html

  Snippet: https://paste.opendev.org/show/bKuQzDvF6UshmKItbs6r/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999240/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999154] [NEW] ovs/ovn source deployment broken with ovs_branch=master

2022-12-08 Thread yatin
Public bug reported:

Since [1], jobs running with OVS_BRANCH=master are broken and fail as below:-
utilities/ovn-dbctl.c: In function ‘do_dbctl’:
utilities/ovn-dbctl.c:724:9: error: too few arguments to function 
‘ctl_context_init_command’
  724 | ctl_context_init_command(ctx, c);
  | ^~~~
In file included from utilities/ovn-dbctl.c:23:
/opt/stack/ovs/lib/db-ctl-base.h:249:6: note: declared here
  249 | void ctl_context_init_command(struct ctl_context *, struct ctl_command 
*,
  |  ^~~~
make[1]: *** [Makefile:2352: utilities/ovn-dbctl.o] Error 1
make[1]: *** Waiting for unfinished jobs
make[1]: Leaving directory '/opt/stack/ovn'
make: *** [Makefile:1548: all] Error 2
+ lib/neutron_plugins/ovn_agent:compile_ovn:1 :   exit_trap

Failure builds example:-
- https://zuul.opendev.org/t/openstack/build/3a900a1cfe824746ac8ffc6a27fc8ec4
- https://zuul.opendev.org/t/openstack/build/7d862338d6194a4fb3a34e8c3c67f532
- https://zuul.opendev.org/t/openstack/build/ae092f4985af41908697240e3f64f522


Until the OVN repo[2] gets updated to work with ovs master we have to pin ovs 
to a working version to get these experimental jobs back to green.

[1] 
https://github.com/openvswitch/ovs/commit/b8bf410a5c94173da02279b369d75875c4035959
[2] https://github.com/ovn-org/ovn

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999154

Title:
  ovs/ovn source deployment broken with ovs_branch=master

Status in neutron:
  New

Bug description:
  Since [1], jobs running with OVS_BRANCH=master are broken and fail as below:-
  utilities/ovn-dbctl.c: In function ‘do_dbctl’:
  utilities/ovn-dbctl.c:724:9: error: too few arguments to function 
‘ctl_context_init_command’
724 | ctl_context_init_command(ctx, c);
| ^~~~
  In file included from utilities/ovn-dbctl.c:23:
  /opt/stack/ovs/lib/db-ctl-base.h:249:6: note: declared here
249 | void ctl_context_init_command(struct ctl_context *, struct 
ctl_command *,
|  ^~~~
  make[1]: *** [Makefile:2352: utilities/ovn-dbctl.o] Error 1
  make[1]: *** Waiting for unfinished jobs
  make[1]: Leaving directory '/opt/stack/ovn'
  make: *** [Makefile:1548: all] Error 2
  + lib/neutron_plugins/ovn_agent:compile_ovn:1 :   exit_trap

  Failure builds example:-
  - https://zuul.opendev.org/t/openstack/build/3a900a1cfe824746ac8ffc6a27fc8ec4
  - https://zuul.opendev.org/t/openstack/build/7d862338d6194a4fb3a34e8c3c67f532
  - https://zuul.opendev.org/t/openstack/build/ae092f4985af41908697240e3f64f522

  
  Until the OVN repo[2] gets updated to work with ovs master we have to pin 
ovs to a working version to get these experimental jobs back to green.

  [1] 
https://github.com/openvswitch/ovs/commit/b8bf410a5c94173da02279b369d75875c4035959
  [2] https://github.com/ovn-org/ovn

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999154/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1972764] Re: [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError: _handle_lb_on_ls() got an unexpected keyword argument 'context'

2022-10-20 Thread yatin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1972764

Title:
  [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError:
  _handle_lb_on_ls() got an unexpected keyword argument 'context'

Status in neutron:
  Fix Released

Bug description:
  It's failing with below TraceBack:-

  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager [req-2e5ef575-b18c-4091-8c15-b37f6bcf0fdd 
f1840520501c41b2a6a534525f0f90a4 bf49659cd4cb40edb393b914198ce3c9 - default 
default] Error during notification for 
neutron.services.portforwarding.drivers.ovn.driver.OVNPortForwarding._handle_lb_on_ls-4305748
 router_interface, after_create: TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager Traceback (most recent call last):
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py", line 197, 
in _notify_loop
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager callback(resource, event, trigger, 
**kwargs)
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager 

  
  Was noticed in a TripleO job https://bugs.launchpad.net/tripleo/+bug/1972660.

  This method was added in
  https://review.opendev.org/q/I0c4d492887216cad7a8155dceb738389f2886376
  and backported down to wallaby. Xena+ are ok; only wallaby is impacted
  because before xena the old notification format is used, where arguments
  are passed as kwargs.
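
  To make the difference concrete, a simplified sketch of the two calling
  conventions (illustrative signatures only, not the actual driver code):

    # Xena+ payload style: the callback receives a single payload object.
    def handle_lb_on_ls_payload(resource, event, trigger, payload=None):
        context = payload.context
        # ... use payload.latest_state, payload.metadata, etc.

    # Wallaby and earlier: arguments arrive as individual keyword arguments,
    # so a payload-only signature fails with
    # "TypeError: ... got an unexpected keyword argument 'context'".
    def handle_lb_on_ls_kwargs(resource, event, trigger, **kwargs):
        context = kwargs.get('context')
        # ... use e.g. kwargs.get('router_id'), kwargs.get('port')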

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1972764/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973783] Re: [devstack] Segment plugin reports Traceback as placement client not configured

2022-10-20 Thread yatin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973783

Title:
  [devstack] Segment plugin reports Traceback as placement client not
  configured

Status in neutron:
  Fix Released

Bug description:
  The following Traceback is reported; although the job passes, it creates
  noise in the logs, so it should be cleared:-

  
  May 17 12:08:25.056617 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: DEBUG neutron_lib.callbacks.manager [None 
req-01d22b64-1fbc-4578-8cc2-c6565188c424 admin admin] Publish callbacks 
['neutron.plugins.ml2.plugin.Ml2Plugin._handle_segment_change-1048155', 
'neutron.services.segments.plugin.NovaSegmentNotifier._notify_segment_deleted-1983453',
 
'neutron.services.segments.plugin.NovaSegmentNotifier._notify_segment_deleted-495']
 for segment (45896f0b-13b1-4cfc-ab32-297a8d8dae05), after_delete {{(pid=72995) 
_notify_loop 
/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: Traceback (most recent call last):
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/hubs/hub.py", line 476, in 
fire_timers
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: timer()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/hubs/timer.py", line 59, in 
__call__
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: cb(*args, **kw)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File "/opt/stack/neutron/neutron/common/utils.py", 
line 922, in wrapper
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return func(*args, **kwargs)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/notifiers/batch_notifier.py", line 58, in 
synced_send
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: self._notify()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/notifiers/batch_notifier.py", line 69, in _notify
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: self.callback(batched_events)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 211, in 
_send_notifications
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: event.method(event)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 383, in 
_delete_nova_inventory
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: aggregate_id = 
self._get_aggregate_id(event.segment_id)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 370, in 
_get_aggregate_id
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: aggregate_uuid = self.p_client.list_aggregates(
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/placement/client.py", line 
58, in wrapper
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return f(self, *a, **k)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/placement/client.py", line 
554, in list_aggregates
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return self._get(url).json()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/placement/client.py", line 
190, in _get
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return self._client.get(url, 
endpoint_filter=self._ks_filter,
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/keystoneauth1/session.py", line 1141, 
in get
  May 17 12:08:25.069453 

[Yahoo-eng-team] [Bug 1979047] Re: Interface attach fails with libvirt.libvirtError: internal error: unable to execute QEMU command 'netdev_add': File descriptor named '(null)' has not been found

2022-10-20 Thread yatin
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1979047

Title:
  Interface attach fails with libvirt.libvirtError: internal error:
  unable to execute QEMU command'netdev_add': File
  descriptor named '(null)' has not been found

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The tempest-integrated-compute-centos-9-stream job is broken since
  2022-06-16 02:26:37 [1]. Multiple interface attach tempest test fails
  with:

  libvirt.libvirtError: internal error: unable to execute QEMU command
  'netdev_add': File descriptor named '(null)' has not been found

  Full exception stack trace:

  [None req-6f95599e-022a-42ab-a4de-07c7b8f73daf tempest-
  AttachInterfacesTestJSON-2035965943 tempest-
  AttachInterfacesTestJSON-2035965943-project] [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] attaching network adapter
  failed.: libvirt.libvirtError: internal error: unable to execute QEMU
  command 'netdev_add': File descriptor named '(null)' has not been
  found

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] Traceback (most recent call
  last):

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2850, in
  attach_interface

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] guest.attach_device(cfg,
  persistent=True, live=live)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/opt/stack/nova/nova/virt/libvirt/guest.py", line 321, in
  attach_device

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]
  self._domain.attachDeviceFlags(device_xml, flags=flags)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 193,
  in doit

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] result =
  proxy_call(self._autowrap, f, *args, **kwargs)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 151,
  in proxy_call

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] rv = execute(f, *args,
  **kwargs)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 132,
  in execute

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] six.reraise(c, e, tb)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/six.py", line 719, in reraise

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] raise value

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/local/lib/python3.9/site-packages/eventlet/tpool.py", line 86,
  in tworker

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb] rv = meth(*args, **kwargs)

  Jun 17 08:25:43.545335 centos-9-stream-ovh-bhs1-0030065283 nova-
  compute[70195]: ERROR nova.virt.libvirt.driver [instance:
  b41513e6-cb4e-4441-af14-c272393cdafb]   File
  "/usr/lib64/python3.9/site-packages/libvirt.py", line 706, in
  attachDeviceFlags

  Jun 17 08:25:43.545335 

[Yahoo-eng-team] [Bug 1970679] Re: neutron-tempest-plugin-designate-scenario cross project job is failing on OVN

2022-10-20 Thread yatin
Based on https://bugs.launchpad.net/neutron/+bug/1970679/comments/5 and
https://review.opendev.org/c/openstack/devstack/+/848548 closing the
issue.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1970679

Title:
  neutron-tempest-plugin-designate-scenario cross project job is failing
  on OVN

Status in neutron:
  Fix Released

Bug description:
  The cross-project neutron-tempest-plugin-designate-scenario job is
  failing during the Designate gate runs due to an OVN failure.

  + lib/neutron_plugins/ovn_agent:start_ovn:698 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   local count=0
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 1 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=2
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 2 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=3
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 3 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=4
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 4 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=5
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 5 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:175 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:176 :   count=6
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:177 :   '[' 6 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:178 :   die 178 'Socket 
/var/run/openvswitch/ovnnb_db.sock not found'
  + functions-common:die:264 :   local exitcode=0
  [Call Trace]
  ./stack.sh:1284:start_ovn_services
  /opt/stack/devstack/lib/neutron-legacy:516:start_ovn
  /opt/stack/devstack/lib/neutron_plugins/ovn_agent:698:wait_for_sock_file
  /opt/stack/devstack/lib/neutron_plugins/ovn_agent:178:die
  [ERROR] /opt/stack/devstack/lib/neutron_plugins/ovn_agent:178 Socket 
/var/run/openvswitch/ovnnb_db.sock not found
  exit_trap: cleaning up child processes

  An example job run is here:
  https://zuul.opendev.org/t/openstack/build/b014e50e018d426b9367fd3219ed489e

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1970679/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1992352] [NEW] [OVN] POST requests stucks when rabbitmq is not available

2022-10-10 Thread yatin
Public bug reported:

As some of the operations rely on messaging callbacks[1][2], these
requests get stuck when a messaging driver like rabbitmq is not
available. For OVN without any other agent running, there is no consumer
for these messages, so these operations should skip the messaging callbacks.

To reproduce:-
- Setup Devstack with OVN using 
https://github.com/openstack/neutron/blob/master/devstack/ovn-local.conf.sample
- Comment out transport_url (or modify port 5672 -> 5673) in neutron.conf and 
restart neutron services with sudo systemctl restart devstack@q*
- Try operations like openstack network delete, openstack router add/remove 
subnet, etc.

# In the neutron logs you can see many oslo.messaging errors: Access
Denied (transport_url commented out) or Connection Refused (transport_url
pointed at a non-listening port). The oslo_messaging connection attempts are
also not needed in such cases, so that can be fixed as well.

Actual Result:-
These operations get stuck

Expected Result:-
- These operations should succeed as there are no consumers for those callbacks.


[1] 
https://github.com/openstack/neutron/blob/master/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L44-L126
[2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L478-L510
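
A schematic illustration of the suggested behaviour (hypothetical helper and
function names, not neutron's actual code): skip the agent RPC fan-out when no
agents are registered, so an OVN-only deployment never needs the messaging
driver for these operations.

    def notify_routers_updated(context, router_ids, rpc_client, agents_present):
        """Fan out a routers_updated cast only when something can consume it."""
        if not agents_present:
            # No L3/DHCP agents are registered (pure OVN deployment), so a
            # cast would only force an unnecessary rabbitmq connection.
            return
        cctxt = rpc_client.prepare(fanout=True)
        cctxt.cast(context, 'routers_updated', routers=router_ids)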

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1992352

Title:
  [OVN] POST requests stucks when rabbitmq is not available

Status in neutron:
  New

Bug description:
  As some of the operations rely on messaging callbacks[1][2], these
  requests get stuck when a messaging driver like rabbitmq is not
  available. For OVN without any other agent running, there is no
  consumer for these messages, so these operations should skip the
  messaging callbacks.

  To reproduce:-
  - Setup Devstack with OVN using 
https://github.com/openstack/neutron/blob/master/devstack/ovn-local.conf.sample
  - Comment out transport_url (or modify port 5672 -> 5673) in neutron.conf and 
restart neutron services with sudo systemctl restart devstack@q*
  - Try operations like openstack network delete, openstack router add/remove 
subnet, etc.

  # In the neutron logs you can see many oslo.messaging errors: Access
  Denied (transport_url commented out) or Connection Refused (transport_url
  pointed at a non-listening port). The oslo_messaging connection attempts are
  also not needed in such cases, so that can be fixed as well.

  Actual Result:-
  These operations get stuck

  Expected Result:-
  - These operations should succeed as there are no consumers for those 
callbacks.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py#L44-L126
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L478-L510

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1992352/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1989057] [NEW] [master] [functional] test__get_dvr_subnet_ids_on_host_query failing with oslo.db-12.1.0 release

2022-09-07 Thread yatin
Public bug reported:

Fails as below:-
ft1.12: 
neutron.tests.functional.services.l3_router.test_l3_dvr_ha_router_plugin.L3DvrHATestCase.test__get_dvr_subnet_ids_on_host_querytesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py",
 line 1754, in test__get_dvr_subnet_ids_on_host_query
self.core_plugin.update_port(
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 702, in inner
raise RuntimeError(_("Method %s cannot be called within a "
RuntimeError: Method  cannot 
be called within a transaction.

Logs:-
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fa7/828488/1/check/neutron-functional-with-uwsgi/fa7c800/testr_results.html
https://ae61986a45a6121e3a31-a2f117574c92282cf0ccc3fc53b9f219.ssl.cf2.rackcdn.com/855851/2/gate/neutron-functional-with-uwsgi/f56ec26/testr_results.html

This started happening after oslo.db was updated to 12.1.0 in upper-constraints (u-c).
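
For context, the RuntimeError comes from a guard in neutron.common.utils (see
the traceback) that refuses to run certain plugin methods while a database
transaction is already open. A simplified sketch of such a guard (not the
actual implementation):

    import functools

    def transaction_guard(f):
        """Fail loudly if the decorated method runs inside an open DB transaction."""
        @functools.wraps(f)
        def inner(self, context, *args, **kwargs):
            if context.session.is_active:  # simplified check
                raise RuntimeError(
                    "Method %s cannot be called within a transaction." % f.__name__)
            return f(self, context, *args, **kwargs)
        return inner

The failure appearing right after the oslo.db 12.1.0 bump suggests the new
release changed when a session reports an open transaction, so the functional
test's update_port call now trips this guard.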

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1989057

Title:
  [master] [functional] test__get_dvr_subnet_ids_on_host_query failing
  with oslo.db-12.1.0 release

Status in neutron:
  New

Bug description:
  Fails as below:-
  ft1.12: 
neutron.tests.functional.services.l3_router.test_l3_dvr_ha_router_plugin.L3DvrHATestCase.test__get_dvr_subnet_ids_on_host_querytesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/services/l3_router/test_l3_dvr_router_plugin.py",
 line 1754, in test__get_dvr_subnet_ids_on_host_query
  self.core_plugin.update_port(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
702, in inner
  raise RuntimeError(_("Method %s cannot be called within a "
  RuntimeError: Method  
cannot be called within a transaction.

  Logs:-
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fa7/828488/1/check/neutron-functional-with-uwsgi/fa7c800/testr_results.html
  
https://ae61986a45a6121e3a31-a2f117574c92282cf0ccc3fc53b9f219.ssl.cf2.rackcdn.com/855851/2/gate/neutron-functional-with-uwsgi/f56ec26/testr_results.html

  This started happening after oslo.db was updated to 12.1.0 in upper-constraints (u-c).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1989057/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1981963] [NEW] Some jobs broken post pyroute2 update to 0.7.1

2022-07-18 Thread yatin
Public bug reported:

pyroute2 was updated to 0.7.1 with
https://review.opendev.org/c/openstack/requirements/+/849790, and since then
a couple of jobs are broken, like:-

- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-dvr-multinode=master
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-dvr-ha-multinode-full=master
- 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi=master

Example failures:-
https://37a49967c371d50badcf-d0788a990c172de672e24591209c14b2.ssl.cf5.rackcdn.com/849122/8/check/neutron-functional-with-uwsgi/1971bae/testr_results.html

Failing as:-
ft7.4: 
neutron.tests.functional.privileged.agent.linux.test_ip_lib.RuleTestCase.test_add_rule_iptesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/privileged/agent/linux/test_ip_lib.py",
 line 323, in test_add_rule_ip
priv_ip_lib.add_ip_rule(self.namespace, src=ip_address,
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/priv_context.py",
 line 271, in _wrap
return self.channel.remote_call(name, args, kwargs,
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/daemon.py",
 line 215, in remote_call
raise exc_type(*result[2])
pyroute2.netlink.exceptions.NetlinkError: (22, 'Invalid argument')


testtools.matchers._impl.MismatchError: !=:
reference = {'from': '0.0.0.0/0',
 'iif': 'fpr-663cc9d3-b',
 'priority': '2852018311',
 'table': '2852018311',
 'type': 'unicast'}
actual= {'from': '0.0.0.0/0',
 'iif': 'fpr-663cc9d3-b',
 'priority': '2852018311',
 'table': '2852018311',
 'type': 'unspecified'}


https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ce1/849122/8/check/neutron-ovs-tempest-dvr-ha-multinode-full/ce10281/testr_results.html
https://d0b0b53d30de16fbad20-5f381a9e8c14b627196c6ef3340b4d4e.ssl.cf5.rackcdn.com/849122/8/check/neutron-ovs-grenade-dvr-multinode/2a568da/testr_results.html

^ jobs failing at SSH timeout
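
A small reproduction-style sketch of the operation that breaks (plain pyroute2,
not the neutron functional test itself; needs root):

    from pyroute2 import IPRoute

    # Add "from 192.168.0.0/24 lookup 100 priority 1000", dump the rules back,
    # then clean up. With pyroute2 0.7.1 the neutron test's equivalent add
    # fails with NetlinkError(22), and dumped rules report a different 'type'
    # than before.
    with IPRoute() as ipr:
        ipr.rule('add', src='192.168.0.0', src_len=24, table=100, priority=1000)
        for r in ipr.get_rules():
            print(r.get_attr('FRA_TABLE'), r.get_attr('FRA_PRIORITY'))
        ipr.rule('del', src='192.168.0.0', src_len=24, table=100, priority=1000)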

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: functional-tests gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1981963

Title:
  Some jobs broken post pyroute2 update to 0.7.1

Status in neutron:
  New

Bug description:
  pyroute2 was updated to 0.7.1 with
  https://review.opendev.org/c/openstack/requirements/+/849790, and since
  then a couple of jobs are broken, like:-

  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-dvr-multinode=master
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-dvr-ha-multinode-full=master
  - 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi=master

  Example failures:-
  
https://37a49967c371d50badcf-d0788a990c172de672e24591209c14b2.ssl.cf5.rackcdn.com/849122/8/check/neutron-functional-with-uwsgi/1971bae/testr_results.html

  Failing as:-
  ft7.4: 
neutron.tests.functional.privileged.agent.linux.test_ip_lib.RuleTestCase.test_add_rule_iptesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/privileged/agent/linux/test_ip_lib.py",
 line 323, in test_add_rule_ip
  priv_ip_lib.add_ip_rule(self.namespace, src=ip_address,
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/priv_context.py",
 line 271, in _wrap
  return self.channel.remote_call(name, args, kwargs,
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/daemon.py",
 line 215, in remote_call
  raise exc_type(*result[2])
  pyroute2.netlink.exceptions.NetlinkError: (22, 'Invalid argument')

  
  testtools.matchers._impl.MismatchError: !=:
  reference = {'from': '0.0.0.0/0',
   'iif': 'fpr-663cc9d3-b',
   'priority': '2852018311',
   'table': '2852018311',
   'type': 'unicast'}
  actual= {'from': '0.0.0.0/0',
   'iif': 'fpr-663cc9d3-b',
   'priority': '2852018311',
   'table': '2852018311',
   'type': 'unspecified'}

  
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ce1/849122/8/check/neutron-ovs-tempest-dvr-ha-multinode-full/ce10281/testr_results.html
  
https://d0b0b53d30de16fbad20-5f381a9e8c14b627196c6ef3340b4d4e.ssl.cf5.rackcdn.com/849122/8/check/neutron-ovs-grenade-dvr-multinode/2a568da/testr_results.html

  

[Yahoo-eng-team] [Bug 1979031] [NEW] [master] fullstack functional jobs broken with pyrouet2-0.6.12

2022-06-17 Thread yatin
Public bug reported:

After the pyroute2-0.6.12 update[1], multiple tests in the fullstack/functional
jobs are failing[2][3].

Noting some failures here:-
functional:-
AssertionError: CIDR 2001:db8::/64 not found in the list of routes
AssertionError: Route not found: {'table': 'main', 'cidr': '192.168.0.0/24', 
'source_prefix': None, 'scope': 'global', 'device': 'test_device', 'via': None, 
'metric': 0, 'proto': 'static'}
testtools.matchers._impl.MismatchError: 'via' not in None
external_device.route.get_gateway().get('via'))
AttributeError: 'NoneType' object has no attribute 'get'
testtools.matchers._impl.MismatchError: !=:
reference = [{'cidr': '0.0.0.0/0',
  'device': 'rfp-403146e1-f',
  'table': 16,
  'via': '169.254.88.135'},
 {'cidr': '::/0',
  'device': 'rfp-403146e1-f',
  'table': 'main',
  'via': 'fe80::9005:f3ff:fe70:40b9'}]
actual= []

fullstack
neutron.tests.common.machine_fixtures.FakeMachineException: Address 
10.0.0.87/24 or gateway 10.0.0.1 not configured properly on port port3f2663

Route-related calls look broken with the new version.
Example:-
functional:- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_736/797120/19/check/neutron-functional-with-uwsgi/7366408/testr_results.html
fullstack:- 
https://983763dc8d641eb0d8f2-98d26e4bc9646b70fe4861772f4678c8.ssl.cf2.rackcdn.com/845363/4/gate/neutron-fullstack-with-uwsgi/1cc6a9d/testr_results.html


[1] https://review.opendev.org/c/openstack/requirements/+/845871
[2] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi=openstack%2Fneutron=master=0
[3] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi=openstack%2Fneutron=master=0
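
A reproduction-style sketch of the kind of route handling the failing tests
depend on (plain pyroute2, not the neutron code; 'eth0' is just a placeholder
device and the snippet needs root):

    import socket
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        idx = ipr.link_lookup(ifname='eth0')[0]  # placeholder interface
        ipr.route('add', dst='192.168.100.0/24', oif=idx)
        # Neutron reads routes (and the gateway 'via') back roughly like this;
        # with pyroute2 0.6.12 the returned attributes no longer matched what
        # the tests expected.
        for r in ipr.get_routes(family=socket.AF_INET):
            print(r.get_attr('RTA_DST'), r.get_attr('RTA_GATEWAY'))
        ipr.route('del', dst='192.168.100.0/24', oif=idx)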

** Affects: neutron
 Importance: Critical
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New


** Tags: fullstack functional-tests gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

** Tags added: fullstack functional-tests gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1979031

Title:
  [master] fullstack functional jobs broken with pyrouet2-0.6.12

Status in neutron:
  New

Bug description:
  After the pyroute2-0.6.12 update[1], multiple tests in the
  fullstack/functional jobs are failing[2][3].

  Noting some failures here:-
  functional:-
  AssertionError: CIDR 2001:db8::/64 not found in the list of routes
  AssertionError: Route not found: {'table': 'main', 'cidr': '192.168.0.0/24', 
'source_prefix': None, 'scope': 'global', 'device': 'test_device', 'via': None, 
'metric': 0, 'proto': 'static'}
  testtools.matchers._impl.MismatchError: 'via' not in None
  external_device.route.get_gateway().get('via'))
  AttributeError: 'NoneType' object has no attribute 'get'
  testtools.matchers._impl.MismatchError: !=:
  reference = [{'cidr': '0.0.0.0/0',
'device': 'rfp-403146e1-f',
'table': 16,
'via': '169.254.88.135'},
   {'cidr': '::/0',
'device': 'rfp-403146e1-f',
'table': 'main',
'via': 'fe80::9005:f3ff:fe70:40b9'}]
  actual= []

  fullstack
  neutron.tests.common.machine_fixtures.FakeMachineException: Address 
10.0.0.87/24 or gateway 10.0.0.1 not configured properly on port port3f2663

  Route-related calls look broken with the new version.
  Example:-
  functional:- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_736/797120/19/check/neutron-functional-with-uwsgi/7366408/testr_results.html
  fullstack:- 
https://983763dc8d641eb0d8f2-98d26e4bc9646b70fe4861772f4678c8.ssl.cf2.rackcdn.com/845363/4/gate/neutron-fullstack-with-uwsgi/1cc6a9d/testr_results.html

  
  [1] https://review.opendev.org/c/openstack/requirements/+/845871
  [2] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi=openstack%2Fneutron=master=0
  [3] 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi=openstack%2Fneutron=master=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1979031/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978938] [NEW] [9-stream][master][yoga] ovs/ovn fips jobs fails randomly with DNS issue

2022-06-16 Thread yatin
Public bug reported:

These FIPS jobs running on centos-9-stream fail randomly at different points 
due to a DNS issue.
In these jobs, after configuring fips the node gets rebooted. The unbound 
service may take some time to be ready after the reboot, and until then any 
DNS resolution fails.

Fails like below:-
2022-06-16 02:36:58.829027 | controller | + ./stack.sh:_install_rdo:314 
 :   sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo 
http://trunk.rdoproject.org/centos9-master/delorean-deps.repo
2022-06-16 02:36:58.940595 | controller |   % Total% Received % Xferd  
Average Speed   TimeTime Time  Current
2022-06-16 02:36:58.940689 | controller |  
Dload  Upload   Total   SpentLeft  Speed
2022-06-16 02:36:58.941657 | controller | 
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
0curl: (6) Could not resolve host: trunk.rdoproject.org
2022-06-16 02:36:58.946989 | controller | + ./stack.sh:_install_rdo:316 
 :   sudo dnf -y update
2022-06-16 02:36:59.649717 | controller | CentOS Stream 9 - BaseOS  
  0.0  B/s |   0  B 00:00
2022-06-16 02:36:59.650078 | controller | Errors during downloading metadata 
for repository 'baseos':
2022-06-16 02:36:59.650117 | controller |   - Curl error (6): Couldn't resolve 
host name for 
https://mirror-int.ord.rax.opendev.org/centos-stream/9-stream/BaseOS/x86_64/os/repodata/repomd.xml
 [Could not resolve host: mirror-int.ord.rax.opendev.org]

2022-06-16 02:40:30.411317 | controller | + 
functions-common:sudo_with_proxies:2384  :   sudo http_proxy= https_proxy= 
no_proxy= dnf install -y rabbitmq-server
2022-06-16 02:40:31.225502 | controller | Last metadata expiration check: 
0:03:30 ago on Thu 16 Jun 2022 02:37:01 AM UTC.
2022-06-16 02:40:31.278762 | controller | No match for argument: rabbitmq-server
2022-06-16 02:40:31.286118 | controller | Error: Unable to find a match: 
rabbitmq-server

Job builds:-
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-tempest-ovs-release-fips=openstack/neutron
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tempest-fips=openstack/neutron
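
Since every failure reduces to "DNS is not usable yet right after the FIPS
reboot", the obvious mitigation is to wait until resolution works before
continuing. A minimal sketch of that idea (illustrative only; the real fix
belongs in the job's post-reboot tasks, and the hostname is just an example
taken from the log above):

    import socket
    import time

    def wait_for_dns(host="mirror-int.ord.rax.opendev.org", timeout=120):
        """Return True once 'host' resolves, or False after 'timeout' seconds."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                socket.getaddrinfo(host, 443)
                return True
            except socket.gaierror:
                time.sleep(5)
        return False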

** Affects: neutron
 Importance: High
 Assignee: yatin (yatinkarel)
 Status: In Progress

** Changed in: neutron
   Status: New => Triaged

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978938

Title:
  [9-stream][master][yoga] ovs/ovn fips jobs fails randomly with DNS
  issue

Status in neutron:
  In Progress

Bug description:
  These FIPS jobs running on centos-9-stream fail randomly at different points 
due to a DNS issue.
  In these jobs, after configuring fips the node gets rebooted. The unbound 
service may take some time to be ready after the reboot, and until then any 
DNS resolution fails.

  Fails like below:-
  2022-06-16 02:36:58.829027 | controller | + ./stack.sh:_install_rdo:314   
   :   sudo curl -L -o /etc/yum.repos.d/delorean-deps.repo 
http://trunk.rdoproject.org/centos9-master/delorean-deps.repo
  2022-06-16 02:36:58.940595 | controller |   % Total% Received % Xferd  
Average Speed   TimeTime Time  Current
  2022-06-16 02:36:58.940689 | controller |  
Dload  Upload   Total   SpentLeft  Speed
  2022-06-16 02:36:58.941657 | controller | 
0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
0curl: (6) Could not resolve host: trunk.rdoproject.org
  2022-06-16 02:36:58.946989 | controller | + ./stack.sh:_install_rdo:316   
   :   sudo dnf -y update
  2022-06-16 02:36:59.649717 | controller | CentOS Stream 9 - BaseOS
0.0  B/s |   0  B 00:00
  2022-06-16 02:36:59.650078 | controller | Errors during downloading metadata 
for repository 'baseos':
  2022-06-16 02:36:59.650117 | controller |   - Curl error (6): Couldn't 
resolve host name for 
https://mirror-int.ord.rax.opendev.org/centos-stream/9-stream/BaseOS/x86_64/os/repodata/repomd.xml
 [Could not resolve host: mirror-int.ord.rax.opendev.org]

  2022-06-16 02:40:30.411317 | controller | + 
functions-common:sudo_with_proxies:2384  :   sudo http_proxy= https_proxy= 
no_proxy= dnf install -y rabbitmq-server
  2022-06-16 02:40:31.225502 | controller | Last metadata expiration check: 
0:03:30 ago on Thu 16 Jun 2022 02:37:01 AM UTC.
  2022-06-16 02:40:31.278762 | controller | No match for argument: 
rabbitmq-server
  2022-06-16 02:40:31.286118 | controller | Error: Unable to find a match: 
rabbitmq-server

  Job builds:-
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-tempest-ovs-release-fips=openstack/neutron
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-tem

[Yahoo-eng-team] [Bug 1940243] Re: Neutron-tempest-plugin scenario tests - oom-killer is killing mysql process

2022-06-15 Thread yatin
Reopening the bug as we are seeing a few failures (5 failures in the last 15 
days as per https://opensearch.logs.openstack.org) in linuxbridge and 
openvswitch scenario jobs:-
https://1b33868f301e2201a22c-a64bb815b8796eabf8a53948331bd878.ssl.cf5.rackcdn.com/845366/2/check/neutron-tempest-plugin-openvswitch/aa3106c/controller/logs/syslog.txt
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_82d/843649/2/gate/neutron-tempest-plugin-openvswitch-iptables_hybrid/82d4dbe/controller/logs/syslog.txt
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_052/842135/1/check/neutron-tempest-plugin-linuxbridge/0528022/controller/logs/syslog.txt
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f11/844680/1/check/neutron-tempest-plugin-linuxbridge/f111e3f/controller/logs/syslog.txt
https://bd6c3da3b1723c5f3502-97cb3b32849366f5bed744685e46b266.ssl.cf1.rackcdn.com/844860/3/check/neutron-tempest-plugin-linuxbridge/35e5065/controller/logs/syslog.txt

In all the above failures SwapFree was 0 kB, many VMs were active (using 256 
MB/128 MB each), and mysql was using approximately 10% of total memory.
Also multiple (around 10) neutron-keepalived-state-change processes were using 
approximately 12-14% of memory.

Considering this is only seen in master jobs, and given the timing, it seems
this was triggered by https://review.opendev.org/c/openstack/neutron-tempest-
plugin/+/836912, as more api tests now run along with the scenario tests.

The swap used in these jobs is 1 GB, so it can be increased to something
like 3-4 GB to avoid these issues caused by memory shortage.

** Changed in: neutron
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940243

Title:
  Neutron-tempest-plugin scenario tests - oom-killer is killing mysql
  process

Status in neutron:
  Triaged

Bug description:
  It happens pretty often recently that during our scenario tests we are
  running out of memory and oom-killer is killing the mysql process, as it is
  number 1 in memory consumption. That is causing job failures.

  It seems to me that it happens when there are tests running which use
  VMs with the advanced image (ubuntu). Maybe we should extract those
  tests and run them as a second stage with "--concurrency 1"?

  Examples of failures:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_85a/803462/2/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/85afc13/testr_results.html

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_944/803936/1/gate/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/9445e5f/testr_results.html

  
https://09e003a6a650f320c43d-e30f275ad83ed88289c7399adb6c5ee6.ssl.cf1.rackcdn.com/804236/5/check/neutron-
  tempest-plugin-scenario-linuxbridge/770722a/testr_results.html

  
https://bd90009aa1732b7b8d4a-e998c5625939f617052baaae6f827bb8.ssl.cf5.rackcdn.com/797221/1/check/neutron-
  tempest-plugin-scenario-openvswitch/2a7ab79/testr_results.html

  
https://27020bbcd4882754b192-88656c065c39ed46f44b21a92a1cea67.ssl.cf5.rackcdn.com/800445/7/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/5e597ae/testr_results.html

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e7d/800446/9/check/neutron-
  tempest-plugin-scenario-openvswitch/e7d72c9/testr_results.html

  
https://637b02491f0435a9a86b-ccec73fd7dde7a9826f6a9aeb49ab878.ssl.cf5.rackcdn.com/804397/1/gate/neutron-
  tempest-plugin-scenario-linuxbridge/64bae23/testr_results.html

  
https://d1b1a7bc5606074c0db2-9f552c22a38891cd59267376a7a41496.ssl.cf5.rackcdn.com/802596/12/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/de03f1f/testr_results.html

  
https://b395fe859a68f8d08e03-e48e76b6f53fcff59de7a7c1c3da6c62.ssl.cf1.rackcdn.com/804394/3/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/98bfff3/testr_results.html

  
https://6ee071d84f4801a650d3-2635c9269ad2bde2592553cd282ad960.ssl.cf2.rackcdn.com/804394/3/check/neutron-
  tempest-plugin-scenario-linuxbridge/a9282d0/testr_results.html

  
  Logstash query: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Details%3A%20Unexpected%20API%20Error.%20Please%20report%20this%20at%20http%3A%2F%2Fbugs.launchpad.net%2Fnova%2F%20and%20attach%20the%20Nova%20API%20log%20if%20possible.%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940243/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978369] Re: [ovn] External Gateway Loop in NB SB DB

2022-06-13 Thread yatin
*** This bug is a duplicate of bug 1973347 ***
https://bugs.launchpad.net/bugs/1973347

This looks like a duplicate of https://bugs.launchpad.net/neutron/+bug/1973347
and is fixed with
https://review.opendev.org/c/openstack/neutron/+/842147. This should be
backported to stable branches as well.

@Ammad Can you try out the patch and confirm it fixes the issue for
you?

For now I will mark it as a duplicate of the other lp, i.e. 1973347; please
reopen if you still consider it a different issue once you check the other
bug and the fix.

** This bug has been marked a duplicate of bug 1973347
   OVN revision_number infinite update loop

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978369

Title:
  [ovn] External Gateway Loop in NB SB DB

Status in neutron:
  New

Bug description:
  Hi,

  I have installed neutron 20.0 and OVN 22.03 on ubuntu 22.04. When I
  create a router and attach an external network to it, it generates a loop
  of thousands of OVN NB and SB DB transactions, causing the DB size to grow.

  In SB

  OVSDB JSON 300 0f200aa6397e53cd203c99e6674bda75bdd53151
  
{"_date":1654929577073,"Multicast_Group":{"9b50bf0f-f9fe-4b9a-9333-fe2d1744575c":{"ports":["uuid","efc3d1a7-56a6-4235-8a29-4d1defdb459c"]}},"_is_diff":true,"_comment":"ovn-northd","Port_Binding":{"efc3d1a7-56a6-4235-8a29-4d1defdb459c":{"external_ids":["map",[["neutron:revision_number","10678"]]]}}}
  OVSDB JSON 402 86de47a7521717bd9ab7182422a6ad9b424c93d0
  
{"_date":1654929577345,"Multicast_Group":{"9b50bf0f-f9fe-4b9a-9333-fe2d1744575c":{"ports":["uuid","efc3d1a7-56a6-4235-8a29-4d1defdb459c"]}},"_is_diff":true,"_comment":"ovn-northd","Port_Binding":{"d34d2dd5-260b-4253-8429-5a7a89f3a500":{"external_ids":["map",[["neutron:revision_number","10679"]]]},"2ce0135e-b9b5-441b-aaae-7ce580bcf600":{"external_ids":["map",[["neutron:revision_number","10679"]]]}}}

  and In NB

  OVSDB JSON 334 e0ee7ff61d595e6151abd694ce2179c11d9e2570
  
{"_date":1654929536919,"_is_diff":true,"Logical_Router_Port":{"a0c2e43e-f4cb-4331-b070-a726b3da7a17":{"external_ids":["map",[["neutron:revision_number","10567"]]]}},"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]]}}}
  OVSDB JSON 269 dd8f87d8b132415a423b0f020b23f07d2488acba
  
{"_date":1654929536992,"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]],"external_ids":["map",[["neutron:revision_number","10567"]]]}},"_is_diff":true}
  OVSDB JSON 334 42d2a02531bd91d88b8783a45da47a33b5e3dc94
  
{"_date":1654929537262,"_is_diff":true,"Logical_Router_Port":{"a0c2e43e-f4cb-4331-b070-a726b3da7a17":{"external_ids":["map",[["neutron:revision_number","10568"]]]}},"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]]}}}
  OVSDB JSON 269 b8454f003de8cb14961aa37d5a557d2490d34049
  
{"_date":1654929537355,"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]],"external_ids":["map",[["neutron:revision_number","10568"]]]}},"_is_diff":true}
  OVSDB JSON 334 705b3007e83f0646642510903602965a6192fccf
  
{"_date":1654929537648,"_is_diff":true,"Logical_Router_Port":{"a0c2e43e-f4cb-4331-b070-a726b3da7a17":{"external_ids":["map",[["neutron:revision_number","10569"]]]}},"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]]}}}
  OVSDB JSON 269 4506e6ee9336bf2b8bde3134badbea7d23e72d33

  I also see below logs in ovn-northd.log

  2022-06-11T06:46:55.927Z|00171|northd|WARN|Dropped 650 log messages in last 
60 seconds (most recently, 0 seconds ago) due to excessive rate
  2022-06-11T06:46:55.927Z|00172|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '426cf7d5-4fd7-4aa9-806b-9dbe170c543e'.
  2022-06-11T06:47:55.941Z|00173|northd|WARN|Dropped 644 log messages in last 
60 seconds (most recently, 0 seconds ago) due to excessive rate
  2022-06-11T06:47:55.941Z|00174|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '426cf7d5-4fd7-4aa9-806b-9dbe170c543e'.

  
  I have tested it on ubuntu 20.04 via UCA and on 22.04. Below is the test
scenario.

  - Two gateway chassis
  - 5 compute nodes

  I have also tested this with a single chassis; for that case I am
  attaching neutron-server.log from when I attached the external interface to
  the router, along with the OVN NB and SB DBs.

  I would be happy to provide any further info that is needed.

  Ammad

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1978369/+subscriptions


-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1976360] Re: [sqlalchemy-20] Missing DB context in networking-sfc flow classifier

2022-06-02 Thread yatin
This should have been opened against networking-sfc; it's fixed with
https://review.opendev.org/c/openstack/networking-sfc/+/844251. I will
update the project.
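For reference, this class of sqlalchemy-20 warning is generally addressed by
running the DB access inside an explicit reader/writer context; a rough sketch
of the pattern (the function and model here are made up for illustration, only
the decorator usage reflects the style of the fix):

    from neutron_lib.db import api as db_api

    @db_api.CONTEXT_READER
    def get_all(context, model):
        # The query now runs inside an explicit reader transaction instead of
        # relying on an implicit/autocommit session, which is what the
        # sqlalchemy-20 checks flag.
        return context.session.query(model).all()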

** Project changed: neutron => networking-sfc

** Changed in: networking-sfc
   Status: Confirmed => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1976360

Title:
  [sqlalchemy-20] Missing DB context in networking-sfc flow classifier

Status in networking-sfc:
  Fix Committed

Bug description:
  Logs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ab7/842135/1/gate/neutron-
  tempest-plugin-sfc/ab703be/controller/logs/screen-q-svc.txt

  Snippet: https://paste.opendev.org/show/bYc7TKCf9X5f4yFhYu7O/

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-sfc/+bug/1976360/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1976323] [NEW] [functional/fullstack][master] move fips periodic job to CentOS 9-Stream

2022-05-30 Thread yatin
Public bug reported:

Currently these periodic jobs are running on CentOS 8-Stream (with python
3.6) and failing[1][2]. They are failing because master no longer supports
py3.6. To unblock, these jobs are being switched to run functional/fullstack
tests with python3.8[3] and the dbcounter installation is disabled.

Ideally these jobs should be switched to CentOS 9-Stream (which has python3.9
as default). But testing[4] found a couple of issues:-
1. DNS issues are hit randomly during setup until the unbound service is set
up after reboot. This can be fixed by waiting for unbound to be ready (a
minimal polling sketch is included after the references below) or worked
around by not using unbound.
2. 4 functional tests are failing:-
- test_delete_multiple_entries --> conntrack delete not deleting entries
(WARNING neutron.privileged.agent.linux.netlink_lib [-] Netlink query failed
looks related)
- test_delete_icmp_entry --> conntrack delete not deleting entries (WARNING
neutron.privileged.agent.linux.netlink_lib [-] Netlink query failed looks
related)
- test_rule_application_converges(IptablesFirewallDriver,with ipset)
   --> self.assertEqual([], self.firewall.iptables._apply()) fails as _apply
does not return the empty list as expected --> might be due to iptables
behavior in C9-Stream
- test_rule_application_converges(IptablesFirewallDriver,without ipset)
   --> self.assertEqual([], self.firewall.iptables._apply()) fails as _apply
does not return the empty list as expected --> might be due to iptables
behavior in C9-Stream

3. 2 fullstack tests are failing:-
- 
neutron.tests.fullstack.test_l3_agent.TestLegacyL3Agent.test_north_south_traffic
- 
neutron.tests.fullstack.test_local_ip.LocalIPTestCase.test_vm_is_accessible_by_local_ip(static_nat)
Both failing at ping
neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply 
obtained from IP address 2001:db8:1234::
neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply 
obtained from IP address 10.0.0.10


[1] 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips=openstack%2Fneutron=master=periodic=0
[2] 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips=openstack%2Fneutron=master=periodic=0
[3] https://review.opendev.org/c/openstack/neutron/+/843252
[4] https://review.opendev.org/c/openstack/neutron/+/843245
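
For issue (1) above, a minimal sketch of the "wait for unbound to be ready"
option (the host name, timeout and call site are arbitrary; this is not the
actual job change):

    import socket
    import time

    def wait_for_dns(name='opendev.org', timeout=300, interval=5):
        # Poll until the local resolver (e.g. unbound) can resolve a name.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                socket.getaddrinfo(name, 443)
                return True
            except socket.gaierror:
                time.sleep(interval)
        return False

    # e.g. call early in the job setup, before any package installation:
    # assert wait_for_dns(), 'unbound never became ready'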

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1976323

Title:
  [functional/fullstack][master] move fips periodic job to CentOS
  9-Stream

Status in neutron:
  New

Bug description:
  Currently these periodic jobs are running on CentOS 8-Stream (with
  python 3.6) and failing[1][2]. They are failing because master no longer
  supports py3.6. To unblock, these jobs are being switched to run
  functional/fullstack tests with python3.8[3] and the dbcounter
  installation is disabled.

  Ideally these jobs should be switched to CentOS 9-Stream (which has python3.9
as default). But testing[4] found a couple of issues:-
  1. DNS issues are hit randomly during setup until the unbound service is set
up after reboot. This can be fixed by waiting for unbound to be ready or worked
around by not using unbound.
  2. 4 functional tests are failing:-
  - test_delete_multiple_entries --> conntrack delete not deleting entries
(WARNING neutron.privileged.agent.linux.netlink_lib [-] Netlink query failed
looks related)
  - test_delete_icmp_entry --> conntrack delete not deleting entries (WARNING
neutron.privileged.agent.linux.netlink_lib [-] Netlink query failed looks
related)
  - test_rule_application_converges(IptablesFirewallDriver,with ipset)
 --> self.assertEqual([], self.firewall.iptables._apply()) fails as _apply
does not return the empty list as expected --> might be due to iptables
behavior in C9-Stream
  - test_rule_application_converges(IptablesFirewallDriver,without ipset)
 --> self.assertEqual([], self.firewall.iptables._apply()) fails as _apply
does not return the empty list as expected --> might be due to iptables
behavior in C9-Stream

  3. 2 fullstack tests are failing:-
  - 
neutron.tests.fullstack.test_l3_agent.TestLegacyL3Agent.test_north_south_traffic
  - 
neutron.tests.fullstack.test_local_ip.LocalIPTestCase.test_vm_is_accessible_by_local_ip(static_nat)
  Both failing at ping
  neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply 
obtained from IP address 2001:db8:1234::
  neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply 
obtained from IP address 10.0.0.10

  
  [1] 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips=openstack%2Fneutron=master=periodic=0
  [2] 
https://zuul.openstack.org/builds?job_name=neutron-functional-with-uwsgi-fips=openstack%2Fneutron=master=periodic=0
  [3] https://review.opendev.org/c/openstack/neutron/+/843252
  [4] https://review.opendev.org/c/openstack/neutron/+/843245

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1973783] [NEW] [devstack] Segment plugin reports Traceback as placement client not configured

2022-05-17 Thread yatin
rt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return self.request(url, 'GET', **kwargs)
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/keystoneauth1/session.py", line 811, in 
request
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: base_url = self.get_endpoint(auth, allow=allow,
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/keystoneauth1/session.py", line 1241, 
in get_endpoint
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: auth = self._auth_required(auth, 'determine endpoint 
URL')
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/keystoneauth1/session.py", line 1181, 
in _auth_required
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: raise exceptions.MissingAuthPlugin(msg_fmt % msg)
May 17 12:08:25.071826 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: 
An auth plugin is required to determine endpoint URL

Ex. log:-
https://8430494fc120d4e2add1-92777588630241a74fd2839fb5cc6a5d.ssl.cf5.rackcdn.com/841118/1/gate/neutron-
tempest-plugin-scenario-openvswitch/92d6c29/controller/logs/screen-q-
svc.txt

This is happening because the placement client is not configured in devstack
deployments.
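
The exception itself is easy to reproduce outside neutron; a small sketch (the
endpoint filter and credential handling are placeholders) showing that a
keystoneauth Session without an auth plugin raises exactly this error, which is
why the fix is about configuring the placement auth options rather than
changing the segments plugin code:

    from keystoneauth1 import exceptions as ks_exc
    from keystoneauth1 import session

    # A session created without an auth plugin cannot determine an endpoint URL:
    sess = session.Session()
    try:
        sess.get('/resource_providers',
                 endpoint_filter={'service_type': 'placement'})
    except ks_exc.MissingAuthPlugin as exc:
        # Same "An auth plugin is required to determine endpoint URL" as above.
        print('reproduced:', exc)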

** Affects: neutron
 Importance: Undecided
 Assignee: yatin (yatinkarel)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973783

Title:
  [devstack] Segment plugin reports Traceback as placement client not
  configured

Status in neutron:
  New

Bug description:
  The following Traceback is reported; the job passes, but the Traceback
  creates noise in the logs, so it should be cleaned up:-

  
  May 17 12:08:25.056617 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: DEBUG neutron_lib.callbacks.manager [None 
req-01d22b64-1fbc-4578-8cc2-c6565188c424 admin admin] Publish callbacks 
['neutron.plugins.ml2.plugin.Ml2Plugin._handle_segment_change-1048155', 
'neutron.services.segments.plugin.NovaSegmentNotifier._notify_segment_deleted-1983453',
 
'neutron.services.segments.plugin.NovaSegmentNotifier._notify_segment_deleted-495']
 for segment (45896f0b-13b1-4cfc-ab32-297a8d8dae05), after_delete {{(pid=72995) 
_notify_loop 
/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py:176}}
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: Traceback (most recent call last):
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/hubs/hub.py", line 476, in 
fire_timers
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: timer()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/hubs/timer.py", line 59, in 
__call__
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: cb(*args, **kw)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File "/opt/stack/neutron/neutron/common/utils.py", 
line 922, in wrapper
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: return func(*args, **kwargs)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/notifiers/batch_notifier.py", line 58, in 
synced_send
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: self._notify()
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/notifiers/batch_notifier.py", line 69, in _notify
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: self.callback(batched_events)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack/neutron/neutron/services/segments/plugin.py", line 211, in 
_send_notifications
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]: event.method(event)
  May 17 12:08:25.069453 nested-virt-ubuntu-focal-ovh-bhs1-0029671721 
neutron-server[72995]:   File 
"/opt/stack

[Yahoo-eng-team] [Bug 1973221] [NEW] [master] pep8 jobs TIMING OUT randomly

2022-05-12 Thread yatin
Public bug reported:

After patch https://review.opendev.org/c/openstack/neutron/+/806246 the pep8
job takes more than 30 minutes and TIMES_OUT. Specifically, flake8 now takes
much longer: approximately 2 minutes before the patch and more than 12 minutes
after it.

builds:- https://zuul.opendev.org/t/openstack/builds?job_name=openstack-
tox-pep8=openstack%2Fneutron=TIMED_OUT=0

example log:-
https://76c5414ef698e04295b6-922433d163de5a07cac84d974d42345f.ssl.cf2.rackcdn.com/841118/1/gate/openstack-
tox-pep8/e9359b5/job-output.txt

2022-05-12 11:03:13.275284 | ubuntu-focal | [5198] 
/home/zuul/src/opendev.org/openstack/neutron$ 
/home/zuul/src/opendev.org/openstack/neutron/.tox/lint/bin/flake8
2022-05-12 11:25:07.414232 | ubuntu-focal | :582: DeprecationWarning: invalid 
escape sequence \u
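
A small sketch (the repo path and flake8 binary location are assumptions) that
can be used to compare the flake8 run time locally on the parent commit vs. the
suspected patch:

    import subprocess
    import time

    def time_flake8(repo, flake8='.tox/lint/bin/flake8'):
        # Times a bare flake8 run, mirroring what the pep8 tox target does.
        start = time.monotonic()
        subprocess.run([flake8], cwd=repo, check=False)
        return time.monotonic() - start

    # print('flake8 took %.1fs' % time_flake8('/home/user/neutron'))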

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973221

Title:
  [master] pep8 jobs TIMING OUT randomly

Status in neutron:
  New

Bug description:
  After patch https://review.opendev.org/c/openstack/neutron/+/806246 the
  pep8 job takes more than 30 minutes and TIMES_OUT. Specifically, flake8
  now takes much longer: approximately 2 minutes before the patch and more
  than 12 minutes after it.

  builds:-
  https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-
  pep8=openstack%2Fneutron=TIMED_OUT=0

  example log:-
  
https://76c5414ef698e04295b6-922433d163de5a07cac84d974d42345f.ssl.cf2.rackcdn.com/841118/1/gate/openstack-
  tox-pep8/e9359b5/job-output.txt

  2022-05-12 11:03:13.275284 | ubuntu-focal | [5198] 
/home/zuul/src/opendev.org/openstack/neutron$ 
/home/zuul/src/opendev.org/openstack/neutron/.tox/lint/bin/flake8
  2022-05-12 11:25:07.414232 | ubuntu-focal | :582: DeprecationWarning: invalid 
escape sequence \u

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973221/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1972764] [NEW] [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError: _handle_lb_on_ls() got an unexpected keyword argument 'context'

2022-05-09 Thread yatin
Public bug reported:

It is failing with the Traceback below:-

2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 ERROR 
neutron_lib.callbacks.manager [req-2e5ef575-b18c-4091-8c15-b37f6bcf0fdd 
f1840520501c41b2a6a534525f0f90a4 bf49659cd4cb40edb393b914198ce3c9 - default 
default] Error during notification for 
neutron.services.portforwarding.drivers.ovn.driver.OVNPortForwarding._handle_lb_on_ls-4305748
 router_interface, after_create: TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 ERROR 
neutron_lib.callbacks.manager Traceback (most recent call last):
2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 ERROR 
neutron_lib.callbacks.manager   File 
"/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py", line 197, 
in _notify_loop
2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 ERROR 
neutron_lib.callbacks.manager callback(resource, event, trigger, **kwargs)
2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 ERROR 
neutron_lib.callbacks.manager TypeError: _handle_lb_on_ls() got an unexpected 
keyword argument 'context'
2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 ERROR 
neutron_lib.callbacks.manager 


This was noticed in a TripleO job: https://bugs.launchpad.net/tripleo/+bug/1972660.

This method was added in
https://review.opendev.org/q/I0c4d492887216cad7a8155dceb738389f2886376
and backported down to wallaby. Xena+ are OK; only wallaby is impacted because
before xena the old notification format is used, where arguments are passed
as kwargs.
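
A rough illustration of the signature mismatch (simplified, not the actual
backport):

    # Xena+ publishes an EventPayload, so a payload-style callback works there:
    def _handle_lb_on_ls(resource, event, trigger, payload=None):
        context = payload.context if payload else None
        # ... read the rest from payload.metadata / payload.latest_state ...

    # Wallaby still uses the legacy notify() path, which invokes
    #   callback(resource, event, trigger, **kwargs)   # kwargs include 'context'
    # so on wallaby the callback has to accept keyword arguments instead:
    def _handle_lb_on_ls_legacy(resource, event, trigger, **kwargs):
        context = kwargs.get('context')
        # ... same handling, reading the values from kwargs ...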

** Affects: neutron
 Importance: Undecided
 Assignee: yatin (yatinkarel)
 Status: Confirmed

** Changed in: neutron
   Status: New => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1972764

Title:
  [Wallaby] OVNPortForwarding._handle_lb_on_ls fails with TypeError:
  _handle_lb_on_ls() got an unexpected keyword argument 'context'

Status in neutron:
  Confirmed

Bug description:
  It is failing with the Traceback below:-

  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager [req-2e5ef575-b18c-4091-8c15-b37f6bcf0fdd 
f1840520501c41b2a6a534525f0f90a4 bf49659cd4cb40edb393b914198ce3c9 - default 
default] Error during notification for 
neutron.services.portforwarding.drivers.ovn.driver.OVNPortForwarding._handle_lb_on_ls-4305748
 router_interface, after_create: TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager Traceback (most recent call last):
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py", line 197, 
in _notify_loop
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager callback(resource, event, trigger, 
**kwargs)
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager TypeError: _handle_lb_on_ls() got an 
unexpected keyword argument 'context'
  2022-05-10 00:38:36.539 ERROR /var/log/containers/neutron/server.log: 15 
ERROR neutron_lib.callbacks.manager 

  
  This was noticed in a TripleO job: https://bugs.launchpad.net/tripleo/+bug/1972660.

  This method was added in
  https://review.opendev.org/q/I0c4d492887216cad7a8155dceb738389f2886376
  and backported down to wallaby. Xena+ are OK; only wallaby is impacted
  because before xena the old notification format is used, where arguments
  are passed as kwargs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1972764/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1971050] Re: Nested KVM Networking Issue

2022-05-09 Thread yatin
@Modyn, OK, thanks for confirming. I will close the bug as INVALID for
neutron as I don't see anything to fix for this on the neutron side.

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1971050

Title:
  Nested KVM Networking Issue

Status in neutron:
  Invalid

Bug description:
  ## Host environment
   - Operating system: (ubuntu 20.04 server)
   - OS/kernel version: (5.13.0.40 Generic)
   - Architecture: (64 bit cpu architecture)
   - QEMU version: (latest using sudo apt install virt-manager)

  ## Emulated/Virtualized environment
   - Operating system: (ubuntu 20.04 server)
   - OS/kernel version: ( 5.13.0.40 Generic)
   - Architecture: (64 bit cpu architecture)

  
  ## Description of problem
  
  Hi, 

  Inside OpenStack I have an instance of Ubuntu 20.04 and I have
  installed KVM (using virt-manager) to set up a virtual machine. I
  created a VM of Ubuntu 20.04 inside the OpenStack instance, but there
  are networking issues when I keep the default networking settings
  while setting up the VM (i.e. the networking defaults to NAT). When
  the VM is up and running, ping to 8.8.8.8 works and ping to google.com
  also works, which shows that DNS is resolving correctly. However,
  there is no connectivity for packages: when I run sudo apt update it
  does not fetch any package updates, and wget to google.com shows that
  it connects but it is not able to download. The same happens with curl
  to any other website.

  
  I'm confirming that the OpenStack instance has full access to the internet,
including ping and wget, but the VM is not working correctly!

  P.S. I have set up IP forwarding and iptables, and also disabled
  firewalls, but nothing changed.

  
  Would you please fix this ?


  ## Steps to reproduce
  1. creating an openstack instance from the ubuntu 20.04 server image
  2. updating and upgrading packages, setting ip forwarding to 1 (enabled),
firewall
  3. setting the kernel to 5.13.0.40 and installing virt-manager, then reboot
  4. creating a VM with default KVM networking (NAT) using the ubuntu 20.04
server image
  5. trying ping, wget, curl, ...

  
  These are my commands after creating an instance with 8VCPU, 16VRAM, 
100VDisk, ubuntu cloud 20.04 image:
  sudo apt update && sudo apt full-upgrade -y && sudo apt install 
linux-image-5.13.0-40-generic linux-headers-5.13.0-40-generic -y && sudo reboot
  sudo apt update && sudo uname -a
  Linux test 5.13.0-40-generic #45~20.04.1-Ubuntu SMP Mon Apr 4 09:38:31 UTC 
2022 x86_64 x86_64 x86_64 GNU/Linux
  sudo apt install virt-manager -y && sudo reboot
  sudo systemctl status libvirtd
  It's running. IP range 192.168.122.2
  sudo usermod -a -G libvirt ubuntu
  then download ubuntu server 20.04 image from 
https://releases.ubuntu.com/20.04/ubuntu-20.04.4-live-server-amd64.iso
  and create a new VM using KVM via virt-manager as shown below:
  
https://gitlab.com/qemu-project/qemu/uploads/8bd4c7381a60832b3a5fcd9dbd3665de/image.png

  
  qemu-system-x86_64 --version
  QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.21)
  Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers


  Here is my networking :
  ```
  1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
 valid_lft forever preferred_lft forever
  2: ens3:  mtu 1442 qdisc fq_codel state UP 
group default qlen 1000
  link/ether fa:16:3e:10:60:0e brd ff:ff:ff:ff:ff:ff
  altname enp0s3
  inet 10.20.30.52/24 brd 10.20.30.255 scope global dynamic ens3
 valid_lft 34758sec preferred_lft 34758sec
  inet6 fe80::f816:3eff:fe10:600e/64 scope link
 valid_lft forever preferred_lft forever
  3: virbr0:  mtu 1500 qdisc noqueue state UP 
group default qlen 1000
  link/ether 52:54:00:98:07:1a brd ff:ff:ff:ff:ff:ff
  inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
 valid_lft forever preferred_lft forever
  4: virbr0-nic:  mtu 1500 qdisc fq_codel master virbr0 
state DOWN group default qlen 1000
  link/ether 52:54:00:98:07:1a brd ff:ff:ff:ff:ff:ff
  5: vnet0:  mtu 1500 qdisc fq_codel master 
virbr0 state UNKNOWN group default qlen 1000
  link/ether fe:54:00:f9:5d:4d brd ff:ff:ff:ff:ff:ff
  inet6 fe80::fc54:ff:fef9:5d4d/64 scope link
 valid_lft forever preferred_lft forever
  ```

  
  And this is my Iptable

  ```
  iptables -L
  Chain INPUT (policy ACCEPT)
  target prot opt source   destination
  LIBVIRT_INP  all  --  anywhere anywhere

  Chain FORWARD (policy ACCEPT)
  target prot opt source

[Yahoo-eng-team] [Bug 1971569] [NEW] [neutron][api] test_log_deleted_with_corresponding_security_group failing randomly

2022-05-04 Thread yatin
Public bug reported:

The test_log_deleted_with_corresponding_security_group API test randomly
fails with the Traceback below:-

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/api/admin/test_logging.py",
 line 99, in test_log_deleted_with_corresponding_security_group
self.assertRaises(exceptions.NotFound,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 467, in assertRaises
self.assertThat(our_callable, matcher)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: ._show at 0x7f784dd443a0> returned {'log': 
{'id': '4360d06d-7bd2-42e8-9468-2e760b1b246a', 'name': 
'tempest-test-log-1596006254', 'resource_type': 'security_group', 
'resource_id': '1df2df30-4901-4e41-b58f-bbef3b0c48ee', 'target_id': None, 
'event': 'ALL', 'enabled': True, 'revision_number': 0, 'description': '', 
'created_at': '2022-05-04T10:17:44Z', 'updated_at': '2022-05-04T10:17:44Z'}}


Example logs:-

https://c1d005e52a45f1c42de5-a40b9478ae1bd073dc32f331038fe6d7.ssl.cf1.rackcdn.com/839477/1/check/neutron-tempest-plugin-api/825f39d/testr_results.html

https://9a2b40b9abe499183777-8788f9c43324a469e03ac2bb48dfb234.ssl.cf2.rackcdn.com/837301/1/check/neutron-tempest-plugin-api-yoga/f0c7ad9/testr_results.html

https://5fb6cfbc0c5c1a5be079-29e820dbac3fa779e4aa716d6c5c5850.ssl.cf1.rackcdn.com/839477/2/gate/neutron-tempest-plugin-api/929ff45/testr_results.html


From neutron api:

May 04 10:17:44.536866 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
DEBUG ovsdbapp.backend.ovs_idl.transaction [None 
req-69db86bc-5bbd-4b78-a6bc-00e84d0f5e88 None None] Running txn n=1 
command(idx=14): DbSetCommand(table=ACL, 
record=8f71b6eb-d4cf-4e64-8bbb-709c99dc3ad8, col_values=(('log', False), 
('meter', []), ('name', []), ('severity', []))) {{(pid=98372) do_commit 
/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:90}}
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
ERROR ovsdbapp.backend.ovs_idl.transaction [None 
req-d1699144-a926-431c-8202-470b285baa8a tempest-LoggingTestJSON-1873065189 
tempest-LoggingTestJSON-1873065189-project] Traceback (most recent call last):
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
File 
"/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 131, in run
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
  txn.results.put(txn.do_commit())
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
File 
"/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 93, in do_commit
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
  command.run_idl(txn)
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
File 
"/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/command.py", 
line 139, in run_idl
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
  record = self.api.lookup(self.table, self.record)
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
File 
"/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/__init__.py", 
line 181, in lookup
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
  return self._lookup(table, record)
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
File 
"/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/__init__.py", 
line 200, in _lookup
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]:   
  raise idlutils.RowNotFound(table=table, col='uuid',
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find ACL with 
uuid=8f71b6eb-d4cf-4e64-8bbb-709c99dc3ad8
May 04 10:17:44.541908 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
May 04 10:17:44.544980 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
ERROR neutron.services.logapi.drivers.manager [None 
req-d1699144-a926-431c-8202-470b285baa8a tempest-LoggingTestJSON-1873065189 
tempest-LoggingTestJSON-1873065189-project] Extension driver 'ovn' failed in 
delete_log: ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find ACL with 
uuid=8f71b6eb-d4cf-4e64-8bbb-709c99dc3ad8
May 04 10:17:44.544980 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
ERROR neutron.services.logapi.drivers.manager Traceback (most recent call last):
May 04 10:17:44.544980 ubuntu-focal-rax-ord-0029540090 neutron-server[98372]: 
ERROR neutron.services.logapi.drivers.manager   File 
"/opt/stack/neutron/neutron/services/logapi/drivers/manager.py", line 116, in 
call
May 04 

[Yahoo-eng-team] [Bug 1874447] Re: [OVN] Tempest test neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle fails randomly

2022-03-15 Thread yatin
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1874447

Title:
  [OVN] Tempest test
  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  fails randomly

Status in neutron:
  Fix Released

Bug description:
  We can see occasional test failures of tempest test with OVN:

  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle

  
  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 78, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/local/lib/python3.6/dist-packages/eventlet/greenthread.py", line 
36, in sleep
  hub.switch()
File "/usr/local/lib/python3.6/dist-packages/eventlet/hubs/hub.py", line 
298, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_trunk.py",
 line 240, in test_trunk_subport_lifecycle
  self._wait_for_port(port)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/scenario/test_trunk.py",
 line 141, in _wait_for_port
  "status {!r}.".format(port['id'], status)))
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/common/utils.py", 
line 82, in wait_until_true
  raise exception
  RuntimeError: Timed out waiting for port 
'cffcacde-a34e-4e1a-90ca-8d48776b9851' to transition to get status 'ACTIVE'.


  Example failure:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d8c/717851/2/check/neutron-ovn-tempest-ovs-release/d8c0282/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1874447/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1886807] Re: neutron-ovn-tempest-full-multinode-ovs-master job is failing 100% times

2022-03-03 Thread yatin
*** This bug is a duplicate of bug 1904117 ***
https://bugs.launchpad.net/bugs/1904117

** This bug is no longer a duplicate of bug 1885898
   test connectivity through 2 routers fails in 
neutron-ovn-tempest-full-multinode-ovs-master job
** This bug has been marked a duplicate of bug 1904117
   Nodes in the OVN scenario multinode jobs can't talk to each other

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1886807

Title:
  neutron-ovn-tempest-full-multinode-ovs-master  job is failing 100%
  times

Status in neutron:
  Confirmed

Bug description:
  The job neutron-ovn-tempest-full-multinode-ovs-master started failing all the
time a few days ago (around 7.07.2020).
  In most cases it TIMEOUTS, or many tests fail due to problems with SSH to
the instance.

  Examples:

  
https://f14611c8909f3446de65-4bd8d5b2b9c85b41b7d53d514f475e69.ssl.cf2.rackcdn.com/720464/25/check/neutron-ovn-tempest-full-multinode-ovs-master/cf995a4/testr_results.html
  
https://58c639024a13bcb07d67-8e6063eece8c96bdec38e25d6079d8b4.ssl.cf1.rackcdn.com/739139/4/check/neutron-ovn-tempest-full-multinode-ovs-master/bf22922/testr_results.html
  
https://571a74dab618adaab10c-51ebcc6aa3b2e625a79952b9fff60e62.ssl.cf5.rackcdn.com/736386/3/check/neutron-ovn-tempest-full-multinode-ovs-master/149bce9/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1886807/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1960338] Re: [fullstack] Test "test_east_west_traffic" randomly failing

2022-02-28 Thread yatin
The job is much more stable now[1]; there has been a single failure since the
patch[2] merged, and that was in a different test. Closing for now; if we see
the issue again, it can be reopened.

[1] 
https://zuul.openstack.org/builds?job_name=neutron-fullstack=stable%2Fwallaby=0
[2] https://review.opendev.org/c/openstack/neutron/+/828231


** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1960338

Title:
  [fullstack] Test "test_east_west_traffic" randomly failing

Status in neutron:
  Fix Released

Bug description:
  Test "test_east_west_traffic" randomly failing, mostly in Wallaby CI
  jobs.

  Errors:
  - 
https://60c5a582ad68767b8b2b-7c6e098ee109e2cfc4ca8fa68868cc2d.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/wallaby/neutron-fullstack/229aac7/testr_results.html
  - 
https://cb91710b8557a2f6220d-98f49207ca8d4e649788d064c2e22814.ssl.cf2.rackcdn.com/periodic/opendev.org/openstack/neutron/stable/wallaby/neutron-fullstack/7470cbf/testr_results.html

  Snippet: https://paste.opendev.org/show/812593/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1960338/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1960022] Re: neutron-tempest-with-uwsgi job failing tempest tests with: Floating ip failed to associate with server in time

2022-02-28 Thread yatin
Fixed in tempest, closing it.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1960022

Title:
  neutron-tempest-with-uwsgi job failing tempest tests with: Floating ip
   failed to associate with server  in time

Status in neutron:
  Fix Released

Bug description:
  A tempest test is failing in neutron-tempest-with-uwsgi with the
  Traceback below:-

  The last successful job was seen on 28th Jan, 2022.

  ft1.1: setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)testtools.testresult.real._StringException:
 Traceback (most recent call last):
File "/opt/stack/tempest/tempest/test.py", line 182, in setUpClass
  raise value.with_traceback(trace)
File "/opt/stack/tempest/tempest/test.py", line 175, in setUpClass
  cls.resource_setup()
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
84, in resource_setup
  cls.server_id = cls.recreate_server(None, validatable=True)
File "/opt/stack/tempest/tempest/api/compute/base.py", line 431, in 
recreate_server
  server = cls.create_test_server(
File "/opt/stack/tempest/tempest/api/compute/base.py", line 270, in 
create_test_server
  body, servers = compute.create_test_server(
File "/opt/stack/tempest/tempest/common/compute.py", line 268, in 
create_test_server
  LOG.exception('Server %s failed to delete in time',
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value
File "/opt/stack/tempest/tempest/common/compute.py", line 246, in 
create_test_server
  _setup_validation_fip()
File "/opt/stack/tempest/tempest/common/compute.py", line 231, in 
_setup_validation_fip
  waiters.wait_for_server_floating_ip(
File "/opt/stack/tempest/tempest/common/waiters.py", line 571, in 
wait_for_server_floating_ip
  raise lib_exc.TimeoutException(msg)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Floating ip {'id': '42ad634c-9fce-4fff-87e0-c16cfddaa3ca', 
'tenant_id': '485663e37c554596aff00daf8b166106', 'floating_ip_address': 
'172.24.5.190', 'floating_network_id': 'ca44af5f-08e7-4463-a77b-9660d250a82b', 
'router_id': None, 'port_id': None, 'fixed_ip_address': None, 'status': 'DOWN', 
'project_id': '485663e37c554596aff00daf8b166106', 'description': '', 
'port_details': None, 'tags': [], 'created_at': '2022-02-03T13:28:19Z', 
'updated_at': '2022-02-03T13:28:19Z', 'revision_number': 0, 'ip': 
'172.24.5.190'} failed to associate with server 
7329393c-ed1d-433f-8be0-7a15a72b60e7 in time.

  
  As per the neutron log the fip gets created/associated/disassociated:-
  Feb 03 13:28:19.390516 ubuntu-focal-ovh-bhs1-0028297773 
devstack@neutron-api.service[63722]: DEBUG neutron.db.db_base_plugin_common 
[None req-bf1740b9-bf5c-4a21-a073-1af294df8f30 
tempest-ServerActionsTestJSON-974914713 
tempest-ServerActionsTestJSON-974914713-project] Allocated IP 172.24.5.190 
(ca44af5f-08e7-4463-a77b-9660d250a82b/b8944f8b-e1ee-4e3c-9a3b-e8c2f93ea5eb/9ec9d1c3-5cce-4393-9fb2-ab9407711e99)
 {{(pid=63722) _store_ip_allocation 
/opt/stack/neutron/neutron/db/db_base_plugin_common.py:131}}
  Feb 03 13:28:29.818159 ubuntu-focal-ovh-bhs1-0028297773 
devstack@neutron-api.service[63722]: INFO neutron.db.l3_db [None 
req-c75b805c-590a-4793-ac9d-51d93945926a 
tempest-ServerActionsTestJSON-974914713 
tempest-ServerActionsTestJSON-974914713-project] Floating IP 
42ad634c-9fce-4fff-87e0-c16cfddaa3ca associated. External IP: 172.24.5.190, 
port: a2859895-1f57-439f-8235-a9a7540e9a75.

  Feb 03 13:31:50.314556 ubuntu-focal-ovh-bhs1-0028297773
  devstack@neutron-api.service[63722]: INFO neutron.db.l3_db
  [req-10cd545c-dfcf-48b5-9754-0fb68c3bb271
  req-30c83f43-a0b2-4cdd-915a-4afd4f8e7ced tempest-
  ServerActionsTestJSON-974914713 tempest-
  ServerActionsTestJSON-974914713-project] Floating IP
  42ad634c-9fce-4fff-87e0-c16cfddaa3ca disassociated. External IP:
  172.24.5.190, port: a2859895-1f57-439f-8235-a9a7540e9a75.

  
  But the FIP is not found in server "addresses".

  
  Failure example:- 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_879/827016/1/check/neutron-tempest-with-uwsgi/8794c4a/testr_results.html
  
https://b09ab8a3bdb459203334-b023e6398111e1b55f35dc3d44640709.ssl.cf5.rackcdn.com/827016/1/check/neutron-tempest-with-uwsgi/6eb9664/testr_results.html

  Builds:- https://zuul.opendev.org/t/openstack/builds?job_name=neutron-
  tempest-with-uwsgi=openstack/neutron

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1960022/+subscriptions


-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1864833] Re: [OVN] Functional tests start with OVSDB binary 2.9 instead 2.12

2022-02-25 Thread yatin
It's no longer an issue now: I see OVS 2.16 in the test and the same version
is compiled in the job, so closing the bug.

OVS_BRANCH=v2.16.0
2022-02-25T11:12:50.733Z|3|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 
2.16.0


https://91ac997443c049736e18-7ff9e89a9bc5ab4bd512192457a69ff2.ssl.cf1.rackcdn.com/828687/4/check/neutron-functional-with-uwsgi/9014bfc/controller/logs/dsvm-functional-logs/neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_resources.TestNBDbResourcesOverSsl.test_dhcp_options/ovn_nb-22-02-25_11-12-57_log.txt



** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864833

Title:
  [OVN] Functional tests start with OVSDB binary 2.9 instead 2.12

Status in neutron:
  Fix Released

Bug description:
  In the OVN functional tests we start ovsdb for each test. We start ovsdb
  2.9 instead of 2.12, even though we compile OVS/OVN 2.12 on the gates:

  2020-02-26T02:39:25.824Z|3|ovsdb_server|INFO|ovsdb-server (Open
  vSwitch) 2.9.5

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864833/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1868110] Re: [OVN] neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log randomly fails

2022-02-25 Thread yatin
Closing this, as the issue has not been seen recently since the timeout was
increased a while back.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1868110

Title:
  [OVN]
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log
  randomly fails

Status in neutron:
  Fix Released

Bug description:
  The functional test 
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovn_db_sync.TestOvnNbSyncOverTcp.test_ovn_nb_sync_log

  Randomly fails on our CI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1868110/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1956344] Re: Functional test test_gateway_chassis_rebalance is failing intermittently

2022-02-25 Thread yatin
Closing this: the workaround is in place, we have not seen failures in this
test recently, and the test code has a NOTE added for future cleanup. If the
failure is seen again the bug can be reopened.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1956344

Title:
  Functional test test_gateway_chassis_rebalance is failing
  intermittently

Status in neutron:
  Fix Released

Bug description:
  It may look similar to
  https://bugs.launchpad.net/neutron/+bug/1912369, but I noticed recently
  that at least twice the test
  
neutron.tests.functional.services.ovn_l3.test_plugin.TestRouter.test_gateway_chassis_rebalance
  failed due to a DB integrity error:

  
https://54389876f34a304893e1-7e513b902e81eb926eb2654dfc72839e.ssl.cf1.rackcdn.com/822299/4/check/neutron-
  functional-with-uwsgi/dd33a5d/testr_results.html

  
https://fd3bdfefdb4656902679-237e20f2346dac25649a4e5e3b0485b4.ssl.cf1.rackcdn.com/815962/6/check/neutron-
  functional-with-uwsgi/ef35e56/testr_results.html

  Stacktrace:

  
  ft1.1: 
neutron.tests.functional.services.ovn_l3.test_plugin.TestRouter.test_gateway_chassis_rebalancetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/engine/base.py",
 line 1802, in _execute_context
  self.dialect.do_execute(
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/engine/default.py",
 line 732, in do_execute
  cursor.execute(statement, parameters)
  sqlite3.IntegrityError: FOREIGN KEY constraint failed

  The above exception was the direct cause of the following exception:

  Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/services/ovn_l3/test_plugin.py",
 line 494, in test_gateway_chassis_rebalance
  router = self._create_router('router%d' % i, gw_info=gw_info)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/services/ovn_l3/test_plugin.py",
 line 49, in _create_router
  return self.l3_plugin.create_router(self.context, router)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/services/ovn_l3/plugin.py",
 line 166, in create_router
  router = super(OVNL3RouterPlugin, self).create_router(context, router)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/db/l3_db.py", 
line 2088, in create_router
  router_dict = super(L3_NAT_db_mixin, self).create_router(context,
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/neutron_lib/db/api.py",
 line 218, in wrapped
  return method(*args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/neutron_lib/db/api.py",
 line 139, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/neutron_lib/db/api.py",
 line 135, in wrapped
  return f(*args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_db/api.py",
 line 154, in wrapper
  ectxt.value = e.inner_exc
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_db/api.py",
 line 142, in wrapper
  return f(*args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/neutron_lib/db/api.py",
 line 183, in wrapped
  LOG.debug("Retry wrapper got retriable exception: %s", e)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()
File 

[Yahoo-eng-team] [Bug 1962306] [NEW] [Prefix Delegation] router add segment fails with Subnet for router interface must have a gateway IP

2022-02-25 Thread yatin
Public bug reported:

Steps to reproduce:-
Update ipv6_pd_enabled = true in neutron.conf and restart neutron-api service.

$ openstack network create ipv6-pd
$ openstack subnet create --ip-version 6 --ipv6-ra-mode slaac 
--ipv6-address-mode slaac --use-default-subnet-pool --network ipv6-pd ipv6-pd-1
$ openstack router add subnet router1 ipv6-pd-1

Adding the subnet fails with the error below:-
BadRequestException: 400: Client Error for url: 
http://192.168.0.5:9696/v2.0/routers/417bc2ad-019a-470c-b0ca-61a1d81c3d7d/add_router_interface,
 Bad router request: Subnet for router interface must have a gateway IP.

The issue was introduced by 
https://review.opendev.org/c/openstack/neutron/+/699465, which was made in 
order to fix https://bugs.launchpad.net/neutron/+bug/1856675.


Workaround:-
Create the subnet with --gateway specified. This re-triggers issue #1856675 but 
gives working prefix delegation, so the proper fix needs to avoid reintroducing 
the previous bug:-
openstack subnet create --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode 
slaac --use-default-subnet-pool --network ipv6-pd --gateway :: ipv6-pd-1
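
The same workaround expressed via the Python SDK, for completeness (the cloud
name is a placeholder and the exact attribute name for the default subnet pool
flag may differ by SDK version; this is a sketch, not a tested recipe):

    import openstack

    conn = openstack.connect(cloud='devstack-admin')  # cloud name is a placeholder

    network = conn.network.find_network('ipv6-pd')
    subnet = conn.network.create_subnet(
        network_id=network.id,
        ip_version=6,
        ipv6_ra_mode='slaac',
        ipv6_address_mode='slaac',
        use_default_subnet_pool=True,  # assumed SDK spelling of the flag
        gateway_ip='::',               # the workaround: pass an explicit gateway
        name='ipv6-pd-1')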

** Affects: neutron
 Importance: Undecided
 Assignee: yatin (yatinkarel)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962306

Title:
  [Prefix Delegation] router add segment fails with Subnet for router
  interface must have a gateway IP

Status in neutron:
  New

Bug description:
  Steps to reproduce:-
  Update ipv6_pd_enabled = true in neutron.conf and restart neutron-api service.

  $ openstack network create ipv6-pd
  $ openstack subnet create --ip-version 6 --ipv6-ra-mode slaac 
--ipv6-address-mode slaac --use-default-subnet-pool --network ipv6-pd ipv6-pd-1
  $ openstack router add subnet router1 ipv6-pd-1

  Adding the subnet fails with the error below:-
  BadRequestException: 400: Client Error for url: 
http://192.168.0.5:9696/v2.0/routers/417bc2ad-019a-470c-b0ca-61a1d81c3d7d/add_router_interface,
 Bad router request: Subnet for router interface must have a gateway IP.

  The issue was introduced by 
https://review.opendev.org/c/openstack/neutron/+/699465, which was made in 
order to fix https://bugs.launchpad.net/neutron/+bug/1856675.

  
  Workaround:-
  Create the subnet with --gateway specified. This re-triggers issue #1856675 
but gives working prefix delegation, so the proper fix needs to avoid 
reintroducing the previous bug:-
  openstack subnet create --ip-version 6 --ipv6-ra-mode slaac 
--ipv6-address-mode slaac --use-default-subnet-pool --network ipv6-pd --gateway 
:: ipv6-pd-1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1962306/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930402] Re: SSH timeouts happens very often in the ovn based CI jobs

2022-02-24 Thread yatin
Closing this, as neutron-ovn-tempest-slow has been moved to periodic on the
master branch and is no longer seeing ssh failures; if it happens again this
can be reopened or a new issue can be created.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930402

Title:
  SSH timeouts happens very often in the ovn based CI jobs

Status in neutron:
  Fix Released

Bug description:
  I saw those errors mostly in the neutron-ovn-tempest-slow job, but they
  probably happen in other jobs as well. Examples of failures:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0de/791365/7/check/neutron-ovn-tempest-slow/0de1c30/testr_results.html
  
https://2f7fb53980d59c550f7f-e09de525732b656a1c483807eeb06fc8.ssl.cf2.rackcdn.com/793369/3/check/neutron-ovn-tempest-slow/8a116c7/testr_results.html
  
https://f86d217b949ada82d82c-f669355b5e0e599ce4f84e6e473a124c.ssl.cf2.rackcdn.com/791365/6/check/neutron-ovn-tempest-slow/5dc8d92/testr_results.html

  In all those cases, the common thing is that the VMs seem to get an IP
  address from DHCP properly and cloud-init seems to work fine, but SSH to
  the FIP is not possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930402/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1959573] [NEW] test_local_ip_connectivity test failing in stable/xena jobs

2022-01-31 Thread yatin
Public bug reported:

Example failures:-
- https://zuul.opendev.org/t/openstack/build/dc2e6aa9be504a39a930b17645dee287
- https://zuul.opendev.org/t/openstack/build/4ad0d2fd426b40b2b249585e612dbf1f

Failing since https://review.opendev.org/c/openstack/neutron-tempest-
plugin/+/823007 was merged.


It's happening because the master version of the openvswitch jobs is running on 
the stable/xena branch and that feature is only available in the master branch. 
This needs to be fixed by switching to the xena jobs in the stable/xena branch.

** Affects: neutron
 Importance: Undecided
 Assignee: yatin (yatinkarel)
 Status: In Progress


** Tags: gate-failure ovs

** Changed in: neutron
 Assignee: (unassigned) => yatin (yatinkarel)

** Tags added: gate-failure ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959573

Title:
  test_local_ip_connectivity test failing in stable/xena jobs

Status in neutron:
  In Progress

Bug description:
  Example failures:-
  - https://zuul.opendev.org/t/openstack/build/dc2e6aa9be504a39a930b17645dee287
  - https://zuul.opendev.org/t/openstack/build/4ad0d2fd426b40b2b249585e612dbf1f

  Failing since https://review.opendev.org/c/openstack/neutron-tempest-
  plugin/+/823007 was merged.

  
  It's happening because the master version of the openvswitch jobs is running 
on the stable/xena branch and that feature is only available in the master 
branch. This needs to be fixed by switching to the xena jobs in the stable/xena 
branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959573/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1959564] [NEW] Random Tempest test failures(SSH failure) in openvswitch jobs

2022-01-31 Thread yatin
Public bug reported:

Seen multiple similar occurrences on stable/wallaby patches, where tempest tests 
fail with SSH-to-VM timeouts; some examples:-
  - 
https://cfaa2d1e4f6a936642aa-ae5561c9d080274a217713c4553af257.ssl.cf5.rackcdn.com/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-wallaby/a7c128e/testr_results.html
  - 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_803/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/803c276/testr_results.html
  - 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e19/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/e19d9a7/testr_results.html
  - 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_6b0/826830/1/check/neutron-tempest-plugin-scenario-openvswitch-wallaby/6b05e5b/testr_results.html
  - 
https://f529caf5e8a0adc3d959-479aba3a0d5645603ea5f6db22bcd24f.ssl.cf5.rackcdn.com/826830/1/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/9ad9f43/testr_results.html

In some failures the metadata request failed and thus SSH failed, while in
some the metadata requests passed but SSH still failed, so there may be
multiple issues here


Builds:- 
https://zuul.openstack.org/builds?job_name=neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby_name=neutron-tempest-plugin-scenario-openvswitch-wallaby=openstack%2Fneutron=stable%2Fwallaby=0

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959564

Title:
  Random Tempest test failures(SSH failure) in openvswitch jobs

Status in neutron:
  New

Bug description:
  Seen multiple similar occurrences in stable/wallaby patches, where tempest 
tests fail with SSH to the VM timing out; some examples:-
- 
https://cfaa2d1e4f6a936642aa-ae5561c9d080274a217713c4553af257.ssl.cf5.rackcdn.com/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-wallaby/a7c128e/testr_results.html
- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_803/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/803c276/testr_results.html
- 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e19/824022/2/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/e19d9a7/testr_results.html
- 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_6b0/826830/1/check/neutron-tempest-plugin-scenario-openvswitch-wallaby/6b05e5b/testr_results.html
- 
https://f529caf5e8a0adc3d959-479aba3a0d5645603ea5f6db22bcd24f.ssl.cf5.rackcdn.com/826830/1/check/neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby/9ad9f43/testr_results.html

  In some failures the metadata request failed and thus SSH failed, while
  in some the metadata requests passed but SSH still failed, so there may
  be multiple issues here

  
  Builds:- 
https://zuul.openstack.org/builds?job_name=neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-wallaby_name=neutron-tempest-plugin-scenario-openvswitch-wallaby=openstack%2Fneutron=stable%2Fwallaby=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959564/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1959176] [NEW] neutron fullstack/functional jobs TIMED_OUT randomly

2022-01-26 Thread yatin
Public bug reported:

Neutron fullstack and functional jobs are timing out randomly across all
branches; only a few such failures though, 1 or 2 per week.

Builds references:-
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi=TIMED_OUT=0
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi=TIMED_OUT=0
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional=TIMED_OUT=0
- 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack=TIMED_OUT=0

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fullstack functional-tests

** Tags added: fullstack functional-tests

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1959176

Title:
  neutron fullstack/functional jobs TIMED_OUT randomly

Status in neutron:
  New

Bug description:
  Neutron fullstack and functional jobs are timing out randomly across
  all branches; only a few such failures though, 1 or 2 per week.

  Builds references:-
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi=TIMED_OUT=0
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi=TIMED_OUT=0
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional=TIMED_OUT=0
  - 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack=TIMED_OUT=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1959176/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1958961] [NEW] [ovn-octavia-provider] lb create failing with with ValueError: invalid literal for int() with base 10: '24 2001:db8::131/64'

2022-01-25 Thread yatin
Public bug reported:

When deployed with the ovn-octavia provider using the below local.conf,
loadbalancer create (openstack loadbalancer create --vip-network-id
public --provider ovn) goes into ERROR state.

From o-api logs:-
ERROR ovn_octavia_provider.helper Traceback (most recent call last):
ERROR ovn_octavia_provider.helper   File 
"/usr/local/lib/python3.8/dist-packages/netaddr/ip/__init__.py", line 811, in 
parse_ip_network
ERROR ovn_octavia_provider.helper prefixlen = int(val2)
ERROR ovn_octavia_provider.helper ValueError: invalid literal for int() with 
base 10: '24 2001:db8::131/64'

Seems a regression caused by
https://review.opendev.org/c/openstack/ovn-octavia-provider/+/816868.
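
For illustration only, a minimal Python sketch (hypothetical variable names, not
the provider's actual code) of why the parsing blows up: the logical switch
port's "neutron:cidrs" external_id can hold several space-separated CIDRs, and
feeding that whole string to netaddr raises exactly this ValueError, while
splitting it first parses each CIDR cleanly:

import netaddr

# Value copied from the router gateway port's external_ids shown below.
cidrs_value = "172.24.4.149/24 2001:db8::131/64"

try:
    netaddr.IPNetwork(cidrs_value)            # whole string at once
except (ValueError, netaddr.AddrFormatError) as exc:
    print("raw value fails to parse:", exc)

# Hypothetical defensive variant: parse each space-separated CIDR on its own.
networks = [netaddr.IPNetwork(c) for c in cidrs_value.split()]
print([str(n) for n in networks])             # ['172.24.4.149/24', '2001:db8::131/64']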

# Logical switch ports output
sudo ovn-nbctl find logical_switch_port  type=router 
_uuid   : 4865f50c-a2cd-4a5c-ae4a-bbc911985fb2
addresses   : [router]
dhcpv4_options  : []
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="172.24.4.149/24 2001:db8::131/64", 
"neutron:device_id"="31a0e24f-6278-4714-b543-cba735a6c49d", 
"neutron:device_owner"="network:router_gateway", 
"neutron:network_name"=neutron-4708e992-cff8-4438-8142-1cc2ac7010db, 
"neutron:port_name"="", "neutron:project_id"="", "neutron:revision_number"="6", 
"neutron:security_group_ids"=""}
ha_chassis_group: []
name: "c18869b9--49a8-bc8a-5d2c51db5b6e"
options : {mcast_flood_reports="true", nat-addresses=router, 
requested-chassis=ykarel-devstack, 
router-port=lrp-c18869b9--49a8-bc8a-5d2c51db5b6e}
parent_name : []
port_security   : []
tag : []
tag_request : []
type: router
up  : true

_uuid   : f0ed6566-a942-4e2d-94f5-64ccd6bed568
addresses   : [router]
dhcpv4_options  : []
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="fd25:38d5:1d9::1/64", 
"neutron:device_id"="31a0e24f-6278-4714-b543-cba735a6c49d", 
"neutron:device_owner"="network:router_interface", 
"neutron:network_name"=neutron-591d2b8c-3501-49b1-822c-731f2cc9b305, 
"neutron:port_name"="", "neutron:project_id"=f4c9948020024e13a1a091bd09d1fbba, 
"neutron:revision_number"="3", "neutron:security_group_ids"=""}
ha_chassis_group: []
name: "e778ac75-a15b-441b-b334-6a7579f851fa"
options : {router-port=lrp-e778ac75-a15b-441b-b334-6a7579f851fa}
parent_name : []
port_security   : []
tag : []
tag_request : []
type: router
up  : true

_uuid   : 9c2f3327-ac94-4881-a9c5-a6da87acf6a3
addresses   : [router]
dhcpv4_options  : []
dhcpv6_options  : []
dynamic_addresses   : []
enabled : true
external_ids: {"neutron:cidrs"="10.0.0.1/26", 
"neutron:device_id"="31a0e24f-6278-4714-b543-cba735a6c49d", 
"neutron:device_owner"="network:router_interface", 
"neutron:network_name"=neutron-591d2b8c-3501-49b1-822c-731f2cc9b305, 
"neutron:port_name"="", "neutron:project_id"=f4c9948020024e13a1a091bd09d1fbba, 
"neutron:revision_number"="3", "neutron:security_group_ids"=""}
ha_chassis_group: []
name: "d728e2a3-f9fd-4fff-8a6f-0c55a26bc55c"
options : {router-port=lrp-d728e2a3-f9fd-4fff-8a6f-0c55a26bc55c}
parent_name : []
port_security   : []
tag : []
tag_request : []
type: router
up  : true


local.conf
==

[[local|localrc]]
RECLONE=yes
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password
SERVICE_TOKEN=password
ADMIN_PASSWORD=password
Q_AGENT=ovn
Q_ML2_PLUGIN_MECHANISM_DRIVERS=ovn,logger
Q_ML2_PLUGIN_TYPE_DRIVERS=local,flat,vlan,geneve
Q_ML2_TENANT_NETWORK_TYPE="geneve"
OVN_BRANCH="v21.06.0"
OVN_BUILD_FROM_SOURCE="True"
OVS_BRANCH="branch-2.15"
OVS_SYSCONFDIR="/usr/local/etc/openvswitch"
OVN_L3_CREATE_PUBLIC_NETWORK=True
OCTAVIA_NODE="api"
DISABLE_AMP_IMAGE_BUILD=True
enable_plugin barbican https://opendev.org/openstack/barbican
enable_plugin octavia https://opendev.org/openstack/octavia
enable_plugin octavia-dashboard https://opendev.org/openstack/octavia-dashboard
LIBS_FROM_GIT+=python-octaviaclient
enable_service octavia
enable_service o-api
enable_service o-hk
enable_service o-da
disable_service o-cw
disable_service o-hm
enable_plugin ovn-octavia-provider 
https://opendev.org/openstack/ovn-octavia-provider
LOGFILE=$DEST/logs/stack.sh.log
enable_service ovn-northd
enable_service ovn-controller
enable_service q-ovn-metadata-agent
enable_service q-svc
disable_service q-agt
disable_service q-l3
disable_service q-dhcp
disable_service q-meta
enable_plugin neutron https://opendev.org/openstack/neutron
enable_service q-trunk
enable_service q-dns
enable_service q-port-forwarding
enable_service neutron-segments
enable_service q-log
enable_plugin neutron-tempest-plugin 

[Yahoo-eng-team] [Bug 1957021] Re: openstack xena documentation has wrong neutron service name to start

2022-01-17 Thread yatin
Seems you are following the wrong documentation for CentOS Stream. You need to
refer to https://docs.openstack.org/neutron/xena/install/install-rdo.html
instead of https://docs.openstack.org/neutron/xena/install/install-obs.html;
the latter one is for openSUSE.
https://docs.openstack.org/neutron/xena/install/install-rdo.html
correctly describes the service names on CentOS Stream.

https://docs.openstack.org/neutron/xena/install/ describes the installation
methods for the various distros.

Marking the bug as invalid, feel free to reopen if wrong.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1957021

Title:
  openstack xena documentation has wrong neutron service name to start

Status in neutron:
  Invalid

Bug description:
  The manual install for neutron says to start -

  https://docs.openstack.org/neutron/xena/install/controller-install-obs.html#
  # systemctl enable openstack-neutron.service \
openstack-neutron-linuxbridge-agent.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
  # systemctl start openstack-neutron.service \
openstack-neutron-linuxbridge-agent.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service

  
  But Xena neutron on CentOS Stream installs as 
  [root@controller neutron]# cd /usr/lib/systemd/system
  [root@controller system]# ls -ltra neutron*
  -rw-r--r--. 1 root root  569 Oct  6 16:45 neutron-server.service
  -rw-r--r--. 1 root root 1024 Oct  6 16:45 neutron-ovs-cleanup.service
  -rw-r--r--. 1 root root  734 Oct  6 16:45 neutron-openvswitch-agent.service
  -rw-r--r--. 1 root root  987 Oct  6 16:45 neutron-netns-cleanup.service
  -rw-r--r--. 1 root root  536 Oct  6 16:45 neutron-metadata-agent.service
  -rw-r--r--. 1 root root 1039 Oct  6 16:45 neutron-linuxbridge-cleanup.service
  -rw-r--r--. 1 root root  645 Oct  6 16:45 neutron-linuxbridge-agent.service
  -rw-r--r--. 1 root root  512 Oct  6 16:45 neutron-l3-agent.service
  -rw-r--r--. 1 root root  516 Oct  6 16:45 neutron-dhcp-agent.service
  -rw-r--r--. 1 root root  579 Oct  6 16:50 neutron-destroy-patch-ports.service


  so the document has to be updated with the proper services to start.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1957021/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1952393] Re: [OVN] neutron-tempest-plugin-scenario-ovn broken with "ovn-northd did not start"

2021-12-03 Thread yatin
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952393

Title:
  [OVN] neutron-tempest-plugin-scenario-ovn broken with "ovn-northd did
  not start"

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  Job failing consistently with below error since 
https://review.opendev.org/c/openstack/devstack/+/806858:-
  2021-11-26 05:58:40.377912 | controller | + 
functions-common:test_with_retry:2339:   timeout 60 sh -c 'while ! test -e 
/usr/local/var/run/openvswitch/ovn-northd.pid; do sleep 1; done'
  2021-11-26 05:59:40.383253 | controller | + 
functions-common:test_with_retry:2340:   die 2340 'ovn-northd did not start'
  2021-11-26 05:59:40.386420 | controller | + functions-common:die:253  
   :   local exitcode=0

  Nov 26 05:58:40.329669 ubuntu-focal-inmotion-iad3-0027510462 bash[107881]:  * 
Creating empty database /opt/stack/data/ovn/ovnsb_db.db
  Nov 26 05:58:40.330503 ubuntu-focal-inmotion-iad3-0027510462 bash[107922]: 
chown: changing ownership of '/opt/stack/data/ovn': Operation not permitted
  Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.pid': 
Operation not permitted
  Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of 
'/usr/local/var/run/openvswitch/ovsdb-server.106684.ctl': Operation not 
permitted
  Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of '/usr/local/var/run/openvswitch/br-ex.mgmt': 
Operation not permitted
  Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of 
'/usr/local/var/run/openvswitch/ovs-vswitchd.107192.ctl': Operation not 
permitted

  
  Example logs:-
  
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/job-output.txt
  
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/controller/logs/screen-ovn-northd.txt

  Job Builds:- https://zuul.openstack.org/builds?job_name=neutron-
  tempest-plugin-scenario-ovn

  
  Other OVN jobs which use OVN_BUILD_FROM_SOURCE=True will also be impacted, 
so the cases affected by 
https://review.opendev.org/c/openstack/devstack/+/806858 need to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1952393/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1952393] [NEW] [master] neutron-tempest-plugin-scenario-ovn broken with "ovn-northd did not start"

2021-11-25 Thread yatin
Public bug reported:

Job failing consistently with below error since 
https://review.opendev.org/c/openstack/devstack/+/806858:-
2021-11-26 05:58:40.377912 | controller | + 
functions-common:test_with_retry:2339:   timeout 60 sh -c 'while ! test -e 
/usr/local/var/run/openvswitch/ovn-northd.pid; do sleep 1; done'
2021-11-26 05:59:40.383253 | controller | + 
functions-common:test_with_retry:2340:   die 2340 'ovn-northd did not start'
2021-11-26 05:59:40.386420 | controller | + functions-common:die:253
 :   local exitcode=0

Nov 26 05:58:40.329669 ubuntu-focal-inmotion-iad3-0027510462 bash[107881]:  * 
Creating empty database /opt/stack/data/ovn/ovnsb_db.db
Nov 26 05:58:40.330503 ubuntu-focal-inmotion-iad3-0027510462 bash[107922]: 
chown: changing ownership of '/opt/stack/data/ovn': Operation not permitted
Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.pid': 
Operation not permitted
Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of 
'/usr/local/var/run/openvswitch/ovsdb-server.106684.ctl': Operation not 
permitted
Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of '/usr/local/var/run/openvswitch/br-ex.mgmt': 
Operation not permitted
Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of 
'/usr/local/var/run/openvswitch/ovs-vswitchd.107192.ctl': Operation not 
permitted


Example logs:-
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/job-output.txt
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/controller/logs/screen-ovn-northd.txt

Job Builds:- https://zuul.openstack.org/builds?job_name=neutron-tempest-
plugin-scenario-ovn


Other OVN jobs which use OVN_BUILD_FROM_SOURCE=True will also be impacted, so 
the cases affected by 
https://review.opendev.org/c/openstack/devstack/+/806858 need to be fixed.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure ovn

** Tags added: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952393

Title:
  [master] neutron-tempest-plugin-scenario-ovn broken with "ovn-northd
  did not start"

Status in neutron:
  New

Bug description:
  Job failing consistently with below error since 
https://review.opendev.org/c/openstack/devstack/+/806858:-
  2021-11-26 05:58:40.377912 | controller | + 
functions-common:test_with_retry:2339:   timeout 60 sh -c 'while ! test -e 
/usr/local/var/run/openvswitch/ovn-northd.pid; do sleep 1; done'
  2021-11-26 05:59:40.383253 | controller | + 
functions-common:test_with_retry:2340:   die 2340 'ovn-northd did not start'
  2021-11-26 05:59:40.386420 | controller | + functions-common:die:253  
   :   local exitcode=0

  Nov 26 05:58:40.329669 ubuntu-focal-inmotion-iad3-0027510462 bash[107881]:  * 
Creating empty database /opt/stack/data/ovn/ovnsb_db.db
  Nov 26 05:58:40.330503 ubuntu-focal-inmotion-iad3-0027510462 bash[107922]: 
chown: changing ownership of '/opt/stack/data/ovn': Operation not permitted
  Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of '/usr/local/var/run/openvswitch/ovs-vswitchd.pid': 
Operation not permitted
  Nov 26 05:58:40.331416 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of 
'/usr/local/var/run/openvswitch/ovsdb-server.106684.ctl': Operation not 
permitted
  Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of '/usr/local/var/run/openvswitch/br-ex.mgmt': 
Operation not permitted
  Nov 26 05:58:40.331647 ubuntu-focal-inmotion-iad3-0027510462 bash[107923]: 
chown: changing ownership of 
'/usr/local/var/run/openvswitch/ovs-vswitchd.107192.ctl': Operation not 
permitted

  
  Example logs:-
  
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/job-output.txt
  
https://780a11778aeb29655ec5-5f07f27cfda9f7663453c94db6894b0a.ssl.cf5.rackcdn.com/818443/7/check/neutron-tempest-plugin-scenario-ovn/642b702/controller/logs/screen-ovn-northd.txt

  Job Builds:- https://zuul.openstack.org/builds?job_name=neutron-
  tempest-plugin-scenario-ovn

  
  Other OVN jobs which use OVN_BUILD_FROM_SOURCE=True will also be impacted, 
so the cases affected by 
https://review.opendev.org/c/openstack/devstack/+/806858 need to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952393/+subscriptions


-- 
Mailing list: 

[Yahoo-eng-team] [Bug 1939137] Re: ovn + log service plugin reports AttributeError: 'NoneType' object has no attribute 'tenant_id'

2021-09-27 Thread yatin
Seems it's fixed with
https://github.com/openstack/neutron/commit/7f063223553a18345891bf42e88989edb67038e7,
no longer seeing the issue in the TripleO job:-
https://d0e3fd572414115f797b-39524de8c5a1fb89d206195b6f692473.ssl.cf5.rackcdn.com/808056/1/check/tripleo-
ci-
centos-8-standalone/6cb888b/logs/undercloud/var/log/containers/neutron/server.log

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939137

Title:
  ovn + log service plugin reports AttributeError: 'NoneType' object has
  no attribute 'tenant_id'

Status in neutron:
  Fix Released

Bug description:
  Originally noticed in a TripleO job[1]; after enabling the log service
  plugin in devstack, a similar error is seen in the neutron service log.
  The following traceback is seen:-

  ERROR neutron_lib.callbacks.manager [None 
req-131d215c-1d03-48ce-a16e-28175c0f58ba 
tempest-DefaultSnatToExternal-1368745793 
tempest-DefaultSnatToExternal-1368745793-project] Error during notification for 
neutron.services.logapi.common.sg_callback.SecurityGroupRuleCallBack.handle_event-423586
 security_group_rule, after_create: AttributeError: 'NoneType' object has no 
attribute 'tenant_id'
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager Traceback (most 
recent call last):
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py", line 
197, in _notify_loop
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/services/logapi/common/sg_callback.py", line 32, in 
handle_event
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager log_resources = 
db_api.get_logs_bound_sg(context, sg_id)
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/services/logapi/common/db_api.py", line 186, in 
get_logs_bound_sg
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager project_id = 
context.tenant_id
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager AttributeError: 
'NoneType' object has no attribute 'tenant_id'
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager 

  
  Example log:-
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c84/803712/1/check/neutron-tempest-plugin-scenario-ovn/c84b228/controller/logs/screen-q-svc.txt
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_65b/797823/16/check/tripleo-ci-centos-8-standalone/65b6831/logs/undercloud/var/log/containers/neutron/server.log

  The support was added as part of
  https://bugs.launchpad.net/neutron/+bug/1914757

  Test patch:- https://review.opendev.org/c/openstack/neutron-tempest-
  plugin/+/803712

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1939137/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1939137] [NEW] ovn + log service plugin reports AttributeError: 'NoneType' object has no attribute 'tenant_id'

2021-08-06 Thread yatin
Public bug reported:

Originally noticed in a TripleO job[1]; after enabling the log service
plugin in devstack, a similar error is seen in the neutron service log.
The following traceback is seen:-

ERROR neutron_lib.callbacks.manager [None 
req-131d215c-1d03-48ce-a16e-28175c0f58ba 
tempest-DefaultSnatToExternal-1368745793 
tempest-DefaultSnatToExternal-1368745793-project] Error during notification for 
neutron.services.logapi.common.sg_callback.SecurityGroupRuleCallBack.handle_event-423586
 security_group_rule, after_create: AttributeError: 'NoneType' object has no 
attribute 'tenant_id'
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager Traceback (most 
recent call last):
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py", line 
197, in _notify_loop
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/services/logapi/common/sg_callback.py", line 32, in 
handle_event
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager log_resources = 
db_api.get_logs_bound_sg(context, sg_id)
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/services/logapi/common/db_api.py", line 186, in 
get_logs_bound_sg
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager project_id = 
context.tenant_id
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager AttributeError: 
'NoneType' object has no attribute 'tenant_id'
Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager 


Example log:-
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c84/803712/1/check/neutron-tempest-plugin-scenario-ovn/c84b228/controller/logs/screen-q-svc.txt
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_65b/797823/16/check/tripleo-ci-centos-8-standalone/65b6831/logs/undercloud/var/log/containers/neutron/server.log

The support was added as part of
https://bugs.launchpad.net/neutron/+bug/1914757

Test patch:- https://review.opendev.org/c/openstack/neutron-tempest-
plugin/+/803712
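
For illustration only, a hedged sketch with made-up stand-ins (not the merged
neutron fix) of the failing pattern and one defensive variant: the handler
dereferences the request context unconditionally, but the context delivered
with the event can be None:

class FakeContext:
    tenant_id = "demo-project"

def query_logs(project_id, sg_id):
    return []          # stand-in for the real logapi DB lookup

def get_logs_bound_sg(context, sg_id):
    # Failing pattern: raises AttributeError when 'context' is None.
    return query_logs(context.tenant_id, sg_id)

def get_logs_bound_sg_defensive(context, sg_id):
    # Illustrative guard only: skip the project filter if no context arrived.
    project_id = context.tenant_id if context is not None else None
    return query_logs(project_id, sg_id)

print(get_logs_bound_sg(FakeContext(), "sg-uuid"))     # [] - normal path
print(get_logs_bound_sg_defensive(None, "sg-uuid"))    # [] - guarded path
try:
    get_logs_bound_sg(None, "sg-uuid")
except AttributeError as exc:
    print(exc)   # 'NoneType' object has no attribute 'tenant_id'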

** Affects: neutron
 Importance: Undecided
 Assignee: Kamil Sambor (ksambor)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939137

Title:
  ovn + log service plugin reports AttributeError: 'NoneType' object has
  no attribute 'tenant_id'

Status in neutron:
  New

Bug description:
  Originally noticed in a TripleO job[1]; after enabling the log service
  plugin in devstack, a similar error is seen in the neutron service log.
  The following traceback is seen:-

  ERROR neutron_lib.callbacks.manager [None 
req-131d215c-1d03-48ce-a16e-28175c0f58ba 
tempest-DefaultSnatToExternal-1368745793 
tempest-DefaultSnatToExternal-1368745793-project] Error during notification for 
neutron.services.logapi.common.sg_callback.SecurityGroupRuleCallBack.handle_event-423586
 security_group_rule, after_create: AttributeError: 'NoneType' object has no 
attribute 'tenant_id'
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager Traceback (most 
recent call last):
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/usr/local/lib/python3.8/dist-packages/neutron_lib/callbacks/manager.py", line 
197, in _notify_loop
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/services/logapi/common/sg_callback.py", line 32, in 
handle_event
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager log_resources = 
db_api.get_logs_bound_sg(context, sg_id)
  Aug 06 09:28:02.985925 ubuntu-focal-airship-kna1-0025793663 
neutron-server[107737]: ERROR neutron_lib.callbacks.manager   File 
"/opt/stack/neutron/neutron/services/logapi/common/db_api.py", line 186, 

[Yahoo-eng-team] [Bug 1906500] [NEW] [ovn] Tempest tests failing while creating security group driver with KeyError: 'remote_address_group_id'

2020-12-02 Thread yatin
Public bug reported:

This is failing after
https://review.opendev.org/c/openstack/neutron/+/751110 merged; detected in a
packstack job, it fails with the below traceback:-

2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/resource.py", line 98, in 
resource
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 437, in create
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource return 
self._create(request, body, **kwargs)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 139, in wrapped
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
self.force_reraise()
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource raise value
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 135, in wrapped
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_db/api.py", line 154, in wrapper
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
self.force_reraise()
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource raise value
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_db/api.py", line 142, in wrapper
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource return f(*args, 
**kwargs)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 183, in wrapped
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
LOG.debug("Retry wrapper got retriable exception: %s", e)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
self.force_reraise()
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource raise value
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron_lib/db/api.py", line 179, in wrapped
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource return 
f(*dup_args, **dup_kwargs)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 547, in _create
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource objs = 
do_create(body, bulk=True)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/neutron/api/v2/base.py", line 543, in 
do_create
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource 
request.context, reservation.reservation_id)
2020-12-01 13:05:53.257 85629 ERROR neutron.api.v2.resource   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", 

[Yahoo-eng-team] [Bug 1808975] [NEW] python3 + Fedora + SSL + nova compute RecursionError: maximum recursion depth exceeded while calling a Python object

2018-12-18 Thread yatin
Public bug reported:

Description:- While testing a python3 Fedora deployment for nova in [1],
got the below RecursionError in nova-compute:-

2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager 
[req-f908a9e0-e77a-4d35-9266-fc5e8d79dfde - - - - -] Error updating resources 
for node rdo-fedora-stable-rdo-cloud-358855.: RecursionError: maximum 
recursion depth exceeded while calling a Python object
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager Traceback (most recent 
call last):
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 7690, in 
_update_available_resource_for_node
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager 
rt.update_available_resource(context, nodename, startup=startup)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 738, 
in update_available_resource
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager 
self._update_available_resource(context, resources, startup=startup)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", line 328, in 
inner
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager return f(*args, 
**kwargs)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 794, 
in _update_available_resource
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager instance_by_uuid)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 1256, 
in _remove_deleted_instances_allocations
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager context, cn.uuid)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/nova/scheduler/client/report.py", line 2165, 
in get_allocations_for_resource_provider
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager resp = 
self.get(url, global_request_id=context.global_id)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/nova/scheduler/client/report.py", line 297, 
in get
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager return 
self._client.get(url, microversion=version, headers=headers)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line 351, in get
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager return 
self.request(url, 'GET', **kwargs)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line 213, in 
request
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager return 
self.session.request(url, method, **kwargs)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/session.py", line 684, in 
request
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager auth_headers = 
self.get_auth_headers(auth)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/session.py", line 1071, in 
get_auth_headers
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager return 
auth.get_headers(self, **kwargs)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/plugin.py", line 95, in 
get_headers
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager token = 
self.get_token(session)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py", line 88, in 
get_token
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager return 
self.get_access(session).auth_token
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py", line 134, in 
get_access
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager self.auth_ref = 
self.get_auth_ref(session)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py", line 
206, in get_auth_ref
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager self._plugin = 
self._do_create_plugin(session)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py", line 
138, in _do_create_plugin
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager authenticated=False)
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager   File 
"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py", line 610, in 
get_discovery
2018-12-18 08:00:05.266 2428 ERROR nova.compute.manager 
authenticated=authenticated)
2018-12-18 

[Yahoo-eng-team] [Bug 1808951] [NEW] python3 + Fedora + SSL + wsgi nova deployment, nova api returns RecursionError: maximum recursion depth exceeded while calling a Python object

2018-12-18 Thread yatin
Public bug reported:

Description:-

So while testing python3 with Fedora in [1], found an issue while
running nova-api behind WSGI. It fails with the below traceback:-

2018-12-18 07:41:55.364 26870 INFO nova.api.openstack.requestlog 
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] 127.0.0.1 "GET 
/v2.1/servers/detail?all_tenants=True=True" status: 500 len: 0 
microversion: - time: 0.007297
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
[req-e1af4808-ecd8-47c7-9568-a5dd9691c2c9 - - - - -] Caught error: maximum 
recursion depth exceeded while calling a Python object: RecursionError: maximum 
recursion depth exceeded while calling a Python object
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack Traceback (most recent 
call last):
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/__init__.py", line 94, in 
__call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
req.get_response(self.application)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **kw)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 92, 
in __call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack self._log_req(req, 
res, start)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack self.force_reraise()
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack 
six.reraise(self.type_, self.value, self.tb)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/six.py", line 693, in reraise
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack raise value
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/nova/api/openstack/requestlog.py", line 87, 
in __call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack res = 
req.get_response(self.application)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack app_iter = 
application(self.environ, start_response)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return resp(environ, 
start_response)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack resp = 
self.call_func(req, *args, **kw)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
self.func(req, *args, **kwargs)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/osprofiler/web.py", line 112, in __call__
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack return 
request.get_response(self.application)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1313, in send
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack application, 
catch_exc_info=False)
2018-12-18 07:41:55.364 26870 ERROR nova.api.openstack   File 
"/usr/lib/python3.6/site-packages/webob/request.py", line 1277, in 
call_application
2018-12-18 07:41:55.364 26870 ERROR 

[Yahoo-eng-team] [Bug 1789351] [NEW] Glance deployment with python3 + "keystone" paste_deploy flavor Fails

2018-08-28 Thread yatin
Public bug reported:

This happens with oslo.config >= 6.3.0([1]) + python3 + "keystone" paste_deploy 
+ current glance(before 
https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30 it 
works)
Testing in devstack: https://review.openstack.org/#/c/596380/

The glance api service fails to start with the below error (reproduced here:
https://review.openstack.org/#/c/596380/):-
ERROR: dictionary changed size during iteration, see logs below

Failure logs from job:- http://logs.openstack.org/80/596380/2/check
/tempest-full-
py3/514fa29/controller/logs/screen-g-api.txt.gz#_Aug_27_07_26_10_698243


The RuntimeError is returned at keystonemiddleware:- 
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L551
Adding the code snippet here:-
if self._conf.oslo_conf_obj != cfg.CONF:   <-- Fails here
    oslo_cache.configure(self._conf.oslo_conf_obj)

So with pdb, found that an additional key (fatal_deprecations) was added
to cfg.CONF at ^^, so the error is raised on python3. With python2 the same
key is added but there is no error.
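
As a plain-Python illustration only (not glance or keystonemiddleware code, and
assuming the comparison walks the live option dict), this is the python3
behavior the error message points at: items() is a list copy on python2 but a
live view on python3, so registering a new option while the dict is being
iterated raises RuntimeError only on python3:

# Stand-in for an option namespace; the real objects are oslo.config groups.
opts = {"debug": False, "use_user_token": True}

def compare(left, right):
    # Hypothetical comparison that triggers registration of a new option
    # (here simulated inline) while walking the dict.
    for name, value in left.items():     # list on python2, live view on python3
        left.setdefault("fatal_deprecations", False)
        if right.get(name) != value:
            return False
    return True

try:
    print(compare(opts, dict(opts)))     # python2: True
except RuntimeError as exc:
    print(exc)                           # python3: dictionary changed size during iteration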

There are multiple ways to avoid it, like using the paste_deploy configuration 
that works (ex: keystone+cachemanagement), using oslo.config <= 6.2.0, using 
python2, or updating glance 
(https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30, 
as use_user_token has been deprecated for a long time).
With keystone+cachemanagement, all the config items were added before reaching 
the failure point in keystonemiddleware, and self._conf.oslo_conf_obj != 
cfg.CONF didn't raise an error and returned a Boolean. Don't know why.

But it seems a real issue to me as it may happen on python3 in different 
places. So it would be good if the teams from the affected projects (oslo.config, 
keystonemiddleware, glance) can look at it and fix (not avoid) it at the best place.
To me it looks like keystonemiddleware is not handling it (comparing the dict) 
properly on python3, as the conf is dynamically updated (how? and when?).

- So can the oslo.config team check whether glance and keystonemiddleware are 
handling/using oslo.config properly.
- I checked that keystone+cachemanagement has been the default in devstack for 
the last 6 years; is the "keystone" flavor supported? If yes it should be fixed. 
Also it would be good to clean up the options that have been deprecated since 
Mitaka.
- If it's wrongly used in keystonemiddleware/glance, it would be good to fix it 
there.


Initially detected while testing with Fedora[2], but later digged on why it's 
working in CI with Ubuntu and started [3].


[1] https://review.openstack.org/#/c/560094/
[2] https://review.rdoproject.org/r/#/c/14921/
[3] https://review.openstack.org/#/c/596380/

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: keystonemiddleware
 Importance: Undecided
 Status: New

** Affects: oslo.config
 Importance: Undecided
 Status: New

** Also affects: oslo.config
   Importance: Undecided
   Status: New

** Also affects: keystonemiddleware
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1789351

Title:
  Glance deployment with python3 + "keystone" paste_deploy flavor Fails

Status in Glance:
  New
Status in keystonemiddleware:
  New
Status in oslo.config:
  New

Bug description:
  This happens with oslo.config >= 6.3.0([1]) + python3 + "keystone" 
paste_deploy + current glance(before 
https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30 it 
works)
  Testing in devstack: https://review.openstack.org/#/c/596380/

  The glance api service fails to start with below Error, reproducing here: 
https://review.openstack.org/#/c/596380/:-
  ERROR: dictionary changed size during iteration , see logs below

  Failure logs from job:- http://logs.openstack.org/80/596380/2/check
  /tempest-full-
  py3/514fa29/controller/logs/screen-g-api.txt.gz#_Aug_27_07_26_10_698243

  
  The Runtime Error is returned at keystonemiddleware:- 
https://github.com/openstack/keystonemiddleware/blob/master/keystonemiddleware/auth_token/__init__.py#L551
  Adding code snippet here:-
  if self._conf.oslo_conf_obj != cfg.CONF:   <-- Fails here
  oslo_cache.configure(self._conf.oslo_conf_obj)

  So with pdb found that an additional key(fatal_deprecations) was added
  to cfg.CONF at ^^, so Error is returned in python3. With python2 same
  key is added but no Error.

  There are multiple ways to avoid it, like use the paste_deploy configuration 
that works(ex: keystone+cachemanagement), use oslo.config <= 6.2.0, Use python2 
or update 
glance(https://review.openstack.org/#/c/532503/10/glance/common/store_utils.py@30
 as use_user_token is deprecated since long)
  with keystone+cachemanagement, all the config items were added before 
reaching the Failure point in keystonemiddleware and self._conf.oslo_conf_obj 
!= cfg.CONF didn't raised an error and returned 
