[Yahoo-eng-team] [Bug 1500012] [NEW] QoS driver for LinuxBridge Agent

2015-09-26 Thread Slawek Kaplonski
Public bug reported:

Currently QoS is supported by the openvswitch agent (QosOVSAgentDriver) and by
the sr-iov agent (QosSRIOVAgentDriver) on compute hosts.
There should also be a similar driver for the LinuxBridge agent. It could set
bandwidth limits using "tc", applied to ingress and egress traffic on the
individual ports (interfaces) attached to a linux bridge.
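
Below is a minimal, hypothetical sketch (not the actual Neutron driver code) of
how such limits can be applied with "tc": a TBF qdisc caps egress traffic on a
bridge port, and an ingress policing filter caps incoming traffic. The device
name and the numeric values are invented for illustration only.

import subprocess


def set_egress_limit(device, rate_kbps, burst_kbit):
    # Token Bucket Filter on the root qdisc limits traffic leaving the port.
    subprocess.check_call([
        'tc', 'qdisc', 'replace', 'dev', device, 'root', 'tbf',
        'rate', '%skbit' % rate_kbps,
        'burst', '%skbit' % burst_kbit,
        'latency', '50ms'])


def set_ingress_limit(device, rate_kbps, burst_kbit):
    # Ingress qdisc plus a policing filter drops traffic above the rate.
    subprocess.check_call([
        'tc', 'qdisc', 'add', 'dev', device, 'handle', 'ffff:', 'ingress'])
    subprocess.check_call([
        'tc', 'filter', 'add', 'dev', device, 'parent', 'ffff:',
        'protocol', 'all', 'prio', '1', 'u32', 'match', 'u32', '0', '0',
        'police', 'rate', '%skbit' % rate_kbps,
        'burst', '%skbit' % burst_kbit, 'drop', 'flowid', ':1'])


set_egress_limit('tap1234abcd-56', rate_kbps=1000, burst_kbit=800)
set_ingress_limit('tap1234abcd-56', rate_kbps=1000, burst_kbit=800)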

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: linuxbridge qos rfe

** Tags added: rfe

** Tags added: linuxbridge

** Tags added: qos

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1500012

Title:
  QoS driver for LinuxBridge Agent

Status in neutron:
  New

Bug description:
  Currently QoS is supported by the openvswitch agent (QosOVSAgentDriver) and
  by the sr-iov agent (QosSRIOVAgentDriver) on compute hosts.
  There should also be a similar driver for the LinuxBridge agent. It could
  set bandwidth limits using "tc", applied to ingress and egress traffic on
  the individual ports (interfaces) attached to a linux bridge.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1500012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1507761] [NEW] qos wrong units in max-burst-kbps option

2015-10-19 Thread Slawek Kaplonski
Public bug reported:

In neutron, the QoS bandwidth limit rule table in the database and the API
extension suggest the wrong units for the "max-burst-kbps" parameter. The
burst should be given in kb instead of kbps because, according to for example
the OVS documentation
(http://openvswitch.org/support/config-cookbooks/qos-rate-limiting/), it is "a
parameter to the policing algorithm to indicate the maximum amount of data (in
Kb) that this interface can send beyond the policing rate."
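
To illustrate the distinction, here is a small, hypothetical snippet using the
policing knobs from the cited cookbook (port name and numbers are made up):
ingress_policing_rate is a rate in kbps, while ingress_policing_burst is an
amount of data in kb that may be sent beyond that rate.

import subprocess

port = 'tap7967f3cb-ef'

# Sustained policing rate: 1000 kilobits per SECOND (a rate, so kbps).
subprocess.check_call(
    ['ovs-vsctl', 'set', 'interface', port, 'ingress_policing_rate=1000'])

# Burst: 100 kilobits of DATA allowed beyond the policing rate -- an amount,
# not a rate, which is why "kbps" is the wrong unit to suggest for it.
subprocess.check_call(
    ['ovs-vsctl', 'set', 'interface', port, 'ingress_policing_burst=100'])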

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507761

Title:
  qos wrong units in max-burst-kbps option

Status in neutron:
  New

Bug description:
  In neutron, the QoS bandwidth limit rule table in the database and the API
  extension suggest the wrong units for the "max-burst-kbps" parameter. The
  burst should be given in kb instead of kbps because, according to for
  example the OVS documentation
  (http://openvswitch.org/support/config-cookbooks/qos-rate-limiting/), it is
  "a parameter to the policing algorithm to indicate the maximum amount of
  data (in Kb) that this interface can send beyond the policing rate."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507761/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1518675] [NEW] Add fullstack resources and tests for linuxbridge agent

2015-11-22 Thread Slawek Kaplonski
Public bug reported:

Currently fullstack tests (test_connectivity, qos) only exercise hosts running
the ovs agent. Support for the linuxbridge agent should be added to the
fullstack tests as well, IMO.

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518675

Title:
  Add fullstack resources and tests for linuxbridge agent

Status in neutron:
  New

Bug description:
  Currently fullstack tests (test_connectivity, qos) only exercise hosts
  running the ovs agent. Support for the linuxbridge agent should be added to
  the fullstack tests as well, IMO.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518675/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1926780] [NEW] Multicast traffic scenario test is failing sometimes on OVN job

2021-04-30 Thread Slawek Kaplonski
Public bug reported:

Logstash query:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22RuntimeError%3A%20Unregistered%20server%20received%20unexpected%20packet(s).%5C%22

It seems to happen mostly on the wallaby and victoria jobs. It is not very
frequent, but it does happen from time to time.

Example of the failure:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b66/712474/7/check
/neutron-tempest-plugin-scenario-ovn/b661cd4/testr_results.html

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
eventlet.sleep(sleep)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_multicast.py",
 line 274, in test_multicast_between_vms_on_same_network
self._check_multicast_conectivity(sender=sender, receivers=receivers,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_multicast.py",
 line 381, in _check_multicast_conectivity
utils.wait_until_true(
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 84, in wait_until_true
raise exception
RuntimeError: Unregistered server received unexpected packet(s).
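
For context, the helper in that traceback behaves roughly like the sketch
below (a simplified, assumed shape, not the exact neutron_tempest_plugin
code): the predicate is polled under an eventlet timeout, and when the timeout
fires the caller-supplied exception (here the RuntimeError about unexpected
packets) is raised instead of the raw Timeout.

import eventlet


def wait_until_true(predicate, timeout=60, sleep=1, exception=None):
    # Poll the predicate; on timeout raise the caller-supplied exception.
    try:
        with eventlet.Timeout(timeout):
            while not predicate():
                eventlet.sleep(sleep)
    except eventlet.Timeout:
        if exception is not None:
            raise exception
        raise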

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1926780

Title:
  Multicast traffic scenario test is failing sometimes on OVN job

Status in neutron:
  Confirmed

Bug description:
  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22RuntimeError%3A%20Unregistered%20server%20received%20unexpected%20packet(s).%5C%22

  It seems to happen mostly on the wallaby and victoria jobs. It is not very
  frequent, but it does happen from time to time.

  Example of the failure:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b66/712474/7/check
  /neutron-tempest-plugin-scenario-ovn/b661cd4/testr_results.html

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_multicast.py",
 line 274, in test_multicast_between_vms_on_same_network
  self._check_multicast_conectivity(sender=sender, receivers=receivers,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_multicast.py",
 line 381, in _check_multicast_conectivity
  utils.wait_until_true(
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 84, in wait_until_true
  raise exception
  RuntimeError: Unregistered server received unexpected packet(s).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1926780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928466] [NEW] Allowed address pairs aren't populated to the new host with DVR router

2021-05-14 Thread Slawek Kaplonski
Public bug reported:

For DVR routers, neutron-server needs to populate ARP entries also for IPs
added to ports as allowed address pairs. When e.g. a new IP is added to the
allowed address pairs of a port, this works fine: the neutron server sends a
notification about the new ARP entry to all L3 agents hosting the DVR router.

But when a new VM plugged into the same router is spawned on a completely new
compute node, or an existing VM is migrated to a new compute node where the
DVR router was not created before, the ARP entries for the allowed address
pairs are not populated at all.
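
As a hypothetical illustration of what "populating an ARP entry" means on the
compute node (names and addresses are invented), the L3 agent effectively
installs a permanent neighbour entry for the pair IP/MAC inside the DVR router
namespace:

import subprocess

router_ns = 'qrouter-aaaabbbb-cccc-dddd-eeee-ffff00001111'
internal_dev = 'qr-12345678-90'
pair_ip = '10.0.0.50'            # IP from the port's allowed address pairs
pair_mac = 'fa:16:3e:aa:bb:cc'

# Permanent neighbour entry so the allowed-address-pair IP is reachable from
# the router without relying on dynamic ARP resolution.
subprocess.check_call([
    'ip', 'netns', 'exec', router_ns,
    'ip', 'neigh', 'replace', pair_ip,
    'lladdr', pair_mac, 'dev', internal_dev, 'nud', 'permanent'])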

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928466

Title:
  Allowed address pairs aren't populated to the new host with DVR router

Status in neutron:
  Confirmed

Bug description:
  For DVR routers, neutron-server needs to populate ARP entries also for IPs
  added to ports as allowed address pairs. When e.g. a new IP is added to the
  allowed address pairs of a port, this works fine: the neutron server sends a
  notification about the new ARP entry to all L3 agents hosting the DVR
  router.

  But when a new VM plugged into the same router is spawned on a completely
  new compute node, or an existing VM is migrated to a new compute node where
  the DVR router was not created before, the ARP entries for the allowed
  address pairs are not populated at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1928466/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1928764] [NEW] Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent

2021-05-18 Thread Slawek Kaplonski
Public bug reported:

It seems that the test
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart
has been failing pretty often recently in various LB scenarios (flat and vxlan
networks).

Examples of failures:

https://09f8e4e92bfb8d2ac89d-b41143eab52d80358d8555f964e9341b.ssl.cf5.rackcdn.com/670611/13/check/neutron-fullstack-with-uwsgi/8f51833/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
https://0603beb4ddbd36de1165-42644bdefd5590a8f7e4e2e8a8a4112f.ssl.cf5.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/7640987/testr_results.html
https://e978bdcfc0235dcd9417-6560bc3b6382c1d289b358872777ca09.ssl.cf1.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/779913e/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0cb/789648/5/check/neutron-fullstack-with-uwsgi/0cb6d65/testr_results.html

Stacktrace:

ft1.1: 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(LB,Flat
 network)testtools.testresult.real._StringException: Traceback (most recent 
call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_connectivity.py",
 line 236, in test_l2_agent_restart
self._assert_ping_during_agents_restart(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/base.py", 
line 123, in _assert_ping_during_agents_restart
common_utils.wait_until_true(
  File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 147, in async_ping
f.result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in 
__get_result
raise self._exception
  File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 128, in assert_async_ping
ns_ip_wrapper.netns.execute(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 718, in execute
return utils.execute(cmd, check_exit_code=check_exit_code,
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/utils.py", 
line 156, in execute
raise exceptions.ProcessExecutionError(msg,
neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: ['ip', 
'netns', 'exec', 'test-af70cf3a-c531-4fdf-ab4c-31cc69cc2c56', 'ping', '-W', 2, 
'-c', '1', '20.0.0.212']; Stdin: ; Stdout: PING 20.0.0.212 (20.0.0.212) 56(84) 
bytes of data.

--- 20.0.0.212 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

; Stderr:


I checked the linuxbridge-agent logs (2 cases) and found errors like the one
below:

2021-05-13 15:46:07.721 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, ()) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
2021-05-13 15:46:07.725 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, None) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
2021-05-13 15:46:07.728 96421 DEBUG oslo.privsep.daemon [-] privsep: Exception 
during request[139960964907248]: Network interface brqa235fa8c-09 not found in 
namespace None. _process_cmd 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:488
Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py",
 line 485, in _process_cmd
ret = func(*f_args, **f_kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/priv_context.py",
 line 249, in _wrap
return func(*args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/privileged/agent/linux/ip_lib.py",
 line 278, in delete_ip_address
_run_iproute_addr("delete",
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/privileged/agent/linux/ip_lib.py",
 line 239, in _run_iproute_addr
idx = get_link_id(device, namespace)
  File 
"/home/zuul/src/opendev.org/open

[Yahoo-eng-team] [Bug 1928913] [NEW] Docs job is broken in neutron

2021-05-19 Thread Slawek Kaplonski
Public bug reported:

Error like:

2021-05-19 02:18:30.683798 | ubuntu-focal | 
/home/zuul/src/opendev.org/openstack/neutron/.tox/docs/lib/python3.8/site-packages/dns/hash.py:23:
 DeprecationWarning: dns.hash module will be removed in future versions. Please 
use hashlib instead.
2021-05-19 02:18:30.683881 | ubuntu-focal |   warnings.warn(
2021-05-19 02:18:30.683908 | ubuntu-focal | 
/home/zuul/src/opendev.org/openstack/neutron/.tox/docs/lib/python3.8/site-packages/dns/namedict.py:35:
 DeprecationWarning: Using or importing the ABCs from 'collections' instead of 
from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop 
working
2021-05-19 02:18:30.683941 | ubuntu-focal |   class 
NameDict(collections.MutableMapping):
2021-05-19 02:18:30.683965 | ubuntu-focal |
2021-05-19 02:18:30.683989 | ubuntu-focal | Extension error 
(oslo_config.sphinxconfiggen):
2021-05-19 02:18:30.684011 | ubuntu-focal | Handler  for event 'builder-inited' threw an exception (exception: No 
module named 'mitogen')
2021-05-19 02:18:31.013073 | ubuntu-focal | ERROR: InvocationError for command 
/home/zuul/src/opendev.org/openstack/neutron/.tox/docs/bin/sphinx-build -W -b 
html doc/source doc/build/html (exited with code 2)
2021-05-19 02:18:31.013153 | ubuntu-focal | docs finish: run-test  after 2.96 
seconds
2021-05-19 02:18:31.013688 | ubuntu-focal | docs start: run-test-post

For example
https://9252355b19e6b9b2ecd6-c7c5829e17049c4cf47426e53829e931.ssl.cf2.rackcdn.com/783748/4/check
/openstack-tox-docs/5a2fd4b/job-output.txt

It seems that this has been happening since 18.05.2021:
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22for%20event%20
'builder-
inited'%20threw%20an%20exception%20(exception%3A%20No%20module%20named%20'mitogen')%5C%22

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: doc gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928913

Title:
  Docs job is broken in neutron

Status in neutron:
  Confirmed

Bug description:
  Error like:

  2021-05-19 02:18:30.683798 | ubuntu-focal | 
/home/zuul/src/opendev.org/openstack/neutron/.tox/docs/lib/python3.8/site-packages/dns/hash.py:23:
 DeprecationWarning: dns.hash module will be removed in future versions. Please 
use hashlib instead.
  2021-05-19 02:18:30.683881 | ubuntu-focal |   warnings.warn(
  2021-05-19 02:18:30.683908 | ubuntu-focal | 
/home/zuul/src/opendev.org/openstack/neutron/.tox/docs/lib/python3.8/site-packages/dns/namedict.py:35:
 DeprecationWarning: Using or importing the ABCs from 'collections' instead of 
from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop 
working
  2021-05-19 02:18:30.683941 | ubuntu-focal |   class 
NameDict(collections.MutableMapping):
  2021-05-19 02:18:30.683965 | ubuntu-focal |
  2021-05-19 02:18:30.683989 | ubuntu-focal | Extension error 
(oslo_config.sphinxconfiggen):
  2021-05-19 02:18:30.684011 | ubuntu-focal | Handler  for event 'builder-inited' threw an exception (exception: No 
module named 'mitogen')
  2021-05-19 02:18:31.013073 | ubuntu-focal | ERROR: InvocationError for 
command /home/zuul/src/opendev.org/openstack/neutron/.tox/docs/bin/sphinx-build 
-W -b html doc/source doc/build/html (exited with code 2)
  2021-05-19 02:18:31.013153 | ubuntu-focal | docs finish: run-test  after 2.96 
seconds
  2021-05-19 02:18:31.013688 | ubuntu-focal | docs start: run-test-post

  For example
  
https://9252355b19e6b9b2ecd6-c7c5829e17049c4cf47426e53829e931.ssl.cf2.rackcdn.com/783748/4/check
  /openstack-tox-docs/5a2fd4b/job-output.txt

  It seems that this has been happening since 18.05.2021:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22for%20event%20
  'builder-
  
inited'%20threw%20an%20exception%20(exception%3A%20No%20module%20named%20'mitogen')%5C%22

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1928913/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929518] [NEW] Functional db migration tests broken

2021-05-25 Thread Slawek Kaplonski
Public bug reported:

It seems that it is failing all the time now. Example of a failure:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ba6/790999/7/check
/neutron-functional-with-uwsgi/ba6f15c/testr_results.html

Stacktrace:

ft1.4: 
neutron.tests.functional.db.test_migrations.TestModelsMigrationsMysql.test_models_synctesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 125, in inner
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/db/test_migrations.py",
 line 385, in test_models_sync
super(TestModelsMigrationsMysql, self).test_models_sync()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_db/sqlalchemy/test_migrations.py",
 line 597, in test_models_sync
self.fail(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/unittest2/case.py",
 line 690, in fail
raise self.failureException(msg)
AssertionError: Models and migration scripts aren't in sync:
[ ( 'add_index',
Index('ix_address_groups_project_id', Column('project_id', 
String(length=255), table=))),
  ( 'add_index',
Index('ix_address_scopes_project_id', Column('project_id', 
String(length=255), table=))),
  ( 'add_index',
Index('ix_addressgrouprbacs_project_id', Column('project_id', 
String(length=255), table=))),
  ( 'add_constraint',
UniqueConstraint(Column('mac_address', NullType(), table=))),
  ( 'add_index',
Index('ix_floatingipdnses_floatingip_id', Column('floatingip_id', 
String(length=36), table=, primary_key=True, nullable=False))),
  ( 'add_index',
Index('ix_floatingips_project_id', Column('project_id', String(length=255), 
table=))),
  ( 'add_index',
Index('ix_logs_project_id', Column('project_id', String(length=255), 
table=))),
  ( 'add_index',
Index('ix_logs_resource_id', Column('resource_id', String(length=36), 
table=))),
  ( 'add_index',
Index('ix_logs_target_id', Column('target_id', String(length=36), 
table=))),
  ( 'add_index',
Index('ix_meteringlabels_project_id', Column('project_id', 
String(length=255), table=))),
  ( 'add_index',
Index('ix_ml2_gre_allocations_allocated', Column('allocated', Boolean(), 
table=, nullable=False, default=ColumnDefault(False), 
server_default=DefaultClause(, for_update=False,
  ( 'add_index',
Index('ix_ml2_vxlan_allocations_allocated', Column('allocated', Boolean(), 
table=, nullable=False, default=ColumnDefault(False), 
server_default=DefaultClause(, for_update=False,
  ( 'add_index',
Index('ix_networkdnsdomains_network_id', Column('network_id', 
String(length=36), table=, primary_key=True, 
nullable=False))),
  ( 'add_index',
Index('ix_networkrbacs_project_id', Column('project_id', 
String(length=255), table=))),
  ( 'add_index',
Index('ix_networks_project_id', Column('project_id', String(length=255), 
table=))),
  ( 'add_index',
Index('ix_ovn_hash_ring_group_name', Column('group_name', 
String(length=256), table=, primary_key=True, nullable=False))),
  ( 'add_index',
Index('ix_ovn_hash_ring_node_uuid', Column('node_uuid', String(length=36), 
table=, primary_key=True, nullable=False))),
  ( 'add_index',
Index('ix_ovn_revision_numbers_resource_type', Column('resource_type', 
String(length=36), table=, primary_key=True, 
nullable=False))),
  ( 'add_index',
Index('ix_ovn_revision_numbers_resource_uuid', Column('resource_uuid', 
String(length=36), table=, primary_key=True, 
nullable=False))),
  ( 'add_index',
Index('ix_portdataplanestatuses_port_id', Column('port_id', 
String(length=36), table=, primary_key=True, 
nullable=False))),
  ( 'add_index',
Index('ix_portdnses_port_id', Column('port_id', String(length=36), 
table=, primary_key=True, nullable=False))),
  ( 'add_index',
Index('ix_ports_project_id', Column('project_id', String(length=255), 
table=))),
  ( 'add_index',
Index('ix_portuplinkstatuspropagation_port_id', Column('port_id', 
String(length=36), table=, primary_key=True, 
nullable=False))),
  ( 'add_constraint',
UniqueConstraint(Column('qos_policy_id', NullType(), 
table=))),
  ( 'add_constraint',
UniqueConstraint(Column('fip_id', NullType(), 
table=))),
  ( 'add_index',
Index('ix_qos_minimum_bandwidth_rules_qos_policy_id', 
Column('qos_policy_id', String(length=36), table=, 
nullable=False))),
  ( 'add_constraint',
UniqueConstraint(Column('network_id', NullType(), 
table=))),
  ( 'add_index',
Index('ix_qos_policies_project_id', Column('project_id', 
String(length=255), table=))),
  ( 'add_index',
Index('ix_qos_policies_default_project_id', Column('project_id', 
String(length=255), table=, primary_key=T

[Yahoo-eng-team] [Bug 1929523] [NEW] Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time

2021-05-25 Thread Slawek Kaplonski
Public bug reported:

Test
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details
is failing with an error like the one below:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
return f(*func_args, **func_kwargs)
  File "/opt/stack/tempest/tempest/scenario/test_network_basic_ops.py", line 
636, in test_subnet_details
self.assertEqual(set(dns_servers), set(servers),
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 415, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 502, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: {'1.2.3.4'} != set(): Looking for 
servers: ['1.2.3.4']. Retrieved DNS nameservers: [] From host: 172.24.5.27.

Example of the failure:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_567/785895/1/gate
/neutron-tempest-slow-py3/567fc7f/testr_results.html

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1929523

Title:
  Test
  
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details
  is failing from time to time

Status in neutron:
  Confirmed

Bug description:
  Test
  
tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details
  is failing with an error like the one below:

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)
File "/opt/stack/tempest/tempest/scenario/test_network_basic_ops.py", line 
636, in test_subnet_details
  self.assertEqual(set(dns_servers), set(servers),
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 415, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 502, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: {'1.2.3.4'} != set(): Looking for 
servers: ['1.2.3.4']. Retrieved DNS nameservers: [] From host: 172.24.5.27.

  Example of the failure:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_567/785895/1/gate
  /neutron-tempest-slow-py3/567fc7f/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1929523/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929676] [NEW] API extensions not supported by e.g. OVN driver may still be on the list returned from neutron

2021-05-26 Thread Slawek Kaplonski
Public bug reported:

Some time ago we introduced the possibility for mechanism drivers to
explicitly filter out API extensions which they don't support, so that such
extensions aren't listed in the API.

But the problem is that when more than one mechanism driver is enabled, the
first of them may filter out some extensions while another filters nothing, so
unsupported extensions may still appear on the list.

To reproduce the issue, enable the ovn and logging mech_drivers:

mechanism_drivers = ovn,logger

Then check e.g. dhcp_agent_scheduler: it will be on the list returned by the
"neutron ext-list" command, even though it is not supported by ovn.

The problem is that _filter_extensions_by_mech_driver passes the list of all
ML2 extensions to each mech_driver, so even if one of them disables an
extension, another can add it back to the list:
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L318

Maybe a better approach would be to keep on the list only the extensions which
are supported by all enabled mech_drivers?
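
A small, self-contained illustration of the behaviour described above and of
the suggested alternative (the extension names and per-driver support sets are
invented; this is not the real ML2 code):

ALL_ML2_EXTENSIONS = {'dhcp_agent_scheduler', 'qos', 'logging'}


def supported_extensions(driver):
    # Invented support sets: an OVN-like driver filters out the agent
    # scheduler extension, while the logging driver filters nothing.
    if driver == 'ovn':
        return ALL_ML2_EXTENSIONS - {'dhcp_agent_scheduler'}
    return set(ALL_ML2_EXTENSIONS)


drivers = ['ovn', 'logger']

# Current behaviour: each driver filters the full list independently, so the
# union lets 'dhcp_agent_scheduler' back in even though ovn removed it.
current = set().union(*(supported_extensions(d) for d in drivers))

# Suggested behaviour: keep only extensions supported by every driver.
suggested = set.intersection(*(supported_extensions(d) for d in drivers))

print(sorted(current))    # ['dhcp_agent_scheduler', 'logging', 'qos']
print(sorted(suggested))  # ['logging', 'qos']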

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: ml2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1929676

Title:
  API extensions not supported by e.g. OVN driver may still be on the
  list returned from neutron

Status in neutron:
  Confirmed

Bug description:
  Some time ago we introduced the possibility for mechanism drivers to
  explicitly filter out API extensions which they don't support, so that such
  extensions aren't listed in the API.

  But the problem is that when more than one mechanism driver is enabled, the
  first of them may filter out some extensions while another filters nothing,
  so unsupported extensions may still appear on the list.

  To reproduce the issue, enable the ovn and logging mech_drivers:

  mechanism_drivers = ovn,logger

  Then check e.g. dhcp_agent_scheduler: it will be on the list returned by the
  "neutron ext-list" command, even though it is not supported by ovn.

  The problem is that _filter_extensions_by_mech_driver passes the list of all
  ML2 extensions to each mech_driver, so even if one of them disables an
  extension, another can add it back to the list:
  https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/plugin.py#L318

  Maybe a better approach would be to keep on the list only the extensions
  which are supported by all enabled mech_drivers?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1929676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930397] [NEW] neutron-lib from master branch is breaking our UT job

2021-06-01 Thread Slawek Kaplonski
Public bug reported:

For now it shows up only in the periodic queue:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9e8/periodic/opendev.org/openstack/neutron/master
/openstack-tox-py36-with-neutron-lib-master/9e852a4/testr_results.html

but we need to fix it before we release and use a new neutron-lib.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930397

Title:
  neutron-lib from master branch is breaking our UT job

Status in neutron:
  Confirmed

Bug description:
  For now it shows up only in the periodic queue:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9e8/periodic/opendev.org/openstack/neutron/master
  /openstack-tox-py36-with-neutron-lib-master/9e852a4/testr_results.html

  but we need to fix it before we release and use a new neutron-lib.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930401] [NEW] Fullstack l3 agent tests failing due to timeout waiting until port is active

2021-06-01 Thread Slawek Kaplonski
Public bug reported:

Many fullstack L3 agent related tests have been failing recently, and the
common factor for many of them is that they fail while waiting for the port
status to become ACTIVE. For example:

https://9cec50bd524f94a2df4c-c6273b9a7cf594e42eb2c4e7f818.ssl.cf5.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/6fc0704/testr_results.html
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_73b/793141/2/check/neutron-fullstack-with-uwsgi/73b08ae/testr_results.html
https://b87ba208d44b7f1356ad-f27c11edabee52a7804784593cf2712d.ssl.cf5.rackcdn.com/791365/5/check/neutron-fullstack-with-uwsgi/634ccb1/testr_results.html
https://dd43e0f9601da5e2e650-51b18fcc89837fbadd0245724df9c686.ssl.cf1.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/5413cd9/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8d0/791365/5/check/neutron-fullstack-with-uwsgi/8d024fb/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_188/791365/5/check/neutron-fullstack-with-uwsgi/188aa48/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/792998/2/check/neutron-fullstack-with-uwsgi/9a3b5a2/testr_results.html

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: fullstack l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930401

Title:
  Fullstack l3 agent tests failing due to timeout waiting until port is
  active

Status in neutron:
  Confirmed

Bug description:
  Many fullstack L3 agent related tests have been failing recently, and the
  common factor for many of them is that they fail while waiting for the port
  status to become ACTIVE. For example:

  
https://9cec50bd524f94a2df4c-c6273b9a7cf594e42eb2c4e7f818.ssl.cf5.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/6fc0704/testr_results.html
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_73b/793141/2/check/neutron-fullstack-with-uwsgi/73b08ae/testr_results.html
  
https://b87ba208d44b7f1356ad-f27c11edabee52a7804784593cf2712d.ssl.cf5.rackcdn.com/791365/5/check/neutron-fullstack-with-uwsgi/634ccb1/testr_results.html
  
https://dd43e0f9601da5e2e650-51b18fcc89837fbadd0245724df9c686.ssl.cf1.rackcdn.com/791365/6/check/neutron-fullstack-with-uwsgi/5413cd9/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8d0/791365/5/check/neutron-fullstack-with-uwsgi/8d024fb/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_188/791365/5/check/neutron-fullstack-with-uwsgi/188aa48/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_9a3/792998/2/check/neutron-fullstack-with-uwsgi/9a3b5a2/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930402] [NEW] SSH timeouts happens very often in the ovn based CI jobs

2021-06-01 Thread Slawek Kaplonski
Public bug reported:

I saw those errors mostly in the neutron-ovn-tempest-slow job, but they
probably happen in other jobs as well. Examples of failures:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0de/791365/7/check/neutron-ovn-tempest-slow/0de1c30/testr_results.html
https://2f7fb53980d59c550f7f-e09de525732b656a1c483807eeb06fc8.ssl.cf2.rackcdn.com/793369/3/check/neutron-ovn-tempest-slow/8a116c7/testr_results.html
https://f86d217b949ada82d82c-f669355b5e0e599ce4f84e6e473a124c.ssl.cf2.rackcdn.com/791365/6/check/neutron-ovn-tempest-slow/5dc8d92/testr_results.html

In all those cases, the common thing is that the VMs seem to get an IP address
from DHCP properly and cloud-init seems to work fine, but SSH to the FIP is
not possible.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930402

Title:
  SSH timeouts happens very often in the ovn based CI jobs

Status in neutron:
  Confirmed

Bug description:
  I saw those errors mostly in the neutron-ovn-tempest-slow job, but they
  probably happen in other jobs as well. Examples of failures:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0de/791365/7/check/neutron-ovn-tempest-slow/0de1c30/testr_results.html
  
https://2f7fb53980d59c550f7f-e09de525732b656a1c483807eeb06fc8.ssl.cf2.rackcdn.com/793369/3/check/neutron-ovn-tempest-slow/8a116c7/testr_results.html
  
https://f86d217b949ada82d82c-f669355b5e0e599ce4f84e6e473a124c.ssl.cf2.rackcdn.com/791365/6/check/neutron-ovn-tempest-slow/5dc8d92/testr_results.html

  In all those cases, the common thing is that the VMs seem to get an IP
  address from DHCP properly and cloud-init seems to work fine, but SSH to
  the FIP is not possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931217] [NEW] Fullstack Legacy L3 agent tests are failing very often recently

2021-06-08 Thread Slawek Kaplonski
Public bug reported:

Examples of the failures:

https://a3e9d176bf1a926ac6c8-4a42abeb7e7779ef25c6242290fc9b63.ssl.cf5.rackcdn.com/795185/1/check
/neutron-fullstack-with-uwsgi/f7a1672/testr_results.html

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_7f5/788510/7/check
/neutron-fullstack-with-uwsgi/7f51cfa/testr_results.html

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d7d/794777/1/check
/neutron-fullstack-with-uwsgi/d7d7ad5/testr_results.html

https://13421bfde193d16eadd9-09dae87d9b597faab95a4d42e1ed51da.ssl.cf5.rackcdn.com/792953/3/check
/neutron-fullstack-with-uwsgi/600701b/testr_results.html

In the neutron server logs I see errors like:

2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager 
[req-1d1eba43-5cb8-49cc-8171-f914739630b1 - - - - -] Error during notification 
for neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event-4064702 port, 
after_create: TypeError: handle_event() missing 1 required positional argument: 
'context'
2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py",
 line 197, in _notify_loop
2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager TypeError: 
handle_event() missing 1 required positional argument: 'context'
2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager 
2021-06-08 04:28:04.085 137352 DEBUG neutron.scheduler.dhcp_agent_scheduler 
[req-1d1eba43-5cb8-49cc-8171-f914739630b1 - - - - -] Network 
08e13e32-a5ef-4045-9c91-403737ffea99 is already hosted by enough agents. 
_get_dhcp_agents_hosting_network 
/home/zuul/src/opendev.org/openstack/neutron/neutron/scheduler/dhcp_agent_scheduler.py:275
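
The TypeError itself can be reproduced in isolation. A minimal, hypothetical
illustration (not the neutron code) of the mismatch between a notifier that
calls callback(resource, event, trigger, **kwargs) and a handler that declares
an extra required 'context' parameter:

def notify(callback, resource, event, trigger, **kwargs):
    # The notification loop passes only resource/event/trigger positionally.
    callback(resource, event, trigger, **kwargs)


def handle_event(resource, event, trigger, context, **kwargs):
    print('handled %s %s (context=%s)' % (resource, event, context))


# Works when 'context' arrives through the kwargs:
notify(handle_event, 'port', 'after_create', None, context='admin-context')

# Fails exactly like the log above when it does not:
try:
    notify(handle_event, 'port', 'after_create', None, payload=object())
except TypeError as exc:
    # handle_event() missing 1 required positional argument: 'context'
    print(exc)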

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: fullstack gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1931217

Title:
  Fullstack Legacy L3 agent tests are failing very often recently

Status in neutron:
  Confirmed

Bug description:
  Examples of the failures:

  
https://a3e9d176bf1a926ac6c8-4a42abeb7e7779ef25c6242290fc9b63.ssl.cf5.rackcdn.com/795185/1/check
  /neutron-fullstack-with-uwsgi/f7a1672/testr_results.html

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_7f5/788510/7/check
  /neutron-fullstack-with-uwsgi/7f51cfa/testr_results.html

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d7d/794777/1/check
  /neutron-fullstack-with-uwsgi/d7d7ad5/testr_results.html

  
https://13421bfde193d16eadd9-09dae87d9b597faab95a4d42e1ed51da.ssl.cf5.rackcdn.com/792953/3/check
  /neutron-fullstack-with-uwsgi/600701b/testr_results.html

  In the neutron server logs I see errors like:

  2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager 
[req-1d1eba43-5cb8-49cc-8171-f914739630b1 - - - - -] Error during notification 
for neutron.plugins.ml2.ovo_rpc._ObjectChangeHandler.handle_event-4064702 port, 
after_create: TypeError: handle_event() missing 1 required positional argument: 
'context'
  2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):
  2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/neutron_lib/callbacks/manager.py",
 line 197, in _notify_loop
  2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, **kwargs)
  2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager TypeError: 
handle_event() missing 1 required positional argument: 'context'
  2021-06-08 04:28:04.056 137352 ERROR neutron_lib.callbacks.manager 
  2021-06-08 04:28:04.085 137352 DEBUG neutron.scheduler.dhcp_agent_scheduler 
[req-1d1eba43-5cb8-49cc-8171-f914739630b1 - - - - -] Network 
08e13e32-a5ef-4045-9c91-403737ffea99 is already hosted by enough agents. 
_get_dhcp_agents_hosting_network 
/home/zuul/src/opendev.org/openstack/neutron/neutron/scheduler/dhcp_agent_scheduler.py:275

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1931217/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1917793] Re: [HA] keepalived_state_change does not finish "handle_initial_state"execution

2021-06-18 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1917793

Title:
  [HA] keepalived_state_change does not finish
  "handle_initial_state"execution

Status in neutron:
  Fix Released

Bug description:
  As seen in some logs [1], when the process "keepalived_state_change"
  is spawned, the first task done is to read the HA interface status
  (backup, primary). Sometimes the process never finishes this initial
  task.

  
[1]https://72f7db0ba35c6ad18335-0a8a55712d031506235c83f14141b923.ssl.cf2.rackcdn.com/776701/9/check
  /neutron-functional-with-uwsgi/50d999b/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1917793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1922563] Re: [UT] py38 CI job failing frequently with TIMED_OUT

2021-06-18 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: neutron-proactive-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1922563

Title:
  [UT] py38 CI job failing frequently with TIMED_OUT

Status in neutron:
  Fix Released

Bug description:
  Unit tests using py38 are failing frequently in the CI. Some test case
  is not returning and the job fails with TIMED_OUT result.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1922563/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1934115] [NEW] List security groups by project admin may return 500

2021-06-30 Thread Slawek Kaplonski
Public bug reported:

When the new RBAC policies and scopes are enforced in Neutron, there are
system admins and project admins, and a project admin does not have access to
resources from other projects.
Now, when a project admin tries to list security groups for another project,
an empty list should be returned. But because Neutron tries to ensure that the
default security group for that project exists, the request may reach
https://github.com/openstack/neutron/blob/25207ed9c0d929aa79270a118983c04f3476afc4/neutron/db/securitygroups_db.py#L144
which returns None for the project admin, so the request fails and error 500
is returned.

In such a case I think that context.elevated() should be used to get the SG
from the DB. If the user doesn't have permission to see it, it will be
filtered out later by policy.
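
A hedged sketch of the suggested approach (not the actual securitygroups_db
code; get_default_sg stands in for the DB lookup): perform the existence check
with an elevated context and leave visibility filtering to the policy layer.

def ensure_default_security_group(context, project_id, get_default_sg):
    # Using context.elevated() makes the default-SG lookup independent of the
    # caller's project, so it does not come back as None when a project admin
    # lists another project's security groups.
    admin_context = context.elevated()
    default_sg = get_default_sg(admin_context, project_id)
    if default_sg is None:
        # Create the default security group with the elevated context as
        # well (creation itself is omitted in this sketch).
        pass
    return default_sg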

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934115

Title:
  List security groups by project admin may return 500

Status in neutron:
  Confirmed

Bug description:
  When the new RBAC policies and scopes are enforced in Neutron, there are
  system admins and project admins, and a project admin does not have access
  to resources from other projects.
  Now, when a project admin tries to list security groups for another project,
  an empty list should be returned. But because Neutron tries to ensure that
  the default security group for that project exists, the request may reach
  https://github.com/openstack/neutron/blob/25207ed9c0d929aa79270a118983c04f3476afc4/neutron/db/securitygroups_db.py#L144
  which returns None for the project admin, so the request fails and error 500
  is returned.

  In such a case I think that context.elevated() should be used to get the SG
  from the DB. If the user doesn't have permission to see it, it will be
  filtered out later by policy.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934115/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1922684] Re: Functional dhcp agent tests fails to spawn metadata proxy

2021-07-02 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: neutron-proactive-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1922684

Title:
  Functional dhcp agent tests fails to spawn metadata proxy

Status in neutron:
  Fix Released

Bug description:
  When an iptables error "Another app is currently holding the xtables lock"
  occurs during the setup of the dhcp agent, tests like
  neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_enable_isolated_metadata_for_subnet_create_delete
  and
  neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_force_metadata_for_subnet_create_delete
  may fail with an error like the one below:

  2021-03-30 20:57:01.829 61168 DEBUG neutron.agent.linux.dhcp 
[req-1d14cf38-d8a8-4f3a-858d-4ab6e9b888da - - - - -] Previous DHCP port 
information: . Updated DHCP port information: . 
_check_dhcp_port_subnet 
/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/dhcp.py:1582
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent 
[req-1d14cf38-d8a8-4f3a-858d-4ab6e9b888da - - - - -] Unable to enable dhcp for 
24e1cf2a-a60d-41a9-9666-a38a90117cf9.: TypeError: can not serialize 'MagicMock' 
object
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/dhcp/agent.py", 
line 227, in call_driver
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent rv = 
getattr(driver, action)(**action_kwargs)
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/dhcp.py", 
line 266, in enable
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent 
common_utils.wait_until_true(self._enable, timeout=300)
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
707, in wait_until_true
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent while not 
predicate():
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/dhcp.py", 
line 278, in _enable
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent 
interface_name = self.device_manager.setup(self.network)
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/dhcp.py", 
line 1692, in setup
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent if 
ip_lib.ensure_device_is_ready(interface_name,
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 963, in ensure_device_is_ready
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent if not 
dev.link.exists or not dev.link.address:
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 500, in exists
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent return 
privileged.interface_exists(self.name, self._parent.namespace)
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/priv_context.py",
 line 247, in _wrap
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent return 
self.channel.remote_call(name, args, kwargs)
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/daemon.py",
 line 214, in remote_call
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent result = 
self.send_recv((Message.CALL.value, name, args, kwargs))
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/comm.py",
 line 170, in send_recv
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent 
self.writer.send((myid, msg))
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/oslo_privsep/comm.py",
 line 54, in send
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent buf = 
msgpack.packb(msg, use_bin_type=True,
  2021-03-30 20:57:01.852 61168 ERROR neutron.agent.dhcp.agent   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/msgpack/__init__.py

[Yahoo-eng-team] [Bug 1934670] [NEW] neutron-tempest-plugin should migrate from paramiko

2021-07-05 Thread Slawek Kaplonski
Public bug reported:

This should be done if we want to make OpenStack "FIPS compliant".
What FIPS is can be found at
https://csrc.nist.gov/publications/detail/fips/140/3/final

This isn't the most urgent thing, but AFAIK Tempest is also slowly moving away
from paramiko, so we can do the same in neutron-tempest-plugin as well.

** Affects: neutron
 Importance: Wishlist
 Status: New


** Tags: tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934670

Title:
  neutron-tempest-plugin should migrate from paramiko

Status in neutron:
  New

Bug description:
  This should be done if we want to make OpenStack "FIPS compliant".
  What FIPS is can be found at
  https://csrc.nist.gov/publications/detail/fips/140/3/final

  This isn't the most urgent thing, but AFAIK Tempest is also slowly moving
  away from paramiko, so we can do the same in neutron-tempest-plugin as
  well.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934670/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936911] [NEW] Scenario test test_established_tcp_session_after_re_attachinging_sg is failing on the linuxbridge backend

2021-07-20 Thread Slawek Kaplonski
Public bug reported:

This test is failing on that backend from time to time. For example:

https://4d0dc33a1771f7b089e2-b79c57b376466cab8e443243a2295837.ssl.cf1.rackcdn.com/601336/95/check/neutron-
tempest-plugin-scenario-linuxbridge/f5be5f7/testr_results.html

https://2c312e10b9f362ff0be0-ac198ee519f662a1d471c5eebfdff2e7.ssl.cf5.rackcdn.com/798009/3/check/neutron-
tempest-plugin-scenario-linuxbridge/534618f/testr_results.html

Every time I saw it, it was failing in the linuxbridge job, but maybe that's
just a coincidence.

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure linuxbridge

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936911

Title:
  Scenario test test_established_tcp_session_after_re_attachinging_sg is
  failing on the linuxbridge backend

Status in neutron:
  Confirmed

Bug description:
  This test is failing on that backend from time to time. For example:

  
https://4d0dc33a1771f7b089e2-b79c57b376466cab8e443243a2295837.ssl.cf1.rackcdn.com/601336/95/check/neutron-
  tempest-plugin-scenario-linuxbridge/f5be5f7/testr_results.html

  
https://2c312e10b9f362ff0be0-ac198ee519f662a1d471c5eebfdff2e7.ssl.cf5.rackcdn.com/798009/3/check/neutron-
  tempest-plugin-scenario-linuxbridge/534618f/testr_results.html

  Every time I saw it, it was failing in the linuxbridge job, but maybe that's
  just a coincidence.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1936911/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1934957] Re: [sriov] Unable to change the VF state for i350 interface

2021-07-20 Thread Slawek Kaplonski
We discussed that issue in our team meeting today 
https://meetings.opendev.org/meetings/networking/2021/networking.2021-07-20-14.00.log.html
Our conclusion is that this is a bug in Intel's driver and we shouldn't try to 
fix or work around it in Neutron. It should be fixed in the driver's code. So I'm 
going to close this bug.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1934957

Title:
  [sriov] Unable to change the VF state for i350 interface

Status in neutron:
  Won't Fix

Bug description:
  When sriov-nic-agent configures VF state, the exception is as follows:
  2021-07-08 06:15:47.773 34 DEBUG oslo.privsep.daemon [-] privsep: Exception 
during request[139820149013392]: Operation not supported on interface eno4, 
namespace None. _process_cmd 
/usr/local/lib/python3.6/site-packages/oslo_privsep/daemon.py:490
  Traceback (most recent call last):
File 
"/usr/local/lib/python3.6/site-packages/neutron/privileged/agent/linux/ip_lib.py",
 line 263, in _run_iproute_link
  return ip.link(command, index=idx, **kwargs)
File "/usr/local/lib/python3.6/site-packages/pyroute2/iproute/linux.py", 
line 1360, in link
  msg_flags=msg_flags)
File "/usr/local/lib/python3.6/site-packages/pyroute2/netlink/nlsocket.py", 
line 376, in nlm_request
  return tuple(self._genlm_request(*argv, **kwarg))
File "/usr/local/lib/python3.6/site-packages/pyroute2/netlink/nlsocket.py", 
line 869, in nlm_request
  callback=callback):
File "/usr/local/lib/python3.6/site-packages/pyroute2/netlink/nlsocket.py", 
line 379, in get
  return tuple(self._genlm_get(*argv, **kwarg))
File "/usr/local/lib/python3.6/site-packages/pyroute2/netlink/nlsocket.py", 
line 704, in get
  raise msg['header']['error']
  pyroute2.netlink.exceptions.NetlinkError: (95, 'Operation not supported')

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/oslo_privsep/daemon.py", line 
485, in _process_cmd
  ret = func(*f_args, **f_kwargs)
File "/usr/local/lib/python3.6/site-packages/oslo_privsep/priv_context.py", 
line 249, in _wrap
  return func(*args, **kwargs)
File 
"/usr/local/lib/python3.6/site-packages/neutron/privileged/agent/linux/ip_lib.py",
 line 403, in set_link_vf_feature
  return _run_iproute_link("set", device, namespace=namespace, vf=vf_config)
File 
"/usr/local/lib/python3.6/site-packages/neutron/privileged/agent/linux/ip_lib.py",
 line 265, in _run_iproute_link
  _translate_ip_device_exception(e, device, namespace)
File 
"/usr/local/lib/python3.6/site-packages/neutron/privileged/agent/linux/ip_lib.py",
 line 237, in _translate_ip_device_exception
  namespace=namespace)
  neutron.privileged.agent.linux.ip_lib.InterfaceOperationNotSupported: 
Operation not supported on interface eno4, namespace None.
  2021-07-08 06:15:47.773 34 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139820149013392]: (5, 
'neutron.privileged.agent.linux.ip_lib.InterfaceOperationNotSupported', 
('Operation not supported on interface eno4, namespace None.',)) _call_back 
/usr/local/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
  2021-07-08 06:15:47.774 24 WARNING 
neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent 
[req-661d08fb-983f-4632-9eb4-91585a557753 - - - - -] Device fa:16:3e:66:e4:91 
does not support state change: 
neutron.privileged.agent.linux.ip_lib.InterfaceOperationNotSupported: Operation 
not supported on interface eno4, namespace None.

  But the VM network traffic is not affected. We use the i350 interface, and I
  found these discussions about the i350 [1][2]. This exception does not impact
  VM traffic, so maybe we can ignore it when the interface is an i350.

  
  [1]https://sourceforge.net/p/e1000/bugs/653/
  
[2]https://community.intel.com/t5/Ethernet-Products/On-SRIOV-interface-I350-unable-to-change-the-VF-state-from-auto/td-p/704769

  version:
  neutron-sriov-nic-agent version 17.1.3
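
  For illustration only, the workaround suggested above could look roughly like
  the sketch below. This is not the actual agent code: the helper name and the
  call signature of set_link_vf_feature are assumptions mirrored from the
  traceback; only the InterfaceOperationNotSupported exception class is taken
  directly from the log above.

  from oslo_log import log as logging

  from neutron.privileged.agent.linux import ip_lib as priv_ip_lib

  LOG = logging.getLogger(__name__)

  # Hypothetical helper: treat "operation not supported" as non-fatal when
  # changing the VF link state, e.g. for NICs like the i350 whose driver
  # rejects the request even though traffic keeps working.
  def set_vf_state_ignoring_unsupported(device, vf_config, namespace=None):
      try:
          priv_ip_lib.set_link_vf_feature(device, namespace, vf_config)
      except priv_ip_lib.InterfaceOperationNotSupported:
          LOG.warning("Device %s does not support VF state change; "
                      "ignoring.", device)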

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1934957/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936980] [NEW] [DVR] ARP entries for allowed address pairs with IPv4 addresses are added using qr- interface from IPv6 subnets

2021-07-20 Thread Slawek Kaplonski
Public bug reported:

ARP entries for allowed address pairs are added in the DVR routers also
for IPv6 subnets (i.e. via the qr- interface of the IPv6 subnet), even if
the IP is really an IPv4 address.

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936980

Title:
  [DVR] ARP entries for allowed address pairs with IPv4 addresses are
  added using qr- interface from IPv6 subnets

Status in neutron:
  New

Bug description:
  ARP entries for allowed address pairs are added in the DVR routers also
  for IPv6 subnets (i.e. via the qr- interface of the IPv6 subnet), even
  if the IP is really an IPv4 address.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1936980/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936983] [NEW] tempest-slow-py3 is failing while creating initial network in neutron

2021-07-20 Thread Slawek Kaplonski
Public bug reported:

Example of failure
https://128d50eaaf9c22786068-bb0b8d002b29cd153f6a742d68988dd1.ssl.cf5.rackcdn.com/792299/6/check/tempest-
slow-py3/fabc438/controller/logs/screen-q-svc.txt

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936983

Title:
  tempest-slow-py3 is failing while creating initial network in neutron

Status in neutron:
  Confirmed

Bug description:
  Example of failure
  
https://128d50eaaf9c22786068-bb0b8d002b29cd153f6a742d68988dd1.ssl.cf5.rackcdn.com/792299/6/check/tempest-
  slow-py3/fabc438/controller/logs/screen-q-svc.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1936983/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1935847] Re: [RFE] Basic Authentication Support for Standalone Neutron

2021-07-23 Thread Slawek Kaplonski
We discussed this RFE today at the drivers meeting and we agreed not to
accept it for Neutron. We think a better place for such middleware
would be oslo or maybe some new repository.

** Tags removed: rfe-triaged
** Tags added: rfe

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1935847

Title:
  [RFE] Basic Authentication Support for Standalone Neutron

Status in neutron:
  Won't Fix

Bug description:
  There are a number of use cases where users would like to run standalone
  neutron (at times along with some other services, like Ironic for
  baremetal provisioning), but would still need some basic
  authentication for users accessing the neutron APIs.

  Though it's probably possible to deploy neutron with a web server and
  then configure the web server for basic authentication, it can be a big
  'overhead' for small deployments to deploy a web server for standalone
  neutron and configure it for basic auth.

  Also, projects like TripleO still do not deploy neutron with
  httpd+mod_wsgi due to some issues encountered earlier. The current
  proposal of a light TripleO undercloud with standalone neutron and
  basic authentication would benefit from this feature.

  It's possible to implement a simple basic auth middleware which is
  non-invasive and provides the desired feature for standalone neutron.
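
  As an illustration only, a minimal sketch of such a non-invasive WSGI
  basic-auth middleware could look like this (the class name and the in-memory
  credential store are made up; this is not an existing implementation):

  import base64

  class BasicAuthMiddleware(object):
      """Very small WSGI middleware enforcing HTTP Basic authentication."""

      def __init__(self, application, users):
          self.application = application
          self.users = users  # e.g. {'admin': 'secret'}

      def __call__(self, environ, start_response):
          header = environ.get('HTTP_AUTHORIZATION', '')
          if header.startswith('Basic '):
              decoded = base64.b64decode(header[6:]).decode('utf-8')
              username, _, password = decoded.partition(':')
              if self.users.get(username) == password:
                  # Credentials match: pass the request on to neutron-api.
                  return self.application(environ, start_response)
          start_response('401 Unauthorized',
                         [('WWW-Authenticate', 'Basic realm="neutron"')])
          return [b'Unauthorized']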

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1935847/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938575] Re: Misalignment with extra-dhcp-options between neutronclient & openstackclient

2021-08-03 Thread Slawek Kaplonski
This seems to me like a missing feature in the openstackclient. OSC uses
Storyboard to track issues. I opened a bug in Storyboard at
https://storyboard.openstack.org/#!/story/2009095 and I'm closing this
one here.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938575

Title:
  Misalignment with extra-dhcp-options between neutronclient &
  openstackclient

Status in neutron:
  Invalid

Bug description:
  The SetPort class from the openstack client does not support 
--extra-dhcp-option [1]
  (overcloud) $ openstack port set --extra-dhcp-option 
name=mtu,value=1700,ip-version=4  port-test-202
  usage: openstack port set [-h] [--description ]
    [--device ] [--mac-address ]
    [--device-owner ]
    [--vnic-type ] [--host ]
    [--dns-domain dns-domain] [--dns-name ]
    [--enable | --disable] [--name ]
    [--fixed-ip subnet=,ip-address=]
    [--no-fixed-ip]
    [--binding-profile ]
    [--no-binding-profile] [--qos-policy ]
    [--security-group ]
    [--no-security-group]
    [--enable-port-security | --disable-port-security]
    [--allowed-address 
ip-address=[,mac-address=]]
    [--no-allowed-address]
    [--data-plane-status ] [--tag ]
    [--no-tag]
    
  openstack port set: error: unrecognized arguments: --extra-dhcp-option 
port-test-202


  
  The UpdatePort class from the neutron client supports --extra-dhcp-opt [2]
  This is aligned with the neutron API [3]
  (overcloud) $ neutron port-update port-test-202 --extra-dhcp-opt 
opt_name=mtu,opt_value=1750,ip_version=4
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  Updated port: port-test-202
  (overcloud) $ neutron port-show port-test-202 | grep mtu
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  |   | {"opt_name": "mtu", "opt_value": "1750", 
"ip_version": 4}
   |


  [1] 
https://opendev.org/openstack/python-openstackclient/src/commit/ed87f7949ef1ef580ed71b9820e16823c0466472/openstackclient/network/v2/port.py#L703
  [2] 
https://github.com/openstack/python-neutronclient/blob/2f047b15957308e84dcb72baee3415b8bf5a470a/neutronclient/neutron/v2_0/port.py#L305
  [3] 
https://docs.openstack.org/api-ref/network/v2/?expanded=update-port-detail#update-port

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938575/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938766] [NEW] Functional tests related to ovn failing with No such file or directory: '/tmp/tmps9cyr99c/ovn_northd.log'

2021-08-03 Thread Slawek Kaplonski
Public bug reported:

Recently we have been seeing functional test failures pretty often, with
errors like:

ft1.13: 
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_maintenance.TestMaintenance.test_check_for_port_security_unknown_addresstesttools.testresult.real._StringException:
 traceback-1: {{{
Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/base.py",
 line 363, in stop
self.mech_driver.nb_ovn.ovsdb_connection.stop()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 153, in nb_ovn
self._post_fork_event.wait()
  File "/usr/lib/python3.8/threading.py", line 558, in wait
signaled = self._cond.wait(timeout)
  File "/usr/lib/python3.8/threading.py", line 302, in wait
waiter.acquire()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/semaphore.py",
 line 120, in acquire
hubs.get_hub().switch()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 365, in run
self.wait(sleep_time)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/poll.py",
 line 80, in wait
presult = self.do_poll(seconds)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/epolls.py",
 line 31, in do_poll
return self.poll.poll(seconds)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
raise TimeoutException()
fixtures._fixtures.timeout.TimeoutException
}}}

traceback-2: {{{
Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/base.py",
 line 351, in _collect_processes_logs
self._copy_log_file("%s.log" % northd_log, dst_northd)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/base.py",
 line 356, in _copy_log_file
shutil.copyfile(
  File "/usr/lib/python3.8/shutil.py", line 264, in copyfile
with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: 
'/tmp/tmps9cyr99c/ovn_northd.log'
}}}

Traceback (most recent call last):
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_maintenance.py",
 line 41, in setUp
super(_TestMaintenanceHelper, self).setUp()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/base.py",
 line 217, in setUp
self._start_idls()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/base.py",
 line 320, in _start_idls
self.mech_driver.pre_fork_initialize(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 254, in pre_fork_initialize
self._create_neutron_pg_drop()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 272, in _create_neutron_pg_drop
create_default_drop_port_group(pre_ovn_nb_api)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 1241, in create_default_drop_port_group
txn.add(nb_idl.pg_add_ports(pg_name, list(ports_with_pg)))
  File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py",
 line 274, in transaction
yield t
  File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
next(self.gen)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/ovsdbapp/api.py",
 line 110, in transaction
del self._nested_txns_map[cur_thread_id]
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/ovsdbapp/api.py",
 line 61, in __exit__
self.result = self.commit()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 65, in commit
raise result.ex
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/ovsdbapp/backend/ovs_idl/connection.py",
 line 131, in run
txn.results.put(txn.do_commit())
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 136, in do_commit
self.p

[Yahoo-eng-team] [Bug 1938788] [NEW] Validate if fixed_ip given for port isn't the same as subnet's gateway_ip

2021-08-03 Thread Slawek Kaplonski
Public bug reported:

Currently, when a new port is created with a fixed_ip given, neutron does not
validate whether that fixed_ip address is the same as the subnet's gateway
IP. That may cause problems, e.g.:

$ openstack subnet show 
| allocation_pools  | 10.0.0.2-10.0.0.254
| cidr  | 10.0.0.0/24   
| enable_dhcp   | True  
...
| gateway_ip| 10.0.0.1  


$ nova boot   --flavor test --image test  --nic  
net-id=,v4-fixed-ip=10.0.0.1  test-vm1

The instance will be created successfully, but after that, network
communication issues can happen because of the gateway IP conflict.

So Neutron should forbid creating a port with the gateway's IP address
if it is not a router port (i.e. device_owner is not set to one of the
router device owners).
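
A minimal sketch of the proposed check (the device-owner constants and the
place where the check would be called from are assumptions, not the actual
neutron implementation):

import netaddr

# Device owners for which using the gateway IP is legitimate; in neutron the
# real list would come from neutron_lib constants.
ROUTER_DEVICE_OWNERS = ('network:router_interface',
                        'network:router_gateway',
                        'network:ha_router_replicated_interface')

def validate_fixed_ip_is_not_gateway(subnet, fixed_ip, device_owner):
    if device_owner in ROUTER_DEVICE_OWNERS:
        return  # router ports may legitimately own the gateway IP
    gateway_ip = subnet.get('gateway_ip')
    if gateway_ip and netaddr.IPAddress(fixed_ip) == netaddr.IPAddress(gateway_ip):
        raise ValueError("IP %s is the gateway of subnet %s and cannot be "
                         "used as a fixed IP of a non-router port"
                         % (fixed_ip, subnet['id']))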

** Affects: neutron
 Importance: Undecided
 Assignee: Slawek Kaplonski (slaweq)
 Status: Triaged


** Tags: l3-ipam-dhcp

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

** Changed in: neutron
   Status: New => Triaged

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938788

Title:
  Validate if fixed_ip given for port isn't the same as subnet's
  gateway_ip

Status in neutron:
  Triaged

Bug description:
  Currently, when a new port is created with a fixed_ip given, neutron does
  not validate whether that fixed_ip address is the same as the subnet's
  gateway IP. That may cause problems, e.g.:

  $ openstack subnet show 
  | allocation_pools  | 10.0.0.2-10.0.0.254
  | cidr  | 10.0.0.0/24   
  | enable_dhcp   | True  
  ...
  | gateway_ip| 10.0.0.1  

  
  $ nova boot   --flavor test --image test  --nic  
net-id=,v4-fixed-ip=10.0.0.1  test-vm1

  The instance will be created successfully, but after that, network
  communication issues can happen because of the gateway IP conflict.

  So Neutron should forbid creating a port with the gateway's IP address
  if it is not a router port (i.e. device_owner is not set to one of the
  router device owners).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938788/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938910] [NEW] Duplicate default SG error when new system scopes are used

2021-08-04 Thread Slawek Kaplonski
Public bug reported:

When new system scopes are enforced, after the fix for
https://bugs.launchpad.net/neutron/+bug/1934115 is merged, there is
another problem. When a project admin creates an SG for some tenant, Neutron
tries to get the default SG for that tenant to ensure that such a default SG
exists. But as the project admin can't get resources which belong to another
tenant, the default SG is not found even if it actually is in the DB. That
ends up with an error like:

Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, 
"Duplicate entry 'c8b4c762cac744da9b442bf12140c70a' for key 
'default_security_group.PRIM>
Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
[SQL: INSERT INTO default_security_group (project_id, security_group_id) VALUES 
(%(project_id)s, %(security_group_id)s)]
Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
[parameters: {'project_id': 'c8b4c762cac744da9b442bf12140c70a', 
'security_group_id': 'b88530f8-46a8-4190-96f1-bbfd9ddac83c'}]
Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
(Background on this error at: http://sqlalche.me/e/14/gkpj)
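
The failure mode can be illustrated with the following sketch of the "ensure
default SG" pattern; _get_default_sg and _create_default_sg are hypothetical
helpers (not the actual neutron functions), and only oslo_db's
DBDuplicateEntry is taken from the log above:

from oslo_db import exception as db_exc

def ensure_default_security_group(context, project_id):
    sg = _get_default_sg(context, project_id)
    if sg:
        return sg
    # For a project admin under the new scopes the lookup above returns
    # nothing even though the row exists, so the INSERT below hits the
    # primary key and raises DBDuplicateEntry.
    try:
        return _create_default_sg(context, project_id)
    except db_exc.DBDuplicateEntry:
        # One possible handling: retry the lookup with an elevated context
        # instead of propagating the error to the API caller.
        return _get_default_sg(context.elevated(), project_id)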

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: In Progress


** Tags: api sg-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938910

Title:
  Duplicate default SG error when new system scopes are used

Status in neutron:
  In Progress

Bug description:
  When new system scopes are enforced, after the fix for
  https://bugs.launchpad.net/neutron/+bug/1934115 is merged, there is
  another problem. When a project admin creates an SG for some tenant,
  Neutron tries to get the default SG for that tenant to ensure that such
  a default SG exists. But as the project admin can't get resources which
  belong to another tenant, the default SG is not found even if it
  actually is in the DB. That ends up with an error like:

  Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, 
"Duplicate entry 'c8b4c762cac744da9b442bf12140c70a' for key 
'default_security_group.PRIM>
  Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
[SQL: INSERT INTO default_security_group (project_id, security_group_id) VALUES 
(%(project_id)s, %(security_group_id)s)]
  Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
[parameters: {'project_id': 'c8b4c762cac744da9b442bf12140c70a', 
'security_group_id': 'b88530f8-46a8-4190-96f1-bbfd9ddac83c'}]
  Aug 04 16:11:26 devstack-ubuntu-ovs neutron-server[308908]: ERROR oslo_db.api 
(Background on this error at: http://sqlalche.me/e/14/gkpj)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938910/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1936408] Re: [RFE] Neutron quota change should check available existing resources

2021-08-06 Thread Slawek Kaplonski
We discussed this RFE at today's drivers meeting 
https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-08-06-14.03.log.html#l-14
and we all agreed that the current behavior is actually a feature and we shouldn't 
change it. It also aligns with comment #6 from Brian.
So we decided to reject this RFE.

** Tags removed: rfe-triaged
** Tags added: rfe

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1936408

Title:
  [RFE] Neutron quota change should check available existing resources

Status in neutron:
  Won't Fix

Bug description:
  Neutron quota change should check available existing resources. This
  is done, for example, in Nova. When a quota resource limit is changed,
  the available resource count is checked first. If the new quota upper
  limit (lower than the previous one) is lower than the amount of
  resources in use, the quota driver should raise an exception.
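
  As a sketch of the check described above (all names here are hypothetical
  and are used only to illustrate the proposed behaviour, not an existing
  quota driver API):

  class QuotaLimitBelowExistingUsage(Exception):
      """Hypothetical exception the quota driver could raise."""

  def validate_new_quota_limit(context, project_id, resource, new_limit):
      # get_resource_usage is a hypothetical helper returning the number of
      # resources of the given type currently used by the project.
      if new_limit < 0:
          return  # a negative limit usually means "unlimited"
      in_use = get_resource_usage(context, project_id, resource)
      if new_limit < in_use:
          raise QuotaLimitBelowExistingUsage(
              "%s limit %d is below current usage %d"
              % (resource, new_limit, in_use))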

  This RFE implies a change in the Neutron quota current behaviour. Some
  users are expecting the new quota limit to be applied, regardless of
  being lower than the current resource usage.

  However, other users (Octavia) expect the quota driver to fail when
  lowering the quota limit under the existing resource usage. My
  recommendation is to use a config knob to decide the behaviour of the
  quota driver; by default, the current behaviour will prevail.

  Bugzilla reference:
  https://bugzilla.redhat.com/show_bug.cgi?id=1980728

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1936408/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1914886] Re: Trunk bridges aren't removed

2021-08-09 Thread Slawek Kaplonski
Now I can't reproduce it and it seems that the tbr bridge is deleted after a
short delay. So I'm going to close this bug.

** Changed in: neutron
   Status: Incomplete => Invalid

** Changed in: os-vif
   Status: Triaged => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1914886

Title:
  Trunk bridges aren't removed

Status in neutron:
  Invalid
Status in os-vif:
  Invalid

Bug description:
  Recently I found out on devstack that when I have a VM with a trunk port and 
some subports connected to it, the trunk bridge isn't deleted when the VM is 
migrated to another host.
  I think that the same thing will happen when the instance is simply deleted, 
because the vif object in the case of a trunk port (when ML2/OVS is used on 
Neutron's side) is objects.vif.VIFOpenVSwitch, and the _unplug method for this 
type of vif object doesn't try to delete the bridge at all.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1914886/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1939507] [NEW] Timeout while waiting for router HA state transition

2021-08-11 Thread Slawek Kaplonski
Public bug reported:

It happens in functional tests, e.g. in
neutron.tests.functional.agent.l3.test_ha_router.L3HATestCase.test_ipv6_router_advts_and_fwd_after_router_state_change_backup:

https://a1fab4006c6a1daf82f2-bd8cbc347d913753596edf9ef5797d55.ssl.cf1.rackcdn.com/786478/17/check/neutron-
functional-with-uwsgi/7250dcf/testr_results.html


Error is like:

ft1.10: 
neutron.tests.functional.agent.l3.test_ha_router.L3HATestCase.test_ipv6_router_advts_and_fwd_after_router_state_change_backuptesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 702, in wait_until_true
eventlet.sleep(sleep)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py",
 line 148, in test_ipv6_router_advts_and_fwd_after_router_state_change_backup
self._test_ipv6_router_advts_and_fwd_helper('backup',
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py",
 line 118, in _test_ipv6_router_advts_and_fwd_helper
common_utils.wait_until_true(lambda: router.ha_state == 'backup')
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 707, in wait_until_true
raise WaitTimeout(_("Timed out after %d seconds") % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939507

Title:
  Timeout while waiting for router HA state transition

Status in neutron:
  Confirmed

Bug description:
  It happens in functional tests, e.g. in
  
neutron.tests.functional.agent.l3.test_ha_router.L3HATestCase.test_ipv6_router_advts_and_fwd_after_router_state_change_backup:

  
https://a1fab4006c6a1daf82f2-bd8cbc347d913753596edf9ef5797d55.ssl.cf1.rackcdn.com/786478/17/check/neutron-
  functional-with-uwsgi/7250dcf/testr_results.html

  
  Error is like:

  ft1.10: 
neutron.tests.functional.agent.l3.test_ha_router.L3HATestCase.test_ipv6_router_advts_and_fwd_after_router_state_change_backuptesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
702, in wait_until_true
  eventlet.sleep(sleep)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py",
 line 148, in test_ipv6_router_advts_and_fwd_after_router_state_change_backup
  self._test_ipv6_router_advts_and_fwd_helper('backup',
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_ha_router.py",
 line 118, in _test_ipv6_router_advts_and_fwd_helper
  common_utils.wait_until_true(lambda: router.ha_state == 'backup')
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
707, in wait_until_true
  raise WaitTimeout(_("Timed out after %d seconds") % timeout)
  neutron.common.utils.WaitTimeout: Timed out after 60 seconds

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1939507/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1939558] [NEW] Security group log entry remains in the database after its security group is deleted

2021-08-11 Thread Slawek Kaplonski
Public bug reported:

Issue originally reported by Alex Katz for OSP-16, but I can reproduce it
also in the master branch. Original Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1988793

Description of problem:
After the security group is deleted, its corresponding log entry is still 
present in the database.


Version-Release number of selected component (if applicable):


How reproducible:
Reproduced in ml2/OVS and ml2/OVN setups


Steps to Reproduce:
# openstack security group create sg_1
# openstack network log create --resource-type security_group --resource sg_1 
--event ALL test_log
# openstack security group delete sg_1
# openstack network log show test_log

Actual results:
there is still an entry for `test_log`

Expected results:
security group deletion should fail with a clear error message, or cascade 
deletion should happen

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: api logging

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939558

Title:
   Security group log entry remains in the database after its security
  group is deleted

Status in neutron:
  Confirmed

Bug description:
  Issue originally reported by Alex Katz for OSP-16, but I can reproduce
  it also in the master branch. Original Bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1988793

  Description of problem:
  After the security group is deleted, its corresponding log entry is
  still present in the database.

  
  Version-Release number of selected component (if applicable):

  
  How reproducible:
  Reproduced in ml2/OVS and ml2/OVN setups

  
  Steps to Reproduce:
  # openstack security group create sg_1
  # openstack network log create --resource-type security_group --resource sg_1 
--event ALL test_log
  # openstack security group delete sg_1
  # openstack network log show test_log

  Actual results:
  there is still an entry for `test_log`

  Expected results:
  security group deletion should fail with a clear error message, or 
cascade deletion should happen

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1939558/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940224] [NEW] UT neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests failing with neutron-lib from master

2021-08-17 Thread Slawek Kaplonski
Public bug reported:

Since 13.08.2021 one of our unit tests,
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test_router_create_with_gwinfo_ext_ip_non_admin,
has been failing with neutron-lib from master.

Failure example:
https://5095d1cf5e3173e1d222-5acdef5dc10478cee5291df1596ec66a.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/master/openstack-
tox-py36-with-neutron-lib-master/292883e/testr_results.html

Stacktrace:

ft1.182: 
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test_router_create_with_gwinfo_ext_ip_non_admintesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/extensions/test_l3.py",
 line 749, in test_router_create_with_gwinfo_ext_ip_non_admin
self.assertEqual(exc.HTTPForbidden.code, res.status_int)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 393, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 403 != 201

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure neutron-lib ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940224

Title:
  UT neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests
  failing with neutron-lib from master

Status in neutron:
  Confirmed

Bug description:
  Since 13.08.2021 one of our unit tests,
  neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test_router_create_with_gwinfo_ext_ip_non_admin,
  has been failing with neutron-lib from master.

  Failure example:
  
https://5095d1cf5e3173e1d222-5acdef5dc10478cee5291df1596ec66a.ssl.cf5.rackcdn.com/periodic/opendev.org/openstack/neutron/master/openstack-
  tox-py36-with-neutron-lib-master/292883e/testr_results.html

  Stacktrace:

  ft1.182: 
neutron.tests.unit.services.ovn_l3.test_plugin.OVNL3ExtrarouteTests.test_router_create_with_gwinfo_ext_ip_non_admintesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/extensions/test_l3.py",
 line 749, in test_router_create_with_gwinfo_ext_ip_non_admin
  self.assertEqual(exc.HTTPForbidden.code, res.status_int)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 393, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 480, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 403 != 201

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940224/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940243] [NEW] Neutron-tempest-plugin scenario tests - oom-killer is killing mysql process

2021-08-17 Thread Slawek Kaplonski
Public bug reported:

It has been happening pretty often recently that during our scenario tests we
run out of memory and the oom-killer kills the mysql process, as it is
number 1 in memory consumption. That is causing job failures.

It seems to me that it happens when tests which use VMs with the advanced
image (ubuntu) are running. Maybe we should extract those tests
and run them as a second stage with "--concurrency 1"?

Examples of failures:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_85a/803462/2/check/neutron-
tempest-plugin-scenario-openvswitch-
iptables_hybrid/85afc13/testr_results.html

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_944/803936/1/gate/neutron-
tempest-plugin-scenario-openvswitch-
iptables_hybrid/9445e5f/testr_results.html

https://09e003a6a650f320c43d-e30f275ad83ed88289c7399adb6c5ee6.ssl.cf1.rackcdn.com/804236/5/check/neutron-
tempest-plugin-scenario-linuxbridge/770722a/testr_results.html

https://bd90009aa1732b7b8d4a-e998c5625939f617052baaae6f827bb8.ssl.cf5.rackcdn.com/797221/1/check/neutron-
tempest-plugin-scenario-openvswitch/2a7ab79/testr_results.html

https://27020bbcd4882754b192-88656c065c39ed46f44b21a92a1cea67.ssl.cf5.rackcdn.com/800445/7/check/neutron-
tempest-plugin-scenario-openvswitch-
iptables_hybrid/5e597ae/testr_results.html

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e7d/800446/9/check/neutron-
tempest-plugin-scenario-openvswitch/e7d72c9/testr_results.html

https://637b02491f0435a9a86b-ccec73fd7dde7a9826f6a9aeb49ab878.ssl.cf5.rackcdn.com/804397/1/gate/neutron-
tempest-plugin-scenario-linuxbridge/64bae23/testr_results.html

https://d1b1a7bc5606074c0db2-9f552c22a38891cd59267376a7a41496.ssl.cf5.rackcdn.com/802596/12/check/neutron-
tempest-plugin-scenario-openvswitch-
iptables_hybrid/de03f1f/testr_results.html

https://b395fe859a68f8d08e03-e48e76b6f53fcff59de7a7c1c3da6c62.ssl.cf1.rackcdn.com/804394/3/check/neutron-
tempest-plugin-scenario-openvswitch-
iptables_hybrid/98bfff3/testr_results.html

https://6ee071d84f4801a650d3-2635c9269ad2bde2592553cd282ad960.ssl.cf2.rackcdn.com/804394/3/check/neutron-
tempest-plugin-scenario-linuxbridge/a9282d0/testr_results.html


Logstash query: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Details%3A%20Unexpected%20API%20Error.%20Please%20report%20this%20at%20http%3A%2F%2Fbugs.launchpad.net%2Fnova%2F%20and%20attach%20the%20Nova%20API%20log%20if%20possible.%5C%22

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940243

Title:
  Neutron-tempest-plugin scenario tests - oom-killer is killing mysql
  process

Status in neutron:
  Confirmed

Bug description:
  It has been happening pretty often recently that during our scenario tests
  we run out of memory and the oom-killer kills the mysql process, as it is
  number 1 in memory consumption. That is causing job failures.

  It seems to me that it happens when tests which use VMs with the
  advanced image (ubuntu) are running. Maybe we should extract those
  tests and run them as a second stage with "--concurrency 1"?

  Examples of failures:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_85a/803462/2/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/85afc13/testr_results.html

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_944/803936/1/gate/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/9445e5f/testr_results.html

  
https://09e003a6a650f320c43d-e30f275ad83ed88289c7399adb6c5ee6.ssl.cf1.rackcdn.com/804236/5/check/neutron-
  tempest-plugin-scenario-linuxbridge/770722a/testr_results.html

  
https://bd90009aa1732b7b8d4a-e998c5625939f617052baaae6f827bb8.ssl.cf5.rackcdn.com/797221/1/check/neutron-
  tempest-plugin-scenario-openvswitch/2a7ab79/testr_results.html

  
https://27020bbcd4882754b192-88656c065c39ed46f44b21a92a1cea67.ssl.cf5.rackcdn.com/800445/7/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/5e597ae/testr_results.html

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e7d/800446/9/check/neutron-
  tempest-plugin-scenario-openvswitch/e7d72c9/testr_results.html

  
https://637b02491f0435a9a86b-ccec73fd7dde7a9826f6a9aeb49ab878.ssl.cf5.rackcdn.com/804397/1/gate/neutron-
  tempest-plugin-scenario-linuxbridge/64bae23/testr_results.html

  
https://d1b1a7bc5606074c0db2-9f552c22a38891cd59267376a7a41496.ssl.cf5.rackcdn.com/802596/12/check/neutron-
  tempest-plugin-scenario-openvswitch-
  iptables_hybrid/de03f1f/testr_results.html

  
https://b395fe859a68f8d08e03-e48e76b6f53fcff59de7a7c1c3da6c62.ssl.cf1.rackcdn.com/80

[Yahoo-eng-team] [Bug 1906490] Re: SSH failures in the neutron-ovn-tempest-ovs-release-ipv6-only job

2021-08-23 Thread Slawek Kaplonski
I don't see those failures anymore so I'm closing this bug now.

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1906490

Title:
  SSH failures in the neutron-ovn-tempest-ovs-release-ipv6-only job

Status in neutron:
  Fix Released

Bug description:
  Example of failures:

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d1d/764980/1/check/neutron-
  ovn-tempest-ovs-release-ipv6-only/d1d6264/testr_results.html

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_cac/764356/1/check/neutron-
  ovn-tempest-ovs-release-ipv6-only/cacd054/testr_results.html

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_08c/752795/24/check/neutron-
  ovn-tempest-ovs-release-ipv6-only/08c6400/testr_results.html

  And it seems that it is always the same test failing:
  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_reboot_server_hard

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1906490/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940905] [NEW] openstack-tox-py36-with-neutron-lib-master is broken

2021-08-24 Thread Slawek Kaplonski
Public bug reported:

Since a few days ago we have had a new issue in that UT job. See e.g.:

https://zuul.openstack.org/build/9d84fa1c75814ce4b7571c48b90de60d
https://zuul.openstack.org/build/c96a61ed36294d5fb528daa332a7aa1a
https://zuul.openstack.org/build/a2355327ba7e425a8d389dcbf7344b0f

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940905

Title:
  openstack-tox-py36-with-neutron-lib-master is broken

Status in neutron:
  Confirmed

Bug description:
  Since a few days ago we have had a new issue in that UT job. See e.g.:

  https://zuul.openstack.org/build/9d84fa1c75814ce4b7571c48b90de60d
  https://zuul.openstack.org/build/c96a61ed36294d5fb528daa332a7aa1a
  https://zuul.openstack.org/build/a2355327ba7e425a8d389dcbf7344b0f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940905/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1912672] Re: [RFE] Enable set quota per floating-ips pool.

2021-08-26 Thread Slawek Kaplonski
Due to no activity on this RFE for a few months now, I'm going to close it
for now. Feel free to reopen it and provide additional information about
it when needed.

** Changed in: neutron
   Status: New => Opinion

** Tags removed: rfe
** Tags added: rfe-postponed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912672

Title:
  [RFE] Enable set quota per floating-ips pool.

Status in neutron:
  Opinion

Bug description:
  [Request]
  In the OpenStack environment, when setting a quota for total floating IPs, 
this is defined only at the level of tenants/projects. I would like to see the 
possibility of having an option to limit the number of floating IPs per 
pool of available networks.

  [Example]
  An environment with two floating-IP pool networks - internet and intranet.
  The internet pool has fewer IPs available in total than the intranet pool, hence 
the need to have different quotas per floating-IP pool and to ensure that the 
number of IPs is not exceeded.

  From internet pool max. 1 floating-ips 
  From intranet pool max. 5 floating-ips

  [Env]
  The current environment is Bionic/Train Openstack/Juju-charms but this is 
valid for other versions of Openstack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912672/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1942190] [NEW] [Fullstack] Timeout while waiting for port to be active

2021-08-31 Thread Slawek Kaplonski
Public bug reported:

I saw similar failures at least 3 times in the last week:

https://2b735aae18d0591220ca-ba27e931a99f05bd6f205438b3cd6a3a.ssl.cf1.rackcdn.com/805031/3/check/neutron-tempest-plugin-scenario-linuxbridge/e958ebe/testr_results.html
https://838e8809c9c087f1d2df-d66d94e8460be82c507ecb0f70cc3225.ssl.cf2.rackcdn.com/798009/9/check/neutron-fullstack-with-uwsgi/73910c7/testr_results.html
https://40502112e1d4c65f94dd-005095c9da7f9886ddbc5e1cb2d2328c.ssl.cf5.rackcdn.com/806325/1/check/neutron-fullstack-with-uwsgi/09ff3ee/testr_results.html


Stacktrace:

ft1.5: 
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_uptesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 703, in wait_until_true
eventlet.sleep(sleep)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 565, in test_router_fip_qos_after_admin_state_down_up
self._router_fip_qos_after_admin_state_down_up(ha=True)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 204, in _router_fip_qos_after_admin_state_down_up
vm = self._create_net_subnet_and_vm(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 80, in _create_net_subnet_and_vm
self._create_and_attach_subnet(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 66, in _create_and_attach_subnet
self.block_until_port_status_active(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 57, in block_until_port_status_active
common_utils.wait_until_true(lambda: is_port_status_active(), sleep=1)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 708, in wait_until_true
raise WaitTimeout(_("Timed out after %d seconds") % timeout)
neutron.common.utils.WaitTimeout: Timed out after 60 seconds

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: fullstack gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942190

Title:
  [Fullstack] Timeout while waiting for port to be active

Status in neutron:
  Confirmed

Bug description:
  I saw similar failures at least 3 times in the last week:

  
https://2b735aae18d0591220ca-ba27e931a99f05bd6f205438b3cd6a3a.ssl.cf1.rackcdn.com/805031/3/check/neutron-tempest-plugin-scenario-linuxbridge/e958ebe/testr_results.html
  
https://838e8809c9c087f1d2df-d66d94e8460be82c507ecb0f70cc3225.ssl.cf2.rackcdn.com/798009/9/check/neutron-fullstack-with-uwsgi/73910c7/testr_results.html
  
https://40502112e1d4c65f94dd-005095c9da7f9886ddbc5e1cb2d2328c.ssl.cf5.rackcdn.com/806325/1/check/neutron-fullstack-with-uwsgi/09ff3ee/testr_results.html

  
  Stacktrace:

  ft1.5: 
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_uptesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
703, in wait_until_true
  eventlet.sleep(sleep)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 565, in test_router_fip_qos_after_admin_state_down_up
  self._router_fip_qos_after_admin_state_down_up(ha=True)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 204, in _router_fip_qos_after_a

[Yahoo-eng-team] [Bug 1942294] [NEW] Is allow_overlapping_ips config option still needed really?

2021-08-31 Thread Slawek Kaplonski
Public bug reported:

The help message of this option says that it "MUST be set to false when
Neutron is used with Nova's security groups". But is that really still the
case? I think it's not, and maybe we could finally remove that option?
Any opinions about it?

** Affects: neutron
 Importance: Wishlist
     Assignee: Slawek Kaplonski (slaweq)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942294

Title:
  Is allow_overlapping_ips config option still needed really?

Status in neutron:
  New

Bug description:
  The help message of this option says that it "MUST be set to false
  when Neutron is used with Nova's security groups". But is that really
  still the case? I think it's not, and maybe we could finally remove
  that option? Any opinions about it?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942294/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1942251] Re: Routes has not added after router restart

2021-09-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942251

Title:
  Routes has not added after router restart

Status in neutron:
  Fix Released

Bug description:
  Environment: OpenStack Train installed with kolla-ansible in multinode mode; 
there are two separate network nodes in the cluster. The router is configured 
with static routes on two network nodes in HA mode. Example:
  ```
  openstack router show -f json 18958d58-adf6-4998-a344-f74bc509b676
  {
    "admin_state_up": true,
    "availability_zone_hints": [],
    "availability_zones": [
  "nova"
    ],
    "created_at": "2021-08-31T09:59:57Z",
    "description": "",
    "distributed": false,
    "external_gateway_info": null,
    "flavor_id": null,
    "ha": true,
    "id": "18958d58-adf6-4998-a344-f74bc509b676",
    "interfaces_info": [
  {
    "port_id": "169839bc-39a9-4efa-b2b5-869ab45cbc33",
    "ip_address": "169.254.192.248",
    "subnet_id": "d78b2ac7-1fe5-4b70-a07e-c5044f7bfcda"
  },
  {
    "port_id": "396a8c07-bc0d-444a-8068-5434f3b8d9e9",
    "ip_address": "172.29.9.1",
    "subnet_id": "cefd937e-2811-469c-9c95-00ece7e17897"
  },
  {
    "port_id": "9716682b-174c-43e4-9b0f-48276c2c887c",
    "ip_address": "172.30.1.254",
    "subnet_id": "0e4c8974-d81b-492b-a40f-d0f7542784b9"
  },
  {
    "port_id": "de46c6f5-f6ca-4f14-8b21-2f771480c196",
    "ip_address": "169.254.195.165",
    "subnet_id": "d78b2ac7-1fe5-4b70-a07e-c5044f7bfcda"
  }
    ],
    "location": {
  "cloud": "",
  "region_name": "RegionPD9Over",
  "zone": null,
  "project": {
    "id": "954718a6deca4897afcd44dc7d2b3c8d",
    "name": "admin",
    "domain_id": null,
    "domain_name": "Default"
  }
    },
    "name": "test1_router",
    "project_id": "954718a6deca4897afcd44dc7d2b3c8d",
    "revision_number": 22,
    "routes": [
  {
    "destination": "172.178.178.0/24",
    "nexthop": "172.29.9.2"
  }
    ],
    "status": "ACTIVE",
    "tags": [],
    "updated_at": "2021-08-31T18:15:02Z"
  }
  ```
  Routes are not added automatically after the router is re-enabled or the l3 agent is restarted.
  neutron-l3-agent.log:
  ```
  2021-08-31 13:59:11.097 747 DEBUG neutron.agent.l3.router_info [-] Added 
route entry is '{'destination': '172.178.178.0/24', 'nexthop': '172.29.9.2'}' 
routes_updated 
/usr/lib/python3/dist-packages/neutron/agent/l3/router_info.py:190
  2021-08-31 13:59:11.097 747 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-18958d58-adf6-4998-a344-f74bc509b676', 'ip', 'route', 
'replace', 'to', '172.178.178.0/24', 'via', '172.29.9.2'] create_process 
/usr/lib/python3/dist-packages/neutron/agent/linux/utils.py:87
  2021-08-31 13:59:11.317 747 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: ; Stdout: ; Stderr: Error: Nexthop has invalid gateway.
  ```
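
  The "Nexthop has invalid gateway" error above suggests the static route is
  replayed before the qr- interface that owns the 172.29.9.0/24 subnet is
  configured in the namespace, so the kernel rejects the nexthop. A minimal
  sanity-check sketch in Python (a hypothetical helper, not the actual
  neutron router_info code) could look like this:

```
import ipaddress
import subprocess


def nexthop_is_reachable(namespace, nexthop):
    """Return True if the nexthop lies inside a subnet that is already
    configured on an interface in the given router namespace."""
    output = subprocess.check_output(
        ['ip', 'netns', 'exec', namespace, 'ip', '-o', 'addr', 'show'],
        encoding='utf-8')
    hop = ipaddress.ip_address(nexthop)
    for line in output.splitlines():
        fields = line.split()
        # 'ip -o addr show' prints one address per line, e.g.
        # "42: qr-xxx    inet 172.29.9.1/24 brd 172.29.9.255 scope global ..."
        for idx, field in enumerate(fields):
            if field in ('inet', 'inet6'):
                network = ipaddress.ip_interface(fields[idx + 1]).network
                if hop in network:
                    return True
    return False


# Only (re)apply the static route once its nexthop subnet is up, e.g.:
# if nexthop_is_reachable('qrouter-<router-id>', '172.29.9.2'):
#     subprocess.check_call(['ip', 'netns', 'exec', 'qrouter-<router-id>',
#                            'ip', 'route', 'replace', 'to',
#                            '172.178.178.0/24', 'via', '172.29.9.2'])
```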

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942251/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1942615] [NEW] SG shared through RBAC mechanism can't be used to spawn instances

2021-09-03 Thread Slawek Kaplonski
Public bug reported:

For some time now security groups can be shared with specific tenants
using the RBAC mechanism, but it's not possible to share an SG that way with
TARGET-PROJECT and then, as a member or admin in that TARGET-PROJECT,
spawn a VM which will use that SG:

$ openstack server create --image cirros-0.5.1-x86_64-disk --flavor m1.tiny 
--network TARGET-PROJECT-net1 --security-group sharedsg --wait testsg004
/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: 
CryptographyDeprecationWarning: int_from_bytes is deprecated, use 
int.from_bytes instead
  from cryptography.utils import int_from_bytes
/usr/lib/python3/dist-packages/secretstorage/util.py:19: 
CryptographyDeprecationWarning: int_from_bytes is deprecated, use 
int.from_bytes instead
  from cryptography.utils import int_from_bytes
Error creating server: testsg004
Error creating server


It is like that because nova in 
https://github.com/openstack/nova/blob/713b653fc0e09301a5674316a49a6f5ffd152b4c/nova/network/neutron.py#L814
 asks for security groups filtered by tenant_id, and Neutron returns only 
SGs which are owned by that tenant, without the ones shared with the tenant using 
RBAC.

Looking at neutron api-ref https://docs.openstack.org/api-
ref/network/v2/index.html?expanded=list-networks-detail,list-security-
groups-detail#security-groups-security-groups it clearly says that it
filters by tenant_id that OWNS the resource so it seems like correct
(documented) behaviour.

Now the question is: should we relax that filter and return SGs which the
project owns and which are shared with the tenant? Or should we add an
additional flag to the API, like "include_shared", which could be used by
Nova? Or maybe you have other ideas about how to solve that
issue?
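
For reference, a rough reproduction sketch with openstacksdk (cloud names and
the project setup are placeholders, and it assumes a release where
security_group is an accepted RBAC object type):

```
import openstack

admin = openstack.connect(cloud='admin')               # owner of the SG
target = openstack.connect(cloud='target-project')     # project it is shared with

sg = admin.network.find_security_group('sharedsg')

# Share the SG with the target project via RBAC.
admin.network.create_rbac_policy(
    object_type='security_group',
    object_id=sg.id,
    action='access_as_shared',
    target_project_id=target.current_project_id)

# Listing SGs filtered by project, which is effectively what Nova does,
# does not include the shared SG (the behaviour reported here)...
owned = [g.name for g in
         target.network.security_groups(project_id=target.current_project_id)]
print('sharedsg' in owned)   # False

# ...while fetching it directly by id works for the target project.
print(target.network.get_security_group(sg.id).name)   # 'sharedsg'
```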

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942615

Title:
  SG shared through RBAC mechanism can't be used to spawn instances

Status in neutron:
  Confirmed

Bug description:
  For some time now security groups can be shared with specific tenants
  using the RBAC mechanism, but it's not possible to share an SG that way with
  TARGET-PROJECT and then, as a member or admin in that TARGET-PROJECT,
  spawn a VM which will use that SG:

  $ openstack server create --image cirros-0.5.1-x86_64-disk --flavor m1.tiny 
--network TARGET-PROJECT-net1 --security-group sharedsg --wait testsg004
  /usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: 
CryptographyDeprecationWarning: int_from_bytes is deprecated, use 
int.from_bytes instead
from cryptography.utils import int_from_bytes
  /usr/lib/python3/dist-packages/secretstorage/util.py:19: 
CryptographyDeprecationWarning: int_from_bytes is deprecated, use 
int.from_bytes instead
from cryptography.utils import int_from_bytes
  Error creating server: testsg004
  Error creating server

  
  It is like that because nova in 
https://github.com/openstack/nova/blob/713b653fc0e09301a5674316a49a6f5ffd152b4c/nova/network/neutron.py#L814
 asks for security groups filtered by tenant_id, and Neutron returns only 
SGs which are owned by that tenant, without the ones shared with the tenant using 
RBAC.

  Looking at neutron api-ref https://docs.openstack.org/api-
  ref/network/v2/index.html?expanded=list-networks-detail,list-security-
  groups-detail#security-groups-security-groups it clearly says that it
  filters by tenant_id that OWNS the resource so it seems like correct
  (documented) behaviour.

  Now the question is: should we relax that filter and return SGs which the
  project owns and which are shared with the tenant? Or should we add an
  additional flag to the API, like "include_shared", which could be used by
  Nova? Or maybe you have other ideas about how to solve that
  issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942615/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1942617] [NEW] SG rules from SG shared using RBAC aren't visible

2021-09-03 Thread Slawek Kaplonski
Public bug reported:

The RBAC mechanism allows sharing an SG with a different tenant. But when a user
from such a target tenant wants to show the SG, the rules which belong to
that SG will not be shown, as they aren't shared.

Such rules are filtered out by our policy mechanism in
https://github.com/openstack/neutron/blob/c235232501a74b4e7bebdbe2efc16106a4d837ec/neutron/api/v2/base.py#L316
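
A quick way to observe the symptom from the target project, sketched with
openstacksdk (the SG id is a placeholder; the empty rule list is the behaviour
reported here, not the desired one):

```
import openstack

target = openstack.connect(cloud='target-project')

# Id of a security group shared with this project through RBAC.
sg = target.network.get_security_group('SG-ID-SHARED-VIA-RBAC')

# The SG itself is visible, but its rules are filtered out by policy,
# so this prints an empty list when run as the target project.
print([rule['id'] for rule in sg.security_group_rules])
```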

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942617

Title:
  SG rules from SG shared using RBAC aren't visible

Status in neutron:
  Confirmed

Bug description:
  The RBAC mechanism allows sharing an SG with a different tenant. But when a user
  from such a target tenant wants to show the SG, the rules which belong to
  that SG will not be shown, as they aren't shared.

  Such rules are filtered out by our policy mechanism in
  
https://github.com/openstack/neutron/blob/c235232501a74b4e7bebdbe2efc16106a4d837ec/neutron/api/v2/base.py#L316

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1942617/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1920923] Re: Rally test NeutronNetworks.create_and_update_subnets fails

2021-09-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1920923

Title:
  Rally test NeutronNetworks.create_and_update_subnets fails

Status in neutron:
  Fix Released

Bug description:
  It happens pretty often recently that the test 
NeutronNetworks.create_and_update_subnets in the neutron-rally-task job is failing.
  Examples:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_97b/681671/13/check/neutron-
  rally-
  
task/97b75cd/results/report.html#/NeutronNetworks.create_and_update_subnets/overview

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_716/780548/3/check/neutron-
  rally-
  
task/7162e07/results/report.html#/NeutronNetworks.create_and_update_subnets/failures

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_156/318542/6/check/neutron-
  rally-
  
task/1567769/results/report.html#/NeutronNetworks.create_and_update_subnets/failures

  
https://a4882fb7db4c401136c2-acad0afc1440c186988309ce1e0a4290.ssl.cf5.rackcdn.com/780916/3/check/neutron-
  rally-
  
task/390be8d/results/report.html#/NeutronNetworks.create_and_update_subnets/failures

  
https://89a28d92de3b4c8c3017-1438e56e418f3d4087dd94ee6330f7d7.ssl.cf5.rackcdn.com/780916/3/check/neutron-
  rally-
  
task/c316a5c/results/report.html#/NeutronNetworks.create_and_update_subnets/failures

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_e3f/781227/1/check/neutron-
  rally-
  
task/e3fd553/results/report.html#/NeutronNetworks.create_and_update_subnets/failures

  
https://72c824f20fd751937cae-512ca6f82afe45a8d7ced45e416cc067.ssl.cf2.rackcdn.com/781566/1/check/neutron-
  rally-
  
task/824bae8/results/report.html#/NeutronNetworks.create_and_update_subnets/output

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1920923/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1830014] Re: [RFE] add API for neutron debug tool "probe"

2021-09-10 Thread Slawek Kaplonski
We discussed that RFE again at the drivers meeting today: 
https://meetings.opendev.org/meetings/neutron_drivers/2021/neutron_drivers.2021-09-10-14.05.log.html#l-18
As the use case originally given in the spec (debugging CI) isn't really an issue 
anymore, and as we see that the implementation of that proposal would be pretty 
complex, we decided to decline that RFE.

** Changed in: neutron
   Status: New => Won't Fix

** Changed in: neutron
   Status: Won't Fix => Confirmed

** Changed in: neutron
   Importance: Undecided => Wishlist

** Tags removed: rfe-triaged
** Tags added: rfe

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1830014

Title:
  [RFE] add API for neutron debug tool "probe"

Status in neutron:
  Won't Fix

Bug description:
  Recently, due to this bug:
  https://bugs.launchpad.net/neutron/+bug/1821912
  we noticed that sometimes the guest OS is not fully UP, but the test case is 
already trying to log in to it. A simple idea is to ping it first, then try to log in. So we 
hope to find a way for tempest to verify the neutron port link state. Most 
likely, the DB resource state is not reliable. We need an independent 
mechanism to check the VM network status. Because tempest is a "blackbox" test 
that can run on any host, we cannot use the current resources of the existing 
mechanism, such as the qdhcp-namespace or qrouter-namespace, to do such a check.

  Hence this RFE. We have the neutron-debug tool, which includes a "probe" 
resource on the agent side.
  https://docs.openstack.org/neutron/latest/cli/neutron-debug.html
  We could add some API to neutron and let the proper agent add such a 
"probe" for us.
  On the agent side it would be a general agent extension; you could enable it for the 
ovs-agent, L3-agent or DHCP-agent.
  Once you have such a "probe" resource on the agent side, you can run any 
command in it.
  This will be useful for neutron CI to check the VM link state.

  So a basic workflow will be:
  1. neutron tempest create router and connected to one subnet (network-1)
  2. neutron tempest create one VM
  3. neutron tempest create one floating IP and bind it to the VM-1 port
  4. create a "probe" for network-1 via neutron API
  5. ping the VM port until reachable in the "probe" namespace
  6. ssh the VM by floating IP
  7. do the next step

  One more thing: we have now marked the "neutron-debug" tool as deprecated:
  https://bugs.launchpad.net/neutron/+bug/1583700
  But we can retain that "probe" mechanism.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1830014/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1943714] [NEW] DB session commit error in resource_registry.set_resources_dirty

2021-09-15 Thread Slawek Kaplonski
  
 
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource     self._end_session_transaction(self.session)
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource   File "/usr/lib/python3.6/site-packages/oslo_db/sqlalchemy/enginefacade.py", line 692, in _end_session_transaction
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource     session.commit()
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 1026, in commit
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource     self.transaction.commit()
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 491, in commit
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource     self._assert_active(prepared_ok=True)
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource   File "/usr/lib64/python3.6/site-packages/sqlalchemy/orm/session.py", line 294, in _assert_active
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource     % self._rollback_exception
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource sqlalchemy.exc.InvalidRequestError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (pymysql.err.OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction')
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource [SQL: DELETE FROM reservations WHERE reservations.id = %(id)s]
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource [parameters: {'id': '3644bc07-a2b6-47b2-9767-bbf89f9606e2'}]
2021-09-15 09:50:09.540 15 ERROR neutron.api.v2.resource (Background on this error at: http://sqlalche.me/e/e3q8)

I guess that this may be some race condition which can be hit under
specific conditions, and IMHO it can happen in the master branch as
well.
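
For context, transient MySQL deadlocks like this one are usually absorbed with
oslo.db's retry decorator around a function that opens its own transaction; a
minimal illustrative sketch (not the actual fix for this bug, and the function
and argument names are only examples):

```
import sqlalchemy as sa
from oslo_db import api as oslo_db_api
from oslo_db.sqlalchemy import enginefacade


@oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
def delete_reservation(context, reservation_id):
    # The whole function is re-run (with back-off) when the body raises
    # oslo_db.exception.DBDeadlock, i.e. pymysql error 1213; every attempt
    # starts a fresh transaction instead of reusing the rolled-back session.
    with enginefacade.writer.using(context) as session:
        session.execute(
            sa.text('DELETE FROM reservations WHERE id = :id'),
            {'id': reservation_id})
```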

** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: db

** Changed in: neutron
 Assignee: (unassigned) => Slawek Kaplonski (slaweq)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943714

Title:
  DB session commit error in resource_registry.set_resources_dirty

Status in neutron:
  New

Bug description:
  It seems that patch
  https://review.opendev.org/c/openstack/neutron/+/805031 introduced
  some new error during call of resource_registry.set_resources_dirty()
  in
  
https://github.com/openstack/neutron/blob/6db261962894b1667dd213b116e89246a3e54386/neutron/api/v2/base.py#L506

  I didn't saw that issue in our CI jobs on master branch but we noticed
  them in the d/s jobs on OSP-16 which is based on Train. Error is like:

  2021-09-15 09:50:09.540 15 ERROR neutron.ap

[Yahoo-eng-team] [Bug 1943930] [NEW] [DHCP] new line symbol in opt_name of extra_dhcp_opt causes dnsmasq to fail

2021-09-17 Thread Slawek Kaplonski
Public bug reported:

Bug originally reported by Alex Katz in Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=2001626

Description of problem:
The newline symbol (`\n`) can be passed into the opt_name of extra-dhcp-opt 
with a direct API call. It causes the dnsmasq process to be stuck in a restart 
loop.

The following stack trace appears in the dhcp-agent.log

 [-] Unable to enable dhcp for ee4beb3e-89e8-4d32-ba99-97f3c7a092e7.: 
AttributeError: 'NoneType' object has no attribute 'groups'
 Traceback (most recent call last): 
 
   File "/usr/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", line 
208, in call_driver 
 getattr(driver, action)(**action_kwargs)   
 
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
235, in enable  
 common_utils.wait_until_true(self._enable, timeout=300)
   File "/usr/lib/python3.6/site-packages/neutron/common/utils.py", line 703, 
in wait_until_true
 while not predicate():
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
248, in _enable
 self.spawn_process()
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
475, in spawn_process
 self._spawn_or_reload_process(reload_with_HUP=False)
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
484, in _spawn_or_reload_process
 self._output_config_files()
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
534, in _output_config_files
 self._output_opts_file()
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
1062, in _output_opts_file
 options += self._generate_opts_per_port(subnet_index_map)
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
1194, in _generate_opts_per_port
 opt.opt_name, opt.opt_value))
   File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
1247, in _format_option
 extra_tag = matches.groups()[0]
 AttributeError: 'NoneType' object has no attribute 'groups'

Example of the API request:

TOK=`openstack token issue -f value -c id`

curl -v -s -X PUT \
-H "X-Auth-Token: $TOK" \
-H "Content-Type: application/json" \
-d '{ "port": { "extra_dhcp_opts": [{ "opt_name": "yyy:test\nanother", 
"opt_value":"xxx" }]}}' \
"http://10.0.0.120:9696/v2.0/ports/acf0c1ca-56f8-452c-8b31-51ac25e54ac5";

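Until the agent-side parsing is hardened, the natural guard is to reject
control characters in extra_dhcp_opts at the API layer. A small sketch of such
a check (a hypothetical validator, not current Neutron code):

```
import re

# dnsmasq option names end up in a plain-text opts file, one entry per line,
# so anything containing CR/LF or other control characters must be rejected
# before it reaches the agent.
_VALID_OPT_NAME = re.compile(r'[^\x00-\x1f\x7f]+')


def validate_extra_dhcp_opt_name(opt_name):
    if not opt_name or not _VALID_OPT_NAME.fullmatch(opt_name):
        raise ValueError(
            "Invalid extra_dhcp_opt name %r: control characters such as "
            "'\\n' are not allowed" % opt_name)
    return opt_name


validate_extra_dhcp_opt_name("server-ip-address")       # accepted
try:
    validate_extra_dhcp_opt_name("yyy:test\nanother")   # payload from this report
except ValueError as exc:
    print(exc)
```
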
** Affects: neutron
 Importance: High
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943930

Title:
  [DHCP] new line symbol in opt_name of extra_dhcp_opt causes dnsmasq to
  fail

Status in neutron:
  Confirmed

Bug description:
  Bug originally reported by Alex Katz in Bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=2001626

  Description of problem:
  The newline symbol (`\n`) can be passed into the opt_name of extra-dhcp-opt 
with a direct API call. It causes the dnsmasq process to be stuck in a restart 
loop.

  The following stack trace appears in the dhcp-agent.log

   [-] Unable to enable dhcp for ee4beb3e-89e8-4d32-ba99-97f3c7a092e7.: 
AttributeError: 'NoneType' object has no attribute 'groups'
   Traceback (most recent call last):   
   
 File "/usr/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", line 
208, in call_driver 
   getattr(driver, action)(**action_kwargs) 
   
 File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
235, in enable  
   common_utils.wait_until_true(self._enable, timeout=300)
 File "/usr/lib/python3.6/site-packages/neutron/common/utils.py", line 703, 
in wait_until_true
   while not predicate():
 File "/usr/lib/python3.6/site-packages/neutron/agent/linux/dhcp.py", line 
248, in _enable
   self.spawn_process()
 File "/usr/lib/python3.6/site-packages/neutron/agent/linu

[Yahoo-eng-team] [Bug 1901707] Re: race condition on port binding vs instance being resumed for live-migrations

2021-09-17 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

** Tags removed: neutron-proactive-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1901707

Title:
  race condition on port binding vs instance being resumed for live-
  migrations

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ussuri series:
  Fix Released
Status in OpenStack Compute (nova) victoria series:
  Fix Released

Bug description:
  This is a separation from the discussion in this bug
  https://bugs.launchpad.net/neutron/+bug/1815989

  The comment https://bugs.launchpad.net/neutron/+bug/1815989/comments/52 goes through in
  detail the flow on a Train deployment using neutron 15.1.0 (controller) and 
15.3.0 (compute) and nova 20.4.0.

  There is a race condition where nova live-migration will wait for
  neutron to send the network-vif-plugged event, but when nova receives
  that event the live migration completes faster than the OVS L2 agent can bind
  the port on the destination compute node.

  This causes the RARP frames sent out to update the switches' ARP tables
  to fail, leaving the instance completely inaccessible after a
  live migration unless these RARP frames are sent again or egress traffic is
  initiated from the instance.

  See Sean's comments below for the view from the Nova side. The correct
  behavior should be that the port is ready for use when nova gets the
  external event, but maybe that is not possible from the neutron side;
  again, see the comments in the other bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1901707/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1943708] Re: neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle fails with Port already has an attached device

2021-09-20 Thread Slawek Kaplonski
** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943708

Title:
  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  fails with  Port already has an attached device

Status in neutron:
  In Progress
Status in tripleo:
  Triaged

Bug description:
  
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_trunk_subport_lifecycle
  is failing in periodic-tripleo-ci-centos-8-standalone-full-tempest-
  scenario-master with

  Response - Headers: {'date': 'Mon, 13 Sep 2021 18:30:12 GMT', 'server': 
'Apache', 'content-length': '1695', 'openstack-api-version': 'compute 2.1', 
'x-openstack-nova-api-version': '2.1', 'vary': 
'OpenStack-API-Version,X-OpenStack-Nova-API-Version,Accept-Encoding', 
'x-openstack-request-id': 'req-cbbe0384-f683-4bd9-990a-bbaceff70255', 
'x-compute-request-id': 'req-cbbe0384-f683-4bd9-990a-bbaceff70255', 
'connection': 'close', 'content-type': 'application/json', 'status': '200', 
'content-location': 
'http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf'}
  Body: b'{"server": {"id": "9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf", 
"name": "tempest-server-test-2120448491", "status": "ACTIVE", "tenant_id": 
"f907072324a04823b5267ebfd078f139", "user_id": 
"81a2f96324174768a1aa435f2856272c", "metadata": {}, "hostId": 
"985e26b32ef2005c617cddf445634feda09e1eb51abe1b10032b9f9b", "image": {"id": 
"9add59d5-6458-46fc-b806-ad3b39a7ebfe", "links": [{"rel": "bookmark", "href": 
"http://192.168.24.3:8774/images/9add59d5-6458-46fc-b806-ad3b39a7ebfe"}]}, 
"flavor": {"id": "48b6ea74-8aeb-4086-99ac-c4a4d18398f6", "links": [{"rel": 
"bookmark", "href": 
"http://192.168.24.3:8774/flavors/48b6ea74-8aeb-4086-99ac-c4a4d18398f6"}]}, 
"created": "2021-09-13T18:27:35Z", "updated": "2021-09-13T18:30:11Z", 
"addresses": {"tempest-TrunkTest-398369782": [{"version": 4, "addr": 
"10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": 
"fa:16:3e:4f:c4:11"}, {"version": 4, "addr": "192.168.24.162", 
"OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": 
"fa:16:3e:4f:c4:11"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": 
"self", "href": 
"http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf"}, 
{"rel": "bookmark", "href": 
"http://192.168.24.3:8774/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf"}], 
"OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": 
"nova", "config_drive": "True", "key_name": "tempest-TrunkTest-398369782", 
"OS-SRV-USG:launched_at": "2021-09-13T18:27:41.00", 
"OS-SRV-USG:terminated_at": null, "security_groups": [{"name": 
"tempest-TrunkTest-398369782"}], "OS-EXT-STS:task_state": "deleting", 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, 
"os-extended-volumes:volumes_attached": []}}'
  2021-09-13 18:30:14,588 234586 INFO [tempest.lib.common.rest_client] 
Request (TrunkTest:_run_cleanups): 404 GET 
http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf 
0.046s
  2021-09-13 18:30:14,589 234586 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'date': 'Mon, 13 Sep 2021 18:30:14 GMT', 'server': 
'Apache', 'content-length': '111', 'openstack-api-version': 'compute 2.1', 
'x-openstack-nova-api-version': '2.1', 'vary': 
'OpenStack-API-Version,X-OpenStack-Nova-API-Version', 'x-openstack-request-id': 
'req-28741ac7-529e-4f49-a12f-3e75e14e2a0e', 'x-compute-request-id': 
'req-28741ac7-529e-4f49-a12f-3e75e14e2a0e', 'connection': 'close', 
'content-type': 'application/json; charset=UTF-8', 'status': '404', 
'content-location': 
'http://192.168.24.3:8774/v2.1/servers/9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf'}
  Body: b'{"itemNotFound": {"code": 404, "message": "Instance 
9da29ae5-2784-4cb2-8cb6-cd2ec19e7fbf could not be found."}}'
  }}}

  Traceback (most recent call last):
    File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_trunk.py",
 line 266, in test_trunk_subport_lifecycle
  self.client.add_subports(vm2.trunk['id'], subports)
    File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/services/network/json/network_client.py",
 line 848, in add_subports
  return self._subports_action('add_subports', trunk_id, subports)
    File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/services/network/json/network_client.py",
 line 842, in _subports_action
  resp, body = self.put(uri, jsonutils.dumps({'sub_ports': subports}))
    File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_client.py", 
line 363, in put
  return self.request('PUT', url, extra_headers, headers, body, chunked)
    File "/usr/lib/python3.6/site-packages/tempest/lib/common/rest_cl

[Yahoo-eng-team] [Bug 1945156] [NEW] Port has IP allocated from both IPv4 and IPv6 subnets even if only one subnet is specified by user

2021-09-26 Thread Slawek Kaplonski
Public bug reported:

When a network has one IPv4 and one IPv6 subnet and the user makes a request like:

openstack port create -vv --fixed-ip subnet=private-subnet --network
private port1

REQ: curl -g -i -X POST http://192.168.122.177:9696/v2.0/ports -H 
"Content-Type: application/json" -H "User-Agent: openstacksdk/0.56.0 
keystoneauth1/4.3.1 python-requests/2.25.1 CPython/3.6.8" -H "X-Auth-Token: 
{SHA256}7d023d5b0822b23d66de0ae5ad47ee4d2519624c43c9834d292592a727b34e60" -d 
'{"port": {"name": "port1", "network_id": 
"86a96e3f-58da-4244-84f4-0d43f2b5a681", "fixed_ips": [{"subnet_id": 
"f8177ef5-d5f6-449c-ab68-a095d64bb344"}], "admin_state_up": true}}'
http://192.168.122.177:9696 "POST /v2.0/ports HTTP/1.1" 201 1212
RESP: [201] Connection: keep-alive Content-Length: 1212 Content-Type: 
application/json Date: Fri, 24 Sep 2021 08:31:32 GMT X-Openstack-Request-Id: 
req-270f9a86-f27a-4d21-bd2e-206a005bd586
RESP BODY: 
{"port":{"id":"57710d69-8589-43c1-8df8-527e3d066091","name":"port1","network_id":"86a96e3f-58da-4244-84f4-0d43f2b5a681","tenant_id":"44c069d7f3ea40dd859022b91ec5b09b","mac_address":"fa:16:3e:70:24:ff","admin_state_up":true,"status":"DOWN","device_id":"","device_owner":"","fixed_ips":[{"subnet_id":"f8177ef5-d5f6-449c-ab68-a095d64bb344","ip_address":"10.0.0.41"},{"subnet_id":"20284b4a-496c-4c3b-b7e7-8d16054bac13","ip_address":"fd95:a73f:a1b0:0:f816:3eff:fe70:24ff"}],"project_id":"44c069d7f3ea40dd859022b91ec5b09b","port_security_enabled":true,"security_groups":["0295f73d-7523-4d68-bcac-2fb3eb64a08e"],"binding:vnic_type":"normal","binding:profile":{},"binding:host_id":"","binding:vif_type":"unbound","binding:vif_details":{},"allowed_address_pairs":[],"extra_dhcp_opts":[],"description":"","dns_name":"","dns_assignment":[{"ip_address":"10.0.0.41","hostname":"host-10-0-0-41","fqdn":"host-10-0-0-41.openstackgate.local."},{"ip_address":"fd95:a73f:a1b0:0:f816:3eff:fe70:24ff","hostname":"host-fd95-a73f-a1b0-0-f816-3eff-fe70-24ff","fqdn":"host-fd95-a73f-a1b0-0-f816-3eff-fe70-24ff.openstackgate.local."}],"tags":[],"created_at":"2021-09-24T08:31:32Z","updated_at":"2021-09-24T08:31:32Z","revision_number":1}}


Neutron allocates IP addresses for the port from both the IPv4 and the IPv6 subnet.

When the network has 2 IPv4 subnets and 2 IPv6 subnets, the same query
results in only one IPv4 address allocated for the port.

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: api l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1945156

Title:
  Port has IP allocated from both IPv4 and IPv6 subnets even if only one
  subnet is specified by user

Status in neutron:
  New

Bug description:
  When a network has one IPv4 and one IPv6 subnet and the user makes a request
  like:

  openstack port create -vv --fixed-ip subnet=private-subnet --network
  private port1

  REQ: curl -g -i -X POST http://192.168.122.177:9696/v2.0/ports -H 
"Content-Type: application/json" -H "User-Agent: openstacksdk/0.56.0 
keystoneauth1/4.3.1 python-requests/2.25.1 CPython/3.6.8" -H "X-Auth-Token: 
{SHA256}7d023d5b0822b23d66de0ae5ad47ee4d2519624c43c9834d292592a727b34e60" -d 
'{"port": {"name": "port1", "network_id": 
"86a96e3f-58da-4244-84f4-0d43f2b5a681", "fixed_ips": [{"subnet_id": 
"f8177ef5-d5f6-449c-ab68-a095d64bb344"}], "admin_state_up": true}}'
  http://192.168.122.177:9696 "POST /v2.0/ports HTTP/1.1" 201 1212
  RESP: [201] Connection: keep-alive Content-Length: 1212 Content-Type: 
application/json Date: Fri, 24 Sep 2021 08:31:32 GMT X-Openstack-Request-Id: 
req-270f9a86-f27a-4d21-bd2e-206a005bd586
  RESP BODY: 
{"port":{"id":"57710d69-8589-43c1-8df8-527e3d066091","name":"port1","network_id":"86a96e3f-58da-4244-84f4-0d43f2b5a681","tenant_id":"44c069d7f3ea40dd859022b91ec5b09b","mac_address":"fa:16:3e:70:24:ff","admin_state_up":true,"status":"DOWN"

[Yahoo-eng-team] [Bug 1945283] [NEW] test_overlapping_sec_grp_rules from neutron_tempest_plugin.scenario is failing intermittently

2021-09-27 Thread Slawek Kaplonski
Public bug reported:

Examples of failure:

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_187/808179/4/check/neutron-
tempest-plugin-scenario-linuxbridge/1872c37/testr_results.html

https://3889a6aa6ea3f28e18b7-a85364018f10e4ce6159c46d9b375288.ssl.cf1.rackcdn.com/808026/7/check/neutron-
tempest-plugin-scenario-openvswitch-
iptables_hybrid/be18102/testr_results.html

Stacktrace:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 657, in test_overlapping_sec_grp_rules
self._verify_http_connection(client_ssh[0], srv_ssh, srv_ip,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 69, in _verify_http_connection
raise e
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_security_groups.py",
 line 59, in _verify_http_connection
ret = utils.call_url_remote(ssh_client, url)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 138, in call_url_remote
return ssh_client.exec_command(cmd)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/tenacity/__init__.py",
 line 333, in wrapped_f
return self(f, *args, **kw)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/tenacity/__init__.py",
 line 423, in __call__
do = self.iter(retry_state=retry_state)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/tenacity/__init__.py",
 line 360, in iter
return fut.result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 437, in result
return self.__get_result()
  File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in 
__get_result
raise self._exception
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/tenacity/__init__.py",
 line 426, in __call__
result = fn(*args, **kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 171, in exec_command
return super(Client, self).exec_command(cmd=cmd, encoding=encoding)
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 209, in exec_command
raise exceptions.SSHExecCommandFailed(
neutron_tempest_plugin.common.utils.SSHExecCommandFailed: Command 'curl 
http://10.1.0.3:3000 --retry 3 --connect-timeout 2' failed, exit status: 56, 
stderr:
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed

  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:01 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:03 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:04 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:05 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:06 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:07 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:08 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:09 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:10 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:11 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:12 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:13 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:14 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:15 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:16 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:17 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:18 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:19 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:20 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:21 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:22 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:23 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:24 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:25 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00:26 --:--:-- 0
  0 00 00 0  0  0 --:--:--  0:00

[Yahoo-eng-team] [Bug 1945285] [NEW] openstack-tox-py36-with-neutron-lib-master periodic job is failing due to missing notify attribute

2021-09-27 Thread Slawek Kaplonski
Public bug reported:

Examples of failures:

https://zuul.openstack.org/build/d2f120a01f21495c8e7502cd523f562a
https://zuul.openstack.org/build/d2f120a01f21495c8e7502cd523f562a

Failing tests:
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_delete_security_group_in_use:

ft1.19: 
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_delete_security_group_in_usetesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_securitygroups_db.py",
 line 98, in test_delete_security_group_in_use
mock.patch.object(registry, "notify") as mock_notify:
  File "/usr/lib/python3.6/unittest/mock.py", line 1247, in __enter__
original, local = self.get_original()
  File "/usr/lib/python3.6/unittest/mock.py", line 1221, in get_original
"%s does not have the attribute %r" % (target, name)
AttributeError: 
 does not have the attribute 'notify'


neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_update_security_group_statefulness_binded_conflict:


ft1.28: 
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_update_security_group_statefulness_binded_conflicttesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_securitygroups_db.py",
 line 109, in test_update_security_group_statefulness_binded_conflict
mock.patch.object(registry, "notify") as mock_notify:
  File "/usr/lib/python3.6/unittest/mock.py", line 1247, in __enter__
original, local = self.get_original()
  File "/usr/lib/python3.6/unittest/mock.py", line 1221, in get_original
"%s does not have the attribute %r" % (target, name)
AttributeError: 
 does not have the attribute 'notify'
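
registry.notify() is gone in neutron-lib master in favour of publish(), so both
the code under test and the test mocks have to move to the payload-style API. A
rough sketch of that pattern (the function and test names are made up for
illustration; it assumes neutron-lib with the callbacks publish() API is
installed):

```
from unittest import mock

from neutron_lib.callbacks import events
from neutron_lib.callbacks import registry
from neutron_lib.callbacks import resources


def notify_sg_delete(sg_id):
    # Payload-style emission: publish() with a DBEventPayload replaces the
    # removed registry.notify() call.
    registry.publish(resources.SECURITY_GROUP, events.BEFORE_DELETE, None,
                     payload=events.DBEventPayload(None, resource_id=sg_id))


def test_notify_sg_delete_emits_event():
    # Tests must patch publish(); patching "notify" now raises the
    # AttributeError shown in the traceback above.
    with mock.patch.object(registry, "publish") as mock_publish:
        notify_sg_delete("fake-sg-id")
        mock_publish.assert_called_once()
```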

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure neutron-lib unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1945285

Title:
  openstack-tox-py36-with-neutron-lib-master periodic job is failing due
  to missing notify attribute

Status in neutron:
  Confirmed

Bug description:
  Examples of failures:

  https://zuul.openstack.org/build/d2f120a01f21495c8e7502cd523f562a
  https://zuul.openstack.org/build/d2f120a01f21495c8e7502cd523f562a

  Failing tests:
  
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_delete_security_group_in_use:

  ft1.19: 
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_delete_security_group_in_usetesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_securitygroups_db.py",
 line 98, in test_delete_security_group_in_use
  mock.patch.object(registry, "notify") as mock_notify:
File "/usr/lib/python3.6/unittest/mock.py", line 1247, in __enter__
  original, local = self.get_original()
File "/usr/lib/python3.6/unittest/mock.py", line 1221, in get_original
  "%s does not have the attribute %r" % (target, name)
  AttributeError: 
 does not have the attribute 'notify'

  
  
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_update_security_group_statefulness_binded_conflict:

  
  ft1.28: 
neutron.tests.unit.db.test_securitygroups_db.SecurityGroupDbMixinTestCase.test_update_security_group_statefulness_binded_conflicttesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_securitygroups_db.py",
 line 109, in test_update_security_group_statefulness_binded_conflict
  mock.patch.object(registry, "notify") as mock_notify:
File "/usr/lib/python3.6/unittest/mock.py", line 1247, in __enter__
  original, local = self.get_original()
File "/usr/lib/python3.6/unittest/mock.py", line 1221, in get_original
  "%s does not have the attribute %r" % (target, name)
  AttributeError: 
 does not have the attribute 'notify'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1945285/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lis

[Yahoo-eng-team] [Bug 1945156] Re: Port has IP allocated from both IPv4 and IPv6 subnets even if only one subnet is specified by user

2021-09-30 Thread Slawek Kaplonski
Thx Liu and Bence. You are right. It is always like that in the case of
SLAAC and DHCPv6-stateless subnets:
https://github.com/openstack/neutron/blob/bd7b40b4f871586068039e6014dbb9081d1630e8/neutron/db/ipam_pluggable_backend.py#L274
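
In other words, auto-addressed IPv6 subnets are always added to the allocation
regardless of the fixed_ips requested by the user. A simplified sketch of that
rule (illustrative only, not the actual ipam_pluggable_backend code):

```
AUTO_ADDRESS_MODES = ('slaac', 'dhcpv6-stateless')


def subnets_to_allocate_from(requested_subnet_ids, network_subnets):
    """Return the subnets a new port gets addresses from.

    requested_subnet_ids: subnet ids listed in the port's fixed_ips.
    network_subnets: all subnets of the network, as dicts with at least
                     'id' and 'ipv6_address_mode' keys.
    """
    selected = [s for s in network_subnets if s['id'] in requested_subnet_ids]
    # Auto-address subnets (SLAAC / DHCPv6-stateless) are added on top of
    # whatever was requested, which is why the port above also got an
    # IPv6 address.
    selected += [s for s in network_subnets
                 if s['id'] not in requested_subnet_ids
                 and s.get('ipv6_address_mode') in AUTO_ADDRESS_MODES]
    return selected
```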

** Changed in: neutron
   Status: New => Incomplete

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1945156

Title:
  Port has IP allocated from both IPv4 and IPv6 subnets even if only one
  subnet is specified by user

Status in neutron:
  Invalid

Bug description:
  When a network has one IPv4 and one IPv6 subnet and the user makes a request
  like:

  openstack port create -vv --fixed-ip subnet=private-subnet --network
  private port1

  REQ: curl -g -i -X POST http://192.168.122.177:9696/v2.0/ports -H 
"Content-Type: application/json" -H "User-Agent: openstacksdk/0.56.0 
keystoneauth1/4.3.1 python-requests/2.25.1 CPython/3.6.8" -H "X-Auth-Token: 
{SHA256}7d023d5b0822b23d66de0ae5ad47ee4d2519624c43c9834d292592a727b34e60" -d 
'{"port": {"name": "port1", "network_id": 
"86a96e3f-58da-4244-84f4-0d43f2b5a681", "fixed_ips": [{"subnet_id": 
"f8177ef5-d5f6-449c-ab68-a095d64bb344"}], "admin_state_up": true}}'
  http://192.168.122.177:9696 "POST /v2.0/ports HTTP/1.1" 201 1212
  RESP: [201] Connection: keep-alive Content-Length: 1212 Content-Type: 
application/json Date: Fri, 24 Sep 2021 08:31:32 GMT X-Openstack-Request-Id: 
req-270f9a86-f27a-4d21-bd2e-206a005bd586
  RESP BODY: 
{"port":{"id":"57710d69-8589-43c1-8df8-527e3d066091","name":"port1","network_id":"86a96e3f-58da-4244-84f4-0d43f2b5a681","tenant_id":"44c069d7f3ea40dd859022b91ec5b09b","mac_address":"fa:16:3e:70:24:ff","admin_state_up":true,"status":"DOWN","device_id":"","device_owner":"","fixed_ips":[{"subnet_id":"f8177ef5-d5f6-449c-ab68-a095d64bb344","ip_address":"10.0.0.41"},{"subnet_id":"20284b4a-496c-4c3b-b7e7-8d16054bac13","ip_address":"fd95:a73f:a1b0:0:f816:3eff:fe70:24ff"}],"project_id":"44c069d7f3ea40dd859022b91ec5b09b","port_security_enabled":true,"security_groups":["0295f73d-7523-4d68-bcac-2fb3eb64a08e"],"binding:vnic_type":"normal","binding:profile":{},"binding:host_id":"","binding:vif_type":"unbound","binding:vif_details":{},"allowed_address_pairs":[],"extra_dhcp_opts":[],"description":"","dns_name":"","dns_assignment":[{"ip_address":"10.0.0.41","hostname":"host-10-0-0-41","fqdn":"host-10-0-0-41.openstackgate.local."},{"ip_address":"fd95:a73f:a1b0:0:f816:3eff:fe70:24ff","hostname":"host-fd95-a73f-a1b0-0-f816-3eff-fe70-24ff","fqdn":"host-fd95-a73f-a1b0-0-f816-3eff-fe70-24ff.openstackgate.local."}],"tags":[],"created_at":"2021-09-24T08:31:32Z","updated_at":"2021-09-24T08:31:32Z","revision_number":1}}

  
  Neutron allocates IP addresses for the port from both the IPv4 and the IPv6 subnet.

  When the network has 2 IPv4 subnets and 2 IPv6 subnets, the same query
  results in only one IPv4 address allocated for the port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1945156/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813709] Re: [L2][scale issue] ovs-agent dump-flows takes a lots of time

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813709

Title:
  [L2][scale issue] ovs-agent dump-flows takes a lots of time

Status in neutron:
  Fix Released

Bug description:
  The ovs-agent "clean stale flows" action dumps all the bridge flows first. When 
the number of subnets or security group ports reaches 2000+, this becomes really 
time-consuming.
  And sometimes this dump action can also fail; then the ovs-agent dumps 
again, and things get worse.
  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813709/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813708] Re: [L2][scale issue] ovs-agent has too many flows to do trouble shooting

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813708

Title:
  [L2][scale issue] ovs-agent has too many flows to do trouble shooting

Status in neutron:
  Fix Released

Bug description:
  When the number of subnets or security group ports reaches 2000+, it is
  really hard to do troubleshooting if one VM loses its connection.
  The flow tables are almost unreadable (reaching 30k+ flows). We have no
  way to check the ovs-agent flow status. And restarting the L2 agent does
  not help anymore, since we have so many issues at scale.

  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813708/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813707] Re: [L2][scale issue] ovs-agent restart costs too long time

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813707

Title:
  [L2][scale issue] ovs-agent restart costs too long time

Status in neutron:
  Fix Released

Bug description:
  When the number of subnets or security group ports reaches 2000+, the ovs-agent will 
take 15-40+ minutes to restart.
  During this restart time, the agent will not process any port, i.e. a VM booting 
on this host will not get its L2 flows established.
  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813707/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813706] Re: [L2][scale issue] ovs-agent failed to restart

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813706

Title:
  [L2][scale issue] ovs-agent failed to restart

Status in neutron:
  Fix Released

Bug description:
  When the number of subnets or security group ports reaches 2000+, the ovs-agent 
fails to restart and does fullsync infinitely.
  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813706/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813705] Re: [L2][scale issue] local connection to ovs-vswitchd was drop or timeout

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813705

Title:
  [L2][scale issue] local connection to ovs-vswitchd was drop or timeout

Status in neutron:
  Fix Released

Bug description:
  When the number of subnets or security group ports reaches 2000+, the ovs-agent 
connection to ovs-vswitchd may get lost, dropped or time out during restart.
  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813705/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813704] Re: [L2][scale issue] RPC timeout during ovs-agent restart

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813704

Title:
  [L2][scale issue] RPC timeout during ovs-agent restart

Status in neutron:
  Fix Released

Bug description:
  When the number of ports under one subnet or security group reaches 2000+, the 
ovs-agent will always get RPC timeouts during restart.
  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813704/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1813712] Re: [L2][scale issue] ovs-agent has multipe cookies flows (stale flows)

2021-10-01 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1813712

Title:
  [L2][scale issue] ovs-agent has multipe cookies flows (stale flows)

Status in neutron:
  Fix Released

Bug description:
  When the number of subnets or security group ports reaches 2000+, there are many 
stale flows.
  A basic exception procedure:
  (1) ovs-agent dump-flows
  (2) ovs-agent delete some flows
  (3) ovs-agent install new flows (with new cookies)
  (4) an exception is raised in (2) or (3), such as (bug #1813705)
  (5) ovs-agent will do a full sync again, then go back to (1)
  Finally it ends up with many stale flows installed, which sometimes brings the data 
plane down.

  This is a subproblem of bug #1813703, for more information, please see the 
summary:
  https://bugs.launchpad.net/neutron/+bug/1813703

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1813712/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946186] [NEW] Fullstack test neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_up failing intermittently

2021-10-06 Thread Slawek Kaplonski
Public bug reported:

Failure example:
https://8d5ef598bba78b1573a4-7dfe055f87ad090ed1b50745545f409a.ssl.cf1.rackcdn.com/805391/10/check/neutron-
fullstack-with-uwsgi/6e03086/testr_results.html

Stacktrace:

ft1.6: 
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_uptesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 703, in wait_until_true
eventlet.sleep(sleep)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 60 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 565, in test_router_fip_qos_after_admin_state_down_up
self._router_fip_qos_after_admin_state_down_up(ha=True)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 208, in _router_fip_qos_after_admin_state_down_up
vm.block_until_ping(external_vm.ip)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/machine_fixtures.py",
 line 67, in block_until_ping
utils.wait_until_true(
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", 
line 707, in wait_until_true
raise exception
neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply 
obtained from IP address 240.0.83.1


In the test's logs there are "Network is unreachable" errors, so maybe it 
is some issue with the test itself.

** Affects: neutron
 Importance: Medium
 Status: Confirmed


** Tags: fullstack l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946186

Title:
  Fullstack test
  
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_up
  failing intermittently

Status in neutron:
  Confirmed

Bug description:
  Failure example:
  
https://8d5ef598bba78b1573a4-7dfe055f87ad090ed1b50745545f409a.ssl.cf1.rackcdn.com/805391/10/check/neutron-
  fullstack-with-uwsgi/6e03086/testr_results.html

  Stacktrace:

  ft1.6: 
neutron.tests.fullstack.test_l3_agent.TestHAL3Agent.test_router_fip_qos_after_admin_state_down_uptesttools.testresult.real._StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
703, in wait_until_true
  eventlet.sleep(sleep)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 565, in test_router_fip_qos_after_admin_state_down_up
  self._router_fip_qos_after_admin_state_down_up(ha=True)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_l3_agent.py",
 line 208, in _router_fip_qos_after_admin_state_down_up
  vm.block_until_ping(external_vm.ip)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/machine_fixtures.py",
 line 67, in block_until_ping
  utils.wait_until_true(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
707, in wait_until_true
  raise exception
  neutron.tests.common.machine_fixtures.FakeMachineException: No ICMP reply 
obtained from IP address 240.0.83.1

  
  In the test's logs there are "Network is unreachable" errors, so maybe it 
is some issue with the test itself.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946186/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946187] [NEW] HA routers not going to be "primary" at all

2021-10-06 Thread Slawek Kaplonski
Public bug reported:

It happens in the CI from time to time that many tests are failing
because the router is in the backup state all the time and is never
transitioned to be primary on the node.

Examples of the failure:
https://3142cc95d58eb8a4ee07-043369ac575bbfe29758366f4ba498a1.ssl.cf1.rackcdn.com/765072/8/check/neutron-tempest-plugin-scenario-openvswitch/499b47d/controller/logs/screen-q-l3.txt

https://6599da62140c9583e14a-cd7f53ffbb0b86c69deae453da021fe8.ssl.cf5.rackcdn.com/811746/4/check/neutron-
tempest-plugin-scenario-openvswitch/3cafcd7/testr_results.html

https://zuul.opendev.org/t/openstack/build/75c056464b6f445ebde18c1b07f5bcce


Example of stacktrace:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
eventlet.sleep(sleep)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
hub.switch()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
return self.greenlet.switch()
eventlet.timeout.Timeout: 600 seconds

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_basic.py",
 line 35, in test_basic_instance
self.setup_network_and_server()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 281, in setup_network_and_server
router = self.create_router_by_client(**kwargs)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 209, in create_router_by_client
cls._wait_for_router_ha_active(router['id'])
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 228, in _wait_for_router_ha_active
utils.wait_until_true(_router_active_on_l3_agent,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 84, in wait_until_true
raise exception
tempest.lib.exceptions.TimeoutException: Request timed out
Details: Router 1c4ce297-5a04-4794-9720-20fdec9ca4e5 is not active on any of 
the L3 agents

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946187

Title:
  HA routers not going to be "primary" at all

Status in neutron:
  Confirmed

Bug description:
  It happens in the CI from time to time that many tests are failing
  because the router stays in the backup state and is never
  transitioned to primary on the node.

  Examples of the failure:
  
https://3142cc95d58eb8a4ee07-043369ac575bbfe29758366f4ba498a1.ssl.cf1.rackcdn.com/765072/8/check/neutron-tempest-plugin-scenario-openvswitch/499b47d/controller/logs/screen-q-l3.txt

  
https://6599da62140c9583e14a-cd7f53ffbb0b86c69deae453da021fe8.ssl.cf5.rackcdn.com/811746/4/check/neutron-
  tempest-plugin-scenario-openvswitch/3cafcd7/testr_results.html

  https://zuul.opendev.org/t/openstack/build/75c056464b6f445ebde18c1b07f5bcce

  
  Example of stacktrace:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 600 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_basic.py",
 line 35, in test_basic_instance
  self.setup_network_and_server()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 281, in setup_network_and_server
  router = self.create_router_by_client(**kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 209, in create_router_by_client
  cls._wait_for_router_ha_active(router['id'])
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 228, in _wait_for_router_ha_active
  utils.wait_until_true(_router_active_on_l3_agent,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 lin

[Yahoo-eng-team] [Bug 1946479] [NEW] [OVN migration] qr- interfaces and trunk subports aren't cleaned after migration to ML2/OVN

2021-10-08 Thread Slawek Kaplonski
Public bug reported:

After migration from ML2/OVS+DVR to ML2/OVN, qr- interfaces (router ports)
aren't cleaned up on the nodes even though they aren't needed at all in the
ML2/OVN case.
The same applies to trunk subports (the spi- and spt- patch ports which connect
br-int and the trunk bridge).
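
As a rough illustration only (this is not the in-progress fix), a manual
cleanup along these lines could look like the sketch below. It assumes the
standard ML2/OVS naming prefixes mentioned above (qr- for router ports,
spi-/spt- for the trunk patch ports) and the ovs_lib helper from the neutron
tree; ports on the trunk bridges themselves would need the same treatment per
trunk bridge.

from neutron.agent.common import ovs_lib

# Prefixes of leftover ML2/OVS ports, taken from the report above.
STALE_PREFIXES = ('qr-', 'spi-', 'spt-')


def cleanup_stale_ports(bridge_name='br-int'):
    # Illustrative sketch only: delete any port on the bridge whose name
    # matches one of the stale prefixes.
    bridge = ovs_lib.OVSBridge(bridge_name)
    for port in bridge.get_port_name_list():
        if port.startswith(STALE_PREFIXES):
            bridge.delete_port(port)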

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: In Progress


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946479

Title:
  [OVN migration] qr- interfaces and trunk subports aren't cleaned after
  migration to ML2/OVN

Status in neutron:
  In Progress

Bug description:
  After migration from ML2/OVS+DVR to ML2/OVN, qr- interfaces (router ports)
aren't cleaned up on the nodes even though they aren't needed at all in the
ML2/OVN case.
  The same applies to trunk subports (the spi- and spt- patch ports which
connect br-int and the trunk bridge).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946479/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1946187] Re: HA routers not going to be "primary" at all

2021-10-13 Thread Slawek Kaplonski
*** This bug is a duplicate of bug 1944201 ***
https://bugs.launchpad.net/bugs/1944201

I checked the logs from the failed job and in fact this is a duplicate of the
neutron-ovs-agent crash issue https://bugs.launchpad.net/neutron/+bug/1944201
Routers aren't transitioned to "primary" because neutron-ovs-agent is dead in
such jobs, thus the HA ports of the routers are DOWN. There is no
neutron-l3-agent issue in that case at all.
I'm closing this bug as a duplicate of
https://bugs.launchpad.net/neutron/+bug/1944201 and hopefully it will be fixed
with the new os-ken version.

** This bug has been marked a duplicate of bug 1944201
   neutron-openvswitch-agent crashes on start with firewall config of br-int

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946187

Title:
  HA routers not going to be "primary" at all

Status in neutron:
  Confirmed

Bug description:
  It happens in the CI from time to time that many tests are failing
  because the router stays in the backup state and is never
  transitioned to primary on the node.

  Examples of the failure:
  
https://3142cc95d58eb8a4ee07-043369ac575bbfe29758366f4ba498a1.ssl.cf1.rackcdn.com/765072/8/check/neutron-tempest-plugin-scenario-openvswitch/499b47d/controller/logs/screen-q-l3.txt

  
https://6599da62140c9583e14a-cd7f53ffbb0b86c69deae453da021fe8.ssl.cf5.rackcdn.com/811746/4/check/neutron-
  tempest-plugin-scenario-openvswitch/3cafcd7/testr_results.html

  https://zuul.opendev.org/t/openstack/build/75c056464b6f445ebde18c1b07f5bcce

  
  Example of stacktrace:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 80, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 600 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_basic.py",
 line 35, in test_basic_instance
  self.setup_network_and_server()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 281, in setup_network_and_server
  router = self.create_router_by_client(**kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 209, in create_router_by_client
  cls._wait_for_router_ha_active(router['id'])
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/base.py",
 line 228, in _wait_for_router_ha_active
  utils.wait_until_true(_router_active_on_l3_agent,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/utils.py",
 line 84, in wait_until_true
  raise exception
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: Router 1c4ce297-5a04-4794-9720-20fdec9ca4e5 is not active on any of 
the L3 agents

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946187/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1944201] Re: neutron-openvswitch-agent crashes on start with firewall config of br-int

2021-10-19 Thread Slawek Kaplonski
Reopening. It still happens, even with os-ken 2.2.0. I also noticed that it
may happen not only during agent initialization but also later. See
e.g.:

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_2b1/812658/2/check/neutron-
ovs-tempest-multinode-full/2b1d77b/controller/logs/screen-q-agt.txt

or

https://e4aad2fd4ad948b107a3-f332441e20465e1f05e3f334ceb928b5.ssl.cf2.rackcdn.com/812658/2/check/neutron-
ovs-tempest-slow/2806209/compute1/logs/screen-q-agt.txt

** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1944201

Title:
  neutron-openvswitch-agent crashes on start with firewall config of br-
  int

Status in neutron:
  Confirmed

Bug description:
  In upstream CI, Ironic jobs have been encountering failures where we
  never find the networking to be stood up by neutron. Investigation
  into what was going on led us to finding the neutron-openvswitch-agent
  in failed state, exited due to RuntimeError, just a few seconds after
  the service was started.

  neutron-openvswitch-agent[78787]: DEBUG neutron.agent.securitygroups_rpc 
[None req-b18a79b7-7258-44f0-9a69-fa92a490bc26 None None] Init firewall 
settings (driver=openvswitch) {{(pid=78787) init_firewall 
/opt/stack/neutron/neutron/agent/securitygroups_rpc.py:118}}
  neutron-openvswitch-agent[78787]: DEBUG ovsdbapp.backend.ovs_idl.transaction 
[-] Running txn n=1 command(idx=0): DbAddCommand(table=Bridge, record=br-int, 
column=protocols, values=('OpenFlow10', 'OpenFlow11', 'OpenFlow12', 
'OpenFlow13', 'OpenFlow14')) {{(pid=78787) do_commit 
/usr/local/lib/python3.8/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:90}}
  neutron-openvswitch-agent[78787]: ERROR OfctlService [-] unknown dpid 
90695823979334
  neutron-openvswitch-agent[78787]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch [None 
req-b18a79b7-7258-44f0-9a69-fa92a490bc26 None None] ofctl request 
version=None,msg_type=None,msg_len=None,xid=None,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=71,type=1)
 error Datapath Invalid 90695823979334: 
os_ken.app.ofctl.exception.InvalidDatapath: Datapath Invalid 90695823979334
  neutron-openvswitch-agent[78787]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None 
req-b18a79b7-7258-44f0-9a69-fa92a490bc26 None None] ofctl request 
version=None,msg_type=None,msg_len=None,xid=None,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=71,type=1)
 error Datapath Invalid 90695823979334 agent terminated!: RuntimeError: ofctl 
request 
version=None,msg_type=None,msg_len=None,xid=None,OFPFlowStatsRequest(cookie=0,cookie_mask=0,flags=0,match=OFPMatch(oxm_fields={}),out_group=4294967295,out_port=4294967295,table_id=71,type=1)
 error Datapath Invalid 90695823979334
  systemd[1]: devstack@q-agt.service: Main process exited, code=exited, 
status=1/FAILURE
  systemd[1]: devstack@q-agt.service: Failed with result 'exit-code'.

  Originally, this was thought to be related to
  https://bugs.launchpad.net/neutron/+bug/1817022, however this is upon
  service startup on a relatively low load machine where the only action
  really is truly just neutron starting at that time. Also, starting,
  the connections have not been able to exist long enough for inactivity
  idle triggers to occur.

  Investigation into allowed us to identify the general path of what is
  occurring, yet why we don't understand, at least in the Ironic
  community.

  init_firewall() invocation: 
https://github.com/openstack/neutron/blob/79445f12be3a9ca892672fe0e016336ef60877a2/neutron/agent/securitygroups_rpc.py#L70
  Firewall class launch: 
https://github.com/openstack/neutron/blob/79445f12be3a9ca892672fe0e016336ef60877a2/neutron/agent/securitygroups_rpc.py#L121

  As the default for the firewall driver ends up sending us into
  openvswitch's firewall code:

  
https://github.com/openstack/neutron/blob/79445f12be3a9ca892672fe0e016336ef60877a2/neutron/agent/linux/openvswitch_firewall/firewall.py#L548
  
https://github.com/openstack/neutron/blob/79445f12be3a9ca892672fe0e016336ef60877a2/neutron/agent/linux/openvswitch_firewall/firewall.py#L628

  Which eventually ends up in
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py#L91
  where it raises a RuntimeError and the service exits out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1944201/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1947993] [NEW] Non HA router - missing iptables rule for redirect metadata queries to haproxy

2021-10-21 Thread Slawek Kaplonski
Public bug reported:

In the case of non-HA routers (DVR and legacy), neutron-l3-agent sends
AFTER_CREATE and AFTER_UPDATE notifications for the router. The metadata
driver is subscribed to those notifications to prepare haproxy in the
router's namespace:
https://github.com/openstack/neutron/blob/8353c2adba08f9e7d5ed61589daef81aaf275fb3/neutron/agent/metadata/driver.py#L281
and
https://github.com/openstack/neutron/blob/8353c2adba08f9e7d5ed61589daef81aaf275fb3/neutron/agent/metadata/driver.py#L294

The difference between those two functions is that after_router_added calls
apply_metadata_nat_rules() to configure the NAT rules in iptables in the
qrouter namespace.
In the after_router_update function the NAT rules aren't created.

And that can cause an issue when processing the router in
_process_added_router() fails:
https://github.com/openstack/neutron/blob/8353c2adba08f9e7d5ed61589daef81aaf275fb3/neutron/agent/l3/agent.py#L626
 thus the AFTER_CREATE notification for the router will not be sent and the
NAT rules will not be created.
The router will be processed again in the next iteration by the L3 agent, but
this time router_info is already in the agent's router_info cache, so it will
be treated as an updated router. Because of that, haproxy will be started but
the NAT rules will never be created and metadata for instances will not be
available.
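
A minimal, hedged sketch of the fix direction this suggests (not the actual
neutron code) would be to apply the metadata NAT redirect rules on router
AFTER_UPDATE as well, so a router whose first processing pass failed still
gets the redirect on the retry. apply_metadata_nat_rules() and the driver
module are named above; the callback wiring and payload layout below are
illustrative assumptions.

from neutron.agent.metadata.driver import MetadataDriver
from neutron_lib.callbacks import events, registry, resources


def ensure_metadata_nat_rules(resource, event, trigger, payload=None):
    # Called for both AFTER_CREATE and AFTER_UPDATE; re-adding the same
    # iptables REDIRECT rule is effectively a no-op, so doing it on every
    # update is safe.
    router = payload.latest_state  # assumed payload layout
    MetadataDriver.apply_metadata_nat_rules(router)  # signature assumed


def subscribe():
    for event in (events.AFTER_CREATE, events.AFTER_UPDATE):
        registry.subscribe(ensure_metadata_nat_rules, resources.ROUTER, event)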

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: New


** Tags: l3-dvr-backlog

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1947993

Title:
  Non HA router - missing iptables rule for redirect metadata queries to
  haproxy

Status in neutron:
  New

Bug description:
  In the case of non-HA routers (DVR and legacy), neutron-l3-agent sends
  AFTER_CREATE and AFTER_UPDATE notifications for the router. The metadata
  driver is subscribed to those notifications to prepare haproxy in the
  router's namespace:
  
https://github.com/openstack/neutron/blob/8353c2adba08f9e7d5ed61589daef81aaf275fb3/neutron/agent/metadata/driver.py#L281
  and
  
https://github.com/openstack/neutron/blob/8353c2adba08f9e7d5ed61589daef81aaf275fb3/neutron/agent/metadata/driver.py#L294

  The difference between those two functions is that after_router_added calls
apply_metadata_nat_rules() to configure the NAT rules in iptables in the
qrouter namespace.
  In the after_router_update function the NAT rules aren't created.

  And that can cause an issue when processing the router in
_process_added_router() fails:
https://github.com/openstack/neutron/blob/8353c2adba08f9e7d5ed61589daef81aaf275fb3/neutron/agent/l3/agent.py#L626
 thus the AFTER_CREATE notification for the router will not be sent and the
NAT rules will not be created.
  The router will be processed again in the next iteration by the L3 agent,
but this time router_info is already in the agent's router_info cache, so it
will be treated as an updated router. Because of that, haproxy will be started
but the NAT rules will never be created and metadata for instances will not be
available.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1947993/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1948452] Re: qvo ports are not removed correctly when an instance is deleted immediately after creation

2021-10-25 Thread Slawek Kaplonski
AFAIR qvo and qvb ports are created and deleted by os-vif (nova) and not
by neutron. So I don't think this is really a Neutron bug and I'm
moving it to os-vif for now.

** Also affects: os-vif
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1948452

Title:
  qvo ports are not removed correctly when an instance is deleted
  immediately after creation

Status in os-vif:
  New

Bug description:
  Release version:
  Neutron: stable/Ussuri
  (Open vSwitch) 2.11.1


  on searching the issue i see this raised earlier on Mirantis Openstack,
  https://bugs.launchpad.net/mos/+bug/1711637

  As mentioned in the link above, this is leading to excessive logging
  at OVS,

  2021-10-22T03:40:02.728Z|311606|coverage(revalidator631)|INFO|Skipping 
details of duplicate event coverage for hash=4a9bb48e
  2021-10-22T03:43:50.650Z|123897195|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:51.017Z|123897196|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:51.843Z|123897197|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:52.208Z|123897198|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:52.487Z|123897199|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:52.856Z|123897200|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:54.179Z|123897201|bridge|WARN|could not open network device 
qvoda000ab6-48 (No such device)
  2021-10-22T03:43:54.622Z|123897202|poll_loop|INFO|Dropped 17 log messages in 
last 4728 seconds (most recently, 4726 seconds ago) due to excessive rate


  ovs-vsctl shows some 100+ ports with vlan tag 4095.

To manage notifications about this bug go to:
https://bugs.launchpad.net/os-vif/+bug/1948452/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1948642] [NEW] Configuration of the ovs controller by neutron-ovs-agent isn't idempotent

2021-10-25 Thread Slawek Kaplonski
Public bug reported:

When neutron-ovs-agent is restarted, or does a full sync e.g. after
recovering the connectivity to rabbitmq, it sets the controller for the
bridges in Open vSwitch. That operation isn't idempotent: even if the
controller was already created, it will create a new one. That, in some
cases, can lead to a short (1-2 seconds) traffic loss in the dataplane. We
observed that by running a UDP traffic test between 2 VMs:

- server A run:
sudo iperf3 -s -p 5000 -i 1

- server B run:
iperf3 -c 192.168.155.143 -p 5000 -u --length 1400 -b 1M -i 1 -t 3001

When neutron-ovs-agent on the node where server B runs is restarted,
packet loss can be observed in iperf, like:

[  5]  44.00-45.00  sec   122 KBytes   997 Kbits/sec  0.028 ms  0/89 (0%)  
iperf3: OUT OF ORDER - incoming packet = 4047 and received packet = 4048 AND SP 
= 5
iperf3: OUT OF ORDER - incoming packet = 4046 and received packet = 4049 AND SP 
= 5
iperf3: OUT OF ORDER - incoming packet = 4051 and received packet = 4053 AND SP 
= 5
iperf3: OUT OF ORDER - incoming packet = 4052 and received packet = 4054 AND SP 
= 5
[  5]  45.00-46.00  sec   123 KBytes  1.01 Mbits/sec  0.021 ms  4/90 (4.4%)  
[  5]  46.00-47.00  sec   122 KBytes   997 Kbits/sec  0.028 ms  0/89 (0%)  
iperf3: OUT OF ORDER - incoming packet = 4218 and received packet = 4219 AND SP 
= 5
iperf3: OUT OF ORDER - incoming packet = 4220 and received packet = 4221 AND SP 
= 5
iperf3: OUT OF ORDER - incoming packet = 4222 and received packet = 4223 AND SP 
= 5
[  5]  47.00-48.00  sec   122 KBytes   997 Kbits/sec  0.024 ms  3/89 (3.4%)
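
For illustration, an idempotent variant could first read the currently
configured controller targets and only rewrite them when they actually
differ, e.g. something like the sketch below. It assumes the
get_controller() / set_controller() helpers of
neutron.agent.common.ovs_lib.OVSBridge; this is not the real agent code.

from neutron.agent.common import ovs_lib


def ensure_controller(bridge_name, targets):
    # Only touch the Bridge's controller record when it differs from what we
    # want, so restarts / full syncs don't recreate it and flap the dataplane.
    bridge = ovs_lib.OVSBridge(bridge_name)
    if set(bridge.get_controller()) == set(targets):
        return  # already configured as desired, nothing to do
    bridge.set_controller(targets)

# e.g. ensure_controller('br-int', ['tcp:127.0.0.1:6633'])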

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: ovs

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1948642

Title:
  Configuration of the ovs controller by neutron-ovs-agent isn't
  idempotent

Status in neutron:
  Confirmed

Bug description:
  When neutron-ovs-agent is restarted, or does a full sync e.g. after
  recovering the connectivity to rabbitmq, it sets the controller for the
  bridges in Open vSwitch. That operation isn't idempotent: even if the
  controller was already created, it will create a new one. That, in some
  cases, can lead to a short (1-2 seconds) traffic loss in the
  dataplane. We observed that by running a UDP traffic test between 2 VMs:

  - server A run:
  sudo iperf3 -s -p 5000 -i 1

  - server B run:
  iperf3 -c 192.168.155.143 -p 5000 -u --length 1400 -b 1M -i 1 -t 3001

  When neutron-ovs-agent on the node where server B runs is restarted,
  packet loss can be observed in iperf, like:

  [  5]  44.00-45.00  sec   122 KBytes   997 Kbits/sec  0.028 ms  0/89 (0%)  
  iperf3: OUT OF ORDER - incoming packet = 4047 and received packet = 4048 AND 
SP = 5
  iperf3: OUT OF ORDER - incoming packet = 4046 and received packet = 4049 AND 
SP = 5
  iperf3: OUT OF ORDER - incoming packet = 4051 and received packet = 4053 AND 
SP = 5
  iperf3: OUT OF ORDER - incoming packet = 4052 and received packet = 4054 AND 
SP = 5
  [  5]  45.00-46.00  sec   123 KBytes  1.01 Mbits/sec  0.021 ms  4/90 (4.4%)  
  [  5]  46.00-47.00  sec   122 KBytes   997 Kbits/sec  0.028 ms  0/89 (0%)  
  iperf3: OUT OF ORDER - incoming packet = 4218 and received packet = 4219 AND 
SP = 5
  iperf3: OUT OF ORDER - incoming packet = 4220 and received packet = 4221 AND 
SP = 5
  iperf3: OUT OF ORDER - incoming packet = 4222 and received packet = 4223 AND 
SP = 5
  [  5]  47.00-48.00  sec   122 KBytes   997 Kbits/sec  0.024 ms  3/89 (3.4%)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1948642/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1950273] [NEW] Error 500 during log update

2021-11-09 Thread Slawek Kaplonski
Public bug reported:

It happened in the CI job:
https://70b72721af304b4c1528-2577420e766b653c3ec089b9602cc4a1.ssl.cf2.rackcdn.com/811411/7/check/neutron-
tempest-plugin-api/438deaa/testr_results.html

Test failure:

ft1.2: 
neutron_tempest_plugin.api.admin.test_logging.LoggingTestJSON.test_log_lifecycle[id-8d2e1ba5-455b-4519-a88e-e587002faba6]testtools.testresult.real._StringException:
 pythonlogging:'': {{{
2021-11-09 00:01:03,854 122316 INFO [tempest.lib.common.rest_client] 
Request (LoggingTestJSON:test_log_lifecycle): 201 POST 
https://158.69.75.224:9696/v2.0/log/logs 0.096s
2021-11-09 00:01:03,854 122316 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
Body: {"log": {"name": "tempest-test-log-694677312", "description": 
"tempest-test-log-desc-135349090", "resource_type": "security_group", 
"resource_id": null, "target_id": null, "event": "ALL", "enabled": true}}
Response - Headers: {'date': 'Tue, 09 Nov 2021 00:01:03 GMT', 'server': 
'Apache/2.4.41 (Ubuntu)', 'content-type': 'application/json', 'content-length': 
'448', 'x-openstack-request-id': 'req-19dfbf9a-3b85-43d4-9b87-ee3495ddc617', 
'connection': 'close', 'status': '201', 'content-location': 
'https://158.69.75.224:9696/v2.0/log/logs'}
Body: b'{"log": {"id": "fae55a16-cc4f-4346-8d71-020f6573b511", 
"project_id": "7ff7323f47574ee5a1fbf7aa7b472c60", "name": 
"tempest-test-log-694677312", "resource_type": "security_group", "resource_id": 
null, "target_id": null, "event": "ALL", "enabled": true, "revision_number": 0, 
"description": "tempest-test-log-desc-135349090", "created_at": 
"2021-11-09T00:01:03Z", "updated_at": "2021-11-09T00:01:03Z", "tenant_id": 
"7ff7323f47574ee5a1fbf7aa7b472c60"}}'
2021-11-09 00:01:03,893 122316 INFO [tempest.lib.common.rest_client] 
Request (LoggingTestJSON:test_log_lifecycle): 200 GET 
https://158.69.75.224:9696/v2.0/log/logs/fae55a16-cc4f-4346-8d71-020f6573b511 
0.038s
2021-11-09 00:01:03,893 122316 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'date': 'Tue, 09 Nov 2021 00:01:03 GMT', 'server': 
'Apache/2.4.41 (Ubuntu)', 'content-type': 'application/json', 'content-length': 
'349', 'x-openstack-request-id': 'req-2ef509a0-a22c-4854-a209-245454142df0', 
'connection': 'close', 'status': '200', 'content-location': 
'https://158.69.75.224:9696/v2.0/log/logs/fae55a16-cc4f-4346-8d71-020f6573b511'}
Body: b'{"log": {"id": "fae55a16-cc4f-4346-8d71-020f6573b511", "name": 
"tempest-test-log-694677312", "resource_type": "security_group", "resource_id": 
null, "target_id": null, "event": "ALL", "enabled": true, "revision_number": 0, 
"description": "tempest-test-log-desc-135349090", "created_at": 
"2021-11-09T00:01:03Z", "updated_at": "2021-11-09T00:01:03Z"}}'
2021-11-09 00:01:03,919 122316 INFO [tempest.lib.common.rest_client] 
Request (LoggingTestJSON:test_log_lifecycle): 200 GET 
https://158.69.75.224:9696/v2.0/log/logs 0.025s
2021-11-09 00:01:03,919 122316 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
Body: None
Response - Headers: {'date': 'Tue, 09 Nov 2021 00:01:03 GMT', 'server': 
'Apache/2.4.41 (Ubuntu)', 'content-type': 'application/json', 'content-length': 
'451', 'x-openstack-request-id': 'req-8edc8bb1-ec39-4ef7-b12e-832a98b5553c', 
'connection': 'close', 'status': '200', 'content-location': 
'https://158.69.75.224:9696/v2.0/log/logs'}
Body: b'{"logs": [{"id": "fae55a16-cc4f-4346-8d71-020f6573b511", 
"project_id": "7ff7323f47574ee5a1fbf7aa7b472c60", "name": 
"tempest-test-log-694677312", "resource_type": "security_group", "resource_id": 
null, "target_id": null, "event": "ALL", "enabled": true, "revision_number": 0, 
"description": "tempest-test-log-desc-135349090", "created_at": 
"2021-11-09T00:01:03Z", "updated_at": "2021-11-09T00:01:03Z", "tenant_id": 
"7ff7323f47574ee5a1fbf7aa7b472c60"}]}'
2021-11-09 00:01:04,176 122316 INFO [tempest.lib.common.rest_client] 
Request (LoggingTestJSON:test_log_lifecycle): 500 PUT 
https://158.69.75.224:9696/v2.0/log/logs/fae55a16-cc4f-4346-8d71-020f6573b511 
0.256s
2021-11-09 00:01:04,177 122316 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'X-Auth-Token': ''}
Body: {"log": {"description": "tempest-test-log-197708411", "enabled": 
false}}
Response - Headers: {'date': 'Tue, 09 Nov 2021 00:01:04 GMT', 'server': 
'Apache/2.4.41 (Ubuntu)', 'content-type': 'application/json', 'content-length': 
'150', 'x-openstack-request-id': 'req-fa4186cb-5926-49b0-a913-f8ffa8e7f6b1', 
'connection': 'close', 'status': '500', 'content-location': 
'https://158.69.75.224:9696/v2.0/log/logs/fae55a16-cc4f-4346-8d71-020f6573b511'}
Body: b'{"NeutronError": {"type"

[Yahoo-eng-team] [Bug 1950275] [NEW] openstack-tox-py36-with-neutron-lib-master job is failing since 05.11.2021

2021-11-09 Thread Slawek Kaplonski
Public bug reported:

There is one failing test in that periodic job:

neutron.tests.unit.objects.test_objects.TestObjectVersions.test_versions

Failure examples:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_72c/periodic/opendev.org/openstack/neutron/master/openstack-tox-py36-with-neutron-lib-master/72c0cb1/testr_results.html
https://zuul.openstack.org/build/256199f9e58e441c8c480b6303af3f85
https://zuul.openstack.org/build/0038799c75704a63a461733865acacbf

Stacktrace:

ft1.1: 
neutron.tests.unit.objects.test_objects.TestObjectVersions.test_versionstesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/objects/test_objects.py",
 line 150, in test_versions
'Some objects have changed; please make sure the '
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 393, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: !=:
reference = {'QosRuleType': '1.5-56b25ec81e27aa5c8238b8c43e88aed6'}
actual= {'QosRuleType': '1.5-ea51a164013e05d5956d8bf538622b33'}
: Some objects have changed; please make sure the versions have been bumped, 
and then update their hashes in the object_data map in this test module.
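
For reference, the usual remediation (a hedged illustration, not an actual
patch) is to refresh the fingerprint of the changed object in the object_data
map of neutron/tests/unit/objects/test_objects.py, bumping the object's
version first if the change coming from neutron-lib master is user-visible.
Using the values printed above, and assuming the version can stay at 1.5, the
entry would become:

object_data = {
    # ...
    # old, now stale fingerprint:
    # 'QosRuleType': '1.5-56b25ec81e27aa5c8238b8c43e88aed6',
    'QosRuleType': '1.5-ea51a164013e05d5956d8bf538622b33',  # "actual" value from the failure
    # ...
}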

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950275

Title:
  openstack-tox-py36-with-neutron-lib-master job is failing since
  05.11.2021

Status in neutron:
  Confirmed

Bug description:
  There is one failing test in that periodic job:

  neutron.tests.unit.objects.test_objects.TestObjectVersions.test_versions

  Failure examples:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_72c/periodic/opendev.org/openstack/neutron/master/openstack-tox-py36-with-neutron-lib-master/72c0cb1/testr_results.html
  https://zuul.openstack.org/build/256199f9e58e441c8c480b6303af3f85
  https://zuul.openstack.org/build/0038799c75704a63a461733865acacbf

  Stacktrace:

  ft1.1: 
neutron.tests.unit.objects.test_objects.TestObjectVersions.test_versionstesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/objects/test_objects.py",
 line 150, in test_versions
  'Some objects have changed; please make sure the '
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 393, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/testtools/testcase.py",
 line 480, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: !=:
  reference = {'QosRuleType': '1.5-56b25ec81e27aa5c8238b8c43e88aed6'}
  actual= {'QosRuleType': '1.5-ea51a164013e05d5956d8bf538622b33'}
  : Some objects have changed; please make sure the versions have been bumped, 
and then update their hashes in the object_data map in this test module.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1950275/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1906568] Re: [OVN Octavia Provider] OVN provider not setting member offline correctly on create when admin_state_up=False

2021-11-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1906568

Title:
  [OVN Octavia Provider] OVN provider not setting member offline
  correctly on create when admin_state_up=False

Status in neutron:
  Fix Released

Bug description:
  According to the Octavia API, a provider driver should set the member
  operating_status field to OFFLINE on a create if admin_state_up=False
  in the call.  The OVN provider doesn't look at that flag, so always
  has operating_status set to NO_MONITOR (the default when health
  monitors are not supported).

  Need to fix this in order to enable and pass the tempest API tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1906568/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1888646] Re: [OVN Octavia Provider] octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener fails

2021-11-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1888646

Title:
  [OVN Octavia Provider]
  
octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener
  fails

Status in neutron:
  Fix Released

Bug description:
  
octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_pool_create_with_listener
  fails

  example failure:

  
https://e500f5844c497d7c1455-bb0af7d0ed113130252cfd767637324e.ssl.cf2.rackcdn.com/742445/4/check/ovn-
  octavia-provider-tempest-release/9fb114c/testr_results.html

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/api/v2/test_pool.py",
 line 86, in test_pool_create_with_listener
  self._test_pool_create(has_listener=True)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/api/v2/test_pool.py",
 line 149, in _test_pool_create
  CONF.load_balancer.build_timeout)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/octavia_tempest_plugin/tests/waiters.py",
 line 96, in wait_for_status
  raise exceptions.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: (PoolAPITest:test_pool_create_with_listener) show_pool 
operating_status failed to update to ONLINE within the required time 300. 
Current status of show_pool: OFFLINE

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1888646/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1916646] Re: ovn-octavia-provider can attempt to write protocol=None to OVSDB

2021-11-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1916646

Title:
  ovn-octavia-provider can attempt to write protocol=None to OVSDB

Status in neutron:
  Fix Released

Bug description:
  In some functional test output, we can see the line:

  ERROR ovsdbapp.backend.ovs_idl.vlog
  [req-a6293ef9-4b57-444d-8a5c-a27937ddb5db - - - - -] attempting to
  write bad value to column protocol (ovsdb error: expected string, got
  ): Error: ovsdb error: expected string, got 

  An empty protocol is represented in OVSDB via [], so in some place we
  are passing a None value instead.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1916646/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1912779] Re: [ovn-octavia-provider]: batch update fails when members to remove is empty

2021-11-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912779

Title:
  [ovn-octavia-provider]: batch update fails when members to remove is
  empty

Status in neutron:
  Fix Released

Bug description:
  Discovered bug using Kubernetes cloud-provider-openstack and trying to
  expose service using type: Loadbalancer.

  I0122 07:06:34.842456   1 openstack_loadbalancer.go:1122] Updating 1 
members for pool fa4db405-2e66-4ce2-a29c-743700e4d53c
  I0122 07:06:34.842578   1 loadbalancer.go:371] OpenStack Request URL: PUT 
https://sparrow.cf.ac.uk:9876/v2.0/lbaas/pools/fa4db405-2e66-4ce2-a29c-743700e4d53c/members
  I0122 07:06:34.842607   1 loadbalancer.go:371] OpenStack Request Headers:
  I0122 07:06:34.842613   1 loadbalancer.go:371] Accept: application/json
  I0122 07:06:34.842619   1 loadbalancer.go:371] Content-Type: 
application/json
  I0122 07:06:34.842631   1 loadbalancer.go:371] User-Agent: 
openstack-cloud-controller-manager/da85a2c6-dirty gophercloud/2.0.0
  I0122 07:06:34.842637   1 loadbalancer.go:371] X-Auth-Token: ***
  I0122 07:06:34.842676   1 loadbalancer.go:371] OpenStack Request Body: {
  I0122 07:06:34.842686   1 loadbalancer.go:371]   "members": [
  I0122 07:06:34.842692   1 loadbalancer.go:371] {
  I0122 07:06:34.842698   1 loadbalancer.go:371]   "address": 
"10.0.0.75",
  I0122 07:06:34.842704   1 loadbalancer.go:371]   "name": 
"demo-k8s-czynmnrzfgtu-node-0",
  I0122 07:06:34.842711   1 loadbalancer.go:371]   "protocol_port": 
31166,
  I0122 07:06:34.842717   1 loadbalancer.go:371]   "subnet_id": 
"b1c8ea56-f7d1-4b14-b584-d621d77f88c1"
  I0122 07:06:34.842723   1 loadbalancer.go:371] }
  I0122 07:06:34.842729   1 loadbalancer.go:371]   ]
  I0122 07:06:34.842735   1 loadbalancer.go:371] }
  I0122 07:06:35.844346   1 loadbalancer.go:371] OpenStack Response Code: 
500
  I0122 07:06:35.844384   1 loadbalancer.go:371] OpenStack Response Headers:
  I0122 07:06:35.844389   1 loadbalancer.go:371] Connection: keep-alive
  I0122 07:06:35.844394   1 loadbalancer.go:371] Content-Length: 114
  I0122 07:06:35.844399   1 loadbalancer.go:371] Content-Type: 
application/json
  I0122 07:06:35.844403   1 loadbalancer.go:371] Date: Fri, 22 Jan 2021 
07:06:35 GMT
  I0122 07:06:35.844408   1 loadbalancer.go:371] Server: WSGIServer/0.2 
CPython/3.6.8
  I0122 07:06:35.844413   1 loadbalancer.go:371] X-Openstack-Request-Id: 
req-2fff1131-4956-48cf-a018-c0b0f9ab55d7
  I0122 07:06:35.844518   1 loadbalancer.go:371] OpenStack Response Body: {
  I0122 07:06:35.844530   1 loadbalancer.go:371]   "debuginfo": null,
  I0122 07:06:35.844535   1 loadbalancer.go:371]   "faultcode": "Server",
  I0122 07:06:35.844540   1 loadbalancer.go:371]   "faultstring": "Provider 
'ovn' reports error: list index out of range"
  I0122 07:06:35.844545   1 loadbalancer.go:371] }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912779/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1950478] Re: test_list_external_networks fails with updated get_network:router:external policy

2021-11-12 Thread Slawek Kaplonski
Fix proposed https://review.opendev.org/c/openstack/tripleo-heat-
templates/+/817754

Test patch https://review.opendev.org/c/openstack/tripleo-heat-
templates/+/817754

** Project changed: neutron => tripleo

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950478

Title:
  test_list_external_networks fails with updated
  get_network:router:external policy

Status in tripleo:
  New

Bug description:
  We've updated most policies in neutron to adhere to common personas
  across OpenStack (system-admin, system-member, system-reader, project-
  admin, project-member, project-reader).

  In the process of updating the policies we changed the default policy
  for listing external network attributes [0]. This policy defaulted to
  open ("") but now it requires uses to be at least a 'reader' on a
  project.

  When you configure tempest to run against a deployment that's using
  this new policy, several neutron tempest tests fail
  (tempest.api.network.test_networks,
  
tempest.api.network.admin.test_external_network_extension.ExternalNetworksTestJSON).

  We should find a way to continue supporting the use case where
  project-readers can view external networks.


  [0] 
https://github.com/openstack/neutron/blob/master/neutron/conf/policies/network.py#L187-L198
  [1] 
https://github.com/openstack/tempest/blob/master/tempest/api/network/admin/test_external_network_extension.py#L72-L88

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1950478/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1950795] [NEW] neutron-tempest-plugin-scenario jobs on stable/rocky and stable/queens are failing with POST_FAILURE every time

2021-11-12 Thread Slawek Kaplonski
Public bug reported:

It seems that there is an error during the generation of the test results file:

2021-11-10 21:38:28.832885 | TASK [fetch-subunit-output : Generate 
testr_results.html file]
2021-11-10 21:38:30.122424 | controller | 
neutron_tempest_plugin.scenario.test_mtu.NetworkWritableMtuTest.test_connectivity_min_max_mtu[id-bc470200-d8f4-4f07-b294-1b4cbaaa35b9]
2021-11-10 21:38:30.122548 | controller | 
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_subport_connectivity[id-a8a02c9b-b453-49b5-89a2-cce7da66aafb]
2021-11-10 21:38:30.122613 | controller | Traceback (most recent call last):
2021-11-10 21:38:30.122666 | controller |   File "/usr/local/bin/subunit2html", 
line 11, in 
2021-11-10 21:38:30.122746 | controller | sys.exit(main())
2021-11-10 21:38:30.122808 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/os_testr/subunit2html.py", line 761, in 
main
2021-11-10 21:38:30.123606 | controller | result.stopTestRun()
2021-11-10 21:38:30.123673 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testresult/real.py", line 
549, in stopTestRun
2021-11-10 21:38:30.125201 | controller | sink.stopTestRun()
2021-11-10 21:38:30.125351 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testresult/real.py", line 
1775, in stopTestRun
2021-11-10 21:38:30.125835 | controller | self.decorated.stopTestRun()
2021-11-10 21:38:30.125900 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testresult/real.py", line 
1529, in stopTestRun
2021-11-10 21:38:30.126206 | controller | return 
self.decorated.stopTestRun()
2021-11-10 21:38:30.126262 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/os_testr/subunit2html.py", line 517, in 
stopTestRun
2021-11-10 21:38:30.126405 | controller | report = self._generate_report()
2021-11-10 21:38:30.126459 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/os_testr/subunit2html.py", line 603, in 
_generate_report
2021-11-10 21:38:30.126603 | controller | self._generate_report_test(rows, 
cid, tid, n, t, o, e)
2021-11-10 21:38:30.126656 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/os_testr/subunit2html.py", line 685, in 
_generate_report_test
2021-11-10 21:38:30.126814 | controller | output=saxutils.escape(o + e),
2021-11-10 21:38:30.126921 | controller | UnicodeDecodeError: 'ascii' codec 
can't decode byte 0xef in position 121085: ordinal not in range(128)
2021-11-10 21:38:30.473606 | controller | ERROR
2021-11-10 21:38:30.473915 | controller | {
2021-11-10 21:38:30.474026 | controller |   "delta": "0:00:00.590579",
2021-11-10 21:38:30.474104 | controller |   "end": "2021-11-10 21:38:30.146043",
2021-11-10 21:38:30.474198 | controller |   "msg": "non-zero return code",
2021-11-10 21:38:30.474272 | controller |   "rc": 1,
2021-11-10 21:38:30.474341 | controller |   "start": "2021-11-10 
21:38:29.555464"
2021-11-10 21:38:30.474408 | controller | }

But I think that the real issue is that some tests are failing earlier in
all those jobs.

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950795

Title:
  neutron-tempest-plugin-scenario jobs on stable/rocky and stable/queens
  are failing with POST_FAILURE every time

Status in neutron:
  Confirmed

Bug description:
  It seems that there is an error during the generation of the test results
  file:

  2021-11-10 21:38:28.832885 | TASK [fetch-subunit-output : Generate 
testr_results.html file]
  2021-11-10 21:38:30.122424 | controller | 
neutron_tempest_plugin.scenario.test_mtu.NetworkWritableMtuTest.test_connectivity_min_max_mtu[id-bc470200-d8f4-4f07-b294-1b4cbaaa35b9]
  2021-11-10 21:38:30.122548 | controller | 
neutron_tempest_plugin.scenario.test_trunk.TrunkTest.test_subport_connectivity[id-a8a02c9b-b453-49b5-89a2-cce7da66aafb]
  2021-11-10 21:38:30.122613 | controller | Traceback (most recent call last):
  2021-11-10 21:38:30.122666 | controller |   File 
"/usr/local/bin/subunit2html", line 11, in 
  2021-11-10 21:38:30.122746 | controller | sys.exit(main())
  2021-11-10 21:38:30.122808 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/os_testr/subunit2html.py", line 761, in 
main
  2021-11-10 21:38:30.123606 | controller | result.stopTestRun()
  2021-11-10 21:38:30.123673 | controller |   File 
"/usr/local/lib/python2.7/dist-packages/testtools/testresult/real.py", line 
549, in stopTestRun
  2021-11-10 21:38:30.125201 | controller | sink.stopTestRun()
  2021-11-10 21:38:30.125351 | controller |   File 
&

[Yahoo-eng-team] [Bug 1951225] [NEW] [OVN] Agent can't be found in functional test sometimes

2021-11-17 Thread Slawek Kaplonski
Public bug reported:

It causes failures of the functional test
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestAgentMonitor.test_network_agent_present
test. Examples:

https://164b3e6c67f8dffddea7-f89a5ef8d1bd221399a14347eddacf71.ssl.cf2.rackcdn.com/814009/17/check/neutron-
functional-with-uwsgi/a3fa229/testr_results.html

https://c75035ee6c00054d18c4-7fb3488540a595a48601456545c48d00.ssl.cf1.rackcdn.com/813977/12/check/neutron-
functional-with-uwsgi/d85a9df/testr_results.html

Stacktrace:

ft1.2: 
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestAgentMonitor.test_network_agent_presenttesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 315, in test_network_agent_present
type(neutron_agent.AgentCache()[self.chassis_name]))
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/agent/neutron_agent.py",
 line 211, in __getitem__
return self.agents[key]
KeyError: 'd02bd040-154c-4a45-8da0-9452ebcd5b12'
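
A hedged sketch of how such races are usually worked around in the functional
tests is to wait for the chassis to appear in the agent cache before indexing
it, rather than reading it immediately. AgentCache and wait_until_true are
taken from the tracebacks in this report and from neutron.common.utils; the
helper below is an illustrative assumption, not the actual test code.

from neutron.common import utils as n_utils
from neutron.plugins.ml2.drivers.ovn.agent import neutron_agent


def wait_for_agent(chassis_name, timeout=60):
    # Poll until the chassis shows up in the cache (or time out), then return
    # the cached agent instead of indexing the cache right away.
    n_utils.wait_until_true(
        lambda: chassis_name in neutron_agent.AgentCache().agents,
        timeout=timeout)
    return neutron_agent.AgentCache()[chassis_name]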

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: functional-tests gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951225

Title:
  [OVN] Agent can't be found in functional test sometimes

Status in neutron:
  Confirmed

Bug description:
  It causes failures of the functional test
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestAgentMonitor.test_network_agent_present
  test. Examples:

  
https://164b3e6c67f8dffddea7-f89a5ef8d1bd221399a14347eddacf71.ssl.cf2.rackcdn.com/814009/17/check/neutron-
  functional-with-uwsgi/a3fa229/testr_results.html

  
https://c75035ee6c00054d18c4-7fb3488540a595a48601456545c48d00.ssl.cf1.rackcdn.com/813977/12/check/neutron-
  functional-with-uwsgi/d85a9df/testr_results.html

  Stacktrace:

  ft1.2: 
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestAgentMonitor.test_network_agent_presenttesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 315, in test_network_agent_present
  type(neutron_agent.AgentCache()[self.chassis_name]))
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/agent/neutron_agent.py",
 line 211, in __getitem__
  return self.agents[key]
  KeyError: 'd02bd040-154c-4a45-8da0-9452ebcd5b12'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951225/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1952066] [NEW] Scenario test test_mac_learning_vms_on_same_network fails intermittently in the ovn job

2021-11-24 Thread Slawek Kaplonski
Public bug reported:

Failure examples:

https://b30211aa4f809fc4a91b-baf4f807d40559415da582760ebf9456.ssl.cf2.rackcdn.com/817525/7/check/neutron-tempest-plugin-scenario-ovn/c356679/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b75/815962/5/check/neutron-tempest-plugin-scenario-ovn/b75474f/testr_results.html

Stacktrace:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 107, in 
_get_ssh_connection
ssh.connect(self.host, port=self.port, username=self.username,
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/paramiko/client.py",
 line 368, in connect
raise NoValidConnectionsError(errors)
paramiko.ssh_exception.NoValidConnectionsError: [Errno None] Unable to connect 
to port 22 on 172.24.5.220

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_mac_learning.py",
 line 166, in test_mac_learning_vms_on_same_network
self._prepare_listener(non_receiver, 2)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_mac_learning.py",
 line 138, in _prepare_listener
self._check_cmd_installed_on_server(server['ssh_client'], server['id'],
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_mac_learning.py",
 line 121, in _check_cmd_installed_on_server
ssh_client.execute_script('which %s' % cmd)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 224, in execute_script
channel = self.open_session()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 149, in open_session
client = self.connect()
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 137, in connect
return super(Client, self)._get_ssh_connection(*args, **kwargs)
  File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 126, in 
_get_ssh_connection
raise exceptions.SSHTimeout(host=self.host,
tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.220 via SSH timed 
out.
User: ubuntu, Password: None

We need to check why this specific test is failing in the ovn job more
often than other tests.

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952066

Title:
  Scenario test test_mac_learning_vms_on_same_network fails
  intermittently in the ovn job

Status in neutron:
  Confirmed

Bug description:
  Failure examples:

  
https://b30211aa4f809fc4a91b-baf4f807d40559415da582760ebf9456.ssl.cf2.rackcdn.com/817525/7/check/neutron-tempest-plugin-scenario-ovn/c356679/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_b75/815962/5/check/neutron-tempest-plugin-scenario-ovn/b75474f/testr_results.html

  Stacktrace:

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 107, in 
_get_ssh_connection
  ssh.connect(self.host, port=self.port, username=self.username,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/paramiko/client.py",
 line 368, in connect
  raise NoValidConnectionsError(errors)
  paramiko.ssh_exception.NoValidConnectionsError: [Errno None] Unable to 
connect to port 22 on 172.24.5.220

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_mac_learning.py",
 line 166, in test_mac_learning_vms_on_same_network
  self._prepare_listener(non_receiver, 2)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_mac_learning.py",
 line 138, in _prepare_listener
  self._check_cmd_installed_on_server(server['ssh_client'], server['id'],
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/scenario/test_mac_learning.py",
 line 121, in _check_cmd_installed_on_server
  ssh_client.execute_script('which %s' % cmd)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 224, in execute_script
  channel = self.open_session()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 line 149, in open_session
  client = self.connect()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/common/ssh.py",
 l

[Yahoo-eng-team] [Bug 1886909] Re: selection_fields for udp and sctp case doesn't work correctly

2021-11-24 Thread Slawek Kaplonski
In the neutron-tempest-plugin-scenario-ovn job we are using OVN 21.06
now. In the Ubuntu-based jobs, where we are using OVN from packages, it's
still 20.03, but I think we can close this bug now.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1886909

Title:
  selection_fields for udp and sctp case doesn't work correctly

Status in neutron:
  Fix Released

Bug description:
  From https://bugzilla.redhat.com/show_bug.cgi?id=1846189

  Description of problem:
  [ovn 20.E]selection_fields for udp and sctp case doesn't work correctly

  Version-Release number of selected component (if applicable):
  # rpm -qa|grep ovn
  ovn2.13-central-2.13.0-34.el7fdp.x86_64
  ovn2.13-2.13.0-34.el7fdp.x86_64
  ovn2.13-host-2.13.0-34.el7fdp.x86_64

  
  How reproducible:
  always

  Steps to Reproduce:
  server:
rlRun "ovn-nbctl ls-add ls1"
  rlRun "ovn-nbctl lsp-add ls1 ls1p1"

  rlRun "ovn-nbctl lsp-set-addresses ls1p1 00:01:02:01:01:01"
  rlRun "ovn-nbctl lsp-add ls1 ls1p2"

  rlRun "ovn-nbctl lsp-set-addresses ls1p2
  00:01:02:01:01:02"

  rlRun "ovn-nbctl lsp-add ls1 ls1p3"
  rlRun "ovn-nbctl lsp-set-addresses ls1p3 00:01:02:01:01:04"

  rlRun "ovn-nbctl ls-add ls2"
  rlRun "ovn-nbctl lsp-add ls2 ls2p1"
  rlRun "ovn-nbctl lsp-set-addresses ls2p1 00:01:02:01:01:03"

  rlRun "ovs-vsctl add-port br-int vm1 -- set interface vm1 
type=internal"
  rlRun "ip netns add server0"
  rlRun "ip link set vm1 netns server0"
  rlRun "ip netns exec server0 ip link set lo up"
  rlRun "ip netns exec server0 ip link set vm1 up"
  rlRun "ip netns exec server0 ip link set vm1 address 
00:01:02:01:01:01"
  rlRun "ip netns exec server0 ip addr add 192.168.0.1/24 dev 
vm1"
  rlRun "ip netns exec server0 ip addr add 3001::1/64 dev vm1"
  rlRun "ip netns exec server0 ip route add default via 
192.168.0.254 dev vm1"
  rlRun "ip netns exec server0 ip -6 route add default via 
3001::a dev vm1"
  rlRun "ovs-vsctl set Interface vm1 
external_ids:iface-id=ls1p1"
rlRun "ovs-vsctl add-port br-int vm2 -- set interface vm2 
type=internal"
  rlRun "ip netns add server1"
  rlRun "ip link set vm2 netns server1"
  rlRun "ip netns exec server1 ip link set lo up"
  rlRun "ip netns exec server1 ip link set vm2 up"
  rlRun "ip netns exec server1 ip link set vm2 address 
00:01:02:01:01:02"
  rlRun "ip netns exec server1 ip addr add 192.168.0.2/24 dev 
vm2"
  rlRun "ip netns exec server1 ip addr add 3001::2/64 dev vm2"
  rlRun "ip netns exec server1 ip route add default via 
192.168.0.254 dev vm2"
  rlRun "ip netns exec server1 ip -6 route add default via 
3001::a dev vm2"
  rlRun "ovs-vsctl set Interface vm2 
external_ids:iface-id=ls1p2"

  rlRun "ovn-nbctl lr-add lr1"
  rlRun "ovn-nbctl lrp-add lr1 lr1ls1 00:01:02:0d:01:01 
192.168.0.254/24 3001::a/64"
  rlRun "ovn-nbctl lrp-add lr1 lr1ls2 00:01:02:0d:01:02 
192.168.1.254/24 3001:1::a/64"

  rlRun "ovn-nbctl lsp-add ls1 ls1lr1"
  rlRun "ovn-nbctl lsp-set-type ls1lr1 router"
  rlRun "ovn-nbctl lsp-set-options ls1lr1 router-port=lr1ls1"
  rlRun "ovn-nbctl lsp-set-addresses ls1lr1 \"00:01:02:0d:01:01 
192.168.0.254 3001::a\""
  rlRun "ovn-nbctl lsp-add ls2 ls2lr1"
  rlRun "ovn-nbctl lsp-set-type ls2lr1 router"
  rlRun "ovn-nbctl lsp-set-options ls2lr1 router-port=lr1ls2"
  rlRun "ovn-nbctl lsp-set-addresses ls2lr1 \"00:01:02:0d:01:02 
192.168.1.254 3001:1::a\""
rlRun "ovn-nbctl lrp-add lr1 lr1p 00:01:02:0d:0f:01 
172.16.1.254/24 2002::a/64"

  rlRun "ovn-nbctl lb-add lb0 192.168.2.1:12345 
192.168.0.1:12345,192.168.0.2:12345"
  rlRun "ovn-nbctl lb-add lb0 [3000::100]:12345 
[3001::1]:12345,[3001::2]:12345"
uuid=`ovn-nbctl list Load_Balancer |grep uuid|awk 
'{printf $3}'`

  rlRun "ovn-nbctl set load_balancer $uuid 
selection_fields=\"ip_src,ip_dst\""
  rlRun "ovn-nbctl show"
  rlRun "ovn-sbctl show"
ovn-nbctl set  Logical_Router lr1 options:chassis="hv1"

  rlRun "ovn-nbctl ls-lb-add ls1 lb0"

  
  rlRun "ovn-nbctl lb-add lb1 192.168.2.1:12345 
192.168.0.1:12345,192.168.0.2:12345"
  rlR

[Yahoo-eng-team] [Bug 1952357] [NEW] Functional tests job in the ovn-octavia-provider is broken

2021-11-25 Thread Slawek Kaplonski
Public bug reported:

Probably because https://review.opendev.org/c/openstack/neutron/+/814009
Failure example: 
https://zuul.opendev.org/t/openstack/build/642360f0bd8b46699316e0063d9becd0

+ lib/databases/postgresql:configure_database_postgresql:92 :   sudo -u root 
sudo -u postgres -i psql -c 'CREATE ROLE root WITH SUPERUSER LOGIN PASSWORD 
'\''openstack_citest'\'''
CREATE ROLE
++ 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:176
 :   mktemp -d
+ 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:176
 :   tmp_dir=/tmp/tmp.5EA0JIeLQG
+ 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:177
 :   trap 'rm -rf /tmp/tmp.5EA0JIeLQG' EXIT
+ 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:179
 :   cat
+ 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:185
 :   /usr/bin/mysql -u root -popenstack_citest
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for 
'root'@'localhost'

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: functional-tests gate-failure ovn-octavia-provider

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952357

Title:
  Functional tests job in the ovn-octavia-provider is broken

Status in neutron:
  Confirmed

Bug description:
  Probably because https://review.opendev.org/c/openstack/neutron/+/814009
  Failure example: 
https://zuul.opendev.org/t/openstack/build/642360f0bd8b46699316e0063d9becd0

  + lib/databases/postgresql:configure_database_postgresql:92 :   sudo -u root 
sudo -u postgres -i psql -c 'CREATE ROLE root WITH SUPERUSER LOGIN PASSWORD 
'\''openstack_citest'\'''
  CREATE ROLE
  ++ 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:176
 :   mktemp -d
  + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:176
 :   tmp_dir=/tmp/tmp.5EA0JIeLQG
  + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:177
 :   trap 'rm -rf /tmp/tmp.5EA0JIeLQG' EXIT
  + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:179
 :   cat
  + 
/home/zuul/src/opendev.org/openstack/neutron/tools/configure_for_func_testing.sh:_install_databases:185
 :   /usr/bin/mysql -u root -popenstack_citest
  mysql: [Warning] Using a password on the command line interface can be 
insecure.
  ERROR 1396 (HY000) at line 2: Operation CREATE USER failed for 
'root'@'localhost'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1952357/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1952395] [NEW] Tempest jobs in the ovn-octavia-provider are broken

2021-11-25 Thread Slawek Kaplonski
 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of '/opt/stack/logs/dstat-csv.log': Operation not 
permitted
Nov 26 07:19:11.724747 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of '/opt/stack/logs/ovs-vswitchd.log': Operation not 
permitted
Nov 26 07:19:11.725149 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of 
'/opt/stack/logs/devstacklog.txt.2021-11-26-070811': Operation not permitted
Nov 26 07:19:11.725149 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of '/opt/stack/logs/devstacklog.txt.summary': 
Operation not permitted
Nov 26 07:19:11.725149 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of '/opt/stack/logs/archive': Operation not permitted
Nov 26 07:19:11.725149 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of '/opt/stack/logs/ovsdb-server-nb.log': Operation 
not permitted
Nov 26 07:19:11.725149 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of 
'/opt/stack/logs/devstacklog.txt.2021-11-26-070811.summary.2021-11-26-070811': 
Operation not permitted
Nov 26 07:19:11.725149 ubuntu-focal-inmotion-iad3-0027510773 bash[111448]: 
chown: changing ownership of '/opt/stack/logs': Operation not permitted
Nov 26 07:19:11.725814 ubuntu-focal-inmotion-iad3-0027510773 bash[111449]: 
chown: cannot access '/usr/local/etc/ovn': No such file or directory
Nov 26 07:19:11.727636 ubuntu-focal-inmotion-iad3-0027510773 
ovsdb-server[111450]: ovs|1|vlog|INFO|opened log file 
/opt/stack/logs/ovsdb-server-sb.log
Nov 26 07:19:11.728571 ubuntu-focal-inmotion-iad3-0027510773 
ovsdb-server[111452]: 
ovs|2|lockfile|WARN|/opt/stack/data/ovn/.ovnsb_db.db.~lock~: failed to open 
lock file: Permission denied
Nov 26 07:19:11.728596 ubuntu-focal-inmotion-iad3-0027510773 
ovsdb-server[111452]: 
ovs|3|lockfile|WARN|/opt/stack/data/ovn/.ovnsb_db.db.~lock~: failed to lock 
file: Resource temporarily unavailable
Nov 26 07:19:11.728973 ubuntu-focal-inmotion-iad3-0027510773 bash[111452]: 
ovsdb-server: I/O error: /opt/stack/data/ovn/ovnsb_db.db: failed to lock 
lockfile (Resource temporarily unavailable)
Nov 26 07:19:11.734924 ubuntu-focal-inmotion-iad3-0027510773 ovn-sbctl[111456]: 
ovs|1|sbctl|INFO|Called as ovn-sbctl --no-leader-only init
Nov 26 07:20:41.854729 ubuntu-focal-inmotion-iad3-0027510773 systemd[1]: 
devstack@ovn-northd.service: start operation timed out. Terminating.
Nov 26 07:20:41.855570 ubuntu-focal-inmotion-iad3-0027510773 systemd[1]: 
devstack@ovn-northd.service: Killing process 111456 (ovn-sbctl) with signal 
SIGKILL.
Nov 26 07:20:41.855656 ubuntu-focal-inmotion-iad3-0027510773 systemd[1]: 
devstack@ovn-northd.service: Failed with result 'timeout'.
Nov 26 07:20:41.855824 ubuntu-focal-inmotion-iad3-0027510773 systemd[1]: Failed 
to start Devstack devstack@ovn-northd.service.

** Affects: neutron
 Importance: Critical
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: gate-failure ovn-octavia-provider tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1952395

Title:
  Tempest jobs in the ovn-octavia-provider are broken

Status in neutron:
  Confirmed

Bug description:
  Failure example:

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_c9d/819377/2/check/ovn-
  octavia-provider-tempest-release/c9db1e6/job-output.txt

  2021-11-26 07:20:41.865039 | controller | + ./stack.sh:exit_trap:507  
   :   local r=1
  2021-11-26 07:20:41.868331 | controller | ++ ./stack.sh:exit_trap:508 
:   jobs -p
  2021-11-26 07:20:41.872000 | controller | + ./stack.sh:exit_trap:508  
   :   jobs=84673
  2021-11-26 07:20:41.875064 | controller | + ./stack.sh:exit_trap:511  
   :   [[ -n 84673 ]]
  2021-11-26 07:20:41.877646 | controller | + ./stack.sh:exit_trap:511  
   :   [[ -n /opt/stack/logs/devstacklog.txt.2021-11-26-070811 ]]
  2021-11-26 07:20:41.880814 | controller | + ./stack.sh:exit_trap:511  
   :   [[ True == \T\r\u\e ]]
  2021-11-26 07:20:41.883918 | controller | + ./stack.sh:exit_trap:512  
   :   echo 'exit_trap: cleaning up child processes'
  2021-11-26 07:20:41.883992 | controller | exit_trap: cleaning up child 
processes
  2021-11-26 07:20:41.887286 | controller | + ./stack.sh:exit_trap:513  
   :   kill 84673
  2021-11-26 07:20:41.890505 | controller | + ./stack.sh:exit_trap:517  
   :   '[' -f /tmp/tmp.kOZU5nmHMT ']'
  2021-11-26 07:20:41.893624 | controller | + ./stack.sh:exit_trap:518  
   :   rm /tmp/tmp.kOZU5nmHMT
  2021-11-26 07:20:41.897844 

[Yahoo-eng-team] [Bug 1953479] [NEW] Timeout in the scenario jobs' execution

2021-12-07 Thread Slawek Kaplonski
Public bug reported:

I have noticed quite a few times that neutron-tempest-plugin scenario jobs
were timing out due to very slow test execution. See examples:

https://82f03bf860caa31b7ef2-7540de26d324888abf6b8e200e8c6ffb.ssl.cf5.rackcdn.com/816800/5/check/neutron-
tempest-plugin-scenario-openvswitch/fa11d7a/job-output.txt

https://f4a78187a8f66e46939f-e2f3a8f1da38bd85104d6de65559a608.ssl.cf1.rackcdn.com/819032/3/check/neutron-
tempest-plugin-scenario-linuxbridge/626868b/job-output.txt

https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_38c/816802/7/check/neutron-
tempest-plugin-scenario-linuxbridge/38cc377/job-output.txt

https://eddc2f34a262a7fd8f98-de6b79a0bbc85dd849a2bc7008d89fe0.ssl.cf1.rackcdn.com/820125/1/gate/neutron-
tempest-plugin-scenario-linuxbridge/a49f9a4/job-output.txt

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1d3/815855/17/gate/neutron-
tempest-plugin-scenario-openvswitch/1d38034/job-output.txt

https://18df87e68d1f57858165-9339bc0d426f2a9334b357b553bd2c47.ssl.cf2.rackcdn.com/815855/17/gate/neutron-
tempest-plugin-scenario-ovn/1c40e32/job-output.txt

https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_430/804846/18/check/neutron-
tempest-plugin-scenario-ovn/43061b5/job-output.txt

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1953479

Title:
  Timeout in the scenario jobs' execution

Status in neutron:
  Confirmed

Bug description:
  I have noticed quite a few times that neutron-tempest-plugin scenario jobs
  were timing out due to very slow test execution. See examples:

  
https://82f03bf860caa31b7ef2-7540de26d324888abf6b8e200e8c6ffb.ssl.cf5.rackcdn.com/816800/5/check/neutron-
  tempest-plugin-scenario-openvswitch/fa11d7a/job-output.txt

  
https://f4a78187a8f66e46939f-e2f3a8f1da38bd85104d6de65559a608.ssl.cf1.rackcdn.com/819032/3/check/neutron-
  tempest-plugin-scenario-linuxbridge/626868b/job-output.txt

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_38c/816802/7/check/neutron-
  tempest-plugin-scenario-linuxbridge/38cc377/job-output.txt

  
https://eddc2f34a262a7fd8f98-de6b79a0bbc85dd849a2bc7008d89fe0.ssl.cf1.rackcdn.com/820125/1/gate/neutron-
  tempest-plugin-scenario-linuxbridge/a49f9a4/job-output.txt

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1d3/815855/17/gate/neutron-
  tempest-plugin-scenario-openvswitch/1d38034/job-output.txt

  
https://18df87e68d1f57858165-9339bc0d426f2a9334b357b553bd2c47.ssl.cf2.rackcdn.com/815855/17/gate/neutron-
  tempest-plugin-scenario-ovn/1c40e32/job-output.txt

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_430/804846/18/check/neutron-
  tempest-plugin-scenario-ovn/43061b5/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1953479/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1953478] [NEW] Resize and shelve server fails intermittently in the multinode CI jobs

2021-12-07 Thread Slawek Kaplonski
Public bug reported:

I noticed failures of two tests in the Neutron CI multinode job. Both
failures look similar to me at first glance, but if they are different
issues, feel free to open another bug for one of them.
Failed tests:

tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_shelve_shelved_server

and

tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert

Failure examples:

https://4d2664479333d0a7d727-a4adcb41e06ec456a00383225f090da6.ssl.cf5.rackcdn.com/786478/25/check/neutron-ovs-tempest-dvr-ha-multinode-full/bbf40b6/testr_results.html
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_061/804846/18/gate/neutron-ovs-tempest-multinode-full/061df45/testr_results.html
https://be2e92e10ead782aa651-35e07a4cf42cfaed2fcffa4bf0b16f1b.ssl.cf1.rackcdn.com/819253/1/check/neutron-ovs-tempest-dvr-ha-multinode-full/94dc22a/testr_results.html

Stacktrace:


Traceback (most recent call last):
  File 
"/opt/stack/tempest/tempest/api/compute/servers/test_servers_negative.py", line 
50, in tearDown
self.server_check_teardown()
  File "/opt/stack/tempest/tempest/api/compute/base.py", line 222, in 
server_check_teardown
waiters.wait_for_server_status(cls.servers_client,
  File "/opt/stack/tempest/tempest/common/waiters.py", line 96, in 
wait_for_server_status
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (ServersNegativeTestJSON:tearDown) Server 
1a207544-1228-4ec1-ad99-4b6f7b6a5ea1 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: 
SHELVED_OFFLOADED. Current task state: spawning.

and:

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", 
line 419, in test_resize_server_revert
waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
  File "/opt/stack/tempest/tempest/common/waiters.py", line 96, in 
wait_for_server_status
raise lib_exc.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (ServerActionsTestJSON:test_resize_server_revert) Server 
f996ef6b-4417-4eb3-aeb7-9f66d8c4d2c5 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: REVERT_RESIZE. 
Current task state: resize_reverting.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1953478

Title:
  Resize and shelve server fails intermittently in the multinode CI jobs

Status in OpenStack Compute (nova):
  New

Bug description:
  I noticed failures of two tests in the Neutron CI multinode job. Both
  failures look similar to me at first glance, but if they are different
  issues, feel free to open another bug for one of them.
  Failed tests:

  
tempest.api.compute.servers.test_servers_negative.ServersNegativeTestJSON.test_shelve_shelved_server

  and

  
tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON.test_resize_server_revert

  Failure examples:

  
https://4d2664479333d0a7d727-a4adcb41e06ec456a00383225f090da6.ssl.cf5.rackcdn.com/786478/25/check/neutron-ovs-tempest-dvr-ha-multinode-full/bbf40b6/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_061/804846/18/gate/neutron-ovs-tempest-multinode-full/061df45/testr_results.html
  
https://be2e92e10ead782aa651-35e07a4cf42cfaed2fcffa4bf0b16f1b.ssl.cf1.rackcdn.com/819253/1/check/neutron-ovs-tempest-dvr-ha-multinode-full/94dc22a/testr_results.html

  Stacktrace:

  
  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_servers_negative.py", line 
50, in tearDown
  self.server_check_teardown()
File "/opt/stack/tempest/tempest/api/compute/base.py", line 222, in 
server_check_teardown
  waiters.wait_for_server_status(cls.servers_client,
File "/opt/stack/tempest/tempest/common/waiters.py", line 96, in 
wait_for_server_status
  raise lib_exc.TimeoutException(message)
  tempest.lib.exceptions.TimeoutException: Request timed out
  Details: (ServersNegativeTestJSON:tearDown) Server 
1a207544-1228-4ec1-ad99-4b6f7b6a5ea1 failed to reach ACTIVE status and task 
state "None" within the required time (196 s). Current status: 
SHELVED_OFFLOADED. Current task state: spawning.

  and:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
419, in test_resize_server_revert
  waiters.wait_for_server_status(self.client, self.server_id, 'ACTIVE')
File "/opt/stack/tempest/tempest/common/waiters.py", line 96, in 
wait_for_server_status
  raise lib_exc.TimeoutException(message)
  tempes

[Yahoo-eng-team] [Bug 1953480] [NEW] Agents API test failed due to unexpected "alive" status of the agent

2021-12-07 Thread Slawek Kaplonski
Public bug reported:

Failure
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_877/820125/1/check/neutron-
tempest-plugin-api/877167f/testr_results.html

Stacktrace:

Traceback (most recent call last):
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/api/admin/test_agent_management.py",
 line 44, in test_list_agent
self.assertIn(self.agent, agents)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 399, in assertIn
self.assertThat(haystack, Contains(needle), message)
  File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: {'binary': 
'neutron-ovn-metadata-agent', 'host': 'ubuntu-focal-rax-iad-0027624514', 
'availability_zone': '', 'topic': 'n/a', 'description': '', 'agent_type': 'OVN 
Metadata agent', 'id': 'c182fa5f-fd73-51c3-bd64-358818183f45', 'alive': False, 
'admin_state_up': True} not in [{'binary': 'neutron-ovn-metadata-agent', 
'host': 'ubuntu-focal-rax-iad-0027624514', 'availability_zone': '', 'topic': 
'n/a', 'description': '', 'agent_type': 'OVN Metadata agent', 'id': 
'c182fa5f-fd73-51c3-bd64-358818183f45', 'alive': True, 'admin_state_up': True}, 
{'binary': 'ovn-controller', 'host': 'ubuntu-focal-rax-iad-0027624514', 
'availability_zone': '', 'topic': 'n/a', 'description': '', 'agent_type': 'OVN 
Controller Gateway agent', 'id': '5c61e06b-c437-45c9-b91e-6f29af544b4f', 
'alive': True, 'admin_state_up': True}]
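
The two dicts above differ only in the volatile 'alive' flag, which can flip
between the moment the test stores self.agent and the moment it lists the
agents. A minimal sketch (not the actual tempest test code) of a comparison
that is immune to such a heartbeat flap:

def assert_agent_listed(expected_agent, agents):
    # Match the agent by its id and ignore the volatile 'alive' field.
    agents_by_id = {agent['id']: agent for agent in agents}
    assert expected_agent['id'] in agents_by_id
    def drop_alive(agent):
        return {k: v for k, v in agent.items() if k != 'alive'}
    assert drop_alive(expected_agent) == drop_alive(
        agents_by_id[expected_agent['id']])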

** Affects: neutron
 Importance: High
 Status: Confirmed


** Tags: gate-failure tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1953480

Title:
  Agents API test failed due to unexpected "alive" status of the agent

Status in neutron:
  Confirmed

Bug description:
  Failure
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_877/820125/1/check/neutron-
  tempest-plugin-api/877167f/testr_results.html

  Stacktrace:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/api/admin/test_agent_management.py",
 line 44, in test_list_agent
  self.assertIn(self.agent, agents)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 399, in assertIn
  self.assertThat(haystack, Contains(needle), message)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/testtools/testcase.py",
 line 480, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: {'binary': 
'neutron-ovn-metadata-agent', 'host': 'ubuntu-focal-rax-iad-0027624514', 
'availability_zone': '', 'topic': 'n/a', 'description': '', 'agent_type': 'OVN 
Metadata agent', 'id': 'c182fa5f-fd73-51c3-bd64-358818183f45', 'alive': False, 
'admin_state_up': True} not in [{'binary': 'neutron-ovn-metadata-agent', 
'host': 'ubuntu-focal-rax-iad-0027624514', 'availability_zone': '', 'topic': 
'n/a', 'description': '', 'agent_type': 'OVN Metadata agent', 'id': 
'c182fa5f-fd73-51c3-bd64-358818183f45', 'alive': True, 'admin_state_up': True}, 
{'binary': 'ovn-controller', 'host': 'ubuntu-focal-rax-iad-0027624514', 
'availability_zone': '', 'topic': 'n/a', 'description': '', 'agent_type': 'OVN 
Controller Gateway agent', 'id': '5c61e06b-c437-45c9-b91e-6f29af544b4f', 
'alive': True, 'admin_state_up': True}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1953480/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1953481] [NEW] openstack-tox-py36-with-ovsdbapp-master job is failing every day since 12.03.2021

2021-12-07 Thread Slawek Kaplonski
Public bug reported:

Failure example:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_a93/periodic/opendev.org/openstack/neutron/master/openstack-
tox-py36-with-ovsdbapp-master/a93ba3a/testr_results.html

Failed test:
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestOvnDbNotifyHandler.test_watch_and_unwatch_events


Stacktrace: 

ft1.2: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestOvnDbNotifyHandler.test_watch_and_unwatch_eventstesttools.testresult.real._StringException:
 Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 119, in test_watch_and_unwatch_events
self.handler.watch_event(networking_event)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 151, in watch_event
self._add(event)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 131, in _add
self._get_queue(event).add(event)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 124, in _get_queue
return self._queues.setdefault(event.priority, set())
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/sortedcontainers/sorteddict.py",
 line 541, in setdefault
self._list_add(key)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/sortedcontainers/sortedlist.py",
 line 1798, in add
key = self._key(value)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 112, in 
self._queues = sortedcontainers.SortedDict(lambda p: -p)
TypeError: bad operand type for unary -: 'Mock'
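
The traceback shows that ovsdbapp master now keys its event queues on
event.priority, and negating a bare Mock fails. A minimal sketch of the kind
of test-side adjustment this points to (values are hypothetical, this is not
the merged fix): give the mocked events a real integer priority so the
SortedDict key function (lambda p: -p) can negate it.

from unittest import mock

networking_event = mock.Mock()
networking_event.priority = 0  # hypothetical value; any int makes "-p" valid

other_event = mock.Mock()
other_event.priority = 1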

** Affects: neutron
 Importance: Critical
 Status: Confirmed


** Tags: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1953481

Title:
  openstack-tox-py36-with-ovsdbapp-master job is failing every day since
  12.03.2021

Status in neutron:
  Confirmed

Bug description:
  Failure example:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_a93/periodic/opendev.org/openstack/neutron/master/openstack-
  tox-py36-with-ovsdbapp-master/a93ba3a/testr_results.html

  Failed test:
  
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestOvnDbNotifyHandler.test_watch_and_unwatch_events

  
  Stacktrace: 

  ft1.2: 
neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestOvnDbNotifyHandler.test_watch_and_unwatch_eventstesttools.testresult.real._StringException:
 Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 119, in test_watch_and_unwatch_events
  self.handler.watch_event(networking_event)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 151, in watch_event
  self._add(event)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 131, in _add
  self._get_queue(event).add(event)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 124, in _get_queue
  return self._queues.setdefault(event.priority, set())
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/sortedcontainers/sorteddict.py",
 line 541, in setdefault
  self._list_add(key)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/sortedcontainers/sortedlist.py",
 line 1798, in add
  key = self._key(value)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/py36/lib/python3.6/site-packages/ovsdbapp/event.py",
 line 112, in 
  self._queues = sortedcontainers.SortedDict(lambda p: -p)
  TypeError: bad operand type for unary -: 'Mock'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1953481/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1953716] [NEW] Improve message in the NetworkInUse exception

2021-12-08 Thread Slawek Kaplonski
Public bug reported:

The message returned to the user when trying to delete a network which still
has some ports could include the UUIDs of the ports that block the network
deletion. For example, it could list those ports in the error message.
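
A rough sketch of the kind of message this suggests (the port_ids
substitution key and wording are hypothetical, not a merged change):

from neutron_lib import exceptions

class NetworkInUse(exceptions.InUse):
    # Include the blocking ports in the message so the user knows what to
    # clean up before retrying the network delete.
    message = ("Unable to complete operation on network %(net_id)s. "
               "There are one or more ports still in use on the network: "
               "%(port_ids)s.")

# e.g. raise NetworkInUse(net_id=network_id, port_ids=', '.join(port_ids))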

It was reported as one of the pain points in the
https://etherpad.opendev.org/p/pain-point-elimination

** Affects: neutron
 Importance: Medium
 Assignee: Slawek Kaplonski (slaweq)
 Status: Confirmed


** Tags: api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1953716

Title:
  Improve message in the NetworkInUse exception

Status in neutron:
  Confirmed

Bug description:
  The message returned to the user when trying to delete a network which
  still has some ports could include the UUIDs of the ports that block the
  network deletion. For example, it could list those ports in the error
  message.

  It was reported as one of the pain points in the
  https://etherpad.opendev.org/p/pain-point-elimination

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1953716/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1792925] Re: q-dhcp crashes with guru meditation on ironic's grenade

2021-12-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1792925

Title:
  q-dhcp crashes with guru meditation on ironic's grenade

Status in neutron:
  Fix Released

Bug description:
  Ironic grenade is broken on master with DHCP timeouts when booting
  nodes on Rocky. It is probably caused by q-dhcp going down:
  http://logs.openstack.org/77/586277/11/check/ironic-grenade-
  dsvm/6ad6388/logs/screen-q-dhcp.txt.gz#_Sep_14_18_32_27_656143. Which,
  in turn, may be caused by the eventlet bump.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1792925/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779075] Re: Tempest jobs fails because of timeout

2021-12-10 Thread Slawek Kaplonski
I'm closing this bug now as I don't think we still have this issue in
the gate currently.

** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779075

Title:
  Tempest jobs fails because of timeout

Status in neutron:
  Fix Released

Bug description:
  It now happens quite often that tempest-related tests in neutron fail
  because they reach the global job timeout.
  Example of such a failure:
http://logs.openstack.org/61/566961/4/check/neutron-tempest-iptables_hybrid/c70896b/job-output.txt.gz

  We need to investigate why those timeouts are reached and fix them
  somehow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779075/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694764] Re: test_metadata_proxy_respawned failed to spawn metadata proxy

2021-12-10 Thread Slawek Kaplonski
I'm closing this bug now as I don't think we still have this issue in
the gate currently.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694764

Title:
  test_metadata_proxy_respawned failed to spawn metadata proxy

Status in neutron:
  Fix Released

Bug description:
  This is on Newton.

  http://logs.openstack.org/82/465182/1/gate/gate-neutron-dsvm-
  functional-ubuntu-xenial/2eae399/testr_results.html.gz

  Traceback (most recent call last):
File "neutron/tests/functional/agent/test_dhcp_agent.py", line 322, in 
test_metadata_proxy_respawned
  exception=RuntimeError("Metadata proxy didn't respawn"))
File "neutron/common/utils.py", line 821, in wait_until_true
  eventlet.sleep(sleep)
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 34, in sleep
  hub.switch()
File 
"/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 294, in switch
  return self.greenlet.switch()
  RuntimeError: Metadata proxy didn't respawn

  The proxy process is started here:
  http://logs.openstack.org/82/465182/1/gate/gate-neutron-dsvm-
  functional-ubuntu-xenial/2eae399/logs/dsvm-functional-
  
logs/neutron.tests.functional.agent.test_dhcp_agent.DHCPAgentOVSTestCase.test_metadata_proxy_respawned.txt.gz#_2017-05-27_16_55_06_897

  Then later (not sure if related) we see this:

  2017-05-27 16:55:12.768 11565 DEBUG
  neutron.agent.linux.external_process
  [req-4c03cff5-0c93-422e-a542-423c54d67807 - - - - -] Process for
  bcaf27b6-7bdc-4569-93f0-1a4d51e21040 pid 27762 is stale, ignoring
  signal 9 disable neutron/agent/linux/external_process.py:121

  Nothing interesting can be found in syslog.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694764/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1598466] Re: Neutron VPNaas gate functional tests failing on race condition

2021-12-10 Thread Slawek Kaplonski
I'm closing this bug now as I don't think we still have this issue in
the gate currently.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1598466

Title:
  Neutron VPNaas gate functional tests failing on race condition

Status in neutron:
  Fix Released

Bug description:
  gate-neutron-vpnaas-dsvm-functional-sswan and gate-neutron-vpnaas-
  dsvm-functional are failing on a race condition in
  test_ipsec_site_connections_with_l3ha_routers:

  ft1.4: 
neutron_vpnaas.tests.functional.common.test_scenario.TestIPSecScenario.test_ipsec_site_connections_with_l3ha_routers_StringException:
 Empty attachments:
pythonlogging:''
stderr
stdout

  Traceback (most recent call last):
File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 667, 
in test_ipsec_site_connections_with_l3ha_routers
  self.check_ping(site1, site2, 0)
File "neutron_vpnaas/tests/functional/common/test_scenario.py", line 519, 
in check_ping
  timeout=8, count=4)
File "/opt/stack/new/neutron/neutron/tests/common/net_helpers.py", line 
110, in assert_ping
  dst_ip])
File "/opt/stack/new/neutron/neutron/agent/linux/ip_lib.py", line 876, in 
execute
  log_fail_as_error=log_fail_as_error, **kwargs)
File "/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in 
execute
  raise RuntimeError(msg)
  RuntimeError: Exit code: 1; Stdin: ; Stdout: PING 35.4.2.5 (35.4.2.5) 56(84) 
bytes of data.

  --- 35.4.2.5 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  ; Stderr:

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1598466/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1819897] Re: Intermittent "Neutron did not start" failures in the gate

2021-12-10 Thread Slawek Kaplonski
I'm closing this bug now as I don't think we still have this issue in
the gate currently.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1819897

Title:
  Intermittent "Neutron did not start" failures in the gate

Status in neutron:
  Fix Released

Bug description:
  Seen here:

  http://logs.openstack.org/68/642068/2/gate/tempest-full-
  py3/3648e00/controller/logs/devstacklog.txt.gz#_2019-03-13_09_43_32_198

  2019-03-13 09:42:32.696 | + functions-common:test_with_retry:2261:   
timeout 60 sh -c 'while ! wget  --no-proxy -q -O- http://198.72.124.14:19696; 
do sleep 0.5; done'
  2019-03-13 09:43:32.198 | + functions-common:test_with_retry:2262:   die 
2262 'Neutron did not start'
  2019-03-13 09:43:32.203 | + functions-common:die:195 :   
local exitcode=0
  2019-03-13 09:43:32.207 | [Call Trace]
  2019-03-13 09:43:32.207 | ./stack.sh:1303:start_neutron_service_and_check
  2019-03-13 09:43:32.207 | 
/opt/stack/devstack/lib/neutron-legacy:504:test_with_retry
  2019-03-13 09:43:32.207 | /opt/stack/devstack/functions-common:2262:die
  2019-03-13 09:43:32.213 | [ERROR] /opt/stack/devstack/functions-common:2262 
Neutron did not start

  devstack waits up to 60 seconds for the neutron API to start, and looking
  at the API logs it took longer than 2 minutes:

  http://logs.openstack.org/68/642068/2/gate/tempest-full-
  py3/3648e00/controller/logs/screen-q-svc.txt.gz

  Starts here:

  Mar 13 09:42:31.162711 ubuntu-bionic-inap-mtl01-0003755170 systemd[1]:
  Started Devstack devstack@q-svc.service.

  This is the last API log entry:

  Mar 13 09:44:18.906122 ubuntu-bionic-inap-mtl01-0003755170 neutron-
  server[1]: DEBUG oslo_service.service [-]
  

  {{(pid=1) log_opt_values /usr/local/lib/python3.6/dist-
  packages/oslo_config/cfg.py:2577}}

  Looks like there is a big jump in time here:

  Mar 13 09:42:35.558034 ubuntu-bionic-inap-mtl01-0003755170 
neutron-server[1]: DEBUG oslo_db.sqlalchemy.engines [None 
req-46b360f6-5a07-4e28-ad6d-7820e1c174b9 None None] MySQL server mode set to 
STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
 {{(pid=1) _check_effective_sql_mode 
/usr/local/lib/python3.6/dist-packages/oslo_db/sqlalchemy/engines.py:307}}
  Mar 13 09:44:18.212177 ubuntu-bionic-inap-mtl01-0003755170 
neutron-server[1]: INFO neutron.plugins.ml2.managers [None 
req-46b360f6-5a07-4e28-ad6d-7820e1c174b9 None None] Initializing driver for 
type 'gre'

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22%5BERROR%5D%5C%22%20AND%20message%3A%5C%22Neutron%20did%20not%20start%5C%22%20AND%20tags%3A%5C%22console%5C%22%20AND%20voting%3A1&from=7d

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1819897/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1812364] Re: Error "OSError: [Errno 22] failed to open netns" in l3-agent logs

2021-12-10 Thread Slawek Kaplonski
No such errors anymore. I'm closing the bug.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1812364

Title:
  Error "OSError: [Errno 22] failed to open netns" in l3-agent logs

Status in neutron:
  Fix Released

Bug description:
  I saw it in fullstack job results:
  http://logs.openstack.org/84/631584/2/check/neutron-
  fullstack/80e1e7d/logs/testr_results.html.gz

  But I checked on logstash with the query: 
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22OSError%3A%20%5BErrno%2022%5D%20failed%20to%20open%20netns%5C%22
 and it looks like it happens quite often in various jobs.
  It is mostly visible in L3 agent logs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1812364/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1802640] Re: TimeoutException: Commands [

2021-12-10 Thread Slawek Kaplonski
I don't think we still have such issues in the tempest jobs. It can maybe
happen from time to time in the functional job, but it has been very rare
recently and IMHO it may simply be caused by overloaded nodes. So I'm
closing this bug for now.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1802640

Title:
  TimeoutException: Commands
  [http://logs.openstack.org/40/617040/1/check/nova-
  next/0a82b26/logs/screen-q-agt.txt.gz?level=TRACE

  Nov 10 03:51:03.120446 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command [-] 
Error executing command: TimeoutException: Commands 
[] exceeded timeout 10 seconds
  Nov 10 03:51:03.120918 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
Traceback (most recent call last):
  Nov 10 03:51:03.121258 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/local/lib/python2.7/dist-packages/ovsdbapp/backend/ovs_idl/command.py", 
line 35, in execute
  Nov 10 03:51:03.121528 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
txn.add(self)
  Nov 10 03:51:03.121801 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
  Nov 10 03:51:03.122032 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
self.gen.next()
  Nov 10 03:51:03.122299 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/local/lib/python2.7/dist-packages/ovsdbapp/api.py", line 112, in 
transaction
  Nov 10 03:51:03.122563 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
del self._nested_txns_map[cur_thread_id]
  Nov 10 03:51:03.122747 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/local/lib/python2.7/dist-packages/ovsdbapp/api.py", line 69, in __exit__
  Nov 10 03:51:03.123003 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
self.result = self.commit()
  Nov 10 03:51:03.123198 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command   File 
"/usr/local/lib/python2.7/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py",
 line 57, in commit
  Nov 10 03:51:03.123418 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
timeout=self.timeout)
  Nov 10 03:51:03.123647 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
TimeoutException: Commands 
[] exceeded timeout 10 seconds
  Nov 10 03:51:03.123865 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.backend.ovs_idl.command 
  Nov 10 03:51:03.131466 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [-] Commands 
[] exceeded timeout 10 seconds Agent terminated!: 
TimeoutException: Commands 
[] exceeded timeout 10 seconds
  Nov 10 03:51:03.134192 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl 
[-] Post-commit checks failed: TimeoutException: Commands 
[] exceeded timeout 10 seconds
  Nov 10 03:51:03.134358 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl 
Traceback (most recent call last):
  Nov 10 03:51:03.134471 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl   
File 
"/usr/local/lib/python2.7/dist-packages/ovsdbapp/schema/open_vswitch/impl_idl.py",
 line 40, in post_commit
  Nov 10 03:51:03.134590 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl   
  self.do_post_commit(txn)
  Nov 10 03:51:03.134706 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl   
File 
"/usr/local/lib/python2.7/dist-packages/ovsdbapp/schema/open_vswitch/impl_idl.py",
 line 60, in do_post_commit
  Nov 10 03:51:03.134821 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl   
  timeout=self.timeout)
  Nov 10 03:51:03.134989 ubuntu-xenial-ovh-bhs1-461070 
neutron-openvswitch-agent[26802]: ERROR ovsdbapp.schema.open_vswitch.impl_idl 
TimeoutException: Commands 
[] exceeded timeout 10 sec

[Yahoo-eng-team] [Bug 1836642] Re: Metadata responses are very slow sometimes

2021-12-10 Thread Slawek Kaplonski
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1836642

Title:
  Metadata responses are very slow sometimes

Status in OpenStack Compute (nova):
  Expired

Bug description:
  It happens from time to time in CI that a VM spawned in a test doesn't get
  public-keys from the metadata service. Because of that, SSH to the instance
  using the ssh key is not possible and thus the test fails.

  Example of such failures:

  
http://logs.openstack.org/09/666409/7/check/tempest-full/08f4c53/testr_results.html.gz
  
http://logs.openstack.org/35/521035/8/check/tempest-full/031b0b9/testr_results.html.gz
  
http://logs.openstack.org/45/645645/8/gate/tempest-full/4d7874e/testr_results.html.gz

  In each of those cases it looks like the neutron metadata agent was
  waiting more than 10 seconds for a response from the n-api-meta service:

  http://logs.openstack.org/09/666409/7/check/tempest-
  full/08f4c53/controller/logs/screen-n-api-
  meta.txt.gz#_Jul_11_23_43_16_704979 ~ 16 seconds,

  http://logs.openstack.org/35/521035/8/check/tempest-
  full/031b0b9/controller/logs/screen-n-api-
  meta.txt.gz#_Jul_09_17_23_47_871054 ~ 13 seconds,

  http://logs.openstack.org/45/645645/8/gate/tempest-
  full/4d7874e/controller/logs/screen-n-api-
  meta.txt.gz#_Jul_09_01_43_56_549916 ~ 17 seconds.
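
  For reference, a rough sketch (not taken from the bug) of how one can time
  the same metadata request from inside a guest to see this latency:

import time
import urllib.request

URL = "http://169.254.169.254/2009-04-04/meta-data/public-keys"

start = time.time()
# Fetch the public-keys metadata exactly like the guest does during boot.
with urllib.request.urlopen(URL, timeout=30) as resp:
    body = resp.read()
print("fetched %d bytes in %.1f seconds" % (len(body), time.time() - start))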

  I have no idea why nova is responding so slowly, but it would be worth it
  if someone from the Nova team took a look at that.

  Logstash query which can help to find other examples:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22failed%20to%20get%20http%3A%2F%2F169.254.169.254%2F2009-04-04%2Fmeta-
  data%2Fpublic-keys%5C%22

  It is possible that in some of the failed tests the reason for the failure
  may be different, but the problem described above is quite common in those
  failed tests IMO.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836642/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832925] Re: Class neutron.common.utils.Timer is not thread safe

2021-12-10 Thread Slawek Kaplonski
Closing this according to the Rodolfo's comment above and docs patch
which is already merged.

** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: neutron
   Status: Fix Released => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832925

Title:
  Class neutron.common.utils.Timer is not thread safe

Status in neutron:
  Won't Fix

Bug description:
  In "Timer" the method used to control the timeout of the class context
  is not thread safe. If two different threads running in the same
  process set signal.signal, the last one will prevail in favor of the
  first one:

signal.signal(signal.SIGALRM, self._timeout_handler)
signal.alarm(self._timeout)

  
  Another, thread-safe method to control the class timeout should be
  implemented.
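
  A minimal, thread-safe alternative along those lines (only a sketch based
  on this description, not the class that Neutron ended up with) is to rely
  on a per-instance threading.Timer instead of the process-wide SIGALRM
  handler:

import threading

class ThreadSafeTimeout(object):
    """Context manager that flags expiration without touching signals.

    Unlike SIGALRM it cannot interrupt the running code; the caller has to
    poll the "expired" property at convenient points.
    """

    def __init__(self, timeout):
        self._expired = threading.Event()
        self._timer = threading.Timer(timeout, self._expired.set)

    def __enter__(self):
        self._timer.start()
        return self

    def __exit__(self, exc_type, exc, tb):
        self._timer.cancel()

    @property
    def expired(self):
        return self._expired.is_set()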

  This error can be seen in [1].

  [1] http://logs.openstack.org/89/664889/4/check/openstack-tox-
  py37/ecb7c8d/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1832925/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1835914] Re: Test test_show_network_segment_range failing

2021-12-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1835914

Title:
  Test test_show_network_segment_range failing

Status in neutron:
  Fix Released

Bug description:
  I have found a couple of times that the neutron_tempest_plugin API test
  test_show_network_segment_range is failing because there is no
  project_id field in the returned segment data.

  Example of failure:
  http://logs.openstack.org/57/669557/3/check/neutron-tempest-plugin-
  api/3b8e00b/testr_results.html.gz

  Logstash query:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20209%2C%20in%20test_show_network_segment_range%5C%22
  - it failed twice during the last week.

  As you can see in the test code, it's failing in 
https://github.com/openstack/neutron-tempest-plugin/blob/eaaf978e25b43f49a1f78c34651d4acd65236eeb/neutron_tempest_plugin/api/admin/test_network_segment_range.py#L209
 - so this clearly means that there was SOME reply from the neutron server, as
  assertions prior to this one were fine.
  Also, in the tempest logs there is a logged response like:

  Body: b'{"network_segment_range": {"id":
  "f883f498-1831-4743-819c-eaa04e335fef", "name": "tempest-
  test_network_segment_range-1876713703", "default": false, "shared":
  false, "network_type": "vxlan", "minimum": 1100, "maximum": 1105,
  "revision_number": 0, "description": "", "created_at":
  "2019-07-08T07:45:18Z", "updated_at": "2019-07-08T07:45:18Z",
  "available": [1100, 1101, 1102, 1103, 1104, 1105], "used": {}, "tags":
  []}}'

  which doesn't have project_id in it. Also, revision_number=0 looks
  strange to me.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1835914/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821567] Re: network_segment_ranges could not load in tricircle test

2021-12-10 Thread Slawek Kaplonski
** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821567

Title:
  network_segment_ranges could not load in tricircle test

Status in Tricircle:
  Fix Committed

Bug description:
  After "Use network segment ranges for segment allocation"
https://github.com/openstack/neutron/commit/a01b7125cd965625316d9aec3a7408612b94fc08#diff-441e123742581c5d1da67bf722508f1bR121
  was merged in neutron, many network unit tests could not pass in tricircle.

  The detailed info is as follows:

  stack@trio-top:~/tricircle$ sudo stestr  run 
tricircle.tests.unit.network.test_segment_plugin.PluginTest.test_create_segment
  Could not load 'NetworkSegmentRange': 'module' object has no attribute 
'NetworkSegmentRange'
  Could not load 'NetworkSegmentRange': 'module' object has no attribute 
'NetworkSegmentRange'
  Using RPC transport for notifications. Please use get_notification_transport 
to obtain a notification transport instance.
  Using RPC transport for notifications. Please use get_notification_transport 
to obtain a notification transport instance.
  Using RPC transport for notifications. Please use get_notification_transport 
to obtain a notification transport instance.
  {0} 
tricircle.tests.unit.network.test_segment_plugin.PluginTest.test_create_segment 
[0.399660s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/mock/mock.py", line 1305, 
in patched
  return func(*args, **keywargs)
File "tricircle/tests/unit/network/test_segment_plugin.py", line 228, 
in test_create_segment
  fake_plugin.central_plugin.create_network(neutron_context, network)
File "tricircle/tests/unit/network/test_central_plugin.py", line 825, 
in create_network
  net = super(FakePlugin, self).create_network(context, network)
File "tricircle/network/central_plugin.py", line 322, in create_network
  tenant_id)
File "tricircle/network/managers.py", line 65, in 
create_network_segments
  segments = self._process_provider_create(network)
File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 116, in 
_process_provider_create
  return [self._process_provider_segment(segment)]
File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 102, in 
_process_provider_segment
  self.validate_provider_segment(segment)
File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 258, in 
validate_provider_segment
  driver.obj.validate_provider_segment(segment)
File "/opt/stack/neutron/neutron/plugins/ml2/drivers/type_vlan.py", 
line 238, in validate_provider_segment
  ranges = self.get_network_segment_ranges()
File "/opt/stack/neutron/neutron/plugins/ml2/drivers/type_vlan.py", 
line 228, in get_network_segment_ranges
  ranges = self._get_network_segment_ranges_from_db()
File "/usr/local/lib/python2.7/dist-packages/neutron_lib/db/api.py", 
line 139, in wrapped
  setattr(e, '_RETRY_EXCEEDED', True)
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 
line 220, in __exit__
  self.force_reraise()
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 
line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/usr/local/lib/python2.7/dist-packages/neutron_lib/db/api.py", 
line 135, in wrapped
  return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 154, 
in wrapper
  ectxt.value = e.inner_exc
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 
line 220, in __exit__
  self.force_reraise()
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 
line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 142, 
in wrapper
  return f(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/neutron_lib/db/api.py", 
line 183, in wrapped
  LOG.debug("Retry wrapper got retriable exception: %s", e)
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 
line 220, in __exit__
  self.force_reraise()
File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", 
line 196, in force_reraise
  six.reraise(self.type_, self.value, self.tb)
File "/usr/local/lib/python2.7/dist-packages/neutron_lib/db/api.py", 
line 179, in wrapped
  return f(*dup_args, **dup_kwargs)
File "/opt/stack/neutron/neutron/plugins/ml2/drivers/type_vlan.py", 
line 182, in _get_network_segment_ranges_from_db
  ctx, network_type=self.get_type()))
File "/opt/stack/neutron/neutro

[Yahoo-eng-team] [Bug 1845300] Re: FWaaS tempest tests are failing

2021-12-10 Thread Slawek Kaplonski
I am closing this as the fwaas project is not maintained anymore.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845300

Title:
  FWaaS tempest tests are failing

Status in neutron:
  Won't Fix

Bug description:
  I noticed at least 2 or 3 times that the fwaas tempest tests defined in the 
neutron-tempest-plugin repo are failing from time to time.
  It seems that the failures are due to errors like:

  traceback-1: {{{
  Traceback (most recent call last):
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 131, in _try_delete_firewall_group
  self.firewall_groups_client.delete_firewall_group(fwg_id)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 38, in delete_firewall_group
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 41, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 314, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 679, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 800, in 
_error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: Conflict with state of target resource
  Details: {'type': 'FirewallGroupInUse', 'message': 'Firewall group 
4d900386-2806-4879-bddf-d1f77fbdcb2c is still active.', 'detail': ''}
  }}}

  traceback-2: {{{
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 84, 
in call_and_ignore_notfound_exc
  return func(*args, **kwargs)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 100, in delete_firewall_policy
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 41, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 314, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 679, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 800, in 
_error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: Conflict with state of target resource
  Details: {'type': 'FirewallPolicyInUse', 'message': 'Firewall policy 
3fab4478-ffa8-41bf-a1ce-f0a2d82af518 is being used.', 'detail': ''}
  }}}

  traceback-3: {{{
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 84, 
in call_and_ignore_notfound_exc
  return func(*args, **kwargs)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 100, in delete_firewall_policy
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 41, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 314, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 679, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 800, in 
_error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: Conflict with state of target resource
  Details: {'type': 'FirewallPolicyInUse', 'message': 'Firewall policy 
725c2ef0-5fcb-4149-98e8-e52b8437fa4a is being used.', 'detail': ''}
  }}}

  traceback-4: {{{
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 84, 
in call_and_ignore_notfound_exc
  return func(*args, **kwargs)
File 
"/opt/stack/neutron-tempest-plugin/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 75, in delete_firewall_rule
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 41, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 314, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 679, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 800,

[Yahoo-eng-team] [Bug 1843418] Re: Functional tests shouldn't fail if kill command will have "no such process" during cleanup

2021-12-10 Thread Slawek Kaplonski
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1843418

Title:
  Functional tests shouldn't fail if kill command will have "no such
  process" during cleanup

Status in neutron:
  Fix Released

Bug description:
  When functional tests are doing cleanup and want to kill a process which 
doesn't exist, it shouldn't cause a test failure.
  Example of such an issue: 
https://e64dae2a05fb8d47b133-8b2f01caaee154d496c517b9dc32a557.ssl.cf5.rackcdn.com/678905/6/check/neutron-functional/965e654/testr_results.html.gz
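
  A generic sketch of how such a cleanup can tolerate an already-gone process
  (the helper name is made up for illustration; this is not the actual neutron
  fix):

    import os
    import signal

    def kill_ignoring_missing(pid):
        """Kill a process during test cleanup, tolerating that it is gone."""
        try:
            os.kill(pid, signal.SIGKILL)
        except ProcessLookupError:
            # "No such process": it already exited, which is fine for cleanup.
            pass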

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1843418/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837012] Re: Neutron-dynamic-routing tempest tests can't be run all in one job

2021-12-10 Thread Slawek Kaplonski
This seems to be fixed already. All tests are run in one job currently.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837012

Title:
  Neutron-dynamic-routing tempest tests can't be run all in one job

Status in neutron:
  Fix Released

Bug description:
  Scenario tests for neutron-dynamic-routing can't all be run in one job with a 
single "tempest run" command.
  This is because for each class of scenario tests ("base", "ipv4" and "ipv6") 
a bridge and a docker network are created, see for example 
https://github.com/openstack/neutron-dynamic-routing/blob/master/neutron_dynamic_routing/tests/tempest/scenario/basic/base.py#L71
 and each of those test classes always uses the same subnet.
  That causes problems with docker, which returns errors like:

  2019-07-17 21:03:40.947 7349 INFO os_ken.tests.integrated.common.docker_base 
[-] sudo docker network create --driver bridge  --gateway 192.168.10.1 --subnet 
192.168.10.0/24 br-docker-basic
  2019-07-17 21:03:40.952 7349 ERROR os_ken.tests.integrated.common.docker_base 
[-] stdout: 
  2019-07-17 21:03:40.952 7349 ERROR os_ken.tests.integrated.common.docker_base 
[-] stderr: Error response from daemon: Pool overlaps with other one on this 
address space
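
  A minimal reproduction sketch of the overlap; it wraps the same "docker
  network create" command shown above (the second network name is assumed for
  illustration):

    import subprocess

    def create_docker_network(name, subnet, gateway):
        # Roughly the command the test harness runs once per scenario class.
        cmd = ['sudo', 'docker', 'network', 'create', '--driver', 'bridge',
               '--gateway', gateway, '--subnet', subnet, name]
        return subprocess.run(cmd, capture_output=True, text=True)

    create_docker_network('br-docker-basic', '192.168.10.0/24', '192.168.10.1')
    # A second network on the same subnet fails with
    # "Pool overlaps with other one on this address space".
    second = create_docker_network('br-docker-ipv4', '192.168.10.0/24',
                                   '192.168.10.1')
    print(second.returncode, second.stderr)

  Giving each scenario test class its own subnet, or removing the docker
  network between classes, would presumably avoid the overlap.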

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1837012/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1856523] Re: Sometimes instance can't get public keys due to cirros metadata request failure

2021-12-10 Thread Slawek Kaplonski
This was probably fixed in the Cirros image, so I'm closing this bug now.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1856523

Title:
  Sometimes instance can't get public keys due to cirros metadata
  request failure

Status in neutron:
  Fix Released

Bug description:
  On our CI we see random failures of random jobs related to getting public 
keys from metadata.
  As an example I would like to show this change [1]. In addition to the 
current implementation of the tests, it adds three instances and test security 
groups.

  Sometimes random jobs like:
  neutron-tempest-plugin-scenario-linuxbridge
  neutron-tempest-plugin-scenario-openvswitch-iptables_hybrid-stein
  and others fail when checking SSH connectivity to a just-created instance.

  * It didn't work because the instance refused public key authentication, 
example:
  

  2019-12-13 14:43:48,694 31953 INFO [tempest.lib.common.ssh] Creating ssh 
connection to '172.24.5.186:22' as 'cirros' with public key authentication
  2019-12-13 14:43:48,704 31953 WARNING  [tempest.lib.common.ssh] Failed to 
establish authenticated ssh connection to cirros@172.24.5.186 ([Errno None] 
Unable to connect to port 22 on 172.24.5.186). Number attempts: 1. Retry after 
2 seconds.
  


  * While checking the instance console log we can see that the instance 
failed to get the public keys list on boot (FIP: 172.24.5.186, Instance IP: 
10.1.0.10):
  -
  cirros-ds 'net' up at 11.67
  checking http://169.254.169.254/2009-04-04/instance-id
  successful after 1/20 tries: up 12.13. iid=i-003c
  failed to get http://169.254.169.254/2009-04-04/meta-data/public-keys
  warning: no ec2 metadata for public-keys
  -

  * In addition to the current Neutron logs, I added more debug logging to the 
Neutron Metadata Agent in order to find out whether the response from Nova 
Metadata is empty, and then I verified the Neutron Metadata logs related to 
this instance:
  -
  Dec 13 14:43:49.572244 ubuntu-bionic-rax-ord-0013383633 
neutron-metadata-agent[17234]: DEBUG neutron.agent.metadata.agent [-] REQUEST: 
HEADERS {'X-Forwarded-For': '10.1.0.10', 'X-Instance-ID': 
'e77a44fc-249f-4c85-8f9c-40f299534c12', 'X-Tenant-ID': 
'8975f89b119046b48f5a674fa6a296c3', 'X-Instance-ID-Signature': 
'908153d94493c68c9cb8fae8aa78fab18244a260d7fe55b5b707ed9b369f45cd'} DATA: b'' 
URL: http://10.210.224.88:8775/2009-04-04/meta-data/public-keys {{(pid=17720) 
_proxy_request /opt/stack/neutron/neutron/agent/metadata/agent.py:214}}
  Dec 13 14:43:49.572451 ubuntu-bionic-rax-ord-0013383633 
neutron-metadata-agent[17234]: DEBUG neutron.agent.metadata.agent [-] RESPONSE: 
HEADERS: {'Content-Length': '32', 'Content-Type': 'text/plain; charset=UTF-8', 
'Connection': 'close'} DATA: b'0=tempest-keypair-test-231375855' {{(pid=17720) 
_proxy_request /opt/stack/neutron/neutron/agent/metadata/agent.py:217}}
  Dec 13 14:43:49.572977 ubuntu-bionic-rax-ord-0013383633 
neutron-metadata-agent[17234]: INFO eventlet.wsgi.server [-] 10.1.0.10, 
"GET /2009-04-04/meta-data/public-keys HTTP/1.1" status: 200  len: 168 time: 
0.3123491
  -

  The response was 200 with body: '0=tempest-keypair-test-231375855'. It
  is the same key used for the other instances, so that part worked.

  
  Conclusions:
  1) Neutron metadata responds with 200
  2) Nova metadata responds with 200 and valid data

  Questions:
  1) Is this a cirros issue? Why is there no retry? (See the sketch below.)
  2) Could it be a network issue where the data is not sent back (connection 
dropped during delivery)?
  3) Why don't we have more logs in cirros for this request failure?
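
  Regarding question 1, a minimal retry sketch (cirros uses a shell init
  script, so this Python is only illustrative of the missing behaviour):

    import time
    import urllib.request

    URL = 'http://169.254.169.254/2009-04-04/meta-data/public-keys'

    def fetch_public_keys(retries=5, delay=2):
        # Retry the metadata request instead of giving up after one failure.
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read().decode()
            except OSError:
                if attempt == retries:
                    raise
                time.sleep(delay)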

  [1] https://review.opendev.org/#/c/682369/
  [2] https://review.opendev.org/#/c/698001/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1856523/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

