[Yahoo-eng-team] [Bug 1986545] Re: websockify open redirection unit test broken with Python >= 3.10.6 standard lib

2022-11-30 Thread melanie witt
The fix for the vulnerability in cpython has been backported to older
versions:

https://python-security.readthedocs.io/vuln/http-server-redirection.html

so we will need to fix our unit tests for older branches as well.

** Also affects: nova/yoga
   Importance: Undecided
   Status: New

** Also affects: nova/xena
   Importance: Undecided
   Status: New

** Also affects: nova/victoria
   Importance: Undecided
   Status: New

** Also affects: nova/wallaby
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1986545

Title:
  websockify open redirection unit test broken with Python >= 3.10.6
  standard lib

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  New
Status in OpenStack Compute (nova) xena series:
  New
Status in OpenStack Compute (nova) yoga series:
  New

Bug description:
  Lucas Nussbaum reported this Debian bug:

  https://bugs.debian.org/1017217

  so I started investigating it. It took me a while to understand it was
  due to a change in the Python 3.10.6 standard http/server.py library.

  Running these 2 unit tests against Python 3.10.5 works:

  test_websocketproxy.NovaProxyRequestHandlerTestCase.test_reject_open_redirect
  console.test_websocketproxy.NovaProxyRequestHandlerTestCase.test_reject_open_redirect_3_slashes

  However, under Python 3.10.6, this fails. The reason isn't the
  interpreter itself, but the standard library, which has additional
  open redirection protection.

  Looking at the changelog here:
  https://docs.python.org/3/whatsnew/changelog.html

  we see this issue:
  https://github.com/python/cpython/issues/87389

  which has been addressed by this commit:
  
https://github.com/python/cpython/commit/defaa2b19a9a01c79c1d5641a8aa179bb10ead3f

  If I "fix" the Python 3.10.5 standard library using the 2 lines of
  code of the first hunk of this patch, then I can reproduce the issue.

  I guess the unit tests should probably be skipped when using Python >=
  3.10.6, or adapted somehow. I leave this to the Nova maintainers: for
  the Debian package, I'll just skip these 2 unit tests.
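
  As an illustration only (not nova's actual fix), a version-gated skip
  could look like the minimal sketch below; the test and class names mirror
  the ones listed above, and the version cutoff is a simplification that
  ignores the stdlib backports mentioned in the first comment:

  import sys
  import unittest

  # http/server.py gained its own open-redirect protection in 3.10.6, so
  # the crafted request never reaches the proxy's check and the test's
  # expectation no longer holds.
  STDLIB_HAS_REDIRECT_FIX = sys.version_info >= (3, 10, 6)

  class NovaProxyRequestHandlerTestCase(unittest.TestCase):

      @unittest.skipIf(STDLIB_HAS_REDIRECT_FIX,
                       "stdlib already rejects the open redirect")
      def test_reject_open_redirect(self):
          pass  # body elided; see the real test in nova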

  Cheers,

  Thomas Goirand (zigo)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1986545/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988499] Re: Snap prevents repartitioning Azure resource disk

2022-11-30 Thread Brett Holman
** Also affects: snapd
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1988499

Title:
  Snap prevents repartitioning Azure resource disk

Status in cloud-init:
  New
Status in snapd:
  New

Bug description:
  In an Azure VM, the resource disk (a.k.a. “local” or “temp” disk) has
  a single partition created by the Azure infrastructure. Linux cloud-
  init creates an ext4 file system in that partition and arranges for it
  to be mounted on /mnt. In Ubuntu 20.04 and Ubuntu 22.04 images in the
  Azure Marketplace, snap then creates a bind mount of /mnt for its
  internal purposes.

  Some customers want to use the Azure resource disk for purposes other
  than a file system mounted on /mnt.  If they unmount the disk, and use
  a partition editor to remove or change the partition structure, the
  closing ioctl to re-read the partition table fails because the Linux
  kernel still has a reference to the disk.  The command “blockdev
  --rereadpt” also fails.

  After debugging this problem, it turns out that the umount of /mnt
  only partially succeeds, and that’s why the ioctl thinks the disk is
  still in use.  From what’s visible in the file system, the umount has
  succeeded.  And “lsblk” shows /dev/sdb1 (assuming the resource disk is
  /dev/sdb) as not mounted anywhere.  But this message:

   [   51.885870] EXT4-fs (sdb1): unmounting filesystem.

  is *not* output in dmesg, because internally the Linux kernel still
  holds a reference to the mount and is waiting (forever) for it to go
  away.

  The problem is that snap has a reference to the mount, which was
  created by “snap-confine” doing the bind mount. This behavior of snap
  is specifically for the /mnt mount point (and maybe “/” for the root
  file system?):

  * If I bugger things up a bit so that cloud-init doesn’t force the
  resource disk mount point to be /mnt, and change it to be /mnt2, then
  Ubuntu boots normally, and mounts the resource disk on /mnt2.  At that
  point, I can umount /mnt2, and the umount is done 100%, including the
  “unmounting filesystem” message in dmesg. The ioctl problem in fdisk
  or parted goes away commensurately.

  * If I remove “snap” entirely from my Ubuntu 20.04 installation, the
  problem also goes away.

  * The problem does not occur on RHEL 8.5 or CentOS 8.5, which don’t
  have snap in the first place.

  What’s the right way to solve this problem?  Unfortunately, I’m not
  knowledgeable about snap or what snap-confine is trying to do.

  * Why is snap tracking /mnt?  Is there a way to tell snap not to track
  /mnt?

  * Or is there some design flaw in snap that causes the mount on /mnt
  to not work normally?

  Longer run, we’re looking at enhancing cloud-init with an option to
  not mount the resource disk at all, which should avoid the problem.
  But still, there should be a way for the mount of the resource disk on
  /mnt to work normally.
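
  As a purely illustrative diagnostic (not part of cloud-init or snapd),
  one way to see which processes still hold a view of the mount is to
  scan every process's mount namespace through /proc; /dev/sdb1 is the
  assumed device name from above:

  import glob

  TARGET = "/dev/sdb1"

  for path in glob.glob("/proc/[0-9]*/mountinfo"):
      pid = path.split("/")[2]
      try:
          with open(path) as f:
              for line in f:
                  if TARGET in line:
                      print(f"pid {pid} still sees {TARGET}: {line.strip()}")
      except OSError:
          pass  # the process exited while we were scanning

  A snap-confine bind mount lives in a private mount namespace, so it
  shows up in that process's mountinfo even though the host's
  /proc/mounts looks clean.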

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1988499/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998353] [NEW] Fullstack: test_packet_rate_limit_qos_policy_rule_lifecycle failing

2022-11-30 Thread Lajos Katona
Public bug reported:

neutron.tests.fullstack.test_qos.TestPacketRateLimitQoSOvs.test_packet_rate_limit_qos_policy_rule_lifecycle
(both egress and ingress directions) is failing in neutron-fullstack-with-uwsgi
(perhaps in other fullstack jobs also, but I checked this one):

https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi=master=0

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998353

Title:
  Fullstack: test_packet_rate_limit_qos_policy_rule_lifecycle failing

Status in neutron:
  New

Bug description:
  
neutron.tests.fullstack.test_qos.TestPacketRateLimitQoSOvs.test_packet_rate_limit_qos_policy_rule_lifecycle
  (both egress and ingress directions) is failing in
  neutron-fullstack-with-uwsgi (perhaps in other fullstack jobs also, but I
  checked this one):

  https://zuul.opendev.org/t/openstack/builds?job_name=neutron-fullstack-with-uwsgi=master=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998353/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998343] [NEW] Unittest test_distributed_port_binding_deleted_by_port_deletion fails: DeprecationWarning('ssl.PROTOCOL_TLS is deprecated')

2022-11-30 Thread Anton Kurbatov
Public bug reported:

I got an error in the test_distributed_port_binding_deleted_by_port_deletion
test on my CI run [1]. I also found the same failure in another CI run [2].

FAIL: 
neutron.tests.unit.plugins.ml2.test_db.Ml2DvrDBTestCase.test_distributed_port_binding_deleted_by_port_deletion
tags: worker-0
--
stderr: {{{
/home/zuul/src/opendev.org/openstack/neutron/.tox/shared/lib/python3.10/site-packages/ovs/stream.py:794:
 DeprecationWarning: ssl.PROTOCOL_TLS is deprecated
  ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
/home/zuul/src/opendev.org/openstack/neutron/.tox/shared/lib/python3.10/site-packages/ovs/stream.py:794:
 DeprecationWarning: ssl.PROTOCOL_TLS is deprecated
  ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
}}}

Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/plugins/ml2/test_db.py",
 line 535, in test_distributed_port_binding_deleted_by_port_deletion
self.assertEqual(
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/shared/lib/python3.10/site-packages/testtools/testcase.py",
 line 393, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/shared/lib/python3.10/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: [] != [<warnings.WarningMessage>]: Warnings: {message : DeprecationWarning('ssl.PROTOCOL_TLS is deprecated'), category : 'DeprecationWarning', filename : '/home/zuul/src/opendev.org/openstack/neutron/.tox/shared/lib/python3.10/site-packages/ovs/stream.py', lineno : 794, line : None}

I have spent some time and seem to have found the reason for this behavior on 
python 3.10.
First of all, since python3.10 we get a warning when using ssl.PROTOCOL_TLS [3]:

[root@node0 neutron]# python
Python 3.10.8+ (heads/3.10-dirty:ca3c480, Nov 30 2022, 12:16:40) [GCC 4.8.5 
20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
>>> ssl.SSLContext(ssl.PROTOCOL_SSLv23)
<stdin>:1: DeprecationWarning: ssl.PROTOCOL_TLS is deprecated

>>>

I also found that the `test_ssl_connection` test case affects catching warnings 
in the test_distributed_port_binding_deleted_by_port_deletion test case.
I was then able to reproduce the issue like this:

[root@node0 neutron]# cat run_list.txt
neutron.tests.unit.agent.ovsdb.native.test_connection.ConfigureSslConnTestCase.test_ssl_connection
neutron.tests.unit.plugins.ml2.test_db.Ml2DvrDBTestCase.test_distributed_port_binding_deleted_by_port_deletion
[root@node0 neutron]# git diff
diff --git a/neutron/tests/unit/plugins/ml2/test_db.py 
b/neutron/tests/unit/plugins/ml2/test_db.py
index 578a01a..d837871 100644
--- a/neutron/tests/unit/plugins/ml2/test_db.py
+++ b/neutron/tests/unit/plugins/ml2/test_db.py
@@ -531,6 +531,8 @@ class Ml2DvrDBTestCase(testlib_api.SqlTestCase):
 router_id='router_id',
 status=constants.PORT_STATUS_DOWN).create()
 with warnings.catch_warnings(record=True) as warning_list:
+import time
+time.sleep(0.1)
 port.delete()
 self.assertEqual(
 [], warning_list,
[root@node0 neutron]# source .tox/shared/bin/activate
(shared) [root@node0 neutron]# stestr run --concurrency=1 --load-list 
./run_list.txt
...
neutron.tests.unit.plugins.ml2.test_db.Ml2DvrDBTestCase.test_distributed_port_binding_deleted_by_port_deletion
--
Captured traceback:
~~~
Traceback (most recent call last):
  File "/root/github/neutron/neutron/tests/base.py", line 182, in func
return f(self, *args, **kwargs)
  File "/root/github/neutron/neutron/tests/unit/plugins/ml2/test_db.py", 
line 537, in test_distributed_port_binding_deleted_by_port_deletion
self.assertEqual(
  File 
"/root/github/neutron/.tox/shared/lib/python3.10/site-packages/testtools/testcase.py",
 line 393, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/root/github/neutron/.tox/shared/lib/python3.10/site-packages/testtools/testcase.py",
 line 480, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: [] != [<warnings.WarningMessage>]: Warnings: {message : DeprecationWarning('ssl.PROTOCOL_TLS is deprecated'), category : 'DeprecationWarning', filename : '/root/github/neutron/.tox/shared/lib/python3.10/site-packages/ovs/stream.py', lineno : 794, line : None}

==
Totals
==
Ran: 2 tests in 1.3571 sec.
 - Passed: 1
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 1
Sum of execute time for each test: 1.3053 sec.
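
A self-contained sketch of one way to harden such an assertion
(illustrative only, not the fix that actually merged in neutron): only
fail on warnings that originate from neutron's own code, so a
DeprecationWarning emitted asynchronously by the third-party ovs library
cannot leak into the recorded list.

import warnings

def operation_under_test():
    # Stands in for port.delete() in the real test; emits a warning that
    # looks like the one raised from ovs/stream.py.
    warnings.warn_explicit(
        "ssl.PROTOCOL_TLS is deprecated", DeprecationWarning,
        filename="site-packages/ovs/stream.py", lineno=794)

with warnings.catch_warnings(record=True) as warning_list:
    warnings.simplefilter("always")
    operation_under_test()

# Only warnings raised from neutron source files should fail the test.
neutron_warnings = [w for w in warning_list if "/neutron/" in w.filename]
assert neutron_warnings == [], neutron_warnings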


[1] 

[Yahoo-eng-team] [Bug 1998337] [NEW] test_dvr_router_lifecycle_ha_with_snat_with_fips fails occasionally in the gate

2022-11-30 Thread Bence Romsics
Public bug reported:

Opening this report to track the following test that fails occasionally
in the gate:

job neutron-functional-with-uwsgi
test neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fips

Sample traceback:

ft1.31: neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fips
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
return f(self, *args, **kwargs)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 208, in test_dvr_router_lifecycle_ha_with_snat_with_fips
self._dvr_router_lifecycle(enable_ha=True, enable_snat=True)
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 626, in _dvr_router_lifecycle
self._assert_dvr_floating_ips(router, snat_bound_fip=snat_bound_fip,
  File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 791, in _assert_dvr_floating_ips
self.assertTrue(fg_port_created_successfully)
  File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
raise self.failureException(msg)
AssertionError: False is not true

It seems to recur occasionally, for example:

https://675daf3418638bf15806-f7e1f8eddcfdd9404f4b72ab9bb1f324.ssl.cf1.rackcdn.com/865575/1/check/neutron-functional-with-uwsgi/bd983b3/testr_results.html
https://488eb2b76bde124417ee-80e67ec01f194d5b25d665df26ee3378.ssl.cf2.rackcdn.com/839066/18/check/neutron-functional-with-uwsgi/66c7fcc/testr_results.html

There may be more that's similar:

$ logsearch log --project openstack/neutron --result FAILURE --pipeline check --job neutron-functional-with-uwsgi --limit 30 'line 208, in test_dvr_router_lifecycle_ha_with_snat_with_fips'
Builds with matching logs 5/30:
+----------------------------------+---------------------+-----------------------------------+--------+
| uuid                             | finished            | review                            | branch |
+----------------------------------+---------------------+-----------------------------------+--------+
| 1d265722d23548d6930486699202347d | 2022-11-30T13:42:28 | https://review.opendev.org/863881 | master |
| cb2a2d7161764d5f823a09528eedc44c | 2022-11-28T16:47:20 | https://review.opendev.org/865018 | master |
| 66c7fcc56a5347648732bfcb90341ef5 | 2022-11-27T00:55:10 | https://review.opendev.org/839066 | master |
| 85b3b709e9d54718a4f0847da5b4b2df | 2022-11-25T10:00:01 | https://review.opendev.org/865018 | master |
| bd983b367ac441c190e38dcf1fadc87f | 2022-11-24T16:17:06 | https://review.opendev.org/865575 | master |
+----------------------------------+---------------------+-----------------------------------+--------+

** Affects: neutron
 Importance: Medium
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998337

Title:
  test_dvr_router_lifecycle_ha_with_snat_with_fips fails occasionally in
  the gate

Status in neutron:
  New

Bug description:
  Opening this report to track the following test that fails
  occasionally in the gate:

  job neutron-functional-with-uwsgi
  test neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fips

  Sample traceback:

  ft1.31: neutron.tests.functional.agent.l3.extensions.qos.test_fip_qos_extension.TestL3AgentFipQosExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fips
  testtools.testresult.real._StringException: Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 208, in test_dvr_router_lifecycle_ha_with_snat_with_fips
  self._dvr_router_lifecycle(enable_ha=True, enable_snat=True)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 626, in _dvr_router_lifecycle
  self._assert_dvr_floating_ips(router, snat_bound_fip=snat_bound_fip,
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/l3/test_dvr_router.py",
 line 791, in 

[Yahoo-eng-team] [Bug 1379663] Re: After upgrading - ovs-vswitchd cannot add existing ports

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1379663

Title:
  After upgrading - ovs-vswitchd cannot add existing ports

Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Expired

Bug description:
  Hi there,

  After upgrading (stop all services, yum upgrade, db sync) from an older
  Icehouse build to the latest Icehouse build, my compute node
  (specifically openstack-nova-compute) cannot be started. I deployed a
  number of instances before the upgrade, and after upgrading,
  openstack-nova-compute refuses to start up. The logs seem to point to
  some issue with ovs-vswitchd being unable to bind ports of the existing
  instances.

  All other services at controller and network nodes seem to be running
  fine. And before upgrading, everything was working fine.

  # rpm -qa | grep openstack-nova
  openstack-nova-compute-2014.1.2-1.el6.noarch
  openstack-nova-common-2014.1.2-1.el6.noarch

  At compute.log:

  2014-10-10 14:37:39.372 24897 ERROR nova.openstack.common.threadgroup [-] 
Unexpected vif_type=binding_failed
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
Traceback (most recent call last):
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
117, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
x.wait()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py", line 
49, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return self.thread.wait()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 173, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return self._exit_event.wait()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 121, in wait
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return hubs.get_hub().switch()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 293, in switch
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
return self.greenlet.switch()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line 212, in main
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
result = function(*args, **kwargs)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py", line 486, 
in run_service
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
service.start()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/service.py", line 163, in start
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self.manager.init_host()
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1044, in 
init_host
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self._init_instance(context, instance)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 902, in 
_init_instance
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self.driver.plug_vifs(instance, net_info)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 860, in 
plug_vifs
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
self.vif_driver.plug(instance, vif)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/vif.py", line 616, in plug
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
_("Unexpected vif_type=%s") % vif_type)
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup 
NovaException: Unexpected vif_type=binding_failed
  2014-10-10 14:37:39.372 24897 TRACE nova.openstack.common.threadgroup

  
  At ovs-vswitchd.log:

  2014-10-10T06:37:38Z|00377|dpif|WARN|Dropped 9 log messages in last 17953 
seconds (most recently, 17952 seconds ago) due to excessive rate
  2014-10-10T06:37:38Z|00378|dpif|WARN|system@ovs-system: 

[Yahoo-eng-team] [Bug 1384660] Re: Idle rpc traffic with a large number of instances causes failures

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384660

Title:
  Idle rpc traffic with a large number of instances causes failures

Status in Ubuntu Cloud Archive:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  OpenStack Juno (Neutron ML2+OVS/l2pop/neutron security groups), Ubuntu
  14.04

  500 compute node cloud, running 4.5k active instances (can't get it
  any further right now).

  As the number of instances in the cloud increases, the idle load on
  the neutron-server servers (4 of them, all with 4 cores/8 threads and a
  suitable *_worker configuration) increases from nothing to 30; the db
  call get_port_and_sgs is serviced around 10 times per second on each
  server at this point. Other things are also happening - I've attached
  the last 1000 lines of the server log with debug enabled.

  The result is that it's no longer possible to create new instances, as
  the rpc calls and api thread just don't get onto CPU, resulting in VIF
  plugging timeouts on compute nodes, and ERROR'ed instances.

  ProblemType: Bug
  DistroRelease: Ubuntu 14.04
  Package: neutron-common 1:2014.2-0ubuntu1~cloud0 [origin: Canonical]
  ProcVersionSignature: User Name 3.13.0-35.62-generic 3.13.11.6
  Uname: Linux 3.13.0-35-generic x86_64
  ApportVersion: 2.14.1-0ubuntu3.5
  Architecture: amd64
  CrashDB:
   {
  "impl": "launchpad",
  "project": "cloud-archive",
  "bug_pattern_url": 
"http://people.canonical.com/~ubuntu-archive/bugpatterns/bugpatterns.xml;,
   }
  Date: Thu Oct 23 10:22:14 2014
  PackageArchitecture: all
  SourcePackage: neutron
  UpgradeStatus: No upgrade log present (probably fresh install)
  modified.conffile..etc.neutron.api.paste.ini: [deleted]
  modified.conffile..etc.neutron.fwaas.driver.ini: [deleted]
  modified.conffile..etc.neutron.l3.agent.ini: [deleted]
  modified.conffile..etc.neutron.neutron.conf: [deleted]
  modified.conffile..etc.neutron.policy.json: [deleted]
  modified.conffile..etc.neutron.rootwrap.conf: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.debug.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.ipset.firewall.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.iptables.firewall.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.l3.filters: [deleted]
  modified.conffile..etc.neutron.rootwrap.d.vpnaas.filters: [deleted]
  modified.conffile..etc.neutron.vpn.agent.ini: [deleted]
  modified.conffile..etc.sudoers.d.neutron.sudoers: [deleted]

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1384660/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1412325] Re: test_subnet_details failed due to MismatchError

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1412325

Title:
  test_subnet_details failed due to MismatchError

Status in neutron:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  A new test test_subnet_details failed on the gate for Nova like the
  following:

  http://logs.openstack.org/92/144092/3/check/check-tempest-dsvm-
  neutron-full/6bcefe8/logs/testr_results.html.gz

  Traceback (most recent call last):
    File "tempest/test.py", line 112, in wrapper
  return f(self, *func_args, **func_kwargs)
    File "tempest/scenario/test_network_basic_ops.py", line 486, in 
test_subnet_details
  self._check_dns_server(ssh_client, [alt_dns_server])
    File "tempest/scenario/test_network_basic_ops.py", line 437, in 
_check_dns_server
  trgt_serv=dns_servers))
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
348, in assertEqual
  self.assertThat(observed, matcher, message)
    File "/usr/local/lib/python2.7/dist-packages/testtools/testcase.py", line 
433, in assertThat
  raise mismatch_error
  MismatchError: set(['9.8.7.6']) != set(['1.2.3.4']): Looking for servers: 
['9.8.7.6']. Retrieved DNS nameservers: ['1.2.3.4'] From host: 172.24.4.98.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1412325/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1426121] Re: vmw nsx: add/remove interface on dvr is broken

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1426121

Title:
  vmw nsx: add/remove interface on dvr is broken

Status in neutron:
  Fix Released
Status in neutron juno series:
  Fix Released
Status in vmware-nsx:
  Fix Released

Bug description:
  When the NSX specific extension was dropped in favour of the community
  one, there was a side effect that unfortunately caused add/remove
  interface operations to fail when executed passing a subnet id.

  This should be fixed soon and backported to Juno.
  Icehouse is not affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1426121/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525881] Re: unnecessary L3 rpcs

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1525881

Title:
  unnecessary L3 rpcs

Status in networking-midonet:
  In Progress
Status in neutron:
  Won't Fix

Bug description:
  The networking-midonet plugin currently issues RPCs for the L3 agent
  unnecessarily; they are unnecessary as the plugin doesn't use the
  neutron L3 agent at all. Many (not all) of these RPCs are a consequence
  of using L3_NAT_db_mixin as a base class. The plugin should use
  L3_NAT_dbonly_mixin instead (see the sketch after the list below).

  In order to do that, a few RPC assumptions in neutron need to be fixed,
  namely:
  - ML2 uses disassociate_floatingips(do_notify), which is only available
  in L3_NAT_db_mixin
  - a few l3 tests unnecessarily assume L3_NAT_db_mixin
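
  A minimal sketch of the base-class change (hypothetical plugin class;
  the real networking-midonet plugin has more bases and methods):

  from neutron.db import l3_db

  class MidonetL3Plugin(l3_db.L3_NAT_dbonly_mixin):
      # L3_NAT_dbonly_mixin keeps the L3 DB logic but drops the L3-agent
      # notification hooks that L3_NAT_db_mixin adds and that this plugin
      # never needs.
      pass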

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-midonet/+bug/1525881/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1560221] Re: No port create notifications received for DHCP subnet creation nor router interface attach

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1560221

Title:
  No port create notifications received for DHCP subnet creation nor
  router interface attach

Status in neutron:
  Won't Fix
Status in OpenStack Searchlight:
  New

Bug description:
  Creating a subnet with DHCP enabled either creates or updates a port
  with device_owner network:dhcp matching the network id to which the
  subnet belongs. While there is a notification received for the subnet
  creation, the port creation or update is implicit and has not
  necessarily taken place when the subnet creation event is received
  (and similarly we don't get a notification that the port has changed
  or been deleted when the subnet has DHCP disabled).

  My specific use case is that we're trying to index resource
  create/update/delete events for searchlight and we cannot track the
  network DHCP ports in the same way as we can ports created explicitly
  or as part of nova instance boots.

  The same problem exists for router interface:attach events, though
  with a difference that we do at least get a notification indicating
  the port id created. It would be nice if the ports created when
  attaching a router to a network also sent port.create notifications.

  Tested under mitaka RC-1 (or very close to) with 'messaging' as the
  notification driver.
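
  For comparison, consuming the in-process callback events that neutron
  does emit looks roughly like the sketch below (callback signatures vary
  by release, so treat this as an assumption-laden illustration); it does
  not solve this report's problem, because the implicit DHCP port
  create/update never reaches the external 'messaging' notifications at
  all:

  from neutron_lib.callbacks import events, registry, resources

  def log_port_event(resource, event, trigger, payload=None):
      # payload.latest_state holds the port dict in recent neutron-lib
      # releases; older releases passed the port via keyword arguments.
      port = payload.latest_state
      print(event, port['id'], port.get('device_owner'))

  registry.subscribe(log_port_event, resources.PORT, events.AFTER_CREATE)
  registry.subscribe(log_port_event, resources.PORT, events.AFTER_UPDATE)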

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1560221/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1605338] Re: Flat Provider Network doesn't work if server have IPv6

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1605338

Title:
  Flat Provider Network doesn't work if server have IPv6

Status in neutron:
  Won't Fix

Bug description:
  Hey guys,

   I believe that the following problem:

   https://ask.openstack.org/en/question/92057/after-creating-selfservice-subnet-neutron-linux-bridge-log-shows-error-rtnetlink-answers-permission-denied/

   https://ask.openstack.org/en/question/68190/how-do-i-resolve-rtnetlink-permission-denied-error-encountered-while-running-stacksh/

   ...is a bug on OpenStack on Ubuntu Xenial.

   I faced this problem last night and the workaround (disabling IPv6)
  works.

   Now, "ip -6" shows nothing, I can create neutron networks / subnets,
  no "RTNETLINK answers: Permission denied" anymore.

  Cheers!
  Thiago

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1605338/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1606431] Re: Flavor create and update with service_profiles is not working properly

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1606431

Title:
  Flavor create and update with service_profiles is not working properly

Status in neutron:
  Won't Fix

Bug description:
  1. Creating a new Flavor with service_profiles is not working properly:
  I entered the correct UUID of the service_profile, but the created
  flavor shows an empty service_profiles list.

  2. Another problem happened when I tried to update the existing Flavor
  by inserting the UUID of the service_profile: a 500 Internal Server
  Error occurred.

  Here are my logs, first from creating a new Flavor with
  service_profiles, and then from updating an existing Flavor by
  inserting service_profiles.

  

  Creating a new Flavor with service_profiles

  vagrant@ubuntu:~$ curl -g -i -X POST http://192.168.122.139:9696/v2.0/flavors -H "X-Auth-Token: $TOKEN" -d '{"flavor": {"service_type":"LOADBALANCER","enabled":"true","name":"flavor-test","service_profiles":["8e843ed6-cbd0-4ede-b765-d98e765f1135"]}}'
  HTTP/1.1 201 Created
  Content-Type: application/json
  Content-Length: 173
  X-Openstack-Request-Id: req-6f3047a4-07e9-4dbe-b22a-b61ba167f705
  Date: Mon, 25 Jul 2016 16:12:41 GMT

  {"flavor": {"description": "", "enabled": true, "service_profiles":
  [], "service_type": "LOADBALANCER", "id":
  "79eaa203-5913-41b0-92c5-d6c2a0211a9c", "name": "flavor-test"}}


  -
  Update Existing Flavor By Inserting service_profiles

  vagrant@ubuntu:~$ curl -g -i -X PUT 
http://192.168.122.139:9696/v2.0/flavors/79eaa203-5913-41b0-92c5-d6c2a0211a9c 
-H "X-Auth-Token: $TOKEN" -d '{"flavor": 
{"enabled":"false","service_profiles":["8e843ed6-cbd0-4ede-b765-d98e765f1135"]}}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json
  Content-Length: 150
  X-Openstack-Request-Id: req-d8581b95-a798-4d83-9980-414892553cd3
  Date: Mon, 25 Jul 2016 17:18:56 GMT

  2016-07-25 17:18:54.470 24209 DEBUG oslo_messaging._drivers.amqpdriver [-] 
received message msg_id: 6cc709305c994ebb8cdb5dfaf4c834de reply to 
reply_723cd8289b3c4bee83fce502f5443d1f __call__ 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:194
  2016-07-25 17:18:54.504 24209 DEBUG neutron.callbacks.manager 
[req-b891a9a7-e059-41f9-8989-c7f8e1d05696 - - - - -] Notify callbacks for 
agent, after_update _notify_loop 
/opt/stack/neutron/neutron/callbacks/manager.py:140
  2016-07-25 17:18:54.505 24209 DEBUG neutron.callbacks.manager 
[req-b891a9a7-e059-41f9-8989-c7f8e1d05696 - - - - -] Calling callback 
neutron.services.segments.db._update_segment_host_mapping_for_agent 
_notify_loop /opt/stack/neutron/neutron/callbacks/manager.py:147
  2016-07-25 17:18:54.507 24209 DEBUG oslo_messaging._drivers.amqpdriver 
[req-b891a9a7-e059-41f9-8989-c7f8e1d05696 - - - - -] sending reply msg_id: 
6cc709305c994ebb8cdb5dfaf4c834de reply queue: 
reply_723cd8289b3c4bee83fce502f5443d1f time elapsed: 0.0361202930799s 
_send_reply 
/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:73
  2016-07-25 17:18:56.457 24207 DEBUG neutron.wsgi [-] (24207) accepted 
('192.168.122.139', 56619) server 
/usr/local/lib/python2.7/dist-packages/eventlet/wsgi.py:868
  2016-07-25 17:18:56.650 24207 DEBUG neutron.api.v2.base 
[req-b42a4171-1c3d-4e67-a375-c5ce7c08546b e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de - - -] Request body: {u'flavor': 
{u'service_profiles': [u'8e843ed6-cbd0-4ede-b765-d98e765f1135'], u'enabled': 
u'false'}} prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:649
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource 
[req-b42a4171-1c3d-4e67-a375-c5ce7c08546b e01bc3eadeb045edb02fc6b2af4b5d49 
867929bfedca4a719e17a7f3293845de - - -] update failed: No details.
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource Traceback (most 
recent call last):
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource result = 
method(request=request, **args)
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 571, in update
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 148, in wrapper
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-07-25 17:18:56.674 24207 ERROR neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1518581] Re: [RFE] sriov vxlan network support

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1518581

Title:
  [RFE] sriov vxlan network support

Status in neutron:
  Won't Fix

Bug description:
  1 problem

  Currently vxlan can only support the OVS vxlan network and does not
  support an SR-IOV vxlan network.

  To support the SR-IOV vxlan network, it needs to be possible to create
  a network with the [--provider:physical_network] parameter; the create
  network command should be as follows:

  neutron net-create ext_net --provider:network_type vxlan
  --provider:physical_network physnet1 --provider:segmentation_id 1000

  The current neutron DB doesn't have a physical network field for vxlan
  networks.

  For the OVS vxlan network there is no need for the
  [--provider:physical_network] parameter, so it is an optional parameter
  for creating a vxlan network.

  
  2 how to find this problem
  We had a project which needed to deploy an SR-IOV vxlan network and
  found that the physical network cannot be assigned for a vxlan network.
  It seems that neutron doesn't support an SR-IOV vxlan network and only
  supports the OVS vxlan network.

  3 how to support sr-iov vxlan network
  (1) first it needs to create a vxlan network associated with a physical
  network
  (2) second it needs to get the mapping relationship between VNI and vlan

  4 how this problem is going?
  We have modified the neutron code to support this; we hope to share our
  code and commit it to the neutron project.

  5 significance
  As everyone knows, SR-IOV performance is better than OVS. If SR-IOV
  supports the vxlan network, it has wide potential for vxlan network
  applications.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1518581/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1569107] Re: tests should be enforced for new alembic scripts

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1569107

Title:
  tests should be enforced for new alembic scripts

Status in neutron:
  Won't Fix

Bug description:
  As a new framework for data migration testing has been added in
  neutron, patches that make schema changes should be required to include
  a revision test.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1569107/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1704137] Re: Refactor tag standard attribute

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1704137

Title:
  Refactor tag standard attribute

Status in neutron:
  Fix Released

Bug description:
  The tag resource is a standard attribute, but its implementation
  differs from other standard attributes [1, 2], which StandardAttribute
  provides directly. This difference often causes issues. Therefore, the
  tag resource should be reimplemented like the other standard
  attributes.

  [1]: https://github.com/openstack/neutron/blob/master/neutron/db/models/tag.py
  [2]: 
https://github.com/openstack/neutron/blob/master/neutron/db/standard_attr.py#L55-L64

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1704137/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1583028] Re: [fullstack] Add new tests for router functionality

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1583028

Title:
  [fullstack] Add new tests for router functionality

Status in neutron:
  Won't Fix

Bug description:
   Add fullstack tests for following router(legacy, HA, DVR, HA with
  DVR) use cases

   1) test east west traffic
   2) test snat and floatingip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1583028/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794771] Re: SRIOV trunk port - multiple vlans on same VF

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794771

Title:
  SRIOV trunk port - multiple vlans on same VF

Status in neutron:
  Won't Fix

Bug description:
  Need to implement trunk ports for SRIOV ports. There is an existing
  trunk port API; the sriov agent needs to gather the vlans and configure
  the NIC with the list of vlans that are allowed to carry traffic to the
  VM bound to that VF. Some of the NIC vendors provide a mechanism to
  allow multiple vlan filters on the same VF.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794771/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1817022] Re: [RFE] set inactivity_probe and max_backoff for OVS bridge controller

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1817022

Title:
  [RFE] set inactivity_probe and max_backoff for OVS bridge controller

Status in neutron:
  Fix Released

Bug description:
  It would be useful to have the option to specify inactivity_probe and
  max_backoff for OVS bridge controllers in neutron config.

  OVS documentation says 
(https://github.com/openvswitch/ovs/blob/master/ovn/TODO.rst):
  The default 5 seconds inactivity_probe value is not sufficient and 
ovsdb-server drops the client IDL connections for openstack deployments when 
the neutron server is heavily loaded.

  This indeed can happen under the heavy load in neutron-ovs-agent. This
  was discussed in http://eavesdrop.openstack.org/irclogs/%23openstack-
  neutron/%23openstack-neutron.2017-01-27.log.html#t2017-01-27T02:46:22
  , and the solution was to increase inactivity_probe.

  The alternative is to set these settings manually after each
  neutron-ovs-agent restart (a scripted version follows the commands
  below):
  ovs-vsctl set Controller br-tun inactivity_probe=3
  ovs-vsctl set Controller br-int inactivity_probe=3
  ovs-vsctl set Controller br-ex inactivity_probe=3
  ovs-vsctl set Controller br-tun max_backoff=5000
  ovs-vsctl set Controller br-int max_backoff=5000
  ovs-vsctl set Controller br-ex max_backoff=5000
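
  A scripted version of the same workaround (illustrative; the values are
  assumptions in milliseconds, and the merged neutron change exposes
  these as agent config options instead):

  import subprocess

  for bridge in ("br-int", "br-tun", "br-ex"):
      # "set Controller <bridge>" addresses the controller record attached
      # to that bridge, exactly as the ovs-vsctl commands above do.
      for option in ("inactivity_probe=30000", "max_backoff=5000"):
          subprocess.run(
              ["ovs-vsctl", "set", "Controller", bridge, option],
              check=True)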

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1817022/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1901936] Re: [OVN Octavia Provider] OVN provider loadbalancer failover should fail as unsupported

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1901936

Title:
  [OVN Octavia Provider] OVN provider loadbalancer failover should fail
  as unsupported

Status in neutron:
  Fix Released

Bug description:
  The core OVN code for Loadbalancers does not support a manual failover
  from one gateway node to another.  But running the command with the
  OVN provider driver seems to succeed:

  $ openstack loadbalancer failover $ID
  (no output)

  The code actually does nothing and just returns the provisioning
  status as ACTIVE.

  Since it's unsupported by the underlying technology, the provider
  driver should return an UnsupportedOptionError() to the caller.
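
  A sketch of the requested behavior using octavia-lib's driver
  exceptions (the method name follows the Octavia provider driver
  interface; the real driver class has many more methods):

  from octavia_lib.api.drivers import exceptions as driver_exceptions

  class OvnProviderDriver(object):
      def loadbalancer_failover(self, loadbalancer_id):
          # Failover is unsupported by core OVN, so report that instead
          # of silently returning ACTIVE.
          msg = 'OVN provider does not support loadbalancer failover'
          raise driver_exceptions.UnsupportedOptionError(
              user_fault_string=msg,
              operator_fault_string=msg)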

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1901936/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1905538] Re: Some OVS bridges may lack OpenFlow10 protocol

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1905538

Title:
  Some OVS bridges may lack OpenFlow10 protocol

Status in neutron:
  Fix Released

Bug description:
  After commit https://review.opendev.org/c/openstack/neutron/+/371455 
OVSAgentBridge.setup_controllers() no longer sets OpenFlow10 protocol for the 
bridge, instead it was moved to ovs_lib.OVSBridge.create(). 
  However some (custom) OVS bridges could be created by nova/os-vif when 
plugging VM interface.
  For such bridges neutron does not call create(), only setup_controllers() - 
as a result such bridges support only OpenFlow13 and ovs-ofctl command fails:

  
  2020-11-24T20:18:38Z|1|vconn|WARN|unix:/var/run/openvswitch/br01711489f-fe.24081.mgmt: version negotiation failed (we support version 0x01, peer supports version 0x04)
  ovs-ofctl: br01711489f-fe: failed to connect to socket (Broken pipe)

  Fix: return the setting of OpenFlow10 (along with OpenFlow13) to
  setup_controllers(). It doesn't hurt even if the bridge already has
  OpenFlow10 in its supported protocols.
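
  A minimal sketch of that fix using neutron's ovs_lib (the bridge name
  is taken from the log above as an example; the exact helper used in the
  merged patch may differ):

  from neutron.agent.common import ovs_lib

  bridge = ovs_lib.OVSBridge("br01711489f-fe")
  # Rewriting the Bridge table's protocols column is harmless even when
  # OpenFlow10 is already present.
  bridge.set_protocols(["OpenFlow10", "OpenFlow13"])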

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1905538/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1906311] Re: dns-integration api extension shouldn't be enabled by ovn_l3 plugin if there is no corresponding ML2 extension driver enabled

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1906311

Title:
  dns-integration api extension shouldn't be enabled by ovn_l3 plugin if
  there is no corresponding ML2 extension driver enabled

Status in neutron:
  Fix Released

Bug description:
  When the ovn_l3 service plugin is used, it loads L3 API extensions from
  the list defined in
  neutron.common.ovn.extensions.ML2_SUPPORTED_API_EXTENSIONS_OVN_L3.
  One of the extensions defined there is the "dns-integration" extension,
  which adds the "dns_name" attribute to the Port resource.
  But in fact this shouldn't be enabled if none of the ML2 extensions
  which provide dns integration is enabled.
  The issue now is that if a user passes the dns_name attribute to
  neutron, it will be accepted, as it is defined by the dns-integration
  extension, but the field will not be visible later because it is not
  processed when no extension driver is enabled.
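
  A sketch of the gating this implies (illustrative; the DNS extension
  driver aliases and the helper name are assumptions based on neutron's
  ML2 configuration):

  from oslo_config import cfg

  DNS_EXTENSION_DRIVERS = {"dns", "dns_domain_ports"}

  def l3_extensions_to_advertise(all_extensions):
      enabled = set(cfg.CONF.ml2.extension_drivers)
      extensions = list(all_extensions)
      if not (enabled & DNS_EXTENSION_DRIVERS):
          # Without a DNS ML2 extension driver, dns_name would be accepted
          # on write but never processed or returned, so don't advertise
          # the extension at all.
          extensions.remove("dns-integration")
      return extensions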

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1906311/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929012] Re: Missing packages in openSuse installation steps

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1929012

Title:
  Missing packages in openSuse installation steps

Status in neutron:
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: zypper install --no-recommends 
openstack-neutron \
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
openstack-neutron-metadata-agent bridge-utils
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below, including
  example input and output:
  openSUSE needs dnsmasq installed for openstack-neutron-dhcp-agent

  zypper install --no-recommends openstack-neutron \
openstack-neutron-server openstack-neutron-linuxbridge-agent \
openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
openstack-neutron-metadata-agent bridge-utils dnsmasq
  If you have a troubleshooting or support issue, use the following  resources:

   - Ask OpenStack: https://ask.openstack.org
   - The mailing list: https://lists.openstack.org
   - IRC: 'openstack' channel on Freenode

  ---
  Release: 17.1.3.dev3 on 2019-09-17 22:17:20
  SHA: 208fea3bf6835ce460dc515d24cd3871c2420276
  Source: 
https://opendev.org/openstack/neutron/src/doc/source/install/controller-install-option2-obs.rst
  URL: 
https://docs.openstack.org/neutron/victoria/install/controller-install-option2-obs.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1929012/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998317] [NEW] CachedResourceConsumerTracker update triggered for every command

2022-11-30 Thread Szymon Wróblewski
Public bug reported:

Some agents don't report resource_versions in agent state dict, since
they don't use OVO.

But for db.agents_db.AgentDbMixin.is_agent_considered_for_versions (and 
get_agents_resource_versions) every agent is considered for versions tracking.
As a result for each api call 
api.rpc.callbacks.version_manager.get_resource_versions is called and triggers 
refresh of agent OVO versions.
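
A sketch of the kind of guard this implies (hypothetical helper; the real
AgentDbMixin method applies more checks than this):

def is_agent_considered_for_versions(agent_dict):
    # Agents that never report resource_versions (non-OVO agents) should
    # not force a refresh of the cached version map on every API call.
    return bool(agent_dict.get('resource_versions'))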

** Affects: neutron
 Importance: Undecided
 Assignee: Szymon Wróblewski (bluex)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Szymon Wróblewski (bluex)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998317

Title:
  CachedResourceConsumerTracker update triggered for every command

Status in neutron:
  New

Bug description:
  Some agents don't report resource_versions in agent state dict, since
  they don't use OVO.

  But for db.agents_db.AgentDbMixin.is_agent_considered_for_versions (and 
get_agents_resource_versions) every agent is considered for versions tracking.
  As a result for each api call 
api.rpc.callbacks.version_manager.get_resource_versions is called and triggers 
refresh of agent OVO versions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998317/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585907] Re: 'used_ips' field of 'net-ip-availability-list' command increased by 1 when subnet added into router. In fact, before subnet added into the router, 'total_ips' of network does not contain 'gateway_ip'

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585907

Title:
  'used_ips' field of 'net-ip-availability-list' command increased by 1
  when subnet added into router. In fact, before subnet added into the
  router, 'total_ips' of network does not contain 'gateway_ip'.

Status in neutron:
  Won't Fix

Bug description:
  In Mitaka,

  The 'used_ips' field of the 'net-ip-availability-list' command
  increases by 1 when a subnet is added into a router. In fact, before
  the subnet is added into the router, 'total_ips' of the network does
  not contain the 'gateway_ip'.

  The experimental process is as follows:
  The 'used_ips' field of the 'net-ip-availability-list' command
  increased by 1 when the subnet was added into the router.
  [root@localhost devstack]# neutron net-create net_test 
  Created a new network:
  +---+--+
  | Field | Value|
  +---+--+
  | admin_state_up| True |
  | availability_zone_hints   |  |
  | availability_zones|  |
  | created_at| 2016-05-26T14:44:36  |
  | description   |  |
  | id| 83dc21b4-715b-4f74-9db6-012ccf13c8ef |
  | ipv4_address_scope|  |
  | ipv6_address_scope|  |
  | mtu   | 4950 |
  | name  | net_test |
  | provider:network_type | vxlan|
  | provider:physical_network |  |
  | provider:segmentation_id  | 1040 |
  | qos_policy_id |  |
  | router:external   | False|
  | shared| False|
  | status| ACTIVE   |
  | subnets   |  |
  | tags  |  |
  | tenant_id | ee4bd2aeeac74bb3ad2b094fc5292cbf |
  | updated_at| 2016-05-26T14:44:36  |
  | vlan_transparent  | False|
  +---+--+
  [root@localhost devstack]# neutron subnet-create net_test 105.1.1.0/24 
--allocation_pool start=105.1.1.6,end=105.1.1.10
  Created a new subnet:
  +---+-+
  | Field | Value   |
  +---+-+
  | allocation_pools  | {"start": "105.1.1.6", "end": "105.1.1.10"} |
  | cidr  | 105.1.1.0/24|
  | created_at| 2016-05-26T14:46:01 |
  | description   | |
  | dns_nameservers   | |
  | enable_dhcp   | True|
  | gateway_ip| 105.1.1.1   |
  | host_routes   | |
  | id| 63aa67d0-55e4-4cb0-8dcb-cdc7d2c83118|
  | ip_version| 4   |
  | ipv6_address_mode | |
  | ipv6_ra_mode  | |
  | name  | |
  | network_id| 83dc21b4-715b-4f74-9db6-012ccf13c8ef|
  | subnetpool_id | |
  | tenant_id | ee4bd2aeeac74bb3ad2b094fc5292cbf|
  | updated_at| 2016-05-26T14:46:01 |
  +---+-+
  [root@localhost devstack]# ip netns |grep 83dc21b4-715b-4f74-9db6-012ccf13c8ef
  qdhcp-83dc21b4-715b-4f74-9db6-012ccf13c8ef

  [root@localhost devstack]# ip netns exec 
qdhcp-83dc21b4-715b-4f74-9db6-012ccf13c8ef ifconfig -a
  lo: flags=73  mtu 65536
  inet 127.0.0.1  netmask 255.0.0.0
  inet6 ::1  prefixlen 128  scopeid 0x10
  loop  txqueuelen 0  (Local Loopback)
  RX packets 0  bytes 0 (0.0 B)
  RX errors 0  dropped 0  overruns 0  frame 0
  TX packets 0  bytes 0 

[Yahoo-eng-team] [Bug 1593788] Re: Without using AZ aware Scheduler, dhcp can recognize AZ, while l3 can't

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1593788

Title:
  Without using AZ aware Scheduler, dhcp can recognize AZ, while l3
  can't

Status in neutron:
  Won't Fix

Bug description:
  I have a env with 3 network nodes.

  The dhcp scheduler is configured as:
  network_scheduler_driver = 
neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
  The l3 scheduler is configured as:
  router_scheduler_driver = 
neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler

  I create 51 legacy routers use the following command:

  neutron router-create router${i} --availability-zone-hint nova2
  neutron router-gateway-set router${i} public

  After router creation, check the routers in the L3 agents; the result
  is recorded at:
  http://paste.openstack.org/show/516896/
  The routers are spawned evenly in the 3 L3 agents.

  
  I create 51 network use the following command:

  neutron net-create net${i} --availability-zone-hint nova2
  neutron subnet-create net${i} ${i}.0.0.0/24

  After network creation, check the networks in the DHCP agents; the
  result is recorded at:
  http://paste.openstack.org/show/516897/
  The networks are only spawned in nova2.

  
  Expected result:
  DHCP and L3 should act the same. I would prefer to let L3 be AZ-aware
  even if the AZ scheduler is not used. The AZ-aware scheduler will share
  load among AZs. For a normal scheduler, the AZ should work as a
  constraint for scheduling.

  This might be fixed during the work at bug 1509046

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1593788/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596750] Re: Not specifying '--target-tenant' argument when executing "neutron rbac-create" returns 'Request Failed: internal server error while processing your request.'

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1596750

Title:
  Not specifying the '--target-tenant' argument when executing "neutron
  rbac-create" returns 'Request Failed: internal server error while
  processing your request.'

Status in neutron:
  Fix Released

Bug description:
  In Mitaka,
  executing "neutron rbac-create" with the '--target-tenant' argument not
  specified returns
  'Request Failed: internal server error while processing your request.',
  and at the same time neutron-server throws an exception,
  while the real reason is that 'target_tenant' cannot be null. I think it
  should return the correct message instead of 'internal server error',
  so the user can take the correct measures according to the prompt.

  [root@localhost devstack]# neutron rbac-create net_xwj_01 --type network 
--action access_as_shared 
  Request Failed: internal server error while processing your request.
  Neutron server returns request_ids: 
['req-b90fd6bf-2e7e-456e-9163-3c496e1ffac7']
  [root@localhost devstack]# 

  
  The details of the exception neutron-server throws can be seen at
  http://paste.openstack.org/show/523692/
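
  A fix along these lines could validate the field up front and return a
  4xx error instead; a minimal sketch (the exception class is from
  neutron-lib, while the helper and its wiring into the API layer are
  hypothetical):

      from neutron_lib import exceptions

      def _validate_rbac_policy(body):
          # Fail fast with a clear client error rather than letting a
          # null target_tenant surface as an internal server error.
          if not body.get('target_tenant'):
              raise exceptions.BadRequest(
                  resource='rbac_policy',
                  msg="'target_tenant' cannot be null")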

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1596750/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1671338] Re: Wrong ordered fw_rules when set them into fw_policy

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1671338

Title:
  Wrong ordered fw_rules when set them into fw_policy

Status in neutron:
  Won't Fix

Bug description:
  There are 3 sample fw_rules on the server, and I expect the order to be
  tcp - ping - denyany.
  openstack firewall group rule list
  
  +--------------------------------------+---------+---------+------------------------------------------------+
  | ID                                   | Name    | Enabled | Summary                                        |
  +--------------------------------------+---------+---------+------------------------------------------------+
  | 563841d1-1ae7-4c74-9231-fab88d44a76c | denyany | True    | ANY,                                           |
  |                                      |         |         |  source(port): none specified(none specified), |
  |                                      |         |         |  dest(port): none specified(none specified),   |
  |                                      |         |         |  deny                                          |
  | ab93b257-9449-4545-b46b-8ec011df14e7 | ping    | True    | ICMP,                                          |
  |                                      |         |         |  source(port): 1.1.1.1(none specified),        |
  |                                      |         |         |  dest(port): none specified(none specified),   |
  |                                      |         |         |  reject                                        |
  | d53d4015-50e4-4fb2-ab0d-1f7231065012 | tcp     | True    | TCP,                                           |
  |                                      |         |         |  source(port): 2.2.2.2(),                      |
  |                                      |         |         |  dest(port): none specified(none specified),   |
  |                                      |         |         |  deny                                          |
  +--------------------------------------+---------+---------+------------------------------------------------+
  Then I set them into the fw_policy in my expected order.
  openstack firewall group policy set test --firewall-rule tcp
  openstack firewall group policy set test --firewall-rule ping
  openstack firewall group policy set test --firewall-rule denyany

  But I saw the order had changed, and the backend driver will apply the
  rules in the wrong order.
  openstack firewall group policy list
  
  +--------------------------------------+------+--------------------------------------------+
  | ID                                   | Name | Firewall Rules                             |
  +--------------------------------------+------+--------------------------------------------+
  | 1b93f923-daff-40cc-8145-a3267769f26d | test | [u'563841d1-1ae7-4c74-9231-fab88d44a76c',  |
  |                                      |      |  u'ab93b257-9449-4545-b46b-8ec011df14e7',  |
  |                                      |      |  u'd53d4015-50e4-4fb2-ab0d-1f7231065012']  |
  +--------------------------------------+------+--------------------------------------------+

  
  Currently, neutron-fwaas accepts the arguments with the full list of
  fw_rules on fw_policy create/update. So this must be an OSC bug.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1671338/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1672485] Re: Neutron APIs respond with Unicode characters ignoring charset in request headers

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1672485

Title:
  Neutron APIs respond with Unicode characters ignoring charset in
  request headers

Status in neutron:
  Won't Fix

Bug description:
  The GET /v2.0/networks/{network_id} API responds with \u-escaped Unicode
  characters even though the request headers specify that the client will
  only accept UTF-8 encoding.
  

  Both of these headers were tried, with the same result:
  “Accept: application/json;charset=UTF-8” and “Accept-Charset: UTF-8”

  
  We observe a similar behavior for the subnets API as well.
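
  The \u escaping itself is standard JSON-library behavior rather than
  anything charset-negotiation related; a minimal illustration with
  Python's json module (not the code path Neutron actually uses):

      import json

      name = "\u7f51\u7edc1external_network"
      print(json.dumps({"name": name}))
      # {"name": "\u7f51\u7edc1external_network"}
      print(json.dumps({"name": name}, ensure_ascii=False))
      # {"name": "网络1external_network"}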

  
  GET Request: http://172.24.35.1:9696/v2.0/networks 
  RESPONSE of GET Request: http://172.24.35.1:9696/v2.0/networks 
  {"networks": [{"status": "ACTIVE", "router:external": true, 
"availability_zone_hints": [], "availability_zones": ["nova"], "description": 
"", "provider:physical_network": "extnet", "subnets": 
["1afd5d0c-b5bd-4267-af39-2d81d44437c4"], "name": 
"\u7f51\u7edc1external_network", "created_at": "2017-01-25T16:44:08", "tags": 
[], "updated_at": "2017-02-24T00:32:41", "provider:network_type": "flat", 
"ipv6_address_scope": null, "tenant_id": "345a6e236e414020b39807a657f41d0f", 
"admin_state_up": true, "ipv4_address_scope": null, "is_default": false, 
"shared": true, "mtu": 1500, "id": "bf1c7000-73d4-4daa-90cf-0fb24360ee61", 
"provider:segmentation_id": null}, {"status": "ACTIVE", "subnets": 
["38166705-27fb-49d2-abc6-4d1d079f6086"], "availability_zone_hints": [], 
"availability_zones": ["nova"], "name": "\u7f51\u7edc2private_network", 
"provider:physical_network": null, "admin_state_up": true, "tenant_id": 
"345a6e236e414020b39807a657f41d0f", "created_at": "2017-01-25T16:45:40", 
"tags": [], "updated_at": "2017-02-24T00:32:55", "ipv6_address_scope": null, 
"description": "", "router:external": false, "provider:network_type": "vxlan", 
"ipv4_address_scope": null, "shared": true, "mtu": 1450, "id": 
"afb9f44a-4eb2-4544-8326-3334c540667f", "provider:segmentation_id": 22}, 
{"status": "ACTIVE", "subnets": ["80888752-c11a-4eb2-b52c-b3c036546073"], 
"availability_zone_hints": [], "availability_zones": ["nova"], "name": 
"backend_network", "provider:physical_network": null, "admin_state_up": true, 
"tenant_id": "345a6e236e414020b39807a657f41d0f", "created_at": 
"2017-01-25T19:00:30", "tags": [], "updated_at": "2017-01-26T01:37:39", 
"ipv6_address_scope": null, "description": "", "router:external": false, 
"provider:network_type": "vxlan", "ipv4_address_scope": null, "shared": true, 
"mtu": 1450, "id": "234b6ee3-5f6e-4841-a409-8240352d2a64", 
"provider:segmentation_id": 13}]}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1672485/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1681945] Re: Neutron Agent error "constraint violation"

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1681945

Title:
  Neutron Agent error "constraint violation"

Status in neutron:
  Won't Fix

Bug description:
  screen-q-agt.txt:2017-04-11 20:40:50.354 8722 ERROR
  neutron.agent.ovsdb.impl_vsctl [req-e330d428-6ab8-4594-8805-139a8ffa67b5 - -]
  Unable to execute ['ovs-vsctl', '--timeout=10', '--oneline',
  '--format=json', '--', '--id=@manager', 'create', 'Manager',
  'target="ptcp:6640:127.0.0.1"', '--', 'add', 'Open_vSwitch', '.',
  'manager_options', '@manager']. Exception: Exit code: 1; Stdin: ;
  Stdout: ; Stderr: ovs-vsctl: transaction error: {"details":"Transaction
  causes multiple rows in \"Manager\" table to have identical values
  (\"ptcp:6640:127.0.0.1\") for index on column \"target\".  First row,
  with UUID c84c9746-c000-45b5-b5df-c20a39aa4569, was inserted by this
  transaction.  Second row, with UUID b80bee4e-6bea-4dcd-a3ac-12aa027dccf5,
  existed in the database before this transaction and was not modified by
  the transaction.","error":"constraint violation"}

  Triggered by: https://review.openstack.org/412397 patchset 45
  15:20:24 Detailed logs: 
https://stash.opencrowbar.org/logs/97/412397/45/check/dell-hw-tempest-dsvm-ironic-pxe_ipmitool/bd05cfb/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1681945/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1683514] Re: tests should ensure expand scripts don't have non-nullable columns

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683514

Title:
  tests should ensure expand scripts don't have non-nullable columns

Status in neutron:
  Won't Fix

Bug description:
  Currently our tests don't prevent us from adding a non-nullable column
  without a server default. This will prevent older versions of the
  server from inserting records into the table, so we can't let these
  merge.

  It would be good if we had a test in our migration validation logic
  that ensures that any added columns are either nullable or they define
  a server-default.
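
  A minimal standalone sketch of such a check (not wired into Neutron's
  real migration test framework, so the column handling is illustrative):

      import sqlalchemy as sa

      def validate_expand_columns(columns):
          # Every column added by an expand migration must be nullable
          # or define a server default; otherwise older servers break
          # on INSERT.
          offenders = [col.name for col in columns
                       if not col.nullable and col.server_default is None]
          if offenders:
              raise AssertionError(
                  "columns need nullable=True or a server_default: %s"
                  % offenders)

      # This column would fail the check:
      validate_expand_columns([sa.Column('status', sa.String(16),
                                         nullable=False)])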

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1683514/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1687064] Re: ovs logs are trashed with healthcheck messages from ovslib

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: ovsdbapp
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1687064

Title:
  ovs logs are trashed with healthcheck messages from ovslib

Status in neutron:
  Fix Released
Status in ovsdbapp:
  Invalid

Bug description:
  Those messages are all over the place:

  2017-04-28 14:34:06.478 16259 DEBUG ovsdbapp.backend.ovs_idl.vlog [-]
  [POLLIN] on fd 14 __log_wakeup /usr/local/lib/python2.7/dist-
  packages/ovs/poller.py:246

  We should probably suppress them; they don't seem to carry any value.
  If there is value in knowing when something stopped working, maybe
  consider erroring in this failure mode instead of logging on the happy
  path.
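
  In the meantime, an operator-side workaround (not the fix that
  eventually merged) is to raise the noisy logger above DEBUG with the
  standard library, using the logger name from the message above:

      import logging

      logging.getLogger('ovsdbapp.backend.ovs_idl.vlog').setLevel(
          logging.INFO)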

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1687064/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1691969] Re: Functional tests failing due to uid 65534 not present

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1691969

Title:
  Functional tests failing due to uid 65534 not present

Status in neutron:
  Won't Fix

Bug description:
  We're relying on uid 65534 existing to run the functional tests [0];
  if it doesn't exist, the metadata proxy will fail to spawn [1] and so
  will the tests.

  From what I've seen on CentOS 7, a user with uid 65534 exists when
  deploying devstack because the libvirt package is installed and
  nfs-utils is a dependency. nfs-utils creates the nfsnobody user under
  this uid [2] and the functional tests pass.

  We shouldn't rely on this uid to be present on the system. I'll try to
  come up with something to fix the tests but feedback is very welcome
  :)

  Daniel

  [0] 
https://github.com/openstack/neutron/blob/master/neutron/tests/functional/agent/l3/test_metadata_proxy.py#L188
  [1] 
https://github.com/openstack/neutron/blob/03c5283c69f1f5cba8a9f29e7bd7fd306ee0c123/neutron/agent/metadata/driver.py#L100
  [2] http://paste.openstack.org/show/609989/
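
  For illustration, a hypothetical guard in a test's setUp that skips
  instead of failing when the uid is absent (a sketch, not the fix that
  was actually proposed):

      import pwd
      import unittest

      class MetadataProxyTest(unittest.TestCase):
          def setUp(self):
              super().setUp()
              try:
                  pwd.getpwuid(65534)
              except KeyError:
                  self.skipTest("uid 65534 not present on this system")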

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1691969/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794535] Re: Consider all router ports for dvr arp updates

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794535

Title:
  Consider all router ports for dvr arp updates

Status in neutron:
  Won't Fix

Bug description:
  If you have a subnet with 2 routers and you create and then delete
  a VM it may happen that an old ARP entry may persist. If you create
  another VM with the same IP and the ARP update goes to the other
  router you have a VM which isn't reachable via one router since the
  ARP entry is wrong.

  A solution would be to update all router ports and not just one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794535/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915480] Re: DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915480

Title:
  DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Released

Bug description:
  The following code in DeviceManager's fill_dhcp_udp_checksums assumes
  IPv6 is always enabled:

  iptables_mgr = iptables_manager.IptablesManager(
      use_ipv6=True, namespace=namespace)

  When iptables_mgr.apply() is later called, an attempt to add the UDP
  checksum rule for DHCP is done via iptables-save/iptables-restore and
  if IPv6 has been disabled on a hypervisor (eg, by setting
  `ipv6.disable=1` on the kernel command line) then an many-line error
  occurs in the DHCP agent logfile.

  There should be a way of telling the agent that IPv6 is disabled and
  as such, it should ignore trying to set up the UDP checksum rule for
  IPv6. This can be easily achieved given that IptablesManager already
  has support for disabling it.
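
  A minimal sketch of that suggestion, probing the host instead of
  hardcoding use_ipv6=True (assuming oslo.utils' netutils.is_ipv6_enabled()
  helper; the namespace value is a placeholder):

      from neutron.agent.linux import iptables_manager
      from oslo_utils import netutils

      namespace = 'qdhcp-example'  # placeholder
      iptables_mgr = iptables_manager.IptablesManager(
          use_ipv6=netutils.is_ipv6_enabled(),
          namespace=namespace)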

  We've seen this on Rocky on Ubuntu Bionic but it appears the issue
  still exists on the master branch.

  =
  Ubuntu SRU details:

   
  [Impact] 

  See above

  
  [Test Plan]

  Disable IPv6 on a hypervisor.
  sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
  sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
  sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
  Deploy Openstack Ussuri or Victoria with one compute node, using the 
hypervisor which has IPv6 disabled as a neutron gateway.
  Create a network which has a subnetwork with DHCP enabled. Eg:
  openstack network create net1
  openstack subnet create subnet1 --network net1  --subnet-range 192.0.2.0/24  
--dhcp
  Search the `/var/log/neutron/neutron-dhcp-agent.log` (with debug log enabled) 
and check if there are any `ip6tables-restore` commands. Eg:
  sudo grep ip6tables-restore /var/log/neutron/neutron-dhcp-agent.log 
   
  [Where problems could occur]

  Users which were relying on the setting to always be true could be
  affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1915480/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1915856] Re: updating a qos policy on a bound port from None to a policy with min_bw rules does not update placement allocation

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915856

Title:
  updating a qos policy on a bound port  from None to a policy with
  min_bw rules does not update placement allocation

Status in neutron:
  Fix Released

Bug description:
  1) create a port without qos policy (and also the net has no qos policy)
  2) boot a server with the port
  3) set the qos policy of the port to a policy with min_bw rules

  Expected:
  * the port update is rejected, as neutron does not know to which
  resource provider the bandwidth needs to be allocated

  Actual:
  * the port update is accepted and the resource request of the port is
  updated according to the new policy, but the placement allocation is
  not created

  Reproduction with printouts: http://paste.openstack.org/show/802704/

  A variant of the bug:
  1) create a port with qos policy
  2) boot a server with that port
  3) update the port qos policy to None (note it is only possible with the REST 
API, the CLI client does not support $openstack port set --no-qos-policy)
  4) update the qos policy of the port from None to an existing qos policy

  Expected:
  * same as above

  Actual:
  * same as above

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1915856/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1940073] Re: "Unable to create the network. No available network found in maximum allowed attempts." during rally stress test

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1940073

Title:
  "Unable to create the network. No available network found in maximum
  allowed attempts." during rally stress test

Status in neutron:
  Fix Released

Bug description:
  When running rally scenario NeutronNetworks.create_and_delete_networks
  with concurrency of 60 the following error is observed:

  --8<--8<--8<--
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api 
[req-61e1d9da-1bad-4410-94ce-d2945c13a2d5 05971ba84eac4b8eb176bd935909f9d0 
03904310315c47c7b33178da2bfc99a2 - default default] DB exceeded retry limit.: 
oslo_db.exception.RetryRequest: Unable to create the network. No available 
network found in maximum allowed attempts.
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api Traceback (most recent call 
last):
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_db/api.py", line 142, in 
wrapper
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api return f(*args, **kwargs)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron_lib/db/api.py", line 
183, in wrapped
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api LOG.debug("Retry wrapper 
got retriable exception: %s", e)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 
220, in __exit__
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api self.force_reraise()
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_utils/excutils.py", line 
196, in force_reraise
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api six.reraise(self.type_, 
self.value, self.tb)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/six.py", line 703, in reraise
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api raise value
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron_lib/db/api.py", line 
179, in wrapped
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api return f(*dup_args, 
**dup_kwargs)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/plugin.py",
 line 1053, in create_network
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api result, mech_context = 
self._create_network_db(context, network)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/plugin.py",
 line 1012, in _create_network_db
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api tenant_id)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py",
 line 226, in create_network_segments
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api context, filters=filters)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py",
 line 312, in _allocate_tenant_net_segment
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api segment = 
self._allocate_segment(context, network_type, filters)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/managers.py",
 line 308, in _allocate_segment
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api return 
driver.obj.allocate_tenant_segment(context, filters)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/type_tunnel.py",
 line 391, in allocate_tenant_segment
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api alloc = 
self.allocate_partially_specified_segment(context, **filters)
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/plugins/ml2/drivers/helpers.py",
 line 153, in allocate_partially_specified_segment
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api 
exceptions.NoNetworkFoundInMaximumAllowedAttempts())
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api oslo_db.exception.RetryRequest: 
Unable to create the network. No available network found in maximum allowed 
attempts.
  2021-08-16 11:28:41.526 710 ERROR oslo_db.api
  --8<--8<--8<--

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1940073/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1955010] Re: [stable] py27 is not supported in "python-lazy-object-proxy" release 1.7.0

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1955010

Title:
  [stable] py27 is not supported in "python-lazy-object-proxy" release
  1.7.0

Status in neutron:
  Fix Released

Bug description:
  Latest "python-lazy-object-proxy" release 1.7.0 is not compatible with
  py27 [1].

  This library is failing during the installation of dsvm-functional-
  py27 jobs [2].

  Error snippet: https://paste.opendev.org/show/811717/

  [1]https://github.com/ionelmc/python-lazy-object-proxy/issues/61
  
[2]https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3f0/821566/1/check/neutron-functional-python27/3f055a9/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1955010/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1955546] Re: [stable only] Logging service plugin expects wrong arguments in the callback function

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1955546

Title:
  [stable only] Logging service plugin expects wrong arguments in the
  callback function

Status in neutron:
  Fix Released

Bug description:
  To fix bug https://bugs.launchpad.net/neutron/+bug/1939558 we proposed 2
  patches, https://review.opendev.org/c/openstack/neutron/+/810870 and
  https://review.opendev.org/c/openstack/neutron/+/815298/ (those are
  wallaby backports).
  But by mistake we used the new style of payload arguments in the
  callback function in neutron/services/logapi/logging_plugin.py, in the
  _clean_logs_by_resource_id method, so it now fails when that callback
  is called.
  This affects only wallaby and older releases, as Xena and master have
  already moved to the new callbacks, so it works fine there.
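
  Schematically, the two callback conventions involved look like this
  (argument names follow neutron-lib's callback registry; the bodies are
  illustrative):

      def clean_logs_old_style(resource, event, trigger, **kwargs):
          # What stable/wallaby's registry still passes.
          resource_id = kwargs.get('resource_id')

      def clean_logs_payload_style(resource, event, trigger, payload=None):
          # What master/Xena pass; the backport assumed this by mistake.
          resource_id = payload.resource_id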

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1955546/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1982110] Re: [tap-as-a-service] Project requires "webtest" library for testing

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1982110

Title:
  [tap-as-a-service] Project requires "webtest" library for testing

Status in neutron:
  Fix Released

Bug description:
  Project: tap-as-a-service

  According to logs [1], "webtest" library is required for testing but
  not installed [2].

  
  
[1]https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8aa/698035/12/check/openstack-tox-py38/8aa43cd/job-output.txt
  [2]https://paste.opendev.org/show/bFDIyynkXdMAnVPtJJ7K/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1982110/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1385234] Re: OVS tunneling between multiple neutron nodes misconfigured if amqp is restarted

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1385234

Title:
  OVS tunneling between multiple neutron nodes misconfigured if amqp is
  restarted

Status in neutron:
  Won't Fix
Status in oslo.messaging:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  At the completion of a deployment with multiple controllers, by
  observing the GRE tunnels created in OVS by the neutron ovs-agent, one
  will find that some neutron nodes may be missing the tunnels between
  them or to the computes.

  This is due to ovs-agents getting disconnected from the rabbit cluster
  without them noticing and as a result, being unable to receive updates
  from other nodes or publish updates.

  The disconnection may happen following a reconfig of a rabbit node,
  the VIP moving over a different node when rabbit is load balanced, or
  even _during_ tripleo overcloud deployment due to rabbit cluster
  configuration changes.

  This was observed using Kombu 3.0.33 as well as 2.5.

  Use of an aggressive (low) kernel keepalive probe interval seems to
  improve reliability, but a more appropriate fix seems to be heartbeat
  support in oslo.messaging.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1385234/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458682] Re: neutron-db-manage autogenerate does not produce a clean upgrade

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458682

Title:
  neutron-db-manage autogenerate does not produce a clean upgrade

Status in neutron:
  Won't Fix

Bug description:
  neutron-db-manage --autogenerate creates commands to drop all the
  tables without models in the neutron tree. All the FWaaS, LBaaS and
  VPNaaS tables have models in separate repos. This means that people
  who just want to add/change some tables will get a lot of unwanted
  commands in their autogenerated migration script.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1458682/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1533455] Re: Stale processes live after a fanout deleting HA router RPC between L3 agents

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1533455

Title:
  Stale processes live after a fanout deleting HA router RPC between L3
  agents

Status in neutron:
  Won't Fix

Bug description:
  Stale processes live after a fanout deleting HA router RPC between L3
  agents:

  The race happens between L3 agents after a fanout HA router deletion
  RPC. Race scenario:
  1. HA router X was schedulered to L3 agent A and L3 agent B

  2. X in L3 agent A is the master state

  3. a delete X RPC fanout

  4. agent A delete all X HA attributes and processes including
  keepalived

  5. (race) agent B was not ready to process the deleting RPC;
  assume there are a lot of deleting RPCs in the router update
  queue, or anything else causes agent B to delay processing the RPC.

  6. (race) X in agent B is backup state, now it can not get the VRRP
  advertisement from X in agent A because of the 4, so X set it's state
  to master

  8. (race) enqueue_state_change for X in agent B

  9. (race) agent B could process the deleting RPC

  10. (race) X is still in agent B's router_info, so agent B spawns
  the metadata-proxy

  11. (race) agent B do deleting process for HA router X gateway,
  floating IP etc.

  12. (race) agent B remove X from router info

  13. The metadata-proxy for router X in agent B lives on.

  If you have tried to use rally to run create_and_delete_routers, you
  will find that the L3 agent side has some stale metadata-proxy
  processes after the rally test.

  The only way to decide whether to spawn the metadata-proxy is to try
  to get the router from the agent's router_info dict. But
  enqueue_state_change and the router deletion processing can run
  concurrently.


  
  Here are some statistics after running Rally create_and_delete_routers:

  yulong@network2:/opt/openstack/neutron$ ~/ha_resource_state.sh

  neutron-keepalived-state-change count:
  0
  neutron-ns-metadata-proxy count:
  2
  keepalived process count:
  0
  HA router master state count:
  0
  IP monitor count:
  9
  external pids:
  2
  -rwxr-xr-x 1 root root 5 Mar  7 17:21 
/opt/openstack/data/neutron/external/pids/5a83fe00-37c9-45fa-b299-2a1c49ce4bcc.pid
  -rwxr-xr-x 1 root root 5 Mar  7 17:20 
/opt/openstack/data/neutron/external/pids/d9e2bdd3-63ac-4302-bb06-2f66e0308292.pid
  HA interface ip:
  all metadata-proxy router id:
  d9e2bdd3-63ac-4302-bb06-2f66e0308292
  5a83fe00-37c9-45fa-b299-2a1c49ce4bcc
  all ovs ha ports:
  0
  all router namespace:
  0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1533455/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534954] Re: policy rule for update_port is inconsistent

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534954

Title:
  policy rule for update_port is inconsistent

Status in neutron:
  Won't Fix

Bug description:
  For a user from a common tenant, per [1]
  https://github.com/openstack/neutron/blob/master/etc/policy.json#L77 ,
  it seems the network owner shouldn't have the privilege to update a
  port on her/his network if she/he is not the port owner.

  But per [2]
  https://github.com/openstack/neutron/blob/master/etc/policy.json#L78-L85
  , it seems the network owner still has a chance to update port
  attributes such as device_owner, fixed_ips, port_security_enabled,
  mac_learning_enabled and allowed_address_pairs.

  This is inconsistent; per [1], the policy rule
  "rule:admin_or_network_owner" in [2] should be updated.

  For example:
  If a network owner wants to change a tenant user's port fixed_ip, by
  looking at the rule:
    "update_port:fixed_ips": "rule:admin_or_network_owner or
  rule:context_is_advsvc" (one rule in [2])
  she/he may think the policy allows this, since she/he is the network
  owner. But after trying that, she/he will get "privilege denied" as the
  result, because of the rule:
    "update_port": "rule:admin_or_owner or rule:context_is_advsvc" ([1])
  This is confusing.

  ## updated @ 2016-01-19
  What's more, let's use the port attribute fixed_ips to discuss this,
  with tenant-A giving tenant-B the privilege, i.e. adding an rbac rule
  for tenant-B. Currently tenant-A has network net-A, which has the rbac
  action access_as_shared for tenant-B:
  * When a tenant-B user tries to create a port on net-A without any
  attributes specified, that's OK; no error or exception is raised. But
  later, when the tenant-B user tries to update that port's fixed_ips, a
  message ending with "disallowed by policy" is raised, presumably due to
  the policy rule:
    "update_port:fixed_ips": "rule:admin_or_network_owner or
  rule:context_is_advsvc"
  And even the network owner, tenant-A, cannot update that port's
  fixed_ips; the tenant-A user will get the return message "The resource
  could not be found."
  * When a tenant-B user tries to create a port with fixed_ips specified,
  a message ending with "disallowed by policy" is returned. This is
  defined by the policy rule:
    "create_port:fixed_ips": "rule:admin_or_network_owner or
  rule:context_is_advsvc"

  So currently, neither the port owner (tenant-B) nor the network owner
  (tenant-A) can update the port fixed_ips. (Please ignore admin here;
  admin can do anything she/he wants.)

  I checked the history of policy.json for the update_port section, and I
  found https://review.openstack.org/#/c/9845. At a glance, it seems
  people put more focus on port creation, not updating.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534954/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1549078] Re: neutron router static routes ecmp doesn't work

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1549078

Title:
  neutron router static routes ecmp doesn't work

Status in neutron:
  Won't Fix

Bug description:
  Create 1 router and set a gateway network, then create 2 static routes
  with the same prefix.
  ~/devstack$ neutron router-show R2   
  
  +-------------------------+--------------------------------------------------------------+
  | Field                   | Value                                                        |
  +-------------------------+--------------------------------------------------------------+
  | admin_state_up          | True                                                         |
  | availability_zone_hints |                                                              |
  | availability_zones      | nova                                                         |
  | distributed             | False                                                        |
  | external_gateway_info   | {"network_id": "d5d571b9-c7c2-443f-8591-7da9d40435a7",       |
  |                         |  "enable_snat": true, "external_fixed_ips": [{"subnet_id":   |
  |                         |  "4d033b6c-9a05-4aee-8462-ac909f86aa86", "ip_address":       |
  |                         |  "172.16.3.3"}]}                                             |
  | ha                      | False                                                        |
  | id                      | 1b2467b2-3b83-45bb-90e1-4d78c781d275                         |
  | name                    | R2                                                           |
  | routes                  | {"destination": "1.1.1.0/24", "nexthop": "172.16.3.100"}     |
  |                         | {"destination": "1.1.1.0/24", "nexthop": "172.16.3.101"}     |
  | status                  | ACTIVE                                                       |
  | tenant_id               | d1694a63f75a422b9c83e0693effa482                             |
  +-------------------------+--------------------------------------------------------------+

  ~/devstack$ sudo ip netns exec qrouter-1b2467b2-3b83-45bb-90e1-4d78c781d275 
ip neighbor show
  172.16.3.101 dev qg-4515a36e-77 lladdr 00:00:01:00:00:02 PERMANENT
  172.16.3.100 dev qg-4515a36e-77 lladdr 00:00:01:00:00:01 PERMANENT
  fe80::f816:3eff:fe8e:a49f dev qg-4515a36e-77 lladdr fa:16:3e:8e:a4:9f STALE
  fe80::5863:baff:fefe:c15e dev qg-4515a36e-77 lladdr 5a:63:ba:fe:c1:5e STALE
  ~/devstack$ sudo ip netns exec qrouter-1b2467b2-3b83-45bb-90e1-4d78c781d275 
ip route
  default via 172.16.3.1 dev qg-4515a36e-77 
  1.1.1.0/24 via 172.16.3.101 dev qg-4515a36e-77   ### ONLY the 2nd route is installed.
  172.16.3.0/24 dev qg-4515a36e-77  proto kernel  scope link  src 172.16.3.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1549078/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe 

[Yahoo-eng-team] [Bug 1551179] Re: poor network speed due to tso enabled

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1551179

Title:
  poor network speed due to tso enabled

Status in neutron:
  Won't Fix

Bug description:
  In some deployments we are experiencing low network speed. When
  disabling TSO on all virtual interfaces, the problem is fixed. See also
  [1]. I need to dig more into it; anyway, I wonder if we should disable
  TSO automatically every time Neutron creates a vif...

  
  [1] 
http://askubuntu.com/questions/503863/poor-upload-speed-in-kvm-guest-with-virtio-eth-driver-in-openstack-on-3-14

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1551179/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1597752] Re: bulk requests may fail to clean up created resources on failure

2022-11-30 Thread Rodolfo Alonso
*** This bug is a duplicate of bug 1593719 ***
https://bugs.launchpad.net/bugs/1593719

** This bug has been marked a duplicate of bug 1593719
   StaleDataError: DELETE statement on table 'standardattributes' expected to 
delete 1 row(s); 0 were matched

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1597752

Title:
  bulk requests may fail to clean up created resources on failure

Status in neutron:
  Confirmed

Bug description:
  When creating resources in bulk (either with native ml2 support, or
  with emulated support in the base api controller), when an exception
  occurs when creating one of requested resources, an effort is made to
  clean up resources that were successfully created before the failure.

  Sadly, sometimes an exception may occur during cleanup that will
  leave a resource in the database. For example, StaleDataError may be
  raised from SQLAlchemy, partly because of our usage of
  version_id_col; that state is fine and should be handled by retrying
  the operation, but we don't; instead we just swallow the exception
  and leave the resource sitting in the database.

  We should apply the retry mechanism to the cleanup phase to avoid that
  scenario.
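
  A sketch of that direction using neutron-lib's retry decorator (the
  decorator is real; the cleanup helper and its wiring are illustrative):

      from neutron_lib.db import api as db_api

      def _delete_resource(context, resource):
          """Stand-in for the real per-resource delete call."""

      @db_api.retry_db_errors
      def _cleanup_created_resources(context, resources):
          # Retried as a whole, so a transient StaleDataError raised
          # while undoing a bulk create no longer strands rows in the
          # database.
          for resource in resources:
              _delete_resource(context, resource)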

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1597752/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1799124] Re: Path MTU discovery fails for VMs with Floating IP behind DVR routers

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1799124

Title:
  Path MTU discovery fails for VMs with Floating IP behind DVR routers

Status in neutron:
  Won't Fix

Bug description:
  Tenant VMs using an overlay network with an MTU <1500 and less than
  the MTU of the external network are unable to negotiate MTU using Path
  MTU discovery.

  In most cases, since the instance MTU is configured by DHCP, direct
  instance traffic is not affected however if the VM acts as a router
  for other traffic (e.g. to bridge for Docker, LXD, Libvirt, etc) that
  have the MTU set to 1500 (which is the default in most cases) then
  they rely on Path MTU discovery to discover the 1450 MTU.

  On normal routers and DVR routers where the VM does not have a
  floating IP (and thus is routed via a centralized node), this works as
  expected.

  However on DVR routers where the VM has a Floating IP (and thus
  traffic is routed directly from the compute node) this fails. When a
  packet comes from the external network towards the VM with a size
  larger than the overlay network's MTU, the packet is dropped and no
  ICMP too large fragmentation required response is received by the
  external host. This prevents Path MTU discovery from working to fix
  the connection, the result is that most TCP connections will stall if
  they attempt to send more than 1500 bytes, e.g. a simple HTTP
  download.

  My diagnosis is that the qrouter namespace on the compute host has no
  default route. It has a default route in the alternative routing table
  (16) used for traffic matching an "ip rule" which selects all traffic
  being sent from the VM subnet but there is no default route in the
  global default routing table.

  I have not 100% confirmed this part, however, my understanding is that
  since there is no global default route the kernel is unable to select
  a source IP for the ICMP error. Additionally, even if it did somehow
  select a source IP, the appropriate default route appears to be via
  the RFP interface on the 169.254.0.0/16 subnet back to the FIP
  namespace which would not match the rule for traffic from the VM
  subnet to use the alternate routing table anyway.

  In testing, if I add a default route through the rfp interface then
  ICMP errors are sent and Path MTU discovery successfully works,
  allowing TCP connections to work.

  root@maas-node02:~# ip netns exec 
qrouter-1752c73a-be9f-4326-97cc-99dbe0988b3c ip r
  103.245.215.0/28 dev qr-ec03268e-fb  proto kernel  scope link  src 
103.245.215.1 
  169.254.106.114/31 dev rfp-1752c73a-b  proto kernel  scope link  src 
169.254.106.114 

  root@maas-node02:~# ip -n qrouter-1752c73a-be9f-4326-97cc-99dbe0988b3c route 
show table 16
  default via 169.254.106.115 dev rfp-1752c73a-b 

  root@maas-node02:~# ip -n qrouter-1752c73a-be9f-4326-97cc-99dbe0988b3c
  route add default via 169.254.106.115  dev rfp-1752c73a-b

  It's not clear to me if there is an intentional reason not to install
  a default route here, particularly since such a route exists for non-
  DVR routers. I would appreciate input from anyone who knows if this
  was an intentional design decision or simply oversight.

   = Steps to reproduce =

  (1) Deploy a cloud with DVR and global-physnet-mtu=1500
  (2) Create an overlay tenant network (MTU: 1450), VLAN/flat external network 
(MTU: 1500), router.
  (3) Deploy an Ubuntu 16.04 container
  (4) Verify that a large download works; "wget 
http://archive.ubuntu.com/ubuntu-releases/18.04.1/ubuntu-18.04.1-live-server-amd64.iso.zsync;
  (5) Configure LXD to use a private subnet and NAT; "dpkg-reconfigure -pmedium 
lxd" - you can basically just hit yes and accept the defaults
  (6) Create an lxd image, "lxc launch ubuntu:16.04 test", then test a download
  (7) lxc exec test "wget 
http://archive.ubuntu.com/ubuntu-releases/18.04.1/ubuntu-18.04.1-live-server-amd64.iso.zsync;

  An alternative simple test to using LXD/docker is to force the MTU of
  the VM back to 1500. "ip link set eth0 mtu 1500" -- this same scenario
  will fail with DVR and work without DVR.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1799124/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1846606] Re: [mysql8] Unknown column 'public' in 'firewall_rules_v2'

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1846606

Title:
  [mysql8] Unknown column 'public' in 'firewall_rules_v2'

Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released

Bug description:
  I installed a fresh openstack test cluster in eoan today (October 3).

  Neutron database initialization with the command:
  sudo su -s /bin/sh -c "neutron-db-manage
--config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini
 upgrade head" neutron

  failed with error message:
  oslo_db.exception.DBError: (pymysql.err.InternalError) (1054, "Unknown column 
'public' in 'firewall_rules_v2'") [SQL: 'ALTER TABLE firewall_rules_v2 CHANGE 
public shared BOOL NULL'] (Background on this error at: 
http://sqlalche.me/e/2j85)

  In mysql the table and the column exist, with a constraint on the column:
  CONSTRAINT `firewall_rules_v2_chk_1` CHECK ((`public` in (0,1))),

  manually updating the column in mysql failed with the same error message.
  mysql> ALTER TABLE firewall_rules_v2 CHANGE public shared BOOL NULL;
  ERROR 1054 (42S22): Unknown column 'public' in 'check constraint 
firewall_rules_v2_chk_1 expression'

  I guessed the constraint did not like the name of the column being
  changed.
  I removed the column 'public' and created it again, without the
  constraint. Then the ALTER TABLE command worked fine.
  After doing the same for the 'public' columns in the tables
  firewall_groups_v2 and firewall_policies_v2, Neutron could initialize
  the database and all was fine (I could create a network and start an
  instance).
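
  For illustration, an Alembic migration could sidestep the MySQL 8
  behavior the same way: drop the generated CHECK constraint before
  renaming the column (constraint and table names taken from this
  report; the real neutron-fwaas migration may differ):

      import sqlalchemy as sa
      from alembic import op

      def upgrade():
          op.drop_constraint('firewall_rules_v2_chk_1',
                             'firewall_rules_v2', type_='check')
          op.alter_column('firewall_rules_v2', 'public',
                          new_column_name='shared',
                          existing_type=sa.Boolean(), nullable=True)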

  neutron 2:15.0.0~rc1-0ubuntu1
  mysql-server 8.0.16-0ubuntu3

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1846606/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1853071] Re: AMQP disconnects, q-reports-plugin queue grows, leading to DBDeadlocks while trying to update agent heartbeats

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853071

Title:
  AMQP disconnects, q-reports-plugin queue grows, leading to
  DBDeadlocks while trying to update agent heartbeats

Status in neutron:
  Won't Fix

Bug description:
  Since upgrading to Rocky, we have seen this issue pop up in several
  environments, small and large. First we see various AMQP/Rabbit
  related errors - missed heartbeats from neutron-server to rabbitmq,
  then repeated errors such as Socket Closed, Broken Pipe, etc...

  This continues on for a while and all agents report as dead. On the
  agent side, we see RPC timeouts when trying to report state.
  Meanwhile, the q-reports-plugin queue in rabbit grows, to 10k+ -
  presumably because neutron-server can't connect to Rabbit and process
  messages.

  
  Eventually sometime later, we see "DBDeadlock: 
(_mysql_exceptions.OperationalError) (1205, 'Lock wait timeout exceeded; try 
restarting transaction')" errors when neutron-server is trying to update stale 
agent heartbeats. 

  
  Example of various AMQP related errors - all slightly different:

  2019-11-18 07:38:55,200.200 22488 ERROR
  oslo.messaging._drivers.impl_rabbit [req-
  cba0d0fa-8e5a-42f1-a93b-4bb398b22275 - - - - -]
  [eba472a9-021d-4738-801b-7944aad3e3af] AMQP server 127.0.0.1:5672
  closed the connection. Check login credentials: Socket closed:
  IOError: Socket closed

  2019-11-18 07:40:22,454.454 22489 ERROR
  oslo.messaging._drivers.impl_rabbit [-]
  [6eb8d074-02c7-4add-8d91-768cbae60fdc] AMQP server on 127.0.0.1:5672
  is unreachable: Too many heartbeats missed. Trying again in 1
  seconds.: ConnectionForced: Too many heartbeats missed

  2019-11-18 07:40:22,586.586 22489 ERROR
  oslo.messaging._drivers.impl_rabbit
  [req-0b9f092f-27f2-4be1-bdf5-2c5208e54321 - - - - -]
  [4b43df2c-cc3e-4442-807c-dcfd4057cb3d] AMQP server on 127.0.0.1:5672
  is unreachable: [Errno 32] Broken pipe. Trying again in 1 seconds.:
  error: [Errno 32] Broken pipe

  2019-11-18 07:42:06,010.010 22487 WARNING
  oslo.messaging._drivers.impl_rabbit [-] Unexpected error during
  heartbeart thread processing, retrying...: error: [Errno 32] Broken
  pipe

  2019-11-18 07:58:26,692.692 22489 WARNING
  oslo.messaging._drivers.impl_rabbit [-] Unexpected error during
  heartbeart thread processing, retrying...: IOError: Socket closed

  2019-11-18 07:58:26,696.696 22489 ERROR
  oslo.messaging._drivers.impl_rabbit [-]
  [84273ffb-1610-44b1-aff7-d5e4606b7f59] AMQP server on 127.0.0.1:5672
  is unreachable: . Trying again in 1
  seconds.: RecoverableConnectionError: 

  Along with following Broken Pipe stacktrace in oslo messaging:
  http://paste.openstack.org/show/786312/

  This continues for some time (30 min - 1 hour) until all agents report
  as dead, and we see the following errors in the rabbitmq broker logs:
  first missed-heartbeat errors, then handshake_timeout errors:

  2019-11-18 07:41:01.448 [error] <0.6126.71> closing AMQP connection 
<0.6126.71> (127.0.0.1:39817 -> 127.0.0.1:5672 - 
neutron-server:22487:ee468e25-42d7-45b8-aea0-4f6fb58a9034):
  missed heartbeats from client, timeout: 60s
  2019-11-18 07:41:07.665 [error] <0.18727.72> closing AMQP connection 
<0.18727.72> (127.0.0.1:51762 -> 127.0.0.1:5672):
  {handshake_timeout,frame_header}


  
  Eventually we see that the rabbitmq q-reports queue has grown and neutron
  reports the following DBDeadlock stacktrace:

  2019-11-18 08:51:14,505.505 22493 ERROR oslo_db.api
  [req-231004a2-d988-47b3-9730-d6b5276fdcf8 - - - - -] DB exceeded retry
  limit.: DBDeadlock: (_mysql_exceptions.OperationalError) (1205, 'Lock
  wait timeout exceeded; try restarting transaction') [SQL: u'UPDATE
  agents SET heartbeat_timestamp=%s WHERE agents.id = %s'] [parameters:
  (datetime.datetime(2019, 11, 18, 8, 50, 23, 804716),
  '223c754e-9d7f-4df3-b5a5-9be4eb8692b0')] (Background on this error at:
  http://sqlalche.me/e/e3q8)

  Full stacktrace here: http://paste.openstack.org/show/786313/

  
  The only way to recover is to stop neutron-server and rabbitmq, kill any
  neutron workers still dangling (which there usually are), then restart. But
  then we see the problem manifest again days or a week later.

  Rabbitmq is on the same host as neutron-server - it is all localhost
  communication. So we are unsure why it can't heartbeat or connect.
  Also, the subsequent DBDeadlock leads me to think there is some
  synchronization issue when neutron gets overwhelmed with outstanding
  RPC messages.
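
  For context, the "DB exceeded retry limit" message in the stacktrace above
  comes from oslo.db's retry decorator. A rough sketch of the mechanism
  involved (illustrative only - the session handling and SQL are simplified
  stand-ins for Neutron's actual agent-heartbeat code):

  from oslo_db import api as oslo_db_api
  from sqlalchemy import text


  @oslo_db_api.wrap_db_retry(max_retries=3, retry_on_deadlock=True)
  def update_agent_heartbeat(session, agent_id, timestamp):
      # Each deadlock re-runs the whole function; once the retries are
      # exhausted, oslo.db raises DBDeadlock - the error logged above.
      with session.begin():
          session.execute(
              text("UPDATE agents SET heartbeat_timestamp = :ts "
                   "WHERE id = :id"),
              {"ts": timestamp, "id": agent_id})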

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1853071/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938461] Re: [networking-bgpvpn] Port events now use payload

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938461

Title:
  [networking-bgpvpn] Port events now use payload

Status in neutron:
  Fix Released

Bug description:
  Since [1], Neutron port events use payload.

  [1]https://review.opendev.org/c/openstack/neutron/+/800604
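
  For subscribers this changes the handler signature: event data now arrives
  in a single payload object instead of loose keyword arguments. A minimal
  sketch of a new-style port handler (the handler body and subscription are
  illustrative, not actual networking-bgpvpn code):

  from neutron_lib.callbacks import events, registry, resources


  def port_updated(resource, event, trigger, payload):
      # With the payload API the port dicts are attributes of the payload:
      # states[0] is the pre-update port, latest_state is the result.
      original = payload.states[0]
      current = payload.latest_state
      if original.get('status') != current.get('status'):
          pass  # react to the status transition here


  registry.subscribe(port_updated, resources.PORT, events.AFTER_UPDATE)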

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938461/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1655605] Re: metadata proxy won't start in dhcp namespace when network(subnet) is removed from router

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1655605

Title:
  metadata proxy won't start in dhcp namespace when network(subnet) is
  removed from router

Status in neutron:
  Won't Fix

Bug description:
  When a network (subnet) is added to a router immediately after the
  network (subnet) is created, no metadata proxy process is created in the
  dhcp namespace to listen on port 80. This causes a problem when the
  network (subnet) is later removed from the router: instances cannot reach
  the metadata service until the dhcp service is restarted. Restarting the
  dhcp service is just a workaround and is not acceptable as a solution.

  This problem was introduced in the Newton release. When a network is
  added, the agent checks whether the network has an isolated ipv4 subnet:
  it queries all ports belonging to the network and looks for any port used
  as a gateway; if one exists, it treats the subnet as not isolated. If we
  add the subnet to the router immediately after creating it, the
  network-creation process (which spawns the metadata proxy) and the process
  of adding the subnet to the router interface run at the same time. The
  second process quickly creates the gateway port, so the first process then
  treats the subnet as not isolated and kills the metadata proxy it created
  moments earlier.
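
  In pseudocode terms the racy check looks roughly like this (a simplified
  sketch of the dhcp-agent logic; the function and field names are
  illustrative, not the exact Neutron implementation):

  ROUTER_INTERFACE = 'network:router_interface'


  def subnet_is_isolated(subnet, network_ports):
      # The subnet counts as isolated only while no router-interface port
      # owns its gateway IP. If router-interface-add lands before this
      # check runs, the subnet is treated as not isolated and the freshly
      # spawned metadata proxy is killed again.
      for port in network_ports:
          if port['device_owner'] != ROUTER_INTERFACE:
              continue
          for fixed_ip in port['fixed_ips']:
              if (fixed_ip['subnet_id'] == subnet['id'] and
                      fixed_ip['ip_address'] == subnet['gateway_ip']):
                  return False
      return True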

  # /etc/neutron/dhcp_agent.ini
  enable_isolated_metadata = True
  enable_metadata_network = True

  #execute the following commands in batch without interruption.
  neutron net-create network_1
  neutron subnet-create --name subnet_1 network_1 172.16.255.0/24
  neutron router-interface-add default subnet_1

  # there is no process listening on port 80.
   ip netns exec qdhcp-c5791b7d-ec3e-4e96-9a32-b9d1217ed330 netstat -tunlp
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
  tcp        0      0 172.16.255.2:53         0.0.0.0:*               LISTEN      16926/dnsmasq
  tcp        0      0 169.254.169.254:53      0.0.0.0:*               LISTEN      16926/dnsmasq
  tcp6       0      0 fe80::f816:3eff:fe80:53 :::*                    LISTEN      16926/dnsmasq
  udp        0      0 172.16.255.2:53         0.0.0.0:*                           16926/dnsmasq
  udp        0      0 169.254.169.254:53      0.0.0.0:*                           16926/dnsmasq
  udp        0      0 0.0.0.0:67              0.0.0.0:*                           16926/dnsmasq
  udp6       0      0 :::547                  :::*                                16926/dnsmasq
  udp6       0      0 fe80::f816:3eff:fe80:53 :::*                                16926/dnsmasq

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1655605/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662109] Re: tempest scenario test_qos fails intermittently

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662109

Title:
  tempest scenario test_qos fails intermittently

Status in neutron:
  Won't Fix

Bug description:
  http://logs.openstack.org/67/418867/7/check/gate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv/b705e56/logs/testr_results.html.gz

  e-r-q:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22line%20189%2C%20in%20test_qos%5C%22%20AND%20build_name%3Agate-tempest-dsvm-neutron-dvr-multinode-scenario-ubuntu-xenial-nv%20AND%20build_branch%3Amaster%20AND%20tags%3Aconsole

  11 hits in last 24 hours

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662109/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694524] Re: Neutron OVS agent fails to start when neutron-server is not available

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694524

Title:
  Neutron OVS agent fails to start when neutron-server is not available

Status in neutron:
  Invalid
Status in tripleo:
  Fix Released

Bug description:
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-34115ed3-3043-4fcb-ba3f-ab0e4eb0e83c - - - - -] Agent main thread died of 
an exception
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2166, in main
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 180, in __init__
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.setup_rpc()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 362, in setup_rpc
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.plugin_rpc = OVSPluginApi(topics.PLUGIN)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 182, in __init__
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.remote_resource_cache = create_cache_for_l2_agent()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 174, in 
create_cache_for_l2_agent
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
rcache.bulk_flood_cache()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/resource_cache.py", line 55, in 
bulk_flood_cache
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
for resource in puller.bulk_pull(context, rtype):
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 48, in wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return method(*args, **kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py", 
line 109, in bulk_pull
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
version=resource_type_cls.VERSION, filter_kwargs=filter_kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 174, in call
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
time.sleep(wait)
  2017-05-30 18:58:01.947 27929 ERROR 

[Yahoo-eng-team] [Bug 1715789] Re: ovsfw rejects old connections after re-add former rules

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1715789

Title:
  ovsfw rejects old connections after re-add former rules

Status in neutron:
  Won't Fix

Bug description:
  Reproduction procedure:
  1. An all-in-one devstack environment, using the latest master branch and
  the openvswitch firewall driver:
  [securitygroup]
  firewall_driver = openvswitch

  2. Launch two VMs with security group SG1, which has two rules:
  rule1: egress, IPv4
  rule2: ingress, IPv4, 22/tcp, remote_ip_prefix: 0.0.0.0/0

  3. SSH to VM2 from VM1.
  4. Delete rule2 and check that the SSH connection is blocked.
  5. Re-add rule2 to SG1 and check that the SSH connection is still blocked.
  The reason is that the conntrack entry has not aged out and is still
  marked with 1:
  root@devstack:~# conntrack -L --zone=1
  tcp  6 298 ESTABLISHED src=10.0.0.3 dst=10.0.0.8 sport=38844 dport=22 
src=10.0.0.8 dst=10.0.0.3 sport=22 dport=38844 [ASSURED] mark=1 zone=1 use=1
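
  Until that entry expires, one manual workaround is to delete the marked
  conntrack entries in the affected zone. A hedged sketch - zone 1 is taken
  from the output above, but in general the zone is assigned per port:

  # Hypothetical workaround: flush conntrack entries that ovsfw marked as
  # rejected (mark=1) so an already-open session is re-evaluated against
  # the re-added rule. Requires root and the conntrack-tools package.
  import subprocess

  subprocess.run(['conntrack', '-D', '--zone', '1', '--mark', '1'],
                 check=False)  # conntrack exits non-zero if nothing matched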

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1715789/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1716721] Re: "OS::Neutron::Subnet" ignore gateway_ip when subnetpool used

2022-11-30 Thread Rodolfo Alonso
Bug closed due to lack of activity, please feel free to reopen if
needed.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1716721

Title:
  "OS::Neutron::Subnet" ignore gateway_ip when subnetpool used

Status in OpenStack Heat:
  New
Status in neutron:
  Won't Fix

Bug description:
  heat version - 8.0.4

  Steps to reproduce:
  1. template:
  heat_template_version: 2017-02-24

  description: test template

  resources:
test_subnetpool:
  type: OS::Neutron::SubnetPool
  properties:
default_prefixlen: 25
max_prefixlen: 32
min_prefixlen: 22
prefixes:
  - "192.168.0.0/16"
test_net1:
  type: OS::Neutron::Net
test_subnet1:
  type: OS::Neutron::Subnet
  properties:
network: { get_resource: test_net1 }
ip_version: 4
subnetpool: { get_resource: test_subnetpool }
gateway_ip: null

  2. create stack
  3. the created subnet has gateway IP 192.168.0.1, but a disabled gateway
  was expected

  Because the gateway_ip property is ignored when a subnetpool is present,
  there is no way to create a subnet without a gateway from a subnetpool.
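
  One way to narrow down whether Heat or Neutron drops the property is to
  issue the equivalent request directly against the Neutron API, for example
  with openstacksdk. This is only a sketch: the cloud name and UUIDs are
  placeholders, and the printed gateway_ip shows whether the API itself
  honours the null value:

  import openstack

  conn = openstack.connect(cloud='devstack')  # assumed clouds.yaml entry
  subnet = conn.network.create_subnet(
      network_id='NETWORK_UUID',          # placeholder
      subnet_pool_id='SUBNETPOOL_UUID',   # placeholder
      ip_version=4,
      gateway_ip=None)                    # request a disabled gateway
  print(subnet.gateway_ip)  # None would mean the API honoured the request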

To manage notifications about this bug go to:
https://bugs.launchpad.net/heat/+bug/1716721/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1904897] Re: Failure in test_l2_agent_restart(OVS, Flat network)

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1904897

Title:
  Failure in test_l2_agent_restart(OVS,Flat network)

Status in neutron:
  Won't Fix

Bug description:
  Occasionally the test fails to ping. In this case, the test failed on
  an OVN-specific patch that could not possibly have interacted with the
  test:
  https://zuul.opendev.org/t/openstack/build/2eb4d891d3324814b28b1db5ea8ee148

  {0} neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(OVS,Flat network) [84.192269s] ... FAILED

  Captured traceback:
  ~~~
  Traceback (most recent call last):

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", line 182, 
in func
  return f(self, *args, **kwargs)

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_connectivity.py",
 line 236, in test_l2_agent_restart
  self._assert_ping_during_agents_restart(

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/base.py", 
line 123, in _assert_ping_during_agents_restart
  common_utils.wait_until_true(

File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
  next(self.gen)

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 145, in async_ping
  f.result()

File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in 
result
  return self.__get_result()

File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in 
__get_result
  raise self._exception

File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
  result = self.fn(*self.args, **self.kwargs)

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 127, in assert_async_ping
  ns_ip_wrapper.netns.execute([ping_command, '-W', timeout, '-c', '1',

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 721, in execute
  return utils.execute(cmd, check_exit_code=check_exit_code,

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/utils.py", 
line 149, in execute
  raise exceptions.ProcessExecutionError(msg,

  neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd:
  ['ip', 'netns', 'exec', 'test-dd15072f-a4b0-4568-82e9-51105e34b6a1',
  'ping', '-W', 2, '-c', '1', '20.0.0.112']; Stdin: ; Stdout: PING
  20.0.0.112 (20.0.0.112) 56(84) bytes of data.

  --- 20.0.0.112 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  ; Stderr:

  There seem to be lots of dhcp agent errors here:
  https://zuul.opendev.org/t/openstack/build/2eb4d891d3324814b28b1db5ea8ee148

  like:

  Unable to access
  
/tmp/tmp8ryzi43r/tmp_u0y3amv/state_path_c5hn5z2/external/pids/b8da6190-ce0d-4937-8346-678c702ab812.pid.haproxy;
  Error: [Errno 2] No such file or directory:
  
'/tmp/tmp8ryzi43r/tmp_u0y3amv/state_path_c5hn5z2/external/pids/b8da6190-ce0d-4937-8346-678c702ab812.pid.haproxy'
  get_value_from_file
  /home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/utils.py:251
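
  The failing helper essentially pings from a namespace in a loop while the
  agents restart in parallel. A rough sketch of that pattern (simplified;
  restart_agents() is a placeholder, not a real Neutron helper):

  import subprocess
  from concurrent.futures import ThreadPoolExecutor


  def assert_async_ping(namespace, dest, count=20):
      for _ in range(count):
          # Mirrors the failing command in the traceback above; the first
          # lost ping raises CalledProcessError.
          subprocess.run(
              ['ip', 'netns', 'exec', namespace,
               'ping', '-W', '2', '-c', '1', dest],
              check=True)


  def restart_agents():
      pass  # placeholder for the agent restart being exercised


  with ThreadPoolExecutor() as executor:
      future = executor.submit(assert_async_ping, 'test-ns', '20.0.0.112')
      restart_agents()
      future.result()  # re-raises the ping failure, as in the traceback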

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1904897/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1881558] Re: [OVN]IPv6 tempest tests are failing with OVN backend

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881558

Title:
  [OVN]IPv6 tempest tests are failing with OVN backend

Status in neutron:
  Fix Released

Bug description:
  Recently merged code [1] added a few IPv6 hotplug scenarios.
  In the meantime we're working on enabling the new Cirros on OVN gates [2].

  After merging [1], we found that on [2] the new tests started to
  fail:

  neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_dhcpv6stateless
  neutron_tempest_plugin.scenario.test_ipv6.IPv6Test.test_ipv6_hotplug_slaac

  Example failure:
  https://ef5d43af22af7b1c1050-17fc8f83c20e6521d7d8a3ccd8bca531.ssl.cf2.rackcdn.com/711425/10/check/neutron-ovn-tempest-ovs-release

  [1] https://review.opendev.org/#/c/711931/
  [2] https://review.opendev.org/#/c/711425/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881558/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1903008] Re: Create network failed during functional test

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1903008

Title:
  Create network failed during functional test

Status in neutron:
  Fix Released

Bug description:
  One of the functional OVN-related tests failed with an error like:

  ft1.3: neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestNBDbMonitorOverTcp.test_distributed_lock
  testtools.testresult.real._StringException: Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 231, in test_distributed_lock
  self.create_port()
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/ovsdb/test_ovsdb_monitor.py",
 line 77, in create_port
  net = self._make_network(self.fmt, 'net1', True)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/unit/db/test_db_base_plugin_v2.py",
 line 510, in _make_network
  raise webob.exc.HTTPClientError(code=res.status_int)
  webob.exc.HTTPClientError: The server could not comply with the request since 
it is either malformed or otherwise incorrect.

  
  In the test run log there is an error like:
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/ovn_revision_numbers_db.py",
 line 93, in _get_standard_attr_id
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers row = 
context.session.query(STD_ATTR_MAP[resource_type]).filter_by(
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/sqlalchemy/orm/query.py",
 line 3500, in one
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers raise 
orm_exc.NoResultFound("No row was found for one()")
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
sqlalchemy.orm.exc.NoResultFound: No row was found for one()
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers During 
handling of the above exception, another exception occurred:
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/managers.py", 
line 477, in _call_on_drivers
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py",
 line 471, in create_network_precommit
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
ovn_revision_numbers_db.create_initial_revision(
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/neutron_lib/db/api.py",
 line 233, in wrapped
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers return 
method(*args, **kwargs)
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/ovn_revision_numbers_db.py",
 line 108, in create_initial_revision
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
std_attr_id = _get_standard_attr_id(
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers   File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/db/ovn_revision_numbers_db.py",
 line 97, in _get_standard_attr_id
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers raise 
StandardAttributeIDNotFound(resource_uuid=resource_uuid)
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
neutron.db.ovn_revision_numbers_db.StandardAttributeIDNotFound: Standard 
attribute ID not found for 3f787ea1-0ce5-4310-9b42-b530e7dcb4a1
  2020-11-04 13:34:50.442 61672 ERROR neutron.plugins.ml2.managers 
  2020-11-04 13:34:50.448 61672 ERROR neutron.pecan_wsgi.hooks.translation 
[req-5a27b67c-a80f-4877-84cb-7b3d511d51d6 - tenid - - -] 
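
  The failing lookup in the traceback is a plain standard-attribute query;
  schematically it does something like this (a simplified sketch, not the
  exact ovn_revision_numbers_db code):

  from sqlalchemy.orm import exc as orm_exc


  class StandardAttributeIDNotFound(Exception):
      pass


  def get_standard_attr_id(context, model, resource_uuid):
      try:
          # If the precommit hook runs before the resource row is visible
          # to this session, one() raises NoResultFound, which surfaces as
          # the StandardAttributeIDNotFound error logged above.
          row = context.session.query(model).filter_by(
              id=resource_uuid).one()
          return row.standard_attr_id
      except orm_exc.NoResultFound:
          raise StandardAttributeIDNotFound(
              'Standard attribute ID not found for %s' % resource_uuid)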

[Yahoo-eng-team] [Bug 1907068] Re: Functional test neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_get_events

2022-11-30 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907068

Title:
  Functional test
  neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_get_events

Status in neutron:
  Fix Released

Bug description:
  Failure example:
  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_837/764830/1/check/neutron-functional-with-uwsgi/837db77/testr_results.html

  Stacktrace:

  ft1.2: neutron.tests.functional.agent.linux.test_ovsdb_monitor.TestSimpleInterfaceMonitor.test_get_events
  testtools.testresult.real._StringException: traceback-1: {{{
  Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/common/async_process.py",
 line 142, in stop
  raise AsyncProcessException(_('Process is not running.'))
  neutron.agent.common.async_process.AsyncProcessException: Process is not 
running.
  }}}

  Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
704, in wait_until_true
  eventlet.sleep(sleep)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/greenthread.py",
 line 36, in sleep
  hub.switch()
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.8/site-packages/eventlet/hubs/hub.py",
 line 313, in switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py",
 line 134, in test_get_events
  utils.wait_until_true(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/common/utils.py", line 
709, in wait_until_true
  raise WaitTimeout(_("Timed out after %d seconds") % timeout)
  neutron.common.utils.WaitTimeout: Timed out after 60 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/agent/linux/test_ovsdb_monitor.py",
 line 137, in test_get_events
  raise AssertionError('Initial call should always be true')
  AssertionError: Initial call should always be true

  Logstash query:
  http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AssertionError%3A%20Initial%20call%20should%20always%20be%20true%5C%22
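
  For context, the wait_until_true helper that times out here is essentially
  a polling loop guarded by an eventlet timeout; a simplified sketch of the
  pattern (illustrative, not the exact Neutron implementation):

  import eventlet


  class WaitTimeout(Exception):
      """Raised when the predicate never becomes true in time."""


  def wait_until_true(predicate, timeout=60, sleep=1):
      try:
          with eventlet.Timeout(timeout):
              while not predicate():
                  eventlet.sleep(sleep)
      except eventlet.Timeout:
          # The test converts this into "Timed out after 60 seconds".
          raise WaitTimeout('Timed out after %d seconds' % timeout)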

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907068/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp