[Yahoo-eng-team] [Bug 2017748] Re: [SRU] OVN: ovnmeta namespaces missing during scalability test causing DHCP issues

2024-05-01 Thread Brian Haley
Sorry, just clicked the wrong buttons, trying to get this targeted to
the UCA back to Ussuri.

** Also affects: neutron/wallaby
   Importance: Undecided
   Status: New

** Also affects: neutron/xena
   Importance: Undecided
   Status: New

** Also affects: neutron/ussuri
   Importance: High
 Assignee: Terry Wilson (otherwiseguy)
   Status: Fix Released

** Also affects: neutron/victoria
   Importance: Undecided
   Status: New

** No longer affects: neutron

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017748

Title:
  [SRU] OVN: ovnmeta namespaces missing during scalability test causing
  DHCP issues

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in neutron:
  New
Status in neutron ussuri series:
  Fix Released
Status in neutron victoria series:
  New
Status in neutron wallaby series:
  New
Status in neutron xena series:
  New
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  New

Bug description:
  [Impact]

  ovnmeta- namespaces are missing intermittently, so VMs cannot be reached

  [Test Case]
  TBD
  - Not able to reproduce this easily.

  [Where problems could occur]
  These patches are related to the OVN metadata agent on compute nodes.
  VM connectivity can possibly be affected by this patch when OVN is used.
  Binding a port to a datapath could be affected.

  [Others]

  == ORIGINAL DESCRIPTION ==

  Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2187650

  During a scalability test it was noted that a few VMs were having
  issues being pinged (2 out of ~5000 VMs in the test conducted). After
  some investigation it was found that the VMs in question did not
  receive a DHCP lease:

  udhcpc: no lease, failing
  FAIL
  checking http://169.254.169.254/2009-04-04/instance-id
  failed 1/20: up 181.90. request failed

  And the ovnmeta- namespaces for the networks that the VMs were booting
  from were missing. Looking into the ovn-metadata-agent.log:

  2023-04-18 06:56:09.864 353474 DEBUG neutron.agent.ovn.metadata.agent
  [-] There is no metadata port for network
  9029c393-5c40-4bf2-beec-27413417eafa or it has no MAC or IP addresses
  configured, tearing the namespace down if needed _get_provision_params
  /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:495

  Apparently, when the system is under stress (scalability tests) there
  are some edge cases where the metadata port information has not yet
  been propagated by OVN to the Southbound database; when the
  PortBindingChassisEvent event is handled and tries to find either the
  metadata port or the IP information on it (which is updated by ML2/OVN
  during subnet creation), it cannot be found and the handler fails
  silently with the error shown above.

  Note that running the same tests with less concurrency did not
  trigger this issue, so it only happens when the system is overloaded.
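
  For illustration, a minimal sketch of the failure mode described above
  (method and helper names are simplified; this is not the exact agent
  code):

      def _get_provision_params(self, net_name):
          # Look up the network's metadata port in the OVN Southbound DB;
          # under heavy load it may not have been propagated yet.
          mdt_port = self._find_metadata_port(net_name)
          if not mdt_port or not mdt_port.mac or not mdt_port.ip_addresses:
              # Fails silently: the namespace is torn down instead of the
              # event being retried once the Southbound data arrives.
              LOG.debug("There is no metadata port for network %s or it "
                        "has no MAC or IP addresses configured, tearing "
                        "the namespace down if needed", net_name)
              self.teardown_datapath(net_name)
              return None
          # ... otherwise continue provisioning the ovnmeta- namespace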

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2017748/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2060808] Re: already associated floating ip can be associated to another server without check or warning

2024-04-10 Thread Brian Haley
Well, for the PUT operation it's the same thing for neutron (and the
same call here [0]): whether the port is null or another valid port id,
neutron is going to update it.

As I mentioned in the other bug, it will be up to the Nova team to
determine if its API is working correctly.

[0] PUT /networking/v2.0/floatingips/1875754d-7b9f-47c2-9c0d-83eafd1a0a76
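
For illustration, a minimal sketch of that call in the standard Neutron
API request format (the port UUID is taken from the example listing in
the bug description below):

  PUT /v2.0/floatingips/1875754d-7b9f-47c2-9c0d-83eafd1a0a76
  {"floatingip": {"port_id": "d8387e3b-3b19-444a-9983-42b61b3d19c1"}}

Neutron applies this update whether the floating IP is currently
unassociated or already associated with another port.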

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2060808

Title:
  already associated floating ip can be associated to another server
  without check or warning

Status in neutron:
  Opinion
Status in OpenStack Compute (nova):
  New

Bug description:
  When adding a floating ip to a server the following CLI command is used:
  openstack server add floating ip <server> <ip-address>

  When nova was still handling the floating IPs in the backend, it seems
  there was a check whether the floating IP was already associated to an
  instance:
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/floating_ips.py#L243

  And as a user I don't want to accidentally associate a floating IP to a
  server if that floating IP is already associated to another server.

  But that is currently the case:

  Steps to reproduce:

  1. Have two servers, one with an associated floating ip (server A) and
     one without (server B)
  2. Execute the command: openstack server add floating ip <server B> <floating ip of server A>
  3. Now server B has the floating IP associated, but server A does not.

  Example:
  stack@devstack:~/devstack$ openstack server list
  
  +--------------------------------------+---------------+--------+-----------------------------------------------------------------------+--------------------------+----------+
  | ID                                   | Name          | Status | Networks                                                              | Image                    | Flavor   |
  +--------------------------------------+---------------+--------+-----------------------------------------------------------------------+--------------------------+----------+
  | 66f8f821-ec26-4264-807e-36ec016d51f9 | my-new-server | ACTIVE | private=10.0.0.41, fd13:d046:e727:0:f816:3eff:fe98:3e70               | N/A (booted from volume) | m1.small |
  | e7c7d615-8abc-4657-a334-953d5c6a95e1 | test-server   | ACTIVE | private=10.0.0.45, 172.24.4.210, fd13:d046:e727:0:f816:3eff:febf:840b | N/A (booted from volume) | m1.small |
  +--------------------------------------+---------------+--------+-----------------------------------------------------------------------+--------------------------+----------+
  stack@devstack:~/devstack$ openstack floating ip list
  
  +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  | ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
  +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  | 0f340eb1-74c7-4cc0-8495-8f648ff7bc61 | 172.24.4.155        | None             | None                                 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | f58edaee60ad484facd2436d31d9caff |
  | 1875754d-7b9f-47c2-9c0d-83eafd1a0a76 | 172.24.4.210        | 10.0.0.45        | d8387e3b-3b19-444a-9983-42b61b3d19c1 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | 15f2ab0eaa5b4372b759bde609e86224 |
  | 3978a1f6-3af8-432f-978a-c7feafd88057 | 172.24.4.222        | None             | None                                 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | 15f2ab0eaa5b4372b759bde609e86224 |
  | 9e193d33-17f9-400b-b639-b51750d41bc0 | 172.24.4.107        | None             | None                                 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | 15f2ab0eaa5b4372b759bde609e86224 |
  +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+

[Yahoo-eng-team] [Bug 2060812] Re: removing floating ip from server does not check the server

2024-04-10 Thread Brian Haley
The Neutron API is only associating a floating IP with an internal port
[0], there is no check for a server as that is under the purview of
Nova.

Neutron will only raise a 409 on a 'create' call if the floating IP is
already in-use, but not on a call to update it - either clearing or
moving to another port.
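
For illustration, a hedged sketch of the difference (request bodies in
the standard Neutron API format; the UUIDs are taken from the listings
in the bug description below):

  # 'create' with an already-allocated floating IP address -> 409 Conflict
  POST /v2.0/floatingips
  {"floatingip": {"floating_network_id": "73edb86b-d7ab-4db3-82b7-25fa8b012e40",
                  "floating_ip_address": "172.24.4.155"}}

  # 'update' clearing (or moving) the association -> 200 OK, no server check
  PUT /v2.0/floatingips/0f340eb1-74c7-4cc0-8495-8f648ff7bc61
  {"floatingip": {"port_id": null}}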

I am curious to see what the Nova team thinks as, according to their
API, the server add floating IP code has been deprecated [1].

[0] 
https://docs.openstack.org/api-ref/network/v2/index.html#floating-ips-floatingips
[1] 
https://docs.openstack.org/api-ref/compute/#add-associate-floating-ip-addfloatingip-action-deprecated

** Changed in: neutron
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2060812

Title:
  removing floating ip from server does not check the server

Status in neutron:
  Opinion
Status in OpenStack Compute (nova):
  New

Bug description:
  A floating IP can be removed from a server with the following CLI command:
  openstack server remove floating ip <server> <ip-address>

  The documentation says it will remove the floating IP from the given server:
  https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-remove-floating-ip

  When nova was responsible for associating/dissociating floating IPs from
  servers this was also checked:
  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/floating_ips.py#L294

  As a user I would expect a check that the given pair of server and
  floating IP is correct, and I would like to see an error if I provided
  a wrong server or floating IP.

  But the server is not checked at all.

  Steps to reproduce:
  1. Have a server with an associated floating IP
  2. Remove the floating IP with the command, but give a non-existing server name:
     openstack server remove floating ip <non-existing server> <floating ip>
  3. The floating IP is removed

  Example:
  stack@devstack:~/devstack$ openstack floating ip list
  
  +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  | ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
  +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  | 0f340eb1-74c7-4cc0-8495-8f648ff7bc61 | 172.24.4.155        | 10.0.0.45        | d8387e3b-3b19-444a-9983-42b61b3d19c1 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | f58edaee60ad484facd2436d31d9caff |
  | 1875754d-7b9f-47c2-9c0d-83eafd1a0a76 | 172.24.4.210        | 10.0.0.41        | 2a7a9f37-99ce-48e5-aaad-416368438c52 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | 15f2ab0eaa5b4372b759bde609e86224 |
  | 3978a1f6-3af8-432f-978a-c7feafd88057 | 172.24.4.222        | None             | None                                 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | 15f2ab0eaa5b4372b759bde609e86224 |
  | 9e193d33-17f9-400b-b639-b51750d41bc0 | 172.24.4.107        | None             | None                                 | 73edb86b-d7ab-4db3-82b7-25fa8b012e40 | 15f2ab0eaa5b4372b759bde609e86224 |
  +--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
  stack@devstack:~/devstack$ openstack server list
  
  +--------------------------------------+---------------+--------+-----------------------------------------------------------------------+--------------------------+----------+
  | ID                                   | Name          | Status | Networks                                                              | Image                    | Flavor   |
  +--------------------------------------+---------------+--------+-----------------------------------------------------------------------+--------------------------+----------+
  |

[Yahoo-eng-team] [Bug 1815827] Re: [RFE] neutron-lib: rehome neutron.object.base along with rbac db/objects

2024-03-08 Thread Brian Haley
I am going to close this as it's been a number of years and the original
patch was abandoned. If someone wants to pick it up please re-open.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1815827

Title:
  [RFE] neutron-lib: rehome neutron.object.base along with rbac
  db/objects

Status in neutron:
  Won't Fix

Bug description:
  This isn't a request for a new feature per se, but rather a
  placeholder for the neutron drivers team to take a look at [1].

  Specifically I'm hoping for drivers team agreement that the
  modules/functionality being rehomed in [1] makes sense; no actual (deep)
  code review of [1] is necessary at this point.

  Assuming we can agree that the logic in [1] makes sense to rehome, I can
  proceed by chunking it up into smaller patches that will make the
  rehome/consume process easier.

  This work is part of [2] that's described in [3][4]. However as
  commented in [1], it's also necessary to rehome the rbac db/objects
  modules and their dependencies that weren't discussed previously.

  
  [1] https://review.openstack.org/#/c/621000
  [2] https://blueprints.launchpad.net/neutron/+spec/neutron-lib-decouple-db
  [3] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutronlib-decouple-db-apiutils.html
  [4] https://specs.openstack.org/openstack/neutron-specs/specs/rocky/neutronlib-decouple-models.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1815827/+subscriptions




[Yahoo-eng-team] [Bug 1833674] Re: [RFE] Improve profiling of port binding and vif plugging

2024-03-08 Thread Brian Haley
This seems to be complete, will close bug. Please re-open if I'm wrong.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1833674

Title:
  [RFE] Improve profiling of port binding and vif plugging

Status in neutron:
  Fix Released

Bug description:
  As discussed on the 2019-May PTG in Denver we want to measure then
  improve the performance of Neutron's most important operation that is
  port binding.

  As we're working with OSProfiler reports we are realizing the report
  is incomplete. We could turn on tracing in other components and
  subcomponents by further propagating trace information.

  We heavily build on some previous work:

  * https://bugs.launchpad.net/neutron/+bug/1335640 [RFE] Neutron support for OSprofiler
  * https://review.opendev.org/615350 Integrate rally with osprofiler

  A few patches were already merged before opening this RFE:

  * https://review.opendev.org/662804 Run nova's VM boot rally scenario in the neutron gate
  * https://review.opendev.org/665614 Allow VM booting rally scenarios to time out

  We already see the need for a few changes:

  * New rally scenario to measure port binding
  * Profiling coverage for vif plugging

  This work is also driven by the discoveries made while interpreting
  profiler reports so I expect further changes here and there.
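
  For reference, a hedged sketch of turning on OSProfiler tracing in
  neutron.conf (option names are osprofiler's standard [profiler] group;
  the values are illustrative):

      [profiler]
      enabled = True
      trace_sqlalchemy = True
      hmac_keys = SECRET_KEY
      connection_string = redis://127.0.0.1:6379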

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1833674/+subscriptions




[Yahoo-eng-team] [Bug 1764738] Re: routed provider networks limit to one host

2024-03-08 Thread Brian Haley
From all the changes that have merged this seems to be complete, will
close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1764738

Title:
  routed provider networks limit to one host

Status in neutron:
  Fix Released

Bug description:
  There seems to be a limitation for a compute node to only have an
  interface on one segment in a multi-segment network. This feels wrong
  and limits the compute resources since they can only be part of one
  segment.

  The purpose of multi-segment networks is to group multiple segments
  under one network name, i.e. operators should be able to expand the IP
  pool without having to create multiple networks for it like internet1,
  internet2, etc.

  The way it should work is that a compute node can belong to one or
  more segments. It should be up to the operator to decide how they want
  to segment the compute resources or not. It should not be enforced by
  the simple need to add IP ranges to a network.

  Way to reproduce:
  1. configure compute nodes to have bridges configured on 2 segments
  2. create a network with 2 segments
  3. create the segments
  2018-04-17 15:17:59.545 25 ERROR oslo_messaging.rpc.server
  2018-04-17 15:18:18.836 25 ERROR oslo_messaging.rpc.server [req-4fdf6ee1-2be3-49c5-b3cb-62a2194465ab - - - - -] Exception during message handling: HostConnectedToMultipleSegments: Host eselde03u02s04 is connected to multiple segments on routed provider network '5c1f4dd4-baff-4c59-ba56-bd9cc2c59fa4'.  It should be connected to one.
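
  For illustration, a hedged sketch of the reproduction as CLI commands
  (network/segment names, physnets and VLAN IDs are made up):

      openstack network create multisegment-net
      openstack network segment create --network multisegment-net \
          --network-type vlan --physical-network physnet1 --segment 100 segment1
      openstack network segment create --network multisegment-net \
          --network-type vlan --physical-network physnet2 --segment 200 segment2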

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1764738/+subscriptions




[Yahoo-eng-team] [Bug 1786226] Re: Use sqlalchemy baked query

2024-03-08 Thread Brian Haley
From the comment in the change that was linked above:

"BakedQuery is a legacy extension that no longer does too much beyond
what SQLAlchemy 1.4 does in most cases automatically. new development w/
BakedQuery is a non-starter, this is a legacy module we would eventually
remove."

For that reason I'm going to close this bug.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1786226

Title:
  Use sqlalchemy baked query

Status in neutron:
  Won't Fix

Bug description:
  I am running the rally scenario test create_and_list_ports on a
  3-controller setup (each controller has 8 CPUs, i.e. 4 cores * 2 HTs),
  with function call tracing enabled on the neutron server processes, at
  a concurrency of 8 for 400 iterations.

  Average time taken for create port is 7.207 seconds (when 400 ports are
  created); the function call trace for this run is at
  http://paste.openstack.org/show/727718/ and the rally results are:

  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | Action                 | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | neutron.create_network | 2.085     | 2.491        | 3.01         | 3.29         | 7.558     | 2.611     | 100.0%  | 400   |
  | neutron.create_port    | 5.69      | 6.878        | 7.755        | 9.394        | 17.0      | 7.207     | 100.0%  | 400   |
  | neutron.list_ports     | 0.72      | 5.552        | 9.123        | 9.599        | 11.165    | 5.559     | 100.0%  | 400   |
  | total                  | 10.085    | 15.263       | 18.789       | 19.734       | 28.712    | 15.377    | 100.0%  | 400   |
  |  -> duration           | 10.085    | 15.263       | 18.789       | 19.734       | 28.712    | 15.377    | 100.0%  | 400   |
  |  -> idle_duration      | 0.0       | 0.0          | 0.0          | 0.0          | 0.0       | 0.0       | 100.0%  | 400   |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+


  Michael Bayer (zzzeek) has analysed this callgraph and had some
  suggestions. One suggestion is to use baked query i.e
  https://review.openstack.org/#/c/430973/2

  This is his analysis - "But looking at the profile I see here, it is
  clear that the vast majority of time is spent doing lots and lots of
  small queries, and all of the mechanics involved with turning them
  into SQL strings and invoking them.   SQLAlchemy has a very effective
  optimization for this but it must be coded into Neutron.

  Here is the total time spent for Query to convert its state into SQL:

  148029/356073   15.232    0.000 4583.820    0.013
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py:3372(Query._compile_context)

  that's 4583 seconds spent in Query compilation, which if Neutron were
  modified  to use baked queries, would be vastly reduced.  I
  demonstrated the beginning of this work in 2017 here:
  https://review.openstack.org/#/c/430973/1  , which illustrates how to
  first start to create a base query method in neutron that other
  functions can begin to make use of.  As more queries start using the
  baked form, this 4500 seconds number will begin to drop."
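
  For context, a minimal sketch of the legacy SQLAlchemy baked-query
  pattern that patch is based on (the model and session handling are
  illustrative, not Neutron code):

      from sqlalchemy import bindparam
      from sqlalchemy.ext import baked

      bakery = baked.bakery()

      def get_port(session, port_id):
          # The lambda-built query is compiled to SQL once and cached;
          # subsequent calls reuse the cached compilation.
          baked_query = bakery(lambda s: s.query(Port))
          baked_query += lambda q: q.filter(Port.id == bindparam('port_id'))
          return baked_query(session).params(port_id=port_id).one()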

  
  I have restored his patch https://review.openstack.org/#/c/430973/2 ;
  with this the average time taken to create port is 5.196 seconds (when
  400 ports are created), the function call trace for this run is at
  http://paste.openstack.org/show/727719/ and the total time spent on
  Query compilation (Query._compile_context) is only 1675 seconds.

  83696/169062    7.308    0.000 1675.140    0.010
  /usr/lib64/python2.7/site-packages/sqlalchemy/orm/query.py:3372(Query._compile_context)
   
  Rally results for this run are:

  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  | Action                 | Min (sec) | Median (sec) | 90%ile (sec) | 95%ile (sec) | Max (sec) | Avg (sec) | Success | Count |
  +------------------------+-----------+--------------+--------------+--------------+-----------+-----------+---------+-------+
  |

[Yahoo-eng-team] [Bug 1797663] Re: refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py

2024-03-08 Thread Brian Haley
As this has never been worked on I am going to close it. If anyone wants
to pick it up please re-open.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1797663

Title:
  refactor def _get_dvr_sync_data from neutron/db/l3_dvr_db.py

Status in neutron:
  Won't Fix

Bug description:
  The function _get_dvr_sync_data in neutron/db/l3_dvr_db.py fetches and
  processes router data, and since it is called for each DVR HA router on
  update, it becomes very hard to pinpoint issues in such a massive
  method. I propose breaking it into two methods,
  _get_dvr_sync_data and _process_dvr_sync_data, which will make future
  debugging easier, as sketched below.
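
  A minimal sketch of the proposed split (signature abbreviated and
  bodies elided; only the two method names come from this report):

      def _get_dvr_sync_data(self, context, host, agent, router_ids=None):
          # fetch: gather the routers' data from the DB
          routers = ...  # existing query logic
          # process: hand off to a separate, smaller method
          return self._process_dvr_sync_data(context, routers)

      def _process_dvr_sync_data(self, context, routers):
          # process/augment the fetched router data
          ...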

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1797663/+subscriptions




[Yahoo-eng-team] [Bug 1694165] Re: Improve Neutron documentation for simpler deployments

2024-03-08 Thread Brian Haley
The documents have been updated many times over the past 6+ years, I'm
going to close this as they are much better now. If there is something
specific please open a new bug.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694165

Title:
  Improve Neutron documentation for simpler deployments

Status in neutron:
  Won't Fix

Bug description:
  During Boston Summit session, an issue was raised that Neutron
  documentation for simpler deployments should be improved/simplified.

  Couple of observations were noted:

  1) For non-neutron-savvy users, it is not very intuitive to
  specify/configure networking requirements.
  2) The basic default configuration (as documented) is very OVS-centric.
  It should discuss other non-OVS-specific deployments as well.

  Here is the etherpad with the details of the discussion -
  https://etherpad.openstack.org/p/pike-neutron-making-it-easy

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1694165/+subscriptions




[Yahoo-eng-team] [Bug 1666779] Re: Expose neutron API via a WSGI script

2024-03-08 Thread Brian Haley
Seems this fix is released, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1666779

Title:
  Expose neutron API via a WSGI script

Status in neutron:
  Fix Released

Bug description:
  As per Pike goal [1], we should expose neutron API via a WSGI script,
  and make devstack installation use a web server for default
  deployment. This bug is a RFE/tracker for the feature.

  [1] https://governance.openstack.org/tc/goals/pike/deploy-api-in-wsgi.html
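
  For illustration, a hedged sketch of one way to serve that WSGI script
  under uWSGI (the file path, socket and worker counts are assumptions,
  not from this report):

      [uwsgi]
      wsgi-file = /usr/local/bin/neutron-api
      http-socket = 127.0.0.1:9696
      processes = 2
      threads = 1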

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1666779/+subscriptions




[Yahoo-eng-team] [Bug 1913664] Re: [CI] neutron multinode jobs does not run neutron_tempest_plugin scenario cases

2024-03-07 Thread Brian Haley
From the review it seems the decision was to not do this, so I will
close this bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1913664

Title:
  [CI] neutron multinode jobs does not run neutron_tempest_plugin
  scenario cases

Status in neutron:
  Invalid

Bug description:
  This is the job neutron-tempest-plugin-scenario-openvswitch's cases:
  
https://812aefd7f17477a1c0dc-8bc1c0202523f17b73621207314548bd.ssl.cf5.rackcdn.com/772255/6/check/neutron-tempest-plugin-scenario-openvswitch/5221232/testr_results.html

  This is neutron-tempest-dvr-ha-multinode-full cases:
  
https://87e09d95af4c4ee8cb65-839132c9f2f257823716e8f40ef80a9a.ssl.cf1.rackcdn.com/772255/6/check/neutron-tempest-dvr-ha-multinode-full/0e428cd/testr_results.html

  IMO, neutron-tempest-*-multinode-full should contain all the
  neutron-tempest-plugin-scenario-* cases. But it does not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1913664/+subscriptions




[Yahoo-eng-team] [Bug 1898015] Re: neutron-db-manage returns SUCCESS on wrong subproject name

2024-03-07 Thread Brian Haley
Going to mark invalid for Neutron as it seems like an oslo.config bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1898015

Title:
  neutron-db-manage returns SUCCESS on wrong subproject name

Status in neutron:
  Invalid
Status in oslo.config:
  Confirmed

Bug description:
  (neutron-server)[neutron@os-controller-1 /]$ neutron-db-manage --subproject neutron-sfc upgrade --contract
  argument --subproject: Invalid String(choices=['vmware-nsx', 'networking-sfc', 'neutron-vpnaas', 'networking-l2gw', 'neutron-fwaas', 'neutron', 'neutron-dynamic-routing']) value: neutron-sfc
  (neutron-server)[neutron@os-controller-1 /]$ echo $?
  0

  Tested Train and Victoria, possibly behaved like this since forever.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1898015/+subscriptions




[Yahoo-eng-team] [Bug 1894799] Re: For existing ovs interface, the ovs_use_veth parameter don't take effect

2024-03-07 Thread Brian Haley
I am going to close this as it has been unassigned for almost 3 years
and the change abandoned. If you wish to work on it please re-open.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1894799

Title:
  For existing ovs interface, the ovs_use_veth parameter don't take
  effect

Status in neutron:
  Won't Fix

Bug description:
  For an existing router, the qr- interface already exists in the
  qrouter namespace, so when changing ovs_use_veth from false to true
  the veth interface can't be created. Just like the
  use_veth_interconnection parameter
  (https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1513),
  we also need to drop ports if the interface type doesn't match the
  configured value. See the sketch below for where the option lives.
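
  For context, a hedged sketch of the option being toggled; it lives in
  the [DEFAULT] section of the L3/DHCP agent configuration files:

      # e.g. /etc/neutron/l3_agent.ini or /etc/neutron/dhcp_agent.ini
      [DEFAULT]
      ovs_use_veth = True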

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1894799/+subscriptions




[Yahoo-eng-team] [Bug 1880969] Re: Creating FIP takes time

2024-03-07 Thread Brian Haley
Looking at some recent logs these values seem OK now, so I will close
this. If we see the issue again we can open a new bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880969

Title:
  Creating FIP takes time

Status in neutron:
  Fix Released

Bug description:
  I noticed on upstream and downstream gates that while creating a
  FloatingIP with a command like:

  neutron floatingip-create public

  For ml2/ovs and ml2/ovn this operation takes a minimum of ~4 seconds.

  The same we can find on u/s gates from rally jobs [1].

  While we put load on the Neutron server it normally takes more than 10
  seconds.

  For ML2/OVN creating a FIP doesn't end with creating a NAT entry row in
  the OVN NBDB, so it's clearly only an API operation.

  Maybe we can consider profiling it?

  [1] https://98a898dcf3dfb1090155-da3b599be5166de1dcb38898c60ea3c9.ssl.cf5.rackcdn.com/729588/1/check/neutron-rally-task/dd55aa7/results/report.html#/NeutronNetworks.associate_and_dissociate_floating_ips/overview

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880969/+subscriptions




[Yahoo-eng-team] [Bug 1880845] Re: [fullstack] Error assigning IPv4 (network address) in "test_gateway_ip_changed"

2024-03-07 Thread Brian Haley
I think I figured out the issue here, so will close this.

Here's my thinking.

The referenced log above was from a change on stable/train:

  https://review.opendev.org/c/openstack/neutron/+/730888

Lajos fixed a bug in _find_available_ips that seems related:

  https://review.opendev.org/c/openstack/neutron/+/692135

commit 3c9b0a5fac2e3a1321eadc272c8ed46aa61efd3e
Author: elajkat 
Date:   Wed Oct 30 13:38:30 2019 +0100

[fullstack] find ip based on allocation_pool

_find_available_ips tried to find available ips based on the given
subnet's cidr field, which can be misleading if random selection goes
out-of allocation-pool. This patch changes this behaviour to use
cidr's allocation_pool field.

Closes-Bug: #1850292
Change-Id: Ied2ffb5ed58007789b0f5157731687dc2e0b9bb1

That change is only included in these versions:

  master stable/2023.1 stable/2023.2 stable/victoria stable/wallaby
  stable/xena stable/zed unmaintained/victoria unmaintained/wallaby
  unmaintained/xena unmaintained/yoga

So I'm guessing merged in Victoria.

Since the change was from stable/train we could have had an issue with
the subnet['cidr'] being used, which would have included IP addresses
outside the start/end allocation pool.
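
A minimal sketch of the idea behind that fix (illustration only, not the
actual test helper):

    import random
    import netaddr

    def find_available_ips(subnet, used_ips, num):
        # Draw candidates from the subnet's allocation pool rather than
        # from subnet['cidr'], so the network address and other
        # out-of-pool addresses can never be picked.
        pool = subnet['allocation_pools'][0]
        ip_range = netaddr.IPRange(pool['start'], pool['end'])
        available = [str(ip) for ip in ip_range if str(ip) not in used_ips]
        return random.sample(available, num)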

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1880845

Title:
  [fullstack] Error assigning IPv4 (network address) in
  "test_gateway_ip_changed"

Status in neutron:
  Invalid

Bug description:
  Error assigning IPv4 (network address) in "test_gateway_ip_changed".

  LOG:
  
  https://8e3d76ba7bcafd7367d8-a42dfacf856f2ce428049edff149969f.ssl.cf1.rackcdn.com/730888/1/check/neutron-fullstack/31482ea/testr_results.html

  ERROR MESSAGE: http://paste.openstack.org/show/794029/
  """
  neutronclient.common.exceptions.InvalidIpForNetworkClient: IP address 
240.135.228.0 is not a valid IP for any of the subnets on the specified network.
  """

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1880845/+subscriptions




[Yahoo-eng-team] [Bug 1879407] Re: [OVN] Modifying FIP that is no associated causes ovn_revision_numbers to go stale

2024-03-07 Thread Brian Haley
I am inclined to leave this as-is since there are other resources that
follow the same pattern, and either the maintenance task will fix it or
it will be fixed when the FIP is associated to a port.

Thanks for the bug Flavio :)

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1879407

Title:
  [OVN] Modifying FIP that is no associated causes ovn_revision_numbers
  to go stale

Status in neutron:
  Won't Fix

Bug description:
  NOTE: This is a low priority issue, mostly because it eventually gets
  fixed by the maintenance task, and because while the FIP is not
  associated there is no real harm done to the NAT functionality.

  CheckRevisionNumberCommand relies on finding a corresponding entry in
  OVN's NAT table in order to update OVN_REV_NUM_EXT_ID_KEY and keep the
  OVN and Neutron databases in sync.

  Ref: http://lucasgom.es/posts/neutron_ovn_database_consistency.html

  Trouble is that unless the floating ip is associated, there will be no
  entries in OVN's NAT table, causing the call to

   db_rev.bump_revision(context, floatingip, ovn_const.TYPE_FLOATINGIPS)

  to not take place.

  Steps to reproduce it:

  # create a floating ip but do not associate it with anything so router_id is None
  FIP=172.24.4.8
  openstack floating ip create --floating-ip-address ${FIP} public
  FIP_UUID=$(openstack floating ip show ${FIP} -f value -c id) ; echo $FIP_UUID

  # Mess with its name, which will bump revision on fip object
  openstack floating ip set --description foo ${FIP_UUID}

  When there is no NAT entry for a given FIP, the check at line 1044 of
  the code below makes it skip the bump at line 1045:

  https://github.com/openstack/neutron/blob/15088b39bab715e40d8161a85c95ca400708c83f/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1044

  check_rev_cmd.result is None
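
  For illustration, a hedged sketch of the pattern at those lines
  (simplified; the names approximate the driver code):

      with self._nb_idl.transaction(check_error=True) as txn:
          check_rev_cmd = self._nb_idl.check_revision_number(
              fip_id, floatingip, ovn_const.TYPE_FLOATINGIPS)
          txn.add(check_rev_cmd)
          # ... update the NAT rule for the FIP, if one exists ...
      if check_rev_cmd.result == ovn_const.TXN_COMMITTED:
          db_rev.bump_revision(context, floatingip,
                               ovn_const.TYPE_FLOATINGIPS)
      # with no NAT entry for the FIP, check_rev_cmd.result stays None,
      # the bump is skipped and ovn_revision_numbers goes stale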

  The DBs are now in an inconsistent state:

  mysql> use neutron;
  Reading table information for completion of table and column names
  You can turn off this feature to get a quicker startup with -A

  Database changed
  mysql> select * from standardattributes where resource_type="floatingips";
  
  +----+---------------+---------------------+---------------------+-------------+-----------------+
  | id | resource_type | created_at          | updated_at          | description | revision_number |
  +----+---------------+---------------------+---------------------+-------------+-----------------+
  | 49 | floatingips   | 2020-05-18 20:56:51 | 2020-05-18 20:58:58 | foo2        |               2 |
  +----+---------------+---------------------+---------------------+-------------+-----------------+
  1 row in set (0.01 sec)

  mysql> select * from ovn_revision_numbers where resource_type="floatingips";
  
  +------------------+--------------------------------------+---------------+-----------------+---------------------+---------------------+
  | standard_attr_id | resource_uuid                        | resource_type | revision_number | created_at          | updated_at          |
  +------------------+--------------------------------------+---------------+-----------------+---------------------+---------------------+
  |               49 | 5a1e1ffa-0312-4e78-b7a0-551c396bcf6b | floatingips   |               0 | 2020-05-18 20:56:51 | 2020-05-18 20:57:08 |
  +------------------+--------------------------------------+---------------+-----------------+---------------------+---------------------+
  1 row in set (0.00 sec)

  Maintenance task fixes it up later

  May 18 21:50:29 stack neutron-server[909]: DEBUG futurist.periodics [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Submitting periodic callback 'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.DBInconsistenciesPeriodics.check_for_inconsistencies' {{(pid=3186) _process_scheduled /usr/local/lib/python3.6/dist-packages/futurist/periodics.py:642}}
  May 18 21:50:29 stack neutron-server[909]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Maintenance task: Synchronizing Neutron and OVN databases {{(pid=3186) check_for_inconsistencies /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:347}}
  May 18 21:50:29 stack neutron-server[909]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Maintenance task: Number of inconsistencies found at create/update: floatingips=1 {{(pid=3186) _log /opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py:325}}
  May 18 21:50:29 stack neutron-server[909]: DEBUG neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance [None req-35091ee8-f2fe-47cc-b757-8bb70f750b47 None None] Maintenance task: Fixing resource 6b876a35-d286-4407-b538-9ce07ab1a281 (type: floatingips) at create/update

[Yahoo-eng-team] [Bug 1866615] Re: Packets incorrectly marked as martian

2024-03-07 Thread Brian Haley
I am going to close this since moving to the OVS firewall driver has
helped, and I'm not sure anyone will take the time to investigate
further as OVN is now the default driver. Someone can re-open if they
intend on working on it.
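
For reference, the driver switch mentioned above is a one-line change in
the OVS agent configuration (a hedged sketch; it replaces the
iptables_hybrid setting shown in the bug description below):

  [securitygroup]
  firewall_driver = openvswitch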

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1866615

Title:
  Packets incorrectly marked as martian

Status in neutron:
  Won't Fix

Bug description:
  Problem:
  The following behaviour is observed:

  The deployment has 2 provider networks. One of them is the public one
  and the other gets outside through NAT. The second one is the network
  the hypervisors use, and it is what we have as "openstack public"
  (10.20.6.X). The VMs that are launched are attached to the fabric on
  the 10.10 public network, so that network is not present on the
  hypervisor NICs. What we observe is that the switch is (correctly)
  sending ARP requests from the .250 active/standby IP, but the kernel is
  marking them as martian despite the fact that neutron knows this
  network.

  System:
  TripleO-based Rocky deployment. VXLAN tunneling, DVR enabled with bond
  interfaces on 2 switches. Open vSwitch 2.11.0, Neutron 13.0.5
  kernel: 3.10.0-957.21.3.el7.x86_64
  Host OS: CentOS
  Switches: Arista

  ----------------     ---------------     ----------------
  | SWITCH       |     | HYPERVISOR  |     |  VM          |
  | 10.10.91.250 | --- | 10.20.6.X   | --- | 10.10.X.Y/23 |
  ----------------     ---------------     ----------------

  Subnet details:

  +-------------------+--------------------------------------+
  | Field             | Value                                |
  +-------------------+--------------------------------------+
  | allocation_pools  | 10.10.90.10-10.10.91.240             |
  | cidr              | 10.10.90.0/23                        |
  | created_at        | 2019-09-24T08:43:54Z                 |
  | description       |                                      |
  | dns_nameservers   | 10.10.D.Y                            |
  | enable_dhcp       | True                                 |
  | gateway_ip        | 10.10.91.254                         |
  | host_routes       |                                      |
  | id                | f91d725a-89d1-4a32-97b5-95409177e8eb |
  | ip_version        | 4                                    |
  | ipv6_address_mode | None                                 |
  | ipv6_ra_mode      | None                                 |
  | name              | public-subnet                        |
  | network_id        | a1a3280b-9c78-4e5f-883a-9b4bc4e72b1f |
  | project_id        | ec9851ba91854e10bb8d5e752260f5fd     |
  | revision_number   | 14                                   |
  | segment_id        | None                                 |
  | service_types     |                                      |
  | subnetpool_id     | None                                 |
  | tags              |                                      |
  | updated_at        | 2020-03-03T14:34:26Z                 |
  +-------------------+--------------------------------------+

  cat openvswitch_agent.ini
  [agent]
  l2_population=True
  arp_responder=True
  enable_distributed_routing=True
  drop_flows_on_start=False
  extensions=qos
  tunnel_csum=False
  tunnel_types=vxlan
  vxlan_udp_port=4789

  [securitygroup]
  firewall_driver=iptables_hybrid

  Expected output:
  No Martian Packets observed

  Actual output:
  Since the extra provider network is configured I would expect that the
  Linux kernel would not mark the incoming packets as martian.

  However,

  Mar  9 10:45:41 compute0 kernel: IPv4: martian source 10.10.90.74 from 10.10.91.250, on dev qbrff08c591-e2
  Mar  9 10:45:41 compute0 kernel: ll header: 00000000: ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06
  Mar  9 10:45:42 compute0 kernel: IPv4: martian source 10.10.90.74 from 10.10.91.250, on dev qbrff08c591-e2
  Mar  9 10:45:42 compute0 kernel: ll header: 00000000: ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06
  Mar  9 10:45:43 compute0 kernel: IPv4: martian source 10.10.91.203 from 10.10.91.250, on dev qbrff08c591-e2
  Mar  9 10:45:43 compute0 kernel: ll header: 00000000: ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06
  Mar  9 10:45:44 compute0 kernel: IPv4: martian source 10.10.90.74 from 10.10.91.250, on dev qbrff08c591-e2
  Mar  9 10:45:44 compute0 kernel: ll header: 00000000: ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06
  Mar  9 10:45:44 compute0 kernel: IPv4: martian source 10.10.91.203 from 10.10.91.250, on dev qbrff08c591-e2
  Mar  9 10:45:44 compute0 kernel: ll header: 00000000: ff ff ff ff ff ff 98 5d 82 a1 a6 cd 08 06

  Perceived severity:
  Minor annoyance since /var/log/messages is flooded.
  Minor security 

[Yahoo-eng-team] [Bug 1779978] Re: [fwaas] FWaaS instance stuck in PENDING_CREATE when devstack enable fwaas-v1

2024-03-06 Thread Brian Haley
I am going to close this as fwaas-v1 has been deprecated. Please open a
new bug if this also affects fwaas-v2. Thanks.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779978

Title:
  [fwaas] FWaaS instance stuck in PENDING_CREATE when devstack enable
  fwaas-v1

Status in neutron:
  Invalid

Bug description:
  When we deploy OpenStack using devstack and enable FWaaS v1 in
  local.conf ("enable_service neutron-fwaas-v1"), the deployment
  succeeds, but when we create a FW instance it will be stuck in
  "PENDING_CREATE" status forever. I found a related bug,
  https://bugs.launchpad.net/charm-neutron-gateway/+bug/1680164 , but
  that only addresses the charm project; the problem still exists in the
  devstack fwaas plugin. I added these options in my local environment
  and restarted the neutron services, then created a FW instance, and it
  went ACTIVE.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779978/+subscriptions




[Yahoo-eng-team] [Bug 2025056] Re: Router ports without IP addresses shouldn't be allowed to deletion using port's API directly

2024-03-05 Thread Brian Haley
Patches merged, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025056

Title:
  Router ports without IP addresses shouldn't be allowed to deletion
  using port's API directly

Status in neutron:
  Fix Released

Bug description:
  A long time ago there was bug
  https://bugs.launchpad.net/neutron/+bug/1104337 and as a fix for it
  patch https://review.opendev.org/c/openstack/neutron/+/20424 was
  proposed. This patch allowed removing router ports without fixed IPs
  directly using the "port delete" command.
  But it may cause error 500 if the port really belongs to an existing
  router. Steps to reproduce the issue:

  1. Create a network (external) and do NOT create a subnet for it,
  2. Create a router,
  3. Set the network from step 1 as the external gateway for the router,
  4. Try to delete the external gateway's port using the "openstack port
     delete" command - it will fail with error 500. Stacktrace in the
     neutron server log is as below:

  2023-06-22 05:41:06.672 16 DEBUG neutron.db.l3_db [req-a261d22f-9243-4b22-8d40-a5e7bcd63453 abd0fab2837040f383c986b6a723fbec 39e32a986a4d4f42bce967634a308f99 - default default] Port 9978f00d-4be2-474d-89a7-07d9b1e797df has owner network:router_gateway, but no IP address, so it can be deleted prevent_l3_port_deletion /usr/lib/python3.9/site-packages/neutron/db/l3_db.py:1675
  2023-06-22 05:41:07.085 16 DEBUG neutron.plugins.ml2.plugin [req-a261d22f-9243-4b22-8d40-a5e7bcd63453 abd0fab2837040f383c986b6a723fbec 39e32a986a4d4f42bce967634a308f99 - default default] Calling delete_port for 9978f00d-4be2-474d-89a7-07d9b1e797df owned by network:router_gateway delete_port /usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py:2069
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation [req-a261d22f-9243-4b22-8d40-a5e7bcd63453 abd0fab2837040f383c986b6a723fbec 39e32a986a4d4f42bce967634a308f99 - default default] DELETE failed.: oslo_db.exception.DBReferenceError: (pymysql.err.IntegrityError) (1451, 'Cannot delete or update a parent row: a foreign key constraint fails (`ovs_neutron`.`routers`, CONSTRAINT `routers_ibfk_1` FOREIGN KEY (`gw_port_id`) REFERENCES `ports` (`id`))')
  [SQL: DELETE FROM ports WHERE ports.id = %(id)s]
  [parameters: {'id': '9978f00d-4be2-474d-89a7-07d9b1e797df'}]
  (Background on this error at: http://sqlalche.me/e/13/gkpj)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation Traceback (most recent call last):
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1276, in _execute_context
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     self.dialect.do_execute(
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 609, in do_execute
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     cursor.execute(statement, parameters)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     result = self._query(query)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     conn.query(q)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     result.read()
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     first_packet = self.connection._read_packet()
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 676, in _read_packet
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation     packet.raise_for_error()
  2023-06-22 05:41:07.360 16 ERROR neutron.pecan_wsgi.hooks.translation   File "/usr/lib/python3.9/site-packages/pymysql/protocol.py", line 223, in raise_for_error
  2023-06-22

[Yahoo-eng-team] [Bug 1999154] Re: ovs/ovn source deployment broken with ovs_branch=master

2024-03-05 Thread Brian Haley
Seems fixed, closing.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999154

Title:
  ovs/ovn source deployment broken with ovs_branch=master

Status in neutron:
  Fix Released

Bug description:
  Since [1], jobs running with OVS_BRANCH=master are broken and fail as below:

  utilities/ovn-dbctl.c: In function ‘do_dbctl’:
  utilities/ovn-dbctl.c:724:9: error: too few arguments to function ‘ctl_context_init_command’
    724 |     ctl_context_init_command(ctx, c);
        |     ^~~~
  In file included from utilities/ovn-dbctl.c:23:
  /opt/stack/ovs/lib/db-ctl-base.h:249:6: note: declared here
    249 | void ctl_context_init_command(struct ctl_context *, struct ctl_command *,
        |      ^~~~
  make[1]: *** [Makefile:2352: utilities/ovn-dbctl.o] Error 1
  make[1]: *** Waiting for unfinished jobs
  make[1]: Leaving directory '/opt/stack/ovn'
  make: *** [Makefile:1548: all] Error 2
  + lib/neutron_plugins/ovn_agent:compile_ovn:1 :   exit_trap

  Failure build examples:
  - https://zuul.opendev.org/t/openstack/build/3a900a1cfe824746ac8ffc6a27fc8ec4
  - https://zuul.opendev.org/t/openstack/build/7d862338d6194a4fb3a34e8c3c67f532
  - https://zuul.opendev.org/t/openstack/build/ae092f4985af41908697240e3f64f522

  
  Until the OVN repo [2] gets updated to work with OVS master, we have to
  pin OVS to a working version to get these experimental jobs back to
  green.
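
  For illustration, a hedged sketch of such a pin in a job's devstack
  settings (the variable names are devstack's; the branch value is
  illustrative, not the exact pin that merged):

      OVN_BUILD_FROM_SOURCE=True
      OVS_BRANCH=branch-3.0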

  [1] 
https://github.com/openvswitch/ovs/commit/b8bf410a5c94173da02279b369d75875c4035959
  [2] https://github.com/ovn-org/ovn

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999154/+subscriptions




[Yahoo-eng-team] [Bug 2008912] Re: "_validate_create_network_callback" failing with 'NoneType' object has no attribute 'qos_policy_id'

2024-03-05 Thread Brian Haley
Change merged, will close this.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2008912

Title:
  "_validate_create_network_callback" failing with 'NoneType' object has
  no attribute 'qos_policy_id'

Status in neutron:
  Fix Released

Bug description:
  Logs:
  
  https://e138a887655b8fda005f-ea1d911c7c7db668a9aa6765a743313b.ssl.cf5.rackcdn.com/874133/2/check/neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults/7e5cbf9/controller/logs/screen-q-svc.txt

  Error (snippet): https://paste.opendev.org/show/bYMju0ckz5GK5BYq0yhN/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2008912/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2022914] Re: [neutron-api] remove leader_only for maintenance worker

2024-03-05 Thread Brian Haley
Patches have merged, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2022914

Title:
  [neutron-api] remove leader_only for maintenance worker

Status in neutron:
  Fix Released

Bug description:
  Currently if you want to connect the neutron-api to the southbound
  database you cannot use relays, because the maintenance worker has a
  condition set that requires a leader_only connection.

  This leader_only connection is not necessary, since the maintenance
  tasks of the neutron-api only read information from the southbound
  database and do not push information into it.

  If you adjust the neutron-api to use relays, it will log something
  like "relay database, cannot be leader" every time the maintenance
  task should run.

  I would expect to be able to set the southbound connection for the
  neutron-api to use the relays.
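
  For illustration, a hedged sketch of the relay setup in question (the
  hosts and ports are made up; ovn_sb_connection is neutron's [ovn]
  southbound option):

      [ovn]
      ovn_sb_connection = tcp:ovsdb-relay-1:6642,tcp:ovsdb-relay-2:6642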

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2022914/+subscriptions




[Yahoo-eng-team] [Bug 1897928] Re: TestOvnDbNotifyHandler test cases failing due to missing attribute "_RowEventHandler__watched_events"

2024-03-05 Thread Brian Haley
Seems to have been fixed with
https://review.opendev.org/c/openstack/neutron/+/820911 will close this
bug.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1897928

Title:
  TestOvnDbNotifyHandler test cases failing due to missing attribute
  "_RowEventHandler__watched_events"

Status in neutron:
  Fix Released

Bug description:
  Some neutron.tests.unit.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_ovsdb_monitor.TestOvnDbNotifyHandler test cases are failing:
  * test_shutdown
  * test_watch_and_unwatch_events

  The error [1] is caused by a missing attribute:
  AttributeError: 'OvnDbNotifyHandler' object has no attribute '_RowEventHandler__watched_events'

  [1] https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_bec/periodic/opendev.org/openstack/neutron/master/openstack-tox-py36-with-ovsdbapp-master/becf062/testr_results.html
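
  For background, a minimal illustration of why the attribute carries
  that name: Python mangles double-underscore attributes to
  _ClassName__attribute, so the attribute only exists once the defining
  class's __init__ has run (plain Python below, not the ovsdbapp code):

      class RowEventHandler:
          def __init__(self):
              # stored on the instance as _RowEventHandler__watched_events
              self.__watched_events = set()

      class OvnDbNotifyHandler(RowEventHandler):
          def __init__(self):
              super().__init__()  # skipping this leaves the attribute missing

      handler = OvnDbNotifyHandler()
      print(hasattr(handler, '_RowEventHandler__watched_events'))  # True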

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1897928/+subscriptions




[Yahoo-eng-team] [Bug 1701410] Re: different behavior when deleting with request body (no BadRequest with core resources in case of pecan)

2024-03-05 Thread Brian Haley
I am going to close this as it is over 6 years old and no one has
stepped forward to fix it, so it's just not a priority. Please re-open
if necessary.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1701410

Title:
  different behavior when deleting with request body (no BadRequest with
  core resources in case of pecan)

Status in neutron:
  Won't Fix

Bug description:
  In a master environment, the behavior differs when we try to delete
  with a request body. I fixed it in [1], but the core resources
  (network/subnet/port/subnetpool) don't pass this code in the case of
  web_framework = pecan in /etc/neutron/neutron.conf.

  [1] https://github.com/openstack/neutron/blame/master/neutron/api/v2/base.py#L555

  [FloatingIP, Router]
  $ source ~/devstack/openrc admin admin; export TOKEN=`openstack token issue | grep ' id ' | get_field 2`
  $ curl -i -X DELETE -d '{"floatingip":{"description": "aaa"}}' -H "content-type:application/json" -H 'accept:application/json' -H "x-auth-token:$TOKEN" 192.168.122.33:9696/v2.0/floatingips/f4e9b845-4472-4806-bd7a-bec8f7618af2
  HTTP/1.1 400 Bad Request
  Content-Length: 113
  Content-Type: application/json
  X-Openstack-Request-Id: req-deaffdb3-7c13-4604-89d0-78fbcc184ef5
  Date: Fri, 30 Jun 2017 00:56:56 GMT

  {"NeutronError": {"message": "Request body is not supported in
  DELETE.", "type": "HTTPBadRequest", "detail": ""}}

  $ curl -i -X DELETE -d '{"router": {"name": "aaa"}}' -H "content-type:application/json" -H 'accept:application/json' -H "x-auth-token:$TOKEN" 192.168.122.33:9696/v2.0/routers/1d0ea30e-c481-4be3-a548-a659d9e3787c
  HTTP/1.1 400 Bad Request
  Content-Length: 113
  Content-Type: application/json
  X-Openstack-Request-Id: req-a2f9babb-4eb3-471e-9b42-ccfe722c44f0
  Date: Fri, 30 Jun 2017 01:44:40 GMT

  {"NeutronError": {"message": "Request body is not supported in
  DELETE.", "type": "HTTPBadRequest", "detail": ""}}

  [Core resources: Network/Subnet/Port/Subnetpool]
  $ source ~/devstack/openrc admin admin; export TOKEN=`openstack token issue | grep ' id ' | get_field 2`
  $ curl -i -X DELETE -d '{"network":{"name": ""}}' -H "content-type:application/json" -H 'accept:application/json' -H "x-auth-token:$TOKEN" 192.168.122.33:9696/v2.0/networks/1fb94931-dabe-49dc-bce4-68c8bafea8b0

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-7e838c38-e6cd-46c3-8703-c93f5bb4a503
  Date: Fri, 30 Jun 2017 01:32:12 GMT

  $ curl -i -X DELETE -d '{"subnet": {"name": "aaa"}}' -H "content-type:application/json" -H 'accept:application/json' -H "x-auth-token:$TOKEN" 192.168.122.33:9696/v2.0/subnets/a18fb191-2a89-4193-80d1-5330a8052d64

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-901476cf-7e87-4b7c-ab20-209b81d2eb25
  Date: Fri, 30 Jun 2017 01:37:01 GMT

  $ curl -i -X DELETE -d '{"port": {"name": "aaa"}}' -H "content-type:application/json" -H 'accept:application/json' -H "x-auth-token:$TOKEN" 192.168.122.33:9696/v2.0/ports/47f2c36a-7461-4c1a-a23e-931d5aee3f9c

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-48452706-6309-42c2-ac80-f0f4e387060e
  Date: Fri, 30 Jun 2017 01:37:33 GMT

  $ curl -i -X DELETE -d '{"subnetpool": {"description": "aaa"}}' -H
  "content-type:application/json" -H 'accept:application/json' -H
  "x-auth-token:$TOKEN"
  192.168.122.33:9696/v2.0/subnetpools/e0e09ffc-a4af-4cf0-ac2e-7a8b1475cef6

  HTTP/1.1 204 No Content
  Content-Length: 0
  X-Openstack-Request-Id: req-9601a3ae-74a0-49ca-9f99-02ad624ceacb
  Date: Fri, 30 Jun 2017 06:24:58 GMT
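
  The asymmetry comes from which controller serves the resource. A minimal
  sketch of the kind of guard the legacy WSGI controller applies in [1], and
  that the pecan path would also need for core resources (the helper name is
  illustrative, not actual neutron code):

    import webob.exc

    def reject_delete_body(method, body):
        # Mirror base.py: a DELETE request carrying a body is a 400.
        if method == 'DELETE' and body:
            raise webob.exc.HTTPBadRequest(
                "Request body is not supported in DELETE.")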

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1701410/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1865453] Re: neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly

2024-03-05 Thread Brian Haley
** Changed in: neutron
 Assignee: Adil Ishaq (iradvisor) => (unassigned)

** Changed in: identity-management
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865453

Title:
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before
  fails randomly

Status in Identity Management:
  Invalid
Status in neutron:
  Confirmed

Bug description:
  Sometimes we see random failures of the test:

  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before

  
  
neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before
testtools.testresult.real._StringException: Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/mock/mock.py",
 line 1330, in patched
  return func(*args, **keywargs)
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 182, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/functional/plugins/ml2/drivers/ovn/mech_driver/test_mech_driver.py",
 line 280, in test_virtual_port_created_before
  ovn_vport.options[ovn_const.LSP_OPTIONS_VIRTUAL_PARENTS_KEY])
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 417, in assertIn
  self.assertThat(haystack, Contains(needle), message)
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-functional/lib/python3.6/site-packages/testtools/testcase.py",
 line 498, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 
'88c0378b-71bd-454b-a0df-8c70b57d257a' not in 
'49043b88-554f-48d0-888d-eeaa749e752f'
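
  The mismatch suggests the assertion runs before OVN has propagated the
  virtual-parents option to the logical switch port. A common way to deflake
  such functional tests is to poll instead of asserting immediately; a
  hedged sketch (helper names illustrative, not the actual fix):

    import time

    def wait_for_virtual_parent(fetch_port, parent_id, timeout=10):
        # Re-read the OVN NB row until 'virtual-parents' contains parent_id.
        deadline = time.time() + timeout
        while time.time() < deadline:
            ovn_vport = fetch_port()
            parents = ovn_vport.options.get('virtual-parents', '')
            if parent_id in parents.split(','):
                return ovn_vport
            time.sleep(0.5)
        raise AssertionError('%s never listed as virtual parent' % parent_id)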

To manage notifications about this bug go to:
https://bugs.launchpad.net/identity-management/+bug/1865453/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2037717] Re: [OVN] ``PortBindingChassisEvent`` event is not executing the conditions check

2024-03-01 Thread Brian Haley
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037717

Title:
  [OVN] ``PortBindingChassisEvent`` event is not executing the
  conditions check

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  Since [1], which overrides the "match_fn" method, the event no longer
checks the conditions defined at initialization, namely:
    ('type', '=', ovn_const.OVN_CHASSIS_REDIRECT)

  [1]https://review.opendev.org/q/I3b7c5d73d2b0d20fb06527ade30af8939b249d75

  Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2241824
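
  In other words, once match_fn() is overridden, the tuple conditions passed
  at initialization are never evaluated, so the type check has to be repeated
  inside match_fn() itself. A hedged sketch of the shape of the fix (class
  body simplified, not the merged patch):

    class PortBindingChassisEvent(row_event.RowEvent):
        def match_fn(self, event, row, old):
            # Re-apply the condition declared at init time, since with
            # match_fn() overridden it is no longer checked automatically.
            return row.type == ovn_const.OVN_CHASSIS_REDIRECT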

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2037717/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973347] Re: OVN revision_number infinite update loop

2024-03-01 Thread Brian Haley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973347

Title:
  OVN revision_number infinite update loop

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  After the change described in
  https://mail.openvswitch.org/pipermail/ovs-dev/2022-May/393966.html
  was merged and released in stable OVN 22.03, there is a possibility to
  create an endless loop of revision_number update in external_ids of
  ports and router_ports. We have confirmed the bug in Ussuri and Yoga.
  When the problem happens, the Neutron log would look like this:

  2022-05-13 09:30:56.318 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4815
  2022-05-13 09:30:56.366 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=1): 
SetLSwitchPortCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=2): 
PgDelPortCommand(...)
  2022-05-13 09:30:56.467 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4815
  2022-05-13 09:30:56.880 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=1): 
UpdateLRouterPortCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=2): 
SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:56.984 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4816
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=1): 
SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.058 25 ... Running txn n=1 command(idx=2): 
PgDelPortCommand(...)
  2022-05-13 09:30:57.159 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4816
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=1): 
UpdateLRouterPortCommand(...)
  2022-05-13 09:30:57.524 25 ... Running txn n=1 command(idx=2): 
SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:57.627 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4817
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=0): 
CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=1): 
SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.675 25 ... Running txn n=1 command(idx=2): 
PgDelPortCommand(...)
  2022-05-13 09:30:57.765 25 ... Successfully bumped revision number for 
resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4817

  (full version here: https://pastebin.com/raw/NLP1b6Qm).

  In our lab environment we have confirmed that the problem is gone
  after the mentioned change is rolled back.
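
  The log pattern is two resource types (ports and router_ports) taking
  turns bumping the same key, each write re-triggering the other's update
  event. A guard that only writes when the revision is actually newer would
  break the cycle; a hedged sketch (helper names assumed, not the merged
  fix):

    def maybe_bump_revision(txn, nb_api, row, neutron_rev):
        current = int(row.external_ids.get('neutron:revision_number', -1))
        if neutron_rev <= current:
            return  # skip the no-op write so no new update event fires
        txn.add(nb_api.db_set(
            'Logical_Switch_Port', row.uuid,
            ('external_ids', {'neutron:revision_number': str(neutron_rev)})))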

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1973347/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2055245] Re: DHCP Option is not passed to VM via Cloud-init

2024-02-28 Thread Brian Haley
Neutron started using network:distributed for both DHCP and metadata
ports in Victoria [0].

Looking at the proposed change, Nova only ever looks for ports with
network:dhcp in the device_owner field; it also needs to look up ports
with network:distributed in that field. Unfortunately the two queries
can't be combined at the moment, I might try to fix that.

So I don't think this is a valid bug for Neutron.

[0] https://review.opendev.org/c/openstack/neutron/+/732364
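
A hedged sketch of the lookup change described above (shown with
python-neutronclient for illustration; the real change would live in
nova/network/neutron.py):

  def get_dhcp_serving_ports(neutron, network_id):
      # ML2/OVN marks its metadata/DHCP port network:distributed, so both
      # owners have to be checked, currently in two separate queries.
      ports = []
      for owner in ('network:dhcp', 'network:distributed'):
          ports += neutron.list_ports(
              network_id=network_id, device_owner=owner)['ports']
      return ports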

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2055245

Title:
  DHCP Option is not passed to VM via Cloud-init

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  Nova-Metadata-API doesn't provide ipv4_dhcp type for OVN (native OVH
  DHCP feature, no DHCP agents) networks with dhcp_enabled but no
  default gateway.

  Problem seems to be in
  
https://opendev.org/openstack/nova/src/branch/master/nova/network/neutron.py#L3617

  There is just an exception to networks without device_owner:
  network:dhcp where default gateway is used, which doesn't cover this
  case.

  Steps to reproduce
  ==

  Create a OVN network in an environment where native DHCP feature is
  provided by ovn (no ml2/ovs DHCP Agents). In addition this network
  needs to have no default gateway enabled.

  Create VM in this network and observe the cloud-init process
  (network_data.json)

  Expected result
  ===

  network_data.json
  (http://169.254.169.254/openstack/2018-08-27/network_data.json) should
  return something like:

  {
"links": [
  {
"id": "tapddc91085-96",
"vif_id": "ddc91085-9650-4b7b-ad9d-b475bac8ec8b",
"type": "ovs",
"mtu": 1442,
"ethernet_mac_address": "fa:16:3e:93:49:fa"
  }
],
"networks": [
  {
"id": "network0",
"type": "ipv4_dhcp",
"link": "tapddc91085-96",
"network_id": "9f61a3a7-26d3-4013-b61d-12880b325ea9"
  }
],
"services": []
  }

  Actual result
  =

  {
"links": [
  {
"id": "tapddc91085-96",
"vif_id": "ddc91085-9650-4b7b-ad9d-b475bac8ec8b",
"type": "ovs",
"mtu": 1442,
"ethernet_mac_address": "fa:16:3e:93:49:fa"
  }
],
"networks": [
  {
"id": "network0",
"type": "ipv4",
"link": "tapddc91085-96",
"ip_address": "10.0.0.40",
"netmask": "255.255.255.0",
"routes": [],
"network_id": "9f61a3a7-26d3-4013-b61d-12880b325ea9",
"services": []
  }
],
"services": []
  }

  Environment
  ===

  Openstack Zed with Neutron OVN feature enabled

  Nova: 26.2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055245/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1867214] Re: MTU too large error presented on create but not update

2024-02-19 Thread Brian Haley
Could not reproduce, marking invalid.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1867214

Title:
  MTU too large error presented on create but not update

Status in neutron:
  Invalid

Bug description:
  If an MTU is supplied when creating a network it is rejected if it is
  above global_physnet_mtu.  If an MTU is supplied when updating a
  network it is not rejected even if the value is too large.  When
  global_physnet_mtu is 1500 I can easily set MTU 9000 or even beyond
  through update.  This is not valid.

  ~~~
  (overcloud) [stack@undercloud-0 ~]$ openstack network show private1
  +---------------------------+----------------------------------------------------------------------------+
  | Field                     | Value                                                                      |
  +---------------------------+----------------------------------------------------------------------------+
  | admin_state_up            | UP                                                                         |
  | availability_zone_hints   |                                                                            |
  | availability_zones        | nova                                                                       |
  | created_at                | 2020-03-09T15:55:38Z                                                       |
  | description               |                                                                            |
  | dns_domain                | None                                                                       |
  | id                        | bffac18a-ceaa-4eeb-9a19-800de150def5                                       |
  | ipv4_address_scope        | None                                                                       |
  | ipv6_address_scope        | None                                                                       |
  | is_default                | False                                                                      |
  | is_vlan_transparent       | None                                                                       |
  | mtu                       | 1500                                                                       |
  | name                      | private1                                                                   |
  | port_security_enabled     | True                                                                       |
  | project_id                | d69c1c6601c741deaa205fa1a7e9c632                                           |
  | provider:network_type     | vlan                                                                       |
  | provider:physical_network | tenant                                                                     |
  | provider:segmentation_id  | 106                                                                        |
  | qos_policy_id             | None                                                                       |
  | revision_number           | 8                                                                          |
  | router:external           | External                                                                   |
  | segments                  | None                                                                       |
  | shared                    | True                                                                       |
  | status                    | ACTIVE                                                                     |
  | subnets                   | 51fc6508-313f-41c4-839c-bcbe2fa8795d, 7b6fcbe1-b064-4660-b04a-e433ab18ba73 |
  | tags                      |                                                                            |
  | updated_at                | 2020-03-09T15:56:41Z                                                       |
  +---------------------------+----------------------------------------------------------------------------+
  (overcloud) [stack@undercloud-0 ~]$ openstack network set private1 --mtu 9000
  (overcloud) [stack@undercloud-0 ~]$ openstack network set private1 --mtu 9500
  (overcloud) [stack@undercloud-0 ~]$ openstack network show private1
  +---------------------------+----------------------------------------------------------------------------+
  | Field                     | Value                                                                      |
  +---------------------------+----------------------------------------------------------------------------+
  | admin_state_up            | UP                                                                         |
  | availability_zone_hints   |                                                                            |
  | availability_zones        | nova                                                                       |

[Yahoo-eng-team] [Bug 1865223] Re: [scale issue] regression for security group list between Newton and Rocky+

2024-02-19 Thread Brian Haley
Looks like this was fixed, will close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1865223

Title:
  [scale issue] regression for security group list between Newton and
  Rocky+

Status in neutron:
  Fix Released

Bug description:
  We recently upgraded an environment from Newton -> Rocky, and
  experienced a dramatic increase in the amount of time it takes to
  return a full security group list. For ~8,000 security groups, it
  takes nearly 75 seconds. This was not observed in Newton.

  I was able to replicate this in the following 4 environments:

  Newton (virtual machine)
  Rocky (baremetal)
  Stein (virtual machine)
  Train (baremetal)

  Command: openstack security group list

  > Sec Grps vs. Seconds

  Qty    Newton VM  Rocky BM  Stein VM  Train BM
  200    4.1        3.7       5.4       5.2
  500    5.3        7         11        9.4
  1000   7.2        12.4      19.2      16
  2000   9.2        24.2      35.3      30.7
  3000   12.1       36.5      52        44
  4000   16.1       47.2      73        58.9

  At this time, we do not know if this increase in time extends to other
  'list' commands at scale. The 'show' commands appear to be fairly
  performant. This increase in time does have a negative impact on user
  perception, scripts, other dependent resources, etc. The Stein VM is
  slower than Train, but could be due to VM vs BM. The Newton
  environment is virtual, too, so I would expect even better performance
  on bare metal.

  Any assistance or insight into what might have changed between
  releases to cause this would be helpful.
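
  For reference, a quick way to reproduce the measurement outside the CLI
  (a sketch using openstacksdk, with credentials assumed to come from the
  usual OS_* environment variables):

    import time
    import openstack

    conn = openstack.connect(cloud='envvars')
    start = time.time()
    groups = list(conn.network.security_groups())
    print('%d security groups listed in %.1f s'
          % (len(groups), time.time() - start))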

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1865223/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821137] Re: neutron_tempest_plugin.api test_show_network_segment_range fails

2024-02-19 Thread Brian Haley
I'm going to close this as the logs needed to determine the exact error
are long gone. If it happens again we can open a new bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1821137

Title:
  neutron_tempest_plugin.api test_show_network_segment_range fails

Status in neutron:
  Invalid

Bug description:
  Example:
  
http://logs.openstack.org/42/644842/2/check/neutron-tempest-plugin-api/1c82227/testr_results.html.gz

  log search:
  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22observed_range%5B'project_id'%5D)%5C%22

  Exception:
  
http://logs.openstack.org/42/644842/2/check/neutron-tempest-plugin-api/1c82227/controller/logs/screen-q-svc.txt.gz?level=ERROR

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1821137/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1849479] Re: neutron l2 to dhcp lost when migrating in stable/stein 14.0.2

2024-02-19 Thread Brian Haley
I'm going to close this as Stein has been EOL for quite a while. If this
is happening on a newer, supported release please open a new bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1849479

Title:
  neutron l2 to dhcp lost when migrating in stable/stein 14.0.2

Status in neutron:
  Invalid

Bug description:
  Info about the environment:

  3x controller nodes
  50+ compute nodes

  all in stable stein, neutron is 14.0.2 using OVS 2.11.0

  neutron settings:
- max_l3_agents_per_router = 3
- dhcp_agents_per_network = 2
- router_distributed = true
- interface_driver = openvswitch
- l3_ha = true

  l3 agent:
- agent_mode = dvr

  ml2:
- type_drivers = flat,vlan,vxlan
- tenant_network_types = vxlan
- mechanism_drivers = openvswitch,l2population
- extension_drivers = port_security,dns
- external_network_type = vlan

  tenants may have multiple external networks
  instances may have multiple interfaces

  tests have been performed on 10 instances launched in a tenant network
  connected to a router in an external network. all instances have
  floating ip's assigned. these instances had only 1 interface. this
  particular testing tenant has rbac's for 4 external networks in which
  only 1 is used.

  migrations have been done via cli with admin:
  openstack server migrate --live  
  have also tested using evacuate with same results

  expected behavior:
  when _multiple_ (in the range of 10+) instances are migrated simultaneously 
from one compute host to another, they should come up with a minor network 
service drop. all l2 should be resumed.

  what actually happens:
  instances are migrated, some errors pop up in neutron/nova, and then the 
instances come up with a minor network service drop. However, L2 toward the 
dhcp-servers is totally severed in OVS. The migrated instances will, as 
expected, try to renew their lease half-way through the current lease and at 
the end of it drop the IP. An easy test is to try renewing a lease on an 
instance or to icmp any dhcp-server in that vxlan L2.

  current workaround:
  once the instance is migrated the l2 to dhcp-servers can be re-established by 
restarting neutron-openvswitch-agent on the destination host.

  how to test:
  create instances (10+), migrate and then try to ping neutron dhcp-server in 
the vxlan (tenant created network) or simply renew dhcp-leases.

  error messages:

  Exception during message handling: TooManyExternalNetworks: More than
  one external network exists. TooManyExternalNetworks: More than one
  external network exists.

  other oddities:
  when performing migration of small number of instances i.e. 1-4 migrations 
become successful and L2 with dhcp-servers is not lost.

  when looking through debug logs i can't really find anything of
  relevance. no other large errors/warnings occur other that the one
  above.

  i will perform more tests when migrations are successful and/or
  neutron-openvswitch-agent is restarted and see if L2 to the dhcp-servers
  survives 24h.

  This matches a 14.0.0 regression bug which should be fixed in 14.0.2
  (this bug report is for 14.0.2), but it could possibly not work with
  this combination of settings(?).

  Please let me know if any versions to api/services is required for
  this or any configurations or other info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1849479/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1794870] Re: NetworkNotFound failures on network test teardown because of retries due to the initial request taking >60 seconds

2024-02-19 Thread Brian Haley
Looks like this was fixed in commit
748dd8df737d28aad7dfd0a1e32659e0256126e2 in the tempest tree, will
close.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1794870

Title:
  NetworkNotFound failures on network test teardown because of retries
  due to the initial request taking >60 seconds

Status in neutron:
  Fix Released
Status in tempest:
  Invalid

Bug description:
  I've seen this in a few different tests and branches: network tests
  are tearing down and hitting NetworkNotFound, presumably because the
  test already deleted the network and we're racing on teardown:

  http://logs.openstack.org/70/605270/1/gate/tempest-full-
  py3/f18bf28/testr_results.html.gz

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/services/network/networks_client.py", 
line 52, in delete_network
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 41, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 310, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 675, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 781, in 
_error_checker
  raise exceptions.NotFound(resp_body, resp=resp)
  tempest.lib.exceptions.NotFound: Object not found
  Details: {'detail': '', 'type': 'NetworkNotFound', 'message': 'Network 
0574d093-73f1-4a7c-b0d8-49c9f43d44fa could not be found.'}

  We should just handle the 404 and ignore it since we're trying to
  delete the network anyway.
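
  A minimal sketch of that handling (tempest ships NotFound in
  tempest.lib.exceptions, as the traceback shows; the helper name is
  illustrative):

    from tempest.lib import exceptions as lib_exc

    def delete_network_if_exists(networks_client, network_id):
        try:
            networks_client.delete_network(network_id)
        except lib_exc.NotFound:
            pass  # already gone, which is the state we wanted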

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1794870/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1685352] Re: Can't invoke function 'get_bind' from alembic.op in expand_drop_exceptions function in alembic migration scripts

2024-02-19 Thread Brian Haley
Looks like this code was changed for Sqlalchemy 2.0 in
d7ba5948ffe4ff4ec760a2774c699774b065cdfb as from_engine() is deprecated,
will close this bug.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1685352

Title:
  Can't invoke function 'get_bind' from alembic.op in
  expand_drop_exceptions function in alembic migration scripts

Status in neutron:
  Invalid

Bug description:
  If something like:

  inspector = reflection.Inspector.from_engine(op.get_bind())

  is used in alembic migration scripts in functions
  expand_drop_exceptions() or contract_creation_exceptions() then there
  is error like:

  NameError: Can't invoke function 'get_bind', as the proxy object
  has not yet been established for the Alembic 'Operations' class.  Try
  placing this code inside a callable.

  Those 2 functions are used only in functional tests, but it would be
  nice to have the possibility to use this Inspector class, for example
  to get the names of constraints from the database.
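
  The error message itself points at the workaround: op.get_bind() only
  resolves once the code runs inside a callable during a migration, so the
  inspector has to be created lazily. A hedged sketch of that direction
  (sqlalchemy.inspect() shown since Inspector.from_engine() is deprecated;
  the function body is illustrative):

    import sqlalchemy as sa
    from alembic import op

    def expand_drop_exceptions():
        def _uc_names(table_name):
            # Deferred: op.get_bind() is only valid while a migration runs.
            inspector = sa.inspect(op.get_bind())
            return [uc['name']
                    for uc in inspector.get_unique_constraints(table_name)]
        return _uc_names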

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1685352/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035578] Re: [stable branches] devstack-tobiko-neutron job Fails with InvocationError('could not find executable python', None)

2024-02-19 Thread Brian Haley
As this was not a neutron bug and the tobiko patch has merged, I will
close this bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035578

Title:
  [stable branches] devstack-tobiko-neutron job Fails with
  InvocationError('could not find executable python', None)

Status in neutron:
  Fix Released

Bug description:
  It started failing[1] since the job switched to ubuntu-jammy[2].

  Fails as below:-
  2023-09-13 16:46:18.124882 | TASK [tobiko-tox : run sanity test cases before 
creating resources]
  2023-09-13 16:46:19.463567 | controller | neutron_sanity create: 
/home/zuul/src/opendev.org/x/tobiko/.tox/py3
  2023-09-13 16:46:20.518574 | controller | neutron_sanity installdeps: 
-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt
  2023-09-13 16:46:20.519390 | controller | ERROR: could not install deps 
[-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
  2023-09-13 16:46:20.520263 | controller | ___ 
summary 
  2023-09-13 16:46:20.555843 | controller | ERROR:   neutron_sanity: could not 
install deps [-c/home/zuul/src/opendev.org/x/tobiko/upper-constraints.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/test-requirements.txt, 
-r/home/zuul/src/opendev.org/x/tobiko/extra-requirements.txt]; v = 
InvocationError('could not find executable python', None)
  2023-09-13 16:46:21.141713 | controller | ERROR
  2023-09-13 16:46:21.142024 | controller | {
  2023-09-13 16:46:21.142117 | controller |   "delta": "0:00:01.484351",
  2023-09-13 16:46:21.142197 | controller |   "end": "2023-09-13 
16:46:20.556249",
  2023-09-13 16:46:21.142276 | controller |   "failed_when_result": true,
  2023-09-13 16:46:21.142353 | controller |   "msg": "non-zero return code",
  2023-09-13 16:46:21.142688 | controller |   "rc": 1,
  2023-09-13 16:46:21.142770 | controller |   "start": "2023-09-13 
16:46:19.071898"
  2023-09-13 16:46:21.142879 | controller | }
  2023-09-13 16:46:21.142972 | controller | ERROR: Ignoring Errors

  
  Example failures zed/stable2023.1:-
  - https://zuul.opendev.org/t/openstack/build/591dae67122444daa35195f7458ffafe
  - https://zuul.opendev.org/t/openstack/build/5838bf0704b247dc8f1eb12367b1d33e
  - https://zuul.opendev.org/t/openstack/build/8d2e22ff171944b0b549c12e1aaac476

  Wallaby/Xena/Yoga builds started failing with:-
  ++ functions:write_devstack_version:852 :   git log '--format=%H %s %ci' 
-1
  + ./stack.sh:main:230  :   
SUPPORTED_DISTROS='bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03'
  + ./stack.sh:main:232  :   [[ ! jammy =~ 
bullseye|focal|f35|opensuse-15.2|opensuse-tumbleweed|rhel8|rhel9|openEuler-20.03
 ]]
  + ./stack.sh:main:233  :   echo 'WARNING: this script has 
not been tested on jammy'

  Example:-
  - https://zuul.opendev.org/t/openstack/build/0bd0421e30804b7aa9b6ea032d271be7
  - https://zuul.opendev.org/t/openstack/build/8e06dfc0ccd940f3ab71edc0ec93466c
  - https://zuul.opendev.org/t/openstack/build/899634e90ee94e0294985747075fb26c

  Even before this the jobs were broken, but there the tests used to fail
  rather than the test setup; that can be handled once the current issues
  are cleared.

  
  [1] 
https://zuul.opendev.org/t/openstack/builds?job_name=devstack-tobiko-neutron=stable%2F2023.1
  [2] https://review.opendev.org/c/x/devstack-plugin-tobiko/+/893662?usp=search

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035578/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2042941] Re: neutron-{ovn, ovs}-tempest-with-sqlalchemy-master jobs not installing sqlalchemy/alembic from source

2024-02-19 Thread Brian Haley
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042941

Title:
  neutron-{ovn,ovs}-tempest-with-sqlalchemy-master jobs not installing
  sqlalchemy/alembic from source

Status in neutron:
  Invalid

Bug description:
  The neutron-ovn-tempest-with-sqlalchemy-master and 
neutron-ovs-tempest-with-sqlalchemy-master jobs are expected to install 
sqlalchemy and alembic from the main branch as defined in required-projects, 
but they install released versions instead:-
  required-projects:
- name: github.com/sqlalchemy/sqlalchemy
  override-checkout: main
- openstack/oslo.db
- openstack/neutron-lib
- name: github.com/sqlalchemy/alembic
  override-checkout: main

  
  Builds:- 
https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-with-sqlalchemy-master_name=neutron-ovs-tempest-with-sqlalchemy-master=0

  Noticed it when other jobs running with sqlalchemy master are broken
  but not these https://bugs.launchpad.net/neutron/+bug/2042939

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042941/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028285] Re: [unit test][xena+] test_port_deletion_prevention fails when runs in isolation

2024-02-19 Thread Brian Haley
The comment on a failure in Zed looked to not have the fix - it was
version 21.1.2, while version 21.2.0 or greater is required. Will close
this as the fixes have been released.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028285

Title:
  [unit test][xena+] test_port_deletion_prevention fails when runs in
  isolation

Status in neutron:
  Fix Released

Bug description:
  Can be reproduced by just running:-
  tox -epy3 -- test_port_deletion_prevention
  or run any of the below tests individually:-
  
neutron.tests.unit.extensions.test_l3.L3NatDBSepTestCase.test_port_deletion_prevention_handles_missing_port
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBSepTestCase.test_port_deletion_prevention_handles_missing_port

  Fails as below:-
  
neutron.tests.unit.extensions.test_extraroute.ExtraRouteDBSepTestCase.test_port_deletion_prevention_handles_missing_port
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

File "/home/ykarel/work/openstack/neutron/neutron/tests/base.py", line 
178, in func
  return f(self, *args, **kwargs)

File "/home/ykarel/work/openstack/neutron/neutron/tests/base.py", line 
178, in func
  return f(self, *args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/neutron/tests/unit/extensions/test_l3.py", 
line 4491, in test_port_deletion_prevention_handles_missing_port
  pl.prevent_l3_port_deletion(context.get_admin_context(), 'fakeid')

File "/home/ykarel/work/openstack/neutron/neutron/db/l3_db.py", line 
1742, in prevent_l3_port_deletion
  port = port or self._core_plugin.get_port(context, port_id)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 223, in wrapped
  return f_with_retry(*args, **kwargs,

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 137, in wrapped
  with excutils.save_and_reraise_exception():

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 135, in wrapped
  return f(*args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/api.py",
 line 144, in wrapper
  with excutils.save_and_reraise_exception() as ectxt:

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/api.py",
 line 142, in wrapper
  return f(*args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 183, in wrapped
  with excutils.save_and_reraise_exception():

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 227, in __exit__
  self.force_reraise()

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_utils/excutils.py",
 line 200, in force_reraise
  raise self.value

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/neutron_lib/db/api.py",
 line 181, in wrapped
  return f(*dup_args, **dup_kwargs)

File 
"/home/ykarel/work/openstack/neutron/.tox/py3/lib/python3.10/site-packages/oslo_db/sqlalchemy/enginefacade.py",
 line 1022, in wrapper
  return fn(*args, **kwargs)

File 
"/home/ykarel/work/openstack/neutron/neutron/db/db_base_plugin_v2.py", line 
1628, in get_port
  lazy_fields = [models_v2.Port.port_forwardings,

  AttributeError: type object 'Port' has no attribute
  'port_forwardings'

  It's reproducible since Xena+, following the inclusion of patch
  https://review.opendev.org/c/openstack/neutron/+/790691

  It does not reproduce if there are other test runs (from the test class)
  before this test which involve other requests (like network get/create,
  etc.) apart from the ones modified in the above patch.

  Considering the above point, if this test is modified to run other requests like 

[Yahoo-eng-team] [Bug 1742187] Re: osc client missing extra-dhcp-opts option

2024-02-14 Thread Brian Haley
** Changed in: python-openstackclient
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1742187

Title:
  osc client missing extra-dhcp-opts option

Status in neutron:
  Invalid
Status in python-openstackclient:
  Fix Released

Bug description:
  An option to use the extra-dhcp-opt API extension seems to be missing
  from the osc plugin for neutron:

  stack@tm-devstack-master-01:~$ openstack extension list |grep extra_dhcp_opt
  | Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration 
for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. 
tftp-server, server-ip-address, bootfile-name) |

  => the corresponding API extension is enabled in this setup

  stack@tm-devstack-master-01:~$ openstack port create 2>&1 |grep extra

  => nothing about extra dhcp opt in the CLI help

  stack@tm-devstack-master-01:~$ openstack port create --network foo 
--extra-dhcp-opt opt_name=42,opt_value=55
  usage: openstack port create [-h] [-f {json,shell,table,value,yaml}]
                               [-c COLUMN] [--max-width <integer>] [--fit-width]
                               [--print-empty] [--noindent] [--prefix PREFIX]
                               --network <network> [--description <description>]
                               [--device <device-id>]
                               [--mac-address <mac-address>]
                               [--device-owner <device-owner>]
                               [--vnic-type <vnic-type>] [--host <host-id>]
                               [--dns-name dns-name]
                               [--fixed-ip subnet=<subnet>,ip-address=<ip-address>]
                               [--binding-profile <binding-profile>]
                               [--enable | --disable] [--project <project>]
                               [--project-domain <project-domain>]
                               [--security-group <security-group> | --no-security-group]
                               [--qos-policy <qos-policy>]
                               [--enable-port-security | --disable-port-security]
                               [--allowed-address ip-address=<ip-address>[,mac-address=<mac-address>]]
                               [--tag <tag> | --no-tag]
                               <name>
  openstack port create: error: unrecognized arguments: --extra-dhcp-opt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1742187/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999677] Re: Defunct nodes are reported as happy in network agent list

2024-02-13 Thread Brian Haley
Since this has been fixed in later Ussuri and/or later neutron code I'm
going to close this. Please re-open if necessary.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999677

Title:
  Defunct nodes are reported as happy in network agent list

Status in OpenStack Neutron API Charm:
  New
Status in networking-ovn:
  Invalid
Status in neutron:
  Invalid

Bug description:
  When decommissioning a node from a cloud using Neutron and OVN, the Chassis 
is not removed from the OVN SB db, and it always shows as happy in "openstack 
network agent list", which is a bit weird - the operator would expect to see 
it as XXX in the agent list.

  This is more for the upstream neutron but adding the charm for
  visibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1999677/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051171] [NEW] SQLalchemy 2.0 warning in neutron-lib

2024-01-24 Thread Brian Haley
Public bug reported:

Running 'tox -e pep8' in neutron-lib or neutron repo generates this new
warning:

/home/bhaley/git/neutron-lib/neutron_lib/db/model_base.py:113: 
MovedIn20Warning: Deprecated API features detected! These feature(s) are not 
compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to 
updating applications, ensure requirements files are pinned to 
"sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all 
deprecation warnings.  Set environment variable 
SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on 
SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
  BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

Google eventually points in this direction:

https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html#step-one-
orm-declarative-base-is-superseded-by-orm-declarativebase

So moving to use sqlalchemy.orm.DeclarativeBase class is the future.

Might be a little tricky to implement as sqlalchemy is currently pinned
in UC:

sqlalchemy===1.4.50
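
A minimal sketch of the direction the linked SQLAlchemy page describes
(requires SQLAlchemy 2.0, where orm.DeclarativeBase was introduced; the
class names mirror model_base.py but the snippet is illustrative):

  from sqlalchemy.orm import DeclarativeBase

  class NeutronBaseV2:
      """Stand-in for the existing declarative mixin."""

  class BASEV2(DeclarativeBase, NeutronBaseV2):
      # Replaces declarative.declarative_base(cls=NeutronBaseV2); models
      # keep inheriting from BASEV2 exactly as before.
      pass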

** Affects: neutron
 Importance: High
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051171

Title:
  SQLalchemy 2.0 warning in neutron-lib

Status in neutron:
  Confirmed

Bug description:
  Running 'tox -e pep8' in neutron-lib or neutron repo generates this
  new warning:

  /home/bhaley/git/neutron-lib/neutron_lib/db/model_base.py:113: 
MovedIn20Warning: Deprecated API features detected! These feature(s) are not 
compatible with SQLAlchemy 2.0. To prevent incompatible upgrades prior to 
updating applications, ensure requirements files are pinned to 
"sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all 
deprecation warnings.  Set environment variable 
SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this message. (Background on 
SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
BASEV2 = declarative.declarative_base(cls=NeutronBaseV2)

  Google eventually points in this direction:

  https://docs.sqlalchemy.org/en/20/changelog/whatsnew_20.html#step-one-
  orm-declarative-base-is-superseded-by-orm-declarativebase

  So moving to use sqlalchemy.orm.DeclarativeBase class is the future.

  Might be a little tricky to implement as sqlalchemy is currently
  pinned in UC:

  sqlalchemy===1.4.50

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051171/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049546] Re: neutron-linuxbridge-agent ebtables RULE_DELETE failed (Invalid argument)

2024-01-16 Thread Brian Haley
*** This bug is a duplicate of bug 2038541 ***
https://bugs.launchpad.net/bugs/2038541

This was fixed with
https://review.opendev.org/c/openstack/neutron/+/898832 and is a
duplicate of https://bugs.launchpad.net/neutron/+bug/2038541 - please
try the fix there.

** This bug has been marked a duplicate of bug 2038541
   LinuxBridgeARPSpoofTestCase functional tests fails with latest jammy kernel 
5.15.0-86.96

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049546

Title:
  neutron-linuxbridge-agent ebtables RULE_DELETE failed (Invalid
  argument)

Status in neutron:
  New

Bug description:
  neutron-linuxbridge-agent fails and gets stuck when cleaning up ARP
  protection rules:

   neutron-linuxbridge-agent[3049824]: Exit code: 4; Cmd:
  ['ebtables', '-t', 'nat', '--concurrent', '-D', 'neutronMAC-
  tap50f1af99-28', '-i', 'tap50f1af99-28', '--among-src',
  'fa:16:3e:ba:10:2a', '-j', 'RETURN']; Stdin: ; Stdout: ; Stderr:
  ebtables v1.8.7 (nf_tables):  RULE_DELETE failed (Invalid argument):
  rule in chain neutronMAC-tap50f1af99-28

  Afterward, it stops responding to RPC messages and nova-compute times
  out waiting for vif-plugged events.

  Version:

* OpenStack Zed from Ubuntu cloud archive
* Ubuntu 22.04 LTS
* 5.15.0-91-generic #101-Ubuntu
* Deployed via Ubuntu cloud archive packages

  Context:

  The document
  https://github.com/openstack/neutron/blob/stable/zed/doc/source/admin/deploy-
  lb.rst mentions some resolved issues with ebtables based on nftables,
  and the scenarios from the linked bug reports do work. The issue here
  appears to only happen when removing ARP spoofing rules. We have a
  few compute hosts with high churn, many instances created and
  deleted. On these, neutron-linuxbridge-agent works visibly fine until
  it becomes stuck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049546/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045811] [NEW] neutron-ovn-db-sync-util can fail with KeyError

2023-12-06 Thread Brian Haley
Public bug reported:

If the neutron-ovn-db-sync-util is run while neutron-server is active
(which is not recommended), it can randomly fail if there are active API
calls in flight to create networks and/or subnets.

This is an example traceback I've seen many times in a production
environment:

WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync 
[req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - - - -] DHCP options for subnet 
0662e4fd-f8b4-4d29-8ba7-5846bd19e45d is present in Neutron but out of sync for 
OVN
CRITICAL neutron_ovn_db_sync_util [req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - 
- - -] Unhandled error: KeyError: 'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
ERROR neutron_ovn_db_sync_util   File "/usr/bin/neutron-ovn-db-sync-util", line 
10, in <module>
ERROR neutron_ovn_db_sync_util sys.exit(main())
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", 
line 219, in main
ERROR neutron_ovn_db_sync_util synchronizer.do_sync()
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 98, in do_sync
ERROR neutron_ovn_db_sync_util self.sync_networks_ports_and_dhcp_opts(ctx)
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 871, in sync_networks_ports_and_dhcp_opts
ERROR neutron_ovn_db_sync_util self._sync_subnet_dhcp_options(
ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 645, in _sync_subnet_dhcp_options
ERROR neutron_ovn_db_sync_util network = 
db_networks[utils.ovn_name(subnet['network_id'])]
ERROR neutron_ovn_db_sync_util KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
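
A defensive way to handle this (an illustrative sketch, not the merged fix;
sync_dhcp_opts_for() is a placeholder): treat a subnet whose network is
missing from the sync snapshot as created mid-sync and skip it, letting the
next run pick it up.

  def _sync_subnet_dhcp_options(ctx, db_networks, subnets):
      for subnet in subnets:
          key = 'neutron-%s' % subnet['network_id']  # utils.ovn_name() format
          network = db_networks.get(key)
          if network is None:
              # Created after the network list was fetched; skip instead
              # of raising KeyError.
              continue
          sync_dhcp_opts_for(network, subnet)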

** Affects: neutron
     Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: Confirmed


** Tags: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045811

Title:
  neutron-ovn-db-sync-util can fail with KeyError

Status in neutron:
  Confirmed

Bug description:
  If the neutron-ovn-db-sync-util is run while neutron-server is active
  (which is not recommended), it can randomly fail if there are active
  API calls in flight to create networks and/or subnets.

  This is an example traceback I've seen many times in a production
  environment:

  WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync 
[req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - - - -] DHCP options for subnet 
0662e4fd-f8b4-4d29-8ba7-5846bd19e45d is present in Neutron but out of sync for 
OVN
  CRITICAL neutron_ovn_db_sync_util [req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - 
- - - -] Unhandled error: KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
  ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
  ERROR neutron_ovn_db_sync_util   File "/usr/bin/neutron-ovn-db-sync-util", 
line 10, in <module>
  ERROR neutron_ovn_db_sync_util sys.exit(main())
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", 
line 219, in main
  ERROR neutron_ovn_db_sync_util synchronizer.do_sync()
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 98, in do_sync
  ERROR neutron_ovn_db_sync_util self.sync_networks_ports_and_dhcp_opts(ctx)
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 871, in sync_networks_ports_and_dhcp_opts
  ERROR neutron_ovn_db_sync_util self._sync_subnet_dhcp_options(
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 645, in _sync_subnet_dhcp_options
  ERROR neutron_ovn_db_sync_util network = 
db_networks[utils.ovn_name(subnet['network_id'])]
  ERROR neutron_ovn_db_sync_util KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045811/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1853873] Re: The /v2.0/ports/{port_id}/bindings APIs are not documented

2023-12-01 Thread Brian Haley
https://docs.openstack.org/api-ref/network/v2/#port-binding shows these
api's are now present, closing bug.

** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1853873

Title:
  The /v2.0/ports/{port_id}/bindings APIs are not documented

Status in neutron:
  Fix Released

Bug description:
  The following APIs are not documented in the networking api-ref [1]:
  * GET /v2.0/ports/{port_id}/bindings
  * POST /v2.0/ports/{port_id}/bindings
  * PUT /v2.0/ports/{port_id}/bindings/{host}/activate

  
  [1] https://docs.openstack.org/api-ref/network/v2/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1853873/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1779335] Re: neutron-vpnaas doesn't support local tox targets

2023-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1779335

Title:
  neutron-vpnaas doesn't support local tox targets

Status in neutron:
  Fix Released

Bug description:
  Today it appears that neutron-vpnaas doesn't support the proper env setup
  for running tox targets locally. For more details see [1].

  
  [1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131801.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1779335/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2024205] Re: [OVN] Hash Ring nodes removed when "periodic worker" is killed

2023-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024205

Title:
  [OVN] Hash Ring nodes removed when "periodic worker" is killed

Status in neutron:
  Fix Released

Bug description:
  Reported at: https://bugzilla.redhat.com/show_bug.cgi?id=2213910

  In the ML2/OVN driver we set a signal handler for SIGTERM to remove
  the hash ring nodes upon service exit [0] but, during the
  investigation of one bug with a customer, we identified that when an
  unrelated Neutron worker is killed (such as the "periodic worker" in
  this case) this could lead to that process removing the entries from
  the ovn_hash_ring table for that hostname.

  If this happens on all controllers, the ovn_hash_ring table is
  rendered empty and OVSDB events are no longer processed by ML2/OVN.

  Proposed solution:

  This LP proposes to make this more reliable: instead of removing the
  nodes from the ovn_hash_ring table at exit, we would mark them as
  offline. That way, if a worker dies the nodes will remain
  registered in the table and the heartbeat thread will set them as
  online again on the next beat. If the service is properly stopped the
  heartbeat won't be running and the nodes will be seen as offline by
  the Hash Ring manager.

  As a note, upon the next startup of the service the nodes matching the
  server hostname will be removed from the ovn_hash_ring table and added
  again accordingly as Neutron workers are spawned [1].

  [0] 
https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L295-L296
  [1] 
https://github.com/openstack/neutron/blob/cbb89fdb1414a1b3a8e8b3a9a4154ef627bb9d1a/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L316
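
  A hedged sketch of the proposed exit handling (helper names are
  illustrative; the real change lives in the ML2/OVN mech driver):

    import signal

    def install_exit_handler(hash_ring_api, admin_ctx, node_uuid):
        def _mark_offline(signum, frame):
            # Mark offline instead of deleting: if an unrelated worker is
            # killed, the rows survive and the heartbeat flips them back
            # online on the next beat.
            hash_ring_api.set_node_online(admin_ctx, node_uuid, online=False)
        signal.signal(signal.SIGTERM, _mark_offline)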

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024205/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015364] Re: [skip-level] OVN tests constantly failing

2023-12-01 Thread Brian Haley
Since the skip-level job is now passing and voting in our gate I am
going to close this bug.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015364

Title:
  [skip-level] OVN tests constantly failing

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released

Bug description:
  In the new Zed-Bobcat skip-level jobs [1], the OVN job has 4 tests constantly 
failing (1 fail is actually a setup class method):
  
*tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops
  
*tempest.api.compute.servers.test_attach_interfaces.AttachInterfacesUnderV243Test.test_add_remove_fixed_ip
  *setUpClass 
(tempest.api.compute.servers.test_server_actions.ServerActionsTestJSON)
  
*tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops

  Logs:
  
*https://fd50651997fbb0337883-282d0b18354725863279cd3ebda4ab44.ssl.cf5.rackcdn.com/878632/6/experimental/neutron-ovn-grenade-multinode-skip-level/baf4ed5/controller/logs/grenade.sh_log.txt
  
*https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_607/878632/6/experimental/neutron-ovn-grenade-multinode-skip-level/6072d85/controller/logs/grenade.sh_log.txt

  [1]https://review.opendev.org/c/openstack/neutron/+/878632

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2015364/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1845145] Re: [L3] add abilitiy for iptables_manager to ensure rule was added only once

2023-12-01 Thread Brian Haley
Since the patch on master was abandoned manually I am going to close
this.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1845145

Title:
  [L3] add ability for iptables_manager to ensure rule was added only
  once

Status in neutron:
  Won't Fix

Bug description:
  iptables_manager should have the ability to ensure a rule is added
  only once. In function [1], it simply appends the new rule to the
  cache list whether or not it is a duplicate, and eventually the
  warning LOG [2] is raised. Sometimes multiple threads add rules for
  the same resource, and it may not be easy for users to guarantee that
  their rule-generation code runs only once, so the rule ends up
  duplicated in the cache. During the removal procedure the cache still
  holds the duplicate: removing one copy leaves the same rule behind, so
  the Linux netfilter rules may be unchanged after the user's removal
  action.

  [1] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L205-L225
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/agent/linux/iptables_manager.py#L718-L725
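
  A minimal sketch of the kind of idempotent add being asked for, with a
  simplified cache standing in for iptables_manager's real per-chain
  rule lists:

  ```
  # Simplified "add only once" guard for a rule cache; the real
  # iptables_manager tracks rules per chain with extra attributes, but
  # the idempotency idea is the same.
  class RuleCache:
      def __init__(self):
          self.rules = []

      def add_rule(self, rule, ensure_unique=True):
          # With ensure_unique, concurrent callers adding the same rule
          # for one resource leave a single cache entry, so one removal
          # actually removes the rule from netfilter.
          if ensure_unique and rule in self.rules:
              return False
          self.rules.append(rule)
          return True

      def remove_rule(self, rule):
          if rule in self.rules:
              self.rules.remove(rule)

  cache = RuleCache()
  cache.add_rule("-A INPUT -p tcp --dport 22 -j ACCEPT")
  cache.add_rule("-A INPUT -p tcp --dport 22 -j ACCEPT")  # duplicate, ignored
  assert len(cache.rules) == 1
  ```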

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1845145/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1849463] Re: linuxbridge packet forwarding issue with vlan backed networks

2023-12-01 Thread Brian Haley
I am going to mark this as won't fix as the linuxbridge agent is
unmaintained and experimental on the master branch.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1849463

Title:
  linuxbridge packet forwarding issue with vlan backed networks

Status in neutron:
  Won't Fix

Bug description:
  This is related to: https://bugs.launchpad.net/os-vif/+bug/1837252

  In Ubuntu 18.04, using Ubuntu Cloud Archives (UCA) and Stein, os-vif
  version 1.15.1 is deployed.

  According to bug #1837252/OSSA-2019-004/CVE-2019-15753, this version
  is vulnerable to unicast packets being broadcast to all bridge
  members, resulting in traffic interception, due to disabled
  mac-learning (ageing set to 0). The fix is to set ageing to the
  default of 300.

  With this vulnerable setup, instances using vlan-backed networks have
  working traffic flows as expected, since all packets are being
  distributed to all members.

  The FDB entries show:
  # bridge fdb | grep -e tapb2b8c5ff-8c -e brqa50c5b7b-db -e ens256.3002 | grep 
-v -e ^01:00:5e -e ^33:33
  00:16:3e:ba:fa:33 dev ens256.3002 vlan 1 master brqa50c5b7b-db permanent
  00:16:3e:ba:fa:33 dev ens256.3002 master brqa50c5b7b-db permanent
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c vlan 1 master brqa50c5b7b-db permanent
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c master brqa50c5b7b-db permanent

  Showmacs confirm:
  # brctl showmacs brqa50c5b7b-db
  port no  mac addr           is local?  ageing timer
    2      00:16:3e:ba:fa:33  yes        0.00
    2      00:16:3e:ba:fa:33  yes        0.00
    1      fe:16:3e:0d:c0:42  yes        0.00
    1      fe:16:3e:0d:c0:42  yes        0.00

  However, once ageing is enabled, by either `brctl setageing
  brqa50c5b7b-db 300` or upgrading to UCA/Train with os-vif 1.17.0,
  traffic flows directed towards tapb2b8c5ff-8c are not being forwarded.

  Traffic coming from tapb2b8c5ff-8c is being forwarded correctly
  through the bridge and exits ens256.3002.

  Only incoming traffic destined for tapb2b8c5ff-8c's MAC is being
  dropped or not forwarded.

  The FDB entries show:
  # bridge fdb | grep -e tapb2b8c5ff-8c -e brqa50c5b7b-db -e ens256.3002 | grep 
-v -e ^01:00:5e -e ^33:33
  00:50:56:89:64:e0 dev ens256.3002 master brqa50c5b7b-db 
  00:16:3e:ba:fa:33 dev ens256.3002 vlan 1 master brqa50c5b7b-db permanent
  fa:16:3e:f8:76:cf dev ens256.3002 master brqa50c5b7b-db 
  00:16:35:bf:5f:e5 dev ens256.3002 master brqa50c5b7b-db 
  fa:16:3e:0d:c0:42 dev ens256.3002 master brqa50c5b7b-db 
  00:50:56:89:69:d9 dev ens256.3002 master brqa50c5b7b-db 
  9e:dc:1b:a2:9b:2e dev ens256.3002 master brqa50c5b7b-db 
  00:16:3e:ba:fa:33 dev ens256.3002 master brqa50c5b7b-db permanent
  0e:c7:c3:cd:8d:fa dev ens256.3002 master brqa50c5b7b-db 
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c vlan 1 master brqa50c5b7b-db permanent
  fe:16:3e:0d:c0:42 dev tapb2b8c5ff-8c master brqa50c5b7b-db permanent

  Showmacs confirm:
  # brctl showmacs brqa50c5b7b-db
  port no  mac addr           is local?  ageing timer
    2      00:16:35:bf:5f:e5  no         0.16
    2      00:16:3e:ba:fa:33  yes        0.00
    2      00:16:3e:ba:fa:33  yes        0.00
    2      00:50:56:89:64:e0  no         0.10
    2      00:50:56:89:69:d9  no         0.20
    2      0e:c7:c3:cd:8d:fa  no         0.10
    2      9e:dc:1b:a2:9b:2e  no         0.12
    2      fa:16:3e:0d:c0:42  no         20.00
    2      fa:16:3e:f8:76:cf  no         13.33
    1      fe:16:3e:0d:c0:42  yes        0.00
    1      fe:16:3e:0d:c0:42  yes        0.00

  This shows the guest (fa:16:3e:0d:c0:42) as non-local, originating
  from ens256.3002 instead of tapb2b8c5ff-8c, which I suspect causes
  packets not being forwarded into tapb2b8c5ff-8c.

  The VM has now no means of ingress connectivity to the vlan backed
  network but outgoing packets are still being forwarded fine.

  It's important to note that instances using vxlan-backed networks
  function without issues when ageing is set. The issue therefore seems
  limited to vlan-backed networks.

  One significant difference in the FDB table between vlan- and
  vxlan-backed networks is the device which holds the guest MAC. On
  vxlan-backed networks, this MAC is mapped to the tap device inside the
  FDB.

  I have 2 pcap recordings of DHCP traffic, one from the bridge and one
  from the tap showing traffic flowing out of the tap but not returning
  despite replies arriving on the bridge interface.

  iptables has been ruled out by prepending a -j ACCEPT at the top of
  the neutron-linuxbri-ib2b8c5ff-8 chain.

  I talked to @ralonsoh and @sean-k-mooney on IRC yesterday about this
  issue and both suggested I open this bug report.

  Let me 

[Yahoo-eng-team] [Bug 2028003] Re: neutron fails with postgres on subnet_id

2023-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028003

Title:
  neutron fails with postgres on subnet_id

Status in neutron:
  Fix Released

Bug description:
  Ironic's postgres CI test job has started to fail with an error rooted
  in Neutron's database API layer, specifically in how that layer uses
  SQLAlchemy to talk to postgres.

  Error:

  DBAPIError exception wrapped.: psycopg2.errors.GroupingError: column
  "subnet_service_types.subnet_id" must appear in the GROUP BY clause or
  be used in an aggregate function

  This is likely just an issue in how the query is constructed in the
  database layer, and should be easily fixed.
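
  For illustration, the class of query postgres rejects can be
  reproduced with SQLAlchemy Core; the table and column names below are
  invented stand-ins, not Neutron's actual model:

  ```
  # Postgres requires every selected, non-aggregated column to appear
  # in GROUP BY; MySQL has historically been more permissive, which is
  # how this kind of query can slip through testing.
  import sqlalchemy as sa

  metadata = sa.MetaData()
  subnets = sa.Table(
      "subnets", metadata,
      sa.Column("id", sa.String(36), primary_key=True),
      sa.Column("standard_attr_id", sa.Integer))
  service_types = sa.Table(
      "subnet_service_types", metadata,
      sa.Column("subnet_id", sa.String(36)),
      sa.Column("service_type", sa.String(255)))

  joined = subnets.join(service_types,
                        subnets.c.id == service_types.c.subnet_id)

  # Rejected by postgres: subnet_service_types.subnet_id is selected
  # but neither grouped nor aggregated.
  bad = (sa.select(subnets.c.standard_attr_id, service_types.c.subnet_id)
         .select_from(joined)
         .group_by(subnets.c.standard_attr_id))

  # Accepted: the extra column is added to GROUP BY (aggregating it
  # would also work).
  good = (sa.select(subnets.c.standard_attr_id, service_types.c.subnet_id)
          .select_from(joined)
          .group_by(subnets.c.standard_attr_id,
                    service_types.c.subnet_id))
  ```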

  Job Logs:
  
https://96a560a38139b70cb224-e9f29c7afce5197c5c20e02f6b6da59e.ssl.cf5.rackcdn.com/888500/7/check/ironic-
  tempest-pxe_ipmitool-postgres/7eeffae/controller/logs/screen-q-svc.txt

  Full error:

  
  Jul 17 15:02:54.203622 np0034696541 neutron-server[69958]: DEBUG 
neutron.pecan_wsgi.hooks.quota_enforcement 
[req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] Made reservation on behalf 
of 9d6bf2710477411887e0dcc4386b458a for: {'port': 1} {{(pid=69958) before 
/opt/stack/neutron/neutron/pecan_wsgi/hooks/quota_enforcement.py:53}}
  Jul 17 15:02:54.206063 np0034696541 neutron-server[69958]: DEBUG 
neutron_lib.callbacks.manager [req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] Publish callbacks 
['neutron.plugins.ml2.plugin.SecurityGroupDbMixin._ensure_default_security_group_handler-595366']
 for port (None), before_create {{(pid=69958) _notify_loop 
/usr/local/lib/python3.10/dist-packages/neutron_lib/callbacks/manager.py:176}}
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: WARNING 
oslo_db.sqlalchemy.exc_filters [req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] DBAPIError exception 
wrapped.: psycopg2.errors.GroupingError: column 
"subnet_service_types.subnet_id" must appear in the GROUP BY clause or be used 
in an aggregate function
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: LINE 2: ...de, 
subnets.standard_attr_id AS standard_attr_id, subnet_ser...
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]:
  ^
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.10/dist-packages/sqlalchemy/engine/base.py", line 1900, 
in _execute_context
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters self.dialect.do_execute(
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python3.10/dist-packages/sqlalchemy/engine/default.py", line 
736, in do_execute
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters psycopg2.errors.GroupingError: column 
"subnet_service_types.subnet_id" must appear in the GROUP BY clause or be used 
in an aggregate function
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters LINE 2: ...de, subnets.standard_attr_id AS 
standard_attr_id, subnet_ser...
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters  
^
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters 
  Jul 17 15:02:54.215796 np0034696541 neutron-server[69958]: ERROR 
oslo_db.sqlalchemy.exc_filters 
  Jul 17 15:02:54.218977 np0034696541 neutron-server[69958]: ERROR 
neutron.pecan_wsgi.hooks.translation [req-36ab3b86-999f-48a0-87f8-e2613909b6c4 
req-8aa9b5ab-4403-42dc-b82e-c28f1a37c843 tempest-BaremetalBasicOps-471932799 
tempest-BaremetalBasicOps-471932799-project-member] POST failed.: 
oslo_db.exception.DBError: (psycopg2.errors.GroupingError) column 
"subnet_service_types.subnet_id" must appear in the GROUP BY clause or be used 
in an aggregate function
  Jul 17 15:02:54.218977 np0034696541 neutron-server[69958]: LINE 2: ...de, 
subnets.standard_attr_id AS standard_attr_id, subnet_ser...
  Jul 17 15:02:54.218977 np0034696541 neutron-server[69958]:   

[Yahoo-eng-team] [Bug 1975828] Re: difference in execution time between admin/non-admin call

2023-11-28 Thread Brian Haley
** Changed in: neutron
   Status: Expired => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1975828

Title:
  difference in execution time between admin/non-admin call

Status in neutron:
  Triaged

Bug description:
  Part of https://bugs.launchpad.net/neutron/+bug/1973349 :
  Another interesting thing is the difference in execution time between
  admin/non-admin calls:
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/admin.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real 0m5,401s
  user 0m1,565s
  sys 0m0,086s
  (openstack) dmitriy@6BT6XT2:~$ . Documents/openrc/.rc
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list | wc -l
  2142

  real 2m38,101s
  user 0m1,626s
  sys 0m0,083s
  (openstack) dmitriy@6BT6XT2:~$
  (openstack) dmitriy@6BT6XT2:~$ time openstack port list --project  | 
wc -l
  2142

  real 1m17,029s
  user 0m1,541s
  sys 0m0,085s
  (openstack) dmitriy@6BT6XT2:~$

  So basically, if you provide tenant_id in the query, it will execute
  twice as fast. But it won't look through networks owned by the tenant
  (which would partly explain the difference in speed).

  Environment:
  Neutron SHA: 97180b01837638bd0476c28bdda2340eccd649af
  Backend: ovs
  OS: Ubuntu 20.04
  Mariadb: 10.6.5
  SQLalchemy: 1.4.23
  Backend: openvswitch
  Plugins: router vpnaas metering 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1975828/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2043141] [NEW] neutron-lib unit tests need update for sqlalchemy 2.0

2023-11-09 Thread Brian Haley
Public bug reported:

Some of the neutron-lib unit tests do not support sqlalchemy 2.0.

Thomas Goirand ran them on a Debian system and this test file fails:

  neutron_lib/tests/unit/db/test_sqlalchemytypes.py

There are 8 failures, all basically the same:

105s FAIL: neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
105s neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
105s --
105s testtools.testresult.real._StringException: Traceback (most recent call 
last):
105s   File 
"/tmp/autopkgtest-lxc.jvth6_27/downtmp/build.pBL/src/neutron_lib/tests/unit/db/test_sqlalchemytypes.py",
 line 36, in setUp
105s meta = sa.MetaData(bind=self.engine)
105s^
105s TypeError: MetaData.__init__() got an unexpected keyword argument 'bind'

From looking at the functional tests and Nova code, this should be a
straightforward fix.

We should also look at creating a test job that both tests sqlalchemy
2.0 and neutron-lib main/master branches so we don't regress.
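
For reference, the failing pattern and a likely shape of the fix,
sketched with a throwaway SQLite engine (the exact change in
neutron-lib may differ):

```
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")

# SQLAlchemy 1.x style, removed in 2.0 -- this is what the test does:
#   meta = sa.MetaData(bind=engine)   # TypeError under 2.0

# 2.0 style: MetaData is unbound; the engine (or a connection) is
# passed explicitly wherever a bind is needed.
meta = sa.MetaData()
table = sa.Table("example", meta,
                 sa.Column("id", sa.Integer, primary_key=True))
meta.create_all(engine)
```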

** Affects: neutron
 Importance: High
     Assignee: Brian Haley (brian-haley)
 Status: In Progress


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2043141

Title:
  neutron-lib unit tests need update for sqlalchemy 2.0

Status in neutron:
  In Progress

Bug description:
  Some of the neutron-lib unit tests do not support sqlalchemy 2.0.

  Thomas Goirand ran them on a Debian system and this test file fails:

neutron_lib/tests/unit/db/test_sqlalchemytypes.py

  There are 8 failures, all basically the same:

  105s FAIL: 
neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
  105s neutron_lib.tests.unit.db.test_sqlalchemytypes.CIDRTestCase.test_crud
  105s --
  105s testtools.testresult.real._StringException: Traceback (most recent call 
last):
  105s   File 
"/tmp/autopkgtest-lxc.jvth6_27/downtmp/build.pBL/src/neutron_lib/tests/unit/db/test_sqlalchemytypes.py",
 line 36, in setUp
  105s meta = sa.MetaData(bind=self.engine)
  105s^
  105s TypeError: MetaData.__init__() got an unexpected keyword argument 'bind'

  From looking at the functional tests and Nova code, this should be a
  straightforward fix.

  We should also look at creating a test job that both tests sqlalchemy
  2.0 and neutron-lib main/master branches so we don't regress.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2043141/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998517] Re: Floating IP not reachable from instance in other project

2023-10-24 Thread Brian Haley
Moving this to the neutron project as networking-ovn has been retired
for a while.

My first question is: are you able to test this with a later release?
Since it's been 10 months since it was filed, I just want to make sure
it hasn't been fixed.

** Project changed: networking-ovn => neutron

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998517

Title:
  Floating IP not reachable from instance in other project

Status in neutron:
  New

Bug description:
  We noticed a strange behavior regarding Floating IPs in an OpenStack
  environment using ML2/OVN with DVR. Consider the provided test setup
  consisting of 3 projects. Each project has exactly one network with
  two subnets, one for IPv4 and one for IPv6, associated with it. Each
  project’s network is connected to the provider network through a
  router which has two ports facing the provider network and two
  internal ones for the respective subnets.

  The VM (Instance) Layout is also included. The first instance (a1) in Project 
A also has an FIP associated with it. Trying to ping this FIP from outside 
Openstack’s context works without any problems. This is also true when we want 
to ping the FIP from instance a2 in the same project.
  However, trying to do so from any of the other instances in a different
  project does not work. This, however, changes when a FIP is assigned to an
  instance in a different project: assigning a FIP to instance b, for example,
  will result in b being able to ping the FIP of a1. After removing the FIP
  this still holds true.

  The following observations regarding this have been made.
  When a FIP is assigned, new entries in OVN's SB DB (specifically the
  MAC_Binding table) show up, some of which will disappear again when the FIP
  is released from b. The one entry persisting is a mac-binding of the MAC
  address and IPv4 address associated with the router of project b facing the
  provider network, with the logical port being the provider-net-facing port
  of project a's router. We are not sure if this is relevant to the problem;
  we are just putting it out here.

  In addition, when we were looking for other solutions we came across
  this old bug: https://bugzilla.redhat.com/show_bug.cgi?id=1836963 with
  a possible workaround; this, however, led to pinging not being
  possible afterwards.

  The Overcloud has been deployed using the `/usr/share/openstack-
  tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml`
  template for OVN and the following additional settings were added to
  neutron:

  parameter_defaults:
OVNEmitNeedToFrag: true
NeutronGlobalPhysnetMtu: 9000

  Furthermore, all nodes use a Linux bond for the `br-ex` interface, on
  which the different node networks (Internal API, Storage, ...) reside.
  These networks also use VLANs.

  If you need any additional information on the setup, please let me know.
  Best Regards

  
  Version Info

  - TripleO Wallaby

  - puppet-ovn-18.5.0-0.20220216211819.d496e5a.el9.noarch
  - ContainerImageTag: ecab4196e43c16aaea91ebb25fb25ab1

  inside ovn_controller container:
  - ovn22.06-22.06.0-24.el8s.x86_64
  - rdo-ovn-host-22.06-3.el8.noarch
  - rdo-ovn-22.06-3.el8.noarch
  - ovn22.06-host-22.06.0-24.el8s.x86_64

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998517/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038373] [NEW] Segment unit tests are not mocking properly

2023-10-03 Thread Brian Haley
Public bug reported:

Running the segment unit tests -
neutron/tests/unit/extensions/test_segment.py - generates a lot of
extra noise, like:

{0}
neutron.tests.unit.extensions.test_segment.TestNovaSegmentNotifier.test_delete_network_and_owned_segments
[1.185650s] ... ok

Captured stderr:


/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/kombu/utils/compat.py:82:
 DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
  for ep in importlib_metadata.entry_points().get(namespace, [])
Traceback (most recent call last):
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/hub.py",
 line 476, in fire_timers
timer()
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/timer.py",
 line 59, in __call__
cb(*args, **kw)
  File "/home/bhaley/git/neutron.dev/neutron/common/utils.py", line 956, in 
wrapper
return func(*args, **kwargs)
   ^
  File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", line 
58, in synced_send
self._notify()
  File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", line 
70, in _notify
self.callback(batched_events)
  File "/home/bhaley/git/neutron.dev/neutron/services/segments/plugin.py", line 
212, in _send_notifications
event.method(event)
  File "/home/bhaley/git/neutron.dev/neutron/services/segments/plugin.py", line 
384, in _delete_nova_inventory
aggregate_id = self._get_aggregate_id(event.segment_id)
   
  File "/home/bhaley/git/neutron.dev/neutron/services/segments/plugin.py", line 
378, in _get_aggregate_id
for aggregate in self.n_client.aggregates.list():
 ^^^
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/novaclient/v2/aggregates.py",
 line 59, in list
return self._list('/os-aggregates', 'aggregates')
   ^^
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/novaclient/base.py",
 line 253, in _list
resp, body = self.api.client.get(url)
 
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/keystoneauth1/adapter.py",
 line 395, in get
return self.request(url, 'GET', **kwargs)
   ^^
  File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/novaclient/client.py",
 line 77, in request
if raise_exc and resp.status_code >= 400:
 ^^^
TypeError: '>=' not supported between instances of 'MagicMock' and 'int'


From looking at the code it's not mocking things properly; for example,
it does this in TestNovaSegmentNotifier.setUp():

self.batch_notifier._waiting_to_send = True

That code was removed in 2016 in
255e8a839db0be10c98b5d9f480ce476e2f2e171 :-/

The noise doesn't seem to cause the test to fail, but it should be
fixed.

There are also keystone auth exceptions in other tests, and again,
nothing seems to fail because of it:

   raise exceptions.MissingAuthPlugin(msg_fmt % msg)
keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: An auth plugin is 
required to determine endpoint URL
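
As a sketch of the mocking pattern that would avoid the novaclient
failure above (the names here are illustrative, not the test's real
fixtures): give the mocked client a concrete return value so nothing
falls through to novaclient's HTTP layer, where a MagicMock status_code
triggers the '>=' TypeError.

```
# Illustrative only: PluginUnderTest stands in for the segments
# plugin; the point is stubbing the attribute the code actually calls.
from unittest import mock

class PluginUnderTest:
    def __init__(self, n_client):
        self.n_client = n_client

    def get_aggregate_ids(self):
        return [agg.id for agg in self.n_client.aggregates.list()]

def test_get_aggregate_ids():
    n_client = mock.Mock()
    fake_agg = mock.Mock(id="agg-1")
    # A concrete return value means no real (or MagicMock-backed) HTTP
    # request is ever attempted.
    n_client.aggregates.list.return_value = [fake_agg]

    plugin = PluginUnderTest(n_client)
    assert plugin.get_aggregate_ids() == ["agg-1"]
    n_client.aggregates.list.assert_called_once_with()

test_get_aggregate_ids()
```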

** Affects: neutron
 Importance: Low
 Status: New


** Tags: unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038373

Title:
  Segment unit tests are not mocking properly

Status in neutron:
  New

Bug description:
  Running the segment unit tests -
  neutron/tests/unit/extensions/test_segment.py generates a lot of extra
  noise, like:

  {0}
  
neutron.tests.unit.extensions.test_segment.TestNovaSegmentNotifier.test_delete_network_and_owned_segments
  [1.185650s] ... ok

  Captured stderr:
  
  
/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/kombu/utils/compat.py:82:
 DeprecationWarning: SelectableGroups dict interface is deprecated. Use select.
for ep in importlib_metadata.entry_points().get(namespace, [])
  Traceback (most recent call last):
File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/hub.py",
 line 476, in fire_timers
  timer()
File 
"/home/bhaley/git/neutron.dev/.tox/py311/lib/python3.11/site-packages/eventlet/hubs/timer.py",
 line 59, in __call__
  cb(*args, **kw)
File "/home/bhaley/git/neutron.dev/neutron/common/utils.py", line 956, in 
wrapper
  return func(*args, **kwargs)
 ^
File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", 
line 58, in synced_send
  self._notify()
File "/home/bhaley/git/neutron.dev/neutron/notifiers/batch_notifier.py", 
line 70, in _notify
  

[Yahoo-eng-team] [Bug 2037500] Re: OVSDB transaction returned TRY_AGAIN, retrying do_commit

2023-10-03 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037500

Title:
  OVSDB transaction returned TRY_AGAIN, retrying do_commit

Status in neutron:
  Invalid

Bug description:
  Trying to create an instance resulted in an error when attaching the
  port to the instance; details from the neutron server log are below.

  2023-09-27 09:58:10.725 716 DEBUG ovsdbapp.backend.ovs_idl.transaction 
[req-2df7a23e-8b9f-409c-a35e-9b78edb6bce1 - - - - -] OVSDB transaction returned 
TRY_AGAIN, retrying do_commit 
/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py:97
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn 
[req-ca61167b-aca9-46a2-81fb-8f8e3ebba349 - - - - -] OVS database connection to 
OVN_Northbound failed with error: 'Timeout'. Verify that the OVS and OVN 
services are available and that the 'ovn_nb_connection' and 'ovn_sb_connection' 
configuration options are correct.: Exception: Timeout
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn Traceback (most 
recent call last):
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py",
 line 68, in start_connection
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn 
self.ovsdb_connection.start()
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
79, in start
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn 
idlutils.wait_for_change(self.idl, self.timeout)
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn   File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 
219, in wait_for_change
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn raise 
Exception("Timeout")  # TODO(twilson) use TimeoutException?
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn Exception: 
Timeout
  2023-09-27 09:58:10.724 723 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.impl_idl_ovn

  and later the following error still appears:

  
  2023-09-27 12:07:36.849 747 ERROR ovsdbapp.backend.ovs_idl.transaction [-] 
OVSDB Error: The transaction failed because the IDL has been configured to 
require a database lock but didn't get it yet or has already lost it
  2023-09-27 12:07:36.849 747 ERROR ovsdbapp.backend.ovs_idl.transaction 
[req-7f9163da-8faf-4509-b650-aedfdf4ff303 - - - - -] Traceback (most recent 
call last):
File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
122, in run
  txn.results.put(txn.do_commit())
File 
"/usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/transaction.py", line 
119, in do_commit
  raise RuntimeError(msg)
  RuntimeError: OVSDB Error: The transaction failed because the IDL has been 
configured to require a database lock but didn't get it yet or has already lost 
it

  2023-09-27 12:07:36.849 747 ERROR futurist.periodics 
[req-7f9163da-8faf-4509-b650-aedfdf4ff303 - - - - -] Failed to call periodic 
'neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance.DBInconsistenciesPeriodics.check_for_ha_chassis_group_address'
 (it runs every 600.00 seconds): RuntimeError: OVSDB Error: The transaction 
failed because the IDL has been configured to require a database lock but 
didn't get it yet or has already lost it
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics Traceback (most recent 
call last):
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/futurist/periodics.py", line 293, in run
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics work()
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/futurist/periodics.py", line 67, in __call__
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics return 
self.callback(*self.args, **self.kwargs)
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/futurist/periodics.py", line 181, in decorator
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics return f(*args, 
**kwargs)
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 622, in check_for_ha_chassis_group_address
  2023-09-27 12:07:36.849 747 ERROR futurist.periodics priority -= 1
  2023-09-27 12:07:36.849 747 

[Yahoo-eng-team] [Bug 2037239] [NEW] neutron-tempest-plugin-openvswitch-* jobs randomly failing in gate

2023-09-24 Thread Brian Haley
Public bug reported:

A number of different scenario tests seem to be failing randomly in the
same way:

Details: Router 01dda41e-67ed-4af0-ac56-72fd895cef9a is not active on
any of the L3 agents

One example is in
https://review.opendev.org/c/openstack/neutron/+/895832 where these
three jobs are failing:

neutron-tempest-plugin-openvswitch-iptables_hybrid  FAILURE
neutron-tempest-plugin-openvswitch  FAILURE
neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults   FAILURE

I see combinations of these three failing in other recent checks as
well.

Further investigation required.

** Affects: neutron
 Importance: Critical
 Status: New


** Tags: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037239

Title:
  neutron-tempest-plugin-openvswitch-* jobs randomly failing in gate

Status in neutron:
  New

Bug description:
  A number of different scenario tests seem to be failing randomly in
  the same way:

  Details: Router 01dda41e-67ed-4af0-ac56-72fd895cef9a is not active on
  any of the L3 agents

  One example is in
  https://review.opendev.org/c/openstack/neutron/+/895832 where these
  three jobs are failing:

  neutron-tempest-plugin-openvswitch-iptables_hybridFAILURE
  neutron-tempest-plugin-openvswitchFAILURE
  neutron-tempest-plugin-openvswitch-enforce-scope-new-defaults FAILURE

  I see combinations of these three failing in other recent checks as
  well.

  Further investigation required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2037239/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028112] Re: Unable to create VM when using the sriov agent with the ml2/ovn driver.

2023-07-19 Thread Brian Haley
*** This bug is a duplicate of bug 1975743 ***
https://bugs.launchpad.net/bugs/1975743

** This bug has been marked a duplicate of bug 1975743
   ML2 OVN - Creating an instance with hardware offloaded port is broken

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028112

Title:
  Unable to create VM when using the sriov agent with the ml2/ovn
  driver.

Status in neutron:
  New

Bug description:
  I am planning to operate nodes using HWOL and OVN Controller and nodes
  using SR-IOV simultaneously in the OVN environment.

  The test content is as follows.

  ** Controller server **

  The ml2 mechanism_drivers were specified as follows:

  ```
  [ml2]
  mechanism_drivers = sriovnicswitch,ovn
  ```

  Upon checking the log, the driver was confirmed to be loaded normally.

  ```
  2023-07-19 00:44:37.403 1697414 INFO neutron.plugins.ml2.managers [-] 
Configured mechanism driver names: ['ovn', 'sriovnicswitch']
  2023-07-19 00:44:37.464 1697414 INFO neutron.plugins.ml2.managers [-] Loaded 
mechanism driver names: ['ovn', 'sriovnicswitch']
  2023-07-19 00:44:37.464 1697414 INFO neutron.plugins.ml2.managers [-] 
Registered mechanism drivers: ['ovn', 'sriovnicswitch']
  2023-07-19 00:44:37.464 1697414 INFO neutron.plugins.ml2.managers [-] No 
mechanism drivers provide segment reachability information for agent scheduling.
  2023-07-19 00:44:38.358 1697414 INFO neutron.plugins.ml2.managers 
[req-bc634856-2d9a-44d0-ae0e-351b440a2a0b - - - - -] Initializing mechanism 
driver 'ovn'
  2023-07-19 00:44:38.378 1697414 INFO neutron.plugins.ml2.managers 
[req-bc634856-2d9a-44d0-ae0e-351b440a2a0b - - - - -] Initializing mechanism 
driver 'sriovnicswitch'
  ```

  ** Compute **

  nova.conf

  ```
  [pci]
  passthrough_whitelist = {  "devname": "enp94s0f1np1", "physical_network": 
"physnet1" }
  ```

  plugin/ml2/sriov-agent.ini
  ```
  [DEFAULT]
  debug = true

  [securitygroup]
  firewall_driver = neutron.agent.firewall.NoopFirewallDriver

  [sriov_nic]
  physical_device_mappings = physnet1:enp94s0f1np1
  ```

  Neutron Agent status
  ```
  +--------------------------------------+------------------------------+---------------+-------------------+-------+-------+----------------------------+
  | ID                                   | Agent Type                   | Host          | Availability Zone | Alive | State | Binary                     |
  +--------------------------------------+------------------------------+---------------+-------------------+-------+-------+----------------------------+
  | 24e9395c-379f-4afd-aa84-ae0d970794ff | NIC Switch agent             | Qacloudhost06 | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 43ba481c-c0f2-49bc-a34a-c94faa284ac7 | NIC Switch agent             | Qaamdhost02   | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 4c1a6c78-e58a-48d9-aa4a-abdf44d2f359 | NIC Switch agent             | Qacloudhost07 | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 534f0946-6eb3-491f-a57d-65cbc0133399 | NIC Switch agent             | Qacloudhost02 | None              | :-)   | UP    | neutron-sriov-nic-agent    |
  | 2275f9d4-7c69-51db-ae71-b6e0be15e9b8 | OVN Metadata agent           | Qacloudhost05 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
  | 92a7b8dc-e122-49c8-a3bc-ae6a38b56cc0 | OVN Controller Gateway agent | Qacloudhost05 |                   | :-)   | UP    | ovn-controller             |
  | c3a1e8fe-8669-5e7a-a3d7-3a2b638fae26 | OVN Metadata agent           | Qaamdhost02   |                   | :-)   | UP    | neutron-ovn-metadata-agent |
  | d203ff10-0835-4d7e-bc63-5ff274ade5a3 | OVN Controller agent         | Qaamdhost02   |                   | :-)   | UP    | ovn-controller             |
  | fc4c5075-9b44-5c21-a24d-f86dfd0009f9 | OVN Metadata agent           | Qacloudhost02 |                   | :-)   | UP    | neutron-ovn-metadata-agent |
  | bed0-1519-47f8-b52f-3a9116e1408f     | OVN Controller Gateway agent | Qacloudhost02 |                   | :-)   | UP    | ovn-controller             |
  +--------------------------------------+------------------------------+---------------+-------------------+-------+-------+----------------------------+
  ```

  When creating a VM, the Neutron error log shows:
  ```
  2023-07-19 02:44:30.463 1725695 ERROR neutron.plugins.ml2.managers 
[req-9204d6f7-ddc3-44e2-878c-bfa9c3f761ef fbec686e249e4818be7a686833140326 
7a4dd87db099460795d775b055a648ea - default default] Mechanism driver 'ovn' 
failed in update_port_precommit: neutron_lib.exceptions.InvalidInput: Invalid 
input for operation: Invalid binding:profile. too many parameters.
  2023-07-19 02:44:30.463 1725695 ERROR neutron.plugins.ml2.managers Traceback 
(most recent call last):
  2023-07-19 02:44:30.463 1725695 ERROR 

[Yahoo-eng-team] [Bug 2026775] [NEW] Metadata agents do not parse X-Forwarded-For headers properly

2023-07-10 Thread Brian Haley
Public bug reported:

While looking at an unrelated issue I noticed log lines like this in the
neutron-ovn-metadata-agent log file:

  No port found in network b62452f3-ec93-4cd7-af2d-9f9eabb33b12 with IP
address 10.246.166.21,10.131.84.23

While it might seem harmless, looking at the code it only showed a
single value being logged:

  LOG.error("No port found in network %s with IP address %s",
network_id, remote_address)

The code in question is looking for a matching IP address, but will
never match the concatenated string.

Google shows the additional IP address(es) that might be present in this
header are actually proxies:

  https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-
For

And sure enough in my case the second IP was always the same.

The code needs to be changed to account for proxies, which aren't
actually necessary to look up what port is making the request, but they
could be logged for posterity.

I'll send a change for that soon.
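
A sketch of the intended parsing (the helper name is hypothetical):
take the left-most entry as the instance address and keep the rest as
the proxy chain, which can be logged for posterity.

```
# Per the MDN reference above, X-Forwarded-For is
# "client, proxy1, proxy2", so only the first entry identifies the
# port to look up; any remaining entries are proxies.
def parse_x_forwarded_for(header_value):
    addresses = [a.strip() for a in header_value.split(",")]
    return addresses[0], addresses[1:]

client, proxies = parse_x_forwarded_for("10.246.166.21,10.131.84.23")
assert client == "10.246.166.21"
assert proxies == ["10.131.84.23"]
```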

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2026775

Title:
  Metadata agents do not parse X-Forwarded-For headers properly

Status in neutron:
  In Progress

Bug description:
  While looking at an unrelated issue I noticed log lines like this in
  the neutron-ovn-metadata-agent log file:

No port found in network b62452f3-ec93-4cd7-af2d-9f9eabb33b12 with
  IP address 10.246.166.21,10.131.84.23

  While it might seem harmless, looking at the code it only showed a
  single value being logged:

LOG.error("No port found in network %s with IP address %s",
  network_id, remote_address)

  The code in question is looking for a matching IP address, but will
  never match the concatenated string.

  Google shows the additional IP address(es) that might be present in
  this header are actually proxies:

https://developer.mozilla.org/en-
  US/docs/Web/HTTP/Headers/X-Forwarded-For

  And sure enough in my case the second IP was always the same.

  The code needs to be changed to account for proxies, which aren't
  actually necessary to look up what port is making the request, but
  they could be logged for posterity.

  I'll send a change for that soon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2026775/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025264] Re: [ovn][DVR]FIP traffic centralized in DVR environments

2023-07-05 Thread Brian Haley
** Changed in: neutron
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025264

Title:
  [ovn][DVR]FIP traffic centralized in DVR environments

Status in neutron:
  Fix Committed

Bug description:
  When a port is down, the FIP associated with it gets centralized
  (external_mac removed from the NAT table entry) despite DVR being
  enabled. This also happens when deleting a VM with a FIP associated,
  where for some period of time the FIP gets centralized -- the time
  between removing the external_mac from the NAT table entry and the
  deletion of the NAT table entry.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025264/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2020698] [NEW] neutron-tempest-plugin-bgpvpn-bagpipe job unstable

2023-05-24 Thread Brian Haley
Public bug reported:

The neutron-tempest-plugin-bgpvpn-bagpipe job has been unstable for over
a week, and yesterday it got worse, with half the tests now failing.

I thought increasing the job timeout would help, but it has not:

https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/883991

I realize there are changes in flight with respect to sRBAC which might
fix the issue, but until they all merge I think we should just make the
job non-voting on the master branch. The other branches don't seem to
have any problems.

** Affects: neutron
 Importance: High
 Assignee: Brian Haley (brian-haley)
 Status: Confirmed


** Tags: l3-bgp tempest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2020698

Title:
  neutron-tempest-plugin-bgpvpn-bagpipe job unstable

Status in neutron:
  Confirmed

Bug description:
  The neutron-tempest-plugin-bgpvpn-bagpipe job has been unstable for
  over a week, and yesterday it got worse, with half the tests now
  failing.

  I thought increasing the job timeout would help, but it has not:

  https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/883991

  I realize there are changes in flight with respect to sRBAC which
  might fix the issue, but until they all merge I think we should just
  make the job non-voting on the master branch. The other branches don't
  seem to have any problems.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2020698/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1993628] Re: Designate synchronisation inconsistencies with Neutron-API

2023-05-16 Thread Brian Haley
Added neutron since I don't think this is specific to charms.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1993628

Title:
  Designate synchronisation inconsistencies with Neutron-API

Status in OpenStack Designate Charm:
  New
Status in neutron:
  New

Bug description:
  When configuring a network to automatically use a dns-domain, some
  inconsistencies were observed when deleting and recreating instances
  sharing the same names and associating them with the same floating IPs
  as before.

  This has been reproduced on :
  * Focal Ussuri (Neutron-api and Designate charms with Ussuri/edge branch)
  * Focal Yoga  (Neutron-api and Designate charms with Yoga/stable branch)

  
  Reproducible steps :
  * create a domain zone with "openstack zone create"
  * configure an existing self-service network with the newly created domain
  "openstack network set --dns-domain ..."
  * create a router on the self-service network with an external gateway on 
provider network
  * create an instance on self-service network
  * create a floating ip address on provider network
  * associate floating ip to instance
  --> the DNS entry gets created

  * delete the instance *WITH* the floating ip still attached
  --> the DNS entry is deleted

  * recreate a new instance with exactly the *same* name and re-use the *same* 
floating ip
  --> the DNS entry doesn't get created
  --> it doesn't seem to be related to TTL, since this makes the issue 
permanent even after a day of testing when TTL is set by default to 1 hour

  Worse inconsistencies can be seen when, instead of deleting an
  instance, the floating ip is moved directly to another instance:
  * have 2 instances, vm-1 and vm-2
  * attach floating ip to vm-1 "openstack server add floating ip XXX vm-1"
  --> the DNS entry is created
  * attach the same floating ip to vm-2 "openstack server add floating ip XXX
  vm-2" (this is permitted by the CLI and simply moves the fIP to vm-2)
  --> the DNS entry still uses vm-1; the vm-2 entry doesn't get created

  When you combine these 2 issues, you can be left with either false
  records being kept or automatic records failing silently to be
  created.

  
  Workaround :
  * either always remove floating ip *before* deleting an instance
  or
  * remove floating ip on instance
  * then re-add floating ip on instance

  
  Eventually, when deleting the floating ip to reassign it, we are
  greeted with this error on the neutron-api unit (on Ussuri, but the
  error is similar on Yoga):

  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db 
[req-e6d270d2-fbde-42d7-a75b-2c8a67c42fcb 2dc4151f6dba4c3e8ba8537c9c354c13 
f548268d5255424591baa8783f1cf277 - 6a71047e7d7f4e01945ec58df06ae63f 
6a71047e7d7f4e01945ec58df06ae63f] Error deleting Floating IP data from external 
DNS service. Name: 'vm-2'. Domain: 'compute.stack.vpn.'. IP addresses 
'192.168.21.217'. DNS service driver message 'Name vm-2.compute.stack.vpn. is 
duplicated in the external DNS service': 
neutron_lib.exceptions.dns.DuplicateRecordSet: Name vm-2.compute.stack.vpn. is 
duplicated in the external DNS service
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db Traceback (most recent 
call last):
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db   File 
"/usr/lib/python3/dist-packages/neutron/db/dns_db.py", line 214, in 
_delete_floatingip_from_external_dns_service
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db 
self.dns_driver.delete_record_set(context, dns_domain, dns_name,
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db   File 
"/usr/lib/python3/dist-packages/neutron/services/externaldns/drivers/designate/driver.py",
 line 172, in delete_record_set
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db ids_to_delete = 
self._get_ids_ips_to_delete(
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db   File 
"/usr/lib/python3/dist-packages/neutron/services/externaldns/drivers/designate/driver.py",
 line 200, in _get_ids_ips_to_delete
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db raise 
dns_exc.DuplicateRecordSet(dns_name=name)
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db 
neutron_lib.exceptions.dns.DuplicateRecordSet: Name vm-2.compute.stack.vpn. is 
duplicated in the external DNS service
  2022-10-19 02:24:12.497 67548 ERROR neutron.db.dns_db

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-designate/+bug/1993628/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2017518] Re: Neutron LInux bridge agents wildly fluctuating

2023-04-28 Thread Brian Haley
The patch would not have fixed the issue, just added some logging so it
was obvious what the agent was doing.

And yes, syncing time is important; it could have just been the time
difference between the agents and server causing things to seem broken.
Glad you solved your issue.

** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2017518

Title:
  Neutron LInux bridge agents wildly fluctuating

Status in neutron:
  Invalid

Bug description:
  Hi All,

  We have an OSA Yoga setup. The neutron linux bridge agents are wildly
  fluctuating, going up and down in the `neutron agent list` command.
  The count of agents which are down is very intermittent, changing
  every few seconds, as shown below:

  
  38
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  34
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  43
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  2
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  2
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  82
  root@utility-container-:~# neutron agent-list | grep Linux | grep xxx | wc -l
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  54
  ---

  As shown above, the count of agents which are down fluctuates within a
  few seconds of executing the above command. The logs on the network
  nodes do not indicate anything wrong. Why is this happening?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2017518/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015377] Re: If dhcp port is deleted from neutron, it is never recreated

2023-04-06 Thread Brian Haley
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Brian Haley (brian-haley)

** Changed in: neutron
   Importance: Undecided => Medium

** Tags added: ovn

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015377

Title:
  If dhcp port is deleted from neutron, it is never recreated

Status in neutron:
  New
Status in neutron package in Ubuntu:
  New

Bug description:
  This is happening in charmed OpenStack yoga/stable using ovn 22.03.

  Neutron version is 2:20.2.0-0ubuntu1

  If the dhcp port of a subnet is deleted via the OpenStack API, it will
  never be recreated, even when toggling dhcp on the subnet with:

  openstack subnet set --no-dhcp/--dhcp 

  This also causes a missing metadata route in the OVN DHCP_Options:

  i.e.

  _uuid   : 2d4871f5-b675-4978-b291-a1ea7bb5bd4c
  cidr: "192.168.100.0/24"
  external_ids: {"neutron:revision_number"="1", 
subnet_id="62b269e0-6668-48ae-9728-aacd7a99df95"}
  options : {dns_server="{91.189.91.131, 91.189.91.132}", 
lease_time="43200", mtu="1500", router="192.168.100.1", 
server_id="192.168.100.1", server_mac="fa:16:3e:15:13:e6"}

  
  Note the missing 
classless_static_route="{169.254.169.254/32,192.168.100.2,0.0.0.0/0,192.168.100.1}"

  
  Even if the dhcp port is then recreated manually with device-id 
ovnmeta- and device-owner network:distributed, the missing route 
won't be added to ovn, causing VM creation failure.

  The routes will appear again in the OVN DHCP_Options table only when
  updating the subnet host-routes with:

  openstack subnet set --host-route destination=,gateway= 

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2015377/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015388] [NEW] Deleting network does not remove network namespace

2023-04-05 Thread Brian Haley
Public bug reported:

Environment: ML2/OVS

DevStack Version: 2023.2
Change: b10c06027273d125f2b8cd14d4b19737dfb94b94 Merge "Add config options for 
cinder nfs backend" 2023-03-27 14:20:04 +
OS Version: Ubuntu 22.04 jammy

While testing a fix for a bug on a recent devstack, I noticed that
network namespaces were not getting deleted when I deleted a network
with a subnet attached to it.

$ openstack network list
+--------------------------------------+----------+----------------------------------------------------------------------------+
| ID                                   | Name     | Subnets                                                                    |
+--------------------------------------+----------+----------------------------------------------------------------------------+
| 32171620-509d-498f-b0e1-b86c2fdc004e | shared   | 995701e4-7923-411f-b3d6-a0d9a6c22ca5                                       |
| 6cc3ff11-09a6-40a8-9765-a453fcb7bf2e | private  | d990dbe7-5658-46f7-b0a1-691a18444519, e7c91b4a-0595-42be-b777-6a2ee6d45113 |
| b2fbc798-3163-4696-9a12-75a4f0b7c3c7 | public   | 4be007ea-0a2a-48e2-94a2-15407ff11694, af06a026-7044-4284-b653-955c41685905 |
| cc47f423-c50b-4b06-b62b-6d2603eb5fa0 | mtu-1279 | 0664a3d9-3eb8-4503-86dd-aba78c02791c                                       |
+--------------------------------------+----------+----------------------------------------------------------------------------+

$ ip netns | grep cc47f423-c50b-4b06-b62b-6d2603eb5fa0
qdhcp-cc47f423-c50b-4b06-b62b-6d2603eb5fa0 (id: 6)

$ openstack network delete cc47f423-c50b-4b06-b62b-6d2603eb5fa0
$ openstack network list
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID                                   | Name    | Subnets                                                                    |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 32171620-509d-498f-b0e1-b86c2fdc004e | shared  | 995701e4-7923-411f-b3d6-a0d9a6c22ca5                                       |
| 6cc3ff11-09a6-40a8-9765-a453fcb7bf2e | private | d990dbe7-5658-46f7-b0a1-691a18444519, e7c91b4a-0595-42be-b777-6a2ee6d45113 |
| b2fbc798-3163-4696-9a12-75a4f0b7c3c7 | public  | 4be007ea-0a2a-48e2-94a2-15407ff11694, af06a026-7044-4284-b653-955c41685905 |
+--------------------------------------+---------+----------------------------------------------------------------------------+

$ ip netns | grep cc47f423-c50b-4b06-b62b-6d2603eb5fa0
qdhcp-cc47f423-c50b-4b06-b62b-6d2603eb5fa0 (id: 6)

I almost would have expected an error since there was a subnet here;
I will have to re-check the API ref to see.

During one attempt I actually triggered a SubnetNotFound error, but
that's probably a different issue as it wasn't necessary to recreate
this.

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015388

Title:
  Deleting network does not remove network namespace

Status in neutron:
  New

Bug description:
  Environment: ML2/OVS

  DevStack Version: 2023.2
  Change: b10c06027273d125f2b8cd14d4b19737dfb94b94 Merge "Add config options 
for cinder nfs backend" 2023-03-27 14:20:04 +
  OS Version: Ubuntu 22.04 jammy

  While testing a fix for a bug on a recent devstack, I noticed that
  network namespaces were not getting deleted when I deleted a network
  with a subnet attached to it.

  $ openstack network list
  
  +--------------------------------------+----------+----------------------------------------------------------------------------+
  | ID                                   | Name     | Subnets                                                                    |
  +--------------------------------------+----------+----------------------------------------------------------------------------+
  | 32171620-509d-498f-b0e1-b86c2fdc004e | shared   | 995701e4-7923-411f-b3d6-a0d9a6c22ca5                                       |
  | 6cc3ff11-09a6-40a8-9765-a453fcb7bf2e | private  | d990dbe7-5658-46f7-b0a1-691a18444519, e7c91b4a-0595-42be-b777-6a2ee6d45113 |
  | b2fbc798-3163-4696-9a12-75a4f0b7c3c7 | public   | 4be007ea-0a2a-48e2-94a2-15407ff11694, af06a026-7044-4284-b653-955c41685905 |
  | cc47f423-c50b-4b06-b62b-6d2603eb5fa0 | mtu-1279 | 0664a3d9-3eb8-4503-86dd-aba78c02791c                                       |
  +--------------------------------------+----------+----------------------------------------------------------------------------+

  $ ip netns | grep cc47f423-c50b-4b06-b62b-6d2603eb5fa0
  qdhcp-cc47f423-c50b-4b06-b62b-6d2603eb5fa0 (id: 6)

  $ openstack network delete cc47f423-c50b-4b06-b62b-6d2603eb5fa0
  $ op

[Yahoo-eng-team] [Bug 2009053] Re: OVN: default stateless SG blocks metadata traffic

2023-03-29 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2009053

Title:
  OVN: default stateless SG blocks metadata traffic

Status in neutron:
  Won't Fix

Bug description:
  Bug originally found by Alex Katz and reported in the bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=2149713

  Description of problem:
  When a stateless security group is attached to an instance, it fails
  to fetch metadata info. An explicit rule is required to allow metadata
  traffic from 169.254.169.254.

  Checked with the custom security group (only egress traffic is
  allowed) as well as with the default security group (egress and
  ingress from the same SG are allowed).

  Version-Release number of selected component (if applicable):
  RHOS-17.1-RHEL-9-20221115.n.2
  Red Hat Enterprise Linux release 9.1 (Plow)

  How reproducible:
  100%

  Steps to Reproduce:
  openstack security group create --stateless test_sg
  openstack server create --image  --flavor  --network  
--security-group test_sg vm_1

  Actual results:
  checking http://169.254.169.254/2009-04-04/instance-id
  failed 1/20: up 21.53. request failed
  failed 2/20: up 70.89. request failed
  failed 3/20: up 120.12. request failed
  failed 4/20: up 169.36. request failed
  failed 5/20: up 218.81. request failed
  failed 6/20: up 268.17. request failed

  Expected results:
  Metadata is successfully fetched

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2009053/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2012144] Re: [OVN] adding/removing floating IPs neutron server errors about binding port

2023-03-21 Thread Brian Haley
** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2012144

Title:
  [OVN] adding/removing floating IPs neutron server errors about binding
  port

Status in neutron:
  Invalid

Bug description:
  Using Zed and Ubuntu and OVN as the ml2 driver.

  Neutron Server Version 21.0.0
  OVN Version 22.09.0

  When adding/removing floating IPs the neutron server errors with the
  following

  2023-02-23 03:30:05.842 25044 INFO
  neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver [None req-
  accc7595-320b-47e4-93e4-b07f6b205295 - - - - - -] Refusing to bind
  port 5266e0cd-1064-4baa-9679-8c5f2eb13d29 on host sora due to the OVN
  chassis bridge mapping physical networks [] not supporting physical
  network: provider

  2023-02-23 03:30:05.843 25044 ERROR neutron.plugins.ml2.managers [None
  req-accc7595-320b-47e4-93e4-b07f6b205295 - - - - - -] Failed to bind
  port 5266e0cd-1064-4baa-9679-8c5f2eb13d29 on host sora for vnic_type
  normal using segments [{'id': '5621a693-771d-4a57-beb4-d7a6e8dfc1b9',
  'network_type': 'flat', 'physical_network': 'provider',
  'segmentation_id': None, 'network_id':
  '71cbb38e-dc91-4db4-9a3a-7e499cd3fd69'}]

  The floating IPs work as expected though so I am unsure why this error
  is given.

  The host has been setup with the following bridge and mapping

  ovs-vsctl --may-exist add-br br-provider -- set bridge br-provider 
protocols=OpenFlow13
  ovs-vsctl set open . external-ids:ovn-bridge-mappings=provider:br-provider
  ovs-vsctl --may-exist add-port br-provider veth1-provider

  
  Looking at the ovn driver code I can see it gets the ovn bridge mappings here 
https://github.com/openstack/neutron/blob/stable/zed/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L850

  and if I get the mappings by hand I can see them set as expected
  below:

  # ovn-sbctl list Chassis
  _uuid   : ac259942-1da8-425e-aaad-40f861771353
  encaps  : [17ba4674-5955-4dd7-847d-37ffc94dbc38, 
3bf53123-6d0b-4422-8afc-7b52530ea782]
  external_ids: {}
  hostname: sora
  name: "b8edbb46-b62a-4ca2-be87-872a83eb03d5"
  nb_cfg  : 0
  other_config: {ct-no-masked-label="true", datapath-type=system, 
iface-types="bareudp,erspan,geneve,gre,gtpu,internal,ip6erspan,ip6gre,lisp,patch,stt,system,tap,vxlan",
 is-interconn="false", mac-binding-timestamp="true", 
ovn-bridge-mappings="provider:br-provider", ovn-chassis-mac-mappings="", 
ovn-cms-options=enable-chassis-as-gw, ovn-enable-lflow-cache="true", 
ovn-limit-lflow-cache="", ovn-memlimit-lflow-cache-kb="", 
ovn-monitor-all="false", ovn-trim-limit-lflow-cache="", ovn-trim-timeout-ms="", 
ovn-trim-wmark-perc-lflow-cache="", port-up-notif="true"}
  transport_zones : []
  vtep_logical_switches: []

  I believe this error is being generated here
  
https://github.com/openstack/neutron/blob/stable/zed/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L1004

  but I am unsure why since everything still seems to work?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2012144/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007437] Re: Test test_volumes_pagination unstable

2023-02-22 Thread Brian Haley
The fix merged in https://review.opendev.org/c/openstack/horizon/+/847985
will close this bug.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2007437

Title:
  Test test_volumes_pagination unstable

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I noticed while looking at a horizon backport,
  https://review.opendev.org/c/openstack/horizon/+/866891
  that the test_volumes_pagination job was failing in both the stable and 
master branches.

  It's usually a "'NoneType' object is not iterable" failure, here's an
  example:

  https://paste.opendev.org/show/818752/

  While I don't know how to fix it, I think the horizon gate should treat the 
test
  as unstable while it can be looked at. I can propose a patch for that based on
  what neutron does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2007437/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2007437] [NEW] Test test_volumes_pagination unstable

2023-02-15 Thread Brian Haley
Public bug reported:

I noticed while looking at a horizon backport,
https://review.opendev.org/c/openstack/horizon/+/866891
that the test_volumes_pagination job was failing in both the stable and master 
branches.

It's usually a "'NoneType' object is not iterable" failure, here's an
example:

https://paste.opendev.org/show/818752/

While I don't know how to fix it, I think the horizon gate should treat the test
as unstable while it can be looked at. I can propose a patch for that based on
what neutron does.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2007437

Title:
  Test test_volumes_pagination unstable

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I noticed while looking at a horizon backport,
  https://review.opendev.org/c/openstack/horizon/+/866891
  that the test_volumes_pagination job was failing in both the stable and 
master branches.

  It's usually a "'NoneType' object is not iterable" failure, here's an
  example:

  https://paste.opendev.org/show/818752/

  While I don't know how to fix it, I think the horizon gate should treat the 
test
  as unstable while it can be looked at. I can propose a patch for that based on
  what neutron does.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2007437/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1862315] Re: Sometimes VMs can't get IP when spawned concurrently

2023-01-24 Thread Brian Haley
Since all the changes seem to have merged, I will close this.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1862315

Title:
  Sometimes VMs can't get IP when spawned concurrently

Status in neutron:
  Fix Released

Bug description:
  Version: Stein
  Scenario description:
  Rally creates 60 VMs with 6 threads. Each thread:
   - creates a VM
   - pings it
   - if successful ping, tries to reach the VM via ssh and execute a command. 
It tries to do that for 2 minutes.
   - if successful ssh - deletes the VM

  For some VMs ping fails. Console log shows that VM failed to get IP
  from DHCP.

  tcpdump on corresponding DHCP port shows VM's DHCP requests, but dnsmasq does 
not reply.
  From dnsmasq logs:

  Feb  6 00:15:43 dnsmasq[4175]: read 
/var/lib/neutron/dhcp/da73026e-09b9-4f8d-bbdd-84d89c2487b2/addn_hosts - 28 
addresses
  Feb  6 00:15:43 dnsmasq[4175]: duplicate dhcp-host IP address 10.2.0.194 at 
line 28 of /var/lib/neutron/dhcp/da73026e-09b9-4f8d-bbdd-84d89c2487b2/host
  ...
  Feb  6 00:15:48 dnsmasq-dhcp[4175]: 1436802562 DHCPDISCOVER(tap7216a777-13) 
fa:16:3e:b1:a7:f2 no address available

  So there must be something wrong with the neutron-dhcp-agent network cache.

  From neutron-dhcp-agent log:

  2020-02-06 00:15:20.282 40 DEBUG neutron.agent.dhcp.agent 
[req-f5107bdd-d53a-4171-a283-de3d7cf7c708 - - - - -] Resync event has been 
scheduled _periodic_resync_helper 
/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py:276
  2020-02-06 00:15:20.282 40 DEBUG neutron.common.utils 
[req-f5107bdd-d53a-4171-a283-de3d7cf7c708 - - - - -] Calling throttled function 
clear wrapper 
/var/lib/openstack/lib/python3.6/site-packages/neutron/common/utils.py:102
  2020-02-06 00:15:20.283 40 DEBUG neutron.agent.dhcp.agent 
[req-f5107bdd-d53a-4171-a283-de3d7cf7c708 - - - - -] resync 
(da73026e-09b9-4f8d-bbdd-84d89c2487b2): ['Duplicate IP addresses found, DHCP 
cache is out of sync'] _periodic_resync_helper 
/var/lib/openstack/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py:293

  so the agent is aware of the invalid cache for the net, but for an
  unknown reason the actual net resync happens only after 8 minutes:

  2020-02-06 00:23:55.297 40 INFO neutron.agent.dhcp.agent
  [req-f5107bdd-d53a-4171-a283-de3d7cf7c708 - - - - -] Synchronizing
  state

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1862315/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1973039] Re: Does FWaaS v2 support linuxbridge-agent ?

2023-01-24 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973039

Title:
  Does FWaaS v2 support linuxbridge-agent  ?

Status in neutron:
  Invalid

Bug description:
  it is unclear in docs if FWaaS V2 works with linuxBridge Agent.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1973039/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1991501] Re: l3 agent fails to process router breaking floating ips when updating them on the router qg port via pyroute2

2023-01-24 Thread Brian Haley
Right, this looks like a pyroute2 issue which has since been fixed, and
neutron bumped its lower constraints file to use the newer version.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1991501

Title:
  l3 agent fails to process router breaking floating ips when updating
  them on the router qg port via pyroute2

Status in neutron:
  Fix Released

Bug description:
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent [-] Failed to process 
compatible router: edf1cc99-879a-4fe5-a7b2-d19acb8fdcbf: 
neutron.privileged.agent.linux.ip_lib.InterfaceOperationNotSupported: Operation 
not supported on interface qg-4bb9d20b-a0, namespace 
qrouter-edf1cc99-879a-4fe5-a7b2-d19acb8fdcbf.
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent Traceback (most 
recent call last):
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/agent.py", 
line 848, in _process_routers_if_compatible
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self._process_router_if_compatible(router)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/agent.py", 
line 639, in _process_router_if_compatible
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self._process_updated_router(router)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/agent.py", 
line 719, in _process_updated_router
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent ri.process()
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/common/utils.py", line 
177, in call
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent self.logger(e)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self.force_reraise()
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
200, in force_reraise
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent raise self.value
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/common/utils.py", line 
174, in call
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent return 
func(*args, **kwargs)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/router_info.py",
 line 1312, in process
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self.process_external()
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/decorator.py", line 232, in fun
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent return 
caller(func, *(extras + args), **kw)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/common/coordination.py",
 line 78, in _synchronized
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent return f(*a, **k)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/router_info.py",
 line 1046, in process_external
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self._process_external_gateway(ex_gw_port)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/router_info.py",
 line 931, in _process_external_gateway
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self.external_gateway_updated(ex_gw_port, interface_name)
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/router_info.py",
 line 884, in external_gateway_updated
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self._external_gateway_added(
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/router_info.py",
 line 822, in _external_gateway_added
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 
self._external_gateway_settings(ex_gw_port, interface_name,
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/agent/l3/router_info.py",
 line 835, in _external_gateway_settings
  2022-10-03 03:23:33.868 21 ERROR neutron.agent.l3.agent 

[Yahoo-eng-team] [Bug 1954777] Re: setting same static route with subnet already exists as direct connected network

2023-01-24 Thread Brian Haley
Regarding comment #3 - yes, when talking about the route it's both
static and connected. It is considered a user error to configure a route
that conflicts with one of the interface routes, but to avoid impacting the
API behavior we only document that you need to take precautions. For this
reason I don't believe this is a bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1954777

Title:
  setting same static route with subnet already exists as direct
  connected network

Status in neutron:
  Invalid

Bug description:
  * High level description:

  When creating a static route in an l3 router, the destination
  subnet may already exist as a directly connected network.

  Adding the route will remove the directly connected network and add
  the static route.

  After removing the static route, the directly connected route does not
  get reinstated.

  
  * Pre-conditions: 

  This has been tested on Ussuri, with:
  - Neutron Linux bridge agent
  - Neutron L3 agent
  - Neutron

  * Step-by-step reproduction steps:

  Creating a route where the destination already exists
  # openstack network create network1
  # openstack subnet create --network network1 --subnet-range 192.168.1.0/24 
network1-sub-1
  # openstack network create network2
  # openstack subnet create --network network2 --subnet-range 192.168.2.0/24 
network2-sub-1
  # openstack router create router1
  # openstack router add subnet router1 network1-sub-1
  # openstack router add subnet router1 network2-sub-1

  # Add the route, this will remove the connected network and add a static route
  # openstack router add route --route 
destination=192.168.1.0/24,gateway=192.168.2.10 router1

  # remove the static route, this will remove the route but will not add the 
connected network
  # openstack router remove route --route 
destination=192.168.1.0/24,gateway=192.168.2.10 router1

  
  It is also possible to create a route in the same network:
  # openstack network create network1
  # openstack subnet create --network network1 --subnet-range 192.168.1.0/24 
network1-sub-1
  # openstack router create router1
  # openstack router add subnet router1 network1-sub-1
  # openstack router add route --route 
destination=192.168.1.0/24,gateway=192.168.1.10 router1
  # openstack router remove route --route 
destination=192.168.1.0/24,gateway=192.168.1.10 router1

  
  * Expected output:
  1. Either the connected route never gets removed,
  2. Or the connected route is reinstated when the static route is removed

  * Version:
** OpenStack version: Ussuri
** Ubuntu Bionic

  * Environment:

  * Perceived severity:
  This can be a high impact when the customer add a route where the destination 
has high/important traffic in that flow.

  * Workaround
  - Currently, disabling and then re-enabling the router will rebuild the
whole router and thus also the missing routes.

  - Using the "neutron l3-agent-router-remove" and "neutron l3-agent-
  router-add" commands will also rebuild the router

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1954777/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930858] Re: OVN central service does not start properly

2023-01-24 Thread Brian Haley
I believe the initial issue was fixed with commit
71c99655479174750bcedfe458328328a1596766
(https://review.opendev.org/c/openstack/devstack/+/861915) in the
devstack tree which fixed issues with not finding the db sock on
startup.

The issue described in comment #6 seems different; a new bug should be
opened for that if it's still happening.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930858

Title:
  OVN central service does not start properly

Status in neutron:
  Fix Released

Bug description:
  This is a Devstack deployment. While re-stacking, unfortunately the OVN
  central service fails to start properly.

  
  + lib/neutron_plugins/ovn_agent:start_ovn:713 :   local 
SCRIPTDIR=/share/ovn/scripts
  + lib/neutron_plugins/ovn_agent:start_ovn:714 :   use_new_ovn_repository
  + lib/neutron_plugins/ovn_agent:use_new_ovn_repository:180 :   [[ False == 
\F\a\l\s\e ]]
  + lib/neutron_plugins/ovn_agent:use_new_ovn_repository:181 :   return 0
  + lib/neutron_plugins/ovn_agent:start_ovn:718 :   is_service_enabled 
ovn-northd
  + functions-common:is_service_enabled:1960 :   return 0
  + lib/neutron_plugins/ovn_agent:start_ovn:719 :   [[ False == \T\r\u\e ]]
  + lib/neutron_plugins/ovn_agent:start_ovn:725 :   _start_process 
ovn-central.service
  + lib/neutron_plugins/ovn_agent:_start_process:220 :   sudo systemctl 
daemon-reload
  + lib/neutron_plugins/ovn_agent:_start_process:221 :   sudo systemctl enable 
ovn-central.service
  Created symlink 
/etc/systemd/system/multi-user.target.wants/ovn-central.service → 
/lib/systemd/system/ovn-central.service.
  + lib/neutron_plugins/ovn_agent:_start_process:222 :   sudo systemctl restart 
ovn-central.service
  + lib/neutron_plugins/ovn_agent:start_ovn:729 :   wait_for_sock_file 
/var/run/openvswitch/ovnnb_db.sock
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:169 :   local count=0
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:170 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:171 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:172 :   count=1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   '[' 1 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:170 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:171 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:172 :   count=2
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   '[' 2 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:170 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:171 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:172 :   count=3
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   '[' 3 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:170 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:171 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:172 :   count=4
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   '[' 4 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:170 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:171 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:172 :   count=5
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   '[' 5 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:170 :   '[' '!' -S 
/var/run/openvswitch/ovnnb_db.sock ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:171 :   sleep 1
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:172 :   count=6
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:173 :   '[' 6 -gt 5 ']'
  + lib/neutron_plugins/ovn_agent:wait_for_sock_file:174 :   die 174 'Socket 
/var/run/openvswitch/ovnnb_db.sock not found'
  + functions-common:die:198 :   local exitcode=0
  [Call Trace]
  ./stack.sh:1264:start_ovn_services
  /opt/stack/devstack/lib/neutron-legacy:477:start_ovn
  /opt/stack/devstack/lib/neutron_plugins/ovn_agent:729:wait_for_sock_file
  /opt/stack/devstack/lib/neutron_plugins/ovn_agent:174:die
  [ERROR] /opt/stack/devstack/lib/neutron_plugins/ovn_agent:174 Socket 
/var/run/openvswitch/ovnnb_db.sock not found
  Error on exit
  ebtables v1.8.4 (nf_tables): table `broute' is incompatible, use 'nft' tool.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930858/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1881624] Re: full-tempest-scenario-master failing on test_ipv6_hotplug

2023-01-24 Thread Brian Haley
Neutron has re-enabled this test because the bug has been fixed in OVN,
see https://bugs.launchpad.net/neutron/+bug/1881558 for more
information. Will close the neutron part of this bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1881624

Title:
  full-tempest-scenario-master failing on test_ipv6_hotplug

Status in neutron:
  Fix Released
Status in tripleo:
  Incomplete

Bug description:
  https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-
  master_name=periodic-tripleo-ci-centos-8-standalone-full-tempest-
  scenario-master

  test_ipv6_hotplug_dhcpv6stateless[id-9aaedbc4-986d-42d5-9177-3e721728e7e0]
  test_ipv6_hotplug_slaac[id-b13e5408-5250-4a42-8e46-6996ce613e91]


  Traceback (most recent call last):
File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/common/utils.py", line 
78, in wait_until_true
  eventlet.sleep(sleep)
File "/usr/lib/python3.6/site-packages/eventlet/greenthread.py", line 36, 
in sleep
  hub.switch()
File "/usr/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in 
switch
  return self.greenlet.switch()
  eventlet.timeout.Timeout: 120 seconds

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_ipv6.py",
 line 168, in test_ipv6_hotplug_slaac
  self._test_ipv6_hotplug("slaac", "slaac")
File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_ipv6.py",
 line 153, in _test_ipv6_hotplug
  self._test_ipv6_address_configured(ssh_client, vm, ipv6_port)
File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/scenario/test_ipv6.py",
 line 121, in _test_ipv6_address_configured
  "the VM {!r}.".format(ipv6_address, vm['id'])))
File 
"/usr/lib/python3.6/site-packages/neutron_tempest_plugin/common/utils.py", line 
82, in wait_until_true
  raise exception
  RuntimeError: Timed out waiting for IP address 
'2001:db8:0:2:f816:3eff:fe44:681f' to be configured in the VM 
'52e65f9d-f7fe-4198-b6cd-42c7fff3caec'.


  
  
---

  failing consistently on the last 6 periodic runs:

  
  
https://logserver.rdoproject.org/openstack-periodic-master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-centos-8-standalone-full-tempest-scenario-master/7568508/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-
  master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-
  centos-8-standalone-full-tempest-scenario-
  master/0b9fd16/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-
  master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-
  centos-8-standalone-full-tempest-scenario-
  master/7902c02/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-
  master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-
  centos-8-standalone-full-tempest-scenario-
  master/e20fa1e/logs/undercloud/var/log/tempest/stestr_results.html.gz

  https://logserver.rdoproject.org/openstack-periodic-
  master/opendev.org/openstack/tripleo-ci/master/periodic-tripleo-ci-
  centos-8-standalone-full-tempest-scenario-
  master/837d9b7/logs/undercloud/var/log/tempest/stestr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1881624/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1857016] Re: Possible double row.delete() call in ACL code

2023-01-24 Thread Brian Haley
The UpdateACLsCommand class, where the code in question was, has been
completely removed in commit e748f3f2d800de6c84b6fe835edfa1385bc223b1 so
we can close this bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1857016

Title:
  Possible double row.delete() call in ACL code

Status in networking-ovn:
  New
Status in neutron:
  Fix Released

Bug description:
  In the field, we've seen:

  2019-11-11 14:10:04.765 54 ERROR ovsdbapp.backend.ovs_idl.transaction 
[req-bb3c2ce5-6a46-47d0-b544-772efa71158f - - - - -] Traceback (most recent 
call last):
File 
"/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 
99, in run
  txn.results.put(txn.do_commit())
File 
"/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 86, in do_commit
  command.run_idl(txn)
File "/usr/lib/python2.7/site-packages/networking_ovn/ovsdb/commands.py", 
line 616, in run_idl
  acl_del_obj.delete()
File "/usr/lib64/python2.7/site-packages/ovs/db/idl.py", line 970, in delete
  assert self._changes is not None
  AssertionError

  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
[req-bb3c2ce5-6a46-47d0-b544-772efa71158f - - - - -] Failed to fix resource 
ebc7e039-81af-4f18-babf-8750fe24d5f0 (type: ports): AssertionError
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance Traceback 
(most recent call last):
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/networking_ovn/common/maintenance.py", line 
232, in check_for_inconsistencies
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
self._fix_create_update(row)
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/networking_ovn/common/maintenance.py", line 
178, in _fix_create_update
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
res_map['ovn_update'](n_obj)
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/networking_ovn/common/ovn_client.py", line 
495, in update_port
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
self.add_txns_to_remove_port_dns_records(txn, port_object)
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
self.gen.next()
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/networking_ovn/ovsdb/impl_idl_ovn.py", line 
139, in transaction
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance yield t
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
self.gen.next()
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/api.py", line 102, in transaction
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance del 
self._nested_txns_map[cur_thread_id]
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/api.py", line 59, in __exit__
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
self.result = self.commit()
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance   File 
"/usr/lib/python2.7/site-packages/ovsdbapp/backend/ovs_idl/transaction.py", 
line 62, in commit
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance raise 
result.ex
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance 
AssertionError
  2019-11-11 14:10:04.767 54 ERROR networking_ovn.common.maintenance

  A cursory look at the python-ovs code makes it look like the
  maintenance thread might be trying to delete the same row twice since
  the only way Row._changes = None is in delete() already.

  The ACL code passes around dicts which, being unhashable, can't be
  added to sets to ensure uniqueness. In addition, from a db-schema
  perspective the same ACL could be referenced from multiple objects.
  Ultimately this code should be refactored, but a simple workaround for
  now would be to do a try/except AssertionError around the row.delete()
  since ignoring a 2nd attempted delete of the same row in a transaction
  is safe.
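
  A minimal sketch of that workaround, assuming acl_del_obj is an
  ovs.db.idl.Row already looked up inside the transaction (illustrative
  only, not the merged fix):

  try:
      acl_del_obj.delete()
  except AssertionError:
      # The row was already deleted earlier in this transaction;
      # ignoring a second delete of the same row is safe here.
      pass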

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-ovn/+bug/1857016/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : 

[Yahoo-eng-team] [Bug 1847203] Re: l3-agent stops processing router updates

2023-01-24 Thread Brian Haley
We have since updated the l3-agent to better log when it starts/stops
processing messages, so there is at least a way to look at the logs and
help determine what happened. That said, as only this one user has seen
this issue and it was on an older release, I'll close this as there have
been no other reports (and might have been fixed in any number of
changes). If you still see it on a newer release, please re-open this bug
with more information.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1847203

Title:
  l3-agent stops processing router updates

Status in neutron:
  Invalid

Bug description:
  In our work on upgrading from Queens to Rocky, we have stumbled upon
  some weird behaviour in neutron-l3-agent. After "a while" (usually
  ~days), the l3-agent will simply stop processing router updates. In
  the debug log, we see:

  Got routers updated notification
  :[u'1dea9d84-e5ec-44be-b37f-7f9070dd159e'] routers_updated
  /usr/lib/python2.7/dist-packages/neutron/agent/l3/agent.py:446

  But then nothing happens after that. We're testing this with adding
  and removing a floating IP.

  The problem is that we really have no clue what happens, other than
  the observed symptoms, so we can't really provide a way to reproduce
  this.

  neutron-l3-agent 2:13.0.4-0ubuntu1~cloud0
  openvswitch-switch   2.10.0-0ubuntu2~cloud0

  Ubuntu 18.04 LTS, running the 4.15.0-51-generic kernel

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1847203/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1837339] Re: CIDR's of the form 12.34.56.78/0 should be an error

2023-01-24 Thread Brian Haley
Looks like this was fixed by a change in neutron to use a "normalized"
CIDR in the security group backends;
https://bugs.launchpad.net/neutron/+bug/1869129 has more details.

So I think we can mark the neutron portion here fixed.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1837339

Title:
  CIDR's of the form 12.34.56.78/0 should be an error

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix
Status in OpenStack Security Notes:
  New

Bug description:
  The problem is that some users do not understand how CIDRs work, and
  incorrectly use /0 when they are trying to specify a single IP or a
  subnet in an Access Rule.  Unfortunately 12.34.56.78/0 means the same
  thing as 0.0.0.0/0.

  The proposed fix is to insist that /0 only be used with 0.0.0.0/0 and
  the IPv6 equivalent ::/0 when entering or updating Access Rule CIDRs
  in via the dashboard.

  I am labeling this as a security vulnerability since it leads to naive
  users creating instances with ports open to the world when they didn't
  intend to do that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1837339/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831009] Re: Improper close connection to database leading to mysql/mariadb block connection.

2023-01-24 Thread Brian Haley
** Project changed: neutron => oslo.db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831009

Title:
  Improper close connection to database leading to mysql/mariadb block
  connection.

Status in oslo.db:
  New

Bug description:
  Version
  ===
  Neutron-server: openstack-neutron-13.0.2-1.el7.noarch
  Nova: openstack-nova-*-18.2.0-1.el7.noarch
  Mariadb: mariadb-server-10.1.20-2.el7.x86_64

  Openstack setup:
  
  HAproxy => 3 Controllers (nova,neutron,keystone) => Mariadb

  Openstack config
  
  connection_recycle_time = 3600 (default)
  the rest of database connection are default.

  Mariadb config
  ==
  interactive_timeout: 28800
  wait_timeout: 28800

  
  Error
  =
  As my OpenStack cluster grows and more and more people start using the 
cluster, I am seeing this error every day now:

  2019-05-30 10:42:15.695 44938 ERROR oslo_db.sqlalchemy.exc_filters
  [req-b6fd59b9-8378-49df-bbf6-de9f9b741490 - - - - -] DBAPIError
  exception wrapped from (pymysql.err.InternalError) (1129, u"Host
  'xx.xx.xx.xx' is blocked because of many connection errors; unblock
  with 'mysqladmin flush-hosts'") (Background on this error at:
  http://sqlalche.me/e/2j85): InternalError: (1129, u"Host 'xx.xx.xx.xx'
  is blocked because of many connection errors; unblock with 'mysqladmin
  flush-hosts'")

  This does not necessarily happen only to neutron but to all OpenStack
  components. And when I turn on log_warnings=4 in MariaDB, I can see
  the following in the MariaDB log:

  2019-05-27 10:22:04 140078484511488 [Warning] Aborted connection 70834104 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:05 140084673243904 [Warning] Aborted connection 70834111 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:07 140078500485888 [Warning] Aborted connection 70834211 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:07 140078490655488 [Warning] Aborted connection 70834157 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:09 140078698322688 [Warning] Aborted connection 70834327 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:12 140078715833088 [Warning] Aborted connection 70894166 to 
db: 'unconnected' user: 'neutron' host: 'controller2' (CLOSE_CONNECTION)
  2019-05-27 10:22:13 140078737951488 [Warning] Aborted connection 70834380 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:17 140078641797888 [Warning] Aborted connection 70834382 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:21 140078581893888 [Warning] Aborted connection 70834436 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:22 140078724434688 [Warning] Aborted connection 70834469 to 
db: 'nova' user: 'nova' host: 'controller3' (CLOSE_CONNECTION)
  2019-05-27 10:22:28 140078715833088 [Warning] Aborted connection 70894174 to 
db: 'unconnected' user: 'unauthenticated' host: 'controller2' (CLOSE_CONNECTION)
  2019-05-27 10:22:29 140078715833088 [Warning] Aborted connection 70894177 to 
db: 'neutron' user: 'neutron' host: 'controller2' (CLOSE_CONNECTION)
  ...
  2019-05-30  7:35:28 140078596025088 [Warning] Aborted connection 72547571 to 
db: 'nova' user: 'nova' host: 'controller1' (Got an error reading communication 
packets)
  2019-05-30  7:46:54 140078541036288 [Warning] Aborted connection 72552087 to 
db: 'nova_cell0' user: 'nova' host: 'controller1' (Got an error reading 
communication packets)
  2019-05-30  7:46:57 140078799182592 [Warning] Aborted connection 72552086 to 
db: 'nova' user: 'nova' host: 'controller1' (Got an error reading communication 
packets)
  2019-05-30  7:47:02 140078738565888 [Warning] Aborted connection 72534613 to 
db: 'nova_cell0' user: 'nova' host: 'controller1' (Got an error reading 
communication packets)
  2019-05-30  8:31:11 140078638418688 [Warning] Aborted connection 72419897 to 
db: 'nova' user: 'nova' host: 'controller3' (Got timeout reading communication 
packets)
  2019-05-30  8:36:22 140078791195392 [Warning] Aborted connection 72421900 to 
db: 'nova' user: 'nova' host: 'controller2' (Got timeout reading communication 
packets)
  2019-05-30  8:46:23 140078624594688 [Warning] Aborted connection 72577413 to 
db: 'nova_cell0' user: 'nova' host: 'controller1' (Got an error reading 
communication packets)
  2019-05-30  8:46:26 140078716447488 [Warning] Aborted connection 72577414 to 
db: 'nova' user: 'nova' host: 'controller1' (Got an error reading communication 
packets)
  2019-05-30 10:45:23 140078661151488 [Warning] Aborted connection 72675103 to 
db: 'neutron' user: 'neutron' host: 'controller3' (Got an error reading 
communication packets)
  2019-05-30 10:45:23 140078672517888 [Warning] 

[Yahoo-eng-team] [Bug 1818765] Re: the PeriodicWorker function misssing the default desc in constructor

2023-01-24 Thread Brian Haley
I don't think this is a bug. All the neutron agents call
register_common_config_options() to make sure these common options are
registered. This should be done by other users as well. I'll close this,
please re-open if there is something I'm missing.
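
A minimal sketch of that registration step (the module path is an
assumption for illustration; check the neutron tree for where the helper
actually lives):

from oslo_config import cfg
from neutron.common import config as common_config

# Register setproctitle and the other common options before any worker
# reads them:
common_config.register_common_config_options()
assert 'setproctitle' in cfg.CONF  # now a registered option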

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1818765

Title:
  the PeriodicWorker function misssing the default desc in constructor

Status in neutron:
  Invalid
Status in Tricircle:
  New

Bug description:
  
  After this PR merged: https://review.openstack.org/#/c/637019/

  we should add a default desc in the PeriodicWorker constructor.
  Otherwise, any class based on PeriodicWorker that does not have the
  setproctitle option registered in the neutron conf will get a core
  dump error like the one below, where set_proctitle is None and the
  setproctitle config option does not exist:

  packages/neutron/worker.py", line 21, in __init__
  set_proctitle = set_proctitle or cfg.CONF.setproctitle

  
  ft2.2: 
tricircle.tests.unit.network.test_central_trunk_plugin.PluginTest.test_delete_trunk_StringException:
 Traceback (most recent call last):
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py",
 line 1305, in patched
  return func(*args, **keywargs)
File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 555, 
in test_delete_trunk
  fake_plugin.delete_trunk(q_ctx, t_trunk['id'])
File "tricircle/network/central_trunk_plugin.py", line 70, in delete_trunk
  super(TricircleTrunkPlugin, self).delete_trunk(context, trunk_id)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/plugin.py",
 line 267, in delete_trunk
  if trunk_port_validator.can_be_trunked_or_untrunked(context):
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 115, in can_be_trunked_or_untrunked
  if not self.is_bound(context):
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/services/trunk/rules.py",
 line 109, in is_bound
  core_plugin = directory.get_plugin()
File "tricircle/tests/unit/network/test_central_trunk_plugin.py", line 254, 
in fake_get_plugin
  return FakeCorePlugin()
File "tricircle/network/central_plugin.py", line 182, in __new__
  n = super(TricirclePlugin, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py",
 line 156, in __new__
  return super(NeutronDbPluginV2, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 104, in replacement_new
  instance = super_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py",
 line 106, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/external_net_db.py",
 line 77, in __new__
  return super(External_net_db_mixin, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 126, in replacement_new
  instance = orig_new(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron/db/portbindings_db.py",
 line 54, in __new__
  return super(PortBindingMixin, cls).__new__(cls, *args, **kwargs)
File 
"/home/zuul/src/git.openstack.org/openstack/tricircle/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/resource_extend.py",
 line 124, in replacement_new
  instance = super_new(cls, *args, **kwargs)
File 

[Yahoo-eng-team] [Bug 1809080] Re: reload_cfg doesn't work correctly

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1809080

Title:
  reload_cfg doesn't work correctly

Status in neutron:
  Won't Fix

Bug description:
  class ProcessManager[1] is used to manage processes in neutron agents. The 
reload_cfg function reloads the configuration by sending a HUP signal to 
the target process. This may work for a Linux system administrator, who 
updates the config file and then sends a HUP signal to reload.
  But in OpenStack, this is a bit different. I have read the code for 
dnsmasq[2] and haproxy. There, the parameters and config files are generated 
by a callback function, which writes the config files and returns the 
command (with parameters). Sending a HUP signal neither changes the 
parameters nor regenerates the config files.

  I would suggest another way to reload/restart the process which makes
  sure the callback function is called, as sketched below.
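
  A rough sketch of what I mean, assuming a ProcessManager-like object
  whose enable() re-runs the command callback (names are illustrative,
  not a concrete patch):

  def restart_with_fresh_config(process_manager):
      # Stop the old process, then start a new one. Unlike sending
      # SIGHUP, enable() re-invokes the callback that regenerates the
      # config files and the command line, so new parameters actually
      # take effect.
      process_manager.disable()
      process_manager.enable()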

  [1] 
https://github.com/openstack/neutron/blob/8c59cfe98eb15f024b6c24312c0bf51af02a5dc3/neutron/agent/linux/external_process.py#L50
  [2] 
https://github.com/openstack/neutron/blob/8c59cfe98eb15f024b6c24312c0bf51af02a5dc3/neutron/agent/linux/dhcp.py#L457-L458

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1809080/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1809447] Re: performance regression from mitaka to ocata

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1809447

Title:
  performance regression from mitaka to ocata

Status in neutron:
  Won't Fix

Bug description:
  With rally tests I have noticed a general performance drop in production on 
neutron of between 10% and 50% across all components after migrating to Ocata 
from Mitaka and Newton.
  I'm able to reproduce the problem with an isolated VM using Ubuntu 16.04 
packages from the related repo, using port creation as the test because it 
seems the most relevant. Below are the times for 40 serial port creations 
using curl directly against neutron:

  mitaka   0m 21s
  newton   0m 31s
  ocata0m 50s
  rocky0m 37s

  I have done more tests with the following releases using devstack to
  check whether the problem was solved, but unfortunately it seems not.

  I have also done a bit of profiling, and it seems that the ORM
  produces different behavior on Ocata and makes some calls that on
  Mitaka are never made:

  ORMAdapter.traverse
  ORMAdapter.replace
  ORMAdapter._corresponding_column
  ORMAdapter._locate_col

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1809447/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1814209] Re: Messages for security rules are being sent to a wrong MQ topic. Security rules are out of sync

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1814209

Title:
  Messages for security rules are being sent to a wrong MQ topic.
  Security rules are out of sync

Status in neutron:
  Invalid

Bug description:
  Hello,

  We deployed Neutron + OVS + DVR a long time ago.
  After upgrading from Ocata->Pike->Queens we've got a problem with security 
groups. They are all out of sync because messages are being sent to a queue 
with no consumers (there were some old Ocata consumers, but we turned 
them off for testing).

  Request logs - https://pastebin.com/80BMDLai

  Queue q-agent-notifier-security_group-update doesn't have any
  consumers at all. So, the compute nodes don't get it, thus they don't
  update security rules accordingly. Is this queue used in rocky?

  Sometimes, I can see messages being sent to neutron-vo-
  SecurityGroupRule-1.0 and all the compute nodes get them. It looks like
  an intermittent problem.

  How to reproduce: Upgrade sequentially from Ocata to Pike and to
  Rocky.

  Why might it happen, and how can it be fixed?

  If you need any additional information just let me know.

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1814209/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1795716] Re: neutron-openvswitch-agent deletes existing other_config and thus breaks undercloud's MAC address assignment in tripleo

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1795716

Title:
  neutron-openvswitch-agent deletes existing other_config and thus
  breaks undercloud's MAC address assignment in tripleo

Status in neutron:
  Invalid

Bug description:
  
https://github.com/openstack/neutron/commit/1f8378e0ac4b8c3fc4670144e6efc51940d796ad
  was supposed to change other-config:mac-table-size so that a higher value
  of the mac-table-size option could be set for a bridge. Instead, it
  overwrites the entire other_config map and thus interferes with tripleo's
  undercloud settings, where other-config:hwaddr is a requirement.

  neutron-openvswitch-agent thus resets the MAC address to a random
  value although it should be fixed to the underlying interface's MAC.
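
  A hedged sketch of a non-destructive update using ovsdbapp's db_add,
  which adds or updates a single key of a map column instead of replacing
  the whole map (API handle setup omitted; the bridge name and value are
  taken from the logs below):

  # ovsdb = an ovsdbapp OvsdbIdl API instance for the local switch
  ovsdb.db_add('Bridge', 'br-ctlplane',
               'other_config', {'mac-table-size': '5'}).execute(
                   check_error=True)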

  The original bridge configuration is:
  ~~~
  ov[root@undercloud-7 ~]# ovs-vsctl list-bridge br-ctlplane
  ovs-vsctl: unknown command 'list-bridge'; use --help for help
  [root@undercloud-7 ~]# ovs-vsctl list bridge br-ctlplane
  _uuid   : d56235c5-4933-4334-b33b-be2134c5
  auto_attach : []
  controller  : []
  datapath_id : "525400ec14c2"
  datapath_type   : ""
  datapath_version: ""
  external_ids: {bridge-id=br-ctlplane}
  fail_mode   : standalone
  flood_vlans : []
  flow_tables : {}
  ipfix   : []
  mcast_snooping_enable: false
  mirrors : []
  name: br-ctlplane
  netflow : []
  other_config: {hwaddr="52:54:00:ec:14:c2"}
  ports   : [054cde3c-0e02-497d-ac25-be8e6992f708, 
fcbfcff7-d6b8-4bce-824d-085a681663cf]
  protocols   : []
  rstp_enable : false
  rstp_status : {}
  sflow   : []
  status  : {}
  stp_enable  : false
  ~~~

  The new version of neutron-openvswitch-agent sets this:
  ~~~
  2018-10-02 12:31:43.032 3949 DEBUG neutron.agent.ovsdb.impl_idl [-] Running 
txn command(idx=1): DbSetCommand(table=Bridge, col_values=(('other_config', 
{'mac-table-size': '5'}),), record=br-ctlplane) do_commit 
/usr/lib/python2.7/site-packages/neutron/agent/ovsdb/impl_idl.py:98
  ~~~

  Which removes the hwaddr:
  ~~~
  [root@undercloud-7 ~]# ovs-vsctl list bridge br-ctlplane
  _uuid   : 334f1314-e024-4c0e-ad6f-acddaa43bb40
  auto_attach : []
  controller  : [505d73e7-4049-44b8-862c-e19e556bc051]
  datapath_id : "16134f330e4c"
  datapath_type   : system
  datapath_version: ""
  external_ids: {bridge-id=br-ctlplane}
  fail_mode   : secure
  flood_vlans : []
  flow_tables : {}
  ipfix   : []
  mcast_snooping_enable: false
  mirrors : []
  name: br-ctlplane
  netflow : []
  other_config: {mac-table-size="5"}
  ports   : [18c205e9-c869-4c0b-a24a-18e249cf4f3e, 
90ab6c75-f108-4716-a328-9c26ba7b1a75]
  protocols   : ["OpenFlow10", "OpenFlow13"]
  rstp_enable : false
  rstp_status : {}
  sflow   : []
  status  : {}
  stp_enable  : false
  ~~~

  When it should run something similar to this manual command:
  ~~~
  [root@undercloud-7 ~]# ovs-vsctl set bridge br-ctlplane 
other-config:mac-table-size=5
  [root@undercloud-7 ~]# ovs-vsctl list bridge br-ctlplane
  _uuid   : d56235c5-4933-4334-b33b-be2134c5
  auto_attach : []
  controller  : []
  datapath_id : "525400ec14c2"
  datapath_type   : ""
  datapath_version: ""
  external_ids: {bridge-id=br-ctlplane}
  fail_mode   : standalone
  flood_vlans : []
  flow_tables : {}
  ipfix   : []
  mcast_snooping_enable: false
  mirrors : []
  name: br-ctlplane
  netflow : []
  other_config: {hwaddr="52:54:00:ec:14:c2", mac-table-size="5"}
  ports   : [054cde3c-0e02-497d-ac25-be8e6992f708, 
fcbfcff7-d6b8-4bce-824d-085a681663cf]
  protocols   : []
  rstp_enable : false
  rstp_status : {}
  sflow   : []
  status  : {}
  stp_enable  : false
  [root@undercloud-7 ~]# 
  [root@undercloud-7 ~]# ip link ls dev br-ctlplane
  14: br-ctlplane:  mtu 1500 qdisc noqueue 
state UNKNOWN mode DEFAULT group default qlen 1000
  link/ether 52:54:00:ec:14:c2 brd ff:ff:ff:ff:ff:ff
  ~~~

  The neutron OVS agent issue can be reproduced manually:
  ~~~
  [root@undercloud-7 ~]# ovs-vsctl set bridge br-ctlplane 
other-config='{mac-table-size=5}'
  [root@undercloud-7 ~]# ovs-vsctl list bridge br-ctlplane | grep other
  other_config: {mac-table-size="5"}
  [root@undercloud-7 ~]# ip link ls dev br-ctlplane
  14: br-ctlplane:  mtu 1500 qdisc noqueue 
state 

[Yahoo-eng-team] [Bug 1790706] Re: Additional metadata service endpoints on OpenStack accessible

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1790706

Title:
  Additional metadata service endpoints on OpenStack accessible

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Note: I'm reporting this on behalf of our partner SAP. While the bug
  is about Newton, one of our neutron developers believes this may still
  be valid for newer versions: "The bug might be still valid in
  upstream, since there are no specific case where they are filtering
  based on the IP 169.254.169.254, since they are passing the same port
  as such."

  # Setup:
  OpenStack Newton with `force_metadata = true` on all network nodes
  Kubernetes Gardener setup (seed+shoot) on OpenStack

  # Detail description from the hacker simulation:

  By running a nmap -sn … scan (ping scan) we discovered several
  endpoints in the shoot network (apart from the nodes that can be seen
  from `kubectl --kubeconfig myshoot.kubeconfig get nodes`). We noticed
  that some of the endpoints also serve meta and user data on port 80,
  i.e. the metadata service is not only available from the well-known
  metadata service IP (http://169.254.169.254/…,
  https://docs.openstack.org/nova/latest/user/metadata-service.html) but
  also from those other addresses. In our test the endpoints were
  10.250.0.2-7.
  We learned that the
  endpoints probably are the OpenStack DHCP nodes, i.e. every OpenStack
  DHCP endpoint appears to also serve the metadata.
  While the accessibility of the metadata service is a known problem,
  this situation is “worse” (compared to e.g. Gardener seed and shoot
  clusters on AWS) for the following reasons:
  1. If a network policy is applied to block access from cluster payloads
  to the metadata service, it’s not enough to block well-known
  `169.254.169.254` but it must also block all access to all other
  existing endpoints. How can the definite set of endpoints be
  determined? Are they guaranteed to not change during the lifetime of a
  cluster?
  2. If the metadata service is only accessible via 169.254.169.254, the
  known `kubectl proxy` issue (details can be shared if needed)
  cannot be used to get access to the metadata service, as the
  link-local 169.254.0.0/16 address range is not allowed by the Kubernetes API 
server
  as an endpoint address. But for example 10.250… is allowed, i.e. a shoot user 
on
  OpenStack can use the attack to get access to the metadata service in
  the seed network.
  The fact that no fix is in sight for the `kubectl proxy` issue and it
  might not be patchable poses an additional risk regarding 2. We will
  try to follow up on that with the Kubernetes security team once again.

  # Detail information:
  Due to the “force_metadata” setting the DHCP namespaces are exposing the 
metadata service:

  # ip netns exec qdhcp-54ad9fe0-2ce5-4083-a32b-ca744e806d1f netstat -tulpen
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address           Foreign Address  State   User  Inode       PID/Program name
  tcp   0      0      0.0.0.0:80              0.0.0.0:*        LISTEN  0     1934832519  54198/python
  tcp   0      0      10.222.0.3:53           0.0.0.0:*        LISTEN  0     1934920972  54135/dnsmasq
  tcp   0      0      169.254.169.254:53      0.0.0.0:*        LISTEN  0     1934920970  54135/dnsmasq
  tcp   0      0      fe80::f816:3eff:fe01:53 :::*             LISTEN  198   1934909191  54135/dnsmasq
  udp   0      0      10.222.0.3:53           0.0.0.0:*                0     1934920971  54135/dnsmasq
  udp   0      0      169.254.169.254:53      0.0.0.0:*                0     1934920969  54135/dnsmasq
  udp   0      0      0.0.0.0:67              0.0.0.0:*                0     1934920966  54135/dnsmasq
  udp   0      0      fe80::f816:3eff:fe01:53 :::*                     198   1934909190  54135/dnsmasq

  The problem is that the metadata proxy is listening on 0.0.0.0:80 instead of
  169.254.169.254:80. This lets the metadata service also respond on the DHCP
  port IP addresses, which cannot easily be blocked.
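
  A hedged illustration (plain sockets, not the actual neutron wsgi server) of
  what the two bind addresses mean:

    import socket

    def make_listener(bind_addr, port=80):
        # '0.0.0.0' listens on every local address, including the DHCP port
        # IPs; '169.254.169.254' restricts the proxy to the well-known
        # metadata address only.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((bind_addr, port))
        s.listen(128)
        return s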

  This fix mitigated the problem:
  --- neutron.org/agent/metadata/namespace_proxy.py   2018-08-31 12:42:25.901681939 +
  +++ neutron/agent/metadata/namespace_proxy.py   2018-08-31 12:43:17.541826180 +
  @@ -130,7 +130,7 @@
                                self.router_id)
           proxy = wsgi.Server('neutron-network-metadata-proxy',
                               num_threads=self.proxy_threads)
  -        proxy.start(handler, self.port)
  +        proxy.start(handler, self.port, '169.254.169.254')

           # Drop privileges after port bind
           super(ProxyDaemon, self).run()

To manage notifications 

[Yahoo-eng-team] [Bug 1776778] Re: Floating IPs broken after kernel upgrade to Centos/RHEL 7.5 - DNAT not working

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1776778

Title:
  Floating IPs broken after kernel upgrade to Centos/RHEL 7.5 - DNAT not
  working

Status in neutron:
  Invalid

Bug description:
  Since upgrading to CentOS 7.5 (with kernel 3.10.0-862), floating IP
  functionality has been completely broken. Packets arrive inbound to
  qrouter from the fip namespace via the rfp interface, but are not
  DNAT'd or routed, as we see nothing going out the qr- interface.
  Outbound packets leaving the VM are fine, but all responses are again
  dropped inbound to qrouter after arriving on rfp. It appears the DNAT
  rules in the "-t nat" iptables within qrouter are not being hit
  (packet counters are zero).

  SNAT functionality works when we remove the floating IP from the VM
  (the VM can then ping outbound). So the problem seems isolated to DNAT /
  qrouter receiving packets from fip.

  We are able to reproduce this 100% consistently, whenever we update
  our working centos 7.2 / centos 7.4 hosts to 7.5. Nothing changes
  except a "yum update". All routes, rules, iptables are identical on a
  working older host vs. broken centos 7.5 host.

  I added some basic rules to log packets at the top of the PREROUTING
  chain in the raw, mangle, and nat tables, filtering either by my source
  IP or all packets on the rfp ingress interface. While the packet
  counters increment for raw and mangle, they remain at 0 for nat,
  indicating the nat table is not traversed during PREROUTING.
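
  A sketch of that diagnostic (namespace and interface names are taken from
  the output below; the LOG rules themselves are my own assumed form):

    import subprocess

    NETNS = "qrouter-f48d5536-eefa-4410-b17b-1b3d14426323"

    def add_log_rule(table, iface="rfp-f48d5536-e"):
        # Insert a LOG rule at the top of PREROUTING in the given table so
        # its packet counter shows whether that table is traversed at all.
        subprocess.run(["ip", "netns", "exec", NETNS,
                        "iptables", "-t", table, "-I", "PREROUTING", "1",
                        "-i", iface, "-j", "LOG",
                        "--log-prefix", "%s-prerouting: " % table],
                       check=True)

    for table in ("raw", "mangle", "nat"):
        add_log_rule(table)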

  Floating IP = 10.8.17.52, Fixed IP = 192.168.94.9.

  [root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 tcpdump -l -evvvnn -i 
rfp-f48d5536-e
  tcpdump: listening on rfp-f48d5536-e, link-type EN10MB (Ethernet), capture 
size 262144 bytes
  13:42:00.345440 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 
(0x0800), length 98: (tos 0x0, ttl 62, id 1832, offset 0, flags [DF], proto 
ICMP (1), length 84)
  10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 1, length 64
  13:42:01.344047 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 
(0x0800), length 98: (tos 0x0, ttl 63, id 1833, offset 0, flags [DF], proto 
ICMP (1), length 84)
  10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 2, length 64
  13:42:02.398300 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 
(0x0800), length 98: (tos 0x0, ttl 63, id 1834, offset 0, flags [DF], proto 
ICMP (1), length 84)
  10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 3, length 64
  13:42:03.344345 7a:3b:f1:c7:5d:4e > aa:24:89:9e:c8:f0, ethertype IPv4 
(0x0800), length 98: (tos 0x0, ttl 63, id 1835, offset 0, flags [DF], proto 
ICMP (1), length 84)
  10.4.165.22 > 10.8.17.52: ICMP echo request, id 5771, seq 4, length 64
  ^C
  4 packets captured
  4 packets received by filter
  0 packets dropped by kernel
  [root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 tcpdump -l -evvvnn -i 
qr-295f9857-21
  tcpdump: listening on qr-295f9857-21, link-type EN10MB (Ethernet), capture 
size 262144 bytes

  ***CRICKETS***

  [root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip a
  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group 
default qlen 1000
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
     valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host
     valid_lft forever preferred_lft forever
  2: rfp-f48d5536-e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UP group default qlen 1000
  link/ether aa:24:89:9e:c8:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
  inet 169.254.106.114/31 scope global rfp-f48d5536-e
     valid_lft forever preferred_lft forever
  inet6 fe80::a824:89ff:fe9e:c8f0/64 scope link
     valid_lft forever preferred_lft forever
  59: qr-295f9857-21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
state UNKNOWN group default qlen 1000
  link/ether fa:16:3e:3d:f1:12 brd ff:ff:ff:ff:ff:ff
  inet 192.168.94.1/24 brd 192.168.94.255 scope global qr-295f9857-21
     valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe3d:f112/64 scope link
     valid_lft forever preferred_lft forever

  [root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip route
  169.254.106.114/31 dev rfp-f48d5536-e proto kernel scope link src 
169.254.106.114
  192.168.94.0/24 dev qr-295f9857-21 proto kernel scope link src 192.168.94.1
  [root@centos7-neutron-template ~]# ip netns exec 
qrouter-f48d5536-eefa-4410-b17b-1b3d14426323 ip rule
  0:from all lookup local
  32766:from all lookup main
  32767:from all lookup default
  57481:from 192.168.94.9 lookup 16
  3232259585:   from 192.168.94.1/24 lookup 3232259585
  [root@centos7-neutron-template ~]# ip netns exec 

[Yahoo-eng-team] [Bug 1774257] Re: neutron-openvswitch-agent RuntimeError: Switch connection timeout

2023-01-23 Thread Brian Haley
There have been a number of changes to the agent code since this bug was
opened, and since we have not seen this reported elsewhere I'm going to
close it. Please re-open if you have more data and see it again.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1774257

Title:
  neutron-openvswitch-agent RuntimeError: Switch connection timeout

Status in neutron:
  Invalid

Bug description:
  In neutron-openvswitch-agent.log I see a lot of timeout messages:

    RuntimeError: Switch connection timeout

  These timeouts sometimes prevent neutron-openvswitch-agent from coming UP.
  We are running Pike and we have ~1000 ports in Open vSwitch.

  I'm able to run ovs-vsctl, ovs-ofctl, etc. commands, which means that
  Open vSwitch (vswitchd+db) is working fine.

  This is the full TRACE of neutron-openvswitch-agent log:

  2018-05-30 19:22:42.353 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: receive error: Connection reset by peer
  2018-05-30 19:22:42.358 7 WARNING ovsdbapp.backend.ovs_idl.vlog [-] 
tcp:127.0.0.1:6640: connection dropped (Connection reset by peer)
  2018-05-30 19:24:17.626 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[req-3c335d47-9b3e-4f18-994b-afca7d7d70be - - - - -] Switch connection timeout: 
RuntimeError: Switch connection timeout
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-3c335d47-9b3e-4f18-994b-afca7d7d70be - - - - -] Error while processing VIF 
ports: RuntimeError: Switch connection timeout
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent Traceback (most 
recent call last):
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2066, in rpc_loop
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
ofport_changed_ports = self.update_stale_ofport_rules()
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/osprofiler/profiler.py", 
line 153, in wrapper
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent return 
f(*args, **kwargs)
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1210, in update_stale_ofport_rules
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.int_br.delete_arp_spoofing_protection(port=ofport)
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/br_int.py",
 line 255, in delete_arp_spoofing_protection
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent match=match)
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py",
 line 111, in uninstall_flows
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent (dp, ofp, 
ofpp) = self._get_dp()
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_bridge.py",
 line 67, in _get_dp
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self._cached_dpid = new_dpid
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", 
line 220, in __exit__
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
self.force_reraise()
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent   File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/oslo_utils/excutils.py", 
line 196, in force_reraise
  2018-05-30 19:24:17.628 7 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
six.reraise(self.type_, self.value, self.tb)
  2018-05-30 19:24:17.628 7 ERROR 

[Yahoo-eng-team] [Bug 1773551] Re: Error loading interface driver 'neutron.agent.linux.interface.BridgeInterfaceDriver'

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1773551

Title:
  Error loading interface driver
  'neutron.agent.linux.interface.BridgeInterfaceDriver'

Status in neutron:
  Invalid

Bug description:
  ERROR message below:

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime [-]
  Error loading class by alias: NoMatches: No 'neutron.interface_drivers'
  driver found, looking for
  'neutron.agent.linux.interface.BridgeInterfaceDriver'

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime
  Traceback (most recent call last):

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime File
  "/usr/lib/python2.7/dist-packages/neutron_lib/utils/runtime.py" , line
  46, in load_class_by_alias_or_classname

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime
  namespace, name, warn_on_missing_entrypoint=False)

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime File
  "/usr/lib/python2.7/dist-packages/stevedore/driver.py", line 61 , in
  __init__

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime
  warn_on_missing_entrypoint=warn_on_missing_entrypoint

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime File
  "/usr/lib/python2.7/dist-packages/stevedore/named.py", line 89, in
  __init__

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime
  self._init_plugins(extensions)

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime File
  "/usr/lib/python2.7/dist-packages/stevedore/driver.py", line 11 3, in
  _init_plugins

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime
  (self.namespace, name))

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime
  NoMatches: No 'neutron.interface_drivers' driver found, looking for
  'neutron.agent.linux.interface.BridgeInterfaceDriver'

  2018-05-27 00:00:59.541 11767 ERROR neutron_lib.utils.runtime

  2018-05-27 00:00:59.542 11767 ERROR neutron_lib.utils.runtime [-]
  Error loading class by class name: ImportError: Class
  BridgeInterfaceDriver cannot be found (['Traceback (most recent call
  last):\n', ' File
  "/usr/local/lib/python2.7/dist-packages/oslo_utils/importutils.py",
  line 32, in import_class\n return
  getattr(sys.modules[mod_str], class_str)\n', "AttributeError: 'module'
  object has no attribute 'BridgeInterfaceDriver'\n"])

  2018-05-27 00:00:59.542 11767 ERROR neutron_lib.utils.runtime
  Traceback (most recent call last):

  2018-05-27 00:00:59.542 11767 ERROR neutron_lib.utils.runtime File
  "/usr/lib/python2.7/dist-packages/neutron_lib/utils/runtime.py" , line
  52, in load_class_by_alias_or_classname 2018-05-27 00:00:59.542 11767
  ERROR neutron_lib.utils.runtime class_to_load =
  importutils.import_class(name) 2018-05-27 00:00:59.542 11767 ERROR
  neutron_lib.utils.runtime File "/usr/local/lib/python2.7/dist-
  packages/oslo_utils/importutils. py", line 36, in import_class

  2018-05-27 00:00:59.542 11767 ERROR neutron_lib.utils.runtime
  traceback.format_exception(*sys.exc_info(

  2018-05-27 00:00:59.542 11767 ERROR neutron_lib.utils.runtime
  ImportError: Class BridgeInterfaceDriver cannot be found (['Traceback
  (most recent call last):\n', ' File "/usr/local/lib/python2.7/dist-
  packages/oslo_utils/importutils.py", line 32, in import_class\n return
  getattr(sys.modules[mod_str], class_str)\n', "AttributeError: 'module'
  object has no attribute 'BridgeInterfaceDriver'\n"])

  2018-05-27 00:00:59.542 11767 ERROR neutron_lib.utils.runtime

  2018-05-27 00:00:59.542 11767 ERROR neutron.agent.common.utils [-]
  Error loading interface driver
  'neutron.agent.linux.interface.BridgeInterfaceDriver'

  It had worked well for a long time after the first install. When I did
  a system update, this issue appeared.

  I have checked the source as well, and it is correct.

  I did a test like the one below, and it works well:

  # python
  Python 2.7.12 (default, Dec  4 2017, 14:50:18) 
  [GCC 5.4.0 20160609] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import_str = "neutron.agent.linux.interface.BridgeInterfaceDriver"
  >>> mod_str, _sep, class_str = import_str.rpartition('.')
  >>> mod_str
  'neutron.agent.linux.interface'
  >>> class_str
  'BridgeInterfaceDriver'
  >>> __import__(mod_str)
  <module 'neutron.agent.linux.interface' from '...'>
  >>> import sys
  >>> sys.modules[mod_str]
  <module 'neutron.agent.linux.interface' from '...'>
  >>> a=getattr(sys.modules[mod_str],class_str)
  >>> a
  <class 'neutron.agent.linux.interface.BridgeInterfaceDriver'>
  >>> exit()

  Why did this issue happen? Can anyone help me?
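
  For context, a hedged sketch of the two-step lookup that the failing
  helper performs (simplified; the real code is
  neutron_lib.utils.runtime.load_class_by_alias_or_classname):

    from oslo_utils import importutils
    from stevedore import driver

    def load_interface_driver(name):
        # First try the name as a stevedore alias in the entry-point
        # namespace, then fall back to importing a dotted class path.
        try:
            return driver.DriverManager('neutron.interface_drivers',
                                        name).driver
        except Exception:
            return importutils.import_class(name)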

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1773551/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1773827] Re: Sort flavors with tenant_id return 500

2023-01-23 Thread Brian Haley
Seems this has been fixed, closing.

** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1773827

Title:
  Sort flavors with tenant_id return 500

Status in neutron:
  Invalid

Bug description:
  I tried to list flavors with 'tenant_id' as a sort key and the neutron
  server returned a 500 response. I expected the response to be a 4xx.

$ openstack network flavor create --service-type L3_ROUTER_NAT dummy-flavor
$ openstack network flavor create --service-type L3_ROUTER_NAT dummy-flavor2
$ curl -g -i -X GET 
"http://127.0.0.1:9696/v2.0/flavors?sort_dir=asc_key=tenant_id; -H 
"Accept: application/json" -H "X-Auth-Token: $TOKEN"
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-918c6f4d-97c8-4ebf-8d16-35621eac0cb8
Date: Tue, 05 Jun 2018 19:05:55 GMT

{"NeutronError": {"message": "Request Failed: internal server error while 
processing your request.", "type": "HTTPInternalServerError", "detail": ""}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1773827/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1771493] Re: Add option to hide ipaddress in neutron logs

2023-01-23 Thread Brian Haley
I'm going to close this for a couple of reasons.

1) It doesn't seem like this is specific to neutron, so it's not the
correct place to make such a change.

2) The neutron code has no concept of a particular person, everything is
just a project ID, which isn't an identifiable piece of information
without another mapping, for example, from Keystone.

3) At the end of the day, an operation such as filtering IP addresses
from logs seems like something for an operator, and something they would
have to do for a lot more than just this.

If there is a clearer document on what should be done here please update
this bug with more information.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1771493

Title:
  Add option to hide ipaddress in neutron logs

Status in neutron:
  Won't Fix

Bug description:
  As some might know, the EU has passed a new law, in effect from May
  25th 2018, that prohibits misuse of any personal data corresponding to
  a natural person. Any information that can directly or indirectly lead
  to tracking of a natural person can be captured, stored or processed
  only with the consent of that person. An IP address is categorized as
  one such piece of data. It can be debated whether an IP address should
  be classified as personal data or not, but that is beyond the scope of
  this defect.

  The log statements below, from the neutron log, display the fixed IPs
  associated with the VMs provisioned. VMs provisioned from a cloud
  platform like OpenStack could host someone's website and thus could be
  used to identify a natural person. Having said that, this information
  (the logged IP) can be very useful from a serviceability perspective.
  So the question is: can a mechanism be added to configure whether this
  information is logged or not?

  2018-05-10 03:50:01.157 18683 INFO neutron.wsgi
  [req-b7b52f32-bbde-41b2-b882-707d63729256
  a1e569eb16f0ec710b82314e31af4f8cfb1eedc3f0fb38554186e08717c21f0c
  676da0962c9e48c687312f1a023af9ca - 96c9c4469e0b499e8c14043aa093b5bd
  96c9c4469e0b499e8c14043aa093b5bd] 10.253.234.23,127.0.0.1 "GET
  
/v2.0/floatingips?fixed_ip_address=162.42.34.10_id=41a1dff7-d5f5-43eb-a911-a594c4576f6a
  HTTP/1.1" status: 200  len: 217 time: 0.0247259

  2018-05-10 03:50:02.049 18683 INFO neutron.wsgi
  [req-879d19ac-7b6f-4af0-b8b0-48d259f20ae7
  a1e569eb16f0ec710b82314e31af4f8cfb1eedc3f0fb38554186e08717c21f0c
  676da0962c9e48c687312f1a023af9ca - 96c9c4469e0b499e8c14043aa093b5bd
  96c9c4469e0b499e8c14043aa093b5bd] 10.253.234.23,127.0.0.1 "GET
  
/v2.0/floatingips?fixed_ip_address=162.42.34.10_id=41a1dff7-d5f5-43eb-a911-a594c4576f6a
  HTTP/1.1" status: 200  len: 217 time: 0.0201731

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1771493/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1772874] Re: attr_ops.verify_attributes() wrongly reject binding:host_id in _fixup_res_dict()

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1772874

Title:
  attr_ops.verify_attributes() wrongly reject binding:host_id in
  _fixup_res_dict()

Status in neutron:
  Invalid

Bug description:
  While testing puppet-openstack and Neutron with Debian packages (i.e.
  running with Stretch and Queens), I had to run neutron-api under
  uwsgi, as /usr/bin/neutron-server would not work with Eventlet +
  Python 3 (which is famously broken). Therefore, I did a setup with
  uwsgi and a separate neutron-rpc-server.

  Then I tried spawning an instance, and I got some issues in the rpc-
  server:

   [req-4b2c2379-78ef-437c-b08a-bd8b309fa0b0 - - - - -] Exception during 
message handling: ValueError: Unrecognized attribute(s) 'binding:host_id'
   Traceback (most recent call last):
 File "/usr/lib/python3/dist-packages/neutron/plugins/common/utils.py", 
line 162, in _fixup_res_dict
   attr_ops.verify_attributes(res_dict)
 File "/usr/lib/python3/dist-packages/neutron_lib/api/attributes.py", line 
200, in verify_attributes
   raise exc.HTTPBadRequest(msg)
   webob.exc.HTTPBadRequest: Unrecognized attribute(s) 'binding:host_id'

   During handling of the above exception, another exception occurred:

   Traceback (most recent call last):
 File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 
163, in _process_incoming
   res = self.dispatcher.dispatch(message)
 File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 220, in dispatch
   return self._do_dispatch(endpoint, method, ctxt, args)
 File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", 
line 190, in _do_dispatch
   result = func(ctxt, **new_args)
 File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 
226, in inner
   return func(*args, **kwargs)
 File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 91, in 
wrapped
   setattr(e, '_RETRY_EXCEEDED', True)
 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
   self.force_reraise()
 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
   six.reraise(self.type_, self.value, self.tb)
 File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
   raise value
 File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 87, in 
wrapped
   return f(*args, **kwargs)
 File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 147, in wrapper
   ectxt.value = e.inner_exc
 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
   self.force_reraise()
 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
   six.reraise(self.type_, self.value, self.tb)
 File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
   raise value
 File "/usr/lib/python3/dist-packages/oslo_db/api.py", line 135, in wrapper
   return f(*args, **kwargs)
 File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 126, in 
wrapped
   LOG.debug("Retry wrapper got retriable exception: %s", e)
 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
   self.force_reraise()
 File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
   six.reraise(self.type_, self.value, self.tb)
 File "/usr/lib/python3/dist-packages/six.py", line 693, in reraise
   raise value
 File "/usr/lib/python3/dist-packages/neutron/db/api.py", line 122, in 
wrapped
   return f(*dup_args, **dup_kwargs)
 File "/usr/lib/python3/dist-packages/neutron/quota/resource_registry.py", 
line 99, in wrapper
   ret_val = f(_self, context, *args, **kwargs)
 File 
"/usr/lib/python3/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 
271, in create_dhcp_port
   return self._port_action(plugin, context, port, 'create_port')
 File 
"/usr/lib/python3/dist-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 98, 
in _port_action
   return p_utils.create_port(plugin, context, port)
 File "/usr/lib/python3/dist-packages/neutron/plugins/common/utils.py", 
line 189, in create_port
   check_allow_post=check_allow_post)
 File "/usr/lib/python3/dist-packages/neutron/plugins/common/utils.py", 
line 166, in _fixup_res_dict
   raise ValueError(e.detail)
   ValueError: Unrecognized attribute(s) 'binding:host_id'

  FYI, the content of the res_dict variable before calling
  attr_ops.verify_attributes(res_dict) is as follow (I added a
  LOG.debug() to find out):

  res_dict var: {'device_owner': 'network:dhcp', 'network_id':
  '5fa58f3a-3a72-4d5a-a781-dca20d882007', 'fixed_ips': [{'subnet_id':
  '85a0b153-fcd8-418d-90c2-7d0140431d61'}], 

[Yahoo-eng-team] [Bug 1773046] Re: IPAM returns random ip address instead of next IP from subnet pool

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1773046

Title:
  IPAM returns random ip address instead of next IP from subnet pool

Status in neutron:
  Won't Fix

Bug description:
  When a new VM is spawned with no IP address in the network request,
  neutron allocates a new IP address from the corresponding subnet pool,
  and it does not pick the next available IP: instead of
  PreferNextAddressRequest, AnyAddressRequest is set for the ip_request.

  This is because AddressRequestFactory.get_request() in
  neutron/ipam/request.py returns AnyAddressRequest under its final else
  clause. It would be preferable for it to return
  PreferNextAddressRequest, so that IP addresses are allocated in a
  sequential pattern.
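
  A hedged, simplified sketch of the shape of that dispatch (not neutron's
  actual request.py, just the reported behaviour):

    class AnyAddressRequest(object):
        """Allocate any free address from the pool."""

    class PreferNextAddressRequest(AnyAddressRequest):
        """Prefer the next free address in sequence."""

    class SpecificAddressRequest(object):
        def __init__(self, address):
            self.address = address

    def get_request(ip_dict):
        if ip_dict.get('ip_address'):
            return SpecificAddressRequest(ip_dict['ip_address'])
        # The final else clause the report points at: returning
        # AnyAddressRequest() yields random allocation, while
        # PreferNextAddressRequest() would give sequential allocation.
        return AnyAddressRequest()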

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1773046/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1766812] Re: the machine running dhcp agent will have very high cpu load when start dhcp agent after the agent down more than 150 seconds

2023-01-23 Thread Brian Haley
The DHCP agent was changed to use a GreenPool of threads, and will
dynamically increase their number, as of commit
7369b69e2ef5b1b3c30b237885c2648c63f1dffb. This is a similar change to the
one presented in the linked patch. For this reason I'll mark this bug fixed.
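
A rough sketch (my own illustration, not the committed agent code) of the
GreenPool pattern that commit refers to: a bounded pool of green threads
whose size is grown as the resync backlog grows.

  import eventlet
  eventlet.monkey_patch()

  pool = eventlet.GreenPool(size=8)

  def resync_network(net_id):
      pass  # stand-in for the per-network DHCP resync work

  def grow_for_backlog(pool, backlog, cap=64):
      # Resize the pool upward when many networks are queued at once.
      if backlog > pool.size:
          pool.resize(min(backlog, cap))

  networks = ["net-%d" % i for i in range(200)]
  grow_for_backlog(pool, len(networks))
  for net in networks:
      pool.spawn_n(resync_network, net)
  pool.waitall()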

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1766812

Title:
  the machine running dhcp agent will have very high cpu load when start
  dhcp agent after the agent down more than 150 seconds

Status in neutron:
  Fix Released

Bug description:
  This issue can be reproduced by following these steps:

  OpenStack Ocata release, CentOS 7.2

  1. two dhcp agent nodes
  2. neutron-server side config allow_automatic_dhcp_failover is True and 
dhcp_agents_per_network is 2
  3. create a lot of networks, each with one subnet; I created 200. The more 
networks, the higher the CPU load on the dhcp agent node, and the longer the 
high CPU load lasts
  4. stop one dhcp agent, and wait more than 150s (agent_down_time * 2). It is 
best to check the distribution of networks on the two dhcp agent nodes. 
Neutron-server will remove the networks from the dead dhcp agent after 150s; 
it is better to wait until all the networks have been removed from the dead 
dhcp agent in the DB. So with 200 networks, you can do the next step after 
more than 5 minutes.
  5. start the dhcp agent again, and use top to check the CPU usage; after a 
while, you will see very high CPU load.

  If you have the RabbitMQ web UI: after step 5, the dhcp agent starts
  syncing the networks while its consumer has not been created yet.
  Neutron-server finds that the dhcp agent is active and reschedules
  networks to it, and the messages pile up on the dhcp agent side. After
  the dhcp agent finishes syncing networks, its consumer is created and
  consumes the queued messages. As it works through that backlog, the CPU
  load of the dhcp agent node becomes higher and higher.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1766812/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1750353] Re: _get_changed_synthetic_fields() does not guarantee returned fields to be updatable

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1750353

Title:
  _get_changed_synthetic_fields() does not guarantee returned fields to
  be updatable

Status in neutron:
  Fix Released

Bug description:
  While revising [1], I discovered an issue with
  _get_changed_synthetic_fields(): it does not guarantee that the
  returned fields are updatable.

  How to reproduce:
   Set a breakpoint in [2] and then run 
neutron.tests.unit.objects.test_ports.DistributedPortBindingIfaceObjTestCase.test_update_updates_from_db_object,
 the returned fields are
  -> return fields
  (Pdb) fields
  {'host': u'c2753a12ec', 'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
  where 'host' and 'port_id' are not updatable.
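
  A tiny hedged sketch of the guarantee being asked for (an illustrative
  helper, not the actual neutron.objects.base code):

    def filter_updatable(changed_fields, updatable):
        # Keep only changed fields that may legally be passed to update().
        return {k: v for k, v in changed_fields.items() if k in updatable}

    # With the values from the pdb session above, neither field updatable:
    fields = {'host': u'c2753a12ec',
              'port_id': 'ae5700cd-f872-4694-bf36-92b919b0d3bf'}
    assert filter_updatable(fields, updatable=set()) == {}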

  [1] https://review.openstack.org/#/c/544206/
  [2] 
https://github.com/openstack/neutron/blob/master/neutron/objects/base.py#L696

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1750353/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1758353] Re: neutron - qr- and qg- interfaces lose their vlan tag

2023-01-23 Thread Brian Haley
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1758353

Title:
  neutron - qr- and qg- interfaces lose their vlan tag

Status in neutron:
  Invalid

Bug description:
  On a running instance, we have had several occurrences of network
  issues.

  During the last issue we noticed that the interfaces lost their vlan tags in 
openvswitch:
  ovs-vsctl show |grep qg-fb5a3595-48 -A 10
  Port "qg-fb5a3595-48"
  Interface "qg-fb5a3595-48"
  type: internal
  ---

  A complete restart of neutron-l3-agent caused a migration and now it works 
with a different vlan tag:
  ovs-vsctl show |grep -A 10 qg-fb5a3595-48
  Port "qg-fb5a3595-48"
  tag: 52
  Interface "qg-fb5a3595-48"
  type: internal
  ---

  How is this possible?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1758353/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1749030] Re: Filter by 'created_at' return 500

2023-01-23 Thread Brian Haley
Looks like this was fixed with
https://review.opendev.org/c/openstack/neutron/+/574907

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1749030

Title:
  Filter by 'created_at' return 500

Status in neutron:
  Fix Released

Bug description:
  The neutron server will return a 500 if we try to list ports with
  'created_at' as a filter. For example:

$ curl -g -i -X GET -H "X-Auth-Token: $TOKEN" 
"http://10.0.0.19:9696/v2.0/ports?created_at=test;
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 150
X-Openstack-Request-Id: req-78d586d8-2ad1-4303-89e9-e93ffe4229cb
Date: Mon, 12 Feb 2018 23:12:17 GMT

{"NeutronError": {"message": "Request Failed: internal server error
  while processing your request.", "type": "HTTPInternalServerError",
  "detail": ""}}

  IMO, the neutron server should return a 4xx error in this case.
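
  A hedged sketch of the validation being asked for (the filter whitelist
  here is an illustrative subset, not neutron's real list):

    VALID_PORT_FILTERS = {"name", "network_id", "status", "admin_state_up"}

    def validate_filters(params):
        # Unknown query parameters should produce a 4xx, never a 500.
        unknown = set(params) - VALID_PORT_FILTERS
        if unknown:
            raise ValueError("400 Bad Request: invalid filters %s"
                             % sorted(unknown))

    # validate_filters({"created_at": "test"}) -> raises the 400-style
    # error instead of letting the query crash with a 500.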

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1749030/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1725587] Re: rally create_and_list_floating_ips task fails with duplicate IpamAllocation

2023-01-20 Thread Brian Haley
I don't see this error in any recent rally runs, so will close. Please
re-open if you see it again.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1725587

Title:
  rally create_and_list_floating_ips task fails with duplicate
  IpamAllocation

Status in neutron:
  Invalid

Bug description:
  Description
  ===

  environment
  ---

  * RDO OpenStack Ocata
  * neutron conf:
   service_plugins = router
   ml2 mechanism_drivers = openvswitch
   external network ex-network with a subnet 111.0.0.0/16
  * rally 0.9.1
  create_and_list_floating_ips.json
  {
      "NeutronNetworks.create_and_list_floating_ips": [
          {
              "args": {
                  "floating_network": "ex-network",
                  "floating_ip_args": {}
              },
              "runner": {
                  "type": "constant",
                  "times": 500,
                  "concurrency": 100
              },
              "context": {
                  "users": {
                      "tenants": 2,
                      "users_per_tenant": 3
                  },
                  "quotas": {
                      "neutron": {
                          "floatingip": -1
                      }
                  }
              },
              "sla": {
                  "failure_rate": {
                      "max": 0
                  }
              }
          }
      ]
  }

  rally result
  

  Total durations
  Action                      Min (sec)  Median (sec)  90%ile (sec)  95%ile (sec)  Max (sec)  Avg (sec)  Success  Count
  neutron.create_floating_ip  4.2        18.946        71.561        91.803        131.663    29.769     94.2%    500
  -> neutron.list_networks    0.991      1.837         10.769        11.56         12.955     3.366      94.2%    500
  neutron.list_floating_ips   0.162      0.95          1.672         1.884         3.009      1.007      100.0%   471
  total                       4.396      20.196        71.896        92.886        131.663    30.717     94.2%    500
  -> duration                 4.396      20.196        71.896        92.886        131.663    30.717     94.2%    500
  -> idle_duration            0          0             0             0             0          0          94.2%    500

  html report here: https://pastebin.com/9bLJVhfS

  neutron-server log
  --

  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api 
[req-f4c8db2c-d80d-475a-b408-6eafd6701b62 07a6aba4b1a44dbc9453bfbe4a65 
bb0f198d854c440d8f7c0beb27eaae2b - - -] DB exceeded retry limit.
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api Traceback (most recent call 
last):
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/oslo_db/api.py", line 139, in wrapper
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api return f(*args, **kwargs)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 131, in wrapped
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api traceback.format_exc())
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api self.force_reraise()
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api six.reraise(self.type_, 
self.value, self.tb)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/neutron/db/api.py", line 126, in wrapped
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api return f(*dup_args, 
**dup_kwargs)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1192, in 
create_port
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api result, mech_context = 
self._create_port_db(context, port)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/plugin.py", line 1163, in 
_create_port_db
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api port_db = 
self.create_port_db(context, port)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/neutron/db/db_base_plugin_v2.py", line 1221, 
in create_port_db
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api context, port, port_id)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/neutron/db/ipam_pluggable_backend.py", line 
191, in allocate_ips_for_port_and_store
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api revert_on_fail=False)
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api self.force_reraise()
  2017-10-21 10:50:50.445 20020 ERROR oslo_db.api   

[Yahoo-eng-team] [Bug 1712065] Re: can we configure multiple external networks in newton or ocata release

2023-01-20 Thread Brian Haley
We are many releases past Ocata and this issue has already been
addressed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1712065

Title:
  can we configure multiple external networks in newton or ocata release

Status in neutron:
  Invalid

Bug description:
  http://blog.oddbit.com/2014/05/28/multiple-external-networks-wit/
   
  As per the above blog, we can configure multiple external networks on the 
neutron L3 agent. But will this also work on the newer releases of OpenStack 
(OVS with GRE)?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1712065/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1714416] Re: Incorrect response returned for invalid Accept header

2023-01-20 Thread Brian Haley
** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1714416

Title:
  Incorrect response returned for invalid Accept header

Status in Cinder:
  Won't Fix
Status in Glance:
  Invalid
Status in OpenStack Heat:
  Won't Fix
Status in OpenStack Identity (keystone):
  Won't Fix
Status in masakari:
  Won't Fix
Status in neutron:
  Won't Fix
Status in OpenStack Compute (nova):
  Won't Fix

Bug description:
  As of now, when a user passes an 'Accept' header other than JSON or XML
  in a request made with the curl command, it returns a 200 OK response
  with JSON-formatted data.

  The api-ref guide [1] also does not clearly state what response should
  be returned if an invalid value for the 'Accept' header is specified.
  IMO, instead of 'HTTP 200 OK' it should return an 'HTTP 406 Not
  Acceptable' response.
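
  A hedged sketch of the negotiation the reporter expects (not the actual
  middleware of any of the affected projects; SUPPORTED is illustrative):

    SUPPORTED = ("application/json", "application/xml", "*/*")

    def negotiate(accept_header):
        offered = [part.split(";")[0].strip()
                   for part in accept_header.split(",")]
        for media in offered:
            if media in SUPPORTED:
                return "application/json" if media == "*/*" else media
        raise ValueError("406 Not Acceptable: %s" % accept_header)

    # negotiate("application/abc") -> raises, i.e. HTTP 406, instead of
    # silently answering with JSON.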

  Steps to reproduce:
   
  Request:
  curl -g -i -X GET 
http://controller/volume/v2/c72e66cc4f1341f381e0c2eb7b28b443/volumes/detail -H 
"User-Agent: python-cinderclient" -H "Accept: application/abc" -H 
"X-Auth-Token: cd85aff745ce4dc0a04f686b52cf7e4f"
   
   
  Response:
  HTTP/1.1 200 OK
  Date: Thu, 31 Aug 2017 07:12:18 GMT
  Server: Apache/2.4.18 (Ubuntu)
  x-compute-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Content-Type: application/json
  Content-Length: 2681
  x-openstack-request-id: req-ab48db9d-f869-4eb4-95f9-ef8e90a918df
  Connection: close
   
  [1] 
https://developer.openstack.org/api-ref/block-storage/v2/#list-volumes-with-details

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1714416/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1717245] Re: openstack router set --external-gateway creates port without tenant-id

2023-01-20 Thread Brian Haley
** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1717245

Title:
  openstack router set --external-gateway creates port without tenant-id

Status in neutron:
  Invalid

Bug description:
  Adding an external gateway to a router creates a port without a
  tenant-id. It also creates a security group without a tenant-id, and
  its rules also lack a tenant-id.

  http://paste.openstack.org/show/621108/

  Relevant call from the log:
  REQ: curl -g -i -X PUT 
http://10.195.115.111:9696/v2.0/routers/a6f357f7-2383-4e77-9b8a-98a634214aeb -H 
"User-Agent: openstacksdk/0.9.13 keystoneauth1/2.18.0 python-requests/2.11.1 
CPython/2.7.5" -H "Content-Type: application/json" -H "X-Auth-Token: 
{SHA1}3f452ec3ac595bbba2e2a916349ce340aaa2eecb" -d '{"router": 
{"external_gateway_info": {"network_id": 
"bd8f1306-2a90-476e-b55c-01d327032d7c"}}}'

  RESP: [200] Content-Type: application/json Content-Length: 603 
X-Openstack-Request-Id: req-b992ecf6-4c17-4d3f-9cbc-be391cdb41d4 Date: Thu, 14 
Sep 2017 12:01:28 GMT 
  RESP BODY: {"router": {"status": "ACTIVE", "external_gateway_info": 
{"network_id": "bd8f1306-2a90-476e-b55c-01d327032d7c", "enable_snat": true, 
"external_fixed_ips": [{"subnet_id": "88112ad5-d671-4e60-b347-362401606389", 
"ip_address": "10.195.115.108"}]}, "description": "", "tags": [], "tenant_id": 
"94ef30f2b49f420bae3cae2761b9e8d9", "created_at": "2017-09-14T11:19:35Z", 
"admin_state_up": true, "distributed": false, "updated_at": 
"2017-09-14T12:01:28Z", "revision_number": 25, "routes": [], "project_id": 
"94ef30f2b49f420bae3cae2761b9e8d9", "id": 
"a6f357f7-2383-4e77-9b8a-98a634214aeb", "name": "test-router"}}

  The port needs to have a tenant-id. The security group is not used
  anywhere, so there's no point in creating it.

  Version: Ocata
  Linux distro: CentOS Linux release 7.3.1611
  Deployment: Apex from OPNFV, which uses RDO

  
  UPDATE: I've noticed this also happens to ports which correspond to floating 
ips:
  [root@overcloud-controller-0 ~]# neutron port-list
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  
+--+--+--+---+---+
  | id   | name | tenant_id 
   | mac_address   | fixed_ips  
   |
  
+--+--+--+---+---+
  | 0a068ffa-2188-4625-93bc-32fcb4900187 |  |   
   | fa:16:3e:f8:ec:ef | {"subnet_id": "88112ad5-d671-4e60-b347-362401606389", 
"ip_address": "10.195.115.100"} |
  | 12485af0-a9aa-484c-acf3-2ea74718015d |  | 
94ef30f2b49f420bae3cae2761b9e8d9 | fa:16:3e:f6:8c:9f | {"subnet_id": 
"8854242b-3306-402f-b1a3-d1fab2ecd781", "ip_address": "192.168.20.7"}   |
  | 46973789-8363-4e3b-9e66-59241d0a8224 |  |   
   | fa:16:3e:3a:d0:c8 | {"subnet_id": "88112ad5-d671-4e60-b347-362401606389", 
"ip_address": "10.195.115.103"} |
  | 47b7b6c9-6f35-4e1d-ad57-905351285138 |  | 
94ef30f2b49f420bae3cae2761b9e8d9 | fa:16:3e:5d:79:28 | {"subnet_id": 
"8854242b-3306-402f-b1a3-d1fab2ecd781", "ip_address": "192.168.20.11"}  |
  | 550d3f6d-967a-4087-b0a0-c3eacf5c588c |  | 
94ef30f2b49f420bae3cae2761b9e8d9 | fa:16:3e:f4:b2:00 | {"subnet_id": 
"8854242b-3306-402f-b1a3-d1fab2ecd781", "ip_address": "192.168.20.2"}   |
  | fe34fd72-1251-4887-8b14-dbfdc4d3f3a7 |  | 
94ef30f2b49f420bae3cae2761b9e8d9 | fa:16:3e:4b:8d:fb | {"subnet_id": 
"8854242b-3306-402f-b1a3-d1fab2ecd781", "ip_address": "192.168.20.1"}   |
  
+--+--+--+---+---+
  [root@overcloud-controller-0 ~]# neutron floatingip-list
  neutron CLI is deprecated and will be removed in the future. Use openstack 
CLI instead.
  
+--+--+--+-+--+
  | id   | tenant_id| 
fixed_ip_address | floating_ip_address | port_id  |
  
+--+--+--+-+--+
  | 2d02e69d-cc83-4f66-beee-d4f1899fccfb | 94ef30f2b49f420bae3cae2761b9e8d9 | 
192.168.20.7 | 10.195.115.100  | 12485af0-a9aa-484c-acf3-2ea74718015d |
  

[Yahoo-eng-team] [Bug 1703099] Re: Wiki Document Needs to be Updated

2023-01-20 Thread Brian Haley
Looking at the parent page, https://wiki.openstack.org/wiki/Neutron I
can see there is a note about the wiki being obsolete and to instead use
https://docs.openstack.org/neutron/latest/ For that reason I'll close
this since it would apply here as well.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1703099

Title:
  Wiki Document Needs to be Updated

Status in neutron:
  Won't Fix

Bug description:
  The OpenStack wiki for Neutron with DVR available at 
https://wiki.openstack.org/wiki/Neutron/DVR needs to be updated as it still 
refers to the Juno release.
  Under the "Executive Summary" section, here is what it says:

  "The sole objective of this document is to set the context behind why
  the Neutron community focused some of its efforts on improving certain
  routing capabilities offered by the open source framework, and what
  users can expect to see when they get their hands on its latest
  release, aka Juno."

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1703099/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1709104] Re: cloud-init with only ipv6 Failes to POST encrypted password

2023-01-20 Thread Brian Haley
As metadata over IPv6 is supported in the master branch, will close this
as there is a workaround now.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1709104

Title:
  cloud-init with only ipv6 Failes to POST encrypted password

Status in neutron:
  Won't Fix

Bug description:
  Release = newton

  When launching a Windows instance on an IPv6-only network, cloud-init
  fails to POST the encrypted (generated) password to the metadata
  service, probably because the metadata service is not available on
  networks that are not dual-stack/IPv4.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1709104/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

