[Yahoo-eng-team] [Bug 2059236] [NEW] Add a RBAC action field in the query hooks

2024-03-27 Thread Rodolfo Alonso
Public bug reported:

Any Neutron resource that is not a single database table but a view (a
combination of several tables) can register a set of hooks that are
used during the DB query creation [1]. These hooks include a query hook
(to modify the query depending on the database relationships), a filter
hook (to add extra filtering steps to the final query) and a results
filter hook (which can be used to join other tables with other
dependencies).

This bug proposes adding an extra field to these hooks in order to
filter by RBAC action. Some resources, like networks [2] and subnets
[3], need to add an extra RBAC action, "ACCESS_EXTERNAL", to the query
filter. This is currently done by adding again the same RBAC filter
already included in ``query_with_hooks`` [4], but with the
"ACCESS_EXTERNAL" action.

If, instead, ``query_with_hooks`` accepted a configurable set of RBAC
actions, the resulting query would be shorter, less complex and faster.

[1]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L86-L90
[2]https://github.com/openstack/neutron/blob/bcf1f707bc9169e8f701613214516e97f039d730/neutron/db/external_net_db.py#L75-L80
[3]https://review.opendev.org/c/openstack/neutron/+/907313/15/neutron/db/external_net_db.py
[4]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L127-L132
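The proposal above can be illustrated with a minimal, self-contained
sketch (the ``rbac_actions`` field and the registry layout are
hypothetical, not the actual neutron-lib API): the hook registration
carries the set of RBAC actions, and the query builder derives the
filter from it once instead of each resource re-adding the filter.

```python
# Hypothetical sketch of a hook registry carrying RBAC actions.
# Not the real neutron-lib API: "rbac_actions" is the proposed field.

ACCESS_SHARED = "access_as_shared"
ACCESS_EXTERNAL = "access_as_external"

_model_hooks = {}

def register_hook(model, name, query_hook=None, filter_hook=None,
                  result_filters=None, rbac_actions=None):
    # Default keeps today's behavior: only the "shared" action is filtered.
    _model_hooks.setdefault(model, {})[name] = {
        "query": query_hook,
        "filter": filter_hook,
        "result_filters": result_filters,
        "rbac_actions": set(rbac_actions or {ACCESS_SHARED}),
    }

def rbac_actions_for(model):
    # The query builder would use this set once, instead of resources
    # appending a second, duplicated RBAC filter for ACCESS_EXTERNAL.
    actions = set()
    for hook in _model_hooks.get(model, {}).values():
        actions |= hook["rbac_actions"]
    return actions or {ACCESS_SHARED}

# A resource such as external networks would register the extra action:
register_hook("network", "external_net",
              rbac_actions={ACCESS_SHARED, ACCESS_EXTERNAL})
```

With this, the RBAC filter in ``query_with_hooks`` would be built once
from ``rbac_actions_for(model)`` rather than duplicated per resource.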

** Affects: neutron
 Importance: Low
     Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2059236

Title:
  Add a RBAC action field in the query hooks

Status in neutron:
  New

Bug description:
  Any Neutron resource that is not a single database table but a view
  (a combination of several tables) can register a set of hooks that
  are used during the DB query creation [1]. These hooks include a
  query hook (to modify the query depending on the database
  relationships), a filter hook (to add extra filtering steps to the
  final query) and a results filter hook (which can be used to join
  other tables with other dependencies).

  This bug proposes adding an extra field to these hooks in order to
  filter by RBAC action. Some resources, like networks [2] and subnets
  [3], need to add an extra RBAC action, "ACCESS_EXTERNAL", to the
  query filter. This is currently done by adding again the same RBAC
  filter already included in ``query_with_hooks`` [4], but with the
  "ACCESS_EXTERNAL" action.

  If, instead, ``query_with_hooks`` accepted a configurable set of RBAC
  actions, the resulting query would be shorter, less complex and
  faster.

  
[1]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L86-L90
  
[2]https://github.com/openstack/neutron/blob/bcf1f707bc9169e8f701613214516e97f039d730/neutron/db/external_net_db.py#L75-L80
  
[3]https://review.opendev.org/c/openstack/neutron/+/907313/15/neutron/db/external_net_db.py
  
[4]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L127-L132

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2059236/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2057914] Re: [neutron-vpnaas-dashboard] UT failing due to a change in django

2024-03-15 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2057914

Title:
  [neutron-vpnaas-dashboard] UT failing due to a change in django

Status in neutron:
  Fix Released

Bug description:
  Since [1], present in django 4.1a1, the assertion
  ``assertQuerysetEqual`` no longer implicitly converts the querysets
  passed to it to strings. The unit tests need to be updated
  accordingly.

  
[1]https://github.com/django/django/commit/e2be307b3ab6ebf339b3a765fe64967c9602266f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2057914/+subscriptions




[Yahoo-eng-team] [Bug 2058006] [NEW] [RFE] Allow multiple segments per host in ML2/OVN

2024-03-15 Thread Rodolfo Alonso
Public bug reported:

This is a sibling bug of [1]. The aim is to provide for ML2/OVN the
same functionality that [1] provided for ML2/OVS: being able to
connect a single host to more than one routed provider network (RPN)
segment.

This RFE would require a spec (to be presented in 2024.2 Dalmatian).

[1]https://bugs.launchpad.net/neutron/+bug/1764738

** Affects: neutron
 Importance: Wishlist
 Status: New

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058006

Title:
  [RFE] Allow multiple segments per host in ML2/OVN

Status in neutron:
  New

Bug description:
  This is a sibling bug of [1]. The aim of this bug is to provide the
  same functionality as in [1] was provided to ML2/OVS but to ML2/OVN.
  The goal is to be able to connect a single host to more than one RPN
  segment.

  This RFE would require a spec (to be presented in 2024.2 Dalmatian).

  [1]https://bugs.launchpad.net/neutron/+bug/1764738

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058006/+subscriptions




[Yahoo-eng-team] [Bug 2057914] [NEW] [neutron-vpnaas-dashboard] UT failing due to a change in django

2024-03-14 Thread Rodolfo Alonso
Public bug reported:

Since [1], present in django 4.1a1, the assertion
``assertQuerysetEqual`` no longer implicitly converts the querysets
passed to it to strings. The unit tests need to be updated accordingly.

[1]https://github.com/django/django/commit/e2be307b3ab6ebf339b3a765fe64967c9602266f
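The required test change can be illustrated with a stand-in (this is
not django code; it only mimics the behavioral change in
``assertQuerysetEqual``): before django 4.1 the expected values were
implicitly compared as strings, while afterwards the test must pass an
explicit ``transform`` (or compare the objects directly).

```python
# Stand-in mimicking assertQuerysetEqual's behavior change (not django).

def assert_queryset_equal(qs, values, transform=None):
    # django >= 4.1a1: no implicit str() conversion; an explicit
    # transform must be given when comparing against strings.
    if transform is not None:
        qs = [transform(item) for item in qs]
    assert list(qs) == list(values), (qs, values)

class Router:
    def __init__(self, name):
        self.name = name
    def __str__(self):
        return self.name

qs = [Router("r1"), Router("r2")]

# Old-style tests relied on implicit str(); now transform=str is explicit:
assert_queryset_equal(qs, ["r1", "r2"], transform=str)

# Or the test can compare the objects themselves:
assert_queryset_equal(qs, qs)
```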

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2057914

Title:
  [neutron-vpnaas-dashboard] UT failing due to a change in django

Status in neutron:
  New

Bug description:
  Since [1], present in django 4.1a1, the assertion
  ``assertQuerysetEqual`` no longer implicitly converts the querysets
  passed to it to strings. The unit tests need to be updated
  accordingly.

  
[1]https://github.com/django/django/commit/e2be307b3ab6ebf339b3a765fe64967c9602266f

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2057914/+subscriptions




[Yahoo-eng-team] [Bug 2057770] [NEW] [neutron-dynamic-routing] LBaaS constants still present in older versions of Neutron

2024-03-13 Thread Rodolfo Alonso
Public bug reported:

Since [1], neutron-lib removed the LBaaS constants. These constants
were also removed from Neutron in [2]. The minimum required Neutron
version needs to be bumped to stay in sync with the latest neutron-lib
release.

[1]https://review.opendev.org/c/openstack/neutron-lib/+/902133
[2]https://review.opendev.org/c/openstack/neutron/+/902048

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2057770

Title:
  [neutron-dynamic-routing] LBaaS constants still present in older
  versions of Neutron

Status in neutron:
  In Progress

Bug description:
  Since [1], neutron-lib removed the LBaaS constants. These constants
  were also removed from Neutron in [2]. The minimum required Neutron
  version needs to be bumped to stay in sync with the latest
  neutron-lib release.

  [1]https://review.opendev.org/c/openstack/neutron-lib/+/902133
  [2]https://review.opendev.org/c/openstack/neutron/+/902048

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2057770/+subscriptions




[Yahoo-eng-team] [Bug 2056558] [NEW] ``OVNL3RouterPlugin._port_update`` can be called before the LRP is created in the OVN DB

2024-03-08 Thread Rodolfo Alonso
Public bug reported:

``OVNL3RouterPlugin._port_update`` [1] is called on the AFTER_UPDATE
event once the router port is created (for example, when a subnet is
attached to a router). This event is guaranteed to fire after the
Neutron DB has the resource (port) in the database. However, as the
code comment highlights, the event can fire before the OVN NB database
has the LRP resource created. The called method chain,
``update_router_port`` --> ``_update_router_port``, guarantees that the
LRP update is executed only when the LRP exists, but the LRP read [2]
does not have this safeguard.

This event should be replaced by an OVN DB event that checks the same
conditions as in [1] and guarantees that the LRP resource is already
created in the DB.

Example of this failure:
https://zuul.opendev.org/t/openstack/build/3f7935d7ed53473898bbf213e85dfb61/log/controller/logs/dsvm-functional-logs/ovn_octavia_provider.tests.functional.test_driver.TestOvnOctaviaProviderDriver.test_create_lb_custom_network/testrun.txt

[1]https://opendev.org/openstack/neutron/src/commit/e8468a6dd647fd62eac429417c7f382e8859b574/neutron/services/ovn_l3/plugin.py#L372-L381
[2]https://opendev.org/openstack/neutron/src/commit/e8468a6dd647fd62eac429417c7f382e8859b574/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1809-L1811
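A rough sketch of the proposed direction, using a stand-in for the
ovsdbapp row-event machinery (class and attribute names here are
illustrative, not the final implementation): react to the creation of
the Logical_Router_Port row in the OVN NB DB, so the handler only runs
once the LRP is guaranteed to exist.

```python
# Illustrative stand-in for an OVN NB DB row event (not ovsdbapp itself).

class RowEvent:
    ROW_CREATE = "create"

    def __init__(self, events, table, conditions):
        self.events, self.table, self.conditions = events, table, conditions

    def matches(self, event, table, row):
        return (event in self.events and table == self.table and
                all(getattr(row, k, None) == v
                    for k, v in self.conditions))

class LRPCreateEvent(RowEvent):
    """Fires only once the Logical_Router_Port row exists in the NB DB."""

    def __init__(self, lrp_name):
        super().__init__((self.ROW_CREATE,), "Logical_Router_Port",
                         (("name", lrp_name),))

    def run(self, row):
        # Safe point to update the LRP: the row is guaranteed to exist,
        # unlike the Neutron AFTER_UPDATE callback.
        return row.name

class FakeRow:
    def __init__(self, name):
        self.name = name

event = LRPCreateEvent("lrp-port1")
```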

** Affects: neutron
 Importance: Medium
 Status: New

** Description changed:

  ``OVNL3RouterPlugin._port_update`` [1] is called AFTER_UPDATE the router
  port is created (for example, when a subnet is attached to a router).
  This event is guaranteed to be called after the Neutron DB has the
  resource (port) in the database. However, as the code highlights in the
  comment, this event can be called before the OVN NB database has the LRP
  resource created. The called method, ``update_router_port`` -->
  ``_update_router_port``, guarantees that the LRP update is executed only
  when the LRP exists but the LRP read [2] does not have this
  consideration.
  
  This event should be replaced by an OVN DB event, checking the same
  conditions as in [1] and guaranteeing that the LRP resource is already
  created in the DB.
  
+ Example of this failure:
+ 
https://zuul.opendev.org/t/openstack/build/3f7935d7ed53473898bbf213e85dfb61/log/controller/logs/dsvm-
+ functional-
+ 
logs/ovn_octavia_provider.tests.functional.test_driver.TestOvnOctaviaProviderDriver.test_create_lb_custom_network/testrun.txt
+ 
  
[1]https://opendev.org/openstack/neutron/src/commit/e8468a6dd647fd62eac429417c7f382e8859b574/neutron/services/ovn_l3/plugin.py#L372-L381
  
[2]https://opendev.org/openstack/neutron/src/commit/e8468a6dd647fd62eac429417c7f382e8859b574/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1809-L1811

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2056558

Title:
  ``OVNL3RouterPlugin._port_update`` can be called before the LRP is
  created in the OVN DB

Status in neutron:
  New

Bug description:
  ``OVNL3RouterPlugin._port_update`` [1] is called on the AFTER_UPDATE
  event once the router port is created (for example, when a subnet is
  attached to a router). This event is guaranteed to fire after the
  Neutron DB has the resource (port) in the database. However, as the
  code comment highlights, the event can fire before the OVN NB
  database has the LRP resource created. The called method chain,
  ``update_router_port`` --> ``_update_router_port``, guarantees that
  the LRP update is executed only when the LRP exists, but the LRP read
  [2] does not have this safeguard.

  This event should be replaced by an OVN DB event that checks the same
  conditions as in [1] and guarantees that the LRP resource is already
  created in the DB.

  Example of this failure:
  https://zuul.opendev.org/t/openstack/build/3f7935d7ed53473898bbf213e85dfb61/log/controller/logs/dsvm-functional-logs/ovn_octavia_provider.tests.functional.test_driver.TestOvnOctaviaProviderDriver.test_create_lb_custom_network/testrun.txt

  
[1]https://opendev.org/openstack/neutron/src/commit/e8468a6dd647fd62eac429417c7f382e8859b574/neutron/services/ovn_l3/plugin.py#L372-L381
  
[2]https://opendev.org/openstack/neutron/src/commit/e8468a6dd647fd62eac429417c7f382e8859b574/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1809-L1811

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2056558/+subscriptions




[Yahoo-eng-team] [Bug 2056199] [NEW] ``DvrDriver`` and ``OvnDriver`` incorrectly define distributed flag

2024-03-05 Thread Rodolfo Alonso
Public bug reported:

The class ``L3ServiceProvider`` defines distributed support with the
class variable "distributed_support" [1]. The classes ``DvrDriver`` and
``OvnDriver`` use the variable "dvr_support" instead.

The method that validates a driver's "ha" and "distributed" support
also uses "distributed_support" [2].

[1]https://github.com/openstack/neutron/blob/c4c14f9589b54cc518564f1e1679898d2729c9e2/neutron/services/l3_router/service_providers/base.py#L57
[2]https://github.com/openstack/neutron/blob/c4c14f9589b54cc518564f1e1679898d2729c9e2/neutron/services/l3_router/service_providers/driver_controller.py#L273
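The mismatch can be sketched with simplified stand-ins for the Neutron
classes (names below mirror the report, the rest is illustrative): the
drivers must set ``distributed_support``, the attribute the controller
actually checks, rather than a ``dvr_support`` name the base class
never reads.

```python
# Simplified stand-ins for the L3 service provider classes.

class L3ServiceProvider:
    ha_support = False
    distributed_support = False   # the attribute the controller checks

class OvnDriver(L3ServiceProvider):
    # Bug: the real classes set "dvr_support = True", which nothing reads.
    # Fix: override the attribute the base class actually defines.
    distributed_support = True

def ensure_driver_supports(driver, distributed=False):
    # Sketch of the driver_controller check described in [2].
    if distributed and not driver.distributed_support:
        raise ValueError("driver does not support distributed routers")
    return True
```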

** Affects: neutron
     Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
     Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2056199

Title:
  ``DvrDriver`` and ``OvnDriver`` incorrectly define distributed flag

Status in neutron:
  In Progress

Bug description:
  The class ``L3ServiceProvider`` defines distributed support with the
  class variable "distributed_support" [1]. The classes ``DvrDriver``
  and ``OvnDriver`` use the variable "dvr_support" instead.

  The method that validates a driver's "ha" and "distributed" support
  also uses "distributed_support" [2].

  
[1]https://github.com/openstack/neutron/blob/c4c14f9589b54cc518564f1e1679898d2729c9e2/neutron/services/l3_router/service_providers/base.py#L57
  
[2]https://github.com/openstack/neutron/blob/c4c14f9589b54cc518564f1e1679898d2729c9e2/neutron/services/l3_router/service_providers/driver_controller.py#L273

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2056199/+subscriptions




[Yahoo-eng-team] [Bug 2055886] [NEW] [FT] Check port binding object in "test_virtual_port_host_update_upon_failover"

2024-03-04 Thread Rodolfo Alonso
Public bug reported:

Error:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_44f/898238/27/check/neutron-functional-with-uwsgi/44f78d4/testr_results.html

Snippet: https://paste.opendev.org/show/bDneDxmXx4AiBscSfrbE/

The port binding check failed because the "Port_Binding" row was not
created yet. The test should account for that possibility when checking
the port binding type.
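One way to make the functional test robust, sketched with a generic
poll helper (the helper and the fake DB below are stand-ins; Neutron's
test utilities provide a similar wait-until-true pattern): retry the
read until the Port_Binding row shows up instead of asserting
immediately.

```python
import time

def wait_until_true(predicate, timeout=5, sleep=0.05):
    # Stand-in for the usual "wait until condition" test helper.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(sleep)
    raise TimeoutError("condition not met within %ss" % timeout)

# Example: the Port_Binding row may appear slightly after port creation.
_fake_db = {}

def create_port_binding():
    _fake_db["Port_Binding"] = {"type": "virtual"}

create_port_binding()
wait_until_true(lambda: _fake_db.get("Port_Binding") is not None)
assert _fake_db["Port_Binding"]["type"] == "virtual"
```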

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055886

Title:
  [FT] Check port binding object in
  "test_virtual_port_host_update_upon_failover"

Status in neutron:
  New

Bug description:
  Error:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_44f/898238/27/check/neutron-
  functional-with-uwsgi/44f78d4/testr_results.html

  Snippet: https://paste.opendev.org/show/bDneDxmXx4AiBscSfrbE/

  The port binding check failed because the "Port_Binding" register was
  not created yet. We should consider that option when checking that
  port binding type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055886/+subscriptions




[Yahoo-eng-team] [Bug 2055561] [NEW] [stable-only][OVN] Configure VETH interface MAC address before setting UP the device

2024-03-01 Thread Rodolfo Alonso
Public bug reported:

In [1] a change was introduced, along with the implemented feature, to
set the MAC address of the metadata VETH interface before bringing the
device up. This change should be backported to the stable branches too.

[1]https://review.opendev.org/c/openstack/neutron/+/894026/12/neutron/agent/ovn/metadata/agent.py#710
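The ordering the backport enforces can be sketched with a recording
stand-in for the IP device wrapper (the ``Device`` class is
illustrative, not Neutron's ``IPDevice`` API): the MAC must be assigned
while the device is still down, before the link is set up.

```python
# Illustrative device wrapper that records the operation order.

class Device:
    def __init__(self, name):
        self.name, self.ops = name, []

    def set_address(self, mac):
        self.ops.append(("set_address", mac))

    def set_up(self):
        self.ops.append(("set_up",))

def configure_veth(dev, mac):
    # Fix from [1]: configure the MAC first, then bring the device up.
    dev.set_address(mac)
    dev.set_up()

veth = Device("veth0")
configure_veth(veth, "0a:00:00:00:00:01")
```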

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055561

Title:
  [stable-only][OVN] Configure VETH interface MAC address before setting
  UP the device

Status in neutron:
  New

Bug description:
  In [1] a change was introduced, along with the implemented feature, to
  set the MAC address of the metadata VETH interface before setting it
  up. This change should be backported to stable versions too.

  
[1]https://review.opendev.org/c/openstack/neutron/+/894026/12/neutron/agent/ovn/metadata/agent.py#710

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055561/+subscriptions




[Yahoo-eng-team] [Bug 2055173] [NEW] [netaddr>=1.0.0] Do not use netaddr.core.ZEROFILL flag with IPv6 addresses

2024-02-27 Thread Rodolfo Alonso
Public bug reported:

Using this flag with an IPv6 address raises the following exception:
>>> netaddr.IPAddress("200a::1", flags=netaddr.core.ZEROFILL)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/netaddr/ip/__init__.py", line 333, in __init__
    self._value = module.str_to_int(addr, flags)
  File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", line 120, in str_to_int
    addr = '.'.join(['%d' % int(i) for i in addr.split('.')])
  File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", line 120, in <listcomp>
    addr = '.'.join(['%d' % int(i) for i in addr.split('.')])
ValueError: invalid literal for int() with base 10: '200a::1'
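A sketch of the guard (using a stand-in constant so the example is
self-contained without netaddr installed; it mirrors
``netaddr.core.ZEROFILL``): apply ZEROFILL only to IPv4-style
addresses, since the IPv4 parser cannot handle strings like "200a::1".

```python
ZEROFILL = 1  # stand-in for netaddr.core.ZEROFILL

def ip_parse_flags(address):
    # ZEROFILL only makes sense for dotted-quad IPv4 strings; with
    # netaddr>=1.0.0 passing it alongside an IPv6 address raises
    # ValueError, so select the flag based on the address family.
    return 0 if ":" in address else ZEROFILL

assert ip_parse_flags("010.001.000.001") == ZEROFILL
assert ip_parse_flags("200a::1") == 0
```

The callers would then use ``netaddr.IPAddress(addr, flags=ip_parse_flags(addr))``.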

** Affects: neutron
     Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055173

Title:
  [netaddr>=1.0.0] Do not use netaddr.core.ZEROFILL flag with IPv6
  addresses

Status in neutron:
  In Progress

Bug description:
  Using this flag with an IPv6 address raises the following exception:
  >>> netaddr.IPAddress("200a::1", flags=netaddr.core.ZEROFILL)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/local/lib/python3.10/dist-packages/netaddr/ip/__init__.py", line 333, in __init__
      self._value = module.str_to_int(addr, flags)
    File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", line 120, in str_to_int
      addr = '.'.join(['%d' % int(i) for i in addr.split('.')])
    File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", line 120, in <listcomp>
      addr = '.'.join(['%d' % int(i) for i in addr.split('.')])
  ValueError: invalid literal for int() with base 10: '200a::1'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055173/+subscriptions




[Yahoo-eng-team] [Bug 2055045] [NEW] [OVN] "LRP.external_ids" router name is incorrect, we are setting the ID only

2024-02-26 Thread Rodolfo Alonso
Public bug reported:

In [1], we are setting the router_id instead of the router name (which
should be the router ID with the "neutron-" prefix). Using the name
would be consistent with every other external_ids key that contains
"name".

[1]https://github.com/openstack/neutron/blob/89d21083905dd22d062b516b7279d18085f65af4/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1588
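The fix amounts to storing the prefixed OVN name, not the bare UUID.
The helper and the exact external_ids key below are illustrative
(redefined here so the sketch is self-contained), mirroring the
well-known "neutron-" prefixing convention:

```python
OVN_NAME_PREFIX = "neutron-"

def ovn_name(resource_id):
    # OVN resources are named after the Neutron UUID with a prefix.
    return OVN_NAME_PREFIX + resource_id

def lrp_external_ids(router_id):
    # Bug: the bare router *ID* was stored under the "name" key.
    # Fix: store the prefixed router name, consistent with other
    # external_ids keys that contain "name".
    return {"neutron:router_name": ovn_name(router_id)}

ids = lrp_external_ids("abcd-1234")
```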

** Affects: neutron
 Importance: Low
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055045

Title:
  [OVN] "LRP.external_ids" router name is incorrect, we are setting the
  ID only

Status in neutron:
  New

Bug description:
  In [1], we are setting the router_id instead of the router name
  (which should be the router ID with the "neutron-" prefix). Using the
  name would be consistent with every other external_ids key that
  contains "name".

  
[1]https://github.com/openstack/neutron/blob/89d21083905dd22d062b516b7279d18085f65af4/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L1588

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055045/+subscriptions




[Yahoo-eng-team] [Bug 2008238] Re: SRIOV port binding_profile attributes for OVS hardware offload are stripped on instance deletion or port detachment

2024-02-26 Thread Rodolfo Alonso
Neutron no longer needs to provide any port_binding information for HW
offloaded ports: https://review.opendev.org/c/openstack/neutron/+/898556

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2008238

Title:
  SRIOV port binding_profile attributes for OVS hardware offload are
  stripped on instance deletion or port detachment

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  This issue applies for systems using SRIOV with Mellanox ASAP2 SDN
  offloads.

  An SRIOV port capable for ASAP2 SDN acceleration (OVS hardware
  offloads) has 'capabilities=[switchdev]' added to the port
  binding_profile.

  After a VM has been created with an SRIOV port attached, the port can
  no longer be used for subsequent VM builds. Attempts to reuse the
  port result in an error of the form "Cannot set interface MAC/vlanid
  to / for ifname ens1f0 vf 7: Operation not supported"

  The underlying issue appears to be that when an SRIOV port is detached
  from a VM, or the VM is destroyed, the capabilities=[switchdev]
  property is removed from the port binding_profile.  This converts the
  port from ASAP2 to “Legacy SRIOV” (in Mellanox-speak) and makes it
  unusable.

  If the port binding_profile property is restored then the port can be
  successfully reused.

  The property is preserved during live migration, instance resizes and
  rebuilds. The binding_profile property only appears to be removed on
  instance deletion or port detachment.
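Conceptually, the desired behavior on detach is a selective clean-up
that preserves user-set keys, sketched here with plain dicts (this is
not Nova/Neutron code; the key names for the Nova-managed entries are
assumptions for illustration):

```python
def clear_nova_binding(profile):
    # Desired behavior: on detach/delete, remove only the keys Nova
    # set itself, preserving user-provided entries such as the
    # "capabilities" list that marks the port as switchdev-capable.
    nova_keys = {"pci_vendor_info", "pci_slot", "physical_network"}
    return {k: v for k, v in profile.items() if k not in nova_keys}

profile = {
    "capabilities": ["switchdev"],   # user-set, must survive detach
    "pci_slot": "0000:3b:02.1",      # Nova-set, cleared on detach
}
cleaned = clear_nova_binding(profile)
```

With the buggy behavior the whole profile was emptied, turning the
port into "legacy SRIOV"; the merge above keeps it reusable.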

  Steps to reproduce
  ==

  1. Create SRIOV port with ASAP2 capability:

  openstack port create --project  --network  --vnic-
  type=direct --binding-profile '{"capabilities": ["switchdev"]}' sriov-
  port-1

  2. Check the port binding_profile property:

  openstack port show -c binding_profile sriov-port-1

  3. Create an instance using the port:

  openstack server create --flavor  --image  --key-name
   --nic port-id=sriov-port-1 sriov-vm-1

  4. Delete the instance:

  openstack server delete sriov-vm-1

  5. Check the port binding_profile property:

  openstack port show -c binding_profile sriov-port-1

  Expected Result
  ===

  Nova sets properties in the binding_profile while the instance is in
  use.  Alongside those properties the capabilities='['switchdev']'
  property should be preserved.

  Actual Result
  =

  After the instance is deleted (or port detached), the binding_profile
  is empty.

  Environment
  ===

  This has been observed with the following configuration:

  - OpenStack Yoga
  - OVN Neutron driver

  Logs
  

  From Nova Compute:

  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest Traceback (most 
recent call last):
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", 
line 165, in launch
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest return 
self._domain.createWithFlags(flags)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 190, 
in doit
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 148, 
in proxy_call
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest rv = execute(f, 
*args, **kwargs)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 129, 
in execute
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest six.reraise(c, e, 
tb)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest raise value
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 83, 
in tworker
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest rv = meth(*args, 
**kwargs)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File 
"/usr/lib64/python3.6/site-packages/libvirt.py", line 1385, in createWithFlags
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest raise 
libvirtError('virDomainCreateWithFlags() failed')
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest libvirt.libvirtError: 
Cannot set interface MAC/vlanid to fa:16:3e:43:1e:ce/2107 for ifname ens1f0 vf 
7: Operation not supported
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest
  2023-01-24 19:55:32.273 7 ERROR nova.virt.libvirt.driver 
[req-581cd9e8-11c8-44be-9ed2-a03a5f70d0f4 

[Yahoo-eng-team] [Bug 2008238] Re: SRIOV port binding_profile attributes for OVS hardware offload are stripped on instance deletion or port detachment

2024-02-22 Thread Rodolfo Alonso
Adding Neutron to this bug. A port extension could be needed to avoid
writing to the port binding register when creating a port. That would
avoid the Neutron policy restriction that only allows the service role
to write to this field.

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2008238

Title:
  SRIOV port binding_profile attributes for OVS hardware offload are
  stripped on instance deletion or port detachment

Status in neutron:
  New
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  This issue applies for systems using SRIOV with Mellanox ASAP2 SDN
  offloads.

  An SRIOV port capable for ASAP2 SDN acceleration (OVS hardware
  offloads) has 'capabilities=[switchdev]' added to the port
  binding_profile.

  After a VM has been created with an SRIOV port attached, the port can
  no longer be used for subsequent VM builds. Attempts to reuse the
  port result in an error of the form "Cannot set interface MAC/vlanid
  to / for ifname ens1f0 vf 7: Operation not supported"

  The underlying issue appears to be that when an SRIOV port is detached
  from a VM, or the VM is destroyed, the capabilities=[switchdev]
  property is removed from the port binding_profile.  This converts the
  port from ASAP2 to “Legacy SRIOV” (in Mellanox-speak) and makes it
  unusable.

  If the port binding_profile property is restored then the port can be
  successfully reused.

  The property is preserved during live migration, instance resizes and
  rebuilds. The binding_profile property only appears to be removed on
  instance deletion or port detachment.

  Steps to reproduce
  ==

  1. Create SRIOV port with ASAP2 capability:

  openstack port create --project  --network  --vnic-
  type=direct --binding-profile '{"capabilities": ["switchdev"]}' sriov-
  port-1

  2. Check the port binding_profile property:

  openstack port show -c binding_profile sriov-port-1

  3. Create an instance using the port:

  openstack server create --flavor  --image  --key-name
   --nic port-id=sriov-port-1 sriov-vm-1

  4. Delete the instance:

  openstack server delete sriov-vm-1

  5. Check the port binding_profile property:

  openstack port show -c binding_profile sriov-port-1

  Expected Result
  ===============

  Nova sets properties in the binding_profile while the instance is in
  use.  Alongside those properties the capabilities=['switchdev']
  property should be preserved.

  Actual Result
  =============

  After the instance is deleted (or port detached), the binding_profile
  is empty.

  Environment
  ===========

  This has been observed with the following configuration:

  - OpenStack Yoga
  - OVN Neutron driver

  Logs
  ====

  From Nova Compute:

  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest Traceback (most recent call last):
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/libvirt/guest.py", line 165, in launch
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     return self._domain.createWithFlags(flags)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 190, in doit
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     result = proxy_call(self._autowrap, f, *args, **kwargs)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 148, in proxy_call
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     rv = execute(f, *args, **kwargs)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 129, in execute
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     six.reraise(c, e, tb)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.6/site-packages/six.py", line 703, in reraise
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     raise value
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/tpool.py", line 83, in tworker
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     rv = meth(*args, **kwargs)
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest   File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1385, in createWithFlags
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest     raise libvirtError('virDomainCreateWithFlags() failed')
  2023-01-24 19:55:32.270 7 ERROR nova.virt.libvirt.guest libvirt.libvirtError: Cannot set interface MAC/vlanid to fa:16:3e:43:1e:ce/2107 for ifname ens1f0 vf 7: Operation not supported
  2023-01-24 19:55:32.270 

[Yahoo-eng-team] [Bug 1909234] Re: [fullstack] "test_min_bw_qos_port_removed" failing randomly

2024-02-21 Thread Rodolfo Alonso
This issue is no longer happening in the CI. Closing the bug.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1909234

Title:
  [fullstack] "test_min_bw_qos_port_removed" failing randomly

Status in neutron:
  Invalid

Bug description:
  Fullstack test "test_min_bw_qos_port_removed" is failing randomly.

  LOG:
  
https://e67888f71e3c619ed125-948b6a5ed4d9dccaba5208d54797c4a1.ssl.cf1.rackcdn.com/764917/1/check/neutron-fullstack-with-uwsgi/6f59bc6/testr_results.html

  
  Error (snippet): http://paste.openstack.org/show/801284/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1909234/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2015728] Re: ovs/ovn source(with master branch) deployments broken

2024-02-21 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2015728

Title:
  ovs/ovn source(with master branch) deployments broken

Status in neutron:
  Fix Released

Bug description:
  With [1], ovn/ovs jobs running with OVS_BRANCH=master, OVN_BRANCH=main
  are broken; they fail as below:

  utilities/ovn-dbctl.c: In function ‘server_loop’:
  utilities/ovn-dbctl.c:1105:5: error: too few arguments to function ‘daemonize_start’
   1105 |     daemonize_start(false);
        |     ^~~
  In file included from utilities/ovn-dbctl.c:22:
  /opt/stack/ovs/lib/daemon.h:170:6: note: declared here
    170 | void daemonize_start(bool access_datapath, bool access_hardware_ports);
        |      ^~~
  make[1]: *** [Makefile:2374: utilities/ovn-dbctl.o] Error 1
  make[1]: *** Waiting for unfinished jobs

  
  Example failure:- https://zuul.openstack.org/build/b7b1700e2e5941f7a52b57ca411db722

  Builds:-
  - https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ipv6-only-ovs-master
  - https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-full-multinode-ovs-master
  - https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master
  - https://zuul.openstack.org/builds?job_name=neutron-ovn-tempest-ovs-master-centos-9-stream
  - https://zuul.openstack.org/builds?job_name=ovn-octavia-provider-functional-master
  - https://zuul.openstack.org/builds?job_name=ovn-octavia-provider-tempest-master

  
  Until the OVN main branch is adapted to this change, we need to pin OVS_BRANCH to a
  working commit or, better, a stable branch (as done with [2]).

  Also, I noticed some of these jobs running in neutron/ovn-octavia
  stable branches; they likely do not need to run there, so this should
  be checked and cleaned up.

  [1] https://github.com/openvswitch/ovs/commit/07cf5810de 
  [2] https://github.com/ovn-org/ovn/commit/b61e819bf9673

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2015728/+subscriptions




[Yahoo-eng-team] [Bug 1863577] Re: [ovn] tempest.scenario.test_network_v6.TestGettingAddress tests failing 100% times

2024-02-21 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1863577

Title:
  [ovn] tempest.scenario.test_network_v6.TestGettingAddress tests
  failing 100% times

Status in neutron:
  Fix Released

Bug description:
  In the neutron-ovn-tempest-slow job, tests from module
  tempest.scenario.test_network_v6.TestGettingAddress are failing 100% of
  the time.

  Example of failure:

  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_2c7/707248/2/check/neutron-ovn-tempest-slow/2c77f9b/testr_results.html

  
  Failure details:

  Traceback (most recent call last):
    File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 89, in wrapper
      return f(*func_args, **func_kwargs)
    File "/opt/stack/tempest/tempest/scenario/test_network_v6.py", line 244, in test_slaac_from_os
      self._prepare_and_test(address6_mode='slaac')
    File "/opt/stack/tempest/tempest/scenario/test_network_v6.py", line 225, in _prepare_and_test
      (ip, srv['id'], ssh.exec_command("ip address")))
    File "/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/unittest2/case.py", line 690, in fail
      raise self.failureException(msg)
  AssertionError: Address 2001:db8::f816:3eff:fe88:ea22 not configured for instance 55c9f9f2-3a99-4c0a-8536-be3cfe180381, ip address output is
  1: lo:  mtu 65536 qdisc noqueue qlen 1
  link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
  inet 127.0.0.1/8 scope host lo
 valid_lft forever preferred_lft forever
  inet6 ::1/128 scope host 
 valid_lft forever preferred_lft forever
  2: eth0:  mtu 1392 qdisc pfifo_fast qlen 1000
  link/ether fa:16:3e:88:ea:22 brd ff:ff:ff:ff:ff:ff
  inet 10.1.0.9/28 brd 10.1.0.15 scope global eth0
 valid_lft forever preferred_lft forever
  inet6 fe80::f816:3eff:fe88:ea22/64 scope link 
 valid_lft forever preferred_lft forever

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1863577/+subscriptions




[Yahoo-eng-team] [Bug 2020363] Re: [stable/train/] openstacksdk-functional-devstack fails with POST_FAILURE

2024-02-21 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2020363

Title:
  [stable/train/] openstacksdk-functional-devstack fails with
  POST_FAILURE

Status in neutron:
  Fix Released

Bug description:
  This was spotted in recent train backport
  https://review.opendev.org/c/openstack/neutron/+/883429

  100% failing and blocking train gate, sample run:
  https://zuul.opendev.org/t/openstack/build/e795cfe1007042668194b398553d2b19

  Obtaining file:///opt/stack/heat
  openstack-heat requires Python '>=3.8' but the running Python is 2.7.17

  Note the end result/fix may be just dropping the job from this old EM
  branch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2020363/+subscriptions




[Yahoo-eng-team] [Bug 1896733] Re: [FT] Error while executing "remove_nodes_from_host" method

2024-02-19 Thread Rodolfo Alonso
No, I haven't seen this failure in a while, so we can close this bug for
now.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1896733

Title:
  [FT] Error while executing "remove_nodes_from_host" method

Status in neutron:
  Invalid

Bug description:
  When executing the method "remove_nodes_from_host", sometimes the
  following exception is raised:

  2020-09-23 08:27:26.539774 | controller | oslo_db.exception.DBNonExistentTable: (sqlite3.OperationalError) no such table: ovn_hash_ring
  2020-09-23 08:27:26.539793 | controller | [SQL: DELETE FROM ovn_hash_ring WHERE ovn_hash_ring.hostname = ? AND ovn_hash_ring.group_name = ?]
  2020-09-23 08:27:26.539812 | controller | [parameters: ('ubuntu-bionic-rax-dfw-0019985577', 'mechanism_driver')]
  2020-09-23 08:27:26.539832 | controller | (Background on this error at: http://sqlalche.me/e/13/e3q8)

  
  The test does not fail but this error should not be thrown.
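
  A guard along these lines would silence the error. This is only a
  sketch: the exception class below is a local stand-in for
  oslo_db.exception.DBNonExistentTable, used to keep the example
  self-contained, and the function name mirrors the one in the report:

```python
class DBNonExistentTable(Exception):
    """Stand-in for oslo_db.exception.DBNonExistentTable."""

def remove_nodes_from_host(delete_rows, log=print):
    """Run the DELETE, tolerating a missing ovn_hash_ring table,
    e.g. a freshly created sqlite schema in a functional test."""
    try:
        return delete_rows()
    except DBNonExistentTable:
        log("ovn_hash_ring table does not exist yet; nothing to clean up")
        return 0
```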


  Logs:
  
https://92c26ef15aa414901341-0fbf40408b145a382fe5780d47baa8e9.ssl.cf5.rackcdn.com/753053/2/check/neutron-
  functional-with-uwsgi/316d1c2/job-output.txt

  
  Snippet: http://paste.openstack.org/show/798254/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1896733/+subscriptions




[Yahoo-eng-team] [Bug 2033651] Re: [fullstack] Reduce the CI job time

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033651

Title:
  [fullstack] Reduce the CI job time

Status in neutron:
  Fix Released

Bug description:
  The "neutron-fullstack-with-uwsgi" job usually takes between 2 and 3
  hours, depending on the node used. It is not practical to have a CI
  job that long which, at the same time, is not very stable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033651/+subscriptions




[Yahoo-eng-team] [Bug 2031087] Re: ICMPv6 Neighbor Advertisement packets from VM's link-local address dropped by OVS

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2031087

Title:
  ICMPv6 Neighbor Advertisement packets from VM's link-local address
  dropped by OVS

Status in neutron:
  Fix Released

Bug description:
  When a VM transmits an ICMPv6 Neighbour Advertisement packet from its
  link-local (fe80::/64) address, the NA packet ends up being dropped by
  the OVS and is not forwarded to the external provider network. This
  causes connectivity issues as the external router is unable to resolve
  the link-layer MAC address for the VM's link-local IPv6 address. NA
  packets from the VM's global IPv6 address are forwarded correctly.

  Adding security group rule such as "Egress,IPv6,Any,Any,::/0" does
  *not* help, the drop rule appears to be built-in and not possible to
  override. However, disabling port security altogether does make the
  problem go away.

  We are running OpenStack Antelope, neutron 22.0.2 and OVN 23.03.
  Platform is AlmaLinux 9.2, RDO packages.

  We believe, but are not 100% sure, that this problem may have started
  after upgrading from OVN 22.12. Reverting the upgrade to confirm is
  unfortunately a complicated task, so we would like to avoid that if
  possible.

  Tcpdump can be used to confirm that the packets vanish inside OVS.
  First, on the tap interface connected to the VM. We can here see the
  external router (fe80::669d:99ff:fe3a:3d58) transmit NS packets to the
  VM's solicited-node multicast address, and the VM
  (fe80::18:59ff:fe37:204a) responds with a unicast NA packet:

  $ sudo tcpdump -i tapb7c872a4-a5 host fe80::669d:99ff:fe3a:3d58 and icmp6
  08:41:24.201970 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
  08:41:24.202004 IP6 fe80::18:59ff:fe37:204a > fe80::669d:99ff:fe3a:3d58: ICMP6, neighbor advertisement, tgt is fe80::18:59ff:fe37:204a, length 32
  08:41:25.366752 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
  08:41:25.366775 IP6 fe80::18:59ff:fe37:204a > fe80::669d:99ff:fe3a:3d58: ICMP6, neighbor advertisement, tgt is fe80::18:59ff:fe37:204a, length 32
  08:41:26.374637 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
  08:41:26.374693 IP6 fe80::18:59ff:fe37:204a > fe80::669d:99ff:fe3a:3d58: ICMP6, neighbor advertisement, tgt is fe80::18:59ff:fe37:204a, length 32

  However, while tcpdumping the same traffic on the external interface
  (bond0) on the provider VLAN tag the network is using, the NA packets
  are no longer there:

  $ sudo tcpdump -i bond0 vlan 882 and host fe80::669d:99ff:fe3a:3d58 and icmp6
  08:41:24.201964 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
  08:41:25.366747 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32
  08:41:26.374625 IP6 fe80::669d:99ff:fe3a:3d58 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has fe80::18:59ff:fe37:204a, length 32

  This explains why there are so many NS packets - the router keeps
  retrying forever.
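
  For reference, the ff02::1:ff37:204a destination seen in these
  captures is the solicited-node multicast group derived from the low 24
  bits of the target address (RFC 4291); a quick stdlib sketch of that
  mapping:

```python
import ipaddress

def solicited_node_multicast(target: str) -> str:
    """Map an IPv6 address to its solicited-node multicast group:
    ff02::1:ffXX:XXXX, where XX:XXXX are the low 24 bits of the target."""
    low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

# Both the link-local and the global address of the VM map to the same
# group, which matches the captures in this report.
print(solicited_node_multicast("fe80::18:59ff:fe37:204a"))
print(solicited_node_multicast("2a02:c0:200:f012:18:59ff:fe37:204a"))
```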

  Compare this with NA packets from the VM's global address, which works
  as expected:

  $ sudo tcpdump -ni tapb7c872a4-a5 ether host 64:9d:99:3a:3d:58 and icmp6 and net not fe80::/10
  08:56:03.015378 IP6 2a02:c0:200:f012::0:1:1 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has 2a02:c0:200:f012:18:59ff:fe37:204a, length 32
  08:56:03.015408 IP6 2a02:c0:200:f012:18:59ff:fe37:204a > 2a02:c0:200:f012::0:1:1: ICMP6, neighbor advertisement, tgt is 2a02:c0:200:f012:18:59ff:fe37:204a, length 32

  $ sudo tcpdump -ni bond0 vlan 882 and ether host 64:9d:99:3a:3d:58 and icmp6 and net not fe80::/10
  08:56:03.015292 IP6 2a02:c0:200:f012::0:1:1 > ff02::1:ff37:204a: ICMP6, neighbor solicitation, who has 2a02:c0:200:f012:18:59ff:fe37:204a, length 32
  08:56:03.015539 IP6 2a02:c0:200:f012:18:59ff:fe37:204a > 2a02:c0:200:f012::0:1:1: ICMP6, neighbor advertisement, tgt is 2a02:c0:200:f012:18:59ff:fe37:204a, length 32

  We can further confirm it by finding an explicit drop rule within OVS:

  $ sudo ovs-appctl dpif/dump-flows br-int | grep drop
  recirc_id(0),in_port(8),eth(src=02:18:59:37:20:4a),eth_type(0x86dd),ipv6(src=fe80::18:59ff:fe37:204a,proto=58,hlimit=255,frag=no),icmpv6(type=136,code=0),nd(target=fe80::18:59ff:fe37:204a,tll=02:18:59:37:20:4a), packets:104766, bytes:9009876, used:0.202s, actions:drop

  We see that there are a ton of built-in default rules pertaining to NA
  packets:

  $ sudo ovs-ofctl dump-flows br-int | grep -c icmp_type=136
  178

  This is not unexpected as ICMPv6 ND (NS/NA/RS/RA/etc) are essential

[Yahoo-eng-team] [Bug 2036607] Re: [OVN] The API worker fails during "post_fork_initialize" call

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036607

Title:
  [OVN] The API worker fails during "post_fork_initialize" call

Status in neutron:
  Fix Released

Bug description:
  Bugzilla reference:
  https://bugzilla.redhat.com/show_bug.cgi?id=2233797

  This issue has been reproduced using the Tobiko framework. The test,
  which is executed several times, reboots the controllers and thus the
  Neutron API. Randomly, one Neutron API worker fails during the
  execution of the event method "post_fork_initialize", in the
  "_setup_hash_ring" call [1].

  Regardless of the result of the method "post_fork_initialize", the API
  worker starts. But in this case there are some methods (mainly related
  to the OVN agents) that are not patched and thus the result of the API
  calls ("agent show", "agent list", etc) is wrong.

  This bug proposes:
  * To properly handle any possible error in the "_setup_hash_ring" call.
  * To log a message at the end of the "post_fork_initialize" method to check 
that this event method has finished properly.
  * To catch any possible error during the "post_fork_initialize" execution and 
if this error is not retried, fail and exit.
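
  A minimal sketch of the proposed handling (the function signature and
  retry policy here are assumptions for illustration; the real logic
  lives in the OVN mechanism driver):

```python
import logging

LOG = logging.getLogger(__name__)

def post_fork_initialize(setup_hash_ring, retries=3):
    """Retry the hash-ring setup; if it keeps failing, exit the worker
    instead of letting it serve requests half-initialized, and log a
    final message so operators can confirm the method completed."""
    for attempt in range(1, retries + 1):
        try:
            setup_hash_ring()
            break
        except Exception:
            LOG.exception("_setup_hash_ring failed (attempt %s/%s)",
                          attempt, retries)
            if attempt == retries:
                raise SystemExit(1)  # fail and exit, do not run unpatched
    LOG.info("post_fork_initialize finished successfully")
```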

  [1]https://paste.opendev.org/show/bqzDPR5TukLq9d1GIcnz/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036607/+subscriptions




[Yahoo-eng-team] [Bug 2042947] Re: [stable branches] nftables job inherits from openvswitch-iptables_hybrid master job

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042947

Title:
  [stable branches] nftables job inherits from openvswitch-
  iptables_hybrid master job

Status in neutron:
  Fix Released

Bug description:
  These jobs run in the periodic pipeline and are broken [1]; they
  inherit from the ovs master jobs instead of the stable variant. This
  needs to be fixed.

  
  [1] https://zuul.openstack.org/builds?job_name=neutron-linuxbridge-tempest-plugin-scenario-nftables

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042947/+subscriptions




[Yahoo-eng-team] [Bug 1908057] Re: Ensure "keepalived" is correctly disabled (the process does not exist)

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1908057

Title:
  Ensure "keepalived" is correctly disabled (the process does not exist)

Status in neutron:
  Fix Released

Bug description:
  Ensure "keepalived" is correctly disabled, meaning the process does
  not exist. Currently the process is terminated by sending a SIGTERM
  signal [1]. If that fails, for any reason, the "keepalived" process
  will continue running. For example, in [2][3] the SIGTERM signal is
  sent but the process is still running ("psutil.Process(pid)" is able
  to retrieve the process information).
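
  A stdlib-only sketch of the kind of check implied here (an assumed
  approach, not the actual Neutron patch): send SIGTERM, verify the PID
  is really gone, and escalate to SIGKILL if needed.

```python
import os
import signal
import time

def ensure_stopped(pid, timeout=5.0):
    """Return True once the process no longer exists; escalate to
    SIGKILL if SIGTERM did not take effect within the timeout."""
    try:
        os.kill(pid, signal.SIGTERM)
    except ProcessLookupError:
        return True                      # already gone
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # signal 0: existence check only
        except ProcessLookupError:
            return True
        time.sleep(0.1)
    try:
        os.kill(pid, signal.SIGKILL)     # last resort
    except ProcessLookupError:
        return True
    return False
```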

  
  
[1]https://github.com/openstack/neutron/blob/28c79c9747a3031a0d1321199d2d72336e4076ed/neutron/agent/linux/keepalived.py#L475-L480
  
[2]https://be8bc338a39cf869db17-e8a1a74829c12f710e3c5d0fa3d85b60.ssl.cf1.rackcdn.com/763231/10/check/neutron-functional-with-uwsgi/db6a5a3/controller/logs/dsvm-functional-logs/neutron.tests.functional.agent.l3.extensions.test_gateway_ip_qos_extension.TestRouterGatewayIPQosAgentExtensionDVR.test_dvr_router_lifecycle_ha_with_snat_with_fips.txt
  [3]http://paste.openstack.org/show/801003/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1908057/+subscriptions




[Yahoo-eng-team] [Bug 1923644] Re: OVN agent delete support should be backported to pre-wallaby releases

2024-02-19 Thread Rodolfo Alonso
"networking-ovn" project is now unsupported. Closing this bug as "Won't
fix".

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1923644

Title:
  OVN agent delete support should be backported to pre-wallaby releases

Status in neutron:
  Won't Fix

Bug description:
  Until wallaby, ml2/ovn and networking-ovn, despite having support for
  the Agent API, did not implement "delete" support. This means that
  when scaling down compute nodes, etc. there are orphaned agents that
  cannot be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1923644/+subscriptions




[Yahoo-eng-team] [Bug 1946589] Re: [OVN] localport might not be updated when create multiple subnets for its network

2024-02-19 Thread Rodolfo Alonso
Closing this bug. Please feel free to reopen it if needed, providing new
information.

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1946589

Title:
  [OVN] localport might not be updated when create multiple subnets for
  its network

Status in neutron:
  Won't Fix

Bug description:
  When creating a subnet for a specific network, ovn_client updates the
  external_ids of the metadata port. We focus on these fields of the
  localport: 'neutron:cidrs' and 'mac', because those fields contain
  fixed_ips.

  But in scenarios such as batch-creating multiple subnets for one
  network, its 'neutron:cidrs' and 'mac' fields might not be updated.

  
  metadata port info:
   neutron port-show 15f0d39b-445b-4b19-a32f-6db8136871de -c device_owner -c fixed_ips
  +--------------+------------------------------------------------------------------------------------+
  | Field        | Value                                                                              |
  +--------------+------------------------------------------------------------------------------------+
  | device_owner | network:distributed                                                                |
  | fixed_ips    | {"subnet_id": "373254b2-6791-4fe2-8038-30e91a9e9c8d", "ip_address": "192.168.0.2"} |
  |              | {"subnet_id": "d0d871af-158e-4f45-8af2-92f2058521a3", "ip_address": "192.168.1.2"} |
  |              | {"subnet_id": "eeac857f-ab0e-438f-9fa6-2ae0cd3de41a", "ip_address": "192.168.2.2"} |
  +--------------+------------------------------------------------------------------------------------+

  localport port info:
  _uuid   : 2e17ffa7-f501-49e5-97ce-9a8731e60699
  chassis : []
  datapath: cfebe6fc-52fc-43ec-a25b-73d30abe4d00
  encap   : []
  external_ids: {"neutron:cidrs"="192.168.2.2/24", "neutron:device_id"=ovnmeta-d39ddf74-9542-4ebf-9d9b-a44d3c11d1fc, "neutron:device_owner"="network:distributed", "neutron:network_name"=neutron-d39ddf74-9542-4ebf-9d9b-a44d3c11d1fc, "neutron:port_name"="", "neutron:project_id"=fc5ea82972ce42499ddc18bc4733eaab, "neutron:revision_number"="4", "neutron:security_group_ids"=""}
  gateway_chassis : []
  ha_chassis_group: []
  logical_port: "15f0d39b-445b-4b19-a32f-6db8136871de"
  mac : ["fa:16:3e:e2:60:18 192.168.2.2"]
  nat_addresses   : []
  options : {requested-chassis=""}
  parent_port : []
  tag : []
  tunnel_key  : 1
  type: localport
  up  : false
  virtual_parent  : []

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1946589/+subscriptions




[Yahoo-eng-team] [Bug 2024044] Re: [sqlalchemy-20] Remove redundant indexes of some tables

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024044

Title:
  [sqlalchemy-20] Remove redundant indexes of some tables

Status in neutron:
  Fix Released

Bug description:
  There are some tables where some primary key columns are also defined
  as a key (index). This redundant indexing is not necessary and could
  raise SQLAlchemy errors.

  These are the tables and columns affected:
  * portdataplanestatuses, port_id
  * portdnses, port_id
  * portuplinkstatuspropagation, port_id
  * qos_policies_default, project_id
  * quotausages, resource
  * subnet_dns_publish_fixed_ips, subnet_id

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024044/+subscriptions




[Yahoo-eng-team] [Bug 2025246] Re: [OVN] Improve log for the exception handling of ovn_l3/plugin.py

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025246

Title:
  [OVN] Improve log for the exception handling of ovn_l3/plugin.py

Status in neutron:
  Fix Released

Bug description:
  While debugging an internal issue, create_router() from
  ovn_l3/plugin.py was raising an exception and, as part of handling
  this exception, it was logging an ERROR. But there was no traceback,
  which makes it really hard to figure out where the error is being
  raised, even with debug mode enabled. For example, these are the logs:

  
['neutron.services.ovn_l3.plugin.RouterAvailabilityZoneMixin._process_az_request-13770945',
 
'neutron.services.ovn_l3.plugin.OVNL3RouterPlugin.create_router_precommit-181103']
 for router, precommit_create _notify_loop 
/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:192
  2023-06-26 10:11:39.037 30 DEBUG neutron.db.ovn_revision_numbers_db 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] create_initial_revision 
uuid=9969c69e-ca2f-4f6d-bdfd-74cf1febda83, type=routers, rev=-1 
create_initial_revision 
/usr/lib/python3.9/site-packages/neutron/db/ovn_revision_numbers_db.py:104
  2023-06-26 10:11:39.159 30 DEBUG neutron_lib.callbacks.manager 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Notify callbacks [] for 
router, after_create _notify_loop 
/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:192
  2023-06-26 10:11:39.160 30 INFO 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.extensions.qos 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Starting 
OVNClientQosExtension
  2023-06-26 10:11:39.210 30 ERROR neutron.services.ovn_l3.plugin 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Unable to create lrouter 
for 9969c69e-ca2f-4f6d-bdfd-74cf1febda83: 
neutron_lib.exceptions.l3.RouterNotFound: Router 
9969c69e-ca2f-4f6d-bdfd-74cf1febda83 could not be found
  2023-06-26 10:11:39.219 30 DEBUG neutron_lib.callbacks.manager 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Notify callbacks [] for 
router, before_delete _notify_loop 
/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:192
  2023-06-26 10:11:39.386 32 DEBUG neutron.wsgi [-] (32) accepted 
('fd00:fd00:fd00:2000::399', 33076, 0, 0) server 
/usr/lib/python3.9/site-packages/eventlet/wsgi.py:992
  2023-06-26 10:11:39.389 32 INFO neutron.wsgi [-] fd00:fd00:fd00:2000::399 
"GET / HTTP/1.1" status: 200  len: 244 time: 0.0014989
  2023-06-26 10:11:39.406 30 DEBUG neutron_lib.callbacks.manager 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Notify callbacks [] for 
router, precommit_delete _notify_loop 
/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:192
  2023-06-26 10:11:39.471 30 DEBUG neutron_lib.callbacks.manager 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Notify callbacks [] for 
router, after_delete _notify_loop 
/usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:192
  2023-06-26 10:11:39.472 30 DEBUG 
neutron.api.rpc.agentnotifiers.l3_rpc_agent_api 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] Fanout notify agent at 
l3_agent the message router_deleted on router 
9969c69e-ca2f-4f6d-bdfd-74cf1febda83 _notification_fanout 
/usr/lib/python3.9/site-packages/neutron/api/rpc/agentnotifiers/l3_rpc_agent_api.py:118
  2023-06-26 10:11:39.549 30 INFO neutron.api.v2.resource 
[req-18ddc15b-8b4e-4c3c-a360-1d6c10680b39 46d4bcf86b0a4de691d824308920146c 
b867c3a5edc8442c946948014a351985 - default default] create failed (client 
error): The resource could not be found.

  
  As you can see the error is:

  46d4bcf86b0a4de691d824308920146c b867c3a5edc8442c946948014a351985 -
  default default] Unable to create lrouter for
  9969c69e-ca2f-4f6d-bdfd-74cf1febda83:
  neutron_lib.exceptions.l3.RouterNotFound: Router
  9969c69e-ca2f-4f6d-bdfd-74cf1febda83 could not be found

  But there is no traceback showing where this error was originally
  raised. The create_router() method does many things and it is very
  difficult to figure this out without the traceback.

  This LP is about improving the logs for that module in general, as I
  see other parts where nothing is logged when things go wrong.
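A low-cost improvement of the kind proposed here is to log caught exceptions with ``LOG.exception``, which records the full traceback. A minimal sketch, with a hypothetical failing step standing in for the real create_router() work:

```python
import logging

LOG = logging.getLogger(__name__)


def create_router(router_id):
    # Hypothetical failing step, standing in for the real create_router() logic.
    raise LookupError("Router %s could not be found" % router_id)


def create_router_logged(router_id):
    try:
        return create_router(router_id)
    except LookupError:
        # LOG.exception records the full traceback, so the origin of the
        # error is visible in the logs instead of only the final message.
        LOG.exception("Unable to create lrouter for %s", router_id)
        raise
```

This keeps the original exception behaviour while making the point of failure visible in the logs.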

To manage notifications about this bug go 

[Yahoo-eng-team] [Bug 2027610] Re: [sqlalchemy-20] Error in "test_notify_port_status_all_values" using a wrong OVO parameter

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2027610

Title:
  [sqlalchemy-20] Error in "test_notify_port_status_all_values" using a
  wrong OVO parameter

Status in neutron:
  Fix Released

Bug description:
  Snippet: https://paste.opendev.org/show/bCteNr5TAk8k4YqS3VAC/

  The "Port.status" field only accepts string values. If other types are 
passed, the OVO validator will fail with the following error (from the logs):
ValueError: A string is required in field status, not a LoaderCallableStatus
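The failure mode can be sketched with a minimal string-only field coercion. This is hypothetical illustration code, not the actual oslo.versionedobjects implementation:

```python
# Minimal sketch of the failure: a string-only field, as used by
# "Port.status", rejects non-string values such as SQLAlchemy's
# LoaderCallableStatus sentinel.
class StringField:
    def coerce(self, obj, attr, value):
        if not isinstance(value, str):
            raise ValueError("A string is required in field %s, not a %s"
                             % (attr, type(value).__name__))
        return value


field = StringField()
field.coerce(None, "status", "ACTIVE")  # OK: the value is a string
```

Passing any non-string object through such a field raises the ValueError seen in the logs.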

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2027610/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2028161] Re: [ovn-octavia-provider] FIP not included into LogicalSwitchPortUpdate event handler method

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2028161

Title:
  [ovn-octavia-provider] FIP not included into LogicalSwitchPortUpdate
  event handler method

Status in neutron:
  Fix Released

Bug description:
  When a LogicalSwitchPortUpdate event is triggered after removing a
  FIP from the LB VIP, the received event includes the affected port,
  but the related FIP is not passed to the handler method.

  Including the FIP would help to decide whether the action is an
  association or a disassociation, and would also help to find the
  related objects to be updated/deleted.
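If the FIP were passed along, the handler could decide the action directly. A minimal, hypothetical sketch (the function name and dict keys are illustrative, not the real provider code):

```python
# Hypothetical sketch of what this bug asks for: with the related FIP
# available, the handler can distinguish association from disassociation.
def handle_lsp_update(port, fip=None):
    if fip is None:
        # Today's behaviour: the FIP is not passed, so the handler has
        # to search for the related objects itself.
        return "lookup-required"
    return "associate" if fip.get("port_id") else "disassociate"
```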

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2028161/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1783965] Re: Openvswtich agent break the existing data plane as not stable server

2024-02-19 Thread Rodolfo Alonso
Closing this bug for now. Please feel free to reopen it if needed,
providing new information.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1783965

Title:
  Openvswitch agent breaks the existing data plane when the server is not stable

Status in neutron:
  Won't Fix

Bug description:
  The current Open vSwitch agent needs to be more robust in more cases.

  Please see [1]

  This line cleans up all stale OVS flows. Consider the following case:
  when the OVS agent restarts, it tries to retrieve the info for the
  devices it holds (via RPC to the server, storing it in a local cache
  if possible). The devices can only be obtained from the server after
  scanning the existing OVS bridge. But at that moment, some device
  info may not be retrieved successfully because the Neutron server is
  not stable or RabbitMQ hangs. Those devices will then fail to sync.
  The following step is [1]: it cleans up the previous OVS flows, which
  may still be carrying user traffic. That means it breaks the existing
  data plane. This is a terrible situation.

  For private cloud providers, when they face an issue in production or
  need to upgrade servers, this kind of situation can be very frequent.
  Once they hit this issue, the impact is quite large.

  
  [1]  
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#n2158
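A guard of the shape the report asks for can be sketched as follows. This is illustrative only, not the agent code; the data structures are invented for the example:

```python
# Illustrative sketch: make the stale-flow cleanup conditional on a
# successful device-info sync, so an agent restart with an unstable
# server/RabbitMQ does not wipe flows carrying live user traffic.
def cleanup_stale_flows(bridge, sync_succeeded):
    if not sync_succeeded:
        # Keep all existing flows and retry the sync later instead of
        # breaking the data plane.
        return []
    removed = [f for f in bridge["flows"] if f.get("stale")]
    bridge["flows"] = [f for f in bridge["flows"] if not f.get("stale")]
    return removed
```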

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1783965/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1860586] Re: [Tempest] SSH exception "No existing session"

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1860586

Title:
  [Tempest] SSH exception "No existing session"

Status in neutron:
  Fix Released

Bug description:
  Seen in several CI jobs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_820/611605/22/gate/neutron-tempest-plugin-scenario-linuxbridge/82023b2/testr_results.html
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ca7/653883/11/check/neutron-tempest-plugin-scenario-linuxbridge/ca7d140/testr_results.html

  Error [1]:
  2020-01-21 00:35:15,421 31613 INFO [tempest.lib.common.ssh] Creating ssh 
connection to '172.24.5.164:22' as 'cirros' with public key authentication
  2020-01-21 00:35:15,462 31613 INFO [paramiko.transport] Connected 
(version 2.0, client dropbear_2012.55)
  2020-01-21 00:35:17,402 31613 INFO [paramiko.transport] Authentication 
(publickey) successful!
  2020-01-21 00:35:17,402 31613 INFO [tempest.lib.common.ssh] ssh 
connection to cirros@172.24.5.164 successfully created
  2020-01-21 00:35:17,855 31613 INFO [tempest.lib.common.ssh] Creating ssh 
connection to '172.24.5.164:22' as 'cirros' with public key authentication
  2020-01-21 00:35:17,890 31613 INFO [paramiko.transport] Connected 
(version 2.0, client dropbear_2012.55)
  2020-01-21 00:35:19,780 31613 INFO [paramiko.transport] Authentication 
(publickey) successful!
  2020-01-21 00:35:19,780 31613 INFO [tempest.lib.common.ssh] ssh 
connection to cirros@172.24.5.164 successfully created
  2020-01-21 00:35:19,856 31613 INFO [tempest.lib.common.ssh] Creating ssh 
connection to '172.24.5.164:22' as 'cirros' with public key authentication
  2020-01-21 00:35:19,865 31613 INFO [paramiko.transport] Connected 
(version 2.0, client dropbear_2012.55)
  2020-01-21 00:35:29,878 31613 ERROR[tempest.lib.common.ssh] Failed to 
establish authenticated ssh connection to cirros@172.24.5.164 after 0 attempts
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh Traceback (most 
recent call last):
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh   File 
"/opt/stack/tempest/tempest/lib/common/ssh.py", line 107, in _get_ssh_connection
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh 
sock=proxy_chan)
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/paramiko/client.py",
 line 412, in connect
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh server_key = 
t.get_remote_server_key()
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh   File 
"/opt/stack/tempest/.tox/tempest/lib/python3.6/site-packages/paramiko/transport.py",
 line 834, in get_remote_server_key
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh raise 
SSHException("No existing session")
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh 
paramiko.ssh_exception.SSHException: No existing session
  2020-01-21 00:35:29.878 31613 ERROR tempest.lib.common.ssh 

  
  [1] http://paste.openstack.org/show/788688/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1860586/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1962167] Re: [ntp] "test_network_attached_with_two_routers" randomly failing

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1962167

Title:
  [ntp] "test_network_attached_with_two_routers" randomly failing

Status in neutron:
  Fix Released

Bug description:
  Failures detected in neutron-lib CI.

  Example:
  
https://c0a6da4decb9452ac635-13a95957352e8408b84a588b720357c0.ssl.cf2.rackcdn.com/828738/6/check/neutron-
  tempest-plugin-api/90fd383/testr_results.html

  Error: https://paste.opendev.org/show/bH5jPQGIgO5TF70wDXMv/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1962167/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2022059] Re: [OVN] Trunk can be deleted when the parent port is bound to a VM

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2022059

Title:
  [OVN] Trunk can be deleted when the parent port is bound to a VM

Status in neutron:
  Fix Released

Bug description:
  Unlike in other backends (ML2/OVS, for example), the "Trunk" object
  can be deleted while the parent port (and the subports) are bound.
  This operation should raise an exception instead (same as in
  ML2/OVS).

  Example using ML2/OVS:
  $ openstack network trunk delete trunk1
  Failed to delete trunk with name or ID 'trunk1': ConflictException: 409: 
Client Error for url: 
http://192.168.10.70:9696/networking/v2.0/trunks/c406114f-8453-4b55-8642-c9265defdec4,
 Trunk c406114f-8453-4b55-8642-c9265defdec4 is currently in use.
  1 of 1 trunks failed to delete.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2022059/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1907411] Re: neutron-keepalived-state-change file descriptor leak

2024-02-19 Thread Rodolfo Alonso
Closing this bug. According to c#7, the bug issue is no longer valid.
Please feel free to reopen if needed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907411

Title:
  neutron-keepalived-state-change file descriptor leak

Status in neutron:
  Invalid

Bug description:
  The fix in https://bugs.launchpad.net/neutron/+bug/1870313 changed
  the code to use threading to send GARPs. The GARPs work, but the
  change also introduced another very serious bug: a file descriptor
  leak!

  I tested the train and ussuri branches; both can reproduce the bug.
  The reproduction steps are simple: just create a floating IP with the
  --port option, which triggers the l3-agent to configure an IP address
  on the qg- interface. Then neutron-keepalived-state-change will send
  a GARP, AND a file named "anon_inode:[eventpoll]" is left in
  /proc//fd.

  As you can imagine, what will happen in /proc/X/fd if floating IPs
  are frequently created and deleted?

  This can also cause the neutron-keepalived-state-change process to
  consume a huge amount of memory, like 10G+.
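The "anon_inode:[eventpoll]" entries correspond to unclosed epoll objects. A minimal Python demonstration of the leak pattern (Linux-only, since it uses select.epoll; not the actual agent code):

```python
import select


def leaky():
    # Each select.epoll() object holds an "anon_inode:[eventpoll]" file
    # descriptor; it stays open in /proc/<pid>/fd when never closed.
    ep = select.epoll()
    return ep


def fixed():
    ep = select.epoll()
    try:
        pass  # ... monitoring work would go here ...
    finally:
        ep.close()  # releases the eventpoll descriptor
    return ep
```

Repeatedly calling the leaky variant accumulates descriptors, which matches the growth observed in /proc/X/fd.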

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907411/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1995091] Re: [CI] "neutron-functional-with-oslo-master" failing with timeout

2024-02-19 Thread Rodolfo Alonso
This issue is already solved.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1995091

Title:
  [CI] "neutron-functional-with-oslo-master" failing with timeout

Status in neutron:
  Fix Released

Bug description:
  Found in the patch
  https://review.opendev.org/c/openstack/neutron/+/861859

  Logs:
  https://zuul.opendev.org/t/openstack/build/c886b9f68c0647fabc37bcd9be76c66c

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1995091/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2019802] Re: [master] Not all fullstack tests running in CI

2024-02-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2019802

Title:
  [master] Not all fullstack tests running in CI

Status in neutron:
  Fix Released

Bug description:
  For the last couple of days, only a few tests (just 6) have been
  running in the neutron-fullstack-with-uwsgi job.

  Example:-
  
https://d16311159baa9c9fc692-58e8a805a242f8a07eac2fd1c3f6b11b.ssl.cf1.rackcdn.com/880867/6/gate/neutron-
  fullstack-with-uwsgi/6b4c3ba/testr_results.html

  Builds:- https://zuul.opendev.org/t/openstack/builds?job_name=neutron-
  fullstack-with-uwsgi=openstack%2Fneutron=master=0

  The job logs have below Traceback:-
  2023-05-16 06:18:13.539943 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron Traceback (most recent call last):
  2023-05-16 06:18:13.539958 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
 line 1900, in _execute_context
  2023-05-16 06:18:13.539983 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron self.dialect.do_execute(
  2023-05-16 06:18:13.539997 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/sqlalchemy/engine/default.py",
 line 736, in do_execute
  2023-05-16 06:18:13.540008 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron cursor.execute(statement, parameters)
  2023-05-16 06:18:13.540021 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/cursors.py",
 line 158, in execute
  2023-05-16 06:18:13.540032 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron result = self._query(query)
  2023-05-16 06:18:13.540044 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/cursors.py",
 line 325, in _query
  2023-05-16 06:18:13.540056 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron conn.query(q)
  2023-05-16 06:18:13.540067 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 549, in query
  2023-05-16 06:18:13.540078 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2023-05-16 06:18:13.540090 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 779, in _read_query_result
  2023-05-16 06:18:13.540118 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron result.read()
  2023-05-16 06:18:13.540131 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 1157, in read
  2023-05-16 06:18:13.540143 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron first_packet = self.connection._read_packet()
  2023-05-16 06:18:13.540154 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/connections.py",
 line 729, in _read_packet
  2023-05-16 06:18:13.540165 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron packet.raise_for_error()
  2023-05-16 06:18:13.540177 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/protocol.py",
 line 221, in raise_for_error
  2023-05-16 06:18:13.540188 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron err.raise_mysql_exception(self._data)
  2023-05-16 06:18:13.540199 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron   File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.10/site-packages/pymysql/err.py",
 line 143, in raise_mysql_exception
  2023-05-16 06:18:13.540210 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron raise errorclass(errno, errval)
  2023-05-16 06:18:13.540221 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron pymysql.err.OperationalError: (1049, "Unknown database 'ybptypesko'")
  2023-05-16 06:18:13.540233 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron
  2023-05-16 06:18:13.540244 | controller | 2023-05-16 06:18:13.529 15888 ERROR 
neutron The above 

[Yahoo-eng-team] [Bug 1928764] Re: Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent

2024-02-19 Thread Rodolfo Alonso
The test "test_l2_agent_restart" is no longer considered unstable, and
we haven't seen any occurrence of this error recently. Closing this bug
for now.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1928764

Title:
  Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing
  often with LB agent

Status in neutron:
  Fix Released

Bug description:
  It seems that the test
  
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart
  has been failing pretty often recently in various LB scenarios (flat,
  vxlan network).

  Examples of failures:

  
https://09f8e4e92bfb8d2ac89d-b41143eab52d80358d8555f964e9341b.ssl.cf5.rackcdn.com/670611/13/check/neutron-fullstack-with-uwsgi/8f51833/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_400/790288/1/check/neutron-fullstack-with-uwsgi/40025f9/testr_results.html
  
https://0603beb4ddbd36de1165-42644bdefd5590a8f7e4e2e8a8a4112f.ssl.cf5.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/7640987/testr_results.html
  
https://e978bdcfc0235dcd9417-6560bc3b6382c1d289b358872777ca09.ssl.cf1.rackcdn.com/787956/1/check/neutron-fullstack-with-uwsgi/779913e/testr_results.html
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0cb/789648/5/check/neutron-fullstack-with-uwsgi/0cb6d65/testr_results.html

  Stacktrace:

  ft1.1: 
neutron.tests.fullstack.test_connectivity.TestUninterruptedConnectivityOnL2AgentRestart.test_l2_agent_restart(LB,Flat
 network)testtools.testresult.real._StringException: Traceback (most recent 
call last):
File "/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/base.py", 
line 183, in func
  return f(self, *args, **kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/test_connectivity.py",
 line 236, in test_l2_agent_restart
  self._assert_ping_during_agents_restart(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/fullstack/base.py", 
line 123, in _assert_ping_during_agents_restart
  common_utils.wait_until_true(
File "/usr/lib/python3.8/contextlib.py", line 120, in __exit__
  next(self.gen)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 147, in async_ping
  f.result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 432, in result
  return self.__get_result()
File "/usr/lib/python3.8/concurrent/futures/_base.py", line 388, in 
__get_result
  raise self._exception
File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run
  result = self.fn(*self.args, **self.kwargs)
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/tests/common/net_helpers.py",
 line 128, in assert_async_ping
  ns_ip_wrapper.netns.execute(
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/ip_lib.py", 
line 718, in execute
  return utils.execute(cmd, check_exit_code=check_exit_code,
File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/utils.py", 
line 156, in execute
  raise exceptions.ProcessExecutionError(msg,
  neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: ['ip', 
'netns', 'exec', 'test-af70cf3a-c531-4fdf-ab4c-31cc69cc2c56', 'ping', '-W', 2, 
'-c', '1', '20.0.0.212']; Stdin: ; Stdout: PING 20.0.0.212 (20.0.0.212) 56(84) 
bytes of data.

  --- 20.0.0.212 ping statistics ---
  1 packets transmitted, 0 received, 100% packet loss, time 0ms

  ; Stderr:


  I checked linuxbridge-agent logs (2 cases) and I found there error
  like below:

  2021-05-13 15:46:07.721 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, ()) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
  2021-05-13 15:46:07.725 96421 DEBUG oslo.privsep.daemon [-] privsep: 
reply[139960964907248]: (4, None) _call_back 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:510
  2021-05-13 15:46:07.728 96421 DEBUG oslo.privsep.daemon [-] privsep: 
Exception during request[139960964907248]: Network interface brqa235fa8c-09 not 
found in namespace None. _process_cmd 
/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py:488
  Traceback (most recent call last):
File 
"/home/zuul/src/opendev.org/openstack/neutron/.tox/dsvm-fullstack-gate/lib/python3.8/site-packages/oslo_privsep/daemon.py",
 line 485, 

[Yahoo-eng-team] [Bug 2054119] Re: [OVN] LRP in tunnelled networks cannot be removed

2024-02-16 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2054119

Title:
  [OVN] LRP in tunnelled networks cannot be removed

Status in neutron:
  Invalid

Bug description:
  Since [1], an LRP located in a tunnelled network won't have a
  "gateway_chassis" assigned (or a "redirect-chassis" key in options);
  that logic was implemented in [2]. However, an LRP that is acting as
  a GW has the key "neutron:is_ext_gw" in external_ids, which can be
  used to identify a GW LRP.

  [1]https://review.opendev.org/c/openstack/neutron/+/908325
  [2]https://review.opendev.org/c/openstack/neutron/+/877831

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2054119/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2054119] [NEW] [OVN] LRP in tunnelled networks cannot be removed

2024-02-16 Thread Rodolfo Alonso
Public bug reported:

Since [1], an LRP located in a tunnelled network won't have a
"gateway_chassis" assigned (or a "redirect-chassis" key in options);
that logic was implemented in [2]. However, an LRP that is acting as a
GW has the key "neutron:is_ext_gw" in external_ids, which can be used
to identify a GW LRP.

[1]https://review.opendev.org/c/openstack/neutron/+/908325
[2]https://review.opendev.org/c/openstack/neutron/+/877831

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2054119

Title:
  [OVN] LRP in tunnelled networks cannot be removed

Status in neutron:
  New

Bug description:
  Since [1], an LRP located in a tunnelled network won't have a
  "gateway_chassis" assigned (or a "redirect-chassis" key in options);
  that logic was implemented in [2]. However, an LRP that is acting as
  a GW has the key "neutron:is_ext_gw" in external_ids, which can be
  used to identify a GW LRP.

  [1]https://review.opendev.org/c/openstack/neutron/+/908325
  [2]https://review.opendev.org/c/openstack/neutron/+/877831

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2054119/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052681] Re: Many stale neutron-keepalived-state-change processes left after upgrade to native pyroute2 state-change

2024-02-16 Thread Rodolfo Alonso
Hello:

I've tested with an older version of Neutron (Stein) and the problem you
are describing is not related to the new implementation of "neutron-
keepalived-state-change" process, but how this process was stopped.

The "ip -o monitor" processes are child processes of "neutron-
keepalived-state-change" and are (should be) stopped when "neutron-
keepalived-state-change" is. If the process is killed, the child
processes won't be correctly stopped. If "neutron-keepalived-state-
change" is started again (with the old or the new implementation), the
"ip -o monitor" leftovers will remain in the system.

Please check how you are stopping the "neutron-keepalived-state-change"
processes and how you are upgrading your system.
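One way to stop a parent together with its helper children, sketched below, is to start the child in its own process group and signal the whole group. This is an illustrative Linux-only sketch (using "sleep" as a stand-in for "ip -o monitor"), not the Neutron implementation:

```python
import os
import signal
import subprocess


def start_monitor():
    # start_new_session puts the child (and any children it spawns) into
    # a new process group whose pgid equals the child's pid.
    return subprocess.Popen(["sleep", "60"], start_new_session=True)


def stop_monitor(proc):
    # Signal the whole group so helper processes do not outlive the parent.
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()
```

Killing the parent with a plain SIGKILL, by contrast, leaves the group members running, which matches the leftovers described above.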

Regards.


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052681

Title:
  Many stale neutron-keepalived-state-change processes left after
  upgrade to native pyroute2 state-change

Status in neutron:
  Invalid

Bug description:
  Needs a post-upgrade script to remove those stale "ip -o monitor" and
  traditional "neutron-keepalived-state-change" processes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052681/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2053122] [NEW] [OVN] Since "service type" for OVN routers feature, previously created routers cannot be deleted

2024-02-14 Thread Rodolfo Alonso
Public bug reported:

Since [1], any new OVN router has a service provider configuration. This
is something not present in OVN routers created before this
functionality.

When we try to delete an "old" OVN router, this is the exception
received: [3].

The associated ``ProviderResourceAssociation`` register is missing for
those OVN routers created before this feature.


[1]https://review.opendev.org/c/openstack/neutron/+/883988
[2]https://review.opendev.org/c/openstack/neutron/+/883988/83/neutron/services/ovn_l3/service_providers/driver_controller.py#42
[3]https://paste.opendev.org/show/bXHekK7yKUj7fbvma6Bf/
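The shape of the fix is a backfill of the missing association rows. A hypothetical illustration using sqlite3 (the table and column names are invented for the example and do not match Neutron's schema exactly):

```python
import sqlite3


def backfill_associations(conn, provider_name):
    # Insert a (provider, resource) association row for every router that
    # does not have one yet, i.e. routers created before the feature.
    conn.execute(
        "INSERT INTO providerresourceassociations "
        "(provider_name, resource_id) "
        "SELECT ?, r.id FROM routers r "
        "WHERE r.id NOT IN "
        "(SELECT resource_id FROM providerresourceassociations)",
        (provider_name,))
    conn.commit()
```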

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2053122

Title:
  [OVN] Since "service type" for OVN routers feature, previously created
  routers cannot be deleted

Status in neutron:
  New

Bug description:
  Since [1], any new OVN router has a service provider configuration.
  This is something not present in OVN routers created before this
  functionality.

  When we try to delete an "old" OVN router, this is the exception
  received: [3].

  The associated ``ProviderResourceAssociation`` register is missing
  for those OVN routers created before this feature.

  
  [1]https://review.opendev.org/c/openstack/neutron/+/883988
  
[2]https://review.opendev.org/c/openstack/neutron/+/883988/83/neutron/services/ovn_l3/service_providers/driver_controller.py#42
  [3]https://paste.opendev.org/show/bXHekK7yKUj7fbvma6Bf/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2053122/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052915] [NEW] "neutron-ovs-grenade-multinode" and "neutron-ovn-grenade-multinode" failing in 2023.1 and Zed

2024-02-12 Thread Rodolfo Alonso
Public bug reported:

The issue seems to be in the neutron-lib version installed:
2024-02-07 16:19:35.155231 | compute1 | ERROR: neutron 21.2.1.dev38 has 
requirement neutron-lib>=3.1.0, but you'll have neutron-lib 2.20.2 which is 
incompatible.

That leads to an error when starting the Neutron API (an API definition is not 
found) [1]:
Feb 07 16:13:54.385467 np0036680724 neutron-server[67288]: ERROR neutron 
ImportError: cannot import name 'port_mac_address_override' from 
'neutron_lib.api.definitions' 
(/usr/local/lib/python3.8/dist-packages/neutron_lib/api/definitions/__init__.py)

Setting priority to Critical because that affects to the CI.

[1]https://9faad8159db8d6994977-b587eccfce0a645f527dfcbc49e54bb4.ssl.cf2.rackcdn.com/891397/4/check/neutron-
ovs-grenade-multinode/ba47cef/controller/logs/screen-q-svc.txt

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052915

Title:
  "neutron-ovs-grenade-multinode" and "neutron-ovn-grenade-multinode"
  failing in 2023.1 and Zed

Status in neutron:
  New

Bug description:
  The issue seems to be in the neutron-lib version installed:
  2024-02-07 16:19:35.155231 | compute1 | ERROR: neutron 21.2.1.dev38 has requirement neutron-lib>=3.1.0, but you'll have neutron-lib 2.20.2 which is incompatible.

  That leads to an error when starting the Neutron API (an API definition is
  not found) [1]:
  Feb 07 16:13:54.385467 np0036680724 neutron-server[67288]: ERROR neutron ImportError: cannot import name 'port_mac_address_override' from 'neutron_lib.api.definitions' (/usr/local/lib/python3.8/dist-packages/neutron_lib/api/definitions/__init__.py)

  Setting priority to Critical because this affects the CI.

  
  [1]https://9faad8159db8d6994977-b587eccfce0a645f527dfcbc49e54bb4.ssl.cf2.rackcdn.com/891397/4/check/neutron-ovs-grenade-multinode/ba47cef/controller/logs/screen-q-svc.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052915/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052821] [NEW] [OVN] Pin a Logical_Router to a chassis when external network is tunnelled

2024-02-09 Thread Rodolfo Alonso
Public bug reported:

When a gateway network (external network) is added to a router (OVN
Logical_Router), a gateway port (OVN Logical_Router_Port) is created. If
the router GW network is a tunnelled network, the GW LRP won't be bound
to any GW chassis; the network has no physical network thus there is no
correspondence with any GW physical bridge. Check [1] for more
information.

To send traffic through a GW chassis, the LR must be pinned to a chassis
manually:
  LR:options:chassis=
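As a sketch, the pinning step above could be scripted. The following helper only builds the ``ovn-nbctl`` argv that sets ``options:chassis`` on a Logical_Router; the router and chassis names are placeholders, not values from this report:

```python
# Hedged sketch: builds the ovn-nbctl invocation that pins a Logical_Router
# to a chassis via options:chassis. Names are illustrative placeholders.
def build_pin_command(router, chassis):
    """Return the ovn-nbctl argv setting options:chassis on a Logical_Router."""
    return ['ovn-nbctl', 'set', 'Logical_Router', router,
            'options:chassis=%s' % chassis]

print(' '.join(build_pin_command('lr-demo', 'gw-chassis-1')))
# -> ovn-nbctl set Logical_Router lr-demo options:chassis=gw-chassis-1
```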

Note: there is a proposal [3] to make this a core OVN feature, allowing
a "HA_Chassis_Group" to be assigned to a Logical Router. This LP bug
aims to implement the same functionality in the Neutron code.

[1]https://review.opendev.org/c/openstack/neutron/+/908325

References:
[2]https://bugzilla.redhat.com/show_bug.cgi?id=2259161
[3]https://issues.redhat.com/browse/FDP-365

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052821

Title:
  [OVN] Pin a Logical_Router to a chassis when external network is
  tunnelled

Status in neutron:
  New

Bug description:
  When a gateway network (external network) is added to a router (OVN
  Logical_Router), a gateway port (OVN Logical_Router_Port) is created.
  If the router GW network is a tunnelled network, the GW LRP won't be
  bound to any GW chassis; the network has no physical network thus
  there is no correspondence with any GW physical bridge. Check [1] for
  more information.

  To send traffic through a GW chassis, the LR must be pinned to a
  chassis manually:
    LR:options:chassis=

  Note: there is a proposal [3] to make this a core OVN feature,
  allowing a "HA_Chassis_Group" to be assigned to a Logical Router. This
  LP bug aims to implement the same functionality in the Neutron code.

  [1]https://review.opendev.org/c/openstack/neutron/+/908325

  References:
  [2]https://bugzilla.redhat.com/show_bug.cgi?id=2259161
  [3]https://issues.redhat.com/browse/FDP-365

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052821/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052786] [NEW] Add "socket" to NUMA affinity policy

2024-02-09 Thread Rodolfo Alonso
Public bug reported:

This is an extension of [1]. The goal of this bug is to add a new field
to the supported NUMA affinity policy list: "socket".

[1]https://bugs.launchpad.net/neutron/+bug/1886798

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052786

Title:
  Add "socket" to NUMA affinity policy

Status in neutron:
  New

Bug description:
  This is an extension of [1]. The goal of this bug is to add a new
  field to the supported NUMA affinity policy list: "socket".

  [1]https://bugs.launchpad.net/neutron/+bug/1886798

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052786/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052419] [NEW] SG list retrieval is slow because we are always loading "synthetic_fields"

2024-02-05 Thread Rodolfo Alonso
Public bug reported:

Since [1], we introduced a way to skip the load of the OVO synthetic
fields depending on the resource fields retrieved. In the case of the
security groups (SG), the SG rules are child objects to the SGs. The SG
rules are retrieved when a SG OVO is created.

The improvement done in [1] is to make the SG rules load dynamically,
that means using the load mode "lazy='dynamic'". That will issue a SQL
query only if the SG rules are read; if not, the query is never issued.
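For illustration, the load mode described above can be sketched with toy models (these are not Neutron's real schema, just a minimal SQLAlchemy example of ``lazy='dynamic'``):

```python
# Standalone sketch (toy models, not Neutron's real schema) of the
# lazy='dynamic' load mode: the child table is only queried when the
# relationship attribute is actually used.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class SecurityGroup(Base):
    __tablename__ = 'sg'
    id = Column(Integer, primary_key=True)
    # lazy='dynamic' exposes .rules as a query object; no SELECT against
    # the rules table happens unless the attribute is iterated or counted.
    rules = relationship('SecurityGroupRule', lazy='dynamic')

class SecurityGroupRule(Base):
    __tablename__ = 'sg_rule'
    id = Column(Integer, primary_key=True)
    sg_id = Column(Integer, ForeignKey('sg.id'))
    direction = Column(String(16))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(SecurityGroup(id=1))
    session.add_all([SecurityGroupRule(id=i, sg_id=1, direction='ingress')
                     for i in (1, 2)])
    session.commit()
    sg = session.get(SecurityGroup, 1)
    print(sg.rules.count())  # -> 2
```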

However, since [3] (which predates the [1] optimization), we always add
the field "shared" to the filters, and thus to the fields to retrieve,
because it is a policy-required field. Because "shared" is a synthetic
field [4], this always forces the SG "synthetic_fields" load and the
retrieval of the SG rules, undoing any performance improvement.

Because the "shared" object belongs to the RBAC functionality and we are
always going to load it, we need another way to load the "shared"
synthetic field *without* loading the others (if no other synthetic
field is required, as is for example the "os security group list"
command).


[1]https://review.opendev.org/q/topic:%22bug/1810563%22
[2]https://github.com/openstack/neutron/blob/b85b19e3846fa74975ba5d703336b6e7cd8af433/neutron/db/models/securitygroup.py#L90-L93
[3]https://review.opendev.org/c/openstack/neutron/+/328313
[4]https://github.com/openstack/neutron/blob/b85b19e3846fa74975ba5d703336b6e7cd8af433/neutron/objects/rbac_db.py#L349

** Affects: neutron
 Importance: Medium
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052419

Title:
  SG list retrieval is slow because we are always loading
  "synthetic_fields"

Status in neutron:
  New

Bug description:
  Since [1], we introduced a way to skip the load of the OVO synthetic
  fields depending on the resource fields retrieved. In the case of the
  security groups (SG), the SG rules are child objects to the SGs. The
  SG rules are retrieved when a SG OVO is created.

  The improvement done in [1] is to make the SG rules load dynamically,
  that means using the load mode "lazy='dynamic'". That will issue a SQL
  query only if the SG rules are read; if not, the query is never
  issued.

  However, since [3] (which predates the [1] optimization), we always
  add the field "shared" to the filters, and thus to the fields to
  retrieve, because it is a policy-required field. Because "shared" is a
  synthetic field [4], this always forces the SG "synthetic_fields" load
  and the retrieval of the SG rules, undoing any performance
  improvement.

  Because the "shared" object belongs to the RBAC functionality and we
  are always going to load it, we need another way to load the "shared"
  synthetic field *without* loading the others (if no other synthetic
  field is required, as is for example the "os security group list"
  command).

  
  [1]https://review.opendev.org/q/topic:%22bug/1810563%22
  
[2]https://github.com/openstack/neutron/blob/b85b19e3846fa74975ba5d703336b6e7cd8af433/neutron/db/models/securitygroup.py#L90-L93
  [3]https://review.opendev.org/c/openstack/neutron/+/328313
  
[4]https://github.com/openstack/neutron/blob/b85b19e3846fa74975ba5d703336b6e7cd8af433/neutron/objects/rbac_db.py#L349

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052419/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1686405] Re: Wrong port status after some operations

2024-01-29 Thread Rodolfo Alonso
Hello:

The reproducer involves operations not supported in OpenStack. We don't
support manual OVS operations like "ovs-vsctl del-port". Any port
creation/deletion/update should be done via OpenStack API; os-vif
(Neutron in some cases) will be in charge of any operation on the
device, depending on the backend.

Regards.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1686405

Title:
  Wrong port status after some operations

Status in neutron:
  Invalid

Bug description:
  The environment is OpenStack master branch, all-in-one devstack,
  openvswitch agent, DHCP agent.
  After some operations, a Nova port ends up in the wrong status. This
  issue can be reproduced by the following steps:
  1. Remove the OVS port from the OVS bridge using the "ovs-vsctl
  del-port" command; the Neutron port status changes to DOWN, which is
  correct. [1]
  2. Update the name of the port; the port changes to ACTIVE status,
  which is wrong. [2]

  It seems to be a problem with the DHCP provisioning block: when
  updating the port name, a DHCP provisioning block is added, then the
  DHCP agent completes this provisioning block and makes the port
  ACTIVE.

  [1] After step 1:
  | admin_state_up    | True
  | binding:host_id   | c4
  | binding:vif_type  | ovs
  | binding:vnic_type | normal
  | device_owner      | compute:nova
  | id                | cad2e6a0-5bda-4e4d-9232-ba7c06acf28e
  | status            | DOWN
  | updated_at        | 2017-04-26T13:32:22Z

  [2] After step 2:
  | admin_state_up    | True
  | binding:host_id   | c4
  | binding:vif_type  | ovs
  | binding:vnic_type | normal
  | device_owner      | compute:nova
  | id                | cad2e6a0-5bda-4e4d-9232-ba7c06acf28e
  | name              | bbc
  | status            | ACTIVE
  | updated_at        | 2017-04-26T13:34:39Z

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1686405/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2039586] Re: [CI] Update the cirros image versions

2024-01-29 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2039586

Title:
  [CI] Update the cirros image versions

Status in neutron:
  Fix Released

Bug description:
  Since [1], some cirros images have been deleted from the CI cache. It
  could happen [2] that using cirros images that are not cached, which
  implies downloading them, fails. Any version not cached should be
  replaced by its closest cached image version.

  [1]https://review.opendev.org/c/openstack/project-config/+/873735
  [2]https://zuul.openstack.org/build/97f5b27439ca469a9cd110087a386c62

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039586/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662477] Re: rbac shared network add_router_interface fails for non-admin

2024-01-29 Thread Rodolfo Alonso
Hello:

I can't reproduce this bug with newer versions. I'll close it.

Regards.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1662477

Title:
  rbac shared network add_router_interface fails for non-admin

Status in neutron:
  Invalid

Bug description:
  We are on mitaka and use rbac to share private networks.
  We defined in the neutron policy that non-admin users can attach router 
interfaces, but this fails on shared networks, because rbac is not taken into 
account here: 
https://github.com/openstack/neutron/blob/a0e0e8b6686b847a4963a6aa6a3224b5768544e6/neutron/api/v2/attributes.py#L372

  The related error, that led me to that line is this:
  http://paste.openstack.org/show/597918/

  And this is still present in master:
  
https://github.com/openstack/neutron/blob/1c5bf09a03b0fe463ba446d2a19087be7a0504a7/neutron/api/v2/attributes.py#L372

  I'm happy to give more details, if needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1662477/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051510] [NEW] [OVN] Audit the L2/L3 extensions support

2024-01-29 Thread Rodolfo Alonso
Public bug reported:

The ML2/OVN mechanism driver has its supported L2 and L3 extensions
hardcoded in [1]. The list of extensions supported in ML2/OVN and
Neutron needs to be reviewed in order to provide:
* A list of unsupported extensions.
* A fix for any extension that should be supported but is not in the
hardcoded list.

[1]https://github.com/openstack/neutron/blob/c6ac441a5160b79c48d04596ab464e0bac9f6592/neutron/common/ovn/extensions.py

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051510

Title:
  [OVN] Audit the L2/L3 extensions support

Status in neutron:
  New

Bug description:
  The ML2/OVN mechanism driver has its supported L2 and L3 extensions
  hardcoded in [1]. The list of extensions supported in ML2/OVN and
  Neutron needs to be reviewed in order to provide:
  * A list of unsupported extensions.
  * A fix for any extension that should be supported but is not in the
  hardcoded list.

  
[1]https://github.com/openstack/neutron/blob/c6ac441a5160b79c48d04596ab464e0bac9f6592/neutron/common/ovn/extensions.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051510/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051501] [NEW] [UT] "openstack-tox-py311-with-sqlalchemy-master" failing because of https://review.opendev.org/c/openstack/neutron-lib/+/903841

2024-01-29 Thread Rodolfo Alonso
Public bug reported:

Since [1], the test ``TimeStampDBMixinTestCase.test_update_timpestamp``
is failing because the ``timeutils.utcnow`` method is now also called
from ``ContextBase`` during the init method. This happens when using
the neutron-lib master branch (latest tag: 3.10.0).

[1]https://review.opendev.org/c/openstack/neutron-lib/+/903841
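The failure mode can be illustrated with a toy reproduction (the names below are invented, not the real neutron-lib code): a test that feeds mocked ``side_effect`` values to ``utcnow`` breaks as soon as an unrelated init path starts consuming them too.

```python
# Toy reproduction: a mock of utcnow with a single side_effect value is
# silently consumed by a new call in a constructor, so the call under
# test hits StopIteration. Names are illustrative, not neutron-lib's.
import datetime
from unittest import mock

class FakeTimeutils:
    @staticmethod
    def utcnow():
        return datetime.datetime.utcnow()

timeutils = FakeTimeutils()

class Context:
    def __init__(self):
        # New behavior: the context also reads the clock on init,
        # consuming one of the mocked return values.
        self.timestamp = timeutils.utcnow()

def stamp():
    return timeutils.utcnow()

expected = datetime.datetime(2024, 1, 29, 12, 0, 0)
with mock.patch.object(FakeTimeutils, 'utcnow', side_effect=[expected]):
    ctx = Context()          # eats the single mocked value...
    exhausted = False
    try:
        stamp()              # ...so the call under test raises StopIteration
    except StopIteration:
        exhausted = True

print('mock exhausted by Context.__init__:', exhausted)
# -> mock exhausted by Context.__init__: True
```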

** Affects: neutron
 Importance: Critical
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051501

Title:
  [UT] "openstack-tox-py311-with-sqlalchemy-master" failing because of
  https://review.opendev.org/c/openstack/neutron-lib/+/903841

Status in neutron:
  In Progress

Bug description:
  Since [1], the test
  ``TimeStampDBMixinTestCase.test_update_timpestamp`` is failing because
  the ``timeutils.utcnow`` method is now also called from
  ``ContextBase`` during the init method. This happens when using the
  neutron-lib master branch (latest tag: 3.10.0).

  [1]https://review.opendev.org/c/openstack/neutron-lib/+/903841

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051501/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1964575] Re: [DB] Migration to SQLAlchemy 2.0

2024-01-24 Thread Rodolfo Alonso
Yes, I think we can close this one and consider it released. So far we
haven't found any new errors in the CI jobs. Any new issue can be
tracked in a new LP bug.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1964575

Title:
  [DB] Migration to SQLAlchemy 2.0

Status in neutron:
  Fix Released

Bug description:
  This is a container for the efforts to be done in Neutron, neutron-lib
  and plugins projects to migrate to SQLAlchemy 2.0.

  There is currently a patch in neutron-lib to disable the session
  "__autocommit" flag, which will be optional in SQLAlchemy 1.4 and
  mandatory in SQLAlchemy 2.0 [1]. We have found problems with how the
  session transactions are now handled by SQLAlchemy.

  In Neutron there are many places where we make a database call running
  on an implicit transaction, meaning we don't explicitly create a
  reader/writer context. With "autocommit=True", this transaction is
  discarded immediately; under non-autocommit sessions, the transaction
  created remains open. That is leading to database errors, as seen in
  the tempest tests.

  In [2], as recommended by Mike Bayer (main maintainer and author of
  SQLAlchemy), we have enabled again the "autocommit" flag and create a
  log message to track when Neutron tries to execute a command with
  session with an inactive transaction.

  The goal of this bug is to move all Neutron database interactions to
  be SQLAlchemy 2.0 compliant.

  
  [1]https://review.opendev.org/c/openstack/neutron-lib/+/828738
  [2]https://review.opendev.org/c/openstack/neutron-lib/+/833103
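As a generic illustration of the target pattern (plain SQLAlchemy, not Neutron's actual reader/writer decorators), database work should run inside an explicit transaction scope rather than relying on the old autocommit behavior:

```python
# Sketch of the SQLAlchemy 2.0-friendly pattern: do DB work inside an
# explicit transaction block instead of an implicit autocommit one.
from sqlalchemy import create_engine, text

engine = create_engine('sqlite://')

# engine.begin() opens a connection plus an explicit transaction and
# commits on successful exit; nothing is left as a dangling implicit
# transaction.
with engine.begin() as conn:
    conn.execute(text('CREATE TABLE t (x INTEGER)'))
    conn.execute(text('INSERT INTO t VALUES (1)'))

with engine.connect() as conn:
    print(conn.execute(text('SELECT x FROM t')).scalar())  # -> 1
```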

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1964575/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038978] Re: [OVN] Floating IP <=> Floating IP across subnets

2024-01-24 Thread Rodolfo Alonso
*** This bug is a duplicate of bug 2035281 ***
https://bugs.launchpad.net/bugs/2035281

This issue is the same as
https://bugs.launchpad.net/neutron/+bug/2035281 and fixed in
https://review.opendev.org/c/openstack/neutron/+/895260.

** This bug has been marked a duplicate of bug 2035281
   [ML2/OVN] DGP/Floating IP issue - no flows for chassis gateway port

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038978

Title:
  [OVN] Floating IP <=> Floating IP across subnets

Status in neutron:
  In Progress

Bug description:
  When using OVN, if you have a virtual router with a gateway that is in
  subnet A, and has a port that has a floating IP attached to it from
  subnet B, they seem to not be reachable.

  https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385253.html

  There was a fix brought into OVN with this not long ago, it introduces
  an option called `options:add_route` to `true`.

  see: https://mail.openvswitch.org/pipermail/ovs-
  dev/2021-July/385255.html

  I think we should do this in order to mirror the same behaviour in
  ML2/OVS since we install scope link routes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038978/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049622] Re: Some Port Properties Cannot Be Set

2024-01-22 Thread Rodolfo Alonso
Hello:

In order to be able to define the DNS records of a port, you need to enable the 
extension driver "dns":
  $ cat /etc/neutron/plugins/ml2/ml2_conf.ini
  [ml2]
  extension_drivers = ...,dns

You should also define a DNS domain different from the default one 
"openstacklocal.". This should be set in the neutron configuration:
  $ cat /etc/neutron/neutron.conf
  [DEFAULT]
  dns_domain = fistro.

Then you'll be able to define a DNS name per port.

Regards.


** Changed in: neutron
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049622

Title:
  Some Port Properties Cannot Be Set

Status in neutron:
  Invalid

Bug description:
  Hi,

  Some port properties, like dns-name, cannot be set. We got mismatch
  errors in some tempest tests, as below. I debugged it and saw that the
  dns-name property cannot be set when creating a port. I checked with
  OpenStack CLI commands and saw that the dns-name property cannot be
  set. I could set this property in the Wallaby version.

  Openstack version: Zed
  Openstack cli version: 6.4.0

  Some Tempest Tests:
  
neutron_tempest_plugin.scenario.test_internal_dns.InternalDNSTest.test_create_and_update_port_with_dns_name
  
neutron_tempest_plugin.api.test_ports.PortsTestJSON.test_create_update_port_with_dns_domain
  
neutron_tempest_plugin.api.test_ports.PortsTestJSON.test_create_update_port_with_dns_name
  
neutron_tempest_plugin.api.test_revisions.TestRevisions.test_update_dns_domain_bumps_revision

  '''
  $ openstack port show b976b4c3-7fc6-430a-a587-33520a4b44c0
  
  | Field                 | Value
  | admin_state_up        | UP
  | allowed_address_pairs |
  | binding_host_id       |
  | binding_profile       |
  | binding_vif_details   |
  | binding_vif_type      | unbound
  | binding_vnic_type     | normal
  | created_at            | 2024-01-17T11:50:45Z
  | data_plane_status     | None
  | description           |
  | device_id             |
  | device_owner          |
  | device_profile        | None
  | dns_assignment        | fqdn='host-10-100-0-10.openstacklocal.', hostname='host-10-100-0-10', ip_address='10.100.0.10'
  | dns_domain            |
  | dns_name              |
  | extra_dhcp_opts       |
  | fixed_ips             | ip_address='10.100.0.10', subnet_id='76262d47-90be-4182-9678-8eaa01661851'
  | hints                 |
  | id                    | b976b4c3-7fc6-430a-a587-33520a4b44c0
  | ip_allocation         | None
  | mac_address           | fa:16:3e:2c:02:19
  | name                  | tempest-internal-dns-test-port-853671869

[Yahoo-eng-team] [Bug 2049623] [NEW] [RFE] Refactor OVS Trunk plugin to have one single port

2024-01-17 Thread Rodolfo Alonso
Public bug reported:

The aim of this RFE is to refactor the OVS Trunk plugin, in order to
have one single port between the trunk bridge and the integration
bridge. The VLAN filtering will be done by configuring the port with
"vlan_mode=trunk" and defining the corresponding VLAN IDs in the "trunk"
parameter, including VLAN 0 for the untagged traffic that is handled by
the parent port.

This is just an initial idea that should be refined, tested and
presented first as a POC in order to check its validity.
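A rough sketch of what the resulting OVS configuration could look like (the port name and VLAN IDs are made up, and this assumes the OVS Port table columns "vlan_mode" and "trunks"):

```python
# Illustrative sketch only (not a working POC): builds the ovs-vsctl argv
# that would configure the single trunk port with VLAN filtering, including
# VLAN 0 for the parent port's untagged traffic.
def build_trunk_port_args(port, subport_vlans):
    """Return ovs-vsctl argv setting vlan_mode=trunk and the allowed VLANs."""
    vlans = [0] + sorted(subport_vlans)
    return ['ovs-vsctl', 'set', 'Port', port, 'vlan_mode=trunk',
            'trunks=%s' % ','.join(str(v) for v in vlans)]

print(' '.join(build_trunk_port_args('tpt-demo', [101, 102])))
# -> ovs-vsctl set Port tpt-demo vlan_mode=trunk trunks=0,101,102
```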

** Affects: neutron
 Importance: Wishlist
 Status: New

** Changed in: neutron
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049623

Title:
  [RFE] Refactor OVS Trunk plugin to have one single port

Status in neutron:
  New

Bug description:
  The aim of this RFE is to refactor the OVS Trunk plugin, in order to
  have one single port between the trunk bridge and the integration
  bridge. The VLAN filtering will be done by configuring the port with
  "vlan_mode=trunk" and defining the corresponding VLAN IDs in the
  "trunk" parameter, including VLAN 0 for the untagged traffic that is
  handled by the parent port.

  This is just an initial idea that should be refined, tested and
  presented first as a POC in order to check its validity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049623/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049590] [NEW] Restore the device namespace if "set_netns" fails

2024-01-17 Thread Rodolfo Alonso
Public bug reported:

If the method ``IpLinkCommand.set_netns`` [1] fails, the device
namespace should keep the previous value.

[1]https://github.com/openstack/neutron/blob/12115302944293b7d6b022f5acb68fe9c649a53e/neutron/agent/linux/ip_lib.py#L479
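A minimal sketch of the proposed behavior, using an invented device object rather than the real ``ip_lib`` API: if moving the device to the new namespace fails, it is moved back so the tracked namespace keeps its previous value.

```python
# Hypothetical sketch of the proposed fix (names invented, not the real
# neutron.agent.linux.ip_lib code): restore the original namespace when
# the move to the target namespace fails.
class FakeDevice:
    def __init__(self, namespace):
        self.namespace = namespace

def set_netns_with_restore(device, target_ns, move_fn):
    old_ns = device.namespace
    try:
        move_fn(device, target_ns)
        device.namespace = target_ns
    except Exception:
        # Restore step: put the device back in its original namespace.
        move_fn(device, old_ns)
        device.namespace = old_ns
        raise

def failing_move(device, ns):
    # Simulated kernel-level failure: only moves back to the current
    # namespace succeed.
    if ns != device.namespace:
        raise RuntimeError('netns move failed')

dev = FakeDevice('ns-a')
try:
    set_netns_with_restore(dev, 'ns-b', failing_move)
except RuntimeError:
    pass
print(dev.namespace)  # -> ns-a
```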

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049590

Title:
  Restore the device namespace if "set_netns" fails

Status in neutron:
  In Progress

Bug description:
  If the method ``IpLinkCommand.set_netns`` [1] fails, the device
  namespace should keep the previous value.

  
[1]https://github.com/openstack/neutron/blob/12115302944293b7d6b022f5acb68fe9c649a53e/neutron/agent/linux/ip_lib.py#L479

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049590/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049265] [NEW] [OVN] The ``MaintenanceWorker`` should register the "Private_Chassis" table in the SB connection

2024-01-13 Thread Rodolfo Alonso
Public bug reported:

Since [1], the ``MaintenanceWorker`` uses the "Private_Chassis" table to
perform a database clean up of the duplicated "Private_Chassis" and
"Chassis" registers. This table should be added to the IDL connection.

[1]https://review.opendev.org/q/Ib3c6f0dc01efd31430691e720ba23ccb4ede65fa

** Affects: neutron
 Importance: High
 Assignee: Terry Wilson (otherwiseguy)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Terry Wilson (otherwiseguy)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049265

Title:
  [OVN] The ``MaintenanceWorker`` should register the "Private_Chassis"
  table in the SB connection

Status in neutron:
  New

Bug description:
  Since [1], the ``MaintenanceWorker`` uses the "Private_Chassis" table
  to perform a database clean up of the duplicated "Private_Chassis" and
  "Chassis" registers. This table should be added to the IDL connection.

  [1]https://review.opendev.org/q/Ib3c6f0dc01efd31430691e720ba23ccb4ede65fa

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049265/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2048198] [NEW] neutron-lib role enforcer is writing warning messages

2024-01-05 Thread Rodolfo Alonso
Public bug reported:

If any policy rule is defined in the policy file, the neutron-lib role
enforcer writes WARNING log messages. E.g.:
"""
WARNING oslo_policy.policy [None req-99d94096-bf0c-4898-979c-42da329c27a4
None None] Policies ['get_floatingip'] reference a rule that is not defined.
"""

This happens when the policy file rules are loaded. These rules might
not be among the rule enforcer base rules [2] (currently 3: admin,
advanced service and service role). If that happens, the oslo.policy
``Enforcer`` will write the referred WARNING message.

Although this behavior is harmless, a way to avoid these kinds of
messages is needed.


[1]https://github.com/openstack/neutron-lib/blob/9e3a3a608670d2d7bc0ae98fd39551920e563efe/neutron_lib/policy/_engine.py#L62-L66
[2]https://github.com/openstack/neutron-lib/blob/9e3a3a608670d2d7bc0ae98fd39551920e563efe/neutron_lib/policy/_engine.py#L33-L46

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2048198

Title:
  neutron-lib role enforcer is writing warning messages

Status in neutron:
  New

Bug description:
  If any policy rule is defined in the policy file, the neutron-lib role
  enforcer writes WARNING log messages. E.g.:
  """
  WARNING oslo_policy.policy [None req-99d94096-bf0c-4898-979c-42da329c27a4
  None None] Policies ['get_floatingip'] reference a rule that is not
  defined.
  """

  This happens when the policy file rules are loaded. These rules might
  not be among the rule enforcer base rules [2] (currently 3: admin,
  advanced service and service role). If that happens, the oslo.policy
  ``Enforcer`` will write the referred WARNING message.

  Although this behavior is harmless, a way to avoid these kinds of
  messages is needed.

  
  
[1]https://github.com/openstack/neutron-lib/blob/9e3a3a608670d2d7bc0ae98fd39551920e563efe/neutron_lib/policy/_engine.py#L62-L66
  
[2]https://github.com/openstack/neutron-lib/blob/9e3a3a608670d2d7bc0ae98fd39551920e563efe/neutron_lib/policy/_engine.py#L33-L46

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2048198/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2041861] Re: [ovn] instance in a shared network owned by another project is unreachable via floating IP

2024-01-02 Thread Rodolfo Alonso
Hello:

About the bug description: I can't reproduce the reported issue. I've
deployed the master branch and I can use VLAN, VXLAN or Geneve as the tenant
network without any issue. The metadata agent creates the corresponding
namespaces and serves the metadata to the cloud-init requests without any
issue. IMO, though this is just a surmise, the problem could be in the OVN
metadata agent you have deployed. Please check the logs and report them here
if there is any issue.

As Bence commented, if the issue is in the OS cloud-init, then it needs
further investigation on the OS itself, unrelated to Neutron.

About comment c#3, I can't reproduce that either. I've used Geneve and VXLAN
networks, shared from one project to another. The VMs created by the project
owner and by the other project both work when connected to a floating IP. The
only missing step in c#3 I would point out is the SG rule needed to accept
ping packets from an external device (I'm pinging from the host, that is, an
IP that doesn't belong to the SG remote group):
  # openstack security group rule create --ethertype IPv4 --protocol icmp 
--ingress $sg

Once the SG rule is added, I can ping the floating IP of the VMs from both
projects (the owner of the network and the project using the shared network).

I'll keep this bug as "status=invalid" unless new
logs/evidence/reproducers are reported.

Regards.

** Changed in: neutron
   Status: Triaged => Invalid

-- 
https://bugs.launchpad.net/bugs/2041861

Title:
  [ovn] instance in a shared network owned by another project is
  unreachable via floating IP

Status in neutron:
  Invalid

Bug description:
  In my OpenStack environment, whenever I create an instance in a shared
network owned by another project, the instance becomes unreachable, even via
a floating IP. I'm using the latest kolla-ansible with OVN networking. I
cannot even SSH into the server because it does not run cloud-init to set the
password (and also because the public network is unreachable). Spice brings
up the login prompt, but I cannot log in due to the lack of credentials.

   Any suggestion on how to solve the issue?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2041861/+subscriptions




[Yahoo-eng-team] [Bug 2047049] [NEW] [UT] OVN fake resources ``create`` method should return instance

2023-12-20 Thread Rodolfo Alonso
Public bug reported:

``FakeChassis``, ``FakeOVNRouter`` and ``FakeOVNPort`` factory methods should
return an instance of the object instead of the class object built with
``type``.
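The difference between returning the dynamically built class and returning an
instance of it can be sketched as follows (hypothetical minimal factories, not
the actual neutron test fakes):

```python
# Hypothetical illustration of the bug: a factory built with type()
# that returns the class itself instead of an instance of it.

def broken_create(name="chassis-1"):
    # Bug: returns the *class* object produced by type(), not an instance.
    return type("FakeChassis", (object,), {"name": name})

def fixed_create(name="chassis-1"):
    # Fix: instantiate the dynamically built class before returning it.
    return type("FakeChassis", (object,), {"name": name})()

broken = broken_create()
fixed = fixed_create()
print(isinstance(broken, type))  # True  -> callers get a class, not an object
print(isinstance(fixed, type))   # False -> callers get a usable instance
print(fixed.name)                # chassis-1
```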

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
https://bugs.launchpad.net/bugs/2047049

Title:
  [UT] OVN fake resources ``create`` method should return instance

Status in neutron:
  New

Bug description:
  ``FakeChassis``, ``FakeOVNRouter`` and ``FakeOVNPort`` factory methods
  should return an instance of the object instead of the class object built
  with ``type``.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2047049/+subscriptions




[Yahoo-eng-team] [Bug 2046457] Re: ovn provider fails with "AttributeError: 'Client' object has no attribute 'ports'"

2023-12-19 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Triaged => Fix Released

-- 
https://bugs.launchpad.net/bugs/2046457

Title:
  ovn provider fails with "AttributeError: 'Client' object has no
  attribute 'ports'"

Status in neutron:
  Fix Released

Bug description:
  octavia-driver-agent[91903]: ERROR futurist.periodics [-] Failed to call 
periodic 
'ovn_octavia_provider.maintenance.DBInconsistenciesPeriodics.change_device_owner_lb_hm_ports'
 (it runs every 600.00 seconds): AttributeError: 'Client' object has no 
attribute 'ports'
  octavia-driver-agent[91903]: ERROR futurist.periodics Traceback (most recent 
call last):
  octavia-driver-agent[91903]: ERROR futurist.periodics   File 
"/usr/local/lib/python3.10/dist-packages/futurist/periodics.py", line 290, in 
run
  octavia-driver-agent[91903]: ERROR futurist.periodics work()
  octavia-driver-agent[91903]: ERROR futurist.periodics   File 
"/usr/local/lib/python3.10/dist-packages/futurist/periodics.py", line 64, in 
__call__
  octavia-driver-agent[91903]: ERROR futurist.periodics return 
self.callback(*self.args, **self.kwargs)
  octavia-driver-agent[91903]: ERROR futurist.periodics   File 
"/usr/local/lib/python3.10/dist-packages/futurist/periodics.py", line 178, in 
decorator
  octavia-driver-agent[91903]: ERROR futurist.periodics return f(*args, 
**kwargs)
  octavia-driver-agent[91903]: ERROR futurist.periodics   File 
"/opt/stack/ovn-octavia-provider/ovn_octavia_provider/maintenance.py", line 79, 
in change_device_owner_lb_hm_ports
  octavia-driver-agent[91903]: ERROR futurist.periodics ovn_lb_hm_ports = 
neutron_client.ports(
  octavia-driver-agent[91903]: ERROR futurist.periodics AttributeError: 
'Client' object has no attribute 'ports'

  see also k8s e2e test results:

  https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/cloud-
  provider-openstack/2504/openstack-cloud-controller-
  manager-e2e-test/1735270733262622720

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2046457/+subscriptions




[Yahoo-eng-team] [Bug 2046939] [NEW] [OVN] ``OVNAgentExtensionManager`` is resetting the ``agent_api`` during the initialization

2023-12-19 Thread Rodolfo Alonso
Public bug reported:

The ``OVNAgentExtensionManager`` instance of the OVN agent is resetting the
``agent_api`` member during the extension manager initialization.
``OVNAgentExtensionManager`` inherits from ``AgentExtensionsManager``. The
``initialize`` method iterates through the loaded extensions and executes the
following methods:
* ``consume_api``: assigns the agent API to the extension.
* ``initialize``: due to an incorrect implementation, this method currently
assigns None to the agent API that was previously set.
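A minimal sketch of that failure mode, with hypothetical class and method
names rather than the real neutron code:

```python
# Hypothetical sketch: the manager first hands the agent API to each
# extension via consume_api(), then a badly written initialize()
# clobbers the attribute with its default value of None.

class Extension:
    def __init__(self):
        self.agent_api = None

    def consume_api(self, agent_api):
        self.agent_api = agent_api

    def broken_initialize(self, connection=None, agent_api=None):
        # Bug: unconditionally re-assigns, wiping the value set above.
        self.agent_api = agent_api

    def fixed_initialize(self, connection=None):
        # Fix: initialize() does not touch agent_api at all.
        pass

ext = Extension()
ext.consume_api("the-agent-api")
ext.broken_initialize()   # agent_api is now None again
print(ext.agent_api)      # None

ext.consume_api("the-agent-api")
ext.fixed_initialize()
print(ext.agent_api)      # the-agent-api
```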

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

-- 
https://bugs.launchpad.net/bugs/2046939

Title:
  [OVN] ``OVNAgentExtensionManager`` is resetting the ``agent_api``
  during the initialization

Status in neutron:
  In Progress

Bug description:
  The ``OVNAgentExtensionManager`` instance of the OVN agent is resetting the
  ``agent_api`` member during the extension manager initialization.
  ``OVNAgentExtensionManager`` inherits from ``AgentExtensionsManager``. The
  ``initialize`` method iterates through the loaded extensions and executes
  the following methods:
  * ``consume_api``: assigns the agent API to the extension.
  * ``initialize``: due to an incorrect implementation, this method currently
  assigns None to the agent API that was previously set.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2046939/+subscriptions




[Yahoo-eng-team] [Bug 2046892] [NEW] [OVN] Retrieve the OVN agent extensions correctly

2023-12-19 Thread Rodolfo Alonso
Public bug reported:

The OVN agent extensions are stored in ``OVNNeutronAgent.ext_manager``, which
is an instance of ``OVNAgentExtensionManager`` (inheriting from
``NamedExtensionManager``). To retrieve a loaded extension object, it must be
looked up by name in the extension manager.

Right now, the QoS HWOL extension is using an OVN agent member
(``agent.qos_hwol``) that does not exist.
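A simplified stand-in for the name-based lookup (mimicking how stevedore's
``NamedExtensionManager`` exposes loaded extensions; the classes below are
illustrative, not the stevedore API):

```python
# Simplified stand-in for stevedore's NamedExtensionManager: loaded
# extensions must be fetched by name through the manager, not read
# from a non-existent attribute on the agent such as agent.qos_hwol.

class FakeExtension:
    """Wrapper holding a loaded extension object, keyed by name."""
    def __init__(self, name, obj):
        self.name = name
        self.obj = obj

class FakeExtensionManager:
    def __init__(self, extensions):
        self._extensions = {ext.name: ext for ext in extensions}

    def __getitem__(self, name):
        # stevedore managers support ["name"] lookup returning the
        # wrapper that holds the loaded extension object.
        return self._extensions[name]

mgr = FakeExtensionManager([FakeExtension("qos_hwol", object())])
qos_hwol = mgr["qos_hwol"].obj   # correct: look the extension up by name
print(mgr["qos_hwol"].name)      # qos_hwol
```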

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

-- 
https://bugs.launchpad.net/bugs/2046892

Title:
  [OVN] Retrieve the OVN agent extensions correctly

Status in neutron:
  New

Bug description:
  The OVN agent extensions are stored in ``OVNNeutronAgent.ext_manager``,
  which is an instance of ``OVNAgentExtensionManager`` (inheriting from
  ``NamedExtensionManager``). To retrieve a loaded extension object, it must
  be looked up by name in the extension manager.

  Right now, the QoS HWOL extension is using an OVN agent member
  (``agent.qos_hwol``) that does not exist.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2046892/+subscriptions




[Yahoo-eng-team] [Bug 2045889] [NEW] [OVN] ML2/OVN mech driver does not set the OVS bridge name in the port VIF details

2023-12-07 Thread Rodolfo Alonso
Public bug reported:

The ML2/OVN mech driver does not set the OVS bridge name or the datapath type
in the Neutron DB port VIF details. Example of an ML2/OVS port:
"""
binding_vif_details: bound_drivers.0='openvswitch', bridge_name='br-int', 
connectivity='l2', datapath_type='system', ovs_hybrid_plug='False', 
port_filter='True'
"""

Missing parameters:
* bridge_name
* datapath_type

This information is needed by Nova (the os-vif library).
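Under the assumption that the fix simply adds the two keys to the binding
dictionary, the target ``vif_details`` can be sketched as a plain dict (key
names follow the printed ML2/OVS example above; the exact constants and the
``ovn`` driver name are assumptions):

```python
# Sketch of the VIF details the ML2/OVN mech driver should populate,
# mirroring the ML2/OVS example above. Key names follow the printed
# output and are assumptions here, not a verified portbindings list.

vif_details = {
    "bound_drivers.0": "ovn",
    "connectivity": "l2",
    "port_filter": True,
    # Missing today in ML2/OVN, needed by Nova / os-vif:
    "bridge_name": "br-int",
    "datapath_type": "system",
}

missing = [k for k in ("bridge_name", "datapath_type") if k not in vif_details]
print(missing)  # [] once the driver sets both fields
```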

** Affects: neutron
 Importance: Medium
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

-- 
https://bugs.launchpad.net/bugs/2045889

Title:
  [OVN] ML2/OVN mech driver does not set the OVS bridge name in the port
  VIF details

Status in neutron:
  New

Bug description:
  The ML2/OVN mech driver does not set the OVS bridge name or the datapath
  type in the Neutron DB port VIF details. Example of an ML2/OVS port:
  """
  binding_vif_details: bound_drivers.0='openvswitch', bridge_name='br-int', 
connectivity='l2', datapath_type='system', ovs_hybrid_plug='False', 
port_filter='True'
  """

  Missing parameters:
  * bridge_name
  * datapath_type

  This information is needed by Nova (the os-vif library).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045889/+subscriptions




[Yahoo-eng-team] [Bug 2045237] Re: [bulk] there is no clean operation if bulk create ports fails

2023-12-05 Thread Rodolfo Alonso
*** This bug is a duplicate of bug 2039550 ***
https://bugs.launchpad.net/bugs/2039550

Hello:

I can confirm that this issue is the same as the one reported in [1]. It was
fixed in [2] (backported up to Wallaby). Please use a version containing this
patch (if that version is still not released, please propose an
"openstack/releases" patch bumping the version).

Regards.

[1]https://bugs.launchpad.net/neutron/+bug/2039550
[2]https://review.opendev.org/q/topic:%22bug/2039550%22


** This bug has been marked a duplicate of bug 2039550
   [IPAM] During port bulk creation, if a "fixed_ip" request is incorrect, the 
previous IPAM allocations generated are not deleted

-- 
https://bugs.launchpad.net/bugs/2045237

Title:
  [bulk] there is no  clean operation if bulk create ports fails

Status in neutron:
  Confirmed

Bug description:
  When we use the bulk API to create ports in one subnet, e.g.
'935fe38e-743f-45e5-a646-4ebbbf16ade7', we found that errors occurred.
  We use SQL queries to find the different IPs between ipallocations and
ipamallocations:

  ip_address in ipallocations but not in ipamallocations:

  MariaDB [neutron]> select count(*) from ipallocations where 
subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7' && ip_address  not in (select 
ip_address from ipamallocations where ipam_subnet_id in (select id from  
ipamsubnets where neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7')) ;
  +--+
  | count(*) |
  +--+
  |  873 |
  +--+
  1 row in set (0.01 sec)

  ip_address in ipamallocations but not in ipallocations:

  MariaDB [neutron]> select count(*)  from ipamallocations where ipam_subnet_id 
in (select id from  ipamsubnets where 
neutron_subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7') && ip_address not in 
(select ip_address from ipallocations where 
subnet_id='935fe38e-743f-45e5-a646-4ebbbf16ade7');
  +--+
  | count(*) |
  +--+
  |   63 |
  +--+
  1 row in set (0.01 sec)

  It seems that there are still resources remaining if the port creation
  fails, and we cannot find an operation to clean up the failed ports.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045237/+subscriptions




[Yahoo-eng-team] [Bug 2039650] [NEW] [FT] "neutron-functional-with-uwsgi-fips" failing in stable/yoga

2023-10-18 Thread Rodolfo Alonso
Public bug reported:

The patch [1] introduced two functional tests that are failing only in
stable/yoga and only in the "neutron-functional-with-uwsgi-fips" job (not in
the regular "neutron-functional-with-uwsgi" job, which is why the patch was
merged).

[1]https://review.opendev.org/c/openstack/neutron/+/897044

** Affects: neutron
 Importance: Medium
     Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
https://bugs.launchpad.net/bugs/2039650

Title:
  [FT] "neutron-functional-with-uwsgi-fips" failing in stable/yoga

Status in neutron:
  New

Bug description:
  The patch [1] introduced two functional tests that are failing only in
  stable/yoga and only in the "neutron-functional-with-uwsgi-fips" job (not
  in the regular "neutron-functional-with-uwsgi" job, which is why the patch
  was merged).

  [1]https://review.opendev.org/c/openstack/neutron/+/897044

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039650/+subscriptions




[Yahoo-eng-team] [Bug 2039586] [NEW] [CI] Update the cirros image versions

2023-10-17 Thread Rodolfo Alonso
Public bug reported:

Since [1], some cirros images have been deleted from the CI cache. Using
cirros images that are not cached, which implies downloading them, can
fail [2]. Any version not cached should be replaced by its closest cached
image version.

[1]https://review.opendev.org/c/openstack/project-config/+/873735
[2]https://zuul.openstack.org/build/97f5b27439ca469a9cd110087a386c62

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
https://bugs.launchpad.net/bugs/2039586

Title:
  [CI] Update the cirros image versions

Status in neutron:
  New

Bug description:
  Since [1], some cirros images have been deleted from the CI cache. Using
  cirros images that are not cached, which implies downloading them, can
  fail [2]. Any version not cached should be replaced by its closest cached
  image version.

  [1]https://review.opendev.org/c/openstack/project-config/+/873735
  [2]https://zuul.openstack.org/build/97f5b27439ca469a9cd110087a386c62

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039586/+subscriptions




[Yahoo-eng-team] [Bug 2039012] Re: [fwaas] Scenario job fails randomly

2023-10-17 Thread Rodolfo Alonso
Patch: https://review.opendev.org/c/openstack/neutron-fwaas/+/898231

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Importance: Undecided => High

-- 
https://bugs.launchpad.net/bugs/2039012

Title:
  [fwaas] Scenario job fails randomly

Status in neutron:
  Fix Released

Bug description:
  Seen couple of occurrences across different releases:-

  Failures:-
  - https://zuul.opendev.org/t/openstack/build/30c625cd86aa40e6b6252689a7e88910 
neutron-tempest-plugin-fwaas-2023-1
  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 269, in test_create_show_delete_firewall_group
  body = self.firewall_groups_client.create_firewall_group(
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 25, in create_firewall_group
  return self.create_resource(uri, post_data)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 62, in 
create_resource
  resp, body = self.post(req_uri, req_post_data)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 300, in 
post
  return self.request('POST', url, extra_headers, headers, body, chunked)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 922, in 
_error_checker
  raise exceptions.ServerFault(resp_body, resp=resp,
  tempest.lib.exceptions.ServerFault: Got server fault
  Details: Request Failed: internal server error while processing your request.

  -
  https://zuul.opendev.org/t/openstack/build/0d0fbfc009cb4142920ddb96e9695ec0
  neutron-tempest-plugin-fwaas-2023-2

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/scenario/test_fwaas_v2.py",
 line 220, in test_icmp_reachability_scenarios
  self.check_vm_connectivity(
File "/opt/stack/tempest/tempest/scenario/manager.py", line 983, in 
check_vm_connectivity
  self.assertTrue(self.ping_ip_address(ip_address,
File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true : Timed out waiting for 172.24.5.235 to 
become reachable

  
  - https://zuul.opendev.org/t/openstack/build/5ed8731220654e9fb67ba910a3f08c25 
neutron-tempest-plugin-fwaas-zed
  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 344, in test_update_firewall_group
  self.firewall_groups_client.delete_firewall_group(fwg_id)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.8/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 38, in delete_firewall_group
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: Conflict with state of target resource
  Details: {'type': 'FirewallGroupInUse', 'message': 'Firewall group 
a20ead7e-40ca-488b-a751-a36e5fb4119a is still active.', 'detail': ''}

  
  - https://zuul.opendev.org/t/openstack/build/fbe68294562a43b492dd2ba66dec9d43 
neutron-tempest-plugin-fwaas-2023-2
  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/scenario/test_fwaas_v2.py",
 line 220, in test_icmp_reachability_scenarios
  self.check_vm_connectivity(
File "/opt/stack/tempest/tempest/scenario/manager.py", line 983, in 
check_vm_connectivity
  self.assertTrue(self.ping_ip_address(ip_address,
File "/usr/lib/python3.10/unittest/case.py", line 687, in assertTrue
  raise self.failureException(msg)
  AssertionError: False is not true : Timed out waiting for 172.24.5.126 to 
become reachable

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039012/+subscriptions



[Yahoo-eng-team] [Bug 2039550] [NEW] [IPAM] During port bulk creation, if a "fixed_ip" request is incorrect, the previous IPAM allocations generated are not deleted

2023-10-17 Thread Rodolfo Alonso
Public bug reported:

Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2244365

During the port bulk creation, before the port DB registers are created,
there is a method that pre-creates the port MAC address and the IPAM
allocations [1]. If one of the requested "fixed_ips" is incorrect (for
example, the IP address is out of the subnet CIDR), the method will
raise an exception. However, the IPAM allocations created previously
will remain in the DB.

Steps:
$ openstack network create net1
$ openstack subnet create --subnet-range 10.0.50.0/24 --network net1 snet1
$ OS_TOKEN=`openstack token issue | grep "| id" | tr -s " " | cut -f4 -d" "`
$ curl -H "X-Auth-Token:$OS_TOKEN" -X POST 
http://192.168.10.100:9696/networking/v2.0/ports
-d '{"ports": [
  {"network_id": "<network_id>",
   "fixed_ips": [{"subnet_id": "<subnet_id>", "ip_address": "10.0.50.10"}]},
  {"network_id": "<network_id>",
   "fixed_ips": [{"subnet_id": "<subnet_id>", "ip_address": "10.0.51.20"}]}]}'


Note that the second IP address 10.0.51.20 is not in the subnet CIDR 
10.0.50.0/24. The IPAM allocation for 10.0.50.10 (the first request), will 
remain in the DB.

[1]https://github.com/openstack/neutron/blob/2bc9c3833627da0dfdf901e15f78b9be397014e0/neutron/plugins/ml2/plugin.py#L1621-L1660
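The cleanup the pre-create path needs can be sketched as a try/except that
deallocates the partial work before re-raising. The helper names below are
hypothetical, not the actual neutron/IPAM API:

```python
# Hypothetical sketch of the cleanup the bulk pre-create path needs:
# if validating one port's fixed_ips fails, the IPAM allocations made
# for the earlier ports in the same bulk request must be rolled back.

class IpOutOfSubnetError(Exception):
    pass

def allocate_bulk(requests, allocate, deallocate):
    done = []
    try:
        for req in requests:
            done.append(allocate(req))
        return done
    except IpOutOfSubnetError:
        for allocation in reversed(done):
            deallocate(allocation)  # undo partial work before re-raising
        raise

allocations = []

def allocate(ip):
    # Stand-in for an IPAM allocation on subnet 10.0.50.0/24.
    if not ip.startswith("10.0.50."):
        raise IpOutOfSubnetError(ip)
    allocations.append(ip)
    return ip

try:
    allocate_bulk(["10.0.50.10", "10.0.51.20"], allocate, allocations.remove)
except IpOutOfSubnetError:
    pass
print(allocations)  # [] -> the 10.0.50.10 allocation was cleaned up
```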

** Affects: neutron
 Importance: Medium
     Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

-- 
https://bugs.launchpad.net/bugs/2039550

Title:
  [IPAM] During port bulk creation, if a "fixed_ip" request is
  incorrect, the previous IPAM allocations generated are not deleted

Status in neutron:
  New

Bug description:
  Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2244365

  During the port bulk creation, before the port DB registers are
  created, there is a method that pre-creates the port MAC address and
  the IPAM allocations [1]. If one of the requested "fixed_ips" is
  incorrect (for example, the IP address is out of the subnet CIDR), the
  method will raise an exception. However, the IPAM allocations created
  previously will remain in the DB.

  Steps:
  $ openstack network create net1
  $ openstack subnet create --subnet-range 10.0.50.0/24 --network net1 snet1
  $ OS_TOKEN=`openstack token issue | grep "| id" | tr -s " " | cut -f4 -d" "`
  $ curl -H "X-Auth-Token:$OS_TOKEN" -X POST 
http://192.168.10.100:9696/networking/v2.0/ports
  -d '{"ports": [
    {"network_id": "<network_id>",
     "fixed_ips": [{"subnet_id": "<subnet_id>", "ip_address": "10.0.50.10"}]},
    {"network_id": "<network_id>",
     "fixed_ips": [{"subnet_id": "<subnet_id>", "ip_address": "10.0.51.20"}]}]}'

  
  Note that the second IP address 10.0.51.20 is not in the subnet CIDR 
10.0.50.0/24. The IPAM allocation for 10.0.50.10 (the first request), will 
remain in the DB.

  
[1]https://github.com/openstack/neutron/blob/2bc9c3833627da0dfdf901e15f78b9be397014e0/neutron/plugins/ml2/plugin.py#L1621-L1660

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039550/+subscriptions




[Yahoo-eng-team] [Bug 2038936] Re: [fwaas] CI job "neutron-fwaas-v2-dsvm-tempest-multinode" failing

2023-10-17 Thread Rodolfo Alonso
Fixed in https://review.opendev.org/c/openstack/neutron-fwaas/+/898231

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Fix Released

-- 
https://bugs.launchpad.net/bugs/2038936

Title:
  [fwaas] CI job "neutron-fwaas-v2-dsvm-tempest-multinode" failing

Status in neutron:
  Fix Released

Bug description:
  Error:
  https://zuul.opendev.org/t/openstack/build/74f05799b17340f5824a652e1d1ecbfb

  Snippet: https://paste.opendev.org/show/b3HfOkty2EIeaG1UyCI5/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038936/+subscriptions




[Yahoo-eng-team] [Bug 2039285] [NEW] [neutron-fwaas] "neutron-fwaas-fullstack" job broken

2023-10-13 Thread Rodolfo Alonso
Public bug reported:

The experimental job "neutron-fwaas-fullstack" is broken right now.

Logs:
https://25f3b33717093a9f4f4e-a759d6b54561529b072782a6b0052389.ssl.cf5.rackcdn.com/896741/6/experimental/neutron-
fwaas-fullstack/87a2cd7/testr_results.html

Error: https://paste.opendev.org/show/ba1P2MdMl5kXIsQd71Qu/

** Affects: neutron
 Importance: Low
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

-- 
https://bugs.launchpad.net/bugs/2039285

Title:
  [neutron-fwaas] "neutron-fwaas-fullstack" job broken

Status in neutron:
  New

Bug description:
  The experimental job "neutron-fwaas-fullstack" is broken right now.

  Logs:
  
https://25f3b33717093a9f4f4e-a759d6b54561529b072782a6b0052389.ssl.cf5.rackcdn.com/896741/6/experimental/neutron-
  fwaas-fullstack/87a2cd7/testr_results.html

  Error: https://paste.opendev.org/show/ba1P2MdMl5kXIsQd71Qu/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039285/+subscriptions




[Yahoo-eng-team] [Bug 2039038] [NEW] [fullstack] The "dhclient" kill command can take more than 90 seconds

2023-10-11 Thread Rodolfo Alonso
Public bug reported:

The "dhclient" processes spawned in the fullstack tests have a respawn
interval configured. Because of that, when the process is stopped, the
``AsyncProcess._handle_process_error`` method restarts it again, as can be
seen in the fullstack test logs. For example:
https://paste.opendev.org/show/bG1K9ke8gcRsbjZMDB70/

The ``AsyncProcess._kill_process_and_wait`` method actively waits until the
process is killed, slowing down the test teardown and increasing the
fullstack CI job time.

This bug is related to https://bugs.launchpad.net/neutron/+bug/2033651.
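The teardown race can be sketched as follows (a hypothetical mock of the
``AsyncProcess`` behavior described above, not the real neutron code):
disarming the respawn logic before killing the process lets the wait return
immediately:

```python
# Hypothetical sketch of the teardown race described above: if the
# respawn handler stays armed while we kill the process, the monitor
# restarts it and the kill-and-wait loop keeps spinning.

import time

class FakeAsyncProcess:
    def __init__(self, respawn_interval=1):
        self.respawn_interval = respawn_interval
        self.alive = True

    def _handle_process_error(self):
        # Monitor callback: respawns unless respawning was disabled.
        if self.respawn_interval is not None:
            self.alive = True

    def kill_and_wait(self, timeout=2):
        # Fix: disarm the respawn logic *before* killing the process.
        self.respawn_interval = None
        self.alive = False
        self._handle_process_error()  # monitor fires, but does nothing now
        deadline = time.monotonic() + timeout
        while self.alive and time.monotonic() < deadline:
            time.sleep(0.01)
        return not self.alive

proc = FakeAsyncProcess()
print(proc.kill_and_wait())  # True -> no respawn, teardown returns quickly
```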

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Medium => Low

-- 
https://bugs.launchpad.net/bugs/2039038

Title:
  [fullstack] The "dhclient" kill command can take more than 90 seconds

Status in neutron:
  New

Bug description:
  The "dhclient" processes spawned in the fullstack tests have a respawn
  interval configured. Because of that, when the process is stopped, the
  ``AsyncProcess._handle_process_error`` method restarts it again, as can be
  seen in the fullstack test logs. For example:
  https://paste.opendev.org/show/bG1K9ke8gcRsbjZMDB70/

  The ``AsyncProcess._kill_process_and_wait`` method actively waits until
  the process is killed, slowing down the test teardown and increasing the
  fullstack CI job time.

  This bug is related to
  https://bugs.launchpad.net/neutron/+bug/2033651.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039038/+subscriptions




[Yahoo-eng-team] [Bug 2039027] [NEW] tempest nftables jobs are not running in periodic queue

2023-10-11 Thread Rodolfo Alonso
Public bug reported:

The jobs "neutron-linuxbridge-tempest-plugin-nftables" and "neutron-ovs-
tempest-plugin-iptables_hybrid-nftables" have not been running in the
periodic (and experimental) queues since both template names were changed.

** Affects: neutron
 Importance: Low
     Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
https://bugs.launchpad.net/bugs/2039027

Title:
  tempest nftables jobs are not running in periodic queue

Status in neutron:
  New

Bug description:
  The jobs "neutron-linuxbridge-tempest-plugin-nftables" and "neutron-
  ovs-tempest-plugin-iptables_hybrid-nftables" have not been running in the
  periodic (and experimental) queues since both template names were changed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039027/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038936] [NEW] [fwaas] CI job "neutron-fwaas-v2-dsvm-tempest-multinode" failing

2023-10-10 Thread Rodolfo Alonso
Public bug reported:

Error:
https://zuul.opendev.org/t/openstack/build/74f05799b17340f5824a652e1d1ecbfb

Snippet: https://paste.opendev.org/show/b3HfOkty2EIeaG1UyCI5/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038936

Title:
  [fwaas] CI job "neutron-fwaas-v2-dsvm-tempest-multinode" failing

Status in neutron:
  New

Bug description:
  Error:
  https://zuul.opendev.org/t/openstack/build/74f05799b17340f5824a652e1d1ecbfb

  Snippet: https://paste.opendev.org/show/b3HfOkty2EIeaG1UyCI5/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038936/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025946] Re: Neutron 504 Gateway Timeout Openstack Kolla-Ansible : Ussuri

2023-10-09 Thread Rodolfo Alonso
Hello Adelia:

The problem you have is that other processes are running on this TCP
port on this host [1]. First list which processes are using the port,
stop them, and then restart the OVN controller.

I'm closing this bug because it doesn't seem to be a Neutron problem but
a system/backend issue.

Regards.

[1]https://mail.openvswitch.org/pipermail/ovs-
discuss/2017-February/043597.html
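
For example, the listeners on a given TCP port can be listed like this
(port 6642, the OVN southbound DB port, is used here only as an
example; substitute the port reported in your logs):

```shell
# Show which processes are listening on TCP port 6642.
# -l listening sockets, -t TCP, -n numeric, -p owning process.
ss -ltnp 'sport = :6642'
```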

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025946

Title:
  Neutron 504 Gateway Timeout Openstack Kolla-Ansible : Ussuri

Status in neutron:
  Invalid

Bug description:
  I have 3 OpenStack controllers, but the network agents often get 504
  Gateway Timeout errors. When I check neutron_server.log, these logs
  show up in one of my controllers:

  2023-05-24 10:00:23.314 687 ERROR neutron.api.v2.resource [req-a1f3e58a-00f7-4ed9-b8e5-6c538dc5d5a3 3fe50ccef00f49e3b1b0bbd58705a930 c7d2001e7a2c4c32b9f2a3657f29b6b0 - default default] index failed: No details.: ovsdbapp.exceptions.TimeoutException: Commands [] exceeded timeout 180 seconds
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource Traceback (most recent call last):
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 153, in queue_txn
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource     self.txns.put(txn, timeout=self.timeout)
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/lib/python3.6/site-packages/ovsdbapp/backend/ovs_idl/connection.py", line 51, in put
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource     super(TransactionQueue, self).put(*args, **kwargs)
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/queue.py", line 264, in put
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource     result = waiter.wait()
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/queue.py", line 141, in wait
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource     return get_hub().switch()
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource   File "/var/lib/kolla/venv/lib/python3.6/site-packages/eventlet/hubs/hub.py", line 298, in switch
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource     return self.greenlet.switch()
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource queue.Full
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource During handling of the above exception, another exception occurred:
  2023-07-05 09:49:03.453 670 ERROR neutron.api.v2.resource
  How do I solve this?

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 2038646] [NEW] [RBAC] Update "subnet" policies

2023-10-06 Thread Rodolfo Alonso
Public bug reported:

* "get_subnet"
Currently only the admin or a project reader can get a subnet. However, it 
doesn't make sense that the net owner can create the subnet [1] but cannot list 
it.

* "update_subnet"
Currently only the admin and the network owner can modify the subnet. Any 
project member should be able too.

* "delete_subnet"
Same argument as in "update_subnet"

[1]https://github.com/openstack/neutron/blob/8cba97016e421e4b01b96de70b4b194972d0186f/neutron/conf/policies/subnet.py#L42-L43
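
The direction proposed above can be sketched in plain Python (real
Neutron policies are oslo.policy rule strings; the helper name and the
exact role wording here are illustrative assumptions, not the Neutron
defaults):

```python
def subnet_policy_check(action, user_roles, user_project_id, subnet_project_id):
    """Sketch of the proposed subnet policy direction.

    - get_subnet: admin, or any reader of the owning project.
    - update_subnet / delete_subnet: admin, or any member of the
      owning project (not only the network owner).
    """
    if 'admin' in user_roles:
        return True
    same_project = user_project_id == subnet_project_id
    if action == 'get_subnet':
        return same_project and 'reader' in user_roles
    if action in ('update_subnet', 'delete_subnet'):
        return same_project and 'member' in user_roles
    return False
```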

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
     Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Low => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038646

Title:
  [RBAC] Update "subnet" policies

Status in neutron:
  New

Bug description:
  * "get_subnet"
  Currently only the admin or a project reader can get a subnet. However, it 
doesn't make sense that the net owner can create the subnet [1] but cannot list 
it.

  * "update_subnet"
  Currently only the admin and the network owner can modify the subnet. Any 
project member should be able too.

  * "delete_subnet"
  Same argument as in "update_subnet"

  
[1]https://github.com/openstack/neutron/blob/8cba97016e421e4b01b96de70b4b194972d0186f/neutron/conf/policies/subnet.py#L42-L43

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038646/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038555] [NEW] Remove unused tables

2023-10-05 Thread Rodolfo Alonso
Public bug reported:

There are many tables created by default during the installation of Neutron 
that are no longer used because the related projects are no longer maintained. 
For example:
* neutron.db.migration.alembic_migrations.external.REPO_CISCO_TABLES
* neutron.db.migration.alembic_migrations.external.REPO_VMWARE_TABLES
* neutron.db.migration.alembic_migrations.external.REPO_BROCADE_TABLES
* neutron.db.migration.alembic_migrations.external.REPO_NUAGE_TABLES
* neutron.db.migration.alembic_migrations.nec_init_ops tables
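
The cleanup could be driven by a helper like the one below, called from
an alembic migration's ``upgrade()`` with the tables of the retired
repos listed above (``op`` is duck-typed here so the sketch can be
exercised without a database; in a real migration it would be the
alembic operations object):

```python
def drop_unused_tables(op, table_names):
    """Drop each leftover table from a retired driver repo.

    Illustrative sketch only; a real Neutron migration must enumerate
    the exact tables and be aware that dropping them is irreversible.
    """
    for name in table_names:
        op.drop_table(name)  # any data still in these tables is lost
```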

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038555

Title:
  Remove unused tables

Status in neutron:
  New

Bug description:
  There are many tables created by default during the installation of Neutron 
that are no longer used because the related projects are no longer maintained. 
For example:
  * neutron.db.migration.alembic_migrations.external.REPO_CISCO_TABLES
  * neutron.db.migration.alembic_migrations.external.REPO_VMWARE_TABLES
  * neutron.db.migration.alembic_migrations.external.REPO_BROCADE_TABLES
  * neutron.db.migration.alembic_migrations.external.REPO_NUAGE_TABLES
  * neutron.db.migration.alembic_migrations.nec_init_ops tables

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038555/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038520] [NEW] [UT] Error accessing row.external_ids in ``TestOvnSbIdlNotifyHandler`` tests

2023-10-05 Thread Rodolfo Alonso
Public bug reported:

Example of error: https://paste.opendev.org/show/bRQn9cQ2oouHaG6DV04a/

This is happening in all ``TestOvnSbIdlNotifyHandler`` tests.

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038520

Title:
  [UT] Error accessing row.external_ids in
  ``TestOvnSbIdlNotifyHandler`` tests

Status in neutron:
  New

Bug description:
  Example of error: https://paste.opendev.org/show/bRQn9cQ2oouHaG6DV04a/

  This is happening in all ``TestOvnSbIdlNotifyHandler`` tests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038520/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2037717] [NEW] [OVN] ``PortBindingChassisEvent`` event is not executing the conditions check

2023-09-29 Thread Rodolfo Alonso
Public bug reported:

Since [1], which overrides the "match_fn" method, the event is not checking the
conditions defined in the initialization, which are:
  ('type', '=', ovn_const.OVN_CHASSIS_REDIRECT)

[1]https://review.opendev.org/q/I3b7c5d73d2b0d20fb06527ade30af8939b249d75
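
The failure mode can be illustrated with a minimal stand-in for an
ovsdbapp-style row event (simplified classes, not the real ovsdbapp
API): once the matching hook is overridden without re-applying
``self.conditions``, the conditions are silently ignored.

```python
class RowEvent:
    """Minimal stand-in for an ovsdbapp-style row event: by default a
    row matches only if it satisfies every (column, op, value)
    condition given at initialization."""

    def __init__(self, conditions):
        self.conditions = conditions

    def matches(self, row):
        return all(getattr(row, col) == value
                   for col, op, value in self.conditions if op == '=')


class ChassisRedirectEvent(RowEvent):
    def __init__(self):
        super().__init__((('type', '=', 'chassisredirect'),))

    # Overriding the match hook without re-checking self.conditions
    # (the situation described in this bug) makes the event fire for
    # rows of any type:
    def matches(self, row):
        return True
```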

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037717

Title:
  [OVN] ``PortBindingChassisEvent`` event is not executing the
  conditions check

Status in neutron:
  In Progress

Bug description:
  Since [1], which overrides the "match_fn" method, the event is not checking
the conditions defined in the initialization, which are:
('type', '=', ovn_const.OVN_CHASSIS_REDIRECT)

  [1]https://review.opendev.org/q/I3b7c5d73d2b0d20fb06527ade30af8939b249d75

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2037717/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2037259] Re: [UT] "openstack-tox-py311" failing 100% of the times (since Sept 23)

2023-09-25 Thread Rodolfo Alonso
Closing this bug. We have been testing py311 in Neutron during the last
two releases, using a Neutron CI job. The jobs failing in the provided
list belong to failed patches.

The job is passing right now:
https://review.opendev.org/c/openstack/neutron/+/896351

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037259

Title:
  [UT] "openstack-tox-py311" failing 100% of the times (since Sept 23)

Status in neutron:
  Invalid

Bug description:
  As reported in [1] "openstack-tox-py311" is failing in Neutron right
  now.

  Logs:
  
https://89bb42ae6b1dac5826ea-8afd88a2556265530ed64527d716118c.ssl.cf5.rackcdn.com/883988/34/check/openstack-
  tox-py311/992b55a/testr_results.html

  [1]https://lists.openstack.org/pipermail/openstack-
  discuss/2023-September/035210.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2037259/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2037259] [NEW] [UT] "openstack-tox-py311" failing 100% of the times (since Sept 23)

2023-09-25 Thread Rodolfo Alonso
Public bug reported:

As reported in [1] "openstack-tox-py311" is failing in Neutron right
now.

Logs:
https://89bb42ae6b1dac5826ea-8afd88a2556265530ed64527d716118c.ssl.cf5.rackcdn.com/883988/34/check/openstack-
tox-py311/992b55a/testr_results.html

[1]https://lists.openstack.org/pipermail/openstack-
discuss/2023-September/035210.html

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037259

Title:
  [UT] "openstack-tox-py311" failing 100% of the times (since Sept 23)

Status in neutron:
  New

Bug description:
  As reported in [1] "openstack-tox-py311" is failing in Neutron right
  now.

  Logs:
  
https://89bb42ae6b1dac5826ea-8afd88a2556265530ed64527d716118c.ssl.cf5.rackcdn.com/883988/34/check/openstack-
  tox-py311/992b55a/testr_results.html

  [1]https://lists.openstack.org/pipermail/openstack-
  discuss/2023-September/035210.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2037259/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2036763] [NEW] [pep8] Pylint error W0105 (pointless-string-statement) in random CI executions

2023-09-20 Thread Rodolfo Alonso
Public bug reported:

This error has been seen in [1].

Logs:
https://zuul.opendev.org/t/openstack/build/1c542d5ac7b1433e82e84e52737461b2

Snippet: https://paste.opendev.org/show/bLkR97YQEzEKBeCmUxkt/

[1]https://review.opendev.org/c/openstack/neutron/+/882832

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036763

Title:
  [pep8] Pylint error W0105 (pointless-string-statement) in random CI
  executions

Status in neutron:
  New

Bug description:
  This error has been seen in [1].

  Logs:
  https://zuul.opendev.org/t/openstack/build/1c542d5ac7b1433e82e84e52737461b2

  Snippet: https://paste.opendev.org/show/bLkR97YQEzEKBeCmUxkt/

  [1]https://review.opendev.org/c/openstack/neutron/+/882832

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036763/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2031526] Re: [ndr] "neutron-tempest-plugin-dynamic-routing" failing

2023-09-19 Thread Rodolfo Alonso
The job is fixed and working right now:
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-
plugin-dynamic-routing=0

I'm closing this bug.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2031526

Title:
  [ndr] "neutron-tempest-plugin-dynamic-routing" failing

Status in neutron:
  Fix Released

Bug description:
  "neutron-tempest-plugin-dynamic-routing" jobs are currently failing
  (same error in the last 4 tested releases). E.g.:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_7ab/891550/2/check/neutron-
  tempest-plugin-dynamic-routing/7ab85ff/testr_results.html

  Error: https://paste.opendev.org/show/bTwcOK65EKMQkiOhAvL9/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2031526/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2036607] [NEW] [OVN] The API worker fails during "post_fork_initialize" call

2023-09-19 Thread Rodolfo Alonso
Public bug reported:

Bugzilla reference: https://bugzilla.redhat.com/show_bug.cgi?id=2233797

This issue has been reproduced using the Tobiko framework. The test,
which is executed several times, reboots the controllers and thus the
Neutron API. Randomly, one Neutron API worker fails during the
execution of the event method "post_fork_initialize", specifically
during the "_setup_hash_ring" call [1].

Regardless of the result of the "post_fork_initialize" method, the API
worker starts. But in that case some methods (mainly related to the OVN
agents) are left unpatched, and thus the results of the API calls
("agent show", "agent list", etc.) are wrong.

This bug proposes:
* To properly handle any possible error in the "_setup_hash_ring" call.
* To log a message at the end of the "post_fork_initialize" method to confirm
that this event method has finished properly.
* To catch any error during the "post_fork_initialize" execution and, if the
error cannot be retried, fail and exit.

[1]https://paste.opendev.org/show/bqzDPR5TukLq9d1GIcnz/
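
The proposed hardening can be sketched as follows (function, logger and
parameter names are illustrative, not the actual Neutron code): retry
the hash-ring setup, log a completion message, and exit the worker
instead of serving half-initialized.

```python
import logging
import sys

LOG = logging.getLogger(__name__)

def post_fork_initialize(setup_hash_ring, retries=3):
    """Sketch of the hardening proposed in this bug."""
    for attempt in range(1, retries + 1):
        try:
            setup_hash_ring()
            break  # setup succeeded
        except Exception:
            LOG.exception('Hash ring setup failed (attempt %d/%d)',
                          attempt, retries)
    else:
        # All retries exhausted: fail loudly instead of starting an API
        # worker with unpatched agent methods.
        LOG.critical('post_fork_initialize could not complete; exiting')
        sys.exit(1)
    LOG.info('post_fork_initialize finished successfully')
```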

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036607

Title:
  [OVN] The API worker fails during "post_fork_initialize" call

Status in neutron:
  New

Bug description:
  Bugzilla reference:
  https://bugzilla.redhat.com/show_bug.cgi?id=2233797

  This issue has been reproduced using the Tobiko framework. The test,
  which is executed several times, reboots the controllers and thus the
  Neutron API. Randomly, one Neutron API worker fails during the
  execution of the event method "post_fork_initialize", specifically
  during the "_setup_hash_ring" call [1].

  Regardless of the result of the "post_fork_initialize" method, the API
  worker starts. But in that case some methods (mainly related to the
  OVN agents) are left unpatched, and thus the results of the API calls
  ("agent show", "agent list", etc.) are wrong.

  This bug proposes:
  * To properly handle any possible error in the "_setup_hash_ring" call.
  * To log a message at the end of the "post_fork_initialize" method to confirm
that this event method has finished properly.
  * To catch any error during the "post_fork_initialize" execution and, if the
error cannot be retried, fail and exit.

  [1]https://paste.opendev.org/show/bqzDPR5TukLq9d1GIcnz/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036607/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2036603] [NEW] [tempest] Test "test_multicast_between_vms_on_same_network" randomly failing

2023-09-19 Thread Rodolfo Alonso
Public bug reported:

This test has been failing randomly in the
"neutron-tempest-plugin-openvswitch-iptables_hybrid" job during the last few
days:
* 
https://6b9cedea8669c4f84008-911d33bf37120f9bf3989feced914a34.ssl.cf5.rackcdn.com/885354/5/check/neutron-tempest-plugin-openvswitch-iptables_hybrid/88cca83/testr_results.html
* 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ee8/882832/13/check/neutron-tempest-plugin-openvswitch-iptables_hybrid/ee8c405/testr_results.html
* 
https://1997b6358f32b9b17414-df15d2af993436e3199d990a33423a03.ssl.cf2.rackcdn.com/895155/1/gate/neutron-tempest-plugin-openvswitch-iptables_hybrid/1582390/testr_results.html

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2036603

Title:
  [tempest] Test "test_multicast_between_vms_on_same_network" randomly
  failing

Status in neutron:
  New

Bug description:
  This test has been failing randomly in the
"neutron-tempest-plugin-openvswitch-iptables_hybrid" job during the last few
days:
  * 
https://6b9cedea8669c4f84008-911d33bf37120f9bf3989feced914a34.ssl.cf5.rackcdn.com/885354/5/check/neutron-tempest-plugin-openvswitch-iptables_hybrid/88cca83/testr_results.html
  * 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ee8/882832/13/check/neutron-tempest-plugin-openvswitch-iptables_hybrid/ee8c405/testr_results.html
  * 
https://1997b6358f32b9b17414-df15d2af993436e3199d990a33423a03.ssl.cf2.rackcdn.com/895155/1/gate/neutron-tempest-plugin-openvswitch-iptables_hybrid/1582390/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2036603/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035573] Re: Insertion of a duplicated ProviderResourceAssociation entry while creating a HA router

2023-09-19 Thread Rodolfo Alonso
*** This bug is a duplicate of bug 2016198 ***
https://bugs.launchpad.net/bugs/2016198

Hello:

As Brian commented, this issue was solved in [1]. The problem is that
this fix includes a DB migration and cannot be backported. Please
upgrade to stable/2023.2, if possible, and check again.

Regards.

[1]https://bugs.launchpad.net/neutron/+bug/2016198

** This bug has been marked a duplicate of bug 2016198
   [L3][HA] race condition between first two router creations when tenant has 
no HA network

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035573

Title:
  Insertion of a duplicated ProviderResourceAssociation entry while
  creating a HA router

Status in neutron:
  Incomplete

Bug description:
  [SUMMARY]
  While creating multiple HA routers in a row, some router creations fail with
the log below.

  2023-07-21 02:28:19.968 12 DEBUG neutron.wsgi [-] (12) accepted ('10.2.41.158', 57438) server /var/lib/kolla/venv/lib/python3.10/site-packages/eventlet/wsgi.py:1004
  2023-07-21 02:28:19.979 12 DEBUG neutron.api.v2.base [None req-726ab30e-b9c9-4e38-8c9d-216752ab2aea 439ae1ccaa9a494284cee3fdb6227208 97895007888245c3acdfc41146d2e151 - - default default] Request body: {'router': {'name': 'test6', 'admin_state_up': True, 'tenant_id': '97895007888245c3acdfc41146d2e151'}} prepare_request_body /var/lib/kolla/venv/lib/python3.10/site-packages/neutron/api/v2/base.py:731
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db [None req-726ab30e-b9c9-4e38-8c9d-216752ab2aea 439ae1ccaa9a494284cee3fdb6227208 97895007888245c3acdfc41146d2e151 - - default default] Failed to schedule HA router 5a599ade-7b6a-4b3e-b635-ca00e37f2657.: neutron_lib.objects.exceptions.NeutronDbObjectDuplicateEntry: Failed to create a duplicate ProviderResourceAssociation: for attribute(s) ['PRIMARY'] with value(s) ha-5a599ade-7b6a-4b3e-b635-ca00e37f2657
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db Traceback (most recent call last):
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     self.dialect.do_execute(
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     cursor.execute(statement, parameters)
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/cursors.py", line 148, in execute
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     result = self._query(query)
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/cursors.py", line 310, in _query
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     conn.query(q)
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 548, in query
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 775, in _read_query_result
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     result.read()
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 1156, in read
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     first_packet = self.connection._read_packet()
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 725, in _read_packet
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     packet.raise_for_error()
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/protocol.py", line 221, in raise_for_error
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     err.raise_mysql_exception(self._data)
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db     raise errorclass(errno, errval)
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db pymysql.err.IntegrityError: (1062, "Duplicate entry 'ha-5a599ade-7b6a-4b3e-b635-ca00e37f2657' for key 'PRIMARY'")
  2023-07-21 02:28:20.178 12 ERROR neutron.db.l3_hamode_db
  2023-07-21 02:28:20.178 12 ERROR 

[Yahoo-eng-team] [Bug 2024160] Re: [OVN][Trunk] subport doesn't reach status ACTIVE

2023-09-14 Thread Rodolfo Alonso
** Changed in: neutron
   Status: Fix Released => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024160

Title:
  [OVN][Trunk] subport doesn't reach status ACTIVE

Status in neutron:
  Confirmed
Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Test test_live_migration_with_trunk has been failing for the last two days.
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3f3/831018/30/check/nova-live-migration/3f3b065/testr_results.html

  It's a test about live-migration, but it is important to notice that
  it fails before any live migration happens.

  The test creates a VM with a port and a subport.
  The test waits until the VM status is ACTIVE -> this passes
  The test waits until the subport status is ACTIVE -> this started failing two 
days ago because the port status is DOWN

  There was only one Neutron patch merged that day [1], but I checked
  that the test had failed in some jobs even before that patch was
  merged.
  
  I compared some logs.
  Neutron logs when the test passes: [2]
  Neutron logs when the test fails: [3]

  When it fails, I see this during the creation of the subport (and I don't see 
this event when it passes):
  Jun 14 18:13:43.052982 np0034303809 neutron-server[77531]: DEBUG ovsdbapp.backend.ovs_idl.event [None req-929dd199-4247-46f5-9466-622c7d538547 None None] Matched DELETE: PortBindingUpdateVirtualPortsEvent(events=('update', 'delete'), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(parent_port=[], mac=['fa:16:3e:93:9d:5a 19.80.0.42'], chassis=[], ha_chassis_group=[], options={'mcast_flood_reports': 'true', 'requested-chassis': ''}, type=, tag=[], requested_chassis=[], tunnel_key=2, up=[False], logical_port=f8c707ec-ecd8-4f1e-99ba-6f8303b598b2, gateway_chassis=[], encap=[], external_ids={'name': 'tempest-subport-2029248863', 'neutron:cidrs': '19.80.0.42/24', 'neutron:device_id': '', 'neutron:device_owner': '', 'neutron:network_name': 'neutron-5fd9faa7-ec1c-4f42-ab87-6ce19edda245', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2029248863', 'neutron:project_id': '6f92a9f8e16144148026725b25711d3a', 'neutron:revision_number': '1', 'neutron:security_group_ids': '5eab41ef-c5c1-425c-a931-f5b6b4b330ad', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, virtual_parent=[], nat_addresses=[], datapath=3c472399-d6ee-4b7c-aa97-6777f2bc2772) old= {{(pid=77531) matches /usr/local/lib/python3.10/dist-packages/ovsdbapp/backend/ovs_idl/event.py:43}}
  ...
  Jun 14 18:13:49.597911 np0034303809 neutron-server[77531]: DEBUG neutron.plugins.ml2.plugin [None req-3588521e-7878-408d-b1f8-15db562c69f8 None None] Port f8c707ec-ecd8-4f1e-99ba-6f8303b598b2 cannot update to ACTIVE because it is not bound. {{(pid=77531) _port_provisioned /opt/stack/neutron/neutron/plugins/ml2/plugin.py:361}}


  It seems the ovn version has changed between these jobs:
  Passes [4]:
  2023-06-14 10:01:46.358875 | controller | Preparing to unpack .../ovn-common_22.03.0-0ubuntu1_amd64.deb ...

  
  Fails [5]:
  2023-06-14 17:55:07.077377 | controller | Preparing to unpack .../ovn-common_22.03.2-0ubuntu0.22.04.1_amd64.deb ...





  [1] https://review.opendev.org/c/openstack/neutron/+/883687
  [2] 
https://96b562ba0d2478fe5bc1-d58fbc463536b3122b4367e996d5e5b0.ssl.cf1.rackcdn.com/831018/30/check/nova-live-migration/312c2ab/controller/logs/screen-q-svc.txt
  [3] 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3f3/831018/30/check/nova-live-migration/3f3b065/controller/logs/screen-q-svc.txt
  [4] 
https://96b562ba0d2478fe5bc1-d58fbc463536b3122b4367e996d5e5b0.ssl.cf1.rackcdn.com/831018/30/check/nova-live-migration/312c2ab/job-output.txt
  [5] 
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_3f3/831018/30/check/nova-live-migration/3f3b065/job-output.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024160/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2035382] [NEW] [FT] "TestMonitorDaemon._run_monitor" failing randomly, initial message not written

2023-09-13 Thread Rodolfo Alonso
Public bug reported:

The method ``TestMonitorDaemon._run_monitor`` is randomly failing. The
initial message expected after "keepalived_state_change" process is
started is not written in the logs.

Logs:
https://8271f43479f81f7d3395-4bdeef087e9d95514555f2932706a956.ssl.cf1.rackcdn.com/893555/1/check/neutron-
functional-with-uwsgi/90dde54/testr_results.html

Snippet: https://paste.opendev.org/show/bJCAjIqiUOcWB0xBhPnX/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035382

Title:
  [FT] "TestMonitorDaemon._run_monitor" failing randomly, initial message
  not written

Status in neutron:
  New

Bug description:
  The method ``TestMonitorDaemon._run_monitor`` is randomly failing. The
  initial message expected after "keepalived_state_change" process is
  started is not written in the logs.

  Logs:
  
https://8271f43479f81f7d3395-4bdeef087e9d95514555f2932706a956.ssl.cf1.rackcdn.com/893555/1/check/neutron-
  functional-with-uwsgi/90dde54/testr_results.html

  Snippet: https://paste.opendev.org/show/bJCAjIqiUOcWB0xBhPnX/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035382/+subscriptions




[Yahoo-eng-team] [Bug 2034684] Re: UEFI (edk2/ovmf) network boot with OVN fail because no DHCP release reply

2023-09-12 Thread Rodolfo Alonso
Removing the Neutron dependency. We'll monitor the core OVN bug to track
the progress and test it.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2034684

Title:
  UEFI (edk2/ovmf) network boot with OVN fail because no DHCP release
  reply

Status in Ironic:
  New
Status in neutron:
  Invalid

Bug description:
  When attempting to verify neutron change [1], we discovered that
  despite the options in the DHCPv6 ADV and REQ/REPLY being correct,
  network booting still fails.

  When comparing a traffic capture of the openvswitch+neutron-dhcp-agent setup 
with the OVN setup, a significant difference is that:
  * neutron-dhcp-agent (dnsmasq) does REPLY to RELEASE with a packet including 
a DHCPv6 option of type Status code (13), success, to confirm the release. 
edk2/ovmf does a TFTP transfer of the NBP immediately after receiving this 
reply.
  * OVN does not respond with a REPLY to the client's RELEASE. In the traffic 
capture we can see the client repeats the RELEASE several times, but finally 
gives up and raises an error:

  >>Start PXE over IPv6..
Station IP address is FC01:0:0:0:0:0:0:206
Server IP address is FC00:0:0:0:0:0:0:1
NBP filename is snponly.efi
NBP filesize is 0 Bytes
PXE-E53: No boot filename received.

  --
  FAILING - sequence on OVN
  --
  No.   TimeSource  Destination ProtocolLength  Info
  1 0.00fe80::f816:3eff:fe6f:a0ab   ::  ICMPv6  118 
Router Advertisement from fa:16:3e:6f:a0:ab
  2 51.931422   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  177 
Solicit XID: 0x4f04ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 
  3 51.931840   fe80::f816:3eff:feeb:b176   fe80::5054:ff:feb1:a5b0 
DHCPv6  198 Advertise XID: 0x4f04ed CID: 
000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  4 56.900421   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  219 
Request XID: 0x5004ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  5 56.900726   fe80::f816:3eff:feeb:b176   fe80::5054:ff:feb1:a5b0 
DHCPv6  198 Reply XID: 0x5004ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 
IAA: fc01::2ad 
  6 68.861979   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  7 69.900715   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  8 72.900784   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  9 77.900774   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  10 86.900759   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  11 103.900786  fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 

  
  --
  WORKING - sequence on neutron-dhcp-agent (dnsmasq)
  --
  No.   TimeSource  Destination ProtocolLength  Info
  1 0.00fe80::f816:3eff:fe38:eef0   ff02::1 ICMPv6  142 
Router Advertisement from fa:16:3e:38:ee:f0
  2 0.001102fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  116 
Solicit XID: 0x71d892 CID: 0004c9b0caa37bce994e85633d7572708047 
  3 0.001245fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  208 Advertise XID: 0x71d892 CID: 
0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  4 0.002436fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  162 
Request XID: 0x72d892 CID: 0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  5 0.002508fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  219 Reply XID: 0x72d892 CID: 0004c9b0caa37bce994e85633d7572708047 
IAA: fc01::87 
  6 3.130605fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  223 
Request XID: 0x73d892 CID: 0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  7 3.130791fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  256 Reply XID: 0x73d892 CID: 0004c9b0caa37bce994e85633d7572708047 
IAA: fc01::2a0 
  8 3.132060fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  156 
Release XID: 0x74d892 CID: 0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  9 3.132126fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  128 Reply XID: 0x74d892 CID: 

[Yahoo-eng-team] [Bug 2030747] Re: Port creation on shared network fails with port_security defined

2023-09-07 Thread Rodolfo Alonso
Hello Roman:

This is the default security policy for non-admin users. By default, a
non-admin user cannot create a port explicitly defining the flags
"--disable-port-security" or "--enable-port-security". A non-admin user
must create the port leaving port security at its implicit default
(enabled).

To avoid this default rule, you can change your Neutron policy file, adding a 
rule similar to the "create_port" one:
  "create_port:port_security_enabled": "(rule:admin_only) or (role:member and 
project_id:%(project_id)s)"
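
For reference, the override could look like this in the server's policy file (a
sketch; the file location and the remaining default rules depend on the
deployment, so verify before applying):

```yaml
# /etc/neutron/policy.yaml (hypothetical override; keep the other
# defaults untouched)
"create_port:port_security_enabled": "(rule:admin_only) or (role:member and project_id:%(project_id)s)"
```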

Keep in mind that this is a potential security issue because you are
allowing non-admin users to create ports without any security.

I'm closing this bug.

Regards.


** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2030747

Title:
  Port creation on shared network fails with port_security defined

Status in neutron:
  Invalid

Bug description:
  OpenStack deployment: kolla-ansible 2023.1
  Neutron version is reported as 

  ubuntu@os:~$ docker exec neutron_server neutron --version
  neutron CLI is deprecated and will be removed in the Z cycle. Use openstack 
CLI instead.
  9.0.0

  When a user tries to create a port on a shared network, the operation fails 
when the option
  [--enable-port-security | --disable-port-security]
  is specified. If not, the port is created successfully with 
port_security_enabled = True

  ubuntu@os:~$ openstack port create --network 
30e7e427-c5f7-46b2-b04d-3ebccff5c532 --fixed-ip 
subnet=cf062558-3c32-48c3-96d1-dcaebad3ee71 --project 
71558625372d467c85061759fd2e6bf8 --enable-port-security myport-01
  ForbiddenException: 403: Client Error for url: 
https://os-api:9696/v2.0/ports, ((rule:create_port and 
(rule:create_port:fixed_ips and (rule:create_port:fixed_ips:subnet_id))) and 
rule:create_port:port_security_enabled) is disallowed by policy
  ubuntu@os:~$ openstack port create --network 
30e7e427-c5f7-46b2-b04d-3ebccff5c532 --fixed-ip 
subnet=cf062558-3c32-48c3-96d1-dcaebad3ee71 --project 
71558625372d467c85061759fd2e6bf8 --disable-port-security myport-01
  ForbiddenException: 403: Client Error for url: 
https://os-api:9696/v2.0/ports, ((rule:create_port and 
(rule:create_port:fixed_ips and (rule:create_port:fixed_ips:subnet_id))) and 
rule:create_port:port_security_enabled) is disallowed by policy
  ubuntu@os:~$ openstack port create --network 
30e7e427-c5f7-46b2-b04d-3ebccff5c532 --fixed-ip 
subnet=cf062558-3c32-48c3-96d1-dcaebad3ee71 --project 
71558625372d467c85061759fd2e6bf8 myport-01
  
+-++
  | Field   | Value 
 |
  
+-++
  | admin_state_up  | UP
 |
  | allowed_address_pairs   |   
 |
  | binding_host_id | None  
 |
  | binding_profile | None  
 |
  | binding_vif_details | None  
 |
  | binding_vif_type| None  
 |
  | binding_vnic_type   | normal
 |
  | created_at  | 2023-08-08T11:56:10Z  
 |
  | data_plane_status   | None  
 |
  | description |   
 |
  | device_id   |   
 |
  | device_owner|   
 |
  | device_profile  | None  
 |
  | dns_assignment  | None  
 |
  | dns_domain  | None  
 |
  | dns_name| None  
 |
  | extra_dhcp_opts |   
 |
  | fixed_ips   | ip_address='100.100.100.100', 
subnet_id='cf062558-3c32-48c3-96d1-dcaebad3ee71' |
  | id  | 

[Yahoo-eng-team] [Bug 2034589] [NEW] [FT][OVN] "ovsdb_connection.stop()" failing during the test cleanup process

2023-09-06 Thread Rodolfo Alonso
Public bug reported:

This issue has been found in several tests (not always the same one).
The "ovsdb_connection" times out during the stop command. This error,
raised during the cleanup process, could be skipped; alternatively, the
connection could be checked for still being active before trying to stop
it.

Log:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_2a0/892897/1/gate/neutron-
functional-with-uwsgi/2a03caa/testr_results.html

Error: https://paste.opendev.org/show/brQIBgVSeJmrfqzDWpoT/
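
The guarded cleanup suggested above could look roughly like this. The
Connection class is a minimal stand-in for ovsdbapp's connection object;
the "is_running" attribute and the timeout behaviour are assumptions, not
the real ovsdbapp API:

```python
class Connection:
    """Stand-in for the ovsdbapp connection used in the functional tests."""

    def __init__(self):
        self.is_running = True

    def stop(self, timeout=None):
        self.is_running = False
        return True  # pretend the worker thread joined within the timeout


def safe_stop(conn):
    """Stop the OVSDB connection only if it is still active, swallowing
    any timeout so a slow stop cannot fail the test cleanup phase."""
    if conn is None or not conn.is_running:
        return False
    try:
        return conn.stop(timeout=5)
    except Exception:
        # A timeout here should not abort the cleanup.
        return False
```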

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2034589

Title:
  [FT][OVN] "ovsdb_connection.stop()" failing during the test cleanup
  process

Status in neutron:
  New

Bug description:
  This issue has been found in several tests (not always the same one).
  The "ovsdb_connection" times out during the stop command. This error,
  raised during the cleanup process, could be skipped; alternatively,
  the connection could be checked for still being active before trying
  to stop it.

  Log:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_2a0/892897/1/gate/neutron-
  functional-with-uwsgi/2a03caa/testr_results.html

  Error: https://paste.opendev.org/show/brQIBgVSeJmrfqzDWpoT/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2034589/+subscriptions




[Yahoo-eng-team] [Bug 2034540] [NEW] [FT] "test_add_tc_filter_policy" failing in master and stable branches

2023-09-06 Thread Rodolfo Alonso
Public bug reported:

Test
"neutron.tests.functional.privileged.agent.linux.test_tc_lib.TcFilterClassTestCase.test_add_tc_filter_policy"
is failing 100% of the time.

Error: https://paste.opendev.org/show/bURZiJElPj70CKiGpUbz/

Log:
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b4/893655/3/check/neutron-
functional-with-uwsgi/0b4905d/testr_results.html

** Affects: neutron
 Importance: Critical
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2034540

Title:
  [FT] "test_add_tc_filter_policy" failing in master and stable branches

Status in neutron:
  New

Bug description:
  Test
  
"neutron.tests.functional.privileged.agent.linux.test_tc_lib.TcFilterClassTestCase.test_add_tc_filter_policy"
  is failing 100% of the time.

  Error: https://paste.opendev.org/show/bURZiJElPj70CKiGpUbz/

  Log:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_0b4/893655/3/check/neutron-
  functional-with-uwsgi/0b4905d/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2034540/+subscriptions




[Yahoo-eng-team] [Bug 2033683] Re: openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore', '-n']

2023-09-05 Thread Rodolfo Alonso
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033683

Title:
  openvswitch.agent.ovs_neutron_agent fails to Cmd: ['iptables-restore',
  '-n']

Status in neutron:
  Invalid
Status in tripleo:
  New

Bug description:
  Description
  ===
  Wallaby deployment via undercloud/overcloud recently started to fail on 
overcloud node provision.
  Neutron constantly reports an inability to update iptables, which in turn 
makes baremetal fail to boot from PXE.
  From the review it seems that setting /usr/bin/update-alternatives to legacy 
fails since the neutron user doesn't have sudo rights to run it.
  In the info I can see that neutron user has the following subset of commands 
it's able to run:
  ...
  (root) NOPASSWD: /usr/bin/update-alternatives --set iptables 
/usr/sbin/iptables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --set ip6tables 
/usr/sbin/ip6tables-legacy
  (root) NOPASSWD: /usr/bin/update-alternatives --auto iptables
  (root) NOPASSWD: /usr/bin/update-alternatives --auto ip6tables

  But the issue is that the command isn't found, as it was moved to
  /usr/sbin/update-alternatives
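
If the binary really moved, the allowed commands would need to reference the
new path. A hypothetical corrected fragment (the actual file layout and rule
names depend on the deployment tooling):

```
# Hypothetical sudoers fragment; verify the real path with
# "command -v update-alternatives" on the target image before applying.
neutron ALL = (root) NOPASSWD: /usr/sbin/update-alternatives --set iptables /usr/sbin/iptables-legacy
neutron ALL = (root) NOPASSWD: /usr/sbin/update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
neutron ALL = (root) NOPASSWD: /usr/sbin/update-alternatives --auto iptables
neutron ALL = (root) NOPASSWD: /usr/sbin/update-alternatives --auto ip6tables
```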

  Steps to reproduce
  ==
  1. Deploy undercloud
  2. Deploy networks and VIP
  3. Add and introspect a node
  4. Execute overcloud node provision ... that will timeout 

  Expected result
  ===
  Successful overcloud node baremetal provisioning

  Logs & Configs
  ==
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-18d52177-9c93-401c-b97d-0334e488a257 - - - - -] Error while processing VIF 
ports: neutron_lib.exceptions.ProcessExecutionError: Exit code: 1; Cmd: 
['iptables-restore', '-n']; Stdin: # Generated by iptables_manager

  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent COMMIT
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent # Completed by 
iptables_manager
  2023-08-31 18:21:28.613 4413 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent ; Stdout: ; 
Stderr: iptables-restore: line 23 failed

  Environment
  ===
  Centos 9 Stream and undercloud deployment tool

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033683/+subscriptions




[Yahoo-eng-team] [Bug 2033887] [NEW] [OVN][Trunk] The cold migration process is broken since patch 882581

2023-09-01 Thread Rodolfo Alonso
Public bug reported:

The patch [1] is incorrectly handling the port binding of the subports.
This is a more complex operation than just updating the "PortBinding"
object. In order to change the port binding host ID, it is necessary to
remove all "PortBinding" and "PortBindingLevels" registers and create
new ones pointing to the new host ID.

This patch is also breaking the cold migration process in ML2/OVN
environments with Trunk ports. The subports are not receiving the DHCP
information because OVN is not creating the DHCP flows that attend to
the DHCP requests. For example [2].

The recommended action is to revert the mentioned patch.

[1]https://review.opendev.org/c/openstack/neutron/+/882581
[2]https://paste.opendev.org/show/bVnaVLVP9fkrm0QlXtYc/
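
The rebinding described above can be sketched as follows. This is illustrative
only: PortBinding and PortBindingLevel are plain in-memory stand-ins for the
Neutron DB registers, not the real models:

```python
import dataclasses


@dataclasses.dataclass
class PortBinding:
    port_id: str
    host: str


@dataclasses.dataclass
class PortBindingLevel:
    port_id: str
    host: str
    level: int


def rebind_subport(bindings, levels, port_id, new_host):
    """Move a subport to a new host by deleting the old PortBinding and
    PortBindingLevel registers and creating fresh ones, instead of
    mutating the host field in place."""
    old_levels = [lvl for lvl in levels if lvl.port_id == port_id]
    bindings[:] = [b for b in bindings if b.port_id != port_id]
    levels[:] = [lvl for lvl in levels if lvl.port_id != port_id]
    bindings.append(PortBinding(port_id, new_host))
    levels.extend(PortBindingLevel(port_id, new_host, lvl.level)
                  for lvl in old_levels)
```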

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033887

Title:
  [OVN][Trunk] The cold migration process is broken since patch 882581

Status in neutron:
  In Progress

Bug description:
  The patch [1] is incorrectly handling the port binding of the
  subports. This is a more complex operation than just updating the
  "PortBinding" object. In order to change the port binding host ID, it
  is necessary to remove all "PortBinding" and "PortBindingLevels"
  registers and create new ones pointing to the new host ID.

  This patch is also breaking the cold migration process in ML2/OVN
  environments with Trunk ports. The subports are not receiving the DHCP
  information because OVN is not creating the DHCP flows that attend to
  the DHCP requests. For example [2].

  The recommended action is to revert the mentioned patch.

  [1]https://review.opendev.org/c/openstack/neutron/+/882581
  [2]https://paste.opendev.org/show/bVnaVLVP9fkrm0QlXtYc/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033887/+subscriptions




[Yahoo-eng-team] [Bug 2033651] [NEW] [fullstack] Reduce the CI job time

2023-08-31 Thread Rodolfo Alonso
Public bug reported:

The "neutron-fullstack-with-uwsgi" job usually takes between 2 and 3
hours, depending on the node used. It is not practical to have a CI job
that is so long and, at the same time, not very stable.

** Affects: neutron
 Importance: Low
 Status: New

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033651

Title:
  [fullstack] Reduce the CI job time

Status in neutron:
  New

Bug description:
  The "neutron-fullstack-with-uwsgi" job usually takes between 2 and 3
  hours, depending on the node used. It is not practical to have a CI
  job that is so long and, at the same time, not very stable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033651/+subscriptions




[Yahoo-eng-team] [Bug 1624648] Re: Do not need run _enable_netfilter_for_bridges() for each new device

2023-08-30 Thread Rodolfo Alonso
Closed due to inactivity. Please feel free to reopen if needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1624648

Title:
  Do not need run _enable_netfilter_for_bridges() for each new device

Status in neutron:
  Won't Fix

Bug description:
  For a new device, when a security group is set for it, the function
  prepare_port_filter will be called. Then
  self._enable_netfilter_for_bridges() will be executed once per device,
  which is not needed every time.

  def prepare_port_filter(self, port):
  LOG.debug("Preparing device (%s) filter", port['device'])
  self._remove_chains()
  self._set_ports(port)
  self._enable_netfilter_for_bridges()
  # each security group has it own chains
  self._setup_chains()
  return self.iptables.apply()

  We could check _enabled_netfilter_for_bridges first and decide whether
  to run self._enable_netfilter_for_bridges().
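
The suggested guard could look roughly like this (a sketch; the flag name and
the call sites are simplified stand-ins for the real iptables firewall
driver):

```python
class FirewallDriverSketch:
    """Minimal stand-in illustrating the proposed change: run the
    netfilter-for-bridges setup only once per driver instance instead of
    once for every new device."""

    def __init__(self):
        self._netfilter_for_bridges_enabled = False
        self.enable_calls = 0  # only for demonstration

    def _enable_netfilter_for_bridges(self):
        if self._netfilter_for_bridges_enabled:
            return  # already done; skip the expensive sysctl work
        self.enable_calls += 1  # the real code would set sysctl knobs here
        self._netfilter_for_bridges_enabled = True

    def prepare_port_filter(self, port):
        # Called for each new device; the guard makes repeats cheap.
        self._enable_netfilter_for_bridges()
```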

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1624648/+subscriptions




[Yahoo-eng-team] [Bug 1809823] Re: Neutron_api (unhealthy) after few days

2023-08-30 Thread Rodolfo Alonso
Closed due to inactivity. Please feel free to reopen if needed.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1809823

Title:
  Neutron_api (unhealthy) after few days

Status in neutron:
  Won't Fix
Status in oslo.service:
  Confirmed

Bug description:
  Description
  ===
  On the undercloud (pretty sure we have also seen it on the overcloud; 
i'll update when sure), without any action, we notice that the 
neutron_api service is in "unhealthy" state and stops functioning. 
  Log shows - 
  2018-12-26 00:00:35.774 7 INFO oslo_service.service [-] Caught SIGHUP, 
stopping children
  2018-12-26 00:00:36.077 40997 ERROR oslo_service.service [-] Error starting 
thread.: RuntimeError: A fixed interval looping call can only run one function 
at a time

  openstack commands that need neutron fail (e.g. openstack server
  list)

  Restarting the docker container (neutron_api) resolves the problem.

  
  Steps to reproduce
  ==
  Deploy. 
  Wait 4 days. 

  Expected result
  ===
  The service should remain healthy.

  Actual result
  =
  Not healthy.

  Environment
  ===
  Rocky , container based.

  
  Logs & Configs
  ==

  Logs : http://paste.openstack.org/show/738658/

  
  More info: 
  ==
  Google showed this - 
  https://bugs.launchpad.net/oslo.service/+bug/1547029
  follow by - 
  http://paste.openstack.org/show/487420/

  It seems that adding "eventlet.sleep(0)" at the <<>> marker below
  might resolve the issue -

  def run_service(service, done):
  """Service start wrapper.

  :param service: service to run
  :param done: event to wait on until a shutdown is triggered
  :returns: None

  """
  try:
  < HERE  
  service.start()
  except Exception:
  LOG.exception('Error starting thread.')
  raise SystemExit(1)
  else:
  done.wait()
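
The proposed change can be sketched self-containedly as follows. Note that
eventlet is stubbed here so the snippet stays runnable; the real fix would call
the actual eventlet.sleep(0) inside oslo.service:

```python
class _FakeEventlet:
    """Stand-in for the eventlet module, so the sketch has no dependency."""

    @staticmethod
    def sleep(seconds=0):
        # The real eventlet.sleep(0) yields control to other greenthreads.
        pass


eventlet = _FakeEventlet()  # assumption: real code would import eventlet


def run_service(service, done):
    """Service start wrapper, based on the snippet quoted above, with the
    proposed cooperative yield added before service.start()."""
    try:
        eventlet.sleep(0)  # proposed addition: let pending greenthreads run
        service.start()
    except Exception:
        raise SystemExit(1)
    else:
        done.wait()
```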

  
  The problem is that I didn't come up with an easy way to reproduce the issue 
in order to confirm it.

  Any suggestions ?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1809823/+subscriptions




[Yahoo-eng-team] [Bug 2033493] [NEW] [ndr] Unit tests failing due to a missing patch, still unreleased, in Neutron

2023-08-30 Thread Rodolfo Alonso
Public bug reported:

The Neutron patch [1] removed the need for
"neutron_lib.db.model_base.HasProjectPrimaryKeyIndex", which was deleted
from neutron-lib in [2].

We are now using an old version of Neutron, 23.0.0.0b2, with the latest
released neutron-lib, which removed the mentioned class.

[1]https://review.opendev.org/c/openstack/neutron/+/886213
[2]https://review.opendev.org/c/openstack/neutron-lib/+/886589

** Affects: neutron
 Importance: High
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033493

Title:
  [ndr] Unit tests failing due to a missing patch, still unreleased, in
  Neutron

Status in neutron:
  New

Bug description:
  The Neutron patch [1] removed the need for
  "neutron_lib.db.model_base.HasProjectPrimaryKeyIndex", which was
  deleted from neutron-lib in [2].

  We are now using an old version of Neutron, 23.0.0.0b2, with the
  latest released neutron-lib, which removed the mentioned class.

  [1]https://review.opendev.org/c/openstack/neutron/+/886213
  [2]https://review.opendev.org/c/openstack/neutron-lib/+/886589

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033493/+subscriptions




[Yahoo-eng-team] [Bug 2033387] [NEW] [FT] "TestMaintenance.test_port_forwarding" randomly failing in call assertion

2023-08-29 Thread Rodolfo Alonso
Public bug reported:

Test 
"neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_maintenance.TestMaintenance.test_port_forwarding"
 is randomly failing during the assertion of the 
"neutron_lib.callbacks.registry.publish" calls after calling 
"delete_floatingip_port_forwarding". The method is reporting two calls:
* call('port', 'provisioning_complete', 'L2', 
payload=),
* call('port_forwarding', 'after_delete', 
, payload=)

The first one could be a delayed call from the same test. The proposal
is to specifically test the call we are expecting, that is
('port_forwarding', 'after_delete').

Logs:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8a2/periodic/opendev.org/openstack/neutron/master/neutron-
functional-with-pyroute2-master/8a279fa/testr_results.html

Snippet: https://paste.opendev.org/show/bNlr2vMbWn8i7JCuiF8a/

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: Triaged

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2033387

Title:
  [FT] "TestMaintenance.test_port_forwarding" randomly failing in call
  assertion

Status in neutron:
  Triaged

Bug description:
  Test 
"neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.ovsdb.test_maintenance.TestMaintenance.test_port_forwarding"
 is randomly failing during the assertion of the 
"neutron_lib.callbacks.registry.publish" calls after calling 
"delete_floatingip_port_forwarding". The method is reporting two calls:
  * call('port', 'provisioning_complete', 'L2', 
payload=),
  * call('port_forwarding', 'after_delete', 
, payload=)

  The first one could be a delayed call from the same test. The proposal
  is to specifically test the call we are expecting, that is
  ('port_forwarding', 'after_delete').

  Logs:
  
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_8a2/periodic/opendev.org/openstack/neutron/master/neutron-
  functional-with-pyroute2-master/8a279fa/testr_results.html

  Snippet: https://paste.opendev.org/show/bNlr2vMbWn8i7JCuiF8a/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2033387/+subscriptions




[Yahoo-eng-team] [Bug 2027610] [NEW] [sqlalchemy-20] Error in "test_notify_port_status_all_values" using a wrong OVO parameter

2023-07-12 Thread Rodolfo Alonso
Public bug reported:

Snippet: https://paste.opendev.org/show/bCteNr5TAk8k4YqS3VAC/

The "Port.status" field only accepts string values. If other types are passed, 
the OVO validator will fail with the following error (from the logs):
  ValueError: A string is required in field status, not a LoaderCallableStatus
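
The failure mode can be reproduced with a tiny stand-in for the OVO
string-field validator (a sketch; the real check lives in
oslo.versionedobjects, and LoaderCallableStatus here is only a dummy for the
SQLAlchemy sentinel that leaked into Port.status):

```python
class LoaderCallableStatus:
    """Dummy stand-in for the SQLAlchemy sentinel type from the traceback."""


class StringFieldSketch:
    """Mimics the OVO behaviour described above: only str values are
    accepted; anything else raises ValueError."""

    def coerce(self, obj, attr, value):
        if not isinstance(value, str):
            raise ValueError('A string is required in field %s, not a %s'
                             % (attr, type(value).__name__))
        return value


field = StringFieldSketch()
```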

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2027610

Title:
  [sqlalchemy-20] Error in "test_notify_port_status_all_values" using an
  wrong OVO parameter

Status in neutron:
  New

Bug description:
  Snippet: https://paste.opendev.org/show/bCteNr5TAk8k4YqS3VAC/

  The "Port.status" field only accepts string values. If other types are 
passed, the OVO validator will fail with the following error (from the logs):
ValueError: A string is required in field status, not a LoaderCallableStatus

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2027610/+subscriptions




[Yahoo-eng-team] [Bug 2027604] [NEW] [sqlalchemy-20] The Query.get() method is considered legacy as of the 1.x series of SQLAlchemy and becomes a legacy construct in 2.0

2023-07-12 Thread Rodolfo Alonso
Public bug reported:

Captured stderr:


  
/opt/stack/neutron/neutron/plugins/ml2/driver_context.py:45: LegacyAPIWarning: 
The Query.get() method is considered legacy as of the 1.x series of SQLAlchemy 
and becomes a legacy construct in 2.0. The method is now available as 
Session.get() (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: 
https://sqlalche.me/e/b8d9)
  db_obj = session.query(self._model_class).get(self._identity_key)
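
The migration is mechanical: Session.get() replaces Query.get(). A minimal,
self-contained sketch (assuming SQLAlchemy >= 1.4 is installed; the model and
key names are illustrative, not Neutron's real ones):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class PortModel(Base):  # illustrative model, not Neutron's actual one
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    name = Column(String)


engine = create_engine('sqlite://')  # in-memory DB for the sketch
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(PortModel(id=1, name='p1'))
    session.commit()
    # Legacy form (emits LegacyAPIWarning under SQLAlchemy 1.4+):
    #   db_obj = session.query(PortModel).get(1)
    # 2.0-style replacement:
    db_obj = session.get(PortModel, 1)
    fetched_name = db_obj.name
```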

** Affects: neutron
 Importance: Medium
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Medium

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2027604

Title:
  [sqlalchemy-20] The Query.get() method is considered legacy as of the
  1.x series of SQLAlchemy and becomes a legacy construct in 2.0

Status in neutron:
  New

Bug description:
  Captured stderr:
    


/opt/stack/neutron/neutron/plugins/ml2/driver_context.py:45: LegacyAPIWarning: 
The Query.get() method is considered legacy as of the 1.x series of SQLAlchemy 
and becomes a legacy construct in 2.0. The method is now available as 
Session.get() (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: 
https://sqlalche.me/e/b8d9)
    db_obj = session.query(self._model_class).get(self._identity_key)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2027604/+subscriptions



