[Yahoo-eng-team] [Bug 1905493] Re: cloud-init status --wait hangs indefinitely in a nested lxd container

2022-06-13 Thread Christian Ehrhardt 
Due to a ping on IRC I wanted to summarize the situation here as it
seems this still affects people.

In nested LXD containers we seem to have multiple issues (a reproduction sketch follows the list):
- apparmor service failing to start (might need to work with LXD to sort out 
why and how to fix it)
  - if it doesn't work at least fail to start more gracefully
  - comment 2 has a workaround to make dbus not insist on apparmor, but that is 
not a real fix we could generally apply

- snapd snapd.seeded.service needs code to die/exit gracefully in this 
situation (as it won't work)
  - See comment 7, might have changed since then, but worth a revisit
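
A rough reproduction sketch (image alias and container names are
placeholders; security.nesting is the LXD option that permits the inner
container):

  lxc launch ubuntu:20.04 outer -c security.nesting=true
  lxc exec outer -- lxd init --auto        # the inner LXD needs initializing first
  lxc exec outer -- lxc launch ubuntu:20.04 inner
  lxc exec outer -- lxc exec inner -- cloud-init status --wait   # prints "." forever
  lxc exec outer -- lxc exec inner -- systemctl --failed         # shows apparmor, snapd.seeded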

** Also affects: lxd (Ubuntu)
   Importance: Undecided
   Status: New

** Summary changed:

- cloud-init status --wait hangs indefinitely in a nested lxd container
+ Services (apparmor, snapd.seeded, ...?) fail to start in nested lxd container

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1905493

Title:
  Services (apparmor, snapd.seeded, ...?) fail to start in nested lxd
  container

Status in AppArmor:
  New
Status in cloud-init:
  Invalid
Status in snapd:
  Confirmed
Status in dbus package in Ubuntu:
  Confirmed
Status in lxd package in Ubuntu:
  New
Status in systemd package in Ubuntu:
  Invalid

Bug description:
  When booting a nested lxd container inside another lxd container (just
  a normal container, not a VM, i.e. just L2) and running cloud-init
  status --wait, the "." is printed indefinitely and the command never
  returns.

To manage notifications about this bug go to:
https://bugs.launchpad.net/apparmor/+bug/1905493/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978519] [NEW] Create auto allocated topology encounter error when enable ndp_proxy service plugin

2022-06-13 Thread yangjianfeng
Public bug reported:

In the master branch, when the `auto_allocate` and `ndp_proxy` service
plugins are enabled simultaneously, executing the command below to
create an auto-allocated topology makes the neutron server log the
ERROR shown at [1].

  openstack network auto allocated topology create --or-show

[1] https://paste.opendev.org/show/bYdBYap1NcOnTjIK7inP/
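
For context, reproducing this only requires both plugins on neutron's
service_plugins list; a minimal neutron.conf excerpt (plugin aliases as
named in this report, alongside whatever router plugin the deployment
already uses):

  [DEFAULT]
  service_plugins = router,auto_allocate,ndp_proxy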

** Affects: neutron
 Importance: Undecided
 Assignee: yangjianfeng (yangjianfeng)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => yangjianfeng (yangjianfeng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978519

Title:
  Create auto allocated topology encounter error when enable ndp_proxy
  service plugin

Status in neutron:
  New

Bug description:
  In the master branch, when the `auto_allocate` and `ndp_proxy` service
  plugins are enabled simultaneously, executing the command below to
  create an auto-allocated topology makes the neutron server log the
  ERROR shown at [1].

openstack network auto allocated topology create --or-show

  [1] https://paste.opendev.org/show/bYdBYap1NcOnTjIK7inP/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1978519/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1977831] Re: Response time increasing on new subnets over same network

2022-06-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/844959
Committed: 
https://opendev.org/openstack/neutron/commit/c25097b0b0da2af9021699036a69cd10e66533b1
Submitter: "Zuul (22348)"
Branch: master

commit c25097b0b0da2af9021699036a69cd10e66533b1
Author: Fernando Royo 
Date:   Tue Jun 7 13:51:51 2022 +0200

Optimize queries for port operations

Port create/update are the most time-consuming operations
during subnet creation. For example, in cases where several
subnets are created on the same network, the response time
of those port operations grows linearly as the total number
of subnets increases.

This patch reduces the number of queries required by port
operations in order to reduce the response time.

Closes-Bug: #1977831
Change-Id: I0fccf36a2035e8f6c2fa8dab0307358da600c8f7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1977831

Title:
  Response time increasing on new subnets over same network

Status in neutron:
  Fix Released

Bug description:
  After several subnets have been created on the same network, the
  response time for creating new ones increases linearly; once the
  subnet count is high (over 1000), a timeout is triggered.

  The issue can be easily reproduced by creating subnets in a loop and
  capturing the time each creation takes as the total count increases.
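
  A hedged version of that loop (network name and address ranges are
  placeholders):

    for i in $(seq 1 1000); do
      time openstack subnet create --network net0 \
        --subnet-range 10.$((i / 256)).$((i % 256)).0/24 subnet-$i
    done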

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1977831/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1939545] Re: device_path not saved into bdm.connection_info during pre_live_migration

2022-06-13 Thread melanie witt
** Also affects: nova/victoria
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/wallaby
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1939545

Title:
  device_path not saved into bdm.connection_info during
  pre_live_migration

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  New

Bug description:
  Description
  ===

  Various block based volume drivers attempt to save a device_path back
  into the stashed connection_info stored within Nova's block device
  mappings *after* connecting a volume to the underlying host. Thanks to
  the indirection caused by the various data structures used between the
  virt and compute layers this isn't actually saved into the underlying
  block device mapping database record during a typical attach flow
  until we get back into the compute and driver block device layer:

  
https://github.com/openstack/nova/blob/84b61790763f91e12eebb96d955e2f83abc00d56/nova/virt/block_device.py#L613-L619

  However when an instance is being live migrated these volumes are
  connected as part of pre_live_migration on the destination and no
  attempt is made to save the updates made to the connection_info of the
  volume into the database. This isn't a massive problem as os-brick can
  for the most part lookup the device during future operations but it is
  obviously inefficient.

  This was initially hit in bug #1936439 but that bug is now being used
  to track a trivial DEBUG log change while this bug tracks the
  underlying fix for the above issue.

  Steps to reproduce
  ==
  * Attach an iSCSI/FC/NVME etc volume to an instance
  * Live migrate the instance
  * Confirm that device_path isn't present in the connection_info stashed in
    the bdm (one way to check is sketched below)
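
  One way to check the stashed connection_info, as a sketch against a
  devstack-style cell database (table and column names from nova's schema;
  the instance UUID is a placeholder):

    mysql -D nova_cell1 -e "select connection_info from block_device_mapping where instance_uuid='<uuid>' and deleted=0;" | grep device_path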

  Expected result
  ===
  device_path is stashed in the connection_info of the bdm

  Actual result
  =
  device_path isn't stashed in the connection_info of the bdm

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

  master

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

  libvirt

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

  LVM/iSCSI/FC/NVMe, any block based volume backends.

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

  N/A

  Logs & Configs
  ==

  See bug #1936439

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1939545/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667736] Re: gate-neutron-fwaas-dsvm-functional failure after recent localrc change

2022-06-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667736

Title:
  gate-neutron-fwaas-dsvm-functional failure after recent localrc change

Status in Ubuntu Cloud Archive:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/59/286059/1/check/gate-neutron-fwaas-
  dsvm-functional/a0f2285/console.html

  2017-02-24 15:27:58.187720 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:26 : 
  source /opt/stack/new/devstack/localrc
  2017-02-24 15:27:58.187833 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh: line 26: 
/opt/stack/new/devstack/localrc: No such file or directory
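
  The hook fails because it sources localrc unconditionally; a hedged
  sketch of the kind of guard that avoids this (devstack had moved from
  localrc to local.conf):

    # in gate_hook.sh: only source localrc if devstack still wrote one
    if [ -f /opt/stack/new/devstack/localrc ]; then
        source /opt/stack/new/devstack/localrc
    fi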

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1667736/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1832210] Re: fwaas netfilter_log: incorrect decode of log prefix under python 3

2022-06-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832210

Title:
  fwaas netfilter_log: incorrect decode of log prefix under python 3

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-fwaas source package in Cosmic:
  Won't Fix
Status in neutron-fwaas source package in Disco:
  Fix Released
Status in neutron-fwaas source package in Eoan:
  Fix Released

Bug description:
  Under Python 3, the prefix of a firewall log message is not correctly
  decoded "b'10612530182266949194'":

  2019-06-10 09:14:34 Unknown cookie packet_in 
pkt=ethernet(dst='fa:16:3e:c6:58:5e',ethertype=2048,src='fa:16:3e:e0:2c:be')ipv4(csum=51290,dst='10.5.0.10',flags=2,header_length=5,identification=37612,offset=0,option=None,proto=6,src='192.168.21.182',tos=16,total_length=52,ttl=63,version=4)tcp(ack=3151291228,bits=17,csum=23092,dst_port=57776,offset=8,option=[TCPOptionNoOperation(kind=1,length=1),
 TCPOptionNoOperation(kind=1,length=1), 
TCPOptionTimestamps(kind=8,length=10,ts_ecr=1574746440,ts_val=482688)],seq=2769917228,src_port=22,urgent=0,window_size=3120)
  2019-06-10 09:14:34 {'prefix': "b'10612530182266949194'", 'msg': 
"ethernet(dst='fa:16:3e:c6:58:5e',ethertype=2048,src='fa:16:3e:e0:2c:be')ipv4(csum=51290,dst='10.5.0.10',flags=2,header_length=5,identification=37612,offset=0,option=None,proto=6,src='192.168.21.182',tos=16,total_length=52,ttl=63,version=4)tcp(ack=3151291228,bits=17,csum=23092,dst_port=57776,offset=8,option=[TCPOptionNoOperation(kind=1,length=1),
 TCPOptionNoOperation(kind=1,length=1), 
TCPOptionTimestamps(kind=8,length=10,ts_ecr=1574746440,ts_val=482688)],seq=2769917228,src_port=22,urgent=0,window_size=3120)"}
  2019-06-10 09:14:34 {'0bf81ded-bf94-437d-ad49-063bba9be9bb': 
[, 
]}

  This results in the firewall log driver not being able to map the
  message to the associated port and log resources in neutron resulting
  in the 'unknown cookie packet_in' warning message.
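
  A minimal illustration of the suspected py2-to-py3 slip (illustrative
  only, not the driver's actual code): calling str() on bytes embeds the
  b'...' repr, while an explicit decode yields a prefix neutron can match:

    python3 -c "print(str(b'10612530182266949194'))"             # b'10612530182266949194'
    python3 -c "print(b'10612530182266949194'.decode('utf-8'))"  # 10612530182266949194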

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1832210/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831986] Re: fwaas_v2 - unable to associate port with firewall (PXC strict mode)

2022-06-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831986

Title:
  fwaas_v2 - unable to associate port with firewall (PXC strict mode)

Status in Ubuntu Cloud Archive:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in neutron:
  In Progress
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-fwaas source package in Disco:
  Won't Fix
Status in neutron-fwaas source package in Eoan:
  Fix Released

Bug description:
  [Impact]
  Unable to associate ports with a firewall under FWaaS v2

  [Test Case]
  Deploy OpenStack (stein or later) using Charms.
  Create a firewall policy and apply it to a router - this fails because
  the port cannot be associated with the policy in the underlying DB.

  [Regression Potential]
  Medium; the proposed fix has not been accepted upstream as yet (discussion 
ongoing due to change of database migrations).
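
  The direction of the fix is to give the association table an explicit
  primary key so PXC accepts DML on it; a hedged sketch of the equivalent
  DDL (firewall_group_id appears in the failing DELETE below; port_id is
  an assumed second column of the association table):

    mysql -D neutron -e "ALTER TABLE firewall_group_port_associations_v2 ADD PRIMARY KEY (firewall_group_id, port_id);"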

  [Original Bug Report]
  Impacts both Stein and Rocky (although rocky does not enable v2 just yet).

  542 a9761fa9124740028d0c1d70ff7aa542] DBAPIError exception wrapped from 
(pymysql.err.InternalError) (1105, 'Percona-XtraDB-Cluster prohibits use of DML 
command on a table (neutron.firewall_group_port_associations_v2) without an 
explicit primary key with pxc_strict_mode = ENFORCING or MASTER') [SQL: 'DELETE 
FROM firewall_group_port_associations_v2 WHERE 
firewall_group_port_associations_v2.firewall_group_id = 
%(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1': 
'85a277d0-ebaf-4a5d-9d45-6a74b8f54372'}] (Background on this error at: 
http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1105, 
'Percona-XtraDB-Cluster prohibits use of DML command on a table 
(neutron.firewall_group_port_associations_v2) without an explicit primary key 
with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1193, in 
_execute_context
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
context)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 509, in 
do_execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters result 
= self._query(query)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in 
check_error
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters raise 
errorclass(errno, errval)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
pymysql.err.InternalError: (1105, 'Percona-XtraDB-Cluster prohibits use of DML 
command on a table 

[Yahoo-eng-team] [Bug 1667736] Re: gate-neutron-fwaas-dsvm-functional failure after recent localrc change

2022-06-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1667736

Title:
  gate-neutron-fwaas-dsvm-functional failure after recent localrc change

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released

Bug description:
  eg. http://logs.openstack.org/59/286059/1/check/gate-neutron-fwaas-
  dsvm-functional/a0f2285/console.html

  2017-02-24 15:27:58.187720 | + 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh:main:26 : 
  source /opt/stack/new/devstack/localrc
  2017-02-24 15:27:58.187833 | 
/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/contrib/gate_hook.sh: line 26: 
/opt/stack/new/devstack/localrc: No such file or directory

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1667736/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1831986] Re: fwaas_v2 - unable to associate port with firewall (PXC strict mode)

2022-06-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1831986

Title:
  fwaas_v2 - unable to associate port with firewall (PXC strict mode)

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in neutron:
  In Progress
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-fwaas source package in Disco:
  Won't Fix
Status in neutron-fwaas source package in Eoan:
  Fix Released

Bug description:
  [Impact]
  Unable to associate ports with a firewall under FWaaS v2

  [Test Case]
  Deploy OpenStack (stein or later) using Charms.
  Create a firewall policy and apply it to a router - this fails because
  the port cannot be associated with the policy in the underlying DB.

  [Regression Potential]
  Medium; the proposed fix has not been accepted upstream as yet (discussion 
ongoing due to change of database migrations).

  [Original Bug Report]
  Impacts both Stein and Rocky (although rocky does not enable v2 just yet).

  542 a9761fa9124740028d0c1d70ff7aa542] DBAPIError exception wrapped from 
(pymysql.err.InternalError) (1105, 'Percona-XtraDB-Cluster prohibits use of DML 
command on a table (neutron.firewall_group_port_associations_v2) without an 
explicit primary key with pxc_strict_mode = ENFORCING or MASTER') [SQL: 'DELETE 
FROM firewall_group_port_associations_v2 WHERE 
firewall_group_port_associations_v2.firewall_group_id = 
%(firewall_group_id_1)s'] [parameters: {'firewall_group_id_1': 
'85a277d0-ebaf-4a5d-9d45-6a74b8f54372'}] (Background on this error at: 
http://sqlalche.me/e/2j85): pymysql.err.InternalError: (1105, 
'Percona-XtraDB-Cluster prohibits use of DML command on a table 
(neutron.firewall_group_port_associations_v2) without an explicit primary key 
with pxc_strict_mode = ENFORCING or MASTER')
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters Traceback 
(most recent call last):
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1193, in 
_execute_context
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
context)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 509, in 
do_execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
cursor.execute(statement, parameters)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/cursors.py", line 165, in execute
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters result 
= self._query(query)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/cursors.py", line 321, in _query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
conn.query(q)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 860, in query
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1061, in 
_read_query_result
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
result.read()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1349, in read
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
first_packet = self.connection._read_packet()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 1018, in 
_read_packet
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
packet.check_error()
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/connections.py", line 384, in 
check_error
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
err.raise_mysql_exception(self._data)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters   File 
"/usr/lib/python3/dist-packages/pymysql/err.py", line 107, in 
raise_mysql_exception
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters raise 
errorclass(errno, errval)
  2019-06-07 10:07:50.937 30837 ERROR oslo_db.sqlalchemy.exc_filters 
pymysql.err.InternalError: (1105, 'Percona-XtraDB-Cluster prohibits use of DML 
command on a table 

[Yahoo-eng-team] [Bug 1832210] Re: fwaas netfilter_log: incorrect decode of log prefix under python 3

2022-06-13 Thread Corey Bryant
** Changed in: cloud-archive
   Status: Fix Released => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1832210

Title:
  fwaas netfilter_log: incorrect decode of log prefix under python 3

Status in Ubuntu Cloud Archive:
  Fix Committed
Status in Ubuntu Cloud Archive rocky series:
  Won't Fix
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-fwaas source package in Cosmic:
  Won't Fix
Status in neutron-fwaas source package in Disco:
  Fix Released
Status in neutron-fwaas source package in Eoan:
  Fix Released

Bug description:
  Under Python 3, the prefix of a firewall log message is not correctly
  decoded "b'10612530182266949194'":

  2019-06-10 09:14:34 Unknown cookie packet_in 
pkt=ethernet(dst='fa:16:3e:c6:58:5e',ethertype=2048,src='fa:16:3e:e0:2c:be')ipv4(csum=51290,dst='10.5.0.10',flags=2,header_length=5,identification=37612,offset=0,option=None,proto=6,src='192.168.21.182',tos=16,total_length=52,ttl=63,version=4)tcp(ack=3151291228,bits=17,csum=23092,dst_port=57776,offset=8,option=[TCPOptionNoOperation(kind=1,length=1),
 TCPOptionNoOperation(kind=1,length=1), 
TCPOptionTimestamps(kind=8,length=10,ts_ecr=1574746440,ts_val=482688)],seq=2769917228,src_port=22,urgent=0,window_size=3120)
  2019-06-10 09:14:34 {'prefix': "b'10612530182266949194'", 'msg': 
"ethernet(dst='fa:16:3e:c6:58:5e',ethertype=2048,src='fa:16:3e:e0:2c:be')ipv4(csum=51290,dst='10.5.0.10',flags=2,header_length=5,identification=37612,offset=0,option=None,proto=6,src='192.168.21.182',tos=16,total_length=52,ttl=63,version=4)tcp(ack=3151291228,bits=17,csum=23092,dst_port=57776,offset=8,option=[TCPOptionNoOperation(kind=1,length=1),
 TCPOptionNoOperation(kind=1,length=1), 
TCPOptionTimestamps(kind=8,length=10,ts_ecr=1574746440,ts_val=482688)],seq=2769917228,src_port=22,urgent=0,window_size=3120)"}
  2019-06-10 09:14:34 {'0bf81ded-bf94-437d-ad49-063bba9be9bb': 
[, 
]}

  This results in the firewall log driver not being able to map the
  message to the associated port and log resources in neutron resulting
  in the 'unknown cookie packet_in' warning message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1832210/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1942329] Re: MAC address of direct-physical port is not updated during migration

2022-06-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/829248
Committed: 
https://opendev.org/openstack/nova/commit/cd03bbc1c33e33872594cf002f0e7011ab8ea047
Submitter: "Zuul (22348)"
Branch: master

commit cd03bbc1c33e33872594cf002f0e7011ab8ea047
Author: Balazs Gibizer 
Date:   Tue Feb 15 14:38:41 2022 +0100

Record SRIOV PF MAC in the binding profile

Today Nova updates the mac_address of a direct-physical port to reflect
the MAC address of the physical device the port is bound to. But this
can only be done before the port is bound. However during migration Nova
does not update the MAC when the port is bound to a different physical
device on the destination host.

This patch extends the libvirt virt driver to provide the MAC address of
the PF in the pci_info returned to the resource tracker. This
information will be then persisted in the extra_info field of the
PciDevice object.

Then the port update logic during migration, resize, live
migration, evacuation and unshelve is also extended to record the MAC of
physical device in the port binding profile according to the device on
the destination host.

The related neutron change Ib0638f5db69cb92daf6932890cb89e83cf84f295
uses this info from the binding profile to update the mac_address field
of the port when the binding is activated.

Closes-Bug: #1942329

Change-Id: Iad5e70b43a65c076134e1874cb8e75d1ba214fde
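
With the patch applied, the destination PF's MAC should be visible in the
port's binding profile; a hedged check (the port ID is a placeholder and
the profile contents are illustrative, device_mac_address being the field
this change records):

  openstack port show <port-id> -f value -c binding_profile
  # e.g. {'pci_slot': '0000:81:00.1', 'device_mac_address': 'b4:96:91:34:ed:d4', ...}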


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1942329

Title:
  MAC address of direct-physical port is not updated during migration

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Nova updates the MAC of the direct-physical port based on the MAC of the
  PF selected during the initial boot of the VM. But Nova does not update
  the MAC when the VM is migrated to another compute node and is therefore
  using another PF.

  
  Steps to reproduce
  ==
  Needs a multi node devstack with available SRIOV PFs.

  stack@master0:~$ openstack hypervisor list
  +----+---------------------+-----------------+-----------+-------+
  | ID | Hypervisor Hostname | Hypervisor Type | Host IP   | State |
  +----+---------------------+-----------------+-----------+-------+
  |  1 | master0             | QEMU            | 10.1.0.21 | up    |
  |  2 | node0               | QEMU            | 10.1.0.22 | up    |
  +----+---------------------+-----------------+-----------+-------+

  
  stack@master0:~$ mysql -D nova_cell1 -e "select status, address, parent_addr, dev_type, compute_node_id, product_id, instance_uuid from pci_devices;"
  +-----------+--------------+--------------+----------+-----------------+------------+---------------+
  | status    | address      | parent_addr  | dev_type | compute_node_id | product_id | instance_uuid |
  +-----------+--------------+--------------+----------+-----------------+------------+---------------+
  | available | 0000:81:00.0 | NULL         | type-PF  |               1 | 154d       | NULL          |
  | available | 0000:81:00.1 | NULL         | type-PF  |               1 | 154d       | NULL          |
  | available | 0000:81:10.0 | 0000:81:00.0 | type-VF  |               1 | 10ed       | NULL          |
  | available | 0000:81:10.2 | 0000:81:00.0 | type-VF  |               1 | 10ed       | NULL          |
  | available | 0000:81:10.4 | 0000:81:00.0 | type-VF  |               1 | 10ed       | NULL          |
  | available | 0000:81:10.6 | 0000:81:00.0 | type-VF  |               1 | 10ed       | NULL          |
  | available | 0000:81:00.0 | NULL         | type-PF  |               2 | 154d       | NULL          |
  | available | 0000:81:00.1 | NULL         | type-PF  |               2 | 154d       | NULL          |
  | available | 0000:81:10.0 | 0000:81:00.0 | type-VF  |               2 | 10ed       | NULL          |
  | available | 0000:81:10.2 | 0000:81:00.0 | type-VF  |               2 | 10ed       | NULL          |
  | available | 0000:81:10.4 | 0000:81:00.0 | type-VF  |               2 | 10ed       | NULL          |
  | available | 0000:81:10.6 | 0000:81:00.0 | type-VF  |               2 | 10ed       | NULL          |
  +-----------+--------------+--------------+----------+-----------------+------------+---------------+

  These are the PF MACs:

  stack@master0:~$ ip a | grep b4:96:91:34
  link/ether b4:96:91:34:f4:34 brd ff:ff:ff:ff:ff:ff
  link/ether b4:96:91:34:f4:36 brd ff:ff:ff:ff:ff:ff

  stack@node0:~/nova$ ip a | grep b4:96:91:34
  link/ether b4:96:91:34:ed:d4 brd ff:ff:ff:ff:ff:ff
  link/ether b4:96:91:34:ed:d6 brd ff:ff:ff:ff:ff:ff

  
  1) create a port with vnic_type=directy-physical

  stack@master0:~$ openstack port show 

[Yahoo-eng-team] [Bug 1978489] [NEW] libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

2022-06-13 Thread Artom Lifshitz
Public bug reported:

Description
===

Using the libvirt driver and a host OS that uses cgroups v2 (RHEL 9,
Ubuntu Jammy), an instance with more than 16 CPUs cannot be booted.

Steps to reproduce
==

1. Boot an instance with 10 (or more) CPUs on RHEL 9 or Ubuntu Jammy
using Nova with the libvirt driver.

Expected result
===

Instance boots.

Actual result
=

Instance fails to boot with a 'Value specified in CPUWeight is out of
range' error.

Environment
===

Originally reported as a libvirt bug in RHEL 9 [1]

Additional information
==

This is happening because Nova defaults to 1024 * (# of CPUs) for the
value of domain/cputune/shares in the libvirt XML. This is then passed
directly by libvirt to the cgroups API, but cgroups v2 has a maximum
value of 10000. 10000 / 1024 ~= 9.76, so the default exceeds the limit
at 10 or more CPUs.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2035518
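
Two quick local checks, as a hedged sketch (not from the original report):
confirm the host is on cgroups v2, and that nova's default shares pass the
ceiling at 10 vCPUs:

  stat -fc %T /sys/fs/cgroup     # prints "cgroup2fs" on a cgroups v2 host
  python3 -c 'print(1024 * 10)'  # 10240, above the CPUWeight maximum of 10000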

** Affects: nova
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1978489

Title:
  libvirt / cgroups v2: cannot boot instance with more than 16 CPUs

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===

  Using the libvirt driver and a host OS that uses cgroups v2 (RHEL 9,
  Ubuntu Jammy), an instance with more than 16 CPUs cannot be booted.

  Steps to reproduce
  ==

  1. Boot an instance with 10 (or more) CPUs on RHEL 9 or Ubuntu Jammy
  using Nova with the libvirt driver.

  Expected result
  ===

  Instance boots.

  Actual result
  =

  Instance fails to boot with a 'Value specified in CPUWeight is out of
  range' error.

  Environment
  ===

  Originally reported as a libvirt bug in RHEL 9 [1]

  Additional information
  ==

  This is happening because Nova defaults to 1024 * (# of CPUs) for the
  value of domain/cputune/shares in the libvirt XML. This is then passed
  directly by libvirt to the cgroups API, but cgroups v2 has a maximum
  value of 10000. 10000 / 1024 ~= 9.76, so the default exceeds the limit
  at 10 or more CPUs.

  [1] https://bugzilla.redhat.com/show_bug.cgi?id=2035518

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1978489/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1974173] Re: Remaining ports are not unbound if one port is missing

2022-06-13 Thread Elod Illes
** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/wallaby
   Importance: Undecided
   Status: New

** Also affects: nova/victoria
   Importance: Undecided
   Status: New

** Also affects: nova/yoga
   Importance: Undecided
   Status: New

** Also affects: nova/xena
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1974173

Title:
  Remaining ports are not unbound if one port is missing

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  New
Status in OpenStack Compute (nova) ussuri series:
  New
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  New
Status in OpenStack Compute (nova) xena series:
  New
Status in OpenStack Compute (nova) yoga series:
  New

Bug description:
  As part of the instance deletion process, we must unbind ports
  associated with said instance. To do this, we loop over all ports
  currently attached to an instance. However, if neutron returns HTTP
  404 (Not Found) for any of these ports, we will return early and fail
  to unbind the remaining ports. We've seen the problem in the context
  of Kubernetes on OpenStack. Our deinstaller is brute-force: it
  deletes ports and servers at the same time, so a race means the port
  can get deleted early. This normally wouldn't be an issue as we'd just
  "untrunk" it and proceed to delete it. But that won't work for SR-IOV
  ports as in that case you cannot "untrunk" bound ports.

  The solution here is obvious: if we fail to find a port, we should
  simply skip it and continue unbinding everything else.
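
  A rough reproduction sketch of the race described above (resource names
  are placeholders, not from this report):

    openstack port delete sriov-port-0 &   # one of the instance's ports
    openstack server delete k8s-node-0
    wait
    openstack port show other-port -c binding_host_id -f value   # still set if unbinding bailed out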

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1974173/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1978369] Re: [ovn] External Gateway Loop in NB SB DB

2022-06-13 Thread yatin
*** This bug is a duplicate of bug 1973347 ***
https://bugs.launchpad.net/bugs/1973347

This looks like a duplicate of https://bugs.launchpad.net/neutron/+bug/1973347
and is fixed with
https://review.opendev.org/c/openstack/neutron/+/842147. The fix should be
backported to stable branches as well.

@Ammad Can you try out the patch and confirm it fixes the issue for
you?

For now I will mark it as a duplicate of the other LP bug, i.e. 1973347;
please reopen if you still consider it a different issue once you have
checked the other bug and the fix.

** This bug has been marked a duplicate of bug 1973347
   OVN revision_number infinite update loop

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1978369

Title:
  [ovn] External Gateway Loop in NB SB DB

Status in neutron:
  New

Bug description:
  Hi,

  I have installed neutron 20.0 and OVN 22.03 on Ubuntu 22.04. When I
  create a router and attach an external network to it, a loop of
  thousands of OVN NB and SB DB transactions is generated, causing the
  DB size to grow.

  In SB

  OVSDB JSON 300 0f200aa6397e53cd203c99e6674bda75bdd53151
  
{"_date":1654929577073,"Multicast_Group":{"9b50bf0f-f9fe-4b9a-9333-fe2d1744575c":{"ports":["uuid","efc3d1a7-56a6-4235-8a29-4d1defdb459c"]}},"_is_diff":true,"_comment":"ovn-northd","Port_Binding":{"efc3d1a7-56a6-4235-8a29-4d1defdb459c":{"external_ids":["map",[["neutron:revision_number","10678"]]]}}}
  OVSDB JSON 402 86de47a7521717bd9ab7182422a6ad9b424c93d0
  
{"_date":1654929577345,"Multicast_Group":{"9b50bf0f-f9fe-4b9a-9333-fe2d1744575c":{"ports":["uuid","efc3d1a7-56a6-4235-8a29-4d1defdb459c"]}},"_is_diff":true,"_comment":"ovn-northd","Port_Binding":{"d34d2dd5-260b-4253-8429-5a7a89f3a500":{"external_ids":["map",[["neutron:revision_number","10679"]]]},"2ce0135e-b9b5-441b-aaae-7ce580bcf600":{"external_ids":["map",[["neutron:revision_number","10679"]]]}}}

  and In NB

  OVSDB JSON 334 e0ee7ff61d595e6151abd694ce2179c11d9e2570
  
{"_date":1654929536919,"_is_diff":true,"Logical_Router_Port":{"a0c2e43e-f4cb-4331-b070-a726b3da7a17":{"external_ids":["map",[["neutron:revision_number","10567"]]]}},"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]]}}}
  OVSDB JSON 269 dd8f87d8b132415a423b0f020b23f07d2488acba
  
{"_date":1654929536992,"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]],"external_ids":["map",[["neutron:revision_number","10567"]]]}},"_is_diff":true}
  OVSDB JSON 334 42d2a02531bd91d88b8783a45da47a33b5e3dc94
  
{"_date":1654929537262,"_is_diff":true,"Logical_Router_Port":{"a0c2e43e-f4cb-4331-b070-a726b3da7a17":{"external_ids":["map",[["neutron:revision_number","10568"]]]}},"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]]}}}
  OVSDB JSON 269 b8454f003de8cb14961aa37d5a557d2490d34049
  
{"_date":1654929537355,"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]],"external_ids":["map",[["neutron:revision_number","10568"]]]}},"_is_diff":true}
  OVSDB JSON 334 705b3007e83f0646642510903602965a6192fccf
  
{"_date":1654929537648,"_is_diff":true,"Logical_Router_Port":{"a0c2e43e-f4cb-4331-b070-a726b3da7a17":{"external_ids":["map",[["neutron:revision_number","10569"]]]}},"Logical_Switch_Port":{"cc97ca2c-979e-4754-a8d2-4fff0a666df8":{"options":["map",[["mcast_flood_reports","true"],["requested-chassis","kvm01-a1-r17-lhr01.rapid.pk"]]]}}}
  OVSDB JSON 269 4506e6ee9336bf2b8bde3134badbea7d23e72d33

  I also see below logs in ovn-northd.log

  2022-06-11T06:46:55.927Z|00171|northd|WARN|Dropped 650 log messages in last 
60 seconds (most recently, 0 seconds ago) due to excessive rate
  2022-06-11T06:46:55.927Z|00172|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '426cf7d5-4fd7-4aa9-806b-9dbe170c543e'.
  2022-06-11T06:47:55.941Z|00173|northd|WARN|Dropped 644 log messages in last 
60 seconds (most recently, 0 seconds ago) due to excessive rate
  2022-06-11T06:47:55.941Z|00174|northd|WARN|Unknown chassis '' set as 
options:requested-chassis on LSP '426cf7d5-4fd7-4aa9-806b-9dbe170c543e'.

  
  I have tested it on Ubuntu 20.04 via UCA and on 22.04. Below is the
  test scenario.

  - Two gateway chassis
  - 5 compute nodes

  I have also tested this with one chassis; for that run I am attaching
  the neutron-server.log from when I attached the external interface to
  the router, along with the OVN NB and SB DBs.
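
  A hedged way to watch the churn on an affected deployment (the UUID is
  taken from the NB excerpts above; ovn-nbctl must point at the NB DB):

    watch -n1 "ovn-nbctl --columns=external_ids list Logical_Router_Port a0c2e43e-f4cb-4331-b070-a726b3da7a17"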

  I would be happy to provide any further info that is needed.

  Ammad

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1978369/+subscriptions


-- 
Mailing list: