[Yahoo-eng-team] [Bug 1951669] [NEW] The 'flavor' field is Not available after launching vm

2021-11-19 Thread Eric Xie
Public bug reported:

After launching a VM, once its status is ACTIVE the 'flavor' field
displays "Not available".

After refreshing the page, the correct flavor is shown.

$ git log
commit 9d1bb3626bc1dbcf29a55aeb094f4350067317cd (HEAD -> master, tag: 20.2.0, origin/master, origin/HEAD)
Author: Akihiro Motoki 
Date:   Tue Oct 26 09:18:16 2021 +0900

Allow both Django 2.2 and 3.2 for smooth transition

I believe we need the following steps; this is what I did in the past
when we bumped the Django minimum version.

1. (already done) update global-requirements.txt to allow horizon to
   update requirements.txt to include Django 3.2.
2. specify the required Django version which includes both 2.2 and 3.2
   (at this point upper-constraints uses 2.2)
3. update upper-constraints.txt in the requirements repo to use Django 3.2
4. bump the min version of Django in horizon

(optionally) update non-primary-django tests to include non-primary versions of
Django. It seems 2.2 support is dropped at the same time 3.2 support is added,
so perhaps this step does not apply here.

https://review.opendev.org/c/openstack/horizon/+/811412 directly updated
the min version to Django 3.2, which is incompatible with the global
upper-constraints.txt.
To avoid this, https://review.opendev.org/c/openstack/horizon/+/815206 made
almost all tests non-voting. I am not a fan of such an approach and believe
there is a way to make the Django version transition smoother.

---

This commit reverts the zuul configuration changes in
https://review.opendev.org/c/openstack/horizon/+/815206 and
https://review.opendev.org/c/openstack/horizon/+/811412.

horizon-tox-python3-django32 is voting now as we are making it
the default version.

Change-Id: I60bb672ef1b197e657a8b3bd86d07464bcb1759f
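
For reference, step 2 of the transition described in the commit message above
amounts to a version specifier that admits both majors; an illustrative
requirements line (not horizon's exact pin) would be:

    Django>=2.2,<3.3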

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1951669

Title:
  The 'flavor' field is Not available after launching vm

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After launching a VM, once its status is ACTIVE the 'flavor' field
  displays "Not available".

  After refreshing the page, the correct flavor is shown.

  $ git log
  commit 9d1bb3626bc1dbcf29a55aeb094f4350067317cd (HEAD -> master, tag: 20.2.0, origin/master, origin/HEAD)
  Author: Akihiro Motoki 
  Date:   Tue Oct 26 09:18:16 2021 +0900

  Allow both Django 2.2 and 3.2 for smooth transition

  I believe we need the following steps; this is what I did in the past
  when we bumped the Django minimum version.

  1. (already done) update global-requirements.txt to allow horizon to
 update requirements.txt to include Django 3.2.
  2. specify the required Django version which includes both 2.2 and 3.2
 (at this point upper-constraints uses 2.2)
  3. update upper-constraints.txt in the requirements repo to use Django 3.2
  4. bump the min version of Django in horizon

  (optionally) update non-primary-django tests to include non-primary versions
  of Django. It seems 2.2 support is dropped at the same time 3.2 support is
  added, so perhaps this step does not apply here.

  https://review.opendev.org/c/openstack/horizon/+/811412 directly updated
  the min version to Django 3.2, which is incompatible with the global
  upper-constraints.txt.
  To avoid this, https://review.opendev.org/c/openstack/horizon/+/815206 made
  almost all tests non-voting. I am not a fan of such an approach and believe
  there is a way to make the Django version transition smoother.

  ---

  This commit reverts the zuul configuration changes in
  https://review.opendev.org/c/openstack/horizon/+/815206 and
  https://review.opendev.org/c/openstack/horizon/+/811412.

  horizon-tox-python3-django32 is voting now as we are making it
  the default version.

  Change-Id: I60bb672ef1b197e657a8b3bd86d07464bcb1759f

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1951669/+subscriptions




[Yahoo-eng-team] [Bug 1951656] [NEW] Nova fails to parse new libvirt mediated device name format

2021-11-19 Thread Joe Kralicky
Public bug reported:

The name format of mediated devices in libvirt was recently changed from
`mdev_<uuid>` to `mdev_<uuid>_<parent address>`, e.g.:
Old: `mdev_a12c7bf8_fcf4_4c3b_a256_604cda8e62d5`
New: `mdev_a12c7bf8_fcf4_4c3b_a256_604cda8e62d5__c1_00_0`
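
Presumably the conversion strips the "mdev_" prefix and turns the remaining
underscores into hyphens before feeding the string to uuid.UUID(), which is
exactly what fails on the new names. A tolerant sketch that accepts both
formats could look like this (the helper name is illustrative, not nova's
actual function):

    import uuid

    def mdev_name_to_uuid(dev_name):
        # Keep only the first five underscore-separated groups after the
        # "mdev_" prefix; they form the 8-4-4-4-12 UUID.
        parts = dev_name[len("mdev_"):].split("_")
        return str(uuid.UUID("-".join(parts[:5])))

    mdev_name_to_uuid("mdev_a12c7bf8_fcf4_4c3b_a256_604cda8e62d5")           # old
    mdev_name_to_uuid("mdev_a12c7bf8_fcf4_4c3b_a256_604cda8e62d5__c1_00_0")  # new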


This results in the following error:

2021-11-19 22:51:45.952 7 ERROR nova.compute.manager [req-570c7e8f-0540-49fb-b2b0-8c2ac932e4dc - - - - -] Error updating resources for node: ValueError: badly formed hexadecimal UUID string
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager Traceback (most recent call last):
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 9993, in _update_available_resource_for_node
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     startup=startup)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 895, in update_available_resource
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     self._update_available_resource(context, resources, startup=startup)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", line 360, in inner
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     return f(*args, **kwargs)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 975, in _update_available_resource
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     self._update(context, cn, startup=startup)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 1227, in _update
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     self._update_to_placement(context, compute_node, startup)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/retrying.py", line 49, in wrapped_f
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     return Retrying(*dargs, **dkw).call(f, *args, **kw)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/retrying.py", line 206, in call
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     return attempt.get(self._wrap_exception)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/retrying.py", line 247, in get
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     six.reraise(self.value[0], self.value[1], self.value[2])
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/usr/local/lib/python3.6/site-packages/six.py", line 719, in reraise
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     raise value
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/retrying.py", line 200, in call
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager   File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/resource_tracker.py", line 1163, in _update_to_placement
2021-11-19 22:51:45.952 7 ERROR nova.compute.manager

[Yahoo-eng-team] [Bug 1950679] Re: [ovn] neutron_ovn_db_sync_util hangs on sync_routers_and_rports

2021-11-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/817637
Committed: https://opendev.org/openstack/neutron/commit/7e2f73350ffdc90f7b340788db36edc439f96f6e
Submitter: "Zuul (22348)"
Branch:    master

commit 7e2f73350ffdc90f7b340788db36edc439f96f6e
Author: Daniel Speichert 
Date:   Thu Nov 11 13:18:49 2021 -0500

[OVN] Fix deadlock in neutron_ovn_db_sync_util.py

A feature to synchronize OVN DB connections when handling events
introduced in 90980f496cfa3cc5df1c93cf834a44f33d3f1f6f is not applicable
to the offline sync process executed by this utility.

Closes-bug: #1950679
Change-Id: Iac4eb364bfc1c44f5d4526bae71967bede29cc36


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950679

Title:
  [ovn] neutron_ovn_db_sync_util hangs on sync_routers_and_rports

Status in neutron:
  Fix Released

Bug description:
  neutron-ovn-db-sync-util hangs in certain scenarios while running
  sync_routers_and_rports.

  Specifically, it seems to be hanging on self.l3_plugin.get_routers(ctx)
  -> model_query.get_collection(...) called from get_routers(...) in neutron.db.l3_db.py
  -> get_collection(...) in neutron_lib.db.model_query.py runs dict_funcs, which
     somehow reaches the nb_ovn property accessor in
     neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.py
  -> which runs self._post_fork_event.wait()

  That mutex seems to never be "set" and blocks further execution
  because it might not be applicable to this flow.

  It looks like the neutron-ovn-db-sync-util might need to always "set"
  it since it mocks other parts of the NB/DB client in a similar fashion
  to some unit tests.
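
  For illustration, the blocking primitive here behaves like a
  threading.Event that is never set; the fix amounts to the sync utility
  setting it itself before touching the plugin (a minimal sketch, not
  neutron's actual code):

      import threading

      post_fork_event = threading.Event()  # stands in for _post_fork_event

      # In a normal neutron-server run a post-fork hook eventually calls
      # set(), so the nb_ovn accessor's wait() returns.  The sync utility
      # never runs that hook, so wait() would block forever.  Conceptually
      # the fix is:
      post_fork_event.set()
      print(post_fork_event.wait(timeout=1))  # True: returns immediately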

  I'm not yet sure what kind of exact circumstances lead to that access
  and that wait(), syncing via the util to an empty OVN NB/DB seems to
  work. I see the issue more frequently on subsequent runs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1950679/+subscriptions




[Yahoo-eng-team] [Bug 1951639] [NEW] Package installation unreliable on systems without RTC

2021-11-19 Thread Max
Public bug reported:

It seems that when you specify "packages:" to be installed in the cloud-init
user-data, it attempts to update the repository files and install the packages
straight away, without waiting for the date/time to be synchronized through NTP.
This creates problems for systems without an RTC (like the Raspberry Pi) that do
not have a valid date/time at that point yet, as apt refuses to proceed if the
InRelease files are newer than the current time.

==
Cloud-init v. 21.3-1-g6803368d-0ubuntu3 running 'modules:final' at Wed, 13 Oct 2021 13:33:26 +0000. Up 47.18 seconds.
Get:1 http://ports.ubuntu.com/ubuntu-ports impish InRelease [270 kB]
Get:2 http://ports.ubuntu.com/ubuntu-ports impish-updates InRelease [110 kB]
Get:3 http://ports.ubuntu.com/ubuntu-ports impish-backports InRelease [101 kB]
Get:4 http://ports.ubuntu.com/ubuntu-ports impish-security InRelease [110 kB]
Reading package lists...
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish/InRelease is not valid yet (invalid for another 1d 3h 1min 32s). Updates for this repository will not be applied.
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish-updates/InRelease is not valid yet (invalid for another 37d 3h 47min 18s). Updates for this repository will not be applied.
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish-backports/InRelease is not valid yet (invalid for another 37d 3h 47min 29s). Updates for this repository will not be applied.
E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish-security/InRelease is not valid yet (invalid for another 37d 3h 47min 13s). Updates for this repository will not be applied.
==

I think cloud-init should wait until systemd reaches time-sync.target before
doing anything through apt.
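
For reference, that suggestion amounts to ordering the cloud-init stage that
runs apt ('modules:final' in the log above) after the clock has been
synchronized. Baked into an image, a systemd drop-in along these lines would
express it (the unit name and path are assumptions, not something cloud-init
ships today):

    # /etc/systemd/system/cloud-final.service.d/10-wait-for-timesync.conf
    [Unit]
    After=time-sync.target
    Wants=time-sync.target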

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1951639

Title:
  Package installation unreliable on systems without RTC

Status in cloud-init:
  New

Bug description:
  It seems that when you specify "packages:" to be installed in the cloud-init
  user-data, it attempts to update the repository files and install the packages
  straight away, without waiting for the date/time to be synchronized through NTP.
  This creates problems for systems without an RTC (like the Raspberry Pi) that
  do not have a valid date/time at that point yet, as apt refuses to proceed if
  the InRelease files are newer than the current time.

  ==
  Cloud-init v. 21.3-1-g6803368d-0ubuntu3 running 'modules:final' at Wed, 13 Oct 2021 13:33:26 +0000. Up 47.18 seconds.
  Get:1 http://ports.ubuntu.com/ubuntu-ports impish InRelease [270 kB]
  Get:2 http://ports.ubuntu.com/ubuntu-ports impish-updates InRelease [110 kB]
  Get:3 http://ports.ubuntu.com/ubuntu-ports impish-backports InRelease [101 kB]
  Get:4 http://ports.ubuntu.com/ubuntu-ports impish-security InRelease [110 kB]
  Reading package lists...
  E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish/InRelease is not valid yet (invalid for another 1d 3h 1min 32s). Updates for this repository will not be applied.
  E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish-updates/InRelease is not valid yet (invalid for another 37d 3h 47min 18s). Updates for this repository will not be applied.
  E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish-backports/InRelease is not valid yet (invalid for another 37d 3h 47min 29s). Updates for this repository will not be applied.
  E: Release file for http://ports.ubuntu.com/ubuntu-ports/dists/impish-security/InRelease is not valid yet (invalid for another 37d 3h 47min 13s). Updates for this repository will not be applied.
  ==

  I think cloud-init should wait until systemd reaches time-sync.target before
  doing anything through apt.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1951639/+subscriptions




[Yahoo-eng-team] [Bug 1951632] [NEW] RFE: Create a role for service-to-service communication

2021-11-19 Thread Lance Bragstad
Public bug reported:

In Rocky, keystone added a default role hierarchy. This was part of a
large initiative to improve RBAC across all OpenStack projects. Through
the process of adopting the default roles implemented in Rocky,
OpenStack developers and operators have acknowledged that several
OpenStack service accounts have too much authorization.

Having a service-specific default role will make it easier to apply the
principle of least privilege to service accounts and harden OpenStack's
default security posture.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1951632

Title:
  RFE: Create a role for service-to-service communication

Status in OpenStack Identity (keystone):
  New

Bug description:
  In Rocky, keystone added a default role hierarchy. This was part of a
  large initiative to improve RBAC across all OpenStack projects.
  Through the process of adopting the default roles implemented in
  Rocky, OpenStack developers and operators have acknowledged that
  several OpenStack service accounts have too much authorization.

  Having a service-specific default role will make it easier to apply
  the principle of least privilege to service accounts and harden
  OpenStack's default security posture.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1951632/+subscriptions




[Yahoo-eng-team] [Bug 1915480] Re: DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

2021-11-19 Thread Brian Murray
16.4.1 is included in focal-updates and hirsute has 18.0 so I'm setting
this to Fix Released.

 $ rmadison neutron
 neutron | 1:2014.1-0ubuntu1                             | trusty          | source
 neutron | 1:2014.1.3-0ubuntu1.1                         | trusty-security | source
 neutron | 1:2014.1.5-0ubuntu8                           | trusty-updates  | source
 neutron | 2:8.0.0-0ubuntu1                              | xenial          | source
 neutron | 2:8.4.0-0ubuntu7.4                            | xenial-security | source
 neutron | 2:8.4.0-0ubuntu7.5                            | xenial-updates  | source
 neutron | 2:12.0.1-0ubuntu1                             | bionic          | source
 neutron | 2:12.1.1-0ubuntu8                             | bionic-updates  | source
 neutron | 2:16.0.0~b3~git2020041516.5f42488a9a-0ubuntu2 | focal           | source
 neutron | 2:16.4.1-0ubuntu2                             | focal-updates   | source
 neutron | 2:18.0.0-0ubuntu2                             | hirsute         | source
 neutron | 2:18.1.1-0ubuntu2                             | hirsute-updates | source
 neutron | 2:19.0.0-0ubuntu1                             | impish          | source
 neutron | 2:19.0.0-0ubuntu1                             | jammy           | source


** Changed in: neutron (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron (Ubuntu Focal)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1915480

Title:
  DeviceManager's fill_dhcp_udp_checksums assumes IPv6 available

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in neutron:
  Fix Committed
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Focal:
  Fix Released

Bug description:
  The following code in DeviceManager's fill_dhcp_udp_checksums assumes
  IPv6 is always enabled:

  iptables_mgr = iptables_manager.IptablesManager(use_ipv6=True,
                                                  namespace=namespace)

  When iptables_mgr.apply() is later called, an attempt to add the UDP
  checksum rule for DHCP is made via iptables-save/iptables-restore, and
  if IPv6 has been disabled on a hypervisor (e.g. by setting
  `ipv6.disable=1` on the kernel command line) then a many-line error
  occurs in the DHCP agent logfile.

  There should be a way of telling the agent that IPv6 is disabled and
  as such, it should ignore trying to set up the UDP checksum rule for
  IPv6. This can be easily achieved given that IptablesManager already
  has support for disabling it.
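
  A minimal sketch of that suggestion, assuming a small detection helper
  (the helper below is illustrative; iptables_manager and namespace are the
  objects already shown above):

      def ipv6_enabled_on_host():
          # IPv6 is usable when the sysctl exists and is not set to 1.
          path = "/proc/sys/net/ipv6/conf/default/disable_ipv6"
          try:
              with open(path) as f:
                  return f.read().strip() == "0"
          except OSError:
              return False  # no IPv6 support on this host

      iptables_mgr = iptables_manager.IptablesManager(
          use_ipv6=ipv6_enabled_on_host(), namespace=namespace)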

  We've seen this on Rocky on Ubuntu Bionic but it appears the issue
  still exists on the master branch.

  =
  Ubuntu SRU details:

   
  [Impact] 

  See above

  
  [Test Plan]

  Disable IPv6 on a hypervisor.
  sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1
  sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1
  sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1
  Deploy OpenStack Ussuri or Victoria with one compute node, using the
  hypervisor which has IPv6 disabled as a neutron gateway.
  Create a network which has a subnetwork with DHCP enabled. E.g.:
  openstack network create net1
  openstack subnet create subnet1 --network net1 --subnet-range 192.0.2.0/24 --dhcp
  Search the `/var/log/neutron/neutron-dhcp-agent.log` (with debug log enabled)
  and check if there are any `ip6tables-restore` commands. E.g.:
  sudo grep ip6tables-restore /var/log/neutron/neutron-dhcp-agent.log
   
  [Where problems could occur]

  Users who were relying on the setting always being true could be
  affected.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1915480/+subscriptions




[Yahoo-eng-team] [Bug 1951623] [NEW] Error live migration vm with disabled port

2021-11-19 Thread Alexander Shishebarov
Public bug reported:

We use the neutron (stable/stein) ml2/ovs plugin and nova (stable/stein).
An error occurs in case of server migration with a disabled port
(admin_state_up DOWN).
Here is the port configuration:
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                          | Status |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
| 09d5db66-1f89-41dd-a215-74888f3099f3 |      | fa:16:3e:98:30:50 | ip_address='10.1.1.1', subnet_id='a6786536-b67b-40a4-9470-e3b158a71dbc'    | ACTIVE |
| df08c6b0-ca45-4985-8e5f-8fb95f904ac6 |      | fa:16:3e:5c:eb:f4 | ip_address='192.168.0.7', subnet_id='0bb936ed-c4a4-4a5d-be18-2794a73aea79' | DOWN   |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
When we try to migrate that VM:
openstack server migrate b4743fab-17e0-48af-8ad3-3b81fd05a968 --live cmp1
an error occurs in the pre-live-migration process.
The error occurs on the source host from which the migration is performed:
2021-11-18 17:34:28,910.910 2173136 WARNING nova.compute.manager [-] [instance: b4743fab-17e0-48af-8ad3-3b81fd05a968] Timed out waiting for events: [('network-vif-plugged', u'e09dca39-f62f-4a3b-a0f6-4d98edcd037e'), ('network-vif-plugged', u'e331b3d3-cf59-49a0-b531-590433523f6f')]. If these timeouts are a persistent issue it could mean the networking backend on host cmp1 does not support sending these events unless there are port binding host changes which does not happen at this point in the live migration process. You may need to disable the live_migration_wait_for_vif_plug option on host cmp1.: Timeout: 300 seconds

This happens because nova-compute is waiting for an event on each port, 
regardless of its initial state.
https://github.com/openstack/nova/blob/stable/stein/nova/compute/manager.py#L6767

But the neutron server does not send a message if the port is disabled.
https://github.com/openstack/neutron/blob/stable/stein/neutron/notifiers/nova.py#L207

2021-11-18 16:09:03,642.642 2916237 DEBUG neutron.notifiers.nova [req-98b397eb-db00-4370-9510-b071c501a12e b6ba9c75146a49829a7427a3e8cc3c10 192796e61c174f718d6147b129f3f2ff - default default] Ignoring state change previous_port_status: DOWN current_port_status: DOWN port_id e09dca39-f62f-4a3b-a0f6-4d98edcd037e record_port_status_changed /usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py
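
A hedged sketch of the kind of filtering this implies on the nova side: only
wait for network-vif-plugged on ports that neutron will actually report on,
i.e. skip ports whose admin state is down (the data layout below is
illustrative, not nova's actual network_info structure):

    def events_to_wait_for(ports):
        return [('network-vif-plugged', port['id'])
                for port in ports
                if port.get('admin_state_up', True)]

    ports = [
        {'id': 'e09dca39-f62f-4a3b-a0f6-4d98edcd037e', 'admin_state_up': False},
        {'id': 'e331b3d3-cf59-49a0-b531-590433523f6f', 'admin_state_up': True},
    ]
    print(events_to_wait_for(ports))  # only the admin-up port is waited on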

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1951623

Title:
  Error live migration vm with disabled port

Status in OpenStack Compute (nova):
  New

Bug description:
  We use the neutron (stable/stein) ml2/ovs plugin and nova (stable/stein).
  An error occurs in case of server migration with a disabled port
  (admin_state_up DOWN).
  Here is the port configuration:
  +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
  | ID                                   | Name | MAC Address       | Fixed IP Addresses                                                          | Status |
  +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
  | 09d5db66-1f89-41dd-a215-74888f3099f3 |      | fa:16:3e:98:30:50 | ip_address='10.1.1.1', subnet_id='a6786536-b67b-40a4-9470-e3b158a71dbc'    | ACTIVE |
  | df08c6b0-ca45-4985-8e5f-8fb95f904ac6 |      | fa:16:3e:5c:eb:f4 | ip_address='192.168.0.7', subnet_id='0bb936ed-c4a4-4a5d-be18-2794a73aea79' | DOWN   |
  +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+
  When we try to migrate that VM:
  openstack server migrate b4743fab-17e0-48af-8ad3-3b81fd05a968 --live cmp1
  an error occurs in the pre-live-migration process.
  The error occurs on the source host from which the migration is performed:
  2021-11-18 17:34:28,910.910 2173136 WARNING nova.compute.manager [-] [instance: b4743fab-17e0-48af-8ad3-3b81fd05a968] Timed out waiting for events: [('network-vif-plugged', u'e09dca39-f62f-4a3b-a0f6-4d98edcd037e'), ('network-vif-plugged', u'e331b3d3-cf59-49a0-b531-590433523f6f')]. If these timeouts are a persistent issue it could mean the networking backend on host cmp1 does not support sending these events unless there are port binding host changes which does not happen at this point in the live migration process. You

[Yahoo-eng-team] [Bug 1951622] [NEW] RFE: Create a role in between admin and member

2021-11-19 Thread Lance Bragstad
Public bug reported:

In Rocky, keystone added a default role hierarchy. This was part of a
large initiative to improve RBAC across all OpenStack projects. That
effort is still underway today. Through the process of adopting the
default roles implemented in Rocky, OpenStack developers and operators
have acknowledged the need for another default role that sits in between
``admin`` and ``member``.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1951622

Title:
  RFE: Create a role in between admin and member

Status in OpenStack Identity (keystone):
  New

Bug description:
  In Rocky, keystone added a default role hierarchy. This was part of a
  large initiative to improve RBAC across all OpenStack projects. That
  effort is still underway today. Through the process of adopting the
  default roles implemented in Rocky, OpenStack developers and operators
  have acknowledged the need for another default role that sits in
  between ``admin`` and ``member``.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1951622/+subscriptions




[Yahoo-eng-team] [Bug 1951617] [NEW] "Quota exceeded" message is confusing for "resize"

2021-11-19 Thread Belmiro Moreira
Public bug reported:

"Quota exceeded" message is confusing for "resize"


When trying to create an instance and there is no quota available, the user
gets an error message. For example:
"Quota exceeded for cores: Requested 1, but already used 100 of 100 cores (HTTP 403)"

The user can see that the project is already using 100 vCPUs out of 100
vCPUs available (vCPU quota) in the project.

However, if they try to resize an instance, they can get a similar error message:
"Quota exceeded for cores: Requested 2, but already used 42 of 100 cores (HTTP 403)"

So, this has a completely different meaning!
It means that the user (owning the instance being resized) is using 42
vCPUs in the project out of the 100 cores allowed by the quota.

This is hard to understand for an end user.
Read naively, the message suggests the project still has plenty of resources
for the resize.

I believe this comes from the time when Nova allowed quotas per user.
In my opinion this distinction shouldn't be made anymore. As mentioned, we
don't do it when creating a new instance.

+++

This was tested with the master branch (19/11/2021)

** Affects: nova
 Importance: Undecided
 Assignee: Belmiro Moreira (moreira-belmiro-email-lists)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Belmiro Moreira (moreira-belmiro-email-lists)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1951617

Title:
  "Quota exceeded" message is confusing for "resize"

Status in OpenStack Compute (nova):
  New

Bug description:
  "Quota exceeded" message is confusing for "resize"

  
  When trying to create an instance and there is no quota available, the user
  gets an error message. For example:
  "Quota exceeded for cores: Requested 1, but already used 100 of 100 cores (HTTP 403)"

  The user can see that the project is already using 100 vCPUs out of
  100 vCPUs available (vCPU quota) in the project.

  However, if they try to resize an instance, they can get a similar error message:
  "Quota exceeded for cores: Requested 2, but already used 42 of 100 cores (HTTP 403)"

  So, this has a completely different meaning!
  It means that the user (owning the instance being resized) is using 42
  vCPUs in the project out of the 100 cores allowed by the quota.

  This is hard to understand for an end user.
  Read naively, the message suggests the project still has plenty of resources
  for the resize.

  I believe this comes from the time when Nova allowed quotas per user.
  In my opinion this distinction shouldn't be made anymore. As mentioned, we
  don't do it when creating a new instance.

  +++

  This was tested with the master branch (19/11/2021)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1951617/+subscriptions




[Yahoo-eng-team] [Bug 1951593] [NEW] Feature request: keyboard layout module

2021-11-19 Thread Max
Public bug reported:

It would be nice if generic support for setting a keyboard layout was added
to cloud-init.


- It seems one can currently set a keyboard layout when using the Ubuntu
installer, with autoinstall.
But when using just cloud-init with ready-made images (e.g. the Ubuntu server
images for the Pi), there does not seem to be an option for it.

- As an alternative, I tried writing /etc/default/keyboard with write_files
manually, and runcmd'ing "dpkg-reconfigure -f noninteractive
keyboard-configuration", but that leaves something to be desired.
Since runcmd runs pretty late in the boot process, the new keyboard layout
does not take effect until the first reboot.
And it obviously only works with Debian-based distributions.
Generic commands to set a keyboard layout (that could also work with other
Linux distributions in the future) would be nicer.
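
For reference, a minimal cloud-config version of the workaround described
above might look like this (the layout values are examples; Debian/Ubuntu
only):

    #cloud-config
    write_files:
      - path: /etc/default/keyboard
        content: |
          XKBMODEL="pc105"
          XKBLAYOUT="de"
          XKBVARIANT=""
          XKBOPTIONS=""
    runcmd:
      - [dpkg-reconfigure, -f, noninteractive, keyboard-configuration]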

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: keyboard layout

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1951593

Title:
  Feature request: keyboard layout module

Status in cloud-init:
  New

Bug description:
  It would be nice if generic support for setting a keyboard layout was
  added to cloud-init.

  
  - It seems one can currently set a keyboard layout when using the Ubuntu
  installer, with autoinstall.
  But when using just cloud-init with ready-made images (e.g. the Ubuntu server
  images for the Pi), there does not seem to be an option for it.

  - As an alternative, I tried writing /etc/default/keyboard with write_files
  manually, and runcmd'ing "dpkg-reconfigure -f noninteractive
  keyboard-configuration", but that leaves something to be desired.
  Since runcmd runs pretty late in the boot process, the new keyboard layout
  does not take effect until the first reboot.
  And it obviously only works with Debian-based distributions.
  Generic commands to set a keyboard layout (that could also work with other
  Linux distributions in the future) would be nicer.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1951593/+subscriptions




[Yahoo-eng-team] [Bug 1951450] Re: [CI] Rally jobs failing 100% of times

2021-11-19 Thread Hongbin Lu
This doesn't seem like a neutron bug.

** Also affects: devstack
   Importance: Undecided
   Status: New

** No longer affects: devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951450

Title:
  [CI] Rally jobs failing 100% of times

Status in neutron:
  Fix Released

Bug description:
  Rally jobs are always failing, both OVN and OVS.

  Error:
  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_803/817734/5/check/neutron-ovs-rally-task/803186b/job-output.txt

  Snippet: https://paste.opendev.org/show/811177/

  This could have been caused by
  https://review.opendev.org/c/openstack/devstack/+/780417.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951450/+subscriptions




[Yahoo-eng-team] [Bug 1951450] Re: [CI] Rally jobs failing 100% of times

2021-11-19 Thread Rodolfo Alonso
Patch in rally-openstack merged.

** Changed in: neutron
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951450

Title:
  [CI] Rally jobs failing 100% of times

Status in neutron:
  Fix Released

Bug description:
  Rally jobs are always failing, both OVN and OVS.

  Error:
  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_803/817734/5/check/neutron-ovs-rally-task/803186b/job-output.txt

  Snippet: https://paste.opendev.org/show/811177/

  This could have been caused by
  https://review.opendev.org/c/openstack/devstack/+/780417.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951450/+subscriptions




[Yahoo-eng-team] [Bug 1951569] [NEW] [L3] L3 agent extension should always inherit from "L3AgentExtension"

2021-11-19 Thread Rodolfo Alonso
Public bug reported:

All L3 agent extensions should inherit from
"neutron_lib.agent.l3_extension.L3AgentExtension". The
"L3AgentExtensionsManager" should check, just after the extension
initialization, if the loaded extensions inherit from this API. If not,
the extension will raise an exception and exit.

Once this check is done, all methods ("add_router", "update_router",
"delete_router" and "ha_state_change") can call the extension
function directly without checking whether the related function is present.
That is ensured by making all extensions inherit from the defined API
(located in neutron-lib and available to any project).
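
A minimal sketch of the proposed load-time check (not neutron's actual
manager code; only the L3AgentExtension base class comes from neutron-lib):

    from neutron_lib.agent import l3_extension

    def check_l3_extension(extension):
        # Fail fast at initialization instead of probing hasattr() on every
        # add_router/update_router/delete_router/ha_state_change call.
        if not isinstance(extension, l3_extension.L3AgentExtension):
            raise TypeError("L3 agent extension %r does not inherit from "
                            "L3AgentExtension" % extension)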

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951569

Title:
  [L3] L3 agent extension should always inherit from "L3AgentExtension"

Status in neutron:
  New

Bug description:
  All L3 agent extensions should inherit from
  "neutron_lib.agent.l3_extension.L3AgentExtension". The
  "L3AgentExtensionsManager" should check, just after the extension
  initialization, if the loaded extensions inherit from this API. If
  not, the extension will raise an exception and exit.

  Once this check is done, all methods ("add_router", "update_router",
  "delete_router" and "ha_state_change") can call the extension
  function directly without checking whether the related function is present.
  That is ensured by making all extensions inherit from the defined API
  (located in neutron-lib and available to any project).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951569/+subscriptions




[Yahoo-eng-team] [Bug 1951564] [NEW] snat random-fully supported with iptables 1.6.0

2021-11-19 Thread Maximilian Stinsky
Public bug reported:

With the following report
https://bugs.launchpad.net/neutron/+bug/1814002 neutron was set to
create SNAT rules with the --random-fully flag.

This is only applied with iptables 1.6.2, through a version check at
neutron-l3-agent start.
--random-fully has been supported since iptables 1.6.0 for SNAT rules; 1.6.2
is only required for MASQUERADE.

As far as I can see, neutron only sets SNAT rules, so it would be
reasonable to lower the version check to 1.6.0. This would enable
--random-fully for more deployments, as Ubuntu Bionic for example only
ships iptables 1.6.1.
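
A hedged sketch of the distinction being proposed (constants and names are
illustrative, not neutron's actual code):

    RANDOM_FULLY_MIN_SNAT = (1, 6, 0)        # -j SNAT --random-fully
    RANDOM_FULLY_MIN_MASQUERADE = (1, 6, 2)  # -j MASQUERADE --random-fully

    def snat_supports_random_fully(iptables_version):
        # iptables_version is a tuple such as (1, 6, 1) on Ubuntu Bionic.
        return tuple(iptables_version) >= RANDOM_FULLY_MIN_SNAT

    print(snat_supports_random_fully((1, 6, 1)))  # True with the lower bound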

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951564

Title:
  snat random-fully supported with iptables 1.6.0

Status in neutron:
  New

Bug description:
  With the following report
  https://bugs.launchpad.net/neutron/+bug/1814002 neutron was set to
  create SNAT rules with the --random-fully flag.

  This is only applied with iptables 1.6.2, through a version check at
  neutron-l3-agent start.
  --random-fully has been supported since iptables 1.6.0 for SNAT rules; 1.6.2
  is only required for MASQUERADE.

  As far as I can see, neutron only sets SNAT rules, so it would be
  reasonable to lower the version check to 1.6.0. This would enable
  --random-fully for more deployments, as Ubuntu Bionic for example only
  ships iptables 1.6.1.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951564/+subscriptions




[Yahoo-eng-team] [Bug 1951559] [NEW] [OVN] Router ports gateway_mtu option should not always be set

2021-11-19 Thread Elvira García Ruiz
Public bug reported:

High level description:
If a neutron router is connected to a provider network 'A' and to private
geneve networks, and the MTU of the private networks (1442 in normal cases)
is smaller than that of the provider network (1500), then there is no need
for the neutron ML2/OVN driver to set options:gateway_mtu=1500, even if the
config option to enable gateway MTU is set.
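
A hedged sketch of the condition being described (illustrative only, not the
ml2/ovn driver's code):

    def needs_gateway_mtu(provider_mtu, internal_mtus):
        # Fragmentation hints on the gateway port only help when traffic
        # from an internal (geneve) network can exceed the provider MTU.
        return any(provider_mtu < mtu for mtu in internal_mtus)

    print(needs_gateway_mtu(1500, [1442]))  # False: no need to set gateway_mtu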

Step-by-step reproduction steps:
1. Modify ml2_conf.ini: Add ovn_emit_need_to_frag = True in [ovn]

2. Wire up the networks in a router:

$ openstack network create net1
$ openstack subnet create --subnet-range 192.168.100.0/24 --network net1 subnet1
$ openstack router create r1
$ openstack router add subnet r1 subnet1
$ openstack router set --external-gateway public r1

3. Check the MTUs of each network:
By default, net1 will be 1442 and the provider (public) network will be 1500,
so gateway_mtu shouldn't be set.

4. Check if gateway_mtu was set on the Logical_Router_Port associated with the
gateway:
$ ovn-nbctl list Logical_Router_Port | less
In this case, it shouldn't be set.

Expected results:
gateway_mtu is set in the Gateway LRP options for r1 only if provider MTU < 
private MTU.

Actual results:
gateway_mtu is always set if ovn_emit_need_to_frag is enabled.

more info at [0]

[0] https://bugzilla.redhat.com/show_bug.cgi?id=2019938

** Affects: neutron
 Importance: Undecided
 Assignee: Elvira García Ruiz (elviragr)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Elvira García Ruiz (elviragr)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1951559

Title:
  [OVN] Router ports gateway_mtu option should not always be set

Status in neutron:
  New

Bug description:
  High level description:
  If a neutron router is connected to a provider network 'A' and to private
  geneve networks, and the MTU of the private networks (1442 in normal cases)
  is smaller than that of the provider network (1500), then there is no need
  for the neutron ML2/OVN driver to set options:gateway_mtu=1500, even if the
  config option to enable gateway MTU is set.

  Step-by-step reproduction steps:
  1. Modify ml2_conf.ini: Add ovn_emit_need_to_frag = True in [ovn]

  2. Wire up the networks in a router:

  $ openstack network create net1
  $ openstack subnet create --subnet-range 192.168.100.0/24 --network net1 
subnet1
  $ openstack router create r1
  $ openstack router add subnet r1 subnet1
  $ openstack router set --external-gateway public r1

  3. Check the MTUs of each network:
  By default, net1 will be 1442 and the provider (public) network will be 1500,
  so gateway_mtu shouldn't be set.

  4. Check if gateway_mtu was set on the Logical_Router_Port associated with
  the gateway:
  $ ovn-nbctl list Logical_Router_Port | less
  In this case, it shouldn't be set.

  Expected results:
  gateway_mtu is set in the Gateway LRP options for r1 only if provider MTU < 
private MTU.

  Actual results:
  gateway_mtu is always set if ovn_emit_need_to_frag is enabled.

  more info at [0]

  [0] https://bugzilla.redhat.com/show_bug.cgi?id=2019938

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1951559/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp