[Yahoo-eng-team] [Bug 2079831] Re: [tempest] VM ports have status=DOWN when calling ``TestNetworkBasicOps._setup_network_and_servers``

2024-09-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/tempest/+/928471
Committed: 
https://opendev.org/openstack/tempest/commit/d6437c9dd175371cd13d0a5d305a8863bda5
Submitter: "Zuul (22348)"
Branch: master

commit d6437c9dd175371cd13d0a5d305a8863bda5
Author: Brian Haley 
Date:   Fri Sep 6 16:09:26 2024 -0400

Wait for all instance ports to become ACTIVE

get_server_port_id_and_ip4() gets a list of neutron ports
for an instance, but it could be that one or more of those
have not completed provisioning at the time of the call and
so are still marked DOWN.

Wait for all ports to become active, since it could just be
that neutron has not completed its work yet.

Added a new waiter function and tests to verify it works.

Closes-bug: #2079831
Change-Id: I758e5eeb8ab05e79d6bdb2b560aa0f9f38c5992c


** Changed in: tempest
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2079831

Title:
  [tempest] VM ports have status=DOWN when calling
  ``TestNetworkBasicOps._setup_network_and_servers``

Status in neutron:
  Invalid
Status in tempest:
  Fix Released

Bug description:
  The method ``TestNetworkBasicOps._setup_network_and_servers`` is used
  in several tempest tests. It creates a set of resources (network,
  servers, FIPs, etc). This method has a race condition when the config
  option "project_networks_reachable" is False (the default).

  The server is created [1] but there is no connectivity test [2] (due
  to project_networks_reachable=False). The next step is to create a FIP
  [3]. Because we are not passing the port_id, we first retrieve all the
  VM ports [4]. The issue happens at [5]: the ports are created but are
  still down.

  An example of this can be seen in [5][6]:
  1) The tempest test lists the VM ports (only one in this case) but the
  port is down: https://paste.opendev.org/show/bSLi4joS6blqipbwa7Pq/

  2) The Neutron API finishes processing the port activation at the same
  time the port list call was made:
  https://paste.opendev.org/show/brRqntkQYdDoVeEqCeXF/

  
  An active wait needs to be added in the method
  ``get_server_port_id_and_ip4`` in order to wait for all ports to
  become active.
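
  As a rough illustration, a waiter in the spirit of the fix could poll
  neutron until every port of the server is ACTIVE (the client name,
  timeout and interval below are assumptions, not the actual tempest
  patch):

  ```
  import time

  def wait_for_server_ports_active(ports_client, server_id,
                                   timeout=60, interval=2):
      """Poll neutron until all ports of a server are ACTIVE."""
      deadline = time.time() + timeout
      while time.time() < deadline:
          ports = ports_client.list_ports(device_id=server_id)['ports']
          if ports and all(p['status'] == 'ACTIVE' for p in ports):
              return ports
          time.sleep(interval)
      raise TimeoutError('ports of server %s still DOWN after %ss'
                         % (server_id, timeout))
  ```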

  
  
[1]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/test_network_basic_ops.py#L120
  
[2]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/test_network_basic_ops.py#L124
  
[3]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/test_network_basic_ops.py#L128
  
[4]https://github.com/openstack/tempest/blob/0a0e1070e573674332cb5126064b95f17099307e/tempest/scenario/manager.py#L1143
  
[5]https://3fdd3adccbbbca8893fe-55e7a9d33a731efe4f7611907a31a4a1.ssl.cf1.rackcdn.com/924317/10/experimental/neutron-ovn-tempest-ovs-master/038956b/controller/logs/screen-neutron-api.txt
  
[6]https://3fdd3adccbbbca8893fe-55e7a9d33a731efe4f7611907a31a4a1.ssl.cf1.rackcdn.com/924317/10/experimental/neutron-ovn-tempest-ovs-master/038956b/controller/logs/tempest_log.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2079831/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998268] Re: Fernet uid/gid logic issue

2024-09-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/866096
Committed: 
https://opendev.org/openstack/keystone/commit/1cf7d94d6eb27aff92d3a612ee05efcc19e08917
Submitter: "Zuul (22348)"
Branch: master

commit 1cf7d94d6eb27aff92d3a612ee05efcc19e08917
Author: Sam Morrison 
Date:   Wed Nov 30 12:16:40 2022 +1100

Fix logic of fernet creation when running as root

Running `keystone-manage fernet_rotate
--keystone-user root --keystone-group keystone`

will cause the group to be root, not keystone, because the
uid (0) is checked for truthiness instead of being compared
against None.

Closes-Bug: #1998268

Change-Id: Ib20550bf698f4fab381b48571ff8d096a2ae3335


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1998268

Title:
  Fernet uid/gid logic issue

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Running

  keystone-manage fernet_rotate --keystone-user root --keystone-group
  keystone

  will not work as expected due to incorrect logic when the uid is set
  to 0, because 0 is treated as False.

  The new key 0 will have ownership of root:root, not root:keystone.
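
  A minimal standalone illustration of the pitfall (the helpers below
  are hypothetical, not keystone's actual code):

  ```
  def pick_ownership_buggy(uid, gid):
      if uid and gid:            # uid == 0 is falsy, so root looks "unset"
          return (uid, gid)
      return (None, None)        # falls back to the current user/group

  def pick_ownership_fixed(uid, gid):
      if uid is not None and gid is not None:
          return (uid, gid)
      return (None, None)

  assert pick_ownership_buggy(0, 998) == (None, None)  # root:keystone lost
  assert pick_ownership_fixed(0, 998) == (0, 998)      # root:keystone kept
  ```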

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1998268/+subscriptions




[Yahoo-eng-team] [Bug 2078999] Re: nova_manage: Image property restored after migration

2024-09-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/924319
Committed: 
https://opendev.org/openstack/nova/commit/2a1fad41453ca7ce15b1cd9b517055c4ccdd12cf
Submitter: "Zuul (22348)"
Branch: master

commit 2a1fad41453ca7ce15b1cd9b517055c4ccdd12cf
Author: zhong.zhou 
Date:   Wed Jul 17 18:29:46 2024 +0800

nova-manage: modify image properties in request_spec

At present, we can modify the properties in the instance
system_metadata through the image_property sub-command of
nova-manage, but there may be inconsistencies between their
values and those in request_specs.

Since the migration is based on request_specs, the same image
properties are now also written to request_specs.

Closes-Bug: 2078999
Change-Id: Id36ecd022cb6f7f9a0fb131b0d202b79715870a9


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2078999

Title:
  nova_manage: Image property restored after migration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========

  I use "nova-manage image_property" to modify image meta of an
  instance, however, the change lost and restored to the older prop
  after migration.

  Steps to reproduce
  ==================

  1. Create an instance and set a property in the image, such as
  hw_qemu_guest_agent=False
  2. Use "nova-manage image_property set" to modify the instance, with the
  property expected to become True
  3. Migrate the instance

  Expected result
  ===============
  hw_qemu_guest_agent is always True after migration

  Actual result
  =============
  hw_qemu_guest_agent was restored to False after migration
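
  A dict-level sketch of the idea behind the fix (names are
  illustrative, not nova's internals): the property has to be written to
  both places a migration reads from.

  ```
  def set_image_property(system_metadata, request_spec_image_props,
                         key, value):
      # instance system_metadata stores image properties as 'image_<key>'
      system_metadata['image_%s' % key] = value
      # migrations rebuild the guest from the request_spec image, so the
      # same property must be mirrored there or it is "restored" on move
      request_spec_image_props[key] = value

  sysmeta, spec_props = {}, {}
  set_image_property(sysmeta, spec_props, 'hw_qemu_guest_agent', 'True')
  assert spec_props['hw_qemu_guest_agent'] == 'True'
  ```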

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2078999/+subscriptions




[Yahoo-eng-team] [Bug 2081087] Re: Performance regression in neutron-server from 2023.1 to 2024.1 when fetching a Security Group

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/929941
Committed: 
https://opendev.org/openstack/neutron/commit/c1b05e29adf9d0d68c1ac636013a8a363a92eb85
Submitter: "Zuul (22348)"
Branch: master

commit c1b05e29adf9d0d68c1ac636013a8a363a92eb85
Author: Rodolfo Alonso Hernandez 
Date:   Thu Sep 19 14:00:57 2024 +

Change the load method of SG rule "default_security_group"

Since [1], the SG rule SQL view also retrieves the table
"default_security_group", using a complex relationship [2].
When the number of SG rules in a SG is high (above 50 the
performance degradation is clearly noticeable), the
API call can take several seconds. For example, for 100
SG rules it can take up to one minute.

This patch changes the load method of the SG rule
"default_security_group" relationship to "selectin".
Benchmarks with a single default SG and 100 rules,
doing "openstack security group show $sg":
* 2023.2 (without this feature): around 0.05 seconds
* master: between 45-50 seconds (1000x time increase)
* loading method "selectin" or "dynamic": around 0.5 seconds.

NOTE: this feature [1] was implemented in 2024.1. At that
time, the SQLAlchemy version was <2.0 and the "selectin" method
was not available; for that version, "dynamic" can be used instead.

[1]https://review.opendev.org/q/topic:%22bug/2019960%22

[2]https://github.com/openstack/neutron/blob/08fff4087dc342be40db179fca0cd9bbded91053/neutron/db/models/securitygroup.py#L120-L121

Closes-Bug: #2081087
Change-Id: I46af1179f6905307c0d60b5c0fdee264a40a4eac
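
A generic SQLAlchemy 2.x sketch of this kind of change (the models are
illustrative, not neutron's actual schema): switching the relationship
to lazy="selectin" replaces the expensive per-row load inside the view
with one batched IN-query.

```
from sqlalchemy import ForeignKey
from sqlalchemy.orm import (DeclarativeBase, Mapped, mapped_column,
                            relationship)

class Base(DeclarativeBase):
    pass

class SecurityGroup(Base):
    __tablename__ = 'securitygroups'
    id: Mapped[str] = mapped_column(primary_key=True)
    rules: Mapped[list['SecurityGroupRule']] = relationship(
        back_populates='security_group',
        lazy='selectin',  # one extra SELECT ... WHERE id IN (...) per query
    )

class SecurityGroupRule(Base):
    __tablename__ = 'securitygrouprules'
    id: Mapped[str] = mapped_column(primary_key=True)
    security_group_id: Mapped[str] = mapped_column(
        ForeignKey('securitygroups.id'))
    security_group: Mapped['SecurityGroup'] = relationship(
        back_populates='rules')
```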


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2081087

Title:
  Performance regression in neutron-server from 2023.1 to 2024.1 when
  fetching a Security Group

Status in neutron:
  Fix Released

Bug description:
  With the upgrade from 2023.1 to 2024.1 with the ML2/OVS driver we've
  spotted a significant (10 times) performance regression on some
  operations.

  As the best example, we can take security group operations.

  Neutron is running in eventlet, since uWSGI is not yet fully
  functional for 2024.1 (see
  https://review.opendev.org/c/openstack/neutron/+/926922).

  So neutron-server is just being launched with exactly the same
  database and config, just from different venvs.

  ```
  # cat /etc/systemd/system/neutron-server.service 
  [Unit]
  Description = neutron-server service
  After = network-online.target
  After = syslog.target

  [Service]
  Type = simple
  User = neutron
  Group = neutron
  ExecStart = /openstack/venvs/neutron-29.0.2/bin/neutron-server --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
  ExecReload = /bin/kill -HUP $MAINPID
  # Give a reasonable amount of time for the server to start up/shut down
  TimeoutSec = 120
  Restart = on-failure
  RestartSec = 2
  # This creates a specific slice which all services will operate from
  #  The accounting options give us the ability to see resource usage through
  #  the `systemd-cgtop` command.
  Slice = neutron.slice
  # Set Accounting
  CPUAccounting = True
  BlockIOAccounting = True
  MemoryAccounting = True
  TasksAccounting = True
  # Set Sandboxing
  PrivateTmp = False
  PrivateDevices = False
  PrivateNetwork = False
  PrivateUsers = False

  [Install]
  WantedBy = multi-user.target

  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups?project_id=${OS_PROJECT_ID} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real    0m24.450s
  user    0m0.008s
  sys     0m0.010s
  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups/${security_group_uuid} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real    0m54.841s
  user    0m0.010s
  sys     0m0.012s
  # sed -i 's/29.0.2/27.4.0/g' /etc/systemd/system/neutron-server.service
  # systemctl daemon-reload
  # systemctl restart neutron-server
  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups?project_id=${OS_PROJECT_ID} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real    0m1.040s
  user    0m0.011s
  sys     0m0.007s
  # time curl -X GET 
http://127.0.0.1:9696/v2.0/security-groups/${security_group_uuid} -H 
"X-Auth-Token: ${TOKEN}"
  ...
  real    0m0.589s
  user    0m0.012s
  sys     0m0.007s
  ```

  So as you can see, the difference in response time is very
  significant, while the only change I've made is to use the previous
  codebase for the service.

  I am also providing pip freeze for both venvs for comparison, though both of 
them were using upper-constraints:
  # /openstack/venvs/neutron-27.4.0/bin/pip freeze
  alembic==1.8.1
  amqp==5.1.1
  appdirs==1.4.4
  attrs==22.1.0
  autopage==0.5.1
  bcrypt==4.0.0
  cachetools==5.2.0
  certifi==2023.11.17
  cffi==1.15.1
  charset-normalizer==2.1.1
  cliff==4.2.0
  cmd2==2.4.2
  cryptography==38.0.2
  debtcollector==2.5.0
  decor

[Yahoo-eng-team] [Bug 2081174] Re: Handle EndpointNotFound in nova notifier

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/929920
Committed: 
https://opendev.org/openstack/neutron/commit/7d1a20ed4d458c6682a52679b71b6bc8dea20d07
Submitter: "Zuul (22348)"
Branch: master

commit 7d1a20ed4d458c6682a52679b71b6bc8dea20d07
Author: yatinkarel 
Date:   Thu Sep 19 18:32:11 2024 +0530

Handle EndpointNotFound in nova notifier

Currently, if the nova endpoint does not exist, an
exception is raised. Even after the endpoint gets created,
notifications keep failing until the session expires.
If the endpoint does not exist the session is not useful,
so mark it as invalid; this ensures that if the endpoint
is created later the notifications do not fail.

Closes-Bug: #2081174
Change-Id: I1f7fd1d1371ca0a3c4edb409cffd2177d44a1f23
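
A hedged sketch of the handling pattern (the notifier wiring is an
assumption; EndpointNotFound and Session.invalidate() are real
keystoneauth1 API):

```
from keystoneauth1.exceptions import catalog as ks_catalog

def send_events(session, novaclient, events):
    try:
        return novaclient.server_external_events.create(events)
    except ks_catalog.EndpointNotFound:
        # a session whose catalog lacks a compute endpoint is useless;
        # invalidate it so a retry after the endpoint is created works
        session.invalidate()
        raise
```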


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2081174

Title:
  Handle EndpointNotFound in nova notifier

Status in neutron:
  Fix Released

Bug description:
  When the nova endpoint for an endpoint_type (public/internal/admin)
  does not exist, the following traceback is raised:

  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova [-] Failed to notify 
nova on events: [{'name': 'network-changed', 'server_uuid': 
'3c634df2-eb78-4f49-bb01-ae1c546411af', 'tag': 
'feaa6ca6-7c33-4778-a33f-cd065112cc99'}]: 
keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for 
compute service in regionOne region not found
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova Traceback (most 
recent call last):
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/neutron/notifiers/nova.py", line 282, in 
send_events
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova response = 
novaclient.server_external_events.create(
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/novaclient/v2/server_external_events.py", 
line 38, in create
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
self._create('/os-server-external-events', body, 'events',
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/novaclient/base.py", line 363, in _create
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova resp, body = 
self.api.client.post(url, body=body)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 401, in post
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
self.request(url, 'POST', **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/novaclient/client.py", line 69, in request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova resp, body = 
super(SessionClient, self).request(url,
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 554, in 
request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova resp = 
super(LegacyJsonAdapter, self).request(*args, **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 257, in 
request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
self.session.request(url, method, **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 811, in 
request
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova base_url = 
self.get_endpoint(auth, allow=allow,
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/session.py", line 1243, in 
get_endpoint
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova return 
auth.get_endpoint(self, **kwargs)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 375, in 
get_endpoint
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova endpoint_data = 
self.get_endpoint_data(
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/identity/base.py", line 275, in 
get_endpoint_data
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova endpoint_data = 
service_catalog.endpoint_data_for(
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova   File 
"/usr/lib/python3.9/site-packages/keystoneauth1/access/service_catalog.py", 
line 462, in endpoint_data_for
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova raise 
exceptions.EndpointNotFound(msg)
  2024-09-18 13:19:38.182 15 ERROR neutron.notifiers.nova 
keystoneauth1.exceptions.catalog.EndpointNotFound: inte

[Yahoo-eng-team] [Bug 2080933] Re: neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase is broken

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-fwaas/+/929658
Committed: 
https://opendev.org/openstack/neutron-fwaas/commit/caca5ae4a0adbf5a2f2eeabbd746dac9d3ac37e6
Submitter: "Zuul (22348)"
Branch: master

commit caca5ae4a0adbf5a2f2eeabbd746dac9d3ac37e6
Author: Brian Haley 
Date:   Tue Sep 17 10:58:57 2024 -0400

Account for iptables-save output spacing differences

There are places where the iptables-save output is not
exactly as the input, for example:

1) extra space after '-j NFLOG --nflog-prefix'
2) '#/sec' instead of '#/s' for limit-burst
3) '-j REJECT --reject-with icmp-port-unreachable' instead
   of '-j REJECT'

Account for that in the code so when iptables debug is
enabled the functional tests pass.

Related-bug: #2079048
Closes-bug: #2080933

Change-Id: I98fe93019b7d1b84d0622b4430e56b37b7cc0250
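
A hypothetical normalizer in the spirit of the fix, mapping the
iptables-save output variants listed above back to their input
spelling before rules are compared:

```
import re

def normalize_saved_rule(rule):
    # 1) collapse the extra space after '--nflog-prefix'
    rule = re.sub(r'--nflog-prefix\s+', '--nflog-prefix ', rule)
    # 2) burst rates may come back as '#/sec' where '#/s' was given
    rule = rule.replace('/sec', '/s')
    # 3) REJECT returns with its default --reject-with made explicit
    rule = rule.replace('-j REJECT --reject-with icmp-port-unreachable',
                        '-j REJECT')
    return rule
```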


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2080933

Title:
  neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase
  is broken

Status in neutron:
  Fix Released

Bug description:
  The test cases in
  
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase
  are consistently failing now, which blocks the neutron-fwaas-
  functional job.

  Example build:
  https://zuul.opendev.org/t/openstack/build/05d7f31ef63c449d9de275e9a121704b

  Example failure:

  ```
  
neutron_fwaas.tests.functional.services.logapi.agents.drivers.iptables.test_log.FWLoggingTestCase.test_start_logging_when_create_log
  


  Captured traceback:
  ~~~
  Traceback (most recent call last):

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron/tests/base.py",
 line 178, in func
  return f(self, *args, **kwargs)

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 301, in test_start_logging_when_create_log
  self.run_start_logging(ipt_mgr,

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/tests/functional/services/logapi/agents/drivers/iptables/test_log.py",
 line 250, in run_start_logging
  self.log_driver.start_logging(self.context,

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/services/logapi/agents/drivers/iptables/log.py",
 line 241, in start_logging
  self._create_firewall_group_log(context, resource_type,

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/neutron_fwaas/services/logapi/agents/drivers/iptables/log.py",
 line 309, in _create_firewall_group_log
  ipt_mgr.defer_apply_off()

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron/agent/linux/iptables_manager.py",
 line 451, in defer_apply_off
  self._apply()

File 
"/home/zuul/src/opendev.org/openstack/neutron-fwaas/.tox/dsvm-functional-gate/lib/python3.10/site-packages/neutron/agent/linux/iptables_manager.py",
 line 478, in _apply
  raise l3_exc.IpTablesApplyException(msg)

  neutron_lib.exceptions.l3.IpTablesApplyException: IPTables Rules did not 
converge. Diff: # Generated by iptables_manager
  *filter
  -D run.py-accepted 1
  -I run.py-accepted 1 -i qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 2
  -I run.py-accepted 2 -o qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 3
  -I run.py-accepted 3 -i qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 13796087923523008474
  -D run.py-accepted 4
  -I run.py-accepted 4 -o qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 13796087923523008474
  -D run.py-rejected 1
  -I run.py-rejected 1 -j REJECT
  COMMIT
  # Completed by iptables_manager
  # Generated by iptables_manager
  *filter
  -D run.py-accepted 1
  -I run.py-accepted 1 -i qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 2
  -I run.py-accepted 2 -o qr-b0f055da-3f -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 12158444994202490671
  -D run.py-accepted 3
  -I run.py-accepted 3 -i qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 13796087923523008474
  -D run.py-accepted 4
  -I run.py-accepted 4 -o qr-790b0516-f4 -m limit --limit 100/s --limit-burst 
25 -j NFLOG --nflog-prefix 137

[Yahoo-eng-team] [Bug 2068644] Re: Issue associating floating IP with OVN load balancer

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/921663
Committed: 
https://opendev.org/openstack/neutron/commit/d8a4ad9167afd824a3f823d86a8fd33fb67c4abd
Submitter: "Zuul (22348)"
Branch: master

commit d8a4ad9167afd824a3f823d86a8fd33fb67c4abd
Author: Will Szumski 
Date:   Mon Jun 10 13:44:14 2024 +0100

Correct logic error when associating FIP with OVN LB

Fixes a logic error which meant that we didn't iterate over all logical
switches when associating a FIP to an OVN loadbalancer. The symptom was
that the FIP would show in neutron, but would not exist in OVN.

Closes-Bug: #2068644
Change-Id: I6d1979dfb4d6f455ca419e64248087047fbf73d7
Co-Authored-By: Brian Haley 
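
An illustrative reduction of both symptoms (the function shape is
assumed from the report, not the actual driver code): a helper that
returns None on some path breaks callers that extend() a command list,
and stopping after the first logical switch leaves OVN out of sync
with neutron.

```
def handle_lb_fip_cmds(logical_switches, build_cmd):
    cmds = []
    for switch in logical_switches:   # visit every switch, not just one
        cmds.append(build_cmd(switch))
    return cmds                       # never None: callers extend() this
```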


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2068644

Title:
  Issue associating floating IP with OVN load balancer

Status in neutron:
  Fix Released

Bug description:
  Version: yoga

  I'm seeing this failure when trying to associate a floating IP to an
  OVN-based loadbalancer:

  Maintenance task: Failed to fix resource 
990f1d44-2401-49ba-b8c5-aedf7fb0c1ec (type: floatingips): TypeError: 'NoneType' 
object is not iterable
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance Traceback (most 
recent call last):
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 400, in check_for_inconsistencies
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._fix_create_update(admin_context, row)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 239, in _fix_create_update
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
res_map['ovn_create'](context, n_obj)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/maintenance.py",
 line 467, in _create_floatingip_and_pf
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._ovn_client.create_floatingip(context, floatingip)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1201, in create_floatingip
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
LOG.error('Unable to create floating ip in gateway '
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self.force_reraise()
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
200, in force_reraise
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance raise 
self.value
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1197, in create_floatingip
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
self._create_or_update_floatingip(floatingip, txn=txn)
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance   File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py",
 line 1007, in _create_or_update_floatingip
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 
commands.extend(
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance TypeError: 
'NoneType' object is not iterable
  2024-06-06 15:25:22.565 40 ERROR 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.maintenance 

  Unsure if that is masking another issue, but it seems like even in
  master _handle_lb_fip_cmds can return None, e.g.:

  

[Yahoo-eng-team] [Bug 2080556] Re: old nova instances can't be started on post victoria deployments

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/929187
Committed: 
https://opendev.org/openstack/nova/commit/2a870323c3d44d2056b326c184c435a484513532
Submitter: "Zuul (22348)"
Branch: master

commit 2a870323c3d44d2056b326c184c435a484513532
Author: Sean Mooney 
Date:   Thu Sep 12 21:05:54 2024 +0100

allow upgrade of pre-victoria InstanceNUMACells

This change ensures that if we are upgrading an
InstanceNUMACell object created before victoria
(version <1.5) we properly set pcpuset=set() when
loading the object from the db.

This is required to support instances with a numa
topology that do not use cpu pinning.

Depends-On: 
https://review.opendev.org/c/openstack/python-openstackclient/+/929236
Closes-Bug: #2080556
Change-Id: Iea55aabe71c250d8c8e93c61421450b909a7fa3d
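
A hedged sketch of the versioned-object backfill pattern involved (not
nova's exact code): when loading a primitive older than the version
that added 'pcpuset', default it to an empty set so later attribute
access cannot raise.

```
def migrate_legacy_numa_cell(primitive, version):
    """Backfill fields added after this primitive was serialized."""
    if version < (1, 5) and 'pcpuset' not in primitive:
        # unpinned cells created before victoria carry no pcpuset
        primitive['pcpuset'] = set()
    return primitive
```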


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2080556

Title:
  old nova instances can't be started on post victoria deployments

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Downstream we had an interesting bug report
  https://bugzilla.redhat.com/show_bug.cgi?id=2311875

  Instances created after liberty but before victoria
  that request a numa topology but do not have CPU pinning
  cannot be started on post victoria nova.

  As part of the
  https://specs.openstack.org/openstack/nova-specs/specs/train/implemented/cpu-resources.html
  spec we started tracking cpus as PCPU and VCPU resource classes, but
  since a given instance would either have pinned cpus or floating cpus,
  no changes to the instance numa topology object were required.

  with the introduction of mixed cpus in a single instance

  https://specs.openstack.org/openstack/nova-
  specs/specs/victoria/implemented/use-pcpu-vcpu-in-one-instance.html

  the instance numa topology object was extended with a new pcpuset
  field.

  as part of that work the _migrate_legacy_object function was extended to 
default pcpuset to an empty set
  
https://github.com/openstack/nova/commit/867d4471013bf6a70cd3e9e809daf80ea358df92#diff-ed76deb872002cf64931c6d3f2d5967396240dddcb93da85f11886afc7dc4333R212
  for numa topologies that predate ovo

  and

  a new _migrate_legacy_dedicated_instance_cpuset function was added to
  migrate existing pinned instances and instances with ovo in the db.

  what we missed in the review is that unpinned guests should have had the 
cell.pcpuset set to the empty set
  here
  
https://github.com/openstack/nova/commit/867d4471013bf6a70cd3e9e809daf80ea358df92#diff-ed76deb872002cf64931c6d3f2d5967396240dddcb93da85f11886afc7dc4333R178

  The new field is not nullable and is not present in the existing json
  serialised object. As a result, accessing cell.pcpuset on an object
  returned from the db will raise a NotImplementedError because it is
  unset if the VM was created between liberty and victoria.
  This only applies to non-pinned vms with a numa topology, i.e.
  hw:mem_page_size= or hw:numa_nodes=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2080556/+subscriptions




[Yahoo-eng-team] [Bug 2080436] Re: Live migration breaks VM on NUMA enabled systems with shared storage

2024-09-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/928970
Committed: 
https://opendev.org/openstack/nova/commit/035b8404fce878b0a88c4741bea46135b6af51e8
Submitter: "Zuul (22348)"
Branch: master

commit 035b8404fce878b0a88c4741bea46135b6af51e8
Author: Matthew N Heler 
Date:   Wed Sep 11 12:28:15 2024 -0500

Fix regression with live migration on shared storage

The commit c1ccc1a3165ec1556c605b3b036274e992b0a09d introduced
a regression when NUMA live migration was done on shared storage.

The live migration support for the power mgmt feature means we need to
call driver.cleanup() for all NUMA instances to potentially offline
pcpus that are not used any more after the instance is migrated away.
However, this change exposed an issue with the disk cleanup logic. Nova
should never delete the instance directory if that directory is on
shared storage (e.g. the nova instances path is backed by NFS).

This patch fixes that behavior so live migration will function.

Closes-Bug: #2080436
Change-Id: Ia2bbb5b4ac728563a8aabd857ed0503449991df1


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2080436

Title:
  Live migration breaks VM on NUMA enabled systems with shared storage

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The commit c1ccc1a3165ec1556c605b3b036274e992b0a09d introduced
  a regression when NUMA live migration was done on shared storage:

  power_management_possible = (
      'dst_numa_info' in migrate_data and
      migrate_data.dst_numa_info is not None)
  # No instance booting at source host, but instance dir
  # must be deleted for preparing next block migration
  # must be deleted for preparing next live migration w/o shared
  # storage
  # vpmem must be cleaned
  do_cleanup = (not migrate_data.is_shared_instance_path or
                has_vpmem or has_mdevs or power_management_possible)

  Based on the commit, if any type of NUMA system is used with shared
  storage, live migration will delete the backing folder for the VM,
  making the VM unusable for future operations.

  My team is experiencing this issue on 2024.1
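
  A sketch of the separation the fix implies (the migrate_data object
  shape mirrors the snippet above and is an assumption): cleanup() may
  still run for pcpu offlining, but disk removal stays gated on the
  instance path not being shared.

  ```
  def cleanup_decision(migrate_data, has_vpmem, has_mdevs):
      shared = migrate_data.is_shared_instance_path
      power_management_possible = (
          getattr(migrate_data, 'dst_numa_info', None) is not None)
      # driver.cleanup() may still be needed, e.g. to offline pcpus
      do_cleanup = (not shared or has_vpmem or has_mdevs or
                    power_management_possible)
      # but the instance directory must never be deleted when shared
      destroy_disks = not shared
      return do_cleanup, destroy_disks
  ```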

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2080436/+subscriptions




[Yahoo-eng-team] [Bug 2079850] Re: Ephemeral with vfat format fails inspection

2024-09-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/928829
Committed: 
https://opendev.org/openstack/nova/commit/8de15e9a276dc4261dd0656e26ca5a917825f441
Submitter: "Zuul (22348)"
Branch: master

commit 8de15e9a276dc4261dd0656e26ca5a917825f441
Author: Sean Mooney 
Date:   Tue Sep 10 14:41:15 2024 +0100

only safety check bootable files created from glance

For blank files that are created by nova, such as swap
disks and ephemeral disks, we do not need to safety
check them as they are always just bare filesystems.

In the future we should refactor the qcow imagebackend to
not require backing files for swap and ephemeral disks,
but for now we simply disable the check to work around
the addition of the gpt image inspector and its incompatibility
with vfat. Future versions of oslo will account for the vfat
boot record. This is a minimal patch to avoid needing a new
oslo release for 2024.2.

Closes-Bug: #2079850
Change-Id: I7df3d9859aa4be3a012ff919f375a7a3d9992af4
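
A minimal sketch of the workaround's shape (helper and flag names are
assumptions, not nova's API): the safety check only runs for
glance-sourced images, since blank swap/ephemeral disks written by nova
itself are bare filesystems whose vfat boot sector can trip the
GPT/MBR heuristics.

```
def maybe_safety_check(inspector, created_from_glance):
    # blank files generated by nova contain nothing untrusted to inspect
    if not created_from_glance:
        return
    inspector.safety_check()  # raises SafetyCheckFailed on violation
```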


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2079850

Title:
  Ephemeral with vfat format fails inspection

Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.utils:
  In Progress

Bug description:
  When configured to format ephemerals as vfat, we get this failure:

  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.358 2 
DEBUG oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Format inspector failed, 
aborting: Signature KDMV not found: b'\xebX\x90m' _process_chunk 
/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py:1302
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.365 2 
DEBUG oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Format inspector failed, 
aborting: Region signature not found at 3 _process_chunk 
/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py:1302
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.366 2 
WARNING oslo_utils.imageutils.format_inspector [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Safety check mbr on gpt 
failed because GPT MBR has no partitions defined: 
oslo_utils.imageutils.format_inspector.SafetyViolation: GPT MBR has no 
partitions defined
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.366 2 
WARNING nova.virt.libvirt.imagebackend [None 
req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 60ed4d3e522640b6ad19633b28c5b5bb 
ae43aec9c3c242a785c8256abdda1747 - - default default] Base image 
/var/lib/nova/instances/_base/ephemeral_1_0706d66 failed safety check: Safety 
checks failed: mbr: oslo_utils.imageutils.format_inspector.SafetyCheckFailed: 
Safety checks failed: mbr
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [None req-fcf3a278-3417-4a6d-8b10-66e91ca1677d 
60ed4d3e522640b6ad19633b28c5b5bb ae43aec9c3c242a785c8256abdda1747 - - default 
default] [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] Instance failed to 
spawn: nova.exception.InvalidDiskInfo: Disk info file is invalid: Base image 
failed safety check
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
Traceback (most recent call last):
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a]   
File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py", line 
685, in create_image
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
inspector.safety_check()
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a]   
File 
"/usr/lib/python3.9/site-packages/oslo_utils/imageutils/format_inspector.py", 
line 430, in safety_check
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
raise SafetyCheckFailed(failures)
  Sep 03 17:34:28 compute-2 nova_compute[133243]: 2024-09-03 17:34:28.367 2 
ERROR nova.compute.manager [instance: 263ccd01-10b1-46a6-9f81-a6fc27c7177a] 
oslo_utils.imageutils.format_inspector.SafetyCheckFailed: Safety checks failed: 
mbr
  Sep 03 17:34

[Yahoo-eng-team] [Bug 2075349] Re: JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC auth endpoint

2024-09-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/puppet-keystone/+/928755
Committed: 
https://opendev.org/openstack/puppet-keystone/commit/fdf2a2b31a6de76973a35a2494455ef176eee936
Submitter: "Zuul (22348)"
Branch: master

commit fdf2a2b31a6de76973a35a2494455ef176eee936
Author: Takashi Kajinami 
Date:   Tue Sep 10 13:39:46 2024 +0900

Fix default OIDCRedirectURI hiding keystone federation auth endpoint

This updates the default OIDCRedirectURI according to the change made
in the example file in the keystone repo [1].

[1] https://review.opendev.org/925553

Closes-Bug: #2075349
Change-Id: Ia0f3cbb842a4c01e6a3ca44ca66dc9a8a731720c


** Changed in: puppet-keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075349

Title:
  JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC
  auth endpoint

Status in OpenStack Keystone OIDC Integration Charm:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released
Status in puppet-keystone:
  Fix Released

Bug description:
  This bug is about test failures for jammy-caracal, jammy-bobcat, and
  jammy-antelope in cherry-pick commits from this change:

  https://review.opendev.org/c/openstack/charm-keystone-openidc/+/922049

  That change fixed some bugs in the Keystone OpenIDC charm and added
  some additional configuration options to help with proxies.

  The tests all fail with a JSONDecodeError during the Zaza tests for
  the Keystone OpenIDC charm. Here is an example of the error:

  Expecting value: line 1 column 1 (char 0)
  Traceback (most recent call last):
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
974, in json
  return complexjson.loads(self.text, **kwargs)
    File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
  return _default_decoder.decode(s)
    File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
  raise JSONDecodeError("Expecting value", s, err.value) from None
  json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/cliff/app.py", line 
414, in run_subcommand
  self.prepare_to_run_command(cmd)
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/shell.py", 
line 516, in prepare_to_run_command
  self.client_manager.auth_ref
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/clientmanager.py", 
line 208, in auth_ref
  self._auth_ref = self.auth.get_auth_ref(self.session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/federation.py",
 line 62, in get_auth_ref
  auth_ref = self.get_unscoped_auth_ref(session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/oidc.py",
 line 293, in get_unscoped_auth_ref
  return access.create(resp=response)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/access/access.py",
 line 36, in create
  body = resp.json()
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
978, in json
  raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
  requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
  clean_up ListServer: Expecting value: line 1 column 1 (char 0)
  END return value: 1

  According to debug output, the failure happens during the OIDC
  authentication flow. Testing with the OpenStack CLI shows that the
  failure happens right after this request:

  REQ: curl -g -i --insecure -X POST 
https://10.70.143.111:5000/v3/OS-FEDERATION/identity_providers/keycloak/protocols/openid/auth
 -H "Authorization: 
{SHA256}45dbb29ea555e0bd24995cbb1481c8ac66c2d03383bc0c335be977d0daaf6959" -H 
"User-Agent: openstacksdk/3.3.0 keystoneauth1/5.7.0 python-requests/2.32.3 
CPython/3.10.12"
  Starting new HTTPS connection (1): 10.70.143.111:5000
  RESP: [200] Connection: Keep-Alive Content-Length: 0 Date: Tue, 30 Jul 2024 
19:28:17 GMT Keep-Alive: timeout=75, max=1000 Server: Apache/2.4.52 (Ubuntu)
  RESP BODY: Omitted, Content-Type is set to None. Only text/plain, 
application/json responses have their bodies logged.

  This request is unusual in that it is a POST request with no request
  body, and the response is empty. The empty response causes the
  JSONDecodeError because the keystoneauth package expects a JSON
  document to be returned from the request for a Keystone token, and an
  empty string is not a valid JSON document.

  This strange beh

[Yahoo-eng-team] [Bug 2079813] Re: [ovn-octavia-provider] Fully populated LB wrong member subnet id when not specified

2024-09-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/928335
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/6e2ba02339cdb06a63abc74a2b58f993d0560d9c
Submitter: "Zuul (22348)"
Branch: master

commit 6e2ba02339cdb06a63abc74a2b58f993d0560d9c
Author: Fernando Royo 
Date:   Fri Sep 6 12:27:56 2024 +0200

Fix member subnet id on a fully populated LB

When a fully populated LB is created, if the member is created
without indicating the subnet_id to which it belongs, the LB
vip_network_id is inherited by error as member.subnet_id.

This patch fixes this behaviour by inheriting member.subnet_id
from the loadbalancer.vip_subnet_id that is always passed from
the Octavia API when the call is redirected to the OVN provider.

Closes-Bug: #2079813
Change-Id: I098afab053119d1a6eac86a12c1a20cc312b06ef
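
The fallback order described, as a small sketch (attribute names follow
the Octavia provider-driver objects loosely):

```
def resolve_member_subnet_id(member, loadbalancer):
    if member.subnet_id:
        return member.subnet_id
    # fall back to the VIP *subnet*, not the VIP network: Octavia
    # always passes vip_subnet_id when the call is redirected to the
    # OVN provider
    return loadbalancer.vip_subnet_id
```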


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2079813

Title:
  [ovn-octavia-provider] Fully populated LB wrong member subnet id when
  not specified

Status in neutron:
  Fix Released

Bug description:
  When a fully populated LB is created, if the member is created
  without indicating the subnet_id to which it belongs, the LB
  vip_network_id is inherited by error as member.subnet_id [1]

  If the member subnet_id is indicated in the call, or added after LB
  creation in a later step, this issue does not happen.

  [1] https://opendev.org/openstack/ovn-octavia-
  
provider/blame/commit/0673f16fc68d80c364ed8907b26c061be9b8dec1/ovn_octavia_provider/driver.py#L118

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2079813/+subscriptions




[Yahoo-eng-team] [Bug 2078476] Re: rbd_store_chunk_size defaults to 8M not 4M

2024-09-09 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/927844
Committed: 
https://opendev.org/openstack/glance/commit/39e407e9ffe956d40a261905ab98c13b5455e27d
Submitter: "Zuul (22348)"
Branch: master

commit 39e407e9ffe956d40a261905ab98c13b5455e27d
Author: Cyril Roelandt 
Date:   Tue Sep 3 17:25:54 2024 +0200

Documentation: fix default value for rbd_store_chunk_size

Closes-Bug: #2078476
Change-Id: I3b83e57eebf306c4de28fd58589522970e62cf42


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2078476

Title:
  rbd_store_chunk_size defaults to 8M not 4M

Status in Glance:
  Fix Released

Bug description:
  Versions affected: from current master to at least Antelope.

  The documentation
  
(https://docs.openstack.org/glance/2024.1/configuration/configuring.html#configuring-
  the-rbd-storage-backend) states that the default rbd_store_chunk_size
  defaults to 4M while in reality it's 8M. This could have been 'only' a
  documentation bug, but there are two concerns here:

  1) Was it the original intention to have 8M chunk size (which is
  different from Ceph's defaults = 4M) or was it an inadvertent effect
  of other changes?

  2) Cinder defaults to rbd_store_chunk_size=4M. Having volumes created
  from Glance images results in an inherited chunk size of 8M (due to
  snapshotting) and could have unpredicted performance consequences. It
  feels like this scenario should at least be documented, if not
  avoided.

  Steps to reproduce:
  - deploy Glance with RBD backend enabled and default config;
  - query stores information for the configured chunk size 
(/v2/info/stores/detail)
  Optional:
  - have an image created in Ceph pool and validate its chunk size with rbd 
info command.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2078476/+subscriptions




[Yahoo-eng-team] [Bug 2073836] Re: "Tagging" extension cannot add tags with charater "/"

2024-09-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924724
Committed: 
https://opendev.org/openstack/neutron/commit/5a558b7d132b6d5cdda2720a1b345643e08246e2
Submitter: "Zuul (22348)"
Branch: master

commit 5a558b7d132b6d5cdda2720a1b345643e08246e2
Author: Rodolfo Alonso Hernandez 
Date:   Sat Jul 20 20:01:40 2024 +

Add new "tagging" API method: create (POST)

This new method allows creating multiple tags for a single resource.
The tags are passed as arguments in the ``POST`` call. That solves
the issue with the usage of URI-reserved characters in the names of
the tags.

Bumped the neutron-lib library to version 3.15.0, which contains [1].

[1]https://review.opendev.org/c/openstack/neutron-lib/+/924700

APIImpact add create method for service plugin "tagging"
Closes-Bug: #2073836

Change-Id: I9709da13c321695f324fe8d6c1cdc03756660a03
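
An illustrative client-side contrast (the endpoint shapes are assumed
from the report): a tag in the URL path breaks on '/', while the new
POST call carries the tags in the request body.

```
import json
import urllib.parse
import urllib.request

def add_tag_via_path(base, sg_id, tag, token):
    # '%2F' is decoded back to '/' before path splitting -> 404
    url = '%s/v2.0/security-groups/%s/tags/%s' % (
        base, sg_id, urllib.parse.quote(tag, safe=''))
    req = urllib.request.Request(
        url, method='PUT', headers={'X-Auth-Token': token})
    return urllib.request.urlopen(req)

def add_tags_via_body(base, sg_id, tags, token):
    # tag names appear only in the JSON body, so '/' is safe
    url = '%s/v2.0/security-groups/%s/tags' % (base, sg_id)
    req = urllib.request.Request(
        url, data=json.dumps({'tags': tags}).encode(), method='POST',
        headers={'X-Auth-Token': token,
                 'Content-Type': 'application/json'})
    return urllib.request.urlopen(req)
```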


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073836

Title:
  "Tagging" extension cannot add tags with charater "/"

Status in neutron:
  Fix Released

Bug description:
  The calls to add and remove an individual tag accept the tag as part
  of the URL path. However, a url-encoded slash character (as `%2F`) is
  interpreted as a literal slash (`/`) BEFORE path splitting:

  ```
  curl -g -i -X PUT \
  
'https://neutron.example:13696/v2.0/security-groups/51d6c739-dc9e-454e-bf72-54beb2afc5f8/tags/one%2Ftwo'
 \
  -H "X-Auth-Token: "
  HTTP/1.1 404 Not Found
  content-length: 103
  content-type: application/json
  x-openstack-request-id: req-3d5911e5-10be-41e2-b83f-5b6ea5b0bbdf
  date: Mon, 22 Jul 2024 08:57:16 GMT

  {"NeutronError": {"type": "HTTPNotFound", "message": "The resource
  could not be found.", "detail": ""}}
  ```
  Bugzilla reference: https://bugzilla.redhat.com/show_bug.cgi?id=2299208

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073836/+subscriptions




[Yahoo-eng-team] [Bug 2076916] Re: VMs cannot access metadata when connected to a network with only IPv6 subnets with the ML2/OVS and ML2/LB backends in the Neutron zuul gate

2024-09-06 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/926503
Committed: 
https://opendev.org/openstack/neutron-tempest-plugin/commit/4a0b2343d723ea1227e85e0776fc58988a6b9e35
Submitter: "Zuul (22348)"
Branch: master

commit 4a0b2343d723ea1227e85e0776fc58988a6b9e35
Author: Miguel Lavalle 
Date:   Sun Aug 18 17:20:51 2024 -0500

Test metadata query over IPv6 only network with OVS and LB

This change enables the testing of querying the metadata service over
an IPv6-only network.

Depends-On: https://review.opendev.org/c/openstack/neutron/+/922264

Change-Id: I56b1b7e5ca69e2fb01d359ab302e676773966aca
Related-Bug: #2069482
Closes-Bug: 2076916


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2076916

Title:
  VMs cannot access metadata when connected to a network with only IPv6
  subnets with the ML2/OVS and ML2/LB backends in the Neutron zuul gate

Status in neutron:
  Fix Released

Bug description:
  While fixing https://bugs.launchpad.net/neutron/+bug/2069482 "[OVN]
  VMs cannot access metadata when connected to a network with only IPv6
  subnets", a neutron-tempest-plugin test case was proposed to make sure
  in the CI system that VMs can access the metadata service over an
  IPv6 only network: https://review.opendev.org/c/openstack/neutron-
  tempest-plugin/+/925928. While the new test case succeeds with the
  ML2/OVN backend thanks to this fix
  https://review.opendev.org/c/openstack/neutron/+/922264, it also
  showed the following failure with the ML2/OVS and ML2/LB backends:

  curl: (28) Failed to connect to fe80::a9fe:a9fe port 80: Connection
  timed out

  Steps to reproduce:

  Recheck https://review.opendev.org/c/openstack/neutron-tempest-
  plugin/+/925928 and see the logs for the ML2/OVS and ML2/LB jobs

  
  How reproducible: 100%

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2076916/+subscriptions




[Yahoo-eng-team] [Bug 2060916] Re: [RFE] Add 'trusted_vif' field to the port attributes

2024-09-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926068
Committed: 
https://opendev.org/openstack/neutron/commit/104cbf9e60001329968bcab2e6d95ef38168cbc5
Submitter: "Zuul (22348)"
Branch: master

commit 104cbf9e60001329968bcab2e6d95ef38168cbc5
Author: Slawek Kaplonski 
Date:   Fri Aug 9 16:47:04 2024 +0200

Add trusted vif api extension for the port

This patch adds an implementation of the "port_trusted_vif" API
extension as an ml2 extension.
With this extension enabled, it is now possible for ADMIN users to set
a port as trusted without directly modifying the 'binding:profile'
field, which is supposed to be just for machine-to-machine
communication.

The value set in the 'trusted' attribute of the port is included in the
port's binding:profile so that it is still in the same place where e.g.
Nova expects it.

For now, setting this flag directly in the port's binding:profile field
is not forbidden and only a warning is generated in such a case, but in
future releases it should be forbidden and only allowed to be done
using this new attribute of the port resource.

This patch also implements the definition of the new API extension
directly in Neutron. It is temporary and will be removed once patch [1]
in neutron-lib is merged and released.

[1] https://review.opendev.org/c/openstack/neutron-lib/+/923860

Closes-Bug: #2060916
Change-Id: I69785c5d72a5dc659c5a2f27e043c686790b4d2b
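
A dict-level sketch of the mirroring the commit describes (not the
actual ml2 extension code):

```
def apply_trusted_vif(port, trusted):
    if trusted is not None:
        port['trusted'] = trusted
        # keep the value in binding:profile, where e.g. Nova expects it
        port.setdefault('binding:profile', {})['trusted'] = trusted
    return port
```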


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060916

Title:
  [RFE] Add 'trusted_vif' field to the port attributes

Status in neutron:
  Fix Released

Bug description:
  Currently 'trusted=true' can be passed to Neutron by an admin user
  through the port's "binding:profile" field, but this field was
  originally intended to be used only for machine-machine
  communication, and not to be used by any cloud user. There is even
  info about that in the api-ref:

  "A dictionary that enables the application running on the specific
  host to pass and receive vif port information specific to the
  networking back-end. This field is only meant for machine-machine
  communication for compute services like Nova, Ironic or Zun to pass
  information to a Neutron back-end. It should not be used by multiple
  services concurrently or by cloud end users. The existing
  counterexamples (capabilities: [switchdev] for Open vSwitch hardware
  offload and trusted=true for Trusted Virtual Functions) are due to be
  cleaned up. The networking API does not define a specific format of
  this field. ..."

  
  This will be even worse with the new S-RBAC policies, where the
  "binding:profile" field is allowed to be changed only by SERVICE role
  users, not even by admins.

  So this small RFE is a proposal to add a new API extension which will
  add a field like "trusted_vif" to the port object. This field would
  then be accessible to ADMIN role users in the Secure-RBAC policies.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060916/+subscriptions




[Yahoo-eng-team] [Bug 2078787] Re: [postgresql] CI job randomly failing during "get_ports" command

2024-09-04 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927801
Committed: 
https://opendev.org/openstack/neutron/commit/c7d07b7421034c2722fb0d0cfd2371e052928b97
Submitter: "Zuul (22348)"
Branch: master

commit c7d07b7421034c2722fb0d0cfd2371e052928b97
Author: Rodolfo Alonso Hernandez 
Date:   Tue Sep 3 10:31:24 2024 +

Protect the "standardattr" retrieval from a concurrent deletion

The method ``_extend_tags_dict`` can be called from a "list" operation.
If a resource and its "standardattr" register are deleted concurrently,
the "standard_attr" field retrieval will fail.

The "list" operation is protected with a READER transaction context;
however, this is failing with the PostgreSQL DB backend.

Closes-Bug: #2078787
Change-Id: I55142ce21cec8bd8e2d6b7b8b20c0147873699da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078787

Title:
  [postgresql] CI job randomly failing during "get_ports" command

Status in neutron:
  Fix Released

Bug description:
  This issue is happening in master and stable branches.

  The Neutron API fails during a "get_ports" command with the following error:
  * Logs: 
https://1a2314758f28e1d7bdcb-9b5b0c3ad08d4708e738c2961a946a92.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-ovn-tempest-postgres-full/ac172f5/testr_results.html
  * Snippet: https://paste.opendev.org/show/boCN2S0gesS1VldBuxpj/

  It seems that, during the port retrieval in the "get_ports" command,
  one of the ports is concurrently deleted along with its related
  "standard_attr" register. This happens despite the reader context that
  should protect the "get_ports" command. It does not happen with
  MySQL/MariaDB.
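
  A minimal sketch of the kind of guard involved, assuming SQLAlchemy's
  usual exception for rows deleted mid-session (illustrative, not the
  actual patch):

    from sqlalchemy.orm import exc as orm_exc

    def _extend_tags_dict(response_data, db_obj):
        # Tolerate a standard_attr row that vanished during the listing.
        try:
            tags = [tag_db.tag for tag_db in db_obj.standard_attr.tags]
        except orm_exc.ObjectDeletedError:
            return
        response_data['tags'] = tags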

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078787/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075178] Re: test_snapshot_running test fails if qemu-img binary is missing

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/925208
Committed: 
https://opendev.org/openstack/nova/commit/0809f75d7921fe01a6832211081e756a11b3ad4e
Submitter: "Zuul (22348)"
Branch:master

commit 0809f75d7921fe01a6832211081e756a11b3ad4e
Author: Julien Le Jeune 
Date:   Tue Jul 30 15:45:48 2024 +0200

Skip snapshot test when missing qemu-img

Since the commit that removed the AMI snapshot format special casing
merged, we're now running the libvirt snapshot tests as expected.
However, those tests need the qemu-img binary to be installed.
Because these tests had been silently and incorrectly skipped for so long,
they didn't receive the same maintenance as other tests, as the failures
went unnoticed.

Change-Id: Ia90eedbe35f4ab2b200bdc90e0e35e5a86cc2110
Closes-bug: #2075178
Signed-off-by: Julien Le Jeune 


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2075178

Title:
  test_snapshot_running test fails if qemu-img binary is missing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the qemu-img binary is not present on the system, this test fails
  as we can see in this log:

  ==
  ERROR: 
nova.tests.unit.virt.test_virt_drivers.LibvirtConnTestCase.test_snapshot_running
  --
  pythonlogging:'': {{{
  2024-07-30 15:47:15,058 INFO [nova.db.migration] Applying migration(s)
  2024-07-30 15:47:15,170 INFO [nova.db.migration] Migration(s) applied
  2024-07-30 15:47:15,245 INFO [nova.db.migration] Applying migration(s)
  2024-07-30 15:47:15,901 INFO [nova.db.migration] Migration(s) applied
  2024-07-30 15:47:15,997 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-30 15:47:15,998 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-30 15:47:16,000 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  2024-07-30 15:47:17,245 INFO [os_vif] Loaded VIF plugins: linux_bridge, noop, 
ovs
  2024-07-30 15:47:17,426 INFO [nova.virt.libvirt.driver] Creating image(s)
  2024-07-30 15:47:17,560 INFO [nova.virt.libvirt.host] kernel doesn't support 
AMD SEV
  2024-07-30 15:47:17,642 INFO [nova.virt.libvirt.driver] Instance spawned 
successfully.
  2024-07-30 15:47:17,711 INFO [nova.virt.libvirt.driver] Beginning live 
snapshot process
  }}}

  Traceback (most recent call last):
File "/home/jlejeune/dev/pci_repos/stash/nova/nova/virt/libvirt/driver.py", 
line 3110, in snapshot
  metadata['location'] = root_disk.direct_snapshot(
File "/usr/lib/python3.10/unittest/mock.py", line 1114, in __call__
  return self._mock_call(*args, **kwargs)
File "/usr/lib/python3.10/unittest/mock.py", line 1118, in _mock_call
  return self._execute_mock_call(*args, **kwargs)
File "/usr/lib/python3.10/unittest/mock.py", line 1173, in 
_execute_mock_call
  raise effect
  NotImplementedError: direct_snapshot() is not implemented

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/virt/test_virt_drivers.py",
 line 60, in wrapped_func
  return f(self, *args, **kwargs)
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/virt/test_virt_drivers.py"

[Yahoo-eng-team] [Bug 2078518] Re: neutron designate scenario job failing with new RBAC

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/designate/+/927792
Committed: 
https://opendev.org/openstack/designate/commit/4388f00d267c4090b7de6bc94da9e2970abdf0cc
Submitter: "Zuul (22348)"
Branch:master

commit 4388f00d267c4090b7de6bc94da9e2970abdf0cc
Author: Slawek Kaplonski 
Date:   Tue Sep 3 10:49:04 2024 +0200

Add "admin" role to the designate user created by devstack plugin

The service user named "designate" had only the "service" role up to now,
but with oslo.policy 4.4.0, where "enforce_new_defaults" is set to True
by default, this breaks the integration between Neutron and Designate:
e.g. Neutron's creation of the recordset fails with a Forbidden exception,
as this seems to be allowed only for an admin user or for a shared or
primary zone.

This patch also adds the "admin" role to this "designate" service user to
work around that issue, at least until Designate supports "service" role
usage with Secure RBAC policies.

Closes-Bug: #2078518
Change-Id: I477cc96519e7396a614f92d10986707ec388


** Changed in: designate
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078518

Title:
  neutron designate scenario job failing with new RBAC

Status in Designate:
  Fix Released
Status in neutron:
  Invalid

Bug description:
  Oslo.policy 4.4.0 enabled the new RBAC defaults by default, which does
  not change any config on the Neutron side because Neutron already
  enabled the new defaults, but it enabled Designate's new RBAC defaults.
  That is causing the neutron-tempest-plugin-designate-scenario job to
  fail.

  It is failing here
  - https://review.opendev.org/c/openstack/neutron/+/926085

  And this is a debugging change
  - https://review.opendev.org/c/openstack/neutron/+/926945/7

  I see from the log that the admin Designate client is getting the
  error. As the log below shows, designate_admin gets an error while
  creating the recordset in Designate:

  Aug 09 19:08:30.539307 np0038166723 neutron-server[86674]: ERROR
  neutron_lib.callbacks.manager
  designate_admin.recordsets.create(in_addr_zone_name,

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-
  q-svc.txt#7665

  
https://github.com/openstack/neutron/blob/b847d89ac1f922362945ad610c9787bc28f37457/neutron/services/externaldns/drivers/designate/driver.py#L92

  which is caused by the GET Zone returning 403 in designateclient

  
https://zuul.opendev.org/t/openstack/build/7a18c093d50242ebbea666d92c671945/log/controller/logs/screen-q-svc.txt#7674
  I compared the Designate zone RBAC defaults to see if any change there
  is causing it:

  Old policy: admin or owner
  New policy: admin or project reader

  
https://github.com/openstack/designate/blob/50f686fcffd007506e0cd88788a668d4f57febc3/designate/common/policies/zone.py
  The only difference in the policy is that, if the user is not admin, it
  also checks the role: member and reader only need to have access. But
  here Neutron tries the access with the admin role only.

  I tried to query Designate with "'all_projects': True" in the admin
  Designate client request, but it still fails:

  
https://zuul.opendev.org/t/openstack/build/25be97774e3a4d72a39eb6b2d2bed4a0/log/controller/logs/screen-
  q-svc.txt#7716
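
  The merged fix amounts to granting the service user the extra role; in
  devstack terms it is roughly (user and project names assumed from a
  typical devstack setup):

    openstack role add --user designate --project service admin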

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate/+bug/2078518/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078789] Re: [SR-IOV] The "auto" VF status has precedence over the "enable"/"disable" status

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927795
Committed: 
https://opendev.org/openstack/neutron/commit/8211c29158d6fc8a1af938c326dfbaa685428a4a
Submitter: "Zuul (22348)"
Branch:master

commit 8211c29158d6fc8a1af938c326dfbaa685428a4a
Author: Rodolfo Alonso Hernandez 
Date:   Tue Sep 3 09:30:54 2024 +

[SR-IOV] The port status=DOWN has precedence in the VF link status

If an ML2/SR-IOV port is disabled (status=DOWN), its value will take
precedence over the "auto" value in the VF link state. That will stop any
transmission from the VF.

Closes-Bug: #2078789
Change-Id: I11d973d245dd391623e501aa14b470daa780b4db


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078789

Title:
  [SR-IOV] The "auto" VF status has precedence over the
  "enable"/"disable" status

Status in neutron:
  Fix Released

Bug description:
  This bug only applies to ML2/SR-IOV.

  The port field "propagate_uplink_status" defines whether the port (VF)
  will follow the parent port (PF) status (enabled/disabled). The "auto"
  status has precedence over the "enable"/"disable" status. However, this
  could be a security issue: if the port owner wants to stop the VF
  (VM port) from transmitting any traffic, they first need to unset the
  "propagate_uplink_status" field [1] and then set the port status to
  "disabled".

  Scope of this bug: The "disabled" status must have precedence over the
  "auto" or "enabled" statuses, for security reasons.

  
  [1]https://bugs.launchpad.net/neutron/+bug/2078661
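
  On the host, this maps to the standard iproute2 VF link state control;
  the fix means the "disable" state must win whenever the Neutron port is
  DOWN (commands illustrative):

    ip link set <pf-device> vf <vf-index> state auto     # follow the PF
    ip link set <pf-device> vf <vf-index> state disable  # port status=DOWN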

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078789/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2078382] Re: [OVN] User defined router flavor with no LSP associated to router interfaces

2024-09-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/917800
Committed: 
https://opendev.org/openstack/neutron/commit/44cbbba369ad12bfdc8276319c7bcea173ddaa96
Submitter: "Zuul (22348)"
Branch:master

commit 44cbbba369ad12bfdc8276319c7bcea173ddaa96
Author: Miguel Lavalle 
Date:   Tue Apr 30 20:15:23 2024 -0500

User defined router flavor driver with no LSP

There is a use case where a user defined router flavor requires router
interfaces that don't have a corresponding OVN LSP. In this use case,
Neutron acts only as an IP address manager for the router interfaces.

This change adds a user defined router flavor driver that implements
the described use case. The new functionality is completely contained in
the new driver, with no logic added to the rest of ML2/OVN. This is
accomplished as follows:

1) When an interface is added to a router, the driver deletes the LSP
and the OVN revision number.

2) When an interface is about to be removed from a router, the driver
re-creates the LSP and the OVN revision number. In this way, ML2/OVN
can later delete the port normally.

Closes-Bug: #2078382

Change-Id: I14d675af2da281cc5cd435cae947ccdb13ece12b


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078382

Title:
  [OVN] User defined router flavor with no LSP associated to router
  interfaces

Status in neutron:
  Fix Released

Bug description:
  There is at least one OpenStack operator that requires the ability to
  create an ML2/OVN user-defined router flavor whose router interfaces
  don't have Logical Switch Ports associated with them. This allows a
  user-defined flavor driver to process traffic bypassing the OVN
  pipeline. In this use case, Neutron acts only as an IP address manager
  for the router interfaces.

  The associated functionality shouldn't conflict with the ML2/OVN
  mechanism manager when deleting the associated Neutron ports, which
  would happen if the LSP is just removed without making provisions for
  the eventual removal of the router interface.
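
  A hypothetical sketch of the two driver hooks the commit describes (all
  names assumed; not the actual driver code):

    def add_router_interface(self, context, router_id, port):
        # No LSP for this interface: drop it and its OVN revision row.
        self._ovn_client.delete_lsp(port['id'])
        self._delete_ovn_revision(context, port['id'])

    def remove_router_interface(self, context, router_id, port):
        # Re-create both so ML2/OVN can delete the port normally later.
        self._ovn_client.create_lsp(port)
        self._create_ovn_revision(context, port['id'])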

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2078382/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075723] Re: Wrong token expiration time format with expiring application credentials

2024-09-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/925596
Committed: 
https://opendev.org/openstack/keystone/commit/d01cde5a19d83736c9be235b27af8cc84ee01ed6
Submitter: "Zuul (22348)"
Branch:master

commit d01cde5a19d83736c9be235b27af8cc84ee01ed6
Author: Boris Bobrov 
Date:   Fri Aug 2 15:16:10 2024 +0200

Correct format for token expiration time

Tokens with expiration time limited by application credentials had an
incorrect format.

Fix the format, control it with the test.

Closes-Bug: 2075723
Change-Id: I09fe34541615090766a5c4a010a3f39756debedc


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075723

Title:
  Wrong token expiration time format with expiring application
  credentials

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  In bug #1992183, token expiration time was limited to the application
  credentials expiration time. Unfortunately, the format used in the
  token is not the one specified in api-ref.

  Steps to reproduce:
  1. Create application credentials expiring very soon
  2. Issue a token with the application credentials
  3. Validate the token and check token expiration time

  Observed:
  "expires_at": "2024-08-02T13:47:05",

  Expected:
  "expires_at": "2024-08-02T13:47:05.00Z",

  I expect this, because:
  1. 
https://docs.openstack.org/api-ref/identity/v3/#validate-and-show-information-for-token
 - our docs say so
  2. The format is with Z in the end in all other authentication plugins

  This is also expected by tools that parse the token and convert it to
  objects, and that are stricter about the formats.
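
  A runnable illustration of the documented format, with subsecond
  precision and a trailing 'Z' (example values taken from this report):

    from datetime import datetime, timezone

    expiry = datetime(2024, 8, 2, 13, 47, 5, tzinfo=timezone.utc)
    print(expiry.strftime('%Y-%m-%dT%H:%M:%S.%fZ'))
    # -> 2024-08-02T13:47:05.000000Z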

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2075723/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075349] Re: JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC auth endpoint

2024-09-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/925553
Committed: 
https://opendev.org/openstack/keystone/commit/7ac0c3cd33214ff3c926e2b5316b637892d701fb
Submitter: "Zuul (22348)"
Branch:master

commit 7ac0c3cd33214ff3c926e2b5316b637892d701fb
Author: Jadon Naas 
Date:   Thu Aug 1 21:10:43 2024 -0400

Update OIDC Apache config to avoid masking Keystone API endpoint

The current configuration for the OIDCRedirectURI results in
mod_auth_openidc masking the Keystone federation authentication
endpoint, which results in incorrect responses to requests for
Keystone tokens. This change updates the documentation to
recommend using a vanity URL that does not match a Keystone
API endpoint.

Closes-Bug: 2075349
Change-Id: I1dfba5c71da68522fdb6059f0dc03cddc74cb07d


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2075349

Title:
  JSONDecodeError when OIDCRedirectURI is the same as the Keystone OIDC
  auth endpoint

Status in OpenStack Keystone OIDC Integration Charm:
  In Progress
Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  This bug is about test failures for jammy-caracal, jammy-bobcat, and
  jammy-antelope in cherry-pick commits from this change:

  https://review.opendev.org/c/openstack/charm-keystone-openidc/+/922049

  That change fixed some bugs in the Keystone OpenIDC charm and added
  some additional configuration options to help with proxies.

  The tests all fail with a JSONDecodeError during the Zaza tests for
  the Keystone OpenIDC charm. Here is an example of the error:

  Expecting value: line 1 column 1 (char 0)
  Traceback (most recent call last):
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
974, in json
  return complexjson.loads(self.text, **kwargs)
    File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
  return _default_decoder.decode(s)
    File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
  raise JSONDecodeError("Expecting value", s, err.value) from None
  json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/cliff/app.py", line 
414, in run_subcommand
  self.prepare_to_run_command(cmd)
    File "/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/shell.py", 
line 516, in prepare_to_run_command
  self.client_manager.auth_ref
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/osc_lib/clientmanager.py", 
line 208, in auth_ref
  self._auth_ref = self.auth.get_auth_ref(self.session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/federation.py",
 line 62, in get_auth_ref
  auth_ref = self.get_unscoped_auth_ref(session)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/oidc.py",
 line 293, in get_unscoped_auth_ref
  return access.create(resp=response)
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/keystoneauth1/access/access.py",
 line 36, in create
  body = resp.json()
    File 
"/home/jadon/py3-venv/lib/python3.10/site-packages/requests/models.py", line 
978, in json
  raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
  requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
  clean_up ListServer: Expecting value: line 1 column 1 (char 0)
  END return value: 1

  According to debug output, the failure happens during the OIDC
  authentication flow. Testing with the OpenStack CLI shows the failure
  happens right after this request:

  REQ: curl -g -i --insecure -X POST 
https://10.70.143.111:5000/v3/OS-FEDERATION/identity_providers/keycloak/protocols/openid/auth
 -H "Authorization: 
{SHA256}45dbb29ea555e0bd24995cbb1481c8ac66c2d03383bc0c335be977d0daaf6959" -H 
"User-Agent: openstacksdk/3.3.0 keystoneauth1/5.7.0 python-requests/2.32.3 
CPython/3.10.12"
  Starting new HTTPS connection (1): 10.70.143.111:5000
  RESP: [200] Connection: Keep-Alive Content-Length: 0 Date: Tue, 30 Jul 2024 
19:28:17 GMT Keep-Alive: timeout=75, max=1000 Server: Apache/2.4.52 (Ubuntu)
  RESP BODY: Omitted, Content-Type is set to None. Only text/plain, 
application/json responses have their bodies logged.

  This request is unusual in that it is a POST request with no request
  body, and the response is empty. The empty response causes the
  JSONDecodeError, because the keystoneauth package expects a JSON
  document to be returned from the request for a Keystone token. The
  empty res
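
  The merged fix is documentation guidance: point OIDCRedirectURI at a
  vanity path that no Keystone API route uses, so mod_auth_openidc cannot
  mask the federation endpoint. A sketch (hostname and path assumed):

    OIDCRedirectURI "https://keystone.example.com:5000/oidc_redirect"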

[Yahoo-eng-team] [Bug 2056195] Re: Return 409 at neutron-client conflict

2024-09-02 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/918048
Committed: 
https://opendev.org/openstack/nova/commit/88b661b0780ee534630c2d345ffd4545158db806
Submitter: "Zuul (22348)"
Branch:master

commit 88b661b0780ee534630c2d345ffd4545158db806
Author: Rajesh Tailor 
Date:   Sat Apr 20 15:37:50 2024 +0530

Handle neutron-client conflict

When a user tries to add stateless and stateful security
groups to the same port, Neutron raises SecurityGroupConflict (409),
but Nova does not handle it and raises InternalServerError (500).

As this appears to be an invalid operation by the user, the user should
get a message explaining what they are doing wrong.

This change catches SecurityGroupConflict from the Neutron
client and raises the newly added Nova exception
SecurityGroupConnectionStateConflict with a 409 error code.

Closes-Bug: #2056195
Change-Id: Ifad28fdd536ff0a4b30e786b2fcbc5a55987a13a
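
The rough shape of the change (exception names per the message above; the
snippet is paraphrased, not the actual Nova source):

    try:
        neutron.update_port(port['id'], {'port': updated_port})
    except neutron_client_exc.Conflict as e:  # SecurityGroupConflict per above
        raise exception.SecurityGroupConnectionStateConflict(reason=str(e))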


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2056195

Title:
  Return 409 at neutron-client conflict

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  When attaching a stateless and stateful security group to a VM, nova returns 
a 500 error but it's a user issue and a 409 conflict error should be returned.

  Steps to reproduce
  ==

  1. create network
  2. create VM "test-vm" attached to the network
  3. optionally create a stateful security group, but the default group should already do
  4. openstack security group create --stateless stateless-group
  5. openstack server add security group test-vm stateless-group

  Expected result
  ===
  Nova forwards the 409 error from Neutron with the error description from 
Neutron.

  Actual result
  =
  Nova returns: 
  Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ 
and attach the Nova API log if possible.
   (HTTP 500) (Request-ID: 
req-c6bbaf50-99b7-4108-98f0-808dfee84933)
   

  Environment
  ===

  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

  # nova-api --version
  26.2.2 (Zed)

  
  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

  Neutron with OVN

  
  Logs & Configs
  ==
  Stacktrace:

  Traceback (most recent call last):,
File "/usr/local/lib/python3.10/site-packages/nova/api/openstack/wsgi.py", 
line 658, in wrapped,
  return f(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/nova/api/openstack/compute/security_groups.py",
 line 437, in _addSecurityGroup,
  return security_group_api.add_to_instance(context, instance,,
File 
"/usr/local/lib/python3.10/site-packages/nova/network/security_group_api.py", 
line 653, in add_to_instance,
  raise e,
File 
"/usr/local/lib/python3.10/site-packages/nova/network/security_group_api.py", 
line 648, in add_to_instance,
  neutron.update_port(port['id'], {'port': updated_port}),
File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", 
line 196, in wrapper,
  ret = obj(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 
828, in update_port,
  return self._update_resource(self.port_path % (port), body=body,,
File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", 
line 196, in wrapper,
  ret = obj(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 
2548, in _update_resource,
  return self.put(path, **kwargs),
File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", 
line 196, in wrapper,
  ret = obj(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 
365, in put,
  return self.retry_request("PUT", action, body=body,,
File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", 
line 196, in wrapper,
  ret = obj(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 
333, in retry_request,
  return self.do_request(method, action, body=body,,
File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", 
line 196, in wrapper,
  ret = obj(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 
297, in do_request,
  self._handle_fault_response(status_code, replybody, resp),
File "/usr/local/lib/python3.10/site-packages/nova/network/neutron.py", 
line 196, in wrapper,
  ret = obj(*args, **kwargs),
File 
"/usr/local/lib/python3.10/site-packages/neutronclient/v2_0/client.py", line 
272, in _handle_fault_response,
 

[Yahoo-eng-team] [Bug 2078432] Re: Port_hardware_offload_type API extension is reported as available but attribute is not set for ports

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927577
Committed: 
https://opendev.org/openstack/neutron/commit/fbb7c9ae3d672796b72b796c53f89865ea6b3763
Submitter: "Zuul (22348)"
Branch:master

commit fbb7c9ae3d672796b72b796c53f89865ea6b3763
Author: Slawek Kaplonski 
Date:   Fri Aug 30 11:50:55 2024 +0200

Fix port_hardware_offload_type ML2 extension

This patch fixes 2 issues related to that port_hardware_offload_type
extension:

1. API extension is now not supported by the ML2 plugin directly so if
   ml2 extension is not loaded Neutron will not report that API
   extension is available,
2. Fix error 500 when creating port with hardware_offload_type
   attribute set but when binding:profile is not set (is of type
   Sentinel).

Closes-bug: #2078432
Closes-bug: #2078434
Change-Id: Ib0038dd39d8d210104ee8a70e4519124f09292da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078432

Title:
  Port_hardware_offload_type API extension is reported as available but
  attribute is not set for ports

Status in neutron:
  Fix Released

Bug description:
  This API extension is implemented as an ML2 plugin extension, but the
  API extension alias is also added to the _supported_extension_aliases
  list directly in the ML2 plugin. Because of that, even if the ML2
  extension is not actually loaded, the API extension is reported as
  available, and the 'hardware_offload_type' attribute sent by the client
  is accepted by Neutron but not saved in the DB at all:
  
  $ openstack port create --network private --extra-property \
      type=str,name=hardware_offload_type,value=switchdev test-port-hw-offload

  +------------------------+-------+
  | Field                  | Value |
  +------------------------+-------+
  | admin_state_up         | UP    |
  | allowed_address_pairs  |       |
  | binding_host_id        |       |
  | binding_profile        |       |
  | binding_vif_details    |       |
  | binding_vif_type
  ...
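
  For the extension to actually take effect it must be loaded as an ML2
  extension driver; the expected configuration is roughly (driver alias
  assumed to match the extension name):

    [ml2]
    extension_drivers = port_security,port_hardware_offload_type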

[Yahoo-eng-team] [Bug 2078434] Re: Creating port with hardware_offload_type attribute set fails with error 500

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/927577
Committed: 
https://opendev.org/openstack/neutron/commit/fbb7c9ae3d672796b72b796c53f89865ea6b3763
Submitter: "Zuul (22348)"
Branch:master

commit fbb7c9ae3d672796b72b796c53f89865ea6b3763
Author: Slawek Kaplonski 
Date:   Fri Aug 30 11:50:55 2024 +0200

Fix port_hardware_offload_type ML2 extension

This patch fixes 2 issues related to that port_hardware_offload_type
extension:

1. API extension is now not supported by the ML2 plugin directly so if
   ml2 extension is not loaded Neutron will not report that API
   extension is available,
2. Fix error 500 when creating port with hardware_offload_type
   attribute set but when binding:profile is not set (is of type
   Sentinel).

Closes-bug: #2078432
Closes-bug: #2078434
Change-Id: Ib0038dd39d8d210104ee8a70e4519124f09292da


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2078434

Title:
  Creating port with hardware_offload_type attribute set fails with
  error 500

Status in neutron:
  Fix Released

Bug description:
  When the port_hardware_offload_type ML2 extension is enabled and a port
  with the hardware_offload_type attribute is created, it may fail with
  error 500 if no binding:profile field is provided (and it is of type
  'Sentinel'). The error is:

  ...
  ERROR neutron.pecan_wsgi.hooks.translation with excutils.save_and_reraise_exception():
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
  ERROR neutron.pecan_wsgi.hooks.translation     self.force_reraise()
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/data/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
  ERROR neutron.pecan_wsgi.hooks.translation     raise self.value
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/managers.py", line 1132, in _call_on_ext_drivers
  ERROR neutron.pecan_wsgi.hooks.translation     getattr(driver.obj, method_name)(plugin_context, data, result)
  ERROR neutron.pecan_wsgi.hooks.translation   File "/opt/stack/neutron/neutron/plugins/ml2/extensions/port_hardware_offload_type.py", line 44, in process_create_port
  ERROR neutron.pecan_wsgi.hooks.translation     self._process_create_port(context, data, result)
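
  A minimal sketch of the guard the fix implies, assuming neutron-lib's
  usual sentinel for unset attributes (illustrative, not the actual
  patch):

    from neutron_lib import constants

    profile = port.get('binding:profile')
    if profile is constants.ATTR_NOT_SPECIFIED:  # the 'Sentinel' above
        profile = {}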


  

[Yahoo-eng-team] [Bug 1929805] Re: Can't remove records in 'Create Record Set' form in DNS dashboard

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/793420
Committed: 
https://opendev.org/openstack/horizon/commit/3b222c85c1e07ad0f55da93460520e1a07713a54
Submitter: "Zuul (22348)"
Branch:master

commit 3b222c85c1e07ad0f55da93460520e1a07713a54
Author: Vadym Markov 
Date:   Wed May 26 16:01:49 2021 +0300

CSS fix makes "Delete item" button active

Currently used in designate-dashboard in the DNS Zones - Create Record
Set modal window.
Closes-Bug: #1929805
Change-Id: Ibcc97927df4256298a5c8d5e9834efa9ee498291


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1929805

Title:
  Can't remove records in 'Create Record Set' form in DNS dashboard

Status in Designate Dashboard:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Reproduced on devstack with master, but it seems that any setup with
  Designate since Mitaka is affected.

  Steps to reproduce:

  1. Go to Project/DNS/Zones page 
  2. Create a Zone
  3. Click on ‘Create Record Set’ button at the right of the Zone record
  4. Try to fill several ‘Record’ fields in the ‘Records’ section of the form,
then delete the data in a field with the 'x' button

  Expected behavior:
  Record deleted

  Actual behavior:
  'x' button is inactive

  It is a bug in the CSS used by the array widget in Horizon, but
  currently this array widget is used only in designate-dashboard.

To manage notifications about this bug go to:
https://bugs.launchpad.net/designate-dashboard/+bug/1929805/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075147] Re: "neutron-tempest-plugin-api-ovn-wsgi" not working with TLS

2024-08-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925376
Committed: 
https://opendev.org/openstack/neutron/commit/76f343c5868556f12f9ee74b7ef2291cf5e2ff85
Submitter: "Zuul (22348)"
Branch:master

commit 76f343c5868556f12f9ee74b7ef2291cf5e2ff85
Author: Rodolfo Alonso Hernandez 
Date:   Wed Jul 31 10:53:14 2024 +

Monkey patch the system libraries before calling them

The Neutron API with WSGI module, and specifically when using ML2/OVN,
was importing some system libraries before patching them. That was
leading to a recursion error, as reported in the related LP bug.
By calling ``eventlet_utils.monkey_patch()`` at the very beginning
of the WSGI entry point [1], this issue is fixed.

[1] WSGI entry point:
  $ cat /etc/neutron/neutron-api-uwsgi.ini
  ...
  module = neutron.wsgi.api:application

Closes-Bug: #2075147
Change-Id: If2aa37b2a510a85172da833ca20564810817d246


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2075147

Title:
  "neutron-tempest-plugin-api-ovn-wsgi" not working with TLS

Status in neutron:
  Fix Released

Bug description:
  The Neutron CI job "neutron-tempest-plugin-api-ovn-wsgi" is not
  working because TLS is enabled. There is an issue in the SSL library
  that throws a recursive exception.

  Snippet https://paste.opendev.org/show/briEIdk5z5SwYg25axnf/

  Log:
  
https://987c691fdc28f24679c7-001d480fc44810e6cf7b18a72293f87e.ssl.cf1.rackcdn.com/periodic/opendev.org/openstack/neutron/master/neutron-
  tempest-plugin-api-ovn-wsgi/8e01634/
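
  The fix is about import ordering: the monkey patching must happen
  before any module imports ssl and friends. A generic illustration of
  the rule (not the Neutron code itself):

    import eventlet
    eventlet.monkey_patch()  # patch socket/ssl/threading first

    import ssl  # safe only after the patching above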

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2075147/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2051935] Re: [OVN] SNAT only happens for subnets directly connected to a router

2024-08-29 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926495
Committed: 
https://opendev.org/openstack/neutron/commit/dbf53b7bbfa27cb74b1d0b0e47629bf3e1403645
Submitter: "Zuul (22348)"
Branch:master

commit dbf53b7bbfa27cb74b1d0b0e47629bf3e1403645
Author: Ihar Hrachyshka 
Date:   Fri Aug 16 22:22:24 2024 +

Support nested SNAT for ml2/ovn

When ovn_router_indirect_snat = True, ml2/ovn will set a catch-all snat
rule for each external ip, instead of a snat rule per attached subnet.

NB: This option is global to cluster and cannot be controlled per
project or per router.

NB2: this patch assumes that 0.0.0.0/0 snat rules are properly handled
by OVN. Some (e.g. 22.03 and 24.03) OVN versions may have this scenario
broken. See: https://issues.redhat.com/browse/FDP-744 for details.

--

A long time ago, nested SNAT behavior was unconditionally enabled for
ml2/ovs, see: https://bugs.launchpad.net/neutron/+bug/1386041

Since this behavior has potential security implications, and since it
may not be desired in all environments, a new flag is introduced.

Since OVN was deployed without nested SNAT enabled in multiple
environments, the flag is set to False by default (meaning: no nested
SNAT).

In theory, instead of a config option, neutron could introduce a new API
to allow users to control the behavior per router. This would require
more work though. This granular API is left out of the patch. Interested
parties are welcome to start a discussion about adding the new API as a
new neutron extension to routers.

--

Before this patch, there was an alternative implementation proposed that
was not relying on 0.0.0.0/0 snat behavior implemented properly in OVN.
The implementation was abandoned because it introduced non-negligible
complexity in the neutron code and the OVN NB database.

See: https://review.opendev.org/c/openstack/neutron/+/907504

--

Closes-Bug: #2051935
Co-Authored-By: Brian Haley 
Change-Id: I28fae44edc122fae389916e25b3321550de001fd


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2051935

Title:
  [OVN] SNAT only happens for subnets directly connected to a router

Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New

Bug description:
  I am trying to achieve the following scenario:

  I have a VM attached to a router w/o external gateway (called project-
  router) but with a default route which sends all the traffic to another
  router (transit-router) which has an external gateway with SNAT
  enabled and is connected to a transit network 192.168.100.0/24.

  My VM is on 172.16.100.0/24; traffic hits the project-router, gets
  redirected to the transit-router correctly thanks to the default route,
  and reaches the external gateway, but without being SNATed.

  This is because in OVN I see that SNAT on this router is only enabled
  for logical IPs in 192.168.100.0/24, which is the subnet directly
  connected to the router:

  # ovn-nbctl lr-nat-list neutron-6d1e6bb7-3949-43d1-8dac-dc55155b9ad8
  TYPE  EXTERNAL_IP    EXTERNAL_PORT  LOGICAL_IP        EXTERNAL_MAC  LOGICAL_PORT
  snat  147.22.16.207                 192.168.100.0/24

  But I would like that this router snat all the traffic that hits it,
  even when coming from a subnet not directly connected to it.

  I can achieve this by setting in ovn the snat for 0.0.0.0/0

  # ovn-nbctl lr-nat-add neutron-6d1e6bb7-3949-43d1-8dac-dc55155b9ad8
  snat 147.22.16.207 0.0.0.0/0

  # ovn-nbctl lr-nat-list neutron-6d1e6bb7-3949-43d1-8dac-dc55155b9ad8
  TYPE  EXTERNAL_IP    EXTERNAL_PORT  LOGICAL_IP        EXTERNAL_MAC  LOGICAL_PORT
  snat  147.22.16.207                 0.0.0.0/0
  snat  147.22.16.207                 192.168.100.0/24

  But this workaround can be wiped if I run the neutron-ovn-db-sync-util
  on any of the neutron-api units.

  Is there a way to achieve this via OpenStack? If not does it make
  sense to have this as a new feature?
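
  With the fix merged, the behavior is opt-in through the new option named
  in the commit message; the expected configuration is roughly (file and
  section placement assumed):

    [ovn]
    ovn_router_indirect_snat = True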

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2051935/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077228] Re: libvirt reports powered down CPUs as being on socket 0 regardless of their real socket

2024-08-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926218
Committed: 
https://opendev.org/openstack/nova/commit/79d1f06094599249e6e30ebba2488b8b7a10834e
Submitter: "Zuul (22348)"
Branch:master

commit 79d1f06094599249e6e30ebba2488b8b7a10834e
Author: Artom Lifshitz 
Date:   Tue Aug 13 11:29:10 2024 -0400

libvirt: call get_capabilities() with all CPUs online

While we do cache the host's capabilities in self._caps in the
libvirt Host object, if we happen to first call get_capabilities() with
some of our dedicated CPUs offline, libvirt erroneously reports them
as being on socket 0 regardless of their real socket. We would then
cache that topology, thus breaking pretty much all of our NUMA
accounting.

To fix this, this patch makes sure to call get_capabilities()
immediately upon host init, and to power up all our dedicated CPUs
before doing so. That way, we cache their real socket ID.

For testing, because we don't really want to implement a libvirt bug
in our Python libvirt fixture, we make do with a simple unit test
that asserts that init_host() has powered on the correct CPUs.

Closes-bug: 2077228
Change-Id: I9a2a7614313297f11a55d99fb94916d3583a9504


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2077228

Title:
  libvirt reports powered down CPUs as being on socket 0 regardless of
  their real socket

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is more of a libvirt (or maybe even lower down in the kernel)
  bug, but the consequence of $topic's reporting is that if libvirt CPU
  power management is enabled, we mess up our NUMA accounting because we
  have the wrong socket for some/all of our dedicated CPUs, depending on
  whether they were online or not when we called get_capabilities().

  Initially found by internal Red Hat testing, and reported here:
  https://issues.redhat.com/browse/OSPRH-8712
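
  The fix's ordering, paraphrased (method names assumed, not the actual
  Nova source):

    def init_host(self):
        self._power_up_dedicated_cpus()       # all dedicated CPUs online first
        self._caps = self.get_capabilities()  # cached socket IDs are now real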

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2077228/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045974] Re: RFE: Create a role for domain-scoped self-service identity management by end users

2024-08-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/924132
Committed: 
https://opendev.org/openstack/keystone/commit/69d1897d0974aafc5f41b851ce61f62ab879c805
Submitter: "Zuul (22348)"
Branch:master

commit 69d1897d0974aafc5f41b851ce61f62ab879c805
Author: Markus Hentsch 
Date:   Mon Jul 15 11:09:55 2024 +0200

Implement the Domain Manager Persona for Keystone

Introduces domain-scoped policies for the 'manager' role to permit
domain-wide management capabilities in regards to users, groups,
projects and role assignments.
Defines a new base policy rule to restrict the roles assignable by
domain managers.

Closes-Bug: #2045974
Change-Id: I62742ed7d906c92d1132251080758bb54d0fc8e1


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2045974

Title:
  RFE: Create a role for domain-scoped self-service identity management
  by end users

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When assigning individual domains to customers of an OpenStack cloud,
  customer-side self-service identity management (i.e. managing users,
  projects and groups) within a domain is a popular use case but hard to
  implement with the current default role model.

  With its current architecture, assigning the "admin" role to end users is 
very risky even if scoped [1] and usually not an option.
  Furthermore, the "admin" role already has an implicit meaning associated with 
it that goes beyond identity management according to operator feedback [2].

  The Consistent and Secure RBAC rework introduced a "manager" role for 
projects [3].
  Having a similar role model on domain-level for identity management would be 
a good complement to that and enable self-service capabilities for end users.

  Request: introduce a new "domain-manager" role in Keystone and associated 
policy rules.
  The new "domain-manager" role - once assigned to an end user in a domain 
scope - would enable them to manage projects, groups, users and associated role 
assignments within the limitations of the domain.

  [1] https://bugs.launchpad.net/keystone/+bug/968696

  [2] https://governance.openstack.org/tc/goals/selected/consistent-and-
  secure-rbac.html#the-issues-we-are-facing-with-scope-concept

  [3] https://governance.openstack.org/tc/goals/selected/consistent-and-
  secure-rbac.html#project-manager
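
  Once such a role and its policies exist, the intended assignment is a
  standard domain-scoped grant ('manager' per the commit message; user and
  domain names illustrative):

    openstack role add --user alice --domain customer-a manager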

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2045974/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077790] Re: [eventlet] RPC handler thread model is incompatible with eventlet

2024-08-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926922
Committed: 
https://opendev.org/openstack/neutron/commit/ae90e2ccbfa45a8e864ec6f7fca2f28fa90d8062
Submitter: "Zuul (22348)"
Branch:master

commit ae90e2ccbfa45a8e864ec6f7fca2f28fa90d8062
Author: Rodolfo Alonso Hernandez 
Date:   Sat Aug 24 10:35:03 2024 +

Make RPC event cast synchronous with the event

Sometimes, the methods ``NeutronObject.get_object`` and
``ResourcesPushRpcApi.push`` yield the GIL during execution.
Because of that, the thread in charge of sending the RPC information
doesn't finish until another operation is pushed (implemented in [1]).

By making the RPC cast synchronous with the update/delete events, it
is ensured that both operations will finish and the agents will receive
the RPC event on time, just after the event happens.

This issue is hitting more frequently in the migration to the WSGI
server, due to [2]. Once the eventlet library has been deprecated from
OpenStack, it will be possible to use the previous model (using a long
thread to handle the RPC updates to the agents). It is commented in the
code as a TODO.

This patch is temporarily reverting [3]. This code should be restored
too.

[1]https://review.opendev.org/c/openstack/neutron/+/788510
[2]https://review.opendev.org/c/openstack/neutron/+/925376
[3]https://review.opendev.org/c/openstack/neutron/+/824508

Closes-Bug: #2077790
Related-Bug: #2075147
Change-Id: I7b806e6de74164ad9730480a115a76d30e7f15fc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2077790

Title:
  [eventlet] RPC handler thread model is incompatible with eventlet

Status in neutron:
  Fix Released

Bug description:
  The RPC handler class ``_ObjectChangeHandler``, that is instantiated
  in ``OVOServerRpcInterface``, is not eventlet compatible.

  The ``OVOServerRpcInterface`` class is in charge of receiving the
  resource events (port, network, SG, etc.) and sending these updates via
  RPC to the listeners (agents like the OVS agent or the DHCP agent).
  Since [1], we create a single long-running thread that reads the stored
  events and sends the RPC message (``RPCClient.cast``, because no reply
  is expected).

  Although this architecture is correct, it is not fully compatible with
  eventlet. Since [2] and the patches above it testing this change, the
  OVS jobs (that use RPC between the server and the agents) are randomly
  failing. This happens more frequently with the SG API operations (SG
  rule addition and deletion).

  This bug proposes to make the event RPC cast synchronous with the API
  call, avoiding using a thread to collect and send the RPC messages.
  Once eventlet is removed from the OpenStack project, we'll be able to
  use the previous model.

  POC patch: https://review.opendev.org/c/openstack/neutron/+/926922
  Testing patch: https://review.opendev.org/c/openstack/neutron/+/926788

  [1]https://review.opendev.org/c/openstack/neutron/+/788510
  [2]https://review.opendev.org/c/openstack/neutron/+/925376
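
  A sketch of the synchronous shape proposed here (names assumed): the
  cast happens inside the event callback instead of being queued for a
  long-running worker thread.

    def _handle_event(self, context, resource_type, resource_id):
        obj = self._obj_class.get_object(context, id=resource_id)
        self._resources_push_api.push(context, [obj], rpc_events.UPDATED)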

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2077790/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2018737] Re: neutron-dynamic-routing announces routes for disabled routers

2024-08-28 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/882560
Committed: 
https://opendev.org/openstack/neutron-dynamic-routing/commit/06232f0b2c78bb983c5cefcd8a573761f87a
Submitter: "Zuul (22348)"
Branch:master

commit 06232f0b2c78bb983c5cefcd8a573761f87a
Author: Felix Huettner 
Date:   Mon May 8 11:53:55 2023 +0200

Ignore disabled routers for advertising

Currently, if a router is set to disabled, the dragents still advertise
its routes. This causes the upstream routers to still know these routes
and try to forward packets to a non-existing router.

By removing these routes we allow the upstream routers to directly
drop the traffic to these addresses instead of trying to forward it to
Neutron routers.

Closes-Bug: 2018737

Change-Id: Icd6803769f37a04bf7581afb9722c78a44737374


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2018737

Title:
  neutron-dynamic-routing announces routes for disabled routers

Status in neutron:
  Fix Released

Bug description:
  neutron routers can be disabled, thereby basically removing them from their 
l3 agents.
  They will no longer accept, process or forward packets once they are disabled.

  Currently, if a router is set to disabled, the dragents still advertise the
routes to its networks and floating IPs, even though the router is actually
not active and cannot handle these packets. This causes the upstream routers
to still know these routes and try to forward packets to this disabled
router.

  For example, for an internet network this causes unneeded traffic on the
upstream routers and the network nodes: they will receive traffic that they
forward to the network node, which then drops it as the router is gone.

  It would be ideal if routes for disabled routers were no longer advertised
by the dragents. This would cause upstream routers to lose the routes to
these networks/FIPs and allow them to drop the traffic as early as possible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2018737/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076670] Re: Default Roles in keystone: wrong format in example

2024-08-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/926291
Committed: 
https://opendev.org/openstack/keystone/commit/112331d9e95f7b7035f3f818716c2a5111baeb3e
Submitter: "Zuul (22348)"
Branch:master

commit 112331d9e95f7b7035f3f818716c2a5111baeb3e
Author: Artem Goncharov 
Date:   Wed Aug 14 17:37:46 2024 +0200

Fix role statement in admin doc

Closes-Bug: 2076670
Change-Id: I843dcce351d664124c769d815f72cd57caa5e429


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2076670

Title:
  Default Roles in keystone: wrong format in example

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:

  This bug tracker is for errors with the documentation, use the
  following as a template and remove or add fields as you see fit.
  Convert [ ] into [x] to check boxes:

  - [x] This doc is inaccurate in this way: __
  - [ ] This is a doc addition request.
  - [x] I have a fix to the document that I can paste below including example: 
input and output. 

  ```yaml
  "identity:create_foo": "role:service" or "role:admin"
  ```

  has to be

  ```yaml
  "identity:create_foo": "role:service or role:admin"
  ```

  ---
  Release: 25.1.0.dev52 on 2022-11-02 15:54:51
  SHA: a0cc504543e639c90212d69f3bcf91665648e71a
  Source: 
https://opendev.org/openstack/keystone/src/doc/source/admin/service-api-protection.rst
  URL: 
https://docs.openstack.org/keystone/latest/admin/service-api-protection.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2076670/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073894] Re: IPv6 dns nameservers described with their scope on the IP are not supported

2024-08-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926079
Committed: 
https://opendev.org/openstack/neutron/commit/1ed8609a6818d99133bf56483adb9bce8c886fd6
Submitter: "Zuul (22348)"
Branch:master

commit 1ed8609a6818d99133bf56483adb9bce8c886fd6
Author: Elvira García 
Date:   Fri Aug 9 18:16:59 2024 +0200

Get ips from system dns resolver without scope

Currently, is_valid_ipv6 accepts IPv6 addresses with a scope. However,
the netaddr library won't accept an address with a scope. Now,
get_noscope_ipv6() can be used to avoid this situation. In the future we
will be able to use the same function, which is also being defined in
oslo.utils. https://review.opendev.org/c/openstack/oslo.utils/+/925469

Closes-Bug: #2073894
Signed-off-by: Elvira García 
Change-Id: I27f25f90c54d7aaa3c4a7b5317b4b8a4122e4068


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073894

Title:
  IPv6 dns nameservers described with their scope on the IP are not
  supported

Status in neutron:
  Fix Released
Status in oslo.utils:
  In Progress

Bug description:
  When updating a port, we sometimes need to check DNS nameserver IPs.
  When this happens, if the DNS resolver file (resolv.conf) includes an
  address with a scope like fe80::5054:ff:fe96:8af7%eth2, oslo_utils
  is_valid_ipv6 detects this as valid IPv6 input, but netaddr will raise
  an exception since this is not strictly just the IPv6 address, and
  therefore the port update fails with a raised exception and the port
  is deleted.
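
  The mismatch can be reproduced directly (runnable, using the address
  from this report):

    import netaddr
    from oslo_utils import netutils

    addr = 'fe80::5054:ff:fe96:8af7%eth2'
    print(netutils.is_valid_ipv6(addr))  # True: the scope is accepted
    netaddr.IPAddress(addr)              # raises AddrFormatError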

  In a normal scenario, this means that the metadata port cannot be
  spawned and therefore no VMs can be properly configured using
  metadata.

  [resolv.conf example]
  # Generated by NetworkManager
  nameserver 10.0.0.1
  nameserver fe80::5054:ff:fe96:8af7%eth2
  nameserver 2620:52:0:13b8::fe

  This was found on an environment using Train, but affects every
  version.

  100% reproducible: just try to spawn a VM in an environment with a
resolv.conf similar to the example.
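
  A hedged sketch of the scope-stripping approach the fix describes (the
  helper body here is an illustration, not the exact neutron/oslo.utils
  code):

  ```python
  import netaddr

  def get_noscope_ipv6(address):
      # 'fe80::5054:ff:fe96:8af7%eth2' -> 'fe80::5054:ff:fe96:8af7'
      return address.split('%', 1)[0]

  scoped = 'fe80::5054:ff:fe96:8af7%eth2'
  # netaddr.IPAddress(scoped) would raise AddrFormatError on the scoped
  # form; stripping the scope first lets the nameserver check proceed.
  print(netaddr.IPAddress(get_noscope_ipv6(scoped)))
  ```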

  Traceback found on controller logs:
  https://paste.opendev.org/show/bzqgpsJRifX0uovHw5nJ/

  From the compute logs we see the metadata port was deleted after the
  exception:

  2024-07-18 04:38:06.036 49524 DEBUG
  networking_ovn.agent.metadata.agent [-] There is no metadata port for
  network 75b73d16-cb05-42d1-84c5-19eccf3a252d or it has no MAC or IP
  addresses configured, tearing the namespace down if needed
  _get_provision_params /usr/lib/python3.6/site-
  packages/networking_ovn/agent/metadata/agent.py:474

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073894/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2077351] Re: "Error formatting log line" sometimes seen in l3-agent log

2024-08-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/926566
Committed: 
https://opendev.org/openstack/neutron/commit/b9ca288a5d387acf01464e80b3d8b7b42ce9a9ae
Submitter: "Zuul (22348)"
Branch:master

commit b9ca288a5d387acf01464e80b3d8b7b42ce9a9ae
Author: Brian Haley 
Date:   Mon Aug 19 13:48:55 2024 -0400

Log a warning if pid file could not be read in l3-agent

A formatting error can sometimes be seen in the l3-agent
log while spawning the state change monitor if the pid
file is empty. Log a warning to that effect instead so
an admin is aware in case there is an issue observed
with the router.

Closes-bug: #2077351
Change-Id: Ic599c2419ca204a5e10654cb4bef66e6770cbcd7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2077351

Title:
  "Error formatting log line" sometimes seen in l3-agent log

Status in neutron:
  Fix Released

Bug description:
  While looking at another issue, I noticed this error in the l3-agent
  log:

  Aug 19 14:53:06.339086 np0038216951 neutron-keepalived-state-change[103857]: 
INFO neutron.common.config [-] Logging enabled!
  Aug 19 14:53:06.339318 np0038216951 neutron-keepalived-state-change[103857]: 
INFO neutron.common.config [-] 
/opt/stack/data/venv/bin/neutron-keepalived-state-change version 
25.0.0.0b2.dev120
  Aug 19 14:53:06.339942 np0038216951 neutron-keepalived-state-change[103857]: 
DEBUG neutron.common.config [-] command line: 
/opt/stack/data/venv/bin/neutron-keepalived-state-change 
--router_id=2a06f3a4-8964-4200-97e8-a9d635f31fba 
--namespace=qrouter-2a06f3a4-8964-4200-97e8-a9d635f31fba 
--conf_dir=/opt/stack/data/neutron/ha_confs/2a06f3a4-8964-4200-97e8-a9d635f31fba
 
--log-file=/opt/stack/data/neutron/ha_confs/2a06f3a4-8964-4200-97e8-a9d635f31fba/neutron-keepalived-state-change.log
 --monitor_interface=ha-b1ac3293-17 --monitor_cidr=169.254.0.132/24 
--pid_file=/opt/stack/data/neutron/external/pids/2a06f3a4-8964-4200-97e8-a9d635f31fba.monitor.pid.neutron-keepalived-state-change-monitor
 --state_path=/opt/stack/data/neutron --user=1001 --group=1001 {{(pid=103857) 
setup_logging /opt/stack/neutron/neutron/common/config.py:123}}
  Aug 19 14:53:06.352377 np0038216951 neutron-l3-agent[62158]: ERROR 
neutron.agent.linux.utils [-] Unable to convert value in 
/opt/stack/data/neutron/external/pids/2a06f3a4-8964-4200-97e8-a9d635f31fba.monitor.pid.neutron-keepalived-state-change-monitor
  Aug 19 14:53:06.352377 np0038216951 neutron-l3-agent[62158]: DEBUG 
neutron.agent.l3.ha_router [-] Error formatting log line msg='Router 
%(router_id)s %(process)s pid %(pid)d' err=TypeError('%d format: a real number 
is required, not NoneType') {{(pid=62158) spawn_state_change_monitor 
/opt/stack/neutron/neutron/agent/l3/ha_router.py:453}}

  The code in question is printing the PID as %(pid)d so when it is None
  it generates a TypeError.

  I think in this case the pid file has simply not been written yet and
  the process is still spawning, so we should print a warning to that
  effect. That way if the admin does see an issue with that router there
  is something to indicate why.
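
  A hedged sketch of the defensive read this implies (helper names are
  illustrative, not the actual l3-agent code):

  ```python
  import logging

  LOG = logging.getLogger(__name__)

  def read_pid(pid_file):
      try:
          with open(pid_file) as f:
              return int(f.read().strip())
      except (OSError, ValueError):
          # Empty or missing file: the monitor may still be spawning.
          return None

  def log_monitor_spawned(router_id, pid_file):
      pid = read_pid(pid_file)
      if pid is None:
          LOG.warning('Could not read pid file %s for router %s yet',
                      pid_file, router_id)
      else:
          LOG.debug('Router %s state change monitor pid %d',
                    router_id, pid)
  ```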

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2077351/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2055784] Re: Resource MEMORY_MB Unable to retrieve providers information

2024-08-22 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/926160
Committed: 
https://opendev.org/openstack/horizon/commit/30888edfd52bfaadad241aa2fcdf44151d0aed96
Submitter: "Zuul (22348)"
Branch:master

commit 30888edfd52bfaadad241aa2fcdf44151d0aed96
Author: Tatiana Ovchinnikova 
Date:   Mon Aug 12 14:40:45 2024 -0500

Fix Placement statistics display

For some inventories MEMORY_MB and DISK_GB are optional,
so we need to check before displaying them.

Closes-Bug: #2055784
Change-Id: I2ef63caf72f0f8f72fe8af87b21742088221578c


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2055784

Title:
  Resource MEMORY_MB Unable to retrieve providers information

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Hello
  When I click on admin -> compute -> hypervisor I get an error and nothing is 
displayed in the resource providers summary.
  This happens because I have an environment in which I have activated PCI 
passthrough in nova, and the devices are tracked in placement: 
https://docs.openstack.org/nova/2023.1/admin/pci-passthrough.html#pci-tracking-in-placement

  This inventory doesn't have MEMORY_MB or DISK_GB (I looked at 
https://opendev.org/openstack/horizon/src/branch/master/openstack_dashboard/api/placement.py#L117-L127)

  e.g. one compute node where I have activated PCI passthrough:
  ```
  (openstack) [osc@ansible-3 ~]$ openstack resource provider show 
1fe9d32f-43cd-445a-8a49-a68f9ff5158f
  +--+--+
  | Field| Value|
  +--+--+
  | uuid | 1fe9d32f-43cd-445a-8a49-a68f9ff5158f |
  | name | compute-19_:04:00.0  |
  | generation   | 44   |
  | root_provider_uuid   | e9cb6a8d-e638-4245-bf79-981211c5a232 |
  | parent_provider_uuid | e9cb6a8d-e638-4245-bf79-981211c5a232 |
  +--+--+
  (openstack) [osc@ansible-3 ~]$ openstack resource provider usage show  
1fe9d32f-43cd-445a-8a49-a68f9ff5158f
  +--+---+
  | resource_class   | usage |
  +--+---+
  | CUSTOM_PCI_10DE_2330 | 1 |
  +--+---+
  ```

  One simple compute node shows as
  ```
  (openstack) [osc@ansible-3 ~]$ openstack resource provider usage show  
f7af998e-1563-4a55-9145-4ee5f527d12b
  +++
  | resource_class |  usage |
  +++
  | VCPU   |412 |
  | MEMORY_MB  | 824320 |
  | DISK_GB|  0 |
  +++

  ```
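
  A minimal sketch of the defensive lookup the fix implies (the usage
  dict shape is an assumption, not the exact placement API payload):

  ```python
  # PCI-only provider: no MEMORY_MB or DISK_GB in its usages.
  usages = {'CUSTOM_PCI_10DE_2330': 1}

  # Treat MEMORY_MB and DISK_GB as optional instead of assuming they
  # exist, which is what broke the resource providers summary view.
  memory_used = usages.get('MEMORY_MB', 0)
  disk_used = usages.get('DISK_GB', 0)
  print(memory_used, disk_used)  # 0 0 rather than a KeyError
  ```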

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2055784/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2072978] Re: Show some error in logs when failing to load nb connection certificate

2024-08-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/924059
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/efd63d1721742400e7ba2c0bfc55249ef15fc549
Submitter: "Zuul (22348)"
Branch:master

commit efd63d1721742400e7ba2c0bfc55249ef15fc549
Author: Chris Buggy 
Date:   Mon Jul 29 15:16:30 2024 +0200

Error log for missing certs with NB and SB DBs

When the ovn-provider starts up,
it attempts to connect to the NB and SB databases
by retrieving SSL and cert files.
To avoid errors, the code will now check if these
files exist before using them.
If the files are missing,
connections will be skipped and an error message
will be displayed in the logs.

Refactoring the _check_and_set_ssl_files method to be
public and reusable. It will now check whether a
string value is set, verify the path, and
LOG an error message if it is not found.

Adding unit tests for ovsdb_monitor to bring up test coverage.
Updated ovsdb_tests to improve code.

Closes-Bug: #2072978
Change-Id: I2a21b94fee03767a5f703486bdab2908cda18746


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2072978

Title:
  Show some error in logs when failing to load nb connection certificate

Status in neutron:
  Fix Released

Bug description:
  When the ovn-provider (api or driver-agent) starts up, it should
  connect to the OVN NB/SB db using certificates if they are configured
  in the config file. Currently, if any of those files are not found,
  the connection is skipped and no message is shown in the logs.
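
  A minimal sketch of the existence check the fix adds (the paths and
  helper name are assumptions, not the actual ovn-octavia-provider code):

  ```python
  import logging
  import os

  LOG = logging.getLogger(__name__)

  def ssl_files_present(*paths):
      missing = [p for p in paths if p and not os.path.exists(p)]
      for path in missing:
          LOG.error('SSL/certificate file %s not found; skipping OVN '
                    'NB/SB connection', path)
      return not missing

  if ssl_files_present('/etc/ovn/key.pem', '/etc/ovn/cert.pem',
                       '/etc/ovn/cacert.pem'):
      pass  # safe to build the OVN DB connection here
  ```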

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2072978/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076328] Re: SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links fails sporadically

2024-08-21 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/926201
Committed: 
https://opendev.org/openstack/neutron-tempest-plugin/commit/0274381d31b7a5e6dff7a8e3ce8ff53d5c97d443
Submitter: "Zuul (22348)"
Branch:master

commit 0274381d31b7a5e6dff7a8e3ce8ff53d5c97d443
Author: yatinkarel 
Date:   Tue Aug 13 18:14:37 2024 +0530

Filter resources in pagination tests to avoid random failures

When running tempest with higher concurrency, pagination tests
randomly fail because the returned resources also include resources
created by other concurrent tests.
Filtering the returned results by name should help.

Closes-Bug: #2076328
Change-Id: I72de57cc382bb06606187c62b51ebb613f76291c


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2076328

Title:
  SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links
  fails sporadically

Status in neutron:
  Fix Released

Bug description:
  
neutron_tempest_plugin.api.test_subnets.SubnetsSearchCriteriaTest.test_list_pagination_page_reverse_with_href_links
  fails from time to time with something like this:

  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/test_subnets.py",
 line 62, in test_list_pagination_page_reverse_with_href_links
  self._test_list_pagination_page_reverse_with_href_links()
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1413, in inner
  return f(self, *args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1404, in inner
  return f(self, *args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1629, in _test_list_pagination_page_reverse_with_href_links
  self.assertSameOrder(expected_resources, reversed(resources))
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/api/base.py",
 line 1441, in assertSameOrder
  self.assertEqual(len(original), len(actual))
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 419, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/testtools/testcase.py",
 line 509, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 7 != 8

  Similar bug for pagination from the past:
  https://bugs.launchpad.net/neutron/+bug/1881311
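
  A hedged sketch of the name-filtering idea from the fix (the prefix
  and list shape are assumptions, not the exact plugin code):

  ```python
  def filter_own_resources(subnets, name_prefix):
      # Keep only resources created by this test so resources from
      # concurrent tests cannot change the expected page counts.
      return [s for s in subnets if s['name'].startswith(name_prefix)]

  subnets = [{'name': 'pagination-test-1'}, {'name': 'other-test-9'}]
  print(filter_own_resources(subnets, 'pagination-test'))
  # [{'name': 'pagination-test-1'}]
  ```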

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2076328/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2072754] Re: Restarting octavia breaks IPv4 Load Balancers with health checks

2024-08-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/923196
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/ae1540bb1a04464c7065e542ec5e981947247f3b
Submitter: "Zuul (22348)"
Branch:master

commit ae1540bb1a04464c7065e542ec5e981947247f3b
Author: Vasyl Saienko 
Date:   Mon Jul 1 10:37:14 2024 +0300

Maintenance task: do not change IPv4 ip_port_mappings

IPv4 port mappings would get cleared by format_ip_port_mappings_ipv6(),
breaking load balancers with health monitors.

Change-Id: Ia29fd3c533b40f6eb13278a163ebb95465d77a99
Closes-Bug: #2072754
Co-Authored-By: Pierre Riteau 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2072754

Title:
  Restarting octavia breaks IPv4 Load Balancers with health checks

Status in neutron:
  Fix Released

Bug description:
  After implementing IPv6 health check support in change #919229 for the
  ovn-octavia-provider, it appears that the maintenance task is
  inadvertently deleting the `ip_port_mappings` of IPv4 load balancers.
  This issue results in the load balancers ceasing to function upon the
  restart of Octavia.

  I found this as a potential fix for this issue: [Proposed
  Fix](https://review.opendev.org/c/openstack/ovn-octavia-
  provider/+/923196).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2072754/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2069482] Re: [OVN] VMs cannot access metadata when connected to a network with only IPv6 subnets

2024-08-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/922264
Committed: 
https://opendev.org/openstack/neutron/commit/f7000f3d57bc59732522c4943d6ff2e9dfcf7d31
Submitter: "Zuul (22348)"
Branch:master

commit f7000f3d57bc59732522c4943d6ff2e9dfcf7d31
Author: Miguel Lavalle 
Date:   Tue Jun 18 19:36:13 2024 -0500

Fix support of IPv6 only networks in OVN metadata agent

When an IPv6 only network is used as the sole network for a VM and
there are no other bound ports on the same network in the same chassis,
the OVN metadata agent concludes that the associated namespace is not
needed and deletes it. As a consequence, the VM cannot access the
metadata service. With this change, the namespace is preserved if there
is at least one bound port on the chassis with either IPv4 or IPv6
addresses.

Closes-Bug: #2069482

Change-Id: Ie15c3344161ad521bf10b98303c7bb730351e2d8


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2069482

Title:
  [OVN] VMs cannot access metadata when connected to a network with only
  IPv6 subnets

Status in neutron:
  Fix Released

Bug description:
  VMs cannot access the metadata service when connected to a network
  with only IPv6 subnets.

  Neutron branch: master

  Steps to reproduce:

  1) Create a network with a single IPv6 subnet:

  $ openstack network create ipv6-net-dhcpv6-slaac
  $ openstack subnet create --subnet-range fdba:e036:9e22::/64 --ip-version 6 
--gateway fdba:e036:9e22::1 --ipv6-ra-mode slaac --ipv6-address-mode slaac 
--network ipv6-net-dhcpv6-slaac ipv6-subnet-dhcpv6-slaac

  2) Create a VM using this network:

  $ openstack server create --key-name my_key --flavor m1.small --image
  ubuntu-20.04-minimal-cloudimg-amd64 --network ipv6-net-dhcpv6-slaac
  --security-group sg1 my-vm-slaac

  3) The following message is added to the metadata agent log file:

  Jun 14 22:00:32 central neutron-ovn-metadata-agent[89379]: DEBUG
  neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for
  network 191a0539-edbc-4037-b973-dfa77e3208f6, tearing the namespace
  down if needed {{(pid=89379) _get_provision_params
  /opt/stack/neutron/neutron/agent/ovn/metadata/agent.py:720}}

  which is produced here:

  
https://github.com/openstack/neutron/blob/79b2d709c80217830fed8ad73dcf6fbd3eea91b4/neutron/agent/ovn/metadata/agent.py#L719-L723

  4) When an IPv4 subnet is added to the network and the VM is
  recreated, the metadata service is accessible to it over IPv6:

  $ openstack subnet create --network ipv6-net-dhcpv6-slaac 
ipv4-subnet-dhcpv6-slaac --subnet-range 10.2.0.0/24
  $ openstack server delete my-vm-slaac
  $ openstack server create --key-name my_key --flavor m1.small --image 
ubuntu-20.04-minimal-cloudimg-amd64 --network ipv6-net-dhcpv6-slaac 
--security-group sg1 my-vm-slaac

  From the VM:

  ubuntu@my-vm-slaac:~$ curl http://[fe80::a9fe:a9fe%ens3]
  1.0
  2007-01-19
  2007-03-01
  2007-08-29
  2007-10-10
  2007-12-15
  2008-02-01
  2008-09-01
  2009-04-04
  latest

  ubuntu@my-vm-slaac:~$ curl http://[fe80::a9fe:a9fe%ens3]/openstack
  2012-08-10
  2013-04-04
  2013-10-17
  2015-10-15
  2016-06-30
  2016-10-06
  2017-02-22
  2018-08-27
  2020-10-14
  latest

  
  How reproducible: 100%
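
  As a hedged illustration of the fix's intent (function and field names
  are assumptions, not the actual OVN metadata agent code), the namespace
  should be preserved whenever any bound port on the chassis has
  addresses, regardless of IP family:

  ```python
  def namespace_needed(bound_ports):
      # Keep the namespace if at least one bound port on this chassis
      # has any fixed IP, whether IPv4 or IPv6.
      return any(port.get('fixed_ips') for port in bound_ports)

  print(namespace_needed([{'fixed_ips': ['fdba:e036:9e22::5']}]))  # True
  print(namespace_needed([]))                                      # False
  ```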

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2069482/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1860555] Re: PCI passthrough reschedule race condition

2024-08-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926407
Committed: 
https://opendev.org/openstack/nova/commit/f8b98390dc99f6cb0101c88223eb840e0d1c7124
Submitter: "Zuul (22348)"
Branch:master

commit f8b98390dc99f6cb0101c88223eb840e0d1c7124
Author: Balazs Gibizer 
Date:   Thu Aug 15 13:06:39 2024 +0200

Fix PCI passthrough cleanup on reschedule

The resource tracker Claim object works on a copy of the instance object
obtained from the compute manager. But the PCI claim logic does not use
the copy; it uses the original instance object. However, the abort claim
logic, including the abort PCI claim logic, worked on the copy only.
Therefore the claimed PCI devices are visible to the compute manager in
the instance.pci_devices list even after the claim is aborted.

There was another bug in the PCIDevice object where the instance object
wasn't passed to the free() function and therefore the
instance.pci_devices list wasn't updated when the device was freed.

Closes-Bug: #1860555
Change-Id: Iff343d4d78996cd17a6a584fefa7071c81311673


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1860555

Title:
  PCI passthrough reschedule race condition

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Steps to reproduce
  --

  Create multiple instances concurrently using a flavor with a PCI
  passthrough request (--property
  "pci_passthrough:alias"=":"), and a scheduler hint with
  some anti-affinity constraint.

  Expected result
  ---

  The instances are created successfully, and each have the expected
  number of PCI devices attached.

  Actual result
  -

  Sometimes, instances may fail during creation, or may be created with
  more PCI devices than requested.

  Environment
  ---

  Nova 18.2.2 (rocky), CentOS 7, libvirt, deployed by kolla-ansible.

  Analysis
  

  If an instance with PCI passthrough devices is rescheduled (e.g. due to
  affinity violation), the instance can end up with extra PCI devices attached.
  If the devices selected on the original and subsequent compute nodes have the
  same address, the instance will fail to create, with the following error:

  libvirtError: internal error: Device :89:00.0 is already in use

  However, if the devices are different, and all available on the first and
  second compute nodes, the VM may end up with additional hostdevs.

  On investigation, when the node is rescheduled, the instance object passed to
  the conductor RPC API contains the PCI devices that should have been freed.
  This is because the claim object holds a clone of the instance that is used to
  perform the abort on failure [1][2], and the PCI devices removed from its 
list are not
  reflected in the original object. There is a secondary issue that the PCI
  manager was not passing through the instance to the PCI object's free() method
  in all cases [3], resulting in the PCI device not being removed from the
  instance.pci_devices list.

  I have two alternative fixes for this issue, but they will need a
  little time to work their way out of an organisation. Essentially:

  1. pass the original instance (not the clone) to the abort function in the 
Claim.
  2. refresh the instance from DB when rescheduling

  The former is a more general solution, but I don't know the reasons
  for using a clone in the first place. The second works for
  reschedules, but may leave a hole for resize or migration. I haven't
  reproduced the issue in those cases but it seems possible that it
  would be present.

  [1] 
https://opendev.org/openstack/nova/src/branch/master/nova/compute/claims.py#L64
  [2] 
https://opendev.org/openstack/nova/src/branch/master/nova/compute/claims.py#L83
  [3] 
https://opendev.org/openstack/nova/src/branch/master/nova/pci/manager.py#L309
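
  A minimal sketch of the cloning pitfall described above (class and
  attribute names are illustrative, not nova's actual objects):

  ```python
  import copy

  class Instance:
      def __init__(self):
          self.pci_devices = ['0000:89:00.0']

  manager_instance = Instance()
  claim_instance = copy.deepcopy(manager_instance)

  # The abort path frees the device on the clone only...
  claim_instance.pci_devices.clear()

  # ...so the compute manager still sees it as claimed and passes the
  # stale list along on reschedule.
  print(manager_instance.pci_devices)  # ['0000:89:00.0']
  ```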

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1860555/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2075959] Re: NUMATopologyFilter pagesize logs are misleading

2024-08-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/926223
Committed: 
https://opendev.org/openstack/nova/commit/4678bcbb064da580500b1dbeddb0bdfdeac074ef
Submitter: "Zuul (22348)"
Branch:master

commit 4678bcbb064da580500b1dbeddb0bdfdeac074ef
Author: Stephen Finucane 
Date:   Tue Aug 13 17:24:31 2024 +0100

hardware: Correct log

We currently get the following error message if attempting to fit a
guest with hugepages on a node that doesn't have enough:

  Host does not support requested memory pagesize, or not enough free
  pages of the requested size. Requested: -2 kB

Correct this, removing the kB suffix and adding a note on the meaning of
the negative values, like we have for the success path.

Change-Id: I247dc0ec03cd9e5a7b41f5c5534bdfb1af550029
Signed-off-by: Stephen Finucane 
Closes-Bug: #2075959


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2075959

Title:
  NUMATopologyFilter pagesize logs are misleading

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the instance requests mem pages via symbolic names (e.g. "large"
  instead of specifying the exact size) and the instance does not fit
  into a NUMA cell due to the memory requirements, the nova logs are
  confusing:

  ./nova-scheduler-scheduler.log:2024-07-31 23:37:28.428 1 DEBUG
  nova.virt.hardware [None req-c3efb10b-641c-4066-a569-206226315366
  f05a486d957b4e6082293ce5e707009d 8c8a6763e6924cd3a94427af5f8ef6ee - -
  default default] Host does not support requested memory pagesize, or
  not enough free pages of the requested size. Requested: -2 kB
  _numa_fit_instance_cell /usr/lib/python3.9/site-
  packages/nova/virt/hardware.py:944

  This happens because the symbolic name is translated to a negative
  integer placeholder inside nova. So when the field is printed it
  should be translated back to the symbolic name instead.

  
  
https://github.com/openstack/nova/blob/bb2d7f9cad577f3a32cb9523e2b1d9a6d6db3407/nova/virt/hardware.py#L943-L946
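
  A hedged sketch of the translation the report suggests (the sentinel
  constants are restated here for illustration; nova keeps similar
  internal values):

  ```python
  MEMPAGES_SMALL, MEMPAGES_LARGE, MEMPAGES_ANY = -1, -2, -3

  _SYMBOLIC = {MEMPAGES_SMALL: 'small',
               MEMPAGES_LARGE: 'large',
               MEMPAGES_ANY: 'any'}

  def pagesize_for_log(pagesize):
      # Map negative placeholders back to their symbolic names so the
      # log reads "Requested: large" instead of "Requested: -2 kB".
      if pagesize < 0:
          return _SYMBOLIC.get(pagesize, str(pagesize))
      return '%d kB' % pagesize

  print(pagesize_for_log(-2))    # large
  print(pagesize_for_log(2048))  # 2048 kB
  ```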

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2075959/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938571] Re: vpnaas problem:ipsec pluto not running centos 8 victoria wallaby

2024-08-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-vpnaas/+/895824
Committed: 
https://opendev.org/openstack/neutron-vpnaas/commit/8e8f3b5a1d0108771d712b699e87839146a3
Submitter: "Zuul (22348)"
Branch:master

commit 8e8f3b5a1d0108771d712b699e87839146a3
Author: Bodo Petermann 
Date:   Tue Sep 19 15:58:56 2023 +0200

Support for libreswan 4

With libreswan 4 some command line options changed, the rundir is now
/run/pluto instead of /var/run/pluto, and nat_traversal must not be set
in ipsec.conf.
Adapt the libreswan device driver accordingly.
Users will require libreswan v4.0 or higher; compatibility with v3.x is
not maintained.

Closes-Bug: #1938571
Change-Id: Ib55e3c3f9cfbe3dfe1241ace8c821256d7fc174a


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1938571

Title:
  vpnaas problem:ipsec pluto not running centos 8 victoria wallaby

Status in neutron:
  Fix Released

Bug description:
  Hello.
  I apologize if I don't explain the bug properly.
  I am using CentOS 8 and I installed OpenStack with kolla-ansible. Whether it is 
Ussuri, Victoria or Wallaby, when establishing the connection between the 2 
networks (with vpnaas), the error message is as follows:
  "ipsec whack --status" (no "/run/pluto/pluto.ctl")

  The problem is present with Libreswan version 4.X, which does not 
include the option "--use-netkey" used by the ipsec pluto command.
  This option was present in Libreswan 3.X.
  So the command "ipsec pluto" failed, hence no "/run/pluto/pluto.ctl".

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1938571/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2070486] Re: XStatic-JQuery.quicksearch is not updated in horizon

2024-08-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/926134
Committed: 
https://opendev.org/openstack/horizon/commit/fd1fa88c680d2068eb47ecfa8dbfd74caf194140
Submitter: "Zuul (22348)"
Branch:master

commit fd1fa88c680d2068eb47ecfa8dbfd74caf194140
Author: manchandavishal 
Date:   Mon Aug 12 17:12:06 2024 +0530

Update XStatic-JQuery.quicksearch min. version to include latest CVE fix

This patch updates XStatic-JQuery.quicksearch minimum version to ensure
the latest security vulnerabilities are addressed.

Closes-Bug: 2070486
Change-Id: Id8d00b325ad563ca7c720c758f4da928fed176cd


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2070486

Title:
  XStatic-JQuery.quicksearch is not updated in horizon

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently, horizon is using XStatic-JQuery.quicksearch version 2.0.3.1, which 
is very old and doesn't include the latest bug fixes. It was released in May 
2014.
  We should use the latest version of XStatic-JQuery.quicksearch, 2.0.3.2 [1].

  [1] https://pypi.org/project/XStatic-JQuery.quicksearch/#history

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2070486/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2034035] Re: neutron allowed address pair with same ip address causes ValueError

2024-08-15 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/893650
Committed: 
https://opendev.org/openstack/horizon/commit/9c75ebba01cc58c77a7114226ebaeedbe033962a
Submitter: "Zuul (22348)"
Branch:master

commit 9c75ebba01cc58c77a7114226ebaeedbe033962a
Author: Tobias Urdin 
Date:   Mon Sep 4 13:03:15 2023 +

Fix allowed address pair row unique ID

This fixes the ID for the allowed
address pair rows so that it is unique when rows share the
same ip_address range but have different
mac_address values.

Closes-Bug: 2034035
Change-Id: I49e84568ef7cfbc1547258305f2101bffe5bdea5


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2034035

Title:
  neutron allowed address pair with same ip address causes ValueError

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When managing allowed address pairs in horizon for a neutron port,
  if you create two entries with an identical ip_address but different
  mac_address values, horizon crashes because the row id in the table
  is the same; see the traceback below.

  The solution is to concatenate the mac_address, if set, into the ID
  for that row.
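
  A minimal sketch of that concatenation (dict keys mirror the bug
  report; this is not the exact horizon code):

  ```python
  def row_id(pair):
      # Include the MAC address, when set, so two rows with the same
      # ip_address range stay distinguishable in the table.
      if pair.get('mac_address'):
          return '%s:%s' % (pair['ip_address'], pair['mac_address'])
      return pair['ip_address']

  a = {'ip_address': '10.0.0.0/24', 'mac_address': 'fa:16:3e:aa:bb:cc'}
  b = {'ip_address': '10.0.0.0/24', 'mac_address': 'fa:16:3e:dd:ee:ff'}
  assert row_id(a) != row_id(b)
  ```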

  Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/django/core/handlers/exception.py", 
line 47, in inner
  response = get_response(request)
File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 
181, in _get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 51, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 35, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 35, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 111, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/decorators.py", line 83, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/views/generic/base.py", line 
70, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django/views/generic/base.py", line 
98, in dispatch
  return handler(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/tabs/views.py", line 156, in 
post
  return self.get(request, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/horizon/tabs/views.py", line 135, in 
get
  handled = self.handle_table(self._table_dict[table_name])
File "/usr/lib/python3.6/site-packages/horizon/tabs/views.py", line 116, in 
handle_table
  handled = tab._tables[table_name].maybe_handle()
File "/usr/lib/python3.6/site-packages/horizon/tables/base.py", line 1802, 
in maybe_handle
  return self.take_action(action_name, obj_id)
File "/usr/lib/python3.6/site-packages/horizon/tables/base.py", line 1644, 
in take_action
  response = action.multiple(self, self.request, obj_ids)
File "/usr/lib/python3.6/site-packages/horizon/tables/actions.py", line 
305, in multiple
  return self.handle(data_table, request, object_ids)
File "/usr/lib/python3.6/site-packages/horizon/tables/actions.py", line 
760, in handle
  datum = table.get_object_by_id(datum_id)
File "/usr/lib/python3.6/site-packages/horizon/tables/base.py", line 1480, 
in get_object_by_id
  % matches)
  ValueError: Multiple matches were returned for that id: 
[, ].

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2034035/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2047132] Re: floating ip on inactive port not shown in Horizon UI floating ip details

2024-08-14 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/904172
Committed: 
https://opendev.org/openstack/horizon/commit/53c82bbff75f646654585f66f666cfd1f1b53987
Submitter: "Zuul (22348)"
Branch:master

commit 53c82bbff75f646654585f66f666cfd1f1b53987
Author: Tobias Urdin 
Date:   Thu Dec 21 11:36:26 2023 +0100

Fix floating IP associated to unbound port

This fixes a bug where a floating IP associated with an
unbound port would not show the fixed IP of that port.

Closes-Bug: 2047132
Change-Id: I4fbbcc4c0509e74ce3c46fa55e006c5bc3837be3


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2047132

Title:
  floating ip on inactive port not shown in Horizon UI floating ip
  details

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When setting up a port that is not bound and assigning a Floating IP
  (FIP) to it, the FIP gets associated but the Horizon UI does not show
  the IP of the port; instead it shows a "-".

  The terraform/tofu snippet for the setup:

  resource "openstack_networking_floatingip_associate_v2" "fip_1" {
floating_ip = 
data.openstack_networking_floatingip_v2.fip_1.address
port_id = openstack_networking_port_v2.port_vip.id
  }
  resource "openstack_networking_port_v2" "port_vip" {
name   = "port_vip"
network_id = 
data.openstack_networking_network_v2.network_1.id
fixed_ip {
  subnet_id  = 
data.openstack_networking_subnet_v2.subnet_1.id
  ip_address = "192.168.56.30"
}
  }

  Example from UI :

185.102.215.242 floatit 
stack1-config-barssl-3-hostany-bootstrap-1896c992-3e17-4fab-b084-bb642c517cbe 
192.168.56.20 europe-se-1-1a-net0 Active  
193.93.250.171  -   europe-se-1-1a-net0 Active  

  The top one is a port that is assigned to a host and looks as
  expected; the second is not, and corresponds to the terraform snippet
  (it is being used as an internal floating IP for load balancing).

  Expected is to see the IP 192.168.56.30 that is set at creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2047132/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076430] Re: rally job broken with recent keystone change and fails with ValueError: Cannot convert datetime.date(2024, 8, 7) to primitive

2024-08-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/oslo.serialization/+/926172
Committed: 
https://opendev.org/openstack/oslo.serialization/commit/f6e879db55465e6d5f17f054ed2757cbfcfc43bc
Submitter: "Zuul (22348)"
Branch:master

commit f6e879db55465e6d5f17f054ed2757cbfcfc43bc
Author: yatinkarel 
Date:   Tue Aug 13 11:35:05 2024 +0530

[jsonutils] Add handling of datetime.date format

A recent patch from keystone[1] does not work when
osprofiler is enabled, as osprofiler calls jsonutils.dumps
and datetime.date is not handled, so it fails.
This patch adds the handling for it.

[1] https://review.opendev.org/c/openstack/keystone/+/924892

Needed-By: 
https://review.opendev.org/q/I1b71fb3881dc041db01083fbb4f2592400096a31
Related-Bug: #2074018
Closes-Bug: #2076430
Change-Id: Ifbcf5a1b3d42516bdf73f7ca6b2a7338f3985283
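
A hedged sketch of the added handling (simplified; oslo.serialization's
to_primitive covers many more types than shown here):

```python
import datetime

def to_primitive(value):
    # datetime.datetime is a subclass of datetime.date, so check the
    # more specific type first.
    if isinstance(value, datetime.datetime):
        return value.isoformat()
    if isinstance(value, datetime.date):
        # Previously unhandled: dumping this raised "Cannot convert
        # datetime.date(2024, 8, 7) to primitive".
        return value.isoformat()
    return value

print(to_primitive(datetime.date(2024, 8, 7)))  # 2024-08-07
```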


** Changed in: oslo.serialization
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2076430

Title:
  rally job broken with recent keystone change and fails with
  ValueError: Cannot convert datetime.date(2024, 8, 7) to primitive

Status in neutron:
  New
Status in oslo.serialization:
  Fix Released

Bug description:
  Test fails as:-
  2024-08-07 17:06:05.650302 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients [-] Unable to authenticate for user 
c_rally_927546a8_h6aDLbnK in project c_rally_927546a8_ahUUmlp9: 
keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 
500)
  2024-08-07 17:06:05.650941 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients Traceback (most recent call last):
  2024-08-07 17:06:05.651113 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/rally-openstack/rally_openstack/common/osclients.py", line 269, in 
auth_ref
  2024-08-07 17:06:05.651156 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients self.cache["keystone_auth_ref"] = 
plugin.get_access(sess)
  2024-08-07 17:06:05.651193 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/identity/base.py",
 line 131, in get_access
  2024-08-07 17:06:05.651229 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients self.auth_ref = self.get_auth_ref(session)
  2024-08-07 17:06:05.651263 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/identity/generic/base.py",
 line 205, in get_auth_ref
  2024-08-07 17:06:05.651334 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients return self._plugin.get_auth_ref(session, 
**kwargs)
  2024-08-07 17:06:05.651348 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/identity/v3/base.py",
 line 185, in get_auth_ref
  2024-08-07 17:06:05.651356 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients resp = session.post(token_url, json=body, 
headers=headers,
  2024-08-07 17:06:05.651363 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/session.py", 
line 1162, in post
  2024-08-07 17:06:05.651370 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients return self.request(url, 'POST', **kwargs)
  2024-08-07 17:06:05.651377 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/keystoneauth1/session.py", 
line 985, in request
  2024-08-07 17:06:05.651384 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients raise exceptions.from_response(resp, 
method, url)
  2024-08-07 17:06:05.651391 | controller | 2024-08-07 17:06:05.647 76504 ERROR 
rally_openstack.common.osclients 
keystoneauth1.exceptions.http.InternalServerError: Internal Server Error (HTTP 
500)

  From keyston log:-
  Aug 07 17:03:06.408072 np0038147114 devstack@keystone.service[56399]: 
CRITICAL keystone [None req-96647e8e-2585-4279-80fa-c4fa97b8c455 None None] 
Unhandled error: ValueError: Cannot convert datetime.date(2024, 8, 7) to 
primitive
  Aug 07 17:03:06.408072 np0038147114 devstack@keystone.service[56399]: ERROR 
keystone Traceback (most recent call last):
  Aug 07 17:03:06.408072 np0038147114 devstack@keystone.service[56399]: ERROR 
keystone   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/flask/app.py", line 1498, in 
__call__
  Aug 07 17:03:06.408072 np0038147114 devstack

[Yahoo-eng-team] [Bug 1981165] Re: Edit Instance - description box should not accept multi-line input

2024-08-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/914838
Committed: 
https://opendev.org/openstack/horizon/commit/55e9db65282f124041dc66cfa0d51b2901db7c29
Submitter: "Zuul (22348)"
Branch:master

commit 55e9db65282f124041dc66cfa0d51b2901db7c29
Author: flokots 
Date:   Tue Apr 2 03:04:35 2024 +0200

Add help text for edit instance form

This commit adds help text to the Edit Instance form to describe the
limitations on the text allowed in the name and description. The help text 
gives the maximum length for the instance name and description
and advises against using special characters or leading or trailing spaces. 
By providing this information, users will be better informed when modifying 
instance details, reducing the likelihood of
encountering errors.

Closes-Bug: #1981165
Change-Id: If8879c20b2842c3dd769e4cdef80834219c637cd


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1981165

Title:
  Edit Instance - description box should not accept multi-line input

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Currently, the text box for the Description is a multi-line text area.
  But if you try to enter more than one line it fails saying "Error:
  Unable to modify instance "name of instance".

  It turns out that the Nova / Nova client will reject any description
  that doesn't match a gnarly regex.  And newlines are rejected by that
  regex.  The error message you get from the CLI is like this:

  
  Invalid input for field/attribute description. Value: helloworld. 
'hello\nworld' does not match '^[\\ 
-\\~\xa0-¬®-ͷͺ-Ϳ΄-ΊΌΎ-ΡΣ-ԯԱ-Ֆՙ-֊֍-֏-א-תׯ-״؆-؛؞-۞-܍ܐ-ݍ-ޱ߀-ߺ-࠰-࠾ࡀ-࡞ࡠ-ࡪࢠ-ࢴࢶ-ࢽ--ঃঅ-ঌএ-ঐও-নপ-রলশ-হ-ে-ৈো-ৎৗড়-ঢ়য়-০--ਃਅ-ਊਏ-ਐਓ-ਨਪ-ਰਲ-ਲ਼ਵ-ਸ਼ਸ-ਹਾ---ਖ਼-ੜਫ਼੦-੶-ઃઅ-ઍએ-ઑઓ-નપ-રલ-ળવ-હ--ૉો-ૐૠ-૦-૱ૹ--ଃଅ-ଌଏ-ଐଓ-ନପ-ରଲ-ଳଵ-ହ-େ-ୈୋ--ୗଡ଼-ଢ଼ୟ-୦-୷-ஃஅ-ஊஎ-ஐஒ-கங-சஜஞ-டண-தந-பம-ஹா-ூெ-ைொ-ௐௗ௦-௺-ఌఎ-ఐఒ-నప-హఽ-ౄ---ౘ-ౚౠ-౦-౯౷-ಌಎ-ಐಒ-ನಪ-ಳವ-ಹ-ೄ-ೈೊ-ೕ-ೖೞೠ-೦-೯ೱ-ೲ-ഃഅ-ഌഎ-ഐഒ-െ-ൈൊ-൏ൔ-൦-ൿං-ඃඅ-ඖක-නඳ-රලව-ෆා-ෘ-ෟ෦-෯ෲ-෴ก-฿-๛ກ-ຂຄຆ-ຊຌ-ຣລວ-ຽເ-ໄໆ-໐-໙ໜ-ໟༀ-ཇཉ-ཬ--྾-࿌࿎-࿚က-ჅჇჍა-ቈቊ-ቍቐ-ቖቘቚ-ቝበ-ኈኊ-ኍነ-ኰኲ-ኵኸ-ኾዀዂ-ዅወ-ዖዘ-ጐጒ-ጕጘ-ፚ-፼ᎀ-᎙Ꭰ-Ᏽᏸ-ᏽ᐀-᚜ᚠ-ᛸᜀ-ᜌᜎ-ᜠ-᜶ᝀ-ᝠ-ᝬᝮ-ᝰ-ក-០-៩៰-៹᠀-᠐-᠙ᠠ-ᡸᢀ-ᢪᢰ-ᣵᤀ-ᤞ-ᤫᤰ-᥀᥄-ᥭᥰ-ᥴᦀ-ᦫᦰ-ᧉ᧐-᧚᧞-᨞---᪉᪐-᪙᪠-᪭--ᭋ᭐-᭼-᯳᯼-᰻-᱉ᱍ-ᲈᲐ-ᲺᲽ-᳇-ᳺᴀ--ἕἘ-Ἕἠ-ὅὈ-Ὅὐ-ὗὙὛὝὟ-ώᾀ-ᾴᾶ-ῄῆ-ΐῖ-Ί῝-`ῲ-ῴῶ-῾\u2000-\u200a‐-‧\u202f-\u205f⁰-ⁱ⁴-₎ₐ-ₜ₠-₿-℀-↋←-␦⑀-⑊①-⭳⭶-⮕⮘-Ⱞⰰ-ⱞⱠ-ⳳ⳹-ⴥⴧⴭⴰ-ⵧⵯ-⵰-ⶖⶠ-ⶦⶨ-ⶮⶰ-ⶶⶸ-ⶾⷀ-ⷆⷈ-ⷎⷐ-ⷖⷘ-ⷞ-⹏⺀-⺙⺛-⻳⼀-⿕⿰-⿻\u3000-〿ぁ-ゖ-ヿㄅ-ㄯㄱ-ㆎ㆐-ㆺ㇀-㇣ㇰ-㈞㈠-䶵䷀-鿯ꀀ-ꒌ꒐-꓆ꓐ-ꘫꙀ-꛷꜀-ꞿꟂ-Ᶎꟷ-꠫꠰-꠹ꡀ-꡷ꢀ-꣎-꣙-꥓꥟-ꥼ-꧍ꧏ-꧙꧞-ꧾꨀ-ꩀ-ꩍ꩐-꩙꩜-ꫂꫛ-ꬁ-ꬆꬉ-ꬎꬑ-ꬖꬠ-ꬦꬨ-ꬮꬰ-ꭧꭰ-꯰-꯹가-힣--豈-舘並-龎ff-stﬓ-ﬗיִ-זּטּ-לּמּנּ-סּףּ-פּצּ-﯁ﯓ-﴿ﵐ-ﶏﶒ-ﷇﷰ-﷽-︙-﹒﹔-﹦﹨-﹫ﹰ-ﹴﹶ-ﻼ!-하-ᅦᅧ-ᅬᅭ-ᅲᅳ-ᅵ¢-₩│-○-�]*$'

  ... which you would NOT want to show to an end user!

  Possible fixes for the problem would include:

  - Add help text to the Edit Instance form to describe the limitations on the 
text allowed in the Description.
  - Change the "look and feel" of the box to avoid giving the impression that 
multi-line descriptions are OK.
  - Change the UI to reject descriptions with bad characters ... before they get 
sent to nova / the nova client
  - Detect the specific response message and translate it into a meaningful 
user error message; e.g. something like "Error: Description contains one or 
more unacceptable characters".

  At least the first one ... please.

  This syndrome may apply to other name, description, and text
  fields in the UI.  I didn't go looking for examples.
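
  As a hedged sketch of the third option above (a standalone function for
  illustration only; horizon would do this via Django form validation,
  and the pattern here is far narrower than nova's real one):

  ```python
  import re

  # Printable ASCII only; nova's actual pattern also admits many
  # Unicode ranges but still rejects newlines.
  _SIMPLE_PATTERN = re.compile(r'^[ -~]*$')

  def validate_description(text):
      if '\n' in text or not _SIMPLE_PATTERN.match(text):
          raise ValueError('Description contains one or more '
                           'unacceptable characters.')

  validate_description('hello world')     # passes
  # validate_description('hello\nworld')  # would raise ValueError
  ```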

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1981165/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1984736] Fix included in openstack/nova 27.5.0

2024-08-09 Thread OpenStack Infra
This issue was fixed in the openstack/nova 27.5.0  release.

** Changed in: nova/antelope
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1984736

Title:
  "TypeError: catching classes that do not inherit from BaseException is
  not allowed" is raised if volume mount fails in python3

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) wallaby series:
  In Progress
Status in OpenStack Compute (nova) xena series:
  In Progress
Status in OpenStack Compute (nova) yoga series:
  In Progress
Status in OpenStack Compute (nova) zed series:
  In Progress

Bug description:
  Saw this on a downstream CI run where a volume mount failed:

  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager 
[req-67e1cef8-e30a-4a47-8010-9e966fd30fce 8882186b6a324a0e9fb6fd268d337cce 
8b290d651e9b42fd89c95b5e2a9a25fb - default default] [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Failed to attach 
5a6a5f37-0888-44b2-9456-cf087ae8c356 at /dev/vdb: TypeError: catching classes 
that do not inherit from BaseException is not allowed
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Traceback (most recent call last):
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py", line 305, 
in mount
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] nova.privsep.fs.mount(fstype, export, 
mountpoint, options)
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 247, in 
_wrap
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] return self.channel.remote_call(name, 
args, kwargs)
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 224, in 
remote_call
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] raise exc_type(*result[2])
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] 
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while 
running command.
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Command: mount -t nfs 
192.168.1.50:/vol_cinder /var/lib/nova/mnt/724dab229d80c6a1a1e49a71c8356eed
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Exit code: 32
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Stdout: ''
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Stderr: 'Failed to connect to bus: No 
data available\nmount.nfs: Operation not permitted\n'
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] 
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] During handling of the above exception, 
another exception occurred:
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] 
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] Traceback (most recent call last):
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 7023, in 
_attach_volume
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] bdm.attach(context, instance, 
self.volume_api, self.driver,
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 46, in 
wrapped
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] ret_val = method(obj, context, *args, 
**kwargs)
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177]   File 
"/usr/lib/python3.9/site-packages/nova/virt/block_device.py", line 672, in 
attach
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manager [instance: 
6a9a59d1-861d-4536-84ed-e54d817f0177] self._do_attach(context, instance, 
volume, volume_api,
  2022-07-29 11:56:57.606 2 ERROR nova.compute.manag
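
  A minimal reproduction of the quoted TypeError (illustrative only, not
  nova's code): under Python 3, evaluating an except clause whose class
  does not inherit from BaseException raises TypeError at match time.

  ```python
  class NotAnException:  # does not inherit from BaseException
      pass

  try:
      try:
          raise RuntimeError('mount failed')
      except NotAnException:  # evaluated first -> TypeError: catching
          pass                # classes that do not inherit from
                              # BaseException is not allowed
      except RuntimeError:
          pass
  except TypeError as exc:
      print(exc)
  ```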

[Yahoo-eng-team] [Bug 2073862] Fix included in openstack/nova 27.5.0

2024-08-09 Thread OpenStack Infra
This issue was fixed in the openstack/nova 27.5.0  release.

** Changed in: nova/antelope
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2073862

Title:
  test_vmdk_bad_descriptor_mem_limit and
  test_vmdk_bad_descriptor_mem_limit_stream_optimized fail if qemu-img
  binary is missing

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) 2024.1 series:
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) bobcat series:
  Fix Released

Bug description:
  When qemu-img binary is not present on the system, these tests fail
  like we can see on these logs:

  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit
  --
  pythonlogging:'': {{{
  2024-07-23 11:44:54,011 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,012 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,015 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  }}}

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 408, in test_vmdk_bad_descriptor_mem_limit
  self._test_vmdk_bad_descriptor_mem_limit()
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 382, in _test_vmdk_bad_descriptor_mem_limit
  img = self._create_allocated_vmdk(image_size // units.Mi,
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 183, in _create_allocated_vmdk
  subprocess.check_output(
File "/usr/lib/python3.10/subprocess.py", line 421, in check_output
  return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.10/subprocess.py", line 526, in run
  raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command 'qemu-img convert -f raw -O vmdk -o 
subformat=monolithicSparse -S 0 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-wz0i4kj1.raw 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-qpo78jee.vmdk' 
returned non-zero exit status 127.


  
  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit_stream_optimized
  --
  pythonlogging:'': {{{
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to

[Yahoo-eng-team] [Bug 2073862] Fix included in openstack/nova 28.3.0

2024-08-09 Thread OpenStack Infra
This issue was fixed in the openstack/nova 28.3.0  release.

** Changed in: nova/bobcat
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2073862

Title:
  test_vmdk_bad_descriptor_mem_limit and
  test_vmdk_bad_descriptor_mem_limit_stream_optimized fail if qemu-img
  binary is missing

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) 2024.1 series:
  Fix Released
Status in OpenStack Compute (nova) antelope series:
  Fix Released
Status in OpenStack Compute (nova) bobcat series:
  Fix Released

Bug description:
  When qemu-img binary is not present on the system, these tests fail
  like we can see on these logs:

  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit
  --
  pythonlogging:'': {{{
  2024-07-23 11:44:54,011 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,012 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,015 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  }}}

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 408, in test_vmdk_bad_descriptor_mem_limit
  self._test_vmdk_bad_descriptor_mem_limit()
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 382, in _test_vmdk_bad_descriptor_mem_limit
  img = self._create_allocated_vmdk(image_size // units.Mi,
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 183, in _create_allocated_vmdk
  subprocess.check_output(
File "/usr/lib/python3.10/subprocess.py", line 421, in check_output
  return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.10/subprocess.py", line 526, in run
  raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command 'qemu-img convert -f raw -O vmdk -o 
subformat=monolithicSparse -S 0 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-wz0i4kj1.raw 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-qpo78jee.vmdk' 
returned non-zero exit status 127.


  
  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit_stream_optimized
  --
  pythonlogging:'': {{{
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to Y

[Yahoo-eng-team] [Bug 2035375] Re: Detaching multiple NVMe-oF volumes may leave the subsystem in connecting state

2024-08-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/895192
Committed: 
https://opendev.org/openstack/nova/commit/18163761d02fc02d5484f91bf52cd4f25536f95e
Submitter: "Zuul (22348)"
Branch:master

commit 18163761d02fc02d5484f91bf52cd4f25536f95e
Author: Gorka Eguileor 
Date:   Tue Sep 12 20:53:15 2023 +0200

Fix guard for NVMeOF volumes

When detaching multiple NVMe-oF volumes from the same host we may end
up with an NVMe subsystem in "connecting" state, and we'll see a bunch
of nvme errors in dmesg.

This happens on storage systems that share the same subsystem for
multiple volumes because Nova has not been updated to support the
tri-state "shared_targets" option that groups the detach and unmap of
volumes to prevent race conditions.

This is related to the issue mentioned in an os-brick commit message [1]

For the guard_connection method of os-brick to work as expected for
NVMe-oF volumes we need to use microversion 3.69 when retrieving the
cinder volume.

In microversion 3.69 we started reporting 3 states for shared_targets:
True, False, and None.

- True is to guard iSCSI volumes and will only be used if the iSCSI
  initiator running on the host doesn't have the manual scans feature.

- False is that no target/subsystem is being shared so no guard is
  necessary.

- None is to force guarding, and it's currently used for NVMe-oF volumes
  when sharing the subsystem.

[1]: https://review.opendev.org/c/openstack/os-brick/+/836062/12//COMMIT_MSG

Closes-Bug: #2035375
Change-Id: I4def1c0f20118d0b8eb7d3bbb09af2948ffd70e1


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2035375

Title:
  Detaching multiple NVMe-oF volumes may leave the subsystem in
  connecting state

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When detaching multiple NVMe-oF volumes from the same host we may end
  up with an NVMe subsystem in "connecting" state, and we'll see a bunch
  of nvme errors in dmesg.

  This happens on storage systems that share the same subsystem for
  multiple volumes because Nova has not been updated to support the tri-
  state "shared_targets" option that groups the detach and unmap of
  volumes to prevent race conditions.

  This is related to the issue mentioned in an os-brick commit message:
  https://review.opendev.org/c/openstack/os-
  brick/+/836062/12//COMMIT_MSG
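
  As a rough illustration of the tri-state logic, the guard decision
  boils down to something like this (a minimal sketch; the attribute
  handling and names are assumptions, not os-brick's or nova's code):

    def volume_needs_detach_guard(volume):
        # shared_targets semantics with cinder microversion >= 3.69:
        #   True  -> guard (shared iSCSI target, no manual scans feature)
        #   False -> nothing is shared, no guard needed
        #   None  -> force guarding, e.g. NVMe-oF sharing one subsystem
        shared = getattr(volume, 'shared_targets', False)
        return shared is not False  # guard on True and on None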

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2035375/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052761] Re: libvirt: swtpm_ioctl is required for vTPM support

2024-08-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/908546
Committed: 
https://opendev.org/openstack/nova/commit/9a11bb25238288139c4473d9d91bf365ed88f435
Submitter: "Zuul (22348)"
Branch:master

commit 9a11bb25238288139c4473d9d91bf365ed88f435
Author: Takashi Kajinami 
Date:   Fri Feb 9 12:16:45 2024 +0900

libvirt: Ensure swtpm_ioctl is available for vTPM support

Libvirt uses swtpm_ioctl to terminate swtpm processes. If the binary
does not exist, swtpm processes are kept running after the associated
VM terminates, because QEMU does not send shutdown to swtpm.

Closes-Bug: #2052761
Change-Id: I682f71512fc33a49b8dfe93894f144e48f33abe6


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2052761

Title:
  libvirt: swtpm_ioctl is required for vTPM support

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  Libvirt uses swtpm_ioctl to shut down the swtpm process at VM
  termination, because QEMU does not send a shutdown command.
  However, the binary is not included in the required binaries (swtpm and
  swtpm_setup, at the time of writing) checked by the libvirt driver. So
  users can enable vTPM support without the binary, which leaves swtpm
  processes running.
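
  A minimal sketch of the kind of startup check the fix adds (names are
  illustrative, not nova's actual code):

    import shutil

    REQUIRED_SWTPM_BINARIES = ('swtpm', 'swtpm_setup', 'swtpm_ioctl')

    def assert_vtpm_binaries_present():
        # Fail early at service startup instead of leaving orphaned
        # swtpm processes behind after VM termination.
        missing = [b for b in REQUIRED_SWTPM_BINARIES
                   if shutil.which(b) is None]
        if missing:
            raise RuntimeError('vTPM support requires missing binaries: '
                               + ', '.join(missing))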

  Steps to reproduce
  ==
  * Deploy nova-compute with vTPM support
  * Move swtpm_ioctl from PATH
  * Restart nova-compute

  Expected result
  ===
  nova-compute fails to start because swtpm_ioctl is missing

  Actual result
  =
  nova-compute starts without error and reports TPM traits.

  Environment
  ===
  This issue was initially found in master, but would be present in stable 
branches.

  Logs & Configs
  ==
  N/A

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2052761/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2076163] Re: Persistent mdev support does not work with < libvirt 8.10 due to missing VIR_NODE_DEVICE_CREATE_XML_VALIDATE

2024-08-08 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/925826
Committed: 
https://opendev.org/openstack/nova/commit/f63029b461b81ad93e0681973ed9b5bfca405d5a
Submitter: "Zuul (22348)"
Branch:master

commit f63029b461b81ad93e0681973ed9b5bfca405d5a
Author: melanie witt 
Date:   Tue Aug 6 20:29:22 2024 +

libvirt: Remove node device XML validate flags

Node device XML validation flags [1]:

  VIR_NODE_DEVICE_(CREATE|DEFINE)_XML_VALIDATE

were added in libvirt 8.10.0 but we support older libvirt versions
which will raise an AttributeError when flag access is attempted.

We are not currently using the flags (nothing calling with
validate=True) so this removes the flags from the code entirely. If the
flags are needed in the future, they can be added again at that time.

Closes-Bug: #2076163

[1] 
https://github.com/libvirt/libvirt/commit/d8791c3c7caa6e3cadaf98a5a2c94b232ac30fed

Change-Id: I015d9b7cad413986058da4d29ca7711c844bfa84


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2076163

Title:
  Persistent mdev support does not work with < libvirt 8.10 due to
  missing VIR_NODE_DEVICE_CREATE_XML_VALIDATE

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The persistent mdev feature passes that flag, but the flag is only
  supported since libvirt 8.10.0, so with older libvirt such as 7.3.0
  (the minimum for persistent mdev) or 8.0.0 (Ubuntu 22.04) the
  persistent mdev feature cannot be enabled: nova-compute fails due to
  the missing constant.

  XML validation is just a nice-to-have feature, so we can make that flag
  optional and only pass it if libvirt is >= 8.10.0, as sketched below.
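
  One way to express that optionality is the usual getattr fallback,
  sketched here against the python-libvirt binding (the helper name is
  illustrative):

    import libvirt

    # The constant only exists in libvirt >= 8.10.0; default to 0 (no
    # flag) instead of raising AttributeError on older bindings.
    CREATE_VALIDATE = getattr(
        libvirt, 'VIR_NODE_DEVICE_CREATE_XML_VALIDATE', 0)

    def create_node_device(conn, xml, validate=False):
        flags = CREATE_VALIDATE if validate else 0
        return conn.nodeDeviceCreateXML(xml, flags)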

  
https://github.com/openstack/nova/commit/74befb68a79f8bff823fe067e0054504391ee179#diff-67d0163175a798156def4ec53c18fa2ce6eba79b6400fa833a9219d3669e9a11R1267
  
https://github.com/libvirt/libvirt/commit/d8791c3c7caa6e3cadaf98a5a2c94b232ac30fed

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2076163/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2059236] Re: Add a RBAC action field in the query hooks

2024-08-07 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/915370
Committed: 
https://opendev.org/openstack/neutron/commit/f22f7ae012e75b34051945fcac29f955861896ab
Submitter: "Zuul (22348)"
Branch:master

commit f22f7ae012e75b34051945fcac29f955861896ab
Author: Rodolfo Alonso Hernandez 
Date:   Mon Apr 8 22:19:50 2024 +

Use the RBAC actions field for "network" and "subnet"

Since [1], it is possible to define a set of RBAC actions to filter the
model query. For the "network" and "subnet" models, the RBAC action
"access_as_external" needs to be added to the query. Instead of adding
an additional filter (as is done now), this patch replaces the default
RBAC actions used in the model query, adding this extra one.

The neutron-lib library is bumped to version 3.14.0.

[1]https://review.opendev.org/c/openstack/neutron-lib/+/914473

Closes-Bug: #2059236
Change-Id: Ie3e77e2f812bd5cddf1971bc456854866843d4f3


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2059236

Title:
  Add a RBAC action field in the query hooks

Status in neutron:
  Fix Released

Bug description:
  Any Neutron resource (that is not only a single database table but a
  view, a combination of several tables) can register a set of hooks
  that will be used during the DB query creation [1]. These hooks
  include a query hook (to modify the query depending on the database
  relationships), a filter hook (to add extra filtering steps to the
  final query) and a results filter hook (that could be used to join
  other tables with other dependencies).

  This bug proposes adding an extra field to these hooks to be able to
  filter on the RBAC actions. Some resources, like networks [2] and
  subnets [3], need to add an extra RBAC action "ACCESS_EXTERNAL" to the
  query filter. This is done now by adding again the same RBAC filter
  included in the ``query_with_hooks`` [4] but with the "ACCESS_EXTERNAL"
  action.

  If, instead, ``query_with_hooks`` accepts a configurable set of RBAC
  actions, the resulting query could be shorter, less complex and faster
  (see the sketch after the references below).

  
[1]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L86-L90
  
[2]https://github.com/openstack/neutron/blob/bcf1f707bc9169e8f701613214516e97f039d730/neutron/db/external_net_db.py#L75-L80
  
[3]https://review.opendev.org/c/openstack/neutron/+/907313/15/neutron/db/external_net_db.py
  
[4]https://github.com/openstack/neutron-lib/blob/625ae19e29758da98c5dd8c9ce03962840a87949/neutron_lib/db/model_query.py#L127-L132
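
  As a toy sketch of the idea in plain Python (deliberately not the
  neutron-lib API; all names here are hypothetical):

    DEFAULT_RBAC_ACTIONS = ('access_as_shared',)

    def rbac_visible(rbac_entries, project_id,
                     actions=DEFAULT_RBAC_ACTIONS):
        # One configurable filter instead of the default filter plus a
        # duplicated second one for "access_as_external".
        return any(entry['action'] in actions and
                   entry['target_project'] in (project_id, '*')
                   for entry in rbac_entries)

    # Networks and subnets would simply pass a wider action set:
    # rbac_visible(entries, project_id,
    #              actions=('access_as_shared', 'access_as_external'))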

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2059236/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074018] Re: disable_user_account_days_inactive option locks out all users

2024-08-07 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/924892
Committed: 
https://opendev.org/openstack/keystone/commit/e9513f8e4f25e1f20bc6fcab71d917712abf
Submitter: "Zuul (22348)"
Branch:master

commit e9513f8e4f25e1f20bc6fcab71d917712abf
Author: Douglas Mendizábal 
Date:   Fri Jul 19 17:10:11 2024 -0400

Add keystone-manage reset_last_active command

This patch adds the `reset_last_active` subcommand to the
`keystone-manage` command line tool.

This subcommand will update every user in the database that has a null
value in the `last_active_at` property to the current server time. This
is necessary to prevent user lockout in deployments that have been
running for a long time without `disable_user_account_days_inactive` and
later decide to turn it on.

This patch also includes a change to the logic that sets
`last_active_at` to fix the root issue of the lockout.

Closes-Bug: 2074018
Change-Id: I1b71fb3881dc041db01083fbb4f2592400096a31


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2074018

Title:
  disable_user_account_days_inactive option locks out all users

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Enabling the option `[security_compliance]
  disable_user_account_days_inactive = X` disables all user accounts in
  deployments that have been running for longer than X.

  The root cause seems to be the way that the values of the
  `last_active_at` column in the `user` table are set.  When the option
  is disabled, the `last_active_at` column is never updated, so it is
  null for all users.

  If you later decide to turn on this option for compliance reasons, the
  current logic in Keystone will use the value of `created_at` as the
  last time the user was active. In any deployment where users were
  created more than `disable_user_account_days_inactive` days ago, this
  results in all users being disabled, including the admin user,
  regardless of when each user last logged in.
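
  In sketch form, the problematic check looks roughly like this
  (illustrative only, not keystone's actual code):

    import datetime

    def account_is_inactive(user, max_inactive_days):
        # With the option previously disabled, last_active_at is NULL
        # for everyone, so this falls back to created_at -- and any
        # user older than the threshold is disabled, admin included.
        last_active = user['last_active_at'] or user['created_at']
        cutoff = (datetime.datetime.utcnow()
                  - datetime.timedelta(days=max_inactive_days))
        return last_active < cutoff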

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2074018/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1938103] Re: assertDictContainsSubset is deprecated since Python3.2

2024-08-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/blazar/+/925743
Committed: 
https://opendev.org/openstack/blazar/commit/3999bf1fb7b51ae6eb8e313cfc8526a57336677a
Submitter: "Zuul (22348)"
Branch:master

commit 3999bf1fb7b51ae6eb8e313cfc8526a57336677a
Author: Pierre Riteau 
Date:   Tue Aug 6 10:42:19 2024 +0200

Replace deprecated assertDictContainsSubset

This deprecated method was removed in Python 3.12 [1].

[1] https://docs.python.org/3/whatsnew/3.12.html#id3

Closes-Bug: #1938103
Change-Id: Ic5fcf58bfb6bea0cff669feadbe8fee5b01b1ce0


** Changed in: blazar
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1938103

Title:
  assertDictContainsSubset is deprecated since Python3.2

Status in Blazar:
  Fix Released
Status in Designate:
  Fix Released
Status in Glance:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Mistral:
  Fix Released
Status in neutron:
  Fix Released
Status in python-neutronclient:
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  unittest.TestCase.assertDictContainsSubset is deprecated since Python
  3.2[1] and shows the following warning.

  ~~~
  /usr/lib/python3.9/unittest/case.py:1134: DeprecationWarning: 
assertDictContainsSubset is deprecated
warnings.warn('assertDictContainsSubset is deprecated',
  ~~~

  [1] https://docs.python.org/3/whatsnew/3.2.html#unittest
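
  A drop-in way to rewrite such assertions without the deprecated
  helper is to project the actual dict onto the expected keys, e.g.:

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_subset(self):
            expected = {'name': 'net1', 'admin_state_up': True}
            actual = {'name': 'net1', 'admin_state_up': True, 'id': 'a1'}
            # Equivalent to assertDictContainsSubset(expected, actual):
            self.assertEqual(expected,
                             {k: actual.get(k) for k in expected})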

To manage notifications about this bug go to:
https://bugs.launchpad.net/blazar/+bug/1938103/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2066115] Re: Prevent KeyError getting value of optional data

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/919430
Committed: 
https://opendev.org/openstack/horizon/commit/fcce68a914f49938137785a4635d781b5a1741df
Submitter: "Zuul (22348)"
Branch:master

commit fcce68a914f49938137785a4635d781b5a1741df
Author: MinhNLH2 
Date:   Sun May 19 20:58:47 2024 +0700

Prevent KeyError when getting value of optional key

Closes-Bug: #2066115
Change-Id: Ica10eb749b48410583cb34bfa2fda0433a26c664
Signed-off-by: MinhNLH2 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2066115

Title:
  Prevent KeyError getting value of optional data

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Problem: Some optional data is retrieved in this way:
  backup_name = data['backup_name'] or None
  volume_id = data['volume_id'] or None

  etc...

  When the key does not exist, a KeyError will be raised.
  Moreover, the `or None` here is meaningless.

  Solution:
  Change to 
  backup_name = data.get('backup_name')
  volume_id = data.get('volume_id')

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2066115/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2072483] Re: Revert image status to queued if image conversion fails

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/glance/+/923624
Committed: 
https://opendev.org/openstack/glance/commit/ea131dd1442861cb5884f99b6bb9e47e397605ce
Submitter: "Zuul (22348)"
Branch:master

commit ea131dd1442861cb5884f99b6bb9e47e397605ce
Author: Abhishek Kekane 
Date:   Mon Jul 8 09:49:55 2024 +

Revert image state to queued if conversion fails

Made changes to revert the image state to `queued` and delete the image
data from the staging area if image conversion fails. If the image is
being imported to multiple stores at a time, the image properties
`os_glance_importing_to_stores` and `os_glance_failed_imports` are
reset to reflect the actual result of the operation.

Closes-Bug: 2072483
Change-Id: I373dde3a07332184c43d9605bad7a59c70241a71


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2072483

Title:
  Revert image status to queued if image conversion fails

Status in Glance:
  Fix Released

Bug description:
  When glance has the import plugin `image_conversion` enabled and the
  plugin fails to convert the image to its desired format, the image
  remains in the `importing` state forever and the image data remains in
  the staging area unless you delete that image.

  Ideally the image data should be deleted from the staging area and the
  image state should be rolled back to `queued` so that the user can
  rectify the error of the previous attempt and import the image again.

  Environment settings:
  Ensure you have the glance-direct and web-download methods enabled in
  your glance-api.conf
  Ensure you have the image_conversion plugin enabled in
  glance-image-import.conf

  How to reproduce:
  1. Create bad image file with below command
 qemu-img create -f qcow2 -o data_file=abcdefghigh,data_file_raw=on 
disk.qcow2 1G
  2. Use above file to create image using import workflow
 glance image-create-via-import --disk-format qcow2 --container-format bare 
--import-method glance-direct --file disk.qcow2 --name 
test-glance-direct-conversion_1

  Expected result:
  Operation should fail, image should be in queued state and image data 
should be deleted from staging area

  
  Actual result:
  Operation fails, image remains in importing state and image data remains 
in staging area
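
  The expected behaviour amounts to a rollback on failure, roughly like
  this sketch (names and structure are assumptions, not glance's code):

    import os

    class ConversionError(Exception):
        pass

    def import_with_conversion(image, staging_file, convert):
        try:
            convert(staging_file)
        except ConversionError:
            # Drop the partially staged data and return the image to
            # 'queued' so the import can simply be retried.
            if os.path.exists(staging_file):
                os.unlink(staging_file)
            image['status'] = 'queued'
            raise
        image['status'] = 'active'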

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2072483/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073987] Re: Switch from distributed to centralized Floating IPs breaks connectivity to the existing FIPs

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925007
Committed: 
https://opendev.org/openstack/neutron/commit/4b1bfb93e380b8dce78935395b2cda57076e5476
Submitter: "Zuul (22348)"
Branch:master

commit 4b1bfb93e380b8dce78935395b2cda57076e5476
Author: Slawek Kaplonski 
Date:   Fri Jul 26 12:02:27 2024 +0200

Fix setting correct 'reside-on-chassis-redirect' in the maintenance task

Setting 'reside-on-chassis-redirect' was skipped for LRP ports of
provider tenant networks in patch [1], but the later patch [2] removed
this limitation from the ovn_client and not from the maintenance task.
Because of that, this option wasn't updated after e.g. a change of the
'enable_distributed_floating_ip' config option, and connectivity to the
existing Floating IPs associated with ports in vlan tenant networks was
broken.

This patch removes that limitation and this option is now updated for
all of the Logical_Router_Ports for vlan networks, not only for external
gateways.

[1] https://review.opendev.org/c/openstack/neutron/+/871252
[2] https://review.opendev.org/c/openstack/neutron/+/878450

Closes-bug: #2073987
Change-Id: I56e791847c8f4f3a07f543689bf22fde8160c9b7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073987

Title:
  Switch from distributed to centralized Floating IPs breaks
  connectivity to the existing FIPs

Status in neutron:
  Fix Released

Bug description:
  This affects only ML2/OVN deployments. I was checking it with
  initially enabled distributed floating ips
  (enable_distributed_floating_ip=True in the neutron ml2 plugin's
  config file).

  Steps to reproduce the issue:

  1. Create vlan tenant network -- THIS IS VERY IMPORTANT, USING TUNNEL 
NETWORKS WILL NOT CAUSE THAT PROBLEM AT ALL
  2. Create external network - can be vlan or flat
  3. Create router and attach vlan tenant network to that router
  4. Set external network as router's gateway
  5. Create vm connected to that vlan tenant network and add Floating IP to it,
  6. Check connectivity to the VM using Floating IP - all works fine until 
now...

  7. Change 'enable_distributed_floating_ip' config option in Neutron to be 
FALSE
  8. Restart neutron-server
  9. The FIP is not working anymore. This is because the SNAT_AND_DNAT
entry was changed to be centralized (external_mac is no longer set in
ovn-nb) but the Logical_Router_Port still has the option
"reside-on-redirect-chassis" set to "false". After updating it manually
to "true", connectivity over the centralized gateway chassis works again.

  This option reside-on-redirect-chassis was added with patch
  https://review.opendev.org/c/openstack/neutron/+/871252. Additionally,
  patch https://review.opendev.org/c/openstack/neutron/+/878450 added a
  maintenance task to set the correct value of the redirect-type in the
  Logical_Router's gateway port. But it seems that we are missing an
  update of the 'reside-on-redirect-chassis' option for the existing
  Logical_Router_Ports when this config option is changed. Maybe we
  should have a maintenance task for that also.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073987/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073782] Re: "Tagging" extension does not initialize the policy enforcer

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924656
Committed: 
https://opendev.org/openstack/neutron/commit/776178e90763d004ccb595b131cdd4dd617cd34f
Submitter: "Zuul (22348)"
Branch:master

commit 776178e90763d004ccb595b131cdd4dd617cd34f
Author: Rodolfo Alonso Hernandez 
Date:   Sat Jul 20 00:46:04 2024 +

Initialize the policy enforcer for the "tagging" service plugin

The "tagging" service plugin API extension does use the policy enforcer
since [1]. If a tag API call is done just after the Neutron server has
been initialized and the policy enforcer, that is a global variable per
API worker, has not been initialized, the API call will fail.

This patch initializes the policy enforcer as is done in the
``PolicyHook``, that is called by many other API resources that inherit
from the ``APIExtensionDescriptor`` class.

[1]https://review.opendev.org/q/I9f3e032739824f268db74c5a1b4f04d353742dbd

Closes-Bug: #2073782
Change-Id: Ia35c51fb81cfc0a55c5a2436fc5c55f2b4c9bd01


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073782

Title:
  "Tagging" extension does not initialize the policy enforcer

Status in neutron:
  Fix Released

Bug description:
  The "tagging" service plugin extension uses its own controller. This
  controller doesn't call the WSGI hooks like the policy hook. Instead
  of this, the controller implements the policy enforcer directly on the
  WSGI methods (create, update, delete, etc.).

  It is needed to initialize the policy enforcer before any enforcement
  is done. If a tag API call is done just after the Neutron server has
  been restarted, the server will fail with the following error: [1].

  The policy enforcement was implemented in [2]. The fix for this bug
  should be backported up to 2023.2.

  [1]https://paste.opendev.org/show/bIeSoD2Y0vrTpJb4uYQ5/
  [2]https://review.opendev.org/q/I9f3e032739824f268db74c5a1b4f04d353742dbd
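
  The fix boils down to the pattern below (a sketch: ``init`` and
  ``enforce`` mirror neutron's policy module, but the wrapper itself is
  illustrative):

    from neutron import policy

    def enforce_tag_policy(context, action, target):
        # Make sure the per-worker global enforcer exists before the
        # first enforcement, as the PolicyHook does for other resources.
        policy.init()
        return policy.enforce(context, action, target)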

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073782/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074209] Re: OVN maintenance tasks may be delayed 10 minutes in the podified deployment

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925194
Committed: 
https://opendev.org/openstack/neutron/commit/04c217bcd0eda07d52a60121b6f86236ba6e26ee
Submitter: "Zuul (22348)"
Branch:master

commit 04c217bcd0eda07d52a60121b6f86236ba6e26ee
Author: Slawek Kaplonski 
Date:   Tue Jul 30 14:17:44 2024 +0200

Lower spacing time of the OVN maintenance tasks which should be run once

Some of the OVN maintenance tasks are expected to run just once and
then raise periodic.NeverAgain() so they are not run anymore. Those
tasks also require having acquired the OVN DB lock, so that only one of
the maintenance workers really runs them.
All those tasks had a spacing time of 600 seconds, so they were run
every 600 seconds. This usually works fine but may cause a small issue
in environments where Neutron runs in a POD as a k8s/openshift
application. In such a case, when e.g. the configuration of neutron is
updated, it may happen that a new POD with Neutron is spawned first,
and only once it is already running will k8s stop the old POD. Because
of that, the maintenance worker running in the new neutron-server POD
will not acquire the lock on the OVN DB (the old POD still holds the
lock) and will not run all those maintenance tasks immediately. After
the old POD is terminated, one of the new PODs will at some point
acquire that lock and then run all those maintenance tasks, but this
would cause a 600 second delay in running them.

To avoid such a long wait to run those maintenance tasks, this patch
lowers their spacing time from 600 to just 5 seconds.
Additionally, maintenance tasks which are supposed to run only once,
and only by the maintenance worker which has acquired the OVN DB lock,
will now be stopped (periodic.NeverAgain will be raised) after 100 run
attempts.
This avoids running them every 5 seconds forever on the workers which
never acquire the lock on the OVN DB.

Closes-bug: #2074209
Change-Id: Iabb4bb427588c1a5da27a5d313f75b5bd23805b2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2074209

Title:
  OVN maintenance tasks may be delayed 10 minutes in the podified
  deployment

Status in neutron:
  Fix Released

Bug description:
  When running the Neutron server on a K8s (or OpenShift) cluster it may
  happen that OVN maintenance periodic tasks which are supposed to be
  run immediately are delayed for about 10 minutes. This happens e.g.
  when Neutron's configuration is changed and K8s is restarting the
  neutron pods. What happens in such a case is:

  1. pods with the neutron-api application are running,
  2. the configuration is updated and k8s first starts new pods; only
after the new ones are ready does it terminate the old pods,
  3. during that time, the neutron-server process which runs in the new
pod starts its maintenance task and immediately tries to run the tasks
defined with the "periodics.periodic(spacing=600, run_immediately=True)"
decorator,
  4. this new pod doesn't yet have the lock on the OVN northbound DB, so
each such maintenance task is stopped immediately,
  5. a few seconds later the OLD neutron-server pod is terminated by k8s
and the new pod (the one started above in point 3) gets the lock on the
OVN database,
  6. now all maintenance tasks are run again by the maintenance worker
after the time defined in the "spacing" parameter, which is 600 seconds.
This is a pretty long time to wait for e.g. some options in the OVN
database to be adjusted to the new Neutron configuration.

  We could reduce this spacing time to e.g. 5 seconds. This would
  decrease the additional waiting time significantly in the case
  described in this bug. It would make all those methods be called much
  more often in neutron-server processes which don't have the lock
  granted, but we may introduce an additional parameter for that and
  e.g. raise NeverAgain() after 100 run attempts of such a periodic task
  (see the sketch below).
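
  A minimal sketch of that combination, using the same futurist
  decorator (the lock attribute and the task body are hypothetical):

    from futurist import periodics

    MAX_ATTEMPTS = 100
    _attempts = 0

    @periodics.periodic(spacing=5, run_immediately=True)
    def one_time_ovn_task(worker):
        global _attempts
        if not worker.has_ovn_db_lock:   # hypothetical lock attribute
            _attempts += 1
            if _attempts >= MAX_ATTEMPTS:
                # Give up on workers that never get the OVN DB lock.
                raise periodics.NeverAgain()
            return
        worker.apply_new_config()        # hypothetical one-shot job
        raise periodics.NeverAgain()     # ran once; never run again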

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2074209/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073745] Re: [eventlet-deprecation] Reduce the ``IpConntrackManager`` process pool to a single thread

2024-08-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924582
Committed: 
https://opendev.org/openstack/neutron/commit/23b9077df53d2d61a3749ea8631ce4c7fe277b35
Submitter: "Zuul (22348)"
Branch:master

commit 23b9077df53d2d61a3749ea8631ce4c7fe277b35
Author: Rodolfo Alonso Hernandez 
Date:   Fri Jul 19 18:25:39 2024 +

Reduce to 1 thread the processing of ``IpConntrackManager`` events

The multithread processing does not add any speed improvement to the
event processing. The aim of this patch is to reduce to 1 the number of
threads processing the ``IpConntrackManager`` events.

Closes-Bug: #2073745
Change-Id: I190d842349a86868578d6b6ee2ff53cfcd6fb1cc


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073745

Title:
  [eventlet-deprecation] Reduce the ``IpConntrackManager`` process pool
  to a single thread

Status in neutron:
  Fix Released

Bug description:
  This bug has the same justification as [1]. The multithread processing
  does not add any speed improvement to the event processing. The aim of
  this bug is to reduce to 1 the number of threads processing the
  ``IpConntrackManager`` events.

  [1]https://bugs.launchpad.net/neutron/+bug/2070376

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073745/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073567] Re: [master][ml2-ovn] Multiple Unexpected exception in notify_loop: neutron_lib.exceptions.PortNotFound

2024-07-31 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925039
Committed: 
https://opendev.org/openstack/neutron/commit/c1b88fc5f52283380261f4fdc1562ff56ea06a29
Submitter: "Zuul (22348)"
Branch:master

commit c1b88fc5f52283380261f4fdc1562ff56ea06a29
Author: Miro Tomaska 
Date:   Fri Jul 26 10:50:40 2024 -0400

Only query for the port instead of getting it directly

It was observed in the tempest tests that the port could already be
deleted by some other concurrent event by the time `run` is called.
This caused a flood of exception logs. Thus, with this patch we only
query for the port and call update_router_port only when the port is
found.

Closes-Bug: #2073567
Change-Id: I54d027f7cb5014d296a99029cfa0a13a7800da0a
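
In sketch form the guarded lookup looks like this (method names follow
the traceback below, but the snippet is illustrative, not the exact
patch):

    def run(self, event, row, updates):
        # Query instead of get: the port may have been deleted by a
        # concurrent event, and the plain getter raises PortNotFound.
        ports = self.driver._plugin.get_ports(
            self.admin_context, filters={'id': [row.name]})
        if not ports:
            return  # port already gone; nothing to update
        self.driver._ovn_client.update_router_port(
            self.admin_context, ports[0])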


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073567

Title:
  [master][ml2-ovn] Multiple Unexpected exception in notify_loop:
  neutron_lib.exceptions.PortNotFound

Status in neutron:
  Fix Released

Bug description:
  Multiple traces can be seen in ovn job like below:-
  Jul 18 19:35:46.623330 np0038010972 neutron-server[84540]: WARNING 
neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_client [None 
req-fbcd2914-d4f5-4f87-a685-96f16cc4f5f2 None None] No port found with ID 
40c61c6b-8569-4bbd-a71d-4bf9a0917d19: RuntimeError: No port found with ID 
40c61c6b-8569-4bbd-a71d-4bf9a0917d19
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event [None req-fbcd2914-d4f5-4f87-a685-96f16cc4f5f2 None None] 
Unexpected exception in notify_loop: neutron_lib.exceptions.PortNotFound: Port 
40c61c6b-8569-4bbd-a71d-4bf9a0917d19 could not be found.
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event Traceback (most recent call last):
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File "/opt/stack/neutron/neutron/db/db_base_plugin_common.py", 
line 295, in _get_port
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event port = model_query.get_by_id(context, models_v2.Port, id,
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/neutron_lib/db/model_query.py",
 line 178, in get_by_id
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event return query.filter(model.id == object_id).one()
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/sqlalchemy/orm/query.py", 
line 2778, in one
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event return self._iter().one()  # type: ignore
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 1810, in one
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event return self._only_one_row(
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/sqlalchemy/engine/result.py",
 line 752, in _only_one_row
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event raise exc.NoResultFound(
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event sqlalchemy.exc.NoResultFound: No row was found when one was 
required
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event 
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event During handling of the above exception, another exception 
occurred:
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event 
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event Traceback (most recent call last):
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/ovsdbapp/event.py", line 
177, in notify_loop
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event match.run(event, row, updates)
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovsdb_monitor.py",
 line 581, in run
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event port = self.driver._plugin.get_port(self.admin_context, 
row.name)
  Jul 18 19:35:46.630294 np0038010972 neutron-server[84540]: ERROR 
ovsdbapp.event   File 
"/opt/stack/data/venv/lib/python3.10/site-packages/neutron_lib/db/api.py", line 
223, in wrapped
  Jul 18 19:35:46.630294 np0038010972 neutron-server[8

[Yahoo-eng-team] [Bug 2073743] Re: ``ProcessMonitor._check_child_processes`` cannot release the GIL inside a locked method

2024-07-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/924581
Committed: 
https://opendev.org/openstack/neutron/commit/baa57ab38d754bfa2dba488feb9429c1380d616c
Submitter: "Zuul (22348)"
Branch:master

commit baa57ab38d754bfa2dba488feb9429c1380d616c
Author: Rodolfo Alonso Hernandez 
Date:   Fri Jul 19 18:03:16 2024 +

Do not release the executor inside ``_check_child_processes``

The method ``ProcessMonitor._check_child_processes`` was releasing
the thread executor inside a method that holds a lock on the resource
"_check_child_processes". Although this resource is not used anywhere
else (at least for this instance), this could lead to a potential
deadlock.

The current implementation of ``lockutils.synchronized`` with the
default value "external=False" and "fair=False" is a
``threading.Lock()`` instance. The goal of this lock is, precisely, to
execute the code inside the locked code without any interruption and
then to be able to release the executor.

Closes-Bug: #2073743
Change-Id: I44c7a4ce81a67b86054832ac050cf5b465727adf


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2073743

Title:
  ``ProcessMonitor._check_child_processes`` cannot release the GIL
  inside a locked method

Status in neutron:
  Fix Released

Bug description:
  The method ``ProcessMonitor._check_child_processes`` releases the
  thread executor inside a method that holds a lock on the resource
  "_check_child_processes". Although this resource is not used anywhere
  else (at least for this instance), this could lead to a potential
  deadlock.

  The current implementation of ``lockutils.synchronized`` with the
  default value "external=False" and "fair=False" is a
  ``threading.Lock()`` instance. The goal of this lock is, precisely, to
  execute the code inside the locked code without any interruption and
  then to be able to release the executor.
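
  The locked method in question follows the usual oslo pattern, roughly
  (a sketch, not neutron's exact code):

    from oslo_concurrency import lockutils

    class ProcessMonitor(object):
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # With external=False and fair=False the decorator wraps
            # the body in a plain threading.Lock(); the body should run
            # to completion without deliberately yielding the executor.
            pass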

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2073743/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2059821] Re: Deprecated glanceclient exceptions are still used

2024-07-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/914792
Committed: 
https://opendev.org/openstack/horizon/commit/8cca9edbd84d378012af701c22077f1c84767bf7
Submitter: "Zuul (22348)"
Branch:master

commit 8cca9edbd84d378012af701c22077f1c84767bf7
Author: Takashi Kajinami 
Date:   Sun Mar 31 11:13:34 2024 +0900

Replace deprecated glanceclient exceptions

These exceptions were deprecated in 0.4.0[1].

Also glanceclient.common.exceptions was deprecated in favor of
glanceclient.exc[2].

[1] 354c98b087515dc4303a07d1ff0d9a9d7b4dd48b
[2] 53acf1a0ca70c900267286a249e476fffe078a9f

Closes-Bug: #2059821
Change-Id: If119a261f52b99fe9faf4b8f749a2abbd7a79957


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2059821

Title:
  Deprecated glanceclient exceptions are still used

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Description
  ===
  The exceptions in glanceclient.exc were deprecated in glanceclient 0.4.0[1].
   glanceclient.exc.Forbidden
   glanceclient.exc.NotFound
   glanceclient.exc.ServiceUnavailable
   glanceclient.exc.Unauthorized

  https://github.com/openstack/python-
  glanceclient/commit/354c98b087515dc4303a07d1ff0d9a9d7b4dd48b

  But these are still used in the code.

  We should replace them with the new HTTP* exceptions.
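
  For example, a lookup that tolerates a missing image would move to
  the HTTP* classes like this (assuming `client` is a glanceclient v2
  Client instance):

    from glanceclient import exc as glance_exc

    def get_image_or_none(client, image_id):
        try:
            return client.images.get(image_id)
        except glance_exc.HTTPNotFound:
            # Previously: glanceclient.exc.NotFound (deprecated)
            return None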

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2059821/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2074360] Re: [fullstack] Error when creating QoS rules: Unrecognized attribute(s) 'tenant_id, project_id'

2024-07-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/925085
Committed: 
https://opendev.org/openstack/neutron/commit/bfd32488a6795dde178b43126b41a9b80a08750f
Submitter: "Zuul (22348)"
Branch:master

commit bfd32488a6795dde178b43126b41a9b80a08750f
Author: Rodolfo Alonso Hernandez 
Date:   Wed Jul 24 03:46:56 2024 +

Remove the tenant_id/project_id parameter from QoS rule commands

Removed the tenant_id/project_id parameter from any QoS rule command
in the fullstack framework.

Closes-Bug: #2074360
Related-Bug: #2022043
Change-Id: I18efb28ffc02323e82f6b116a3f713cb9e2a132e


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2074360

Title:
  [fullstack] Error when creating QoS rules: Unrecognized attribute(s)
  'tenant_id, project_id'

Status in neutron:
  Fix Released

Bug description:
  The fullstack job is now failing with the following message:
  ```
  neutronclient.common.exceptions.BadRequest: Unrecognized attribute(s) 
'tenant_id, project_id'
  ```

  This is happening since 2024-07-26 14:29:17, according to the last
  zuul executions of "neutron-fullstack-with-uwsgi" job.

  Logs:
  
https://aff82a8c5891f54ff809-cc9a180b756778dd4f9ff9a8ddf16569.ssl.cf2.rackcdn.com/924656/3/check/neutron-
  fullstack-with-uwsgi/abdf4bd/testr_results.html

  Snippet: https://paste.opendev.org/show/boVjhu8gDRJP9mh80FAY/

  This error could be caused by
  https://review.opendev.org/c/openstack/neutron-lib/+/921649. The last
  good execution of this CI job was using neutron-lib 3.13. The next
  one, which failed, was using 3.14 (which contains this patch).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2074360/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2073862] Re: test_vmdk_bad_descriptor_mem_limit and test_vmdk_bad_descriptor_mem_limit_stream_optimized fail if qemu-img binary is missing

2024-07-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/924148
Committed: 
https://opendev.org/openstack/nova/commit/a3202f7bf9f1aecc7d0632011167d38a1698f0a0
Submitter: "Zuul (22348)"
Branch:master

commit a3202f7bf9f1aecc7d0632011167d38a1698f0a0
Author: Julien Le Jeune 
Date:   Mon Jul 15 14:10:43 2024 +0200

Fix test_vmdk_bad_descriptor_mem_limit and 
test_vmdk_bad_descriptor_mem_limit_stream_optimized

These tests depend on qemu-img being installed and in the path; if it is
not installed, skip them.

Change-Id: I896f16c512f24bcdd898ab002af4e5e068f66b64
Closes-bug: #2073862
Signed-off-by: Julien Le Jeune 
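
The skip can be expressed with the standard unittest helper, roughly
like this sketch (not the exact patch):

    import shutil
    import unittest

    class TestFormatInspectors(unittest.TestCase):
        def test_vmdk_bad_descriptor_mem_limit(self):
            # Skip instead of failing with exit status 127 when the
            # helper binary is absent.
            if shutil.which('qemu-img') is None:
                self.skipTest('qemu-img binary is not installed')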


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2073862

Title:
  test_vmdk_bad_descriptor_mem_limit and
  test_vmdk_bad_descriptor_mem_limit_stream_optimized fail if qemu-img
  binary is missing

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When the qemu-img binary is not present on the system, these tests fail
  as we can see in these logs:

  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit
  --
  pythonlogging:'': {{{
  2024-07-23 11:44:54,011 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,012 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible way: 
https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
  2024-07-23 11:44:54,015 WARNING [oslo_policy.policy] Policy Rules 
['os_compute_api:extensions', 'os_compute_api:os-floating-ip-pools', 
'os_compute_api:os-quota-sets:defaults', 
'os_compute_api:os-availability-zone:list', 'os_compute_api:limits', 
'project_member_api', 'project_reader_api', 'project_member_or_admin', 
'project_reader_or_admin', 'os_compute_api:limits:other_project', 
'os_compute_api:os-lock-server:unlock:unlock_override', 
'os_compute_api:servers:create:zero_disk_flavor', 
'compute:servers:resize:cross_cell', 
'os_compute_api:os-shelve:unshelve_to_host'] specified in policy files are the 
same as the defaults provided by the service. You can remove these rules from 
policy files which will make maintenance easier. You can detect these redundant 
rules by ``oslopolicy-list-redundant`` tool also.
  }}}

  Traceback (most recent call last):
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 408, in test_vmdk_bad_descriptor_mem_limit
  self._test_vmdk_bad_descriptor_mem_limit()
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 382, in _test_vmdk_bad_descriptor_mem_limit
  img = self._create_allocated_vmdk(image_size // units.Mi,
File 
"/home/jlejeune/dev/pci_repos/stash/nova/nova/tests/unit/image/test_format_inspector.py",
 line 183, in _create_allocated_vmdk
  subprocess.check_output(
File "/usr/lib/python3.10/subprocess.py", line 421, in check_output
  return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/usr/lib/python3.10/subprocess.py", line 526, in run
  raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command 'qemu-img convert -f raw -O vmdk -o 
subformat=monolithicSparse -S 0 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-wz0i4kj1.raw 
/tmp/tmpw0q0ibvj/nova-unittest-formatinspector--monolithicSparse-qpo78jee.vmdk' 
returned non-zero exit status 127.


  
  ==
  ERROR: 
nova.tests.unit.image.test_format_inspector.TestFormatInspectors.test_vmdk_bad_descriptor_mem_limit_stream_optimized
  --
  pythonlogging:'': {{{
  2024-07-23 11:43:31,443 WARNING [oslo_policy.policy] JSON formatted 
policy_file support is deprecated since Victoria release. You need to use YAML 
format which will be default in future. You can use 
``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted 
policy file to YAML-formatted in backward compatible w

[Yahoo-eng-team] [Bug 2048106] Re: CSV Injection while download csv summary

2024-05-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/914156
Committed: 
https://opendev.org/openstack/horizon/commit/c6bba842af621c5a634bfc4798bb13ae8c43ed00
Submitter: "Zuul (22348)"
Branch:master

commit c6bba842af621c5a634bfc4798bb13ae8c43ed00
Author: Tatiana Ovchinnikova 
Date:   Thu Mar 21 15:43:39 2024 -0500

Sanitize data for CSV generation

CSV generation is not fully sanitized to prevent CSV injection.
According to https://owasp.org/www-community/attacks/CSV_Injection,
we have to use the following sanitization:
 - Wrap each cell field in double quotes
 - Prepend each cell field with a single quote
 - Escape every double quote using an additional double quote

The patch https://review.opendev.org/c/openstack/horizon/+/679161
takes care of the double quotes. This patch adds a single quote to
the cell fields beginning with specific characters, so their content
will be read by a spreadsheet editor as text, not a formula.

Closes-Bug: #2048106

Change-Id: I882fe376613ff1dc13a61f38b59d2a2567dbba7d


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2048106

Title:
  CSV Injection while download csv summary

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  Members of the VMT received the following report by E-mail:

  1. The admin adds a user.

  2. The user logs in and creates a compute instance.

  3. The user changes the instance name to "=1+cmd|'/C calc'!A0".

  4. The admin goes to download the CSV summary.

  5. The admin opens the CSV and we can see that the calculator is opened.

  See https://owasp.org/www-community/attacks/CSV_Injection for how to
  fix it.
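
  The OWASP guidance boils down to a cell sanitizer along these lines
  (a sketch):

    def sanitize_csv_cell(value):
        text = str(value)
        # Prepend a single quote so spreadsheet editors read the cell
        # as text rather than a formula ("=1+cmd|..." style payloads).
        if text.startswith(('=', '+', '-', '@')):
            text = "'" + text
        # Escape embedded double quotes, then wrap the whole field.
        return '"%s"' % text.replace('"', '""')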

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2048106/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728031] Fix included in openstack/horizon 23.0.2

2024-04-30 Thread OpenStack Infra
This issue was fixed in the openstack/horizon 23.0.2  release.

** Changed in: cloud-archive/zed
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1728031

Title:
  [SRU] Unable to change user password when ENFORCE_PASSWORD_CHECK is
  True

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive antelope series:
  New
Status in Ubuntu Cloud Archive bobcat series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in horizon package in Ubuntu:
  New
Status in horizon source package in Focal:
  New
Status in horizon source package in Jammy:
  New
Status in horizon source package in Mantic:
  New

Bug description:
  After following the security hardening guidelines:
  
https://docs.openstack.org/security-guide/dashboard/checklist.html#check-dashboard-09-is-enforce-password-check-set-to-true
  After this check is enabled
  (Check-Dashboard-09: Is ENFORCE_PASSWORD_CHECK set to True)
  the user password cannot be changed.
  The form submission fails, displaying that the admin password is
  incorrect.

  The reason for this is in openstack_dashboard/api/keystone.py: the
  user_verify_admin_password method uses the internal URL to communicate
  with keystone.
  Line 500:
  endpoint = _get_endpoint_url(request, 'internalURL')
  This should be changed to adminURL

  ===
  SRU Description
  ===

  [Impact]

  Admins cannot change a user's password: the form gives an error saying
that the admin's password is incorrect, despite it being correct. There
are 2 causes:
  1) due to the lack of user_domain being specified when validating the admin's 
password, it will always fail if the admin is not registered in the "default" 
domain, because the user_domain defaults to "default" when not specified.
  2) even if the admin user is registered in the "default" domain, it may fail 
due to the wrong endpoint being used in the request to validate the admin's 
password.
  The issues are fixed in 2 separate patches [1] and [2]. However, [2]
introduces a new config option, while [1] alone is also enough to fix
the occurrence on some deployments. We are including only [1] in the SRU.

  
  [Test case]

  1. Setting up the env

  1a. Deploy openstack env with horizon/openstack-dashboard

  1b. Set up admin user in a domain not named "default", such as
  "admin_domain".

  1c. Set up any other user, such as demo. Preferably in the
  admin_domain as well for convenience.

  2. Reproduce the bug

  2a. Login as admin and navigate to Identity > Users

  2b. On the far right-hand side of the demo user row, click the options
  button and select Change Password

  2c. Type in any new password, repeat it below, and type in the admin
  password. Click Save and you should see a message "The admin password
  is incorrect"

  3. Install package that contains the fixed code

  4. Confirm fix

  4a. Repeat steps 2a-2c

  4b. The password should now be saved successfully

  [Regression Potential]

  The code is a 1-line change that was tested in upstream CI (without
  the addition of bug-specific functional tests) from master (Caracal) to
  stable/zed without any issues captured. No side effects or risks are
  foreseen. Usage of fix [1] has also been tested manually without fix
  [2] and still worked.

  [Other Info]

  None.

  [1] https://review.opendev.org/c/openstack/horizon/+/913250
  [2] https://review.opendev.org/c/openstack/horizon/+/844574

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1728031/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2062965] Re: octavia/ovn: missed healthmon port cleanup

2024-04-30 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/916637
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/f034bab144b68cf96c538339e389c4cc7c6d7d63
Submitter: "Zuul (22348)"
Branch:master

commit f034bab144b68cf96c538339e389c4cc7c6d7d63
Author: Fernando Royo 
Date:   Mon Apr 22 15:47:46 2024 +0200

Remove leftover OVN LB HM port upon deletion of a member

When a load balancer pool has a Health Monitor associated with it,
an OVN LB Health Monitor port is created for each backend member
subnet added.

When removing backend members, the OVN LB Health Monitor port is
cleaned up only if no more members are associated with the Health
Monitor pool. However, this assumption is incorrect. This patch
corrects this behavior by checking instead if there are more members
from the same subnet associated with the pool. It ensures that the
OVN LB Health Monitor port is deleted only when the last member from
the subnet is deleted. If the port is being used by another different
LB Health Monitor, `_clean_up_hm_port` will handle it.

Closes-Bug: #2062965
Change-Id: I4c35cc5c6af14bb208f4313bb86e3519df0a30fa


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2062965

Title:
  octavia/ovn: missed healthmon port cleanup

Status in neutron:
  Fix Released

Bug description:
  When creating an Octavia load balancer with the OVN provider, adding a
  health monitor and then members, Octavia creates a Neutron HM port in each
  subnet where a member was added.
  When the members are removed again, the HM ports do not get cleaned up.
  The HM removal then cleans up only one of the HM ports, the one in the
  subnet where the VIP happens to be. The others are left behind and never
  cleaned up by Octavia. This causes issues later, when subnets cannot be
  deleted because they are still populated by the orphaned ports.
  The cleanup logic simply does not mirror the HM port creation logic; a toy
  sketch of the corrected check follows.
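
  A minimal sketch, with invented names, of that corrected check (delete the
  HM port for a subnet only when the last pool member in that subnet is
  removed):

  ```
  def hm_port_subnets_to_clean(removed_member, remaining_members):
      # The HM port of the removed member's subnet is only deletable
      # when no remaining pool member still lives in that subnet.
      still_used = any(m['subnet_id'] == removed_member['subnet_id']
                       for m in remaining_members)
      return [] if still_used else [removed_member['subnet_id']]

  members = [{'id': 'm1', 'subnet_id': 'subnet-a'},
             {'id': 'm2', 'subnet_id': 'subnet-a'},
             {'id': 'm3', 'subnet_id': 'subnet-b'}]
  first = members.pop(0)
  print(hm_port_subnets_to_clean(first, members))   # [] - m2 still on subnet-a
  last_b = members.pop(1)
  print(hm_port_subnets_to_clean(last_b, members))  # ['subnet-b'] - last one
  ```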

  Mitigating factors:
  * openstack loadbalancer delete --cascade does clean up all hm ports.
  * Deleting the health mon before removing the members also avoids the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2062965/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2006490] Fix included in openstack/glance zed-eom

2024-04-29 Thread OpenStack Infra
This issue was fixed in the openstack/glance zed-eom  release.

** Changed in: glance/zed
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/2006490

Title:
  Limit CaptureRegion sizes in format_inspector for VMDK and VHDX

Status in Glance:
  Fix Released
Status in Glance xena series:
  New
Status in Glance yoga series:
  In Progress
Status in Glance zed series:
  Fix Released

Bug description:
  VMDK:
  When parsing a VMDK file to calculate its size, the format_inspector
  determines the location of the Descriptor section by reading two
  uint64 from the headers of the file and uses them to create the
  descriptor CaptureRegion.

  It would be possible to craft a VMDK file that commands the
  format_inspector to create a very big CaptureRegion, thus exhausting
  resources on the glance-api process.

  VHDX:
  It is a bit more involved, but similar: when looking for the
  VIRTUAL_DISK_SIZE metadata, the format_inspector was creating an
  unbounded CaptureRegion.
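
  A hedged sketch (invented names, illustrative cap) of the mitigation
  shape: clamp any length read from untrusted image headers before creating
  a capture region, so a crafted header fails fast instead of exhausting
  memory:

  ```
  MAX_REGION_LENGTH = 1 << 20  # illustrative 1 MiB cap; real limits vary

  class BoundedCaptureRegion:
      def __init__(self, offset, length):
          if not 0 <= length <= MAX_REGION_LENGTH:
              raise ValueError('capture region length %d exceeds cap %d'
                               % (length, MAX_REGION_LENGTH))
          self.offset = offset
          self.length = length

  # A crafted VMDK descriptor size such as 2**63 now fails fast rather
  # than exhausting resources in the glance-api process:
  try:
      BoundedCaptureRegion(offset=512, length=2**63)
  except ValueError as exc:
      print(exc)
  ```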

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/2006490/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365438] Re: Add conntrackd support to HA routers in L3 agent

2024-04-29 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.opendev.org/c/openstack/neutron/+/917430

** Changed in: neutron
   Status: Expired => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365438

Title:
  Add conntrackd support to HA routers in L3 agent

Status in neutron:
  In Progress

Bug description:
  Open TCP sessions are discarded during HA router failover. Adding
  conntrackd support should solve this issue.

  Some work has already been done in the following two patches:
  https://review.openstack.org/#/c/71586/
  https://review.openstack.org/#/c/80332/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365438/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2054435] Re: Testing errors when using new netaddr library

2024-04-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/917260
Committed: 
https://opendev.org/openstack/neutron/commit/ae704369b5390d939db653c9bb0cbc965f7a761a
Submitter: "Zuul (22348)"
Branch:master

commit ae704369b5390d939db653c9bb0cbc965f7a761a
Author: Ihar Hrachyshka 
Date:   Fri Apr 26 22:50:10 2024 -0400

tests: fix invalid `addr` mock values in ovs firewall suite

The tests were passing a string of IPv4 address where the actual code
was expecting a tuple of (IPv4 address, MAC address). The type of the
`addr` changed from a string to a tuple in:

I2e3aa7c400d7bb17cc117b65faaa160b41013dde

but the names of the variables and the test cases were not updated.

Tests still didn't fail till recently because addr[0] resulted in the
first character of the IPv4 address string (e.g. `1`), and was then
interpreted by the `netaddr` library as an address. This worked until
`netaddr>=1.0` started to enforce proper formats for passed IPv4
addresses - which broke the tests.

Closes-Bug: #2054435
Change-Id: Ib9594bc0611007efdbaf3219ccd44bbb37cfc627


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2054435

Title:
  Testing errors when using new netaddr library

Status in neutron:
  Fix Released

Bug description:
  Failures can be seen in
  
neutron.tests.unit.agent.linux.openvswitch_firewall.test_firewall.TestConjIPFlowManager
  unit tests with netaddr >= 1.0.0 (see e.g.
  https://zuul.opendev.org/t/openstack/build/3f23859f8ce44ebbb41eda01b76d1d3b):

  netaddr.core.AddrFormatError: '1' is not a valid IPv4 address string!

  The code being executed is

File 
"/home/zuul/src/opendev.org/openstack/neutron/neutron/agent/linux/openvswitch_firewall/firewall.py",
 line 426, in _update_flows_for_vlan_subr
  removed_ips = set([str(netaddr.IPNetwork(addr[0]).cidr) for addr in (
^^^

  Debugging shows that at that moment addr == "10.22.3.4", so
  addr[0] == "1", which newer netaddr rejects as invalid; a minimal
  reproduction follows.
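
  A minimal reproduction, assuming netaddr>=1.0 is installed:

  ```
  import netaddr

  addr = ('10.22.3.4', 'fa:16:3e:00:00:01')  # the (IP, MAC) tuple expected
  print(netaddr.IPNetwork(addr[0]).cidr)     # 10.22.3.4/32, as intended

  bad = '10.22.3.4'                # the plain string the old tests passed
  try:
      netaddr.IPNetwork(bad[0])    # bad[0] == '1'
  except netaddr.AddrFormatError as exc:
      print(exc)  # "'1' is not a valid IPv4 address string!"
  ```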

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2054435/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2058908] Re: fix auto_scheduler_network understanding dhcp_agents_per_network

2024-04-23 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/910708
Committed: 
https://opendev.org/openstack/neutron/commit/1bc945f0d5f91f23ba6df617403b6b93ca1a4d98
Submitter: "Zuul (22348)"
Branch:master

commit 1bc945f0d5f91f23ba6df617403b6b93ca1a4d98
Author: Sahid Orentino Ferdjaoui 
Date:   Fri Mar 1 10:05:35 2024 +0100

dhcp: fix auto_scheduler_network understanding dhcp_agents_per_network

When using routed provider networks, a condition bypasses
dhcp_agents_per_network. As a result, in an env with 3 agents and
dhcp_agents_per_network=2, for a given network already well handled
by 2 agents, restarting the third agent makes it start handling the
network as well, leaving 3 agents handling the network.

Closes-bug: #2058908
Signed-off-by: Sahid Orentino Ferdjaoui 

Change-Id: Ia05a879b0ed88172694bd6bffc6f7eb0d36bb6b0


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058908

Title:
  fix auto_scheduler_network understanding dhcp_agents_per_network

Status in neutron:
  Fix Released

Bug description:
  When using routed provider networks, there is a condition that bypasses
  the option dhcp_agents_per_network. As a result, in an env with 3 agents
  and dhcp_agents_per_network=2, for a given network already well handled by
  2 agents, restarting the third agent makes it start handling the network
  as well, leaving 3 agents handling the network.

  The issue is in the auto_scheduler_network function; a sketch of the
  missing guard follows.
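
  A hypothetical guard illustrating the fix's intent (invented names):

  ```
  def should_self_schedule(hosting_agents, dhcp_agents_per_network):
      # A restarting agent must not pick up a network that already has
      # its full quota of DHCP agents, routed provider network or not.
      return len(hosting_agents) < dhcp_agents_per_network

  print(should_self_schedule(['agent-1', 'agent-2'], 2))  # False: quota met
  print(should_self_schedule(['agent-1'], 2))             # True: a slot free
  ```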

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058908/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2054799] Re: Issue with Project administration at Cloud Admin level

2024-04-18 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/910321
Committed: 
https://opendev.org/openstack/horizon/commit/ed768ab5071307ee15f95636ea548050cb894f9e
Submitter: "Zuul (22348)"
Branch:master

commit ed768ab5071307ee15f95636ea548050cb894f9e
Author: Zhang Hua 
Date:   Tue Feb 27 15:26:28 2024 +0800

Fix Users/Groups tab list when a domain context is set

The list of users assigned to a project becomes invisible when a domain 
context
is set in Horizon. If a domain context is set, the user list call should
provide a list of users within the specified domain context, rather than 
users
within the user's own domain.

The Groups tab of the project has the same problem.

Change-Id: Ia778317acc41fe589765e6cd04c7fe8cad2360ab
Closes-Bug: #2054799


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2054799

Title:
  Issue with Project administration at Cloud Admin level

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  We are not able to see the list of users assigned to a project in Horizon.
  Scenario:
  - Log in as Cloud Admin
  - Set Domain Context (k8s)
  - Go to projects section
  - Click on project Permissions_Roles_Test
  - Go to Users

  Expectation: Get a table with the users assigned to this project.
  Result: Get an error - https://i.imgur.com/TminwUy.png

  
  [Test steps]

  1, Create an ordinary openstack test env with horizon.

  2, Prepare some test data (eg: one domain k8s, one project k8s, and
  one user k8s-admin with the role k8s-admin-role)

  openstack domain create k8s
  openstack role create k8s-admin-role
  openstack project create --domain k8s k8s
  openstack user create --project-domain k8s --project k8s --domain k8s 
--password password k8s-admin
  openstack role add --user k8s-admin --user-domain k8s --project k8s 
--project-domain k8s k8s-admin-role
  $ openstack role assignment list --project k8s --names
  
++---+---+-+++---+
  | Role   | User  | Group | Project | Domain | System | 
Inherited |
  
++---+---+-+++---+
  | k8s-admin-role | k8s-admin@k8s |   | k8s@k8s ||| False  
   |
  
++---+---+-+++---+

  3, Log in horizon dashboard with admin user(eg:
  admin/openstack/admin_domain).

  4, Click 'Identity -> Domains' to set domain context to the domain
  'k8s'.

  5, Click 'Identity -> Project -> k8s project -> Users'.

  6, This is the result; it says 'Unable to display the users of this
  project' - https://i.imgur.com/TminwUy.png

  7, These are some logs

  ==> /var/log/apache2/error.log <==
  [Fri Feb 23 10:03:12.201024 2024] [wsgi:error] [pid 47342:tid 
140254008985152] [remote 10.5.3.120:58978] Recoverable error: 
'e900b8934d11458b8eb9db21671c1b11'
  ==> /var/log/apache2/ssl_access.log <==
  10.5.3.120 - - [23/Feb/2024:10:03:11 +] "GET 
/identity/07123041ee0544e0ab32e50dde780afd/detail/?tab=project_details__users 
HTTP/1.1" 200 1125 
"https://10.5.3.120/identity/07123041ee0544e0ab32e50dde780afd/detail/"; 
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) 
Chrome/119.0.0.0 Safari/537.36"

  
  [Some Analyses]

  This action will call this function in horizon [1].
  This function first gets a list of users (api.keystone.user_list) [2],
  then the role assignment list (api.keystone.get_project_users_roles) [3].
  Without a domain context set, this works fine.
  However, with a domain context set, the displayed project is in a
  different domain.
  The user list from [2] only contains users of the user's own domain, while
  the role assignment list [3] includes users in another domain, since the
  project is in another domain.

  From horizon's debug log, here is an example of user list:
  {"users": [{"email": "juju@localhost", "id": 
"8cd8f92ac2f94149a91488ad66f02382", "name": "admin", "domain_id": 
"103a4eb1712f4eb9873240d5a7f66599", "enabled": true, "password_expires_at": 
null, "options": {}, "links": {"self": 
"https://192.168.1.59:5000/v3/users/8cd8f92ac2f94149a91488ad66f02382"}}], 
"links": {"next": null, "self": "https://192.168.1.59:5000/v3/users";, 
"previous": null}}

  Here is an example of role assignment list:
  {"role_assignments": [{"links": {"assignment": 
"https://192.168.1.59:5000/v3/projects/82e250e8492b49a1a05467994d33ea1b/users/a70745ed9ac047ad88b917f24df3c873/roles/f606fafcb4fd47018aeffec2b07b7e84"},
 "scope": {"project": {"id": "82e250e8492b49a1a05467994d33ea1b"}}, "user": 
{"id": "a70745ed9ac047ad88b917f24df3c873"}, "role": {"id": 
"f606fafcb4fd47018aeffec2b07b7e84"}}, {"links": {"assignment": 
"https://192.168.1.59:5000/v3/project

[Yahoo-eng-team] [Bug 2060587] Re: [ML2][OVS] more precise flow table cleaning

2024-04-16 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/915302
Committed: 
https://opendev.org/openstack/neutron/commit/bac1b1f721e6b23da2063340827576fd9c59d0f4
Submitter: "Zuul (22348)"
Branch:master

commit bac1b1f721e6b23da2063340827576fd9c59d0f4
Author: LIU Yulong 
Date:   Tue Apr 9 09:11:03 2024 +0800

More precise flow table cleaning

OVS-agent wants to clean the flow tables one by one during restart,
but actually it does not. If one table has the same cookie as other
tables, all related flows will be cleaned at once.

This patch adds the table_id param to the related call
to limit the flow clean on one table at once.

Closes-Bug: #2060587
Change-Id: I266eb0f5115af718b91f930d759581616310999d


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2060587

Title:
  [ML2][OVS] more precise flow table cleaning

Status in neutron:
  Fix Released

Bug description:
  The OVS agent intends to clean the flow tables one by one during restart,
  but actually it does not. [1] If one table has the same cookie as other
  tables, all related flows are cleaned at once, which is rather
  heavy-handed. A sketch of the more precise match follows.

  
  [1] 
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ofswitch.py#L186
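
  A toy sketch (illustrative names, not the agent's actual API) of the idea:
  restrict stale-flow cleanup to one table at a time by matching on both
  cookie and table_id instead of cookie alone:

  ```
  def flows_to_delete(flows, stale_cookie, table_id):
      # Select flows carrying the stale cookie, but only in one table.
      return [f for f in flows
              if f['cookie'] == stale_cookie and f['table_id'] == table_id]

  flows = [{'cookie': 0xdead, 'table_id': 0},
           {'cookie': 0xdead, 'table_id': 60},
           {'cookie': 0xbeef, 'table_id': 60}]
  # Cleaning table 60 no longer wipes the table-0 flow with the same cookie:
  print(flows_to_delete(flows, 0xdead, 60))
  ```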

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2060587/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052821] Fix merged to neutron (master)

2024-04-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/909194
Committed: 
https://opendev.org/openstack/neutron/commit/25a1809964f4845d98d9b3d05148b764a65c9458
Submitter: "Zuul (22348)"
Branch:master

commit 25a1809964f4845d98d9b3d05148b764a65c9458
Author: Rodolfo Alonso Hernandez 
Date:   Sun Feb 11 18:50:55 2024 +

[OVN] "Logical_Router" pinned to chassis, OVN L3 scheduler

Pin a "Logical_Router" to a chassis when the gateway network (external
network) is tunnelled. When the external network is tunnelled, the
"Logical_Router_Port" acting as gateway port is not bound to any
chassis (the network has no physical provider network defined).

In that case, the router is pinned to a chassis instead. A
"HA_Chassis_Group" is created per router. The highest "HA_Chassis" of
this group is assigned to the router. If the gateway port is deleted,
the pinned chassis is removed from the "options" field. If the
router is deleted, the "HA_Chassis_Group" is deleted too.

NOTE: if a chassis belonging to the router "HA_Chassis_Group"
changes, the list of "HA_Chassis" will be updated in
``ChassisEvent.handle_ha_chassis_group_changes``. However, a
"HA_Chassis_Group" change is normally handled by OVN, when assigned.

But in this case we are using this artifact, as commented before,
to "manually assign" (from the core OVN point of view) the highest
priority "HA_Chassis" to the router (this upcoming functionality
will be implemented in core OVN). A new follow-up patch will be
pushed to provide HA functionality and update the "HA_Chassis"
assigned to the "Logical_Router" when the chassis list changes.

Partial-Bug: #2052821
Change-Id: I33555fc8a8441149b683ae68f1f10548ffb662a6


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052821

Title:
  [OVN] Pin a Logical_Router to a chassis when the external network is
  tunnelled

Status in neutron:
  Fix Released

Bug description:
  When a gateway network (external network) is added to a router (OVN
  Logical_Router), a gateway port (OVN Logical_Router_Port) is created.
  If the router GW network is a tunnelled network, the GW LRP won't be
  bound to any GW chassis; the network has no physical network thus
  there is no correspondence with any GW physical bridge. Check [1] for
  more information.

  In order to be able to send traffic through a GW chassis, the LR needs to
  be pinned to a chassis manually:
LR:options:chassis=

  Note: there is a proposal [3] to make this functionality a core OVN
  functionality, being able to assign a "HA_Chassis_Group" to a Logical
  Router. This LP bug wants to implement the same functionality but in
  the Neutron code.

  [1]https://review.opendev.org/c/openstack/neutron/+/908325

  References:
  [2]https://bugzilla.redhat.com/show_bug.cgi?id=2259161
  [3]https://issues.redhat.com/browse/FDP-365

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052821/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2042925] Re: DNS Integration throws exception log entries where warnings would be sufficient

2024-04-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/900212
Committed: 
https://opendev.org/openstack/neutron/commit/5fe5188ce513c928b45cbc890bed80d8d5a2106b
Submitter: "Zuul (22348)"
Branch:master

commit 5fe5188ce513c928b45cbc890bed80d8d5a2106b
Author: Jayce Houtman 
Date:   Mon Nov 6 17:23:44 2023 +0100

Change exception messages to error log messages for DNS integration.

Change non-harmful stack trace errors for dns_exc.DNSDomainNotFound and
dns_exc.DuplicateRecordSet to error log messages. This prevents the logs
from filling with stack traces where error messages would have been
sufficient.

Closes-Bug: #2042925
Change-Id: Icf1fff28bb560c506392f16c579de6d92cd56c23


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2042925

Title:
  DNS Integration throws exception log entries where warnings would be
  sufficient

Status in neutron:
  Fix Released

Bug description:
  
  The DNS Integration in the ML2 drivers generates exception LOG entries 
whenever dns_exc.DNSDomainNotFound and dns_exc.DuplicateRecordSet are triggered 
on the creation or deletion of DNS records. These exceptions occur if a 
duplicate record set is found or if the DNS domain is not located.

  The exceptions serve to alert the administrators that there was an
  issue with the record set's creation. However, I believe a warning
  would be sufficiently informative for the administrators to recognize
  that something did not proceed as anticipated, which would prevent
  stack traces in the logs. An error could also serve this purpose, but
  I think a warning is more suitable since the system remains
  operational when the exceptions are caught within Neutron.

  Example stack trace of a duplicate record set: 
https://jhcsmedia.com/?f=stack_trace_neutron_900212.txt
  The stack traces come from our deployment, which runs version 2023.1
  deployed with Kolla-Ansible. We use OpenStack Designate as our DNS driver.

  I have already submitted a review to modify the exceptions to
  warnings, which also eliminates the catching of
  dns_exc.DuplicateRecordSet when removing DNS records. This change is
  based on the premise that duplicate record sets should not exist
  initially. https://review.opendev.org/c/openstack/neutron/+/900212
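
  A minimal sketch of the proposed shape (stand-in exception class, invented
  names): catch the known non-harmful exception and emit a log line instead
  of letting the stack trace escape:

  ```
  import logging

  logging.basicConfig(level=logging.INFO)
  LOG = logging.getLogger(__name__)

  class DNSDomainNotFound(Exception):
      """Stand-in for the real dns_exc.DNSDomainNotFound."""

  def delete_recordset(designate_delete, zone, name):
      try:
          designate_delete(zone, name)
      except DNSDomainNotFound:
          # Log and continue: the condition is non-fatal, so one log
          # line replaces a full stack trace in the service log.
          LOG.warning('DNS domain %s not found while deleting recordset '
                      '%s, skipping', zone, name)

  def fake_delete(zone, name):
      raise DNSDomainNotFound()

  delete_recordset(fake_delete, 'example.org.', 'vm-1.example.org.')
  ```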

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2042925/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2055876] Re: [ovn-octavia-provider] OVN LB health checks for IPv6 not working

2024-04-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/911413
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/bd1137ad57b6e336800d701d6d52733abf968aa5
Submitter: "Zuul (22348)"
Branch:master

commit bd1137ad57b6e336800d701d6d52733abf968aa5
Author: Fernando Royo 
Date:   Wed Mar 6 09:56:30 2024 +0100

FIX OVN LB Health Monitor checks for IPv6 members

Since [1], health checks for IPv6 backend members are supported:
they are mapped into the ip_port_mappings field of the OVN LB entity
and translated to OVN SB DB Service_Monitor entries, the same way
as for IPv4 ones.

However, IPv6 backend members require being enclosed in [ ], and
this was not occurring, so they were not translated into entries
in the Service_Monitor table. This patch fixes this issue.

Furthermore, a one-time maintenance task has been developed to fix
those existing IPv6 Health Monitors directly upon the startup of
the ovn-octavia-provider component without requiring any action
by the administrator/user.

[1] 
https://github.com/ovn-org/ovn/commit/40a686e8e70f4141464f7fe2c949ea5b9bf29060

Closes-Bug: #2055876
Change-Id: I9b97aa9e6c8d601bc9e465e6aa8895dcc2666568


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055876

Title:
  [ovn-octavia-provider] OVN LB health checks for IPv6 not working

Status in neutron:
  Fix Released

Bug description:
  When creating a health monitor for an IPv6 load balancer, the members
  are not correctly checked. Upon further analysis, the problem is
  related to there being no entry in the OVN SB database
  (Service_Monitor table) to map the LB health checks created in the OVN NB
  database. A sketch of the bracket-enclosing fix follows.
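
  A minimal sketch (hypothetical helper) of the formatting rule the fix
  enforces when building backend mappings:

  ```
  import netaddr

  def format_backend(ip, port):
      # IPv6 addresses must be enclosed in brackets for the OVN
      # Service_Monitor mapping to be created; IPv4 stays bare.
      if netaddr.IPAddress(ip).version == 6:
          return '[%s]:%s' % (ip, port)
      return '%s:%s' % (ip, port)

  print(format_backend('10.0.0.5', 31602))
  print(format_backend('fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac', 31602))
  ```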

  [root@controller-0 /]# ovn-nbctl list load_balancer   


  _uuid   : b67d67ef-d4b6-4c84-95a4-21f211008525
  external_ids: {enabled=True, 
listener_23b0368b-4b69-442d-8e7a-118fac8bc3cf="8082:pool_3f820089-7769-46ee-92ea-7e1c15f03c98",
 lr_ref=neutron-94f17de0-91bc-4b3d-b808-e2cbdf963c66, 
ls_refs="{\"neutron-eba8acfd-b0e4-4874-b106-fa8542a8
  2c4e\": 7}", 
"neutron:member_status"="{\"0db4a0e0-23ed-4ee8-8283-2e5784f172ae\": \"ONLINE\", 
\"8dfc2bdc-193e-4e61-adbf-503e36e3aab9\": \"ONLINE\", 
\"c1c0b48d-a477-4fe1-965e-60da20e34cc1\": \"ONLINE\", 
\"6f2b2e6a-18d0-4783-b871-0c424e8397c
  0\": \"ONLINE\", \"49b28a9f-07b9-4d9f-8c7e-8cf5161be031\": \"ONLINE\", 
\"54691261-3f18-4afe-8239-ed0b0c6082e2\": \"ONLINE\"}", 
"neutron:vip"="fd2e:6f44:5dd8:c956:f816:3eff:fe56:d5a7", 
"neutron:vip_port_id"="489cbe15-de07-4f1e-93db-a8d552380653", 
"octavia:healthmonitors"="[\"195b1c33-cfd4-4994-98cb-240103a0b653\"]", 
pool_3f820089-7769-46ee-92ea-7e1c15f03c98="member_0db4a0e0-23ed-4ee8-8283-2e5784f172ae_fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_8dfc2bdc-193e-4e61-adbf-503e36e3aab9_fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_c1c0b48d-a477-4fe1-965e-60da20e34cc1_fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_6f2b2e6a-18d0-4783-b871-0c424e8397c0_fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,member_49b28a9f-07b9-4d9f-8c7e-8cf5161be031_fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6,m
 
ember_54691261-3f18-4afe-8239-ed0b0c6082e2_fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218:31602_38898007-e0de-4cdf-b83e-ec8c5113bfd6"}
   
  health_check: [04b18ea0-0f88-43fa-b759-aba5fde256bf]  

   
  ip_port_mappings: 
{"fd2e:6f44:5dd8:c956:f816:3eff:fe06:cf4a"="c5f06200-d036-432a-b1f2-8266075cfb0e:fd2e:6f44:5dd8:c956:f816:3eff:fed6:32af",
 
"fd2e:6f44:5dd8:c956:f816:3eff:fe09:1b3e"="0f8c9ee5-e322-4101-a74a-e9dd8b4db132:fd2e:6f44:5dd
  8:c956:f816:3eff:fed6:32af", 
"fd2e:6f44:5dd8:c956:f816:3eff:fe2a:1eac"="2c60050e-1732-49e6-b194-3981d015fa5e:fd2e:6f44:5dd8:c956:f816:3eff:fed6:32af",
 
"fd2e:6f44:5dd8:c956:f816:3eff:fe46:52d2"="5609e438-b02e-48b0-a188-1bc53be90835:fd2e:6f
  44:5dd8:c956:f816:3eff:fed6:32af", 
"fd2e:6f44:5dd8:c956:f816:3eff:fe48:1ba0"="f148f0c3-8d0d-4d00-94b3-bbb3b68cc8d8:fd2e:6f44:5dd8:c956:f816:3eff:fed6:32af",
 
"fd2e:6f44:5dd8:c956:f816:3eff:fea4:1218"="6fa3c8cc-c1c7-45a7-a445-b7c50324a469:f
  d2e:6f44:5dd8:c956:f816:3eff:fed6:32af"}  
   

[Yahoo-eng-team] [Bug 2058400] Re: [ovn-octavia-provider] functional jobs are unstable

2024-04-09 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/ovn-octavia-provider/+/915266
Committed: 
https://opendev.org/openstack/ovn-octavia-provider/commit/8d3e5c7ed3064abac33d152ed74e2f0427d4e676
Submitter: "Zuul (22348)"
Branch:master

commit 8d3e5c7ed3064abac33d152ed74e2f0427d4e676
Author: Fernando Royo 
Date:   Mon Apr 8 13:06:16 2024 +0200

Adding isolation to functional tests

Due to events leaking between different tests run in the same
worker, this patch adds the '--isolated' flag to stestr so that
every test runs on a unique worker and in a separate stestr
process.

Closes-Bug: #2058400

Change-Id: I9ff255c110774590f53811162572eef596ff9a04


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058400

Title:
  [ovn-octavia-provider] functional jobs are unstable

Status in neutron:
  Fix Released

Bug description:
  Functional tests of ovn-octavia-provider are very unstable and fail quite
  often.
  This has been caught in 
https://review.opendev.org/c/openstack/ovn-octavia-provider/+/907582 .

  According to the job history -release is more unstable than -master

  https://zuul.opendev.org/t/openstack/builds?job_name=ovn-octavia-
  provider-functional-release&project=openstack%2Fovn-octavia-
  provider&change=907582&skip=0

  https://zuul.opendev.org/t/openstack/builds?job_name=ovn-octavia-
  provider-functional-master&project=openstack%2Fovn-octavia-
  provider&change=907582&skip=0

  The failures seen there are quite random and I could not find any test
  case (or error) consistently failing.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058400/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2045811] Re: neutron-ovn-db-sync-util can fail with KeyError

2024-04-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/902841
Committed: 
https://opendev.org/openstack/neutron/commit/e4323e1f209ea1c63fe7af5275ea2b96f52b8740
Submitter: "Zuul (22348)"
Branch:master

commit e4323e1f209ea1c63fe7af5275ea2b96f52b8740
Author: Brian Haley 
Date:   Wed Dec 6 16:37:24 2023 -0500

Fix KeyError failure in _sync_subnet_dhcp_options()

If the neutron-ovn-db-sync-util is run while neutron-server
is active (which is not recommended), it can randomly fail
if there are active API calls in flight to create networks
and/or subnets.

Skip the subnet and log a warning if detected.

Closes-bug: #2045811
Change-Id: Ic5d9608277dd5c4881b3e4b494e1864be0bed1b4


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2045811

Title:
  neutron-ovn-db-sync-util can fail with KeyError

Status in neutron:
  Fix Released

Bug description:
  If the neutron-ovn-db-sync-util is run while neutron-server is active
  (which is not recommended), it can randomly fail if there are active
  API calls in flight to create networks and/or subnets.

  This is an example traceback I've seen many times in a production
  environment:

  WARNING neutron.plugins.ml2.drivers.ovn.mech_driver.ovsdb.ovn_db_sync 
[req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - - - - -] DHCP options for subnet 
0662e4fd-f8b4-4d29-8ba7-5846bd19e45d is present in Neutron but out of sync for 
OVN
  CRITICAL neutron_ovn_db_sync_util [req-7fc12422-6fae-4ec9-98bc-8a114f30c9e3 - 
- - - -] Unhandled error: KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
  ERROR neutron_ovn_db_sync_util Traceback (most recent call last):
  ERROR neutron_ovn_db_sync_util   File "/usr/bin/neutron-ovn-db-sync-util", 
line 10, in 
  ERROR neutron_ovn_db_sync_util sys.exit(main())
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/cmd/ovn/neutron_ovn_db_sync_util.py", 
line 219, in main
  ERROR neutron_ovn_db_sync_util synchronizer.do_sync()
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 98, in do_sync
  ERROR neutron_ovn_db_sync_util self.sync_networks_ports_and_dhcp_opts(ctx)
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 871, in sync_networks_ports_and_dhcp_opts
  ERROR neutron_ovn_db_sync_util self._sync_subnet_dhcp_options(
  ERROR neutron_ovn_db_sync_util   File 
"/usr/lib/python3/dist-packages/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py",
 line 645, in _sync_subnet_dhcp_options
  ERROR neutron_ovn_db_sync_util network = 
db_networks[utils.ovn_name(subnet['network_id'])]
  ERROR neutron_ovn_db_sync_util KeyError: 
'neutron-93ad1c21-d2cf-448a-8fae-21c71f44dc5c'
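
  A minimal sketch, with dict-shaped stand-ins, of the guard the fix adds:
  skip (and log) subnets whose network is missing from the snapshot instead
  of letting the KeyError escape:

  ```
  import logging

  logging.basicConfig()
  LOG = logging.getLogger(__name__)

  def sync_subnet_dhcp_options(subnets, db_networks):
      for subnet in subnets:
          key = 'neutron-%s' % subnet['network_id']
          network = db_networks.get(key)
          if network is None:
              # The network appeared after db_networks was snapshotted
              # (an in-flight API call), so skip it in this run.
              LOG.warning('Network %s for subnet %s not found during '
                          'sync, skipping', key, subnet['id'])
              continue
          # ... proceed to sync this subnet's DHCP options ...

  sync_subnet_dhcp_options(
      [{'id': 'subnet-1',
        'network_id': '93ad1c21-d2cf-448a-8fae-21c71f44dc5c'}],
      db_networks={})
  ```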

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2045811/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1864374] Re: ml2 ovs does not flush iptables switching to FW ovs

2024-04-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/911957
Committed: 
https://opendev.org/openstack/neutron/commit/46245c015403c5770d2bd9b6d08f52f89fd6aa40
Submitter: "Zuul (22348)"
Branch:master

commit 46245c015403c5770d2bd9b6d08f52f89fd6aa40
Author: Brian Haley 
Date:   Thu Mar 7 14:00:21 2024 -0500

Add note on iptables cleanup after OVS firewall migration

Add an item to the instructions on iptables-to-OVS
firewall migration stating that the admin should clean up
any stale iptables rules after completion. Exactly how an
administrator might do that is out of scope for our
documents.

Closes-bug: #1864374
Change-Id: Ie1bf6b82e57a00f61640a131a29d897a9cde4629


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1864374

Title:
  ml2 ovs does not flush iptables switching to FW ovs

Status in neutron:
  Fix Released

Bug description:
  hi,

  When switching the fw engine from iptables to openvswitch and restarting
  the agent, the old iptables rules are not flushed. One has to clean that
  up by hand or reboot. This is not documented anywhere afaik and it
  causes very tricky issues that are hard to detect.

  
   OVS with FW = openvswitch
  # iptables -L | grep neutron
  < returns nothing >

   switching to FW = iptables and restart agent
  # iptables -S | grep neutron
  -N neutron-filter-top
  -N neutron-openvswi-FORWARD
  -N neutron-openvswi-INPUT
  -N neutron-openvswi-OUTPUT
  -N neutron-openvswi-local
  -N neutron-openvswi-sg-chain
  -N neutron-openvswi-sg-fallback
  -A INPUT -j neutron-openvswi-INPUT
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-openvswi-FORWARD
  -A OUTPUT -j neutron-filter-top
  -A OUTPUT -j neutron-openvswi-OUTPUT
  -A neutron-filter-top -j neutron-openvswi-local
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tapc02b9364-d2 
--physdev-is-bridged -m comment --comment "Accept all packets when port 
security is disabled." -j ACCEPT
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tapc02b9364-d2 
--physdev-is-bridged -m comment --comment "Accept all packets when port 
security is disabled." -j ACCEPT
  -A neutron-openvswi-INPUT -m physdev --physdev-in tapc02b9364-d2 
--physdev-is-bridged -m comment --comment "Accept all packets when port 
security is disabled." -j ACCEPT
  -A neutron-openvswi-sg-chain -j ACCEPT
  -A neutron-openvswi-sg-fallback -m comment --comment "Default drop rule for 
unmatched traffic." -j DROP

  
   switching back to FW = ovs and restarting the agent, the iptables rules
are still there
  # iptables -S | grep neutron
  -N neutron-filter-top
  -N neutron-openvswi-FORWARD
  -N neutron-openvswi-INPUT
  -N neutron-openvswi-OUTPUT
  -N neutron-openvswi-local
  -N neutron-openvswi-sg-chain
  -N neutron-openvswi-sg-fallback
  -A INPUT -j neutron-openvswi-INPUT
  -A FORWARD -j neutron-filter-top
  -A FORWARD -j neutron-openvswi-FORWARD
  -A OUTPUT -j neutron-filter-top
  -A OUTPUT -j neutron-openvswi-OUTPUT
  -A neutron-filter-top -j neutron-openvswi-local
  -A neutron-openvswi-FORWARD -m physdev --physdev-out tapc02b9364-d2 
--physdev-is-bridged -m comment --comment "Accept all packets when port 
security is disabled." -j ACCEPT
  -A neutron-openvswi-FORWARD -m physdev --physdev-in tapc02b9364-d2 
--physdev-is-bridged -m comment --comment "Accept all packets when port 
security is disabled." -j ACCEPT
  -A neutron-openvswi-INPUT -m physdev --physdev-in tapc02b9364-d2 
--physdev-is-bridged -m comment --comment "Accept all packets when port 
security is disabled." -j ACCEPT
  -A neutron-openvswi-sg-chain -j ACCEPT
  -A neutron-openvswi-sg-fallback -m comment --comment "Default drop rule for 
unmatched traffic." -j DROP

  ### Expected behavior #
  The agent should check which FW engine is in use and whether there is
  something to clean up, i.e.:
  if config fw = ovs, check and clean up iptables rules
  if config fw = iptables, check and clean up ovs fw flows
  A sketch of that check follows.
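
  A hypothetical startup check (invented names) in the spirit of that
  suggestion:

  ```
  def stale_rules_to_flush(configured_driver, iptables_rules, ovs_flows):
      # Flush leftovers of whichever firewall driver is NOT configured.
      if configured_driver == 'openvswitch':
          return [r for r in iptables_rules if 'neutron-openvswi' in r]
      if configured_driver == 'iptables_hybrid':
          return [f for f in ovs_flows if f.get('owner') == 'ovs-firewall']
      return []

  rules = ['-N neutron-openvswi-FORWARD', '-A INPUT -j ACCEPT']
  print(stale_rules_to_flush('openvswitch', rules, []))  # the stale chain
  ```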

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1864374/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2049615] Re: multisegments: cleaning DHCP process for segment 0 should happen first

2024-04-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/905617
Committed: 
https://opendev.org/openstack/neutron/commit/5453c92a2e777b9a41989cc21d6064ffe4711e8d
Submitter: "Zuul (22348)"
Branch:master

commit 5453c92a2e777b9a41989cc21d6064ffe4711e8d
Author: Sahid Orentino Ferdjaoui 
Date:   Mon Jan 15 17:11:23 2024 +0100

dhcp: ensure that cleaning DHCP process with one segment happens first

Previously, the code that cleans up old DHCP processes for a network
before creating new ones supporting multiple segments per network
could potentially not be executed first. Since disabling also cleans
the namespace, this could lead to the network setup being destroyed
after it had been completed.

This change moves the part that cleans up the old DHCP setup to ensure
it is executed first.

Closes-bug: #2049615
Signed-off-by: Sahid Orentino Ferdjaoui 

Change-Id: Iecdb2d81ee077c9b9057d0708c5c88e159970039


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2049615

Title:
  multisegments: cleaning DHCP process for segment 0 should happen first

Status in neutron:
  Fix Released

Bug description:
  With the new multi-segment support, some code has been added to
  clean up the old DHCP setup for a network. That cleanup should happen
  first, for segment index == 0.

  As the list of segments for a given network does not come ordered by
  segment index, the setup for segment index 1 can be processed before
  index 0, which means it will be destroyed by the cleanup, resulting
  in a missing setup. A sketch of the ordering constraint follows.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2049615/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1997352] Re: When REBUILDING from UEFI to non-UEFI instance ends up in ERROR state

2024-04-05 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/906380
Committed: 
https://opendev.org/openstack/nova/commit/406d590a364d2c3ebc91e5f28f94011b158459d2
Submitter: "Zuul (22348)"
Branch:master

commit 406d590a364d2c3ebc91e5f28f94011b158459d2
Author: Simon Hensel 
Date:   Tue Jan 23 16:16:17 2024 +0100

Always delete NVRAM files when deleting instances

When deleting an instance, always send VIR_DOMAIN_UNDEFINE_NVRAM to
delete the NVRAM file, regardless of whether the image is of type UEFI.
This prevents a bug when rebuilding an instance from an UEFI image to a
non-UEFI image.

Closes-Bug: #1997352

Change-Id: I24648f5b7895bf5d093f222b6c6e364becbb531f
Signed-off-by: Simon Hensel 
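
A hedged sketch of the behavior described above (not Nova's exact code,
which may carry additional undefine flags):

```
import libvirt

def undefine_domain(domain):
    # Always request NVRAM removal so rebuilding from a UEFI image to a
    # non-UEFI one does not fail with "cannot undefine domain with
    # nvram"; fall back for hypervisors that reject the flag.
    try:
        domain.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)
    except libvirt.libvirtError:
        domain.undefine()
```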


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1997352

Title:
  When REBUILDING from UEFI to non-UEFI instance ends up in ERROR state

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  If a UEFI instance is REBUILT using a non-UEFI image as a
  replacement, e.g. via:

  # openstack server create --flavor c4.2xlarge --image
  ubuntu-22.04-x86_64-uefi --network mynetwork --key-name mykey ubuntu-
  uefi-test --security-group default

  # openstack server rebuild --image ubuntu-22.04-x86_64 ubuntu-uefi-
  test

  
  The instance ends up in an error state:

  ```
  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/nova/virt/libvirt/guest.py", line 285, 
in delete_configuration
  self._domain.undefineFlags(flags)
File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in 
proxy_call
  rv = execute(f, *args, **kwargs)
File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in 
execute
  six.reraise(c, e, tb)
File "/usr/lib/python3/dist-packages/six.py", line 719, in reraise
  raise value
File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker
  rv = meth(*args, **kwargs)
File "/usr/lib/python3/dist-packages/libvirt.py", line 2924, in 
undefineFlags 
  if ret == -1: raise libvirtError (\'virDomainUndefineFlags() failed\', 
dom=self)
  libvirt.libvirtError: Requested operation is not valid: cannot undefine 
domain with nvram

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 200, in 
decorated_function
  return function(self, context, *args, **kwargs)
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3095, 
in terminate_instance
  do_terminate_instance(instance, bdms)
File "/usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py", line 
360, in inner
  return f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3093, 
in do_terminate_instance
  self._set_instance_obj_error_state(instance)
File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in 
__exit__
  self.force_reraise()
File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
  raise self.value
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3083, 
in do_terminate_instance
  self._delete_instance(context, instance, bdms)
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3018, 
in _delete_instance
  self._shutdown_instance(context, instance, bdms)
File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2910, 
in _shutdown_instance
  self._try_deallocate_network(context, instance,
File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in 
__exit__
  self.force_reraise()
File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in 
force_reraise
  raise self.value
   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2897, in 
_shutdown_instance
  self.driver.destroy(context, instance, network_info,
File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 
1423, in destroy
  self.cleanup(context, instance, network_info, block_device_info,
File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 
1493, in cleanup
  return self._cleanup(
File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 
1585, in _cleanup
  self._undefine_domain(instance)
File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 
1442, in _undefine_domain
  LOG.error(\'Error from libvirt during undefine. \'
File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in 
__exit__
 

[Yahoo-eng-team] [Bug 2048493] Re: Horizon exposes secrets via volume transfer URL

2024-04-04 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/914104
Committed: 
https://opendev.org/openstack/horizon/commit/ccef197e038a2cd3aa36fd0961686163c8524306
Submitter: "Zuul (22348)"
Branch:master

commit ccef197e038a2cd3aa36fd0961686163c8524306
Author: Radomir Dopieralski 
Date:   Mon Mar 25 12:10:11 2024 +0100

Don't pass the auth_key for volume transfer in the URL

Instead we pass it as data in the POST request.

Closes-Bug: #2048493

Change-Id: I9085eb146b8f013909f6369b731c076aba3216ab


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/2048493

Title:
  Horizon exposes secrets via volume transfer URL

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  Hey folks,

  During our latest annual penetration test of our OpenStack based
  cloud, it was discovered that Horizon, when creating a volume transfer
  to another project, provides a link to download the transfer
  credentials to a file which can then be saved. This link includes the
  transfer secret aka authorisation key, and is sent as a GET to the
  server.  The server code parses the URL, pulls out the UUID and
authorisation key and then renders a template to be returned. The use
  of GET here exposes the authorisation key.

  As you're no doubt aware, sensitive information contained within URLs
  can be recorded in multiple locations, including the user’s web
  browser, the web server, and any forward or reverse proxy servers
  situated between the two endpoints. In addition, URLs are commonly
  displayed on-screen, bookmarked, or shared via email by users. When
  users follow off-site links, these URLs may also be revealed to third
  parties through the Referrer header. Therefore, inserting credentials
  into a URL heightens the vulnerability of them being intercepted and
  subsequently exploited by potential attackers (in this case the
  Referrer header is unlikely).

  To reproduce:

  1. Configure a browser to use an intercepting proxy such as Burp.
  2. Log in to Horizon as a user with privileges to manage volumes.
  3. Navigate to: ”Volumes” > ”Volumes”.
  4. Choose a volume then select the ”Create Transfer” action.
  5. Provide a transfer name then click the ”Create Volume Transfer” button.
  6. Observe that the ”Transfer ID” and ”Authorisation Key” are populated.
  7. Click the ”Download transfer credentials” button to download the 
credentials.
  8. Observe a URL like the following in Burp:
  GET / project / volumes /98fd<>66b0/ download_creds /363e<< 
redacted >>27c1 HTTP /1.1
  Host: dashboard.example.com

  This is made worse by the fact that Cinder doesn't expire transfer requests:
  
https://opendev.org/openstack/cinder/src/commit/4ee1bdaf648064adb6ad9f0e4fda6adc6ad1cbb6/cinder/transfer/api.py#L164

  Path match in Horizon:
  
https://opendev.org/openstack/horizon/src/commit/fb1a3e88daf479b8fc5edcf26995fb860c76d05c/openstack_dashboard/dashboards/project/volumes/urls.py#L66
 (and is present in master)

  Code in Horizon which generates the result:
  
https://opendev.org/openstack/horizon/src/commit/fb1a3e88daf479b8fc5edcf26995fb860c76d05c/openstack_dashboard/dashboards/project/volumes/views.py#L672
 (and is present in master)

  Ideally the generation of the creds file would happen fully in the
  browser; there is no need for a round trip to the server. If a round
  trip is necessary, then it should be via a POST and not include the
  transfer key in the URL, as sketched below.
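
  A hedged Django sketch (hypothetical names, not Horizon's actual view) of
  that direction: accept the authorisation key as POST form data so it never
  appears in the URL, logs, or Referrer header:

  ```
  from django import forms
  from django.http import HttpResponse
  from django.views.decorators.http import require_POST

  class TransferCredsForm(forms.Form):
      transfer_id = forms.CharField()
      auth_key = forms.CharField()

  @require_POST
  def download_transfer_creds(request):
      form = TransferCredsForm(request.POST)
      if not form.is_valid():
          return HttpResponse(status=400)
      body = ('Transfer ID: %(transfer_id)s\n'
              'Authorization Key: %(auth_key)s\n' % form.cleaned_data)
      response = HttpResponse(body, content_type='text/plain')
      response['Content-Disposition'] = 'attachment; filename="creds.txt"'
      return response
  ```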

  Kind regards,
  Andrew

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2048493/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1988297] Re: Unique constraint on external_id still created for table access_rule

2024-04-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/885463
Committed: 
https://opendev.org/openstack/keystone/commit/90dcff07c03ee60227b01f47d67fe9e5b1629593
Submitter: "Zuul (22348)"
Branch:master

commit 90dcff07c03ee60227b01f47d67fe9e5b1629593
Author: Christian Rohmann 
Date:   Wed Jun 7 14:49:35 2023 +0200

sql: Fixup for invalid unique constraint on external_id in access_rule table

A large set of invalid constraints was dropped with [1]. One of them was on
`external_id` in the access_rule table.

While the change made it into an Alembic revision with [2], the constraint
still exists in the schema, causing a new Alembic autogeneration to add it
again as a revision.

[1] https://review.opendev.org/c/openstack/keystone/+/851845
[2] 
https://opendev.org/openstack/keystone/commit/7d169870fe418b9aa5765dc2f413ecdf9c8f1d48#diff-26484e3f6683ce7557e17b67220003784ff84fbe

Closes-Bug: #1988297
Change-Id: I66626ba8771ef2aa8b3580fd3f5d15fd4b58ab48


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1988297

Title:
  Unique constraint on external_id still created for table access_rule

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Currently the primary key and an additional unique index are configured on
  the same column.
  This is why sqlalchemy logs a warning on a database migration, displaying
  the following information:

  ```
  /usr/lib/python3/dist-packages/pymysql/cursors.py:170: Warning: (1831, 
'Duplicate index `uniq_instances0uuid`. This is deprecated and will be 
disallowed in a future release')
  result = self._query(query)
  ```
  (This example is actually taken from the nova output, but it looks just
  the same for Keystone. The same issue actually exists within the Nova
  schemas, see bug https://bugs.launchpad.net/nova/+bug/1641185)

  From my understanding of the documentation of mysql (see [1] [2]) and
  postgres (see [3] [4]) a unique constraint, which is created in the
  first place, automatically creates an index for the column(s). So
  there should be no need to create an additional index for the same
  column:

  ```
  Table: access_rule 
(https://opendev.org/openstack/keystone/src/commit/7c2d0f589c8daf5c65a80ed20d1e7fbfcc282312/keystone/common/sql/migrations/versions/27e647c0fad4_initial_version.py#L120)

  Column: external_id
  Indexes:
  Unique Constraint: access_rule_external_id_key
  Index: external_id
  ```


  [1] 
https://dev.mysql.com/doc/refman/8.0/en/create-index.html#create-index-unique
  [2] https://dev.mysql.com/doc/refman/8.0/en/mysql-indexes.html
  [3] 
https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-UNIQUE-CONSTRAINTS
  [4] https://www.postgresql.org/docs/current/indexes-types.html
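
  A hedged Alembic sketch of the deduplication argued for above (placeholder
  revision identifiers; the merged fix may take a different shape):

  ```
  from alembic import op

  revision = 'xxxxxxxxxxxx'       # placeholder
  down_revision = 'yyyyyyyyyyyy'  # placeholder

  def upgrade():
      # Drop the redundant standalone index; the unique constraint alone
      # already indexes access_rule.external_id.
      op.drop_index('external_id', table_name='access_rule')

  def downgrade():
      op.create_index('external_id', 'access_rule', ['external_id'])
  ```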

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1988297/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287418] Re: Compound sorting should be more obvious

2024-03-28 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/914421
Committed: 
https://opendev.org/openstack/horizon/commit/04d6edb38a5382e90b6f3071492ea1b1f6387f3f
Submitter: "Zuul (22348)"
Branch:master

commit 04d6edb38a5382e90b6f3071492ea1b1f6387f3f
Author: AgnesNM 
Date:   Wed Mar 27 09:05:36 2024 +0300

Include compound sorting information in docs

Compound sorting is an existing feature on the Horizon dashboard.
It is not obvious, however.
This change brings the feature to the attention of users. It also
guides them in how to use the feature when sorting Horizon
DataTables.

DocImpact
Closes-Bug: #1287418
Implements: compound sorting

Change-Id: I810e863e01ca54f6751e1608e99ce97833597aff


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1287418

Title:
  Compound sorting should be more obvious

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The tablesorter plugin allows users to compound-sort when they hold down
  the SHIFT key. However, this is not an obvious feature and it is not
  documented anywhere. I recommend that we add this to horizon's
  documentation.

  Having a table header sort indicator will also make this more obvious,
  and allow the user to track compound sorting. Related bug#1279578

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1287418/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2016346] Re: nova-manage image_property show throws unexpected keyword

2024-03-27 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/880557
Committed: 
https://opendev.org/openstack/nova/commit/1c02c0da1702ab1f58e930782a8866ed683c3c7d
Submitter: "Zuul (22348)"
Branch:master

commit 1c02c0da1702ab1f58e930782a8866ed683c3c7d
Author: Robert Breker 
Date:   Fri Apr 14 23:24:02 2023 +0100

Fix nova-manage image_property show unexpected keyword

Reproduction steps:
1. Execute: nova-manage image_property show  \
hw_vif_multiqueue_enabled
2. Observe:
  An error has occurred:
  Traceback (most recent call last):
File "/var/lib/kolla/venv/lib/python3.9/
  site-packages/nova/cmd/manage.py", line 3394, in main
  ret = fn(*fn_args, **fn_kwargs)
  TypeError: show() got an unexpected keyword argument 'property'

Change-Id: I1349b880934ad9f44a943cf7de324d7338619d2e
Closes-Bug: #2016346


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2016346

Title:
  nova-manage image_property show throws unexpected keyword

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  The `nova-manage image_property show` CLI command passes in an invalid 
parameter.
  It passes in `property` while `image_property` is expected by the method.

  
  Steps to reproduce
  ==
  1. Execute: nova-manage image_property show 
23fb2a96-2739-4dda-8f05-aa543bbc305f hw_vif_multiqueue_enabled
  2. Observe:
An error has occurred:
Traceback (most recent call last):
  File 
"/var/lib/kolla/venv/lib/python3.9/site-packages/nova/cmd/manage.py", line 
3394, in main
ret = fn(*fn_args, **fn_kwargs)
TypeError: show() got an unexpected keyword argument 'property'
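
  A minimal reproduction of the failure mode, with invented names (the fix
  aligns the passed keyword with the handler's expected `image_property`):

  ```
  def show(image_uuid, image_property):
      return (image_uuid, image_property)

  kwargs = {'image_uuid': '23fb2a96-2739-4dda-8f05-aa543bbc305f',
            'property': 'hw_vif_multiqueue_enabled'}
  try:
      show(**kwargs)
  except TypeError as exc:
      print(exc)  # show() got an unexpected keyword argument 'property'

  kwargs['image_property'] = kwargs.pop('property')  # essence of the fix
  print(show(**kwargs))
  ```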

  Environment
  ===
  Zed - nova.26.0.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2016346/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2057698] Re: Concurrent routerroute update fails on deletion with AttributeError

2024-03-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/912629
Committed: 
https://opendev.org/openstack/neutron/commit/27b2f22df10e6e41c6fc4e1ce7f839aeb3dc3e13
Submitter: "Zuul (22348)"
Branch:master

commit 27b2f22df10e6e41c6fc4e1ce7f839aeb3dc3e13
Author: Sebastian Lohff 
Date:   Tue Mar 12 19:34:17 2024 +0100

Don't delete already deleted extra router routes

When handling the deletion of extra routes, we need to handle the case
where the route has already been deleted by another call between the
time we fetched the extra routes and the time we try to delete them.
This is a classic race condition when two calls try to update the
routes of a router at the same time. The default MariaDB/MySQL
transaction isolation level does not suffice to prevent this scenario.
Directly deleting the route without fetching it first solves the
problem.

Change-Id: Ie8238310569eb7c1c53296195800bef5c9cb92a3
Closes-Bug: #2057698


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2057698

Title:
  Concurrent routerroute update fails on deletion with AttributeError

Status in neutron:
  Fix Released

Bug description:
  When updating a router and providing a set of extra routes /
  routerroutes that result in some routes being deleted, it might happen
  that two workers fetch the routes at the same time and then both try
  to delete the route. As the route is fetched before deletion, in one
  of the two workers the get_object() will return None, on which
  delete() is called, resulting in an AttributeError:

  AttributeError: 'NoneType' object has no attribute 'delete'

  The request is not fulfilled properly and a 500 is returned to the
  user.

  This was observed on neutron yoga, though the same code (plus a
  breaking test) seems to confirm this on current master.
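
  The following self-contained sketch illustrates the fetch-then-delete
  race and the shape of the fix (FakeStore and the key tuple are ours;
  the real objects are neutron's RouterRoute OVOs):

    class FakeStore:
        def __init__(self):
            self.routes = {('10.0.0.0/24', '192.168.0.1')}

        def get_object(self, key):
            return key if key in self.routes else None

        def delete(self, key):
            self.routes.discard(key)

    store = FakeStore()
    key = ('10.0.0.0/24', '192.168.0.1')

    store.delete(key)              # worker A deletes the route first
    stale = store.get_object(key)  # worker B's fetch now returns None
    # stale.delete() would raise:
    # AttributeError: 'NoneType' object has no attribute 'delete'

    # The fix deletes directly by filter, so "already gone" is a no-op
    # instead of an error:
    store.delete(key)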

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2057698/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2059032] Re: Neutron metadata service returns http code 500 if nova metadata service is down

2024-03-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/914154
Committed: 
https://opendev.org/openstack/neutron/commit/6395b4fe8ed99855853587fa93cb59fd2691aed5
Submitter: "Zuul (22348)"
Branch:master

commit 6395b4fe8ed99855853587fa93cb59fd2691aed5
Author: Anton Kurbatov 
Date:   Mon Mar 25 18:49:52 2024 +

Fixing the 500 HTTP code in the metadata service if Nova is down

If the Nova metadata service is unavailable, the requests.request()
function may raise a ConnectionError. This results in the upper code
returning a 500 HTTP status code to the user along with a traceback.
Let's handle this scenario and instead return a 503 HTTP status code
(service unavailable).

If the Nova service is down and is behind another proxy (such as
Nginx), then instead of a ConnectionError, the request may result in
receiving a 502 or 503 HTTP status code. Let's also consider this
situation and add support for an additional 504 code.

Closes-Bug: #2059032
Change-Id: I16be18c46a6796224b0793dc385b0ddec01739c4


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2059032

Title:
  Neutron metadata service returns http code 500 if nova metadata
  service is down

Status in neutron:
  Fix Released

Bug description:
  We discovered that if the nova metadata service is down, then the
  neutron metadata service starts printing stack traces with a 500 HTTP
  code to the user.

  Demo on a newly installed devstack

  $ systemctl stop devstack@n-api-meta.service

  Then inside a VM:

  $ curl http://169.254.169.254/latest/meta-data/hostname

  500 Internal Server Error
  An unknown error has occurred. Please try your request again.
  $

  Stack trace:

  ERROR neutron.agent.metadata.agent Traceback (most recent call last):
  ERROR neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 85, in __call__
  ERROR neutron.agent.metadata.agent     res = self._proxy_request(instance_id, tenant_id, req)
  ERROR neutron.agent.metadata.agent   File "/opt/stack/neutron/neutron/agent/metadata/agent.py", line 249, in _proxy_request
  ERROR neutron.agent.metadata.agent     resp = requests.request(method=req.method, url=url,
  ERROR neutron.agent.metadata.agent   File "/usr/local/lib/python3.9/site-packages/requests/api.py", line 59, in request
  ERROR neutron.agent.metadata.agent     return session.request(method=method, url=url, **kwargs)
  ERROR neutron.agent.metadata.agent   File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
  ERROR neutron.agent.metadata.agent     resp = self.send(prep, **send_kwargs)
  ERROR neutron.agent.metadata.agent   File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
  ERROR neutron.agent.metadata.agent     r = adapter.send(request, **kwargs)
  ERROR neutron.agent.metadata.agent   File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 519, in send
  ERROR neutron.agent.metadata.agent     raise ConnectionError(e, request=request)
  ERROR neutron.agent.metadata.agent requests.exceptions.ConnectionError: HTTPConnectionPool(host='10.136.16.184', port=8775): Max retries exceeded with url: /latest/meta-data/hostname (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] ECONNREFUSED'))
  ERROR neutron.agent.metadata.agent
  INFO eventlet.wsgi.server [-] :::192.168.100.14, "GET /latest/meta-data/hostname HTTP/1.1" status: 500  len: 362 time: 0.1392403

  
  Also, in our installation the nova service is behind nginx, and if we
  stop the nova metadata service we also get a 500 HTTP code but with a
  different traceback:

  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent [-] Unexpected error.: Exception: Unexpected response code: 502
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent Traceback (most recent call last):
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent   File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 93, in __call__
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent     res = self._proxy_request(instance_id, tenant_id, req)
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent   File "/usr/lib/python3.6/site-packages/neutron/agent/metadata/agent.py", line 288, in _proxy_request
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent     resp.status_code)
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent Exception: Unexpected response code: 502
  2024-03-25 20:27:01.985 24 ERROR neutron.agent.metadata.agent
  2024-03-25 20:27:01.988 24 INFO eventlet.wsgi.server [-] 10.197.115.207, "GET /latest/meta-data/hostname HTTP/1.1" status: 500  len:
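
  A minimal, hedged sketch of the handling the fix describes (names
  simplified from neutron/agent/metadata/agent.py; the exact status
  mapping is our assumption):

    import requests
    import webob.exc

    def proxy_request(url, method='GET'):
        try:
            resp = requests.request(method=method, url=url, timeout=30)
        except requests.exceptions.ConnectionError:
            # Nova metadata is down: report 503 instead of letting the
            # exception bubble up as a 500 with a traceback.
            return webob.exc.HTTPServiceUnavailable()
        if resp.status_code in (502, 503, 504):
            # Nova is behind a proxy (e.g. nginx) that answered for it.
            return webob.exc.HTTPServiceUnavailable()
        if resp.status_code == 200:
            return resp.content
        raise Exception('Unexpected response code: %s' % resp.status_code)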

[Yahoo-eng-team] [Bug 2059051] Re: The user defined router flavor OVN driver doesn't check properly unspecified flavor

2024-03-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/914162
Committed: 
https://opendev.org/openstack/neutron/commit/9d729bda207847b4c94d570eacdd26951294f49f
Submitter: "Zuul (22348)"
Branch:master

commit 9d729bda207847b4c94d570eacdd26951294f49f
Author: Miguel Lavalle 
Date:   Mon Mar 25 17:30:01 2024 -0500

Check unspecified flavor in user defined driver

In order to decide whether to process a router related
request, the user defined router flavor OVN driver needs to
check the flavor_id specified in the request. This change adds
code to handle the case where the API passes the flavor_id as
unspecified.

Change-Id: I4d7d9d5582b97246cad63ef7f5511b159d6c6791
Closes-Bug: #2059051


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2059051

Title:
  The user defined router flavor OVN driver doesn't check properly
  unspecified flavor

Status in neutron:
  Fix Released

Bug description:
  In order to decide whether to process a router related request, the
  user defined router flavor OVN driver needs to check the flavor_id
  specified in the request. If the flavor_id is passed in the request as
  'ATTR_NOT_SPECIFIED', the check in the driver will fail:
  
https://github.com/openstack/neutron/blob/e003fd73f6ba17328f3e15a2cc2d199a630229ca/neutron/services/ovn_l3/service_providers/user_defined.py#L47
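
  A self-contained, hedged sketch of the corrected check
  (ATTR_NOT_SPECIFIED here stands in for the neutron-lib sentinel;
  driver_owns_router is our hypothetical simplification of the driver's
  check):

    # Sentinel neutron uses for attributes the API request left out.
    ATTR_NOT_SPECIFIED = object()

    def driver_owns_router(router, known_flavor_ids):
        flavor_id = router.get('flavor_id')
        # Treat a missing value and the sentinel alike: no flavor was
        # requested, so this driver should not process the request.
        if flavor_id is None or flavor_id is ATTR_NOT_SPECIFIED:
            return False
        return flavor_id in known_flavor_ids

    print(driver_owns_router({'flavor_id': ATTR_NOT_SPECIFIED},
                             {'a350b61e'}))  # False, instead of failing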

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2059051/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2002687] Re: [RFE] Active-active L3 Gateway with Multihoming

2024-03-26 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/899402
Committed: 
https://opendev.org/openstack/neutron/commit/0199a8457b4e813fa1eb6235ad68b49a71725b8f
Submitter: "Zuul (22348)"
Branch:master

commit 0199a8457b4e813fa1eb6235ad68b49a71725b8f
Author: Frode Nordahl 
Date:   Thu Oct 26 15:52:18 2023 +0200

Add documentation for aa-l3-gw-multihoming

Closes-Bug: #2002687
Depends-On: I4e69bdf2ac9da1154d3847f3191b110f09130e02
Signed-off-by: Frode Nordahl 
Change-Id: I717ca97164eb9a34bb1095c6222f9879017af5ca


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2002687

Title:
  [RFE] Active-active L3 Gateway with Multihoming

Status in neutron:
  Fix Released

Bug description:
  Some network designs include multiple L3 gateways to:

  * Share the load across different gateways;
  * Provide independent network paths for the north-south direction (e.g. via
    different ISPs).

  Having multi-homing implemented at the instance level imposes an
  additional burden on the end user of a cloud and adds support
  requirements for the guest OS, whereas utilizing ECMP and BFD at the
  router side alleviates the need for instance-side awareness of a more
  complex routing setup.

  Adding more than one gateway port implies extending the existing data
  model which was described in the multiple external gateways spec
  (https://specs.openstack.org/openstack/neutron-specs/specs/xena/multiple-external-gateways.html).
  However, it left adding additional gateway routes out of scope,
  leaving this to future improvements around dynamic routing. Also, the
  focus of neutron-dynamic-routing has so far been on advertising
  routes, not accepting new ones from external peers, so dynamic routing
  support like this is a very different subject. However, manual
  addition of extra routes does not utilize the default gateway IP
  information available from subnets in Neutron; this could be addressed
  by implementing an extra conditional behavior when adding more than
  one gateway port to a router.

  ECMP routes can result in black-holing of traffic should the next-hop
  of a route become unreachable. BFD is a standard protocol adopted by
  the IETF for next-hop failure detection, which can be used for route
  eviction. OVN supports BFD as of v21.03.0
  (https://github.com/ovn-org/ovn/commit/6e0a69ad4bcdf9e4cace5c73ef48ab06065e8519)
  with a data model that allows enabling BFD on a per-next-hop basis by
  associating BFD session information with routes; however, this is not
  modeled at the Neutron level even if a backend supports it.

  From the Neutron data model perspective, ECMP for routes is already a
  supported concept since the ECMP support spec was implemented
  (https://specs.openstack.org/openstack/neutron-specs/specs/wallaby/l3-router-support-ecmp.html)
  in Wallaby (albeit the spec focused on the L3-agent based
  implementation only).

  As for OVN and BFD, the OVN database state needs to be populated by Neutron
  based on the data from the Neutron database, therefore, data model changes to
  the Neutron DB are needed to represent the BFD session parameters.

  ---

  The previous work on multiple gateway ports did not get completed and
  the neutron-lib changes were reverted. Likewise, the scope of this RFE
  is bigger with some overlap and augmentation compared to prior art.
  The spec will follow for this RFE with more details as to how the data
  model and API changes are proposed to be made.

  Upd: https://review.opendev.org/c/openstack/neutron-specs/+/870030

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2002687/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2052786] Re: Add "socket" to NUMA affinity policy

2024-03-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/910594
Committed: 
https://opendev.org/openstack/neutron/commit/70ddf4eef579653c327067f05496f735970e7944
Submitter: "Zuul (22348)"
Branch:master

commit 70ddf4eef579653c327067f05496f735970e7944
Author: Rodolfo Alonso Hernandez 
Date:   Sat Feb 24 10:41:39 2024 +

Add "socket" NUMA affinity policy

This new extension adds a new parameter to the NUMA affinity policy
list: "socket". The "socket" NUMA affinity policy has been supported
in Nova since [1].

[1]https://review.opendev.org/c/openstack/nova/+/773792

Closes-Bug: #2052786
Change-Id: Iad2d4c461a2aceef6ed2d5e622cce38362d79687


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2052786

Title:
  Add "socket" to NUMA affinity policy

Status in neutron:
  Fix Released

Bug description:
  This is an extension of [1]. The goal of this bug is to add a new
  field to the supported NUMA affinity policy list: "socket".

  [1]https://bugs.launchpad.net/neutron/+bug/1886798
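
  A hedged example of requesting the new policy on a port with
  openstacksdk (the cloud name and network UUID are placeholders;
  whether 'socket' is accepted depends on the deployed Neutron and Nova
  versions):

    import openstack

    # 'mycloud' is a placeholder clouds.yaml entry.
    conn = openstack.connect(cloud='mycloud')
    port = conn.network.create_port(
        network_id='6a2cdbe7-0000-0000-0000-000000000000',  # placeholder
        numa_affinity_policy='socket',
    )
    print(port.numa_affinity_policy)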

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2052786/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2058138] Re: Neutron OVSHybridIptablesFirewallDriver and IptablesFirewallDriver don't enforce Remote address groups

2024-03-21 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/913708
Committed: 
https://opendev.org/openstack/neutron/commit/5e1188ef38da3f196aadf82a3842fa982c9a0c83
Submitter: "Zuul (22348)"
Branch:master

commit 5e1188ef38da3f196aadf82a3842fa982c9a0c83
Author: Robert Breker 
Date:   Sun Mar 17 14:43:50 2024 +

Enhance IptablesFirewallDriver with remote address groups

This change enhances the IptablesFirewallDriver with support for remote
address groups. Previously, this feature was only available in the
OVSFirewallDriver. This commit harmonizes the capabilities across both
firewall drivers, and by inheritance also to 
OVSHybridIptablesFirewallDriver.

Background -
The Neutron API allows operators to configure remote address groups [1],
however the OVSHybridIptablesFirewallDriver and IptablesFirewallDriver do
not implement these remote group restrictions. When configuring security
group rules with remote address groups, connections get enabled
based on other rule parameters, ignoring the configured remote address
group restrictions.
This behaviour is undocumented and may lead to more-open-than-configured
network access.

Closes-Bug: #2058138
Change-Id: I76b3cb46ee603fa5e829537af41316bb42a6f30f


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2058138

Title:
  Neutron OVSHybridIptablesFirewallDriver and IptablesFirewallDriver
  don't enforce Remote address groups

Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Incomplete

Bug description:
  High level description -

  The Neutron API allows operators to configure remote address groups [1], 
however the OVSHybridIptablesFirewallDriver and IptablesFirewallDriver do not 
implement these remote group restrictions. When configuring security group 
rules with remote address groups, connections get enabled based on other rule 
parameters, ignoring the configured remote address group restrictions.
  This behaviour is undocumented and may lead to more-open-than-configured
network access.

  Background -

  Remote address groups enable specifying rules that target many CIDRs
  efficiently. In line with the remote security group support, this
  should be implemented through the use of hashed ipsets in case of the
  IptablesFirewallDriver.
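
  As a hedged sketch (our own simplification, not the driver's actual
  code), rendering the group's CIDRs into an ipset and matching the rule
  against that set could look like:

    # Emit the ipset/iptables commands that enforce a remote address
    # group, mirroring how remote security groups are handled. The set
    # naming scheme and chain name are hypothetical.
    def rules_for_remote_address_group(chain, group_id, cidrs, dport):
        set_name = 'NIPv4%s' % group_id[:8]
        cmds = ['ipset create %s hash:net' % set_name]
        cmds += ['ipset add %s %s' % (set_name, cidr) for cidr in cidrs]
        cmds.append('iptables -A %s -p tcp --dport %s '
                    '-m set --match-set %s src -j RETURN'
                    % (chain, dport, set_name))
        return cmds

    for cmd in rules_for_remote_address_group(
            'neutron-openvswi-i1234567', 'ab12cd34-0000', ['10.0.0.0/24'], 22):
        print(cmd)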

  Pre-conditions -
  * Using OVSHybridIptablesFirewallDriver or IptablesFirewallDriver
  * Configured remote Address Groups.

  Version -
  All OpenStack versions with remote address group support are impacted. We 
noticed it on 2024.1.

  [1] https://docs.openstack.org/python-
  openstackclient/latest/cli/command-objects/address-group.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2058138/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1728031] Re: Unable to change user password when ENFORCE_PASSWORD_CHECK is True

2024-03-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/horizon/+/913250
Committed: 
https://opendev.org/openstack/horizon/commit/da8e959298575127434e6e15aae5d1f0638a6e22
Submitter: "Zuul (22348)"
Branch:master

commit da8e959298575127434e6e15aae5d1f0638a6e22
Author: Rodrigo Barbieri 
Date:   Thu Mar 14 15:22:14 2024 -0300

Fix error on changing user password by admin

Previous change I8438bedaf7cead452fc499e484d23690b48894d9
attempted to address bug LP#1728031 by improving upon
patch https://review.opendev.org/854005 but missed the
line that allows the keystone client to properly
authenticate a cloud admin user that IS NOT in the
default domain.

Without this 1-line fix, a cloud admin that is not
in the default domain will face an "incorrect admin
password" error in the UI (despite the admin password
being correct) and an authentication error in the logs,
regardless of the endpoint type used (adminURL,
internalURL or publicURL).

Closes-bug: #1728031
Change-Id: I018e7d9cb84fd6ce8635c9054e15052ded7e9368


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1728031

Title:
  Unable to change user password when ENFORCE_PASSWORD_CHECK is True

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  After following the security hardening guidelines:
  https://docs.openstack.org/security-guide/dashboard/checklist.html#check-dashboard-09-is-enforce-password-check-set-to-true
  and enabling the check "Check-Dashboard-09: Is ENFORCE_PASSWORD_CHECK
  set to True", the user password cannot be changed.
  The form submission fails, displaying that the admin password is
  incorrect.

  The reason for this is in openstack_dashboard/api/keystone.py: the
  user_verify_admin_password method uses the internal URL to communicate
  with keystone, at line 500:
  endpoint = _get_endpoint_url(request, 'internalURL')
  This should be changed to adminURL.
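
  A hedged sketch of the verification flow with keystoneauth1 (the real
  helper lives in openstack_dashboard/api/keystone.py; the endpoint
  choice and the explicit user domain are the two fixes discussed here):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    def verify_admin_password(auth_url, username, password,
                              user_domain_name):
        # auth_url should come from the adminURL endpoint, not internalURL.
        auth = v3.Password(auth_url=auth_url,
                           username=username,
                           password=password,
                           # Without the user's own domain, a cloud admin
                           # outside the default domain fails to
                           # authenticate even with the correct password.
                           user_domain_name=user_domain_name,
                           unscoped=True)
        try:
            session.Session(auth=auth).get_token()
            return True
        except Exception:
            return False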

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1728031/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1999674] Re: nova compute service does not reset instance with task_state in rebooting_hard

2024-03-20 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/867832
Committed: 
https://opendev.org/openstack/nova/commit/aa3e8fef7b949ec3ddb3c4eaa348eb004593d29e
Submitter: "Zuul (22348)"
Branch:master

commit aa3e8fef7b949ec3ddb3c4eaa348eb004593d29e
Author: Pierre-Samuel Le Stang 
Date:   Thu Dec 15 18:30:15 2022 +0100

Correctly reset instance task state in rebooting hard

When a user asks for a hard reboot of a running instance while nova-compute
is unavailable (service stopped or host down), it might happen under certain
conditions that the instance stays in the rebooting_hard task_state after
nova-compute starts again. This patch aims to fix that.

Closes-Bug: #1999674
Change-Id: I170e390fe4e467898a8dc7df6a446f62941d49ff


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1999674

Title:
  nova compute service does not reset instance with task_state in
  rebooting_hard

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===
  When a user asks for a hard reboot of a running instance while
  nova-compute is unavailable (service stopped or host down), it might
  happen under certain conditions that the instance stays in the
  rebooting_hard task_state after nova-compute starts again.

  The condition for hitting this issue is a rabbitmq message-ttl for
  queued messages that is lower than the time needed to bring
  nova-compute up again.

  
  Steps to reproduce
  ==

  Prerequisites:
  * Set a low message-ttl (let's say 60 seconds) in your rabbitmq 
  * Have a running instance on a host

  First case is having a failure on nova-compute service
  1/ stop nova compute service on host
  2/ ask for a reboot hard: openstack server reboot --hard 
  3/ wait 60 seconds
  4/ start nova compute service
  5/ check instance task_state and status 

  Second case is having a failure on the host
  1/ hard shutdown the host (let's say a power supply issue)
  2/ ask for a reboot hard: openstack server reboot --hard 
  3/ wait 60 seconds
  4/ restart the host
  5/ check instance task_state and status 

  
  Expected result
  ===
  We expect nova-compute to reset the state to active, since the message
  was lost, so that the user can take further actions on the instance.

  Actual result
  =
  The instance is stuck in the rebooting_hard task_state and the user is
  blocked.
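
  A hedged, self-contained sketch of the expected recovery (the real
  logic belongs in ComputeManager's start-up instance handling; the
  Instance class below is a stand-in):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Instance:
        uuid: str
        task_state: Optional[str]

        def save(self):
            pass  # stands in for the DB update

    # Illustrative subset of task states recoverable at start-up.
    REBOOT_TASK_STATES = ('rebooting', 'rebooting_hard')

    def init_instance(instance):
        # Clear a task_state that only a now-lost RPC message could
        # complete, giving control back to the user.
        if instance.task_state in REBOOT_TASK_STATES:
            instance.task_state = None
            instance.save()

    vm = Instance('c0ffee00-0000-0000-0000-000000000000', 'rebooting_hard')
    init_instance(vm)
    assert vm.task_state is None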

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1999674/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2056366] Re: Neutron ml2/ovn does not exit when killed with SIGTERM

2024-03-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/911625
Committed: 
https://opendev.org/openstack/neutron/commit/a4e49b6b8fcf9acfa4e84c65de19ffd56b9022e7
Submitter: "Zuul (22348)"
Branch:master

commit a4e49b6b8fcf9acfa4e84c65de19ffd56b9022e7
Author: Terry Wilson 
Date:   Wed Mar 6 20:13:58 2024 +

Use oslo_service's SignalHandler for signals

When Neutron is killed with SIGTERM (like via systemctl), when using
ML2/OVN neutron workers do not exit and instead are eventually killed
with SIGKILL when the graceful timeout is reached (often around 1
minute).

This is happening due to the signal handlers for SIGTERM. There are
multiple issues.

1) oslo_service, ml2/ovn mech_driver, and ml2/ovo_rpc.py all call
   signal.signal(signal.SIGTERM, ...) overwriting each others signal
   handlers.
2) SIGTERM is handled in the main thread, and running blocking code
   there causes AssertionErrors in eventlet which also prevents the
   process from exiting.
3) The ml2/ovn cleanup code doesn't cause the process to end, so it
   interrupts the killing of the process.

oslo_service has a singleton SignalHandler class that solves all of
these issues

Closes-Bug: #2056366
Depends-On: https://review.opendev.org/c/openstack/oslo.service/+/911627
Change-Id: I730a12746bceaa744c658854e38439420efc4629
Signed-off-by: Terry Wilson 


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2056366

Title:
  Neutron ml2/ovn does not exit when killed with SIGTERM

Status in neutron:
  Fix Released

Bug description:
  When Neutron is killed with SIGTERM (like via systemctl), when using
  ML2/OVN neutron workers do not exit and instead are eventually killed
  with SIGKILL when the graceful timeout is reached (often around 1
  minute).

  This is happening due to the signal handlers for SIGTERM. There are
  multiple issues.

  1) oslo_service, ml2/ovn mech_driver, and ml2/ovo_rpc.py all call 
signal.signal(signal.SIGTERM, ...) overwriting each others signal handlers.
  2) SIGTERM is handled in the main thread, and running blocking code there 
causes AssertionErrors in eventlet
  3) The ml2/ovn cleanup code doesn't cause the process to end, so it 
interrupts the killing of the process

  oslo_service has a singleton SignalHandler class that solves all of
  these issues and we should use that instead of calling signal.signal()
  ourselves.
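
  A hedged sketch of that approach ('cleanup' is our placeholder
  callback; SignalHandler is oslo.service's singleton):

    from oslo_service import service

    def cleanup(signo, frame):
        print('got SIGTERM, shutting down')

    # Every component gets the same singleton instance, and handlers are
    # appended rather than overwriting each other the way repeated
    # signal.signal() calls do.
    service.SignalHandler().add_handler('SIGTERM', cleanup)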

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2056366/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2057983] Re: The HA flag of user defined flavor routers is always set to true

2024-03-19 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/913276
Committed: 
https://opendev.org/openstack/neutron/commit/26ff51bf05dd8b61d96489f6b459e8f62f855823
Submitter: "Zuul (22348)"
Branch:master

commit 26ff51bf05dd8b61d96489f6b459e8f62f855823
Author: Miguel Lavalle 
Date:   Thu Mar 14 18:09:28 2024 -0500

Fix making all user defined flavor routers HA

Since [1] was merged, user defined flavor routers with the HA
attribute set to False cannot be created. This change fixes
it.

Closes-Bug: #2057983

[1] https://review.opendev.org/c/openstack/neutron/+/910889

Change-Id: Ic72979cfe535c1bb8cba77fb82a380c167509060


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2057983

Title:
  The HA flag of user defined flavor routers is always set to true

Status in neutron:
  Fix Released

Bug description:
  Since [1] was merged, non HA user defined flavor routers cannot be
  created.

  100% reproducible following these steps:

  1) Create a non high availability flavor profile:

  $ openstack network flavor profile create \
      --description "User-defined router flavor profile" --enable \
      --driver neutron.services.ovn_l3.service_providers.user_defined.UserDefined

  +-------------+---------------------------------------------------------------------+
  | Field       | Value                                                               |
  +-------------+---------------------------------------------------------------------+
  | description | User-defined router flavor profile                                  |
  | driver      | neutron.services.ovn_l3.service_providers.user_defined.UserDefined  |
  | enabled     | True                                                                |
  | id          | 04f95202-a0ce-42ca-b76d-1b1678b0caf5                                |
  | meta_info   |                                                                     |
  | project_id  | None                                                                |
  +-------------+---------------------------------------------------------------------+

  2) Create a router flavor and associate it with the profile:

  $ openstack network flavor create --service-type L3_ROUTER_NAT \
      --description "User-defined flavor for routers in the L3 OVN plugin" \
      user-defined-router-flavor

  +-------------+-------------------------------------------------------+
  | Field       | Value                                                 |
  +-------------+-------------------------------------------------------+
  | description | User-defined flavor for routers in the L3 OVN plugin  |
  | enabled     | True                                                  |
  | id          | a350b61e-c5b5-42d1-a95d-73215065eb78                  |
  | name        | user-defined-router-flavor                            |