[Yahoo-eng-team] [Bug 2024945] [NEW] nova.exception.ImageNotAuthorized: Not authorized for image

2023-06-23 Thread Satish Patel
Public bug reported:

Environment:

OS: Ubuntu 22.04
OpenStack release: Zed
Deployment tool: Kolla-ansible


We use a Ceph-backed storage backend, and after upgrading from Yoga to Zed we
started noticing the following error in the nova-compute logs when taking a
snapshot of an instance.

2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return self._client.call(
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/image/glance.py", line 191, in call
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server result = getattr(controller, method)(*args, **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 503, in add_location
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server response = self._send_image_update_request(image_id, add_patch)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/utils.py", line 670, in inner
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return RequestIdProxy(wrapped(*args, **kwargs))
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/v2/images.py", line 483, in _send_image_update_request
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server resp, body = self.http_client.patch(url, headers=hdrs,
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystoneauth1/adapter.py", line 407, in patch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return self.request(url, 'PATCH', **kwargs)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 380, in request
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return self._handle_response(resp)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/glanceclient/common/http.py", line 120, in _handle_response
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise exc.from_response(resp, resp.content)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server glanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: Its not allowed to add locations if locations are invisible.
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", line 65, in wrapped
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with excutils.save_and_reraise_exception():
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server self.force_reraise()
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server raise self.value
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/exception_wrapper.py", line 63, in wrapped
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw)
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server   File "/var/lib/kolla/venv/lib/python3.10/site-packages/nova/compute/manager.py", line 164, in decorated_function
2023-06-23 22:18:17.075 7 ERROR oslo_messaging.rpc.server with
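The inner HTTPForbidden is Glance refusing the add-location request: with a Ceph (RBD) image backend, nova-compute snapshots by cloning in Ceph and then registering the RBD location on the new image, which only works when Glance exposes image locations. A likely workaround (verify against your deployment's security policy before enabling, since exposing locations has security implications; option shown for glance-api.conf) is:

```ini
# glance-api.conf (assumption: Ceph/RBD direct snapshot in use)
[DEFAULT]
# Allow image locations to be shown and updated, so nova-compute can
# register the RBD snapshot location on the image.
show_multiple_locations = True
```

With kolla-ansible this would go into a glance-api config override; note the option is deprecated in favor of policy-based control in newer releases, so check the Zed Glance documentation for the recommended setting.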

[Yahoo-eng-team] [Bug 2024921] [NEW] Formalize use of subnet service-type for draining subnets

2023-06-23 Thread Dr. Jens Harbott
Public bug reported:

As documented in https://docs.openstack.org/neutron/latest/admin/config-
service-subnets.html, subnets can be assigned a service-type which
ensures that they are only used to allocate addresses to a specific
device owner. But the current implementation also allows this feature to
be used to ensure that no addresses at all are assigned from a subnet by
setting the service type to an invalid owner like "compute:bogus" or
"network:drain".

One use case for this is extending or reducing FIP pools in a
deployment. Assume a /24 is in use as the public subnet and is running
full. Adding a second /24 is possible, but wastes some IPs on the
network, gateway and broadcast addresses. So the better solution is to
add a /23, gradually migrate the existing users away from the /24, and
finally remove the old /24. For this to be feasible, one must prevent
allocations from the old subnet during the migration phase. The same
applies when an operator wants to reduce the size of a pool.
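The drain workflow can be sketched with the OpenStack CLI; the subnet names and CIDRs below are illustrative, and "network:drain" is just an arbitrary owner string that no real port's device_owner will ever match:

```
# Add the new, larger subnet to the public network (illustrative values).
openstack subnet create --network public --subnet-range 203.0.112.0/23 \
    --allocation-pool start=203.0.112.10,end=203.0.113.250 public-v2

# Mark the old subnet as drained: once a service-type is set, addresses
# are only allocated to ports whose device_owner matches, and nothing
# matches "network:drain".
openstack subnet set --service-type network:drain public-v1

# Migrate existing users off public-v1, then finally remove it.
openstack subnet delete public-v1
```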

Since the above solution is undocumented, it would be useful to
document it and thus ensure that it stays a dependable workflow for
operators. Maybe one can also define a well-known "bogus" owner that
could be added in case the verification of device owners is ever made
stricter. Having some functional testing for this scenario would be an
extra bonus.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024921

Title:
  Formalize use of subnet service-type for draining subnets

Status in neutron:
  New

Bug description:
  As documented in
  https://docs.openstack.org/neutron/latest/admin/config-service-
  subnets.html, subnets can be assigned a service-type which ensures
  that they are only used to allocate addresses to a specific device
  owner. But the current implementation also allows this feature to be
  used to ensure that no addresses at all are assigned from a subnet by
  setting the service type to an invalid owner like "compute:bogus" or
  "network:drain".

  One use case for this is extending or reducing FIP pools in a
  deployment. Assume a /24 is in use as the public subnet and is
  running full. Adding a second /24 is possible, but wastes some IPs on
  the network, gateway and broadcast addresses. So the better solution
  is to add a /23, gradually migrate the existing users away from the
  /24, and finally remove the old /24. For this to be feasible, one
  must prevent allocations from the old subnet during the migration
  phase. The same applies when an operator wants to reduce the size of
  a pool.

  Since the above solution is undocumented, it would be useful to
  document it and thus ensure that it stays a dependable workflow for
  operators. Maybe one can also define a well-known "bogus" owner that
  could be added in case the verification of device owners is ever made
  stricter. Having some functional testing for this scenario would be
  an extra bonus.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024921/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2024912] [NEW] [ovn-octavia-provider] Updating status on incorrect pool when HM delete

2023-06-23 Thread Fernando Royo
Public bug reported:

When a Health Monitor is deleted from a LB with multiple pools, the HM
itself is deleted correctly, but sometimes a random pool (not related
to the deleted HM) remains in PENDING_UPDATE provisioning_status.

It looks like the status update sent to the Octavia API references a
pool not related to the deleted HM.
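A hypothetical sketch of the intended behavior (the names Pool and build_hm_delete_status are illustrative, not the ovn-octavia-provider's real API): when the HM is deleted, only the pool that actually owns it should appear in the status update sent back to Octavia.

```python
# Sketch: build the Octavia status update for a health-monitor delete,
# touching only the pool that references the deleted HM.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pool:
    id: str
    healthmonitor_id: Optional[str]

def build_hm_delete_status(pools, deleted_hm_id):
    """Return the provider status update for a HM deletion.

    Only pools referencing deleted_hm_id go back to ACTIVE; unrelated
    pools are left out entirely (the reported bug was that an unrelated
    pool ended up referenced and stuck in PENDING_UPDATE).
    """
    affected = [p for p in pools if p.healthmonitor_id == deleted_hm_id]
    return {
        'healthmonitors': [{'id': deleted_hm_id,
                            'provisioning_status': 'DELETED'}],
        'pools': [{'id': p.id, 'provisioning_status': 'ACTIVE'}
                  for p in affected],
    }

pools = [Pool('p1', 'hm1'), Pool('p2', None), Pool('p3', 'hm3')]
status = build_hm_delete_status(pools, 'hm1')
# Only p1 is mentioned; p2 and p3 are untouched.
```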

** Affects: neutron
 Importance: Undecided
 Assignee: Fernando Royo (froyoredhat)
 Status: New


** Tags: ovn-octavia-provider

** Changed in: neutron
 Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024912

Title:
  [ovn-octavia-provider] Updating status on incorrect pool when HM
  delete

Status in neutron:
  New

Bug description:
  When a Health Monitor is deleted from a LB with multiple pools, the
  HM itself is deleted correctly, but sometimes a random pool (not
  related to the deleted HM) remains in PENDING_UPDATE
  provisioning_status.

  It looks like the status update sent to the Octavia API references a
  pool not related to the deleted HM.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024912/+subscriptions




[Yahoo-eng-team] [Bug 2024903] [NEW] [neutron-lib] FT "test_negative_update_floatingip_port_forwarding" failing randomly very often

2023-06-23 Thread Rodolfo Alonso
Public bug reported:

Checking the zuul builds [1], the first time this test started to fail
was on 2023-05-31 11:32:02 [2].

This issue also affects "neutron-functional-with-sqlalchemy-master"
because that job uses the neutron-lib master branch.

[1] https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi&project=openstack%2Fneutron-lib&result=FAILURE&skip=0
[2] https://zuul.opendev.org/t/openstack/build/794b34cbece9491a950ed7c920760cd2

** Affects: neutron
 Importance: Critical
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
   Importance: Undecided => Critical

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2024903

Title:
  [neutron-lib] FT "test_negative_update_floatingip_port_forwarding"
  failing randomly very often

Status in neutron:
  New

Bug description:
  Checking the zuul builds [1], the first time this test started to fail
  was on 2023-05-31 11:32:02 [2].

  This issue also affects "neutron-functional-with-sqlalchemy-master"
  because that job uses the neutron-lib master branch.

  [1] https://zuul.opendev.org/t/openstack/builds?job_name=neutron-functional-with-uwsgi&project=openstack%2Fneutron-lib&result=FAILURE&skip=0
  [2] https://zuul.opendev.org/t/openstack/build/794b34cbece9491a950ed7c920760cd2

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2024903/+subscriptions

