[Yahoo-eng-team] [Bug 1930195] Re: Bump os-ken to 2.0.0

2021-06-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/793735
Committed: 
https://opendev.org/openstack/neutron/commit/e3bb98c7e7ea0dab4a81dd3dbf50e4d50ddc66ee
Submitter: "Zuul (22348)"
Branch: master

commit e3bb98c7e7ea0dab4a81dd3dbf50e4d50ddc66ee
Author: Rodolfo Alonso Hernandez 
Date:   Mon May 31 08:12:08 2021 +

Bump os-ken to 2.0.0

That will avoid problems with eventlet 0.31.0, as seen during
the requirements upgrade.

Change-Id: I9a6798a6b0438149af8190dc90c70f79735bb01d
Closes-Bug: #1930195


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930195

Title:
  Bump os-ken to 2.0.0

Status in neutron:
  Fix Released

Bug description:
  Bump os-ken to 2.0.0. That will avoid problems with the newer eventlet
  0.31.0 release, as seen in the requirements CI during the upgrade.

  Logs: https://e5436c934ffae117a95a-
  002c6183c0f1ab9234471cb74705bd70.ssl.cf2.rackcdn.com/793021/1/check
  /cross-neutron-py38/7b99a40/job-output.txt

  Snippet: http://paste.openstack.org/show/805855/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1821755] Re: live migration break the anti-affinity policy of server group simultaneously

2021-06-10 Thread melanie witt
** Also affects: nova/victoria
   Importance: Undecided
   Status: New

** Also affects: nova/wallaby
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1821755

Title:
  live migration break the anti-affinity policy of server group
  simultaneously

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) victoria series:
  New
Status in OpenStack Compute (nova) wallaby series:
  New

Bug description:
  Description
  ===
  If we live migrate two instances simultaneously, the instances can end up
violating the instance group policy.

  Steps to reproduce
  ==
  OpenStack env with three compute nodes (node1, node2 and node3). Then we
create two VMs (vm1, vm2) with the anti-affinity policy.
  Finally, we live migrate the two VMs simultaneously.

  Before live migration, the VMs are located as follows:
  node1  ->  vm1
  node2  ->  vm2
  node3

  * nova live-migration vm1
  * nova live-migration vm2

  Expected result
  ===
  The live migrations of vm1 and vm2 fail rather than violating the
  anti-affinity policy.

  Actual result
  =
  node1
  node2
  node3  ->  vm1,vm2

  Environment
  ===
  master branch of openstack

  As described above, a live migration does not take other in-progress
  live migrations into account and simply selects a host via the scheduler
  filters, so both VMs can end up migrated to the same host.
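
  To illustrate the race (a minimal, hypothetical sketch, not Nova's actual
  scheduler code): both migrations evaluate the anti-affinity check against
  the same stale view of the group's placements, so neither sees the other's
  in-flight move and both can pick the same destination.

  # Hypothetical sketch; names and data structures are illustrative only.
  placements = {"vm1": "node1", "vm2": "node2"}   # current view of the cluster
  anti_affinity_group = {"vm1", "vm2"}

  def pick_destination(vm, hosts=("node1", "node2", "node3")):
      # Scheduler-filter style check based only on the *current* placements.
      used = {host for member, host in placements.items()
              if member in anti_affinity_group and member != vm}
      return next(h for h in hosts if h != placements[vm] and h not in used)

  # Both decisions are taken before either placement record is updated,
  # so both migrations select node3 and the policy is violated.
  print(pick_destination("vm1"), pick_destination("vm2"))   # node3 node3

  # A late re-check on the destination host (as other move operations do)
  # would reject the second migration once the first one has claimed node3.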

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1821755/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931639] [NEW] [OVN Octavia Provider] Load Balancer not reachable from some Subnets

2021-06-10 Thread Flavio Fernandes
Public bug reported:

In situations where router port and load balancer are created back to
back, there is a potential race condition that would render OVN with a
logical switch that is missing a reference to the load balancer.

This issue is also being tracked in Bugzilla, under the link:

 https://bugzilla.redhat.com/show_bug.cgi?id=1937392

** Affects: neutron
 Importance: High
 Assignee: Flavio Fernandes (ffernand)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Flavio Fernandes (ffernand)

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
   Status: New => In Progress

** Description changed:

- 
- In situations where router port and load balancer are created back to back,
- there is a potential race condition that would render OVN with a logical 
switch
- that is missing a reference to the load balancer.
- 
+ In situations where router port and load balancer are created back to
+ back, there is a potential race condition that would render OVN with a
+ logical switch that is missing a reference to the load balancer.
  
  This issue is also being tracked in Bugzilla, under the link:
  
-  https://bugzilla.redhat.com/show_bug.cgi?id=1937392
+  https://bugzilla.redhat.com/show_bug.cgi?id=1937392

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1931639

Title:
  [OVN Octavia Provider] Load Balancer not reachable from some Subnets

Status in neutron:
  In Progress

Bug description:
  In situations where router port and load balancer are created back to
  back, there is a potential race condition that would render OVN with a
  logical switch that is missing a reference to the load balancer.

  This issue is also being tracked in Bugzilla, under the link:

   https://bugzilla.redhat.com/show_bug.cgi?id=1937392

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1931639/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930838] Re: key error in deleted_ports

2021-06-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/794755
Committed: 
https://opendev.org/openstack/neutron/commit/383f209b502493ca6b059394e2644def754b2de1
Submitter: "Zuul (22348)"
Branch: master

commit 383f209b502493ca6b059394e2644def754b2de1
Author: Nurmatov Mamatisa 
Date:   Fri Jun 4 12:04:27 2021 +0300

[DHCP] Fix cleanup_deleted_ports method

Assume that only one port is deleted within 24 hours: in the method
cleanup_deleted_ports the port is removed from deleted_ports but not
from deleted_ports_ts.
With this fix, ports older than 1 day are dropped from both
deleted_ports and deleted_ports_ts.

Closes-Bug: #1930838
Change-Id: I1af32e72abb9f101f9729aa6d1354c33a95c98ee


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930838

Title:
  key error in deleted_ports

Status in neutron:
  Fix Released

Bug description:
  In cleanup_deleted_ports
https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L1044
  the port is deleted from _deleted_ports, but not from _deleted_ports_ts. In
the next loop the cleanup tries to delete the same port again, but it is no
longer in _deleted_ports, so a KeyError is raised.

  ERROR [oslo.service.loopingcall] Fixed interval looping call 
'neutron.agent.dhcp.agent.NetworkCache.cleanup_deleted_ports' failed
  Traceback (most recent call last):
  File 
"/home/isabek/projects/GIT/neutron/.tox/py38/lib/python3.8/site-packages/oslo_service/loopingcall.py",
 line 150, in _run_loop
  result = func(*self.args, **self.kw)
  File "/home/isabek/projects/GIT/neutron/neutron/agent/dhcp/agent.py", line 
1057, in cleanup_deleted_ports
  self._deleted_ports.remove(port_id)
  KeyError: '12345678-1234--1234567890ab'
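
  Below is a minimal sketch of the behaviour described above, assuming the
  cache keeps deleted port ids in a set (_deleted_ports) and their deletion
  timestamps in a dict (_deleted_ports_ts); the class itself is hypothetical,
  only the attribute names come from the report:

  import time

  DELETED_PORT_TTL = 24 * 60 * 60  # one day

  class NetworkCacheSketch:
      def __init__(self):
          self._deleted_ports = set()
          self._deleted_ports_ts = {}   # port_id -> deletion timestamp

      def port_deleted(self, port_id):
          self._deleted_ports.add(port_id)
          self._deleted_ports_ts[port_id] = time.time()

      def cleanup_deleted_ports(self):
          cutoff = time.time() - DELETED_PORT_TTL
          for port_id, ts in list(self._deleted_ports_ts.items()):
              if ts < cutoff:
                  # Drop the port from *both* structures; removing it only
                  # from _deleted_ports (the original behaviour) leaves a
                  # stale timestamp behind, and the next loop raises KeyError
                  # when it tries to remove the same port id again.
                  self._deleted_ports.discard(port_id)
                  del self._deleted_ports_ts[port_id]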

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930838/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929710] Re: virDomainGetBlockJobInfo fails during swap_volume as disk '$disk' not found in domain

2021-06-10 Thread Lee Yarwood
** No longer affects: qemu

** Changed in: nova
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1929710

Title:
  virDomainGetBlockJobInfo fails during swap_volume as disk '$disk' not
  found in domain

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  The error handling around swap_volume does not cover the following
  failure: virDomainGetBlockJobInfo() fails after the entire device is
  detached by QEMU (?) once it hits a failure during the block copy job,
  which at first pauses and then somehow resumes:

  https://8a5fc27780098c5ee1bc-
  3ac81d180a9c011938b2cbb0293272f3.ssl.cf5.rackcdn.com/790660/5/gate
  /nova-next/e915ed4/controller/logs/screen-n-cpu.txt

  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver [None 
req-7cfcd661-29d4-4cc3-bc54-db0e7fed1a6e tempest-TestVolumeSwap-1841575704 
tempest-TestVolumeSwap-1841575704-project-admin] Failure rebasing volume 
/dev/sdb on vdb.: libvirt.libvirtError: invalid argument: disk 'vdb' not found 
in domain
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver Traceback (most recent 
call last):
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2107, in _swap_volume
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver while not 
dev.is_job_complete():
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 800, in is_job_complete
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver status = 
self.get_job_info()
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/opt/stack/nova/nova/virt/libvirt/guest.py", line 707, in get_job_info
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver status = 
self._guest._domain.blockJobInfo(self._disk, flags=0)
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/tpool.py", line 190, in doit
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver result = 
proxy_call(self._autowrap, f, *args, **kwargs)
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/tpool.py", line 148, in 
proxy_call
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver rv = execute(f, *args, 
**kwargs)
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/tpool.py", line 129, in execute
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver six.reraise(c, e, tb)
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/usr/local/lib/python3.8/dist-packages/six.py", line 719, in reraise
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver raise value
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/usr/local/lib/python3.8/dist-packages/eventlet/tpool.py", line 83, in tworker
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver rv = meth(*args, 
**kwargs)
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver   File 
"/usr/local/lib/python3.8/dist-packages/libvirt.py", line 985, in blockJobInfo
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver raise 
libvirtError('virDomainGetBlockJobInfo() failed')
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 
nova-compute[114649]: ERROR nova.virt.libvirt.driver libvirt.libvirtError: 
invalid argument: disk 'vdb' not found in domain
  May 26 09:49:47.314813 ubuntu-focal-vexxhost-ca-ymq-1-0024823853 

[Yahoo-eng-team] [Bug 1931583] [NEW] Wrong status of trunk sub-port after seting binding_profile

2021-06-10 Thread Kamil Sambor
Public bug reported:

When a sub-port is created (with OVN enabled) and the event is processed
without a binding profile, the sub-port will stay in DOWN status forever.

** Affects: neutron
 Importance: Undecided
 Assignee: Kamil Sambor (ksambor)
 Status: In Progress

** Description changed:

- When sub-port was created and event was process without binding profile
- this sub port will end forever in DOWN status
+ When sub-port was created (with OVN enabled) and event was process
+ without binding profile this sub port will end forever in DOWN status

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Kamil Sambor (ksambor)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1931583

Title:
  Wrong status of trunk sub-port after seting binding_profile

Status in neutron:
  In Progress

Bug description:
  When a sub-port is created (with OVN enabled) and the event is processed
  without a binding profile, the sub-port will stay in DOWN status forever.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1931583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1836754] Re: Conflict when deleting allocations for an instance that hasn't finished building

2021-06-10 Thread Lee Yarwood
** Changed in: nova
   Status: Confirmed => Incomplete

** No longer affects: nova/stein

** No longer affects: nova/train

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1836754

Title:
  Conflict when deleting allocations for an instance that hasn't
  finished building

Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  Description
  ===

  When deleting an instance that hasn't finished building, we'll
  sometimes get a 409 from placement like the following:

  Failed to delete allocations for consumer 6494d4d3-013e-478f-
  9ac1-37ca7a67b776. Error: {"errors": [{"status": 409, "title":
  "Conflict", "detail": "There was a conflict when trying to complete
  your request.\n\n Inventory and/or allocations changed while
  attempting to allocate: Another thread concurrently updated the data.
  Please retry your update  ", "code": "placement.concurrent_update",
  "request_id": "req-6dcd766b-f5d3-49fa-89f3-02e64079046a"}]}

  Steps to reproduce
  ==

  1. Boot an instance
  2. Don't wait for it to become active
  3. Delete it immediately

  Expected result
  ===

  The instance deletes successfully.

  Actual result
  =

  Nova bubbles up that error from Placement.

  Logs & Configs
  ==

  This is being hit at a low rate in various CI tests, logstash query is
  here:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Inventory%20and%2For%20allocations%20changed%20while%20attempting%20to%20allocate%3A%20Another%20thread%20concurrently%20updated%20the%20data%5C%22%20AND%20filename%3A%5C
  %22job-output.txt%5C%22
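
  As a client-side workaround, the usual pattern is to retry the allocation
  delete when placement answers 409 with the placement.concurrent_update
  error code. A hedged sketch (endpoint handling and headers are simplified
  and hypothetical, not Nova's actual retry logic):

  import time
  import requests

  def delete_allocations(placement_url, token, consumer_uuid, retries=4):
      # Sketch only: retry while placement reports a concurrent update.
      url = f"{placement_url}/allocations/{consumer_uuid}"
      headers = {"X-Auth-Token": token}
      for attempt in range(retries):
          resp = requests.delete(url, headers=headers)
          if resp.status_code != 409:
              resp.raise_for_status()   # 204 on success
              return
          if "placement.concurrent_update" not in resp.text:
              resp.raise_for_status()
          time.sleep(0.5 * (attempt + 1))  # back off, let the other writer finish
      raise RuntimeError(f"gave up deleting allocations for {consumer_uuid}")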

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1836754/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1929446] Re: OVS polling loop created by ovsdbapp and os-vif starving n-cpu threads

2021-06-10 Thread Lee Yarwood
Another possible hit in
https://bugs.launchpad.net/nova/+bug/1863889/comments/3 ?

** Summary changed:

- check_can_live_migrate_source taking > 60 seconds in CI
+ OVS polling loop created by ovsdbapp and os-vif starving n-cpu threads

** Also affects: os-vif
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1929446

Title:
  OVS polling loop created by ovsdbapp and os-vif starving n-cpu threads

Status in OpenStack Compute (nova):
  Triaged
Status in os-vif:
  New

Bug description:
  I've been seeing lots of failures caused by timeouts in
  test_volume_backed_live_migration during the live-migration and
  multinode grenade jobs, for example:

  
https://zuul.opendev.org/t/openstack/build/bb6fd21b5d8c471a89f4f6598aa84e5d/logs

  During check_can_live_migrate_source I'm seeing the following gap in
  the logs that I can't explain:

  12225 May 24 10:23:02.637600 ubuntu-focal-inap-mtl01-0024794054 
nova-compute[107012]: DEBUG nova.virt.libvirt.driver [None 
req-b5288b85-d642-426f-a525-c64724fe4091 tempest-LiveMigrationTest-312230369 
tempest-LiveMigrationTest-312230369-project-admin] [instance: 
91a0e0ca-e6a8-43ab-8e68-a10a77ad615b] Check if temp file 
/opt/stack/data/nova/instances/tmp5lcmhuri exists to indicate shared storage is 
being used for migration. Exists? False {{(pid=107012) 
_check_shared_storage_test_file 
/opt/stack/nova/nova/virt/libvirt/driver.py:9367}}
  [..]
  12282 May 24 10:24:22.385187 ubuntu-focal-inap-mtl01-0024794054 
nova-compute[107012]: DEBUG nova.virt.libvirt.driver [None 
req-b5288b85-d642-426f-a525-c64724fe4091 tempest-LiveMigrationTest-312230369 
tempest-LiveMigrationTest-312230369-project-admin] skipping disk /dev/sdb (vda) 
as it is a volume {{(pid=107012) _get_instance_disk_info_from_config 
/opt/stack/nova/nova/virt/libvirt/driver.py:10458}}

  ^ this leads to both the HTTP request to live migrate (that's still a
  synchronous call at this point [1]) *and* the RPC call from the dest
  to the source timing out.

  [1] https://docs.openstack.org/nova/latest/reference/live-
  migration.html
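
  As a rough illustration of the suspected mechanism (not the actual
  os-vif/ovsdbapp code): a polling loop that never yields to the eventlet hub
  blocks every other green thread in nova-compute, while pushing the same work
  through eventlet's native thread pool keeps the service responsive.

  import time
  import eventlet
  from eventlet import tpool

  eventlet.monkey_patch()

  def busy_poll(seconds=3):
      # Stand-in for a tight polling loop that never cooperatively yields.
      end = time.monotonic() + seconds
      while time.monotonic() < end:
          pass
      return "poll finished"

  def heartbeat():
      # Stand-in for the other green threads (RPC handlers, periodic tasks).
      for _ in range(6):
          print("heartbeat")
          eventlet.sleep(0.5)

  hb = eventlet.spawn(heartbeat)
  # Calling busy_poll() directly here would starve heartbeat() completely;
  # offloading it to a native thread lets the hub keep scheduling greenlets.
  print(tpool.execute(busy_poll))
  hb.wait()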

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1929446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1850656] Re: Deploy will fail if keystone.conf has '[oslo_policy]/enforce_scope=true'

2021-06-10 Thread Florian Faltermeier
Hello,

this affects kolla-ansible/wallaby, too.

** Also affects: wallaby
   Importance: Undecided
   Status: New

** No longer affects: wallaby

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1850656

Title:
  Deploy will fail if keystone.conf has
  '[oslo_policy]/enforce_scope=true'

Status in OpenStack Identity (keystone):
  Invalid
Status in kolla-ansible:
  In Progress
Status in kolla-ansible train series:
  Won't Fix
Status in kolla-ansible ussuri series:
  Won't Fix
Status in kolla-ansible victoria series:
  In Progress

Bug description:
  In current Kolla master (train) the keystone permission system has not
  been adapted to the new scope model.

  $ cat /etc/kolla/config/keystone/keystone.conf 
  [oslo_policy]
  enforce_scope = True

  $ kolla-ansible -i multinode deploy
  ...
  TASK [service-ks-register : keystone | Creating services] 

  ...
  failed: [control1.example.com -> control1.example.com] 
(item={u'service_type': u'identity', u'name': u'keystone'}) => {"action": 
"os_keystone_service", "ansible_loop_var": "item", "attempts": 5, "changed": 
false, "item": {"description": "Openstack Identity Service", "endpoints": 
[{"interface": "admin", "url": "http://vip.example.com:35357"}, {"interface": 
"internal", "url": "http://vip.example.com:5000"}, {"interface": "public", 
"url": "https://openstack.example.com:5000"}], "name": "keystone", "type": 
"identity"}, "msg": "Failed to list services: Client Error for url: 
http://vip.example.com:35357/v3/services, You are not authorized to perform the 
requested action: identity:list_services."}


  == https://docs.openstack.org/releasenotes/keystone/en_GB/train.html ==
  This release leverages oslo.policy’s policy-in-code feature to modify the 
default check strings and scope types for nearly all of keystone’s API 
policies. These changes make the policies more precise than they were before, 
using the reader, member, and admin roles where previously only the admin role 
and a catch-all rule was available. The changes also take advantage of system, 
domain, and project scope, allowing you to create role assignments for your 
users that are appropriate to the actions they need to perform. Eventually this 
will allow you to set [oslo_policy]/enforce_scope=true in your keystone 
configuration, which simplifies access control management by ensuring that 
oslo.policy checks both the role and the scope on API requests.

  [bug 1806762] [bug 1630434] The entire policy.v3cloudsample.json file
  has been removed. If you were using this policy file to supply
  overrides in your deployment, you should consider using the defaults
  in code and setting keystone.conf [oslo_policy] enforce_scope=True.
  The new policy defaults are more flexible, they’re tested extensively,
  and they solve all the problems the policy.v3cloudsample.json file was
  trying to solve.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1850656/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931577] [NEW] NoCloud documentation does not mention which files are required in the ISO

2021-06-10 Thread Guilherme Moro
Public bug reported:

The documentation (https://github.com/canonical/cloud-
init/blob/master/doc/rtd/topics/datasources/nocloud.rst) does not
mention that user-data and meta-data both need to be present in the ISO
for it to work.

If one of the files is missing, the user will only see

DataSourceNoCloud.py[WARNING]: device /dev/sr0 with label=cidata not a
valid seed.

without any further info.

A note in the docs along these lines

"user-data and meta-data are both required to be present for it to be
considered a valid seed ISO"

should suffice. I don't have a CLA, but if there's any way I could send a
pull request to be signed and merged by someone who has one, I would be
happy to help.
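
For reference, a hedged example of building a seed ISO that cloud-init will
accept (the genisoimage flags follow what the NoCloud docs commonly show; the
file contents here are only illustrative):

  import pathlib
  import subprocess

  # Both files must exist on the image, even if meta-data is nearly empty;
  # otherwise cloud-init only logs "label=cidata not a valid seed".
  pathlib.Path("meta-data").write_text("instance-id: iid-local01\nlocal-hostname: demo\n")
  pathlib.Path("user-data").write_text("#cloud-config\nhostname: demo\n")

  subprocess.run(
      ["genisoimage", "-output", "seed.iso", "-volid", "cidata",
       "-joliet", "-rock", "user-data", "meta-data"],
      check=True,
  )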

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1931577

Title:
  NoCloud documentation does not mention which files are required in the
  ISO

Status in cloud-init:
  New

Bug description:
  The documentation (https://github.com/canonical/cloud-
  init/blob/master/doc/rtd/topics/datasources/nocloud.rst) does not
  mention that user-data and meta-data both need to be present in the ISO
  for it to work.

  If one of the files is missing, the user will only see

  DataSourceNoCloud.py[WARNING]: device /dev/sr0 with label=cidata not a
  valid seed.

  without any further info.

  A note in the docs along these lines

  "user-data and meta-data are both required to be present for it to be
  considered a valid seed ISO"

  should suffice. I don't have a CLA, but if there's any way I could send
  a pull request to be signed and merged by someone who has one, I would be
  happy to help.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1931577/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931259] Re: API "subnet-segmentid-writable" does not include "is_filter" in the "segment_id" field

2021-06-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/795340
Committed: 
https://opendev.org/openstack/neutron-lib/commit/e9b42d44593aabd400a68e3003894aa429b7cdaf
Submitter: "Zuul (22348)"
Branch: master

commit e9b42d44593aabd400a68e3003894aa429b7cdaf
Author: Rodolfo Alonso Hernandez 
Date:   Tue Jun 8 14:00:55 2021 +

API "subnet-segmentid-writable" should inherit field definition

API extensions "subnet-segmentid-writable" should inherit field
"segment_id" definition from the parent API extension, "segment".
This patch fixes how this field is defined, re-introducing the
missing key "is_filter", present in the parent API.

Change-Id: Ib8840e5e884100943f6e0cfbee1737a70c0d3874
Closes-Bug: #1931259


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1931259

Title:
  API "subnet-segmentid-writable" does not include "is_filter" in the
  "segment_id" field

Status in neutron:
  Fix Released

Bug description:
  API "subnet-segmentid-writable" does not include "is_filter" in the
  "segment_id" field.

  This field is present in the main extension, "segments". This child
  extension should include it too.
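
  A minimal sketch of the idea, using the usual neutron-lib attribute-map
  style (the keys shown are illustrative, not the exact upstream definition):
  if the writable child extension reuses the parent's "segment_id" definition,
  flags such as "is_filter" cannot silently go missing.

  # Hypothetical attribute definition in the parent ("segments") extension.
  SEGMENT_ID = {
      'allow_post': True,
      'allow_put': False,
      'validate': {'type:uuid_or_none': None},
      'is_visible': True,
      'is_filter': True,   # the key that was missing from the child extension
  }

  # The "subnet-segmentid-writable" child only changes allow_put and inherits
  # everything else, so is_filter stays in sync with the parent definition.
  WRITABLE_SEGMENT_ID = dict(SEGMENT_ID, allow_put=True)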

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1931259/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1916052] Re: Unable to create trust errors in glance-api

2021-06-10 Thread Erno Kuvaja
** Changed in: glance
   Status: Incomplete => Triaged

** Changed in: glance
 Assignee: (unassigned) => Erno Kuvaja (jokke)

** Changed in: glance
   Importance: Undecided => High

** Also affects: glance/ussuri
   Importance: Undecided
   Status: New

** Also affects: glance/victoria
   Importance: High
 Assignee: Erno Kuvaja (jokke)
   Status: Triaged

** Changed in: glance/ussuri
 Assignee: (unassigned) => Erno Kuvaja (jokke)

** Changed in: glance/ussuri
   Status: New => Triaged

** Changed in: glance/ussuri
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1916052

Title:
  Unable to create trust errors in glance-api

Status in Glance:
  In Progress
Status in Glance ussuri series:
  Triaged
Status in Glance victoria series:
  In Progress

Bug description:
  Hi,

  I enabled swift_store_expire_soon_interval = 1800 for images that take a
  long time to complete the upload, but it doesn't seem to work as
  planned. I see a trust issue (based on the docs, this is True by default)
  and the following in the glance-api logs:

  2021-02-18 14:04:52,948.948 30 INFO glance.api.v2.image_data [req-
  bb9660d8-c24c-4350-9d4e-7cfaffebf8d9
  332d21d621e27dd887ff1f3388312be975597e42b755eec00ceff70d033228b8
  97bf741678d44e8da33c43f4c4662ade - ec213443e8834473b579f7bea9e8c194
  ec213443e8834473b579f7bea9e8c194] Unable to create trust: no such
  option collect_timing in group [keystone_authtoken] Use the existing
  user token

  2021-02-18 12:02:43,166.166 33 INFO glance.api.v2.image_data [req-
  8a48bfa1-9d37-4095-8f7f-70438d4daff6 a10475412aa34d05a815fac977df8620
  caa6209d2c38450f8266311fd0f05446 - default
  582d6603e91d4d3d8193fa9160a599f0] Unable to create trust: no such
  option collect_timing in group [keystone_authtoken] Use the existing
  user token.

  http://paste.openstack.org/show/802787/

  Ref :

  https://bugs.launchpad.net/keystone/+bug/1775140
  https://review.opendev.org/c/openstack/glance/+/479047

  Glance Version : Victoria Release, 21.0.0
  Glance_Store Version : 2.3.0
  Swift Version (Backend) :  Victoria (i have enabled multi-tenant)
  Keystone : Train

  Please let me know if further information is required.

  Regards,
  Rajiv

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1916052/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931571] [NEW] Nova ignores reader role conventions in default policies

2021-06-10 Thread Florian Faltermeier
Public bug reported:

In keystone, if I grant someone the reader role on a project, that
read-only (reader role) user is still able to create a new instance within
the project.

Openstack Version: wallaby

1. Create a user within a project and add the reader role to the user.
2. Log in to the project with the read-only user and try to create an instance.

Florian
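
For comparison, a hedged oslo.policy sketch of the convention the report
expects (rule names and check strings here are illustrative, not Nova's real
defaults): "reader" should satisfy read-only rules but not creation.

  from oslo_config import cfg
  from oslo_policy import policy

  enforcer = policy.Enforcer(cfg.CONF)
  enforcer.register_defaults([
      policy.RuleDefault('compute:servers:index', 'role:reader or role:member'),
      policy.RuleDefault('compute:servers:create', 'role:member'),
  ])

  reader_creds = {'roles': ['reader'], 'project_id': 'p1'}
  target = {'project_id': 'p1'}
  print(enforcer.enforce('compute:servers:index', target, reader_creds))   # True
  print(enforcer.enforce('compute:servers:create', target, reader_creds))  # False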

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

  In keystone, if I grant someone the reader role on a project the
  readonly (role reader) user is able to create a new instance within the
  project.
  
  Openstack Version: wallaby
  
  1. Create a user within a project and add role reader to the user.
- 2. Login with the readonly user into the project and create an instance.
+ 2. Login with the readonly user into the project and try to create an 
instance.
  
  Florian

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1931571

Title:
  Nova ignores reader role conventions in default policies

Status in OpenStack Compute (nova):
  New

Bug description:
  In keystone, if I grant someone the reader role on a project, that
  read-only (reader role) user is still able to create a new instance
  within the project.

  Openstack Version: wallaby

  1. Create a user within a project and add the reader role to the user.
  2. Log in to the project with the read-only user and try to create an
instance.

  Florian

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1931571/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1808010] Re: Tempest cirros ssh setup fails due to lack of disk space causing config-drive setup to fail forcing fallback to metadata server which fails due to hitting 10 second

2021-06-10 Thread Lee Yarwood
** No longer affects: nova

** Changed in: devstack
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1808010

Title:
  Tempest cirros ssh setup fails due to lack of disk space causing
  config-drive setup to fail forcing fallback to metadata server which
  fails due to hitting 10 second timeout.

Status in devstack:
  Fix Released
Status in OpenStack-Gate:
  New

Bug description:
  Some tempest tests fail because cirros boots up and isn't able to
  write files due to a lack of disk space. It is currently unclear whether
  this is a host cloud hypervisor disk limitation, whether we are running
  out of disk on the test node, or whether the nova test flavor is now too
  small for cirros.

  Looking at logstash it seems restricted to a subset of our cloud
  regions (though not a single cloud) which may indicate it is a host
  cloud provider disk issue.

  Adding this bug so we can track it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1808010/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1844174] Re: test_fail_set_az fails intermittently with "AssertionError: OpenStackApiException not raised by _set_az_aggregate"

2021-06-10 Thread Balazs Gibizer
The fix https://review.opendev.org/c/openstack/nova/+/682486 has been
merged and no recent hits are visible.

** Changed in: nova
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1844174

Title:
  test_fail_set_az fails intermittently with "AssertionError:
  OpenStackApiException not raised by _set_az_aggregate"

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Since 20190910 we've hit this 10x: 8x in functional and 2x in
  functional-py36

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22OpenStackApiException%20not%20raised%20by%20_set_az_aggregate%5C%22

  It looks to be a NoValidHosts caused by

  2019-09-16 15:10:21,389 INFO [nova.filters] Filter AvailabilityZoneFilter 
returned 0 hosts
  2019-09-16 15:10:21,390 INFO [nova.filters] Filtering removed all hosts for 
the request with instance ID 'e1ae6109-2bc2-4a40-9249-3dee7d5e80b5'. Filter 
results: ['AvailabilityZoneFilter: (start: 2, end: 0)']

  Here's one example:
  
https://14cb8680ad7e2d5893c2-a0a2161f988b6356e48326da15450ffb.ssl.cf1.rackcdn.com/671800/36/check
  /nova-tox-functional-py36/abc690a/testr_results.html.gz

  or pasted here for when ^ expires:
  http://paste.openstack.org/raw/776821/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1844174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1707160] Re: test_create_port_in_allowed_allocation_pools test fails on ironic grenade

2021-06-10 Thread Lee Yarwood
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1707160

Title:
  test_create_port_in_allowed_allocation_pools test fails on ironic
  grenade

Status in neutron:
  Fix Released
Status in oslo.messaging:
  Fix Released

Bug description:
  Here is an example of a job at
  http://logs.openstack.org/58/487458/6/check/gate-grenade-dsvm-ironic-
  ubuntu-xenial/d8f187e/console.html#_2017-07-28_09_33_52_031224

  2017-07-28 09:33:52.027473 | Captured pythonlogging:
  2017-07-28 09:33:52.027484 | ~~~
  2017-07-28 09:33:52.027539 | 2017-07-28 09:15:48,746 9778 INFO 
[tempest.lib.common.rest_client] Request 
(PortsTestJSON:test_create_port_in_allowed_allocation_pools): 201 POST 
http://149.202.183.40:9696/v2.0/networks 0.342s
  2017-07-28 09:33:52.027604 | 2017-07-28 09:15:48,746 9778 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-07-28 09:33:52.027633 | Body: {"network": {"name": 
"tempest-PortsTestJSON-test-network-1596805013"}}
  2017-07-28 09:33:52.027728 | Response - Headers: {u'date': 'Fri, 28 
Jul 2017 09:15:48 GMT', u'x-openstack-request-id': 
'req-0502025a-db49-4f1f-b30d-c38b8098b79e', u'content-type': 
'application/json', u'content-length': '582', 'content-location': 
'http://149.202.183.40:9696/v2.0/networks', 'status': '201', u'connection': 
'close'}
  2017-07-28 09:33:52.027880 | Body: 
{"network":{"status":"ACTIVE","router:external":false,"availability_zone_hints":[],"availability_zones":[],"description":"","subnets":[],"shared":false,"tenant_id":"5c851bb85bef4b008714ef04d1fe3671","created_at":"2017-07-28T09:15:48Z","tags":[],"ipv6_address_scope":null,"mtu":1450,"updated_at":"2017-07-28T09:15:48Z","admin_state_up":true,"revision_number":2,"ipv4_address_scope":null,"is_default":false,"port_security_enabled":true,"project_id":"5c851bb85bef4b008714ef04d1fe3671","id":"b8a3fb1c-86a4-4518-8c3a-dd12db585659","name":"tempest-PortsTestJSON-test-network-1596805013"}}
  2017-07-28 09:33:52.027936 | 2017-07-28 09:15:49,430 9778 INFO 
[tempest.lib.common.rest_client] Request 
(PortsTestJSON:test_create_port_in_allowed_allocation_pools): 201 POST 
http://149.202.183.40:9696/v2.0/subnets 0.682s
  2017-07-28 09:33:52.027998 | 2017-07-28 09:15:49,431 9778 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-07-28 09:33:52.028054 | Body: {"subnet": {"ip_version": 4, 
"allocation_pools": [{"end": "10.1.0.14", "start": "10.1.0.2"}], "network_id": 
"b8a3fb1c-86a4-4518-8c3a-dd12db585659", "gateway_ip": "10.1.0.1", "cidr": 
"10.1.0.0/28"}}
  2017-07-28 09:33:52.028135 | Response - Headers: {u'date': 'Fri, 28 
Jul 2017 09:15:49 GMT', u'x-openstack-request-id': 
'req-1a50b739-8683-4aaa-ba4a-6e9daf73f1c8', u'content-type': 
'application/json', u'content-length': '594', 'content-location': 
'http://149.202.183.40:9696/v2.0/subnets', 'status': '201', u'connection': 
'close'}
  2017-07-28 09:33:52.030085 | Body: 
{"subnet":{"service_types":[],"description":"","enable_dhcp":true,"tags":[],"network_id":"b8a3fb1c-86a4-4518-8c3a-dd12db585659","tenant_id":"5c851bb85bef4b008714ef04d1fe3671","created_at":"2017-07-28T09:15:49Z","dns_nameservers":[],"updated_at":"2017-07-28T09:15:49Z","gateway_ip":"10.1.0.1","ipv6_ra_mode":null,"allocation_pools":[{"start":"10.1.0.2","end":"10.1.0.14"}],"host_routes":[],"revision_number":0,"ip_version":4,"ipv6_address_mode":null,"cidr":"10.1.0.0/28","project_id":"5c851bb85bef4b008714ef04d1fe3671","id":"be974b50-e56b-44a8-86a9-6bcc345f9d55","subnetpool_id":null,"name":""}}
  2017-07-28 09:33:52.030176 | 2017-07-28 09:15:50,616 9778 INFO 
[tempest.lib.common.rest_client] Request 
(PortsTestJSON:test_create_port_in_allowed_allocation_pools): 201 POST 
http://149.202.183.40:9696/v2.0/ports 1.185s
  2017-07-28 09:33:52.030232 | 2017-07-28 09:15:50,617 9778 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2017-07-28 09:33:52.030259 | Body: {"port": {"network_id": 
"b8a3fb1c-86a4-4518-8c3a-dd12db585659"}}
  2017-07-28 09:33:52.030369 | Response - Headers: {u'date': 'Fri, 28 
Jul 2017 09:15:50 GMT', u'x-openstack-request-id': 
'req-6b57ff81-c874-4e97-8183-bd57c7e8de81', u'content-type': 
'application/json', u'content-length': '691', 'content-location': 
'http://149.202.183.40:9696/v2.0/ports', 'status': '201', u'connection': 
'close'}
  2017-07-28 09:33:52.030596 | Body: 

[Yahoo-eng-team] [Bug 1907117] Re: Could not find a version that satisfies the requirement packaging>=20.4 (from oslo-utils)

2021-06-10 Thread Lee Yarwood
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1907117

Title:
  Could not find a version that satisfies the requirement
  packaging>=20.4 (from oslo-utils)

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) victoria series:
  Fix Released

Bug description:
  Description
  ===
  openstack-tox-lower-constraints on openstack/nova master & stable/victoria is
often failing with the following error, where pip cannot find a version of the
packaging module that satisfies oslo.utils:

  2020-12-07 13:25:45.153758 | ubuntu-focal | ERROR: Could not find a version 
that satisfies the requirement packaging>=20.4 (from oslo-utils)
  2020-12-07 13:25:45.153770 | ubuntu-focal | ERROR: No matching distribution 
found for packaging>=20.4

  The following logstash query finds 11 hits at present:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22No%20matching%20distribution%20found%20for%20packaging%5C%22%20AND%20tags%3A%5C%22console%5C%22

  fungi was able to reproduce this and found that Nova's lower-
  constraints.txt is actually to blame after the following change bumped
  our oslo.utils requirement to 4.5:

  https://review.opendev.org/c/openstack/nova/+/748059

  oslo.utils has a lower-constraints.txt packaging requirement of
  20.4 while Nova has a lower-constraints.txt packaging requirement of
  17.1, creating the conflict. To fix this we need to bump the
  requirement in Nova to 20.4.

  Steps to reproduce
  ==

  * Use a recent version of pip with the lower-constraints env.

  Expected result
  ===
  Passes

  Actual result
  =
  Fails

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 master and stable/victoria

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 N/A

  2. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 N/A

  3. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 N/A

  Logs & Configs
  ==

  See above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1907117/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1907438] Re: ERROR: Package 'bandit' requires a different Python: 2.7.17 not in '>=3.5'

2021-06-10 Thread Lee Yarwood
** Changed in: neutron
   Status: Confirmed => Fix Released

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1907438

Title:
  ERROR: Package 'bandit' requires a different Python: 2.7.17 not in
  '>=3.5'

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) train series:
  Fix Released
Status in OpenStack Object Storage (swift):
  Confirmed

Bug description:
  The 1.6.3 [1] release has dropped support for py2 [2].

  This should be capped within Nova's test-requirements.txt, as linters
  are not covered by upper-constraints (UC).

  [1] https://github.com/PyCQA/bandit/releases/tag/1.6.3
  [2] https://github.com/PyCQA/bandit/pull/615

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1907438/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1926399] Re: UT failing with sqlalchemy 1.4

2021-06-10 Thread Balazs Gibizer
Nova fix has been merged
https://review.opendev.org/c/openstack/nova/+/788471

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1926399

Title:
  UT failing with sqlalchemy 1.4

Status in Cinder:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in masakari:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.db:
  Fix Released

Bug description:
  See job cross-neutron-py36 in test patch
  https://review.opendev.org/c/openstack/requirements/+/788339/

  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ac7/788339/1/check
  /cross-neutron-py36/ac77335/testr_results.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1926399/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1580927] Re: spans beyond the subnet reported incorrectly in ipam

2021-06-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/318542
Committed: 
https://opendev.org/openstack/neutron/commit/437a311eca27bde5799b04d6a27d8e0e2aaf1c1f
Submitter: "Zuul (22348)"
Branch: master

commit 437a311eca27bde5799b04d6a27d8e0e2aaf1c1f
Author: Nurmatov Mamatisa 
Date:   Thu Feb 25 21:19:17 2021 +0300

Using 31-Bit and 32-Bit prefixes for IPv4 reasonably

When needing to create a point-to-point connection via a subnet,
a /31 is generally the recommended CIDR. Neutron supports /31 by
disabling DHCP and the gateway on a subnet. /32 is also supported in
OpenStack.

Closes-Bug: #1580927
Change-Id: I3bfa3efb9fb8076656b16c89d2f35d74efde12b7


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1580927

Title:
  spans beyond the subnet reported incorrectly in ipam

Status in neutron:
  Fix Released

Bug description:
  summary: When needing to create a point-to-point connection via a
  subnet, a /31 is generally the recommended CIDR. Neutron supports
  /31 by disabling DHCP and the gateway on a subnet. However, IPAM does
  not provide the allocation pool of the subnet properly and a VM cannot
  be created.

  Steps to reproduce

  root@ubuntu:~# neutron subnet-create  --disable-dhcp --no-gateway 
--cidr=10.14.0.20/31 --name bug-subnet 69c5342a-5526-4257-880a-f8fd2e633de9
  Created a new subnet:
  +---+--+
  | Field | Value|
  +---+--+
  | allocation_pools  |  |
  | cidr  | 10.14.0.20/31|
  | dns_nameservers   |  |
  | enable_dhcp   | False|
  | gateway_ip|  |
  | host_routes   |  |
  | id| 63ce4e26-9838-4fa3-b2d5-e59f88f5b7ce |
  | ip_version| 4|
  | ipv6_address_mode |  |
  | ipv6_ra_mode  |  |
  | name  | bug-subnet   |
  | network_id| 69c5342a-5526-4257-880a-f8fd2e633de9 |
  | subnetpool_id |  |
  | tenant_id | ca02fc470acc4a27b468dff32ee850b2 |
  +---+--+
  root@ubuntu:~# neutron subnet-update --allocation-pool 
start=10.14.0.20,end=10.14.0.21 bug-subnet
  The allocation pool 10.14.0.20-10.14.0.21 spans beyond the subnet cidr 
10.14.0.20/31.

  Recommended Fix:

  in db/ipam_backend_mixin.py :: function: validate_allocation_pools
  ~~lines: 276

      if start_ip < subnet_first_ip or end_ip > subnet_last_ip:
          LOG.info(_LI("Found pool larger than subnet "
                       "CIDR:%(start)s - %(end)s"),
                   {'start': start_ip, 'end': end_ip})
          raise n_exc.OutOfBoundsAllocationPool(
              pool=ip_pool,
              subnet_cidr=subnet_cidr)

  This if block should have a special case for IPv4 /31 and /32, using "<=" and
  ">=" instead:
  start_ip <= subnet_first_ip or end_ip >= subnet_last_ip
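
  A small, hedged sketch of the boundary rule the recommendation is aiming at
  (helper name and structure are illustrative, not the actual neutron code):
  for IPv4 /31 and /32 the whole CIDR is usable, so the pool bounds should be
  the first and last addresses of the network itself.

  import netaddr

  def pool_bounds(subnet_cidr):
      # Hypothetical helper: first/last addresses usable for allocation pools.
      net = netaddr.IPNetwork(subnet_cidr)
      if net.version == 4 and net.prefixlen >= 31:
          # /31 (point-to-point, RFC 3021) and /32 have no separate network
          # or broadcast address, so every address in the CIDR is usable.
          return net[0], net[-1]
      # Otherwise skip the network and broadcast addresses.
      return net[1], net[-2]

  first, last = pool_bounds("10.14.0.20/31")
  assert (str(first), str(last)) == ("10.14.0.20", "10.14.0.21")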

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1580927/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp