[Yahoo-eng-team] [Bug 1639230] Re: reschedule fails with ip already allocated error

2020-08-06 Thread Onap Sig
** Changed in: nova/newton
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639230

Title:
  reschedule fails with ip already allocated error

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  Fix Released
Status in nova package in Ubuntu:
  Fix Released
Status in nova source package in Xenial:
  Confirmed

Bug description:
  Tried to create a server in a multi-host environment. The create
  failed on the first host that was attempted due to a ClientException
  raised by nova.volume.cinder.API.initialize_connection while trying to
  attach a volume. When the build was rescheduled on a different host,
  it should have realized that the network was already allocated by the
  first attempt and reused that, but the network_allocated=True from
  instance.system_metadata somehow disappeared, leading to the following
  exception that causes the reschedule to fail:

  2016-10-13 04:48:29.007 16273 WARNING nova.network.neutronv2.api 
[req-9b343ef7-e8d9-4a61-b86c-a61908afe4df 
0688b01e6439ca32d698d20789d52169126fb41fb1a4ddafcebb97d854e836c9 
94e1baed634145e0aade858973ae88e8 - - -] [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] Neutron error creating port on network 
5038a36b-cb1e-4a61-b26c-a05a80b37ed6
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] Traceback (most recent call last):
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 392, in 
_create_port_minimal
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] port_response = 
port_client.create_port(port_req_body)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 98, in 
wrapper
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] ret = obj(*args, **kwargs)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 750, in 
create_port
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] return self.post(self.ports_path, 
body=body)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 98, in 
wrapper
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] ret = obj(*args, **kwargs)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 365, in 
post
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] headers=headers, params=params)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 98, in 
wrapper
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] ret = obj(*args, **kwargs)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 300, in 
do_request
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] 
self._handle_fault_response(status_code, replybody, resp)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 98, in 
wrapper
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] ret = obj(*args, **kwargs)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b]   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 275, in 
_handle_fault_response
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [instance: 
b85d6c6c-e385-4601-aa47-5c580f893c9b] exception_handler_v20(status_code, 
error_body)
  2016-10-13 04:48:29.007 16273 ERROR nova.network.neutronv2.api [i
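
  For reference, the reschedule path is expected to gate network
  re-allocation on that system_metadata flag. A minimal sketch of the
  intent (simplified, not the actual nova code; the helper name is made
  up for illustration):

      from oslo_utils import strutils

      def should_allocate_networks(instance):
          # nova stores 'network_allocated' as a string in system_metadata
          # after the first successful allocation; if the flag survives the
          # reschedule, the existing ports should be reused instead of
          # asking Neutron for new ones (which trips the "IP already
          # allocated" error seen above).
          already_allocated = strutils.bool_from_string(
              instance.system_metadata.get('network_allocated', 'False'))
          return not already_allocated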

[Yahoo-eng-team] [Bug 1700999] Re: during nova service install, an unneeded httpd conf bug fix causes openstack client cli crash

2020-08-06 Thread melanie witt
This is not an issue on the master branch, so marking it as Invalid for
master.

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: In Progress => Invalid

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/rocky
   Status: New => In Progress

** Changed in: nova/rocky
 Assignee: (unassigned) => Harshavardhan Metla (harsha24)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1700999

Title:
  during nova service install, an unneeded httpd conf bug fix causes
  openstack client cli crash

Status in OpenStack Compute (nova):
  Invalid
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New
Status in OpenStack Compute (nova) rocky series:
  In Progress

Bug description:
  - [X] This doc is inaccurate in this way:

  Hi,

  Please MOVE the quoted section below from:

  https://docs.openstack.org/ocata/install-guide-rdo/nova-controller-
  install.html#install-and-configure-components

  To:

  https://docs.openstack.org/ocata/install-guide-rdo/nova-verify.html
  (before no.5).

  Reason:

  When the patch is applied at the original location it breaks the
  openstack client CLI, breaking the openstack configuration commands
  (openstack user create, etc.):

  Details further down...

  """

  Due to a packaging bug, you must enable access to the Placement
  API by adding the following configuration to
  /etc/httpd/conf.d/00-nova-placement-api.conf:

  <Directory /usr/bin>
     <IfVersion >= 2.4>
        Require all granted
     </IfVersion>
     <IfVersion < 2.4>
        Order allow,deny
        Allow from all
     </IfVersion>
  </Directory>

  Restart the httpd service:

  # systemctl restart httpd

  """

  When you do add this section to the named httpd conf file, it causes
  the following error on all openstack client CLI commands:

  #openstack endpoint list;openstack catalog list
  Discovering versions from the identity service failed when creating the 
password plugin. Attempting to determine version from URL.
  Internal Server Error (HTTP 500)
  Discovering versions from the identity service failed when creating the 
password plugin. Attempting to determine version from URL.
  Internal Server Error (HTTP 500)

  Suggested solution for this bug:
  Please MOVE the mentioned section to
  https://docs.openstack.org/ocata/install-guide-rdo/nova-verify.html
  (before no. 5 - nova-status upgrade check).
  Adding the httpd section patch at this step works flawlessly.

  Recovery if the section was already added at the original placement:
  undo && redo the following part of the identity service install:
  https://docs.openstack.org/ocata/install-guide-rdo/keystone-install.html

  OpenStack versions:
  #rpm -qa |grep openstack-nova
  openstack-nova-common-15.0.3-2.el7.noarch
  openstack-nova-conductor-15.0.3-2.el7.noarch
  openstack-nova-api-15.0.3-2.el7.noarch
  openstack-nova-console-15.0.3-2.el7.noarch
  openstack-nova-scheduler-15.0.3-2.el7.noarch
  openstack-nova-novncproxy-15.0.3-2.el7.noarch

  OS details :
  CentOS Linux release 7.3.1611 (Core)
  Derived from Red Hat Enterprise Linux 7.3 (Source)
  NAME="CentOS Linux"
  VERSION="7 (Core)"
  ID="centos"
  ID_LIKE="rhel fedora"
  VERSION_ID="7"
  PRETTY_NAME="CentOS Linux 7 (Core)"
  ANSI_COLOR="0;31"
  CPE_NAME="cpe:/o:centos:centos:7"
  HOME_URL="https://www.centos.org/"
  BUG_REPORT_URL="https://bugs.centos.org/"

  CENTOS_MANTISBT_PROJECT="CentOS-7"
  CENTOS_MANTISBT_PROJECT_VERSION="7"
  REDHAT_SUPPORT_PRODUCT="centos"
  REDHAT_SUPPORT_PRODUCT_VERSION="7"

  CentOS Linux release 7.3.1611 (Core)
  CentOS Linux release 7.3.1611 (Core)
  cpe:/o:centos:centos:7

  ---
  Release: 15.0.0 on 2017-06-22 12:09
  SHA: 33b12839643984b75465df265ee355683e40c6cf
  Source: 
https://git.openstack.org/cgit/openstack/openstack-manuals/tree/doc/install-guide/source/nova-controller-install.rst
  URL: 
https://docs.openstack.org/ocata/install-guide-rdo/nova-controller-install.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1700999/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663364] Re: A server creation failed due to "Failed to allocate the network"

2020-08-06 Thread SPC Reactive TIM
*** This bug is a duplicate of bug 1643911 ***
https://bugs.launchpad.net/bugs/1643911

** Also affects: ubuntu
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663364

Title:
  A server creation failed due to "Failed to allocate the network"

Status in OpenStack Compute (nova):
  New
Status in Ubuntu:
  New

Bug description:
  On test_create_image_from_deleted_server, Tempest waits for the server
  creation by polling nova-api with the "Get a server" API.
  The API returned an error response like

  http://logs.openstack.org/59/426459/4/check/gate-tempest-dsvm-neutron-
  full-ubuntu-
  xenial/0222b58/logs/tempest.txt.gz#_2017-02-09_18_16_37_270

  Body: {"server": {"OS-EXT-STS:task_state": null, "addresses": {},
  "links": [{"href":
  
"https://198.72.124.211:8774/v2.1/servers/bd439544-a42f-4802-a457-a5fdf5d97e1b";,
  "rel": "self"}, {"href":
  "https://198.72.124.211:8774/servers/bd439544-a42f-4802-a457-a5fdf5d97e1b";,
  "rel": "bookmark"}], "image": {"id": "2dc6d0fb-6371-478e-a0fd-
  05c181fdeb15", "links": [{"href":
  "https://198.72.124.211:8774/images/2dc6d0fb-6371-478e-a0fd-
  05c181fdeb15", "rel": "bookmark"}]}, "OS-EXT-STS:vm_state": "error",
  "OS-SRV-USG:launched_at": null, "flavor": {"id": "42", "links":
  [{"href": "https://198.72.124.211:8774/flavors/42";, "rel":
  "bookmark"}]}, "id": "bd439544-a42f-4802-a457-a5fdf5d97e1b",
  "user_id": "ffc3b5a76f264f88b6fb8bea40597379", "OS-DCF:diskConfig":
  "MANUAL", "accessIPv4": "", "accessIPv6": "", "OS-EXT-
  STS:power_state": 0, "OS-EXT-AZ:availability_zone": "", "metadata":
  {}, "status": "ERROR", "updated": "2017-02-09T18:16:36Z", "hostId":
  "", "OS-SRV-USG:terminated_at": null, "key_name": null, "name":
  "tempest-ImagesNegativeTestJSON-server-483537554", "created":
  "2017-02-09T18:16:34Z", "tenant_id":
  "e3c9cc38bc4d458796351bd05ff199b6", "os-extended-
  volumes:volumes_attached": [], "fault": {"message": "Build of instance
  bd439544-a42f-4802-a457-a5fdf5d97e1b aborted: Failed to allocate the
  network(s), not rescheduling.", "code": 500, "created":
  "2017-02-09T18:16:36Z"}, "config_drive": ""}}

  and the test failed. The "fault" parameter shows
   {"message": "Build of instance bd439544-a42f-4802-a457-a5fdf5d97e1b aborted: 
Failed to allocate the network(s), not rescheduling.",
"code": 500,
"created": "2017-02-09T18:16:36Z"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663364/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890717] [NEW] Keystone deploy enters ERROR state with nothing running on port 35337

2020-08-06 Thread Ryan Farrell
Public bug reported:

A fresh deployment of Keystone has failed; the charm entered an error
state and then became blocked with a message indicating it was waiting for
a certain number of peers.

2020-08-06 17:15:33 DEBUG identity-service-relation-changed
RuntimeError: The call within manager.py failed with the error: 'Unable
to establish connection to http://localhost:35337/v3/auth/tokens:
HTTPConnectionPool(host='localhost', port=35337): Max retries exceeded
with url: /v3/auth/tokens (Caused by
NewConnectionError(': Failed to establish a new connection: [Errno 111]
Connection refused',))'. The call was: path=['resolve_role_id'],
args=('member',), kwargs={}, api_version=None

Logs indicated that there was nothing listening on port 35337, which was
confirmed with nc:

# keystone/0  - working unit
ubuntu@juju-2bccdc-9-lxd-1:~$ sudo netstat -tupln | grep 35337
tcp6       0      0 :::35337                :::*                    LISTEN      30815/apache2

#keystone/1  - non working unit
ubuntu@juju-2bccdc-10-lxd-1:~$ sudo netstat -tupln | grep 35337


Additionally the directory /etc/apache2/ssl/keystone was completely empty -
we expected there to be symlinks to certs / keys there.


This deployment was using cs:~openstack-charmers-next/keystone-500 - 
redeploying the unit was successful.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1890717

Title:
  Keystone deploy enters ERROR state with nothing running on port 35337

Status in OpenStack Identity (keystone):
  New

Bug description:
  A fresh deployment of Keystone has failed; the charm entered an error
  state and then became blocked with a message indicating it was waiting
  for a certain number of peers.

  2020-08-06 17:15:33 DEBUG identity-service-relation-changed
  RuntimeError: The call within manager.py failed with the error:
  'Unable to establish connection to
  http://localhost:35337/v3/auth/tokens:
  HTTPConnectionPool(host='localhost', port=35337): Max retries exceeded
  with url: /v3/auth/tokens (Caused by
  NewConnectionError(': Failed to establish a new connection: [Errno 111]
  Connection refused',))'. The call was: path=['resolve_role_id'],
  args=('member',), kwargs={}, api_version=None

  Logs indicated that there was nothing listening on port 35337, which
  was confirmed with nc:

  # keystone/0  - working unit
  ubuntu@juju-2bccdc-9-lxd-1:~$ sudo netstat -tupln | grep 35337
  tcp6       0      0 :::35337                :::*                    LISTEN      30815/apache2

  #keystone/1  - non working unit
  ubuntu@juju-2bccdc-10-lxd-1:~$ sudo netstat -tupln | grep 35337

  
  Additionally the directory /etc/apache2/ssl/keystone was completely
  empty - we expected there to be symlinks to certs / keys there.

  
  This deployment was using cs:~openstack-charmers-next/keystone-500 - 
redeploying the unit was successful.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1890717/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890353] Re: support pyroute2 0.5.13

2020-08-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/744809
Committed: 
https://git.openstack.org/cgit/openstack/os-vif/commit/?id=c8703df185ae3a3445e1e3368ee3229f485b85d9
Submitter: Zuul
Branch: master

commit c8703df185ae3a3445e1e3368ee3229f485b85d9
Author: Sean Mooney 
Date:   Wed Aug 5 00:07:11 2020 +

support pyroute2 0.5.13

This change modifies os-vif's add interface code to account
for the new behavior of link_lookup.
In 0.5.13 if a link is not found link_lookup returns an
empty list. In previous releases it raised ipexc.NetlinkError.

Closes-bug: #1890353

Change-Id: I567afb544425c1b91d98968a0b597be718869089


** Changed in: os-vif
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1890353

Title:
  support pyroute2 0.5.13

Status in neutron:
  In Progress
Status in os-vif:
  Fix Released

Bug description:
  pyroute2 version 0.5.13 changed the behaviour of ip.link_lookup
  such that if the device is not found it now returns an empty list
  instead of raising ipexc.NetlinkError.

  As we have unit tests that assert the old behaviour, this breaks the
  os-vif unit tests. Recently https://review.opendev.org/#/c/743277/ was
  merged to bump the pyroute2 version, but the requirements repo does not
  currently run the os-vif unit tests, so this breakage was not seen.

  https://review.opendev.org/#/c/744803/1 adds a new cross job to track
  that but os-vif should also be updated to account for the new
  behaviour.
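
  A minimal sketch of code that tolerates both behaviours (the device
  name and import alias are illustrative, not the os-vif code):

      from pyroute2 import IPRoute
      from pyroute2.netlink import exceptions as ipexc

      def lookup_link_index(devname):
          ip = IPRoute()
          try:
              # pyroute2 >= 0.5.13 returns [] when the device is missing;
              # older releases raise NetlinkError instead.
              indexes = ip.link_lookup(ifname=devname)
          except ipexc.NetlinkError:
              indexes = []
          finally:
              ip.close()
          return indexes[0] if indexes else None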

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1890353/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890596] [NEW] ovn-controller cannot connect to the integration bridge with OVS 2.12 and later

2020-08-06 Thread Jakub Libosvar
Public bug reported:

There is a bug in OVS 2.12 and later where protocols cannot be set after
the bridge is created. The OVS agents set the OpenFlow protocols
explicitly, but there is a mismatch with ovn-controller. That means that
even though the migration process resets the protocols, ovn-controller
can't connect, because of the OVS bug
https://bugzilla.redhat.com/show_bug.cgi?id=1782834.
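
For illustration only, this is roughly how an agent pins the protocols
column on the integration bridge (the bridge name and protocol list are
assumptions, and with the OVS bug above the change may simply not take
effect once the bridge already exists):

    import subprocess

    def set_bridge_protocols(bridge='br-int',
                             protocols=('OpenFlow10', 'OpenFlow13')):
        # Pin the OpenFlow versions on the bridge record; because of the
        # OVS 2.12+ bug referenced above, updating this column after the
        # bridge has been created may not be applied, which leaves
        # ovn-controller unable to connect.
        subprocess.check_call(
            ['ovs-vsctl', 'set', 'bridge', bridge,
             'protocols=' + ','.join(protocols)])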

** Affects: neutron
 Importance: Undecided
 Assignee: Jakub Libosvar (libosvar)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Jakub Libosvar (libosvar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1890596

Title:
  ovn-controller cannot connect to the integration bridge with OVS 2.12
  and later

Status in neutron:
  In Progress

Bug description:
  There is a bug in OVS 2.12 and later where protocols cannot be set
  after the bridge is created. The OVS agents set the OpenFlow protocols
  explicitly, but there is a mismatch with ovn-controller. That means
  that even though the migration process resets the protocols,
  ovn-controller can't connect, because of the OVS bug
  https://bugzilla.redhat.com/show_bug.cgi?id=1782834.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1890596/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890580] [NEW] HTTP 409 error from glance-api with swift backend

2020-08-06 Thread Rajiv Mucheli
Public bug reported:

Hi Team,

I would like to understand why and how the glance-api pod generates HTTP
409 errors; I looked into the available documentation and code but reached
no conclusion. I referred to the below:

https://github.com/openstack/glance/blob/stable/train/doc/source/user/glanceapi.rst
https://github.com/openstack/glance/blob/54329c6a21b0d3f845b09e79f710fc795976a175/releasenotes/source/locale/ja/LC_MESSAGES/releasenotes.po
https://bugs.launchpad.net/glance/+bug/1229823
https://docs.openstack.org/glance/pike/configuration/configuring.html#configuring-the-swift-storage-backend

I wonder whether the HTTP 409 conflicts occur due to the below flags. Are
the API calls for deletion parallel or sequential? An HTTP 409 means a
Conflict response status code, so maybe the deletion calls don't wait until
the deletion is completed?

swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 512 (the default is 200 MB, with which
I don't see HTTP 409; would it generate HTTP 409 if it's increased to 500 MB?)

glance-api logs :

2020-07-18 01:55:46,627.627 52 ERROR glance.common.wsgi [req-236a9c8c-
396e-42a1-8987-f847923c7e13
f1083795e1da57ca00ff8c967ad0c3d80751fe341a1e64046869e0ae0770cc1d
7e49c7a15b4a4f149cae86a0c1366afa - ec213443e8834473b579f7bea9e8c194
ec213443e8834473b579f7bea9e8c194] Caught error: Container DELETE failed:
https://xxx:443/v1/AUTH_7e49c7a15b4a4f149cae86a0c1366afa/glance_1352ddc3
-12ba-4afe-9c89-304cefd90ef5 409 Conflict [first 60 chars of response]
b'ConflictThere was a conflict when trying t':
swiftclient.exceptions.ClientException: Container DELETE failed:
https://xxx:443/v1/AUTH_7e49c7a15b4a4f149cae86a0c1366afa/glance_1352ddc3
-12ba-4afe-9c89-304cefd90ef5 409 Conflict [first 60 chars of response]
b'ConflictThere was a conflict when trying t'

2020-07-18 01:55:46,707.707 52 INFO eventlet.wsgi.server [req-236a9c8c-
396e-42a1-8987-f847923c7e13
f1083795e1da57ca00ff8c967ad0c3d80751fe341a1e64046869e0ae0770cc1d
7e49c7a15b4a4f149cae86a0c1366afa - ec213443e8834473b579f7bea9e8c194
ec213443e8834473b579f7bea9e8c194] 10.46.14.92,100.85.0.29 - -
[18/Jul/2020 01:55:46] "DELETE /v2/images/1352ddc3-12ba-4afe-
9c89-304cefd90ef5 HTTP/1.1" 500 449 11.140969

Openstack Glance Version : Train
Glance-api.conf : 
https://github.com/sapcc/helm-charts/blob/master/openstack/glance/templates/etc/_glance-api.conf.tpl
Swift conf file : 
https://github.com/sapcc/helm-charts/blob/master/openstack/swift/templates/etc/_proxy-server.conf.tpl
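
As a point of comparison, a hedged retry sketch around the failing
container DELETE (this is not the glance_store code; the connection
object and retry values are placeholders, shown only to illustrate
backing off on 409):

    import time

    from swiftclient import exceptions as swift_exc

    def delete_container_with_retry(conn, container, attempts=5, delay=2):
        # conn is an authenticated swiftclient.client.Connection.
        # Swift answers 409 while the container still holds objects or a
        # previous delete is still settling, so back off and retry instead
        # of failing the image delete outright.
        for attempt in range(attempts):
            try:
                return conn.delete_container(container)
            except swift_exc.ClientException as exc:
                if exc.http_status != 409 or attempt == attempts - 1:
                    raise
                time.sleep(delay)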

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1890580

Title:
  HTTP 409 error from glance-api with swift backend

Status in Glance:
  New

Bug description:
  Hi Team,

  I would like to understand why and how the glance-api pod generates
  HTTP 409 errors; I looked into the available documentation and code
  but reached no conclusion. I referred to the below:

  
https://github.com/openstack/glance/blob/stable/train/doc/source/user/glanceapi.rst
  
https://github.com/openstack/glance/blob/54329c6a21b0d3f845b09e79f710fc795976a175/releasenotes/source/locale/ja/LC_MESSAGES/releasenotes.po
  https://bugs.launchpad.net/glance/+bug/1229823
  
https://docs.openstack.org/glance/pike/configuration/configuring.html#configuring-the-swift-storage-backend

  I wonder whether the HTTP 409 conflicts occur due to the below flags.
  Are the API calls for deletion parallel or sequential? An HTTP 409
  means a Conflict response status code, so maybe the deletion calls
  don't wait until the deletion is completed?

  swift_store_large_object_size = 5120
  swift_store_large_object_chunk_size = 512 (the default is 200 MB, with
  which I don't see HTTP 409; would it generate HTTP 409 if it's increased
  to 500 MB?)

  glance-api logs :

  2020-07-18 01:55:46,627.627 52 ERROR glance.common.wsgi [req-236a9c8c-
  396e-42a1-8987-f847923c7e13
  f1083795e1da57ca00ff8c967ad0c3d80751fe341a1e64046869e0ae0770cc1d
  7e49c7a15b4a4f149cae86a0c1366afa - ec213443e8834473b579f7bea9e8c194
  ec213443e8834473b579f7bea9e8c194] Caught error: Container DELETE
  failed:
  https://xxx:443/v1/AUTH_7e49c7a15b4a4f149cae86a0c1366afa/glance_1352ddc3
  -12ba-4afe-9c89-304cefd90ef5 409 Conflict [first 60 chars of response]
  b'ConflictThere was a conflict when trying t':
  swiftclient.exceptions.ClientException: Container DELETE failed:
  https://xxx:443/v1/AUTH_7e49c7a15b4a4f149cae86a0c1366afa/glance_1352ddc3
  -12ba-4afe-9c89-304cefd90ef5 409 Conflict [first 60 chars of response]
  b'ConflictThere was a conflict when trying t'

  2020-07-18 01:55:46,707.707 52 INFO eventlet.wsgi.server [req-
  236a9c8c-396e-42a1-8987-f847923c7e13
  f1083795e1da57ca00ff8c967ad0c3d80751fe341a1e64046869e0ae0770cc1d
  7e49c7a15b4a4f149cae86a0c1366afa - ec213443e8834473b579f7bea9e8c194
  ec213443e8834473b579f7bea9e8c194] 10.46.14.92,100.85.0.29 - -
  [18/Jul/2020 01:55:46] "DELETE /v2/images/1352dd

[Yahoo-eng-team] [Bug 1883671] Re: [SRIOV] When a VF is bound to a VM, Nova can't retrieve the PCI info

2020-08-06 Thread sean mooney
Reading the nic feature flags was introduced in Pike:
https://github.com/openstack/nova/commit/e6829f872aca03af6181557260637c8b601e476a

But this only seems to happen on modern versions of libvirt, so setting
the older series as Won't Fix. It can be backported if someone hits the
issue and cares to do so.

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Changed in: nova/pike
   Status: New => Won't Fix

** Changed in: nova/queens
   Status: New => Won't Fix

** Changed in: nova/rocky
   Status: New => Won't Fix

** Changed in: nova/stein
   Status: New => Triaged

** Changed in: nova/stein
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1883671

Title:
  [SRIOV] When a VF is bound to a VM, Nova can't retrieve the PCI info

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Won't Fix
Status in OpenStack Compute (nova) queens series:
  Won't Fix
Status in OpenStack Compute (nova) rocky series:
  Won't Fix
Status in OpenStack Compute (nova) stein series:
  Won't Fix
Status in OpenStack Compute (nova) train series:
  Triaged
Status in OpenStack Compute (nova) ussuri series:
  Triaged

Bug description:
  Nova periodically updates the available resources per hypervisor [1].
  That implies the reporting of the PCI devices [2]->[3].

  In [4], a new feature was introduced to read from libvirt the NIC
  capabilities (gso, tso, tx, etc.). But when the NIC interface is bound
  to the VM and the MAC address is not the one assigned by the driver
  (Nova changes the MAC address according to the info provided by
  Neutron), libvirt fails reading the non-existing device:
  http://paste.openstack.org/show/794799/.

  This command should be avoided or, at least, if the execution fails,
  the exception could be hidden.

  
  [1]https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L9642
  
[2]https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6980
  
[3]https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6898
  [4]Ia5b6abbbf4e5f762e0df04167c32c6135781d305

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1883671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1889213] Re: libvirt: live-migration crash with segfault while transfering memstate

2020-08-06 Thread Balazs Gibizer
Based on the logs I don't think this is a nova bug. I suggest contacting
the libvirt developers at https://libvirt.org/bugs.html

I'm setting this as Invalid. If further investigation on the libvirt side
indicates that nova somehow causes the libvirt segfault, then feel free to
set this bug report back to New.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1889213

Title:
  libvirt: live-migration crash with segfault while transfering memstate

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===

  Using recent Ubuntu Stein Cloud Packages, we are observing random
  live-migration crashes on the target host. Libvirt is having a
  SEGFAULT on the qemu driver. Transferring block devices usually works
  without issues. However, the following memory transfer is causing the
  target libvirtd randomly to close down its socket, resulting in a
  roll-backed migration process.

  Libvirt log on target host before the crash is attached.

  
  Steps to reproduce
  ==

  - Start a live-migration with block mode between 2 identical hosts.
  - Wait until transfer of blockdisks is done
  - During memory transfer, target host libvirt crashes.

  Expected result
  ===

  Live-Migration completes onto the new host as intended.

  Actual result
  =

  Target host libvirtd crashes with SEGFAULT, causing a rollback of the
  migration.

  Environment
  ===

  Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-99-generic x86_64)
  OpenStack Stein (Ubuntu Cloud Archive)
  Libvirt+QEMU_x86

  keystone-common 2:15.0.1-0ubuntu1~cloud0
  libvirt-daemon 5.0.0-1ubuntu2.6~cloud0
  qemu-system-x86 1:3.1+dfsg-2ubuntu3.7~cloud0
  neutron-linuxbridge-agent 2:14.2.0-0ubuntu1~cloud0
  neutron-plugin-ml2 2:14.2.0-0ubuntu1~cloud0
  nova-compute 2:19.2.0-0ubuntu1~cloud0
  nova-compute-libvirt 2:19.2.0-0ubuntu1~cloud0
  python-rbd 14.2.10-1bionic
  python3-cinderclient 1:4.1.0-0ubuntu1~cloud0
  python3-designateclient 2.9.0-0ubuntu1
  python3-glanceclient 1:2.16.0-0ubuntu1~cloud0
  python3-neutronclient 1:6.11.0-0ubuntu1~cloud0
  python3-novaclient 2:13.0.0-0ubuntu1~cloud0

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1889213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890065] Re: Impossible to migrate affinity instances

2020-08-06 Thread Balazs Gibizer
I'm setting this as Invalid as it is not a bug. Maybe a new
functionality.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1890065

Title:
  Impossible to migrate affinity instances

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  We have a hypervisor that needs to go down for maintenance. There are
  2 instances on the host within a server group with affinity.

  It seems to be impossible to live migrate them both to a different
  host.

  Looks like there used to be a force argument to live migration but
  this was removed in microversion 2.68

  "Remove support for forced live migration and evacuate server
  actions."

  Sadly, it doesn't mention why this was removed.

  
  Is there a way to temporarily break the affinity contract for maintenance?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1890065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890219] Re: nova-compute cannot boot because of old resource provider

2020-08-06 Thread Balazs Gibizer
I'm marking this as Invalid. If you disagree then feel free to set it
back to New.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1890219

Title:
  nova-compute cannot boot because of old resource provider

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Description
  ===
  The service 'nova-compute' will register a resource provider in placement
  when it is starting. But if an old one exists with the same name, the
  nova-compute service cannot boot successfully.

  Steps to reproduce
  ==
  * Boot nova-compute with hostname 'host1'
  * Create one instance placed on the compute node
  * Change hostname to 'host2' and boot nova-compute service
  * Roll back hostname to 'host1' and boot the nova-compute service

  Expected result
  ===
  Service 'nova-compute' booted successfully

  Actual result
  =
  Got error 'Failed to create resource provider'
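
  A hedged diagnostic sketch for confirming the stale provider (assumes
  the osc-placement client plugin and admin credentials are available;
  the usual cleanup is then `openstack resource provider delete <uuid>`,
  which will refuse to remove a provider that still has allocations):

      import subprocess

      def list_providers_named(hostname):
          # List resource providers whose name matches the compute hostname
          # to confirm a leftover provider is what triggers
          # ResourceProviderCreationFailed on startup.
          out = subprocess.check_output(
              ['openstack', 'resource', 'provider', 'list',
               '--name', hostname, '-f', 'value'])
          return out.decode().splitlines()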

  Environment
  ===
  1. nova: stable/rocky
  $ git log
  commit e3093d42f46af810f316421a9b59eafe94039807 (HEAD -> stable/rocky, 
origin/stable/rocky)
  Author: Luigi Toscano 
  Date:   Fri Jul 10 13:26:48 2020 +0200

  zuul: remove legacy-tempest-dsvm-neutron-dvr-multinode-full

  The job was part of the neutron experimental queue but then removed
  during the ussuri lifecycle.
  See https://review.opendev.org/#/c/693630/

  Conflicts:
  .zuul.yaml
  The content of .zuul.yaml changed slightly.

  Change-Id: I04717b95dd44ae89f24bd74525d1c9607e3bc0fc
  (cherry picked from commit bce4a3ab97320bdc2a6a43e2a961a0aa0b8ffb63)
  (cherry picked from commit cf399a363ca530151895c4b7cf49ad7b2a79e01b)
  (cherry picked from commit b1ead1fb2adf25493e5cab472d529fde31f985f0)
  (cherry picked from commit 7b005f37853a56e3ec6da455008fa5ef0d03c21b)

  2. Which hypervisor did you use?
  libvirt+KVM

  Logs & Configs
  ==
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager 
[req-52534aeb-4dd3-4f83-83f1-e6e47e1aa13e - - - - -] Error updating resources 
for node compute01.: ResourceProviderCreationFailed: Failed to create resource 
provider compute01
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager Traceback (most 
recent call last):
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/manager.py", line 
8157, in _update_available_resource_for_node
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager 
rt.update_available_resource(context, nodename)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 724, in update_available_resource
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager 
self._update_available_resource(context, resources)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", 
line 274, in inner
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager return f(*args, 
**kwargs)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 801, in _update_available_resource
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager 
self._update(context, cn)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 49, in 
wrapped_f
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager return 
Retrying(*dargs, **dkw).call(f, *args, **kw)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 206, in call
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager return 
attempt.get(self._wrap_exception)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 247, in get
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager 
six.reraise(self.value[0], self.value[1], self.value[2])
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/retrying.py", line 200, in call
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager attempt = 
Attempt(fn(*args, **kwargs), attempt_number, False)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2.7/site-packages/nova/compute/resource_tracker.py",
 line 963, in _update
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager 
self._update_to_placement(context, compute_node)
  2020-08-03 08:38:01.296 21734 ERROR nova.compute.manager   File 
"/var/lib/openstack/lib/python2

[Yahoo-eng-team] [Bug 1883671] Re: [SRIOV] When a VF is bound to a VM, Nova can't retrieve the PCI info

2020-08-06 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/739131
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=efc27ff84c3f38fbcbf75b0dc230963c58d093e4
Submitter: Zuul
Branch: master

commit efc27ff84c3f38fbcbf75b0dc230963c58d093e4
Author: Sean Mooney 
Date:   Fri Jul 3 15:58:02 2020 +

Lookup nic feature by PCI address

In some environments the libvirt nodedev list can become out of sync
with the current MAC address assigned to a netdev. As a result the
nodedev lookup can fail. This results in an uncaught libvirt exception
which breaks the update_available_resource function, resulting in an
incorrect resource view in the database.

e.g. libvirt.libvirtError: Node device not found:
no node device with matching name 'net_enp7s0f3v1_ea_60_77_1f_21_50'

This change removes the dependency on the nodedev name when looking up
nic feature flags.

Change-Id: Ibf8dca4bd57b3bddb39955b53cc03564506f5754
Closes-Bug: #1883671


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1883671

Title:
  [SRIOV] When a VF is bound to a VM, Nova can't retrieve the PCI info

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova periodically updates the available resources per hypervisor [1].
  That implies the reporting of the PCI devices [2]->[3].

  In [4], a new feature was introduced to read from libvirt the NIC
  capabilities (gso, tso, tx, etc.). But when the NIC interface is bound
  to the VM and the MAC address is not the one assigned by the driver
  (Nova changes the MAC address according to the info provided by
  Neutron), libvirt fails reading the non-existing device:
  http://paste.openstack.org/show/794799/.

  This command should be avoided or, at least, if the execution fails,
  the exception could be hidden.

  
  [1]https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L9642
  
[2]https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6980
  
[3]https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L6898
  [4]Ia5b6abbbf4e5f762e0df04167c32c6135781d305
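
  A minimal sketch of shielding the periodic update from the failed
  lookup (illustrative only; the merged fix instead looks the device up
  by its PCI address):

      import libvirt

      def get_nic_feature_xml(conn, nodedev_name):
          # The nodedev name is derived from the current MAC address; once
          # Nova has re-addressed the VF the name no longer exists, so
          # treat a failed lookup as "no capabilities" instead of letting
          # the libvirtError break update_available_resource().
          try:
              dev = conn.nodeDeviceLookupByName(nodedev_name)
          except libvirt.libvirtError:
              return None
          return dev.XMLDesc(0)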

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1883671/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1889676] Re: "stores" can be set as property breaking multistore indication of stores where the images are present

2020-08-06 Thread Erno Kuvaja
** Changed in: glance/ussuri
 Assignee: (unassigned) => Erno Kuvaja (jokke)

** No longer affects: glance/train

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1889676

Title:
  "stores" can be set as property breaking multistore indication of
  stores where the images are present

Status in Glance:
  Fix Released
Status in Glance ussuri series:
  In Progress
Status in Glance victoria series:
  Fix Released

Bug description:
  Glance API happily accepts `glance image-create --property
  stores:test1,test2` while "stores" is reserved for indicating in which
  store IDs the image is actually present.

  For the fix we need client patch [0] merged and released.

  [0] https://review.opendev.org/#/c/744024/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1889676/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890428] Re: format_message() is specific to NovaException and should not be used for generic exceptions

2020-08-06 Thread Brin Zhang
** Also affects: nova/ussuri
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1890428

Title:
  format_message() is specific to NovaException and should not be used
  for generic exceptions

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ussuri series:
  New

Bug description:
  In [1] we used format_message() to print the exception info, but
  format_message() is specific to NovaException; we should not do that for
  generic exceptions, where just printing the exception is enough.

  [1]https://review.opendev.org/#/c/631244/69/nova/compute/manager.py@2599
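
  A minimal sketch of the distinction (illustrative only):

      from nova import exception

      def describe_failure(exc):
          # format_message() only exists on NovaException subclasses; for
          # any other exception fall back to plain string conversion.
          if isinstance(exc, exception.NovaException):
              return exc.format_message()
          return str(exc)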

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1890428/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1890432] Re: Create subnet is failing under high load with OVN

2020-08-06 Thread Frode Nordahl
Adding upstream Neutron project to this LP.

The lock contention arises from the update of the metadata_port:
https://github.com/openstack/neutron/blob/24590a334fff0ed1cb513b0f496be965bc9309d4/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_client.py#L2111
https://github.com/openstack/neutron/blob/24590a334fff0ed1cb513b0f496be965bc9309d4/neutron/db/ipam_backend_mixin.py#L653-L680

Updating the fixed_ips field of the metadata_port will make Neutron
attempt to lock all the subnets involved, regardless of whether
update_metadata_port changes only one subnet or all of them.

I wonder how the OVS driver dealt with this, as it would have the exact
same issue.

Perhaps the only option is for Neutron to gracefully ignore an
update_metadata_port failure at subnet creation and update the metadata
port at a later time in one of its maintenance jobs.
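
Until Neutron handles this itself, a hedged client-side retry sketch for
the 409 (the client object, request body and retry values are
illustrative):

    import time

    from neutronclient.common import exceptions as neutron_exc

    def create_subnet_with_retry(neutron, body, attempts=5, delay=1):
        # neutron is an authenticated neutronclient.v2_0.client.Client.
        # Retry the "subnet is being modified by another concurrent
        # operation" conflict seen under high concurrency.
        for attempt in range(attempts):
            try:
                return neutron.create_subnet(body=body)
            except neutron_exc.Conflict:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)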

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: charm-neutron-api
   Status: Triaged => Invalid

** Changed in: charm-neutron-api
   Importance: High => Undecided

** Changed in: charm-neutron-api
 Assignee: Frode Nordahl (fnordahl) => (unassigned)

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Frode Nordahl (fnordahl)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1890432

Title:
  Create subnet is failing under high load with OVN

Status in OpenStack neutron-api charm:
  Invalid
Status in neutron:
  In Progress

Bug description:
  Under a high concurrency level, create subnet starts to fail
  (12-14% failure rate). The bundle is OVN / Ussuri.

  neutronclient.common.exceptions.Conflict: Unable to complete operation
  on subnet  This subnet is being modified by another concurrent
  operation.

  Stacktrace: https://pastebin.ubuntu.com/p/sQ5CqD6NyS/
  Rally task:

  {% set flavor_name = flavor_name or "m1.medium" %}
  {% set image_name = image_name or "bionic-kvm" %}

  ---
  NeutronNetworks.create_and_delete_subnets:
    -
      args:
        network_create_args: {}
        subnet_create_args: {}
        subnet_cidr_start: "1.1.0.0/30"
        subnets_per_network: 2
      runner:
        type: "constant"
        times: 100
        concurrency: 10
      context:
        network: {}
        users:
          tenants: 30
          users_per_tenant: 1
        quotas:
          neutron:
            network: -1
            subnet: -1

  Concurrency level set to 1 instead of 10 is not triggering the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-neutron-api/+bug/1890432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp