[Yahoo-eng-team] [Bug 1855015] [NEW] Intermittent fails since 11/23 with "Multiple possible networks found, use a Network ID to be more specific."

2019-12-03 Thread Eric Fried
Public bug reported:

There was something similar before [1], but that one was 100% reproducible and
confined to a single job. This one is intermittent and hits multiple jobs
across multiple projects.

http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Multiple%20possible%20networks%20found,%20use%20a%20Network%20ID%20to%20be%20more%20specific%5C%22

[1] https://bugs.launchpad.net/nova/+bug/1822605
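
For anyone hitting this outside the gate: the error itself just means the
server create did not name a network while the project can see more than one.
A minimal sketch of the disambiguation the message asks for, assuming the
openstacksdk client (the cloud name, network/image/flavor names and server
name below are illustrative, not taken from the failing jobs):

    import openstack

    conn = openstack.connect(cloud='devstack')  # assumed clouds.yaml entry

    # Pick the network explicitly instead of letting nova guess.
    net = conn.network.find_network('private')

    server = conn.compute.create_server(
        name='demo',
        image_id=conn.compute.find_image('cirros').id,
        flavor_id=conn.compute.find_flavor('m1.tiny').id,
        # Passing a network ID avoids "Multiple possible networks found".
        networks=[{'uuid': net.id}],
    )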

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: nova
 Importance: Undecided
 Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1855015

Title:
  Intermittent fails since 11/23 with "Multiple possible networks found,
  use a Network ID to be more specific."

Status in neutron:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  There was something similar before [1], but that one was 100% reproducible
  and confined to a single job. This one is intermittent and hits multiple
  jobs across multiple projects.

  
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Multiple%20possible%20networks%20found,%20use%20a%20Network%20ID%20to%20be%20more%20specific%5C%22

  [1] https://bugs.launchpad.net/nova/+bug/1822605

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1855015/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1840788] Re: websockify-0.9.0 breaks tempest tests

2019-12-03 Thread melanie witt
I had used Partial-Bug on my patch because the bug still needed a change
in tempest to be fully fixed. But I had forgotten how Launchpad works:
each component/project in a bug needs its own "Closes-Bug" for its part.
Marking this as "Fix Released" accordingly.
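
For reference, the difference lives entirely in the commit message footers;
a sketch of how the two patches would tag the bug (the bug number is real,
the surrounding wording is illustrative):

    nova change (not a complete fix on its own):
        Partial-Bug: #1840788

    tempest change (the final piece for that project):
        Closes-Bug: #1840788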

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1840788

Title:
  websockify-0.9.0 breaks tempest tests

Status in OpenStack Compute (nova):
  Fix Released
Status in tempest:
  Fix Released

Bug description:
  see https://review.opendev.org/677479 for a test review

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1840788/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1855006] [NEW] config/cloud.cfg.d/README says "All files" rather than "*.cfg"

2019-12-03 Thread Nathan Stratton Treadway
Public bug reported:

The README file installed in /etc/cloud/cloud.cfg.d/ currently says
"All files in this directory will be read by cloud-init"... but actually
cloud-init only reads files whose names end in ".cfg".
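
As a rough illustration of the behaviour being described (a simplified
sketch, not the actual cloud-init code), only names matching *.cfg are
picked up from the drop-in directory:

    import glob
    import os

    CFG_DIR = "/etc/cloud/cloud.cfg.d"

    def dropin_config_files(cfg_dir=CFG_DIR):
        """Return the drop-in files that would actually be read."""
        return sorted(glob.glob(os.path.join(cfg_dir, "*.cfg")))

    # A file named e.g. "99-local" (no .cfg suffix) is silently ignored,
    # despite the README's "All files in this directory" wording.
    print(dropin_config_files())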

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1855006

Title:
  config/cloud.cfg.d/README says "All files" rather than "*.cfg"

Status in cloud-init:
  New

Bug description:
  The README file installed in /etc/cloud/cloud.cfg.d/ currently
  says "All files in this directory will be read by cloud-init"... but
  actually cloud-init only reads files whose names end in ".cfg".

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1855006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854994] [NEW] git push to launchpad branch results in git push produces "remote error: Unexpected Zope exception"

2019-12-03 Thread Frederick Lefebvre
Public bug reported:

I've been trying to follow the steps highlighted here:
https://cloudinit.readthedocs.io/en/19.3/topics/hacking.html to push a
branch of cloud-init to my launchpad account, in order to link it to my
github account.

Basically running:
git clone https://git.launchpad.net/cloud-init
cd cloud-init
git remote add USER ssh://u...@git.launchpad.net/~USER/cloud-init
git push USER master

The last command times out with "remote error: Unexpected Zope
exception". Both git 2.13 and git 2.17 produced the same outcome.

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1854994

Title:
  git push to launchpad branch results in git push produces "remote
  error: Unexpected Zope exception"

Status in cloud-init:
  New

Bug description:
  I've been trying to follow the steps highlighted here:
  https://cloudinit.readthedocs.io/en/19.3/topics/hacking.html to push a
  branch of cloud-init to my launchpad account, in order to link it to
  my github account.

  Basically running:
  git clone https://git.launchpad.net/cloud-init
  cd cloud-init
  git remote add USER ssh://u...@git.launchpad.net/~USER/cloud-init
  git push USER master

  The last command times out with "remote error: Unexpected Zope
  exception". Both git 2.13 and git 2.17 produced the same outcome.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1854994/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852727] Related fix merged to nova (master)

2019-12-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/694521
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=557728abaf0c822f2b1a5cdd4fb2e11e19d8ead7
Submitter: Zuul
Branch:master

commit 557728abaf0c822f2b1a5cdd4fb2e11e19d8ead7
Author: Stephen Finucane 
Date:   Fri Nov 15 11:33:26 2019 +

docs: Change order of PCI configuration steps

It doesn't really make sense to describe the "higher level"
configuration steps necessary for PCI passthrough before describing
things like BIOS configuration. Simply switch the ordering.

Change-Id: I4ea1d9a332d6585ce2c0d5a531fa3c4ad9c89482
Signed-off-by: Stephen Finucane 
Related-Bug: #1852727


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1852727

Title:
  PCI passthrough documentation does not describe the steps necessary to
  passthrough PFs

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This came up on IRC [1]. By default, nova will not allow you to use PF
  devices unless you specifically request this type of device. This is
  intentional behavior to allow users to whitelist all devices from a
  particular vendor and avoid passing through the PF device when they
  meant to only consume the VFs. In the future, we might want to prevent
  whitelisting of both PF and VFs, but for now we should document the
  current behavior.

  [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova
  /%23openstack-nova.2019-11-15.log.html#t2019-11-15T08:39:17
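
  Until the docs cover it, a hedged example of what "specifically request
  this type of device" looks like in practice. The option names are the
  [pci] section options of this era's nova (passthrough_whitelist, alias);
  the vendor/product IDs, alias name and flavor name are placeholders, not
  taken from the IRC discussion:

      # nova.conf on the compute node: whitelist the device(s)
      [pci]
      passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}

      # nova.conf on the API/controller node: alias that explicitly asks
      # for the physical function rather than its VFs
      [pci]
      alias = {"vendor_id": "8086", "product_id": "10fb", "device_type": "type-PF", "name": "my-pf"}

      # flavor extra spec requesting one PF via that alias:
      #   openstack flavor set my-pf-flavor --property pci_passthrough:alias=my-pf:1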

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1852727/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854992] [NEW] Frequent instances stuck in BUILD with no apparent failure

2019-12-03 Thread Erik Olof Gunnar Andersson
Public bug reported:

We are getting frequent instances stuck indefinitely in BUILD without an
error message. This seems to be triggered by high concurrency (e.g.
building a lot of instances with terraform).

We have multiple synthetic instances that are built and destroyed
every 10 minutes and they never hit this issue.

This is running one commit behind the latest stable/rocky branch.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1854992

Title:
  Frequent instances stuck in BUILD with no apparent failure

Status in OpenStack Compute (nova):
  New

Bug description:
  We are getting frequent instances stuck indefinitely in BUILD without
  an error message. This seems to be triggered by high concurrency (e.g.
  building a lot of instances with terraform).

  We have multiple synthetic instances that are built and destroyed
  every 10 minutes and they never hit this issue.

  This is running one commit behind the latest stable/rocky branch.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1854992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854993] [NEW] QoS bandwidth tempest test no longer running

2019-12-03 Thread Eric Fried
Public bug reported:

In [1] the tempest-slow-py3 job was dropped and non-redundant bits
folded into the nova-next job.

Except we forgot to move over some of the config necessary to make this
QoS bandwidth test [2] work, so it gets skipped:

setUpClass
(tempest.scenario.test_minbw_allocation_placement.MinBwAllocationPlacementTest)
... SKIPPED: Skipped as no physnet is available in config for placement
based QoS allocation.

We think we just need to get the nova-next job synced up with the config
like what was done for tempest-slow here [3].

[1] https://review.opendev.org/#/c/683988/
[2] 
https://github.com/openstack/tempest/blob/3eb3c29e979fd3f13c205d62119748952d63054a/tempest/scenario/test_minbw_allocation_placement.py#L142
[3] 
https://github.com/openstack/tempest/commit/c87a06b3c29427dc8f2513047c804e0410b4b99c
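
For context, the guard that produces the skip works roughly like the sketch
below (paraphrased, not the literal tempest source; the config option name
and helper signature are assumptions on my part), which is why the nova-next
job definition needs to carry the physnet setting over:

    # Paraphrased sketch of the setUpClass guard behind the skip message.
    def skip_unless_physnet_configured(conf, skip_exception):
        physnet = getattr(conf.network_feature_enabled,
                          'qos_placement_physnet', None)
        if not physnet:
            raise skip_exception(
                'Skipped as no physnet is available in config for '
                'placement based QoS allocation.')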

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1854993

Title:
  QoS bandwidth tempest test no longer running

Status in OpenStack Compute (nova):
  New

Bug description:
  In [1] the tempest-slow-py3 job was dropped and non-redundant bits
  folded into the nova-next job.

  Except we forgot to move over some of the config necessary to make
  this QoS bandwidth test [2] work, so it gets skipped:

  setUpClass
  
(tempest.scenario.test_minbw_allocation_placement.MinBwAllocationPlacementTest)
  ... SKIPPED: Skipped as no physnet is available in config for
  placement based QoS allocation.

  We think we just need to get the nova-next job synced up with the
  config like what was done for tempest-slow here [3].

  [1] https://review.opendev.org/#/c/683988/
  [2] 
https://github.com/openstack/tempest/blob/3eb3c29e979fd3f13c205d62119748952d63054a/tempest/scenario/test_minbw_allocation_placement.py#L142
  [3] 
https://github.com/openstack/tempest/commit/c87a06b3c29427dc8f2513047c804e0410b4b99c

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1854993/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1795309] Re: server group quota exeeding makes forced log out

2019-12-03 Thread Vishal Manchanda
It is already fixed in https://review.opendev.org/#/c/677580/.

** Changed in: horizon
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1795309

Title:
  server group quota exeeding makes forced log out

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When creating a new server group exceeds the server group quota, the
  user is forced to log out.

  horizon 14.0.0.0rc2.dev93

  With the default quotas the server group quota is 10, so this happens
  when we try to create the 11th server group.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1795309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1852777] Re: Neutron allows to create two subnets with same CIDR in a network through heat

2019-12-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/695060
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=397eb2a2febd234ba7246f40f950c3ed4202a3d5
Submitter: Zuul
Branch:master

commit 397eb2a2febd234ba7246f40f950c3ed4202a3d5
Author: Rodolfo Alonso Hernandez 
Date:   Thu Nov 21 09:55:45 2019 +

Serialize subnet creating depending on the network ID

Add a new DB table "network_subnet_lock". The primary key will be the
network_id. When a subnet is created, inside the write context during
the "subnet" object creation, a register in the mentioned table is
created or updated. This will enforce the serialization of the "subnet"
registers belonging to the same network, due to the write lock in the
DB.

This will solve the problem of attending several "subnet" creation
requests, described in the related bug. If several subnets with the
same CIDR are processed in parallel, the implemented logic won't reject
them because any of them will not contain the information of each other.

This DB lock will also work in case of distributed servers because the
lock is not enforced in the server logic but in the DB backend.

Change-Id: Iecbb096e0b7e080a3e0299ea340f8b03e87ddfd2
Closes-Bug: #1852777
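
An illustrative sketch of the mechanism the commit message describes, in
plain SQLAlchemy (this is not the neutron implementation; the subnet_id
column and the helper name are assumptions made for the example):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class NetworkSubnetLock(Base):
        __tablename__ = 'network_subnet_lock'
        network_id = sa.Column(sa.String(36), primary_key=True)
        subnet_id = sa.Column(sa.String(36), nullable=True)

    def lock_subnet_creation(session, network_id, subnet_id):
        """Create/update the per-network row inside the caller's transaction.

        The row-level write lock means two subnet creations on the same
        network serialize in the DB backend, even across API servers.
        """
        row = (session.query(NetworkSubnetLock)
               .filter_by(network_id=network_id)
               .with_for_update()
               .one_or_none())
        if row is None:
            session.add(NetworkSubnetLock(network_id=network_id,
                                          subnet_id=subnet_id))
        else:
            row.subnet_id = subnet_id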


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1852777

Title:
  Neutron allows to create two subnets with same CIDR in a network
  through heat

Status in neutron:
  Fix Released

Bug description:
  If I use heat to create a network with overlapping subnet CIDRs, we do
  not get an error from Neutron about the overlap.

  There is an example heat template attached. In my environment, Neutron
  reported the overlap error in only two out of ten attempts; in all
  other cases the stack create was successful.

  stack@ubuntu:~$ openstack stack list
  +--------------------------------------+----------------------+----------------------------------+-----------------+----------------------+--------------+
  | ID                                   | Stack Name           | Project                          | Stack Status    | Creation Time        | Updated Time |
  +--------------------------------------+----------------------+----------------------------------+-----------------+----------------------+--------------+
  | 26f32175-c5e8-49e2-abde-75bd2e1d3b3a | overlapping-subnets9 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_FAILED   | 2019-11-15T17:16:30Z | None         |
  | 158c6c2f-ac9b-4131-ac9d-54cabfccf64c | overlapping-subnets8 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:26Z | None         |
  | cab371f6-6aeb-43af-ab2a-4c1c1452d253 | overlapping-subnets7 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:22Z | None         |
  | 480cd3db-395d-4de9-a8e4-27c8d08e6174 | overlapping-subnets6 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:19Z | None         |
  | e4409fc6-e3b4-4664-93a0-648b31ae80ee | overlapping-subnets5 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:16Z | None         |
  | 45552045-ec57-4fc4-b5b6-f8886da19521 | overlapping-subnets4 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:11Z | None         |
  | ec3f2c27-7306-47ee-a501-97d246fc7fa9 | overlapping-subnets3 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:08Z | None         |
  | 15050524-4711-490d-b344-d1a5be376ca8 | overlapping-subnets2 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:16:04Z | None         |
  | da6b235a-83c2-44d4-8e73-3243be310bc1 | overlapping-subnets1 | c48d7b879e40472e8e1a070918abf8c5 | CREATE_FAILED   | 2019-11-15T17:16:01Z | None         |
  | c596b822-d57f-4160-b03b-6f02711fc003 | overlapping-subnets  | c48d7b879e40472e8e1a070918abf8c5 | CREATE_COMPLETE | 2019-11-15T17:15:58Z | None         |
  +--------------------------------------+----------------------+----------------------------------+-----------------+----------------------+--------------+

  Output from the neutron net-list which validates this:

  stack@ubuntu:~$ neutron net-list
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +--------------------------------------+--------------------+----------------------------------+----------+
  | id                                   | name               | tenant_id                        | subnets  |
  +--------------------------------------+--------------------+----------------------------------+----------+
  | 0396cfc9-3f7c-4562-82cf-1273178acafd | overlappingsubnets | c48d7b879e4047

[Yahoo-eng-team] [Bug 1825018] Re: security group driver gets loaded way too much in the api

2019-12-03 Thread Matt Riedemann
** Also affects: nova/ocata
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/ocata
   Importance: Undecided => Low

** Changed in: nova/queens
   Importance: Undecided => Low

** Changed in: nova/pike
   Importance: Undecided => Low

** Changed in: nova/train
   Importance: Undecided => Low

** Changed in: nova/stein
   Importance: Undecided => Low

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/ocata
   Status: New => Confirmed

** Changed in: nova/pike
   Status: New => Confirmed

** Changed in: nova/rocky
   Status: New => Confirmed

** Changed in: nova/train
   Status: New => Confirmed

** Changed in: nova/stein
   Status: New => Confirmed

** Changed in: nova/queens
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1825018

Title:
  security group driver gets loaded way too much in the api

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) ocata series:
  Confirmed
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed
Status in OpenStack Compute (nova) rocky series:
  Confirmed
Status in OpenStack Compute (nova) stein series:
  Confirmed
Status in OpenStack Compute (nova) train series:
  Confirmed

Bug description:
  There was a fix in Mitaka https://review.openstack.org/#/c/256073/ to
  cache the security group driver once it was loaded per process. That
  cache was removed in Newton https://review.openstack.org/#/c/325684/.
  I put up a test patch to see how many times the security group driver
  gets loaded (https://review.openstack.org/#/c/652783/), and in the
  neutron-grenade-multinode job the nova-api logs show it getting loaded
  over 1000 times (the browser hit count tops out at 1000). So the fix
  from Mitaka was definitely regressed in Newton and we should add the
  driver cache code again.
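
  A minimal sketch of the kind of per-process cache being asked for
  (illustrative only; the loader helper name is made up, not nova's actual
  API):

      _SECURITY_GROUP_API = None

      def get_security_group_api():
          """Load the security group driver once per process and reuse it."""
          global _SECURITY_GROUP_API
          if _SECURITY_GROUP_API is None:
              # _load_security_group_driver() stands in for whatever the
              # real loading call is; it only runs on first use.
              _SECURITY_GROUP_API = _load_security_group_driver()
          return _SECURITY_GROUP_API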

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1825018/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854950] [NEW] VM spice console not clear

2019-12-03 Thread YG Kumar
Public bug reported:

Hi,

We have an OpenStack-Ansible Rocky 18.1.9 setup, and when a user tries to
access a VM console from a web browser, they are not able to send
keystrokes properly. When pressing the ENTER key, for example, the display
breaks into a number of lines and it is not clear what is being typed. In
the nova-spice-console logs, we are observing these messages frequently:


2019-12-03 09:16:01.278 37844 INFO nova.console.websocketproxy [-] handler exception: [Errno 32] Broken pipe
2019-12-03 09:16:01.279 37844 DEBUG nova.console.websocketproxy [-] exception vmsg /openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/websockify/websocket.py:875
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy Traceback (most recent call last):
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/websockify/websocket.py", line 930, in top_new_client
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     client = self.do_handshake(startsock, address)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/websockify/websocket.py", line 860, in do_handshake
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self.RequestHandlerClass(retsock, address, self)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/nova/console/websocketproxy.py", line 308, in __init__
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     websockify.ProxyRequestHandler.__init__(self, *args, **kwargs)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/websockify/websocket.py", line 114, in __init__
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.__init__(self, req, addr, server)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/SocketServer.py", line 652, in __init__
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self.handle()
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/websockify/websocket.py", line 581, in handle
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.handle(self)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/BaseHTTPServer.py", line 340, in handle
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self.handle_one_request()
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/BaseHTTPServer.py", line 328, in handle_one_request
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     method()
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/websockify/websocket.py", line 567, in do_HEAD
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     SimpleHTTPRequestHandler.do_HEAD(self)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/SimpleHTTPServer.py", line 54, in do_HEAD
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     f = self.send_head()
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/SimpleHTTPServer.py", line 103, in send_head
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self.send_header("Last-Modified", self.date_time_string(fs.st_mtime))
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/BaseHTTPServer.py", line 412, in send_header
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self.wfile.write("%s: %s\r\n" % (keyword, value))
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/socket.py", line 328, in write
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self.flush()
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/usr/lib/python2.7/socket.py", line 307, in flush
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     self._sock.sendall(view[write_offset:write_offset+buffer_size])
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/eventlet/greenio/base.py", line 390, in sendall
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     tail = self.send(data, flags)
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy   File "/openstack/venvs/nova-18.1.9/lib/python2.7/site-packages/eventlet/greenio/base.py", line 384, in send
2019-12-03 09:16:01.279 37844 ERROR nova.console.websocketproxy     return self._send_loop(s

[Yahoo-eng-team] [Bug 1854940] [NEW] DhcpLocalProcess "_enable" method should not call "restart" --> "enable"

2019-12-03 Thread Rodolfo Alonso
Public bug reported:

DhcpLocalProcess "DhcpLocalProcess._enable" method should not call
"DhcpLocalProcess.restart", what is calling "DhcpLocalProcess.enable".
This call chain will introduce a second active wait [1], that is
unnecessary.

PS: IMO, this should be considered "importance = wishlist".

[1]
https://github.com/openstack/neutron/blob/26526ba275ecbab5064fb7db9ae2ee2f286a83ca/neutron/agent/linux/dhcp.py#L217
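
A stripped-down sketch of the call chain in question (illustrative only,
not the actual neutron code; wait_until_true stands in for neutron's
polling helper):

    class DhcpLocalProcessSketch(object):

        def enable(self):
            # First active wait: poll until _enable() reports success.
            wait_until_true(self._enable)

        def _enable(self):
            if self.active:
                # restart() calls enable() again, which wraps _enable()
                # in wait_until_true() a second time -- the redundant
                # active wait this bug is about.
                self.restart()
            else:
                self.spawn_process()
            return True

        def restart(self):
            self.disable(retain_port=True)
            self.enable()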

** Affects: neutron
 Importance: Undecided
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Description changed:

  DhcpLocalProcess "DhcpLocalProcess._enable" method should not call
  "DhcpLocalProcess.restart", what is calling "DhcpLocalProcess.enable".
  This call chain will introduce a second active wait [1], that is
  unnecessary.
  
+ PS: IMO, this should be considered "importance = wishlist".
+ 
  [1]
  
https://github.com/openstack/neutron/blob/26526ba275ecbab5064fb7db9ae2ee2f286a83ca/neutron/agent/linux/dhcp.py#L217

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1854940

Title:
  DhcpLocalProcess "_enable" method should not call "restart" -->
  "enable"

Status in neutron:
  New

Bug description:
  DhcpLocalProcess "DhcpLocalProcess._enable" method should not call
  "DhcpLocalProcess.restart", what is calling "DhcpLocalProcess.enable".
  This call chain will introduce a second active wait [1], that is
  unnecessary.

  PS: IMO, this should be considered "importance = wishlist".

  [1]
  
https://github.com/openstack/neutron/blob/26526ba275ecbab5064fb7db9ae2ee2f286a83ca/neutron/agent/linux/dhcp.py#L217

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1854940/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1854724] Re: non-admin user can't create volume group

2019-12-03 Thread Radomir Dopieralski
** Changed in: horizon
   Status: New => Incomplete

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1854724

Title:
  non-admin user can't create volume group

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When a user does not have the admin role, they cannot create a volume
  group in Cinder via Horizon; the operation fails with "Unable to create
  volume group.", while the same user can do it via the CLI.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1854724/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1847367] Re: Images with hw:vif_multiqueue_enabled can be limited to 8 queues even if more are supported

2019-12-03 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/695118
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=0e6aac3c2d97c999451da50537df6a0cbddeb4a6
Submitter: Zuul
Branch:master

commit 0e6aac3c2d97c999451da50537df6a0cbddeb4a6
Author: Sean Mooney 
Date:   Wed Nov 20 00:13:03 2019 +

add [libvirt]/max_queues config option

This change adds a max_queues config option to allow
operators to set the maximium number of virtio queue
pairs that can be allocated to a virtio network
interface.

Change-Id: I9abe783a9a9443c799e7c74a57cc30835f679a01
Closes-Bug: #1847367
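
For operators, the new knob lives in nova.conf on the compute node; the
option name comes from the commit above, while the value here is purely
illustrative:

    [libvirt]
    # Upper bound on virtio queue pairs for hw:vif_multiqueue_enabled
    # guests; 16 is just an example value.
    max_queues = 16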


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1847367

Title:
  Images with hw:vif_multiqueue_enabled can be limited to 8 queues even
  if more are supported

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Nova version: 18.2.3
  Release: Rocky
  Compute node OS: CentOS 7.3
  Compute node kernel: 3.10.0-327.13.1.el7.x86_64

  In https://bugs.launchpad.net/nova/+bug/1570631 and commit
  https://review.opendev.org/#/c/332660/, a bug was fixed by making the
  assumption that the kernel version should also dictate the max number
  of queues on the tap interface when setting
  hw:vif_multiqueue_enabled=True. It was decided that 3.x kernels have a
  max queue count of 8. Unfortunately not all distributions follow this,
  and CentOS/RHEL has supported up to 256 queues since at least 7.2 even
  with a 3.x kernel.

  The result of this is that a 20 core VM created in Mitaka will have 20
  queues enabled (because the limit of 8 had not been added). The very
  same host after being upgraded to Rocky will instead only give 8
  queues to the VM even though the kernel supports 256.

  Could a workaround option be implemented to disable this check, or
  manually define the max queue count?

  Snippet of drivers/net/tun.c from CentOS 7.2 kernel source code:
  /* MAX_TAP_QUEUES 256 is chosen to allow rx/tx queues to be equal
   * to max number of VCPUs in guest. */
  #define MAX_TAP_QUEUES 256
  #define MAX_TAP_FLOWS  4096

  Snippet from the 3.10.0 kernel code from 
https://elixir.bootlin.com/linux/v3.10/source/drivers/net/tun.c:
  /* DEFAULT_MAX_NUM_RSS_QUEUES were choosed to let the rx/tx queues allocated for
   * the netdevice to be fit in one page. So we can make sure the success of
   * memory allocation. TODO: increase the limit. */
  #define MAX_TAP_QUEUES DEFAULT_MAX_NUM_RSS_QUEUES
  #define MAX_TAP_FLOWS  4096

  In the above example, DEFAULT_MAX_NUM_RSS_QUEUES is set to 8.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1847367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp