[Yahoo-eng-team] [Bug 1762876] Re: test_resize_with_reschedule_then_live_migrate intermittently failing; migration is not yet complete

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/560454
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4afa8c2b97929faaf8bbf2bbaa72235fab7d3d13
Submitter: Zuul
Branch: master

commit 4afa8c2b97929faaf8bbf2bbaa72235fab7d3d13
Author: Matt Riedemann 
Date:   Wed Apr 11 10:43:34 2018 -0400

Fix race fail in test_resize_with_reschedule_then_live_migrate

The assertion in the test that the migration status is 'completed'
is flawed: it assumes the migration is complete once the instance
status is 'ACTIVE', which isn't true, since the instance status
changes before the migration is completed. The two are very close
in time, so there is a race, which is how this test slipped by.
This fixes the issue by polling the migration status until it is
actually completed or we time out.

Change-Id: I61f745667f4c003d7e3ca6f2f9a99194930ac892
Closes-Bug: #1762876
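
A minimal sketch of the polling approach, assuming a test helper that lists
migrations through the API (names here are illustrative, not the exact fix):

    import time

    def wait_for_migration_status(api, server_id, expected, timeout=60):
        # Poll until the migration reaches the expected status, or fail
        # after the timeout instead of racing the instance status change.
        deadline = time.time() + timeout
        while time.time() < deadline:
            migrations = api.get_migrations(server_id)  # assumed helper
            if migrations and migrations[0]['status'] == expected:
                return migrations[0]
            time.sleep(1)
        raise AssertionError('migration never reached status %r' % expected)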


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762876

Title:
  test_resize_with_reschedule_then_live_migrate intermittently failing;
  migration is not yet complete

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I noticed this in a stable/pike functional test job run:

  http://logs.openstack.org/46/560146/2/check/nova-tox-
  functional/4a9d1fd/job-output.txt.gz#_2018-04-10_21_37_20_943583

  2018-04-10 21:37:20.944928 | ubuntu-xenial | Captured stderr:
  2018-04-10 21:37:20.944966 | ubuntu-xenial | 
  2018-04-10 21:37:20.945029 | ubuntu-xenial | Traceback (most recent call 
last):
  2018-04-10 21:37:20.945231 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 457, in fire_timers
  2018-04-10 21:37:20.945268 | ubuntu-xenial | timer()
  2018-04-10 21:37:20.945467 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/eventlet/hubs/timer.py",
 line 58, in __call__
  2018-04-10 21:37:20.945513 | ubuntu-xenial | cb(*args, **kw)
  2018-04-10 21:37:20.945598 | ubuntu-xenial |   File "nova/utils.py", line 
1030, in context_wrapper
  2018-04-10 21:37:20.945650 | ubuntu-xenial | func(*args, **kwargs)
  2018-04-10 21:37:20.945756 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5620, in dispatch_live_migration
  2018-04-10 21:37:20.945839 | ubuntu-xenial | 
self._do_live_migration(*args, **kwargs)
  2018-04-10 21:37:20.945939 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5599, in _do_live_migration
  2018-04-10 21:37:20.945993 | ubuntu-xenial | clean_task_state=True)
  2018-04-10 21:37:20.946194 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  2018-04-10 21:37:20.946246 | ubuntu-xenial | self.force_reraise()
  2018-04-10 21:37:20.946452 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  2018-04-10 21:37:20.946532 | ubuntu-xenial | six.reraise(self.type_, 
self.value, self.tb)
  2018-04-10 21:37:20.946679 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 5588, in _do_live_migration
  2018-04-10 21:37:20.946764 | ubuntu-xenial | block_migration, 
migrate_data)
  2018-04-10 21:37:20.946856 | ubuntu-xenial |   File "nova/virt/fake.py", 
line 497, in live_migration
  2018-04-10 21:37:20.946901 | ubuntu-xenial | migrate_data)
  2018-04-10 21:37:20.947003 | ubuntu-xenial |   File 
"nova/exception_wrapper.py", line 76, in wrapped
  2018-04-10 21:37:20.947069 | ubuntu-xenial | function_name, 
call_dict, binary)
  2018-04-10 21:37:20.947270 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 220, in __exit__
  2018-04-10 21:37:20.947322 | ubuntu-xenial | self.force_reraise()
  2018-04-10 21:37:20.947536 | ubuntu-xenial |   File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional/local/lib/python2.7/site-packages/oslo_utils/excutils.py",
 line 196, in force_reraise
  2018-04-10 21:37:20.947619 | ubuntu-xenial | six.reraise(self.type_, 
self.value, self.tb)
  2018-04-10 21:37:20.947707 | ubuntu-xenial |   File 
"nova/exception_wrapper.py", line 67, in wrapped
  2018-04-10 21:37:20.947779 | ubuntu-xenial | return f(self, context, 
*args, **kw)
  2018-04-10 21:37:20.947878 | ubuntu-xenial |   File 
"nova/compute/manager.py", line 218, in 

[Yahoo-eng-team] [Bug 1763204] [NEW] wsgi.py is missing

2018-04-11 Thread Adrian Turjak
Public bug reported:

Horizon was likely started very early in Django's history, and thus still
ships the old-format wsgi file as "django.wsgi".
https://github.com/openstack/horizon/tree/master/openstack_dashboard/wsgi

This is not how Django names this file anymore, nor how it is really
used.

https://stackoverflow.com/questions/20035252/difference-between-wsgi-py-and-django-wsgi
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/modwsgi/

The expectation is to have a wsgi.py file somewhere on your importable
Python path. Normally this is in the same place as your settings.py file
when building a default Django project.

Ideally we should rename and move the file to a place from which it is
easier to import:
horizon/openstack_dashboard/wsgi/django.wsgi -> horizon/openstack_dashboard/wsgi.py

gunicorn, one of the most popular wsgi servers around, cannot import and
run it because it isn't a '.py' file.

With the above move and rename, the file can be imported and run as:
gunicorn openstack_dashboard.wsgi:application
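
For reference, a minimal sketch of what the relocated module could contain,
following the standard Django layout (the settings module name here is an
assumption):

    # openstack_dashboard/wsgi.py -- minimal Django WSGI entry point (sketch)
    import os

    from django.core.wsgi import get_wsgi_application

    # Assumed settings module; adjust to the real Horizon settings path.
    os.environ.setdefault('DJANGO_SETTINGS_MODULE',
                          'openstack_dashboard.settings')
    application = get_wsgi_application()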


NOTE: This will likely break anyone using the old file right now. We may
instead want to copy the file to the new location and add a deprecation
warning to the old one with a notice that it will be removed in 2 cycles.
Ideally we should also document that deployers should use the new file.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1763204

Title:
  wsgi.py is missing

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon was likely started very early in Django's history, and thus still
  ships the old-format wsgi file as "django.wsgi".
  https://github.com/openstack/horizon/tree/master/openstack_dashboard/wsgi

  This is not how Django names this file anymore, nor how it is really
  used.

  
https://stackoverflow.com/questions/20035252/difference-between-wsgi-py-and-django-wsgi
  https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/modwsgi/

  The expectation is to have a wsgi.py file somewhere on your importable
  Python path. Normally this is in the same place as your settings.py
  file when building a default Django project.

  Ideally we should rename and move the file to a place from which it is
  easier to import:
  horizon/openstack_dashboard/wsgi/django.wsgi -> horizon/openstack_dashboard/wsgi.py

  gunicorn, one of the most popular wsgi servers around, cannot import
  and run it because it isn't a '.py' file.

  With the above move and rename, the file can be imported and run as:
  gunicorn openstack_dashboard.wsgi:application

  
  NOTE: This will likely break anyone using the old file right now. We may
  instead want to copy the file to the new location and add a deprecation
  warning to the old one with a notice that it will be removed in 2 cycles.
  Ideally we should also document that deployers should use the new file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1763204/+subscriptions



[Yahoo-eng-team] [Bug 1735821] Re: netplan needs bridge port-priority support

2018-04-11 Thread Launchpad Bug Tracker
This bug was fixed in the package nplan - 0.32~17.10.3

---
nplan (0.32~17.10.3) artful; urgency=medium

  * Don't silently break Bridge Priority by adding port-priority.

nplan (0.32~17.10.2) artful; urgency=medium

  * Fix syntax for IPv6 addresses in doc. (LP: #1735317)
  * doc: routes are not top-level but per-interface. (LP: #1726695)
  * Implement bridge port-priority parameter. (LP: #1735821)
  * Implement "optional: true" to correctly write systemd network definitions
with "RequiredForOnline=false", such that these networks do not block boot.
(LP: #1664844)
  * Various documentation fixes. (LP: #1751814)

 -- Mathieu Trudel-Lapierre   Fri, 02 Mar 2018
16:50:47 -0500

** Changed in: nplan (Ubuntu Artful)
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1735821

Title:
  netplan needs bridge port-priority support

Status in cloud-init:
  Fix Released
Status in nplan package in Ubuntu:
  Fix Released
Status in nplan source package in Xenial:
  Fix Committed
Status in nplan source package in Artful:
  Fix Released

Bug description:
  [Impact]
  Affects users of netplan configuring any bridge. Port priority is a very
  common setting to change when setting up bridge devices that might have
  multiple interfaces.

  [Test case]
  1) Write a netplan configuration:
  network:
    version: 2
    ethernets:
      eth0:
        match:
          name: eth0
    bridges:
      br0:
        addresses:
          - 192.168.14.2/24
        interfaces:
          - eth0
        parameters:
          path-cost:
            eth0: 50
          priority: 22
          port-priority:
            eth0: 14

  2) Run 'sudo netplan apply'

  3) Validate that the config generated by netplan is correct:

  In /run/systemd/network/10-netplan-eth0.network:

  [...]
  [Bridge]
  [...]
  Priority=14

  4) Validate that the port-priority value for the bridge has been
  correctly set:

  $ cat /sys/class/net/br0/brif/eth0/priority


  [Regression potential]
  This might impact STP behavior: changing the port priority for a bridge may
  change the overall network topology -- this could lead to loss of
  connectivity on the bridge itself or on other devices on the network,
  invalid packet traffic (packets showing up where they should not), etc.

  ---

  Now that systemd supports port-priority for bridges (LP: #1668347)
  netplan should handle port-priority like it does path-cost.

  1) % lsb_release -rd
  Description:  Ubuntu 16.04.3 LTS
  Release:  16.04

  1) # lsb_release -rd
  Description:  Ubuntu Bionic Beaver (development branch)
  Release:  18.04

  2) # apt-cache policy nplan
  nplan:
    Installed: 0.30
    Candidate: 0.32
    Version table:
   0.32 500
  500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages
   *** 0.30 100
  100 /var/lib/dpkg/status

  3) netplan generate renders a networkd .network file whose [Bridge]
  section includes a Priority value set on each of the specified bridge
  ports

  4) netplan fails to parse the input yaml with

  Sample config that should parse:

  % cat br-pp.yaml
  network:
    version: 2
    ethernets:
      eth0:
        match:
          macaddress: '52:54:00:12:34:04'
    bridges:
      br0:
        addresses:
          - 192.168.14.2/24
        interfaces:
          - eth0
        parameters:
          path-cost:
            eth0: 50
          priority: 22
          port-priority:
            eth0: 14

  % netplan generate
  Error in network definition br-pp.yaml line 13 column 16: unknown key 
port-priority

  If fixed, then I would expect a /run/systemd/network/10-netplan-eth0.network 
that looks like
  [Match]
  MACAddress=52:54:00:12:34:00
  Name=eth0

  [Network]
  Bridge=br0
  LinkLocalAddressing=no
  IPv6AcceptRA=no

  [Bridge]
  Cost=50
  Priority=14

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1735821/+subscriptions



[Yahoo-eng-team] [Bug 1760017] Re: The 'test_supports_direct_io' method belongs to the wrong test class

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/557883
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=4a78dccf70bf8c37b3586cedfe5a4f0fb0d45b96
Submitter: Zuul
Branch: master

commit 4a78dccf70bf8c37b3586cedfe5a4f0fb0d45b96
Author: Takashi NATSUME 
Date:   Fri Mar 30 13:24:30 2018 +0900

Remove mox in tests/unit/test_utils.py

Replace mox with mock in nova/tests/unit/test_utils.py.
The 'test_supports_direct_io' method is
in the 'GetEndpointTestCase' class currently.
But it should be in an isolated test class.
So add the test class and move the method into it.

Then split the method into methods for each test case
to improve readability.

Change-Id: Id2350529b3322dfa8f7c13ac8e5f85aaf3041082
Implements: blueprint mox-removal
Closes-Bug: #1760017
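
A rough sketch of the resulting shape, with mock in place of mox (the class
name and patch target are illustrative, not the merged code):

    import mock
    import testtools

    from nova import utils


    class SupportsDirectIOTestCase(testtools.TestCase):
        # One focused test class for supports_direct_io, one method per case.
        @mock.patch('os.open', side_effect=OSError(22, 'Invalid argument'))
        def test_direct_io_unsupported(self, mock_open):
            self.assertFalse(utils.supports_direct_io('/some/path'))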


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1760017

Title:
  The 'test_supports_direct_io' method belongs to the wrong test class

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The 'test_supports_direct_io' method belongs to the 'GetEndpointTestCase'
  class. But it should not be in that class, because the class is for the
  'get_endpoint' method. The 'test_supports_direct_io' method should be in
  an isolated class.

  
https://github.com/openstack/nova/blob/942ed9b265b0f1fe4c237052030f2d73a3807b7a/nova/tests/unit/test_utils.py#L1337-L1436

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1760017/+subscriptions



[Yahoo-eng-team] [Bug 1763183] [NEW] DELETE /os-services/{service_id} does not block for hosted instances

2018-04-11 Thread Matt Riedemann
Public bug reported:

This came up while reviewing the fix for bug 1756179:

https://review.openstack.org/#/c/554920/6/nova/api/openstack/compute/services.py@226

Full IRC conversation is here:

http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
nova.2018-04-11.log.html#t2018-04-11T20:32:13

The summary is that it's possible to delete a compute service and its
associated compute node record even if that compute node has instances
on it.

Before placement, this wasn't a huge problem because you could evacuate
the instances to another host, or, if you brought the host back up, it
would recreate the service and compute node, and the resource tracker
would "heal" itself by finding instances running on that host and node
combo:

https://github.com/openstack/nova/blob/2c5da2212c3fa3e589c4af171486a2097fd8c54e/nova/compute/resource_tracker.py#L714

The problem is that after we started requiring placement and creating
allocations in the scheduler in Pike, those allocations are against the
compute_nodes.uuid of the compute node resource provider. If the
service and its related compute node record are deleted, restarting the
service will create a new service and compute node record with a new
UUID, which will result in a new resource provider in placement, and the
instances running on that host will have allocations against the now
orphaned resource provider. The new resource provider will be reporting
incorrect consumption, so scheduling will also be affected.

So we should block deleting a compute service (and its node) here:

https://github.com/openstack/nova/blob/2c5da2212c3fa3e589c4af171486a2097fd8c54e/nova/api/openstack/compute/services.py#L213

if that host (node) has instances on it.
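
A hedged sketch of such a guard in the services API (object and method
names are assumed for illustration, not the eventual fix):

    import webob.exc

    from nova import objects
    from nova.i18n import _

    def _assert_no_instances_on_host(context, service):
        # Refuse to delete a compute service whose host still has servers,
        # otherwise the compute node resource provider is orphaned.
        instances = objects.InstanceList.get_by_host(context, service.host)
        if instances:
            raise webob.exc.HTTPConflict(
                explanation=_('Unable to delete compute service on host %s: '
                              'it still hosts instances.') % service.host)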

This problem goes back to Pike. Ocata is OK in that the resource tracker
on Ocata computes will "heal" allocations during the
update_available_resource periodic task (and when the compute service
starts up), and in Ocata the FilterScheduler does not create allocations
in Placement.

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged

** Affects: nova/pike
 Importance: Undecided
 Status: New

** Affects: nova/queens
 Importance: Undecided
 Status: New


** Tags: api placement

** Also affects: nova/pike
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1763183

Title:
  DELETE /os-services/{service_id} does not block for hosted instances

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) pike series:
  New
Status in OpenStack Compute (nova) queens series:
  New

Bug description:
  This came up while reviewing the fix for bug 1756179:

  
https://review.openstack.org/#/c/554920/6/nova/api/openstack/compute/services.py@226

  Full IRC conversation is here:

  http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-
  nova.2018-04-11.log.html#t2018-04-11T20:32:13

  The summary is that it's possible to delete a compute service and its
  associated compute node record even if that compute node has instances
  on it.

  Before placement, this wasn't a huge problem because you could
  evacuate the instances to another host, or, if you brought the host
  back up, it would recreate the service and compute node, and the
  resource tracker would "heal" itself by finding instances running on
  that host and node combo:

  
https://github.com/openstack/nova/blob/2c5da2212c3fa3e589c4af171486a2097fd8c54e/nova/compute/resource_tracker.py#L714

  The problem is that after we started requiring placement and creating
  allocations in the scheduler in Pike, those allocations are against
  the compute_nodes.uuid of the compute node resource provider. If the
  service and its related compute node record are deleted, restarting
  the service will create a new service and compute node record with a
  new UUID, which will result in a new resource provider in placement,
  and the instances running on that host will have allocations against
  the now orphaned resource provider. The new resource provider will be
  reporting incorrect consumption, so scheduling will also be affected.

  So we should block deleting a compute service (and its node) here:

  https://github.com/openstack/nova/blob/2c5da2212c3fa3e589c4af171486a2097fd8c54e/nova/api/openstack/compute/services.py#L213

  if that host (node) has instances on it.

  This problem goes back to Pike. Ocata is OK in that the resource
  tracker on Ocata computes will "heal" allocations during the
  update_available_resource periodic task (and when the compute service
  starts up), and in Ocata the FilterScheduler does not create
  allocations in Placement.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1763183/+subscriptions

[Yahoo-eng-team] [Bug 1763181] [NEW] test_parallel_evacuate_with_server_group intermittently fails

2018-04-11 Thread Matt Riedemann
Public bug reported:

http://logs.openstack.org/54/560454/1/gate/nova-tox-functional-
py35/40cf8c5/job-output.txt.gz#_2018-04-11_18_49_13_614017

2018-04-11 18:49:13.618886 | ubuntu-xenial | Captured traceback:
2018-04-11 18:49:13.618960 | ubuntu-xenial | ~~~
2018-04-11 18:49:13.619063 | ubuntu-xenial | b'Traceback (most recent call 
last):'
2018-04-11 18:49:13.619319 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/tests/functional/regressions/test_bug_1735407.py",
 line 158, in test_parallel_evacuate_with_server_group'
2018-04-11 18:49:13.619446 | ubuntu-xenial | b"
server2['OS-EXT-SRV-ATTR:host'])"
2018-04-11 18:49:13.619949 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional-py35/lib/python3.5/site-packages/unittest2/case.py",
 line 845, in assertNotEqual'
2018-04-11 18:49:13.620066 | ubuntu-xenial | b'raise 
self.failureException(msg)'
2018-04-11 18:49:13.620172 | ubuntu-xenial | b"AssertionError: 'host3' == 
'host3'"
2018-04-11 18:49:13.620237 | ubuntu-xenial | b''

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AssertionError%3A%20'host3'%20%3D%3D%20'host3'%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

10 hits in 7 days, check and gate, all failures.

** Affects: nova
 Importance: Medium
 Status: Confirmed


** Tags: functional testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1763181

Title:
  test_parallel_evacuate_with_server_group intermittently fails

Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  http://logs.openstack.org/54/560454/1/gate/nova-tox-functional-
  py35/40cf8c5/job-output.txt.gz#_2018-04-11_18_49_13_614017

  2018-04-11 18:49:13.618886 | ubuntu-xenial | Captured traceback:
  2018-04-11 18:49:13.618960 | ubuntu-xenial | ~~~
  2018-04-11 18:49:13.619063 | ubuntu-xenial | b'Traceback (most recent 
call last):'
  2018-04-11 18:49:13.619319 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/nova/tests/functional/regressions/test_bug_1735407.py",
 line 158, in test_parallel_evacuate_with_server_group'
  2018-04-11 18:49:13.619446 | ubuntu-xenial | b"
server2['OS-EXT-SRV-ATTR:host'])"
  2018-04-11 18:49:13.619949 | ubuntu-xenial | b'  File 
"/home/zuul/src/git.openstack.org/openstack/nova/.tox/functional-py35/lib/python3.5/site-packages/unittest2/case.py",
 line 845, in assertNotEqual'
  2018-04-11 18:49:13.620066 | ubuntu-xenial | b'raise 
self.failureException(msg)'
  2018-04-11 18:49:13.620172 | ubuntu-xenial | b"AssertionError: 'host3' == 
'host3'"
  2018-04-11 18:49:13.620237 | ubuntu-xenial | b''

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22AssertionError%3A%20'host3'%20%3D%3D%20'host3'%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  10 hits in 7 days, check and gate, all failures.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1763181/+subscriptions



[Yahoo-eng-team] [Bug 1762700] Re: lxd containers created by juju are not receiving IP addresses

2018-04-11 Thread Nicholas Skaggs
** Also affects: juju/2.3
   Importance: Undecided
   Status: New

** Also affects: juju/2.4
   Importance: Undecided
   Status: New

** Changed in: juju/2.3
   Status: New => Invalid

** Changed in: juju/2.4
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1762700

Title:
  lxd containers created by juju are not receiving IP addresses

Status in cloud-init:
  New
Status in juju:
  In Progress
Status in juju 2.3 series:
  Invalid
Status in juju 2.4 series:
  In Progress

Bug description:
  using juju 2.3.5, maas 2.3.0-6434-gd354690-0ubuntu1~16.04.1, and lxd
  2.0.11-0ubuntu1~16.04.4, containers are not receiving IP addresses:

  http://paste.ubuntu.com/p/wdY9ShtrTP/

  there is an error in cloud-init-output.log:
  http://paste.ubuntu.com/p/k8rdGHjDQq/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1762700/+subscriptions



[Yahoo-eng-team] [Bug 1763161] [NEW] Segment ignores sorting parameters

2018-04-11 Thread Hongbin Lu
Public bug reported:

Sending a request to get a list of network segments sorted by name, the
response doesn't seem to be sorted (below is an example). Segments should
support sorting to align with other API resources.

$ curl -g -s -X GET "http://10.0.0.15:9696/v2.0/segments?sort_dir=asc&sort_key=name" -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" | jq .
{
  "segments": [
{
  "name": null,
  "network_id": "02dd8479-ef26-4398-a102-d19d0a7b3a1f",
  "segmentation_id": 24,
  "network_type": "vxlan",
  "physical_network": null,
  "id": "0a046acf-594f-4488-80a5-89b01f4c0f7d",
  "description": null
},
{
  "name": "segment1",
  "network_id": "a650210a-6ad6-44fc-8a17-fee8829352ca",
  "segmentation_id": 55,
  "network_type": "vxlan",
  "physical_network": null,
  "id": "d0cc2af3-903b-4451-9d4a-8d7d8216fa16",
  "description": null
},
{
  "name": "segment2",
  "network_id": "a650210a-6ad6-44fc-8a17-fee8829352ca",
  "segmentation_id": 2016,
  "network_type": "vxlan",
  "physical_network": null,
  "id": "5a0e733b-bc40-4225-9dab-729338107d1a",
  "description": null
},
{
  "name": null,
  "network_id": "ad93b454-4836-46c7-bfc8-64a20a2ab95a",
  "segmentation_id": null,
  "network_type": "flat",
  "physical_network": "public",
  "id": "6131da02-f769-4b31-997d-cefa46c5198c",
  "description": null
}
  ]
}

** Affects: neutron
 Importance: Undecided
 Assignee: Hongbin Lu (hongbin.lu)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hongbin Lu (hongbin.lu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1763161

Title:
  Segment ignores sorting parameters

Status in neutron:
  New

Bug description:
  Sending a request to get a list of network segments sorted by name,
  the response doesn't seem to be sorted (below is an example). Segments
  should support sorting to align with other API resources.

  $ curl -g -s -X GET "http://10.0.0.15:9696/v2.0/segments?sort_dir=asc&sort_key=name" -H "Accept: application/json" -H "X-Auth-Token: $TOKEN" | jq .
  {
"segments": [
  {
"name": null,
"network_id": "02dd8479-ef26-4398-a102-d19d0a7b3a1f",
"segmentation_id": 24,
"network_type": "vxlan",
"physical_network": null,
"id": "0a046acf-594f-4488-80a5-89b01f4c0f7d",
"description": null
  },
  {
"name": "segment1",
"network_id": "a650210a-6ad6-44fc-8a17-fee8829352ca",
"segmentation_id": 55,
"network_type": "vxlan",
"physical_network": null,
"id": "d0cc2af3-903b-4451-9d4a-8d7d8216fa16",
"description": null
  },
  {
"name": "segment2",
"network_id": "a650210a-6ad6-44fc-8a17-fee8829352ca",
"segmentation_id": 2016,
"network_type": "vxlan",
"physical_network": null,
"id": "5a0e733b-bc40-4225-9dab-729338107d1a",
"description": null
  },
  {
"name": null,
"network_id": "ad93b454-4836-46c7-bfc8-64a20a2ab95a",
"segmentation_id": null,
"network_type": "flat",
"physical_network": "public",
"id": "6131da02-f769-4b31-997d-cefa46c5198c",
"description": null
  }
]
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1763161/+subscriptions



[Yahoo-eng-team] [Bug 1763146] [NEW] [ovs] Binding failed for port

2018-04-11 Thread Annie Melen
Public bug reported:

Hello!

I already had a successful deployment on Pike (3 control nodes + several
compute nodes):
 - ML2 plugin - openvswitch
 - default tunnel type - vxlan
 - dvr is enabled

Instances were started, running, and migrating without errors.
Everything was fine until I upgraded to Queens... I used the same neutron
configuration files and encountered the following error:

...
2018-04-11 12:21:19.128 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] 
Bridge br-int has datapath-ID 5618b7026f46
2018-04-11 12:21:23.460 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Mapping physical network 
external to bridge br0
2018-04-11 12:21:23.948 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Bridge br0 has datapath-ID 
ec0d9a7abceb
2018-04-11 12:21:24.206 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Bridge br-tun has 
datapath-ID d2588392e746
2018-04-11 12:21:24.220 36774 INFO neutron.agent.agent_extensions_manager 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Initializing agent 
extension 'qos'
2018-04-11 12:21:24.323 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_dvr_neutron_agent 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] L2 Agent operating in DVR 
Mode with MAC FA-16-3F-7C-00-B2
2018-04-11 12:21:24.375 36774 INFO neutron.common.ipv6_utils 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] IPv6 not present or 
configured not to bind to new interfaces on this system. Please ensure IPv6 is 
enabled and /proc/sys/net/ipv6/conf/default/disable_ipv6 is set to 0 to enable 
IPv6.
2018-04-11 12:21:25.275 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-1c05ff93-84cb-45ed-937c-95f048b553f1 - - - - -] Agent initialized 
successfully, now running...
...
2018-04-11 21:24:10.248 3953 INFO neutron.agent.common.ovs_lib 
[req-64bf5e4c-32a9-4936-93cd-2658095b2d35 - - - - -] Port 
b8b42046-14ba-4b43-a24c-3a0a1b350aea not present in bridge br-int
2018-04-11 21:24:10.249 3953 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-64bf5e4c-32a9-4936-93cd-2658095b2d35 - - - - -] port_unbound(): net_uuid 
None not managed by VLAN manager
2018-04-11 21:24:10.249 3953 INFO neutron.agent.securitygroups_rpc 
[req-64bf5e4c-32a9-4936-93cd-2658095b2d35 - - - - -] Remove device filter for 
['b8b42046-14ba-4b43-a24c-3a0a1b350aea']
...


It happens every time I try to launch an instance. No matter whether the
network is internal or external (vlan provider), the result is the same.

So my question is: what am I doing wrong? Maybe I missed something in the
Queens config samples, or encountered a huge bug, or what?


Pike Environment
--
Ubuntu 16.04.4 LTS, 4.4.0-119-generic
Neutron 11.0.3-0ubuntu1.1~cloud0
openvswitch-switch 2.8.1


Queens Environment
--
Ubuntu 16.04.4 LTS, 4.4.0-119-generic
Neutron 12.0.0-0ubuntu2~cloud0
openvswitch-switch 2.9.0

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1763146

Title:
  [ovs] Binding failed for port

Status in neutron:
  New

Bug description:
  Hello!

  I already had a successful deployment on Pike (3 control nodes + several
  compute nodes):
   - ML2 plugin - openvswitch
   - default tunnel type - vxlan
   - dvr is enabled

  Instances were started, running, and migrating without errors.
  Everything was fine until I upgraded to Queens... I used the same
  neutron configuration files and encountered the following error:

  ...
  2018-04-11 12:21:19.128 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge [-] 
Bridge br-int has datapath-ID 5618b7026f46
  2018-04-11 12:21:23.460 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Mapping physical network 
external to bridge br0
  2018-04-11 12:21:23.948 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Bridge br0 has datapath-ID 
ec0d9a7abceb
  2018-04-11 12:21:24.206 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Bridge br-tun has 
datapath-ID d2588392e746
  2018-04-11 12:21:24.220 36774 INFO neutron.agent.agent_extensions_manager 
[req-ac08fa6c-7257-4296-aede-67a051568440 - - - - -] Initializing agent 
extension 'qos'
  2018-04-11 12:21:24.323 36774 INFO 
neutron.plugins.ml2.drivers.openvswitch.agent.ovs_dvr_neutron_agent 

[Yahoo-eng-team] [Bug 1745358] Re: Marker reset option for nova-manage map_instances

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/539501
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=98163f98b700840f50c689e4a3aed7e28dd0c1ef
Submitter: Zuul
Branch: master

commit 98163f98b700840f50c689e4a3aed7e28dd0c1ef
Author: Surya Seetharaman 
Date:   Wed Jan 31 12:39:34 2018 +0100

Marker reset option for nova-manage map_instances

Currently nova-manage map_instances uses a marker set-up by which repeated
runs of the command will start from where the last run finished. Even
deleting the cell with the instance_mappings will not remove the marker
since the marker mapping has a NULL cell_mapping field. There needs to be
a way to reset this marker so that the user can run map_instances from the
beginning instead of the map_instances command saying "all instances are
already mapped" as is the current behavior.

Change-Id: Ic9a0bda9314cc1caed993db101bf6f874c0a0ae8
Closes-Bug: #1745358


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1745358

Title:
  Marker reset option for nova-manage map_instances

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Currently nova-manage map_instances uses a marker set-up by which
  repeated runs of the command will start from where the last run
  finished. Even deleting the cell with the instance_mappings will not
  remove the marker since the marker mapping has a NULL cell_mapping
  field. There needs to be a way to reset this marker so that the user
  can run map_instances from the beginning instead of the map_instances
  command saying "all instances are already mapped" as is the current
  behavior.

  
  Solution :

  Add a --reset flag to "nova-manage map_instances"
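
  With that flag, a hedged usage example (assuming the cell_v2 command
  group that hosts map_instances):

      nova-manage cell_v2 map_instances --cell_uuid <cell_uuid> --reset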

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1745358/+subscriptions



[Yahoo-eng-team] [Bug 1743589] Re: openstack_dashboard.api.nova.QuotaSet does not exclude nova-network quotas

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/534386
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=986e90248b462b1e8c8e2033caee68ebd50e0bf8
Submitter: Zuul
Branch: master

commit 986e90248b462b1e8c8e2033caee68ebd50e0bf8
Author: Akihiro Motoki 
Date:   Mon Jan 15 06:09:09 2018 +0900

Exclude nova-network quotas properly

openstack_dashboard.api.nova.QuotaSet was introduced to exclude
nova-network quotas properly, but it did not work because QuotaSet does
not support 'in' and 'del' operations.
This commit fixes the logic and adds a unit test.

Closes-Bug: #1743589
Change-Id: I8c3bfe985cccdf53fd555bf185c293806c14b6f6
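
A minimal sketch of the idea behind the fix: give the wrapper the container
protocol methods so that 'in' and 'del' behave the way the filtering code
expects (illustrative, not the exact Horizon class):

    class QuotaSet(object):
        def __init__(self, items):
            self._items = dict(items)

        def __contains__(self, key):
            # Enables: 'floating_ips' in quota_set
            return key in self._items

        def __delitem__(self, key):
            # Enables: del quota_set['floating_ips']
            del self._items[key]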


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1743589

Title:
  openstack_dashboard.api.nova.QuotaSet  does not exclude nova-network
  quotas

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  ignore_quotas of openstack_dashboard.api.nova.QuotaSet [1] was
  introduced to exclude nova-network quotas, but it turns out it does
  not work.

  [2] is a session example. It is expected that
  api_nova.QuotaSet(quotas) does not include nova-network related quotas
  but actually nova-network related quotas are included.

  [1] 
https://github.com/openstack/horizon/blob/e129ba919260e85881a08ab33f0b142c6603c988/openstack_dashboard/api/nova.py#L242-L247
  [2] http://paste.openstack.org/show/645735/

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1743589/+subscriptions



[Yahoo-eng-team] [Bug 1763043] Re: Unnecessary "Instance not resizing, skipping migration" warning in n-cpu logs during live migration

2018-04-11 Thread Matt Riedemann
** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
   Status: New => Confirmed

** Changed in: nova/queens
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1763043

Title:
  Unnecessary "Instance not resizing, skipping migration" warning in
  n-cpu logs during live migration

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  In a 7 day CI run, we have over 40K hits of this warning in the logs:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Instance%20not%20resizing%2C%20skipping%20migration%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

  http://logs.openstack.org/54/507854/4/gate/legacy-tempest-dsvm-
  multinode-live-
  migration/d723002/logs/subnode-2/screen-n-cpu.txt#_Apr_11_13_54_16_225676

  Apr 11 13:54:16.225676 ubuntu-xenial-rax-dfw-0003443206 nova-
  compute[29642]: WARNING nova.compute.resource_tracker [None req-
  61a6f9c9-3355-4594-acfa-ebf31ba995aa tempest-
  LiveMigrationTest-1725408283 tempest-LiveMigrationTest-1725408283]
  [instance: 6f4923e3-bf1f-4cb7-bd37-00e5d437759e] Instance not
  resizing, skipping migration.

  That warning was written back in 2012 when resize support was added to
  the resource tracker:

  https://review.openstack.org/#/c/15799/

  And since https://review.openstack.org/#/c/226411/ in 2015 it doesn't
  apply to evacuations.

  We shouldn't see a warning in the nova-compute logs during a normal
  operation like a live migration, so we really should either just drop
  this down to debug or remove it completely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1763043/+subscriptions



[Yahoo-eng-team] [Bug 1694127] Re: Unauthorized exception in angular users page as a member user.

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/468726
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=dc9a6a33ea5a4b0e92e79252f6a4acf1ef426ec8
Submitter: Zuul
Branch: master

commit dc9a6a33ea5a4b0e92e79252f6a4acf1ef426ec8
Author: wei.ying 
Date:   Sun May 28 22:00:38 2017 +0800

Fix unauthorized exception when using member user to access angular users 
panel

When a member-role user goes to the angular users panel, an
unauthorized exception is raised since the user has no access to
the users list. A policy check mechanism should be added at
the panel.

Change-Id: I9cfa1aeab27aca1631322d8c0b3e6a7a930d9cfe
Closes-Bug: #1694127
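
A hedged sketch of a panel-level policy check (the rule names are
illustrative; Horizon's Panel class honors a policy_rules attribute):

    import horizon
    from django.utils.translation import ugettext_lazy as _


    class Users(horizon.Panel):
        name = _("Users")
        slug = 'users'
        # Hide the panel from users who are not allowed to list users.
        policy_rules = (("identity", "identity:list_users"),)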


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1694127

Title:
  Unauthorized exception in angular users page  as a member user.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Env: devstack master branch

  Steps to reproduce:
  1. Set 'users_panel' to True in settings.py
  2. Use a member user to log in.
  3. Go to the identity/users panel

  It will redirect to the login page when clicking the users panel or
  clicking a user name to open the detail page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1694127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1738930] Fix merged to nova (master)

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/550648
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=bab3184ced29ef8333cdd4d1cf6caf0b4226a1d4
Submitter: Zuul
Branch: master

commit bab3184ced29ef8333cdd4d1cf6caf0b4226a1d4
Author: Takashi NATSUME 
Date:   Thu Mar 8 09:21:13 2018 +0900

api-ref: Parameter verification for servers.inc (2/3)

This patch verifies BDM, fault and scheduler hint parameters.
A subsequent patch will verify other parameters.

Change-Id: If57aa3e37ebaa6fa13718480bb216d10664aa5b1
Partial-Bug: #1738930


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1738930

Title:
  api-ref: parameters in servers.inc should be verified

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  In api-ref/source/servers.inc, there is the following description.
  Parameter verification has not been finished yet,
  so the parameters should be verified.

  .. TODO(sdague) parameter microversions need to be gone through in the
 response (request side should be good)

  
https://raw.githubusercontent.com/openstack/nova/07c925a5321e379293bbf0e55bf3c40798eaf21b
  /api-ref/source/servers.inc

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1738930/+subscriptions



[Yahoo-eng-team] [Bug 1763051] [NEW] Need to audit when notifications are sent during live migration

2018-04-11 Thread Matt Riedemann
Public bug reported:

We do a pretty good job of testing that notifications are sent during
certain operations, like live migration, but not so great a job at
making sure that notifications are sent at expected times, e.g. that
start and end notifications actually happen at the start and end of a
method (it seems we should really use a decorator function for something
like this for future proofing...).

This is a follow on from bug 1762876 where I thought about relying on
the "live_migration._post.end" notification to be able to tell when a
migration record should be 'completed' but that notification is sent
*before* we change the status on the migration record:

https://github.com/openstack/nova/blob/fe976dcc559d059589a9ccf953a28e855abf50fb/nova/compute/manager.py#L6323

If you look at the beginning of the same method, the start notification
is sent well after we've already started doing some work:

https://github.com/openstack/nova/blob/fe976dcc559d059589a9ccf953a28e855abf50fb/nova/compute/manager.py#L6261

So this bug is primarily meant to be an audit of at least the live
migration flows where the methods are big and hairy so it's easy to see
how over the years, the notifications got pushed into weird spots in
those methods, and should be moved back to the appropriate start/end
locations (or write a decorator to handle this).
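
A rough sketch of the decorator idea (the notifier helper is invented for
illustration; nova's actual notification API differs):

    import functools

    def notify_start_end(event):
        # Bracket the entire decorated method with start/end notifications.
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, context, *args, **kwargs):
                self._notify(context, event + '.start')  # assumed helper
                try:
                    return func(self, context, *args, **kwargs)
                finally:
                    self._notify(context, event + '.end')
            return wrapper
        return decorator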

** Affects: nova
 Importance: Medium
 Status: Triaged


** Tags: live-migration notifications

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1763051

Title:
  Need to audit when notifications are sent during live migration

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  We do a pretty good job of testing that notifications are sent during
  certain operations, like live migration, but not so great a job at
  making sure that notifications are sent at expected times, e.g. that
  start and end notifications actually happen at the start and end of a
  method (it seems we should really use a decorator function for
  something like this for future proofing...).

  This is a follow on from bug 1762876 where I thought about relying on
  the "live_migration._post.end" notification to be able to tell when a
  migration record should be 'completed' but that notification is sent
  *before* we change the status on the migration record:

  
https://github.com/openstack/nova/blob/fe976dcc559d059589a9ccf953a28e855abf50fb/nova/compute/manager.py#L6323

  If you look at the beginning of the same method, the start
  notification is sent well after we've already started doing some work:

  
https://github.com/openstack/nova/blob/fe976dcc559d059589a9ccf953a28e855abf50fb/nova/compute/manager.py#L6261

  So this bug is primarily meant to be an audit of at least the live
  migration flows where the methods are big and hairy so it's easy to
  see how over the years, the notifications got pushed into weird spots
  in those methods, and should be moved back to the appropriate
  start/end locations (or write a decorator to handle this).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1763051/+subscriptions



[Yahoo-eng-team] [Bug 1736336] Re: Image data stays in backend if image signature verification fails

2018-04-11 Thread Brian Rosmaita
** Also affects: glance/queens
   Importance: Undecided
   Status: New

** Changed in: glance/queens
Milestone: None => queens-stable-2

** Changed in: glance/queens
   Importance: Undecided => High

** Changed in: glance/queens
   Status: New => Triaged

** Changed in: glance/queens
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

** Tags removed: queens-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1736336

Title:
  Image data stays in backend if image signature verification fails

Status in Glance:
  In Progress
Status in Glance queens series:
  Triaged

Bug description:
  If image signature verification is enabled and verification fails while
  creating the image, Glance returns a valid error and deletes the image
  from the database, but the image data stays in the backend forever.

  Ideally, if image verification fails, it should delete the data from
  the backend as well.

  Pre-requisites:
  1. Ensure Barbican is enabled
  2. Create Keys and Certificate (Reference  
https://etherpad.openstack.org/p/mitaka-glance-image-signing-instructions#90)
  3. Create Signature (Reference 
https://etherpad.openstack.org/p/mitaka-glance-image-signing-instructions#184) 
and note down output of 'signature_64'
  4. Create context and upload certificate using context (Reference 
https://etherpad.openstack.org/p/glance-image-signing-create-context) and note 
down output of 'cert_uuid'

  
  Steps to reproduce:
  1. Upload Image to Glance, with Signature Metadata
 img_signature_certificate_uuid = 'fb67edd2-95ef-404b-9af2-910708c6d9b7'
 img_signature_hash_method = 'SHA-256'
 img_signature_key_type = 'RSA-PSS'
 img_signature = 
'ezccBYtJEdj2gOrN09woioHwi2rDVvBsmRI0i+9EYAYdE7E6FV8jzJD9BImcq/m7Dm6yZZPkCUHz+y4HBKeYqK0+otcz921zaeqcKGBvU1t7J9AL0hEgJbWg0RY6RXqDXpsOQrrkrHuna4O+BUOp6sPwb3j2eFYbbsqW6d/obgM='
  (the 'signature_64' value noted in the Pre-requisites section)

 $ glance image-create --property
  name=cirrosSignedImage_goodSignature --property is-public=true
  --container-format bare --disk-format qcow2 --property
  
img_signature='abcdBYtJEdj2gOrN09woioHwi2rDVvBsmRI0i+9EYAYdE7E6FV8jzJD9BImcq/m7Dm6yZZPkCUHz+y4HBKeYqK0+otcz921zaeqcKGBvU1t7J9AL0hEgJbWg0RY6RXqDXpsOQrrkrHuna4O+BUOp6sPwb3j2eFYbbsqW6d/obgM='
  --property img_signature_certificate_uuid='fb67edd2-95ef-404b-
  9af2-910708c6d9b7' --property img_signature_hash_method='SHA-256'
  --property img_signature_key_type='RSA-PSS' --file
  cirros-0.3.2-source.tar.gz

  Note:
  'img_signature' starts with 'ezcc...', but in the create command I have
  passed a value starting with 'abcd...'.

  Actual Output:

  | Property                       | Value
  | checksum                       | None
  | container_format               | bare
  | created_at                     | 2017-12-05T07:04:38Z
  | disk_format                    | qcow2
  | id                             | 6e8bec71-2176-4bcc-a732-2f76c5ac589f
  | img_signature                  | abcdBYtJEdj2gOrN09woioHwi2rDVvBsmRI0i+9EYAYdE7E6FV8jzJD9BImcq/m7Dm6yZZPkCUHz+y4H
  |                                | BKeYqK0+otcz921zaeqcKGBvU1t7J9AL0hEgJbWg0RY6RXqDXpsOQrrkrHuna4O+BUOp6sPwb3j2eFYb
  |                                | bsqW6d/obgM=
  | img_signature_certificate_uuid | fb67edd2-95ef-404b-9af2-910708c6d9b7
  | img_signature_hash_method      | SHA-256
  | img_signature_key_type         | RSA-PSS
  | is-public                      | true
  | min_disk                       | 0
  | min_ram                        | 0
  | name                           | cirrosSignedImage_goodSignature
  | owner                          | 4f186fe25c934eeb95186fd0c5afda49

[Yahoo-eng-team] [Bug 1726213] Re: KNOWN_EXCEPTIONS don't include all possible exceptions

2018-04-11 Thread Brian Rosmaita
** Also affects: glance/queens
   Importance: Undecided
   Status: New

** Changed in: glance/queens
Milestone: None => queens-stable-2

** Changed in: glance/queens
   Importance: Undecided => High

** Changed in: glance/queens
   Status: New => Triaged

** Changed in: glance/queens
 Assignee: (unassigned) => Brian Rosmaita (brian-rosmaita)

** Tags removed: queens-backport-potential

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1726213

Title:
  KNOWN_EXCEPTIONS don't include all possible exceptions

Status in Glance:
  In Progress
Status in Glance queens series:
  Triaged

Bug description:
  oslo.config may raise a ConfigFileValueError when server.start(***) is
  run in `glance/cmd/api.py`. ConfigFileValueError is a subclass of
  ValueError, so it can be caught using KNOWN_EXCEPTIONS.

  But ConfigFileValueError is not itself in KNOWN_EXCEPTIONS, so using
  the index method of KNOWN_EXCEPTIONS raises a ValueError, which is
  unexpected:

  ```
  2017-10-22 22:47:46.460 94 CRITICAL glance [-] Unhandled error: ValueError: 
tuple.index(x): x not in tuple
  2017-10-22 22:47:46.460 94 ERROR glance Traceback (most recent call last):
  2017-10-22 22:47:46.460 94 ERROR glance   File 
"/var/lib/kolla/venv/bin/glance-api", line 10, in 
  2017-10-22 22:47:46.460 94 ERROR glance sys.exit(main())
  2017-10-22 22:47:46.460 94 ERROR glance   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/glance/cmd/api.py", line 92, 
in main
  2017-10-22 22:47:46.460 94 ERROR glance fail(e)
  2017-10-22 22:47:46.460 94 ERROR glance   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/glance/cmd/api.py", line 65, 
in fail
  2017-10-22 22:47:46.460 94 ERROR glance return_code = 
KNOWN_EXCEPTIONS.index(type(e)) + 1
  2017-10-22 22:47:46.460 94 ERROR glance ValueError: tuple.index(x): x not in 
tuple
  2017-10-22 22:47:46.460 94 ERROR glance

  ```
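
  A minimal sketch of a safer lookup that tolerates subclasses, instead of
  relying on tuple.index() and its exact-type match (illustrative, not the
  actual glance patch):

      def return_code_for(exc, known_exceptions):
          # isinstance() also matches subclasses such as
          # ConfigFileValueError, which tuple.index() cannot find.
          for i, exc_type in enumerate(known_exceptions):
              if isinstance(exc, exc_type):
                  return i + 1
          return len(known_exceptions) + 1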

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1726213/+subscriptions



[Yahoo-eng-team] [Bug 1763043] [NEW] Unnecessary "Instance not resizing, skipping migration" warning in n-cpu logs during live migration

2018-04-11 Thread Matt Riedemann
Public bug reported:

In a 7 day CI run, we have over 40K hits of this warning in the logs:

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Instance%20not%20resizing%2C%20skipping%20migration%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

http://logs.openstack.org/54/507854/4/gate/legacy-tempest-dsvm-
multinode-live-
migration/d723002/logs/subnode-2/screen-n-cpu.txt#_Apr_11_13_54_16_225676

Apr 11 13:54:16.225676 ubuntu-xenial-rax-dfw-0003443206 nova-
compute[29642]: WARNING nova.compute.resource_tracker [None req-
61a6f9c9-3355-4594-acfa-ebf31ba995aa tempest-
LiveMigrationTest-1725408283 tempest-LiveMigrationTest-1725408283]
[instance: 6f4923e3-bf1f-4cb7-bd37-00e5d437759e] Instance not resizing,
skipping migration.

That warning was written back in 2012 when resize support was added to
the resource tracker:

https://review.openstack.org/#/c/15799/

And since https://review.openstack.org/#/c/226411/ in 2015 it doesn't
apply to evacuations.

We shouldn't see a warning in the nova-compute logs during a normal
operation like a live migration, so we really should either just drop
this down to debug or remove it completely.

** Affects: nova
 Importance: Medium
 Assignee: Matt Riedemann (mriedem)
 Status: Triaged


** Tags: logging serviceability

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1763043

Title:
  Unnecessary "Instance not resizing, skipping migration" warning in
  n-cpu logs during live migration

Status in OpenStack Compute (nova):
  Triaged

Bug description:
  In a 7 day CI run, we have over 40K hits of this warning in the logs:

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22Instance%20not%20resizing%2C%20skipping%20migration%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22=7d

  http://logs.openstack.org/54/507854/4/gate/legacy-tempest-dsvm-
  multinode-live-
  migration/d723002/logs/subnode-2/screen-n-cpu.txt#_Apr_11_13_54_16_225676

  Apr 11 13:54:16.225676 ubuntu-xenial-rax-dfw-0003443206 nova-
  compute[29642]: WARNING nova.compute.resource_tracker [None req-
  61a6f9c9-3355-4594-acfa-ebf31ba995aa tempest-
  LiveMigrationTest-1725408283 tempest-LiveMigrationTest-1725408283]
  [instance: 6f4923e3-bf1f-4cb7-bd37-00e5d437759e] Instance not
  resizing, skipping migration.

  That warning was written back in 2012 when resize support was added to
  the resource tracker:

  https://review.openstack.org/#/c/15799/

  And since https://review.openstack.org/#/c/226411/ in 2015 it doesn't
  apply to evacuations.

  We shouldn't see a warning in the nova-compute logs during a normal
  operation like a live migration, so we really should either just drop
  this down to debug or remove it completely.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1763043/+subscriptions



[Yahoo-eng-team] [Bug 1763039] [NEW] evacuate instance documentation not mentioning host-evacuate

2018-04-11 Thread do3meli
Public bug reported:

- [X] This is a doc addition request.

The current documentation topic "evacuate" in the admin guide does not
mention the host-evacuate command at all.

If one wants to evacuate all instances on a failed host, it is easier to
use the above command. As far as I understand, the host-evacuate command
will loop over the instances on the failed node and then run each
instance against the scheduler so it can be recreated on a different
compute node.
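
A hedged example (python-novaclient CLI; the target-host option name is an
assumption):

    nova host-evacuate --target_host <spare-host> <failed-host>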

I suggest enhancing the current documentation with a new section "auto
schedule failover host" and moving the current part into a section called
"manually define failover host", or something like this.

---
Release: 17.0.0.0rc2.dev637 on 2018-04-11 13:22
SHA: 80fa0ff912e37890f255bbbcd1c25f26759070ff
Source: 
https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/evacuate.rst
URL: https://docs.openstack.org/nova/latest/admin/evacuate.html

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: doc

** Tags added: doc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1763039

Title:
  evacuate instance documentation not mentioning host-evacuate

Status in OpenStack Compute (nova):
  New

Bug description:
  - [X] This is a doc addition request.

  The current documentation topic "evacuate" in the admin guide does not
  mention the host-evacuate command at all.

  If one wants to evacuate all instances on a failed host, it is easier
  to use the above command. As far as I understand, the host-evacuate
  command loops over the instances on the failed node and runs each
  instance through the scheduler so it can be recreated on a different
  compute node.

  I suggest enhancing the current documentation with a new section such
  as "auto schedule failover host" and moving the current content into a
  section called "manually define failover host", or something similar.

  ---
  Release: 17.0.0.0rc2.dev637 on 2018-04-11 13:22
  SHA: 80fa0ff912e37890f255bbbcd1c25f26759070ff
  Source: https://git.openstack.org/cgit/openstack/nova/tree/doc/source/admin/evacuate.rst
  URL: https://docs.openstack.org/nova/latest/admin/evacuate.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1763039/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1662867] Re: update_available_resource_for_node racing instance deletion

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/553067
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=5f16e714f58336344752305f94451e7c7c55742c
Submitter: Zuul
Branch: master

commit 5f16e714f58336344752305f94451e7c7c55742c
Author: Matt Riedemann 
Date:   Wed Mar 14 16:43:22 2018 -0400

libvirt: handle DiskNotFound during update_available_resource

The update_available_resource periodic task in the compute manager
eventually calls through to the resource tracker and virt driver
get_available_resource method, which gets the guests running on
the hypervisor, and builds up a set of information about the host.
This includes disk information for the active domains.

However, the periodic task can race with instances being deleted
concurrently and the hypervisor can report the domain but the driver
has already deleted the backing files as part of deleting the
instance, and this leads to failures when running "qemu-img info"
on the disk path which is now gone.

When that happens, the entire periodic update fails.

This change simply tries to detect the specific failure from
'qemu-img info' and translate it into a DiskNotFound exception which
the driver can handle. In this case, if the associated instance is
undergoing a task state transition such as moving to another host or
being deleted, we log a message and continue. If the instance is in
steady state (task_state is not set), then we consider it a failure
and re-raise it up.

Note that we could add the deleted=False filter to the instance query
in _get_disk_over_committed_size_total but that doesn't help us in
this case because the hypervisor says the domain is still active
and the instance is not actually considered deleted in the DB yet.

Change-Id: Icec2769bf42455853cbe686fb30fda73df791b25
Closes-Bug: #1662867
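
The two-part handling described above might be sketched as follows (the
helper names and the exact stderr match are assumptions for illustration,
not the literal libvirt driver code):

    from oslo_concurrency import processutils
    from oslo_log import log as logging

    from nova import exception

    LOG = logging.getLogger(__name__)

    def _qemu_img_info(path):
        try:
            out, _err = processutils.execute('qemu-img', 'info', path)
            return out
        except processutils.ProcessExecutionError as exc:
            # Assumed match: qemu-img reports ENOENT on stderr when the
            # backing file was removed by a concurrent delete or migration.
            if 'No such file or directory' in exc.stderr:
                raise exception.DiskNotFound(location=path)
            raise

    def _disk_info_or_skip(instance, path):
        try:
            return _qemu_img_info(path)
        except exception.DiskNotFound:
            if instance.task_state is not None:
                # Mid-operation (deleting, migrating, ...): the disk going
                # away is expected, so skip this instance and move on.
                LOG.debug('Disk %s vanished during a task state '
                          'transition; skipping.', path, instance=instance)
                return None
            # Steady state: a missing disk is a real failure, re-raise.
            raise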


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1662867

Title:
  update_available_resource_for_node racing instance deletion

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===========
  The following trace was seen multiple times during a CI run for
  https://review.openstack.org/#/c/383859/:

  
http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_10_25_548
  
http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?level=ERROR#_2017-02-07_19_15_26_004

  In the first example, a request to terminate the instance 60b7cb32
  appears to race an existing run of the
  update_available_resource_for_node periodic task:

  req-fa96477b-34d2-4ab6-83bf-24c269ed7c28

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?#_2017-02-07_19_10_25_478

  req-dc60ed89-d3da-45f6-b98c-8f57c767d751

  http://logs.openstack.org/09/395709/7/check/gate-tempest-dsvm-full-devstack-plugin-nfs-nv/a4c1057/logs/screen-n-cpu.txt.gz?#_2017-02-07_19_10_25_548

  Steps to reproduce
  ==================
  Delete an instance while update_available_resource_for_node is running

  Expected result
  ===============
  Either swallow the exception and move on, or lock instances in such a
  way that they can't be removed while this periodic task is running.

  Actual result
  =============
  update_available_resource_for_node fails and stops.

  Environment
  ===========
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

 https://review.openstack.org/#/c/383859/ - but it should reproduce
  against master.

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of that?

 Libvirt

  3. Which storage type did you use?
 (For example: Ceph, LVM, GPFS, ...)
 What's the version of that?

 n/a

  4. Which networking type did you use?
 (For example: nova-network, Neutron with OpenVSwitch, ...)

 n/a

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1662867/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1761487] Fix merged to nova (master)

2018-04-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/559111
Committed: https://git.openstack.org/cgit/openstack/nova/commit/?id=587eb4303be65810133e88114106ee9019940118
Submitter: Zuul
Branch: master

commit 587eb4303be65810133e88114106ee9019940118
Author: Matt Riedemann 
Date:   Thu Apr 5 11:25:49 2018 -0400

Log a more useful error when neutron isn't configured

When the [neutron] section of nova.conf isn't configured for
auth with the networking service, end users of the compute
REST API get a 500 error and the logs contain this mostly
unhelpful error message:

  Unauthorized: Unknown auth type: None

This change adds a more useful error log message indicating
the root problem and provides a link to the networking service
install guide for how to resolve it.

Change-Id: I18f162c4f8d1964cb4d0c184ff2149c76e1e86b4
Partial-Bug: #1761487
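
The shape of the fix is roughly the following (a sketch: the function
name, log wording, and raised exception are assumptions based on the
description above):

    from keystoneauth1 import loading as ks_loading
    from neutronclient.common import exceptions as neutron_client_exc
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def _load_auth_plugin(conf):
        auth_plugin = ks_loading.load_auth_from_conf_options(conf, 'neutron')
        if auth_plugin:
            return auth_plugin
        # Replace the opaque "Unknown auth type: None" failure with a log
        # message that points operators at the actual misconfiguration.
        LOG.error('The [neutron] section of your nova configuration file '
                  'must be configured for authentication with the '
                  'networking service endpoint; see the networking service '
                  'install guide for details.')
        raise neutron_client_exc.Unauthorized(
            message='Unknown auth type: %s' % conf.neutron.auth_type)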


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1761487

Title:
  Install guide and error message in logs are not clear that [neutron]
  auth must be configured in nova.conf

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) pike series:
  Confirmed
Status in OpenStack Compute (nova) queens series:
  Confirmed

Bug description:
  While trying to launch a new instance on a newly installed test
  OpenStack deployment, I get this error:

  Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  (HTTP 500) (Request-ID: req-70a6cb07-72d9-41f3-80aa-ae6c230c62da)

  The same error from the command line:

  openstack server create --flavor Basic --image cirros \
    --nic net-id=f5f0b96c-ab0d-4025-aca3-d2de8f0e6139 \
    --security-group default --key-name demo selfservice-instance

  Unexpected API Error. Please report this at
  http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.
  (HTTP 500) (Request-ID: req-3050043e-0baa-471b-8c64-3015b38cef56)

  OpenStack: Queens

  root@controller:~# dpkg -l | grep nova
  ii  nova-api            2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - API frontend
  ii  nova-common         2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - common files
  ii  nova-conductor      2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - conductor service
  ii  nova-consoleauth    2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - Console Authenticator
  ii  nova-novncproxy     2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - NoVNC proxy
  ii  nova-placement-api  2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - placement API frontend
  ii  nova-scheduler      2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute - virtual machine scheduler
  ii  python-nova         2:17.0.1-0ubuntu1~cloud0  all  OpenStack Compute Python libraries
  ii  python-novaclient   2:9.1.1-0ubuntu1~cloud0   all  client library for OpenStack Compute API - Python 2.7

  Libvirt + KVM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1761487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762992] [NEW] extract_messages does not work with python3

2018-04-11 Thread Akihiro Motoki
Public bug reported:

extract_messages does not work with python3.

http://paste.openstack.org/show/718907/

Note that babel_extract_angular does not depend on Django, so it is not
related to the Django version.

** Affects: horizon
 Importance: Undecided
 Assignee: Akihiro Motoki (amotoki)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1762992

Title:
  extract_messages does not work with python3

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  extract_messages does not work with python3.

  http://paste.openstack.org/show/718907/

  Note that babel_extract_angular does not depend on Django, so it is not
  related to the Django version.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1762992/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762941] [NEW] testtools.matchers._impl.MismatchError: 'completed' != u'running' in test_bug_1718512.TestRequestSpecRetryReschedule.test_resize_with_reschedule_then_live_migrate

2018-04-11 Thread Takashi NATSUME
Public bug reported:

[Environment]
nova master (commit 15f1caf98a46ba0ab3f8365075c564e89f06eef3)
Ubuntu 16.04.2 LTS

[Step to reproduce]
tox -e functional nova.tests.functional.regressions.test_bug_1718512.TestRequestSpecRetryReschedule.test_resize_with_reschedule_then_live_migrate

[Log]
See the attached file in comment #1.

It also occurs in the gate check job (nova-tox-functional).

http://logs.openstack.org/67/560267/1/check/nova-tox-functional/550e5a1/job-output.txt.gz
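
The MismatchError suggests the test asserts the migration status while
the migration record is still 'running': the instance can go ACTIVE
slightly before the migration reaches 'completed', so the test needs to
poll with a timeout instead of asserting immediately. A test-side sketch
(the api_get helper and the response shape are assumptions about the
functional test fixtures):

    import time

    def _wait_for_migration_status(api, server_id, status, timeout=10):
        # Poll the migrations API until the server's migration reaches the
        # desired status, rather than asserting right after the instance
        # becomes ACTIVE.
        deadline = time.time() + timeout
        while time.time() < deadline:
            migrations = api.api_get('/os-migrations').body['migrations']
            for migration in migrations:
                if (migration['instance_uuid'] == server_id
                        and migration['status'] == status):
                    return migration
            time.sleep(0.5)
        raise AssertionError('Timed out waiting for migration of %s to '
                             'reach status %r' % (server_id, status))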

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1762941

Title:
  testtools.matchers._impl.MismatchError: 'completed' != u'running' in
  
test_bug_1718512.TestRequestSpecRetryReschedule.test_resize_with_reschedule_then_live_migrate

Status in OpenStack Compute (nova):
  New

Bug description:
  [Environment]
  nova master (commit 15f1caf98a46ba0ab3f8365075c564e89f06eef3)
  Ubuntu 16.04.2 LTS

  [Step to reproduce]
  tox -e functional nova.tests.functional.regressions.test_bug_1718512.TestRequestSpecRetryReschedule.test_resize_with_reschedule_then_live_migrate

  [Log]
  See the attached file in comment #1.

  It also occurs in the gate check job (nova-tox-functional).

  http://logs.openstack.org/67/560267/1/check/nova-tox-functional/550e5a1/job-output.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1762941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1762700] Re: lxd containers created by juju are not receiving IP addresses

2018-04-11 Thread Jason Hobbs
07:13 < jam> jhobbs: my initial thought is that it is a cloud-init bug,
where we might be passing something like "networking=false" because we
are going to set up networking ourselves
07:14 < jam> and that causes a particular place that always assumed it
had a value to iterate, but the object it has is now None.
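
If that hypothesis holds, the failure mode would be the usual
iterate-over-None bug, sketched generically below (the function and
option names are illustrative only, not actual cloud-init code):

    # Code that always assumed a list now receives None because the
    # caller disabled networking.
    def bring_up(interfaces):
        for iface in interfaces:        # raises TypeError when None
            print('configuring %s' % iface)

    def bring_up_guarded(interfaces):
        for iface in interfaces or []:  # treat None as "nothing to do"
            print('configuring %s' % iface)

    bring_up_guarded(None)              # no-op instead of a crash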


** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1762700

Title:
  lxd containers created by juju are not receiving IP addresses

Status in cloud-init:
  New
Status in juju:
  Triaged

Bug description:
  Using juju 2.3.5, maas 2.3.0-6434-gd354690-0ubuntu1~16.04.1, and lxd
  2.0.11-0ubuntu1~16.04.4, containers are not receiving IP addresses:

  http://paste.ubuntu.com/p/wdY9ShtrTP/

  There is an error in cloud-init-output.log:
  http://paste.ubuntu.com/p/k8rdGHjDQq/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1762700/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp