[Yahoo-eng-team] [Bug 1775114] [NEW] Error when launching instance with a PCI on it

2018-06-04 Thread vpc
Public bug reported:

Hi All,

After launching an instance with a GPU in a Kolla OpenStack Queens
deployment, instance creation fails with the libvirt/vfio error below.

I'm using KVM.

2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a]   File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a] libvirtError: internal error: qemu 
unexpectedly closed the monitor: 2018-06-05T02:43:47.447602Z qemu-kvm: -device 
vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio error: :01:00.0: 
group 1 is not viable
2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a] Please ensure all devices within the 
iommu_group are bound to their vfio bus driver.
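The "group 1 is not viable" message means at least one device sharing the GPU's IOMMU group is not bound to a vfio bus driver. A minimal sketch of that viability rule (the group membership and driver mapping below are hypothetical; on a real host they come from /sys/kernel/iommu_groups and /sys/bus/pci/devices/*/driver):

```python
# Sketch of the IOMMU-group viability rule behind the qemu/vfio error:
# every device in the GPU's IOMMU group must be bound to a vfio bus
# driver (e.g. vfio-pci) before the group can be assigned to a guest.

def unbound_group_members(group_devices, driver_of):
    """Return the devices in the group that are not bound to vfio-pci."""
    return [dev for dev in group_devices
            if driver_of.get(dev) != "vfio-pci"]

# Hypothetical example: the GPU shares IOMMU group 1 with its HDMI
# audio function, which is still bound to the host audio driver.
group1 = ["0000:01:00.0", "0000:01:00.1"]
drivers = {"0000:01:00.0": "vfio-pci", "0000:01:00.1": "snd_hda_intel"}

offenders = unbound_group_members(group1, drivers)
# Rebinding every offender to vfio-pci makes the group viable.
```

Once `offenders` is empty, qemu can open the group and the instance should start.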

** Affects: nova
 Importance: Undecided
 Status: New

** Summary changed:

- PCI 
+ Error when launching instance with a PCI on it

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775114

Title:
  Error when launching instance with a PCI on it

Status in OpenStack Compute (nova):
  New

Bug description:
  Hi All,

  After launching an instance with a GPU in a Kolla OpenStack Queens
  deployment, instance creation fails with the libvirt/vfio error below.

  I'm using KVM.

  2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a]   File 
"/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags
  2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a] if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a] libvirtError: internal error: qemu 
unexpectedly closed the monitor: 2018-06-05T02:43:47.447602Z qemu-kvm: -device 
vfio-pci,host=01:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio error: :01:00.0: 
group 1 is not viable
  2018-06-05 10:43:49.592 7 ERROR nova.compute.manager [instance: 
1ec82ee4-833d-47b0-abc7-a4a2ea848e9a] Please ensure all devices within the 
iommu_group are bound to their vfio bus driver.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1775114/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1734832] Re: Unreachable 'ImageSizeLimitExceeded' exception block in upload call

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/523366
Committed: 
https://git.openstack.org/cgit/openstack/glance/commit/?id=ffc3923e93dc1d4eea789ce5163d176efb7d685b
Submitter: Zuul
Branch: master

commit ffc3923e93dc1d4eea789ce5163d176efb7d685b
Author: Abhishek Kekane 
Date:   Tue Nov 28 09:51:02 2017 +

Fix unreachable 'ImageSizeLimitExceeded' exception in image-upload

ImageSizeLimitExceeded exception block [1] is unreachable in upload because
it is caught at [2], which raises a StorageQuotaFull exception instead. The
problem here is that we have nested usage of the limiting reader.

To make this correct, the limiting reader was changed to accept the
exception class as a parameter, so that we can pass StorageQuotaFull
when LimitingReader is used for the quota check and
ImageSizeLimitExceeded when it is used for the image size cap check.

[1] 
https://github.com/openstack/glance/blob/fd16fa4f258fd3f77c14900a019e97bb90bc5ac0/glance/api/v2/image_data.py#L230
[2] 
https://github.com/openstack/glance/blob/fd16fa4f258fd3f77c14900a019e97bb90bc5ac0/glance/quota/__init__.py#L305

Closes-Bug: #1734832
Change-Id: I5a419b763bee7f983c2a94c6f3a2245281e86743
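The parameterized reader described in the commit can be sketched as follows. This is an illustrative simplification, not glance's actual LimitingReader, and the two exception classes stand in for StorageQuotaFull and ImageSizeLimitExceeded:

```python
import io

class SizeError(Exception):      # stand-in for ImageSizeLimitExceeded
    pass

class QuotaError(Exception):     # stand-in for StorageQuotaFull
    pass

class LimitingReader:
    """Wrap a file-like object and raise `exception_class` once more
    than `limit` bytes have been read through it."""
    def __init__(self, data, limit, exception_class=SizeError):
        self.data = data
        self.limit = limit
        self.bytes_read = 0
        self.exception_class = exception_class

    def read(self, length=None):
        chunk = self.data.read(length) if length else self.data.read()
        self.bytes_read += len(chunk)
        if self.bytes_read > self.limit:
            raise self.exception_class()
        return chunk

# Nested use: the outer reader enforces the quota, the inner one the
# image size cap, and each failure now raises its own exception type,
# so the controller's except block for the size cap stays reachable.
inner = LimitingReader(io.BytesIO(b"x" * 10), 100, SizeError)
outer = LimitingReader(inner, 5, QuotaError)
```

Reading more than 5 bytes through `outer` now raises the quota exception rather than the size-cap one.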


** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1734832

Title:
  Unreachable 'ImageSizeLimitExceeded' exception block in upload call

Status in Glance:
  Fix Released

Bug description:
  ImageSizeLimitExceeded exception block [1] is unreachable in upload because
  it is caught at [2], which raises a StorageQuotaFull exception instead.

  Further, as it raises StorageQuotaFull, it prints None as the size in
  the glance-api logs.

  Reference glance-api-logs:
  Nov 28 07:04:13 devstack devstack@g-api.service[11453]: ERROR 
glance.api.v2.image_data [None req-17b243db-9b3d-46d9-97f0-05f74bc76e18 admin 
admin] Image exceeds the storage quota: The size of the data None will exceed 
the limit. None bytes remaining.: StorageQuotaFull: The size of the data None 
will exceed the limit. None bytes remaining.

  To fix this, we need to remove the code at [2] where
  ImageSizeLimitExceeded is caught and StorageQuotaFull is raised, so
  that ImageSizeLimitExceeded becomes reachable in the controller [1].

  [1] 
https://github.com/openstack/glance/blob/master/glance/api/v2/image_data.py#L232
  [2] 
https://github.com/openstack/glance/blob/master/glance/quota/__init__.py#L305

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1734832/+subscriptions



[Yahoo-eng-team] [Bug 1775094] [NEW] Lack of documentation for role permutations and possibilities

2018-06-04 Thread Lance Bragstad
Public bug reported:

Keystone supports a number of different use cases with roles. You can do
different things like having one role imply another, role inheritance
between targets, domain-specific roles, and effective roles.
Unfortunately, not much of this is documented. If someone does find
documentation, it's usually sparse and doesn't include much context from
the perspective of a new user.

This usually leads to people expecting certain scenarios to work, and
they don't [0]. This report tracks the work to include:

- a document describing inherited roles
- a document describing domain-specific roles
- a document describing implied roles

[0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
/%23openstack-keystone.2018-06-04.log.html#t2018-06-04T20:40:32
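As one concrete behavior such documents would need to cover, implied roles expand transitively when a token's effective role list is computed. A rough sketch of that expansion (the rule set here is hypothetical, not keystone's actual data model):

```python
def effective_roles(assigned, implies):
    """Expand a set of assigned roles through implied-role rules
    (transitive closure), as keystone does when building a token."""
    effective = set(assigned)
    stack = list(assigned)
    while stack:
        role = stack.pop()
        for implied in implies.get(role, ()):
            if implied not in effective:
                effective.add(implied)
                stack.append(implied)
    return effective

# Hypothetical rules: admin implies member, member implies reader.
rules = {"admin": ["member"], "member": ["reader"]}
# A user assigned only "admin" effectively holds all three roles.
```

It is exactly this kind of non-obvious expansion that the proposed documents should walk a new user through.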

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: documentation

** Tags added: docu

** Tags removed: docu
** Tags added: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1775094

Title:
  Lack of documentation for role permutations and possibilities

Status in OpenStack Identity (keystone):
  New

Bug description:
  Keystone supports a number of different use cases with roles. You can do
  different things like having one role imply another, role inheritance
  between targets, domain-specific roles, and effective roles.
  Unfortunately, not much of this is documented. If someone does find
  documentation, it's usually sparse and doesn't include much context
  from the perspective of a new user.

  This usually leads to people expecting certain scenarios to work, and
  they don't [0]. This report tracks the work to include:

  - a document describing inherited roles
  - a document describing domain-specific roles
  - a document describing implied roles

  [0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone
  /%23openstack-keystone.2018-06-04.log.html#t2018-06-04T20:40:32

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1775094/+subscriptions



[Yahoo-eng-team] [Bug 1775075] Re: EndpointNotFound raised by Pike n-cpu when running alongside Queens n-api

2018-06-04 Thread Matt Riedemann
** Tags added: upgrades volumes

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Changed in: nova/queens
 Assignee: (unassigned) => Lee Yarwood (lyarwood)

** Changed in: nova/queens
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1775075

Title:
  EndpointNotFound raised by Pike n-cpu when running alongside Queens
  n-api

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) queens series:
  In Progress

Bug description:
  Description
  ===
  During a P to Q upgrade, n-cpu processes still running P will be unable to
find the volumev2 endpoint when running alongside Q n-api processes due to the 
following change: 

  Update cinder in RequestContext service catalog
  https://review.openstack.org/#/c/510947/
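The failure comes down to how the endpoint is looked up in the service catalog embedded in the RequestContext: Pike n-cpu asks for a volumev2 entry, while the catalog serialized by a Queens n-api no longer carries one. The mismatch can be sketched as follows (the catalog structure below is simplified and hypothetical, not nova's actual serialization):

```python
def find_endpoint(catalog, service_type, interface="internalURL"):
    """Return the first matching endpoint URL from a service catalog,
    or None when the requested service type is absent."""
    for service in catalog:
        if service["type"] == service_type:
            for ep in service["endpoints"]:
                if interface in ep:
                    return ep[interface]
    return None

# Catalog as serialized by a Queens n-api: volumev3 only.
queens_catalog = [
    {"type": "volumev3",
     "endpoints": [{"internalURL": "http://cinder:8776/v3"}]},
]

# A Pike n-cpu still asks for volumev2 and comes up empty, which it
# surfaces as EndpointNotFound during live migration.
assert find_endpoint(queens_catalog, "volumev2") is None
```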

  This results in failures anytime the P n-cpu process attempts to
  interact with the volume service, for example during LM from the node:

  2018-06-02 00:19:17.683 1 WARNING nova.virt.libvirt.driver 
[req-3712be3d-b883-4fe1-bab0-83ee44bd5bb5 e16a043a84b14e2b8afbdd1b8677259f 
cb92ed750eac463faf8935cb137f1e60 - default default] [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] Error monitoring migration: internalURL 
endpoint for volumev2 service named cinderv2 not found: EndpointNotFound: 
internalURL endpoint for volumev2 service named cinderv2 not found
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] Traceback (most recent call last):
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6817, in 
_live_migration
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] finish_event, disk_paths)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6728, in 
_live_migration_monitor
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] migrate_data)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] function_name, call_dict, binary)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] self.force_reraise()
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] six.reraise(self.type_, self.value, 
self.tb)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] return f(self, context, *args, **kw)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in 
decorated_function
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] kwargs['instance'], e, sys.exc_info())
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] self.force_reraise()
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] six.reraise(self.type_, self.value, 
self.tb)
  2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in 

[Yahoo-eng-team] [Bug 1775075] [NEW] EndpointNotFound raised by Pike n-cpu when running alongside Queens n-api

2018-06-04 Thread Lee Yarwood
Public bug reported:

Description
===
During a P to Q upgrade, n-cpu processes still running P will be unable to find
the volumev2 endpoint when running alongside Q n-api processes due to the 
following change: 

Update cinder in RequestContext service catalog
https://review.openstack.org/#/c/510947/

This results in failures anytime the P n-cpu process attempts to
interact with the volume service, for example during LM from the node:

2018-06-02 00:19:17.683 1 WARNING nova.virt.libvirt.driver 
[req-3712be3d-b883-4fe1-bab0-83ee44bd5bb5 e16a043a84b14e2b8afbdd1b8677259f 
cb92ed750eac463faf8935cb137f1e60 - default default] [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] Error monitoring migration: internalURL 
endpoint for volumev2 service named cinderv2 not found: EndpointNotFound: 
internalURL endpoint for volumev2 service named cinderv2 not found
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] Traceback (most recent call last):
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6817, in 
_live_migration
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] finish_event, disk_paths)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 6728, in 
_live_migration_monitor
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] migrate_data)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in 
wrapped
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] function_name, call_dict, binary)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] self.force_reraise()
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] six.reraise(self.type_, self.value, 
self.tb)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in 
wrapped
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] return f(self, context, *args, **kw)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 218, in 
decorated_function
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] kwargs['instance'], e, sys.exc_info())
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] self.force_reraise()
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] six.reraise(self.type_, self.value, 
self.tb)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 206, in 
decorated_function
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] return function(self, context, *args, 
**kwargs)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 
"/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5684, in 
_post_live_migration
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b] migrate_data)
2018-06-02 00:19:17.683 1 ERROR nova.virt.libvirt.driver [instance: 
1a3800b3-9f75-4999-b726-a1afb0ebdd9b]   File 

[Yahoo-eng-team] [Bug 1775071] [NEW] functional tests string mismatch error in py35

2018-06-04 Thread Brian Rosmaita
Public bug reported:

This happens for me locally when running 'tox -e functional-py35', but
doesn't seem to happen in the gate:

==
Failed 2 tests - output below:
==

glance.tests.functional.test_glance_manage.TestGlanceManage.test_contract
-

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/home/rosmabr/working/repos/glance/glance/tests/utils.py", line 
168, in _runner'
b'func(*args, **kw)'
b'  File "/home/rosmabr/working/repos/glance/glance/tests/utils.py", line 
184, in wrapped'
b'func(*a, **kwargs)'
b'  File 
"/home/rosmabr/working/repos/glance/glance/tests/functional/test_glance_manage.py",
 line 176, in test_contract'
b"self.assertIn('Database is up to date. No migrations needed.', out)"
b'  File 
"/home/rosmabr/working/repos/glance/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 417, in assertIn'
b'self.assertThat(haystack, Contains(needle), message)'
b'  File 
"/home/rosmabr/working/repos/glance/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat'
b'raise mismatch_error'
b"testtools.matchers._impl.MismatchError: 'Database is up to date. No 
migrations needed.' not in b'Database is up to date. No migrations needed.\\n'"
b''

glance.tests.functional.test_glance_manage.TestGlanceManage.test_expand
---

Captured traceback:
~~~
b'Traceback (most recent call last):'
b'  File "/home/rosmabr/working/repos/glance/glance/tests/utils.py", line 
168, in _runner'
b'func(*args, **kw)'
b'  File "/home/rosmabr/working/repos/glance/glance/tests/utils.py", line 
184, in wrapped'
b'func(*a, **kwargs)'
b'  File 
"/home/rosmabr/working/repos/glance/glance/tests/functional/test_glance_manage.py",
 line 137, in test_expand'
b"'No expansion needed.', out)"
b'  File 
"/home/rosmabr/working/repos/glance/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 417, in assertIn'
b'self.assertThat(haystack, Contains(needle), message)'
b'  File 
"/home/rosmabr/working/repos/glance/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat'
b'raise mismatch_error'
b"testtools.matchers._impl.MismatchError: 'Database expansion is up to 
date. No expansion needed.' not in b'Database expansion is up to date. No 
expansion needed.\\n'"
b''

The problematic line looks like this:
  assertIn('expected', actual)
We can convert 'actual' to a string and then the comparison should work on both 
py27 and py35.
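The mismatch is a classic py3 bytes-vs-str problem: the captured output is bytes while the expected value is str, so the membership test can never succeed; decoding the output first makes assertIn behave the same on py27 and py35. A minimal illustration:

```python
out = b'Database is up to date. No migrations needed.\n'
expected = 'Database is up to date. No migrations needed.'

# On py3, testing a str against bytes raises TypeError rather than
# matching; testtools surfaces this as the MismatchError seen above.
try:
    found = expected in out
except TypeError:
    found = False

# Decoding the captured output first restores the expected behavior:
decoded = out.decode('utf-8')
assert expected in decoded
```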

** Affects: glance
 Importance: Undecided
 Assignee: Brian Rosmaita (brian-rosmaita)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1775071

Title:
  functional tests string mismatch error in py35

Status in Glance:
  In Progress

Bug description:
  This happens for me locally when running 'tox -e functional-py35', but
  doesn't seem to happen in the gate:

  ==
  Failed 2 tests - output below:
  ==

  glance.tests.functional.test_glance_manage.TestGlanceManage.test_contract
  -

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File "/home/rosmabr/working/repos/glance/glance/tests/utils.py", line 
168, in _runner'
  b'func(*args, **kw)'
  b'  File "/home/rosmabr/working/repos/glance/glance/tests/utils.py", line 
184, in wrapped'
  b'func(*a, **kwargs)'
  b'  File 
"/home/rosmabr/working/repos/glance/glance/tests/functional/test_glance_manage.py",
 line 176, in test_contract'
  b"self.assertIn('Database is up to date. No migrations needed.', out)"
  b'  File 
"/home/rosmabr/working/repos/glance/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 417, in assertIn'
  b'self.assertThat(haystack, Contains(needle), message)'
  b'  File 
"/home/rosmabr/working/repos/glance/.tox/functional-py35/lib/python3.5/site-packages/testtools/testcase.py",
 line 498, in assertThat'
  b'raise mismatch_error'
  b"testtools.matchers._impl.MismatchError: 'Database is up to date. No 
migrations needed.' not in b'Database is up to date. No migrations needed.\\n'"
  b''

  glance.tests.functional.test_glance_manage.TestGlanceManage.test_expand
  ---

  Captured traceback:
  ~~~
  b'Traceback (most recent call last):'
  b'  File 

[Yahoo-eng-team] [Bug 1775074] [NEW] collect logs: grab /var/lib/cloud data files

2018-06-04 Thread Chad Smith
Public bug reported:

collect-logs does not capture /var/lib/cloud/seed input files when
creating the log tarfile. These files can be helpful in understanding what
the image or datasource is using as input to cloud-init.
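Adding the missing directory to the tarball is straightforward with the standard tarfile module; a sketch of the kind of change (function names and paths here are illustrative, not cloud-init's actual collect-logs code):

```python
import os
import tarfile
import tempfile

def collect_dir_into_tar(tar_path, directory, arcname):
    """Add `directory` (recursively) to a gzipped tarball at `tar_path`,
    skipping it silently when absent, as a log collector should."""
    with tarfile.open(tar_path, "w:gz") as tar:
        if os.path.isdir(directory):
            tar.add(directory, arcname=arcname)

# Illustrative usage with a temp dir standing in for /var/lib/cloud/seed:
seed_dir = tempfile.mkdtemp()
with open(os.path.join(seed_dir, "meta-data"), "w") as f:
    f.write("instance-id: example\n")
tarball = os.path.join(tempfile.mkdtemp(), "cloud-init-logs.tgz")
collect_dir_into_tar(tarball, seed_dir, "var/lib/cloud/seed")
```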

** Affects: cloud-init
 Importance: Medium
 Status: Confirmed

** Changed in: cloud-init
   Status: New => Confirmed

** Changed in: cloud-init
   Importance: Undecided => Medium

** Changed in: cloud-init
Milestone: None => 18.4

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1775074

Title:
  collect logs: grab /var/lib/cloud data files

Status in cloud-init:
  Confirmed

Bug description:
  collect-logs does not capture /var/lib/cloud/seed input files when
  creating the log tarfile. These files can be helpful in understanding
  what the image or datasource is using as input to cloud-init.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1775074/+subscriptions



[Yahoo-eng-team] [Bug 1770712] Re: It would be nice if cloud-init provides full version in logs

2018-06-04 Thread Launchpad Bug Tracker
This bug was fixed in the package cloud-init -
18.2-64-gbbcc5e82-0ubuntu1

---
cloud-init (18.2-64-gbbcc5e82-0ubuntu1) cosmic; urgency=medium

  * debian/rules: update version.version_string to contain packaged version.
(LP: #1770712)
  * New upstream snapshot.
- util: add get_linux_distro function to replace platform.dist
  [Robert Schweikert] (LP: #1745235)
- pyflakes: fix unused variable references identified by pyflakes 2.0.0.
- - Do not use the systemd_prefix macro, not available in this environment
  [Robert Schweikert]
- doc: Add config info to ec2, openstack and cloudstack datasource docs
- Enable SmartOS network metadata to work with netplan via per-subnet
  routes [Dan McDonald] (LP: #1763512)

 -- Chad Smith   Mon, 04 Jun 2018 12:18:16
-0600

** Changed in: cloud-init (Ubuntu Cosmic)
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1770712

Title:
  It would be nice if cloud-init provides full version in logs

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Fix Released
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Artful:
  Confirmed
Status in cloud-init source package in Bionic:
  Confirmed
Status in cloud-init source package in Cosmic:
  Fix Released

Bug description:
  Cloud-init rsyslog has the major version of cloud-init:

  May 11 17:40:51 maas-enlisting-node cloud-init[550]: Cloud-init v.
  18.2 running 'init-local' at Fri, 11 May 2018 17:40:47 +. Up 15.63
  seconds.

  
  However, it would be nice if it places the whole version, so that we can now 
exactly what version of cloud-init its running, e.g:

  May 11 17:40:51 maas-enlisting-node cloud-init[550]: Cloud-init v.
  18.2 (27-g6ef92c98-0ubuntu1~18.04.1) running 'init-local' at Fri, 11
  May 2018 17:40:47 +. Up 15.63 seconds.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1770712/+subscriptions



[Yahoo-eng-team] [Bug 1750843] Re: pysaml2 version in global requirements must be updated to 4.5.0

2018-06-04 Thread Matthew Thode
** Changed in: openstack-requirements
   Status: New => Fix Released

** Changed in: openstack-requirements
 Assignee: (unassigned) => Matthew Thode (prometheanfire)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1750843

Title:
  pysaml2 version in global requirements must be updated to 4.5.0

Status in OpenStack Identity (keystone):
  Fix Committed
Status in OpenStack Global Requirements:
  Fix Released

Bug description:
  As per security vulnerability CVE-2016-10149, XML External Entity
  (XXE) vulnerability in PySAML2 4.4.0 and earlier allows remote
  attackers to read arbitrary files via a crafted SAML XML request or
  response and it has a CVSS v3 Base Score of 7.5.

  The above vulnerability has been fixed in version 4.5.0 as per
  https://github.com/rohe/pysaml2/issues/366. The latest version of
  pysaml2 (https://pypi.python.org/pypi/pysaml2/4.5.0) has this fix.
  However, the global requirements has the version set to < 4.0.3

  
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L230
  pysaml2>=4.0.2,<4.0.3

  
https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L347
  pysaml2===4.0.2

  The version of pysaml2 supported for OpenStack should be updated such
  that OpenStack deployments are not vulnerable to the above mentioned
  CVE.

  pysaml2 is used by OpenStack Keystone for identity Federation. This
  bug in itself is not a security vulnerability but not fixing this bug
  causes OpenStack deployments to be vulnerable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1750843/+subscriptions



[Yahoo-eng-team] [Bug 1774623] Re: Remove unnecessary disclaimer from the login page

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/571702
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=61a79a9b7a2c6421381a36ec4c3c2dc27681626a
Submitter: Zuul
Branch: master

commit 61a79a9b7a2c6421381a36ec4c3c2dc27681626a
Author: vmarkov 
Date:   Fri Jun 1 13:56:56 2018 +0300

Show WEBSSO disclaimer only when it is needed

Horizon can support several auth mechanisms, e.g. Keystone credentials
and OpenID. The user is allowed to choose an auth method, and a
disclaimer is shown. However, it is possible to configure only a single
choice, and in that case the disclaimer is still shown, which is
confusing. The proposed patch fixes the disclaimer display so it only
appears when there is actually a choice to make.

Closes-bug: #1774623
Change-Id: Ib039c6fdf1e4cd21b5ebe426fe2a15355a37353c


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1774623

Title:
  Remove unnecessary disclaimer from the login page

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon has a WEBSSO_CHOICES config option which allows the user to
  select an auth mechanism to log in with, such as Keystone credentials
  or OpenID. The list of valid auth choices is also configurable. If
  this option is enabled, the user gets a list of auth variants and the
  disclaimer "If you are not sure which authentication method to use,
  contact your administrator". The disclaimer is shown even if the list
  has only one possible option, which is confusing.

  Way to reproduce:
  Enable WEBSSO_ENABLED and WEBSSO_CHOICES options in Horizon config and make 
WEBSSO_CHOICES include exactly one element
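The fix amounts to gating the disclaimer on the number of configured choices; a minimal sketch (a hypothetical helper, not Horizon's actual template logic):

```python
def show_websso_disclaimer(websso_enabled, choices):
    """The 'contact your administrator' hint is only useful when the
    user actually has more than one auth method to pick from."""
    return websso_enabled and len(choices) > 1

# Two choices: disclaimer is helpful, so show it.
assert show_websso_disclaimer(
    True,
    [("credentials", "Keystone Credentials"), ("oidc", "OpenID Connect")])
# Exactly one choice (the reported case): no disclaimer.
assert not show_websso_disclaimer(True, [("oidc", "OpenID Connect")])
```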

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1774623/+subscriptions



[Yahoo-eng-team] [Bug 1775030] [NEW] Security Key Pair creation is not allowed with "underscore" character from the Launch Instance menu on Horizon.

2018-06-04 Thread David Hill
Public bug reported:

Using Horizon to instantiate a VM:
Instances -> Launch Instance -> Key Pair -> Create Key Pair
Trying to create a Key Pair name containing an underscore produces a "name
contains bad characters" error - see attached screenshot.
If you instead create the Security Key Pair prior to VM instantiation using
Access & Security -> Create Key Pair, the "underscore" character is accepted.
The "underscore" character should also be accepted in the Security Key Pair
name in the Instances -> Launch Instance -> Key Pair -> Create Key Pair menu.
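The two forms evidently validate the name differently; a sketch of a validator that accepts underscores alongside the usual characters (this regex is illustrative, not the pattern Horizon actually uses):

```python
import re

# Accept a leading word character (letters, digits, underscore),
# then word characters, dots, hyphens, and spaces - mirroring what
# the Access & Security form already allows.
KEYPAIR_NAME_RE = re.compile(r'^[\w][\w.\- ]*$')

def is_valid_keypair_name(name):
    """Return True when `name` contains only permitted characters."""
    return bool(KEYPAIR_NAME_RE.match(name))
```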

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1775030

Title:
  Security Key Pair creation is not allowed with "underscore" character
  from the Launch Instance menu on Horizon.

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Using Horizon to instantiate a VM:
  Instances -> Launch Instance -> Key Pair -> Create Key Pair
  Trying to create a Key Pair name containing an underscore produces a "name
  contains bad characters" error - see attached screenshot.
  If you instead create the Security Key Pair prior to VM instantiation using
  Access & Security -> Create Key Pair, the "underscore" character is accepted.
  The "underscore" character should also be accepted in the Security Key Pair
  name in the Instances -> Launch Instance -> Key Pair -> Create Key Pair menu.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1775030/+subscriptions



[Yahoo-eng-team] [Bug 1770712] Re: It would be nice if cloud-init provides full version in logs

2018-06-04 Thread Scott Moser
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu)
   Importance: Undecided => Medium

** Also affects: cloud-init (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Xenial)
   Importance: Undecided
   Status: New

** Also affects: cloud-init (Ubuntu Cosmic)
   Importance: Medium
   Status: Confirmed

** Also affects: cloud-init (Ubuntu Artful)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu Xenial)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Artful)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Bionic)
   Status: New => Confirmed

** Changed in: cloud-init (Ubuntu Xenial)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Artful)
   Importance: Undecided => Medium

** Changed in: cloud-init (Ubuntu Bionic)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1770712

Title:
  It would be nice if cloud-init provides full version in logs

Status in cloud-init:
  Fix Committed
Status in cloud-init package in Ubuntu:
  Confirmed
Status in cloud-init source package in Xenial:
  Confirmed
Status in cloud-init source package in Artful:
  Confirmed
Status in cloud-init source package in Bionic:
  Confirmed
Status in cloud-init source package in Cosmic:
  Confirmed

Bug description:
  Cloud-init rsyslog has the major version of cloud-init:

  May 11 17:40:51 maas-enlisting-node cloud-init[550]: Cloud-init v.
  18.2 running 'init-local' at Fri, 11 May 2018 17:40:47 +. Up 15.63
  seconds.

  
  However, it would be nice if it logged the whole version, so that we can
  know exactly what version of cloud-init is running, e.g.:

  May 11 17:40:51 maas-enlisting-node cloud-init[550]: Cloud-init v.
  18.2 (27-g6ef92c98-0ubuntu1~18.04.1) running 'init-local' at Fri, 11
  May 2018 17:40:47 +. Up 15.63 seconds.
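
  A sketch of the requested formatting (the packaged-version string is
  taken from the example above; how cloud-init would obtain it from the
  distro package is an assumption here, not cloud-init's actual code):

```python
# Illustrative only: compose the log prefix the reporter asks for.
def version_banner(upstream, packaged=None):
    """Return 'Cloud-init v. X' or 'Cloud-init v. X (full-version)'."""
    if packaged:
        return "Cloud-init v. %s (%s)" % (upstream, packaged)
    return "Cloud-init v. %s" % upstream

print(version_banner("18.2"))
print(version_banner("18.2", "27-g6ef92c98-0ubuntu1~18.04.1"))
```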

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1770712/+subscriptions



[Yahoo-eng-team] [Bug 1774217] Re: docs: vif_plugging_timeout nova config option should mention running neutron rootwrap in daemon mode for performance

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/571579
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=50b3a168e86316ab92ba828c1a00ab963532ad1c
Submitter: Zuul
Branch:master

commit 50b3a168e86316ab92ba828c1a00ab963532ad1c
Author: Matt Riedemann 
Date:   Thu May 31 17:41:27 2018 -0400

Mention neutron-rootwrap-daemon in root_helper_daemon option help

An operators mailing list thread [1] discussing scale issues
where vif plugging times out eventually came to the conclusion
that several operators run rootwrap in the neutron agent in daemon
mode.

This ability is not clear from the configuration option help, however,
so this change adds more details on running in this mode to achieve
scale. Another nice discussion about this can be found here [2].

There is a corresponding nova patch [3] to mention this option
if operators are hitting vif plugging timeouts in nova.

This doesn't change any defaults, it just improves the documentation.

The help text formatting is also updated for easier readability
and maintainability in the source.

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2018-May/015364.html
[2] 
https://cloudblog.switch.ch/2017/08/28/starting-1000-instances-on-switchengines/
[3] https://review.openstack.org/571577/

Change-Id: I8b8a0cfea409c3469ad0ee223c87e0380b3afbad
Closes-Bug: #1774217
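
For reference, daemon mode is enabled via the root_helper_daemon option in
the [agent] section of the neutron agent's config; a typical fragment (the
rootwrap.conf path is the usual default and may differ per deployment):

```ini
# /etc/neutron/neutron.conf (or the agent's own config file)
[agent]
# Fork-per-command rootwrap (the default behaviour):
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
# Long-lived rootwrap daemon, which avoids the per-command sudo+rootwrap
# startup cost and helps at scale:
root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
```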


** Changed in: neutron
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1774217

Title:
  docs: vif_plugging_timeout nova config option should mention running
  neutron rootwrap in daemon mode for performance

Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress

Bug description:
  There are notes from this operators mailing list thread about how the
  neutron agent needed to be configured for rootwrap daemon mode to hit
  scale targets otherwise servers would fail to create due to vif
  plugging timeouts:

  http://lists.openstack.org/pipermail/openstack-
  operators/2018-May/015364.html

  This bug is used to track the documentation updates needed in both
  nova and neutron.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1774217/+subscriptions



[Yahoo-eng-team] [Bug 1771851] Re: Image panel doesn't check 'compute:create' policy

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/569163
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e98d7e6890dc9443b9885b1a3ee69a64ccaafa94
Submitter: Zuul
Branch:master

commit e98d7e6890dc9443b9885b1a3ee69a64ccaafa94
Author: andrewbogott 
Date:   Wed May 16 23:08:41 2018 -0500

Image panel: check instance create policy for 'Launch' button

Closes-Bug: #1771851
Change-Id: If77746cbbe73540218b90c516e5f8802c002854f


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1771851

Title:
  Image panel doesn't check 'compute:create' policy

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The Horizon image panel provides a 'Launch' button to create a server
  from a given image.

  The django code for this button has correct policy checks; the Angular
  code has none.  That means that the 'Launch' button displays even if
  the user is not permitted to launch instances, resulting in a
  frustrating failure much later in the process.

  The button should not display if the user is not permitted to create
  VMs.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1771851/+subscriptions



[Yahoo-eng-team] [Bug 1773945] Re: nova client servers.list crashes with bad marker

2018-06-04 Thread Takashi NATSUME
My guess is as follows:

In nova,
if the record of the marker VM instance exists in the cell
but the 'cell_mapping' in the InstanceMapping of the marker VM instance is null 
(None),
the issue occurs.

https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/compute/instance_list.py#L56-L73

--
def get_marker_record(self, ctx, marker):
    try:
        im = objects.InstanceMapping.get_by_instance_uuid(ctx, marker)
    except exception.InstanceMappingNotFound:
        raise exception.MarkerNotFound(marker=marker)

    elevated = ctx.elevated(read_deleted='yes')
    with context.target_cell(elevated, im.cell_mapping) as cctx:
        try:
            # NOTE(danms): We query this with no columns_to_join()
            # as we're just getting values for the sort keys from
            # it and none of the valid sort keys are on joined
            # columns.
            db_inst = db.instance_get_by_uuid(cctx, marker,
                                              columns_to_join=[])
        except exception.InstanceNotFound:
            raise exception.MarkerNotFound(marker=marker)  # <-- Here
    return db_inst
--

In nova-conductor, between *1 and *2 the record of the VM instance already
exists in the cell, but the 'cell_mapping' in the InstanceMapping of the VM
instance is still null (None).

*1: 
https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/conductor/manager.py#L1172-L1175
*2: 
https://github.com/openstack/nova/blob/f902e0d5d87fb05207e4a7aca73d185775d43df2/nova/conductor/manager.py#L1236-L1239
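
A self-contained sketch of this failure mode (all names below are
stand-ins for illustration, not nova's real objects): when the mapping row
exists but its cell_mapping is still None, targeting the cell cannot find
the instance, so MarkerNotFound is raised even though the instance exists.

```python
class MarkerNotFound(Exception):
    pass

# Stand-ins for nova's InstanceMapping table and per-cell instance tables.
instance_mappings = {"6a91d602": None}  # mapping exists, cell not yet set
cell_instances = {"cell1": {"6a91d602": {"uuid": "6a91d602"}}}

def get_marker_record(marker):
    if marker not in instance_mappings:
        raise MarkerNotFound(marker)
    cell = instance_mappings[marker]      # None during the race window
    db = cell_instances.get(cell, {})     # empty "cell" for None
    if marker not in db:
        raise MarkerNotFound(marker)      # <-- the HTTP 400 the client sees
    return db[marker]

try:
    get_marker_record("6a91d602")
except MarkerNotFound:
    print("marker not found (race window)")

instance_mappings["6a91d602"] = "cell1"   # conductor fills it in later
print(get_marker_record("6a91d602")["uuid"])  # 6a91d602
```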


** Tags added: cells

** Project changed: python-novaclient => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773945

Title:
  nova client servers.list crashes with bad marker

Status in OpenStack Compute (nova):
  New

Bug description:
  We have a python script that called servers.list() on an instance of
  novaclient.v2.client.Client . Sometimes that raises a "BadRequest
  marker not found" exception:

  Our call:

    client = nova_client.Client("2", session=some_session)
    client.servers.list()

  Observed Stacktrace:

    File "/usr/lib/python2.7/site-packages//.py", line 630, in :
  all_servers = self.nova.servers.list()
    File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 854, 
in list
  "servers")
    File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 257, in 
_list
  resp, body = self.api.client.get(url)
    File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 304, 
in get
  return self.request(url, 'GET', **kwargs)
    File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 83, in 
request
  raise exceptions.from_response(resp, body, url, method)
  BadRequest: marker [6a91d602-ab6e-42e0-929e-5ec33df2ddef] not found (HTTP 
400) (Request-ID: req-78827725-801d-4514-8cc8-e4b94f15c191)

  Discussion:

  We have a lot of stacks and we sometimes create multiple stacks at the
  same time. We've noticed that the stacks with the mentioned UUIDs were
  created just before these errors occur. It seems that when a newly-created
  stack appears at a certain location in the server list, its UUID is used
  as a marker, but the code that validates the marker does not recognize
  such stacks.

  Relevant versions:

  - python-novaclient (9.1.0)
  - nova (16.0.0)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1773945/+subscriptions



[Yahoo-eng-team] [Bug 1773945] [NEW] nova client servers.list crashes with bad marker

2018-06-04 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

We have a python script that called servers.list() on an instance of
novaclient.v2.client.Client . Sometimes that raises a "BadRequest marker
not found" exception:

Our call:

  client = nova_client.Client("2", session=some_session)
  client.servers.list()

Observed Stacktrace:

  File "/usr/lib/python2.7/site-packages//.py", line 630, in :
all_servers = self.nova.servers.list()
  File "/usr/lib/python2.7/site-packages/novaclient/v2/servers.py", line 854, 
in list
"servers")
  File "/usr/lib/python2.7/site-packages/novaclient/base.py", line 257, in _list
resp, body = self.api.client.get(url)
  File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 304, 
in get
return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python2.7/site-packages/novaclient/client.py", line 83, in 
request
raise exceptions.from_response(resp, body, url, method)
BadRequest: marker [6a91d602-ab6e-42e0-929e-5ec33df2ddef] not found (HTTP 400) 
(Request-ID: req-78827725-801d-4514-8cc8-e4b94f15c191)

Discussion:

We have a lot of stacks and we sometimes create multiple stacks at the same
time. We've noticed that the stacks with the mentioned UUIDs were created
just before these errors occur. It seems that when a newly-created stack
appears at a certain location in the server list, its UUID is used as a
marker, but the code that validates the marker does not recognize such
stacks.

Relevant versions:

- python-novaclient (9.1.0)
- nova (16.0.0)

** Affects: nova
 Importance: Undecided
 Assignee: Surya Seetharaman (tssurya)
 Status: New


** Tags: cells
-- 
nova client servers.list crashes with bad marker
https://bugs.launchpad.net/bugs/1773945
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).



[Yahoo-eng-team] [Bug 1770326] Re: Can't Update Volume Status on dashboard if cinder volume stuck at reserved status

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/567504
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=4776428cb69c57408745f75bd5c24228e8bdea44
Submitter: Zuul
Branch:master

commit 4776428cb69c57408745f75bd5c24228e8bdea44
Author: Larry GUO 
Date:   Thu May 10 08:09:58 2018 +

Add reserved status key word to horizon

Horizon don't have related change which align with the
change in https://review.openstack.org/#/c/330285.
With this fix, horizon can work as expected

Change-Id: I5940c662a0bec2beaf4863e07f7244311ba51212
Closes-Bug: #1770326
Signed-off-by: GUO Larry 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1770326

Title:
  Can't Update Volume Status on dashboard if cinder volume stuck at
  reserved status

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  I am using Openstack Version Queens with CentOS7
   OpenStack Horizon version: 
  # rpm -qa | grep horizon
  puppet-horizon-12.4.0-1.el7.noarch
  python-django-horizon-13.0.0-1.el7.noarch

  
  Several of my volumes are stuck at reserved status for some reason. I
  tried to click the "Update Volume Status" button on the GUI, but no window
  pops up. Instead, an ERROR message is displayed in the upper-right corner.

  "Danger: An error occurred. Please try again later."

  The horizon.log complains:

  2018-05-10 05:47:34,253 16741 ERROR django.request Internal Server Error: 
/dashboard/admin/volumes/6d6f9816-3f15-4475-bfad-2727768d85e8/update_status
  Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/django/core/handlers/exception.py", 
line 41, in inner
  response = get_response(request)
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
249, in _legacy_get_response
  response = self._get_response(request)
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
187, in _get_response
  response = self.process_exception_by_middleware(e, request)
File "/usr/lib/python2.7/site-packages/django/core/handlers/base.py", line 
185, in _get_response
  response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 52, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 36, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 113, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/horizon/decorators.py", line 84, in 
dec
  return view_func(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
68, in view
  return self.dispatch(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/base.py", line 
88, in dispatch
  return handler(request, *args, **kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py", line 
174, in get
  return self.render_to_response(self.get_context_data())
File 
"/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/volumes/views.py",
 line 233, in get_context_data
  context = super(UpdateStatusView, self).get_context_data(**kwargs)
File "/usr/lib/python2.7/site-packages/horizon/forms/views.py", line 141, 
in get_context_data
  context = super(ModalFormView, self).get_context_data(**kwargs)
File "/usr/lib/python2.7/site-packages/horizon/forms/views.py", line 74, in 
get_context_data
  context = super(ModalFormMixin, self).get_context_data(**kwargs)
File "/usr/lib/python2.7/site-packages/horizon/forms/views.py", line 55, in 
get_context_data
  context = super(ModalBackdropMixin, self).get_context_data(**kwargs)
File "/usr/lib/python2.7/site-packages/django/views/generic/edit.py", line 
93, in get_context_data
  kwargs['form'] = self.get_form()
File "/usr/lib/python2.7/site-packages/horizon/forms/views.py", line 176, 
in get_form
  return form_class(self.request, **self.get_form_kwargs())
File 
"/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/volumes/forms.py",
 line 225, in __init__
  kwargs['initial']['status'] = choices[current_status]
  KeyError: u'reserved'

  
  I can successfully reset-state with openstack CLI command.

  cinder reset-state --state available 
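
  The traceback boils down to a missing-key lookup when seeding the form; a
  minimal sketch (the status table below is an illustrative subset, not
  Horizon's full mapping):

```python
# Illustrative subset of the status -> label table used to seed the form.
choices = {
    "available": "Available",
    "in-use": "In Use",
    "error": "Error",
}

current_status = "reserved"  # status introduced by the linked cinder change

# The form did the equivalent of choices[current_status], which raises
# KeyError for any status it doesn't know about:
#   label = choices[current_status]   # KeyError: 'reserved'

# Defensive variant: fall back to the raw status string.
label = choices.get(current_status, current_status)
print(label)  # reserved
```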

To manage notifications about this bug go to:

[Yahoo-eng-team] [Bug 1772759] Re: Horizon checks wrong policy rule for attach_volume

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/570268
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b9a1c445d9c357967ea2ee7152651131118a135d
Submitter: Zuul
Branch:master

commit b9a1c445d9c357967ea2ee7152651131118a135d
Author: jmoffitt 
Date:   Wed May 23 15:05:47 2018 -0700

Update attach_volume and detach_volume policy rules

The prior commit for this was functional but not quite
correct. The policy rules currently in Horizon for
attach and detach of volumes don't exist in Nova and
are missing from the local copy of nova_policy.json and
from Nova generated policy files. The fix to use the
create instance copy of the rule only worked for attach
and not detach ( https://review.openstack.org/#/c/570071/ )

This commit updates detach as well, and should be correct
going forward based on the Nova policy rules at:

https://git.openstack.org/cgit/openstack/nova/tree/nova/policies/volumes_attachments.py

Change-Id: I07fccd6f12149cd88a049c46aa113dfd2b60bbaa
Closes-bug: 1772759


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1772759

Title:
  Horizon checks wrong policy rule for attach_volume

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The instances table in Horizon checks for policy rule
  "os_compute_api:servers:attach_volume" (see:
  
https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/dashboards/project/instances/tables.py#n895
  ), but this rule doesn't exist in the default policy file (
  
https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/conf/nova_policy.json?h=master
  ) or in Novas own source or tests.

  Instead, the rule "os_compute_api:servers:create:attach_volume" is
  used...

  In the policy file:
  
https://git.openstack.org/cgit/openstack/horizon/tree/openstack_dashboard/conf/nova_policy.json?h=master#n138

  And in Novas unit tests:
  
https://git.openstack.org/cgit/openstack/nova/tree/nova/tests/unit/api/openstack/compute/test_serversV21.py#n5414

  Generating a policy file from Nova using the oslo policy generator (
  https://docs.openstack.org/horizon/latest/contributor/topics/policy.html
  ) has the same results, the output file includes a rule of
  "os_compute_api:servers:create:attach_volume"  but *NOT* of
  "os_compute_api:servers:attach_volume" . The net result is that using
  the default policy file, or an unmodified generated policy file,
  results in the "attach volume" option missing from the Compute
  Instances menu.
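
  A sketch of why the menu entry silently disappears (the dict below is a
  two-entry stand-in for the generated nova policy file, and the lookup is
  a simplification of the dashboard's policy check):

```python
# Stand-in for the rules present in a generated nova policy file.
generated_policy = {
    "os_compute_api:servers:create:attach_volume": "rule:admin_or_owner",
    # note: no "os_compute_api:servers:attach_volume" entry exists
}

def rule_defined(rule):
    """Illustrative: a rule the policy file never defines cannot pass the
    dashboard's check, so the corresponding action is hidden."""
    return rule in generated_policy

print(rule_defined("os_compute_api:servers:attach_volume"))         # False
print(rule_defined("os_compute_api:servers:create:attach_volume"))  # True
```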

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1772759/+subscriptions



[Yahoo-eng-team] [Bug 1773935] Re: Unable to upload image via dashboard

2018-06-04 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/571174
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=b84da5e84ef92e653084faed402bcac6271f1dfc
Submitter: Zuul
Branch:master

commit b84da5e84ef92e653084faed402bcac6271f1dfc
Author: Michal Arbet 
Date:   Wed May 30 10:56:47 2018 +

Fix issue with uploading image to glance on Python3

It was unable to upload image when horizon was running on python3,
there was a problem with closed file in new thread. This commit is
fixing this issue with added if clause whether horizon running on
python2 or python3 and correctly call close_called on file when
running on python3.

Change-Id: Ice178f6269ac527ba62b26d86976b5336987c922
Closes-Bug: #1773935
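
A minimal illustration of the underlying race (simplified; not Horizon's
actual upload path): if the request handler closes the temporary file
before the background upload thread reads it, the thread fails with a
ValueError about a closed file.

```python
import tempfile
import threading

result = {}

def upload_worker(f):
    # In the bug report, glanceclient's chunked reader runs in this thread.
    try:
        result["data"] = f.read()
    except ValueError as exc:  # raised because f was already closed
        result["error"] = str(exc)

tmp = tempfile.TemporaryFile()
tmp.write(b"image-bytes")
tmp.seek(0)
tmp.close()  # the request thread closes the file first...

t = threading.Thread(target=upload_worker, args=(tmp,))
t.start()
t.join()
print("error" in result)  # True: the worker saw a closed file
```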


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1773935

Title:
  Unable to upload image via dashboard

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  It is unable to upload image via Horizon Dashboard.
  Admin -> compute -> images -> Create new Image 

  Here is error log from apache:

   Failed to remove temporary image file /tmp/tmpx0sza19m.upload ([Errno 2] No
such file or directory: '/tmp/tmpx0sza19m.upload')
   Unhandled exception in thread started by .upload at 0x7f651ac7bb70>
   Traceback (most recent call last):
     File "/usr/lib/python3/dist-packages/openstack_dashboard/api/glance.py", line 465, in upload
       return glanceclient(request).images.upload(image.id, data)
     File "/usr/lib/python3/dist-packages/glanceclient/common/utils.py", line 545, in inner
       return RequestIdProxy(wrapped(*args, **kwargs))
     File "/usr/lib/python3/dist-packages/glanceclient/v2/images.py", line 238, in upload
       return (resp, body), resp
     File "/usr/lib/python3/dist-packages/glanceclient/common/http.py", line 318, in put
       class SessionClient(adapter.Adapter, _BaseHTTPClient):
     File "/usr/lib/python3/dist-packages/glanceclient/common/http.py", line 271, in _request
       {'url': conn_url, 'e': e})
     File "/usr/lib/python3/dist-packages/requests/sessions.py", line 508, in request
       resp = self.send(prep, **send_kwargs)
     File "/usr/lib/python3/dist-packages/requests/sessions.py", line 618, in send
       r = adapter.send(request, **kwargs)
     File "/usr/lib/python3/dist-packages/requests/adapters.py", line 460, in send
       for i in request.body:
     File "/usr/lib/python3/dist-packages/glanceclient/common/http.py", line 92, in _chunk_body
       # a file-like object
     File "/usr/lib/python3.5/tempfile.py", line 622, in func_wrapper
       return func(*args, **kwargs)
   ValueError: read of closed file

  I tried to replicate the issue on devstack/queens but it works there (I
  think the issue is python3 related, so it works on devstack because
  devstack runs under py2.7).
  I'm running horizon and also glance on python3, installed from Debian
  packages which now support python3.

  dpkg -l | grep horizon

  ii  python3-django-horizon   3:13.0.1-1 all
  Django module providing web interaction with OpenStack

  dpkg -l | grep glance

  ii  glance   2:16.0.1-2+deb9ut1
  all  OpenStack Image Registry