[Yahoo-eng-team] [Bug 1642689] Re: ceph: volume detach fails with "libvirtError: operation failed: disk vdb not found"

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/401375
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=d079f37f7ccb8da9e74416cedce4f194b66ee1d0
Submitter: Jenkins
Branch: master

commit d079f37f7ccb8da9e74416cedce4f194b66ee1d0
Author: int32bit 
Date:   Wed Nov 23 18:21:16 2016 +0800

Fix wait for detach code to handle 'disk not found error'

Currently, the code does an initial detach from the persistent and
transient domains in one call, then follows up with a call to the
retryable function, which first checks the domain xml before retrying
the transient domain detach. If the transient domain detach (which is
asynchronous) completes after we checked the domain xml, trying to
detach it again raises a libvirtError exception. The retry loop in
`_do_wait_and_retry_detach` doesn't catch it, so the detach is reported
as failed. We should handle the 'disk not found' error from libvirt and
consider the detach complete.

Closes-Bug: #1642689

Change-Id: I131aaf28d2f5d5d964d4045e3d7d62207079cfb0
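
As a rough illustration of the handling the commit describes (a minimal sketch;
the guest helper names below are hypothetical, not nova's exact API):

    import re

    import libvirt
    from oslo_service import loopingcall

    _DISK_NOT_FOUND = re.compile(r'disk \S+ not found')

    def wait_and_retry_detach(guest, dev):
        """Retry the transient-domain detach until the device is gone."""
        def _check():
            try:
                if not guest.has_device(dev):      # hypothetical domain xml check
                    raise loopingcall.LoopingCallDone()
                guest.detach_device(dev)           # hypothetical transient detach
            except libvirt.libvirtError as ex:
                if _DISK_NOT_FOUND.search(str(ex)):
                    # The asynchronous detach already finished: treat this as
                    # success instead of reporting the detach as failed.
                    raise loopingcall.LoopingCallDone()
                raise

        timer = loopingcall.FixedIntervalLoopingCall(_check)
        timer.start(interval=0.5).wait()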


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642689

Title:
  ceph: volume detach fails with "libvirtError: operation failed: disk
  vdb not found"

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  Seeing this failure in this job:

  
tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_snapshot_create_with_volume_in_use[compute
  ,id-b467b54c-07a4-446d-a1cf-651dedcc3ff1]

  While detaching a volume on cleanup it times out waiting for the
  volume status to go from 'in-use' back to 'available':

  2016-11-17 11:58:59.110301 | Captured traceback:
  2016-11-17 11:58:59.110310 | ~~~
  2016-11-17 11:58:59.110322 | Traceback (most recent call last):
  2016-11-17 11:58:59.110342 |   File "tempest/common/waiters.py", line 
189, in wait_for_volume_status
  2016-11-17 11:58:59.110356 | raise lib_exc.TimeoutException(message)
  2016-11-17 11:58:59.110373 | tempest.lib.exceptions.TimeoutException: 
Request timed out
  2016-11-17 11:58:59.110405 | Details: Volume 
db12eda4-4ce6-4f00-a4e0-9df115f230e5 failed to reach available status (current 
in-use) within the required time (196 s).

  The volume detach request is here:

  2016-11-17 11:58:59.031058 | 2016-11-17 11:38:55,018 8316 INFO 
[tempest.lib.common.rest_client] Request 
(VolumesV1SnapshotTestJSON:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2.1/servers/584a65b5-07fa-4994-a2d5-1676d0e13a8c/os-volume_attachments/db12eda4-4ce6-4f00-a4e0-9df115f230e5
 0.277s
  2016-11-17 11:58:59.031103 | 2016-11-17 11:38:55,018 8316 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2016-11-17 11:58:59.031113 | Body: None
  2016-11-17 11:58:59.031235 | Response - Headers: {'content-location': 
'http://127.0.0.1:8774/v2.1/servers/584a65b5-07fa-4994-a2d5-1676d0e13a8c/os-volume_attachments/db12eda4-4ce6-4f00-a4e0-9df115f230e5',
 'content-type': 'application/json', 'x-openstack-nova-api-version': '2.1', 
'date': 'Thu, 17 Nov 2016 11:38:55 GMT', 'content-length': '0', 'status': 
'202', 'connection': 'close', 'x-compute-request-id': 
'req-9f0541d3-6eec-4793-8852-7bd01708932e', 'openstack-api-version': 'compute 
2.1', 'vary': 'X-OpenStack-Nova-API-Version'}
  2016-11-17 11:58:59.031248 | Body: 

  Following the req-9f0541d3-6eec-4793-8852-7bd01708932e request ID to
  the compute logs we see this detach failure:

  http://logs.openstack.org/00/398800/1/gate/gate-tempest-dsvm-full-
  devstack-plugin-ceph-ubuntu-
  xenial/a387fb0/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-17_11_39_00_649

  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager 
[req-9f0541d3-6eec-4793-8852-7bd01708932e 
tempest-VolumesV1SnapshotTestJSON-1819335716 
tempest-VolumesV1SnapshotTestJSON-1819335716] [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] Failed to detach volume 
db12eda4-4ce6-4f00-a4e0-9df115f230e5 from /dev/vdb
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] Traceback (most recent call last):
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 4757, in 
_driver_detach_volume
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] encryption=encryption)
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 

[Yahoo-eng-team] [Bug 1639239] Re: ValueError for Invalid InitiatorConnector in s390

2016-11-29 Thread Ryan Beisner
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1639239

Title:
  ValueError for Invalid InitiatorConnector in s390

Status in Ubuntu Cloud Archive:
  New
Status in OpenStack Compute (nova):
  Fix Released
Status in os-brick:
  Fix Released

Bug description:
  Description
  ===========
  Calling the InitiatorConnector factory results in a ValueError for 
unsupported protocols, which goes unhandled and may crash a calling service.
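
  A minimal sketch of the kind of guard a caller can add around the factory
  (os-brick 1.x-era API; treat the exact factory arguments as an assumption):

      from os_brick.initiator import connector

      def build_connector(protocol, root_helper):
          # InitiatorConnector.factory() raises ValueError for protocols that
          # are unsupported on this platform (e.g. ISER on s390x); catching it
          # lets the caller skip that volume driver instead of crashing the
          # whole service.
          try:
              return connector.InitiatorConnector.factory(protocol, root_helper)
          except ValueError:
              return None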

  Steps to reproduce
  ==================
  - clone devstack
  - make stack

  Expected result
  ===============
  The nova compute service should run.

  Actual result
  =============
  A ValueError is thrown, which, in the case of the nova libvirt driver, is not 
handled appropriately. The compute service crashes.

  Environment
  ===========
  os|distro=kvmibm1
  os|vendor=kvmibm
  os|release=1.1.3-beta4.3
  git|cinder|master[f6ab36d]
  git|devstack|master[928b3cd]
  git|nova|master[56138aa]
  pip|os-brick|1.7.0

  Logs & Configs
  ==============
  [...]
  2016-11-03 17:56:57.204 46141 INFO nova.virt.driver 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Loading compute driver 
'libvirt.LibvirtDriver'
  2016-11-03 17:56:57.442 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISCSI on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.444 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISCSI on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.445 46141 DEBUG os_brick.initiator.connector 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] Factory for ISER on s390x 
factory /usr/lib/python2.7/site-packages/os_brick/initiator/connector.py:261
  2016-11-03 17:56:57.445 46141 CRITICAL nova 
[req-fb30a5af-e87c-4ee0-903c-a5aa7d3ad5e3 - -] ValueError: Invalid 
InitiatorConnector protocol specified ISER
  2016-11-03 17:56:57.445 46141 ERROR nova Traceback (most recent call last):
  2016-11-03 17:56:57.445 46141 ERROR nova   File "/usr/bin/nova-compute", line 
10, in 
  2016-11-03 17:56:57.445 46141 ERROR nova sys.exit(main())
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/cmd/compute.py", line 56, in main
  2016-11-03 17:56:57.445 46141 ERROR nova topic=CONF.compute_topic)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/service.py", line 216, in create
  2016-11-03 17:56:57.445 46141 ERROR nova 
periodic_interval_max=periodic_interval_max)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/service.py", line 91, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/compute/manager.py", line 537, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 1625, in load_compute_driver
  2016-11-03 17:56:57.445 46141 ERROR nova virtapi)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/usr/lib/python2.7/site-packages/oslo_utils/importutils.py", line 44, in 
import_object
  2016-11-03 17:56:57.445 46141 ERROR nova return 
import_class(import_str)(*args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 356, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova self._get_volume_drivers(), 
self._host)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/driver.py", line 44, in driver_dict_from_config
  2016-11-03 17:56:57.445 46141 ERROR nova driver_registry[driver_type] = 
driver_class(*args, **kwargs)
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/opt/stack/nova/nova/virt/libvirt/volume/iser.py", line 34, in __init__
  2016-11-03 17:56:57.445 46141 ERROR nova transport=self._get_transport())
  2016-11-03 17:56:57.445 46141 ERROR nova   File 
"/usr/lib/python2.7/site-packages/os_brick/initiator/connector.py", line 285, 
in factory
  2016-11-03 17:56:57.445 46141 ERROR nova raise ValueError(msg)
  2016-11-03 17:56:57.445 46141 ERROR nova ValueError: Invalid 
InitiatorConnector protocol specified ISER
  2016-11-03 17:56:57.445 46141 ERROR nova

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1639239/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1642689] Re: ceph: volume detach fails with "libvirtError: operation failed: disk vdb not found"

2016-11-29 Thread Matt Riedemann
** Also affects: nova/newton
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642689

Title:
  ceph: volume detach fails with "libvirtError: operation failed: disk
  vdb not found"

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  In Progress

Bug description:
  Seeing this failure in this job:

  
tempest.api.volume.test_volumes_snapshots.VolumesV1SnapshotTestJSON.test_snapshot_create_with_volume_in_use[compute
  ,id-b467b54c-07a4-446d-a1cf-651dedcc3ff1]

  While detaching a volume on cleanup it times out waiting for the
  volume status to go from 'in-use' back to 'available':

  2016-11-17 11:58:59.110301 | Captured traceback:
  2016-11-17 11:58:59.110310 | ~~~
  2016-11-17 11:58:59.110322 | Traceback (most recent call last):
  2016-11-17 11:58:59.110342 |   File "tempest/common/waiters.py", line 
189, in wait_for_volume_status
  2016-11-17 11:58:59.110356 | raise lib_exc.TimeoutException(message)
  2016-11-17 11:58:59.110373 | tempest.lib.exceptions.TimeoutException: 
Request timed out
  2016-11-17 11:58:59.110405 | Details: Volume 
db12eda4-4ce6-4f00-a4e0-9df115f230e5 failed to reach available status (current 
in-use) within the required time (196 s).

  The volume detach request is here:

  2016-11-17 11:58:59.031058 | 2016-11-17 11:38:55,018 8316 INFO 
[tempest.lib.common.rest_client] Request 
(VolumesV1SnapshotTestJSON:_run_cleanups): 202 DELETE 
http://127.0.0.1:8774/v2.1/servers/584a65b5-07fa-4994-a2d5-1676d0e13a8c/os-volume_attachments/db12eda4-4ce6-4f00-a4e0-9df115f230e5
 0.277s
  2016-11-17 11:58:59.031103 | 2016-11-17 11:38:55,018 8316 DEBUG
[tempest.lib.common.rest_client] Request - Headers: {'X-Auth-Token': 
'', 'Accept': 'application/json', 'Content-Type': 'application/json'}
  2016-11-17 11:58:59.031113 | Body: None
  2016-11-17 11:58:59.031235 | Response - Headers: {'content-location': 
'http://127.0.0.1:8774/v2.1/servers/584a65b5-07fa-4994-a2d5-1676d0e13a8c/os-volume_attachments/db12eda4-4ce6-4f00-a4e0-9df115f230e5',
 'content-type': 'application/json', 'x-openstack-nova-api-version': '2.1', 
'date': 'Thu, 17 Nov 2016 11:38:55 GMT', 'content-length': '0', 'status': 
'202', 'connection': 'close', 'x-compute-request-id': 
'req-9f0541d3-6eec-4793-8852-7bd01708932e', 'openstack-api-version': 'compute 
2.1', 'vary': 'X-OpenStack-Nova-API-Version'}
  2016-11-17 11:58:59.031248 | Body: 

  Following the req-9f0541d3-6eec-4793-8852-7bd01708932e request ID to
  the compute logs we see this detach failure:

  http://logs.openstack.org/00/398800/1/gate/gate-tempest-dsvm-full-
  devstack-plugin-ceph-ubuntu-
  xenial/a387fb0/logs/screen-n-cpu.txt.gz?level=TRACE#_2016-11-17_11_39_00_649

  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager 
[req-9f0541d3-6eec-4793-8852-7bd01708932e 
tempest-VolumesV1SnapshotTestJSON-1819335716 
tempest-VolumesV1SnapshotTestJSON-1819335716] [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] Failed to detach volume 
db12eda4-4ce6-4f00-a4e0-9df115f230e5 from /dev/vdb
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] Traceback (most recent call last):
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 4757, in 
_driver_detach_volume
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] encryption=encryption)
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 1307, in detach_volume
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] wait_for_detach()
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 385, 
in func
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] return evt.wait()
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c] return hubs.get_hub().switch()
  2016-11-17 11:39:00.649 2249 ERROR nova.compute.manager [instance: 
584a65b5-07fa-4994-a2d5-1676d0e13a8c]   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 294, in 
switch
  2016-11-17 11:39:00.649 2249 ERROR 

[Yahoo-eng-team] [Bug 1645947] [NEW] cc_growpart fails on ZFS root mount

2016-11-29 Thread Andres Montalban
Public bug reported:

Hey guys,

When / is mounted on ZFS, cc_growpart fails with the following
message:

2016-11-29 21:40:34,761 - handlers.py[DEBUG]: finish: 
init-network/config-growpart: FAIL: running config-growpart with frequency 
always
2016-11-29 21:40:34,761 - handlers.py[DEBUG]: finish: 
init-network/config-growpart: FAIL: running config-growpart with frequency 
always
2016-11-29 21:40:34,762 - util.py[WARNING]: Running module growpart () 
failed
2016-11-29 21:40:34,762 - util.py[WARNING]: Running module growpart () 
failed
2016-11-29 21:40:34,763 - util.py[DEBUG]: Running module growpart () 
failed
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/cloudinit/stages.py", line 792, 
in _run_modules
freq=freq)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/cloud.py", line 70, in 
run
return self._runners.run(name, functor, args, freq, clear_on_fail)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/helpers.py", line 199, 
in run
results = functor(*args)
  File 
"/usr/local/lib/python2.7/site-packages/cloudinit/config/cc_growpart.py", line 
350, in handle
func=resize_devices, args=(resizer, devices))
  File "/usr/local/lib/python2.7/site-packages/cloudinit/util.py", line 2194, 
in log_time
ret = func(*args, **kwargs)
  File 
"/usr/local/lib/python2.7/site-packages/cloudinit/config/cc_growpart.py", line 
270, in resize_devices
blockdev = devent2dev(devent)
  File 
"/usr/local/lib/python2.7/site-packages/cloudinit/config/cc_growpart.py", line 
259, in devent2dev
result = util.get_mount_info(devent)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/util.py", line 2156, 
in get_mount_info
return parse_mount(path)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/util.py", line 2112, 
in parse_mount
devpth = m.group(1)
AttributeError: 'NoneType' object has no attribute 'group'
2016-11-29 21:40:34,763 - util.py[DEBUG]: Running module growpart () 
failed
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/cloudinit/stages.py", line 792, 
in _run_modules
freq=freq)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/cloud.py", line 70, in 
run
return self._runners.run(name, functor, args, freq, clear_on_fail)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/helpers.py", line 199, 
in run
results = functor(*args)
  File 
"/usr/local/lib/python2.7/site-packages/cloudinit/config/cc_growpart.py", line 
350, in handle
func=resize_devices, args=(resizer, devices))
  File "/usr/local/lib/python2.7/site-packages/cloudinit/util.py", line 2194, 
in log_time
ret = func(*args, **kwargs)
  File 
"/usr/local/lib/python2.7/site-packages/cloudinit/config/cc_growpart.py", line 
270, in resize_devices
blockdev = devent2dev(devent)
  File 
"/usr/local/lib/python2.7/site-packages/cloudinit/config/cc_growpart.py", line 
259, in devent2dev
result = util.get_mount_info(devent)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/util.py", line 2156, 
in get_mount_info
return parse_mount(path)
  File "/usr/local/lib/python2.7/site-packages/cloudinit/util.py", line 2112, 
in parse_mount
devpth = m.group(1)
AttributeError: 'NoneType' object has no attribute 'group'
2016-11-29 21:40:34,766 - stages.py[DEBUG]: Running module resizefs () 
with frequency always
2016-11-29 21:40:34,766 - stages.py[DEBUG]: Running module resizefs () 
with frequency always


The output of the 'mount' command on FreeBSD with / on ZFS is the following:

zroot on / (zfs, local, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
zroot/tmp on /tmp (zfs, local, nosuid, nfsv4acls)
zroot/usr on /usr (zfs, local, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, nfsv4acls)
zroot/usr/home/vagrant on /usr/home/vagrant (zfs, local, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, nosuid, nfsv4acls)
zroot/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid, 
nfsv4acls)
zroot/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid, 
nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var on /var (zfs, local, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/db on /var/db (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/db/pkg on /var/db/pkg (zfs, local, nosuid, nfsv4acls)
zroot/var/empty on /var/empty (zfs, local, noexec, nosuid, read-only, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/run on /var/run (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, nosuid, nfsv4acls)

Attached is a patch to fix this; to test the regex you can go to
https://regex101.com/r/nKDjgA/1
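
This is not the attached patch itself, just a minimal sketch (assuming the
FreeBSD lines look exactly like the output above) of a regex that handles both
the Linux "DEV on MNT type FSTYPE (opts)" form and the FreeBSD/ZFS
"DEV on MNT (fstype, opts)" form:

    import re

    # Matches both "DEV on MNT type FSTYPE (opts)" (Linux) and
    # "DEV on MNT (fstype, opts...)" (FreeBSD/ZFS) styles of mount output.
    MOUNT_RE = re.compile(
        r'^(?P<dev>\S+) on (?P<mnt>\S+) '
        r'(?:type (?P<fs1>\S+) |\((?P<fs2>[^,\)]+)[,\)])')

    m = MOUNT_RE.match('zroot on / (zfs, local, nfsv4acls)')
    devpth = m.group('dev')                     # 'zroot'
    mount_point = m.group('mnt')                # '/'
    fs_type = m.group('fs1') or m.group('fs2')  # 'zfs'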

Thanks!

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: freebsd growpart patch zfs

** Patch added: "patch"
   

[Yahoo-eng-team] [Bug 1645810] Re: neutron api update port and agent rpc update port may cause table port deadlock

2016-11-29 Thread huangyunpeng
** Project changed: fuel-plugin-contrail => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645810

Title:
  neutron api update port and agent rpc update port may cause table
  port deadlock

Status in neutron:
  New

Bug description:
  The test scenario is as follows:

  1. The server API updates some attributes of the port, such as the host ID.

  2. The agent then receives an RPC message and its handler updates port
  attributes, such as the status.

  3. At the same time, the server updates the port while the agent RPC update
  is in progress.

  4. This results in a deadlock on the port table.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645810] [NEW] neutron api update port and agent rpc update port may cause table port deadlock

2016-11-29 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

The test scenario is as follows:

1. The server API updates some attributes of the port, such as the host ID.

2. The agent then receives an RPC message and its handler updates port
attributes, such as the status.

3. At the same time, the server updates the port while the agent RPC update
is in progress.

4. This results in a deadlock on the port table.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
neutron api update port and agent rpc update port may cause table port 
deadlock
https://bugs.launchpad.net/bugs/1645810
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1640818] Re: missing samples from the notification dev-ref

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/396272
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=b8897aba72497548934c4c9bb5a107c027bf0674
Submitter: Jenkins
Branch: master

commit b8897aba72497548934c4c9bb5a107c027bf0674
Author: Balazs Gibizer 
Date:   Thu Nov 10 15:52:32 2016 +0100

Fix notification doc generator

The doc generator only picked up the event types related to instance
actions because the doc generator looked up the registered notification
classes only. Until now, only the instance related notifications were
imported in the doc generation environment so the other notification
classes were not registered.

This commit explicitly imports the modules that define the notification
classes to make the doc complete.

Change-Id: I269e05ddb62ec6c6cc7f7922c1344186ccf850d1
Closes-bug: #1640818
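
As a generic sketch of why the explicit imports matter (hypothetical names,
not nova's actual module layout): with a decorator-based registry, classes are
only registered as a side effect of importing the modules that define them, so
a doc generator that never imports those modules sees an empty registry.

    import importlib

    NOTIFICATION_CLASSES = []

    def notification(cls):
        # Runs at import time of the module that defines `cls`.
        NOTIFICATION_CLASSES.append(cls)
        return cls

    def generate_notification_docs(module_names):
        # Explicitly import every module that defines notification payloads so
        # their classes actually reach the registry before rendering the docs.
        for name in module_names:
            importlib.import_module(name)
        return sorted(cls.__name__ for cls in NOTIFICATION_CLASSES)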


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1640818

Title:
  missing samples from the notification dev-ref

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  The non-instance-related notifications (e.g. service.update,
  compute.exception) are missing from the notification devref [1]. It
  seems that the doc generation does not pick up those notifications.

  [1] http://docs.openstack.org/developer/nova/notifications.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1640818/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645620] Re: [FWaaS] update firewall policy with rule to empty returns 500

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/404103
Committed: 
https://git.openstack.org/cgit/openstack/neutron-fwaas/commit/?id=fffd6bcb5a95d5d07852328049dab28b44009acc
Submitter: Jenkins
Branch: master

commit fffd6bcb5a95d5d07852328049dab28b44009acc
Author: Yushiro FURUKAWA 
Date:   Tue Nov 29 18:11:43 2016 +0900

Fix removing rule_association on updating a policy

This commit fixes the removal of rule_association entries when updating a
firewall-policy with an empty firewall_rules list, like this:

{'firewall_policy': {'firewall_rules': []}}

Change-Id: I1e50703a559392ac0e1ab50830ac6b3b0cad7224
Closes-Bug: #1645620


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645620

Title:
  [FWaaS] update firewall policy with rule to empty returns 500

Status in neutron:
  Fix Released

Bug description:
  When a firewall policy includes at least one firewall rule and you try to
  update it with an empty firewall_rules list, the following error occurs
  (see the error log below).

  The update should remove entries from the rule_association table in the
  same way that remove_rules does.

  [How to reproduce]
  curl -s -X PUT -d 
'{"firewall_policy":{"firewall_rules":["50c87d7e-63c1-4911-9bee-d15455073c78"]}}'
 -H "x-auth-token:$TOKEN" 
192.168.122.181:9696/v2.0/fwaas/firewall_policies/86a47474-f2d2-4a89-a4b4-22119fe6e459

  {
"firewall_policy": {
  "description": "",
  "firewall_rules": [
"50c87d7e-63c1-4911-9bee-d15455073c78"
  ],
  "tenant_id": "36bc640624964521b494fd0bd46d2a6e",
  "public": false,
  "id": "86a47474-f2d2-4a89-a4b4-22119fe6e459",
  "project_id": "36bc640624964521b494fd0bd46d2a6e",
  "audited": false,
  "name": "policy1"
}
  }

  curl -s -X PUT -d '{"firewall_policy":{"firewall_rules":[]}}' -H "x
  -auth-token:$TOKEN"
  
192.168.122.181:9696/v2.0/fwaas/firewall_policies/86a47474-f2d2-4a89-a4b4-22119fe6e459

  {
"NeutronError": {
  "message": "Request Failed: internal server error while processing your 
request.",
  "type": "HTTPInternalServerError",
  "detail": ""
}
  }

  [Error log on q-svc.log]
  2016-11-29 16:40:10.868 ERROR neutron.api.v2.resource 
[req-1c906036-e041-43fd-919a-1bd1bc1ebc81 admin 
36bc640624964521b494fd0bd46d2a6e] update failed: No details.
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 612, in update
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 92, in wrapped
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource self.force_reraise()
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource self.force_reraise()
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
  2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 128, in 

[Yahoo-eng-team] [Bug 1645910] [NEW] Trust creation for SSO users fails in assert_user_enabled

2016-11-29 Thread Pooja Ghumre
Public bug reported:

Openstack version: Mitaka
Operation: Heat stack/trust creation for SSO users

For SSO users, keystone trust creation workflow fails while asserting
that the user is enabled.

The assert_user_enabled() function in keystone/identity/core.py fails at the 
below line:
self.resource_api.assert_domain_enabled(user['domain_id'])

Since user['domain_id'] throws a KeyError for federated users, this
function raises an exception. To avoid this failure, we should invoke
assert_domain_enabled() check conditionally only for local users.

Proposing to add a 'is_local' user flag to distinguish between local and
federated users so that we can conditionally assert the user domain and
do other such things.
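
A minimal sketch of the proposed conditional check (hypothetical helper
signature, not keystone's actual code):

    def assert_user_enabled(user, resource_api):
        """Sketch of the proposed guard; not keystone's real signature."""
        domain_id = user.get('domain_id')
        # Federated (SSO) users have no domain_id, so only assert the domain
        # for local users; this avoids the KeyError described above.
        if user.get('is_local', domain_id is not None):
            resource_api.assert_domain_enabled(domain_id)
        if not user.get('enabled', True):
            raise ValueError('user %s is disabled' % user.get('id'))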

** Affects: keystone
 Importance: Undecided
 Status: New

** Project changed: nova => keystone

** Description changed:

- Openstack version: Liberty
+ Openstack version: Mitaka
  Operation: Heat stack/trust creation for SSO users
  
  For SSO users, keystone trust creation workflow fails while asserting
  that the user is enabled.
  
  The assert_user_enabled() function in keystone/identity/core.py fails at the 
below line:
- self.resource_api.assert_domain_enabled(user['domain_id'])
+ self.resource_api.assert_domain_enabled(user['domain_id'])
  
  Since user['domain_id'] throws a KeyError for federated users, this
  function raises an exception. To avoid this failure, we should invoke
  assert_domain_enabled() check conditionally only for local users.
  
  Proposing to add a 'is_local' user flag to distinguish between local and
  federated users so that we can conditionally assert the user domain and
  do other such things.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1645910

Title:
  Trust creation for SSO users fails in assert_user_enabled

Status in OpenStack Identity (keystone):
  New

Bug description:
  Openstack version: Mitaka
  Operation: Heat stack/trust creation for SSO users

  For SSO users, keystone trust creation workflow fails while asserting
  that the user is enabled.

  The assert_user_enabled() function in keystone/identity/core.py fails at the 
below line:
  self.resource_api.assert_domain_enabled(user['domain_id'])

  Since user['domain_id'] throws a KeyError for federated users, this
  function raises an exception. To avoid this failure, we should invoke
  assert_domain_enabled() check conditionally only for local users.

  Proposing to add a 'is_local' user flag to distinguish between local
  and federated users so that we can conditionally assert the user
  domain and do other such things.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645908] [NEW] Domain id reference for federated users fails in keystone middleware

2016-11-29 Thread Pooja Ghumre
Public bug reported:

Version: Keystone Mitaka

Keystone middleware expects the domain id field to be set for a user.
For federated users, the domain id is set to be None and hence causes an
error during autoscaling of a Heat stack created by SSO user.

Had to modify _populate_user() function in
keystone/token/providers/common.py to set a dummy domain id for
federated users as below to fix this issue:

# Fix: domain id for federated users is None, so send dummy value.
# Added is_local user attribute to distinguish local and federated users.
if user_ref.get('is_local'):
    domain = self._get_filtered_domain(user_ref['domain_id'])
else:
    domain = {
        'id': CONF.federation.federated_domain_name,
        'name': CONF.federation.federated_domain_name
    }
# end

Wondering if this is the right way to resolve the domain reference issue
for SSO.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645908

Title:
  Domain id reference for federated users fails in keystone middleware

Status in OpenStack Identity (keystone):
  New

Bug description:
  Version: Keystone Mitaka

  Keystone middleware expects the domain id field to be set for a user.
  For federated users, the domain id is set to be None and hence causes
  an error during autoscaling of a Heat stack created by SSO user.

  Had to modify _populate_user() function in
  keystone/token/providers/common.py to set a dummy domain id for
  federated users as below to fix this issue:

  # Fix: domain id for federated users is None, so send dummy value.
  # Added is_local user attribute to distinguish local and federated users.
  if user_ref.get('is_local'):
      domain = self._get_filtered_domain(user_ref['domain_id'])
  else:
      domain = {
          'id': CONF.federation.federated_domain_name,
          'name': CONF.federation.federated_domain_name
      }
  # end

  Wondering if this is the right way to resolve the domain reference
  issue for SSO.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645908/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645894] [NEW] user with _admin_ role of one project can list and delete private images in other projects

2016-11-29 Thread yuanyuan zhang
Public bug reported:

In Glance v2, an admin of one project can list and delete private images in
other projects.

Assume project1 (admin user: admin1) and project2 (admin user: admin2), with
a private image (image1) created in project1.

Then log into project2 as user admin2.

In Horizon:
In the Project tab, image1 is visible with visibility 'Shared with Project',
but cannot be deleted; the same applies to the Admin tab.

In the CLI:
User admin2 can list and delete image1 (the private image in project1).

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1645894

Title:
  user with _admin_ role of one project can list and delete private
  images in other projects

Status in Glance:
  New

Bug description:
  In Glance v2, an admin of one project can list and delete private images
  in other projects.

  Assume project1 (admin user: admin1) and project2 (admin user: admin2),
  with a private image (image1) created in project1.

  Then log into project2 as user admin2.

  In Horizon:
  In the Project tab, image1 is visible with visibility 'Shared with Project',
  but cannot be deleted; the same applies to the Admin tab.

  In the CLI:
  User admin2 can list and delete image1 (the private image in project1).

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1645894/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-29 Thread Kirill Zaitsev
** No longer affects: python-muranoclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in Karbor:
  Fix Released
Status in kolla:
  Fix Committed
Status in kuryr:
  Fix Released
Status in kuryr-libnetwork:
  Fix Released
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.
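
  For reference, a minimal example of the wrapper (now in
  oslo_utils.uuidutils) versus the stdlib call it replaces:

      import uuid

      from oslo_utils import uuidutils

      # Preferred: the common wrapper, which returns a string directly.
      new_id = uuidutils.generate_uuid()

      # What this bug tracks replacing across projects: the stdlib call,
      # which needs the extra str() wrapping.
      legacy_id = str(uuid.uuid4())

      assert uuidutils.is_uuid_like(new_id) and uuidutils.is_uuid_like(legacy_id)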

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1082248] Re: Use uuidutils instead of uuid.uuid4()

2016-11-29 Thread gordon chung
** No longer affects: gnocchi

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1082248

Title:
  Use uuidutils instead of uuid.uuid4()

Status in Cinder:
  In Progress
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in Karbor:
  Fix Released
Status in kolla:
  Fix Committed
Status in kuryr:
  Fix Released
Status in kuryr-libnetwork:
  Fix Released
Status in Magnum:
  In Progress
Status in Mistral:
  Fix Released
Status in networking-calico:
  In Progress
Status in networking-ovn:
  Fix Released
Status in networking-sfc:
  In Progress
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in python-muranoclient:
  In Progress
Status in Sahara:
  Fix Released
Status in senlin:
  Fix Released
Status in tacker:
  In Progress
Status in watcher:
  Fix Released

Bug description:
  Openstack common has a wrapper for generating uuids.

  We should only use that function when generating uuids for
  consistency.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1082248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645858] [NEW] newer image (create from local file) keep in saving status

2016-11-29 Thread yuanyuan zhang
Public bug reported:

When creating an image from a local file, the Horizon image status stays in
'saving'; the page needs to be refreshed to update the status.
This issue only occurs when creating from a file (regardless of size).
Probably the newer images panel doesn't poll for status.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645858

Title:
  newer image (create from local file) keep in saving status

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating an image from a local file, the Horizon image status stays in
  'saving'; the page needs to be refreshed to update the status.
  This issue only occurs when creating from a file (regardless of size).
  Probably the newer images panel doesn't poll for status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645858/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1632383] Re: The current Horizon settings indicate no valid image creation methods are available

2016-11-29 Thread Rob Cresswell
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1632383

Title:
  The current Horizon settings indicate no valid image creation methods
  are available

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in openstack-ansible:
  Confirmed

Bug description:
  This is an all new install on CentOS 7.

  The error message is:

  "The current Horizon settings indicate no valid image creation methods
  are available. Providing an image location and/or uploading from the
  local file system must be allowed to support image creation."

  This link gives you some additional information about the file the message is 
associated with.
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/images/images/forms.py
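
  As a rough illustration only (the exact setting names and defaults are
  release-dependent, so verify them against the linked forms.py), the two
  creation methods correspond to local_settings.py options along these lines:

      # openstack_dashboard/local/local_settings.py -- sketch; names/values
      # here are assumptions to double-check for your release.
      HORIZON_IMAGES_UPLOAD_MODE = 'legacy'  # 'off' disables local file uploads
      IMAGES_ALLOW_LOCATION = True           # allow supplying an image location (URL)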

  I am mostly new to OpenStack, so any initial help is much appreciated.

  Thanks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1632383/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361235] Re: visit horizon failure because of import module failure

2016-11-29 Thread Rob Cresswell
** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1361235

Title:
  visit horizon failure because of import module failure

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in openstack-ansible:
  Fix Released
Status in osprofiler:
  In Progress
Status in python-mistralclient:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  1. Use TripleO to deploy both the undercloud and the overcloud, and enable
  horizon when building images.
  2. Visiting the horizon portal always fails, with the errors below in
  horizon_error.log:

  [Wed Aug 20 01:45:58.441221 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] mod_wsgi (pid=5035): Exception occurred processing WSGI 
script 
'/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/django.wsgi'.
  [Wed Aug 20 01:45:58.441273 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] Traceback (most recent call last):
  [Wed Aug 20 01:45:58.441294 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/wsgi.py",
 line 187, in __call__
  [Wed Aug 20 01:45:58.449979 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self.load_middleware()
  [Wed Aug 20 01:45:58.45 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/core/handlers/base.py",
 line 44, in load_middleware
  [Wed Aug 20 01:45:58.450556 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] for middleware_path in settings.MIDDLEWARE_CLASSES:
  [Wed Aug 20 01:45:58.450576 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py",
 line 54, in __getattr__
  [Wed Aug 20 01:45:58.454248 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self._setup(name)
  [Wed Aug 20 01:45:58.454269 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init__.py",
 line 49, in _setup
  [Wed Aug 20 01:45:58.454305 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] self._wrapped = Settings(settings_module)
  [Wed Aug 20 01:45:58.454319 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/conf/__init
  __.py", line 128, in __init__
  [Wed Aug 20 01:45:58.454338 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] mod = importlib.import_module(self.SETTINGS_MODULE)
  [Wed Aug 20 01:45:58.454350 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/django/utils/importlib.py",
 line 40, in import_module
  [Wed Aug 20 01:45:58.462806 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] __import__(name)
  [Wed Aug 20 01:45:58.462826 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/settings.py",
 line 28, in 
  [Wed Aug 20 01:45:58.467136 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from openstack_dashboard import exceptions
  [Wed Aug 20 01:45:58.467156 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../openstack_dashboard/exceptions.py",
 line 22, in 
  [Wed Aug 20 01:45:58.467667 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import exceptions as keystoneclient
  [Wed Aug 20 01:45:58.467685 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/__init__.py",
 line 28, in 
  [Wed Aug 20 01:45:58.472968 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import client
  [Wed Aug 20 01:45:58.472989 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 
"/opt/stack/venvs/openstack/lib/python2.7/site-packages/openstack_dashboard/wsgi/../../keystoneclient/client.py",
 line 13, in 
  [Wed Aug 20 01:45:58.473833 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198] from keystoneclient import discover
  [Wed Aug 20 01:45:58.473851 2014] [:error] [pid 5035:tid 3038755648] [remote 
10.74.104.27:54198]   File 

[Yahoo-eng-team] [Bug 1642419] Re: GPU Passthrough isn't working

2016-11-29 Thread Steven Dake
** Changed in: kolla-ansible
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1642419

Title:
  GPU Passthrough isn't working

Status in kolla-ansible:
  Invalid
Status in OpenStack Compute (nova):
  New

Bug description:
  Hi,
  I cannot get IOMMU working on my OpenStack cloud. Every time I launch an
  instance with one GPU I get this error complaining that the host doesn't
  have the requested feature.

  Message
  Exceeded maximum number of retries. Exceeded max scheduling attempts 10 for 
instance 769a2108-9a53-4cf5-9055-82411ce5cafd. Last exception: internal error: 
process exited while connecting to monitor: warning: host doesn't support 
requested feature: CPUID.0
  Code
  500
  Details
  File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/conductor/manager.py",
 line 480, in build_instances filter_properties, instances[0].uuid) File 
"/var/lib/kolla/venv/local/lib/python2.7/site-packages/nova/scheduler/utils.py",
 line 184, in populate_retry raise exception.MaxRetriesExceeded(reason=msg)

  Here is the log from nova-compute:
  http://paste.openstack.org/show/589527/

  And here is a log from nova-scheduler:
  http://paste.openstack.org/show/589528/

  
  I'm running OpenStack Kolla v3.0.1 on an i7 4790k. My GPU is an Nvidia GTX 970
  (which I want to pass through) and my motherboard is a Maximus VI Extreme.

  IOMMU has worked on this setup in the past on Arch Linux.

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1642419/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645484] Re: Not able to update default quotas

2016-11-29 Thread Chetna
I was able to update the security group default count. This bug can be closed.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645484

Title:
  Not able to update default quotas

Status in neutron:
  Invalid

Bug description:
  Description: 
  Trying to update/increase the default quota for number of security_groups 
that can be created.

  Pre-conditions: 
  User- cloud admin
  Project- admin

  Step-by-step reproduction steps:

  neutron $AUTH quota-default-show
  +---------------------+-------+
  | Field               | Value |
  +---------------------+-------+
  | floatingip          | 50    |
  | network             | 10    |
  | port                | 50    |
  | rbac_policy         | 10    |
  | router              | 10    |
  | security_group      | 10    |
  | security_group_rule | 100   |
  | subnet              | 10    |
  | subnetpool          | -1    |
  +---------------------+-------+

  Try to update the default value for security_group for the admin project:
  openstack $AUTH project list
  
  +----------------------------------+-------+
  | ID                               | Name  |
  +----------------------------------+-------+
  | 3f3dccb1ec12494ab9f7024e2ddd32de | admin |
  +----------------------------------+-------+

  neutron $AUTH quota-update --tenant_id 3f3dccb1ec12494ab9f7024e2ddd32de 
security_group 15
  Invalid values_specs 15

  Expected output: Quota should be updated

  Actual output: Invalid values_specs 15

  Version:
 OpenStack version: Newton

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645835] [NEW] dhcp agent doesn't update port cache after database recovery

2016-11-29 Thread Jiafang Zhang
Public bug reported:

The scenario might be:
1) Back up the database.
2) Create a new VM; DHCP allocates an IP to the VM.
3) Destroy the new VM.
4) Restore the database. neutron-server cleans up the VM's port information, 
but the DHCP agent doesn't clean up its port cache, so the DHCP agent keeps the 
released IP occupied. New VMs might fail to obtain an IP.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645835

Title:
  dhcp agent doesn't update port cache after database recovery

Status in neutron:
  New

Bug description:
  The scenario might be:
  1) Back up the database.
  2) Create a new VM; DHCP allocates an IP to the VM.
  3) Destroy the new VM.
  4) Restore the database. neutron-server cleans up the VM's port information, 
but the DHCP agent doesn't clean up its port cache, so the DHCP agent keeps the 
released IP occupied. New VMs might fail to obtain an IP.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645835/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645772] Re: [SRU] newton stable releases

2016-11-29 Thread Corey Bryant
** Also affects: horizon (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: cinder (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: heat (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: ironic (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: neutron-lbaas (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: aodh (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: ironic-inspector (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: mistral (Ubuntu Yakkety)
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/newton
   Importance: Undecided
   Status: New

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645772

Title:
  [SRU] newton stable releases

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive newton series:
  New
Status in aodh package in Ubuntu:
  Invalid
Status in cinder package in Ubuntu:
  Invalid
Status in heat package in Ubuntu:
  Invalid
Status in horizon package in Ubuntu:
  Invalid
Status in ironic package in Ubuntu:
  Invalid
Status in ironic-inspector package in Ubuntu:
  Invalid
Status in mistral package in Ubuntu:
  Invalid
Status in neutron package in Ubuntu:
  Invalid
Status in neutron-lbaas package in Ubuntu:
  Invalid
Status in aodh source package in Yakkety:
  New
Status in cinder source package in Yakkety:
  New
Status in heat source package in Yakkety:
  New
Status in horizon source package in Yakkety:
  New
Status in ironic source package in Yakkety:
  New
Status in ironic-inspector source package in Yakkety:
  New
Status in mistral source package in Yakkety:
  New
Status in neutron source package in Yakkety:
  New
Status in neutron-lbaas source package in Yakkety:
  New

Bug description:
  Stable release updates for the following packages to support OpenStack
  Newton:

  aodh 3.0.1
  cinder 9.1.0
  heat 7.0.1
  horizon 10.0.1
  ironic 6.2.2
  ironic-inspector 4.2.1
  mistral 3.0.2
  neutron 9.1.1
  neutron-lbaas 9.1.0
  nova 14.0.2
  python-ceilometer-client 2.6.2
  python-ceilometer-middleware 0.5.1
  python-django-openstack-auth 2.4.2
  python-heatclient 1.5.0
  python-ironic-lib 2.1.1
  python-keystoneauth1 2.12.2
  python-magnumclient 2.3.1
  python-muranoclient 0.11.1
  python-os-win 1.2.1
  python-oslo.rootwrap 5.1.1
  python-taskflow 2.6.0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1645772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1596507] Re: XenAPI: Support neutron security group

2016-11-29 Thread Maciej Szankin
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1596507

Title:
  XenAPI: Support neutron security group

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  https://review.openstack.org/251271
  Dear bug triager. This bug was created since a commit was marked with 
DOCIMPACT.
  Your project "openstack/nova" is set up so that we directly report the 
documentation bugs against it. If this needs changing, the docimpact-group 
option needs to be added for the project. You can ask the OpenStack infra team 
(#openstack-infra on freenode) for help if you need to.

  commit bebc0a4b2ea915fa214518ea667cd25812cda058
  Author: Huan Xie 
  Date:   Mon Nov 30 09:24:54 2015 +

  XenAPI: Support neutron security group
  
  This implementation adds support for neutron security groups with
  XenServer as the compute driver. When using neutron+openvswitch, the ovs
  agent on the compute node cannot run correctly due to the lack of a qbr
  linux bridge on the compute node. This change adds the qbr linux bridge
  when XenServer is the hypervisor.
  The XenServer driver currently has no linux bridge; the connection is:
  compute node: vm-vif -> br-int -> br-eth
  network node: br-eth -> br-int -> br-ex
  With this implemented, linux bridge(qbr) will be added in compute
  node. Thus the security group rules can be applied on qbr bridge.
  The connection will look like:
  compute node: vm-vif -> qbr(linux bridge) -> br-int -> br-eth
  network node: br-eth -> br-int -> br-ex
  
  Closes-Bug: #1526138
  
  Implements: blueprint support-neutron-security-group
  
  DocImpact: the /etc/modprobe.d/blacklist-bridge file in dom0 should be
  deleted since it prevents loading the Linux bridge module in dom0
  
  Depends-On: I377f8ad51e1d2725c3e0153e64322055fcce7b54
  
  Change-Id: Id9b39aa86558a9f7099caedabd2d517bf8ad3d68

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1596507/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625101] Re: Some cinder failures return "500" errors to users

2016-11-29 Thread Maciej Szankin
Fixed in https://review.openstack.org/#/c/382660/

** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625101

Title:
  Some cinder failures return "500" errors to users

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  When a user tries to delete a volume that is attached to a VM using
  nova, the user gets an "Unexpected Exception" error when the cinder
  extension fails with an InvalidInput exception. This is not
  particularly helpful for troubleshooting, and the message is misleading
  for users.
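
  The linked fix maps this kind of failure onto a client error instead of a
  500; a rough sketch of the pattern (the exception type and handler shape
  here are illustrative, not nova's exact code):

    # Illustrative only: surface a volume-layer InvalidInput failure as a
    # 400 carrying the backend's message, instead of a generic 500.
    import webob.exc

    from nova import exception

    def delete_attached_volume(context, volume_api, volume_id):
        try:
            volume_api.delete(context, volume_id)
        except exception.InvalidInput as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())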

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1625101/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1625462] Re: The value of migration.dest_compute is incorrect after a resize_revert operation completes successfully

2016-11-29 Thread Maciej Szankin
** Changed in: nova
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1625462

Title:
  The value of migration.dest_compute is incorrect after a resize_revert
  operation completes successfully

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  I have two hosts: tecs-controller-node and tecs-OpenStack-Nova.

  1. I performed a migrate action; the instance's state was then:

     ID: ea40a5f7-d92b-4ad6-94c0-a18f2465519e | Name: hanrong1
     Status: VERIFY_RESIZE | Task State: - | Power State: Running
     Networks: public=172.24.4.2, 2001:db8::8

  2. I looked at the migrations table for the instance's source_node and
  dest_node.

  mysql> select * from migrations where
  instance_uuid='ea40a5f7-d92b-4ad6-94c0-a18f2465519e';

     created_at: 2016-09-20 02:38:24 | updated_at: 2016-09-20 02:38:58 | deleted_at: NULL
     id: 1 | status: finished | migration_type: migration | hidden: 0 | deleted: 0
     source_compute: tecs-controller-node | dest_compute: tecs-OpenStack-Nova
     dest_host: 192.168.1.60
     instance_uuid: ea40a5f7-d92b-4ad6-94c0-a18f2465519e
     old_instance_type_id: 6 | new_instance_type_id: 6
     source_node: tecs-controller-node | dest_node: tecs-OpenStack-Nova
     memory_total/processed/remaining: NULL | disk_total/processed/remaining: NULL

  source_compute: tecs-controller-node
  source_node: tecs-controller-node
  dest_compute: tecs-OpenStack-Nova
  dest_node: tecs-OpenStack-Nova

  3. I performed a resize-revert action:
  stack@tecs-controller-node:~$ nova resize-revert hanrong1
  stack@tecs-controller-node:~$ nova list

     ID: ea40a5f7-d92b-4ad6-94c0-a18f2465519e | Name: hanrong1
     Status: REVERT_RESIZE | Task State: resize_reverting | Power State: Running
     Networks: public=172.24.4.2, 2001:db8::8

  stack@tecs-controller-node:~$ nova list

     ID: ea40a5f7-d92b-4ad6-94c0-a18f2465519e | Name: hanrong1
     Status: ACTIVE | Task State: - | Power State: Running
     Networks: public=172.24.4.2, 2001:db8::8

[Yahoo-eng-team] [Bug 1645824] [NEW] NoCloud source doesn't work on FreeBSD

2016-11-29 Thread Andres Montalban
Public bug reported:

Hey guys,

I'm trying to use cloud-init on FreeBSD using a CD to seed metadata, and
ran into some issues:

- The mount option 'sync' is not allowed for the cd9660 filesystem.
- I optimized the list of filesystems that need to be scanned for metadata by
keeping three lists (vfat, iso9660, and label list) and then checking against
them to see which filesystem option needs to be passed to the mount command
(see the sketch below).
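
A minimal sketch of that three-list approach, with device paths and the mount
invocation made up for illustration (this is not the actual patch, which is
attached to this bug):

    # Sketch only: map a candidate device to the -t argument for FreeBSD's
    # mount(8) based on which list it matched. Device paths are examples.
    VFAT_DEVS = ['/dev/msdosfs/config-2']
    ISO9660_DEVS = ['/dev/cd0', '/dev/iso9660/config-2']
    LABEL_DEVS = ['/dev/label/cidata']

    def fs_type_for(device):
        if device in VFAT_DEVS:
            return 'msdosfs'
        if device in ISO9660_DEVS:
            return 'cd9660'
        if device in LABEL_DEVS:
            return 'auto'
        return None

    def mount_cmd(device, mountpoint):
        # Note: no 'sync' option, since cd9660 on FreeBSD rejects it.
        return ['mount', '-o', 'ro', '-t', fs_type_for(device),
                device, mountpoint]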

Additionally, I'm going to push some changes to the FreeBSD cloud-init
package so it can build the latest version. I will open another ticket for
fixing networking on FreeBSD, as it doesn't support sysfs
(/sys/class/net/) by default.

Thanks!

** Affects: cloud-init
 Importance: Undecided
 Status: New


** Tags: freebsd nocloud

** Attachment added: "Patch to fix NoCloud in FreeBSD"
   
https://bugs.launchpad.net/bugs/1645824/+attachment/4784804/+files/fix_nocloud_freebsd.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1645824

Title:
  NoCloud source doesn't work on FreeBSD

Status in cloud-init:
  New

Bug description:
  Hey guys,

  I'm trying to use cloud-init on FreeBSD using CD to seed metadata, the
  thing is that it had some issues:

  - Mount option 'sync' is not allowed for cd9660 filesystem.
  - I optimized the list of filesystems that needed to be scanned for metadata 
by having three lists (vfat, iso9660, and label list) and then checking against 
them to see which filesystem option needs to be passed to mount command.

  Additionally I'm going to push some changes to FreeBSD cloud-init
  package so it can build last version. I will open another ticket for
  fixing networking in FreeBSD as it doesn't support sysfs
  (/sys/class/net/) by default.

  Thanks!

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1645824/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645652] Re: VLAN aware VM - trunk is not deleted when it is in admin-state-up False

2016-11-29 Thread Armando Migliaccio
Allowing trunk deletion while a trunk is disabled would not be difficult
to fix, in that it's a matter of skipping check [1]. That said, this is
a matter of consistency, IMO: since the meaning of a disabled admin
state is to prevent management actions on a trunk, this will include
add/remove subports as well as trunk deletion (which as a result causes
a cascade deletion of subports).

[1]
https://github.com/openstack/neutron/blob/master/neutron/services/trunk/plugin.py#L257
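
For context, the check referenced in [1] is essentially an admin-state guard
on delete; a rough sketch of that shape (simplified names, not neutron's
literal code):

    # Rough sketch of the admin-state guard that [1] points at.
    class TrunkDisabled(Exception):
        def __init__(self, trunk_id):
            super(TrunkDisabled, self).__init__(
                "Trunk %s is currently disabled." % trunk_id)

    def delete_trunk(trunk):
        # Skipping this check would allow deleting a disabled trunk, but
        # the project chose consistency: a disabled trunk rejects all
        # management actions, including deletion.
        if not trunk['admin_state_up']:
            raise TrunkDisabled(trunk['id'])
        # ... proceed with cascade deletion of subports and the trunk itself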

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645652

Title:
  VLAN aware VM - trunk is not deleted when it is in admin-state-up False

Status in neutron:
  Won't Fix

Bug description:
  [stack@undercloud-0 ~]$ openstack network trunk delete trunk1
  Failed to delete trunk with name or ID 'trunk1': Trunk 
3b5c8493-7832-4501-a93f-65c5131512bb is currently disabled.
  Neutron server returns request_ids: 
['req-2ee037a3-6d2f-49fe-babb-cc9a4b2ce9dd']
  1 of 1 trunks failed to delete.

  
  [stack@undercloud-0 ~]$  openstack network trunk set trunk1 --enable
  [stack@undercloud-0 ~]$ 
  [stack@undercloud-0 ~]$ 
  [stack@undercloud-0 ~]$ 
  [stack@undercloud-0 ~]$ openstack network trunk delete trunk1
  [stack@undercloud-0 ~]$ 

  
  I think that the admin state is not relevant when we want to delete the trunk

  
  newton, 
  [root@controller-0 ~]# rpm -qa | grep openstack | grep neutron
  openstack-neutron-lbaas-9.1.0-1.el7ost.noarch
  openstack-neutron-bigswitch-lldp-9.40.0-1.1.el7ost.noarch
  openstack-neutron-metering-agent-9.1.0-6.el7ost.noarch
  openstack-neutron-sriov-nic-agent-9.1.0-6.el7ost.noarch
  openstack-neutron-ml2-9.1.0-6.el7ost.noarch
  openstack-neutron-bigswitch-agent-9.40.0-1.1.el7ost.noarch
  openstack-neutron-openvswitch-9.1.0-6.el7ost.noarch
  openstack-neutron-common-9.1.0-6.el7ost.noarch
  openstack-neutron-9.1.0-6.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645770] [NEW] Calling of volume_api.create_snapshot_force() should pass the parameter of extra image metadata.

2016-11-29 Thread Teng Fei
Public bug reported:

I created an instance from a volume, then created an instance snapshot via the
volume backend. I discovered that the extra image properties are not passed to
the volume API's function.
This parameter should be passed to create_snapshot_force() when creating an
instance snapshot via the volume backend.
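
A rough sketch of what is being asked for; the extra_properties keyword shown
here is an assumption about how the interface could look, not nova's current
create_snapshot_force() signature:

    # Sketch only: thread the snapshot's extra image properties through to
    # the volume API call instead of dropping them.
    def snapshot_volume(volume_api, context, bdm, snapshot_name,
                        extra_properties=None):
        return volume_api.create_snapshot_force(
            context,
            bdm['volume_id'],
            snapshot_name,
            'snapshot of %s' % bdm['volume_id'],
            extra_properties=extra_properties,  # assumed new parameter
        )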

** Affects: nova
 Importance: Undecided
 Assignee: Teng Fei (teng-fei)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Teng Fei (teng-fei)

** Changed in: nova
   Status: New => In Progress

** Summary changed:

- Calling of volume_api.create_snapshot_force should pass the parameter of 
extra image metadata.
+ Calling of volume_api.create_snapshot_force() should pass the parameter of 
extra image metadata.

** Description changed:

- I created an instance from volume, then I created a instance snapshot by 
volume backend. I discovered that the extra image properties is not passed to 
volume api's function.
+ I created an instance from volume, then I created a instance snapshot by 
volume backend. I discovered that the extra image properties is not passed to 
the volume api's function.
  This parameter should be passed to the function of create_snapshot_force(), 
when creating an instance snapshot by volume backend.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1645770

Title:
  Calling of volume_api.create_snapshot_force() should pass the
  parameter of extra image metadata.

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  I created an instance from volume, then I created a instance snapshot by 
volume backend. I discovered that the extra image properties is not passed to 
the volume api's function.
  This parameter should be passed to the function of create_snapshot_force(), 
when creating an instance snapshot by volume backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1645770/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1644263] Re: passlib 1.7.0 deprecates sha512_crypt.encrypt()

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/403514
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=71cde670d5b7f2e9e16d860545d0c36aee115dad
Submitter: Jenkins
Branch:master

commit 71cde670d5b7f2e9e16d860545d0c36aee115dad
Author: Steve Martinelli 
Date:   Mon Nov 28 01:22:08 2016 -0500

Use sha512.hash() instead of .encrypt()

Now that we have switched to passlib 1.7.0, remove the temporary
workaround and use the new function.

Change-Id: Id574221f65d72a763b8205df0891b6e300856230
Depends-On: I6525dc8cf305ae03b81a53ac7fd06bf63d4a6d48
Closes-Bug: 1644263


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1644263

Title:
  passlib 1.7.0 deprecates sha512_crypt.encrypt()

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  tests are failing due to a new deprecation warning:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_backend_sql.py", line 59, in setUp
  self.load_fixtures(default_fixtures)
File "keystone/tests/unit/core.py", line 754, in load_fixtures
  user_copy = self.identity_api.create_user(user_copy)
File "keystone/common/manager.py", line 123, in wrapped
  __ret_val = __f(*args, **kwargs)
File "keystone/identity/core.py", line 410, in wrapper
  return f(self, *args, **kwargs)
File "keystone/identity/core.py", line 420, in wrapper
  return f(self, *args, **kwargs)
File "keystone/identity/core.py", line 925, in create_user
  ref = driver.create_user(user['id'], user)
File "keystone/common/sql/core.py", line 429, in wrapper
  return method(*args, **kwargs)
File "keystone/identity/backends/sql.py", line 121, in create_user
  user = utils.hash_user_password(user)
File "keystone/common/utils.py", line 129, in hash_user_password
  return dict(user, password=hash_password(password))
File "keystone/common/utils.py", line 136, in hash_password
  password_utf8, rounds=CONF.crypt_strength)
File 
"/var/lib/jenkins/workspace/openstack_gerrit/keystone/.tox/sqla_py27/lib/python2.7/site-packages/passlib/utils/decor.py",
 line 190, in wrapper
  warn(msg % tmp, DeprecationWarning, stacklevel=2)
  DeprecationWarning: the method 
passlib.handlers.sha2_crypt.sha512_crypt.encrypt() is deprecated as of Passlib 
1.7, and will be removed in Passlib 2.0, use .hash() instead.
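
  For reference, the API change amounts to the following (a minimal
  standalone example assuming passlib >= 1.7, not keystone's code):

    # Minimal illustration of the passlib 1.7 API change.
    from passlib.hash import sha512_crypt

    password = "correct horse battery staple"

    # passlib < 1.7 style, now deprecated and removed in 2.0:
    #   hashed = sha512_crypt.encrypt(password, rounds=10000)

    # passlib >= 1.7 style:
    hashed = sha512_crypt.using(rounds=10000).hash(password)

    # Verification is unchanged:
    assert sha512_crypt.verify(password, hashed)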

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1644263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645767] [NEW] nova evacuate fails when encrypted volume is connected to instance

2016-11-29 Thread Dawid Deja
Public bug reported:

Description
===
If an instance has an encrypted volume attached and mounted at the time of the
host failure, 'nova evacuate' results in the instance entering the error state.

Steps to reproduce
==
1. Boot a VM
a) nova boot --image 2c60a713-bbba-4696-adff-c80a12cab7d8 --flavor 42 --nic 
net-id=eb6c7f6b-100f-41bb-9c5d-11975a2cdba6 --availability-zone nova:compute1 
test
b) nova floating-ip-associate test 192.168.57.50

2. Create an encrypted volume
a) cinder type-create LUKS
b) cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 \  
   
  --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
c) openstack volume create --size 1 --type 58931c06-1431-45cf-9b4c-4e998594447d 
test-volume

3. Attach the volume to instance
a) openstack server add volume test d06b7b7c-641d-4086-a3b1-4eee898b5c2a

4. SSH to instance, create file system on volume and mount it
a) ssh cirros@192.168.57.50
b) sudo mkfs.ext3 /dev/vdb
c) sudo mount /dev/vdb /mnt
d) sudo touch /mnt/test

5. Kill the compute node (In my case it was powering of the compute1)

6. Try to evacuate instance using `nova evacuate`

Expected result
===
Instance rebuilt on another host, with the volume attached to it.

Actual result
=
Instance entered the error state. Output from `nova show test`:

Unexpected error while running command.   
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf cryptsetup luksOpen 
--key-file=- /dev/sdc 
crypt-ip-192.168.57.2:3260-iscsi-iqn.2010-10.org.openstack:volume-d06b7b7c-641d-4086-a3b1-4eee898b5c2a-lun-1
 
 Exit code: 2", "code": 500, "details": "  File 
\"/opt/stack/nova/nova/compute/manager.py\", line 204, in decorated_function

 return function(self, context, *args, **kwargs)

   File \"/opt/stack/nova/nova/compute/manager.py\", line 2703, in 
rebuild_instance
 bdms, recreate, on_shared_storage, preserve_ephemeral) 
   
   File \"/opt/stack/nova/nova/compute/manager.py\", line 2747, in 
_do_rebuild_instance_with_claim
 self._do_rebuild_instance(*args, **kwargs) 
   
   File \"/opt/stack/nova/nova/compute/manager.py\", line 2862, in 
_do_rebuild_instance
 self._rebuild_default_impl(**kwargs)   
 
   File \"/opt/stack/nova/nova/compute/manager.py\", line 2626, in 
_rebuild_default_impl
 block_device_info=new_block_device_info)   
 
   File \"/opt/stack/nova/nova/virt/libvirt/driver.py\", line 2622, in spawn

 post_xml_callback=gen_confdrive)
   File \"/opt/stack/nova/nova/virt/libvirt/driver.py\", line 4845, in 
_create_domain_and_network
 encryptor.attach_volume(context, **encryption) 
   
   File \"/opt/stack/nova/nova/volume/encryptors/luks.py\", line 102, in 
attach_volume
 self._open_volume(passphrase, **kwargs)

   File \"/opt/stack/nova/nova/volume/encryptors/luks.py\", line 86, in 
_open_volume
 run_as_root=True, check_exit_code=True)

   File \"/opt/stack/nova/nova/utils.py\", line 295, in execute 
   
 return RootwrapProcessHelper().execute(*cmd, **kwargs) 
   
   File \"/opt/stack/nova/nova/utils.py\", line 178, in execute 
   
 return processutils.execute(*cmd, **kwargs)

   File 
\"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/processutils.py\", 
line 394, in execute
 cmd=sanitized_cmd)  

Environment
===

* Multinode devstack on VMs running Ubuntu 16.04
* Networking: Neutron with OVS
* Cinder backend - LVM

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1645767

Title:
  nova evacuate fails when encrypted volume is connected to instance

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  If instance has encrypted volume connected to it and mounted during the host 
failure, 'nova evacuate' would result in instance entering the error state.

  Steps to reproduce
  ==
  1. Boot a VM
  a) nova boot --image 

[Yahoo-eng-team] [Bug 1644862] Re: domain ldap tls_cacertfile "forgotten" in multidomain configuration

2016-11-29 Thread Lance Bragstad
Are you able to recreate this using Newton or master?

** Also affects: keystone/mitaka
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1644862

Title:
  domain ldap tls_cacertfile "forgotten" in multidomain configuration

Status in OpenStack Identity (keystone):
  New
Status in OpenStack Identity (keystone) mitaka series:
  New

Bug description:
  Environment:
  Centos 7 using the OpenStack Mitaka release

  RPMS from:
  http://mirror.centos.org/centos/7/cloud/$basearch/openstack-mitaka/

  openstack-keystone-9.2.0-1.el7.noarch

  —

  I have a multidomain configuration with multiple AD backends in
  keystone.

  For one of the AD configurations I've configured a custom
  tls_cacertfile as follows:

  «
  [identity]
  driver = ldap

  [assignment]
  driver = ldap

  [ldap]
  url  = ldap://dc1.domain1.ca ldap://dc1.domain1.ca
  use_tls  = true
  …
  »

  For the other:

  «
  [identity]
  driver = ldap

  [assignment]
  driver = ldap

  [ldap]
  url  = ldap://dc1.domain2.ca ldap://dc2.domain2.ca
  query_scope  = sub
  use_tls  = true
  tls_cacertfile   = /etc/keystone/domains/domain2-caroot.pem
  …
  »

  What I've observed is when logging in to domain2 I will get very
  inconsistent behaviour:

  * sometimes fails: "Unable to retrieve authorized projects."
  * sometimes fails: "An error occurred authenticating. Please try again later."
  * sometimes fails: "Unable to authenticate to any available projects."
  * sometimes fails: "Invalid credentials."
  * sometimes succeeds

  Example traceback from keystone log:
  «
  2016-11-25 09:54:06.699 27879 INFO keystone.common.wsgi 
[req-c145506b-69fc-4fc2-9bad-76d77a79e3ca - - - - -] POST 
http://os-controller.lab.netdirect.ca:5000/v3/auth/tokens
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi 
[req-c145506b-69fc-4fc2-9bad-76d77a79e3ca - - - - -] {'info': "TLS error 
-8179:Peer's Certificate issuer is not recognized.", 'desc': 'Connect error'}
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  …
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/ldappool/__init__.py", line 224, in 
_create_connector
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi raise 
BackendError(str(exc), backend=conn)
  2016-11-25 09:54:07.147 27879 ERROR keystone.common.wsgi BackendError: 
{'info': "TLS error -8179:Peer's Certificate issuer is not recognized.", 
'desc': 'Connect error'}
  »

  I've also tried putting a merged tls_cacertfile containing the system
  default ca roots and the domain2-specific ca root. That felt like it
  improved but did not fix the problem.

  The workaround is putting the merged cacertfile into BOTH domain
  configurations, which should not be necessary. After doing so I
  haven't had any trouble.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1644862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1608980] Re: Remove MANIFEST.in as it is not explicitly needed by PBR

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/403987
Committed: 
https://git.openstack.org/cgit/openstack/keystonemiddleware/commit/?id=77a7828fc1c07b121c1e24bc087c9f7789900446
Submitter: Jenkins
Branch:master

commit 77a7828fc1c07b121c1e24bc087c9f7789900446
Author: Spencer Yu 
Date:   Mon Nov 28 18:46:29 2016 -0800

Drop MANIFEST.in - it's not needed by pbr

Keystonemiddleware already uses PBR:-
setuptools.setup(
 setup_requires=['pbr>=1.8'],
 pbr=True)

This patch removes `MANIFEST.in` file as pbr generates a
sensible manifest from git files and some standard files
and it removes the need for an explicit `MANIFEST.in` file.

Change-Id: I9886df7fc8cfe3d35795f475ddc20f4006521694
Closes-Bug: #1608980


** Changed in: keystonemiddleware
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1608980

Title:
  Remove MANIFEST.in as it is not explicitly needed by PBR

Status in anvil:
  Invalid
Status in craton:
  Fix Released
Status in DragonFlow:
  Fix Released
Status in ec2-api:
  Fix Released
Status in gce-api:
  Fix Released
Status in Karbor:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystoneauth:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Kosmos:
  Fix Released
Status in Magnum:
  Fix Released
Status in masakari:
  Fix Released
Status in neutron:
  Fix Released
Status in Neutron LBaaS Dashboard:
  Confirmed
Status in octavia:
  Fix Released
Status in os-vif:
  In Progress
Status in python-searchlightclient:
  In Progress
Status in OpenStack Search (Searchlight):
  Fix Released
Status in Solum:
  Fix Released
Status in Swift Authentication:
  In Progress
Status in OpenStack Object Storage (swift):
  In Progress
Status in Tricircle:
  Fix Released
Status in OpenStack DBaaS (Trove):
  Fix Released
Status in watcher:
  Fix Released
Status in Zun:
  Fix Released

Bug description:
  PBR does not explicitly require MANIFEST.in, so it can be removed.

  
  Snippet from: http://docs.openstack.org/infra/manual/developers.html

  Manifest

  Just like AUTHORS and ChangeLog, why keep a list of files you wish to
  include when you can find many of these in git. MANIFEST.in generation
  ensures almost all files stored in git, with the exception of
  .gitignore, .gitreview and .pyc files, are automatically included in
  your distribution. In addition, the generated AUTHORS and ChangeLog
  files are also included. In many cases, this removes the need for an
  explicit ‘MANIFEST.in’ file

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1608980/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1633535] Re: Cinder fails to attach second volume to Nova VM

2016-11-29 Thread Andrey Pavlov
** Changed in: ec2-api
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1633535

Title:
  Cinder fails to attach second volume to Nova VM

Status in Cinder:
  In Progress
Status in ec2-api:
  Fix Released
Status in Manila:
  Confirmed
Status in OpenStack Compute (nova):
  New
Status in tempest:
  In Progress

Bug description:
  Cinder fails to attach a second volume to a Nova VM. The second volume gets
"in-use" status but does not have any attachments. Also, such a volume cannot
be detached from the VM [4]. Test gerrit change [2] proves that the commit to
Cinder [3] is the cause of the bug.
  The bug was also reproduced even before [3] merged, in the
"gate-rally-dsvm-cinder" CI job [4], but I assume no one paid attention
to this.

  Local testing shows that if the bug appears, the volume never gets
  attached and the list of attachments stays empty. Waiting between the
  'create' (wait until 'available' status) and 'attach' commands does
  not help at all.

  How to reproduce:
  1) Create VM
  2) Create Volume
  3) Attach volume (2) to the VM (1)
  4) Create second volume
  5) Try attach second volume (4) to VM (1) - it will fail.

  [Tempest] Also, the fact that the Cinder gates passed with [3] means that
  tempest does not have a test that attaches more than one volume to one
  Nova VM. That is also a tempest bug that should be addressed.

  [Manila] Within the Manila project, one of its drivers is broken - the
  Generic driver, which uses Cinder as a backend.

  [1] http://logs.openstack.org/64/386364/1/check/gate-manila-tempest-
  dsvm-postgres-generic-singlebackend-ubuntu-xenial-
  nv/eef11b0/logs/screen-m-shr.txt.gz?level=TRACE#_2016-10-14_15_15_19_898

  [2] https://review.openstack.org/387915

  [3]
  
https://github.com/openstack/cinder/commit/6f174b412696bfa6262a5bea3ac42f45efbbe2ce
  ( https://review.openstack.org/385122 )

  [4] http://logs.openstack.org/22/385122/1/check/gate-rally-dsvm-
  cinder/b0332e2/rally-
  plot/results.html.gz#/CinderVolumes.create_snapshot_and_attach_volume/failures

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1633535/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645721] [NEW] Gettext inside settings

2016-11-29 Thread Radomir Dopieralski
Public bug reported:

We are using gettext inside the local_settings.py file, and I think we
shouldn't be doing that.

From what I can see, there are basically two use cases:


Default values for labels
-

Those should probably be set inside settings.py, and local_settings.py
should only give an option to override them into non-translated strings.
We can't translate user-provided strings anyways, because we don't know
them up front.

Example values in commented code


Those should probably not be translated -- the user will translate them when
setting them to their own values anyway. They are not translated today anyway,
as they are commented out, and so the translation
system doesn't pick them up.
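
A minimal sketch of that split (the setting name here is made up for
illustration; it is not an actual Horizon setting):

    # settings.py -- translatable default, picked up by the message catalogs.
    from django.utils.translation import ugettext_lazy as _

    DEFAULT_PANEL_LABEL = _("Instances")

    # local_settings.py -- operator override; a plain, untranslated string
    # is fine here, since user-provided values cannot be translated anyway.
    DEFAULT_PANEL_LABEL = "Compute instances (region A)"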

** Affects: horizon
 Importance: Undecided
 Assignee: Radomir Dopieralski (thesheep)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Radomir Dopieralski (thesheep)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645721

Title:
  Gettext inside settings

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We are using gettext inside the local_settings.py file, and I think we
  shouldn't be doing that.

  From what I can see, there are basically two use cases:

  
  Default values for labels
  -

  Those should probably be set inside settings.py, and local_settings.py
  should only give an option to override them into non-translated
  strings. We can't translate user-provided strings anyways, because we
  don't know them up front.

  Example values in commented code
  

  Those should probably not be translated -- the user will translate them when 
setting to their own values anyways. They are not translated anyways, as they 
are commented out, and so the translation
  system doesn't pick them up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645721/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645716] [NEW] Migrating HA routers to Legacy doesn't update interface's device_owner

2016-11-29 Thread John Schwarz
Public bug reported:

Patch I322c392529c04aca2448fd957a35f4908b323449 added a new device_owner
for HA interfaces between a router and an internal subnet, which is used
to differentiate it from normal, non-HA interfaces. However, when
migrating a router from HA to legacy, the device_owner isn't switched
back to its non-HA counterpart. This can cause migration of the router
to DVR to not work properly as the snat interface isn't created.

A log and reproducible can be found in [1].

[1]: http://paste.openstack.org/show/590804/
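
A rough sketch of the kind of cleanup the HA-to-legacy path seems to be
missing; the device_owner values and plugin calls below are assumptions for
illustration, not the eventual neutron fix:

    # Sketch only: flip HA replicated interfaces back to plain router
    # interfaces when a router is migrated from HA to legacy.
    HA_DEV_OWNER = 'network:ha_router_replicated_interface'  # assumed value
    LEGACY_DEV_OWNER = 'network:router_interface'

    def reset_interface_device_owner(context, core_plugin, router_id):
        ports = core_plugin.get_ports(
            context,
            filters={'device_id': [router_id],
                     'device_owner': [HA_DEV_OWNER]})
        for port in ports:
            core_plugin.update_port(
                context, port['id'],
                {'port': {'device_owner': LEGACY_DEV_OWNER}})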

** Affects: neutron
 Importance: High
 Assignee: John Schwarz (jschwarz)
 Status: Confirmed


** Tags: l3-ha

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645716

Title:
  Migrating HA routers to Legacy doesn't update interface's device_owner

Status in neutron:
  Confirmed

Bug description:
  Patch I322c392529c04aca2448fd957a35f4908b323449 added a new
  device_owner for HA interfaces between a router and an internal
  subnet, which is used to differentiate it from normal, non-HA
  interfaces. However, when migrating a router from HA to legacy, the
  device_owner isn't switched back to its non-HA counterpart. This can
  cause migration of the router to DVR to not work properly as the snat
  interface isn't created.

  A log and reproducible can be found in [1].

  [1]: http://paste.openstack.org/show/590804/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645716/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645708] [NEW] Can't create Port as Admin on an unshared Network in another project

2016-11-29 Thread Rob Cresswell
Public bug reported:

https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py#L675
alters the network[subnets] value to be a list of Subnet objects instead
of a list of unicode strings. Since the calling code has no idea what it
will get back (thanks Python), it breaks in strange ways.

Specifically, the Create Port form expects a list of Subnet objects, not
a list of strings and so falls about laughing
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/networks/ports/forms.py#L150

The easiest way to recreate this is a standard devstack, log in as
Admin, try to create a Port on the default Private network.
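
  Until the API helper and the form agree on a type, one defensive option is
  to normalize whatever comes back before building the form choices; a sketch
  only (the helper name and attributes are assumptions, not Horizon's code):

    # Sketch only: accept either Subnet-like objects or bare subnet ids.
    def subnet_choices(subnets):
        choices = []
        for subnet in subnets:
            if isinstance(subnet, str):
                # Only the id string came back.
                choices.append((subnet, subnet))
            else:
                # A Subnet-like object with id/cidr attributes (assumed).
                label = '%s %s' % (subnet.id, getattr(subnet, 'cidr', ''))
                choices.append((subnet.id, label.strip()))
        return choices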

** Affects: horizon
 Importance: High
 Assignee: Rob Cresswell (robcresswell)
 Status: New


** Tags: mitaka-backport-potential newton-backport-potential

** Description changed:

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py#L675
  alters the network[subnets] value to be a list of Subnet objects instead
  of a list of unicode strings. Since the calling code has no idea what it
  will get back (thanks Python), it breaks in strange ways.
  
  Specifically, the Create Port form expects a list of Subnet objects, not
- a list of strings and so falls about laughing.
+ a list of strings and so falls about laughing
+ 
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/networks/ports/forms.py#L150

** Description changed:

  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py#L675
  alters the network[subnets] value to be a list of Subnet objects instead
  of a list of unicode strings. Since the calling code has no idea what it
  will get back (thanks Python), it breaks in strange ways.
  
  Specifically, the Create Port form expects a list of Subnet objects, not
  a list of strings and so falls about laughing
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/networks/ports/forms.py#L150
+ 
+ The easiest way to recreate this is a standard devstack, log in as
+ Admin, try to create a Port on the default Private network.

** Changed in: horizon
 Assignee: (unassigned) => Rob Cresswell (robcresswell)

** Tags added: mitaka-backport-potential newton-backport-potential

** Changed in: horizon
   Importance: Undecided => High

** Changed in: horizon
Milestone: None => ocata-2

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645708

Title:
  Can't create Port as Admin on an unshared Network in another project

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/api/neutron.py#L675
  alters the network[subnets] value to be a list of Subnet objects
  instead of a list of unicode strings. Since the calling code has no
  idea what it will get back (thanks Python), it breaks in strange ways.

  Specifically, the Create Port form expects a list of Subnet objects,
  not a list of strings and so falls about laughing
  
https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/admin/networks/ports/forms.py#L150

  The easiest way to recreate this is a standard devstack, log in as
  Admin, try to create a Port on the default Private network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645708/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645607] Re: keystone-manage mapping_populate fails and gives unhandled exception

2016-11-29 Thread Boris Bobrov
Oh right, this one is indeed invalid.

** Changed in: keystone
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645607

Title:
  keystone-manage mapping_populate fails and gives unhandled exception

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Running keystone-manage mapping_populate --domain-name 
  displays the ID of the domain but throws an unhandled exception. This
  is visible in keystone.log

  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2016-11-18 09:48:27.730 7685 ERROR keystone sys.exit(main())
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 43, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1260, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone CONF.command.cmd_class.main()
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1211, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone 
cls.identity_api.list_users(domain_scope=domain_id)
  2016-11-18 09:48:27.730 7685 ERROR keystone NameError: global name 
'domain_id' is not defined
  2016-11-18 09:48:27.730 7685 ERROR keystone

  It seems to me that the variable domain_id hasn't been properly scoped
  which is causing this error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645607] Re: keystone-manage mapping_populate fails and gives unhandled exception

2016-11-29 Thread Boris Bobrov
Actually I can confirm it.

** Changed in: keystone
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645607

Title:
  keystone-manage mapping_populate fails and gives unhandled exception

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Running keystone-manage mapping_populate --domain-name 
  displays the ID of the domain but throws an unhandled exception. This
  is visible in keystone.log

  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2016-11-18 09:48:27.730 7685 ERROR keystone sys.exit(main())
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 43, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1260, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone CONF.command.cmd_class.main()
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1211, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone 
cls.identity_api.list_users(domain_scope=domain_id)
  2016-11-18 09:48:27.730 7685 ERROR keystone NameError: global name 
'domain_id' is not defined
  2016-11-18 09:48:27.730 7685 ERROR keystone

  It seems to me that the variable domain_id hasn't been properly scoped
  which is causing this error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1641713] Re: There is no api-ref for 2.24 and aborting a live migration

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/397407
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=e975603b881d357853bb4c5288d30fcfce7ad751
Submitter: Jenkins
Branch:master

commit e975603b881d357853bb4c5288d30fcfce7ad751
Author: Matt Riedemann 
Date:   Mon Nov 14 18:20:59 2016 -0500

api-ref: body verification for abort live migration

This completes the DELETE method body verification
for server migrations.

This was only supported in microversion >= 2.24 and only
on the libvirt driver so there are notes about it being
conditional.

Closes-Bug: #1641713

Part of blueprint api-ref-in-rst-ocata

Change-Id: I3bc2cb70f8ad12124098376ef01eb7df2f6b2f88


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1641713

Title:
  There is no api-ref for 2.24 and aborting a live migration

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This change added the 2.24 microversion for aborting a live migration:

  https://review.openstack.org/#/c/277971/

  However, we don't have any API reference for a DELETE action on a
  migration resource:

  http://developer.openstack.org/api-ref/compute/?expanded=migrate-
  server-migrate-action-detail#servers-run-an-administrative-action-
  servers-action
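
  For reference, the undocumented call looks roughly like this (endpoint,
  token and IDs are placeholders):

    # Abort an in-progress live migration (microversion >= 2.24, libvirt only).
    import requests

    compute = "http://controller:8774/v2.1"
    headers = {
        "X-Auth-Token": "<token>",
        "X-OpenStack-Nova-API-Version": "2.24",
    }
    server_id = "<server-uuid>"
    migration_id = "<migration-id>"

    resp = requests.delete(
        "%s/servers/%s/migrations/%s" % (compute, server_id, migration_id),
        headers=headers,
    )
    print(resp.status_code)  # 202 is expected when the abort is accepted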

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1641713/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645479] Re: ServerExternalEventsController doesn't properly pre-load migration_context

2016-11-29 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/403917
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=cbcff11b6a206a58093017fb9471d818d484ca34
Submitter: Jenkins
Branch:master

commit cbcff11b6a206a58093017fb9471d818d484ca34
Author: Matt Riedemann 
Date:   Mon Nov 28 16:45:58 2016 -0500

Fix expected_attrs kwarg in server_external_events

The Instance.get_by_uuid method takes an expected_attrs
kwarg which needs to be a list or tuple, not just any old
iterable like a string. Because of how the underlying
Instance object code massages this value, it's not a hard
failure but does mean you don't join the columns you expect
when getting the instance.

This makes it a list and makes sure the stub in the unit
tests is checking for valid values.

Change-Id: I3ad85f9062b5cb19962d9e6a7af52440166def45
Closes-Bug: #1645479


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1645479

Title:
  ServerExternalEventsController doesn't properly pre-load
  migration_context

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) newton series:
  New

Bug description:
  This code is passing a string for expected_attrs when getting all
  instances:

  
https://github.com/openstack/nova/blob/4e747092bcb015303efc2ab13da98ef5ce575ec8/nova/api/openstack/compute/server_external_events.py#L72

  That's used to join the migration_context from the DB, but it's not
  doing that as expected_attr should be a list:

  
https://github.com/openstack/nova/blob/4e747092bcb015303efc2ab13da98ef5ce575ec8/nova/objects/instance.py#L73

  So we aren't getting the optimization in the API.
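
  A plain-Python illustration of why a bare string is the wrong type here,
  even though it is technically iterable:

    # A str silently behaves like a list of single characters, so no
    # attribute name ever matches and nothing extra gets joined.
    expected_attrs = 'migration_context'
    print('migration_context' in list(expected_attrs))   # False

    expected_attrs = ['migration_context']
    print('migration_context' in list(expected_attrs))   # True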

  That was added in https://review.openstack.org/#/c/371048/ which was
  also backported to stable/newton.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1645479/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645683] [NEW] NoReverseMatch at /project/access_and_security/

2016-11-29 Thread John Davidge
Public bug reported:

After associating a Floating IP via Horizon, the Access and Security
page produces the following error:

Reverse for 'detail' with arguments '(u'',)' and keyword arguments '{}'
not found. 1 pattern(s) tried:
[u'project/instances/(?P[^/]+)/$']

Complete paste: http://paste.openstack.org/show/590655/

Any attempt to reload the page produces the same error.

This is with a fresh pull of devstack and horizon as of 28/11/2016.

** Affects: horizon
 Importance: Undecided
 Status: New

** Description changed:

  After associating a Floating IP via Horizon, the Access and Security
  page produces the following error:
  
  Reverse for 'detail' with arguments '(u'',)' and keyword arguments '{}'
  not found. 1 pattern(s) tried:
  [u'project/instances/(?P[^/]+)/$']
  
  Complete paste: http://paste.openstack.org/show/590655/
  
  Any attempt to reload the page produces the same error.
+ 
+ This is with a fresh pull of devstack and horizon as of 28/11/2016.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645683

Title:
  NoReverseMatch at /project/access_and_security/

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After associating a Floating IP via Horizon, the Access and Security
  page produces the following error:

  Reverse for 'detail' with arguments '(u'',)' and keyword arguments
  '{}' not found. 1 pattern(s) tried:
  [u'project/instances/(?P[^/]+)/$']

  Complete paste: http://paste.openstack.org/show/590655/

  Any attempt to reload the page produces the same error.

  This is with a fresh pull of devstack and horizon as of 28/11/2016.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645652] [NEW] VLAN aware VM - trunk is not deleted when it is in admin-state-up False

2016-11-29 Thread Alex Stafeyev
Public bug reported:

[stack@undercloud-0 ~]$ openstack network trunk delete trunk1
Failed to delete trunk with name or ID 'trunk1': Trunk 
3b5c8493-7832-4501-a93f-65c5131512bb is currently disabled.
Neutron server returns request_ids: ['req-2ee037a3-6d2f-49fe-babb-cc9a4b2ce9dd']
1 of 1 trunks failed to delete.


[stack@undercloud-0 ~]$  openstack network trunk set trunk1 --enable
[stack@undercloud-0 ~]$ 
[stack@undercloud-0 ~]$ 
[stack@undercloud-0 ~]$ 
[stack@undercloud-0 ~]$ openstack network trunk delete trunk1
[stack@undercloud-0 ~]$ 


I think that the admin state is not relevant when we want to delete the trunk


newton, 
[root@controller-0 ~]# rpm -qa | grep openstack | grep neutron
openstack-neutron-lbaas-9.1.0-1.el7ost.noarch
openstack-neutron-bigswitch-lldp-9.40.0-1.1.el7ost.noarch
openstack-neutron-metering-agent-9.1.0-6.el7ost.noarch
openstack-neutron-sriov-nic-agent-9.1.0-6.el7ost.noarch
openstack-neutron-ml2-9.1.0-6.el7ost.noarch
openstack-neutron-bigswitch-agent-9.40.0-1.1.el7ost.noarch
openstack-neutron-openvswitch-9.1.0-6.el7ost.noarch
openstack-neutron-common-9.1.0-6.el7ost.noarch
openstack-neutron-9.1.0-6.el7ost.noarch

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645652

Title:
  VLAN aware VM - trunk is not deleted when it is in admin-state-up False

Status in neutron:
  New

Bug description:
  [stack@undercloud-0 ~]$ openstack network trunk delete trunk1
  Failed to delete trunk with name or ID 'trunk1': Trunk 
3b5c8493-7832-4501-a93f-65c5131512bb is currently disabled.
  Neutron server returns request_ids: 
['req-2ee037a3-6d2f-49fe-babb-cc9a4b2ce9dd']
  1 of 1 trunks failed to delete.

  
  [stack@undercloud-0 ~]$  openstack network trunk set trunk1 --enable
  [stack@undercloud-0 ~]$ 
  [stack@undercloud-0 ~]$ 
  [stack@undercloud-0 ~]$ 
  [stack@undercloud-0 ~]$ openstack network trunk delete trunk1
  [stack@undercloud-0 ~]$ 

  
  I think that the admin state is not relevant when we want to delete the trunk

  
  newton, 
  [root@controller-0 ~]# rpm -qa | grep openstack | grep neutron
  openstack-neutron-lbaas-9.1.0-1.el7ost.noarch
  openstack-neutron-bigswitch-lldp-9.40.0-1.1.el7ost.noarch
  openstack-neutron-metering-agent-9.1.0-6.el7ost.noarch
  openstack-neutron-sriov-nic-agent-9.1.0-6.el7ost.noarch
  openstack-neutron-ml2-9.1.0-6.el7ost.noarch
  openstack-neutron-bigswitch-agent-9.40.0-1.1.el7ost.noarch
  openstack-neutron-openvswitch-9.1.0-6.el7ost.noarch
  openstack-neutron-common-9.1.0-6.el7ost.noarch
  openstack-neutron-9.1.0-6.el7ost.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645655] [NEW] ovs firewall probably cannot handle server reboot

2016-11-29 Thread IWAMOTO Toshihiro
Public bug reported:

See tempest test results for 
https://review.openstack.org/#/c/399400/

tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
 fails at ssh connection after server soft reboot.
A few other tests seem to have some issues, too.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: ovs-fw

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645655

Title:
  ovs firewall probably cannot handle server reboot

Status in neutron:
  New

Bug description:
  See tempest test results for 
  https://review.openstack.org/#/c/399400/

  
tempest.scenario.test_minimum_basic.TestMinimumBasicScenario.test_minimum_basic_scenario
 fails at ssh connection after server soft reboot.
  A few other tests seem to have some issues, too.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645655/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645644] [NEW] ntp is not restarted after writing /etc/ntp.conf by cloud-init

2016-11-29 Thread Nobuto Murata
Public bug reported:

cloud-init: 0.7.8-49-g9e904bb-0ubuntu1~16.04.1

The expected NTP server address is written to /etc/ntp.conf by cloud-init
through vendor-data. However, `ntpq -p` shows the default NTP pools, not my
local NTP server from /etc/ntp.conf.
It looks like cloud-init needs to write /etc/ntp.conf before installing the
ntp package, or restart ntp after writing /etc/ntp.conf.
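
Not cloud-init's actual implementation, just a sketch of the ordering being
asked for: write the configuration first, then (re)start the service so the
daemon picks up the local server:

    # Sketch only: write /etc/ntp.conf, then restart ntp so it re-reads it.
    import subprocess

    NTP_CONF = "/etc/ntp.conf"

    def write_ntp_conf(servers):
        lines = ["server %s iburst" % s for s in servers]
        with open(NTP_CONF, "w") as f:
            f.write("\n".join(lines) + "\n")

    def restart_ntp():
        subprocess.check_call(["systemctl", "restart", "ntp"])

    write_ntp_conf(["10.0.0.1"])
    restart_ntp()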

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Affects: cloud-init (Ubuntu)
 Importance: Undecided
 Status: New

** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1645644

Title:
  ntp is not restarted after writing /etc/ntp.conf by cloud-init

Status in cloud-init:
  New
Status in cloud-init package in Ubuntu:
  New

Bug description:
  cloud-init: 0.7.8-49-g9e904bb-0ubuntu1~16.04.1

  Expected NTP server address is written in /etc/ntp.conf by cloud-init through 
vendor-data. However, `ntpq -p` shows the default ntp pools, not my local NTP 
server written in /etc/ntp.conf.
  It looks like cloud-init needs to write /etc/ntp.conf before installing ntp 
package, or restart ntp after writing /etc/ntp.conf.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1645644/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645607] Re: keystone-manage mapping_populate fails and gives unhandled exception

2016-11-29 Thread the-bling
** Changed in: keystone
   Status: Confirmed => Invalid

** Tags removed: newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645607

Title:
  keystone-manage mapping_populate fails and gives unhandled exception

Status in OpenStack Identity (keystone):
  Invalid

Bug description:
  Running keystone-manage mapping_populate --domain-name 
  displays the ID of the domain but throws an unhandled exception. This
  is visible in keystone.log

  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in 
  2016-11-18 09:48:27.730 7685 ERROR keystone sys.exit(main())
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 43, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1260, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone CONF.command.cmd_class.main()
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1211, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone 
cls.identity_api.list_users(domain_scope=domain_id)
  2016-11-18 09:48:27.730 7685 ERROR keystone NameError: global name 
'domain_id' is not defined
  2016-11-18 09:48:27.730 7685 ERROR keystone

  It seems to me that the variable domain_id hasn't been properly scoped
  which is causing this error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645632] [NEW] policy.v3cloudsample.json still broken in Newton

2016-11-29 Thread Jay Jahns
Public bug reported:

This policy file works using the CLI clients; however, it does NOT work
with Horizon, yet again.  This is the same problem I observed with
Mitaka, that forced me to revert to the liberty policy file.  I've had
to do that again.

The error message for a user who just logged in is linked below. This is a
problem that we need to fix, and I can recreate the issue at will.
 
https://gist.githubusercontent.com/jjahns/970daf472abea086403269920280d53c/raw/64480893b4cb1f65f63d48bbf05391597c5db74d/gistfile1.txt

My workaround has been to take the liberty V3 policy json file and copy
it into openstack_dashboard/conf/keystone_policy.json.  That appears to
be fully functional, but we need to fix this behavior as having a policy
from 2 versions ago does not make any sense.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645632

Title:
  policy.v3cloudsample.json still broken in Newton

Status in OpenStack Identity (keystone):
  New

Bug description:
  This policy file works using the CLI clients; however, it does NOT
  work with Horizon, yet again.  This is the same problem I observed
  with Mitaka, which forced me to revert to the Liberty policy file.
  I've had to do that again.

  The error message seen by a user who has just logged in is linked
  below.  This is a problem we need to fix, and I can recreate the issue
  at will.
   
  
https://gist.githubusercontent.com/jjahns/970daf472abea086403269920280d53c/raw/64480893b4cb1f65f63d48bbf05391597c5db74d/gistfile1.txt

  My workaround has been to take the Liberty v3 policy JSON file and
  copy it into openstack_dashboard/conf/keystone_policy.json.  That
  appears to be fully functional, but we need to fix this behavior, as
  running a policy file from two releases ago does not make any sense.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645632/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645630] [NEW] Material Theme in Newton Appears to be Broken

2016-11-29 Thread Jay Jahns
Public bug reported:

Pulled down project from git (stable/newton).

Used OpenStack global requirements and installed in a virtualenv under
Ubuntu 14.04 (python 2.7).

Problem manifested during compress operation, when setting up the web
application for use.

Code excerpt attached.

To work around the issue, I explicitly disabled the Material theme.

https://gist.githubusercontent.com/jjahns/025e51e0b82009dd17a30651c2256262/raw/d02d96dcba8c2f68a0b68a3b175e6ed1c77190fc/gistfile1.txt
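
For anyone who needs the same workaround, disabling the theme amounts to
dropping the material entry from the theme list in local_settings.py; a
sketch, assuming the stock theme tuples:

    # Sketch only; the tuple values are the stock defaults and may differ
    # in your deployment.
    AVAILABLE_THEMES = [
        ('default', 'Default', 'themes/default'),
        # ('material', 'Material', 'themes/material'),  # disabled
    ]
    DEFAULT_THEME = 'default'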

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645630

Title:
  Material Theme in Newton Appears to be Broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Pulled down project from git (stable/newton).

  Used OpenStack global requirements and installed in a virtualenv under
  Ubuntu 14.04 (python 2.7).

  Problem manifested during compress operation, when setting up the web
  application for use.

  Code excerpt attached.

  To work around the issue, I explicitly disabled the Material theme.

  
https://gist.githubusercontent.com/jjahns/025e51e0b82009dd17a30651c2256262/raw/d02d96dcba8c2f68a0b68a3b175e6ed1c77190fc/gistfile1.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645630/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645625] [NEW] [RFE] Add support active-active router

2016-11-29 Thread denghui huang
Public bug reported:

   In the current neutron reference implementation, there are legacy
routers and DVR routers. To achieve high availability, the legacy
implementation offers HA routers, while a DVR router is composed of two
parts: one part handles east-west traffic and is located on the compute
nodes (the east-west part), and the other handles north-south traffic
and is located on the network node (the north-south part). The
north-south part already supports HA.

   An HA router only provides high availability; it cannot provide load
balancing. This RFE proposes an active-active router for both scenarios.
In the DVR implementation the east-west part is already distributed, so
this RFE only proposes active-active operation for the north-south part.
An active-active router provides a form of traffic load balancing based
on a hash of the five tuple (source IP, destination IP, source port,
destination port, protocol). There are two key technical problems to
solve: 1. how to implement load balancing? 2. how to monitor the
router's availability?

   Load-balance methods:
 Method 1 (L2 load balance): the L3 agent spawns at least two namespaces
for the active-active router on two different network nodes. From the
logical router model's point of view there is one router interface with
one logical port object; from the data-plane point of view there are two
qr interfaces in two namespaces on two different network nodes, and
these two qr interfaces share the same IP address and MAC address. This
works on a VXLAN provider network, since OVS supports the multipath
instruction, but it does not work on a VLAN provider network, because a
physical switch does not accept the same MAC address being learned from
two ports.
   Monitor:
 A component is needed to monitor the qr interfaces' availability; the
OVS agent could perhaps take this role.

 Method 2 (L3 load balance): the L3 agent again spawns at least two
namespaces for the active-active router on two different network nodes.
From the logical router model's point of view there is one router
interface with two logical port objects; from the data-plane point of
view there are two qr interfaces in two namespaces on two different
network nodes, and these two qr interfaces have different IP addresses
and MAC addresses. This works on both VXLAN and VLAN provider networks,
but it requires some changes to the gateway-IP properties of the subnet
object, since an active-active router effectively has at least two
gateways. From the routing point of view there are multiple next hops:
in the legacy implementation, VMs on a subnet attached to the
active-active router should be configured with ECMP routes, and in the
DVR implementation the east-west part should be configured with ECMP
routes.
   Monitor:
 In the DVR implementation, BFD can be deployed in the qrouter and SNAT
namespaces to detect the qr interfaces' availability. In the legacy
implementation it is still an open question how to monitor the qr
interfaces' availability; the DHCP agent could perhaps take this role.

 On the external-network side, an active-active router always connects
to a data-center VLAN provider network, so the qg interfaces should have
two different IP and MAC addresses. ECMP routes therefore need to be
configured on the upstream physical router, and monitoring should rely
on the upstream physical router's own monitoring features.
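
As a plain illustration of the five-tuple hashing mentioned above (a
sketch only; real ECMP selection would happen in the datapath, e.g. OVS
multipath or kernel ECMP routes, not in Python):

    import hashlib

    def pick_router_instance(src_ip, dst_ip, src_port, dst_port, proto,
                             instances):
        # Hash the flow's five tuple so that the same flow always lands
        # on the same active router instance.
        key = '%s|%s|%s|%s|%s' % (src_ip, dst_ip, src_port, dst_port, proto)
        digest = hashlib.sha256(key.encode('utf-8')).hexdigest()
        return instances[int(digest, 16) % len(instances)]

    # Example: two qr interfaces hosted on two different network nodes.
    print(pick_router_instance('10.0.0.5', '203.0.113.9', 34567, 443, 'tcp',
                               ['qr-on-node1', 'qr-on-node2']))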

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: rfe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645625

Title:
  [RFE] Add support active-active router

Status in neutron:
  New

Bug description:
 In the current neutron reference implementation, there are legacy
  routers and DVR routers. To achieve high availability, the legacy
  implementation offers HA routers, while a DVR router is composed of
  two parts: one part handles east-west traffic and is located on the
  compute nodes (the east-west part), and the other handles north-south
  traffic and is located on the network node (the north-south part). The
  north-south part already supports HA.

 An HA router only provides high availability; it cannot provide load
  balancing. This RFE proposes an active-active router for both
  scenarios. In the DVR implementation the east-west part is already
  distributed, so this RFE only proposes active-active operation for the
  north-south part. An active-active router provides a form of traffic
  load balancing based on a hash of the five tuple (source IP,
  destination IP, source port, destination port, protocol). There are
  two key technical problems to solve: 1. how to implement load
  balancing? 2. how to monitor the router's availability?

 Load-balance methods:
   

[Yahoo-eng-team] [Bug 1645620] [NEW] [FWaaS] update firewall policy with rule to empty returns 500

2016-11-29 Thread Yushiro FURUKAWA
Public bug reported:

When a firewall policy includes at least one firewall rule and you try to
update it with an empty firewall_rules list, the following error occurs.

The update path should clear the rule_association table in the same way
remove_rules does.
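
A hypothetical sketch of the intended behaviour, using plain dicts
rather than the actual neutron-fwaas DB code (function and field names
here are invented for illustration):

    def update_policy_rules(policy, rule_ids):
        if not rule_ids:
            # An empty list should clear the associations, the same end
            # state that calling remove_rules for every associated rule
            # produces, instead of falling through to a 500.
            policy['firewall_rules'] = []
            return policy
        policy['firewall_rules'] = list(rule_ids)
        return policy

    policy = {'id': '86a47474-f2d2-4a89-a4b4-22119fe6e459',
              'firewall_rules': ['50c87d7e-63c1-4911-9bee-d15455073c78']}
    print(update_policy_rules(policy, []))   # firewall_rules becomes []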

[How to reproduce]
curl -s -X PUT -d 
'{"firewall_policy":{"firewall_rules":["50c87d7e-63c1-4911-9bee-d15455073c78"]}}'
 -H "x-auth-token:$TOKEN" 
192.168.122.181:9696/v2.0/fwaas/firewall_policies/86a47474-f2d2-4a89-a4b4-22119fe6e459

{
  "firewall_policy": {
"description": "",
"firewall_rules": [
  "50c87d7e-63c1-4911-9bee-d15455073c78"
],
"tenant_id": "36bc640624964521b494fd0bd46d2a6e",
"public": false,
"id": "86a47474-f2d2-4a89-a4b4-22119fe6e459",
"project_id": "36bc640624964521b494fd0bd46d2a6e",
"audited": false,
"name": "policy1"
  }
}

curl -s -X PUT -d '{"firewall_policy":{"firewall_rules":[]}}' -H "x
-auth-token:$TOKEN"
192.168.122.181:9696/v2.0/fwaas/firewall_policies/86a47474-f2d2-4a89-a4b4-22119fe6e459

{
  "NeutronError": {
"message": "Request Failed: internal server error while processing your 
request.",
"type": "HTTPInternalServerError",
"detail": ""
  }
}

[Error log on q-svc.log]
2016-11-29 16:40:10.868 ERROR neutron.api.v2.resource 
[req-1c906036-e041-43fd-919a-1bd1bc1ebc81 admin 
36bc640624964521b494fd0bd46d2a6e] update failed: No details.
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource Traceback (most recent 
call last):
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/resource.py", line 79, in resource
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 612, in update
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return 
self._update(request, id, body, **kwargs)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 92, in wrapped
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource setattr(e, 
'_RETRY_EXCEEDED', True)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource self.force_reraise()
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 88, in wrapped
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 151, in wrapper
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource ectxt.value = 
e.inner_exc
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource self.force_reraise()
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 139, in wrapper
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return f(*args, 
**kwargs)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 128, in wrapped
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource 
traceback.format_exc())
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in 
__exit__
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource self.force_reraise()
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in 
force_reraise
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource 
six.reraise(self.type_, self.value, self.tb)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/db/api.py", line 123, in wrapped
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource return f(*dup_args, 
**dup_kwargs)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 
"/opt/stack/neutron/neutron/api/v2/base.py", line 660, in _update
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2016-11-29 16:40:10.868 TRACE neutron.api.v2.resource   File 

[Yahoo-eng-team] [Bug 1645616] [NEW] dnsmasq doesn't like providing DHCP for subnets with prefixes shorter than 64

2016-11-29 Thread Kevin Benton
Public bug reported:

Trace when you enable DHCP on an IPv6 network with a prefix less than
64.

2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: ; Stderr:
2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent dnsmasq: bad 
command line options: prefix length must be at least 64

At a minimum we need to skip these on the DHCP agent to prevent a bunch
of log noise and retries. We probably should consider rejecting
enable_dhcp=True in the API when the prefix is like this for IPv6 if
it's a fundamental limitation of DHCPv6.
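
A rough sketch of what an API-side rejection could look like (an
assumption, not the actual neutron validation code; uses the netaddr
library):

    import netaddr

    def check_ipv6_dhcp_prefix(cidr, enable_dhcp):
        # dnsmasq refuses to serve DHCPv6 on prefixes shorter than /64.
        net = netaddr.IPNetwork(cidr)
        if enable_dhcp and net.version == 6 and net.prefixlen < 64:
            raise ValueError(
                "enable_dhcp is not supported on IPv6 subnet %s: prefix "
                "length must be at least 64" % cidr)

    check_ipv6_dhcp_prefix('2001:db8::/64', True)   # fine
    try:
        check_ipv6_dhcp_prefix('2001:db8::/48', True)
    except ValueError as exc:
        print(exc)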

** Affects: neutron
 Importance: High
 Assignee: Kevin Benton (kevinbenton)
 Status: In Progress

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

** Summary changed:

- dnsmasq doesn't like providing DHCP for subnets with prefixes larger than 64
+ dnsmasq doesn't like providing DHCP for subnets with prefixes shorter than 64

** Description changed:

- Trace when you enable DHCP on an IPv6 network with a prefix larger than
+ Trace when you enable DHCP on an IPv6 network with a prefix less than
  64.
  
  2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: ; Stderr:
  2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent dnsmasq: bad 
command line options: prefix length must be at least 64
  
- 
- At a minimum we need to skip these on the DHCP agent to prevent a bunch of 
log noise and retries. We probably should consider rejecting enable_dhcp=True 
in the API when the prefix is like this for IPv6 if it's a fundamental 
limitation of DHCPv6.
+ At a minimum we need to skip these on the DHCP agent to prevent a bunch
+ of log noise and retries. We probably should consider rejecting
+ enable_dhcp=True in the API when the prefix is like this for IPv6 if
+ it's a fundamental limitation of DHCPv6.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645616

Title:
  dnsmasq doesn't like providing DHCP for subnets with prefixes shorter
  than 64

Status in neutron:
  In Progress

Bug description:
  Trace when you enable DHCP on an IPv6 network with a prefix less than
  64.

  2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent 
ProcessExecutionError: Exit code: 1; Stdin: ; Stdout: ; Stderr:
  2016-11-15 17:33:54.321 102837 ERROR neutron.agent.dhcp.agent dnsmasq: bad 
command line options: prefix length must be at least 64

  At a minimum we need to skip these on the DHCP agent to prevent a
  bunch of log noise and retries. We probably should consider rejecting
  enable_dhcp=True in the API when the prefix is like this for IPv6 if
  it's a fundamental limitation of DHCPv6.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645616/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645078] Re: There was confusion of string and uuid

2016-11-29 Thread PanFengyun
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1645078

Title:
  There was confusion of string and uuid

Status in neutron:
  Invalid

Bug description:
  The following code in agent/linux/external_process.py:
  --
      with open(cmdline, "r") as f:
          return self.uuid in f.readline()
  --
  will lead to this bug:
  --
      TypeError: 'in <string>' requires string as left operand, not UUID
  --

  We should format the UUID into a string.
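
  A small runnable sketch of the suggested cast (illustrative only, not
  the actual external_process.py code):

      import io
      import uuid

      def cmdline_matches(proc_uuid, cmdline_file):
          line = cmdline_file.readline()
          # str() avoids "TypeError: 'in <string>' requires string as
          # left operand, not UUID" when proc_uuid is a uuid.UUID.
          return str(proc_uuid) in line

      u = uuid.uuid4()
      print(cmdline_matches(u, io.StringIO(u'keepalived %s' % u)))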

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1645078/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645613] [NEW] Create host aggregate empty name causes error

2016-11-29 Thread Yakup Adakli
Public bug reported:

When we try to create a new host aggregate and leave the name empty, we
get the error 'NoneType' object has no attribute 'lower'.

To reproduce the error:
1. Login to dashboard
2. Go to Admin>System>Host Aggregates
3. Click Create Host Aggregate
4. Leave name field empty.
5. Save
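
A hedged sketch of the kind of guard the form could use (class and field
names here are assumptions, not Horizon's actual create-aggregate form):

    from django import forms

    class CreateAggregateForm(forms.Form):
        name = forms.CharField(max_length=255, required=True)

        def clean_name(self):
            name = self.cleaned_data.get('name')
            if not name or not name.strip():
                # Surface a form error instead of letting a None/blank
                # name reach code that calls name.lower().
                raise forms.ValidationError("Name may not be empty.")
            return name.strip()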

** Affects: horizon
 Importance: Undecided
 Assignee: Yakup Adakli (yakupa)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1645613

Title:
  Create host aggregate empty name causes error

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When we try to create a new host aggregate and leave the name empty,
  we get the error 'NoneType' object has no attribute 'lower'.

  To reproduce the error:
  1. Login to dashboard
  2. Go to Admin>System>Host Aggregates
  3. Click Create Host Aggregate
  4. Leave name field empty.
  5. Save

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1645613/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1645607] [NEW] keystone-manage mapping_populate fails and gives unhandled exception

2016-11-29 Thread the-bling
Public bug reported:

Running keystone-manage mapping_populate --domain-name <domain_name>
displays the ID of the domain but throws an unhandled exception. This is
visible in keystone.log:

2016-11-18 09:48:27.730 7685 ERROR keystone   File "/usr/bin/keystone-manage", 
line 10, in <module>
2016-11-18 09:48:27.730 7685 ERROR keystone sys.exit(main())
2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 43, in main
2016-11-18 09:48:27.730 7685 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1260, in main
2016-11-18 09:48:27.730 7685 ERROR keystone CONF.command.cmd_class.main()
2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1211, in main
2016-11-18 09:48:27.730 7685 ERROR keystone 
cls.identity_api.list_users(domain_scope=domain_id)
2016-11-18 09:48:27.730 7685 ERROR keystone NameError: global name 'domain_id' 
is not defined
2016-11-18 09:48:27.730 7685 ERROR keystone

It seems to me that the variable domain_id has not been properly scoped,
which is what causes this error.

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: newton

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1645607

Title:
  keystone-manage mapping_populate fails and gives unhandled exception

Status in OpenStack Identity (keystone):
  New

Bug description:
  Running keystone-manage mapping_populate --domain-name <domain_name>
  displays the ID of the domain but throws an unhandled exception. This
  is visible in keystone.log:

  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/bin/keystone-manage", line 10, in <module>
  2016-11-18 09:48:27.730 7685 ERROR keystone sys.exit(main())
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/manage.py", line 43, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone cli.main(argv=sys.argv, 
config_files=config_files)
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1260, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone CONF.command.cmd_class.main()
  2016-11-18 09:48:27.730 7685 ERROR keystone   File 
"/usr/lib/python2.7/site-packages/keystone/cmd/cli.py", line 1211, in main
  2016-11-18 09:48:27.730 7685 ERROR keystone 
cls.identity_api.list_users(domain_scope=domain_id)
  2016-11-18 09:48:27.730 7685 ERROR keystone NameError: global name 
'domain_id' is not defined
  2016-11-18 09:48:27.730 7685 ERROR keystone

  It seems to me that the variable domain_id has not been properly
  scoped, which is what causes this error.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1645607/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp