[Yahoo-eng-team] [Bug 1276694] Re: Openstack services should support SIGHUP signal

2016-02-08 Thread Eric Harney
** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1276694

Title:
  Openstack services should support SIGHUP signal

Status in Cinder:
  New
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Identity (keystone):
  Invalid
Status in Murano:
  Confirmed
Status in neutron:
  In Progress
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo-incubator:
  Invalid
Status in oslo.config:
  New
Status in oslo.log:
  New
Status in oslo.service:
  Fix Released
Status in Sahara:
  In Progress

Bug description:
  1) In order to manage unlinked but still-open log file descriptors
  (lsof +L1) without restarting the services, every OpenStack service
  should accept the SIGHUP signal.

  That would allow, e.g., logrotate jobs to gracefully HUP services after
  their log files are rotated. The only option we have for now is to
  force a service restart, which is a poor option from the point of view
  of continuous service availability.

  Note: according to http://en.wikipedia.org/wiki/Unix_signal
  SIGHUP
     ... Many daemons will reload their configuration files and reopen their logfiles instead of exiting when receiving this signal.
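
  The behaviour described above can be sketched in a few lines. This is a
  hypothetical helper for illustration only (the class name and structure are
  assumptions, not the oslo implementation): on SIGHUP it reopens the log file
  by name instead of exiting.

  ```python
  import os
  import signal
  import tempfile  # used in the usage example


  class LogReopener:
      """Minimal sketch: reopen the log file on SIGHUP instead of exiting."""

      def __init__(self, path):
          self.path = path
          self.stream = open(path, 'a')
          self.hups = 0
          signal.signal(signal.SIGHUP, self._handle)

      def _handle(self, signum, frame):
          # After logrotate renames the file, the old fd still points at the
          # unlinked inode (what `lsof +L1` shows); closing it and reopening
          # by name switches writes to the freshly created file.
          self.stream.close()
          self.stream = open(self.path, 'a')
          self.hups += 1
  ```

  A logrotate postrotate script would then only need to send `kill -HUP` to the
  service's PID rather than restarting it.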

  Currently Murano and Glance are out of sync with Oslo SIGHUP support.

  The following issue also exists for some services of OpenStack projects
  that have synced the SIGHUP support:
  2)
  heat-api-cfn, heat-api, heat-api-cloudwatch, keystone: it looks like the synced code is never executed, so SIGHUP is not supported by them. Here is a simple test scenario:
  2.1) modify /site-packages//openstack/common/service.py:

  def _sighup_supported():
  +   LOG.warning("SIGHUP is supported: {0}".format(hasattr(signal, 'SIGHUP')))
      return hasattr(signal, 'SIGHUP')
  2.2) restart service foo-service-name and check the logs for "SIGHUP is supported"; if the service really supports it, the appropriate messages will be present in the logs.
  2.3) issue kill -HUP  and check the logs for "SIGHUP is supported" and "Caught SIGHUP"; if the service really supports it, the appropriate messages will be present in the logs. Besides that, the service should remain started and its main thread PID should not change.

  e.g.
  2.a) heat-engine supports HUPing:
  #service openstack-heat-engine restart
  <132>Apr 11 14:03:48 node-3 heat-heat.openstack.common.service WARNING: SIGHUP is supported: True

  2.b) But heat-api doesn't know how to HUP:
  #service openstack-heat-api restart
  <134>Apr 11 14:06:22 node-3 heat-heat.api INFO: Starting Heat ReST API on 0.0.0.0:8004
  <134>Apr 11 14:06:22 node-3 heat-eventlet.wsgi.server INFO: Starting single process server

  2.c) HUPing heat-engine is OK
  #pid=$(cat /var/run/heat/openstack-heat-engine.pid); kill -HUP $pid && echo $pid
  16512
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service INFO: Caught SIGHUP, exiting
  <132>Apr 11 14:12:15 node-3 heat-heat.openstack.common.service WARNING: SIGHUP is supported: True
  <134>Apr 11 14:12:15 node-3 heat-heat.openstack.common.rpc.common INFO: Connected to AMQP server on ...
  service openstack-heat-engine status
  openstack-heat-engine (pid  16512) is running...

  2.d) HUPed heat-api is dead now ;(
  #kill -HUP $(cat /var/run/heat/openstack-heat-api.pid)
  (no new logs)
  # service openstack-heat-api status
  openstack-heat-api dead but pid file exists

  3)
  nova-cert, nova-novncproxy, nova-objectstore, nova-consoleauth,
  nova-scheduler - unlike case 2, after the kill -HUP  command is issued
  there is a "Caught SIGHUP" message in the logs, BUT the associated
  service dies anyway. Instead, the service should remain started and its
  main thread PID should not change (similar to case 2.c).

  So, it looks like there is still a lot to be done to ensure compliance
  with POSIX standards in OpenStack :-)
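
  The test in steps 2.2/2.3 can be re-created in pure Python: start a child
  process that installs a SIGHUP handler, HUP it, and verify that it survives
  with the same PID. The child script below is a hypothetical stand-in for a
  well-behaved service, not actual OpenStack code.

  ```python
  import signal
  import subprocess
  import sys
  import time

  # Child that handles SIGHUP instead of dying (the behaviour expected of
  # every service in this bug report).
  child_src = (
      "import signal, time\n"
      "signal.signal(signal.SIGHUP, lambda s, f: None)  # handle, don't die\n"
      "time.sleep(30)\n"
  )


  def survives_sighup(proc):
      """Return True if proc is still running shortly after a SIGHUP."""
      proc.send_signal(signal.SIGHUP)
      time.sleep(0.3)              # give the signal time to be delivered
      return proc.poll() is None   # poll() is None while still running


  proc = subprocess.Popen([sys.executable, "-c", child_src])
  time.sleep(0.5)                  # let the child install its handler
  alive = survives_sighup(proc)
  proc.kill()
  proc.wait()
  ```

  A service that exits on HUP (cases 2.d and 3 above) would fail this check,
  because poll() would return its exit status instead of None.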

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1276694/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543012] [NEW] Routers: attaching a router to an external network without a subnet leads to exceptions

2016-02-08 Thread Gary Kotton
Public bug reported:

2016-01-29 06:45:03.920 18776 ERROR neutron.api.v2.resource [req-c2074082-a6d0-4e5a-8657-41fecb82dacc ] add_router_interface failed
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource Traceback (most recent call last):
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in resource
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource result = method(request=request, **args)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 207, in _handle_action
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource return getattr(self._plugin, name)(*arg_list, **kwargs)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v.py", line 1672, in add_router_interface
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, router_id, interface_info)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py", line 723, in add_router_interface
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, router_id, router_db.admin_state_up)
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py", line 468, in _bind_router_on_available_edge
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource self._get_available_and_conflicting_ids(context, router_id))
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py", line 273, in _get_available_and_conflicting_ids
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource gwp['fixed_ips'][0]['subnet_id'])
2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource IndexError: list index out of range

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543012

Title:
  Routers: attaching a router to an external network without a subnet
  leads to exceptions

Status in neutron:
  New

Bug description:
  2016-01-29 06:45:03.920 18776 ERROR neutron.api.v2.resource [req-c2074082-a6d0-4e5a-8657-41fecb82dacc ] add_router_interface failed
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/api/v2/resource.py", line 83, in resource
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource result = method(request=request, **args)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/lib/python2.7/dist-packages/neutron/api/v2/base.py", line 207, in _handle_action
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource return getattr(self._plugin, name)(*arg_list, **kwargs)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v.py", line 1672, in add_router_interface
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, router_id, interface_info)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py", line 723, in add_router_interface
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource context, router_id, router_db.admin_state_up)
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py", line 468, in _bind_router_on_available_edge
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource self._get_available_and_conflicting_ids(context, router_id))
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/plugins/vmware/plugins/nsx_v_drivers/shared_router_driver.py", line 273, in _get_available_and_conflicting_ids
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource gwp['fixed_ips'][0]['subnet_id'])
  2016-01-29 06:45:03.920 18776 TRACE neutron.api.v2.resource IndexError: list index out of range
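
  The traceback shows the driver indexing fixed_ips[0] on the gateway port; on
  an external network with no subnet that list is empty, hence the IndexError.
  A hedged sketch of the guard the driver needs (names follow the traceback;
  the skip-on-empty behaviour is an assumption for illustration, not the
  actual fix):

  ```python
  def conflicting_subnet_id(gwp):
      """Return the subnet_id of a gateway port, or None if the port has
      no fixed IPs (external network without a subnet)."""
      fixed_ips = gwp.get('fixed_ips') or []
      if not fixed_ips:
          return None          # nothing to conflict with; skip this port
      return fixed_ips[0]['subnet_id']
  ```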

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543012/+subscriptions


[Yahoo-eng-team] [Bug 1543010] [NEW] Nova clears DB if ESX nova-compute node restarted

2016-02-08 Thread Prashant Shetty
Public bug reported:

I had a 12-node ESX nova-compute cluster with 100 ESX hypervisors. For some
reason one of the nova-compute nodes went down.
After a couple of attempts nova-compute came up fine. But:

1. Nova deleted all the instances running on that particular node
(esx-compute11) from its DB
2. All the instances were deleted from the backend as well.

Filing this bug to track whether there is any issue with the nova scheduler
on an ESX setup.

Logs:

stack@runner:~/nsbu_cqe_openstack/nested$ nova service-list | grep nova-compute | grep esx
| 6  | nova-compute | esx-compute2  | nova | enabled | up   | 2016-02-03T09:45:15.00 | - |
| 7  | nova-compute | esx-compute1  | nova | enabled | up   | 2016-02-03T09:45:17.00 | - |
| 8  | nova-compute | esx-compute4  | nova | enabled | up   | 2016-02-03T09:45:18.00 | - |
| 9  | nova-compute | esx-compute3  | nova | enabled | up   | 2016-02-03T09:45:21.00 | - |
| 10 | nova-compute | esx-compute8  | nova | enabled | up   | 2016-02-03T09:45:20.00 | - |
| 11 | nova-compute | esx-compute7  | nova | enabled | up   | 2016-02-03T09:45:19.00 | - |
| 12 | nova-compute | esx-compute12 | nova | enabled | up   | 2016-02-03T09:45:19.00 | - |
| 13 | nova-compute | esx-compute5  | nova | enabled | up   | 2016-02-03T09:45:19.00 | - |
| 14 | nova-compute | esx-compute9  | nova | enabled | up   | 2016-02-03T09:45:17.00 | - |
| 15 | nova-compute | esx-compute6  | nova | enabled | up   | 2016-02-03T09:45:19.00 | - |
| 16 | nova-compute | esx-compute10 | nova | enabled | up   | 2016-02-03T09:45:20.00 | - |
| 17 | nova-compute | esx-compute11 | nova | enabled | down | 2016-02-03T09:26:53.00 | - |
stack@runner:~/nsbu_cqe_openstack/nested$


stack@controller:~$ sudo netstat -anp | grep 62.24.1.87
tcp6 0 0 62.24.1.111:5672 62.24.1.87:58180 ESTABLISHED 8687/beam.smp
tcp6 0 0 62.24.1.111:5672 62.24.1.87:58179 ESTABLISHED 8687/beam.smp
stack@controller:~$


2016-02-03 01:27:03.217 INFO nova.service [-] Starting compute node (version 13.0.0)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 457, in fire_timers
    timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 214, in main
    result = function(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_service/service.py", line 671, in run_service
    service.start()
  File "/opt/stack/nova/nova/service.py", line 183, in start
    self.manager.init_host()
  File "/opt/stack/nova/nova/compute/manager.py", line 1313, in init_host
    context, self.host, expected_attrs=['info_cache', 'metadata'])
  File "/usr/local/lib/python2.7/dist-packages/oslo_versionedobjects/base.py", line 172, in wrapper
    args, kwargs)
  File "/opt/stack/nova/nova/conductor/rpcapi.py", line 241, in object_class_action_versions
    args=args, kwargs=kwargs)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 158, in call
    retry=self.retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 90, in _send
    timeout=timeout, retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 464, in send
    retry=retry)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 453, in _send
    result = self._waiter.wait(msg_id, timeout)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 336, in wait
    message = self.waiters.get(msg_id, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 239, in get
    'to message ID %s' % msg_id)
MessagingTimeout: Timed out waiting for a reply to message ID 5a19ba4d2a694453b5db95fb2f73f9e8
2016-02-03 01:28:58.448 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 5a19ba4d2a694453b5db95fb2f73f9e8


Logs:

M-Release, master branch

stack@esx-compute3:/opt/stack/nova$ git log -1
commit 197bd6dd1231f1f57cdd6c0acb1dfbdc3b2b0989
Merge: 1ec0b56 5f5590f
Author: Jenkins 
Date:   Sun Feb 7 04:08:54 2016 +

Merge "libvirt: use osinfo when configuring the disk bus"
stack@esx-compute3:/opt/stack/nova$

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543010

Title:
  Nova clears DB if ESX nova-compute node restarted

Status in OpenStack Compute (nova):
  New

Bug description:
  I had a 12-node ESX nova-compute cluster with 100 ESX hypervisors. For some
  reason one of the nova-compute nodes went down.
  After a couple of attempts nova-compute came up fine. But:

  1. Nova deleted all the instances running on that particular node (esx-compute11)

[Yahoo-eng-team] [Bug 1277104] Re: wrong order of assertEquals args

2016-02-08 Thread OpenStack Infra
Reviewed: https://review.openstack.org/266724
Committed: https://git.openstack.org/cgit/openstack/manila/commit/?id=dac72337f7efbd75e0eb269ad8e1fdd89a13e4b9
Submitter: Jenkins
Branch: master

commit dac72337f7efbd75e0eb269ad8e1fdd89a13e4b9
Author: Yatin Kumbhare 
Date:   Wed Jan 13 12:16:06 2016 +0530

Fix params order in assertEqual

Fix params order to correspond to real signature:
assertEqual(expected, actual)

Change-Id: I5887e9c4fbd8953b3be9e89ce86758f8d1d842b2
Closes-Bug: #1277104


** Changed in: manila
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277104

Title:
  wrong order of assertEquals args

Status in Ceilometer:
  Fix Released
Status in Cinder:
  Fix Released
Status in Glance:
  Fix Released
Status in ironic-python-agent:
  Fix Released
Status in OpenStack Identity (keystone):
  Fix Released
Status in Manila:
  Fix Released
Status in neutron:
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in oslo.policy:
  Fix Released
Status in python-ceilometerclient:
  Fix Released
Status in python-glanceclient:
  Fix Released
Status in python-ironicclient:
  Fix Released
Status in python-novaclient:
  Fix Released
Status in python-openstackclient:
  Fix Released
Status in Python client library for Sahara:
  Fix Released
Status in python-solumclient:
  Fix Released
Status in python-swiftclient:
  Won't Fix
Status in python-troveclient:
  Fix Released
Status in Rally:
  Confirmed
Status in Trove:
  Fix Released

Bug description:
  Args of assertEquals method in ceilometer.tests are arranged in wrong order. 
In result when test fails it shows incorrect information about observed and 
actual data. It's found more than 2000 times.
  Right order of arguments is "expected, actual".
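
  Why the order matters: the test framework builds the failure message from
  the argument positions, so swapped arguments mislabel which value was
  expected and which was observed. A minimal sketch of the fixed convention
  (compute() is a hypothetical stand-in for the code under test):

  ```python
  import unittest


  def compute():
      # Hypothetical code under test.
      return 5


  class OrderDemo(unittest.TestCase):
      def test_order(self):
          # Right: the expected value first, the observed value second,
          # matching the assertEqual(expected, actual) signature.
          self.assertEqual(5, compute())


  suite = unittest.defaultTestLoader.loadTestsFromTestCase(OrderDemo)
  result = unittest.TestResult()
  suite.run(result)
  ```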

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1277104/+subscriptions



[Yahoo-eng-team] [Bug 1543181] [NEW] Raw and qcow2 disks are never preallocated on systems with newer util-linux

2016-02-08 Thread Matthew Booth
Public bug reported:

imagebackend.Image._can_fallocate tests if fallocate works by running
the following command:

  fallocate -n -l 1 .fallocate_test

where  exists, but .fallocate_test does not.
This command line is copied from the code which actually fallocates a
disk. However, while this works on systems with an older version of
util-linux, such as RHEL 7, it does not work on systems with a newer
version of util-linux, such as Fedora 23. The result of this is that
this test will always fail, and preallocation with fallocate will be
erroneously disabled.

On RHEL 7, which has util-linux-2.23.2-26.el7.x86_64 on my system:

$ fallocate -n -l 1 foo
$ ls -lh foo
-rw-r--r--. 1 mbooth mbooth 0 Feb  8 15:33 foo
$ du -sh foo
4.0K    foo

On Fedora 23, which has util-linux-2.27.1-2.fc23.x86_64 on my system:

$ fallocate -n -l 1 foo
fallocate: cannot open foo: No such file or directory

The F23 behaviour actually makes sense. From the fallocate man page:

  -n, --keep-size
  Do  not modify the apparent length of the file.

This doesn't make any sense if the file doesn't exist. That is, the -n
option makes sense when preallocating an existing disk image, but not
when testing if fallocate works on a given filesystem and the test file
doesn't already exist.

You could also reasonably argue that util-linux probably shouldn't be
breaking an interface like this, even when misused. However, that's a
separate discussion. We shouldn't be misusing it.
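
One way to make the probe independent of the fallocate CLI's flag handling
is to create the test file first and preallocate via the kernel interface
directly. This is a hedged sketch under assumptions (the helper name and the
use of os.posix_fallocate are illustrative, not Nova's actual fix):

```python
import os
import tempfile  # used in the usage example


def can_fallocate(directory):
    """Probe whether the filesystem under `directory` supports
    preallocation, creating the test file before allocating (which is
    what the -n misuse described above fails to do)."""
    path = os.path.join(directory, '.fallocate_test')
    fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # the file now exists
    try:
        os.posix_fallocate(fd, 0, 1)               # allocate 1 byte
        return True
    except OSError:
        return False                               # fs lacks fallocate
    finally:
        os.close(fd)
        os.unlink(path)                            # clean up the probe file
```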

** Affects: nova
 Importance: Undecided
 Assignee: Matthew Booth (mbooth-9)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543181

Title:
  Raw and qcow2 disks are never preallocated on systems with newer util-
  linux

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  imagebackend.Image._can_fallocate tests if fallocate works by running
  the following command:

fallocate -n -l 1 .fallocate_test

  where  exists, but .fallocate_test does not.
  This command line is copied from the code which actually fallocates a
  disk. However, while this works on systems with an older version of
  util-linux, such as RHEL 7, it does not work on systems with a newer
  version of util-linux, such as Fedora 23. The result of this is that
  this test will always fail, and preallocation with fallocate will be
  erroneously disabled.

  On RHEL 7, which has util-linux-2.23.2-26.el7.x86_64 on my system:

  $ fallocate -n -l 1 foo
  $ ls -lh foo
  -rw-r--r--. 1 mbooth mbooth 0 Feb  8 15:33 foo
  $ du -sh foo
  4.0K  foo

  On Fedora 23, which has util-linux-2.27.1-2.fc23.x86_64 on my system:

  $ fallocate -n -l 1 foo
  fallocate: cannot open foo: No such file or directory

  The F23 behaviour actually makes sense. From the fallocate man page:

-n, --keep-size
Do  not modify the apparent length of the file.

  This doesn't make any sense if the file doesn't exist. That is, the -n
  option makes sense when preallocating an existing disk image, but not
  when testing if fallocate works on a given filesystem and the test
  file doesn't already exist.

  You could also reasonably argue that util-linux probably shouldn't be
  breaking an interface like this, even when misused. However, that's a
  separate discussion. We shouldn't be misusing it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543181/+subscriptions



[Yahoo-eng-team] [Bug 1543073] [NEW] keystoneclient API misused in MappingsViewTests

2016-02-08 Thread Victor Stinner
Public bug reported:

The test
openstack_dashboard.dashboards.identity.mappings.tests.MappingsViewTests.test_index()
uses a keystoneclient.v3.contrib.federation.mappings.Mapping object
created by the data() function of
openstack_dashboard/test/test_data/keystone_data.py. Problem: data()
passes the keystoneclient.v3.contrib.federation.mappings.MappingManager
*class* to the Mapping constructor, whereas it should pass an instance.

keystoneclient.v3.contrib.federation.mappings.MappingManager constructor
(keystoneclient.base.Manager constructor) has a client parameter, but I
don't know how to create such a client: "instance of BaseClient descendant
for HTTP requests".

The bug is hidden on Python 2 in
horizon.tables.base.DataTable.get_object_display() by hasattr(datum,
'name'), because hasattr() ignores *all* exceptions.

I found this bug when running the test on Python 3, since hasattr() now
only catches AttributeError.

I proposed https://review.openstack.org/#/c/275265/ to mimic the
hasattr() Python 2 behaviour on Python 3 in
horizon.tables.base.DataTable.get_object_display().
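
The hasattr() difference the bug hinges on can be shown in a few lines.
Datum is a hypothetical stand-in for the misconfigured Mapping object; on
Python 3, hasattr() only swallows AttributeError, so any other exception
raised by an attribute escapes instead of being masked as on Python 2:

```python
class Datum(object):
    @property
    def name(self):
        # Hypothetical failure mimicking the manager-class-vs-instance bug.
        raise RuntimeError("manager class used where instance expected")


masked = None
try:
    hasattr(Datum(), 'name')
    masked = True    # Python 2 behaviour: the exception is swallowed
except RuntimeError:
    masked = False   # Python 3 behaviour: the exception escapes
```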

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543073

Title:
  keystoneclient API misused in MappingsViewTests

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The test
  
openstack_dashboard.dashboards.identity.mappings.tests.MappingsViewTests.test_index()
  uses a keystoneclient.v3.contrib.federation.mappings.Mapping object
  created by the data() function of
  openstack_dashboard/test/test_data/keystone_data.py. Problem: data()
passes the keystoneclient.v3.contrib.federation.mappings.MappingManager
*class* to the Mapping constructor, whereas it should pass an instance.

  keystoneclient.v3.contrib.federation.mappings.MappingManager
  constructor (keystoneclient.base.Manager constructor) has a client
  parameter, but I don't know how to create such a client: "instance of
  BaseClient descendant for HTTP requests".

  The bug is hidden on Python 2 in
  horizon.tables.base.DataTable.get_object_display() by hasattr(datum,
  'name'), because hasattr() ignores *all* exceptions.

  I found this bug when running the test on Python 3, since hasattr()
  now only catches AttributeError.

  I proposed https://review.openstack.org/#/c/275265/ to mimic the
  hasattr() Python 2 behaviour on Python 3 in
  horizon.tables.base.DataTable.get_object_display().

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543073/+subscriptions



[Yahoo-eng-team] [Bug 1527575] Re: failed to create user from domain scoped token

2016-02-08 Thread Itxaka Serrano
** Also affects: django-openstack-auth
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1527575

Title:
  failed to create user from domain scoped token

Status in django-openstack-auth:
  In Progress
Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  current tests are reporting
  .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.894 | .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.919 | ..Failed to create user from domain scoped token.
  2015-12-18 10:15:46.925 | .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.942 | .Failed to create user from domain scoped token.
  2015-12-18 10:15:46.997 | .Failed to create user from domain scoped token.

  but still passing.
  E.g. this one here:
  
http://logs.openstack.org/13/259013/3/check/gate-horizon-tox-py27dj18/dac7716/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/1527575/+subscriptions



[Yahoo-eng-team] [Bug 1543094] [NEW] [Pluggable IPAM] DB exceeded retry limit (RetryRequest) on create_router call

2016-02-08 Thread Pavel Bondar
Public bug reported:

Observed "DB exceeded retry limit" errors [1] in cases where pluggable IPAM
is enabled, on the master branch.
Each time the tests are re-run, different tests fail, so it looks like a
concurrency issue.
4 'DB exceeded retry limit' errors are observed in [1].
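
For context, the control flow behind the error message can be sketched as
follows. This is a hedged stand-in, not the oslo_db implementation: a
retriable conflict is retried up to a limit, after which "DB exceeded retry
limit" is raised.

```python
class RetryRequest(Exception):
    """Stand-in for a transient, retriable DB conflict."""


def with_retries(fn, max_retries=10):
    """Retry fn on RetryRequest up to max_retries times, then give up."""
    last = None
    for _ in range(max_retries):
        try:
            return fn()
        except RetryRequest as exc:
            last = exc       # real code sleeps/backs off before retrying
    raise RuntimeError("DB exceeded retry limit.") from last
```

Under heavy concurrency every attempt can keep conflicting, which is how the
limit gets exhausted and the error below surfaces.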

2016-02-04 11:55:59.944 15476 ERROR oslo_db.api [req-7ad8b69e-a851-4b6c-8c2c-33258c53bb54 admin -] DB exceeded retry limit.
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api Traceback (most recent call last):
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 137, in wrapper
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api return f(*args, **kwargs)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 519, in _create
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api obj = do_create(body)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 501, in do_create
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api request.context, reservation.reservation_id)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api six.reraise(self.type_, self.value, self.tb)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/api/v2/base.py", line 494, in do_create
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api return obj_creator(request.context, **kwargs)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_hamode_db.py", line 411, in create_router
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api self).create_router(context, router)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 200, in create_router
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api self.delete_router(context, router_db.id)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api six.reraise(self.type_, self.value, self.tb)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 196, in create_router
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api gw_info, router=router_db)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_gwmode_db.py", line 69, in _update_router_gw_info
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api context, router_id, info, router=router)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 429, in _update_router_gw_info
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api ext_ips)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_dvr_db.py", line 185, in _create_gw_port
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api ext_ips)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 399, in _create_gw_port
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api new_network_id, ext_ips)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/l3_db.py", line 310, in _create_router_gw_port
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api context.elevated(), {'port': port_data})
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/plugins/common/utils.py", line 149, in create_port
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api return core_plugin.create_port(context, {'port': port_data})
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1069, in create_port
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api result, mech_context = self._create_port_db(context, port)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py", line 1045, in _create_port_db
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api result = super(Ml2Plugin, self).create_port(context, port)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py", line 1193, in create_port
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api port_id)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/opt/stack/new/neutron/neutron/db/ipam_pluggable_backend.py", line 172, in allocate_ips_for_port_and_store
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api revert_on_fail=False)
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-02-04 11:55:59.944 15476 ERROR oslo_db.api six.reraise(self.type_,

[Yahoo-eng-team] [Bug 1543092] [NEW] glance-manage db purge failure

2016-02-08 Thread Abhishek Kekane
Public bug reported:

While running the glance-manage db purge command, it fails with the error
"This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'"
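
The usual workaround for this class of error is to fetch the batch of ids
first and then delete by that list, since MySQL rejects
DELETE ... WHERE id IN (SELECT ... LIMIT n). A hedged sketch, demonstrated
on an in-memory sqlite DB for brevity (table and column names mirror the
error below; this is not the actual glance fix):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE image_tags (id INTEGER PRIMARY KEY, deleted_at TEXT)")
conn.executemany("INSERT INTO image_tags (deleted_at) VALUES (?)",
                 [('2016-01-01',)] * 5)

# Step 1: select the batch of ids to purge (the LIMIT lives here).
batch = [row[0] for row in conn.execute(
    "SELECT id FROM image_tags WHERE deleted_at < '2016-02-04' LIMIT 2")]

# Step 2: delete by the explicit id list -- no LIMIT inside an IN subquery.
placeholders = ",".join("?" * len(batch))
conn.execute("DELETE FROM image_tags WHERE id IN (%s)" % placeholders, batch)
remaining = conn.execute("SELECT COUNT(*) FROM image_tags").fetchone()[0]
```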

$ glance-manage db purge 1

2016-02-05 01:46:01.902 DEBUG oslo_db.sqlalchemy.engines [req-1cfabb0d-e775-44ea-b253-d134b0abb303 None None] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION from (pid=26959) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:256
2016-02-05 01:46:02.340 INFO glance.db.sqlalchemy.api [req-1cfabb0d-e775-44ea-b253-d134b0abb303 None None] Purging deleted rows older than %(age_in_days)d day(s) from table %(tbl)s
2016-02-05 01:46:02.417 ERROR oslo_db.sqlalchemy.exc_filters [req-1cfabb0d-e775-44ea-b253-d134b0abb303 None None] DBAPIError exception wrapped from (pymysql.err.NotSupportedError) (1235, u"This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME subquery'") [SQL: u'DELETE FROM image_tags WHERE image_tags.id IN (SELECT image_tags.id \nFROM image_tags \nWHERE image_tags.deleted_at < %(deleted_at_1)s \n LIMIT %(param_1)s)'] [parameters: {u'deleted_at_1': datetime.datetime(2016, 2, 4, 9, 46, 1, 906022), u'param_1': 100}]
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters Traceback (most recent call last):
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters context)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 450, in do_execute
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters cursor.execute(statement, parameters)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 146, in execute
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters result = self._query(query)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/cursors.py", line 296, in _query
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters conn.query(q)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 819, in query
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1001, in _read_query_result
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters result.read()
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 1285, in read
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters first_packet = self.connection._read_packet()
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 965, in _read_packet
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters packet.check_error()
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File "/usr/local/lib/python2.7/dist-packages/pymysql/connections.py", line 394, in check_error
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters err.raise_mysql_exception(self._data)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 120, in 
raise_mysql_exception
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters 
_check_mysql_exception(errinfo)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters   File 
"/usr/local/lib/python2.7/dist-packages/pymysql/err.py", line 112, in 
_check_mysql_exception
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters raise 
errorclass(errno, errorvalue)
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters NotSupportedError: 
(1235, u"This version of MySQL doesn't yet support 'LIMIT & IN/ALL/ANY/SOME 
subquery'")
2016-02-05 01:46:02.417 TRACE oslo_db.sqlalchemy.exc_filters
2016-02-05 01:46:02.468 CRITICAL glance 
[req-1cfabb0d-e775-44ea-b253-d134b0abb303 None None] DBError: 
(pymysql.err.NotSupportedError) (1235, u"This version of MySQL doesn't yet 
support 'LIMIT & IN/ALL/ANY/SOME subquery'") [SQL: u'DELETE FROM image_tags 
WHERE image_tags.id IN (SELECT image_tags.id \nFROM image_tags \nWHERE 
image_tags.deleted_at < %(deleted_at_1)s \n LIMIT %(param_1)s)'] [parameters: 
{u'deleted_at_1': datetime.datetime(2016, 2, 4, 9, 46, 1, 906022), u'param_1': 
100}]

2016-02-05 01:46:02.468 
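MySQL error 1235 means the server rejects a LIMIT clause placed directly inside an IN (...) subquery. The usual workaround is to wrap the limited subquery in a derived table. Below is a hedged sketch of that rewrite's semantics using sqlite3 (which accepts both forms); the table and column names mirror the log above, but this is illustrative, not glance's actual fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE image_tags (id INTEGER PRIMARY KEY, deleted_at TEXT)")
# Ten rows: ids 1-5 are "old" (deleted before the cutoff), 6-10 are recent.
conn.executemany(
    "INSERT INTO image_tags (id, deleted_at) VALUES (?, ?)",
    [(i, "2016-01-01" if i <= 5 else "2016-03-01") for i in range(1, 11)])

# MySQL rejects LIMIT directly inside IN (...); wrapping the limited
# subquery in a derived table ("batch") sidesteps the restriction:
conn.execute("""
    DELETE FROM image_tags
    WHERE id IN (
        SELECT id FROM (
            SELECT id FROM image_tags
            WHERE deleted_at < '2016-02-04'
            LIMIT 3
        ) AS batch
    )
""")
remaining = conn.execute("SELECT COUNT(*) FROM image_tags").fetchone()[0]
print(remaining)  # 7: three of the five old rows removed in this batch
```

Purging in LIMIT-sized batches like this keeps each DELETE transaction small, which is presumably why the original statement used LIMIT at all.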

[Yahoo-eng-team] [Bug 1543040] [NEW] neutron.tests.unit.agent.linux.test_async_process.TestFailingAsyncProcess.test_failing_async_process_handle_error_once failed in Liberty periodic job

2016-02-08 Thread Ihar Hrachyshka
Public bug reported:

http://logs.openstack.org/periodic-stable/periodic-neutron-python27
-constraints-liberty/25e168f/testr_results.html.gz

  File "neutron/tests/unit/agent/linux/test_async_process.py", line 294, in 
test_failing_async_process_handle_error_once
self.assertEqual(1, handle_error_mock.call_count)
  File 
"/home/jenkins/workspace/periodic-neutron-python27-constraints-liberty/.tox/py27-constraints/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
self.assertThat(observed, matcher, message)
  File 
"/home/jenkins/workspace/periodic-neutron-python27-constraints-liberty/.tox/py27-constraints/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
raise mismatch_error
testtools.matchers._impl.MismatchError: 1 != 0

Also note that in the same run,
neutron.tests.unit.agent.linux.test_utils.AgentUtilsExecuteTest.test_encode_process_input
failed too.

Traceback (most recent call last):
  File "neutron/tests/unit/agent/linux/test_utils.py", line 141, in 
test_encode_process_input
self.mock_popen.assert_called_once_with(str_idata)
  File 
"/home/jenkins/workspace/periodic-neutron-python27-constraints-liberty/.tox/py27-constraints/local/lib/python2.7/site-packages/mock/mock.py",
 line 947, in assert_called_once_with
raise AssertionError(msg)
AssertionError: Expected 'communicate' to be called once. Called 2 times.

The failing test was introduced in Liberty as part of
https://review.openstack.org/#/c/272682/ backport.
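Both failures above are mock call-count assertions: the test expects exactly one invocation, but the backported code path triggers a second one. A minimal illustration (not neutron's actual test code) of how assert_called_once_with produces the "Called 2 times" error:

```python
from unittest import mock

proc = mock.Mock()
proc.communicate("data")
proc.communicate("data")  # an unexpected second call, as in the failing tests

try:
    # Raises AssertionError because call_count is 2, not 1:
    # "Expected 'communicate' to be called once. Called 2 times."
    proc.communicate.assert_called_once_with("data")
    failed = False
except AssertionError:
    failed = True
print(failed)  # True
```

The fix in such cases is either to make the production code call the mocked function once, or to relax the test to assert the calls that are actually expected (e.g. with assert_has_calls).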

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: gate-failure unittest

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543040

Title:
  
neutron.tests.unit.agent.linux.test_async_process.TestFailingAsyncProcess.test_failing_async_process_handle_error_once
  failed in Liberty periodic job

Status in neutron:
  New

Bug description:
  http://logs.openstack.org/periodic-stable/periodic-neutron-python27
  -constraints-liberty/25e168f/testr_results.html.gz

File "neutron/tests/unit/agent/linux/test_async_process.py", line 294, in 
test_failing_async_process_handle_error_once
  self.assertEqual(1, handle_error_mock.call_count)
File 
"/home/jenkins/workspace/periodic-neutron-python27-constraints-liberty/.tox/py27-constraints/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 350, in assertEqual
  self.assertThat(observed, matcher, message)
File 
"/home/jenkins/workspace/periodic-neutron-python27-constraints-liberty/.tox/py27-constraints/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 435, in assertThat
  raise mismatch_error
  testtools.matchers._impl.MismatchError: 1 != 0

  Also note that in the same run,
  
neutron.tests.unit.agent.linux.test_utils.AgentUtilsExecuteTest.test_encode_process_input
  failed too.

  Traceback (most recent call last):
File "neutron/tests/unit/agent/linux/test_utils.py", line 141, in 
test_encode_process_input
  self.mock_popen.assert_called_once_with(str_idata)
File 
"/home/jenkins/workspace/periodic-neutron-python27-constraints-liberty/.tox/py27-constraints/local/lib/python2.7/site-packages/mock/mock.py",
 line 947, in assert_called_once_with
  raise AssertionError(msg)
  AssertionError: Expected 'communicate' to be called once. Called 2 times.

  The failing test was introduced in Liberty as part of
  https://review.openstack.org/#/c/272682/ backport.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543040/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1513541] Re: Support sub-second accuracy in Fernet's creation timestamp

2016-02-08 Thread Morgan Fainberg
** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1513541

Title:
  Support sub-second accuracy in Fernet's creation timestamp

Status in OpenStack Identity (keystone):
  Won't Fix

Bug description:
  The fernet token provider has sub-second format, but it is currently
  truncated to .00Z. This is because the library (pyca/cryptography
  [0]) that keystone relies on for generating fernet tokens uses integer
  timestamps instead of floats, which loses sub-second accuracy. We
  should find a way to support sub-second accuracy in Fernet's creation
  timestamp so that we don't hit token revocation edge cases, like the
  ones documented here - https://review.openstack.org/#/c/227995/ .

  This will likely have to be a coordinated effort between the
  cryptography development community and the maintainers of the Fernet
  specification [1].

  This bug is to track that we include the corresponding fix (via
  version bump of cryptography) for keystone.

  
  [0] https://github.com/pyca/cryptography
  [1] https://github.com/fernet/spec
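The truncation described above follows from the Fernet spec encoding the issue time as a 64-bit unsigned integer of whole seconds, so any float timestamp loses its fractional part on packing. A small sketch of the spec's encoding (illustrative only, not keystone or pyca/cryptography code):

```python
import struct

ts = 1446747071.789  # example float timestamp with sub-second precision

# The Fernet spec stores the timestamp as a big-endian unsigned 64-bit
# integer of whole seconds, so int() discards the fractional part.
packed = struct.pack(">Q", int(ts))
recovered = struct.unpack(">Q", packed)[0]
print(recovered)  # 1446747071 -- the .789 is gone
```

Because the recoverable creation time is only second-granular, two tokens issued within the same second are indistinguishable by timestamp, which is what makes the revocation edge cases referenced above possible.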

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1513541/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521581] Re: v2 - "readOnly" key should be used in schemas

2016-02-08 Thread Kairat Kushaev
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Changed in: python-glanceclient
   Status: New => In Progress

** Changed in: python-glanceclient
   Importance: Undecided => Low

** Changed in: python-glanceclient
 Assignee: (unassigned) => zwei (suifeng20)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521581

Title:
  v2 - "readOnly" key should be used in schemas

Status in Glance:
  Fix Released
Status in python-glanceclient:
  In Progress

Bug description:
  Currently, the way object properties are labelled read-only is through
  the description, like so:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "description": "Status of the image (READ-ONLY)"
  }

  
  This is not the recommended way to indicate read-only status. The "readOnly" 
property should be used instead:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "readOnly": true,
  "description": "Status of the image"
  }

  
  Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543053] [NEW] The color of question circle in Checkbox is not correct

2016-02-08 Thread Kenji Ishii
Public bug reported:

The correct color is black, but at the moment the question circle in the
Checkbox is gray because of the HTML structure. The question circle should
be placed outside the label.

** Affects: horizon
 Importance: Undecided
 Assignee: Kenji Ishii (ken-ishii)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Kenji Ishii (ken-ishii)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543053

Title:
  The color of question circle in Checkbox is not correct

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The correct color is black, but at the moment the question circle in the
  Checkbox is gray because of the HTML structure. The question circle
  should be placed outside the label.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543053/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1521581] Re: v2 - "readOnly" key should be used in schemas

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/269406
Committed: 
https://git.openstack.org/cgit/openstack/python-glanceclient/commit/?id=22e3bf0234049982c83f13d3404ef0aaf6413aec
Submitter: Jenkins
Branch: master

commit 22e3bf0234049982c83f13d3404ef0aaf6413aec
Author: zwei 
Date:   Mon Jan 18 10:17:14 2016 +0800

v2 - "readOnly" key should be used in schemas

If it has a value of boolean true,
this keyword indicates that the instance property SHOULD NOT be changed,
and attempts by a user agent to modify the value of this property are 
expected to be rejected by a server.
The value of this keyword MUST be a boolean.
The default value is false.

Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15

Closes-Bug: #1521581
Depends-On: I279fba4099667d193609a31259057b897380d6f0
Change-Id: I96717506259c0d28500b8747369c47029b1dd9b6


** Changed in: python-glanceclient
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1521581

Title:
  v2 - "readOnly" key should be used in schemas

Status in Glance:
  Fix Released
Status in python-glanceclient:
  Fix Released

Bug description:
  Currently, the way object properties are labelled read-only is through
  the description, like so:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "description": "Status of the image (READ-ONLY)"
  }

  
  This is not the recommended way to indicate read-only status. The "readOnly" 
property should be used instead:

  "status": {
  "enum": [
  "queued",
  "saving",
  "active",
  "killed",
  "deleted",
  "pending_delete",
  "deactivated"
  ],
  "type": "string",
  "readOnly": true,
  "description": "Status of the image"
  }

  
  Further link for reference: 
http://json-schema.org/latest/json-schema-hypermedia.html#anchor15

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1521581/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543048] [NEW] support alternative password hashing in keystone

2016-02-08 Thread Morgan Fainberg
Public bug reported:

Once upon a time there was bug #862730 recommending that alternative
password hashing be supported; it was closed as invalid since hashing
became a baseline feature of Keystone's passwords. It would be generally
beneficial to support at the very least the passlib implementation of
bcrypt as an alternative to strictly sha512 based password hashing.
Ideally this should also take into account the relatively new player
scrypt.

NIST has standardized (afaict) on the SHA-2 based hashing, which should
remain the default. Architecture that will support some different
password hashing made available at least through passlib will make
keystone better in the long term, allowing for operators to determine
more than just the SHA-2 based cost.

The proposal is as follows:

  * Allow selected support of different password hashing algorithms from
within passlib architecturally

  * Expand to support bcrypt

  * Deprecate the "crypt_strength" option in favor of identifying the
cost when selecting the password hashing algorithm such as:
sha512::1 or bcrypt::12

  * Keep the default the same as today

  * Identify the password hash based upon the algorithm used, no
identifier = sha512 (this might not be required)

  * Add "py-bcrypt" or similar "preferred" backend(s) to extras in
setup.cfg
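The proposed "algorithm::cost" selector (sha512::1, bcrypt::12) could be parsed as sketched below. This is a hypothetical illustration of the configuration format only: the function names are invented, and hashlib's PBKDF2 stands in for the passlib-backed hashers the proposal actually calls for, with the cost interpreted as an iteration count.

```python
import hashlib
import os

def parse_hash_selector(selector, default_algo="sha512", default_cost=10000):
    """Split an 'algorithm::cost' selector, e.g. 'bcrypt::12' -> ('bcrypt', 12)."""
    if "::" in selector:
        algo, _, cost = selector.partition("::")
        return algo, int(cost)
    return selector or default_algo, default_cost

def hash_password(password, selector="sha512::10000"):
    # Stand-in for a passlib-backed hasher: PBKDF2-HMAC with the selected
    # digest; a real implementation would dispatch to passlib by algorithm.
    algo, cost = parse_hash_selector(selector)
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac(algo, password.encode(), salt, cost)
    return algo, cost, salt, digest

algo, cost, salt, digest = hash_password("secret", "sha512::25000")
print(algo, cost)  # sha512 25000
```

Encoding the cost in the selector, rather than in a separate option like "crypt_strength", keeps the algorithm and its tuning parameter together, which is what lets the old option be deprecated.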

** Affects: keystone
 Importance: Wishlist
 Status: New


** Tags: password security

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1543048

Title:
  support alternative password hashing in keystone

Status in OpenStack Identity (keystone):
  New

Bug description:
  Once upon a time there was bug #862730 recommending that alternative
  password hashing be supported; it was closed as invalid since hashing
  became a baseline feature of Keystone's passwords. It would be
  generally beneficial to support at the very least the passlib
  implementation of bcrypt as an alternative to strictly sha512 based
  password hashing. Ideally this should also take into account the
  relatively new player scrypt.

  NIST has standardized (afaict) on the SHA-2 based hashing, which
  should remain the default. Architecture that will support some
  different password hashing made available at least through passlib
  will make keystone better in the long term, allowing for operators to
  determine more than just the SHA-2 based cost.

  The proposal is as follows:

* Allow selected support of different password hashing algorithms
  from within passlib architecturally

* Expand to support bcrypt

* Deprecate the "crypt_strength" option in favor of identifying the
  cost when selecting the password hashing algorithm such as:
  sha512::1 or bcrypt::12

* Keep the default the same as today

* Identify the password hash based upon the algorithm used, no
  identifier = sha512 (this might not be required)

* Add "py-bcrypt" or similar "preferred" backend(s) to extras in
  setup.cfg

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1543048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543397] [NEW] fwaas tempest plugin needs an update

2016-02-08 Thread YAMAMOTO Takashi
Public bug reported:

fwaas tempest plugin needs an update.

an example of failure: http://logs.openstack.org/87/199387/18/check
/gate-tempest-dsvm-networking-
midonet-v2/4344770/logs/testr_results.html.gz

Traceback (most recent call last):
  File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 207, in test_create_show_delete_firewall
self.client.add_router_interface_with_subnet_id(
AttributeError: 'NetworkClient' object has no attribute 
'add_router_interface_with_subnet_id'

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: fwaas

** Tags added: fwaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543397

Title:
  fwaas tempest plugin needs an update

Status in neutron:
  New

Bug description:
  fwaas tempest plugin needs an update.

  an example of failure: http://logs.openstack.org/87/199387/18/check
  /gate-tempest-dsvm-networking-
  midonet-v2/4344770/logs/testr_results.html.gz

  Traceback (most recent call last):
File 
"/opt/stack/new/neutron-fwaas/neutron_fwaas/tests/tempest_plugin/tests/api/test_fwaas_extensions.py",
 line 207, in test_create_show_delete_firewall
  self.client.add_router_interface_with_subnet_id(
  AttributeError: 'NetworkClient' object has no attribute 
'add_router_interface_with_subnet_id'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1543397/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1420056] Re: Deleting last rule in Security Group does not update firewall

2016-02-08 Thread Hong Hui Xiao
*** This bug is a duplicate of bug 1460562 ***
https://bugs.launchpad.net/bugs/1460562

I just checked: the bug can't be reproduced with the latest code. After
checking the history, the bug was fixed at [1]. I will close this bug as
a duplicate.


[1] 
https://github.com/openstack/neutron/blob/764f018f50ac7cd42c29efeabaccbb5aec21f6f4/neutron/db/securitygroups_rpc_base.py#L208-L212

** This bug has been marked a duplicate of bug 1460562
   ipset can't be destroyed when last sg rule is deleted

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1420056

Title:
  Deleting last rule in Security Group does not update firewall

Status in neutron:
  In Progress

Bug description:
  
  Scenario:
   VM port with 1 Security Group with 1 egress icmp rule
  (example rule:
  {u'ethertype': u'IPv4', u'direction': u'egress', u'protocol': u'icmp', 
u'dest_ip_prefix': u'0.0.0.0/0'}
  )

  Steps:
   Delete the (last) rule from the above Security Group via Horizon

  Result:
  Find that iptables  shows the egress icmp rule even after its deletion

  Root Cause:
  In this scenario, security_group_info_for_devices() returns the following
  to the agent. Note that the 'security_groups' field is an empty dictionary
  {}; this causes _update_security_groups_info in the agent to NOT update the
  firewall.

  The security_groups field must contain the security_group_id as key
  with an empty list for the rules.

  
  {u'sg_member_ips': {}, u'devices': {u'ea19fb55-39bb-4e59-9d10-26c74eb3ff95': 
{u'status': u'ACTIVE', u'security_group_source_groups': [], u'binding:host_id': 
u'vRHEL29-1', u'name': u'', u'allowed_address_pairs': [{u'ip_address': 
u'10.0.0.201', u'mac_address': u'fa:16:3e:02:4b:b3'}, {u'ip_address': 
u'10.0.10.202', u'mac_address': u'fa:16:3e:02:4b:b3'}, {u'ip_address': 
u'10.0.20.203', u'mac_address': u'fa:16:3e:02:4b:b3'}], u'admin_state_up': 
True, u'network_id': u'f665dc8c-76da-4fde-8d26-535871487e4c', u'tenant_id': 
u'f5019aeae9e64443970bb0842e22e2b3', u'extra_dhcp_opts': [], 
u'security_group_rules': [{u'source_port_range_min': 67, u'direction': 
u'ingress', u'protocol': u'udp', u'ethertype': u'IPv4', u'port_range_max': 68, 
u'source_port_range_max': 67, u'source_ip_prefix': u'10.0.2.3', 
u'port_range_min': 68}], u'binding:vif_details': {u'port_filter': False}, 
u'binding:vif_type': u'bridge', u'device_owner': u'compute:nova', 
u'mac_address': u'fa:16:3e:02:4b:b3', u'device': u'tapea19fb5
 5-39', u'binding:profile': {}, u'binding:vnic_type': u'normal', u'fixed_ips': 
[u'10.0.2.6'], u'id': u'ea19fb55-39bb-4e59-9d10-26c74eb3ff95', 
u'security_groups': [u'849ee59c-d100-4940-930b-44e358775ed3'], u'device_id': 
u'2b330c29-c16f-4bbf-b80a-bd5bae41b514'}}, u'security_groups': {}} 
security_group_info_for_devices 
/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py:104

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1420056/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1527059] Re: Magic Search box doesn't work

2016-02-08 Thread Shu Muto
** Changed in: horizon
   Status: New => Fix Released

** Changed in: magnum-ui
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1527059

Title:
  Magic Search box doesn't work

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Magnum UI:
  Fix Released

Bug description:
  The Magic Search box is not shown in the table view of any panel.

  And browser console shows following error messages.

  Error: [$injector:strictdi] function($compile) is not using explicit 
annotation and cannot be invoked in strict mode
  http://errors.angularjs.org/1.3.7/$injector/strictdi?p0=function(%24compile)
  at angular.js:63
  at annotate (angular.js:3451)
  at Object.invoke (angular.js:4158)
  at angular.js:6480
  at forEach (angular.js:323)
  at Object. (angular.js:6478)
  at Object.invoke (angular.js:4180)
  at Object.enforcedReturnValue [as $get] (angular.js:4033)
  at Object.invoke (angular.js:4180)
  at angular.js:3998

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1527059/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542587] Re: keystone-manage commands should use sys.exit()

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274519
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=297a2c17374ffbca4f18f9fd39eb5863d57c92a0
Submitter: Jenkins
Branch: master

commit 297a2c17374ffbca4f18f9fd39eb5863d57c92a0
Author: Qiaowei Ren 
Date:   Mon Feb 1 14:36:10 2016 +0800

Replace exit() with sys.exit()

The exit() function is added by the site module so that it can be used
in the interactive interpreter. It's use is not recommended in
applications:

https://docs.python.org/2/library/constants.html#constants-added-by-the-site-module

Co-Authored-By: David Stanek 
Closes-Bug: #1542587
Change-Id: Ic6a1fe7f3925b0efd34111713cc56857757b29cf


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1542587

Title:
  keystone-manage commands should use sys.exit()

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  When changing exit() to sys.exit() in
  keystone.cmd.cli.DomainConfigUpload.main(), the following unit tests
  fail:

  File "keystone/tests/unit/test_cli.py", line 357, in test_config_upload
  File "keystone/tests/unit/test_cli.py", line 297, in test_no_overwrite_config
  File "keystone/tests/unit/test_cli.py", line 323, in test_config_upload
  File "keystone/tests/unit/test_cli.py", line 340, in test_config_upload

  the log is as follow:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "keystone/tests/unit/test_cli.py", line 357, in test_config_upload
  self.assertRaises(SystemExit, cli.DomainConfigUpload.main)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 434, in assertRaises
  self.assertThat(our_callable, matcher)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 445, in assertThat
  mismatch_error = self._matchHelper(matchee, matcher, message, verbose)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 495, in _matchHelper
  mismatch = matcher.match(matchee)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 108, in match
  mismatch = self.exception_matcher.match(exc_info)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_higherorder.py",
 line 62, in match
  mismatch = matcher.match(matchee)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 426, in match
  reraise(*matchee)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/matchers/_exception.py",
 line 101, in match
  result = matchee()
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/testtools/testcase.py",
 line 982, in __call__
  return self._callable_object(*self._args, **self._kwargs)
File "keystone/cmd/cli.py", line 696, in main
  sys.exit(status)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1062, in __call__
  return _mock_self._mock_call(*args, **kwargs)
File 
"/opt/stack/keystone/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", 
line 1118, in _mock_call
  raise effect
  keystone.tests.unit.core.UnexpectedExit

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542587/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543423] [NEW] RuntimeError: Exit code: -15; Stdin: ; Stdout: ; Stderr: Signal 15 (TERM) caught by ps (procps-ng version 3.3.10).

2016-02-08 Thread Yi Ba
Public bug reported:

I really don't know what happened here, but the message asks to report
this bug.

2016-02-09 03:04:20.683 12651 ERROR neutron Traceback (most recent call last):
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/usr/bin/neutron-openvswitch-agent", line 10, in 
2016-02-09 03:04:20.683 12651 ERROR neutron sys.exit(main())
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 
20, in main
2016-02-09 03:04:20.683 12651 ERROR neutron agent_main.main()
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/main.py", 
line 49, in main
2016-02-09 03:04:20.683 12651 ERROR neutron mod.main()
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/main.py",
 line 36, in main
2016-02-09 03:04:20.683 12651 ERROR neutron 
ovs_neutron_agent.main(bridge_classes)
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2068, in main
2016-02-09 03:04:20.683 12651 ERROR neutron agent.daemon_loop()
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 1991, in daemon_loop
2016-02-09 03:04:20.683 12651 ERROR neutron 
self.rpc_loop(polling_manager=pm)
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2016-02-09 03:04:20.683 12651 ERROR neutron self.gen.next()
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/polling.py", line 42, in 
get_polling_manager
2016-02-09 03:04:20.683 12651 ERROR neutron pm.stop()
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/polling.py", line 61, in stop
2016-02-09 03:04:20.683 12651 ERROR neutron self._monitor.stop()
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/async_process.py", line 129, in stop
2016-02-09 03:04:20.683 12651 ERROR neutron self._kill(kill_signal)
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/async_process.py", line 162, in _kill
2016-02-09 03:04:20.683 12651 ERROR neutron pid = self.pid
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/async_process.py", line 158, in pid
2016-02-09 03:04:20.683 12651 ERROR neutron run_as_root=self.run_as_root)
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 265, in 
get_root_helper_child_pid
2016-02-09 03:04:20.683 12651 ERROR neutron pid = find_child_pids(pid)[0]
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 196, in find_child_pids
2016-02-09 03:04:20.683 12651 ERROR neutron return []
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 204, in __exit__
2016-02-09 03:04:20.683 12651 ERROR neutron six.reraise(self.type_, 
self.value, self.tb)
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 188, in find_child_pids
2016-02-09 03:04:20.683 12651 ERROR neutron log_fail_as_error=False)
2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/agent/linux/utils.py", line 140, in execute
2016-02-09 03:04:20.683 12651 ERROR neutron raise RuntimeError(msg)
2016-02-09 03:04:20.683 12651 ERROR neutron RuntimeError: Exit code: -15; 
Stdin: ; Stdout: ; Stderr: Signal 15 (TERM) caught by ps (procps-ng version 
3.3.10).
2016-02-09 03:04:20.683 12651 ERROR neutron ps:display.c:66: please report this 
bug
2016-02-09 03:04:20.683 12651 ERROR neutron
2016-02-09 03:04:20.683 12651 ERROR neutron

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543423

Title:
  RuntimeError: Exit code: -15; Stdin: ; Stdout: ; Stderr: Signal 15
  (TERM) caught by ps (procps-ng version 3.3.10).

Status in neutron:
  New

Bug description:
  I really don't know what happened here, but the message asks to report
  this bug.

  2016-02-09 03:04:20.683 12651 ERROR neutron Traceback (most recent call last):
  2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/usr/bin/neutron-openvswitch-agent", line 10, in 
  2016-02-09 03:04:20.683 12651 ERROR neutron sys.exit(main())
  2016-02-09 03:04:20.683 12651 ERROR neutron   File 
"/opt/stack/neutron/neutron/cmd/eventlet/plugins/ovs_neutron_agent.py", line 
20, in main
  2016-02-09 03:04:20.683 12651 ERROR neutron agent_main.main()
  

[Yahoo-eng-team] [Bug 1502297] Re: [RFE] Improve SG performance as VMs/containers scale on compute node

2016-02-08 Thread Armando Migliaccio
This can probably be closed, but Kevin should chime in on any minor
performance improvement that may be left.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1502297

Title:
  [RFE] Improve SG performance as VMs/containers scale on compute node

Status in neutron:
  Fix Released

Bug description:
  Please refer to the comments in the following bug:

  https://bugs.launchpad.net/neutron/+bug/1492456

  In which it was suggested to handle improving SG programming
  performance as a RFE bug.

  To summarize the problem: when there are about 160 VMs, the neutron-
  ovs-agent takes more than 2 seconds per VM to program the iptables
  rules, mainly because of inefficiency in the iptables programming
  code.

  #VMs = 0, provision 10 new VMs on compute node
  cumulative time in _modify_rules : Before 0.34 After: 0.20

  #VMs = 10, provision 10 new VMs on compute node
  cumulative time in _modify_rules : Before 1.68 After: 0.94

  #VMs = 20, provision 10 new VMs on compute node
  cumulative time in _modify_rules : Before 4.27 After: 2.12

  #VMs = 40, provision 10 new VMs on compute node
  cumulative time in _modify_rules : Before 11.8 After: 6.44

  #VMs = 80, provision 10 new VMs on compute node
  cumulative time in _modify_rules : Before 20.2 After: 13.6

  #VMs = 160, provision 10 new VMs on compute node
  cumulative time in _modify_rules : Before 50 After: 23.2   < more than 2 
seconds per VM !!!
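
One common way to cut this per-VM cost (not necessarily the optimization that landed for this bug; the helper names are illustrative) is to batch the whole ruleset into a single iptables-restore invocation instead of spawning iptables once per rule:

```python
def apply_rules_batched(chains, run_cmd):
    """Build one iptables-restore payload for all chains and rules.
    Illustrative sketch only -- not neutron's actual _modify_rules;
    run_cmd is a hypothetical executor hook."""
    lines = ["*filter"]
    for chain in chains:
        lines.append(":%s - [0:0]" % chain)      # declare each chain
    for chain, rules in chains.items():
        for rule in rules:
            lines.append("-A %s %s" % (chain, rule))
    lines.append("COMMIT")
    payload = "\n".join(lines) + "\n"
    # One process spawn in total, instead of one per rule.
    run_cmd(["iptables-restore", "--noflush"], payload)
    return payload
```

With a ruleset of N rules this replaces N subprocess spawns with one, which is where most of the cumulative _modify_rules time goes at scale.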

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1502297/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1496802] Re: [RFE] Stop duplicating schema for common attributes

2016-02-08 Thread Armando Migliaccio
Technically speaking this bug report was addressed. It's the ones that
leverage this capability that need to be moved along.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1496802

Title:
  [RFE] Stop duplicating schema for common attributes

Status in neutron:
  Fix Released

Bug description:
  Several features have come up that can apply to multiple types of
  objects (qos, port security enabled, rbac, timestamps, tags) and each
  time we implement them we either duplicate schema across a bunch of
  tables or we have a single table with no referential integrity.

  We should add a new table that all of the Neutron resources relate to
  and then have new features that apply to multiple object types relate
  to the new neutron resources table. This prevents duplication of
  schema while maintaining referential integrity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1496802/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543253] Re: cells test_rebuild_instance_with_volume Tempest test fails

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277536
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=674531df15d30faea08a58421160816c56b4bfba
Submitter: Jenkins
Branch:master

commit 674531df15d30faea08a58421160816c56b4bfba
Author: Matthew Treinish 
Date:   Mon Feb 8 13:57:47 2016 -0500

Add new test_rebuild_instance_with_volume to cells exclude list

This commit adds a newly added tempest test to the cells exclude list.
The test is always failing on cells and it's because it's doing
operations that don't work with cells turned on. So let's exclude it
for now. Ideally all this skip logic will be in the tempest config and
we don't have to do this anymore.

Closes-Bug: #1543253
Change-Id: Ic9db51a41f95b0d18f97745a0da7e99fdfa21e51


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543253

Title:
  cells test_rebuild_instance_with_volume Tempest test fails

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This Tempest test will not pass as it relies on creating a
  security_group and using it which has never worked with cells.

  
  2016-02-08 11:22:30.886 | 
tempest.scenario.test_rebuild_instance_with_volume.TestRebuildInstanceWithVolume.test_rebuild_instance_with_volume[compute,id-36c3d492-f5bd-11e4-b9b2-1697f925ec7b,image,network,volume]
  2016-02-08 11:22:30.887 | 

  2016-02-08 11:22:30.887 | 
  2016-02-08 11:22:30.887 | Captured traceback-1:
  2016-02-08 11:22:30.887 | ~
  2016-02-08 11:22:30.887 | Traceback (most recent call last):
  2016-02-08 11:22:30.887 |   File "tempest/scenario/manager.py", line 108, 
in delete_wrapper
  2016-02-08 11:22:30.887 | delete_thing(*args, **kwargs)
  2016-02-08 11:22:30.887 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/services/compute/security_groups_client.py",
 line 75, in delete_security_group
  2016-02-08 11:22:30.887 | 'os-security-groups/%s' % security_group_id)
  2016-02-08 11:22:30.887 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 290, in delete
  2016-02-08 11:22:30.887 | return self.request('DELETE', url, 
extra_headers, headers, body)
  2016-02-08 11:22:30.888 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 642, in request
  2016-02-08 11:22:30.888 | resp, resp_body)
  2016-02-08 11:22:30.888 |   File 
"/opt/stack/new/tempest/.tox/all/local/lib/python2.7/site-packages/tempest_lib/common/rest_client.py",
 line 700, in _error_checker
  2016-02-08 11:22:30.888 | raise exceptions.BadRequest(resp_body, 
resp=resp)
  2016-02-08 11:22:30.888 | tempest_lib.exceptions.BadRequest: Bad request
  2016-02-08 11:22:30.888 | Details: {u'code': 400, u'message': u'Security 
group is still in use'}
  2016-02-08 11:22:30.888 | 

  http://logs.openstack.org/68/276868/2/gate/gate-tempest-dsvm-
  cells/60eafe4/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543253/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542972] Re: linux bridge device processing loop breaks on removals

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/277283
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=59d815c704d9adfefe85543b5c89ebb163d5bc44
Submitter: Jenkins
Branch:master

commit 59d815c704d9adfefe85543b5c89ebb163d5bc44
Author: Mr. Bojangles 
Date:   Sun Feb 7 21:57:00 2016 -0700

Make add_tap_interface resilient to removal

This patch makes add_tap_interface safe to race conditions
where the interface is removed in the middle of processing
by catching exceptions and checking to see if the interface
still exists. If it no longer exists it assumes the exception
was caused by the missing interface and returns False as it
would if the interface did not exist to begin with.

Change-Id: Ie0d89fc2584490b6985aee66da70bae027a130ed
Closes-bug: #1542972


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1542972

Title:
  linux bridge device processing loop breaks on removals

Status in neutron:
  Fix Released

Bug description:
  The code to handle new interfaces makes a check to see if the
  interface exists and then assumes it will exist for the rest of its
  operations. This assumption does not hold true because a device can be
  removed at any time (by Nova, the agents, whatever). So when a device
  is removed at the wrong time it will cause an exception that will
  trigger all of the ports to be rewired, which is an expensive
  operation and will cause the status of the ports on the server side to
  go ACTIVE->BUILD->ACTIVE.
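
The pattern the commit describes (attempt the operation, and on failure re-check whether the device still exists) can be sketched roughly as follows; `exists_fn` and `wire_fn` are hypothetical hooks, not the real neutron agent API:

```python
def add_tap_interface(name, exists_fn, wire_fn):
    """Race-safe sketch of the fix described above (illustrative
    names, not the actual neutron linux bridge agent code)."""
    if not exists_fn(name):
        return False
    try:
        wire_fn(name)
        return True
    except RuntimeError:
        if not exists_fn(name):
            # The device vanished mid-processing; report it like a
            # device that never existed, instead of letting the
            # exception trigger a full (expensive) port rewire.
            return False
        raise
```

The key point is that the existence check is repeated after the failure, so only genuinely unexplained errors propagate.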

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1542972/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1523343] Re: It failed to update the user information when we do it from pencil icon in keystone v2.

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/254011
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=6beefb372673e1d23f6cc4a9d787336086a7336e
Submitter: Jenkins
Branch:master

commit 6beefb372673e1d23f6cc4a9d787336086a7336e
Author: kenji-ishii 
Date:   Mon Dec 7 22:58:58 2015 +0900

Modify update user info from pencil icon in keystone v2

When we update the user info from the pencil icon in the User List,
the data doesn't have a 'project' attribute.
Therefore, data.pop('project') failed and an exception occurred.

The v2 API updates the user model and the default project separately.
And in the User List, the operator doesn't need to consider whether a
user has a default tenant.
So we should check whether data has a 'project' attribute and, if it
has none, update only the user info.

Change-Id: I979bedeb8ddb15d3f7f171660ec9df4875edb53a
Closes-Bug: #1523343


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1523343

Title:
  It failed to update the user information when we do it from pencil
  icon in keystone v2.

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  When we update the user info from the pencil icon in the User List, the
data doesn't have a 'project' attribute.
  Therefore, data.pop('project') failed.

  The v2 API updates the user model and the default project separately.
  So we should check whether the data has project info.
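
The guarded pop that the fix describes can be sketched like this; the two callables are hypothetical stand-ins for the keystone v2 client calls:

```python
def update_user(data, update_user_api, update_project_api):
    """Sketch of the guarded-pop pattern from the fix above.
    update_user_api / update_project_api are illustrative hooks,
    not Horizon's actual API wrappers."""
    data = dict(data)  # don't mutate the caller's form data
    if "project" in data:
        # Only touch the default project when the form provided one;
        # the pencil-icon edit form omits it entirely.
        update_project_api(data.pop("project"))
    update_user_api(data)
```

An unconditional `data.pop('project')` raises KeyError on the pencil-icon path; the `in` check makes both forms work.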

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1523343/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1459361] Re: VM created even though root disk creation failed.

2016-02-08 Thread Launchpad Bug Tracker
[Expired for OpenStack Compute (nova) because there has been no activity
for 60 days.]

** Changed in: nova
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1459361

Title:
  VM created even though root disk creation failed.

Status in OpenStack Compute (nova):
  Expired

Bug description:
  When creating a VM, qemu-nbd returned an error code and created an
  ephemeral disk of 5 GB instead of the requested 350 GB. Since nova
  rootwrap returned a non-zero code, I assume VM creation should have
  failed.

  
  1. Openstack version:
  ii  nova-compute 1:2014.2-fuel6.0~mira19  
  OpenStack Compute - compute node base

  2. Log files:
  attached nova-compute.log

  3. Reproduce steps:
  it happened once, don't know how to reproduce

  Expected result:
  VM ends up in error state

  Actual result:
  VM started, but with a smaller disk than requested

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1459361/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1223472] Re: firewall status does not become ACTIVE when a router does not exist

2016-02-08 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1223472

Title:
  firewall status does not become ACTIVE when a router does not exist

Status in neutron:
  Expired

Bug description:
  I use FWaaS combined with OVS plugin and l3-agent.

  When I created a firewall with admin user, the status of the firewall
  didn't change from PENDING_CREATE to ACTIVE even after waiting for one
  or two minutes.

  On the other hand, when I created a firewall with non-admin user, the
  status of the created firewall became ACTIVE at most after 30 seconds
  (probably shorter than this).

  What is the difference between admin and non-admin?

  The detail command line log is here.
  http://paste.openstack.org/show/46493/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1223472/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1537903] Re: Glance metadefs for OS::Nova::Instance should be OS::Nova::Server

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275747
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=e19bb206bb0332c6a772d44654fa474642ddc429
Submitter: Jenkins
Branch:master

commit e19bb206bb0332c6a772d44654fa474642ddc429
Author: Justin Pomeroy 
Date:   Wed Feb 3 09:06:45 2016 -0600

Use OS::Nova::Server resource type for instance metadata

Horizon currently uses the OS::Nova::Instance resource type when
querying for instance metadata, but it should use OS::Nova::Server
to align with Heat and Searchlight.

See also:
  https://review.openstack.org/272271

Closes-Bug: #1537903
Change-Id: I1deab8ed74515d08301e68bd2c75604d35592c50


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1537903

Title:
  Glance metadefs for OS::Nova::Instance should be OS::Nova::Server

Status in Glance:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The metadata definitions in etc/metadefs allow each namespace to be
  associated with a resource type in OpenStack.  Now that Horizon
  supports adding metadata to instances (it just got in during the mitaka
  cycle, so it is unreleased), I realized that we used OS::Nova::Instance
  instead of OS::Nova::Server in Glance.  This doesn’t align with Heat
  [0] or Searchlight [1].

  Glance Issue:

  There are a couple of metadef files that have OS::Nova::Instance that
  need to change to OS::Nova::Server.  I also see that OS::Nova::Instance
  is in one of the db scripts.  That script simply adds some initial
  "resource types" to the database. [3]. It should be noted that there
  is no hard dependency on that resource type in the DB script. You can
  add new resource types at any time via API or JSON files and they are
  automatically added.

  I'm not sure if the change to the db script needs to be done in a
  different patch or not, but that can easily be accommodated.

  Horizon Issue:

  The instance update metadata action and NG launch instance should
  retrieve OS::Nova::Server instead.  The Horizon patch shouldn't merge
  until the glance patch merges, but there is not an actual hard
  dependency between the two.

  [0] 
http://docs.openstack.org/developer/heat/template_guide/openstack.html#OS::Nova::Server
  [1] 
https://github.com/openstack/searchlight/blob/master/searchlight/elasticsearch/plugins/nova/servers.py#L35
  [3] 
https://github.com/openstack/glance/search?utf8=%E2%9C%93=%22OS%3A%3ANova%3A%3AInstance%22

  Finally:

  It should be noted that updating namespaces in Glance is already
  possible with glance-manage. E.g.

  ttripp@ubuntu:/opt/stack/glance$ glance-manage db_load_metadefs etc/metadefs 
-h
  usage: glance-manage db_load_metadefs [-h]
[path] [merge] [prefer_new] [overwrite]

  positional arguments:
path
merge
prefer_new
overwrite

  So, you just have to call:

  ttripp@ubuntu:/opt/stack/glance$ glance-manage db_load_metadefs
  etc/metadefs true true

  See also: https://youtu.be/zJpHXdBOoeM

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1537903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1541774] Re: create response for networks and ports is missing extensions

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276219
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=b6d091279fffe441f6d28e687313ad031a24a3ec
Submitter: Jenkins
Branch:master

commit b6d091279fffe441f6d28e687313ad031a24a3ec
Author: Kevin Benton 
Date:   Wed Feb 3 23:49:11 2016 -0800

ML2: Call _dict_extend in create_(net|port) ops

The extensions fields were not being added to the response being
sent back to the user for ports and networks that weren't from
ML2 extensions (e.g. availability zones). This created an
inconsistent response between creates and updates/gets. It also
resulted in incomplete data in the AMQP create notification emitted
in the API layer.

This patch adjusts ML2 to call the _apply_dict_extend_functions
method after creating ports and networks. To make this work, another
DB lookup to get the latest model state was necessary. However, this
is part of an already expensive operation (create) so the performance
impact should be minimal.

This issue stems from the fact that db_base_plugin_v2 does not
process extensions when its create_port, create_network methods
are called. This original skipping behavior was added back in
patch If0f0277191884aab4dcb1ee36826df7f7d66a8fa as a performance
improvement to deal with dictionary extension functions that
performed DB lookups. However, at this point the dictionary
extension functions should have been optimized to skip any DB
lookups to avoid the massive performance penalties they incur
during list operations.

An alternative to this patch was to adjust the db_base_plugin_v2
to stop skipping extensions. However, because this is usually
called by inheriting plugins before they process extensions
for the new port/network, the extensions do not yet have the
required information to extend the dict and will fail. So each
core plugin will need to apply similar logic to support extensions
that rely on the extend_dict functions.

Closes-Bug: #1541774
Change-Id: Iea2c0e7f9ee5eeae28b99797874ca8a8e5790ec2


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541774

Title:
  create response for networks and ports is missing extensions

Status in neutron:
  Fix Released

Bug description:
  when issuing a create port or a create network, any extensions loaded
  via the dictionary extension mechanisms are not included in the
  response.
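
The dict-extend flow the fix restores can be sketched as follows; the function and variable names here are illustrative, not neutron's actual plugin API:

```python
# Registry of functions that add extension fields to a resource dict
# (e.g. availability zones) -- a minimal sketch of the mechanism.
_dict_extend_funcs = []

def register_dict_extend(func):
    _dict_extend_funcs.append(func)

def create_network(session, net):
    """Persist the base model, then extend the response dict so the
    *create* reply carries the same extension fields that GET and
    UPDATE replies would."""
    session.append(dict(net))   # stand-in for the DB insert
    result = dict(net)
    for func in _dict_extend_funcs:
        func(result, net)       # each extension decorates the response
    return result
```

Skipping the extend step on create is what produced the inconsistent responses described in the bug: creates lacked fields that subsequent gets included.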

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541774/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1503396] Re: glance wizard form var passing broken

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/251670
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=50a868094f3da1062bd8c476fda16fc18bad605c
Submitter: Jenkins
Branch:master

commit 50a868094f3da1062bd8c476fda16fc18bad605c
Author: Richard Jones 
Date:   Tue Dec 1 17:11:13 2015 +1100

Fix Create Image angularjs code

This code intended to allow pre-filling of the form values through
Django's handling of GET parameters, but it did not do so as the
code was incorrect. This was not previously noticed as the angularjs
code wasn't actually being executed. Once it was, it broke. Ironic,
I know.

Change-Id: I8d641de9246fd4f43c96bf85d47bb648f4401def
Closes-Bug: 1503396


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1503396

Title:
  glance wizard form var passing broken

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  A user of the app-catalog-ui plugin reported that horizon is broken
  when passing form vars from the app catalog to the glance wizard.

  I've narrowed it down to commit:
  0f39dea2c7e722781fe3374abbaada781488c2cc

  Reverting the commit fixes the issue:

  git diff
  
3d156f04759f27b8252915d09c09aa8fcf2a70d7..0f39dea2c7e722781fe3374abbaada781488c2cc
  | patch -R -p1

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1503396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1525915] Re: [OSSA 2016-006] Normal user can change image status if show_multiple_locations has been set to true (CVE-2016-0757)

2016-02-08 Thread Tristan Cacqueray
** Description changed:

- This issue is being treated as a potential security risk under embargo.
- Please do not make any public mention of embargoed (private) security
- vulnerabilities before their coordinated publication by the OpenStack
- Vulnerability Management Team in the form of an official OpenStack
- Security Advisory. This includes discussion of the bug or associated
- fixes in public forums such as mailing lists, code review systems and
- bug trackers. Please also avoid private disclosure to other individuals
- not already approved for access to this information, and provide this
- same reminder to those who are made aware of the issue prior to
- publication. All discussion should remain confined to this private bug
- report, and any proposed fixes should be added to the bug as
- attachments.
- 
- --
- 
  User (non admin) can set image back to queued state by deleting
  location(s) from image when "show_multiple_locations" config parameter
  has been set to true.
  
  This breaks the immutability promise glance has similar way as described
  in OSSA 2015-019 as the image gets transitioned from active to queued
  and new image data can be uploaded.
  
  ubuntu@devstack-02:~/devstack$ glance image-show 
f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc
  
+--+--+
  | Property | Value
|
  
+--+--+
  | checksum | eb9139e4942121f22bbc2afc0400b2a4 
|
  | container_format | ami  
|
  | created_at   | 2015-12-14T09:58:54Z 
|
  | disk_format  | ami  
|
  | id   | f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc 
|
  | locations| [{"url": 
"file:///opt/stack/data/glance/images/f4bb4c9e-71ba-4a8c-b70a-  |
  |  | 640dbe37b3bc", "metadata": {}}]  
|
  | min_disk | 0
|
  | min_ram  | 0
|
  | name | cirros-test  
|
  | owner| ab69274aa31a4fba8bf559af2b0b98bd 
|
  | protected| False
|
  | size | 25165824 
|
  | status   | active   
|
  | tags | []   
|
  | updated_at   | 2015-12-14T09:58:54Z 
|
  | virtual_size | None 
|
  | visibility   | private  
|
  
+--+--+
  ubuntu@devstack-02:~/devstack$ glance location-delete --url 
file:///opt/stack/data/glance/images/f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc 
f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc
  
  ubuntu@devstack-02:~/devstack$ glance image-show 
f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc
  +--+--+
  | Property | Value|
  +--+--+
  | checksum | eb9139e4942121f22bbc2afc0400b2a4 |
  | container_format | ami  |
  | created_at   | 2015-12-14T09:58:54Z |
  | disk_format  | ami  |
  | id   | f4bb4c9e-71ba-4a8c-b70a-640dbe37b3bc |
  | locations| []   |
  | min_disk | 0|
  | min_ram  | 0|
  | name | cirros-test  |
  | owner| ab69274aa31a4fba8bf559af2b0b98bd |
  | protected| False|
  | size | None |
  | status   | queued   |
  | tags | []   |
  | updated_at   | 2015-12-14T13:43:23Z 

[Yahoo-eng-team] [Bug 1543149] [NEW] Reserve host pages on compute nodes

2016-02-08 Thread sahid
Public bug reported:

In some use cases we may want to prevent Nova from using some amount of
hugepages on compute nodes (for example, when using ovs-dpdk). We should
provide an option 'reserved_memory_pages' which gives a way to specify
the number of pages we want to reserve for third-party components.

** Affects: nova
 Importance: Undecided
 Assignee: sahid (sahid-ferdjaoui)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => sahid (sahid-ferdjaoui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543149

Title:
  Reserve host pages on compute nodes

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  In some use cases we may want to prevent Nova from using some amount
  of hugepages on compute nodes (for example, when using ovs-dpdk). We
  should provide an option 'reserved_memory_pages' which gives a way to
  specify the number of pages we want to reserve for third-party
  components.
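
The accounting such an option implies can be sketched in a few lines; the names and dict shape are assumptions, not the real nova huge-page tracking code:

```python
def available_pages(total_pages, reserved_pages):
    """Subtract reserved huge pages (e.g. pages pinned for an
    ovs-dpdk PMD) from what may be handed to guests.  Keys are page
    sizes; illustrative sketch only."""
    return {size: max(total - reserved_pages.get(size, 0), 0)
            for size, total in total_pages.items()}
```

A host exposing 1024 2M pages and 8 1G pages, with 2 of the 1G pages reserved for a third-party component, would then advertise only 6 1G pages to the scheduler.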

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543149/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1540208] Re: CSRF mechanism is not safe.

2016-02-08 Thread Tristan Cacqueray
** Changed in: ossa
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540208

Title:
  CSRF mechanism is not safe.

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Security Advisory:
  Won't Fix

Bug description:
  I'm using Burp Suite to check the security of horizon 8.0.0.0. The
  CSRF mechanism is not safe.

  I saw: if csrftoken equals csrfmiddlewaretoken ==> the request is
  valid.

  Example: Do update network's name.

  In the first request:
   - I got csrftoken and csrfmiddlewaretoken: PvVPmsOEqepSWnWgJa1GKYtBxcSXMTu1
   - network's name: attt_net_test_129

  Then I changed csrftoken and csrfmiddlewaretoken to "1", and the
  network's name value to "attt_net_test_121".

  Finally, I sent the request ==> the network was updated successfully.
  (see attached file)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540208/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534522] Re: [django 1.9] uses django.utils.importlib

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/268042
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=8d1ec570458a7bb5171958e616b7d1c570147bf5
Submitter: Jenkins
Branch:master

commit 8d1ec570458a7bb5171958e616b7d1c570147bf5
Author: Thomas Goirand 
Date:   Fri Jan 15 17:42:47 2016 +0800

[Django 1.9] Stop using django.utils.importlib

Horizon still uses django.utils.importlib which is removed from Django
1.9. We should use:
from importlib import import_module

instead of:
from django.utils.importlib import import_module

Change-Id: I422e14546468cb9c5627e746023948aab107a338
Closes-Bug: #1534522
Partially-Implements: blueprint drop-dj17
Co-Authored-By: Rob Cresswell 


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1534522

Title:
  [django 1.9] uses django.utils.importlib

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  Horizon still uses django.utils.importlib which is removed from Django
  1.9. We should use:

  from importlib import import_module

  instead of:

  from django.utils.importlib import import_module
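
The stdlib form is a drop-in replacement for this use case; a quick check:

```python
from importlib import import_module

# The stdlib importlib provides the same import_module that the
# removed django.utils.importlib shim used to re-export, so call
# sites only need their import line changed.
json = import_module("json")
print(json.dumps([1, 2]))
```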

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1534522/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543123] [NEW] PCI device not freed on failed migration

2016-02-08 Thread Ludovic Beliveau
Public bug reported:

Bug found in latest master.

It has been observed that a failed cold migration for an instance with
PCI devices cause those PCI devices to not be freed (still in used) by
the PciDevTracker.

The audit task (ResourceTracker.update_available_resource) gets the list
of in progress migration (from
migration_get_in_progress_by_host_and_node) and attempt to clean the
allocation of PCI devices (PciDevTracker.clean_usage).  In this case PCI
devices that are not part of any migration are freed up and put back in
the pool of available PCI devices.

The problem is that migration_get_in_progress_by_host_and_node only
filters out migrations in state ['confirmed', 'reverted', 'error'].
Migrations in state 'failed' are reported as in progress.
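The faulty filter can be illustrated with a small sketch (hypothetical names mirroring the report, not the actual nova code):

```python
# 'failed' is missing from the list of finished states, so a failed
# migration is treated as still in progress and its PCI devices stay
# marked as used.
FINISHED_STATES = ['confirmed', 'reverted', 'error']

def get_in_progress(migrations):
    # Anything whose status is not a finished state is "in progress".
    return [m for m in migrations if m['status'] not in FINISHED_STATES]

migrations = [{'id': 1, 'status': 'failed'}, {'id': 2, 'status': 'confirmed'}]
print(get_in_progress(migrations))  # -> [{'id': 1, 'status': 'failed'}]
```

Adding 'failed' to the finished states would let clean_usage release the devices.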

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543123

Title:
  PCI device not freed on failed migration

Status in OpenStack Compute (nova):
  New

Bug description:
  Bug found in latest master.

  It has been observed that a failed cold migration for an instance with
  PCI devices causes those PCI devices not to be freed (still in use) by
  the PciDevTracker.

  The audit task (ResourceTracker.update_available_resource) gets the
  list of in-progress migrations (from
  migration_get_in_progress_by_host_and_node) and attempts to clean the
  allocation of PCI devices (PciDevTracker.clean_usage).  In this case,
  PCI devices that are not part of any migration are freed up and put
  back in the pool of available PCI devices.

  The problem is that migration_get_in_progress_by_host_and_node only
  filters out migrations in state ['confirmed', 'reverted', 'error'].
  Migrations in state 'failed' are reported as in progress.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543123/+subscriptions



[Yahoo-eng-team] [Bug 1540245] Re: Navigation for plugins page structure doesn't work for tests

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/274666
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=f93c39258694d880b930823b5a1120114dda65bc
Submitter: Jenkins
Branch:master

commit f93c39258694d880b930823b5a1120114dda65bc
Author: Timur Sufiev 
Date:   Mon Feb 1 16:44:24 2016 +0300

Fix i9n tests pluggable nav structure

Since JSON converted into a Python object contains only dicts and lists,
when searching for leaf-nodes in the sidebar nav structure we should
treat lists the same way as tuples. Also remove the 'Data Processing'
section from CORE_PAGE_STRUCTURE, which hid this issue before with the 2
initial sahara-dashboard tests.

Change-Id: I5b84fd3b769ae559cea484319b6b8956b80f99ae
Closes-Bug: #1540245


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1540245

Title:
  Navigation for plugins page structure doesn't work for tests

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  I can't override navigation structure for sahara dashboard in tests.
  Fresh sahara navigation structure looks like this (from config):

  plugin_page_structure={
  "Project":
  {
  "Data Processing":
  {
  "_":
  (
  "Clusters",
  "Jobs",
  "Cluster Templates",
  "Node Group Templates",
  "Job Templates",
  "Job Binaries",
  "Data Sources",
  "Image Registry",
  "Plugins"
  )
  }
  }
  }

  In horizon 
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L333)
  this structure is present as a string value.
  After decoding it using json.loads, the structure contains dicts and
  lists, but the recursive function 
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/test/integration_tests/pages/navigation.py#L320)
  doesn't process lists (only tuples).
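The fix amounts to accepting lists wherever tuples were expected when walking the decoded structure. A sketch (hypothetical helper, not Horizon's actual code), using the structure from this report:

```python
import json

def find_leaves(node):
    """Collect leaf page names from a nested nav structure."""
    if isinstance(node, (list, tuple)):  # the fix: also match list
        return [leaf for item in node for leaf in find_leaves(item)]
    if isinstance(node, dict):
        return [leaf for value in node.values() for leaf in find_leaves(value)]
    return [node]  # a leaf string

# json.loads turns the config's tuples into lists, so the walk must
# handle both types the same way.
structure = json.loads('{"Project": {"Data Processing": {"_": ["Clusters", "Jobs"]}}}')
print(find_leaves(structure))  # -> ['Clusters', 'Jobs']
```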

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1540245/+subscriptions



[Yahoo-eng-team] [Bug 1538518] Re: Avoid using `len(x)` to check if x is empty

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/276886
Committed: 
https://git.openstack.org/cgit/openstack/rally/commit/?id=efd641caf0ad0bba0c6ad0993a5b65d08078966d
Submitter: Jenkins
Branch:master

commit efd641caf0ad0bba0c6ad0993a5b65d08078966d
Author: Steve Wilkerson 
Date:   Fri Feb 5 13:38:51 2016 -0600

Use booleans to check for empty lists

Changed from checking for empty collections with
len(list) to checking the boolean value of the list
instead

Closes-Bug: #1538518
Change-Id: Ib601a83b8b6e19ab78690f8ca2834e7ef622cb9b


** Changed in: rally
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1538518

Title:
  Avoid using `len(x)` to check if x is empty

Status in Glance:
  In Progress
Status in OpenStack Compute (nova):
  In Progress
Status in Rally:
  Fix Released
Status in OpenStack Object Storage (swift):
  In Progress

Bug description:
  `len()` is used to check if a collection (e.g., a dict, list, set,
  etc.) has items. As collections have a boolean representation too, it
  is better to directly check for true / false.

  rally/common/utils.py
  rally/task/utils.py
  rally/task/validation.py
  tests/unit/doc/test_specs.py

  This change will be more obvious and more readable, and is probably
  cheaper than computing a set intersection, too.
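The cleanup boils down to the following (a generic illustration, not the actual rally code):

```python
items = []

# Works, but noisier: compares a computed length to zero.
if len(items) == 0:
    status_verbose = "empty"

# Idiomatic: empty built-in collections are falsy.
if not items:
    status_idiomatic = "empty"

print(bool([]), bool([0]))  # -> False True
```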

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1538518/+subscriptions



[Yahoo-eng-team] [Bug 1501366] Re: libvirtError: Error while building firewall: Some rules could not be created for interface

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/246581
Committed: 
https://git.openstack.org/cgit/openstack/nova/commit/?id=17264ee6a66dd60f9af1aa3a737b17f290fc7e19
Submitter: Jenkins
Branch:master

commit 17264ee6a66dd60f9af1aa3a737b17f290fc7e19
Author: Chet Burgess 
Date:   Tue Nov 17 11:55:55 2015 -0800

ebtables/libvirt workaround

Ideally nova is run with libvirt 1.2.11 or later to guarantee that
libvirt is calling ebtables with --concurrent. Since we can't
always guarantee this, we have created this workaround.

The workaround is extremely hacky and not recommended, but those
who simply have no other way to address this bug should do the
following:

 * Copy /sbin/ebtables to /sbin/ebtables.real
 * Copy the ebtables.workaround script to /sbin/ebtables

Caution: future OS-level updates and packages may overwrite the
above changes. It is recommended that users upgrade to libvirt 1.2.11.

The workaround script was copied from devstack and was originally
written by sdague.

Change-Id: Icdffc59d68b73a6df22ce138558d6e23e1c96336
Closes-Bug: #1501366


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1501366

Title:
  libvirtError: Error while building firewall: Some rules could not be
  created for interface

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  http://logs.openstack.org/20/223320/3/check/gate-tempest-dsvm-
  
nova-v20-api/ce04943/logs/screen-n-cpu.txt.gz?level=TRACE#_2015-09-29_21_38_32_631

  2015-09-29 21:38:32.631 ERROR nova.compute.manager 
[req-f57dc3ad-e960-4a18-8290-b01ab46b256b 
tempest-SecurityGroupsTestJSON-816336435 
tempest-SecurityGroupsTestJSON-717971163] [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] Instance failed to spawn
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] Traceback (most recent call last):
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2155, in _build_resources
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] yield resources
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 2009, in 
_build_and_run_instance
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] block_device_info=block_device_info)
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 2444, in spawn
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] block_device_info=block_device_info)
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4516, in 
_create_domain_and_network
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] xml, pause=pause, power_on=power_on)
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 4446, in _create_domain
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] guest.launch(pause=pause)
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 141, in launch
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] self._encoded_xml, errors='ignore')
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 195, in 
__exit__
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] six.reraise(self.type_, self.value, 
self.tb)
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 
"/opt/stack/new/nova/nova/virt/libvirt/guest.py", line 136, in launch
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91] return 
self._domain.createWithFlags(flags)
  2015-09-29 21:38:32.631 24733 ERROR nova.compute.manager [instance: 
00a32e3b-4fb9-4c95-951e-febc02c1ba91]   File 

[Yahoo-eng-team] [Bug 1543166] [NEW] Tables have a bunch of extra padding

2016-02-08 Thread Rob Cresswell
Public bug reported:

There is twice the usual padding around the tables due to some recent
SCSS refactoring. This should be removed.

** Affects: horizon
 Importance: Low
 Assignee: Rob Cresswell (robcresswell)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543166

Title:
  Tables have a bunch of extra padding

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  There is twice the usual padding around the tables due to some recent
  SCSS refactoring. This should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543166/+subscriptions



[Yahoo-eng-team] [Bug 1543105] [NEW] Artifacts code conceals what/where errors happened during plugin loading

2016-02-08 Thread Kirill Zaitsev
Public bug reported:

Currently the exception is converted to a string and we get something like:
"2016-02-08 15:47:53.218 12604 ERROR glance.common.artifacts.loader [-] Could 
not load plugin from assets.glance_image.v1.package: __init__() got multiple 
values for keyword argument 'mutable'"
and the stacktrace shown later conceals the place where the actual error
happened.

This doesn't help much with debugging plugin code. It would be great to
have the relevant stacktrace printed on errors in artifact definitions.
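One way to preserve the original traceback is to log the failure with exc_info so the failing frame inside the plugin definition is shown. A hedged sketch (illustrative names, not the actual glance loader code):

```python
import logging

LOG = logging.getLogger(__name__)

def load_plugin(name, loader):
    """Call a plugin loader, logging the full traceback on failure."""
    try:
        return loader()
    except Exception as exc:
        # exc_info=True attaches the complete traceback to the log
        # record, pointing at the actual failing line in the plugin,
        # instead of flattening the error to a bare string.
        LOG.error("Could not load plugin from %s: %s", name, exc,
                  exc_info=True)
        raise
```

`LOG.exception(...)` inside the `except` block achieves the same effect.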

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: artifacts

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1543105

Title:
  Artifacts code conceals what/where errors happened during plugin
  loading

Status in Glance:
  New

Bug description:
  Currently the exception is converted to a string and we get something like:
  "2016-02-08 15:47:53.218 12604 ERROR glance.common.artifacts.loader [-] Could 
not load plugin from assets.glance_image.v1.package: __init__() got multiple 
values for keyword argument 'mutable'"
  and the stacktrace shown later conceals the place where the actual
  error happened.

  This doesn't help much with debugging plugin code. It would be great
  to have the relevant stacktrace printed on errors in artifact
  definitions.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1543105/+subscriptions



[Yahoo-eng-team] [Bug 1534273] Re: Keystone configuration options for nova.conf missing from Redhat/CentOS install guide

2016-02-08 Thread Matt Kassawara
** Changed in: openstack-manuals
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1534273

Title:
  Keystone configuration options for nova.conf missing from
  Redhat/CentOS install guide

Status in OpenStack Compute (nova):
  Invalid
Status in openstack-manuals:
  Invalid

Bug description:
  Hi,

  Launching a new/first instance fails with an error. This is a new OpenStack 
Liberty 2-node deployment, following the official documentation:
  
http://docs.openstack.org/liberty/install-guide-rdo/launch-instance-public.html

  
  Command used::
  [root@xepcloud ~]# nova boot --flavor m1.tiny --image cirros --nic 
net-id=8fb32974-8dcf-47c8-a42b-a890e47725f4 --security-group default --key-name 
mykey public-instance

  Error::
  ERROR (ClientException): Unexpected API Error. Please report this at 
http://bugs.launchpad.net/nova/ and attach the Nova API 
  log if possible.
   (HTTP 500) (Request-ID: 
req-a2d56513-0c19-46e8-9a45-d63fcee17224)

  Note:
  Could this be related to networking config? I was not sure what IP to use for 
OVERLAY_INTERFACE_IP_ADDRESS, so I used mgmt IP but I have a second physical 
interface for public access with no ip assigned.

  
  nova API logs:

  
  2016-01-14 11:00:13.292 11528 INFO nova.osapi_compute.wsgi.server 
[req-c3f11e04-e83e-46c8-9f3b-7a1709f5cc5d 7d4aa0f0645248f3b49ec2a4956a2535 
ee13d79dc9954b458a8d0f173bd63ccb - - -] 192.168.178.90 "GET 
/v2/ee13d79dc9954b458a8d0f173bd63ccb/flavors?is_public=None HTTP/1.1" status: 
200 len: 1477 time: 0.0188160
  2016-01-14 11:00:13.311 11528 INFO nova.osapi_compute.wsgi.server 
[req-696093c4-6513-4209-9f56-4a0ae9488a25 7d4aa0f0645248f3b49ec2a4956a2535 
ee13d79dc9954b458a8d0f173bd63ccb - - -] 192.168.178.90 "GET 
/v2/ee13d79dc9954b458a8d0f173bd63ccb/flavors/1 HTTP/1.1" status: 200 len: 629 
time: 0.0156209
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
[req-a2d56513-0c19-46e8-9a45-d63fcee17224 7d4aa0f0645248f3b49ec2a4956a2535 
ee13d79dc9954b458a8d0f173bd63ccb - - -] Unexpected exception in API method
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/extensions.py", line 478, 
in wrapped
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/validation/__init__.py", line 73, in 
wrapper
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/api/openstack/compute/servers.py", line 
611, in create
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
**create_kwargs)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/hooks.py", line 149, in inner
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions rv = 
f(*args, **kwargs)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1581, in create
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
check_server_group_quota=check_server_group_quota)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 1181, in 
_create_instance
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
auto_disk_config, reservation_id, max_count)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/compute/api.py", line 955, in 
_validate_and_build_base_options
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 
pci_request_info, requested_networks)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 1059, in 
create_pci_requests_for_sriov_ports
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions neutron 
= get_client(context, admin=True)
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions   File 
"/usr/lib/python2.7/site-packages/nova/network/neutronv2/api.py", line 237, in 
get_client
  2016-01-14 11:00:16.143 11528 ERROR nova.api.openstack.extensions 

[Yahoo-eng-team] [Bug 1500631] Re: support multiple LDAP URIs

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/228644
Committed: 
https://git.openstack.org/cgit/openstack/keystone/commit/?id=9d3b676b1f17fb42fe92a421948ebaa597ba2d24
Submitter: Jenkins
Branch:master

commit 9d3b676b1f17fb42fe92a421948ebaa597ba2d24
Author: Steve Martinelli 
Date:   Sun Feb 7 02:54:08 2016 -0500

Support multiple URLs for LDAP server

python-ldap calls out to openldap which can handle multiple URLs for
ldap servers (for the purpose of high availability). openldap expects
these urls to be separated by a comma or whitespace.

Change the help text to specify a comma separated list of URLs is
allowed.

Change-Id: I523dcfc1701a6f7c725c4aa11482bfc15a3515a5
Closes-Bug: #1500631


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1500631

Title:
  support multiple LDAP URIs

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  The help text for the ldap.url config option states: "URL for
  connecting to the LDAP server."  This implies only one URL can be
  specified.  But actually, multiple may be specified due to the
  python-ldap module being used.

  The python-ldap module is basically a wrapper for the openldap client
  library.  And if you look into the source, ldap.initialize() calls
  ldap_initialize() which supports multiple URIs in the URI string.  And
  is easily found in the man page for ldap_initialize:

  ldap_initialize() acts like ldap_init(), but it returns an integer
  indicating either success or the failure reason, and it allows to
  specify details for the connection in the schema portion of the URI.
  The uri parameter may be a comma- or whitespace-separated list of URIs
  containing only the schema, the host, and the port fields.

  So I did try comma separated ldap URLs in keystone, which worked as I
  would expect.  It attempts connections with the first host and tries
  the next if it fails to bind.  My simple example using python-ldap
  where there is no ldap server at localhost, but there is at
  ldaps.company.com

  l = ldap.initialize('ldap://localhost:389,ldaps://ldaps.company.com:636')
  l.simple_bind_s()
  (97, [], 1, [])

  The same works in keystone, so the keystone config help should be
  updated to show this is actually a supported option.  It's very useful
  for deployers using AD, where there is commonly redundancy using many
  domain controllers behind that one domain.

  Note: the whitespace-separated list did not work for me, only comma.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1500631/+subscriptions



[Yahoo-eng-team] [Bug 1540411] Re: kilo: ValueError: git history requires a target version of pbr.version.SemanticVersion(2015.1.4), but target version is pbr.version.SemanticVersion(2015.1.3)

2016-02-08 Thread Dave Walker
** Also affects: neutron/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1540411

Title:
  kilo: ValueError: git history requires a target version of
  pbr.version.SemanticVersion(2015.1.4), but target version is
  pbr.version.SemanticVersion(2015.1.3)

Status in neutron:
  In Progress
Status in neutron kilo series:
  New

Bug description:
  http://logs.openstack.org/57/266957/1/gate/gate-tempest-dsvm-neutron-
  linuxbridge/ed15bbf/logs/devstacklog.txt.gz#_2016-01-31_05_55_23_989

  
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22error%20in%20setup%20command%3A%20Error%20parsing%5C%22%20AND%20message%3A%5C%22ValueError%3A%20git%20history%20requires%20a%20target%20version%20of%20pbr.version.SemanticVersion(2015.1.4)%2C%20but%20target%20version%20is%20pbr.version.SemanticVersion(2015.1.3)%5C%22%20AND%20tags%3A%5C%22console%5C%22=7d

  This hits multiple projects, it's a known issue, this is just a bug
  for tracking the failures in elastic-recheck.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1540411/+subscriptions



[Yahoo-eng-team] [Bug 1543288] [NEW] osinfo should not emit multiple error messages when module isn't loaded

2016-02-08 Thread Vladik Romanovsky
Public bug reported:

Currently the osinfo module emits multiple error messages when the
libosinfo module cannot be loaded:

2016-02-08 12:44:15.270 2868 ERROR nova.virt.osinfo [req-cb9744f0-c5af-
4bc7-a164-6e0ba06c021d tempest-VolumesV1SnapshotTestJSON-1106516754
tempest-VolumesV1SnapshotTestJSON-1593599156] Cannot find OS information
- Reason: (Cannot load Libosinfo: (No module named
gi.repository.Libosinfo))

Since loading the libosinfo module is optional, it should only report
this info once and not as an error message.
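A "report once" pattern for an optional dependency could look like this (a hypothetical sketch, not the actual nova code):

```python
import logging

LOG = logging.getLogger(__name__)

class OsInfoLoader(object):
    """Report a missing optional module once, not on every call."""

    def __init__(self, importer):
        self._importer = importer
        self._failed = False

    def load(self):
        if self._failed:
            return None  # already reported; stay quiet
        try:
            return self._importer()
        except ImportError as exc:
            # Log at INFO (the dependency is optional), and only once.
            LOG.info("Cannot load Libosinfo: %s", exc)
            self._failed = True
            return None
```

Each subsequent request then silently gets `None` instead of triggering a fresh ERROR line.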

** Affects: nova
 Importance: Undecided
 Assignee: Vladik Romanovsky (vladik-romanovsky)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Vladik Romanovsky (vladik-romanovsky)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543288

Title:
  osinfo should not emit multiple error messages when module isn't
  loaded

Status in OpenStack Compute (nova):
  New

Bug description:
  Currently the osinfo module emits multiple error messages when the
  libosinfo module cannot be loaded:

  2016-02-08 12:44:15.270 2868 ERROR nova.virt.osinfo [req-cb9744f0
  -c5af-4bc7-a164-6e0ba06c021d tempest-
  VolumesV1SnapshotTestJSON-1106516754 tempest-
  VolumesV1SnapshotTestJSON-1593599156] Cannot find OS information -
  Reason: (Cannot load Libosinfo: (No module named
  gi.repository.Libosinfo))

  Since loading the libosinfo module is optional, it should only report
  this info once and not as an error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543288/+subscriptions



[Yahoo-eng-team] [Bug 1489562] Re: Support docker image type in ng launch instance

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/217894
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=9a0f130119f8659189f2714d9937fcdf7265a61f
Submitter: Jenkins
Branch:master

commit 9a0f130119f8659189f2714d9937fcdf7265a61f
Author: Justin Pomeroy 
Date:   Thu Aug 27 14:16:50 2015 -0500

Support docker image type in ng launch instance wizard

This updates the angular Launch Instance wizard so the source tables
correctly display DOCKER as the type when the disk format is raw and
the container format is docker.

Closes-Bug: #1489562
Change-Id: Id8c93376237bd37efded1d6f9d0c036d8a5b1144


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1489562

Title:
  Support docker image type in ng launch instance

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  The angular Launch Instance workflow does not show the correct image
  type for Docker images in the Source allocated and available tables.
  These show as RAW but when the container format is 'docker' the image
  type should show as DOCKER.
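The display rule is small enough to sketch (a hypothetical helper, not Horizon's actual code): the container format takes precedence over the disk format when it is 'docker'.

```python
def image_type(disk_format, container_format):
    """Label an image for display; DOCKER wins over the raw disk format."""
    if container_format == 'docker':
        return 'DOCKER'
    return (disk_format or '').upper()

# A docker image stored as raw should no longer display as RAW.
print(image_type('raw', 'docker'), image_type('raw', 'bare'))  # -> DOCKER RAW
```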

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1489562/+subscriptions



[Yahoo-eng-team] [Bug 1156298] Re: templated Catalog backend does not support listing services or endpoints

2016-02-08 Thread Steve Martinelli
David, agreed, I'll mark it as released, since that's what we do now
when a patch closes the bug

** Changed in: keystone
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1156298

Title:
  templated Catalog backend does not support listing services or
  endpoints

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  Recently we switched from using the SQL backend for the Catalog
  Keystone component to using the templated Catalog backend. We switched
  for performance reasons -- SQL connection performance over WAN was
  unacceptable -- as well as the fact that novaclient and the Keystone
  API itself apparently have no way of filtering endpoints based on an
  availability zone, and the client simply picks the first compute
  endpoint it finds in the region.

  Sidenote: did nobody think about cases where there is >1 availability
  zone per region? :( See my summit proposal on this topic:
  http://summit.openstack.org/cfp/edit/114

  Anyway, I got the templated Catalog backend going without too much
  fuss, but then I noticed the following:

  root@c1r1.dal2 18:38:14:~# keystone service-list
  No handlers could be found for logger "keystoneclient.client"
  Unable to communicate with identity service: 404 Not Found

  The resource could not be found.

 . (HTTP 404)
  root@c1r1.dal2 18:38:18:~# keystone endpoint-list
  No handlers could be found for logger "keystoneclient.client"
  Unable to communicate with identity service: 404 Not Found

  The resource could not be found.

 . (HTTP 404)
  root@c1r1.dal2 18:38:20:~# keystone catalog
  Service: volume
  
+-+-+
  |   Property  |Value  
  |
  
+-+-+
  |   adminURL  | 
http://volume.dal2.tfoundry.com/v1/3bfaed94e4554e4c884b7f87d65e02e4 |
  | internalURL | 
http://volume.dal2.tfoundry.com/v1/3bfaed94e4554e4c884b7f87d65e02e4 |
  |  publicURL  | 
http://volume.dal2.tfoundry.com/v1/3bfaed94e4554e4c884b7f87d65e02e4 |
  |region   |  ci   
  |
  
+-+-+
  Service: image
  +-++
  |   Property  |   Value|
  +-++
  |   adminURL  | http://image.int.dal2.tfoundry.com:9292/v1 |
  | internalURL | http://image.int.dal2.tfoundry.com:9292/v1 |
  |  publicURL  | http://image.int.dal2.tfoundry.com:9292/v1 |
  |region   | ci |
  +-++
  Service: compute
  
+-+---+
  |   Property  | Value 
|
  
+-+---+
  |   adminURL  | 
https://compute.dal2.tfoundry.com/v2/3bfaed94e4554e4c884b7f87d65e02e4 |
  | internalURL | 
https://compute.dal2.tfoundry.com/v2/3bfaed94e4554e4c884b7f87d65e02e4 |
  |  publicURL  | 
https://compute.dal2.tfoundry.com/v2/3bfaed94e4554e4c884b7f87d65e02e4 |
  |region   |   ci  
|
  
+-+---+
  Service: ec2
  +-+--+
  |   Property  |Value |
  +-+--+
  |   adminURL  | https://ec2.dal2.tfoundry.com/services/Cloud |
  | internalURL | https://ec2.dal2.tfoundry.com/services/Cloud |
  |  publicURL  | https://ec2.dal2.tfoundry.com/services/Cloud |
  |region   |  ci  |
  +-+--+
  Service: identity
  +-++
  |   Property  |   Value|
  +-++
  |   adminURL  |https://auth.dal2.tfoundry.com/v2.0/|
  | internalURL | https://auth.dal2.tfoundry.com:35357/v2.0/ |
  |  publicURL  |https://auth.dal2.tfoundry.com/v2.0/|
  |region   | ci |
  +-++

  The service and endpoint lists should be trivial to implement since
  the catalog already contains this information. In addition, the error
  message returned from keystone 

[Yahoo-eng-team] [Bug 1541348] Re: Regression in routers auto scheduling logic

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/275653
Committed: 
https://git.openstack.org/cgit/openstack/neutron/commit/?id=51adde5eb93993fbd24c14daab16ade74565978f
Submitter: Jenkins
Branch:master

commit 51adde5eb93993fbd24c14daab16ade74565978f
Author: Oleg Bondarev 
Date:   Wed Feb 3 15:01:52 2016 +0300

Fix regression in routers auto scheduling logic

Routers auto scheduling works when an l3 agent starts and performs
a full sync with neutron server. Neutron server looks for all
unscheduled routers and schedules them to that agent if applicable.
This was broken by commit 0e97feb0f30bc0ef6f4fe041cb41b7aa81042263
which changed full sync logic a bit: now l3 agent requests all ids
of routers scheduled to it first. get_router_ids() didn't call
routers auto scheduling which caused the regression.
This patch adds routers auto scheduling to get_router_ids().

Closes-Bug: #1541348
Change-Id: If6d4e7b3a4839c93296985e169631e5583d9fa12


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1541348

Title:
  Regression in routers auto scheduling logic

Status in neutron:
  Fix Released

Bug description:
  Routers auto scheduling works when an l3 agent starts and performs a full 
sync with neutron server. Neutron server looks for all unscheduled routers 
(non-dvr routers only) and schedules them to that agent if applicable.
  This was broken by commit 0e97feb0f30bc0ef6f4fe041cb41b7aa81042263 which 
changed full sync logic a bit: now l3 agent requests all ids of routers 
scheduled to it first. get_router_ids() didn't call routers auto scheduling 
which caused the regression.
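The fix described in the commit message can be sketched in miniature. This is a plain-Python stand-in, not the actual neutron code; the `unscheduled`/`scheduled` structures are illustrative only:

```python
# Hedged sketch of the fix above: get_router_ids() also triggers auto
# scheduling, so a freshly started l3 agent picks up unscheduled routers
# during its full sync. All names here are illustrative stand-ins.
def get_router_ids(context, agent, unscheduled, scheduled):
    # Auto-schedule every router that no agent currently hosts.
    while unscheduled:
        scheduled.setdefault(agent, []).append(unscheduled.pop())
    # Then return the ids scheduled to this agent, as the RPC call does.
    return list(scheduled.get(agent, []))
```

Before the regression the auto-scheduling step ran as part of the full sync; the patch restores it inside the id-listing call that replaced that sync path.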

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1541348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1462484] Re: Port Details VNIC type value is not translatable

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/257271
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=ae0a7039b34921526bee0895981d0734e990e743
Submitter: Jenkins
Branch:master

commit ae0a7039b34921526bee0895981d0734e990e743
Author: Itxaka 
Date:   Mon Dec 14 11:43:34 2015 +0100

Make Port Details VNIC type translatable

Port Details VNIC type value was not translatable.

Change-Id: I64e16adfa8ebf08fcc81a5648f8b0a0f4404c344
Closes-Bug: #1462484


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1462484

Title:
  Port Details VNIC type value is not translatable

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  On port details, the Binding/VNIC type value is not translatable. To 
recreate the problem:
  - create a pseudo translation:

  ./run_tests.sh --makemessages
  ./run_tests.sh --pseudo de
  ./run_tests.sh --compilemessages

  start the dev server, login and change to German/Deutsch (de)

  Navigate to
  Project->Network->Networks->[Detail]->[Port Detail]

  notice at the bottom of the panel the VNIC type is not translated.

  The 3 VNIC types should be translated when displayed in Horizon
  
https://github.com/openstack/neutron/blob/master/neutron/extensions/portbindings.py#L73
  but neutron will expect these to be provided in English on API calls.

  Note that the mapping is already correct on Edit Port - the
  translations just need to be applied on the details panel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1462484/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543251] [NEW] Missing comma in the Column status_choices tuple

2016-02-08 Thread Lin Hua Cheng
Public bug reported:

Missing comma in the tuple example:

http://docs.openstack.org/developer/horizon/ref/tables.html#horizon.tables.Column.status_choices

** Affects: horizon
 Importance: Undecided
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Lin Hua Cheng (lin-hua-cheng)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543251

Title:
  Missing comma in the Column status_choices tuple

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Missing comma in the tuple example:

  
http://docs.openstack.org/developer/horizon/ref/tables.html#horizon.tables.Column.status_choices
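For readers hitting the same pitfall, here is a minimal illustration of why the trailing comma matters (the values are made up, not the ones from the linked doc page):

```python
# Without the trailing comma, a one-element tuple of tuples collapses
# into the inner tuple itself -- the outer parentheses just group.
not_a_nested_tuple = (("active", True))     # just ("active", True)
nested_tuple = (("active", True),)          # a tuple containing one pair

assert not_a_nested_tuple == ("active", True)
assert len(nested_tuple) == 1
assert nested_tuple[0] == ("active", True)
```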

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543251/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1506653] Re: Retrieving either a project's parents or subtree as_list does not work

2016-02-08 Thread Timothy Symanczyk
Closing with "Fix Released".
https://bugs.launchpad.net/keystone/+bug/1506986


** Changed in: keystone
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1506653

Title:
  Retrieving either a project's parents or subtree as_list does not work

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  To reproduce this I created five projects ("1", "2", "3", "4", "5") -
  with "1" as the top level project, and each subsequent project as a
  child of the previous. All four of the following calls were performed
  against project "3".

  parents_as_list (NON-working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" 
http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?parents_as_list
  {
"project": {
  "name": "3",
  "is_domain": false,
  "description": "",
  "links": {
"self": 
"http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
  },
  "enabled": true,
  "id": "b4a6fb7dcc504373a2e1301ab357d248",
  "parent_id": "0b09fce9246f42dda11125d4d32aa013",
  "parents": [],
  "domain_id": "default"
}
  }

  parents_as_ids (working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" 
http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?parents_as_ids
  {
"project": {
  "name": "3",
  "is_domain": false,
  "description": "",
  "links": {
"self": 
"http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
  },
  "enabled": true,
  "id": "b4a6fb7dcc504373a2e1301ab357d248",
  "parent_id": "0b09fce9246f42dda11125d4d32aa013",
  "parents": {
"0b09fce9246f42dda11125d4d32aa013": {
  "7092bca4a8d444619bcee53a47585876": null
}
  },
  "domain_id": "default"
}
  }

  subtree_as_list (NON-working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" 
http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?subtree_as_list
  {
"project": {
  "name": "3",
  "is_domain": false,
  "description": "",
  "links": {
"self": 
"http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
  },
  "enabled": true,
  "subtree": [],
  "id": "b4a6fb7dcc504373a2e1301ab357d248",
  "parent_id": "0b09fce9246f42dda11125d4d32aa013",
  "domain_id": "default"
}
  }

  subtree_as_ids (working) :
  $ curl -i -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" 
http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248?subtree_as_ids
  {
"project": {
  "name": "3",
  "is_domain": false,
  "description": "",
  "links": {
"self": 
"http://community:35357/v3/projects/b4a6fb7dcc504373a2e1301ab357d248"
  },
  "enabled": true,
  "subtree": {
"421143ab145e4b278d1b971d6509dd23": {
  "1484a4e8493d4f3eb6a81bef582f455a": null
}
  },
  "id": "b4a6fb7dcc504373a2e1301ab357d248",
  "parent_id": "0b09fce9246f42dda11125d4d32aa013",
  "domain_id": "default"
}
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1506653/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1534458] Re: [RFE] Multi-region Security Group

2016-02-08 Thread Sean M. Collins
Yes. There are numerous resources that are not currently synced between
regions. SGs are just one. Images, flavors, keypairs, etc...

** Changed in: neutron
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1534458

Title:
  [RFE] Multi-region Security Group

Status in neutron:
  Won't Fix

Bug description:
  This RFE is requesting feature "Multi-region Security Group" to
  configure security group across multiple regions.

  [Background]
  OpenStack 'Regions' are used to build more than one OpenStack environment 
across geographically-distributed places. Each region is an independent OpenStack 
environment placed in a datacenter geographically distant from the others, for 
example, in a different country. This is important to ensure availability: even 
if one region stops due to problems, we can continue our work in the other 
regions.

  [Existing problem]
  In a multi-region environment, one of the inconvenient points is configuring 
security groups. For example, there are two regions 'region 1' and 'region 2'. 
Each region has a web server and its db server.
  Region 1: web server(W1) and db server (D1)
  Region 2: web server(W2) and db server (D2)
  Say that each region is connected at the L3 layer (IPs are reachable from each other).

  In such a case, we want to set up a security group so that both W1
  and W2 can access D1 and D2. But each region is independent, and we
  have to set up the security group one by one in each region.

  [Proposal]
  Multi-region security groups enable us to create a security group across 
regions. Once introduced, we can add a security group which can be shared 
between regions. In the case above:
  - Make two multi-region security groups, SG1 and SG2
  - Add W1,W2 to SG1
  - Add D1,D2 to SG2
  And then, by adding a rule to SG2 to allow access from SG1, W1 and W2 can 
access D1 and D2.
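A rough sketch of the reconciliation such a multi-region security group implies, using plain-Python stand-ins (the `Region` object and its `rules` mapping are hypothetical, not an OpenStack API):

```python
# Hedged sketch: make every region converge on the union of all regions'
# rules for a shared security group. 'region.rules' is a hypothetical
# dict mapping SG name -> set of rule tuples.
def sync_security_group(regions, sg_name):
    desired = set()
    for region in regions:
        desired |= region.rules[sg_name]
    for region in regions:
        region.rules[sg_name] = set(desired)
    return desired
```

In the example above, adding the "allow from SG1" rule to SG2 in region 1 and then syncing would make D2 in region 2 reachable from W1 and W2 as well.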

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1534458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1509304] Re: Add support for 'l2-cache-size' (a QCOW2 run time option for metadata cache size) for drives

2016-02-08 Thread Augustina Ragwitz
Marked as invalid, this is a feature request.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1509304

Title:
  Add support for 'l2-cache-size' (a QCOW2 run time option for metadata
  cache size) for drives

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  QCOW2 performance can be significantly improved by setting the l2
  -cache-size parameter for large qcow2 based storage (such as images
  and ephemeral drives) (see
  https://events.linuxfoundation.org/sites/events/files/slides/p0.pp_.pdf).

  Adding support for this optional parameter (which adds [...]-drive
  file=hd.qcow2,l2-cache-size=2097152 to the qemu CLI) would allow
  operators and/or users to configure a more tuned configuration.

  A simple implementation where a default value could be set in
  nova.conf would already give an improved flexibility. A formula could
  also be used with a factor relative to the size of the storage.

  Ideally, this could also be specified as an image or flavor property
  so that only large drives would need to allocate the additional
  memory.

  I could not locate an option such as this in the OpenStack versions up
  to Liberty.
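For reference, a hedged sketch of the sizing rule from the linked slides: each 8-byte L2 table entry maps one cluster, so fully caching a disk's metadata takes roughly disk_size / (cluster_size / 8) bytes. With the default 64 KiB clusters, the l2-cache-size=2097152 value quoted above would fully cover a 16 GiB disk.

```python
# Hedged sketch of the qcow2 L2 cache sizing rule: one 8-byte L2 entry
# per cluster of guest data.
def l2_cache_bytes(disk_bytes, cluster_bytes=64 * 1024):
    entries = disk_bytes // cluster_bytes   # number of L2 entries needed
    return entries * 8                      # 8 bytes per entry
```

A nova.conf default plus an image/flavor override, as the report suggests, could both feed into a formula like this.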

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1509304/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543312] [NEW] Curvature Network Topology shows each IP twice

2016-02-08 Thread Ido Ovadia
Public bug reported:

Description of problem:
===
Curvature network topology shows IP twice

Version-Release number of selected component:
=
python-django-horizon-8.0.0-10.el7ost.noarch
openstack-dashboard-8.0.0-10.el7ost.noarch

How reproducible:
=
100%

Steps to Reproduce:
===
1. Launch an instance
2. Browse to: Project --> Network --> Network Topology
3. Click on instance

Actual results:
===
On the drill-down, each IP appears twice (screenshot enclosed)

Expected results:
=
Each IP appears once

Additional info:

Screenshot is enclosed

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screenshot from 2016-02-08 23:33:46.png"
   
https://bugs.launchpad.net/bugs/1543312/+attachment/4567284/+files/Screenshot%20from%202016-02-08%2023%3A33%3A46.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543312

Title:
  Curvature Network Topology shows each IP twice

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Description of problem:
  ===
  Curvature network topology shows IP twice

  Version-Release number of selected component:
  =
  python-django-horizon-8.0.0-10.el7ost.noarch
  openstack-dashboard-8.0.0-10.el7ost.noarch

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Launch an instance
  2. Browse to: Project --> Network --> Network Topology
  3. Click on instance

  Actual results:
  ===
  On the drill-down, each IP appears twice (screenshot enclosed)

  Expected results:
  =
  Each IP appears once

  Additional info:
  
  Screenshot is enclosed

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543312/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543327] [NEW] Angular controllers (routes being evaluated) on django navbar clicks (hash routing)

2016-02-08 Thread Travis Tripp
Public bug reported:

While evaluating another patch, I found that if I'm on an angular page
(ng-images), whenever I click an accordion on the horizon navbar
(e.g. start on Project --> Images, then click Admin), the current
angular controller refreshes.


See picture, but note that I have paused the debugger in a Keystone API hit, 
that you can see the requests in the terminal window, and that in the URL you 
can see #sidebar-accordion-admin.

http://pasteboard.co/1pw530qO.png
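A minimal sketch of one possible guard, assuming the location change can be intercepted (illustrative only, not Horizon's actual code): treat a URL change as fragment-only when the URLs differ solely in the hash, and skip reloading the controller in that case.

```python
# Hedged sketch: navbar accordion clicks only change the fragment
# (e.g. '#sidebar-accordion-admin'), so such changes should not
# re-evaluate the current route.
def is_fragment_only_change(old_url, new_url):
    return old_url.split('#')[0] == new_url.split('#')[0]
```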

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543327

Title:
  Angular controllers (routes being evaluated) on django navbar clicks
  (hash routing)

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While evaluating another patch, I found that if I'm on an angular page
  (ng-images), whenever I click an accordion on the horizon navbar
  (e.g. start on Project --> Images, then click Admin), the current
  angular controller refreshes.

  
  See picture, but note that I have paused the debugger in a Keystone API hit, 
that you can see the requests in the terminal window, and that in the URL you 
can see #sidebar-accordion-admin.

  http://pasteboard.co/1pw530qO.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543327/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543316] [NEW] Curvature network topology: Deactivate Open Console from topology when instance is not in running state

2016-02-08 Thread Ido Ovadia
Public bug reported:

===
Deactivate Open Console from network topology when the instance is not in a 
running state
 
Version-Release number of selected component (if applicable):
=
python-django-horizon-8.0.0-10.el7ost.noarch
openstack-dashboard-8.0.0-10.el7ost.noarch

How reproducible:
=
100%

Steps to Reproduce:
===
1. Launch an instance
2. Pause or Suspend an instance
3. Browse to: Project --> Network --> Network Topology
4. Click on instance

Actual results:
===
The Open Console option is displayed and active

Expected results:
=
The Open Console option should not be displayed, or should be inactive

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1543316

Title:
   Curvature network topology: Deactivate Open Console from topology
  when instance is not in running state

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  ===
  Deactivate Open Console from network topology when the instance is not in a 
running state
   
  Version-Release number of selected component (if applicable):
  =
  python-django-horizon-8.0.0-10.el7ost.noarch
  openstack-dashboard-8.0.0-10.el7ost.noarch

  How reproducible:
  =
  100%

  Steps to Reproduce:
  ===
  1. Launch an instance
  2. Pause or Suspend an instance
  3. Browse to: Project --> Network --> Network Topology
  4. Click on instance

  Actual results:
  ===
  The Open Console option is displayed and active

  Expected results:
  =
  The Open Console option should not be displayed, or should be inactive

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1543316/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543318] [NEW] Token for trust does not expand implied roles

2016-02-08 Thread Adam Young
Public bug reported:

def test_trusts_from_implied_role(self):
self._create_three_roles()
self._create_implied_role(self.role_list[0], self.role_list[1])
self._create_implied_role(self.role_list[1], self.role_list[2])
self._assign_top_role_to_user_on_project(self.user, self.project)

# Create a trustee and assign the prior role to her
trustee = unit.create_user(self.identity_api, domain_id=self.domain_id)
ref = unit.new_trust_ref(
trustor_user_id=self.user['id'],
trustee_user_id=trustee['id'],
project_id=self.project['id'],
role_ids=[self.role_list[0]['id']])
r = self.post('/OS-TRUST/trusts', body={'trust': ref})
trust = r.result['trust']

# Only the role that was specified is in the trust, NOT implies roles
self.assertEqual(self.role_list[0]['id'], trust['roles'][0]['id'])
self.assertThat(trust['roles'], matchers.HasLength(1))

# Authenticate as the trustee
auth_data = self.build_authentication_request(
user_id=trustee['id'],
password=trustee['password'],
trust_id=trust['id'])
r = self.v3_create_token(auth_data)
token = r.result['token']

# This fails
self.assertThat(token['roles'], matchers.HasLength(3))
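What the failing assertion expects can be sketched as a transitive closure over the 'implies' edges (plain-dict stand-ins, not keystone code):

```python
# Hedged sketch: a trust token's roles should be the transitive closure
# of the directly assigned roles over the role-implication graph, not
# just the roles named in the trust.
def expand_implied_roles(direct_roles, implies):
    expanded, stack = set(), list(direct_roles)
    while stack:
        role = stack.pop()
        if role not in expanded:
            expanded.add(role)
            stack.extend(implies.get(role, ()))
    return expanded
```

With the chain created in the test (role 0 implies role 1 implies role 2), expanding the single trust role yields three roles, matching `matchers.HasLength(3)`.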

** Affects: keystone
 Importance: Undecided
 Assignee: Adam Young (ayoung)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Adam Young (ayoung)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1543318

Title:
  Token for trust does not expand implied roles

Status in OpenStack Identity (keystone):
  New

Bug description:
  def test_trusts_from_implied_role(self):
  self._create_three_roles()
  self._create_implied_role(self.role_list[0], self.role_list[1])
  self._create_implied_role(self.role_list[1], self.role_list[2])
  self._assign_top_role_to_user_on_project(self.user, self.project)

  # Create a trustee and assign the prior role to her
  trustee = unit.create_user(self.identity_api, domain_id=self.domain_id)
  ref = unit.new_trust_ref(
  trustor_user_id=self.user['id'],
  trustee_user_id=trustee['id'],
  project_id=self.project['id'],
  role_ids=[self.role_list[0]['id']])
  r = self.post('/OS-TRUST/trusts', body={'trust': ref})
  trust = r.result['trust']

  # Only the role that was specified is in the trust, NOT implies roles
  self.assertEqual(self.role_list[0]['id'], trust['roles'][0]['id'])
  self.assertThat(trust['roles'], matchers.HasLength(1))

  # Authenticate as the trustee
  auth_data = self.build_authentication_request(
  user_id=trustee['id'],
  password=trustee['password'],
  trust_id=trust['id'])
  r = self.v3_create_token(auth_data)
  token = r.result['token']

  # This fails
  self.assertThat(token['roles'], matchers.HasLength(3))

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1543318/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543321] [NEW] Trusts on v2.0 are undocumented

2016-02-08 Thread Lance Bragstad
Public bug reported:

The trust extension, at the time, targeted version 3; it was never
explicitly restricted from working against v2.0.

We don't document this anywhere and we support it. We should either
officially support it or remove support for trust authentication in
v2.0.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1543321

Title:
  Trusts on v2.0 are undocumented

Status in OpenStack Identity (keystone):
  New

Bug description:
  The trust extension, at the time, targeted version 3; it was never
  explicitly restricted from working against v2.0.

  We don't document this anywhere and we support it. We should either
  officially support it or remove support for trust authentication in
  v2.0.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1543321/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1538261] Re: Material theme inline edit exit icon not showing

2016-02-08 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/272701
Committed: 
https://git.openstack.org/cgit/openstack/horizon/commit/?id=29bf94c090481df9286edcc5c65263524b2b78e8
Submitter: Jenkins
Branch:master

commit 29bf94c090481df9286edcc5c65263524b2b78e8
Author: Cindy Lu 
Date:   Tue Jan 26 11:08:03 2016 -0800

Material theme icon not showing for inline edit close

Added to _icons.scss (times: 'close') and also alphabetize
the icons for easy searching.

Change-Id: I5d1867a225dc77403bfbda7fde61db91fecc0b94
Closes-Bug: #1538261


** Changed in: horizon
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1538261

Title:
  Material theme inline edit exit icon not showing

Status in OpenStack Dashboard (Horizon):
  Fix Released

Bug description:
  See image.  The 'exit'/'close' icon isn't showing up.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1538261/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1543379] [NEW] Neutron *.delete.end notification payloads do not contain metadata other than the id of the entity being deleted

2016-02-08 Thread Rohit Jaiswal
Public bug reported:

When Neutron emits notifications for objects like subnets, ports, routers
and networks being deleted, the notification payload contains only the id
of the entity being deleted.

Eg - RECEIVED MESSAGE: {u'_context_domain': None, u'_context_request_id': 
u'req-82232bf3-5032-4351-b8d6-71028cfe24eb', u'event_type': u'port.delete.end', 
u'_context_auth_token': u'682e4fec9d584d29b1f3a1a803a2560c', 
u'_context_resource_uuid': None, u'_context_tenant_name': u'admin', 
u'_context_user_id': u'a0934b6ddd264d619a6aba59b978cabc', u'payload':
{u'port_id': u'ce56ff00-5af0-45a0-af33-061c2d8a64c5'},
u'_context_show_deleted': False, u'priority': u'INFO',
u'_context_is_admin': True, u'_context_project_domain': None,
u'_context_user': u'a0934b6ddd264d619a6aba59b978cabc', u'publisher_id':
u'network.padawan-ccp-c1-m1-mgmt', u'message_id':
u'2b8a5fa3-968c-4808-a235-bf06ecdba412', u'_context_roles': [u'monasca-
user', u'admin', u'key-manager:admin', u'key-manager:service-admin'],
u'timestamp': u'2016-02-08 20:47:02.026986', u'_context_timestamp':
u'2016-02-08 20:47:01.178041', u'_unique_id':
u'1248f6703a0f41bfb40d0f7cd6407371', u'_context_tenant_id':
u'a5b63ca418bf45bc9f2cfc14c0c3c59e', u'_context_project_name': u'admin',
u'_context_user_identity': u'a0934b6ddd264d619a6aba59b978cabc
a5b63ca418bf45bc9f2cfc14c0c3c59e - - -', u'_context_tenant':
u'a5b63ca418bf45bc9f2cfc14c0c3c59e', u'_context_project_id':
u'a5b63ca418bf45bc9f2cfc14c0c3c59e', u'_context_read_only': False,
u'_context_user_domain': None, u'_context_user_name': u'admin'}

Compare that to the metadata obtained when a port is created:

RECEIVED MESSAGE: {u'_context_domain': None, u'_context_request_id': 
u'req-89234e73-0294-4a29-bada-d0daa1e66b70', u'event_type': u'port.create.end', 
u'_context_auth_token': u'318cca31e08d4ecc8cf48e33f3c661f6', 
u'_context_resource_uuid': None, u'_context_tenant_name': u'admin', 
u'_context_user_id': u'a0934b6ddd264d619a6aba59b978cabc',
u'payload': {u'port': {u'status': u'DOWN', u'binding:host_id': u'', u'name': 
u'', u'allowed_address_pairs': [], u'admin_state_up': True, u'network_id': 
u'ecbcd2ac-e066-4bc7-8f65-d4cf182677b9', u'dns_name': u'', 
u'binding:vif_details': {}, u'mac_address': u'fa:16:3e:8e:df:e9', 
u'dns_assignment': [
{u'hostname': u'host-192-168-1-6', u'ip_address': u'192.168.1.6', u'fqdn': 
u'host-192-168-1-6.openstacklocal.'}
], u'binding:vnic_type': u'normal', u'binding:vif_type': u'unbound', 
u'device_owner': u'', u'tenant_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'binding:profile': {}, u'fixed_ips': [
{u'subnet_id': u'a650342c-5db4-4f37-aecb-4eb723355176', u'ip_address': 
u'192.168.1.6'}
], u'id': u'4adbe0de-6f27-4745-9c36-e56ee43a6ea3', u'security_groups': 
[u'02d614cd-053d-485f-aadf-83fc5409d111'], u'device_id': u''}},
u'_context_show_deleted': False, u'priority': u'INFO', u'_context_is_admin': 
True, u'_context_project_domain': None, u'_context_user': 
u'a0934b6ddd264d619a6aba59b978cabc', u'publisher_id': 
u'network.padawan-ccp-c1-m3-mgmt', u'message_id': 
u'f418fa6c-5059-450b-9e0b-f7bf6970a24c', u'_context_roles': [u'monasca-user', 
u'admin', u'key-manager:admin', u'key-manager:service-admin'], u'timestamp': 
u'2016-02-08 20:48:00.589837', u'_context_timestamp': u'2016-02-08 
20:48:00.069004', u'_unique_id': u'bc1cf24aed20440c8d80f09914eaabf2', 
u'_context_tenant_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'_context_project_name': u'admin', u'_context_user_identity': 
u'a0934b6ddd264d619a6aba59b978cabc a5b63ca418bf45bc9f2cfc14c0c3c59e - - -', 
u'_context_tenant': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'_context_project_id': u'a5b63ca418bf45bc9f2cfc14c0c3c59e', 
u'_context_read_only': False, u'_context_user_domain': None, 
u'_context_user_name': u'admin'}

The metadata is much richer for a *.create.end event compared to the
*.delete.end event above. Ceilometer needs the metadata for the
*.delete.end events.

  For accurate billing use cases, Ceilometer needs to handle the network-related 
*.delete.end events, so this change is needed.
Refer: 
https://github.com/openstack/ceilometer/blob/master/ceilometer/network/notifications.py#L50
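The requested behaviour amounts to snapshotting the resource before deleting it, so the *.delete.end payload can carry the full object. A minimal sketch with hypothetical `store`/`notify` stand-ins (not neutron internals):

```python
# Hedged sketch: capture the full resource dict before deletion, so the
# *.delete.end notification payload is as rich as *.create.end's.
# 'store' and 'notify' are illustrative stand-ins only.
def delete_port(store, port_id, notify):
    port = dict(store[port_id])     # snapshot before removal
    del store[port_id]
    notify('port.delete.end', {'port': port})
```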

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1543379

Title:
  Neutron *.delete.end notification payloads do not contain metadata
  other than the id of the entity being deleted

Status in neutron:
  New

Bug description:
  When Neutron emits notifications for objects like subnets, ports, routers
  and networks being deleted, the notification payload contains only the
  id of the entity being deleted.

  Eg - RECEIVED MESSAGE: {u'_context_domain': None, u'_context_request_id': 
u'req-82232bf3-5032-4351-b8d6-71028cfe24eb', u'event_type': u'port.delete.end', 
u'_context_auth_token': u'682e4fec9d584d29b1f3a1a803a2560c', 
u'_context_resource_uuid': None, u'_context_tenant_name': 

[Yahoo-eng-team] [Bug 1543382] [NEW] DBError (psycopg2.ProgrammingError) operator does not exist: character varying = text[]

2016-02-08 Thread melanie witt
Public bug reported:

Opening a new bug based on a comment from another bug:

https://bugs.launchpad.net/nova/+bug/1518200/comments/6

There appears to be a problem with postgres queries using the in_
operator for example:

  query = query.filter(models.Migration.status.in_(status))

where status is an array of strings like ['accepted', 'done'].

The error:

 DBError: (psycopg2.ProgrammingError) operator does not exist: character 
varying = text[]
 LINE 3: ...HERE migrations.deleted = 0 AND migrations.status = ARRAY['a...
 HINT: No operator matches the given name and argument type(s). You might need 
to add explicit type casts.

looks to be about the fact that the "status" column of the Migration
table is of type varchar whereas the array for the IN operator is
defaulting to being treated as an array of text types, and that an
explicit cast is needed.

I didn't find any existing type casting and we do have a number of
similar queries already of style "column.in_(array of strings)" so I
wonder if this is a problem for all such queries, and not just this
migration status example one.
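For illustration, the usual safe shape is one placeholder per value in the IN clause, rather than binding the whole list as a single (array) parameter; the error's HINT suggests an explicit type cast would also work. The sketch below uses sqlite3 only to stay runnable; the reported failure itself is specific to PostgreSQL's stricter type checking.

```python
# Hedged illustration of the difference behind the error: expand the IN
# clause to one placeholder per value instead of binding the whole list
# as one array-typed parameter.
import sqlite3

def migrations_with_status(conn, statuses):
    placeholders = ', '.join('?' for _ in statuses)
    sql = 'SELECT id FROM migrations WHERE status IN (%s)' % placeholders
    return [row[0] for row in conn.execute(sql, list(statuses))]
```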

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: db postgresql

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543382

Title:
  DBError (psycopg2.ProgrammingError) operator does not exist: character
  varying = text[]

Status in OpenStack Compute (nova):
  New

Bug description:
  Opening a new bug based on a comment from another bug:

  https://bugs.launchpad.net/nova/+bug/1518200/comments/6

  There appears to be a problem with postgres queries using the in_
  operator for example:

query = query.filter(models.Migration.status.in_(status))

  where status is an array of strings like ['accepted', 'done'].

  The error:

   DBError: (psycopg2.ProgrammingError) operator does not exist: character 
varying = text[]
   LINE 3: ...HERE migrations.deleted = 0 AND migrations.status = ARRAY['a...
   HINT: No operator matches the given name and argument type(s). You might 
need to add explicit type casts.

  looks to be about the fact that the "status" column of the Migration
  table is of type varchar whereas the array for the IN operator is
  defaulting to being treated as an array of text types, and that an
  explicit cast is needed.

  I didn't find any existing type casting and we do have a number of
  similar queries already of style "column.in_(array of strings)" so I
  wonder if this is a problem for all such queries, and not just this
  migration status example one.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1543382/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1539963] Re: Style: Material Design: Material Selection Menu

2016-02-08 Thread Diana Whitten
** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1539963

Title:
  Style: Material Design: Material Selection Menu

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In a recent regression, the selection checkmark for the context-picker
  in 'material' is no longer visible in the responsive menu because its
  the same color as the menu background.  WHoops. :)

  https://i.imgur.com/CaOqlXo.png

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1539963/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp