[Yahoo-eng-team] [Bug 1372792] [NEW] Inconsistent timestamp formats in ceilometer metering messages

2014-09-23 Thread Daniele Venzano
Public bug reported:

The messages generated by neutron-metering-agent contain timestamps in a
different format than the other messages received through UDP from
ceilometer-agent-notification. This creates unnecessary trouble for
anyone trying to decode the messages and do something useful with them.

In particular, so far I have only noticed this in the timestamp field of
the bandwidth message.

They appear to contain UTC dates, but there is no Z at the end and they
contain a space instead of a T between date and time. In short, they are
not in ISO 8601 like the timestamps in the other messages. I noticed
them because Elasticsearch tries to parse them, fails, and throws the
message away.
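For consumers hit by this, the mismatch looks like the following sketch (the example timestamps are made up; only the two formats matter):

```python
from datetime import datetime, timezone

# A consumer has to normalize both the space-separated form emitted by
# the metering agent ("2014-09-23 08:11:54") and proper ISO 8601 with a
# trailing Z ("2014-09-23T08:11:54Z") used by the other messages.
def normalize_timestamp(ts):
    for fmt in ("%Y-%m-%d %H:%M:%S", "%Y-%m-%dT%H:%M:%SZ"):
        try:
            return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError("unrecognized timestamp: %r" % ts)

print(normalize_timestamp("2014-09-23 08:11:54").isoformat())
# 2014-09-23T08:11:54+00:00
```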

This bug was filed against Ceilometer, but I have been redirected here:
https://bugs.launchpad.net/ceilometer/+bug/1370607

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372792

Title:
  Inconsistent timestamp formats in ceilometer metering messages

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The messages generated by neutron-metering-agent contain timestamps in
  a different format than the other messages received through UDP from
  ceilometer-agent-notification. This creates unnecessary trouble for
  anyone trying to decode the messages and do something useful with them.

  In particular, so far I have only noticed this in the timestamp field
  of the bandwidth message.

  They appear to contain UTC dates, but there is no Z at the end and
  they contain a space instead of a T between date and time. In short,
  they are not in ISO 8601 like the timestamps in the other messages. I
  noticed them because Elasticsearch tries to parse them, fails, and
  throws the message away.

  This bug was filed against Ceilometer, but I have been redirected here:
  https://bugs.launchpad.net/ceilometer/+bug/1370607

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372792/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365061] Re: Warn against sorting requirements

2014-09-23 Thread Nikhil Manchanda
** No longer affects: trove

** No longer affects: python-troveclient

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365061

Title:
  Warn against sorting requirements

Status in Cinder:
  Fix Committed
Status in Designate:
  Fix Released
Status in OpenStack Image Registry and Delivery Service (Glance):
  Fix Committed
Status in OpenStack Identity (Keystone):
  Fix Committed
Status in OpenStack Identity  (Keystone) Middleware:
  Fix Committed
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Python client library for Keystone:
  Fix Committed
Status in OpenStack Object Storage (Swift):
  Fix Committed

Bug description:
  Contrary to bug 1285478, requirements files should not be sorted
  alphabetically. Given that requirements files can contain comments,
  I'd suggest a header in all requirements files along the lines of:

  # The order of packages is significant, because pip processes them in the order
  # of appearance. Changing the order has an impact on the overall integration
  # process, which may cause wedges in the gate later.

  This is the result of a mailing list discussion (thanks, Sean!):

http://www.mail-archive.com/openstack-d...@lists.openstack.org/msg33927.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365061/+subscriptions



[Yahoo-eng-team] [Bug 1363558] Re: check the value of the configuration item block_device_allocate_retries

2014-09-23 Thread Liusheng
** Description changed:

  we need to check the value of the configuration item
  block_device_allocate_retries in the code in order to ensure that it
  is equal to or greater than 1, as is done for the configuration item
  network_allocate_retries
+ 
+ =====
+ In ceilometer, there are similar issues: there is no check for the value of
+ retries in
+ ceilometer.storage.mongo.utils.ConnectionPool#_mongo_connect
+ and:
+ ceilometer.ipmi.platform.intel_node_manager.NodeManager#init_node_manager

** Also affects: ceilometer (Ubuntu)
   Importance: Undecided
   Status: New

** No longer affects: ceilometer (Ubuntu)

** Also affects: ceilometer
   Importance: Undecided
   Status: New

** Changed in: ceilometer
 Assignee: (unassigned) => Liusheng (liusheng)

** Summary changed:

- check the value of the configuration item block_device_allocate_retries
+ check the value of the configuration item retries

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1363558

Title:
  check the value of the configuration item retries

Status in OpenStack Telemetry (Ceilometer):
  New
Status in OpenStack Compute (Nova):
  Fix Committed

Bug description:
  we need to check the value of the configuration item
  block_device_allocate_retries in the code in order to ensure that it
  is equal to or greater than 1, as is done for the configuration item
  network_allocate_retries

  =====
  In ceilometer, there are similar issues: there is no check for the value of
  retries in
  ceilometer.storage.mongo.utils.ConnectionPool#_mongo_connect
  and:
  ceilometer.ipmi.platform.intel_node_manager.NodeManager#init_node_manager
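A minimal sketch of the kind of check being requested (illustrative only; this is not the actual nova/ceilometer code, and the option name is taken from the bug title):

```python
# Hypothetical validation: reject retry counts below 1 at service startup
# instead of failing later at runtime.
def validated_retries(value, option_name="block_device_allocate_retries"):
    if value < 1:
        raise ValueError("%s must be >= 1, got %d" % (option_name, value))
    return value

print(validated_retries(3))  # 3
```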

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1363558/+subscriptions



[Yahoo-eng-team] [Bug 1372812] [NEW] instance task status keep for long time when reboot

2014-09-23 Thread Eli Qiao
Public bug reported:


When rebooting, the task state can remain REBOOT_STARTED for a long time
in some cases. It would be better to include this state when polling the
rebooting instances and to trigger the reboot again.

[tagett@stack-01 devstack]$ nova list
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| ID                                   | Name      | Status | Task State     | Power State | Networks                          |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| bb44fafb-7a63-4c93-974f-031a9cceeaa3 | instance0 | ACTIVE | -              | Running     | private=192.168.1.74, 172.24.4.41 |
| 990a03e7-4487-4137-9b51-e8f20986ffcb | t2        | REBOOT | reboot_started | Running     | private=192.168.1.91              |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
[tagett@stack-01 devstack]$ nova reboot t2
ERROR (Conflict): Cannot 'reboot' while instance is in task_state reboot_started (HTTP 409) (Request-ID: req-9006c602-dd74-41ef-b154-2e5e0ed53d65)
[tagett@stack-01 devstack]$ date
Tue Sep 23 16:11:54 CST 2014
[tagett@stack-01 devstack]$ nova list
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| ID                                   | Name      | Status | Task State     | Power State | Networks                          |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| bb44fafb-7a63-4c93-974f-031a9cceeaa3 | instance0 | ACTIVE | -              | Running     | private=192.168.1.74, 172.24.4.41 |
| 990a03e7-4487-4137-9b51-e8f20986ffcb | t2        | REBOOT | reboot_started | Running     | private=192.168.1.91              |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
[tagett@stack-01 devstack]$ date
Tue Sep 23 16:12:00 CST 2014
[tagett@stack-01 devstack]$ nova list
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| ID                                   | Name      | Status | Task State     | Power State | Networks                          |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| bb44fafb-7a63-4c93-974f-031a9cceeaa3 | instance0 | ACTIVE | -              | Running     | private=192.168.1.74, 172.24.4.41 |
| 990a03e7-4487-4137-9b51-e8f20986ffcb | t2        | REBOOT | reboot_started | Running     | private=192.168.1.91              |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
[tagett@stack-01 devstack]$ nova list
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| ID                                   | Name      | Status | Task State     | Power State | Networks                          |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| bb44fafb-7a63-4c93-974f-031a9cceeaa3 | instance0 | ACTIVE | -              | Running     | private=192.168.1.74, 172.24.4.41 |
| 990a03e7-4487-4137-9b51-e8f20986ffcb | t2        | REBOOT | reboot_started | Running     | private=192.168.1.91              |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
[tagett@stack-01 devstack]$ date
Tue Sep 23 16:12:08 CST 2014
[tagett@stack-01 devstack]$ nova list
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| ID                                   | Name      | Status | Task State     | Power State | Networks                          |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| bb44fafb-7a63-4c93-974f-031a9cceeaa3 | instance0 | ACTIVE | -              | Running     | private=192.168.1.74, 172.24.4.41 |
| 990a03e7-4487-4137-9b51-e8f20986ffcb | t2        | REBOOT | reboot_started | Running     | private=192.168.1.91              |
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
[tagett@stack-01 devstack]$ nova list
+--------------------------------------+-----------+--------+----------------+-------------+-----------------------------------+
| ID                                   | Name      | Status | Task State     | Power State | Networks                          |
[Yahoo-eng-team] [Bug 1372823] [NEW] iSCSI LUN list not refreshed in Hyper-V 2012 R2 compute nodes

2014-09-23 Thread Luis Fernandez
Public bug reported:

When an iSCSI volume is attached to Hyper-V, the OS has to refresh the
list of LUNs on the iSCSI target to discover the new one.

The current mechanism implemented only works for the first LUN because
the connection to the target is done after the LUN is exposed to the
hypervisor. The rest of LUNs exposed to the hypervisor hosted in the
same iSCSI target won't be refreshed on time to be discovered by the
machine.

This looks related to the wrong assumption that there is one LUN per
iSCSI target; in fact, a single iSCSI target can expose several LUNs.

The patch for this bug should refresh the list of LUNs when a new
attachment request is received. In our test environment (Hyper-V 2012
R2), a WMI call like the following one helped to solve this issue:

self._conn_storage.query("SELECT * FROM MSFT_iSCSISessionToDisk")
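A minimal sketch of the proposed behavior (illustrative only; the connection object and helper name are assumptions mirroring the WMI call above, stubbed here so the shape of the call is visible without a Hyper-V host):

```python
# Hypothetical helper: rescan iSCSI session-to-disk mappings on every
# attach request, so LUNs exposed after the initial target login are
# also discovered by the host.
def refresh_luns(conn_storage):
    return conn_storage.query("SELECT * FROM MSFT_iSCSISessionToDisk")

# Stand-in for the WMI storage connection, just to demonstrate the call:
class FakeConn:
    def query(self, wql):
        return ["disk-for-lun-0", "disk-for-lun-1"]

print(len(refresh_luns(FakeConn())))  # 2
```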

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: hyper-v iscsi volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372823

Title:
  iSCSI LUN list not refreshed in Hyper-V 2012 R2 compute nodes

Status in OpenStack Compute (Nova):
  New

Bug description:
  When an iSCSI volume is attached to Hyper-V, the OS has to refresh the
  list of LUNs on the iSCSI target to discover the new one.

  The current mechanism implemented only works for the first LUN because
  the connection to the target is done after the LUN is exposed to the
  hypervisor. The rest of LUNs exposed to the hypervisor hosted in the
  same iSCSI target won't be refreshed on time to be discovered by the
  machine.

  This looks related to the wrong assumption that there is one LUN per
  iSCSI target; in fact, a single iSCSI target can expose several LUNs.

  The patch for this bug should refresh the list of LUNs when a new
  attachment request is received. In our test environment (Hyper-V 2012
  R2), a WMI call like the following one helped to solve this issue:

  self._conn_storage.query("SELECT * FROM MSFT_iSCSISessionToDisk")

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372823/+subscriptions



[Yahoo-eng-team] [Bug 1372805] Re: [libvirt] LibvirtDriver.get_num_instances should not count dom0 when driver_type=xen

2014-09-23 Thread Christian Berendt
I tested this on an Icehouse environment. The code changed in Juno: the
length of list_instances is used for the number of instances, and
list_instances only returns guests, not dom0. So everything is fine on
Juno and this bug is invalid.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372805

Title:
  [libvirt] LibvirtDriver.get_num_instances should not count dom0 when
  driver_type=xen

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When using Libvirt with Xen the dom0 is not available in the database
  and because of that nova.compute.manager will log a lot of warnings
  about this.

  2014-09-08 09:49:23.709 11673 WARNING nova.compute.manager [req-
  5d0ba692-0f18-445f-b55b-526043636976 None None] Found 1 in the
  database and 2 on the hypervisor.

  # virsh list
   Id    Name               State
  ----------------------------------
   0     Domain-0           running
   3     test_berendt_001   idle

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372805/+subscriptions



[Yahoo-eng-team] [Bug 1372829] [NEW] vcpu_pin_set setting raises exception

2014-09-23 Thread Irena Berezovsky
Public bug reported:

Once vcpu_pin_set=0-9 is enabled in nova.conf, the following exception
occurs:

2014-09-23 11:00:41.603 14427 DEBUG nova.openstack.common.processutils [-] Result was 0 execute /opt/stack/nova/nova/openstack/common/processutils.py:195
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 455, in fire_timers
    timer()
  File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/timer.py", line 58, in __call__
    cb(*args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 168, in _do_send
    waiter.switch(result)
  File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in main
    result = function(*args, **kwargs)
  File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in run_service
    service.start()
  File "/opt/stack/nova/nova/service.py", line 181, in start
    self.manager.pre_start_hook()
  File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
    self.update_available_resource(nova.context.get_admin_context())
  File "/opt/stack/nova/nova/compute/manager.py", line 5922, in update_available_resource
    nodenames = set(self.driver.get_available_nodes())
  File "/opt/stack/nova/nova/virt/driver.py", line 1237, in get_available_nodes
    stats = self.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5760, in get_host_stats
    return self.host_state.get_host_stats(refresh=refresh)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 470, in host_state
    self._host_state = HostState(self)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6320, in __init__
    self.update_status()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 6376, in update_status
    numa_topology = self.driver._get_host_numa_topology()
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4869, in _get_host_numa_topology
    cell.cpuset &= allowed_cpus
TypeError: unsupported operand type(s) for &=: 'set' and 'list'
2014-09-23 11:00:42.032 14427 ERROR nova.openstack.common.threadgroup [-] unsupported operand type(s) for &=: 'set' and 'list'
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/threadgroup.py", line 125, in wait
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     x.wait()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/threadgroup.py", line 47, in wait
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 173, in wait
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/event.py", line 121, in wait
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 293, in switch
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/usr/local/lib/python2.7/dist-packages/eventlet/greenthread.py", line 212, in main
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/openstack/common/service.py", line 490, in run_service
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     service.start()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/service.py", line 181, in start
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     self.manager.pre_start_hook()
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/compute/manager.py", line 1152, in pre_start_hook
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     self.update_available_resource(nova.context.get_admin_context())
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/compute/manager.py", line 5922, in update_available_resource
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup     nodenames = set(self.driver.get_available_nodes())
2014-09-23 11:00:42.032 14427 TRACE nova.openstack.common.threadgroup   File "/opt/stack/nova/nova/virt/driver.py", line
[Yahoo-eng-team] [Bug 1372845] [NEW] libvirt: Instance NUMA fitting code fails to account for vpu_pin_set config option properly

2014-09-23 Thread Nikola Đipanov
Public bug reported:

Looking at this branch of the NUMA fitting code

https://github.com/openstack/nova/blob/51de439a4d1fe5e17d59d3aac3fd2c49556e641b/nova/virt/libvirt/driver.py#L3738

We do not account for allowed CPUs when choosing viable cells for the
given instance, meaning we could choose a NUMA cell that has no viable
CPUs to pin to.

We need to consider allowed_cpus when calculating viable NUMA cells for
the instance.
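A minimal sketch of what considering allowed_cpus could look like when filtering host cells (illustrative only; this is not the actual nova code, and the cell representation is made up):

```python
# Hypothetical helper: keep only NUMA cells that still contain at least
# one CPU from the allowed set (e.g. parsed from vcpu_pin_set), since a
# cell with no allowed CPUs cannot satisfy any pinning request.
def viable_cells(host_cells, allowed_cpus):
    allowed = set(allowed_cpus)
    return [cell for cell in host_cells if set(cell["cpuset"]) & allowed]

cells = [
    {"id": 0, "cpuset": [0, 1, 2, 3]},
    {"id": 1, "cpuset": [4, 5, 6, 7]},
]
print([c["id"] for c in viable_cells(cells, [0, 1])])  # [0]
```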

** Affects: nova
 Importance: High
 Status: Confirmed


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372845

Title:
  libvirt: Instance NUMA fitting code fails to account for vpu_pin_set
  config option properly

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Looking at this branch of the NUMA fitting code

  
https://github.com/openstack/nova/blob/51de439a4d1fe5e17d59d3aac3fd2c49556e641b/nova/virt/libvirt/driver.py#L3738

  We do not account for allowed CPUs when choosing viable cells for the
  given instance, meaning we could choose a NUMA cell that has no viable
  CPUs to pin to.

  We need to consider allowed_cpus when calculating viable NUMA cells
  for the instance.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1372845/+subscriptions



[Yahoo-eng-team] [Bug 1372862] [NEW] There is no need to enable globalizaion for debug level log

2014-09-23 Thread Yang Yu
Public bug reported:

Currently, in the neutron code, there is still some code that enables
the translation tag for debug-level log messages.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372862

Title:
  There is no need to enable globalizaion for debug level log

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently, in the neutron code, there is still some code that enables
  the translation tag for debug-level log messages.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372862/+subscriptions



[Yahoo-eng-team] [Bug 1365751] Re: Use of assert_called_once() instead of assert_called_once_with()

2014-09-23 Thread Jacek Świderski
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Jacek Świderski (jacek-swiderski)

** Changed in: neutron
   Status: New => In Progress

** Description changed:

  mock.assert_called_once() is a noop, it doesn't test anything.
  
  Instead it should be mock.assert_called_once_with()
  
  This occurs in the following places:
-   Nova
+   Nova
      nova/tests/virt/hyperv/test_ioutils.py
      nova/tests/virt/libvirt/test_driver.py
-   Cliff
- cliff/tests/test_app.py
+   Cliff
+ cliff/tests/test_app.py
+   Neutron
+ neutron/tests/unit/services/l3_router/test_l3_apic_plugin.py
+ 
neutron/tests/unit/services/loadbalancer/drivers/radware/test_plugin_driver.py
+ neutron/tests/unit/test_l3_agent.py
+ neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_sync.py
+ 
neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_mechanism_driver.py

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365751

Title:
  Use of assert_called_once() instead of assert_called_once_with()

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Command Line Interface Formulation Framework:
  In Progress

Bug description:
  mock.assert_called_once() is a no-op; it doesn't test anything.

  Instead it should be mock.assert_called_once_with()

  This occurs in the following places:
    Nova
      nova/tests/virt/hyperv/test_ioutils.py
      nova/tests/virt/libvirt/test_driver.py
    Cliff
  cliff/tests/test_app.py
Neutron
  neutron/tests/unit/services/l3_router/test_l3_apic_plugin.py
  
neutron/tests/unit/services/loadbalancer/drivers/radware/test_plugin_driver.py
  neutron/tests/unit/test_l3_agent.py
  neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_sync.py
  
neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_mechanism_driver.py
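The correct form can be shown with a short snippet (using unittest.mock from the standard library; in the standalone mock library current at the time of this bug, a call like assert_called_once() simply created a new child mock and always "passed"):

```python
from unittest import mock

m = mock.Mock()
m("a", key=1)

# assert_called_once_with verifies both the call count and the arguments;
# a mismatch raises AssertionError instead of silently passing.
m.assert_called_once_with("a", key=1)
```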

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365751/+subscriptions



[Yahoo-eng-team] [Bug 1368667] Re: Nova supports creating flavor with an existing ID if the previous flavor(with same ID) is in deleted state.

2014-09-23 Thread Abhishek Talwar
** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368667

Title:
  Nova supports creating flavor with an existing ID if the previous
  flavor(with same ID) is in deleted state.

Status in Python client library for Nova:
  In Progress

Bug description:
  Nova supports creating flavor with an existing ID if the previous
  flavor(with same ID) is in deleted state.

  1 Create a new flavor with some ID (in this case 99)

  [root@nova1 ~]# nova flavor-create flavor_1 99 128 1 1
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 99 | flavor_1 | 128       | 1    | 0         |      | 1     | 1.0         | True      |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+

  2 Now boot a VM with the flavor we just created (flavor_1 with ID 99)

  [root@nova1 ~]# nova boot --image 55479ee4-e28e-4c45-a093-742f71cff25c --flavor 99 --nic net-id=8b0c3729-380f-4949-b471-1acd8a42aa62 vm_test
  +----------+----------------------------------------------------------------+
  | Property | Value                                                          |
  +----------+----------------------------------------------------------------+
  | flavor   | flavor_1 (99)                                                  |
  | hostId   |                                                                |
  | id       | 4d08c60e-1771-48ca-93ec-35aa30097e01                           |
  | image    | cirros-0.3.1-x86_64-uec (55479ee4-e28e-4c45-a093-742f71cff25c) |
  | name     | vm_test                                                        |
  +----------+----------------------------------------------------------------+

  3 Delete the flavor that was created in Step1 (flavor_1, ID = 99)

  [root@nova1 ~]# nova flavor-delete 99
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 99 | flavor_1 | 128       | 1    | 0         |      | 1     | 1.0         | True      |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+

  4 Run nova show vm_test to get the details of the instance
  [root@nova1 ~]# nova show vm_test
  +----------+----------------------------------------------------------------+
  | Property | Value                                                          |
  +----------+----------------------------------------------------------------+
  | flavor   | flavor_1 (99)                                                  |
  | hostId   | 2af068c472a702b23c0bb6fc39be58cad4620aebc8f5c7f66535b277       |
  | id       | 4d08c60e-1771-48ca-93ec-35aa30097e01                           |
  | image    | cirros-0.3.1-x86_64-uec (55479ee4-e28e-4c45-a093-742f71cff25c) |
  | name     | vm_test                                                        |
  +----------+----------------------------------------------------------------+

  5 Now create another flavor with the same ID (99) and different
  specifications

  [root@nova1 ~]# nova flavor-create flavor_2 99 512 6 4
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+
  | ID | Name     | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+
  | 99 | flavor_2 | 512       | 6    | 0         |      | 4     | 1.0         | True      |
  +----+----------+-----------+------+-----------+------+-------+-------------+-----------+

  6 Run nova show again to get the details of the instance vm_test
  +----------+----------------------------------------------------------------+
  | Property | Value                                                          |
  +----------+----------------------------------------------------------------+
  | flavor

[Yahoo-eng-team] [Bug 1372882] [NEW] Neutron should drop certain outbound ICMPv6 packets

2014-09-23 Thread Alexey I. Froloff
Public bug reported:

Neutron should drop certain ICMPv6 messages (such as type 134, router
advertisement) on the VM -> network path, just as it allows certain
ICMPv6 messages to be accepted on the network -> VM path, as in bug
#1242933.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372882

Title:
  Neutron should drop certain outbound ICMPv6 packets

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron should drop certain ICMPv6 messages (such as type 134, router
  advertisement) on the VM -> network path, just as it allows certain
  ICMPv6 messages to be accepted on the network -> VM path, as in bug
  #1242933.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372882/+subscriptions



[Yahoo-eng-team] [Bug 1372883] [NEW] DHCP agent should specify prefix-len for IPv6 dhcp-range's

2014-09-23 Thread Alexey I. Froloff
Public bug reported:

If a Network contains a Subnet smaller than /64, the prefix-len should
be specified in dnsmasq's --dhcp-range option.
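For illustration, dnsmasq's IPv6 dhcp-range accepts an explicit prefix length as an extra field; a hypothetical option line for a /80 subnet might look like this (addresses and lease time are made up):

```ini
# Hypothetical dnsmasq option: start address, end address, prefix length,
# lease time. Without the explicit 80, dnsmasq assumes a /64.
dhcp-range=2001:db8:0:1:2::10,2001:db8:0:1:2::ff,80,24h
```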

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372883

Title:
  DHCP agent should specify prefix-len for IPv6 dhcp-range's

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If a Network contains a Subnet smaller than /64, the prefix-len should
  be specified in dnsmasq's --dhcp-range option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372883/+subscriptions



[Yahoo-eng-team] [Bug 1372888] [NEW] g-api raise 500 error if filesystem_store_datadirs and filesystem_store_datadir both specified

2014-09-23 Thread Abhishek Kekane
Public bug reported:

g-api raise 500 error if filesystem_store_datadirs and
filesystem_store_datadir both specified

If both the filesystem_store_datadirs and filesystem_store_datadir
parameters are specified in the glance-api.conf file, then a 500
internal server error is raised while creating a new image. Ideally it
should raise a 'BadStoreConfiguration' exception and the glance-api
service should not start.

Stack trace on the console:

2014-09-23 03:15:24.407 7594 ERROR glance.api.v1.upload_utils [f351b844-8b1e-429c-8943-a79b331311be 41d56c1d7e134fdb8a1dcfe4ea3c82de 73078aaf41fb47b5bb0cfd4e9fdc79fb - - -] Failed to upload image 288f7386-bddf-4b2b-a97f-49d2814f7b99
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils   File "/opt/stack/glance/glance/api/v1/upload_utils.py", line 106, in upload_data_to_store
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils     store)
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils   File "/usr/local/lib/python2.7/dist-packages/glance_store/backend.py", line 342, in store_add_to_backend
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils     (location, size, checksum, metadata) = store.add(image_id, data, size)
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils   File "/usr/local/lib/python2.7/dist-packages/glance_store/driver.py", line 149, in add_disabled
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils     raise exceptions.StoreAddDisabled
2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils StoreAddDisabled: None

Steps to reproduce:
1. edit glance-api.conf file and specify values for both 
filesystem_store_datadirs and filesystem_store_datadir options
2. Restart the glance-api service
3. Hit create-image api/upload image using horizon
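For reference, a hypothetical glance-api.conf excerpt that reproduces the conflict (paths and priorities are made up; the section the options live in may differ by release):

```ini
# Both the single-directory and the multi-directory filesystem store
# options are set, which per this bug should be rejected at startup with
# BadStoreConfiguration instead of failing uploads with a 500.
filesystem_store_datadir = /var/lib/glance/images/
filesystem_store_datadirs = /var/lib/glance/images1/:100
filesystem_store_datadirs = /var/lib/glance/images2/:200
```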

** Affects: glance
 Importance: Undecided
 Assignee: Abhishek Kekane (abhishek-kekane)
 Status: New


** Tags: ntt

** Changed in: glance
 Assignee: (unassigned) => Abhishek Kekane (abhishek-kekane)

** Description changed:

  g-api raise 500 error if filesystem_store_datadirs and
  filesystem_store_datadir both specified
  
  If both the filesystem_store_datadirs and filesystem_store_datadir
  parameters are specified in the glance-api.conf file, then a 500
  internal server error is raised while creating a new image. Ideally it
  should raise a 'BadStoreConfiguration' exception and the glance-api
  service should not start.
  
  Stack trace on the console:
  
  2014-09-23 03:15:24.407 7594 ERROR glance.api.v1.upload_utils [f351b844-8b1e-429c-8943-a79b331311be 41d56c1d7e134fdb8a1dcfe4ea3c82de 73078aaf41fb47b5bb0cfd4e9fdc79fb - - -] Failed to upload image 288f7386-bddf-4b2b-a97f-49d2814f7b99
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils Traceback (most recent call last):
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils   File "/opt/stack/glance/glance/api/v1/upload_utils.py", line 106, in upload_data_to_store
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils     store)
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils   File "/usr/local/lib/python2.7/dist-packages/glance_store/backend.py", line 342, in store_add_to_backend
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils     (location, size, checksum, metadata) = store.add(image_id, data, size)
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils   File "/usr/local/lib/python2.7/dist-packages/glance_store/driver.py", line 149, in add_disabled
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils     raise exceptions.StoreAddDisabled
  2014-09-23 03:15:24.407 7594 TRACE glance.api.v1.upload_utils StoreAddDisabled: None
  
  Steps to reproduce:
  1. edit glance-api.conf file and specify values for both 
filesystem_store_datadirs and filesystem_store_datadir options
  2. Restart the glance-api service
- 3. Hit create-image api
+ 3. Hit create-image api/upload image using horizon

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1372888

Title:
  g-api raise 500 error if filesystem_store_datadirs and
  filesystem_store_datadir both specified

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  g-api raise 500 error if filesystem_store_datadirs and
  filesystem_store_datadir both specified

  If both the filesystem_store_datadirs and filesystem_store_datadir
  parameters are specified in the glance-api.conf file, then creating a
  new image raises a 500 internal server error. Ideally it should raise
  a 'BadStoreConfiguration' exception and the glance-api service should
  not start.

  Stack trace on the console:

  2014-09-23 03:15:24.407 7594 ERROR glance.api.v1.upload_utils 
[f351b844-8b1e-429c-8943-a79b331311be 41d56c1d7e134fdb8a1dcfe4ea3c82de 

[Yahoo-eng-team] [Bug 1372375] Re: Attaching LVM encrypted volumes (with LUKS) could cause data loss if LUKS headers get corrupted

2014-09-23 Thread Duncan Thomas
Adding cinder on since it is likely the fix will end up in cinder

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1372375

Title:
  Attaching LVM encrypted volumes (with LUKS) could cause data loss if
  LUKS headers get corrupted

Status in Cinder:
  New
Status in OpenStack Compute (Nova):
  Incomplete
Status in OpenStack Security Advisories:
  Won't Fix

Bug description:
  I have doubts about the flow of the volume attaching operation, as
  defined in /usr/lib/python2.6/site-
  packages/nova/volume/encryptors/luks.py.

  If the device is not recognized as a valid LUKS device, the script 
LUKS-formats it! So if for some reason the LUKS header gets corrupted, this 
erases all the data.
  To recover from corrupted headers there are the 

  cryptsetup luksHeaderBackup

  and

  cryptsetup luksHeaderRestore

  commands that respectively do the backup and the restore of the
  headers.
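For illustration, the backup and restore invocations look like this (the device path and backup file are examples, not from the report; both commands require root):

```shell
# Back up the LUKS header of an example device before attaching it.
cryptsetup luksHeaderBackup /dev/sdb1 \
    --header-backup-file /root/sdb1-luks-header.img

# Restore it if the on-disk header ever gets corrupted.
cryptsetup luksHeaderRestore /dev/sdb1 \
    --header-backup-file /root/sdb1-luks-header.img
```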

  I think that the process has to be reviewed, and the luksFormat
  operation has to be performed during the volume creation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1372375/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372909] [NEW] Action title can overflow pulldown menu when action title is long using Firefox

2014-09-23 Thread Mitsuhiro Tanino
Public bug reported:

When an action title in a pulldown menu is long, the action title can overflow 
the pulldown menu.
This error occurs when the language setting is Japanese.

Bug 1300868 tried to fix this, but the bug still remains in specific 
browser environments.
https://bugs.launchpad.net/horizon/+bug/1300868

From my little investigation, this bug depends on the browser.
  - Firefox: overflow occurs
  - Google Chrome: overflow does not occur
  - IE11: overflow does not occur

Target menu which overflows is ボリュームのスナップショットの削除
プロジェクト→コンピュート→ボリューム→ボリュームのスナップショット→ボリュームのスナップショットの削除
https://cloud.githubusercontent.com/assets/6553985/4371403/fa6b28f0-4316-11e4-84e8-544c5db34bc2.png

In English,
Project → Compute → Volume → Volume snapshots → Delete Volume Snapshot

It would be nice to fix this bug for Firefox users.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372909

Title:
   Action title can overflow pulldown menu when action title is long
  using Firefox

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When an action title in a pulldown menu is long, the action title can overflow 
the pulldown menu.
  This error occurs when the language setting is Japanese.

  Bug 1300868 tried to fix this, but the bug still remains in specific 
browser environments.
  https://bugs.launchpad.net/horizon/+bug/1300868

  From my little investigation, this bug depends on the browser.
- Firefox: overflow occurs
- Google Chrome: overflow does not occur
- IE11: overflow does not occur

  Target menu which overflows is ボリュームのスナップショットの削除
  プロジェクト→コンピュート→ボリューム→ボリュームのスナップショット→ボリュームのスナップショットの削除
  
https://cloud.githubusercontent.com/assets/6553985/4371403/fa6b28f0-4316-11e4-84e8-544c5db34bc2.png

  In English,
  Project → Compute → Volume → Volume snapshots → Delete Volume Snapshot

  It would be nice to fix this bug for Firefox users.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1368910] Re: intersphinx requires network access which sometimes fails

2014-09-23 Thread Doug Hellmann
** Also affects: oslo-incubator
   Importance: Undecided
   Status: New

** Changed in: oslo-incubator
 Milestone: None => kilo-1

** Changed in: oslo-incubator
   Status: New => Fix Released

** Changed in: oslo-incubator
 Assignee: (unassigned) => Andreas Jaeger (jaegerandi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1368910

Title:
  intersphinx requires network access which sometimes fails

Status in Cinder:
  In Progress
Status in Manila:
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress
Status in The Oslo library incubator:
  Fix Released
Status in python-manilaclient:
  Fix Committed

Bug description:
  The intersphinx module requires internet access, and periodically
  causes docs jobs to fail.

  This module also prevents docs from being built without internet
  access.

  Since we don't actually use intersphinx for much (if anything), let's
  just remove it.
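As an illustration, removing it amounts to deleting one entry from the Sphinx conf.py extensions list (the list below is made up; only the removal pattern matters):

```python
# Illustrative doc/source/conf.py fragment -- the extension list is an
# assumption, not the actual project configuration.
extensions = [
    'sphinx.ext.autodoc',
    # 'sphinx.ext.intersphinx',  # removed: needs network access at build time
]

# Doc builds now have no intersphinx dependency.
assert 'sphinx.ext.intersphinx' not in extensions
```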

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1368910/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369627] Re: libvirt disk.config will have issues when booting two instances with different config drive values

2014-09-23 Thread Jeremy Stanley
This only affects juno right? (Those changes are only in the master
branch?) Just confirming we don't need an advisory for any released
versions.

** Also affects: ossa
   Importance: Undecided
   Status: New

** Changed in: ossa
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1369627

Title:
  libvirt disk.config will have issues when booting two instances with
  different config drive values

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Security Advisories:
  Incomplete

Bug description:
  Currently, in the image creating code for Juno we have

  if configdrive.required_by(instance):
      LOG.info(_LI('Using config drive'), instance=instance)

      image_type = self._get_configdrive_image_type()
      backend = image('disk.config', image_type)
      backend.cache(fetch_func=self._create_configdrive,
                    filename='disk.config' + suffix,
                    instance=instance,
                    admin_pass=admin_pass,
                    files=files,
                    network_info=network_info)

  The important thing to notice here is that we have
  filename='disk.config' + suffix.  This means that the filename for
  the config drive in the cache directory will be simply 'disk.config'
  followed by any potential suffix (e.g. '.rescue').  This name is not
  unique to the instance whose config drive we are creating.  Therefore,
  when we go to boot another instance with a different config drive, the
  cache function will detect the old config drive, and decide it doesn't
  need to create the new config drive with the appropriate config for
  the new instance.
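A minimal sketch of one possible direction (the helper name is hypothetical; this is not the actual Nova fix): make the cached filename unique per instance so one instance's config drive can never satisfy another's cache lookup.

```python
# Hypothetical helper -- illustration only, not the actual Nova patch.
# Including the instance UUID in the cache filename means the image
# cache can never hand back another instance's config drive.
def configdrive_cache_name(instance_uuid, suffix=''):
    """Build a per-instance cache filename for disk.config."""
    return 'disk.config.%s%s' % (instance_uuid, suffix)

# Two instances (or one instance in rescue mode) now get distinct names.
print(configdrive_cache_name('f47ac10b'))
print(configdrive_cache_name('f47ac10b', '.rescue'))
```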

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1369627/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365751] Re: Use of assert_called_once() instead of assert_called_once_with()

2014-09-23 Thread Terry Howe
** Changed in: python-cliff
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1365751

Title:
  Use of assert_called_once() instead of assert_called_once_with()

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Command Line Interface Formulation Framework:
  Fix Released

Bug description:
  mock.assert_called_once() is a noop, it doesn't test anything.

  Instead it should be mock.assert_called_once_with()
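A small self-contained demonstration of the difference, using only the stdlib mock module (the misspelled method name below is deliberately made up):

```python
from unittest import mock

m = mock.Mock()
m.send("payload")

# Correct: verifies exactly one call was made with these arguments.
m.send.assert_called_once_with("payload")

# The trap: a plain Mock auto-creates attributes on access, so calling a
# verification method it doesn't actually define (as assert_called_once
# was on the mock releases current at the time of this bug) just returns
# a child Mock and "passes" without checking anything.
child = m.send.verify_called_once()  # made-up name, not a real mock API
assert isinstance(child, mock.Mock)
```

Modern mock versions mitigate this by raising AttributeError for unknown attributes that start with "assert", but misspellings that avoid that prefix still silently pass.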

  This occurs in the following places:
    Nova
      nova/tests/virt/hyperv/test_ioutils.py
      nova/tests/virt/libvirt/test_driver.py
    Cliff
  cliff/tests/test_app.py
Neutron
  neutron/tests/unit/services/l3_router/test_l3_apic_plugin.py
  
neutron/tests/unit/services/loadbalancer/drivers/radware/test_plugin_driver.py
  neutron/tests/unit/test_l3_agent.py
  neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_sync.py
  
neutron/tests/unit/ml2/drivers/cisco/apic/test_cisco_apic_mechanism_driver.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1365751/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372956] [NEW] Wrong idp_metadata_path parameter group

2014-09-23 Thread Marek Denis
Public bug reported:

The federation.controllers.SAMLMetadataV3.get_metadata() method reads
the CONF.federation.idp_metadata_path option, whereas the right option
is CONF.saml.idp_metadata_path.

Controllers should be fixed.
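A toy illustration (plain Python, not oslo.config; all names and the path are assumptions) of why the group prefix matters: an option registered under the [saml] group is simply not visible under [federation].

```python
# Toy stand-in for oslo.config option groups -- illustration only.
class Group:
    def __init__(self, **opts):
        self.__dict__.update(opts)

class Conf:
    pass

CONF = Conf()
CONF.saml = Group(idp_metadata_path='/etc/keystone/idp_metadata.xml')
CONF.federation = Group()  # no idp_metadata_path registered here

# Looking the option up under the wrong group fails outright.
assert CONF.saml.idp_metadata_path == '/etc/keystone/idp_metadata.xml'
assert not hasattr(CONF.federation, 'idp_metadata_path')
```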

** Affects: keystone
 Importance: Undecided
 Assignee: Marek Denis (marek-denis)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Marek Denis (marek-denis)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1372956

Title:
  Wrong idp_metadata_path parameter group

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The federation.controllers.SAMLMetadataV3.get_metadata() method reads
  the CONF.federation.idp_metadata_path option, whereas the right
  option is CONF.saml.idp_metadata_path.

  Controllers should be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1372956/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1297468] Re: [UI] Cleanup backward compatibility for folsom URLs

2014-09-23 Thread Chad Roberts
This was never really a bug against horizon, but rather the old sahara-
dashboard module which isn't an issue anymore since the Sahara dashboard
is now part of horizon.  Setting this bug to invalid because it truly
is totally invalid.

** Changed in: horizon
   Status: Incomplete => Invalid

** Changed in: horizon
 Assignee: Chad Roberts (croberts) => (unassigned)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1297468

Title:
  [UI] Cleanup backward compatibility for folsom URLs

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  The utils.compatibility module has code that supports backward
  compatibility for the Folsom release of Horizon.  That support is no
  longer required, so that code should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1297468/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372971] [NEW] Different style buttons for Create Network in Network Topology tab and adjacent Networks tab

2014-09-23 Thread Jeff Needle
Public bug reported:

In the Project → Network section, the Network Topology tab and the
Networks tab both have a Create Network button that does the same
thing, but the styles on the buttons are different.  The same goes for
the Router tab and the Create Router button on that page: its styling
differs from the Network Topology page, but matches the button styling
of the Networks tab.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Here the Create Network button doesn't have a plus, and 
isn't bolded"
   
https://bugs.launchpad.net/bugs/1372971/+attachment/4212513/+files/Selection_339.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1372971

Title:
   Different style buttons for Create Network in Network Topology tab
  and adjacent Networks tab

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the Project → Network section, the Network Topology tab and the
  Networks tab both have a Create Network button that does the same
  thing, but the styles on the buttons are different.  The same goes for
  the Router tab and the Create Router button on that page: its styling
  differs from the Network Topology page, but matches the button styling
  of the Networks tab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1372971/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372981] [NEW] ModelsMigrationsSync test should be moved to functional tests

2014-09-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Functional tests can't provide both MySQL and PostgreSQL installed, which
are needed to properly run ModelsMigrationsSync. After that is fixed we
can move this test to the functional suite.

** Affects: neutron
 Importance: Medium
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: New

-- 
 ModelsMigrationsSync test should be moved to functional tests
https://bugs.launchpad.net/bugs/1372981
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1365901] Re: cinder-api ran into hang loop in python2.6

2014-09-23 Thread Matt Riedemann
Nova is doing the same thing:

http://git.openstack.org/cgit/openstack/nova/tree/nova/wsgi.py#n138

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Importance: Undecided => High

** Changed in: nova
   Status: New => In Progress

** Changed in: nova
 Assignee: (unassigned) => Matt Riedemann (mriedem)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1365901

Title:
  cinder-api ran into hang loop in python2.6

Status in Cinder:
  Fix Committed
Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  cinder-api ran into hang loop in python2.6

  #cinder-api
  ...
  ...
  snip...
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
<bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 
0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
<bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 
0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
<bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 
0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
<bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 
0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
<bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 
0x4e052d0>> ignored
  Exception RuntimeError: 'maximum recursion depth exceeded in 
__subclasscheck__' in <type 'exceptions.AttributeError'> ignored
  Exception AttributeError: 'GreenSocket' object has no attribute 'fd' in 
<bound method GreenSocket.__del__ of <eventlet.greenio.GreenSocket object at 
0x4e052d0>> ignored
  ...
  ...
  snip...

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1365901/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372810] [NEW] Incorrect declaration of plugin's name in setup.cfg

2014-09-23 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

In setup.cfg the name of OneConvergencePlugin is set incorrectly: '.' is
used instead of ':'.
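For illustration, setuptools entry points separate the module path from the attribute with a colon; a dot makes setuptools look for a module instead. The entry-point group and paths below are made up, not copied from the neutron setup.cfg:

```ini
[entry_points]
neutron.core_plugins =
    # wrong: '.' before the class name points at a (nonexistent) module
    # oneconvergence = neutron.plugins.oneconvergence.plugin.OneConvergencePluginV2
    # right: module:attribute
    oneconvergence = neutron.plugins.oneconvergence.plugin:OneConvergencePluginV2
```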

** Affects: neutron
 Importance: Undecided
 Assignee: Ann Kamyshnikova (akamyshnikova)
 Status: In Progress

-- 
Incorrect declaration of plugin's name in setup.cfg
https://bugs.launchpad.net/bugs/1372810
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372981] Re: ModelsMigrationsSync test should be moved to functional tests

2014-09-23 Thread Maru Newby
** Changed in: python-neutronclient
   Importance: Undecided => Medium

** Project changed: python-neutronclient => neutron

** Changed in: neutron
   Status: New => Confirmed

** Summary changed:

-  ModelsMigrationsSync test should be moved to functional tests
+ Neutron db migration tests can't run via the functional job

** Summary changed:

- Neutron db migration tests can't run via the functional job
+ Only one of (postgres, mysql) can be installed by devstack

** Summary changed:

- Only one of (postgres, mysql) can be installed by devstack
+ Only one of (postgres, mysql) can be installed by devstack at a time

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372981

Title:
  Only one of (postgres, mysql) can be installed by devstack at a time

Status in OpenStack Neutron (virtual network service):
  Confirmed

Bug description:
  Functional tests can't provide both MySQL and PostgreSQL installed,
  which are needed to properly run ModelsMigrationsSync. After that is
  fixed we can move this test to the functional suite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372981/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372810] Re: Incorrect declaration of plugin's name in setup.cfg

2014-09-23 Thread Ann Kamyshnikova
** Project changed: python-neutronclient => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372810

Title:
  Incorrect declaration of plugin's name in setup.cfg

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  In setup.cfg the name of OneConvergencePlugin is set incorrectly: '.'
  is used instead of ':'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1372810/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371185] Re: Flavor Tables Extra Specs link is broken

2014-09-23 Thread Gary W. Smith
Per David Lyle, this used to be supported, but we are no longer doing so
due to the increasing changes in APIs.

I'll close it and save you a step.

** Changed in: horizon
   Status: Incomplete => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371185

Title:
  Flavor Tables Extra Specs link is broken

Status in OpenStack Dashboard (Horizon):
  Won't Fix

Bug description:
  Clicking on any Extra Specs link puts the browser into the never
  ending spinner.

  I attached the browser console showing the error causing the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371185/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373054] [NEW] VMWare compute driver incorrectly attaches volumes on iSCSI for target with multiple LUNs

2014-09-23 Thread Mikhail Kebich
Public bug reported:

VMWare compute driver incorrectly attaches volumes on iSCSI for target
with multiple LUNs.

The driver handles only the first LUN of the target whose IQN is retrieved
from the Cinder service when attaching a volume:

at nova.virt.vmwareapi.volumeops.VMwareVolumeOps._iscsi_get_target(self, data), 
line 169:
for lun in target.lun:
    if 'host.ScsiDisk' in lun.scsiLun:
        scsi_lun_key = lun.scsiLun
        break

Thus, only one volume will be attached to the VM, but multiple times (see
the attachment).
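A minimal sketch of the kind of change implied (the helper and the LUN objects below are hypothetical stand-ins, not the actual driver code): collect every ScsiDisk LUN on the target instead of breaking out on the first one.

```python
from collections import namedtuple

# Hypothetical stand-in for the vSphere LUN objects -- illustration only.
Lun = namedtuple('Lun', ['scsiLun'])

def find_scsi_lun_keys(luns):
    """Return all ScsiDisk LUN keys rather than just the first one."""
    return [lun.scsiLun for lun in luns if 'host.ScsiDisk' in lun.scsiLun]

# A target exposing two LUNs now yields both keys, so the caller can
# match the specific LUN for the volume being attached.
luns = [Lun('key-vim.host.ScsiDisk-0'), Lun('key-vim.host.ScsiDisk-1')]
print(find_scsi_lun_keys(luns))
```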

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware volumes

** Attachment added: "openstack+vcenter.png"
   
https://bugs.launchpad.net/bugs/1373054/+attachment/4212656/+files/openstack%2Bvcenter.png

** Description changed:

  VMWare compute driver incorrectly attaches volumes on iSCSI for target
  with multiple LUNs.
  
  Driver handles only first LUN for the target with iqn retrieved from
  Cinder service to attach volume:
  
- at nova.virt.vmwareapi.VMwareVolumeOps._iscsi_get_target(self, data), line 
169:
- for lun in target.lun:
- if 'host.ScsiDisk' in lun.scsiLun:
- scsi_lun_key = lun.scsiLun
- break
+ at nova.virt.vmwareapi.volumeops.VMwareVolumeOps._iscsi_get_target(self, 
data), line 169:
+ for lun in target.lun:
+ if 'host.ScsiDisk' in lun.scsiLun:
+ scsi_lun_key = lun.scsiLun
+ break
  
  Thus, only one volume will be attached to VM, but multiple times (see in
  attachment).

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373054

Title:
  VMWare compute driver incorrectly attaches volumes on iSCSI for target
  with multiple LUNs

Status in OpenStack Compute (Nova):
  New

Bug description:
  VMWare compute driver incorrectly attaches volumes on iSCSI for target
  with multiple LUNs.

  The driver handles only the first LUN of the target whose IQN is
  retrieved from the Cinder service when attaching a volume:

  at nova.virt.vmwareapi.volumeops.VMwareVolumeOps._iscsi_get_target(self, 
data), line 169:
  for lun in target.lun:
      if 'host.ScsiDisk' in lun.scsiLun:
          scsi_lun_key = lun.scsiLun
          break

  Thus, only one volume will be attached to the VM, but multiple times
  (see the attachment).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373054/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1367941] Re: Able to acquire the semaphore used in lockutils.synchronized_with_prefix twice at the same time

2014-09-23 Thread Doug Hellmann
** Changed in: oslo-incubator
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1367941

Title:
  Able to acquire the semaphore used in
  lockutils.synchronized_with_prefix twice at the same time

Status in OpenStack Compute (Nova):
  Invalid
Status in The Oslo library incubator:
  Fix Released
Status in Oslo Concurrency Library:
  Fix Committed

Bug description:
  In nova-compute the semaphore compute_resources is used in
  lockutils.synchronized_with_prefix('nova-') as part of
  nova/compute/resource_tracker.py

  The compute_resources semaphore is acquired once at:

  http://logs.openstack.org/58/117258/2/gate/gate-tempest-dsvm-neutron-
  full/48c8627/logs/screen-n-cpu.txt.gz?#_2014-09-10_20_19_17_176

  And then again at:

  In  http://logs.openstack.org/58/117258/2/gate/gate-tempest-dsvm-
  neutron-
  full/48c8627/logs/screen-n-cpu.txt.gz?#_2014-09-10_20_19_52_234

  without being released in between.  This means
  lockutils.synchronized_with_prefix('nova-') isn't working as expected.

  While https://review.openstack.org/#/c/119586/ is a possible culprit
  for this issue, a spot check of nova-compute logs from before that
  patch was merged show this was happening before (although in my spot
  checking it happened significantly less often, but I only checked one
  file).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1367941/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1369401] Re: Multiple services with same name and type

2014-09-23 Thread Morgan Fainberg
As per comment #11 this is being marked as won't fix since it's
V2-related and V3 works as expected.

** Changed in: keystone
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1369401

Title:
  Multiple services with same name and type

Status in OpenStack Identity (Keystone):
  Won't Fix

Bug description:
  I am working on the current devstack environment.

  keystone service-create allows creating multiple services with the same 
--name and --type.
  Is this expected?

  $ keystone --debug --os-endpoint http://10.0.2.15:35357/v2.0 --os-
  token Passw0rd service-create --name junk --type junk

  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | 69a40a334b58433ea7440e6336240611 |
  |     name    |               junk               |
  |     type    |               junk               |
  +-------------+----------------------------------+

  $ keystone --debug --os-endpoint http://10.0.2.15:35357/v2.0 --os-
  token Passw0rd service-create --name junk --type junk

  +-------------+----------------------------------+
  |   Property  |              Value               |
  +-------------+----------------------------------+
  | description |                                  |
  |   enabled   |               True               |
  |      id     | e976635ceb9d4c47bf0763f7b3cdcec9 |
  |     name    |               junk               |
  |     type    |               junk               |
  +-------------+----------------------------------+

  I expected it to fail as the 'user-create' fails with conflict error.

  After creating multiple service with same name, keystone endpoint-
  create fails with the error

  $ keystone --debug --os-endpoint http://10.0.2.15:35357/v2.0 --os-
  token Passw0rd endpoint-create --service junk --publicurl http://junk

  Multiple service matches found for 'junk', use an ID to be more
  specific.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1369401/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1340793] Re: DB2 deadlock error not detected

2014-09-23 Thread Morgan Fainberg
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1340793

Title:
  DB2 deadlock error not detected

Status in Cinder:
  Confirmed
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed
Status in The Oslo library incubator:
  Fix Released

Bug description:
  Currently, only mysql and postgresql deadlock errors are properly handled.
  The error message for DB2 looks like:

  'SQL0911N  The current transaction has been rolled back because of a
  deadlock or timeout.  <deadlock details>'

  Oslo.db needs to include a regex to detect this deadlock. Essentially the 
same as
  https://bugs.launchpad.net/nova/+bug/1270725
  but for DB2
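An illustrative regex in the spirit of the request (not the actual oslo.db patch; the pattern and names are assumptions built from the error text quoted above):

```python
import re

# Illustration only -- not the actual oslo.db filter. DB2 reports
# deadlocks and lock timeouts as SQL0911N; a pattern like this would let
# generic deadlock-retry machinery recognize the DB2 message.
DB2_DEADLOCK_RE = re.compile(r'SQL0911N.*deadlock or timeout')

msg = ('SQL0911N  The current transaction has been rolled back '
       'because of a deadlock or timeout.  Reason code 2.  '
       'SQLSTATE=40001 SQLCODE=-911')
assert DB2_DEADLOCK_RE.search(msg) is not None
```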

  This is an example error:

  2014-07-01 19:52:16.574 2710 TRACE
  nova.openstack.common.db.sqlalchemy.session ProgrammingError:
  (ProgrammingError) ibm_db_dbi::ProgrammingError: Statement Execute
  Failed: [IBM][CLI Driver][DB2/LINUXX8664] SQL0911N  The current
  transaction has been rolled back because of a deadlock or timeout.
  Reason code 2.  SQLSTATE=40001 SQLCODE=-911 'UPDATE reservations SET
  updated_at=updated_at, deleted_at=?, deleted=id WHERE
  reservations.deleted = ? AND reservations.uuid IN (?, ?, ?)'
  (datetime.datetime(2014, 7, 1, 23, 52, 10, 774722), 0,
  'e2353f5e-f444-4a94-b7bf-f877402c15ab', 'c4b22c95-284a-4ce3-810b-
  5d9bbe6dd7b7', 'ab0294cb-c317-4594-9b19-911589228aa5')

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1340793/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1339564] Re: glance image-delete on an image with the status saving doesn't delete the image's file from store

2014-09-23 Thread Tushar Patil
*** This bug is a duplicate of bug 1243127 ***
https://bugs.launchpad.net/bugs/1243127

** This bug is no longer a duplicate of bug 1329319
   Restart glance when a image is uploading, then delete the image. The data of 
the image is not deleted
** This bug has been marked a duplicate of bug 1243127
   Image is not clean when uploading image kill glance process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1339564

Title:
  glance image-delete on an image with the status saving doesn't
  delete the image's file from store

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Description of problem:
  After running the scenario described in
  bugs.launchpad.net/cinder/+bug/1339545, I've deleted two images that
  were stuck in saving status with
  # glance image-delete image-id image-id

  both of the images' files were still in the store:
  # ls -l /var/lib/glance/images
  -rw-r--r--. 1 glance glance 2158362624 Jul  9 10:18 d4da7dea-c94d-4c9e-a987-955a905a7fed
  -rw-r--r--. 1 glance glance 1630994432 Jul  9 10:09 8532ef07-3dfa-4d63-8537-033c31b16814

  Version-Release number of selected component (if applicable):
  python-glanceclient-0.12.0-1.el7ost.noarch
  python-glance-2014.1-4.el7ost.noarch
  openstack-glance-2014.1-4.el7ost.noarch

  
  How reproducible:

  
  Steps to Reproduce:
  1. Run the scenario from bugs.launchpad.net/cinder/+bug/1339545
  2. Delete the image:
  # glance image-delete image-id

  
  Actual results:
  The file is still in the store.

  Expected results:
  The file has been deleted from the store.

  Additional info:
  The logs are attached -
  image UUIDs:
  d4da7dea-c94d-4c9e-a987-955a905a7fed
  8532ef07-3dfa-4d63-8537-033c31b16814

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1339564/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1337367] Re: The add method of swift.py have a problem.When a large image is uploading and the glance-api is restarted, then we can not delete the image content that have been up

2014-09-23 Thread Tushar Patil
*** This bug is a duplicate of bug 1243127 ***
https://bugs.launchpad.net/bugs/1243127

** This bug is no longer a duplicate of bug 1329319
   Restart glance when a image is uploading, then delete the image. The data of 
the image is not deleted
** This bug has been marked a duplicate of bug 1243127
   Image is not clean when uploading image kill glance process

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1337367

Title:
  The add method of swift.py have a problem.When a large image is
  uploading and the glance-api is restarted, then we can not delete the
  image content that have been uploaded in swift

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  1. upload a large image, for example 50G
  2. kill glance-api when image status:saving
  3. restart glance-api
  4. delete image

  the image content that has already been uploaded cannot be deleted. I
  think the add method of glance/swift/BaseStore should put the object
  manifest onto swift first, before we upload the content, when we upload
  a large image in chunks:

   manifest = "%s/%s-" % (location.container, location.obj)
   headers = {'ETag': hashlib.md5().hexdigest(),
              'X-Object-Manifest': manifest}
   connection.put_object(location.container, location.obj, None,
                         headers=headers)

  The code above should run before the code that uploads the image chunks.
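A minimal sketch of the proposed ordering, using a stand-in connection object; FakeSwiftConnection, upload_chunked, and the segment naming are illustrative, not glance's actual code:

```python
import hashlib

class FakeSwiftConnection:
    """Stand-in for swiftclient's Connection that just records calls."""
    def __init__(self):
        self.calls = []

    def put_object(self, container, obj, contents, headers=None):
        self.calls.append((container, obj, headers))

def upload_chunked(conn, container, obj, chunks):
    # Put the zero-byte manifest object first, so a later delete can
    # resolve the segment prefix even if the upload is interrupted.
    manifest = "%s/%s-" % (container, obj)
    headers = {'ETag': hashlib.md5().hexdigest(),
               'X-Object-Manifest': manifest}
    conn.put_object(container, obj, None, headers=headers)
    # Then upload the segments under the manifest prefix.
    for i, chunk in enumerate(chunks):
        conn.put_object(container, "%s-%05d" % (obj, i), chunk)

conn = FakeSwiftConnection()
upload_chunked(conn, "images", "img1", [b"aaa", b"bbb"])
print(conn.calls[0])  # the manifest put comes first
```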

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1337367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1371447] Re: German translation is quite bad

2014-09-23 Thread Gary W. Smith
Translations are done outside of Horizon, at Transifex. Please contribute the
fix here: https://www.transifex.com/projects/p/horizon

** Changed in: horizon
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1371447

Title:
  German translation is quite bad

Status in OpenStack Dashboard (Horizon):
  Won't Fix
Status in OpenStack I18n & L10n:
  New

Bug description:
  "Reboot instances" is translated as "Neustart Instanzen", which is a
  bumpy word-for-word translation. I'd propose "Instanzen neu starten".

  The same issue is true for "Terminate instances" (translated as "Beenden
  Instanzen"). I'd prefer to switch both words: "Instanzen beenden".

  Bonus points will be awarded for finding a better word for "Instanzen"
  (instances).

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1371447/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373113] [NEW] Wrong exception when deleting a domain group assignment using a not domain-aware backend

2014-09-23 Thread Samuel de Medeiros Queiroz
Public bug reported:

When deleting a domain group assignment using a backend that is not
domain-aware, such as LDAP, we should get an exception like: 'NotImplemented:
Domain metadata not supported by LDAP', as we have for user assignments.

However, trying to delete an assignment of such type, we get:

Traceback (most recent call last):
  File "keystone/assignment/core.py", line 570, in delete_grant
    domain_id):
  File "keystone/common/manager.py", line 47, in wrapper
    return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 202, in wrapper
    return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 213, in wrapper
    return f(self, *args, **kwargs)
  File "keystone/identity/core.py", line 816, in list_users_in_group
    self._mark_domain_id_filter_satisfied(hints)
  File "keystone/identity/core.py", line 526, in _mark_domain_id_filter_satisfied
    for filter in hints.filters:
AttributeError: 'str' object has no attribute 'filters'

Pointers to the code are [1][2][3].
This occurs because we pass the domain_id (of type str) as if it were a hint
(of type driver_hints.Hints) in [1].

A patch to this bug should create a driver_hints.Hints() object with
domain_id as a filter of it and pass it as argument, instead of passing
domain_id directly.

[1] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/core.py#L569-L570
[2] 
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L813-L816
[3] 
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L526
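A minimal sketch of the proposed fix, using a stand-in Hints class; the real class lives in keystone.common.driver_hints and carries more state, and build_domain_hints is a hypothetical helper:

```python
class Hints:
    """Minimal stand-in for keystone.common.driver_hints.Hints; the
    .filters attribute is what the identity code iterates over."""
    def __init__(self):
        self.filters = []

    def add_filter(self, name, value, comparator='equals'):
        self.filters.append({'name': name, 'value': value,
                             'comparator': comparator})

def build_domain_hints(domain_id):
    # Wrap the plain domain_id string in a Hints object instead of
    # passing the string where a Hints instance is expected.
    hints = Hints()
    hints.add_filter('domain_id', domain_id)
    return hints

hints = build_domain_hints('default')
# Iterating hints.filters now works, where 'str'.filters raised before.
for f in hints.filters:
    print(f['name'], f['value'])
```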

** Affects: keystone
 Importance: Undecided
 Assignee: Samuel de Medeiros Queiroz (samuel-z)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373113

Title:
  Wrong exception when deleting a domain group assignment using a not
  domain-aware backend

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When deleting a domain group assignment using a backend that is not
  domain-aware, such as LDAP, we should get an exception like:
  'NotImplemented: Domain metadata not supported by LDAP', as we have
  for user assignments.

  However, trying to delete an assignment of such type, we get:

  Traceback (most recent call last):
    File "keystone/assignment/core.py", line 570, in delete_grant
      domain_id):
    File "keystone/common/manager.py", line 47, in wrapper
      return f(self, *args, **kwargs)
    File "keystone/identity/core.py", line 202, in wrapper
      return f(self, *args, **kwargs)
    File "keystone/identity/core.py", line 213, in wrapper
      return f(self, *args, **kwargs)
    File "keystone/identity/core.py", line 816, in list_users_in_group
      self._mark_domain_id_filter_satisfied(hints)
    File "keystone/identity/core.py", line 526, in _mark_domain_id_filter_satisfied
      for filter in hints.filters:
  AttributeError: 'str' object has no attribute 'filters'

  Pointers to the code are [1][2][3].
  This occurs because we pass the domain_id (of type str) as if it were a
  hint (of type driver_hints.Hints) in [1].

  A patch to this bug should create a driver_hints.Hints() object with
  domain_id as a filter of it and pass it as argument, instead of
  passing domain_id directly.

  [1] 
https://github.com/openstack/keystone/blob/master/keystone/assignment/core.py#L569-L570
  [2] 
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L813-L816
  [3] 
https://github.com/openstack/keystone/blob/master/keystone/identity/core.py#L526

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373113/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373106] Re: jogo and sdague are making me sad

2014-09-23 Thread Matt Riedemann
** Changed in: nova
   Status: New => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373106

Title:
  jogo and sdague are making me sad

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Just like when my parents would fight pre-separation...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373106] Re: jogo and sdague are making me sad

2014-09-23 Thread Dan Smith
** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373106

Title:
  jogo and sdague are making me sad

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  Just like when my parents would fight pre-separation...

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373106/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1360504] Re: tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON create credential unauthorized

2014-09-23 Thread Steve Martinelli
** Changed in: keystone
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1360504

Title:
  tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON
  create credential unauthorized

Status in OpenStack Identity (Keystone):
  Invalid
Status in Tempest:
  Invalid

Bug description:
  The bug appeared in a gate-tempest-dsvm-neutron-full run:
  https://review.openstack.org/#/c/47/5

  Full console.log here: http://logs.openstack.org/47/47/5/gate/gate-tempest-dsvm-neutron-full/f21c917/console.html

  Stacktrace:
  2014-08-22 10:49:35.168 | tempest.api.identity.admin.v3.test_credentials.CredentialsTestJSON.test_credentials_create_get_update_delete[gate,smoke]
  2014-08-22 10:49:35.168 | 

  2014-08-22 10:49:35.168 | 
  2014-08-22 10:49:35.168 | Captured traceback:
  2014-08-22 10:49:35.168 | ~~~
  2014-08-22 10:49:35.168 | Traceback (most recent call last):
  2014-08-22 10:49:35.168 |   File "tempest/api/identity/admin/v3/test_credentials.py", line 62, in test_credentials_create_get_update_delete
  2014-08-22 10:49:35.168 |     self.projects[0])
  2014-08-22 10:49:35.168 |   File "tempest/services/identity/v3/json/credentials_client.py", line 43, in create_credential
  2014-08-22 10:49:35.168 |     resp, body = self.post('credentials', post_body)
  2014-08-22 10:49:35.168 |   File "tempest/common/rest_client.py", line 219, in post
  2014-08-22 10:49:35.169 |     return self.request('POST', url, extra_headers, headers, body)
  2014-08-22 10:49:35.169 |   File "tempest/common/rest_client.py", line 431, in request
  2014-08-22 10:49:35.169 |     resp, resp_body)
  2014-08-22 10:49:35.169 |   File "tempest/common/rest_client.py", line 472, in _error_checker
  2014-08-22 10:49:35.169 |     raise exceptions.Unauthorized(resp_body)
  2014-08-22 10:49:35.169 | Unauthorized: Unauthorized
  2014-08-22 10:49:35.169 | Details: {"error": {"message": "The request you have made requires authentication. (Disable debug mode to suppress these details.)", "code": 401, "title": "Unauthorized"}}
  2014-08-22 10:49:35.169 | 
  2014-08-22 10:49:35.169 | 
  2014-08-22 10:49:35.169 | Captured pythonlogging:
  2014-08-22 10:49:35.170 | ~~~
  2014-08-22 10:49:35.170 | 2014-08-22 10:31:28,001 5831 INFO [tempest.common.rest_client] Request (CredentialsTestJSON:test_credentials_create_get_update_delete): 401 POST http://127.0.0.1:35357/v3/credentials 0.065s

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1360504/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373126] [NEW] API v2 Test (test_page_reverse) Has No Assertion

2014-09-23 Thread Ian Cordasco
Public bug reported:

In neutron/tests/unit/test_api_v2.py's APIv2TestCase.test_page_reverse,
there are essentially two test cases. There is no assertion made for
the second test case here:
https://github.com/openstack/neutron/blob/d3dfbf3e500a30d88f1c08664204dfc118a0154c/neutron/tests/unit/test_api_v2.py#L378..L383

Ostensibly no assertion was made because the pattern is to use
assert_called_once_with, but a mock can be reset; e.g., on line 377 you
could write

instance.get_networks.reset_mock()

Such that the pattern could continue to be followed.
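The reset pattern described above can be sketched with a plain mock; the page_reverse keyword argument here is illustrative, not neutron's actual call signature:

```python
from unittest import mock

instance = mock.Mock()

# First case: simulate the plugin call for the forward page order.
instance.get_networks(page_reverse=False)
instance.get_networks.assert_called_once_with(page_reverse=False)

# Reset the mock so assert_called_once_with stays meaningful for the
# second case, instead of silently asserting nothing.
instance.get_networks.reset_mock()

# Second case: reversed page order.
instance.get_networks(page_reverse=True)
instance.get_networks.assert_called_once_with(page_reverse=True)
print(instance.get_networks.call_count)  # 1 after the reset
```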

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373126

Title:
  API v2 Test (test_page_reverse) Has No Assertion

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In neutron/tests/unit/test_api_v2.py's APIv2TestCase.test_page_reverse,
  there are essentially two test cases. There is no assertion made for
  the second test case here:
  
https://github.com/openstack/neutron/blob/d3dfbf3e500a30d88f1c08664204dfc118a0154c/neutron/tests/unit/test_api_v2.py#L378..L383

  Ostensibly no assertion was made because the pattern is to use
  assert_called_once_with, but a mock can be reset; e.g., on line 377
  you could write

  instance.get_networks.reset_mock()

  Such that the pattern could continue to be followed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373153] [NEW] Unit tests for ML2 drivers use incorrect base class

2014-09-23 Thread Mohammad Banikazemi
Public bug reported:

In unit tests for several ML2 mechanism drivers (listed below), the
NeutronDbPluginV2TestCase class is used instead of Ml2PluginV2TestCase,
which is derived from it:

Unit tests for ML2 mechanism drivers in neutron/tests/unit/ml2:
   drivers/cisco/nexus/test_cisco_mech.py
   drivers/brocade/test_brocade_mechanism_driver.py
   test_mechanism_fslsdn.py 
   test_mechanism_ncs.py
   test_mechanism_odl.py

In other cases, such as the tests in drivers/test_bigswitch_mech.py, some
unit tests from the corresponding monolithic driver are reused, and
therefore the Ml2PluginV2TestCase class is not used.

This prevents specialization needed for testing ML2 specific
implementations.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373153

Title:
  Unit tests for ML2 drivers  use incorrect base class

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In unit tests for several ML2 mechanism drivers (listed below), the
  NeutronDbPluginV2TestCase class is used instead of Ml2PluginV2TestCase,
  which is derived from it:

  Unit tests for ML2 mechanism drivers in neutron/tests/unit/ml2:
 drivers/cisco/nexus/test_cisco_mech.py
 drivers/brocade/test_brocade_mechanism_driver.py
 test_mechanism_fslsdn.py 
 test_mechanism_ncs.py
 test_mechanism_odl.py

  In other cases, such as the tests in drivers/test_bigswitch_mech.py,
  some unit tests from the corresponding monolithic driver are reused,
  and therefore the Ml2PluginV2TestCase class is not used.

  This prevents specialization needed for testing ML2 specific
  implementations.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373153/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373157] [NEW] [sahara] Security groups should be supported

2014-09-23 Thread Sergey Lukjanov
Public bug reported:

We should support specifying security groups and enabling auto creation
of security groups for Sahara clusters.

It was implemented as part of the Sahara blueprint
https://blueprints.launchpad.net/sahara/+spec/cluster-secgroups

** Affects: horizon
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373157

Title:
  [sahara] Security groups should be supported

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  We should support specifying security groups and enabling auto
  creation of security groups for Sahara clusters.

  It was implemented as part of the Sahara blueprint
  https://blueprints.launchpad.net/sahara/+spec/cluster-secgroups

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373165] [NEW] Extra eyeball icon on Update User dialog

2014-09-23 Thread Gary W. Smith
Public bug reported:

Steps to reproduce:
1. Go to Identity > Users
2. Edit a user

The Update User dialog has an extra eyeball to the right of the
Description header.

See attached screenshot

** Affects: horizon
 Importance: Low
 Status: New


** Tags: bootstrap

** Attachment added: extra_eye.png
   
https://bugs.launchpad.net/bugs/1373165/+attachment/4212888/+files/extra_eye.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1373165

Title:
  Extra eyeball icon on Update User dialog

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:
  1. Go to Identity > Users
  2. Edit a user

  The Update User dialog has an extra eyeball to the right of the
  Description header.

  See attached screenshot

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1373165/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373167] [NEW] Infinite Recursion for __getattr__ in keystone.token.persistence.core.Manager due to dep injection

2014-09-23 Thread Morgan Fainberg
Public bug reported:

On initializing the token_api due to the way the dependency injector
works, an infinite recursion occurs at:

https://github.com/openstack/keystone/blob/1af24284bdc093dae4f027ade2ddb29656b676f0/keystone/token/persistence/core.py#L228-L236

This occurs when doing the lookup for token_provider_api causes an
issue. The solution simply requires verifying that the 'item' is not in
self._dependencies or self._optionals.


This stabilizes eventually after startup.
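A minimal sketch of the guard described above; Manager, its attributes, and the lookup logic are simplified stand-ins for keystone's dependency injector, not the real implementation:

```python
class Manager:
    """Sketch of a dependency-injected manager whose __getattr__ resolves
    only registered dependencies and must not re-enter itself."""
    def __init__(self):
        self._dependencies = {'token_provider_api': object()}
        self._optionals = {}

    def __getattr__(self, item):
        # Read via __dict__ so a missing attribute cannot trigger another
        # __getattr__ call -- that re-entry is what caused the recursion.
        deps = self.__dict__.get('_dependencies', {})
        opts = self.__dict__.get('_optionals', {})
        if item in deps:
            return deps[item]
        if item in opts:
            return opts[item]
        raise AttributeError(item)

m = Manager()
assert m.token_provider_api is not None  # resolved via injection
try:
    m.missing_api
except AttributeError:
    pass  # raised cleanly, no RecursionError
```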

** Affects: keystone
 Importance: Medium
 Assignee: Morgan Fainberg (mdrnstm)
 Status: Triaged

** Changed in: keystone
   Status: New => Triaged

** Changed in: keystone
   Importance: Undecided => Medium

** Changed in: keystone
Milestone: None => juno-rc1

** Changed in: keystone
 Assignee: (unassigned) => Morgan Fainberg (mdrnstm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373167

Title:
  Infinite Recursion for __getattr__ in
  keystone.token.persistence.core.Manager due to dep injection

Status in OpenStack Identity (Keystone):
  Triaged

Bug description:
  On initializing the token_api due to the way the dependency injector
  works, an infinite recursion occurs at:

  
https://github.com/openstack/keystone/blob/1af24284bdc093dae4f027ade2ddb29656b676f0/keystone/token/persistence/core.py#L228-L236

  This occurs when doing the lookup for token_provider_api causes an
  issue. The solution simply requires verifying that the 'item' is not
  in self._dependencies or self._optionals.

  
  This stabilizes eventually after startup.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373167/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373183] [NEW] Enable GBP service plugin with Juno

2014-09-23 Thread Robert Kukura
Public bug reported:

Due to the implementation of the Group-Based Policy blueprint
(https://review.openstack.org/#/c/89469) that was approved for Juno not
being merged, the Neutron Group Policy sub-team plans to deliver a GBP
service plugin as an add-on to the Juno version of Neutron via a
separate  StackForge repository, for easy consumption by early GBP
adopters.

Since the proposed patch to enable addition of service plugins without
modifying Neutron code (https://review.openstack.org/#/c/116996/) was
also not merged, it is not possible to use this GBP service plugin with
Neutron without modifying Neutron code. To avoid the need to modify
neutron code when using this service plugin, two no-risk lines of code
need to be added now to neutron.plugins.common.constants, as shown in
https://review.openstack.org/#/c/95900/31/neutron/plugins/common/constants.py,
for inclusion in the Juno Neutron release.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373183

Title:
  Enable GBP service plugin with Juno

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Due to the implementation of the Group-Based Policy blueprint
  (https://review.openstack.org/#/c/89469) that was approved for Juno
  not being merged, the Neutron Group Policy sub-team plans to deliver a
  GBP service plugin as an add-on to the Juno version of Neutron via a
  separate  StackForge repository, for easy consumption by early GBP
  adopters.

  Since the proposed patch to enable addition of service plugins without
  modifying Neutron code (https://review.openstack.org/#/c/116996/) was
  also not merged, it is not possible to use this GBP service plugin
  with Neutron without modifying Neutron code. To avoid the need to
  modify neutron code when using this service plugin, two no-risk lines
  of code need to be added now to neutron.plugins.common.constants, as
  shown in
  https://review.openstack.org/#/c/95900/31/neutron/plugins/common/constants.py,
  for inclusion in the Juno Neutron release.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373183/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373199] [NEW] _delete_port_security_group_bindings not needed in ML2 delete_port

2014-09-23 Thread Xurong Yang
Public bug reported:

_delete_port_security_group_bindings will delete the
SecurityGroupPortBinding rows associated with the port being deleted. But
since SecurityGroupPortBinding has a foreign key constraint on port with
cascade deleting defined, the bindings are deleted automatically when the
port is deleted; thus the _delete_port_security_group_bindings call in
ML2 delete_port is not needed and can be removed.
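The cascade behaviour can be illustrated with a toy schema in SQLite; the table and column names are simplified stand-ins, not neutron's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per-connection
conn.execute("CREATE TABLE ports (id TEXT PRIMARY KEY)")
conn.execute("""CREATE TABLE securitygroupportbindings (
                    port_id TEXT REFERENCES ports(id) ON DELETE CASCADE,
                    security_group_id TEXT)""")
conn.execute("INSERT INTO ports VALUES ('p1')")
conn.execute("INSERT INTO securitygroupportbindings VALUES ('p1', 'sg1')")

# Deleting the port removes the binding too -- no explicit
# "delete bindings" step is needed.
conn.execute("DELETE FROM ports WHERE id = 'p1'")
rows = conn.execute("SELECT * FROM securitygroupportbindings").fetchall()
print(rows)  # []
```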

** Affects: neutron
 Importance: Undecided
 Assignee: Xurong Yang (idopra)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Xurong Yang (idopra)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1373199

Title:
  _delete_port_security_group_bindings not needed in ML2 delete_port

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  _delete_port_security_group_bindings will delete the
  SecurityGroupPortBinding rows associated with the port being deleted.
  But since SecurityGroupPortBinding has a foreign key constraint on port
  with cascade deleting defined, the bindings are deleted automatically
  when the port is deleted; thus the
  _delete_port_security_group_bindings call in ML2 delete_port is not
  needed and can be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1373199/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1338857] Re: help_text for create subnet not transfer

2014-09-23 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1338857

Title:
  help_text for create subnet not transfer

Status in OpenStack Dashboard (Horizon):
  Expired

Bug description:
  The help_text for the create subnet form is wrong for allocation_pools.

  The lt and gt entities should be rendered as "<" and ">", but they are
  not in the .po file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1338857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1352193] Re: The nova API service can’t hand image metadata properly when metadata key contains uppercase letter

2014-09-23 Thread jichenjc
from following result

jichen@cloudcontroller:~$ glance image-update --property Key1=Value2 
--purge-props 64f067bd-ce03-4f04-a354-7188a4828e8e
+--+--+
| Property | Value|
+--+--+
| Property 'key1'  | Value2   |


I think we need to confirm with glance whether they only accept lower-case
key/value pairs; if that's the case, nova should be updated to fit that
restriction.


** Project changed: nova => glance

** Also affects: nova
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Confirmed

** Changed in: nova
 Assignee: (unassigned) => jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1352193

Title:
  The nova API service can’t hand image metadata properly when metadata
  key contains uppercase letter

Status in OpenStack Image Registry and Delivery Service (Glance):
  Confirmed
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  OS: centos 6.5 64bit
  openstack release: icehouse

  Steps to reproduce:
  1. Call the image metadata API of nova using the following command:
   curl -X 'POST' -v http://IP:PORT/v2/${tenant_id}/images/${image_id}/metadata -H "X-Auth-Token: $token" -H 'Content-type: application/json' -d '{"metadata":{"Key1":"Value1"}}' | python -mjson.tool
  2. Execute the above command again:
    curl -X 'POST' -v http://IP:PORT/v2/${tenant_id}/images/${image_id}/metadata -H "X-Auth-Token: $token" -H 'Content-type: application/json' -d '{"metadata":{"Key1":"Value1"}}' | python -mjson.tool

  Expected result:
  In step 1, the json response should be:
    {"metadata":{"Key1":"Value1"}}
  In step 2, the json response should be:
    {"metadata":{"Key1":"Value1"}}

  Observed result:
  In step 1, the json response is:
    {"metadata":{"key1":"Value1"}}
  In step 2, the json response is:
    {"metadata":{"key1":"Value1,Value1"}}

  Besides, we can observe that each image metadata key in the
  image_properties table of the glance DB is converted to lowercase, even
  if the key the user entered contains uppercase letters.
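A stand-in helper reproducing the reported symptom; this is a model of the observed behaviour, not glance code:

```python
def set_image_property(props, key, value):
    # Model of the observed glance behaviour: property keys are
    # lowercased on store, and a repeated POST appends to the existing
    # value instead of replacing it.
    key = key.lower()
    if key in props:
        props[key] = props[key] + "," + value
    else:
        props[key] = value
    return props

props = {}
set_image_property(props, "Key1", "Value1")  # first POST
set_image_property(props, "Key1", "Value1")  # second POST appends
print(props)  # {'key1': 'Value1,Value1'}
```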

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1352193/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373232] [NEW] The ldap driver needs to bubble up some ldap exceptions

2014-09-23 Thread Mahesh Sawaiker
Public bug reported:

LDAP driver can bubble up some exceptions as 400 errors.
Example ldap.CONSTRAINT_VIOLATION and ldap.UNWILLING

def update_user(self, user_id, user):
    self.user.check_allow_update()
    if 'id' in user and user['id'] != user_id:
        raise exception.ValidationError(_('Cannot change user ID'))
    old_obj = self.user.get(user_id)
    # Defect 118381: user name update in LDAP should be allowed.
    # if 'name' in user and old_obj.get('name') != user['name']:
    #     raise exception.Conflict(_('Cannot change user name'))

    # user = utils.hash_ldap_user_password(user)
    if self.user.enabled_mask:
        self.user.mask_enabled_attribute(user)
    try:
        self.user.update(user_id, user, old_obj)
    except ldap.CONSTRAINT_VIOLATION as e:
        if 'info' in e[0]:
            raise exception.ValidationError(e[0]['info'])
        else:
            raise AssertionError(_('Error updating user'))
    return self.user.get_filtered(user_id)
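
A minimal sketch of the translation pattern being proposed, using
stand-in classes for ldap.CONSTRAINT_VIOLATION and
keystone.exception.ValidationError (the real python-ldap and keystone
modules are not imported here):

```python
class CONSTRAINT_VIOLATION(Exception):
    """Stand-in for ldap.CONSTRAINT_VIOLATION; python-ldap packs the
    server's diagnostics into args[0], a dict that may carry an 'info' key."""

class ValidationError(Exception):
    """Stand-in for keystone.exception.ValidationError (rendered as HTTP 400)."""

def translate_ldap_error(func, *args, **kwargs):
    """Run a backend call, converting a constraint violation into a
    400-style error that carries the server's message when available."""
    try:
        return func(*args, **kwargs)
    except CONSTRAINT_VIOLATION as e:
        info = e.args[0].get('info') if e.args else None
        raise ValidationError(info or 'Error updating user')

def failing_update():
    # Simulates the LDAP server rejecting the modify operation.
    raise CONSTRAINT_VIOLATION({'info': 'password policy violation'})

try:
    translate_ldap_error(failing_update)
except ValidationError as e:
    print(e)  # password policy violation
```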

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1373232

Title:
  The ldap driver needs to bubble up some ldap exceptions

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The LDAP driver can bubble up some LDAP exceptions as 400 errors, for
  example ldap.CONSTRAINT_VIOLATION and ldap.UNWILLING.

  def update_user(self, user_id, user):
      self.user.check_allow_update()
      if 'id' in user and user['id'] != user_id:
          raise exception.ValidationError(_('Cannot change user ID'))
      old_obj = self.user.get(user_id)
      # Defect 118381: user name update in LDAP should be allowed.
      # if 'name' in user and old_obj.get('name') != user['name']:
      #     raise exception.Conflict(_('Cannot change user name'))

      # user = utils.hash_ldap_user_password(user)
      if self.user.enabled_mask:
          self.user.mask_enabled_attribute(user)
      try:
          self.user.update(user_id, user, old_obj)
      except ldap.CONSTRAINT_VIOLATION as e:
          if 'info' in e[0]:
              raise exception.ValidationError(e[0]['info'])
          else:
              raise AssertionError(_('Error updating user'))
      return self.user.get_filtered(user_id)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1373232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373230] [NEW] start/stop instance in EC2 API shouldn't return active/stopped status immediately

2014-09-23 Thread Alex Xu
Public bug reported:

We always see this error in the gate:

http://logs.openstack.org/73/122873/1/gate/gate-tempest-dsvm-neutron-
full/e5a2bf6/logs/screen-n-cpu.txt.gz?level=ERROR#_2014-09-21_05_18_23_709

2014-09-21 05:18:23.709 ERROR oslo.messaging.rpc.dispatcher [req-52e7fee5
-65ee-4c4d-abcc-099b29352846 InstanceRunTest-2053569555
InstanceRunTest-179702724] Exception during message handling: Unexpected
task state: expecting [u'powering-off'] but the actual state is deleting

Checking the EC2 API test in tempest,

def test_run_stop_terminate_instance(self):
    # EC2 run, stop and terminate instance
    image_ami = self.ec2_client.get_image(self.images["ami"]
                                          ["image_id"])
    reservation = image_ami.run(kernel_id=self.images["aki"]["image_id"],
                                ramdisk_id=self.images["ari"]["image_id"],
                                instance_type=self.instance_type)
    rcuk = self.addResourceCleanUp(self.destroy_reservation, reservation)

    for instance in reservation.instances:
        LOG.info("state: %s", instance.state)
        if instance.state != "running":
            self.assertInstanceStateWait(instance, "running")

    for instance in reservation.instances:
        instance.stop()
        LOG.info("state: %s", instance.state)
        if instance.state != "stopped":
            self.assertInstanceStateWait(instance, "stopped")

    self._terminate_reservation(reservation, rcuk)

The test waits for the instance to become stopped. But if you check the EC2 API code,
https://github.com/openstack/nova/blob/master/nova/api/ec2/cloud.py#L1075

it always returns the stopped status immediately, even though the
start/stop action is actually an async call.
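
A minimal sketch of the kind of polling a caller (or the fix) would need
if stop were treated as asynchronous. The helper name and the toy backend
below are assumptions for illustration, not tempest or nova code:

```python
import time

def wait_for_state(get_state, desired, timeout=60.0, interval=0.01):
    """Poll get_state() until it returns `desired` or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state == desired:
            return state
        time.sleep(interval)
    raise TimeoutError("instance never reached state %r" % desired)

# Toy backend: the instance passes through 'stopping' before 'stopped',
# mimicking the asynchronous power-off described in the bug report.
_states = iter(["running", "stopping", "stopping", "stopped"])

def get_state():
    return next(_states, "stopped")

final = wait_for_state(get_state, "stopped")
print(final)  # stopped
```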

** Affects: nova
 Importance: Undecided
 Assignee: Alex Xu (xuhj)
 Status: New


** Tags: api

** Tags added: api

** Changed in: nova
 Assignee: (unassigned) = Alex Xu (xuhj)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373230

Title:
  start/stop instance in EC2 API shouldn't return active/stopped status
  immediately

Status in OpenStack Compute (Nova):
  New

Bug description:
  We always see this error in the gate:

  http://logs.openstack.org/73/122873/1/gate/gate-tempest-dsvm-neutron-
  full/e5a2bf6/logs/screen-n-cpu.txt.gz?level=ERROR#_2014-09-21_05_18_23_709

  2014-09-21 05:18:23.709 ERROR oslo.messaging.rpc.dispatcher [req-
  52e7fee5-65ee-4c4d-abcc-099b29352846 InstanceRunTest-2053569555
  InstanceRunTest-179702724] Exception during message handling:
  Unexpected task state: expecting [u'powering-off'] but the actual
  state is deleting

  Checking the EC2 API test in tempest,

  def test_run_stop_terminate_instance(self):
      # EC2 run, stop and terminate instance
      image_ami = self.ec2_client.get_image(self.images["ami"]
                                            ["image_id"])
      reservation = image_ami.run(kernel_id=self.images["aki"]["image_id"],
                                  ramdisk_id=self.images["ari"]["image_id"],
                                  instance_type=self.instance_type)
      rcuk = self.addResourceCleanUp(self.destroy_reservation, reservation)

      for instance in reservation.instances:
          LOG.info("state: %s", instance.state)
          if instance.state != "running":
              self.assertInstanceStateWait(instance, "running")

      for instance in reservation.instances:
          instance.stop()
          LOG.info("state: %s", instance.state)
          if instance.state != "stopped":
              self.assertInstanceStateWait(instance, "stopped")

      self._terminate_reservation(reservation, rcuk)

  The test waits for the instance to become stopped. But if you check the EC2 API code,
  https://github.com/openstack/nova/blob/master/nova/api/ec2/cloud.py#L1075

  it always returns the stopped status immediately, even though the
  start/stop action is actually an async call.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1373238] [NEW] load extension lead to error calling 'volumes': 'NoneType' object has no attribute 'controller' for v3 api

2014-09-23 Thread jichenjc
Public bug reported:

Adding a new plugin leads to the following error. The root cause is that
'servers' was loaded after 'volumes', so inherits.controller is None:

if resource.inherits:
    inherits = self.resources.get(resource.inherits)
    if not resource.controller:
        resource.controller = inherits.controller


ERROR [stevedore.extension] error calling 'volumes': 'NoneType' object has no attribute 'controller'
ERROR [stevedore.extension] 'NoneType' object has no attribute 'controller'
Traceback (most recent call last):
  File "/home/jichen/git/nova/.venv/local/lib/python2.7/site-packages/stevedore/extension.py", line 248, in _invoke_one_plugin
    response_callback(func(e, *args, **kwds))
  File "/home/jichen/git/nova/nova/api/openstack/__init__.py", line 376, in _register_resources
    resource.controller = inherits.controller
AttributeError: 'NoneType' object has no attribute 'controller'
DEBUG [nova.api.openstack] Running _register_resources on <nova.api.openstack.compute.plugins.v3.servers.Servers object at 0xb484d6c>
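
One way the load-order dependence could be avoided is a two-pass
registration that resolves `inherits` only after every resource has been
collected. The sketch below is hypothetical (class and function names are
assumed for illustration), not nova's actual fix:

```python
class Resource:
    def __init__(self, name, controller=None, inherits=None):
        self.name = name
        self.controller = controller
        self.inherits = inherits

def register_resources(resources):
    """Two-pass registration: collect everything first, resolve
    inheritance second, so 'volumes' can inherit from 'servers' even
    when 'servers' is loaded later."""
    table = {r.name: r for r in resources}      # pass 1: collect all
    for r in table.values():                    # pass 2: resolve inherits
        if r.inherits and not r.controller:
            parent = table.get(r.inherits)
            if parent is None:
                raise LookupError("unknown parent resource %r" % r.inherits)
            r.controller = parent.controller
    return table

# 'volumes' appears before 'servers': the load order that triggered the bug.
table = register_resources([
    Resource("volumes", inherits="servers"),
    Resource("servers", controller="ServersController"),
])
print(table["volumes"].controller)  # ServersController
```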

** Affects: nova
 Importance: Undecided
 Assignee: jichenjc (jichenjc)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = jichenjc (jichenjc)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1373238

Title:
  load extension lead to error calling 'volumes': 'NoneType' object has
  no attribute 'controller' for v3 api

Status in OpenStack Compute (Nova):
  New

Bug description:
  Adding a new plugin leads to the following error. The root cause is that
  'servers' was loaded after 'volumes', so inherits.controller is None:

  if resource.inherits:
      inherits = self.resources.get(resource.inherits)
      if not resource.controller:
          resource.controller = inherits.controller

  
  ERROR [stevedore.extension] error calling 'volumes': 'NoneType' object has no attribute 'controller'
  ERROR [stevedore.extension] 'NoneType' object has no attribute 'controller'
  Traceback (most recent call last):
    File "/home/jichen/git/nova/.venv/local/lib/python2.7/site-packages/stevedore/extension.py", line 248, in _invoke_one_plugin
      response_callback(func(e, *args, **kwds))
    File "/home/jichen/git/nova/nova/api/openstack/__init__.py", line 376, in _register_resources
      resource.controller = inherits.controller
  AttributeError: 'NoneType' object has no attribute 'controller'
  DEBUG [nova.api.openstack] Running _register_resources on <nova.api.openstack.compute.plugins.v3.servers.Servers object at 0xb484d6c>

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1373238/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp