[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices

2014-01-21 Thread Alexander Ignatov
** Also affects: python-openstackclient
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262424

Title:
  Files without code should not contain copyright notices

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  Fix Committed
Status in OpenStack Message Queuing Service (Marconi):
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Neutron:
  In Progress
Status in OpenStack Command Line Client:
  New
Status in Trove client binding:
  In Progress
Status in Tempest:
  In Progress
Status in Trove - Database as a Service:
  In Progress

Bug description:
  Due to a recent policy change in HACKING
  (http://docs.openstack.org/developer/hacking/#openstack-licensing),
  empty files should no longer contain copyright notices.
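The policy is mechanically checkable. A sketch of such a check (a hypothetical helper, not the actual hacking rule): a file whose non-blank lines are all comments contains no code, so any copyright notice in it gets flagged.

```python
# Sketch of a check matching the policy above (hypothetical helper, not
# the real hacking rule): flag files that mention "Copyright" but contain
# no actual code lines.
def has_needless_copyright(source):
    lines = source.splitlines()
    # A "code" line is any non-blank line that is not a comment.
    has_code = any(l.strip() and not l.lstrip().startswith("#")
                   for l in lines)
    has_copyright = any("copyright" in l.lower() for l in lines)
    return has_copyright and not has_code
```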

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1262424/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262424] Re: Files without code should not contain copyright notices

2014-01-21 Thread Alexander Ignatov
** Also affects: marconi
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262424

Title:
  Files without code should not contain copyright notices

Status in OpenStack Telemetry (Ceilometer):
  Fix Committed
Status in Cinder:
  Fix Committed
Status in Orchestration API (Heat):
  Fix Committed
Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in Ironic (Bare Metal Provisioning):
  Fix Committed
Status in OpenStack Message Queuing Service (Marconi):
  New
Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  Fix Committed
Status in Python client library for Cinder:
  In Progress
Status in Python client library for Neutron:
  In Progress
Status in OpenStack Command Line Client:
  New
Status in Trove client binding:
  In Progress
Status in Tempest:
  In Progress
Status in Trove - Database as a Service:
  In Progress

Bug description:
  Due to a recent policy change in HACKING
  (http://docs.openstack.org/developer/hacking/#openstack-licensing),
  empty files should no longer contain copyright notices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1262424/+subscriptions



[Yahoo-eng-team] [Bug 1271102] [NEW] If live_migration failed, VM stay in state MIGRATING

2014-01-21 Thread Gregory Cunha
Public bug reported:

During live_migration, if an InvalidSharedStorage exception is raised, the VM stays in the MIGRATING state.
The sequence of calls between services is as follows (the request is a live_migration from Compute src to Compute dest):
Scheduler (rpc call) -> Compute dest: check_can_live_migrate_destination
Compute dest (rpc call) -> Compute src: check_can_live_migrate_source

The InvalidSharedStorage exception raised by Compute src is deserialised by Compute dest as InvalidSharedStorage_Remote.
The InvalidSharedStorage_Remote exception raised by Compute dest is deserialised by the Scheduler as RemoteError.
So the status rollback is not done by the Scheduler.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271102

Title:
  If live_migration failed, VM stay in state MIGRATING

Status in OpenStack Compute (Nova):
  New

Bug description:
  During live_migration, if an InvalidSharedStorage exception is raised, the VM stays in the MIGRATING state.
  The sequence of calls between services is as follows (the request is a live_migration from Compute src to Compute dest):
  Scheduler (rpc call) -> Compute dest: check_can_live_migrate_destination
  Compute dest (rpc call) -> Compute src: check_can_live_migrate_source

  The InvalidSharedStorage exception raised by Compute src is deserialised by Compute dest as InvalidSharedStorage_Remote.
  The InvalidSharedStorage_Remote exception raised by Compute dest is deserialised by the Scheduler as RemoteError.
  So the status rollback is not done by the Scheduler.
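The two-hop failure can be sketched in a few lines (illustrative stand-ins, not the real nova/oslo.messaging code): each deserialisation step produces a new exception type, and after the second hop an `except InvalidSharedStorage` clause in the Scheduler no longer matches, so the rollback never runs.

```python
# Illustrative sketch of the two-hop exception wrapping described above
# (stand-in names, not nova/oslo code).

class InvalidSharedStorage(Exception):
    pass

class RemoteError(Exception):
    """Generic type a caller gets for an unrecognised remote exception."""

def first_hop(exc):
    # Compute dest deserialises the source's exception as a dynamically
    # created <Name>_Remote subclass -- still catchable as the original.
    remote_cls = type(exc.__class__.__name__ + "_Remote",
                      (exc.__class__,), {})
    return remote_cls(str(exc))

def second_hop(exc):
    # The Scheduler does not recognise the *_Remote type at all, so it
    # collapses into a generic RemoteError -- no longer catchable as
    # InvalidSharedStorage, hence the MIGRATING state is never rolled back.
    return RemoteError(str(exc))
```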

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271102/+subscriptions



[Yahoo-eng-team] [Bug 1271095] Re: nova-compute start failed because of instance_type not found

2014-01-21 Thread wangpan
** Changed in: nova
   Status: New => Invalid

** Description changed:

- I got this issue in my stable havana nova,
- the reproduce steps are:
- 1. create a new flavor
- 2. boot an instance with this flavor
- 3. delete the flavor
- 4. restart the nova-compute service
- 
- the trace stack in compute.log:
- 2014-01-21 17:03:58.395 45856 ERROR nova.openstack.common.threadgroup [-] Instance type 3537 could not be found.
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 117, in wait
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     x.wait()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/threadgroup.py", line 49, in wait
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 168, in wait
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 187, in switch
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 194, in main
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/service.py", line 65, in run_service
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     service.start()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 164, in start
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     self.manager.pre_start_hook()
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 963, in pre_start_hook
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     self.update_available_resource(nova.context.get_admin_context())
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 5617, in update_available_resource
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     rt.update_available_resource(context)
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 246, in inner
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return f(*args, **kwargs)
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 306, in update_available_resource
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     self._update_usage_from_instances(resources, instances)
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 637, in _update_usage_from_instances
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     self._update_usage_from_instance(resources, instance)
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/compute/resource_tracker.py", line 564, in _update_usage_from_instance
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     instance['instance_type_id'])
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/conductor/api.py", line 110, in instance_type_get
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return self._manager.instance_type_get(context, instance_type_id)
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 1129, in wrapper
- 2014-01-21 17:03:58.395 45856 TRACE nova.openstack.common.threadgroup     return

[Yahoo-eng-team] [Bug 1271108] [NEW] LBaaS tests refactoring

2014-01-21 Thread Tatiana Mazur
Public bug reported:

Some LBaaS tests have redundant code and need to be refactored

** Affects: horizon
 Importance: Wishlist
 Assignee: Tatiana Mazur (tmazur)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Mazur (tmazur)

** Changed in: horizon
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271108

Title:
  LBaaS tests refactoring

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Some LBaaS tests have redundant code and need to be refactored

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271108/+subscriptions



[Yahoo-eng-team] [Bug 1198813] Re: Duplicated glance image service

2014-01-21 Thread Ghe Rivero
** Also affects: python-glanceclient
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1198813

Title:
  Duplicated glance image service

Status in Cinder:
  New
Status in Ironic (Bare Metal Provisioning):
  Triaged
Status in OpenStack Compute (Nova):
  New
Status in Python client library for Glance:
  New

Bug description:
  This code is duplicated in nova, cinder, and ironic. It should be removed
  in favour of the common version in python-glanceclient once that code lands.

  https://review.openstack.org/#/c/33327/

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1198813/+subscriptions



[Yahoo-eng-team] [Bug 1271146] [NEW] V2 API instance_actions does not catch InstanceNotFound exceptions

2014-01-21 Thread Christopher Yeoh
Public bug reported:

The V2 API instance_actions extension (addFloatingIp) does not catch
InstanceNotFound exceptions, which causes a traceback in the nova API log.

** Affects: nova
 Importance: Undecided
 Assignee: Christopher Yeoh (cyeoh-0)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271146

Title:
  V2 API instance_actions does not catch InstanceNotFound exceptions

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  The V2 API instance_actions extension (addFloatingIp) does not catch
  InstanceNotFound exceptions, which causes a traceback in the nova API
  log.
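The usual fix pattern can be sketched as follows (a hedged illustration; the class and helper names are stand-ins, not the actual nova patch): catch the domain exception at the API boundary and translate it into a clean 404 instead of letting it escape as a 500 with a traceback.

```python
# Illustrative sketch of translating a domain exception into an HTTP 404
# at the API boundary (stand-in classes, not the real nova/webob types).

class InstanceNotFound(Exception):
    pass

class HTTPNotFound(Exception):
    status = 404

def add_floating_ip(instance_id, get_instance):
    try:
        return get_instance(instance_id)
    except InstanceNotFound:
        # Surface a clean 404 to the client; no traceback in the API log.
        raise HTTPNotFound("Instance %s could not be found." % instance_id)
```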

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271146/+subscriptions



[Yahoo-eng-team] [Bug 1271151] [NEW] Unable to include openstack_dashboard.settings in other top-level django settings module

2014-01-21 Thread Timur Sufiev
Public bug reported:

In order to augment openstack_dashboard.settings with settings for a
specific django application (namely, Murano) while editing neither
openstack_dashboard.settings (it contains only settings related to
vanilla openstack dashboards) nor
/etc/openstack_dashboards/local_settings.py (it is for customization by
admins), the following scheme was devised: change DJANGO_SETTINGS_MODULE
in the Apache config/WSGI python module to 'muranodashboard.settings',
which contains all Murano-specific settings and imports
openstack_dashboard.settings, which in turn imports
local.local_settings. This approach seemed fine until I coded and ran
it: it immediately failed with an 'ImproperlyConfigured: The SECRET_KEY
setting must not be empty.' exception.

After spending some time in the debugger, I found that during
evaluation of the 'openstack_dashboard.settings' module the
'django.conf.Settings' class is instantiated, and it requires the
'SECRET_KEY' parameter to be present in the settings module referenced
by the DJANGO_SETTINGS_MODULE environment variable (which, in my case,
is 'muranodashboard.settings'). But if I try to avoid this error by
defining my own SECRET_KEY in 'muranodashboard.settings', I end up with
two different SECRET_KEY values (one from muranodashboard.settings, the
other hanging somewhere in horizon's machinery from
local.local_settings / openstack_dashboard.settings), which is not good
either.

This behaviour clearly seems like a bug: openstack_dashboard.settings
shouldn't invoke functions that rely on SECRET_KEY being already
defined in the DJANGO_SETTINGS_MODULE module.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271151

Title:
  Unable to include openstack_dashboard.settings in other top-level
  django settings module

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In order to augment openstack_dashboard.settings with settings for a
  specific django application (namely, Murano) while editing neither
  openstack_dashboard.settings (it contains only settings related to
  vanilla openstack dashboards) nor
  /etc/openstack_dashboards/local_settings.py (it is for customization by
  admins), the following scheme was devised: change DJANGO_SETTINGS_MODULE
  in the Apache config/WSGI python module to 'muranodashboard.settings',
  which contains all Murano-specific settings and imports
  openstack_dashboard.settings, which in turn imports
  local.local_settings. This approach seemed fine until I coded and ran
  it: it immediately failed with an 'ImproperlyConfigured: The SECRET_KEY
  setting must not be empty.' exception.

  After spending some time in the debugger, I found that during
  evaluation of the 'openstack_dashboard.settings' module the
  'django.conf.Settings' class is instantiated, and it requires the
  'SECRET_KEY' parameter to be present in the settings module referenced
  by the DJANGO_SETTINGS_MODULE environment variable (which, in my case,
  is 'muranodashboard.settings'). But if I try to avoid this error by
  defining my own SECRET_KEY in 'muranodashboard.settings', I end up with
  two different SECRET_KEY values (one from muranodashboard.settings, the
  other hanging somewhere in horizon's machinery from
  local.local_settings / openstack_dashboard.settings), which is not good
  either.

  This behaviour clearly seems like a bug: openstack_dashboard.settings
  shouldn't invoke functions that rely on SECRET_KEY being already
  defined in the DJANGO_SETTINGS_MODULE module.
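The import-order trap reduces to a few lines (a simulation with stand-in names, no Django involved): django.conf demands SECRET_KEY the moment openstack_dashboard.settings is evaluated, so a wrapping settings module must define some SECRET_KEY before triggering that import, and accept that local_settings later replaces it.

```python
# Simulation of the import-order problem described above (stand-in
# objects; django.conf and the real settings modules are not involved).
import types

def build_settings(define_secret_first):
    # Stand-in for the muranodashboard.settings module being assembled.
    mod = types.SimpleNamespace()
    if define_secret_first:
        mod.SECRET_KEY = "placeholder"
    # openstack_dashboard.settings effectively does this at import time:
    # touch django.conf settings, which demands SECRET_KEY already exists.
    if not getattr(mod, "SECRET_KEY", ""):
        raise RuntimeError("The SECRET_KEY setting must not be empty.")
    # local.local_settings later supplies the real value.
    mod.SECRET_KEY = "real-key-from-local_settings"
    return mod
```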

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271151/+subscriptions



[Yahoo-eng-team] [Bug 1271150] [NEW] Appropriate 5XX error should be raised if glance-api terminated abruptly during image download

2014-01-21 Thread Aswad Rangnekar
Public bug reported:

Using commit 2c4bd695652a628758eb56cb36394940a855d696:
an empty response is sent to a user downloading an image if glance-api stops abruptly.

Steps to reproduce:
1. Put a sleep of 10 sec in show():
https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L443
2. Download a glance image:
$ glance image-download <image-uuid>
3. Kill glance-api during its 10 sec sleep.
4. The following output is displayed on the terminal:
$ ''

IMO, if this scenario occurs, an appropriate 5XX error should be raised.

** Affects: glance
 Importance: Undecided
 Assignee: Aswad Rangnekar (aswad-r)
 Status: New


** Tags: ntt

** Changed in: glance
 Assignee: (unassigned) => Aswad Rangnekar (aswad-r)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1271150

Title:
  Appropriate 5XX error should be raised if glance-api terminated
  abruptly during image download

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Using commit 2c4bd695652a628758eb56cb36394940a855d696:
  an empty response is sent to a user downloading an image if glance-api stops abruptly.

  Steps to reproduce:
  1. Put a sleep of 10 sec in show():
  https://github.com/openstack/glance/blob/master/glance/api/v1/images.py#L443
  2. Download a glance image:
  $ glance image-download <image-uuid>
  3. Kill glance-api during its 10 sec sleep.
  4. The following output is displayed on the terminal:
  $ ''

  IMO, if this scenario occurs, an appropriate 5XX error should be
  raised.
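Until the server side returns a proper 5XX, the truncation is at least detectable client-side by comparing bytes received against the Content-Length header. A sketch with an assumed file-like response interface (`resp` with a `headers` dict), not the actual glanceclient API:

```python
# Hedged sketch: detect a truncated/empty image download by checking the
# body length against Content-Length. `resp` is an assumed interface, not
# the glanceclient response object.
def read_checked(resp):
    body = resp.read()
    expected = int(resp.headers.get("Content-Length", len(body)))
    if len(body) != expected:
        raise IOError("truncated download: got %d of %d bytes"
                      % (len(body), expected))
    return body
```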

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1271150/+subscriptions



[Yahoo-eng-team] [Bug 1271171] [NEW] Neutron allows overlapping IPs in the same tenant

2014-01-21 Thread Yair Fried
Public bug reported:

Neutron allows creation of multiple networks with the same CIDR in the
same tenant.

How to reproduce:
1. create 2 networks in the same tenant
2. for each, create a subnet with cidr 10.0.0.0/24

Expected result:
the second subnet should raise an error

Actual result:
the subnet is created with the same cidr

ubuntu@yfried-devstack:~/devstack$ neutron subnet-list -c tenant_id -c cidr -c name --network
+----------------------------------+-------------+----------------+
| tenant_id                        | cidr        | name           |
+----------------------------------+-------------+----------------+
| 66293febf7164c849b694a8d3f14cc1a | 10.0.0.0/24 | private-subnet |
| 66293febf7164c849b694a8d3f14cc1a | 10.0.0.0/24 | subnet1        |
| 66293febf7164c849b694a8d3f14cc1a | 10.0.0.0/24 | subnet2        |
+----------------------------------+-------------+----------------+

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271171

Title:
  Neutron allows overlapping IPs in the same tenant

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron allows creation of multiple networks with the same CIDR in the
  same tenant.

  How to reproduce:
  1. create 2 networks in the same tenant
  2. for each, create a subnet with cidr 10.0.0.0/24

  Expected result:
  the second subnet should raise an error

  Actual result:
  the subnet is created with the same cidr

  ubuntu@yfried-devstack:~/devstack$ neutron subnet-list -c tenant_id -c cidr -c name --network
  +----------------------------------+-------------+----------------+
  | tenant_id                        | cidr        | name           |
  +----------------------------------+-------------+----------------+
  | 66293febf7164c849b694a8d3f14cc1a | 10.0.0.0/24 | private-subnet |
  | 66293febf7164c849b694a8d3f14cc1a | 10.0.0.0/24 | subnet1        |
  | 66293febf7164c849b694a8d3f14cc1a | 10.0.0.0/24 | subnet2        |
  +----------------------------------+-------------+----------------+
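The validation the reporter expects amounts to a per-tenant CIDR overlap check; the subnet math itself is a few lines with the stdlib. A sketch of the check only, not Neutron's actual DB-layer validation:

```python
# Sketch of a per-tenant CIDR overlap check using the stdlib (illustrative;
# Neutron's real validation would live in its DB/plugin layer).
import ipaddress

def overlaps(existing_cidrs, new_cidr):
    """Return True if new_cidr overlaps any CIDR already in the tenant."""
    new = ipaddress.ip_network(new_cidr)
    return any(new.overlaps(ipaddress.ip_network(c)) for c in existing_cidrs)
```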

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271171/+subscriptions



[Yahoo-eng-team] [Bug 904307] Re: Application/server name not available in service logs

2014-01-21 Thread Bogdan Dobrelya
Added all affected core projects so as not to forget to update them in
I-3 as well.

** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: neutron
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/904307

Title:
  Application/server name not available in service logs

Status in Cinder:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  New
Status in OpenStack Identity (Keystone):
  New
Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  If Nova is configured to use syslog-based logging and there are
  multiple services running on a system, it becomes difficult to
  identify the service that emitted a given log record. This can be
  resolved if the log record also contains the name of the
  service/binary that generated it. This will also be useful in an
  OpenStack system using a centralized syslog-based logging mechanism.
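A minimal sketch of the requested behaviour using only stdlib logging (the oslo fix is analogous but lives in openstack.common; `make_logger` and the format string are illustrative): prefix each record with the service binary's name so co-located services stay distinguishable.

```python
# Sketch: tag every log record with the emitting binary's name
# (illustrative helper; a real deployment would use SysLogHandler).
import logging
import os
import sys

def make_logger(binary=None):
    binary = binary or os.path.basename(sys.argv[0])
    handler = logging.StreamHandler()  # SysLogHandler in production
    handler.setFormatter(
        logging.Formatter(binary + ": %(levelname)s %(message)s"))
    log = logging.getLogger(binary)
    log.addHandler(handler)
    return log
```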

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/904307/+subscriptions



[Yahoo-eng-team] [Bug 1271194] [NEW] Checkboxes get displayed even if no table actions are available

2014-01-21 Thread Julie Pichon
Public bug reported:

Even if no table actions are available, checkboxes are displayed in the
first column which can be confusing, since nothing can actually be done
using them (see screenshot).

Example for how to reproduce:
1. As an admin, create a shared network
2. Create a subnet for it
3. From the Project dashboard, navigate to the Networks panel and select the 
new shared network that was created

Actual result:
4. The subnet is displayed, including a checkbox that doesn't serve any purpose

Expected result:
4. The subnet is displayed; there is no need for checkboxes if there are no action buttons for the table

** Affects: horizon
 Importance: Low
 Status: New


** Tags: ux

** Attachment added: checkboxes_without_action.png
   
https://bugs.launchpad.net/bugs/1271194/+attachment/3953562/+files/checkboxes_without_action.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271194

Title:
  Checkboxes get displayed even if no table actions are available

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Even if no table actions are available, checkboxes are displayed in
  the first column which can be confusing, since nothing can actually be
  done using them (see screenshot).

  Example for how to reproduce:
  1. As an admin, create a shared network
  2. Create a subnet for it
  3. From the Project dashboard, navigate to the Networks panel and select the 
new shared network that was created

  Actual result:
  4. The subnet is displayed, including a checkbox that doesn't serve any 
purpose

  Expected result:
  4. The subnet is displayed; there is no need for checkboxes if there
  are no action buttons for the table
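The expected behaviour reduces to a simple rule (an illustrative sketch, not Horizon's actual DataTable code): emit the multi-select column only when the table has actions for it to drive.

```python
# Sketch of the expected table behaviour (stand-in function, not Horizon's
# DataTable implementation): no actions means no checkbox column.
def columns_for(table_columns, table_actions):
    cols = list(table_columns)
    if table_actions:  # checkboxes are useless without actions
        cols.insert(0, "multi_select")
    return cols
```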

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271194/+subscriptions



[Yahoo-eng-team] [Bug 1259711] Re: Volume type or nova flavor extra_spec containing '/' can't be deleted

2014-01-21 Thread Rushi Agrawal
I skipped the first line of the bug description, which says there is
already a bug registered against Nova.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1259711

Title:
  Volume type or nova flavor extra_spec containing '/' can't be deleted

Status in Cinder:
  Confirmed

Bug description:
  Written based on Nova bug 1256119.

  It is possible to set an extra spec for a volume type containing a '/'
  that then cannot be deleted.

  
  $ cinder type-create test
  +--------------------------------------+------+
  |                  ID                  | Name |
  +--------------------------------------+------+
  | ff8b49fb-3883-428e-a942-61fa34633c23 | test |
  +--------------------------------------+------+
  $ cinder type-key test set 'a/b=c'
  $ cinder extra-specs-list
  +--------------------------------------+------+----------------+
  |                  ID                  | Name |  extra_specs   |
  +--------------------------------------+------+----------------+
  | ff8b49fb-3883-428e-a942-61fa34633c23 | test | {u'a/b': u'c'} |
  +--------------------------------------+------+----------------+
  $ cinder --debug type-key test unset 'a/b'

  ...snip...

  REQ: curl -i http://192.168.122.100:8776/v2/b3d6b9a5b8f04df08bf33714fe99d52a/types/ff8b49fb-3883-428e-a942-61fa34633c23/extra_specs/a/b -X DELETE -H "X-Auth-Project-Id: demo" -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token: MI...ITF"

  DEBUG:cinderclient.client:
  REQ: curl -i http://192.168.122.100:8776/v2/b3d6b9a5b8f04df08bf33714fe99d52a/types/ff8b49fb-3883-428e-a942-61fa34633c23/extra_specs/a/b -X DELETE -H "X-Auth-Project-Id: demo" -H "User-Agent: python-cinderclient" -H "Accept: application/json" -H "X-Auth-Token: MI...ITF"

  RESP: [404] CaseInsensitiveDict({'date': 'Tue, 10 Dec 2013 22:07:01 GMT', 'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'})
  RESP BODY: 404 Not Found

  The resource could not be found.

  DEBUG:cinderclient.client:RESP: [404] CaseInsensitiveDict({'date': 'Tue, 10 Dec 2013 22:07:01 GMT', 'content-length': '52', 'content-type': 'text/plain; charset=UTF-8'})
  RESP BODY: 404 Not Found

  The resource could not be found.

  ERROR: Not found (HTTP 404)
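The symptom is consistent with the key being interpolated into the URL path un-encoded, so 'a/b' becomes two path segments and the server looks for a spec named 'a'. A client-side sketch of percent-encoding the key (whether the server then accepts %2F depends on its WSGI routing, which is an assumption here):

```python
# Sketch: percent-encode the extra-spec key so 'a/b' stays one URL path
# segment (illustrative helper; whether the server honours %2F is an
# assumption about its routing, not a confirmed fix).
from urllib.parse import quote

def extra_spec_url(base, type_id, key):
    return "%s/types/%s/extra_specs/%s" % (base, type_id, quote(key, safe=""))
```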

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1259711/+subscriptions



[Yahoo-eng-team] [Bug 1266962] Re: Remove set_time_override in timeutils

2014-01-21 Thread Roman Podoliaka
** Changed in: tuskar
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1266962

Title:
  Remove set_time_override in timeutils

Status in OpenStack Telemetry (Ceilometer):
  In Progress
Status in Cinder:
  Invalid
Status in Gantt:
  New
Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in Ironic (Bare Metal Provisioning):
  In Progress
Status in OpenStack Identity (Keystone):
  In Progress
Status in Manila:
  New
Status in OpenStack Message Queuing Service (Marconi):
  Triaged
Status in OpenStack Compute (Nova):
  Triaged
Status in Oslo - a Library of Common OpenStack Code:
  Triaged
Status in Messaging API for OpenStack:
  Fix Committed
Status in Python client library for Keystone:
  Fix Committed
Status in Python client library for Nova:
  Fix Committed
Status in Tuskar:
  Fix Released

Bug description:
  set_time_override was written as a helper function to mock utcnow in
  unittests.

  However we now use mock or fixture to mock our objects so
  set_time_override has become obsolete.

  We should first remove all usage of set_time_override from downstream
  projects before deleting it from oslo.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceilometer/+bug/1266962/+subscriptions



[Yahoo-eng-team] [Bug 1271220] [NEW] Updated information should be shown after attaching/detaching volume operation against instance

2014-01-21 Thread David Jia
Public bug reported:

Updated information can be shown after instance rename, reboot and son
on, but when attach/detach a volume to virtual server, cannot get the
updated virtual server info.

The operation step:
1) Get the last updated time '2013-12-09T08:23:40Z' of vs.
2) Attach a volume to vs.
3) Use the following url and get the updated virtual server.
The request url: 
/v2/8501b77195204880bebc6bb4806c43aa/servers/detail?changes-since=2013-12-09T08:23:40Z
The resp body:
<?xml version='1.0' encoding='UTF-8'?>
<servers xmlns:os-extended-volumes="http://docs.openstack.org/compute/ext/extended_volumes/api/v1.1" xmlns:OS-EXT-IPS="http://docs.openstack.org/compute/ext/extended_ips/api/v1.1" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:OS-DCF="http://docs.openstack.org/compute/ext/disk_config/api/v1.1" xmlns:OS-EXT-IPS-MAC="http://docs.openstack.org/compute/ext/extended_ips_mac/api/v1.1" xmlns:OS-EXT-SRV-ATTR="http://docs.openstack.org/compute/ext/extended_status/api/v1.1" xmlns:OS-SRV-USG="http://docs.openstack.org/compute/ext/server_usage/api/v1.1" xmlns:OS-EXT-STS="http://docs.openstack.org/compute/ext/extended_status/api/v1.1" xmlns:OS-EXT-AZ="http://docs.openstack.org/compute/ext/extended_availability_zone/api/v2" xmlns="http://docs.openstack.org/compute/api/v1.1"/>

I cannot see the updated virtual server in the resp body. I checked the
instances table and found the 'updated_at' field is not modified after
attaching a volume to the vs, but it is modified after renaming or
rebooting an instance. I think it should return the updated vs info
because attaching a volume does indeed change the virtual server.
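
The reported behaviour can be modelled with a toy sketch (class and field names are illustrative, not actual Nova code); the fix the reporter asks for amounts to bumping updated_at on attach/detach so a changes-since query can see the operation:

```python
import datetime

# Toy model of the reported behaviour: attach_volume must also bump
# updated_at, otherwise a changes-since query never notices the change.
class Instance:
    def __init__(self):
        self.volumes = []
        self.updated_at = datetime.datetime(2013, 12, 9, 8, 23, 40)

    def attach_volume(self, volume_id):
        self.volumes.append(volume_id)
        # The missing step in the report: record that the vs changed.
        self.updated_at = datetime.datetime.utcnow()


vm = Instance()
before = vm.updated_at
vm.attach_volume('vol-1')
assert vm.updated_at > before  # a changes-since filter now sees the change
```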

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271220

Title:
  Updated information should be shown after attaching/detaching volume
  operation against instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  Updated information can be shown after an instance rename, reboot, and
  so on, but when attaching/detaching a volume to a virtual server, we
  cannot get the updated virtual server info.

  The operation steps:
  1) Get the last updated time '2013-12-09T08:23:40Z' of the vs.
  2) Attach a volume to the vs.
  3) Use the following url to get the updated virtual server.
  The request url: 
/v2/8501b77195204880bebc6bb4806c43aa/servers/detail?changes-since=2013-12-09T08:23:40Z
  The resp body:
  <?xml version='1.0' encoding='UTF-8'?>
  <servers xmlns:os-extended-volumes="http://docs.openstack.org/compute/ext/extended_volumes/api/v1.1" xmlns:OS-EXT-IPS="http://docs.openstack.org/compute/ext/extended_ips/api/v1.1" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:OS-DCF="http://docs.openstack.org/compute/ext/disk_config/api/v1.1" xmlns:OS-EXT-IPS-MAC="http://docs.openstack.org/compute/ext/extended_ips_mac/api/v1.1" xmlns:OS-EXT-SRV-ATTR="http://docs.openstack.org/compute/ext/extended_status/api/v1.1" xmlns:OS-SRV-USG="http://docs.openstack.org/compute/ext/server_usage/api/v1.1" xmlns:OS-EXT-STS="http://docs.openstack.org/compute/ext/extended_status/api/v1.1" xmlns:OS-EXT-AZ="http://docs.openstack.org/compute/ext/extended_availability_zone/api/v2" xmlns="http://docs.openstack.org/compute/api/v1.1"/>

  I cannot see the updated virtual server in the resp body. I checked the
  instances table and found the 'updated_at' field is not modified after
  attaching a volume to the vs, but it is modified after renaming or
  rebooting an instance. I think it should return the updated vs info
  because attaching a volume does indeed change the virtual server.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271220/+subscriptions



[Yahoo-eng-team] [Bug 1211973] Re: Error notification payload not a dict

2014-01-21 Thread Mark Washenberger
** Changed in: glance
   Status: In Progress => Won't Fix

** Changed in: glance
Milestone: icehouse-2 => None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1211973

Title:
  Error notification payload not a dict

Status in OpenStack Image Registry and Delivery Service (Glance):
  Won't Fix

Bug description:
  According to the NotificationSystem wiki page [1], the payload element of the 
notification envelope should be a python dict.
  Certain error priority notifications in Glance are emitted with a string as a 
payload. This breaks the contract defined
  in the docs.

  An example of this can be seen in this notification: 
https://gist.github.com/ramielrowe/6225570
  This notification is generated as part of 
glance.api.v1.upload_utils.upload_data_to_store(...)

  The expected payload would be something like:
  'payload': {'message': 'Received HTTP error while uploading image xx'}
  But it may also be a good idea to include further details about the
exception, such as the exception's class.

  [1]
  
https://wiki.openstack.org/wiki/NotificationSystem#General_Notification_Message_Format
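
The contrast between the two payload shapes can be sketched as follows (the 'exception_class' key is only a suggestion, not an existing field):

```python
# What Glance emits today for some error notifications: a bare string,
# which violates the documented dict contract.
bad_payload = 'Received HTTP error while uploading image xx'

# What the wiki contract expects: a dict, optionally with extra detail.
good_payload = {
    'message': bad_payload,
    'exception_class': 'HTTPError',   # proposed extra detail, hypothetical key
}

assert isinstance(good_payload, dict)      # the documented contract
assert not isinstance(bad_payload, dict)   # what breaks consumers today
```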

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1211973/+subscriptions



[Yahoo-eng-team] [Bug 1271230] [NEW] vmware nsx: floating ip to fail to become reachable

2014-01-21 Thread Armando Migliaccio
Public bug reported:

The following stacktrace has been observed recently in tempest runs for
the VMware NSX Neutron plugin

http://paste.openstack.org/show/61635/

This does not seem to be random and happens on master.

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: nicira

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271230

Title:
  vmware nsx: floating ip to fail to become reachable

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The following stacktrace has been observed recently in tempest runs
  for the VMware NSX Neutron plugin

  http://paste.openstack.org/show/61635/

  This does not seem to be random and happens on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271230/+subscriptions



[Yahoo-eng-team] [Bug 1271235] [NEW] User Defined Regions not supported

2014-01-21 Thread Steve Martinelli
Public bug reported:

According to the API Spec, we should be able to create a region with a specific 
id:
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#create-region-with-specific-id-put-regionsuser_defined_region_id

I don't believe this will work with the code I'm seeing at: 
https://github.com/openstack/keystone/blob/master/keystone/catalog/controllers.py#L150

A unique id is assigned every time.
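
A toy sketch of the controller behaviour the spec calls for (names are illustrative, not actual Keystone code): honour a caller-supplied id from PUT /v3/regions/{region_id}, and only generate one for POST /v3/regions:

```python
import uuid

# Illustrative sketch, not the real Keystone controller: a user-defined
# region id must win over the generated one.
def create_region(region, region_id=None):
    if region_id is not None:
        region.setdefault('id', region_id)     # PUT with a specific id
    region.setdefault('id', uuid.uuid4().hex)  # POST: assign a unique id
    return region


assert create_region({}, region_id='my-region')['id'] == 'my-region'
assert len(create_region({})['id']) == 32
```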

Also, looking at the tests that were submitted with the code drop, there were
no tests for user-defined ids:
https://review.openstack.org/#/c/63570/7/keystone/tests/test_v3_catalog.py

** Affects: keystone
 Importance: Undecided
 Status: Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1271235

Title:
  User Defined Regions not supported

Status in OpenStack Identity (Keystone):
  Triaged

Bug description:
  According to the API Spec, we should be able to create a region with a 
specific id:
  
https://github.com/openstack/identity-api/blob/master/openstack-identity-api/v3/src/markdown/identity-api-v3.md#create-region-with-specific-id-put-regionsuser_defined_region_id

  I don't believe this will work with the code I'm seeing at: 
  
https://github.com/openstack/keystone/blob/master/keystone/catalog/controllers.py#L150

  A unique id is assigned every time.

  Also, looking at the tests that were submitted with the code drop, there
  were no tests for user-defined ids:
  https://review.openstack.org/#/c/63570/7/keystone/tests/test_v3_catalog.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1271235/+subscriptions



[Yahoo-eng-team] [Bug 1271231] [NEW] securitygroups table is created twice while migrating database from havana to icehouse

2014-01-21 Thread Jakub Libosvar
Public bug reported:

There is an attempt to create a new securitygroups table by
49f5e553f61f_ml2_security_groups.py while the table already exists
(created by 3cb5d900c5de_security_groups.py)

INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade havana -> e197124d4b9, add unique 
constraint to members
INFO  [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4, Add a 
unique constraint on (agent_type, host) columns to prevent a race
condition when an agent entry is 'upserted'.
INFO  [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a, 
nsx_mappings
INFO  [alembic.migration] Running upgrade 50e86cb2637a -> ed93525fd003, 
bigswitch_quota
INFO  [alembic.migration] Running upgrade ed93525fd003 -> 49f5e553f61f, 
security_groups
Traceback (most recent call last):
  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 143, in main
    CONF.command.func(config, CONF.command.name)
  File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 80, in do_upgrade_downgrade
    do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/opt/stack/new/neutron/neutron/db/migration/cli.py", line 59, in do_alembic_command
    getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/alembic/command.py", line 124, in upgrade
    script.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 199, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 199, in load_python_file
    module = load_module(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 55, in load_module
    mod = imp.load_source(module_id, path, fp)
  File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 105, in <module>
    run_migrations_online()
  File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/env.py", line 89, in run_migrations_online
    options=build_options())
  File "<string>", line 7, in run_migrations
  File "/usr/local/lib/python2.7/dist-packages/alembic/environment.py", line 652, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/migration.py", line 225, in run_migrations
    change(**kw)
  File "/opt/stack/new/neutron/neutron/db/migration/alembic_migrations/versions/49f5e553f61f_ml2_security_groups.py", line 53, in upgrade
    sa.PrimaryKeyConstraint('id')
  File "<string>", line 7, in create_table
  File "/usr/local/lib/python2.7/dist-packages/alembic/operations.py", line 647, in create_table
    self._table(name, *columns, **kw)
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 149, in create_table
    self._exec(schema.CreateTable(table))
  File "/usr/local/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in _exec
    conn.execute(construct, *multiparams, **params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1449, in execute
    params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1542, in _execute_ddl
    compiled
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1698, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1691, in _execute_context
    context)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 331, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 'securitygroups' already exists") '\nCREATE TABLE securitygroups (\n\ttenant_id VARCHAR(255), \n\tid VARCHAR(36) NOT NULL, \n\tname VARCHAR(255), \n\tdescription VARCHAR(255), \n\tPRIMARY KEY (id)\n)\n\n' ()
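
The usual guard for this class of failure can be sketched as follows (illustrative only, not the real migration): the 49f5e553f61f upgrade should skip its CREATE TABLE when the table was already created by 3cb5d900c5de.

```python
# Toy sketch of a create-if-absent guard; in a real alembic migration this
# check would consult the database inspector rather than a set.
existing_tables = {'securitygroups'}   # created by the earlier migration

def create_table_if_absent(name):
    if name in existing_tables:
        return 'skipped'               # avoids the OperationalError above
    existing_tables.add(name)
    return 'created'


assert create_table_if_absent('securitygroups') == 'skipped'
assert create_table_if_absent('securitygrouprules') == 'created'
```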

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271231

Title:
  securitygroups table is created twice while migrating database from
  havana to icehouse

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There is an attempt to create a new securitygroups table by
  49f5e553f61f_ml2_security_groups.py while the table already exists
  (created by 3cb5d900c5de_security_groups.py)

  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade havana -> e197124d4b9, add 

[Yahoo-eng-team] [Bug 1271230] Re: vmware nsx: floating ip to fail to become reachable

2014-01-21 Thread Armando Migliaccio
** Also affects: tempest
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271230

Title:
  vmware nsx: floating ip to fail to become reachable

Status in OpenStack Neutron (virtual network service):
  New
Status in Tempest:
  New

Bug description:
  The following stacktrace has been observed recently in tempest runs
  for the VMware NSX Neutron plugin

  http://paste.openstack.org/show/61635/

  This does not seem to be random and happens on master.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271230/+subscriptions



[Yahoo-eng-team] [Bug 1271273] [NEW] Policy testing checks could be simplified in test_v3_filters

2014-01-21 Thread Henry Nash
Public bug reported:

test_v3_filters resets the policy file repeatedly to perform its various
checks.  Using the current code, if you do this within a given test more
than once it seems to break, leading to more complex test code.  E.g.:

# TODO(henry-nash) Ideally the policy setting would happen in the
# entity for-loop below, using a string substitution of 'plural'.
# However, that appears to lead to unreliable policy checking - i.e.
# multiple calls to _set_policy doesn't work properly.

self._set_policy({"identity:list_users": [],
                  "identity:list_groups": [],
                  "identity:list_projects": []})

for entity in ['user', 'group', 'project']:
  Do some action that needs the policy above for that type of entity

whereas it would be better to reset the policy every time in the loop.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1271273

Title:
  Policy testing checks could be simplified in test_v3_filters

Status in OpenStack Identity (Keystone):
  New

Bug description:
  test_v3_filters resets the policy file repeatedly to perform its
  various checks.  Using the current code, if you do this within a given
  test more than once it seems to break, leading to more complex test
  code.  E.g.:

  # TODO(henry-nash) Ideally the policy setting would happen in the
  # entity for-loop below, using a string substitution of 'plural'.
  # However, that appears to lead to unreliable policy checking - i.e.
  # multiple calls to _set_policy doesn't work properly.

  self._set_policy({"identity:list_users": [],
                    "identity:list_groups": [],
                    "identity:list_projects": []})

  for entity in ['user', 'group', 'project']:
Do some action that needs the policy above for that type of 
entity

  whereas it would be better to reset the policy every time in the loop.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1271273/+subscriptions



[Yahoo-eng-team] [Bug 1271276] [NEW] db migration of table brocadenetworks incorrectly specifies id as int

2014-01-21 Thread Shiv Haris
Public bug reported:

Incorrect column specification of brocadenetworks id, should be
string(36) instead of int

** Affects: neutron
 Importance: Undecided
 Assignee: Shiv Haris (shh)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Shiv Haris (shh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271276

Title:
  db migration of table brocadenetworks incorrectly specifies id as int

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Incorrect column specification of brocadenetworks id, should be
  string(36) instead of int

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271276/+subscriptions



[Yahoo-eng-team] [Bug 1254555] Re: tenant does not see network that is routable from tenant-visible network until neutron-server is restarted

2014-01-21 Thread Robert Collins
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1254555

Title:
  tenant does not see network that is routable from tenant-visible
  network until neutron-server is restarted

Status in OpenStack Neutron (virtual network service):
  Confirmed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  In TripleO we have a setup script[1] that does this as an admin:

  neutron net-create default-net --shared
  neutron subnet-create --ip_version 4 --allocation-pool 
start=10.0.0.2,end=10.255.255.254 --gateway 10.0.0.1 10.0.0.0/8 
$ID_OF_default_net
  neutron router-create default-router
  neutron router-interface-add default-router $ID_OF_10.0.0.0/8_subnet
  neutron net-create ext-net --router:external=True
  neutron subnet-create ext-net $FLOATING_CIDR --disable-dhcp --allocation-pool 
start=$FLOATING_START,end=$FLOATING_END
  neutron router-gateway-set default-router ext-net

  I would then expect that all users will be able to see ext-net using
  'neutron net-list' and that they will be able to create floating IPs
  on ext-net.

  As of this commit:

  commit c655156b98a0a25568a3745e114a0bae41bc49d1
  Merge: 75ac6c1 c66212c
  Author: Jenkins jenk...@review.openstack.org
  Date:   Sun Nov 24 10:02:04 2013 +

  Merge MidoNet: Added support for the admin_state_up flag

  I see that the ext-net network is not available after I do all of the
  above router/subnet creation. It does become available to tenants as
  soon as I restart neutron-server.

  [1] https://git.openstack.org/cgit/openstack/tripleo-
  incubator/tree/scripts/setup-neutron

  I can reproduce this at will using the TripleO devtest process on real
  hardware. I have not yet reproduced on VMs using the 'devtest'
  workflow.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1254555/+subscriptions



[Yahoo-eng-team] [Bug 1271311] [NEW] Neutron should disallow a CIDR of /32

2014-01-21 Thread Paul Ward
Public bug reported:

NeutronDbPluginV2._validate_allocation_pools() currently does basic
checks to be sure you don't have an invalid subnet specified.  However,
one thing missing is checking for a CIDR of /32.  Such a subnet would
only have one valid IP in it, which would be consumed by the gateway,
thus making this network a dead network since no IPs are left over to be
allocated to VMs.

I propose a change to disallow start_ip == end_ip in
NeutronDbPluginV2._validate_allocation_pools() to cover the CIDR of /32
case.
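
The proposed check can be sketched as follows (the function name is illustrative, not the actual Neutron code):

```python
import ipaddress

# Sketch of the proposed validation: reject allocation pools where
# start_ip == end_ip, which is all a /32 leaves once the gateway has
# consumed the subnet's single address.
def validate_allocation_pool(start_ip, end_ip):
    start = ipaddress.ip_address(start_ip)
    end = ipaddress.ip_address(end_ip)
    if start >= end:
        raise ValueError("allocation pool %s-%s has no usable addresses"
                         % (start_ip, end_ip))


validate_allocation_pool('10.0.0.2', '10.0.0.254')  # a normal pool passes
try:
    validate_allocation_pool('10.0.0.1', '10.0.0.1')  # the /32 case
except ValueError as exc:
    print('rejected:', exc)
```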

** Affects: neutron
 Importance: Undecided
 Assignee: Paul Ward (wpward)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Paul Ward (wpward)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271311

Title:
  Neutron should disallow a CIDR of /32

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  NeutronDbPluginV2._validate_allocation_pools() currently does basic
  checks to be sure you don't have an invalid subnet specified.
  However, one thing missing is checking for a CIDR of /32.  Such a
  subnet would only have one valid IP in it, which would be consumed by
  the gateway, thus making this network a dead network since no IPs are
  left over to be allocated to VMs.

  I propose a change to disallow start_ip == end_ip in
  NeutronDbPluginV2._validate_allocation_pools() to cover the CIDR of
  /32 case.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271311/+subscriptions



[Yahoo-eng-team] [Bug 1254462] Re: flake8 test fail in Horizon

2014-01-21 Thread Floren
Tested again; the behavior described in this ticket is not reproducible.

I am changing the status to Invalid.

** Changed in: horizon
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1254462

Title:
  flake8 test fail in Horizon

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When I execute ./run_test.sh, several errors appear when the flake8
  tests are run:

  Running flake8 ...
  ./openstack_dashboard/test/api_tests/vpnaas_tests.py:234:17: E126 
continuation line over-indented for hanging indent
  ./openstack_dashboard/dashboards/project/loadbalancers/tests.py:421:17: E126 
continuation line over-indented for hanging indent
  ./openstack_dashboard/dashboards/project/vpn/tests.py:292:17: E126 
continuation line over-indented for hanging indent
  ./openstack_dashboard/dashboards/project/vpn/tests.py:359:17: E126 
continuation line over-indented for hanging indent
  ./openstack_dashboard/dashboards/project/vpn/tests.py:423:17: E126 
continuation line over-indented for hanging indent
  ./openstack_dashboard/dashboards/project/vpn/tests.py:533:17: E126 
continuation line over-indented for hanging indent
  Tests failed.

  I did this on a fresh devstack install.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1254462/+subscriptions



[Yahoo-eng-team] [Bug 1271330] [NEW] Customize the flavor's label

2014-01-21 Thread George Peristerakis
Public bug reported:

In the create instance form, provide a way to customize the flavor's
option labels

** Affects: horizon
 Importance: Undecided
 Assignee: George Peristerakis (george-peristerakis)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => George Peristerakis (george-peristerakis)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271330

Title:
  Customize the flavor's label

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  In the create instance form, provide a way to customize the flavor's
  option labels

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271330/+subscriptions



[Yahoo-eng-team] [Bug 1271331] [NEW] unit test failure in gate nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

2014-01-21 Thread Joe Gordon
Public bug reported:

We are occasionally seeing the test
nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping fail in the
gate due to


Traceback (most recent call last):
  File "nova/tests/db/test_sqlite.py", line 53, in test_big_int_mapping
    output, _ = utils.execute(get_schema_cmd, shell=True)
  File "nova/utils.py", line 166, in execute
    return processutils.execute(*cmd, **kwargs)
  File "nova/openstack/common/processutils.py", line 168, in execute
    result = obj.communicate()
  File "/usr/lib/python2.7/subprocess.py", line 754, in communicate
    return self._communicate(input)
  File "/usr/lib/python2.7/subprocess.py", line 1314, in _communicate
    stdout, stderr = self._communicate_with_select(input)
  File "/usr/lib/python2.7/subprocess.py", line 1438, in _communicate_with_select
    data = os.read(self.stdout.fileno(), 1024)
OSError: [Errno 11] Resource temporarily unavailable


logstash query: message:"FAIL: 
nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping"

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5kYi50ZXN0X3NxbGl0ZS5UZXN0U3FsaXRlLnRlc3RfYmlnX2ludF9tYXBwaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzMzk1MTU1NDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: oslo
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271331

Title:
  unit test failure in gate
  nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  We are occasionally seeing the test
  nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping fail in the
  gate due to

  
  Traceback (most recent call last):
    File "nova/tests/db/test_sqlite.py", line 53, in test_big_int_mapping
      output, _ = utils.execute(get_schema_cmd, shell=True)
    File "nova/utils.py", line 166, in execute
      return processutils.execute(*cmd, **kwargs)
    File "nova/openstack/common/processutils.py", line 168, in execute
      result = obj.communicate()
    File "/usr/lib/python2.7/subprocess.py", line 754, in communicate
      return self._communicate(input)
    File "/usr/lib/python2.7/subprocess.py", line 1314, in _communicate
      stdout, stderr = self._communicate_with_select(input)
    File "/usr/lib/python2.7/subprocess.py", line 1438, in _communicate_with_select
      data = os.read(self.stdout.fileno(), 1024)
  OSError: [Errno 11] Resource temporarily unavailable

  
  logstash query: message:"FAIL: 
nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5kYi50ZXN0X3NxbGl0ZS5UZXN0U3FsaXRlLnRlc3RfYmlnX2ludF9tYXBwaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzMzk1MTU1NDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271331/+subscriptions



[Yahoo-eng-team] [Bug 1271333] [NEW] nova boot permits admin user to boot instances on networks belong to other tenants

2014-01-21 Thread Lars Kellogg-Stedman
Public bug reported:

I have deployed OpenStack using RedHat's packstack tool by running
packstack --allinone, which results in the following tenants:

(keystone_admin)# keystone tenant-list
+--+--+-+
|id|   name   | enabled |
+--+--+-+
| 6b027a9f4d5e48128a7bca823bdb3d2b |  admin   |   True  |
| 04981100ee194c9697b6a05c6415f9d5 | alt_demo |   True  |
| 8639e75e13c742c093746c8e70d5cef8 |   demo   |   True  |
| 0d4f0baadf914584a70633c35a89a072 | services |   True  |
+--+--+-+

There are two networks defined in my environment.  As the admin user, I
can see both of them...

(keystone_admin)# neutron net-list

+--+-+--+
| id   | name| subnets  
|

+--+-+--+
| 9039c750-de15-4358-8a38-5807a7fc5c35 | private | 
4930ef6a-b03c-43c5-99a7-e003762bc4be 10.0.0.0/24 |
| fdf2804f-8fae-4753-9330-484e7e061c1f | public  | 
fc69c07c-6ff7-4bc5-9984-d1f8e7e55887 172.24.4.224/28 |

+--+-+--+

...even though the private network is owned by the demo tenant:

(keystone_admin)# neutron net-show private
+---+--+
| Field | Value|
+---+--+
| admin_state_up| True |
| id| 9039c750-de15-4358-8a38-5807a7fc5c35 |
| name  | private  |
| provider:network_type | local|
| provider:physical_network |  |
| provider:segmentation_id  |  |
| router:external   | False|
| shared| False|
| status| ACTIVE   |
| subnets   | 4930ef6a-b03c-43c5-99a7-e003762bc4be |
| tenant_id | 8639e75e13c742c093746c8e70d5cef8 |
+---+--+

Because this network is visible to admin, I can do this:

nova boot ... --nic net-id=8639e75e13c742c093746c8e70d5cef8 test0

Which works great...until I reboot.  At this point, attempts to
interact with the instance (e.g., using nova reboot) result in the
following exception:

 File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 1029, in _build_network_info_model
   subnets)
 File "/usr/lib/python2.6/site-packages/nova/network/neutronv2/api.py", line 959, in _nw_info_build_network
   label=network_name,
UnboundLocalError: local variable 'network_name' referenced before assignment

This happens because in nova/network/neutronv2/api.py, in
API._get_available_networks(), there is this:

search_opts = {'tenant_id': project_id, 'shared': False}
nets = neutron.list_networks(**search_opts).get('networks', [])

Here, nova is explicitly filtering on project_id, which means that
networks that do not belong to the admin tenant will not be
discovered.  In _nw_info_build_network(), this causes problems in the
initial loop:

def _nw_info_build_network(self, port, networks, subnets):
# NOTE(danms): This loop can't fail to find a network since we
# filtered ports to only the ones matching networks in our parent
for net in networks:
if port['network_id'] == net['id']:
network_name = net['name']
break

Because port['network_id'] = '9039c750-de15-4358-8a38-5807a7fc5c35',
but that network was never discovered in _get_available_networks, this
loops exits without setting network_name, causing the above exception.

I think that the initial nova boot command should have failed, but
also that this situation ought to be recoverable (currently, because
of this error, the instance is effectively unmaintainable -- it can be
neither rebooted nor deleted).
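
A toy reproduction of the failure, with the defensive initialisation that would make the situation recoverable (a sketch, not actual Nova code):

```python
# Toy version of the loop quoted above: initialising network_name means a
# port whose network was filtered out of the admin tenant's network list
# yields None instead of raising UnboundLocalError.
def nw_info_build_network(port, networks):
    network_name = None                  # the missing initialisation
    for net in networks:
        if port['network_id'] == net['id']:
            network_name = net['name']
            break
    return network_name


port = {'network_id': '9039c750-de15-4358-8a38-5807a7fc5c35'}
# The demo tenant's network is absent from the admin-filtered list:
assert nw_info_build_network(port, [{'id': 'other', 'name': 'public'}]) is None
```

Callers would still need to decide what a None label means, but the instance at least becomes maintainable again.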

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271333

Title:
  nova boot permits admin user to boot instances on networks belong to
  other tenants

Status in OpenStack Compute (Nova):
  New

Bug description:
  I have 

[Yahoo-eng-team] [Bug 1271331] Re: unit test failure in gate nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

2014-01-21 Thread Russell Bryant
** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271331

Title:
  unit test failure in gate
  nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping

Status in OpenStack Compute (Nova):
  New
Status in Oslo - a Library of Common OpenStack Code:
  New

Bug description:
  We are occasionally seeing the test
  nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping fail in the
  gate due to

  
  Traceback (most recent call last):
    File "nova/tests/db/test_sqlite.py", line 53, in test_big_int_mapping
      output, _ = utils.execute(get_schema_cmd, shell=True)
    File "nova/utils.py", line 166, in execute
      return processutils.execute(*cmd, **kwargs)
    File "nova/openstack/common/processutils.py", line 168, in execute
      result = obj.communicate()
    File "/usr/lib/python2.7/subprocess.py", line 754, in communicate
      return self._communicate(input)
    File "/usr/lib/python2.7/subprocess.py", line 1314, in _communicate
      stdout, stderr = self._communicate_with_select(input)
    File "/usr/lib/python2.7/subprocess.py", line 1438, in _communicate_with_select
      data = os.read(self.stdout.fileno(), 1024)
  OSError: [Errno 11] Resource temporarily unavailable

  
  logstash query: message:"FAIL: nova.tests.db.test_sqlite.TestSqlite.test_big_int_mapping"

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRkFJTDogbm92YS50ZXN0cy5kYi50ZXN0X3NxbGl0ZS5UZXN0U3FsaXRlLnRlc3RfYmlnX2ludF9tYXBwaW5nXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6ImFsbCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzMzk1MTU1NDcsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=
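The failing call is os.read() on a non-blocking pipe that momentarily has no data, which surfaces as EAGAIN ("Resource temporarily unavailable"). A minimal sketch of tolerating that, written for Python 3 where EAGAIN maps to BlockingIOError; this is an assumption-laden illustration, not necessarily how processutils addressed it:

```python
import errno
import os
import time

def read_with_retry(fd, size=1024, attempts=5, delay=0.01):
    """Retry os.read() when a non-blocking fd has no data yet
    (EAGAIN, i.e. 'Resource temporarily unavailable')."""
    for _ in range(attempts):
        try:
            return os.read(fd, size)
        except BlockingIOError:  # OSError with errno EAGAIN/EWOULDBLOCK
            time.sleep(delay)
    raise OSError(errno.EAGAIN, 'no data after %d attempts' % attempts)

r, w = os.pipe()
os.set_blocking(r, False)  # reproduce the non-blocking read from the trace
os.write(w, b'schema')
print(read_with_retry(r))  # b'schema'
```

The alternative, which subprocess itself uses, is to select()/poll() on the descriptor and only read when it is reported readable.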

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271331/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271342] [NEW] Resource Usage Overview shows only the id and not the name

2014-01-21 Thread Graciela Salcedo Mancilla
Public bug reported:

When using Resource Usage Overview, all the graphs show the id, which is
not user-friendly. It would be better to show the instance name or the
image name instead of only the id.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271342

Title:
  Resource Usage Overview shows only the id and not the name

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using Resource Usage Overview, all the graphs show the id, which
  is not user-friendly. It would be better to show the instance name or
  the image name instead of only the id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271342/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271344] [NEW] neutron-dhcp-agent not getting updates after ~24h running

2014-01-21 Thread Robert Collins
Public bug reported:

Hi, last two days on ci-overcloud.tripleo.org, the neutron-dhcp-agent
has stopped updating DHCP entries - new VMs don't get IP addresses until
neutron-dhcp-agent is restarted.

Haven't seen anything obvious in the logs yet, happy to set specific log
levels or whatever to try and debug this.

** Affects: neutron
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271344

Title:
  neutron-dhcp-agent not getting updates after ~24h running

Status in OpenStack Neutron (virtual network service):
  New
Status in tripleo - openstack on openstack:
  Triaged

Bug description:
  Hi, last two days on ci-overcloud.tripleo.org, the neutron-dhcp-agent
  has stopped updating DHCP entries - new VMs don't get IP addresses
  until neutron-dhcp-agent is restarted.

  Haven't seen anything obvious in the logs yet, happy to set specific
  log levels or whatever to try and debug this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271344/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271348] [NEW] I18n: untranslated help text for port create

2014-01-21 Thread Doug Fish
Public bug reported:

In Admin->Networks->[network name]->Ports->Create Port, the Device ID
and Device Owner have untranslated tool tips.  Looks like the issue is
in openstack_dashboard/dashboards/admin/networks/ports/forms.py

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271348

Title:
  I18n:  untranslated help text for port create

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Admin->Networks->[network name]->Ports->Create Port, the Device ID
  and Device Owner have untranslated tool tips.  Looks like the issue is
  in openstack_dashboard/dashboards/admin/networks/ports/forms.py

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271348/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271381] [NEW] Incorrect number of security groups in Project Overview

2014-01-21 Thread Vahid Hashemian
Public bug reported:

The security-group count of a fresh project is reported as 0 ("Used 0 of
10") even though the "default" security group already exists for the
project, so it should report "Used 1 of 10".

Steps to reproduce:
1. Using an admin user, create a new project 'proj1' and add any user to it (role doesn't matter)
2. Log in as the user you just added to 'proj1'
3. In the Project tab select 'proj1', then click on Overview. Security Groups under 'Limit Summary' reports "Used 0 of 10"
4. Click on Access & Security. The "default" security group is listed.

5. Now create a new security group.
6. Go back to Overview. Security Groups now reports "Used 2 of 10" (which is correct)

7. If you go back, remove the security group you created, and return to
Overview, it shows "Used 1 of 10" (which is correct)
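The off-by-one comes from counting only the groups the API returns before the lazily created "default" group has materialized. A hedged sketch of the counting logic (security_groups_used is a hypothetical helper, not Horizon's actual code):

```python
def security_groups_used(groups):
    """Count used security groups, treating the implicit 'default'
    group as always present: Nova creates it lazily for a fresh
    project, so it may not appear in the very first listing."""
    names = {g['name'] for g in groups}
    names.add('default')  # assumed always to exist once touched
    return len(names)

print(security_groups_used([]))                                      # 1
print(security_groups_used([{'name': 'default'}, {'name': 'web'}]))  # 2
```

With this rule a fresh project reports "Used 1 of 10" immediately, matching what Access & Security shows.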

** Affects: horizon
 Importance: Undecided
 Assignee: Vahid Hashemian (vahidhashemian)
 Status: New


** Tags: miscount overview project security-group

** Changed in: horizon
 Assignee: (unassigned) => Vahid Hashemian (vahidhashemian)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1271381

Title:
  Incorrect number of security groups in Project Overview

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The security-group count of a fresh project is reported as 0 ("Used 0
  of 10") even though the "default" security group already exists for
  the project, so it should report "Used 1 of 10".

  Steps to reproduce:
  1. Using an admin user, create a new project 'proj1' and add any user to it (role doesn't matter)
  2. Log in as the user you just added to 'proj1'
  3. In the Project tab select 'proj1', then click on Overview. Security Groups under 'Limit Summary' reports "Used 0 of 10"
  4. Click on Access & Security. The "default" security group is listed.

  5. Now create a new security group.
  6. Go back to Overview. Security Groups now reports "Used 2 of 10" (which is correct)

  7. If you go back, remove the security group you created, and return
  to Overview, it shows "Used 1 of 10" (which is correct)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1271381/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269667] Re: metering service unreachable:ServiceCatalogException: Invalid service catalog service: metering

2014-01-21 Thread LIU Yulong
** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1269667

Title:
  metering service unreachable:ServiceCatalogException: Invalid service
  catalog service: metering

Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  
  metering service unreachable
  ServiceCatalogException: Invalid service catalog service: metering

  but I can use 
  root@client:/etc/nova# ceilometer meter-list
  
  +---------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
  | Name                | Type       | Unit    | Resource ID                          | User ID                          | Project ID                       |
  +---------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
  | cpu                 | cumulative | ns      | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 01a79d4c-28eb-452a-8d01-0b089871c401 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 08bd3dd4-a498-41bd-9f8a-c2f0355546ed | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 1c8de487-5e63-4ca7-a299-8a48de13f136 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 3f580752-1755-42f5-88f9-b0c8f9213591 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 502321d1-22a9-4ec7-b1ee-04825f9d97de | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 805e8a9a-520c-467a-b90d-bd7e2ba31fcc | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | abbcff90-dcfe-4e37-a7aa-770747adb437 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | c87cab87-7f1a-4b1a-b15e-18826573f8ea | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.read.bytes     | cumulative | B       | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.read.requests  | cumulative | request | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | 01a79d4c-28eb-452a-8d01-0b089871c401 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | 08bd3dd4-a498-41bd-9f8a-c2f0355546ed | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | 1c8de487-5e63-4ca7-a299-8a48de13f136 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | 3f580752-1755-42f5-88f9-b0c8f9213591 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | 502321d1-22a9-4ec7-b1ee-04825f9d97de | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | 805e8a9a-520c-467a-b90d-bd7e2ba31fcc | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | abbcff90-dcfe-4e37-a7aa-770747adb437 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | c87cab87-7f1a-4b1a-b15e-18826573f8ea | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.root.size      | gauge      | GB      | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.write.bytes    | cumulative | B       | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.write.requests | cumulative | request | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | image

[Yahoo-eng-team] [Bug 1270943] Re: Hypervisor crashes after instance is spawned

2014-01-21 Thread Joe Gordon
Any idea of what process is causing the kernel to panic?  In your
pastebin you mention it's an 'OpenVSwitch crash mem dump', in which case
I don't think this is a nova bug.  Including neutron here to see if
anyone over there has seen this, although it sounds like an Open vSwitch
issue (assuming it's Open vSwitch causing the kernel panic).

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: nova
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1270943

Title:
  Hypervisor crashes after instance is spawned

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  I am running Grizzly on Ubuntu 13.04 (so the network service ==
  Quantum).  Nova runs Quantum with LibvirtHybridOVSBridgeDriver and
  LinuxOVSInterfaceDriver, while Quantum is configured to use GRE
  tunnels.  Further, Quantum runs on a dedicated node and VLAN.

  Starting in mid-December, new Compute nodes that came online were
  unable to spin new VMs.  At the moment the nova-compute.log indicated
  that the instance had spawned successfully, the hypervisor system
  crashed with the following console dump message (last screen's worth):
  http://pastebin.com/004MYzvR.

  The installation of the Compute packages are controlled by puppet:

  1:2013.1-0ubuntu2 nova-common
  1:2013.1-0ubuntu2 nova-compute
  1:2013.1-0ubuntu2 nova-compute-kvm
  1.9.0-0ubuntu1openvswitch-common
  1.9.0-0ubuntu1openvswitch-datapath-dkms
  1.9.0-0ubuntu1openvswitch-datapath-source
  1.9.0-0ubuntu1openvswitch-switch
  1:1.0.3-0ubuntu1 python-cinderclient
  1:2013.1.4-0ubuntu1  python-glance
  1:0.9.0-0ubuntu1.2  python-glanceclient
  1:2013.1.4-0ubuntu1.1   python-keystone
  1:0.2.3-0ubuntu2.2  python-keystoneclient
  1:2013.1-0ubuntu2  python-nova
  1:2.13.0-0ubuntu1   python-novaclient
  1:1.1.0-0ubuntu1  python-oslo.config
  1:2013.1-0ubuntu2  python-quantum
  1:2.2.0-0ubuntu1  python-quantumclient
  1:1.3.0-0ubuntu1  python-swiftclient
  1:2013.1-0ubuntu2  quantum-common
  1:2013.1-0ubuntu2  quantum-plugin-openvswitch
  1:2013.1-0ubuntu2  quantum-plugin-openvswitch-agent

  The kernel being used is *not* controlled by Puppet and ends up being
  whatever the latest and greatest version is in raring-updates.  The
  kernels in use: 3.8.0.34.52.  I tried upgrading to 3.8.0.35.53 when it
  became available, but that had no effect.

  I'm lost.  No idea how to debug this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1270943/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269667] Re: metering service unreachable:ServiceCatalogException: Invalid service catalog service: metering

2014-01-21 Thread LIU Yulong
+----------------------------------+-----------+--------------------------------------+--------------------------------------+--------------------------------------+----------------------------------+
| id                               | region    | publicurl                            | internalurl                          | adminurl                             | service_id                       |
+----------------------------------+-----------+--------------------------------------+--------------------------------------+--------------------------------------+----------------------------------+
| 0d80ce69afa54fc9aa398ee1a9b36916 | RegionOne | http://client:8777/                  | http://client:8777/                  | http://client:8777/                  | ce5ac58692174c12b66c035579a4b76d |
| 1c69e47f03ba4d69b2ddfcf481c1469d | RegionOne | http://client:8776/v1/%(tenant_id)s  | http://client:8776/v1/%(tenant_id)s  | http://client:8776/v1/%(tenant_id)s  | a18c7eadda374158a8e9a870230c5a0e |
| 470946ffb09a459d853ee46cd25c9232 | RegionOne | http://client:9696                   | http://client:9696                   | http://client:9696                   | 5e8906d004bf4341a8f959c8445e396d |
| 520c1833c0f34cdcb4d6dbadd0a3     | RegionOne | http://client:9292/v1                | http://client:9292/v1                | http://client:9292/v1                | 56d9ec5874474634ae6b9fff6951c6bd |
| 5967d1ef4db249078bb0b18486f46064 | RegionOne | http://client:5000/v2.0              | http://client:5000/v2.0              | http://client:35357/v2.0             | a92a837757364982aa8708668f65206b |
| 681b6f383f2f4a3a8b0c59c6348d0240 | RegionOne | http://client:8000/v1                | http://client:8000/v1                | http://client:8000/v1                | 93062274f0a54e39b94c06c7073e46e2 |
| 6b21663bf27649608cd54d77ff5b239e | RegionOne | http://client:8004/v1/%(tenant_id)s  | http://client:8004/v1/%(tenant_id)s  | http://client:8004/v1/%(tenant_id)s  | 07737f06a417449d8878f4f2d9d9a8a1 |
| 80157c781f62426da6af1907eea5eb24 | RegionOne | http://client:8773/services/Cloud    | http://client:8773/services/Cloud    | http://client:8773/services/Admin    | 9b8e3eedf995441b83b87f38da3cdf90 |
| 8efbf4bf00084ae39795dbbdbacf4d59 | RegionOne | http://client:8774/v2/%(tenant_id)s  | http://client:8774/v2/%(tenant_id)s  | http://client:8774/v2/%(tenant_id)s  | 34479f7aaa984397a83a1fd699d6bc09 |
| bba201ca5dce482daa6f376895077008 | RegionOne | http://client:8776/v2/%(tenant_id)s  | http://client:8776/v2/%(tenant_id)s  | http://client:8776/v2/%(tenant_id)s  | a882e2f7d36b48f4ad7fc3f9946e1712 |
| fa3e5e35f7834df9a33e77697c84f4e7 | RegionOne | http://client:/v1/AUTH_%(tenant_id)s | http://client:/v1/AUTH_%(tenant_id)s | http://client:/                      | 1699737817fd4b7ca0721a258a580edd |
+----------------------------------+-----------+--------------------------------------+--------------------------------------+--------------------------------------+----------------------------------+

** Changed in: horizon
   Status: Invalid => Incomplete

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1269667

Title:
  metering service unreachable:ServiceCatalogException: Invalid service
  catalog service: metering

Status in OpenStack Dashboard (Horizon):
  Incomplete

Bug description:
  
  metering service unreachable
  ServiceCatalogException: Invalid service catalog service: metering

  but I can use 
  root@client:/etc/nova# ceilometer meter-list
  
  +---------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
  | Name                | Type       | Unit    | Resource ID                          | User ID                          | Project ID                       |
  +---------------------+------------+---------+--------------------------------------+----------------------------------+----------------------------------+
  | cpu                 | cumulative | ns      | e2aefa0a-1773-46a5-99df-c642650f4780 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 01a79d4c-28eb-452a-8d01-0b089871c401 | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      | 08bd3dd4-a498-41bd-9f8a-c2f0355546ed | ed14e41272114ec98ffa52f035bd5403 | fada418776274dbfa9b3948a43a2c72b |
  | disk.ephemeral.size | gauge      | GB      |

[Yahoo-eng-team] [Bug 1118066] Re: Possible to get and update quotas for nonexistant tenant

2014-01-21 Thread Joe Gordon
So this is a known issue, nova doesn't do any tenant validation for
quotas.   Right now the assumption is that only global admins (think
cloud operator) should have access to the last three methods in:

http://docs.openstack.org/api/openstack-compute/2/content/os-quota-sets.html

GET v2{/tenant_id}/os-quota-sets{/tenant_id}{/user_id}  
Enables an admin user to show quotas for a specified tenant and user.

POSTv2{/tenant_id}/os-quota-sets{/tenant_id}{/user_id}  
Updates quotas for a specified tenant/project and user.

GET v2{/tenant_id}/os-quota-sets{/tenant_id}/detail{/user_id}   
Shows details for quotas for a specified tenant and user.

And as an admin (trusted user), we expect them to not break things.

This is part of a bigger issue, which is nova doesn't have great RBAC
support. Say you want to create a tenant admin who can set quotas per
user.
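The missing validation can be illustrated with a stand-in for an identity lookup; KNOWN_TENANTS and update_quota below are hypothetical, not nova code, and a real fix would call Keystone rather than consult a set:

```python
KNOWN_TENANTS = {'demo', 'service'}  # stand-in for a Keystone tenant lookup

def update_quota(quota_store, tenant_id, quotas):
    """Reject quota updates for tenants the identity service has never
    heard of, instead of silently persisting them (the reported bug)."""
    if tenant_id not in KNOWN_TENANTS:
        raise LookupError('tenant %r does not exist' % tenant_id)
    quota_store.setdefault(tenant_id, {}).update(quotas)

store = {}
update_quota(store, 'demo', {'instances': 20})
print(store)  # {'demo': {'instances': 20}}
try:
    update_quota(store, 'this_tenant_does_not_exist', {'instances': 20})
except LookupError as exc:
    print(exc)
```

The trade-off the comment above alludes to: adding this check makes every quota call depend on the identity service, which is why it has historically been left to the trusted admin.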

** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1118066

Title:
  Possible to get and update quotas for nonexistant tenant

Status in OpenStack Compute (Nova):
  Opinion

Bug description:
  GET /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist
  returns 200 with the default quotas.

  Moreover
  POST /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist
  with updated quotas succeeds and that metadata is saved!

  I'm not sure if this is a bug or not. I cannot find any documentation
  on this interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1118066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1270845] Re: nova-api-metadata - refused to start due to missing fake_network configuration

2014-01-21 Thread Joe Gordon
To prevent this in the future, we should add a mode in devstack to
deploy each nova-api service as a separate binary and use that in one of
our gate jobs (either the postgres or mysql version)

** Also affects: devstack
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1270845

Title:
  nova-api-metadata - refused to start due to missing fake_network
  configuration

Status in devstack - openstack dev environments:
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  nova from trunk testing packages; the nova-api-metadata service fails
  on start:

  2014-01-20 14:22:04.593 4291 INFO nova.network.driver [-] Loading network driver 'nova.network.linux_net'
  2014-01-20 14:22:04.598 4291 CRITICAL nova [-] no such option: fake_network
  2014-01-20 14:22:04.598 4291 TRACE nova Traceback (most recent call last):
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/bin/nova-api-metadata", line 10, in <module>
  2014-01-20 14:22:04.598 4291 TRACE nova     sys.exit(main())
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/cmd/api_metadata.py", line 48, in main
  2014-01-20 14:22:04.598 4291 TRACE nova     server = service.WSGIService('metadata', use_ssl=should_use_ssl)
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 339, in __init__
  2014-01-20 14:22:04.598 4291 TRACE nova     self.manager = self._get_manager()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 375, in _get_manager
  2014-01-20 14:22:04.598 4291 TRACE nova     return manager_class()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/api/manager.py", line 32, in __init__
  2014-01-20 14:22:04.598 4291 TRACE nova     self.network_driver.metadata_accept()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 658, in metadata_accept
  2014-01-20 14:22:04.598 4291 TRACE nova     iptables_manager.apply()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 426, in apply
  2014-01-20 14:22:04.598 4291 TRACE nova     self._apply()
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 249, in inner
  2014-01-20 14:22:04.598 4291 TRACE nova     return f(*args, **kwargs)
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 446, in _apply
  2014-01-20 14:22:04.598 4291 TRACE nova     attempts=5)
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/network/linux_net.py", line 1196, in _execute
  2014-01-20 14:22:04.598 4291 TRACE nova     if CONF.fake_network:
  2014-01-20 14:22:04.598 4291 TRACE nova   File "/usr/lib/python2.7/dist-packages/oslo/config/cfg.py", line 1648, in __getattr__
  2014-01-20 14:22:04.598 4291 TRACE nova     raise NoSuchOptError(name)
  2014-01-20 14:22:04.598 4291 TRACE nova NoSuchOptError: no such option: fake_network
  2014-01-20 14:22:04.598 4291 TRACE nova

  We use this service on network gateway nodes alongside the neutron
  metadata proxy for scale-out.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1270845/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1118066] Re: Possible to get and update quotas for nonexistant tenant

2014-01-21 Thread Scott Devoid
** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1118066

Title:
  Possible to get and update quotas for nonexistant tenant

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  GET /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist
  returns 200 with the default quotas.

  Moreover
  POST /v2/:tenant/os-quota-sets/:this_tenant_does_not_exist
  with updated quotas succeeds and that metadata is saved!

  I'm not sure if this is a bug or not. I cannot find any documentation
  on this interface.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1118066/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1264452] Re: update method in os-services v2 API raises unexpected exception

2014-01-21 Thread lizheming
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1264452

Title:
  update method in os-services v2 API raises unexpected exception

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  When reviewing the os-services v2 API code, I found that the update method has a bug.
  nova/api/openstack/compute/contrib/services.py:
  The original code:
    def update(self, req, id, body):
        ...
        except (TypeError, KeyError):
            msg = _('Invalid attribute in the request')
            if 'host' in body and 'binary' in body:
                msg = _('Missing disabled reason field')
            raise webob.exc.HTTPBadRequest(detail=msg)

  The body parameter is a dict:
  body = {"service": {
      "host": "xxx",
      "binary": ...,
      ...
  }}
  so 'msg' will never be _('Missing disabled reason field').
  The code should be "if 'host' in body['service']", not "if 'host' in body".

  The v3 API has the same issue, but the input checking for the v3 API is
  all getting replaced by jsonschema, so we only modify the v2 API.
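The corrected condition the reporter describes can be sketched in isolation (missing_disabled_reason is a hypothetical helper; the webob error handling around it is elided):

```python
def missing_disabled_reason(body):
    """The request body nests 'host'/'binary' under 'service', so the
    check must look inside body['service'], not the top-level body."""
    svc = body.get('service', {})
    return 'host' in svc and 'binary' in svc and 'disabled_reason' not in svc

body = {'service': {'host': 'compute-1', 'binary': 'nova-compute'}}
print(missing_disabled_reason(body))        # True: the specific message fires
print('host' in body and 'binary' in body)  # False: the buggy check never fires
```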

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1264452/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271405] [NEW] local variable 'network_name' referenced before assignment

2014-01-21 Thread Xiang Hui
Public bug reported:

Hi team,

  I encountered this error in a slightly complicated situation:

  OS : ubuntu
  Branch : Icehouse

  When booting a VM with the admin role and assigning it another tenant's private network, everything works fine.
  But the info-cache update task then runs periodically, and at that point the other tenant's private network mentioned above is filtered out by _get_available_networks() when no non-None net_ids parameter is passed in.
Then bad things happen, as shown below:

  The worst effect is that the VM cannot be deleted normally.

  2014-01-20 11:03:19.632 DEBUG nova.compute.manager [-] An error occurred from (pid=5521) _heal_instance_info_cache /opt/stack/nova/nova/compute/manager.py:4488
2014-01-20 11:03:19.632 TRACE nova.compute.manager Traceback (most recent call last):
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 4484, in _heal_instance_info_cache
2014-01-20 11:03:19.632 TRACE nova.compute.manager     self._get_instance_nw_info(context, instance, use_slave=True)
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", line 891, in _get_instance_nw_info
2014-01-20 11:03:19.632 TRACE nova.compute.manager     instance)
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/network/api.py", line 50, in wrapper
2014-01-20 11:03:19.632 TRACE nova.compute.manager     res = f(self, context, *args, **kwargs)
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 439, in get_instance_nw_info
2014-01-20 11:03:19.632 TRACE nova.compute.manager     result = self._get_instance_nw_info(context, instance, networks)
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 446, in _get_instance_nw_info
2014-01-20 11:03:19.632 TRACE nova.compute.manager     nw_info = self._build_network_info_model(context, instance, networks)
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 1043, in _build_network_info_model
2014-01-20 11:03:19.632 TRACE nova.compute.manager     subnets)
2014-01-20 11:03:19.632 TRACE nova.compute.manager   File "/opt/stack/nova/nova/network/neutronv2/api.py", line 952, in _nw_info_build_network
2014-01-20 11:03:19.632 TRACE nova.compute.manager     for net in networks:
2014-01-20 11:03:19.632 TRACE nova.compute.manager UnboundLocalError: local variable 'network_name' referenced before assignment

** Affects: nova
 Importance: Undecided
 Assignee: Xiang Hui (xianghui)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Xiang Hui (xianghui)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271405

Title:
  local variable 'network_name' referenced before assignment

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi team,

    I encountered this error in a slightly complicated situation:

    OS : ubuntu
    Branch : Icehouse

    When booting a VM with the admin role and assigning it another tenant's private network, everything works fine.
    But the info-cache update task then runs periodically, and at that point the other tenant's private network mentioned above is filtered out by _get_available_networks() when no non-None net_ids parameter is passed in.
  Then bad things happen, as shown below:

    The worst effect is that the VM cannot be deleted normally.

    2014-01-20 11:03:19.632 DEBUG nova.compute.manager [-] An error occurred 
from (pid=5521) _heal_instance_info_cache 
/opt/stack/nova/nova/compute/manager.py:4488
  2014-01-20 11:03:19.632 TRACE nova.compute.manager Traceback (most recent 
call last):
  2014-01-20 11:03:19.632 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/compute/manager.py, line 4484, in 
_heal_instance_info_cache
  2014-01-20 11:03:19.632 TRACE nova.compute.manager 
self._get_instance_nw_info(context, instance, use_slave=True)
  2014-01-20 11:03:19.632 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/compute/manager.py, line 891, in _get_instance_nw_info
  2014-01-20 11:03:19.632 TRACE nova.compute.manager instance)
  2014-01-20 11:03:19.632 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/api.py, line 50, in wrapper
  2014-01-20 11:03:19.632 TRACE nova.compute.manager res = f(self, context, 
*args, **kwargs)
  2014-01-20 11:03:19.632 TRACE nova.compute.manager   File 
/opt/stack/nova/nova/network/neutronv2/api.py, line 439, in 
get_instance_nw_info
  2014-01-20 11:03:19.632 TRACE nova.compute.manager result = 
self._get_instance_nw_info(context, instance, networks)
  2014-01-20 11:03:19.632 TRACE nova.compute.manager   File 

[Yahoo-eng-team] [Bug 1183817] Re: instances created in newly created subnet are not accessible via private range ip address 10.0.11.2 from devstack host machine

2014-01-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1183817

Title:
  instances created in newly created subnet are not accessible via
  private range ip address 10.0.11.2 from devstack host machine

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  We have a single-node devstack (branch: stable/grizzly) installation.

  subnet list:
  the first is the subnet of the default private network

  | 6ea6f812-3ad4-4414-ba9e-d57a44ae15cc || 
10.0.55.0/24 | {"start": "10.0.55.2", "end": "10.0.55.254"}   |
  | da7ad45b-6b92-426e-8c6b-1453f271e350 | not_working_subnet| 10.0.11.0/24 
| {"start": "10.0.11.2", "end": "10.0.11.254"} |

  not_working_subnet is connected to a router which is also connected to the
public network via its gateway interface.

  When creating a VM in the private network, it is accessible (ping, ssh) from 
the devstack host machine on its IP, e.g. 10.0.55.2.
  If I create a VM in not_working_subnet, it is not accessible via its IP, e.g. 
10.0.11.2.

  We have found the problem is probably that the routing table of the devstack
  host machine is missing the route that directs packets to the router's
  gateway interface. After I added the route below, instances on
  not_working_subnet are also accessible from the host machine and everything
  works fine.

  Destination Gateway Genmask Flags Metric RefUse Iface
  10.0.11.0   XX.YY.ZZ.205   255.255.255.0   UG0  00 br-ex

  The route for the private network is already present, therefore everything
  works fine for the private network:

  Kernel IP routing table
  Destination Gateway Genmask Flags Metric RefUse Iface
  10.0.55.0   XX.YY.ZZ.194   255.255.255.0   UG0  00 br-ex

  
  We assume that the routing table of the host machine should be updated 
automatically when a subnet is created with a connection to the public network 
via a router.
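  For reference, the manual workaround shown in the routing table amounts to a
single route command (a sketch only; XX.YY.ZZ.205 is the placeholder gateway
address from the table above, so substitute your host's actual values):

```shell
# Route traffic for the new subnet to the router's gateway port on br-ex.
# XX.YY.ZZ.205 stands in for the real gateway address, as in the report.
sudo ip route add 10.0.11.0/24 via XX.YY.ZZ.205 dev br-ex
```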

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1183817/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245862] Re: Duplicate iptables rule handling can be surprising

2014-01-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1245862

Title:
  Duplicate iptables rule handling can be surprising

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  When helping someone debug a deployment we discovered a missing
  iptables rule that was required. The user appended the rule instead of
  inserting it which ended up placing it after a reject rule, rendering
  it moot. I asked them to insert it instead, so it looked something
  like

  USER-RULE  # inserted
  openstack-rule-0
  .
  .
  .
  openstack-rule-n-1
  openstack-rule-n -j REJECT 
  USER-RULE  # appended

  When IPTablesManager ran again, the rules looked like
  openstack-rule-0
  .
  .
  .
  openstack-rule-n-1
  openstack-rule-n -j REJECT 
  USER-RULE  # appended

  Oops!

  I had the user redirect the rules to a file and edit it manually to
  move the remaining rule to a better place.

  USER-RULE  # appended and moved
  openstack-rule-0
  .
  .
  .
  openstack-rule-n-1
  openstack-rule-n -j REJECT 

  ... and then the IPTablesManager left the rule alone and all was well
  with the world.

  I haven't debugged the IPTablesManager with a sample iptables set yet
  so I cannot state definitively if that is the root cause but I'm
  pointing the finger as the logic seems plausible.

  If it *is* the culprit, I wonder if it is possible to give precedence to
  the first rule. It might also be a good idea to add debug logging when
  rules are dropped.
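  A first-occurrence-wins deduplication, which is roughly the behaviour being
asked for, can be sketched like this (illustrative only, not the actual
IptablesManager logic):

```python
def dedup_rules(rules):
    """Remove duplicate rules, keeping each rule's FIRST occurrence.

    If the same rule appears both before and after the REJECT rule, the
    earlier (effective) position wins and the later copy is dropped,
    instead of the surprising opposite described in this report.
    """
    seen = set()
    result = []
    for rule in rules:
        if rule not in seen:
            seen.add(rule)
            result.append(rule)
    return result
```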

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1245862/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1202362] Re: quantum-ovs-cleanup removes system devices that OVS is still looking for

2014-01-21 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1202362

Title:
  quantum-ovs-cleanup removes system devices that OVS is still looking
  for

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  After rebooting a machine, and running the quantum-ovs-cleanup utility
  in Grizzly (which seems to be the recommended practice--however only
  recommended by way of the Launchpad bug thread in which it was
  created), the script seems to delete the qg-/qr-/tap system devices. I
  did not instruct it to do so via the quantum_ports=False setting.

  This behavior almost sounds normal; however, even after ensuring that
  the L3 and DHCP agents start after this cleanup script finishes, I get
  hundreds of errors in the OVS vswitchd log indicating it can't find
  these interfaces (mostly qg-/qr-). Additionally, none of my Quantum
  networks are functional at that point.

  From this point forward, nothing recovers; I continue seeing the same
  vswitchd log errors (to the point that it rate limits them), I presume
  since the agents are requesting status on them so often.

  Let me know what log/console output is helpful here and I'll be happy
  to paste or attach.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1202362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271416] [NEW] tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.setupClass(): TimeoutException: Request timed out

2014-01-21 Thread Masayuki Igawa
Public bug reported:


ft7.1: setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON)_StringException:
 Traceback (most recent call last):
  File tempest/api/compute/images/test_list_image_filters.py, line 60, in 
setUpClass
cls.server1['id'], wait_until='ACTIVE')
  File tempest/api/compute/base.py, line 211, in create_image_from_server
kwargs['wait_until'])
  File tempest/services/compute/json/images_client.py, line 85, in 
wait_for_image_status
waiters.wait_for_image_status(self, image_id, status)
  File tempest/common/waiters.py, line 124, in wait_for_image_status
raise exceptions.TimeoutException(message)
TimeoutException: Request timed out
Details: Image 41113dda-a32b-42d2-a326-6c135b58120a failed to reach ACTIVE 
status within the required time (196 s). Current status: SAVING.

Sample failure: http://logs.openstack.org/09/68009/1/check/check-
tempest-dsvm-postgres-full/676b390/

logstash query for the failure string:
http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIEFORCBtZXNzYWdlOlwiZmFpbGVkIHRvIHJlYWNoIEFDVElWRSBzdGF0dXMgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIG1lc3NhZ2U6XCJTQVZJTkcuXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzNjg0ODg2NjR9

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: image testing

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271416

Title:
  
tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON.setupClass():
  TimeoutException: Request timed out

Status in OpenStack Compute (Nova):
  New

Bug description:
  
  ft7.1: setUpClass 
(tempest.api.compute.images.test_list_image_filters.ListImageFiltersTestJSON)_StringException:
 Traceback (most recent call last):
File tempest/api/compute/images/test_list_image_filters.py, line 60, in 
setUpClass
  cls.server1['id'], wait_until='ACTIVE')
File tempest/api/compute/base.py, line 211, in create_image_from_server
  kwargs['wait_until'])
File tempest/services/compute/json/images_client.py, line 85, in 
wait_for_image_status
  waiters.wait_for_image_status(self, image_id, status)
File tempest/common/waiters.py, line 124, in wait_for_image_status
  raise exceptions.TimeoutException(message)
  TimeoutException: Request timed out
  Details: Image 41113dda-a32b-42d2-a326-6c135b58120a failed to reach ACTIVE 
status within the required time (196 s). Current status: SAVING.

  Sample failure: http://logs.openstack.org/09/68009/1/check/check-
  tempest-dsvm-postgres-full/676b390/

  logstash query for the failure string:
  
http://logstash.openstack.org/index.html#eyJzZWFyY2giOiJmaWxlbmFtZTpcImNvbnNvbGUuaHRtbFwiIEFORCBtZXNzYWdlOlwiZmFpbGVkIHRvIHJlYWNoIEFDVElWRSBzdGF0dXMgd2l0aGluIHRoZSByZXF1aXJlZCB0aW1lXCIgQU5EIG1lc3NhZ2U6XCJTQVZJTkcuXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTAzNjg0ODg2NjR9

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271416/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271417] [NEW] glance_num_retries can't work

2014-01-21 Thread Liping Mao
Public bug reported:

Version : Havana

I have two controllers in my environment, and deploy glance-api on each 
controller.
In nova.conf :
glance_api_servers=controller1:9292,controller2:9292
glance_num_retries = 2

When I kill the glance service on controller2 and then run nova image-list, I 
sometimes get an error:
# nova image-list
+--++++
| ID   | Name   | Status | Server |
+--++++
| 668a4e66-97d9-40d7-888e-dd9db53438c4 | centos | ACTIVE ||
| 7e7917a6-96c5-4000-b167-efc13be4124c | cirros | ACTIVE ||
+--++++
# nova image-list
ERROR: The server has either erred or is incapable of performing the requested 
operation. (HTTP 500) (Request-ID: req-09d764df-4e8b-4330-b39e-14e4736c2e68)
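The expected behaviour of glance_num_retries is to fall through to the next API
server when a request fails; a minimal sketch of that retry loop (hypothetical
names, not the actual nova.image.glance code):

```python
import itertools


def call_with_retries(api_servers, num_retries, do_request):
    """Try a glance call against each server in turn.

    Makes up to 1 + num_retries attempts, moving to the next configured
    endpoint on connection failure instead of erroring out on the first
    dead one; re-raises the last error if every attempt fails.
    """
    servers = itertools.cycle(api_servers)
    error = None
    for _ in range(1 + num_retries):
        host = next(servers)
        try:
            return do_request(host)
        except IOError as exc:  # stand-in for the client's connection error
            error = exc
    raise error
```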

The error in /var/log/nova/api.log is:
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack Traceback (most recent 
call last):
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py, line 119, in 
__call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return 
req.get_response(self.application)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/request.py, line 1296, in send
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack application, 
catch_exc_info=False)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/request.py, line 1260, in 
call_application
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack app_iter = 
application(self.environ, start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py, 
line 574, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return self.app(env, 
start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py,
 line 131, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack response = 
self.app(environ, start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 144, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return resp(environ, 
start_response)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 130, in __call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack resp = 
self.call_func(req, *args, **self.kwargs)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/webob/dec.py, line 195, in call_func
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return self.func(req, 
*args, **kwargs)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 917, in 
__call__
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack content_type, body, 
accept)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 976, in 
_process_stack
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack action_result = 
self.dispatch(meth, request, action_args)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py, line 1057, in 
dispatch
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack return 
method(req=request, **action_args)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/api/openstack/compute/images.py, line 
203, in detail
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack **page_params)
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/nova/image/glance.py, line 264, in detail
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack for image in images:
2014-01-22 05:33:08.568 8866 TRACE nova.api.openstack   File 
/usr/lib/python2.6/site-packages/glanceclient/v1/images.py, line 

[Yahoo-eng-team] [Bug 1271426] [NEW] protected property change not rejected if a subsequent rule match accepts them

2014-01-21 Thread Mark Washenberger
Public bug reported:

See initial report here: http://lists.openstack.org/pipermail/openstack-
dev/2014-January/024861.html

What is happening is that if there is a specific rule that would reject
an action and a less specific rule that comes after that would accept
the action, then the action is being accepted. It should be rejected.

This is because we iterate through the property protection rules rather
than just finding the first match. This bug does not occur when policies
are used to determine property protections, only when roles are used
directly.
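First-match semantics, which is what the report says should happen, can be
sketched as follows (illustrative rule tuples, not Glance's actual
property-protections config format):

```python
import re


def is_permitted(rules, prop, role):
    # Rules are (regex, allowed_roles) pairs in priority order.
    # Only the FIRST rule whose pattern matches the property decides;
    # later, more general rules must not override it.
    for pattern, allowed_roles in rules:
        if re.match(pattern, prop):
            return role in allowed_roles
    return False
```

With a specific reject rule listed before a catch-all accept rule, the specific rule wins for the properties it matches.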

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1271426

Title:
  protected property change not rejected if a subsequent rule match
  accepts them

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  See initial report here: http://lists.openstack.org/pipermail
  /openstack-dev/2014-January/024861.html

  What is happening is that if there is a specific rule that would
  reject an action and a less specific rule that comes after that would
  accept the action, then the action is being accepted. It should be
  rejected.

  This is because we iterate through the property protection rules
  rather than just finding the first match. This bug does not occur when
  policies are used to determine property protections, only when roles
  are used directly.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1271426/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271429] [NEW] quota commit to error project/user in auto confirm resize

2014-01-21 Thread wangpan
Public bug reported:

If the resize confirmation is launched through the periodic task, the
project_id and user_id in the context are both None, which results in
reservations in the DB that also have `NULL` project_id and user_id. When
these reservations are committed, the quota usage of the instance's
project/user eventually becomes wrong.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1271429

Title:
  quota commit to error project/user in auto confirm resize

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the resize confirmation is launched through the periodic task, the
  project_id and user_id in the context are both None, which results in
  reservations in the DB that also have `NULL` project_id and user_id. When
  these reservations are committed, the quota usage of the instance's
  project/user eventually becomes wrong.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1271429/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1271432] [NEW] Error supported_extension_aliases in metaplugin.ini

2014-01-21 Thread Hirofumi Ichihara
Public bug reported:

There is an error in metaplugin.ini.
supported_extension_aliases should not be 'providernet'; it should be set to 'provider'.

** Affects: neutron
 Importance: Undecided
 Assignee: Hirofumi Ichihara (ichihara-hirofumi)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Hirofumi Ichihara (ichihara-hirofumi)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1271432

Title:
  Error supported_extension_aliases in metaplugin.ini

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  There is an error in metaplugin.ini.
  supported_extension_aliases should not be 'providernet'; it should be set to 'provider'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1271432/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp