[Yahoo-eng-team] [Bug 1357652] Re: Grenade cannot start keystone on icehouse

2014-08-15 Thread Marc Koderer
** Also affects: keystone
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1357652

Title:
  Grenade cannot start keystone on icehouse

Status in Grenade - OpenStack upgrade testing:
  New
Status in OpenStack Identity (Keystone):
  New

Bug description:
  Starting up keystone in grenade (old/icehouse) fails. This blocks all
  tempest check-grenade-dsvm-icehouse jobs with:

  (keystone.token.provider): 2014-08-15 22:04:42,459 WARNING provider get_token_provider keystone.conf [signing] token_format is deprecated in favor of keystone.conf [token] provider
  (keystone): 2014-08-15 22:04:42,508 CRITICAL log logging_excepthook No module named utils
  key failed to start

  Example Patch: https://review.openstack.org/#/c/113850/
  Logs: http://logs.openstack.org/20/113820/3/check/check-grenade-dsvm-icehouse/3fb23cb/logs/grenade.sh.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/grenade/+bug/1357652/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1272172] Re: nicira: any string is accepted at interface-name option of net-gateway-create

2014-08-15 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1272172

Title:
  nicira: any string is accepted at interface-name option of net-
  gateway-create

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  [Issue]

  I attempted net-gateway-create with an interface-name string that is
  nonsense. I expected an error with code 400 to be returned, but the
  command succeeds.

  [Reproduce]

  openstack@devstack:~/devstack$ neutron net-gateway-create --device \
  id=c4369a1c-3fb2-4f45-8ac7-17d15b20508e,interface_name=foobar \
  NetworkgatewayName

  
  +-----------+----------------------------------------------------------------------------+
  | Field     | Value                                                                      |
  +-----------+----------------------------------------------------------------------------+
  | default   | False                                                                      |
  | devices   | {"interface_name": "foobar", "id": "c4369a1c-3fb2-4f45-8ac7-17d15b20508e"} |
  | id        | 213640a8-7f65-4e72-bdc3-91ce00bd527d                                       |
  | name      | NetworkgatewayName                                                         |
  | ports     |                                                                            |
  | tenant_id | ec2918c3e7514158987c8f04c64d7521                                           |
  +-----------+----------------------------------------------------------------------------+
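  A fix would need the API layer to validate the device's interface_name
  before creating the gateway and return a 400 on failure. Below is a
  minimal sketch of such a check; the function name and the
  `known_interfaces` inventory are assumptions, since how to obtain the
  set of valid interface names from the NSX/nicira backend is the open
  part of this bug:

```python
def validate_interface_name(name, known_interfaces):
    """Return an error message (to be mapped to an HTTP 400) or None.

    `known_interfaces` stands in for whatever inventory of gateway-device
    interfaces the backend can report; it is a hypothetical input here.
    """
    if name not in known_interfaces:
        return "interface_name '%s' does not exist on the gateway device" % name
    return None
```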

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1272172/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1277439] Re: EOF error during SSH connection

2014-08-15 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1277439

Title:
  EOF error during SSH connection

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  This has been observed during a tempest run:
  http://paste.openstack.org/show/63009/

  It might be a paramiko issue, and it might not happen anymore, but it
  sounds like another intermittent failure that needs to be tracked with
  an elastic-recheck query.

  Full logs: http://logs.openstack.org/90/67390/6/check/check-tempest-dsvm-neutron-isolated/552fb81

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1277439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357599] [NEW] race condition with neutron in nova migrate code

2014-08-15 Thread Aaron Rosen
Public bug reported:

The tempest test that performs a resize on an instance fails from time to
time with a neutron virtual-interface timeout error. This occurs because
resize_instance() calls:

disk_info = self.driver.migrate_disk_and_power_off(
context, instance, migration.dest_host,
instance_type, network_info,
block_device_info)

which calls destroy(), which unplugs the VIFs. Then,

self.driver.finish_migration(context, migration, instance,
 disk_info,
 network_info,
 image, resize_instance,
 block_device_info, power_on)

is called, which expects a vif_plugged event. Since this happens on the
same host, the neutron agent is unable to detect that the vif was
unplugged and then plugged, because it happens so fast. To fix this, we
should check whether we are migrating to the same host; if we are, we
should not expect an event.
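The proposed check could be sketched roughly as follows (a hypothetical
helper, not the actual nova code; the migration dict keys and the
event-tuple format are assumptions for illustration):

```python
def get_expected_vif_plug_events(migration, network_info):
    """Return the vif_plugged events finish_migration should wait for.

    On a same-host resize the unplug/replug cycle happens so fast that
    the neutron agent never reports a new plug, so waiting would always
    time out; expect no events in that case.
    """
    if migration['source_compute'] == migration['dest_compute']:
        return []  # same-host resize: do not wait for vif_plugged
    return [('network-vif-plugged', vif['id']) for vif in network_info]
```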


8d1] Setting instance vm_state to ERROR
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] Traceback (most recent call last):
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] disk_info, image)
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] old_instance_type, sys_meta)
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] six.reraise(self.type_, self.value, 
self.tb)
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in 
finish_migration
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] block_device_info, power_on)
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in 
_create_domain_and_network
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] raise 
exception.VirtualInterfaceCreateException()
2014-08-14 00:03:58.010 1276 TRACE nova.compute.manager [instance: 
dca468e4-d26f-4ae2-a522-7d02ef7c98d1] VirtualInterfaceCreateException: Virtual 
Interface creation failed

** Affects: nova
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: nova
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357599

Title:
  race condition with neutron in nova migrate code

Status in OpenStack Compute (Nova):
  New

Bug description:
  The tempest test that performs a resize on an instance fails from time
  to time with a neutron virtual-interface timeout error. This occurs
  because resize_instance() calls:

  disk_info = self.driver.migrate_disk_and_power_off(
  context, instance, migration.dest_host,
  instance_type, network_info,
  block_device_info)

  which calls destroy(), which unplugs the VIFs. Then,

  self.driver.finish_migration(context, migration, instance,
   disk_info,
   network_info,
   image, resize_instance,
   block_device_info, power_on)

  is called, which expects a vif_plugged event. Since this happens on the
  same host, the neutron agent is unable to detect that the vif was
  unplugged and then plugged, because it happens so fast.

[Yahoo-eng-team] [Bug 1357592] [NEW] Log exceptions for update_device|(up/down)

2014-08-15 Thread Aaron Rosen
Public bug reported:

Previously, we logged these failures at debug level, but they should be
logged at least at error level: if we fail to update the port_status
value, it can cause nova to not start the VM. Logging the exception
should hopefully provide a little more info about why this sometimes
fails in the gate.
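The change amounts to swapping LOG.debug for LOG.exception in the agent's
RPC failure handlers, roughly as below (a sketch only: `plugin_rpc` and
the handler name are stand-ins, not the exact neutron agent code):

```python
import logging

LOG = logging.getLogger(__name__)

def report_device_up(plugin_rpc, context, device):
    """Tell the plugin a device is up, logging any failure loudly."""
    try:
        plugin_rpc.update_device_up(context, device)
    except Exception:
        # LOG.exception records at ERROR level and appends the traceback,
        # unlike the old LOG.debug call, which buried the failure.
        LOG.exception("Failed to report device %s as up", device)
```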

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357592

Title:
  Log exceptions for update_device|(up/down)

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Previously, we logged these failures at debug level, but they should be
  logged at least at error level: if we fail to update the port_status
  value, it can cause nova to not start the VM. Logging the exception
  should hopefully provide a little more info about why this sometimes
  fails in the gate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357592/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357593] [NEW] Log exceptions for update_device|(up/down)

2014-08-15 Thread Aaron Rosen
Public bug reported:

Previously, we logged these failures at debug level, but they should be
logged at least at error level: if we fail to update the port_status
value, it can cause nova to not start the VM. Logging the exception
should hopefully provide a little more info about why this sometimes
fails in the gate.

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357593

Title:
  Log exceptions for update_device|(up/down)

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Previously, we logged these failures at debug level, but they should be
  logged at least at error level: if we fail to update the port_status
  value, it can cause nova to not start the VM. Logging the exception
  should hopefully provide a little more info about why this sometimes
  fails in the gate.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357593/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357591] [NEW] Resizing panel causes table actions to misalign

2014-08-15 Thread Thai Tran
Public bug reported:

See attached image.

** Affects: horizon
 Importance: Low
 Status: Confirmed


** Tags: low-hanging-fruit ui ux

** Attachment added: "bug.png"
   https://bugs.launchpad.net/bugs/1357591/+attachment/4178910/+files/bug.png

** Summary changed:

- Resizing panel cause table actions to misalign
+ Resizing panel causes table actions to misalign

** Tags added: low-hanging-fruit ui ux

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357591

Title:
  Resizing panel causes table actions to misalign

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  See attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357591/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357589] [NEW] Edit icon jitters when hovered over

2014-08-15 Thread Thai Tran
Public bug reported:

When the user mouses over an editable cell, the pencil icon shows up.
When the user mouses over the pencil icon, a border appears, causing the
icon to jitter.

We should remove this border, resulting in:
1. A cleaner look.
2. No jittering effect.

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: In Progress

** Changed in: horizon
 Assignee: (unassigned) => Thai Tran (tqtran)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357589

Title:
  Edit icon jitters when hovered over

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  When user mouse over editable cell, the pencil icon shows up.
  When user mouse over the pencil icon, a border appears, causing the icon to 
jitter.

  We should remove this border resulting in:
  1. A cleaner look.
  2. Removing the jittering effect.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357589/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357586] [NEW] volume volume type create/edit allow name with only white spaces

2014-08-15 Thread Gloria Gu
Public bug reported:

When creating a volume or volume type, the name field accepts values
containing only white space.

How to reproduce:

Go to Project -> Volumes and create a volume whose name is only white
space; the volume shows up in the volume table with an empty name.

Go to Admin -> Volumes and create a volume type whose name is only white
space; the volume type shows up in the volume type table with an empty
name.

Expected:

The form should not allow an empty name when creating/editing a volume
or volume type.
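The missing validation is a strip-and-check on the name field. A minimal
sketch of the logic (plain Python here for clarity; in Horizon this would
live in the Django form's clean_name() and raise ValidationError instead
of ValueError):

```python
def clean_name(name):
    """Reject names that are empty or contain only white space."""
    if name is None or not name.strip():
        # A whitespace-only name strips down to the empty string.
        raise ValueError("Name may not be empty or only white space.")
    return name.strip()
```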

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357586

Title:
  volume volume type create/edit allow name with only white spaces

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating a volume or volume type, the name field accepts values
  containing only white space.

  How to reproduce:

  Go to Project -> Volumes and create a volume whose name is only white
  space; the volume shows up in the volume table with an empty name.

  Go to Admin -> Volumes and create a volume type whose name is only
  white space; the volume type shows up in the volume type table with an
  empty name.

  Expected:

  The form should not allow an empty name when creating/editing a volume
  or volume type.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357586/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280368] Re: "fixed IPs" not available to edit in "update defaults"

2014-08-15 Thread Gary W. Smith
In Juno, the default updates panel (accessible via Admin > System >
System Info > Default Quotas), is display-only for all fields.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1280368

Title:
  "fixed IPs" not available to edit in "update defaults"

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Even though it appears in default tab there's no field to edit when
  opening "default updates" in horizon

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1280368/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245719] Re: RBD backed instance can't shutdown and restart

2014-08-15 Thread Gary W. Smith
Marking the horizon bug as Invalid due to lack of confirmation and the
low probability that there is a bug in horizon.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245719

Title:
  RBD backed instance can't shutdown and restart

Status in OpenStack Dashboard (Horizon):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released
Status in Ubuntu:
  Confirmed

Bug description:
  Version: Havana w/ Ubuntu Repos. with Ceph for RBD.

  
  When launching an instance with "Boot from image (Creates a new
volume)", the instance is created fine and all is well; however, after I
shut down the instance, I can't turn it back on again.

  
  I get the following error in nova-compute.log when trying to power on
a shut-down instance.

  
###
  2013-10-29 00:48:33.859 2746 WARNING nova.compute.utils 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] [instance: 
cc370f6d-4be0-4cd3-9f20-bf86f5ad7c09] Can't access image $
  2013-10-29 00:48:34.040 2746 WARNING nova.virt.libvirt.vif 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] Deprecated: The LibvirtHybridOVSBridgeDriver 
VIF driver is now de$
  2013-10-29 00:48:34.578 2746 ERROR nova.openstack.common.rpc.amqp 
[req-89bbd72f-2280-4fac-802a-1211ec774980 27106b78ceac4e389558566857a7875f 
464099f86eb94d049ed1f7b0f0144275] Exception during message handling
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp **args)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp result 
= getattr(proxyobj, method)(ctxt, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 353, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 90, in wrapped
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp payload)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/exception.py", line 73, in wrapped
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
f(self, context, *args, **kw)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 243, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp pass
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 229, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 294, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 271, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp e, 
sys.exc_info())
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 258, in 
decorated_function
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp return 
function(self, context, *args, **kwargs)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1832, in 
start_instance
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp 
self._power_on(context, instance)
  2013-10-29 00:48:34.578 2746 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1

[Yahoo-eng-team] [Bug 1025168] Re: Dashboard install fails on Ubuntu 12.04

2014-08-15 Thread Gary W. Smith
Marking as invalid, as it has not been confirmed in nearly two years, nor
is it likely to be a bug given the large number of developers using this
same configuration.

** Changed in: horizon
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1025168

Title:
  Dashboard install fails on Ubuntu 12.04

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  I followed the dashboard installation instructions and am getting the
  following error when running:

  $ sudo python tools/install_venv.py

  
  Checking dependencies...
  dependency check done.
  Creating venv... done.
  Installing pip in virtualenv...
  Traceback (most recent call last):
File "tools/install_venv.py", line 156, in 
  main()
File "tools/install_venv.py", line 150, in main
  create_virtualenv()
File "tools/install_venv.py", line 105, in create_virtualenv
  if not run_command([WITH_VENV, 'easy_install', 'pip']).strip():
File "tools/install_venv.py", line 55, in run_command
  proc = subprocess.Popen(cmd, cwd=cwd, stdout=stdout)
File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
  errread, errwrite)
File "/usr/lib/python2.7/subprocess.py", line 1249, in _execute_child
  raise child_exception
  OSError: [Errno 8] Exec format error

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1025168/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1357578] [NEW] Unit test: nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm timing out in gate

2014-08-15 Thread Joe Gordon
Public bug reported:

http://logs.openstack.org/62/114062/3/gate/gate-nova-python27/2536ea4/console.html

 FAIL:
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
2014-08-15 13:46:09.155 | }}}
2014-08-15 13:46:09.156 | 
2014-08-15 13:46:09.156 | Traceback (most recent call last):
2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 206, in 
test_terminate_sigterm
2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 194, in 
_terminate_with_signal
2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 146, in 
wait_on_process_until_end
2014-08-15 13:46:09.157 | time.sleep(0.1)
2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
2014-08-15 13:46:09.157 | hub.switch()
2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 287, in switch
2014-08-15 13:46:09.157 | return self.greenlet.switch()
2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 339, in run
2014-08-15 13:46:09.158 | self.wait(sleep_time)
2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 82, in wait
2014-08-15 13:46:09.158 | sleep(seconds)
2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py",
 line 52, in signal_handler
2014-08-15 13:46:09.158 | raise TimeoutException()
2014-08-15 13:46:09.158 | TimeoutException

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357578

Title:
  Unit test:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm
  timing out in gate

Status in OpenStack Compute (Nova):
  New

Bug description:
  http://logs.openstack.org/62/114062/3/gate/gate-nova-python27/2536ea4/console.html

   FAIL:
  
nova.tests.integrated.test_multiprocess_api.MultiprocessWSGITest.test_terminate_sigterm

  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.api.client] Doing GET on 
/v2/openstack//flavors/detail
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
sent launcher_process pid: 10564 signal: 15
  2014-08-15 13:46:09.155 | INFO [nova.tests.integrated.test_multiprocess_api] 
waiting on process 10566 to exit
  2014-08-15 13:46:09.155 | INFO [nova.wsgi] Stopping WSGI server.
  2014-08-15 13:46:09.155 | }}}
  2014-08-15 13:46:09.156 | 
  2014-08-15 13:46:09.156 | Traceback (most recent call last):
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 206, in 
test_terminate_sigterm
  2014-08-15 13:46:09.156 | self._terminate_with_signal(signal.SIGTERM)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 194, in 
_terminate_with_signal
  2014-08-15 13:46:09.156 | self.wait_on_process_until_end(pid)
  2014-08-15 13:46:09.156 |   File 
"nova/tests/integrated/test_multiprocess_api.py", line 146, in 
wait_on_process_until_end
  2014-08-15 13:46:09.157 | time.sleep(0.1)
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/greenthread.py",
 line 31, in sleep
  2014-08-15 13:46:09.157 | hub.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 287, in switch
  2014-08-15 13:46:09.157 | return self.greenlet.switch()
  2014-08-15 13:46:09.157 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/hub.py",
 line 339, in run
  2014-08-15 13:46:09.158 | self.wait(sleep_time)
  2014-08-15 13:46:09.158 |   File 
"/home/jenkins/workspace/gate-nova-python27/.tox/py27/local/lib/python2.7/site-packages/eventlet/hubs/poll.py",
 line 82, in wait
  2014-08-15 13

[Yahoo-eng-team] [Bug 1320224] Re: Unable to set a new flavor to be public

2014-08-15 Thread Gary W. Smith
As Santiago suggested, please create a blueprint for this.

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1320224

Title:
  Unable to set a new flavor to be public

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  When creating a new flavor there's no option to set whether it is
  public or not.

  This is with the RDO Icehouse packages:

  openstack-dashboard-2014.1-1.el6.noarch

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1320224/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1355125] Re: keystonemiddleware appears not to hash PKIZ tokens

2014-08-15 Thread Adam Young
** Also affects: python-keystoneclient
   Importance: Undecided
   Status: New

** No longer affects: keystone

** Changed in: python-keystoneclient
 Assignee: (unassigned) => Adam Young (ayoung)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1355125

Title:
  keystonemiddleware appears not to hash PKIZ tokens

Status in OpenStack Identity (Keystone) Middleware:
  In Progress
Status in Python client library for Keystone:
  New

Bug description:
  It looks like keystonemiddleware hashes only PKI tokens [1], and the
  test test_verify_signed_token_raises_exception_for_revoked_pkiz_token
  [2] does not take hashing into account (it checks only already-hashed
  data, not the hashing itself). That would make token revocation for
  PKIZ tokens broken.

  
  [1] 
https://github.com/openstack/keystonemiddleware/blob/c9036a00ef3f7c4b9475799d5b713db7a2d94961/keystonemiddleware/auth_token.py#L1399
  [2] 
https://github.com/openstack/keystonemiddleware/blob/c9036a00ef3f7c4b9475799d5b713db7a2d94961/keystonemiddleware/tests/test_auth_token_middleware.py#L741
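  The direction of the fix is to hash PKIZ tokens the same way
  'MII'-prefixed PKI tokens are already hashed before revocation-list
  lookups. A rough sketch follows; the prefix constants and the md5
  default mirror the behaviour described in this bug, not the exact
  keystonemiddleware API:

```python
import hashlib

PKI_ASN1_PREFIX = 'MII'    # DER-encoded PKI tokens start with this
PKIZ_PREFIX = 'PKIZ_'      # compressed PKIZ tokens start with this

def maybe_hash_token(token, algorithm='md5'):
    """Hash oversized PKI/PKIZ tokens so they match revocation-list entries.

    Short (e.g. UUID) tokens are returned unchanged.
    """
    if token.startswith((PKI_ASN1_PREFIX, PKIZ_PREFIX)):
        return hashlib.new(algorithm, token.encode('utf-8')).hexdigest()
    return token
```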

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystonemiddleware/+bug/1355125/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1347891] Re: mis-use of XML canonicalization in keystone tests

2014-08-15 Thread Dolph Mathews
** Also affects: keystone/icehouse
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1347891

Title:
  mis-use of XML canonicalization in keystone tests

Status in OpenStack Identity (Keystone):
  In Progress
Status in Keystone icehouse series:
  New

Bug description:
  running the keystone suite on a new Fedora VM, I get many failures of the
  same variety: XML comparisons failing in a non-deterministic way:

  [classic@localhost keystone]$ tox -e py27 --  
keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
  py27 develop-inst-noop: /home/classic/dev/redhat/keystone
  py27 runtests: PYTHONHASHSEED='2335155056'
  py27 runtests: commands[0] | python setup.py testr --slowest 
--testr-args=keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
  running testr
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./keystone/tests --list 
  running=
  OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  OS_LOG_CAPTURE=${OS_LOG_CAPTURE:-1} \
  ${PYTHON:-python} -m subunit.run discover -t ./ ./keystone/tests  --load-list 
/tmp/tmpCKSHDr
  ==
  FAIL: keystone.tests.test_versions.XmlVersionTestCase.test_v3_disabled
  tags: worker-0
  --
  Empty attachments:
pythonlogging:''-1
stderr
stdout

  pythonlogging:'': {{{
  Adding cache-proxy 'keystone.tests.test_cache.CacheIsolatingProxy' to backend.
  Deprecated: keystone.common.kvs.Base is deprecated as of Icehouse in favor of 
keystone.common.kvs.KeyValueStore and may be removed in Juno.
  Registering Dogpile Backend 
keystone.tests.test_kvs.KVSBackendForcedKeyMangleFixture as 
openstack.kvs.KVSBackendForcedKeyMangleFixture
  Registering Dogpile Backend keystone.tests.test_kvs.KVSBackendFixture as 
openstack.kvs.KVSBackendFixture
  KVS region configuration for token-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
  Using default dogpile sha1_mangle_key as KVS region token-driver key_mangler
  It is recommended to only use the base key-value-store implementation for the 
token driver for testing purposes.  Please use 
keystone.token.backends.memcache.Token or keystone.token.backends.sql.Token 
instead.
  KVS region configuration for os-revoke-driver: 
{'keystone.kvs.arguments.distributed_lock': True, 'keystone.kvs.backend': 
'openstack.kvs.Memory', 'keystone.kvs.arguments.lock_timeout': 6}
  Using default dogpile sha1_mangle_key as KVS region os-revoke-driver 
key_mangler
  Callback: `keystone.contrib.revoke.core.Manager._trust_callback` subscribed 
to event `identity.OS-TRUST:trust.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._consumer_callback` 
subscribed to event `identity.OS-OAUTH1:consumer.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._access_token_callback` 
subscribed to event `identity.OS-OAUTH1:access_token.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._role_callback` subscribed to 
event `identity.role.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._user_callback` subscribed to 
event `identity.user.disabled`.
  Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.deleted`.
  Callback: `keystone.contrib.revoke.core.Manager._project_callback` subscribed 
to event `identity.project.disabled`.
  Callback: `keystone.contrib.revoke.core.Manager._domain_callback` subscribed 
to event `identity.domain.disabled`.
  Deprecated: keystone.middleware.core.XmlBodyMiddleware is deprecated as of 
Icehouse in favor of support for "application/json" only and may be removed in 
K.
  Auth token not in the request header. Will not build auth context.
  arg_dict: {}
  }}}

  Traceback (most recent call last):
File 
"/home/classic/dev/redhat/keystone/.tox/py27/lib/python2.7/site-packages/mock.py",
 line 1201, in patched
  return func(*args, **keywargs)
File "keystone/tests/test_versions.py", line 460, in test_v3_disabled
  self.assertThat(data, matchers.XMLEquals(expected))
File 
"/home/classic/dev/redhat/keystone/.tox/py27/lib/python2.7/site-packages/testtools/testcase.py",
 line 406, in assertThat
  raise mismatch_error
  MismatchError: expected = http://docs.openstack.org/identity/api/v2.0"; id="v2.0" status="stable" 
updated="2014-04-17T00:00:00Z">

  
  


  http://localhost:26739/v2.0/"; rel="self"/>
  http://

[Yahoo-eng-team] [Bug 1346424] Re: Baremetal node id not supplied to driver

2014-08-15 Thread James Slagle
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1346424

Title:
  Baremetal node id not supplied to driver

Status in OpenStack Compute (Nova):
  Fix Committed
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  A random overcloud baremetal node fails to boot during check-tripleo-
  overcloud-f20. Occurs intermittently.

  Full logs:

  
http://logs.openstack.org/26/105326/4/check-tripleo/check-tripleo-overcloud-f20/9292247/
  
http://logs.openstack.org/81/106381/2/check-tripleo/check-tripleo-overcloud-f20/ca8a59b/
  
http://logs.openstack.org/08/106908/2/check-tripleo/check-tripleo-overcloud-f20/e9894ca/

  
  Seed's nova-compute log shows this exception:

  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 ERROR oslo.messaging.rpc.dispatcher 
[req-9f090bea-a974-4f3c-ab06-ebd2b7a5c9e6 ] Exception during message handling: 
Baremetal node id not supplied to driver for 
'e13f2660-b72d-4a97-afac-64ff0eecc448'
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent 
call last):
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 133, in _dispatch_and_reply
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher incoming.message))
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 176, in _dispatch
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 122, in _do_dispatch
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py", line 88, 
in wrapped
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher payload)
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  Jul 21 13:46:07 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/exception.py", line 71, 
in wrapped
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return f(self, 
context, *args, **kw)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py", 
line 291, in decorated_function
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher pass
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 82, in __exit__
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/venvs/nova/lib/python2.7/site-packages/nova/compute/manager.py", 
line 277, in decorated_function
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]: 2014-07-21 
13:46:07.981 3608 TRACE oslo.messaging.rpc.dispatcher return function(self, 
context, *args, **kwargs)
  Jul 21 13:46:08 host-192-168-1-236 nova-compute[3608]

[Yahoo-eng-team] [Bug 1357498] [NEW] Can't upgrade from 2013.2 or 2013.2.1 to Juno

2014-08-15 Thread Brant Knudson
Public bug reported:


`keystone-manage db_sync` fails when you start from the 2013.2 or 2013.2.1 
release.

The failure is like

 CRITICAL keystone [-] KeyError: 

The migrations for 2013.2 and 2013.2.1 end at
'034_add_default_project_id_column_to_user.py'.

The migrations for Juno start at '036_havana.py'.

The migrations for 2013.2.2 end at '036_token_drop_valid_index.py', so
migration from that release works.

The Juno migrations should be changed so that they support upgrading from
034.
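
An illustrative sketch (not keystone's actual migration code) of why the
numbering gap produces a bare KeyError: the migration repository is walked
one version at a time, so a database sitting at version 34 asks for script
35, which does not exist when the Juno series restarts at 036. The
'037_...' entry below is a hypothetical follow-on migration.

```python
# Only scripts 036+ exist in the Juno tree; 035 is missing entirely.
migration_scripts = {
    36: '036_havana.py',
    37: '037_some_later_migration.py',  # hypothetical later migration
}

def upgrade(current_version, target_version):
    applied = []
    for version in range(current_version + 1, target_version + 1):
        # A 2013.2/2013.2.1 database at version 34 looks up script 35 here,
        # which raises the bare KeyError seen in the report.
        applied.append(migration_scripts[version])
    return applied

try:
    upgrade(34, 37)   # 2013.2 / 2013.2.1 starting point: fails
except KeyError as exc:
    print('CRITICAL keystone [-] KeyError: %s' % exc)  # -> KeyError: 35

print(upgrade(35, 37))  # a 2013.2.2 database (past 035) upgrades fine
```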

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1357498

Title:
  Can't upgrade from 2013.2 or 2013.2.1 to Juno

Status in OpenStack Identity (Keystone):
  New

Bug description:
  
  `keystone-manage db_sync` fails when you start from the 2013.2 or 2013.2.1 
release.

  The failure is like

   CRITICAL keystone [-] KeyError: 

  The migrations for 2013.2 and 2013.2.1 end at
  '034_add_default_project_id_column_to_user.py'.

  The migrations for Juno start at '036_havana.py'.

  The migrations for 2013.2.2 end at '036_token_drop_valid_index.py', so
  migration from that release works.

  The Juno migrations should be changed so that they support upgrading
  from 034.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1357498/+subscriptions



[Yahoo-eng-team] [Bug 1357491] [NEW] Detach service from compute_node

2014-08-15 Thread Jay Pipes
Public bug reported:

AFAICT, there's no good reason to have a foreign key relation between
compute_nodes and services. In fact, I see no reason why compute_nodes
needs to have a service_id column at all.

The service is the representation of the message bus between the nova-
conductor and the nova-compute worker processes. The compute node is
merely the collection of resources for a provider of compute resources.
There's really no reason to relate the two with each other.

The fact that they are related to each other means that the resource
tracker ends up needing to "find" its compute node record by first
looking up the service record for the 'compute' topic and the host for
the resource tracker, and then grabs the first compute_node record that
is related to the service record that matches that query. There is no
reason to do this in the resource tracker ... other than the fact that
right now the compute_node table has a service_id field and a relation
to the services table. But this relationship is contrived and is not
needed AFAICT.

The solution to this is to remove the service_id column from the
compute_nodes table and model, remove the foreign key relation to the
services table from the compute_nodes table, and then simply look up a
compute_node record directly from the host and nodename fields instead
of looking up a service record first.
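
The proposed direct lookup could be sketched as below. This is a hedged
sketch using a plain sqlite3 table rather than nova's real models and API;
the schema and column names are illustrative. The point is only that a
compute_nodes row is findable by (host, hypervisor_hostname) alone, with no
service_id foreign key involved.

```python
import sqlite3

# Toy stand-in for the compute_nodes table, with the service_id column
# (and its foreign key to services) simply gone.
conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE compute_nodes (
                    id INTEGER PRIMARY KEY,
                    host TEXT,
                    hypervisor_hostname TEXT)""")
conn.execute("INSERT INTO compute_nodes VALUES (1, 'compute1', 'node-a')")

def get_compute_node(conn, host, nodename):
    """Look the node up directly -- no detour through the services table."""
    cur = conn.execute(
        "SELECT id FROM compute_nodes WHERE host = ? "
        "AND hypervisor_hostname = ?", (host, nodename))
    return cur.fetchone()

print(get_compute_node(conn, 'compute1', 'node-a'))  # -> (1,)
```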

** Affects: nova
 Importance: Wishlist
 Status: Triaged


** Tags: low-hanging-fruit resource-tracker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357491

Title:
  Detach service from compute_node

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  AFAICT, there's no good reason to have a foreign key relation between
  compute_nodes and services. In fact, I see no reason why compute_nodes
  needs to have a service_id column at all.

  The service is the representation of the message bus between the nova-
  conductor and the nova-compute worker processes. The compute node is
  merely the collection of resources for a provider of compute
  resources. There's really no reason to relate the two with each other.

  The fact that they are related to each other means that the resource
  tracker ends up needing to "find" its compute node record by first
  looking up the service record for the 'compute' topic and the host for
  the resource tracker, and then grabs the first compute_node record
  that is related to the service record that matches that query. There
  is no reason to do this in the resource tracker ... other than the
  fact that right now the compute_node table has a service_id field and
  a relation to the services table. But this relationship is contrived
  and is not needed AFAICT.

  The solution to this is to remove the service_id column from the
  compute_nodes table and model, remove the foreign key relation to the
  services table from the compute_nodes table, and then simply look up a
  compute_node record directly from the host and nodename fields instead
  of looking up a service record first.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357491/+subscriptions



[Yahoo-eng-team] [Bug 1357487] [NEW] remove the 'More' text in the row actions

2014-08-15 Thread Cindy Lu
Public bug reported:

Per Lin's suggestion here: https://review.openstack.org/#/c/114358/. :)

I agree.  It is understood by the caret.  (See
http://getbootstrap.com/components/ - Split button dropdowns)

Split button seems like a common feature, e.g. MS Word:
http://tinypic.com/view.php?pic=mhx6kj&s=7#.U-5GdPldUyI

It will also give more space for the primary button text to reduce
issues like the buttons dropping to 2 lines (as reported here:
https://bugs.launchpad.net/horizon/+bug/1349615)

** Affects: horizon
 Importance: Undecided
 Assignee: Cindy Lu (clu-m)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Cindy Lu (clu-m)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357487

Title:
  remove the 'More' text in the row actions

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Per Lin's suggestion here: https://review.openstack.org/#/c/114358/.
  :)

  I agree.  It is understood by the caret.  (See
  http://getbootstrap.com/components/ - Split button dropdowns)

  Split button seems like a common feature, e.g. MS Word:
  http://tinypic.com/view.php?pic=mhx6kj&s=7#.U-5GdPldUyI

  It will also give more space for the primary button text to reduce
  issues like the buttons dropping to 2 lines (as reported here:
  https://bugs.launchpad.net/horizon/+bug/1349615)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357487/+subscriptions



[Yahoo-eng-team] [Bug 1357476] [NEW] Timeout waiting for vif plugging callback for instance

2014-08-15 Thread Attila Fazekas
Public bug reported:

n-cpu times out while waiting for neutron.


Logstash

http://logstash.openstack.org/#eyJzZWFyY2giOiIgbWVzc2FnZTogXCJUaW1lb3V0IHdhaXRpbmcgZm9yIHZpZiBwbHVnZ2luZyBjYWxsYmFjayBmb3IgaW5zdGFuY2VcIiBBTkQgdGFnczpcInNjcmVlbi1uLWNwdS50eHRcIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiNjA0ODAwIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTQwODEyMjI1NjY2NiwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

message: "Timeout waiting for vif plugging callback for instance" AND
tags:"screen-n-cpu.txt"


Logs

http://logs.openstack.org/09/108909/4/gate/check-tempest-dsvm-neutron-full/628138b/logs/screen-n-cpu.txt.gz#_2014-08-13_21_14_53_453

2014-08-13 21:14:53.453 WARNING nova.virt.libvirt.driver [req-
0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250
ServerActionsTestXML-1011304525] Timeout waiting for vif plugging
callback for instance 794ceb8c-a08b-4b02-bdcb-4ad5632f7744
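
The failure pattern above is a timed wait on an external notification. A
minimal sketch of that pattern (not nova's actual code; the class and
function names are illustrative, and the timeout is shortened from nova's
default `vif_plugging_timeout` of 300 seconds):

```python
import threading

class VifPlugTimeout(Exception):
    """Stand-in for nova's VirtualInterfaceCreateException path."""

def wait_for_vif_plugged(event, timeout=0.1):
    # Block until neutron's "network-vif-plugged" notification arrives;
    # if it never does, this is where nova logs "Timeout waiting for vif
    # plugging callback" and (with vif_plugging_is_fatal) fails the boot.
    if not event.wait(timeout):
        raise VifPlugTimeout()

plugged = threading.Event()  # in this bug, neutron never fires the event
try:
    wait_for_vif_plugged(plugged)
    outcome = 'plugged'
except VifPlugTimeout:
    outcome = 'timeout'
print(outcome)  # -> timeout
```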

2014-08-13 21:14:55.408 ERROR nova.compute.manager 
[req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 
ServerActionsTestXML-1011304525] [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Setting instance vm_state to ERROR
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] Traceback (most recent call last):
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3714, in finish_resize
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] disk_info, image)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3682, in _finish_resize
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] old_instance_type, sys_meta)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/openstack/common/excutils.py", line 82, in __exit__
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] six.reraise(self.type_, self.value, 
self.tb)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/compute/manager.py", line 3677, in _finish_resize
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 5302, in 
finish_migration
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] block_device_info, power_on)
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744]   File 
"/opt/stack/new/nova/nova/virt/libvirt/driver.py", line 3792, in 
_create_domain_and_network
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] raise 
exception.VirtualInterfaceCreateException()
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] VirtualInterfaceCreateException: Virtual 
Interface creation failed
2014-08-13 21:14:55.408 29002 TRACE nova.compute.manager [instance: 
794ceb8c-a08b-4b02-bdcb-4ad5632f7744] 

2014-08-13 21:14:56.138 ERROR oslo.messaging.rpc.dispatcher 
[req-0974eac5-f261-472e-a2c3-f96514e4131c ServerActionsTestXML-650848250 
ServerActionsTestXML-1011304525] Exception during message handling: Virtual 
Interface creation failed
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
134, in _dispatch_and_reply
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
177, in _dispatch
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/local/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-08-13 21:14:56.138 29002 TRACE oslo.messaging.rpc.dispatcher   File 
"/opt/stack/new/nova/nova/exception.py", line 88,

[Yahoo-eng-team] [Bug 1357462] [NEW] glance cannot find store for scheme mware_datastore

2014-08-15 Thread Jaroslav Henner
Public bug reported:

 I have python-glance-2014.1.2-1.el7ost.noarch

when configuring

default_store=vmware_datastore
known_stores = glance.store.vmware_datastore.Store
vmware_server_host = 10.34.69.76
vmware_server_username=root
vmware_server_password=qum5net
vmware_datacenter_path="New Datacenter"
vmware_datastore_name=shared

glance-api doesn't seem to come up at all.
glance image-list
Error communicating with http://172.16.40.9:9292 [Errno 111] Connection refused

there seems to be nothing interesting in the logs. After changing to the

  default_store=file

  glance image-create --disk-format vmdk --container-format bare
--copy-from 'http://str-02.rhev/OpenStack/cirros-0.3.1-x86_64-disk.vmdk'
--name cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
vmware_disktype="sparse" --property vmware_adaptertype="ide"
--property vmware_ostype="ubuntu64Guest" --name prdel --store
vmware_datastore

or

  glance image-create --disk-format vmdk --container-format bare
--file 'cirros-0.3.1-x86_64-disk.vmdk' --name
cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
vmware_disktype="sparse" --property vmware_adaptertype="ide"
--property vmware_ostype="ubuntu64Guest" --name prdel --store
vmware_datastore

the image remains in queued state

I can see log lines
2014-08-15 12:38:55.885 24732 DEBUG glance.store [-] Registering store  with schemes ('vsphere',) create_stores 
/usr/lib/python2.7/site-packages/glance/store/__init__.py:208
2014-08-15 12:39:54.119 24764 DEBUG glance.api.v1.images [-] Store for scheme 
vmware_datastore not found get_store_or_400 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057
2014-08-15 12:43:31.408 24764 DEBUG glance.api.v1.images 
[eac2ff8d-d55a-4e2c-8006-95beef8a0d7b caffabe3f56e4e5cb5cbeb040224fe69 
77e18ad8a31e4de2ab26f52fb15b3cc1 - - -] Store for scheme vmware_datastore not 
found get_store_or_400 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057

so it looks like there is an inconsistency in the scheme that should be
used. After hardcoding

  STORE_SCHEME = 'vmware_datastore'

in the

  /usr/lib/python2.7/site-packages/glance/store/vmware_datastore.py

the behaviour changed, but did not improve very much:

  glance image-create --disk-format vmdk --container-format bare --file 
'cirros-0.3.1-x86_64-disk.vmdk' --name cirros-0.3.1-x86_64-disk.vmdk 
--is-public true --property vmware_disktype="sparse" --property 
vmware_adaptertype="ide" --property vmware_ostype="ubuntu64Guest" --name 
prdel --store vmware_datastore
400 Bad Request
Store for image_id not found: 7edc22ae-f229-4f21-8f7d-fa19a03410be
(HTTP 400)
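
An illustrative sketch of the mismatch (not glance's actual store
registry): the VMware store registers itself under the scheme 'vsphere',
but the request path looks the store up under the configured name
'vmware_datastore', so the lookup fails exactly as in the debug log above.

```python
# What create_stores() ends up registering, per the first debug line:
# the store class is keyed by its URI scheme, 'vsphere'.
scheme_map = {'vsphere': 'VMwareStore'}

def get_store_or_400(scheme):
    """Toy version of glance's get_store_or_400 lookup."""
    try:
        return scheme_map[scheme]
    except KeyError:
        return '400: Store for scheme %s not found' % scheme

print(get_store_or_400('vmware_datastore'))  # the failing lookup
print(get_store_or_400('vsphere'))           # the scheme actually registered
```

  This also explains why hardcoding STORE_SCHEME = 'vmware_datastore' only
moved the error: it changed the registered key without changing all the
places that consult it.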

** Affects: glance
 Importance: Undecided
 Status: New

** Attachment added: "log"
   https://bugs.launchpad.net/bugs/1357462/+attachment/4178652/+files/api.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1357462

Title:
  glance cannot find store for scheme mware_datastore

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
   I have python-glance-2014.1.2-1.el7ost.noarch

  when configuring

  default_store=vmware_datastore
  known_stores = glance.store.vmware_datastore.Store
  vmware_server_host = 10.34.69.76
  vmware_server_username=root
  vmware_server_password=qum5net
  vmware_datacenter_path="New Datacenter"
  vmware_datastore_name=shared

  glance-api doesn't seem to come up at all.
  glance image-list
  Error communicating with http://172.16.40.9:9292 [Errno 111] Connection 
refused

  there seems to be nothing interesting in the logs. After changing to
  the

default_store=file

glance image-create --disk-format vmdk --container-format bare
  --copy-from
  'http://str-02.rhev/OpenStack/cirros-0.3.1-x86_64-disk.vmdk'
  --name cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
  vmware_disktype="sparse" --property vmware_adaptertype="ide"
  --property vmware_ostype="ubuntu64Guest" --name prdel --store
  vmware_datastore

  or

glance image-create --disk-format vmdk --container-format bare
  --file 'cirros-0.3.1-x86_64-disk.vmdk' --name
  cirros-0.3.1-x86_64-disk.vmdk --is-public true --property
  vmware_disktype="sparse" --property vmware_adaptertype="ide"
  --property vmware_ostype="ubuntu64Guest" --name prdel --store
  vmware_datastore

  the image remains in queued state

  I can see log lines
  2014-08-15 12:38:55.885 24732 DEBUG glance.store [-] Registering store  with schemes ('vsphere',) create_stores 
/usr/lib/python2.7/site-packages/glance/store/__init__.py:208
  2014-08-15 12:39:54.119 24764 DEBUG glance.api.v1.images [-] Store for scheme 
vmware_datastore not found get_store_or_400 
/usr/lib/python2.7/site-packages/glance/api/v1/images.py:1057
  2014-08-15 12:43:31.408 24764 DEBUG glance.api.v1.images 
[eac2ff8d-d55a-4e2c-8006-95beef8a0d7b caffabe3f56e4e5cb5cbeb040224fe69 
77e18ad8a31e4de2

[Yahoo-eng-team] [Bug 1357453] [NEW] Resource tracker should create compute node record in constructor

2014-08-15 Thread Jay Pipes
Public bug reported:

Currently, the resource tracker lazily creates the compute node record
in the database (via a call to the conductor's compute_node_create() API)
during calls to update_available_resource():

```
    @utils.synchronized(COMPUTE_RESOURCE_SEMAPHORE)
    def update_available_resource(self, context):
        """Override in-memory calculations of compute node resource usage based
        on data audited from the hypervisor layer.

        Add in resource claims in progress to account for operations that have
        declared a need for resources, but not necessarily retrieved them from
        the hypervisor layer yet.
        """
        LOG.audit(_("Auditing locally available compute resources"))
        resources = self.driver.get_available_resource(self.nodename)

        if not resources:
            # The virt driver does not support this function
            LOG.audit(_("Virt driver does not support "
                        "'get_available_resource'  Compute tracking is disabled."))
            self.compute_node = None
            return
        resources['host_ip'] = CONF.my_ip

        self._verify_resources(resources)

        self._report_hypervisor_resource_view(resources)

        if 'pci_passthrough_devices' in resources:
            if not self.pci_tracker:
                self.pci_tracker = pci_manager.PciDevTracker()
            self.pci_tracker.set_hvdevs(jsonutils.loads(resources.pop(
                'pci_passthrough_devices')))

        # Grab all instances assigned to this node:
        instances = objects.InstanceList.get_by_host_and_node(
            context, self.host, self.nodename)

        # Now calculate usage based on instance utilization:
        self._update_usage_from_instances(resources, instances)

        # Grab all in-progress migrations:
        capi = self.conductor_api
        migrations = capi.migration_get_in_progress_by_host_and_node(context,
            self.host, self.nodename)

        self._update_usage_from_migrations(context, resources, migrations)

        # Detect and account for orphaned instances that may exist on the
        # hypervisor, but are not in the DB:
        orphans = self._find_orphaned_instances()
        self._update_usage_from_orphans(resources, orphans)

        # NOTE(yjiang5): Because pci device tracker status is not cleared in
        # this periodic task, and also because the resource tracker is not
        # notified when instances are deleted, we need remove all usages
        # from deleted instances.
        if self.pci_tracker:
            self.pci_tracker.clean_usage(instances, migrations, orphans)
            resources['pci_stats'] = jsonutils.dumps(self.pci_tracker.stats)
        else:
            resources['pci_stats'] = jsonutils.dumps([])

        self._report_final_resource_view(resources)

        metrics = self._get_host_metrics(context, self.nodename)
        resources['metrics'] = jsonutils.dumps(metrics)
        self._sync_compute_node(context, resources)

    def _sync_compute_node(self, context, resources):
        """Create or update the compute node DB record."""
        if not self.compute_node:
            # we need a copy of the ComputeNode record:
            service = self._get_service(context)
            if not service:
                # no service record, disable resource
                return

            compute_node_refs = service['compute_node']
            if compute_node_refs:
                for cn in compute_node_refs:
                    if cn.get('hypervisor_hostname') == self.nodename:
                        self.compute_node = cn
                        if self.pci_tracker:
                            self.pci_tracker.set_compute_node_id(cn['id'])
                        break

            if not self.compute_node:
                # Need to create the ComputeNode record:
                resources['service_id'] = service['id']
                self._create(context, resources)
                if self.pci_tracker:
                    self.pci_tracker.set_compute_node_id(self.compute_node['id'])
                LOG.info(_('Compute_service record created for %(host)s:%(node)s')
                         % {'host': self.host, 'node': self.nodename})

        else:
            # just update the record:
            self._update(context, resources)
            LOG.info(_('Compute_service record updated for %(host)s:%(node)s')
                     % {'host': self.host, 'node': self.nodename})

    def _write_ext_resources(self, resources):
        resources['stats'] = {}
        resources['stats'].update(self.stats)
        self.ext_resources_handler.write_resources(resources)

    def _create(self, context, values):
        """Create the compute node in the DB."""
        # initialize load stats from existing instances:
        self._write_ext_resources(values)
        # NOTE(pmurray): the stats field is stored as a json string. The
        # json conversion will be done automatically by the ComputeNode object

[Yahoo-eng-team] [Bug 1357454] [NEW] Missing glyphicons-halflings png file

2014-08-15 Thread Thai Tran
Public bug reported:

We are attempting to retrieve a missing png file at login. See
attachment.

** Affects: horizon
 Importance: Undecided
 Assignee: Thai Tran (tqtran)
 Status: New


** Tags: low-hanging-fruit ui

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357454

Title:
  Missing glyphicons-halflings png file

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  We are attempting to retrieve a missing png file at login. See
  attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357454/+subscriptions



[Yahoo-eng-team] [Bug 1357437] [NEW] nova.tests.virt.libvirt.test_driver should use constants from fakelibvirt

2014-08-15 Thread Matt Riedemann
Public bug reported:

Commit f0883800660ab546f5667b973f339c4df4c5c458 adds some tests and new
constants for _swap_volume and _live_snapshot, one of which is
VIR_DOMAIN_BLOCK_REBASE_COPY.  It adds the required constants to the
fakelibvirt module but doesn't use them when making assertions in the
test, which can fail if you're not using fakelibvirt and don't have a new
enough version of libvirt on your system.

I realize that we now require libvirt-python >= 1.2.5 for testing, but
that requires libvirt >= 0.9.11 and if you're on 0.9.11 it doesn't have
VIR_DOMAIN_BLOCK_REBASE_COPY  defined in libvirt.h.in:

http://libvirt.org/git/?p=libvirt.git;a=blob;f=include/libvirt/libvirt.h.in;h=499dcd4514bf793b531e53496c56237fb055e1ba;hb=782afa98e4a5fa9a0927a9e32f9cf36082a2e8e7

It seems kind of strange to me that we use constants directly from the
libvirt module at all in test_driver given the import conditional check
at the top of the module:

http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/virt/libvirt/test_driver.py#n88

try:
import libvirt
except ImportError:
libvirt = fakelibvirt
libvirt_driver.libvirt = libvirt

Given that, any change that requires testing new constants from libvirt
should be in fakelibvirt, and we should use fakelibvirt when using those
constants in the tests, otherwise the "unit" tests are really dependent
on your environment.
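The distinction can be sketched as follows (a minimal stand-in, not nova's actual fakelibvirt module; the constant's value mirrors libvirt's own definition):

```python
class fakelibvirt:
    # Stand-in for nova.tests.virt.libvirt.fakelibvirt: the fake always
    # defines the constant, regardless of the host's libvirt version.
    VIR_DOMAIN_BLOCK_REBASE_COPY = 8  # mirrors libvirt's definition

try:
    import libvirt  # may be absent, or too old to define the constant
except ImportError:
    libvirt = fakelibvirt

# Brittle: AttributeError on a real but old system libvirt.
# flags = libvirt.VIR_DOMAIN_BLOCK_REBASE_COPY
# Robust: take the constant from the fake, as the report suggests.
flags = fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_COPY
```

Assertions written against `fakelibvirt` then hold no matter which libvirt the environment provides.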

** Affects: nova
 Importance: Undecided
 Status: Triaged


** Tags: libvirt testing

** Changed in: nova
   Status: New => Triaged

** Description changed:

- Commit f0883800660ab546f5667b973f339c4df4c5c458 adds some tests for
- _swap_volume and _live_snapshot, one of which is
+ Commit f0883800660ab546f5667b973f339c4df4c5c458 adds some tests and new
+ constants for _swap_volume and _live_snapshot, one of which is
  VIR_DOMAIN_BLOCK_REBASE_COPY.  It adds the required constants to the
  fakelibvirt module but doesn't use them when making assertions in the
  test, which can fail if you're not using fakelibvirt but not using a new
  enough version of libvirt on your system.
  
  I realize that we now require libvirt-python >= 1.2.5 for testing, but
  that requires libvirt >= 0.9.11 and if you're on 0.9.11 it doesn't have
  VIR_DOMAIN_BLOCK_REBASE_COPY  defined in libvirt.h.in:
  
  
http://libvirt.org/git/?p=libvirt.git;a=blob;f=include/libvirt/libvirt.h.in;h=499dcd4514bf793b531e53496c56237fb055e1ba;hb=782afa98e4a5fa9a0927a9e32f9cf36082a2e8e7
  
  It seems kind of strange to me that we use constants directly from the
  libvirt module at all in test_driver given the import conditional check
  at the top of the module:
  
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/virt/libvirt/test_driver.py#n88
  
  try:
- import libvirt
+ import libvirt
  except ImportError:
- libvirt = fakelibvirt
+ libvirt = fakelibvirt
  libvirt_driver.libvirt = libvirt
  
  Given that, any change that requires testing new constants from libvirt
  should be in fakelibvirt, and we should use fakelibvirt when using those
  constants in the tests, otherwise the "unit" tests are really dependent
  on your environment.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357437

Title:
  nova.tests.virt.libvirt.test_driver should use constants from
  fakelibvirt

Status in OpenStack Compute (Nova):
  Triaged

Bug description:
  Commit f0883800660ab546f5667b973f339c4df4c5c458 adds some tests and
  new constants for _swap_volume and _live_snapshot, one of which is
  VIR_DOMAIN_BLOCK_REBASE_COPY.  It adds the required constants to the
  fakelibvirt module but doesn't use them when making assertions in the
  test, which can fail if you're not using fakelibvirt and don't have a
  new enough version of libvirt on your system.

  I realize that we now require libvirt-python >= 1.2.5 for testing, but
  that requires libvirt >= 0.9.11 and if you're on 0.9.11 it doesn't
  have VIR_DOMAIN_BLOCK_REBASE_COPY  defined in libvirt.h.in:

  
http://libvirt.org/git/?p=libvirt.git;a=blob;f=include/libvirt/libvirt.h.in;h=499dcd4514bf793b531e53496c56237fb055e1ba;hb=782afa98e4a5fa9a0927a9e32f9cf36082a2e8e7

  It seems kind of strange to me that we use constants directly from the
  libvirt module at all in test_driver given the import conditional
  check at the top of the module:

  
http://git.openstack.org/cgit/openstack/nova/tree/nova/tests/virt/libvirt/test_driver.py#n88

  try:
  import libvirt
  except ImportError:
  libvirt = fakelibvirt
  libvirt_driver.libvirt = libvirt

  Given that, any change that requires testing new constants from
  libvirt should be in fakelibvirt, and we should use fakelibvirt when
  using those constants in the tests, otherwise the "unit" tests are
  really dependent on your environment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357437/+subscriptions


[Yahoo-eng-team] [Bug 1348820] Re: [OSSA 2014-026] Token issued_at time changes on /v3/auth/token GET requests (CVE-2014-5252)

2014-08-15 Thread Tristan Cacqueray
** Summary changed:

- Token issued_at time changes on /v3/auth/token GET requests (CVE-2014-5252)
+ [OSSA 2014-026] Token issued_at time changes on /v3/auth/token GET requests 
(CVE-2014-5252)

** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1348820

Title:
  [OSSA 2014-026] Token issued_at time changes on /v3/auth/token GET
  requests (CVE-2014-5252)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Steps to recreate

  1.) Generate a v2.0
  token http://pasteraw.com/37q9v3y80tlydltujo7vwfk7gcabggf

  2.) Pull token from the body of the response and use the /v3/auth/tokens/ GET 
api call to verify the token
  http://pasteraw.com/3oycofc541dil3d7hkzhihlcxlthqg4

  Notice that the 'issued_at' time of the token has changed.

  3.) Repeat step 2 and notice that the 'issued_at' time of the same token 
changes again.
  http://pasteraw.com/9wgyrmawewer1ptv5ct58w7pcrfb7zt

  The 'issued_at' time of a token should not change when validating the
  token using /v3/auth/token GET api call.

  This is because the issued_at time is being overwritten on GET here:
  
https://github.com/openstack/keystone/blob/83c7805ed3787303f8497bc479469d9071783107/keystone/token/providers/common.py#L319

  This seems like it has been written strictly for POSTs? In the case of
  POST, the issued_at time needs to be generated, in the case of HEAD or
  GET, the issued_at time should already exist.
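The fix direction can be sketched like this (illustrative names only, not Keystone's actual code): only generate `issued_at` when the token is first created.

```python
from datetime import datetime, timezone

def fill_issued_at(token_data, is_new_token):
    # POST (token creation): stamp the issue time now.
    # GET/HEAD (validation): keep the stored value intact.
    if is_new_token or 'issued_at' not in token_data:
        token_data['issued_at'] = datetime.now(timezone.utc).isoformat()
    return token_data
```

Validating the same token repeatedly then returns a stable issued_at value.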

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1348820/+subscriptions



[Yahoo-eng-team] [Bug 1347961] Re: [OSSA 2014-026] Revocation events are broken with mysql (CVE-2014-5251)

2014-08-15 Thread Tristan Cacqueray
** Summary changed:

- Revocation events are broken with mysql (CVE-2014-5251)
+ [OSSA 2014-026] Revocation events are broken with mysql (CVE-2014-5251)

** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1347961

Title:
  [OSSA 2014-026] Revocation events are broken with mysql
  (CVE-2014-5251)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone icehouse series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  Since mysql only stores timestamps with an accuracy of seconds rather
  than microseconds, doing comparisons of token expiration times will
  fail and tokens will not show up as being revoked.
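One way the truncation bites is easy to reproduce with illustrative timestamps: dropping microseconds can flip an ordering comparison against a revocation event.

```python
from datetime import datetime

issued_at = datetime(2014, 8, 15, 13, 42, 36, 164537)  # microseconds kept
revoked_at = datetime(2014, 8, 15, 13, 42, 36, 100000)

# In Python, the token clearly post-dates the revocation event:
assert issued_at > revoked_at

# But a mysql TIMESTAMP column (before fractional-second support) keeps
# whole seconds only, so after a round trip the comparison flips and the
# token escapes revocation:
stored = issued_at.replace(microsecond=0)
assert not (stored > revoked_at)
```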

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1347961/+subscriptions



[Yahoo-eng-team] [Bug 1349597] Re: [OSSA 2014-026] Domain-scoped tokens don't get revoked (CVE-2014-5253)

2014-08-15 Thread Tristan Cacqueray
** Summary changed:

- Domain-scoped tokens don't get revoked (CVE-2014-5253)
+ [OSSA 2014-026] Domain-scoped tokens don't get revoked (CVE-2014-5253)

** Changed in: ossa
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1349597

Title:
  [OSSA 2014-026] Domain-scoped tokens don't get revoked (CVE-2014-5253)

Status in OpenStack Identity (Keystone):
  Fix Committed
Status in Keystone havana series:
  Invalid
Status in Keystone icehouse series:
  Fix Released
Status in OpenStack Security Advisories:
  Fix Released

Bug description:
  
  If a domain is invalidated and that generates a revocation event, that 
revocation event won't match domain-scoped tokens so those tokens won't be 
revoked.

  This is because the code to calculate the fields for a domain-scoped
  token don't use the domain-scope so that information can't be used
  when testing against the revocation events.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1349597/+subscriptions



[Yahoo-eng-team] [Bug 1357428] [NEW] DBDeadlock in gate test

2014-08-15 Thread Matthew Booth
Public bug reported:

gate test failed with:

DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying to get
lock; try restarting transaction') 'UPDATE image_properties SET
updated_at=%s, deleted_at=%s, deleted=%s WHERE image_properties.image_id
= %s AND image_properties.deleted = false' (datetime.datetime(2014, 8,
15, 13, 42, 36, 164537), datetime.datetime(2014, 8, 15, 13, 42, 36,
144848), 1, '62832243-7165-4493-bacc-7801640cc718')

Above from:

http://logs.openstack.org/28/114528/1/check/check-tempest-dsvm-
full/176f0f2/logs/screen-g-reg.txt.gz

Full logs:

http://logs.openstack.org/28/114528/1/check/check-tempest-dsvm-
full/176f0f2/
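The error message itself suggests the usual mitigation: re-run the transaction. A minimal sketch of that pattern (not Glance's actual code; `DBDeadlock` here is a stand-in for the exception raised by the DB layer):

```python
import functools
import random
import time

class DBDeadlock(Exception):
    """Stand-in for the deadlock error raised by the DB layer."""

def retry_on_deadlock(max_attempts=3):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)  # re-run the transaction
                except DBDeadlock:
                    if attempt == max_attempts:
                        raise
                    # brief randomized backoff before retrying
                    time.sleep(random.uniform(0, 0.05 * attempt))
        return wrapper
    return decorator
```

This only helps when the whole unit of work is safe to replay, which is why the transaction boundary matters.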

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1357428

Title:
  DBDeadlock in gate test

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  gate test failed with:

  DBDeadlock: (OperationalError) (1213, 'Deadlock found when trying to
  get lock; try restarting transaction') 'UPDATE image_properties SET
  updated_at=%s, deleted_at=%s, deleted=%s WHERE
  image_properties.image_id = %s AND image_properties.deleted = false'
  (datetime.datetime(2014, 8, 15, 13, 42, 36, 164537),
  datetime.datetime(2014, 8, 15, 13, 42, 36, 144848), 1,
  '62832243-7165-4493-bacc-7801640cc718')

  Above from:

  http://logs.openstack.org/28/114528/1/check/check-tempest-dsvm-
  full/176f0f2/logs/screen-g-reg.txt.gz

  Full logs:

  http://logs.openstack.org/28/114528/1/check/check-tempest-dsvm-
  full/176f0f2/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1357428/+subscriptions



[Yahoo-eng-team] [Bug 1357414] [NEW] rabbitMQ locks all services for 10 secs after creating instance snapshot

2014-08-15 Thread Diogo Monteiro
Public bug reported:

Steps to reproduce the problem.
#1 Create instance
#2 Create instance snapshot
#3 After the creation of the instance snapshot all openstack services lock for 
around 10 seconds

The problem is always reproducible.

Overview of the environment:
nova - 2.15.0
neutron - 2.3.0
glance - 0.11.0
cinder - 1.0.6
keystone - 0.3.2

RabbitMQ
 {running_applications,[{rabbit,"RabbitMQ","2.7.1"},
{os_mon,"CPO  CXC 138 46","2.2.7"},
{sasl,"SASL  CXC 138 11","2.1.10"},
{mnesia,"MNESIA  CXC 138 12","4.5"},
{stdlib,"ERTS  CXC 138 10","1.17.5"},
{kernel,"ERTS  CXC 138 10","2.14.5"}]},
 {os,{unix,linux}},
 {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:8:8] [rq:8] 
[async-threads:30] [kernel-poll:true]\n"},
 {memory,[{total,30703760},
  {processes,14640352},
  {processes_used,14630712},
  {system,16063408},
  {atom,1124441},
  {atom_used,1120308},
  {binary,478496},
  {code,11134417},
  {ets,986440}]},
 {vm_memory_high_watermark,0.397532165},
 {vm_memory_limit,3241710387}]

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1357414

Title:
  rabbitMQ locks all services for 10 secs after creating instance
  snapshot

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  Steps to reproduce the problem.
  #1 Create instance
  #2 Create instance snapshot
  #3 After the creation of the instance snapshot all openstack services lock 
for around 10 seconds

  The problem is always reproducible.

  Overview of the environment:
  nova - 2.15.0
  neutron - 2.3.0
  glance - 0.11.0
  cinder - 1.0.6
  keystone - 0.3.2

  RabbitMQ
   {running_applications,[{rabbit,"RabbitMQ","2.7.1"},
  {os_mon,"CPO  CXC 138 46","2.2.7"},
  {sasl,"SASL  CXC 138 11","2.1.10"},
  {mnesia,"MNESIA  CXC 138 12","4.5"},
  {stdlib,"ERTS  CXC 138 10","1.17.5"},
  {kernel,"ERTS  CXC 138 10","2.14.5"}]},
   {os,{unix,linux}},
   {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:8:8] 
[rq:8] [async-threads:30] [kernel-poll:true]\n"},
   {memory,[{total,30703760},
{processes,14640352},
{processes_used,14630712},
{system,16063408},
{atom,1124441},
{atom_used,1120308},
{binary,478496},
{code,11134417},
{ets,986440}]},
   {vm_memory_high_watermark,0.397532165},
   {vm_memory_limit,3241710387}]

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1357414/+subscriptions



[Yahoo-eng-team] [Bug 1357379] [NEW] policy admin_only rules not enforced when changing value to default

2014-08-15 Thread Elena Ezhova
Public bug reported:

If a non-admin user tries to update an attribute, which should be
updated only by admin, from a non-default value to default,  the update
is successfully performed and PolicyNotAuthorized exception is not
raised.

The reason is that when a rule to match for a given action is built
there is a verification that each attribute in a body of the resource is
present and has a non-default value. Thus, if we try to change some
attribute's value to default, it is not considered to be explicitly set
and a corresponding rule is not enforced.
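The flawed check can be sketched like this (names and defaults are illustrative, not neutron's actual code):

```python
ATTR_DEFAULTS = {'shared': False}

def attr_explicitly_set(body, attr):
    # Bug: a value equal to the attribute's default is treated as "not
    # explicitly set", so the admin_only rule for it is never matched.
    return attr in body and body[attr] != ATTR_DEFAULTS[attr]

# A non-admin flipping 'shared' from True back to its default False
# slips past policy enforcement:
assert attr_explicitly_set({'shared': True}, 'shared') is True
assert attr_explicitly_set({'shared': False}, 'shared') is False
```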

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357379

Title:
  policy admin_only rules not enforced when changing value to default

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  If a non-admin user tries to update an attribute, which should be
  updated only by admin, from a non-default value to default,  the
  update is successfully performed and PolicyNotAuthorized exception is
  not raised.

  The reason is that when a rule to match for a given action is built
  there is a verification that each attribute in a body of the resource
  is present and has a non-default value. Thus, if we try to change some
  attribute's value to default, it is not considered to be explicitly
  set and a corresponding rule is not enforced.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357379/+subscriptions



[Yahoo-eng-team] [Bug 1357372] [NEW] Race condition in VNC port allocation when spawning an instance on VMware

2014-08-15 Thread Marcio Roberto Starke
Public bug reported:

When spawning some instances, the nova VMware driver can hit a race
condition in VNC port allocation. Although the get_vnc_port function has
a lock, it does not guarantee that the whole VNC port allocation process
is locked, so another instance can receive the same port if it requests
a VNC port before nova has finished allocating the port to the first VM.

If instances with the same VNC port land on the same host, this can lead
to improper access to an instance's console.

Reproduce the problem: launch two or more instances at the same time. In
some cases one instance executes get_vnc_port and picks a port, but
before it has finished _set_vnc_config another instance executes
get_vnc_port and picks the same port.

How often this occurs: unpredictable.
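A sketch of the fix direction, assuming one lock can cover the whole pick-and-reserve step (names are illustrative, not the driver's actual API):

```python
import threading

_vnc_lock = threading.Lock()
_allocated_ports = set()

def allocate_vnc_port(port_min=5900, port_max=6000):
    # Hold the lock across the *entire* allocation, so a concurrent spawn
    # cannot observe the same port as free before this one records it.
    with _vnc_lock:
        for port in range(port_min, port_max + 1):
            if port not in _allocated_ports:
                _allocated_ports.add(port)
                return port
        raise RuntimeError('no free VNC port in range')
```

Two concurrent callers then always receive distinct ports.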

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

** Summary changed:

- Race condition in VNC port allocation when spanning a instance on VMware
+ Race condition in VNC port allocation when spawning a instance on VMware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357372

Title:
  Race condition in VNC port allocation when spawning an instance on
  VMware

Status in OpenStack Compute (Nova):
  New

Bug description:
  When spawning some instances, the nova VMware driver can hit a race
  condition in VNC port allocation. Although the get_vnc_port function
  has a lock, it does not guarantee that the whole VNC port allocation
  process is locked, so another instance can receive the same port if it
  requests a VNC port before nova has finished allocating the port to
  the first VM.

  If instances with the same VNC port land on the same host, this can
  lead to improper access to an instance's console.

  Reproduce the problem: launch two or more instances at the same time.
  In some cases one instance executes get_vnc_port and picks a port, but
  before it has finished _set_vnc_config another instance executes
  get_vnc_port and picks the same port.

  How often this occurs: unpredictable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357372/+subscriptions



[Yahoo-eng-team] [Bug 1357368] [NEW] Source side post Live Migration Logic cannot disconnect multipath iSCSI devices cleanly

2014-08-15 Thread Jeegn Chen
Public bug reported:

When a volume is attached to a VM in the source compute node through
multipath, the related files in /dev/disk/by-path/ are like this

stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24
/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.a5-lun-24
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-24

The information on its corresponding multipath device is like this
stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
3600601602ba03400921130967724e411 dm-3 DGC,VRAID
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| `- 19:0:0:24 sdl 8:176 active undef running
`-+- policy='round-robin 0' prio=-1 status=enabled
  `- 18:0:0:24 sdj 8:144 active undef running


But when the VM is migrated to the destination, the related information is like 
the following example since we CANNOT guarantee that all nodes are able to 
access the same iSCSI portals and the same target LUN number. And the 
information is used to overwrite connection_info in the DB before the post live 
migration logic is executed.

stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b5-lun-100
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-100

stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
3600601602ba03400921130967724e411 dm-3 DGC,VRAID
size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=-1 status=active
| `- 19:0:0:100 sdf 8:176 active undef running
`-+- policy='round-robin 0' prio=-1 status=enabled
  `- 18:0:0:100 sdg 8:144 active undef running

As a result, if the post live migration logic on the source side uses
the portal IP, target IQN and LUN to find the devices to clean up, it
may use 192.168.3.51, iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 100.
However, the correct values are 192.168.3.50,
iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 24.

Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497) can
be used to fix it: Leverage the unchanged multipath_id to find correct
devices to delete.

** Affects: nova
 Importance: Undecided
 Assignee: Jeegn Chen (jeegn-chen)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Jeegn Chen (jeegn-chen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357368

Title:
  Source side post Live Migration Logic cannot disconnect multipath
  iSCSI devices cleanly

Status in OpenStack Compute (Nova):
  New

Bug description:
  When a volume is attached to a VM in the source compute node through
  multipath, the related files in /dev/disk/by-path/ are like this

  stack@ubuntu-server12:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.50:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.a5-lun-24
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-24

  The information on its corresponding multipath device is like this
  stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:24 sdl 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
`- 18:0:0:24 sdj 8:144 active undef running

  
  But when the VM is migrated to the destination, the related information is 
like the following example since we CANNOT guarantee that all nodes are able to 
access the same iSCSI portals and the same target LUN number. And the 
information is used to overwrite connection_info in the DB before the post live 
migration logic is executed.

  stack@ubuntu-server13:~/devstack$ ls /dev/disk/by-path/*24
  
/dev/disk/by-path/ip-192.168.3.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b5-lun-100
  
/dev/disk/by-path/ip-192.168.4.51:3260-iscsi-iqn.1992-04.com.emc:cx.fnm00124500890.b4-lun-100

  stack@ubuntu-server12:~/devstack$ sudo multipath -l 
3600601602ba03400921130967724e411
  3600601602ba03400921130967724e411 dm-3 DGC,VRAID
  size=1.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
  |-+- policy='round-robin 0' prio=-1 status=active
  | `- 19:0:0:100 sdf 8:176 active undef running
  `-+- policy='round-robin 0' prio=-1 status=enabled
`- 18:0:0:100 sdg 8:144 active undef running

  As a result, if the post live migration logic on the source side uses
  the portal IP, target IQN and LUN to find the devices to clean up, it
  may use 192.168.3.51, iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 100.
  However, the correct values are 192.168.3.50,
  iqn.1992-04.com.emc:cx.fnm00124500890.a5 and 24.

  Similar philosophy in (https://bugs.launchpad.net/nova/+bug/1327497)
  can be used to fix it: Leverage the unchanged multipath_id to find
  correct devices to delete.

[Yahoo-eng-team] [Bug 1357339] [NEW] ml2_conf_sriov.ini provides wrong examples

2014-08-15 Thread Przemyslaw Czesnowicz
Public bug reported:

Config option names in the example ml2_conf_sriov.ini are wrong, e.g.:
supported_vendor_pci_devs should be supported_pci_vendor_devs
exclude_list should be exclude_devices
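With the corrected names, the example sections would read roughly as follows (section names and vendor/product IDs below are illustrative placeholders, not the file's authoritative contents):

```ini
[ml2_sriov]
# correct name (was: supported_vendor_pci_devs)
supported_pci_vendor_devs = 15b3:1004, 8086:10ca

[sriov_nic]
# correct name (was: exclude_list)
exclude_devices =
```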

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: example ml2 sriov

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357339

Title:
  ml2_conf_sriov.ini provides wrong examples

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Config option names in the example ml2_conf_sriov.ini are wrong, e.g.:
  supported_vendor_pci_devs should be supported_pci_vendor_devs
  exclude_list should be exclude_devices

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357339/+subscriptions



[Yahoo-eng-team] [Bug 1357335] [NEW] "Device size" field in the "Launch Instance" allows negative values

2014-08-15 Thread Bradley Jones
Public bug reported:

For usability the "Device size" field should not allow negative values

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Bradley Jones (bradjones)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357335

Title:
   "Device size" field in the "Launch Instance" allows negative values

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  For usability the "Device size" field should not allow negative values

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357335/+subscriptions



[Yahoo-eng-team] [Bug 1357331] [NEW] Delayed page reload after modal submit causes another modal to not open

2014-08-15 Thread Timur Sufiev
Public bug reported:

Steps to reproduce:

1. Go to Projects, click on 'Modify Quotas' for some project. 
2. After quotas tab in modal form appears, change some quota value, click 
'Submit'.
3. After 'Loading...' spinner disappears and message 'Success: Modified project 
.' appears in top right corner, click again 'Modify Quotas'.
4. Again 'Loading...' spinner appears, but the modal form won't be shown, 
because of the page reload that comes after modal submit (to reproduce that 
issue, the latency between your dashboard and the rest of the OpenStack
instance should be big enough - i.e., for me the OpenStack instance is
located roughly on the opposite side of the globe).

This bug not only spoils UX, but also forces to add `time.sleep` into
integration tests to make them pass.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357331

Title:
  Delayed page reload after modal submit causes another modal to not
  open

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Steps to reproduce:

  1. Go to Projects, click on 'Modify Quotas' for some project. 
  2. After quotas tab in modal form appears, change some quota value, click 
'Submit'.
  3. After 'Loading...' spinner disappears and message 'Success: Modified 
project .' appears in top right corner, click again 'Modify 
Quotas'.
  4. Again 'Loading...' spinner appears, but the modal form won't be shown, 
because of the page reload that comes after modal submit (to reproduce that 
issue, the latency between your dashboard and the rest of the OpenStack
instance should be big enough - i.e., for me the OpenStack instance is
located roughly on the opposite side of the globe).

  This bug not only spoils UX, but also forces to add `time.sleep` into
  integration tests to make them pass.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357331/+subscriptions



[Yahoo-eng-team] [Bug 1357314] [NEW] NSX: floating ip status can be incorrectly reset

2014-08-15 Thread Salvatore Orlando
Public bug reported:

This method can return None if:
1) an active floating ip is associated
2) a down floating ip is disassociated

Due to the default value for status being ACTIVE, this implies that when
a floating IP is associated at create-time its status is reset.
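The consequence can be sketched as follows (illustrative, not the NSX plugin's actual code): the helper should always return an explicit status instead of None, so the column default never silently applies.

```python
FLOATINGIP_STATUS_ACTIVE = 'ACTIVE'
FLOATINGIP_STATUS_DOWN = 'DOWN'

def fip_status_after_update(associated):
    # Returning None here would let the DB default (ACTIVE) win,
    # resetting the status of a floating IP associated at create time.
    return FLOATINGIP_STATUS_ACTIVE if associated else FLOATINGIP_STATUS_DOWN
```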

** Affects: neutron
 Importance: High
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: New


** Tags: vmware

** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: python-neutronclient

** Changed in: neutron
   Importance: Undecided => High

** Changed in: neutron
 Assignee: (unassigned) => Salvatore Orlando (salvatore-orlando)

** Changed in: neutron
Milestone: None => juno-3

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357314

Title:
  NSX: floating ip status can be incorrectly reset

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This method can return None if:
  1) an active floating ip is associated
  2) a down floating ip is disassociated

  Due to the default value for status being ACTIVE, this implies that
  when a floating IP is associated at create-time its status is reset.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357314/+subscriptions



[Yahoo-eng-team] [Bug 1357290] [NEW] validator shouldn't raise exception

2014-08-15 Thread Wei Wang
Public bug reported:

In the function
neutron.extension.allowedaddresspairs._validate_allowed_address_pairs,
validator raises an exception instead of returning a message, as other
validators do.

** Affects: neutron
 Importance: Undecided
 Assignee: Wei Wang (damon-devops)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Wei Wang (damon-devops)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357290

Title:
  validator shouldn't raise exception

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the function
  neutron.extension.allowedaddresspairs._validate_allowed_address_pairs,
  the validator raises an exception instead of returning a message, as
  the other validators do.
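
  A sketch of the convention being asked for, with simplified
  placeholder checks (not the real attribute validation logic): the
  validator returns an error message string on failure and None on
  success, rather than raising.

  ```python
  def _validate_allowed_address_pairs(data, valid_values=None):
      # Convention sketch: return an error message on failure, None on
      # success, instead of raising -- matching the other validators.
      if not isinstance(data, list):
          return "Allowed address pairs must be a list"
      for pair in data:
          if not isinstance(pair, dict) or "ip_address" not in pair:
              return "Each allowed address pair must contain 'ip_address'"
      return None
  ```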

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357290/+subscriptions



[Yahoo-eng-team] [Bug 1357278] [NEW] create image form url need check

2014-08-15 Thread tinytmy
Public bug reported:

When creating an image with source_type=url, there is no check on
copy_from; we could change CharField to URLField.

** Affects: horizon
 Importance: Undecided
 Assignee: tinytmy (tangmeiyan77)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => tinytmy (tangmeiyan77)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1357278

Title:
  create image form url need check

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When creating an image with source_type=url, there is no check on
  copy_from; we could change CharField to URLField.
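
  The extra validation URLField adds over CharField is essentially a
  URL sanity check. A rough stdlib equivalent — a hypothetical helper,
  not Horizon code:

  ```python
  from urllib.parse import urlparse

  def looks_like_url(value):
      # Minimal check of the kind URLField performs: an http(s) scheme
      # and a non-empty network location.
      parts = urlparse(value)
      return parts.scheme in ("http", "https") and bool(parts.netloc)
  ```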

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1357278/+subscriptions



[Yahoo-eng-team] [Bug 1233188] Re: Can't create VM with rbd backend enabled

2014-08-15 Thread Yaguang Tang
** Changed in: cloud-archive
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1233188

Title:
  Can't create VM with rbd backend enabled

Status in Ubuntu Cloud Archive:
  Fix Released
Status in OpenStack Compute (Nova):
  Fix Released
Status in OpenStack Compute (nova) havana series:
  Fix Released

Bug description:
  nova-compute.log:

  2013-09-30 15:52:18.897 12884 ERROR nova.compute.manager 
[req-d112a8fd-89c4-4b5b-b6c2-1896dcd0e4ab f70773b792354571a10d44260397fde1 
b9e4ccd38a794fee82dfb06a52ec3cfd] [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] Error: libvirt_info() takes exactly 6 
arguments (7 given)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] Traceback (most recent call last):
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1037, in 
_build_instance
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] set_access_ip=set_access_ip)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1410, in _spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] LOG.exception(_('Instance failed to 
spawn'), instance=instance)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1407, in _spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] block_device_info)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2069, in 
spawn
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] write_to_disk=True)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3042, in 
to_xml
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] disk_info, rescue, block_device_info)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2922, in 
get_guest_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] inst_type):
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2699, in 
get_guest_storage_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] inst_type)
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86]   File 
"/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2662, in 
get_guest_disk_config
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] self.get_hypervisor_version())
  2013-09-30 15:52:18.897 12884 TRACE nova.compute.manager [instance: 
f133ebdb-2f6f-49ba-baf3-296163a98c86] TypeError: libvirt_info() takes exactly 6 
arguments (7 given)
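
The TypeError above is a plain Python arity mismatch. A self-contained
sketch of the same failure, with all names as illustrative stand-ins for
the driver code, not the actual Nova source:

```python
class Disk(object):
    def libvirt_info(self, disk_bus, disk_dev, device_type,
                     cache_mode, extra_specs):
        # Older signature: exactly 6 arguments counting self.
        return {"bus": disk_bus, "dev": disk_dev, "type": device_type}


def get_guest_disk_config(disk, hypervisor_version):
    # Newer caller passes a 7th argument (the hypervisor version),
    # reproducing "takes exactly 6 arguments (7 given)".
    return disk.libvirt_info("virtio", "vda", "disk", "none",
                             {}, hypervisor_version)


try:
    get_guest_disk_config(Disk(), 1001000)
    mismatch = False
except TypeError:
    mismatch = True
```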

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1233188/+subscriptions



[Yahoo-eng-team] [Bug 1357263] [NEW] Unhelpful error message when attempting to boot a guest with an invalid guestId

2014-08-15 Thread Matthew Booth
Public bug reported:

When booting a VMware instance from an image, guestId is taken from the
vmware_ostype property in glance. If this value is invalid, spawn() will
fail with the error message:

VMwareDriverException: A specified parameter was not correct.

As there are many parameters to CreateVM_Task, this error message does
not help us narrow down the offending one. Unfortunately this error
message is all that vSphere provides us, so we can't do better by
relying on vSphere alone.

As this is a user-editable parameter, we should try harder to provide an
indication of what the error might be. We can do this by validating the
field ourselves. As there is no way I'm aware of to extract a canonical
list of valid guestIds from a running vSphere host, I think we're left
embedding our own list and validating against it. This is not ideal,
because:

1. We will need to update our list for every ESX release
2. A simple list will not take account of the ESX version we're running against 
(i.e. we may have a list for 5.5, but be running against 5.1, which doesn't 
support everything on our list)

Consequently, to maintain a loose coupling we should validate the field,
but only warn for values we don't recognise. vSphere will continue to
return its non-specific error message, but there will be an additional
indication of what the root cause might be in the logs.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1357263

Title:
  Unhelpful error message when attempting to boot a guest with an
  invalid guestId

Status in OpenStack Compute (Nova):
  New

Bug description:
  When booting a VMware instance from an image, guestId is taken from
  the vmware_ostype property in glance. If this value is invalid,
  spawn() will fail with the error message:

  VMwareDriverException: A specified parameter was not correct.

  As there are many parameters to CreateVM_Task, this error message does
  not help us narrow down the offending one. Unfortunately this error
  message is all that vSphere provides us, so we can't do better by
  relying on vSphere alone.

  As this is a user-editable parameter, we should try harder to provide
  an indication of what the error might be. We can do this by validating
  the field ourselves. As there is no way I'm aware of to extract a
  canonical list of valid guestIds from a running vSphere host, I think
  we're left embedding our own list and validating against it. This is
  not ideal, because:

  1. We will need to update our list for every ESX release
  2. A simple list will not take account of the ESX version we're running 
against (i.e. we may have a list for 5.5, but be running against 5.1, which 
doesn't support everything on our list)

  Consequently, to maintain a loose coupling we should validate the
  field, but only warn for values we don't recognise. vSphere will
  continue to return its non-specific error message, but there will be
  an additional indication of what the root cause might be in the logs.
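
  The warn-only validation described above might look like the
  following sketch; the guestId values shown are an illustrative subset
  only, and the function name is invented.

  ```python
  import logging

  LOG = logging.getLogger(__name__)

  # Illustrative subset only; a real list would have to track each
  # ESX release, which is the maintenance cost noted in the report.
  KNOWN_GUEST_IDS = {"otherGuest", "otherLinux64Guest",
                     "ubuntu64Guest", "windows8Server64Guest"}


  def check_guest_id(guest_id):
      # Warn, but do not fail, on an unrecognised vmware_ostype value:
      # vSphere still makes the final call, keeping the coupling loose.
      if guest_id not in KNOWN_GUEST_IDS:
          LOG.warning("vmware_ostype '%s' is not in the known guestId "
                      "list; CreateVM_Task may fail with a non-specific "
                      "error", guest_id)
          return False
      return True
  ```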

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1357263/+subscriptions



[Yahoo-eng-team] [Bug 1357236] [NEW] Neutron creates oslo.messaging.Server object directly

2014-08-15 Thread Alexei Kornienko
Public bug reported:

oslo.messaging provides a factory method for creating a Server object,
which should be used instead of constructing the object directly.
Additionally, Neutron uses a custom RPCDispatcher to log incoming
messages, which duplicates existing functionality in oslo.messaging.

** Affects: neutron
 Importance: Undecided
 Assignee: Alexei Kornienko (alexei-kornienko)
 Status: In Progress


** Tags: messaging rpc

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357236

Title:
  Neutron creates oslo.messaging.Server object directly

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  oslo.messaging provides a factory method for creating a Server
  object, which should be used instead of constructing the object
  directly. Additionally, Neutron uses a custom RPCDispatcher to log
  incoming messages, which duplicates existing functionality in
  oslo.messaging.
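
  The factory pattern the report refers to can be sketched generically;
  these classes are simplified stand-ins, not the oslo.messaging API:

  ```python
  class RPCDispatcher(object):
      # Dispatches (and could log) incoming messages; with a factory in
      # place, callers never need to build this directly.
      def __init__(self, endpoints):
          self.endpoints = endpoints


  class Server(object):
      def __init__(self, transport, dispatcher):
          self.transport = transport
          self.dispatcher = dispatcher


  def get_rpc_server(transport, endpoints):
      # Factory: the single supported way to obtain a Server. It hides
      # dispatcher construction, so a custom dispatcher (e.g. one added
      # just for message logging) becomes unnecessary.
      return Server(transport, RPCDispatcher(endpoints))
  ```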

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357236/+subscriptions



[Yahoo-eng-team] [Bug 1357214] [NEW] correct getLoggers to use __name__ in code

2014-08-15 Thread Aaron Rosen
Public bug reported:

Previously the NSX plugin would log as NeutronPlugin. Now it contains
the full module path like the rest of the log statements.

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1357214

Title:
  correct getLoggers to use __name__ in code

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Previously the NSX plugin would log as NeutronPlugin. Now it contains
  the full module path like the rest of the log statements.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1357214/+subscriptions
