[Yahoo-eng-team] [Bug 1290690] [NEW] Dashboard limits the length of the Flavor Extra Spec key field when filling in quota:vif_outbound_average

2014-03-11 Thread zhangguoqing
Public bug reported:

Dashboard limits the length of the Flavor Extra Spec key field when
filling in quota:vif_outbound_average

(1) Log in to the Dashboard as admin;
(2) Select Flavors;
(3) Select any flavor, for example m1.tiny;
(4) Click More, then View Extra Specs;
(5) Click Create;
(6) Fill in the key quota:vif_outbound_average
[about quota:vif_outbound_average:
https://wiki.openstack.org/wiki/InstanceResourceQuota]

I find that I can't type the full key “quota:vif_outbound_average”.
Obviously, the key input box has too small a maxlength!
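
A minimal sketch of the kind of fix, assuming the key field is a plain
Django CharField in Horizon's extra-spec form (the class and field names
here are hypothetical, not the actual Horizon code):

    # Hedged sketch, not the actual Horizon form: raising max_length on
    # the key field lets 26-character keys such as
    # "quota:vif_outbound_average" fit.
    from django import forms

    class ExtraSpecCreateForm(forms.Form):
        key = forms.CharField(max_length=255, label="Key")
        value = forms.CharField(max_length=255, label="Value")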

** Affects: horizon
 Importance: Undecided
 Assignee: zhangguoqing (474751729-o)
 Status: New


** Tags: extra flavor maxlengh spec

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1290690

Title:
  Dashboard limits the length of the Flavor Extra Spec key field when
  filling in quota:vif_outbound_average

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Dashboard limits the length of the Flavor Extra Spec key field when
  filling in quota:vif_outbound_average

  (1) Log in to the Dashboard as admin;
  (2) Select Flavors;
  (3) Select any flavor, for example m1.tiny;
  (4) Click More, then View Extra Specs;
  (5) Click Create;
  (6) Fill in the key quota:vif_outbound_average
  [about quota:vif_outbound_average:
  https://wiki.openstack.org/wiki/InstanceResourceQuota]

  I find that I can't type the full key “quota:vif_outbound_average”.
  Obviously, the key input box has too small a maxlength!

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1290690/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290700] [NEW] Nova-manage db archive_deleted_rows stops at the first failure with insufficient diagnostic

2014-03-11 Thread jeffrey
Public bug reported:

1- After a long provisioning run which creates and deletes VMs for days,
we have a quite long history of deleted instances in the DB that could
be archived

2- We have attempted to run:
nova-manage db archive_deleted_rows --max_rows  100

which was accepted but did not complete in a short time, and was then
stopped using ^C.

Possibly the same happens when you get a timeout, because the command
performs a multiple insert into the target shadow table and only afterwards
deletes the entries that were logically deleted in the online table. It is
not clear whether both are in the same commit cycle and whether the first
is rolled back when the second is unable to complete.
It is also not clear whether the command can be executed concurrently by
multiple users without problems. It happened to us that the DB was left in
an inconsistent state, with rows still present in the online tables and
already copied to the shadow tables.

3- As a consequence of this situation, any further invocation of the
command, even with a small max_rows value, will fail. This is not good;
it would be better to skip the row in error and continue with the
others, reporting which one failed and needs further action. This leaves
the user with the suspicion that archiving doesn't work at all, as
many are saying in the OpenStack forums.

4- The problem here is a serviceability one. As per point one, the command
doesn't return any output if everything went fine, which doesn't help to
make things clear.

5- As per point two, the output of the command when something goes wrong is
not clear about what happened. It just lists the SQL transaction that went
wrong. If the transaction is a multiple insert with a large set of values,
which may be the case with a high max_rows parameter, the output shows only
the final part of the statement.
If the max_rows parameter is big, the part of the output that fits in the
shell is just a list of the values of the last field in the multiple insert,
usually the content of the 'deleted' field for the rows processed, which is a
counter and not very meaningful to the user,
e.g. ...1401601, 1401602,  1401603, 1401604, 1401605, 1401606, 1401607,
1401608, 1401609, 1401610, 1401611)
Please note that in this case the command can be partially executed, and any
further attempt blocks at the same point.

6- As a workaround, the user can only execute the command with --max_rows 1,
look at the output, and fix every problem manually in the DB. Not really
practical for the purpose of the command.
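
A hedged sketch of the skip-and-report behavior suggested in point 3
(archive_table is a hypothetical per-table helper, not the real
nova-manage internals):

    # Hedged sketch: archive each table independently, skip failures,
    # and report them instead of aborting the whole run.
    def archive_deleted_rows(tables, max_rows, archive_table):
        failures = {}
        archived = 0
        for table in tables:
            try:
                archived += archive_table(table, max_rows)
            except Exception as exc:  # remember what broke, keep going
                failures[table] = exc
        print("Archived %d rows" % archived)
        for table, exc in failures.items():
            print("FAILED %s: %s" % (table, exc))
        return archived, failures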

** Affects: nova-project
 Importance: Undecided
 Status: New

** Project changed: neutron => nova-project

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290700

Title:
  Nova-manage db archive_deleted_rows stops at the first failure with
  insufficient diagnostic

Status in The Nova Project:
  New

Bug description:
  1- After a long provisioning run which creates and deletes VMs for
  days, we have a quite long history of deleted instances in the DB that
  could be archived

  2- We have attempted to run:
  nova-manage db archive_deleted_rows --max_rows  100

  which was accepted but did not complete in a short time, and was then
  stopped using ^C.

  Possibly the same happens when you get a timeout, because the command
  performs a multiple insert into the target shadow table and only afterwards
  deletes the entries that were logically deleted in the online table. It is
  not clear whether both are in the same commit cycle and whether the first
  is rolled back when the second is unable to complete.
  It is also not clear whether the command can be executed concurrently by
  multiple users without problems. It happened to us that the DB was left in
  an inconsistent state, with rows still present in the online tables and
  already copied to the shadow tables.

  3- As a consequence of this situation, any further invocation of the
  command, even with a small max_rows value, will fail. This is not good;
  it would be better to skip the row in error and continue with the
  others, reporting which one failed and needs further action. This
  leaves the user with the suspicion that archiving doesn't work at
  all, as many are saying in the OpenStack forums.

  4- The problem here is a serviceability one. As per point one, the
  command doesn't return any output if everything went fine, which
  doesn't help to make things clear.

  5- As per point two, the output of the command when something goes wrong
  is not clear about what happened. It just lists the SQL transaction that
  went wrong. If the transaction is a multiple insert with a large set of
  values, which may be the case with a high max_rows parameter, the output
  shows only the final part of the statement.
  If the max_rows parameter is big, the part of the output that fits in the
  shell is just a list of the values of the

[Yahoo-eng-team] [Bug 1289195] Re: Duplicate security group name cause fail to start instance

2014-03-11 Thread yong sheng gong
If using the ID covers this case, I think the bug is invalid!

** Changed in: neutron
   Status: In Progress => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1289195

Title:
  Duplicate security group name cause fail to start instance

Status in OpenStack Neutron (virtual network service):
  Opinion

Bug description:
  When creating a security group, a duplicate name is allowed.
  When creating an instance, a duplicate SG name will cause an exception and
  the instance will fail to start. So duplicate SG names should not be allowed.

  In nova.network.neutronv2.API:allocate_for_instance:

      for security_group in security_groups:
          name_match = None
          uuid_match = None
          for user_security_group in user_security_groups:
              if user_security_group['name'] == security_group:
                  # with a duplicate sg name, name_match will not be
                  # None on the second match
                  if name_match:
                      raise exception.NoUniqueMatch(
                          _("Multiple security groups found matching"
                            " '%s'. Use an ID to be more specific.") %
                          security_group)

                  name_match = user_security_group['id']
              if user_security_group['id'] == security_group:
                  uuid_match = user_security_group['id']
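
  One hedged way to avoid the spurious NoUniqueMatch would be to resolve
  an exact ID match before comparing names (a sketch, not the actual fix):

      # Hedged sketch: a security group passed by ID never collides with
      # duplicated names; only then fall back to name matching.
      def resolve_security_group(security_group, user_security_groups):
          for usg in user_security_groups:
              if usg['id'] == security_group:
                  return usg['id']  # unambiguous ID match
          matches = [usg['id'] for usg in user_security_groups
                     if usg['name'] == security_group]
          if len(matches) > 1:
              raise ValueError("Multiple security groups found matching "
                               "'%s'. Use an ID to be more specific."
                               % security_group)
          if not matches:
              raise ValueError("Security group %s not found." % security_group)
          return matches[0]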

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1289195/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290746] [NEW] Nova should allow HARD_REBOOT to instances in the state REBOOTING_HARD

2014-03-11 Thread Matthew Gilliard
Public bug reported:

Currently when trying to issue a hard reboot to an instance, the logic
in nova/compute/api.py says:

    if (reboot_type == 'HARD' and
            instance['task_state'] == task_states.REBOOTING_HARD):
        raise exception.InstanceInvalidState

This means there's no user-facing way to rescue an instance that is
stuck in REBOOTING_HARD except for DELETE.

We should allow hard reboot to happen in the state REBOOTING_HARD.  Some
new locking code will be required.
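
A minimal sketch of the relaxed check (a sketch only, assuming the extra
locking mentioned above is handled elsewhere; names follow the snippet
quoted earlier):

    # Hedged sketch: a HARD reboot may supersede one already in flight,
    # while a SOFT reboot in that state is still refused.
    def reboot_allowed(reboot_type, task_state,
                       rebooting_hard='rebooting_hard'):
        if task_state != rebooting_hard:
            return True
        return reboot_type == 'HARD'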

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290746

Title:
  Nova should allow HARD_REBOOT to instances in the state REBOOTING_HARD

Status in OpenStack Compute (Nova):
  New

Bug description:
  Currently when trying to issue a hard reboot to an instance, the logic
  in nova/compute/api.py says:

      if (reboot_type == 'HARD' and
              instance['task_state'] == task_states.REBOOTING_HARD):
          raise exception.InstanceInvalidState

  This means there's no user-facing way to rescue an instance that is
  stuck in REBOOTING_HARD except for DELETE.

  We should allow hard reboot to happen in the state REBOOTING_HARD.
  Some new locking code will be required.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290746/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290767] [NEW] Error. Unable to associate floating ip in nova.network.neutronv2.api in _get_port_id_by_fixed_address

2014-03-11 Thread Matthew Macdonald-Wallace (HPCS)
Public bug reported:

Stacktrace (most recent call last):

  File "nova/api/openstack/compute/contrib/floating_ips.py", line 255, in _add_floating_ip
    fixed_address=fixed_address)
  File "nova/network/api.py", line 50, in wrapper
    res = f(self, context, *args, **kwargs)
  File "nova/network/neutronv2/api.py", line 649, in associate_floating_ip
    fixed_address)
  File "nova/network/neutronv2/api.py", line 634, in _get_port_id_by_fixed_address
    raise exception.FixedIpNotFoundForAddress(address=address)

Surely Nova should retry an assignment if the port is not ready?

This was created by a tempest test suite.
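
If the root cause is simply that the port is not wired up yet, a hedged
retry sketch (helper and parameters are hypothetical, not nova code):

    import time

    # Hedged sketch: retry the fixed-IP -> port lookup a few times
    # before giving up, instead of failing on the first miss.
    def get_port_id_with_retry(lookup, address, retries=5, delay=1.0):
        for attempt in range(retries):
            try:
                return lookup(address)  # hypothetical: raises on miss
            except Exception:
                if attempt == retries - 1:
                    raise
                time.sleep(delay)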

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290767

Title:
  Error. Unable to associate floating ip in nova.network.neutronv2.api
  in _get_port_id_by_fixed_address

Status in OpenStack Compute (Nova):
  New

Bug description:
  Stacktrace (most recent call last):

    File "nova/api/openstack/compute/contrib/floating_ips.py", line 255, in _add_floating_ip
      fixed_address=fixed_address)
    File "nova/network/api.py", line 50, in wrapper
      res = f(self, context, *args, **kwargs)
    File "nova/network/neutronv2/api.py", line 649, in associate_floating_ip
      fixed_address)
    File "nova/network/neutronv2/api.py", line 634, in _get_port_id_by_fixed_address
      raise exception.FixedIpNotFoundForAddress(address=address)

  Surely Nova should retry an assignment if the port is not ready?

  This was created by a tempest test suite.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290767/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280423] Re: Date picker fields in Resources Usage Overview should be hidden by default

2014-03-11 Thread Matthias Runge
That issue is already fixed in the latest checkout.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1280423

Title:
  Date picker fields in Resources Usage Overview should be hidden by
  default

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  In the Admin > Resources Usage Overview panel, the custom date picker
  fields (From and To) are not hidden by default.
  This causes those fields to be visible very briefly when the panel loads,
  before the JavaScript check kicks in (which decides whether they should be
  visible or hidden).

  Their visibility depends on the selection in the Period drop-down. Only
  when the selection is "Other" should the date picker fields be visible.
  However, the default drop-down value when the panel initially loads is
  "Last week", which means the date picker fields should be hidden by
  default.

  To reproduce:
  1. Try to connect to Horizon remotely to see the issue more easily
  2. Go to Admin > Resources Usage Overview
  3. You'll see that the date picker fields (From, To) are shown first and
  then very quickly hidden.

  A screenshot of when the panel loads is attached.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1280423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290772] [NEW] MatchMakerRedis doesn't work [zeromq]

2014-03-11 Thread fetahi
Public bug reported:

I was testing ZeroMQ with Redis. I installed all packages from the Ubuntu
Cloud repository [havana]. I added the following lines to nova.conf:
...
rpc_zmq_matchmaker = nova.openstack.common.rpc.matchmaker_redis.MatchMakerRedis
rpc_backend = nova.openstack.common.rpc.impl_zmq
...

I get the following error
2014-03-11 09:57:58.671 11201 ERROR nova.openstack.common.threadgroup [-] 
Command # 1 (SADD scheduler.ubuntu scheduler.ubuntu.ubuntu) of pipeline caused 
error: Operation against a key holding the wrong kind of value

The same error is reported in the following services:
nova-conductor
nova-consoleauth
nova-scheduler

The problem seems to come from the matchmaker using the same key to
register both a set of hosts and the hosts themselves.
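
The WRONGTYPE error is easy to reproduce with redis-py, which supports
the key-collision reading above (the key names are copied from the log;
everything else is illustrative):

    import redis

    # Hedged sketch: reuse one key for both a plain string and a set;
    # the SADD then fails with "Operation against a key holding the
    # wrong kind of value" (redis.exceptions.ResponseError).
    r = redis.StrictRedis()
    r.set('scheduler.ubuntu', 'host-record')
    r.sadd('scheduler.ubuntu', 'scheduler.ubuntu.ubuntu')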

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- I was testing zeromq with redis. I installed all packages from ubuntu cloud 
repository.  I added the following lines in nova.conf.
+ I was testing zeromq with redis. I installed all packages from ubuntu cloud 
repository[havana].  I added the following lines in nova.conf.
  ...
  rpc_zmq_matchmaker = 
nova.openstack.common.rpc.matchmaker_redis.MatchMakerRedis
  rpc_backend = nova.openstack.common.rpc.impl_zmq
  ...
  
- I get the following error 
+ I get the following error
  2014-03-11 09:57:58.671 11201 ERROR nova.openstack.common.threadgroup [-] 
Command # 1 (SADD scheduler.ubuntu scheduler.ubuntu.ubuntu) of pipeline caused 
error: Operation against a key holding the wrong kind of value
  
  The same error is reported in the following services:
  nova-conductor
  nova-consoleauth
  nova-scheduler
  
- 
- The problem seems to come from the matchmaker using the same key to register 
a set of hosts and the hosts themselves.
+ The problem seems to come from the matchmaker using the same key to
+ register a set of hosts and the hosts themselves.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290772

Title:
  MatchMakerRedis doesn't work [zeromq]

Status in OpenStack Compute (Nova):
  New

Bug description:
  I was testing ZeroMQ with Redis. I installed all packages from the Ubuntu
  Cloud repository [havana]. I added the following lines to nova.conf:
  ...
  rpc_zmq_matchmaker = 
nova.openstack.common.rpc.matchmaker_redis.MatchMakerRedis
  rpc_backend = nova.openstack.common.rpc.impl_zmq
  ...

  I get the following error
  2014-03-11 09:57:58.671 11201 ERROR nova.openstack.common.threadgroup [-] 
Command # 1 (SADD scheduler.ubuntu scheduler.ubuntu.ubuntu) of pipeline caused 
error: Operation against a key holding the wrong kind of value

  The same error is reported in the following services:
  nova-conductor
  nova-consoleauth
  nova-scheduler

  The problem seems to come from the matchmaker using the same key to
  register both a set of hosts and the hosts themselves.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290772/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1246987] Re: check_policy does not work correctly

2014-03-11 Thread Christopher Yeoh
** Changed in: nova
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1246987

Title:
  check_policy does not work correctly

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  The function

      check_policy(context, 'method', target)

  does not work as expected if target is an instance object, because
  target is not a dict.

  e.g. /etc/nova/policy.json:

      "startstop_api": "is_admin:True or (project_id:%(project_id)s and role:comp_startstop and user_id:%(user_id)s)",
      "compute:start": "rule:startstop_api",
      "compute:stop": "rule:startstop_api",

  The above controls never take effect.

  ./nova/compute/api.py should be revised. I fixed it as below.

  
  [root@nova-all0001 compute]# diff -rup api.old.py api.py
  --- api.old.py  2013-11-01 15:42:22.086922939 +0900
  +++ api.py  2013-11-01 14:38:12.407905965 +0900
  @@ -194,7 +194,10 @@ def policy_decorator(scope):
   def outer(func):
   @functools.wraps(func)
   def wrapped(self, context, target, *args, **kwargs):
  -check_policy(context, func.__name__, target, scope)
  +if not isinstance(target, dict):# Y.Kawada
  +r_target = dict(target.iteritems())
  +
  +check_policy(context, func.__name__, r_target, scope)
   return func(self, context, target, *args, **kwargs)
   return wrapped
   return outer
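
  Note that the proposed patch leaves r_target undefined when target is
  already a dict; a hedged corrected sketch (check_policy is passed in as
  a parameter only to keep the sketch self-contained):

      import functools

      def policy_decorator(scope, check_policy):
          # Hedged sketch of the corrected wrapper: handle both plain
          # dicts and instance objects.
          def outer(func):
              @functools.wraps(func)
              def wrapped(self, context, target, *args, **kwargs):
                  if isinstance(target, dict):
                      r_target = target
                  else:
                      r_target = dict(target.iteritems())  # Python 2, as in the patch
                  check_policy(context, func.__name__, r_target, scope)
                  return func(self, context, target, *args, **kwargs)
              return wrapped
          return outer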

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1246987/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290807] [NEW] Resize on vCenter failed because of _VM_REFS_CACHE

2014-03-11 Thread Feng Xi Yan
Public bug reported:

This bug is against the latest Icehouse code.

The resize action in a VMware environment always fails.

The reason is that nova resized the renamed "-orig" VM rather than the
newly cloned VM.

It is caused by an outdated vm_ref in _VM_REFS_CACHE.

In nova/virt/vmwareapi/vmops.py:

    def finish_migration(self, context, migration, instance, disk_info,
                         network_info, image_meta, resize_instance=False,
                         block_device_info=None, power_on=True):
        """Completes a resize, turning on the migrated instance."""
        if resize_instance:
            client_factory = self._session._get_vim().client.factory
            vm_ref = vm_util.get_vm_ref(self._session, instance)
            vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
                                                        instance)
            reconfig_task = self._session._call_method(
                                        self._session._get_vim(),
                                        "ReconfigVM_Task", vm_ref,
                                        spec=vm_resize_spec)
            ...

From this code, we can see that vm_ref is obtained via vm_util.get_vm_ref.

In nova/virt/vmwareapi/vm_util.py:

    @vm_ref_cache_from_instance
    def get_vm_ref(session, instance):
        """Get reference to the VM through uuid or vm name."""
        uuid = instance['uuid']
        vm_ref = (_get_vm_ref_from_vm_uuid(session, uuid) or
                  _get_vm_ref_from_extraconfig(session, uuid) or
                  _get_vm_ref_from_uuid(session, uuid) or
                  _get_vm_ref_from_name(session, instance['name']))
        if vm_ref is None:
            raise exception.InstanceNotFound(instance_id=uuid)
        return vm_ref

The get_vm_ref method is decorated with vm_ref_cache_from_instance, which
first checks the cache variable _VM_REFS_CACHE. But _VM_REFS_CACHE contains
an outdated vm_ref keyed by our instance_uuid, because the virtual
machine's name has changed.
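
A hedged sketch of a possible fix, assuming _VM_REFS_CACHE is a plain
dict keyed by instance uuid (the eviction below is hypothetical, not the
actual nova fix):

    # Hedged sketch: evict the stale cache entry before resolving the
    # reference again in finish_migration, after the clone/rename.
    _VM_REFS_CACHE.pop(instance['uuid'], None)
    vm_ref = vm_util.get_vm_ref(self._session, instance)  # fresh lookup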

** Affects: nova
 Importance: Undecided
 Status: New

** Description changed:

- This code is for ice-house latest code version.
+ This bug is for ice-house latest code version.
  
  The resize action in vmware environment always failed.
  
  The reason is that nova resized the -orign rather than the new
  cloned vm.
  
  It is caused by the outdated vm_ref in _VM_REFS_CACHE.
  
  In nova/virt/vmwareapi/vmops.py:
  
  def finish_migration(self, context, migration, instance, disk_info,
-  network_info, image_meta, resize_instance=False,
-  block_device_info=None, power_on=True):
- Completes a resize, turning on the migrated instance.
- if resize_instance:
- client_factory = self._session._get_vim().client.factory
- vm_ref = vm_util.get_vm_ref(self._session, instance)
- vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
- instance)
- reconfig_task = self._session._call_method(
- self._session._get_vim(),
- ReconfigVM_Task, vm_ref,
- spec=vm_resize_spec)
-...
+  network_info, image_meta, resize_instance=False,
+  block_device_info=None, power_on=True):
+ Completes a resize, turning on the migrated instance.
+ if resize_instance:
+ client_factory = self._session._get_vim().client.factory
+ vm_ref = vm_util.get_vm_ref(self._session, instance)
+ vm_resize_spec = vm_util.get_vm_resize_spec(client_factory,
+ instance)
+ reconfig_task = self._session._call_method(
+ self._session._get_vim(),
+ ReconfigVM_Task, vm_ref,
+ spec=vm_resize_spec)
+    ...
  
  From this code, we can see we get vm_ref by vm_util.get_vm_ref.
  
  In nova/virt/vmwareapi/vm_util.py
  
  @vm_ref_cache_from_instance
  def get_vm_ref(session, instance):
- Get reference to the VM through uuid or vm name.
- uuid = instance['uuid']
- vm_ref = (_get_vm_ref_from_vm_uuid(session, uuid) or
-   _get_vm_ref_from_extraconfig(session, uuid) or
-   _get_vm_ref_from_uuid(session, uuid) or
-   _get_vm_ref_from_name(session, instance['name']))
- if vm_ref is None:
- raise exception.InstanceNotFound(instance_id=uuid)
- return vm_ref
+ Get reference to the VM through uuid or vm name.
+ uuid = instance['uuid']
+ vm_ref = (_get_vm_ref_from_vm_uuid(session, uuid) or
+   _get_vm_ref_from_extraconfig(session, uuid) or
+   

[Yahoo-eng-team] [Bug 1168488] Re: host-list policy irrelevant

2014-03-11 Thread Christopher Yeoh
This is getting handled as part of the process of bubbling up all of the
policy checks to the API level; although targeted at the V3 API, it
will also affect the V2 API.

https://blueprints.launchpad.net/nova/+spec/v3-api-policy

So I'm closing this bug as it will be tracked through the blueprint
instead.

** Changed in: nova
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1168488

Title:
  host-list policy irrelevant

Status in OpenStack Compute (Nova):
  Won't Fix

Bug description:
  There are some compute REST APIs where the policy setting is
  irrelevant because they require admin. host-list is an example.

  To recreate, start with devstack, set up so that you're running as
  demo user.

   $ export OS_USERNAME=demo
   $ export OS_PASSWORD=mypwd
   $ export OS_TENANT_NAME=demo
   $ export OS_AUTH_URL=http://localhost:5000/v2.0
   $ export OS_NO_CACHE=1

   # First try with the default policy:
   $ grep compute_extension:hosts /etc/nova/policy.json
  "compute_extension:hosts": "rule:admin_api",
   $ nova host-list
  ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 
403) (Request-ID: req-b2b9408c-4498-4994-aee7-100cf6acf571)

   # Change policy so that anyone can view hosts:
   $ grep compute_extension:hosts /etc/nova/policy.json
  "compute_extension:hosts": "",
   $ nova host-list
   ERROR: User does not have admin privileges (HTTP 403) (Request-ID: 
req-48983c2e-784c-4bb5-82ac-6116a67f6fe1)

  It was expected that, since I configured the policy so that anyone
  could view hosts, a non-admin user could list hosts.

  Nova should respect the policy that the admin configured and not force
  its own.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1168488/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1240831] Re: Changing policy.json is invalid for creating an aggregate

2014-03-11 Thread Christopher Yeoh
Since this is being handled by the v3-api-policy blueprint and being
tracked by that blueprint, I'm closing this bug report

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1240831

Title:
  Changing policy.json is invalid for creating an aggregate

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  By default, Aggregate actions require an admin user to operate.

  In order to give rights to normal-user, I change it in the policy.json
  , like this:

  from
      "compute_extension:aggregates": "rule:admin_api",
  to
      "compute_extension:aggregates": "",

  But the operation is still rejected.

  

  I checked the code in Nova; the fault is due to
  require_admin_context() in /nova/db/sqlalchemy/api.py.

  That means Nova has checked the policy of one API twice.
  So why twice? The policy has already been checked in the API layer.

  That is what causes the problem.
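
  For illustration, a hedged and simplified sketch of that second check
  (not the exact nova code; nova raises exception.AdminRequired):

      # Hedged sketch of the hard-coded DB-layer check in
      # /nova/db/sqlalchemy/api.py: it looks only at context.is_admin,
      # so editing policy.json cannot affect it.
      def require_admin_context(f):
          def wrapper(context, *args, **kwargs):
              if not getattr(context, 'is_admin', False):
                  raise RuntimeError('admin context required')  # nova: AdminRequired
              return f(context, *args, **kwargs)
          return wrapper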

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1240831/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1245764] Re: The response of availability-zone's index is inconsistent between xml and json format

2014-03-11 Thread Christopher Yeoh
XML support is going away, so I am closing this bug.

** Changed in: nova
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1245764

Title:
  The response of availability-zone's index is inconsistent between xml
  and json format

Status in OpenStack Compute (Nova):
  Invalid

Bug description:
  json format:

  {
      "availability_zone_info": [
          {
              "hosts": null,
              "zone_name": "nova",
              "zone_state": {
                  "available": true
              }
          }
      ]
  }

  xml format:
  <?xml version='1.0' encoding='UTF-8'?>
  <availability_zones xmlns:os-availability-zone="http://docs.openstack.org/compute/ext/availabilityzone/api/v3">
    <availability_zone name="nova">
      <zone_state available="True"/>
      <metadata/>
    </availability_zone>
  </availability_zones>

  
  The JSON format uses 'availability_zone_info', but the XML format uses
  'availability_zones'; we should make them consistent.

  I prefer 'availability_zones'.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1245764/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1287948] Re: Orchestration: AttributeError: 'str' object has no attribute 'year'

2014-03-11 Thread Julie Pichon
*** This bug is a duplicate of bug 1286959 ***
https://bugs.launchpad.net/bugs/1286959

** This bug has been marked a duplicate of bug 1286959
   stack.updated_time is None

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1287948

Title:
  Orchestration: AttributeError: 'str' object has no attribute 'year'

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  I'm running a current devstack. If I have an existing stack, the
  Orchestration/Stacks page returns an error 500.

  [Tue Mar 04 14:04:19.322198 2014] [:error] [pid 27147] Error while rendering table rows.
  [Tue Mar 04 14:04:19.322640 2014] [:error] [pid 27147] Traceback (most recent call last):
  [Tue Mar 04 14:04:19.322804 2014] [:error] [pid 27147]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 1559, in get_rows
  [Tue Mar 04 14:04:19.322933 2014] [:error] [pid 27147]     row = self._meta.row_class(self, datum)
  [Tue Mar 04 14:04:19.323066 2014] [:error] [pid 27147]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 476, in __init__
  [Tue Mar 04 14:04:19.323193 2014] [:error] [pid 27147]     self.load_cells()
  [Tue Mar 04 14:04:19.323316 2014] [:error] [pid 27147]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 502, in load_cells
  [Tue Mar 04 14:04:19.323474 2014] [:error] [pid 27147]     cell = table._meta.cell_class(datum, column, self)
  [Tue Mar 04 14:04:19.323595 2014] [:error] [pid 27147]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 597, in __init__
  [Tue Mar 04 14:04:19.323721 2014] [:error] [pid 27147]     self.data = self.get_data(datum, column, row)
  [Tue Mar 04 14:04:19.323864 2014] [:error] [pid 27147]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 635, in get_data
  [Tue Mar 04 14:04:19.323994 2014] [:error] [pid 27147]     data = column.get_data(datum)
  [Tue Mar 04 14:04:19.324116 2014] [:error] [pid 27147]   File "/opt/stack/horizon/openstack_dashboard/wsgi/../../horizon/tables/base.py", line 350, in get_data
  [Tue Mar 04 14:04:19.324240 2014] [:error] [pid 27147]     data = filter_func(data)
  [Tue Mar 04 14:04:19.324398 2014] [:error] [pid 27147]   File "/usr/lib/python2.7/site-packages/django/utils/timesince.py", line 32, in timesince
  [Tue Mar 04 14:04:19.324517 2014] [:error] [pid 27147]     d = datetime.datetime(d.year, d.month, d.day)
  [Tue Mar 04 14:04:19.324639 2014] [:error] [pid 27147] AttributeError: 'str' object has no attribute 'year'
  [Tue Mar 04 14:04:19.368743 2014] [:error] [pid 27147] Internal Server Error: /project/stacks/

  Steps to reproduce:
  1. Log in as a regular user and go to the Orchestration - Stacks page
  2. Launch a new stack using an existing template, e.g. 
https://github.com/openstack/heat-templates/blob/master/hot/hello_world.yaml
  3. Fill in the information and click Launch
  4. The above error is now displayed on the index page

  Adding the debug info from the heat client call:

  {"stacks": [{"description": "Hello world HOT template that just
  defines a single compute instance. Contains just base features to
  verify base HOT support.\n", "links": [{"href":
  "http://192.168.100.190:8004/v1/3ba0ed232e004fbe9a47721ec5bd9bc6/stacks/heattest/82f1dfef-c3e9-44c3-b62b-e7cc2d325d6e",
  "rel": "self"}], "stack_status_reason": "Stack CREATE started",
  "stack_name": "heattest", "creation_time": "2014-03-04T14:10:04Z",
  "updated_time": null, "stack_status": "CREATE_IN_PROGRESS",
  "id": "82f1dfef-c3e9-44c3-b62b-e7cc2d325d6e"}]}
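
  The "updated_time": null in that payload matches the duplicate bug
  1286959; a hedged guard that normalizes such values before Django's
  timesince filter sees them (the helper name is hypothetical):

      import datetime

      # Hedged sketch: Heat returns ISO 8601 strings (or null), while
      # timesince() needs a real datetime, so convert before rendering.
      def parse_heat_timestamp(value):
          if not value:
              return None
          if isinstance(value, datetime.datetime):
              return value
          return datetime.datetime.strptime(value, '%Y-%m-%dT%H:%M:%SZ')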

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1287948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290854] [NEW] missing tag for a port of resumed instance

2014-03-11 Thread Elena Ezhova
Public bug reported:

If we suspend and then resume an instance, it becomes unreachable from
the tenant network.

$ nova boot --image 263f4823-f43c-4f2a-a845-2221f4a2dad1 --flavor 1 --nic
net-id=61ade795-f123-4880-9ab5-a73e0c1b2e70 server1

$ ping 10.0.0.8 

PING 10.0.0.8 (10.0.0.8) 56(84) bytes of data.
64 bytes from 10.0.0.8: icmp_req=1 ttl=63 time=46.5 ms
64 bytes from 10.0.0.8: icmp_req=2 ttl=63 time=0.675 ms
64 bytes from 10.0.0.8: icmp_req=3 ttl=63 time=0.572 ms

$ nova suspend 9b55928f-e6c7-49b3-9480-0df926dc6a08
$ nova resume 9b55928f-e6c7-49b3-9480-0df926dc6a08 

$ ping 10.0.0.8
PING 10.0.0.8 (10.0.0.8) 56(84) bytes of data.
From 172.24.4.2 icmp_seq=1 Destination Host Unreachable
From 172.24.4.2 icmp_seq=2 Destination Host Unreachable
From 172.24.4.2 icmp_seq=3 Destination Host Unreachable

ovs-vsctl shows that after resume the instance's port lacks its VLAN tag:

before suspend:
Port tap69cb10fd-43
tag: 1
Interface tap69cb10fd-43

after suspend:
Port tap69cb10fd-43
tag: 1
Interface tap69cb10fd-43

after resume:
 Port tap69cb10fd-43
Interface tap69cb10fd-43


This behavior is caused by the patch
https://review.openstack.org/#/c/67981/7, which introduced removing and
recreating existing interfaces.

What's more, it seems that the ovs-agent doesn't notice the tag's
disappearance, and that's why it doesn't mark the port as updated.
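
A hedged sketch of what noticing the disappearance could look like in the
agent's polling loop (names hypothetical, not the actual ovs agent code):

    # Hedged sketch: treat any port whose VLAN tag vanished as updated,
    # so the agent rewires it on the next pass.
    def find_ports_missing_tags(ports, get_tag):
        updated = set()
        for port in ports:
            if get_tag(port) is None:  # hypothetical per-port tag lookup
                updated.add(port)
        return updated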

** Affects: nova
 Importance: High
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290854

Title:
  missing tag for a port of resumed instance

Status in OpenStack Compute (Nova):
  New

Bug description:
  If we suspend and then resume an instance, it becomes unreachable from
  the tenant network.

  $ nova boot --image 263f4823-f43c-4f2a-a845-2221f4a2dad1 --flavor 1
  --nic net-id=61ade795-f123-4880-9ab5-a73e0c1b2e70 server1

  $ ping 10.0.0.8   
  
  PING 10.0.0.8 (10.0.0.8) 56(84) bytes of data.
  64 bytes from 10.0.0.8: icmp_req=1 ttl=63 time=46.5 ms
  64 bytes from 10.0.0.8: icmp_req=2 ttl=63 time=0.675 ms
  64 bytes from 10.0.0.8: icmp_req=3 ttl=63 time=0.572 ms

  $ nova suspend 9b55928f-e6c7-49b3-9480-0df926dc6a08
  $ nova resume 9b55928f-e6c7-49b3-9480-0df926dc6a08 

  $ ping 10.0.0.8
  PING 10.0.0.8 (10.0.0.8) 56(84) bytes of data.
  From 172.24.4.2 icmp_seq=1 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=2 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=3 Destination Host Unreachable

  ovs-vsctl shows that after resume the instance's port lacks its VLAN
  tag:

  before suspend:
  Port tap69cb10fd-43
  tag: 1
  Interface tap69cb10fd-43

  after suspend:
  Port tap69cb10fd-43
  tag: 1
  Interface tap69cb10fd-43

  after resume:
   Port tap69cb10fd-43
  Interface tap69cb10fd-43

  
  This behavior is caused by the patch
  https://review.openstack.org/#/c/67981/7, which introduced removing and
  recreating existing interfaces.

  What's more, it seems that the ovs-agent doesn't notice the tag's
  disappearance, and that's why it doesn't mark the port as updated.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290857] [NEW] Image list api is wrong

2014-03-11 Thread Telles Mota Vidal Nóbrega
Public bug reported:

The Compute API documentation explains that to list images you need to
call v2/images, but this information is wrong: the correct URL is
v2/{project_id}/images.
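
For illustration, a hedged request against the corrected URL (the
endpoint, project ID, and token below are placeholders):

    import requests

    # Hedged sketch: the project (tenant) ID is part of the v2 path.
    compute = 'http://controller:8774'  # placeholder endpoint
    project_id = 'PROJECT_ID'           # placeholder
    resp = requests.get('%s/v2/%s/images' % (compute, project_id),
                        headers={'X-Auth-Token': 'TOKEN'})  # placeholder
    print(resp.json())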

** Affects: nova
 Importance: Undecided
 Assignee: Telles Mota Vidal Nóbrega (tellesmvn)
 Status: Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290857

Title:
  Image list api is wrong

Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  The Compute API documentation explains that to list images you need to
  call v2/images, but this information is wrong: the correct URL is
  v2/{project_id}/images.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1265494] Re: Openstack Nova: Unpause after host reboot fails

2014-03-11 Thread Russell Bryant
** Also affects: nova/havana
   Importance: Undecided
   Status: New

** Tags added: icehouse-rc-potential

** Changed in: nova
   Importance: Medium => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1265494

Title:
  Openstack Nova: Unpause after host reboot fails

Status in OpenStack Compute (Nova):
  In Progress
Status in OpenStack Compute (nova) havana series:
  New

Bug description:
  Description of problem:
  Unpausing an instance fails if the host has rebooted.

  Version-Release number of selected component (if applicable):
  RHEL: release 6.5 (Santiago)
  openstack-nova-api-2013.2.1-1.el6ost.noarch
  openstack-nova-compute-2013.2.1-1.el6ost.noarch
  openstack-nova-scheduler-2013.2.1-1.el6ost.noarch
  openstack-nova-common-2013.2.1-1.el6ost.noarch
  openstack-nova-console-2013.2.1-1.el6ost.noarch
  openstack-nova-conductor-2013.2.1-1.el6ost.noarch
  openstack-nova-novncproxy-2013.2.1-1.el6ost.noarch
  openstack-nova-cert-2013.2.1-1.el6ost.noarch

  How reproducible:
  Every time 

  Steps to Reproduce:
  1. Boot an instance 
  2. Pause that instance
  3. Reboot host
  4. Unpause instance  

  Actual results:
  can't unpause instance stuck in status paused, power state - shutdown

  Expected results:
  Instance should unpause, return to running state

  Additional info:

  virsh list --all --managed-save
  The ID is missing from the paused instance (pausecirros); its state is
  shut off.

  [root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
   Id    Name                 State
  ----------------------------------------------------
   1     instance-0003        running
   2     instance-0002        running
   -     instance-0001        shut off

  [root@orange-vdse ~(keystone_admin)]# nova list  (notice nova status paused)
  +--------------------------------------+---------------+--------+------------+-------------+-----------------+
  | ID                                   | Name          | Status | Task State | Power State | Networks        |
  +--------------------------------------+---------------+--------+------------+-------------+-----------------+
  | ebe310c2-d715-45e5-83b6-32717af1ac90 | cirros        | ACTIVE | None       | Running     | net=192.168.1.4 |
  | 3ef89feb-414f-4524-b806-f14044efdb14 | pausecirros   | PAUSED | None       | Shutdown    | net=192.168.1.5 |
  | 8bcae041-2f92-4ae2-a2c2-ee59b067ac76 | suspendcirros | ACTIVE | None       | Running     | net=192.168.1.2 |
  +--------------------------------------+---------------+--------+------------+-------------+-----------------+

  
  Testing without rebooting the host, the ID/state (1/paused) of the
  instance (cirros) is ok and it unpauses ok.

  [root@orange-vdse ~(keystone_admin)]# virsh list --all --managed-save
   Id    Name                 State
  ----------------------------------------------------
   1     instance-0003        paused
   2     instance-0002        running
   -     instance-0001        shut off

  +--------------------------------------+---------------+--------+------------+-------------+-----------------+
  | ID                                   | Name          | Status | Task State | Power State | Networks        |
  +--------------------------------------+---------------+--------+------------+-------------+-----------------+
  | ebe310c2-d715-45e5-83b6-32717af1ac90 | cirros        | PAUSED | None       | Paused      | net=192.168.1.4 |
  | 3ef89feb-414f-4524-b806-f14044efdb14 | pausecirros   | PAUSED | None       | Shutdown    | net=192.168.1.5 |
  | 8bcae041-2f92-4ae2-a2c2-ee59b067ac76 | suspendcirros | ACTIVE | None       | Running     | net=192.168.1.2 |
  +--------------------------------------+---------------+--------+------------+-------------+-----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1265494/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1241219] Re: Adding a LBaaS VIP via horizon fails when the optional Specify a free IP address from my_subnet is not specified

2014-03-11 Thread Julie Pichon
*** This bug is a duplicate of bug 1252811 ***
https://bugs.launchpad.net/bugs/1252811

** This bug has been marked a duplicate of bug 1252811
   [LBaaS] Add VIP requires knowing a free ip address instead of subnet

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1241219

Title:
  Adding a LBaaS VIP via horizon fails when the optional Specify a free
  IP address from my_subnet is not specified

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  Version
  ===
  Havana on RHEL

  
  Description
  ===

  I'm trying to add a VIP to my pool via Horizon. I didn't specify the
  IP address (it's optional both in the CLI and in Horizon) because I
  wanted to be assigned the next free IP address from my subnet
  automatically; however, the whole operation of adding the VIP fails
  with the following error:

  Error: Unable to add VIP vlan_212_vip.
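
  The trace below suggests an empty string is sent for the optional
  address; a hedged client-side sketch that omits the field instead (the
  body layout follows the lb-vip-create parameters, but this is not the
  actual Horizon code):

      # Hedged sketch: drop the optional address key entirely when the
      # user leaves it blank, instead of sending address=''.
      def build_vip_body(name, protocol, protocol_port, subnet_id,
                         pool_id, address=None):
          vip = {'name': name, 'protocol': protocol,
                 'protocol_port': protocol_port, 'subnet_id': subnet_id,
                 'pool_id': pool_id}
          if address:
              vip['address'] = address
          return {'vip': vip}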

  
  From /var/log/neutron/server.log
  
  2013-10-18 00:15:20.569 6309 ERROR neutron.api.v2.resource [-] create failed
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource Traceback (most recent call last):
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py", line 84, in resource
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource     result = method(request=request, **args)
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 341, in create
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource     allow_bulk=self._allow_bulk)
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource   File "/usr/lib/python2.6/site-packages/neutron/api/v2/base.py", line 594, in prepare_request_body
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource     raise webob.exc.HTTPBadRequest(msg)
  2013-10-18 00:15:20.569 6309 TRACE neutron.api.v2.resource HTTPBadRequest: Invalid input for address. Reason: '' is not a valid IP address.

  
  The corresponding cli command' help output
  ==

  $ neutron lb-vip-create 
  usage: neutron lb-vip-create [-h] [-f {shell,table}] [-c COLUMN]
   [--variable VARIABLE] [--prefix PREFIX]
   [--request-format {json,xml}]
   [--tenant-id TENANT_ID] [--address ADDRESS]
   [--admin-state-down]
   [--connection-limit CONNECTION_LIMIT]
   [--description DESCRIPTION] --name NAME
   --protocol-port PROTOCOL_PORT --protocol PROTOCOL
   --subnet-id SUBNET_ID
   pool

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1241219/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1267362] Re: admin overview usage summary disk hours

2014-03-11 Thread Julie Pichon
Adding openstack-manuals; maybe we can also clarify in the documentation
what "Disk GB Hours" actually means.

** Also affects: openstack-manuals
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1267362

Title:
  admin overview usage summary disk hours

Status in OpenStack Dashboard (Horizon):
  Confirmed
Status in OpenStack Manuals:
  New

Bug description:
  when visiting the admin overview page,
  http://localhost:8000/admin/

  the usage summary lists

  "Disk GB Hours"

  The same term "This Period's GB-Hours: 0.00" can be found e.g. here:
  https://github.com/openstack/horizon/blob/master/horizon/templates/horizon/common/_usage_summary.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1267362/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290895] [NEW] Difficult to understand message when using incorrect role against object in Neutron

2014-03-11 Thread Sudipta Biswas
Public bug reported:

When a user runs an action against an object in neutron for which they
don't have authority (perhaps their role allows reading the object, but
not updating it), they get the message "The resource could not be found."
For example: the user doesn't have the privilege to edit a network and
attempts to do so, but ends up getting the "resource not found" message.

This is a bad message because the object they just read is now reported
as not existing. This is not true; the root issue is that they do not
have authority over it.

One can argue that for security reasons we should state that the object
does not exist. However, it creates an odd scenario where certain roles
can read an object but then not write to it.

I'm proposing that we change the message to "The resource could not be
found, or the user's role does not have sufficient privileges to run the
operation."

Two identified test cases applicable to this would be removing and
editing networks.

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290895

Title:
  Difficult to understand message when using incorrect role against
  object in Neutron

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  When a user runs an action against an object in neutron for which they
  don't have authority (perhaps their role allows reading the object, but
  not updating it), they get the message "The resource could not be found."
  For example: the user doesn't have the privilege to edit a network and
  attempts to do so, but ends up getting the "resource not found" message.

  This is a bad message because the object they just read is now reported
  as not existing. This is not true; the root issue is that they do not
  have authority over it.

  One can argue that for security reasons we should state that the object
  does not exist. However, it creates an odd scenario where certain roles
  can read an object but then not write to it.

  I'm proposing that we change the message to "The resource could not be
  found, or the user's role does not have sufficient privileges to run the
  operation."

  Two identified test cases applicable to this would be removing and
  editing networks.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290895/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290903] [NEW] xenapi: test_rescue incorrectly verifies original swap wasn't attached

2014-03-11 Thread Johannes Erdfelt
Public bug reported:

The code currently does:

    vdi_uuids = []
    for vbd_uuid in rescue_vm["VBDs"]:
        vdi_uuids.append(xenapi_fake.get_record('VBD', vbd_uuid)["VDI"])
    self.assertNotIn("swap", vdi_uuids)

vdi_uuids is a list of uuid references. "swap" will never match a uuid,
so that test will always be true, even if the code is broken.
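
A hedged corrected assertion, assuming the test keeps a reference to the
original swap VDI (swap_vdi_ref is a hypothetical name):

    # Hedged sketch: compare against the actual swap VDI reference
    # instead of the literal string "swap".
    vdi_refs = [xenapi_fake.get_record('VBD', vbd_ref)['VDI']
                for vbd_ref in rescue_vm['VBDs']]
    self.assertNotIn(swap_vdi_ref, vdi_refs)  # swap_vdi_ref: hypothetical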

** Affects: nova
 Importance: Undecided
 Assignee: Johannes Erdfelt (johannes.erdfelt)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Johannes Erdfelt (johannes.erdfelt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290903

Title:
  xenapi: test_rescue incorrectly verifies original swap wasn't attached

Status in OpenStack Compute (Nova):
  New

Bug description:
  The code currently does:

      vdi_uuids = []
      for vbd_uuid in rescue_vm["VBDs"]:
          vdi_uuids.append(xenapi_fake.get_record('VBD', vbd_uuid)["VDI"])
      self.assertNotIn("swap", vdi_uuids)

  vdi_uuids is a list of uuid references. "swap" will never match a
  uuid, so that test will always be true, even if the code is broken.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1290903/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290909] [NEW] callable url in LinkAction TemplateSyntaxError

2014-03-11 Thread Matthew D. Wood
Public bug reported:

According to the doc-string for
horizon.tables.actions.LinkAction.get_link_url:

    If ``url`` is callable it will call the function.

Here's the snippet that does the call:

    if callable(self.url):
        return self.url(datum, **self.kwargs)

However, self.kwargs is not set anywhere (it should probably be set in
__init__).


This will result in an AttributeError or TemplateSyntaxError.


In searching through the code-base, I don't see a single use of a callable
url, so perhaps this is dead code? The dead-code hypothesis is supported
by the class's doc-string, which hints that overriding get_link_url is the
correct strategy:

.. attribute:: url

A string or a callable which resolves to a url to be used as the link
target. You must either define the ``url`` attribute or override
the ``get_link_url`` method on the class.
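
Given that, a hedged example of the documented alternative, overriding
get_link_url in a subclass (the reverse() target is illustrative only):

    from django.core import urlresolvers
    from horizon import tables

    class ViewDetail(tables.LinkAction):
        name = "detail"
        verbose_name = "View Detail"

        def get_link_url(self, datum=None):
            # "horizon:project:instances:detail" is an illustrative route.
            return urlresolvers.reverse("horizon:project:instances:detail",
                                        args=(datum.id,))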

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1290909

Title:
  callable url in LinkAction TemplateSyntaxError

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  According to the doc-string for
  horizon.tables.actions.LinkAction.get_link_url:

   If ``url`` is callable it will call the function.

  Here's the snippit that does the call:

  if callable(self.url):
  return self.url(datum, **self.kwargs)

  However, self.kwargs is not set anywhere (should probably be set in
  __init__).

  
  This will result in an AttributeError or TemplateSyntaxError.

  
  In searching through the code-base, I don't see a single use of a callable
  url, so perhaps this is dead code? The dead-code hypothesis is supported
  by the class's doc-string, which hints that overriding get_link_url is
  the correct strategy:

  .. attribute:: url

  A string or a callable which resolves to a url to be used as the link
  target. You must either define the ``url`` attribute or override
  the ``get_link_url`` method on the class.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1290909/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1269740] Re: metadata agent fails with self-signed SSL certs

2014-03-11 Thread Sascha Peilicke
*** This bug is a duplicate of bug 1263872 ***
https://bugs.launchpad.net/bugs/1263872

** This bug has been marked a duplicate of bug 1263872
   metadata proxy not support Https Metadata-api

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1269740

Title:
  metadata agent fails with self-signed SSL certs

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When using a self-signed certificate for the network service endpoint,
  the metadata agent has to pass the 'insecure' flag to neutronclient.
  Otherwise requests will fail due to the failed certificate validity
  check.
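
  A hedged sketch of the flag being passed through (the credential values
  below are placeholders):

      from neutronclient.v2_0 import client

      # Hedged sketch: the metadata agent's neutron client needs
      # insecure=True when the endpoint uses a self-signed certificate.
      qclient = client.Client(username='neutron',         # placeholder
                              password='secret',          # placeholder
                              tenant_name='service',      # placeholder
                              auth_url='https://keystone:5000/v2.0',
                              insecure=True)  # skip certificate validation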

  Grizzly trace:

  2013-10-30 16:09:47 ERROR [quantum.agent.metadata.agent] Unexpected error.
  Traceback (most recent call last):
    File "/usr/lib64/python2.6/site-packages/quantum/agent/metadata/agent.py", line 86, in __call__
      instance_id = self._get_instance_id(req)
    File "/usr/lib64/python2.6/site-packages/quantum/agent/metadata/agent.py", line 110, in _get_instance_id
      device_owner=DEVICE_OWNER_ROUTER_INTF)['ports']
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 107, in with_params
      ret = self.function(instance, *args, **kwargs)
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 258, in list_ports
      **_params)
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 999, in list
      for r in self._pagination(collection, path, **params):
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 1012, in _pagination
      res = self.get(path, params=params)
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 985, in get
      headers=headers, params=params)
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 970, in retry_request
      headers=headers, params=params)
    File "/usr/lib64/python2.6/site-packages/quantumclient/v2_0/client.py", line 907, in do_request
      resp, replybody = self.httpclient.do_request(action, method, body=body)
    File "/usr/lib64/python2.6/site-packages/quantumclient/client.py", line 143, in do_request
      self.authenticate()
    File "/usr/lib64/python2.6/site-packages/quantumclient/client.py", line 199, in authenticate
      raise exceptions.Unauthorized(message=body)
  Unauthorized: Server presented certificate that does not match host d00-1e-c9-4c-44-30.cloud.susestudio.com: {'notAfter': 'Jul 10 12:00:00 2015 GMT', 'subjectAltName': (('DNS', '*.susestudio.com'), ('DNS', 'susestudio.com')), 'subject': ((('countryName', u'US'),), (('stateOrProvinceName', u'Utah'),), (('localityName', u'Provo'),), (('organizationName', u'Novell, Inc.'),), (('commonName', u'*.susestudio.com'),))}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1269740/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290948] [NEW] host aggregate non modal view broken

2014-03-11 Thread Cindy Lu
Public bug reported:

After I was automatically logged out and logged back in again, Horizon
went to the non-modal view, which only showed HTML with missing data.
Please see the attached image.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: 031114 - host aggregate.png
   
https://bugs.launchpad.net/bugs/1290948/+attachment/4018437/+files/031114%20-%20host%20aggregate.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1290948

Title:
  host aggregate non modal view broken

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  After I was automatically logged out and logged back in again, Horizon
  went to the non-modal view, which only showed HTML with missing data.
  Please see the attached image.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1290948/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1288846] Re: Common context module missed in Glance

2014-03-11 Thread Victor Sergeyev
Agree, it sounds reasonable.

** Also affects: oslo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1288846

Title:
  Common context module missed in Glance

Status in OpenStack Image Registry and Delivery Service (Glance):
  Triaged
Status in Oslo - a Library of Common OpenStack Code:
  In Progress

Bug description:
  The common module glance.openstack.common.db.sqlalchemy.utils requires
  the common context module [1], but this module is missing in Glance, so
  we get an error when we try to use the sqlalchemy.utils module. See an
  example at [2].

  [1] 
https://github.com/openstack/glance/blob/master/glance/openstack/common/db/sqlalchemy/utils.py#L41
  [2] 
http://logs.openstack.org/77/43277/19/check/gate-glance-python27/a4b3b56/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1288846/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1286996] Re: FloatingIPNotFound: Floating IP c5afcf9d-44aa-4839-86b6-d26e2ff23188 could not be found

2014-03-11 Thread James E. Blair
This is a real test failure, not an infrastructure bug.


** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1286996

Title:
  FloatingIPNotFound: Floating IP c5afcf9d-44aa-4839-86b6-d26e2ff23188
  could not be found

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/07/72307/2/check/check-tempest-dsvm-neutron/b3222c0/
  
http://logs.openstack.org/07/72307/2/check/check-tempest-dsvm-neutron/b3222c0/logs/screen-q-svc.txt.gz

  2014-03-02 22:37:46.168 6302 ERROR neutron.openstack.common.rpc.amqp 
[req-084f3a00-d319-4b10-905f-057dfc7ce047 None] Exception during message 
handling
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
Traceback (most recent call last):
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 462, in 
_process_data
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
**args)
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 45, in dispatch
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
neutron_ctxt, version, method, namespace, **kwargs)
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
result = getattr(proxyobj, method)(ctxt, **kwargs)
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/l3_rpc_base.py, line 109, in 
update_floatingip_statuses
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
status)
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/l3_db.py, line 682, in 
update_floatingip_status
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
floatingip_db = self._get_floatingip(context, floatingip_id)
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp   File 
/opt/stack/new/neutron/neutron/db/l3_db.py, line 448, in _get_floatingip
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
raise l3.FloatingIPNotFound(floatingip_id=id)
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
FloatingIPNotFound: Floating IP c5afcf9d-44aa-4839-86b6-d26e2ff23188 could not 
be found
  2014-03-02 22:37:46.168 6302 TRACE neutron.openstack.common.rpc.amqp 
  2014-03-02 22:37:46.169 6302 ERROR neutron.openstack.common.rpc.common 
[req-084f3a00-d319-4b10-905f-057dfc7ce047 None] Returning exception Floating IP 
c5afcf9d-44aa-4839-86b6-d26e2ff23188 could not be found to caller
  2014-03-02 22:37:46.169 6302 ERROR neutron.openstack.common.rpc.common 
[req-084f3a00-d319-4b10-905f-057dfc7ce047 None] ['Traceback (most recent call 
last):\n', '  File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 462, in 
_process_data\n**args)\n', '  File 
/opt/stack/new/neutron/neutron/common/rpc.py, line 45, in dispatch\n
neutron_ctxt, version, method, namespace, **kwargs)\n', '  File 
/opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, 
in dispatch\nresult = getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File 
/opt/stack/new/neutron/neutron/db/l3_rpc_base.py, line 109, in 
update_floatingip_statuses\nstatus)\n', '  File 
/opt/stack/new/neutron/neutron/db/l3_db.py, line 682, in 
update_floatingip_status\nfloatingip_db = self._get_floatingip(context, 
floatingip_id)\n', '  File /opt/stack/new/neutron/neutron/db/l3_db.py, line 
448, in _get_floatingip\nraise l3.FloatingIPNotFound(floatingip_id=id)\n', 
'Floati
 ngIPNotFound: Floating IP c5afcf9d-44aa-4839-86b6-d26e2ff23188 could not be 
found\n']

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1286996/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290966] [NEW] missing auth method configuration in oauth1 documentation

2014-03-11 Thread Steve Martinelli
Public bug reported:

The oauth1 configuration page
(http://docs.openstack.org/developer/keystone/extensions/oauth1-configuration.html)
in the documentation should also mention that oauth1 must be added as
a default auth method.
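
A minimal sketch of the kind of snippet the page could add, assuming the
stock method list and the oauth1 plugin class name shipped with Keystone
(exact values should be verified against the target release):

    [auth]
    methods = external,password,token,oauth1
    oauth1 = keystone.auth.plugins.oauth1.OAuth1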

** Affects: keystone
 Importance: Medium
 Assignee: Steve Martinelli (stevemar)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Steve Martinelli (stevemar)

** Changed in: keystone
Milestone: None => icehouse-rc1

** Changed in: keystone
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1290966

Title:
  missing auth method configuration in oauth1 documentation

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  The oauth1 configuration page
  
(http://docs.openstack.org/developer/keystone/extensions/oauth1-configuration.html)
  in the documentation should also mention that oauth1 must be added
  as a default auth method.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1290966/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290969] [NEW] Change to require listing all known stores in glance-api.conf breaks swift-backed glance on upgrade

2014-03-11 Thread Clint Byrum
Public bug reported:

This change:

https://review.openstack.org/#q,I82073352641d3eb2ab3d6e9a6b64afc99a30dcc7,n,z

Causes an upgrade failure for deployments whose images are backed by
swift, as the swift store is no longer in the default list.

This change should be reverted, and relying on the default list should
instead log a deprecation warning for one cycle.
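
In the meantime, deployments that need swift to keep working can list
the stores explicitly; a minimal sketch for glance-api.conf, assuming
the known_stores option introduced by the change above:

    [DEFAULT]
    known_stores = glance.store.filesystem.Store, glance.store.swift.Store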

** Affects: glance
 Importance: Undecided
 Assignee: Clint Byrum (clint-fewbar)
 Status: In Progress

** Affects: tripleo
 Importance: Critical
 Assignee: Derek Higgins (derekh)
 Status: In Progress

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: glance
 Assignee: (unassigned) => Clint Byrum (clint-fewbar)

** Changed in: tripleo
 Assignee: (unassigned) => Derek Higgins (derekh)

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: glance
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1290969

Title:
  Change to require listing all known stores in glance-api.conf breaks
  swift-backed glance on upgrade

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in tripleo - openstack on openstack:
  In Progress

Bug description:
  This change:

  https://review.openstack.org/#q,I82073352641d3eb2ab3d6e9a6b64afc99a30dcc7,n,z

  Causes an upgrade failure for deployments whose images are backed by
  swift, as the swift store is no longer in the default list.

  This change should be reverted, and relying on the default list should
  instead log a deprecation warning for one cycle.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1290969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1280134] Re: Can't run python 2.6 tests with nosetests

2014-03-11 Thread Clark Boylan
** Changed in: openstack-ci
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to anvil.
https://bugs.launchpad.net/bugs/1280134

Title:
  Can't run python 2.6 tests with nosetests

Status in ANVIL for forging OpenStack.:
  Fix Released
Status in OpenStack Core Infrastructure:
  Invalid

Bug description:
  In anvil, tox.ini allows the virtualenv to use site packages, so pip
  finds nose already installed on the system. But /usr/bin/nosetests uses
  the system python (/usr/bin/python), not the python from the venv. So
  the tests are effectively run outside of the venv and fail with lots of
  import errors.

  Example logs: http://logs.openstack.org/89/73289/2/gate/gate-anvil-
  python26/02f7f17/console.html

  2014-02-13 18:38:04.816 | WARNING:test command found but not installed in 
testenv
  2014-02-13 18:38:04.816 |   cmd: /usr/bin/nosetests
  2014-02-13 18:38:04.816 |   env: 
/home/jenkins/workspace/gate-anvil-python26/.tox/py26
  2014-02-13 18:38:04.816 | Maybe forgot to specify a dependency?

  Of course nose is listed among deps in tox.ini.
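
  A minimal sketch of the tox.ini combination that triggers this,
  assuming simplified settings (the real file has more):

    [testenv]
    sitepackages = True
    # the 'nose' dep is already satisfied by the system package, so
    # .tox/py26/bin/nosetests is never created and 'nosetests' resolves
    # to /usr/bin/nosetests, which runs under the system python
    deps = nose
    commands = nosetests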

To manage notifications about this bug go to:
https://bugs.launchpad.net/anvil/+bug/1280134/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284882] Re: ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=35413): Max retries exceeded with url: /v2/images (Caused by class 'socket.error': [Errno 111] ECONNREFU

2014-03-11 Thread James E. Blair
** Also affects: glance
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1284882

Title:
  ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=35413): Max
  retries exceeded with url: /v2/images (Caused by <class
  'socket.error'>: [Errno 111] ECONNREFUSED)

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  http://logs.openstack.org/86/74886/7/check/gate-glance-
  python26/5d3577f/

  2014-02-25 08:53:24.477 | running=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
  2014-02-25 08:53:24.477 | OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
  2014-02-25 08:53:24.477 | OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-160} \
  2014-02-25 08:53:24.477 | ${PYTHON:-python} -m subunit.run discover -t ./ 
./glance/tests  
  2014-02-25 08:53:24.477 | 
==
  2014-02-25 08:53:24.477 | FAIL: 
glance.tests.functional.v2.test_images.TestImages.test_image_visibility_to_different_users
  2014-02-25 08:53:24.477 | tags: worker-0
  2014-02-25 08:53:24.477 | 
--
  2014-02-25 08:53:24.477 | Traceback (most recent call last):
  2014-02-25 08:53:24.478 |   File glance/tests/functional/v2/test_images.py, 
line 1673, in test_image_visibility_to_different_users
  2014-02-25 08:53:24.478 | response = requests.post(path, headers=headers, 
data=data)
  2014-02-25 08:53:24.478 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/api.py,
 line 88, in post
  2014-02-25 08:53:24.478 | return request('post', url, data=data, **kwargs)
  2014-02-25 08:53:24.478 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/api.py,
 line 44, in request
  2014-02-25 08:53:24.478 | return session.request(method=method, url=url, 
**kwargs)
  2014-02-25 08:53:24.478 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py,
 line 383, in request
  2014-02-25 08:53:24.478 | resp = self.send(prep, **send_kwargs)
  2014-02-25 08:53:24.478 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/sessions.py,
 line 486, in send
  2014-02-25 08:53:24.478 | r = adapter.send(request, **kwargs)
  2014-02-25 08:53:24.478 |   File 
/home/jenkins/workspace/gate-glance-python26/.tox/py26/lib/python2.6/site-packages/requests/adapters.py,
 line 378, in send
  2014-02-25 08:53:24.478 | raise ConnectionError(e)
  2014-02-25 08:53:24.479 | ConnectionError: 
HTTPConnectionPool(host='127.0.0.1', port=35413): Max retries exceeded with 
url: /v2/images (Caused by <class 'socket.error'>: [Errno 111] ECONNREFUSED)
  2014-02-25 08:53:24.479 | 
==
  2014-02-25 08:53:24.479 | FAIL: process-returncode
  2014-02-25 08:53:24.479 | tags: worker-0
  2014-02-25 08:53:24.479 | 
--
  2014-02-25 08:53:24.479 | Binary content:
  2014-02-25 08:53:24.479 |   traceback (test/plain; charset=utf8)
  2014-02-25 08:53:24.479 | Ran 2194 tests in 779.097s
  2014-02-25 08:53:24.479 | FAILED (id=0, failures=2, skips=33)
  2014-02-25 08:53:24.516 | error: testr failed (1)
  2014-02-25 08:53:24.538 | ERROR: InvocationError: 
'/home/jenkins/workspace/gate-glance-python26/.tox/py26/bin/python -m 
glance.openstack.common.lockutils python setup.py test --slowest 
--testr-args=--concurrency 1 '
  2014-02-25 08:53:24.538 | ___ summary 

  2014-02-25 08:53:24.538 | ERROR:   py26: commands failed

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1284882/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285288] Re: gate-nova-docs No module named ....

2014-03-11 Thread James E. Blair
** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1285288

Title:
  gate-nova-docs  No module named 

Status in OpenStack Compute (Nova):
  New

Bug description:
  Using openstack theme from 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/oslosphinx/theme
  2014-02-26 09:46:40.661 | loading intersphinx inventory from 
http://docs.python.org/objects.inv...
  2014-02-26 09:46:42.430 | loading intersphinx inventory from 
http://swift.openstack.org/objects.inv...
  2014-02-26 09:46:42.778 | building [html]: all source files
  2014-02-26 09:46:42.778 | updating environment: 48 added, 0 changed, 0 removed
  2014-02-26 09:46:42.779 | reading sources... [  2%] 
devref/addmethod.openstackapi
  2014-02-26 09:46:42.795 | reading sources... [  4%] devref/aggregates
  2014-02-26 09:46:42.815 | reading sources... [  6%] devref/api
  2014-02-26 09:46:42.831 | Traceback (most recent call last):
  2014-02-26 09:46:42.831 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:42.831 | __import__(self.modname)
  2014-02-26 09:46:42.831 | ImportError: No module named cloud
  2014-02-26 09:46:43.754 | Traceback (most recent call last):
  2014-02-26 09:46:43.754 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:43.754 | __import__(self.modname)
  2014-02-26 09:46:43.754 | ImportError: No module named backup_schedules
  2014-02-26 09:46:43.757 | Traceback (most recent call last):
  2014-02-26 09:46:43.757 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:43.757 | __import__(self.modname)
  2014-02-26 09:46:43.757 | ImportError: No module named faults
  2014-02-26 09:46:43.760 | Traceback (most recent call last):
  2014-02-26 09:46:43.761 |   File 
/home/jenkins/workspace/gate-nova-docs/.tox/venv/local/lib/python2.7/site-packages/sphinx/ext/autodoc.py,
 line 321, in import_object
  2014-02-26 09:46:43.761 | __import__(self.modname)
  2014-02-26 09:46:43.761 | ImportError: No module named flavors
  2014-02-26 09:46:43.764 | Traceback (most recent call last):
  2014-02-26 09:46:4

  Sample at http://logs.openstack.org/17/66917/2/check/gate-nova-
  docs/6a4637f/console.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1285288/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284232] Re: test_pause_paused_server fails with Cannot 'unpause' while instance is in vm_state active

2014-03-11 Thread James E. Blair
This is a real test failure, not an infrastructure bug.


** Also affects: nova
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

** Tags added: gate-failure

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1284232

Title:
  test_pause_paused_server fails with Cannot 'unpause' while instance
  is in vm_state active

Status in OpenStack Compute (Nova):
  New

Bug description:
  Tempest test is failing a negative test, test_pause_paused_server.

  From http://logs.openstack.org/29/74229/5/check/check-tempest-dsvm-
  postgres-full/0878106/

  2014-02-21 22:54:26.645 | 
  2014-02-21 22:54:26.645 | traceback-1: {{{
  2014-02-21 22:54:26.645 | Traceback (most recent call last):
  2014-02-21 22:54:26.645 |   File 
tempest/services/compute/json/servers_client.py, line 371, in unpause_server
  2014-02-21 22:54:26.646 | return self.action(server_id, 'unpause', None, 
**kwargs)
  2014-02-21 22:54:26.646 |   File 
tempest/services/compute/json/servers_client.py, line 196, in action
  2014-02-21 22:54:26.646 | post_body)
  2014-02-21 22:54:26.646 |   File tempest/common/rest_client.py, line 177, 
in post
  2014-02-21 22:54:26.646 | return self.request('POST', url, headers, body)
  2014-02-21 22:54:26.646 |   File tempest/common/rest_client.py, line 352, 
in request
  2014-02-21 22:54:26.646 | resp, resp_body)
  2014-02-21 22:54:26.647 |   File tempest/common/rest_client.py, line 406, 
in _error_checker
  2014-02-21 22:54:26.647 | raise exceptions.Conflict(resp_body)
  2014-02-21 22:54:26.647 | Conflict: An object with that identifier already 
exists
  2014-02-21 22:54:26.647 | Details: {u'message': u"Cannot 'unpause' while 
instance is in vm_state active", u'code': 409}
  2014-02-21 22:54:26.647 | }}}
  2014-02-21 22:54:26.647 | 
  2014-02-21 22:54:26.648 | Traceback (most recent call last):
  2014-02-21 22:54:26.648 |   File 
tempest/api/compute/servers/test_servers_negative.py, line 135, in 
test_pause_paused_server
  2014-02-21 22:54:26.648 | 
self.client.wait_for_server_status(self.server_id, 'PAUSED')
  2014-02-21 22:54:26.648 |   File 
tempest/services/compute/json/servers_client.py, line 160, in 
wait_for_server_status
  2014-02-21 22:54:26.648 | raise_on_error=raise_on_error)
  2014-02-21 22:54:26.648 |   File tempest/common/waiters.py, line 89, in 
wait_for_server_status
  2014-02-21 22:54:26.649 | raise exceptions.TimeoutException(message)
  2014-02-21 22:54:26.649 | TimeoutException: Request timed out
  2014-02-21 22:54:26.649 | Details: Server 
e09307a3-d2b3-4b43-895e-b14952a15aea failed to reach PAUSED status and task 
state None within the required time (196 s). Current status: ACTIVE. Current 
task state: pausing.
  2014-02-21 22:54:26.649 |

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1284232/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284579] Re: No module named dogpile.cache failed gate-grenade-dsvm job

2014-03-11 Thread James E. Blair
This is a real test failure, not an infrastructure bug.


** Also affects: keystone
   Importance: Undecided
   Status: New

** No longer affects: openstack-ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1284579

Title:
  No module named dogpile.cache failed gate-grenade-dsvm job

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In gate-grenade-dsvm test, keystone failed to start due to
  ImportError: No module named dogpile.cache

  http://logs.openstack.org/85/61985/5/gate/gate-grenade-
  dsvm/d557147/console.html


  2014-02-24 20:38:43.606 | 2014-02-24 20:38:43 + mysql -uroot -psecret 
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS keystone;'
  2014-02-24 20:38:43.612 | 2014-02-24 20:38:43 + mysql -uroot -psecret 
-h127.0.0.1 -e 'CREATE DATABASE keystone CHARACTER SET utf8;'
  2014-02-24 20:38:43.618 | 2014-02-24 20:38:43 + 
/opt/stack/old/keystone/bin/keystone-manage db_sync
  2014-02-24 20:38:43.624 | 2014-02-24 20:38:43 Traceback (most recent call 
last):
  2014-02-24 20:38:43.630 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/bin/keystone-manage", line 37, in <module>
  2014-02-24 20:38:43.635 | 2014-02-24 20:38:43 from keystone import cli
  2014-02-24 20:38:43.642 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/keystone/cli.py", line 32, in <module>
  2014-02-24 20:38:43.648 | 2014-02-24 20:38:43 from keystone import token
  2014-02-24 20:38:43.655 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/keystone/token/__init__.py", line 18, in <module>
  2014-02-24 20:38:43.660 | 2014-02-24 20:38:43 from keystone.token import 
controllers
  2014-02-24 20:38:43.666 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/keystone/token/controllers.py", line 27, in <module>
  2014-02-24 20:38:43.672 | 2014-02-24 20:38:43 from keystone.token import 
core
  2014-02-24 20:38:43.679 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/keystone/token/core.py", line 22, in <module>
  2014-02-24 20:38:43.684 | 2014-02-24 20:38:43 from keystone.common import 
cache
  2014-02-24 20:38:43.690 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/keystone/common/cache/__init__.py", line 17, in 
<module>
  2014-02-24 20:38:43.696 | 2014-02-24 20:38:43 from 
keystone.common.cache.core import *  # flake8: noqa
  2014-02-24 20:38:43.702 | 2014-02-24 20:38:43   File 
"/opt/stack/old/keystone/keystone/common/cache/core.py", line 19, in <module>
  2014-02-24 20:38:43.708 | 2014-02-24 20:38:43 import dogpile.cache
  2014-02-24 20:38:43.714 | 2014-02-24 20:38:43 ImportError: No module named 
dogpile.cache


  This issue occurred just once in last 48 hours.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIk5vIG1vZHVsZSBuYW1lZCBkb2dwaWxlLmNhY2hlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTMzMjQ4MDA2OTQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1284579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1285136] Re: Cannot create global roles in LDAP assignment backend

2014-03-11 Thread Adam Young
There are no global roles.  Only domain scoped roles.

** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1285136

Title:
  Cannot create global roles in LDAP assignment backend

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  When using the LDAP assignment backend, there is no way to assign a role to 
a user without a tenant (a global role).
  This behavior is completely different from SQL, which allows that 
functionality. In LDAP it simply fails, requiring a tenant_id.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1285136/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254872] Re: libvirtError: Timed out during operation: cannot acquire state change lock

2014-03-11 Thread Joe Gordon
** Also affects: ubuntu-server-iso-testing
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254872

Title:
  libvirtError: Timed out during operation: cannot acquire state change
  lock

Status in OpenStack Compute (Nova):
  Triaged
Status in Automated testing of Ubuntu ISO images:
  New

Bug description:
   libvirtError: Timed out during operation: cannot acquire state change
  lock

  Source:

  
http://logs.openstack.org/72/58372/1/check/check-tempest-devstack-vm-full/4dd1a60/logs/screen-n-cpu.txt.gz
  
http://logs.openstack.org/72/58372/1/check/check-tempest-devstack-vm-full/4dd1a60/testr_results.html.gz

  Query: libvirtError: Timed out during operation: cannot acquire state
  change lock AND filename:logs/screen-n-cpu.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpYnZpcnRFcnJvcjogVGltZWQgb3V0IGR1cmluZyBvcGVyYXRpb246IGNhbm5vdCBhY3F1aXJlIHN0YXRlIGNoYW5nZSBsb2NrXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODU0MTI3MzA4NzQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290983] [NEW] db_base_class should call delete_port instead of _delete_port

2014-03-11 Thread Aaron Rosen
Public bug reported:

When a network is deleted the db_base_class currently deletes
network:dhcp ports out of the database directly since it calls the
private _delete_port() method. This patch changes that to call
delete_port so that the plugin can delete these ports from their
backend.
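
A minimal sketch of the change, with hypothetical surrounding code (the
real delete_network differs):

    # before: bypasses the plugin, so the ports linger in the backend
    for port in dhcp_ports:
        self._delete_port(context, port['id'])

    # after: goes through the plugin, which also cleans up its backend
    for port in dhcp_ports:
        self.delete_port(context, port['id'])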

** Affects: neutron
 Importance: Undecided
 Assignee: Aaron Rosen (arosen)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290983

Title:
  db_base_class should call delete_port instead of _delete_port

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When a network is deleted the db_base_class currently deletes
  network:dhcp ports out of the database directly since it calls the
  private _delete_port() method. This patch changes that to call
  delete_port so that the plugin can delete these ports from their
  backend.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290987] [NEW] Lots of failures deleting things with ml2

2014-03-11 Thread Aaron Rosen
Public bug reported:

2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/resource.py, line 87, in resource
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/api/v2/base.py, line 444, in delete
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource 
obj_deleter(request.context, id, **kwargs)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/opt/stack/new/neutron/neutron/plugins/ml2/plugin.py, line 709, in delete_port
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource 
filter_by(id=id).with_lockmode('update').one())
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2317, in 
one
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource ret = list(self)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2360, in 
__iter__
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource return 
self._execute_and_instances(context)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2375, in 
_execute_and_instances
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource result = 
conn.execute(querycontext.statement, self._params)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 662, 
in execute
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource params)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 761, 
in _execute_clauseelement
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource compiled_sql, 
distilled_params
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 874, 
in _execute_context
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource context)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 1024, 
in _handle_dbapi_exception
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource exc_info
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py, line 196, 
in raise_from_cause
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource 
reraise(type(exception), exception, tb=exc_tb)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py, line 867, 
in _execute_context
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource context)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py, line 
324, in do_execute
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource 
cursor.execute(statement, parameters)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py, line 174, in execute
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource 
self.errorhandler(self, exc, value)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.7/dist-packages/MySQLdb/connections.py, line 36, in 
defaulterrorhandler
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource raise 
errorclass, errorvalue
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource OperationalError: 
(OperationalError) (1205, 'Lock wait timeout exceeded; try restarting 
transaction') 'SELECT ports.tenant_id AS ports_tenant_id, ports.id AS ports_id, 
ports.name AS ports_name, ports.network_id AS ports_network_id, 
ports.mac_address AS ports_mac_address, ports.admin_state_up AS 
ports_admin_state_up, ports.status AS ports_status, ports.device_id AS 
ports_device_id, ports.device_owner AS ports_device_owner \nFROM ports \nWHERE 
ports.id = %s FOR UPDATE' ('aef7e00e-ef4e-4523-a65e-b623ee7b04d0',)
2014-03-11 16:59:26.901 2434 TRACE neutron.api.v2.resource

** Affects: neutron
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Aaron Rosen (arosen)

** Changed in: neutron
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290987

Title:
  Lots of failures deleting things with ml2

Status in OpenStack Neutron (virtual network service):
  New

[Yahoo-eng-team] [Bug 1290990] [NEW] Get tenants by name discrepancy

2014-03-11 Thread Nathan Buckner
Public bug reported:

A GET call on /v2.0/tenants?name=username
returns the single matching tenant on the admin URL. On the user URL, however, 
instead of giving a 404 (since getting a tenant by name is not supported 
there), it returns the standard get-tenants response, ignoring the parameters 
entirely. This causes negative tests that just check the status to fail (a 200 
is returned when a 404 is expected), and it makes it appear that the API 
returned a response for get tenant by name.
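
A minimal sketch of the discrepancy, assuming default endpoints (admin
on port 35357, user on port 5000) and a hypothetical tenant name:

    # admin endpoint: filters by name as documented
    curl -H "X-Auth-Token: $TOKEN" "http://keystone:35357/v2.0/tenants?name=demo"

    # user endpoint: ignores ?name= and returns the standard tenant
    # list (200, not the expected 404)
    curl -H "X-Auth-Token: $TOKEN" "http://keystone:5000/v2.0/tenants?name=demo"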

** Affects: keystone
 Importance: Undecided
 Status: New


** Tags: opencafe

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1290990

Title:
  Get tenants by name discrepancy

Status in OpenStack Identity (Keystone):
  New

Bug description:
  A GET call on /v2.0/tenants?name=username
  returns the single matching tenant on the admin URL. On the user URL, 
however, instead of giving a 404 (since getting a tenant by name is not 
supported there), it returns the standard get-tenants response, ignoring the 
parameters entirely. This causes negative tests that just check the status to 
fail (a 200 is returned when a 404 is expected), and it makes it appear that 
the API returned a response for get tenant by name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1290990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254872] Re: libvirtError: Timed out during operation: cannot acquire state change lock

2014-03-11 Thread James Page
** Also affects: libvirt (Ubuntu)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254872

Title:
  libvirtError: Timed out during operation: cannot acquire state change
  lock

Status in OpenStack Compute (Nova):
  Triaged
Status in Automated testing of Ubuntu ISO images:
  New
Status in “libvirt” package in Ubuntu:
  New

Bug description:
   libvirtError: Timed out during operation: cannot acquire state change
  lock

  Source:

  
http://logs.openstack.org/72/58372/1/check/check-tempest-devstack-vm-full/4dd1a60/logs/screen-n-cpu.txt.gz
  
http://logs.openstack.org/72/58372/1/check/check-tempest-devstack-vm-full/4dd1a60/testr_results.html.gz

  Query: libvirtError: Timed out during operation: cannot acquire state
  change lock AND filename:logs/screen-n-cpu.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpYnZpcnRFcnJvcjogVGltZWQgb3V0IGR1cmluZyBvcGVyYXRpb246IGNhbm5vdCBhY3F1aXJlIHN0YXRlIGNoYW5nZSBsb2NrXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODU0MTI3MzA4NzQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291007] [NEW] device_path not available at detach time for boot from volume

2014-03-11 Thread Walt Boring
Public bug reported:

When you do a normal volume attach to an existing VM and then detach it, the 
connection_info passed to the libvirt volume driver's 
disconnect_volume(self, connection_info, mount_device) contains 
connection_info['data']['device_path'].

When you boot a VM from a volume, not an image, and then terminate the VM, the 
connection_info['data'] passed to the libvirt volume driver's 
disconnect_volume doesn't contain the 'device_path' key. The libvirt volume 
drivers need this information to correctly disconnect the LUN from the 
kernel.
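
A rough sketch of the two shapes of connection_info['data'] seen by
disconnect_volume, with made-up example values:

    # attach then detach on a running VM: device_path is present
    {'target_iqn': 'iqn.2010-10.org.openstack:volume-X',
     'target_lun': 1,
     'device_path': '/dev/disk/by-path/ip-...-lun-1'}

    # terminate of a boot-from-volume VM: device_path is missing
    {'target_iqn': 'iqn.2010-10.org.openstack:volume-X',
     'target_lun': 1}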

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291007

Title:
  device_path not available at detach time for boot from volume

Status in OpenStack Compute (Nova):
  New

Bug description:
  When you do a normal volume attach to an existing VM and then detach it, 
the connection_info passed to the libvirt volume driver's 
disconnect_volume(self, connection_info, mount_device) contains 
connection_info['data']['device_path'].

  When you boot a VM from a volume, not an image, and then terminate the VM, 
the connection_info['data'] passed to the libvirt volume driver's 
disconnect_volume doesn't contain the 'device_path' key. The libvirt volume 
drivers need this information to correctly disconnect the LUN from the 
kernel.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291007/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291006] [NEW] Clear text admin password assignment in instance creation should be disabled

2014-03-11 Thread Alessandro Pilotti
Public bug reported:

The clear text admin password management available in the instance
creation security tab should be removed, as it has been superseded in
both Nova and Horizon by encrypted password management.

Nova blueprint (Grizzly): 
https://blueprints.launchpad.net/nova/+spec/get-password
Horizon blueprint (Icehouse): 
https://blueprints.launchpad.net/horizon/+spec/decrypt-and-display-vm-generated-password

Since this feature is now available in Horizon as well, providing an
option for the users to specify the password is both misleading and
insecure.

Furthermore, the old way of providing a clear text password works only
on selected hypervisors and does not work for Windows guests, which
represent the main use case at the moment since SSH keypair
authentication does not apply.
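
For reference, the encrypted flow that supersedes it; a minimal sketch
assuming an instance booted with a keypair (the generated password is
encrypted with the public key and decrypted locally by the client):

    nova get-password <server> ~/.ssh/id_rsa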

** Affects: horizon
 Importance: Undecided
 Assignee: Alessandro Pilotti (alexpilotti)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Alessandro Pilotti (alexpilotti)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291006

Title:
  Clear text admin password assignment in instance creation should be
  disabled

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The clear text admin password management available in the instance
  creation security tab should be removed, as it has been superseded in
  both Nova and Horizon by encrypted password management.

  Nova blueprint (Grizzly): 
https://blueprints.launchpad.net/nova/+spec/get-password
  Horizon blueprint (Icehouse): 
https://blueprints.launchpad.net/horizon/+spec/decrypt-and-display-vm-generated-password

  Since this feature is now available in Horizon as well, providing an
  option for the users to specify the password is both misleading and
  insecure.

  Furthermore, the old way of providing a clear text password works
  only on selected hypervisors and does not work for Windows guests,
  which represent the main use case at the moment since SSH keypair
  authentication does not apply.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291006/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291014] [NEW] Nova boot fails: AttributeError: is_public

2014-03-11 Thread Cory Stone
Public bug reported:

[req-d4c97a98-2b0e-419e-83d6-0e88332a699a account1 account1] [instance: 
036a26b1-7fe2-4d56-b7a2-4781e8cad696] Error: is_public
Traceback (most recent call last):
  File /opt/nova/nova/compute/manager.py, line 1254, in _build_instance
set_access_ip=set_access_ip)
  File /opt/nova/nova/compute/manager.py, line 394, in decorated_function
return function(self, context, *args, **kwargs)
  File /opt/nova/nova/compute/manager.py, line 1655, in _spawn
LOG.exception(_('Instance failed to spawn'), instance=instance)
  File /opt/nova/nova/openstack/common/excutils.py, line 68, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /opt/nova/nova/compute/manager.py, line 1652, in _spawn
block_device_info)
  File /opt/nova/nova/virt/libvirt/driver.py, line 2230, in spawn
admin_pass=admin_password)
  File /opt/nova/nova/virt/libvirt/driver.py, line 2538, in _create_image
imagehandler_args=imagehandler_args)
  File /opt/nova/nova/virt/libvirt/imagebackend.py, line 184, in cache
*args, **kwargs)
  File /opt/nova/nova/virt/libvirt/imagebackend.py, line 310, in create_image
prepare_template(target=base, max_size=size, *args, **kwargs)
  File /opt/nova/nova/openstack/common/lockutils.py, line 249, in inner
return f(*args, **kwargs)
  File /opt/nova/nova/virt/libvirt/imagebackend.py, line 174, in 
fetch_func_sync
fetch_func(target=target, *args, **kwargs)
  File /opt/nova/nova/virt/libvirt/utils.py, line 654, in fetch_image
max_size=max_size, imagehandler_args=imagehandler_args)
  File /opt/nova/nova/virt/images.py, line 108, in fetch_to_raw
imagehandler_args=imagehandler_args)
  File /opt/nova/nova/virt/images.py, line 98, in fetch
fetched_to_local = handler.is_local()
  File /usr/lib/python2.7/contextlib.py, line 35, in __exit__
self.gen.throw(type, value, traceback)
  File /opt/nova/nova/openstack/common/fileutils.py, line 98, in 
remove_path_on_error
remove(path)
  File /opt/nova/nova/virt/images.py, line 71, in _remove_image_on_exec
image_path):
  File /opt/nova/nova/virt/imagehandler/__init__.py, line 154, in handle_image
img_locs = image_service.get_locations(context, image_id)
  File /opt/nova/nova/image/glance.py, line 307, in get_locations
if not self._is_image_available(context, image_meta):
  File /opt/nova/nova/image/glance.py, line 446, in _is_image_available
if image.is_public or context.is_admin:
  File /usr/local/lib/python2.7/dist-packages/warlock/model.py, line 72, in 
__getattr__
raise AttributeError(key)
AttributeError: is_public

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291014

Title:
  Nova boot fails: AttributeError: is_public

Status in OpenStack Compute (Nova):
  New

Bug description:
  [req-d4c97a98-2b0e-419e-83d6-0e88332a699a account1 account1] [instance: 
036a26b1-7fe2-4d56-b7a2-4781e8cad696] Error: is_public
  Traceback (most recent call last):
File /opt/nova/nova/compute/manager.py, line 1254, in _build_instance
  set_access_ip=set_access_ip)
File /opt/nova/nova/compute/manager.py, line 394, in decorated_function
  return function(self, context, *args, **kwargs)
File /opt/nova/nova/compute/manager.py, line 1655, in _spawn
  LOG.exception(_('Instance failed to spawn'), instance=instance)
File /opt/nova/nova/openstack/common/excutils.py, line 68, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /opt/nova/nova/compute/manager.py, line 1652, in _spawn
  block_device_info)
File /opt/nova/nova/virt/libvirt/driver.py, line 2230, in spawn
  admin_pass=admin_password)
File /opt/nova/nova/virt/libvirt/driver.py, line 2538, in _create_image
  imagehandler_args=imagehandler_args)
File /opt/nova/nova/virt/libvirt/imagebackend.py, line 184, in cache
  *args, **kwargs)
File /opt/nova/nova/virt/libvirt/imagebackend.py, line 310, in 
create_image
  prepare_template(target=base, max_size=size, *args, **kwargs)
File /opt/nova/nova/openstack/common/lockutils.py, line 249, in inner
  return f(*args, **kwargs)
File /opt/nova/nova/virt/libvirt/imagebackend.py, line 174, in 
fetch_func_sync
  fetch_func(target=target, *args, **kwargs)
File /opt/nova/nova/virt/libvirt/utils.py, line 654, in fetch_image
  max_size=max_size, imagehandler_args=imagehandler_args)
File /opt/nova/nova/virt/images.py, line 108, in fetch_to_raw
  imagehandler_args=imagehandler_args)
File /opt/nova/nova/virt/images.py, line 98, in fetch
  fetched_to_local = handler.is_local()
File /usr/lib/python2.7/contextlib.py, line 35, in __exit__
  self.gen.throw(type, value, traceback)
File /opt/nova/nova/openstack/common/fileutils.py, line 98, in 
remove_path_on_error
  remove(path)
File 

[Yahoo-eng-team] [Bug 1290857] Re: Image list api is wrong

2014-03-11 Thread Telles Mota Vidal Nóbrega
** Project changed: nova => openstack-api-site

** Description changed:

  The Compute API explains that to list images you need to call v2/images
  but this information is wrong because the right url is
  v2/{project_id}/images
+ 
+ The problem can be found here:
+ http://api.openstack.org/api-ref-compute.html#compute_images

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1290857

Title:
  Image list api is wrong

Status in OpenStack API documentation site:
  Confirmed

Bug description:
  The Compute API explains that to list images you need to call
  v2/images but this information is wrong because the right url is
  v2/{project_id}/images

  The problem can be found here:
  http://api.openstack.org/api-ref-compute.html#compute_images
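
  A minimal sketch of the correct request shape, with placeholder
  values:

    GET /v2/{project_id}/images HTTP/1.1
    Host: compute.example.com:8774
    X-Auth-Token: {auth_token}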

To manage notifications about this bug go to:
https://bugs.launchpad.net/openstack-api-site/+bug/1290857/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1254872] Re: libvirtError: Timed out during operation: cannot acquire state change lock

2014-03-11 Thread Serge Hallyn
** Also affects: libvirt (Ubuntu Precise)
   Importance: Undecided
   Status: New

** Changed in: libvirt (Ubuntu)
   Importance: Undecided => High

** Changed in: libvirt (Ubuntu)
   Status: New => Fix Released

** Changed in: libvirt (Ubuntu Precise)
   Importance: Undecided => High

** Changed in: libvirt (Ubuntu Precise)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1254872

Title:
  libvirtError: Timed out during operation: cannot acquire state change
  lock

Status in OpenStack Compute (Nova):
  Triaged
Status in Automated testing of Ubuntu ISO images:
  New
Status in “libvirt” package in Ubuntu:
  Fix Released
Status in “libvirt” source package in Precise:
  Triaged

Bug description:
   libvirtError: Timed out during operation: cannot acquire state change
  lock

  Source:

  
http://logs.openstack.org/72/58372/1/check/check-tempest-devstack-vm-full/4dd1a60/logs/screen-n-cpu.txt.gz
  
http://logs.openstack.org/72/58372/1/check/check-tempest-devstack-vm-full/4dd1a60/testr_results.html.gz

  Query: libvirtError: Timed out during operation: cannot acquire state
  change lock AND filename:logs/screen-n-cpu.txt

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcImxpYnZpcnRFcnJvcjogVGltZWQgb3V0IGR1cmluZyBvcGVyYXRpb246IGNhbm5vdCBhY3F1aXJlIHN0YXRlIGNoYW5nZSBsb2NrXCIgQU5EIGZpbGVuYW1lOlwibG9ncy9zY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzODU0MTI3MzA4NzQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1254872/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291021] [NEW] Unhandled UnboundLocalError in instance views

2014-03-11 Thread Brian DeHamer
Public bug reported:

If an error is raised in either the DetailView or ResizeView (in
openstack_dashboard.dashboards.project.instances.views) we assume that a
redirect will be raised. However, certain exception classes will not
result in a redirect when handled by the standard Horizon exception
handler.

The problem spots are here
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/views.py#L270)
and here
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/views.py#L312).
If the error handler doesn't raise a redirect, control will be returned
to the statement after the 'except' block and the 'instance' variable
may be unbound.
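
A minimal sketch of the failing pattern, with simplified names (the
real views differ):

    from django.utils.translation import ugettext_lazy as _
    from horizon import exceptions
    from openstack_dashboard import api

    def get_data(self):
        try:
            instance = api.nova.server_get(self.request, instance_id)
        except Exception:
            exceptions.handle(self.request,
                              _('Unable to retrieve instance.'))
        # if handle() swallowed the error instead of raising a redirect,
        # 'instance' was never bound and this raises UnboundLocalError
        return instance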

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291021

Title:
  Unhandled UnboundLocalError in instance views

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  If an error is raised in either the DetailView or ResizeView (in
  openstack_dashboard.dashboards.project.instances.views) we assume that
  a redirect will be raised. However, certain exception classes will not
  result in a redirect when handled by the standard Horizon exception
  handler.

  The problem spots are here
  
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/views.py#L270)
  and here
  
(https://github.com/openstack/horizon/blob/master/openstack_dashboard/dashboards/project/instances/views.py#L312).
  If the error handler doesn't raise a redirect, control will be
  returned to the statement after the 'except' block and the 'instance'
  variable may be unbound.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291021/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291032] [NEW] some lbaas imports violate openstack hacking

2014-03-11 Thread Brandon Logan
Public bug reported:

in neutron/neutron/db/loadbalancer/loadbalancer_db.py,
LoadBalancerPluginBase is being imported directly

in neutron/neutron/extensions/loadbalancer.py, ServicePluginBase is
being imported directly

This violates the OpenStack hacking style guide rule of importing only
modules, not objects.
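
A minimal sketch of the rule, using one of the imports named above:

    # violates the rule: imports an object directly
    from neutron.extensions.loadbalancer import LoadBalancerPluginBase

    # compliant: import the module and reference the attribute
    from neutron.extensions import loadbalancer

    class LoadBalancerPluginDb(loadbalancer.LoadBalancerPluginBase):
        pass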

** Affects: neutron
 Importance: Undecided
 Assignee: Brandon Logan (brandon-logan)
 Status: New


** Tags: low-hanging-fruit

** Changed in: neutron
 Assignee: (unassigned) => Brandon Logan (brandon-logan)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291032

Title:
  some lbaas imports violate openstack hacking

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  in neutron/neutron/db/loadbalancer/loadbalancer_db.py,
  LoadBalancerPluginBase is being imported directly

  in neutron/neutron/extensions/loadbalancer.py, ServicePluginBase is
  being imported directly

  This violates the OpenStack hacking style guide rule of importing only
  modules, not objects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291032/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] dmercer deactivated by dmercer

2014-03-11 Thread anvil-dev
Hello Yahoo! Engineering Team,

The membership status of Dan Mercer (dmercer) in the team anvil-dev
(anvil-dev) was changed by the user from Approved to Deactivated.
https://launchpad.net/~anvil-dev

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291041] [NEW] Add timeout when generating a secret key from file

2014-03-11 Thread Maria Nita
Public bug reported:

In the horizon/utils/secret_key.py file we generate a secret key for Django 
settings, either read from a file or generated randomly. But on some systems 
the method just hangs.

For example, this affects users who want to run tests. The tests just hang at 
some point, because we call the method inside the TestCase 
SecretKeyTests.test_generate_or_read_key_from_file.

We can add a timeout like in this example 
http://pythonhosted.org//lockfile/lockfile.html#filelock-objects
and generate a random key if we can't read from or interact with the file.
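
A minimal sketch of that approach, assuming the lockfile API linked
above and a hypothetical random-key fallback:

    from lockfile import FileLock, LockTimeout

    lock = FileLock(key_file)
    try:
        lock.acquire(timeout=5)  # seconds; today this can block forever
    except LockTimeout:
        return generate_random_key()  # hypothetical fallback helper
    try:
        pass  # read or write the key file as before
    finally:
        lock.release()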

** Affects: horizon
 Importance: Undecided
 Assignee: Maria Nita (maria-nita-dn)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Maria Nita (maria-nita-dn)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291041

Title:
  Add timeout when generating a secret key from file

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In the horizon/utils/secret_key.py file we generate a secret key for Django 
settings, either read from a file or generated randomly. But on some systems 
the method just hangs.

  For example, this affects users who want to run tests. The tests just hang 
at some point, because we call the method inside the TestCase 
SecretKeyTests.test_generate_or_read_key_from_file.

  We can add a timeout like in this example 
http://pythonhosted.org//lockfile/lockfile.html#filelock-objects
  and generate a random key if we can't read from or interact with the file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291048] [NEW] Incorrect number of security groups in Project Overview after restacking

2014-03-11 Thread Vahid Hashemian
Public bug reported:

The Security Groups count for each project after restacking is reported
as 0 (default: Used 0 of 10), even though each project still has its
default security group. So it should be reported as 1 (Used 1 of 10).

Steps to reproduce:
1. Restack
2. Login to Horizon
3. Go to Project  Overview panel
4. You'll notice that Security Groups are reported 0 of 10 -- which is 
incorrect
5. Now go to Access  Security
6. Go back to Overview
7. You'll notice that Security Groups are now reported 1 of 10 -- which is 
correct

NOTE: Neutron should not be enabled to reproduce this bug.

This bug is related to bug #1271381, which fixes the reported number for
most cases.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: nova project security-groups

** Summary changed:

- Incorrect number of security groups in Project Overview of restacking
+ Incorrect number of security groups in Project Overview after restacking

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291048

Title:
  Incorrect number of security groups in Project Overview after
  restacking

Status in OpenStack Compute (Nova):
  New

Bug description:
  The Security Groups count for each project after restacking is reported
  as 0 (default: Used 0 of 10), even though each project still has its
  default security group. So it should be reported as 1 (Used 1 of 10).

  Steps to reproduce:
  1. Restack
  2. Login to Horizon
  3. Go to Project  Overview panel
  4. You'll notice that Security Groups are reported 0 of 10 -- which is 
incorrect
  5. Now go to Access  Security
  6. Go back to Overview
  7. You'll notice that Security Groups are now reported 1 of 10 -- which 
is correct

  NOTE: Neutron should not be enabled to reproduce this bug.

  This bug is related to bug #1271381, which fixes the reported number
  for most cases.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291048/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291050] [NEW] Settings nav method can attempt access of slug on None

2014-03-11 Thread David Lyle
Public bug reported:

When adding tests to horizon, it's possible to encounter an error in
dashboards/settings/dashboard.py when the current dashboard is not set:
the slug attribute is then accessed on None.  Doesn't work real well :)

** Affects: horizon
 Importance: Low
 Assignee: David Lyle (david-lyle)
 Status: New

** Changed in: horizon
Milestone: None = icehouse-rc1

** Changed in: horizon
   Importance: Undecided = Low

** Changed in: horizon
 Assignee: (unassigned) = David Lyle (david-lyle)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291050

Title:
  Settings nav method can attempt access of slug on None

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When adding tests to horizon, it's possible to encounter an error in
  dashboards/settings/dashboard.py when the current dashboard is not
  set: the slug attribute is then accessed on None.
  Doesn't work real well :)
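
  A minimal sketch of the guard (current_dashboard stands for whatever
  the nav method dereferences; the name is illustrative):

  # current_dashboard may be None when no dashboard has been set,
  # e.g. in tests; guard before touching .slug:
  slug = current_dashboard.slug if current_dashboard is not None else None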

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291050/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284579] Re: No module named dogpile.cache failed gate-grenade-dsvm job

2014-03-11 Thread Dolph Mathews
It seems the root cause is that dogpile.cache failed to get installed,
but devstack moved on anyway, ultimately failing to start keystone.

2014-02-24 20:37:13 Downloading/unpacking dogpile.cache>=0.5.0 (from 
keystone==2013.2.2)
2014-02-24 20:37:13   Cannot fetch index base URL 
http://pypi.openstack.org/openstack/
2014-02-24 20:37:13   Could not find any downloads that satisfy the requirement 
dogpile.cache>=0.5.0 (from keystone==2013.2.2)
2014-02-24 20:37:13 Cleaning up...
2014-02-24 20:37:13 No distributions at all found for dogpile.cache>=0.5.0 
(from keystone==2013.2.2)

From http://logs.openstack.org/85/61985/5/gate/gate-grenade-
dsvm/d557147/logs/old/devstacklog.txt.gz

Doesn't appear to be a frequent issue though:

http://logstash.openstack.org/#eyJzZWFyY2giOiJcIkNhbm5vdCBmZXRjaCBpbmRleCBiYXNlIFVSTCBodHRwOi8vcHlwaS5vcGVuc3RhY2sub3JnL29wZW5zdGFjay9cIiIsImZpZWxkcyI6W10sIm9mZnNldCI6MCwidGltZWZyYW1lIjoiYWxsIiwiZ3JhcGhtb2RlIjoiY291bnQiLCJ0aW1lIjp7InVzZXJfaW50ZXJ2YWwiOjB9LCJzdGFtcCI6MTM5NDU3MTQ5NDIwOSwibW9kZSI6IiIsImFuYWx5emVfZmllbGQiOiIifQ==

** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1284579

Title:
  No module named dogpile.cache failed gate-grenade-dsvm job

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Core Infrastructure:
  Won't Fix

Bug description:
  In gate-grenade-dsvm test, keystone failed to start due to
  ImportError: No module named dogpile.cache

  http://logs.openstack.org/85/61985/5/gate/gate-grenade-
  dsvm/d557147/console.html


  2014-02-24 20:38:43.606 | 2014-02-24 20:38:43 + mysql -uroot -psecret 
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS keystone;'
  2014-02-24 20:38:43.612 | 2014-02-24 20:38:43 + mysql -uroot -psecret 
-h127.0.0.1 -e 'CREATE DATABASE keystone CHARACTER SET utf8;'
  2014-02-24 20:38:43.618 | 2014-02-24 20:38:43 + 
/opt/stack/old/keystone/bin/keystone-manage db_sync
  2014-02-24 20:38:43.624 | 2014-02-24 20:38:43 Traceback (most recent call 
last):
  2014-02-24 20:38:43.630 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/bin/keystone-manage, line 37, in module
  2014-02-24 20:38:43.635 | 2014-02-24 20:38:43 from keystone import cli
  2014-02-24 20:38:43.642 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/cli.py, line 32, in module
  2014-02-24 20:38:43.648 | 2014-02-24 20:38:43 from keystone import token
  2014-02-24 20:38:43.655 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/token/__init__.py, line 18, in module
  2014-02-24 20:38:43.660 | 2014-02-24 20:38:43 from keystone.token import 
controllers
  2014-02-24 20:38:43.666 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/token/controllers.py, line 27, in module
  2014-02-24 20:38:43.672 | 2014-02-24 20:38:43 from keystone.token import 
core
  2014-02-24 20:38:43.679 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/token/core.py, line 22, in module
  2014-02-24 20:38:43.684 | 2014-02-24 20:38:43 from keystone.common import 
cache
  2014-02-24 20:38:43.690 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/common/cache/__init__.py, line 17, in 
module
  2014-02-24 20:38:43.696 | 2014-02-24 20:38:43 from 
keystone.common.cache.core import *  # flake8: noqa
  2014-02-24 20:38:43.702 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/common/cache/core.py, line 19, in module
  2014-02-24 20:38:43.708 | 2014-02-24 20:38:43 import dogpile.cache
  2014-02-24 20:38:43.714 | 2014-02-24 20:38:43 ImportError: No module named 
dogpile.cache


  This issue occurred just once in last 48 hours.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIk5vIG1vZHVsZSBuYW1lZCBkb2dwaWxlLmNhY2hlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTMzMjQ4MDA2OTQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1284579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1284579] Re: No module named dogpile.cache failed gate-grenade-dsvm job

2014-03-11 Thread James E. Blair
Oh, yeah, that looks like a transient network error.

Adding back to openstack-ci for the record.


** Also affects: openstack-ci
   Importance: Undecided
   Status: New

** Changed in: openstack-ci
   Status: New = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1284579

Title:
  No module named dogpile.cache failed gate-grenade-dsvm job

Status in OpenStack Identity (Keystone):
  Invalid
Status in OpenStack Core Infrastructure:
  Won't Fix

Bug description:
  In gate-grenade-dsvm test, keystone failed to start due to
  ImportError: No module named dogpile.cache

  http://logs.openstack.org/85/61985/5/gate/gate-grenade-
  dsvm/d557147/console.html


  2014-02-24 20:38:43.606 | 2014-02-24 20:38:43 + mysql -uroot -psecret 
-h127.0.0.1 -e 'DROP DATABASE IF EXISTS keystone;'
  2014-02-24 20:38:43.612 | 2014-02-24 20:38:43 + mysql -uroot -psecret 
-h127.0.0.1 -e 'CREATE DATABASE keystone CHARACTER SET utf8;'
  2014-02-24 20:38:43.618 | 2014-02-24 20:38:43 + 
/opt/stack/old/keystone/bin/keystone-manage db_sync
  2014-02-24 20:38:43.624 | 2014-02-24 20:38:43 Traceback (most recent call 
last):
  2014-02-24 20:38:43.630 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/bin/keystone-manage, line 37, in module
  2014-02-24 20:38:43.635 | 2014-02-24 20:38:43 from keystone import cli
  2014-02-24 20:38:43.642 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/cli.py, line 32, in module
  2014-02-24 20:38:43.648 | 2014-02-24 20:38:43 from keystone import token
  2014-02-24 20:38:43.655 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/token/__init__.py, line 18, in module
  2014-02-24 20:38:43.660 | 2014-02-24 20:38:43 from keystone.token import 
controllers
  2014-02-24 20:38:43.666 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/token/controllers.py, line 27, in module
  2014-02-24 20:38:43.672 | 2014-02-24 20:38:43 from keystone.token import 
core
  2014-02-24 20:38:43.679 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/token/core.py, line 22, in module
  2014-02-24 20:38:43.684 | 2014-02-24 20:38:43 from keystone.common import 
cache
  2014-02-24 20:38:43.690 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/common/cache/__init__.py, line 17, in 
module
  2014-02-24 20:38:43.696 | 2014-02-24 20:38:43 from 
keystone.common.cache.core import *  # flake8: noqa
  2014-02-24 20:38:43.702 | 2014-02-24 20:38:43   File 
/opt/stack/old/keystone/keystone/common/cache/core.py, line 19, in module
  2014-02-24 20:38:43.708 | 2014-02-24 20:38:43 import dogpile.cache
  2014-02-24 20:38:43.714 | 2014-02-24 20:38:43 ImportError: No module named 
dogpile.cache


  This issue occurred just once in last 48 hours.

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJcIk5vIG1vZHVsZSBuYW1lZCBkb2dwaWxlLmNhY2hlXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjE3MjgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjEzOTMzMjQ4MDA2OTQsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1284579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291069] [NEW] Modifying flavor access list modifies the flavor info

2014-03-11 Thread Santiago Baldassin
Public bug reported:

When the user modifies the flavor access list, the flavor is removed and
created again even when the flavor information hasn't changed, so there
was no need to update the flavor information.

** Affects: horizon
 Importance: Undecided
 Assignee: Santiago Baldassin (santiago-b-baldassin)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) = Santiago Baldassin (santiago-b-baldassin)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291069

Title:
  Modifying flavor access list modifies the flavor info

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When the user modifies the flavor access list, the flavor is removed
  and created again even when the flavor information hasn't changed, so
  there was no need to update the flavor information.
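
  A sketch of the suggested behavior, using Horizon's nova API wrappers;
  flavor_info_changed() and update_flavor_access() are hypothetical
  helpers, not existing functions:

  if flavor_info_changed(old_flavor, form_data):
      # Only recreate the flavor when its definition actually changed.
      api.nova.flavor_delete(request, old_flavor.id)
      flavor = api.nova.flavor_create(request, **form_data)
  else:
      flavor = old_flavor
  update_flavor_access(request, flavor, new_access_list)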

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291070] [NEW] floatingip_update status can fail

2014-03-11 Thread Aaron Rosen
Public bug reported:

http://logs.openstack.org/58/79458/3/check/check-tempest-dsvm-neutron-
full/6f846db/logs/screen-q-svc.txt.gz?#_2014-03-11_14_38_27_847

2014-03-11 14:38:27.847 27756 ERROR neutron.openstack.common.rpc.amqp [req-ca69244e-71b1-42a9-a8ce-9dcdc43181a3 None] Exception during message handling
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp Traceback (most recent call last):
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /opt/stack/new/neutron/neutron/openstack/common/rpc/amqp.py, line 462, in _process_data
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     **args)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /opt/stack/new/neutron/neutron/common/rpc.py, line 45, in dispatch
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     neutron_ctxt, version, method, namespace, **kwargs)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /opt/stack/new/neutron/neutron/openstack/common/rpc/dispatcher.py, line 172, in dispatch
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /opt/stack/new/neutron/neutron/db/l3_rpc_base.py, line 120, in update_floatingip_statuses
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     context, {'last_known_router_id': [router_id]})
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /opt/stack/new/neutron/neutron/db/l3_db.py, line 718, in get_floatingips
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     page_reverse=page_reverse)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /opt/stack/new/neutron/neutron/db/db_base_plugin_v2.py, line 191, in _get_collection
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     items = [dict_func(c, fields) for c in query]
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2360, in __iter__
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     return self._execute_and_instances(context)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2373, in _execute_and_instances
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     close_with_result=True)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2364, in _connection_from_session
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     **kw)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 799, in connection
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     close_with_result=close_with_result)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 803, in _connection_for_bind
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     return self.transaction._connection_for_bind(engine)
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 282, in _connection_for_bind
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     self._assert_active()
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/session.py, line 198, in _assert_active
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp     This Session's transaction has been rolled back
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp InvalidRequestError: This Session's transaction has been rolled back by a nested rollback() call.  To begin a new transaction, issue Session.rollback() first.
2014-03-11 14:38:27.847 27756 TRACE neutron.openstack.common.rpc.amqp

** Affects: neutron
 Importance: High
 Assignee: Aaron Rosen (arosen)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Aaron Rosen (arosen)

** Changed in: neutron
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291070

Title:
  floatingip_update status can fail

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  http://logs.openstack.org/58/79458/3/check/check-tempest-dsvm-neutron-
  full/6f846db/logs/screen-q-svc.txt.gz?#_2014-03-11_14_38_27_847

  2014-03-11 14:38:27.847 27756 ERROR 

[Yahoo-eng-team] [Bug 1291059] [NEW] db_sync fails if hostname has underscore

2014-03-11 Thread Adam Young
Public bug reported:

Originally filed in Bugzilla:
https://bugzilla.redhat.com/show_bug.cgi?id=1069607

Description of problem:
  When the hostname has a _ character, like rhos4_cnt.example.com,
keystone.pp fails with the following error:

Notice: /Stage[main]/Keystone/Exec[keystone-manage db_sync]/returns:
2014-02-25 20:14:37.597 5207 CRITICAL keystone [-] (OperationalError)
(1045, Access denied for user 'keystone_admin'@'rhos4_cnt.example.com'
(using password: YES)) None None

  In this case, I can see the hostname has been escaped with \ as follows
in the mysql db.

# mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 153
Server version: 5.1.71 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights
reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input
statement.

mysql> select host, user from mysql.user;
+++
| host   | user   |
+++
| %  | cinder |
| %  | glance |
| %  | heat   |
| %  | keystone_admin |
| %  | neutron|
| %  | nova   |
| 127.0.0.1  | cinder |
| 127.0.0.1  | glance |
| 127.0.0.1  | keystone_admin |
| 127.0.0.1  | neutron|
| 127.0.0.1  | nova   |
| localhost  | heat   |
| localhost  | root   |
| rhos4\_cnt.example.com ||
| rhos4\_cnt.example.com | root   |
+++
15 rows in set (0.00 sec)


Version-Release number of selected component (if applicable):


How reproducible:
  Always

Steps to Reproduce:
1. set FQDN with _ character and run packstack.
2.
3.

Actual results:
  In the keystone.pp, keystone-manage db_sync failed and got python backtrace.

Expected results:
  Install should finish normally.

Additional info:
  Attaching packstack log and sosreport.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291059

Title:
  db_sync fails if hostname has underscore

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Originally filed in Bugzilla:
  https://bugzilla.redhat.com/show_bug.cgi?id=1069607

  Description of problem:
    When the hostname has a _ character, like rhos4_cnt.example.com,
  keystone.pp fails with the following error:

  Notice: /Stage[main]/Keystone/Exec[keystone-manage db_sync]/returns:
  2014-02-25 20:14:37.597 5207 CRITICAL keystone [-] (OperationalError)
  (1045, Access denied for user
  'keystone_admin'@'rhos4_cnt.example.com' (using password: YES)) None
  None

    In this case, I can see the hostname has been escaped with \ as
  follows in the mysql db.

  # mysql -u root
  Welcome to the MySQL monitor.  Commands end with ; or \g.
  Your MySQL connection id is 153
  Server version: 5.1.71 Source distribution

  Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights
  reserved.

  Oracle is a registered trademark of Oracle Corporation and/or its
  affiliates. Other names may be trademarks of their respective
  owners.

  Type 'help;' or '\h' for help. Type '\c' to clear the current input
  statement.

  mysql> select host, user from mysql.user;
  +++
  | host   | user   |
  +++
  | %  | cinder |
  | %  | glance |
  | %  | heat   |
  | %  | keystone_admin |
  | %  | neutron|
  | %  | nova   |
  | 127.0.0.1  | cinder |
  | 127.0.0.1  | glance |
  | 127.0.0.1  | keystone_admin |
  | 127.0.0.1  | neutron|
  | 127.0.0.1  | nova   |
  | localhost  | heat   |
  | localhost  | root   |
  | rhos4\_cnt.example.com ||
  | rhos4\_cnt.example.com | root   |
  +++
  15 rows in set (0.00 sec)

  
  Version-Release number of selected component (if applicable):

  
  How reproducible:
Always

  Steps to Reproduce:
  1. set FQDN with _ character and run packstack.
  2.
  3.

  Actual results:
In the keystone.pp, keystone-manage db_sync failed and got python backtrace.

  Expected results:
Install should finish normally.

  Additional info:
Attaching packstack log and 

[Yahoo-eng-team] [Bug 1290990] Re: Get tenants by name discrepancy

2014-03-11 Thread Dolph Mathews
Despite the nearly identical paths, GET :5000/v2.0/tenants has almost
nothing to do with GET :35357/v2.0/tenants -- they have separate
specifications, separate use cases and separate intended behaviors. I'd
also expect unsupported query strings to be silently ignored, rather
than to raise a 4xx under *any* circumstance.

** Changed in: keystone
   Status: New = Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1290990

Title:
  Get tenants by name discrepancy

Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  A GET call on /v2.0/tenants?name=username returns the single matching
  tenant on the admin URL. However, on the user URL, instead of giving a
  404 (since getting a tenant by name is not supported there), it returns
  the standard get-tenants response, ignoring the parameters entirely.
  This would cause negative tests that just check the status to fail (a
  200 is returned when a 404 is expected), and it would appear that the
  API returned a response for get tenant by name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1290990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291130] [NEW] Calling stopall not necessary in individual unit tests

2014-03-11 Thread Kevin Benton
Public bug reported:

Once https://bugs.launchpad.net/neutron/+bug/1290550 is closed, calling
stopall on mock.patch from individual unit tests is no longer required.
Calls to stopall in existing code should be removed to prevent
duplication.

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291130

Title:
  Calling stopall not necessary in individual unit tests

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Once https://bugs.launchpad.net/neutron/+bug/1290550 is closed,
  calling stopall on mock.patch from individual unit tests is no longer
  required. Calls to stopall in existing code should be removed to
  prevent duplication.
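
  A sketch of the resulting pattern, assuming the BaseTestCase change
  from bug 1290550 is merged (the patched target is illustrative):

  import mock

  from neutron.tests import base

  class TestFoo(base.BaseTestCase):
      def setUp(self):
          super(TestFoo, self).setUp()
          mock.patch('neutron.agent.linux.utils.execute').start()
          # No self.addCleanup(mock.patch.stopall) needed here:
          # BaseTestCase already registers stopall as a cleanup.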

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291130/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291096] [NEW] log-dir variable doesn't work?

2014-03-11 Thread Maithem
Public bug reported:

When I set the value of log-dir (http://docs.openstack.org/trunk/config-
reference/content/list-of-compute-config-options.html), the variable does
not seem to be used at all, although setting logdir in nova.conf works
for me.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291096

Title:
  log-dir variable doesn't work?

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I set the value of log-dir (http://docs.openstack.org/trunk/config-
  reference/content/list-of-compute-config-options.html), the variable
  does not seem to be used at all, although setting logdir in nova.conf
  works for me.
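
  For what it's worth, a plausible explanation: the dashed spelling is
  the command-line form of the option, while the config file is expected
  to use the underscore form (illustrative nova.conf snippet; this is an
  assumption, not a confirmed diagnosis):

  [DEFAULT]
  # log-dir = /var/log/nova   <- CLI-style spelling; reported as ignored
  log_dir = /var/log/nova
  # 'logdir' is a legacy alias that also works, per the report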

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291096/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291103] [NEW] admin_state_up check broken for update network in PLUMgrid Neutron Plugin

2014-03-11 Thread Fawad Khaliq
Public bug reported:

In the update_network API call, the network dictionary has the contents:
{u'network': {u'admin_state_up': True}}

The _network_admin_state function expects the network name in the
dictionary and raises an error. This has to be fixed: the network name
should not be expected.

2014-03-11 16:20:41.920 29794 ERROR neutron.api.v2.resource [-] update failed
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py, line 84, in 
resource
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 486, in update
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py, line 
233, in inner
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource retval = 
f(*args, **kwargs)
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py,
 line 133, in update_network
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource 
self._network_admin_state(network)
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py,
 line 597, in _network_admin_state
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource raise 
plum_excep.PLUMgridException(err_msg=err_message)
2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource PLUMgridException: 
An unexpected error occurred in the PLUMgrid Plugin: Network Admin State 
Validation Falied:

** Affects: neutron
 Importance: Undecided
 Assignee: Fawad Khaliq (fawad)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Fawad Khaliq (fawad)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291103

Title:
  admin_state_up check broken for update network in PLUMgrid Neutron
  Plugin

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  In the update_network API call, the network dictionary has the contents:
  {u'network': {u'admin_state_up': True}}

  The _network_admin_state function expects the network name in the
  dictionary and raises an error. This has to be fixed: the network name
  should not be expected.

  2014-03-11 16:20:41.920 29794 ERROR neutron.api.v2.resource [-] update failed
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource Traceback (most 
recent call last):
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/resource.py, line 84, in 
resource
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource result = 
method(request=request, **args)
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/api/v2/base.py, line 486, in update
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource obj = 
obj_updater(request.context, id, **kwargs)
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/openstack/common/lockutils.py, line 
233, in inner
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource retval = 
f(*args, **kwargs)
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py,
 line 133, in update_network
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource 
self._network_admin_state(network)
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource   File 
/usr/lib/python2.6/site-packages/neutron/plugins/plumgrid/plumgrid_plugin/plumgrid_plugin.py,
 line 597, in _network_admin_state
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource raise 
plum_excep.PLUMgridException(err_msg=err_message)
  2014-03-11 16:20:41.920 29794 TRACE neutron.api.v2.resource 
PLUMgridException: An unexpected error occurred in the PLUMgrid Plugin: Network 
Admin State Validation Falied:
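
  A minimal sketch of the suggested fix: read admin_state_up with a safe
  lookup instead of assuming other keys such as the name are present (the
  function body is assumed from the description, not the actual plugin
  code):

  def _network_admin_state(self, network):
      net = network.get('network', {})
      # update_network may send only {'network': {'admin_state_up': ...}},
      # so do not require 'name' (or any other key) to be present.
      admin_state = net.get('admin_state_up')
      if admin_state is False:
          # handle the unsupported state here instead of raising on
          # a missing 'name' key
          pass
      return network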

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291100] [NEW] UnsupportedVersion: Endpoint does not support RPC version 3.22

2014-03-11 Thread Bhuvan Arumugam
Public bug reported:

The compute log spits this error message. We are unable to delete VMs;
they are stuck at task state deleting.

2014-03-11 22:24:32,321 (oslo.messaging.rpc.dispatcher): ERROR dispatcher 
_dispatch_and_reply Exception during message handling: Endpoint does not 
support RPC version 3.22
Traceback (most recent call last):
  File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 133, in _dispatch_and_reply
incoming.message))
  File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 185, in _dispatch
raise UnsupportedVersion(version)
UnsupportedVersion: Endpoint does not support RPC version 3.22
2014-03-11 22:24:32,322 (oslo.messaging._drivers.common): ERROR common 
serialize_remote_exception Returning exception Endpoint does not support RPC 
version 3.22 to caller

It is likely a regression caused by
https://github.isg.apple.com/openstack/nova/commit/a7b5b975a48f132afa0fc8717c72ab3cb1f6545a#nova/compute/rpcapi.py,
wherein the RPC version was bumped to 3.23.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291100

Title:
  UnsupportedVersion: Endpoint does not support RPC version 3.22

Status in OpenStack Compute (Nova):
  New

Bug description:
  The compute log spits this error message. We are unable to delete VMs;
  they are stuck at task state deleting.

  2014-03-11 22:24:32,321 (oslo.messaging.rpc.dispatcher): ERROR dispatcher 
_dispatch_and_reply Exception during message handling: Endpoint does not 
support RPC version 3.22
  Traceback (most recent call last):
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 133, in _dispatch_and_reply
  incoming.message))
File 
/usr/local/csi/share/csi-nova.venv/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py,
 line 185, in _dispatch
  raise UnsupportedVersion(version)
  UnsupportedVersion: Endpoint does not support RPC version 3.22
  2014-03-11 22:24:32,322 (oslo.messaging._drivers.common): ERROR common 
serialize_remote_exception Returning exception Endpoint does not support RPC 
version 3.22 to caller

  It is likely a regression caused by
  
https://github.isg.apple.com/openstack/nova/commit/a7b5b975a48f132afa0fc8717c72ab3cb1f6545a#nova/compute/rpcapi.py,
  wherein the RPC version was bumped to 3.23.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1252410] Re: SecurityGroup exception when there are no subnets

2014-03-11 Thread Aaron Rosen
** No longer affects: neutron

** Changed in: nova
 Assignee: sahid (sahid-ferdjaoui) = Aaron Rosen (arosen)

** Changed in: nova
   Importance: Undecided = High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1252410

Title:
  SecurityGroup exception when there are no subnets

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  When booting an instance with a network that has no defined subnets
  (i.e. you want the instance to have a network interface but not to
  have the address range managed by neutron), the nova/neutron
  integration code throws a SecurityGroupCannotBeApplied exception. At
  the moment, nova does not have the ability to indicate that no
  SecurityGroup is required (omitting it results in the default group
  being assumed).

  To reproduce:

  1. create a network -- do not create a subnet!
  2. boot a vm a-la nova boot --image foo --nic net-id=[uuid for 
aforementioned network] foovm

  Result: 
  VM fails to boot, enters ERROR state

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1252410/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291120] [NEW] better tab name on Hosts Aggregate modal

2014-03-11 Thread Cindy Lu
Public bug reported:

The Create Host Aggregate modal has a tab called Hosts within aggregate.
I suggest matching this with the action button title: Manage Hosts or
Manage Hosts within Aggregate?

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1291120

Title:
  better tab name on Hosts Aggregate modal

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The Create Host Aggregate modal has a tab called Hosts within aggregate.
  I suggest matching this with the action button title: Manage Hosts or
  Manage Hosts within Aggregate?

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1291120/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1290969] Re: Change to require listing all known stores in glance-api.conf breaks swift-backed glance on upgrade

2014-03-11 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/79631
Committed: 
https://git.openstack.org/cgit/openstack/tripleo-image-elements/commit/?id=ef1ee25143bfd07893894da2cd24d2fbf6380736
Submitter: Jenkins
Branch:master

commit ef1ee25143bfd07893894da2cd24d2fbf6380736
Author: Derek Higgins der...@redhat.com
Date:   Tue Mar 11 15:03:48 2014 +

List filesystem and swift as known glance stores

Glance used to default to include all stores as known but has now
switched to only knowing the filesystem store by default. As of
I82073352641d3eb2ab3d6e9a6b64afc99a30dcc7

Change-Id: I10aebbd7c6969c6d95579f3b266e03501a7bedb7
Closes-Bug: #1290969


** Changed in: tripleo
   Status: In Progress = Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1290969

Title:
  Change to require listing all known stores in glance-api.conf breaks
  swift-backed glance on upgrade

Status in OpenStack Image Registry and Delivery Service (Glance):
  In Progress
Status in tripleo - openstack on openstack:
  Fix Released

Bug description:
  This change:

  https://review.openstack.org/#q,I82073352641d3eb2ab3d6e9a6b64afc99a30dcc7,n,z

  Causes an upgrade failure for users whose glance is backed by swift, as
  the swift store is not in the list of defaults anymore.

  This change should be reverted and depending on the list should be
  logged as a deprecation warning for 1 cycle.
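
  An illustrative glance-api.conf entry of the kind the element now
  writes (store paths assumed from glance's Icehouse defaults):

  [DEFAULT]
  known_stores = glance.store.filesystem.Store,
                 glance.store.swift.Store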

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1290969/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291127] [NEW] VM start from a bootable volume does not work properly.

2014-03-11 Thread Yoon Doyoul
Public bug reported:

A VM booted from a bootable volume does not operate normally when it is
started after being stopped.

The following error may occur. 
OSError: [Errno 2] No such file or directory: 
'/var/lib/nova/instances/a7cdb294-a8ce-4aa2-9c49-ac3484b73262/disk'

This problem occurs because local disk information is used instead of
the volume's connection_info.

** Affects: nova
 Importance: Undecided
 Assignee: Yoon Doyoul (ydoyeul)
 Status: New

** Changed in: nova
 Assignee: (unassigned) = Yoon Doyoul (ydoyeul)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291127

Title:
  VM start from a bootable volume does not work properly.

Status in OpenStack Compute (Nova):
  New

Bug description:
  A VM booted from a bootable volume does not operate normally when it is
  started after being stopped.

  The following error may occur. 
  OSError: [Errno 2] No such file or directory: 
'/var/lib/nova/instances/a7cdb294-a8ce-4aa2-9c49-ac3484b73262/disk'

  This problem occurs because local disk information is used instead of
  the volume's connection_info.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291144] [NEW] Calling cfg.CONF.reset not necessary in individual unit tests

2014-03-11 Thread Henry Gessau
Public bug reported:

oslo.config.cfg.CONF.reset is added to cleanup in BaseTestCase.setUp().
No need for individual test classes to do it.

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = Henry Gessau (gessau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291144

Title:
  Calling cfg.CONF.reset not necessary in individual unit tests

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  oslo.config.cfg.CONF.reset is added to cleanup in
  BaseTestCase.setUp(). No need for individual test classes to do it.
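
  A minimal sketch of the base-class side described above (surrounding
  code omitted):

  import testtools

  from oslo.config import cfg

  class BaseTestCase(testtools.TestCase):
      def setUp(self):
          super(BaseTestCase, self).setUp()
          # Registered once here, so individual test classes need not
          # call cfg.CONF.reset themselves.
          self.addCleanup(cfg.CONF.reset)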

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291144/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291157] [NEW] idp deletion should trigger token deletion

2014-03-11 Thread Steve Martinelli
Public bug reported:

When a federation IdP is deleted, the tokens that were issued (and still
active) and associated with the IdP should be deleted, to prevent
unwarranted access. The fix should delete any tokens associated with the
IdP upon deletion (and possibly upon update, too).

** Affects: keystone
 Importance: High
 Status: New

** Changed in: keystone
   Importance: Undecided = High

** Changed in: keystone
Milestone: None = icehouse-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1291157

Title:
  idp deletion should trigger token deletion

Status in OpenStack Identity (Keystone):
  New

Bug description:
  When a federation IdP is deleted, the tokens that were issued (and
  still active) and associated with the IdP should be deleted, to
  prevent unwarranted access. The fix should delete any tokens
  associated with the IdP upon deletion (and possibly upon update, too).
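
  A minimal sketch of the intended hook; the manager and method names
  are hypothetical, not Keystone's actual API:

  def delete_idp(self, idp_id):
      self.federation_api.delete_idp(idp_id)
      # Revoke any still-active tokens issued through this IdP.
      self.token_api.delete_tokens_by_idp(idp_id)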

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1291157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291161] [NEW] Need a property in glance metadata to indicate the vm id when create vm snapshot

2014-03-11 Thread David Geng
Public bug reported:

In order to manage VM snapshots in glance conveniently, we need to know
which images in glance were captured from a VM.
So we need to add a new property to the glance metadata when creating the
VM snapshot, for example: server_id = vm uuid. This new property will
help to filter the images when using glance image-list.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291161

Title:
  Need a property in glance metadata to indicate the vm id when create
  vm snapshot

Status in OpenStack Compute (Nova):
  New

Bug description:
  In order to manage VM snapshots in glance conveniently, we need to know
  which images in glance were captured from a VM. So we need to add a new
  property to the glance metadata when creating the VM snapshot, for
  example: server_id = vm uuid. This new property will help to filter the
  images when using glance image-list.
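
  With such a property in place, filtering could look like this (the
  server_id key is the reporter's proposal; --property-filter is the v1
  glance client's generic property filter):

  glance image-list --property-filter server_id=<vm uuid>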

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1291161/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291163] [NEW] Create port does not check validity of mac-address

2014-03-11 Thread shihanzhang
Public bug reported:

The 'create port' API does not check the validity of the mac-address; if
you use this invalid port to create a VM, it will fail:
root@ubuntu01:~# neutron port-create  --mac-address 123 test2
Created a new port:
+---+---+
| Field | Value 
|
+---+---+
| admin_state_up| True  
|
| allowed_address_pairs |   
|
| binding:capabilities  | {port_filter: false}
|
| binding:host_id   |   
|
| binding:vif_type  | unbound   
|
| device_id |   
|
| device_owner  |   
|
| fixed_ips | {subnet_id: 5519e015-fc83-44c2-99ad-d669b3c2c9d7, 
ip_address: 10.10.10.4} |
| id| ae33af6e-6f8f-4ce8-928b-4f05396a7db3  
|
| mac_address   | 123   
|
| name  |   
|
| network_id| 255f3e92-5a6e-44a5-bbf9-1a62bf5d5935  
|
| security_groups   | f627556d-64a3-4c1b-8c50-10a58ddaf29f  
|
| status| DOWN  
|
| tenant_id | 34fddbc22c184214b823be267837ef81  
|

The failure log from creating the VM:

 Traceback (most recent call last):
   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1037, 
in _build_instance
 set_access_ip=set_access_ip)
   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1420, 
in _spawn
 LOG.exception(_('Instance failed to spawn'), instance=instance)
   File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 1417, 
in _spawn
 block_device_info)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
2070, in spawn
 block_device_info, context=context)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3225, in _create_domain_and_network
 domain = self._create_domain(xml, instance=instance, power_on=power_on)
   File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3159, in _create_domain
 raise e
 libvirtError: XML error: unable to parse mac address '123'
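
A minimal check of the kind the API could apply before accepting the
value, assuming the netaddr library neutron already depends on (a sketch,
not the actual fix):

import netaddr

def is_valid_mac(mac):
    # netaddr.valid_mac() rejects strings like '123' that are not MACs.
    return netaddr.valid_mac(mac)

is_valid_mac('123')                # False -- the request should be rejected
is_valid_mac('fa:16:3e:00:00:01')  # True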

** Affects: neutron
 Importance: Undecided
 Assignee: shihanzhang (shihanzhang)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) = shihanzhang (shihanzhang)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291163

Title:
  Create port does not check validity of mac-address

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The 'create port' API does not check the validity of the mac-address;
  if you use this invalid port to create a VM, it will fail:
  root@ubuntu01:~# neutron port-create  --mac-address 123 test2
  Created a new port:
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | 
  |
  | binding:capabilities  | {port_filter: false}  
  |
  | binding:host_id   | 
  |
  | binding:vif_type  | unbound 
  |
  | device_id | 
  |
  | device_owner  | 
   

[Yahoo-eng-team] [Bug 1291174] [NEW] Calling patch.stop not needed in individual test cases

2014-03-11 Thread Henry Gessau
Public bug reported:

https://bugs.launchpad.net/neutron/+bug/1290550 adds mock.patch.stopall to 
BaseTestCase.
https://bugs.launchpad.net/neutron/+bug/1291130 removes stopall from individual 
test cases.

We can now go one step further, and remove individual patch stops from
cleanups.

** Affects: neutron
 Importance: Undecided
 Assignee: Henry Gessau (gessau)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Henry Gessau (gessau)

** Changed in: neutron
   Status: New = In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1291174

Title:
  Calling patch.stop not needed in individual test cases

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  https://bugs.launchpad.net/neutron/+bug/1290550 adds mock.patch.stopall to 
BaseTestCase.
  https://bugs.launchpad.net/neutron/+bug/1291130 removes stopall from 
individual test cases.

  We can now go one step further, and remove individual patch stops from
  cleanups.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1291174/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1291163] Re: Create VM use port' mac-address '123' failed

2014-03-11 Thread shihanzhang
** Summary changed:

- Create port done not check validity  of mac-address
+ Create VM use port' mac-address '123' failed

** Description changed:

- the api of 'create port' does not check validity  of mac-address, if you use 
this invalid port to create VM , it will failed,
+ neutron api of create port can use mac of int value,but nova can't use the 
port to create vm!
  root@ubuntu01:~# neutron port-create  --mac-address 123 test2
  Created a new port:
  
+---+---+
  | Field | Value   
  |
  
+---+---+
  | admin_state_up| True
  |
  | allowed_address_pairs | 
  |
  | binding:capabilities  | {port_filter: false}  
  |
  | binding:host_id   | 
  |
  | binding:vif_type  | unbound 
  |
  | device_id | 
  |
  | device_owner  | 
  |
  | fixed_ips | {subnet_id: 
5519e015-fc83-44c2-99ad-d669b3c2c9d7, ip_address: 10.10.10.4} |
  | id| ae33af6e-6f8f-4ce8-928b-4f05396a7db3
  |
  | mac_address   | 123 
  |
  | name  | 
  |
  | network_id| 255f3e92-5a6e-44a5-bbf9-1a62bf5d5935
  |
  | security_groups   | f627556d-64a3-4c1b-8c50-10a58ddaf29f
  |
  | status| DOWN
  |
  | tenant_id | 34fddbc22c184214b823be267837ef81
  |
  
  the failed log of creating VM:
  
-  Traceback (most recent call last):
-File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
1037, in _build_instance
-  set_access_ip=set_access_ip)
-File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
1420, in _spawn
-  LOG.exception(_('Instance failed to spawn'), instance=instance)
-File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
1417, in _spawn
-  block_device_info)
-File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
2070, in spawn
-  block_device_info, context=context)
-File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3225, in _create_domain_and_network
-  domain = self._create_domain(xml, instance=instance, power_on=power_on)
-File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3159, in _create_domain
-  raise e
-  libvirtError: XML error: unable to parse mac address '123'
+  Traceback (most recent call last):
+    File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
1037, in _build_instance
+  set_access_ip=set_access_ip)
+    File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
1420, in _spawn
+  LOG.exception(_('Instance failed to spawn'), instance=instance)
+    File /usr/lib/python2.7/dist-packages/nova/compute/manager.py, line 
1417, in _spawn
+  block_device_info)
+    File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
2070, in spawn
+  block_device_info, context=context)
+    File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3225, in _create_domain_and_network
+  domain = self._create_domain(xml, instance=instance, power_on=power_on)
+    File /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py, line 
3159, in _create_domain
+  raise e
+  libvirtError: XML error: unable to parse mac address '123'

** Project changed: neutron = nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1291163

Title:
  Create VM use port' mac-address '123' failed

Status in OpenStack Compute (Nova):
  New

Bug description:
  The neutron create-port API accepts a mac of an int-like value, but
  nova can't use the port to create a VM!
  root@ubuntu01:~# neutron port-create  --mac-address 123 test2
  Created a new port:
  

[Yahoo-eng-team] [Bug 1190906] Re: LBaaS should check authorization when updating objects

2014-03-11 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1190906

Title:
  LBaaS should check authorization when updating objects

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  It seems that the update_* methods are missing authorization checks,
  unlike the create_* methods. Policy.json is also lacking the
  corresponding rules.
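
  Illustrative policy.json additions of the kind implied here (rule names
  and values are assumptions following the file's existing conventions):

  "update_vip": "rule:admin_or_owner",
  "update_pool": "rule:admin_or_owner",
  "update_member": "rule:admin_or_owner",
  "update_health_monitor": "rule:admin_or_owner",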

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1190906/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1091780] Re: nova-network - iptables-restore v1.4.12: host/network `None' not found

2014-03-11 Thread Launchpad Bug Tracker
[Expired for nova (Ubuntu) because there has been no activity for 60
days.]

** Changed in: nova (Ubuntu)
   Status: Incomplete = Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1091780

Title:
  nova-network - iptables-restore v1.4.12: host/network `None' not
  found

Status in OpenStack Compute (Nova):
  Incomplete
Status in “nova” package in Ubuntu:
  Expired

Bug description:
  1- In Precise nova-network crashes because it cannot apply iptables
  rules when trying to apply vpn rules. nova-network tries to set VPN
  iptables rules for openvpn access:

  2012-12-17 07:17:24 TRACE nova Stderr: iptables-restore v1.4.12:
  host/network `None' not found\nError occurred at line: 23\nTry
  `iptables-restore -h' or 'iptables-restore --help' for more
  information.\n

  2- How reproducible?

  Not clear. The configuration I used with juju seems to create an
  environment that causes this problem. When this problem is present the
  issue reproduces every time.

  3- How to reproduce:

  When the issue is present just starting up nova-network causes the
  problem to reproduce. Nova-network exits in the end and dies because
  of the error on iptables-restore

  4- I added debugging in nova.conf with --debug=true and added extra
  debugging in

  /usr/lib/python2.7/dist-packages/nova/utils.py

  which showed the full iptables rules that were to be restored by
  iptables-restore:

  2012-12-17 07:17:24 DEBUG nova.utils 
[req-391688fd-3b99-4b1c-8b46-fb4f64e64246 None None] process input: 
  # Generated by iptables-save v1.4.12 on Mon Dec 17 07:17:21 2012
  *filter
  :INPUT ACCEPT [0:0]
  :FORWARD ACCEPT [0:0]
  :OUTPUT ACCEPT [0:0]
  :nova-api-FORWARD - [0:0]
  :nova-api-INPUT - [0:0]
  :nova-api-OUTPUT - [0:0]
  :nova-api-local - [0:0]
  :nova-network-FORWARD - [0:0]
  :nova-network-INPUT - [0:0]
  :nova-network-local - [0:0]
  :nova-network-OUTPUT - [0:0]
  :nova-filter-top - [0:0]
  -A FORWARD -j nova-filter-top
  -A OUTPUT -j nova-filter-top
  -A nova-filter-top -j nova-network-local
  -A INPUT -j nova-network-INPUT
  -A OUTPUT -j nova-network-OUTPUT
  -A FORWARD -j nova-network-FORWARD
  -A nova-network-FORWARD --in-interface br100 -j ACCEPT
  -A nova-network-FORWARD --out-interface br100 -j ACCEPT
  -A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT
  -A INPUT -j nova-api-INPUT
  -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
  -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
  -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
  -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
  -A FORWARD -j nova-api-FORWARD
  -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED 
-j ACCEPT
  -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
  -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
  -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
  -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
  -A OUTPUT -j nova-api-OUTPUT
  -A nova-api-INPUT -d 192.168.124.150/32 -p tcp -m tcp --dport 8775 -j ACCEPT
  -A nova-filter-top -j nova-api-local
  COMMIT

  
  4.1- Among the rules above we have:

  -A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT

  which is responsible for the fault in iptables-restore.
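
  A standalone sketch of where the literal 'None' most likely comes from:
  when the network record has no VPN address set, Python's string
  formatting renders None as the text 'None' (the variable name
  vpn_address below is hypothetical, not the actual nova-network code):

    # With the address unset, %-formatting produces the bad rule verbatim.
    vpn_address = None  # e.g. the network's VPN address was never populated

    rule = ('-A nova-network-FORWARD -d %s -p udp '
            '--dport 1194 -j ACCEPT' % vpn_address)
    print(rule)
    # -A nova-network-FORWARD -d None -p udp --dport 1194 -j ACCEPT

    # A defensive fix would skip the VPN rule when the address is unset:
    rules = []
    if vpn_address is not None:
        rules.append(rule)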

  5- These are the error messages:

  2012-12-17 07:17:24 DEBUG nova.utils 
[req-391688fd-3b99-4b1c-8b46-fb4f64e64246 None None] Result was 2 from 
(pid=14699) execute /usr/lib/python2.7/dist-packages/nova/utils.py:237
  2012-12-17 07:17:24 CRITICAL nova [-] Unexpected error while running command.
  Command: sudo nova-rootwrap iptables-restore
  Exit code: 2
  Stdout: ''

  Stderr: iptables-restore v1.4.12: host/network `None' not found\nError 
occurred at line: 23\nTry `iptables-restore -h' or 'iptables-restore --help' 
for more information.\n
  2012-12-17 07:17:24 TRACE nova Traceback (most recent call last):
  2012-12-17 07:17:24 TRACE nova   File "/usr/bin/nova-network", line 49, in <module>
  2012-12-17 07:17:24 TRACE nova service.wait()
  2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 413, in wait
  2012-12-17 07:17:24 TRACE nova _launcher.wait()
  2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/nova/service.py", line 131, in wait
  2012-12-17 07:17:24 TRACE nova service.wait()
  2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
  2012-12-17 07:17:24 TRACE nova return self._exit_event.wait()
  2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
  2012-12-17 07:17:24 TRACE nova return hubs.get_hub().switch()
  2012-12-17 07:17:24 TRACE nova   File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
  2012-12-17 07:17:24 TRACE nova return self.greenlet.switch()
  

[Yahoo-eng-team] [Bug 1290854] Re: Missing internal ovs vlan tag for a port of resumed instance

2014-03-11 Thread Aaron Rosen
Restarting the l2-agent causes the vlan tag to be restored.

** Changed in: nova
   Status: New => Confirmed

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1290854

Title:
  Missing internal ovs vlan tag for a port of resumed instance

Status in OpenStack Neutron (virtual network service):
  New
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  If we suspend and then resume an instance, it becomes unreachable from
  the tenant network.

  $nova boot --image 263f4823-f43c-4f2a-a845-2221f4a2dad1 --flavor 1
  --nic net-id=61ade795-f123-4880-9ab5-a73e0c1b2e70 server1

  $ ping 10.0.0.8   
  
  PING 10.0.0.8 (10.0.0.8) 56(84) bytes of data.
  64 bytes from 10.0.0.8: icmp_req=1 ttl=63 time=46.5 ms
  64 bytes from 10.0.0.8: icmp_req=2 ttl=63 time=0.675 ms
  64 bytes from 10.0.0.8: icmp_req=3 ttl=63 time=0.572 ms

  $ nova suspend 9b55928f-e6c7-49b3-9480-0df926dc6a08
  $ nova resume 9b55928f-e6c7-49b3-9480-0df926dc6a08 

  $ ping 10.0.0.8
  PING 10.0.0.8 (10.0.0.8) 56(84) bytes of data.
  From 172.24.4.2 icmp_seq=1 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=2 Destination Host Unreachable
  From 172.24.4.2 icmp_seq=3 Destination Host Unreachable

  ovs-vsctl shows that after resume the instance's port lacks its VLAN
  tag:

  before suspend:
  Port tap69cb10fd-43
  tag: 1
  Interface tap69cb10fd-43

  after suspend:
  Port tap69cb10fd-43
  tag: 1
  Interface tap69cb10fd-43

  after resume:
  Port tap69cb10fd-43
  Interface tap69cb10fd-43

  
  The reason for this behavior is the patch
  https://review.openstack.org/#/c/67981/7, which introduced removing and
  recreating existing interfaces.

  What's more, the ovs-agent does not seem to notice the tag's
  disappearance, so it never marks the port as updated.
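
  As noted in the comment above, restarting the l2-agent restores the
  VLAN tag. Until the agent detects the missing tag itself, a manual
  workaround is to re-apply it with ovs-vsctl on the compute host, using
  the port name and tag value from the output above:

  $ sudo ovs-vsctl set Port tap69cb10fd-43 tag=1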

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1290854/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp