[Yahoo-eng-team] [Bug 1384555] [NEW] Openstack Neutron Database error.

2014-10-22 Thread Robert Campbell
Public bug reported:

On a fresh installation of Juno, it seems that the database is not being
populated correctly. This is the output of the log (I also demonstrate that
the DB had no tables to begin with):

MariaDB [(none)]> use neutron
Database changed
MariaDB [neutron]> show tables;
Empty set (0.00 sec)

MariaDB [neutron]> quit
Bye
root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini current
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
Current revision for mysql://neutron:X@10.10.10.1/neutron: None
root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> havana, havana_initial
INFO  [alembic.migration] Running upgrade havana -> e197124d4b9, add unique 
constraint to members
INFO  [alembic.migration] Running upgrade e197124d4b9 -> 1fcfc149aca4, Add a 
unique constraint on (agent_type, host) columns to prevent a race condition 
when an agent entry is 'upserted'.
INFO  [alembic.migration] Running upgrade 1fcfc149aca4 -> 50e86cb2637a, 
nsx_mappings
INFO  [alembic.migration] Running upgrade 50e86cb2637a -> 1421183d533f, NSX 
DHCP/metadata support
INFO  [alembic.migration] Running upgrade 1421183d533f -> 3d3cb89d84ee, 
nsx_switch_mappings
INFO  [alembic.migration] Running upgrade 3d3cb89d84ee -> 4ca36cfc898c, 
nsx_router_mappings
INFO  [alembic.migration] Running upgrade 4ca36cfc898c -> 27cc183af192, 
ml2_vnic_type
INFO  [alembic.migration] Running upgrade 27cc183af192 -> 50d5ba354c23, ml2 
binding:vif_details
INFO  [alembic.migration] Running upgrade 50d5ba354c23 -> 157a5d299379, ml2 
binding:profile
INFO  [alembic.migration] Running upgrade 157a5d299379 -> 3d2585038b95, VMware 
NSX rebranding
INFO  [alembic.migration] Running upgrade 3d2585038b95 -> abc88c33f74f, lb stats
INFO  [alembic.migration] Running upgrade abc88c33f74f -> 1b2580001654, 
nsx_sec_group_mapping
INFO  [alembic.migration] Running upgrade 1b2580001654 -> e766b19a3bb, 
nuage_initial
INFO  [alembic.migration] Running upgrade e766b19a3bb -> 2eeaf963a447, 
floatingip_status
INFO  [alembic.migration] Running upgrade 2eeaf963a447 -> 492a106273f8, Brocade 
ML2 Mech. Driver
INFO  [alembic.migration] Running upgrade 492a106273f8 -> 24c7ea5160d7, Cisco 
CSR VPNaaS
INFO  [alembic.migration] Running upgrade 24c7ea5160d7 -> 81c553f3776c, 
bsn_consistencyhashes
INFO  [alembic.migration] Running upgrade 81c553f3776c -> 117643811bca, nec: 
delete old ofc mapping tables
INFO  [alembic.migration] Running upgrade 117643811bca -> 19180cf98af6, 
nsx_gw_devices
INFO  [alembic.migration] Running upgrade 19180cf98af6 -> 33dd0a9fa487, 
embrane_lbaas_driver
INFO  [alembic.migration] Running upgrade 33dd0a9fa487 -> 2447ad0e9585, Add 
IPv6 Subnet properties
INFO  [alembic.migration] Running upgrade 2447ad0e9585 -> 538732fa21e1, NEC 
Rename quantum_id to neutron_id
INFO  [alembic.migration] Running upgrade 538732fa21e1 -> 5ac1c354a051, n1kv 
segment allocs for cisco n1kv plugin
INFO  [alembic.migration] Running upgrade 5ac1c354a051 -> icehouse, icehouse
INFO  [alembic.migration] Running upgrade icehouse -> 54f7549a0e5f, 
set_not_null_peer_address
INFO  [alembic.migration] Running upgrade 54f7549a0e5f -> 1e5dd1d09b22, 
set_not_null_fields_lb_stats
INFO  [alembic.migration] Running upgrade 1e5dd1d09b22 -> b65aa907aec, 
set_length_of_protocol_field
INFO  [alembic.migration] Running upgrade b65aa907aec -> 33c3db036fe4, 
set_length_of_description_field_metering
INFO  [alembic.migration] Running upgrade 33c3db036fe4 -> 4eca4a84f08a, Remove 
ML2 Cisco Credentials DB
INFO  [alembic.migration] Running upgrade 4eca4a84f08a -> d06e871c0d5, 
set_admin_state_up_not_null_ml2
INFO  [alembic.migration] Running upgrade d06e871c0d5 -> 6be312499f9, 
set_not_null_vlan_id_cisco
INFO  [alembic.migration] Running upgrade 6be312499f9 -> 1b837a7125a9, Cisco 
APIC Mechanism Driver
INFO  [alembic.migration] Running upgrade 1b837a7125a9 -> 10cd28e692e9, 
nuage_extraroute
INFO  [alembic.migration] Running upgrade 10cd28e692e9 -> 2db5203cb7a9, 
nuage_floatingip
INFO  [alembic.migration] Running upgrade 2db5203cb7a9 -> 5446f2a45467, 
set_server_default
INFO  [alembic.migration] Running upgrade 5446f2a45467 -> db_healing, Include 
all tables and make migrations unconditional.
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected server default on column 
'cisco_ml2_apic_epgs.provider'
INFO  [alembic.autogenerate.compare] Detected removed index 
'cisco_n1kv_vlan_allocations_ibfk_1' on 'cisco_n1kv_vlan_allocations'
INFO  [alembic.autogenerate.compare] Detected server default on column 
'cisco_n1kv_vxlan_allocations.allocated'
INFO  

[Yahoo-eng-team] [Bug 1384546] [NEW] Disk filter does not work correctly when the instance directory is shared

2014-10-22 Thread Takahiro Shida
Public bug reported:

- Abstract
When an instance launch request is scheduled, the disk filter makes its
decision by looking at the free disk space on the target nova-compute.
Because this free-space calculation is not performed correctly when a disk is
shared by multiple nova-compute hosts, an instance can be allowed to start on
a nova-compute host that has no free disk.

- Situation
If the instance directory is shared, a launch request for instances that
together need more than the actual disk size cannot be blocked.
For example, suppose two nova-compute hosts share the same disk and multiple
instances are started at the same time.
If the shared disk has 15G free and each instance to be started requests a
10G disk, should both instances be allowed to start?
In the current situation, both start normally.
However, since only 15G of disk space actually exists, the two instances
cannot use 15G or more of disk between them.
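
To make the accounting problem concrete, here is a small sketch (illustrative
only, not the actual DiskFilter code): both hosts report the same 15G NFS
export as their own free space, so the scheduler admits a 10G instance on each
host even though only 15G exists in total.

    # Illustrative sketch of the double-counting problem; not the nova DiskFilter.
    SHARED_NFS_FREE_GB = 15          # one NFS export mounted by both compute hosts

    hosts = {"compute-1": SHARED_NFS_FREE_GB, "compute-2": SHARED_NFS_FREE_GB}
    requested_disk_gb = 10

    admitted = [h for h, free in hosts.items() if free >= requested_disk_gb]
    print(admitted)                             # ['compute-1', 'compute-2']
    print(len(admitted) * requested_disk_gb)    # 20G admitted, but only 15G exists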

- How to reproduce
0. Add the disk filter to the nova-scheduler filters.
1. Prepare two nova-compute hosts that share the instance directory over NFS.
2. The NFS disk has 15G of free space.
3. Boot two instances at the same time, each with a 10G disk.
4. Both instances boot normally.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: disk filter nfs scheduler

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384546

Title:
  Disk filter does not work correctly when the instance directory is
  shared

Status in OpenStack Compute (Nova):
  New

Bug description:
  - Abstract
  When an instance launch request is scheduled, the disk filter makes its
  decision by looking at the free disk space on the target nova-compute.
  Because this free-space calculation is not performed correctly when a disk
  is shared by multiple nova-compute hosts, an instance can be allowed to
  start on a nova-compute host that has no free disk.

  - Situation
  If the instance directory is shared, a launch request for instances that
  together need more than the actual disk size cannot be blocked.

[Yahoo-eng-team] [Bug 1353158] Re: Attempting to create a router in a new nuage-net-partition still creates the router in the default nuage-net-partition

2014-10-22 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1353158

Title:
  Attempting to create a router in a new nuage-net-partition still
  creates the router in the default nuage-net-partition

Status in OpenStack Neutron (virtual network service):
  Expired

Bug description:
  Steps to recreate:

  1. Create a new nuage-net-partition
  2. Create a router in this nuage-net-partition by providing the 
--net-partition option

  The new router is created in the default nuage-net-partition instead
  of the new nuage-net-partition

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1353158/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383579] Re: admin can't update other user's password

2014-10-22 Thread Hong-Guang
there is an update password option with the edit action

** Changed in: horizon
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1383579

Title:
  admin can't update other user's password

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  Testing step :

  1:git clone https://github.com/openstack-dev/devstack.git
  2:cd devstack && ./stack.sh
  3:login as admin and create a user named demo1
  4:go to user panel and find the demo1 user
  5:there are only 2 action for edit user demo1:Disable user and Delete user

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1383579/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384531] [NEW] attach encrypted volume failed, exception info is not right

2014-10-22 Thread wanghao
Public bug reported:

When attaching an encrypted volume to a VM, the process fails because
importing the encryptor class raises an exception ("Empty module name").
However, the exception info recorded in the log file is wrong. The log looks
like this:

2014-10-22 21:40:39.089 ERROR nova.virt.libvirt.driver 
[req-92b203b3-bda6-4013-a946-c3760363b819 admin demo] [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] Failed to attach volume at mountpoint: 
/dev/vdb
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] Traceback (most recent call last):
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1380, in attach_volume
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] encryption)
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1327, in 
_get_volume_encryptor
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] **encryption)
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/volume/encryptors/__init__.py", line 44, in 
get_volume_encryptor
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] provider=provider, exception=e)
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1449, in error
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] self.logger.error(msg, *args, 
**kwargs)
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1178, in error
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] self._log(ERROR, msg, args, **kwargs)
2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] TypeError: _log() got an unexpected 
keyword argument 'exception'

This error is caused by an incorrect log call in
/opt/stack/nova/nova/volume/encryptors/__init__.py (line 44).
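
For reference, a minimal sketch with the standard logging module (the message
and names below are illustrative, not the nova code) showing why passing extra
keyword arguments to a logger call blows up, and the interpolation form that
works:

    import logging

    logging.basicConfig(level=logging.ERROR)
    LOG = logging.getLogger(__name__)

    provider, err = "LuksEncryptor", ValueError("Empty module name")

    # Broken: Logger.error() does not accept arbitrary keyword arguments,
    # so this raises "TypeError: _log() got an unexpected keyword argument".
    try:
        LOG.error("Error instantiating %(provider)s", provider=provider, exception=err)
    except TypeError as exc:
        print(exc)

    # Working: pass the values as interpolation arguments instead.
    LOG.error("Error instantiating %(provider)s: %(exception)s",
              {"provider": provider, "exception": err})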

** Affects: nova
 Importance: Undecided
 Assignee: wanghao (wanghao749)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => wanghao (wanghao749)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384531

Title:
  attach encrypted volume failed, exception info is not right

Status in OpenStack Compute (Nova):
  New

Bug description:
  When attaching an encrypted volume to a VM, the process fails because
  importing the encryptor class raises an exception ("Empty module name").
  However, the exception info recorded in the log file is wrong. The log
  looks like this:

  2014-10-22 21:40:39.089 ERROR nova.virt.libvirt.driver 
[req-92b203b3-bda6-4013-a946-c3760363b819 admin demo] [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] Failed to attach volume at mountpoint: 
/dev/vdb
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] Traceback (most recent call last):
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1380, in attach_volume
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] encryption)
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 1327, in 
_get_volume_encryptor
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] **encryption)
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/opt/stack/nova/nova/volume/encryptors/__init__.py", line 44, in 
get_volume_encryptor
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] provider=provider, exception=e)
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d]   File 
"/usr/lib/python2.7/logging/__init__.py", line 1449, in error
  2014-10-22 21:40:39.089 25617 TRACE nova.virt.libvirt.driver [instance: 
c2589f3a-3d20-44fc-bb37-88e65cb13b2d] self.logger.error(msg, *args, 
**kwargs)
  2

[Yahoo-eng-team] [Bug 1384505] [NEW] creating a security group with name "DEFAULT" should not be allowed

2014-10-22 Thread Prinika
Public bug reported:

Currently creating another security group with name "default" is not
allowed, however we can create another security group with name
"DEFAULT" (or any other CASE pattern). This should not be allowed.

When trying to boot a VM without specifying a security group it should
always pick the "default" security group, however if another security
group of the name DEFAULT is present, the VM gets associated with the
wrong security group (i.e DEFAULT and not default).
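
A minimal sketch of the kind of guard being requested (illustrative only; the
function and exception are hypothetical, not the neutron implementation):
compare the requested name case-insensitively against the reserved name.

    # Hypothetical validation helper; not the neutron code.
    RESERVED_NAME = "default"

    def validate_security_group_name(name):
        """Reject names that collide with the reserved 'default' group."""
        if name.lower() == RESERVED_NAME:
            raise ValueError("'%s' conflicts with the reserved security "
                             "group name '%s'" % (name, RESERVED_NAME))
        return name

    validate_security_group_name("web-servers")   # returns the name unchanged
    # validate_security_group_name("DEFAULT")     # would raise ValueError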

** Affects: neutron
 Importance: Undecided
 Assignee: Sayaji Patil (sayaji15)
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384505

Title:
  creating a security group with name "DEFAULT" should not be allowed

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Currently creating another security group with name "default" is not
  allowed, however we can create another security group with name
  "DEFAULT" (or any other CASE pattern). This should not be allowed.

  When trying to boot a VM without specifying a security group it should
  always pick the "default" security group, however if another security
  group of the name DEFAULT is present, the VM gets associated with the
  wrong security group (i.e DEFAULT and not default).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384505/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384487] [NEW] big switch server manager uses SSLv3

2014-10-22 Thread Kevin Benton
Public bug reported:

The communication with the backend is done using the default protocol of
ssl.wrap_socket, which is SSLv3. This protocol is vulnerable to the
Poodle attack.
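
A minimal sketch of the mitigation direction, using the Python 2.7-era ssl API
referenced above (the function and arguments are placeholders, not the Big
Switch plugin code): pass an explicit ssl_version so SSLv3 is never used.

    import socket
    import ssl

    def connect_tls(host, port):
        """Open a TLS connection, refusing the SSLv3-capable default protocol."""
        sock = socket.create_connection((host, port))
        return ssl.wrap_socket(sock, ssl_version=ssl.PROTOCOL_TLSv1)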

** Affects: neutron
 Importance: Undecided
 Assignee: Kevin Benton (kevinbenton)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kevin Benton (kevinbenton)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384487

Title:
  big switch server manager uses SSLv3

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The communication with the backend is done using the default protocol
  of ssl.wrap_socket, which is SSLv3. This protocol is vulnerable to the
  Poodle attack.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384487/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384423] Re: check-heat-dsvm-functional fails with "StackBuildErrorException: Stack VolumeBackupRestoreIntegrationTest-1161201394/7c711e8d-84ba-4363-ab8e-3afa1d6e1dc6 is in CREAT

2014-10-22 Thread Steve Baker
Yes, the following change has just landed to mitigate this:
https://review.openstack.org/#/c/129746/

This change just stops the test instead of failing it when this error
occurs. We think this test is finding a bug in nova/cinder/swift
interaction which is not tested in tempest, namely mounting a volume
which has been restored from a backup.

In Paris I'd like to discuss the possibility of using elastic-recheck to
check for these non-fail errors so we can monitor progress on fixing it,
without having the impact of it failing a gating job.

** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

** Description changed:

  This is a non-voting job but I saw the failure shows up quite a bit:
  
  http://logs.openstack.org/89/96889/13/check/check-heat-dsvm-
  functional/9135dca/console.html#_2014-10-22_15_12_51_439
  
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhY2tCdWlsZEVycm9yRXhjZXB0aW9uOiBTdGFjayBWb2x1bWVCYWNrdXBSZXN0b3JlSW50ZWdyYXRpb25UZXN0XCIgQU5EIG1lc3NhZ2U6XCJpcyBpbiBDUkVBVEVfRkFJTEVEIHN0YXR1cyBkdWUgdG8gJ1Jlc291cmNlIENSRUFURSBmYWlsZWQ6IFdhaXRDb25kaXRpb25GYWlsdXJlOiBUZXN0IEZhaWxlZCdcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgYnVpbGRfbmFtZTpcImNoZWNrLWhlYXQtZHN2bS1mdW5jdGlvbmFsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQwMDY4MTA5MDIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=
  
  188 hits in 7 days, all failures.  It's all check queue since it's a
  non-voting job, but it hits on multiple changes so it'd be good to fix
  this if the job is ever going to be voting.
+ 
+ (stevebaker) The root cause is when attempting to mount a volume which
+ was restored from a backup the volume never appears in cirros
+ /proc/partitions
+ 
+ 
http://git.openstack.org/cgit/openstack/heat/tree/heat_integrationtests/scenario/test_volumes_create_from_backup.yaml#n74
+ 
+ Assistance from cinder folk will be needed to diagnose this.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384423

Title:
  check-heat-dsvm-functional fails with "StackBuildErrorException: Stack
  VolumeBackupRestoreIntegrationTest-1161201394/7c711e8d-84ba-4363-ab8e-
  3afa1d6e1dc6 is in CREATE_FAILED status due to 'Resource CREATE
  failed: WaitConditionFailure: Test Failed'"

Status in Cinder:
  New
Status in Orchestration API (Heat):
  New
Status in OpenStack Compute (Nova):
  New

Bug description:
  This is a non-voting job but I saw the failure shows up quite a bit:

  http://logs.openstack.org/89/96889/13/check/check-heat-dsvm-
  functional/9135dca/console.html#_2014-10-22_15_12_51_439

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiU3RhY2tCdWlsZEVycm9yRXhjZXB0aW9uOiBTdGFjayBWb2x1bWVCYWNrdXBSZXN0b3JlSW50ZWdyYXRpb25UZXN0XCIgQU5EIG1lc3NhZ2U6XCJpcyBpbiBDUkVBVEVfRkFJTEVEIHN0YXR1cyBkdWUgdG8gJ1Jlc291cmNlIENSRUFURSBmYWlsZWQ6IFdhaXRDb25kaXRpb25GYWlsdXJlOiBUZXN0IEZhaWxlZCdcIiBBTkQgdGFnczpcImNvbnNvbGVcIiBBTkQgYnVpbGRfbmFtZTpcImNoZWNrLWhlYXQtZHN2bS1mdW5jdGlvbmFsXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTQwMDY4MTA5MDIsIm1vZGUiOiIiLCJhbmFseXplX2ZpZWxkIjoiIn0=

  188 hits in 7 days, all failures.  It's all check queue since it's a
  non-voting job, but it hits on multiple changes so it'd be good to fix
  this if the job is ever going to be voting.

  (stevebaker) The root cause is when attempting to mount a volume which
  was restored from a backup the volume never appears in cirros
  /proc/partitions

  
http://git.openstack.org/cgit/openstack/heat/tree/heat_integrationtests/scenario/test_volumes_create_from_backup.yaml#n74

  Assistance from cinder folk will be needed to diagnose this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1384423/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384462] [NEW] our angular humanizeNumbers utility is not internationalized

2014-10-22 Thread Doug Fish
Public bug reported:

While browsing code I ran across 
horizon/static/horizon/js/angular/services/horizon.utils.js#L25

 humanizeNumbers: function (number) {
return number.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
},

which is not a proper way to group numbers in all locales.

http://en.wikipedia.org/wiki/Decimal_mark#Other_numeral_systems (the
"Examples of use" section) shows various internationalized examples.
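
For illustration, grouping separators vary by locale; a quick check with Babel
(already used for i18n elsewhere in OpenStack, see bug 1377983 later in this
digest) shows why a hard-coded "," is not enough. This is just a demonstration
of locale-dependent grouping, not a proposed patch:

    from babel.numbers import format_decimal

    # The grouping separator depends on the locale, so a fixed "," is wrong.
    print(format_decimal(1234567, locale='en_US'))  # 1,234,567
    print(format_decimal(1234567, locale='de_DE'))  # 1.234.567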

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: i18n

** Tags added: i18n

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384462

Title:
  our angular humanizeNumbers utility is not internationalized

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While browsing code I ran across 
  horizon/static/horizon/js/angular/services/horizon.utils.js#L25

   humanizeNumbers: function (number) {
  return number.toString().replace(/\B(?=(\d{3})+(?!\d))/g, ",");
  },

  which is not a proper way to group numbers in all locales.

  http://en.wikipedia.org/wiki/Decimal_mark#Other_numeral_systems (the
  "Examples of use" section) shows various internationalized examples.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384462/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384457] [NEW] Self value in Link is wrong in GET /OS-REVOKE/events

2014-10-22 Thread Haneef Ali
Public bug reported:

There are 2 events in the path


# curl -k -H "X-Auth-Token:SomeToken"   
http://localhost:35357/v3/OS-REVOKE/events  | python -mjson.tool
  % Total% Received % Xferd  Average Speed   TimeTime Time  Current
 Dload  Upload   Total   SpentLeft  Speed
100   304  100   3040 0313  0 --:--:-- --:--:-- --:--:--   313
{
"events": [
{
"issued_before": "2014-10-22T20:26:14.00Z",
"project_id": "f5590b050dc14795b5e8447a223bd696"
},
{
"audit_id": "cAV3qiytQkuzpANJ3CPFRg",
"issued_before": "2014-10-22T20:29:44.00Z"
}
],
"links": {
"next": null,
"previous": null,
"self": "http://localhost:35357/v3/OS-REVOKE/events/events";
}
}

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1384457

Title:
  Self value in Link  is wrong in  GET /OS-REVOKE/events

Status in OpenStack Identity (Keystone):
  New

Bug description:
  There are 2 events in the path

  
  # curl -k -H "X-Auth-Token:SomeToken"   
http://localhost:35357/v3/OS-REVOKE/events  | python -mjson.tool
% Total% Received % Xferd  Average Speed   TimeTime Time  
Current
   Dload  Upload   Total   SpentLeft  Speed
  100   304  100   3040 0313  0 --:--:-- --:--:-- --:--:--   313
  {
  "events": [
  {
  "issued_before": "2014-10-22T20:26:14.00Z",
  "project_id": "f5590b050dc14795b5e8447a223bd696"
  },
  {
  "audit_id": "cAV3qiytQkuzpANJ3CPFRg",
  "issued_before": "2014-10-22T20:29:44.00Z"
  }
  ],
  "links": {
  "next": null,
  "previous": null,
  "self": "http://localhost:35357/v3/OS-REVOKE/events/events";
  }
  }

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1384457/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384446] [NEW] Horizon incorrectly displays memory of an instance

2014-10-22 Thread Ryan Aydelott
Public bug reported:

Horizon incorrectly reports the memory size once a VM's memory grows
beyond 1TB.

For example, OpenStack generated this XML for a VM with 1.5TB of memory:

 
 <uuid>3e3f022b-5891-48cf-a09b-ea3fb29e4006</uuid>
 <name>instance-444f</name>
 <memory>1584713728</memory>
 <vcpu>24</vcpu>

But viewing this in Horizon only reports the VM memory size @ 1TB (see
attached screenshot)
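
For reference, assuming the value above is the libvirt memory element (which
is expressed in KiB), the expected display is roughly 1.5TB rather than 1TB:

    # Convert the libvirt memory value (KiB) into GiB/TiB for display.
    memory_kib = 1584713728
    print(memory_kib / 1024.0 ** 2)   # ~1511.3 GiB
    print(memory_kib / 1024.0 ** 3)   # ~1.48 TiB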

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "Screen Shot 2014-10-22 at 3.31.42 PM.png"
   
https://bugs.launchpad.net/bugs/1384446/+attachment/4242003/+files/Screen%20Shot%202014-10-22%20at%203.31.42%20PM.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384446

Title:
  Horizon incorrectly displays memory of an instance

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Horizon incorrectly reports the memory size once a VM's memory grows
  beyond 1TB.

  For example, OpenStack generated this XML for a VM with 1.5TB of memory:

   
   <uuid>3e3f022b-5891-48cf-a09b-ea3fb29e4006</uuid>
   <name>instance-444f</name>
   <memory>1584713728</memory>
   <vcpu>24</vcpu>

  But viewing this in Horizon only reports the VM memory size @ 1TB (see
  attached screenshot)

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384446/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384434] [NEW] tab styling on workflows need restyling

2014-10-22 Thread David Lyle
Public bug reported:

With the recent change to secondary navigation on primary pages, some
collateral damage is the tab styling on modal workflows. Required
indicator is hard to distinguish and dark background doesn't mesh well
with the rest of the modal.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: ux

** Attachment added: "modal-workflow.png"
   
https://bugs.launchpad.net/bugs/1384434/+attachment/4241962/+files/modal-workflow.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384434

Title:
  tab styling on workflows need restyling

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  With the recent change to secondary navigation on primary pages, some
  collateral damage is the tab styling on modal workflows. Required
  indicator is hard to distinguish and dark background doesn't mesh well
  with the rest of the modal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384434/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384409] [NEW] UnicodeDecodeError when trying to create a user with DEBUG logging turned on

2014-10-22 Thread Gabriel Hurley
Public bug reported:

The mask_password function of openstack.common.log and/or
openstack.common.strutils (depending on OpenStack version) seems to
choke on unicode characters. This actually prevents proper function when
logging level is set to DEBUG.

When submitting a POST request to create a user with the unicode snowman
character in the name (for testing purposes), I get the following
traceback:

2014-10-22 18:12:01.973 21263 DEBUG keystone.common.wsgi [-] 
 REQUEST BODY  _call_ 
/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py:430
2014-10-22 18:12:01.974 21263 ERROR keystone.common.wsgi [-] 'ascii' codec 
can't decode byte 0xe2 in position 36: ordinal not in range(128)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi Traceback (most recent 
call last):
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 396, in _call_
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi response = 
request.get_response(self.application)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1296, in send
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi application, 
catch_exc_info=False)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/webob/request.py", line 1260, in 
call_application
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi app_iter = 
application(self.environ, start_response)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 144, in _call_
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi return resp(environ, 
start_response)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 130, in _call_
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi resp = 
self.call_func(req, *args, **self.kwargs)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/webob/dec.py", line 195, in call_func
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi return self.func(req, 
*args, **kwargs)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 432, in _call_
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi LOG.debug('%s', 
log.mask_password(line))
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/keystone/openstack/common/log.py", line 266, 
in mask_password
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi message = 
six.text_type(message)
2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi UnicodeDecodeError: 
'ascii' codec can't decode byte 0xe2 in position 36: ordinal not in range(128)

That trace is from an Icehouse install, but the code in mask_password
appears to suffer the same issue in Juno.

A simpler repro case:

>>> from keystone.openstack.common.log import mask_password
>>> mask_password('"password": "foo"')
u'"password": "***"'
>>> mask_password('"password": "f☃o"')
Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/log.py", line 
266, in mask_password
message = six.text_type(message)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 14: 
ordinal not in range(128)
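
A minimal sketch of the kind of fix that avoids the crash (illustrative only,
not the oslo/keystone code; the helper name is hypothetical): decode byte
strings explicitly instead of relying on the implicit ASCII decode that
six.text_type() performs on Python 2.

    import six

    def to_text(message, encoding='utf-8'):
        """Coerce bytes to text without tripping over non-ASCII characters."""
        if isinstance(message, six.binary_type):
            # Explicit decode; 'replace' keeps logging usable even for bad input.
            return message.decode(encoding, 'replace')
        return six.text_type(message)

    body = b'{"user": {"name": "f\xe2\x98\x83o", "password": "secret"}}'
    text_body = to_text(body)  # works on both Python 2 and Python 3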

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1384409

Title:
  UnicodeDecodeError when trying to create a user with DEBUG logging
  turned on

Status in OpenStack Identity (Keystone):
  New

Bug description:
  The mask_password function of openstack.common.log and/or
  openstack.common.strutils (depending on OpenStack version) seems to
  choke on unicode characters. This actually prevents proper function
  when logging level is set to DEBUG.

  When submitting a POST request to create a user with the unicode
  snowman character in the name (for testing purposes), I get the
  following traceback:

  2014-10-22 18:12:01.973 21263 DEBUG keystone.common.wsgi [-] 
 REQUEST BODY  _call_ 
/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py:430
  2014-10-22 18:12:01.974 21263 ERROR keystone.common.wsgi [-] 'ascii' codec 
can't decode byte 0xe2 in position 36: ordinal not in range(128)
  2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi Traceback (most 
recent call last):
  2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi File 
"/usr/lib/python2.7/dist-packages/keystone/common/wsgi.py", line 396, in _call_
  2014-10-22 18:12:01.974 21263 TRACE keystone.common.wsgi response = 
request.get_response(self.application)
  2014-10-22 18:12:01.974 21263 TR

[Yahoo-eng-team] [Bug 1384402] [NEW] DHCP agent resync restarts all dnsmasq processes on any dhcp driver exception

2014-10-22 Thread Terry Wilson
Public bug reported:

The sync_state/periodic_resync implementation will loop through and
restart the dhcp process for all active networks any time there is an
exception calling a dhcp driver function for a specific network. This
allows a tenant who can create an unhandled exception to cause every
dhcp process on the system to restart. On systems with lots of networks
this can easily take longer than the default resync timeout, leading to a
system that becomes unresponsive because of the load that the continual
restarting causes.
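
A minimal sketch of the direction a fix could take (illustrative pseudocode;
the function names are hypothetical, not the neutron agent code): handle
driver exceptions per network, so only the failing network is queued for
resync rather than every dnsmasq process being restarted.

    # Illustrative only; sync_network() and needs_resync are hypothetical.
    def sync_state(active_networks, sync_network, needs_resync):
        """Refresh DHCP for each network independently."""
        for network_id in active_networks:
            try:
                sync_network(network_id)
            except Exception:
                # Remember only the failing network for the next resync pass
                # instead of restarting dnsmasq for all active networks.
                needs_resync.add(network_id)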

** Affects: neutron
 Importance: Undecided
 Assignee: Terry Wilson (otherwiseguy)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384402

Title:
  DHCP agent resync restarts all dnsmasq processes on any dhcp driver
  exception

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  The sync_state/periodic_resync implementation will loop through and
  restart the dhcp process for all active networks any time there is an
  exception calling a dhcp driver function for a specific network. This
  allows a tenant who can create an unhandled exception to cause every
  dhcp process on the system to restart. On systems with lots of
  networks this can easily take longer than the default resync timeout
  leading to a system that becomes unresponsive because of the load
  continually restarting causes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384402/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1377983] Re: remove workaround code for old versions of Babel

2014-10-22 Thread Doug Fish
This bug shouldn't be addressed in Horizon directly.  We should just be
picking up changes from oslo.

** Changed in: horizon
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1377983

Title:
  remove workaround code for old versions of Babel

Status in OpenStack Dashboard (Horizon):
  Invalid

Bug description:
  There is workaround code in
  
https://github.com/openstack/horizon/blob/a0f7235278cfe187b2ff31bfb787548735111c8b/openstack_dashboard/openstack/common/gettextutils.py#L302
  for handling older versions of Babel which we no longer support.   It
  should be removed for clarity.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1377983/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384400] [NEW] too many steps required to delete a neutron router

2014-10-22 Thread Miguel Grinberg
Public bug reported:

The procedure to delete a router is currently as follows:

1. clear the gateway
2. enter the router page and delete any interfaces attached
3. go back to the router list
4. finally delete router

The gateway and interfaces should be automatically detached from the
router so that the user can simply hit "delete router" from the router
list page.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384400

Title:
  too many steps required to delete a neutron router

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  The procedure to delete a router is currently as follows:

  1. clear the gateway
  2. enter the router page and delete any interfaces attached
  3. go back to the router list
  4. finally delete router

  The gateway and interfaces should be automatically detached from the
  router so that the user can simply hit "delete router" from the router
  list page.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384400/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1218780] Re: Image property "hw_vif_model" is not honored again if we do a VM soft/hard reboot via Nova

2014-10-22 Thread Vladik Romanovsky
It has been fixed a while ago: https://review.openstack.org/#/c/58701/

** Changed in: nova
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1218780

Title:
  Image property "hw_vif_model" is not honored again if we do a  VM
  soft/hard reboot via Nova

Status in OpenStack Compute (Nova):
  Fix Released

Bug description:
  I am using Grizzly 2013.1.2 and I have an image with the property
  hw_vif_model=e1000.

  When Nova first boots a VM from this image, e1000 is configured as the
  NIC type properly, but after a soft/hard reboot the NIC type is changed
  back to the default virtio.

  I guess this is a bug in Nova?

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1218780/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384392] [NEW] Snapshot volume backed VM does not handle image metadata correctly

2014-10-22 Thread Samuel Matzek
Public bug reported:

Nova Juno

The instance snapshot of volume backed instances does not handle image
metadata the same way that the regular instance snapshot path does.

nova/compute/api/api.py's snapshot path builds the Glance image metadata
using nova/compute/utils.py get_image_metadata which gets metadata from
the VM's base image, includes metadata from the instance's system
metadata, and excludes properties specified in
CONF.non_inheritable_image_properties.

The volume backed snapshot path,
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n1472
simply gets the image properties from the base image and does not
include properties from instance system metadata and doesn't honor the
CONF.non_inheritable_image_properties property.
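
For comparison, a rough sketch of the merge the regular snapshot path
performs, per the description above (helper and argument names are
illustrative, not the nova code):

    def build_snapshot_image_properties(base_image_props, instance_sys_meta,
                                        non_inheritable):
        """Merge base-image and instance metadata, dropping non-inheritable keys."""
        properties = dict(base_image_props)
        properties.update(instance_sys_meta)   # instance metadata takes precedence
        for key in non_inheritable:            # CONF.non_inheritable_image_properties
            properties.pop(key, None)
        return properties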

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384392

Title:
  Snapshot volume backed VM does not handle image metadata correctly

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova Juno

  The instance snapshot of volume backed instances does not handle image
  metadata the same way that the regular instance snapshot path does.

  nova/compute/api/api.py's snapshot path builds the Glance image
  metadata using nova/compute/utils.py get_image_metadata which gets
  metadata from the VM's base image, includes metadata from the
  instance's system metadata, and excludes properties specified in
  CONF.non_inheritable_image_properties.

  The volume backed snapshot path,
  
http://git.openstack.org/cgit/openstack/nova/tree/nova/api/openstack/compute/servers.py#n1472
  simply gets the image properties from the base image and does not
  include properties from instance system metadata and doesn't honor the
  CONF.non_inheritable_image_properties property.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384392/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384386] [NEW] Image block device mappings for snapshots of instances specify delete_on_termination=null

2014-10-22 Thread Samuel Matzek
Public bug reported:

Nova Juno

Scenario:
1. Boot an instance from a volume.
2. Nova snapshot the instance.  This produces a Glance image with a block 
device mapping property like this:
[{"guest_format": null, "boot_index": 0, "no_device": null, "snapshot_id": 
"1a642ca8-210f-4790-ab93-00b6a4b86a14", "delete_on_termination": null, 
"disk_bus": null, "image_id": null, "source_type": "snapshot", "device_type": 
"disk", "volume_id": null, "destination_type": "volume", "volume_size": null}]

3. Create an instance from the Glance image.  Nova creates a new Cinder volume 
from the image's Cinder snapshot and attaches it to the instance.
4. Delete the instance.

Problem:  The Cinder volume created at step 3 remains.

The block device mappings for Cinder snapshots created during VM
snapshot and placed into the Glance image should specify
"delete_on_termination":  True so that the Cinder volumes created for
VMs booted from the image will be cleaned up on VM deletion.
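
Concretely, that would mean the snapshot code writing the mapping from step 2
with only delete_on_termination changed, along these lines (shown here as a
Python literal for illustration):

    block_device_mapping = [{
        "guest_format": None, "boot_index": 0, "no_device": None,
        "snapshot_id": "1a642ca8-210f-4790-ab93-00b6a4b86a14",
        "delete_on_termination": True,   # was null in the generated image property
        "disk_bus": None, "image_id": None,
        "source_type": "snapshot", "device_type": "disk",
        "volume_id": None, "destination_type": "volume", "volume_size": None,
    }]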

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384386

Title:
  Image block device mappings for snapshots of instances specify
  delete_on_termination=null

Status in OpenStack Compute (Nova):
  New

Bug description:
  Nova Juno

  Scenario:
  1. Boot an instance from a volume.
  2. Nova snapshot the instance.  This produces a Glance image with a block 
device mapping property like this:
  [{"guest_format": null, "boot_index": 0, "no_device": null, "snapshot_id": 
"1a642ca8-210f-4790-ab93-00b6a4b86a14", "delete_on_termination": null, 
"disk_bus": null, "image_id": null, "source_type": "snapshot", "device_type": 
"disk", "volume_id": null, "destination_type": "volume", "volume_size": null}]

  3. Create an instance from the Glance image.  Nova creates a new Cinder 
volume from the image's Cinder snapshot and attaches it to the instance.
  4. Delete the instance.

  Problem:  The Cinder volume created at step 3 remains.

  The block device mappings for Cinder snapshots created during VM
  snapshot and placed into the Glance image should specify
  "delete_on_termination":  True so that the Cinder volumes created for
  VMs booted from the image will be cleaned up on VM deletion.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384386/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384382] [NEW] GET /OS-FEDERATION/saml2/metadata does not work

2014-10-22 Thread Rodrigo Duarte
Public bug reported:

In Keystone-to-Keystone federation, the metadata from the Keystone Identity
Provider needs to be exchanged with the Keystone Service Provider. This
is done via the GET /OS-FEDERATION/saml2/metadata endpoint, which is
returning an internal server error (500).

Looking at the log files, it seems that keystone.middleware.core is trying
to parse the XML body as JSON, which fails:

2014-10-22 18:15:32.177590 20576 DEBUG keystone.common.wsgi [-] arg_dict: {} 
__call__ /opt/stack/keystone/keystone/common/wsgi.py:191
2014-10-22 18:15:32.184124 20576 ERROR keystone.middleware.core [-] Serializer 
failed
2014-10-22 18:15:32.184148 20576 TRACE keystone.middleware.core Traceback (most 
recent call last):
2014-10-22 18:15:32.184155 20576 TRACE keystone.middleware.core   File 
"/opt/stack/keystone/keystone/middleware/core.py", line 183, in process_response
2014-10-22 18:15:32.184168 20576 TRACE keystone.middleware.core body_obj = 
jsonutils.loads(response.body)
2014-10-22 18:15:32.184185 20576 TRACE keystone.middleware.core   File 
"/usr/local/lib/python2.7/dist-packages/oslo/serialization/jsonutils.py", line 
211, in loads
2014-10-22 18:15:32.184194 20576 TRACE keystone.middleware.core return 
json.loads(encodeutils.safe_decode(s, encoding), **kwargs)
2014-10-22 18:15:32.184201 20576 TRACE keystone.middleware.core   File 
"/usr/lib/python2.7/json/__init__.py", line 338, in loads
2014-10-22 18:15:32.184207 20576 TRACE keystone.middleware.core return 
_default_decoder.decode(s)
2014-10-22 18:15:32.184213 20576 TRACE keystone.middleware.core   File 
"/usr/lib/python2.7/json/decoder.py", line 366, in decode
2014-10-22 18:15:32.184220 20576 TRACE keystone.middleware.core obj, end = 
self.raw_decode(s, idx=_w(s, 0).end())
2014-10-22 18:15:32.184226 20576 TRACE keystone.middleware.core   File 
"/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
2014-10-22 18:15:32.184232 20576 TRACE keystone.middleware.core raise 
ValueError("No JSON object could be decoded")
2014-10-22 18:15:32.184238 20576 TRACE keystone.middleware.core ValueError: No 
JSON object could be decoded
2014-10-22 18:15:32.184244 20576 TRACE keystone.middleware.core
2014-10-22 18:15:32.184740 20576 WARNING keystone.common.wsgi [-] 
2014-10-22 18:15:32.184765 http://www.w3.org/2000/09/xmldsig#"; 
entityID="http://localhost:5000/v3/OS-FEDERATION/saml2/idp";>...rodrigodsrodrigodslocalhostrodrigodsRodrigoDuarterodrigodso...@gmail.com555-55-urn:oasis:names:tc:SAML:2.0:nameid-format:transienthttp://localhost:5000/v3/OS-FEDERATION/saml2/sso"; 
/>
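
A minimal sketch of the missing guard (illustrative only, not the keystone
middleware code): only attempt JSON deserialization when the response actually
declares a JSON content type, and leave XML bodies such as SAML metadata
untouched.

    import json

    def maybe_parse_json(body, content_type):
        """Parse the body only when it is declared as JSON (illustrative)."""
        if content_type and content_type.startswith("application/json"):
            return json.loads(body)
        return None  # e.g. SAML metadata served as XML

    maybe_parse_json('{"ok": true}', "application/json")   # -> parsed dict
    maybe_parse_json("<EntityDescriptor/>", "text/xml")    # -> None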

** Affects: keystone
 Importance: Undecided
 Assignee: Rodrigo Duarte (rodrigodsousa)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Rodrigo Duarte (rodrigodsousa)


[Yahoo-eng-team] [Bug 1384379] [NEW] versions resource uses host_url which may be incorrect

2014-10-22 Thread Vish Ishaya
Public bug reported:

The versions resource constructs the links by using host_url, but the
glance api endpoint may be behind a proxy or ssl terminator. This means
that host_url may be incorrect. It should have a config option to
override host_url like the other services do when constructing versions
links.

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1384379

Title:
  versions resource uses host_url which may be incorrect

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  The versions resource constructs the links by using host_url, but the
  glance api endpoint may be behind a proxy or ssl terminator. This
  means that host_url may be incorrect. It should have a config option
  to override host_url like the other services do when constructing
  versions links.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1384379/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384377] [NEW] Policy rule position errors

2014-10-22 Thread Andre Aranha
Public bug reported:

In policy.v3cloudsample.json there is a rule "admin_or_owner" that is
defined as "(rule:admin_required and
domain_id:%(target.token.user.domain.id)s) or rule:owner", and the tests
for it (
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L7
), especially
keystone.tests.test_v3_auth.TestTokenRevokeSelfAndAdmin.test_user_revokes_own_token,
show that it works as expected. The rule "admin_required" is defined
only as "role:admin" (
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L2
), so I changed the rule "admin_or_owner" to "(role:admin and
domain_id:%(target.token.user.domain.id)s) or rule:owner", and the test
raises an error saying that the user has no permission to perform the
action. Since it is the same rule, it was not supposed to raise errors.
It does not stop there: when I rearrange the rule order to
"admin_or_owner": "rule:owner or (role:admin and
domain_id:%(target.token.user.domain.id)s)", it works.

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1384377

Title:
  Policy rule position errors

Status in OpenStack Identity (Keystone):
  New

Bug description:
  In policy.v3cloudsample.json there is a rule "admin_or_owner" that is
  defined as "(rule:admin_required and
  domain_id:%(target.token.user.domain.id)s) or rule:owner", and the
  tests for it (
  https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L7
  ), especially
  keystone.tests.test_v3_auth.TestTokenRevokeSelfAndAdmin.test_user_revokes_own_token,
  show that it works as expected. The rule "admin_required" is defined
  only as "role:admin" (
  https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json#L2
  ), so I changed the rule "admin_or_owner" to "(role:admin and
  domain_id:%(target.token.user.domain.id)s) or rule:owner", and the test
  raises an error saying that the user has no permission to perform the
  action. Since it is the same rule, it was not supposed to raise errors.
  It does not stop there: when I rearrange the rule order to
  "admin_or_owner": "rule:owner or (role:admin and
  domain_id:%(target.token.user.domain.id)s)", it works.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1384377/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384367] [NEW] RAW backing device for ephemeral disks

2014-10-22 Thread David Hill
Public bug reported:

Hi guys,

While doing some auditing on disk usage, we've noticed that the
backing device for ephemeral disks is created by a slow mkfs.ntfs (when
the ephemeral disk is ntfs). Is there any reason why it's not done
using qcow2?

Also, could we add a quick format option? Not everybody requires the
same level of filesystem security, and sometimes performance prevails
over security (for internal clouds, for instance).

Dave
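
For illustration, the quick format being asked for is what mkfs.ntfs
already exposes as -Q/--quick, which skips zeroing the volume and the
bad-sector scan (the device path below is just a placeholder):

    mkfs.ntfs /dev/mapper/example-ephemeral       # current, slow full format
    mkfs.ntfs -Q /dev/mapper/example-ephemeral    # quick format: no zeroing, no bad-sector check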

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: disk ephemeral nova performance slow

** Project changed: python-ceilometerclient => nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384367

Title:
  RAW backing device for ephemeral disks

Status in OpenStack Compute (Nova):
  New

Bug description:
  Hi guys,

  While doing some auditing on disk usage, we've noticed that the
  backing device for ephemeral disks is created by a slow mkfs.ntfs
  (when the ephemeral disk is ntfs). Is there any reason why it's not
  done using qcow2?

  Also, could we add a quick format option? Not everybody requires the
  same level of filesystem security, and sometimes performance prevails
  over security (for internal clouds, for instance).

  Dave

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384365] [NEW] Domain admin should be allowed to show their domain

2014-10-22 Thread Nathan Kinder
Public bug reported:

When using the policy.v3cloudsample.json, a domain admin (possessing the
'admin' role with a domain scoped token) is not allowed to show their
own domain.  This operation is restricted to the cloud admin:

  "identity:get_domain": "rule:cloud_admin"

The admin of a domain should be allowed to view/show their own domain.
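
For illustration only (the target attribute name below is an assumption,
not the merged fix), a relaxed rule along these lines would let a
domain-scoped admin read their own domain while keeping cloud-admin access:

    "identity:get_domain": "rule:cloud_admin or (role:admin and domain_id:%(target.domain.id)s)"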

** Affects: keystone
 Importance: Undecided
 Assignee: Nathan Kinder (nkinder)
 Status: In Progress

** Changed in: keystone
 Assignee: (unassigned) => Nathan Kinder (nkinder)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1384365

Title:
  Domain admin should be allowed to show their domain

Status in OpenStack Identity (Keystone):
  In Progress

Bug description:
  When using the policy.v3cloudsample.json, a domain admin (possessing
  the 'admin' role with a domain scoped token) is not allowed to show
  their own domain.  This operation is restricted to the cloud admin:

"identity:get_domain": "rule:cloud_admin"

  The admin of a domain should be allowed to view/show their own domain.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1384365/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384367] [NEW] RAW backing device for ephemeral disks

2014-10-22 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

Hi guys,

While doing some auditing on disk usage, we've noticed that the backing
device for ephemeral disks is created by a slow mkfs.ntfs (when the
ephemeral disk is ntfs). Is there any reason why it's not done using
qcow2?

Also, could we add a quick format option? Not everybody requires the
same level of filesystem security, and sometimes performance prevails
over security (for internal clouds, for instance).

Dave

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: disk ephemeral nova performance slow
-- 
RAW backing device for ephemeral disks
https://bugs.launchpad.net/bugs/1384367
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to OpenStack Compute (nova).

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384347] [NEW] Couldn't run instance with existing port when default security group is absent

2014-10-22 Thread Feodor Tersin
Public bug reported:

If the default security group in a tenant is deleted (an admin has the
appropriate permissions to do so), then launching an instance with a
Neutron port fails at the allocate-network-resources stage:

ERROR nova.compute.manager [-] Instance failed network setup after 1 attempt(s)
TRACE nova.compute.manager Traceback (most recent call last):
TRACE nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", 
line 1528, in _allocate_network_async
TRACE nova.compute.manager dhcp_options=dhcp_options)
TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/network/neutronv2/api.py", line 294, in 
allocate_for_instance
TRACE nova.compute.manager security_group_id=security_group)
TRACE nova.compute.manager SecurityGroupNotFound: Security group default not 
found.

Steps to reproduce:
0. Delete the default security group with admin account.
1. Create custom security group
2. Create a network and a subnet
3. Create a port in the subnet with the custom security group
4. Launch an instance with the port (and don't specify any security group)

The launch command is accepted successfully, but the 'nova show' command
returns the instance in an error state.
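
For illustration, the reproduction steps map to roughly the following
commands (names are placeholders; the syntax is the Icehouse/Juno-era
neutron/nova CLI):

    neutron security-group-delete <DEFAULT_SG_ID>             # step 0, as admin
    neutron security-group-create custom-sg                   # step 1
    neutron net-create testnet                                 # step 2
    neutron subnet-create testnet 10.0.1.0/24
    neutron port-create testnet --security-group custom-sg    # step 3
    # step 4: use the port id printed by the previous command
    nova boot --flavor m1.tiny --image cirros --nic port-id=<PORT_ID> testvm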

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384347

Title:
  Couldn't run instance with existing port when default security group
  is absent

Status in OpenStack Compute (Nova):
  New

Bug description:
  If the default security group in a tenant is deleted (an admin has the
  appropriate permissions to do so), then launching an instance with a
  Neutron port fails at the allocate-network-resources stage:

  ERROR nova.compute.manager [-] Instance failed network setup after 1 
attempt(s)
  TRACE nova.compute.manager Traceback (most recent call last):
  TRACE nova.compute.manager   File "/opt/stack/nova/nova/compute/manager.py", 
line 1528, in _allocate_network_async
  TRACE nova.compute.manager dhcp_options=dhcp_options)
  TRACE nova.compute.manager   File 
"/opt/stack/nova/nova/network/neutronv2/api.py", line 294, in 
allocate_for_instance
  TRACE nova.compute.manager security_group_id=security_group)
  TRACE nova.compute.manager SecurityGroupNotFound: Security group default not 
found.

  Steps to reproduce:
  0. Delete the default security group with admin account.
  1. Create custom security group
  2. Create a network and a subnet
  3. Create a port in the subnet with the custom security group
  4. Launch an instance with the port (and don't specify any security group)

  The launch command is accepted successfully, but the 'nova show' command
  returns the instance in an error state.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384347/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383141] Re: apicapi is not included in requirements.txt

2014-10-22 Thread Armando Migliaccio
** Changed in: neutron
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1383141

Title:
  apicapi is not included in requirements.txt

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  
https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/cisco/apic/mechanism_apic.py#L16

  
  from apicapi import apic_manager
  from apicapi import exceptions as exc

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1383141/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384309] [NEW] VMware: New permission required: Extension.Register

2014-10-22 Thread Matthew Booth
Public bug reported:

Change I1046576c448704841ae8e1800b8390e947b0d457 uses
ExtensionManager.RegisterExtension, which requires the additional
permission Extension.Register on the vSphere server. Unfortunately we
missed the DocImpact in review. This needs to be added to the relevant
docs.

The impact of not having this permission is that n-cpu fails to start
with the error:

WebFault: Server raised fault: 'Permission to perform this operation was
denied.'

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: documentation vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384309

Title:
  VMware: New permission required: Extension.Register

Status in OpenStack Compute (Nova):
  New

Bug description:
  Change I1046576c448704841ae8e1800b8390e947b0d457 uses
  ExtensionManager.RegisterExtension, which requires the additional
  permission Extension.Register on the vSphere server. Unfortunately we
  missed the DocImpact in review. This needs to be added to the relevant
  docs.

  The impact of not having this permission is that n-cpu fails to start
  with the error:

  WebFault: Server raised fault: 'Permission to perform this operation
  was denied.'

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384309/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384305] [NEW] In Create Network workflow, "Next" / "Create" button loses state

2014-10-22 Thread Rob Cresswell
Public bug reported:

When following the Create Network workflow, if subnet is disabled
(unticked) and then the user clicks back (to the Network Tab, from the
Subnet tab), and then Next again, subnet is still unticked but the
Create button shows "Next" instead. When clicked this goes to the
invisible and irrelevant Subnet Detail tab.

** Affects: horizon
 Importance: Undecided
 Assignee: Bradley Jones (bradjones)
 Status: Confirmed

** Description changed:

  When following the Create Network workflow, if subnet is disabled
  (unticked) and then the user clicks back (to the Network Tab, from the
- Subnet tab), and then Next again, subnet is still untucked but the
+ Subnet tab), and then Next again, subnet is still unticked but the
  Create button shows "Next" instead. When clicked this goes to the
- invisible DHCP tab.
+ invisible and irrelevant Subnet Detail tab.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384305

Title:
  In Create Network workflow, "Next" / "Create" button loses state

Status in OpenStack Dashboard (Horizon):
  Confirmed

Bug description:
  When following the Create Network workflow, if subnet is disabled
  (unticked) and then the user clicks back (to the Network Tab, from the
  Subnet tab), and then Next again, subnet is still unticked but the
  Create button shows "Next" instead. When clicked this goes to the
  invisible and irrelevant Subnet Detail tab.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1382573] Re: Uncaught GreenletExit in ServiceLauncher if wait called after greenlet kill

2014-10-22 Thread Ihar Hrachyshka
** Project changed: neutron => oslo-incubator

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1382573

Title:
  Uncaught GreenletExit in ServiceLauncher if wait called after greenlet
  kill

Status in The Oslo library incubator:
  In Progress

Bug description:
  This is similar to bug 1282206 that fixed the same issue for
  ProcessLauncher.

  The failure shows up in gate (Icehouse, Juno) as follows:

  ft1.1683: 
tests.unit.test_service.ServiceRestartTest.test_service_restart_StringException:
 Traceback (most recent call last):
File "tests/unit/test_service.py", line 252, in test_service_restart
  ready = self._spawn_service()
File "tests/unit/test_service.py", line 244, in _spawn_service
  launcher.wait(ready_callback=ready_event.set)
File "openstack/common/service.py", line 196, in wait
  status, signo = self._wait_for_exit_or_signal(ready_callback)
File "openstack/common/service.py", line 182, in _wait_for_exit_or_signal
  self.stop()
File "openstack/common/service.py", line 128, in stop
  self.services.stop()
File "openstack/common/service.py", line 479, in stop
  self.tg.stop()
File "openstack/common/threadgroup.py", line 125, in stop
  self._stop_threads()
File "openstack/common/threadgroup.py", line 98, in _stop_threads
  x.stop()
File "openstack/common/threadgroup.py", line 44, in stop
  self.thread.kill()
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 238, in kill
  return kill(self, *throw_args)
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 292, in kill
  g.throw(*throw_args)
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 212, in main
  result = function(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 278, in just_raise
  raise greenlet.GreenletExit()
  GreenletExit
  Traceback (most recent call last):
  _StringException: Empty attachments:
pythonlogging:''
stderr
stdout

  traceback-1: {{{
  Traceback (most recent call last):
File "tests/unit/test_service.py", line 93, in _reap_pid
  if self.pid:
  AttributeError: 'ServiceRestartTest' object has no attribute 'pid'
  }}}

  Traceback (most recent call last):
File "tests/unit/test_service.py", line 252, in test_service_restart
  ready = self._spawn_service()
File "tests/unit/test_service.py", line 244, in _spawn_service
  launcher.wait(ready_callback=ready_event.set)
File "openstack/common/service.py", line 196, in wait
  status, signo = self._wait_for_exit_or_signal(ready_callback)
File "openstack/common/service.py", line 182, in _wait_for_exit_or_signal
  self.stop()
File "openstack/common/service.py", line 128, in stop
  self.services.stop()
File "openstack/common/service.py", line 479, in stop
  self.tg.stop()
File "openstack/common/threadgroup.py", line 125, in stop
  self._stop_threads()
File "openstack/common/threadgroup.py", line 98, in _stop_threads
  x.stop()
File "openstack/common/threadgroup.py", line 44, in stop
  self.thread.kill()
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 238, in kill
  return kill(self, *throw_args)
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 292, in kill
  g.throw(*throw_args)
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 212, in main
  result = function(*args, **kwargs)
File 
"/home/jenkins/workspace/gate-oslo-incubator-python26/.tox/py26/lib/python2.6/site-packages/eventlet/greenthread.py",
 line 278, in just_raise
  raise greenlet.GreenletExit()
  GreenletExit

  Logs: http://logs.openstack.org/82/129182/1/check/gate-oslo-incubator-
  python26/002df95/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/oslo-incubator/+bug/1382573/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384263] [NEW] nec plugin: packet filter cannot be applied for openflow router interface

2014-10-22 Thread Akihiro Motoki
Public bug reported:

The nec plugin has a packet filter extension, but the filter is not applied
to interfaces of routers implemented by the OpenFlow controller.
It works for l3-agent routers.

** Affects: neutron
 Importance: Low
 Status: New


** Tags: nec

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384263

Title:
  nec plugin: packet filter cannot be applied for openflow router
  interface

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The nec plugin has a packet filter extension, but the filter is not
  applied to interfaces of routers implemented by the OpenFlow controller.
  It works for l3-agent routers.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384263/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1361357] Re: metadata service performance regression ~8x

2014-10-22 Thread Maru Newby
As per a thread on the mailing list [1], this issue was already fixed
[2] in Neutron in Juno and backported to Icehouse, so I'm going to
remove Neutron as an affected project.

1: http://lists.openstack.org/pipermail/openstack-dev/2014-October/048916.html
2: https://bugs.launchpad.net/neutron/+bug/1276440

** No longer affects: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1361357

Title:
  metadata service performance regression ~8x

Status in Ubuntu Cloud Archive:
  Confirmed
Status in ubuntu-cloud-archive icehouse series:
  Confirmed
Status in ubuntu-cloud-archive juno series:
  Confirmed
Status in “neutron” package in Ubuntu:
  Confirmed
Status in “neutron” source package in Trusty:
  Confirmed
Status in “neutron” source package in Utopic:
  Confirmed

Bug description:
  A change was made to the neutron portion of the nova metadata service
  to disable caching.  This causes every hit to the nova metadata service
  to generate messages to retrieve data from neutron, which makes a
  "crawl" of the metadata service take at least 8x longer than it did.
  cloud-init crawls the metadata service during boot.  The end result is
  that instances boot significantly slower than they did previously.

  The commits are marked as having fixed bug 1276440 [1].
  The commits are:
   icehouse: d568fee34be36ca17a9124fe6539f62d702d6359 [2]
   trunk:  3faea81c6029033c85cefd6e98d7a3e719e858f5 [3]

  [1] http://pad.lv/1276440
  [2] 
https://github.com/openstack/neutron/commit/d568fee34be36ca17a9124fe6539f62d702d6359
  [3] 
https://github.com/openstack/neutron/commit/423ca756af10e10398636d6d34a7594a4fd4bc87

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1361357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384248] [NEW] Adjust metering time units to sometime use a smaller unit

2014-10-22 Thread Doug Fish
Public bug reported:

While reviewing https://review.openstack.org/#/c/96800 it seemed to me
the code would sometimes select a larger time unit than people would
normally use to describe the data.

Consider these 2 representations of 2 data sets

Data set 1a:  15 sec 30 sec 45 sec 30 sec 75 sec
Data set 1b:  .2 min .5 min .7 min 1.2 min

Data set 2a :  1 day 2 days 3 days 10 days 18 days
Data set 2b:  .1 weeks .3 weeks .4 weeks 1.4 weeks 2.6 weeks

IMO the "a" version is how people normally think about the data, yet the
metering unit code will select "b".

Specifically I think the metering time unit selection code should not
move to the next higher unit at 1 unit, but instead should convert at 2
or 3 units.
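
A minimal sketch of the suggested behaviour (the unit table and threshold
are illustrative, not the actual Horizon code):

    # move to a larger unit only once the value reaches THRESHOLD of that
    # unit, so e.g. 90 seconds stays "90 sec" instead of becoming "1.5 min"
    UNITS = [('sec', 1), ('min', 60), ('hr', 3600), ('day', 86400), ('week', 604800)]
    THRESHOLD = 2  # the report suggests converting at 2 or 3 units

    def pick_time_unit(seconds):
        chosen = UNITS[0]
        for name, size in UNITS:
            if seconds >= THRESHOLD * size:
                chosen = (name, size)
        return chosen

    def scale(seconds):
        name, size = pick_time_unit(seconds)
        return round(float(seconds) / size, 1), name

    # scale(90) -> (90.0, 'sec'); scale(180) -> (3.0, 'min')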

** Affects: horizon
 Importance: Undecided
 Assignee: Doug Fish (drfish)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Doug Fish (drfish)

** Summary changed:

- Adjust metering units to sometime use a smaller unit
+ Adjust metering time units to sometime use a smaller unit

** Description changed:

  While reviewing https://review.openstack.org/#/c/96800 it seemed to me
- the code would sometimes select a larger unit that people would normally
- use to describe the data.
+ the code would sometimes select a larger time unit that people would
+ normally use to describe the data.
  
  Consider these 2 representations of 2 data sets
  
  Data set 1a:  15 sec 30 sec 45 sec 30 sec 75 sec
  Data set 1b:  .2 min .5 min .7 min 1.2 min
  
  Data set 2a :  1 day 2 days 3 days 10 days 18 days
  Data set 2b:  .1 weeks .3 weeks .4 weeks 1.4 weeks 2.6 weeks
  
  IMO the "a" version is how people normally think about the data, yet the
  metering unit code will select "b".
  
- Specifically I think the metering unit selection code should not move to
- the next higher unit at 1 unit, but instead should convert at 2 or 3
- units.
+ Specifically I think the metering time unit selection code should not
+ move to the next higher unit at 1 unit, but instead should convert at 2
+ or 3 units.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384248

Title:
  Adjust metering time units to sometime use a smaller unit

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While reviewing https://review.openstack.org/#/c/96800 it seemed to me
  the code would sometimes select a larger time unit than people would
  normally use to describe the data.

  Consider these 2 representations of 2 data sets

  Data set 1a:  15 sec 30 sec 45 sec 30 sec 75 sec
  Data set 1b:  .2 min .5 min .7 min 1.2 min

  Data set 2a :  1 day 2 days 3 days 10 days 18 days
  Data set 2b:  .1 weeks .3 weeks .4 weeks 1.4 weeks 2.6 weeks

  IMO the "a" version is how people normally think about the data, yet
  the metering unit code will select "b".

  Specifically I think the metering time unit selection code should not
  move to the next higher unit at 1 unit, but instead should convert at
  2 or 3 units.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384248/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384247] [NEW] Adjust metering units to sometime use a smaller unit

2014-10-22 Thread Doug Fish
Public bug reported:

While reviewing https://review.openstack.org/#/c/96800 it seemed to me
the code would sometimes select a larger unit than people would normally
use to describe the data.

Consider these 2 representations of 2 data sets

Data set 1a:  15 sec 30 sec 45 sec 30 sec 75 sec
Data set 1b:  .2 min .5 min .7 min 1.2 min

Data set 2a :  1 day 2 days 3 days 10 days 18 days
Data set 2b:  .1 weeks .3 weeks .4 weeks 1.4 weeks 2.6 weeks

IMO the "a" version is how people normally think about the data, yet the
metering unit code will select "b".

Specifically I think the metering unit selection code should not move to
the next higher unit at 1 unit, but instead should convert at 2 or 3
units.

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384247

Title:
  Adjust metering units to sometime use a smaller unit

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  While reviewing https://review.openstack.org/#/c/96800 it seemed to me
  the code would sometimes select a larger unit than people would
  normally use to describe the data.

  Consider these 2 representations of 2 data sets

  Data set 1a:  15 sec 30 sec 45 sec 30 sec 75 sec
  Data set 1b:  .2 min .5 min .7 min 1.2 min

  Data set 2a :  1 day 2 days 3 days 10 days 18 days
  Data set 2b:  .1 weeks .3 weeks .4 weeks 1.4 weeks 2.6 weeks

  IMO the "a" version is how people normally think about the data, yet
  the metering unit code will select "b".

  Specifically I think the metering unit selection code should not move
  to the next higher unit at 1 unit, but instead should convert at 2 or
  3 units.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384247/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372049] Re: Launching multiple VMs fails over 63 instances

2014-10-22 Thread Yair Fried
** Also affects: oslo.messaging
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372049

Title:
  Launching multiple VMs fails over 63 instances

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed
Status in Messaging API for OpenStack:
  Confirmed

Bug description:
  RHEL-7.0
  Icehouse
  All-In-One

  Booting 63 VMs at once (with "num-instances" attribute) works fine.
  Setup is able to support up to 100 VMs booted in ~50 bulks.

  Booting 100 VMs at once, without Neutron network, so no network for
  the VMs, works fine.

  Booting 64 (or more) VMs boots only 63 VMs. Any of the VMs over 63 are
booted in ERROR state with details: VirtualInterfaceCreateException: Virtual
Interface creation failed
  The failed VM's port is in the DOWN state

  Details:
  After the initial boot command goes through, all CPU usage goes down (no
neutron/nova CPU consumption) until nova's vif_plugging_timeout is reached,
at which point 1 (= #num_instances - 63) VM is set to ERROR, and the rest of
the VMs reach active state.

  Guess: seems like neutron is going into some deadlock until some of
  the load is reduced by vif_plugging_timeout


  Disabling neutron-nova port notifications allows all VMs to be
  created.
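
  For reference (not part of the original report, and assuming the
  Icehouse-era option names), "disabling neutron-nova port notifications"
  corresponds to settings like:

      # nova.conf -- do not wait for / fail on the vif plug notification
      [DEFAULT]
      vif_plugging_is_fatal = False
      vif_plugging_timeout = 0

      # neutron.conf -- the neutron->nova notifications referred to above
      [DEFAULT]
      notify_nova_on_port_status_changes = False
      notify_nova_on_port_data_changes = False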

  Notes: this is recreated also with multiple Compute nodes, and also
  multiple neutron RPC/API workers

  
  Recreate:
  set nova/neutron quotas to "-1"
  make sure neutron-nova port notifications are ON in both the neutron and
nova conf files
  create a network in your tenant

  boot more than 64 VMs

  nova boot --flavor 42 test_VM --image cirros --num-instances 64


  [yfried@yfried-mobl-rh ~(keystone_demo)]$ nova list
  
+--+--+++-+-+
  | ID   | Name 
| Status | Task State | Power State | Networks|
  
+--+--+++-+-+
  | 02d7b680-efd8-4291-8d56-78b43c9451cb | 
test_VM-02d7b680-efd8-4291-8d56-78b43c9451cb | ACTIVE | -  | Running
 | demo_private=10.0.0.156 |
  | 05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | 
test_VM-05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | ACTIVE | -  | Running
 | demo_private=10.0.0.150 |
  | 09131f19-5e83-4a40-a900-ffca24a8c775 | 
test_VM-09131f19-5e83-4a40-a900-ffca24a8c775 | ACTIVE | -  | Running
 | demo_private=10.0.0.160 |
  | 0d3be93b-73d3-4995-913c-03a4b80ad37e | 
test_VM-0d3be93b-73d3-4995-913c-03a4b80ad37e | ACTIVE | -  | Running
 | demo_private=10.0.0.164 |
  | 0fcadae4-768c-44a1-9e1c-ac371d1803f9 | 
test_VM-0fcadae4-768c-44a1-9e1c-ac371d1803f9 | ACTIVE | -  | Running
 | demo_private=10.0.0.202 |
  | 11a87db1-5b15-4cad-a749-5d53e2fd8194 | 
test_VM-11a87db1-5b15-4cad-a749-5d53e2fd8194 | ACTIVE | -  | Running
 | demo_private=10.0.0.201 |
  | 147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | 
test_VM-147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | ACTIVE | -  | Running
 | demo_private=10.0.0.147 |
  | 1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | 
test_VM-1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | ACTIVE | -  | Running
 | demo_private=10.0.0.187 |
  | 1d0b7210-f5a0-4827-b338-2014e8f21341 | 
test_VM-1d0b7210-f5a0-4827-b338-2014e8f21341 | ACTIVE | -  | Running
 | demo_private=10.0.0.165 |
  | 1df564f6-5aac-4ac8-8361-bd44c305332b | 
test_VM-1df564f6-5aac-4ac8-8361-bd44c305332b | ACTIVE | -  | Running
 | demo_private=10.0.0.145 |
  | 2031945f-6305-4cdc-939f-5f02171f82b2 | 
test_VM-2031945f-6305-4cdc-939f-5f02171f82b2 | ACTIVE | -  | Running
 | demo_private=10.0.0.149 |
  | 256ff0ed-0e56-47e3-8b69-68006d658ad6 | 
test_VM-256ff0ed-0e56-47e3-8b69-68006d658ad6 | ACTIVE | -  | Running
 | demo_private=10.0.0.177 |
  | 2b7256a8-c04a-42cf-9c19-5836b585c0f5 | 
test_VM-2b7256a8-c04a-42cf-9c19-5836b585c0f5 | ACTIVE | -  | Running
 | demo_private=10.0.0.180 |
  | 2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | 
test_VM-2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | ACTIVE | -  | Running
 | demo_private=10.0.0.191 |
  | 425c170f-a450-440d-b9ba-0408d7c69b25 | 
test_VM-425c170f-a450-440d-b9ba-0408d7c69b25 | ACTIVE | -  | Running
 | demo_private=10.0.0.169 |
  | 461fcce3-96ae-4462-ab65-fb63f3552703 | 
test_VM-461fcce3-96ae-4462-ab65-fb63f3552703 | ACTIVE | -  | Running
 | demo_private=10.0.0.179 |
  | 46a9965d-6511-44a3-ab71-a87767cda759 | 
test_VM-46a9965d-6511-44a3-ab71-a87767cda759 | ACTIVE | -  | Running
 | demo_private=10.0.0.199 |
  | 4c4ce671-5e84-4ccd-8496-02c0723178ec | 
test_VM-4c4ce671-5e84-4ccd-84

[Yahoo-eng-team] [Bug 1384240] [NEW] Tap devices are not deleted

2014-10-22 Thread Vitalii
Public bug reported:

After I added/removed routers, nets and subnets many times for testing
purposes, I found that I have 45 interfaces:


b# ifconfig|grep encap:Ethernet
br-eth0   Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
br-eth0:1 Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
eth0  Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
int-br-eth0 Link encap:Ethernet  HWaddr 82:ad:59:16:ca:da  
phy-br-eth0 Link encap:Ethernet  HWaddr da:e4:8d:cd:38:43  
qbr056025bc-49 Link encap:Ethernet  HWaddr fa:c1:6d:fe:5e:b8  
qbr3dccedf9-e8 Link encap:Ethernet  HWaddr 02:42:c0:73:d2:0b  
qbr422898c8-df Link encap:Ethernet  HWaddr 76:b0:42:f5:ad:f9  
qbr69eb1f6a-71 Link encap:Ethernet  HWaddr d6:d2:7f:ee:1a:34  
qbr750aa557-b7 Link encap:Ethernet  HWaddr 5a:2e:b3:f3:30:b1  
qbr81eb2deb-b7 Link encap:Ethernet  HWaddr f2:d0:95:19:fb:c4  
qbr971c890b-8f Link encap:Ethernet  HWaddr ce:a1:c9:f0:b6:24  
qbr9ab03868-2f Link encap:Ethernet  HWaddr 2a:d5:8a:61:0a:a1  
qbrcfd38872-d1 Link encap:Ethernet  HWaddr 16:7d:ad:1b:4b:82  
qbrde55f70b-7d Link encap:Ethernet  HWaddr 2a:8c:f5:2b:49:99  
qbrdead1da5-98 Link encap:Ethernet  HWaddr d6:95:61:73:1d:c6  
qbrf0db8340-a6 Link encap:Ethernet  HWaddr 9e:9b:36:c3:cb:d3  
qbrf3f3c43f-ff Link encap:Ethernet  HWaddr 4a:c6:5c:70:e4:b4  
qvb056025bc-49 Link encap:Ethernet  HWaddr fa:c1:6d:fe:5e:b8  
qvb3dccedf9-e8 Link encap:Ethernet  HWaddr 02:42:c0:73:d2:0b  
qvb422898c8-df Link encap:Ethernet  HWaddr 76:b0:42:f5:ad:f9  
qvb69eb1f6a-71 Link encap:Ethernet  HWaddr d6:d2:7f:ee:1a:34  
qvb750aa557-b7 Link encap:Ethernet  HWaddr 5a:2e:b3:f3:30:b1  
qvb81eb2deb-b7 Link encap:Ethernet  HWaddr f2:d0:95:19:fb:c4  
qvb971c890b-8f Link encap:Ethernet  HWaddr ce:a1:c9:f0:b6:24  
qvb9ab03868-2f Link encap:Ethernet  HWaddr 2a:d5:8a:61:0a:a1  
qvbcfd38872-d1 Link encap:Ethernet  HWaddr 16:7d:ad:1b:4b:82  
qvbde55f70b-7d Link encap:Ethernet  HWaddr 2a:8c:f5:2b:49:99  
qvbdead1da5-98 Link encap:Ethernet  HWaddr d6:95:61:73:1d:c6  
qvbf0db8340-a6 Link encap:Ethernet  HWaddr 9e:9b:36:c3:cb:d3  
qvbf3f3c43f-ff Link encap:Ethernet  HWaddr 4a:c6:5c:70:e4:b4  
qvo056025bc-49 Link encap:Ethernet  HWaddr aa:82:8b:9f:d6:a0  
qvo3dccedf9-e8 Link encap:Ethernet  HWaddr ea:8c:1f:0e:ab:92  
qvo422898c8-df Link encap:Ethernet  HWaddr 7a:9f:47:3c:3b:57  
qvo69eb1f6a-71 Link encap:Ethernet  HWaddr a6:dd:41:ce:e6:39  
qvo750aa557-b7 Link encap:Ethernet  HWaddr 32:6c:f8:ca:af:e9  
qvo81eb2deb-b7 Link encap:Ethernet  HWaddr ea:22:94:19:ac:4c  
qvo971c890b-8f Link encap:Ethernet  HWaddr 2e:f8:a7:72:1c:85  
qvo9ab03868-2f Link encap:Ethernet  HWaddr aa:3e:bb:c6:6d:d3  
qvocfd38872-d1 Link encap:Ethernet  HWaddr 16:3a:12:30:f5:71  
qvode55f70b-7d Link encap:Ethernet  HWaddr fa:ee:28:ed:52:37  
qvodead1da5-98 Link encap:Ethernet  HWaddr 5a:66:51:d9:a5:60  
qvof0db8340-a6 Link encap:Ethernet  HWaddr 66:b6:23:c9:ca:73  
qvof3f3c43f-ff Link encap:Ethernet  HWaddr 5e:5b:53:e8:11:58  
tapdead1da5-98 Link encap:Ethernet  HWaddr fe:54:00:39:74:0a  


I use Icehouse.

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

  After I added/removed routers, nets and subnets many times, for testing
- purpose, I found that I have the following list of interfaces:
+ purpose, I found that I have 45 interfaces:
  
- # ifconfig 
+ 
+ b# ifconfig|grep encap:Ethernet
  br-eth0   Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
-   inet addr:172.19.29.147  Bcast:172.19.29.255  Mask:255.255.255.128
-   inet6 addr: fe80::d267:e5ff:fe03:180f/64 Scope:Link
-   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
-   RX packets:2039726347 errors:0 dropped:17762 overruns:0 frame:0
-   TX packets:5874295 errors:0 dropped:0 overruns:0 carrier:0
-   collisions:0 txqueuelen:0 
-   RX bytes:189626288472 (176.6 GiB)  TX bytes:610051438 (581.7 MiB)
+ br-eth0:1 Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
+ eth0  Link encap:Ethernet  HWaddr d0:67:e5:03:18:0f  
+ int-br-eth0 Link encap:Ethernet  HWaddr 82:ad:59:16:ca:da  
+ phy-br-eth0 Link encap:Ethernet  HWaddr da:e4:8d:cd:38:43  
+ qbr056025bc-49 Link encap:Ethernet  HWaddr fa:c1:6d:fe:5e:b8  
+ qbr3dccedf9-e8 Link encap:Ethernet  HWaddr 02:42:c0:73:d2:0b  
+ qbr422898c8-df Link encap:Ethernet  HWaddr 76:b0:42:f5:ad:f9  
+ qbr69eb1f6a-71 Link encap:Ethernet  HWaddr d6:d2:7f:ee:1a:34  
+ qbr750aa557-b7 Link encap:Ethernet  HWaddr 5a:2e:b3:f3:30:b1  
+ qbr81eb2deb-b7 Link encap:Ethernet  HWaddr f2:d0:95:19:fb:c4  
+ qbr971c890b-8f Link encap:Ethernet  HWaddr ce:a1:c9:f0:b6:24  
+ qbr9ab03868-2f Link encap:Ethernet  HWaddr 2a:d5:8a:61:0a:a1  
+ qbrcfd38872-d1 Link encap:Ethernet  HWaddr 16:7d:ad:1b:4b:82  
+ qbrde55f70b-7d Link encap:Ethernet  HWaddr 2a:8c:f5:2b:49:99  
+ qbrdead1da5-98 Link encap:Ethernet  HWaddr d6:95:61:73:1d:c6  
+ qbrf0db8340-a6 Link encap:Ethernet  HWaddr 9e:9b:36:c3:cb:d3  
+ qbrf3f3c43f-ff Link encap:Ethernet  HWaddr 4a:c6:5c:70:e4:b4  
+ qvb056025bc-49 Link encap:Ethernet  HWad

[Yahoo-eng-team] [Bug 1384235] [NEW] Nova raises exception about existing libvirt filter

2014-10-22 Thread Vitalii
Public bug reported:

Sometimes, when I start an instance, nova raises an exception that a
filter like nova-instance-instance-000b-52540039740a already exists.

So I have to execute `virsh nwfilter-undefine` for that filter and try to
boot the instance again.
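
For illustration, that manual cleanup looks like this (the filter name is
the one from the libvirt log below):

    virsh nwfilter-list | grep nova-instance-instance-000b
    virsh nwfilter-undefine nova-instance-instance-000b-52540039740a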

In libvirt logs I can see the following:

2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End of 
file while reading data: Input/output error

I use libvirt 1.2.8-3 ( Debian )

I have the following services defined:

service_plugins =
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

** Affects: neutron
 Importance: Undecided
 Status: New

** Description changed:

- Sometimes, when I start instance, nova raises exception, that 
+ Sometimes, when I start instance, nova raises exception, that
  filter like nova-instance-instance-000b-52540039740a already exists.
  
  So I have to execute `virsh nwfilter-undefine` and try to boot instance
  again:
  
  In libvirt logs I can see the following:
  
  2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
  2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End 
of file while reading data: Input/output error
  
  I use libvirt 1.2.8-3 ( Debian )
+ 
+ I have the following services defined:
+ 
+ service_plugins =
+ 
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384235

Title:
  Nova raises exception about existing libvirt filter

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Sometimes, when I start an instance, nova raises an exception that a
  filter like nova-instance-instance-000b-52540039740a already exists.

  So I have to execute `virsh nwfilter-undefine` for that filter and try
  to boot the instance again.

  In libvirt logs I can see the following:

  2014-10-22 12:20:13.816+: 4693: error : virNWFilterObjAssignDef:3068 : 
operation failed: filter 'nova-instance-instance-000b-52540039740a' already 
exists with uuid af47118d-1934-4ca7-8a71-c6ae9a6499aa
  2014-10-22 12:20:13.930+: 4688: error : virNetSocketReadWire:1523 : End 
of file while reading data: Input/output error

  I use libvirt 1.2.8-3 ( Debian )

  I have the following services defined:

  service_plugins =
  
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384235/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384231] [NEW] The number of neutron-ns-metadata-proxy processes grows uncontrollably

2014-10-22 Thread Vitalii
Public bug reported:

During testing and development I had to add and remove instances, routers
and ports often. I also restarted all neutron services often (I use
supervisor). After about one week I noticed that I had run out of free RAM.
It turned out there were tens of neutron-ns-metadata-proxy processes left
hanging. After I killed them and restarted neutron, I got 4 GB of RAM back.

"""
...
20537 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a6f6aeaa-c325-42d6-95e2-d55d410fc5d9.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a6f6aeaa-c325-42d6-95e2-d55d410fc5d9 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
20816 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a4451c09-1655-4aea-86d6-849e563f4731.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a4451c09-1655-4aea-86d6-849e563f4731 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
30098 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/b122a6ba-5614-4f1c-b0c6-95c6645dbab0.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=b122a6ba-5614-4f1c-b0c6-95c6645dbab0 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
30557 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/82ebd418-b156-49bf-9633-af3121fc12f7.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=82ebd418-b156-49bf-9633-af3121fc12f7 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
31072 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/d426f959-bfc5-4012-b89e-aec64cc2cf03.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=d426f959-bfc5-4012-b89e-aec64cc2cf03 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
31378 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/b8dc2dd7-18cb-4a56-9690-fc79248c5532.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=b8dc2dd7-18cb-4a56-9690-fc79248c5532 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
...
"""

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384231

Title:
  The number of neutron-ns-metadata-proxy processes grows uncontrollably

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  During testing and development I had to add and remove instances, routers
  and ports often. I also restarted all neutron services often (I use
  supervisor). After about one week I noticed that I had run out of free
  RAM. It turned out there were tens of neutron-ns-metadata-proxy processes
  left hanging. After I killed them and restarted neutron, I got 4 GB of
  RAM back.

  """
  ...
  20537 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a6f6aeaa-c325-42d6-95e2-d55d410fc5d9.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a6f6aeaa-c325-42d6-95e2-d55d410fc5d9 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  20816 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/a4451c09-1655-4aea-86d6-849e563f4731.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=a4451c09-1655-4aea-86d6-849e563f4731 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  30098 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/b122a6ba-5614-4f1c-b0c6-95c6645dbab0.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=b122a6ba-5614-4f1c-b0c6-95c6645dbab0 
--state_path=/home/vb/var/lib/neutron --metadata_port=9697 --debug --verbose
  30557 ?S  0:00 /home/vb/.virtualenvs/ecs/bin/python 
/usr/sbin/neutron-ns-metadata-proxy 
--pid_file=/home/vb/var/lib/neutron/external/pids/82ebd418-b156-49bf-9633-af3121fc12f7.pid
 --metadata_proxy_socket=/home/vb/var/lib/neutron/metadata_proxy 
--router_id=82ebd418-b156-49bf-963

[Yahoo-eng-team] [Bug 1384228] [NEW] Exception should be raised if nova has failed to add fixed ip

2014-10-22 Thread Vitalii
Public bug reported:

I have the following code:

try:
    instance.add_fixed_ip(network.id)
except Exception as e:
    ...

The issue is that I can see the exception in the nova-compute logs, but it
is not raised on the client side. Since add_fixed_ip() returns nothing, it
is difficult to tell whether the fixed address was assigned.
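
A minimal client-side check (assuming `nova` is a python-novaclient Client;
this only illustrates how a caller can detect the failure, it does not fix
the swallowed exception):

    instance.add_fixed_ip(network.id)          # returns None either way
    refreshed = nova.servers.get(instance.id)  # re-read the server
    attached = [a['addr']
                for addrs in refreshed.addresses.values()
                for a in addrs]
    # if the expected fixed IP is not in 'attached', the add likely failed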

Exception:
"""
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 134, in _dispatch_and_reply
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 177, in _dispatch
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py",
 line 123, in _do_dispatch
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 393, in decorated_function
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py",
 line 88, in wrapped
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher payload)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/exception.py",
 line 71, in wrapped
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
f(self, context, *args, **kw)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 274, in decorated_function
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher pass
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 260, in decorated_function
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 303, in decorated_function
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher e, 
sys.exc_info())
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/openstack/common/excutils.py",
 line 68, in __exit__
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher 
six.reraise(self.type_, self.value, self.tb)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 290, in decorated_function
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher return 
function(self, context, *args, **kwargs)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/compute/manager.py",
 line 3621, in add_fixed_ip_to_instance
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher network_id)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packages/nova/network/api.py",
 line 49, in wrapper
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher res = 
f(self, context, *args, **kwargs)
2014-10-22 13:06:59.936 1015 TRACE oslo.messaging.rpc.dispatcher   File 
"/home/vb/.virtualenvs/ecs/local/lib/python2.7/site-packa

[Yahoo-eng-team] [Bug 1297290] Re: keystone-manage db_sync might fail

2014-10-22 Thread Tom Fifield
My hope is that this was just a bug in Havana or before and all is fixed
in Icehouse/Juno now

** Changed in: openstack-manuals
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1297290

Title:
  keystone-manage db_sync might fail

Status in OpenStack Identity (Keystone):
  Expired
Status in OpenStack Manuals:
  Invalid

Bug description:
  Change to step 5. Create the database tables for the Identity Service

  Add an additional note after the db_sync command that this command may
  return a critical error and fail to fully migrate an empty repository.
  This is probably due to the fact that the upgrade scripts do not guard
  against transaction failures returned from the SQL server. You need to
  run this command multiple times and check "keystone-manage
  db_version". If it does not reach version 34, the upgrade scripts have
  not completed for Havana builds.

  
  ---
  Built: 2014-03-18T18:59:18 00:00
  git SHA: b4f5cdc33b9248aa9c9768e2deccbc3a1a04cc3f
  URL: 
http://docs.openstack.org/havana/install-guide/install/apt/content/keystone-install.html
  source File: 
file:/home/jenkins/workspace/openstack-install-deploy-guide-ubuntu/doc/install-guide/section_keystone-install.xml
  xml:id: keystone-install

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1297290/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384187] [NEW] Nova admin user not able to list the resources from all other users other than "nova list"

2014-10-22 Thread Roshan R Anvekar
Public bug reported:

While using nova commands as an admin user, we see that apart from "nova
list --all-tenants", other resources such as image-list, keypair-list,
flavor-list, or any other resources owned by other users cannot be
listed across all tenants.

Listing all resources from all users is a very important use case, since
it allows the admin user to display any resource from any user and then
update/delete it.

Hence there should be an "--all-tenants" style option for listing all of
these resources.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384187

Title:
  Nova admin user not able to list the resources from all other users
  other than "nova list"

Status in OpenStack Compute (Nova):
  New

Bug description:
  While using nova commands as an admin user, we see that apart from
  "nova list --all-tenants", other resources such as image-list, keypair-
  list, flavor-list, or any other resources owned by other users cannot
  be listed across all tenants.

  Listing all resources from all users is a very important use case,
  since it allows the admin user to display any resource from any user
  and then update/delete it.

  Hence there should be an "--all-tenants" style option for listing all
  of these resources.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384151] [NEW] warning message should use gettextutils

2014-10-22 Thread Romil Gupta
Public bug reported:

The existing LOG.warning(_("...")) messages should be converted to
LOG.warning(_LW("...")).

And every file which contains such warning messages should have the
following import:

from neutron.openstack.common.gettextutils import _LW
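
A before/after example (the message text is illustrative):

    # before
    LOG.warning(_("Device %s not found"), device)

    # after
    from neutron.openstack.common.gettextutils import _LW
    LOG.warning(_LW("Device %s not found"), device)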

** Affects: neutron
 Importance: Undecided
 Assignee: Romil Gupta (romilg)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Romil Gupta (romilg)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384151

Title:
  warning message should use gettextutils

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  The existing LOG.warning(_("...")) messages should be converted to
  LOG.warning(_LW("...")).

  And every file which contains such warning messages should have the
  following import:

  from neutron.openstack.common.gettextutils import _LW

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1384151/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372049] Re: Launching multiple VMs fails over 63 instances

2014-10-22 Thread Yair Fried
I think the status was changed to Opinion by accident. It should be
"Confirmed"


** Changed in: nova
   Status: Opinion => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372049

Title:
  Launching multiple VMs fails over 63 instances

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Confirmed

Bug description:
  RHEL-7.0
  Icehouse
  All-In-One

  Booting 63 VMs at once (with "num-instances" attribute) works fine.
  Setup is able to support up to 100 VMs booted in ~50 bulks.

  Booting 100 VMs at once, without Neutron network, so no network for
  the VMs, works fine.

  Booting 64 (or more) VMs boots only 63 VMs. Any of the VMs over 63 are
booted in ERROR state with details: VirtualInterfaceCreateException: Virtual
Interface creation failed
  The failed VM's port is in the DOWN state

  Details:
  After the initial boot command goes through, all CPU usage goes down (no
neutron/nova CPU consumption) until nova's vif_plugging_timeout is reached,
at which point 1 (= #num_instances - 63) VM is set to ERROR, and the rest of
the VMs reach active state.

  Guess: seems like neutron is going into some deadlock until some of
  the load is reduced by vif_plugging_timeout


  Disabling neutron-nova port notifications allows all VMs to be
  created.

  Notes: this is recreated also with multiple Compute nodes, and also
  multiple neutron RPC/API workers

  
  Recreate:
  set nova/neutron quotas to "-1"
  make sure neutron-nova port notifications are ON in both the neutron and
nova conf files
  create a network in your tenant

  boot more than 64 VMs

  nova boot --flavor 42 test_VM --image cirros --num-instances 64


  [yfried@yfried-mobl-rh ~(keystone_demo)]$ nova list
  
  +--------------------------------------+----------------------------------------------+--------+------------+-------------+-------------------------+
  | ID                                   | Name                                         | Status | Task State | Power State | Networks                |
  +--------------------------------------+----------------------------------------------+--------+------------+-------------+-------------------------+
  | 02d7b680-efd8-4291-8d56-78b43c9451cb | test_VM-02d7b680-efd8-4291-8d56-78b43c9451cb | ACTIVE | -          | Running     | demo_private=10.0.0.156 |
  | 05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | test_VM-05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | ACTIVE | -          | Running     | demo_private=10.0.0.150 |
  | 09131f19-5e83-4a40-a900-ffca24a8c775 | test_VM-09131f19-5e83-4a40-a900-ffca24a8c775 | ACTIVE | -          | Running     | demo_private=10.0.0.160 |
  | 0d3be93b-73d3-4995-913c-03a4b80ad37e | test_VM-0d3be93b-73d3-4995-913c-03a4b80ad37e | ACTIVE | -          | Running     | demo_private=10.0.0.164 |
  | 0fcadae4-768c-44a1-9e1c-ac371d1803f9 | test_VM-0fcadae4-768c-44a1-9e1c-ac371d1803f9 | ACTIVE | -          | Running     | demo_private=10.0.0.202 |
  | 11a87db1-5b15-4cad-a749-5d53e2fd8194 | test_VM-11a87db1-5b15-4cad-a749-5d53e2fd8194 | ACTIVE | -          | Running     | demo_private=10.0.0.201 |
  | 147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | test_VM-147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | ACTIVE | -          | Running     | demo_private=10.0.0.147 |
  | 1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | test_VM-1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | ACTIVE | -          | Running     | demo_private=10.0.0.187 |
  | 1d0b7210-f5a0-4827-b338-2014e8f21341 | test_VM-1d0b7210-f5a0-4827-b338-2014e8f21341 | ACTIVE | -          | Running     | demo_private=10.0.0.165 |
  | 1df564f6-5aac-4ac8-8361-bd44c305332b | test_VM-1df564f6-5aac-4ac8-8361-bd44c305332b | ACTIVE | -          | Running     | demo_private=10.0.0.145 |
  | 2031945f-6305-4cdc-939f-5f02171f82b2 | test_VM-2031945f-6305-4cdc-939f-5f02171f82b2 | ACTIVE | -          | Running     | demo_private=10.0.0.149 |
  | 256ff0ed-0e56-47e3-8b69-68006d658ad6 | test_VM-256ff0ed-0e56-47e3-8b69-68006d658ad6 | ACTIVE | -          | Running     | demo_private=10.0.0.177 |
  | 2b7256a8-c04a-42cf-9c19-5836b585c0f5 | test_VM-2b7256a8-c04a-42cf-9c19-5836b585c0f5 | ACTIVE | -          | Running     | demo_private=10.0.0.180 |
  | 2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | test_VM-2daac227-e0c9-4259-8e8e-b8a6e93b45e3 | ACTIVE | -          | Running     | demo_private=10.0.0.191 |
  | 425c170f-a450-440d-b9ba-0408d7c69b25 | test_VM-425c170f-a450-440d-b9ba-0408d7c69b25 | ACTIVE | -          | Running     | demo_private=10.0.0.169 |
  | 461fcce3-96ae-4462-ab65-fb63f3552703 | test_VM-461fcce3-96ae-4462-ab65-fb63f3552703 | ACTIVE | -          | Running     | demo_private=10.0.0.179 |
  | 46a9965d-6511-44a3-ab71-a87767cda759 | test_VM-46a9965d-6511-44a3-ab71-a87767cda759 | ACTIVE | -          | Running     | demo_private=10.0.0.199 |
  | 4c4ce671-5e84-4ccd-8496-02c0723178ec | test_VM-4c4ce671-5

[Yahoo-eng-team] [Bug 1384146] [NEW] Inconsistent enable_snat management

2014-10-22 Thread Cedric Brandily
Public bug reported:

Neutron resets enable_snat on router-gateway-clear but not on router-
gateway-set, which leads to inconsistent behavior:


# pub1, pub2 are external networks and router1 is a router

(neutron) router-gateway-set router1 pub1 --disable-snat
Set gateway for router router
(neutron) router-show router1 -c external_gateway_info
+------------------------+------------------------------------------------------------------------------+
| Field                  | Value                                                                        |
+------------------------+------------------------------------------------------------------------------+
| external_gateway_info  | {"network_id": "1682e4f4-7dc4-4ed0-bd10-e526ab2f6f81", "enable_snat": false} |
+------------------------+------------------------------------------------------------------------------+
(neutron) router-gateway-clear router
Removed gateway from router router
(neutron) router-gateway-set router pub2 
Set gateway for router router
(neutron) router-show router1 -c external_gateway_info
+------------------------+------------------------------------------------------------------------------+
| Field                  | Value                                                                        |
+------------------------+------------------------------------------------------------------------------+
| external_gateway_info  | {"network_id": "a32bcb44-165a-4de8-a8db-35f6ff8f2712", "enable_snat": true}  |
+------------------------+------------------------------------------------------------------------------+

==> enable_snat == False lost during router-gateway-clear


(neutron) router-gateway-set router1 pub1 --disable-snat
Set gateway for router router
(neutron) router-show router1 -c external_gateway_info
+------------------------+------------------------------------------------------------------------------+
| Field                  | Value                                                                        |
+------------------------+------------------------------------------------------------------------------+
| external_gateway_info  | {"network_id": "1682e4f4-7dc4-4ed0-bd10-e526ab2f6f81", "enable_snat": false} |
+------------------------+------------------------------------------------------------------------------+
(neutron) router-gateway-set router pub2 
Set gateway for router router
(neutron) router-show router1 -c external_gateway_info
+------------------------+------------------------------------------------------------------------------+
| Field                  | Value                                                                        |
+------------------------+------------------------------------------------------------------------------+
| external_gateway_info  | {"network_id": "a32bcb44-165a-4de8-a8db-35f6ff8f2712", "enable_snat": false} |
+------------------------+------------------------------------------------------------------------------+

==> enable_snat == False not lost during router-gateway-set
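
As a workaround sketch (token, endpoint and IDs below are placeholders, not
values from this report), explicitly passing enable_snat in
external_gateway_info when moving the gateway keeps the intended value, since
the standard Neutron v2 router-update call accepts it:

    curl -s -X PUT http://controller:9696/v2.0/routers/<router-id> \
         -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
         -d '{"router": {"external_gateway_info":
               {"network_id": "<pub2-id>", "enable_snat": false}}}'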

** Affects: neutron
 Importance: Undecided
 Assignee: Cedric Brandily (cbrandily)
 Status: In Progress


** Tags: l3-ipam-dhcp

** Changed in: neutron
   Status: New => In Progress

** Changed in: neutron
 Assignee: (unassigned) => Cedric Brandily (cbrandily)

** Tags added: l3-ipam-dhcp

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384146

Title:
  Inconsistent enable_snat management

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Neutron resets enable_snat on router-gateway-clear but not on router-
  gateway-set, which leads to inconsistent behavior:

  
  # pub1, pub2 are external networks and router1 is a router

  (neutron) router-gateway-set router1 pub1 --disable-snat
  Set gateway for router router
  (neutron) router-show router1 -c external_gateway_info
  
  +------------------------+------------------------------------------------------------------------------+
  | Field                  | Value                                                                        |
  +------------------------+------------------------------------------------------------------------------+
  | external_gateway_info  | {"network_id": "1682e4f4-7dc4-4ed0-bd10-e526ab2f6f81", "enable_snat": false} |
  +------------------------+------------------------------------------------------------------------------+
  (neutron) router-gateway-clear router
  Removed gateway from router router
  (neutron) router-gateway-set router pub2 
  Set gateway for router router
  (neutron) router-show router1 -c external_gateway_info
  
+---+--+
  | Field | Value   
 |
  
+---+--

[Yahoo-eng-team] [Bug 1384127] [NEW] Fail-fast if initctl isn't present

2014-10-22 Thread Rick Harris
Public bug reported:

If initctl isn't present in the image, a shutdown will take (by default)
60 seconds to time out.

In the case of initctl not being present, we figure this out immediately
by way of this error from libvirt:

error: Operation not supported: Container does not provide an initctl
pipe

If we detect this, we should abort the retry loop immediately, and
switch to the unclean shutdown (_destroy_instance).
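
A minimal sketch of the proposed fail-fast behaviour (the function and
callback names here are hypothetical, not actual nova code; only the libvirt
error text and the _destroy_instance fallback come from this report):

    import time
    import libvirt

    def soft_shutdown(dom, destroy, timeout=60, interval=5):
        """Request a clean shutdown, but fail fast when initctl is missing."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                dom.shutdown()
            except libvirt.libvirtError as exc:
                if 'initctl' in (exc.get_error_message() or ''):
                    # "Container does not provide an initctl pipe": a clean
                    # shutdown can never succeed, so abort the retry loop.
                    destroy()  # e.g. the unclean _destroy_instance path
                    return
                raise
            if not dom.isActive():
                return  # guest stopped cleanly
            time.sleep(interval)
        destroy()  # timed out waiting; force it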

** Affects: nova
 Importance: Undecided
 Assignee: Rick Harris (rconradharris)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384127

Title:
  Fail-fast if initctl isn't present

Status in OpenStack Compute (Nova):
  In Progress

Bug description:
  If initctl isn't present in the image, a shutdown will take (by
  default) 60 seconds to time out.

  In the case of initctl not being present, we figure this out
  immediately by way of this error from libvirt:

  error: Operation not supported: Container does not provide an initctl
  pipe

  If we detect this, we should abort the retry loop immediately, and
  switch to the unclean shutdown (_destroy_instance).

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1384127/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384121] [NEW] evacuate --on-shared-storage tries to delete the vm port on new compute node and fails

2014-10-22 Thread gustavo panizzo
Public bug reported:

If I run nova evacuate --on-shared-storage, nova-compute (on the
destination node) will try to delete a non-existent port in br-int,
failing and causing the migration to fail.


2014-10-22 04:48:29.853 6723 AUDIT nova.compute.manager 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] Rebuilding instance
2014-10-22 04:48:29.854 6723 INFO nova.compute.manager 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] disk on shared storage, recreating using 
existing disk
2014-10-22 04:48:30.044 6723 ERROR nova.virt.libvirt.driver [-] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] During wait destroy, instance disappeared.
2014-10-22 04:48:30.076 6723 ERROR nova.network.linux_net 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] Unable to execute ['ovs-vsctl', 
'--timeout=120', 'del-port', u'br-int', u'qvoc57306e5-38']. Exception: 
Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 
del-port br-int qvoc57306e5-38
Exit code: 1
Stdout: ''
Stderr: 'ovs-vsctl: no port named qvoc57306e5-38\n'
2014-10-22 04:48:30.126 6723 INFO nova.virt.libvirt.driver 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] Deleting instance files 
/var/lib/nova/instances/14ed2298-a72e-49bd-9151-c1c24f364970
2014-10-22 04:48:30.128 6723 INFO nova.virt.libvirt.driver 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] Deletion of 
/var/lib/nova/instances/14ed2298-a72e-49bd-9151-c1c24f364970 complete
2014-10-22 04:48:30.354 6723 INFO nova.virt.libvirt.driver 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] Creating image
2014-10-22 04:48:31.323 6723 INFO nova.compute.manager [-] Lifecycle event 0 on 
VM 14ed2298-a72e-49bd-9151-c1c24f364970
2014-10-22 04:48:31.389 6723 INFO nova.compute.manager 
[req-9a7d0c21-92d2-4a09-8e32-bbf67684224d None None] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] During sync_power_state the instance has 
a pending task (rebuild_spawning). Skip.
2014-10-22 04:48:31.389 6723 INFO nova.compute.manager [-] Lifecycle event 2 on 
VM 14ed2298-a72e-49bd-9151-c1c24f364970
2014-10-22 04:48:31.456 6723 INFO nova.compute.manager [-] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] During sync_power_state the instance has 
a pending task (rebuild_spawning). Skip.


Eventually VIF creation will time out.

If I don't tell nova to use shared storage, evacuation works just fine.


I'm running Icehouse updates .1 and .2; I see the problem on both versions.
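
For what it's worth, the failing delete could be made tolerant of an
already-absent port; a sketch using ovs-vsctl's --if-exists flag (the port
name below is just the one from the log above, purely as an example):

    # exits 0 even when the port is not present on br-int
    ovs-vsctl --timeout=120 -- --if-exists del-port br-int qvoc57306e5-38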

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: icehouse-backport-potential

** Summary changed:

- evacuate --on-shared-storage tries to delete the vm port on new vm and fails
+ evacuate --on-shared-storage tries to delete the vm port on new compute node 
and fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1384121

Title:
  evacuate --on-shared-storage tries to delete the vm port on new
  compute node and fails

Status in OpenStack Compute (Nova):
  New

Bug description:
  If I run nova evacuate --on-shared-storage, nova-compute (on the
  destination node) will try to delete a non-existent port in br-int,
  failing and causing the migration to fail.

  
  2014-10-22 04:48:29.853 6723 AUDIT nova.compute.manager 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] Rebuilding instance
  2014-10-22 04:48:29.854 6723 INFO nova.compute.manager 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] disk on shared storage, recreating using 
existing disk
  2014-10-22 04:48:30.044 6723 ERROR nova.virt.libvirt.driver [-] [instance: 
14ed2298-a72e-49bd-9151-c1c24f364970] During wait destroy, instance disappeared.
  2014-10-22 04:48:30.076 6723 ERROR nova.network.linux_net 
[req-a856a797-f5c7-41c0-a949-de68e8ddbdfd 184136c2674746b5a4b27d3b13ca91f8 
d42aacf8661045beb3a9ee7585bb0c8a] Unable to execute ['ovs-vsctl', 
'--timeout=120', 'del-port', u'br-int', u'qvoc57306e5-38']. Exception: 
Unexpected error while running command.
  Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl --timeout=120 
del-port br-int qvoc57306e5-38
  Exit code: 1
  Stdout: ''
  Stderr: 'ovs-vsctl: no port named qvoc57306e5-38\n'
  2014-10-22 04:48:30.1

[Yahoo-eng-team] [Bug 1384116] [NEW] Missing borders for "Actions" column in Firefox

2014-10-22 Thread Tatiana Ovchinnikova
Public bug reported:

In Firefox only, some rows are still missing borders in the "Actions"
column. Moreover, the title row itself still needs to be fixed.

** Affects: horizon
 Importance: Low
 Assignee: Tatiana Ovchinnikova (tmazur)
 Status: New

** Changed in: horizon
 Assignee: (unassigned) => Tatiana Ovchinnikova (tmazur)

** Changed in: horizon
   Importance: Undecided => Low

** Changed in: horizon
Milestone: None => kilo-1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1384116

Title:
  Missing borders for "Actions" column in Firefox

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  In Firefox only, some rows are still missing borders in the "Actions"
  column. Moreover, the title row itself still needs to be fixed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1384116/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384112] [NEW] endpoint, service, region can not be updated when using kvs driver

2014-10-22 Thread wanghong
Public bug reported:

region:
curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" 
http://192.168.70.105:35357/v3/regions/ed5ff7d3e26c48aeaf1f2f9fb2a4ad7e -d 
'{"region":{"description":"xxx"}}' -X PATCH

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: 'id' (Disable debug mode to suppress these
details.)", "code": 500, "title": "Internal Server Error"}}


service:
curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" 
http://192.168.70.105:35357/v3/services/f101743b55e54d2ba9cbf71d1f3456fc -d 
'{"service":{"type":"yy"}}' -X PATCH

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: 'id' (Disable debug mode to suppress these
details.)", "code": 500, "title": "Internal Server Error"}}


endpoint:
curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" 
http://192.168.70.105:35357/v3/endpoints/bbe21bf654e442edb21716cc00fb1c58 -d 
'{"endpoint":{"zz":"tt"}}' -X PATCH

{"error": {"message": "An unexpected error prevented the server from
fulfilling your request: 'region_id' (Disable debug mode to suppress
these details.)", "code": 500, "title": "Internal Server Error"}}

** Affects: keystone
 Importance: Undecided
 Assignee: wanghong (w-wanghong)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => wanghong (w-wanghong)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1384112

Title:
  endpoint, service, region can not be updated when using kvs driver

Status in OpenStack Identity (Keystone):
  New

Bug description:
  region:
  curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" 
http://192.168.70.105:35357/v3/regions/ed5ff7d3e26c48aeaf1f2f9fb2a4ad7e -d 
'{"region":{"description":"xxx"}}' -X PATCH

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'id' (Disable debug mode to suppress these
  details.)", "code": 500, "title": "Internal Server Error"}}

  
  service:
  curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" 
http://192.168.70.105:35357/v3/services/f101743b55e54d2ba9cbf71d1f3456fc -d 
'{"service":{"type":"yy"}}' -X PATCH

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'id' (Disable debug mode to suppress these
  details.)", "code": 500, "title": "Internal Server Error"}}

  
  endpoint:
  curl -i -H "X-Auth-Token:$TOKEN" -H "Content-Type:application/json" 
http://192.168.70.105:35357/v3/endpoints/bbe21bf654e442edb21716cc00fb1c58 -d 
'{"endpoint":{"zz":"tt"}}' -X PATCH

  {"error": {"message": "An unexpected error prevented the server from
  fulfilling your request: 'region_id' (Disable debug mode to suppress
  these details.)", "code": 500, "title": "Internal Server Error"}}

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1384112/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384109] [NEW] Mechanism driver 'l2population' failed in update_port_postcommit

2014-10-22 Thread James Page
Public bug reported:

OpenStack Juno, Ubuntu 14.04, 3 x neutron-servers with 32 API workers
each, rally/boot-and-delete with a concurrency level of 150:

2014-10-21 16:37:04.615 16312 ERROR neutron.plugins.ml2.managers 
[req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 None] Mechanism driver 'l2population' 
failed in update_port_postcommit
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers Traceback 
(most recent call last):
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 291, 
in _call_on_drivers
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers 
getattr(driver.obj, method_name)(context)
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py",
 line 135, in update_port_postcommit
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers 
self._update_port_up(context)
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py",
 line 228, in _update_port_up
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers 
agent_ports += self._get_port_fdb_entries(binding.port)
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/l2pop/mech_driver.py",
 line 45, in _get_port_fdb_entries
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers 
ip['ip_address']] for ip in port['fixed_ips']]
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers TypeError: 
'NoneType' object has no attribute '__getitem__'
2014-10-21 16:37:04.615 16312 TRACE neutron.plugins.ml2.managers
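
The TypeError above comes from building FDB entries for a port binding whose
port record resolved to None; a hedged sketch of a defensive guard (the method
shape is taken from the traceback, the guard and entry layout are only
illustrative):

    def _get_port_fdb_entries(self, port):
        # Skip bindings whose port has already gone away under high
        # concurrency instead of raising on port['fixed_ips'].
        if not port or not port.get('fixed_ips'):
            return []
        return [[port['mac_address'], ip['ip_address']]
                for ip in port['fixed_ips']]
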
2014-10-21 16:37:04.618 16312 ERROR oslo.messaging.rpc.dispatcher 
[req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 ] Exception during message handling: 
update_port_postcommit failed.
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/rpc.py", line 161, in 
update_device_up
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher host)
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 1136, in 
update_port_status
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher 
self.mechanism_manager.update_port_postcommit(mech_context)
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 527, 
in update_port_postcommit
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher 
continue_on_failure=True)
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/managers.py", line 302, 
in _call_on_drivers
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher 
method=method_name
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher 
MechanismDriverError: update_port_postcommit failed.
2014-10-21 16:37:04.618 16312 TRACE oslo.messaging.rpc.dispatcher
2014-10-21 16:37:04.620 16312 ERROR oslo.messaging._drivers.common 
[req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 ] Returning exception 
update_port_postcommit failed. to caller
2014-10-21 16:37:04.621 16312 ERROR oslo.messaging._drivers.common 
[req-c4cdefd5-b2d9-46fa-a031-bddd03d981e6 ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(

[Yahoo-eng-team] [Bug 1384108] [NEW] Exception during message handling: QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10

2014-10-22 Thread James Page
Public bug reported:

OpenStack Juno release, Ubuntu 14.04 using Cloud Archive; under
relatively high instance creation concurrency (150), neutron starts to
throw some errors:

2014-10-21 16:40:44.124 16312 ERROR oslo.messaging._drivers.common 
[req-8e3ebbdb-bc01-439d-af86-655176f206a6 ] ['Traceback (most recent call 
last):\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply\nincoming.message))\n', '  File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch\nreturn self._do_dispatch(endpoint, method, ctxt, args)\n', '  
File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 
123, in _do_dispatch\nresult = getattr(endpoint, method)(ctxt, 
**new_args)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/securitygroups_rpc.py",
 line 74, in security_group_info_for_devices\nports = 
self._get_devices_info(devices_info)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/api/rpc/handlers/securitygroups_rpc.py",
 line 41, in _get_devices_info\nport = 
self.plugin.get_port_from_device(device)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 1161, in 
get_port_from_device\nport = db.get_port_and_sgs(port_id)\n', '  File 
"/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/db.py", line 222, in 
get_port_and_sgs\nport_and_sgs = query.all()\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2300, in all\n 
   return list(self)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2412, in 
__iter__\nreturn self._execute_and_instances(context)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2425, in 
_execute_and_instances\nclose_with_result=True)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/query.py", line 2416, in 
_connection_from_session\n**kw)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 854, in 
connection\nclose_with_result=close_with_result)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 858, in 
_connection_for_bind\nreturn 
self.transaction._connection_for_bind(engine)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/orm/session.py", line 322, in 
_connection_for_bind\nconn = bind.contextual_connect()\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1799, in 
contextual_connect\nself.pool.connect(),\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 338, in connect\n   
 return _ConnectionFairy._checkout(self)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 641, in _checkout\n 
   fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 440, in checkout\n  
  rec = pool._do_get()\n', '  File 
"/usr/lib/python2.7/dist-packages/sqlalchemy/pool.py", line 957, in _do_get\n   
 (self.size(), self.overflow(), self._timeout))\n', 'TimeoutError: QueuePool 
limit of size 10 overflow 20 reached, connection timed out, timeout 10\n']
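
The exhaustion above is governed by the SQLAlchemy pool options neutron reads
via oslo.db; a sketch of the relevant knobs in neutron.conf (option names from
oslo.db, values shown are the defaults implied by the "size 10 overflow 20 ...
timeout 10" message, not a recommendation):

    [database]
    max_pool_size = 10
    max_overflow = 20
    pool_timeout = 10
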
2014-10-21 16:40:44.126 16312 ERROR oslo.messaging.rpc.dispatcher 
[req-ea96dc85-dc0f-4ddc-a827-dbc25ab32a03 ] Exception during message handling: 
QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher Traceback 
(most recent call last):
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 134, 
in _dispatch_and_reply
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher 
incoming.message))
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 177, 
in _dispatch
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher return 
self._do_dispatch(endpoint, method, ctxt, args)
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 123, 
in _do_dispatch
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher result = 
getattr(endpoint, method)(ctxt, **new_args)
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 237, in 
report_state
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher 
self.plugin.create_or_update_agent(context, agent_state)
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher   File 
"/usr/lib/python2.7/dist-packages/neutron/db/agents_db.py", line 197, in 
create_or_update_agent
2014-10-21 16:40:44.126 16312 TRACE oslo.messaging.rpc.dispatcher return 
self._create_or_update_agent(context, agent)
2014-10-