[Yahoo-eng-team] [Bug 1387950] [NEW] libvirt: fail to delete VM due to libvirt timeout

2014-10-31 Thread Qin Zhao
Public bug reported:

When I run a longevity test against Juno code, I notice that the delete
VM operation occasionally fails. The stack trace is:

File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2507, in 
_delete_instance
self._shutdown_instance(context, instance, bdms)
  File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2437, 
in _shutdown_instance
requested_networks)
  File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2426, 
in _shutdown_instance
block_device_info)
  File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
1054, in destroy
self._destroy(instance)
  File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
1010, in _destroy
instance=instance)
  File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
six.reraise(self.type_, self.value, self.tb)
  File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
979, in _destroy
virt_dom.destroy()
  File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 183, in doit
result = proxy_call(self._autowrap, f, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 141, in 
proxy_call
rv = execute(f, *args, **kwargs)
  File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 122, in 
execute
six.reraise(c, e, tb)
  File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 80, in tworker
rv = meth(*args, **kwargs)
  File /usr/lib64/python2.6/site-packages/libvirt.py, line 730, in destroy
if ret == -1: raise libvirtError ('virDomainDestroy() failed', dom=self)


Libvirt log is:

2014-10-29 06:28:17.535+: 2025: warning : qemuProcessKill:4174 : Timed out 
waiting after SIGTERM to process 9972, sending SIGKILL
2014-10-29 06:28:22.537+: 2025: warning : qemuProcessKill:4206 : Timed out 
waiting after SIGKILL to process 9972
2014-10-29 06:28:22.537+: 2025: error : qemuDomainDestroyFlags:2098 : 
operation failed: failed to kill qemu process with SIGTERM
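
For illustration, a minimal sketch (not Nova's actual code) of tolerating this
failure by retrying virDomainDestroy() when libvirt reports that it could not
kill the qemu process, instead of failing the delete on the first attempt:

import time
import libvirt

def destroy_with_retries(conn, uuid, attempts=3, delay=5):
    dom = conn.lookupByUUIDString(uuid)
    for _ in range(attempts):
        try:
            dom.destroy()
            return True
        except libvirt.libvirtError as e:
            # e.g. "operation failed: failed to kill qemu process with SIGTERM"
            if e.get_error_code() != libvirt.VIR_ERR_OPERATION_FAILED:
                raise
            time.sleep(delay)
    return False

Whether a retry actually helps depends on why the qemu process survives
SIGKILL (often an uninterruptible I/O wait), which is the interesting part of
this report.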

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: libvirt

** Bug watch added: Red Hat Bugzilla #1073624
   https://bugzilla.redhat.com/show_bug.cgi?id=1073624

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387950

Title:
  libvirt: fail to delete VM due to libvirt timeout

Status in OpenStack Compute (Nova):
  New

Bug description:
  When I run a longevity test against Juno code, I notice that the delete
  VM operation occasionally fails. The stack trace is:

  File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2507, 
in _delete_instance
  self._shutdown_instance(context, instance, bdms)
File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2437, 
in _shutdown_instance
  requested_networks)
File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.6/site-packages/nova/compute/manager.py, line 2426, 
in _shutdown_instance
  block_device_info)
File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
1054, in destroy
  self._destroy(instance)
File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
1010, in _destroy
  instance=instance)
File /usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py, 
line 82, in __exit__
  six.reraise(self.type_, self.value, self.tb)
File /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py, line 
979, in _destroy
  virt_dom.destroy()
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 183, in doit
  result = proxy_call(self._autowrap, f, *args, **kwargs)
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 141, in 
proxy_call
  rv = execute(f, *args, **kwargs)
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 122, in 
execute
  six.reraise(c, e, tb)
File /usr/lib/python2.6/site-packages/eventlet/tpool.py, line 80, in 
tworker
  rv = meth(*args, **kwargs)
File /usr/lib64/python2.6/site-packages/libvirt.py, line 730, in destroy
  if ret == -1: raise libvirtError ('virDomainDestroy() failed', dom=self)

  
  Libvirt log is:

  2014-10-29 06:28:17.535+: 2025: warning : qemuProcessKill:4174 : Timed 
out waiting after SIGTERM to process 9972, sending SIGKILL
  2014-10-29 06:28:22.537+: 2025: warning : qemuProcessKill:4206 : Timed 
out waiting after SIGKILL to process 9972
  2014-10-29 06:28:22.537+: 2025: error : qemuDomainDestroyFlags:2098 : 
operation failed: failed to kill qemu process with SIGTERM

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387950/+subscriptions

[Yahoo-eng-team] [Bug 1387973] [NEW] Normal user not able to download image if protected property is not associated with the image with restrict-download policy

2014-10-31 Thread Abhishek Kekane
Public bug reported:

If a restrict-download rule is configured in policy.json and an image is
added without the protected property mentioned in the restricted rule, then
normal users (other than admin) are not able to download the image.

Steps to reproduce:

1. Create normal_user with _member_ role using horizon

2. Configure download rule in policy.json

   "download_image": "role:admin or rule:restricted",
   "restricted": "not ('test_1234':%(test_key)s and role:_member_)",

3. Restart glance-api service

4. create image without property 'test_key' with admin user

   i. source devstack/openrc admin admin
   ii. glance image-create
   iii. glance image-update image_id --name non_protected --disk-format qcow2 
--container-format bare --is-public True --file /home/openstack/api.log

5. Try to download the newly created image with normal_user.

   i. source devstack/openrc normal_user admin
   ii. glance image-download image_id

It returns a 403 Forbidden response to the user, whereas the admin user can
download the image successfully.

The expected behavior is that all users can download the image if the
restricted property is not set on it.

Note:
With the current oslo-incubator policy module, this issue is not reproducible.
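
For reference, the policy rules from step 2 can be exercised outside Glance.
The sketch below uses the standalone oslo.policy library (so it may not match
the oslo-incubator copy that exhibits the bug); the empty target stands in for
an image created without the 'test_key' property.

from oslo_config import cfg
from oslo_policy import policy

conf = cfg.ConfigOpts()
conf([])                                   # no config files needed here
enforcer = policy.Enforcer(conf, use_conf=False)
enforcer.set_rules(policy.Rules.from_dict({
    "download_image": "role:admin or rule:restricted",
    "restricted": "not ('test_1234':%(test_key)s and role:_member_)",
}))

target = {}                                # image without 'test_key' (step 4)
creds = {"roles": ["_member_"]}            # normal_user from step 1

# Expected result: True, i.e. a member may download an image that lacks the
# protected property. Glance returning 403 Forbidden here is the reported bug.
print(enforcer.enforce("download_image", target, creds))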

** Affects: glance
 Importance: Undecided
 Status: New


** Tags: ntt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1387973

Title:
  Normal user not able to download image if protected property is not
  associated with the image with restrict-download policy

Status in OpenStack Image Registry and Delivery Service (Glance):
  New

Bug description:
  If a restrict-download rule is configured in policy.json and an image is
  added without the protected property mentioned in the restricted rule, then
  normal users (other than admin) are not able to download the image.

  Steps to reproduce:

  1. Create normal_user with _member_ role using horizon

  2. Configure download rule in policy.json

     "download_image": "role:admin or rule:restricted",
     "restricted": "not ('test_1234':%(test_key)s and role:_member_)",

  3. Restart glance-api service

  4. create image without property 'test_key' with admin user

 i. source devstack/openrc admin admin
 ii. glance image-create
 iii. glance image-update image_id --name non_protected --disk-format 
qcow2 --container-format bare --is-public True --file /home/openstack/api.log

  5. Try to download the newly created image with normal_user.

 i. source devstack/openrc normal_user admin
 ii. glance image-download image_id

  It returns a 403 Forbidden response to the user, whereas the admin user can
  download the image successfully.

  The expected behavior is that all users can download the image if the
  restricted property is not set on it.

  Note:
  With the current oslo-incubator policy module, this issue is not reproducible.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1387973/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387989] [NEW] VMware: uploading snapshot to glance_vsphere is too slow

2014-10-31 Thread Koichi Yoshigoe
Public bug reported:

Uploading a snapshot to the glance datastore is too slow in a vSphere
environment. This should be done via the vSphere API rather than an HTTP
transfer when the glance_store backend is vsphere.

** Affects: nova
 Importance: Undecided
 Assignee: Koichi Yoshigoe (degdoo)
 Status: New

** Changed in: nova
 Assignee: (unassigned) => Koichi Yoshigoe (degdoo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387989

Title:
  VMware: uploading snapshot to glance_vsphere is too slow

Status in OpenStack Compute (Nova):
  New

Bug description:
  Uploading a snapshot to the glance datastore is too slow in a vSphere
  environment. This should be done via the vSphere API rather than an HTTP
  transfer when the glance_store backend is vsphere.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1387989/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386562] Re: keystone did not start (ImportError: Class TemplatedCatalog cannot be found)

2014-10-31 Thread OpenStack Infra
Reviewed:  https://review.openstack.org/131394
Committed: 
https://git.openstack.org/cgit/openstack-dev/devstack/commit/?id=7fb5082c5c7abff95eb46dd9a92c5fd8fc63ddd2
Submitter: Jenkins
Branch: master

commit 7fb5082c5c7abff95eb46dd9a92c5fd8fc63ddd2
Author: wanghong w.wangh...@huawei.com
Date:   Tue Oct 28 19:09:04 2014 +0800

correct templated catalog driver class

Now the templated catalog driver class TemplatedCatalog is removed
in this patch https://review.openstack.org/#/c/125708/2 use
keystone.catalog.backends.templated.Catalog instead.

Change-Id: Ib9c8ea557e7171ff0c78a1e10d752ed564aff9e7
Closes-Bug: #1386562


** Changed in: devstack
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1386562

Title:
  keystone did not start (ImportError: Class TemplatedCatalog cannot be
  found)

Status in devstack - openstack dev environments:
  Fix Released
Status in OpenStack Identity (Keystone):
  Invalid

Bug description:
  After the commit ID 1ea9d50a2c828a3eb976e458659008a5461b1418 of Steve
  Martinelli (made on Thu Oct 2 12:57:20 2014 -0400), keystone service
  can no longer be started.

  Root cause:
  - At that commit, the class TemplatedCatalog is removed from the module
keystone.catalog.backends.templated; another class in that module, named
Catalog, is used instead.
  - However, the DevStack script is not updated accordingly. Consequently, the
file devstack/lib/keystone still points to the removed class TemplatedCatalog.

  Proposal:
  - Change the file devstack/lib/keystone so that it points to the right
class, which is Catalog.

  Excerpt of the file devstack/lib/keystone:

  # Configure ``keystone.conf`` to use templates
  iniset $KEYSTONE_CONF catalog driver 
keystone.catalog.backends.templated.TemplatedCatalog
  iniset $KEYSTONE_CONF catalog template_file $KEYSTONE_CATALOG

  
  Excerpt of the commit ID 1ea9d50a2c828a3eb976e458659008a5461b1418:

  commit 1ea9d50a2c828a3eb976e458659008a5461b1418
  Author: Steve Martinelli steve...@ca.ibm.com
  Date:   Thu Oct 2 12:57:20 2014 -0400

  Remove deprecated TemplatedCatalog class
  
  Use keystone.catalog.backends.templated.Catalog instead
  
  implements bp removed-as-of-kilo
  
  Change-Id: I0415852991e504677d1d1a81740c72f0bd8fc8bb
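
  (As a side note, the failure is easy to reproduce outside devstack. The
  sketch below mirrors oslo's importutils, of which keystone carried an
  oslo-incubator copy at the time, and shows that the stale option value
  raises exactly the ImportError in the bug title, while the surviving class
  imports fine.)

  from oslo_utils import importutils

  importutils.import_class('keystone.catalog.backends.templated.Catalog')
  # works: this is the class devstack should point at

  importutils.import_class('keystone.catalog.backends.templated.TemplatedCatalog')
  # ImportError: Class TemplatedCatalog cannot be found (...)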

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1386562/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1374519] Re: Orphaned queues are not auto-deleted for Qpid

2014-10-31 Thread Ihar Hrachyshka
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1374519

Title:
  Orphaned queues are not auto-deleted for Qpid

Status in OpenStack Neutron (virtual network service):
  Fix Released
Status in neutron icehouse series:
  In Progress
Status in Messaging API for OpenStack:
  Fix Committed

Bug description:
  The following patch incorrectly set auto-delete for Qpid to False:
  https://github.com/openstack/oslo-incubator/commit/5ff534d1#diff-
  372094c4bfc6319d22875a970aa6b730R190

  While for RabbitMQ, it's True.

  This results in queues being left on the broker if a client dies and does
  not come back.

  Red Hat bug for reference:
  https://bugzilla.redhat.com/show_bug.cgi?id=1099657

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1374519/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1372049] Re: Launching multiple VMs fails over 63 instances

2014-10-31 Thread Ihar Hrachyshka
Automatic expansion of the thread pool in oslo.messaging is not an option,
since we should have some limit applied to avoid other problems caused by
too much parallelism. If we ever consider expanding the pool beyond the
hardcoded value from the configuration file, we'll need to apply some
non-obvious heuristics to determine whether higher parallelism will be
beneficial for the whole system.

We may change the default value for oslo.messaging eventlet executor,
though it will influence all the services that use the library, not just
this specific case, and it's not obvious whether it won't introduce
other issues.

The safest option is to backport the Nova fix to stable branches.

I've requested juno and icehouse backports for the Nova patch:
- https://review.openstack.org/132202 (Juno)
- https://review.openstack.org/132218 (Icehouse)
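
To make the constraint concrete, here is a toy eventlet-only illustration
(not oslo.messaging internals): with an executor pool of 64 green threads,
the 65th callback cannot start until one of the first 64 gives up its slot,
which is why the pool size is the knob under discussion.

import eventlet

pool = eventlet.GreenPool(size=64)    # stands in for the executor's pool limit

def handle_rpc(i):
    eventlet.sleep(0.1)               # stand-in for a long-running callback
    return i

threads = [pool.spawn(handle_rpc, i) for i in range(100)]
pool.waitall()
print(len([t.wait() for t in threads]))   # 100, but never more than 64 at once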

** Changed in: oslo.messaging
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1372049

Title:
  Launching multiple VMs fails over 63 instances

Status in OpenStack Neutron (virtual network service):
  Invalid
Status in OpenStack Compute (Nova):
  Fix Committed
Status in Messaging API for OpenStack:
  Won't Fix

Bug description:
  RHEL-7.0
  Icehouse
  All-In-One

  Booting 63 VMs at once (with num-instances attribute) works fine.
  Setup is able to support up to 100 VMs booted in ~50 bulks.

  Booting 100 VMs at once, without Neutron network, so no network for
  the VMs, works fine.

  Booting 64 (or more) VMs boots only 63 VMs. Any of the VMs over 63 are
booted in ERROR state with details: VirtualInterfaceCreateException: Virtual
Interface creation failed.
  The failed VM's port is in DOWN state.

  Details:
  After the initial boot command goes through, all CPU usage goes down (no
neutron/nova CPU consumption) until nova's vif_plugging_timeout is reached, at
which point 1 (= #num_instances - 63) VM is set to ERROR and the rest of the
VMs reach the active state.

  Guess: it seems like neutron is going into some deadlock until some of
  the load is reduced by vif_plugging_timeout.


  Disabling neutron-nova port notifications allows all VMs to be
  created.

  Notes: this is also recreated with multiple compute nodes, and also with
  multiple neutron RPC/API workers.

  
  Recreate:
  set nova/neutron quotas to -1
  make sure neutron-nova port notifications are ON in both the neutron and nova
conf files
  create a network in your tenant

  boot more than 64 VMs

  nova boot --flavor 42 test_VM --image cirros --num-instances 64


  [yfried@yfried-mobl-rh ~(keystone_demo)]$ nova list
  
+--+--+++-+-+
  | ID   | Name 
| Status | Task State | Power State | Networks|
  
+--+--+++-+-+
  | 02d7b680-efd8-4291-8d56-78b43c9451cb | 
test_VM-02d7b680-efd8-4291-8d56-78b43c9451cb | ACTIVE | -  | Running
 | demo_private=10.0.0.156 |
  | 05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | 
test_VM-05fd6dd2-6b0e-4801-9219-ae4a77a53cfd | ACTIVE | -  | Running
 | demo_private=10.0.0.150 |
  | 09131f19-5e83-4a40-a900-ffca24a8c775 | 
test_VM-09131f19-5e83-4a40-a900-ffca24a8c775 | ACTIVE | -  | Running
 | demo_private=10.0.0.160 |
  | 0d3be93b-73d3-4995-913c-03a4b80ad37e | 
test_VM-0d3be93b-73d3-4995-913c-03a4b80ad37e | ACTIVE | -  | Running
 | demo_private=10.0.0.164 |
  | 0fcadae4-768c-44a1-9e1c-ac371d1803f9 | 
test_VM-0fcadae4-768c-44a1-9e1c-ac371d1803f9 | ACTIVE | -  | Running
 | demo_private=10.0.0.202 |
  | 11a87db1-5b15-4cad-a749-5d53e2fd8194 | 
test_VM-11a87db1-5b15-4cad-a749-5d53e2fd8194 | ACTIVE | -  | Running
 | demo_private=10.0.0.201 |
  | 147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | 
test_VM-147e4a6b-a77c-46ef-b8fd-d65479ccb8ca | ACTIVE | -  | Running
 | demo_private=10.0.0.147 |
  | 1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | 
test_VM-1c5b5f40-d2f3-4cc7-9f80-f5df8de918b9 | ACTIVE | -  | Running
 | demo_private=10.0.0.187 |
  | 1d0b7210-f5a0-4827-b338-2014e8f21341 | 
test_VM-1d0b7210-f5a0-4827-b338-2014e8f21341 | ACTIVE | -  | Running
 | demo_private=10.0.0.165 |
  | 1df564f6-5aac-4ac8-8361-bd44c305332b | 
test_VM-1df564f6-5aac-4ac8-8361-bd44c305332b | ACTIVE | -  | Running
 | demo_private=10.0.0.145 |
  | 2031945f-6305-4cdc-939f-5f02171f82b2 | 
test_VM-2031945f-6305-4cdc-939f-5f02171f82b2 | ACTIVE | -  | Running
 | demo_private=10.0.0.149 |
  | 256ff0ed-0e56-47e3-8b69-68006d658ad6 | 
test_VM-256ff0ed-0e56-47e3-8b69-68006d658ad6 | ACTIVE | -  | Running
 | demo_private=10.0.0.177 |
  | 

[Yahoo-eng-team] [Bug 1388062] [NEW] Flag in keystone.conf to remove password panel when using ldap for authentication

2014-10-31 Thread Seb Hughes
Public bug reported:

When using LDAP for authentication, the password panel is still visible
in Horizon. Therefore I suggest it would be good to have the ability to
set a flag in keystone.conf so that, if you're using LDAP for auth, it
disables the password panel. Generally, most companies have their own
portal where users change their password.
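
A possible Horizon-side workaround (a sketch for local_settings.py, not the
keystone.conf flag requested here): Horizon's OPENSTACK_KEYSTONE_BACKEND
setting describes what the identity backend can do, and 'can_edit_user' is
what the password-editing UI consults in at least some releases; verify
against your Horizon version before relying on it.

OPENSTACK_KEYSTONE_BACKEND = {
    'name': 'ldap',           # identity backend in use
    'can_edit_user': False,   # hide/disable password editing for end users
    'can_edit_group': False,
    'can_edit_project': False,
    'can_edit_domain': False,
    'can_edit_role': False,
}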

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: keystone

** Description changed:

- When using ldap for authentication, the password panel still is visible.
- If a user tries to change their password it will fail. Therefore I
- suggest it would be good to have the ability to set a flag in the
- keystone.conf that if you're using LDAP for auth it disabled the
- password panel. Generally most companies have their own portal where
- users change their password.
+ When using ldap for authentication, the password panel still is visible
+ in Horizon. Therefore I suggest it would be good to have the ability to
+ set a flag in the keystone.conf that if you're using LDAP for auth it
+ disables the password panel. Generally most companies have their own
+ portal where users change their password.

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1388062

Title:
  Flag in keystone.conf to remove password panel when using ldap for
  authentication

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  When using LDAP for authentication, the password panel is still
  visible in Horizon. Therefore I suggest it would be good to have the
  ability to set a flag in keystone.conf so that, if you're using LDAP
  for auth, it disables the password panel. Generally, most companies have
  their own portal where users change their password.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1388062/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388069] [NEW] Review documentation change for trust redelegation

2014-10-31 Thread Alexander Makarov
Public bug reported:

https://review.openstack.org/#/c/131541/ needs a tech. writer review.

** Affects: keystone
 Importance: Undecided
 Assignee: Irina (ipovolotskaya)
 Status: New


** Tags: documentation

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1388069

Title:
  Review documentation change for trust redelegation

Status in OpenStack Identity (Keystone):
  New

Bug description:
  https://review.openstack.org/#/c/131541/ needs a tech. writer review.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1388069/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388077] [NEW] Parallel periodic instance power state reporting from compute nodes has high impact on conductors and message broker

2014-10-31 Thread James Page
Public bug reported:

Environment: OpenStack Juno release/Ubuntu 14.04/480 compute nodes/8
cloud controllers/40,000 instances +

The change made in:

https://github.com/openstack/nova/commit/baabab45e0ae0e9e35872cae77eb04bdb5ee0545

switches power state reporting from being a serial process for each
instance on a hypervisor to being a parallel thread for every instance;
for clouds running high instance counts, this has quite an impact on the
conductor processes as they try to deal with N instance refresh calls in
parallel where N is the number of instances running on the cloud.

It might be better to throttle this to a configurable parallel level so
that periodic RPC load can be managed effectively in a larger cloud, or to
continue to do this process in series but outside of the main thread.

The net result of this activity is that it places increased demands on
the message broker, which has to deal with more parallel connections,
and on the conductors as they try to consume all of the RPC requests; if
the message broker hits its memory high-water mark it will stop
publishers from publishing any more messages until the memory usage drops
below the high-water mark again. This might not be achievable if all
conductor processes are tied up with existing RPC calls trying to send
replies, resulting in a message broker lockup and a collapse of all RPC in
the cloud.
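
The throttling idea above could look roughly like the sketch below
(illustrative only, not Nova's implementation; the option name and the
per-instance refresh helper are hypothetical): bound how many power-state
refreshes run at once instead of spawning one unbounded green thread per
instance.

import eventlet
from eventlet.semaphore import Semaphore

POWER_STATE_SYNC_CONCURRENCY = 16            # hypothetical config option
_sync_limit = Semaphore(POWER_STATE_SYNC_CONCURRENCY)

def _query_and_sync(instance):
    with _sync_limit:                        # at most N refreshes in flight
        sync_instance_power_state(instance)  # hypothetical per-instance refresh

def sync_power_states(instances):
    pool = eventlet.GreenPool()
    for instance in instances:
        pool.spawn_n(_query_and_sync, instance)
    pool.waitall()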

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: nova (Ubuntu)
 Importance: Undecided
 Status: New

** Description changed:

  The change made in:
  
  
https://github.com/openstack/nova/commit/baabab45e0ae0e9e35872cae77eb04bdb5ee0545
  
  Switches power state reporting from being a serial process on each
  instance on a hypervisor to being a parallel thread for every instance;
  for clouds running high instance densities, this has quite an impact on
  the conductor processes as they try to deal with N instance refresh
  calls in parallel where N is the number of instances running on the
  cloud.
  
  It might be better to throttle this to a configurable parallel level so
  that period RPC load can be managed effectively in a larger cloud, or to
  continue todo this process in series but outside of the main thread.
+ 
+ The net result of this activity is that it places increase demands on
+ the message broker, which has to deal with more parallel connections,
+ and the conductors as they try to consume all of the RPC requests; if
+ the message broker hits its memory high water mark it will stop
+ publishers publishing any more messages until the memory usage drops
+ below the high water mark again - this might not be achievable if all
+ conductor processes are tied up with existing RPC calls try to send
+ replies, resulting in a message broker lockup and collapse of all RPC in
+ the cloud.

** Description changed:

  The change made in:
  
  
https://github.com/openstack/nova/commit/baabab45e0ae0e9e35872cae77eb04bdb5ee0545
  
- Switches power state reporting from being a serial process on each
+ Switches power state reporting from being a serial process for each
  instance on a hypervisor to being a parallel thread for every instance;
  for clouds running high instance densities, this has quite an impact on
  the conductor processes as they try to deal with N instance refresh
  calls in parallel where N is the number of instances running on the
  cloud.
  
  It might be better to throttle this to a configurable parallel level so
  that period RPC load can be managed effectively in a larger cloud, or to
  continue todo this process in series but outside of the main thread.
  
  The net result of this activity is that it places increase demands on
  the message broker, which has to deal with more parallel connections,
  and the conductors as they try to consume all of the RPC requests; if
  the message broker hits its memory high water mark it will stop
  publishers publishing any more messages until the memory usage drops
  below the high water mark again - this might not be achievable if all
  conductor processes are tied up with existing RPC calls try to send
  replies, resulting in a message broker lockup and collapse of all RPC in
  the cloud.

** Also affects: nova
   Importance: Undecided
   Status: New

** Summary changed:

- Parallel periodic power state reporting from compute nodes has high impact on 
conductors
+ Parallel periodic power state reporting from compute nodes has high impact on 
conductors and message broker

** Description changed:

+ Environment: OpenStack Juno release/Ubuntu 14.04/480 compute nodes/8
+ cloud controllers/40,000 instances +
+ 
  The change made in:
  
  
https://github.com/openstack/nova/commit/baabab45e0ae0e9e35872cae77eb04bdb5ee0545
  
- Switches power state reporting from being a serial process for each
+ switches power state reporting from being a serial process for each
  instance on a hypervisor to being a parallel thread for every instance;
  for clouds running high instance 

[Yahoo-eng-team] [Bug 1388095] [NEW] VMware fake driver returns invalid search results due to incorrect use of lstrip()

2014-10-31 Thread Matthew Booth
Public bug reported:

_search_ds in the fake driver does:

path = file.lstrip(dname).split('/')

The intention is to remove a prefix of dname from the beginning of file,
but this actually removes all instances of all characters in dname from
the left of file.
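
A quick illustration of the pitfall (the paths below are made up): lstrip()
strips a set of characters, not a prefix, so it can eat into the remainder of
the path.

dname = '/fake-ds/vm'
fpath = '/fake-ds/vm/disk.vmdk'

print(fpath.lstrip(dname))    # 'isk.vmdk': the '/' and 'd' of '/disk' are
                              # also characters of dname, so they get stripped

# Prefix removal that matches the stated intention:
print(fpath[len(dname):] if fpath.startswith(dname) else fpath)   # '/disk.vmdk'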

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: vmware

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388095

Title:
  VMware fake driver returns invalid search results due to incorrect use
  of lstrip()

Status in OpenStack Compute (Nova):
  New

Bug description:
  _search_ds in the fake driver does:

  path = file.lstrip(dname).split('/')

  The intention is to remove a prefix of dname from the beginning of
  file, but this actually removes all instances of all characters in
  dname from the left of file.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388095/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1387945] Re: nova volume-attach is giving wrong device ID

2014-10-31 Thread John Griffith
This is a VERY old and long running issue with how things work on the
Nova side of the house.  The volumes are going to get attached to the
next available drive mapping (vdb, vdc, vdd) based on the Block Device
Mapping table in Nova.  The specification you provide to attach-volume
is really more of a hint than anything else.

Anyway, in the past the answer has been to just use 'auto' and save
yourself the false sense of control here. That is not acceptable for some;
regardless, this is a Nova operation and Cinder actually has no control
or input here.

Marking invalid for Cinder and adding Nova.
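
One guest-side workaround (an observation about KVM/virtio guests, not a Nova
or Cinder API): the volume UUID is exposed as the virtio disk serial, so the
device can be located by id instead of trusting the hinted device name.

import os

volume_id = '201b2fe8-7f77-446d-a6e4-5d077914329c'
by_id = '/dev/disk/by-id/virtio-' + volume_id[:20]   # virtio serials are
                                                     # truncated to 20 chars
if os.path.exists(by_id):
    print(os.path.realpath(by_id))   # the real device node, e.g. /dev/vdc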

** Changed in: cinder
   Status: New => Invalid

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1387945

Title:
  nova volume-attach is giving wrong device ID

Status in Cinder:
  Invalid
Status in OpenStack Compute (Nova):
  New

Bug description:
  Sometimes, while attaching a volume to an instance using nova
  volume-attach, it reports the wrong device ID (mountpoint: /dev/vdb).

  root@techpatron:~# nova volume-attach VM1 201b2fe8-7f77-446d-a6e4-5d077914329c
  +--+--+
  | Property | Value|
  +--+--+
  | device   | /dev/vdd |
  | id   | 201b2fe8-7f77-446d-a6e4-5d077914329c |
  | serverId | 2f319155-06d2-4aca-9f0f-49b415112568 |
  | volumeId | 201b2fe8-7f77-446d-a6e4-5d077914329c |
  +--+--+

  Here it is showing /dev/vdd, but the volume is actually attached as
  /dev/vdc to the instance VM1.

  Because of this, my automation scripts (which perform operations on the
  attached device within the instance) run into problems: the script takes
  the device ID /dev/vdd from the output, but the device is actually
  attached at some other mount point.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1387945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1370869] Re: Cannot display project overview page due to cannot convert float infinity to integer error

2014-10-31 Thread Julie Pichon
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
   Status: New => In Progress

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/icehouse
 Assignee: (unassigned) => Julie Pichon (jpichon)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1370869

Title:
  Cannot display project overview page due to cannot convert float
  infinity to integer error

Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress

Bug description:
  Due to nova bug 1370867, nova absolute-limits sometimes returns -1 for "*Used"
fields rather than 0.
  If this happens, the project overview page cannot be displayed, failing with
a "cannot convert float infinity to integer" error.
  Users cannot use the dashboard without specifying a URL directly, so it is
better for the dashboard to guard against this situation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1370869/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1386687] Re: Overview page: OverflowError when cinder limits are negative

2014-10-31 Thread Julie Pichon
** Also affects: horizon/icehouse
   Importance: Undecided
   Status: New

** Also affects: horizon/juno
   Importance: Undecided
   Status: New

** Changed in: horizon/icehouse
 Assignee: (unassigned) => Julie Pichon (jpichon)

** Changed in: horizon/icehouse
   Status: New => In Progress

** Changed in: horizon/juno
 Assignee: (unassigned) => Julie Pichon (jpichon)

** Changed in: horizon/icehouse
   Importance: Undecided => Medium

** Changed in: horizon/juno
   Importance: Undecided => Medium

** Changed in: horizon/juno
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1386687

Title:
  Overview page: OverflowError when cinder limits are negative

Status in OpenStack Dashboard (Horizon):
  Fix Committed
Status in OpenStack Dashboard (Horizon) icehouse series:
  In Progress
Status in OpenStack Dashboard (Horizon) juno series:
  In Progress

Bug description:
  This is the Cinder twin to bug 1370869 which was resolved for Nova.
  For some yet-to-be-fully-debugged reasons, after deleting multiple
  instances the quota_usages table for Cinder ended up with negative
  values for several of the in use limits, causing the Overview Page
  to fail with an error 500:

  OverflowError at /project/
  cannot convert float infinity to integer

  Even if this is (probably?) a rare occurrence, it would make sense to
  also add guards for the cinder limits and make the overview page more
  resilient.
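
  For reference, the error itself is just Python refusing to build an int from
  infinity (Horizon represents unlimited quotas as float('inf')), and the guard
  amounts to clamping bad values before doing the percentage maths; the helper
  below is illustrative, not Horizon's actual code.

  used = -1                        # bogus value from the quota_usages table
  limit = float('inf')             # an unlimited quota

  try:
      int(limit)                   # the crash behind the error 500
  except OverflowError as e:
      print(e)                     # cannot convert float infinity to integer

  def safe_used(value):
      return max(value, 0)         # treat negative usage as zero before charting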

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1386687/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1384555] Re: SQL error during alembic.migration when populating Neutron database on MariaDB 10.0

2014-10-31 Thread OpenStack Infra
Fix proposed to branch: master
Review: https://review.openstack.org/132273

** Changed in: neutron
   Status: Won't Fix => In Progress

** Changed in: neutron
 Assignee: Ann Kamyshnikova (akamyshnikova) => Jakub Libosvar (libosvar)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1384555

Title:
  SQL error during alembic.migration when populating Neutron database on
  MariaDB 10.0

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in “neutron” package in Ubuntu:
  New

Bug description:
  On a fresh installation of Juno, it seems that the database is
  not being populated correctly. This is the output
  of the log (I also demonstrated the DB had no tables to begin with):

  MariaDB [(none)] use neutron
  Database changed
  MariaDB [neutron] show tables;
  Empty set (0.00 sec)

  MariaDB [neutron] quit
  Bye
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini current
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  Current revision for mysql://neutron:X@10.10.10.1/neutron: None
  root@vm-1:~# neutron-db-manage --config-file /etc/neutron/neutron.conf 
--config-file /etc/neutron/plugin.ini upgrade head
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will assume non-transactional DDL.
  INFO  [alembic.migration] Running upgrade None - havana, havana_initial
  INFO  [alembic.migration] Running upgrade havana - e197124d4b9, add unique 
constraint to members
  INFO  [alembic.migration] Running upgrade e197124d4b9 - 1fcfc149aca4, Add a 
unique constraint on (agent_type, host) columns to prevent a race condition 
when an agent entry is 'upserted'.
  INFO  [alembic.migration] Running upgrade 1fcfc149aca4 - 50e86cb2637a, 
nsx_mappings
  INFO  [alembic.migration] Running upgrade 50e86cb2637a - 1421183d533f, NSX 
DHCP/metadata support
  INFO  [alembic.migration] Running upgrade 1421183d533f - 3d3cb89d84ee, 
nsx_switch_mappings
  INFO  [alembic.migration] Running upgrade 3d3cb89d84ee - 4ca36cfc898c, 
nsx_router_mappings
  INFO  [alembic.migration] Running upgrade 4ca36cfc898c - 27cc183af192, 
ml2_vnic_type
  INFO  [alembic.migration] Running upgrade 27cc183af192 - 50d5ba354c23, ml2 
binding:vif_details
  INFO  [alembic.migration] Running upgrade 50d5ba354c23 - 157a5d299379, ml2 
binding:profile
  INFO  [alembic.migration] Running upgrade 157a5d299379 - 3d2585038b95, 
VMware NSX rebranding
  INFO  [alembic.migration] Running upgrade 3d2585038b95 - abc88c33f74f, lb 
stats
  INFO  [alembic.migration] Running upgrade abc88c33f74f - 1b2580001654, 
nsx_sec_group_mapping
  INFO  [alembic.migration] Running upgrade 1b2580001654 - e766b19a3bb, 
nuage_initial
  INFO  [alembic.migration] Running upgrade e766b19a3bb - 2eeaf963a447, 
floatingip_status
  INFO  [alembic.migration] Running upgrade 2eeaf963a447 - 492a106273f8, 
Brocade ML2 Mech. Driver
  INFO  [alembic.migration] Running upgrade 492a106273f8 - 24c7ea5160d7, Cisco 
CSR VPNaaS
  INFO  [alembic.migration] Running upgrade 24c7ea5160d7 - 81c553f3776c, 
bsn_consistencyhashes
  INFO  [alembic.migration] Running upgrade 81c553f3776c - 117643811bca, nec: 
delete old ofc mapping tables
  INFO  [alembic.migration] Running upgrade 117643811bca - 19180cf98af6, 
nsx_gw_devices
  INFO  [alembic.migration] Running upgrade 19180cf98af6 - 33dd0a9fa487, 
embrane_lbaas_driver
  INFO  [alembic.migration] Running upgrade 33dd0a9fa487 - 2447ad0e9585, Add 
IPv6 Subnet properties
  INFO  [alembic.migration] Running upgrade 2447ad0e9585 - 538732fa21e1, NEC 
Rename quantum_id to neutron_id
  INFO  [alembic.migration] Running upgrade 538732fa21e1 - 5ac1c354a051, n1kv 
segment allocs for cisco n1kv plugin
  INFO  [alembic.migration] Running upgrade 5ac1c354a051 - icehouse, icehouse
  INFO  [alembic.migration] Running upgrade icehouse - 54f7549a0e5f, 
set_not_null_peer_address
  INFO  [alembic.migration] Running upgrade 54f7549a0e5f - 1e5dd1d09b22, 
set_not_null_fields_lb_stats
  INFO  [alembic.migration] Running upgrade 1e5dd1d09b22 - b65aa907aec, 
set_length_of_protocol_field
  INFO  [alembic.migration] Running upgrade b65aa907aec - 33c3db036fe4, 
set_length_of_description_field_metering
  INFO  [alembic.migration] Running upgrade 33c3db036fe4 - 4eca4a84f08a, 
Remove ML2 Cisco Credentials DB
  INFO  [alembic.migration] Running upgrade 4eca4a84f08a - d06e871c0d5, 
set_admin_state_up_not_null_ml2
  INFO  [alembic.migration] Running upgrade d06e871c0d5 - 6be312499f9, 
set_not_null_vlan_id_cisco
  INFO  [alembic.migration] Running upgrade 6be312499f9 - 1b837a7125a9, Cisco 
APIC Mechanism Driver
  INFO  [alembic.migration] Running upgrade 1b837a7125a9 - 10cd28e692e9, 
nuage_extraroute
  INFO  [alembic.migration] Running upgrade 10cd28e692e9 - 2db5203cb7a9, 
nuage_floatingip
  

[Yahoo-eng-team] [Bug 1388132] [NEW] [compute] Ceph client key missing in libvirt apparmor profile

2014-10-31 Thread Dr. Jens Rosenboom
Public bug reported:

This happens when booting an instance while nova has ceph backend
enabled:

Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770442] type=1400 
audit(1414764419.818:29): apparmor=DENIED operation=open 
profile=libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c name=/tmp/ pid=25660 
comm=qemu-system-x86 requested_mask=r denied_mask=r fsuid=112 ouid=0
Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770454] type=1400 
audit(1414764419.818:30): apparmor=DENIED operation=open 
profile=libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c name=/var/tmp/ 
pid=25660 comm=qemu-system-x86 requested_mask=r denied_mask=r fsuid=112 
ouid=0
Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.776679] type=1400 
audit(1414764419.826:31): apparmor=DENIED operation=open 
profile=libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c 
name=/etc/ceph/ceph.client.cindy.keyring pid=25660 comm=qemu-system-x86 
requested_mask=r denied_mask=r fsuid=112 ouid=1000

The keyring should not be used at all, since the secret is defined as
a virsh secret.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388132

Title:
  [compute] Ceph client key missing in libvirt apparmor profile

Status in OpenStack Compute (Nova):
  New

Bug description:
  This happens when booting an instance while nova has ceph backend
  enabled:

  Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770442] type=1400 
audit(1414764419.818:29): apparmor=DENIED operation=open 
profile=libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c name=/tmp/ pid=25660 
comm=qemu-system-x86 requested_mask=r denied_mask=r fsuid=112 ouid=0
  Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.770454] type=1400 
audit(1414764419.818:30): apparmor=DENIED operation=open 
profile=libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c name=/var/tmp/ 
pid=25660 comm=qemu-system-x86 requested_mask=r denied_mask=r fsuid=112 
ouid=0
  Oct 31 14:06:59 vagrant-ubuntu-trusty-64 kernel: [ 8264.776679] type=1400 
audit(1414764419.826:31): apparmor=DENIED operation=open 
profile=libvirt-1550f42a-1b8b-4db5-9458-d5b9f496cc0c 
name=/etc/ceph/ceph.client.cindy.keyring pid=25660 comm=qemu-system-x86 
requested_mask=r denied_mask=r fsuid=112 ouid=1000

  The keyring should not be used at all, since the secret is defined as
  a virsh secret.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388143] [NEW] [sahara] Quickstart guide is missing definition for SAHARA_URL

2014-10-31 Thread Dr. Jens Rosenboom
Public bug reported:

The quickstart document
http://docs.openstack.org/developer/sahara/devref/quickstart.html uses
$SAHARA_URL without defining it. From looking at what the cli client
does, it should be set to:

SAHARA_URL=http://127.0.0.1:8386/v1.0/$TENANT_ID

and the installation for me seems to work after that.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388143

Title:
  [sahara] Quickstart guide is missing definition for SAHARA_URL

Status in OpenStack Compute (Nova):
  New

Bug description:
  The quickstart document
  http://docs.openstack.org/developer/sahara/devref/quickstart.html uses
  $SAHARA_URL without defining it. From looking at what the cli client
  does, it should be set to:

  SAHARA_URL=http://127.0.0.1:8386/v1.0/$TENANT_ID

  and the installation for me seems to work after that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388143/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1383433] Re: please package the cloud-init ci-tool

2014-10-31 Thread James Hunt
** Also affects: cloud-init (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: cloud-init (Ubuntu)
 Assignee: (unassigned) => James Hunt (jamesodhunt)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1383433

Title:
  please package the cloud-init ci-tool

Status in Init scripts for use on cloud images:
  New
Status in “cloud-init” package in Ubuntu:
  New

Bug description:
  The ci-tool [1] is a useful utility that allows a host to be
  configured to use cloud-init (amongst other things).

  It would be great if we could get this into a package, either cloud-
  init itself or maybe a cloud-init-{tools|utils} package.

  

  [1] - http://bazaar.launchpad.net/~smoser/cloud-init/ci-tool/view/head
  :/ci-tool

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1383433/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388162] [NEW] Duplicate ensure_remove_chain method in iptables_manager

2014-10-31 Thread Elena Ezhova
Public bug reported:

ensure_remove_chain method in iptables_manager duplicates remove_chain
method and should be removed.

** Affects: neutron
 Importance: Undecided
 Assignee: Elena Ezhova (eezhova)
 Status: New


** Tags: low-hanging-fruit

** Changed in: neutron
 Assignee: (unassigned) => Elena Ezhova (eezhova)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388162

Title:
  Duplicate ensure_remove_chain method in iptables_manager

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  ensure_remove_chain method in iptables_manager duplicates remove_chain
  method and should be removed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388162/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388213] [NEW] Possible to crash nova compute node via deletion of a resizing instance (timing bug)

2014-10-31 Thread Patrick Crews
Public bug reported:

NOTE: tests were run against a devstack install on Ubuntu 14.04.
Tests were conducted via randomized testing; no definitive test case has been
produced yet, but I have repeated this several times.

It appears that deleting a resizing instance can cause the nova compute node to 
crash with the following error / traceback:
(more detailed output below)
screen-n-cpu.2014-10-31-08.log:354289:2014-10-31 11:01:58.093 TRACE 
oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 896, in createWithFlags
screen-n-cpu.2014-10-31-08.log:354290:2014-10-31 11:01:58.093 TRACE 
oslo.messaging.rpc.dispatcher if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
screen-n-cpu.2014-10-31-08.log:354291:2014-10-31 11:01:58.093 TRACE 
oslo.messaging.rpc.dispatcher libvirtError: Domain not found: no domain with 
matching uuid '2aadc976-951e-47d6-bb20-9e071a6a89a9' (instance-0360)

When this is triggered, the compute node will crash and all instances will end
up in ERROR state.
I have not perfected the timing parameters, but will provide output from two
runs below.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1388213

Title:
  Possible to crash nova compute node via deletion of a resizing
  instance (timing bug)

Status in OpenStack Compute (Nova):
  New

Bug description:
  NOTE: tests were run against a devstack install on Ubuntu 14.04.
  Tests were conducted via randomized testing; no definitive test case has been
produced yet, but I have repeated this several times.

  It appears that deleting a resizing instance can cause the nova compute node 
to crash with the following error / traceback:
  (more detailed output below)
  screen-n-cpu.2014-10-31-08.log:354289:2014-10-31 11:01:58.093 TRACE 
oslo.messaging.rpc.dispatcher   File 
/usr/lib/python2.7/dist-packages/libvirt.py, line 896, in createWithFlags
  screen-n-cpu.2014-10-31-08.log:354290:2014-10-31 11:01:58.093 TRACE 
oslo.messaging.rpc.dispatcher if ret == -1: raise libvirtError 
('virDomainCreateWithFlags() failed', dom=self)
  screen-n-cpu.2014-10-31-08.log:354291:2014-10-31 11:01:58.093 TRACE 
oslo.messaging.rpc.dispatcher libvirtError: Domain not found: no domain with 
matching uuid '2aadc976-951e-47d6-bb20-9e071a6a89a9' (instance-0360)

  When this is triggered, the compute node will crash and all instances will
end up in ERROR state.
  I have not perfected the timing parameters, but will provide output from two
runs below.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1388213/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388230] [NEW] Checks for DB models and migrations sync not working

2014-10-31 Thread Henry Gessau
Public bug reported:

I noticed a couple of issues, which might be related.


1. db-manage revision --autogenerate on master with no code changes generates:

def upgrade():
op.drop_index('idx_autoinc_vr_id', table_name='ha_router_vrid_allocations')


2. With the following change to the IPAllocation() model, the revision is not 
detected. Also, the unit tests for model/migration sync do not give an error.

diff --git a/neutron/db/models_v2.py b/neutron/db/models_v2.py
--- a/neutron/db/models_v2.py
+++ b/neutron/db/models_v2.py
@@ -98,8 +98,8 @@ class IPAllocation(model_base.BASEV2):
 
 port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id',
  ondelete=CASCADE),
-nullable=True)
-ip_address = sa.Column(sa.String(64), nullable=False, primary_key=True)
+nullable=True, primary_key=True)
+ip_address = sa.Column(sa.String(64), nullable=False)
 subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id',
ondelete=CASCADE),
   nullable=False, primary_key=True)
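
For context, a models-vs-migrations sync check of the sort mentioned in item 2
usually boils down to alembic's autogenerate comparison; a minimal sketch is
below (the connection URL is illustrative, and neutron's own test wires this
up differently).

from alembic.autogenerate import compare_metadata
from alembic.migration import MigrationContext
from sqlalchemy import create_engine

from neutron.db import model_base   # declarative base that holds the models

engine = create_engine('mysql://neutron:secret@127.0.0.1/neutron')  # example URL
with engine.connect() as conn:
    ctx = MigrationContext.configure(conn)
    diff = compare_metadata(ctx, model_base.BASEV2.metadata)

# An empty diff means models and migrations agree; the report is that real
# differences (like the primary-key change above) do not show up here.
print(diff)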

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: db

** Tags added: db

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1388230

Title:
  Checks for DB models and migrations sync not working

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  I noticed a couple of issues, which might be related.

  
  1. db-manage revision --autogenerate on master with no code changes 
generates:

  def upgrade():
  op.drop_index('idx_autoinc_vr_id', 
table_name='ha_router_vrid_allocations')

  
  2. With the following change to the IPAllocation() model, the revision is not 
detected. Also, the unit tests for model/migration sync do not give an error.

  diff --git a/neutron/db/models_v2.py b/neutron/db/models_v2.py
  --- a/neutron/db/models_v2.py
  +++ b/neutron/db/models_v2.py
  @@ -98,8 +98,8 @@ class IPAllocation(model_base.BASEV2):
   
   port_id = sa.Column(sa.String(36), sa.ForeignKey('ports.id',
ondelete=CASCADE),
  -nullable=True)
  -ip_address = sa.Column(sa.String(64), nullable=False, primary_key=True)
  +nullable=True, primary_key=True)
  +ip_address = sa.Column(sa.String(64), nullable=False)
   subnet_id = sa.Column(sa.String(36), sa.ForeignKey('subnets.id',
  ondelete=CASCADE),
 nullable=False, primary_key=True)

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1388230/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388237] [NEW] cannot import name fields after update from icehouse

2014-10-31 Thread thomas
Public bug reported:

Hi, I have an issue when upgrading from icehouse to juno.
I got a 500 for every request with that trace :
https://gist.github.com/ttwthomas/35b155ba0f7781c4e98e

** Affects: horizon
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1388237

Title:
  cannot import name fields after update from icehouse

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Hi, I have an issue when upgrading from icehouse to juno.
  I got a 500 for every request with that trace :
  https://gist.github.com/ttwthomas/35b155ba0f7781c4e98e

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1388237/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1388305] [NEW] When using DVR, port list for floating IP is empty

2014-10-31 Thread Daniel Gauthier
Public bug reported:


The port list doesn't get updated correctly when using DVR.

See https://ask.openstack.org/en/question/51634/juno-dvr-associate-
floating-ip-reported-no-ports-available/ for details.

** Affects: horizon
 Importance: Undecided
 Status: New


** Tags: dvr

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1388305

Title:
  When using DVR, port list for floating IP is empty

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  
  The port list doesn't get updated correctly when using DVR.

  See https://ask.openstack.org/en/question/51634/juno-dvr-associate-
  floating-ip-reported-no-ports-available/ for details.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1388305/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1354354] Re: No network after live-migration

2014-10-31 Thread Launchpad Bug Tracker
[Expired for neutron because there has been no activity for 60 days.]

** Changed in: neutron
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1354354

Title:
  No network after live-migration

Status in OpenStack Neutron (virtual network service):
  Expired
Status in OpenStack Compute (Nova):
  Incomplete

Bug description:
  During live-migration, port-update is sent to neutron after plug_vifs is
executed. In a setup with the neutron ml2 plugin where two nodes require
different VIF_TYPEs, migrating a VM from one node to the other will result in
the VM having no network connectivity.
  The vif bindings should be updated before plug_vifs is called.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1354354/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 884451] Re: End User Has No Forgot Password Option

2014-10-31 Thread Launchpad Bug Tracker
[Expired for OpenStack Dashboard (Horizon) because there has been no
activity for 60 days.]

** Changed in: horizon
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/884451

Title:
  End User Has No Forgot Password Option

Status in Django OpenStack Auth:
  Incomplete
Status in OpenStack Dashboard (Horizon):
  Expired
Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  Related to blueprint: improve-user-experience.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/884451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 884451] Re: End User Has No Forgot Password Option

2014-10-31 Thread Launchpad Bug Tracker
[Expired for Keystone because there has been no activity for 60 days.]

** Changed in: keystone
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/884451

Title:
  End User Has No Forgot Password Option

Status in Django OpenStack Auth:
  Incomplete
Status in OpenStack Dashboard (Horizon):
  Expired
Status in OpenStack Identity (Keystone):
  Expired

Bug description:
  Related to blueprint: improve-user-experience.

To manage notifications about this bug go to:
https://bugs.launchpad.net/django-openstack-auth/+bug/884451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp