Public bug reported:

After upgrading to the latest version of OpenStack Kilo, booting a new
instance from an image fails with the error messages below. Nova does not
create the RBD volume for the Glance image in the Ceph "vms" pool; the system
worked correctly before the upgrade. Please see the attached file for details.
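For context, Nova's RBD backend stores each instance's root disk under the name `<instance_uuid>_disk`, which is the name rbd_utils reports as missing in the log excerpt below. A minimal sketch of that naming, for checking the pool by hand (the `instance_disk_name` helper is illustrative, not Nova code, and the "vms" pool name is an assumption taken from this report):

```python
# Illustrative sketch of the RBD image name Nova's libvirt driver looks up
# for an instance's root disk (the "vms" pool is an assumption from this
# report, not something Nova hardcodes).
def instance_disk_name(instance_uuid):
    """Return the RBD image name Nova expects for an instance's root disk."""
    return "%s_disk" % instance_uuid

# The image reported missing in the DEBUG lines below:
print(instance_disk_name("352eb808-f7c3-405d-92b4-c29bf07a46f0"))

# Manual check on a compute node with a working Ceph keyring (shell):
#   rbd -p vms ls | grep 352eb808-f7c3-405d-92b4-c29bf07a46f0_disk
```

If that `rbd ls` check comes back empty after a boot attempt, Nova never completed the clone from the Glance image, which matches the "does not exist" messages below.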
===============
2015-12-18 22:07:56.590 1620300 DEBUG nova.virt.libvirt.rbd_utils [req-eab9562d-484f-432d-a9d7-0ab32ec99c23 0690b97b2bea47a8aeb4777856f473b2 767b2f8a8b08403884269836420d34f0 - - -] rbd image 352eb808-f7c3-405d-92b4-c29bf07a46f0_disk does not exist __init__ /usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py:60
2015-12-18 22:07:56.641 1620300 DEBUG nova.virt.libvirt.rbd_utils [req-eab9562d-484f-432d-a9d7-0ab32ec99c23 0690b97b2bea47a8aeb4777856f473b2 767b2f8a8b08403884269836420d34f0 - - -] rbd image 352eb808-f7c3-405d-92b4-c29bf07a46f0_disk does not exist __init__ /usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py:60
2015-12-18 22:15:12.033 1620300 DEBUG nova.virt.libvirt.rbd_utils [req-c07f3a3b-82de-41b5-9f91-e52a2295b0fa 0690b97b2bea47a8aeb4777856f473b2 767b2f8a8b08403884269836420d34f0 - - -] rbd image 40df5d10-6af1-49ea-bb5f-37a5e6ffa053_disk does not exist __init__ /usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py:60
2015-12-18 22:15:12.093 1620300 DEBUG nova.virt.libvirt.rbd_utils [req-c07f3a3b-82de-41b5-9f91-e52a2295b0fa 0690b97b2bea47a8aeb4777856f473b2 767b2f8a8b08403884269836420d34f0 - - -] rbd image 40df5d10-6af1-49ea-bb5f-37a5e6ffa053_disk does not exist __init__ /usr/lib/python2.7/dist-packages/nova/virt/libvirt/rbd_utils.py:60
#########################

2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     do_log=False, semaphores=semaphores, delay=delay):
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     return self.gen.next()
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 395, in lock
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     ext_lock.acquire(delay=delay)
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 209, in acquire
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     do_acquire()
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 158, in wrapper
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     return r.call(func, *args, **kwargs)
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/dist-packages/retrying.py", line 222, in call
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     if not self.should_reject(attempt):
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/dist-packages/retrying.py", line 206, in should_reject
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     reject |= self._retry_on_exception(attempt.value[1])
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 130, in retry_on_exception
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d]     'exception': e,
2015-12-18 22:27:48.308 1623512 TRACE nova.compute.manager [instance: 3bf19259-4a86-4b9e-b358-15e59470231d] error: Unable to acquire lock on `/var/lib/nova/instances/locks/nova-b1eea0695ff68d78bc25e6a8e3ed77b595fd485a` due to [Errno 37] No locks available
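Errno 37 (ENOLCK, "No locks available") means the filesystem backing /var/lib/nova/instances/locks could not grant the advisory lock at all, which commonly points at the storage under that directory (for example, an NFS mount whose lock daemon is not running) rather than at Nova itself. A minimal sketch to check lock support, assuming only that the directory is writable (the `check_lock_support` helper is illustrative, not Nova code):

```python
# Hypothetical diagnostic (not part of Nova): try to take the same kind of
# advisory file lock that oslo.concurrency takes under the Nova lock
# directory. On a filesystem that cannot grant POSIX locks, fcntl raises
# an error with errno 37 (ENOLCK), matching the traceback above.
import errno
import fcntl
import os
import tempfile

def check_lock_support(lock_dir):
    """Return True if an fcntl advisory lock can be taken inside lock_dir."""
    path = os.path.join(lock_dir, "locktest")
    f = open(path, "w")
    try:
        fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        fcntl.lockf(f, fcntl.LOCK_UN)
        return True
    except (IOError, OSError) as e:
        if e.errno == errno.ENOLCK:  # [Errno 37] No locks available
            return False
        raise
    finally:
        f.close()
        os.remove(path)

# Run against a scratch directory here; on the compute node, point it at
# /var/lib/nova/instances/locks to test the real filesystem instead.
print(check_lock_support(tempfile.mkdtemp()))
```

If this returns False for the instances directory, the lock failure (and therefore the missing RBD clone) is a storage/mount problem rather than a Nova regression.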

** Affects: nova
     Importance: Undecided
         Status: New

** Affects: nova (Ubuntu)
     Importance: Undecided
         Status: New


** Tags: ceph kilo nova

** Attachment added: "Full log"
   
https://bugs.launchpad.net/bugs/1527661/+attachment/4536918/+files/nova-compute.log.gz

** Tags added: kilo

** Also affects: nova (Ubuntu)
   Importance: Undecided
       Status: New

** Summary changed:

- Boot instance from image with ceph failed after udate lastest kilo version
+ Create new instance from image with ceph failed after update from kilo to
latest kilo version

-- 
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to nova in Ubuntu.
https://bugs.launchpad.net/bugs/1527661

Title:
  Create new instance from image with ceph failed after update from kilo
  to latest kilo version

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1527661/+subscriptions

