Hi,

I accidentally terminated an instance in OpenStack that was running a
system upgrade. Its boot volume was a Cinder volume backed by a Ceph
cache tier pool.

Now I cannot delete the volume, either from Cinder or directly from Ceph.
If I try to delete it from Cinder I get "rbd volume busy. Try again in 30
sec". If I try to delete it with `rbd rm` I get "Unable to delete. Volume
still has watchers".

I can delete the Cinder volume manually from the database, but I am still
stuck with the RBD image, which remains present in both the cache tier
pool and the cold-storage pool.

Is there a way to remove the watchers from the image, or to force-delete
it while the watchers are still active?
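In case it helps, this is roughly what I have been running. The pool and image names below are placeholders for my real ones, and the sketch prints each command instead of executing it, so it can be read without a live cluster:

```shell
# Placeholders: "volumes" stands in for my actual Cinder pool, and
# Cinder names its RBD images volume-<uuid>, so IMAGE stands in for
# the real volume id. DRY_RUN=1 (the default here) prints each
# command instead of running it.
DRY_RUN=${DRY_RUN:-1}
POOL=volumes
IMAGE=volume-UUID

run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# Show the image's current watchers (rbd status reports them on
# newer releases):
run rbd status "$POOL/$IMAGE"

# The delete that fails with "Volume still has watchers":
run rbd rm "$POOL/$IMAGE"
```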

Thank you!
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com