I managed to delete it after running "rados -p cache-ssd listwatchers
rbd_header.xxxx". It was one of the monitors that was watching and keeping
the image busy. Any light on why the monitor kept the image busy? I would
like to know!
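In case it helps anyone else hitting the same "image has watchers" error, the general sequence looks roughly like this. The pool name, image id, image name, and the watcher address below are placeholders from my own notes, not values anyone should copy verbatim:

```
# List the watchers on the stuck image's header object
# (substitute your pool and the image's real id):
rados -p cache-ssd listwatchers rbd_header.xxxx

# Each watcher line includes a client address, e.g.
#   watcher=10.0.0.5:0/123456789 client.4567 cookie=1
# Blacklist that address so its watch expires
# (the address here is a made-up example):
ceph osd blacklist add 10.0.0.5:0/123456789

# Once the watch has timed out, the image can be removed:
rbd -p cache-ssd rm <image-name>
```

Note that blacklisting cuts off that client entirely until the blacklist entry expires, so make sure the watcher really is stale before doing this.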

Thanks!

2016-02-01 14:32 GMT+02:00 mcapsali <[email protected]>:

> Hi,
>
> I accidentally terminated an instance in OpenStack that was running a
> system upgrade. Its boot volume was a Cinder volume backed by a Ceph cache
> tier pool.
>
> Now I cannot delete the volume, either from Cinder or directly from Ceph.
> If I try to delete it from Cinder I get "rbd volume busy. Try again in 30
> sec". If I try to delete it with rbd rm I get "Unable to delete. Volume
> still has watchers".
>
> I can delete the Cinder volume manually from the database, but I get stuck
> with the rbd volume that is present on both the cache tier pool and the
> cold-storage pool.
>
> Is there a way to remove the watchers from the image, or to force delete
> it while the watchers are active?
>
> Thank you!
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com