Make sure the volume "volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842" does not
have a snapshot or clone linked to it; that can sometimes cause problems
during deletion.
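
A quick way to check, as a sketch (pool and image name taken from the output
below; <snap-name> is a placeholder):

    # list any snapshots on the image
    rbd snap ls glance/volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842

    # for each snapshot found, list any clones that depend on it
    rbd children glance/volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842@<snap-name>

    # if no clones depend on it, unprotect (if needed) and purge the snapshots
    rbd snap unprotect glance/volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842@<snap-name>
    rbd snap purge glance/volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842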


- Karan Singh

On 07 Aug 2014, at 08:55, 杨万元 <[email protected]> wrote:

> Hi all:
>     We use Ceph RBD with OpenStack. Recently some dirty data appeared in my
> cinder-volume database, such as volumes stuck in the error-deleting status,
> so we need to delete these volumes manually.
>     But when I delete the volume on the Ceph node, Ceph gives me this error:
> 
>       [root@ceph-node3 ~]# rbd -p glance rm volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842
>         Removing image: 99% complete...failed.
>         rbd: error: image still has watchers
>         This means the image is still open or the client using it crashed.
>         Try again after closing/unmapping it or waiting 30s for the crashed
>         client to timeout.
>         2014-08-07 11:25:42.793275 7faf8c58b760 -1 librbd: error removing
>         header: (16) Device or resource busy
> 
> 
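The "closing/unmapping" suggestion in that error applies when the watcher is
a kernel-mapped device. As a sketch, on the client holding the watch (the
/dev/rbd1 path is a placeholder):

    # list kernel-mapped rbd devices on this host
    rbd showmapped

    # unmap the matching device so its watch is released
    rbd unmap /dev/rbd1
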
>    I googled this problem and found this:
> http://comments.gmane.org/gmane.comp.file-systems.ceph.user/9767
>    I followed it and got this:
>     
>      [root@ceph-node3 ~]# rbd info -p glance volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842
>         rbd image 'volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842':
>         size 51200 MB in 12800 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.3b1464f8e96d5
>         format: 2
>         features: layering
>      [root@ceph-node3 ~]# rados -p glance listwatchers rbd_header.3b1464f8e96d5
>         watcher=192.168.39.116:0/1032797 client.252302 cookie=1
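
On the compute node, a watch like this is often held by a qemu process that
still references the volume. One way to look for it, as a sketch (<domain>
is a placeholder):

    # on 192.168.39.116: look for a process still referencing the volume
    ps aux | grep volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842

    # or inspect each libvirt domain's disk definitions for the rbd image
    virsh list --all
    virsh dumpxml <domain> | grep volume-17d9397b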
> 
>   192.168.39.116 is my nova compute node, so I can't reboot this server.
>   What can I do to delete this volume without rebooting my compute node?
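
If nothing on the compute node can safely be stopped, one possible workaround
is to blacklist the specific watcher so its watch lapses, then retry the
delete. A sketch, using the exact address from the listwatchers output above;
note that blacklisting cuts off that client instance's access to the cluster,
so use it with care:

    # blacklist the exact client instance holding the watch
    ceph osd blacklist add 192.168.39.116:0/1032797

    # retry the removal
    rbd -p glance rm volume-17d9397b-d6e5-45e0-80fa-4bc7b7998842

    # clean up the blacklist entry afterwards
    ceph osd blacklist rm 192.168.39.116:0/1032797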
> 
>   My Ceph version is 0.72.1.
> 
>   Thanks very much!

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
