Hi

For huge volumes in OpenStack and Ceph, set this parameter in your Cinder configuration:

volume_clear_size = 50 

That will wipe only the first 50 MB of the volume and then ask Ceph to delete it, instead of overwriting the whole disk with zeros, which on huge volumes sometimes causes timeouts.

In our deployment that was the solution (OpenStack Queens here); a sketch of the config is below.
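
As a minimal sketch of where this goes, assuming the option lives in the [DEFAULT] section (or in your RBD backend section) of /etc/cinder/cinder.conf:

    [DEFAULT]
    # volume_clear_size is in MiB; 0 means wipe the entire volume
    volume_clear_size = 50

After changing it, restart the cinder-volume service so the new value takes effect.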


-----Original Message-----
From: Eugen Block <ebl...@nde.ag>
Sent: Wednesday, January 15, 2020 8:51
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Objects not removed (completely) when removing a rbd
image

Hi,

this might happen if you try to delete images/instances/volumes in OpenStack
that are somehow linked, e.g. if there are snapshots etc. I have experienced
this in Ocata, too. Deleting a base image appeared to work, but there were
existing clones, so basically just the OpenStack database was updated while the
base image still existed within Ceph.

Try to figure out if that is also the case. If it's something else, check the
logs in your OpenStack environment; maybe they reveal something. Also check the
Ceph logs.
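
As a rough sketch (pool, image, and snapshot names are placeholders), these rbd
commands show whether snapshots, clones, or watchers are still keeping an image
alive:

    rbd snap ls <pool>/<image>            # list snapshots of the image
    rbd children <pool>/<image>@<snap>    # list clones based on a snapshot
    rbd status <pool>/<image>             # show clients still watching the image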

Regards,
Eugen


Quoting 徐蕴 <yu...@me.com>:

> Hello,
>
> My setup is Ceph Pike working with OpenStack. When I deleted an image, 
> I found that the space was not reclaimed. I checked with rbd ls and 
> confirmed that this image had disappeared. But when I checked the 
> objects with rados ls, most objects named rbd_data.xxx still 
> existed in my cluster. rbd_object_map and rbd_header were already 
> deleted. I waited for several hours and there was no further deletion. 
> Is it a known issue, or something wrong with my configuration?
>
> br,
> Xu Yun


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
