Then it's probably something different. Does that happen with every volume/image or just this one time?

Quoting 徐蕴 <yu...@me.com>:

Hi Eugen,

Thank you for sharing your experience. I will dig into the OpenStack Cinder logs to check whether something happened. The strange thing is that the volume I deleted was not created from a snapshot and doesn't have any snapshots. The rbd_id.xxx, rbd_header.xxx and rbd_object_map.xxx objects were deleted; only a lot of rbd_data objects were left behind. I plan to delete those objects manually.
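Roughly what I have in mind is something like the following (just a sketch; <pool> and <prefix> are placeholders for the pool name and the deleted image's block_name_prefix, which is still visible in the rados ls output):

  # list the leftover data objects of the deleted image
  rados -p <pool> ls | grep '^rbd_data\.<prefix>\.'

  # remove them one by one, but only after double-checking that this
  # prefix belongs to the deleted image and to nothing else
  rados -p <pool> ls | grep '^rbd_data\.<prefix>\.' | xargs -n 1 rados -p <pool> rm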

br,
Xu Yun

On January 15, 2020, at 3:50 PM, Eugen Block <ebl...@nde.ag> wrote:

Hi,

this might happen if you try to delete images/instances/volumes in OpenStack that are somehow linked, e.g. if there are snapshots or clones. I have experienced this in Ocata, too: deleting a base image appeared to work, but there were existing clones, so basically only the OpenStack database was updated and the base image still existed within Ceph.
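If you want to verify that, something along these lines should show whether an image still has snapshots or clones depending on it (a sketch; the pool, image and snapshot names are placeholders):

  # list snapshots of the base image
  rbd snap ls <pool>/<image>

  # list clones of a given snapshot
  rbd children <pool>/<image>@<snap>

  # a clone also reports its parent in its info output
  rbd info <pool>/<clone>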

Try to figure out whether that is the case for you as well. If it's something else, check the logs in your OpenStack environment; maybe they reveal something. Also check the Ceph logs.
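For example (the log path is just an assumption, it depends on your deployment), grepping for the volume ID on the cinder-volume host often shows what actually happened during the delete; the Ceph logs live under /var/log/ceph/ on the MON/OSD hosts:

  # OpenStack side, path varies by distribution/deployment
  grep <volume-id> /var/log/cinder/volume.log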

Regards,
Eugen


Quoting 徐蕴 <yu...@me.com>:

Hello,

My setup is Ceph Pike working with OpenStack. When I deleted an image, I found that the space was not reclaimed. I checked with rbd ls and confirmed that the image has disappeared. But when I check the objects with rados ls, most objects named rbd_data.xxx still exist in my cluster. The rbd_object_map and rbd_header objects were already deleted. I waited for several hours and no further deletion happened. Is this a known issue, or is something wrong with my configuration?
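For reference, these are the checks I ran (with <pool> standing in for the actual pool name):

  # the image is no longer listed
  rbd ls -p <pool>

  # but a large number of its data objects are still there
  rados -p <pool> ls | grep '^rbd_data\.' | wc -l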

br,
Xu Yun

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
