I've seen that happen when the removal of an RBD image or snapshot is cancelled mid-operation, especially if the images are large or the storage is relatively slow. The RBD image then stays "half removed" in the pool.

Check "rbd ls -p POOL" vs "rbd ls -l -p POOL" outputs: the first may have one or more lines in it's output. Those extra lines are the half removed images that rbd du or rbd ls -l are complaining about. Make absolutely sure that you don't need them and remove them manually with "rbd rm IMAGE -p POOL".



On 2/9/23 17:04, Mehmet wrote:
Hello Friends,

I get strange output when issuing the following command:

root@node35:~# rbd du -p cephhdd-001-mypool
NAME                              PROVISIONED  USED
...
vm-99936587-disk-0@H202302091535      400 GiB  5.2 GiB
vm-99936587-disk-0@H202302091635      400 GiB  1.2 GiB
vm-99936587-disk-0                    400 GiB  732 MiB
vm-9999104-cloudinit                    4 MiB    4 MiB
vm-9999104-disk-0                     600 GiB  586 GiB
<TOTAL>                                49 TiB   44 TiB
rbd: du failed: (2) No such file or directory
root@node35:~#

I do not know why I receive "rbd: du failed: (2) No such file or directory".

How can I find the origin of this?

My Ceph version is 17.2.3, installed with "cephadm".
The cluster is "HEALTH_OK" with 108 OSDs distributed over 3 nodes, where the mgr/mon daemons also reside.

Hope you can help
Mehmet
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
