zap51 commented on issue #8211:
URL: https://github.com/apache/cloudstack/issues/8211#issuecomment-1891629923

   Hi @DaanHoogland @weizhouapache,
   We seem to have figured out the issue. When a large block device (e.g. 1 TiB or more) is deleted, the deletion takes a while because a large number of RADOS objects are allocated to it. This is usually observed in large clusters with millions of objects. I was able to reproduce it as follows (a rough command-line sketch follows the list).
   
   1. Create a block device of 20 TiB; Ceph allocates a few million RADOS objects to it. The block device ID is `<pool_name>/370ffe6b-a536-401a-978f-14cb2f79b10f`.
   2. The info command `# rbd info 370ffe6b-a536-401a-978f-14cb2f79b10f` works while the image is active and present.
   3. Now delete the image in CloudStack and run the same command again; it now returns
   ```
   # rbd info 370ffe6b-a536-401a-978f-14cb2f79b10f
   rbd: error opening image 370ffe6b-a536-401a-978f-14cb2f79b10f: (2) No such file or directory
   ```
   4. The log entries appear in libvirtd because the CloudStack agent frequently asks libvirtd to refresh the storage pool. According to the libvirt forums, during a refresh libvirt tries to open each RBD image to query its size (much like `rbd info`), and for the deleted image this fails with `(2) No such file or directory` (see the second sketch below).
   5. This is expected in large clusters and in clusters where slow deletion of objects is configured.
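
   For anyone who wants to see the Ceph side outside of CloudStack, here is a minimal sketch. It assumes a pool named `cloudstack` and admin access to the cluster; the image name is just the example UUID from above, and on a small test cluster the removal may finish too quickly to observe the timing described in step 3.

   ```
   # Create a large image so that many RADOS objects can be allocated to it.
   # (RBD is thin-provisioned, so objects only exist once data has been written,
   # e.g. by a template copy or guest I/O; an empty image deletes almost instantly.)
   rbd create cloudstack/370ffe6b-a536-401a-978f-14cb2f79b10f --size 20T

   # Works while the image exists
   rbd info cloudstack/370ffe6b-a536-401a-978f-14cb2f79b10f

   # Remove the image (CloudStack does the equivalent when the volume is expunged);
   # on a large cluster the underlying object deletion takes a long time
   rbd rm cloudstack/370ffe6b-a536-401a-978f-14cb2f79b10f

   # Once the image has been deleted, any attempt to open it fails the same way:
   rbd info cloudstack/370ffe6b-a536-401a-978f-14cb2f79b10f
   # rbd: error opening image 370ffe6b-a536-401a-978f-14cb2f79b10f: (2) No such file or directory
   ```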
   
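   On the libvirt side, the same warning can be triggered manually by refreshing the storage pool around such a deletion, which is what the CloudStack agent does on its own schedule. A rough sketch, assuming the RBD primary storage pool is already defined in libvirt (substitute your own pool name or UUID):

   ```
   # List the pools libvirt knows about and refresh the RBD-backed one
   virsh pool-list --all
   virsh pool-refresh <pool-name-or-uuid>

   # Follow the libvirtd log during the refresh; entries like the one in step 3
   # correspond to librbd failing to open the image that was just deleted
   journalctl -u libvirtd -f
   ```

   Since the refresh only re-enumerates the images and queries their sizes, the failure has no effect beyond the log noise.
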
   These warnings can be safely ignored. Thanks to the libvirt, Ceph & CloudStack communities.

