Hello,

Which version of Ceph are you running? Are all of your OSDs currently
up+in? If the cluster is HEALTH_OK and all OSDs are up, snaptrim should
work through the removed_snaps_queue and clear it over time, but I have
seen cases where it appears to get stuck, and restarting the affected
OSDs can help.
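
For reference, a rough checklist along these lines (assuming a recent
release, Octopus or later, where the queue shows up in the pool detail)
should tell you whether snaptrim is actually making progress:

    # Length of the removed_snaps_queue for each pool
    ceph osd pool ls detail

    # Overall health and OSD up/in counts
    ceph status
    ceph osd stat

    # Any PGs currently in snaptrim / snaptrim_wait?
    ceph pg dump pgs_brief | grep -c snaptrim

    # Is trimming throttled or disabled?
    ceph config get osd osd_snap_trim_sleep
    ceph osd dump | grep flags    # look for nosnaptrim

If nothing is trimming, restarting the OSDs that back the pool
(systemctl restart ceph-osd@<id>, or "ceph orch daemon restart osd.<id>"
on a cephadm deployment) is what has worked for me. Toggling the flag
with "ceph osd set nosnaptrim" and then "ceph osd unset nosnaptrim" is
another workaround I've seen used to re-kick trimming, but treat both as
workarounds rather than guaranteed fixes.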

Josh

On Wed, Feb 7, 2024 at 12:01 PM localhost Liam <imluy...@gmail.com> wrote:
>
> Hello,
>
> I'm encountering an issue with Ceph when using it as the backend storage
> for OpenStack Cinder. Specifically, after deleting RBD snapshots through
> Cinder, I've noticed a significant number of entries accumulating in the
> removed_snaps_queue of the corresponding pool, which seems to affect the
> pool's performance and space efficiency.
>
> I understand that snapshot deletion in Cinder is an asynchronous operation, 
> and Ceph itself uses a lazy deletion mechanism to handle snapshot removal. 
> However, even after allowing sufficient time, the entries in 
> removed_snaps_queue do not decrease as expected.
>
> I have several questions for the community:
>
> 1. Are there recommended methods or best practices for managing or
>    reducing entries in removed_snaps_queue?
> 2. Is there any tool or command that can safely clear these residual
>    snapshot entries without affecting the integrity of active snapshots
>    and data?
> 3. Is this issue known, and are there any bug reports or plans for fixes
>    related to it?
>
> Thank you very much for your assistance!
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
