Hi Ansgar,

Since Squid (v19) and PR #54152 [1], RGW lifecycle supports the
'NoncurrentDays' and 'NewerNoncurrentVersions' settings on
NoncurrentVersionExpiration rules, which let you expire all but the newest
N noncurrent versions of a key. Would that work for your use case?
You can find examples of how to use these settings here [2].
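
In case it helps, here is a minimal sketch of how such a rule could be
applied with boto3 (the endpoint, credentials, bucket name and retention
values below are placeholders, adjust them to your setup):

import boto3

# Point boto3 at your RGW endpoint (placeholder values).
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Keep at most 5 noncurrent versions per key; anything beyond that
# is expired once it has been noncurrent for at least 1 day.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-versioned-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "limit-noncurrent-versions",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 1,
                    "NewerNoncurrentVersions": 5,
                },
            }
        ]
    },
)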

Note that the same PR also added support for the ObjectSizeGreaterThan and
ObjectSizeLessThan filters to transition objects based on their size.
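
For size-based rules, you would add another entry to the 'Rules' list
above. Again just a sketch; the threshold and storage class name are made
up and the storage class must exist as a placement target in your zone:

# Transition objects larger than 100 MiB to another storage class
# after 30 days (ObjectSizeGreaterThan is in bytes).
size_rule = {
    "ID": "transition-large-objects",
    "Filter": {"ObjectSizeGreaterThan": 100 * 1024 * 1024},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "COLD"},
    ],
}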

Best regards,
Frédéric Nass

[1] https://github.com/ceph/ceph/pull/54152
[2]
https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configuration-examples.html#lifecycle-config-conceptual-ex6

--
Frédéric Nass
Ceph Ambassador France | Senior Ceph Engineer @ CLYSO

Squishing Squids - A Ceph Compression Guide
<https://www.eventbrite.com/e/squishing-squids-a-ceph-compression-guide-tickets-1981347673227>,
February 25th, 9am PST.
https://clyso.com | [email protected]


On Wed, Feb 4, 2026 at 1:23 PM Ansgar Jazdzewski via ceph-users <
[email protected]> wrote:

> Hi Cephers,
>
> we are running Ceph RGW with bucket versioning enabled (currently on
> Reef, but we have also observed the same behavior on other releases).
>
> We are seeing an operational issue where a single object key can
> accumulate extremely large numbers of versions (hundreds of thousands
> up to millions). Because all versions of the same key are stored on
> the same bucket index shard, this results in very large bucket-index
> entries / omap growth and shard hotspots.
>
> In our case, one object with millions of versions caused significant
> bucket index pressure and related performance/maintenance concerns.
>
> As far as we can tell, RGW currently does not provide a way to enforce
> a limit such as “keep only the last N versions per object key,” and
> existing quotas only apply at bucket/user level, not per object key.
>
> We wanted to ask:
> * Have others seen similar cases with very high version counts for
> individual keys?
> How do you handle or mitigate this operationally today?
>
> Any advice or shared experience would be very appreciated.
>
> Thank you,
> Ansgar Jazdzewski
> _______________________________________________
> ceph-users mailing list -- [email protected]
> To unsubscribe send an email to [email protected]
>
