Hi Ansgar,

You're absolutely right; it makes sense to protect the infrastructure as a whole rather than relying on users to properly manage their bucket's data.
This concern was discussed at Cephalocon '24 [1]. A tracker [2] had been created a couple of months before the conference to implement this feature, and the last two comments show that your message is getting some attention. A slightly different tracker that Sage created in 2011 is also worth mentioning [3].

Hopefully, this will be implemented soon.

Cheers,
Frédéric.

[1] https://pad.ceph.com/p/cds24-powerusers#L128
[2] https://tracker.ceph.com/issues/68066
[3] https://tracker.ceph.com/issues/1424

--
Frédéric Nass
Ceph Ambassador France | Senior Ceph Engineer @ CLYSO

Squishing Squids - A Ceph Compression Guide <https://www.eventbrite.com/e/squishing-squids-a-ceph-compression-guide-tickets-1981347673227>, February 25th, 9am PST.
https://clyso.com | [email protected]
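For anyone who wants to try the lifecycle approach discussed further down the thread, here is a minimal sketch of a rule that keeps only the last N noncurrent versions of each key, using boto3 against the RGW S3 API (the endpoint, credentials, bucket name, and limits below are all illustrative placeholders):

import boto3

# Connect to the RGW S3 endpoint (URL and credentials are placeholders).
s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Keep at most 10 noncurrent versions per key and expire older ones 30 days
# after they become noncurrent. 'NewerNoncurrentVersions' needs Squid (v19)
# or later on the RGW side (PR #54152).
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "limit-noncurrent-versions",
                "Filter": {"Prefix": ""},  # match every key in the bucket
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": 30,
                    "NewerNoncurrentVersions": 10,
                },
            }
        ]
    },
)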
On Wed, Feb 4, 2026 at 18:53, Ansgar Jazdzewski via ceph-users <[email protected]> wrote:
> Hello Frédéric,
>
> Thank you, that is very helpful information.
>
> Support for NewerNoncurrentVersions in Squid lifecycle definitely
> addresses the issue for customers who configure lifecycle rules
> properly.
>
> However, as an object storage service provider, our concern is more
> about cluster protection against unbounded version growth in general:
>
> * lifecycle policies are optional and customer-controlled
> * a tenant can enable versioning and never configure expiration
> * a single hot key can still accumulate millions of versions and
>   create a bucket-index shard hotspot
>
> So while Squid lifecycle makes it possible for customers to
> self-mitigate, we still see a gap in terms of "provider-side
> guardrails" (e.g., a hard or automatic limit on versions per key).
>
> Do you know if there is any discussion about enforcing such limits
> globally, or at least providing admin-side defaults/mandatory lifecycle
> policies?
>
> Best regards,
> Ansgar
>
> On Wed, Feb 4, 2026 at 14:23, Frédéric Nass <[email protected]> wrote:
> >
> > Hi Ansgar,
> >
> > Since Squid (v19) and PR #54152 [1], the RGW Lifecycle supports the
> > 'NoncurrentDays' and 'NewerNoncurrentVersions' filters. Would that work
> > for your use case? You can find examples of how to use these filters
> > here [2].
> >
> > Note that this PR also added support for ObjectSizeGreater(Less)Than
> > filters to transition objects based on their size.
> >
> > Best regards,
> > Frédéric Nass
> >
> > [1] https://github.com/ceph/ceph/pull/54152
> > [2] https://docs.aws.amazon.com/AmazonS3/latest/userguide/lifecycle-configuration-examples.html#lifecycle-config-conceptual-ex6
> >
> > --
> > Frédéric Nass
> > Ceph Ambassador France | Senior Ceph Engineer @ CLYSO
> >
> > Squishing Squids - A Ceph Compression Guide, February 25th, 9am PST.
> > https://clyso.com | [email protected]
> >
> > On Wed, Feb 4, 2026 at 13:23, Ansgar Jazdzewski via ceph-users <[email protected]> wrote:
> >>
> >> Hi Cephers,
> >>
> >> We are running Ceph RGW with bucket versioning enabled (currently on
> >> Reef, but we have also observed the same behavior on other releases).
> >>
> >> We are seeing an operational issue where a single object key can
> >> accumulate extremely large numbers of versions (hundreds of thousands,
> >> up to millions). Because all versions of the same key are stored on
> >> the same bucket index shard, this results in very large bucket-index
> >> entries / omap growth and shard hotspots.
> >>
> >> In our case, one object with millions of versions caused significant
> >> bucket index pressure and related performance/maintenance concerns.
> >>
> >> As far as we can tell, RGW currently does not provide a way to enforce
> >> a limit such as "keep only the last N versions per object key," and
> >> existing quotas only apply at bucket/user level, not per object key.
> >>
> >> We wanted to ask:
> >> * Have others seen similar cases with very high version counts for
> >>   individual keys?
> >> * How do you handle or mitigate this operationally today?
> >>
> >> Any advice or shared experience would be much appreciated.
> >>
> >> Thank you,
> >> Ansgar Jazdzewski

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
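On Ansgar's operational question (how to handle or mitigate this today), one way to at least detect runaway keys before they become a shard hotspot is to walk the bucket's version listing and count versions per key. A rough diagnostic sketch with boto3 (endpoint, bucket name, and threshold are illustrative; listing every version can be slow on very large buckets, so this is a diagnostic, not a guardrail):

import boto3
from collections import Counter

# RGW S3 endpoint is a placeholder; credentials come from the usual
# boto3 config/environment lookup.
s3 = boto3.client("s3", endpoint_url="https://rgw.example.com")

def find_hot_keys(bucket, threshold=100000):
    """Return keys whose version count (including delete markers)
    meets or exceeds the threshold."""
    counts = Counter()
    paginator = s3.get_paginator("list_object_versions")
    for page in paginator.paginate(Bucket=bucket):
        for version in page.get("Versions", []):
            counts[version["Key"]] += 1
        for marker in page.get("DeleteMarkers", []):
            counts[marker["Key"]] += 1
    return {key: n for key, n in counts.items() if n >= threshold}

for key, n in find_hot_keys("example-bucket").items():
    print(f"{key}: {n} versions")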
