Hi Dan,

when the warning popped up, the system had been running for months without any issues. The cluster is running 18.2.7 and uses two active MDS daemons. I started monitoring the object and noticed the number of keys exceeding the 200k threshold multiple times. Unfortunately we had an outage Monday night (not caused by Ceph :) ), so we had to restart every service. Since then I haven't observed any openfiles objects with more than 200k keys. I have set up monitoring for all openfiles objects though, so we should be alerted when the issue arises again.
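For anyone wanting to do similar alerting, here is a minimal sketch of a check that parses the "Large omap object found" cluster log line (the format is taken from the warning quoted below; the 200000 default matches osd_deep_scrub_large_omap_object_key_threshold, and the function name is just illustrative):

```python
import re

# Matches the deep-scrub warning line quoted in this thread, e.g.
# "... Large omap object found. Object: 2:...:::mds1_openfiles.d:head
#  PG: 2.89a8edbb (2.b) Key count: 200001 Size (bytes): 12122520"
LARGE_OMAP_RE = re.compile(
    r"Large omap object found\. Object: (?P<obj>\S+) "
    r"PG: (?P<pg>.+?) Key count: (?P<keys>\d+) "
    r"Size \(bytes\): (?P<size>\d+)"
)

def parse_large_omap(line, threshold=200000):
    """Return (object, key_count, size_bytes) if the line reports a
    large omap object at or above the threshold, else None."""
    m = LARGE_OMAP_RE.search(line)
    if not m:
        return None
    keys = int(m.group("keys"))
    if keys < threshold:
        return None
    return (m.group("obj"), keys, int(m.group("size")))
```

For ad-hoc checks between scrubs, counting the keys directly also works, e.g. `rados -p <metadata-pool> listomapkeys mds1_openfiles.d | wc -l` (pool name depends on your setup).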
Thank you!
Philipp

On Wed, Nov 19, 2025 at 8:43 AM Dan van der Ster <[email protected]> wrote:
> Hi Philipp,
>
> This looks like https://tracker.ceph.com/issues/61950, which was
> closed as not reproducible.
>
> Do you have any more background context about that cluster that might
> give a clue how this happened? Maybe a recent daemon
> upgrade/restart/...?
>
> Cheers, Dan
>
> On Mon, Nov 17, 2025 at 8:28 AM Philipp Hocke <[email protected]> wrote:
> >
> > Hi,
> >
> > last week we had a "large omap" warning in one of our clusters. The
> > affected object was
> >
> > 2025-11-09T23:31:24.799+0000 7f70131b3640  0 log_channel(cluster) log
> > [WRN] : Large omap object found. Object:
> > 2:ddb71591:::mds1_openfiles.d:head PG: 2.89a8edbb (2.b) Key count:
> > 200001 Size (bytes): 12122520
> >
> > I wanted to get a deeper understanding of what triggers this specific
> > issue and had a look at the code - I'm not that deep into C++, so
> > please correct me if I'm wrong.
> >
> > If I understand correctly, LARGE_OMAP warnings are only generated (and
> > cleared) during deep scrubs. If an object is found with more keys than
> > osd_deep_scrub_large_omap_object_key_threshold, it will trigger the
> > warning. In this case, mds1_openfiles.d had a key count of 200001 when
> > this specific object got deep-scrubbed, which triggered the warning.
> >
> > The MDS should create a new openfiles fragment when the threshold is
> > reached.
> >
> > My question is: how exactly does the MDS end up with an openfiles
> > segment with more than 200,000 keys? Shouldn't the MDS create a new
> > segment as soon as the previous one reaches the configured limit?
> >
> > If this is considered a bug, I'm happy to open a report.
> >
> > Best regards
> >
> > Philipp
> >
> > --
> > Philipp Hocke
> > Leiter Linux Systemadministration / Head Of TechOps
> > CM4ALL GmbH
> > Im Mediapark 6A - 50670 Köln / Cologne
> > Phone +49-(0)221-6601-0
> > Fax +49-(0)221-6601-1011
> > E-Mail: [email protected]
> > Internet: www.cm4all.com
> > _______________________________________________
> > ceph-users mailing list -- [email protected]
> > To unsubscribe send an email to [email protected]
>
> --
> Dan van der Ster
> Ceph Executive Council | CTO @ CLYSO
> Try our Ceph Analyzer -- https://analyzer.clyso.com/
> https://clyso.com | [email protected]

--
Philipp Hocke
Leiter Linux Systemadministration / Head Of TechOps
CM4ALL GmbH
Im Mediapark 6A - 50670 Köln / Cologne
Phone +49-(0)221-6601-0
Fax +49-(0)221-6601-1011
E-Mail: [email protected]
Internet: www.cm4all.com
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
