[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issue for ceph filesystem

2024-05-17 Thread Akash Warkhade
@Kotresh Hiremath Ravishankar Can you please help on the above? On Fri, 17 May 2024, 12:26 pm Akash Warkhade wrote: > Hi Kotresh, > Thanks for the reply. > 1) There are no custom configs defined. > 2) Subtree pinning is not enabled. > 3) There were no warnings related to RADOS. > So wanted to

[ceph-users] Ceph Squid release / release candidate timeline?

2024-05-17 Thread Peter Sabaini
Hi, is there a ballpark timeline for a Squid release candidate / release? I'm aware of this pad that tracks blockers; is that still accurate, or should I be looking at another resource? https://pad.ceph.com/p/squid-upgrade-failures Thanks! peter.

[ceph-users] Reef RGWs stop processing requests

2024-05-17 Thread Iain Stott
Hi, We are running 3 clusters in multisite. All 3 were running Quincy 17.2.6 and using cephadm. We upgraded one of the secondary sites to Reef 18.2.1 a couple of weeks ago and were planning on doing the rest shortly afterwards. We run 3 RGW daemons on separate physical hosts behind an external
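Not from the original thread, but for readers chasing similar RGW stalls in a cephadm multisite setup, a first-pass check might look like this (zone and daemon names come from your own deployment):

    # State of the RGW daemons managed by cephadm
    ceph orch ps --daemon-type rgw
    # Multisite replication status as seen from this zone
    radosgw-admin sync status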

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-17 Thread Fabien Sirjean
Hi, On 5/17/24 08:51, Kotresh Hiremath Ravishankar wrote: > Yes, it's already merged to the reef branch, and should be available in the next reef release. > Please look at https://tracker.ceph.com/issues/62952 This is great news! Many thanks to all involved. F.

[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issue for ceph filesystem

2024-05-17 Thread Akash Warkhade
Hi Kotresh, Thanks for the reply. 1) There are no custom configs defined. 2) Subtree pinning is not enabled. 3) There were no warnings related to RADOS. So wanted to know: in order to fix this, should we increase the default mds_cache_memory_limit from 4 GB to 6 GB or more? Or is there any other solution for
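A minimal sketch of how the cache limit could be raised at runtime, assuming the MDS cache limit really is the bottleneck (the value is in bytes):

    # Check the current MDS cache memory limit (defaults to 4 GiB)
    ceph config get mds mds_cache_memory_limit
    # Raise it to 6 GiB for all MDS daemons
    ceph config set mds mds_cache_memory_limit 6442450944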

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-17 Thread Kotresh Hiremath Ravishankar
On Fri, May 17, 2024 at 11:52 AM Nicola Mori wrote: > Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should > be the current version and which is affected. Will the fix be included > in the next Reef release? > Yes, it's already merged to the reef branch, and should be

[ceph-users] Re: MDS behind on trimming every 4-5 weeks causing issue for ceph filesystem

2024-05-17 Thread Kotresh Hiremath Ravishankar
Hi, ~6K log segments to be trimmed, that's huge. 1. Are there any custom configs configured on this setup? 2. Is subtree pinning enabled? 3. Are there any warnings w.r.t. RADOS slowness? 4. Please share the mds perf dump to check for latencies and other stuff. $ ceph tell mds.<id> perf dump
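A rough sketch of how the requested diagnostics could be collected (the MDS name is a placeholder for your active MDS):

    # Performance counters for the active MDS, including journal/trim-related latencies
    ceph tell mds.<id> perf dump
    # Non-default configuration overrides currently applied to the cluster
    ceph config dump
    # Any outstanding health warnings, e.g. slow RADOS ops
    ceph health detail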

[ceph-users] Re: Write issues on CephFS mounted with root_squash

2024-05-17 Thread Nicola Mori
Thank you Kotresh! My cluster is currently on Reef 18.2.2, which should be the current version and which is affected. Will the fix be included in the next Reef release? Cheers, Nicola
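For context, a client restricted with root_squash (the kind of setup hit by this bug) might be created roughly like this; the client and filesystem names are placeholders, not from the thread:

    # Grant client.backup rw access to filesystem 'cephfs' with root squashed
    ceph fs authorize cephfs client.backup / rw root_squash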