[ceph-users] Re: snaptrim number of objects

2023-08-23 Thread Angelo Hongens
On 23/08/2023 08:27, Sridhar Seshasayee wrote:
>> This also leads me to agree with you there's 'something wrong' with
>> the mclock scheduler. I was almost starting to suspect hardware issues
>> or something like that, I was at my wit's end.
>
> Could you update this thread with the exact

[ceph-users] Re: snaptrim number of objects

2023-08-23 Thread Sridhar Seshasayee
> This also leads me to agree with you there's 'something wrong' with
> the mclock scheduler. I was almost starting to suspect hardware issues
> or something like that, I was at my wit's end.
>

Could you update this thread with the exact quincy version by running:

$ ceph versions

and

$ ceph
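[The second command is truncated in the archive snippet. As a sketch, the requested diagnostics could be gathered like this; the "config get" line is an assumption, not confirmed by the cut-off text:

$ ceph versions                      # release each daemon is running
$ ceph config get osd osd_op_queue   # active op scheduler (assumed command)]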

[ceph-users] Re: snaptrim number of objects

2023-08-22 Thread Mark Nelson
On 8/21/23 17:38, Angelo Höngens wrote:
> On 21/08/2023 16:47, Manuel Lausch wrote:
>> Hello,
>>
>> on my test cluster I played a bit with ceph quincy (17.2.6).
>> I also see slow ops while deleting snapshots. With the previous major
>> (pacific) this wasn't an issue.
>> In my case this is related to the new

[ceph-users] Re: snaptrim number of objects

2023-08-21 Thread Angelo Höngens
On 21/08/2023 16:47, Manuel Lausch wrote:
> Hello,
>
> on my test cluster I played a bit with ceph quincy (17.2.6).
> I also see slow ops while deleting snapshots. With the previous major
> (pacific) this wasn't an issue.
> In my case this is related to the new mclock scheduler which is
> defaulted

[ceph-users] Re: snaptrim number of objects

2023-08-21 Thread Angelo Hongens
On 21/08/2023 12:38, Frank Schilder wrote:
> Hi Angelo,
>
> was this cluster upgraded (major version upgrade) before these issues
> started? We observed this with certain paths of a major version
> upgrade, and the only way to fix it was to re-deploy all OSDs step by
> step. You can try a rocks-DB
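[Frank's suggestion is cut off at "rocks-DB". If it refers to an online RocksDB compaction (an assumption; the snippet does not say), that can be triggered per OSD with:

$ ceph tell osd.0 compact    # compact the RocksDB of a single OSD
$ ceph tell osd.* compact    # or iterate over all OSDs]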

[ceph-users] Re: snaptrim number of objects

2023-08-21 Thread Manuel Lausch
Hello,

on my test cluster I played a bit with ceph quincy (17.2.6).
I also see slow ops while deleting snapshots. With the previous major
(pacific) this wasn't an issue.
In my case this is related to the new mclock scheduler, which is the
default with quincy. With "ceph config set global osd_op_queue
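[The command is truncated here. Reconstructed from the thread's context (reverting from mclock, so wpq is the likely value, though the snippet itself is cut off), the switch back to the pre-quincy scheduler would look like:

$ ceph config set global osd_op_queue wpq
$ systemctl restart ceph-osd.target   # per host; osd_op_queue is only
                                      # read at OSD startup]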

[ceph-users] Re: snaptrim number of objects

2023-08-21 Thread Frank Schilder
Bygning 109, rum S14

From: Angelo Hongens
Sent: Saturday, August 19, 2023 9:58 AM
To: Patrick Donnelly
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: snaptrim number of objects

On 07/08/2023 18:04, Patrick Donnelly wrote:
>> I'm trying to figu

[ceph-users] Re: snaptrim number of objects

2023-08-19 Thread Angelo Hongens
On 07/08/2023 18:04, Patrick Donnelly wrote:
>> I'm trying to figure out what's happening to my backup cluster that
>> often grinds to a halt when cephfs automatically removes snapshots.
>
> CephFS does not "automatically" remove snapshots. Do you mean the
> snap_schedule mgr module?

Yup. Almost

[ceph-users] Re: snaptrim number of objects

2023-08-07 Thread Patrick Donnelly
On Fri, Aug 4, 2023 at 5:41 PM Angelo Höngens wrote:
>
> Hey guys,
>
> I'm trying to figure out what's happening to my backup cluster that
> often grinds to a halt when cephfs automatically removes snapshots.

CephFS does not "automatically" remove snapshots. Do you mean the snap_schedule mgr
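[For reference, the snap_schedule mgr module Patrick refers to is enabled and configured along these lines; a sketch, where the path "/" and the hourly schedule are illustrative values, not taken from the thread:

$ ceph mgr module enable snap_schedule
$ ceph fs snap-schedule add / 1h              # snapshot the FS root hourly
$ ceph fs snap-schedule retention add / h 24  # keep 24 hourly snapshots
$ ceph fs snap-schedule status /              # verify the schedule]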