[ceph-users] Re: OSDs growing beyond full ratio

2022-08-28 Thread Stefan Kooman
On 8/28/22 17:30, Wyll Ingersoll wrote:
> We have a Pacific cluster that is overly filled and is having major trouble recovering. We are desperate for help in improving recovery speed. We have modified all of the various recovery throttling parameters. The full_ratio is 0.95 but we have
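Before adjusting anything further, it usually helps to see which OSDs are actually filling up and which ratios the cluster currently enforces. A minimal sketch, assuming a standard Pacific deployment:

    # Per-OSD utilization, laid out along the CRUSH tree
    ceph osd df tree

    # Currently configured full / backfillfull / nearfull ratios
    ceph osd dump | grep ratio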

[ceph-users] Re: OSDs growing beyond full ratio

2022-08-28 Thread Jarett
Isn’t rebalancing onto the empty OSDs default behavior?

From: Wyll Ingersoll
Sent: Sunday, August 28, 2022 10:31 AM
To: ceph-users@ceph.io
Subject: [ceph-users] OSDs growing beyond full ratio

We have a Pacific cluster that is overly filled and is having major trouble recovering. We are desperate for
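Whether data automatically moves onto newly added, empty OSDs depends on their CRUSH weight and on the mgr balancer module. A short sketch of how to check, assuming a Pacific-era cluster with the balancer module available:

    # Empty OSDs only receive data if they carry a non-zero CRUSH weight
    ceph osd tree

    # Check whether the automatic balancer is enabled and which mode it runs in
    ceph balancer status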

[ceph-users] Changing the cluster network range

2022-08-28 Thread Nico Schottelius
Good morning, we are trying to migrate a Ceph/Nautilus cluster into Kubernetes/Rook/Pacific [0]. Due to limitations in Kubernetes we probably need to change the cluster network range, which is currently set to 2a0a:e5c0::/64. My question to the list: did anyone already go through this? My
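For reference, cluster_network is an ordinary config option, so on a cluster whose settings live in the monitors' config database the change itself is small; the sketch below uses <new-range> as a placeholder and assumes the network is not instead managed through Rook's CephCluster resource. OSDs must be restarted to pick up the new value.

    # Show the currently configured cluster network
    ceph config get osd cluster_network

    # Point it at the new range (placeholder), then restart the OSDs
    ceph config set global cluster_network <new-range>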

[ceph-users] OSDs growing beyond full ratio

2022-08-28 Thread Wyll Ingersoll
We have a Pacific cluster that is overly filled and is having major trouble recovering. We are desperate for help in improving recovery speed. We have modified all of the various recovery throttling parameters. The full_ratio is 0.95 but we have several OSDs that continue to grow and are
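The knobs usually involved are sketched below; the values are illustrative, not recommendations, and raising the ratios only buys temporary headroom that should be reverted once recovery completes.

    # Allow more concurrent backfill/recovery work per OSD
    ceph config set osd osd_max_backfills 2
    ceph config set osd osd_recovery_max_active 4

    # Temporarily raise the thresholds so backfill can proceed past the current limits
    ceph osd set-backfillfull-ratio 0.92
    ceph osd set-full-ratio 0.96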

[ceph-users] remove osd in crush

2022-08-28 Thread farhad kh
I removed an OSD from the CRUSH map, but it still shows up in 'ceph osd tree':

[root@ceph2-node-01 ~]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME                STATUS  REWEIGHT  PRI-AFF
 -1         20.03859  root default
-20         20.03859      datacenter dc-1
-21         20.03859          room
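For comparison, the sequence below removes an OSD from both the CRUSH map and the OSD map, so it disappears from 'ceph osd tree'. It assumes the OSD is already stopped and marked out; N is a placeholder for the OSD id.

    # Remove the CRUSH entry, the auth key, and the OSD map entry
    ceph osd crush remove osd.N
    ceph auth del osd.N
    ceph osd rm osd.N

    # Or, on Luminous and later, the single equivalent command
    ceph osd purge osd.N --yes-i-really-mean-it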