On 8/28/22 17:30, Wyll Ingersoll wrote:
We have a pacific cluster that is overly filled and is having major trouble
recovering. We are desperate for help in improving recovery speed. We have
modified all of the various recovery throttling parameters.
The full_ratio is 0.95 but we have
Isn’t rebalancing onto the empty OSDs default behavior?

From: Wyll Ingersoll
Sent: Sunday, August 28, 2022 10:31 AM
To: ceph-users@ceph.io
Subject: [ceph-users] OSDs growing beyond full ratio
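For anyone following along, the fill thresholds and recovery throttles being discussed can be adjusted at runtime. A rough sketch of the relevant knobs (the numbers below are placeholders, not recommendations, and pushing full_ratio much above 0.95 leaves very little headroom before OSDs fill completely):

# cluster-wide fill thresholds (placeholder values)
ceph osd set-nearfull-ratio 0.90
ceph osd set-backfillfull-ratio 0.93
ceph osd set-full-ratio 0.96

# loosen the recovery/backfill throttles (placeholder values)
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8
ceph config set osd osd_recovery_sleep 0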
Good morning,
we are trying to migrate a Ceph/Nautilus cluster
into kubernetes/rook/pacific [0]. Due to limitations in kubernetes we
probably need to change the cluster network range, which is currently
set to 2a0a:e5c0::/64.
My question to the list: did anyone already go through this?
My
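In case it helps to sketch the mechanics: the range itself is just the cluster_network option, so on the Ceph side a change could look roughly like the commands below. The new CIDR is a placeholder only, OSDs have to be restarted to bind to the new range, and in a rook deployment the network settings are normally driven from the CephCluster resource rather than set by hand.

# show the currently configured ranges
ceph config dump | grep network

# switch to the new range (placeholder CIDR, adjust to your environment)
ceph config set global cluster_network fd00:1234::/64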
We have a pacific cluster that is overly filled and is having major trouble
recovering. We are desperate for help in improving recovery speed. We have
modified all of the various recovery throttling parameters.
The full_ratio is 0.95 but we have several osds that continue to grow and are
I removed an OSD from the crush map, but it still shows up in 'ceph osd tree'.
[root@ceph2-node-01 ~]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME            STATUS  REWEIGHT  PRI-AFF
 -1         20.03859  root default
-20         20.03859      datacenter dc-1
-21         20.03859          room
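Removing the OSD from the crush map only takes it out of the placement hierarchy; the OSD entry itself (and its auth key) still exists, which is why it keeps showing up in 'ceph osd tree'. A rough sketch of a complete removal, assuming the OSD id is 5 (placeholder):

# remove it from the crush map (already done in your case)
ceph osd crush remove osd.5
# delete its authentication key
ceph auth del osd.5
# delete the OSD entry itself so it disappears from the tree
ceph osd rm 5

# or, on Luminous and later, a single purge does all of the above
ceph osd purge 5 --yes-i-really-mean-it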