Re: [ceph-users] Ceph crushmap re-arrange with minimum rebalancing?

2019-03-11 Thread Wido den Hollander
On 3/8/19 4:17 AM, Pardhiv Karri wrote:
> Hi,
>
> We have a Ceph cluster with rack as the failure domain, but the racks are
> so imbalanced that we are not able to utilize the maximum of the storage
> allocated: some OSDs in the smaller racks are filling up too fast and
> causing Ceph to go into a warning state, with near_full_ratio being
> triggered.

[ceph-users] Ceph crushmap re-arrange with minimum rebalancing?

2019-03-07 Thread Pardhiv Karri
Hi,

We have a Ceph cluster with rack as the failure domain, but the racks are so imbalanced that we are not able to utilize the maximum of the storage allocated: some OSDs in the smaller racks are filling up too fast and causing Ceph to go into a warning state, with near_full_ratio being triggered. We
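
[Editor's note: for anyone hitting the same imbalance, a quick way to quantify it before touching the CRUSH map is to compare per-rack utilization. The sketch below is not from the thread; it is a minimal example that shells out to `ceph osd df tree --format json` and sums usage per rack bucket. The exact JSON field names (nodes, type, kb, kb_used) are an assumption and may differ between Ceph releases.]

#!/usr/bin/env python3
"""Rough sketch: report per-rack utilization for a Ceph cluster.

Assumes a client with an admin keyring and that `ceph osd df tree
--format json` returns a "nodes" list whose rack buckets carry
aggregate "kb" / "kb_used" figures (field names may vary by release).
"""
import json
import subprocess


def rack_utilization():
    # Ask Ceph for the OSD utilization tree as JSON.
    out = subprocess.check_output(
        ["ceph", "osd", "df", "tree", "--format", "json"])
    tree = json.loads(out)

    racks = {}
    for node in tree.get("nodes", []):
        # CRUSH buckets of type "rack" are assumed to expose totals.
        if node.get("type") == "rack":
            kb = node.get("kb", 0)
            used = node.get("kb_used", 0)
            racks[node["name"]] = 100.0 * used / kb if kb else 0.0
    return racks


if __name__ == "__main__":
    for rack, pct in sorted(rack_utilization().items()):
        print(f"{rack}: {pct:.1f}% used")

A large spread between racks in this output is what pushes the smallest rack's OSDs toward near_full_ratio first, even when the cluster-wide utilization still looks modest.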