On 12/2/19 5:55 PM, Lars Täuber wrote:
Here we have a similar situation.
After adding some OSDs to the cluster, the PGs are not equally distributed over
the OSDs.

The balancing mode is set to upmap.
The docs (https://docs.ceph.com/docs/master/rados/operations/balancer/#modes) say:
"This CRUSH mode will optimize the placement of individual PGs in order to achieve a
balanced distribution. In most cases, this distribution is “perfect,” with an equal
number of PGs on each OSD (+/-1 PG, since they might not divide evenly)."
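The "+/-1 PG" claim is just integer division: with a fixed number of PG replicas spread over N OSDs, a perfect distribution puts either floor(avg) or ceil(avg) PGs on each OSD. A minimal sketch with hypothetical numbers (4096 PG replicas, 30 OSDs — not this cluster's actual figures):

```python
import math

# Hypothetical example: 4096 PG replicas spread over 30 OSDs.
# A perfect upmap distribution gives each OSD either floor(avg)
# or ceil(avg) PGs -- hence the "+/-1 PG" in the docs.
total_pg_replicas = 4096
num_osds = 30

avg = total_pg_replicas / num_osds            # ~136.5
low, high = math.floor(avg), math.ceil(avg)   # 136 and 137

# The counts must still sum back to the total number of replicas:
n_high = total_pg_replicas - low * num_osds   # OSDs holding ceil(avg)
n_low = num_osds - n_high                     # OSDs holding floor(avg)
print(n_low, "OSDs with", low, "PGs;", n_high, "OSDs with", high, "PGs")
```

A spread of 157 to 214 PGs per OSD, as reported below, is far outside that +/-1 window, which is why the question is reasonable.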

This is not the case with our cluster. The number of PGs per OSD ranges from 157
to 214, so the resulting usage of the HDDs varies from 60% to 82%.
$ ceph osd df class hdd
[…]
MIN/MAX VAR: 0.89/1.22  STDDEV: 7.47
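For context, MIN/MAX VAR in that summary line is the ratio of the least/most utilized OSD to the mean utilization, and STDDEV is the standard deviation of per-OSD utilization. A rough recomputation with hypothetical per-OSD usage percentages (the real per-OSD values are elided above; these merely span the reported 60-82% range):

```python
import statistics

# Hypothetical per-OSD utilization percentages standing in for the
# elided `ceph osd df` output; the real cluster ranged roughly 60-82%.
usage = [60.0, 65.0, 70.0, 72.0, 75.0, 78.0, 80.0, 82.0]

mean = statistics.mean(usage)
min_var = min(usage) / mean        # corresponds to MIN VAR
max_var = max(usage) / mean        # corresponds to MAX VAR
stddev = statistics.pstdev(usage)  # corresponds to STDDEV

print(f"MIN/MAX VAR: {min_var:.2f}/{max_var:.2f}  STDDEV: {stddev:.2f}")
```

A VAR spread like 0.89/1.22 with STDDEV 7.47 indicates a distinctly uneven distribution for an upmap-balanced cluster.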

In the meantime I had to use reweight-by-utilization to get the cluster back into
a healthy state, because it had a near-full PG and near-full OSDs.
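For reference, `ceph osd reweight-by-utilization` lowers the override reweight of OSDs whose utilization exceeds the cluster average by a threshold (120% by default). A rough sketch of that selection logic, with hypothetical utilization values and a tighter threshold of 110 — a simplification, not Ceph's actual implementation:

```python
# Hypothetical per-OSD utilization map; the selection logic below is a
# simplification of `ceph osd reweight-by-utilization <threshold>`,
# not the actual Ceph implementation.
utilization = {0: 0.60, 1: 0.72, 2: 0.82, 3: 0.68, 4: 0.81}
threshold = 110  # percent of the average utilization

avg = sum(utilization.values()) / len(utilization)
cutoff = avg * threshold / 100.0

# OSDs above the cutoff are candidates to have their reweight reduced.
overfull = {osd: u for osd, u in utilization.items() if u > cutoff}
for osd, u in sorted(overfull.items()):
    print(f"osd.{osd}: {u:.0%} > cutoff {cutoff:.0%} -> reweight down")
```

Note that reweight-by-utilization and the upmap balancer can fight each other, since they adjust placement through different mechanisms.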

The cluster is nautilus 14.2.4 on debian 10 with packages from croit.io.

I am thinking about switching back to crush-compat.

Is there a bug known regarding this?


Please paste the output of `ceph osd df tree`, `ceph osd pool ls detail`, and `ceph osd crush rule dump`.




k
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
