Hello,

On 27.10.2017 at 19:00, David Turner wrote:
> What does your crush map look like?  Also a `ceph df` output.  You're
> optimizing your map for pool #5; if there are other pools with a
> significant amount of data, then you're going to be off on your cluster
> balance.

There are no other pools at all. The crush map has nothing special in
it, just hosts and OSDs.

> A big question for balancing a cluster is how big are your PGs?  If your
> primary data pool has PGs that are 100GB in size, then even if you
> balanced the cluster such that all of the osds were within 1 PG of each
> other (assuming your average OSD size <1TB), then your OSDs would be 10%
> apart in disk usage from each other and 100GB.

There are 4096 PGs and 136 OSDs. I'm not talking about a 10% spread;
it's 67% usage vs. 92%.
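
The PG-granularity argument can be checked with quick arithmetic. The
sketch below uses the PG and OSD counts from this thread; the replica
count of 3 is an assumption (it isn't stated here), as is the premise of
uniformly sized OSDs:

```python
# Rough PG-balance arithmetic for the numbers in this thread.
# Assumption: replication size 3 (not stated in the thread) and
# uniformly sized OSDs.

pgs = 4096
osds = 136
replica_count = 3  # assumption; adjust to the pool's actual size

pg_copies = pgs * replica_count
avg_pgs_per_osd = pg_copies / osds
print(f"average PG copies per OSD: {avg_pgs_per_osd:.1f}")

# With roughly 90 PG copies per OSD, one extra or missing PG shifts an
# OSD's usage by only about 1.1%, so PG granularity alone cannot
# explain a 67% vs. 92% spread.
shift_per_pg_pct = 100 / avg_pgs_per_osd
print(f"usage shift per PG: {shift_per_pg_pct:.1f}%")
```

Under those assumptions each OSD holds about 90 PG copies, so being one
PG off the average moves usage by only about one percent, far less than
the observed spread.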

Also, crush optimize has been working fine on other clusters.

Stefan
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com