Hi .. Just an update: this looks awesome, and in an 8x5 company,
Christmas is a good period to rebalance a cluster :-)
>> I'll try it out again - last time I tried it complained about older clients -
>> it should be better now.
> upmap is supported since kernel 4.13.
>
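For anyone else who hits the older-client complaint: before enabling upmap
the connected client releases can be checked, and the minimum compat client
has to be raised (standard ceph CLI; exact output varies by release):

  $ ceph features                                    # shows the release level of connected clients/daemons
  $ ceph osd set-require-min-compat-client luminous  # upmap requires all clients to be luminous-capable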
>> Second - should the reweights be set back to 1 then?
> Yes, also:
>
> 1. `ceph osd crush tunables optimal`
Done
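(A quick way to confirm, if useful; the active tunables profile can be
printed with the standard CLI:)

  $ ceph osd crush show-tunables   # the profile fields should indicate the optimal tunables are active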
> 2. All your buckets should be straw2, but just in case: `ceph osd crush
> set-all-straw-buckets-to-straw2`
Done
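(To double-check, the bucket algorithms can be grepped out of the crush
dump; just a convenience, not required:)

  $ ceph osd crush dump | grep '"alg"'   # every bucket should now report "straw2"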
> 3. Your hosts are imbalanced: elefant & capone have only eight 10TB's,
> the other hosts have 12. So I recommend replacing the 8TB spinners with
> 10TB ones, or just shuffling them between hosts, like 2x8TB + 10x10TB.
Yes, we initially thought we could go with 3 OSD hosts, but then found
out that EC pools required more (with a failure domain of host, a k+m
profile needs at least k+m hosts), and added more afterwards.
> 4. Revert all your reweights.
Done
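For reference, reverting was just setting the override reweight back to 1
on every OSD that had one; a quick sketch (OSD id 12 is only an example):

  $ ceph osd reweight 12 1.0   # repeat for each OSD that had an override reweight
  $ ceph osd df tree           # the REWEIGHT column should now read 1.00000 everywhere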
> 5. Let the balancer do its work: `ceph balancer mode upmap`, `ceph balancer on`.
So far it works awesome:
$ sudo qms/server_documentation/ceph/ceph-osd-data-distribution hdd
hdd
x <stdin>
    N           Min           Max        Median           Avg        Stddev
x  72         50.82         55.65         52.88     52.916944     1.0002586
As compared to the best I got with reweighting:
$ sudo qms/server_documentation/ceph/ceph-osd-data-distribution hdd
hdd
x <stdin>
    N           Min           Max        Median           Avg        Stddev
x  72         45.36         54.98         52.63     52.131944     2.0746672
It took about 24 hours to rebalance, and it moved quite a few TBs around.
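For anyone wanting to watch a rebalance like this while it runs, the
standard status commands give a high-level view:

  $ ceph -s                 # misplaced object counts drop as the rebalance progresses
  $ ceph balancer status    # shows the active mode and whether the balancer is on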
I would still like to have a log somewhere to grep, to inspect what
balancer/upmap actually does in my cluster, or some ceph commands that
deliver some monitoring capabilities .. any suggestions?
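As far as I can tell the current state is visible, just not a history of
when/why the balancer applied each change:

  $ ceph osd dump | grep pg_upmap   # lists the pg_upmap_items entries the balancer has installed
  $ ceph balancer status            # current mode and whether the balancer is enabled
  $ ceph balancer eval              # score of the current distribution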
--
Jesper