Re: [ceph-users] ceph balancer - Some osds belong to multiple subtrees

2019-06-27 Thread Wolfgang Lendl
Thanks Paul - I suspect these shadow trees are causing this misbehaviour. I have a second Luminous cluster where these balancer settings work as expected - the working one has hdd+ssd OSDs. I cannot use the upmap balancer because of some jewel krbd clients - at least they are being reported as jewel.
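(A sketch, not from the thread: kernel krbd clients often report themselves as "jewel" even when they do support upmap - kernels from 4.13 on do - so it can be worth checking the feature bitmask rather than the reported release before giving up on the upmap balancer.)

```shell
# Inspect connected clients: release string plus raw feature bits.
# A krbd client reported as "jewel" may still advertise the upmap feature.
ceph features

# Only if every client actually supports the required features:
ceph osd set-require-min-compat-client luminous

# ...after which the upmap balancer can be enabled:
ceph balancer mode upmap
ceph balancer on
```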

Re: [ceph-users] ceph balancer - Some osds belong to multiple subtrees

2019-06-26 Thread Paul Emmerich
Device classes are implemented with magic invisible crush trees; you've got two completely independent trees internally: one for crush rules mapping to HDDs, one for legacy crush rules not specifying a device class. The balancer *should* be aware of this and ignore it, but I'm not sure about the
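(A sketch of how to see the shadow trees Paul describes - these commands are not from the thread. With `--show-shadow`, each bucket appears once per device class, e.g. `default` alongside `default~hdd` and `default~ssd`.)

```shell
# Show the CRUSH hierarchy including the per-device-class shadow trees.
ceph osd crush tree --show-shadow

# List the crush rules, then dump one to see whether it pins a
# device class ("take default class hdd") or is a legacy rule.
ceph osd crush rule ls
ceph osd crush rule dump
```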

[ceph-users] ceph balancer - Some osds belong to multiple subtrees

2019-06-26 Thread Wolfgang Lendl
Hi, I tried to enable the ceph balancer on a 12.2.12 cluster and got this: mgr[balancer] Some osds belong to multiple subtrees: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
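(Not from the thread, but the usual way this warning is resolved: replace legacy rules that name no device class with class-aware rules, so each pool maps into exactly one shadow subtree. Rule and pool names below are placeholders.)

```shell
# Create a replicated rule restricted to the hdd device class
# (args: rule name, CRUSH root, failure domain, device class).
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Point each pool that still uses a legacy rule at the new one.
ceph osd pool set mypool crush_rule replicated_hdd
```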