Try this:

ceph osd crush reweight osd.XX 0
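For reference, a full drain-and-remove sequence built around that command might look like the following (a sketch, assuming osd.XX is the OSD being retired and that you run the stop command on its host):

     # set the CRUSH weight to 0 so data migrates to its final placement once
     ceph osd crush reweight osd.XX 0

     # wait for backfill to finish and the cluster to return to HEALTH_OK, then:
     ceph osd out osd.XX
     systemctl stop ceph-osd@XX
     ceph osd purge osd.XX --yes-i-really-mean-it

Because "crush reweight" changes the OSD's weight in the CRUSH map itself (unlike "ceph osd reweight", which only sets a temporary override weight on top of it), the data already moves to its correct new position during the drain, and removing the now-empty OSD afterwards should not trigger a second rebalance.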

--Mike

On 5/28/22 15:02, Nico Schottelius wrote:

Good evening dear fellow Ceph'ers,

when removing OSDs from a cluster, we sometimes use

     ceph osd reweight osd.XX 0

and wait until the OSD's content has been redistributed. However, when
we then finally stop and remove it, Ceph rebalances again.

I assume this is because the OSD's position is removed from the CRUSH
map, so the logical placement computed during the drain turns out to be
"wrong". (Am I wrong about that?)

I wonder, is there a way to tell Ceph up front that a particular OSD is
going to leave the cluster, so that its data moves to the "correct new
position" in one pass instead of doing the rebalance dance twice?

Best regards,

Nico

--
Sustainable and modern Infrastructures by ungleich.ch
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]

