Hello to all,

Here is my setup:

- 2 racks
- osd1 .. osd6 in rack1
- osd7 .. osd12 in rack2
- replica = 2
- CRUSH map set to put replicas across racks

My question:
Let's imagine that one day I need to unplug one of the racks (let's
say, rack1). No problem, because another copy of my objects will be in
the other rack. But if I do it, Ceph will start to rebalance data
across OSDs.

So, is there a way to put nodes in a "maintenance mode" that puts
Ceph in "degraded" mode but avoids any remapping?

The idea is to have a command like:

$ ceph osd set maintenance=on osd.1
$ ceph osd set maintenance=on osd.2
$ ceph osd set maintenance=on osd.3
$ ceph osd set maintenance=on osd.4
$ ceph osd set maintenance=on osd.5
$ ceph osd set maintenance=on osd.6

So Ceph knows that 6 OSDs are down and goes into degraded mode, but
without remapping data.
Then, once maintenance is finished, I'll only have to do the opposite:
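For comparison, the closest existing mechanism I'm aware of is the
cluster-wide noout flag, which tells the monitors not to mark down
OSDs "out" (and therefore not to remap their data). It applies to the
whole cluster rather than to a chosen subset of OSDs, so it's only a
rough sketch of what I'd like per-OSD:

```shell
# Cluster-wide: down OSDs stay "in", so no remapping is triggered
ceph osd set noout

# ... stop the OSD daemons on rack1 and do the maintenance ...

# Restart the OSDs, then clear the flag so normal behavior resumes
ceph osd unset noout
```

The drawback is that while noout is set, a genuinely failed OSD
anywhere in the cluster would also not be marked out, so recovery
for it would not start either.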

$ ceph osd set maintenance=off osd.1
$ ceph osd set maintenance=off osd.2
$ ceph osd set maintenance=off osd.3
$ ceph osd set maintenance=off osd.4
$ ceph osd set maintenance=off osd.5
$ ceph osd set maintenance=off osd.6

I know the Ceph docs describe a way to do this with reweight, but
it's a little bit complex...

What do you think?
Thanks,

Alexis
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html