On Fri, 23 Nov 2018 at 15:19, Marco Gaiarin <[email protected]> wrote:
>
>
> Previous (partial) node failures and my current experiments with adding a
> node lead me to observe that, when rebalancing is needed, Ceph also
> rebalances intra-node: e.g., if an OSD on a node dies, its data is
> rebalanced across all OSDs, even with a pool replica count of 3 and 3 nodes.
>
> This, indeed, makes perfect sense: scattering the data more widely gives
> better performance and safety.
>
>
> But... is there some way to tell CRUSH 'don't rebalance within the same
> node, just go into degraded mode'?
>

The default CRUSH rules with replication=3 only place PGs on separate
hosts, so in that case the cluster would go into degraded mode if a node
goes away, and would not place replicas on different disks on the
remaining hosts.
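
For illustration, here is a sketch of what that stock rule typically looks
like in a decompiled CRUSH map (the rule name "replicated_rule" and the
"default" root are the usual defaults; your map may differ):

    # Dump and decompile the map:  ceph osd getcrushmap | crushtool -d -
    rule replicated_rule {
            id 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # "type host" is the failure domain: every replica must land
            # on a different host, so losing a whole host leaves the PG
            # degraded rather than re-replicated onto another disk of a
            # surviving node.
            step chooseleaf firstn 0 type host
            step emit
    }

You can check the failure domain on a live cluster with
"ceph osd crush rule dump"; if the chooseleaf step says "type osd" instead
of "type host", replicas may share a node, which is the behaviour the
original question wants to avoid.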

-- 
May the most significant bit of your life be positive.
