I just responded to this on the thread "Strange remap on host failure". I
think that response covers your question.
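
For context, the rack-level placement described below usually comes from a CRUSH
rule whose failure domain is "rack" rather than "host". A minimal sketch of such a
rule, in the form you would see in a decompiled CRUSH map (the rule name and
ruleset number are illustrative, not taken from your cluster):

    rule replicated_rack {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        # pick one OSD under a distinct rack bucket for each replica
        step chooseleaf firstn 0 type rack
        step emit
    }

In broad strokes, when a node fails its OSDs are first marked "down"; once
mon_osd_down_out_interval expires (600 seconds by default) they are marked "out",
CRUSH recomputes the placement under the same rule, and the affected PGs backfill
onto the surviving OSDs that still satisfy the rack constraint. You can watch the
process with "ceph -w" or "ceph health detail".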

On Mon, May 29, 2017, 4:10 PM Laszlo Budai <[email protected]> wrote:

> Hello,
>
> Can someone give me some directions on how Ceph recovery works?
> Let's suppose we have a Ceph cluster with several nodes grouped into 3 racks
> (2 nodes per rack). The CRUSH map is configured to distribute PGs across OSDs
> in different racks.
>
> What happens if a node fails? Where can I read a description of the
> actions performed by the Ceph cluster in the event of a node failure?
>
> Kind regards,
> Laszlo