We have a few size=4 pools, but most of them are metadata pools paired with
m=3 or m=4 erasure-coded pools for the actual data.
The goal is to provide the same availability and durability guarantees for
the metadata as for the data.
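
For reference, such a pairing could look roughly like this (pool names, PG
counts and the k/m values below are just placeholders, adjust them to your
layout):

  ceph osd erasure-code-profile set ec-k8m3 k=8 m=3 crush-failure-domain=host
  ceph osd pool create data_ec 128 128 erasure ec-k8m3
  ceph osd pool create metadata 64 64 replicated
  ceph osd pool set metadata size 4
  ceph osd pool set metadata min_size 2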

But we do have some older, odd setups with replicated size=4 for that reason
(they predate Nautilus, so no EC overwrites originally).
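
On anything recent, the rough equivalent is simply enabling overwrites on the
EC pool, something like this (pool name is only an example, and it needs
BlueStore OSDs):

  ceph osd pool set data_ec allow_ec_overwrites true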

I'd prefer erasure coding over a size=4 setup in most scenarios nowadays.
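
If you do go the size=4/min_size=2 route Wido describes below, the change on
an existing pool is just the two pool settings, roughly (pool name is a
placeholder; the extra replica gets created via backfill):

  ceph osd pool set mypool size 4
  ceph osd pool set mypool min_size 2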


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, Jul 24, 2019 at 9:22 PM Wido den Hollander <w...@42on.com> wrote:

> Hi,
>
> Is anybody using 4x (size=4, min_size=2) replication with Ceph?
>
> The reason I'm asking is that a customer of mine asked me for a solution
> to prevent a situation like the one that occurred:
>
> A cluster running with size=3 and replication over different racks was
> being upgraded from 13.2.5 to 13.2.6.
>
> During the upgrade, which involved patching the OS as well, they
> rebooted one of the nodes. During that reboot, a node in a different
> rack suddenly rebooted as well. It was unclear why this happened, but
> that node was gone.
>
> While the upgraded node was rebooting and the other node had crashed,
> about 120 PGs were inactive due to min_size=2.
>
> Waiting for the nodes to come back and for recovery to finish, it took
> about 15 minutes before all VMs running inside OpenStack were back again.
>
> While you are upgrading or performing any other maintenance with size=3,
> you can't tolerate the failure of a node, as that will cause PGs to go
> inactive.
>
> This made me think about using size=4 and min_size=2 to prevent this
> situation.
>
> This obviously has implications for write latency and cost, but it would
> prevent such a situation.
>
> Is anybody here running a Ceph cluster with size=4 and min_size=2 for
> this reason?
>
> Thank you,
>
> Wido
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
