Generally the recommendation is: if your profile needs X shards (k+m = X),
you should have at least X+1 entities in your failure domain so that Ceph
can automatically self-heal after a failure.

Given your setup of 6 servers and a failure domain of host, you should
select k+m=5 at most. So 3+2 should make for a good profile in your case.
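
As a minimal sketch (the profile and pool names here are hypothetical),
setting that up could look like this:

  # Create a 3+2 profile with host as the failure domain
  ceph osd erasure-code-profile set ec-3-2 k=3 m=2 crush-failure-domain=host

  # Verify the resulting profile
  ceph osd erasure-code-profile get ec-3-2

  # Create a pool using it (a PG count of 128 is just an example)
  ceph osd pool create ecpool 128 128 erasure ec-3-2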

You could go to 4+2, but a loss of a host then means Ceph can't auto-heal:
with only 6 hosts there is no spare host to rebuild the missing shard on,
so if another server gets into trouble before the first one is replaced,
your cluster will become unavailable.
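
You can check where that availability line sits for any pool via its
min_size (for EC pools min_size defaults to k+1, i.e. 5 for 4+2, so with
one host down you are already at exactly min_size; the pool name below is
hypothetical):

  ceph osd pool get ecpool min_size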

Please note that you can't change the EC profile of an existing pool -
you'll need to create a new pool and copy the data over if you want to
change your current profile.
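
A rough sketch of such a migration (pool names are again hypothetical, and
note that rados cppool has limitations, e.g. it does not preserve
snapshots, so test it before relying on it):

  # Create the replacement pool with the new profile
  ceph osd pool create ecpool-new 128 128 erasure ec-3-2

  # Copy all objects over (stop client IO to the old pool first)
  rados cppool ecpool-old ecpool-new

  # Swap names so clients find the data under the old pool name
  ceph osd pool rename ecpool-old ecpool-old-bak
  ceph osd pool rename ecpool-new ecpool-old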

Cheers
Christian

On Sat, 21 Jul 2018, 01:52 Ziggy Maes, <[email protected]> wrote:

> Hello Caspar
>
>
>
> That makes a great deal of sense, thank you for elaborating. Am I correct
> to assume that if we were to use a k=2, m=2 profile, it would be identical
> to a replicated pool (since there would be an equal number of data and
> parity chunks)? Furthermore, how should the proper erasure profile be
> determined then? Are we to strive for as high a data chunk value (k) as
> possible and a low parity/coding value (m)?
>
>
>
> Kind regards
>
>
> Ziggy Maes - DevOps Engineer
> CELL +32 478 644 354
> SKYPE Ziggy.Maes
>
>
> www.be-mobile.com
>
>
>
>
>
> From: Caspar Smit <[email protected]>
> Date: Friday, 20 July 2018 at 14:15
> To: Ziggy Maes <[email protected]>
> Cc: "[email protected]" <[email protected]>
> Subject: Re: [ceph-users] Default erasure code profile and sustaining
> loss of one host containing 4 OSDs
>
>
>
> Ziggy,
>
>
>
> For EC pools: min_size = k+1
>
>
>
> So in your case (m=1) -> min_size is 3, which is the same as the number of
> shards. So if ANY shard goes down, IO freezes.
>
>
>
> If you choose m=2, min_size will still be 3, but you now have 4 shards (k+m
> = 4), so you can lose a shard and still retain availability.
>
>
>
> Of course a failure domain of 'host' is required to do this, but since you
> have 6 hosts that would be OK.
>
>
> Met vriendelijke groet,
>
> Caspar Smit
> Systemengineer
> SuperNAS
> Dorsvlegelstraat 13
> 1445 PA Purmerend
>
> t: (+31) 299 410 414
> e: [email protected]
> w: www.supernas.eu
>
>