Ziggy,

The default min_size for your pool is 3 (k+1), and with k=2, m=1 each PG only
has 3 shards in total, so losing ANY single OSD (let alone a whole host) will
result in reduced data availability:

https://patchwork.kernel.org/patch/8546771/
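
You can check this on your cluster (the pool name "ecpool" below is just a
placeholder for your actual pool name):

  ceph osd pool get ecpool min_size
  ceph osd pool get ecpool erasure_code_profile
  ceph osd erasure-code-profile get default   (or whichever profile the previous command shows)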

Use m=2 to be able to handle a node failure.
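
With k=2, m=2 each PG gets 4 shards on 4 different hosts, so a single host
failure still leaves 3 shards, which satisfies the default min_size of k+1 = 3.
You cannot change the profile of an existing pool though, so you would have to
create a new pool and migrate the data. Roughly like this (profile/pool names
and PG counts are just examples):

  ceph osd erasure-code-profile set ec-k2-m2 k=2 m=2 crush-failure-domain=host
  ceph osd pool create ecpool-k2m2 128 128 erasure ec-k2-m2
  # then migrate the data from the old pool and switch your clients over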


Kind regards,

Caspar Smit
System Engineer
SuperNAS
Dorsvlegelstraat 13
1445 PA Purmerend

t: (+31) 299 410 414
e: caspars...@supernas.eu
w: www.supernas.eu

2018-07-20 13:11 GMT+02:00 Ziggy Maes <ziggy.m...@be-mobile.com>:

> Hello
>
>
>
> I am currently trying to find out if Ceph can sustain the loss of a full
> host (containing 4 OSDs) in a default erasure coded pool (k=2, m=1). We
> currently have a production EC pool with the default erasure profile,
> but would like to make sure the data on this pool remains accessible even
> after one of our hosts fail. Since we have a very small cluster (6 hosts, 4
> OSDs per host), I created a custom CRUSH rule to make sure the 3 chunks are
> spread over 3 hosts, screenshot here:
> https://gyazo.com/1a3ddd6895df0d5e0e425774d2bcb257
>
>
>
> Unfortunately, taking one node offline results in reduced data
> availability and incomplete PGs, as shown here:
> https://gyazo.com/db56d5a52c9de2fd71bf9ae8eb03dbbc
>
>
>
> My question summed up: is it possible to sustain the loss of a host
> containing 4 OSDs with a k=2, m=1 erasure profile and a CRUSH map that
> spreads data over at least 3 hosts? If so, what am I doing wrong? I realize
> the documentation states that m equals the number of OSDs that can be lost,
> but assuming a balanced CRUSH map is used, I fail to see why this is
> required.
>
>
>
> Many thanks in advance.
>
>
>
> Kind regards
>
>
> Ziggy Maes
> DevOps Engineer
>
>
> www.be-mobile.com
>
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
