Hi Alexandre,

On 01/02/2015 18:15, Alexandre DERUMIER wrote:
> Hi,
> 
> I'm currently trying to understand how to setup correctly a pool with erasure 
> code
> 
> 
> https://ceph.com/docs/v0.80/dev/osd_internals/erasure_coding/developer_notes/
> 
> 
> My cluster is 3 nodes with 6 osd for each node (18 osd total).
> 
> I want to be able to survive 2 disk failures, but also a full node failure.

If you have K=2, M=1 you will survive one node failure. If your failure domain 
is the host (i.e. there is never more than one chunk per node for any given 
object), it will also survive two disk failures within a given node, because 
only one of them will hold a chunk. It won't be able to resist the simultaneous 
failure of two OSDs that belong to two different nodes: that would be the same 
as two simultaneous node failures.
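
This reasoning can be sanity-checked with a small sketch (plain Python, not Ceph code; it only assumes the two facts above: with a host failure domain each node holds at most one chunk, and an object is readable as long as K of its K+M chunks survive):

```python
# Sketch: with failure domain = host, at most one chunk lives on each node,
# so losing a node loses exactly one chunk of any given object.
# An object can still be reconstructed while >= K chunks remain.

def survives_node_failures(k, m, failed_nodes):
    """Return True if an object is still readable after losing that many nodes."""
    remaining_chunks = (k + m) - failed_nodes
    return remaining_chunks >= k

# K=2, M=1 on a 3-node cluster, one chunk per node:
print(survives_node_failures(2, 1, 1))  # one node down  -> True
print(survives_node_failures(2, 1, 2))  # two nodes down -> False
```

The two-OSDs-on-different-nodes case is the second call: from the object's point of view it is indistinguishable from losing two nodes.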

> 
> What is the best setup for this ? Do I need M=2 or M=6 ?
> 
> 
> 
> 
> Also, how do I determine the best chunk number ?
> 
> for example,
> K = 4 , M=2
> K = 8 , M=2
> K = 16 , M=2
> 
> With each config you can lose 2 OSDs, but the more data chunks you have, the 
> less space is used by coding chunks, right ?

Yes.
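
The usable fraction of raw space is K/(K+M); this is simple arithmetic, nothing Ceph-specific:

```python
# Sketch: raw-space efficiency of the configurations mentioned above.
# K data chunks are stored alongside M coding chunks, so the usable
# fraction of raw capacity is K / (K + M).

def usable_fraction(k, m):
    return k / (k + m)

for k, m in [(4, 2), (8, 2), (16, 2)]:
    print(f"K={k}, M={m}: {usable_fraction(k, m):.1%} usable")
# K=4  -> 66.7% usable
# K=8  -> 80.0% usable
# K=16 -> 88.9% usable
```

So for a fixed M, increasing K reduces the relative cost of redundancy.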

> Does the number of chunks have a performance impact ? (read/write ?)

If there are more chunks there is additional computation overhead, but I'm 
not sure what the impact is. I suspect it's not significant, but I never 
actually measured it.

Cheers


> 
> Regards,
> 
> Alexandre
> 
> 
> 
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

