When we use a replicated pool of size 3, for example, each piece of data (a
4 MB block) is written to one PG, which is mapped to 3 OSDs on different
hosts (by default). The OSD holding the primary copy then replicates the
block to the OSDs holding the second and third copies.

With erasure code, let's take a RAID-5-like schema such as k=2 and m=1.
Does Ceph buffer the data until it reaches 8 MB, which it can then divide
into two 4 MB data blocks plus a 4 MB parity block? Or does it just divide
the data into two chunks, whatever the size? Will it then use PG1 on OSD.A
to store the first data block, PG1 on OSD.X to store the second data block,
and PG1 on OSD.Z to store the parity?
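To make the question concrete, here is a minimal sketch of what a k=2, m=1
scheme looks like in principle: split an object into two equal data chunks
and compute one XOR parity chunk, so any single lost chunk can be rebuilt
from the other two. This is NOT Ceph's actual implementation (Ceph uses
erasure code plugins such as jerasure and stripes objects into smaller
stripe units); the helper names `encode_k2_m1` and `recover_chunk` are
made up for illustration.

```python
def encode_k2_m1(data: bytes) -> tuple[bytes, bytes, bytes]:
    """Split data into two equal chunks (zero-padded) plus one XOR parity chunk.

    For m=1, XOR is sufficient: parity = d0 XOR d1, byte by byte.
    """
    half = (len(data) + 1) // 2
    d0 = data[:half].ljust(half, b"\0")
    d1 = data[half:].ljust(half, b"\0")
    parity = bytes(a ^ b for a, b in zip(d0, d1))
    return d0, d1, parity


def recover_chunk(surviving_a: bytes, surviving_b: bytes) -> bytes:
    """Rebuild the one missing chunk as the XOR of the two surviving chunks.

    Works for any of the three chunks, since XOR is its own inverse:
    d0 = d1 ^ p, d1 = d0 ^ p, p = d0 ^ d1.
    """
    return bytes(a ^ b for a, b in zip(surviving_a, surviving_b))


# Example: lose the first data chunk, rebuild it from the second + parity.
d0, d1, p = encode_k2_m1(b"some object payload")
assert recover_chunk(d1, p) == d0
```

In a hypothetical placement like the one asked about, d0, d1, and p would
each land on a different OSD acting for the same PG.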

Thanks for your explanation, because I didn't find any clear explanation of
how data chunks and parity are handled.
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com