Re: [ceph-users] Erasure coded pools and ceph failure domain setup

2019-03-04 Thread Hector Martin
On 02/03/2019 01:02, Ravi Patel wrote: Hello, my question is about how CRUSH distributes chunks throughout the cluster with erasure-coded pools. Currently, we have 4 OSD nodes with 36 drives (OSD daemons) per node. If we use ceph_failure_domain=host, then we are necessarily limited to k=3,m=1 or k=2,m=2 ...
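
One common way to relax the 4-host limit is a custom CRUSH rule that takes all 4 hosts and then places two chunks on each, so profiles up to k+m = 8 become mappable. The rule below is only a sketch (the rule name ec_two_per_host and the id are placeholders, not taken from the truncated reply above); it would be added by decompiling the CRUSH map with crushtool -d, editing it, recompiling with crushtool -c, and injecting it with ceph osd setcrushmap -i:

    # Hypothetical rule: choose all 4 hosts, then 2 OSDs on each host,
    # so up to k+m = 8 chunks can be placed across 4 hosts.
    rule ec_two_per_host {
        id 2
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 4 type host
        step chooseleaf indep 2 type osd
        step emit
    }

The EC pool would then reference this rule at creation time (ceph osd pool create <pool> <pg_num> <pgp_num> erasure <profile> ec_two_per_host). The trade-off is that a single failed host now takes out two chunks at once, so m should be at least 2 for the pool to stay available through a host failure.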

Re: [ceph-users] Erasure coded pools and ceph failure domain setup

2019-03-01 Thread Ravi Patel
Hello, my question is about how CRUSH distributes chunks throughout the cluster with erasure-coded pools. Currently, we have 4 OSD nodes with 36 drives (OSD daemons) per node. If we use ceph_failure_domain=host, then we are necessarily limited to k=3,m=1 or k=2,m=2. We would like to explore k>3, m>2 modes ...
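
To make the limitation concrete: with the failure domain set to host, CRUSH must place every one of the k+m chunks on a different host, so 4 hosts cap any profile at k+m = 4. A minimal sketch of such a baseline profile (the profile and pool names here are placeholders, not from the thread):

    # Baseline with host as the failure domain: k+m cannot exceed the
    # 4 hosts, e.g. 2 data chunks + 2 coding chunks.
    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get ec22
    ceph osd pool create ecpool 128 128 erasure ec22

A profile with k+m > 4 under this setting would leave placement groups undersized, because CRUSH cannot find a fifth independent host.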