On 02/03/2019 01:02, Ravi Patel wrote:
Hello,
My question is how crush distributes chunks throughout the cluster with
erasure coded pools. Currently, we have 4 OSD nodes with 36 drives (OSD
daemons) per node. If we use crush-failure-domain=host, then each of the
k+m chunks must be placed on a distinct host, so we are necessarily limited
to k=3,m=1 or k=2,m=2. We would like to explore k>3, m>2 modes.
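
For reference, the profiles we can run today look something like this
(a sketch using the standard erasure-code-profile commands; the profile
and pool names are just placeholders):

    ceph osd erasure-code-profile set ec31 k=3 m=1 crush-failure-domain=host
    ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host

Anything with k+m > 4 (say k=4,m=2) would need six distinct hosts, so as
far as we can tell it only places successfully if we drop the failure
domain to osd, e.g.:

    ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=osd
    ceph osd pool create ecpool 128 128 erasure ec42

but then several chunks of a placement group can land on the same host,
which defeats the point of host-level fault tolerance.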