The simple answer is that k+1 is the default min_size for EC pools.
min_size means that the pool will still accept writes if that many failure
domains are still available. If you set min_size to k, then you have entered
the dangerous territory that if you lose another failure domain (OSD or
host), writes may have been accepted with only k shards, i.e. with no
redundancy left, so any further failure means data loss.
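For reference, a minimal sketch of how the profile maps to the pool's
size/min_size on a Nautilus test cluster (profile and pool names here are
illustrative, not taken from the thread):

```shell
# Create a k=4, m=2 erasure-code profile (illustrative name).
ceph osd erasure-code-profile set ecpool-4-2 k=4 m=2

# Create an EC pool from it; size becomes k+m = 6 and, on Nautilus,
# min_size defaults to k+1 = 5.
ceph osd pool create ectest 32 32 erasure ecpool-4-2

ceph osd pool get ectest size       # expected: size: 6
ceph osd pool get ectest min_size   # expected: min_size: 5

# min_size can be lowered to k, but as explained above, writes could then
# be accepted with zero remaining redundancy while degraded:
# ceph osd pool set ectest min_size 4
```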
On Tue, 21 May 2019 at 19:32, Yoann Moulin wrote:
>
> >> I am doing some tests with Nautilus and cephfs on erasure coding pool.
[...]
> > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034867.html
>
> Oh thanks, I missed that thread, makes sense. I agree with some comment
> that it [...]
>> I am doing some tests with Nautilus and cephfs on erasure coding pool.
>>
>> I noticed something strange between k+m in my erasure profile and
>> size+min_size in the pool created:
>>
>>> test@icadmin004:~$ ceph osd erasure-code-profile get ecpool-4-2
>>> crush-device-class=
>>> [...]
Hi,
this question comes up regularly and is being discussed right now:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034867.html
Regards,
Eugen
Quoting Yoann Moulin:
Dear all,
I am doing some tests with Nautilus and cephfs on erasure coding pool.
I noticed something strange between k+m in my erasure profile and size+min_size
in the pool created:
> test@icadmin004:~$ ceph osd erasure-code-profile get ecpool-4-2
> crush-device-class=
> [...]
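The profile output above is cut off; for completeness, a sketch of how one
could inspect the resulting pool's values on a test cluster (the pool name
is an assumption, substitute your own):

```shell
# Show per-pool details; an EC pool built from a k=4, m=2 profile should
# report "size 6 min_size 5" with the Nautilus defaults discussed above.
ceph osd pool ls detail

# Or query the two values directly (pool name is illustrative):
ceph osd pool get cephfs_data_ec size
ceph osd pool get cephfs_data_ec min_size
```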