Hi all,

Whilst on a training course recently I was told that 'min_size' had an
effect on client write performance: that it is the required number of
copies before Ceph reports back to the client that an object has been
written, and that therefore setting 'min_size' to 0 would only require
a write to be accepted by the journal before it is confirmed to the
client.

This is contrary to further reading elsewhere, which says that
'min_size' is the minimum number of copies of an object required to
allow I/O, and that 'size' (the desired number of replicas) is the
parameter that would affect write speed.

Setting 'min_size' to 0 with a 'size' of 3 would still give an
effective 'min_size' of 2 (since 3 - floor(3/2) = 2), per:

https://raw.githubusercontent.com/ceph/ceph/master/doc/release-notes.rst

"* Degraded mode (when there fewer than the desired number of replicas)
is now more configurable on a per-pool basis, with the min_size
parameter. By default, with min_size 0, this allows I/O to objects
with N - floor(N/2) replicas, where N is the total number of
expected copies. Argonaut behavior was equivalent to having min_size
= 1, so I/O would always be possible if any completely up to date
copy remained. min_size = 1 could result in lower overall
availability in certain cases, such as flapping network partition"

This leads me to conclude that changing 'min_size' has nothing to do
with performance but is solely related to data integrity/resilience.

Could someone confirm my assertion is correct?

Many thanks

James

