Just as long as you are aware that size=3, min_size=2 is the right config for everyone except those who really know what they are doing. And if you ever run min_size=1, you had better expect to corrupt your cluster sooner or later.
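
For reference (the pool name "mypool" below is just an example for a replicated pool), these are per-pool settings you can check and adjust with the usual CLI:

  # check the current values
  ceph osd pool get mypool size
  ceph osd pool get mypool min_size

  # keep 3 copies; refuse I/O with fewer than 2 replicas available
  ceph osd pool set mypool size 3
  ceph osd pool set mypool min_size 2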

Ronny

On 05.12.2017 21:22, Denes Dolhay wrote:
Hi,

So for this to happen you have to lose another OSD before backfilling is done.


Thank you! This clarifies it!

Denes



On 12/05/2017 03:32 PM, Ronny Aasen wrote:
On 05. Dec. 2017 10:26, Denes Dolhay wrote:
Hi,

This question has popped up a few times already, under both filestore and bluestore, but please help me understand why this is.

"when you have 2 different objects, both with correct digests, in your cluster, the cluster can not know witch of the 2 objects are the correct one."

Doesn't it use an epoch, or an omap epoch, when storing new data? If so, why can it not use the most recent one?




This has been discussed a few times on the list. Generally, you have 2 disks.

The first disk fails, and writes happen to the other disk.

The first disk recovers, and the second disk fails before recovery is done. Writes now happen to the first disk.

All objects have correct checksums, and both OSDs think they are the correct one, so your cluster is inconsistent. Bluestore checksums therefore do not solve this problem; both objects are objectively "correct" :)


With min_size=2, the cluster would not accept a write unless 2 disks accepted the write.
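
So in the scenario above, with min_size=2 and only one replica left, the affected PGs would simply go inactive and block I/O instead of quietly diverging; roughly, you would see them with e.g.:

  ceph health detail
  ceph pg dump_stuck inactive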

kind regards
Ronny Aasen


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

