This week at the OpenStack Summit in Vancouver I heard people entertaining
the idea of running Ceph with a replication factor of 2.

Karl Vietmeier of Intel suggested using 2x replication because BlueStore
comes with checksums.
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21370/supporting-highly-transactional-and-low-latency-workloads-on-ceph
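
For reference, BlueStore checksums all data by default and verifies the
checksum on each read; crc32c is the default algorithm. A minimal
ceph.conf sketch, shown only as an illustration of the setting that
argument relies on:

    [osd]
    # crc32c is already the default; setting this to 'none' would
    # disable the very protection the 2x argument depends on
    bluestore csum type = crc32c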

Later, during the Ceph DR/mirroring talk, a question came from the
audience: could we use 2x replication if we also mirror to a DR site?
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20749/how-to-survive-an-openstack-cloud-meltdown-with-ceph
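
For context, pool-level RBD mirroring along those lines might look
roughly like this; the pool name 'volumes' and the peer name are
placeholders I made up for illustration:

    # mirror every image in the pool that has the journaling feature
    # enabled ('volumes' is a placeholder pool name)
    rbd mirror pool enable volumes pool
    # register the DR cluster as a peer; the rbd-mirror daemon on the
    # DR side replays the image journals
    rbd mirror pool peer add volumes client.rbd-mirror@dr-site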

So the interest is definitely there: not losing a third of your raw disk
space, plus the performance gain, is appealing. On the other hand, it
comes with higher risk.
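
To put numbers on the space argument:

    size=3: 1 TB of data consumes 3 TB raw  (33% usable)
    size=2: 1 TB of data consumes 2 TB raw  (50% usable)

i.e. going from 3x to 2x frees a third of the raw space the pool
previously consumed.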

I wonder if we as a community could come to some consensus, now that the
established practice of requiring size=3, min_size=2 is being challenged.


My thoughts on the subject: even though checksums do let you identify the
corrupt replica without having to work out which 2 of the 3 copies agree,
that was not the only reason min_size=2 was required. Even if you run all
SSDs, which are more reliable than HDDs, and keep disks small enough to
backfill quickly after a single disk failure, you will still occasionally
have longer periods of degraded operation. To name a couple: a full node
going down, or an operator deliberately wiping an OSD to rebuild it. With
min_size=1, such periods leave you running with no redundancy at all.

A DR setup with pool-to-pool mirroring probably does not let you replace
lost or incomplete PGs at the main site from the DR copy, because the DR
cluster is likely to have a different PG layout. Losing a single disk
during one of those unprotected windows would therefore mean a full
resync from DR.
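
For concreteness, the configuration being debated comes down to this
(again, 'volumes' is a placeholder pool name):

    # the proposed 2x setup
    ceph osd pool set volumes size 2
    ceph osd pool set volumes min_size 1   # I/O continues on one copy

    # min_size=2 on a 2x pool closes the unprotected window, but then
    # a single OSD failure blocks I/O until recovery completes:
    ceph osd pool set volumes min_size 2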

What are your thoughts? Would you run a 2x replication factor in
production, and in what scenarios?

Regards,
Anthony
