Hi Henrik and Matteo,

While I agree with Henrik that increasing your replication factor won’t improve 
recovery or read performance on its own: if you are changing from replica 2 to 
replica 3, you may need to scale out your cluster to have enough space for the 
additional replica, and that extra hardware is what would improve recovery and 
read performance.
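As a back-of-the-envelope illustration (not an official sizing method), usable capacity is roughly raw space divided by the replication factor, so moving from replica 2 to replica 3 on the 150 TB raw cluster described below shrinks usable space considerably:

```python
# Rough capacity arithmetic for a replica change; raw_tb is taken from the
# cluster described in this thread, the formula is the usual raw / replicas.

raw_tb = 150.0  # total raw space of the cluster

def usable_tb(raw: float, replicas: int) -> float:
    """Each object is stored `replicas` times, so usable space is raw / replicas."""
    return raw / replicas

print(usable_tb(raw_tb, 2))  # 75.0 TB usable at replica 2
print(usable_tb(raw_tb, 3))  # 50.0 TB usable at replica 3 -- hence the scale-out
```

This ignores the full/nearfull ratios Ceph enforces, so real headroom is smaller still.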

Cheers,
Maxime

From: ceph-users <[email protected]> on behalf of Henrik Korkuc 
<[email protected]>
Date: Friday 3 March 2017 11:35
To: "[email protected]" <[email protected]>
Subject: Re: [ceph-users] replica questions

On 17-03-03 12:30, Matteo Dacrema wrote:
Hi All,

I have a production cluster made of 8 nodes and 166 OSDs, with 4 journal SSDs 
per node (one for every 5 OSDs) and replica 2, for a total raw space of 150 TB.
I have a few questions about it:


  *   Is it critical to be running replica 2? Why?
Replica size 3 is highly recommended. I do not know the exact numbers, but it 
decreases the chance of data loss, as two overlapping disk failures appear to 
be quite a frequent occurrence, especially in larger clusters.
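Henrik's point can be made concrete with a very rough model. This is only an illustrative sketch: the annual failure rate, recovery window, and peer-disk count below are assumed values, failures are treated as independent (real failures are correlated, so actual risk is higher), and it is not an official Ceph durability calculation:

```python
# Crude estimate of data-loss probability after one disk fails, comparing
# replica 2 (one more overlapping failure loses data) with replica 3 (two
# more overlapping failures are needed). All inputs are assumptions.

afr = 0.02            # assumed 2% annual failure rate per disk
recovery_hours = 24   # assumed time to re-replicate the failed disk's data
peers = 100           # assumed number of disks sharing PGs with the failed one

# Chance a given peer disk also fails during the recovery window.
p_window = afr * recovery_hours / (365 * 24)

# Replica 2: losing any one peer during recovery loses data.
p_loss_r2 = 1 - (1 - p_window) ** peers

# Replica 3: a further overlapping peer failure is needed on top of that.
p_loss_r3 = p_loss_r2 * (1 - (1 - p_window) ** peers)

print(p_loss_r2, p_loss_r3)
```

Even in this optimistic independent-failure model, replica 3 cuts the loss probability by orders of magnitude, which is why size 3 is the usual recommendation.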


  *   Does replica 3 make recovery faster?
no


  *   Does replica 3 make rebalancing and recovery less heavy for customers? 
If I lose 1 node, does replica 3 reduce the I/O impact compared to replica 2?
no


  *   Does read performance increase with replica 3?
no


Thank you
Regards
Matteo


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


