Hi Kevin,

 

Ceph by default will make sure no copies of the data are on the same host. So 
with a replica count of 3, you could lose 2 hosts without losing any data or 
operational ability. If by some luck all the disk failures were confined to 2 
hosts, you could in theory have up to 8 disks fail. Otherwise, if the disk 
failures are spread across the hosts, you can only withstand 2 disk failures.
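The arithmetic above can be sketched out, assuming the default CRUSH rule that places each replica on a distinct host (the function below is just illustrative arithmetic, not a Ceph API):

```python
def max_failures(num_hosts, osds_per_host, replicas):
    # With one replica per host, losing (replicas - 1) whole hosts still
    # leaves one copy of every object.
    hosts_tolerated = replicas - 1
    # Best case: every failed disk happens to sit on those same hosts,
    # so up to hosts_tolerated * osds_per_host disks can fail.
    best_case_disks = hosts_tolerated * osds_per_host
    # Worst case: each failure lands on a different host, so only
    # (replicas - 1) disks can fail before some PG loses all its copies.
    worst_case_disks = replicas - 1
    return hosts_tolerated, best_case_disks, worst_case_disks

# Your cluster: 4 hosts, 4 OSDs each, replica 3
hosts, best, worst = max_failures(num_hosts=4, osds_per_host=4, replicas=3)
print(hosts, best, worst)  # 2 8 2
```

That matches the numbers above: 2 whole hosts, up to 8 disks if the failures are conveniently placed, but only 2 disks if they are spread around.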

 

Nick

 

From: ceph-users [mailto:[email protected]] On Behalf Of kevin 
parrikar
Sent: 09 June 2015 16:54
To: [email protected]
Subject: [ceph-users] calculating maximum number of disk and node failure that 
can be handled by cluster with out data loss

 

I have a 4-node cluster, each node with 5 disks (4 OSDs and 1 operating-system 
disk; the cluster also hosts 3 monitor processes), with the default replica 
count of 3.

 

Total OSD disks : 16 

Total Nodes : 4

 

How can I calculate the:

*       Maximum number of disk failures my cluster can handle without any 
impact on current data and new writes.
*       Maximum number of node failures my cluster can handle without any 
impact on current data and new writes.

Thanks for any help




_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
