I understand the concept of Ceph being able to recover from the failure of an 
OSD (presumably with a single OSD being on a single disk), but I'm wondering 
what the scenario is if an OSD server node containing multiple disks should 
fail.  Presuming you have a server containing 8-10 disks, your replicated 
placement groups could end up on the same system.  The diagrams I've seen show 
replicas going to separate nodes, but is this in fact how Ceph handles it?
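For context, this behavior is controlled by the CRUSH map rather than being
hard-coded: the stock replicated rule uses a "chooseleaf ... type host" step,
which selects each replica from a different host, not merely a different OSD.
A minimal sketch of such a rule is below (the rule and root names shown are the
common defaults; an actual cluster's CRUSH map may differ):

```
rule replicated_rule {
    id 0
    type replicated
    step take default                  # start from the "default" root bucket
    step chooseleaf firstn 0 type host # pick one OSD under each of N distinct hosts
    step emit
}
```

With the failure domain set to "host" like this, losing a whole server with
8-10 disks costs at most one copy of any placement group.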
Copyright (c) 2014 Cigna
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com