>>Is it possible for all replicas of an object to be saved on the same node? 

No (unless you wrongly modify the crushmap manually).

>>Is it possible to lose any? 
With replica x2, if you lose 2 OSDs on 2 different nodes that hold copies of the 
same object, you'll lose that object.
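
If you want to check which OSDs hold the copies of a given object, a command along 
these lines will show you (the pool name "rbd" and object name "myobject" here are 
only placeholders):

    ceph osd map rbd myobject

The output lists the PG the object maps to and the up/acting OSD set; with the 
default rules those OSDs should always sit on different hosts.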

>>Is there a mechanism that prevents replicas from being stored on another OSD in 
>>the same node? 

The CRUSH map! With the default rule, each replica is placed on a different host.
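
For reference, a default replicated CRUSH rule looks roughly like this (an example 
only; decompile your own map to see the real one):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            step chooseleaf firstn 0 type host
            step emit
    }

The line "step chooseleaf firstn 0 type host" is what forces each replica onto a 
different host; if it said "type osd" instead, two replicas could end up on 
different OSDs of the same node. You can inspect your own map with:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt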



----- Original Message -----
From: "Azad Aliyar" <[email protected]>
To: "ceph-users" <[email protected]>
Sent: Friday, March 6, 2015 11:08:47
Subject: [ceph-users] Multiple OSD's in a Each node with replica 2

I have a doubt. In a scenario of 3 nodes x 4 OSDs each x 2 replicas, I tested with 
a node down, and as long as you have space available, all objects were still there. 

Is it possible for all replicas of an object to be saved on the same node? 

Is it possible to lose any? 

Is there a mechanism that prevents replicas from being stored on another OSD in the 
same node? 

I would love for someone to answer this, and any information is highly appreciated. 

-- 
                Warm Regards, 
        Azad Aliyar 

        Linux Server Engineer 

        Email : [email protected] | Skype : spark.azad               
        3rd Floor, Leela Infopark, Phase -2,Kakanad, Kochi-30, Kerala, India 
        Phone :+91 484 6561696 , Mobile :91-8129270421. 
        Confidentiality Notice: Information in this e-mail is proprietary to 
SparkSupport and is intended for use only by the addressee, and may contain 
information that is privileged, confidential or exempt from disclosure. If you 
are not the intended recipient, you are notified that any use of this 
information in any manner is strictly prohibited. Please delete this mail & 
notify us immediately at [email protected] 

_______________________________________________ 
ceph-users mailing list 
[email protected] 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
