I don't have a fresh cluster on hand to double-check, but the default is to select a different host for each replica. You can adjust that to fit your needs; we use cabinet as the failure domain so that we can lose an entire cabinet of storage and still function.
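Off the top of my head, the relevant part of a decompiled CRUSH map looks something like the rule below (rule and bucket names here are just the usual defaults, so compare against your own map rather than taking this verbatim):

    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # "type host" is the failure domain: each replica goes to a
            # different host. In our map we use "type cabinet" (a bucket
            # type we define) so replicas spread across cabinets instead.
            step chooseleaf firstn 0 type host
            step emit
    }

The bucket type named on the "step chooseleaf" line is what controls how far apart the replicas have to be.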
In order to store multiple replicas on the same node, you will need to change that failure domain from host to osd. Please see http://ceph.com/docs/master/rados/operations/crush-map/ for the details; a rough sketch of the edit cycle is at the bottom of this mail, below the quoted message.

On Tue, Mar 3, 2015 at 7:39 PM, Azad Aliyar <[email protected]> wrote:
>
> I have a question about a scenario of 3 nodes x 4 OSDs each x 2 replicas. I
> tested it with one node down, and as long as space was available all
> objects were still there.
>
> Is it possible for all replicas of an object to be stored on the same node?
>
> Is it possible to lose any?
>
> Is there a mechanism that prevents replicas from being stored on another
> OSD in the same node?
>
> I would love for someone to answer this, and any information is highly
> appreciated.
>
> --
> Warm Regards, Azad Aliyar
> Linux Server Engineer
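P.S. Since the question was about replicas ending up on the same node, here is the edit cycle I had in mind, from memory, so double-check the flags against the docs linked above (file names are arbitrary):

    # dump and decompile the current CRUSH map
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt

    # edit crushmap.txt: in the rule used by your pools, change
    #   step chooseleaf firstn 0 type host
    # to
    #   step chooseleaf firstn 0 type osd

    # recompile and inject the modified map
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Keep in mind that with "type osd" CRUSH is free to put both replicas of an object on the same host, so losing that one node can lose both copies; that is exactly the situation the default host failure domain is meant to prevent.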
