Hello,

Perfect, I want to keep the copies on separate nodes, so I wanted to make sure the 
expected behaviour was that it would do that.

And no issues with running an odd number of nodes with a replication of 2? I 
know quorum comes into it; I just wanted to make sure it would not affect 
anything when running an even replication factor.

I will be adding nodes in the future as required, but will always keep an uneven 
number.

Ashley

-----Original Message-----
From: ceph-users [mailto:[email protected]] On Behalf Of 
[email protected]
Sent: 01 July 2016 13:07
To: [email protected]
Subject: Re: [ceph-users] CEPH Replication

It will put each object on 2 OSDs, on 2 separate nodes. All nodes, and all OSDs, 
will have approximately the same used space.
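If you want to double-check placement, "ceph osd map <pool> <object>" prints the 
PG and the up/acting OSD set for any object name, e.g. (the pool and object name 
below are only examples):

    ceph osd map rbd test-object

With the default rule the two OSD ids in the acting set will sit on different hosts.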

If you want to allow both copies of an object to be stored on the same node, 
you should use osd_crush_chooseleaf_type = 0 (see 
http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-bucket-types
and
http://docs.ceph.com/docs/hammer/rados/configuration/pool-pg-config-ref/)
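For reference, the separation level comes from the "chooseleaf ... type <bucket>" 
step in the CRUSH rule; osd_crush_chooseleaf_type only affects the rule that is 
generated when the initial CRUSH map is created. A rough sketch (rule name and 
ids are just the defaults, adjust for your cluster):

    [global]
    # 0 = osd, 1 = host (default); with 0 both replicas may land on the same node
    osd_crush_chooseleaf_type = 0

which ends up as a rule along the lines of:

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd    # 'type host' keeps the copies on separate nodes
        step emit
    }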


On 01/07/2016 13:49, Ashley Merrick wrote:
> Hello,
> 
> Looking at setting up a new CEPH Cluster, starting with the following.
> 
> 3 x CEPH OSD Servers
> 
> Each Server:
> 
> 20Gbps Network
> 12 OSDs
> SSD Journal
> 
> Looking at running with a replication of 2: will there be any issues using 3 
> nodes with a replication of two? This should "technically" give me ½ the 
> available total capacity of the 3 nodes?
> 
> Will the CRUSH map automatically set up each node's 12 OSDs as a separate group, so 
> that the two replicated objects are put on separate OSD servers?
> 
> Thanks,
> Ashley
> 
> 
> 
> 

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
