To start safely you need a replication factor of 3 and at least 4 nodes
(think size + 1), so you can take a node down for maintenance without
dropping below the replica count.
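
For example, a minimal sketch (assuming a pool named "rbd"; substitute your
own pool names):

    ceph osd pool set rbd size 3       # keep 3 copies of every object
    ceph osd pool set rbd min_size 2   # keep serving I/O while one copy is down

or, as defaults for pools created later, in ceph.conf:

    [global]
    osd_pool_default_size = 3
    osd_pool_default_min_size = 2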

On Fri, Jul 1, 2016 at 2:31 PM, Ashley Merrick <[email protected]>
wrote:

> Hello,
>
> Okay, that makes perfect sense.
>
> So if I run Ceph with a replication of 3, is it still required to run an odd
> number of OSD nodes?
>
> Or could I start with 4 OSD nodes and a replication of 3, with each replica
> on a separate server?
>
> ,Ashley Merrick
>
> -----Original Message-----
> From: ceph-users [mailto:[email protected]] On Behalf Of
> Tomasz Kuzemko
> Sent: 01 July 2016 13:28
> To: [email protected]
> Subject: Re: [ceph-users] CEPH Replication
>
> Still, in case of object corruption you will not be able to determine which
> copy is valid. Ceph does not provide data integrity checking with filestore
> (it's planned for bluestore).
>
> On 01.07.2016 14:20, David wrote:
> > It will work, but be aware that 2x replication is not a good idea if your
> > data is important. The exception would be if the OSDs are DC-class SSDs
> > that you monitor closely.
> >
> > On Fri, Jul 1, 2016 at 1:09 PM, Ashley Merrick <[email protected]> wrote:
> >
> >     Hello,
> >
> >     Perfect, I want to keep the copies on separate nodes, so I wanted to
> >     make sure that is the expected behaviour.
> >
> >     And no issues with running an odd number of nodes for a replication
> >     of 2? I know the monitors need quorum; I just wanted to make sure it
> >     would not cause problems when running an even replication factor.
> >
> >     Will be adding nodes in the future as required, but will always keep
> >     an uneven number.
> >
> >     ,Ashley
> >
> >     -----Original Message-----
> >     From: ceph-users [mailto:[email protected]] On Behalf Of
> >     [email protected]
> >     Sent: 01 July 2016 13:07
> >     To: [email protected]
> >     Subject: Re: [ceph-users] CEPH Replication
> >
> >     It will put each object on 2 OSDs, on 2 separate nodes. All nodes,
> >     and all OSDs, will have approximately the same used space.
> >
> >     If you want to allow both copies of an object to be stored on the
> >     same node, you should use osd_crush_chooseleaf_type = 0 (see
> >     http://docs.ceph.com/docs/master/rados/operations/crush-map/#crush-map-bucket-types
> >     and
> >     http://docs.ceph.com/docs/hammer/rados/configuration/pool-pg-config-ref/)
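> >
> >     For example, a minimal ceph.conf sketch (this affects the default
> >     CRUSH rule Ceph generates, so it is usually set before the cluster
> >     is deployed):
> >
> >         [global]
> >         # 0 = osd: both replicas may end up on the same node
> >         # 1 = host: replicas are placed on separate nodes (the default)
> >         osd_crush_chooseleaf_type = 0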
> >
> >
> >     On 01/07/2016 13:49, Ashley Merrick wrote:
> >     > Hello,
> >     >
> >     > Looking at setting up a new Ceph cluster, starting with the
> >     > following.
> >     >
> >     > 3 x Ceph OSD servers
> >     >
> >     > Each Server:
> >     >
> >     > 20Gbps Network
> >     > 12 OSDs
> >     > SSD Journal
> >     >
> >     > Looking at running with a replication of 2, will there be any
> >     > issues using 3 nodes with a replication of two? This should
> >     > "technically" give me half the available total capacity of the
> >     > 3 nodes?
> >     >
> >     > Will the CRUSH map automatically set up each server's 12 OSDs as a
> >     > separate group, so that the two replicated objects are put on
> >     > separate OSD servers?
> >     >
> >     > Thanks,
> >     > Ashley
> >     >
>
> --
> Tomasz Kuzemko
> [email protected]
>
