You're going across hosts, so each replica will be on a different host.
,Ashley

Sent from my iPhone

On 26 Jun 2017, at 4:39 PM, Stéphane Klein <[email protected]<mailto:[email protected]>> wrote:

2017-06-26 11:15 GMT+02:00 Ashley Merrick <[email protected]<mailto:[email protected]>>:

Will need to see a full export of your crush map rules.

Here are my crush map rules:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable straw_calc_version 1

# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-storage-rbx-1 {
        id -2           # do not change unnecessarily
        # weight 10.852
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 3.617
        item osd.2 weight 3.617
        item osd.4 weight 3.617
}
host ceph-storage-rbx-2 {
        id -3           # do not change unnecessarily
        # weight 10.852
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 3.617
        item osd.3 weight 3.617
        item osd.5 weight 3.617
}
root default {
        id -1           # do not change unnecessarily
        # weight 21.704
        alg straw
        hash 0  # rjenkins1
        item ceph-storage-rbx-1 weight 10.852
        item ceph-storage-rbx-2 weight 10.852
}

# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map
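To illustrate why "step chooseleaf firstn 0 type host" puts replicas on different hosts: the rule first selects distinct host buckets, then descends to one OSD leaf under each, so two replicas can never share a host. The Python sketch below is a toy model of that idea only (it is not the real CRUSH algorithm; the hash-ranking and the place() helper are made up for illustration), using the two-host hierarchy from the map above:

```python
import hashlib

# The hierarchy from the crush map above: two hosts, three OSDs each.
hosts = {
    "ceph-storage-rbx-1": ["osd.0", "osd.2", "osd.4"],
    "ceph-storage-rbx-2": ["osd.1", "osd.3", "osd.5"],
}

def place(pg_id: str, replicas: int) -> list[str]:
    """Toy stand-in for 'step chooseleaf firstn 0 type host':
    rank hosts by a hash of (pg_id, host name), take the first
    `replicas` distinct hosts, then pick one OSD under each."""
    def score(name: str) -> int:
        # Deterministic pseudo-random rank per PG, like CRUSH's
        # straw draw (grossly simplified).
        return int(hashlib.md5(f"{pg_id}:{name}".encode()).hexdigest(), 16)

    ranked_hosts = sorted(hosts, key=score)
    return [min(hosts[h], key=score) for h in ranked_hosts[:replicas]]

# With pool size 2 and only host-distinct choices available,
# the two replicas always land on different hosts.
osds = place("pg.1.2a", 2)
```

Because hosts are chosen before OSDs, losing one whole host still leaves a full copy of every PG on the other, which is exactly the property the rule is there to guarantee.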
_______________________________________________ ceph-users mailing list [email protected] http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
