Just so you're aware of why that's the case, the line
step chooseleaf firstn 0 type host
in your crush map under the rules section says "host". If you changed that
to "osd", then your replicas would be unique per OSD instead of per
server. If you had a larger cluster and changed it to "rack" an
2017-06-26 11:48 GMT+02:00 Ashley Merrick :
> You're going across hosts, so each replica will be on a different host.
>
Thanks :)
You're going across hosts, so each replica will be on a different host.
,Ashley
Sent from my iPhone
On 26 Jun 2017, at 4:39 PM, Stéphane Klein
<cont...@stephane-klein.info> wrote:
2017-06-26 11:15 GMT+02:00 Ashley Merrick
<ash...@amerrick.co.uk>:
Will need to see a full export of your crush map rules.
2017-06-26 11:15 GMT+02:00 Ashley Merrick :
> Will need to see a full export of your crush map rules.
>
These are my crush map rules:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable choosel
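For reference, the full map can be dumped and decompiled with the standard tools; a rough sketch, with example file names:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

The "rules" section at the end of crushmap.txt contains the "step chooseleaf firstn 0 type ..." line that sets the failure domain.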
Hello,
Will need to see a full export of your crush map rules.
Depends on what the failure domain is set to.
,Ash
Sent from my iPhone
On 26 Jun 2017, at 4:11 PM, Stéphane Klein
<cont...@stephane-klein.info> wrote:
Hi,
I have this OSD tree:
root@ceph-storage-rbx-1:~# ceph osd tree
ID WEIGHT   TYPE NAME                   UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 21.70432 root default
-2 10.85216     host ceph-storage-rbx-1
 0  3.61739         osd.0                    up      1.0              1.0
 2  3.61739         os
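If you want to see where a given object's replicas actually land, something like the following can be used ("rbd" and "test-object" are just example pool and object names):

ceph osd map rbd test-object

It prints the placement group and the up/acting OSD set for that object, which can be matched against the "ceph osd tree" output above to see which host each replica ends up on.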