Hello,
I have a problem with the following CRUSH map:
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1
# devices
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9
device 10 osd.10
device 11 osd.11
# types
type 0 device
type 1 host
type 2 chassis
type 3 rack
type 4 room
type 5 datacenter
type 6 root
# buckets
host testctrcephosd1 {
	id -1 # do not change unnecessarily
	# weight 3.000
	alg straw
	hash 0 # rjenkins1
	item osd.0 weight 1.000
	item osd.1 weight 1.000
	item osd.2 weight 1.000
}
host testctrcephosd2 {
	id -2 # do not change unnecessarily
	# weight 3.000
	alg straw
	hash 0 # rjenkins1
	item osd.3 weight 1.000
	item osd.4 weight 1.000
	item osd.5 weight 1.000
}
host testctrcephosd3 {
	id -3 # do not change unnecessarily
	# weight 3.000
	alg straw
	hash 0 # rjenkins1
	item osd.6 weight 1.000
	item osd.7 weight 1.000
	item osd.8 weight 1.000
}
host testctrcephosd4 {
	id -4 # do not change unnecessarily
	# weight 3.000
	alg straw
	hash 0 # rjenkins1
	item osd.9 weight 1.000
	item osd.10 weight 1.000
	item osd.11 weight 1.000
}
chassis chassis1 {
	id -5 # do not change unnecessarily
	# weight 6.000
	alg straw
	hash 0 # rjenkins1
	item testctrcephosd1 weight 3.000
	item testctrcephosd2 weight 3.000
}
chassis chassis2 {
	id -6 # do not change unnecessarily
	# weight 6.000
	alg straw
	hash 0 # rjenkins1
	item testctrcephosd3 weight 3.000
	item testctrcephosd4 weight 3.000
}
room salle1 {
	id -7
	# weight 6.000
	alg straw
	hash 0
	item chassis1 weight 6.000
}
room salle2 {
	id -8
	# weight 6.000
	alg straw
	hash 0
	item chassis2 weight 6.000
}
root dc1 {
	id -9
	# weight 12.000
	alg straw
	hash 0
	item salle1 weight 6.000
	item salle2 weight 6.000
}
# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take dc1
	step chooseleaf firstn 0 type host
	step emit
}
rule dc {
	ruleset 1
	type replicated
	min_size 2
	max_size 10
	step take dc1
	step choose firstn 0 type room
	step chooseleaf firstn 0 type chassis
	step emit
}
# end crush map
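As an aside, the rules can be tested offline with crushtool, without touching the cluster. Something like the following should replay rule 1 with two replicas (crushmap.txt is the decompiled map above; the --weight options zero out osd.3, osd.4 and osd.5 to simulate host "testctrcephosd2" being down, which is the failure case I describe below):

# compile the edited map, then replay rule 1 with 2 replicas
crushtool -c crushmap.txt -o crush.bin
crushtool -i crush.bin --test --rule 1 --num-rep 2 --show-mappings
# same thing, but simulating testctrcephosd2 being down
crushtool -i crush.bin --test --rule 1 --num-rep 2 --show-mappings \
    --weight 3 0 --weight 4 0 --weight 5 0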
For reference, here is the output of "ceph osd tree":

ID WEIGHT   TYPE NAME                        UP/DOWN REWEIGHT PRIMARY-AFFINITY
-9 12.00000 root dc1
-7  6.00000     room salle1
-5  6.00000         chassis chassis1
-1  3.00000             host testctrcephosd1
 0  1.00000                 osd.0                 up  1.00000          1.00000
 1  1.00000                 osd.1                 up  1.00000          1.00000
 2  1.00000                 osd.2                 up  1.00000          1.00000
-2  3.00000             host testctrcephosd2
 3  1.00000                 osd.3                 up  1.00000          1.00000
 4  1.00000                 osd.4                 up  1.00000          1.00000
 5  1.00000                 osd.5                 up  1.00000          1.00000
-8  6.00000     room salle2
-6  6.00000         chassis chassis2
-3  3.00000             host testctrcephosd3
 6  1.00000                 osd.6                 up  1.00000          1.00000
 7  1.00000                 osd.7                 up  1.00000          1.00000
 8  1.00000                 osd.8                 up  1.00000          1.00000
-4  3.00000             host testctrcephosd4
 9  1.00000                 osd.9                 up  1.00000          1.00000
10  1.00000                 osd.10                up  1.00000          1.00000
11  1.00000                 osd.11                up  1.00000          1.00000
Placement at creation time is fine; my data gets replicated across the two rooms:
ceph osd map rbdnew testvol1
osdmap e127 pool 'rbdnew' (1) object 'testvol1' -> pg 1.c657d5a4 (1.a4) -> up ([9,5], p9) acting ([9,5], p9)
But when one of these hosts is down, I want CRUSH to create another replica on the other host in the same room. For example, when host "testctrcephosd2" is down, I want CRUSH to create another copy on "testctrcephosd1" (while keeping a copy on one of the hosts in room "salle2").
Instead of this, the cluster keeps using only one OSD (instead of two):
ceph osd map rbdnew testvol1
osdmap e130 pool 'rbdnew' (1) object 'testvol1' -> pg 1.c657d5a4 (1.a4) -> up ([9], p9) acting ([9], p9)
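The only workaround I can think of so far is a rule along these lines, which explicitly picks the two rooms and then lets chooseleaf descend to a host inside each room, in the hope that it retries the other host of the same room when the first one fails. This is only a sketch (the rule name and ruleset number are made up, and I have not validated it):

rule dc_hostfallback {
	ruleset 2
	type replicated
	min_size 2
	max_size 10
	step take dc1
	step choose firstn 0 type room
	step chooseleaf firstn 1 type host
	step emit
}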
Do you have any idea how to achieve this?
Regards
Christophe