Hi,
I built a new cluster following the tutorial at
"http://ceph.com/docs/master/start/", and ended up with a bunch of degraded PGs.
ceph-osd1:~# ceph -s
cluster 00f2b37f-ccfd-4569-b27d-8ddcce62573d
health HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
monmap e1: 1 mons at {ceph-mon1=192.168.111.74:6789/0}, election epoch 1, quorum 0 ceph-mon1
osdmap e41: 2 osds: 2 up, 2 in
pgmap v104: 192 pgs, 3 pools, 0 bytes data, 0 objects
4845 MB used, 8259 MB / 13852 MB avail
192 active+degraded
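To see which PGs are stuck and where they map, I also checked the
detailed PG states (standard ceph CLI commands):

ceph-osd1:~# ceph health detail          # lists each degraded PG individually
ceph-osd1:~# ceph pg dump_stuck unclean  # shows stuck PGs and their acting OSD sets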
My mistake was that an invalid host had been included in the crushmap,
so I removed it and recompiled the crushmap, but the PGs are still
"active+degraded" instead of going "active+clean". Any hints on where
to check?
Below is the decompiled crushmap.
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
# devices
device 0 osd.0
device 1 osd.1
# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root
# buckets
#host ubuntu { # comment out b/c of my mistake
# id -2 # do not change unnecessarily
# # weight 0.000
# alg straw
# hash 0 # rjenkins1
#}
host ceph-osd2 {
id -2 # do not change unnecessarily
# weight 0.010
alg straw
hash 0 # rjenkins1
item osd.1 weight 0.010
}
host ceph-osd1 {
id -3 # do not change unnecessarily
# weight 0.010
alg straw
hash 0 # rjenkins1
item osd.0 weight 0.010
}
root default {
id -1 # do not change unnecessarily
# weight 0.020
alg straw
hash 0 # rjenkins1
#item ubuntu weight 0.000
item ceph-osd1 weight 0.010
item ceph-osd2 weight 0.010
}
# rules
rule replicated_ruleset {
ruleset 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}
# end crush map
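After injecting the new map, I verify things like this (the exact grep
output varies a bit by release):

ceph-osd1:~# ceph osd tree                 # both hosts should appear under the default root
ceph-osd1:~# ceph osd dump | grep -i size  # each pool's replica size vs. the 2 OSDs available
ceph-osd1:~# ceph -s                       # re-check the PG states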