That was it: pool 3 was still configured to use a CRUSH rule that I had since
removed, so the rule it pointed to no longer existed.
I hadn't used the pool for some time; it had only been used for some testing.
I have set the pool's ruleset to a real one and the cluster is rebalancing as
I type.
I'll probably delete the pool once I'm sure the problem is fixed.
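
For the record, the fix was roughly the following (the pool name "testpool" is
illustrative; ruleset 3 is the "spin" rule from the map below):

# check which ruleset each pool currently points at
ceph osd dump | grep '^pool'

# point the test pool at a ruleset that actually exists, e.g. ruleset 3 ("spin")
ceph osd pool set testpool crush_ruleset 3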


Darryl

________________________________________
From: [email protected] [[email protected]] On 
Behalf Of Darryl Bond [[email protected]]
Sent: Tuesday, April 16, 2013 9:43 AM
To: [email protected]
Subject: Re: [ceph-users] Upgrade stale PG

On 04/16/13 08:50, Dan Mick wrote:
>
> On 04/04/2013 02:27 PM, Darryl Bond wrote:
>> # ceph pg 3.8 query
>> pgid currently maps to no osd
> That means your CRUSH rules are wrong.  What's the crushmap look like,
> and what's the rule for pool 3?
# begin crush map

# devices
device 0 device0
device 1 device1
device 2 device2
device 3 device3
device 4 device4
device 5 device5
device 6 device6
device 7 device7
device 8 device8
device 9 device9
device 10 osd.10
device 11 osd.11
device 12 osd.12
device 13 osd.13
device 14 osd.14
device 15 osd.15
device 16 device16
device 17 device17
device 18 device18
device 19 device19
device 20 osd.20
device 21 osd.21
device 22 osd.22
device 23 osd.23
device 24 osd.24
device 25 osd.25
device 26 device26
device 27 device27
device 28 device28
device 29 device29
device 30 osd.30
device 31 osd.31
device 32 osd.32
device 33 osd.33
device 34 osd.34
device 35 osd.35

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
host ceph1-ssd {
     id -2        # do not change unnecessarily
     # weight 1.000
     alg straw
     hash 0    # rjenkins1
     item osd.10 weight 0.500
     item osd.11 weight 0.500
}
rack ServerRoom-ssd {
     id -3        # do not change unnecessarily
     # weight 1.000
     alg straw
     hash 0    # rjenkins1
     item ceph1-ssd weight 1.000
}
host ceph2-ssd {
     id -4        # do not change unnecessarily
     # weight 1.000
     alg straw
     hash 0    # rjenkins1
     item osd.20 weight 0.500
     item osd.21 weight 0.500
}
host ceph3-ssd {
     id -5        # do not change unnecessarily
     # weight 1.000
     alg straw
     hash 0    # rjenkins1
     item osd.30 weight 0.500
     item osd.31 weight 0.500
}
rack PABXRoom-ssd {
     id -6        # do not change unnecessarily
     # weight 1.000
     alg straw
     hash 0    # rjenkins1
     item ceph2-ssd weight 1.000
}
host ceph3-spin {
     id -11        # do not change unnecessarily
     # weight 16.000
     alg straw
     hash 0    # rjenkins1
     item osd.32 weight 4.000
     item osd.33 weight 4.000
     item osd.34 weight 4.000
     item osd.35 weight 4.000
}
rack BackupCub-spin {
     id -7        # do not change unnecessarily
     # weight 16.000
     alg straw
     hash 0    # rjenkins1
     item ceph3-spin weight 16.000
}
host ceph1-spin {
     id -9        # do not change unnecessarily
     # weight 16.000
     alg straw
     hash 0    # rjenkins1
     item osd.12 weight 4.000
     item osd.13 weight 4.000
     item osd.14 weight 4.000
     item osd.15 weight 4.000
}
host ceph2-spin {
     id -10        # do not change unnecessarily
     # weight 16.000
     alg straw
     hash 0    # rjenkins1
     item osd.22 weight 4.000
     item osd.23 weight 4.000
     item osd.24 weight 4.000
     item osd.25 weight 4.000
}
root spin {
     id -8        # do not change unnecessarily
     # weight 48.000
     alg straw
     hash 0    # rjenkins1
     item ceph1-spin weight 16.000
     item ceph2-spin weight 16.000
     item ceph3-spin weight 16.000
}
root ssd {
     id -12        # do not change unnecessarily
     # weight 3.000
     alg straw
     hash 0    # rjenkins1
     item ceph1-ssd weight 1.000
     item ceph2-ssd weight 1.000
     item ceph3-ssd weight 1.000
}
rack ServerRoom-spin {
     id -13        # do not change unnecessarily
     # weight 16.000
     alg straw
     hash 0    # rjenkins1
     item ceph1-spin weight 16.000
}
rack PABXRoom-spin {
     id -14        # do not change unnecessarily
     # weight 16.000
     alg straw
     hash 0    # rjenkins1
     item ceph2-spin weight 16.000
}
rack BackupCub-ssd {
     id -15        # do not change unnecessarily
     # weight 1.000
     alg straw
     hash 0    # rjenkins1
     item ceph3-ssd weight 1.000
}

# rules
rule data {
     ruleset 0
     type replicated
     min_size 1
     max_size 10
     step take spin
     step chooseleaf firstn 0 type host
     step emit
}
rule metadata {
     ruleset 1
     type replicated
     min_size 1
     max_size 10
     step take spin
     step chooseleaf firstn 0 type host
     step emit
}
rule rbd {
     ruleset 2
     type replicated
     min_size 1
     max_size 10
     step take spin
     step chooseleaf firstn 0 type host
     step emit
}
rule spin {
     ruleset 3
     type replicated
     min_size 0
     max_size 10
     step take spin
     step chooseleaf firstn 0 type host
     step emit
}
rule ssd {
     ruleset 4
     type replicated
     min_size 0
     max_size 10
     step take ssd
     step chooseleaf firstn 0 type host
     step emit
}

# end crush map
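
For anyone following along, the map above was pulled and decompiled roughly
like this, and a suspect ruleset can be sanity-checked with crushtool (a
ruleset that maps to no OSDs shows up immediately in the test output):

# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt

# test a ruleset against the map, e.g. ruleset 3 ("spin") with 2 replicas
crushtool -i crush.bin --test --rule 3 --num-rep 2 --show-statistics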


_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com