So I did this:
ceph osd crush rule create-replicated hdd-rule default rack hdd
[ceph: root@cn01 ceph]# ceph osd crush rule ls
replicated_rule
hdd-rule
ssd-rule
[ceph: root@cn01 ceph]# ceph osd crush rule dump hdd-rule
{
    "rule_id": 1,
    "rule_name": "hdd-rule",
    "ruleset": 1,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -2,
            "item_name": "default~hdd"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "rack"
        },
        {
            "op": "emit"
        }
    ]
}
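As I understand it, the "default~hdd" item is the per-device-class shadow tree that CRUSH derives from the default root; if I read the docs right, it can be inspected with:
ceph osd crush tree --show-shadow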
Then this:
ceph osd pool set device_health_metrics crush_rule hdd-rule
How do I prove that my device_health_metrics pool is no longer using any SSDs?
ceph pg ls
PG    OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION  REPORTED  UP             ACTING         SCRUB_STAMP                      DEEP_SCRUB_STAMP
1.0   41       0         0          0        0      0            0           71   active+clean  22h    205'71   253:484   [28,33,10]p28  [28,33,10]p28  2021-05-27T14:44:37.466384+0000  2021-05-26T04:23:11.758060+0000
2.0   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:56    [9,5,26]p9     [9,5,26]p9     2021-05-28T00:46:34.470208+0000  2021-05-28T00:46:15.122042+0000
2.1   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [34,0,13]p34   [34,0,13]p34   2021-05-28T00:46:41.578301+0000  2021-05-28T00:46:15.122042+0000
2.2   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [30,25,5]p30   [30,25,5]p30   2021-05-28T00:46:41.394685+0000  2021-05-28T00:46:15.122042+0000
2.3   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [14,35,32]p14  [14,35,32]p14  2021-05-28T00:46:40.545088+0000  2021-05-28T00:46:15.122042+0000
2.4   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [27,28,7]p27   [27,28,7]p27   2021-05-28T00:46:41.208159+0000  2021-05-28T00:46:15.122042+0000
2.5   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [8,4,35]p8     [8,4,35]p8     2021-05-28T00:46:39.845197+0000  2021-05-28T00:46:15.122042+0000
2.6   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [31,26,6]p31   [31,26,6]p31   2021-05-28T00:46:45.808430+0000  2021-05-28T00:46:15.122042+0000
2.7   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [12,7,19]p12   [12,7,19]p12   2021-05-28T00:46:39.313525+0000  2021-05-28T00:46:15.122042+0000
2.8   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [20,21,11]p20  [20,21,11]p20  2021-05-28T00:46:38.840636+0000  2021-05-28T00:46:15.122042+0000
2.9   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [31,14,10]p31  [31,14,10]p31  2021-05-28T00:46:46.791644+0000  2021-05-28T00:46:15.122042+0000
2.a   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [16,27,35]p16  [16,27,35]p16  2021-05-28T00:46:39.025320+0000  2021-05-28T00:46:15.122042+0000
2.b   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [20,15,11]p20  [20,15,11]p20  2021-05-28T00:46:42.841924+0000  2021-05-28T00:46:15.122042+0000
2.c   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [32,11,0]p32   [32,11,0]p32   2021-05-28T00:46:38.403701+0000  2021-05-28T00:46:15.122042+0000
2.d   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:56    [5,19,3]p5     [5,19,3]p5     2021-05-28T00:46:39.808986+0000  2021-05-28T00:46:15.122042+0000
2.e   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [27,13,17]p27  [27,13,17]p27  2021-05-28T00:46:42.253293+0000  2021-05-28T00:46:15.122042+0000
2.f   0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [11,22,18]p11  [11,22,18]p11  2021-05-28T00:46:38.721405+0000  2021-05-28T00:46:15.122042+0000
2.10  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [10,17,7]p10   [10,17,7]p10   2021-05-28T00:46:38.770867+0000  2021-05-28T00:46:15.122042+0000
2.11  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [34,20,1]p34   [34,20,1]p34   2021-05-28T00:46:39.572906+0000  2021-05-28T00:46:15.122042+0000
2.12  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:56    [5,24,26]p5    [5,24,26]p5    2021-05-28T00:46:38.802818+0000  2021-05-28T00:46:15.122042+0000
2.13  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [21,35,3]p21   [21,35,3]p21   2021-05-28T00:46:39.517117+0000  2021-05-28T00:46:15.122042+0000
2.14  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [18,9,7]p18    [18,9,7]p18    2021-05-28T00:46:38.078800+0000  2021-05-28T00:46:15.122042+0000
2.15  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [7,14,34]p7    [7,14,34]p7    2021-05-28T00:46:38.748425+0000  2021-05-28T00:46:15.122042+0000
2.16  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [0,23,7]p0     [0,23,7]p0     2021-05-28T00:46:42.000503+0000  2021-05-28T00:46:15.122042+0000
2.17  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [21,5,11]p21   [21,5,11]p21   2021-05-28T00:46:46.515686+0000  2021-05-28T00:46:15.122042+0000
2.18  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [18,9,33]p18   [18,9,33]p18   2021-05-28T00:46:40.104875+0000  2021-05-28T00:46:15.122042+0000
2.19  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [13,23,4]p13   [13,23,4]p13   2021-05-28T00:46:38.739980+0000  2021-05-28T00:46:35.469823+0000
2.1a  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [3,23,28]p3    [3,23,28]p3    2021-05-28T00:46:41.549389+0000  2021-05-28T00:46:15.122042+0000
2.1b  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:56    [5,28,23]p5    [5,28,23]p5    2021-05-28T00:46:40.824368+0000  2021-05-28T00:46:15.122042+0000
2.1c  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [33,29,31]p33  [33,29,31]p33  2021-05-28T00:46:38.106675+0000  2021-05-28T00:46:15.122042+0000
2.1d  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [10,33,28]p10  [10,33,28]p10  2021-05-28T00:46:39.785338+0000  2021-05-28T00:46:15.122042+0000
2.1e  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [3,21,13]p3    [3,21,13]p3    2021-05-28T00:46:40.584803+0000  2021-05-28T00:46:40.584803+0000
2.1f  0        0         0          0        0      0            0           0    active+clean  21h    0'0      254:42    [22,7,34]p22   [22,7,34]p22   2021-05-28T00:46:38.061932+0000  2021-05-28T00:46:15.122042+0000
PG 1.0, which holds all the objects, is still using osd.28, which is an SSD.
ceph pg ls-by-pool device_health_metrics
PG   OBJECTS  DEGRADED  MISPLACED  UNFOUND  BYTES  OMAP_BYTES*  OMAP_KEYS*  LOG  STATE         SINCE  VERSION  REPORTED  UP             ACTING         SCRUB_STAMP                      DEEP_SCRUB_STAMP
1.0  41       0         0          0        0      0            0           71   active+clean  22h    205'71   253:484   [28,33,10]p28  [28,33,10]p28  2021-05-27T14:44:37.466384+0000  2021-05-26T04:23:11.758060+0000
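One check that should work, assuming device classes were assigned when the OSDs were created: print the class of each OSD backing the PG.
for o in 28 33 10; do echo -n "osd.$o: "; ceph osd crush get-device-class osd.$o; done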
Also, I attempted to add my “crush location” and I believe I’m missing
something fundamental. It claims no change, but that doesn’t make sense
because I haven’t previously specified this information:
ceph osd crush set osd.24 3.63869 root=default datacenter=la1 rack=rack1 host=cn06 room=room1 row=6
set item id 24 name 'osd.24' weight 3.63869 at location {datacenter=la1,host=cn06,rack=rack1,room=room1,root=default,row=6}: no change
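To see where CRUSH currently has osd.24 (my guess is the map already matches the location above, hence the “no change”):
ceph osd tree
ceph osd find 24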
My end goal is to create a CRUSH map that is aware of two separate racks with
independent UPS power, to increase our availability in the event of power
going out on one of our racks.
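A rough sketch of what I think that would look like (the bucket names rack1/rack2 and the rule name rack-split are placeholders, nothing that exists in our map yet):
# create two rack buckets under the default root
ceph osd crush add-bucket rack1 rack
ceph osd crush add-bucket rack2 rack
ceph osd crush move rack1 root=default
ceph osd crush move rack2 root=default
# move each host under its physical rack, e.g.
ceph osd crush move cn06 rack=rack1
# replicated rule that splits copies across racks
ceph osd crush rule create-replicated rack-split default rack
One thing I noticed while reading: with only two racks, a size-3 pool can’t put every replica in a distinct rack, so I assume this pairs with size=2 pools or a more elaborate rule.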
Thank you
-jeremy
> On May 28, 2021, at 5:01 AM, Jeremy Hansen <[email protected]> wrote:
>
> I’m continuing to read and it’s becoming more clear.
>
> The CRUSH map seems pretty amazing!
>
> -jeremy
>
>> On May 28, 2021, at 1:10 AM, Jeremy Hansen <[email protected]> wrote:
>>
>> Thank you both for your response. So this leads me to the next question:
>>
>> ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class>
>>
>> What are <root> and <failure-domain> in this case?
>>
>> It also looks like this is responsible for things like “rack awareness”-type
>> attributes, which is something I’d like to utilize:
>>
>> # types
>> type 0 osd
>> type 1 host
>> type 2 chassis
>> type 3 rack
>> type 4 row
>> type 5 pdu
>> type 6 pod
>> type 7 room
>> type 8 datacenter
>> type 9 zone
>> type 10 region
>> type 11 root
>> This is something I will eventually take advantage of as well.
>>
>> Thank you!
>> -jeremy
>>
>>
>>> On May 28, 2021, at 12:03 AM, Janne Johansson <[email protected]> wrote:
>>>
>>> Create a crush rule that only chooses non-ssd drives, then
>>> ceph osd pool set <perf-pool-name> crush_rule YourNewRuleName
>>> and it will move over to the non-ssd OSDs.
>>>
>>> Den fre 28 maj 2021 kl 02:18 skrev Jeremy Hansen <[email protected]>:
>>>>
>>>>
>>>> I’m very new to Ceph, so if this question makes no sense, I apologize.
>>>> I’m continuing to study, but I thought an answer to this question would
>>>> help me understand Ceph a bit more.
>>>>
>>>> Using cephadm, I set up a cluster. Cephadm automatically creates a pool
>>>> for Ceph metrics. It looks like one of my SSD OSDs was allocated for the
>>>> PG. I’d like to understand how to remap this PG so it’s not using the
>>>> SSD OSDs.
>>>>
>>>> ceph pg map 1.0
>>>> osdmap e205 pg 1.0 (1.0) -> up [28,33,10] acting [28,33,10]
>>>>
>>>> OSD 28 is the SSD.
>>>>
>>>> Is this possible? Does this make any sense? I’d like to reserve the SSDs
>>>> for their own pool.
>>>>
>>>> Thank you!
>>>> -jeremy
>>>
>>>
>>>
>>> --
>>> May the most significant bit of your life be positive.
>>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
