Oh, so it doesn't work as intended even though the ssd-primary rule is officially 
listed in the Ceph documentation. Should I file a feature request or a Bugzilla 
ticket for it? 

Regards, 
Horace Ng 


From: "Paul Emmerich" <[email protected]> 
To: "horace" <[email protected]> 
Cc: "ceph-users" <[email protected]> 
Sent: Wednesday, May 23, 2018 8:37:07 PM 
Subject: Re: [ceph-users] SSD-primary crush rule doesn't work as intended 

You can't mix HDDs and SSDs in the same server if you want to use such a rule. 
The new selection step after "emit" can't know which server was selected 
previously. 
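
A quick way to see this for yourself, assuming the rule id 2 from the rule you quote below, is to dump the CRUSH map and replay it through crushtool (the file name is just a local scratch file): 

ceph osd getcrushmap -o crushmap.bin 
# simulate placements for rule 2 with 3 replicas and print each mapping; 
# mappings that put two OSDs on the same host show exactly this effect 
crushtool -i crushmap.bin --test --rule 2 --num-rep 3 --show-mappings 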

Paul 

2018-05-23 11:02 GMT+02:00 Horace <[email protected]>: 


To add to that info: I have a slightly modified rule that takes advantage of the 
new storage classes. 

rule ssd-hybrid { 
        id 2 
        type replicated 
        min_size 1 
        max_size 10 
        step take default class ssd 
        step chooseleaf firstn 1 type host 
        step emit 
        step take default class hdd 
        step chooseleaf firstn -1 type host 
        step emit 
} 
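
For reference, a rule like this is attached to a pool in the usual way (the pool name here matches the ssd-hybrid pool I query further down): 

ceph osd pool set ssd-hybrid crush_rule ssd-hybrid 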

Regards, 
Horace Ng 

----- Original Message ----- 
From: "horace" < [ mailto:[email protected] | [email protected] ] > 
To: "ceph-users" < [ mailto:[email protected] | 
[email protected] ] > 
Sent: Wednesday, May 23, 2018 3:56:20 PM 
Subject: [ceph-users] SSD-primary crush rule doesn't work as intended 

I've set up the rule according to the documentation, but some PGs still end up 
with replicas assigned to the same host. 

http://docs.ceph.com/docs/master/rados/operations/crush-map-edits/ 

rule ssd-primary { 
        ruleset 5 
        type replicated 
        min_size 5 
        max_size 10 
        step take ssd 
        step chooseleaf firstn 1 type host 
        step emit 
        step take platter 
        step chooseleaf firstn -1 type host 
        step emit 
} 
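
For anyone following along, the edit/compile/inject cycle from that same doc page is roughly the following (the file names are just local scratch files): 

ceph osd getcrushmap -o crushmap.bin 
crushtool -d crushmap.bin -o crushmap.txt 
# add the rule to crushmap.txt, then recompile and inject it 
crushtool -c crushmap.txt -o crushmap-new.bin 
ceph osd setcrushmap -i crushmap-new.bin 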

Crush tree: 

[root@ceph0 ~]# ceph osd crush tree 
ID CLASS WEIGHT   TYPE NAME 
-1       58.63989 root default 
-2       19.55095     host ceph0 
 0   hdd  2.73000         osd.0 
 1   hdd  2.73000         osd.1 
 2   hdd  2.73000         osd.2 
 3   hdd  2.73000         osd.3 
12   hdd  4.54999         osd.12 
15   hdd  3.71999         osd.15 
18   ssd  0.20000         osd.18 
19   ssd  0.16100         osd.19 
-3       19.55095     host ceph1 
 4   hdd  2.73000         osd.4 
 5   hdd  2.73000         osd.5 
 6   hdd  2.73000         osd.6 
 7   hdd  2.73000         osd.7 
13   hdd  4.54999         osd.13 
16   hdd  3.71999         osd.16 
20   ssd  0.16100         osd.20 
21   ssd  0.20000         osd.21 
-4       19.53799     host ceph2 
 8   hdd  2.73000         osd.8 
 9   hdd  2.73000         osd.9 
10   hdd  2.73000         osd.10 
11   hdd  2.73000         osd.11 
14   hdd  3.71999         osd.14 
17   hdd  4.54999         osd.17 
22   ssd  0.18700         osd.22 
23   ssd  0.16100         osd.23 

# ceph pg ls-by-pool ssd-hybrid 

27.8 1051 0 0 0 0 4399733760 1581 1581 active+clean 2018-05-23 06:20:56.088216 
27957'185553 27959:368828 [23,1,11] 23 [23,1,11] 23 27953'182582 2018-05-23 
06:20:56.088172 27843'162478 2018-05-20 18:28:20.118632 

With osd.23 and osd.11 being assigned to the same host (ceph2). 
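
The host for each OSD in the acting set can be cross-checked with ceph osd find, which prints the OSD's CRUSH location: 

ceph osd find 23 
ceph osd find 11 
# both should report host ceph2 in their crush_location 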

Regards, 
Horace Ng 






-- 
Paul Emmerich 

Looking for help with your Ceph cluster? Contact us at https://croit.io 

croit GmbH 
Freseniusstr. 31h 
81247 München 
www.croit.io 
Tel: +49 89 1896585 90 

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
