Hi Lindsay,

Actually, you just set up two entries for each host in your crush map: one
for the hdds and one for the ssds. My OSDs look like this:

# id    weight  type name       up/down reweight
-6      1.8     root ssd
-7      0.45            host ceph-01-ssd
0       0.45                    osd.0   up      1
-8      0.45            host ceph-02-ssd
3       0.45                    osd.3   up      1
-9      0.45            host ceph-03-ssd
8       0.45                    osd.8   up      1
-10     0.45            host ceph-04-ssd
11      0.45                    osd.11  up      1
-1      29.12   root default
-2      7.28            host ceph-01
1       3.64                    osd.1   up      1
2       3.64                    osd.2   up      1
-3      7.28            host ceph-02
5       3.64                    osd.5   up      1
4       3.64                    osd.4   up      1
-4      7.28            host ceph-03
6       3.64                    osd.6   up      1
7       3.64                    osd.7   up      1
-5      7.28            host ceph-04
10      3.64                    osd.10  up      1
9       3.64                    osd.9   up      1

As you can see, I have four physical hosts (ceph-01 ... ceph-04) but eight
host entries in the crush map. This works great.
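If it helps, the relevant parts of my decompiled crush map look roughly like
this for the first host pair (ids and weights match the tree above; the rule
name and ruleset number are just examples, pick whatever is free in your map):

host ceph-01 {
        id -2
        alg straw
        hash 0  # rjenkins1
        item osd.1 weight 3.640
        item osd.2 weight 3.640
}

host ceph-01-ssd {
        id -7
        alg straw
        hash 0  # rjenkins1
        item osd.0 weight 0.450
}

root default {
        id -1
        alg straw
        hash 0  # rjenkins1
        item ceph-01 weight 7.280
        # ... ceph-02 through ceph-04 the same way
}

root ssd {
        id -6
        alg straw
        hash 0  # rjenkins1
        item ceph-01-ssd weight 0.450
        # ... ceph-02-ssd through ceph-04-ssd the same way
}

# example rule; adjust the ruleset number to your map
rule ssd {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

Because the *-ssd buckets are still of type host, "step chooseleaf firstn 0
type host" under root ssd still puts replicas on different physical machines.
You can either edit the decompiled map and re-inject it (ceph osd getcrushmap,
crushtool -d, crushtool -c, ceph osd setcrushmap -i), or build the ssd buckets
at runtime with ceph osd crush add-bucket and ceph osd crush set, and then
point a pool at the new rule with ceph osd pool set <pool> crush_ruleset <n>.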

Regards,

Erik.


On 30-12-14 15:13, Lindsay Mathieson wrote:
> I looked at the section for setting up different pools with different
> OSDs (e.g. an SSD pool):
> 
> http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
> 
> It seems to assume that the ssds and platters all live on separate hosts.
> 
> That's not the case at all for my setup, and I imagine not for most
> people; I have ssds mixed with platters on the same hosts.
> 
> In that case, should the root buckets reference buckets that are not
> based on hosts, e.g. something like this:
> 
> # devices
> # Platters
> device 0 osd.0
> device 1 osd.1
> 
> # SSD
> device 2 osd.2
> device 3 osd.3
> 
> host vnb {
>         id -2           # do not change unnecessarily
>         # weight 1.000
>         alg straw
>         hash 0          # rjenkins1
>         item osd.0 weight 1.000
>         item osd.2 weight 1.000
> }
> 
> host vng {
>         id -3           # do not change unnecessarily
>         # weight 1.000
>         alg straw
>         hash 0          # rjenkins1
>         item osd.1 weight 1.000
>         item osd.3 weight 1.000
> }
> 
> row disk-platter {
>         alg straw
>         hash 0          # rjenkins1
>         item osd.0 weight 1.000
>         item osd.1 weight 1.000
> }
> 
> row disk-ssd {
>         alg straw
>         hash 0          # rjenkins1
>         item osd.2 weight 1.000
>         item osd.3 weight 1.000
> }
> 
> root default {
>         id -1           # do not change unnecessarily
>         # weight 2.000
>         alg straw
>         hash 0          # rjenkins1
>         item disk-platter weight 2.000
> }
> 
> root ssd {
>         id -4
>         alg straw
>         hash 0
>         item disk-ssd weight 2.000
> }
> 
> # rules
> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
> 
> rule ssd {
>         ruleset 1
>         type replicated
>         min_size 0
>         max_size 4
>         step take ssd
>         step chooseleaf firstn 0 type host
>         step emit
> }
> 
> -- 
> Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
