Hi, list,
  
 Ceph cache tiering seems very promising for performance. According to
http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
 , I need to create a new pool based on SSD OSDs.
 
 Currently, I have two servers, each with several HDD-based OSDs. I plan to add
one SSD-based OSD to each server and then use those two OSDs to build a cache
pool.
 However, I have run into problems editing the CRUSH map.
 The example in the link above adds two new hosts to hold the SSD OSDs and then
creates a new ruleset that takes those hosts.
 But in my environment, I do not have any new servers to use.
 Can I create a ruleset that chooses only some of the OSDs on a host?
 For example, in the CRUSH map shown below, osd.2 and osd.5 are the newly added
SSD-based OSDs. How can I create a ruleset that chooses only these two OSDs,
and how can I prevent the default ruleset from choosing osd.2 and osd.5?
 Is this possible, or do I have to add a new server to deploy the cache tier?
 Thanks.

host node0 {
  id -2
  alg straw
  hash 0
  item osd.0 weight 1.0 # HDD
  item osd.1 weight 1.0 # HDD
  item osd.2 weight 0.5 # SSD
}

host node1 {
  id -3
  alg straw
  hash 0
  item osd.3 weight 1.0 # HDD
  item osd.4 weight 1.0 # HDD
  item osd.5 weight 0.5 # SSD
}

root default {
        id -1           # do not change unnecessarily
        # weight 1.560
        alg straw
        hash 0  # rjenkins1
        item node0 weight 2.5
        item node1 weight 2.5
}

# typical ruleset
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}
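
To make the question concrete, here is the kind of change I am imagining (the
bucket names, ids, and ruleset number below are my own guesses, not taken from
the documentation): add SSD-only host buckets that reuse the existing servers,
put them under a separate root, and point a new ruleset at that root, while
removing osd.2 and osd.5 from node0/node1 so the default ruleset no longer
sees them.

host node0-ssd {                # hypothetical bucket reusing server node0
        id -4                   # hypothetical unused id
        alg straw
        hash 0
        item osd.2 weight 0.5   # SSD
}

host node1-ssd {                # hypothetical bucket reusing server node1
        id -5
        alg straw
        hash 0
        item osd.5 weight 0.5   # SSD
}

root ssd {
        id -6
        alg straw
        hash 0
        item node0-ssd weight 0.5
        item node1-ssd weight 0.5
}

rule ssd_ruleset {
        ruleset 1               # guessed; next free ruleset number
        type replicated
        min_size 1
        max_size 10
        step take ssd
        step chooseleaf firstn 0 type host
        step emit
}

With osd.2 and osd.5 removed from node0 and node1 (and their weights
subtracted under root default), replicated_ruleset would only ever pick the
HDD OSDs. Is this the right approach?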



van
chaofa...@owtware.com



_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
