You may also be interested in the cbt code that does this kind of thing for creating cache tiers:

https://github.com/ceph/cbt/blob/master/cluster/ceph.py#L295

The idea is that you create a parallel CRUSH hierarchy for the SSDs and then assign it to the pool used for the cache tier. An example YAML config that uses this to create pool profiles looks something like:

  pool_profiles:
    basepool:
      pg_size: 1024
      pgp_size: 1024
      cache:
        pool_profile: 'cachepool'
        mode: 'writeback'
      replication: 'erasure'
      erasure_profile: 'ec62'
    cachepool:
      crush_profile: 'cache'
      pg_size: 1024
      pgp_size: 1024
      replication: 2
      hit_set_type: 'bloom'
      hit_set_count: 8
      hit_set_period: 60
      target_max_objects: 102400
      target_max_bytes: 68719476736
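
If you're not using cbt, the parallel hierarchy and rule can also be set up by hand with the ceph CLI. This is only a rough sketch; the bucket, rule and OSD names (ssd-root, ssd-host1, ssd-rule, osd.10/osd.11) are made up for illustration:

  # separate root and host buckets for the SSD OSDs
  ceph osd crush add-bucket ssd-root root
  ceph osd crush add-bucket ssd-host1 host
  ceph osd crush move ssd-host1 root=ssd-root
  # place the SSD OSDs under the new hierarchy (weight 1.0 is arbitrary)
  ceph osd crush set osd.10 1.0 host=ssd-host1
  ceph osd crush set osd.11 1.0 host=ssd-host1
  # a rule that only selects OSDs under ssd-root
  ceph osd crush rule create-simple ssd-rule ssd-root host

Keep in mind that OSDs may be moved back to their original host buckets when they restart unless "osd crush update on start = false" is set in ceph.conf.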

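The cache section of the profile above then maps roughly onto the usual tiering commands, e.g. (pool names match the profile, ssd-rule is the illustrative rule from above, and I'm assuming the ec62 profile means k=6, m=2):

  # base pool (erasure coded) and cache pool (replicated, on the SSD rule)
  ceph osd erasure-code-profile set ec62 k=6 m=2
  ceph osd pool create basepool 1024 1024 erasure ec62
  ceph osd pool create cachepool 1024 1024 replicated ssd-rule
  ceph osd pool set cachepool size 2
  # attach the cache pool in writeback mode and point clients at it
  ceph osd tier add basepool cachepool
  ceph osd tier cache-mode cachepool writeback
  ceph osd tier set-overlay basepool cachepool
  # hit set and flush/evict targets matching the profile above
  ceph osd pool set cachepool hit_set_type bloom
  ceph osd pool set cachepool hit_set_count 8
  ceph osd pool set cachepool hit_set_period 60
  ceph osd pool set cachepool target_max_objects 102400
  ceph osd pool set cachepool target_max_bytes 68719476736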

On 04/14/2015 04:13 AM, Vincenzo Pii wrote:
Hi Giuseppe,

There is also this article from Sébastien Han that you might find
useful:
http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/

Best regards,
Vincenzo.

2015-04-14 10:34 GMT+02:00 Saverio Proto <ziopr...@gmail.com>:

    Yes you can.
    You have to write your own crushmap.

    At the end of the crushmap you have rulesets.

    Write a ruleset that selects only the OSDs you want. Then you have to
    assign the pool to that ruleset.

    I have seen examples online where people wanted some pools only on SSD
    disks and other pools only on SAS disks. That should not be too far
    from what you want to achieve.
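
    For example, a ruleset that only takes OSDs under an SSD-only root could
    look roughly like this (bucket and rule names are just illustrative):

        rule ssd {
                ruleset 1
                type replicated
                min_size 1
                max_size 10
                step take ssd-root
                step chooseleaf firstn 0 type host
                step emit
        }

    and then, on Emperor, assign the pool to it with something like:

        ceph osd pool set <poolname> crush_ruleset 1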

    ciao,

    Saverio



    2015-04-13 18:26 GMT+02:00 Giuseppe Civitella <giuseppe.civite...@gmail.com>:
     > Hi all,
     >
     > I've got a Ceph cluster which serves volumes to a Cinder installation.
     > It runs Emperor.
     > I'd like to be able to replace some of the disks with OPAL disks and
     > create a new pool which uses exclusively the latter kind of disk.
     > I'd like to have a "traditional" pool and a "secure" one coexisting on
     > the same Ceph host.
     > I'd then use Cinder's multi-backend feature to serve them.
     > My question is: how is it possible to realize such a setup? How can I
     > bind a pool to certain OSDs?
     >
     > Thanks
     > Giuseppe



