On Wed, Aug 7, 2019 at 9:30 AM Robert LeBlanc <rob...@leblancnet.us> wrote:

>> ```
>> # ceph osd crush rule dump replicated_racks_nvme
>> {
>>      "rule_id": 0,
>>      "rule_name": "replicated_racks_nvme",
>>      "ruleset": 0,
>>      "type": 1,
>>      "min_size": 1,
>>      "max_size": 10,
>>      "steps": [
>>          {
>>              "op": "take",
>>              "item": -44,
>>              "item_name": "default~nvme"    <------------
>>          },
>>          {
>>              "op": "chooseleaf_firstn",
>>              "num": 0,
>>              "type": "rack"
>>          },
>>          {
>>              "op": "emit"
>>          }
>>      ]
>> }
>> ```
>
>
> Yes, our HDD cluster is much like this, but it isn't on Luminous, so we created a
> separate root with SSD OSDs for the metadata and set up a CRUSH rule to map the
> metadata pool to SSD. I understand that the CRUSH rule should have a
> `step take default class ssd`, which I don't see in your rule unless the `~` in
> the item_name means device class.

The ~ is the internal implementation of device classes: internally it still
uses separate roots, which is how it stays compatible with older clients
that don't know about device classes.
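For reference, on Luminous or newer you can express this placement directly
with a device-class rule instead of maintaining a separate SSD root by hand.
A minimal sketch, assuming the OSDs already report an "nvme" (or "ssd")
device class and that the metadata pool is called cephfs_metadata (both
names are just examples):

```
# Create a replicated rule that takes the default root restricted to the
# nvme device class and spreads replicas across racks
ceph osd crush rule create-replicated replicated_racks_nvme default rack nvme

# Point the metadata pool at that rule (pool name is an example)
ceph osd pool set cephfs_metadata crush_rule replicated_racks_nvme
```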

And since it hasn't been mentioned here yet: consider upgrading to Nautilus
to benefit from the new and improved accounting for metadata space. You'll
be able to see how much space is used for metadata, and quotas should work
properly for metadata usage.
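For example (a rough sketch; the pool name cephfs_metadata and the quota
value are assumptions, adjust for your cluster):

```
# Nautilus shows per-pool STORED vs USED in the detailed df output
ceph df detail

# Pool quotas; per the note above, these should behave properly for
# metadata usage once the improved accounting is in place
ceph osd pool get-quota cephfs_metadata
ceph osd pool set-quota cephfs_metadata max_bytes 107374182400   # 100 GiB, example value
```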


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

>
> Thanks
> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com