Philip;

Ah, ok.  I suspect that isn't documented because the developers don't want 
average users doing it.

It's also possible that it won't work as expected, as there is discussion on 
the web of device classes being changed when the OSD daemon starts.

That said...

"ceph osd crush class create <name>" is the command to create a custom device 
class, at least in Nautilus 14.2.4.

Theoretically, a custom device class can then be used the same way as the 
built-in device classes.
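
Roughly, the workflow would look something like the below (an untested sketch; 
the class name "fast_ssd" and the OSD IDs are just placeholders):

  # create the custom device class
  ceph osd crush class create fast_ssd

  # clear the auto-detected class, then assign the custom one
  ceph osd crush rm-device-class osd.10 osd.11
  ceph osd crush set-device-class fast_ssd osd.10 osd.11

Per the concern above, the OSD may re-apply its auto-detected class on 
startup; if I recall correctly, setting osd_class_update_on_start to false 
prevents that, but please verify before relying on it.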

Caveat: I'm a user, not a developer of Ceph.

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com



-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Philip 
Brown
Sent: Monday, December 16, 2019 4:42 PM
To: ceph-users
Subject: Re: [ceph-users] Separate disk sets for high IO?

Yes, I saw that, thanks.

Unfortunately, that doesn't show the use of "custom classes" that someone hinted at.



----- Original Message -----
From: dhils...@performair.com
To: "ceph-users" <ceph-users@lists.ceph.com>
Cc: "Philip Brown" <pbr...@medata.com>
Sent: Monday, December 16, 2019 3:38:49 PM
Subject: RE: Separate disk sets for high IO?

Philip;

There isn't any documentation that shows specifically how to do that, though 
the below comes close.

Here's the documentation, for Nautilus, on CRUSH operations:
https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/

About a third of the way down the page is a discussion of "Device Classes."  In 
that section it talks about creating CRUSH rules that target certain device 
classes (hdd, ssd, and nvme, by default).

Once you have a rule, you can configure a pool to use the rule.
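
For example (a sketch, assuming a pool named "mypool", the default CRUSH root 
"default", and host as the failure domain):

  # create a replicated rule that only selects OSDs of class ssd
  ceph osd crush rule create-replicated ssd_rule default host ssd

  # point an existing pool at that rule
  ceph osd pool set mypool crush_rule ssd_rule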

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com


-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Philip 
Brown
Sent: Monday, December 16, 2019 3:43 PM
To: Nathan Fish
Cc: ceph-users
Subject: Re: [ceph-users] Separate disk sets for high IO?

Sounds very useful.

Is there any online example documentation for this?
I haven't found any so far.


----- Original Message -----
From: "Nathan Fish" <lordci...@gmail.com>
To: "Marc Roos" <m.r...@f1-outsourcing.eu>
Cc: "ceph-users" <ceph-users@lists.ceph.com>, "Philip Brown" <pbr...@medata.com>
Sent: Monday, December 16, 2019 2:07:44 PM
Subject: Re: [ceph-users] Separate disk sets for high IO?

Indeed, you can set the device class to pretty much arbitrary strings and
reference them in CRUSH rules. By default, 'hdd', 'ssd', and I think 'nvme' are
auto-detected, though my Optanes showed up as 'ssd'.
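
If you want to override the detected class (e.g. to mark an Optane as 'nvme'),
something along these lines should work (a sketch; osd.5 is just an example ID):

  # the existing class has to be removed before a new one can be set
  ceph osd crush rm-device-class osd.5
  ceph osd crush set-device-class nvme osd.5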

On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <m.r...@f1-outsourcing.eu> wrote:
>
>
>
> You can classify OSDs, e.g. as ssd, and you can assign this class to a
> pool you create. This way you can have RBDs running only on SSDs. I
> think there is also a class for nvme, and you can create custom classes.
>
>
>