https://ceph.io/community/new-luminous-crush-device-classes/
https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#device-classes
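The docs above cover it in detail; as a rough sketch, the workflow is: override the autodetected class if needed, create a CRUSH rule restricted to that class, then point a pool at the rule. Something like this (the `fast` rule name, `mypool`, and `osd.0` are just placeholders for illustration):

```shell
# Autodetection may get it wrong (e.g. Optane showing up as 'ssd'),
# so clear the existing class before assigning a new one:
ceph osd crush rm-device-class osd.0
ceph osd crush set-device-class nvme osd.0

# Create a replicated CRUSH rule that only selects 'nvme' devices,
# with 'host' as the failure domain under the 'default' root:
ceph osd crush rule create-replicated fast default host nvme

# Assign the rule to a pool; its data migrates to matching OSDs:
ceph osd pool set mypool crush_rule fast
```

Note that changing a pool's CRUSH rule triggers data movement, so expect backfill traffic on a live cluster.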

On Mon, Dec 16, 2019 at 5:42 PM Philip Brown <pbr...@medata.com> wrote:
>
> Sounds very useful.
>
> Any online example documentation for this?
> Haven't found any so far.
>
>
> ----- Original Message -----
> From: "Nathan Fish" <lordci...@gmail.com>
> To: "Marc Roos" <m.r...@f1-outsourcing.eu>
> Cc: "ceph-users" <ceph-users@lists.ceph.com>, "Philip Brown" 
> <pbr...@medata.com>
> Sent: Monday, December 16, 2019 2:07:44 PM
> Subject: Re: [ceph-users] Separate disk sets for high IO?
>
> Indeed, you can set device classes to pretty much arbitrary strings and
> reference them in CRUSH rules. By default, 'hdd', 'ssd', and I think 'nvme' are
> autodetected - though my Optanes showed up as 'ssd'.
>
> On Mon, Dec 16, 2019 at 4:58 PM Marc Roos <m.r...@f1-outsourcing.eu> wrote:
> >
> >
> >
> > You can classify OSDs, e.g. as ssd, and assign that class to a
> > pool you create. This way you can have RBDs running on only SSDs. I
> > think there is also a class for nvme, and you can create custom classes.
> >
> >
> >
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com