Re: [ceph-users] Recommendations for I/O (blk-mq) scheduler for HDDs and SSDs?

2017-12-11 Thread German Anders
Hi Patrick,

Some thoughts about blk-mq:

*(virtio-blk)*

   - it's enabled by default on kernels >= 3.13 for the virtio-blk driver

   - *The blk-mq feature is currently implemented, and enabled by default,
   in the following drivers: virtio-blk, mtip32xx, nvme, and rbd*.
   (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.2_release_notes/storage)

   - can be checked with "cat /sys/block/vda/queue/scheduler"; it shows up
   as "none" (see the example after this list)

   - https://serverfault.com/questions/693348/what-does-it-mean-when-linux-has-no-i-o-scheduler
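
A quick sanity check of the above (vda is just an example device name for a
virtio disk; adjust it to your environment):

   $ cat /sys/block/vda/queue/scheduler
   none

   # "none" means the device is served by blk-mq with no extra I/O
   # scheduler in front of it, the expected default for virtio-blk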


*host local disks (scsi-mq)*

   - for "sda"-style SCSI disks, whether rotational or SSD, it is NOT
   enabled by default on Ubuntu (check with "cat /sys/block/sda/queue/scheduler")

   - Canonical deactivated it in kernels >= 3.18:
   https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1397061

   - SUSE says it does not suit rotational SCSI disks, but is fine for SSDs:
   https://doc.opensuse.org/documentation/leap/tuning/html/book.sle.tuning/cha.tuning.io.html#cha.tuning.io.scsimq

   - Red Hat says: "*The scsi-mq feature is provided as a Technology Preview
   in Red Hat Enterprise Linux 7.2. To enable scsi-mq, specify
   scsi_mod.use_blk_mq=y on the kernel command line. The default value is n
   (disabled).*"
   (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.2_release_notes/storage)

   - how to change it: edit /etc/default/grub and set
   GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1", then run update-grub and
   reboot (a full sketch follows below)
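
Putting those steps together, a minimal sketch for Ubuntu (the exact
GRUB_CMDLINE_LINUX contents are an assumption; merge the option into
whatever your file already contains):

   # /etc/default/grub
   GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"

   $ sudo update-grub
   $ sudo reboot

   # after the reboot, verify that scsi-mq is active
   $ cat /sys/module/scsi_mod/parameters/use_blk_mq
   Y
   # output here depends on kernel version: "none" on older kernels,
   # or a list of mq schedulers on >= 4.12
   $ cat /sys/block/sda/queue/scheduler
   none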

*ceph (rbd)*

   - it's activated by default: *The blk-mq feature is currently
   implemented, and enabled by default, in the following drivers: virtio-blk,
   mtip32xx, nvme, and rbd*.
   (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.2_release_notes/storage)
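
The same check as for virtio-blk applies to mapped rbd devices (rbd0 and
mypool/myimage are placeholders; "rbd showmapped" lists the real device
names on your client):

   $ sudo rbd map mypool/myimage
   $ sudo rbd showmapped
   $ cat /sys/block/rbd0/queue/scheduler
   none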

*multipath (device mapper; dm / dm-mpath)*

   - how to enable it: add dm_mod.use_blk_mq=y to the kernel command line

   - deactivated by default; how to verify: *To determine whether DM
   multipath is using blk-mq on a system, cat the file
   /sys/block/dm-X/dm/use_blk_mq, where dm-X is replaced by the DM multipath
   device of interest. This file is read-only and reflects what the global
   value in /sys/module/dm_mod/parameters/use_blk_mq was at the time the
   request-based DM multipath device was created*. A verification sketch
   follows after this list.
   (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.2_release_notes/storage)

   - I first thought it would make no sense, since iSCSI is by definition
   (network-bound) much slower than the SSD/NVMe devices blk-mq was designed
   for, but: *It may be beneficial to set dm_mod.use_blk_mq=y if the
   underlying SCSI devices are also using blk-mq, as doing so reduces locking
   overhead at the DM layer*. (Red Hat)
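
A sketch of that verification (dm-2 is just an example; find the right dm-X
for your multipath device with "lsblk" or "multipath -ll"):

   # global default that applied at device-creation time (module parameter)
   $ cat /sys/module/dm_mod/parameters/use_blk_mq

   # read-only per-device value, captured when the device was created
   $ cat /sys/block/dm-2/dm/use_blk_mq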


*observations*

   - WARNING: low performance reported:
   https://www.redhat.com/archives/dm-devel/2016-February/msg00036.html

   - blk-mq support for request-based device-mapper targets was planned for
   kernel 4.1

   - as of kernel >= 4.12, Linux ships with BFQ, a scheduler built on blk-mq

We tried several schedulers in our environment but did not notice an
improvement significant enough to justify a global change across the whole
environment. Still, the best approach is to change/test/document and repeat
again and again :)
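
For that change/test/document loop, a minimal sketch of switching schedulers
at runtime and persisting a choice (sda, bfq, and the rules below are
illustrative assumptions, not a recommendation):

   # runtime switch: takes effect immediately, lost on reboot
   $ echo bfq | sudo tee /sys/block/sda/queue/scheduler
   $ cat /sys/block/sda/queue/scheduler
   mq-deadline kyber [bfq] none

   # persist via a udev rule, e.g. /etc/udev/rules.d/60-scheduler.rules:
   # bfq for rotational disks, none for SSDs
   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
   ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"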

Hope it helps

Best,



*German*

2017-12-11 18:17 GMT-03:00 Patrick Fruh:

> Hi,
>
> after reading a lot about I/O schedulers and performance gains with
> blk-mq, I switched to a custom 4.14.5 kernel with CONFIG_SCSI_MQ_DEFAULT
> enabled to have blk-mq for all devices on my cluster.
>
> This allows me to use the following schedulers for HDDs and SSDs:
> mq-deadline, kyber, bfq, none
>
> I’ve currently set the HDD scheduler to bfq and the SSD scheduler to none,
> however I’m still not sure if this is the best solution performance-wise.
> Does anyone have more experience with this and can maybe give me a
> recommendation? I’m not even sure if blk-mq is a good idea for ceph, since
> I haven’t really found anything on the topic.
>
> Best,
>
> Patrick
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com