Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
Chris Murphy writes:

> [root@fnuc ~]# cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
> [root@fnuc ~]# insmod /usr/lib/modules/4.19.8-300.fc29.x86_64/kernel/block/bfq.ko.xz
> [root@fnuc ~]# lsmod | grep bfq
> bfq                    69632  0
> [root@fnuc ~]# cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
> [root@fnuc ~]#
>
> This appears in dmesg at the time I insmod bfq:
> [148854.557310] io scheduler bfq registered
>
> And:
> [root@fnuc ~]# dmesg | grep sched
> [3.109164] io scheduler noop registered
> [3.109174] io scheduler deadline registered
> [3.109285] io scheduler cfq registered (default)
> [3.109294] io scheduler mq-deadline registered
> [3.302592] sched_clock: Marking stable (3301306612, 1243588)->(3308582463, -6032263)
> [11.129620] systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart.
> [148854.557310] io scheduler bfq registered
> [root@fnuc ~]#
>
> If you want the entire dmesg I'll send it offlist.

Yes, please.

-Jeff

___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: https://getfedora.org/code-of-conduct.html
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
[root@fnuc ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@fnuc ~]# insmod /usr/lib/modules/4.19.8-300.fc29.x86_64/kernel/block/bfq.ko.xz
[root@fnuc ~]# lsmod | grep bfq
bfq                    69632  0
[root@fnuc ~]# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
[root@fnuc ~]#

This appears in dmesg at the time I insmod bfq:
[148854.557310] io scheduler bfq registered

And:
[root@fnuc ~]# dmesg | grep sched
[3.109164] io scheduler noop registered
[3.109174] io scheduler deadline registered
[3.109285] io scheduler cfq registered (default)
[3.109294] io scheduler mq-deadline registered
[3.302592] sched_clock: Marking stable (3301306612, 1243588)->(3308582463, -6032263)
[11.129620] systemd[1]: systemd-journald.service: Service has no hold-off time (RestartSec=0), scheduling restart.
[148854.557310] io scheduler bfq registered
[root@fnuc ~]#

If you want the entire dmesg I'll send it offlist.

--
Chris Murphy
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
Chris Murphy writes:

> On Fri, Dec 14, 2018 at 11:34 AM Jeff Moyer wrote:
>>
>> stan writes:
>>
>>> On Thu, 13 Dec 2018 17:24:21 +0100
>>> Tomasz Torcz wrote:
>>>
>>>> On Wed, Dec 12, 2018 at 04:30:20PM -0700, stan wrote:
>>>>
>>>>> Enabled deadline and cfq again, but still no bfq available.
>>>>> $ cat /sys/block/sda/queue/scheduler
>>>>> noop deadline [cfq]
>>>>
>>>> Those are single-queue schedulers. Multiqueue uses different
>>>> schedulers: bfq, kyber, mq-deadline. MQ schedulers won't appear on
>>>> single-queue devices even if you modprobe such schedulers.
>>>> You probably need “scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y” kernel
>>>> commandline options, although I think those are default in recent
>>>> kernels.
>>>
>>> Thanks. I got the impression from the documentation I read that BFQ
>>> operated for both mq and single queue. In fact, IIRC it actually
>>> degraded mq performance slightly, but enhanced single-queue
>>> performance. I guess I was wrong. I'll try the above to see if it
>>> enables me to use bfq on single-queue devices.
>>
>> Yes, it is confusing. Basically, the block layer (and scsi) support a
>> legacy path and multi-queue (blk-mq, scsi-mq). However, even if you are
>> using blk-mq and scsi-mq, there are two types of devices: those that
>> support a single hardware queue, and those that support multiple
>> hardware queues.
>>
>> So, mq schedulers (such as kyber, mq-deadline and bfq) require blk-mq,
>> but they can be used on hardware that supports only a single queue.
>> This is the distinction that was being made in the bfq documentation.
>>
>> Clear as mud?
>
> In everything I've read, the scsi_mod.use_blk_mq=1 boot param is required
> if the kernel config option CONFIG_SCSI_MQ_DEFAULT is not set, and on
> Fedora kernels it is not set. So far, when I insmod bfq, that will make it
> show up as an option for nvme drives, but not hard drives. I haven't
> figured out how to make it show up for hard drives.
Can you send me your dmesg output? Thanks!

Jeff
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Fri, Dec 14, 2018 at 11:34 AM Jeff Moyer wrote:
>
> stan writes:
>
>> On Thu, 13 Dec 2018 17:24:21 +0100
>> Tomasz Torcz wrote:
>>
>>> On Wed, Dec 12, 2018 at 04:30:20PM -0700, stan wrote:
>>>
>>>> Enabled deadline and cfq again, but still no bfq available.
>>>> $ cat /sys/block/sda/queue/scheduler
>>>> noop deadline [cfq]
>>>
>>> Those are single-queue schedulers. Multiqueue uses different
>>> schedulers: bfq, kyber, mq-deadline. MQ schedulers won't appear on
>>> single-queue devices even if you modprobe such schedulers.
>>> You probably need “scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y” kernel
>>> commandline options, although I think those are default in recent
>>> kernels.
>>
>> Thanks. I got the impression from the documentation I read that BFQ
>> operated for both mq and single queue. In fact, IIRC it actually
>> degraded mq performance slightly, but enhanced single-queue
>> performance. I guess I was wrong. I'll try the above to see if it
>> enables me to use bfq on single-queue devices.
>
> Yes, it is confusing. Basically, the block layer (and scsi) support a
> legacy path and multi-queue (blk-mq, scsi-mq). However, even if you are
> using blk-mq and scsi-mq, there are two types of devices: those that
> support a single hardware queue, and those that support multiple
> hardware queues.
>
> So, mq schedulers (such as kyber, mq-deadline and bfq) require blk-mq,
> but they can be used on hardware that supports only a single queue.
> This is the distinction that was being made in the bfq documentation.
>
> Clear as mud?

In everything I've read, the scsi_mod.use_blk_mq=1 boot param is required
if the kernel config option CONFIG_SCSI_MQ_DEFAULT is not set, and on
Fedora kernels it is not set. So far, when I insmod bfq, that will make it
show up as an option for nvme drives, but not hard drives. I haven't
figured out how to make it show up for hard drives.
--
Chris Murphy
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
stan writes:

> On Thu, 13 Dec 2018 17:24:21 +0100
> Tomasz Torcz wrote:
>
>> On Wed, Dec 12, 2018 at 04:30:20PM -0700, stan wrote:
>>
>>> Enabled deadline and cfq again, but still no bfq available.
>>> $ cat /sys/block/sda/queue/scheduler
>>> noop deadline [cfq]
>>
>> Those are single-queue schedulers. Multiqueue uses different
>> schedulers: bfq, kyber, mq-deadline. MQ schedulers won't appear on
>> single-queue devices even if you modprobe such schedulers.
>> You probably need “scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y” kernel
>> commandline options, although I think those are default in recent
>> kernels.
>
> Thanks. I got the impression from the documentation I read that BFQ
> operated for both mq and single queue. In fact, IIRC it actually
> degraded mq performance slightly, but enhanced single-queue
> performance. I guess I was wrong. I'll try the above to see if it
> enables me to use bfq on single-queue devices.

Yes, it is confusing. Basically, the block layer (and scsi) support a
legacy path and multi-queue (blk-mq, scsi-mq). However, even if you are
using blk-mq and scsi-mq, there are two types of devices: those that
support a single hardware queue, and those that support multiple
hardware queues.

So, mq schedulers (such as kyber, mq-deadline and bfq) require blk-mq,
but they can be used on hardware that supports only a single queue.
This is the distinction that was being made in the bfq documentation.

Clear as mud?

Cheers,
Jeff
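[Editor's note] Jeff's distinction — blk-mq versus the legacy path, regardless of how many hardware queues the drive has — can be checked from userspace. The sketch below rests on an assumption not stated in the thread: that a device handled by blk-mq exposes a /sys/block/<dev>/mq/ directory in sysfs.

```shell
# Sketch: a blk-mq device exposes /sys/block/<dev>/mq/ (assumed sysfs layout).
uses_blk_mq() {
    [ -d "/sys/block/$1/mq" ]
}

for dev in /sys/block/*; do
    [ -e "$dev" ] || continue          # glob may not match at all
    name=$(basename "$dev")
    if uses_blk_mq "$name"; then
        echo "$name: blk-mq (bfq, kyber, mq-deadline can apply)"
    else
        echo "$name: legacy path (noop/deadline/cfq)"
    fi
done
```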
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, Dec 13, 2018 at 2:30 PM, stan wrote:

> Latency statistics:
>     min    max    avg      std_dev   conf99%
>     1.51   2.583  1.92533  0.576087  11.5249

Looks like this is what we want for Workstation, where latency is more
important than throughput?
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, 13 Dec 2018 19:59:14 +0100
Paolo Valente wrote:

>> Il giorno 13 dic 2018, alle ore 18:34, stan ha scritto:
>>
>> On Thu, 13 Dec 2018 17:46:30 +0100
>> Paolo Valente wrote:
>>
>>>> Il giorno 13 dic 2018, alle ore 17:41, stan ha scritto:
>>>>
>>> You don't have bfq for a comparison, but you can still get an idea
>>> of how good your system is, by comparing these start-up times with
>>> how long the same application takes to start when there is no
>>> I/O. Just do
>>>
>>> sudo ./comm_startup_lat.sh 0 0 seq 3 "replay-startup-io gnometerm"
>>>
>> cfq with the above command (without I/O): *BIG* difference.
>
> Great! (for bfq :) )
>
>> Latency statistics:
>>     min    max    avg      std_dev   conf99%
>>     1.34   1.704  1.53367  0.183118  3.66336
>> Aggregated throughput:
>>     min    max    avg      std_dev  conf99%
>>     0      8.03   5.23143  2.60745  15.4099
>> Read throughput:
>>     min    max    avg      std_dev  conf99%
>>     0      8.03   5.22571  2.60522  15.3967
>> Write throughput:
>>     min    max    avg         std_dev     conf99%
>>     0      0.02   0.00571429  0.00786796  0.0464991
>>
>>> and get ready to be surprised (next surprise when/if you'll try
>>> with bfq ...)
>>
>> I had a response saying that bfq isn't available for single-queue
>> devices, but there might be a workaround. So it might or might not
>> happen, depending on whether I can get it working.
>
> Actually, there's still a little confusion on this point. First,
> blk-mq *is not* only for multi-queue devices. blk-mq is for any kind
> of block device. If you have a fast, single-queue SSD, then
> blk-mq is likely to make it go faster. If you have a multi-queue
> drive, which implicitly means that your drive is very fast (according
> to the current standards for 'fast'), then it is 100% sure that
> blk-mq is the only way to utilize a high portion of the max speed of
> your multi-queue monster.
>
> To use blk-mq, i.e., to have blk-mq handle your storage, you need
> (only) to tell the I/O stack that you want blk-mq to manage the I/O
> for the driver of your storage. In this respect, SCSI is for sure the
> most used generic storage driver. So, according to the instructions
> already provided by others, you can have blk-mq handle your storage
> device by, e.g., adding "scsi_mod.use_blk_mq=y" as a kernel boot
> option. Such a choice of yours is not constrained, in any respect, by
> the nature of your drive, be it an SD card, eMMC, HDD, SSD or whatever
> you want. As for multi-queue devices, they are handled by the
> NVMe driver, and for that one only blk-mq is available.
>
> Once you have switched to blk-mq for your drive, you will have the set
> of I/O schedulers that live in blk-mq. bfq is among these schedulers.
> Actually, there is also an out-of-tree bfq available for the good
> old legacy block, but this is another story.
>
> Finally, from 4.21 there will be no legacy block any longer. Only
> blk-mq will be available, so only blk-mq I/O schedulers will be
> available.

And finally, once I added scsi_mod.use_blk_mq=y to the kernel command
line, I was able to set bfq for my I/O scheduler (for me, under blk-mq
there is only none or bfq). Here are the results of your utility using
bfq. The latency nearly matches the no-I/O-load case. Thanks for writing
a utility that is so easy to use, and allows the evaluation of different
schedulers.

bfq

Latency statistics:
    min    max    avg      std_dev   conf99%
    1.51   2.583  1.92533  0.576087  11.5249
Aggregated throughput:
    min    max    avg      std_dev  conf99%
    30.12  142.3  73.5314  45.6032  269.512
Read throughput:
    min    max     avg      std_dev  conf99%
    26.45  136.92  69.5629  44.7932  264.725
Write throughput:
    min    max   avg      std_dev   conf99%
    2.44   5.38  3.96857  0.948603  5.60619

>>>> cfq
>>>>
>>>> Latency statistics:
>>>>     min     max     avg      std_dev  conf99%
>>>>     22.142  27.157  24.1967  2.6273   52.5604
>>>> Aggregated throughput:
>>>>     min    max     avg      std_dev  conf99%
>>>>     67.29  139.74  105.491  19.245   39.7628
>>>> Read throughput:
>>>>     min    max     avg      std_dev  conf99%
>>>>     51.73  135.67  102.402  21.3985  44.2123
>>>> Write throughput:
>>>>     min   max    avg      std_dev  conf99%
>>>>     0.01  46.29  3.08857  8.37179  17.2972
>>>>
>>>> noop
>>>>
>>>> Latency statistics:
>>>>     min     max     avg      std_dev   conf99%
>>>>     40.861  42.021  41.3637  0.595266  11.9086
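[Editor's note] The switch stan describes — adding scsi_mod.use_blk_mq=y to the kernel command line — can be verified on the running system by inspecting /proc/cmdline. The helper below is a hypothetical sketch; the commented grubby invocation is Fedora's usual way to persist a boot argument (run as root), named here as an assumption rather than taken from the thread.

```shell
# Hypothetical helper: does a kernel command line request blk-mq for SCSI?
has_blk_mq() {
    case " $1 " in
        *" scsi_mod.use_blk_mq=y "* | *" scsi_mod.use_blk_mq=1 "*) return 0 ;;
        *) return 1 ;;
    esac
}

if has_blk_mq "$(cat /proc/cmdline 2>/dev/null)"; then
    echo "blk-mq requested for SCSI devices on this boot"
else
    echo "legacy path; to persist the switch on Fedora (as root):"
    echo '  grubby --update-kernel=ALL --args="scsi_mod.use_blk_mq=y"'
fi
```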
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
> Il giorno 13 dic 2018, alle ore 18:34, stan ha scritto:
>
> On Thu, 13 Dec 2018 17:46:30 +0100
> Paolo Valente wrote:
>
>>> Il giorno 13 dic 2018, alle ore 17:41, stan ha scritto:
>>>
>> You don't have bfq for a comparison, but you can still get an idea of
>> how good your system is, by comparing these start-up times with how
>> long the same application takes to start when there is no I/O. Just do
>>
>> sudo ./comm_startup_lat.sh 0 0 seq 3 "replay-startup-io gnometerm"
>>
> cfq with the above command (without I/O): *BIG* difference.

Great! (for bfq :) )

> Latency statistics:
>     min    max    avg      std_dev   conf99%
>     1.34   1.704  1.53367  0.183118  3.66336
> Aggregated throughput:
>     min    max    avg      std_dev  conf99%
>     0      8.03   5.23143  2.60745  15.4099
> Read throughput:
>     min    max    avg      std_dev  conf99%
>     0      8.03   5.22571  2.60522  15.3967
> Write throughput:
>     min    max    avg         std_dev     conf99%
>     0      0.02   0.00571429  0.00786796  0.0464991
>
>> and get ready to be surprised (next surprise when/if you'll try with
>> bfq ...)
>
> I had a response saying that bfq isn't available for single-queue
> devices, but there might be a workaround. So it might or might not
> happen, depending on whether I can get it working.

Actually, there's still a little confusion on this point. First, blk-mq
*is not* only for multi-queue devices. blk-mq is for any kind of block
device. If you have a fast, single-queue SSD, then blk-mq is likely to
make it go faster. If you have a multi-queue drive, which implicitly
means that your drive is very fast (according to the current standards
for 'fast'), then it is 100% sure that blk-mq is the only way to utilize
a high portion of the max speed of your multi-queue monster.

To use blk-mq, i.e., to have blk-mq handle your storage, you need (only)
to tell the I/O stack that you want blk-mq to manage the I/O for the
driver of your storage. In this respect, SCSI is for sure the most used
generic storage driver. So, according to the instructions already
provided by others, you can have blk-mq handle your storage device by,
e.g., adding "scsi_mod.use_blk_mq=y" as a kernel boot option. Such a
choice of yours is not constrained, in any respect, by the nature of
your drive, be it an SD card, eMMC, HDD, SSD or whatever you want. As
for multi-queue devices, they are handled by the NVMe driver, and for
that one only blk-mq is available.

Once you have switched to blk-mq for your drive, you will have the set
of I/O schedulers that live in blk-mq. bfq is among these schedulers.
Actually, there is also an out-of-tree bfq available for the good old
legacy block, but this is another story.

Finally, from 4.21 there will be no legacy block any longer. Only
blk-mq will be available, so only blk-mq I/O schedulers will be
available.

Thanks for trying my tests,
Paolo

>>> cfq
>>>
>>> Latency statistics:
>>>     min     max     avg      std_dev  conf99%
>>>     22.142  27.157  24.1967  2.6273   52.5604
>>> Aggregated throughput:
>>>     min    max     avg      std_dev  conf99%
>>>     67.29  139.74  105.491  19.245   39.7628
>>> Read throughput:
>>>     min    max     avg      std_dev  conf99%
>>>     51.73  135.67  102.402  21.3985  44.2123
>>> Write throughput:
>>>     min   max    avg      std_dev  conf99%
>>>     0.01  46.29  3.08857  8.37179  17.2972
>>>
>>> noop
>>>
>>> Latency statistics:
>>>     min     max     avg      std_dev   conf99%
>>>     40.861  42.021  41.3637  0.595266  11.9086
>>> Aggregated throughput:
>>>     min    max    avg      std_dev  conf99%
>>>     45.66  72.89  55.9847  5.99054  9.87365
>>> Read throughput:
>>>     min    max    avg      std_dev  conf99%
>>>     41.69  70.85  51.9495  6.02467  9.9299
>>> Write throughput:
>>>     min  max  avg      std_dev  conf99%
>>>     0    7.9  4.03527  1.62392  2.67656
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, 13 Dec 2018 17:46:30 +0100
Paolo Valente wrote:

>> Il giorno 13 dic 2018, alle ore 17:41, stan ha scritto:
>>
> You don't have bfq for a comparison, but you can still get an idea of
> how good your system is, by comparing these start-up times with how
> long the same application takes to start when there is no I/O. Just do
>
> sudo ./comm_startup_lat.sh 0 0 seq 3 "replay-startup-io gnometerm"
>
cfq with the above command (without I/O): *BIG* difference.

Latency statistics:
    min    max    avg      std_dev   conf99%
    1.34   1.704  1.53367  0.183118  3.66336
Aggregated throughput:
    min    max    avg      std_dev  conf99%
    0      8.03   5.23143  2.60745  15.4099
Read throughput:
    min    max    avg      std_dev  conf99%
    0      8.03   5.22571  2.60522  15.3967
Write throughput:
    min    max    avg         std_dev     conf99%
    0      0.02   0.00571429  0.00786796  0.0464991

> and get ready to be surprised (next surprise when/if you'll try with
> bfq ...)

I had a response saying that bfq isn't available for single-queue
devices, but there might be a workaround. So it might or might not
happen, depending on whether I can get it working.

>> cfq
>>
>> Latency statistics:
>>     min     max     avg      std_dev  conf99%
>>     22.142  27.157  24.1967  2.6273   52.5604
>> Aggregated throughput:
>>     min    max     avg      std_dev  conf99%
>>     67.29  139.74  105.491  19.245   39.7628
>> Read throughput:
>>     min    max     avg      std_dev  conf99%
>>     51.73  135.67  102.402  21.3985  44.2123
>> Write throughput:
>>     min   max    avg      std_dev  conf99%
>>     0.01  46.29  3.08857  8.37179  17.2972
>>
>> noop
>>
>> Latency statistics:
>>     min     max     avg      std_dev   conf99%
>>     40.861  42.021  41.3637  0.595266  11.9086
>> Aggregated throughput:
>>     min    max    avg      std_dev  conf99%
>>     45.66  72.89  55.9847  5.99054  9.87365
>> Read throughput:
>>     min    max    avg      std_dev  conf99%
>>     41.69  70.85  51.9495  6.02467  9.9299
>> Write throughput:
>>     min  max  avg      std_dev  conf99%
>>     0    7.9  4.03527  1.62392  2.67656
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, 13 Dec 2018 17:24:21 +0100
Tomasz Torcz wrote:

> On Wed, Dec 12, 2018 at 04:30:20PM -0700, stan wrote:
>
>> Enabled deadline and cfq again, but still no bfq available.
>> $ cat /sys/block/sda/queue/scheduler
>> noop deadline [cfq]
>
> Those are single-queue schedulers. Multiqueue uses different
> schedulers: bfq, kyber, mq-deadline. MQ schedulers won't appear on
> single-queue devices even if you modprobe such schedulers.
> You probably need “scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y” kernel
> commandline options, although I think those are default in recent
> kernels.

Thanks. I got the impression from the documentation I read that BFQ
operated for both mq and single queue. In fact, IIRC it actually
degraded mq performance slightly, but enhanced single-queue
performance. I guess I was wrong. I'll try the above to see if it
enables me to use bfq on single-queue devices.
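[Editor's note] The sysfs check quoted above generalizes to every block device: each scheduler file lists the available schedulers, with the active one in brackets. A minimal sketch (the active_sched helper is illustrative, not part of any kernel interface):

```shell
# The active scheduler is the bracketed entry,
# e.g. "noop deadline [cfq]" -> cfq
active_sched() {
    expr "$1" : '.*\[\(.*\)\]'
}

for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue            # glob may not match
    dev=${f#/sys/block/}; dev=${dev%/queue/scheduler}
    line=$(cat "$f")
    echo "$dev: available: $line; active: $(active_sched "$line")"
done
```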
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
> Il giorno 13 dic 2018, alle ore 17:53, stan ha scritto:
>
> On Thu, 13 Dec 2018 13:42:24 +0100
> Paolo Valente wrote:
>
>> To test the behavior of your system, why don't you check, e.g., how
>> long it takes to start an application while there is some background
>> I/O?
>>
>> A super quick way to do this is
>>
>> git clone https://github.com/Algodev-github/S
>> cd S/comm_startup_lat
>> sudo ./comm_startup_lat.sh 5 5 seq 3 "replay-startup-io gnometerm"
>>
>> The last command line
>> - starts the reading of 5 files plus the writing of 5 other files
>> - replays, three times, the I/O that gnome terminal does while
>>   starting up (if you want I can tell you how to change the last
>>   command line so as to execute the original application, but you
>>   would get the same results)
>> - for each attempt, measures how long this start-up I/O takes to
>>   complete.
>
> Just a note: I would feel a lot more comfortable with this utility if
> it didn't have to run as root. Paranoia. Could you add the
> functionality that if it is run as a normal user, it tests the I/O
> scheduling scheme currently enabled? That is, it checks if it is
> running as root. If it isn't, it just uses whatever I/O scheduler is
> currently set, ignoring any parameter on the command line. Running as
> root, it behaves exactly as it does now.
>
> The user would be responsible for issuing the
>
> echo <scheduler> > /sys/block/<device>/queue/scheduler
>
> as root if they wanted to run as a normal user.

I do agree with your point, and I already tried to make this run as
non-root. The actual problem is not the scheduler switch, but the need
to drop caches before every start-up attempt. Without that, only the
first attempt might be reliable, in case data are not already in the
cache even at the first iteration. To drop caches, it seems necessary
to be root. Any suggestion to work around this issue would be super
welcome!
Thanks,
Paolo
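[Editor's note] The cache drop Paolo mentions is normally done by writing to /proc/sys/vm/drop_caches, which only root may write. A minimal sketch of that step follows; the target parameter is a hypothetical hook added purely for illustration and testing, the real control file is fixed.

```shell
# Sketch of the cache-drop step that forces cold-cache start-up runs.
# Writing "3" frees the page cache plus dentries and inodes; the real
# target is /proc/sys/vm/drop_caches and requires root.
drop_caches() {
    target=${1:-/proc/sys/vm/drop_caches}   # parameter is illustrative only
    sync                                     # flush dirty pages first
    echo 3 > "$target"
}
```

One common way to keep the rest of a benchmark unprivileged is to delegate only this step, e.g. through a narrow sudoers rule for a tiny wrapper script.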
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, 13 Dec 2018 13:42:24 +0100
Paolo Valente wrote:

> To test the behavior of your system, why don't you check, e.g., how
> long it takes to start an application while there is some background
> I/O?
>
> A super quick way to do this is
>
> git clone https://github.com/Algodev-github/S
> cd S/comm_startup_lat
> sudo ./comm_startup_lat.sh 5 5 seq 3 "replay-startup-io gnometerm"
>
> The last command line
> - starts the reading of 5 files plus the writing of 5 other files
> - replays, three times, the I/O that gnome terminal does while
>   starting up (if you want I can tell you how to change the last
>   command line so as to execute the original application, but you
>   would get the same results)
> - for each attempt, measures how long this start-up I/O takes to
>   complete.

Just a note: I would feel a lot more comfortable with this utility if
it didn't have to run as root. Paranoia. Could you add the
functionality that if it is run as a normal user, it tests the I/O
scheduling scheme currently enabled? That is, it checks if it is
running as root. If it isn't, it just uses whatever I/O scheduler is
currently set, ignoring any parameter on the command line. Running as
root, it behaves exactly as it does now.

The user would be responsible for issuing the

echo <scheduler> > /sys/block/<device>/queue/scheduler

as root if they wanted to run as a normal user.
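[Editor's note] The per-device switch stan refers to can be wrapped with a small guard that refuses schedulers the device does not offer; set_scheduler below is a hypothetical helper, not part of the thread's tool, and the write itself still needs root.

```shell
# Hypothetical helper: switch a device's I/O scheduler via sysfs, but
# only if the device actually offers the requested one. Run as root.
set_scheduler() {
    dev=$1; want=$2
    # strip the [brackets] marking the active scheduler
    avail=$(cat "/sys/block/$dev/queue/scheduler" 2>/dev/null | tr -d '[]')
    case " $avail " in
        *" $want "*)
            echo "$want" > "/sys/block/$dev/queue/scheduler" ;;
        *)
            echo "$want not offered by $dev (have: ${avail:-nothing})" >&2
            return 1 ;;
    esac
}
```

Usage would be e.g. `set_scheduler sda bfq` (device and scheduler names are examples).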
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, 13 Dec 2018 13:42:24 +0100
Paolo Valente wrote:

> To test the behavior of your system, why don't you check, e.g., how
> long it takes to start an application while there is some background
> I/O?
>
> A super quick way to do this is
>
> git clone https://github.com/Algodev-github/S
> cd S/comm_startup_lat
> sudo ./comm_startup_lat.sh 5 5 seq 3 "replay-startup-io gnometerm"
>
> The last command line
> - starts the reading of 5 files plus the writing of 5 other files
> - replays, three times, the I/O that gnome terminal does while
>   starting up (if you want I can tell you how to change the last
>   command line so as to execute the original application, but you
>   would get the same results)
> - for each attempt, measures how long this start-up I/O takes to
>   complete.

Results for cfq and noop, haven't enabled bfq yet. I interpret these as
showing that cfq was a large improvement for all categories except write
throughput, where it actually degraded performance.

cfq

Latency statistics:
    min     max     avg      std_dev  conf99%
    22.142  27.157  24.1967  2.6273   52.5604
Aggregated throughput:
    min    max     avg      std_dev  conf99%
    67.29  139.74  105.491  19.245   39.7628
Read throughput:
    min    max     avg      std_dev  conf99%
    51.73  135.67  102.402  21.3985  44.2123
Write throughput:
    min   max    avg      std_dev  conf99%
    0.01  46.29  3.08857  8.37179  17.2972

noop

Latency statistics:
    min     max     avg      std_dev   conf99%
    40.861  42.021  41.3637  0.595266  11.9086
Aggregated throughput:
    min    max    avg      std_dev  conf99%
    45.66  72.89  55.9847  5.99054  9.87365
Read throughput:
    min    max    avg      std_dev  conf99%
    41.69  70.85  51.9495  6.02467  9.9299
Write throughput:
    min  max  avg      std_dev  conf99%
    0    7.9  4.03527  1.62392  2.67656
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
> Il giorno 13 dic 2018, alle ore 17:17, stan ha scritto:
>
> On Thu, 13 Dec 2018 13:42:24 +0100
> Paolo Valente wrote:
>
>> To test the behavior of your system, why don't you check, e.g., how
>> long it takes to start an application while there is some background
>> I/O?
>>
>> A super quick way to do this is
>>
>> git clone https://github.com/Algodev-github/S
>> cd S/comm_startup_lat
>> sudo ./comm_startup_lat.sh 5 5 seq 3 "replay-startup-io gnometerm"
>>
>> The last command line
>> - starts the reading of 5 files plus the writing of 5 other files
>> - replays, three times, the I/O that gnome terminal does while
>>   starting up (if you want I can tell you how to change the last
>>   command line so as to execute the original application, but you
>>   would get the same results)
>> - for each attempt, measures how long this start-up I/O takes to
>>   complete.
>
> Thanks for this. I suspect I wasn't really stressing my system when I
> was evaluating it, and it was subjective. I'm running a kernel with cfq
> right now, but I will boot the noop kernel when I get a chance and test
> it. I suppose I could just switch to noop I/O scheduling instead.
> Should be interesting.

Consider that noop means legacy block too. From 4.21, the equivalent of
noop will be none, in blk-mq.

At any rate, you can do these tests with cfq too. Results may surprise
you ... And, if the results feel like just numbers to you, I'll tell
you how to change the command line for starting real applications.
Thanks,
Paolo
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Wed, Dec 12, 2018 at 04:30:20PM -0700, stan wrote:

> On Wed, 12 Dec 2018 14:41:37 -0700
> stan wrote:
>
>> On Wed, 12 Dec 2018 16:07:49 -0500
>> Jeff Moyer wrote:
>>
>> Thanks for your insight. Doesn't look good for my use of BFQ.
>>
>>> Note that you can change the current I/O scheduler for any block
>>> device by echo-ing into /sys/block/<device>/queue/scheduler. Cat-ing
>>> that file will give you the list of available schedulers.
>>
>> That's part of the problem. BFQ doesn't appear in the list of
>> available schedulers. When I cat that location for my disks, I see
>> [noop]. Since CFQ does appear there if it is compiled into the
>> kernel, I'll have to look into what is done for CFQ and see how hard
>> it would be to patch the kernel to repeat that behavior for BFQ.
>
> Enabled deadline and cfq again, but still no bfq available.
> $ cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]

Those are single-queue schedulers. Multiqueue uses different
schedulers: bfq, kyber, mq-deadline. MQ schedulers won't appear on
single-queue devices even if you modprobe such schedulers.
You probably need “scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y” kernel
commandline options, although I think those are default in recent
kernels.

--
Tomasz Torcz
xmpp: zdzich...@chrome.pl
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Thu, 13 Dec 2018 13:42:24 +0100 Paolo Valente wrote:
> To test the behavior of your system, why don't you check, e.g., how
> long it takes to start an application while there is some background
> I/O?
>
> A super quick way to do this is
>
> git clone https://github.com/Algodev-github/S
> cd S/comm_startup_lat
> sudo ./comm_startup_lat.sh 5 5 seq 3 "replay-startup-io gnometerm"
>
> The last command line
> - starts the reading of 5 files plus the writing of 5 other files;
> - replays, three times, the I/O that gnome-terminal does while
>   starting up (if you want, I can tell you how to change the last
>   command line so as to execute the original application, but you
>   would get the same results);
> - for each attempt, measures how long this start-up I/O takes to
>   complete.

Thanks for this. I suspect I wasn't really stressing my system when I was evaluating it, and my impression was subjective. I'm running a kernel with cfq right now, but I will boot the noop kernel when I get a chance and test it. I suppose I could just switch to noop I/O scheduling instead. Should be interesting.
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
> On 12 Dec 2018, at 22:41, stan wrote:
>
> On Wed, 12 Dec 2018 16:07:49 -0500 Jeff Moyer wrote:
>
> Thanks for your insight. Doesn't look good for my use of BFQ.
>
>> Note that you can change the current I/O scheduler for any block
>> device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing
>> that file will give you the list of available schedulers.
>
> That's part of the problem. BFQ doesn't appear in the list of
> available schedulers. When I cat that location for my disks, I see
> [noop]. Since CFQ does appear there if it is compiled into the kernel,
> I'll have to look into what is done for CFQ and see how hard it would
> be to patch the kernel to repeat that behavior for BFQ.
>
> My use case is not mq, so after reading one of the links in this
> thread about performance, I saw that BFQ gave a ~20-30% boost in
> disk I/O performance, and enhanced low-latency performance (desktop
> responsiveness) for single queue. That's what I want to capture by using
> BFQ. I wonder if that is my problem. From what Chris said, an mq
> scheduler is required in order to use BFQ, whether it is for mq or
> single-queue use. I'll try that. I normally use deadline and CFQ for
> scheduling. Back to the compiler.
>
> I'm surprised this is so difficult. It's been in the kernel since the
> 2.x series, and usually the configuration options are excellent for
> allowing variation in how the kernel is configured.
>
> On the plus side, I notice only slight degradation in behavior using
> noop scheduling. :-) Maybe I should just skip scheduling. :-D

To test the behavior of your system, why don't you check, e.g., how long it takes to start an application while there is some background I/O?
A super quick way to do this is

git clone https://github.com/Algodev-github/S
cd S/comm_startup_lat
sudo ./comm_startup_lat.sh 5 5 seq 3 "replay-startup-io gnometerm"

The last command line
- starts the reading of 5 files plus the writing of 5 other files;
- replays, three times, the I/O that gnome-terminal does while
  starting up (if you want, I can tell you how to change the last
  command line so as to execute the original application, but you
  would get the same results);
- for each attempt, measures how long this start-up I/O takes to
  complete.

Paolo
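The script prints one start-up latency per attempt. A quick way to summarize several runs when comparing schedulers (this helper is my own sketch, not part of the S suite):

```shell
# Average a column of per-attempt latencies (seconds), one per line,
# as you might collect from repeated comm_startup_lat.sh runs.
mean_latency() {
  awk '{ sum += $1; n++ } END { if (n) printf "%.2f\n", sum / n }'
}

printf '1.20\n0.90\n1.50\n' | mean_latency   # -> 1.20
```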
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Wed, 12 Dec 2018 16:50:10 -0700 Chris Murphy wrote:
> OK that worked for an nvme drive, but not for an internal SATA HDD.
>
> $ sudo lsmod | grep bfq
> $ sudo cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
> $ sudo insmod /usr/lib/modules/4.19.8-300.fc29.x86_64/kernel/block/bfq.ko.xz
> $ sudo cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
> $ sudo lsmod | grep bfq
> bfq                    69632  0
> $
>
> So yeah this seems a lot more difficult than it should be.

Thanks for the confirmation. I built all the schedulers into the kernel. I made the assumption that the kernel would be aware of those built-in drivers. For deadline and cfq, true; for bfq, I thought it was not true. The plot thickens. I looked in the journal and found

localhost.localdomain kernel: io scheduler bfq registered
localhost.localdomain kernel: io scheduler cfq registered (default)
localhost.localdomain kernel: io scheduler deadline registered
localhost.localdomain kernel: io scheduler noop registered

So the kernel is aware of bfq. There must be something missing for bfq in order for it to be treated the same as cfq or deadline.
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
OK that worked for an nvme drive, but not for an internal SATA HDD.

$ sudo lsmod | grep bfq
$ sudo cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
$ sudo insmod /usr/lib/modules/4.19.8-300.fc29.x86_64/kernel/block/bfq.ko.xz
$ sudo cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
$ sudo lsmod | grep bfq
bfq                    69632  0
$

So yeah, this seems a lot more difficult than it should be.

Chris Murphy
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Wed, Dec 12, 2018 at 4:31 PM stan wrote:
>
> Enabled deadline and cfq again, but still no bfq available.
>
> $ cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]

Tried this and it showed up for me:

[root@flap ~]# cat /sys/block/nvme0n1/queue/scheduler
[none] mq-deadline
[root@flap ~]# insmod /usr/lib/modules/4.20.0-0.rc6.git0.1.fc30.x86_64/kernel/block/bfq.ko.xz
[root@flap ~]# cat /sys/block/nvme0n1/queue/scheduler
[none] mq-deadline bfq

--
Chris Murphy
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Wed, 12 Dec 2018 14:41:37 -0700 stan wrote:
> On Wed, 12 Dec 2018 16:07:49 -0500 Jeff Moyer wrote:
>
> Thanks for your insight. Doesn't look good for my use of BFQ.
>
> > Note that you can change the current I/O scheduler for any block
> > device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing
> > that file will give you the list of available schedulers.
>
> That's part of the problem. BFQ doesn't appear in the list of
> available schedulers. When I cat that location for my disks, I see
> [noop]. Since CFQ does appear there if it is compiled into the
> kernel, I'll have to look into what is done for CFQ and see how hard
> it would be to patch the kernel to repeat that behavior for BFQ.

Enabled deadline and cfq again, but still no bfq available.

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Wed, Dec 12, 2018 at 2:07 PM Jeff Moyer wrote:
>
> Hi,
>
> Chris Murphy writes:
>
> > I used two boot params: scsi_mod.use_blk_mq=1 elevator=bfq. I don't
> > think that's a good way for a distribution to set the default though.
>
> You shouldn't need the "scsi_mod.use_blk_mq=1" option. As of 4.19,
> scsi_mq is the default, and by 4.21 the legacy path will be gone. The
> right way for the distro to set the default I/O scheduler is to use udev
> rules.

Like I mentioned earlier in the thread, Fedora kernels do not set scsi_mq as the default. I don't know why.

# CONFIG_SCSI_MQ_DEFAULT is not set

--
Chris Murphy
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
On Wed, 12 Dec 2018 16:07:49 -0500 Jeff Moyer wrote:

Thanks for your insight. Doesn't look good for my use of BFQ.

> Note that you can change the current I/O scheduler for any block
> device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing
> that file will give you the list of available schedulers.

That's part of the problem. BFQ doesn't appear in the list of available schedulers. When I cat that location for my disks, I see [noop]. Since CFQ does appear there if it is compiled into the kernel, I'll have to look into what is done for CFQ and see how hard it would be to patch the kernel to repeat that behavior for BFQ.

My use case is not mq, so after reading one of the links in this thread about performance, I saw that BFQ gave a ~20-30% boost in disk I/O performance, and enhanced low-latency performance (desktop responsiveness) for single queue. That's what I want to capture by using BFQ. I wonder if that is my problem. From what Chris said, an mq scheduler is required in order to use BFQ, whether it is for mq or single-queue use. I'll try that. I normally use deadline and CFQ for scheduling. Back to the compiler.

I'm surprised this is so difficult. It's been in the kernel since the 2.x series, and usually the configuration options are excellent for allowing variation in how the kernel is configured.

On the plus side, I notice only slight degradation in behavior using noop scheduling. :-) Maybe I should just skip scheduling. :-D
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
Hi,

Chris Murphy writes:

> Really Readding devel@ this time...
>
> On Wed, Dec 12, 2018 at 11:08 AM stan wrote:
>>
>> On Tue, 11 Dec 2018 11:48:29 - "Alan Jenkins" wrote:
>>
>> > [3] BFQ:
>> > http://algo.ing.unimo.it/people/paolo/disk_sched/description.php
>> >
>> > [4]
>> > https://unix.stackexchange.com/questions/375600/how-to-enable-and-use-the-bfq-scheduler
>> >
>> > [5] "I'd prefer if the distros would lead the way on this, as they are
>> > the ones that will most likely see the most bug reports" - Jens
>> > Axboe, https://www.spinics.net/lists/linux-block/msg31062.html
>>
>> I compiled a custom kernel from the fedora src.rpm for 4.19.8. I turned
>> off all schedulers except NOOP and BFQ. But there was no way in the
>> configuration process (make menuconfig) to set BFQ as default. I tried
>> setting it in kernel-local, but the build process errored because it
>> said NOOP is the default and that disagreed with my choice. I'm
>> running the kernel and it is using noop. And there is no way to change
>> it in the /sys hierarchy.
>>
>> So, how do I get a fedora kernel to run BFQ?
>
> Short version:
> Yep, so far in 4.20 there is neither a CONFIG_DEFAULT_BFQ nor
> CONFIG_DEFAULT_IOSCHED="bfq" near as I can tell. Maybe it's different
> for 4.21.

There isn't an option to select a default mq I/O scheduler, and I don't think there will be in the future, either. For mq devices, the kernel policy is to use mq-deadline for single-queue devices (so long as mq-deadline is available), and to not specify an elevator otherwise.

Note that you can change the current I/O scheduler for any block device by echo-ing into /sys/block/<dev>/queue/scheduler. Cat-ing that file will give you the list of available schedulers.

> I used two boot params: scsi_mod.use_blk_mq=1 elevator=bfq. I don't
> think that's a good way for a distribution to set the default though.

You shouldn't need the "scsi_mod.use_blk_mq=1" option.
As of 4.19, scsi_mq is the default, and by 4.21 the legacy path will be gone. The right way for the distro to set the default I/O scheduler is to use udev rules.

Choosing an I/O scheduler really needs to take two things into account:
1) the properties of the storage
2) the intended workload

okay, 3 things:
3) required features (such as proportional I/O control, which is only available via bfq)

It's not easy to divine any of those from a udev rule, unfortunately, though heuristics can be applied. What we've done in the past is to pick an I/O scheduler that works reasonably well for all storage and workloads we care about. That involves (obviously) testing each I/O scheduler for each combination of storage and workload you want to support. The goal is to avoid the worst-case scenarios, not necessarily to achieve the best performance.

It's worth noting that tuned profiles can also be used to change the I/O scheduler for data disks. I think tuned leaves the OS disk alone.

> Icky version:
>
> I set the following in /etc/default/grub and then ran grub2-mkconfig
> (not on Rawhide!)
> GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1 elevator=bfq zswap.enabled=1
> zswap.max_pool_percent=25 zswap.compressor=lz4"
>
> I also created /etc/dracut.conf.d/bfq.conf containing:
> add_drivers+=" bfq "
> yes, with spaces, and rebuilt the initramfs
>
> But upon reboot, total implosion. Piles of USB errors and disconnects
> (the boot device is a Samsung FIT USB stick which fits flush in an
> Intel NUC). I didn't have time to troubleshoot what's causing this
> problem, other than to plug the USB stick into another computer to
> verify the stick is good and hasn't been corrupted. It's possibly
> related to the mq-blk bug in 4.19.0 through 4.19.7 - so I've since
> upgraded to 4.19.8, which has those patches, but I haven't had a chance
> to retest.

That sounds awful. It would be good if you could test the latest kernel and report the problem upstream if it still exists there. Thanks!
Jeff
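For the udev-rules approach Jeff mentions, a rule along these lines is the usual shape. This is a sketch, not Fedora's actual rule: the file path, the match expressions, and the choice of bfq for rotational disks are all illustrative, and the assignment only takes effect if bfq is built in or already loaded when the rule fires.

```
# /etc/udev/rules.d/60-ioscheduler.rules (illustrative path and content)
# Rotational SATA disks -> bfq; everything else is left to the kernel default.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", \
  ATTR{queue/scheduler}="bfq"
```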
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
Really Readding devel@ this time...

On Wed, Dec 12, 2018 at 11:08 AM stan wrote:
>
> On Tue, 11 Dec 2018 11:48:29 - "Alan Jenkins" wrote:
>
> > [3] BFQ:
> > http://algo.ing.unimo.it/people/paolo/disk_sched/description.php
> >
> > [4]
> > https://unix.stackexchange.com/questions/375600/how-to-enable-and-use-the-bfq-scheduler
> >
> > [5] "I'd prefer if the distros would lead the way on this, as they are
> > the ones that will most likely see the most bug reports" - Jens
> > Axboe, https://www.spinics.net/lists/linux-block/msg31062.html
>
> I compiled a custom kernel from the fedora src.rpm for 4.19.8. I turned
> off all schedulers except NOOP and BFQ. But there was no way in the
> configuration process (make menuconfig) to set BFQ as default. I tried
> setting it in kernel-local, but the build process errored because it
> said NOOP is the default and that disagreed with my choice. I'm
> running the kernel and it is using noop. And there is no way to change
> it in the /sys hierarchy.
>
> So, how do I get a fedora kernel to run BFQ?

Short version:
Yep, so far in 4.20 there is neither a CONFIG_DEFAULT_BFQ nor CONFIG_DEFAULT_IOSCHED="bfq" near as I can tell. Maybe it's different for 4.21.

I used two boot params: scsi_mod.use_blk_mq=1 elevator=bfq. I don't think that's a good way for a distribution to set the default though.

Icky version:

I set the following in /etc/default/grub and then ran grub2-mkconfig (not on Rawhide!)

GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1 elevator=bfq zswap.enabled=1 zswap.max_pool_percent=25 zswap.compressor=lz4"

I also created /etc/dracut.conf.d/bfq.conf containing:

add_drivers+=" bfq "

yes, with spaces, and rebuilt the initramfs.

But upon reboot, total implosion. Piles of USB errors and disconnects (the boot device is a Samsung FIT USB stick which fits flush in an Intel NUC).
I didn't have time to troubleshoot what's causing this problem, other than to plug the USB stick into another computer to verify the stick is good and hasn't been corrupted. It's possibly related to the mq-blk bug in 4.19.0 through 4.19.7 - so I've since upgraded to 4.19.8, which has those patches, but I haven't had a chance to retest.

--
Chris Murphy
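One way to sanity-check that a grub2-mkconfig change actually reached the booted kernel is to parse /proc/cmdline for the elevator= parameter. A small sketch (the helper name is my own):

```shell
# Extract the elevator= value from a kernel command line string,
# e.g. cmdline_elevator "$(cat /proc/cmdline)" after a reboot.
# Prints nothing if no elevator= parameter is present.
cmdline_elevator() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n 's/^elevator=//p'
}

cmdline_elevator 'scsi_mod.use_blk_mq=1 elevator=bfq zswap.enabled=1'   # -> bfq
```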
Re: Which I/O scheduler is Fedora switching to in 4.21? mq-deadline or BFQ?
I'd say BFQ on devices that are not multi-queue, and either none or mq-deadline on devices that are. This is detectable through sysfs.

[root@flap ~]# cat /sys/block/nvme0n1/queue/scheduler
[none] mq-deadline
[root@flap ~]# grep SCSI_MQ /boot/config-4.20.0-0.rc5.git2.1.fc30.x86_64
# CONFIG_SCSI_MQ_DEFAULT is not set

I'm not sure why it's not set; it is the default upstream.

Chris Murphy
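The "detectable through sysfs" part can also be done without reading the scheduler file: on these 4.x kernels, blk-mq devices carry an mq/ directory under /sys/block/<dev>. A sketch with an overridable sysfs root so it can be exercised without real hardware (the helper name and the optional second argument are my own additions):

```shell
# True if <dev> is on the blk-mq path, judged by the presence of the
# sysfs "mq" directory (present only for multiqueue devices).
# $2 optionally overrides the sysfs root, for testing.
uses_mq() {
  [ -d "${2:-/sys}/block/$1/mq" ]
}

# Example against a fake sysfs tree:
root=$(mktemp -d)
mkdir -p "$root/block/nvme0n1/mq" "$root/block/sda"
uses_mq nvme0n1 "$root" && echo "nvme0n1: mq"   # -> nvme0n1: mq
uses_mq sda "$root" || echo "sda: legacy"       # -> sda: legacy
```

A udev rule or boot script could use the same check to pick BFQ only for legacy (or, with the logic inverted, only for mq) devices.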