Re: [PATCH V4] scsi_debugfs: fix crash in scsi_show_rq()

2017-11-13 Thread Ming Lei
Hi James,

On Mon, Nov 13, 2017 at 10:55:52AM -0800, James Bottomley wrote:
> On Sat, 2017-11-11 at 10:43 +0800, Ming Lei wrote:
> > On Fri, Nov 10, 2017 at 08:51:58AM -0800, James Bottomley wrote:
> > > 
> > > On Fri, 2017-11-10 at 17:01 +0800, Ming Lei wrote:
> > > > 
> > > > cmd->cmnd can be allocated and freed dynamically in the
> > > > T10_PI_TYPE2_PROTECTION case, so scsi_show_rq() has to check it:
> > > > the request may already have been freed by the time it is dumped,
> > > > with cmd->cmnd set to NULL.
> > > > 
> > > > We choose to accept the read-after-free and dump as much request
> > > > data as possible.
> > > > 
> > > > This patch fixes the following kernel crash when dumping a request
> > > > via block's debugfs interface:
> > > > 
> > > > [  252.962045] BUG: unable to handle kernel NULL pointer
> > > > dereference
> > > > at   (null)
> > > > [  252.963007] IP: scsi_format_opcode_name+0x1a/0x1c0
> > > > [  252.963007] PGD 25e75a067 P4D 25e75a067 PUD 25e75b067 PMD 0
> > > > [  252.963007] Oops:  [#1] PREEMPT SMP
> > > > [  252.963007] Dumping ftrace buffer:
> > > > [  252.963007](ftrace buffer empty)
> > > > [  252.963007] Modules linked in: scsi_debug ebtable_filter
> > > > ebtables
> > > > ip6table_filter ip6_tables xt_CHECKSUM iptable_mangle
> > > > ipt_MASQUERADE
> > > > nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4
> > > > nf_defrag_ipv4
> > > > nf_nat_ipv4 nf_nat nf_conntrack libcrc32c bridge stp llc
> > > > iptable_filter fuse ip_tables sd_mod sg mptsas mptscsih mptbase
> > > > crc32c_intel ahci libahci nvme serio_raw scsi_transport_sas
> > > > libata
> > > > lpc_ich nvme_core virtio_scsi binfmt_misc dm_mod iscsi_tcp
> > > > libiscsi_tcp libiscsi scsi_transport_iscsi null_blk configs
> > > > [  252.963007] CPU: 1 PID: 1881 Comm: cat Not tainted 4.14.0-
> > > > rc2.blk_mq_io_hang+ #516
> > > > [  252.963007] Hardware name: QEMU Standard PC (Q35 + ICH9,
> > > > 2009),
> > > > BIOS 1.9.3-1.fc25 04/01/2014
> > > > [  252.963007] task: 88025e6f6000 task.stack:
> > > > c90001bd
> > > > [  252.963007] RIP: 0010:scsi_format_opcode_name+0x1a/0x1c0
> > > > [  252.963007] RSP: 0018:c90001bd3c50 EFLAGS: 00010286
> > > > [  252.963007] RAX: 4843 RBX: 0050 RCX:
> > > > 
> > > > [  252.963007] RDX:  RSI: 0050 RDI:
> > > > c90001bd3cd8
> > > > [  252.963007] RBP: c90001bd3c88 R08: 1000 R09:
> > > > 
> > > > [  252.963007] R10: 880275134000 R11: 88027513406c R12:
> > > > 0050
> > > > [  252.963007] R13: c90001bd3cd8 R14:  R15:
> > > > 
> > > > [  252.963007] FS:  7f4d11762700()
> > > > GS:88027fc4()
> > > > knlGS:
> > > > [  252.963007] CS:  0010 DS:  ES:  CR0: 80050033
> > > > [  252.963007] CR2:  CR3: 00025e789003 CR4:
> > > > 003606e0
> > > > [  252.963007] DR0:  DR1:  DR2:
> > > > 
> > > > [  252.963007] DR3:  DR6: fffe0ff0 DR7:
> > > > 0400
> > > > [  252.963007] Call Trace:
> > > > [  252.963007]  __scsi_format_command+0x27/0xc0
> > > > [  252.963007]  scsi_show_rq+0x5c/0xc0
> > > > [  252.963007]  ? seq_printf+0x4e/0x70
> > > > [  252.963007]  ? blk_flags_show+0x5b/0xf0
> > > > [  252.963007]  __blk_mq_debugfs_rq_show+0x116/0x130
> > > > [  252.963007]  blk_mq_debugfs_rq_show+0xe/0x10
> > > > [  252.963007]  seq_read+0xfe/0x3b0
> > > > [  252.963007]  ? __handle_mm_fault+0x631/0x1150
> > > > [  252.963007]  full_proxy_read+0x54/0x90
> > > > [  252.963007]  __vfs_read+0x37/0x160
> > > > [  252.963007]  ? security_file_permission+0x9b/0xc0
> > > > [  252.963007]  vfs_read+0x96/0x130
> > > > [  252.963007]  SyS_read+0x55/0xc0
> > > > [  252.963007]  entry_SYSCALL_64_fastpath+0x1a/0xa5
> > > > [  252.963007] RIP: 0033:0x7f4d1127e9b0
> > > > [  252.963007] RSP: 002b:7ffd27082568 EFLAGS: 0246
> > > > ORIG_RAX:
> > > > 
> > > > [  252.963007] RAX: ffda RBX: 7f4d1154bb20 RCX:
> > > > 7f4d1127e9b0
> > > > [  252.963007] RDX: 0002 RSI: 7f4d115a7000 RDI:
> > > > 0003
> > > > [  252.963007] RBP: 00021010 R08:  R09:
> > > > 
> > > > [  252.963007] R10: 037b R11: 0246 R12:
> > > > 00022000
> > > > [  252.963007] R13: 7f4d1154bb78 R14: 1000 R15:
> > > > 0002
> > > > [  252.963007] Code: c6 e8 1b ca 24 00 eb 8c e8 74 2c ae ff 0f 1f
> > > > 40
> > > > 00 0f 1f 44 00 00 55 48 89 e5 41 56 41 55 41 54 53 49 89 fd 49 89
> > > > f4
> > > > 48 83 ec 18 <44> 0f b6 32 48 c7 45 c8 00 00 00 00 65 48 8b 04 25
> > > > 28
> > > > 00 00 00
> > > > [  252.963007] RIP: scsi_format_opcode_name+0x1a/0x1c0 RSP:
> > > > c90001bd3c50
> > > > [  252.963007] CR2: 
> > > > [  252.963007] ---[ end 

Re: [PATCH V2] block-throttle: avoid double charge

2017-11-13 Thread Tejun Heo
On Mon, Nov 13, 2017 at 12:37:10PM -0800, Shaohua Li wrote:
> If a bio is throttled and then split, it can be resubmitted and enter
> throttling again, causing parts of the bio to be charged multiple
> times. If the cgroup has an IO limit, the double charge significantly
> harms performance. Bio splits have become quite common since the
> arbitrary-bio-size change.
> 
> To fix this, we always set the BIO_THROTTLED flag if a bio is throttled.
> If the bio is cloned/split, we copy the flag to the new bio too, to
> avoid the double charge. However, a cloned bio can be redirected to a
> new disk, and keeping the flag in that case would be wrong. The
> observation is that we always set a new disk for the bio in this case,
> so we can clear the flag in bio_set_dev().
> 
> This issue has existed for a long time; the arbitrary-bio-size change
> made it worse, so this should go into stable, at least since v4.2.
> 
> V1 -> V2: do not add an extra field to struct bio, per discussion with Tejun
> 
> Cc: Tejun Heo 
> Cc: Vivek Goyal 
> Cc: sta...@vger.kernel.org
> Signed-off-by: Shaohua Li 

Yeah, this works too.

Acked-by: Tejun Heo 

Thanks.

-- 
tejun


Re: [PATCH] block-throttle: avoid double charge

2017-11-13 Thread Tejun Heo
On Mon, Nov 13, 2017 at 12:03:38PM -0800, Tejun Heo wrote:
> So, one question I have is whether we need both BIO_THROTTLED and
> bi_throttled_disk.  Can't we replace BIO_THROTTLED w/
> bi_throttled_disk?

IOW, won't something like the following work?  (not tested yet)

Thanks.

---
 block/bio.c   |3 +++
 block/blk-throttle.c  |   20 +++-
 include/linux/blk_types.h |4 
 3 files changed, 14 insertions(+), 13 deletions(-)

--- a/block/bio.c
+++ b/block/bio.c
@@ -597,6 +597,9 @@ void __bio_clone_fast(struct bio *bio, s
 * so we don't set nor calculate new physical/hw segment counts here
 */
bio->bi_disk = bio_src->bi_disk;
+#ifdef CONFIG_BLK_DEV_THROTTLING
+   bio->bi_throttled_disk = bio_src->bi_throttled_disk;
+#endif
bio_set_flag(bio, BIO_CLONED);
bio->bi_opf = bio_src->bi_opf;
bio->bi_write_hint = bio_src->bi_write_hint;
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1051,13 +1051,12 @@ static void throtl_charge_bio(struct thr
tg->last_io_disp[rw]++;
 
/*
-* BIO_THROTTLED is used to prevent the same bio to be throttled
+* bi_throttled_disk is used to prevent the same bio to be throttled
 * more than once as a throttled bio will go through blk-throtl the
 * second time when it eventually gets issued.  Set it when a bio
 * is being charged to a tg.
 */
-   if (!bio_flagged(bio, BIO_THROTTLED))
-   bio_set_flag(bio, BIO_THROTTLED);
+   bio->bi_throttled_disk = bio->bi_disk;
 }
 
 /**
@@ -2131,8 +2130,11 @@ bool blk_throtl_bio(struct request_queue
 
WARN_ON_ONCE(!rcu_read_lock_held());
 
-   /* see throtl_charge_bio() */
-   if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw])
+   /*
+* See throtl_charge_bio().  If a bio is throttled against a disk
+* but remapped to another disk, we should throttle it again
+*/
+   if (bio->bi_throttled_disk == bio->bi_disk || !tg->has_rules[rw])
goto out;
 
spin_lock_irq(q->queue_lock);
@@ -2223,14 +2225,6 @@ again:
 out_unlock:
spin_unlock_irq(q->queue_lock);
 out:
-   /*
-* As multiple blk-throtls may stack in the same issue path, we
-* don't want bios to leave with the flag set.  Clear the flag if
-* being issued.
-*/
-   if (!throttled)
-   bio_clear_flag(bio, BIO_THROTTLED);
-
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
if (throttled || !td->track_bio_latency)
bio->bi_issue_stat.stat |= SKIP_LATENCY;
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -90,6 +90,10 @@ struct bio {
void*bi_cg_private;
struct blk_issue_stat   bi_issue_stat;
 #endif
+#ifdef CONFIG_BLK_DEV_THROTTLING
+   /* record which disk the bio is throttled against */
+   struct gendisk  *bi_throttled_disk;
+#endif
 #endif
union {
 #if defined(CONFIG_BLK_DEV_INTEGRITY)

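[Editor's note: Tejun's alternative above replaces the one-bit BIO_THROTTLED flag with a pointer recording which disk the bio was charged against. The idea can be sketched as a small userspace model; the struct members mirror the patch, but the types and helper names here are hypothetical stand-ins, not the kernel API.]

```c
#include <stddef.h>

/* Minimal stand-ins for struct gendisk and struct bio. */
struct gendisk { int id; };

struct bio {
    struct gendisk *bi_disk;           /* current target disk */
    struct gendisk *bi_throttled_disk; /* disk the bio was charged against */
};

/* throtl_charge_bio() model: record the disk at charge time. */
void charge(struct bio *bio, long *disp_bytes, long bytes)
{
    *disp_bytes += bytes;
    bio->bi_throttled_disk = bio->bi_disk;
}

/* blk_throtl_bio() model: skip only if already charged against this disk. */
int should_throttle(const struct bio *bio)
{
    return bio->bi_throttled_disk != bio->bi_disk;
}

/* __bio_clone_fast() model: a clone/split inherits the record. */
void clone_bio(struct bio *dst, const struct bio *src)
{
    dst->bi_disk = src->bi_disk;
    dst->bi_throttled_disk = src->bi_throttled_disk;
}
```

A split of an already-throttled bio inherits a bi_throttled_disk equal to its bi_disk, so resubmission is not charged again; remapping to another disk makes the pointers differ, so throttling applies again, which is exactly the semantics the diff encodes.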

[PATCH V2] block-throttle: avoid double charge

2017-11-13 Thread Shaohua Li
If a bio is throttled and then split, it can be resubmitted and enter
throttling again, causing parts of the bio to be charged multiple
times. If the cgroup has an IO limit, the double charge significantly
harms performance. Bio splits have become quite common since the
arbitrary-bio-size change.

To fix this, we always set the BIO_THROTTLED flag if a bio is throttled.
If the bio is cloned/split, we copy the flag to the new bio too, to
avoid the double charge. However, a cloned bio can be redirected to a
new disk, and keeping the flag in that case would be wrong. The
observation is that we always set a new disk for the bio in this case,
so we can clear the flag in bio_set_dev().

This issue has existed for a long time; the arbitrary-bio-size change
made it worse, so this should go into stable, at least since v4.2.

V1 -> V2: do not add an extra field to struct bio, per discussion with Tejun

Cc: Tejun Heo 
Cc: Vivek Goyal 
Cc: sta...@vger.kernel.org
Signed-off-by: Shaohua Li 
---
 block/bio.c  | 2 ++
 block/blk-throttle.c | 8 +---
 include/linux/bio.h  | 2 ++
 3 files changed, 5 insertions(+), 7 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 8338304..d1d4d51 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -598,6 +598,8 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
 */
bio->bi_disk = bio_src->bi_disk;
bio_set_flag(bio, BIO_CLONED);
+   if (bio_flagged(bio_src, BIO_THROTTLED))
+   bio_set_flag(bio, BIO_THROTTLED);
bio->bi_opf = bio_src->bi_opf;
bio->bi_write_hint = bio_src->bi_write_hint;
bio->bi_iter = bio_src->bi_iter;
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index ee6d7b0..f90fec1 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -,13 +,7 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
 out_unlock:
spin_unlock_irq(q->queue_lock);
 out:
-   /*
-* As multiple blk-throtls may stack in the same issue path, we
-* don't want bios to leave with the flag set.  Clear the flag if
-* being issued.
-*/
-   if (!throttled)
-   bio_clear_flag(bio, BIO_THROTTLED);
+   bio_set_flag(bio, BIO_THROTTLED);
 
 #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
if (throttled || !td->track_bio_latency)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 9c75f58..27b5bac 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -504,6 +504,8 @@ extern unsigned int bvec_nr_vecs(unsigned short idx);
 
 #define bio_set_dev(bio, bdev) \
 do {   \
+   if ((bio)->bi_disk != (bdev)->bd_disk)  \
+   bio_clear_flag(bio, BIO_THROTTLED);\
(bio)->bi_disk = (bdev)->bd_disk;   \
(bio)->bi_partno = (bdev)->bd_partno;   \
 } while (0)
-- 
2.9.5



Re: [PATCH] block-throttle: avoid double charge

2017-11-13 Thread Tejun Heo
Hello, Shaohua.

On Fri, Oct 13, 2017 at 11:10:29AM -0700, Shaohua Li wrote:
> If a bio is throttled and then split, it can be resubmitted and enter
> throttling again, causing parts of the bio to be charged multiple
> times. If the cgroup has an IO limit, the double charge significantly
> harms performance. Bio splits have become quite common since the
> arbitrary-bio-size change.

Missed the patch previously.  Sorry about that.

> Some form of this patch should probably go into stable, since v4.2

Seriously.

> @@ -2130,9 +2130,15 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
>  
>   WARN_ON_ONCE(!rcu_read_lock_held());
>  
> - /* see throtl_charge_bio() */
> - if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw])
> + /*
> +  * see throtl_charge_bio() for BIO_THROTTLED. If a bio is throttled
> +  * against a disk but remapped to another disk, we should throttle it
> +  * again
> +  */
> + if (bio_flagged(bio, BIO_THROTTLED) || !tg->has_rules[rw] ||
> + (bio->bi_throttled_disk && bio->bi_throttled_disk == bio->bi_disk))
>   goto out;
> + bio->bi_throttled_disk = NULL;

So, one question I have is whether we need both BIO_THROTTLED and
bi_throttled_disk.  Can't we replace BIO_THROTTLED w/
bi_throttled_disk?

Thanks.

-- 
tejun


Re: [PATCH V4] scsi_debugfs: fix crash in scsi_show_rq()

2017-11-13 Thread James Bottomley
On Sat, 2017-11-11 at 10:43 +0800, Ming Lei wrote:
> On Fri, Nov 10, 2017 at 08:51:58AM -0800, James Bottomley wrote:
> > 
> > On Fri, 2017-11-10 at 17:01 +0800, Ming Lei wrote:
> > > 
> > > cmd->cmnd can be allocated and freed dynamically in the
> > > T10_PI_TYPE2_PROTECTION case, so scsi_show_rq() has to check it:
> > > the request may already have been freed by the time it is dumped,
> > > with cmd->cmnd set to NULL.
> > > 
> > > We choose to accept the read-after-free and dump as much request
> > > data as possible.
> > > 
> > > This patch fixes the following kernel crash when dumping a request
> > > via block's debugfs interface:
> > > 
> > > [  252.962045] BUG: unable to handle kernel NULL pointer
> > > dereference
> > > at   (null)
> > > [  252.963007] IP: scsi_format_opcode_name+0x1a/0x1c0
> > > [  252.963007] PGD 25e75a067 P4D 25e75a067 PUD 25e75b067 PMD 0
> > > [  252.963007] Oops:  [#1] PREEMPT SMP
> > > [  252.963007] Dumping ftrace buffer:
> > > [  252.963007](ftrace buffer empty)
> > > [  252.963007] Modules linked in: scsi_debug ebtable_filter
> > > ebtables
> > > ip6table_filter ip6_tables xt_CHECKSUM iptable_mangle
> > > ipt_MASQUERADE
> > > nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4
> > > nf_defrag_ipv4
> > > nf_nat_ipv4 nf_nat nf_conntrack libcrc32c bridge stp llc
> > > iptable_filter fuse ip_tables sd_mod sg mptsas mptscsih mptbase
> > > crc32c_intel ahci libahci nvme serio_raw scsi_transport_sas
> > > libata
> > > lpc_ich nvme_core virtio_scsi binfmt_misc dm_mod iscsi_tcp
> > > libiscsi_tcp libiscsi scsi_transport_iscsi null_blk configs
> > > [  252.963007] CPU: 1 PID: 1881 Comm: cat Not tainted 4.14.0-
> > > rc2.blk_mq_io_hang+ #516
> > > [  252.963007] Hardware name: QEMU Standard PC (Q35 + ICH9,
> > > 2009),
> > > BIOS 1.9.3-1.fc25 04/01/2014
> > > [  252.963007] task: 88025e6f6000 task.stack:
> > > c90001bd
> > > [  252.963007] RIP: 0010:scsi_format_opcode_name+0x1a/0x1c0
> > > [  252.963007] RSP: 0018:c90001bd3c50 EFLAGS: 00010286
> > > [  252.963007] RAX: 4843 RBX: 0050 RCX:
> > > 
> > > [  252.963007] RDX:  RSI: 0050 RDI:
> > > c90001bd3cd8
> > > [  252.963007] RBP: c90001bd3c88 R08: 1000 R09:
> > > 
> > > [  252.963007] R10: 880275134000 R11: 88027513406c R12:
> > > 0050
> > > [  252.963007] R13: c90001bd3cd8 R14:  R15:
> > > 
> > > [  252.963007] FS:  7f4d11762700()
> > > GS:88027fc4()
> > > knlGS:
> > > [  252.963007] CS:  0010 DS:  ES:  CR0: 80050033
> > > [  252.963007] CR2:  CR3: 00025e789003 CR4:
> > > 003606e0
> > > [  252.963007] DR0:  DR1:  DR2:
> > > 
> > > [  252.963007] DR3:  DR6: fffe0ff0 DR7:
> > > 0400
> > > [  252.963007] Call Trace:
> > > [  252.963007]  __scsi_format_command+0x27/0xc0
> > > [  252.963007]  scsi_show_rq+0x5c/0xc0
> > > [  252.963007]  ? seq_printf+0x4e/0x70
> > > [  252.963007]  ? blk_flags_show+0x5b/0xf0
> > > [  252.963007]  __blk_mq_debugfs_rq_show+0x116/0x130
> > > [  252.963007]  blk_mq_debugfs_rq_show+0xe/0x10
> > > [  252.963007]  seq_read+0xfe/0x3b0
> > > [  252.963007]  ? __handle_mm_fault+0x631/0x1150
> > > [  252.963007]  full_proxy_read+0x54/0x90
> > > [  252.963007]  __vfs_read+0x37/0x160
> > > [  252.963007]  ? security_file_permission+0x9b/0xc0
> > > [  252.963007]  vfs_read+0x96/0x130
> > > [  252.963007]  SyS_read+0x55/0xc0
> > > [  252.963007]  entry_SYSCALL_64_fastpath+0x1a/0xa5
> > > [  252.963007] RIP: 0033:0x7f4d1127e9b0
> > > [  252.963007] RSP: 002b:7ffd27082568 EFLAGS: 0246
> > > ORIG_RAX:
> > > 
> > > [  252.963007] RAX: ffda RBX: 7f4d1154bb20 RCX:
> > > 7f4d1127e9b0
> > > [  252.963007] RDX: 0002 RSI: 7f4d115a7000 RDI:
> > > 0003
> > > [  252.963007] RBP: 00021010 R08:  R09:
> > > 
> > > [  252.963007] R10: 037b R11: 0246 R12:
> > > 00022000
> > > [  252.963007] R13: 7f4d1154bb78 R14: 1000 R15:
> > > 0002
> > > [  252.963007] Code: c6 e8 1b ca 24 00 eb 8c e8 74 2c ae ff 0f 1f
> > > 40
> > > 00 0f 1f 44 00 00 55 48 89 e5 41 56 41 55 41 54 53 49 89 fd 49 89
> > > f4
> > > 48 83 ec 18 <44> 0f b6 32 48 c7 45 c8 00 00 00 00 65 48 8b 04 25
> > > 28
> > > 00 00 00
> > > [  252.963007] RIP: scsi_format_opcode_name+0x1a/0x1c0 RSP:
> > > c90001bd3c50
> > > [  252.963007] CR2: 
> > > [  252.963007] ---[ end trace 83c5bddfbaa6573c ]---
> > > [  252.963007] Kernel panic - not syncing: Fatal exception
> > > [  252.963007] Dumping ftrace buffer:
> > > [  252.963007](ftrace buffer empty)
> > > [  252.963007] Kernel Offset: disabled
> > > [  252.963007] ---[ end Kernel panic - not syncing: 
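[Editor's note: the crash above comes from handing a CDB pointer that has already been freed and NULLed to the opcode-formatting helper. The shape of the defensive check the patch adds can be sketched with userspace stand-ins; the struct layout and output strings here are illustrative, not the actual SCSI code.]

```c
#include <stdio.h>
#include <string.h>

/* Stand-in for the relevant part of struct scsi_cmnd. */
struct scsi_cmnd {
    const unsigned char *cmnd; /* may be freed and set to NULL concurrently */
    int cmd_len;
};

/*
 * Model of the scsi_show_rq() fix: never hand a NULL CDB to the
 * formatting helper; emit a placeholder instead.
 */
void show_rq(const struct scsi_cmnd *cmd, char *buf, size_t len)
{
    if (!cmd->cmnd) {
        snprintf(buf, len, "cmnd=(null)");
        return;
    }
    snprintf(buf, len, "opcode=0x%02x len=%d", cmd->cmnd[0], cmd->cmd_len);
}
```

The dump is still best-effort (the read-after-free is accepted), but a NULL cmnd no longer reaches the format routine, so the NULL-pointer dereference in scsi_format_opcode_name() cannot happen.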

Re: [PATCH IMPROVEMENT/BUGFIX 0/4] block, bfq: increase sustainable IOPS and fix a bug

2017-11-13 Thread Jens Axboe
On 11/12/2017 11:34 PM, Paolo Valente wrote:
> Hi,
> these patches address the following issue, raised and
> discussed in [1].
> 
> BFQ provides a proportional share policy for the blkio controller.  In
> this respect, BFQ updates the I/O accounting related to its policy,
> i.e., the statistics contained in the special files blkio.bfq.* in
> blkio groups (these files are the bfq counterpart of the blkio.*
> statistic files updated by CFQ).  To update these statistics, BFQ
> invokes some blkg_*stats_* functions.  We found that these functions
> account for a considerable share, about 40%, of BFQ's total
> execution time.
> 
> This patch series contains two patches to address this issue, namely
> the patches anticipated and discussed in their main aspects in [1].
> 
> The first of these two patches is patch 3/4 in this series: it enables
> BFQ to execute the above blkg_*stats_* functions, where possible, in
> parallel with the rest of the code of the scheduler.  With this
> improvement, the maximum request-processing rate sustainable with BFQ
> grows by 25%-30%, depending on the CPU.  For instance, it grows from
> 250 to 310 KIOPS on an Intel i7-4850HQ. These results, and the others
> reported in this letter, have been obtained and can be reproduced very
> easily with the script [2].
> 
> Unfortunately, even after the above improvement, blkg_*stats_*
> functions still cause a noticeable loss of sustainable throughput.  To
> give an idea, on an Intel i7-4850HQ, if the update of blkio.bfq.*
> statistics is not performed at all, then the sustainable throughput
> grows from 310 to 400 KIOPS.  This issue has already been discussed in
> [1] as well. In brief, we agreed to make a further commit, which
> introduces the possibility to disable/re-enable at boot, or at
> module-loading time, the updating of all blkio statistics for
> proportional-share policies, i.e., of both those updated by BFQ and
> those updated by CFQ.
> 
> We are already working on that commit, but finalizing it will take
> some time.  Fortunately, following a recommendation of Tejun in the
> same thread [1], it is already possible to drastically increase BFQ
> performance when no blkio-debugging information is needed.  Tejun's
> recommendation is to move most blkio.bfq.*
> statistics behind an already existing config option,
> CONFIG_DEBUG_BLK_CGROUP.  Patch 4/4 in this series does that.  Thanks
> to this change, if CONFIG_DEBUG_BLK_CGROUP is not set, then bfq
> attains a further boost in sustainable throughput, which ranges from
> +30% to +45%, depending on the CPU (some figures in the
> documentation).
> 
> The above two patches are preceded by two preliminary patches.  The
> first updates the conservative range of IOPS (sustainable with BFQ)
> that was previously reported in the documentation.  The patch replaces
> this piece of information with the actual, much higher limits that we
> have measured while working at the above two commits.  The second
> preliminary patch fixes a functional bug, related to the update of the
> above statistics.
> 
> We waited for one week of testing from bfq users before submitting
> these patches. We hope we are still in time for having these
> improvements and fixes considered for 4.15.

Usually I'd say it's too late, but I knew this was coming. I'll get
this queued up.

-- 
Jens Axboe
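[Editor's note: patch 4/4's approach — compiling the expensive blkg_*stats_* updates out entirely unless CONFIG_DEBUG_BLK_CGROUP is set — follows the common kernel pattern of an #ifdef'd implementation with an empty stub. A minimal illustration with a hypothetical function name and a local macro standing in for the Kconfig option:]

```c
/* Toggle stands in for CONFIG_DEBUG_BLK_CGROUP; leave it undefined to
 * model a production (non-debug) build. */
/* #define DEBUG_BLK_CGROUP 1 */

long stat_ios;  /* stand-in for a blkg_rwstat counter */

#ifdef DEBUG_BLK_CGROUP
/* Debug build: really account the I/O. */
void bfqg_stats_update_io(long ios) { stat_ios += ios; }
#else
/* Non-debug build: empty stub, so the request-processing hot path
 * pays nothing for the statistics. */
void bfqg_stats_update_io(long ios) { (void)ios; }
#endif
```

With the option off, the calls remain in the scheduler source but cost nothing at runtime, which is where the reported +30% to +45% throughput gain comes from.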


