berman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Reported-by: Kashyap Desai
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 11 ++-
block/blk-mq.c | 24 +++-
include/linux/blk-mq.h | 1 +
3 files changed, 26
1/3
Ming Lei (3):
blk-mq: use list_splice_tail_init() to insert requests
blk-mq: only attempt to merge bio if there is rq in sw queue
blk-mq: dequeue request one by one from sw queue iff hctx is busy
block/blk-mq-sched.c | 14 --
block/blk-mq.c |
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 56c493c6cd90..f5745acc2d98 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -339,7 +339,8 @@ bool __blk_mq_sched_
Signed-off-by: Ming Lei
---
block/blk-mq.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 22fe394d0b49..359382b59d40 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1545,19 +1545,19 @@ void blk_mq_insert_requests(str
On Thu, Jun 28, 2018 at 08:18:04PM -0600, Jens Axboe wrote:
> On 6/28/18 7:59 PM, Ming Lei wrote:
> > On Thu, Jun 28, 2018 at 09:46:50AM -0600, Jens Axboe wrote:
> >> Some devices have different queue limits depending on the type of IO. A
> >> classic case is SATA N
On Thu, Jun 28, 2018 at 09:46:50AM -0600, Jens Axboe wrote:
> Some devices have different queue limits depending on the type of IO. A
> classic case is SATA NCQ, where some commands can queue, but others
> cannot. If we have NCQ commands inflight and encounter a non-queueable
> command, the driver
On Thu, Jun 28, 2018 at 11:21 PM, Bart Van Assche
wrote:
> On 06/27/18 17:30, Ming Lei wrote:
>>
>> One core idea of immutable bvec is to use bio->bi_iter and the original
>> bvec table to iterate over anywhere in the bio. That is why .bi_io_vec
>> needs to co
On Thu, Jun 28, 2018 at 09:53:23AM -0600, Jens Axboe wrote:
> On 6/27/18 2:12 PM, Bart Van Assche wrote:
> > Although __bio_clone_fast() copies bi_io_vec, it does not copy bi_vcnt,
> > the number of elements in bi_io_vec[] that contains data. Copy bi_vcnt
> > such that code that needs this member
On Thu, Jun 28, 2018 at 02:03:47PM +0800, jianchao.wang wrote:
>
>
> On 06/28/2018 01:42 PM, Kashyap Desai wrote:
> > Ming -
> >
> > Performance drop is resolved on my setup, but may be some stability of the
> > kernel is caused due to this patch set. I have not tried without patch
> > set, but
Reported-by: Kashyap Desai
Signed-off-by: Ming Lei
---
block/blk-mq.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 70c65bb6c013..20b0519cb3b8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1533,19 +1533,19 @@ v
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 56c493c6cd90..f5745acc2d98 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -339,7 +339,8 @@ bool __blk_mq_sched_
Hi,
The 1st 2 patch improves ctx->lock uses, and it is observed that IOPS
may be improved by ~5% in rand IO test on MegaRaid SAS run by Kashyap.
The 3rd patch fixes rand IO performance regression on MegaRaid SAS
test, still reported by Kashyap.
Ming Lei (3):
blk-mq: use list_splice_t
On Wed, Jun 27, 2018 at 04:59:41PM -0700, Bart Van Assche wrote:
> On 06/27/18 16:50, Ming Lei wrote:
> > On Wed, Jun 27, 2018 at 01:12:31PM -0700, Bart Van Assche wrote:
> > > Although __bio_clone_fast() copies bi_io_vec, it does not copy bi_vcnt,
> > > the numb
On Wed, Jun 27, 2018 at 04:55:20PM -0700, Bart Van Assche wrote:
> On 06/27/18 16:21, Ming Lei wrote:
> > What we need to do is to only copy the 1st bvec for WRITE_SAME, your patch
> > changes to copy (bio->bi_iter.bi_size / block size) bvecs, then memory
> > corruption
original
> and for cloned requests.
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: Mike Snitzer
> Cc: Ming Lei
> Cc: Hannes Reinecke
> Cc: Johannes Thumshirn
> ---
> block/bio.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/block/bio.c
28/0x60d
> irq_exit+0x100/0x110
> smp_call_function_single_interrupt+0x90/0x330
> call_function_single_interrupt+0xf/0x20
>
>
> Fixes: f9d03f96b988 ("block: improve handling of the magic discard payload")
> Signed-off-by: Bart Van Assche
> Cc: Christoph He
On Wed, Jun 27, 2018 at 01:02:12PM -0700, Bart Van Assche wrote:
> Because the hctx lock is not held around the only
> blk_mq_tag_wakeup_all() call in the block layer, the wait queue
> entry removal in blk_mq_dispatch_wake() is protected by the wait
> queue lock only. Since the hctx->dispatch_wait
On Wed, Jun 27, 2018 at 10:48:06AM -0700, Bart Van Assche wrote:
> On 06/26/18 18:13, Ming Lei wrote:
> > On Tue, Jun 26, 2018 at 03:26:24PM -0700, Bart Van Assche wrote:
> > > There is no good reason to use different code paths for different
> > > request operations. H
On Tue, Jun 26, 2018 at 03:26:24PM -0700, Bart Van Assche wrote:
> There is no good reason to use different code paths for different
> request operations. Hence remove the switch/case statement from
> bio_clone_bioset().
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellw
On Mon, Jun 25, 2018 at 10:05:41AM -0700, Bart Van Assche wrote:
> On 06/22/18 20:15, Ming Lei wrote:
> > Just tried srp/001
> >
> > [root@ktest-04 blktests]# ./check srp/001
> > srp/001 (Create and remove LUNs) [failed]
> >
org
Reported-by: Andrew Jones
Cc: Andrew Jones
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 85 +++---
block/blk-mq.c | 10 ++
include/linux/blkdev.h | 2 --
3 files changed, 7 insertions(+), 90 deletions(-)
diff --git a/block/blk-mq
Jones
Cc: Omar Sandoval
Cc: Andrew Jones
Cc: Bart Van Assche
Cc: linux-s...@vger.kernel.org
Cc: "Martin K. Petersen"
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/blk-core.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/block/blk-core.c b/bl
drew Jones
Cc: Christoph Hellwig
Cc: Omar Sandoval
Cc: Bart Van Assche
Signed-off-by: Ming Lei
---
block/blk-mq.c | 26 +-
include/linux/blk-mq.h | 1 +
2 files changed, 18 insertions(+), 9 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d
'hctx' won't be changed at all, so it's not necessary to pass
'**hctx' to blk_mq_mark_tag_wait().
Cc: Andrew Jones
Cc: Christoph Hellwig
Cc: Omar Sandoval
Cc: Bart Van Assche
Signed-off-by: Ming Lei
---
block/blk-mq.c | 23 +++
1 file changed, 11 insertions(+), 12 deletions
We never pass 'wait' as true to blk_mq_get_driver_tag(), so no new
hctx is ever passed back out.
Clean up the usage and remove the two extra parameters.
Cc: Omar Sandoval
Cc: Andrew Jones
Cc: Bart Van Assche
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/blk-mq.c | 19
On Sat, Jun 23, 2018 at 11:14 AM, Ming Lei wrote:
> On Sat, Jun 23, 2018 at 6:19 AM, Bart Van Assche
> wrote:
>> Hello Omar,
>>
>> As promised during LSF/MM, I have converted the srp-tests software to the
>> blktests framework. This patch series included all blk
-Unloaded the ib_srpt kernel module
+Unloaded the ib_srp kernel module
+modprobe: FATAL: Module target_core_mod is in use.
+modprobe: FATAL: Module target_core_mod is in use.
+modprobe: FATAL: Module target_core_mod is in use.
+modprobe: FATAL: Module target_core_mod is in use.
+modprobe: FATAL: Module target_core_mod is in use.
...
(Run 'diff -u tests/srp/001.out results/nodev/srp/001.out.bad' to
see the entire diff)
[root@ktest-04 blktests]# lsmod | grep target
iscsi_target_mod 303104 1 ib_isert
target_core_mod 356352 2 iscsi_target_mod,ib_isert
Thanks,
Ming Lei
On Fri, Jun 22, 2018 at 05:13:31PM -0600, Jens Axboe wrote:
> On 6/22/18 5:11 PM, Ming Lei wrote:
> > On Fri, Jun 22, 2018 at 04:51:26PM -0600, Jens Axboe wrote:
> >> On 6/22/18 4:43 PM, Ming Lei wrote:
> >>> On Fri, Jun 22, 2018 at 01:26:10PM -0600, Jens Axboe wrot
On Fri, Jun 22, 2018 at 04:51:26PM -0600, Jens Axboe wrote:
> On 6/22/18 4:43 PM, Ming Lei wrote:
> > On Fri, Jun 22, 2018 at 01:26:10PM -0600, Jens Axboe wrote:
> >> blk-wbt adds waiters to the tail of the waitqueue, and factors in the
> >> task placement in its
On Fri, Jun 22, 2018 at 01:26:10PM -0600, Jens Axboe wrote:
> blk-wbt adds waiters to the tail of the waitqueue, and factors in the
> task placement in its decision making on whether or not the current task
> can proceed. This can cause issues for the lowest class of writers,
> since they can get
On Wed, Jun 13, 2018 at 10:54:09AM -0400, Kent Overstreet wrote:
> On Wed, Jun 13, 2018 at 03:56:32PM +0200, Christoph Hellwig wrote:
> > On Wed, Jun 13, 2018 at 07:06:41PM +0800, Ming Lei wrote:
> > > > before bio_alloc_pages) that can be switched to something that
On Wed, Jun 13, 2018 at 05:58:01AM -0400, Kent Overstreet wrote:
> On Mon, Jun 11, 2018 at 09:48:00PM +0200, Christoph Hellwig wrote:
> > Hi all,
> >
> > this series cleans up various places where bcache is way too intimate
> > with bio internals. This is intended as a baseline for the
ld avoid the request tag to be released and life-recycle, but not
> completion.
>
> For the scsi mid-layer, what if a request is in error handler and normal
> completion come
> at the moment ?
Per my understanding, the protection now needs to be done completely by the driver.
Thanks,
Ming Lei
ach_entry_rcu_rr(), thus soft lockup.
>
> Fix is simple: reinit list entry after an RCU grace period elapsed.
>
> Signed-off-by: Roman Pen
> Cc: Jens Axboe
> Cc: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: Sagi Grimberg
> Cc: Ming Lei
> Cc: linux-block@vger.ker
On Tue, Jun 05, 2018 at 11:41:19AM +0800, Ming Lei wrote:
> On Mon, Jun 04, 2018 at 03:58:53PM +0200, Christoph Hellwig wrote:
> > Replace a nasty hack with a different nasty hack to prepare for multipage
> > bio_vecs. By moving the temporary page array as far up as possible i
ffset;
> - bv[0].bv_len -= offset;
> - if (diff)
> - bv[bio->bi_vcnt - 1].bv_len -= diff;
> + len = min_t(size_t, PAGE_SIZE - offset, left);
> + if (WARN_ON_ONCE(bio_add_page(bio, page, len, offset) != len))
> + return -EINVAL;
> + offset = 0;
> + }
One invariant is that the page index 'i' is always <= the bvec index
of the added page, and the two can only be the same when adding the last
page.
So this approach is correct and looks smart & clean:
Reviewed-by: Ming Lei
Thanks,
Ming
> -
> - if (page && !PageCompound(page))
> - set_page_dirty_lock(page);
> + if (!PageCompound(bvec->bv_page))
> + set_page_dirty_lock(bvec->bv_page);
> }
> }
Looks reasonable:
Reviewed-by: Ming Lei
Thanks,
Ming
> + bio_put(bio);
> + return;
> +defer:
> + spin_lock_irqsave(&bio_dirty_lock, flags);
> + bio->bi_private = bio_dirty_list;
> + bio_dirty_list = bio;
> + spin_unlock_irqrestore(&bio_dirty_lock, flags);
> + schedule_work(&bio_dirty_work);
> }
>
> void generic_start_io_acct(struct request_queue *q, int rw,
It is a good simplification, and the only effect may be that some pages
will be freed a bit later in the partial-clean case.
I have run fio randread on null_blk in a 512M RAM Fedora 27 cloud image
based VM and did not see any IOPS drop, so:
Tested-by: Ming Lei
Reviewed-by: Ming Lei
Thanks,
Ming
We now set up q->nr_requests when switching to a new scheduler,
but we don't do it for 'none', so q->nr_requests may not be correct
for 'none'.
This patch fixes the issue by always updating 'nr_requests' when
switching to 'none'.
Cc: Marco Patalano
Cc: "Ewan D. Milne"
Signed-
On Sat, Jun 02, 2018 at 12:59:21PM +0800, Ming Lei wrote:
> If the current io scheduler is 'none', the max allowed 'nr_requests'
> should be from set->queue_depth, instead of the tags's depth, which
> can be adjusted before.
>
> This patch fixes this issue by using set->q
Milne"
Signed-off-by: Ming Lei
---
tests/block/021 | 51 +++
tests/block/021.out | 2 ++
2 files changed, 53 insertions(+)
create mode 100755 tests/block/021
create mode 100755 tests/block/021.out
diff --git a/tests/block/021 b/tests/block/02
switching
to 'none' scheduler.
Cc: Marco Patalano
Cc: "Ewan D. Milne"
Signed-off-by: Ming Lei
---
This issue can be reproduced by the blktests block/021, which will
be posted soon.
block/blk-mq-sched.c | 1 +
block/blk-mq-tag.c | 6 --
2 files changed, 5 insertions(+), 2 delet
806] RSP: 002b:7f42ba98ab60 EFLAGS: 0293 ORIG_RAX:
> 0012
> [ 370.056260] RAX: ffda RBX: 7f429400cd40 RCX:
> 7f42e0955073
> [ 370.064229] RDX: 00006400 RSI: 7f4294001000 RDI:
> 002c
> [ 370.072197] RBP: 7f42bf1add40 R08: 03e0 R09:
> 0005
> [ 370.080164] R10: 0030c362a400 R11: 0293 R12:
> 0001
> [ 370.088132] R13: 6400 R14: 7f429400cd68 R15:
> 7f42bf1add48
>
That is exactly the issue triggered by block/011, and I guess Keith is
working V4 fix:
http://lists.infradead.org/pipermail/linux-nvme/2018-May/017813.html
Thanks,
Ming Lei
The issue isn't related to shared tags, and it can be
triggered even when 'shared_tags' is set to 0.
Remove this so that the test can be run on older kernels, and
to avoid causing misunderstanding.
Signed-off-by: Ming Lei
---
tests/block/020 | 2 +-
1 file changed, 1 insertion(+), 1 deletion
On Mon, May 28, 2018 at 01:44:25PM +0200, Christoph Hellwig wrote:
> On Thu, May 24, 2018 at 12:45:15PM +0800, Ming Lei wrote:
> > This change should have been done after '[PATCH 13/14] blk-mq: Remove
> > generation seqeunce', otherwise the timed-out request won
On Fri, May 25, 2018 at 10:30:46AM -0600, Jens Axboe wrote:
> On 5/24/18 10:53 PM, Kent Overstreet wrote:
> > On Fri, May 25, 2018 at 11:45:48AM +0800, Ming Lei wrote:
> >> Hi,
> >>
> >> This patchset brings multipage bvec into block layer:
> >
> &g
bvec
is in, each bvec will store a real multipage segment, so people won't be
confused by these misleading names.
Signed-off-by: Ming Lei <ming@redhat.com>
---
Documentation/block/biovecs.txt | 4 ++--
arch/m68k/emu/nfblock.c | 2 +-
arch/xtensa/platforms/iss/simdisk.
rq_for_each_segment() still deceives us since this helper only returns
one page in each bvec, so fix its name.
Signed-off-by: Ming Lei <ming@redhat.com>
---
Documentation/block/biodoc.txt | 6 +++---
block/blk-core.c | 2 +-
drivers/block/floppy.c | 4 ++--
d
There is one use case (DM) which requires cloning a bio segment by
segment, so introduce this API.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/bio.c | 56 +++--
include/linux/bio.h | 1 +
2 files changed, 43 insertions(
max segment size,
so we have to split the big bvec into several segments.
Thirdly, while splitting a multipage bvec into segments, the max segment
number may be reached; the bio then needs to be split when this happens.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-merge.
iov_iter is implemented on top of the bvec iterator, so it is safe to
pass a segment to it, and this is much more efficient than passing one
page in each bvec.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/block/loop.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
There are still cases in which rq_for_each_segment() is required, for
example, loop.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/blkdev.h | 4
1 file changed, 4 insertions(+)
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1e8e9b430008..0b15bc
multipage bio.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index f1db181e082e..425e99e20f5c 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1581,8 +1581,8 @@
BTRFS and guard_bio_eod() need to get the last page from one segment, so
introduce this helper to make them happy.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bvec.h | 22 ++
1 file changed, 22 insertions(+)
diff --git a/include/linux/bvec.h b/i
current
bvec iterator, which is treated as singlepage-only by drivers, fs, dm,
etc. These helpers build a singlepage bvec on the fly, so users of the
current bio/bvec iterator still work correctly and needn't change even
though we store real multipage segments in the bvec table.
Signed-off-by: Ming Lei
This helper is used to iterate over a multipage bvec for bio splitting/merging,
and it is required in bio_clone_bioset() too, so introduce it.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bio.h | 34 +++---
include/linux/bvec.
table directly, and users should be careful about this
helper since it returns a real multipage segment now.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bio.h | 18 ++
include/linux/bvec.h | 6 ++
2 files changed, 24 insertions(+)
diff --git a/include
we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/bcache/btree.c | 3 ++-
drivers/md/bcache/util.c | 2 +-
drivers/md/dm-crypt.c | 3 ++-
drivers/md/raid1.c| 3 ++-
4 files changed, 7 insertions(+), 4 deletions(-)
diff
We have to convert to bio_for_each_page_all2() for iterating page by
page.
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/bio.c | 18 --
block/blk-zoned.c | 5 +++--
ase all these pages if all
are dirtied; otherwise dirty them all in a deferred workqueue.
This patch introduces segment_for_each_page_all() to deal with the case
a bit more easily.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/bio.c | 45 +
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei <ming@redhat.com>
---
fs/btrfs/compression.c | 3 ++-
fs/btrfs/disk-io.c | 3 ++-
fs/btrfs/extent_io.c | 9 ++---
fs
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Given bvec can't be changed under bio_for_each_page_all2(), this patch
marks the bvec parameter as 'const' for xfs_finish_page_writeback().
Signed-off-by: Ming Lei
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei <ming@redhat.com>
---
fs/exofs/ore.c | 3 ++-
fs/exofs/ore_raid.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei <ming@redhat.com>
---
fs/f2fs/data.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/fs/f2fs/data.
In some situations, such as block direct I/O, we can't use
bio_add_page() to merge pages into a multipage bvec, so
a new function is implemented to convert a page array into a
segment array; then these cases can benefit from multipage bvec
too.
Signed-off-by: Ming Lei <ming@redhat.
This patch pulls the trigger for multipage bvecs.
Now any request queue which supports queue cluster will see multipage
bvecs.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/bio.c | 13 +
1 file changed, 13 insertions(+)
diff --git a/block/bio.c b/block/bio.c
Now that bio_for_each_page_all() is gone, we can reuse the name to iterate
over a bio page by page, which is done via bio_for_each_page_all2() now.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/bio.c | 14 +++---
block/blk-zoned.c | 4 ++--
block/bo
No one uses it any more, so kill it and we can reuse this helper
name.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bio.h | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 5ae2bc876295..c5e692
Now multipage bvec is supported, and some helpers may return data page
by page while others return it segment by segment. This patch documents
the usage to help us use them correctly.
Signed-off-by: Ming Lei <ming@redhat.com>
---
Documentation/block/biovecs.tx
Now multipage bvec can cover CONFIG_THP_SWAP, so we don't need to
increase BIO_MAX_PAGES for it.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bio.h | 8
1 file changed, 8 deletions(-)
diff --git a/include/linux/bio.h b/include/linux/bio.h
index fc8a82
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei <ming@redhat.com>
---
fs/ext4/page-io.c | 3 ++-
fs/ext4/readpage.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
As multipage bvec will be enabled soon, bio->bi_vcnt is no longer the
same as the page count in the bio, so use bio_for_each_page_all() to
compute the number.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bio.h | 8 +++-
1 file changed, 7 insertions(+), 1 deletion
Preparing for supporting multipage bvec.
Cc: Chris Mason <c...@fb.com>
Cc: Josef Bacik <jba...@fb.com>
Cc: David Sterba <dste...@suse.com>
Cc: linux-bt...@vger.kernel.org
Signed-off-by: Ming Lei <ming@redhat.com>
---
fs/btrfs/compression.c | 5 -
fs/btrfs/extent_i
It is more efficient to use bio_for_each_segment() to map an sg list; meanwhile
we have to consider splitting multipage bvecs as done in blk_bio_segment_split().
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-merge.c | 72 +++
1 file c
://marc.info/?t=14982021534=1=2
Ming Lei (33):
block: rename bio_for_each_segment* with bio_for_each_page*
block: rename rq_for_each_segment as rq_for_each_page
block: rename bio_segments() with bio_pages()
block: introduce multipage page bvec helpers
block: introduce
On Thu, May 24, 2018 at 11:44:41PM +0200, David Sterba wrote:
> On Thu, May 24, 2018 at 05:01:15PM +0800, Ming Lei wrote:
> > Preparing for supporting multipage bvec.
>
> Could you please also CC the cover letter so we have a chance to learn
> what multipage bvec means or wh
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Given bvec can't be changed inside bio_for_each_page_all2(), this patch
marks the bvec parameter as 'const' for gfs2_end_log_write_bh().
Signed-off-by: Ming Lei
bio_for_each_page_all() can't be used any more after multipage bvec is
enabled, so we have to convert to bio_for_each_page_all2().
Signed-off-by: Ming Lei <ming@redhat.com>
---
fs/block_dev.c | 6 --
fs/crypto/bio.c | 3 ++-
fs/direct-io.c | 4 +++-
fs/iomap.c | 3 ++-
fs/m
There are still cases in which we need to use bio_segments() to get the
number of segments, so introduce it.
Signed-off-by: Ming Lei <ming@redhat.com>
---
include/linux/bio.h | 25 -
1 file changed, 20 insertions(+), 5 deletions(-)
diff --git a/include/linux/b