On Wed, Aug 9, 2017 at 8:11 AM, Omar Sandoval wrote:
> On Sat, Aug 05, 2017 at 02:56:46PM +0800, Ming Lei wrote:
>> When the hw queue is busy, we shouldn't take requests from
>> the scheduler queue any more, otherwise I/O merging will be
>> difficult to do.
>>
>> This patch fixes the
On Tue, 2017-08-08 at 20:11 -0400, Laurence Oberman wrote:
> > On Tue, 2017-08-08 at 22:17 +0800, Ming Lei wrote:
> > > > Hi Guys,
> > > >
> > > > Laurence and I see a system lockup issue when running
> > concurrent
> > > > big buffered writes (4M bytes) to IB SRP on v4.13-rc3.
> > > >
> > > > 1
Hi David,
On Wed, Aug 9, 2017 at 2:13 AM, David Jeffery wrote:
> On 08/07/2017 07:53 PM, Ming Lei wrote:
>> On Tue, Aug 8, 2017 at 3:38 AM, David Jeffery wrote:
>
>>>
>>> Signed-off-by: David Jeffery
>>> ---
>>> block/blk-sysfs.c
On Tue, 2017-08-08 at 22:17 +0800, Ming Lei wrote:
> Hi Guys,
>
> Laurence and I see a system lockup issue when running concurrent
> big buffered writes (4M bytes) to IB SRP on v4.13-rc3.
>
> 1 how to reproduce
>
> 1) set up IB SRP & multipath
>
> #./start_opensm.sh
> #./start_srp.sh
On Sat, Aug 05, 2017 at 02:56:46PM +0800, Ming Lei wrote:
> When the hw queue is busy, we shouldn't take requests from
> the scheduler queue any more, otherwise I/O merging will be
> difficult to do.
>
> This patch fixes the awful I/O performance on some
> SCSI devices (lpfc, qla2xxx, ...) when
On 08/08/2017 04:48 PM, Omar Sandoval wrote:
> On Fri, Aug 04, 2017 at 09:04:21AM -0600, Jens Axboe wrote:
>> Modify blk_mq_in_flight() to count both a partition and root at
>> the same time. Then we only have to call it once, instead of
>> potentially looping the tags twice.
>
> Reviewed-by:
On 08/08/2017 04:42 PM, Omar Sandoval wrote:
> On Fri, Aug 04, 2017 at 09:04:19AM -0600, Jens Axboe wrote:
>> Instead of returning the count that matches the partition, pass
>> in an array of two ints. Index 0 will be filled with the inflight
>> count for the partition in question, and index 1
On Tue, 2017-08-08 at 22:17 +0800, Ming Lei wrote:
> Laurence and I see a system lockup issue when running concurrent
> big buffered writes (4M bytes) to IB SRP on v4.13-rc3.
> [ ... ]
> #cat hammer_write.sh
> #!/bin/bash
> while true; do
> dd if=/dev/zero
On Fri, Aug 04, 2017 at 09:04:20AM -0600, Jens Axboe wrote:
> We don't have to inc/dec some counter, since we can just
> iterate the tags. That makes inc/dec a noop, but means we
> have to iterate busy tags to get an in-flight count.
>
> Reviewed-by: Bart Van Assche
On Fri, Aug 04, 2017 at 09:04:19AM -0600, Jens Axboe wrote:
> Instead of returning the count that matches the partition, pass
> in an array of two ints. Index 0 will be filled with the inflight
> count for the partition in question, and index 1 will be filled
> with the root inflight count, if the
On Fri, Aug 04, 2017 at 09:04:21AM -0600, Jens Axboe wrote:
> Modify blk_mq_in_flight() to count both a partition and root at
> the same time. Then we only have to call it once, instead of
> potentially looping the tags twice.
Reviewed-by: Omar Sandoval
One comment below.
>
On Fri, Aug 04, 2017 at 09:04:18AM -0600, Jens Axboe wrote:
> No functional change in this patch, just in preparation for
> basing the inflight mechanism on the queue in question.
>
> Signed-off-by: Jens Axboe
> Reviewed-by: Bart Van Assche
Reviewed-by:
On Fri, Aug 04, 2017 at 09:04:17AM -0600, Jens Axboe wrote:
> Since we introduced blk-mq-sched, the tags->rqs[] array has been
> dynamically assigned. So we need to check for NULL when iterating,
> since there's a window of time where the bit is set, but we haven't
> dynamically assigned the
On Tue, Aug 08, 2017 at 10:00:21PM +0000, Bart Van Assche wrote:
> On Tue, 2017-08-08 at 15:13 -0600, Jens Axboe wrote:
> > On 08/08/2017 03:05 PM, Shaohua Li wrote:
> > > > I'm curious why null_blk isn't a good fit? You'd just need to add RAM
> > > > storage to it. That would just be a separate
On 08/08/2017 04:00 PM, Bart Van Assche wrote:
> On Tue, 2017-08-08 at 15:13 -0600, Jens Axboe wrote:
>> On 08/08/2017 03:05 PM, Shaohua Li wrote:
I'm curious why null_blk isn't a good fit? You'd just need to add RAM
storage to it. That would just be a separate option that should be
On Tue, 2017-08-08 at 15:13 -0600, Jens Axboe wrote:
> On 08/08/2017 03:05 PM, Shaohua Li wrote:
> > > I'm curious why null_blk isn't a good fit? You'd just need to add RAM
> > > storage to it. That would just be a separate option that should be
> > > set,
> > > ram_backing=1 or something like
On Mon, Aug 07, 2017 at 03:37:50PM +0300, Anton Volkov wrote:
> The early device registration made possible a race leading to allocations
> of disks with wrong minors.
>
> This patch moves the device registration further down the loop_init
> function to make the race infeasible.
>
> Found by
On 08/08/2017 03:05 PM, Shaohua Li wrote:
>> I'm curious why null_blk isn't a good fit? You'd just need to add RAM
>> storage to it. That would just be a separate option that should be set,
>> ram_backing=1 or something like that. That would make it less critical
>> than using the RAM disk driver
On Tue, Aug 08, 2017 at 02:31:54PM -0600, Jens Axboe wrote:
> On 08/07/2017 10:36 AM, Shaohua Li wrote:
> > On Mon, Aug 07, 2017 at 10:29:05AM +0200, Hannes Reinecke wrote:
> >> On 08/05/2017 05:51 PM, Shaohua Li wrote:
> >>> From: Shaohua Li
> >>>
> >>> In testing software RAID, I
On Wed, Jul 26, 2017 at 06:58:01PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> Return EAGAIN in case RAID5 would block because of waiting due to:
> + Reshaping
> + Suspension
> + Stripe Expansion
>
> Signed-off-by: Goldwyn Rodrigues
>
On Wed, Jul 26, 2017 at 06:58:00PM -0500, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> The RAID1 driver would bail with EAGAIN in case of:
> + I/O has to wait for a barrier
> + array is frozen
> + Area is suspended
> + There are too many pending I/O that it will
On 08/08/2017 02:32 PM, Shaohua Li wrote:
>> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
>> index 25f6a0cb27d3..fae021ebec1b 100644
>> --- a/include/linux/blkdev.h
>> +++ b/include/linux/blkdev.h
>> @@ -633,6 +633,7 @@ struct request_queue {
>> #define QUEUE_FLAG_REGISTERED 29
On 08/05/2017 09:51 AM, Shaohua Li wrote:
> +static struct testb_page *testb_insert_page(struct testb *testb,
> + sector_t sector, unsigned long *lock_flag)
> +{
> + u64 idx;
> + struct testb_page *t_page;
> +
> + assert_spin_locked(&testb->t_dev->lock);
> +
> + t_page =
On 08/07/2017 10:36 AM, Shaohua Li wrote:
> On Mon, Aug 07, 2017 at 10:29:05AM +0200, Hannes Reinecke wrote:
>> On 08/05/2017 05:51 PM, Shaohua Li wrote:
>>> From: Shaohua Li
>>>
>>> In testing software RAID, I usually found it's hard to cover specific cases.
>>> RAID is supposed to
On 08/08/2017 12:33 PM, Mike Galbraith wrote:
> On Tue, 2017-08-08 at 18:50 +0200, Mike Galbraith wrote:
>> On Tue, 2017-08-08 at 09:44 -0700, Greg KH wrote:
>>>
>>> Should these go back farther than 4.12? Looks like they apply cleanly
>>> to 4.9, didn't look older than that...
>>
>> I met
On Tue, Aug 08, 2017 at 07:33:37PM +0200, Paolo Valente wrote:
> > Differently from bfq-sq, setting slice_idle to 0 doesn't provide any
> > benefit, which lets me suspect that there is some other issue in
> > blk-mq (only a suspect). I think I may have already understood how to
> > guarantee that
On 08/07/2017 07:53 PM, Ming Lei wrote:
> On Tue, Aug 8, 2017 at 3:38 AM, David Jeffery wrote:
>>
>> Signed-off-by: David Jeffery
>> ---
>> block/blk-sysfs.c |2 ++
>> block/elevator.c |4
>> 2 files changed, 6 insertions(+)
>>
>>
>> diff
> On 08 Aug 2017, at 10:06, Paolo Valente
> wrote:
>
>>
>> On 07 Aug 2017, at 20:42, Paolo Valente
>> wrote:
>>
>>>
>>> On 07 Aug 2017, at 19:32, Paolo Valente
>>>
> On 08 Aug 2017, at 12:30, Mel Gorman
> wrote:
>
> On Mon, Aug 07, 2017 at 07:32:41PM +0200, Paolo Valente wrote:
global-dhp__io-dbench4-fsync-ext4 was a universal loss across any
machine tested. This is global-dhp__io-dbench4-fsync
On Tue, 2017-08-08 at 09:44 -0700, Greg KH wrote:
>
> Should these go back farther than 4.12? Looks like they apply cleanly
> to 4.9, didn't look older than that...
I met prerequisites at 4.11, but I wasn't patching anything remotely
resembling virgin source.
-Mike
Greg,
this is 765e40b675a9566459ddcb8358ad16f3b8344bbe.
On Tuesday 8 August 2017 18:43:33 CEST Greg KH wrote:
> On Tue, Aug 08, 2017 at 06:36:01PM +0200, Oleksandr Natalenko wrote:
> > Could you queue "block: disable runtime-pm for blk-mq" too please? It is
> > also related to suspend-resume
On Tue, Aug 08, 2017 at 06:34:01PM +0200, Mike Galbraith wrote:
> On Tue, 2017-08-08 at 09:22 -0700, Greg KH wrote:
> > On Sun, Jul 30, 2017 at 03:50:15PM +0200, Oleksandr Natalenko wrote:
> > > Hello Mike et al.
> > >
> > > > On Sunday 30 July 2017 7:12:31 CEST Mike Galbraith wrote:
> > > >
Could you queue "block: disable runtime-pm for blk-mq" too please? It is also
related to suspend-resume freezes that were observed by multiple users.
Thanks.
On Tuesday 8 August 2017 18:33:29 CEST Jens Axboe wrote:
> On 08/08/2017 10:22 AM, Greg KH wrote:
> > On Sun, Jul 30, 2017 at 03:50:15PM
On 08/08/2017 09:41 AM, Ming Lei wrote:
Hi Laurence and Guys,
On Mon, Aug 07, 2017 at 06:06:11PM -0400, Laurence Oberman wrote:
On Mon, Aug 7, 2017 at 8:48 AM, Laurence Oberman
wrote:
Hello
I need to retract my Tested-by:
While it's valid that the patches do not
Hi Laurence and Guys,
On Mon, Aug 07, 2017 at 06:06:11PM -0400, Laurence Oberman wrote:
> On Mon, Aug 7, 2017 at 8:48 AM, Laurence Oberman
> wrote:
> Hello
>
> I need to retract my Tested-by:
>
> While it's valid that the patches do not introduce performance regressions,
>
On Tue, Aug 08, 2017 at 07:49:53PM +0800, Ming Lei wrote:
> On Tue, Aug 8, 2017 at 7:27 PM, Mel Gorman
> wrote:
> > On Tue, Aug 08, 2017 at 06:43:03PM +0800, Ming Lei wrote:
> >> Hi Mel Gorman,
> >>
> >> On Tue, Aug 8, 2017 at 6:30 PM, Mel Gorman
> On 08 Aug 2017, at 11:09, Ming Lei
> wrote:
>
> On Tue, Aug 08, 2017 at 10:09:57AM +0200, Paolo Valente wrote:
>>
>>> On 05 Aug 2017, at 08:56, Ming Lei
>>> wrote:
>>>
>>> In Red Hat internal storage test wrt.
On Tue, Aug 08, 2017 at 10:09:57AM +0200, Paolo Valente wrote:
>
> > On 05 Aug 2017, at 08:56, Ming Lei
> > wrote:
> >
> > In Red Hat internal storage test wrt. blk-mq scheduler, we
> > found that I/O performance is quite bad with mq-deadline, especially
>
Cc: Andrew Morton
Cc: linux...@kvack.org
Signed-off-by: Ming Lei
---
mm/page_io.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/mm/page_io.c b/mm/page_io.c
index b6c4ac388209..11c6f4a9a25b 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@
Cc: "Rafael J. Wysocki"
Cc: linux...@vger.kernel.org
Signed-off-by: Ming Lei
---
kernel/power/swap.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 57d22571f306..aa52ccc03fcc 100644
---
Signed-off-by: Ming Lei
---
drivers/block/loop.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index ef8334949b42..58df9ed70328 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -487,6 +487,11 @@ static int
Signed-off-by: Ming Lei
---
fs/buffer.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/fs/buffer.c b/fs/buffer.c
index 5715dac7821f..c821ed6a6f0e 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3054,8 +3054,13 @@ static void
Cc: Jaegeuk Kim
Cc: Chao Yu
Cc: linux-f2fs-de...@lists.sourceforge.net
Signed-off-by: Ming Lei
---
fs/f2fs/data.c | 4
1 file changed, 4 insertions(+)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index
BTRFS uses bio->bi_vcnt to figure out the number of pages, which
is no longer correct once we start to enable multipage bvecs.
So use bio_for_each_segment_all() to do that instead.
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc:
All of these look safe after multipage bvec is supported.
Cc: linux-bca...@vger.kernel.org
Signed-off-by: Ming Lei
---
drivers/md/bcache/btree.c | 1 +
drivers/md/bcache/super.c | 6 ++
drivers/md/bcache/util.c | 7 +++
3 files changed, 14 insertions(+)
diff --git
bio_iov_iter_get_pages() uses the otherwise-unused bvec space for
temporarily storing the page pointer array; this patch adds a
comment on this usage wrt. multipage bvec support.
Signed-off-by: Ming Lei
---
block/bio.c | 4
1 file changed, 4 insertions(+)
diff --git a/block/bio.c
For BIO-based DM, some targets, such as the crypt target, aren't
ready to deal with incoming bios bigger than 1 MB.
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Signed-off-by: Ming Lei
---
drivers/md/dm.c | 11 ++-
1 file changed, 10 insertions(+), 1
This patch adds comment on usage of bio_alloc_pages().
Signed-off-by: Ming Lei
---
block/bio.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/bio.c b/block/bio.c
index e241bbc49f14..826b5d173416 100644
--- a/block/bio.c
+++ b/block/bio.c
@@
Commit 17347cec15f919901c90 ("Btrfs: change how we iterate bios in endio")
mentioned that for dio the submitted bio may be fast cloned, we
can't access the bvec table directly for a cloned bio, so use
bio_get_first_bvec() to retrieve the 1st bvec.
Cc: Chris Mason
Cc: Josef Bacik
On Tue, Aug 8, 2017 at 9:45 AM, Ming Lei wrote:
> Cc: Chris Mason
> Cc: Josef Bacik
> Cc: David Sterba
> Cc: linux-bt...@vger.kernel.org
> Signed-off-by: Ming Lei
Can you please add some meaningful
Since we need to support multipage bvecs, don't access bio->bi_io_vec
in copy_to_high_bio_irq(); just use the standard iterator
to do that.
Signed-off-by: Ming Lei
---
block/bounce.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git
This patch clarifies the fact that even though both
bio_for_each_segment() and bio_for_each_segment_all()
are named with _segment/_segment_all, they still return
one page per vector, instead of a real segment (multipage bvec).
With the coming multipage bvec, both helpers
are capable of
This patch implements singlepage version of the following
3 helpers:
- bvec_iter_offset_sp()
- bvec_iter_len_sp()
- bvec_iter_page_sp()
So that one multipage bvec can be split into singlepage
bvecs, keeping users of the current bvec iterator happy.
Signed-off-by: Ming Lei
This patch introduces helpers which are suffixed with _mp
and _sp for the multipage bvec/segment support.
The helpers with _mp suffix are the interfaces for treating
one bvec/segment as real multipage one, for example, .bv_len
is the total length of the multipage segment.
The helpers with _sp
It is enough to check and compute bio->bi_seg_front_size just
after the 1st segment is found, but the current code checks it
for each bvec, which is inefficient.
This patch follows the approach of __blk_recalc_rq_segments()
for computing bio->bi_seg_front_size, which is more efficient,
and the code becomes
It is more efficient to use bio_for_each_segment_mp()
for mapping sg; meanwhile we have to consider splitting
the multipage bvec as is done in blk_bio_segment_split().
Signed-off-by: Ming Lei
---
block/blk-merge.c | 72 +++
1 file
In this case, 'sectors' can't be zero at all, so remove the check
and let the bio be split.
Signed-off-by: Ming Lei
---
block/blk-merge.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index
This helper can be used to iterate each singlepage bvec
from one multipage bvec.
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 14 ++
1 file changed, 14 insertions(+)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index c1ec0945451a..23d3abdf057c
When merging one bvec into a segment, if the bvec is too big
to merge, the current policy is to move the whole bvec into a
new segment.
This patchset changes the policy to try to maximize the size of
front segments; that means in the above situation, part of the bvec
is merged into the current segment, and
Preparing for supporting multipage bvec.
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-bt...@vger.kernel.org
Signed-off-by: Ming Lei
---
fs/btrfs/compression.c | 5 -
fs/btrfs/extent_io.c | 8 ++--
2
Once multipage bvec is enabled, the last bvec may include
more than one page, so this patch uses bvec_get_last_page()
to truncate the bio.
Signed-off-by: Ming Lei
---
fs/buffer.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/fs/buffer.c
BTRFS and guard_bio_eod() need to get the last page, so introduce
this helper to make them happy.
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 14 ++
1 file changed, 14 insertions(+)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index
In bio_check_pages_dirty(), bvec->bv_page is used as a flag
marking whether the page has been dirtied & released; if not,
it will be dirtied in a deferred workqueue.
With multipage bvecs we can't do that any more, so change
the logic to check all pages in one mp bvec, and only
release all
Signed-off-by: Ming Lei
---
block/bio.c | 17 +++--
block/blk-zoned.c | 5 +++--
block/bounce.c| 6 --
3 files changed, 18 insertions(+), 10 deletions(-)
diff --git a/block/bio.c b/block/bio.c
index 716e6917b0fd..fd6a055f491c 100644
---
Cc: linux-bca...@vger.kernel.org
Signed-off-by: Ming Lei
---
drivers/md/bcache/btree.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 3da595ae565b..74cbb7387dc5 100644
---
The bio is always freed after running crypt_free_buffer_pages(),
so it isn't necessary to clear the bv->bv_page.
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Signed-off-by: Ming Lei
---
drivers/md/dm-crypt.c | 1 -
1 file changed, 1 deletion(-)
diff --git
Signed-off-by: Ming Lei
---
fs/block_dev.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9941dc8342df..489d103ae11b 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -209,6 +209,7 @@
Cc: Mike Snitzer
Cc: dm-de...@redhat.com
Signed-off-by: Ming Lei
---
drivers/md/dm-crypt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 664ba3504f48..0f2f44a73a32 100644
---
Signed-off-by: Ming Lei
---
fs/mpage.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/mpage.c b/fs/mpage.c
index 2e4c41ccb5c9..b3c0f0d6bc21 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -46,9 +46,10 @@
static void mpage_end_io(struct bio *bio)
{
Signed-off-by: Ming Lei
---
fs/iomap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/iomap.c b/fs/iomap.c
index 039266128b7f..17541e1c86a2 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -790,8 +790,9 @@ static void iomap_dio_bio_end_io(struct bio
Cc: "Darrick J. Wong"
Cc: linux-...@vger.kernel.org
Signed-off-by: Ming Lei
---
fs/xfs/xfs_aops.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 6bf120bb1a17..94df43dcae0b 100644
---
Cc: Steven Whitehouse
Cc: Bob Peterson
Cc: cluster-de...@redhat.com
Signed-off-by: Ming Lei
---
fs/gfs2/lops.c| 3 ++-
fs/gfs2/meta_io.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/fs/gfs2/lops.c
Cc: Jaegeuk Kim
Cc: Chao Yu
Cc: linux-f2fs-de...@lists.sourceforge.net
Signed-off-by: Ming Lei
---
fs/f2fs/data.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index
Cc: Shaohua Li
Cc: linux-r...@vger.kernel.org
Signed-off-by: Ming Lei
---
drivers/md/raid1.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index f50958ded9f0..e34080bd91cb 100644
---
We will support multipage bvecs in the future, so switch to the
iterator for getting the bv_page of each bvec from the original bio.
Cc: Matthew Wilcox
Signed-off-by: Ming Lei
---
block/bounce.c | 17 -
1 file changed, 8 insertions(+), 9
Introduce BVEC_ITER_ALL_INIT for iterating one bio
from start to end.
Signed-off-by: Ming Lei
---
include/linux/bvec.h | 9 +
1 file changed, 9 insertions(+)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index ec8a4d7af6bd..fe7a22dd133b 100644
---
Cc: Chris Mason
Cc: Josef Bacik
Cc: David Sterba
Cc: linux-bt...@vger.kernel.org
Acked-by: David Sterba
Signed-off-by: Ming Lei
---
fs/btrfs/compression.c | 4
fs/btrfs/inode.c | 12
2
Signed-off-by: Ming Lei
---
drivers/block/drbd/drbd_bitmap.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 809fd245c3dc..70890d950dc9 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++
Hi,
This patchset brings multipage bvec into block layer:
1) what is multipage bvec?
A multipage bvec means that one 'struct bio_bvec' can hold
multiple physically contiguous pages, instead of the single
page the Linux kernel has used for a long time.
2) why is multipage bvec introduced?
> On 05 Aug 2017, at 08:56, Ming Lei
> wrote:
>
> In Red Hat internal storage test wrt. blk-mq scheduler, we
> found that I/O performance is quite bad with mq-deadline, especially
> for sequential I/O on some multi-queue SCSI devices (lpfc, qla2xxx,
>
> On 07 Aug 2017, at 20:42, Paolo Valente
> wrote:
>
>>
>> On 07 Aug 2017, at 19:32, Paolo Valente
>> wrote:
>>
>>>
>>> On 05 Aug 2017, at 00:05, Paolo Valente
>>>
Hi,
On 08/07/2017 05:48 PM, Martin K. Petersen wrote:
>
>> If you create the integrity tag at or above device mapper level, you
>> will run into problems because the same device can be accessed using
>> device mapper and using physical volume /dev/sd*. If you create
>> integrity tags at device