On Wed, Jun 22, 2016 at 4:22 PM, Lars Ellenberg
wrote:
> For a long time, generic_make_request() converts recursion into
> iteration by queuing recursive arguments on current->bio_list.
>
> This is convenient for stacking drivers,
> the top-most driver would take the
On Tue, Feb 23, 2016 at 10:54 PM, Mike Snitzer <snit...@redhat.com> wrote:
> On Mon, Feb 22 2016 at 9:55pm -0500,
> Ming Lei <ming@canonical.com> wrote:
>
>> On Tue, Feb 23, 2016 at 6:58 AM, Kent Overstreet
>> <kent.overstr...@gmail.com> wrote:
>&g
ch two subsystems will be done in the future.
Also, bio_add_page() is used in floppy, dm-crypt and fs/logfs to
avoid direct access to .bi_vcnt & .bi_io_vec.
Thanks,
Ming
Ming Lei (27):
block: bio: introduce 4 helpers for cleanup
block: drbd: use bio_get_base_vec() to retrieve the 1st bvec
b
On Thu, 7 Apr 2016 11:54:49 +0800
Ming Lei <tom.leim...@gmail.com> wrote:
> On Mon, Apr 4, 2016 at 1:06 PM, Shaun Tancheff <sh...@tancheff.com> wrote:
> > Recursive endio calls can exceed 16k stack. Tested with
> > 32k stack and observed:
> >
> >
return chain_bio;
> +}
> +
> /**
> * bio_endio - end I/O on a bio
> * @bio: bio
> @@ -1762,9 +1815,7 @@ void bio_endio(struct bio *bio)
> bio_put(bio);
> bio = parent;
> } else {
> - if (bio->bi_
On Fri, Jun 24, 2016 at 10:27 PM, Lars Ellenberg
<lars.ellenb...@linbit.com> wrote:
> On Fri, Jun 24, 2016 at 07:36:57PM +0800, Ming Lei wrote:
>> >
>> > This is not a theoretical problem.
>> > At least in DRBD, and an unfortunately high IO concurrency wrt. the
On Mon, Jul 4, 2016 at 4:20 PM, Lars Ellenberg
<lars.ellenb...@linbit.com> wrote:
> On Sat, Jul 02, 2016 at 06:28:29PM +0800, Ming Lei wrote:
>> The idea looks good, but not sure it can cover all cases of
>> dm over drbd or drbd over dm and maintaining two lists
>&
On Thu, Jul 7, 2016 at 4:03 PM, Lars Ellenberg
<lars.ellenb...@linbit.com> wrote:
> On Wed, Jul 06, 2016 at 11:57:51PM +0800, Ming Lei wrote:
>> > my suggestion
>> >
>> > generic_make_request(bio_orig)
>> > NULL
On Fri, Jul 8, 2016 at 6:07 AM, NeilBrown wrote:
> On Thu, Jul 07 2016, Lars Ellenberg wrote:
>
>> On Thu, Jul 07, 2016 at 03:35:48PM +1000, NeilBrown wrote:
>>> On Wed, Jun 22 2016, Lars Ellenberg wrote:
>>>
>>> > For a long time, generic_make_request() converts recursion into
On Wed, Jul 6, 2016 at 8:38 PM, Lars Ellenberg
<lars.ellenb...@linbit.com> wrote:
> On Mon, Jul 04, 2016 at 06:47:29PM +0800, Ming Lei wrote:
>> >> One clean solution may be to convert the loop of generic_make_request()
>> >> into the following way:
>> >&
On Tue, Feb 7, 2017 at 10:49 AM, Kent Overstreet
wrote:
> On Mon, Feb 06, 2017 at 04:47:24PM -0900, Kent Overstreet wrote:
>> On Mon, Feb 06, 2017 at 01:53:09PM +0100, Pavel Machek wrote:
>> > Still there on v4.9, 36 threads on nokia n900 cellphone.
>> >
>> > So.. what
On Tue, Aug 30, 2016 at 5:57 AM, Mikulas Patocka <mpato...@redhat.com> wrote:
>
>
> On Mon, 29 Aug 2016, Ming Lei wrote:
>
>> On Sat, Aug 27, 2016 at 11:09 PM, Mikulas Patocka <mpato...@redhat.com>
>> wrote:
>> >
>> >
>> > On Fri,
ss code than reworking bcache to split bios internally.
>> > >
>> > >BTW. In the device mapper, we have a layer dm-io, that was created to work
>> > >around bio size limitations - it accepts unlimited I/O request and splits
>> > >it to several bios. When bio si
Firstly, we have mature bvec/bio iterator helpers for iterating over each
page in one bio, so it isn't necessary to reinvent the wheel to do that.
Secondly, the coming multipage bvecs require this patch.
Also add comments about the direct access to bvec table.
Signed-off-by: Ming Lei <tom.leim...@gmail.
n help to prepare for the following multipage bvec
support.
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
block/bio.c | 8 ++--
drivers/block/floppy.c| 3 +--
drivers/md/bcache/io.c| 4 +---
drivers/md/bcache/journal.c | 4 +---
drivers/md/bcach
We have the standard interface for adding a page to a bio, so don't
do it in a hackish way.
Reviewed-by: Christoph Hellwig <h...@lst.de>
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm-crypt.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/d
Avoid accessing .bi_vcnt directly, because the bio can be
split by the block layer, and .bi_vcnt should never have
been used here.
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm-rq.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/md/d
to access bio->bi_vcnt.
The most special one is using bvec iterator helpers
to implement .get_page/.next_page for dm-io.c.
One big motivation is to prepare for supporting multipage
bvecs, but this patchset is a good cleanup too, even apart
from that purpose.
Thanks,
Ming
Ming Lei (12):
block:
On Fri, Nov 11, 2016 at 8:05 PM, Ming Lei <tom.leim...@gmail.com> wrote:
> Avoid to access .bi_vcnt directly, because the bio can be
> splitted from block layer, and .bi_vcnt should never have
> been used here.
>
> Signed-off-by: Ming Lei <tom.leim...@gmail.com>
>
On Mon, Nov 21, 2016 at 10:49 PM, Mike Snitzer <snit...@redhat.com> wrote:
> On Fri, Nov 11 2016 at 7:05am -0500,
> Ming Lei <tom.leim...@gmail.com> wrote:
>
>> Firstly we have mature bvec/bio iterator helper for iterate each
>> page in one bio, not ne
help reducing the
OK, that is just the 1st part of the patchset.
> actual series to a sane size, and it should also help to cut
> down the Cc list.
>
Thanks,
Ming Lei
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
On Mon, Oct 31, 2016 at 11:29 PM, Christoph Hellwig <h...@infradead.org> wrote:
> On Sat, Oct 29, 2016 at 04:08:08PM +0800, Ming Lei wrote:
>> Avoid to access .bi_vcnt directly, because it may be not what
>> the driver expected any more after supporting multipage bvec.
>
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm-crypt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 4999c7497f95..ed0f54e51638 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -
Ming Lei (60):
block: bio: introduce bio_init_with_vec_table()
block drivers: convert to bio_init_with_vec_table()
block: drbd: remove impossible failure handling
block: floppy: use bio_add_page()
target: avoid to access .bi_vcnt directly
bcache: debug: avoid to access .bi_io
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/block/floppy.c| 3 +--
drivers/md/bcache/io.c| 4 +---
drivers/md/bcache/journal.c | 4 +---
drivers/md/bcache/movinggc.c | 7 +++
drivers/md/bcache/super.c | 13 -
drivers/md/bcache/write
For BIO-based DM, some targets aren't ready to deal with
incoming bios bigger than 1 Mbyte, such as the crypt and log-writes
targets.
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/md/
For BIO-based DM, some targets aren't ready to deal with
incoming bios bigger than 1 Mbyte, such as the crypt target.
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm.c b/driv
- address comments in V0
[1], http://marc.info/?l=linux-kernel&m=141680246629547&w=2
[2], https://patchwork.kernel.org/patch/9451523/
[3], http://marc.info/?t=14773544711&r=1&w=2
[4], http://marc.info/?l=linux-mm&m=147745525801433&w=2
Ming Lei (54):
block: drbd: comment on direct access bvec table
The bio is always freed after running crypt_free_buffer_pages(),
so it isn't necessary to clear bv->bv_page.
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm-crypt.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-cryp
Signed-off-by: Ming Lei <tom.leim...@gmail.com>
---
drivers/md/dm-crypt.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 593cdf88bf5f..c6932fb85418 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -
On Wed, Jan 4, 2017 at 12:43 AM, Mike Snitzer <snit...@redhat.com> wrote:
> On Tue, Dec 27 2016 at 10:56am -0500,
> Ming Lei <tom.leim...@gmail.com> wrote:
>
>> For BIO based DM, some targets aren't ready for dealing with
>> bigger incoming bio than 1Mbyte, such
On Thu, Mar 23, 2017 at 05:29:02PM +1100, NeilBrown wrote:
>
> Currently only dm and md/raid5 bios trigger trace_block_bio_complete().
> Now that we have bio_chain(), it is not possible, in general, for a
> driver to know when the bio is really complete. Only bio_endio()
> knows that.
>
> So
On Wed, Apr 12, 2017 at 06:38:07PM +, Bart Van Assche wrote:
> On Wed, 2017-04-12 at 11:42 +0800, Ming Lei wrote:
> > On Tue, Apr 11, 2017 at 06:18:36PM +, Bart Van Assche wrote:
> > > On Tue, 2017-04-11 at 14:03 -0400, Mike Snitzer wrote:
> > > > Rather than
On Thu, Apr 13, 2017 at 09:59:57AM -0700, Bart Van Assche wrote:
> On 04/12/17 19:20, Ming Lei wrote:
> > On Wed, Apr 12, 2017 at 06:38:07PM +, Bart Van Assche wrote:
> >> If the blk-mq core would always rerun a hardware queue if a block driver
> >> returns BLK_MQ_RQ
On Fri, Apr 14, 2017 at 05:12:50PM +, Bart Van Assche wrote:
> On Fri, 2017-04-14 at 09:13 +0800, Ming Lei wrote:
> > On Thu, Apr 13, 2017 at 09:59:57AM -0700, Bart Van Assche wrote:
> > > On 04/12/17 19:20, Ming Lei wrote:
> > > > On Wed, Apr 12, 2017 at 06:
On Tue, Apr 11, 2017 at 06:18:36PM +, Bart Van Assche wrote:
> On Tue, 2017-04-11 at 14:03 -0400, Mike Snitzer wrote:
> > Rather than working so hard to use DM code against me, your argument
> > should be: "blk-mq drivers X, Y and Z rerun the hw queue; this is a well
> > established pattern"
>
st,
> + .complete = lo_complete_rq,
> };
>
> static int loop_add(struct loop_device **l, int i)
> diff --git a/drivers/block/loop.h b/drivers/block/loop.h
> index fb2237c73e61..fecd3f97ef8c 100644
> --- a/drivers/block/loop.h
> +++ b/drivers/block/loop.h
> @@
t;Martin K. Petersen"
Cc: linux-s...@vger.kernel.org
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-rq.c | 1 -
drivers/nvme/host/fc.c | 3 ---
drivers/scsi/scsi_lib.c | 4 ----
3 files changed, 8 deletions(-)
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-
On Fri, Jun 30, 2017 at 2:42 AM, Jens Axboe <ax...@kernel.dk> wrote:
> On 06/29/2017 10:00 AM, Jens Axboe wrote:
>> On 06/29/2017 09:58 AM, Jens Axboe wrote:
>>> On 06/29/2017 02:40 AM, Ming Lei wrote:
>>>> On Thu, Jun 29, 2017 at 5:49 AM, Jens Axboe <ax...@k
On Thu, Jun 29, 2017 at 12:31:05PM -0500, Brian King wrote:
> On 06/29/2017 11:25 AM, Ming Lei wrote:
> > On Thu, Jun 29, 2017 at 11:58 PM, Jens Axboe <ax...@kernel.dk> wrote:
> >> On 06/29/2017 02:40 AM, Ming Lei wrote:
> >>> On Thu, Jun 29, 2017 at 5:49 AM
On Thu, Jun 29, 2017 at 11:58 PM, Jens Axboe <ax...@kernel.dk> wrote:
> On 06/29/2017 02:40 AM, Ming Lei wrote:
>> On Thu, Jun 29, 2017 at 5:49 AM, Jens Axboe <ax...@kernel.dk> wrote:
>>> On 06/28/2017 03:12 PM, Brian King wrote:
>>>> This patch converts
On Sat, Jul 1, 2017 at 10:18 AM, Brian King <brk...@linux.vnet.ibm.com> wrote:
> On 06/30/2017 06:26 PM, Jens Axboe wrote:
>> On 06/30/2017 05:23 PM, Ming Lei wrote:
>>> Hi Bian,
>>>
>>> On Sat, Jul 1, 2017 at 2:33 AM, Brian King <brk...@linux.vnet.ibm
l_block devices' is? It means you create 80 null_blk
devices? Or do you create one null_blk and set its hw queue count to 80
via the 'submit_queues' module parameter?
I guess we should focus on the multi-queue case since it is the normal
mode for NVMe.
> per null_blk. This is what I saw on this ma
ounters is the right way to fix it, at least not for the inflight
> counter.
Yeah, it won't be an issue for the non-mq path, and for the blk-mq path
maybe we can use some blk-mq knowledge (the tagset?) to figure out
the 'in_flight' counter. I thought about it before, but never got a perfect
solution, and loo
On Mon, Aug 07, 2017 at 01:29:41PM -0400, Laurence Oberman wrote:
>
>
> On 08/07/2017 11:27 AM, Bart Van Assche wrote:
> > On Mon, 2017-08-07 at 08:48 -0400, Laurence Oberman wrote:
> > > I tested this series using Ming's tests as well as my own set of tests
> > > typically run against changes
Hi Guys,
Laurence and I see a system lockup issue when running concurrent
big buffered writes (4 Mbytes) to IB SRP on v4.13-rc3.
1. How to reproduce
1) set up IB SRP & multipath
#./start_opensm.sh
#./start_srp.sh
# cat start_opensm.sh
#!/bin/bash
rmmod
On Mon, Jun 19, 2017 at 11:18 PM, Christoph Hellwig <h...@lst.de> wrote:
> On Mon, Jun 19, 2017 at 11:13:46PM +0800, Ming Lei wrote:
>> On Mon, Jun 19, 2017 at 11:00 PM, Christoph Hellwig <h...@lst.de> wrote:
>> > On Mon, Jun 19, 2017 at 10
pd = q->queuedata;
> --
> 2.11.0
>
blk_queue_make_request() has set bounce for any highmem page for a long
time, and in theory this patch might cause a regression on 32-bit arches
when the controller really can't handle highmem pages (especially in the
case of PAE), so we may need to be careful ab
On Mon, Jun 19, 2017 at 11:00 PM, Christoph Hellwig <h...@lst.de> wrote:
> On Mon, Jun 19, 2017 at 10:34:36PM +0800, Ming Lei wrote:
>> blk_queue_make_request() sets bounce for any highmem page for long time,
>> and in theory this patch might cause regression on 32bit arch, w
path can break the quiescing. This patchset improves the
interface by removing queue stopping from it, so it becomes easier to use.
Thanks,
Ming
Ming Lei (6):
blk-mq: introduce blk_mq_unquiesce_queue
blk-mq: use the introduced blk_mq_unquiesce_queue()
blk-mq: fix blk_mq_quiesce_queue
blk-mq
BLK_MQ_S_STOPPED may not be observed in other concurrent I/O paths,
so we can't guarantee that dispatching won't happen after the queue
is stopped.
Clarify this fact and avoid potential misuse.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-mq.c | 10 ++
1 file chang
blk_mq_unquiesce_queue() is used for unquiescing the queue.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-rq.c | 2 +-
drivers/nvme/host/core.c | 2 +-
drivers/scsi/scsi_lib.c | 5 -
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/md/dm-
cing.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-mq.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index e1fc9ab50c87..900eb91e0ece 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -168,8 +168
dispatched request is freed
- busy dispatched request is requeued, and the STARTED
flag is cleared
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-mq.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 864709
is introduced and evaluated
inside RCU read-side critical sections for fixing the above issues.
This patch fixes a request use-after-free when canceling requests
of NVMe in nvme_dev_disable().
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-mq.c
Currently we use blk_mq_start_stopped_hw_queues() implicitly
as the pair of blk_mq_quiesce_queue(); now we introduce
blk_mq_unquiesce_queue() explicitly.
This function is also introduced for fixing the
current quiescing mechanism, which will be done
in the following patches.
Signed-off-by: Ming Lei <m
e system]
>
> url:
> https://github.com/0day-ci/linux/commits/Ming-Lei/blk-mq-introduce-blk_mq_unquiesce_queue/20170526-140138
> base: https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git
> for-next
> config: x86_64-randconfig-x019-201721 (attached as .config)
>
blk_mq_unquiesce_queue() is used for unquiescing the
queue explicitly, so replace blk_mq_start_stopped_hw_queues()
with it.
Cc: linux-n...@lists.infradead.org
Cc: linux-s...@vger.kernel.org
Cc: dm-devel@redhat.com
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-rq.c
On Tue, May 30, 2017 at 12:04:02PM -0700, Eduardo Valentin wrote:
> On Sat, May 27, 2017 at 10:21:21PM +0800, Ming Lei wrote:
> > blk_mq_unquiesce_queue() is used for unquiescing the
> > queue explicitly, so replace blk_mq_start_stopped_hw_queues()
> > with it.
On Tue, May 30, 2017 at 03:12:41PM +, Bart Van Assche wrote:
> On Sat, 2017-05-27 at 22:21 +0800, Ming Lei wrote:
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -3030,7 +3030,10 @@ scsi_internal_device_unblock(struct scsi_device
> > *s
On Thu, Jun 01, 2017 at 11:09:00PM +, Bart Van Assche wrote:
> On Thu, 2017-06-01 at 08:54 +0800, Ming Lei wrote:
> > On Wed, May 31, 2017 at 03:21:41PM +, Bart Van Assche wrote:
> > > On Wed, 2017-05-31 at 20:37 +0800, Ming Lei wrote:
> > > > diff --git a/dri
On Wed, May 31, 2017 at 03:21:41PM +, Bart Van Assche wrote:
> On Wed, 2017-05-31 at 20:37 +0800, Ming Lei wrote:
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index 99e16ac479e3..ffcf05765e2b 100644
> > --- a/drivers/scsi/scsi_lib.c
> > ++
Cc: dm-devel@redhat.com
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-rq.c | 2 +-
drivers/nvme/host/core.c | 2 +-
drivers/scsi/scsi_lib.c | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index b639fa
t; }
>
> +/*
> + * Should only be used carefully, when the caller knows we want to
> + * bypass a potential IO scheduler on the target device.
> + */
> +void blk_mq_request_bypass_insert(struct request *rq)
> +{
> + struct blk_mq_ctx *ctx = rq->mq_ctx;
> + struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
> +
> + spin_lock(&hctx->lock);
> + list_add_tail(&rq->queuelist, &hctx->dispatch);
> + spin_unlock(&hctx->lock);
> +
> + blk_mq_run_hw_queue(hctx, false);
> +}
> +
Hello Jens and Mike,
This patch sends flush requests to ->dispatch directly too, which changes
the previous behaviour. Is that OK for dm-rq?
--
Ming Lei
On Fri, Sep 15, 2017 at 12:30 AM, Jens Axboe <ax...@kernel.dk> wrote:
> On 09/14/2017 09:57 AM, Ming Lei wrote:
>> On Tue, Sep 12, 2017 at 5:27 AM, Jens Axboe <ax...@kernel.dk> wrote:
>>> On 09/11/2017 03:13 PM, Mike Snitzer wrote:
>>>> On Mon, Sep 11
eturns successfully.
This way the block layer's I/O scheduling becomes more
effective on dm-rq devices.
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-core.c | 4 +++-
block/blk-sysfs.c | 5 +
block/blk.h| 2 --
include/linux/blkdev.h | 1 +
4 files changed,
in that coverletter was
against the raw lpfc/ib (run after 'multipath -F'), instead of dm-mpath.
5) this patchset itself doesn't depend on the scsi_mq_perf patchset [1]
[1] https://marc.info/?t=15043655572&r=1&w=2
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-mpath.
mq-deadline, sequential I/O performance
is improved a lot; see the test results in patch 5's commit log.
Any comments are welcome!
Thanks,
Ming
Ming Lei (5):
block: don't call blk_mq_delay_run_hw_queue() in case of
BLK_STS_RESOURCE
dm-mpath: return DM_MAPIO_REQUEUE in case of rq allocat
blk-mq will rerun the queue via RESTART after one request completes,
so it isn't necessary to wait a random time before requeuing; we should
trust blk-mq to do it.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-mpath.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
again.
Cc: Mike Snitzer <snit...@redhat.com>
Cc: dm-devel@redhat.com
Cc: Sagi Grimberg <s...@grimberg.me>
Cc: linux-n...@lists.infradead.org
Cc: "James E.J. Bottomley" <j...@linux.vnet.ibm.com>
Cc: "Martin K. Petersen" <martin.peter...@oracle.com>
Cc: li
It is very normal to see allocation failures, so it isn't necessary
to dump them and annoy people.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-mpath.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index f5a1088a6e79..f57ad8
On Tue, Sep 19, 2017 at 04:49:15PM +, Bart Van Assche wrote:
> On Wed, 2017-09-20 at 00:04 +0800, Ming Lei wrote:
> > Run queue at end_io is definitely wrong, because blk-mq has SCHED_RESTART
> > to do that already.
>
> Sorry but I disagree. If SCHED_RESTART is set that c
Hi Mike,
On Tue, Sep 19, 2017 at 07:50:06PM -0400, Mike Snitzer wrote:
> On Tue, Sep 19 2017 at 7:25pm -0400,
> Bart Van Assche <bart.vanass...@wdc.com> wrote:
>
> > On Wed, 2017-09-20 at 06:44 +0800, Ming Lei wrote:
> > > For this issue, it isn't same between S
On Tue, Sep 19, 2017 at 06:42:30PM +, Bart Van Assche wrote:
> On Wed, 2017-09-20 at 00:55 +0800, Ming Lei wrote:
> > On Wed, Sep 20, 2017 at 12:49 AM, Bart Van Assche
> > <bart.vanass...@wdc.com> wrote:
> > > On Wed, 2017-09-20 at 00:04 +0800, Ming Lei wrot
On Mon, Sep 18, 2017 at 03:18:16PM +, Bart Van Assche wrote:
> On Sun, 2017-09-17 at 20:40 +0800, Ming Lei wrote:
> > "if no request has completed before the delay has expired" can't be a
> > reason to rerun the queue, because the queue can still be busy.
>
&
On Thu, Sep 21, 2017 at 03:53:45PM +, Bart Van Assche wrote:
> On Thu, 2017-09-21 at 09:41 +0800, Ming Lei wrote:
> > On Wed, Sep 20, 2017 at 11:26:09PM +, Bart Van Assche wrote:
> > > dm-mpath request submission latency if the underlying driver returns
> > >
On Wed, Sep 20, 2017 at 11:26:09PM +, Bart Van Assche wrote:
> On Thu, 2017-09-21 at 06:36 +0800, Ming Lei wrote:
> > Actually with GFP_ATOMIC, dispatch in dm-rq can't move on and no request
> > will be dequeued from IO scheduler queue if this allocation fails, that
> > me
On Fri, Sep 15, 2017 at 05:57:31PM +, Bart Van Assche wrote:
> On Sat, 2017-09-16 at 00:44 +0800, Ming Lei wrote:
> > If .queue_rq() returns BLK_STS_RESOURCE, blk-mq will rerun
> > the queue in the three situations:
> >
> > 1) if BLK_MQ_S_SCHED_RESTART is set
> &
On Fri, Sep 15, 2017 at 04:06:55PM -0400, Mike Snitzer wrote:
> On Fri, Sep 15 2017 at 1:29pm -0400,
> Bart Van Assche <bart.vanass...@wdc.com> wrote:
>
> > On Sat, 2017-09-16 at 00:44 +0800, Ming Lei wrote:
> > > blk-mq will rerun queue via RESTART after one r
On Fri, Sep 15, 2017 at 05:29:53PM +, Bart Van Assche wrote:
> On Sat, 2017-09-16 at 00:44 +0800, Ming Lei wrote:
> > blk-mq will rerun queue via RESTART after one request is completion,
> > so not necessary to wait random time for requeuing, it should trust
&
.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-mpath.c | 14 +-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 11f273d2f018..0902f7762306 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-m
On Fri, Sep 22, 2017 at 03:06:16PM +, Bart Van Assche wrote:
> On Fri, 2017-09-22 at 09:35 +0800, Ming Lei wrote:
> > + /*
> > +* blk-mq's SCHED_RESTART can cover this requeue, so
> > +* we needn't to deal with it b
On Tue, Sep 19, 2017 at 11:48:23AM -0400, Mike Snitzer wrote:
> On Tue, Sep 19 2017 at 1:43am -0400,
> Ming Lei <ming@redhat.com> wrote:
>
> > On Mon, Sep 18, 2017 at 03:18:16PM +, Bart Van Assche wrote:
> > > On Sun, 2017-09-17 at 20:40 +0800, Ming Lei wro
On Wed, Sep 20, 2017 at 12:49 AM, Bart Van Assche
<bart.vanass...@wdc.com> wrote:
> On Wed, 2017-09-20 at 00:04 +0800, Ming Lei wrote:
>> Run queue at end_io is definitely wrong, because blk-mq has SCHED_RESTART
>> to do that already.
>
> Sorry but I disagree. If SCHED
On Tue, Sep 19, 2017 at 11:56:03AM -0400, Mike Snitzer wrote:
> On Tue, Sep 19 2017 at 11:36am -0400,
> Bart Van Assche <bart.vanass...@wdc.com> wrote:
>
> > On Tue, 2017-09-19 at 13:43 +0800, Ming Lei wrote:
> > > On Mon, Sep 18, 2017 at 03:18:16PM +, Bart Va
On Tue, Sep 19, 2017 at 10:41:30AM -0400, Mike Snitzer wrote:
> On Sun, Sep 17 2017 at 9:23am -0400,
> Ming Lei <ming@redhat.com> wrote:
>
> > On Fri, Sep 15, 2017 at 04:06:55PM -0400, Mike Snitzer wrote:
> > > On Fri, Sep 15 2017 at 1:29pm -0400,
>
Hi John,
On Mon, Oct 09, 2017 at 01:09:22PM +0100, John Garry wrote:
> On 30/09/2017 11:27, Ming Lei wrote:
> > Hi Jens,
> >
> > In Red Hat internal storage test wrt. blk-mq scheduler, we
> > found that I/O performance is much bad with mq-deadline, especially
>
On Tue, Oct 10, 2017 at 01:24:52PM +0100, John Garry wrote:
> On 10/10/2017 02:46, Ming Lei wrote:
> > > > > > I tested this series for the SAS controller on HiSilicon hip07
> > > > > > platform as I
> > > > > > am interested in enabling
On Tue, Oct 10, 2017 at 11:23:45AM -0700, Omar Sandoval wrote:
> On Mon, Oct 09, 2017 at 07:24:23PM +0800, Ming Lei wrote:
> > SCSI devices use host-wide tagset, and the shared driver tag space is
> > often quite big. Meantime there is also queue depth for each lun(
> >
;bart.vanass...@wdc.com>
Tested-by: Oleksandr Natalenko <oleksa...@natalenko.name>
Tested-by: Tom Nguyen <tom81...@gmail.com>
Tested-by: Paolo Valente <paolo.vale...@linaro.org>
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-mq-debugf
- hold ctx->lock when clearing ctx busy bit as suggested
by Bart
Ming Lei (7):
blk-mq: issue rq directly in blk_mq_request_bypass_insert()
blk-mq-sched: fix scheduler bad performance
sbitmap: introduce __sbitmap_for_each_set()
blk-mq: introduce blk_mq_dequeue_from_ctx()
bl
So that we can reuse __elv_merge() to merge bios
into requests from the sw queue in the following patches.
No functional change.
Tested-by: Oleksandr Natalenko <oleksa...@natalenko.name>
Tested-by: Tom Nguyen <tom81...@gmail.com>
Tested-by: Paolo Valente <paolo.vale...@linaro.org>
S
Prepare for supporting bio merging into the sw queue when no
blk-mq I/O scheduler is in use.
Tested-by: Oleksandr Natalenko <oleksa...@natalenko.name>
Tested-by: Tom Nguyen <tom81...@gmail.com>
Tested-by: Paolo Valente <paolo.vale...@linaro.org>
Signed-off-by: Ming Lei <ming@redha
dr Natalenko <oleksa...@natalenko.name>
Tested-by: Tom Nguyen <tom81...@gmail.com>
Tested-by: Paolo Valente <paolo.vale...@linaro.org>
Signed-off-by: Ming Lei <ming@redhat.com>
---
block/blk-mq-sched.c | 8 +---
block/blk-mq-sched.h | 11 +++
2 files ch
During requeue, the block layer won't change the request any
more (e.g. no merging), so we can cache ti->clone and
let .clone_and_map_rq check whether the cache can be hit.
Signed-off-by: Ming Lei <ming@redhat.com>
---
drivers/md/dm-mpath.c | 31 ---
drivers/m