in blk_queue_split() for capturing this issue.
Cc: Vitaly Kuznetsov
Cc: Dave Chinner
Cc: Linux FS Devel
Cc: Darrick J. Wong
Cc: x...@vger.kernel.org
Cc: Dave Chinner
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Matthew Wilcox
Signed-off-by: Ming Lei
---
block/blk-merge.c | 2 ++
1 file changed, 2 insertions(+)
Cc: Dave Chinner
Cc: Linux FS Devel
Cc: Darrick J. Wong
Cc: x...@vger.kernel.org
Cc: Dave Chinner
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Matthew Wilcox
Signed-off-by: Ming Lei
---
fs/xfs/xfs_buf.c | 28 +---
fs/xfs/xfs_super.c | 13 -
2 files changed
uld be done via buddy directly.
Cc: Vitaly Kuznetsov
Cc: Dave Chinner
Cc: Linux FS Devel
Cc: Darrick J. Wong
Cc: x...@vger.kernel.org
Cc: Dave Chinner
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Matthew Wilcox
Signed-off-by: Ming Lei
---
block/Makefile | 3 +-
block/blk-cor
On Fri, Oct 12, 2018 at 03:53:10PM +0800, Ming Lei wrote:
> blk_queue_split() does respect this limit via bio splitting, so no
> need to do that in blkdev_issue_discard(), then we can align to
> normal bio submit(bio_add_page() & submit_bio()).
>
> More importantly, this pa
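The manual splitting blkdev_issue_discard() no longer needs can be modeled in plain C. This is a hypothetical userspace sketch, not the kernel code: it clamps one chunk of a discard range to a maximum sector count and keeps all but the final chunk aligned to the discard granularity, which is the work blk_queue_split() now does on the whole bio.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical userspace sketch (not kernel code) of the per-bio
 * chunking that blkdev_issue_discard() used to do by hand: clamp
 * each discard chunk to the queue's max discard sectors, aligned
 * down to the discard granularity.  blk_queue_split() now performs
 * this split inside the block layer, so callers can submit one
 * large bio via bio_add_page() & submit_bio().
 */
static uint64_t next_discard_chunk(uint64_t nr_sects,
                                   uint64_t granularity,
                                   uint64_t max_sectors)
{
    uint64_t n = nr_sects < max_sectors ? nr_sects : max_sectors;

    /* Align all but the final chunk to the discard granularity. */
    if (n < nr_sects && n > granularity)
        n -= n % granularity;
    return n;
}
```

For example, with a 1000-sector range, granularity 8, and a 256-sector limit, the first chunk is 256 sectors; a 300-sector range with granularity 7 yields a 252-sector first chunk.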
> +struct request_queue *blk_mq_init_sq_queue(struct blk_mq_tag_set *set,
> + const struct blk_mq_ops *ops,
> + unsigned int queue_depth,
> + unsigned int set_flags);
> int blk_mq_register_dev(struct device *, struct request_queue *);
> void blk_mq_unregister_dev(struct device *, struct request_queue *);
>
> --
> 2.17.1
>
> --
> Jens Axboe
>
Reviewed-by: Ming Lei
--
Ming
On Thu, Oct 11, 2018 at 10:58:53AM -0600, Jens Axboe wrote:
> Convert from the old request_fn style driver to blk-mq.
>
> Cc: David Miller
> Signed-off-by: Jens Axboe
> ---
> drivers/block/sunvdc.c | 161 -
> 1 file changed, 110 insertions(+), 51 deletions(-)
On Thu, Oct 11, 2018 at 10:58:59AM -0600, Jens Axboe wrote:
> Just a straight forward conversion. The retry handling could
> potentially be done by blk-mq as well, but that's for another
> day.
>
> Cc: Jeff Dike
> Signed-off-by: Jens Axboe
> ---
> arch/um/drivers/ubd_kern.c | 154
Cc: Christoph Hellwig
Cc: Xiao Ni
Signed-off-by: Ming Lei
---
block/blk-lib.c | 28 ++--
1 file changed, 2 insertions(+), 26 deletions(-)
diff --git a/block/blk-lib.c b/block/blk-lib.c
index d1b9dd03da25..bbd44666f2b5 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -29,9 +29,7 @@ in
On Mon, Oct 08, 2018 at 01:59:05PM +0800, Dongli Zhang wrote:
> I can reproduce with qemu:
>
> # ls /sys/block/nvme*n1/mq/*/cpu_list
> /sys/block/nvme0n1/mq/0/cpu_list
> /sys/block/nvme0n1/mq/1/cpu_list
> /sys/block/nvme0n1/mq/2/cpu_list
> /sys/block/nvme0n1/mq/3/cpu_list
>
On Fri, Sep 28, 2018 at 04:42:20PM +0800, Ming Lei wrote:
> Lot of controllers may have only one irq vector for completing IO
> request. And usually affinity of the only irq vector is all possible
> CPUs, however, on most of ARCH, there may be only one specific CPU
> for handling th
=randread --blocksize=4k
Cc: Dongli Zhang
Cc: Zach Marano
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Jianchao Wang
Signed-off-by: Ming Lei
---
block/blk-mq.c | 14 ++
block/blk-softirq.c | 5 ++---
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/block/blk-mq.c b
On Thu, Sep 27, 2018 at 11:30:19AM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 09/27/2018 12:08 AM, Ming Lei wrote:
> > Lot of controllers may have only one irq vector for completing IO
> > request. And usually affinity of the only irq vector is all possible
> > CPU
Hi Dongli,
On Thu, Sep 27, 2018 at 10:00:28AM +0800, Dongli Zhang wrote:
> Hi Ming,
>
> On 09/27/2018 12:08 AM, Ming Lei wrote:
> > Lot of controllers may have only one irq vector for completing IO
> > request. And usually affinity of the only irq vector is all possi
=randread --blocksize=4k
Cc: Zach Marano
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Jianchao Wang
Signed-off-by: Ming Lei
---
block/blk-mq.c | 14 ++
block/blk-softirq.c | 7 +--
2 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
/scsi/sr.c | 1 +
> include/linux/blk-pm.h | 24 +++
> include/linux/blkdev.h | 37 ++---
> include/linux/percpu-refcount.h | 1 +
> lib/percpu-refcount.c | 28 +++-
> 16 files changed, 401 insertions(+), 298 deletions(-)
> create mode 100644 block/blk-pm.c
> create mode 100644 block/blk-pm.h
> create mode 100644 include/linux/blk-pm.h
>
> --
> 2.19.0.444.g18242da7ef-goog
>
Looks fine,
Reviewed-by: Ming Lei
thanks,
Ming
e block headers.
>
> Change since V1:
> - dropped the Xen related changed which are moved into a new series
>to be sent after this one
Looks fine:
Reviewed-by: Ming Lei
Thanks,
Ming
On Thu, Sep 20, 2018 at 08:23:10AM -0700, Bart Van Assche wrote:
> On Thu, 2018-09-20 at 11:48 +0800, Ming Lei wrote:
> > On Wed, Sep 19, 2018 at 03:45:29PM -0700, Bart Van Assche wrote:
> > > + ret = -EBUSY;
> > > + if (blk_requests_in_flight(q) == 0) {
> > >
quest finishes instead of only if the queue depth drops to zero.
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: Ming Lei
> Cc: Jianchao Wang
> Cc: Hannes Reinecke
> Cc: Johannes Thumshirn
> Cc: Alan Stern
> ---
> block/blk
On Wed, Sep 19, 2018 at 02:39:35PM -0700, Bart Van Assche wrote:
> On Wed, 2018-09-19 at 12:05 +0800, Ming Lei wrote:
> > Looks this patch may introduce the following race between queue
> > freeze and
>
as increased the q_usage_counter request queue
> member. This change is needed for a later patch that will make request
> allocation block while the queue status is not RPM_ACTIVE.
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: Ming Lei
> Cc: Jianchao
On Tue, Sep 18, 2018 at 09:17:12AM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 09/17/2018 08:07 PM, Ming Lei wrote:
> >>> This way will delay runtime pm or system suspend until the queue is
> >>> unfrozen,
> >>> but it isn't reasonable.
> >>
Hi,
On Mon, Sep 17, 2018 at 10:25:54AM +0800, jianchao.wang wrote:
> Hi Ming
>
> Thanks for your kindly response.
>
> On 09/16/2018 09:09 PM, Ming Lei wrote:
> > On Fri, Sep 14, 2018 at 03:27:44PM +0800, jianchao.wang wrote:
> >> Hi Ming
> >>
>
On Mon, Sep 17, 2018 at 08:34:09AM +0200, Hannes Reinecke wrote:
> On 09/13/2018 02:15 PM, Ming Lei wrote:
> > Hi,
> >
> > This patchset introduces per-host admin request queue for submitting
> > admin request only, and uses this approach to implement both SCSI
> &
On Fri, Sep 14, 2018 at 03:27:44PM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 09/13/2018 08:15 PM, Ming Lei wrote:
> > This patchset introduces per-host admin request queue for submitting
> > admin request only, and uses this approach to implement both SCSI
> > qu
erate over scheduler tags instead of driver tags for counting allocated requests (17/17)
Ming Lei (17):
blk-mq: allow to pass default queue flags for creating & initializing queue
blk-mq: convert BLK_MQ_F_NO_SCHED into per-queue flag
block: rename QUEUE_FLAG_NO_SCHED as QU
.
Suggested-by: Kent Overstreet
Cc: Kent Overstreet
Cc: Dmitry Monakhov
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Signed-off-by: Ming Lei
---
block/bio-integrity.c | 10 +++---
block/bio.c | 1 -
include/linux/bio.h | 22 --
include/linux/bvec.h | 3 ---
4 files
On Tue, Aug 28, 2018 at 07:33:25PM -0400, Kent Overstreet wrote:
> I just came across bi_done and your patch that added it -
> f9df1cd99e bio: add bvec_iter rewind API
>
> I invite you to think through what happens to a bio that gets split by
> something
> further down the stack, and then
On Mon, Aug 27, 2018 at 03:25:50PM +0800, jianchao.wang wrote:
>
>
> On 08/27/2018 03:00 PM, Ming Lei wrote:
> > On Mon, Aug 27, 2018 at 01:56:39PM +0800, jianchao.wang wrote:
> >> Hi Ming
> >>
> >> Currently, blk_mq_update_dispatch
On Mon, Aug 27, 2018 at 01:56:39PM +0800, jianchao.wang wrote:
> Hi Ming
>
> Currently, blk_mq_update_dispatch_busy is hooked in blk_mq_dispatch_rq_list
> and __blk_mq_issue_directly. blk_mq_update_dispatch_busy could be invoked on
> multiple
> cpus concurrently. But there is not any protection
On Sat, Aug 25, 2018 at 07:18:43AM -0500, Steve Wise wrote:
> > > I guess this way still can't fix the request allocation crash issue
> > > triggered by using blk_mq_alloc_request_hctx(), in which one hw queue
> > may
> > > not be mapped from any online CPU.
> >
> > Not really. I guess we will
On Thu, Aug 23, 2018 at 01:51:05AM +0100, Ben Hutchings wrote:
> On Thu, 2018-08-23 at 06:02 +0800, Ming Lei wrote:
> > On Wed, Aug 22, 2018 at 08:22:00PM +0100, Ben Hutchings wrote:
> > > On Mon, 2018-08-20 at 11:04 +0200, Ricardo Ribalda Delgado wrote:
> > > > Hel
On Wed, Aug 22, 2018 at 08:22:00PM +0100, Ben Hutchings wrote:
> On Mon, 2018-08-20 at 11:04 +0200, Ricardo Ribalda Delgado wrote:
> > Hello Ming
> > On Mon, Aug 20, 2018 at 10:30 AM Ming Lei wrote:
> [...]
> > > One problem found from your iostat log is that looks
On Wed, Aug 22, 2018 at 12:33:05PM +0200, Jan Kara wrote:
> On Wed 22-08-18 10:02:49, Martin Wilck wrote:
> > On Mon, 2018-07-30 at 20:37 +0800, Ming Lei wrote:
> > > On Wed, Jul 25, 2018 at 11:15:09PM +0200, Martin Wilck wrote:
> > > >
> > > > +/**
>
On Wed, Aug 22, 2018 at 09:06:40AM +0300, Jarkko Nikula wrote:
> On 08/21/2018 04:57 PM, Ming Lei wrote:
> > On Tue, Aug 21, 2018 at 04:45:41PM +0300, Jarkko Nikula wrote:
> > > On 08/21/2018 04:03 PM, Adrian Hunter wrote:
> > > > On 21/08/18 15:37, Jar
On Tue, Aug 21, 2018 at 04:45:41PM +0300, Jarkko Nikula wrote:
> On 08/21/2018 04:03 PM, Adrian Hunter wrote:
> > On 21/08/18 15:37, Jarkko Nikula wrote:
> > > Hi
> > >
> > > I bisected some kind of SDHCI regression to commit 6ce3dd6eec11 ("blk-mq:
> > > issue directly if hw queue isn't busy in
On Mon, Aug 20, 2018 at 01:54:20PM -0700, Sagi Grimberg wrote:
> nvme-rdma attempts to map queues based on irq vector affinity.
> However, for some devices, completion vector irq affinity is
> configurable by the user which can break the existing assumption
> that irq vectors are optimally
On Mon, Aug 20, 2018 at 11:04:08AM +0200, Ricardo Ribalda Delgado wrote:
> Hello Ming
> On Mon, Aug 20, 2018 at 10:30 AM Ming Lei wrote:
> >
> > On Mon, Aug 20, 2018 at 09:39:45AM +0200, Ricardo Ribalda Delgado wrote:
> > > Some measurements:
> > >
> >
On Mon, Aug 20, 2018 at 09:39:45AM +0200, Ricardo Ribalda Delgado wrote:
> Some measurements:
>
> Please note that even when iostat shows 0.0 the LED on the device was
> blinking as if there was some activity going on.
>
> Thanks!
>
> KERNEL:
> Linux neopili 4.17.0-1-amd64 #1 SMP Debian
on't change the default IO scheduler to none, which won't work well for slow disks such as non-SSDs.
Thanks,
Ming Lei
"block: remove external dependency on wbt_flags")
Cc: Josef Bacik
Reported-by: Ming Lei
Signed-off-by: Ming Lei
---
block/blk-wbt.c | 6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 1d94a20374fc..bb93c7c2b182 100644
---
On Fri, Aug 10, 2018 at 02:09:01AM +0000, Felipe Franciosi wrote:
> Hi Ming (and all),
>
> Your series "scsi: virtio_scsi: fix IO hang caused by irq vector automatic
> affinity" which forces virtio-scsi to use blk-mq fixes an issue introduced by
> 84676c1f. We noticed that this bug also exists
On Thu, Aug 09, 2018 at 12:41:39PM -0700, Bart Van Assche wrote:
> Process all requests in state SDEV_CREATED instead of only RQF_DV
> requests. This does not change the behavior of the SCSI core because
> the SCSI device state is modified into another state before SCSI
> devices become visible in
On Wed, Aug 08, 2018 at 05:28:43PM +0000, Bart Van Assche wrote:
> On Wed, 2018-08-08 at 14:43 +0800, jianchao.wang wrote:
> >
> > On 08/08/2018 02:11 PM, jianchao.wang wrote:
> > > Hi Bart
> > >
> > > On 08/08/2018 06:51 AM, Bart Van Assche wrote:
> > > > @@ -391,6 +393,9 @@ static struct
ting.
> Instead of maintaining the q->nr_pending counter, rely on
> q->q_usage_counter. Call pm_runtime_mark_last_busy() every time a
> request finishes instead of only if the queue depth drops to zero.
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: M
On Tue, Aug 07, 2018 at 03:51:25PM -0700, Bart Van Assche wrote:
> The RQF_PREEMPT flag is used for three purposes:
> - In the SCSI core, for making sure that power management requests
> are executed if a device is in the "quiesced" state.
> - For domain validation by SCSI drivers that use the
On Mon, Aug 6, 2018 at 11:20 PM, Bart Van Assche wrote:
> On Sat, 2018-08-04 at 18:01 +0800, Ming Lei wrote:
>> On Sat, Aug 4, 2018 at 8:03 AM, Bart Van Assche
>> wrote:
>> >
>> > diff --git a/block/blk-pm.c b/block/blk-pm.c
>> > index 2a4632d0be4b..070
On Tue, Jul 31, 2018 at 6:49 PM, Ming Lei wrote:
> Hi,
>
> This patchset fixes one issue related with physical segment computation,
> which is found by Mike. In case of dm-rq, the warning of
> 'blk_cloned_rq_check_limits:
> over max segments limit' can be triggered easily.
>
toph Hellwig
> Cc: Jianchao Wang
> Cc: Ming Lei
> Cc: Johannes Thumshirn
> Cc: Alan Stern
> ---
> block/blk-core.c | 5 +
> block/blk-mq.c | 3 +++
> block/blk-pm.c | 44 ++
> include/linux/blk-pm.h | 6 +++
only if the queue depth drops to zero.
>
> Signed-off-by: Bart Van Assche
> Cc: Martin K. Petersen
> Cc: Christoph Hellwig
> Cc: Jianchao Wang
> Cc: Ming Lei
> Cc: Alan Stern
> Cc: Johannes Thumshirn
> ---
> block/blk-core.c | 37
ned-off-by: Bart Van Assche
> Cc: Martin K. Petersen
> Cc: Christoph Hellwig
> Cc: Jianchao Wang
> Cc: Ming Lei
> Cc: Alan Stern
> Cc: Johannes Thumshirn
> ---
> block/blk-core.c| 9 +
> drivers/scsi/scsi_lib.c | 28
On Fri, Aug 03, 2018 at 12:08:54AM +0000, Bart Van Assche wrote:
> On Fri, 2018-08-03 at 07:53 +0800, Ming Lei wrote:
> > blk_pm_add_request() calls pm_request_resume() for waking up device, but
> > it is wrong because it is async request, which can't guarantee device
> >
has finished.
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: Jianchao Wang
> Cc: Ming Lei
> Cc: Alan Stern
> Cc: Johannes Thumshirn
> ---
> block/blk-mq-sched.c | 13 +++--
> block/blk-mq.c | 8
> 2 files changed, 19
he
> Cc: Martin K. Petersen
> Cc: Christoph Hellwig
> Cc: Jianchao Wang
> Cc: Ming Lei
> Cc: Alan Stern
> Cc: Johannes Thumshirn
> ---
> block/blk-core.c | 47 +++
> block/blk-mq-debugfs.c| 1 -
> block/blk-pm.c
On Thu, Aug 02, 2018 at 01:47:29PM +0300, Adrian Hunter wrote:
> On 02/08/18 13:33, Ming Lei wrote:
> > On Thu, Aug 02, 2018 at 01:09:31PM +0300, Adrian Hunter wrote:
> >> On 31/07/18 19:25, Ming Lei wrote:
> >>> Hi Peter,
> >>>
> >>> On T
On Thu, Aug 02, 2018 at 01:09:31PM +0300, Adrian Hunter wrote:
> On 31/07/18 19:25, Ming Lei wrote:
> > Hi Peter,
> >
> > On Tue, Jul 31, 2018 at 08:47:45AM -0400, Peter Geis wrote:
> >> Good Morning,
> >>
> >> On 07/30/2018 09:38 PM, Mi
Cc: "Ewan D. Milne"
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Omar Sandoval
Signed-off-by: Ming Lei
---
block/blk-mq-tag.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 09b2ee6694fb..c43b3398d7b4 100644
--- a
On Wed, Aug 01, 2018 at 04:54:02PM +0200, Christoph Hellwig wrote:
> We still hit __blk_recalc_rq_segments at least once per submission,
> don't we? Either rather wait with this until we have multi-page bvecs.
No.
Please see blk_queue_split(), where the physical segment number is
always
On Tue, Jul 31, 2018 at 01:51:22PM -0400, Peter Geis wrote:
>
>
> On 07/31/2018 12:25 PM, Ming Lei wrote:
> > Hi Peter,
> >
> > On Tue, Jul 31, 2018 at 08:47:45AM -0400, Peter Geis wrote:
> > > Good Morning,
> > >
> > > On
Hi Peter,
On Tue, Jul 31, 2018 at 08:47:45AM -0400, Peter Geis wrote:
> Good Morning,
>
> On 07/30/2018 09:38 PM, Ming Lei wrote:
> > Hi Peter,
> >
> > Thanks for collecting the log.
> >
> > On Mon, Jul 30, 2018 at 02:55:42PM -0400, Peter Geis wrote:
>
er
Cc: Kent Overstreet
Signed-off-by: Ming Lei
---
block/blk-merge.c | 8 +---
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index aaec38cc37b8..4a16d4f929da 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -308,13 +308,7 @@ void
se to keep QUEUE_FLAG_NO_SG_MERGE any more.
Ming Lei (3):
block: don't use bio->bi_vcnt to figure out segment number
block: kill QUEUE_FLAG_NO_SG_MERGE
block: kill BLK_MQ_F_SG_MERGE
block/blk-merge.c| 39 +++
block/blk-mq-debugfs.c | 2
QUEUE_FLAG_NO_SG_MERGE has been killed, so kill BLK_MQ_F_SG_MERGE too.
Cc: Christoph Hellwig
Cc: Mike Snitzer
Cc: Kent Overstreet
Signed-off-by: Ming Lei
---
block/blk-mq-debugfs.c | 1 -
drivers/block/loop.c | 2 +-
drivers/block/nbd.c | 2 +-
drivers/block/rbd.c
ig problem if the flag is killed, since dm-rq is the only user.
Also, multi-page bvecs will come soon, and each bvec may become one segment at that time, so QUEUE_FLAG_NO_SG_MERGE doesn't make sense any more.
Cc: Christoph Hellwig
Cc: Mike Snitzer
Cc: Kent Overstreet
Signed-off-by: Ming Lei
---
block/b
Hi Peter,
Thanks for collecting the log.
On Mon, Jul 30, 2018 at 02:55:42PM -0400, Peter Geis wrote:
>
>
> On 07/28/2018 09:37 AM, Ming Lei wrote:
...
> [ 10.887209] systemd--112 0.n.1 2411122us : blk_mq_make_request: make
> rq -1
> [ 10.890274] kworker/-98
On Wed, Jul 25, 2018 at 11:15:09PM +0200, Martin Wilck wrote:
> bio_iov_iter_get_pages() currently only adds pages for the
> next non-zero segment from the iov_iter to the bio. That's
> suboptimal for callers, which typically try to pin as many
> pages as fit into the bio. This patch converts the
ables runtime PM for blk-mq by actually calling pm_runtime_disable(), and fixes various PM-related kernel crashes.
Cc: Christoph Hellwig
Cc: Patrick Steinhardt
Cc: Bart Van Assche
Cc: Tomas Janousek
Cc: Przemek Socha
Cc: Alan Stern
Cc:
Signed-off-by: Ming Lei
---
block/blk-core.c | 6 --
1 fi
On Wed, Jul 25, 2018 at 02:33:14PM -0700, Omar Sandoval wrote:
> On Wed, Jul 04, 2018 at 01:29:56PM +0800, Ming Lei wrote:
> > SCSI may have lots of channels, targets or LUNs, so it may
> > take long time for creating and cleaning up queues.
> >
> > So introduce block/0
urrent for-next? This should fix it:
> >
> > commit 8824f62246bef288173a6624a363352f0d4d3b09
> > Author: Ming Lei
> > Date: Sun Jul 22 14:10:15 2018 +0800
> >
> > blk-mq: fail the request in case issue failure
> >
>
> That commit made the cu
On Fri, Jul 27, 2018 at 11:47 PM, Josef Bacik wrote:
> On Sun, Jul 22, 2018 at 03:28:05PM +0800, Ming Lei wrote:
>> On Sun, Jul 22, 2018 at 02:15:38AM +0000, Josef Bacik wrote:
>> > Yup I sent a patch for this on Thursday, sorry about that,
>> >
>>
>> I just
hw queue isn't busy in case of 'none'.
>>>
>>> Can you try my current for-next? This should fix it:
>>>
>>> commit 8824f62246bef288173a6624a363352f0d4d3b09
>>> Author: Ming Lei
>>> Date: Sun Jul 22 14:10:15 2018 +0800
>>>
>>> blk-mq: fail th
On Sun, Jul 22, 2018 at 02:15:38AM +0000, Josef Bacik wrote:
> Yup I sent a patch for this on Thursday, sorry about that,
>
I just applied the 'blk-rq-qos: make depth comparisons unsigned' patch, and it looks like the same IO hang can still be triggered.
Thanks,
Ming
Fixes: 6ce3dd6eec11 ("blk-mq: issue directly if hw queue isn't busy in case of 'none'")
Cc: Kashyap Desai
Cc: Laurence Oberman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Cc: Kashyap Desai
Cc: kernel test robot
Cc: LKP
Reported-by: kernel test robot
Signed-off-by: Ming Lei
---
8 R11: 0246 R12: 7ff85f8f1140
[ 861.358045] R13: 0001 R14: 00000001 R15: 01cbb2b0
[ 869.362143] end test sanity/006: (HANG, -1)
[
--
Ming Lei
On Fri, Jul 20, 2018 at 06:54:48PM +0200, Martin Wilck wrote:
> On Sat, 2018-07-21 at 00:16 +0800, Ming Lei wrote:
> > On Fri, Jul 20, 2018 at 03:05:51PM +0200, Martin Wilck wrote:
> > > bio_iov_iter_get_pages() only adds pages for the next non-zero
> > > segment from th
On Fri, Jul 20, 2018 at 03:05:51PM +0200, Martin Wilck wrote:
> bio_iov_iter_get_pages() only adds pages for the next non-zero
> segment from the iov_iter to the bio. Some callers prefer to
> obtain as many pages as would fit into the bio, with proper
> rollback in case of failure. Add
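The retry loop described above can be modeled with a toy userspace function. All names and types here are illustrative assumptions, not the kernel API: each loop iteration stands for one bio_iov_iter_get_pages()-style call that pins pages for a single segment, and the caller keeps going until the bio is full or the iterator is drained.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model (illustrative names, not the kernel API) of the loop
 * the patch adds around bio_iov_iter_get_pages(): the helper pins
 * pages for only the next non-empty segment, so a caller that wants
 * a full bio must keep calling it until the bio has no room left or
 * the iterator is drained.
 */
static size_t fill_bio(const size_t *seg_pages, size_t nsegs,
                       size_t bio_capacity)
{
    size_t pinned = 0;

    for (size_t i = 0; i < nsegs && pinned < bio_capacity; i++) {
        /* One get-pages call pins at most one segment's pages. */
        size_t n = seg_pages[i];

        if (n > bio_capacity - pinned)
            n = bio_capacity - pinned;
        pinned += n;
    }
    return pinned;
}
```

With segments of 4, 3, and 5 pages and room for 10 pages, the single-call behavior would pin only 4 pages, while the loop fills the bio completely.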
On Thu, Jul 19, 2018 at 11:01:58PM +0200, Martin Wilck wrote:
> When bio_iov_iter_get_pages() is called from __blkdev_direct_IO_simple(),
> we already know that the content of the input iov_iter fits into a single
> bio, so we expect iov_iter_count(iter) to drop to 0. But in a single
>
put_page(bvec->bv_page);
> }
>
> - if (vecs != inline_vecs)
> - kfree(vecs);
> -
> if (unlikely(bio.bi_status))
> ret = blk_status_to_errno(bio.bi_status);
>
> +out:
> + if (vecs != inline_vecs)
> +
On Thu, Jul 19, 2018 at 03:54:53PM +0000, Bart Van Assche wrote:
> On Thu, 2018-07-19 at 06:45 +0800, Ming Lei wrote:
> > So once blk_freeze_queue_start() returns, percpu_ref_is_zero() won't
> > return true only until the rcu confirmation is done. That means this
> > approa
On Thu, Jul 19, 2018 at 01:56:16PM +0200, Jan Kara wrote:
> On Thu 19-07-18 19:04:46, Ming Lei wrote:
> > On Thu, Jul 19, 2018 at 11:39:18AM +0200, Martin Wilck wrote:
> > > bio_iov_iter_get_pages() returns only pages for a single non-empty
> > > segment of the input
On Thu, Jul 19, 2018 at 11:39:18AM +0200, Martin Wilck wrote:
> bio_iov_iter_get_pages() returns only pages for a single non-empty
> segment of the input iov_iter's iovec. This may be much less than the number
> of pages __blkdev_direct_IO_simple() is supposed to process. Call
>
On Thu, Jul 19, 2018 at 12:37:13PM +0200, Jan Kara wrote:
> On Thu 19-07-18 18:21:23, Ming Lei wrote:
> > On Thu, Jul 19, 2018 at 11:39:18AM +0200, Martin Wilck wrote:
> > > bio_iov_iter_get_pages() returns only pages for a single non-empty
> > > segment of the input
On Thu, Jul 19, 2018 at 11:39:18AM +0200, Martin Wilck wrote:
> bio_iov_iter_get_pages() returns only pages for a single non-empty
> segment of the input iov_iter's iovec. This may be much less than the number
> of pages __blkdev_direct_IO_simple() is supposed to process. Call
In
, which is shifted by the value of bio->bi_vcnt at function
> invocation, the correct index is (nr_pages - 1).
>
> V2: improved readability following suggestions from Ming Lei.
>
> Fixes: 2cefe4dbaadf ("block: add bio_iov_iter_get_pages()")
> Signed-off-by: Martin Wil
On Wed, Jul 18, 2018 at 03:45:15PM +0000, Bart Van Assche wrote:
> On Wed, 2018-07-18 at 20:16 +0800, Ming Lei wrote:
> > On Wed, Jul 18, 2018 at 7:49 AM, Bart Van Assche
> > wrote:
> > > @@ -3801,8 +3778,11 @@ int blk_pre_runtime_suspend(struct request_queue
> > &
On Wed, Jul 18, 2018 at 09:32:12AM +0200, Martin Wilck wrote:
> On Wed, 2018-07-18 at 10:48 +0800, Ming Lei wrote:
> > On Wed, Jul 18, 2018 at 02:07:28AM +0200, Martin Wilck wrote:
> > >
> > > From b75adc856119346e02126cf8975755300f2d9b7f Mon Sep 17 00:00:00
> >
On Wed, Jul 18, 2018 at 02:07:28AM +0200, Martin Wilck wrote:
> On Mon, 2018-07-16 at 19:45 +0800, Ming Lei wrote:
> > On Sat, Jul 14, 2018 at 6:29 AM, Martin Wilck
> > wrote:
> > > Hi Ming & Jens,
> > >
> > > On Fri, 2018-07-13 at 12:54 -0600, Jens
On Sat, Jul 14, 2018 at 6:29 AM, Martin Wilck wrote:
> Hi Ming & Jens,
>
> On Fri, 2018-07-13 at 12:54 -0600, Jens Axboe wrote:
>> On 7/12/18 5:29 PM, Ming Lei wrote:
>> >
>> > Maybe you can try the following patch from Christoph to see if it
>>
_pages <= BIO_MAX_PAGES' ?
> It's not that we can handle it in __blkdev_direct_IO() ...
>
> Thanks for any clarification.
Maybe you can try the following patch from Christoph to see if it makes a
difference:
https://marc.info/?l=linux-kernel&m=153013977816825&w=2
thanks,
Ming Lei
Desai
Cc: Laurence Oberman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Reported-by: Kashyap Desai
Tested-by: Kashyap Desai
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 13 -
block/blk-mq.c | 23 ++-
block/blk
SCSI may have lots of channels, targets, or LUNs, so it may take
a long time to create and clean up queues.
So introduce block/023, which uses null_blk to run this test in
both blk-mq and legacy mode, then compares the two and checks the
difference.
Signed-off-by: Ming Lei
---
tests/block/023
is busy.
Fixes: b347689ffbca ("blk-mq-sched: improve dispatching from sw queue")
Cc: Kashyap Desai
Cc: Laurence Oberman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Reported-by: Kashyap Desai
Tested-by: Kashyap Desai
Signed-off-by: Ming Lei
---
On Tue, Jul 03, 2018 at 08:13:45AM -0600, Jens Axboe wrote:
> On 7/3/18 8:11 AM, Ming Lei wrote:
> > On Tue, Jul 03, 2018 at 08:03:23AM -0600, Jens Axboe wrote:
> >> On 7/3/18 2:34 AM, Ming Lei wrote:
> >>> It won't be efficient to dequeue request one by one from sw
On Tue, Jul 03, 2018 at 08:03:23AM -0600, Jens Axboe wrote:
> On 7/3/18 2:34 AM, Ming Lei wrote:
> > It won't be efficient to dequeue request one by one from sw queue,
> > but we have to do that when queue is busy for better merge performance.
> >
> > This patch tak
is busy.
Fixes: b347689ffbca ("blk-mq-sched: improve dispatching from sw queue")
Cc: Kashyap Desai
Cc: Laurence Oberman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Reported-by: Kashyap Desai
Tested-by: Kashyap Desai
Signed-off-by: Ming Lei
---
On Mon, Jul 02, 2018 at 11:30:17AM -0600, Jens Axboe wrote:
> On 7/2/18 3:36 AM, Ming Lei wrote:
> > It won't be efficient to dequeue request one by one from sw queue,
> > but we have to do that when queue is busy for better merge performance.
> >
> > This patch takes
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/blk-mq.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 95919268564b..174637d09923 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1544,19 +1544,19 @@ v
Fixes: b347689ffbca ("blk-mq-sched: improve dispatching from sw queue")
Cc: Kashyap Desai
Cc: Laurence Oberman
Cc: Omar Sandoval
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Hannes Reinecke
Reported-by: Kashyap Desai
Signed-off-by: Ming Lei
---
block/blk-mq-debugfs.c | 9 +
block/blk-mq-sche
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 56c493c6cd90..f5745acc2d98 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -339
On Fri, Jun 29, 2018 at 08:58:16AM -0600, Jens Axboe wrote:
> On 6/29/18 2:12 AM, Ming Lei wrote:
> > It won't be efficient to dequeue request one by one from sw queue,
> > but we have to do that when queue is busy for better merge performance.
> >
> > This patch takes
On Fri, Jun 29, 2018 at 10:39:44AM +0200, Christoph Hellwig wrote:
> > +/* update queue busy with EWMA (7/8 * ewma(t) + 1/8 * busy(t + 1)) */
> > +static void blk_mq_update_hctx_busy(struct blk_mq_hw_ctx *hctx, unsigned
> > int busy)
>
> Overly long line. Also busy really is a bool, so I think
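The EWMA update quoted above can be sketched in userspace C. The weight of 8 and the fixed-point "fully busy" increment of 16 are assumptions modeled on the comment's formula; the exact constants in blk_mq_update_dispatch_busy() may differ.

```c
#include <assert.h>

/*
 * Userspace sketch of the dispatch-busy EWMA quoted above:
 *   ewma(t + 1) = 7/8 * ewma(t) + 1/8 * busy(t + 1)
 * in integer fixed point.  The weight of 8 and the "fully busy"
 * increment of 16 are assumptions modeled on the description;
 * blk_mq_update_dispatch_busy() may use different constants.
 */
static unsigned int update_busy_ewma(unsigned int ewma, int busy)
{
    ewma *= 8 - 1;          /* keep 7/8 of the old average ...  */
    if (busy)
        ewma += 16;         /* ... and add 1/8 of a busy sample */
    return ewma / 8;
}
```

Repeated busy samples converge toward 15, while idle samples decay the average back toward 0, so one busy or idle dispatch nudges rather than resets the state.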