We need to iterate over the ctxs starting from any ctx in a round-robin
fashion, so introduce this helper.
Reviewed-by: Omar Sandoval
Cc: Omar Sandoval
Signed-off-by: Ming Lei
---
include/linux/sbitmap.h | 64 -
1 file changed, 47 insertions(+), 17 deletions(-)
On Sat, Oct 14, 2017 at 07:38:29PM +0200, Oleksandr Natalenko wrote:
> Hi.
>
> By any chance, could this be backported to 4.14? I'm confused with "SCSI:
> allow to pass null rq to scsi_prep_state_check()" since it uses refactored
> flags.
>
> ===
> if (req && !(req->rq_flags & RQF_PREEMPT))
> =
On Sat, Oct 14, 2017 at 09:39:21AM -0600, Jens Axboe wrote:
> On 10/14/2017 03:22 AM, Ming Lei wrote:
> > Hi Jens,
> >
> > In Red Hat internal storage test wrt. blk-mq scheduler, we found that I/O
> > performance is much bad with mq-deadline, especially about seque
On Mon, Oct 16, 2017 at 01:30:09PM +0200, Hannes Reinecke wrote:
> On 10/13/2017 07:29 PM, Ming Lei wrote:
> > On Fri, Oct 13, 2017 at 05:08:52PM +, Bart Van Assche wrote:
> >> On Sat, 2017-10-14 at 00:45 +0800, Ming Lei wrote:
> >>> On Fri, Oct 13, 2017 at 04:
If there isn't any outstanding request in this queue, neither blk-mq's
RESTART nor SCSI's built-in RESTART can work, so we have to handle this
case by running the queue after a delay.
Fixes: d04b6d97d0a1 ("scsi: implement .get_budget and .put_budget for blk-mq")
Signed-off-by: Mi
g branch:
https://github.com/ming1/linux/commits/blk_mq_improve_restart_V1
Ming Lei (2):
SCSI: run idle hctx after delay in scsi_mq_get_budget()
blk-mq: don't handle TAG_SHARED in restart
block/blk-mq-sched.c| 78 +++--
drivers/scsi/sc
t-in RESTART is enough to cover itself.
So we don't need to pay special attention to TAG_SHARED wrt. restart.
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 78 +++-
1 file changed, 4 insertions(+), 74 deletions(-)
diff --git a/block/blk-mq-sc
On Tue, Oct 17, 2017 at 01:04:16PM +0800, Ming Lei wrote:
> Hi Jens,
>
> The 1st patch runs idle hctx after a delay in scsi_mq_get_budget(),
> so that we can keep the same behaviour as before, and it can be
> thought of as a fix.
>
> The 2nd patch cleans up RESTART, and removes ha
On Tue, Oct 17, 2017 at 08:38:01AM +0200, Hannes Reinecke wrote:
> On 10/17/2017 03:29 AM, Ming Lei wrote:
> > On Mon, Oct 16, 2017 at 01:30:09PM +0200, Hannes Reinecke wrote:
> >> On 10/13/2017 07:29 PM, Ming Lei wrote:
> >>> On Fri, Oct 13, 2017 at 05:08:52P
On Tue, Oct 17, 2017 at 04:47:23PM +0100, John Garry wrote:
> On 17/10/2017 06:12, Ming Lei wrote:
> > On Tue, Oct 17, 2017 at 01:04:16PM +0800, Ming Lei wrote:
> > > Hi Jens,
> > >
> > > The 1st patch runs idle hctx after a delay in scsi_mq_get_budget(),
>
blk_insert_flush() should only insert the request, since a queue run
always follows it.
In the case of bypassing flush, we don't need to run the queue, because
every blk_insert_flush() is already followed by one queue run.
Signed-off-by: Ming Lei
---
block/blk-flush.c | 2 +-
1 file changed, 1 insertion(+), 1 del
From: Jianchao Wang
When freeing the driver tag of the next rq with an I/O scheduler
configured, it gets the first entry of the list; however, at that
moment the failed rq has already been requeued at the head of the
list, so the rq it gets is the failed rq, not the next rq.
Free the driver tag of the next rq before th
ts of bypassing
flush machinery, just like what the legacy path did.
Signed-off-by: Ming Lei
---
block/blk-flush.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 81bd1a843043..a9773d2075ac 100644
--- a/block/blk-flush.c
+++ b/bloc
st is dispatched to ->dispatch
list directly.
This is a preparation patch for not preallocating a driver tag for the
flush rq and for not treating the flush rq as a special case, just as
the legacy path did.
Signed-off-by: Ming Lei
---
block/blk-mq-sched.c | 29 -
1 file changed,
xt rq before first one is requeued
Ming Lei (6):
blk-flush: don't run queue for requests of bypassing flush
block: pass 'run_queue' to blk_mq_request_bypass_insert
blk-flush: use blk_mq_request_bypass_insert()
blk-mq-sched: decide how to handle flush rq via RQF_FLUSH_SEQ
blk-mq
Block flush needs this function without running the queue, so introduce
the parameter.
Signed-off-by: Ming Lei
---
block/blk-core.c | 2 +-
block/blk-mq.c | 5 +++--
block/blk-mq.h | 2 +-
3 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index
We need this helper to put the driver tag for the flush rq, since we
will not share the tag across the flush request sequence in the
following patch when an I/O scheduler is applied.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 32
block/blk-mq.h | 33
e driver tag before requeueing.
Signed-off-by: Ming Lei
---
block/blk-flush.c| 35 ++-
block/blk-mq-sched.c | 42 +-
block/blk-mq.c | 41 ++---
3 files changed, 37 insertions(+), 81 de
On Mon, Oct 23, 2017 at 06:12:29PM +0200, Roman Penyaev wrote:
> Hi Ming,
>
> On Fri, Oct 20, 2017 at 3:39 PM, Ming Lei wrote:
> > On Wed, Oct 18, 2017 at 12:22:06PM +0200, Roman Pen wrote:
> >> Hi all,
> >>
> >> the patch below fixes queue st
On Thu, Nov 02, 2017 at 11:24:31PM +0800, Ming Lei wrote:
> Hi Jens,
>
> This patchset avoids allocating a driver tag beforehand for the flush rq
> in case of an I/O scheduler; the flush rq then isn't treated specially
> wrt. get/put driver tag, and the code gets cleaned up a lot, such as,
> reo
On Thu, Aug 10, 2017 at 04:26:03AM -0700, Christoph Hellwig wrote:
> I think all this bcache code needs bigger attention. For one
> bio_alloc_pages is only used in bcache, so we should move it in there.
Looks like a good idea.
>
> Second the way bio_alloc_pages is currently written looks potentiall
On Thu, Aug 10, 2017 at 04:28:05AM -0700, Christoph Hellwig wrote:
> > + * The hacking way of using bvec table as page pointer array is safe
> > + * even after multipage bvec is introduced because that space can be
> > + * thought as unused by bio_add_page().
>
> I'm not sure what value this comme
On Thu, Aug 10, 2017 at 04:29:59AM -0700, Christoph Hellwig wrote:
> > +static unsigned int get_bio_pages(struct bio *bio)
> > +{
> > + unsigned i;
> > + struct bio_vec *bv;
> > +
> > + bio_for_each_segment_all(bv, bio, i)
> > + ;
> > +
> > + return i;
> > +}
>
> s/get_bio_pages/
On Thu, Aug 10, 2017 at 05:11:10AM -0700, Christoph Hellwig wrote:
> First: as mentioned in the previous patches I really hate the name
> scheme with the _sp and _mp postfixes.
>
> To be clear and understandable we should always name the versions
> that iterate over segments *segment* and the ones
On Tue, Aug 08, 2017 at 09:32:32AM -0700, Darrick J. Wong wrote:
> On Tue, Aug 08, 2017 at 04:45:40PM +0800, Ming Lei wrote:
>
> Sure would be nice to have a changelog explaining why we're doing this.
>
> > Cc: "Darrick J. Wong"
> > Cc: linux-...@vger
On Thu, Aug 10, 2017 at 05:16:12AM -0700, Christoph Hellwig wrote:
> > struct bio_vec *bv;
> > + struct bvec_iter_all bia;
> > int i;
> >
> > - bio_for_each_segment_all(bv, bio, i) {
> > + bio_for_each_segment_all_sp(bv, bio, i, bia) {
> > struct page *page = bv->bv_page
On Tue, Oct 6, 2015 at 9:43 AM, Stephen Rothwell wrote:
> Hi Jens,
>
> After merging the block tree, today's linux-next build (arm
> multi_v7_defconfig) failed like this:
>
> drivers/block/loop.c: In function 'lo_rw_aio_complete':
> drivers/block/loop.c:474:2: error: too few arguments to function
On Tue, Oct 6, 2015 at 7:23 AM, Dan Williams wrote:
> On Sun, Oct 4, 2015 at 12:52 AM, Ming Lei wrote:
>> On Wed, Sep 30, 2015 at 8:41 AM, Dan Williams
>> wrote:
>>> Allow pmem, and other synchronous/bio-based block drivers, to fallback
>>
>> Just a bit cur
On Sat, Aug 29, 2015 at 9:11 AM, Luis R. Rodriguez wrote:
> On Thu, Aug 27, 2015 at 08:55:13AM +0800, Ming Lei wrote:
>> On Thu, Aug 27, 2015 at 2:07 AM, Linus Torvalds
>> wrote:
>> > On Wed, Aug 26, 2015 at 1:06 AM, Liam Girdwood
>> > wrote:
>> >
On Sat, 29 Aug 2015 10:50:22 +0200
Arend van Spriel wrote:
> On 08/29/2015 09:11 AM, Takashi Iwai wrote:
> > On Sat, 29 Aug 2015 06:09:01 +0200,
> > Ming Lei wrote:
> >>
> >> On Sat, Aug 29, 2015 at 9:11 AM, Luis R. Rodriguez wrote:
> >>> On Thu, Aug
On Sun, Aug 30, 2015 at 4:25 PM, Arend van Spriel wrote:
> On 08/29/2015 12:38 PM, Ming Lei wrote:
>>
>> On Sat, 29 Aug 2015 10:50:22 +0200
>> Arend van Spriel wrote:
>>
>>> On 08/29/2015 09:11 AM, Takashi Iwai wrote:
>>>>
>>>
Hi,
The 1st three patches are optimizations related with bio splitting.
The 4th patch is to mark ctx as pending at batch in flush plug path.
Thanks,
The split bio is already too big to merge, so mark it as NOMERGE.
Signed-off-by: Ming Lei
---
block/blk-merge.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 22293fd..de5716d8 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
Most of the time, flushing the plug list should be the hottest I/O
path, so mark the ctx as pending only after all requests in the list
have been inserted.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index
It isn't necessary to try to merge a bio that is marked as NOMERGE.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 546b3b8..deb5f4c 100644
--- a/block/blk-mq.c
+++ b/block/blk
The number of bio->bi_phys_segments is always obtained during bio
splitting, so it is natural to set it up just after bio splitting;
then we can avoid computing nr_segment again during merge.
Signed-off-by: Ming Lei
---
block/blk-merge.c | 29 ++---
1 file changed,
sulting in imperfect 'in_generic_make_request' accounting that happens
> lazily once the outermost plug completes blk_finish_plug. manifested as
> dm-bufio.c:dm_bufio_prefetch's BUG_ON(dm_bufio_in_request()); hitting.
Looks like this problem is related to the above 'bio_lis
Hi Jeff,
On Thu, Oct 15, 2015 at 11:26 PM, Jeff Moyer wrote:
> Ming Lei writes:
>
>> Most of times, flush plug should be the hottest I/O path,
>> so mark ctx as pending after all requests in the list are
>> inserted.
>
> Hi, Ming,
>
> Did you see some perfo
On Thu, Oct 15, 2015 at 11:21 PM, Jeff Moyer wrote:
> Ming Lei writes:
>
>> It isn't necessary to try to merge the bio which is marked
>> as NOMERGE.
>>
>> Signed-off-by: Ming Lei
>> ---
>> block/blk-mq.c | 5 -
>> 1 file changed, 4 inse
On Thu, Oct 15, 2015 at 4:06 PM, Mike Snitzer wrote:
> On Wed, Oct 14 2015 at 11:27pm -0400,
> Ming Lei wrote:
>
>> On Sat, Oct 10, 2015 at 3:52 AM, Mike Snitzer wrote:
>> >
>> > Turns out that this change:
>> > http://git.kernel.org/cgit/linux/ker
The split bio is already too big to merge, so mark it as NOMERGE.
Reviewed-by: Jeff Moyer
Signed-off-by: Ming Lei
---
block/blk-merge.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 22293fd..de5716d8 100644
--- a/block/blk-merge.c
It isn't necessary to try to merge a bio that is marked as NOMERGE.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 546b3b8..d55d022 100644
--- a/block/blk-mq.c
+++ b/block/blk
Hi,
The 1st four patches are optimizations related with bio splitting.
The 5th patch is one fix for using trace_block_plug().
The 6th patch is to mark ctx as pending at batch in flush plug path.
V1:
- 3/6: don't check bio_mergeable() in blk_mq_attempt_merge()
- 4/6: check bio_me
The number of bio->bi_phys_segments is always obtained during bio
splitting, so it is natural to set it up just after bio splitting;
then we can avoid computing nr_segment again during merge.
Reviewed-by: Jeff Moyer
Signed-off-by: Ming Lei
---
block/blk-merge.c |
Most of the time, flushing the plug list should be the hottest I/O
path, so mark the ctx as pending only after all requests in the list
have been inserted.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index
The trace point traces plug events per request queue rather than per
task, so we should check the request count in the plug list for the
current queue instead of the current task.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a
After bio splitting was introduced, a bio can be split and marked as
NOMERGE because it is too big to merge, so check bio_mergeable()
earlier to avoid trying to merge it unnecessarily.
Signed-off-by: Ming Lei
---
block/elevator.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
On Fri, Oct 16, 2015 at 11:29 PM, Mike Snitzer wrote:
> On Thu, Oct 15 2015 at 11:08pm -0400,
> Ming Lei wrote:
>
>> On Thu, Oct 15, 2015 at 4:06 PM, Mike Snitzer wrote:
>> > On Wed, Oct 14 2015 at 11:27pm -0400,
>> > Ming Lei wrote:
>> >
>>
On Thu, Oct 15, 2015 at 4:47 AM, Mike Snitzer wrote:
> From: Mikulas Patocka
>
> The block layer uses per-process bio list to avoid recursion in
> generic_make_request. When generic_make_request is called recursively,
> the bio is added to current->bio_list and generic_make_request returns
> imm
On Wed, Oct 21, 2015 at 4:03 AM, Mikulas Patocka wrote:
>
>
> On Sun, 18 Oct 2015, Ming Lei wrote:
>
>> On Thu, Oct 15, 2015 at 4:47 AM, Mike Snitzer wrote:
>> > From: Mikulas Patocka
>> >
>> > The block layer uses per-process bio list to avoid
On Thu, Oct 22, 2015 at 5:49 AM, Mikulas Patocka wrote:
>
>
> On Thu, 22 Oct 2015, Ming Lei wrote:
>
>> > Some drivers (dm-snapshot, dm-thin) do acquire a mutex in .make_requests()
>> > for every bio. It wouldn't be practical to convert them to not acquir
On Mon, Oct 19, 2015 at 11:27 PM, Jeff Moyer wrote:
> Ming Lei writes:
>
>> After bio splitting is introduced, one bio can be splitted
>> and it is marked as NOMERGE because it is too fat to be merged,
>> so check bio_mergeable() earlier to avoid to try to merge it
>&g
On Mon, Oct 19, 2015 at 11:38 PM, Jeff Moyer wrote:
> Ming Lei writes:
>
>> The trace point is for tracing plug event of each request
>> queue instead of each task, so we should check the request
>> count in the plug list from current queue instead of
>> current t
On Tue, Oct 20, 2015 at 1:59 AM, Jeff Moyer wrote:
> Ming Lei writes:
>
>> On Mon, Oct 19, 2015 at 11:38 PM, Jeff Moyer wrote:
>>> Ming Lei writes:
>>>
>>>> The trace point is for tracing plug event of each request
>>>> queue instead of e
After bio splitting was introduced, a bio can be split and marked as
NOMERGE because it is too big to merge, so check bio_mergeable()
earlier to avoid trying to merge it unnecessarily.
Signed-off-by: Ming Lei
---
block/elevator.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
It isn't necessary to try to merge a bio that is marked as NOMERGE.
Reviewed-by: Jeff Moyer
Signed-off-by: Ming Lei
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1796b31..edbc877 100644
--- a/block/blk-mq.c
Most of the time, flushing the plug list should be the hottest I/O
path, so mark the ctx as pending only after all requests in the list
have been inserted.
Reviewed-by: Jeff Moyer
Signed-off-by: Ming Lei
---
block/blk-mq.c | 18 +-
1 file changed, 13 insertions(+), 5 deletions(-)
diff --git a/block/blk
The number of bio->bi_phys_segments is always obtained during bio
splitting, so it is natural to set it up just after bio splitting;
then we can avoid computing nr_segment again during merge.
Reviewed-by: Jeff Moyer
Signed-off-by: Ming Lei
---
block/blk-merge.c |
The split bio is already too big to merge, so mark it as NOMERGE.
Reviewed-by: Jeff Moyer
Signed-off-by: Ming Lei
---
block/blk-merge.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 22293fd..de5716d8 100644
--- a/block/blk-merge.c
Hi,
The 1st patch is one fix for automatic flush plug in case of
nomerge queue.
The following four patches are optimizations related with bio splitting.
The 6th patch is one fix for using trace_block_plug().
The 7th patch is to mark ctx as pending at batch in flush plug path.
V2:
- Jef
The trace point traces plug events per request queue rather than per
task, so we should check the request count in the plug list for the
current queue instead of the current task.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a
From: Jeff Moyer
Request queues with merging disabled will not flush the plug list after
BLK_MAX_REQUEST_COUNT requests have been queued, since the code relies
on blk_attempt_plug_merge to compute the request_count. Fix this by
computing the number of queued requests even for nomerge queues.
Si
uquest queue should have been initialized completely; that is,
blk_mq_add_queue_tag_set() should be run in atomic mode of the usage
ref counter.
Not sure how you can observe the ref counter running in percpu mode
during queue initialization.
Care to share what your disk/driver is? I doubt it
On Sun, Oct 4, 2015 at 3:33 AM, Meelis Roos wrote:
>> This is 4.3.0-rc1 on Sun E220R (dual-CPU sparc64). Sometimes it boots,
>> sometimes it fails to boot with looping errors and finally a watchdog
>> timeout. This console log from a failure. Config is below.
>
> I noticed blk-mq related changes i
t same context
> +* need not check that the queue has been frozen (marked dead).
> +*/
> + percpu_ref_get(&q->q_usage_counter);
> +}
>
> void blk_rq_timed_out_timer(unsigned long data);
> unsigned long blk_rq_timeout(unsigned long timeout);
&g
Hello Sumit,
On Tue, Aug 28, 2018 at 12:04:52PM +0530, Sumit Saxena wrote:
> Affinity managed interrupts vs non-managed interrupts
>
> Hi Thomas,
>
> We are working on next generation MegaRAID product where requirement is- to
> allocate additional 16 MSI-x vectors in addition to number of MSI-x
>
> So, how to secure-erase the NVME SSD connected via the JMS583 chip?
You may try 'blkdiscard --secure' and see if you are lucky.
Thanks,
Ming Lei
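For reference, a hedged shell sketch of that suggestion. The device path is hypothetical, and blkdiscard --secure destroys all data on the device, so the command is only echoed here; drop the echo (and run as root) to actually issue the erase.

```shell
DEV=/dev/nvme0n1   # hypothetical device node; double-check before use
# blkdiscard --secure asks the device for a secure discard of the
# whole block device, if the hardware/bridge supports it.
echo blkdiscard --secure "$DEV"
```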
Hi Mike,
On Tue, Jan 23, 2018 at 10:22:04AM +0100, Mike Snitzer wrote:
> On Thu, Jan 18 2018 at 5:20pm -0500,
> Bart Van Assche wrote:
>
> > On Thu, 2018-01-18 at 17:01 -0500, Mike Snitzer wrote:
> > > And yet Laurence cannot reproduce any such lockups with your test...
> >
> > Hmm ... maybe I
On Tue, Jan 23, 2018 at 8:15 PM, Mike Snitzer wrote:
> On Tue, Jan 23 2018 at 5:53am -0500,
> Ming Lei wrote:
>
>> Hi Mike,
>>
>> On Tue, Jan 23, 2018 at 10:22:04AM +0100, Mike Snitzer wrote:
>> > On Thu, Jan 18 2018 at 5:20pm -0500,
>> > Bart Van
On Wed, Jan 24, 2018 at 08:34:14PM -0700, Jens Axboe wrote:
> On 1/24/18 7:46 PM, Jia-Ju Bai wrote:
> > The function ioc_create_icq here is not called in atomic context.
> > Thus GFP_ATOMIC is not necessary, and it can be replaced with GFP_KERNEL.
> >
> > This is found by a static analysis tool na
On Wed, Dec 20, 2017 at 04:47:21PM +0100, Christian Borntraeger wrote:
> On 12/18/2017 02:56 PM, Stefan Haberland wrote:
> > On 07.12.2017 00:29, Christoph Hellwig wrote:
> >> On Wed, Dec 06, 2017 at 01:25:11PM +0100, Christian Borntraeger wrote:
> >> t > commit 11b2025c3326f7096ceb588c3117c7883850
On Thu, Jan 11, 2018 at 06:46:54PM +0100, Christoph Hellwig wrote:
> Thanks for looking into this Ming, I had missed it in the my current
> work overload. Can you send the updated series to Jens?
OK, I will post it out soon.
Thanks,
Ming
Hi,
These two patches support physical CPU hotplug, so that blk-mq can
scale well when a new physical CPU is added or removed; this use case
is common in the VM world.
Also this patchset fixes the following warning reported by Christian
Borntraeger:
https://marc.info/?l=linux-block
ne CPUs for schedule.
Reported-by: Christian Borntraeger
Tested-by: Christian Borntraeger
Tested-by: Stefan Haberland
Cc: Thomas Gleixner
Signed-off-by: Christoph Hellwig
(merged the three patches into one because any single one may not work
on its own, and fixed the selection of online CPUs for the scheduler)
Signed-off-by: Ming
From: Christoph Hellwig
Currently we assign managed interrupt vectors to all present CPUs. This
works fine for systems where we only online/offline CPUs. But in case of
systems that support physical CPU hotplug (or the virtualized version of
it) this means the additional CPUs covered for in the
On Fri, Jan 12, 2018 at 04:55:34PM -0500, Laurence Oberman wrote:
> On Fri, 2018-01-12 at 20:57 +, Bart Van Assche wrote:
> > On Tue, 2018-01-09 at 08:29 -0800, Tejun Heo wrote:
> > > Currently, blk-mq timeout path synchronizes against the usual
> > > issue/completion path using a complex schem
On Fri, Jan 12, 2018 at 04:55:34PM -0500, Laurence Oberman wrote:
> On Fri, 2018-01-12 at 20:57 +, Bart Van Assche wrote:
> > On Tue, 2018-01-09 at 08:29 -0800, Tejun Heo wrote:
> > > Currently, blk-mq timeout path synchronizes against the usual
> > > issue/completion path using a complex schem
On Sat, Jan 13, 2018 at 10:45:14PM +0800, Ming Lei wrote:
> On Fri, Jan 12, 2018 at 04:55:34PM -0500, Laurence Oberman wrote:
> > On Fri, 2018-01-12 at 20:57 +, Bart Van Assche wrote:
> > > On Tue, 2018-01-09 at 08:29 -0800, Tejun Heo wrote:
> > > > Currently, bl
Hi,
On Tue, Jan 30, 2018 at 09:05:26AM +0100, Oleksandr Natalenko wrote:
> Hi, Paolo, Ivan, Ming et al.
>
> It looks like I've just encountered the issue Ivan has already described in
> [1]. Since I'm able to reproduce it reliably in a VM, I'd like to draw more
> attention to it.
>
> First, I'm
On Tue, Jan 30, 2018 at 03:30:28PM +0100, Oleksandr Natalenko wrote:
> Hi.
>
...
>systemd-udevd-271 [000] 4.311033: bfq_insert_requests: insert
> rq->0
>systemd-udevd-271 [000] ...1 4.311037: blk_mq_do_dispatch_sched:
> not get rq, 1
> cfdisk-408 [000]
On Tue, Jan 16, 2018 at 03:22:18PM +, Don Brace wrote:
> > -Original Message-
> > From: Laurence Oberman [mailto:lober...@redhat.com]
> > Sent: Tuesday, January 16, 2018 7:29 AM
> > To: Thomas Gleixner ; Ming Lei
> > Cc: Christoph Hellwig ; Jens Axboe ;
&
On Thu, Feb 01, 2018 at 02:53:35PM +, Don Brace wrote:
> > -Original Message-
> > From: Ming Lei [mailto:ming@redhat.com]
> > Sent: Thursday, February 01, 2018 4:37 AM
> > To: Don Brace
> > Cc: Laurence Oberman ; Thomas Gleixner
> > ; Christoph
On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
> On 1/18/18 11:47 AM, Bart Van Assche wrote:
> >> This is all very tiresome.
> >
> > Yes, this is tiresome. It is very annoying to me that others keep
> > introducing so many regressions in such important parts of the kernel.
> > It is a
On Thu, Jan 18, 2018 at 09:02:45PM -0700, Jens Axboe wrote:
> On 1/18/18 7:32 PM, Ming Lei wrote:
> > On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
> >> On 1/18/18 11:47 AM, Bart Van Assche wrote:
> >>>> This is all very tiresome.
> >>
On Fri, Jan 19, 2018 at 05:09:46AM +, Bart Van Assche wrote:
> On Fri, 2018-01-19 at 10:32 +0800, Ming Lei wrote:
> > Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
> > it should be DM-only which returns STS_RESOURCE so often.
>
> That'
On Fri, Jan 19, 2018 at 03:20:13PM +, Bart Van Assche wrote:
> On Fri, 2018-01-19 at 15:26 +0800, Ming Lei wrote:
> > Please see queue_delayed_work_on(), hctx->run_work is shared by all
> > scheduling, once blk_mq_delay_run_hw_queue(100ms) returns, no new
> > sche
On Fri, Jan 19, 2018 at 08:24:06AM -0700, Jens Axboe wrote:
> On 1/19/18 12:26 AM, Ming Lei wrote:
> > On Thu, Jan 18, 2018 at 09:02:45PM -0700, Jens Axboe wrote:
> >> On 1/18/18 7:32 PM, Ming Lei wrote:
> >>> On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wro
On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>> resource are we running out of?
> >>>
>
On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> On 1/19/18 9:05 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>> Where does the dm STS_RESOURCE
On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> On 1/19/18 9:26 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:05 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wr
On Fri, Jan 19, 2018 at 09:52:32AM -0700, Jens Axboe wrote:
> On 1/19/18 9:47 AM, Mike Snitzer wrote:
> > On Fri, Jan 19 2018 at 11:41am -0500,
> > Jens Axboe wrote:
> >
> >> On 1/19/18 9:37 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:27:46AM -0
On Fri, Jan 19, 2018 at 10:09:11AM -0700, Jens Axboe wrote:
> On 1/19/18 10:05 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:52:32AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:47 AM, Mike Snitzer wrote:
> >>> On Fri, Jan 19 2018 at 11:41am -0500,
> >>&g
On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wr
On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wr
On Fri, Jan 19, 2018 at 09:23:35AM -0700, Jens Axboe wrote:
> On 1/19/18 9:13 AM, Mike Snitzer wrote:
> > On Fri, Jan 19 2018 at 10:48am -0500,
> > Jens Axboe wrote:
> >
> >> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>> Where does the dm
Hi Jianchao,
On Fri, Jan 19, 2018 at 11:05:35AM +0800, jianchao.wang wrote:
> Hi ming
>
> Sorry for delayed report this.
>
> On 01/17/2018 05:57 PM, Ming Lei wrote:
> > 2) hctx->next_cpu can become offline from online before
> > __blk_mq_run_hw_queue
> > i
Hi Jianchao,
On Wed, Jan 17, 2018 at 10:56:13AM +0800, jianchao.wang wrote:
> Hi ming
>
> Thanks for your patch and kindly response.
You are welcome!
>
> On 01/16/2018 11:32 PM, Ming Lei wrote:
> > OK, I got it, and it should have been the only corner case in which
> &
Hi Jianchao,
On Wed, Jan 17, 2018 at 01:24:23PM +0800, jianchao.wang wrote:
> Hi ming
>
> Thanks for your kindly response.
>
> On 01/17/2018 11:52 AM, Ming Lei wrote:
> >> It is here.
> >> __blk_mq_run_hw_queue()
> >>
> >> WARN_ON(!cpu
Hi Jianchao,
On Wed, Jan 17, 2018 at 04:09:11PM +0800, jianchao.wang wrote:
> Hi ming
>
> Thanks for your kindly response.
>
> On 01/17/2018 02:22 PM, Ming Lei wrote:
> > This warning can't be removed completely, for example, the CPU figured
> > in blk_mq_h
On Wed, Jan 17, 2018 at 11:07:48AM +0100, Christian Borntraeger wrote:
>
>
> On 01/17/2018 10:57 AM, Ming Lei wrote:
> > Hi Jianchao,
> >
> > On Wed, Jan 17, 2018 at 04:09:11PM +0800, jianchao.wang wrote:
> >> Hi ming
> >>
> >> Thanks for