On 01/25/2017 09:57 AM, Hannes Reinecke wrote:
> On 01/25/2017 04:52 PM, Jens Axboe wrote:
>> On 01/25/2017 04:10 AM, Hannes Reinecke wrote:
> [ .. ]
>>> Bah.
>>>
>>> Not quite. I'm still seeing some queues with state 'restart'.
>>>
>>> I've found that I need another patch on top of that:
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index e872555..edcbb44 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -753,8 +754,10 @@ static void blk_mq_timeout_work(struct work_struct *work)
>>>
>>>                 queue_for_each_hw_ctx(q, hctx, i) {
>>>                         /* the hctx may be unmapped, so check it here */
>>> -                       if (blk_mq_hw_queue_mapped(hctx))
>>> +                       if (blk_mq_hw_queue_mapped(hctx)) {
>>>                                 blk_mq_tag_idle(hctx);
>>> +                               blk_mq_sched_restart(hctx);
>>> +                       }
>>>                 }
>>>         }
>>>         blk_queue_exit(q);
>>>
>>>
>>> Reasoning is that in blk_mq_get_tag() we might end up scheduling the
>>> request on another hctx, but the original hctx might still have the
>>> SCHED_RESTART bit set.
>>> Which will never be cleared, as we complete the request on a different
>>> hctx, so anything we do on the end_request side won't do us any good.
>>
>> I think you are right, it'll potentially trigger with shared tags and
>> multiple hardware queues. I'll debug this today and come up with a
>> decent fix.
>>
>> I committed the previous patch, fwiw.
>>
> THX.
> 
> The above patch _does_ help in the sense that my testcase now completes 
> without stalls. And I even get a decent performance with the mq-sched 
> fixes: 82k IOPs sequential read with mq-deadline as compared to 44k IOPs 
> when running without I/O scheduling.
> Still some way off from the 132k IOPs I'm getting with CFQ, but we're 
> getting there.
> 
> However, I do get a noticeable stall during the stonewall sequence 
> before the timeout handler kicks in, so there must be a better way of 
> handling this.
> 
> But nevertheless, thanks for all your work here.
> Very much appreciated.

Yeah, the fix isn't really a fix, unless you are willing to tolerate
potentially tens of seconds of extra latency until we idle it out :-)

So we can't use the un-idling for this, but we can track it on the
shared state, which is the tags. The problem isn't that we are
switching to a new hardware queue, it's if we mark the hardware queue
as restart AND it has nothing pending. In that case, we'll never
get it restarted, since IO completion is what restarts it.

I need to handle that case separately. Currently testing a patch, I
should have something for you to test later today.

-- 
Jens Axboe
