On 04/20/2017 02:47 PM, Stephen Bates wrote:
> 
>> I agree, it's fine as-is. We should queue it up for 4.12.
> 
> Great. I will get something based on Omar’s latest comments asap.
> 
>>> However right now I am stuck as I am seeing the kernel oops I reported
>>> before in testing of my latest patchset [1]. I will try and find some
>>> time to bisect that but it looks like it was introduced when the
>>> support for mq schedulers was added (on or around bd166ef18).
> 
>> Just replied to that one, let me know if the suggestion works.
> 
> That suggestion worked. Do you want me to send a patch for it?

Is the patch going to be different from the one I sent? Here it
is, with a comment added. Can I add your Tested-by?

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 572966f49596..c7836a1ded97 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2928,8 +2928,17 @@ bool blk_mq_poll(struct request_queue *q, blk_qc_t cookie)
        hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
        if (!blk_qc_t_is_internal(cookie))
                rq = blk_mq_tag_to_rq(hctx->tags, blk_qc_t_to_tag(cookie));
-       else
+       else {
                rq = blk_mq_tag_to_rq(hctx->sched_tags, blk_qc_t_to_tag(cookie));
+               /*
+                * With scheduling, if the request has completed, we'll
+                * get a NULL return here, as we clear the sched tag when
+                * that happens. The request still remains valid, like always,
+                * so we should be safe with just the NULL check.
+                */
+               if (!rq)
+                       return false;
+       }
 
        return __blk_mq_poll(hctx, rq);
 }

-- 
Jens Axboe
