On Thu, 2018-12-20 at 14:26 -0700, Jens Axboe wrote:
> On 12/20/18 2:23 PM, Bart Van Assche wrote:
> > On Thu, 2018-12-20 at 14:00 -0700, Jens Axboe wrote:
> > > On 12/20/18 1:56 PM, Bart Van Assche wrote:
> > > > @@ -96,6 +97,9 @@ static void blk_mq_check_inflight(struct blk_mq_hw_ctx *hctx,
> > > >  {
> > > >         struct mq_inflight *mi = priv;
> > > >  
> > > > +       if (rq->q != mi->q)
> > > > +               return;
> > > 
> > > Aren't you back to square one with this one, if the tags are shared? You
> > > can't dereference it before you know it matches.
> > 
> > My patch can only work if the new rq->q = NULL assignment in
> > __blk_mq_free_request() is executed before the request tag is freed,
> > and if freeing a tag does not happen concurrently with any bt_iter()
> > call. Would you accept that I add a seqlock to avoid this scenario?
> 
> Ugh no, let's not go that far. Why not just use my approach, which avoids
> any need to dereference rq unless we know it belongs to the queue in
> question? I think that's cheaper than resetting ->queue as well when the
> rq completes; I'm always wary of adding new stores in the completion
> path.

I think there is a race condition in bt_iter() in your approach:
tags->rqs[bitnr].queue can change after it has been read, and that can
cause a request that is not associated with hctx->queue to be passed to
iter_data->fn(). Since 'fn' will access '*rq', I think that with your
patch a use-after-free can occur similar to the one reported at the
start of this e-mail thread. Your patch may make it harder to trigger
that issue, though.

Bart.
