On Thu, Dec 06 2018 at  5:20pm -0500,
Jens Axboe <[email protected]> wrote:

> After the direct dispatch corruption fix, we permanently disallow direct
> dispatch of non read/write requests. This works fine off the normal IO
> path, as they will be retried like any other failed direct dispatch
> request. But for the blk_insert_cloned_request() that only DM uses to
> bypass the bottom level scheduler, we always first attempt direct
> dispatch. For some types of requests, that's now a permanent failure,
> and no amount of retrying will make that succeed.
> 
> Don't use direct dispatch off the cloned insert path, always just use
> bypass inserts. This still bypasses the bottom level scheduler, which is
> what DM wants.
> 
> Fixes: ffe81d45322c ("blk-mq: fix corruption with direct issue")
> Signed-off-by: Jens Axboe <[email protected]>
> 
> ---
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index deb56932f8c4..4c44e6fa0d08 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -2637,7 +2637,8 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
>                * bypass a potential scheduler on the bottom device for
>                * insert.
>                */
> -             return blk_mq_request_issue_directly(rq);
> +             blk_mq_request_bypass_insert(rq, true);
> +             return BLK_STS_OK;
>       }
>  
>       spin_lock_irqsave(q->queue_lock, flags);

Not sure what that trailing spin_lock_irqsave(q->queue_lock, flags) in the
diff context is about... but this looks good.  I'll clean up dm-rq.c to do
away with the extra BLK_STS_RESOURCE checks for its call to
blk_insert_cloned_request() once this lands.

Acked-by: Mike Snitzer <[email protected]>

Thanks.
