On Mon, 2018-10-29 at 10:37 -0600, Jens Axboe wrote:
> @@ -400,9 +402,15 @@ void blk_mq_sched_insert_requests(struct request_queue 
> *q,
>                                 struct blk_mq_ctx *ctx,
>                                 struct list_head *list, bool run_queue_async)
>  {
> -     struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, ctx->cpu);
> -     struct elevator_queue *e = hctx->queue->elevator;
> +     struct blk_mq_hw_ctx *hctx;
> +     struct elevator_queue *e;
> +     struct request *rq;
> +
> +     /* For list inserts, requests better be on the same hw queue */
> +     rq = list_first_entry(list, struct request, queuelist);
> +     hctx = blk_mq_map_queue(q, rq->cmd_flags, ctx->cpu);

Passing all request cmd_flags bits to blk_mq_map_queue() makes it possible
for that function to depend on every single cmd_flags bit, even though
different requests may have different cmd_flags. Have you considered passing
only the hw_ctx type to blk_mq_map_queue(), so that that function cannot
start depending on other cmd_flags bits?

Additionally, what guarantees that all requests in the list have the same
hw_ctx type? If a later patch guarantees that, please mention it in the
comment above the list_first_entry() call.

Thanks,

Bart.
