On 1/17/18 5:34 AM, Ming Lei wrote:
> When hctx->next_cpu is set from the possible online CPUs, there is a
> race in which hctx->next_cpu may end up >= nr_cpu_ids, which then
> breaks the workqueue.
> 
> The race is triggered when a CPU is going DEAD: blk_mq_hctx_notify_dead()
> is called to dispatch the requests left on that CPU, but at that point
> the dying CPU has already been cleared from 'cpu_online_mask', so all
> CPUs in hctx->cpumask may be offline, causing hctx->next_cpu to be set
> to a bad value.
> 
> This patch deals with the issue by re-selecting the next CPU and
> making sure hctx->next_cpu is always set to a valid value.
> 
> Cc: Christian Borntraeger <[email protected]>
> Cc: Stefan Haberland <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Reported-by: "jianchao.wang" <[email protected]>
> Tested-by: "jianchao.wang" <[email protected]>
> Fixes: 20e4d81393 ("blk-mq: simplify queue mapping & schedule with each possisble CPU")
> Signed-off-by: Ming Lei <[email protected]>
> ---
>  block/blk-mq.c | 31 +++++++++++++++++++++++++++++--
>  1 file changed, 29 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index c376d1b6309a..dc4066d28323 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1416,21 +1416,48 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
>   */
>  static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
>  {
> +     bool tried = false;
> +
>       if (hctx->queue->nr_hw_queues == 1)
>               return WORK_CPU_UNBOUND;
>  
>       if (--hctx->next_cpu_batch <= 0) {
>               int next_cpu;
> -
> +select_cpu:
>               next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
>                               cpu_online_mask);
>               if (next_cpu >= nr_cpu_ids)
>                       next_cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
>  
> -             hctx->next_cpu = next_cpu;
> +             /*
> +              * No online CPU was found, which can happen when running
> +              * from blk_mq_hctx_notify_dead(). Fall back to any CPU in
> +              * hctx->cpumask so hctx->next_cpu stays valid and the
> +              * workqueue is not broken.
> +              */
> +             if (next_cpu >= nr_cpu_ids)
> +                     hctx->next_cpu = cpumask_first(hctx->cpumask);
> +             else
> +                     hctx->next_cpu = next_cpu;
>               hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;

Since this should _only_ happen from the offline notification, why don't
we move the reset/update logic into that path and out of the hot path?
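
Something along these lines, perhaps (untested sketch, purely to
illustrate the idea; the existing request-migration part of
blk_mq_hctx_notify_dead() is elided, and the hctx->next_cpu == cpu
check is only a guess at when a reselect is actually needed):

static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
{
        struct blk_mq_hw_ctx *hctx =
                hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_dead);

        /* ... existing code that moves requests off the dead CPU ... */

        /*
         * The dead CPU is already cleared from cpu_online_mask at this
         * point, so reselect next_cpu here instead of validating it on
         * every queue run.
         */
        if (hctx->next_cpu == cpu) {
                int next_cpu = cpumask_first_and(hctx->cpumask,
                                                 cpu_online_mask);

                /* no online CPU left in hctx->cpumask, pick any valid one */
                if (next_cpu >= nr_cpu_ids)
                        next_cpu = cpumask_first(hctx->cpumask);

                hctx->next_cpu = next_cpu;
                hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
        }

        blk_mq_run_hw_queue(hctx, true);
        return 0;
}

That would keep the fixup out of the hot path entirely and confine it to
the CPU hotplug notification.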

-- 
Jens Axboe
