On 09/19/2017 01:53 PM, Jens Axboe wrote:
> @@ -948,15 +949,25 @@ static void wb_start_writeback(struct bdi_writeback *wb, long nr_pages,
>                              bool range_cyclic, enum wb_reason reason)
>  {
>       struct wb_writeback_work *work;
> +     bool zero_pages = false;
>  
>       if (!wb_has_dirty_io(wb))
>               return;
>  
>       /*
> -      * If someone asked for zero pages, we write out the WORLD
> +      * If someone asked for zero pages, we write out the WORLD.
> +      * Places like vmscan and laptop mode want to queue a wakeup to
> +      * the flusher threads to clean out everything. To avoid potentially
> +      * having tons of these pending, ensure that we only allow one of
> +      * them pending and inflight at the time
>        */
> -     if (!nr_pages)
> +     if (!nr_pages) {
> +             if (test_bit(WB_zero_pages, &wb->state))
> +                     return;
> +             set_bit(WB_zero_pages, &wb->state);
>               nr_pages = get_nr_dirty_pages();
> +             zero_pages = true;
> +     }

A later fix was added here to ensure we clear WB_zero_pages if the work
allocation fails:

work = kzalloc(sizeof(*work),
	       GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
if (!work) {
	if (zero_pages)
		clear_bit(WB_zero_pages, &wb->state);
	[...]
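
For reference, here is roughly how the whole of wb_start_writeback() ends
up looking with that fix folded in. This is a condensed sketch, not the
literal commit: the allocation-failure fallback (trace_writeback_nowork()
and wb_wakeup()) and the work field setup are taken from the existing
mainline function, and the work->zero_pages handoff that lets the flusher
clear the bit again once the work completes is assumed from the rest of
the series.

static void wb_start_writeback(struct bdi_writeback *wb, long nr_pages,
			       bool range_cyclic, enum wb_reason reason)
{
	struct wb_writeback_work *work;
	bool zero_pages = false;

	if (!wb_has_dirty_io(wb))
		return;

	/* only allow one "write everything" work pending/inflight at a time */
	if (!nr_pages) {
		if (test_bit(WB_zero_pages, &wb->state))
			return;
		set_bit(WB_zero_pages, &wb->state);
		nr_pages = get_nr_dirty_pages();
		zero_pages = true;
	}

	work = kzalloc(sizeof(*work),
		       GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
	if (!work) {
		/* undo the flag, otherwise nobody can queue a full flush again */
		if (zero_pages)
			clear_bit(WB_zero_pages, &wb->state);
		trace_writeback_nowork(wb);
		wb_wakeup(wb);
		return;
	}

	work->sync_mode		= WB_SYNC_NONE;
	work->nr_pages		= nr_pages;
	work->range_cyclic	= range_cyclic;
	work->reason		= reason;
	work->auto_free		= 1;
	/* assumed: the completion path uses this to clear WB_zero_pages */
	work->zero_pages	= zero_pages;

	wb_queue_work(wb, work);
}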

Updated patch here:

http://git.kernel.dk/cgit/linux-block/commit/?h=writeback-fixup&id=21ea70657894fda9fccf257543cbec112b2813ef

-- 
Jens Axboe
