On 03.09.20 16:00, Pavel Tatashin wrote:
> There is a race during page offline that can lead to infinite loop:
> a page never ends up on a buddy list and __offline_pages() keeps
> retrying infinitely or until a termination signal is received.
> 
> Thread#1 - a new process:
> 
> load_elf_binary
>  begin_new_exec
>   exec_mmap
>    mmput
>     exit_mmap
>      tlb_finish_mmu
>       tlb_flush_mmu
>        release_pages
>         free_unref_page_list
>          free_unref_page_prepare
>           set_pcppage_migratetype(page, migratetype);
>              // cache migration type (below MIGRATE_PCPTYPES) in page->index
> 
> Thread#2 - hot-removes memory
> __offline_pages
>   start_isolate_page_range
>     set_migratetype_isolate
>       set_pageblock_migratetype(page, MIGRATE_ISOLATE);
>         // set pageblock migration type to MIGRATE_ISOLATE
>         drain_all_pages(zone);
>              // drain per-cpu page lists to buddy allocator.
> 
> Thread#1 - continue
>          free_unref_page_commit
>            migratetype = get_pcppage_migratetype(page);
>               // get old migration type
>            list_add(&page->lru, &pcp->lists[migratetype]);
>               // add new page to already drained pcp list
> 
> Thread#2
> Never drains pcp again, and therefore gets stuck in the loop.
> 
> The fix is to try to drain per-cpu lists again after
> check_pages_isolated_cb() fails.
> 
> Fixes: c52e75935f8d ("mm: remove extra drain pages on pcp list")
> 
> Signed-off-by: Pavel Tatashin <[email protected]>
> Cc: [email protected]
> Acked-by: David Rientjes <[email protected]>
> Acked-by: Vlastimil Babka <[email protected]>
> ---
>  mm/memory_hotplug.c | 14 ++++++++++++++
>  mm/page_isolation.c |  8 ++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index e9d5ab5d3ca0..b11a269e2356 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1575,6 +1575,20 @@ static int __ref __offline_pages(unsigned long start_pfn,
>               /* check again */
>               ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
>                                           NULL, check_pages_isolated_cb);
> +             /*
> +              * per-cpu pages are drained in start_isolate_page_range, but
> +              * if there are still pages that are not free, make sure we
> +              * drain again, because when we isolated the range we might
> +              * have raced with another thread that was adding pages to a
> +              * pcp list.
> +              *
> +              * Forward progress should still be guaranteed because
> +              * pages on the pcp list can only belong to the MOVABLE zone,
> +              * since has_unmovable_pages explicitly checks for
> +              * PageBuddy on freed pages in other zones.
> +              */
> +             if (ret)
> +                     drain_all_pages(zone);
>       } while (ret);
>  
>       /* Ok, all of our target is isolated.
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index 242c03121d73..63a3db10a8c0 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -170,6 +170,14 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
>   * pageblocks we may have modified and return -EBUSY to caller. This
>   * prevents two threads from simultaneously working on overlapping ranges.
>   *
> + * Please note that there is no strong synchronization with the page allocator
> + * either. Pages might be freed while their page blocks are marked ISOLATED.
> + * In some cases pages might still end up on pcp lists and that would allow
> + * for their allocation even when they are in fact isolated already. Depending
> + * on how strong a guarantee the caller needs, drain_all_pages might be needed
> + * (e.g. __offline_pages will need to call it after the check for an isolated
> + * range before the next retry).
> + *
>   * Return: the number of isolated pageblocks on success and -EBUSY if any part
>   * of range cannot be isolated.
>   */
> 

(still on vacation, back next week on Tuesday)

I didn't look into the discussions in v1, but to me this looks like we are
trying to hide an actual bug by implementing hacks in the caller
(repeated calls to drain_all_pages()). What about alloc_contig_range()
users? They would see more allocation failures just because the PCP code
doesn't play along.

There *is* strong synchronization with the page allocator - however,
there seems to be one corner-case race where we allow pages to be
allocated from isolated pageblocks.

I want that fixed instead if possible, otherwise this is just an ugly
hack to make the obvious symptoms (offlining looping forever) disappear.

If that is not easily possible, I'd much rather see all
drain_all_pages() calls moved to the callers and the expected behavior
documented, instead of specifying "there is no strong synchronization
with the page allocator" - which is wrong in all but the PCP case (and
there only in this one possible race?).

I do wonder why we hit this issue now and not before - I suspect
something in PCP code changed that made this race possible.

-- 
Thanks,

David / dhildenb
