On 12/02/2016 12:29 PM, Mel Gorman wrote:
Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
defer debugging checks of pages allocated from the PCP") will allow the
per-cpu list counter to be out of sync with the per-cpu list contents
if a struct page is corrupted.
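
For context, the desync arises roughly like this (a condensed sketch of
the post-479f854a207c rmqueue_bulk() loop with the cold/CMA handling
omitted; not the verbatim source): a corrupted page rejected by the
deferred check is skipped, but the loop counter that the caller later
adds to pcp->count still advances past it.

        for (i = 0; i < count; ++i) {
                struct page *page = __rmqueue(zone, order, migratetype);

                if (unlikely(page == NULL))
                        break;

                /* Deferred debugging check added by 479f854a207c */
                if (unlikely(check_pcp_refill(page)))
                        continue;  /* counted in i, never added to list */

                list_add_tail(&page->lru, list);
        }
        return i;  /* caller does pcp->count += i */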

The consequence is an infinite loop if the per-cpu lists get fully drained
by free_pcppages_bulk(): every list is empty, but the count is still
positive, so there is never a non-empty list to free from. The infinite
loop occurs here:

                do {
                        batch_free++;
                        if (++migratetype == MIGRATE_PCPTYPES)
                                migratetype = 0;
                        list = &pcp->lists[migratetype];
                } while (list_empty(list));
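
To make the spin concrete, here is a minimal standalone userspace
reproduction of the same selection logic (an illustrative sketch with toy
list_head helpers, not kernel code); with all lists empty and a stale
positive count implied, it never terminates:

        #include <stdbool.h>
        #include <stdio.h>

        #define MIGRATE_PCPTYPES 3

        /* Toy stand-ins for the kernel's list_head primitives */
        struct list_head { struct list_head *next, *prev; };

        static void INIT_LIST_HEAD(struct list_head *l)
        {
                l->next = l->prev = l;
        }

        static bool list_empty(const struct list_head *l)
        {
                return l->next == l;
        }

        int main(void)
        {
                struct list_head lists[MIGRATE_PCPTYPES];
                struct list_head *list;
                int migratetype = 0, batch_free = 0;

                for (int i = 0; i < MIGRATE_PCPTYPES; i++)
                        INIT_LIST_HEAD(&lists[i]);

                /*
                 * free_pcppages_bulk() only runs while pcp->count > 0,
                 * so with a stale positive count and all lists empty
                 * this search for a non-empty list spins forever.
                 */
                do {
                        batch_free++;
                        if (++migratetype == MIGRATE_PCPTYPES)
                                migratetype = 0;
                        list = &lists[migratetype];
                } while (list_empty(list));

                printf("unreachable while the lists stay empty\n");
                return 0;
        }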

From a user perspective, it's a bad page warning followed by a soft lockup
with interrupts disabled in free_pcppages_bulk().

This patch keeps the accounting in sync.
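
The return value is what feeds the accounting: the caller folds it
directly into pcp->count, roughly as in the 4.7-era buffered_rmqueue()
refill path (a condensed sketch, not the verbatim source):

        /*
         * rmqueue_bulk()'s return value becomes the pcp count, so it
         * must equal the number of pages actually placed on the list
         * ('alloced' below), not the number taken off the buddy list.
         */
        if (list_empty(list)) {
                pcp->count += rmqueue_bulk(zone, 0, pcp->batch,
                                           list, migratetype, cold);
                if (unlikely(list_empty(list)))
                        goto failed;
        }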

Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
Signed-off-by: Mel Gorman <mgor...@suse.de>
Cc: sta...@vger.kernel.org [4.7+]

Acked-by: Vlastimil Babka <vba...@suse.cz>

---
 mm/page_alloc.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6de9440e3ae2..34ada718ef47 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2192,7 +2192,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                        unsigned long count, struct list_head *list,
                        int migratetype, bool cold)
 {
-       int i;
+       int i, alloced = 0;

        spin_lock(&zone->lock);
        for (i = 0; i < count; ++i) {
@@ -2217,13 +2217,21 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
                else
                        list_add_tail(&page->lru, list);
                list = &page->lru;
+               alloced++;
                if (is_migrate_cma(get_pcppage_migratetype(page)))
                        __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
                                              -(1 << order));
        }
+
+       /*
+        * i pages were removed from the buddy list even if some leak due
+        * to check_pcp_refill failing so adjust NR_FREE_PAGES based
+        * on i. Do not confuse with 'alloced' which is the number of
+        * pages added to the pcp list.
+        */
        __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
        spin_unlock(&zone->lock);
-       return i;
+       return alloced;
 }

 #ifdef CONFIG_NUMA

