On 05/30/2016 11:01 AM, Mel Gorman wrote:
From: Vlastimil Babka <[email protected]>

In a DEBUG_VM kernel, we can hit an infinite loop for order == 0 in
buffered_rmqueue() when check_new_pcp() returns 1, because the bad page is
never removed from the pcp list. Fix this by removing the page before retrying.
Also, we don't need to check whether page is non-NULL, because we simply grab it
from the list, which was just tested for being non-empty.

Fixes: http://www.ozlabs.org/~akpm/mmotm/broken-out/mm-page_alloc-defer-debugging-checks-of-freed-pages-until-a-pcp-drain.patch

That was the wrong one, which I corrected later. Also, it's no longer in mmotm. The correction is below:

Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")

Reported-by: Naoya Horiguchi <[email protected]>
Signed-off-by: Vlastimil Babka <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>

Thanks Mel, I'd missed that the patch didn't go in.
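
To make the failure mode concrete, here is a minimal userspace sketch (not
kernel code; the fake_page type, take_good_page() and the singly linked list
are invented for illustration) of the fixed loop shape: the page is unlinked
from the list before the check_new_pcp()-style test runs, so a bad page
cannot be re-fetched on the next iteration. The real loop also refills the
pcp list from the buddy allocator when it runs empty; the sketch simply
returns NULL in that case.

/*
 * Userspace sketch of the retry loop in buffered_rmqueue().  The "bad"
 * flag stands in for check_new_pcp() returning nonzero.  Before the fix,
 * the bad page stayed at the head of the list, so the do/while re-fetched
 * the same page forever.
 */
#include <stdio.h>
#include <stdbool.h>

struct fake_page {
	bool bad;			/* stands in for check_new_pcp() != 0 */
	struct fake_page *next;		/* simplified singly linked pcp list */
};

/* Fixed variant: unlink the entry from the list before retrying. */
static struct fake_page *take_good_page(struct fake_page **list, int *count)
{
	struct fake_page *page;

	do {
		page = *list;		/* roughly list_first_entry()      */
		if (!page)
			return NULL;	/* list ran empty; real code refills */

		*list = page->next;	/* roughly list_del(&page->lru)    */
		(*count)--;		/* roughly pcp->count--            */
	} while (page->bad);		/* roughly check_new_pcp(page)     */

	return page;
}

int main(void)
{
	struct fake_page good = { .bad = false, .next = NULL };
	struct fake_page bad  = { .bad = true,  .next = &good };
	struct fake_page *list = &bad;
	int count = 2;

	struct fake_page *page = take_good_page(&list, &count);

	/* Prints "got good page, 0 left on list": the bad page was dropped. */
	printf("got %s page, %d left on list\n",
	       page && !page->bad ? "good" : "bad", count);
	return 0;
}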

---
 mm/page_alloc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f8f3bfc435ee..bb320cde4d6d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2609,11 +2609,12 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
                                page = list_last_entry(list, struct page, lru);
                        else
                                page = list_first_entry(list, struct page, lru);
-               } while (page && check_new_pcp(page));

-               __dec_zone_state(zone, NR_ALLOC_BATCH);
-               list_del(&page->lru);
-               pcp->count--;
+                       __dec_zone_state(zone, NR_ALLOC_BATCH);
+                       list_del(&page->lru);
+                       pcp->count--;
+
+               } while (check_new_pcp(page));
        } else {
                /*
                 * We most definitely don't want callers attempting to
