On 12/01/2016 03:24 PM, Mel Gorman wrote:
> On Thu, Dec 01, 2016 at 02:41:29PM +0100, Vlastimil Babka wrote:
>> On 12/01/2016 01:24 AM, Mel Gorman wrote:
>> Hmm, I think that if this hits, we don't decrease count or increase
>> nr_freed, and pcp->count will become wrong.
> Ok, I think you're right, but I also think it's relatively trivial to fix:
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 94808f565f74..8777aefc1b8e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1134,13 +1134,13 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  			mt = get_pageblock_migratetype(page);
>
> +			nr_freed += (1 << order);
> +			count -= (1 << order);
>  			if (bulkfree_pcp_prepare(page))
>  				continue;
>
>  			__free_one_page(page, page_to_pfn(page), zone, order, mt);
>  			trace_mm_page_pcpu_drain(page, order, mt);
> -			nr_freed += (1 << order);
> -			count -= (1 << order);
>  		} while (count > 0 && --batch_free && !list_empty(list));
>> And if we are unlucky/doing a full drain, all lists will get empty, but
>> as count stays at e.g. 1, we loop forever on the outer loop?
> Potentially, yes. Granted, the system is already in a bad state, as pages
> are being freed while in a bad or unknown state, but we haven't halted the
> system for that in the past.
>> BTW, I think there's a similar problem (but not introduced by this patch)
>> in rmqueue_bulk() and its check_pcp_refill() handling.
> Potentially, yes. It's outside the scope of this patch, but it needs
> fixing. If you agree with the above fix, I'll roll it into a v5 and append
> another patch for this issue.
Yeah, looks fine. Thanks.