On 03/14/2016 08:31 AM, [email protected] wrote:
> From: Joonsoo Kim <[email protected]>
>
> There are systems whose nodes' pfn ranges overlap one another, like
> the following:
>
> -----pfn-------->
> N0 N1 N2 N0 N1 N2
>
> Therefore, we need to take this overlap into account when iterating
> over a pfn range.
>
> There are two places in vmstat.c that iterate over a pfn range
> without considering this overlap. Add the check there.
>
> Without this patch, such a system could overcount the number of
> pageblocks in a zone.
>
> Signed-off-by: Joonsoo Kim <[email protected]>
> ---
>  mm/vmstat.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 5e43004..0a726e3 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1010,6 +1010,9 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
>  		if (!memmap_valid_within(pfn, page, zone))
>  			continue;
The above already does this for each page within the block, but it's
guarded by CONFIG_ARCH_HAS_HOLES_MEMORYMODEL. I guess that's not set
on your system, right?
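
For context, memmap_valid_within() (mm/mmzone.c) is, as far as I
remember, roughly the following when that option is set; with it unset,
a static inline stub in mmzone.h just returns 1, so the zone check
disappears entirely:

	int memmap_valid_within(unsigned long pfn,
				struct page *page, struct zone *zone)
	{
		/* The memmap entry must actually map this pfn... */
		if (page_to_pfn(page) != pfn)
			return 0;

		/* ...and the page must belong to the zone we iterate. */
		if (page_zone(page) != zone)
			return 0;

		return 1;
	}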
I guess your added check should go above this one, though. Also, what
about employing pageblock_pfn_to_page() here and in all other
applicable places, so the logic is unified and optimized by
zone->contiguous?
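
To illustrate, here is a minimal sketch of what the loop in
pagetypeinfo_showblockcount_print() could become, reusing its existing
start_pfn/end_pfn/count[] locals and assuming the
pageblock_pfn_to_page() from the zone->contiguous series (it returns
NULL unless both ends of the pageblock are valid and fall within the
given zone, and degenerates to a plain pfn_to_page() once
zone->contiguous is set):

	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
		struct page *page;

		/* Skips blocks with invalid memmap or from another zone. */
		page = pageblock_pfn_to_page(pfn, pfn + pageblock_nr_pages,
					     zone);
		if (!page)
			continue;

		mtype = get_pageblock_migratetype(page);
		if (mtype < MIGRATE_TYPES)
			count[mtype]++;
	}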
> +		if (page_zone(page) != zone)
> +			continue;
> +
>  		mtype = get_pageblock_migratetype(page);
>  		if (mtype < MIGRATE_TYPES)
> @@ -1076,6 +1079,10 @@ static void pagetypeinfo_showmixedcount_print(struct seq_file *m,
>  			continue;
>
>  		page = pfn_to_page(pfn);
> +
> +		if (page_zone(page) != zone)
> +			continue;
> +
>  		if (PageBuddy(page)) {
>  			pfn += (1UL << page_order(page)) - 1;
>  			continue;