Currently memmap_init_zone_device() ends up initializing 32768 pages
when it only needs to initialize 128 given tail page reuse. That
number is worse with 1GB compound pages: 262144 instead of 128. Update
memmap_init_zone_device() to skip redundant initialization, detailed
below.
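
For reference, the 128 and 262144 figures fall out of simple arithmetic
(assuming 4K base pages and a 64-byte struct page, as on x86-64; shown
here for illustration only, not part of the patch):

    /* total struct pages covering a 1GB compound page */
    SZ_1G / PAGE_SIZE = 262144

    /* unique struct pages left to initialize once tail pages are reused */
    2 * (PAGE_SIZE / sizeof(struct page)) = 2 * (4096 / 64) = 128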

When a pgmap @vmemmap_shift is set, all pages are mapped at a given
huge page alignment and are described with compound pages, as opposed
to one struct page per 4K page.
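
For context, the per-compound-page geometry follows directly from
@vmemmap_shift; a minimal sketch of that relationship, modelled on the
pgmap_vmemmap_nr() helper (illustrative, not added by this patch):

    /*
     * 1 << vmemmap_shift base (4K) pages, i.e. struct pages, per compound
     * page; this is where the pfns_per_compound value used below comes from.
     * A shift of 9 gives 2M compound pages with 4K base pages.
     */
    static inline unsigned long pgmap_vmemmap_nr(struct dev_pagemap *pgmap)
    {
            return 1 << pgmap->vmemmap_shift;
    }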

With @vmemmap_shift > 0 and when struct pages are stored in RAM
(!altmap), most tail pages are reused. Consequently, the number of
unique struct pages is a lot smaller than the total number of struct
pages being mapped.

The altmap path is left alone since it does not support the memory
savings of compound devmaps.
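
To make the effect concrete, here is a condensed sketch of what the
tail-page init loop amounts to after this change (illustrative only; it
mirrors memmap_init_compound() but omits the compound-page prep work):

    /* head page is initialized separately; walk only the unique tails */
    unsigned long nr = altmap ? pfns_per_compound
                              : 2 * (PAGE_SIZE / sizeof(struct page));
    unsigned long pfn;

    for (pfn = head_pfn + 1; pfn < head_pfn + nr; pfn++)
            __init_zone_device_page(pfn_to_page(pfn), pfn, zone_idx,
                                    nid, pgmap);

The remaining tail struct pages need no initialization because their
vmemmap pages alias the ones already written.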

Signed-off-by: Joao Martins <joao.m.mart...@oracle.com>
---
 mm/page_alloc.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0c1e6bb09dd..01f10b5a4e47 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6653,6 +6653,20 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
        }
 }
 
+/*
+ * With compound page geometry and when struct pages are stored in ram most
+ * tail pages are reused. Consequently, the amount of unique struct pages to
+ * initialize is a lot smaller than the total amount of struct pages being
+ * mapped. This is a paired / mild layering violation with explicit knowledge
+ * of how the sparse_vmemmap internals handle compound pages in the absence
+ * of an altmap. See vmemmap_populate_compound_pages().
+ */
+static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
+                                             unsigned long nr_pages)
+{
+       return !altmap ? 2 * (PAGE_SIZE/sizeof(struct page)) : nr_pages;
+}
+
 static void __ref memmap_init_compound(struct page *head,
                                       unsigned long head_pfn,
                                       unsigned long zone_idx, int nid,
@@ -6717,7 +6731,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
                        continue;
 
                memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
-                                    pfns_per_compound);
+                                    compound_nr_pages(altmap, pfns_per_compound));
        }
 
        pr_info("%s initialised %lu pages in %ums\n", __func__,
-- 
2.17.2

