On 11/13/20 2:59 AM, Muchun Song wrote:
> On x86_64, vmemmap is always PMD mapped if the machine has hugepages
> support and if we have 2MB contiguos pages and PMD aligned. If we want
                             contiguous              alignment
> to free the unused vmemmap pages, we have to split the huge pmd firstly.
> So we should pre-allocate pgtable to split PMD to PTE.
> 
> Signed-off-by: Muchun Song <[email protected]>
> ---
>  mm/hugetlb_vmemmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h | 12 +++++++++
>  2 files changed, 85 insertions(+)

Thanks for the cleanup.

Oscar made some other comments.  I only have one additional minor comment
below.

With those minor cleanups,
Acked-by: Mike Kravetz <[email protected]>
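
As background for anyone following along: the "split" the commit message
refers to replaces the single PMD entry covering a 2MB vmemmap range with
a freshly populated page of PTEs mapping the same pfns, which is why the
pgtable page has to be allocated up front.  A minimal sketch of that
remap, assuming x86_64 conventions.  This is mine, not code from this
series, and the function name is made up:

	/*
	 * Illustrative only, not from this series.  Remap one
	 * PMD-mapped vmemmap range with base pages, using a
	 * preallocated PTE page.  The split itself cannot be
	 * allowed to fail, hence the up-front allocation.
	 */
	static void split_vmemmap_huge_pmd(pmd_t *pmd, pte_t *pgtable,
					   unsigned long start)
	{
		unsigned long pfn = pmd_pfn(*pmd);
		unsigned long addr = start;
		int i;

		/* Fill the new PTE page with mappings for the same pfns. */
		for (i = 0; i < PTRS_PER_PTE; i++, pfn++, addr += PAGE_SIZE)
			set_pte_at(&init_mm, addr, pgtable + i,
				   pfn_pte(pfn, PAGE_KERNEL));

		/* Publish the PTEs before installing the new page table. */
		smp_wmb();
		pmd_populate_kernel(&init_mm, pmd, pgtable);
		flush_tlb_kernel_range(start, start + PMD_SIZE);
	}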

> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
...
> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +     /* Store preallocated pages on huge page lru list */

Let's expand the above comment to something like this:

        /*
         * Use the huge page lru list to temporarily store the preallocated
         * pages.  The preallocated pages are used and the list is emptied
         * before the huge page is put into use.  When the huge page is put
         * into use by prep_new_huge_page() the list will be reinitialized.
         */
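
Purely to illustrate the lifecycle that comment describes, a consumer
would then pop entries off the list before the huge page goes into use.
Something like the following; vmemmap_pgtable_withdraw is a made-up
name, not something in this patch:

	/*
	 * Hypothetical helper, not in this patch: take one
	 * preallocated pgtable page off the huge page's lru list.
	 */
	static pte_t *vmemmap_pgtable_withdraw(struct page *page)
	{
		struct page *pgtable;

		pgtable = list_first_entry_or_null(&page->lru,
						   struct page, lru);
		if (!pgtable)
			return NULL;

		list_del(&pgtable->lru);
		return page_to_virt(pgtable);
	}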

> +     INIT_LIST_HEAD(&page->lru);
> +
> +     while (nr--) {
> +             pte_t *pte_p;
> +
> +             pte_p = pte_alloc_one_kernel(&init_mm);
> +             if (!pte_p)
> +                     goto out;
> +             list_add(&virt_to_page(pte_p)->lru, &page->lru);
> +     }
> +
> +     return 0;
> +out:
> +     vmemmap_pgtable_free(page);
> +     return -ENOMEM;
> +}
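
One more aside: the error path above ends in vmemmap_pgtable_free(),
which is in the elided part of the diff.  I would expect it to do
little more than walk the list and hand each page back, along these
lines (my sketch, not the actual hunk):

	/*
	 * Sketch of the cleanup counterpart; the real implementation
	 * is in the elided hunk above.  Return every preallocated
	 * pgtable page still on the list.
	 */
	void vmemmap_pgtable_free(struct page *page)
	{
		struct page *pgtable, *next;

		list_for_each_entry_safe(pgtable, next, &page->lru, lru) {
			list_del(&pgtable->lru);
			pte_free_kernel(&init_mm, page_to_virt(pgtable));
		}
	}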

-- 
Mike Kravetz
