On Thu, Jun 16, 2022 at 02:05:16PM -0700, Mike Kravetz wrote:
> From: Baolin Wang <[email protected]>
>
> The HugeTLB address ranges are linearly scanned during fork, unmap and
> remap operations, and the linear scan can skip to the end of range mapped
> by the page table page if hitting a non-present entry, which can help
> to speed linear scanning of the HugeTLB address ranges.
>
> So hugetlb_mask_last_page() is introduced to help to update the address in
> the loop of HugeTLB linear scanning with getting the last huge page mapped
> by the associated page table page[1], when a non-present entry is encountered.
>
> Considering ARM64 specific cont-pte/pmd size HugeTLB, this patch implemented
> an ARM64 specific hugetlb_mask_last_page() to help this case.
>
> [1] https://lore.kernel.org/linux-mm/[email protected]/
>
> Signed-off-by: Baolin Wang <[email protected]>
> Signed-off-by: Mike Kravetz <[email protected]>
Acked-by: Muchun Song <[email protected]>

Thanks.
