On Fri, Feb 27, 2026 at 09:08:38PM +0100, David Hildenbrand (Arm) wrote:
> Let's rename it to better fit our new naming scheme.
>
> Signed-off-by: David Hildenbrand (Arm) <[email protected]>
Yesssss thank you! I hate[d] the rando 'sometimes zap sometimes unmap' naming convention here. LGTM, so:

Reviewed-by: Lorenzo Stoakes (Oracle) <[email protected]>

> ---
>  mm/memory.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 621f38ae1425..f0aaec57a66b 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2074,7 +2074,7 @@ static void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  }
>
>
> -static void unmap_single_vma(struct mmu_gather *tlb,
> +static void __zap_vma_range(struct mmu_gather *tlb,
>  		struct vm_area_struct *vma, unsigned long start_addr,
>  		unsigned long end_addr, struct zap_details *details)
>  {
> @@ -2177,7 +2177,7 @@ void unmap_vmas(struct mmu_gather *tlb, struct unmap_desc *unmap)
>  		unsigned long start = unmap->vma_start;
>  		unsigned long end = unmap->vma_end;
>  		hugetlb_zap_begin(vma, &start, &end);
> -		unmap_single_vma(tlb, vma, start, end, &details);
> +		__zap_vma_range(tlb, vma, start, end, &details);
>  		hugetlb_zap_end(vma, &details);
>  		vma = mas_find(unmap->mas, unmap->tree_end - 1);
>  	} while (vma);
> @@ -2213,7 +2213,7 @@ void zap_page_range_single_batched(struct mmu_gather *tlb,
>  	 * unmap 'address-end' not 'range.start-range.end' as range
>  	 * could have been expanded for hugetlb pmd sharing.
>  	 */
> -	unmap_single_vma(tlb, vma, address, end, details);
> +	__zap_vma_range(tlb, vma, address, end, details);
>  	mmu_notifier_invalidate_range_end(&range);
>  	if (is_vm_hugetlb_page(vma)) {
>  		/*
> --
> 2.43.0
>
