On Tue, Aug 19, 2025 at 07:41:55AM -0600, Nico Pache wrote:
> For khugepaged to support different mTHP orders, we must generalize this
> to check if the PMD is not shared by another VMA and the order is enabled.
>
> To ensure madvise_collapse can support working on mTHP orders without the
> PMD order enabled, we need to convert hugepage_vma_revalidate to take a
> bitmap of orders.
>
> No functional change in this patch.
>
> Reviewed-by: Baolin Wang <baolin.w...@linux.alibaba.com>
> Acked-by: David Hildenbrand <da...@redhat.com>
> Co-developed-by: Dev Jain <dev.j...@arm.com>
> Signed-off-by: Dev Jain <dev.j...@arm.com>
> Signed-off-by: Nico Pache <npa...@redhat.com>
LGTM (modulo nit/query below) so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>

> ---
>  mm/khugepaged.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b7b98aebb670..2d192ec961d2 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -917,7 +917,7 @@ static int collapse_find_target_node(struct collapse_control *cc)
>  static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  				   bool expect_anon,
>  				   struct vm_area_struct **vmap,
> -				   struct collapse_control *cc)
> +				   struct collapse_control *cc, unsigned long orders)
>  {
>  	struct vm_area_struct *vma;
>  	enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED :
> @@ -930,9 +930,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
>  	if (!vma)
>  		return SCAN_VMA_NULL;
>
> +	/* Always check the PMD order to insure its not shared by another VMA */

NIT: ensure not insure.

>  	if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
>  		return SCAN_ADDRESS_RANGE;
> -	if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
> +	if (!thp_vma_allowable_orders(vma, vma->vm_flags, type, orders))
>  		return SCAN_VMA_CHECK;
>  	/*
>  	 * Anon VMA expected, the address may be unmapped then
> @@ -1134,7 +1135,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  		goto out_nolock;
>
>  	mmap_read_lock(mm);
> -	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> +	result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> +					 BIT(HPAGE_PMD_ORDER));

Shouldn't this be PMD order? Seems equivalent.

>  	if (result != SCAN_SUCCEED) {
>  		mmap_read_unlock(mm);
>  		goto out_nolock;
>  	}
> @@ -1168,7 +1170,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	 * mmap_lock.
>  	 */
>  	mmap_write_lock(mm);
> -	result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> +	result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> +					 BIT(HPAGE_PMD_ORDER));
>  	if (result != SCAN_SUCCEED)
>  		goto out_up_write;
>  	/* check if the pmd is still valid */
> @@ -2807,7 +2810,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
>  		mmap_read_lock(mm);
>  		mmap_locked = true;
>  		result = hugepage_vma_revalidate(mm, addr, false, &vma,
> -						 cc, BIT(HPAGE_PMD_ORDER));
>  		if (result != SCAN_SUCCEED) {
>  			last_fail = result;
>  			goto out_nolock;
> --
> 2.50.1
>