On Tue, Feb 3, 2026 at 6:13 AM Lorenzo Stoakes
<[email protected]> wrote:
>
> On Thu, Jan 22, 2026 at 12:28:33PM -0700, Nico Pache wrote:
> > Pass an order and offset to collapse_huge_page to support collapsing anon
> > memory to arbitrary orders within a PMD. order indicates what mTHP size we
> > are attempting to collapse to, and offset indicates where in the PMD to
> > start the collapse attempt.
> >
> > For non-PMD collapse we must leave the anon VMA write locked until after
> > we collapse the mTHP; in the PMD case all the pages are isolated, but in
> > the mTHP case this is not true, and we must keep the lock to prevent
> > changes to the VMA from occurring.
> >
> > Reviewed-by: Baolin Wang <[email protected]>
> > Tested-by: Baolin Wang <[email protected]>
> > Signed-off-by: Nico Pache <[email protected]>
> > ---
> > mm/khugepaged.c | 111 +++++++++++++++++++++++++++++++-----------------
> > 1 file changed, 71 insertions(+), 40 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 9b7e05827749..76cb17243793 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1151,44 +1151,54 @@ static enum scan_result alloc_charge_folio(struct folio **foliop, struct mm_stru
> > return SCAN_SUCCEED;
> > }
> >
> > -static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > - int referenced, int unmapped, struct collapse_control *cc)
> > +static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long start_addr,
> > + int referenced, int unmapped, struct collapse_control *cc,
> > + bool *mmap_locked, unsigned int order)
> > {
> > LIST_HEAD(compound_pagelist);
> > pmd_t *pmd, _pmd;
> > - pte_t *pte;
> > + pte_t *pte = NULL;
> > pgtable_t pgtable;
> > struct folio *folio;
> > spinlock_t *pmd_ptl, *pte_ptl;
> > enum scan_result result = SCAN_FAIL;
> > struct vm_area_struct *vma;
> > struct mmu_notifier_range range;
> > + bool anon_vma_locked = false;
> > + const unsigned long nr_pages = 1UL << order;
> > + const unsigned long pmd_address = start_addr & HPAGE_PMD_MASK;
> >
> > - VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> > + VM_WARN_ON_ONCE(pmd_address & ~HPAGE_PMD_MASK);
> >
> > /*
> > * Before allocating the hugepage, release the mmap_lock read lock.
> > * The allocation can take potentially a long time if it involves
> > * sync compaction, and we do not need to hold the mmap_lock during
> > * that. We will recheck the vma after taking it again in write mode.
> > + * If collapsing mTHPs we may have already released the read_lock.
> > */
> > - mmap_read_unlock(mm);
> > + if (*mmap_locked) {
> > + mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > + }
> >
> > - result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> > + result = alloc_charge_folio(&folio, mm, cc, order);
> > if (result != SCAN_SUCCEED)
> > goto out_nolock;
> >
> > mmap_read_lock(mm);
> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> > - HPAGE_PMD_ORDER);
> > + *mmap_locked = true;
> > + result = hugepage_vma_revalidate(mm, pmd_address, true, &vma, cc, order);
>
> Why would we use the PMD address here rather than the actual start address?
The revalidation relies on the pmd_addr, not the start_addr. It only
uses the address to make sure the VMA still spans at least a full PMD,
and it uses the order to validate that the target order is allowed. I
left a small comment about this in the revalidate function.
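To sketch the intent (not the exact code in hugepage_vma_revalidate,
just the idea):

    /* The VMA must still cover the whole PMD we started from. */
    if (pmd_address < vma->vm_start ||
        pmd_address + HPAGE_PMD_SIZE > vma->vm_end)
        return SCAN_ADDRESS_RANGE;

The order only comes into play when checking which orders are allowed
for the VMA.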
>
> Also please add /*expect_anon=*/ before the 'true' because it's hard to
> understand what that references.
ack
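i.e.:

    result = hugepage_vma_revalidate(mm, pmd_address, /*expect_anon=*/true,
                                     &vma, cc, order);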
>
> > if (result != SCAN_SUCCEED) {
> > mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > goto out_nolock;
> > }
> >
> > - result = find_pmd_or_thp_or_none(mm, address, &pmd);
> > + result = find_pmd_or_thp_or_none(mm, pmd_address, &pmd);
> > if (result != SCAN_SUCCEED) {
> > mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > goto out_nolock;
> > }
> >
> > @@ -1198,13 +1208,16 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> > * released when it fails. So we jump out_nolock directly in
> > * that case. Continuing to collapse causes inconsistency.
> > */
> > - result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> > - referenced, HPAGE_PMD_ORDER);
> > - if (result != SCAN_SUCCEED)
> > + result = __collapse_huge_page_swapin(mm, vma, start_addr, pmd,
> > + referenced, order);
> > + if (result != SCAN_SUCCEED) {
> > + *mmap_locked = false;
> > goto out_nolock;
> > + }
> > }
> >
> > mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > /*
> > * Prevent all access to pagetables with the exception of
> > * gup_fast later handled by the ptep_clear_flush and the VM
> > @@ -1214,20 +1227,20 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> > * mmap_lock.
> > */
> > mmap_write_lock(mm);
> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> > - HPAGE_PMD_ORDER);
> > + result = hugepage_vma_revalidate(mm, pmd_address, true, &vma, cc, order);
> > if (result != SCAN_SUCCEED)
> > goto out_up_write;
> > /* check if the pmd is still valid */
> > vma_start_write(vma);
> > - result = check_pmd_still_valid(mm, address, pmd);
> > + result = check_pmd_still_valid(mm, pmd_address, pmd);
> > if (result != SCAN_SUCCEED)
> > goto out_up_write;
> >
> > anon_vma_lock_write(vma->anon_vma);
> > + anon_vma_locked = true;
> >
> > - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> > - address + HPAGE_PMD_SIZE);
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, start_addr,
> > + start_addr + (PAGE_SIZE << order));
> > mmu_notifier_invalidate_range_start(&range);
> >
> > pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> > @@ -1239,24 +1252,21 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> > * Parallel GUP-fast is fine since GUP-fast will back off when
> > * it detects PMD is changed.
> > */
> > - _pmd = pmdp_collapse_flush(vma, address, pmd);
> > + _pmd = pmdp_collapse_flush(vma, pmd_address, pmd);
> > spin_unlock(pmd_ptl);
> > mmu_notifier_invalidate_range_end(&range);
> > tlb_remove_table_sync_one();
> >
> > - pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> > + pte = pte_offset_map_lock(mm, &_pmd, start_addr, &pte_ptl);
> > if (pte) {
> > - result = __collapse_huge_page_isolate(vma, address, pte, cc,
> > - HPAGE_PMD_ORDER,
> > - &compound_pagelist);
> > + result = __collapse_huge_page_isolate(vma, start_addr, pte, cc,
> > + order, &compound_pagelist);
> > spin_unlock(pte_ptl);
> > } else {
> > result = SCAN_NO_PTE_TABLE;
> > }
> >
> > if (unlikely(result != SCAN_SUCCEED)) {
> > - if (pte)
> > - pte_unmap(pte);
> > spin_lock(pmd_ptl);
> > BUG_ON(!pmd_none(*pmd));
> > /*
> > @@ -1266,21 +1276,21 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> > */
> > pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> > spin_unlock(pmd_ptl);
> > - anon_vma_unlock_write(vma->anon_vma);
> > goto out_up_write;
> > }
> >
> > /*
> > - * All pages are isolated and locked so anon_vma rmap
> > - * can't run anymore.
> > + * For PMD collapse all pages are isolated and locked so anon_vma
> > + * rmap can't run anymore. For mTHP collapse we must hold the lock.
> > */
> > - anon_vma_unlock_write(vma->anon_vma);
> > + if (is_pmd_order(order)) {
> > + anon_vma_unlock_write(vma->anon_vma);
> > + anon_vma_locked = false;
> > + }
> >
> > result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> > - vma, address, pte_ptl,
> > - HPAGE_PMD_ORDER,
> > - &compound_pagelist);
> > - pte_unmap(pte);
> > + vma, start_addr, pte_ptl,
> > + order, &compound_pagelist);
> > if (unlikely(result != SCAN_SUCCEED))
> > goto out_up_write;
> >
> > @@ -1290,20 +1300,42 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long a
> > * write.
> > */
> > __folio_mark_uptodate(folio);
> > - pgtable = pmd_pgtable(_pmd);
> > + if (is_pmd_order(order)) { /* PMD collapse */
> > + pgtable = pmd_pgtable(_pmd);
> >
> > - spin_lock(pmd_ptl);
> > - BUG_ON(!pmd_none(*pmd));
> > - pgtable_trans_huge_deposit(mm, pmd, pgtable);
> > - map_anon_folio_pmd_nopf(folio, pmd, vma, address);
> > + spin_lock(pmd_ptl);
> > + WARN_ON_ONCE(!pmd_none(*pmd));
> > + pgtable_trans_huge_deposit(mm, pmd, pgtable);
> > + map_anon_folio_pmd_nopf(folio, pmd, vma, pmd_address);
> > + } else { /* mTHP collapse */
> > + pte_t mthp_pte = mk_pte(folio_page(folio, 0), vma->vm_page_prot);
> > +
> > + mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
> > + spin_lock(pmd_ptl);
> > + WARN_ON_ONCE(!pmd_none(*pmd));
> > + folio_ref_add(folio, nr_pages - 1);
> > + folio_add_new_anon_rmap(folio, vma, start_addr, RMAP_EXCLUSIVE);
> > + folio_add_lru_vma(folio, vma);
> > + set_ptes(vma->vm_mm, start_addr, pte, mthp_pte, nr_pages);
> > + update_mmu_cache_range(NULL, vma, start_addr, pte, nr_pages);
> > +
> > + smp_wmb(); /* make PTEs visible before PMD. See pmd_install() */
> > + pmd_populate(mm, pmd, pmd_pgtable(_pmd));
>
> I seriously hate this being open-coded, can we separate it out into another
> function?
Yeah, I think we've discussed this before. I started to generalize
this and apply it to other parts of the kernel that follow a similar
pattern, but each potential user of the helper was slightly different
in its approach, and I couldn't find a quick way to make one helper
fit them all. Cleanly refactoring this will take more thought. I
figured I could leave it to the later cleanup work, or I could create
a static helper just for khugepaged for now?
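Something like this, as a khugepaged-local helper (a rough sketch
pulled straight from the hunk above; the name and parameter list are
just a strawman):

    /*
     * Strawman: map a freshly copied mTHP folio at start_addr and
     * reinstall the PTE table. Caller holds pmd_ptl.
     */
    static void map_anon_folio_ptes_nopf(struct folio *folio,
                                         struct vm_area_struct *vma,
                                         pmd_t *pmd, pmd_t _pmd, pte_t *pte,
                                         unsigned long start_addr,
                                         unsigned long nr_pages)
    {
            pte_t mthp_pte = mk_pte(folio_page(folio, 0), vma->vm_page_prot);

            mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
            folio_ref_add(folio, nr_pages - 1);
            folio_add_new_anon_rmap(folio, vma, start_addr, RMAP_EXCLUSIVE);
            folio_add_lru_vma(folio, vma);
            set_ptes(vma->vm_mm, start_addr, pte, mthp_pte, nr_pages);
            update_mmu_cache_range(NULL, vma, start_addr, pte, nr_pages);

            smp_wmb(); /* make PTEs visible before PMD. See pmd_install() */
            pmd_populate(vma->vm_mm, pmd, pmd_pgtable(_pmd));
    }

If something along those lines is acceptable for now, I can fold it
into the next spin.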
>
> > + }
> > spin_unlock(pmd_ptl);
> >
> > folio = NULL;
> >
> > result = SCAN_SUCCEED;
> > out_up_write:
> > + if (anon_vma_locked)
> > + anon_vma_unlock_write(vma->anon_vma);
>
> Thanks it's much better tracking this specifically.
>
> The whole damn thing needs refactoring (by this I mean - khugepaged and really
> THP in general to be clear :) but it's not your fault.
Yeah it has not been the prettiest code to try and understand/work on!
>
> Could I ask though whether you might help out with some cleanups after this
> lands :)
>
> I feel like we all need to do our bit to pay down some technical debt!
Yes, of course! I had already planned on doing so. I have some
cleanups in mind, and I believe others have already tackled a few.
After this lands, let's discuss further plans (discussion thread or
THP meeting).
Cheers,
-- Nico
>
> > + if (pte)
> > + pte_unmap(pte);
> > mmap_write_unlock(mm);
> > + *mmap_locked = false;
> > out_nolock:
> > + WARN_ON_ONCE(*mmap_locked);
> > if (folio)
> > folio_put(folio);
> > trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> > @@ -1471,9 +1503,8 @@ static enum scan_result collapse_scan_pmd(struct mm_struct *mm,
> > pte_unmap_unlock(pte, ptl);
> > if (result == SCAN_SUCCEED) {
> > result = collapse_huge_page(mm, start_addr, referenced,
> > - unmapped, cc);
> > - /* collapse_huge_page will return with the mmap_lock released */
> > - *mmap_locked = false;
> > + unmapped, cc, mmap_locked,
> > + HPAGE_PMD_ORDER);
> > }
> > out:
> > trace_mm_khugepaged_scan_pmd(mm, folio, referenced,
> > --
> > 2.52.0
> >
>
> Cheers, Lorenzo
>