On Sat, Jan 31, 2026 at 01:42:20PM -0800, John Hubbard wrote:
> On 1/31/26 11:00 AM, Matthew Brost wrote:
> > On Sat, Jan 31, 2026 at 01:57:21PM +0100, Thomas Hellström wrote:
> > > On Fri, 2026-01-30 at 19:01 -0800, John Hubbard wrote:
> > > > On 1/30/26 10:00 AM, Andrew Morton wrote:
> > > > > On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström
> > > > > <[email protected]> wrote:
> > > > ...
> > > It looks like lru_cache_disable() is using synchronize_rcu_expedited(),
> > > which would be a huge performance killer?
> > >
> >
> > Yep. I’ve done some quick testing with John’s patch, and
> > xe_exec_system_alloc slows down by what seems like orders of magnitude in
>
> ouchie!
>
> > certain sections. I haven’t done a deep dive yet, but the initial results
> > don’t look good.
> >
> > I also eventually hit a kernel deadlock. I have the stack trace saved.
> >
> > > From the migrate code it looks like it's calling lru_add_drain_all()
> > > once only, because migration is still best effort, so it's accepting
> > > failures if someone adds pages to the per-cpu lru_add structures,
> > > rather than wanting to take the heavy performance loss of
> > > lru_cache_disable().
>
> Yes, I'm clearly far too biased right now towards "make migration
> succeed more often" (some notes below). lru_cache_disable() is sounding
> awfully severe in terms of perf loss.
>
> > >
> > > The problem at hand is also solved if we move the lru_add_drain_all()
> > > out of the page-locked region in migrate_vma_setup(): if we hit a
> > > system folio not on the LRU, we'd unlock all folios, call
> > > lru_add_drain_all() and retry from the start.
> > >
> >
> > That seems like something to try. It should actually be pretty easy to
> > implement as well. It would be good to determine whether a backoff like this is
>
> This does seem like a less drastic fix, and it keeps the same design.
>
Perhaps Thomas and I can look at this option during the work week.
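
Roughly the shape I'd try, as a sketch only (collect_and_lock_folios()
and unlock_collected_folios() are made-up stand-ins for the existing
migrate_vma_collect()/unlock bookkeeping, and -EAGAIN here just stands
for "hit a system folio that wasn't on the LRU"):

#include <linux/migrate.h>
#include <linux/swap.h>

/* Invented stand-ins for the existing collect/unlock bookkeeping. */
static int collect_and_lock_folios(struct migrate_vma *args);
static void unlock_collected_folios(struct migrate_vma *args);

static int migrate_vma_setup_with_drain(struct migrate_vma *args)
{
	bool drained = false;
	int ret;

retry:
	ret = collect_and_lock_folios(args);	/* still trylocks, as today */
	if (ret == -EAGAIN && !drained) {
		/*
		 * Drop every folio lock before the expensive drain so we
		 * never block in lru_add_drain_all() while holding folio
		 * locks, then retry the whole collection exactly once.
		 */
		unlock_collected_folios(args);
		lru_add_drain_all();
		drained = true;
		goto retry;
	}
	return ret;
}

That keeps lru_add_drain_all() on the slow path only, instead of paying
for it (or for lru_cache_disable()) on every migration.
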
> > common, and whether the backoff causes a performance hit or leads to a
> > large number of retries under the right race conditions.
> >
> > > But the root cause, even though lru_add_drain_all() is badly behaved, is
> > > IMHO the trylock spin in hmm_range_fault(). This was introduced relatively
> > > recently to avoid another livelock problem, but there were other
> > > fixes associated with that as well, so it might not be strictly necessary.
> > >
> > > IIRC the original non-trylocking code in do_swap_page() first took a
> >
> > Here is the change for reference:
> >
> > git format-patch -1 1afaeb8293c9a
> >
> > > reference to the folio, released the page-table lock and then performed
> > > a sleeping folio lock. Problem was that if the folio was already locked
> >
> > So the original code never took the page lock.
> >
> > > for migration, that additional folio refcount would block migration
> >
> > The additional folio refcount could block migration, so if multiple
> > threads fault the same page you could spin thousands of times before
> > one of them actually wins the race and moves the page. Or, if
> > migrate_to_ram contends on some common mutex or similar structure
> > (Xe/GPU SVM doesn’t, but AMD and Nouveau do), you could get a stable
> > livelock.
> >
> > > (which might not be a big problem considering do_swap_page() might want
> > > to migrate to system ram anyway). @Matt Brost what's your take on this?
> > >
> >
> > The primary reason I used a trylock in do_swap_page is because the
> > migrate_vma_* functions also use trylocks. It seems reasonable to
>
> Those are trylocks because it is collecting multiple pages/folios, so in
> order to avoid deadlocks (very easy to hit with that pattern), it goes
> with trylock.
>
> > simply convert the lock in do_swap_page to a sleeping lock. I believe
> > that would solve this issue for both non-RT and RT threads. I don’t know
> > enough about the MM to say whether using a sleeping lock here is
> > acceptable, though. Perhaps Andrew can provide guidance.
>
> This might actually be possible.
>
> >
> > > I'm also not sure a folio refcount should block migration after the
> > > introduction of pinned (like in pin_user_pages) pages. Rather, perhaps a
> > > folio pin-count should block migration, and in that case do_swap_page()
> > > can definitely do a sleeping folio lock and the problem is gone.
>
> A problem for that specific point is that pincount and refcount both
> mean, "the page is pinned" (which in turn literally means "not allowed
> to migrate/move").
>
> (In fact, pincount is implemented in terms of refcount, in most
> configurations still.)
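
For my own clarity, the distinction being debated looks roughly like the
below (sketch only; "expected" is a hypothetical caller-supplied baseline,
not an existing parameter):

#include <linux/mm.h>

/* What migration effectively checks today: any extra reference blocks it. */
static bool blocked_by_refcount(struct folio *folio, int expected)
{
	return folio_ref_count(folio) > expected;
}

/*
 * The relaxed rule Thomas is asking about: only a real pin (from
 * pin_user_pages() and friends) blocks migration.  Note that
 * folio_maybe_dma_pinned() can return false positives for order-0
 * folios with very large plain refcounts, since the pin count is
 * folded into the refcount there.
 */
static bool blocked_by_pincount(struct folio *folio)
{
	return folio_maybe_dma_pinned(folio);
}
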
>
> > >
> >
> > I’m not convinced the folio refcount has any bearing if we can take a
> > sleeping lock in do_swap_page, but perhaps I’m missing something.
>
> So far, I have not been able to find a problem with your proposal. So,
> something like this, I believe, could actually work:
>
I did something slightly more defensive with a refcount protection, but
this seems to work, fixes the issue raised by Thomas, and shows no
noticeable performance difference. If we go this route,
do_huge_pmd_device_private would need to be updated with the same
pattern as well - I don't have large device pages enabled in my current
test branch, so I'd have to test that part out too.

diff --git a/mm/memory.c b/mm/memory.c
index da360a6eb8a4..1e7ccc4a1a6c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4652,6 +4652,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->page = softleaf_to_page(entry);
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (softleaf_is_device_private(entry)) {
+			struct dev_pagemap *pgmap;
+
 			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
 				/*
 				 * migrate_to_ram is not yet ready to operate
@@ -4670,21 +4672,15 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 								vmf->orig_pte)))
 				goto unlock;
 
-			/*
-			 * Get a page reference while we know the page can't be
-			 * freed.
-			 */
-			if (trylock_page(vmf->page)) {
-				struct dev_pagemap *pgmap;
-
-				get_page(vmf->page);
-				pte_unmap_unlock(vmf->pte, vmf->ptl);
+			pte_unmap_unlock(vmf->pte, vmf->ptl);
+			lock_page(vmf->page);
+			if (get_page_unless_zero(vmf->page)) {
 				pgmap = page_pgmap(vmf->page);
 				ret = pgmap->ops->migrate_to_ram(vmf);
 				unlock_page(vmf->page);
 				put_page(vmf->page);
 			} else {
-				pte_unmap_unlock(vmf->pte, vmf->ptl);
+				unlock_page(vmf->page);
 			}
 		} else if (softleaf_is_hwpoison(entry)) {
 			ret = VM_FAULT_HWPOISON;

> diff --git a/mm/memory.c b/mm/memory.c
> index da360a6eb8a4..af73430e7888 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4652,6 +4652,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			vmf->page = softleaf_to_page(entry);
>  			ret = remove_device_exclusive_entry(vmf);
>  		} else if (softleaf_is_device_private(entry)) {
> +			struct dev_pagemap *pgmap;
> +
>  			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
>  				/*
>  				 * migrate_to_ram is not yet ready to operate
> @@ -4674,18 +4676,13 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			 * Get a page reference while we know the page can't be
>  			 * freed.
>  			 */
> -			if (trylock_page(vmf->page)) {
> -				struct dev_pagemap *pgmap;
> -
> -				get_page(vmf->page);
> -				pte_unmap_unlock(vmf->pte, vmf->ptl);
> -				pgmap = page_pgmap(vmf->page);
> -				ret = pgmap->ops->migrate_to_ram(vmf);
> -				unlock_page(vmf->page);
> -				put_page(vmf->page);
> -			} else {
> -				pte_unmap_unlock(vmf->pte, vmf->ptl);
> -			}
> +			get_page(vmf->page);
> +			pte_unmap_unlock(vmf->pte, vmf->ptl);
> +			lock_page(vmf->page);
> +			pgmap = page_pgmap(vmf->page);
> +			ret = pgmap->ops->migrate_to_ram(vmf);
> +			unlock_page(vmf->page);
> +			put_page(vmf->page);
>  		} else if (softleaf_is_hwpoison(entry)) {
>  			ret = VM_FAULT_HWPOISON;
>  		} else if (softleaf_is_marker(entry)) {
>
> >
> > > But it looks like an AR for us to check how bad
> > > lru_cache_disable() really is, and perhaps to compare it with an
> > > unconditional lru_add_drain_all() at migration start.
> > >
> > > Does anybody know who would be able to tell whether a page refcount
> > > still should block migration (like today) or whether that could
> > > actually be relaxed to a page pincount?
>
> Yes, it really should block migration, see my response above: both
> pincount and refcount literally mean "do not move this page".
>
> As an aside (because it might help at some point), I'm just now testing a
> tiny patchset that uses:
>
> wait_var_event_killable(&folio->_refcount,
> folio_ref_count(folio) <= expected)
>
> during migration, paired with:
>
> wake_up_var(&folio->_refcount) in put_page().
>
> This waits for the expected refcount, instead of doing a blind, tight
> retry loop during migration attempts. This lets migration succeed even
> when waiting a long time for another caller to release a refcount.
>
> It works well, but of course, it also has a potentially serious
> performance cost (which I need to quantify), because it adds cycles to
> the put_page() hot path. Which is why I haven't posted it yet, even as
> an RFC. It's still in the "is this even reasonable" stage, just food
> for thought here.
>
If you post an RFC, we (Intel) can give it a try, as we have tests that
really stress migration in odd ways and fairly good metrics to catch
perf issues too.
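
Just so I'm sure I follow the idea, the pairing would look roughly like
this (my paraphrase of the description above, not your actual patch;
"expected" is whatever baseline refcount the migration path already
computes):

#include <linux/mm.h>
#include <linux/wait_bit.h>

/* Migration side: sleep (killably) until only 'expected' references remain. */
static int wait_for_expected_refcount(struct folio *folio, int expected)
{
	return wait_var_event_killable(&folio->_refcount,
				       folio_ref_count(folio) <= expected);
}

/* Release side (conceptually in put_page()/folio_put()): wake any waiter. */
static void wake_refcount_waiters(struct folio *folio)
{
	wake_up_var(&folio->_refcount);
}

The interesting part for us will be measuring that wake_up_var() cost in
the release hot path.
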
Matt
> thanks,
> --
> John Hubbard