On Sat, Jan 31, 2026 at 01:57:21PM +0100, Thomas Hellström wrote:
> On Fri, 2026-01-30 at 19:01 -0800, John Hubbard wrote:
> > On 1/30/26 10:00 AM, Andrew Morton wrote:
> > > On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström
> > > <[email protected]> wrote:
> > ...
> > > > This can happen, for example, if the process holding the
> > > > device-private folio lock is stuck in
> > > > migrate_device_unmap()->lru_add_drain_all().
> > > > lru_add_drain_all() schedules a short work item on every online
> > > > CPU and waits for all of them to complete.
> > >
> > > This is pretty bad behavior from lru_add_drain_all().
> >
> > Yes. And also, by code inspection, it seems like other folio_batch
> > items (I was going to say pagevecs, heh) can leak in after calling
> > lru_add_drain_all(), making things even worse.
> >
> > Maybe we really should be calling lru_cache_disable/enable()
> > pairs for migration, even though it looks heavier weight.
> >
> > This diff would address both points, and maybe fix Matthew's issue,
> > although I haven't done much real testing on it other than a quick
> > run of run_vmtests.sh:
>
> It looks like lru_cache_disable() is using synchronize_rcu_expedited(),
> which would be a huge performance killer?
>
Yep. I’ve done some quick testing with John’s patch, and
xe_exec_system_alloc slows down by what seems like orders of magnitude in
certain sections. I haven’t done a deep dive yet, but the initial results
don’t look good.
I also eventually hit a kernel deadlock. I have the stack trace saved.
> From the migrate code it looks like it's calling lru_add_drain_all()
> once only, because migration is still best effort, so it's accepting
> failures if someone adds pages to the per-cpu lru_add structures,
> rather than wanting to take the heavy performance loss of
> lru_cache_disable().
>
> The problem at hand is also solved if we move the lru_add_drain_all()
> out of the page-locked region in migrate_vma_setup(): if we hit a
> system folio not on the LRU, we'd unlock all folios, call
> lru_add_drain_all() and retry from the start.
>
That seems worth trying, and it should be fairly easy to implement as
well. We'd want to determine whether a backoff like this is common in
practice, and whether it causes a performance hit or a large number of
retries under the right race conditions.
> But the root cause, even though lru_add_drain_all() is badly behaved,
> is IMHO the trylock spin in hmm_range_fault(). That spin was introduced
> relatively recently to avoid another livelock problem, but there were
> other fixes associated with that as well, so it might not be strictly
> necessary.
>
> IIRC the original non-trylocking code in do_swap_page() first took a
> reference to the folio, released the page-table lock and then performed
> a sleeping folio lock.

Here is the change for reference:

git format-patch -1 1afaeb8293c9a

So the original code never took the page lock at all.

> Problem was that if the folio was already locked
> for migration, that additional folio refcount would block migration
> (which might not be a big problem considering do_swap_page() might want
> to migrate to system ram anyway).

The additional folio refcount could block migration, so if multiple
threads fault the same page you could spin thousands of times before
one of them actually wins the race and moves the page. Or, if
migrate_to_ram() contends on some common mutex or similar structure
(Xe/GPU SVM doesn't, but AMD and Nouveau do), you could get a stable
livelock.
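Schematically (not the actual call sites, just the shape of the race):

	faulting thread(s)                   migrating thread
	------------------                   ----------------
	get_page(page);                      migrate_vma_setup();
	if (!trylock_page(page))             /* sees the elevated refcount,
		/* back off, refault */         clears MIGRATE_PFN_MIGRATE,
	put_page(page);                         and the caller retries */

With enough threads refaulting the same page there is almost always at
least one extra reference in flight, so the migrating side can starve
indefinitely.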
> @Matt Brost what's your take on this?
>
The primary reason I used a trylock in do_swap_page() is that the
migrate_vma_* functions also use trylocks. It seems reasonable to
simply convert the lock in do_swap_page() to a sleeping lock; I believe
that would solve this issue for both non-RT and RT threads. I don't
know enough about the MM to say whether a sleeping lock is acceptable
here, though. Perhaps Andrew can provide guidance.
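Roughly what I have in mind, as an untested sketch. The surrounding
do_swap_page() details (pte revalidation, error handling, the
FAULT_FLAG_VMA_LOCK case) are elided, so treat this as the shape of
the change rather than a patch:

	folio = page_folio(vmf->page);
	folio_get(folio);
	pte_unmap_unlock(vmf->pte, vmf->ptl);

	folio_lock(folio);	/* sleep instead of trylock + refault */
	/*
	 * The pte may have changed while we slept; it must be
	 * re-validated before trusting vmf->page.
	 */
	ret = pgmap->ops->migrate_to_ram(vmf);
	folio_unlock(folio);
	folio_put(folio);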
> I'm also not sure a folio refcount should block migration after the
> introduction of pinned (as in pin_user_pages()) pages. Rather, perhaps
> a folio pin-count should block migration, and in that case
> do_swap_page() can definitely do a sleeping folio lock and the problem
> is gone.
>
I’m not convinced the folio refcount has any bearing if we can take a
sleeping lock in do_swap_page, but perhaps I’m missing something.
> But it looks like an AR for us to try to check how bad
> lru_cache_disable() really is. And perhaps compare with an
> unconditional lru_add_drain_all() at migration start.
>
> Does anybody know who would be able to tell whether a page refcount
> still should block migration (like today) or whether that could
> actually be relaxed to a page pincount?
>
This is a good question. AFAIK this is probably a leftover from the
original device-pages implementation, and it could likely be relaxed.
But I’m not really convinced the folio refcount is relevant to this
discussion (see above).
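For the record, if someone does want to experiment with relaxing it,
I'd expect the core of the change to look something like the below.
This is hypothetical and untested; "expected_refs" stands in for
whatever the existing check computes from the mapcount:

	/* Today (simplified): any extra reference blocks migration. */
	if (folio_ref_count(folio) != expected_refs)
		return false;

	/*
	 * Hypothetical relaxation: only pins block migration. Note that
	 * folio_maybe_dma_pinned() is a heuristic and can report false
	 * positives for folios with very large refcounts.
	 */
	if (folio_maybe_dma_pinned(folio))
		return false;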
Matt
> Thanks,
> Thomas
>
> >
> > diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> > index 23379663b1e1..3c55a766dd33 100644
> > --- a/mm/migrate_device.c
> > +++ b/mm/migrate_device.c
> > @@ -570,7 +570,6 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
> >  	struct folio *fault_folio = fault_page ?
> >  				page_folio(fault_page) : NULL;
> >  	unsigned long i, restore = 0;
> > -	bool allow_drain = true;
> >  	unsigned long unmapped = 0;
> >  
> >  	lru_add_drain();
> > @@ -595,12 +594,6 @@ static unsigned long migrate_device_unmap(unsigned long *src_pfns,
> >  
> >  		/* ZONE_DEVICE folios are not on LRU */
> >  		if (!folio_is_zone_device(folio)) {
> > -			if (!folio_test_lru(folio) && allow_drain) {
> > -				/* Drain CPU's lru cache */
> > -				lru_add_drain_all();
> > -				allow_drain = false;
> > -			}
> > -
> >  			if (!folio_isolate_lru(folio)) {
> >  				src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
> >  				restore++;
> > @@ -759,11 +752,15 @@ int migrate_vma_setup(struct migrate_vma *args)
> >  	args->cpages = 0;
> >  	args->npages = 0;
> >  
> > +	lru_cache_disable();
> > +
> >  	migrate_vma_collect(args);
> >  
> >  	if (args->cpages)
> >  		migrate_vma_unmap(args);
> >  
> > +	lru_cache_enable();
> > +
> >  	/*
> >  	 * At this point pages are locked and unmapped, and thus they have
> >  	 * stable content and can safely be copied to destination memory that
> > @@ -1395,6 +1392,8 @@ int migrate_device_range(unsigned long *src_pfns, unsigned long start,
> >  {
> >  	unsigned long i, j, pfn;
> >  
> > +	lru_cache_disable();
> > +
> >  	for (pfn = start, i = 0; i < npages; pfn++, i++) {
> >  		struct page *page = pfn_to_page(pfn);
> >  		struct folio *folio = page_folio(page);
> > @@ -1413,6 +1412,8 @@ int migrate_device_range(unsigned long *src_pfns, unsigned long start,
> >  
> >  	migrate_device_unmap(src_pfns, npages, NULL);
> >  
> > +	lru_cache_enable();
> > +
> >  	return 0;
> >  }
> >  EXPORT_SYMBOL(migrate_device_range);
> > @@ -1429,6 +1430,8 @@ int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
> >  {
> >  	unsigned long i, j;
> >  
> > +	lru_cache_disable();
> > +
> >  	for (i = 0; i < npages; i++) {
> >  		struct page *page = pfn_to_page(src_pfns[i]);
> >  		struct folio *folio = page_folio(page);
> > @@ -1446,6 +1449,8 @@ int migrate_device_pfns(unsigned long *src_pfns, unsigned long npages)
> >  
> >  	migrate_device_unmap(src_pfns, npages, NULL);
> >  
> > +	lru_cache_enable();
> > +
> >  	return 0;
> >  }
> >  EXPORT_SYMBOL(migrate_device_pfns);
> >
> > thanks,