On Fri, 2026-01-30 at 10:00 -0800, Andrew Morton wrote:
> On Fri, 30 Jan 2026 15:45:29 +0100 Thomas Hellström
> <[email protected]> wrote:
>
> > If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
> > trying to acquire the lock of a device-private folio for migration
> > to ram, the function will spin until it succeeds in grabbing the
> > lock.
> >
> > However, if the process holding the lock is depending on a work
> > item to be completed, which is scheduled on the same CPU as the
> > spinning hmm_range_fault(), that work item might be starved and
> > we end up in a livelock / starvation situation which is never
> > resolved.
> >
> > This can happen, for example, if the process holding the
> > device-private folio lock is stuck in
> > migrate_device_unmap()->lru_add_drain_all().
> > The lru_add_drain_all() function requires a short work item
> > to be run on all online cpus to complete.
>
> This is pretty bad behavior from lru_add_drain_all().
>
> > A prerequisite for this to happen is:
> > a) Both zone device and system memory folios are considered in
> >    migrate_device_unmap(), so that there is a reason to call
> >    lru_add_drain_all() for a system memory folio while a
> >    folio lock is held on a zone device folio.
> > b) The zone device folio has an initial mapcount > 1, which causes
> >    at least one migration PTE entry insertion to be deferred to
> >    try_to_migrate(), which can happen after the call to
> >    lru_add_drain_all().
> > c) No preemption, or voluntary preemption only.
> >
> > This all seems pretty unlikely to happen, but it is indeed hit by
> > the "xe_exec_system_allocator" igt test.
> >
> > Resolve this using a cond_resched() after each iteration in
> > hmm_range_fault(). Future code improvements might consider moving
> > the lru_add_drain_all() call in migrate_device_unmap() out of the
> > folio locked region.
> >
> > Also, hmm_range_fault() can be a very long-running function,
> > so a cond_resched() at the end of each iteration can be
> > motivated even in the absence of an -EBUSY.
> >
> > Fixes: d28c2c9a4877 ("mm/hmm: make full use of walk_page_range()")
>
> Six years ago.
Yeah, although it's unlikely to have been hit before now; our
multi-device migration code might have been the first instance where
all those prerequisites were fulfilled.

> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -674,6 +674,13 @@ int hmm_range_fault(struct hmm_range *range)
> >  			return -EBUSY;
> >  		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
> >  				      &hmm_walk_ops, &hmm_vma_walk);
> > +		/*
> > +		 * Conditionally reschedule to let other work items get
> > +		 * a chance to unlock device-private pages whose locks
> > +		 * we're spinning on.
> > +		 */
> > +		cond_resched();
> > +
> >  		/*
> >  		 * When -EBUSY is returned the loop restarts with
> >  		 * hmm_vma_walk.last set to an address that has not been stored
>
> If the process which is running hmm_range_fault() has
> SCHED_FIFO/SCHED_RR then cond_resched() doesn't work. An explicit
> msleep() would be better?

Unfortunately hmm_range_fault() is typically called from a gpu
pagefault handler and it's crucial to get the gpu up and running again
as fast as possible.

Is there a way we could test for the cases where cond_resched() doesn't
work and in that case instead call sched_yield(), at least on -EBUSY
errors?

Thanks,
Thomas
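
P.S. To make the question concrete, below is a rough, untested sketch
of the kind of check I have in mind. The helper name hmm_relax_cpu() is
made up, and I'm assuming rt_task() is a reasonable predicate for "this
task won't be descheduled in favor of a normal task by cond_resched()".
Since yield() from a SCHED_FIFO/SCHED_RR task only rotates within its
own priority level and the drain kworker normally runs as SCHED_OTHER,
the realtime branch sleeps briefly (as you suggested) rather than
yielding:

	#include <linux/sched.h>
	#include <linux/sched/rt.h>
	#include <linux/delay.h>

	/*
	 * Hypothetical helper, not a tested patch: give other tasks on
	 * this CPU a chance to run between hmm_range_fault() iterations.
	 */
	static void hmm_relax_cpu(int ret)
	{
		if (ret == -EBUSY && rt_task(current)) {
			/*
			 * cond_resched() won't let a lower-priority
			 * kworker (e.g. the lru_add_drain_all() work)
			 * run while we hold SCHED_FIFO/SCHED_RR
			 * priority, so actually sleep for a bit.
			 */
			msleep(1);
		} else {
			/* Normal case: same behavior as the patch above. */
			cond_resched();
		}
	}

This would replace the bare cond_resched() in the loop above, so the
msleep() penalty is only paid in the realtime -EBUSY case while the
common path keeps the gpu fault latency unchanged.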
