If hmm_range_fault() fails a folio_trylock() in do_swap_page(),
trying to acquire the lock of a device-private folio for migration
to RAM, the function will spin until it succeeds in grabbing the
lock. However, if the process holding the lock depends on the
completion of a work item that is scheduled on the same CPU as the
spinning hmm_range_fault(), that work item may be starved and we
end up in a livelock / starvation situation that is never
resolved.
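For context, the spin is the -EBUSY retry loop in hmm_range_fault(),
roughly the following in simplified form (error handling and locking
details elided):

  do {
          /* Force a retry if the notifier sequence was bumped. */
          if (mmu_interval_check_retry(range->notifier,
                                       range->notifier_seq))
                  return -EBUSY;
          ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
                                &hmm_walk_ops, &hmm_vma_walk);
          /*
           * With voluntary-only preemption and no reschedule point,
           * -EBUSY restarts the walk on the same CPU without ever
           * letting pending work items run.
           */
  } while (ret == -EBUSY);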
This can happen, for example, if the process holding the
device-private folio lock is stuck in
migrate_device_unmap()->lru_add_drain_all().
The lru_add_drain_all() function requires a short work item to be
run on all online CPUs in order to complete.
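In simplified form, that flushing scheme looks like this (sketch
only; the upstream implementation in mm/swap.c has additional checks
and batching, and the per-CPU variable name below is illustrative):

  static DEFINE_PER_CPU(struct work_struct, lru_drain_work);

  for_each_online_cpu(cpu) {
          struct work_struct *work = &per_cpu(lru_drain_work, cpu);

          INIT_WORK(work, lru_add_drain_per_cpu);
          queue_work_on(cpu, mm_percpu_wq, work);
  }

  /*
   * flush_work() blocks until the work item has executed on its
   * CPU. A CPU monopolized by a non-preemptible spinning task never
   * runs it, so the caller blocks indefinitely.
   */
  for_each_online_cpu(cpu)
          flush_work(&per_cpu(lru_drain_work, cpu));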
The prerequisites for this to happen are:
a) Both zone device and system memory folios are considered in
migrate_device_unmap(), so that there is a reason to call
lru_add_drain_all() for a system memory folio while a
folio lock is held on a zone device folio, as sketched below.
b) The zone device folio has an initial mapcount > 1, which causes
at least one migration PTE entry insertion to be deferred to
try_to_migrate(), which can happen after the call to
lru_add_drain_all().
c) No preemption, or voluntary preemption only.
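In paraphrased form, the problematic ordering in
migrate_device_unmap() is roughly the following (variable names
approximate the upstream code in mm/migrate_device.c; details
differ):

  for (i = 0; i < npages; i++) {
          /* The folio locks were taken earlier, during collection. */
          struct folio *folio =
                  page_folio(migrate_pfn_to_page(src_pfns[i]));

          if (!folio_is_zone_device(folio) &&
              !folio_test_lru(folio) && allow_drain) {
                  /* Waits for a work item on every online CPU. */
                  lru_add_drain_all();
                  allow_drain = false;
          }

          /* Deferred migration PTE insertion for mapcount > 1. */
          if (folio_mapped(folio))
                  try_to_migrate(folio, 0);
  }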
This all seems pretty unlikely to happen, but is indeed hit by
the "xe_exec_system_allocator" igt test.
Resolve this by calling cond_resched() after each iteration in
hmm_range_fault(). Future code improvements might consider moving
the lru_add_drain_all() call in migrate_device_unmap() out of the
region where the folio lock is held.
Also, hmm_range_fault() can be a very long-running function, so a
cond_resched() at the end of each iteration is justified even in
the absence of an -EBUSY.
Fixes: d28c2c9a4877 ("mm/hmm: make full use of walk_page_range()")
Cc: Ralph Campbell <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Jason Gunthorpe <[email protected]>
Cc: Leon Romanovsky <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Matthew Brost <[email protected]>
Cc: [email protected]
Cc: <[email protected]> # v5.5+
Cc: <[email protected]>
Signed-off-by: Thomas Hellström <[email protected]>
---
mm/hmm.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/mm/hmm.c b/mm/hmm.c
index 4ec74c18bef6..160c9e4e5a92 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -674,6 +674,13 @@ int hmm_range_fault(struct hmm_range *range)
return -EBUSY;
ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
&hmm_walk_ops, &hmm_vma_walk);
+	/*
+	 * Conditionally reschedule to give pending work items a
+	 * chance to run, so that the holder of a device-private
+	 * folio lock we're spinning on can release it.
+	 */
+ cond_resched();
+
/*
* When -EBUSY is returned the loop restarts with
* hmm_vma_walk.last set to an address that has not been stored
--
2.52.0