On 2/3/21 4:29 PM, Daniel Vetter wrote:
Recently there was a fairly long thread about recoverable hardware page
faults, how they can deadlock, and what to do about that.

While the discussion is still fresh I figured it's a good time to try and
document the conclusions a bit. This documentation section explains
what the potential problem is, and the remedies we've discussed,
roughly ordered from best to worst.

v2: Linus -> Linux typo (Dave)

v3:
- Make it clear drivers only need to implement one option (Christian)
- Make it clearer that implicit sync is out the window with exclusive
   fences (Christian)
- Add the fairly theoretical option of segmenting the memory (either
   statically or through dynamic checks at runtime for which piece of
   memory is managed how) and explain why it's not a great idea (Felix)

References: https://lore.kernel.org/dri-devel/20210107030127.20393-1-felix.kuehl...@amd.com/
Cc: Dave Airlie <airl...@gmail.com>
Cc: Maarten Lankhorst <maarten.lankho...@linux.intel.com>
Cc: Thomas Hellström <thomas.hellst...@intel.com>
Cc: "Christian König" <christian.koe...@amd.com>
Cc: Jerome Glisse <jgli...@redhat.com>
Cc: Felix Kuehling <felix.kuehl...@amd.com>
Signed-off-by: Daniel Vetter <daniel.vet...@intel.com>
Cc: Sumit Semwal <sumit.sem...@linaro.org>
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
---
  Documentation/driver-api/dma-buf.rst | 76 ++++++++++++++++++++++++++++
  1 file changed, 76 insertions(+)

diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst
index a2133d69872c..7f37ec30d9fd 100644
--- a/Documentation/driver-api/dma-buf.rst
+++ b/Documentation/driver-api/dma-buf.rst
@@ -257,3 +257,79 @@ fences in the kernel. This means:
    userspace is allowed to use userspace fencing or long running compute
    workloads. This also means no implicit fencing for shared buffers in these
    cases.
+
+Recoverable Hardware Page Faults Implications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Modern hardware supports recoverable page faults, which has a lot of
+implications for DMA fences.
+
+First, a pending page fault obviously holds up the work that's running on the
+accelerator and a memory allocation is usually required to resolve the fault.
+But memory allocations are not allowed to gate completion of DMA fences, which
+means any workload using recoverable page faults cannot use DMA fences for
+synchronization. Synchronization fences controlled by userspace must be used
+instead.
+
+On GPUs this poses a problem, because current desktop compositor protocols on
+Linux rely on DMA fences, which means without an entirely new userspace stack
+built on top of userspace fences, they cannot benefit from recoverable page
+faults. Specifically this means implicit synchronization will not be possible.
+The exception is when page faults are only used as migration hints and never to
+fill a memory request on demand. For now this means recoverable page
+faults on GPUs are limited to pure compute workloads.
+
+Furthermore GPUs usually have shared resources between the 3D rendering and
+compute side, like compute units or command submission engines. If both a 3D
+job with a DMA fence and a compute workload using recoverable page faults are
+pending they could deadlock:
+
+- The 3D workload might need to wait for the compute job to finish and release
+  hardware resources first.
+
+- The compute workload might be stuck in a page fault, because the memory
+  allocation is waiting for the DMA fence of the 3D workload to complete.
+
+There are a few options to prevent this problem, and drivers need to implement
+at least one of them:
+
+- Compute workloads can always be preempted, even when a page fault is pending
+  and not yet repaired. Not all hardware supports this.
+
+- DMA fence workloads and workloads which need page fault handling have
+  independent hardware resources to guarantee forward progress. This could be
+  achieved e.g. through dedicated engines and minimal compute unit
+  reservations for DMA fence workloads.
+
+- The reservation approach could be further refined by only reserving the
+  hardware resources for DMA fence workloads when they are in-flight. This must
+  cover the time from when the DMA fence is visible to other threads up to
+  the moment when the fence is completed through dma_fence_signal().
+
+- As a last resort, if the hardware provides no useful reservation mechanics,
+  all workloads must be flushed from the GPU when switching between jobs
+  requiring DMA fences or jobs requiring page fault handling: This means all DMA
+  fences must complete before a compute job with page fault handling can be
+  inserted into the scheduler queue. And vice versa, before a DMA fence can be
+  made visible anywhere in the system, all compute workloads must be preempted
+  to guarantee all pending GPU page faults are flushed.
+
+- A further, fairly theoretical option would be to untangle these dependencies
+  when allocating memory to repair hardware page faults, either through separate
+  memory blocks or runtime tracking of the full dependency graph of all DMA
+  fences. This would have a very wide impact on the kernel, since resolving a
+  page fault on the CPU side can itself involve a page fault. It is much more
+  feasible and robust to limit the impact of handling hardware page faults to
+  the specific driver.
+
+Note that workloads that run on independent hardware like copy engines or other
+GPUs do not have any impact. This allows us to keep using DMA fences internally
+in the kernel even for resolving hardware page faults, e.g. by using copy
+engines to clear or copy memory needed to resolve the page fault.
+
+In some ways this page fault problem is a special case of the `Indefinite DMA
+Fences` discussions: Infinite fences from compute workloads are allowed to
+depend on DMA fences, but not the other way around. And not even the page fault
+problem is new, because some other CPU thread in userspace might
+hit a page fault which holds up a userspace fence - supporting page faults on
+GPUs doesn't add anything fundamentally new.

To me, in general this looks good. One thing, though, is that for a first-time reader it might not be totally clear what's special about a compute workload. Perhaps some clarification?
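
For instance, something like the following minimal sketch (all driver names made up, and obviously not meant as the section's final example) might help make the constraint concrete: a DMA fence job lives inside a fence-signalling critical section and therefore must not allocate memory or wait on faults, which is exactly what a compute job relying on recoverable page faults cannot promise.

#include <linux/dma-fence.h>

/* Hypothetical driver job, purely for illustration. */
struct my_job {
	struct dma_fence *done_fence;
};

static void my_driver_run_job(struct my_job *job)
{
	bool cookie;

	/*
	 * From the moment the fence is published until dma_fence_signal(),
	 * we are in a fence-signalling critical section: no GFP_KERNEL
	 * allocations and no waiting on page faults in here, because
	 * reclaim may already be blocked on this very fence.
	 */
	cookie = dma_fence_begin_signalling();

	/* ... queue the job to the hardware ... */
	/* ... and later, typically from the completion interrupt: ... */
	dma_fence_signal(job->done_fence);

	dma_fence_end_signalling(cookie);
}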

Also since the current cross-driver dma_fence locking order is

1) dma_resv ->
2) memory_allocation / reclaim ->
3) dma_fence_wait/critical

And the locking order required for recoverable pagefault is

a) dma_resv ->
b) fence_wait/critical ->
c) memory_allocation / reclaim

(Possibly with a) and b) interchanged above; is it possible to service a recoverable pagefault without taking the dma_resv lock?)
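
Spelled out with the annotations we already have (fs_reclaim_acquire() standing in for allocation/reclaim, dma_fence_begin_signalling() for the fence critical section; the function names below are made up), the two orders look roughly like this, and lockdep would flag the second against the first:

#include <linux/dma-resv.h>
#include <linux/dma-fence.h>
#include <linux/gfp.h>
#include <linux/sched/mm.h>

/* 1) dma_resv -> 2) allocation/reclaim -> 3) fence wait: today's rule. */
static void order_today(struct dma_resv *resv, struct dma_fence *f)
{
	dma_resv_lock(resv, NULL);
	fs_reclaim_acquire(GFP_KERNEL);	/* an allocation may enter reclaim... */
	dma_fence_wait(f, false);	/* ...and reclaim may wait on fences */
	fs_reclaim_release(GFP_KERNEL);
	dma_resv_unlock(resv);
}

/* a) dma_resv -> b) fence critical -> c) allocation: what a fault needs. */
static void order_fault_handler(struct dma_resv *resv)
{
	bool cookie;

	dma_resv_lock(resv, NULL);
	cookie = dma_fence_begin_signalling();
	fs_reclaim_acquire(GFP_KERNEL);	/* inverted against 2) -> 3) above */
	fs_reclaim_release(GFP_KERNEL);
	dma_fence_end_signalling(cookie);
	dma_resv_unlock(resv);
}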

It's clear that the fence critical section in b) is not compatible with the dma_fence wait in 3), and thus the memory restrictions are needed. But given the memory allocation restrictions for recoverable pagefaults, I think at some point we must ask ourselves why they are necessary, what the price of getting rid of them would be, and document that as well. *If* it all boils down to the 2) -> 3) locking order above, and that order is mandated *only* by the dma_fence wait in the userptr mmu notifiers, then these restrictions are a pretty high price to pay. Wouldn't it be possible now to replace that fence wait with either page pinning (which now is coherent since 5.9) or preempt-ctx fences + unpinned pages if available, and thus invert the 2) -> 3) locking order?
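
As a strawman for the pinning variant (function names hypothetical, and glossing over the pinned-memory accounting that FOLL_LONGTERM users need), something like this would replace the notifier's dma_fence_wait for userptr ranges:

#include <linux/mm.h>

/* Pin the userptr range up front; FOLL_LONGTERM keeps the pages coherent,
 * so no mmu notifier and hence no fence wait under reclaim is needed. */
static int my_userptr_pin(unsigned long start, int npages, struct page **pages)
{
	return pin_user_pages_fast(start, npages,
				   FOLL_WRITE | FOLL_LONGTERM, pages);
}

static void my_userptr_unpin(struct page **pages, unsigned long npages)
{
	unpin_user_pages(pages, npages);
}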

Thanks,
Thomas

