On 09.11.24 18:29, Matthew Brost wrote:
The motivation for this series comes from pending UMD submission work by
AMD [1], ARM [3], and the Xe team, who are also beginning to look at
this. Sima has suggested [4] some common driver preemptive fences and
semantics, which we all agree on. This is the first attempt to implement
them, based on Xe's existing long-running preemptive fences.
The semantics are fairly simple: preemptive fences attached to a
dma-resv must wait to enable signaling (and thus preempt device
execution) until all fences in other slots have been signaled. These
semantics are necessary to avoid deadlocks when a device is preempted
while a user submission is pending and resuming device execution
depends on that submission completing. The semantics align with
Christian's view [5], which I also arrived at independently (with a
little help from Sima).
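To make that a bit more concrete, below is a rough sketch of what the
wait could look like. This is illustrative only: the helper name is
made up, and it assumes DMA_RESV_USAGE_PREEMPT is ordered after
DMA_RESV_USAGE_BOOKKEEP so that iterating up to BOOKKEEP visits every
fence except the preempt fences themselves.

#include <linux/dma-fence.h>
#include <linux/dma-resv.h>

/*
 * Wait for all non-preempt fences in the object before allowing the
 * preempt fence to actually suspend device execution. Caller holds
 * the dma-resv lock. This must run from process context (e.g. a
 * worker kicked off by the preempt fence's enable_signaling callback,
 * which itself cannot sleep).
 */
static int preempt_fence_wait_others(struct dma_resv *resv)
{
        struct dma_resv_iter cursor;
        struct dma_fence *fence;
        long ret;

        dma_resv_for_each_fence(&cursor, resv, DMA_RESV_USAGE_BOOKKEEP,
                                fence) {
                ret = dma_fence_wait(fence, false);
                if (ret < 0)
                        return ret;
        }

        return 0;
}

Only once this returns is it safe to preempt the device and signal the
preempt fence.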
Ah! I was really wondering for a moment why you wanted to add a separate
dma_resv usage for that. But I strongly think that this approach won't work.
The assumption that completion fences which depend on a preemption
fence are always part of the same dma_resv object does not hold in all
cases.
At least for the AMD design, it can easily happen that we put a
completion fence only into a drm_syncobj for explicit synchronization,
and then engines or other devices which still use kernel submissions
depend on it. That can blow up really easily.
What we do instead is wait for the latest issued completion fence
while holding a mutex that prevents new ones from being created, and
only then stop the threads and signal the preemption fence.
That code could of course be turned into a driver-independent
component.
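Very roughly, and with all names invented purely for illustration,
that scheme looks something like this:

#include <linux/dma-fence.h>
#include <linux/mutex.h>

/* Hypothetical per-user-queue state. */
struct umd_queue {
        struct mutex lock;                 /* submission vs. preemption */
        struct dma_fence *last_completion; /* latest issued completion fence */
};

/* Driver-specific and hypothetical: stop the user queue threads. */
static void umd_queue_stop_threads(struct umd_queue *q);

static int umd_queue_preempt(struct umd_queue *q,
                             struct dma_fence *preempt_fence)
{
        int ret = 0;

        /*
         * The submission path takes the same lock before issuing a new
         * completion fence, so nothing new can appear while we drain
         * the latest one.
         */
        mutex_lock(&q->lock);

        if (q->last_completion)
                ret = dma_fence_wait(q->last_completion, false);

        if (!ret) {
                umd_queue_stop_threads(q);
                dma_fence_signal(preempt_fence);
        }

        mutex_unlock(&q->lock);
        return ret;
}

The important property is that the completion fence being waited on
does not have to live in any particular dma_resv object; it can just
as well sit only in a drm_syncobj.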
Regards,
Christian.
This is implemented via the new dma-resv slot DMA_RESV_USAGE_PREEMPT, a
common embedded preemptive fence base class with driver operations, and
updates to the scheduler to adhere to these semantics.
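To give an idea of the shape (the actual field and op names in the
patches may well differ, this is only a sketch):

#include <linux/dma-fence.h>
#include <linux/workqueue.h>

struct dma_fence_preempt;

/* Driver hooks invoked by the common preempt fence code. */
struct dma_fence_preempt_ops {
        /* Kick off preemption on the device; must not block. */
        int (*preempt)(struct dma_fence_preempt *fence);
        /* Wait for the device to confirm preemption has finished. */
        int (*preempt_wait)(struct dma_fence_preempt *fence);
};

struct dma_fence_preempt {
        struct dma_fence base;                   /* embedded dma-fence */
        const struct dma_fence_preempt_ops *ops; /* driver callbacks */
        struct work_struct work;                 /* resv wait + preempt */
};

The idea would be that the common code, when signaling is enabled,
first waits on the other dma-resv slots (per the semantics above) and
then calls into the driver ops from the worker.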
The current Xe long-running preemptive fences have been converted to the
new common code, and UMD submission is expected to follow (hopefully)
shortly thereafter in a subsequent series.
This is not intended as the final solution, but rather to kickstart
serious discussion on the topic.
Matt
[1] https://patchwork.freedesktop.org/series/113675/
[2] https://patchwork.freedesktop.org/series/114385/
[3] https://patchwork.freedesktop.org/series/137924/
[4] https://patchwork.kernel.org/project/dri-devel/cover/20240828172605.19176-1-mihail.atanas...@arm.com/#26011577
[5] https://patchwork.kernel.org/project/dri-devel/cover/20240828172605.19176-1-mihail.atanas...@arm.com/#26011853
Matthew Brost (6):
dma-resv: Add DMA_RESV_USAGE_PREEMPT
drm/sched: Teach scheduler about DMA_RESV_USAGE_PREEMPT
dma-fence: Add dma_fence_preempt base class
drm/sched: Teach scheduler about dma_fence_preempt type
drm/xe: Use DMA_RESV_USAGE_PREEMPT for preempt fences
drm/xe: Use dma_fence_preempt base class
drivers/dma-buf/Makefile | 2 +-
drivers/dma-buf/dma-fence-preempt.c | 102 ++++++++++++++++++++
drivers/dma-buf/dma-resv.c | 43 ++++++---
drivers/dma-buf/st-dma-resv.c | 2 +-
drivers/gpu/drm/scheduler/sched_entity.c | 29 +++++-
drivers/gpu/drm/scheduler/sched_main.c | 50 +++++++---
drivers/gpu/drm/xe/xe_bo.c | 22 +----
drivers/gpu/drm/xe/xe_guc_submit.c | 3 +
drivers/gpu/drm/xe/xe_hw_engine_group.c | 4 +-
drivers/gpu/drm/xe/xe_migrate.c | 4 +-
drivers/gpu/drm/xe/xe_preempt_fence.c | 81 +++++-----------
drivers/gpu/drm/xe/xe_preempt_fence.h | 2 +-
drivers/gpu/drm/xe/xe_preempt_fence_types.h | 11 +--
drivers/gpu/drm/xe/xe_pt.c | 12 +--
drivers/gpu/drm/xe/xe_vm.c | 22 +----
include/drm/gpu_scheduler.h | 15 +++
include/linux/dma-fence-preempt.h | 54 +++++++++++
include/linux/dma-fence.h | 1 +
include/linux/dma-resv.h | 24 +++--
19 files changed, 326 insertions(+), 157 deletions(-)
create mode 100644 drivers/dma-buf/dma-fence-preempt.c
create mode 100644 include/linux/dma-fence-preempt.h