On 8/10/23 08:34, Christian König wrote:
On 8/10/23 00:17, Danilo Krummrich wrote:
With the current mental model every GPU scheduler instance represents
a single HW ring, while every entity represents a software queue feeding
into one or multiple GPU scheduler instances and hence into one
On 8/10/23 06:31, Matthew Brost wrote:
On Thu, Aug 10, 2023 at 12:17:23AM +0200, Danilo Krummrich wrote:
With the current mental model every GPU scheduler instance represents
a single HW ring, while every entity represents a software queue feeding
into one or multiple GPU scheduler instances
.
While at it, remove some trailing empty lines.
Fixes: 9710631cc8f3 ("drm: add drm_exec selftests v4")
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/tests/drm_exec_test.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/tests/drm_exec_test.c
On 8/8/23 09:21, Boris Brezillon wrote:
On Thu, 3 Aug 2023 18:52:20 +0200
Danilo Krummrich wrote:
When no custom lock is set to protect a GEM's GPUVA list, lockdep checks
should fall back to the GEM objects dma-resv lock. With the current
implementation we're setting the lock_dep_map
other words, to pick up the scheduler's existing terminology,
prevent dependency pipelining.
Signed-off-by: Danilo Krummrich
---
Just before sending out this patch I was made aware of the "DRM Scheduler
changes for XE" [1] patch series.
However, I think bringing this alternative approach
the
> fence and then return without waiting.
Good catch!
Reviewed-by: Danilo Krummrich
>
> Signed-off-by: Faith Ekstrand
> Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
> Cc: Danilo Krummrich
> Cc: Dave Airlie
> ---
> drivers/gpu/drm/nouveau/nouveau
Hi Christian,
On 8/7/23 20:07, Christian König wrote:
On 8/3/23 18:52, Danilo Krummrich wrote:
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU scheduler's
VMAs can find their corresponding VM through their embedded struct
drm_gpuva which already carries a pointer to a struct drm_gpuva_manager
which the VM is based on. Hence, remove the struct nouveau_uvmm pointer
from struct nouveau_uvma to save a couple of bytes per mapping.
Signed-off-by: Danilo
Remove incorrect calls to mas_unlock() in the unwind path of
__nouveau_uvma_region_insert(). The region maple tree uses an external
lock instead, namely the global uvmm lock.
Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
Reported-by: kernel test robot
Signed-off-
Fix a copy-paste error causing the EXEC and VM_BIND syscalls' data
pointers to carry incorrect __user annotations.
Fixes: b88baab82871 ("drm/nouveau: implement new VM_BIND uAPI")
Reported-by: kernel test robot
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_
Cast the integer to a pointer-sized type first to keep the compiler
happy.
Fixes: 6b252cf42281 ("drm/nouveau: nvkm/vmm: implement raw ops to manage uvmm")
Reported-by: kernel test robot
Reported-by: Stephen Rothwell
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nvkm/
Fix call to nouveau_fence_emit() with wrong channel parameter.
Fixes: 7f2a0b50b2b2 ("drm/nouveau: fence: separate fence alloc and emit")
Reported-by: kernel test robot
Reported-by: Stephen Rothwell
Reviewed-by: Karol Herbst
Signed-off-by: Danilo Krummrich
---
drivers/gpu/d
The patch series provides a few fixes for the recently merged VM_BIND uAPI
mostly addressing a couple of warnings.
It also contains one patch to slightly reduce the memory footprint of
struct nouveau_uvma.
Danilo Krummrich (5):
nouveau/dmem: fix copy-paste error in nouveau_dmem_migrate_chunk
Fix call to nouveau_fence_emit() with wrong channel parameter.
Fixes: 7f2a0b50b2b2 ("drm/nouveau: fence: separate fence alloc and emit")
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drive
Provide the driver indirection iterating over all DRM GPU VA spaces to
enable the common 'gpuvas' debugfs file for dumping DRM GPU VA spaces.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++
1 file changed, 39
mechanism. DRM GEM object locking is
handled with drm_exec.
Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM
GPU scheduler for the asynchronous paths.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst |3 +
drivers
the address space and the corresponding map/unmap/sparse
operations to the upper layers.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 ++-
drivers/gpu/drm/nouveau/include/nvif/vmm.h| 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev
that the device is dead on the next EXEC or VM_BIND ioctl.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++---
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu
would hang forever.
To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++
drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
2 files changed
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/dispnv04/crtc.c | 9 -
drivers/gpu/drm/nouveau/nouveau_bo.c| 52 +++--
drivers/gpu/drm/nouveau/nouveau_chan.c | 6 ++-
drivers/gpu/drm/nouveau/nouveau_dmem.c | 9 +++--
drivers/gpu/drm/nouveau/nouveau_fence.c
Move the usercopy helpers to a common driver header file to make it
usable for the new API added in subsequent commits.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++
drivers/gpu/drm/nouveau/nouveau_gem.c | 26
Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 6 ++
1 file changed, 6 insertions(+)
diff
Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
drivers/gpu
eem to be things
we'd want to use nvif for.
Undeprecate and put them into the uapi header so we can just copy it
into mesa later.
v2: use uapi types.
Reviewed-by: Faith Ekstrand
Signed-off-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_abi16.h |
DRM_IOCTL_NOUVEAU_VM_BIND is synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
Reviewed-by: Faith Ekstrand
Reviewed-by: Dave Airlie
Co-developed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst | 8 ++
include/uapi/drm/nouveau_drm.h| 217
lock_dep_map
pointer only if an actual custom lock is set.
Fixes: e6303f323b1a ("drm: manager to keep track of GPUs VA mappings")
Reviewed-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
include/drm/drm_gem.h | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
ader fixup patch
GPUVA Manager:
- n/a (merged into drm-misc/drm-misc-next since V8)
Danilo Krummrich (11):
drm/gem: fix lockdep check for dma-resv lock
drm/nouveau: new VM_BIND uAPI interfaces
drm/nouveau: get vmm via nouveau_cli_vmm()
drm/nouveau: bo: initialize GEM GPU VA inter
mechanism. DRM GEM object locking is
handled with drm_exec.
Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM
GPU scheduler for the asynchronous paths.
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst |3 +
drivers/gpu/drm/nouveau/Kbuild
Provide the driver indirection iterating over all DRM GPU VA spaces to
enable the common 'gpuvas' debugfs file for dumping DRM GPU VA spaces.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++
1 file changed, 39 insertions(+)
diff --git
the address space and the corresponding map/unmap/sparse
operations to the upper layers.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 ++-
drivers/gpu/drm/nouveau/include/nvif/vmm.h| 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 20 +-
drivers
that the device is dead on the next EXEC or VM_BIND ioctl.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++---
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau
would hang forever.
To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++
drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
2 files changed, 8 insertions(+), 1
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU scheduler's run_job() callback) we
need to separate fence allocation and fence emitting.
Signed-off-by: Danilo
Move the usercopy helpers to a common driver header file to make it
usable for the new API added in subsequent commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++
drivers/gpu/drm/nouveau/nouveau_gem.c | 26
Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm
Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_chan.c
DRM_IOCTL_NOUVEAU_VM_BIND is synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
Co-authored-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst | 8 ++
include/uapi/drm/nouveau_drm.h| 217 ++
2 files changed, 225
lock_dep_map
pointer only if an actual custom lock is set.
Fixes: e6303f323b1a ("drm: manager to keep track of GPUs VA mappings")
Signed-off-by: Danilo Krummrich
---
include/drm/drm_gem.h | 15 +--
1 file changed, 9 insertions(+), 6 deletions(-)
diff --git a/include/drm/d
next since V8)
DRM GEM:
- added a patch to fix lockdep checks of GEM GPUVA locks
Danilo Krummrich (11):
drm/gem: fix lockdep check for dma-resv lock
drm/nouveau: new VM_BIND uapi interfaces
drm/nouveau: get vmm via nouveau_cli_vmm()
drm/nouveau: bo: initialize GEM GPU VA interface
d
On 7/31/23 15:35, Boris Brezillon wrote:
+Danilo, to confirm my understanding of the gpuva remap operation is
correct.
Your understanding is correct.
Unfortunately, re-mapping things has such implications.
I'm currently working on tracking external GEM objects in the GPUVA
manager, where,
.
While at it, remove some trailing empty lines.
Fixes: 9710631cc8f3 ("drm: add drm_exec selftests v4")
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/tests/drm_exec_test.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/tests/drm_exec_test.c
+ PIIX, 1996), BIOS
1.16.2-1.fc37 04/01/2014
[15:05:50] RIP: 0010:drm_gem_private_object_init+0x60/0xc0
Fixes: e6303f323b1a ("drm: manager to keep track of GPUs VA mappings")
Signed-off-by: Arthur Grillo
Tested-by: Danilo Krummrich
Acked-by: Danilo Krummrich
---
drivers/gpu
On 7/24/23 09:27, Boris Brezillon wrote:
On Fri, 21 Jul 2023 02:06:16 +0800
kernel test robot wrote:
tree: git://anongit.freedesktop.org/drm/drm-misc for-linux-next
head: c7a472297169156252a50d76965eb36b081186e2
commit: 4f66feeab173bd73e71028b8c2e1dcea07e32dd5 [2/2] drm: debugfs: provide
On 7/25/23 18:43, Danilo Krummrich wrote:
On 7/25/23 18:16, Faith Ekstrand wrote:
Thanks for the detailed write-up! That would definitely explain it. If
I remember, I'll try to do a single-threaded run or two. If your
theory is correct, there should be no real perf difference when
running
On 7/25/23 18:16, Faith Ekstrand wrote:
On Mon, Jul 24, 2023 at 9:04 PM Danilo Krummrich wrote:
On 7/22/23 17:12, Faith Ekstrand wrote:
> On Wed, Jul 19, 2023 at 7:15 PM Danilo Krummrich wrote:
On 7/22/23 17:12, Faith Ekstrand wrote:
On Wed, Jul 19, 2023 at 7:15 PM Danilo Krummrich wrote:
This commit provides the implementation for the new uapi motivated
by the
Vulkan API. It allows user mode drivers (UMDs) to:
1) Initialize a GP
On 7/22/23 00:58, Faith Ekstrand wrote:
On Wed, Jul 19, 2023 at 7:15 PM Danilo Krummrich wrote:
This commit provides the interfaces for the new UAPI motivated by the
Vulkan API. It allows user mode drivers (UMDs) to:
1) Initialize a GP
On 7/20/23 12:44, Steven Price wrote:
On 20/07/2023 01:14, Danilo Krummrich wrote:
Add infrastructure to keep track of GPU virtual address (VA) mappings
with a dedicated VA space manager implementation.
New UAPIs, motivated by Vulkan sparse memory bindings graphics drivers
start implementing
Provide the driver indirection iterating over all DRM GPU VA spaces to
enable the common 'gpuvas' debugfs file for dumping DRM GPU VA spaces.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++
1 file changed, 39 insertions(+)
diff --git
mechanism. DRM GEM object locking is
handled with drm_exec.
Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM
GPU scheduler for the asynchronous paths.
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst |3 +
drivers/gpu/drm/nouveau/Kbuild
that the device is dead on the next EXEC or VM_BIND ioctl.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++---
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau
the address space and the corresponding map/unmap/sparse
operations to the upper layers.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 ++-
drivers/gpu/drm/nouveau/include/nvif/vmm.h| 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 20 +-
drivers
would hang forever.
To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++
drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
2 files changed, 8 insertions(+), 1
Move the usercopy helpers to a common driver header file to make it
usable for the new API added in subsequent commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++
drivers/gpu/drm/nouveau/nouveau_gem.c | 26
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU scheduler's run_job() callback) we
need to separate fence allocation and fence emitting.
Signed-off-by: Danilo
Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm
Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_chan.c
spaces and a DRM core infrastructure, hence we need the
indirection via the driver iterating its maintained DRM GPU VA spaces.
Reviewed-by: Boris Brezillon
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_debugfs.c | 40 +++
include/drm/drm_debugfs.h | 25
DRM_IOCTL_NOUVEAU_VM_BIND is synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
Co-authored-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst | 8 ++
include/uapi/drm/nouveau_drm.h| 209 ++
2 files changed, 217
Reviewed-by: Boris Brezillon
Tested-by: Matthew Brost
Tested-by: Donald Robson
Suggested-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/drm-mm.rst| 36 +
drivers/gpu/drm/Makefile|1 +
drivers/gpu/drm/drm_gem.c |3 +
drivers/gpu/drm/drm_gpuva_mgr.c
drm_gem.h. (Boris)
- Fix code style issues pointed out by Thomas.
- Switch to EXPORT_SYMBOL_GPL(). (Christoph)
Changes in V8
=
Nouveau:
- n/a
GPUVA Manager:
- Fix documentation about locking the GEM's GPUVA list. (Donald)
- Fix a few minor checkpatch warning
:
- Address review comments (Danilo Krummrich)
- Formatting fixes
v4:
- Address typos (Francois Dugast)
- Explain why in-fences are not allowed for VM_BIND operations for long-
running workloads (Matthew Brost)
v5:
- More typo- and style fixing
- Further clarify the implications of disallowing
to
drm-misc-next.
- Danilo
On Thu, 2023-07-13 at 19:03 +0200, Danilo Krummrich wrote:
+
+/**
+ * DOC: Locking
+ *
+ * Generally, the GPU VA manager does not take care of locking itself; it is
+ * the driver's responsibility to take care of locking. Drivers might want to
+ * protect the following
Provide the driver indirection iterating over all DRM GPU VA spaces to
enable the common 'gpuvas' debugfs file for dumping DRM GPU VA spaces.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++
1 file changed, 39 insertions(+)
diff --git
the address space and the corresponding map/unmap/sparse
operations to the upper layers.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 ++-
drivers/gpu/drm/nouveau/include/nvif/vmm.h| 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 20 +-
drivers
mechanism. DRM GEM object locking is
handled with drm_exec.
Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM
GPU scheduler for the asynchronous paths.
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst |3 +
drivers/gpu/drm/nouveau/Kbuild
would hang forever.
To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++
drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
2 files changed, 8 insertions(+), 1
Move the usercopy helpers to a common driver header file to make it
usable for the new API added in subsequent commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++
drivers/gpu/drm/nouveau/nouveau_gem.c | 26
that the device is dead on the next EXEC or VM_BIND ioctl.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++---
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU scheduler's run_job() callback) we
need to separate fence allocation and fence emitting.
Signed-off-by: Danilo
Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm
Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_chan.c
DRM_IOCTL_NOUVEAU_VM_BIND is synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
Co-authored-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst | 8 ++
include/uapi/drm/nouveau_drm.h| 209 ++
2 files changed, 217
Reviewed-by: Boris Brezillon
Tested-by: Matthew Brost
Tested-by: Donald Robson
Suggested-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/drm-mm.rst| 36 +
drivers/gpu/drm/Makefile|1 +
drivers/gpu/drm/drm_gem.c |3 +
drivers/gpu/drm/drm_gpuva_mgr.c
spaces and a DRM core infrastructure, hence we need the
indirection via the driver iterating its maintained DRM GPU VA spaces.
Reviewed-by: Boris Brezillon
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_debugfs.c | 40 +++
include/drm/drm_debugfs.h | 25
-by: Christian König
Reviewed-by: Boris Brezillon
Reviewed-by: Danilo Krummrich
Tested-by: Danilo Krummrich
Acked-by: Alex Deucher
Link:
https://patchwork.freedesktop.org/patch/msgid/20230711133122.3710-2-christian.koe...@amd.com
---
Documentation/gpu/drm-mm.rst | 12 ++
drivers/gpu/drm/Kconfig
- Switch to EXPORT_SYMBOL_GPL(). (Christoph)
Christian König (1):
drm: execution context for GEM buffers v7
Danilo Krummrich (12):
drm: manager to keep track of GPUs VA mappings
drm: debugfs: provide infrastructure to dump a DRM GPU VA space
drm/nouveau: new VM_BIND uapi interfaces
drm/nouveau: g
On 7/7/23 13:00, Boris Brezillon wrote:
On Fri, 30 Jun 2023 00:25:18 +0200
Danilo Krummrich wrote:
+/**
+ * drm_gpuva_for_each_va_range - iterator to walk over a range of &drm_gpuvas
+ * @va__: &drm_gpuva structure to assign to in each iteration step
+ * @mgr__: &drm_gpuva_manager to walk over
On 7/7/23 09:57, Boris Brezillon wrote:
On Thu, 6 Jul 2023 20:26:42 +0200
Boris Brezillon wrote:
On Fri, 30 Jun 2023 00:25:18 +0200
Danilo Krummrich wrote:
+#ifdef CONFIG_LOCKDEP
+typedef struct lockdep_map *lockdep_map_p;
+#define drm_gpuva_manager_ext_assert_held(mgr
Hi Donald,
On 7/6/23 17:45, Donald Robson wrote:
On Fri, 2023-06-30 at 00:25 +0200, Danilo Krummrich wrote:
+#ifdef CONFIG_LOCKDEP
+typedef struct lockdep_map *lockdep_map_p;
+#define drm_gpuva_manager_ext_assert_held(mgr) \
+ lockdep_assert(lock_is_held((mgr)->ext_l
Hi Thomas,
On 7/6/23 10:49, Thomas Hellström (Intel) wrote:
Hi, Danilo
Some review comments below:
On 6/30/23 00:25, Danilo Krummrich wrote:
Add infrastructure to keep track of GPU virtual address (VA) mappings
with a dedicated VA space manager implementation.
New UAPIs, motivated by Vulkan
Hi Boris,
On 6/30/23 10:02, Boris Brezillon wrote:
Hi Danilo,
On Fri, 30 Jun 2023 00:25:18 +0200
Danilo Krummrich wrote:
+ * int driver_gpuva_remap(struct drm_gpuva_op *op, void *__ctx)
+ * {
+ * struct driver_context *ctx = __ctx;
+ *
+ * drm_gpuva_remap(ctx
Provide the driver indirection iterating over all DRM GPU VA spaces to
enable the common 'gpuvas' debugfs file for dumping DRM GPU VA spaces.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 39 +++
1 file changed, 39 insertions(+)
diff --git
the address space and the corresponding map/unmap/sparse
operations to the upper layers.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/include/nvif/if000c.h | 26 ++-
drivers/gpu/drm/nouveau/include/nvif/vmm.h| 19 +-
.../gpu/drm/nouveau/include/nvkm/subdev/mmu.h | 20 +-
drivers
mechanism. DRM GEM object locking is
handled with drm_exec.
Both DRM_IOCTL_NOUVEAU_VM_BIND and DRM_IOCTL_NOUVEAU_EXEC use the DRM
GPU scheduler for the asynchronous paths.
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst |3 +
drivers/gpu/drm/nouveau/Kbuild
that the device is dead on the next EXEC or VM_BIND ioctl.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 14 +++---
drivers/gpu/drm/nouveau/nouveau_chan.h | 1 +
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau
would hang forever.
To fix that, fail to emit a new fence on a killed fence context with
-ENODEV to unblock the job.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 7 +++
drivers/gpu/drm/nouveau/nouveau_fence.h | 2 +-
2 files changed, 8 insertions(+), 1
Move the usercopy helpers to a common driver header file to make it
usable for the new API added in subsequent commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 26 ++
drivers/gpu/drm/nouveau/nouveau_gem.c | 26
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU scheduler's run_job() callback) we
need to separate fence allocation and fence emitting.
Signed-off-by: Danilo
Initialize the GEM's DRM GPU VA manager interface in preparation for the
(u)vmm implementation, provided by subsequent commits, to make use of it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/gpu/drm
Provide a getter function for the client's current vmm context. Since
we'll add a new (u)vmm context for UMD bindings in subsequent commits,
this will keep the code clean.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_chan.c
spaces and a DRM core infrastructure, hence we need the
indirection via the driver iterating its maintained DRM GPU VA spaces.
Reviewed-by: Boris Brezillon
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_debugfs.c | 40 +++
include/drm/drm_debugfs.h | 25
DRM_IOCTL_NOUVEAU_VM_BIND is synchronous processing,
DRM_IOCTL_NOUVEAU_EXEC supports asynchronous processing only.
Co-authored-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/driver-uapi.rst | 8 ++
include/uapi/drm/nouveau_drm.h| 209 ++
2 files changed, 217
-by: Dave Airlie
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/drm-mm.rst| 36 +
drivers/gpu/drm/Makefile|1 +
drivers/gpu/drm/drm_gem.c |3 +
drivers/gpu/drm/drm_gpuva_mgr.c | 1743 +++
include/drm/drm_drv.h |6 +
include
From: Christian König
This adds the infrastructure for an execution context for GEM buffers
which is similar to the existing TTMs execbuf util and intended to replace
it in the long term.
The basic functionality is that we abstracts the necessary loop to lock
many different GEM buffers with
Christian König (1):
drm: execution context for GEM buffers v5
Danilo Krummrich (12):
drm: manager to keep track of GPUs VA mappings
drm: debugfs: provide infrastructure to dump a DRM GPU VA space
drm/nouveau: new VM_BIND uapi interfaces
drm/nouveau: get vmm via nouveau_cli_vmm()
drm/n
On 6/23/23 09:16, Christian König wrote:
On 6/22/23 17:07, Danilo Krummrich wrote:
On 6/22/23 17:04, Danilo Krummrich wrote:
On 6/22/23 16:42, Christian König wrote:
On 6/22/23 16:22, Danilo Krummrich wrote:
On 6/22/23 15:54, Christian König wrote:
On 6/20/23 14:23,
ock_slow(obj->resv, >ticket);
> + }
> +
> + ret = drm_exec_obj_locked(exec, obj);
> + if (unlikely(ret)) {
> + dma_resv_unlock(obj->resv);
> + goto error_dropref;
> + }
> +
> + swap(exec->prelocked, obj);
On 6/22/23 17:19, Boris Brezillon wrote:
Hi Danilo,
On Thu, 22 Jun 2023 15:58:23 +0200
Danilo Krummrich wrote:
Hi Boris,
On 6/22/23 15:01, Boris Brezillon wrote:
Hi Danilo,
On Tue, 20 Jun 2023 14:46:07 +0200
Danilo Krummrich wrote:
The only thing I'm worried about is the 'sync