On 10/8/23 19:27, Bragatheswaran Manickavel wrote:
Running checkpatch.pl on nouveau_drm.h identified
a few warnings. Fix them in this patch.
WARNING: Missing or malformed SPDX-License-Identifier tag in line 1
+/*
WARNING: space prohibited between function name and open parenthesis '('
On 10/13/23 03:18, Ma Ke wrote:
In nv17_tv_get_hd_modes(), the return value of drm_mode_duplicate()
is assigned to mode, which will lead to a NULL pointer dereference on
failure of drm_mode_duplicate(). The same applies to drm_cvt_mode().
Add checks to avoid a NULL pointer dereference.
is accelerated.
5) Provide some convenience functions for common patterns.
Big thanks to Boris Brezillon for his help to figure out locking for
drivers updating the GPU VA space within the fence signalling path.
Suggested-by: Matthew Brost
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c
the GPUVM's GEM objects. Hence, make use of
it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c| 4 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 57 -
drivers/gpu/drm/nouveau/nouveau_exec.h | 4 -
drivers/gpu/drm/nouveau/nouveau_sched.c | 9 +-
drivers
for
this idea goes to the developers of amdgpu.
Cc: Christian König
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 335 +
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 64 +++--
include/drm/drm_gem.h | 32 +--
include/drm
er to make sure the object providing the shared dma-resv can't be
freed up before the objects making use of it, let every such GEM object
take a reference on it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 11 +--
drivers/gpu/drm/nouveau/nouveau_bo.h | 5
Use drm_WARN() and drm_WARN_ON() variants to indicate to drivers the
context in which the failing VM resides.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 32 ++
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 3 ++-
include/drm/drm_gpuvm.h
Introduce flags for struct drm_gpuvm; this is required by subsequent
commits.
Reviewed-by: Thomas Hellström
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 3 +++
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
include/drm/drm_gpuvm.h| 16
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Reviewed-by: Thomas Hellström
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c
nce get() calls with pointer assignments
- call drm_gem_object_put() after vm_bo_free() callback
- make lockdep checks explicit for drm_gpuvm_bo_* functions
- improve documentation of struct drm_gpuvm_bo
- fix a few documentation typos and style issues
- use BIT() instead of shift ops for enum drm_gpuvm_
On 10/13/23 15:00, Thomas Hellström wrote:
On Fri, 2023-10-13 at 13:51 +0200, Danilo Krummrich wrote:
On 10/13/23 13:38, Thomas Hellström wrote:
On Mon, 2023-10-09 at 01:32 +0200, Danilo Krummrich wrote:
Provide a common dma-resv for GEM objects not being used outside
of
this
GPU-VM
On Fri, Oct 13, 2023 at 02:30:29PM +0200, Thomas Hellström wrote:
> On Mon, 2023-10-09 at 01:32 +0200, Danilo Krummrich wrote:
> > Add an abstraction layer between the drm_gpuva mappings of a
> > particular
> > drm_gem_object and this GEM object itself. The a
t; to access GPL-only kernel symbols.
>
> Much of the ideas and algorithms used in the drm_gpuvm code is already
> present in one way or another in MIT-licensed code.
>
> Cc: Danilo Krummrich
> Cc: airl...@gmail.com
> Cc: dan...@ffwll.ch
> Cc: linux-ker...@vger.kernel.o
On 10/10/23 08:40, Thomas Hellström wrote:
On 10/9/23 01:32, Danilo Krummrich wrote:
Currently the DRM GPUVM offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations
On 10/10/23 08:26, Thomas Hellström wrote:
On 10/9/23 16:45, Danilo Krummrich wrote:
On 10/9/23 15:36, Thomas Hellström wrote:
On 10/9/23 01:32, Danilo Krummrich wrote:
Currently the DRM GPUVM offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA
On 10/13/23 13:38, Thomas Hellström wrote:
On Mon, 2023-10-09 at 01:32 +0200, Danilo Krummrich wrote:
Provide a common dma-resv for GEM objects not being used outside of
this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM
On 10/7/23 05:23, Ma Ke wrote:
In nv17_tv_get_ld_modes(), the return value of drm_mode_duplicate()
is assigned to mode, which will lead to a NULL pointer dereference
on failure of drm_mode_duplicate(). Add a check to avoid a NULL pointer dereference.
Signed-off-by: Ma Ke
Reviewed-by: Danilo Krummrich
Cc: Dave Airlie
Cc: Danilo Krummrich
Cc: Karol Herbst
Cc: Lyude Paul
Cc: dri-devel@lists.freedesktop.org
Cc: nouv...@lists.freedesktop.org
Cc: Bragatheswaran Manickavel
---
v2: update commit text; somehow I sent a version of the patch before
adding the full text.
v1: https://
On 10/12/23 03:52, Luben Tuikov wrote:
Hi,
Thanks for fixing the title and submitting a v2 of this patch. Comments inlined
below.
On 2023-10-09 18:35, Danilo Krummrich wrote:
Currently, job flow control is implemented simply by limiting the number
of jobs in flight. Therefore, a scheduler
Hi Matt,
Can you please address my comments from V3 and V4?
https://lore.kernel.org/all/076891e9-b2ce-4c7f-833d-157aca5cd...@amd.com/T/#m34ccee55e37ca47c87adf01439585d0bd187e3a0
- Danilo
On 10/12/23 01:58, Matthew Brost wrote:
As a prerequisite to merging the new Intel Xe DRM driver [1] [2],
On 10/10/23 15:37, Sarah Walker wrote:
From: Donald Robson
Determining the start and range of the unmap stage of a remap op is a
common piece of code currently implemented by multiple drivers. Add a
helper for this.
Changes since v6:
- Remove use of __always_inline
Signed-off-by: Donald
-off-by: Danilo Krummrich
---
Changes in V2:
==
- fixed up influence on scheduling fairness due to consideration of a job's
size
- If we reach a ready entity in drm_sched_select_entity() but can't actually
queue a job from it due to size limitations, just give up and go
On 10/9/23 16:45, Danilo Krummrich wrote:
On 10/9/23 15:36, Thomas Hellström wrote:
On 10/9/23 01:32, Danilo Krummrich wrote:
Currently the DRM GPUVM offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform
On 10/9/23 15:36, Thomas Hellström wrote:
On 10/9/23 01:32, Danilo Krummrich wrote:
Currently the DRM GPUVM offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU
is accelerated.
5) Provide some convenience functions for common patterns.
Big thanks to Boris Brezillon for his help to figure out locking for
drivers updating the GPU VA space within the fence signalling path.
Suggested-by: Matthew Brost
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c
the GPUVM's GEM objects. Hence, make use of
it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c| 4 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 52 +++--
drivers/gpu/drm/nouveau/nouveau_exec.h | 4 -
drivers/gpu/drm/nouveau/nouveau_sched.h | 4 +-
drivers/gpu
er to make sure the object providing the shared dma-resv can't be
freed up before the objects making use of it, let every such GEM object
take a reference on it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 11 +--
drivers/gpu/drm/nouveau/nouveau_bo.h | 5
for
this idea goes to the developers of amdgpu.
Cc: Christian König
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 332 +
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 64 +++--
include/drm/drm_gem.h | 32 +--
include/drm
Introduce flags for struct drm_gpuvm; this is required by subsequent
commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 4 +++-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
include/drm/drm_gpuvm.h| 17 -
3 files changed, 20
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 56
_unlink() (Thomas)
- fix commit message wording (Thomas)
- fix kernel doc warnings (kernel test robot)
Danilo Krummrich (6):
drm/gpuvm: add common dma-resv per struct drm_gpuvm
drm/gpuvm: add drm_gpuvm_flags to drm_gpuvm
drm/gpuvm: add an abstraction for a VM / BO combination
drm/gpuvm: t
On 10/5/23 13:51, Thomas Hellström wrote:
Hi,
On 9/28/23 21:16, Danilo Krummrich wrote:
This patch adds an abstraction layer between the drm_gpuva mappings of
NIT: imperative: s/This patch adds/Add/
a particular drm_gem_object and this GEM object itself. The abstraction
represents
Hi Thomas,
On 10/5/23 11:35, Thomas Hellström wrote:
Hi, Danilo
On 9/28/23 21:16, Danilo Krummrich wrote:
Currently GPUVM offers common infrastructure to track GPU VA allocations
and mappings, generically connect GPU VA mappings to their backing
buffers and perform more complex mapping
On 10/4/23 19:57, Thomas Hellström wrote:
On Wed, 2023-10-04 at 19:17 +0200, Danilo Krummrich wrote:
On 10/4/23 17:29, Thomas Hellström wrote:
On Wed, 2023-10-04 at 14:57 +0200, Danilo Krummrich wrote:
On 10/3/23 11:11, Thomas Hellström wrote:
+
+/**
+ * drm_gpuvm_bo_evict() - add
On 10/4/23 17:29, Thomas Hellström wrote:
On Wed, 2023-10-04 at 14:57 +0200, Danilo Krummrich wrote:
On 10/3/23 11:11, Thomas Hellström wrote:
+
+/**
+ * drm_gpuvm_bo_evict() - add / remove a drm_gpuvm_bo to /
from the drm_gpuvm's
+ * evicted list
+ * @vm_bo: the drm_gpuvm_bo to add or remove
On 10/3/23 19:37, Thomas Hellström wrote:
Hi, Danilo
On Tue, 2023-10-03 at 18:55 +0200, Danilo Krummrich wrote:
It seems like we're mostly aligned on this series, except for the key
controversy we're discussing for a few versions now: locking of the
internal
lists. Hence, let's just re-iterate
On 10/3/23 11:11, Thomas Hellström wrote:
+
+/**
+ * drm_gpuvm_bo_evict() - add / remove a drm_gpuvm_bo to / from the
drm_gpuvm's
+ * evicted list
+ * @vm_bo: the drm_gpuvm_bo to add or remove
+ * @evict: indicates whether the object is evicted
+ *
+ * Adds a drm_gpuvm_bo to or removes it from the
It seems like we're mostly aligned on this series, except for the key
controversy we're discussing for a few versions now: locking of the internal
lists. Hence, let's just re-iterate the options we have to get this out of the
way.
(1) The spinlock dance. This basically works for every use case,
of a given device have the same ring size.
Acked-by: Faith Ekstrand
Signed-off-by: Danilo Krummrich
---
Changes in V2
=
- consider the extra slot required by a job's HW fence
---
drivers/gpu/drm/nouveau/nouveau_abi16.c | 21 +
drivers/gpu/drm/nouveau/nouveau_chan.c
Use channel class definitions instead of magic numbers.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c
b/drivers/gpu/drm/nouveau/nouveau_chan.c
index
Use actual struct nvif_mclass instead of identical anonymous struct.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c
b/drivers/gpu/drm/nouveau
the GPUVM's GEM objects. Hence, make use of
it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c| 4 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 52 +++--
drivers/gpu/drm/nouveau/nouveau_exec.h | 4 -
drivers/gpu/drm/nouveau/nouveau_sched.h | 4 +-
drivers/gpu
, hence the credit for
this idea goes to the developers of amdgpu.
Cc: Christian König
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 334 +
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 64 +++--
include/drm/drm_gem.h | 32
er to make sure the object providing the shared dma-resv can't be
freed up before the objects making use of it, let every such GEM object
take a reference on it.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 11 +--
drivers/gpu/drm/nouveau/nouveau_bo.h | 5
is accelerated.
5) Provide some convenience functions for common patterns.
Big thanks to Boris Brezillon for his help to figure out locking for
drivers updating the GPU VA space within the fence signalling path.
Suggested-by: Matthew Brost
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c
Introduce flags for struct drm_gpuvm; this is required by subsequent
commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 4 +++-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
include/drm/drm_gpuvm.h| 17 -
3 files changed, 20
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 56
in drm-misc-next
- f72c2db47080 ("drm/gpuvm: rename struct drm_gpuva_manager to struct
drm_gpuvm")
- fe7acaa727e1 ("drm/gpuvm: allow building as module")
- 78f54469b871 ("drm/nouveau: uvmm: rename 'umgr' to 'base'")
Danilo Krummrich (6):
drm/gpuvm:
On 9/22/23 13:45, Boris Brezillon wrote:
On Wed, 20 Sep 2023 16:42:40 +0200
Danilo Krummrich wrote:
+ /**
+* @DRM_GPUVM_RESV_PROTECTED: GPUVM is protected externally by the
+* GPUVM's dma_resv lock
I think we need to be more specific, and list the fields/operations
On 9/22/23 13:58, Boris Brezillon wrote:
On Wed, 20 Sep 2023 16:42:39 +0200
Danilo Krummrich wrote:
+/**
+ * enum drm_gpuvm_flags - flags for struct drm_gpuvm
+ */
+enum drm_gpuvm_flags {
+ /**
+* @DRM_GPUVM_USERBITS: user defined bits
+*/
+ DRM_GPUVM_USERBITS = (1
On 9/25/23 17:59, Arnd Bergmann wrote:
From: Arnd Bergmann
After a recent change, two variables are only used in an #ifdef:
drivers/gpu/drm/nouveau/dispnv50/disp.c: In function 'nv50_sor_atomic_disable':
drivers/gpu/drm/nouveau/dispnv50/disp.c:1569:13: error: unused variable 'ret'
On 9/27/23 14:15, Christian König wrote:
Am 27.09.23 um 14:11 schrieb Danilo Krummrich:
On 9/27/23 13:54, Christian König wrote:
Am 26.09.23 um 09:11 schrieb Boris Brezillon:
On Mon, 25 Sep 2023 19:55:21 +0200
Christian König wrote:
Am 25.09.23 um 14:55 schrieb Boris Brezillon
On 9/27/23 13:54, Christian König wrote:
Am 26.09.23 um 09:11 schrieb Boris Brezillon:
On Mon, 25 Sep 2023 19:55:21 +0200
Christian König wrote:
Am 25.09.23 um 14:55 schrieb Boris Brezillon:
+The Imagination team, who are probably interested too.
On Mon, 25 Sep 2023 00:43:06 +0200
Danilo
On 9/27/23 09:25, Boris Brezillon wrote:
On Wed, 27 Sep 2023 02:13:59 +0200
Danilo Krummrich wrote:
On 9/26/23 22:43, Luben Tuikov wrote:
Hi,
On 2023-09-24 18:43, Danilo Krummrich wrote:
Currently, job flow control is implemented simply by limiting the number
of jobs in flight. Therefore
of a given device have the same ring size.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_abi16.c | 19 +++
drivers/gpu/drm/nouveau/nouveau_chan.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_dma.h | 3 +++
drivers/gpu/drm/nouveau/nouveau_exec.c | 7
Use channel class definitions instead of magic numbers.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c
b/drivers/gpu/drm/nouveau/nouveau_chan.c
index
Use actual struct nvif_mclass instead of identical anonymous struct.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_chan.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c
b/drivers/gpu/drm/nouveau
Hi,
On 9/27/23 01:48, Luben Tuikov wrote:
Hi,
Please also CC me to the whole set, as opposed to just one patch of the set.
And so in the future.
There is no series. I created a series in the first place, but finally decided
to
send this one and a few driver patches separately. However, I
On 9/26/23 22:43, Luben Tuikov wrote:
Hi,
On 2023-09-24 18:43, Danilo Krummrich wrote:
Currently, job flow control is implemented simply by limiting the number
of jobs in flight. Therefore, a scheduler is initialized with a
submission limit that corresponds to a certain number of jobs
Hi,
On 9/26/23 07:07, Stephen Rothwell wrote:
Hi all,
After merging the drm-misc tree, today's linux-next build (htmldocs)
produced this warning:
Error: Cannot open file /home/sfr/next/next/drivers/gpu/drm/drm_gpuva_mgr.c
Error: Cannot open file /home/sfr/next/next/include/drm/drm_gpuva_mgr.h
: rename struct drm_gpuva_manager to struct
drm_gpuvm")
Reported-by: Stephen Rothwell
Closes:
https://lore.kernel.org/dri-devel/20230926150725.4cca5...@canb.auug.org.au/
Signed-off-by: Danilo Krummrich
---
Documentation/gpu/drm-mm.rst | 20 ++--
1 file changed, 10 insertions(+), 1
Since I will continue to work on Nouveau consistently, also beyond my
former and still ongoing VM_BIND/EXEC work, add myself to the list of
Nouveau maintainers.
Signed-off-by: Danilo Krummrich
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index
ame conditional block, along with the nv_connector variable
that becomes unused during that fix.
Fixes: 757033808c95b ("drm/nouveau/kms/nv50-: fixup sink D3 before tearing down
link")
Signed-off-by: Arnd Bergmann
Reviewed-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/dispnv
On 9/19/23 13:44, Danilo Krummrich wrote:
Hi Matt,
On 9/19/23 07:01, Matthew Brost wrote:
As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
have been asked to merge our common DRM scheduler patches first.
This a continuation of a RFC [3] with all comments addressed, ready
always run concurrently and hence, free_job() work can never stall
run_job() work. For EXEC jobs we don't have this requirement, since an
EXEC job's free_job() does not need to take any locks that are directly
or indirectly held for allocations elsewhere.
Signed-off-by: Danilo Krummrich
Make use of the scheduler's submission limit and scheduler job's
submission unit count to account for the actual size of a job, such that
we fill up the ring efficiently.
Signed-off-by: Danilo Krummrich
---
This patch is based on Matt's scheduler work [1] and [2].
[1]
https://lore.kernel.org
dry.
In order to overcome this issue, allow for tracking the actual job size
instead of the number of jobs. Therefore, add a field to track a job's
submission units, which represents the number of units a job contributes
to the scheduler's submission limit.
Signed-off-by: Danilo Krummrich
On 9/21/23 16:34, Christian König wrote:
Am 21.09.23 um 16:25 schrieb Boris Brezillon:
On Thu, 21 Sep 2023 15:34:44 +0200
Danilo Krummrich wrote:
On 9/21/23 09:39, Christian König wrote:
Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
Provide a common dma-resv for GEM objects not being
On 9/21/23 16:25, Boris Brezillon wrote:
On Thu, 21 Sep 2023 15:34:44 +0200
Danilo Krummrich wrote:
On 9/21/23 09:39, Christian König wrote:
Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used
On 9/21/23 09:39, Christian König wrote:
Am 20.09.23 um 16:42 schrieb Danilo Krummrich:
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Signed
ctionality and opt-in for other features without setting any feature
flags, just by making use of the corresponding functions.
Big thanks to Boris Brezillon for his help to figure out locking for
drivers updating the GPU VA space within the fence signalling path.
Suggested-by: Matthew Brost
Signed-off-
Make use of the DRM GPUVA manager's GPU-VM common dma-resv, external GEM
object tracking, dma-resv locking, evicted GEM object tracking and
validation features.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_bo.c| 4 +-
drivers/gpu/drm/nouveau/nouveau_exec.c | 52
Introduce flags for struct drm_gpuvm; this is required by subsequent
commits.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 3 ++-
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 2 +-
include/drm/drm_gpuvm.h| 17 -
3 files changed, 19
, hence the credit for
this idea goes to the developers of amdgpu.
Cc: Christian König
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 309 ++---
drivers/gpu/drm/nouveau/nouveau_uvmm.c | 68 --
include/drm/drm_gem.h | 32
Rename struct drm_gpuvm within struct nouveau_uvmm from 'umgr' to 'base'.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_debugfs.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_exec.c| 4 +--
drivers/gpu/drm/nouveau/nouveau_uvmm.c| 32 +++
drivers/gpu
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 9 +++--
drivers
Currently, the DRM GPUVM does not have any core dependencies preventing
a module build.
Also, new features from subsequent patches require helpers (namely
drm_exec) which can be built as module.
Reviewed-by: Christian König
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/Kconfig
introduced for implementing a common dma-resv per GPU-VM
including tracking of external and evicted objects in subsequent
patches.
[1] Documentation/gpu/drm-vm-bind-async.rst
[2] Documentation/gpu/drm-vm-bind-locking.rst
Cc: Thomas Hellström
Cc: Matthew Brost
Signed-off-by: Danilo Krummrich
Thomas)
- documentation fixes
Danilo Krummrich (8):
drm/gpuvm: rename struct drm_gpuva_manager to struct drm_gpuvm
drm/gpuvm: allow building as module
drm/nouveau: uvmm: rename 'umgr' to 'base'
drm/gpuvm: add common dma-resv per struct drm_gpuvm
drm/gpuvm: add an abstraction for a V
On 9/16/23 16:24, Dan Carpenter wrote:
On Sat, Sep 16, 2023 at 01:41:43AM +0200, Danilo Krummrich wrote:
Hi Dan,
On 9/15/23 14:59, Dan Carpenter wrote:
The u_memcpya() function is supposed to return error pointers on
error. Returning NULL will lead to an Oops.
Fixes: 68132cc6d1bc ("no
On 9/19/23 14:21, Thomas Hellström wrote:
Hi Christian
On 9/19/23 14:07, Christian König wrote:
Am 13.09.23 um 17:46 schrieb Danilo Krummrich:
On 9/13/23 17:33, Christian König wrote:
Am 13.09.23 um 17:15 schrieb Danilo Krummrich:
On 9/13/23 16:26, Christian König wrote:
Am 13.09.23 um 14
Hi Matt,
On 9/19/23 07:01, Matthew Brost wrote:
As a prerequisite to merging the new Intel Xe DRM driver [1] [2], we
have been asked to merge our common DRM scheduler patches first.
This a continuation of a RFC [3] with all comments addressed, ready for
a full review, and hopefully in state
https://gitlab.freedesktop.org/nouvelles/kernel/-/commits/sched-single-entity/
[2]
https://lore.kernel.org/dri-devel/20230912021615.2086698-1-matthew.br...@intel.com/
On Sat, 2023-09-16 at 18:28 +0200, Danilo Krummrich wrote:
Always stop and re-start the scheduler in order to let the scheduler
f
On 9/18/23 13:03, Christian König wrote:
Am 16.09.23 um 19:52 schrieb Danilo Krummrich:
On 9/12/23 16:47, Matthew Brost wrote:
On Tue, Sep 12, 2023 at 11:57:30AM +0200, Christian König wrote:
Am 12.09.23 um 04:16 schrieb Matthew Brost:
Wait for pending jobs to be complete before signaling
On 9/14/23 19:15, Danilo Krummrich wrote:
On 9/14/23 19:13, Thomas Hellström wrote:
On Thu, 2023-09-14 at 17:27 +0200, Danilo Krummrich wrote:
On 9/14/23 13:32, Thomas Hellström wrote:
On 9/14/23 12:57, Danilo Krummrich wrote:
On 9/13/23 14:16, Danilo Krummrich wrote:
And validate() can
On 9/12/23 04:16, Matthew Brost wrote:
Provide documentation to guide in ways to teardown an entity.
Signed-off-by: Matthew Brost
---
Documentation/gpu/drm-mm.rst | 6 ++
drivers/gpu/drm/scheduler/sched_entity.c | 19 +++
2 files changed, 25 insertions(+)
On 9/12/23 16:47, Matthew Brost wrote:
On Tue, Sep 12, 2023 at 11:57:30AM +0200, Christian König wrote:
Am 12.09.23 um 04:16 schrieb Matthew Brost:
Wait for pending jobs to be complete before signaling queued jobs. This
ensures dma-fence signaling order is correct and also ensures the entity is
On 9/12/23 04:16, Matthew Brost wrote:
In XE, the new Intel GPU driver, a choice has been made to have a 1 to 1
mapping between a drm_gpu_scheduler and drm_sched_entity. At first this
seems a bit odd but let us explain the reasoning below.
1. In XE the submission order from multiple drm_sched_entity
uAPI")
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_exec.c | 2 +-
drivers/gpu/drm/nouveau/nouveau_sched.c | 12 +---
2 files changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_exec.c
b/drivers/gpu/drm/nouveau/nouveau_ex
On 9/16/23 16:26, Dan Carpenter wrote:
On Sat, Sep 16, 2023 at 05:24:04PM +0300, Dan Carpenter wrote:
On Sat, Sep 16, 2023 at 01:41:43AM +0200, Danilo Krummrich wrote:
Hi Dan,
On 9/15/23 14:59, Dan Carpenter wrote:
The u_memcpya() function is supposed to return error pointers on
error
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/nouveau/nouveau_fence.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c
b/drivers/gpu/drm/nouveau/nouveau_fence.c
index 61d9e70da9fd..ca762ea55413 100644
--- a/drivers/gpu/drm/nouveau/nouve
Hi Dan,
On 9/15/23 14:59, Dan Carpenter wrote:
The u_memcpya() function is supposed to return error pointers on
error. Returning NULL will lead to an Oops.
Fixes: 68132cc6d1bc ("nouveau/u_memcpya: use vmemdup_user")
Signed-off-by: Dan Carpenter
---
drivers/gpu/drm/nouveau/nouveau_drv.h | 2
On 9/14/23 19:21, Thomas Hellström wrote:
On Thu, 2023-09-14 at 18:36 +0200, Danilo Krummrich wrote:
On 9/14/23 15:48, Thomas Hellström wrote:
Hi, Danilo
Some additional minor comments as xe conversion progresses.
On 9/9/23 17:31, Danilo Krummrich wrote:
So far the DRM GPUVA manager offers
On 9/14/23 19:13, Thomas Hellström wrote:
On Thu, 2023-09-14 at 17:27 +0200, Danilo Krummrich wrote:
On 9/14/23 13:32, Thomas Hellström wrote:
On 9/14/23 12:57, Danilo Krummrich wrote:
On 9/13/23 14:16, Danilo Krummrich wrote:
And validate() can remove it while still holding all dma
On 9/14/23 15:48, Thomas Hellström wrote:
Hi, Danilo
Some additional minor comments as xe conversion progresses.
On 9/9/23 17:31, Danilo Krummrich wrote:
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings
On 9/14/23 13:32, Thomas Hellström wrote:
On 9/14/23 12:57, Danilo Krummrich wrote:
On 9/13/23 14:16, Danilo Krummrich wrote:
And validate() can remove it while still holding all dma-resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do
On 9/13/23 14:16, Danilo Krummrich wrote:
And validate() can remove it while still holding all dma-resv locks,
neat!
However, what if two tasks are trying to lock the VA space
concurrently? What
do we do when the drm_gpuvm_bo's refcount drops to zero in
drm_gpuva_unlink()?
Are we guaranteed
On 9/13/23 17:33, Christian König wrote:
Am 13.09.23 um 17:15 schrieb Danilo Krummrich:
On 9/13/23 16:26, Christian König wrote:
Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else
On 9/13/23 16:26, Christian König wrote:
Am 13.09.23 um 14:16 schrieb Danilo Krummrich:
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else than GPUVM updates from the IOCTL.
I think that this assumption is incorrect.
Well, more
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything else than GPUVM updates from the IOCTL.
On Wed, Sep 13, 2023 at 11:14:46AM +0200, Thomas Hellström wrote:
> Hi!
>
> On Wed, 2023-09-13 at 01:36 +0200, Danilo Krummrich wrote:
&