[Nouveau] [PATCH] nouveau: find the smallest page allocation to cover a buffer alloc.

2023-08-10 Thread Dave Airlie
From: Dave Airlie With the new uapi we don't have the comp flags on the allocation, so we shouldn't use the first size that works; we should iterate until we find the correct one. This reduces allocations from 2MB to 64k in many places. Fixes

[Nouveau] [PATCH drm-misc-next] drm/nouveau: sched: avoid job races between entities

2023-08-10 Thread Danilo Krummrich
If a sched job depends on a dma-fence from a job from the same GPU scheduler instance, but a different scheduler entity, the GPU scheduler only waits for the particular job to be scheduled, rather than for the job to fully complete. This is due to the GPU scheduler assuming that there is a

Re: [Nouveau] [PATCH] nouveau/u_memcpya: use vmemdup_user

2023-08-10 Thread Danilo Krummrich
On 8/10/23 20:50, Dave Airlie wrote: From: Dave Airlie I think there are limit checks in place for most things, but the new API wants to drop them. Add a limit check and use the vmemdup_user helper instead. Reviewed-by: Danilo Krummrich Signed-off-by: Dave Airlie ---

[Nouveau] [PATCH] nouveau/u_memcpya: use vmemdup_user

2023-08-10 Thread Dave Airlie
From: Dave Airlie I think there are limit checks in place for most things, but the new API wants to drop them. Add a limit check and use the vmemdup_user helper instead. Signed-off-by: Dave Airlie --- drivers/gpu/drm/nouveau/nouveau_drv.h | 19 +-- 1 file changed, 5

[Nouveau] [PATCH] nouveau/u_memcpya: use kvmalloc_array.

2023-08-10 Thread Dave Airlie
From: Dave Airlie I think there are limit checks in place for most things, but the new API wants to drop them. Signed-off-by: Dave Airlie --- drivers/gpu/drm/nouveau/nouveau_drv.h | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h

Re: [Nouveau] [PATCH drm-misc-next] drm/sched: support multiple rings per gpu_scheduler

2023-08-10 Thread Danilo Krummrich
On 8/10/23 08:34, Christian König wrote: On 10.08.23 at 00:17, Danilo Krummrich wrote: With the current mental model, every GPU scheduler instance represents a single HW ring, while every entity represents a software queue feeding into one or multiple GPU scheduler instances and hence into one

Re: [Nouveau] [PATCH drm-misc-next] drm/sched: support multiple rings per gpu_scheduler

2023-08-10 Thread Danilo Krummrich
On 8/10/23 06:31, Matthew Brost wrote: On Thu, Aug 10, 2023 at 12:17:23AM +0200, Danilo Krummrich wrote: With the current mental model, every GPU scheduler instance represents a single HW ring, while every entity represents a software queue feeding into one or multiple GPU scheduler instances

Re: [Nouveau] [PATCH drm-misc-next] drm/sched: support multiple rings per gpu_scheduler

2023-08-10 Thread Christian König
On 10.08.23 at 00:17, Danilo Krummrich wrote: With the current mental model, every GPU scheduler instance represents a single HW ring, while every entity represents a software queue feeding into one or multiple GPU scheduler instances and hence into one or multiple HW rings. This does not