This patch series was initially motivated by a race condition (addressed
in PATCH 4/6) in which `job->file` was accessed without synchronization,
leading to use-after-free issues when a file descriptor was closed while
a job was still running.
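To make the failure mode concrete, below is a minimal userspace sketch
of the pattern, not the actual v3d code: all names are hypothetical and
pthread spinlocks stand in for the driver's locks. The point is that the
completion path only dereferences `job->file` under the same lock that
the release path holds while clearing it:

#include <pthread.h>
#include <stdlib.h>

struct file_priv {
	unsigned long jobs_completed;	/* per-fd GPU stats */
};

struct job {
	struct file_priv *file;		/* NULL once the fd is gone */
	pthread_spinlock_t *lock;	/* stands in for a queue lock */
};

/* Completion path: only touch job->file while holding the lock. */
static void job_update_stats(struct job *job)
{
	pthread_spin_lock(job->lock);
	if (job->file)			/* fd may have been released */
		job->file->jobs_completed++;
	pthread_spin_unlock(job->lock);
}

/* fd release path: detach the job from the file before freeing it. */
static void file_release(struct job *job, struct file_priv *file)
{
	pthread_spin_lock(job->lock);
	job->file = NULL;		/* completion path now skips stats */
	pthread_spin_unlock(job->lock);
	free(file);			/* safe: the job can't reach it */
}

int main(void)
{
	pthread_spinlock_t lock;
	struct file_priv *file = calloc(1, sizeof(*file));
	struct job job = { .file = file, .lock = &lock };

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	job_update_stats(&job);		/* fd still open: stats counted */
	file_release(&job, file);	/* fd closed while job "runs" */
	job_update_stats(&job);		/* no UAF: pointer was cleared */
	pthread_spin_destroy(&lock);
	return 0;
}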
However, beyond fixing this specific race, the series introduces broader
improvements to active-job management and locking. While PATCH 1/6, 2/6,
and 5/6 are primarily code refactors, PATCH 3/6 brings a significant
change to the locking scheme: previously, all queues shared the same
spinlock, which caused unnecessary contention during high GPU usage
across different queues. PATCH 3/6 allows each queue to operate more
independently (a rough sketch of this per-queue layout follows the
diffstat below). Finally, PATCH 6/6 addresses a race condition similar
to the one in PATCH 4/6, but this time on the per-file-descriptor reset
counter.

Best Regards,
- Maíra

---
v1 -> v2:
- Rebase on top of drm-misc-next.
- Link to v1: https://lore.kernel.org/r/20250719-v3d-queue-lock-v1-0-bcc61210f...@igalia.com

---
Maíra Canal (6):
      drm/v3d: Store a pointer to `struct v3d_file_priv` inside each job
      drm/v3d: Store the active job inside the queue's state
      drm/v3d: Replace a global spinlock with a per-queue spinlock
      drm/v3d: Address race-condition between per-fd GPU stats and fd release
      drm/v3d: Synchronous operations can't timeout
      drm/v3d: Protect per-fd reset counter against fd release

 drivers/gpu/drm/v3d/v3d_drv.c    | 14 ++++++-
 drivers/gpu/drm/v3d/v3d_drv.h    | 22 ++++-------
 drivers/gpu/drm/v3d/v3d_fence.c  | 11 +++---
 drivers/gpu/drm/v3d/v3d_gem.c    | 10 ++---
 drivers/gpu/drm/v3d/v3d_irq.c    | 68 +++++++++++++-------------------
 drivers/gpu/drm/v3d/v3d_sched.c  | 83 +++++++++++++++++++++-------------------
 drivers/gpu/drm/v3d/v3d_submit.c |  2 +-
 7 files changed, 104 insertions(+), 106 deletions(-)

---
base-commit: f9c67b019bc3c0324ee42c0dbfbb2d55726d751e
change-id: 20250718-v3d-queue-lock-59babfb548bc
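As a companion to the sketch above, here is a rough illustration of the
per-queue direction taken by PATCH 2/6 and 3/6, again with hypothetical
names and pthread primitives rather than the driver's actual types: each
queue owns its own lock and its own active-job pointer, so completion on
one queue no longer serializes against the others.

#include <pthread.h>
#include <stddef.h>

enum queue_id { Q_BIN, Q_RENDER, Q_TFU, Q_CSD, NUM_QUEUES };

struct job;				/* opaque for this sketch */

struct queue_state {
	pthread_spinlock_t lock;	/* was: one lock for all queues */
	struct job *active_job;		/* active job lives with its queue */
};

struct device_state {
	struct queue_state queue[NUM_QUEUES];
};

/* Completion path for one queue: contends only with that queue. */
static struct job *queue_take_active_job(struct device_state *dev,
					 enum queue_id q)
{
	struct queue_state *qs = &dev->queue[q];
	struct job *job;

	pthread_spin_lock(&qs->lock);
	job = qs->active_job;
	qs->active_job = NULL;
	pthread_spin_unlock(&qs->lock);
	return job;
}

int main(void)
{
	struct device_state dev;

	for (int i = 0; i < NUM_QUEUES; i++) {
		pthread_spin_init(&dev.queue[i].lock,
				  PTHREAD_PROCESS_PRIVATE);
		dev.queue[i].active_job = NULL;
	}
	queue_take_active_job(&dev, Q_RENDER);	/* returns NULL: no job */
	return 0;
}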