On 06/11/2025 13:23, Christian König wrote:
On 11/4/25 16:12, Tvrtko Ursulin wrote:

On 31/10/2025 13:16, Christian König wrote:
Just as proof of concept and minor cleanup.

Signed-off-by: Christian König <[email protected]>
---
   drivers/gpu/drm/scheduler/sched_fence.c | 11 +++++------
   include/drm/gpu_scheduler.h             |  4 ----
   2 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
index 9391d6f0dc01..7a94e03341cb 100644
--- a/drivers/gpu/drm/scheduler/sched_fence.c
+++ b/drivers/gpu/drm/scheduler/sched_fence.c
@@ -156,19 +156,19 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
       struct dma_fence *parent;
       unsigned long flags;
   -    spin_lock_irqsave(&fence->lock, flags);
+    dma_fence_lock(f, flags);

Moving to dma_fence_lock should either be a separate patch or squashed into the 
one which converts many other drivers. Even a separate patch before that 
previous patch would be better.

As far as I can see that won't work or would be at least rather tricky.

Previously spin_lock_irqsave() locked drm_sched_fence->lock, but now it is locking dma_fence->lock.

That only works because we switched to using the internal lock.

What I meant here is to add a patch before the current 5/20.

Because in 5/20 we have a lot of:

-       spin_lock_irqsave(fence->lock, flags);
+       dma_fence_lock(fence, flags);

If before it you insert a standalone patch like "dma-fence: Add lock/unlock helper", then 5/20 simply touches the internals of the helper and becomes smaller.

That new patch would also include the drm/sched part, where it would be a straight replacement. At that point dma_fence->lock == &drm_sched_fence->lock anyway.

Then at the point of this patch you remove the drm_sched_fence->lock member, but dma_fence_lock_*() already does the right thing before it, no?
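
Roughly something like this is what I have in mind, just a sketch of what that preparatory patch could add, assuming dma_fence->lock is still the caller-provided pointer at that point in the series (not the literal form of the helper in your patches):

    /* While dma_fence->lock is still a pointer this is a plain wrapper: */
    #define dma_fence_lock(fence, flags) \
            spin_lock_irqsave((fence)->lock, flags)

    #define dma_fence_unlock(fence, flags) \
            spin_unlock_irqrestore((fence)->lock, flags)

Once the lock becomes internal to struct dma_fence, only the helper bodies would need to change (e.g. to &(fence)->lock), which is what would make 5/20 smaller.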

Sorry if I got confused somehow, I am jumping between topics.

Regards,

Tvrtko
Naming-wise, however, I still think dma_fence_lock_irqsave would probably be better, to stick with the same pattern everyone is so used to.
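
I.e. at the call sites, purely to illustrate the suggested naming (same pattern, only renamed):

    -       spin_lock_irqsave(fence->lock, flags);
    +       dma_fence_lock_irqsave(fence, flags);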

Oh, that is a good idea. Going to apply this to the patch set.

Regards,
Christian.


Regards,

Tvrtko

         /* If we already have an earlier deadline, keep it: */
       if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags) &&
           ktime_before(fence->deadline, deadline)) {
-        spin_unlock_irqrestore(&fence->lock, flags);
+        dma_fence_unlock(f, flags);
           return;
       }
         fence->deadline = deadline;
       set_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT, &f->flags);
   -    spin_unlock_irqrestore(&fence->lock, flags);
+    dma_fence_unlock(f, flags);
         /*
        * smp_load_aquire() to ensure that if we are racing another
@@ -217,7 +217,6 @@ struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
         fence->owner = owner;
       fence->drm_client_id = drm_client_id;
-    spin_lock_init(&fence->lock);
         return fence;
   }
@@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
       fence->sched = entity->rq->sched;
       seq = atomic_inc_return(&entity->fence_seq);
       dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
-               &fence->lock, entity->fence_context, seq);
+               NULL, entity->fence_context, seq);
       dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
-               &fence->lock, entity->fence_context + 1, seq);
+               NULL, entity->fence_context + 1, seq);
   }
     module_init(drm_sched_fence_slab_init);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index fb88301b3c45..b77f24a783e3 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -297,10 +297,6 @@ struct drm_sched_fence {
            * belongs to.
            */
       struct drm_gpu_scheduler    *sched;
-        /**
-         * @lock: the lock used by the scheduled and the finished fences.
-         */
-    spinlock_t            lock;
           /**
            * @owner: job owner for debugging
            */


