On Tue, 2025-11-04 at 15:12 +0000, Tvrtko Ursulin wrote:
> 
> On 31/10/2025 13:16, Christian König wrote:
> > Just as proof of concept and minor cleanup.

That, by the way, is why I'm asking whether this series is intended as
an RFC.

> > 
> > Signed-off-by: Christian König <[email protected]>
> > ---
> >   drivers/gpu/drm/scheduler/sched_fence.c | 11 +++++------
> >   include/drm/gpu_scheduler.h             |  4 ----
> >   2 files changed, 5 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_fence.c b/drivers/gpu/drm/scheduler/sched_fence.c
> > index 9391d6f0dc01..7a94e03341cb 100644
> > --- a/drivers/gpu/drm/scheduler/sched_fence.c
> > +++ b/drivers/gpu/drm/scheduler/sched_fence.c
> > @@ -156,19 +156,19 @@ static void drm_sched_fence_set_deadline_finished(struct dma_fence *f,
> >     struct dma_fence *parent;
> >     unsigned long flags;
> >   
> > -   spin_lock_irqsave(&fence->lock, flags);
> > +   dma_fence_lock(f, flags);
> 
> Moving to dma_fence_lock should either be a separate patch or squashed 
> into the one which converts many other drivers. Even a separate patch
> before that previous patch would be better.

Yes. +1

> 
> Naming-wise, however, I still think dma_fence_lock_irqsave would
> probably be better, to stick with the same pattern everyone is so
> used to.

I also think this would be better. Who knows, maybe one day someone
will actually need a lock that must not be irqsave for some reason, and
then the naming pattern would break completely.

[…]

> > @@ -230,9 +229,9 @@ void drm_sched_fence_init(struct drm_sched_fence *fence,
> >     fence->sched = entity->rq->sched;
> >     seq = atomic_inc_return(&entity->fence_seq);
> >     dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
> > -                  &fence->lock, entity->fence_context, seq);
> > +                  NULL, entity->fence_context, seq);
> >     dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
> > -                  &fence->lock, entity->fence_context + 1, seq);
> > +                  NULL, entity->fence_context + 1, seq);
> >   }

Do we agree that there is no benefit in porting the scheduler to the
non-shared spinlock?

If so, I really prefer not to do it.
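
For clarity, this is the status quo I'd keep (taken from the lines the
hunk above removes; fence->lock itself is initialized once at
allocation time, if I'm reading the code right):

	/* Both fences serialize on the one lock embedded in the
	 * scheduler fence, instead of two per-fence internal locks.
	 */
	dma_fence_init(&fence->scheduled, &drm_sched_fence_ops_scheduled,
		       &fence->lock, entity->fence_context, seq);
	dma_fence_init(&fence->finished, &drm_sched_fence_ops_finished,
		       &fence->lock, entity->fence_context + 1, seq);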


P.
