On Mon, 16 Feb 2026 08:33:21 +0100
Philipp Stanner <[email protected]> wrote:

> On Fri, 2026-02-13 at 15:27 +0100, Boris Brezillon wrote:
> > On Tue, 10 Feb 2026 11:01:59 +0100
> > "Christian König" <[email protected]> wrote:
> >   
> > > Implement per-fence spinlocks, allowing implementations to not provide an
> > > external spinlock to protect the fence's internal state. Instead, a
> > > spinlock embedded in the fence structure itself is used in this case.
> > > 
> > > Shared spinlocks have the problem that implementations must guarantee
> > > that the lock lives at least as long as all fences referencing it.
> > > 
> > > Using a per-fence spinlock allows completely decoupling spinlock producer
> > > and consumer life times, simplifying the handling in most use cases.
> > > 
> > > v2: improve naming, coverage and function documentation
> > > v3: fix one additional locking in the selftests
> > > v4: separate out some changes to make the patch smaller,
> > >     fix one amdgpu crash found by CI systems
> > > 
> > > Signed-off-by: Christian König <[email protected]>
> > > ---
> > >  drivers/dma-buf/dma-fence.c             | 21 ++++++++++++++++-----
> > >  drivers/dma-buf/sync_debug.h            |  2 +-
> > >  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  2 +-
> > >  drivers/gpu/drm/drm_crtc.c              |  2 +-
> > >  drivers/gpu/drm/drm_writeback.c         |  2 +-
> > >  drivers/gpu/drm/nouveau/nouveau_fence.c |  3 ++-
> > >  drivers/gpu/drm/qxl/qxl_release.c       |  3 ++-
> > >  drivers/gpu/drm/vmwgfx/vmwgfx_fence.c   |  3 ++-
> > >  drivers/gpu/drm/xe/xe_hw_fence.c        |  3 ++-
> > >  include/linux/dma-fence.h               | 19 +++++++++++++------
> > >  10 files changed, 41 insertions(+), 19 deletions(-)
> > > 
> > > diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
> > > index 56aa59867eaa..1833889e7466 100644
> > > --- a/drivers/dma-buf/dma-fence.c
> > > +++ b/drivers/dma-buf/dma-fence.c
> > > @@ -343,7 +343,6 @@ void __dma_fence_might_wait(void)
> > >  }
> > >  #endif
> > >  
> > > -
> > >  /**
> > >   * dma_fence_signal_timestamp_locked - signal completion of a fence
> > >   * @fence: the fence to signal
> > > @@ -1067,7 +1066,6 @@ static void
> > >  __dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
> > >            spinlock_t *lock, u64 context, u64 seqno, unsigned long flags)
> > >  {
> > > - BUG_ON(!lock);
> > >   BUG_ON(!ops || !ops->get_driver_name || !ops->get_timeline_name);
> > >  
> > >   kref_init(&fence->refcount);
> > > @@ -1078,10 +1076,15 @@ __dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
> > >    */
> > >   RCU_INIT_POINTER(fence->ops, ops);
> > >   INIT_LIST_HEAD(&fence->cb_list);
> > > - fence->lock = lock;
> > >   fence->context = context;
> > >   fence->seqno = seqno;
> > >   fence->flags = flags | BIT(DMA_FENCE_FLAG_INITIALIZED_BIT);
> > > + if (lock) {
> > > +         fence->extern_lock = lock;
> > > + } else {
> > > +         spin_lock_init(&fence->inline_lock);
> > > +         fence->flags |= BIT(DMA_FENCE_FLAG_INLINE_LOCK_BIT);  
> > 
> > Hm, does this even make a difference in terms of instructions, checking
> > a bit instead of extern_lock == NULL? If not, I'd be in favor of
> > killing this redundancy and checking extern_lock against NULL in
> > dma_fence_spinlock().  
> 
> extern_lock and inline_lock are a union, so they overlap each other.
> inline_lock is only guaranteed to be all zeros right after the fence has
> been zero-initialized; once spin_lock_init() has run, its bytes may be
> nonzero, so a NULL check on extern_lock isn't reliable.
> 
> 
> P.
> 
> PS: Can you terminate your messages with a delimiter, or by cropping? I
> give this tip sometimes because the reviewer often has to scroll to the
> end of an email to see whether there are further comments. I terminate
> my messages with "P." for that purpose ;]

I tend to strip messages and quote only the bits I comment on. I guess
this time I didn't, my bad.
