On Mon, 2 Mar 2026 12:05:46 -0800
Matthew Brost <[email protected]> wrote:

> On Mon, Mar 02, 2026 at 05:39:59PM +0100, Boris Brezillon wrote:
> > On Mon, 2 Mar 2026 16:42:28 +0100
> > Christian König <[email protected]> wrote:
> >   
> > > On 3/2/26 16:28, Boris Brezillon wrote:  
> > > > On Tue, 24 Feb 2026 09:55:43 -0800
> > > > Matthew Brost <[email protected]> wrote:
> > > >     
> > > >> dma_fence_chain_enable_signaling() runs while holding the chain
> > > >> inline_lock and may add callbacks to underlying fences, which takes
> > > >> their inline_lock.
> > > >>
> > > >> Since both locks share the same lockdep class, this valid nesting
> > > >> triggers a recursive locking warning. Assign a distinct lockdep
> > > >> class to the chain inline_lock so lockdep can correctly model the
> > > >> hierarchy.
> > > >>
> > > >> Fixes: a408c0ca0c41 ("dma-buf: use inline lock for the dma-fence-chain")
> > > >> Cc: Christian König <[email protected]>
> > > >> Cc: Tvrtko Ursulin <[email protected]>
> > > >> Cc: Philipp Stanner <[email protected]>
> > > >> Cc: Boris Brezillon <[email protected]>
> > > >> Signed-off-by: Matthew Brost <[email protected]>
> > > >> ---
> > > >>  drivers/dma-buf/dma-fence-chain.c | 17 +++++++++++++++++
> > > >>  1 file changed, 17 insertions(+)
> > > >>
> > > >> diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
> > > >> index a707792b6025..4c2a9f2ce126 100644
> > > >> --- a/drivers/dma-buf/dma-fence-chain.c
> > > >> +++ b/drivers/dma-buf/dma-fence-chain.c
> > > >> @@ -242,6 +242,9 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
> > > >>  			  struct dma_fence *fence, uint64_t seqno)
> > > >>  {
> > > >> +#if IS_ENABLED(CONFIG_LOCKDEP)
> > > >> +	static struct lock_class_key dma_fence_chain_lock_key;
> > > >> +#endif
> > > >>  	struct dma_fence_chain *prev_chain = to_dma_fence_chain(prev);
> > > >>  	uint64_t context;
> > > >>  
> > > >> @@ -263,6 +266,20 @@ void dma_fence_chain_init(struct dma_fence_chain *chain,
> > > >>  	dma_fence_init64(&chain->base, &dma_fence_chain_ops, NULL,
> > > >>  			 context, seqno);
> > > >>  
> > > >> +#if IS_ENABLED(CONFIG_LOCKDEP)
> > > >> +	/*
> > > >> +	 * dma_fence_chain_enable_signaling() is invoked while holding
> > > >> +	 * chain->base.inline_lock and may call dma_fence_add_callback()
> > > >> +	 * on the underlying fences, which takes their inline_lock.
> > > >> +	 *
> > > >> +	 * Since both locks share the same lockdep class, this legitimate
> > > >> +	 * nesting confuses lockdep and triggers a recursive locking
> > > >> +	 * warning. Assign a separate lockdep class to the chain lock
> > > >> +	 * to model this hierarchy correctly.
> > > >> +	 */
> > > >> +	lockdep_set_class(&chain->base.inline_lock,
> > > >> +			  &dma_fence_chain_lock_key);
> > > >> +#endif
> > > > 
> > > > If we're going to recommend the use of this inline_lock for all new
> > > > dma_fence_ops implementers, as the commit message seems to imply
> > > >     
> > > >> Shared spinlocks have the problem that implementations need to
> > > >> guarantee that the lock lives at least as long as all the fences
> > > >> referencing it.
> > > >>
> > > >> Using a per-fence spinlock allows completely decoupling spinlock
> > > >> producer and consumer life times, simplifying the handling in most
> > > >> use cases.    
> > > > 
> > > > maybe we should have the lock_class_key at the dma_fence_ops level and
> > > > have this lockdep_set_class() automated in __dma_fence_init().    
> > > 
> > > The dma_fence_chain() and dma_fence_array() containers are the only
> > > ones that are allowed to nest the lock with other dma_fences. E.g. we
> > > have WARN_ON()s in place which fire when you try to stitch together
> > > something that won't work.
> > > 
> > > So everybody else should get a lockdep warning when they try to do
> > > nasty things like this, because you really can't guarantee lock
> > > ordering between different dma_fence implementations.  
> > 
> > Okay, that makes sense.  
> 
> Yes, I agree with Christian's reasoning - chains / arrays are the only
> cases where nesting should be allowed. Also, if we assigned a key to
> every inline lock we'd quickly exhaust the number of lockdep keys.

The suggestion was to have a key per-dma_fence_ops implementation, not
per-lock ;-). Anyway, Christian's argument already convinced me, so
this is moot.
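
For readers following the archive, the false positive being fixed can be
sketched in a few lines of kernel-style C. This is an illustrative
fragment, not upstream code: the lock names and the demo() function are
invented for the example, but spin_lock_init(), lockdep_set_class() and
struct lock_class_key are the real kernel APIs the patch relies on.

#include <linux/spinlock.h>
#include <linux/lockdep.h>

/* Two spinlocks initialized the same way share a lockdep class, so
 * nesting one inside the other looks recursive to lockdep even when
 * the nesting is perfectly valid. */
static spinlock_t outer, inner;

static void demo(void)
{
	static struct lock_class_key outer_key;

	spin_lock_init(&outer);
	spin_lock_init(&inner);

	/* Without this, the nested acquisition below is flagged as
	 * "possible recursive locking detected". */
	lockdep_set_class(&outer, &outer_key);

	spin_lock(&outer);
	spin_lock(&inner);	/* valid nesting, now modeled correctly */
	spin_unlock(&inner);
	spin_unlock(&outer);
}

The patch applies the same idea to chain->base.inline_lock: the chain's
lock gets its own class, so taking an underlying fence's inline_lock
underneath it is no longer seen as same-class recursion.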
