On Thu Feb 5, 2026 at 10:16 AM GMT, Boris Brezillon wrote:
> On Tue,  3 Feb 2026 09:14:01 +0100
> Philipp Stanner <[email protected]> wrote:
>
>> +/// A synchronization primitive mainly for GPU drivers.
>> +///
>> +/// DmaFences are always reference counted. The typical use case is that one side registers
>> +/// callbacks on the fence which will perform a certain action (such as queueing work) once the
>> +/// other side signals the fence.
>> +///
>> +/// # Examples
>> +///
>> +/// ```
>> +/// use kernel::sync::{Arc, ArcBorrow, DmaFence, DmaFenceCtx, DmaFenceCb, DmaFenceCbFunc};
>> +/// use core::sync::atomic::{self, AtomicBool};
>> +///
>> +/// static mut CHECKER: AtomicBool = AtomicBool::new(false);
>> +///
>> +/// struct CallbackData {
>> +///     i: u32,
>> +/// }
>> +///
>> +/// impl CallbackData {
>> +///     fn new() -> Self {
>> +///         Self { i: 9 }
>> +///     }
>> +/// }
>> +///
>> +/// impl DmaFenceCbFunc for CallbackData {
>> +///     fn callback(cb: Pin<KBox<DmaFenceCb<Self>>>) where Self: Sized {
>> +///         assert_eq!(cb.data.i, 9);
>> +///         // SAFETY: Just to have an easy way for testing. This cannot race with the checker
>> +///         // because the fence signalling callbacks are executed synchronously.
>> +///         unsafe { CHECKER.store(true, atomic::Ordering::Relaxed); }
>> +///     }
>> +/// }
>> +///
>> +/// struct DriverData {
>> +///     i: u32,
>> +/// }
>> +///
>> +/// impl DriverData {
>> +///     fn new() -> Self {
>> +///         Self { i: 5 }
>> +///     }
>> +/// }
>> +///
>> +/// let data = DriverData::new();
>> +/// let fctx = DmaFenceCtx::new()?;
>> +///
>> +/// let mut fence = fctx.as_arc_borrow().new_fence(data)?;
>> +///
>> +/// let cb_data = CallbackData::new();
>> +/// fence.register_callback(cb_data);
>> +/// // fence.begin_signalling();
>> +/// fence.signal()?;
>> +/// // Now check whether the callback was actually executed.
>> +/// // SAFETY: `fence.signal()` above works sequentially. We just check here whether the
>> +/// // signalling actually did set the boolean correctly.
>> +/// unsafe { assert_eq!(CHECKER.load(atomic::Ordering::Relaxed), true); }
>> +///
>> +/// Ok::<(), Error>(())
>> +/// ```
>> +#[pin_data]
>> +pub struct DmaFence<T> {
>> +    /// The actual dma_fence passed to C.
>> +    #[pin]
>> +    inner: Opaque<bindings::dma_fence>,
>> +    /// User data.
>> +    #[pin]
>> +    data: T,
>
> A DmaFence is a cross-device synchronization mechanism that can (and
> will) cross the driver boundary (one driver can wait on a fence emitted
> by a different driver). As such, I don't think embedding a generic T in
> the DmaFence and considering it's the object being passed around is
> going to work, because, how can one driver know the T chosen by the
> driver that created the fence? If you want to have some fence emitter
> data attached to the DmaFence allocation, you'll need two kinds of
> objects:
>
> - one that's type agnostic and on which you can do the callback
>   registration/unregistration, signalling checks, and generally all
>   type-agnostic operations. That's basically just a wrapper around a
>   bindings::dma_fence implementing AlwaysRefCounted.
> - one that has the extra data and fctx, with a way to transmute from a
>   generic fence to an implementer-specific one in case the driver wants
>   to do something special when waiting on its own fences (check done
>   with the fence ops in C, I don't know how that translates in rust)

If `data` is moved to the end of struct and `DmaFence<T>` changed to
`DmaFence<T: ?Sized>`, you would also gain the ability to coerce `DmaFence<T>`
to `DmaFence<dyn Trait>`, e.g. `DmaFence<dyn Any>`.

Best,
Gary

>
>> +    /// Marks whether the fence is currently in the signalling critical section.
>> +    signalling: bool,
>> +    /// A boolean needed for the C backend's lockdep guard.
>> +    signalling_cookie: bool,
>> +    /// A reference to the associated [`DmaFenceCtx`] so that it cannot be dropped while there
>> +    /// are still fences around.
>> +    fctx: Arc<DmaFenceCtx>,
>> +}
