On Sun Mar 1, 2026 at 1:06 PM CET, Alice Ryhl wrote:
> On Sat, Feb 28, 2026 at 07:40:26PM +0100, Danilo Krummrich wrote:
>> On Wed Feb 4, 2026 at 9:40 PM CET, Daniel Almeida wrote:
>> > This implementation dispatches any work enqueued on ARef<drm::Device<T>> to
>> > its driver-provided handler. It does so by building upon the newly-added
>> > ARef<T> support in workqueue.rs in order to call into the driver
>> > implementations for work_container_of and raw_get_work.
>> >
>> > This is notably important for work items that need access to the drm
>> > device, as it was not possible to enqueue work on an ARef<drm::Device<T>>
>> > previously without violating the orphan rule.
>> >
>> > The current implementation needs T::Data to live inline with drm::Device in
>> > order for work_container_of to function. This restriction is already
>> > captured by the trait bounds. Drivers that need to share their ownership of
>> > T::Data may trivially get around this:
>> >
>> > // Lives inline in drm::Device
>> > struct DataWrapper {
>> >     work: ...,
>> >     // Heap-allocated, shared ownership.
>> >     data: Arc<DriverData>,
>> > }
>>
>> IIUC, this is how it's supposed to be used:
>>
>> #[pin_data]
>> struct MyData {
>>     #[pin]
>>     work: Work<drm::Device<MyDriver>>,
>>     value: u32,
>> }
>>
>> impl_has_work! {
>>     impl HasWork<drm::Device<MyDriver>> for MyData { self.work }
>> }
>>
>> impl WorkItem for MyData {
>>     type Pointer = ARef<drm::Device<MyDriver>>;
>>
>>     fn run(dev: ARef<drm::Device<MyDriver>>) {
>>         dev_info!(dev, "value = {}\n", dev.value);
>>     }
>> }
>>
>> The reason the WorkItem is implemented for MyData, rather than
>> drm::Device<MyDriver> (which would be a bit more straightforward) is
>> the orphan rule, I assume.
>
> This characterizes it as a workaround for the orphan rule. I don't think
> that's fair. Implementing WorkItem for MyDriver directly is the
> idiomatic way to do it, in my opinion.

The trait bound is T::Data: WorkItem, not T: drm::Driver + WorkItem.
Implementing WorkItem for MyDriver would seem more straightforward to me.
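
For reference, a minimal standalone sketch of the orphan rule in plain Rust
(names hypothetical): a foreign trait can't be implemented for a foreign type
directly, but a local wrapper type makes the impl legal again.

```rust
use std::fmt;

// `impl fmt::Display for Vec<u32>` would be rejected: both the trait
// and the type are foreign to this crate (the orphan rule). Wrapping
// the foreign type in a local newtype makes the impl coherent again.
struct DeviceList(Vec<u32>);

impl fmt::Display for DeviceList {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} device(s)", self.0.len())
    }
}

fn main() {
    let list = DeviceList(vec![1, 2, 3]);
    println!("{}", list); // prints "3 device(s)"
}
```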
>> Now, the whole purpose of this is that a driver can implement WorkItem for
>> MyData without needing an additional struct (and allocation), such as:
>>
>> #[pin_data]
>> struct MyWork {
>>     #[pin]
>>     work: Work<Self>,
>>     dev: ARef<drm::Device<MyDriver>>,
>> }
>>
>> How is this supposed to be done when you want multiple different
>> implementations of WorkItem that have a drm::Device<MyDriver> as
>> payload?
>>
>> Fall back to various struct MyWork? Add in an "artificial" type state
>> for MyData with some phantom data, so you can implement HasWork for
>> MyData<Work0>, MyData<Work1>, etc.?
>
> You cannot configure the code that is executed on a per-call basis
> because the code called by a work item is a function pointer stored
> inside the `struct work_struct`. And it can't be changed after
> initialization of the field.
>
> So either you must store that info in a separate field. This is what
> Binder does, see drivers/android/binder/process.rs for an example.
>
> impl workqueue::WorkItem for Process {
>     type Pointer = Arc<Process>;
>
>     fn run(me: Arc<Self>) {
>         let defer;
>         {
>             let mut inner = me.inner.lock();
>             defer = inner.defer_work;
>             inner.defer_work = 0;
>         }
>
>         if defer & PROC_DEFER_FLUSH != 0 {
>             me.deferred_flush();
>         }
>         if defer & PROC_DEFER_RELEASE != 0 {
>             me.deferred_release();
>         }
>     }
> }

Ok, so this would be a switch to decide what to do when a single work item
is run, i.e. it is not for running multiple different work items.
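
To double check my understanding, here is a standalone sketch of that bitmask
dispatch with the kernel types replaced by plain-Rust stand-ins (flag and
method names hypothetical):

```rust
use std::sync::Mutex;

// Hypothetical deferred-work flags, analogous to PROC_DEFER_* above.
const DEFER_FLUSH: u32 = 1 << 0;
const DEFER_RELEASE: u32 = 1 << 1;

struct Process {
    // Pending-work bitmask: set by enqueuers, drained by run().
    defer_work: Mutex<u32>,
}

impl Process {
    // Mirrors WorkItem::run() in the Binder example: a single work item,
    // with the bitmask selecting which deferred actions to perform.
    fn run(&self) -> Vec<&'static str> {
        let defer = {
            let mut inner = self.defer_work.lock().unwrap();
            std::mem::replace(&mut *inner, 0)
        };
        let mut done = Vec::new();
        if defer & DEFER_FLUSH != 0 {
            done.push("flush");
        }
        if defer & DEFER_RELEASE != 0 {
            done.push("release");
        }
        done
    }
}

fn main() {
    let p = Process { defer_work: Mutex::new(DEFER_FLUSH | DEFER_RELEASE) };
    assert_eq!(p.run(), ["flush", "release"]);
    // The mask was drained, so a second run does nothing.
    assert!(p.run().is_empty());
}
```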
> Or you must have multiple different fields of type Work, each with a
> different function pointer stored inside it.

This sounds like it works for running multiple work items, but I wonder how
enqueue() knows which work should be run in this case? I.e. what do we do
with:

impl_has_work! {
    impl HasWork<drm::Device<MyDriver>> for MyData { self.work }
}
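
To make the "artificial type state" idea from above concrete, a standalone
sketch with a simplified stand-in trait instead of the kernel's
WorkItem/HasWork (all names hypothetical):

```rust
use std::marker::PhantomData;

// Zero-sized markers distinguishing the different work items.
struct Work0;
struct Work1;

// Stand-in for the driver data; the marker makes MyData<Work0> and
// MyData<Work1> distinct types, so each can get its own trait impl.
struct MyData<W> {
    value: u32,
    _marker: PhantomData<W>,
}

// Simplified stand-in for WorkItem::run().
trait RunWork {
    fn run(&self) -> String;
}

impl RunWork for MyData<Work0> {
    fn run(&self) -> String {
        format!("work0: {}", self.value)
    }
}

impl RunWork for MyData<Work1> {
    fn run(&self) -> String {
        format!("work1: {}", self.value)
    }
}

fn main() {
    let a = MyData::<Work0> { value: 1, _marker: PhantomData };
    let b = MyData::<Work1> { value: 2, _marker: PhantomData };
    assert_eq!(a.run(), "work0: 1");
    assert_eq!(b.run(), "work1: 2");
}
```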