On 3/16/26 17:46, Tejun Heo wrote:
> Hello,
>
> On Mon, Mar 16, 2026 at 10:02:48AM +0000, Christian Loehle wrote:
>> @@ -5686,11 +5718,20 @@ static void kick_cpus_irq_workfn(struct irq_work *irq_work)
>> * task is picked subsequently. The latter is necessary to break
>> * the wait when $cpu is taken by a higher sched class.
>> */
>> - if (cpu != cpu_of(this_rq))
>> + if (cpu != this_cpu)
>> 		smp_cond_load_acquire(wait_kick_sync, VAL != ksyncs[cpu]);
>
> Given that irq_work is executed at the end of IRQ handling, we can just
> reschedule the irq work when the condition is not met (or separate that out
> into its own irq_work). That way, I think we can avoid the global lock.
>
I'll go poke at it some more, but I don't think it's guaranteed that B actually
advances kick_sync if A keeps kicking, at least not while the handling is in hard
irq_work. Or what would the separated-out irq work do differently?
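
For reference, my reading of the first option is roughly the sketch below. This
is hedged pseudocode, not the actual patch: the surrounding body is elided, and
the names (this_cpu, wait_kick_sync, ksyncs) are just carried over from the hunk
above. Re-queueing an irq_work from its own callback is legal since the PENDING
flag is cleared before the function runs.

	static void kick_cpus_irq_workfn(struct irq_work *irq_work)
	{
		...
		if (cpu != this_cpu &&
		    smp_load_acquire(wait_kick_sync) == ksyncs[cpu]) {
			/*
			 * $cpu hasn't advanced its sync counter yet;
			 * instead of spinning, retry from the tail of
			 * the next IRQ.
			 */
			irq_work_queue(irq_work);
			return;
		}
		...
	}

The open question above still applies: if A keeps kicking, each retry may observe
a stale counter again, so forward progress of B isn't obviously guaranteed.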