On 05.02.26 19:55, Michael Kelley wrote:
> From: Jan Kiszka <[email protected]> Sent: Tuesday, February 3, 2026 8:02 AM
>>
>> Resolves the following lockdep report when booting PREEMPT_RT on Hyper-V
>> with related guest support enabled:
>>
>> [    1.127941] hv_vmbus: registering driver hyperv_drm
>>
>> [    1.132518] =============================
>> [    1.132519] [ BUG: Invalid wait context ]
>> [    1.132521] 6.19.0-rc8+ #9 Not tainted
>> [    1.132524] -----------------------------
>> [    1.132525] swapper/0/0 is trying to lock:
>> [    1.132526] ffff8b9381bb3c90 (&channel->sched_lock){....}-{3:3}, at: vmbus_chan_sched+0xc4/0x2b0
>> [    1.132543] other info that might help us debug this:
>> [    1.132544] context-{2:2}
>> [    1.132545] 1 lock held by swapper/0/0:
>> [    1.132547]  #0: ffffffffa010c4c0 (rcu_read_lock){....}-{1:3}, at: vmbus_chan_sched+0x31/0x2b0
>> [    1.132557] stack backtrace:
>> [    1.132560] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.19.0-rc8+ #9 PREEMPT_{RT,(lazy)}
>> [    1.132565] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/25/2025
>> [    1.132567] Call Trace:
>> [    1.132570]  <IRQ>
>> [    1.132573]  dump_stack_lvl+0x6e/0xa0
>> [    1.132581]  __lock_acquire+0xee0/0x21b0
>> [    1.132592]  lock_acquire+0xd5/0x2d0
>> [    1.132598]  ? vmbus_chan_sched+0xc4/0x2b0
>> [    1.132606]  ? lock_acquire+0xd5/0x2d0
>> [    1.132613]  ? vmbus_chan_sched+0x31/0x2b0
>> [    1.132619]  rt_spin_lock+0x3f/0x1f0
>> [    1.132623]  ? vmbus_chan_sched+0xc4/0x2b0
>> [    1.132629]  ? vmbus_chan_sched+0x31/0x2b0
>> [    1.132634]  vmbus_chan_sched+0xc4/0x2b0
>> [    1.132641]  vmbus_isr+0x2c/0x150
>> [    1.132648]  __sysvec_hyperv_callback+0x5f/0xa0
>> [    1.132654]  sysvec_hyperv_callback+0x88/0xb0
>> [    1.132658]  </IRQ>
>> [    1.132659]  <TASK>
>> [    1.132660]  asm_sysvec_hyperv_callback+0x1a/0x20
>>
>> As the code paths that handle vmbus IRQs use sleeping locks under
>> PREEMPT_RT, the complete vmbus_handler execution needs to be moved into
>> thread context. Open-coding this allows us to skip the IPI that irq_work
>> would additionally bring and which we do not need here, as we are always
>> called from an IRQ, never from an NMI.
>>
>> Signed-off-by: Jan Kiszka <[email protected]>
>> ---
>>
>> This should resolve what was once brought forward via [1]. Whether it
>> actually resolves all remaining compatibility issues of the hyperv
>> support with RT is not yet clear, though. So far, lockdep is happy when
>> using this plus [2].
>>
>> [1] 
>> https://lore.kernel.org/all/[email protected]/
>> [2] 
>> https://lore.kernel.org/lkml/[email protected]/
>>
>>  arch/x86/kernel/cpu/mshyperv.c | 52 ++++++++++++++++++++++++++++++++--      
> 
> You've added this code under arch/x86. But isn't it architecture independent?
> I think it should also work on arm64. If that's the case, the code should
> probably be added to drivers/hv/vmbus_drv.c instead.
> 

I checked that before: arm64 uses normal IRQs, not over-optimized APIC
vectors. And those IRQs are auto-threaded.

That said, someone with an arm64 Hyper-V deployment should still try to
run things there once (PREEMPT_RT + PROVE_LOCKING). I don't have such a
setup.
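
To illustrate the difference (purely hypothetical code, names invented,
not the actual arm64 VMBus wiring): a handler registered as a regular
Linux IRQ without IRQF_NO_THREAD is force-threaded under PREEMPT_RT, so
it may take sleeping spinlocks - exactly what the x86 path cannot do
while vmbus_isr() is entered directly from the hypervisor callback
vector:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

/* Hypothetical demo handler, for illustration only. */
static DEFINE_SPINLOCK(demo_lock);      /* a sleeping lock on PREEMPT_RT */

static irqreturn_t demo_isr(int irq, void *dev_id)
{
        /*
         * On PREEMPT_RT this runs in the per-IRQ handler thread (forced
         * threading), so taking a sleeping spinlock here is fine.
         */
        spin_lock(&demo_lock);
        /* ... process the event ... */
        spin_unlock(&demo_lock);

        return IRQ_HANDLED;
}

static int demo_setup(unsigned int irq, void *dev_id)
{
        /* No IRQF_NO_THREAD, so PREEMPT_RT is free to thread the handler. */
        return request_irq(irq, demo_isr, 0, "demo", dev_id);
}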

>>  1 file changed, 50 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
>> index 579fb2c64cfd..1194ca452c52 100644
>> --- a/arch/x86/kernel/cpu/mshyperv.c
>> +++ b/arch/x86/kernel/cpu/mshyperv.c
>> @@ -17,6 +17,7 @@
>>  #include <linux/irq.h>
>>  #include <linux/kexec.h>
>>  #include <linux/random.h>
>> +#include <linux/smpboot.h>
>>  #include <asm/processor.h>
>>  #include <asm/hypervisor.h>
>>  #include <hyperv/hvhdk.h>
>> @@ -150,6 +151,43 @@ static void (*hv_stimer0_handler)(void);
>>  static void (*hv_kexec_handler)(void);
>>  static void (*hv_crash_handler)(struct pt_regs *regs);
>>
>> +static DEFINE_PER_CPU(bool, vmbus_irq_pending);
>> +static DEFINE_PER_CPU(struct task_struct *, vmbus_irqd);
>> +
>> +static void vmbus_irqd_wake(void)
>> +{
>> +    struct task_struct *tsk = __this_cpu_read(vmbus_irqd);
>> +
>> +    __this_cpu_write(vmbus_irq_pending, true);
>> +    wake_up_process(tsk);
>> +}
>> +
>> +static void vmbus_irqd_setup(unsigned int cpu)
>> +{
>> +    sched_set_fifo(current);
>> +}
>> +
>> +static int vmbus_irqd_should_run(unsigned int cpu)
>> +{
>> +    return __this_cpu_read(vmbus_irq_pending);
>> +}
>> +
>> +static void run_vmbus_irqd(unsigned int cpu)
>> +{
>> +    vmbus_handler();
>> +    __this_cpu_write(vmbus_irq_pending, false);
>> +}
> 
> The two statements in this function should be swapped. This function
> runs with pre-emption enabled and interrupts enabled. If a VMBus
> interrupt comes in as vmbus_handler() is finishing, vmbus_irqd_wake()
> will run and set vmbus_irq_pending to "true". This function will then set
> vmbus_irq_pending to "false", wiping out the "true" setting. The hotplug
> thread will decide it doesn't need to run again, and whatever generated
> the new interrupt doesn't get processed (at least until another interrupt
> comes in).

You are absolutely right. The reordered pattern is the same as in
irq_work - for the very same reason. I'll send v2.
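
For reference, a minimal sketch of the reordering I have in mind for v2,
building on the helpers from the patch above (untested, just to make the
race-free ordering explicit):

static void run_vmbus_irqd(unsigned int cpu)
{
        /*
         * Clear the pending flag before running the handler. If a new
         * interrupt arrives while vmbus_handler() runs, vmbus_irqd_wake()
         * sets the flag again and vmbus_irqd_should_run() reports another
         * round, so the late event is not lost - same pattern as irq_work.
         */
        __this_cpu_write(vmbus_irq_pending, false);
        vmbus_handler();
}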

Thanks,
Jan

-- 
Siemens AG, Foundational Technologies
Linux Expert Center
