On 18.02.26 00:03, Bezdeka, Florian (FT RPD CED OES-DE) wrote:
> On Mon, 2026-02-16 at 17:24 +0100, Jan Kiszka wrote:
>> From: Jan Kiszka <[email protected]>
>>
>> Resolves the following lockdep report when booting PREEMPT_RT on Hyper-V
>> with related guest support enabled:
>>
>> [ 1.127941] hv_vmbus: registering driver hyperv_drm
>>
>> [ 1.132518] =============================
>> [ 1.132519] [ BUG: Invalid wait context ]
>> [ 1.132521] 6.19.0-rc8+ #9 Not tainted
>> [ 1.132524] -----------------------------
>> [ 1.132525] swapper/0/0 is trying to lock:
>> [ 1.132526] ffff8b9381bb3c90 (&channel->sched_lock){....}-{3:3}, at: vmbus_chan_sched+0xc4/0x2b0
>> [ 1.132543] other info that might help us debug this:
>> [ 1.132544] context-{2:2}
>> [ 1.132545] 1 lock held by swapper/0/0:
>> [ 1.132547] #0: ffffffffa010c4c0 (rcu_read_lock){....}-{1:3}, at: vmbus_chan_sched+0x31/0x2b0
>> [ 1.132557] stack backtrace:
>> [ 1.132560] CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Not tainted 6.19.0-rc8+ #9 PREEMPT_{RT,(lazy)}
>> [ 1.132565] Hardware name: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/25/2025
>> [ 1.132567] Call Trace:
>> [ 1.132570]  <IRQ>
>> [ 1.132573] dump_stack_lvl+0x6e/0xa0
>> [ 1.132581] __lock_acquire+0xee0/0x21b0
>> [ 1.132592] lock_acquire+0xd5/0x2d0
>> [ 1.132598] ? vmbus_chan_sched+0xc4/0x2b0
>> [ 1.132606] ? lock_acquire+0xd5/0x2d0
>> [ 1.132613] ? vmbus_chan_sched+0x31/0x2b0
>> [ 1.132619] rt_spin_lock+0x3f/0x1f0
>> [ 1.132623] ? vmbus_chan_sched+0xc4/0x2b0
>> [ 1.132629] ? vmbus_chan_sched+0x31/0x2b0
>> [ 1.132634] vmbus_chan_sched+0xc4/0x2b0
>> [ 1.132641] vmbus_isr+0x2c/0x150
>> [ 1.132648] __sysvec_hyperv_callback+0x5f/0xa0
>> [ 1.132654] sysvec_hyperv_callback+0x88/0xb0
>> [ 1.132658]  </IRQ>
>> [ 1.132659]  <TASK>
>> [ 1.132660] asm_sysvec_hyperv_callback+0x1a/0x20
>>
>> As the code paths that handle vmbus IRQs use sleeping locks under
>> PREEMPT_RT, vmbus_isr execution needs to be moved into thread context.
>> Open-coding this allows us to skip the IPI that irq_work would
>> additionally bring and which we do not need: this is always an IRQ,
>> never an NMI.
>>
>> This affects both x86 and arm64, therefore hook into the common driver
>> logic.
>
> I tested this patch in combination with the related SCSI driver patch.
> The tests were done on x86 with both VM generations provided by Hyper-V.
>
> Lockdep was enabled, and there were no splat reports within 24 hours of
> massive load produced by stress-ng.
>
> With that:
>
> Reviewed-by: Florian Bezdeka <[email protected]>
> Tested-by: Florian Bezdeka <[email protected]>
>
> Side note: We did some backports down to 6.1 already, just in case
> someone is interested. We noticed a massive network performance drop
> in 6.1. The root cause has been identified and is not related to this
> patch; it is simply another RT regression caused by a missing stable-rt
> backport. Upstreaming is in progress...
Submitted: https://lore.kernel.org/stable/[email protected]/T/#u

Jan

--
Siemens AG, Foundational Technologies
Linux Expert Center
