There is an intermittently reproducible issue where lockdep's softirqs
annotation goes out of sync with the actual softirq state and causes a
lockdep splat. This happens because the preemptoff tracer calls back
into lockdep while the softirq annotation is being updated, producing
an invalid-softirqs-state splat.
The same issue was fixed in local_bh_disable_ip, which carries this
comment:

	/*
	 * The preempt tracer hooks into preempt_count_add and will break
	 * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
	 * is set and before current->softirq_enabled is cleared.
	 * We must manually increment preempt_count here and manually
	 * call the trace_preempt_off later.
	 */

I am not entirely sure why the issue reproduces so easily with my
patches, but the same ordering problem can exist here, so we ought to
fix it.

Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Cc: Tom Zanussi <tom.zanu...@linux.intel.com>
Cc: Namhyung Kim <namhy...@kernel.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Boqun Feng <boqun.f...@gmail.com>
Cc: Paul McKenney <paul...@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweis...@gmail.com>
Cc: Randy Dunlap <rdun...@infradead.org>
Cc: Masami Hiramatsu <mhira...@kernel.org>
Cc: Fengguang Wu <fengguang...@intel.com>
Cc: Baohong Liu <baohong....@intel.com>
Cc: Vedang Patel <vedang.pa...@intel.com>
Signed-off-by: Joel Fernandes <joe...@google.com>
---
 kernel/softirq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 177de3640c78..8a040bcaa033 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
 	lockdep_assert_irqs_disabled();
 
+	if (preempt_count() == cnt)
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
 	if (softirq_count() == (cnt & SOFTIRQ_MASK))
 		trace_softirqs_on(_RET_IP_);
-	preempt_count_sub(cnt);
+
+	__preempt_count_sub(cnt);
 }
 
 /*
-- 
2.17.0.484.g0c8726318c-goog