There is an issue, not always reproducible, where lockdep's softirqs
annotation goes out of sync with the actual softirq state and causes a
lockdep splat. This happens because the preemptoff tracer calls back
into lockdep, which can trigger a bogus "softirqs invalid" annotation
splat.

The same issue was fixed in __local_bh_disable_ip(), whose comment
explains the ordering problem:

 /*
  * The preempt tracer hooks into preempt_count_add and will break
  * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
  * is set and before current->softirq_enabled is cleared.
  * We must manually increment preempt_count here and manually
  * call the trace_preempt_off later.
  */

I am not entirely sure why the issue reproduces so easily with my
patches, but the same ordering issue can exist in __local_bh_enable(),
so we ought to fix it there as well.

Cc: Steven Rostedt <>
Cc: Peter Zijlstra <>
Cc: Ingo Molnar <>
Cc: Mathieu Desnoyers <>
Cc: Tom Zanussi <>
Cc: Namhyung Kim <>
Cc: Thomas Gleixner <>
Cc: Boqun Feng <>
Cc: Paul McKenney <>
Cc: Frederic Weisbecker <>
Cc: Randy Dunlap <>
Cc: Masami Hiramatsu <>
Cc: Fengguang Wu <>
Cc: Baohong Liu <>
Cc: Vedang Patel <>
Signed-off-by: Joel Fernandes <>
 kernel/softirq.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 177de3640c78..8a040bcaa033 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned int cnt)
 {
 	lockdep_assert_irqs_disabled();
 
+	if (preempt_count() == cnt)
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
 	if (softirq_count() == (cnt & SOFTIRQ_MASK))
 		trace_softirqs_on(_RET_IP_);
-	preempt_count_sub(cnt);
+
+	__preempt_count_sub(cnt);
 }
 
 /*
