Checking for redundant softirqs-on events no longer makes sense because
trace_softirqs_on() is no longer symmetrical to trace_softirqs_off().

Indeed, trace_softirqs_off() is called whenever we know that all softirq
vectors have been disabled, whereas trace_softirqs_on() is called every
time we enable at least one vector. As a result, curr->softirqs_enabled
may well remain true across subsequent calls.

FIXME: Perhaps we should rename those functions. Another solution would
be to make curr->softirqs_enabled record the last value of
local_softirqs_enabled() so that we could track redundant calls again.

Reviewed-by: David S. Miller <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Mauro Carvalho Chehab <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Pavan Kondeti <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
 kernel/locking/lockdep.c           | 5 -----
 kernel/locking/lockdep_internals.h | 1 -
 kernel/locking/lockdep_proc.c      | 2 +-
 3 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index a99fd5fade54..ce027d436651 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2978,11 +2978,6 @@ void trace_softirqs_on(unsigned long ip)
        if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
                return;
 
-       if (curr->softirqs_enabled) {
-               debug_atomic_inc(redundant_softirqs_on);
-               return;
-       }
-
        current->lockdep_recursion = 1;
        /*
         * We'll do an OFF -> ON transition:
diff --git a/kernel/locking/lockdep_internals.h 
b/kernel/locking/lockdep_internals.h
index 4b0c03f0f7ce..3401e4a91afb 100644
--- a/kernel/locking/lockdep_internals.h
+++ b/kernel/locking/lockdep_internals.h
@@ -186,7 +186,6 @@ struct lockdep_stats {
        int     redundant_hardirqs_off;
        int     softirqs_on_events;
        int     softirqs_off_events;
-       int     redundant_softirqs_on;
        int     redundant_softirqs_off;
        int     nr_unused_locks;
        int     nr_redundant_checks;
diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c
index 3d31f9b0059e..4157560b36d2 100644
--- a/kernel/locking/lockdep_proc.c
+++ b/kernel/locking/lockdep_proc.c
@@ -169,7 +169,7 @@ static void lockdep_stats_debug_show(struct seq_file *m)
                           hr2 = debug_atomic_read(redundant_hardirqs_off),
                           si1 = debug_atomic_read(softirqs_on_events),
                           si2 = debug_atomic_read(softirqs_off_events),
-                          sr1 = debug_atomic_read(redundant_softirqs_on),
+                          sr1 = 0,
                           sr2 = debug_atomic_read(redundant_softirqs_off);
 
        seq_printf(m, " chain lookup misses:           %11llu\n",
-- 
2.21.0
