From: Rik van Riel <[email protected]>

About 40% of all csd_lock warnings observed in our fleet appear to
be due to sched_clock() going backward in time (usually only a little
bit), resulting in ts0 being larger than ts2.
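As I read it, ts0 is the sched_clock() reading taken when the CSD wait
began and ts2 a later reading; when the clock steps backward, the
unsigned elapsed-time calculation presumably wraps around and looks like
an enormous wait. A minimal, self-contained sketch of that wrap-around
(made-up timestamp values, not the kernel code itself):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* Hypothetical sched_clock() readings, in nanoseconds. */
		uint64_t ts0 = 1000128;	/* taken when the CSD wait began */
		uint64_t ts2 = 1000000;	/* later reading, after a 128 ns backward step */

		/* Unsigned subtraction wraps, so the "elapsed" time looks huge. */
		uint64_t ts_delta = ts2 - ts0;

		printf("apparent wait: %llu ns\n", (unsigned long long)ts_delta);
		return 0;
	}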

When the local CPU is at fault, we should print out a message reflecting
that, rather than trying to get the remote CPU's stack trace.
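With this change, a backward step on the local CPU is reported with a
line of the following form (the CPU number and delta shown here are
illustrative values, filled into the pr_alert() format added by the
patch):

	sched_clock on CPU 3 went backward by 128 ns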

Signed-off-by: Rik van Riel <[email protected]>
Tested-by: "Paul E. McKenney" <[email protected]>
Signed-off-by: Neeraj Upadhyay <[email protected]>
---
 kernel/smp.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/smp.c b/kernel/smp.c
index dfcde438ef63..143ae26f96a2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -253,6 +253,14 @@ static bool csd_lock_wait_toolong(call_single_data_t *csd, u64 ts0, u64 *ts1, int *bug_id)
                   csd_lock_timeout_ns == 0))
                return false;
 
+       if (ts0 > ts2) {
+               /* Our own sched_clock went backward; don't blame another CPU. */
+               ts_delta = ts0 - ts2;
+               pr_alert("sched_clock on CPU %d went backward by %llu ns\n", 
raw_smp_processor_id(), ts_delta);
+               *ts1 = ts2;
+               return false;
+       }
+
        firsttime = !*bug_id;
        if (firsttime)
                *bug_id = atomic_inc_return(&csd_bug_count);
-- 
2.40.1
