It is possible for all CPUs to miss the pending cpumask becoming clear,
in which case no CPU resets it and the lockup detector stops working.
The watchdog period will eventually expire, but watchdog_smp_panic does
nothing while the pending mask is clear, so the mask is never reset.

Order the cpumask clear vs the subsequent test to close this race.

Add an extra check for an empty pending mask when the watchdog fires and
finds its bit already clear, to try to catch any other possible races or
bugs here and keep the watchdog working. The extra test in
arch_touch_nmi_watchdog is required to prevent the new warning from
firing spuriously.

Debugged-by: Laurent Dufour <lduf...@linux.ibm.com>
Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---
 arch/powerpc/kernel/watchdog.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/watchdog.c b/arch/powerpc/kernel/watchdog.c
index f9ea0e5357f9..4bb7c8e371a2 100644
--- a/arch/powerpc/kernel/watchdog.c
+++ b/arch/powerpc/kernel/watchdog.c
@@ -215,13 +215,38 @@ static void wd_smp_clear_cpu_pending(int cpu, u64 tb)
 
                        cpumask_clear_cpu(cpu, &wd_smp_cpus_stuck);
                        wd_smp_unlock(&flags);
+               } else {
+                       /*
+                        * The last CPU to clear pending should have reset the
+                        * watchdog, yet we find it empty here. This should not
+                        * happen but we can try to recover and avoid a false
+                        * positive if it does.
+                        */
+                       if (WARN_ON_ONCE(cpumask_empty(&wd_smp_cpus_pending)))
+                               goto none_pending;
                }
                return;
        }
+
        cpumask_clear_cpu(cpu, &wd_smp_cpus_pending);
+       /*
+        * Order the store to clear pending with the load(s) that check
+        * all words in the pending mask for emptiness. This orders with
+        * the same barrier on another CPU. This prevents two CPUs clearing
+        * the last 2 pending bits, but neither seeing the other's store
+        * when checking if the mask is empty, and missing an empty mask,
+        * which ends with a false positive.
+        */
+       smp_mb();
        if (cpumask_empty(&wd_smp_cpus_pending)) {
                unsigned long flags;
 
+none_pending:
+               /*
+                * Double check under lock because more than one CPU could see
+                * a clear mask with the lockless check after clearing their
+                * pending bits.
+                */
                wd_smp_lock(&flags);
                if (cpumask_empty(&wd_smp_cpus_pending)) {
                        wd_smp_last_reset_tb = tb;
@@ -312,8 +337,12 @@ void arch_touch_nmi_watchdog(void)
 {
        unsigned long ticks = tb_ticks_per_usec * wd_timer_period_ms * 1000;
        int cpu = smp_processor_id();
-       u64 tb = get_tb();
+       u64 tb;
 
+       if (!cpumask_test_cpu(cpu, &watchdog_cpumask))
+               return;
+
+       tb = get_tb();
        if (tb - per_cpu(wd_timer_tb, cpu) >= ticks) {
                per_cpu(wd_timer_tb, cpu) = tb;
                wd_smp_clear_cpu_pending(cpu, tb);
-- 
2.23.0
