On 3/4/2021 1:23 AM, [email protected] wrote:
From: Frederic Weisbecker <[email protected]>

Enqueuing a local timer after the tick has been stopped will result in
the timer being ignored until the next random interrupt.

Perform sanity checks to report these situations.
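
For illustration, a hypothetical snippet (not code from the tree; my_timer,
my_timer_fn and my_late_idle_hook are made-up names) that would run into
this: a hook on the idle path, called after the tick has been stopped,
arms a timer on the local CPU and nothing re-programs the next event:

	static void my_timer_fn(struct timer_list *t) { }
	static DEFINE_TIMER(my_timer, my_timer_fn);

	static void my_late_idle_hook(void)
	{
		/*
		 * The tick is already stopped at this point, so this
		 * enqueue may only be noticed at the next unrelated
		 * interrupt.
		 */
		mod_timer(&my_timer, jiffies + 1);
	}

The check below is expected to report such a local enqueue, under
CONFIG_SCHED_DEBUG.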

Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>

Reviewed-by: Rafael J. Wysocki <[email protected]>


---
  kernel/sched/core.c | 24 +++++++++++++++++++++++-
  1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ca2bb62..4822371 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -674,6 +674,26 @@ int get_nohz_timer_target(void)
        return cpu;
 }
 
+static void wake_idle_assert_possible(void)
+{
+#ifdef CONFIG_SCHED_DEBUG
+       /* Timers are re-evaluated after idle IRQs */
+       if (in_hardirq())
+               return;
+       /*
+        * Same as hardirqs, assuming they are executing
+        * on IRQ tail. Ksoftirqd shouldn't reach here
+        * as the timer base wouldn't be idle. And inline
+        * softirq processing after a call to local_bh_enable()
+        * within the idle loop sounds too fun to be considered here.
+        */
+       if (in_serving_softirq())
+               return;
+
+       WARN_ONCE(1, "Late timer enqueue may be ignored\n");
+#endif
+}
+
 /*
  * When add_timer_on() enqueues a timer into the timer wheel of an
  * idle CPU then this timer might expire before the next timer event
@@ -688,8 +708,10 @@ static void wake_up_idle_cpu(int cpu)
 {
        struct rq *rq = cpu_rq(cpu);
 
-       if (cpu == smp_processor_id())
+       if (cpu == smp_processor_id()) {
+               wake_idle_assert_possible();
                return;
+       }
 
        if (set_nr_and_not_polling(rq->idle))
                smp_send_reschedule(cpu);
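
For reference, the local enqueue path this check is expected to catch looks
roughly like this (simplified sketch, assuming a non-deferrable timer and an
already idle timer base):

	mod_timer()
	  ...
	    trigger_dyntick_cpu()
	      wake_up_nohz_cpu()
	        wake_up_idle_cpu()	/* cpu == smp_processor_id(): warn */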

