From: "Gautham R. Shenoy" <[email protected]>

Currently, TIF_NEED_RESCHED is overloaded to wake up an idle CPU in
TIF_POLLING mode so it can service an IPI, even when no new tasks are
being woken up on that CPU.

In preparation for a proper fix, introduce a new helper,
need_resched_or_ipi(), which returns true if either the
TIF_NEED_RESCHED flag or the TIF_NOTIFY_IPI flag is set. Use this
helper in place of need_resched() in idle loops where
TIF_POLLING_NRFLAG is set.

To preserve bisectability and avoid unbreakable idle loops, all
need_resched() checks within TIF_POLLING_NRFLAG sections have been
replaced tree-wide with need_resched_or_ipi().

[ prateek: Replaced some missed occurrences of need_resched()
  within TIF_POLLING sections with need_resched_or_ipi() ]

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Benjamin Gray <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Xin Li <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Leonardo Bras <[email protected]>
Cc: Imran Khan <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o Fixed a conflict with commit edc8fc01f608 ("x86: Fix
  CPUIDLE_FLAG_IRQ_ENABLE leaking timer reprogram") that touched
  mwait_idle_with_hints() in arch/x86/include/asm/mwait.h
---
 arch/x86/include/asm/mwait.h      | 2 +-
 arch/x86/kernel/process.c         | 2 +-
 drivers/cpuidle/cpuidle-powernv.c | 2 +-
 drivers/cpuidle/cpuidle-pseries.c | 2 +-
 drivers/cpuidle/poll_state.c      | 2 +-
 include/linux/sched.h             | 5 +++++
 include/linux/sched/idle.h        | 4 ++--
 kernel/sched/idle.c               | 7 ++++---
 8 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 920426d691ce..3fa6f0bbd74f 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -125,7 +125,7 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
 
                __monitor((void *)&current_thread_info()->flags, 0, 0);
 
-               if (!need_resched()) {
+               if (!need_resched_or_ipi()) {
                        if (ecx & 1) {
                                __mwait(eax, ecx);
                        } else {
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b8441147eb5e..dd73cd6f735c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -901,7 +901,7 @@ static __cpuidle void mwait_idle(void)
                }
 
                __monitor((void *)&current_thread_info()->flags, 0, 0);
-               if (!need_resched()) {
+               if (!need_resched_or_ipi()) {
                        __sti_mwait(0, 0);
                        raw_local_irq_disable();
                }
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 9ebedd972df0..77c3bb371f56 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -79,7 +79,7 @@ static int snooze_loop(struct cpuidle_device *dev,
        dev->poll_time_limit = false;
        ppc64_runlatch_off();
        HMT_very_low();
-       while (!need_resched()) {
+       while (!need_resched_or_ipi()) {
                if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
                        /*
                         * Task has not woken up but we are exiting the polling
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 14db9b7d985d..4f2b490f8b73 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -46,7 +46,7 @@ int snooze_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv,
        snooze_exit_time = get_tb() + snooze_timeout;
        dev->poll_time_limit = false;
 
-       while (!need_resched()) {
+       while (!need_resched_or_ipi()) {
                HMT_low();
                HMT_very_low();
                if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index 9b6d90a72601..225f37897e45 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -26,7 +26,7 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,
 
                limit = cpuidle_poll_time(drv, dev);
 
-               while (!need_resched()) {
+               while (!need_resched_or_ipi()) {
                        cpu_relax();
                        if (loop_count++ < POLL_IDLE_RELAX_COUNT)
                                continue;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 90691d99027e..e52cdd1298bf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2069,6 +2069,11 @@ static __always_inline bool need_resched(void)
        return unlikely(tif_need_resched());
 }
 
+static __always_inline bool need_resched_or_ipi(void)
+{
+       return unlikely(tif_need_resched() || tif_notify_ipi());
+}
+
 /*
  * Wrappers for p->thread_info->cpu access. No-op on UP.
  */
diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index e670ac282333..497518b84e8d 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -63,7 +63,7 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
         */
        smp_mb__after_atomic();
 
-       return unlikely(tif_need_resched());
+       return unlikely(need_resched_or_ipi());
 }
 
 static __always_inline bool __must_check current_clr_polling_and_test(void)
@@ -76,7 +76,7 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
         */
        smp_mb__after_atomic();
 
-       return unlikely(tif_need_resched());
+       return unlikely(need_resched_or_ipi());
 }
 
 #else
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 6e78d071beb5..7de94df5d477 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -57,7 +57,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
        ct_cpuidle_enter();
 
        raw_local_irq_enable();
-       while (!tif_need_resched() &&
+       while (!need_resched_or_ipi() &&
               (cpu_idle_force_poll || tick_check_broadcast_expired()))
                cpu_relax();
        raw_local_irq_disable();
@@ -174,7 +174,7 @@ static void cpuidle_idle_call(void)
         * Check if the idle task must be rescheduled. If it is the
         * case, exit the function after re-enabling the local IRQ.
         */
-       if (need_resched()) {
+       if (need_resched_or_ipi()) {
                local_irq_enable();
                return;
        }
@@ -270,7 +270,7 @@ static void do_idle(void)
        __current_set_polling();
        tick_nohz_idle_enter();
 
-       while (!need_resched()) {
+       while (!need_resched_or_ipi()) {
                rmb();
 
                /*
@@ -350,6 +350,7 @@ static void do_idle(void)
         * RCU relies on this call to be done outside of an RCU read-side
         * critical section.
         */
+       current_clr_notify_ipi();
        flush_smp_call_function_queue();
        schedule_idle();
 
-- 
2.34.1

