Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

2024-03-06 Thread K Prateek Nayak
Hello Vincent,

Thank you for taking a look at the series.

On 3/6/2024 3:29 PM, Vincent Guittot wrote:
> Hi Prateek,
> 
> Adding Julia who could be interested in this patchset. Your patchset
> should trigger idle load balance instead of newly idle load balance
> now when the polling is used. This was one reason for not migrating
> task in idle CPU

Thank you.

> 
> On Tue, 20 Feb 2024 at 18:15, K Prateek Nayak  wrote:
>>
>> Hello everyone,
>>
>> [..snip..]
>>
>>
>> Skipping newidle_balance()
>> ==========================
>>
>> In an earlier attempt to solve the challenge of the long IRQ-disabled
>> section, newidle_balance() was skipped when a CPU waking up from idle
>> was found to have no runnable tasks and was transitioning back to
>> idle [2]. Tim [3] and David [4] pointed out that newidle_balance()
>> may be viable for CPUs that are idling with the tick enabled, where
>> newidle_balance() has the opportunity to pull tasks onto the idle CPU.
>>
>> Vincent [5] pointed out a case where the idle load kick will fail to
>> run on an idle CPU since the IPI handler launching the ILB will check
>> for need_resched(). In such cases, the idle CPU relies on
>> newidle_balance() to pull tasks towards itself.
> 
> Calling newidle_balance() instead of the normal idle load balance
> prevents the CPU from pulling tasks from other groups

Thank you for the correction.
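
For anyone following along: the need_resched() check mentioned above sits in
the IPI handler that kicks the idle load balancer. Below is a simplified
sketch, paraphrasing nohz_csd_func() in kernel/sched/fair.c rather than
quoting it verbatim, with the nohz flag bookkeeping omitted; the "_sketch"
suffix only marks it as an illustration:

    /*
     * Sketch only: SCHED_SOFTIRQ, which runs the idle load balance, is
     * raised only if the CPU is still idle and TIF_NEED_RESCHED is not
     * set. Since the current code also sets TIF_NEED_RESCHED to signal
     * a pending IPI to a TIF_POLLING CPU, the kick gets dropped in the
     * very case it was sent for.
     */
    static void nohz_csd_func_sketch(struct rq *rq)
    {
            rq->idle_balance = idle_cpu(cpu_of(rq));
            if (rq->idle_balance && !need_resched())
                    raise_softirq_irqoff(SCHED_SOFTIRQ);
            /* else: the ILB kick is silently dropped */
    }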

> 
>>
>> Using an alternate flag instead of NEED_RESCHED to indicate a pending
>> IPI was suggested as the correct approach to solve this problem on the
>> same thread.
>>
>>
>> Proposed solution: TIF_NOTIFY_IPI
>> =================================
>>
>> Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU
>> out of idle, TIF_NOTIFY_IPI is a newly introduced flag that
>> call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
>> indicate a pending IPI, which the idle CPU promises to process soon.
>>
>> On architectures that do not support the TIF_NOTIFY_IPI flag (this
>> series only adds support for x86 and ARM processors for now),
> 
> I'm surprised that you are mentioning ARM processors because they
> don't use TIF_POLLING.

Yup I just realised that after Linus Walleij pointed it out on the
thread.

> 
>> call_function_single_prep_ipi() will fall back to setting the
>> TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.
>>
>> Since the pending IPI handlers are processed before the call to
>> schedule_idle() in do_idle(), schedule_idle() will only be called if an
>> IPI handler has woken / migrated a new task onto the idle CPU and has
>> set the TIF_NEED_RESCHED bit to indicate the same. This avoids running
>> into the long IRQ-disabled section in schedule_idle() unnecessarily, and
>> any need_resched() check within a call function handler will accurately
>> report whether a task is waiting for CPU time on the CPU handling the IPI.
>>
>> Following is a crude visualization of how the situation changes with
>> the newly introduced TIF_NOTIFY_IPI flag:
>> --------------------------------------------------------------------------
>> CPU0                                                  CPU1
>> ====                                                  ====
>>                                                       do_idle() {
>>                                                           __current_set_polling();
>>                                                           ...
>>                                                           monitor(addr);
>>                                                           if (!need_resched_or_ipi())
>>                                                               mwait() {
>>                                                                   /* Waiting */
>> smp_call_function_single(CPU1, func, wait = 1) {                  ...
>>     ...                                                           ...
>>     set_nr_if_polling(CPU1) {                                     ...
>>         /* Realizes CPU1 is polling */                            ...
>>         try_cmpxchg(addr,                                         ...
>>                     &val,                                         ...
>>                     val | _TIF_NOTIFY_IPI);                       ...
>>     } /* Does not send an IPI */                                  ...
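
To make the ordering above concrete, the idle exit on CPU1 is expected to
look roughly like the sketch below. This is an illustration only, assuming
the kernel/sched/idle.c changes from patch 3 of the series;
idle_exit_sketch() is a made-up name, not the actual function:

    /*
     * Illustration: clear the IPI notification, run the queued
     * call-function handlers, and enter schedule_idle() only if one of
     * them actually woke a task and therefore set TIF_NEED_RESCHED.
     */
    static void idle_exit_sketch(void)
    {
            current_clr_notify_ipi();               /* ack the pending IPI notification */
            flush_smp_call_function_queue();        /* run the queued IPI handlers */

            if (need_resched())                     /* set only if a task was woken */
                    schedule_idle();
    }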

[RFC PATCH 03/14] sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of pending IPI

2024-02-20 Thread K Prateek Nayak
c: "Aneesh Kumar K.V" 
Cc: "Naveen N. Rao" 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: John Paul Adrian Glaubitz 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: "H. Peter Anvin" 
Cc: "Rafael J. Wysocki" 
Cc: Daniel Lezcano 
Cc: Peter Zijlstra 
Cc: Juri Lelli 
Cc: Vincent Guittot 
Cc: Dietmar Eggemann 
Cc: Steven Rostedt 
Cc: Ben Segall 
Cc: Mel Gorman 
Cc: Daniel Bristot de Oliveira 
Cc: Valentin Schneider 
Cc: Al Viro 
Cc: Linus Walleij 
Cc: Ard Biesheuvel 
Cc: Andrew Donnellan 
Cc: Nicholas Miehlbradt 
Cc: Andrew Morton 
Cc: Arnd Bergmann 
Cc: Josh Poimboeuf 
Cc: "Kirill A. Shutemov" 
Cc: Rick Edgecombe 
Cc: Tony Battersby 
Cc: Brian Gerst 
Cc: Tim Chen 
Cc: David Vernet 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-openr...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: Gautham R. Shenoy 
Co-developed-by: K Prateek Nayak 
Signed-off-by: K Prateek Nayak 
---
 include/linux/sched/idle.h |  8 
 kernel/sched/core.c        | 41 ++
 kernel/sched/idle.c        | 16 +++
 3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index d739ab810e00..c22312087c30 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -58,8 +58,8 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
__current_set_polling();
 
/*
-* Polling state must be visible before we test NEED_RESCHED,
-* paired by resched_curr()
+* Polling state must be visible before we test NEED_RESCHED or
+* NOTIFY_IPI paired by resched_curr() or notify_ipi_if_polling()
 */
smp_mb__after_atomic();
 
@@ -71,8 +71,8 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
__current_clr_polling();
 
/*
-* Polling state must be visible before we test NEED_RESCHED,
-* paired by resched_curr()
+* Polling state must be visible before we test NEED_RESCHED or
+* NOTIFY_IPI paired by resched_curr() or notify_ipi_if_polling()
 */
smp_mb__after_atomic();
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index db4be4921e7f..6fb6e5b75724 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -909,12 +909,30 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
 }
 
 /*
- * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
+ * Certain architectures that support TIF_POLLING_NRFLAG may not support
+ * TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of a pending
+ * IPI. On such architectures, set TIF_NEED_RESCHED instead to wake the
+ * idle CPU and process the pending IPI.
+ */
+#ifdef _TIF_NOTIFY_IPI
+#define _TIF_WAKE_FLAG _TIF_NOTIFY_IPI
+#else
+#define _TIF_WAKE_FLAG _TIF_NEED_RESCHED
+#endif
+
+/*
+ * Atomically set TIF_WAKE_FLAG when TIF_POLLING_NRFLAG is set.
+ *
+ * On architectures that define TIF_NOTIFY_IPI, the same is set in the
+ * idle task's thread_info to pull the CPU out of idle and process
+ * the pending interrupt. On architectures that don't support
+ * TIF_NOTIFY_IPI, TIF_NEED_RESCHED is set instead to notify the
+ * pending IPI.
  *
- * If this returns true, then the idle task promises to call
- * sched_ttwu_pending() and reschedule soon.
+ * If this returns true, then the idle task promises to process the
+ * call function soon.
  */
-static bool set_nr_if_polling(struct task_struct *p)
+static bool notify_ipi_if_polling(struct task_struct *p)
 {
struct thread_info *ti = task_thread_info(p);
typeof(ti->flags) val = READ_ONCE(ti->flags);
@@ -922,9 +940,16 @@ static bool set_nr_if_polling(struct task_struct *p)
do {
if (!(val & _TIF_POLLING_NRFLAG))
return false;
-   if (val & _TIF_NEED_RESCHED)
+   /*
+* If TIF_NEED_RESCHED flag is set in addition to
+* TIF_POLLING_NRFLAG, the CPU will soon fall out of
+* idle. Since flush_smp_call_function_queue() is called
+* soon after the idle exit, setting TIF_WAKE_FLAG is
+* not necessary.
+*/
+   if (val & (_TIF_NEED_RESCHED | _TIF_WAKE_FLAG))
return true;
-   } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
+   } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_WAKE_FLAG));
 
return true;
 }
@@ -937,7 +962,7 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
 }
 
 #ifdef CONFIG_SMP
-static inline bool

[RFC PATCH 02/14] sched: Define a need_resched_or_ipi() helper and use it treewide

2024-02-20 Thread K Prateek Nayak
From: "Gautham R. Shenoy" 

Currently, TIF_NEED_RESCHED is being overloaded to wake up an idle CPU in
TIF_POLLING mode so that it services an IPI even when no new task is being
woken up on the said CPU.

In preparation for a proper fix, introduce a new helper,
"need_resched_or_ipi()", which returns true if either the
TIF_NEED_RESCHED flag or the TIF_NOTIFY_IPI flag is set. Use this
helper in place of need_resched() in idle loops where
TIF_POLLING_NRFLAG is set.

To preserve bisectability and avoid idle loops that can never be broken
out of, all the need_resched() checks within TIF_POLLING_NRFLAG sections
have been replaced tree-wide with the need_resched_or_ipi() check.
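
The helper itself lives in include/linux/sched.h (hunk not shown here);
going by the description above, it presumably boils down to something like
the following sketch:

    /* Presumed shape of the helper, for illustration only. */
    static __always_inline bool need_resched_or_ipi(void)
    {
            return unlikely(tif_need_resched() || tif_notify_ipi());
    }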

[ prateek: Replaced some of the missed occurrences of
  need_resched() within TIF_POLLING sections with
  need_resched_or_ipi() ]

Cc: Richard Henderson 
Cc: Ivan Kokshaysky 
Cc: Matt Turner 
Cc: Russell King 
Cc: Guo Ren 
Cc: Michal Simek 
Cc: Dinh Nguyen 
Cc: Jonas Bonn 
Cc: Stefan Kristiansson 
Cc: Stafford Horne 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Michael Ellerman 
Cc: Nicholas Piggin 
Cc: Christophe Leroy 
Cc: "Aneesh Kumar K.V" 
Cc: "Naveen N. Rao" 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: John Paul Adrian Glaubitz 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: "H. Peter Anvin" 
Cc: "Rafael J. Wysocki" 
Cc: Daniel Lezcano 
Cc: Peter Zijlstra 
Cc: Juri Lelli 
Cc: Vincent Guittot 
Cc: Dietmar Eggemann 
Cc: Steven Rostedt 
Cc: Ben Segall 
Cc: Mel Gorman 
Cc: Daniel Bristot de Oliveira 
Cc: Valentin Schneider 
Cc: Al Viro 
Cc: Linus Walleij 
Cc: Ard Biesheuvel 
Cc: Andrew Donnellan 
Cc: Nicholas Miehlbradt 
Cc: Andrew Morton 
Cc: Arnd Bergmann 
Cc: Josh Poimboeuf 
Cc: "Kirill A. Shutemov" 
Cc: Rick Edgecombe 
Cc: Tony Battersby 
Cc: Brian Gerst 
Cc: Tim Chen 
Cc: David Vernet 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-openr...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: Gautham R. Shenoy 
Co-developed-by: K Prateek Nayak 
Signed-off-by: K Prateek Nayak 
---
 arch/x86/include/asm/mwait.h  | 2 +-
 arch/x86/kernel/process.c | 2 +-
 drivers/cpuidle/cpuidle-powernv.c | 2 +-
 drivers/cpuidle/cpuidle-pseries.c | 2 +-
 drivers/cpuidle/poll_state.c  | 2 +-
 include/linux/sched.h | 5 +
 include/linux/sched/idle.h| 4 ++--
 kernel/sched/idle.c   | 7 ---
 8 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 778df05f8539..ac1370143407 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -115,7 +115,7 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
}
 
__monitor((void *)&current_thread_info()->flags, 0, 0);
-   if (!need_resched())
+   if (!need_resched_or_ipi())
__mwait(eax, ecx);
}
current_clr_polling();
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b6f4e8399fca..ca6cb7e28cba 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -925,7 +925,7 @@ static __cpuidle void mwait_idle(void)
}
 
__monitor((void *)&current_thread_info()->flags, 0, 0);
-   if (!need_resched()) {
+   if (!need_resched_or_ipi()) {
__sti_mwait(0, 0);
raw_local_irq_disable();
}
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 9ebedd972df0..77c3bb371f56 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -79,7 +79,7 @@ static int snooze_loop(struct cpuidle_device *dev,
dev->poll_time_limit = false;
ppc64_runlatch_off();
HMT_very_low();
-   while (!need_resched()) {
+   while (!need_resched_or_ipi()) {
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
/*
 * Task has not woken up but we are exiting the polling
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 14db9b7d985d..4f2b490f8b73 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -46,7 +46,7 @@ int snooze_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv,
snooze_exit_time = get_tb() + snooze_timeout;
dev->poll_time_limit = false;
 
-   while (!need_resched()) {
+   while (!need_resched_or_ipi()) {
HMT_low();
HMT

[RFC PATCH 01/14] thread_info: Add helpers to test and clear TIF_NOTIFY_IPI

2024-02-20 Thread K Prateek Nayak
From: "Gautham R. Shenoy" 

Introduce the notion of a TIF_NOTIFY_IPI flag. Currently, when a processor
in TIF_POLLING mode needs to process an IPI, the sender sets the
NEED_RESCHED bit in the idle task's thread_info to pull the target out of
idle, and avoids sending an interrupt to the idle CPU. When NEED_RESCHED is
set, the scheduler assumes that a new task has been queued on the idle CPU
and calls schedule_idle(); however, an IPI sent to an idle CPU does not
necessarily end up waking a task on the said CPU. To avoid such spurious
calls to schedule_idle(), TIF_NOTIFY_IPI will be used instead to pull a
TIF_POLLING CPU out of idle.

Since the IPI handlers are processed before the call to schedule_idle(),
schedule_idle() will be called only if one of the handlers has woken up a
new task on the CPU and has set NEED_RESCHED.

Add tif_notify_ipi() and current_clr_notify_ipi() helpers to test whether
TIF_NOTIFY_IPI is set in the current task's thread_info, and to clear it,
respectively. These interfaces will be used in subsequent patches as the
TIF_NOTIFY_IPI notion is integrated into the scheduler and the idle path.
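
For illustration, an architecture opts in by defining the flag in its
asm/thread_info.h along the lines of the sketch below; the bit number and
the "<arch>" placeholder are made up for the example, and the real
definitions come in the later per-architecture patches of this series:

    /* arch/<arch>/include/asm/thread_info.h -- illustrative only */
    #define TIF_NOTIFY_IPI  14              /* Pending IPI on an idle, TIF_POLLING CPU */
    #define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)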

[ prateek: Split the changes into a separate patch, add commit log ]

Cc: Richard Henderson 
Cc: Ivan Kokshaysky 
Cc: Matt Turner 
Cc: Russell King 
Cc: Guo Ren 
Cc: Michal Simek 
Cc: Dinh Nguyen 
Cc: Jonas Bonn 
Cc: Stefan Kristiansson 
Cc: Stafford Horne 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Michael Ellerman 
Cc: Nicholas Piggin 
Cc: Christophe Leroy 
Cc: "Aneesh Kumar K.V" 
Cc: "Naveen N. Rao" 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: John Paul Adrian Glaubitz 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: "H. Peter Anvin" 
Cc: "Rafael J. Wysocki" 
Cc: Daniel Lezcano 
Cc: Peter Zijlstra 
Cc: Juri Lelli 
Cc: Vincent Guittot 
Cc: Dietmar Eggemann 
Cc: Steven Rostedt 
Cc: Ben Segall 
Cc: Mel Gorman 
Cc: Daniel Bristot de Oliveira 
Cc: Valentin Schneider 
Cc: Al Viro 
Cc: Linus Walleij 
Cc: Ard Biesheuvel 
Cc: Andrew Donnellan 
Cc: Nicholas Miehlbradt 
Cc: Andrew Morton 
Cc: Arnd Bergmann 
Cc: Josh Poimboeuf 
Cc: "Kirill A. Shutemov" 
Cc: Rick Edgecombe 
Cc: Tony Battersby 
Cc: Brian Gerst 
Cc: Tim Chen 
Cc: David Vernet 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-openr...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: Gautham R. Shenoy 
Co-developed-by: K Prateek Nayak 
Signed-off-by: K Prateek Nayak 
---
 include/linux/thread_info.h | 43 +
 1 file changed, 43 insertions(+)

diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 9ea0b28068f4..1e10dd8c0227 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -195,6 +195,49 @@ static __always_inline bool tif_need_resched(void)
 
 #endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
 
+#ifdef TIF_NOTIFY_IPI
+
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+
+static __always_inline bool tif_notify_ipi(void)
+{
+   return arch_test_bit(TIF_NOTIFY_IPI,
+                        (unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+   arch_clear_bit(TIF_NOTIFY_IPI,
+                  (unsigned long *)(&current_thread_info()->flags));
+}
+
+#else
+
+static __always_inline bool tif_notify_ipi(void)
+{
+   return test_bit(TIF_NOTIFY_IPI,
+                   (unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+   clear_bit(TIF_NOTIFY_IPI,
+                 (unsigned long *)(&current_thread_info()->flags));
+}
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
+
+#else /* !TIF_NOTIFY_IPI */
+
+static __always_inline bool tif_notify_ipi(void)
+{
+   return false;
+}
+
+static __always_inline void current_clr_notify_ipi(void) { }
+
+#endif /* TIF_NOTIFY_IPI */
+
 #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
 static inline int arch_within_stack_frames(const void * const stack,
   const void * const stackend,
-- 
2.34.1




[RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

2024-02-20 Thread K Prateek Nayak
aken to complete a fixed set of IPIs
using ipistorm improves drastically. Following are the numbers from the
same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
C2 disabled) running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=10 single=1 offset=8 cpulist=8 wait=1

  ==
  Test  : ipistorm (modified)
  Units : Normalized runtime
  Interpretation: Lower is better
  Statistic : AMean
  ==
  kernel:   time [pct imp]
  tip:sched/core1.00 [0.00]
  tip:sched/core + revert   0.81 [19.36]
  tip:sched/core + TIF_NOTIFY_IPI   0.20 [80.99]

The same experiment was repeated on a dual socket ARM server (2 x 64C),
which also saw a significant improvement in ipistorm performance:

  ==
  Test  : ipistorm (modified)
  Units : Normalized runtime
  Interpretation: Lower is better
  Statistic : AMean
  ==
  kernel:   time [pct imp]
  tip:sched/core1.00 [0.00]
  tip:sched/core + TIF_NOTIFY_IPI   0.41 [59.29]

netperf and tbench results with the patch match the results on tip on the
dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of tip.


Future Work
===

Evaluate the impact of newidle_balance() when the scheduler tick hits an
idle CPU. The call to newidle_balance() will be skipped with the
TIF_NOTIFY_IPI solution, similar to [2]. The counter-argument for this case
is that if the idle state did not set the TIF_POLLING bit, the idle CPU
would not have called schedule_idle() unless the IPI handler had set the
NEED_RESCHED bit.


Links
=

[1] https://github.com/antonblanchard/ipistorm
[2] https://lore.kernel.org/lkml/20240119084548.2788-1-kprateek.na...@amd.com/
[3] https://lore.kernel.org/lkml/b4f5ac150685456cf45a342e3bb1f28cdd557a53.ca...@linux.intel.com/
[4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
[5] https://lore.kernel.org/lkml/cakftptc446lo9catpp7pexdklhhqfobuy-jmgc7agohy4hs...@mail.gmail.com/

This series is based on tip:sched/core at tag "sched-core-2024-01-08".
---
Gautham R. Shenoy (4):
  thread_info: Add helpers to test and clear TIF_NOTIFY_IPI
  sched: Define a need_resched_or_ipi() helper and use it treewide
  sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING
mode of pending IPI
  x86/thread_info: Introduce TIF_NOTIFY_IPI flag

K Prateek Nayak (10):
  arm/thread_info: Introduce TIF_NOTIFY_IPI flag
  alpha/thread_info: Introduce TIF_NOTIFY_IPI flag
  openrisc/thread_info: Introduce TIF_NOTIFY_IPI flag
  powerpc/thread_info: Introduce TIF_NOTIFY_IPI flag
  sh/thread_info: Introduce TIF_NOTIFY_IPI flag
  sparc/thread_info: Introduce TIF_NOTIFY_IPI flag
  csky/thread_info: Introduce TIF_NOTIFY_IPI flag
  parisc/thread_info: Introduce TIF_NOTIFY_IPI flag
  nios2/thread_info: Introduce TIF_NOTIFY_IPI flag
  microblaze/thread_info: Introduce TIF_NOTIFY_IPI flag
---
Cc: Richard Henderson 
Cc: Ivan Kokshaysky 
Cc: Matt Turner 
Cc: Russell King 
Cc: Guo Ren 
Cc: Michal Simek 
Cc: Dinh Nguyen 
Cc: Jonas Bonn 
Cc: Stefan Kristiansson 
Cc: Stafford Horne 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Michael Ellerman 
Cc: Nicholas Piggin 
Cc: Christophe Leroy 
Cc: "Aneesh Kumar K.V" 
Cc: "Naveen N. Rao" 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: John Paul Adrian Glaubitz 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: "H. Peter Anvin" 
Cc: "Rafael J. Wysocki" 
Cc: Daniel Lezcano 
Cc: Peter Zijlstra 
Cc: Juri Lelli 
Cc: Vincent Guittot 
Cc: Dietmar Eggemann 
Cc: Steven Rostedt 
Cc: Ben Segall 
Cc: Mel Gorman 
Cc: Daniel Bristot de Oliveira 
Cc: Valentin Schneider 
Cc: Al Viro 
Cc: Linus Walleij 
Cc: Ard Biesheuvel 
Cc: Andrew Donnellan 
Cc: Nicholas Miehlbradt 
Cc: Andrew Morton 
Cc: Arnd Bergmann 
Cc: Josh Poimboeuf 
Cc: "Kirill A. Shutemov" 
Cc: Rick Edgecombe 
Cc: Tony Battersby 
Cc: Brian Gerst 
Cc: Tim Chen 
Cc: David Vernet 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-openr...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@vger.kernel.org
---
 arch/alpha/include/asm/thread_info.h  |  2 ++
 arch/arm/include/asm/thread_info.h|  3 ++
 arch/csky/include/asm/thread_info.h   |  2 ++
 arch/microblaze/include/asm/thread_info.h |  2 +