[RFC PATCH 03/14] sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of pending IPI

2024-02-20 Thread K Prateek Nayak
From: "Gautham R. Shenoy" 

Problem statement
=================

When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure the time taken
to perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and
the upstream kernel (v6.7-rc6).

Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime. Reverting the optimization introduced by the above commit fixed
the regression in ipistorm; however, benchmarks like tbench and netperf
regressed with the revert, supporting the validity of the optimization.

Following are the benchmark results on top of tip:sched/core with the
optimization reverted on a dual socket 3rd Generation AMD EPYC system
(2 x 64C/128T) running with boost enabled and C2 disabled:

(tip:sched/core at tag "sched-core-2024-01-08" for all the testing done
below)

  ==================================================================
  Test            : ipistorm (modified)
  Units           : Normalized runtime
  Interpretation  : Lower is better
  Statistic       : AMean
  cmdline         : insmod ipistorm.ko numipi=10 single=1 offset=8 cpulist=8 wait=1
  ==================================================================
  kernel:                       time [pct imp]
  tip:sched/core                1.00 [0.00]
  tip:sched/core + revert       0.81 [19.36]

  ==================================================================
  Test            : tbench
  Units           : Normalized throughput
  Interpretation  : Higher is better
  Statistic       : AMean
  ==================================================================
  Clients:    tip[pct imp](CV)       revert[pct imp](CV)
  1 1.00 [  0.00]( 0.24) 0.91 [ -8.96]( 0.30)
  2 1.00 [  0.00]( 0.25) 0.92 [ -8.20]( 0.97)
  4 1.00 [  0.00]( 0.23) 0.91 [ -9.20]( 1.75)
  8 1.00 [  0.00]( 0.69) 0.91 [ -9.48]( 1.56)
 16 1.00 [  0.00]( 0.66) 0.92 [ -8.49]( 2.43)
 32 1.00 [  0.00]( 0.96) 0.89 [-11.13]( 0.96)
 64 1.00 [  0.00]( 1.06) 0.90 [ -9.72]( 2.49)
128 1.00 [  0.00]( 0.70) 0.92 [ -8.36]( 1.26)
256 1.00 [  0.00]( 0.72) 0.97 [ -3.30]( 1.10)
512 1.00 [  0.00]( 0.42) 0.98 [ -1.73]( 0.37)
   1024 1.00 [  0.00]( 0.28) 0.99 [ -1.39]( 0.43)

  ==================================================================
  Test            : netperf
  Units           : Normalized throughput
  Interpretation  : Higher is better
  Statistic       : AMean
  ==================================================================
  Clients:    tip[pct imp](CV)       revert[pct imp](CV)
   1-clients 1.00 [  0.00]( 0.50) 0.89 [-10.51]( 0.20)
   2-clients 1.00 [  0.00]( 1.16) 0.89 [-11.10]( 0.59)
   4-clients 1.00 [  0.00]( 1.03) 0.89 [-10.68]( 0.38)
   8-clients 1.00 [  0.00]( 0.99) 0.89 [-10.54]( 0.50)
  16-clients 1.00 [  0.00]( 0.87) 0.89 [-10.92]( 0.95)
  32-clients 1.00 [  0.00]( 1.24) 0.89 [-10.85]( 0.63)
  64-clients 1.00 [  0.00]( 1.58) 0.90 [-10.11]( 1.18)
 128-clients 1.00 [  0.00]( 0.87) 0.89 [-10.94]( 1.11)
 256-clients 1.00 [  0.00]( 4.77) 1.00 [ -0.16]( 3.45)
 512-clients 1.00 [  0.00](56.16) 1.02 [  2.10](56.05)

Since a simple revert is not a viable solution, the changes in the code
path of call_function_single_prep_ipi(), with and without the
optimization, were audited to better understand the effect of the commit.

Effects of call_function_single_prep_ipi()
===========================================

To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle states,
where the idle CPU exits the idle state to process the interrupt and,
since need_resched() returns false, soon goes back to idle again.
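
For reference, the sender-side trick can be sketched as follows (a
simplified illustration of the idea described above, not the exact
upstream implementation; the function name below is made up for
illustration):

/*
 * Simplified sketch (illustrative only, not the upstream code): atomically
 * set TIF_NEED_RESCHED on the idle task only if the target is still polling
 * on its thread_info flags. Returns true if a real IPI must still be sent.
 */
static bool ipi_prep_by_setting_need_resched(struct task_struct *idle)
{
        struct thread_info *ti = task_thread_info(idle);
        unsigned long val = READ_ONCE(ti->flags);

        for (;;) {
                unsigned long old;

                if (!(val & _TIF_POLLING_NRFLAG))
                        return true;    /* not polling: a real IPI is needed */
                if (val & _TIF_NEED_RESCHED)
                        return false;   /* already marked: nothing to do */

                old = cmpxchg(&ti->flags, val, val | _TIF_NEED_RESCHED);
                if (old == val)
                        return false;   /* flag set: the polling loop wakes up */
                val = old;
        }
}

The polling CPU, which is monitoring its thread_info flags (e.g. via
monitor/mwait), observes the write and leaves the polling loop without
ever receiving an interrupt.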

When the TIF_NEED_RESCHED flag is set, do_idle() will call schedule_idle(),
a large part of which runs with local IRQs disabled. In the case of
ipistorm, when measuring IPI throughput, this long IRQ-disabled section
delays the processing of IPIs. Further auditing revealed that, in the
absence of any runnable tasks, pick_next_task_fair(), which is called from
the pick_next_task() fast path, will always call newidle_balance() in this
scenario, further increasing the time spent in the IRQ-disabled section.
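
In rough C, the resulting flow on the idle CPU can be sketched as below
(heavily abridged and illustrative only, not the actual do_idle()
implementation):

/* Heavily abridged sketch of the idle-exit path, for illustration only. */
static void idle_exit_sketch(void)
{
        while (!need_resched())
                cpu_relax();    /* really mwait/poll on thread_info->flags */

        flush_smp_call_function_queue();        /* queued IPI callback runs here */

        /*
         * TIF_NEED_RESCHED is still set, so the CPU also reschedules:
         * schedule_idle() -> __schedule() -> pick_next_task_fair()
         * -> newidle_balance(), largely with IRQs disabled, even though
         * no task was actually enqueued on this CPU.
         */
        schedule_idle();
}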

Following is the crude visualization of the problem with relevant
functions expanded:
--
CPU0                                            CPU1

do_idle() {
  

[RFC PATCH 02/14] sched: Define a need_resched_or_ipi() helper and use it treewide

2024-02-20 Thread K Prateek Nayak
From: "Gautham R. Shenoy" 

Currently, TIF_NEED_RESCHED is overloaded to wake up an idle CPU in
TIF_POLLING mode to service an IPI even if no new tasks are being
woken up on the said CPU.

In preparation for a proper fix, introduce a new helper,
need_resched_or_ipi(), which returns true if either the TIF_NEED_RESCHED
flag or the TIF_NOTIFY_IPI flag is set. Use this helper in place of
need_resched() in idle loops where TIF_POLLING_NRFLAG is set.

To preserve bisectability and avoid unbreakable idle loops, all the
need_resched() checks within TIF_POLLING_NRFLAG sections have been
replaced tree-wide with the need_resched_or_ipi() check.
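
Going by the diffstat, the helper itself presumably lands in
include/linux/sched.h, but that hunk is not visible in the excerpt of the
diff below. Based on the description above, it is expected to boil down to
something like the following sketch:

/* Hedged sketch based on the description above, not the exact hunk. */
static __always_inline bool need_resched_or_ipi(void)
{
        return unlikely(tif_need_resched() || tif_notify_ipi());
}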

[ prateek: Replaced some of the missed occurrences of
  need_resched() within TIF_POLLING sections with
  need_resched_or_ipi() ]

Cc: Richard Henderson 
Cc: Ivan Kokshaysky 
Cc: Matt Turner 
Cc: Russell King 
Cc: Guo Ren 
Cc: Michal Simek 
Cc: Dinh Nguyen 
Cc: Jonas Bonn 
Cc: Stefan Kristiansson 
Cc: Stafford Horne 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Michael Ellerman 
Cc: Nicholas Piggin 
Cc: Christophe Leroy 
Cc: "Aneesh Kumar K.V" 
Cc: "Naveen N. Rao" 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: John Paul Adrian Glaubitz 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: "H. Peter Anvin" 
Cc: "Rafael J. Wysocki" 
Cc: Daniel Lezcano 
Cc: Peter Zijlstra 
Cc: Juri Lelli 
Cc: Vincent Guittot 
Cc: Dietmar Eggemann 
Cc: Steven Rostedt 
Cc: Ben Segall 
Cc: Mel Gorman 
Cc: Daniel Bristot de Oliveira 
Cc: Valentin Schneider 
Cc: Al Viro 
Cc: Linus Walleij 
Cc: Ard Biesheuvel 
Cc: Andrew Donnellan 
Cc: Nicholas Miehlbradt 
Cc: Andrew Morton 
Cc: Arnd Bergmann 
Cc: Josh Poimboeuf 
Cc: "Kirill A. Shutemov" 
Cc: Rick Edgecombe 
Cc: Tony Battersby 
Cc: Brian Gerst 
Cc: Tim Chen 
Cc: David Vernet 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-openr...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: Gautham R. Shenoy 
Co-developed-by: K Prateek Nayak 
Signed-off-by: K Prateek Nayak 
---
 arch/x86/include/asm/mwait.h  | 2 +-
 arch/x86/kernel/process.c | 2 +-
 drivers/cpuidle/cpuidle-powernv.c | 2 +-
 drivers/cpuidle/cpuidle-pseries.c | 2 +-
 drivers/cpuidle/poll_state.c  | 2 +-
 include/linux/sched.h | 5 +
 include/linux/sched/idle.h| 4 ++--
 kernel/sched/idle.c   | 7 ---
 8 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 778df05f8539..ac1370143407 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -115,7 +115,7 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
}
 
__monitor((void *)_thread_info()->flags, 0, 0);
-   if (!need_resched())
+   if (!need_resched_or_ipi())
__mwait(eax, ecx);
}
current_clr_polling();
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b6f4e8399fca..ca6cb7e28cba 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -925,7 +925,7 @@ static __cpuidle void mwait_idle(void)
}
 
__monitor((void *)_thread_info()->flags, 0, 0);
-   if (!need_resched()) {
+   if (!need_resched_or_ipi()) {
__sti_mwait(0, 0);
raw_local_irq_disable();
}
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 9ebedd972df0..77c3bb371f56 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -79,7 +79,7 @@ static int snooze_loop(struct cpuidle_device *dev,
dev->poll_time_limit = false;
ppc64_runlatch_off();
HMT_very_low();
-   while (!need_resched()) {
+   while (!need_resched_or_ipi()) {
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
/*
 * Task has not woken up but we are exiting the polling
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 14db9b7d985d..4f2b490f8b73 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -46,7 +46,7 @@ int snooze_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv,
snooze_exit_time = get_tb() + snooze_timeout;
dev->poll_time_limit = false;
 
-   while (!need_resched()) {
+   while (!need_resched_or_ipi()) {
HMT_low();
HMT_very_low();
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
diff --git 

[RFC PATCH 01/14] thread_info: Add helpers to test and clear TIF_NOTIFY_IPI

2024-02-20 Thread K Prateek Nayak
From: "Gautham R. Shenoy" 

Introduce the notion of the TIF_NOTIFY_IPI flag. When a processor in
TIF_POLLING mode needs to process an IPI, the sender sets the NEED_RESCHED
bit in the idle task's thread_info to pull the target out of idle and
avoids sending an interrupt to the idle CPU. When NEED_RESCHED is set, the
scheduler assumes that a new task has been queued on the idle CPU and
calls schedule_idle(); however, an IPI delivered to an idle CPU will not
necessarily end up waking a task on the said CPU. To avoid such spurious
calls to schedule_idle(), TIF_NOTIFY_IPI will be used instead to pull a
TIF_POLLING CPU out of idle.

Since the IPI handlers are processed before the call to schedule_idle(),
schedule_idle() will be called only if one of the handlers has woken up
a new task on the CPU and has set NEED_RESCHED.

Add tif_notify_ipi() and current_clr_notify_ipi() helpers to test whether
TIF_NOTIFY_IPI is set in the current task's thread_info, and to clear it,
respectively. These interfaces will be used in subsequent patches as the
TIF_NOTIFY_IPI notion is integrated into the scheduler and the idle
path.
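
As a rough usage sketch (illustrative only; the actual call sites are
wired up by subsequent patches in the series), a TIF_POLLING idle loop
would consume these helpers along these lines:

/* Illustrative only: how a TIF_POLLING idle loop may consume the helpers. */
while (!need_resched() && !tif_notify_ipi())
        cpu_relax();                    /* keep polling on the thread_info flags */

if (tif_notify_ipi())
        current_clr_notify_ipi();       /* acknowledge the IPI notification */
/* pending IPI work is then flushed on the way out of the idle loop */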

[ prateek: Split the changes into a separate patch, add commit log ]

Cc: Richard Henderson 
Cc: Ivan Kokshaysky 
Cc: Matt Turner 
Cc: Russell King 
Cc: Guo Ren 
Cc: Michal Simek 
Cc: Dinh Nguyen 
Cc: Jonas Bonn 
Cc: Stefan Kristiansson 
Cc: Stafford Horne 
Cc: "James E.J. Bottomley" 
Cc: Helge Deller 
Cc: Michael Ellerman 
Cc: Nicholas Piggin 
Cc: Christophe Leroy 
Cc: "Aneesh Kumar K.V" 
Cc: "Naveen N. Rao" 
Cc: Yoshinori Sato 
Cc: Rich Felker 
Cc: John Paul Adrian Glaubitz 
Cc: "David S. Miller" 
Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: "H. Peter Anvin" 
Cc: "Rafael J. Wysocki" 
Cc: Daniel Lezcano 
Cc: Peter Zijlstra 
Cc: Juri Lelli 
Cc: Vincent Guittot 
Cc: Dietmar Eggemann 
Cc: Steven Rostedt 
Cc: Ben Segall 
Cc: Mel Gorman 
Cc: Daniel Bristot de Oliveira 
Cc: Valentin Schneider 
Cc: Al Viro 
Cc: Linus Walleij 
Cc: Ard Biesheuvel 
Cc: Andrew Donnellan 
Cc: Nicholas Miehlbradt 
Cc: Andrew Morton 
Cc: Arnd Bergmann 
Cc: Josh Poimboeuf 
Cc: "Kirill A. Shutemov" 
Cc: Rick Edgecombe 
Cc: Tony Battersby 
Cc: Brian Gerst 
Cc: Tim Chen 
Cc: David Vernet 
Cc: x...@kernel.org
Cc: linux-ker...@vger.kernel.org
Cc: linux-alpha@vger.kernel.org
Cc: linux-arm-ker...@lists.infradead.org
Cc: linux-c...@vger.kernel.org
Cc: linux-openr...@vger.kernel.org
Cc: linux-par...@vger.kernel.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@vger.kernel.org
Signed-off-by: Gautham R. Shenoy 
Co-developed-by: K Prateek Nayak 
Signed-off-by: K Prateek Nayak 
---
 include/linux/thread_info.h | 43 +
 1 file changed, 43 insertions(+)

diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 9ea0b28068f4..1e10dd8c0227 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -195,6 +195,49 @@ static __always_inline bool tif_need_resched(void)
 
 #endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
 
+#ifdef TIF_NOTIFY_IPI
+
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+
+static __always_inline bool tif_notify_ipi(void)
+{
+   return arch_test_bit(TIF_NOTIFY_IPI,
+(unsigned long *)(_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+   arch_clear_bit(TIF_NOTIFY_IPI,
+  (unsigned long *)(_thread_info()->flags));
+}
+
+#else
+
+static __always_inline bool tif_notify_ipi(void)
+{
+   return test_bit(TIF_NOTIFY_IPI,
+   (unsigned long *)(_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+   clear_bit(TIF_NOTIFY_IPI,
+ (unsigned long *)(_thread_info()->flags));
+}
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
+
+#else /* !TIF_NOTIFY_IPI */
+
+static __always_inline bool tif_notify_ipi(void)
+{
+   return false;
+}
+
+static __always_inline void current_clr_notify_ipi(void) { }
+
+#endif /* TIF_NOTIFY_IPI */
+
 #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
 static inline int arch_within_stack_frames(const void * const stack,
   const void * const stackend,
-- 
2.34.1




[RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

2024-02-20 Thread K Prateek Nayak
Hello everyone,

Before jumping into the issue, let me clarify the Cc list. Everyone has
been cc'ed on Patch 0 through Patch 3. Respective arch maintainers,
reviewers, and committers returned by scripts/get_maintainer.pl have
been cc'ed on the respective arch-side changes. Scheduler and CPU idle
maintainers and reviewers have been included for the entire series. If I
have missed anyone, please do add them. If you would like to be dropped
from the cc list, wholly or partially, for future iterations, please
do let me know.

With that out of the way ...

Problem statement
=================

When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure the time taken
to perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and the
current upstream release (v6.7-rc6 at the time of encounter).

Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime.


Experiments
===========

Since the commit cannot be cleanly reverted on top of the current
tip:sched/core, the effects of the optimizations were reverted by:

1. Removing the check for call_function_single_prep_ipi() in
   send_call_function_single_ipi(). With this change,
   send_call_function_single_ipi() always calls
   arch_send_call_function_single_ipi().

2. Removing the call to flush_smp_call_function_queue() in do_idle()
   since every smp_call_function, with (1.), would unconditionally send
   an IPI to an idle CPU in TIF_POLLING mode.

Following is the diff of the above described changes which will be
henceforth referred to as the "revert":

diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 31231925f1ec..735184d98c0f 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -332,11 +332,6 @@ static void do_idle(void)
 */
smp_mb__after_atomic();
 
-   /*
-* RCU relies on this call to be done outside of an RCU read-side
-* critical section.
-*/
-   flush_smp_call_function_queue();
schedule_idle();
 
if (unlikely(klp_patch_pending(current)))
diff --git a/kernel/smp.c b/kernel/smp.c
index f085ebcdf9e7..2ff100c41885 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -111,11 +111,9 @@ void __init call_function_init(void)
 static __always_inline void
 send_call_function_single_ipi(int cpu)
 {
-   if (call_function_single_prep_ipi(cpu)) {
-   trace_ipi_send_cpu(cpu, _RET_IP_,
-  generic_smp_call_function_single_interrupt);
-   arch_send_call_function_single_ipi(cpu);
-   }
+   trace_ipi_send_cpu(cpu, _RET_IP_,
+  generic_smp_call_function_single_interrupt);
+   arch_send_call_function_single_ipi(cpu);
 }
 
 static __always_inline void
--

With the revert, the time taken to complete a fixed set of IPIs using
ipistorm improves significantly. Following are the numbers from a dual
socket 3rd Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled)
running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=10 single=1 offset=8 cpulist=8 wait=1

(tip:sched/core at tag "sched-core-2024-01-08" for all the testing done
below)

  ==================================================================
  Test            : ipistorm (modified)
  Units           : Normalized runtime
  Interpretation  : Lower is better
  Statistic       : AMean
  ==================================================================
  kernel:                       time [pct imp]
  tip:sched/core                1.00 [0.00]
  tip:sched/core + revert       0.81 [19.36]

Although the revert improves ipistorm performance, it also regresses
tbench and netperf, supporting the validity of the optimization.
Following are netperf and tbench numbers from the same machine comparing
vanilla tip:sched/core and the revert applied on top:

  ==================================================================
  Test            : tbench
  Units           : Normalized throughput
  Interpretation  : Higher is better
  Statistic       : AMean
  ==================================================================
  Clients:    tip[pct imp](CV)       revert[pct imp](CV)
  1 1.00 [  0.00]( 0.24) 0.91 [ -8.96]( 0.30)
  2 1.00 [  0.00]( 0.25) 0.92 [ -8.20]( 0.97)
  4 1.00 [  0.00]( 0.23) 0.91 [ -9.20]( 1.75)
  8 1.00 [  0.00]( 0.69) 0.91 [ -9.48]( 1.56)
 16 1.00 [  0.00]( 0.66) 0.92 [ -8.49]( 2.43)
 32 1.00 [  0.00]( 0.96) 0.89 [-11.13]( 0.96)
 64 1.00 [  0.00]( 1.06) 0.90 [ -9.72]( 2.49)
128 1.00 [  0.00]( 0.70) 0.92 [ -8.36]( 1.26)
256 1.00 [  0.00]( 0.72) 0.97 [ -3.30]( 1.10)
512 1.00 [  0.00]( 0.42) 0.98 [ -1.73]( 0.37)
   1024 1.00 [  0.00]( 0.28) 0.99 [ -1.39]( 0.43)