On 30/12/25 6:11 am, Joel Fernandes wrote:
The RCU grace period mechanism uses a two-phase FQS (Force Quiescent
State) design where the first FQS saves dyntick-idle snapshots and
the second FQS compares them. This results in long and unnecessary
latency for synchronize_rcu() on idle systems (two FQS waits of ~3ms
each with HZ=1000) in cases where a single FQS wait would have
sufficed.
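
For context, the two FQS phases are dispatched from rcu_gp_fqs() roughly
as sketched below; the per-CPU helper names here are placeholders (the
real helpers have been renamed across kernel versions), so treat this as
illustrative rather than verbatim kernel code:

/* Illustrative sketch only; the helper names below are placeholders. */
static void rcu_gp_fqs(bool first_time)
{
        if (first_time) {
                /* First scan: snapshot each holdout CPU's idle state. */
                force_qs_rnp(snapshot_cpu_idle_state);
        } else {
                /*
                 * Later scans: recheck the snapshots and report a QS for
                 * CPUs that were (or have since gone) idle.
                 */
                force_qs_rnp(recheck_cpu_idle_state);
        }
}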

Investigation showed that the GP kthread's CPU is often the holdout
CPU after the first FQS: it cannot be detected as "idle" because it
is actively running the FQS scan in the GP kthread.

Therefore, at the end of rcu_gp_init(), immediately report a quiescent
state for the GP kthread's CPU using rcu_qs() + rcu_report_qs_rdp(). The
GP kthread cannot be in an RCU read-side critical section while running
GP initialization, so this is safe and results in significant latency
improvements.

I benchmarked 100 synchronize_rcu() calls with 32 CPUs, 10 runs each,
showing significant latency improvements (default settings for fqs
jiffies; a sketch of such a measurement loop follows the summary below):

Baseline (without fix):
| Run | Mean      | Min      | Max       |
|-----|-----------|----------|-----------|
| 1   | 10.088 ms | 9.989 ms | 18.848 ms |
| 2   | 10.064 ms | 9.982 ms | 16.470 ms |
| 3   | 10.051 ms | 9.988 ms | 15.113 ms |
| 4   | 10.125 ms | 9.929 ms | 22.411 ms |
| 5   |  8.695 ms | 5.996 ms | 15.471 ms |
| 6   | 10.157 ms | 9.977 ms | 25.723 ms |
| 7   | 10.102 ms | 9.990 ms | 20.224 ms |
| 8   |  8.050 ms | 5.985 ms | 10.007 ms |
| 9   | 10.059 ms | 9.978 ms | 15.934 ms |
| 10  | 10.077 ms | 9.984 ms | 17.703 ms |

With fix:
| Run | Mean     | Min      | Max       |
|-----|----------|----------|-----------|
| 1   | 6.027 ms | 5.915 ms |  8.589 ms |
| 2   | 6.032 ms | 5.984 ms |  9.241 ms |
| 3   | 6.010 ms | 5.986 ms |  7.004 ms |
| 4   | 6.076 ms | 5.993 ms | 10.001 ms |
| 5   | 6.084 ms | 5.893 ms | 10.250 ms |
| 6   | 6.034 ms | 5.908 ms |  9.456 ms |
| 7   | 6.051 ms | 5.993 ms | 10.000 ms |
| 8   | 6.057 ms | 5.941 ms | 10.001 ms |
| 9   | 6.016 ms | 5.927 ms |  7.540 ms |
| 10  | 6.036 ms | 5.993 ms |  9.579 ms |

Summary:
- Mean latency: 9.75 ms -> 6.04 ms (38% improvement)
- Max latency:  25.72 ms -> 10.25 ms (60% improvement)
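
A measurement loop for numbers like these can be as simple as the sketch
below (a hypothetical, minimal test module; names such as
sync_rcu_bench_init are made up here and this is not necessarily the
exact harness used for the runs above):

#include <linux/init.h>
#include <linux/module.h>
#include <linux/ktime.h>
#include <linux/limits.h>
#include <linux/minmax.h>
#include <linux/rcupdate.h>

/* Time 100 back-to-back synchronize_rcu() calls and print a summary. */
static int __init sync_rcu_bench_init(void)
{
        s64 us, min_us = S64_MAX, max_us = 0, total_us = 0;
        ktime_t t0;
        int i;

        for (i = 0; i < 100; i++) {
                t0 = ktime_get();
                synchronize_rcu();
                us = ktime_to_us(ktime_sub(ktime_get(), t0));
                min_us = min(min_us, us);
                max_us = max(max_us, us);
                total_us += us;
        }
        pr_info("synchronize_rcu: mean %lld us, min %lld us, max %lld us\n",
                total_us / 100, min_us, max_us);
        return 0;
}

static void __exit sync_rcu_bench_exit(void)
{
}

module_init(sync_rcu_bench_init);
module_exit(sync_rcu_bench_exit);
MODULE_LICENSE("GPL");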

Additional bridge setup/teardown testing by Uladzislau Rezki on x86_64
with 64 CPUs (100 iterations of bridge add/configure/delete):

                                real time
1 - default:                   24.221s
2 - this patch:                20.754s  (14% faster)
3 - this patch + wake_from_gp: 15.895s  (34% faster)
4 - wake_from_gp only:         18.947s  (22% faster)

Per-synchronize_rcu() latency in usec (columns correspond to configurations 1-4 above):
               1         2         3       4
median: 37249.5   31540.5   15765   22480
min:    7881      7918      9803    7857
max:    63651     55639     31861   32040

This patch combined with rcu_normal_wake_from_gp reduces bridge
setup/teardown time from 24 seconds to 16 seconds.

Tested with rcutorture TREE and SRCU configurations.

Reviewed-by: Paul E. McKenney <[email protected]>
Tested-by: Uladzislau Rezki (Sony) <[email protected]>
Signed-off-by: Joel Fernandes <[email protected]>
---
  kernel/rcu/tree.c | 12 ++++++++++++
  1 file changed, 12 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 78c045a5ef03..b7c818cabe44 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -160,6 +160,7 @@ static void rcu_report_qs_rnp(unsigned long mask, struct rcu_node *rnp,
                              unsigned long gps, unsigned long flags);
  static void invoke_rcu_core(void);
  static void rcu_report_exp_rdp(struct rcu_data *rdp);
+static void rcu_report_qs_rdp(struct rcu_data *rdp);
  static void check_cb_ovld_locked(struct rcu_data *rdp, struct rcu_node *rnp);
  static bool rcu_rdp_is_offloaded(struct rcu_data *rdp);
  static bool rcu_rdp_cpu_online(struct rcu_data *rdp);
@@ -1983,6 +1984,17 @@ static noinline_for_stack bool rcu_gp_init(void)
        if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
                on_each_cpu(rcu_strict_gp_boundary, NULL, 0);
+       /*
+        * Immediately report QS for the GP kthread's CPU. The GP kthread
+        * cannot be in an RCU read-side critical section while running
+        * the FQS scan. This eliminates the need for a second FQS wait
+        * when all CPUs are idle.
+        */
+       preempt_disable();
+       rcu_qs();
+       rcu_report_qs_rdp(this_cpu_ptr(&rcu_data));
+       preempt_enable();
+
        return true;
  }

Hi,


I verified this patch on ppc64 systems and observed consistent performance improvements.

The testing was conducted on Power LPARs using 20 cores (160 CPUs) with SMT enabled and disabled. All tests were performed on the latest upstream kernel (v6.19.0-rc3+), and the patch showed measurable improvements in both SMT configurations.

| SMT Mode | With Patch (s) | Without Patch (s) | Improvement   |
|----------|----------------|-------------------|---------------|
| SMT ON   | 51.662         | 75.540            | 31.61% faster |
| SMT OFF  | 44.246         | 59.933            | 26.18% faster |

Please add the tag below:
Tested-by: Samir M <[email protected]>

Thanks,
Samir

