On Mon, Nov 03, 2025 at 12:51:48PM +0000, Will Deacon wrote:
> On Sun, Nov 02, 2025 at 01:44:34PM -0800, Paul E. McKenney wrote:
> > Some arm64 platforms have slow per-CPU atomic operations, for example,
> > the Neoverse V2.  This commit therefore moves SRCU-fast from per-CPU
> > atomic operations to interrupt-disabled non-read-modify-write-atomic
> > atomic_read()/atomic_set() operations.  This works because
> > SRCU-fast-updown is not invoked from read-side primitives, which means
> > that, unlike srcu_read_unlock_fast(), it is not invoked from NMI
> > handlers.  This means that srcu_read_lock_fast_updown() and
> > srcu_read_unlock_fast_updown() can exclude themselves and each other.
> >
> > This reduces the overhead of calls to srcu_read_lock_fast_updown() and
> > srcu_read_unlock_fast_updown() from about 100ns to about 12ns on an ARM
> > Neoverse V2.  Although this is not excellent compared to about 2ns on
> > x86, it sure beats 100ns.
> >
> > This command was used to measure the overhead:
> >
> > tools/testing/selftests/rcutorture/bin/kvm.sh --torture refscale --allcpus
> > 	--duration 5 --configs NOPREEMPT --kconfig "CONFIG_NR_CPUS=64
> > 	CONFIG_TASKS_TRACE_RCU=y" --bootargs "refscale.loops=100000
> > 	refscale.guest_os_delay=5 refscale.nreaders=64 refscale.holdoff=30
> > 	torture.disable_onoff_at_boot refscale.scale_type=srcu-fast-updown
> > 	refscale.verbose_batched=8 torture.verbose_sleep_frequency=8
> > 	torture.verbose_sleep_duration=8 refscale.nruns=100" --trust-make
> >
> > Signed-off-by: Paul E. McKenney <[email protected]>
> > Cc: Catalin Marinas <[email protected]>
> > Cc: Will Deacon <[email protected]>
> > Cc: Mark Rutland <[email protected]>
> > Cc: Mathieu Desnoyers <[email protected]>
> > Cc: Steven Rostedt <[email protected]>
> > Cc: Sebastian Andrzej Siewior <[email protected]>
> > Cc: <[email protected]>
> > Cc: <[email protected]>
> > ---
> >  include/linux/srcutree.h | 56 ++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 51 insertions(+), 5 deletions(-)
>
> [...]
>
> > @@ -327,12 +355,23 @@ __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_ctr __percpu *scp)
> >  static inline
> >  struct srcu_ctr __percpu notrace *__srcu_read_lock_fast_updown(struct srcu_struct *ssp)
> >  {
> > -	struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
> > +	struct srcu_ctr __percpu *scp;
> >
> > -	if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> > +	if (IS_ENABLED(CONFIG_ARM64) && IS_ENABLED(CONFIG_ARM64_USE_LSE_PERCPU_ATOMICS)) {
> > +		unsigned long flags;
> > +
> > +		local_irq_save(flags);
> > +		scp = __srcu_read_lock_fast_na(ssp);
> > +		local_irq_restore(flags); /* Avoids leaking the critical section. */
> > +		return scp;
> > +	}
>
> Do we still need to pursue this after Catalin's prefetch suggestion for
> the per-cpu atomics?
>
> https://lore.kernel.org/r/[email protected]
>
> Although disabling/enabling interrupts on your system seems to be
> significantly faster than an atomic instruction, I'm worried that it's
> all very SoC-specific and on a mobile part (especially with pseudo-NMI),
> the relative costs could easily be the other way around.
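For readers skimming the thread, the pattern in the quoted hunk boils down
to roughly the following minimal sketch (not the actual srcutree.h code;
the counter and helper names here are made up for illustration):

#include <linux/atomic.h>
#include <linux/irqflags.h>
#include <linux/percpu.h>

/*
 * Because nothing running in NMI context updates this per-CPU counter,
 * disabling interrupts excludes every other local updater, so a plain
 * read-then-write can stand in for the per-CPU RMW atomic.
 */
static DEFINE_PER_CPU(atomic_long_t, my_fast_ctr);

static inline void my_fast_ctr_inc(void)
{
	unsigned long flags;
	atomic_long_t *c;

	local_irq_save(flags);
	c = this_cpu_ptr(&my_fast_ctr);
	/* Non-RMW update: no other updater of this counter can run here. */
	atomic_long_set(c, atomic_long_read(c) + 1);
	local_irq_restore(flags);
}

The trade is one per-CPU atomic exchanged for an irq disable/enable pair
plus a plain load and store, which only wins on parts where the atomic is
the slower of the two; hence the concern above about SoC-specific costs.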
My preference would be to go for the percpu atomic prefetch, but we'd need
to do a bit of benchmarking to make sure we don't break other platforms
(unlikely though).

--
Catalin
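For comparison, and without having seen the patch behind the lore link
above, the per-CPU atomic prefetch idea would presumably keep the RMW
atomic and just warm the target cache line first; the general shape at the
C level is something like this (illustrative only, the real change would
live inside the arm64 per-CPU atomic implementation rather than in
callers):

#include <linux/percpu.h>
#include <linux/prefetch.h>

static DEFINE_PER_CPU(unsigned long, my_fast_ctr);	/* placeholder */

static inline void my_fast_ctr_inc_prefetched(void)
{
	/*
	 * Hint that the counter's cache line will be written before the
	 * per-CPU RMW atomic executes (prefetchw() is PRFM PSTL1KEEP on
	 * arm64).  If preemption migrates the task in between, the hint
	 * is merely wasted, not harmful.
	 */
	prefetchw(raw_cpu_ptr(&my_fast_ctr));
	this_cpu_inc(my_fast_ctr);
}

Either way, as noted above, it needs benchmarking across platforms before
anything is settled.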

