> On Jul 22, 2025, at 6:17 PM, Paul E. McKenney <paul...@kernel.org> wrote:
> 
> This commit documents the implicit RCU readers that are implied by the
> this_cpu_inc() and atomic_long_inc() operations in __srcu_read_lock_fast()
> and __srcu_read_unlock_fast().  While in the area, fix the documentation
> of the memory pairing of atomic_long_inc() in __srcu_read_lock_fast().

Just to clarify: the implication here is that, since SRCU-fast uses synchronize_rcu() 
on the update side, these increments also act as classical RCU readers and thus 
block RCU grace periods. So simply using SRCU-fast is another way of achieving 
what the previously used preempt_disable() provided in those use cases.
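In other words, something like the following substitution (a hypothetical sketch only; `my_srcu`, `gp`, and `do_something_with()` are made-up names for illustration):

```c
/* Previously: rely on preempt_disable() to block RCU grace periods. */
preempt_disable();
do_something_with(rcu_dereference(gp));
preempt_enable();

/*
 * With SRCU-fast: the counter increments in __srcu_read_lock_fast()
 * and __srcu_read_unlock_fast() are themselves implicit RCU readers,
 * so this section also blocks classical RCU grace periods.
 */
struct srcu_ctr __percpu *scp;

scp = srcu_read_lock_fast(&my_srcu);
do_something_with(rcu_dereference(gp));
srcu_read_unlock_fast(&my_srcu, scp);
```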

Or is the rationale for this something else?

I would probably spell this out in a longer comment above the if/else, 
rather than modifying the inline comments.

But I have probably misunderstood the whole thing. :-(

-Joel

> 
> Signed-off-by: Paul E. McKenney <paul...@kernel.org>
> Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: Sebastian Andrzej Siewior <bige...@linutronix.de>
> Cc: <b...@vger.kernel.org>
> 
> diff --git a/include/linux/srcutree.h b/include/linux/srcutree.h
> index 043b5a67ef71e..78e1a7b845ba9 100644
> --- a/include/linux/srcutree.h
> +++ b/include/linux/srcutree.h
> @@ -245,9 +245,9 @@ static inline struct srcu_ctr __percpu *__srcu_read_lock_fast(struct srcu_struct
>    struct srcu_ctr __percpu *scp = READ_ONCE(ssp->srcu_ctrp);
> 
>    if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> -        this_cpu_inc(scp->srcu_locks.counter); /* Y */
> +        this_cpu_inc(scp->srcu_locks.counter); // Y, and implicit RCU reader.
>    else
> -        atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  /* Z */
> +        atomic_long_inc(raw_cpu_ptr(&scp->srcu_locks));  // Y, and implicit RCU reader.
>    barrier(); /* Avoid leaking the critical section. */
>    return scp;
> }
> @@ -271,9 +271,9 @@ static inline void __srcu_read_unlock_fast(struct srcu_struct *ssp, struct srcu_
> {
>    barrier();  /* Avoid leaking the critical section. */
>    if (!IS_ENABLED(CONFIG_NEED_SRCU_NMI_SAFE))
> -        this_cpu_inc(scp->srcu_unlocks.counter);  /* Z */
> +        this_cpu_inc(scp->srcu_unlocks.counter);  // Z, and implicit RCU reader.
>    else
> -        atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks));  /* Z */
> +        atomic_long_inc(raw_cpu_ptr(&scp->srcu_unlocks));  // Z, and implicit RCU reader.
> }
> 
> void __srcu_check_read_flavor(struct srcu_struct *ssp, int read_flavor);
> 
