On Tue, Dec 17, 2024 at 02:22:29PM -0800, Daniel Xu wrote:
> WRITE_ONCE() is needed here to prevent store tears and other unwanted
> compiler optimizations.
> 
> Signed-off-by: Daniel Xu <[email protected]>

I am pulling this in despite the misgivings on this thread.  If the kernel
gets an INC_ONCE(), maybe perfbook should too.  But in the meantime...

                                                        Thanx, Paul

> ---
>  CodeSamples/defer/seqlock.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/CodeSamples/defer/seqlock.h b/CodeSamples/defer/seqlock.h
> index ec00177b..842d10a6 100644
> --- a/CodeSamples/defer/seqlock.h
> +++ b/CodeSamples/defer/seqlock.h
> @@ -60,14 +60,14 @@ static inline int read_seqretry(seqlock_t *slp,           //\lnlbl{read_seqretry:b}
>  static inline void write_seqlock(seqlock_t *slp)     //\lnlbl{write_seqlock:b}
>  {
>       spin_lock(&slp->lock);
> -     ++slp->seq;
> +     WRITE_ONCE(slp->seq, READ_ONCE(slp->seq) + 1);
>       smp_mb();
>  }                                                    //\lnlbl{write_seqlock:e}
>  
>  static inline void write_sequnlock(seqlock_t *slp)   //\lnlbl{write_sequnlock:b}
>  {
>       smp_mb();                                       //\lnlbl{write_sequnlock:mb}
> -     ++slp->seq;                                     //\lnlbl{write_sequnlock:inc}
> +     WRITE_ONCE(slp->seq, READ_ONCE(slp->seq) + 1);  //\lnlbl{write_sequnlock:inc}
>       spin_unlock(&slp->lock);
>  }                                                    //\lnlbl{write_sequnlock:e}
>  //\end{snippet}
> -- 
> 2.47.0
> 
