Hello,

Daniel Xu wrote:
> WRITE_ONCE() is needed here to prevent store tears and other unwanted
> compiler optimizations.

That might be true if there were any chance of these two accesses
racing with each other.

I don't see any possibility of such races.
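
Both increments run with slp->lock held, so the writers' stores to
->seq are serialized against one another.  Just to illustrate how I
read the write side, here is a toy userspace model (a sketch only, not
the seqlock.h code; a pthread mutex stands in for slp->lock, and the
writer count and iteration count are made up):

/*
 * Sketch, not the perfbook code: each writer does the
 * write_seqlock()/write_sequnlock() pair, so both increments of seq
 * happen with the lock held and no two writer stores can overlap.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long seq;

static void *writer(void *arg)
{
	int i;

	for (i = 0; i < 100000; i++) {
		pthread_mutex_lock(&lock);	/* write_seqlock() */
		seq++;				/* first increment */
		/* ... update the protected data here ... */
		seq++;				/* second increment */
		pthread_mutex_unlock(&lock);	/* write_sequnlock() */
	}
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, writer, NULL);
	pthread_create(&t2, NULL, writer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	printf("seq = %lu (expect 400000)\n", seq);
	return 0;
}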

Can you elaborate?

        Thanks, Akira

> 
> Signed-off-by: Daniel Xu <[email protected]>
> ---
>  CodeSamples/defer/seqlock.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/CodeSamples/defer/seqlock.h b/CodeSamples/defer/seqlock.h
> index ec00177b..842d10a6 100644
> --- a/CodeSamples/defer/seqlock.h
> +++ b/CodeSamples/defer/seqlock.h
> @@ -60,14 +60,14 @@ static inline int read_seqretry(seqlock_t *slp,	//\lnlbl{read_seqretry:b}
>  static inline void write_seqlock(seqlock_t *slp)	//\lnlbl{write_seqlock:b}
>  {
>  	spin_lock(&slp->lock);
> -	++slp->seq;
> +	WRITE_ONCE(slp->seq, READ_ONCE(slp->seq) + 1);
>  	smp_mb();
>  }							//\lnlbl{write_seqlock:e}
>  
>  static inline void write_sequnlock(seqlock_t *slp)	//\lnlbl{write_sequnlock:b}
>  {
>  	smp_mb();					//\lnlbl{write_sequnlock:mb}
> -	++slp->seq;					//\lnlbl{write_sequnlock:inc}
> +	WRITE_ONCE(slp->seq, READ_ONCE(slp->seq) + 1);	//\lnlbl{write_sequnlock:inc}
>  	spin_unlock(&slp->lock);
>  }							//\lnlbl{write_sequnlock:e}
>  //\end{snippet}

