On Mon, Feb 19, 2018 at 12:14:45PM -0500, Alan Stern wrote:
> On Sat, 17 Feb 2018, Andrea Parri wrote:
> 
> > > Akira's observation about READ_ONCE extends to all (annotated)
> > > loads.  In fact, it also applies to loads corresponding to
> > > unsuccessful RMW operations; consider, for example, the following
> > > variation of MP+onceassign+derefonce:
> > > 
> > >   C T
> > > 
> > >   {
> > >     y=z;
> > >     z=0;
> > >   }
> > > 
> > >   P0(int *x, int **y)
> > >   {
> > >     WRITE_ONCE(*x, 1);
> > >     smp_store_release(y, x);
> > >   }
> > > 
> > >   P1(int **y, int *z)
> > >   {
> > >     int *r0;
> > >     int r1;
> > > 
> > >     r0 = cmpxchg_relaxed(y, z, z);
> > >     r1 = READ_ONCE(*r0);
> > >   }
> > > 
> > >   exists (1:r0=x /\ 1:r1=0)
> > > 
> > > The final state is allowed w/o the patch, and forbidden w/ the patch.
> > > 
> > > This also reminds me of
> > > 
> > >   5a8897cc7631fa544d079c443800f4420d1b173f
> > >   ("locking/atomics/alpha: Add smp_read_barrier_depends() to
> > >    _release()/_relaxed() atomics")
> > > 
> > > (that we probably want to mention in the commit message).
> > 
> > Please also notice that 5a8897cc7631f only touched alpha's atomic.h:
> > I see no corresponding commit/change on {,cmp}xchg.h (where the "mb"
> > is currently conditionally executed).
> 
> This leaves us with a question: Do we want to change the kernel by
> adding memory barriers after unsuccessful RMW operations on Alpha, or
> do we want to change the model by excluding such operations from
> address dependencies?
I vote for adding the barrier on Alpha.  However, I don't know of any
code in the Linux kernel that relies on read-to-read address dependency
ordering headed by a failing RMW operation, so I don't feel all that
strongly about this.

> Note that operations like atomic_add_unless() already include memory
> barriers.

And I don't see an atomic_add_unless_relaxed(), so we are good on this
one.  So far, anyway!  ;-)

							Thanx, Paul