Greg Kroah-Hartman wrote:
> [ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]
>
> Recent probing at the Linux Kernel Memory Model uncovered a
> 'surprise'. Strongly ordered architectures where the atomic RmW
> primitive implies full memory ordering and
> smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
> fail for:
>
> *x = 1;
> atomic_inc(u);
> smp_mb__after_atomic();
> r0 = *y;
[snip]
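As I read the quoted explanation, the problem is that the asm for the
atomic op declares no "memory" clobber, so GCC is free to move the plain
store across it, and then the CPU can reorder the store with the later
load. A rough user-space sketch of that pattern (hypothetical names of my
own, not the kernel code):

    /* atomic increment without a "memory" clobber, as before the patch */
    static inline void my_atomic_inc(int *u)
    {
            asm volatile("lock incl %0" : "+m" (*u));
    }

    void example(int *x, int *u, int *y, int *r0)
    {
            *x = 1;
            my_atomic_inc(u);               /* compiler may sink "*x = 1" past this asm */
            asm volatile("" ::: "memory");  /* barrier(), i.e. smp_mb__after_atomic() on x86 */
            *r0 = *y;                       /* the *x store and this load may then be
                                               reordered by the CPU (store->load on TSO) */
    }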
> --- a/arch/x86/include/asm/atomic.h
> +++ b/arch/x86/include/asm/atomic.h
> @@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t *v)
> {
> asm volatile(LOCK_PREFIX "addl %1,%0"
> : "+m" (v->counter)
> - : "ir" (i));
> + : "ir" (i) : "memory");
> }
>
> /**
Shouldn't those clobber constraints actually be: "memory", "cc" ?
That is because addl, subl (and other) machine instructions
actually modify the flags register too.
gcc docs say: The "cc" clobber indicates that the assembler
code modifies the flags register.
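I mean something like this (just a sketch of what I have in mind, not a
tested patch; same LOCK_PREFIX and atomic_t as in the quoted hunk):

    static __always_inline void arch_atomic_add(int i, atomic_t *v)
    {
            asm volatile(LOCK_PREFIX "addl %1,%0"
                         : "+m" (v->counter)
                         : "ir" (i)
                         : "memory", "cc");    /* also declare the flags clobber */
    }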
--
Jari Ruusu 4096R/8132F189 12D6 4C3A DCDA 0AA4 27BD ACDF F073 3C80 8132 F189