On 10/06/2015 11:54 AM, Pranith Kumar wrote:
> We are reading from memory locations pointed to by p1 and p2 in the asm
> block. Add a memory clobber flag to make gcc aware of this.
> 
> Signed-off-by: Pranith Kumar <[email protected]>
> ---
>  arch/x86/include/asm/cmpxchg.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
> index 4a2e5bc..3e83949 100644
> --- a/arch/x86/include/asm/cmpxchg.h
> +++ b/arch/x86/include/asm/cmpxchg.h
> @@ -214,7 +214,8 @@ extern void __add_wrong_size(void)
>                    : "=a" (__ret), "+d" (__old2),                     \
>                      "+m" (*(p1)), "+m" (*(p2))                       \
>                    : "i" (2 * sizeof(long)), "a" (__old1),            \
> -                    "b" (__new1), "c" (__new2));                     \
> +                    "b" (__new1), "c" (__new2)                       \
> +                  : "memory");                                       \
>       __ret;                                                          \
>  })

NAK.  We already have the "+m" constraints for exactly this reason; an
explicit memory clobber should only be used to prevent movement of
*other* memory operations around this one (i.e. as a barrier).
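
To illustrate the distinction with a minimal sketch (toy_inc and
toy_barrier are made-up names for this example, not anything in the
tree): the "+m" operand already tells gcc that the pointed-to memory is
both read and written by the asm, whereas a bare "memory" clobber is
the idiom for a full compiler barrier, much like the kernel's barrier()
macro:

        /* "+m" (*p) covers the operand itself: gcc knows *p is
         * read and written, no "memory" clobber required. */
        static inline void toy_inc(int *p)
        {
                asm volatile("incl %0" : "+m" (*p));
        }

        /* A "memory" clobber is a compiler barrier: gcc must assume
         * any memory may have changed, so it cannot cache or reorder
         * unrelated loads and stores across the asm. */
        #define toy_barrier()   __asm__ __volatile__("" : : : "memory")

Adding the clobber to cmpxchg_double() would therefore only force gcc
to discard and reload unrelated values around it, not fix any
correctness issue with the p1/p2 accesses.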

        -hpa

