[tip:locking/core] locking/atomics/x86: Reduce arch_cmpxchg64*() instrumentation

2018-07-25 Thread tip-bot for Mark Rutland
Commit-ID:  00d5551cc4eec0fc39c3871c25c613553acfb866
Gitweb: https://git.kernel.org/tip/00d5551cc4eec0fc39c3871c25c613553acfb866
Author: Mark Rutland 
AuthorDate: Mon, 16 Jul 2018 12:30:07 +0100
Committer:  Ingo Molnar 
CommitDate: Wed, 25 Jul 2018 11:53:58 +0200

locking/atomics/x86: Reduce arch_cmpxchg64*() instrumentation

Currently x86's arch_cmpxchg64() and arch_cmpxchg64_local() are
instrumented twice, as they call into instrumented atomics rather than
their arch_ equivalents.

A call to cmpxchg64() results in:

  cmpxchg64()
kasan_check_write()
arch_cmpxchg64()
  cmpxchg()
kasan_check_write()
arch_cmpxchg()

Let's fix this up and call the arch_ equivalents, resulting in:

  cmpxchg64()
kasan_check_write()
arch_cmpxchg64()
  arch_cmpxchg()

Signed-off-by: Mark Rutland 
Acked-by: Thomas Gleixner 
Acked-by: Peter Zijlstra (Intel) 
Acked-by: Will Deacon 
Cc: Boqun Feng 
Cc: Dmitry Vyukov 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: andy.shevche...@gmail.com
Cc: a...@arndb.de
Cc: aryabi...@virtuozzo.com
Cc: catalin.mari...@arm.com
Cc: gli...@google.com
Cc: linux-arm-ker...@lists.infradead.org
Cc: parri.and...@gmail.com
Cc: pe...@hurleysoftware.com
Link: http://lkml.kernel.org/r/20180716113017.3909-3-mark.rutl...@arm.com
Signed-off-by: Ingo Molnar 
---
 arch/x86/include/asm/cmpxchg_64.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index bfca3b346c74..072e5459fe2f 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -10,13 +10,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val)
 #define arch_cmpxchg64(ptr, o, n)  \
 ({ \
BUILD_BUG_ON(sizeof(*(ptr)) != 8);  \
-   cmpxchg((ptr), (o), (n));   \
+   arch_cmpxchg((ptr), (o), (n));  \
 })
 
 #define arch_cmpxchg64_local(ptr, o, n)				\
 ({ \
BUILD_BUG_ON(sizeof(*(ptr)) != 8);  \
-   cmpxchg_local((ptr), (o), (n)); \
+   arch_cmpxchg_local((ptr), (o), (n));\
 })
 
 #define system_has_cmpxchg_double() boot_cpu_has(X86_FEATURE_CX16)

