[tip:locking/core] arm64, locking/atomics: Use instrumented atomics

2018-11-01 Thread tip-bot for Mark Rutland
Commit-ID:  c0df10812835040e261b915f04887b0cf0411851
Gitweb: https://git.kernel.org/tip/c0df10812835040e261b915f04887b0cf0411851
Author: Mark Rutland 
AuthorDate: Tue, 4 Sep 2018 11:48:30 +0100
Committer:  Ingo Molnar 
CommitDate: Thu, 1 Nov 2018 11:01:40 +0100

arm64, locking/atomics: Use instrumented atomics

Now that the generic atomic headers provide instrumented wrappers of all
the atomics implemented by arm64, let's migrate arm64 over to these.

The additional instrumentation will help to find bugs (e.g. when fuzzing
with Syzkaller).

Mostly this change involves adding an arch_ prefix to a number of
function names and macro definitions. When LSE atomics are used, the
out-of-line LL/SC atomics will be named __ll_sc_arch_atomic_${OP}.
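
For illustration only (not part of this patch): the generic wrappers in
include/asm-generic/atomic-instrumented.h follow roughly the pattern
sketched below, so once arm64 provides arch_atomic_*(), each call is
checked by KASAN before dispatching to the architecture implementation.
This is a simplified sketch assuming the kasan_check_read()/write()
hooks from <linux/kasan-checks.h>:

	/* Simplified sketch of an instrumented wrapper, for illustration. */
	#include <linux/kasan-checks.h>

	static __always_inline int atomic_read(const atomic_t *v)
	{
		kasan_check_read(v, sizeof(*v));	/* instrumentation hook  */
		return arch_atomic_read(v);		/* arch implementation   */
	}

	static __always_inline int atomic_add_return(int i, atomic_t *v)
	{
		kasan_check_write(v, sizeof(*v));
		return arch_atomic_add_return(i, v);
	}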

Adding the arch_ prefix requires some whitespace fixups to keep things
aligned. Some other unusual whitespace is fixed up at the same time
(e.g. in the cmpxchg wrappers).

Signed-off-by: Mark Rutland 
Signed-off-by: Peter Zijlstra (Intel) 
Acked-by: Will Deacon 
Cc: linux-arm-ker...@lists.infradead.org
Cc: Catalin Marinas 
Cc: linuxdriv...@attotech.com
Cc: dvyu...@google.com
Cc: boqun.f...@gmail.com
Cc: a...@arndb.de
Cc: aryabi...@virtuozzo.com
Cc: gli...@google.com
Link: http://lkml.kernel.org/r/20180904104830.2975-7-mark.rutl...@arm.com
Signed-off-by: Ingo Molnar 
---
 arch/arm64/include/asm/atomic.h   | 237 +-
 arch/arm64/include/asm/atomic_ll_sc.h |  28 ++--
 arch/arm64/include/asm/atomic_lse.h   |  38 +++---
 arch/arm64/include/asm/cmpxchg.h  |  60 -
 arch/arm64/include/asm/sync_bitops.h  |  16 +--
 5 files changed, 193 insertions(+), 186 deletions(-)

diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index 9bca54dda75c..1f4e9ee641c9 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -42,124 +42,131 @@
 
 #define ATOMIC_INIT(i) { (i) }
 
-#define atomic_read(v) READ_ONCE((v)->counter)
-#define atomic_set(v, i)   WRITE_ONCE(((v)->counter), (i))
-
-#define atomic_add_return_relaxed  atomic_add_return_relaxed
-#define atomic_add_return_acquire  atomic_add_return_acquire
-#define atomic_add_return_release  atomic_add_return_release
-#define atomic_add_return  atomic_add_return
-
-#define atomic_sub_return_relaxed  atomic_sub_return_relaxed
-#define atomic_sub_return_acquire  atomic_sub_return_acquire
-#define atomic_sub_return_release  atomic_sub_return_release
-#define atomic_sub_return  atomic_sub_return
-
-#define atomic_fetch_add_relaxed   atomic_fetch_add_relaxed
-#define atomic_fetch_add_acquire   atomic_fetch_add_acquire
-#define atomic_fetch_add_release   atomic_fetch_add_release
-#define atomic_fetch_add   atomic_fetch_add
-
-#define atomic_fetch_sub_relaxed   atomic_fetch_sub_relaxed
-#define atomic_fetch_sub_acquire   atomic_fetch_sub_acquire
-#define atomic_fetch_sub_release   atomic_fetch_sub_release
-#define atomic_fetch_sub   atomic_fetch_sub
-
-#define atomic_fetch_and_relaxed   atomic_fetch_and_relaxed
-#define atomic_fetch_and_acquire   atomic_fetch_and_acquire
-#define atomic_fetch_and_release   atomic_fetch_and_release
-#define atomic_fetch_and   atomic_fetch_and
-
-#define atomic_fetch_andnot_relaxed   atomic_fetch_andnot_relaxed
-#define atomic_fetch_andnot_acquire   atomic_fetch_andnot_acquire
-#define atomic_fetch_andnot_release   atomic_fetch_andnot_release
-#define atomic_fetch_andnot   atomic_fetch_andnot
-
-#define atomic_fetch_or_relaxed   atomic_fetch_or_relaxed
-#define atomic_fetch_or_acquire   atomic_fetch_or_acquire
-#define atomic_fetch_or_release   atomic_fetch_or_release
-#define atomic_fetch_or   atomic_fetch_or
-
-#define atomic_fetch_xor_relaxed   atomic_fetch_xor_relaxed
-#define atomic_fetch_xor_acquire   atomic_fetch_xor_acquire
-#define atomic_fetch_xor_release   atomic_fetch_xor_release
-#define atomic_fetch_xor   atomic_fetch_xor
-
-#define atomic_xchg_relaxed(v, new)   xchg_relaxed(&((v)->counter), (new))
-#define atomic_xchg_acquire(v, new)   xchg_acquire(&((v)->counter), (new))
-#define atomic_xchg_release(v, new)   xchg_release(&((v)->counter), (new))
-#define atomic_xchg(v, new)   xchg(&((v)->counter), (new))
-
-#define atomic_cmpxchg_relaxed(v, old, new)\
-   cmpxchg_relaxed(&((v)->counter), (old), (new))
-#define atomic_cmpxchg_acquire(v, old, new)\
-   cmpxchg_acquire(&((v)->counter), (old), (new))
-#define atomic_cmpxchg_release(v, old, new)\
-   cmpxchg_release(&((v)->counter), (old), (new))
-#define atomic_cmpxchg(v, old, new)   cmpxchg(&((v)->counter), (old), (new))
-
-#define atomic_andnot  atomic_andnot
+#define 
