Re: [PATCHv2 03/16] atomics/treewide: make atomic64_inc_not_zero() optional

2018-06-04 Thread Mark Rutland
On Mon, Jun 04, 2018 at 04:17:25PM -0700, Palmer Dabbelt wrote:
> On Tue, 29 May 2018 08:43:33 PDT (-0700), mark.rutl...@arm.com wrote:
> > We define a trivial fallback for atomic_inc_not_zero(), but don't do
> > the same for atmic64_inc_not_zero(), leading most architectures to
> > define the same boilerplate.
> 
> atmic64

Cheers for the spot; fixed now!

Mark.


Re: [PATCHv2 03/16] atomics/treewide: make atomic64_inc_not_zero() optional

2018-06-04 Thread Palmer Dabbelt

On Tue, 29 May 2018 08:43:33 PDT (-0700), mark.rutl...@arm.com wrote:

We define a trivial fallback for atomic_inc_not_zero(), but don't do
the same for atmic64_inc_not_zero(), leading most architectures to
define the same boilerplate.


atmic64


Let's add a fallback in <linux/atomic.h>, and remove the redundant
implementations. Note that atomic64_add_unless() is always defined in
<linux/atomic.h>, and promotes its arguments to the requisite types, so
we need not do this explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland 
Acked-by: Peter Zijlstra (Intel) 
Cc: Boqun Feng 
Cc: Will Deacon 
---
 arch/alpha/include/asm/atomic.h           |  2 --
 arch/arc/include/asm/atomic.h             |  1 -
 arch/arm/include/asm/atomic.h             |  1 -
 arch/arm64/include/asm/atomic.h           |  2 --
 arch/ia64/include/asm/atomic.h            |  2 --
 arch/mips/include/asm/atomic.h            |  2 --
 arch/parisc/include/asm/atomic.h          |  2 --
 arch/powerpc/include/asm/atomic.h         |  1 +
 arch/riscv/include/asm/atomic.h           |  7 -------
 arch/s390/include/asm/atomic.h            |  1 -
 arch/sparc/include/asm/atomic_64.h        |  2 --
 arch/x86/include/asm/atomic64_32.h        |  2 +-
 arch/x86/include/asm/atomic64_64.h        |  2 --
 include/asm-generic/atomic-instrumented.h |  3 +++
 include/asm-generic/atomic64.h            |  1 -
 include/linux/atomic.h                    | 11 +++++++++++
 16 files changed, 16 insertions(+), 26 deletions(-)
[...]
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 0e27e050ba14..18259e90f57e 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -375,13 +375,6 @@ static __always_inline int atomic64_add_unless(atomic64_t *v, long a, long u)
 }
 #endif
 
-#ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_inc_not_zero(atomic64_t *v)
-{
-	return atomic64_add_unless(v, 1, 0);
-}
-#endif
-
 /*
  * atomic_{cmp,}xchg is required to have exactly the same ordering semantics as
  * {cmp,}xchg and the operations that return, so they need a full barrier.


Acked-by: Palmer Dabbelt 

Thanks!
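
For context, the <linux/atomic.h> hunk that replaces the removed riscv
definition is elided above ("[...]"). Below is a minimal sketch of the
generic fallback pattern the commit message describes; the guard name is
an assumption based on that description, not quoted from the patch:

/*
 * Sketch of the generic fallback: an architecture that provides its
 * own atomic64_inc_not_zero() defines the macro itself, and anything
 * else picks up this definition in terms of atomic64_add_unless().
 */
#ifndef atomic64_inc_not_zero
#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
#endif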


[PATCHv2 03/16] atomics/treewide: make atomic64_inc_not_zero() optional

2018-05-29 Thread Mark Rutland
We define a trivial fallback for atomic_inc_not_zero(), but don't do
the same for atmic64_inc_not_zero(), leading most architectures to
define the same boilerplate.

Let's add a fallback in <linux/atomic.h>, and remove the redundant
implementations. Note that atomic64_add_unless() is always defined in
<linux/atomic.h>, and promotes its arguments to the requisite types, so
we need not do this explicitly.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland 
Acked-by: Peter Zijlstra (Intel) 
Cc: Boqun Feng 
Cc: Will Deacon 
---
 arch/alpha/include/asm/atomic.h           |  2 --
 arch/arc/include/asm/atomic.h             |  1 -
 arch/arm/include/asm/atomic.h             |  1 -
 arch/arm64/include/asm/atomic.h           |  2 --
 arch/ia64/include/asm/atomic.h            |  2 --
 arch/mips/include/asm/atomic.h            |  2 --
 arch/parisc/include/asm/atomic.h          |  2 --
 arch/powerpc/include/asm/atomic.h         |  1 +
 arch/riscv/include/asm/atomic.h           |  7 -------
 arch/s390/include/asm/atomic.h            |  1 -
 arch/sparc/include/asm/atomic_64.h        |  2 --
 arch/x86/include/asm/atomic64_32.h        |  2 +-
 arch/x86/include/asm/atomic64_64.h        |  2 --
 include/asm-generic/atomic-instrumented.h |  3 +++
 include/asm-generic/atomic64.h            |  1 -
 include/linux/atomic.h                    | 11 +++++++++++
 16 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 392b15a4dd4f..eb0f25e4c5dd 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -296,8 +296,6 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
 	return old - 1;
 }
 
-#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
-
 #define atomic_add_negative(a, v) (atomic_add_return((a), (v)) < 0)
 #define atomic64_add_negative(a, v) (atomic64_add_return((a), (v)) < 0)
 
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index cecdf3403caf..1406825b5e7d 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -603,7 +603,6 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 #define atomic64_dec(v)			atomic64_sub(1LL, (v))
 #define atomic64_dec_return(v)		atomic64_sub_return(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
-#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)
 
 #endif /* !CONFIG_GENERIC_ATOMIC64 */
 
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 9d56d0727c9b..02f3894faa48 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -534,7 +534,6 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 #define atomic64_dec(v)			atomic64_sub(1LL, (v))
 #define atomic64_dec_return_relaxed(v)	atomic64_sub_return_relaxed(1LL, (v))
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
-#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1LL, 0LL)
 
 #endif /* !CONFIG_GENERIC_ATOMIC64 */
 #endif
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index 264d20339f74..ad50412889c5 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -204,7 +204,5 @@
 #define atomic64_add_unless(v, a, u)   (___atomic_add_unless(v, a, u, 64) != u)
 #define atomic64_andnot			atomic64_andnot
 
-#define atomic64_inc_not_zero(v)   atomic64_add_unless((v), 1, 0)
-
 #endif
 #endif
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 9d2ddde5f9d5..93d48b823220 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -246,8 +246,6 @@ static __inline__ long atomic64_add_unless(atomic64_t *v, long a, long u)
 	return c != (u);
 }
 
-#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
-
 static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
 {
 	long c, old, dec;
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 02fc1553cf9b..502e691c6393 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -644,8 +644,6 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 	return c != (u);
 }
 
-#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
-
 #define atomic64_dec_return(v) atomic64_sub_return(1, (v))
 #define atomic64_inc_return(v) atomic64_add_return(1, (v))
 
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 7748abced766..3fd0243bf405 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -305,8 +305,6 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u)
 	return c != (u);
 }
 
-#define atomic64_inc_not_zero(v)	atomic64_add_unless((v), 1, 0)
[...]
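
To illustrate why every architecture wants this helper:
atomic64_inc_not_zero() is the usual "take a reference unless the count
already hit zero" operation for 64-bit reference counts. A hypothetical
sketch follows; struct obj and obj_tryget() are illustrative names, not
taken from this patch:

#include <linux/atomic.h>
#include <linux/types.h>

/* Hypothetical refcounted object, for illustration only. */
struct obj {
	atomic64_t refcount;
};

/*
 * Take a reference unless the count has already dropped to zero.
 * atomic64_inc_not_zero() returns nonzero iff it incremented the
 * counter, so a false return means the object is already on its way
 * out and must not be touched.
 */
static inline bool obj_tryget(struct obj *obj)
{
	return atomic64_inc_not_zero(&obj->refcount);
}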
