Commit-ID:  33e42ef571979fe6601ac838d338eb599d842a6d
Gitweb:     https://git.kernel.org/tip/33e42ef571979fe6601ac838d338eb599d842a6d
Author:     Mark Rutland <mark.rutl...@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:43 +0100
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200
locking/atomic, riscv: Fix atomic64_sub_if_positive() offset argument

Presently the riscv implementation of atomic64_sub_if_positive() takes
a 32-bit offset value rather than a 64-bit offset value as it should
do. Thus, if called with a 64-bit offset, the value will be
unexpectedly truncated to 32 bits.

Fix this by taking the offset as a long rather than an int.

Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Reviewed-by: Palmer Dabbelt <pal...@sifive.com>
Cc: Albert Ou <a...@eecs.berkeley.edu>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Will Deacon <will.dea...@arm.com>
Cc: a...@arndb.de
Cc: b...@alien8.de
Cc: catalin.mari...@arm.com
Cc: da...@davemloft.net
Cc: fenghua...@intel.com
Cc: heiko.carst...@de.ibm.com
Cc: herb...@gondor.apana.org.au
Cc: i...@jurassic.park.msu.ru
Cc: jho...@kernel.org
Cc: li...@armlinux.org.uk
Cc: matts...@gmail.com
Cc: m...@ellerman.id.au
Cc: paul.bur...@mips.com
Cc: pau...@samba.org
Cc: r...@linux-mips.org
Cc: r...@twiddle.net
Cc: tony.l...@intel.com
Cc: vgu...@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-12-mark.rutl...@arm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 arch/riscv/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 9038aeb900a6..9c263bd9d5ad 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -332,7 +332,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
 #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
+static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
 {
 	long prev, rc;