Re: [PATCH] locking: remove spin_lock_flags() etc
Peter Zijlstra writes:
> On Mon, Oct 25, 2021 at 03:06:24PM +0200, Arnd Bergmann wrote:
> > On Mon, Oct 25, 2021 at 11:57 AM Peter Zijlstra wrote:
> > > On Sat, Oct 23, 2021 at 06:04:57PM +0200, Arnd Bergmann wrote:
> > > > On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> > > > > On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > > > > > From: Arnd Bergmann
> > > > > >
> > > > > > As this is all dead code, just remove it and the helper functions
> > > > > > built around it. For arch/ia64, the inline asm could be cleaned up,
> > > > > > but it seems safer to leave it untouched.
> > > > > >
> > > > > > Signed-off-by: Arnd Bergmann
> > > > >
> > > > > Does that mean we can also remove the GENERIC_LOCKBREAK config option
> > > > > from the Kconfig files as well?
> > > >
> > > > I couldn't figure this out.
> > > >
> > > > What I see is that the only architectures setting GENERIC_LOCKBREAK are
> > > > nds32, parisc, powerpc, s390, sh and sparc64, while the only
> > > > architectures implementing arch_spin_is_contended() are arm32, csky
> > > > and ia64.
> > > >
> > > > The part I don't understand is whether the option actually does anything
> > > > useful any more after commit d89c70356acf ("locking/core: Remove
> > > > break_lock field when CONFIG_GENERIC_LOCKBREAK=y").
> > >
> > > Urgh, what a mess.. AFAICT there's still code in
> > > kernel/locking/spinlock.c that relies on it. Specifically when
> > > GENERIC_LOCKBREAK=y we seem to create _lock*() variants that are
> > > basically TaS locks which drop preempt/irq disable while spinning.
> > >
> > > Anybody having this on and not having native TaS locks is in for a rude
> > > surprise I suppose... sparc64 being the obvious candidate there :/
> >
> > Is this a problem on s390 and powerpc, those two being the ones
> > that matter in practice?
> >
> > On s390, we pick between the cmpxchg() based directed-yield when
> > running on virtualized CPUs, and a normal qspinlock when running on a
> > dedicated CPU.
> >
> > On PowerPC, we pick at compile-time between either the qspinlock
> > (default-enabled on Book3S-64, i.e. all server chips) or a ll/sc based
> > spinlock plus vm_yield() (default on embedded and 32-bit mac).
>
> Urgh, yeah, so this crud undermines the whole point of having a fair
> lock. I'm thinking s390 and Power want to have this fixed.

Our Kconfig has:

  config GENERIC_LOCKBREAK
          bool
          default y
          depends on SMP && PREEMPTION

And we have exactly one defconfig that enables both SMP and PREEMPT,
arch/powerpc/configs/85xx/ge_imp3a_defconfig, which is some ~10 year old
PCI card embedded thing I've never heard of. High chance anyone who has
those is not running upstream kernels on them.

So I think we'd be happy for you to rip GENERIC_LOCKBREAK out; it's
almost entirely unused on powerpc anyway.

cheers
Re: [PATCH] locking: remove spin_lock_flags() etc
On 10/25/21 11:44 AM, Arnd Bergmann wrote:
> On Mon, Oct 25, 2021 at 5:28 PM Waiman Long wrote:
> > On 10/25/21 9:06 AM, Arnd Bergmann wrote:
> > > On s390, we pick between the cmpxchg() based directed-yield when
> > > running on virtualized CPUs, and a normal qspinlock when running on a
> > > dedicated CPU.
> >
> > I am not aware that s390 is using qspinlocks at all as I don't see
> > ARCH_USE_QUEUED_SPINLOCKS being set anywhere under arch/s390. I only see
> > that it uses a cmpxchg based spinlock.
>
> Sorry, I should not have said "normal" here. See arch/s390/lib/spinlock.c
> for their custom queued spinlocks as implemented in
> arch_spin_lock_queued(). I don't know if that code actually does the same
> thing as the generic qspinlock, but it seems at least similar.

Yes, you are right. Their queued lock code looks like a custom version of
the pvqspinlock code.

Cheers,
Longman
Re: [PATCH] locking: remove spin_lock_flags() etc
On Mon, Oct 25, 2021 at 5:28 PM Waiman Long wrote:
> On 10/25/21 9:06 AM, Arnd Bergmann wrote:
> >
> > On s390, we pick between the cmpxchg() based directed-yield when
> > running on virtualized CPUs, and a normal qspinlock when running on a
> > dedicated CPU.
>
> I am not aware that s390 is using qspinlocks at all as I don't see
> ARCH_USE_QUEUED_SPINLOCKS being set anywhere under arch/s390. I only see
> that it uses a cmpxchg based spinlock.

Sorry, I should not have said "normal" here. See arch/s390/lib/spinlock.c
for their custom queued spinlocks as implemented in arch_spin_lock_queued().
I don't know if that code actually does the same thing as the generic
qspinlock, but it seems at least similar.

        Arnd
Re: [PATCH] locking: remove spin_lock_flags() etc
On 10/25/21 9:06 AM, Arnd Bergmann wrote:
> On Mon, Oct 25, 2021 at 11:57 AM Peter Zijlstra wrote:
> > On Sat, Oct 23, 2021 at 06:04:57PM +0200, Arnd Bergmann wrote:
> > > On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> > > > On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > > > > From: Arnd Bergmann
> > > > >
> > > > > As this is all dead code, just remove it and the helper functions
> > > > > built around it. For arch/ia64, the inline asm could be cleaned up,
> > > > > but it seems safer to leave it untouched.
> > > > >
> > > > > Signed-off-by: Arnd Bergmann
> > > >
> > > > Does that mean we can also remove the GENERIC_LOCKBREAK config option
> > > > from the Kconfig files as well?
> > >
> > > I couldn't figure this out.
> > >
> > > What I see is that the only architectures setting GENERIC_LOCKBREAK are
> > > nds32, parisc, powerpc, s390, sh and sparc64, while the only
> > > architectures implementing arch_spin_is_contended() are arm32, csky
> > > and ia64.
> > >
> > > The part I don't understand is whether the option actually does anything
> > > useful any more after commit d89c70356acf ("locking/core: Remove
> > > break_lock field when CONFIG_GENERIC_LOCKBREAK=y").
> >
> > Urgh, what a mess.. AFAICT there's still code in
> > kernel/locking/spinlock.c that relies on it. Specifically when
> > GENERIC_LOCKBREAK=y we seem to create _lock*() variants that are
> > basically TaS locks which drop preempt/irq disable while spinning.
> >
> > Anybody having this on and not having native TaS locks is in for a rude
> > surprise I suppose... sparc64 being the obvious candidate there :/
>
> Is this a problem on s390 and powerpc, those two being the ones
> that matter in practice?
>
> On s390, we pick between the cmpxchg() based directed-yield when
> running on virtualized CPUs, and a normal qspinlock when running on a
> dedicated CPU.

I am not aware that s390 is using qspinlocks at all as I don't see
ARCH_USE_QUEUED_SPINLOCKS being set anywhere under arch/s390. I only see
that it uses a cmpxchg based spinlock.

Cheers,
Longman
Re: [PATCH] locking: remove spin_lock_flags() etc
On Mon, Oct 25, 2021 at 03:06:24PM +0200, Arnd Bergmann wrote:
> On Mon, Oct 25, 2021 at 11:57 AM Peter Zijlstra wrote:
> > On Sat, Oct 23, 2021 at 06:04:57PM +0200, Arnd Bergmann wrote:
> > > On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> > > > On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > > > > From: Arnd Bergmann
> > > > >
> > > > > As this is all dead code, just remove it and the helper functions
> > > > > built around it. For arch/ia64, the inline asm could be cleaned up,
> > > > > but it seems safer to leave it untouched.
> > > > >
> > > > > Signed-off-by: Arnd Bergmann
> > > >
> > > > Does that mean we can also remove the GENERIC_LOCKBREAK config option
> > > > from the Kconfig files as well?
> > >
> > > I couldn't figure this out.
> > >
> > > What I see is that the only architectures setting GENERIC_LOCKBREAK are
> > > nds32, parisc, powerpc, s390, sh and sparc64, while the only
> > > architectures implementing arch_spin_is_contended() are arm32, csky
> > > and ia64.
> > >
> > > The part I don't understand is whether the option actually does anything
> > > useful any more after commit d89c70356acf ("locking/core: Remove
> > > break_lock field when CONFIG_GENERIC_LOCKBREAK=y").
> >
> > Urgh, what a mess.. AFAICT there's still code in
> > kernel/locking/spinlock.c that relies on it. Specifically when
> > GENERIC_LOCKBREAK=y we seem to create _lock*() variants that are
> > basically TaS locks which drop preempt/irq disable while spinning.
> >
> > Anybody having this on and not having native TaS locks is in for a rude
> > surprise I suppose... sparc64 being the obvious candidate there :/
>
> Is this a problem on s390 and powerpc, those two being the ones
> that matter in practice?
>
> On s390, we pick between the cmpxchg() based directed-yield when
> running on virtualized CPUs, and a normal qspinlock when running on a
> dedicated CPU.
>
> On PowerPC, we pick at compile-time between either the qspinlock
> (default-enabled on Book3S-64, i.e. all server chips) or a ll/sc based
> spinlock plus vm_yield() (default on embedded and 32-bit mac).

Urgh, yeah, so this crud undermines the whole point of having a fair
lock. I'm thinking s390 and Power want to have this fixed.
Re: [PATCH] locking: remove spin_lock_flags() etc
On Mon, Oct 25, 2021 at 11:57 AM Peter Zijlstra wrote:
> On Sat, Oct 23, 2021 at 06:04:57PM +0200, Arnd Bergmann wrote:
> > On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> > > On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > > > From: Arnd Bergmann
> > > >
> > > > As this is all dead code, just remove it and the helper functions built
> > > > around it. For arch/ia64, the inline asm could be cleaned up, but
> > > > it seems safer to leave it untouched.
> > > >
> > > > Signed-off-by: Arnd Bergmann
> > >
> > > Does that mean we can also remove the GENERIC_LOCKBREAK config option
> > > from the Kconfig files as well?
> >
> > I couldn't figure this out.
> >
> > What I see is that the only architectures setting GENERIC_LOCKBREAK are
> > nds32, parisc, powerpc, s390, sh and sparc64, while the only architectures
> > implementing arch_spin_is_contended() are arm32, csky and ia64.
> >
> > The part I don't understand is whether the option actually does anything
> > useful any more after commit d89c70356acf ("locking/core: Remove
> > break_lock field when CONFIG_GENERIC_LOCKBREAK=y").
>
> Urgh, what a mess.. AFAICT there's still code in
> kernel/locking/spinlock.c that relies on it. Specifically when
> GENERIC_LOCKBREAK=y we seem to create _lock*() variants that are
> basically TaS locks which drop preempt/irq disable while spinning.
>
> Anybody having this on and not having native TaS locks is in for a rude
> surprise I suppose... sparc64 being the obvious candidate there :/

Is this a problem on s390 and powerpc, those two being the ones
that matter in practice?

On s390, we pick between the cmpxchg() based directed-yield when
running on virtualized CPUs, and a normal qspinlock when running on a
dedicated CPU.

On PowerPC, we pick at compile-time between either the qspinlock
(default-enabled on Book3S-64, i.e. all server chips) or a ll/sc based
spinlock plus vm_yield() (default on embedded and 32-bit mac).

        Arnd
Re: [PATCH] locking: remove spin_lock_flags() etc
On Mon, Oct 25, 2021 at 11:57:28AM +0200, Peter Zijlstra wrote:
> On Sat, Oct 23, 2021 at 06:04:57PM +0200, Arnd Bergmann wrote:
> > On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> > > On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > > > From: Arnd Bergmann
> > > >
> > > > As this is all dead code, just remove it and the helper functions built
> > > > around it. For arch/ia64, the inline asm could be cleaned up, but
> > > > it seems safer to leave it untouched.
> > > >
> > > > Signed-off-by: Arnd Bergmann
> > >
> > > Does that mean we can also remove the GENERIC_LOCKBREAK config option
> > > from the Kconfig files as well?
> >
> > I couldn't figure this out.
> >
> > What I see is that the only architectures setting GENERIC_LOCKBREAK are
> > nds32, parisc, powerpc, s390, sh and sparc64, while the only architectures
> > implementing arch_spin_is_contended() are arm32, csky and ia64.
> >
> > The part I don't understand is whether the option actually does anything
> > useful any more after commit d89c70356acf ("locking/core: Remove
> > break_lock field when CONFIG_GENERIC_LOCKBREAK=y").
>
> Urgh, what a mess.. AFAICT there's still code in
> kernel/locking/spinlock.c that relies on it. Specifically when
> GENERIC_LOCKBREAK=y we seem to create _lock*() variants that are
> basically TaS locks which drop preempt/irq disable while spinning.
>
> Anybody having this on and not having native TaS locks is in for a rude
> surprise I suppose... sparc64 being the obvious candidate there :/

Something like the *totally*untested* patch below would rip it all out.
---
 arch/ia64/Kconfig                |  3 --
 arch/nds32/Kconfig               |  4 --
 arch/parisc/Kconfig              |  5 ---
 arch/powerpc/Kconfig             |  5 ---
 arch/s390/Kconfig                |  3 --
 arch/sh/Kconfig                  |  4 --
 arch/sparc/Kconfig               |  6 ---
 include/linux/rwlock_api_smp.h   |  4 +-
 include/linux/spinlock_api_smp.h |  4 +-
 kernel/Kconfig.locks             | 26 ++--
 kernel/locking/spinlock.c        | 90
 11 files changed, 17 insertions(+), 137 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 1e33666fa679..5ec3abba3c81 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -81,9 +81,6 @@ config MMU
 config STACKTRACE_SUPPORT
 	def_bool y

-config GENERIC_LOCKBREAK
-	def_bool n
-
 config GENERIC_CALIBRATE_DELAY
 	bool
 	default y
diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig
index aea26e739543..699008dbd6c2 100644
--- a/arch/nds32/Kconfig
+++ b/arch/nds32/Kconfig
@@ -59,10 +59,6 @@ config GENERIC_CSUM
 config GENERIC_HWEIGHT
 	def_bool y

-config GENERIC_LOCKBREAK
-	def_bool y
-	depends on PREEMPTION
-
 config STACKTRACE_SUPPORT
 	def_bool y
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 27a8b49af11f..afe70bcdde2c 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -86,11 +86,6 @@ config ARCH_DEFCONFIG
 	default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
 	default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT

-config GENERIC_LOCKBREAK
-	bool
-	default y
-	depends on SMP && PREEMPTION
-
 config ARCH_HAS_ILOG2_U32
 	bool
 	default n
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index ba5b66189358..e782c9ea3f81 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -98,11 +98,6 @@ config LOCKDEP_SUPPORT
 	bool
 	default y

-config GENERIC_LOCKBREAK
-	bool
-	default y
-	depends on SMP && PREEMPTION
-
 config GENERIC_HWEIGHT
 	bool
 	default y
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index b86de61b8caa..e4ff05f5393b 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -26,9 +26,6 @@ config GENERIC_BUG
 config GENERIC_BUG_RELATIVE_POINTERS
 	def_bool y

-config GENERIC_LOCKBREAK
-	def_bool y if PREEMPTION
-
 config PGSTE
 	def_bool y if KVM
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 6904f4bdbf00..26f1cf2c69a3 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -86,10 +86,6 @@ config GENERIC_HWEIGHT
 config GENERIC_CALIBRATE_DELAY
 	bool

-config GENERIC_LOCKBREAK
-	def_bool y
-	depends on SMP && PREEMPTION
-
 config ARCH_SUSPEND_POSSIBLE
 	def_bool n
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index b120ed947f50..e77e7254eaa0 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -246,12 +246,6 @@ config US3_MC
 	  If in doubt, say Y, as this information can be very useful.

-# Global things across all Sun machines.
-config GENERIC_LOCKBREAK
-	bool
-	default y
-	depends on SPARC64 && SMP && PREEMPTION
-
 config NUMA
 	bool "NUMA support"
 	depends on SPARC64 && SMP
diff --git a/include/linux/rwlock_api_smp.h b/include/linux/rwlock_api_smp.h
index abfb53ab11be..a281d81ef8ee 100644
--- a/include/
Re: [PATCH] locking: remove spin_lock_flags() etc
On Sat, Oct 23, 2021 at 06:04:57PM +0200, Arnd Bergmann wrote:
> On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> > On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > > From: Arnd Bergmann
> > >
> > > As this is all dead code, just remove it and the helper functions built
> > > around it. For arch/ia64, the inline asm could be cleaned up, but
> > > it seems safer to leave it untouched.
> > >
> > > Signed-off-by: Arnd Bergmann
> >
> > Does that mean we can also remove the GENERIC_LOCKBREAK config option
> > from the Kconfig files as well?
>
> I couldn't figure this out.
>
> What I see is that the only architectures setting GENERIC_LOCKBREAK are
> nds32, parisc, powerpc, s390, sh and sparc64, while the only architectures
> implementing arch_spin_is_contended() are arm32, csky and ia64.
>
> The part I don't understand is whether the option actually does anything
> useful any more after commit d89c70356acf ("locking/core: Remove
> break_lock field when CONFIG_GENERIC_LOCKBREAK=y").

Urgh, what a mess.. AFAICT there's still code in
kernel/locking/spinlock.c that relies on it. Specifically when
GENERIC_LOCKBREAK=y we seem to create _lock*() variants that are
basically TaS locks which drop preempt/irq disable while spinning.

Anybody having this on and not having native TaS locks is in for a rude
surprise I suppose... sparc64 being the obvious candidate there :/
Re: [PATCH] locking: remove spin_lock_flags() etc
On Sat, Oct 23, 2021 at 3:37 AM Waiman Long wrote:
> On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> > From: Arnd Bergmann
> >
> > As this is all dead code, just remove it and the helper functions built
> > around it. For arch/ia64, the inline asm could be cleaned up, but
> > it seems safer to leave it untouched.
> >
> > Signed-off-by: Arnd Bergmann
>
> Does that mean we can also remove the GENERIC_LOCKBREAK config option
> from the Kconfig files as well?

I couldn't figure this out.

What I see is that the only architectures setting GENERIC_LOCKBREAK are
nds32, parisc, powerpc, s390, sh and sparc64, while the only architectures
implementing arch_spin_is_contended() are arm32, csky and ia64.

The part I don't understand is whether the option actually does anything
useful any more after commit d89c70356acf ("locking/core: Remove
break_lock field when CONFIG_GENERIC_LOCKBREAK=y").

        Arnd
Re: [PATCH] locking: remove spin_lock_flags() etc
On 10/22/21 7:59 AM, Arnd Bergmann wrote:
> From: Arnd Bergmann
>
> parisc, ia64 and powerpc32 are the only remaining architectures that
> provide custom arch_{spin,read,write}_lock_flags() functions, which are
> meant to re-enable interrupts while waiting for a spinlock.
>
> However, none of these can actually run into this codepath, because
> it is only called on architectures without CONFIG_GENERIC_LOCKBREAK,
> or when CONFIG_DEBUG_LOCK_ALLOC is set without CONFIG_LOCKDEP, and none
> of those combinations are possible on the three architectures.
>
> Going back in the git history, it appears that arch/mn10300 may have
> been able to run into this code path, but there is a good chance that
> it never worked. On the architectures that still exist, it was
> already impossible to hit back in 2008 after the introduction of
> CONFIG_GENERIC_LOCKBREAK, and possibly earlier.
>
> As this is all dead code, just remove it and the helper functions built
> around it. For arch/ia64, the inline asm could be cleaned up, but
> it seems safer to leave it untouched.
>
> Signed-off-by: Arnd Bergmann

Does that mean we can also remove the GENERIC_LOCKBREAK config option
from the Kconfig files as well?

Cheers,
Longman
Re: [PATCH] locking: remove spin_lock_flags() etc
On 10/22/21 13:59, Arnd Bergmann wrote:
> From: Arnd Bergmann
>
> parisc, ia64 and powerpc32 are the only remaining architectures that
> provide custom arch_{spin,read,write}_lock_flags() functions, which are
> meant to re-enable interrupts while waiting for a spinlock.
>
> However, none of these can actually run into this codepath, because
> it is only called on architectures without CONFIG_GENERIC_LOCKBREAK,
> or when CONFIG_DEBUG_LOCK_ALLOC is set without CONFIG_LOCKDEP, and none
> of those combinations are possible on the three architectures.
>
> Going back in the git history, it appears that arch/mn10300 may have
> been able to run into this code path, but there is a good chance that
> it never worked. On the architectures that still exist, it was
> already impossible to hit back in 2008 after the introduction of
> CONFIG_GENERIC_LOCKBREAK, and possibly earlier.
>
> As this is all dead code, just remove it and the helper functions built
> around it. For arch/ia64, the inline asm could be cleaned up, but
> it seems safer to leave it untouched.
>
> Signed-off-by: Arnd Bergmann

Acked-by: Helge Deller # parisc

Helge

> ---
>  arch/ia64/include/asm/spinlock.h           | 23 ++
>  arch/openrisc/include/asm/spinlock.h       |  3 ---
>  arch/parisc/include/asm/spinlock.h         | 15 --
>  arch/powerpc/include/asm/simple_spinlock.h | 21
>  arch/s390/include/asm/spinlock.h           |  8
>  include/linux/lockdep.h                    | 17
>  include/linux/rwlock.h                     | 15 --
>  include/linux/rwlock_api_smp.h             |  6 ++
>  include/linux/spinlock.h                   | 13
>  include/linux/spinlock_api_smp.h           |  9 -
>  include/linux/spinlock_up.h                |  1 -
>  kernel/locking/spinlock.c                  |  3 +--
>  12 files changed, 9 insertions(+), 125 deletions(-)
>
> diff --git a/arch/ia64/include/asm/spinlock.h b/arch/ia64/include/asm/spinlock.h
> index 864775970c50..0e5c1ad3239c 100644
> --- a/arch/ia64/include/asm/spinlock.h
> +++ b/arch/ia64/include/asm/spinlock.h
> @@ -124,18 +124,13 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
>  	__ticket_spin_unlock(lock);
>  }
>
> -static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
> -						 unsigned long flags)
> -{
> -	arch_spin_lock(lock);
> -}
> -#define arch_spin_lock_flags	arch_spin_lock_flags
> -
>  #ifdef ASM_SUPPORTED
>
>  static __always_inline void
> -arch_read_lock_flags(arch_rwlock_t *lock, unsigned long flags)
> +arch_read_lock(arch_rwlock_t *lock)
>  {
> +	unsigned long flags = 0;
> +
>  	__asm__ __volatile__ (
>  		"tbit.nz p6, p0 = %1,%2\n"
>  		"br.few 3f\n"
> @@ -157,13 +152,8 @@ arch_read_lock_flags(arch_rwlock_t *lock, unsigned long flags)
>  	: "p6", "p7", "r2", "memory");
>  }
>
> -#define arch_read_lock_flags arch_read_lock_flags
> -#define arch_read_lock(lock) arch_read_lock_flags(lock, 0)
> -
>  #else /* !ASM_SUPPORTED */
>
> -#define arch_read_lock_flags(rw, flags) arch_read_lock(rw)
> -
>  #define arch_read_lock(rw)							\
>  do {										\
>  	arch_rwlock_t *__read_lock_ptr = (rw);					\
> @@ -186,8 +176,10 @@ do {										\
>  #ifdef ASM_SUPPORTED
>
>  static __always_inline void
> -arch_write_lock_flags(arch_rwlock_t *lock, unsigned long flags)
> +arch_write_lock(arch_rwlock_t *lock)
>  {
> +	unsigned long flags = 0;
> +
>  	__asm__ __volatile__ (
>  		"tbit.nz p6, p0 = %1, %2\n"
>  		"mov ar.ccv = r0\n"
> @@ -210,9 +202,6 @@ arch_write_lock_flags(arch_rwlock_t *lock, unsigned long flags)
>  	: "ar.ccv", "p6", "p7", "r2", "r29", "memory");
>  }
>
> -#define arch_write_lock_flags arch_write_lock_flags
> -#define arch_write_lock(rw)	arch_write_lock_flags(rw, 0)
> -
>  #define arch_write_trylock(rw)							\
>  ({										\
>  	register long result;							\
> diff --git a/arch/openrisc/include/asm/spinlock.h b/arch/openrisc/include/asm/spinlock.h
> index a8940bdfcb7e..264944a71535 100644
> --- a/arch/openrisc/include/asm/spinlock.h
> +++ b/arch/openrisc/include/asm/spinlock.h
> @@ -19,9 +19,6 @@
>
>  #include
>
> -#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
> -#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
> -
>  #define arch_spin_relax(lock)	cpu_relax()
>  #define arch_read_relax(lock)	cpu_relax()
>  #define arch_write_relax(lock)	cpu_rel
[PATCH] locking: remove spin_lock_flags() etc
From: Arnd Bergmann

parisc, ia64 and powerpc32 are the only remaining architectures that
provide custom arch_{spin,read,write}_lock_flags() functions, which are
meant to re-enable interrupts while waiting for a spinlock.

However, none of these can actually run into this codepath, because
it is only called on architectures without CONFIG_GENERIC_LOCKBREAK,
or when CONFIG_DEBUG_LOCK_ALLOC is set without CONFIG_LOCKDEP, and none
of those combinations are possible on the three architectures.

Going back in the git history, it appears that arch/mn10300 may have
been able to run into this code path, but there is a good chance that
it never worked. On the architectures that still exist, it was
already impossible to hit back in 2008 after the introduction of
CONFIG_GENERIC_LOCKBREAK, and possibly earlier.

As this is all dead code, just remove it and the helper functions built
around it. For arch/ia64, the inline asm could be cleaned up, but
it seems safer to leave it untouched.

Signed-off-by: Arnd Bergmann
---
 arch/ia64/include/asm/spinlock.h           | 23 ++
 arch/openrisc/include/asm/spinlock.h       |  3 ---
 arch/parisc/include/asm/spinlock.h         | 15 --
 arch/powerpc/include/asm/simple_spinlock.h | 21
 arch/s390/include/asm/spinlock.h           |  8
 include/linux/lockdep.h                    | 17
 include/linux/rwlock.h                     | 15 --
 include/linux/rwlock_api_smp.h             |  6 ++
 include/linux/spinlock.h                   | 13
 include/linux/spinlock_api_smp.h           |  9 -
 include/linux/spinlock_up.h                |  1 -
 kernel/locking/spinlock.c                  |  3 +--
 12 files changed, 9 insertions(+), 125 deletions(-)

diff --git a/arch/ia64/include/asm/spinlock.h b/arch/ia64/include/asm/spinlock.h
index 864775970c50..0e5c1ad3239c 100644
--- a/arch/ia64/include/asm/spinlock.h
+++ b/arch/ia64/include/asm/spinlock.h
@@ -124,18 +124,13 @@ static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 	__ticket_spin_unlock(lock);
 }

-static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
-						 unsigned long flags)
-{
-	arch_spin_lock(lock);
-}
-#define arch_spin_lock_flags	arch_spin_lock_flags
-
 #ifdef ASM_SUPPORTED

 static __always_inline void
-arch_read_lock_flags(arch_rwlock_t *lock, unsigned long flags)
+arch_read_lock(arch_rwlock_t *lock)
 {
+	unsigned long flags = 0;
+
 	__asm__ __volatile__ (
 		"tbit.nz p6, p0 = %1,%2\n"
 		"br.few 3f\n"
@@ -157,13 +152,8 @@ arch_read_lock_flags(arch_rwlock_t *lock, unsigned long flags)
 	: "p6", "p7", "r2", "memory");
 }

-#define arch_read_lock_flags arch_read_lock_flags
-#define arch_read_lock(lock) arch_read_lock_flags(lock, 0)
-
 #else /* !ASM_SUPPORTED */

-#define arch_read_lock_flags(rw, flags) arch_read_lock(rw)
-
 #define arch_read_lock(rw)							\
 do {										\
 	arch_rwlock_t *__read_lock_ptr = (rw);					\
@@ -186,8 +176,10 @@ do {										\
 #ifdef ASM_SUPPORTED

 static __always_inline void
-arch_write_lock_flags(arch_rwlock_t *lock, unsigned long flags)
+arch_write_lock(arch_rwlock_t *lock)
 {
+	unsigned long flags = 0;
+
 	__asm__ __volatile__ (
 		"tbit.nz p6, p0 = %1, %2\n"
 		"mov ar.ccv = r0\n"
@@ -210,9 +202,6 @@ arch_write_lock_flags(arch_rwlock_t *lock, unsigned long flags)
 	: "ar.ccv", "p6", "p7", "r2", "r29", "memory");
 }

-#define arch_write_lock_flags arch_write_lock_flags
-#define arch_write_lock(rw)	arch_write_lock_flags(rw, 0)
-
 #define arch_write_trylock(rw)							\
 ({										\
 	register long result;							\
diff --git a/arch/openrisc/include/asm/spinlock.h b/arch/openrisc/include/asm/spinlock.h
index a8940bdfcb7e..264944a71535 100644
--- a/arch/openrisc/include/asm/spinlock.h
+++ b/arch/openrisc/include/asm/spinlock.h
@@ -19,9 +19,6 @@

 #include

-#define arch_read_lock_flags(lock, flags) arch_read_lock(lock)
-#define arch_write_lock_flags(lock, flags) arch_write_lock(lock)
-
 #define arch_spin_relax(lock)	cpu_relax()
 #define arch_read_relax(lock)	cpu_relax()
 #define arch_write_relax(lock)	cpu_relax()
diff --git a/arch/parisc/include/asm/spinlock.h b/arch/parisc/include/asm/spinlock.h
index fa5ee8a45dbd..a6e5d66a7656 100644
--- a/arch/parisc/include/asm/spinlock.h
+++ b/arch/parisc/include/asm/spinlock.h
@@ -23,21 +23,6 @@ static inline void arch_spin_lock(arch_spinlock_t *x)