On Fri, Jun 30, 2017 at 03:18:40PM -0700, Paul E. McKenney wrote:
> On Fri, Jun 30, 2017 at 02:13:39PM +0100, Will Deacon wrote:
> > On Fri, Jun 30, 2017 at 05:38:15AM -0700, Paul E. McKenney wrote:
> > > I also need to check all uses of spin_is_locked().  There might no
> > > longer be any that rely on any particular ordering...
> > 
> > Right. I think we're looking for the "insane case" as per 38b850a73034
> > (which was apparently used by ipc/sem.c at the time, but no longer).
> > 
> > There's a usage in kernel/debug/debug_core.c, but it doesn't fill me with
> > joy.
> 
> That is indeed an interesting one...  But my first round will be to check
> what semantics the implementations seem to provide:
> 
> Acquire courtesy of TSO: s390, sparc, x86.
> Acquire: ia64 (in reality fully ordered).
> Control dependency: alpha, arc, arm, blackfin, hexagon, m32r, mn10300, tile,
>       xtensa.
> Control dependency plus leading full barrier: arm64, powerpc.
> UP-only: c6x, cris, frv, h8300, m68k, microblaze, nios2, openrisc, um,
> unicore32.
> 
> Special cases:
>       metag: Acquire if !CONFIG_METAG_SMP_WRITE_REORDERING.
>              Otherwise control dependency?
>       mips: Control dependency, acquire if CONFIG_CPU_CAVIUM_OCTEON.
>       parisc: Acquire courtesy of TSO, but why barrier in smp_load_acquire?
>       sh: Acquire if one of SH4A, SH5, or J2; otherwise acquire, or UP-only?
> 
> Are these correct, or am I missing something with any of them?

That looks about right, but at least on ARM I think we have to consider the
semantics of spin_is_locked() with respect to the other spin_* functions,
rather than in isolation.
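
For the buckets themselves, I read the two main flavours as roughly the
sketch below -- a made-up lock type rather than anyone's real
asm/spinlock.h, with the TSO architectures getting the acquire behaviour
for free from their memory model:

#include <linux/compiler.h>     /* READ_ONCE() */
#include <asm/barrier.h>        /* smp_load_acquire() */

/* Invented layout, illustration only. */
struct toy_spinlock {
        int locked;
};

/*
 * "Control dependency" bucket: a plain load, so any ordering has to
 * come from a subsequent conditional branch on the result.
 */
static inline int toy_is_locked_plain(struct toy_spinlock *lock)
{
        return READ_ONCE(lock->locked);
}

/*
 * "Acquire" bucket: later accesses cannot be reordered before the
 * load of the lock word.
 */
static inline int toy_is_locked_acquire(struct toy_spinlock *lock)
{
        return smp_load_acquire(&lock->locked);
}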

For example, ARM only has a control dependency, but spin_lock has a trailing
smp_mb() and spin_unlock has both leading and trailing smp_mb().
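
So for a cross-lock pattern like the "insane case" in 38b850a73034 (the
old ipc/sem.c shape), I'd expect the interesting ordering to come from
those barriers rather than from spin_is_locked itself. Roughly -- lock
names invented, sketch only:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);

/* Each CPU takes its own lock and then peeks at the other one. */
int cpu0(void)
{
        int r0;

        spin_lock(&lock_a);             /* lock store, then trailing smp_mb() */
        r0 = spin_is_locked(&lock_b);   /* plain load on ARM */
        spin_unlock(&lock_a);
        return r0;
}

int cpu1(void)
{
        int r1;

        spin_lock(&lock_b);
        r1 = spin_is_locked(&lock_a);
        spin_unlock(&lock_b);
        return r1;
}

With the full barrier sitting between the lock store and the
spin_is_locked load on both sides, both CPUs returning 0 should be
forbidden; with nothing but the control dependency in spin_is_locked it
wouldn't be.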

Will
