Re: [PATCH RFC 21/26] powerpc: Remove spin_unlock_wait() arch-specific definitions

2017-07-05 Thread Paul E. McKenney
On Sun, Jul 02, 2017 at 11:58:07AM +0800, Boqun Feng wrote:
> On Thu, Jun 29, 2017 at 05:01:29PM -0700, Paul E. McKenney wrote:
> > There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> > and it appears that all callers could do just as well with a lock/unlock
> > pair.  This commit therefore removes the underlying arch-specific
> > arch_spin_unlock_wait().
> > 
> > Signed-off-by: Paul E. McKenney 
> > Cc: Benjamin Herrenschmidt 
> > Cc: Paul Mackerras 
> > Cc: Michael Ellerman 
> > Cc: 
> > Cc: Will Deacon 
> > Cc: Peter Zijlstra 
> > Cc: Alan Stern 
> > Cc: Andrea Parri 
> > Cc: Linus Torvalds 
> 
> Acked-by: Boqun Feng 

And finally applied in preparation for v2 of the patch series.

Thank you!!!

Thanx, Paul

> Regards,
> Boqun
> 
> > ---
> >  arch/powerpc/include/asm/spinlock.h | 33 -
> >  1 file changed, 33 deletions(-)
> > 
> > diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> > index 8c1b913de6d7..d256e448ea49 100644
> > --- a/arch/powerpc/include/asm/spinlock.h
> > +++ b/arch/powerpc/include/asm/spinlock.h
> > @@ -170,39 +170,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
> > lock->slock = 0;
> >  }
> >  
> > -static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> > -{
> > -   arch_spinlock_t lock_val;
> > -
> > -   smp_mb();
> > -
> > -   /*
> > -* Atomically load and store back the lock value (unchanged). This
> > -* ensures that our observation of the lock value is ordered with
> > -* respect to other lock operations.
> > -*/
> > -   __asm__ __volatile__(
> > -"1:" PPC_LWARX(%0, 0, %2, 0) "\n"
> > -"  stwcx. %0, 0, %2\n"
> > -"  bne- 1b\n"
> > -   : "=&r" (lock_val), "+m" (*lock)
> > -   : "r" (lock)
> > -   : "cr0", "xer");
> > -
> > -   if (arch_spin_value_unlocked(lock_val))
> > -   goto out;
> > -
> > -   while (lock->slock) {
> > -   HMT_low();
> > -   if (SHARED_PROCESSOR)
> > -   __spin_yield(lock);
> > -   }
> > -   HMT_medium();
> > -
> > -out:
> > -   smp_mb();
> > -}
> > -
> >  /*
> >   * Read-write spinlocks, allowing multiple readers
> >   * but only one writer.
> > -- 
> > 2.5.2
> > 

Re: [PATCH RFC 21/26] powerpc: Remove spin_unlock_wait() arch-specific definitions

2017-07-01 Thread Boqun Feng
On Thu, Jun 29, 2017 at 05:01:29PM -0700, Paul E. McKenney wrote:
> There is no agreed-upon definition of spin_unlock_wait()'s semantics,
> and it appears that all callers could do just as well with a lock/unlock
> pair.  This commit therefore removes the underlying arch-specific
> arch_spin_unlock_wait().
> 
> Signed-off-by: Paul E. McKenney 
> Cc: Benjamin Herrenschmidt 
> Cc: Paul Mackerras 
> Cc: Michael Ellerman 
> Cc: 
> Cc: Will Deacon 
> Cc: Peter Zijlstra 
> Cc: Alan Stern 
> Cc: Andrea Parri 
> Cc: Linus Torvalds 

Acked-by: Boqun Feng 

Regards,
Boqun

> ---
>  arch/powerpc/include/asm/spinlock.h | 33 -
>  1 file changed, 33 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 8c1b913de6d7..d256e448ea49 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -170,39 +170,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
>   lock->slock = 0;
>  }
>  
> -static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> -{
> - arch_spinlock_t lock_val;
> -
> - smp_mb();
> -
> - /*
> -  * Atomically load and store back the lock value (unchanged). This
> -  * ensures that our observation of the lock value is ordered with
> -  * respect to other lock operations.
> -  */
> - __asm__ __volatile__(
> -"1:  " PPC_LWARX(%0, 0, %2, 0) "\n"
> -"stwcx. %0, 0, %2\n"
> -"bne- 1b\n"
> - : "=&r" (lock_val), "+m" (*lock)
> - : "r" (lock)
> - : "cr0", "xer");
> -
> - if (arch_spin_value_unlocked(lock_val))
> - goto out;
> -
> - while (lock->slock) {
> - HMT_low();
> - if (SHARED_PROCESSOR)
> - __spin_yield(lock);
> - }
> - HMT_medium();
> -
> -out:
> - smp_mb();
> -}
> -
>  /*
>   * Read-write spinlocks, allowing multiple readers
>   * but only one writer.
> -- 
> 2.5.2
> 

[PATCH RFC 21/26] powerpc: Remove spin_unlock_wait() arch-specific definitions

2017-06-29 Thread Paul E. McKenney
There is no agreed-upon definition of spin_unlock_wait()'s semantics,
and it appears that all callers could do just as well with a lock/unlock
pair.  This commit therefore removes the underlying arch-specific
arch_spin_unlock_wait().

Signed-off-by: Paul E. McKenney 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Michael Ellerman 
Cc: 
Cc: Will Deacon 
Cc: Peter Zijlstra 
Cc: Alan Stern 
Cc: Andrea Parri 
Cc: Linus Torvalds 
---
 arch/powerpc/include/asm/spinlock.h | 33 -
 1 file changed, 33 deletions(-)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 8c1b913de6d7..d256e448ea49 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -170,39 +170,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
lock->slock = 0;
 }
 
-static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-   arch_spinlock_t lock_val;
-
-   smp_mb();
-
-   /*
-* Atomically load and store back the lock value (unchanged). This
-* ensures that our observation of the lock value is ordered with
-* respect to other lock operations.
-*/
-   __asm__ __volatile__(
-"1:" PPC_LWARX(%0, 0, %2, 0) "\n"
-"  stwcx. %0, 0, %2\n"
-"  bne- 1b\n"
-   : "=&r" (lock_val), "+m" (*lock)
-   : "r" (lock)
-   : "cr0", "xer");
-
-   if (arch_spin_value_unlocked(lock_val))
-   goto out;
-
-   while (lock->slock) {
-   HMT_low();
-   if (SHARED_PROCESSOR)
-   __spin_yield(lock);
-   }
-   HMT_medium();
-
-out:
-   smp_mb();
-}
-
 /*
  * Read-write spinlocks, allowing multiple readers
  * but only one writer.
-- 
2.5.2
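
For illustration of the conversion the commit message describes, here is
a minimal caller-side sketch.  It is not taken from the patch; the lock
name demo_lock and the function demo_wait_for_holders() are hypothetical:

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(demo_lock);	/* hypothetical lock */

	static void demo_wait_for_holders(void)
	{
		/*
		 * Before this series: spin_unlock_wait(&demo_lock);
		 * After: acquire and immediately release.  Once
		 * spin_lock() returns, any critical section that was
		 * in flight has completed, which is at least as strong
		 * as the guarantees callers assumed they were getting
		 * from spin_unlock_wait().
		 */
		spin_lock(&demo_lock);
		spin_unlock(&demo_lock);
	}

The lock/unlock pair can be costlier on a contended lock, but it has
unambiguous acquire/release semantics, which is the point of the series.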
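
As a side note on the powerpc code being deleted: the lwarx/stwcx. loop
stores the loaded lock word back unchanged, so the observation
participates in the lock word's modification order the way a store
would.  A rough C11 analogue of that load-and-store-back idiom, again
only a sketch, with a hypothetical variable standing in for lock->slock:

	#include <stdatomic.h>

	static _Atomic unsigned int demo_slock;	/* stands in for lock->slock */

	static unsigned int observe_lock_word(void)
	{
		/*
		 * A read-modify-write that adds 0 leaves the value
		 * unchanged but still orders this load with respect to
		 * other atomic operations on the lock word, much like
		 * the lwarx/stwcx. sequence above.
		 */
		return atomic_fetch_add_explicit(&demo_slock, 0,
						 memory_order_seq_cst);
	}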