On Thu, Jul 16, 2015 at 04:32:36PM +0100, Will Deacon wrote:
> @@ -130,8 +130,7 @@ static inline void queued_read_unlock(struct qrwlock *lock)
>       /*
>        * Atomically decrement the reader count
>        */
> -     smp_mb__before_atomic();
> -     atomic_sub(_QR_BIAS, &lock->cnts);
> +     (void)atomic_sub_return_release(_QR_BIAS, &lock->cnts);
>  }
>  
>  /**

This one will actually result in different code on x86; I think it's
still fine though: LOCK XADD should not be (much) slower than LOCK SUB.
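
For reference, a rough sketch of the difference (only a sketch: the asm
in the comments reflects the typical x86 code generation for these
atomics, not actual compiler output, and the helper names are made up):

	/* old: the result is unused, so a plain locked subtract suffices */
	static inline void qrw_read_unlock_old(struct qrwlock *lock)
	{
		smp_mb__before_atomic();		/* compiler barrier only on x86 */
		atomic_sub(_QR_BIAS, &lock->cnts);	/* lock subl $_QR_BIAS,(%rdi) */
	}

	/* new: the _return variant must produce the new count, hence XADD */
	static inline void qrw_read_unlock_new(struct qrwlock *lock)
	{
		/*
		 *	movl	$-_QR_BIAS,%eax
		 *	lock xaddl %eax,(%rdi)
		 *	(+ an add to form the return value, discarded here)
		 */
		(void)atomic_sub_return_release(_QR_BIAS, &lock->cnts);
	}

And since every LOCK'ed op is a full barrier on x86, the _release
variant emits the same instructions as the plain _return one.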

> diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> index a71bb3541880..879c8fab7bea 100644
> --- a/kernel/locking/qrwlock.c
> +++ b/kernel/locking/qrwlock.c
> @@ -36,7 +36,7 @@ rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
>  {
>       while ((cnts & _QW_WMASK) == _QW_LOCKED) {
>               cpu_relax_lowlatency();
> -             cnts = smp_load_acquire((u32 *)&lock->cnts);
> +             cnts = atomic_read_acquire(&lock->cnts);
>       }
>  }

It might make sense to add comments to the users of this function that
actually rely on the _acquire semantics; I had to double-check that :-)
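
Something like the below at the call sites, perhaps (again only a
sketch; the wording is just a suggestion, and queued_read_lock_slowpath()
is one of the users I mean):

	/*
	 * Rely on the ACQUIRE in rspin_until_writer_unlock(): reads in
	 * the critical section must not be reordered before we observe
	 * the writer clearing _QW_LOCKED, so don't weaken the load in
	 * there to a plain atomic_read().
	 */
	rspin_until_writer_unlock(lock, cnts);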


But otherwise that all looks good.