On Thu, Jul 16, 2015 at 05:59:03PM +0100, Peter Zijlstra wrote:
> On Thu, Jul 16, 2015 at 04:32:36PM +0100, Will Deacon wrote:
> > @@ -130,8 +130,7 @@ static inline void queued_read_unlock(struct qrwlock *lock)
> >     /*
> >      * Atomically decrement the reader count
> >      */
> > -   smp_mb__before_atomic();
> > -   atomic_sub(_QR_BIAS, &lock->cnts);
> > +   (void)atomic_sub_return_release(_QR_BIAS, &lock->cnts);
> >  }
> >  
> >  /**
> 
> This one will actually cause different code on x86; I think it's still
> fine though. LOCK XADD should not be (much) slower than LOCK SUB.

Yeah, I wondered whether introducing atomic_sub_release etc. was worth
the hassle and decided against it for now.
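
For completeness, the generic fallback would have been trivial. A rough
sketch of a hypothetical atomic_sub_release (the name and the fallback
definition are my assumption here, not part of the posted series):

	/*
	 * Hypothetical value-less sub with RELEASE ordering: the generic
	 * fallback just discards the result of the _return_release form,
	 * so it only pays off where an arch overrides it (e.g. x86 could
	 * keep using LOCK SUB, since x86 atomics are fully ordered anyway).
	 */
	#ifndef atomic_sub_release
	#define atomic_sub_release(i, v)				\
		((void)atomic_sub_return_release(i, v))
	#endif

It didn't seem worth the extra surface area for now.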

> > diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
> > index a71bb3541880..879c8fab7bea 100644
> > --- a/kernel/locking/qrwlock.c
> > +++ b/kernel/locking/qrwlock.c
> > @@ -36,7 +36,7 @@ rspin_until_writer_unlock(struct qrwlock *lock, u32 cnts)
> >  {
> >     while ((cnts & _QW_WMASK) == _QW_LOCKED) {
> >             cpu_relax_lowlatency();
> > -           cnts = smp_load_acquire((u32 *)&lock->cnts);
> > +           cnts = atomic_read_acquire(&lock->cnts);
> >     }
> >  }
> 
> It might make sense to add comments to the users of this function that
> actually rely on the _acquire semantics; I had to double-check that :-)

Good point, I'll add those.
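
Something along these lines at the call sites, perhaps (just a sketch;
the exact wording will be in v3):

	/*
	 * We rely on the ACQUIRE semantics of rspin_until_writer_unlock():
	 * accesses in the reader's critical section must not be reordered
	 * before the load that observes the writer dropping _QW_LOCKED.
	 */
	rspin_until_writer_unlock(lock, cnts);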

> But otherwise that all looks good.

Cheers. I'll send a v3 next week with your comments addressed. Barring
any objections, I guess this could be merged via -tip with the exception
of the ARM patch? FWIW, I plan to port arm64 once I've got my pending
asm/atomic.h rework queued.

Will