On 01/24/2014 03:25 AM, Peter Zijlstra wrote:
> On Thu, Jan 23, 2014 at 11:28:48PM -0500, Waiman Long wrote:
>> +/**
>> + * queue_read_trylock - try to acquire read lock of a queue rwlock
>> + * @lock : Pointer to queue rwlock structure
>> + * Return: 1 if lock acquired, 0 if failed
>> + */
>> +static inline int queue_read_trylock(struct qrwlock *lock)
>> +{
>> +        union qrwcnts cnts;
>> +
>> +        cnts.rwc = ACCESS_ONCE(lock->cnts.rwc);
>> +        if (likely(!cnts.writer)) {
>> +                cnts.rwc = (u32)atomic_add_return(_QR_BIAS, &lock->cnts.rwa);
>> +                if (likely(!cnts.writer)) {
>> +                        smp_mb__after_atomic_inc();
>
> That's superfluous, as atomic_add_return() is documented as being a
> full barrier.
Yes, you are right. I have reviewed Documentation/memory-barriers.txt again, and atomic_add_return() is indeed supposed to act as a full memory barrier, so no extra barrier is needed. I will correct that in the next version.
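For reference, the corrected trylock fast path should then look roughly like this (a sketch against the v11 qrwcnts/_QR_BIAS definitions, not the final patch):

static inline int queue_read_trylock(struct qrwlock *lock)
{
        union qrwcnts cnts;

        cnts.rwc = ACCESS_ONCE(lock->cnts.rwc);
        if (likely(!cnts.writer)) {
                /*
                 * atomic_add_return() is a full barrier, so no extra
                 * smp_mb__after_atomic_inc() is needed here.
                 */
                cnts.rwc = (u32)atomic_add_return(_QR_BIAS, &lock->cnts.rwa);
                if (likely(!cnts.writer))
                        return 1;
                atomic_sub(_QR_BIAS, &lock->cnts.rwa);
        }
        return 0;
}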
>> +                        return 1;
>> +                }
>> +                atomic_sub(_QR_BIAS, &lock->cnts.rwa);
>> +        }
>> +        return 0;
>> +}
>> +
>> +/**
>> + * queue_write_trylock - try to acquire write lock of a queue rwlock
>> + * @lock : Pointer to queue rwlock structure
>> + * Return: 1 if lock acquired, 0 if failed
>> + */
>> +static inline int queue_write_trylock(struct qrwlock *lock)
>> +{
>> +        union qrwcnts old, new;
>> +
>> +        old.rwc = ACCESS_ONCE(lock->cnts.rwc);
>> +        if (likely(!old.rwc)) {
>> +                new.rwc = old.rwc;
>> +                new.writer = _QW_LOCKED;
>> +                if (likely(cmpxchg(&lock->cnts.rwc, old.rwc, new.rwc)
>> +                        == old.rwc))
>
> One could actually use atomic_cmpxchg() and avoid one (ab)use of that
> union :-)
I think either one is fine. I would like to keep the original code if it is not really a problem.
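For comparison, the atomic_cmpxchg() variant would look roughly like the sketch below. It assumes, as in the patch's union layout, that the writer byte maps to the low bits of the word, so a whole-word value of _QW_LOCKED is equivalent to setting new.writer on a zero count:

static inline int queue_write_trylock(struct qrwlock *lock)
{
        u32 old = atomic_read(&lock->cnts.rwa);

        if (unlikely(old))
                return 0;
        /* Work on the atomic_t directly instead of going through the union */
        return likely(atomic_cmpxchg(&lock->cnts.rwa, 0, _QW_LOCKED) == 0);
}

Functionally the two are the same; it is mostly a question of how many places peek into the union.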
>> +                        return 1;
>> +        }
>> +        return 0;
>> +}
>> +
>> +/**
>> + * queue_read_lock - acquire read lock of a queue rwlock
>> + * @lock: Pointer to queue rwlock structure
>> + */
>> +static inline void queue_read_lock(struct qrwlock *lock)
>> +{
>> +        union qrwcnts cnts;
>> +
>> +        cnts.rwc = atomic_add_return(_QR_BIAS, &lock->cnts.rwa);
>> +        if (likely(!cnts.writer)) {
>> +                smp_mb__after_atomic_inc();
>
> Superfluous again.
Will remove that.
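The read-lock fast path then reduces to the add-and-check; roughly (assuming the patch's queue_read_lock_slowpath() helper handles the contended case):

static inline void queue_read_lock(struct qrwlock *lock)
{
        union qrwcnts cnts;

        /* atomic_add_return() already acts as a full memory barrier */
        cnts.rwc = atomic_add_return(_QR_BIAS, &lock->cnts.rwa);
        if (likely(!cnts.writer))
                return;
        /* The slowpath will decrement the reader count if necessary */
        queue_read_lock_slowpath(lock);
}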
>> +                return;
>> +        queue_write_lock_slowpath(lock);
>> +}
>> +
>> +/**
>> + * queue_read_unlock - release read lock of a queue rwlock
>> + * @lock : Pointer to queue rwlock structure
>> + */
>> +static inline void queue_read_unlock(struct qrwlock *lock)
>> +{
>> +        /*
>> +         * Atomically decrement the reader count
>> +         */
>> +        smp_mb__before_atomic_dec();
>> +        atomic_sub(_QR_BIAS, &lock->cnts.rwa);
>> +}
>> +
>> +/**
>> + * queue_write_unlock - release write lock of a queue rwlock
>> + * @lock : Pointer to queue rwlock structure
>> + */
>> +static inline void queue_write_unlock(struct qrwlock *lock)
>> +{
>> +        /*
>> +         * If the writer field is atomic, it can be cleared directly.
>> +         * Otherwise, an atomic subtraction will be used to clear it.
>> +         */
>> +        if (__native_word(lock->cnts.writer))
>> +                smp_store_release(&lock->cnts.writer, 0);
>> +        else {
>> +                smp_mb__before_atomic_dec();
>> +                atomic_sub(_QW_LOCKED, &lock->cnts.rwa);
>> +        }
>
> Missing {}, Documentation/CodingStyle Chapter 3 near the very end.
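Right, since the else side has braces the if side needs them too. The fixed queue_write_unlock() should read roughly as follows (same logic, just with balanced braces):

static inline void queue_write_unlock(struct qrwlock *lock)
{
        /*
         * If the writer field is a native word, it can be cleared directly;
         * otherwise an atomic subtraction is used to clear it.
         */
        if (__native_word(lock->cnts.writer)) {
                smp_store_release(&lock->cnts.writer, 0);
        } else {
                smp_mb__before_atomic_dec();
                atomic_sub(_QW_LOCKED, &lock->cnts.rwa);
        }
}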
Thanks for spotting that. Will fix it in the next version.

-Longman

