Commit-ID:  a338ecb07a338c9a8b0ca0010e862ebe598b1551
Gitweb:     https://git.kernel.org/tip/a338ecb07a338c9a8b0ca0010e862ebe598b1551
Author:     Waiman Long <[email protected]>
AuthorDate: Thu, 4 Apr 2019 13:43:13 -0400
Committer:  Ingo Molnar <[email protected]>
CommitDate: Wed, 10 Apr 2019 10:56:01 +0200

locking/rwsem: Micro-optimize rwsem_try_write_lock_unqueued()

The atomic_long_cmpxchg_acquire() in rwsem_try_write_lock_unqueued() is
replaced by atomic_long_try_cmpxchg_acquire() to simplify the code and
generate slightly better assembly code.

There is no functional change.

Signed-off-by: Waiman Long <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Acked-by: Will Deacon <[email protected]>
Acked-by: Davidlohr Bueso <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Davidlohr Bueso <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Tim Chen <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
 kernel/locking/rwsem-xadd.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index c213869e1aa7..f6198e1a58f6 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -259,21 +259,16 @@ static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
  */
 static inline bool rwsem_try_write_lock_unqueued(struct rw_semaphore *sem)
 {
-       long old, count = atomic_long_read(&sem->count);
+       long count = atomic_long_read(&sem->count);
 
-       while (true) {
-               if (!(count == 0 || count == RWSEM_WAITING_BIAS))
-                       return false;
-
-               old = atomic_long_cmpxchg_acquire(&sem->count, count,
-                                     count + RWSEM_ACTIVE_WRITE_BIAS);
-               if (old == count) {
+       while (!count || count == RWSEM_WAITING_BIAS) {
+               if (atomic_long_try_cmpxchg_acquire(&sem->count, &count,
+                                       count + RWSEM_ACTIVE_WRITE_BIAS)) {
                        rwsem_set_owner(sem);
                        return true;
                }
-
-               count = old;
        }
+       return false;
 }
 
 static inline bool owner_on_cpu(struct task_struct *owner)

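For readers unfamiliar with the try_cmpxchg() pattern, here is a minimal
userspace sketch of the same transformation, using C11 atomics instead of
the kernel's atomic_long_*() API. atomic_compare_exchange_strong_explicit()
behaves like atomic_long_try_cmpxchg_acquire(): it returns a bool and, on
failure, writes the currently observed value back into 'count', so the
separate 'old == count' comparison and 'count = old' reload both disappear.
The bias constants below are stand-ins chosen for illustration only, and
rwsem_set_owner() is omitted since it has no userspace equivalent.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for RWSEM_WAITING_BIAS / RWSEM_ACTIVE_WRITE_BIAS (illustration only). */
#define WAITING_BIAS		(-65536L)
#define ACTIVE_WRITE_BIAS	(-65535L)

static _Atomic long sem_count;

static bool try_write_lock_unqueued(void)
{
	long count = atomic_load_explicit(&sem_count, memory_order_relaxed);

	while (!count || count == WAITING_BIAS) {
		/*
		 * On failure, 'count' is refreshed with the value currently
		 * in sem_count, so the loop condition re-checks fresh state.
		 */
		if (atomic_compare_exchange_strong_explicit(&sem_count, &count,
					count + ACTIVE_WRITE_BIAS,
					memory_order_acquire,
					memory_order_relaxed))
			return true;	/* write lock acquired */
	}
	return false;
}

int main(void)
{
	atomic_store(&sem_count, 0);	/* lock free, no waiters */
	printf("acquired: %d\n", try_write_lock_unqueued());
	return 0;
}

On x86, for instance, CMPXCHG already sets ZF on success, so letting the
compiler branch on that flag instead of comparing the returned old value is
where the "slightly better assembly" comes from.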