On Thu, 2015-04-09 at 07:37 +0200, Ingo Molnar wrote:

> The 'break' path does not seem to be equivalent, we used to do:
> 
> > -                   rcu_read_unlock();
> > -                   return false;
> 
> and now we'll do:
> 
> > +                   ret = false;
> ...
> > +   if (!READ_ONCE(sem->owner)) {
> > +           long count = READ_ONCE(sem->count);
> 
> it's harmless (we do one more round of checking), but that's not an 
> equivalent transformation and slows down the preemption trigger a 
> (tiny) bit, because the chance that we actually catch the lock when 
> breaking out early is vanishingly small. (It might in fact do the 
> wrong thing in returning true if need_resched() is set and we've 
> switched owners in that small window.)
> 
> Given how dissimilar the return path is in this case, I'm not sure 
> it's worth sharing it. This might be one of the few cases where 
> separate return statements are the better solution.

I also preferred the multiple returns for the rwsem variant, since, as
you mentioned, it avoids having to check sem->owner when we should be
heading to the slowpath.

Now that I think of it, though, if we want just one return path, we can
still do that by adding an "out" label, as in the patch below.

---
 kernel/locking/rwsem-xadd.c |   25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 3417d01..e74240f 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -327,38 +327,39 @@ done:
 static noinline
 bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 {
-       long count;
+       bool ret = true;
 
        rcu_read_lock();
        while (sem->owner == owner) {
                /*
                 * Ensure we emit the owner->on_cpu, dereference _after_
-                * checking sem->owner still matches owner, if that fails,
-                * owner might point to free()d memory, if it still matches,
+                * checking sem->owner still matches owner. If that fails,
+                * owner might point to freed memory. If it still matches,
                 * the rcu_read_lock() ensures the memory stays valid.
                 */
                barrier();
 
-               /* abort spinning when need_resched or owner is not running */
+               /* Abort spinning when need_resched or owner is not running. */
                if (!owner->on_cpu || need_resched()) {
                        rcu_read_unlock();
-                       return false;
+                       ret = false;
+                       goto out;
                }
 
                cpu_relax_lowlatency();
        }
        rcu_read_unlock();
 
-       if (READ_ONCE(sem->owner))
-               return true; /* new owner, continue spinning */
-
        /*
         * When the owner is not set, the lock could be free or
-        * held by readers. Check the counter to verify the
-        * state.
+        * held by readers. Check the counter to verify the state.
         */
-       count = READ_ONCE(sem->count);
-       return (count == 0 || count == RWSEM_WAITING_BIAS);
+       if (!READ_ONCE(sem->owner)) {
+               long count = READ_ONCE(sem->count);
+               ret = (count == 0 || count == RWSEM_WAITING_BIAS);
+       }
+out:
+       return ret;
 }
 
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
-- 
1.7.2.5


