On Thu, 2016-01-21 at 17:29 +0800, Ding Tianhong wrote:

> 
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index 0551c21..596b341 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -256,7 +256,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
>       struct task_struct *owner;
>       int retval = 1;
>  
> -     if (need_resched())
> +     if (need_resched() || atomic_read(&lock->count) == -1)
>               return 0;
>  

One concern I have is that this change will eliminate any optimistic
spinning as long as there is a waiter.  Is there a middle ground where
we can allow only one spinner when there are waiters?

In other words, we allow spinning when
atomic_read(&lock->count) == -1 but no one holds the
osq lock that queues up the spinners (i.e. no other process is doing
optimistic spinning).
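
Something along these lines, perhaps (just a sketch, untested; it
assumes the existing osq_is_locked() helper on lock->osq):

	if (need_resched())
		return 0;

	/*
	 * If there are waiters (count == -1), keep spinning only when
	 * no other task is already queued on the OSQ, i.e. we would be
	 * the lone optimistic spinner.
	 */
	if (atomic_read(&lock->count) == -1 && osq_is_locked(&lock->osq))
		return 0;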

This could allow a bit of spinning without starving out the waiters.

Tim
