On 11/19, Ivo Sieben wrote:
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3090,9 +3090,22 @@ void __wake_up(wait_queue_head_t *q, unsigned int mode,
>  {
>       unsigned long flags;
>
> -     spin_lock_irqsave(&q->lock, flags);
> -     __wake_up_common(q, mode, nr_exclusive, 0, key);
> -     spin_unlock_irqrestore(&q->lock, flags);
> +     /*
> +      * We check for list emptiness outside the lock. This prevents the
> +      * wakeup from entering the critical section needlessly when the
> +      * task list is empty.
> +      *
> +      * A full memory barrier is placed before checking list emptiness to
> +      * make sure this function sees an up-to-date task list.
> +      * Note that other code that manipulates the list uses a spin_lock
> +      * and therefore doesn't need additional memory barriers.
> +      */
> +     smp_mb();
> +     if (!list_empty(&q->task_list)) {

waitqueue_active() ?

> +             spin_lock_irqsave(&q->lock, flags);
> +             __wake_up_common(q, mode, nr_exclusive, 0, key);
> +             spin_unlock_irqrestore(&q->lock, flags);
> +     }

I am wondering if it makes sense unconditionally. A lot of callers do

        if (waitqueue_active(q))
                wake_up(...);

so this patch makes the optimization above pointless for them and adds
an smp_mb() to every __wake_up() call.


But I won't argue.

Oleg.
