On Mon, Feb 12, 2018 at 02:58:56PM +0000, Mel Gorman wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c1091cb023c4..28c8d9c91955 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5747,7 +5747,16 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
>               prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
>       prev_eff_load *= capacity_of(this_cpu);
>  
> -     return this_eff_load <= prev_eff_load ? this_cpu : nr_cpumask_bits;
> +     /*
> +      * If sync, adjust the weight of prev_eff_load such that if
> +      * prev_eff == this_eff that select_idle_sibling will consider
> +      * stacking the wakee on top of the waker if no other CPU is
> +      * idle.
> +      */
> +     if (sync)
> +             prev_eff_load += 1;

So where we had <= and would consistently favour pulling the task to the
waking CPU when all else was equal, you now switch to <, such that when
things are equal we do not pull.

That makes sense I suppose.

Except for sync wakeups, where you say: if everything else is equal,
pull. That also makes sense, because sync says 'current' promises to go
away.

OK.
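
To make the tie-breaking concrete, here is a minimal standalone sketch
(not the kernel code itself; pick_cpu and the constant arguments are
illustrative) of the comparison after this patch: with the strict '<',
equal effective loads no longer pull, unless sync bumps prev_eff_load
by one so that a tie resolves toward the waking CPU.

	/* Hypothetical demo of the tie-break; not kernel code. */
	#include <stdio.h>
	#include <stdbool.h>

	static int pick_cpu(unsigned long this_eff_load,
			    unsigned long prev_eff_load,
			    bool sync, int this_cpu, int nr_cpumask_bits)
	{
		/* sync: nudge prev so a tie favours stacking on the waker */
		if (sync)
			prev_eff_load += 1;

		return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits;
	}

	int main(void)
	{
		/* equal loads, no sync: no pull (prints 64) */
		printf("%d\n", pick_cpu(100, 100, false, 0, 64));
		/* equal loads, sync: pull to waking CPU (prints 0) */
		printf("%d\n", pick_cpu(100, 100, true, 0, 64));
		return 0;
	}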

> +
> +     return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits;
>  }
>  
>  static int wake_affine(struct sched_domain *sd, struct task_struct *p,
> -- 
> 2.15.1
> 
