On 31 August 2017 at 13:58, Brendan Jackman <[email protected]> wrote:
> find_idlest_group() returns NULL when the local group is idlest. The
> caller then continues the find_idlest_group() search at a lower level
> of the current CPU's sched_domain hierarchy. find_idlest_group_cpu() is
> not consulted and, crucially, @new_cpu is not updated. This means the
> search is pointless and we return @prev_cpu from select_task_rq_fair().
>
> This is fixed by initialising @new_cpu to @cpu instead of
> @prev_cpu.
>
> Signed-off-by: Brendan Jackman <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Vincent Guittot <[email protected]>
> Cc: Josef Bacik <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Morten Rasmussen <[email protected]>
> Cc: Peter Zijlstra <[email protected]>

Reviewed-by: Vincent Guittot <[email protected]>

> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2608091..f93cb97 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5567,7 +5567,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
>  static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
>                                   int cpu, int prev_cpu, int sd_flag)
>  {
> -       int new_cpu = prev_cpu;
> +       int new_cpu = cpu;
>
>         if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
>                 return prev_cpu;
> --
> 2.7.4
>
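For context, a rough sketch of the walk in find_idlest_cpu() that this
one-liner affects (simplified from kernel/sched/fair.c with this series
applied; the domain-restart details are abbreviated, so this is
illustrative rather than compilable on its own):

    static inline int
    find_idlest_cpu(struct sched_domain *sd, struct task_struct *p,
                    int cpu, int prev_cpu, int sd_flag)
    {
            int new_cpu = cpu;      /* was: int new_cpu = prev_cpu; */

            if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
                    return prev_cpu;

            while (sd) {
                    struct sched_group *group;

                    if (!(sd->flags & sd_flag)) {
                            sd = sd->child;
                            continue;
                    }

                    group = find_idlest_group(sd, p, cpu, sd_flag);
                    if (!group) {
                            /*
                             * The local group (the one containing @cpu)
                             * is idlest: descend a level without touching
                             * @new_cpu. If this branch is taken at every
                             * level, @new_cpu keeps its initial value, so
                             * initialising it to @prev_cpu made the whole
                             * walk a no-op; initialising it to @cpu
                             * returns the CPU whose group won at every
                             * level.
                             */
                            sd = sd->child;
                            continue;
                    }

                    new_cpu = find_idlest_group_cpu(group, p, cpu);
                    if (new_cpu == cpu) {
                            /* Try balancing at a lower level of @cpu. */
                            sd = sd->child;
                            continue;
                    }

                    /*
                     * Elided: restart the walk in @new_cpu's own
                     * sched_domain hierarchy (the real code re-selects
                     * @sd via for_each_domain()).
                     */
                    cpu = new_cpu;
                    sd = NULL;
            }

            return new_cpu;
    }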
