On 25 August 2017 at 12:16, Brendan Jackman <brendan.jack...@arm.com> wrote:
> When p is allowed on none of the CPUs in the sched_domain, we
> currently return NULL from find_idlest_group, and pointlessly
> continue the search on lower sched_domain levels (where p is also not
> allowed) before returning prev_cpu regardless (as we have not updated
> new_cpu).
>
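
As a toy illustration of the wasted walk (userspace-style bitmasks with
made-up values, not the kernel's cpumask API):

    #include <stdio.h>
    #include <stdbool.h>

    typedef unsigned long mask_t;   /* one bit per CPU */

    static bool intersects(mask_t a, mask_t b)
    {
            return (a & b) != 0;
    }

    int main(void)
    {
            /* Domain spans, top level first; the task is only allowed
             * on CPUs 0-3, which none of the spans contain. */
            mask_t sd_span[] = { 0xf0, 0x30, 0x10 };
            mask_t allowed = 0x0f;
            int prev_cpu = 1, new_cpu = prev_cpu;
            int lvl;

            /* Pre-patch: every level is visited even though the
             * intersection is already empty at the top, and new_cpu
             * never changes, so prev_cpu comes back regardless. */
            for (lvl = 0; lvl < 3; lvl++)
                    if (!intersects(sd_span[lvl], allowed))
                            printf("level %d searched for nothing\n", lvl);

            /* Post-patch: one check up front short-circuits all that. */
            if (!intersects(sd_span[0], allowed)) {
                    printf("returning prev_cpu = %d immediately\n", prev_cpu);
                    return 0;
            }
            printf("new_cpu = %d\n", new_cpu);
            return 0;
    }
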
> Add an explicit check for this case, and a comment to
> find_idlest_group. Now when find_idlest_group returns NULL, it always
> means that the local group is allowed and idlest.
>
> Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
> Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
> Cc: Vincent Guittot <vincent.guit...@linaro.org>
> Cc: Josef Bacik <jo...@toxicpanda.com>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Morten Rasmussen <morten.rasmus...@arm.com>
> Cc: Peter Zijlstra <pet...@infradead.org>


Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>
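
For anyone following along, here is a condensed, paraphrased sketch of
the find_idlest_cpu() loop that consumes the NULL (abridged from the
context below, not the verbatim kernel source):

    while (sd) {
            struct sched_group *group;

            if (!(sd->flags & sd_flag)) {
                    sd = sd->child;
                    continue;
            }

            group = find_idlest_group(sd, p, cpu, sd_flag);
            if (!group) {
                    /* With the guard above in place, NULL can only
                     * mean the local group is allowed and idlest,
                     * so descending into the child domain is safe. */
                    sd = sd->child;
                    continue;
            }
            /* ... otherwise pick the idlest CPU in group ... */
    }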

> ---
>  kernel/sched/fair.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0ce75bbcde45..26080917ff8d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5380,6 +5380,8 @@ static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>  /*
>   * find_idlest_group finds and returns the least busy CPU group within the
>   * domain.
> + *
> + * Assumes p is allowed on at least one CPU in sd.
>   */
>  static struct sched_group *
>  find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> @@ -5567,6 +5569,9 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
>  {
>         int new_cpu = prev_cpu;
>
> +       if (!cpumask_intersects(sched_domain_span(sd), &p->cpus_allowed))
> +               return prev_cpu;
> +
>         while (sd) {
>                 struct sched_group *group;
>                 struct sched_domain *tmp;
> --
> 2.14.1
>