On 25 August 2017 at 12:16, Brendan Jackman <brendan.jack...@arm.com> wrote:
> When the local group is not allowed, this_runnable_load and
> this_avg_load are never updated from their initial value of 0. That
> means the load checks at the end of find_idlest_group incorrectly
> return NULL. Initialising these values to ULONG_MAX instead means we
> return the idlest remote group in that case.
>
> Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
> Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
> Cc: Vincent Guittot <vincent.guit...@linaro.org>
> Cc: Josef Bacik <jo...@toxicpanda.com>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Morten Rasmussen <morten.rasmus...@arm.com>
> Cc: Peter Zijlstra <pet...@infradead.org>

Reviewed-by: Vincent Guittot <vincent.guit...@linaro.org>

> ---
>  kernel/sched/fair.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4ccecbf825bf..0ce75bbcde45 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5387,8 +5387,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>  {
>         struct sched_group *idlest = NULL, *group = sd->groups;
>         struct sched_group *most_spare_sg = NULL;
> -       unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
> -       unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
> +       unsigned long min_runnable_load = ULONG_MAX;
> +       unsigned long this_runnable_load = ULONG_MAX;
> +       unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
>         unsigned long most_spare = 0, this_spare = 0;
>         int load_idx = sd->forkexec_idx;
>         int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
> --
> 2.14.1
>
