On 25 August 2017 at 12:16, Brendan Jackman <[email protected]> wrote:
> When the local group is not allowed we do not modify this_*_load from
> their initial value of 0. That means that the load checks at the end
> of find_idlest_group cause us to incorrectly return NULL. Fixing the
> initial values to ULONG_MAX means we will instead return the idlest
> remote group in that case.
>
> Signed-off-by: Brendan Jackman <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Vincent Guittot <[email protected]>
> Cc: Josef Bacik <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Morten Rasmussen <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
Reviewed-by: Vincent Guittot <[email protected]>

> ---
>  kernel/sched/fair.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4ccecbf825bf..0ce75bbcde45 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5387,8 +5387,9 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>  {
>  	struct sched_group *idlest = NULL, *group = sd->groups;
>  	struct sched_group *most_spare_sg = NULL;
> -	unsigned long min_runnable_load = ULONG_MAX, this_runnable_load = 0;
> -	unsigned long min_avg_load = ULONG_MAX, this_avg_load = 0;
> +	unsigned long min_runnable_load = ULONG_MAX;
> +	unsigned long this_runnable_load = ULONG_MAX;
> +	unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
>  	unsigned long most_spare = 0, this_spare = 0;
>  	int load_idx = sd->forkexec_idx;
>  	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
> --
> 2.14.1
>

