On Mon, 05 Dec, at 10:27:36AM, Vincent Guittot wrote:
> 
> Hi Matt,
> 
> Thanks for the results.
> 
> During the review, Morten pointed out that the test condition
> (100*this_avg_load < imbalance_scale*min_avg_load) makes more sense than
> (100*min_avg_load > imbalance_scale*this_avg_load). But I see lower
> performance with this change. Could you run tests with the change below on
> top of the patchset?
> 
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e8d1ae7..0129fbb 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5514,7 +5514,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
>       if (!idlest ||
>           (min_runnable_load > (this_runnable_load + imbalance)) ||
>           ((this_runnable_load < (min_runnable_load + imbalance)) &&
> -                     (100*min_avg_load > imbalance_scale*this_avg_load)))
> +                     (100*this_avg_load < imbalance_scale*min_avg_load)))
>               return NULL;
>       return idlest;
>  }

Queued for testing.
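
For reference, the two tests are not equivalent whenever imbalance_scale
differs from 100, which might be relevant to the performance delta you
mention. Here is a minimal userspace sketch (not kernel code; the
imbalance_scale value and load range are made-up examples, only the two
inequalities come from the patch) that prints the band where the old and
new conditions disagree:

  #include <stdio.h>

  int main(void)
  {
          unsigned long imbalance_scale = 104;    /* assumed example value */
          unsigned long this_avg_load = 100;      /* assumed local group load */
          unsigned long min_avg_load;

          for (min_avg_load = 90; min_avg_load <= 110; min_avg_load++) {
                  /* old test: remote min load exceeds scaled local load */
                  int cond_old = 100 * min_avg_load > imbalance_scale * this_avg_load;
                  /* new test: local load is below scaled remote min load */
                  int cond_new = 100 * this_avg_load < imbalance_scale * min_avg_load;

                  if (cond_old != cond_new)
                          printf("min_avg_load=%lu: old=%d new=%d\n",
                                 min_avg_load, cond_old, cond_new);
          }
          return 0;
  }

With these example numbers the tests disagree for min_avg_load in 97..104:
the new condition returns NULL (keep the task local) over a wider band than
the old one, so it is more reluctant to migrate, which could plausibly show
up in the benchmarks.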
