On Thu, 08 Dec, at 05:56:53PM, Vincent Guittot wrote:
> During fork, a task's utilization is initialized only once the rq has been
> selected, because the rq's current utilization level is used to set the
> utilization of the forked task. As the task's utilization is still zero at
> this step of the fork sequence, it makes no sense to look for spare
> capacity that can fit the task's utilization.
> Furthermore, I see performance regressions for the test "hackbench -P -g 1"
> because the least-loaded policy is always bypassed and tasks are not
> spread during fork.
> 
> With this patch and the fix below, we are back to the same performance as
> v4.8. The fix below is only a temporary one, used for testing until a
> smarter solution is found, because we can't simply remove the check, which
> is useful for other benchmarks.
> 
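
The 6-line patch body itself isn't quoted above, but from the description I'd
guess it gates the spare-capacity path in find_idlest_group() on the fork
case, roughly along these lines (this is only a sketch of my reading, not the
actual patch; the skip_spare label is hypothetical):

	/*
	 * A forked task's utilization is still zero at this point, so
	 * comparing it against a group's spare capacity is meaningless;
	 * fall back to the least-loaded selection instead.
	 */
	if (sd_flag & SD_BALANCE_FORK)
		goto skip_spare;
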
> @@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> 
>       avg_cost = this_sd->avg_scan_cost;
> 
> -     /*
> -      * Due to large variance we need a large fuzz factor; hackbench in
> -      * particularly is sensitive here.
> -      */
> -     if ((avg_idle / 512) < avg_cost)
> -             return -1;
> -
>       time = local_clock();
> 
>       for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {
> 
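
For context, the removed check skips the idle-CPU scan whenever the average
idle time doesn't look worth the scan cost: with, say, avg_idle = 100us and
avg_scan_cost = 1us, 100000ns / 512 ~= 195ns < 1000ns, so select_idle_cpu()
returns -1 without scanning.
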
> Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
> Acked-by: Morten Rasmussen <morten.rasmus...@arm.com>
> ---
>  kernel/sched/fair.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Tested-by: Matt Fleming <m...@codeblueprint.co.uk>
Reviewed-by: Matt Fleming <m...@codeblueprint.co.uk>
