On Sun, Sep 15, 2013 at 09:30:13PM +0400, Vladimir Davydov wrote:
> Currently new_dst_cpu is prevented from being reselected actually, not
> dst_cpu. This can result in attempting to pull tasks to this_cpu twice.
> 
> Signed-off-by: Vladimir Davydov <[email protected]>
> ---
>  kernel/sched/fair.c |    6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9b3fe1c..cd59640 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5269,15 +5269,15 @@ more_balance:
>                */
>               if ((env.flags & LBF_SOME_PINNED) && env.imbalance > 0) {
>  
> +                     /* Prevent to re-select dst_cpu via env's cpus */
> +                     cpumask_clear_cpu(env.dst_cpu, env.cpus);
> +
>                       env.dst_rq       = cpu_rq(env.new_dst_cpu);
>                       env.dst_cpu      = env.new_dst_cpu;
>                       env.flags       &= ~LBF_SOME_PINNED;
>                       env.loop         = 0;
>                       env.loop_break   = sched_nr_migrate_break;
>  
> -                     /* Prevent to re-select dst_cpu via env's cpus */
> -                     cpumask_clear_cpu(env.dst_cpu, env.cpus);
> -
>                       /*
>                        * Go back to "more_balance" rather than "redo" since we
>                        * need to continue with same src_cpu.

FWIW please submit patches against tip/master (or at the very least
against tip/sched/core).

I currently picked up:

vladimir_davydov-2_sched-fix_small_imbalance__fix_local-_avg_load___busiest-_avg_load_case.patch
vladimir_davydov-sched-fix_task_h_load_calculation.patch
vladimir_davydov-1_sched-load_balance__prevent_reselect_prev_dst_cpu_if_some_pinned.patch

So I've two patches still outstanding: one on the <= vs < issue and the
other for losing the bound on the load-balance passes.
_______________________________________________
Devel mailing list
[email protected]
https://lists.openvz.org/mailman/listinfo/devel