Commit-ID:  4ad3831a9d4af5e36da5d44a3b9c6522d0353cee
Gitweb:     https://git.kernel.org/tip/4ad3831a9d4af5e36da5d44a3b9c6522d0353cee
Author:     Chris Redpath <chris.redp...@arm.com>
AuthorDate: Wed, 4 Jul 2018 11:17:48 +0100
Committer:  Ingo Molnar <mi...@kernel.org>
CommitDate: Mon, 10 Sep 2018 11:05:53 +0200

sched/fair: Don't move tasks to lower capacity CPUs unless necessary

When lower capacity CPUs are load balancing and considering pulling
something from a higher capacity group, we should not pull tasks from a
CPU with only one task running, as this is guaranteed to impede progress
for that task. If there is more than one task running, load balancing
within the higher capacity group would already have made any possible
moves to resolve the imbalance, so we make better use of system compute
capacity by moving a task only when the source CPU still has more than
one task running.
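
As a rough illustration of the condition added below in
find_busiest_queue(), here is a minimal, standalone user-space C sketch
of the same decision. The struct, the should_skip_src_cpu() helper and
the sample capacity values are hypothetical and only model the check;
this is not kernel code.

/*
 * Standalone sketch (not kernel code): models the skip decision this
 * patch adds to find_busiest_queue(). The struct, helper name and the
 * capacity values are illustrative assumptions.
 */
#include <stdbool.h>
#include <stdio.h>

struct cpu_stats {
	unsigned long capacity;    /* per-CPU compute capacity */
	unsigned int  nr_running;  /* runnable tasks on this CPU's runqueue */
};

/*
 * On an asymmetric-capacity (SD_ASYM_CPUCAPACITY) domain, do not pick a
 * source CPU that has higher capacity than the destination while it is
 * running only a single task: pulling that task can only slow it down.
 */
static bool should_skip_src_cpu(bool asym_cpucapacity,
				const struct cpu_stats *src,
				const struct cpu_stats *dst)
{
	return asym_cpucapacity &&
	       dst->capacity < src->capacity &&
	       src->nr_running == 1;
}

int main(void)
{
	struct cpu_stats big    = { .capacity = 1024, .nr_running = 1 };
	struct cpu_stats little = { .capacity = 462,  .nr_running = 0 };

	/* Idle lower-capacity CPU eyeing the only task on a big CPU: skip. */
	printf("skip big CPU: %d\n",
	       should_skip_src_cpu(true, &big, &little));

	/* With two tasks on the big CPU, pulling one of them is allowed. */
	big.nr_running = 2;
	printf("skip big CPU: %d\n",
	       should_skip_src_cpu(true, &big, &little));

	return 0;
}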

Signed-off-by: Chris Redpath <chris.redp...@arm.com>
Signed-off-by: Morten Rasmussen <morten.rasmus...@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: dietmar.eggem...@arm.com
Cc: gaku.inami...@renesas.com
Cc: valentin.schnei...@arm.com
Cc: vincent.guit...@linaro.org
Link: http://lkml.kernel.org/r/1530699470-29808-11-git-send-email-morten.rasmus...@arm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8b228c5b3eb4..06ff75f4ac7b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8423,6 +8423,17 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
                capacity = capacity_of(i);
 
+               /*
+                * For ASYM_CPUCAPACITY domains, don't pick a CPU that could
+                * eventually lead to active_balancing high->low capacity.
+                * Higher per-CPU capacity is considered better than balancing
+                * average load.
+                */
+               if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
+                   capacity_of(env->dst_cpu) < capacity &&
+                   rq->nr_running == 1)
+                       continue;
+
                wl = weighted_cpuload(rq);
 
                /*
