When hotplugging CPUs out or creating exclusive cpusets (disabling
sched_load_balance), systems that were asymmetric at boot might become
symmetric. In this case, leaving the flag set might lead to suboptimal
scheduling decisions.

The arch code providing the flag has no visibility of the cpuset
configuration, so it must either be told by passing a cpumask, or the
generic topology code has to verify whether the flag should still be
set when taking the actual sched_domain_span() into account. This
patch implements the latter approach.

We need to detect capacity by calling arch_scale_cpu_capacity()
directly, as rq->cpu_capacity_orig hasn't been set yet this early in
the boot process.

cc: Ingo Molnar <mi...@redhat.com>
cc: Peter Zijlstra <pet...@infradead.org>

Signed-off-by: Morten Rasmussen <morten.rasmus...@arm.com>
---
 kernel/sched/topology.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 71330e0e41db..29c186961345 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1160,6 +1160,26 @@ sd_init(struct sched_domain_topology_level *tl,
        sd_id = cpumask_first(sched_domain_span(sd));
 
        /*
+        * Check if cpu_map eclipses cpu capacity asymmetry.
+        */
+
+       if (sd->flags & SD_ASYM_CPUCAPACITY) {
+               int i;
+               bool disable = true;
+               long capacity = arch_scale_cpu_capacity(NULL, sd_id);
+
+               for_each_cpu(i, sched_domain_span(sd)) {
+                       if (capacity != arch_scale_cpu_capacity(NULL, i)) {
+                               disable = false;
+                               break;
+                       }
+               }
+
+               if (disable)
+                       sd->flags &= ~SD_ASYM_CPUCAPACITY;
+       }
+
+       /*
         * Convert topological properties into behaviour.
         */
 
-- 
2.7.4
