The overload indicator is used to decide when a cpu that is about to go
idle can skip load balancing entirely. We can avoid load balancing when
no cpu has extra CFS tasks to pull, since both the rt and deadline
classes have their own push/pull mechanisms to do their balancing.
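
For reference, the flag is consumed on the newidle path; below is a
simplified sketch, paraphrased rather than quoted verbatim, of the
existing early-exit check in idle_balance() in kernel/sched/fair.c:

	static int idle_balance(struct rq *this_rq)
	{
		...
		/*
		 * Bail out early if this cpu has only been idle for a very
		 * short time, or if no rq in the root domain is marked
		 * overloaded, i.e. there is nothing worth pulling.
		 */
		if (this_rq->avg_idle < sysctl_sched_migration_cost ||
		    !this_rq->rd->overload)
			goto out;
		...
	}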

However, rq->nr_running counts the tasks of all scheduling classes on the
cpu, so doing idle balance when the remaining tasks are all non-CFS tasks
does not make any sense.

This patch fixes it by doing idle balance only when there are still CFS
tasks queued elsewhere in the rq's root domain.

Signed-off-by: Wanpeng Li <[email protected]>
---
 kernel/sched/fair.c  | 2 +-
 kernel/sched/sched.h | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a1c9267..8b7e131 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6192,7 +6192,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
                sgs->group_load += load;
                sgs->sum_nr_running += rq->cfs.h_nr_running;
 
-               if (rq->nr_running > 1)
+               if (rq->nr_running > 1 && rq->cfs.h_nr_running > 0)
                        *overload = true;
 
 #ifdef CONFIG_NUMA_BALANCING
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 31f1e4d..f7dd978 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1269,7 +1269,8 @@ static inline void add_nr_running(struct rq *rq, unsigned count)
 
        rq->nr_running = prev_nr + count;
 
-       if (prev_nr < 2 && rq->nr_running >= 2) {
+       if (prev_nr < 2 && rq->nr_running >= 2 &&
+               rq->cfs.h_nr_running > 0) {
 #ifdef CONFIG_SMP
                if (!rq->rd->overload)
                        rq->rd->overload = true;
-- 
1.9.1
