Adds blocked_load_avg to weighted_cpuload() to take recently runnable
tasks into account in load-balancing decisions. This changes the nature
of weighted_cpuload(): it may now be non-zero even when there are no
runnable tasks on the cpu rq. Hence care must be taken in the
load-balance code to use cfs_rq->runnable_load_avg or nr_running when
the current rq status is needed.
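
A minimal sketch (not part of this patch, helper name purely
illustrative) of what such a caller might look like, checking
nr_running instead of weighted_cpuload() when it needs to know whether
the cpu currently has runnable tasks:

/* Illustrative only: idleness can no longer be inferred from load */
static inline bool cpu_has_runnable_tasks(int cpu)
{
        struct rq *rq = cpu_rq(cpu);

        /* weighted_cpuload(cpu) > 0 no longer implies runnable tasks */
        return rq->nr_running > 0;
}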

This patch is highly experimental and will probably require additional
updates to the users of weighted_cpuload().

cc: Ingo Molnar <[email protected]>
cc: Peter Zijlstra <[email protected]>

Signed-off-by: Morten Rasmussen <[email protected]>
---
 kernel/sched/fair.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bd950b2..ad0ebb7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4349,7 +4349,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 /* Used instead of source_load when we know the type == 0 */
 static unsigned long weighted_cpuload(const int cpu)
 {
-       return cpu_rq(cpu)->cfs.runnable_load_avg;
+       return cpu_rq(cpu)->cfs.runnable_load_avg
+               + cpu_rq(cpu)->cfs.blocked_load_avg;
 }
 
 /*
-- 
1.9.1

