Task moves are now propagated down to the root cfs_rq, so the utilization
of each cfs_rq reflects reality and no longer needs to be estimated at init.
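
For context, a minimal userspace sketch (not the kernel implementation) of
the propagation idea: attaching an entity walks the cfs_rq hierarchy and
folds its utilization into every level up to the root, so each level always
reflects the tasks actually attached below it. The struct layouts and the
helper name below are simplified stand-ins, not the real kernel API.

	/*
	 * Conceptual sketch only: when a sched entity is attached, its
	 * utilization is accumulated at every cfs_rq level up to the
	 * root, so no initial estimate is needed.
	 */
	#include <stdio.h>

	struct cfs_rq {
		unsigned long util_avg;	/* aggregated utilization at this level */
		struct cfs_rq *parent;	/* NULL at the root */
	};

	struct sched_entity {
		unsigned long util_avg;	/* utilization contributed by this entity */
	};

	/* Attach an entity and propagate its utilization up to the root. */
	static void attach_entity_cfs_rq_sketch(struct cfs_rq *cfs_rq,
						struct sched_entity *se)
	{
		for (; cfs_rq; cfs_rq = cfs_rq->parent)
			cfs_rq->util_avg += se->util_avg;
	}

	int main(void)
	{
		struct cfs_rq root = { .util_avg = 0, .parent = NULL };
		struct cfs_rq group = { .util_avg = 0, .parent = &root };
		struct sched_entity se = { .util_avg = 256 };

		attach_entity_cfs_rq_sketch(&group, &se);
		printf("group=%lu root=%lu\n", group.util_avg, root.util_avg);
		return 0;
	}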

Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4c0a404..d0cf597 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9098,7 +9098,7 @@ void online_fair_sched_group(struct task_group *tg)
                se = tg->se[i];
 
                raw_spin_lock_irq(&rq->lock);
-               post_init_entity_util_avg(se);
+               attach_entity_cfs_rq(se);
                sync_throttle(tg, i);
                raw_spin_unlock_irq(&rq->lock);
        }
-- 
2.7.4
