Commit-ID:  4a465e3ebbc8004ce4f7f08f6022ee8315a94edf
Gitweb:     https://git.kernel.org/tip/4a465e3ebbc8004ce4f7f08f6022ee8315a94edf
Author:     Dietmar Eggemann <[email protected]>
AuthorDate: Fri, 3 Aug 2018 15:05:38 +0100
Committer:  Ingo Molnar <[email protected]>
CommitDate: Tue, 2 Oct 2018 09:45:03 +0200

sched/fair: Remove setting task's se->runnable_weight during PELT update

A CFS (SCHED_OTHER, SCHED_BATCH or SCHED_IDLE policy) task's
se->runnable_weight must always be in sync with its se->load.weight.

se->runnable_weight is set to se->load.weight when the task is
forked (init_entity_runnable_average()) or reniced (reweight_entity()).
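
For reference, the fork-time sync point looks roughly like this (a
paraphrased sketch of init_entity_runnable_average() from this kernel
era, shown for illustration only, not a verbatim quote):

	void init_entity_runnable_average(struct sched_entity *se)
	{
		struct sched_avg *sa = &se->avg;

		memset(sa, 0, sizeof(*sa));

		/*
		 * Tasks start with full load so they are seen as heavy
		 * until their PELT signal has had a chance to stabilize.
		 */
		if (entity_is_task(se))
			sa->runnable_load_avg = sa->load_avg =
				scale_load_down(se->load.weight);

		/* Establish the invariant: runnable_weight == load.weight */
		se->runnable_weight = se->load.weight;
	}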

There are two cases in set_load_weight() which, because they currently
only set se->load.weight, can leave se->load.weight out of sync with
se->runnable_weight for a CFS task:

(1) A task switches to SCHED_IDLE.

(2) A SCHED_FIFO, SCHED_RR or SCHED_DEADLINE task which has been reniced
    (during which only its static priority gets set) switches to
    SCHED_OTHER or SCHED_BATCH.

Set se->runnable_weight to se->load.weight in these two cases to prevent
this. This eliminates the need to explicitly set it to se->load.weight
during PELT updates in the CFS scheduler fastpath.
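
For illustration, case (2) can be triggered from userspace roughly as
follows (a minimal sketch; it needs the privileges required for
SCHED_FIFO and omits all error handling):

	#include <sched.h>
	#include <sys/resource.h>

	int main(void)
	{
		struct sched_param sp = { .sched_priority = 1 };

		/* Become an RT task; the CFS load weights are left alone. */
		sched_setscheduler(0, SCHED_FIFO, &sp);

		/* Renice: for an RT task only p->static_prio is updated. */
		setpriority(PRIO_PROCESS, 0, 10);

		/*
		 * Switch back to CFS: set_load_weight() recomputes
		 * se->load.weight from the new static priority, but
		 * before this patch se->runnable_weight stayed stale.
		 */
		sp.sched_priority = 0;
		sched_setscheduler(0, SCHED_OTHER, &sp);

		return 0;
	}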

Signed-off-by: Dietmar Eggemann <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Morten Rasmussen <[email protected]>
Cc: Patrick Bellasi <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Quentin Perret <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Vincent Guittot <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
---
 kernel/sched/core.c | 2 ++
 kernel/sched/pelt.c | 6 ------
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index f2caf1bae4a3..56b3c1781276 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -700,6 +700,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
        if (idle_policy(p->policy)) {
                load->weight = scale_load(WEIGHT_IDLEPRIO);
                load->inv_weight = WMULT_IDLEPRIO;
+               p->se.runnable_weight = load->weight;
                return;
        }
 
@@ -712,6 +713,7 @@ static void set_load_weight(struct task_struct *p, bool update_load)
        } else {
                load->weight = scale_load(sched_prio_to_weight[prio]);
                load->inv_weight = sched_prio_to_wmult[prio];
+               p->se.runnable_weight = load->weight;
        }
 }
 
diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
index 48a126486435..90fb5bc12ad4 100644
--- a/kernel/sched/pelt.c
+++ b/kernel/sched/pelt.c
@@ -269,9 +269,6 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load, unsigned long runna
 
 int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 {
-       if (entity_is_task(se))
-               se->runnable_weight = se->load.weight;
-
        if (___update_load_sum(now, cpu, &se->avg, 0, 0, 0)) {
                ___update_load_avg(&se->avg, se_weight(se), se_runnable(se));
                return 1;
@@ -282,9 +279,6 @@ int __update_load_avg_blocked_se(u64 now, int cpu, struct sched_entity *se)
 
 int __update_load_avg_se(u64 now, int cpu, struct cfs_rq *cfs_rq, struct sched_entity *se)
 {
-       if (entity_is_task(se))
-               se->runnable_weight = se->load.weight;
-
        if (___update_load_sum(now, cpu, &se->avg, !!se->on_rq, !!se->on_rq,
                                cfs_rq->curr == se)) {
 
