[tip:sched/urgent] sched/rt: Make update_curr_rt() more accurate

2018-02-13 Thread tip-bot for Wen Yang
Commit-ID:  a7711602c7b79950ea437178f601b52ab08ef659
Gitweb: https://git.kernel.org/tip/a7711602c7b79950ea437178f601b52ab08ef659
Author: Wen Yang 
AuthorDate: Tue, 6 Feb 2018 09:53:28 +0800
Committer:  Ingo Molnar 
CommitDate: Tue, 13 Feb 2018 11:44:41 +0100

sched/rt: Make update_curr_rt() more accurate

rq->clock_task may be updated between the two calls to
rq_clock_task() in update_curr_rt(), so delta_exec and the new
exec_start could end up being computed from different timestamps.
Calling rq_clock_task() only once makes the accounting more accurate
and slightly more efficient, following the pattern already used in
update_curr().

Signed-off-by: Wen Yang 
Signed-off-by: Peter Zijlstra (Intel) 
Reviewed-by: Jiang Biao 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: zhong.weid...@zte.com.cn
Link: http://lkml.kernel.org/r/1517882008-44552-1-git-send-email-wen.yan...@zte.com.cn
Signed-off-by: Ingo Molnar 
---
 kernel/sched/rt.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 663b235..aad49451 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -950,12 +950,13 @@ static void update_curr_rt(struct rq *rq)
 {
struct task_struct *curr = rq->curr;
struct sched_rt_entity *rt_se = &curr->rt;
-   u64 now = rq_clock_task(rq);
u64 delta_exec;
+   u64 now;
 
if (curr->sched_class != &rt_sched_class)
return;
 
+   now = rq_clock_task(rq);
delta_exec = now - curr->se.exec_start;
if (unlikely((s64)delta_exec <= 0))
return;

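For reference, this is roughly how the start of update_curr_rt() looks with the
change applied, reconstructed from the diff context above together with the
second hunk shown in the earlier posting further down this thread (a sketch,
not a verbatim copy of kernel/sched/rt.c; the elided parts are not shown in
either diff):

static void update_curr_rt(struct rq *rq)
{
	struct task_struct *curr = rq->curr;
	struct sched_rt_entity *rt_se = &curr->rt;
	u64 delta_exec;
	u64 now;

	/* Nothing to account if an RT task is not running on this rq. */
	if (curr->sched_class != &rt_sched_class)
		return;

	/* Read rq->clock_task exactly once for this update. */
	now = rq_clock_task(rq);
	delta_exec = now - curr->se.exec_start;
	if (unlikely((s64)delta_exec <= 0))
		return;

	/* ... statistics updates elided (not part of the diffs above) ... */

	curr->se.sum_exec_runtime += delta_exec;
	account_group_exec_runtime(curr, delta_exec);

	/* The same snapshot closes this interval and opens the next one. */
	curr->se.exec_start = now;
	cgroup_account_cputime(curr, delta_exec);

	sched_rt_avg_update(rq, delta_exec);
	/* ... */
}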

[tip:sched/urgent] sched/rt: Make update_curr_rt() more accurate

2018-02-06 Thread tip-bot for Wen Yang
Commit-ID:  e7ad203166fff89b1d8253faf68fbe6966bf7181
Gitweb: https://git.kernel.org/tip/e7ad203166fff89b1d8253faf68fbe6966bf7181
Author: Wen Yang 
AuthorDate: Mon, 5 Feb 2018 11:18:41 +0800
Committer:  Ingo Molnar 
CommitDate: Tue, 6 Feb 2018 10:20:34 +0100

sched/rt: Make update_curr_rt() more accurate

rq->clock_task may be updated between the two calls to
rq_clock_task() in update_curr_rt(), so delta_exec and the new
exec_start could end up being computed from different timestamps.
Calling rq_clock_task() only once makes the accounting more accurate
and slightly more efficient, following the pattern already used in
update_curr().

Signed-off-by: Wen Yang 
Signed-off-by: Peter Zijlstra (Intel) 
Reviewed-by: Jiang Biao 
Cc: Linus Torvalds 
Cc: Mike Galbraith 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: zhong.weid...@zte.com.cn
Link: http://lkml.kernel.org/r/1517800721-42092-1-git-send-email-wen.yan...@zte.com.cn
Signed-off-by: Ingo Molnar 
---
 kernel/sched/rt.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 89a086e..663b235 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -950,12 +950,13 @@ static void update_curr_rt(struct rq *rq)
 {
struct task_struct *curr = rq->curr;
struct sched_rt_entity *rt_se = &curr->rt;
+   u64 now = rq_clock_task(rq);
u64 delta_exec;
 
if (curr->sched_class != &rt_sched_class)
return;
 
-   delta_exec = rq_clock_task(rq) - curr->se.exec_start;
+   delta_exec = now - curr->se.exec_start;
if (unlikely((s64)delta_exec <= 0))
return;
 
@@ -968,7 +969,7 @@ static void update_curr_rt(struct rq *rq)
curr->se.sum_exec_runtime += delta_exec;
account_group_exec_runtime(curr, delta_exec);
 
-   curr->se.exec_start = rq_clock_task(rq);
+   curr->se.exec_start = now;
cgroup_account_cputime(curr, delta_exec);
 
sched_rt_avg_update(rq, delta_exec);
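
The underlying idea is simply to snapshot the clock once per accounting step,
so that the charged delta and the start of the next interval come from the same
timestamp and no time is lost or double-counted between updates. A minimal,
hypothetical user-space C sketch of the same pattern (the struct and function
names here are invented for illustration, not kernel APIs):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical accounting state, loosely mirroring se.exec_start /
 * se.sum_exec_runtime from the kernel code. */
struct accounting {
	uint64_t exec_start;   /* timestamp (ns) of the last update */
	uint64_t sum_runtime;  /* accumulated runtime in ns */
};

static uint64_t clock_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

static void account(struct accounting *a)
{
	/* Read the clock exactly once per update. */
	uint64_t now = clock_ns();
	uint64_t delta = now - a->exec_start;

	if ((int64_t)delta <= 0)
		return;

	a->sum_runtime += delta;
	/* Reuse the same snapshot: the next interval starts exactly where
	 * this one ended, with no gap and no overlap. */
	a->exec_start = now;
}

int main(void)
{
	struct accounting a = { .exec_start = clock_ns(), .sum_runtime = 0 };

	/* ... work happens here ... */
	account(&a);
	printf("accumulated: %llu ns\n", (unsigned long long)a.sum_runtime);
	return 0;
}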