Improve push_rt_task() by propagating information about the
double_lock_balance() call out of find_lock_lowest_rq(), thereby
reducing the number of cases where the caller has to assume
rq->lock was dropped.
When CONFIG_PREEMPT=n, unnecessary retries frequently happen in a
simple test case: four tasks of different rt priorities are
restricted to run on two CPUs.

Thanks to Steven Rostedt for his advice.

Signed-off-by: Peng Hao <[email protected]>
---
 kernel/sched/rt.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 2e2955a..be0fc43 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1754,7 +1754,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
                                     !task_on_rq_queued(task))) {
 
                                double_unlock_balance(rq, lowest_rq);
-                               lowest_rq = NULL;
+                               lowest_rq = RETRY_TASK;
                                break;
                        }
                }
@@ -1830,7 +1830,9 @@ static int push_rt_task(struct rq *rq)
 
        /* find_lock_lowest_rq locks the rq if found */
        lowest_rq = find_lock_lowest_rq(next_task, rq);
-       if (!lowest_rq) {
+       if (!lowest_rq)
+               goto out;
+       if (lowest_rq == RETRY_TASK) {
                struct task_struct *task;
                /*
                 * find_lock_lowest_rq releases rq->lock
-- 
1.8.3.1
