Re: [PATCH] sched/rt/deadline: Don't push if task's scheduling class was changed

2016-05-09 Thread Steven Rostedt
On Mon,  9 May 2016 12:11:31 +0800
Xunlei Pang  wrote:

> We got a warning below:
> WARNING: CPU: 1 PID: 2468 at kernel/sched/core.c:1161 set_task_cpu+0x1af/0x1c0
> CPU: 1 PID: 2468 Comm: bugon Not tainted 4.6.0-rc3+ #16
> Hardware name: Intel Corporation Broadwell Client
> 0086 89618374 8800897a7d50 8133dc8c
>   8800897a7d90 81089921
> 048981037f39 88016c4315c0 88016ecd6e40 
> Call Trace:
> [] dump_stack+0x63/0x87
> [] __warn+0xd1/0xf0
> [] warn_slowpath_null+0x1d/0x20
> [] set_task_cpu+0x1af/0x1c0
> [] push_dl_task.part.34+0xea/0x180
> [] push_dl_tasks+0x17/0x30
> [] __balance_callback+0x45/0x5c
> [] __sched_setscheduler+0x906/0xb90
> [] SyS_sched_setattr+0x150/0x190
> [] do_syscall_64+0x62/0x110
> [] entry_SYSCALL64_slow_path+0x25/0x25
> 
> The corresponding warning triggering code:
> WARN_ON_ONCE(p->state == TASK_RUNNING &&
>  p->sched_class == &fair_sched_class &&
>  (p->on_rq && !task_on_rq_migrating(p)))
> 
> This is because in find_lock_later_rq(), a task whose scheduling
> class was changed to the fair class is still pushed away as a deadline task.
> 
> So, in find_lock_later_rq(), check after double_lock_balance() whether
> the scheduling class of the deadline task was changed; if so, break and
> retry. Apply the same logic to RT.
> 
> Signed-off-by: Xunlei Pang 

Reviewed-by: Steven Rostedt 

-- Steve


> ---
>  kernel/sched/deadline.c | 1 +
>  kernel/sched/rt.c   | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 169d40d..57eb3e4 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1385,6 +1385,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
>!cpumask_test_cpu(later_rq->cpu,
>  &task->cpus_allowed) ||
>task_running(rq, task) ||
> +  !dl_task(task) ||
>!task_on_rq_queued(task))) {
>   double_unlock_balance(rq, later_rq);
>   later_rq = NULL;
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index ecfc83d..c10a6f5 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1720,6 +1720,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
>!cpumask_test_cpu(lowest_rq->cpu,
>  tsk_cpus_allowed(task)) ||
>task_running(rq, task) ||
> +  !rt_task(task) ||
>!task_on_rq_queued(task))) {
>  
>   double_unlock_balance(rq, lowest_rq);



[PATCH] sched/rt/deadline: Don't push if task's scheduling class was changed

2016-05-08 Thread Xunlei Pang
We got a warning below:
WARNING: CPU: 1 PID: 2468 at kernel/sched/core.c:1161 set_task_cpu+0x1af/0x1c0
CPU: 1 PID: 2468 Comm: bugon Not tainted 4.6.0-rc3+ #16
Hardware name: Intel Corporation Broadwell Client
0086 89618374 8800897a7d50 8133dc8c
  8800897a7d90 81089921
048981037f39 88016c4315c0 88016ecd6e40 
Call Trace:
[] dump_stack+0x63/0x87
[] __warn+0xd1/0xf0
[] warn_slowpath_null+0x1d/0x20
[] set_task_cpu+0x1af/0x1c0
[] push_dl_task.part.34+0xea/0x180
[] push_dl_tasks+0x17/0x30
[] __balance_callback+0x45/0x5c
[] __sched_setscheduler+0x906/0xb90
[] SyS_sched_setattr+0x150/0x190
[] do_syscall_64+0x62/0x110
[] entry_SYSCALL64_slow_path+0x25/0x25

The corresponding warning triggering code:
WARN_ON_ONCE(p->state == TASK_RUNNING &&
 p->sched_class == &fair_sched_class &&
 (p->on_rq && !task_on_rq_migrating(p)))

This is because in find_lock_later_rq(), a task whose scheduling
class was changed to the fair class is still pushed away as a deadline task.

So, in find_lock_later_rq(), check after double_lock_balance() whether
the scheduling class of the deadline task was changed; if so, break and
retry. Apply the same logic to RT.

Signed-off-by: Xunlei Pang 
---
 kernel/sched/deadline.c | 1 +
 kernel/sched/rt.c   | 1 +
 2 files changed, 2 insertions(+)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 169d40d..57eb3e4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1385,6 +1385,7 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
 !cpumask_test_cpu(later_rq->cpu,
   &task->cpus_allowed) ||
 task_running(rq, task) ||
+!dl_task(task) ||
 !task_on_rq_queued(task))) {
double_unlock_balance(rq, later_rq);
later_rq = NULL;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index ecfc83d..c10a6f5 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1720,6 +1720,7 @@ static struct rq *find_lock_lowest_rq(struct task_struct *task, struct rq *rq)
 !cpumask_test_cpu(lowest_rq->cpu,
  tsk_cpus_allowed(task)) ||
 task_running(rq, task) ||
+!rt_task(task) ||
 !task_on_rq_queued(task))) {
 
double_unlock_balance(rq, lowest_rq);
-- 
1.8.3.1