On Mon, Sep 12, 2016 at 01:37:27PM +0200, Peter Zijlstra wrote:
> So what you're saying is that migration_cpu_stop() doesn't work because
> wait_for_completion() dequeues the task.
> 
> True I suppose. Not sure I like your solution, nor your implementation
> of the solution much though.
> 
> I would much prefer an unconditional cond_resched() there, but also, I
> think we should do what __migrate_swap_task() does, and set wake_cpu.
> 
> So something like so..
> 
> ---
>  kernel/sched/core.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ddd5f48551f1..ade772aa9610 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1063,8 +1063,12 @@ static int migration_cpu_stop(void *data)
>        * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
>        * we're holding p->pi_lock.
>        */
> -     if (task_rq(p) == rq && task_on_rq_queued(p))
> -             rq = __migrate_task(rq, p, arg->dest_cpu);
> +     if (task_rq(p) == rq) {
> +             if (task_on_rq_queued(p))
> +                     rq = __migrate_task(rq, p, arg->dest_cpu);
> +             else
> +                     p->wake_cpu = arg->dest_cpu;
> +     }
>       raw_spin_unlock(&rq->lock);
>       raw_spin_unlock(&p->pi_lock);
>  

And this too; too narrow a constraint given to git diff made it go away.

---
 kernel/stop_machine.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
index ae6f41fb9cba..637798d6b554 100644
--- a/kernel/stop_machine.c
+++ b/kernel/stop_machine.c
@@ -121,6 +121,11 @@ int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
        cpu_stop_init_done(&done, 1);
        if (!cpu_stop_queue_work(cpu, &work))
                return -ENOENT;
+       /*
+        * In case @cpu == smp_processor_id() we can avoid a sleep+wakeup
+        * by doing a preemption.
+        */
+       cond_resched();
        wait_for_completion(&done.completion);
        return done.ret;
 }
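
To spell out why the cond_resched() is sufficient when
@cpu == smp_processor_id(): cpu_stop_queue_work() wakes this CPU's
stopper kthread, which runs in the stop scheduling class and therefore
outranks the caller. The queueing side looks roughly like this (again a
from-memory sketch, not verbatim):

static void __cpu_stop_queue_work(struct cpu_stopper *stopper,
				  struct cpu_stop_work *work)
{
	list_add_tail(&work->list, &stopper->works);
	/* The stopper kthread outranks us once we allow preemption. */
	wake_up_process(stopper->thread);
}

With that wakeup already pending, cond_resched() lets the stopper
preempt us on the spot, run fn(arg) and complete &done.completion, so
wait_for_completion() usually finds the completion already done and the
caller skips the dequeue/sleep plus the later wakeup.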
