On Wed, Jan 16, 2019 at 07:42:18PM +0100, Andrea Parri wrote:
> The smp_wmb() in move_queued_task() (c.f., __set_task_cpu()) pairs with
> the composition of the dependency and the ACQUIRE in task_rq_lock():
> 
>       move_queued_task()              task_rq_lock()
> 
>       [S] ->on_rq = MIGRATING         [L] rq = task_rq()
>       WMB (__set_task_cpu())          ACQUIRE (rq->lock);
>       [S] ->cpu = new_cpu             [L] ->on_rq
> 
> where "[L] rq = task_rq()" is ordered before "ACQUIRE (rq->lock)" by an
> address dependency and, in turn, "ACQUIRE (rq->lock)" is ordered before
> "[L] ->on_rq" by the ACQUIRE itself.
> 
> Use READ_ONCE() to load ->cpu in task_rq() (c.f., task_cpu()) to honour
> this address dependency between loads; also, mark the store to ->cpu in
> __set_task_cpu() by using WRITE_ONCE() in order to tell the compiler to
> not mess/tear this (synchronizing) memory access.
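
For completeness, here's a rough (and untested) litmus-test sketch of
the pairing described above.  The names are made up: "cpu" stands in
for p->cpu (turned into a pointer to the selected rq, so as to model
the address dependency), "on_rq" stands in for p->on_rq, and the
ACQUIRE on rq->lock is approximated by an smp_load_acquire() at the
dependent address.  If the argument above holds, the "exists" clause
is forbidden:

C MP+fencewmbonce+addracquire

{
	cpu=rq1;
	rq1=0;
}

P0(int **cpu, int *rq2, int *on_rq)
{
	WRITE_ONCE(*on_rq, 1);		/* [S] ->on_rq = MIGRATING */
	smp_wmb();			/* __set_task_cpu()        */
	WRITE_ONCE(*cpu, rq2);		/* [S] ->cpu = new_cpu     */
}

P1(int **cpu, int *rq2, int *on_rq)
{
	int *r0;
	int r1;
	int r2;

	r0 = READ_ONCE(*cpu);		/* [L] rq = task_rq()      */
	r1 = smp_load_acquire(r0);	/* ~ ACQUIRE(rq->lock)     */
	r2 = READ_ONCE(*on_rq);		/* [L] ->on_rq             */
}

exists (1:r0=rq2 /\ 1:r2=0)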

In the light of the recent discussion about the integration of plain
accesses in the LKMM (c.f., e.g., [1] and discussion thereof), I was
considering even further changes to this in order to "reinforce" the
above smp_wmb().  Here are two alternative approaches (a sketch of (1)
follows the list):

 1) replace this smp_wmb()+WRITE_ONCE() with an smp_store_release();

 2) or keep this smp_wmb()+WRITE_ONCE(), but use {WRITE,READ}_ONCE()
    also for the accesses to ->on_rq.
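
For concreteness, (1) would amount to something like the fragment
below (an untested sketch of __set_task_cpu() on top of this patch;
the comment wording is mine):

#ifdef CONFIG_THREAD_INFO_IN_TASK
	/*
	 * A release store orders all prior accesses (in particular, the
	 * ->on_rq update in move_queued_task()) before the ->cpu update,
	 * replacing the current smp_wmb()+WRITE_ONCE() pair.
	 */
	smp_store_release(&p->cpu, cpu);
#else
	smp_store_release(&task_thread_info(p)->cpu, cpu);
#endif
	p->wake_cpu = cpu;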

What do you think?  (maybe I'm just being too paranoid?)

Adding Will to the Cc:  ((1) should be "painless" for x86, not sure
about arm64...)

  Andrea

[1] http://lkml.kernel.org/r/20190118155638.GA24442@andrea


> 
> Signed-off-by: Andrea Parri <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: "Paul E. McKenney" <[email protected]>
> Cc: Alan Stern <[email protected]>
> ---
>  include/linux/sched.h | 4 ++--
>  kernel/sched/sched.h  | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 224666226e87b..2bb02c9635bd8 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1753,9 +1753,9 @@ static __always_inline bool need_resched(void)
>  static inline unsigned int task_cpu(const struct task_struct *p)
>  {
>  #ifdef CONFIG_THREAD_INFO_IN_TASK
> -     return p->cpu;
> +     return READ_ONCE(p->cpu);
>  #else
> -     return task_thread_info(p)->cpu;
> +     return READ_ONCE(task_thread_info(p)->cpu);
>  #endif
>  }
>  
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index d04530bf251fe..270a3333589d2 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1460,9 +1460,9 @@ static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
>        */
>       smp_wmb();
>  #ifdef CONFIG_THREAD_INFO_IN_TASK
> -     p->cpu = cpu;
> +     WRITE_ONCE(p->cpu, cpu);
>  #else
> -     task_thread_info(p)->cpu = cpu;
> +     WRITE_ONCE(task_thread_info(p)->cpu, cpu);
>  #endif
>       p->wake_cpu = cpu;
>  #endif
> -- 
> 2.17.1
> 
