On Tue, 13 Jun 2017 20:58:43 -0700
"Paul E. McKenney" <paul...@linux.vnet.ibm.com> wrote:

> And here is the part you also need to look at:

Why? We are talking about two different, unrelated variables modified
on two different CPUs. I don't see where the overlap is.

> 
> ====
> 
>  (*) Overlapping loads and stores within a particular CPU will appear to be
>      ordered within that CPU.  This means that for:
> 
>       a = READ_ONCE(*X); WRITE_ONCE(*X, b);
> 
>      the CPU will only issue the following sequence of memory operations:
> 
>       a = LOAD *X, STORE *X = b
> 
>      And for:
> 
>       WRITE_ONCE(*X, c); d = READ_ONCE(*X);
> 
>      the CPU will only issue:
> 
>       STORE *X = c, d = LOAD *X
> 
>      (Loads and stores overlap if they are targeted at overlapping pieces of
>      memory).
> 
> ====
> 
> This section needs some help -- the actual guarantee is stronger, that
> all CPUs will agree on the order of volatile same-sized aligned accesses
> to a given single location.  So if a previous READ_ONCE() sees the new
> value, any subsequent READ_ONCE() from that same variable is guaranteed
> to also see the new value (or some later value).
> 
> Does that help, or am I missing something here?
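
If I follow, that guarantee covers a single location only, i.e.
something like this (a minimal sketch; x and the r* names are mine):

	/* CPU0 */			/* CPU1 */
	WRITE_ONCE(x, 1);		r1 = READ_ONCE(x);
					r2 = READ_ONCE(x);

	/* guarantee: if r1 == 1, then r2 == 1 (or a later value) */

But my scenario has two different variables, and it's the ordering
between them that matters.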

Maybe I'm missing something. Let me rewrite what I first wrote, and
then abstract it into a simpler version:

Here's what I first wrote:

(looking at __call_rcu_core() and rcu_gp_kthread())

        CPU0                            CPU1
        ----                            ----
                                __call_rcu_core() {

                                 spin_lock(rnp_root)
                                 need_wake = __rcu_start_gp() {
                                  rcu_start_gp_advanced() {
                                   gp_flags = FLAG_INIT
                                  }
                                 }

 rcu_gp_kthread() {
   swait_event_interruptible(wq,
        gp_flags & FLAG_INIT) {
   spin_lock(q->lock)

                                *fetch wq->task_list here! *

   list_add(wq->task_list, q->task_list)
   spin_unlock(q->lock);

   *fetch old value of gp_flags here *


                                 spin_unlock(rnp_root)

                                 rcu_gp_kthread_wake() {
                                  swake_up(wq) {
                                   swait_active(wq) {
                                    list_empty(wq->task_list)

                                   } * return false *

  if (condition) * false *
    schedule();
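
(For reference, swait_event_interruptible() boils down to roughly the
following; this is a heavily simplified sketch using the names from
the trace above, the real thing is the ___swait_event() macro in
include/linux/swait.h:

	if (!(gp_flags & FLAG_INIT)) {
		raw_spin_lock(&q->lock);		/* LOCK(B) below */
		list_add(&wq->task_list, &q->task_list);
		set_current_state(TASK_INTERRUPTIBLE);
		raw_spin_unlock(&q->lock);		/* UNLOCK(B) */
		if (!(gp_flags & FLAG_INIT))	/* may still see the old value */
			schedule();
	}

which is where the LOCK(B)/UNLOCK(B) in the abstraction below come
from.)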


Now let's make it simpler. I'll even add the READ_ONCE() and
WRITE_ONCE() where applicable.


        CPU0                            CPU1
        ----                            ----
                                LOCK(A)

 LOCK(B)
                                 WRITE_ONCE(X, INIT)

                                 (the cpu may postpone writing X)

                                 (the cpu can fetch wq list here)
  list_add(wq, q)

 UNLOCK(B)

 (the cpu may fetch old value of X)

                                 (write of X happens here)

 if (READ_ONCE(X) != INIT)
   schedule();

                                UNLOCK(A)

                                 if (list_empty(wq))
                                   return;
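
Stripped to the bone (ignoring the locks, which are different lock
words and don't order CPU0 against CPU1), that's the classic
store-buffering pattern. Sketch only, with X for gp_flags and L for
the wq list:

	/* CPU0 (waiter) */		/* CPU1 (waker) */
	WRITE_ONCE(L, 1);		WRITE_ONCE(X, INIT);
	r0 = READ_ONCE(X);		r1 = READ_ONCE(L);

	/* r0 == old X && r1 == empty L is allowed unless each CPU
	   has a full barrier between its store and its load. */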

Tell me again how the READ_ONCE() and WRITE_ONCE() help in this
scenario?

Because we are using spinlocks, this won't be an issue for most
architectures. The bug happens if the fetch for the list_empty() check
leaks to before the UNLOCK(A).

If the reading/writing of the list and the reading/writing of gp_flags
get reversed in either direction by the CPU, then we have a problem.
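
(If so, the usual cure for a store-buffering pattern is a full barrier
on each side; a sketch only, not a tested patch:

	/* waiter side: between UNLOCK(B) and re-reading gp_flags */
	smp_mb();

	/* waker side: between the gp_flags write and swait_active() */
	smp_mb();

paired so that either the waiter sees the new gp_flags or the waker
sees a non-empty wq->task_list.)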

-- Steve
