From: Frederic Weisbecker <frede...@kernel.org>

A full memory barrier is necessary at the end of the expedited grace
period to order:

1) The grace period completion (as reflected by the GP sequence
   number) with all preceding accesses. This pairs with the
   rcu_seq_end() performed by the concurrent kworker.

2) The grace period completion with the subsequent post-GP update-side
   accesses. This again pairs with rcu_seq_end().

This full barrier is already provided by the final sync_exp_work_done()
test, making the subsequent explicit barrier redundant. Remove it and
update the comments accordingly.
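
As an illustration, here is a simplified sketch of the pairing (not the
exact code paths; it assumes rcu_seq_end()'s barrier placement from
kernel/rcu/rcu.h and writes the expedited sequence counter simply as
"seq"):

        /* kworker completing the expedited GP */
        ... GP update-side work ...
        rcu_seq_end():
                smp_mb();                       /* A */
                WRITE_ONCE(seq, new_seq);

        /* synchronize_rcu_expedited() caller */
        sync_exp_work_done(s):
                rcu_exp_gp_seq_done(s);         /* reads seq */
                smp_mb();                       /* B, pairs with A */
        ... post-GP update-side accesses ...

Barrier B in sync_exp_work_done() already orders the caller's subsequent
accesses after the grace period, which is what makes the explicit barrier
removed below redundant.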

Signed-off-by: Frederic Weisbecker <frede...@kernel.org>
Signed-off-by: Paul E. McKenney <paul...@kernel.org>
---
 kernel/rcu/tree_exp.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index bec24ea6777e8..721cb93b1fece 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -265,7 +265,12 @@ static bool sync_exp_work_done(unsigned long s)
 {
        if (rcu_exp_gp_seq_done(s)) {
                trace_rcu_exp_grace_period(rcu_state.name, s, TPS("done"));
-               smp_mb(); /* Ensure test happens before caller kfree(). */
+               /*
+                * Order the GP completion with both the preceding accesses
+                * and the subsequent post-GP update-side accesses. Pairs
+                * with rcu_seq_end().
+                */
+               smp_mb();
                return true;
        }
        return false;
@@ -959,7 +964,6 @@ void synchronize_rcu_expedited(void)
        rnp = rcu_get_root();
        wait_event(rnp->exp_wq[rcu_seq_ctr(s) & 0x3],
                   sync_exp_work_done(s));
-       smp_mb(); /* Work actions happen before return. */
 
        /* Let the next expedited grace period start. */
        mutex_unlock(&rcu_state.exp_mutex);
-- 
2.40.1

