----- On Oct 7, 2015, at 3:51 AM, Peter Zijlstra pet...@infradead.org wrote:
> On Tue, Oct 06, 2015 at 01:58:50PM -0700, Paul E. McKenney wrote:
>> On Tue, Oct 06, 2015 at 10:29:37PM +0200, Peter Zijlstra wrote:
>> > On Tue, Oct 06, 2015 at 09:29:21AM -0700, Paul E. McKenney wrote:
>> > > +static void __maybe_unused rcu_report_exp_rnp(struct rcu_state *rsp,
>> > > +					       struct rcu_node *rnp,
>> > > +					       bool wake)
>> > > +{
>> > > +	unsigned long flags;
>> > > +	unsigned long mask;
>> > > +
>> > > +	raw_spin_lock_irqsave(&rnp->lock, flags);
>> >
>> > Normally we require a comment with barriers, explaining the order and
>> > the pairing etc.. :-)
>> >
>> > > +	smp_mb__after_unlock_lock();
>>
>> Hmmmm...  That is not good.
>>
>> Worse yet, I am missing comments on most of the pre-existing barriers
>> of this form.
>
> Yes I noticed.. :/
>
>> The purpose is to enforce the heavy-weight grace-period memory-ordering
>> guarantees documented in the synchronize_sched() header comment and
>> elsewhere.
>
>> They pair with anything you might use to check for violation of these
>> guarantees, or, similarly, any ordering that you might use when relying
>> on these guarantees.
>
> I'm sure you know what that means, but I've no clue ;-) That is, I
> wouldn't know where to start looking in the RCU implementation to verify
> the barrier is either needed or sufficient. Unless you mean _everywhere_
> :-)

One example is the new membarrier system call. It relies on
synchronize_sched() to enforce the guarantee documented in
kernel/membarrier.c:

 * All memory accesses performed in program order from each targeted thread
 * are guaranteed to be ordered with respect to sys_membarrier(). If we use
 * the semantic "barrier()" to represent a compiler barrier forcing memory
 * accesses to be performed in program order across the barrier, and
 * smp_mb() to represent explicit memory barriers forcing full memory
 * ordering across the barrier, we have the following ordering table for
 * each pair of barrier(), sys_membarrier() and smp_mb():
 *
 * The pair ordering is detailed as (O: ordered, X: not ordered):
 *
 *                        barrier()   smp_mb() sys_membarrier()
 *        barrier()          X           X            O
 *        smp_mb()           X           O            O
 *        sys_membarrier()   O           O            O

And in include/uapi/linux/membarrier.h:

 * @MEMBARRIER_CMD_SHARED:  Execute a memory barrier on all running threads.
 *                          Upon return from system call, the caller thread
 *                          is ensured that all running threads have passed
 *                          through a state where all memory accesses to
 *                          user-space addresses match program order between
 *                          entry to and return from the system call
 *                          (non-running threads are de facto in such a
 *                          state). This covers threads from all processes
 *                          running on the system. This command returns 0.
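As a purely illustrative sketch (my own, not part of the membarrier patch
nor of Paul's series), the smallest useful userspace caller would look
something like this, assuming 4.3-era headers that provide __NR_membarrier
and the MEMBARRIER_CMD_* enum quoted above:

#define _GNU_SOURCE
#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* No glibc wrapper yet, so go through syscall(2). */
static int membarrier(int cmd, int flags)
{
	return syscall(__NR_membarrier, cmd, flags);
}

int main(void)
{
	int cmds = membarrier(MEMBARRIER_CMD_QUERY, 0);

	if (cmds < 0 || !(cmds & MEMBARRIER_CMD_SHARED)) {
		fprintf(stderr, "MEMBARRIER_CMD_SHARED not supported\n");
		return 1;
	}

	/*
	 * When this returns, every thread that was running has passed
	 * through a full memory barrier; the kernel side is essentially
	 * a call to synchronize_sched(), so the grace-period ordering
	 * guarantee discussed above becomes visible to userspace here.
	 */
	if (membarrier(MEMBARRIER_CMD_SHARED, 0))
		return 1;

	return 0;
}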
I hope this sheds some light on a userspace-facing interface to
synchronize_sched(), and clarifies its expected semantics a bit.

Thanks,

Mathieu

>
>> I could add something like "/* Enforce GP memory ordering. */"
>>
>> Or perhaps "/* See synchronize_sched() header. */"
>>
>> I do not propose reproducing the synchronize_sched() header on each
>> of these.  That would be verbose, even for me! ;-)
>>
>> Other thoughts?
>
> Well, this is an UNLOCK+LOCK on non-matching lock variables upgrade to
> full barrier thing, right?
>
> To me it's not clear which UNLOCK we even match here. I've just read the
> sync_sched() header, but that doesn't help me either, so referring to
> it isn't all that useful.
>
> In any case, I don't want to make too big a fuss here, but I just
> stumbled over a lot of unannotated barriers and figured I ought to say
> something about it.

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com