Re: Adding plain accesses and detecting data races in the LKMM

2019-04-21 Thread Paul E. McKenney
On Sat, Apr 20, 2019 at 11:50:14PM +0900, Akira Yokosawa wrote: > On Fri, 19 Apr 2019 11:06:41 -0700, Paul E. McKenney wrote: > > On Sat, Apr 20, 2019 at 12:06:58AM +0900, Akira Yokosawa wrote: > >> Hi Paul, > >> > [...] > > > >>> + (1) The compiler can reorder the load from a to precede the

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-20 Thread Akira Yokosawa
On Fri, 19 Apr 2019 11:06:41 -0700, Paul E. McKenney wrote: > On Sat, Apr 20, 2019 at 12:06:58AM +0900, Akira Yokosawa wrote: >> Hi Paul, >> [...] > >>> + (1) The compiler can reorder the load from a to precede the >>> + atomic_dec(), (2) Because x86 smp_mb__before_atomic() is only a >>>

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-19 Thread Akira Yokosawa
Hi Paul, Please find inline comments below. On Fri, 19 Apr 2019 05:47:20 -0700, Paul E. McKenney wrote: > On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote: >>> Are you saying that on x86, atomic_inc() acts as a full memory barrier >>> but not as a compiler barrier, and vice versa

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-19 Thread Andrea Parri
> > + (1) The compiler can reorder the load from a to precede the > > + atomic_dec(), (2) Because x86 smp_mb__before_atomic() is only a > > + compiler barrier, the CPU can reorder the preceding store to > > + obj->dead with the later load from a. > > + > > + This could be
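
The example being quoted can be reassembled roughly as follows; the struct
layout, the field names, and the function are illustrative reconstructions,
and only the access pattern and the two reorderings come from the thread:

#include <linux/atomic.h>

struct object {
	int dead;
	atomic_t refcount;
};

int put_object(struct object *obj, int *a)
{
	int r;

	obj->dead = 1;			/* plain store */
	smp_mb__before_atomic();	/* on x86: barrier(), a compiler barrier only */
	atomic_dec(&obj->refcount);	/* on x86: LOCK; decl, but no "memory" clobber */
	r = *a;				/* plain load */

	/*
	 * (1) The compiler may hoist the load from a above the atomic_dec();
	 * (2) the CPU may then reorder the store to obj->dead with that load,
	 * because only a compiler barrier separates them.
	 */
	return r;
}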

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-19 Thread Paul E. McKenney
On Sat, Apr 20, 2019 at 12:06:58AM +0900, Akira Yokosawa wrote: > Hi Paul, > > Please find inline comments below. > > On Fri, 19 Apr 2019 05:47:20 -0700, Paul E. McKenney wrote: > > On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote: > >>> Are you saying that on x86, atomic_inc() acts

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-19 Thread Paul E. McKenney
On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote: > > Are you saying that on x86, atomic_inc() acts as a full memory barrier > > but not as a compiler barrier, and vice versa for > > smp_mb__after_atomic()? Or that neither atomic_inc() nor > > smp_mb__after_atomic() implements a

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-19 Thread Alan Stern
On Fri, 19 Apr 2019, Paul E. McKenney wrote: > On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote: > > > Are you saying that on x86, atomic_inc() acts as a full memory barrier > > > but not as a compiler barrier, and vice versa for > > > smp_mb__after_atomic()? Or that neither

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-19 Thread Paul E. McKenney
On Fri, Apr 19, 2019 at 10:34:06AM -0400, Alan Stern wrote: > On Fri, 19 Apr 2019, Paul E. McKenney wrote: > > > On Fri, Apr 19, 2019 at 02:53:02AM +0200, Andrea Parri wrote: > > > > Are you saying that on x86, atomic_inc() acts as a full memory barrier > > > > but not as a compiler barrier, and

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-18 Thread Andrea Parri
> Are you saying that on x86, atomic_inc() acts as a full memory barrier > but not as a compiler barrier, and vice versa for > smp_mb__after_atomic()? Or that neither atomic_inc() nor > smp_mb__after_atomic() implements a full memory barrier? I'd say the former; AFAICT, these boil down to:
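
The x86 definitions that "these boil down to" look roughly like the following
(paraphrased from arch/x86/include/asm/atomic.h and barrier.h of that era, not
quoted verbatim): atomic_inc() is a full memory barrier on the CPU but not a
compiler barrier, while the __smp_mb__{before,after}_atomic() helpers are
compiler barriers only.

/* Paraphrase of the x86 code under discussion, not a verbatim quote. */
static __always_inline void arch_atomic_inc(atomic_t *v)
{
	asm volatile(LOCK_PREFIX "incl %0"	/* LOCK prefix: full barrier on the CPU */
		     : "+m" (v->counter));	/* no "memory" clobber: not a compiler barrier */
}

#define __smp_mb__before_atomic()	barrier()	/* compiler barrier only */
#define __smp_mb__after_atomic()	barrier()	/* compiler barrier only */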

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-18 Thread Alan Stern
On Thu, 18 Apr 2019, Paul E. McKenney wrote: > > Are you saying that on x86, atomic_inc() acts as a full memory barrier > > but not as a compiler barrier, and vice versa for > > smp_mb__after_atomic()? Or that neither atomic_inc() nor > > smp_mb__after_atomic() implements a full memory

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-18 Thread Paul E. McKenney
On Thu, Apr 18, 2019 at 01:44:36PM -0400, Alan Stern wrote: > On Thu, 18 Apr 2019, Andrea Parri wrote: > > > > Another question is "should the kernel permit smp_mb__{before,after}*() > > > anywhere other than immediately before or after the primitive being > > > strengthened?" > > > > Mmh, I do

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-18 Thread Alan Stern
On Thu, 18 Apr 2019, Andrea Parri wrote: > > Another question is "should the kernel permit smp_mb__{before,after}*() > > anywhere other than immediately before or after the primitive being > > strengthened?" > > Mmh, I do think that keeping these barriers "immediately before or after > the

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-18 Thread Andrea Parri
> Another question is "should the kernel permit smp_mb__{before,after}*() > anywhere other than immediately before or after the primitive being > strengthened?" Mmh, I do think that keeping these barriers "immediately before or after the primitive being strengthened" is a good practice
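
A minimal sketch of the two placements being contrasted (the variables and the
unrelated access in the second function are hypothetical):

#include <linux/atomic.h>

static atomic_t v;
static int x, y;

static void barrier_adjacent(void)
{
	/* Recommended: the barrier immediately precedes the atomic it
	 * strengthens, so the intended pairing is obvious. */
	WRITE_ONCE(x, 1);
	smp_mb__before_atomic();
	atomic_inc(&v);
}

static void barrier_separated(void)
{
	/* The construction the question asks about: an unrelated access
	 * sits between the barrier and the primitive it strengthens. */
	WRITE_ONCE(x, 1);
	smp_mb__before_atomic();
	WRITE_ONCE(y, 1);
	atomic_inc(&v);
}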

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-15 Thread Paul E. McKenney
On Mon, Apr 15, 2019 at 09:50:03AM -0400, Alan Stern wrote: > On Mon, 15 Apr 2019, Paul E. McKenney wrote: > > > Another question is "should the kernel permit smp_mb__{before,after}*() > > anywhere other than immediately before or after the primitive being > > strengthened?" > > > > Thoughts? >

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-15 Thread Alan Stern
On Mon, 15 Apr 2019, Paul E. McKenney wrote: > Another question is "should the kernel permit smp_mb__{before,after}*() > anywhere other than immediately before or after the primitive being > strengthened?" > > Thoughts? How would the kernel forbid other constructions? Alan

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-15 Thread Paul E. McKenney
On Sat, Apr 13, 2019 at 11:39:38PM +0200, Andrea Parri wrote: > On Tue, Apr 09, 2019 at 08:01:32AM -0700, Paul E. McKenney wrote: > > On Tue, Apr 09, 2019 at 03:36:18AM +0200, Andrea Parri wrote: > > > > > The formula was more along the line of "do not assume either of these > > > > > cases to

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-13 Thread Andrea Parri
On Tue, Apr 09, 2019 at 08:01:32AM -0700, Paul E. McKenney wrote: > On Tue, Apr 09, 2019 at 03:36:18AM +0200, Andrea Parri wrote: > > > > The formula was more along the line of "do not assume either of these > > > > cases to hold; use barrier() if you need an unconditional barrier..." > > > >


Re: Adding plain accesses and detecting data races in the LKMM

2019-04-09 Thread Paul E. McKenney
On Tue, Apr 09, 2019 at 03:36:18AM +0200, Andrea Parri wrote: > > > The formula was more along the line of "do not assume either of these > > > cases to hold; use barrier() if you need an unconditional barrier..." > > > AFAICT, all current implementations of smp_mb__{before,after}_atomic() > > >

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-08 Thread Andrea Parri
> > The formula was more along the line of "do not assume either of these > > cases to hold; use barrier() if you need an unconditional barrier..." > > AFAICT, all current implementations of smp_mb__{before,after}_atomic() > > provide a compiler barrier with either barrier() or "memory" clobber.

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-08 Thread Alan Stern
On Mon, 8 Apr 2019, Andrea Parri wrote: > > > > > I'd have: > > > > > > > > > > *x = 1; /* A */ > > > > > smp_mb__before_atomic(); > > > > > r0 = xchg_relaxed(x, 2); /* B (read or write part) */ > > > > > > > > > > => (A ->barrier B) > > > > > > > > Perhaps so. It

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-07 Thread Andrea Parri
> > > > I'd have: > > > > > > > > *x = 1; /* A */ > > > > smp_mb__before_atomic(); > > > > r0 = xchg_relaxed(x, 2); /* B (read or write part) */ > > > > > > > > => (A ->barrier B) > > > > > > Perhaps so. It wasn't clear initially how these should be treated, so

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-06 Thread Alan Stern
On Sat, 6 Apr 2019, Andrea Parri wrote: > > > I'd have: > > > > > > *x = 1; /* A */ > > > smp_mb__before_atomic(); > > > r0 = xchg_relaxed(x, 2); /* B (read or write part) */ > > > > > > => (A ->barrier B) > > > > Perhaps so. It wasn't clear initially how these should be treated, so >

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-05 Thread Andrea Parri
> > I'd have: > > > > *x = 1; /* A */ > > smp_mb__before_atomic(); > > r0 = xchg_relaxed(x, 2); /* B (read or write part) */ > > > > => (A ->barrier B) > > Perhaps so. It wasn't clear initially how these should be treated, so > I just ignored them. For example, what should
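
Dressed up as an LKMM litmus test (this needs herd7 plus the plain-access
support proposed in this thread), the snippet might look as below.  The second
process, the extra variable y, the test name, and the exists clause are
illustrative additions; whether A ->barrier B holds, and therefore whether the
listed outcome is forbidden and the plain store is race-free, is exactly the
question under discussion.

C A-barrier-B-sketch

(*
 * Sketch only: P0 adapts the quoted snippet, with the plain store (A) moved
 * to a second variable y so that P1 can observe its ordering against the
 * xchg_relaxed() (B).
 *)

{}

P0(int *x, int *y)
{
	int r0;

	*y = 1;
	smp_mb__before_atomic();
	r0 = xchg_relaxed(x, 2);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*x);
	smp_rmb();
	r2 = READ_ONCE(*y);
}

exists (1:r1=2 /\ 1:r2=0)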

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-02 Thread Alan Stern
On Tue, 2 Apr 2019, Andrea Parri wrote: > On Tue, Mar 19, 2019 at 03:38:19PM -0400, Alan Stern wrote: > > LKMM maintainers and other interested parties: > > > > Here is my latest proposal for extending the Linux Kernel Memory Model > > to handle plain accesses and data races. The document

Re: Adding plain accesses and detecting data races in the LKMM

2019-04-02 Thread Andrea Parri
On Tue, Mar 19, 2019 at 03:38:19PM -0400, Alan Stern wrote: > LKMM maintainers and other interested parties: > > Here is my latest proposal for extending the Linux Kernel Memory Model > to handle plain accesses and data races. The document below explains > the rationale behind the proposal,

Adding plain accesses and detecting data races in the LKMM

2019-03-19 Thread Alan Stern
LKMM maintainers and other interested parties: Here is my latest proposal for extending the Linux Kernel Memory Model to handle plain accesses and data races. The document below explains the rationale behind the proposal, and a patch (based on the dev branch of Paul's linux-rcu tree) is
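
As a rough illustration of what the proposal enables (a toy litmus test, not
one taken from the document): with the extension, herd7 both evaluates the
exists clause and reports whether any execution contains a data race between
the plain accesses to buf.  The intent is that the release/acquire handshake
below keeps those accesses race-free; removing the handshake should make the
tool flag a data race.

C MP+plainbuf+relacq

(*
 * Toy example: buf is accessed with plain loads and stores, flag with
 * smp_store_release()/smp_load_acquire().
 *)

{}

P0(int *buf, int *flag)
{
	*buf = 1;
	smp_store_release(flag, 1);
}

P1(int *buf, int *flag)
{
	int r0;
	int r1;

	r1 = 0;
	r0 = smp_load_acquire(flag);
	if (r0)
		r1 = *buf;
}

exists (1:r0=1 /\ 1:r1=0)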