Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-13 Thread Luc Maranget
> On Fri, Mar 09, 2018 at 04:21:37PM -0800, Daniel Lustig wrote:
> > On 3/9/2018 2:57 PM, Palmer Dabbelt wrote:
> > > On Fri, 09 Mar 2018 13:30:08 PST (-0800), parri.and...@gmail.com wrote:
> > >> On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:
> > >>> On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:
> > >>
> > >> [...]
> > >>
> > >>> >This proposal relies on the generic definition,
> > >>> >
> > >>> >   include/linux/atomic.h ,
> > >>> >
> > >>> >and on the
> > >>> >
> > >>> >   __atomic_op_acquire()
> > >>> >   __atomic_op_release()
> > >>> >
> > >>> >above to build the acquire/release atomics (except for the 
> > >>> >xchg,cmpxchg,
> > >>> >where the ACQUIRE_BARRIER is inserted conditionally/on success).
> > >>>
> > >>> I thought we wanted to use the AQ and RL bits for AMOs, just not for 
> > >>> LR/SC
> > >>> sequences.  IIRC the AMOs are safe with the current memory model, but I 
> > >>> might
> > >>> just have some version mismatches in my head.
> > >>
> > >> AMO.aqrl are "safe" w.r.t. the LKMM (as they provide "full-ordering"); 
> > >> OTOH,
> > >> AMO.aq and AMO.rl present weaknesses that LKMM (and some kernel 
> > >> developers)
> > >> do not "expect".  I was probing this issue in:
> > >>
> > >>   https://marc.info/?l=linux-kernel&m=151930201102853&w=2
> > >>
> > >> (cf., e.g., test "RISCV-unlock-lock-read-ordering" from that post).
> > >>
> > >> Quoting from the commit message of my patch 1/2:
> > >>
> > >>   "Referring to the "unlock-lock-read-ordering" test reported below,
> > >>    Daniel wrote:
> > >>
> > >>  I think an RCpc interpretation of .aq and .rl would in fact
> > >>  allow the two normal loads in P1 to be reordered [...]
> > >>
> > >>  [...]
> > >>
> > >>  Likewise even if the unlock()/lock() is between two stores.
> > >>  A control dependency might originate from the load part of
> > >>  the amoswap.w.aq, but there still would have to be something
> > >>  to ensure that this load part in fact performs after the store
> > >>  part of the amoswap.w.rl performs globally, and that's not
> > >>  automatic under RCpc.
> > >>
> > >>    Simulation of the RISC-V memory consistency model confirmed this
> > >>    expectation."
> > >>
> > >> I have just (re)checked these observations against the latest 
> > >> specification,
> > >> and my results _confirmed_ these verdicts.
> > > 
> > > Thanks, I must have just gotten confused about a draft spec or something.
> > > I'm pulling these on top of your other memory model related patch.  I've
> > > renamed the branch "next-mm" to be a bit more descriptive.
> > 
> > (Sorry for being out of the loop this week, I was out to deal with
> > a family matter.)
> > 
> > I assume you're using the herd model?  Luc's doing a great job with
> > that, but even so, nothing is officially confirmed until we ratify
> > the model.  In other words, the herd model may end up changing too.
> > If something is broken on our end, there's still time to fix it.
> > 
> > Regarding AMOs, let me copy from something I wrote in a previous
> > offline conversation:
> > 
> > > it seems to us that pairing a store-release of "amoswap.rl" with
> > > a "ld; fence r,rw" doesn't actually give us the RC semantics we've
> > > been discussing for LKMM.  For example:
> > > 
> > > (a) sd t0,0(s0)
> > > (b) amoswap.d.rl x0,t1,0(s1)
> > > ...
> > > (c) ld a0,0(s1)
> > > (d) fence r,rw
> > > (e) sd t2,0(s2)
> > > 
> > > There, we won't get (a) ordered before (e) regardless of whether
> > > (b) is RCpc or RCsc.  Do you agree?
> > 
> > At the moment, only the load part of (b) is in the predecessor
> > set of (d), but the store part of (b) is not.  Likewise, the
> > .rl annotation applies only to the store part of (b), not the
> > load part.
> > 
> > This gets back to a question Linus asked last week about
> > whether the AMO is a single unit or whether it can be
> 
> You mean AMO or RmW atomic operations?
> 
> > considered to split into a load and a store part (which still
> > perform atomically).  For RISC-V, for right now at least, the
> > answer is the latter.  Is it still the latter for Linux too?
> > 
> 
> I think for RmW atomics it's still the latter: the acquire or release
> applies to the load part or the store part of the RmW.  For example, ppc
> uses lwsync as its acquire/release barrier, and lwsync cannot order
> write->read.

You are correct: LKMM represents read-modify-write constructs with two
events, one read and one write.  These are connected by the special "rmw"
relation.



> 
> Regards,
> Boqun

Regards,

--Luc


> 
> > https://lkml.org/lkml/2018/2/26/606
> > 
> > > So I think we'll need to make sure we pair .rl with .aq, or that
> > > we pair fence-based mappings with fence-based mappings, in order
> > > to make the acquire/release operations work.
> > 
> > This assumes we'll say that .aq and .rl are RCsc, not RCpc.
> > But in this case, I think .aq and .rl could still be safe to use,
> > as long 

Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-12 Thread Boqun Feng
On Fri, Mar 09, 2018 at 04:21:37PM -0800, Daniel Lustig wrote:
> On 3/9/2018 2:57 PM, Palmer Dabbelt wrote:
> > On Fri, 09 Mar 2018 13:30:08 PST (-0800), parri.and...@gmail.com wrote:
> >> On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:
> >>> On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:
> >>
> >> [...]
> >>
> >>> >This proposal relies on the generic definition,
> >>> >
> >>> >   include/linux/atomic.h ,
> >>> >
> >>> >and on the
> >>> >
> >>> >   __atomic_op_acquire()
> >>> >   __atomic_op_release()
> >>> >
> >>> >above to build the acquire/release atomics (except for the xchg,cmpxchg,
> >>> >where the ACQUIRE_BARRIER is inserted conditionally/on success).
> >>>
> >>> I thought we wanted to use the AQ and RL bits for AMOs, just not for LR/SC
> >>> sequences.  IIRC the AMOs are safe with the current memory model, but I 
> >>> might
> >>> just have some version mismatches in my head.
> >>
> >> AMO.aqrl are "safe" w.r.t. the LKMM (as they provide "full-ordering"); 
> >> OTOH,
> >> AMO.aq and AMO.rl present weaknesses that LKMM (and some kernel developers)
> >> do not "expect".  I was probing this issue in:
> >>
> >>   https://marc.info/?l=linux-kernel&m=151930201102853&w=2
> >>
> >> (cf., e.g., test "RISCV-unlock-lock-read-ordering" from that post).
> >>
> >> Quoting from the commit message of my patch 1/2:
> >>
> >>   "Referring to the "unlock-lock-read-ordering" test reported below,
> >>    Daniel wrote:
> >>
> >>  I think an RCpc interpretation of .aq and .rl would in fact
> >>  allow the two normal loads in P1 to be reordered [...]
> >>
> >>  [...]
> >>
> >>  Likewise even if the unlock()/lock() is between two stores.
> >>  A control dependency might originate from the load part of
> >>  the amoswap.w.aq, but there still would have to be something
> >>  to ensure that this load part in fact performs after the store
> >>  part of the amoswap.w.rl performs globally, and that's not
> >>  automatic under RCpc.
> >>
> >>    Simulation of the RISC-V memory consistency model confirmed this
> >>    expectation."
> >>
> >> I have just (re)checked these observations against the latest 
> >> specification,
> >> and my results _confirmed_ these verdicts.
> > 
> > Thanks, I must have just gotten confused about a draft spec or something.
> > I'm pulling these on top of your other memory model related patch.  I've
> > renamed the branch "next-mm" to be a bit more descriptive.
> 
> (Sorry for being out of the loop this week, I was out to deal with
> a family matter.)
> 
> I assume you're using the herd model?  Luc's doing a great job with
> that, but even so, nothing is officially confirmed until we ratify
> the model.  In other words, the herd model may end up changing too.
> If something is broken on our end, there's still time to fix it.
> 
> Regarding AMOs, let me copy from something I wrote in a previous
> offline conversation:
> 
> > it seems to us that pairing a store-release of "amoswap.rl" with
> > a "ld; fence r,rw" doesn't actually give us the RC semantics we've
> > been discussing for LKMM.  For example:
> > 
> > (a) sd t0,0(s0)
> > (b) amoswap.d.rl x0,t1,0(s1)
> > ...
> > (c) ld a0,0(s1)
> > (d) fence r,rw
> > (e) sd t2,0(s2)
> > 
> > There, we won't get (a) ordered before (e) regardless of whether
> > (b) is RCpc or RCsc.  Do you agree?
> 
> At the moment, only the load part of (b) is in the predecessor
> set of (d), but the store part of (b) is not.  Likewise, the
> .rl annotation applies only to the store part of (b), not the
> load part.
> 
> This gets back to a question Linus asked last week about
> whether the AMO is a single unit or whether it can be

You mean AMO or RmW atomic operations?

> considered to split into a load and a store part (which still
> perform atomically).  For RISC-V, for right now at least, the
> answer is the latter.  Is it still the latter for Linux too?
> 

I think for RmW atomics it's still the latter: the acquire or release
applies to the load part or the store part of the RmW.  For example, ppc
uses lwsync as its acquire/release barrier, and lwsync cannot order
write->read.

Regards,
Boqun

> https://lkml.org/lkml/2018/2/26/606
> 
> > So I think we'll need to make sure we pair .rl with .aq, or that
> > we pair fence-based mappings with fence-based mappings, in order
> > to make the acquire/release operations work.
> 
> This assumes we'll say that .aq and .rl are RCsc, not RCpc.
> But in this case, I think .aq and .rl could still be safe to use,
> as long as you don't ever try to mix in a fence-based mapping
> on the same data structure like in the example above.  That
> might be important if we want to find the most compact legal
> implementation, and hence do want to use .aq and .rl after all.
> 
> > And since we don't have native "ld.aq" today in RISC-V, that
> > would mean smp_store_release would have to remain implemented
> > as "fence rw,w; s{w|d}", rather than "amoswap.{w|d}.rl", for
> 

Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-10 Thread Andrea Parri
On Fri, Mar 09, 2018 at 04:21:37PM -0800, Daniel Lustig wrote:
> On 3/9/2018 2:57 PM, Palmer Dabbelt wrote:
> > On Fri, 09 Mar 2018 13:30:08 PST (-0800), parri.and...@gmail.com wrote:
> >> On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:
> >>> On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:
> >>
> >> [...]
> >>
> >>> >This proposal relies on the generic definition,
> >>> >
> >>> >   include/linux/atomic.h ,
> >>> >
> >>> >and on the
> >>> >
> >>> >   __atomic_op_acquire()
> >>> >   __atomic_op_release()
> >>> >
> >>> >above to build the acquire/release atomics (except for the xchg,cmpxchg,
> >>> >where the ACQUIRE_BARRIER is inserted conditionally/on success).
> >>>
> >>> I thought we wanted to use the AQ and RL bits for AMOs, just not for LR/SC
> >>> sequences.  IIRC the AMOs are safe with the current memory model, but I 
> >>> might
> >>> just have some version mismatches in my head.
> >>
> >> AMO.aqrl are "safe" w.r.t. the LKMM (as they provide "full-ordering"); 
> >> OTOH,
> >> AMO.aq and AMO.rl present weaknesses that LKMM (and some kernel developers)
> >> do not "expect".  I was probing this issue in:
> >>
> >>   https://marc.info/?l=linux-kernel&m=151930201102853&w=2
> >>
> >> (cf., e.g., test "RISCV-unlock-lock-read-ordering" from that post).
> >>
> >> Quoting from the commit message of my patch 1/2:
> >>
> >>   "Referring to the "unlock-lock-read-ordering" test reported below,
> >>    Daniel wrote:
> >>
> >>  I think an RCpc interpretation of .aq and .rl would in fact
> >>  allow the two normal loads in P1 to be reordered [...]
> >>
> >>  [...]
> >>
> >>  Likewise even if the unlock()/lock() is between two stores.
> >>  A control dependency might originate from the load part of
> >>  the amoswap.w.aq, but there still would have to be something
> >>  to ensure that this load part in fact performs after the store
> >>  part of the amoswap.w.rl performs globally, and that's not
> >>  automatic under RCpc.
> >>
> >>    Simulation of the RISC-V memory consistency model confirmed this
> >>    expectation."
> >>
> >> I have just (re)checked these observations against the latest 
> >> specification,
> >> and my results _confirmed_ these verdicts.
> > 
> > Thanks, I must have just gotten confused about a draft spec or something.
> > I'm pulling these on top of your other memory model related patch.  I've
> > renamed the branch "next-mm" to be a bit more descriptive.
> 
> (Sorry for being out of the loop this week, I was out to deal with
> a family matter.)
> 
> I assume you're using the herd model?  Luc's doing a great job with
> that, but even so, nothing is officially confirmed until we ratify
> the model.  In other words, the herd model may end up changing too.
> If something is broken on our end, there's still time to fix it.

Needless to say :) if you look back at the LKMM from 2 years ago, or as
presented last year in LWN, you won't recognize it as such ;-)  Specs do
change/evolve, and so do implementations: if ratification of the RISC-V
memory model (or of the LKMM) enables optimizations/modifications to
these implementations (while preserving correctness), I'll be glad to do
them, or to help with them.

To answer your question: I used both the herd-based model from INRIA and
the operational model from the group in Cambridge (these are referred to
in the currently available RISC-V spec.).


> 
> Regarding AMOs, let me copy from something I wrote in a previous
> offline conversation:
> 
> > it seems to us that pairing a store-release of "amoswap.rl" with
> > a "ld; fence r,rw" doesn't actually give us the RC semantics we've
> > been discussing for LKMM.  For example:
> > 
> > (a) sd t0,0(s0)
> > (b) amoswap.d.rl x0,t1,0(s1)
> > ...
> > (c) ld a0,0(s1)
> > (d) fence r,rw
> > (e) sd t2,0(s2)
> > 
> > There, we won't get (a) ordered before (e) regardless of whether
> > (b) is RCpc or RCsc.  Do you agree?
> 
> At the moment, only the load part of (b) is in the predecessor
> set of (d), but the store part of (b) is not.  Likewise, the
> .rl annotation applies only to the store part of (b), not the
> load part.

Indeed.  (If you want, this is one reason why, with these patches, ".rl"
and "fence r,rw" are never "mixed" as in your snippet above unless there
is also a "fence rw,rw" in between.)


> 
> This gets back to a question Linus asked last week about
> whether the AMO is a single unit or whether it can be
> considered to split into a load and a store part (which still
> perform atomically).  For RISC-V, for right now at least, the
> answer is the latter.  Is it still the latter for Linux too?

Yes: (successful) atomic RMWs all generate (at least) one load event and
one store event in LKMM.  (This same approach is taken by other hardware
models as well...)


> 
> https://lkml.org/lkml/2018/2/26/606
> 
> > So I think we'll need to make sure we pair .rl with .aq, or that
> > we pair fence-based mappings with fence-based 

Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Daniel Lustig
On 3/9/2018 2:57 PM, Palmer Dabbelt wrote:
> On Fri, 09 Mar 2018 13:30:08 PST (-0800), parri.and...@gmail.com wrote:
>> On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:
>>> On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:
>>
>> [...]
>>
>>> >This proposal relies on the generic definition,
>>> >
>>> >   include/linux/atomic.h ,
>>> >
>>> >and on the
>>> >
>>> >   __atomic_op_acquire()
>>> >   __atomic_op_release()
>>> >
>>> >above to build the acquire/release atomics (except for the xchg,cmpxchg,
>>> >where the ACQUIRE_BARRIER is inserted conditionally/on success).
>>>
>>> I thought we wanted to use the AQ and RL bits for AMOs, just not for LR/SC
>>> sequences.  IIRC the AMOs are safe with the current memory model, but I 
>>> might
>>> just have some version mismatches in my head.
>>
>> AMO.aqrl are "safe" w.r.t. the LKMM (as they provide "full-ordering"); OTOH,
>> AMO.aq and AMO.rl present weaknesses that LKMM (and some kernel developers)
>> do not "expect".  I was probing this issue in:
>>
>>   https://marc.info/?l=linux-kernel&m=151930201102853&w=2
>>
>> (cf., e.g., test "RISCV-unlock-lock-read-ordering" from that post).
>>
>> Quoting from the commit message of my patch 1/2:
>>
>>   "Referring to the "unlock-lock-read-ordering" test reported below,
>>    Daniel wrote:
>>
>>  I think an RCpc interpretation of .aq and .rl would in fact
>>  allow the two normal loads in P1 to be reordered [...]
>>
>>  [...]
>>
>>  Likewise even if the unlock()/lock() is between two stores.
>>  A control dependency might originate from the load part of
>>  the amoswap.w.aq, but there still would have to be something
>>  to ensure that this load part in fact performs after the store
>>  part of the amoswap.w.rl performs globally, and that's not
>>  automatic under RCpc.
>>
>>    Simulation of the RISC-V memory consistency model confirmed this
>>    expectation."
>>
>> I have just (re)checked these observations against the latest specification,
>> and my results _confirmed_ these verdicts.
> 
> Thanks, I must have just gotten confused about a draft spec or something.  I'm
> pulling these on top of your other memory model related patch.  I've renamed
> the branch "next-mm" to be a bit more descriptive.

(Sorry for being out of the loop this week, I was out to deal with
a family matter.)

I assume you're using the herd model?  Luc's doing a great job with
that, but even so, nothing is officially confirmed until we ratify
the model.  In other words, the herd model may end up changing too.
If something is broken on our end, there's still time to fix it.

Regarding AMOs, let me copy from something I wrote in a previous
offline conversation:

> it seems to us that pairing a store-release of "amoswap.rl" with
> a "ld; fence r,rw" doesn't actually give us the RC semantics we've
> been discussing for LKMM.  For example:
> 
> (a) sd t0,0(s0)
> (b) amoswap.d.rl x0,t1,0(s1)
> ...
> (c) ld a0,0(s1)
> (d) fence r,rw
> (e) sd t2,0(s2)
> 
> There, we won't get (a) ordered before (e) regardless of whether
> (b) is RCpc or RCsc.  Do you agree?

At the moment, only the load part of (b) is in the predecessor
set of (d), but the store part of (b) is not.  Likewise, the
.rl annotation applies only to the store part of (b), not the
load part.

This gets back to a question Linus asked last week about
whether the AMO is a single unit or whether it can be
considered to split into a load and a store part (which still
perform atomically).  For RISC-V, for right now at least, the
answer is the latter.  Is it still the latter for Linux too?

https://lkml.org/lkml/2018/2/26/606

> So I think we'll need to make sure we pair .rl with .aq, or that
> we pair fence-based mappings with fence-based mappings, in order
> to make the acquire/release operations work.

This assumes we'll say that .aq and .rl are RCsc, not RCpc.
But in this case, I think .aq and .rl could still be safe to use,
as long as you don't ever try to mix in a fence-based mapping
on the same data structure like in the example above.  That
might be important if we want to find the most compact legal
implementation, and hence do want to use .aq and .rl after all.

> And since we don't have native "ld.aq" today in RISC-V, that
> would mean smp_store_release would have to remain implemented
> as "fence rw,w; s{w|d}", rather than "amoswap.{w|d}.rl", for
> example.

Thoughts?

Thanks,
Dan

> 
> Thanks!


Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Palmer Dabbelt

On Fri, 09 Mar 2018 13:30:08 PST (-0800), parri.and...@gmail.com wrote:

On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:

On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:


[...]



>This belongs to the "few style fixes" (specifically, the 80-char lines)
>mentioned in the cover letter; I could not resist ;-), but I'll remove
>them in v3 if you like.

No problem, just next time it's a bit easier to not mix the really complicated
stuff (memory model changes) with the really simple stuff (whitespace changes).


Got it.



>This proposal relies on the generic definition,
>
>   include/linux/atomic.h ,
>
>and on the
>
>   __atomic_op_acquire()
>   __atomic_op_release()
>
>above to build the acquire/release atomics (except for the xchg,cmpxchg,
>where the ACQUIRE_BARRIER is inserted conditionally/on success).

I thought we wanted to use the AQ and RL bits for AMOs, just not for LR/SC
sequences.  IIRC the AMOs are safe with the current memory model, but I might
just have some version mismatches in my head.


AMO.aqrl are "safe" w.r.t. the LKMM (as they provide "full-ordering"); OTOH,
AMO.aq and AMO.rl present weaknesses that LKMM (and some kernel developers)
do not "expect".  I was probing this issue in:

  https://marc.info/?l=linux-kernel&m=151930201102853&w=2

(c.f., e.g., test "RISCV-unlock-lock-read-ordering" from that post).

Quoting from the commit message of my patch 1/2:

  "Referring to the "unlock-lock-read-ordering" test reported below,
   Daniel wrote:

 I think an RCpc interpretation of .aq and .rl would in fact
 allow the two normal loads in P1 to be reordered [...]

 [...]

 Likewise even if the unlock()/lock() is between two stores.
 A control dependency might originate from the load part of
 the amoswap.w.aq, but there still would have to be something
 to ensure that this load part in fact performs after the store
 part of the amoswap.w.rl performs globally, and that's not
 automatic under RCpc.

   Simulation of the RISC-V memory consistency model confirmed this
   expectation."

I have just (re)checked these observations against the latest specification,
and my results _confirmed_ these verdicts.


Thanks, I must have just gotten confused about a draft spec or something.  I'm
pulling these on top of your other memory model related patch.  I've renamed
the branch "next-mm" to be a bit more descriptive.

Thanks!


Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Andrea Parri
On Fri, Mar 09, 2018 at 10:54:27AM -0800, Palmer Dabbelt wrote:
> On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:

[...]


> >This belongs to the "few style fixes" (specifically, the 80-char lines)
> >mentioned in the cover letter; I could not resist ;-), but I'll remove
> >them in v3 if you like.
> 
> No problem, just next time it's a bit easier to not mix the really complicated
> stuff (memory model changes) with the really simple stuff (whitespace 
> changes).

Got it.


> >This proposal relies on the generic definition,
> >
> >   include/linux/atomic.h ,
> >
> >and on the
> >
> >   __atomic_op_acquire()
> >   __atomic_op_release()
> >
> >above to build the acquire/release atomics (except for the xchg,cmpxchg,
> >where the ACQUIRE_BARRIER is inserted conditionally/on success).
> 
> I thought we wanted to use the AQ and RL bits for AMOs, just not for LR/SC
> sequences.  IIRC the AMOs are safe with the current memory model, but I might
> just have some version mismatches in my head.

AMO.aqrl are "safe" w.r.t. the LKMM (as they provide "full-ordering"); OTOH,
AMO.aq and AMO.rl present weaknesses that LKMM (and some kernel developers)
do not "expect".  I was probing this issue in:

  https://marc.info/?l=linux-kernel&m=151930201102853&w=2

(c.f., e.g., test "RISCV-unlock-lock-read-ordering" from that post).

Quoting from the commit message of my patch 1/2:

  "Referring to the "unlock-lock-read-ordering" test reported below,
   Daniel wrote:

 I think an RCpc interpretation of .aq and .rl would in fact
 allow the two normal loads in P1 to be reordered [...]

 [...]

 Likewise even if the unlock()/lock() is between two stores.
 A control dependency might originate from the load part of
 the amoswap.w.aq, but there still would have to be something
 to ensure that this load part in fact performs after the store
 part of the amoswap.w.rl performs globally, and that's not
 automatic under RCpc.

   Simulation of the RISC-V memory consistency model confirmed this
   expectation."

I have just (re)checked these observations against the latest specification,
and my results _confirmed_ these verdicts.

  Andrea


Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Palmer Dabbelt

On Fri, 09 Mar 2018 10:36:44 PST (-0800), parri.and...@gmail.com wrote:

On Fri, Mar 09, 2018 at 09:56:21AM -0800, Palmer Dabbelt wrote:

On Fri, 09 Mar 2018 04:13:40 PST (-0800), parri.and...@gmail.com wrote:
>Atomics present the same issue with locking: release and acquire
>variants need to be strengthened to meet the constraints defined
>by the Linux-kernel memory consistency model [1].
>
>Atomics present a further issue: implementations of atomics such
>as atomic_cmpxchg() and atomic_add_unless() rely on LR/SC pairs,
>which do not give full-ordering with .aqrl; for example, current
>implementations allow the "lr-sc-aqrl-pair-vs-full-barrier" test
>below to end up with the state indicated in the "exists" clause.
>
>In order to "synchronize" LKMM and RISC-V's implementation, this
>commit strengthens the implementations of the atomic operations
>by replacing .rl and .aq with the use of ("lightweight") fences,
>and by replacing .aqrl LR/SC pairs in sequences such as:
>
>  0:  lr.w.aqrl  %0, %addr
>  bne%0, %old, 1f
>  ...
>  sc.w.aqrl  %1, %new, %addr
>  bnez   %1, 0b
>  1:
>
>with sequences of the form:
>
>  0:  lr.w   %0, %addr
>  bne%0, %old, 1f
>  ...
>  sc.w.rl%1, %new, %addr   /* SC-release   */
>  bnez   %1, 0b
>  fence  rw, rw/* "full" fence */
>  1:
>
>following Daniel's suggestion.
>
>These modifications were validated with simulation of the RISC-V
>memory consistency model.
>
>C lr-sc-aqrl-pair-vs-full-barrier
>
>{}
>
>P0(int *x, int *y, atomic_t *u)
>{
>int r0;
>int r1;
>
>WRITE_ONCE(*x, 1);
>r0 = atomic_cmpxchg(u, 0, 1);
>r1 = READ_ONCE(*y);
>}
>
>P1(int *x, int *y, atomic_t *v)
>{
>int r0;
>int r1;
>
>WRITE_ONCE(*y, 1);
>r0 = atomic_cmpxchg(v, 0, 1);
>r1 = READ_ONCE(*x);
>}
>
>exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
>
>[1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
>
>https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
>https://marc.info/?l=linux-kernel&m=151633436614259&w=2
>
>Suggested-by: Daniel Lustig 
>Signed-off-by: Andrea Parri 
>Cc: Palmer Dabbelt 
>Cc: Albert Ou 
>Cc: Daniel Lustig 
>Cc: Alan Stern 
>Cc: Will Deacon 
>Cc: Peter Zijlstra 
>Cc: Boqun Feng 
>Cc: Nicholas Piggin 
>Cc: David Howells 
>Cc: Jade Alglave 
>Cc: Luc Maranget 
>Cc: "Paul E. McKenney" 
>Cc: Akira Yokosawa 
>Cc: Ingo Molnar 
>Cc: Linus Torvalds 
>Cc: linux-ri...@lists.infradead.org
>Cc: linux-kernel@vger.kernel.org
>---
> arch/riscv/include/asm/atomic.h  | 417 +--
> arch/riscv/include/asm/cmpxchg.h | 391 +---
> 2 files changed, 588 insertions(+), 220 deletions(-)
>
>diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
>index e65d1cd89e28b..855115ace98c8 100644
>--- a/arch/riscv/include/asm/atomic.h
>+++ b/arch/riscv/include/asm/atomic.h
>@@ -24,6 +24,20 @@
> #include 
>
> #define ATOMIC_INIT(i) { (i) }
>+
>+#define __atomic_op_acquire(op, args...)   \
>+({ \
>+   typeof(op##_relaxed(args)) __ret  = op##_relaxed(args); \
>+   __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory");\
>+   __ret;  \
>+})
>+
>+#define __atomic_op_release(op, args...)   \
>+({ \
>+   __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");\
>+   op##_relaxed(args); \
>+})
>+
> static __always_inline int atomic_read(const atomic_t *v)
> {
>return READ_ONCE(v->counter);
>@@ -50,22 +64,23 @@ static __always_inline void atomic64_set(atomic64_t *v, 
long i)
>  * have the AQ or RL bits set.  These don't return anything, so there's only
>  * one version to worry about.
>  */
>-#define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)
 \
>-static __always_inline void atomic##prefix##_##op(c_type i, 
atomic##prefix##_t *v) \
>-{ 
 \
>-   __asm__ __volatile__ ( 
 \
>-   "amo" #asm_op "." #asm_type " zero, %1, %0"
   \
>-   : "+A" (v->counter)
\
>-   : "r" (I)  


Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Andrea Parri
On Fri, Mar 09, 2018 at 09:56:21AM -0800, Palmer Dabbelt wrote:
> On Fri, 09 Mar 2018 04:13:40 PST (-0800), parri.and...@gmail.com wrote:
> >Atomics present the same issue with locking: release and acquire
> >variants need to be strengthened to meet the constraints defined
> >by the Linux-kernel memory consistency model [1].
> >
> >Atomics present a further issue: implementations of atomics such
> >as atomic_cmpxchg() and atomic_add_unless() rely on LR/SC pairs,
> >which do not give full-ordering with .aqrl; for example, current
> >implementations allow the "lr-sc-aqrl-pair-vs-full-barrier" test
> >below to end up with the state indicated in the "exists" clause.
> >
> >In order to "synchronize" LKMM and RISC-V's implementation, this
> >commit strengthens the implementations of the atomic operations
> >by replacing .rl and .aq with the use of ("lightweight") fences,
> >and by replacing .aqrl LR/SC pairs in sequences such as:
> >
> >  0:  lr.w.aqrl  %0, %addr
> >  bne%0, %old, 1f
> >  ...
> >  sc.w.aqrl  %1, %new, %addr
> >  bnez   %1, 0b
> >  1:
> >
> >with sequences of the form:
> >
> >  0:  lr.w   %0, %addr
> >  bne%0, %old, 1f
> >  ...
> >  sc.w.rl%1, %new, %addr   /* SC-release   */
> >  bnez   %1, 0b
> >  fence  rw, rw/* "full" fence */
> >  1:
> >
> >following Daniel's suggestion.
> >
> >These modifications were validated with simulation of the RISC-V
> >memory consistency model.
> >
> >C lr-sc-aqrl-pair-vs-full-barrier
> >
> >{}
> >
> >P0(int *x, int *y, atomic_t *u)
> >{
> > int r0;
> > int r1;
> >
> > WRITE_ONCE(*x, 1);
> > r0 = atomic_cmpxchg(u, 0, 1);
> > r1 = READ_ONCE(*y);
> >}
> >
> >P1(int *x, int *y, atomic_t *v)
> >{
> > int r0;
> > int r1;
> >
> > WRITE_ONCE(*y, 1);
> > r0 = atomic_cmpxchg(v, 0, 1);
> > r1 = READ_ONCE(*x);
> >}
> >
> >exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> >
> >[1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
> >
> >https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
> >https://marc.info/?l=linux-kernel&m=151633436614259&w=2
> >
> >Suggested-by: Daniel Lustig 
> >Signed-off-by: Andrea Parri 
> >Cc: Palmer Dabbelt 
> >Cc: Albert Ou 
> >Cc: Daniel Lustig 
> >Cc: Alan Stern 
> >Cc: Will Deacon 
> >Cc: Peter Zijlstra 
> >Cc: Boqun Feng 
> >Cc: Nicholas Piggin 
> >Cc: David Howells 
> >Cc: Jade Alglave 
> >Cc: Luc Maranget 
> >Cc: "Paul E. McKenney" 
> >Cc: Akira Yokosawa 
> >Cc: Ingo Molnar 
> >Cc: Linus Torvalds 
> >Cc: linux-ri...@lists.infradead.org
> >Cc: linux-kernel@vger.kernel.org
> >---
> > arch/riscv/include/asm/atomic.h  | 417 
> > +--
> > arch/riscv/include/asm/cmpxchg.h | 391 +---
> > 2 files changed, 588 insertions(+), 220 deletions(-)
> >
> >diff --git a/arch/riscv/include/asm/atomic.h 
> >b/arch/riscv/include/asm/atomic.h
> >index e65d1cd89e28b..855115ace98c8 100644
> >--- a/arch/riscv/include/asm/atomic.h
> >+++ b/arch/riscv/include/asm/atomic.h
> >@@ -24,6 +24,20 @@
> > #include 
> >
> > #define ATOMIC_INIT(i)  { (i) }
> >+
> >+#define __atomic_op_acquire(op, args...)\
> >+({  \
> >+typeof(op##_relaxed(args)) __ret  = op##_relaxed(args); \
> >+__asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory");\
> >+__ret;  \
> >+})
> >+
> >+#define __atomic_op_release(op, args...)\
> >+({  \
> >+__asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");\
> >+op##_relaxed(args); \
> >+})
> >+
> > static __always_inline int atomic_read(const atomic_t *v)
> > {
> > return READ_ONCE(v->counter);
> >@@ -50,22 +64,23 @@ static __always_inline void atomic64_set(atomic64_t *v, 
> >long i)
> >  * have the AQ or RL bits set.  These don't return anything, so there's only
> >  * one version to worry about.
> >  */
> >-#define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)  
> >\
> >-static __always_inline void atomic##prefix##_##op(c_type i, 
> >atomic##prefix##_t *v)  \
> >-{   
> >\
> >-__asm__ __volatile__ (  
> >\
> >-"amo" #asm_op 

Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Andrea Parri
On Fri, Mar 09, 2018 at 09:56:21AM -0800, Palmer Dabbelt wrote:
> On Fri, 09 Mar 2018 04:13:40 PST (-0800), parri.and...@gmail.com wrote:
> >Atomics present the same issue with locking: release and acquire
> >variants need to be strengthened to meet the constraints defined
> >by the Linux-kernel memory consistency model [1].
> >
> >Atomics present a further issue: implementations of atomics such
> >as atomic_cmpxchg() and atomic_add_unless() rely on LR/SC pairs,
> >which do not give full-ordering with .aqrl; for example, current
> >implementations allow the "lr-sc-aqrl-pair-vs-full-barrier" test
> >below to end up with the state indicated in the "exists" clause.
> >
> >In order to "synchronize" LKMM and RISC-V's implementation, this
> >commit strengthens the implementations of the atomics operations
> >by replacing .rl and .aq with the use of ("lightweigth") fences,
> >and by replacing .aqrl LR/SC pairs in sequences such as:
> >
> >  0:  lr.w.aqrl  %0, %addr
> >  bne%0, %old, 1f
> >  ...
> >  sc.w.aqrl  %1, %new, %addr
> >  bnez   %1, 0b
> >  1:
> >
> >with sequences of the form:
> >
> >  0:  lr.w   %0, %addr
> >  bne%0, %old, 1f
> >  ...
> >  sc.w.rl%1, %new, %addr   /* SC-release   */
> >  bnez   %1, 0b
> >  fence  rw, rw/* "full" fence */
> >  1:
> >
> >following Daniel's suggestion.
> >
> >These modifications were validated with simulation of the RISC-V
> >memory consistency model.
> >
> >C lr-sc-aqrl-pair-vs-full-barrier
> >
> >{}
> >
> >P0(int *x, int *y, atomic_t *u)
> >{
> > int r0;
> > int r1;
> >
> > WRITE_ONCE(*x, 1);
> > r0 = atomic_cmpxchg(u, 0, 1);
> > r1 = READ_ONCE(*y);
> >}
> >
> >P1(int *x, int *y, atomic_t *v)
> >{
> > int r0;
> > int r1;
> >
> > WRITE_ONCE(*y, 1);
> > r0 = atomic_cmpxchg(v, 0, 1);
> > r1 = READ_ONCE(*x);
> >}
> >
> >exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> >
> >[1] https://marc.info/?l=linux-kernel=151930201102853=2
> >
> > https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
> >https://marc.info/?l=linux-kernel=151633436614259=2
> >
> >Suggested-by: Daniel Lustig 
> >Signed-off-by: Andrea Parri 
> >Cc: Palmer Dabbelt 
> >Cc: Albert Ou 
> >Cc: Daniel Lustig 
> >Cc: Alan Stern 
> >Cc: Will Deacon 
> >Cc: Peter Zijlstra 
> >Cc: Boqun Feng 
> >Cc: Nicholas Piggin 
> >Cc: David Howells 
> >Cc: Jade Alglave 
> >Cc: Luc Maranget 
> >Cc: "Paul E. McKenney" 
> >Cc: Akira Yokosawa 
> >Cc: Ingo Molnar 
> >Cc: Linus Torvalds 
> >Cc: linux-ri...@lists.infradead.org
> >Cc: linux-kernel@vger.kernel.org
> >---
> > arch/riscv/include/asm/atomic.h  | 417 
> > +--
> > arch/riscv/include/asm/cmpxchg.h | 391 +---
> > 2 files changed, 588 insertions(+), 220 deletions(-)
> >
> >diff --git a/arch/riscv/include/asm/atomic.h 
> >b/arch/riscv/include/asm/atomic.h
> >index e65d1cd89e28b..855115ace98c8 100644
> >--- a/arch/riscv/include/asm/atomic.h
> >+++ b/arch/riscv/include/asm/atomic.h
> >@@ -24,6 +24,20 @@
> > #include 
> >
> > #define ATOMIC_INIT(i)  { (i) }
> >+
> >+#define __atomic_op_acquire(op, args...)\
> >+({  \
> >+typeof(op##_relaxed(args)) __ret  = op##_relaxed(args); \
> >+__asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory");\
> >+__ret;  \
> >+})
> >+
> >+#define __atomic_op_release(op, args...)\
> >+({  \
> >+__asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");\
> >+op##_relaxed(args); \
> >+})
> >+
> > static __always_inline int atomic_read(const atomic_t *v)
> > {
> > return READ_ONCE(v->counter);
> >@@ -50,22 +64,23 @@ static __always_inline void atomic64_set(atomic64_t *v, long i)
> >  * have the AQ or RL bits set.  These don't return anything, so there's only
> >  * one version to worry about.
> >  */
> >-#define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)			\
> >-static __always_inline void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)	\
> >-{									\
> >-	__asm__ __volatile__ (						\
> >-		"amo" #asm_op "." #asm_type " zero, %1, %0"		\
> >-		: "+A" (v->counter)					\
> >-		: "r" (I)						\
> >-		: "memory");						\
> >-}
> >+#define 

Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Palmer Dabbelt

On Fri, 09 Mar 2018 04:13:40 PST (-0800), parri.and...@gmail.com wrote:

Atomics present the same issue with locking: release and acquire
variants need to be strengthened to meet the constraints defined
by the Linux-kernel memory consistency model [1].

Atomics present a further issue: implementations of atomics such
as atomic_cmpxchg() and atomic_add_unless() rely on LR/SC pairs,
which do not give full-ordering with .aqrl; for example, current
implementations allow the "lr-sc-aqrl-pair-vs-full-barrier" test
below to end up with the state indicated in the "exists" clause.

In order to "synchronize" LKMM and RISC-V's implementation, this
commit strengthens the implementations of the atomic operations
by replacing .rl and .aq with the use of ("lightweight") fences,
and by replacing .aqrl LR/SC pairs in sequences such as:

  0:  lr.w.aqrl  %0, %addr
      bne        %0, %old, 1f
      ...
      sc.w.aqrl  %1, %new, %addr
      bnez       %1, 0b
  1:

with sequences of the form:

  0:  lr.w       %0, %addr
      bne        %0, %old, 1f
      ...
      sc.w.rl    %1, %new, %addr   /* SC-release   */
      bnez       %1, 0b
      fence      rw, rw            /* "full" fence */
  1:

following Daniel's suggestion.

These modifications were validated with simulation of the RISC-V
memory consistency model.

C lr-sc-aqrl-pair-vs-full-barrier

{}

P0(int *x, int *y, atomic_t *u)
{
	int r0;
	int r1;

	WRITE_ONCE(*x, 1);
	r0 = atomic_cmpxchg(u, 0, 1);
	r1 = READ_ONCE(*y);
}

P1(int *x, int *y, atomic_t *v)
{
	int r0;
	int r1;

	WRITE_ONCE(*y, 1);
	r0 = atomic_cmpxchg(v, 0, 1);
	r1 = READ_ONCE(*x);
}

exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)

[1] https://marc.info/?l=linux-kernel=151930201102853=2

https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
https://marc.info/?l=linux-kernel=151633436614259=2

Suggested-by: Daniel Lustig 
Signed-off-by: Andrea Parri 
Cc: Palmer Dabbelt 
Cc: Albert Ou 
Cc: Daniel Lustig 
Cc: Alan Stern 
Cc: Will Deacon 
Cc: Peter Zijlstra 
Cc: Boqun Feng 
Cc: Nicholas Piggin 
Cc: David Howells 
Cc: Jade Alglave 
Cc: Luc Maranget 
Cc: "Paul E. McKenney" 
Cc: Akira Yokosawa 
Cc: Ingo Molnar 
Cc: Linus Torvalds 
Cc: linux-ri...@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
---
 arch/riscv/include/asm/atomic.h  | 417 +--
 arch/riscv/include/asm/cmpxchg.h | 391 +---
 2 files changed, 588 insertions(+), 220 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index e65d1cd89e28b..855115ace98c8 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -24,6 +24,20 @@
 #include 

 #define ATOMIC_INIT(i) { (i) }
+
+#define __atomic_op_acquire(op, args...)   \
+({ \
+   typeof(op##_relaxed(args)) __ret  = op##_relaxed(args); \
+   __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory");\
+   __ret;  \
+})
+
+#define __atomic_op_release(op, args...)   \
+({ \
+   __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");\
+   op##_relaxed(args); \
+})
+
 static __always_inline int atomic_read(const atomic_t *v)
 {
return READ_ONCE(v->counter);
@@ -50,22 +64,23 @@ static __always_inline void atomic64_set(atomic64_t *v, long i)
  * have the AQ or RL bits set.  These don't return anything, so there's only
  * one version to worry about.
  */
-#define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)		\
-static __always_inline void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)	\
-{									\
-	__asm__ __volatile__ (						\
-		"amo" #asm_op "." #asm_type " zero, %1, %0"		\
-		: "+A" (v->counter)					\
-		: "r" (I)						\
-		: "memory");						\
-}
+#define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)		\
+static __always_inline							\
+void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
+{									\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op "." #asm_type " zero, 

Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Andrea Parri
On Fri, Mar 09, 2018 at 11:39:11AM -0500, Alan Stern wrote:
> On Fri, 9 Mar 2018, Andrea Parri wrote:
> 
> > Atomics present the same issue with locking: release and acquire
> > variants need to be strengthened to meet the constraints defined
> > by the Linux-kernel memory consistency model [1].
> > 
> > Atomics present a further issue: implementations of atomics such
> > as atomic_cmpxchg() and atomic_add_unless() rely on LR/SC pairs,
> > which do not give full-ordering with .aqrl; for example, current
> > implementations allow the "lr-sc-aqrl-pair-vs-full-barrier" test
> > below to end up with the state indicated in the "exists" clause.
> > 
> > In order to "synchronize" LKMM and RISC-V's implementation, this
> > commit strengthens the implementations of the atomic operations
> > by replacing .rl and .aq with the use of ("lightweight") fences,
> > and by replacing .aqrl LR/SC pairs in sequences such as:
> > 
> >   0:  lr.w.aqrl  %0, %addr
> >       bne        %0, %old, 1f
> >       ...
> >       sc.w.aqrl  %1, %new, %addr
> >       bnez       %1, 0b
> >   1:
> > 
> > with sequences of the form:
> > 
> >   0:  lr.w       %0, %addr
> >       bne        %0, %old, 1f
> >       ...
> >       sc.w.rl    %1, %new, %addr   /* SC-release   */
> >       bnez       %1, 0b
> >       fence      rw, rw            /* "full" fence */
> >   1:
> > 
> > following Daniel's suggestion.
> > 
> > These modifications were validated with simulation of the RISC-V
> > memory consistency model.
> > 
> > C lr-sc-aqrl-pair-vs-full-barrier
> > 
> > {}
> > 
> > P0(int *x, int *y, atomic_t *u)
> > {
> > 	int r0;
> > 	int r1;
> > 
> > 	WRITE_ONCE(*x, 1);
> > 	r0 = atomic_cmpxchg(u, 0, 1);
> > 	r1 = READ_ONCE(*y);
> > }
> > 
> > P1(int *x, int *y, atomic_t *v)
> > {
> > 	int r0;
> > 	int r1;
> > 
> > 	WRITE_ONCE(*y, 1);
> > 	r0 = atomic_cmpxchg(v, 0, 1);
> > 	r1 = READ_ONCE(*x);
> > }
> > 
> > exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> 
> There's another aspect to this imposed by the LKMM, and I'm not sure
> whether your patch addresses it.  You add a fence after the cmpxchg
> operation but nothing before it.  So what would happen with the 
> following litmus test (which the LKMM forbids)?

Available RISC-V memory model formalizations forbid it;  an intuitive
explanation could probably be derived by paralleling the argument for
arm64, as pointed out by Daniel at:

  https://marc.info/?l=linux-kernel=151994289015267=2

  Andrea


> 
> C SB-atomic_cmpxchg-mb
> 
> {}
> 
> P0(int *x, int *y)
> {
>   int r0;
> 
>   WRITE_ONCE(*x, 1);
>   r0 = atomic_cmpxchg(y, 0, 0);
> }
> 
> P1(int *x, int *y)
> {
>   int r1;
> 
>   WRITE_ONCE(*y, 1);
>   smp_mb();
>   r1 = READ_ONCE(*x);
> }
> 
> exists (0:r0=0 /\ 1:r1=0)
> 
> This is yet another illustration showing that full fences are stronger 
> than combinations of release + acquire.
> 
> Alan Stern
> 


Re: [PATCH v2 2/2] riscv/atomic: Strengthen implementations with fences

2018-03-09 Thread Alan Stern
On Fri, 9 Mar 2018, Andrea Parri wrote:

> Atomics present the same issue with locking: release and acquire
> variants need to be strengthened to meet the constraints defined
> by the Linux-kernel memory consistency model [1].
> 
> Atomics present a further issue: implementations of atomics such
> as atomic_cmpxchg() and atomic_add_unless() rely on LR/SC pairs,
> which do not give full-ordering with .aqrl; for example, current
> implementations allow the "lr-sc-aqrl-pair-vs-full-barrier" test
> below to end up with the state indicated in the "exists" clause.
> 
> In order to "synchronize" LKMM and RISC-V's implementation, this
> commit strengthens the implementations of the atomic operations
> by replacing .rl and .aq with the use of ("lightweight") fences,
> and by replacing .aqrl LR/SC pairs in sequences such as:
> 
>   0:  lr.w.aqrl  %0, %addr
>       bne        %0, %old, 1f
>       ...
>       sc.w.aqrl  %1, %new, %addr
>       bnez       %1, 0b
>   1:
> 
> with sequences of the form:
> 
>   0:  lr.w       %0, %addr
>       bne        %0, %old, 1f
>       ...
>       sc.w.rl    %1, %new, %addr   /* SC-release   */
>       bnez       %1, 0b
>       fence      rw, rw            /* "full" fence */
>   1:
> 
> following Daniel's suggestion.
> 
> These modifications were validated with simulation of the RISC-V
> memory consistency model.
> 
> C lr-sc-aqrl-pair-vs-full-barrier
> 
> {}
> 
> P0(int *x, int *y, atomic_t *u)
> {
>   int r0;
>   int r1;
> 
>   WRITE_ONCE(*x, 1);
>   r0 = atomic_cmpxchg(u, 0, 1);
>   r1 = READ_ONCE(*y);
> }
> 
> P1(int *x, int *y, atomic_t *v)
> {
>   int r0;
>   int r1;
> 
>   WRITE_ONCE(*y, 1);
>   r0 = atomic_cmpxchg(v, 0, 1);
>   r1 = READ_ONCE(*x);
> }
> 
> exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)

There's another aspect to this imposed by the LKMM, and I'm not sure
whether your patch addresses it.  You add a fence after the cmpxchg
operation but nothing before it.  So what would happen with the 
following litmus test (which the LKMM forbids)?

C SB-atomic_cmpxchg-mb

{}

P0(int *x, int *y)
{
	int r0;

	WRITE_ONCE(*x, 1);
	r0 = atomic_cmpxchg(y, 0, 0);
}

P1(int *x, int *y)
{
	int r1;

	WRITE_ONCE(*y, 1);
	smp_mb();
	r1 = READ_ONCE(*x);
}

exists (0:r0=0 /\ 1:r1=0)

This is yet another illustration showing that full fences are stronger 
than combinations of release + acquire.

Alan Stern


