Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-19 Thread Mathieu Desnoyers via lttng-dev
- On Apr 19, 2021, at 11:41 AM, Duncan Sands baldr...@free.fr wrote:

> Hi Mathieu,
> 
> On 4/19/21 5:31 PM, Mathieu Desnoyers wrote:
>> - On Apr 19, 2021, at 5:41 AM, Duncan Sands baldr...@free.fr wrote:
>> 
>> 
>>>
>>>> Quick question: should we use __atomic_load() or atomic_load_explicit() (C) and
>>>> ((std::atomic<__typeof__(x)>)(x)).load() (C++) ?
>>>
>>> If both are available, is there any advantage to using the C++ version when
>>> compiling C++?  As opposed to using the C11 one for both C and C++?
>> 
>> I recently noticed that using C11/C++11 atomic load explicit is not a good
>> fit for rcu_dereference, because we want the type to be a pointer, not an
>> _Atomic type. gcc appears to accept a looser typing, but clang has issues
>> trying to build that code.
> 
> in the long run maybe the original variables should be declared with the
> appropriate atomic type from the get-go.

Considering that rcu_dereference is public API, we would have to wait until we
do a major soname ABI bump _and_ an API break to do that, which I am very
reluctant to do, especially for the API break part.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com
___
lttng-dev mailing list
lttng-dev@lists.lttng.org
https://lists.lttng.org/cgi-bin/mailman/listinfo/lttng-dev


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-19 Thread Duncan Sands via lttng-dev

Hi Mathieu,

On 4/19/21 5:31 PM, Mathieu Desnoyers wrote:

- On Apr 19, 2021, at 5:41 AM, Duncan Sands baldr...@free.fr wrote:





Quick question: should we use __atomic_load() or atomic_load_explicit() (C) and
((std::atomic<__typeof__(x)>)(x)).load() (C++) ?


If both are available, is there any advantage to using the C++ version when
compiling C++?  As opposed to using the C11 one for both C and C++?


I recently noticed that using C11/C++11 atomic load explicit is not a good
fit for rcu_dereference, because we want the type to be a pointer, not an
_Atomic type. gcc appears to accept a looser typing, but clang has issues
trying to build that code.


in the long run maybe the original variables should be declared with the 
appropriate atomic type from the get-go.



So I plan to use __atomic_load(p, v, __ATOMIC_CONSUME) instead in both C and C++.

Also, I'll drop the cmm_smp_read_barrier_depends() when using __ATOMIC_CONSUME,
because AFAIU their memory ordering semantics are redundant for rcu_dereference.


Yeah, keeping the barrier makes no sense in that case.



Here is the resulting commit for review on gerrit:

https://review.lttng.org/c/userspace-rcu/+/5455 Fix: use __atomic_load() rather than atomic load explicit [NEW]


Looks good to me (I didn't test it though).

Ciao, Duncan.


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-19 Thread Mathieu Desnoyers via lttng-dev
- On Apr 19, 2021, at 5:41 AM, Duncan Sands baldr...@free.fr wrote:


> 
>> Quick question: should we use __atomic_load() or atomic_load_explicit() (C) and
>> ((std::atomic<__typeof__(x)>)(x)).load() (C++) ?
> 
> If both are available, is there any advantage to using the C++ version when
> compiling C++?  As opposed to using the C11 one for both C and C++?

I recently noticed that using C11/C++11 atomic load explicit is not a good
fit for rcu_dereference, because we want the type to be a pointer, not an
_Atomic type. gcc appears to accept a looser typing, but clang has issues
trying to build that code.
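To make the typing mismatch concrete, here is a minimal sketch (all names are hypothetical, not liburcu's): C11's atomic_load_explicit() wants an _Atomic-qualified object, while rcu_dereference() must keep working on the plain pointers the public API exposes.

```c
#include <stdatomic.h>

struct node { int value; };

struct node *plain_head;            /* plain pointer, liburcu style */
_Atomic(struct node *) atomic_head; /* what C11 atomic_load_explicit() expects */

struct node *get_atomic(void)
{
	/* Well-formed C11: the operand is an _Atomic object. */
	return atomic_load_explicit(&atomic_head, memory_order_consume);
}

/* By contrast, atomic_load_explicit(&plain_head, memory_order_consume)
 * is not strictly conforming: gcc accepts the looser typing while clang
 * rejects it, which is exactly the problem described above. */
```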

So I plan to use __atomic_load(p, v, __ATOMIC_CONSUME) instead in both C and C++.

Also, I'll drop the cmm_smp_read_barrier_depends() when using __ATOMIC_CONSUME,
because AFAIU their memory ordering semantics are redundant for rcu_dereference.
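Sketched out, that plan looks roughly as follows (the macro name and statement-expression shape are illustrative only; __atomic_load() is the gcc/clang builtin, which, unlike C11's atomic_load_explicit(), operates on plain, non-_Atomic objects):

```c
/* Hypothetical sketch of rcu_dereference() on top of the __atomic
 * builtins: load p with consume ordering into a temporary of the same
 * plain pointer type, then yield the temporary. */
#define rcu_dereference_sketch(p)			\
	__extension__					\
	({						\
	__typeof__(p) _v;				\
	__atomic_load(&(p), &_v, __ATOMIC_CONSUME);	\
	_v;						\
	})
```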

Here is the resulting commit for review on gerrit:

https://review.lttng.org/c/userspace-rcu/+/5455 Fix: use __atomic_load() rather than atomic load explicit [NEW]

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Mathieu Desnoyers via lttng-dev
- On Apr 16, 2021, at 11:22 AM, lttng-dev lttng-dev@lists.lttng.org wrote:

> Hi Mathieu,
> 

Hi Duncan,

> On 4/16/21 4:52 PM, Mathieu Desnoyers via lttng-dev wrote:
>> Hi Paul, Will, Peter,
>> 
>> I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
>> is able to break rcu_dereference. This seems to be taken care of by
>> arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.
>> 
>> In the liburcu user-space library, we have this comment near 
>> rcu_dereference()
>> in
>> include/urcu/static/pointer.h:
>> 
>>   * The compiler memory barrier in CMM_LOAD_SHARED() ensures that
>>   value-speculative
>>   * optimizations (e.g. VSS: Value Speculation Scheduling) does not perform 
>> the
>>   * data read before the pointer read by speculating the value of the 
>> pointer.
>>   * Correct ordering is ensured because the pointer is read as a volatile 
>> access.
>>   * This acts as a global side-effect operation, which forbids reordering of
>>   * dependent memory operations. Note that such concern about 
>> dependency-breaking
>>   * optimizations will eventually be taken care of by the 
>> "memory_order_consume"
>>   * addition to forthcoming C++ standard.
>> 
>> (note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was 
>> introduced in
>> liburcu as a public API before READ_ONCE() existed in the Linux kernel)
> 
> this is not directly on topic, but what do you think of porting userspace RCU 
> to
> use the C++ memory model and GCC/LLVM atomic builtins (__atomic_store etc)
> rather than rolling your own?  Tools like thread sanitizer would then 
> understand
> what userspace RCU is doing.  Not to mention the compiler.  More developers
> would understand it too!

Yes, that sounds like a clear win.

> From a code organization viewpoint, going down this path would presumably mean
> directly using GCC/LLVM atomic support when available, and falling back on
> something like the current uatomic to emulate them for older compilers.

Yes, I think this approach would be good. One caveat though: the GCC atomic
operations were known to be broken with some older compilers for specific
architectures, so we may have to keep track of a list of known buggy compilers
and use our own implementation instead in those situations. It's been a while
since I've looked at this though, so we may not even be supporting those old
compilers in liburcu anymore.

> 
> Some parts of uatomic have pretty clear equivalents (see below), but not all, 
> so
> the conversion could be quite tricky.

We'd have to see on a case by case basis, but it cannot hurt to start the effort
by integrating the easy ones.
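For the easy ones, the obvious candidates look something like this (a sketch only: the `_sketch` names are mine, the left-hand operations are modeled on uatomic's, and the seq-cst orderings for the read-modify-write ops are my assumption, mirroring full-barrier semantics):

```c
/* Illustrative mapping of uatomic-style operations onto the gcc/clang
 * __atomic builtins. */
#define uatomic_read_sketch(addr)	__atomic_load_n((addr), __ATOMIC_RELAXED)
#define uatomic_set_sketch(addr, v)	__atomic_store_n((addr), (v), __ATOMIC_RELAXED)
#define uatomic_add_return_sketch(addr, v) \
	__atomic_add_fetch((addr), (v), __ATOMIC_SEQ_CST)
#define uatomic_xchg_sketch(addr, v) \
	__atomic_exchange_n((addr), (v), __ATOMIC_SEQ_CST)
#define uatomic_cmpxchg_sketch(addr, old, new_)                          \
	__extension__ ({                                                 \
	__typeof__(*(addr)) _old = (old);                                \
	/* On success _old keeps the previous value; on failure it is    \
	 * updated to the observed value: cmpxchg return semantics. */   \
	__atomic_compare_exchange_n((addr), &_old, (new_), 0,            \
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST); \
	_old;                                                            \
	})
```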

> 
>> Peter tells me the "memory_order_consume" is not something which can be used
>> today.
> 
> This is a pity, because it seems to have been invented with rcu_dereference in
> mind.

Actually, (see other leg of this email thread) memory_order_consume works for
rcu_dereference, but it appears to be implemented as a slightly heavier than
required memory_order_acquire on weakly-ordered architectures. So we're just
moving the issue into compiler-land. Oh well.

> 
>> Any information on its status at C/C++ standard levels and 
>> implementation-wise ?
>> 
>> Pragmatically speaking, what should we change in liburcu to ensure we don't
>> generate
>> broken code when LTO is enabled ? I suspect there are a few options here:
>> 
>> 1) Fail to build if LTO is enabled,
>> 2) Generate slower code for rcu_dereference, either on all architectures or 
>> only
>> on weakly-ordered architectures,
>> 3) Generate different code depending on whether LTO is enabled or not. AFAIU
>> this would only
>> work if every compile unit is aware that it will end up being optimized 
>> with
>> LTO. Not sure
>> how this could be done in the context of user-space.
>> 4) [ Insert better idea here. ]
>> 
>> Thoughts ?
> 
> Best wishes, Duncan.
> 
> PS: We are experimentally running with the following patch, as it already 
> makes
> thread sanitizer a lot happier:

Quick question: should we use __atomic_load() or atomic_load_explicit() (C) and
((std::atomic<__typeof__(x)>)(x)).load() (C++) ?

We'd have to make this dependent on C11/C++11 though, and keep volatile for
older compilers.

Last thing: I have limited time to work on this, so if you have well-tested
patches you wish to submit, I'll do my best to review them!

Thanks,

Mathieu

> 
> --- a/External/UserspaceRCU/userspace-rcu/include/urcu/system.h
> +++ b/External/UserspaceRCU/userspace-rcu/include/urcu/system.h
> @@ -26,34 +26,45 @@
>   * Identify a shared load. A cmm_smp_rmc() or cmm_smp_mc() should come
>   * before the load.
>   */
> -#define _CMM_LOAD_SHARED(p) CMM_ACCESS_ONCE(p)
> +#define _CMM_LOAD_SHARED(p)  \
> + __extension__   \
> + ({  \
> + __typeof__(p) v;

Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Paul E. McKenney via lttng-dev
On Fri, Apr 16, 2021 at 03:30:53PM -0400, Mathieu Desnoyers wrote:
> - On Apr 16, 2021, at 3:02 PM, paulmck paul...@kernel.org wrote:
> [...]
> > 
> > If it can be done reasonably, I suggest also having some way for the
> > person building userspace RCU to say "I know what I am doing, so do
> > it with volatile rather than memory_order_consume."
> 
> Like so ?
> 
> #define CMM_ACCESS_ONCE(x) (*(__volatile__  __typeof__(x) *)&(x))
> #define CMM_LOAD_SHARED(p) CMM_ACCESS_ONCE(p)
> 
> /*
>  * By defining URCU_DEREFERENCE_USE_VOLATILE, the user requires use of
>  * volatile access to implement rcu_dereference rather than
>  * memory_order_consume load from the C11/C++11 standards.
>  *
>  * This may improve performance on weakly-ordered architectures where
>  * the compiler implements memory_order_consume as a
>  * memory_order_acquire, which is stricter than required by the
>  * standard.
>  *
>  * Note that using volatile accesses for rcu_dereference may cause
>  * LTO to generate incorrectly ordered code starting from C11/C++11.
>  */
> 
> #ifdef URCU_DEREFERENCE_USE_VOLATILE
> # define rcu_dereference(x) CMM_LOAD_SHARED(x)
> #else
> # if defined (__cplusplus)
> #  if __cplusplus >= 201103L
> #   include <atomic>
> #   define rcu_dereference(x)   ((std::atomic<__typeof__(x)>)(x)).load(std::memory_order_consume)
> #  else
> #   define rcu_dereference(x)   CMM_LOAD_SHARED(x)
> #  endif
> # else
> #  if (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
> #   include <stdatomic.h>
> #   define rcu_dereference(x)   atomic_load_explicit(&(x), memory_order_consume)
> #  else
> #   define rcu_dereference(x)   CMM_LOAD_SHARED(x)
> #  endif
> # endif
> #endif

Looks good to me!

Thanx, Paul


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Mathieu Desnoyers via lttng-dev
- On Apr 16, 2021, at 3:02 PM, paulmck paul...@kernel.org wrote:
[...]
> 
> If it can be done reasonably, I suggest also having some way for the
> person building userspace RCU to say "I know what I am doing, so do
> it with volatile rather than memory_order_consume."

Like so ?

#define CMM_ACCESS_ONCE(x) (*(__volatile__  __typeof__(x) *)&(x))
#define CMM_LOAD_SHARED(p) CMM_ACCESS_ONCE(p)

/*
 * By defining URCU_DEREFERENCE_USE_VOLATILE, the user requires use of
 * volatile access to implement rcu_dereference rather than
 * memory_order_consume load from the C11/C++11 standards.
 *
 * This may improve performance on weakly-ordered architectures where
 * the compiler implements memory_order_consume as a
 * memory_order_acquire, which is stricter than required by the
 * standard.
 *
 * Note that using volatile accesses for rcu_dereference may cause
 * LTO to generate incorrectly ordered code starting from C11/C++11.
 */

#ifdef URCU_DEREFERENCE_USE_VOLATILE
# define rcu_dereference(x) CMM_LOAD_SHARED(x)
#else
# if defined (__cplusplus)
#  if __cplusplus >= 201103L
#   include <atomic>
#   define rcu_dereference(x)   ((std::atomic<__typeof__(x)>)(x)).load(std::memory_order_consume)
#  else
#   define rcu_dereference(x)   CMM_LOAD_SHARED(x)
#  endif
# else
#  if (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
#   include <stdatomic.h>
#   define rcu_dereference(x)   atomic_load_explicit(&(x), memory_order_consume)
#  else
#   define rcu_dereference(x)   CMM_LOAD_SHARED(x)
#  endif
# endif
#endif

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Paul E. McKenney via lttng-dev
On Fri, Apr 16, 2021 at 02:40:08PM -0400, Mathieu Desnoyers wrote:
> - On Apr 16, 2021, at 12:01 PM, paulmck paul...@kernel.org wrote:
> 
> > On Fri, Apr 16, 2021 at 05:17:11PM +0200, Peter Zijlstra wrote:
> >> On Fri, Apr 16, 2021 at 10:52:16AM -0400, Mathieu Desnoyers wrote:
> >> > Hi Paul, Will, Peter,
> >> > 
> >> > I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
> >> > is able to break rcu_dereference. This seems to be taken care of by
> >> > arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.
> >> > 
> >> > In the liburcu user-space library, we have this comment near 
> >> > rcu_dereference()
> >> > in
> >> > include/urcu/static/pointer.h:
> >> > 
> >> >  * The compiler memory barrier in CMM_LOAD_SHARED() ensures that
> >> >  value-speculative
> >> >  * optimizations (e.g. VSS: Value Speculation Scheduling) does not 
> >> > perform the
> >> >  * data read before the pointer read by speculating the value of the 
> >> > pointer.
> >> >  * Correct ordering is ensured because the pointer is read as a volatile 
> >> > access.
> >> >  * This acts as a global side-effect operation, which forbids reordering 
> >> > of
> >> >  * dependent memory operations. Note that such concern about 
> >> > dependency-breaking
> >> >  * optimizations will eventually be taken care of by the 
> >> > "memory_order_consume"
> >> >  * addition to forthcoming C++ standard.
> >> > 
> >> > (note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was 
> >> > introduced in
> >> > liburcu as a public API before READ_ONCE() existed in the Linux kernel)
> >> > 
> >> > Peter tells me the "memory_order_consume" is not something which can be 
> >> > used
> >> > today.
> >> > Any information on its status at C/C++ standard levels and 
> >> > implementation-wise ?
> > 
> > Actually, you really can use memory_order_consume.  All current
> > implementations will compile it as if it was memory_order_acquire.
> > This will work correctly, but may be slower than you would like on ARM,
> > PowerPC, and so on.
> > 
> > On things like x86, the penalty is forgone optimizations, so less
> > of a problem there.
> 
> OK
> 
> > 
> >> > Pragmatically speaking, what should we change in liburcu to ensure we 
> >> > don't
> >> > generate
> >> > broken code when LTO is enabled ? I suspect there are a few options here:
> >> > 
> >> > 1) Fail to build if LTO is enabled,
> >> > 2) Generate slower code for rcu_dereference, either on all architectures 
> >> > or only
> >> >on weakly-ordered architectures,
> >> > 3) Generate different code depending on whether LTO is enabled or not. 
> >> > AFAIU
> >> > this would only
> >> >work if every compile unit is aware that it will end up being 
> >> > optimized with
> >> >LTO. Not sure
> >> >how this could be done in the context of user-space.
> >> > 4) [ Insert better idea here. ]
> > 
> > Use memory_order_consume if LTO is enabled.  That will work now, and
> > might generate good code in some hoped-for future.
> 
> In the context of a user-space library, how does one check whether LTO is 
> enabled with
> preprocessor directives ? A quick test with gcc seems to show that both with 
> and without
> -flto cannot be distinguished from a preprocessor POV, e.g. the output of both
> 
> gcc --std=c11 -O2 -dM -E - < /dev/null
> and
> gcc --std=c11 -O2 -flto -dM -E - < /dev/null
> 
> is exactly the same. Am I missing something here ?

No idea.  ;-)

> If we accept to use memory_order_consume all the time in both C and C++ code 
> starting from
> C11 and C++11, the following code snippet could do the trick:
> 
> #define CMM_ACCESS_ONCE(x) (*(__volatile__  __typeof__(x) *)&(x))
> #define CMM_LOAD_SHARED(p) CMM_ACCESS_ONCE(p)
> 
> #if defined (__cplusplus)
> # if __cplusplus >= 201103L
> #  include <atomic>
> #  define rcu_dereference(x)    ((std::atomic<__typeof__(x)>)(x)).load(std::memory_order_consume)
> # else
> #  define rcu_dereference(x)    CMM_LOAD_SHARED(x)
> # endif
> #else
> # if (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
> #  include <stdatomic.h>
> #  define rcu_dereference(x)    atomic_load_explicit(&(x), memory_order_consume)
> # else
> #  define rcu_dereference(x)    CMM_LOAD_SHARED(x)
> # endif
> #endif
> 
> This uses the volatile approach prior to C11/C++11, and moves to 
> memory_order_consume
> afterwards. This will bring a performance penalty on weakly-ordered 
> architectures even
> when -flto is not specified though.
> 
> Then the burden is pushed on the compiler people to eventually implement an 
> efficient
> memory_order_consume.
> 
> Is that acceptable ?

That makes sense to me!

If it can be done reasonably, I suggest also having some way for the
person building userspace RCU to say "I know what I am doing, so do
it with volatile rather than memory_order_consume."

Thanx, Paul

Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Mathieu Desnoyers via lttng-dev
- On Apr 16, 2021, at 12:01 PM, paulmck paul...@kernel.org wrote:

> On Fri, Apr 16, 2021 at 05:17:11PM +0200, Peter Zijlstra wrote:
>> On Fri, Apr 16, 2021 at 10:52:16AM -0400, Mathieu Desnoyers wrote:
>> > Hi Paul, Will, Peter,
>> > 
>> > I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
>> > is able to break rcu_dereference. This seems to be taken care of by
>> > arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.
>> > 
>> > In the liburcu user-space library, we have this comment near 
>> > rcu_dereference()
>> > in
>> > include/urcu/static/pointer.h:
>> > 
>> >  * The compiler memory barrier in CMM_LOAD_SHARED() ensures that
>> >  value-speculative
>> >  * optimizations (e.g. VSS: Value Speculation Scheduling) does not perform 
>> > the
>> >  * data read before the pointer read by speculating the value of the 
>> > pointer.
>> >  * Correct ordering is ensured because the pointer is read as a volatile 
>> > access.
>> >  * This acts as a global side-effect operation, which forbids reordering of
>> >  * dependent memory operations. Note that such concern about 
>> > dependency-breaking
>> >  * optimizations will eventually be taken care of by the 
>> > "memory_order_consume"
>> >  * addition to forthcoming C++ standard.
>> > 
>> > (note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was 
>> > introduced in
>> > liburcu as a public API before READ_ONCE() existed in the Linux kernel)
>> > 
>> > Peter tells me the "memory_order_consume" is not something which can be 
>> > used
>> > today.
>> > Any information on its status at C/C++ standard levels and 
>> > implementation-wise ?
> 
> Actually, you really can use memory_order_consume.  All current
> implementations will compile it as if it was memory_order_acquire.
> This will work correctly, but may be slower than you would like on ARM,
> PowerPC, and so on.
> 
> On things like x86, the penalty is forgone optimizations, so less
> of a problem there.

OK

> 
>> > Pragmatically speaking, what should we change in liburcu to ensure we don't
>> > generate
>> > broken code when LTO is enabled ? I suspect there are a few options here:
>> > 
>> > 1) Fail to build if LTO is enabled,
>> > 2) Generate slower code for rcu_dereference, either on all architectures 
>> > or only
>> >on weakly-ordered architectures,
>> > 3) Generate different code depending on whether LTO is enabled or not. 
>> > AFAIU
>> > this would only
>> >work if every compile unit is aware that it will end up being optimized 
>> > with
>> >LTO. Not sure
>> >how this could be done in the context of user-space.
>> > 4) [ Insert better idea here. ]
> 
> Use memory_order_consume if LTO is enabled.  That will work now, and
> might generate good code in some hoped-for future.

In the context of a user-space library, how does one check whether LTO is
enabled with preprocessor directives ? A quick test with gcc seems to show
that both with and without -flto cannot be distinguished from a preprocessor
POV, e.g. the output of both

gcc --std=c11 -O2 -dM -E - < /dev/null
and
gcc --std=c11 -O2 -flto -dM -E - < /dev/null

is exactly the same. Am I missing something here ?

If we accept to use memory_order_consume all the time in both C and C++ code
starting from C11 and C++11, the following code snippet could do the trick:

#define CMM_ACCESS_ONCE(x) (*(__volatile__  __typeof__(x) *)&(x))
#define CMM_LOAD_SHARED(p) CMM_ACCESS_ONCE(p)

#if defined (__cplusplus)
# if __cplusplus >= 201103L
#  include <atomic>
#  define rcu_dereference(x)    ((std::atomic<__typeof__(x)>)(x)).load(std::memory_order_consume)
# else
#  define rcu_dereference(x)    CMM_LOAD_SHARED(x)
# endif
#else
# if (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L)
#  include <stdatomic.h>
#  define rcu_dereference(x)    atomic_load_explicit(&(x), memory_order_consume)
# else
#  define rcu_dereference(x)    CMM_LOAD_SHARED(x)
# endif
#endif

This uses the volatile approach prior to C11/C++11, and moves to
memory_order_consume afterwards. This will bring a performance penalty on
weakly-ordered architectures even when -flto is not specified though.

Then the burden is pushed on the compiler people to eventually implement an
efficient memory_order_consume.

Is that acceptable ?

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Paul E. McKenney via lttng-dev
On Fri, Apr 16, 2021 at 05:17:11PM +0200, Peter Zijlstra wrote:
> On Fri, Apr 16, 2021 at 10:52:16AM -0400, Mathieu Desnoyers wrote:
> > Hi Paul, Will, Peter,
> > 
> > I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
> > is able to break rcu_dereference. This seems to be taken care of by
> > arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.
> > 
> > In the liburcu user-space library, we have this comment near 
> > rcu_dereference() in
> > include/urcu/static/pointer.h:
> > 
> >  * The compiler memory barrier in CMM_LOAD_SHARED() ensures that 
> > value-speculative
> >  * optimizations (e.g. VSS: Value Speculation Scheduling) does not perform 
> > the
> >  * data read before the pointer read by speculating the value of the 
> > pointer.
> >  * Correct ordering is ensured because the pointer is read as a volatile 
> > access.
> >  * This acts as a global side-effect operation, which forbids reordering of
> >  * dependent memory operations. Note that such concern about 
> > dependency-breaking
> >  * optimizations will eventually be taken care of by the 
> > "memory_order_consume"
> >  * addition to forthcoming C++ standard.
> > 
> > (note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was 
> > introduced in
> > liburcu as a public API before READ_ONCE() existed in the Linux kernel)
> > 
> > Peter tells me the "memory_order_consume" is not something which can be 
> > used today.
> > Any information on its status at C/C++ standard levels and 
> > implementation-wise ?

Actually, you really can use memory_order_consume.  All current
implementations will compile it as if it was memory_order_acquire.
This will work correctly, but may be slower than you would like on ARM,
PowerPC, and so on.

On things like x86, the penalty is forgone optimizations, so less
of a problem there.

> > Pragmatically speaking, what should we change in liburcu to ensure we don't 
> > generate
> > broken code when LTO is enabled ? I suspect there are a few options here:
> > 
> > 1) Fail to build if LTO is enabled,
> > 2) Generate slower code for rcu_dereference, either on all architectures or 
> > only
> >on weakly-ordered architectures,
> > 3) Generate different code depending on whether LTO is enabled or not. 
> > AFAIU this would only
> >work if every compile unit is aware that it will end up being optimized 
> > with LTO. Not sure
> >how this could be done in the context of user-space.
> > 4) [ Insert better idea here. ]

Use memory_order_consume if LTO is enabled.  That will work now, and
might generate good code in some hoped-for future.

> > Thoughts ?
> 
> Using memory_order_acquire is safe; and is basically what Will did for
> ARM64.
> 
> The problematic transformations are possible even without LTO, although
> less likely due to less visibility, but everybody agrees they're
> possible and allowed.
> 
> OTOH we do not have a positive sighting of it actually happening (I
> think), we're all just being cautious and not willing to debug the
> resulting wreckage if it does indeed happen.

And yes, you can also use memory_order_acquire.

Thanx, Paul


Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Peter Zijlstra via lttng-dev
On Fri, Apr 16, 2021 at 10:52:16AM -0400, Mathieu Desnoyers wrote:
> Hi Paul, Will, Peter,
> 
> I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
> is able to break rcu_dereference. This seems to be taken care of by
> arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.
> 
> In the liburcu user-space library, we have this comment near 
> rcu_dereference() in
> include/urcu/static/pointer.h:
> 
>  * The compiler memory barrier in CMM_LOAD_SHARED() ensures that 
> value-speculative
>  * optimizations (e.g. VSS: Value Speculation Scheduling) does not perform the
>  * data read before the pointer read by speculating the value of the pointer.
>  * Correct ordering is ensured because the pointer is read as a volatile 
> access.
>  * This acts as a global side-effect operation, which forbids reordering of
>  * dependent memory operations. Note that such concern about 
> dependency-breaking
>  * optimizations will eventually be taken care of by the 
> "memory_order_consume"
>  * addition to forthcoming C++ standard.
> 
> (note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was introduced 
> in
> liburcu as a public API before READ_ONCE() existed in the Linux kernel)
> 
> Peter tells me the "memory_order_consume" is not something which can be used 
> today.
> Any information on its status at C/C++ standard levels and 
> implementation-wise ?
> 
> Pragmatically speaking, what should we change in liburcu to ensure we don't 
> generate
> broken code when LTO is enabled ? I suspect there are a few options here:
> 
> 1) Fail to build if LTO is enabled,
> 2) Generate slower code for rcu_dereference, either on all architectures or 
> only
>on weakly-ordered architectures,
> 3) Generate different code depending on whether LTO is enabled or not. AFAIU 
> this would only
>work if every compile unit is aware that it will end up being optimized 
> with LTO. Not sure
>how this could be done in the context of user-space.
> 4) [ Insert better idea here. ]
> 
> Thoughts ?

Using memory_order_acquire is safe; and is basically what Will did for
ARM64.

The problematic transformations are possible even without LTO, although
less likely due to less visibility, but everybody agrees they're
possible and allowed.

OTOH we do not have a positive sighting of it actually happening (I
think), we're all just being cautious and not willing to debug the
resulting wreckage if it does indeed happen.
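For reference, the transformation being worried about can be sketched in a few lines (illustrative names only; the "speculated" version is hand-written here to show what a compiler is allowed to emit, not actual compiler output):

```c
struct foo { int val; };
struct foo a = { 1 };
struct foo *gp = &a;

/* Volatile load of the pointer, as CMM_LOAD_SHARED() does. */
#define READ_PTR(p) (*(struct foo * volatile *)&(p))

/* What the programmer wrote: the data load carries an address
 * dependency on the pointer load. */
int reader(void)
{
	struct foo *p = READ_PTR(gp);
	return p->val;
}

/* What value speculation (made more likely by LTO, which can see that
 * gp only ever points to &a) may turn it into: on the predicted path
 * the data load no longer depends on p, so a weakly-ordered CPU may
 * satisfy it before the pointer load and observe stale data. */
int reader_speculated(void)
{
	struct foo *p = READ_PTR(gp);
	if (p == &a)
		return a.val;	/* address dependency broken */
	return p->val;
}
```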



Re: [lttng-dev] liburcu: LTO breaking rcu_dereference on arm64 and possibly other architectures ?

2021-04-16 Thread Duncan Sands via lttng-dev

Hi Mathieu,

On 4/16/21 4:52 PM, Mathieu Desnoyers via lttng-dev wrote:

Hi Paul, Will, Peter,

I noticed in this discussion https://lkml.org/lkml/2021/4/16/118 that LTO
is able to break rcu_dereference. This seems to be taken care of by
arch/arm64/include/asm/rwonce.h on arm64 in the Linux kernel tree.

In the liburcu user-space library, we have this comment near rcu_dereference() 
in
include/urcu/static/pointer.h:

  * The compiler memory barrier in CMM_LOAD_SHARED() ensures that 
value-speculative
  * optimizations (e.g. VSS: Value Speculation Scheduling) does not perform the
  * data read before the pointer read by speculating the value of the pointer.
  * Correct ordering is ensured because the pointer is read as a volatile 
access.
  * This acts as a global side-effect operation, which forbids reordering of
  * dependent memory operations. Note that such concern about 
dependency-breaking
  * optimizations will eventually be taken care of by the "memory_order_consume"
  * addition to forthcoming C++ standard.

(note: CMM_LOAD_SHARED() is the equivalent of READ_ONCE(), but was introduced in
liburcu as a public API before READ_ONCE() existed in the Linux kernel)


this is not directly on topic, but what do you think of porting userspace RCU to 
use the C++ memory model and GCC/LLVM atomic builtins (__atomic_store etc) 
rather than rolling your own?  Tools like thread sanitizer would then understand 
what userspace RCU is doing.  Not to mention the compiler.  More developers 
would understand it too!


From a code organization viewpoint, going down this path would presumably mean 
directly using GCC/LLVM atomic support when available, and falling back on 
something like the current uatomic to emulate them for older compilers.
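That detection could look something like this (a sketch under the assumption that the __ATOMIC_* macros, predefined by gcc >= 4.7 and clang when the builtins exist, are a reliable feature test; load_relaxed is a made-up name, not uatomic's):

```c
/* Use the builtin when the compiler advertises it; otherwise fall back
 * to a volatile access standing in for the current uatomic emulation. */
#ifdef __ATOMIC_RELAXED
# define load_relaxed(addr)				\
	__extension__					\
	({						\
	__typeof__(*(addr)) _v;				\
	__atomic_load((addr), &_v, __ATOMIC_RELAXED);	\
	_v;						\
	})
#else
# define load_relaxed(addr)	(*(volatile __typeof__(*(addr)) *)(addr))
#endif
```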


Some parts of uatomic have pretty clear equivalents (see below), but not all, so 
the conversion could be quite tricky.



Peter tells me the "memory_order_consume" is not something which can be used 
today.


This is a pity, because it seems to have been invented with rcu_dereference in 
mind.


Any information on its status at C/C++ standard levels and implementation-wise ?

Pragmatically speaking, what should we change in liburcu to ensure we don't 
generate
broken code when LTO is enabled ? I suspect there are a few options here:

1) Fail to build if LTO is enabled,
2) Generate slower code for rcu_dereference, either on all architectures or only
on weakly-ordered architectures,
3) Generate different code depending on whether LTO is enabled or not. AFAIU 
this would only
work if every compile unit is aware that it will end up being optimized 
with LTO. Not sure
how this could be done in the context of user-space.
4) [ Insert better idea here. ]

Thoughts ?


Best wishes, Duncan.

PS: We are experimentally running with the following patch, as it already makes 
thread sanitizer a lot happier:


--- a/External/UserspaceRCU/userspace-rcu/include/urcu/system.h
+++ b/External/UserspaceRCU/userspace-rcu/include/urcu/system.h
@@ -26,34 +26,45 @@
  * Identify a shared load. A cmm_smp_rmc() or cmm_smp_mc() should come
  * before the load.
  */
-#define _CMM_LOAD_SHARED(p)	CMM_ACCESS_ONCE(p)
+#define _CMM_LOAD_SHARED(p)				\
+	__extension__					\
+	({						\
+	__typeof__(p) v;				\
+	__atomic_load(&(p), &v, __ATOMIC_RELAXED);	\
+	v;						\
+	})
 
 /*
  * Load a data from shared memory, doing a cache flush if required.
  */
-#define CMM_LOAD_SHARED(p)				\
-	__extension__					\
-	({						\
-	cmm_smp_rmc();					\
-	_CMM_LOAD_SHARED(p);				\
+#define CMM_LOAD_SHARED(p)				\
+	__extension__					\
+	({						\
+	__typeof__(p) v;				\
+	__atomic_load(&(p), &v, __ATOMIC_ACQUIRE);	\
+	v;						\
 	})
 
 /*
  * Identify a shared store. A cmm_smp_wmc() or cmm_smp_mc() should
  * follow the store.
  */
-#define _CMM_STORE_SHARED(x, v)	__extension__ ({ CMM_ACCESS_ONCE(x) = (v); })
+#define _CMM_STORE_SHARED(x, v)				\
+	__extension__					\
+	({						\
+	__typeof__(x) w = v;				\
+	__atomic_store(&(x), &w, __ATOMIC_RELAXED);	\
+	})
 
 /*
  * Store v into x, where x is located in shared memory. Performs the
  * required cache flush after writing. Returns v.