On Wed, 3 May 2023 at 08:29, Richard Henderson
<richard.hender...@linaro.org> wrote:
>
> There is an edge condition prior to gcc13 for which optimization
> is required to generate 16-byte atomic sequences.  Detect this.
>
> Signed-off-by: Richard Henderson <richard.hender...@linaro.org>
> ---
>  accel/tcg/ldst_atomicity.c.inc | 38 ++++++++++++++++++-------
>  meson.build                    | 52 ++++++++++++++++++++++------------
>  2 files changed, 61 insertions(+), 29 deletions(-)
>
> @@ -676,28 +695,24 @@ static inline void store_atomic8(void *pv, uint64_t val)
>   *
>   * Atomically store 16 aligned bytes to @pv.
>   */
> -static inline void store_atomic16(void *pv, Int128 val)
> +static inline void ATTRIBUTE_ATOMIC128_OPT
> +store_atomic16(void *pv, Int128Alias val)
>  {
>  #if defined(CONFIG_ATOMIC128)
>      __uint128_t *pu = __builtin_assume_aligned(pv, 16);
> -    Int128Alias new;
> -
> -    new.s = val;
> -    qatomic_set__nocheck(pu, new.u);
> +    qatomic_set__nocheck(pu, val.u);
>  #elif defined(CONFIG_CMPXCHG128)
>      __uint128_t *pu = __builtin_assume_aligned(pv, 16);
>      __uint128_t o;
> -    Int128Alias n;
>
>      /*
>       * Without CONFIG_ATOMIC128, __atomic_compare_exchange_n will always
>       * defer to libatomic, so we must use __sync_val_compare_and_swap_16
>       * and accept the sequential consistency that comes with it.
>       */
> -    n.s = val;
>      do {
>          o = *pu;
> -    } while (!__sync_bool_compare_and_swap_16(pu, o, n.u));
> +    } while (!__sync_bool_compare_and_swap_16(pu, o, val.u));
>  #else
>      qemu_build_not_reached();
>  #endif

Should this change be in a different patch? It doesn't seem
related to the meson detection.

Otherwise
Reviewed-by: Peter Maydell <peter.mayd...@linaro.org>

thanks
-- PMM