https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65697

--- Comment #41 from mwahab at gcc dot gnu.org ---
(In reply to torvald from comment #38)
> (In reply to Andrew Macleod from comment #34)
> 
> Also, if you look at the IA-64 __sync_lock_release vs. GCC docs'
> __sync_lock_release, the latter is like x86/TSO.  Do you have any info on
> which other semantics __sync was supposed to adhere to?
> 
> One potential way to solve it would be to just require code that uses __sync
> to more or less implement an IA-64 or x86 memory model, modulo allowing
> compiler-reordering and optimization between adjacent non-__sync memory
> accesses.  This could be inefficient on ARM (see James' examples) and
> perhaps Power too (or not -- see Jakub's comments).

If the __sync barriers are as described in the GCC manual (that is, a barrier
is atomic and its restrictions apply to all data references), then the AArch64
backend doesn't currently emit strong enough barriers.
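
To make concrete what that reading requires, here is a small example of my
own (not taken from the manual) that depends on a __sync full barrier
ordering plain, non-__sync accesses as well as the atomic location itself:

  /* Hand-off through a plain variable, relying on the documented __sync
     full barrier applying to all data references.  */
  int payload;  /* plain, non-atomic data */
  int ready;    /* flag only ever touched through __sync builtins */

  void produce (void)
  {
    payload = 42;                      /* plain store */
    __sync_fetch_and_add (&ready, 1);  /* full barrier: the plain store
                                          must not be reordered past it */
  }

  int consume (void)
  {
    while (__sync_fetch_and_add (&ready, 0) == 0)
      ;                                /* full barrier on each iteration */
    return payload;                    /* expected to observe 42 */
  }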

For MEMMODEL_SEQ_CST, the problem was visible enough, and the solution I
suggested (extending the set of available memmodel types) simple enough, that
the changes it would need could be justified. I don't think that's true for
the MEMMODEL_ACQUIRE case, which seems much less likely to be hit in practice
and would be rather more disruptive to fix.
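
For reference, the MEMMODEL_ACQUIRE case comes, as I understand it, from
__sync_lock_test_and_set, which the manual documents as an acquire barrier;
a minimal spinlock sketch of my own showing the pattern:

  static int lock;

  void take_lock (void)
  {
    /* Acquire barrier: accesses after the loop must not be hoisted
       above the successful test-and-set.  */
    while (__sync_lock_test_and_set (&lock, 1))
      ;
  }

  void drop_lock (void)
  {
    __sync_lock_release (&lock);  /* release barrier */
  }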

I believe that AArch64 is the only current target where the code needs to be
strengthened. Since extending the set of memmodels is difficult to justify,
and (IMO) so is resurrecting the __sync patterns, I suggest just adding a
target hook to allow the expansion of __sync calls to be overridden. That
would let AArch64 set a target-specific memmodel value, as targets are
currently permitted to do, which can then be passed through the existing
__atomics mechanisms in the middle end through to the AArch64 backend. No
other backend would need to be touched.
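
Roughly, the idea is the sketch below; the names and values are made up for
illustration and are not the real GCC definitions or a proposal for the
hook's actual signature:

  /* Illustrative only: stand-in values, not GCC's memmodel enum.  */
  enum memmodel_sketch
  {
    SKETCH_MEMMODEL_SEQ_CST = 5,
    SKETCH_MEMMODEL_MASK    = 7,
    /* Hypothetical target-specific bit asking the backend for the
       stronger __sync-style barrier.  */
    SKETCH_MEMMODEL_SYNC    = 8
  };

  /* What an overridden __sync expansion might do: keep the memmodel the
     generic code would have used, but tag it for the backend.  */
  static enum memmodel_sketch
  strengthen_for_sync (enum memmodel_sketch model)
  {
    return (enum memmodel_sketch) (model | SKETCH_MEMMODEL_SYNC);
  }

  /* What the backend expanders might check when choosing barriers.  */
  static int
  is_sync_model (enum memmodel_sketch model)
  {
    return (model & SKETCH_MEMMODEL_SYNC) != 0;
  }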

If it happens that future architectures have a similar problem then we can
reconsider whether any changes need to be made in the target-independent
expansions.

Does that sound like a reasonable approach?
