Carlos O'Donell wrote:
> On Mon, Oct 27, 2008 at 10:05 AM, Andrew Haley <[EMAIL PROTECTED]> wrote:
>>> I've seen this on-and-off again on the hppa-linux port. The issue has,
>>> in my experience, been a compiler problem. My standard operating
>>> procedure is to methodically add volatile to the atomic.h operations
>>> until it goes away, and then work out the compiler mis-optimization.
>>>
>>> The bug is almost always a situation where the lll_unlock is scheduled
>>> before owner = 0, and the assert catches the race condition where you
>>> unlock but have not yet cleared the owner.
>> Are you sure this is a compiler problem?  Unless you use explicit atomic
>> memory accesses or volatile, the compiler is allowed to re-order memory
>> accesses.  Perhaps I'm misunderstanding you.
> 
> Sorry, parsing the above statement requires knowing something about
> how lll_unlock is implemented in glibc.
> 
> The lll_unlock function is supposed to be a memory barrier.
> 
> The function is usually an explicit atomic operation, or a volatile
> asm implementing the futex syscall, i.e. the INTERNAL_SYSCALL macro.

I understand all that, but the question still stands: is the compiler
really moving a memory write past a memory barrier?  ISTR we did have
a discussion on gcc-list about that, but it was a while ago and should
now be fixed.

Andrew.

