I see two paths now:

A) If we go for atomic_inc/dec with such a locking service,
xnarch_memory_barrier__after_atomic_inc & friends will be needed anyway
and could already be introduced now.

Yes, this would be better.

Any thoughts?

I have already sent you this message once, but I'm sending it again. Just
to give some more stuff to think about (although nobody seems to be
interested in memory barriers :) and maybe, if I'm wrong, someone will point
it out.


I just noticed that the code is probably still, at least very theoretically,
not perfectly safe.

let's consider the following scenario:

    op1;
    lock();
    op2: b = c;
    unlock();
    op3;

From Documentation/memory-barriers.txt it follows that the only guarantee
here is that "b = c" is executed inside the lock-unlock section (of course,
that's what locks are for).

But the funny thing is that none of the ops are ordered with respect to each
other.

iow, e.g. the following sequences are possible :

lock(); op1; op2; op3; unlock();
lock(); op3; op2; op1; unlock();

and moreover, pure lock/unlock (without irq disable/enable) doesn't even
imply a compiler barrier for UP.

[ read starting from the line 1150 in the above mentioned doc ]

And now apply all of the above to xnintr_detach() or even Linux's
synchronize_irq(). IOW, spin_unlock() doesn't guarantee we have a mb between
element deletion and checking the active flag :)

Ok, maybe it's just theory; e.g. lock and unlock for x86 seem to imply a
full memory barrier (and, probably, all the other architectures do the same).
Just look at the definitions of mb() and the spinlock ops for x86:
asm("lock; some write ops") does the trick in both cases.


Best regards,
Dmitry Adamushko
Xenomai-core mailing list
