>Unfortunately, there is one rather big pitfall - the operation is no longer
>atomic. Both the increment and the test must be done atomically wrt the
>other atomic functions.
Yes, that's true. That patch was more to lend some credence to my idea that
it was up() at fault than anything else. Here's another which also seems to
work for me. I admit to not being 100% sure what's going on here any more - I
now suspect that part of the lossage I was suffering yesterday was caused by
an unrelated error on my part.
The armv implementation of `up' might benefit from having the LS -> LE change
as well, by the way. If I'm understanding it correctly, the current code will
*always* take the slow path through __up_wakeup and __up even when no tasks
are actually waiting. A somewhat unscientific benchmark comparison I just ran
does seem to confirm this.
p.
--- include/asm-arm/proc-armo/semaphore.h Fri May 14 20:17:20 1999
+++ include/asm-arm/proc-armo/semaphore.h Sun May 16 11:17:23 1999
@@ -99,9 +99,8 @@
ldr lr, [%0]
adds lr, lr, #1
str lr, [%0]
- mov lr, pc, lsr #28
- orrls r0, r0, #0x80000000 @ set N
- teqp r0, lr, lsl #28
+ orrle r0, r0, #0x80000000 @ set N
+ teqp r0, #0
movmi r0, %0
blmi " SYMBOL_NAME_STR(__up_wakeup)
: