From: Christoph Lameter <[EMAIL PROTECTED]>
__clear_bit_unlock does not need to perform atomic operations on the
variable. Avoid the cmpxchg and simply do a store with release semantics.
Add a compiler barrier to make sure the compiler does not reorder memory
accesses around the release store.
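For context, a minimal, hypothetical sketch (not part of this patch) of the
kind of caller this helps: a bit used as a simple lock, taken with
test_and_set_bit_lock() and released with __clear_bit_unlock(). The names
MY_LOCK_BIT, my_flags, my_lock and my_unlock are made up for illustration;
the acquire side still needs an atomic read-modify-write, but the lock
holder is the only one who clears the bit, so a plain load/mask plus a
release store is enough on the unlock path.

/* Hypothetical example, not part of this patch. */
#include <linux/bitops.h>
#include <asm/processor.h>	/* cpu_relax() */

#define MY_LOCK_BIT	0		/* made-up bit number */
static unsigned long my_flags;		/* made-up word; only the lock bit lives here */

static void my_lock(void)
{
	/* Acquire: atomic RMW with acquire semantics. */
	while (test_and_set_bit_lock(MY_LOCK_BIT, &my_flags))
		cpu_relax();
}

static void my_unlock(void)
{
	/*
	 * Release: only the lock holder clears the bit, and contenders
	 * spinning on an already-set bit never change the word's value,
	 * so the non-atomic load/mask followed by a release store (what
	 * __clear_bit_unlock does after this patch) is sufficient.
	 */
	__clear_bit_unlock(MY_LOCK_BIT, &my_flags);
}

The release store guarantees that all memory accesses made while holding the
lock are visible to other CPUs before the bit appears cleared.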
[EMAIL PROTECTED]: coding-style fixes]
Signed-off-by: Christoph Lameter <[EMAIL PROTECTED]>
Acked-by: Nick Piggin <[EMAIL PROTECTED]>
Cc: "Luck, Tony" <[EMAIL PROTECTED]>
Signed-off-by: Andrew Morton <[EMAIL PROTECTED]>
---
include/asm-ia64/bitops.h | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff -puN include/asm-ia64/bitops.h~ia64-slim-down-__clear_bit_unlock include/asm-ia64/bitops.h
--- a/include/asm-ia64/bitops.h~ia64-slim-down-__clear_bit_unlock
+++ a/include/asm-ia64/bitops.h
@@ -124,10 +124,21 @@ clear_bit_unlock (int nr, volatile void
 /**
  * __clear_bit_unlock - Non-atomically clear a bit with release
  *
- * This is like clear_bit_unlock, but the implementation may use a non-atomic
- * store (this one uses an atomic, however).
+ * This is like clear_bit_unlock, but the implementation uses a store
+ * with release semantics. See also __raw_spin_unlock().
  */
-#define __clear_bit_unlock clear_bit_unlock
+static __inline__ void
+__clear_bit_unlock(int nr, volatile void *addr)
+{
+	__u32 mask, new;
+	volatile __u32 *m;
+
+	m = (volatile __u32 *)addr + (nr >> 5);
+	mask = ~(1 << (nr & 31));
+	new = *m & mask;
+	barrier();
+	asm volatile ("st4.rel.nta [%0] = %1\n\t" :: "r"(m), "r"(new));
+}
 
 /**
  * __clear_bit - Clears a bit in memory (non-atomic version)
_