lwsync orders loads in cacheable memory with respect to other loads,
and stores in cacheable memory with respect to other stores.  Use it
to implement cmm_smp_rmb()/cmm_smp_wmb().

The heavyweight sync instruction is still used for the "full"
cmm_rmb()/cmm_wmb() operations, as well as for cmm_smp_mb().
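
For illustration (not part of the patch), the classic message-passing
pattern is the kind of code these lwsync-based barriers are meant for.
The variable and function names below are made up for the example, and
real liburcu code would also use CMM_LOAD_SHARED/CMM_STORE_SHARED for
the shared accesses themselves:

#include <assert.h>

static volatile int data, ready;

static void producer(void)
{
	data = 42;
	cmm_smp_wmb();	/* store-store: publish data before the flag */
	ready = 1;
}

static void consumer(void)
{
	while (!ready)
		;	/* spin until the flag is observed */
	cmm_smp_rmb();	/* load-load: order the flag load before data */
	assert(data == 42);	/* cannot fire given the paired barriers */
}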

Signed-off-by: Paolo Bonzini <[email protected]>
---
 urcu/arch/ppc.h |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/urcu/arch/ppc.h b/urcu/arch/ppc.h
index a03d688..05f7db6 100644
--- a/urcu/arch/ppc.h
+++ b/urcu/arch/ppc.h
@@ -32,7 +32,15 @@ extern "C" {
 /* Include size of POWER5+ L3 cache lines: 256 bytes */
 #define CAA_CACHE_LINE_SIZE    256
 
-#define cmm_mb()    asm volatile("sync":::"memory")
+#define cmm_mb()         asm volatile("sync":::"memory")
+
+/* lwsync does not preserve ordering of cacheable vs. non-cacheable
+ * accesses, but it is sufficient when MMIO is not in use.  An
+ * eieio+lwsync pair is also not enough for rmb, because it orders
+ * cacheable and non-cacheable memory operations separately, i.e.
+ * not the latter against the former.  */
+#define cmm_smp_rmb()    asm volatile("lwsync":::"memory")
+#define cmm_smp_wmb()    asm volatile("lwsync":::"memory")
 
 #define mftbl()                                                \
        ({                                              \
-- 
1.7.6
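
As a postscript on why cmm_mb() keeps the full sync: lwsync does not
order a store against a later load, so store-buffering code such as the
sketch below (names made up for the example) still needs sync.  With
lwsync in place of sync, r0 == 0 && r1 == 0 would be a permitted
outcome:

static volatile int x, y, r0, r1;

static void cpu0(void)	/* runs on one CPU */
{
	x = 1;
	cmm_mb();	/* sync: orders the store to x before the load of y */
	r0 = y;
}

static void cpu1(void)	/* runs concurrently on another CPU */
{
	y = 1;
	cmm_mb();	/* lwsync here would not forbid r0 == r1 == 0 */
	r1 = x;
}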

