Hi Paolo,

* Paolo Bonzini ([email protected]) wrote:
> In userspace we can assume no accesses to write-combining memory occur,
> and also that there are no non-temporal load/stores (people would presumably
> write those with assembly or intrinsics and put appropriate lfence/sfence
> manually). So rmb and wmb are no-ops on x86.

What about memory barriers for DMA with devices? For these, we might want to
define cmm_wmb/rmb and cmm_smp_wmb/rmb differently (keep the fences for DMA
accesses). People who want memory barriers for non-temporal load/stores could
then use the cmm_wmb/rmb variants too; see the sketch just below.
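Something along these lines, as a rough sketch (not a patch; it assumes the
cmm_smp_*() variants can be overridden in the arch header rather than taking
the generic fallback definitions):

#ifdef CONFIG_RCU_HAVE_FENCE
/* Mandatory barriers: keep the fences, so DMA and non-temporal accesses
 * (write-combining memory, movnt* stores) stay ordered. */
#define cmm_mb()	asm volatile("mfence":::"memory")
#define cmm_rmb()	asm volatile("lfence":::"memory")
#define cmm_wmb()	asm volatile("sfence":::"memory")

/* SMP barriers: only order ordinary cacheable accesses against other
 * CPUs, so on x86 a compiler barrier suffices, except for the full
 * barrier, which still needs mfence to order stores against later
 * loads. */
#define cmm_smp_mb()	asm volatile("mfence":::"memory")
#define cmm_smp_rmb()	asm volatile("":::"memory")
#define cmm_smp_wmb()	asm volatile("":::"memory")
#endif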
> 
> But IDT chips are an exception, so keep wmb on 32-bit and document better
> the rationale.
> 
> Signed-off-by: Paolo Bonzini <[email protected]>
> ---
>  urcu/arch/x86.h |   14 +++++++++-----
>  1 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/urcu/arch/x86.h b/urcu/arch/x86.h
> index 9e5411f..d25f13d 100644
> --- a/urcu/arch/x86.h
> +++ b/urcu/arch/x86.h
> @@ -33,15 +33,19 @@ extern "C" {
>  
>  #ifdef CONFIG_RCU_HAVE_FENCE
>  #define cmm_mb()    asm volatile("mfence":::"memory")
> -#define cmm_rmb()   asm volatile("lfence":::"memory")
> -#define cmm_wmb()   asm volatile("sfence"::: "memory")
> +#define cmm_rmb()   asm volatile("":::"memory")
> +#define cmm_wmb()   asm volatile(""::: "memory")
>  #else
>  /*
> - * Some non-Intel clones support out of order store. cmm_wmb() ceases to be a
> - * nop for these.
> + * IDT WinChip supports weak store ordering, and the kernel may enable it
> + * under our feet; cmm_wmb() ceases to be a nop for these processors.
> + *
> + * The same would hold for cmm_rmb() on some old PentiumPro multiprocessor
> + * systems that have an errata, but the Linux kernel says that "Even distro
> + * kernels should think twice before enabling this".

Maybe we should have configure options --without-x86-ppro-support and
--without-x86-idt-winchip-support for this? I really want the default to be
bullet-proof, so deactivating support for these specific architectures on a
per-distro basis would make more sense. (A rough sketch of such a switch
follows below my signature.)

Thanks,

Mathieu

>  */
>  #define cmm_mb()    asm volatile("lock; addl $0,0(%%esp)":::"memory")
> -#define cmm_rmb()   asm volatile("lock; addl $0,0(%%esp)":::"memory")
> +#define cmm_rmb()   asm volatile("":::"memory")
>  #define cmm_wmb()   asm volatile("lock; addl $0,0(%%esp)"::: "memory")
>  #endif

-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com
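P.S.: a rough sketch of how such a configure switch could gate the code.
URCU_NO_X86_IDT_WINCHIP is a made-up name here, standing in for whatever
--without-x86-idt-winchip-support would end up defining at configure time
(the PPro case would get an analogous URCU_NO_X86_PPRO guard on cmm_rmb):

#ifndef CONFIG_RCU_HAVE_FENCE
#ifdef URCU_NO_X86_IDT_WINCHIP
/* Distro opted out of IDT WinChip support: stores are strongly ordered
 * on the remaining processors, so a compiler barrier is enough. */
#define cmm_wmb()	asm volatile("":::"memory")
#else
/* Default, bullet-proof: cope with weak store ordering. */
#define cmm_wmb()	asm volatile("lock; addl $0,0(%%esp)":::"memory")
#endif
#endif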
