From: Wanpeng Li <[email protected]>

The virt_xxx memory barriers are implemented trivially using the low-level
__smp_xxx macros, and on the strongly ordered x86 TSO memory model
__smp_rmb() reduces to a compiler barrier. The mandatory barriers, by
contrast, unconditionally emit hardware memory barrier instructions. This
patch therefore replaces the rmb() calls in kvm_steal_clock() with
virt_rmb().
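
For reference, a rough sketch of the relevant macro expansions on x86-64,
paraphrased rather than quoted verbatim from include/asm-generic/barrier.h
and arch/x86/include/asm/barrier.h (exact definitions may differ by kernel
version):

    /* asm-generic: the virt_* barriers map straight to the __smp_* ones */
    #define virt_rmb()   __smp_rmb()

    /*
     * x86: TSO already keeps loads ordered against other loads, so
     * __smp_rmb() only has to prevent compiler reordering, whereas the
     * mandatory rmb() always emits a fence instruction.
     */
    #define __smp_rmb()  barrier()
    #define rmb()        asm volatile("lfence" ::: "memory")

In kvm_steal_clock() the ordering being enforced is between loads done by
the guest CPU itself, which the hardware already guarantees on x86, so the
cheaper virt_rmb() is sufficient.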

Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Signed-off-by: Wanpeng Li <[email protected]>
---
 arch/x86/kernel/kvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 14f65a5..da5c097 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
        src = &per_cpu(steal_time, cpu);
        do {
                version = src->version;
-               rmb();
+               virt_rmb();
                steal = src->steal;
-               rmb();
+               virt_rmb();
        } while ((version & 1) || (version != src->version));
 
        return steal;
-- 
2.7.4
