On 05/15/2011 12:35 AM, Xiao Guangrong wrote:
Simply return from the kvm_mmu_pte_write path if no shadow page is
write-protected; then we can avoid walking all shadow pages and holding
mmu-lock.
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 2841805..971e2d2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -498,6 +498,7 @@ static void account_shadowed(struct kvm *kvm, gfn_t gfn)
                linfo = lpage_info_slot(gfn, slot, i);
                linfo->write_count += 1;
        }
+       atomic_inc(&kvm->arch.indirect_shadow_pages);
  }

  static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
@@ -513,6 +514,7 @@ static void unaccount_shadowed(struct kvm *kvm, gfn_t gfn)
                linfo->write_count -= 1;
                WARN_ON(linfo->write_count < 0);
        }
+       atomic_dec(&kvm->arch.indirect_shadow_pages);
  }

These atomic ops are always called from within the spinlock, so we don't need an atomic_t here.

Sorry, I should have noticed this on the first version.

--
error compiling committee.c: too many arguments to function
