When the tlbs_dirty optimization was introduced, kvm_flush_remote_tlbs()
could be called without holding mmu_lock.  It is now acknowledged that
the function must be called before releasing mmu_lock, and all callers
have already been changed to do so.

This patch adds a comment explaining the rule.
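
For reference, the trick works roughly like this: the sync path bumps
tlbs_dirty whenever it drops a shadow PTE without flushing, and
kvm_flush_remote_tlbs() snapshots the counter, orders the read against the
flush with smp_mb(), flushes, and then clears the counter with cmpxchg()
only if no new dirt was added in between.  The userspace sketch below only
illustrates that pattern; C11 atomics stand in for the kernel's smp_mb()
and cmpxchg(), and struct fake_kvm, remote_flush() and mark_tlbs_dirty()
are made-up names, not the actual kernel code:

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative stand-in for struct kvm; only the field that matters here. */
struct fake_kvm {
	atomic_long tlbs_dirty;
};

/* Stand-in for sending the TLB-flush request to all VCPUs. */
static void remote_flush(struct fake_kvm *kvm)
{
	printf("flushing remote TLBs\n");
}

/*
 * Writer side (what sync_page() does conceptually): record that a shadow
 * PTE was dropped without flushing, so a later flush is still needed.
 */
static void mark_tlbs_dirty(struct fake_kvm *kvm)
{
	atomic_fetch_add(&kvm->tlbs_dirty, 1);
}

/*
 * Flusher side: snapshot the counter, flush, then clear the counter only
 * if nobody dirtied it in the meantime.  If the cmpxchg fails, a concurrent
 * writer added new dirt after the snapshot, so the counter stays non-zero
 * and the next flush will pick it up.
 */
static void flush_remote_tlbs(struct fake_kvm *kvm)
{
	long dirty_count = atomic_load(&kvm->tlbs_dirty);

	/* plays the role of smp_mb() in the kernel code */
	atomic_thread_fence(memory_order_seq_cst);
	remote_flush(kvm);
	atomic_compare_exchange_strong(&kvm->tlbs_dirty, &dirty_count, 0);
}

int main(void)
{
	struct fake_kvm kvm = { .tlbs_dirty = 0 };

	mark_tlbs_dirty(&kvm);
	flush_remote_tlbs(&kvm);
	printf("tlbs_dirty after flush: %ld\n", atomic_load(&kvm.tlbs_dirty));
	return 0;
}

Under the rule described above, the sync path and the flusher both run
under mmu_lock, so they can no longer race and the barrier/cmpxchg pair
only matters for the lockless case; that is exactly what the new comment
records.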

Signed-off-by: Takuya Yoshikawa <[email protected]>
---
 virt/kvm/kvm_main.c |    9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index a9e999a..53521ea 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -184,6 +184,15 @@ static bool make_all_cpus_request(struct kvm *kvm, unsigned int req)
        return called;
 }
 
+/*
+ * tlbs_dirty is used only for optimizing x86's shadow paging code with mmu
+ * notifiers in mind; see the note on sync_page().  Since it is always protected
+ * by mmu_lock there, as long as kvm_flush_remote_tlbs() is called before
+ * releasing mmu_lock, the trick using smp_mb() and cmpxchg() is not necessary.
+ *
+ * Currently, the assumption about kvm_flush_remote_tlbs() callers holds, but
+ * the code is kept as is in case someone changes the rule in the future.
+ */
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
        long dirty_count = kvm->tlbs_dirty;
-- 
1.7.9.5
