On 02/25/2016 10:15 AM, Takuya Yoshikawa wrote:
> On 2016/02/24 22:17, Paolo Bonzini wrote:
>> Move the call to kvm_mmu_flush_or_zap outside the loop.
>>
>> Signed-off-by: Paolo Bonzini <pbonz...@redhat.com>
>> ---
>>   arch/x86/kvm/mmu.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 725316df32ec..6d47b5c43246 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2029,24 +2029,27 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>>         struct mmu_page_path parents;
>>         struct kvm_mmu_pages pages;
>>         LIST_HEAD(invalid_list);
>> +       bool flush = false;
>>
>>         while (mmu_unsync_walk(parent, &pages)) {
>>                 bool protected = false;
>> -               bool flush = false;
>>
>>                 for_each_sp(pages, sp, parents, i)
>>                         protected |= rmap_write_protect(vcpu, sp->gfn);
>>
>> -               if (protected)
>> +               if (protected) {
>>                         kvm_flush_remote_tlbs(vcpu->kvm);
>> +                       flush = false;
>> +               }
>>
>>                 for_each_sp(pages, sp, parents, i) {
>>                         flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>>                         mmu_pages_clear_parents(&parents);
>>                 }
>> -               kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
>>                 cond_resched_lock(&vcpu->kvm->mmu_lock);
>
> This may release the mmu_lock before committing the zapping.
> Is it safe?  If so, we may want to see the reason in the changelog.

It is indeed unsafe; please do not do it.

