On Wed, Mar 26, 2008 at 08:22:31PM +0100, Andrea Arcangeli wrote:
> what happens if invalidate_page runs after rmap_remove is returned
> (the spte isn't visible anymore by the rmap code and in turn by
> invalidate_page) but before the set_shadow_pte(nonpresent) runs.

Thinking about it some more, the mmu_lock is meant to prevent this, so
invalidate_page should wait on it. As long as the kvm tlb flush happens
inside the mmu_lock we should be safe.

Fixing it with mmu notifiers is also the higher-performance way. This
would be the patch if we decide to go that route.

Signed-off-by: Andrea Arcangeli <[EMAIL PROTECTED]>

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95c12bc..80cf172 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -550,6 +550,7 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
        sp = page_header(__pa(spte));
        page = spte_to_page(*spte);
        mark_page_accessed(page);
+       BUG_ON(page_count(page) <= 1);
        if (is_writeble_pte(*spte))
                kvm_release_page_dirty(page);
        else
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 30bf832..a49987c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -741,6 +741,10 @@ static struct vm_operations_struct kvm_vcpu_vm_ops = {
 static int kvm_vcpu_mmap(struct file *file, struct vm_area_struct *vma)
 {
        vma->vm_ops = &kvm_vcpu_vm_ops;
+#ifndef CONFIG_MMU_NOTIFIER
+       /* prevent the VM from releasing pages under spte mappings */
+       vma->vm_flags |= VM_LOCKED;
+#endif
        return 0;
 }
 
_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
