This is a note to let you know that I've just added the patch titled

    KVM: Fix write protection race during dirty logging

to the 3.3-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-fix-write-protection-race-during-dirty-logging.patch
and it can be found in the queue-3.3 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.
>From [email protected] Wed May 9 06:11:17 2012
From: Avi Kivity <[email protected]>
Date: Wed, 9 May 2012 16:10:39 +0300
Subject: KVM: Fix write protection race during dirty logging
To: [email protected]
Cc: Marcelo Tosatti <[email protected]>, [email protected]
Message-ID: <[email protected]>
From: Takuya Yoshikawa <[email protected]>
(cherry picked from commit 6dbf79e7164e9a86c1e466062c48498142ae6128)
This patch fixes a race introduced by:

  commit 95d4c16ce78cb6b7549a09159c409d52ddd18dae
  KVM: Optimize dirty logging by rmap_write_protect()
While pages are being protected for dirty logging, other threads may
also try to protect a page, in mmu_sync_children() or kvm_mmu_get_page().

In such a case, because get_dirty_log releases mmu_lock before flushing
the TLBs, the following race condition can happen:
  A (get_dirty_log)     B (another thread)

  lock(mmu_lock)
  clear pte.w
  unlock(mmu_lock)
                        lock(mmu_lock)
                        pte.w is already cleared
                        unlock(mmu_lock)
                        skip TLB flush
                        return
  ...
  TLB flush
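
In code terms, thread A's racy pre-patch pattern looked roughly like
this.  It is a sketch reconstructed from the lines the diff below
removes, not a verbatim copy of the source; all identifiers are taken
from the diff itself:

	for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
		unsigned long gfn = memslot->base_gfn + gfn_offset;

		spin_lock(&kvm->mmu_lock);
		kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
		spin_unlock(&kvm->mmu_lock);	/* B may take mmu_lock here,
						 * see pte.w already clear,
						 * and skip its own flush */
	}
	kvm_flush_remote_tlbs(kvm);		/* stale entries live until here */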
Though thread B assumes the page has already been write-protected when
it returns, the stale TLB entry breaks that assumption: the guest can
keep writing to the page through the cached, still-writable translation
without faulting, so those writes are never logged.
This patch fixes the problem by making get_dirty_log hold mmu_lock
until the TLBs have been flushed.
Signed-off-by: Takuya Yoshikawa <[email protected]>
Signed-off-by: Marcelo Tosatti <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
arch/x86/kvm/x86.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2997,6 +2997,8 @@ static void write_protect_slot(struct kv
 			       unsigned long *dirty_bitmap,
 			       unsigned long nr_dirty_pages)
 {
+	spin_lock(&kvm->mmu_lock);
+
 	/* Not many dirty pages compared to # of shadow pages. */
 	if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
 		unsigned long gfn_offset;
@@ -3004,16 +3006,13 @@ static void write_protect_slot(struct kv
 		for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
 			unsigned long gfn = memslot->base_gfn + gfn_offset;
 
-			spin_lock(&kvm->mmu_lock);
 			kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
-			spin_unlock(&kvm->mmu_lock);
 		}
 		kvm_flush_remote_tlbs(kvm);
-	} else {
-		spin_lock(&kvm->mmu_lock);
+	} else
 		kvm_mmu_slot_remove_write_access(kvm, memslot->id);
-		spin_unlock(&kvm->mmu_lock);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
 }
 
 /*
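
For reference, write_protect_slot() with the patch applied reads as
follows.  This is a sketch reconstructed from the hunks above, not a
verbatim copy of the 3.3 tree; the hunk headers abbreviate the function
signature, so its first two parameter lines are an assumption inferred
from the kvm-> and memslot-> uses in the context lines:

static void write_protect_slot(struct kvm *kvm,
			       struct kvm_memory_slot *memslot,
			       unsigned long *dirty_bitmap,
			       unsigned long nr_dirty_pages)
{
	/*
	 * mmu_lock is now held across both the write protection and the
	 * TLB flush, so another thread that takes mmu_lock and finds
	 * pte.w already clear is guaranteed the stale entries are gone.
	 */
	spin_lock(&kvm->mmu_lock);

	/* Not many dirty pages compared to # of shadow pages. */
	if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
		unsigned long gfn_offset;

		for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
			unsigned long gfn = memslot->base_gfn + gfn_offset;

			kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
		}
		kvm_flush_remote_tlbs(kvm);
	} else
		kvm_mmu_slot_remove_write_access(kvm, memslot->id);

	spin_unlock(&kvm->mmu_lock);
}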
Patches currently in stable-queue which might be from [email protected] are
queue-3.3/kvm-s390-do-store-status-after-handling-stop_on_stop-bit.patch
queue-3.3/kvm-nvmx-fix-erroneous-exception-bitmap-check.patch
queue-3.3/kvm-s390-sanitize-fpc-registers-for-kvm_set_fpu.patch
queue-3.3/kvm-x86-emulator-correctly-mask-pmc-index-bits-in-rdpmc-instruction-emulation.patch
queue-3.3/kvm-mmu_notifier-flush-tlbs-before-releasing-mmu_lock.patch
queue-3.3/kvm-vmx-fix-kvm_set_shared_msr-called-in-preemptible-context.patch
queue-3.3/kvm-vmx-vmx_set_cr0-expects-kvm-srcu-locked.patch
queue-3.3/kvm-ensure-all-vcpus-are-consistent-with-in-kernel-irqchip-settings.patch
queue-3.3/kvm-fix-write-protection-race-during-dirty-logging.patch
queue-3.3/kvm-lock-slots_lock-around-device-assignment.patch
queue-3.3/kvm-vmx-fix-delayed-load-of-shared-msrs.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to [email protected]
More majordomo info at http://vger.kernel.org/majordomo-info.html