On 09/22/2009 02:37 AM, Marcelo Tosatti wrote:
> Use two steps for memslot deletion: mark the slot invalid (which stops
> instantiation of new shadow pages for that slot, but allows destruction),
> then instantiate the new empty slot.
>
> Also simplifies kvm_handle_hva locking.
>  unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm)
>  {
> -	int i;
> +	int i, idx;
>  	unsigned int nr_mmu_pages;
>  	unsigned int nr_pages = 0;
> +	struct kvm_memslots *slots;
>
> -	for (i = 0; i < kvm->memslots->nmemslots; i++)
> -		nr_pages += kvm->memslots->memslots[i].npages;
> +	idx = srcu_read_lock(&kvm->srcu);
Doesn't the caller hold the srcu_read_lock() here?
> Index: kvm-slotslock/arch/x86/kvm/vmx.c
> ===================================================================
> --- kvm-slotslock.orig/arch/x86/kvm/vmx.c
> +++ kvm-slotslock/arch/x86/kvm/vmx.c
> @@ -24,6 +24,7 @@
>  #include <linux/mm.h>
>  #include <linux/highmem.h>
>  #include <linux/sched.h>
> +#include <linux/srcu.h>
>  #include <linux/moduleparam.h>
>  #include <linux/ftrace_event.h>
>  #include "kvm_cache_regs.h"
> @@ -1465,10 +1466,18 @@ static void enter_pmode(struct kvm_vcpu
>  static gva_t rmode_tss_base(struct kvm *kvm)
>  {
>  	if (!kvm->arch.tss_addr) {
> -		gfn_t base_gfn = kvm->memslots->memslots[0].base_gfn +
> -				 kvm->memslots->memslots[0].npages - 3;
> +		struct kvm_memslots *slots;
> +		gfn_t base_gfn;
> +		int idx;
> +
> +		idx = srcu_read_lock(&kvm->srcu);
> +		slots = rcu_dereference(kvm->memslots);
> +		base_gfn = slots->memslots[0].base_gfn +
> +			   slots->memslots[0].npages - 3;
> +		srcu_read_unlock(&kvm->srcu, idx);
>  		return base_gfn << PAGE_SHIFT;
>  	}
> +
And here? Maybe we should take the srcu_lock in vcpu_load/put and only
drop it when going into vcpu context or explicitly sleeping, just to
simplify things.
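Something like the following, roughly (a hand-written sketch, not
compiled against any tree; the srcu_idx field and the elided setup are
hypothetical):

	void vcpu_load(struct kvm_vcpu *vcpu)
	{
		...
		vcpu->srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
	}

	void vcpu_put(struct kvm_vcpu *vcpu)
	{
		srcu_read_unlock(&vcpu->kvm->srcu, vcpu->srcu_idx);
		...
	}

Then per-site srcu_read_lock() calls like the ones above become
unnecessary on the vcpu paths, at the cost of holding the SRCU read side
for longer.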
--
Do not meddle in the internals of kernels, for they are subtle and quick to
panic.