On Thu, 2008-09-11 at 10:43 -0300, Marcelo Tosatti wrote:
> plain text document attachment (kvm-use-fast-gup)
> Convert gfn_to_pfn to use get_user_pages_fast, which can do lockless
> pagetable lookups on x86. Kernel compilation on 4-way guest is 3.7%
> faster on VMX.
>
> Hollis, can you fix kvmppc_mmu_map? gfn_to_page must not be called with
> mmap_sem held.
>
> Looks tricky:
> /* Must be called with mmap_sem locked for writing. */
> static void kvmppc_44x_shadow_release(struct kvm_vcpu *vcpu,
Actually, the comment is wrong, so it's not that tricky. ;) Marcelo,
after Avi applies the following patch, could you respin and remove the
locking around PPC's gfn_to_pfn() too? Thanks!
kvm: ppc: kvmppc_44x_shadow_release() does not require mmap_sem to be locked
Signed-off-by: Hollis Blanchard <[EMAIL PROTECTED]>
diff --git a/arch/powerpc/kvm/44x_tlb.c b/arch/powerpc/kvm/44x_tlb.c
--- a/arch/powerpc/kvm/44x_tlb.c
+++ b/arch/powerpc/kvm/44x_tlb.c
@@ -110,7 +110,6 @@ static int kvmppc_44x_tlbe_is_writable(s
return tlbe->word2 & (PPC44x_TLB_SW|PPC44x_TLB_UW);
}
-/* Must be called with mmap_sem locked for writing. */
static void kvmppc_44x_shadow_release(struct kvm_vcpu *vcpu,
unsigned int index)
{
@@ -150,17 +149,16 @@ void kvmppc_mmu_map(struct kvm_vcpu *vcp
/* Get reference to new page. */
down_read(&current->mm->mmap_sem);
new_page = gfn_to_page(vcpu->kvm, gfn);
+ up_read(&current->mm->mmap_sem);
if (is_error_page(new_page)) {
printk(KERN_ERR "Couldn't get guest page for gfn %lx!\n", gfn);
kvm_release_page_clean(new_page);
- up_read(&current->mm->mmap_sem);
return;
}
hpaddr = page_to_phys(new_page);
/* Drop reference to old page. */
kvmppc_44x_shadow_release(vcpu, victim);
- up_read(&current->mm->mmap_sem);
vcpu->arch.shadow_pages[victim] = new_page;
@@ -194,7 +192,6 @@ void kvmppc_mmu_invalidate(struct kvm_vc
int i;
/* XXX Replace loop with fancy data structures. */
- down_write(&current->mm->mmap_sem);
for (i = 0; i <= tlb_44x_hwater; i++) {
struct tlbe *stlbe = &vcpu->arch.shadow_tlb[i];
unsigned int tid;
@@ -219,7 +216,6 @@ void kvmppc_mmu_invalidate(struct kvm_vc
stlbe->tid, stlbe->word0, stlbe->word1,
stlbe->word2, handler);
}
- up_write(&current->mm->mmap_sem);
}
/* Invalidate all mappings on the privilege switch after PID has been changed.
@@ -231,7 +227,6 @@ void kvmppc_mmu_priv_switch(struct kvm_v
if (vcpu->arch.swap_pid) {
/* XXX Replace loop with fancy data structures. */
- down_write(&current->mm->mmap_sem);
for (i = 0; i <= tlb_44x_hwater; i++) {
struct tlbe *stlbe = &vcpu->arch.shadow_tlb[i];
@@ -243,7 +238,6 @@ void kvmppc_mmu_priv_switch(struct kvm_v
stlbe->tid, stlbe->word0, stlbe->word1,
stlbe->word2, handler);
}
- up_write(&current->mm->mmap_sem);
vcpu->arch.swap_pid = 0;
}
--
Hollis Blanchard
IBM Linux Technology Center