On Mon, Nov 24, 2008 at 01:04:23PM +0100, Marcelo Tosatti wrote:
> On Sun, Nov 23, 2008 at 12:36:29PM +0200, Avi Kivity wrote:
> > Marcelo Tosatti wrote:
> >
> >> The cost of hash table and memslot lookups is quite significant if the
> >> workload is pagetable write intensive, resulting in increased mmu_lock
> >> contention.
> >>
> >> @@ -1593,7 +1593,16 @@ static int set_spte(struct kvm_vcpu *vcp
> >>  	spte |= PT_WRITABLE_MASK;
> >> -	if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
> >> +	/*
> >> +	 * Optimization: for pte sync, if spte was writable the hash
> >> +	 * lookup is unnecessary (and expensive). Write protection
> >> +	 * is responsibility of mmu_get_page / kvm_sync_page.
> >> +	 * Same reasoning can be applied to dirty page accounting.
> >> +	 */
> >> +	if (sync_page && is_writeble_pte(*shadow_pte))
> >> +		goto set_pte;
> >>
> >
> > What if *shadow_pte points at a different page? Is that possible?
>
> To a different gfn? Then sync_page will have nuked the spte:
>
>     if (gpte_to_gfn(gpte) != gfn || !is_present_pte(gpte) ||
>         !(gpte & PT_ACCESSED_MASK)) {
>             u64 nonpresent;
>             ..
>             set_shadow_pte(&sp->spt[i], nonpresent);
>     }
>
> Otherwise:
>
> /*
>  * Using the cached information from sp->gfns is safe because:
>  * - The spte has a reference to the struct page, so the pfn for a given
>  *   gfn can't change unless all sptes pointing to it are nuked first.
*shadow_pte can point to a different page if the guest updates the
pagetable, a fault comes in before resync, and the fault updates the
spte with a new gfn (and pfn) via mmu_set_spte. In that case the gfn
cache is updated too, since mmu_set_spte does:
        } else if (pfn != spte_to_pfn(*shadow_pte)) {
                printk("hfn old %lx new %lx\n",
                       spte_to_pfn(*shadow_pte), pfn);
                rmap_remove(vcpu->kvm, shadow_pte);
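
For reference, here is a toy userspace model of the invariant the fast
path relies on. This is an illustration only, not kernel code:
need_write_protect_slow is a made-up stand-in for the hash + memslot
walk done by mmu_need_write_protect, and the spte layout is reduced to
two flag bits.

#include <stdbool.h>
#include <stdio.h>

#define PT_PRESENT_MASK  (1ULL << 0)
#define PT_WRITABLE_MASK (1ULL << 1)

struct shadow_page {
        unsigned long gfns[4];          /* models sp->gfns, the gfn cache */
        unsigned long long spt[4];      /* models sp->spt, the shadow ptes */
};

static bool is_writable(unsigned long long spte)
{
        return spte & PT_WRITABLE_MASK;
}

/* Hypothetical stand-in for the expensive hash + memslot lookup that
 * mmu_need_write_protect performs; this is the cost the patch skips. */
static bool need_write_protect_slow(unsigned long gfn)
{
        printf("slow path: hash lookup for gfn %lx\n", gfn);
        return false;
}

static void set_spte(struct shadow_page *sp, int i, unsigned long gfn,
                     bool sync_page)
{
        unsigned long long *spte = &sp->spt[i];

        /* The quoted fast path: at resync time a writable spte was
         * already write protected (or allowed) when first mapped, so
         * the lookup below is redundant. */
        if (sync_page && is_writable(*spte))
                goto set_pte;

        if (need_write_protect_slow(gfn))
                return;         /* leave the spte read-only */

set_pte:
        *spte = PT_PRESENT_MASK | PT_WRITABLE_MASK;
        sp->gfns[i] = gfn;      /* writable bit and gfn cache move together */
}

int main(void)
{
        struct shadow_page sp = { { 0 }, { 0 } };

        set_spte(&sp, 0, 0x1234, false);        /* first map: lookup runs */
        set_spte(&sp, 0, 0x1234, true);         /* resync: lookup skipped */
        return 0;
}

The property the optimization depends on is visible at the bottom of
set_spte above: the writable bit and the sp->gfns entry are updated
together, so finding a writable spte at resync time implies the cached
gfn still matches what the spte maps.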