On Mon, Sep 21, 2009 at 08:37:18PM -0300, Marcelo Tosatti wrote:
> Use two steps for memslot deletion: mark the slot invalid (which stops 
> instantiation of new shadow pages for that slot, but allows destruction),
> then instantiate the new empty slot.
> 
> Also simplifies kvm_handle_hva locking.
> 
> Signed-off-by: Marcelo Tosatti <[email protected]>
> 

<snip>

> -     if (!npages)
> +     if (!npages) {
> +             slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
> +             if (!slots)
> +                     goto out_free;
> +             memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
> +             if (mem->slot >= slots->nmemslots)
> +                     slots->nmemslots = mem->slot + 1;
> +             slots->memslots[mem->slot].flags |= KVM_MEMSLOT_INVALID;
> +
> +             old_memslots = kvm->memslots;
> +             rcu_assign_pointer(kvm->memslots, slots);
> +             synchronize_srcu(&kvm->srcu);
> +             /* From this point no new shadow pages pointing to a deleted
> +              * memslot will be created.
> +              *
> +              * validation of sp->gfn happens in:
> +              *      - gfn_to_hva (kvm_read_guest, gfn_to_pfn)
> +              *      - kvm_is_visible_gfn (mmu_check_roots)
> +              */
>               kvm_arch_flush_shadow(kvm);
> +             kfree(old_memslots);
> +     }
>  
>       r = kvm_arch_prepare_memory_region(kvm, &new, old, user_alloc);
>       if (r)
>               goto out_free;
>  
> -     spin_lock(&kvm->mmu_lock);
> -     if (mem->slot >= kvm->memslots->nmemslots)
> -             kvm->memslots->nmemslots = mem->slot + 1;
> +#ifdef CONFIG_DMAR
> +     /* map the pages in iommu page table */
> +     if (npages)
> +             r = kvm_iommu_map_pages(kvm, &new);
> +             if (r)
> +                     goto out_free;
> +#endif
>  
> -     *memslot = new;
> -     spin_unlock(&kvm->mmu_lock);
> +     slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
> +     if (!slots)
> +             goto out_free;
> +     memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
> +     if (mem->slot >= slots->nmemslots)
> +             slots->nmemslots = mem->slot + 1;
> +
> +     /* actual memory is freed via old in kvm_free_physmem_slot below */
> +     if (!npages) {
> +             new.rmap = NULL;
> +             new.dirty_bitmap = NULL;
> +             for (i = 0; i < KVM_NR_PAGE_SIZES - 1; ++i)
> +                     new.lpage_info[i] = NULL;
> +     }
> +
> +     slots->memslots[mem->slot] = new;
> +     old_memslots = kvm->memslots;
> +     rcu_assign_pointer(kvm->memslots, slots);
> +     synchronize_srcu(&kvm->srcu);
>  
>       kvm_arch_commit_memory_region(kvm, mem, old, user_alloc);

Paul,

There is a scenario in which this path, which updates the KVM memory
slots, is called relatively often.

Each synchronize_srcu() call takes about 10ms (on average, 3ms per
synchronize_sched() call), which is hurting us.

Is this expected? Is there any possibility for synchronize_srcu()
optimization?

There are other angles we can work on, such as reducing the frequency of
memory slot updates, but I'm wondering what can be done regarding SRCU
itself.

TIA

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html