On Wed, Jan 23, 2013 at 07:18:11PM +0900, Takuya Yoshikawa wrote:
> We noticed that kvm_mmu_zap_all() could take hundreds of milliseconds
> to zap mmu pages with mmu_lock held.
> 
> Although we need conditional rescheduling to fix this issue completely,
> we can reduce the hold time to some extent by moving
> free_zapped_mmu_pages() out of mmu_lock protection.  Since invalid_list
> can be very long, the effect is not negligible.
> 
> Note: this patch does not handle the non-trivial cases.
> 
> Signed-off-by: Takuya Yoshikawa <[email protected]>
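
(For context, a minimal sketch of the shape the patch gives kvm_mmu_zap_all(),
based on the description above: pages are collected on invalid_list under
mmu_lock, and the actual freeing is deferred until after the lock is dropped.
free_zapped_mmu_pages() and the exact split with kvm_mmu_commit_zap_page()
are assumptions about the series, not the patch itself.)

        /*
         * Sketch only: zap under mmu_lock, free the zapped pages after
         * dropping it, so that freeing a long invalid_list no longer
         * adds to the lock hold time.
         */
        void kvm_mmu_zap_all(struct kvm *kvm)
        {
                struct kvm_mmu_page *sp, *node;
                LIST_HEAD(invalid_list);

                spin_lock(&kvm->mmu_lock);
        restart:
                list_for_each_entry_safe(sp, node,
                                         &kvm->arch.active_mmu_pages, link)
                        if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
                                goto restart;

                kvm_mmu_commit_zap_page(kvm, &invalid_list);
                spin_unlock(&kvm->mmu_lock);

                /* Deferred: the actual freeing of the zapped pages. */
                free_zapped_mmu_pages(kvm, &invalid_list);
        }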

Can you describe the case that's biting? Is it

        /*
         * If memory slot is created, or moved, we need to clear all
         * mmio sptes.
         */
        if (npages && old.base_gfn != mem->guest_phys_addr >> PAGE_SHIFT) {
                kvm_mmu_zap_all(kvm);
                kvm_reload_remote_mmus(kvm);
        }

Because conditional rescheduling for kvm_mmu_zap_all() might not be
desirable: KVM_SET_USER_MEMORY_REGION has low-latency requirements.
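
(To make the tradeoff concrete: conditional rescheduling would mean
periodically yielding mmu_lock from inside the zap loop, roughly as sketched
below with cond_resched_lock(); each such break lets other mmu_lock users in,
which is what stretches the latency of the path quoted above.  This is an
illustration, not code from the patch.)

        /*
         * Illustration only: yield mmu_lock periodically while zapping.
         * Pages zapped so far must be committed before the lock can be
         * dropped, and the walk restarted afterwards.
         */
        if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
                kvm_mmu_commit_zap_page(kvm, &invalid_list);
                cond_resched_lock(&kvm->mmu_lock);
                goto restart;
        }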

