On 13/10/2018 16:54, lantianyu1...@gmail.com wrote:
>       while (mmu_unsync_walk(parent, &pages)) {
>               bool protected = false;
> +             LIST_HEAD(flush_list);
>  
> -             for_each_sp(pages, sp, parents, i)
> +             for_each_sp(pages, sp, parents, i) {
>                       protected |= rmap_write_protect(vcpu, sp->gfn);
> +                     kvm_mmu_queue_flush_request(sp, &flush_list);
> +             }

Here you already know that the page has to be flushed, because you are
dealing with shadow page tables and those always use 4K pages.  So the
check on is_last_page is unnecessary.

> 
>                                        pte_access, PT_PAGE_TABLE_LEVEL,
>                                        gfn, spte_to_pfn(sp->spt[i]),
>                                        true, false, host_writable);
> +             if (set_spte_ret && kvm_available_flush_tlb_with_range())
> +                     kvm_mmu_queue_flush_request(sp, &flush_list);
>       }

This is wrong, I think.  sp is always the same throughout the loop, so
you are adding it multiple times to flush_list.

Instead, you need to add a separate range for each virtual address (in
this case L2 GPA) that is synced; but for each PTE that you call
set_spte here for, you could be syncing multiple L2 GPAs if a single
page is reused multiple times by the guest's EPT page tables.

And actually I may be missing something, but doesn't this apply to all
call sites?  For mmu_sync_children you can do the flush in
__rmap_write_protect and return false, similar to the first part of the
series, but not for kvm_mmu_commit_zap_page and sync_page.

Can you simplify this series to only have hv_remote_flush_tlb_with_range
and remove all the flush_list stuff?  That first part is safe and well
understood, because it uses the rmap and so it's clear that you have L2
GPAs at hand.  Most of the remarks I made on the Hyper-V API will still
apply.

Paolo

>       if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
> -             kvm_flush_remote_tlbs(vcpu->kvm);
> +             kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
>  
>       return nr_present;

_______________________________________________
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel

Reply via email to