On Tue, 16 Mar 2021 13:43:38 +0000,
Keqian Zhu <zhukeqi...@huawei.com> wrote:
> 
> The MMIO region of a device may be huge (GB level), so try to use
> block mapping in stage2 to speed up both map and unmap.
> 
> Compared to normal memory mapping, we should consider two more
> points when trying block mapping for an MMIO region:
> 
> 1. For normal memory mapping, the PA (host physical address) and
> HVA have the same alignment within PUD_SIZE or PMD_SIZE when we use
> the HVA to request a hugepage, so we don't need to consider PA
> alignment when verifying block mapping. But for device memory
> mapping, the PA and HVA may have different alignment.
> 
> 2. For normal memory mapping, we are sure the hugepage size properly
> fits into the vma, so we don't check whether the mapping size exceeds
> the boundary of the vma. But for device memory mapping, we should pay
> attention to this.
> 
> This adds device_rough_page_shift() to check these two points when
> selecting block mapping size.
> 
> Signed-off-by: Keqian Zhu <zhukeqi...@huawei.com>
> ---
> 
> Mainly for RFC, not fully tested. I will fully test it once the
> code logic is accepted.
> 
> ---
>  arch/arm64/kvm/mmu.c | 42 ++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 38 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index c59af5ca01b0..224aa15eb4d9 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -624,6 +624,36 @@ static void kvm_send_hwpoison_signal(unsigned long address, short lsb)
>       send_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb, current);
>  }
>  
> +/*
> + * Find a mapping size that properly fits inside the intersection of vma
> + * and memslot. Also, hva and pa must have the same alignment to this
> + * mapping size. It's rough because there are still other restrictions,
> + * which will be checked by the following fault_supports_stage2_huge_mapping().

These restrictions don't quite make sense to me. If this is a PFNMAP
VMA, we should use the biggest mapping size that covers the VMA, and
not more than the VMA.
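
For illustration only, one way to read that suggestion is to keep the
patch's alignment checks but bound the block purely by the VMA (helper
name hypothetical, untested sketch):

static short device_page_shift(struct vm_area_struct *vma,
			       unsigned long hva)
{
	phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);

#ifndef __PAGETABLE_PMD_FOLDED
	/* PUD block: hva and pa congruent, block fully inside the VMA */
	if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
	    ALIGN_DOWN(hva, PUD_SIZE) >= vma->vm_start &&
	    ALIGN(hva, PUD_SIZE) <= vma->vm_end)
		return PUD_SHIFT;
#endif

	/* PMD block: same congruence and coverage test at PMD granularity */
	if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
	    ALIGN_DOWN(hva, PMD_SIZE) >= vma->vm_start &&
	    ALIGN(hva, PMD_SIZE) <= vma->vm_end)
		return PMD_SHIFT;

	return PAGE_SHIFT;
}

The memslot bounds would then still be enforced by the existing
fault_supports_stage2_huge_mapping() check, as the comment above
already says.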

> + */
> +static short device_rough_page_shift(struct kvm_memory_slot *memslot,
> +                                  struct vm_area_struct *vma,
> +                                  unsigned long hva)
> +{
> +     size_t size = memslot->npages * PAGE_SIZE;
> +     hva_t sec_start = max(memslot->userspace_addr, vma->vm_start);
> +     hva_t sec_end = min(memslot->userspace_addr + size, vma->vm_end);
> +     phys_addr_t pa = (vma->vm_pgoff << PAGE_SHIFT) + (hva - vma->vm_start);
> +
> +#ifndef __PAGETABLE_PMD_FOLDED
> +     if ((hva & (PUD_SIZE - 1)) == (pa & (PUD_SIZE - 1)) &&
> +         ALIGN_DOWN(hva, PUD_SIZE) >= sec_start &&
> +         ALIGN(hva, PUD_SIZE) <= sec_end)
> +             return PUD_SHIFT;
> +#endif
> +
> +     if ((hva & (PMD_SIZE - 1)) == (pa & (PMD_SIZE - 1)) &&
> +         ALIGN_DOWN(hva, PMD_SIZE) >= sec_start &&
> +         ALIGN(hva, PMD_SIZE) <= sec_end)
> +             return PMD_SHIFT;
> +
> +     return PAGE_SHIFT;
> +}
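
To make the alignment check above concrete (numbers made up for
illustration, assuming 4K pages so PMD_SIZE is 2MB): take a PFNMAP VMA
with vm_start = 0x7f0000000000 and a BAR starting at PA 0x10010000,
i.e. vm_pgoff << PAGE_SHIFT == 0x10010000. A fault at
hva = 0x7f0000200000 gives

	pa = 0x10010000 + 0x200000 = 0x10210000

so hva & (PMD_SIZE - 1) == 0 but pa & (PMD_SIZE - 1) == 0x10000; the
congruence test fails and only PAGE_SHIFT can be returned. Had the BAR
started at a 2MB-aligned PA such as 0x10200000, the offsets would
match and a PMD block could be used, provided the whole 2MB region
also fits in the memslot/VMA intersection.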
> +
>  static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>                                              unsigned long hva,
>                                              unsigned long map_size)
> @@ -769,7 +799,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>               return -EFAULT;
>       }
>  
> -     /* Let's check if we will get back a huge page backed by hugetlbfs */
> +     /*
> +      * Let's check if we will get back a huge page backed by hugetlbfs, or
> +      * a block mapping for a device MMIO region.
> +      */
>       mmap_read_lock(current->mm);
>       vma = find_vma_intersection(current->mm, hva, hva + 1);
>       if (unlikely(!vma)) {
> @@ -780,11 +813,12 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>       if (is_vm_hugetlb_page(vma))
>               vma_shift = huge_page_shift(hstate_vma(vma));
> +     else if (vma->vm_flags & VM_PFNMAP)
> +             vma_shift = device_rough_page_shift(memslot, vma, hva);
>       else
>               vma_shift = PAGE_SHIFT;
>  
> -     if (logging_active ||
> -         (vma->vm_flags & VM_PFNMAP)) {
> +     if (logging_active) {
>               force_pte = true;
>               vma_shift = PAGE_SHIFT;

But why should we downgrade to page-size mappings if logging is
active? This is a device, and you aren't moving the device around,
are you? Or is your device actually memory with a device mapping that
you are trying to migrate?

>       }
> @@ -855,7 +889,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  
>       if (kvm_is_device_pfn(pfn)) {
>               device = true;
> -             force_pte = true;
> +             force_pte = (vma_pagesize == PAGE_SIZE);
>       } else if (logging_active && !write_fault) {
>               /*
>                * Only actually map the page as writable if this was a write
> -- 
> 2.19.1
> 
> 

Thanks,

        M.

-- 
Without deviation from the norm, progress is not possible.
