Hi,

On Thu, May 19, 2022 at 02:41:08PM +0100, Will Deacon wrote:
> Now that EL2 is able to manage guest stage-2 page-tables, avoid
> allocating a separate MMU structure in the host and instead introduce a
> new fault handler which responds to guest stage-2 faults by sharing
> GUP-pinned pages with the guest via a hypercall. These pages are
> recovered (and unpinned) on guest teardown via the page reclaim
> hypercall.
> 
> Signed-off-by: Will Deacon <w...@kernel.org>
> ---
[..]
> +static int pkvm_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> +                       unsigned long hva)
> +{
> +     struct kvm_hyp_memcache *hyp_memcache = &vcpu->arch.pkvm_memcache;
> +     struct mm_struct *mm = current->mm;
> +     unsigned int flags = FOLL_HWPOISON | FOLL_LONGTERM | FOLL_WRITE;
> +     struct kvm_pinned_page *ppage;
> +     struct kvm *kvm = vcpu->kvm;
> +     struct page *page;
> +     u64 pfn;
> +     int ret;
> +
> +     ret = topup_hyp_memcache(hyp_memcache, kvm_mmu_cache_min_pages(kvm));
> +     if (ret)
> +             return -ENOMEM;
> +
> +     ppage = kmalloc(sizeof(*ppage), GFP_KERNEL_ACCOUNT);
> +     if (!ppage)
> +             return -ENOMEM;
> +
> +     ret = account_locked_vm(mm, 1, true);
> +     if (ret)
> +             goto free_ppage;
> +
> +     mmap_read_lock(mm);
> +     ret = pin_user_pages(hva, 1, flags, &page, NULL);

When I implemented memory pinning via GUP for the KVM SPE series, I
discovered that the pages were regularly unmapped at stage 2 because of
automatic NUMA balancing: change_prot_numa() ends up calling
mmu_notifier_invalidate_range_start().
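
As an aside, one way to rule automatic NUMA balancing in or out while
debugging this kind of stage-2 unmap (assuming a kernel built with
CONFIG_NUMA_BALANCING) is the runtime sysctl knob; this is just a
debugging aid, not a fix:

```shell
#!/bin/sh
# Inspect whether automatic NUMA balancing is enabled (1) or disabled (0).
# The knob only exists on kernels built with CONFIG_NUMA_BALANCING.
if [ -f /proc/sys/kernel/numa_balancing ]; then
    cat /proc/sys/kernel/numa_balancing
else
    echo "automatic NUMA balancing not available on this kernel"
fi
# To disable it for the duration of a test run (needs root):
#   echo 0 > /proc/sys/kernel/numa_balancing
```

With the knob set to 0, change_prot_numa() is never invoked, so if the
spurious stage-2 unmaps disappear you know NUMA balancing was the trigger.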

I was curious how you managed to avoid that; I don't know my way around
pKVM and can't seem to find where that's handled.

Thanks,
Alex
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm