On Wed, Dec 01, 2021 at 05:04:02PM +0000, Quentin Perret wrote:
> In order to simplify the page tracking infrastructure at EL2 in nVHE
> protected mode, move the responsibility of refcounting pages that are
> shared multiple times to the host. To do so, let's create a
> red-black tree tracking all the PFNs that have been shared, along with
> a refcount.
> 
> Signed-off-by: Quentin Perret <[email protected]>
> ---
>  arch/arm64/kvm/mmu.c | 78 ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 68 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index fd868fb9d922..d72566896755 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -284,23 +284,72 @@ static phys_addr_t kvm_kaddr_to_phys(void *kaddr)
>       }
>  }
>  
> -static int pkvm_share_hyp(phys_addr_t start, phys_addr_t end)
> +struct hyp_shared_pfn {
> +     u64 pfn;
> +     int count;
> +     struct rb_node node;
> +};
> +
> +static DEFINE_MUTEX(hyp_shared_pfns_lock);
> +static struct rb_root hyp_shared_pfns = RB_ROOT;
> +
> +static struct hyp_shared_pfn *find_shared_pfn(u64 pfn, struct rb_node ***node,
> +                                           struct rb_node **parent)
>  {
> -     phys_addr_t addr;
> -     int ret;
> +     struct hyp_shared_pfn *this;
> +
> +     *node = &hyp_shared_pfns.rb_node;
> +     *parent = NULL;
> +     while (**node) {
> +             this = container_of(**node, struct hyp_shared_pfn, node);
> +             *parent = **node;
> +             if (this->pfn < pfn)
> +                     *node = &((**node)->rb_left);
> +             else if (this->pfn > pfn)
> +                     *node = &((**node)->rb_right);
> +             else
> +                     return this;
> +     }
>  
> -     for (addr = ALIGN_DOWN(start, PAGE_SIZE); addr < end; addr += PAGE_SIZE) {
> -             ret = kvm_call_hyp_nvhe(__pkvm_host_share_hyp,
> -                                     __phys_to_pfn(addr));
> -             if (ret)
> -                     return ret;
> +     return NULL;
> +}
> +
> +static int share_pfn_hyp(u64 pfn)
> +{
> +     struct rb_node **node, *parent;
> +     struct hyp_shared_pfn *this;
> +     int ret = 0;
> +
> +     mutex_lock(&hyp_shared_pfns_lock);
> +     this = find_shared_pfn(pfn, &node, &parent);

I don't think this is a fast-path at the moment, but in the future we might
consider using RCU to do the lookup outside of the mutex.
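Something like the completely untested sketch below is roughly what I
have in mind. It swaps the plain rb_root for the latched RB-tree from
<linux/rbtree_latch.h>, and find_shared_pfn_rcu() is a made-up name for
illustration: writers would still serialize on hyp_shared_pfns_lock and
free nodes with kfree_rcu(), but readers could then do the lookup under
rcu_read_lock() without touching the mutex:

#include <linux/rbtree_latch.h>
#include <linux/rcupdate.h>

struct hyp_shared_pfn {
	u64 pfn;
	int count;
	struct latch_tree_node node;
	struct rcu_head rcu;
};

static struct latch_tree_root hyp_shared_pfns;

static __always_inline bool shared_pfn_less(struct latch_tree_node *a,
					    struct latch_tree_node *b)
{
	return container_of(a, struct hyp_shared_pfn, node)->pfn <
	       container_of(b, struct hyp_shared_pfn, node)->pfn;
}

static __always_inline int shared_pfn_comp(void *key,
					   struct latch_tree_node *n)
{
	u64 pfn = *(u64 *)key;
	struct hyp_shared_pfn *this;

	this = container_of(n, struct hyp_shared_pfn, node);
	if (pfn < this->pfn)
		return -1;
	if (pfn > this->pfn)
		return 1;
	return 0;
}

static const struct latch_tree_ops shared_pfn_tree_ops = {
	.less = shared_pfn_less,
	.comp = shared_pfn_comp,
};

/* Caller must hold rcu_read_lock(). */
static struct hyp_shared_pfn *find_shared_pfn_rcu(u64 pfn)
{
	struct latch_tree_node *n;

	n = latch_tree_find(&pfn, &hyp_shared_pfns, &shared_pfn_tree_ops);
	return n ? container_of(n, struct hyp_shared_pfn, node) : NULL;
}

The refcount would need auditing too (a lockless reader can observe a
stale count), so this is only worth doing if the mutex ever shows up in
profiles.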

But as-is:

Acked-by: Will Deacon <[email protected]>

Will