On 03.12.2021 21:33, Oleksandr Tyshchenko wrote:
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1382,8 +1382,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
>      spin_lock(&d->page_alloc_lock);
>  
>      /* The incremented type count pins as writable or read-only. */
> -    page->u.inuse.type_info =
> -        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
> +    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
> +    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
> +                                                  : PGT_writable_page) |
> +                                MASK_INSR(1, PGT_count_mask);

It's certainly up to the Arm maintainers to judge, but I would have
deemed it better (less risky going forward) if PGT_count_mask
continued to use the bottom bits. (I guess I may have said so before.)
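
FTAOD, a rough sketch of what that would allow (untested; assuming the
Arm definitions were rearranged so the count occupies the low bits, as
on x86):

    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
    page->u.inuse.type_info |=
        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;

i.e. the plain "| 1" (and, more generally, plain arithmetic on the
count) would continue to work without needing MASK_INSR().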

> @@ -1487,7 +1489,23 @@ int xenmem_add_to_physmap_one(
>      }
>  
>      /* Map at new location. */
> -    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> +    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
> +        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
> +    else
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        p2m_write_lock(p2m);
> +        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
> +        {
> +            rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
> +            if ( !rc )
> +                page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
> +        }
> +        else
> +            rc = -EBUSY;

May I suggest avoiding a failure here when page_get_xenheap_gfn(mfn_to_page(mfn))
matches the passed-in GFN, i.e. when the same mapping is merely being
re-established?
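
Something along these lines is what I mean (untested; "curr_gfn" is
just an illustrative local):

    gfn_t curr_gfn = page_get_xenheap_gfn(mfn_to_page(mfn));

    if ( gfn_eq(curr_gfn, INVALID_GFN) || gfn_eq(curr_gfn, gfn) )
    {
        rc = p2m_set_entry(p2m, gfn, 1, mfn, t, p2m->default_access);
        if ( !rc )
            page_set_xenheap_gfn(mfn_to_page(mfn), gfn);
    }
    else
        rc = -EBUSY;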

> @@ -2169,6 +2170,9 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>      if ( unlikely(pg == NULL) )
>          return NULL;
>  
> +    for ( i = 0; i < (1u << order); i++ )
> +        arch_alloc_xenheap_page(&pg[i]);
> +
>      memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));

I think this and ...

> @@ -2177,14 +2181,22 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>  
>  void free_xenheap_pages(void *v, unsigned int order)
>  {
> +    struct page_info *pg;
> +    unsigned int i;
> +
>      ASSERT(!in_irq());
>  
>      if ( v == NULL )
>          return;
>  
> +    pg = virt_to_page(v);
> +
>      memguard_guard_range(v, 1 << (order + PAGE_SHIFT));

... this really want to (logically) move into the new arch hooks.
That'll effectively mean simply dropping the Arm stubs, afaict (and I
notice there's some dead code there on x86, which I guess I'll make a
patch to clean up). But first of all this suggests that you want to
call the hooks with the base page and the order, putting the loops
there.
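
To illustrate (a rough sketch only; hook name and exact placement are
merely placeholders for however the rework ends up looking):

    /* common/page_alloc.c: call once, with base page and order. */
    arch_alloc_xenheap_pages(pg, order);

    /* Arm: the loop over the individual pages lives in the hook. */
    static inline void arch_alloc_xenheap_pages(struct page_info *pg,
                                                unsigned int order)
    {
        unsigned int i;

        for ( i = 0; i < (1U << order); i++ )
            arch_alloc_xenheap_page(&pg[i]);
    }

    /* x86: the memguard handling moves in here. */
    static inline void arch_alloc_xenheap_pages(struct page_info *pg,
                                                unsigned int order)
    {
        memguard_unguard_range(page_to_virt(pg), PAGE_SIZE << order);
    }

with free_xenheap_pages() treated correspondingly.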

> @@ -166,6 +173,32 @@ extern unsigned long xenheap_base_pdx;
>  
>  #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
>  
> +static inline gfn_t page_get_xenheap_gfn(struct page_info *p)

const please wherever possible.
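
I.e.

    static inline gfn_t page_get_xenheap_gfn(const struct page_info *p)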

Jan

