On 12/12/2019 12:46, Hongyan Xia wrote:
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 7d4dd80a85..8def4fb8d8 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5151,6 +5151,52 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
>                           flush_area_local((const void *)v, f) : \
>                           flush_area_all((const void *)v, f))
>  
> +/* Shatter an l3 entry and populate l2. If virt is passed in, also do flush. */
> +static bool shatter_l3e(l3_pgentry_t *pl3e, unsigned long virt, bool locking)
> +{
> +    unsigned int i;
> +    l3_pgentry_t ol3e = *pl3e;
> +    l2_pgentry_t l2e = l2e_from_intpte(l3e_get_intpte(ol3e));
> +    l2_pgentry_t *l2t = alloc_xen_pagetable();
> +
> +    if ( !l2t )
> +        return false;
> +
> +    for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
> +    {
> +        l2e_write(l2t + i, l2e);
> +        l2e = l2e_from_intpte(
> +                  l2e_get_intpte(l2e) + (PAGE_SIZE << PAGETABLE_ORDER));
> +    }
> +
> +    if ( locking )
> +        spin_lock(&map_pgdir_lock);
> +    if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
> +         (l3e_get_flags(*pl3e) & _PAGE_PSE) )
> +    {
> +        l3e_write_atomic(pl3e,
> +            l3e_from_paddr(virt_to_maddr(l2t), __PAGE_HYPERVISOR));
> +        l2t = NULL;
> +    }
> +    if ( locking )
> +        spin_unlock(&map_pgdir_lock);
> +
> +    if ( virt )
> +    {
> +        unsigned int flush_flags =
> +            FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
> +
> +        if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL )
> +            flush_flags |= FLUSH_TLB_GLOBAL;

Another problematic use of ol3e, which is racy on conflict.  You need
to use strictly the content of *pl3e as read from within the locked
region.

However, why have you moved the flushing in here?  Only one of the two
callers actually wanted it, and even then I'm not totally sure it is
necessary.

Both callers operate on an arbitrary range of addresses and, for
anything other than a singleton update, will want to issue a single
flush at the end rather than a spate of flushes for sub-areas.

(Although someone really please check my reasoning here for the
map_pages_to_xen() case which currently does have sub-area flushing.)

Either the flush wants dropping (best done via a prereq patch altering
map_pages_to_xen()), or you need to cache ol3e in the locked region
with ACCESS_ONCE() or equivalent.
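For the latter, something like this (untested sketch, reusing the
patch's own local variables):

    if ( locking )
        spin_lock(&map_pgdir_lock);
    ol3e = ACCESS_ONCE(*pl3e);
    if ( (l3e_get_flags(ol3e) & _PAGE_PRESENT) &&
         (l3e_get_flags(ol3e) & _PAGE_PSE) )
    {
        l3e_write_atomic(pl3e,
            l3e_from_paddr(virt_to_maddr(l2t), __PAGE_HYPERVISOR));
        l2t = NULL;
    }
    if ( locking )
        spin_unlock(&map_pgdir_lock);

so that the _PAGE_GLOBAL check for the flush flags is made against the
value which was actually present when the lock was taken, rather than
against a stale snapshot.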

> +        flush_area(virt, flush_flags);
> +    }
> +
> +    if ( l2t )
> +        free_xen_pagetable(l2t);

Mind annotating this as:

    if ( l2t ) /* Raced on trying to shatter?  Throw away our work. */

to highlight that this is an error path, and there is no connection
between the TLB flushing and pagetable freeing.

~Andrew
