On Wed, Jan 08, 2025 at 03:18:18PM +0000, Alejandro Vallejo wrote:
> From: Hongyan Xia <[email protected]>
>
> Create empty mappings in the second e820 pass. Also, destroy existing
> direct map mappings created in the first pass.
>
> To make xenheap pages visible in guests, it is necessary to create empty
> L3 tables in the direct map even when directmap=no, since guest CR3s
> copy the idle domain's L4 entries, which means they will share mappings
> in the direct map if we pre-populate the idle domain's L4 entries and L3
> tables. A helper is introduced for this.
>
> Also, after the direct map is actually gone, we need to stop updating
> the direct map in update_xen_mappings().
>
> Signed-off-by: Hongyan Xia <[email protected]>
> Signed-off-by: Julien Grall <[email protected]>
> Signed-off-by: Elias El Yandouzi <[email protected]>
> Signed-off-by: Alejandro Vallejo <[email protected]>
> ---
> v4->v5:
> * No changes.
> ---
> xen/arch/x86/setup.c | 73 +++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 66 insertions(+), 7 deletions(-)
>
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index 609ec4cf07f2..23b77f13bc10 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1060,6 +1060,56 @@ static struct domain *__init create_dom0(struct boot_info *bi)
> return d;
> }
>
> +/*
> + * This either populates a valid direct map, or allocates empty L3 tables
> + * and creates the L4 entries for virtual addresses between [start, end)
> + * in the direct map, depending on has_directmap().
> + *
> + * When directmap=no, we still need to populate empty L3 tables in the
> + * direct map region. The reason is that on-demand xenheap mappings are
> + * created in the idle domain's page table but must be seen by
> + * everyone. Since all domains share the direct map L4 entries, they
> + * will share xenheap mappings if we pre-populate the L4 entries and L3
> + * tables in the direct map region for all RAM. We also rely on the fact
> + * that L3 tables are never freed.
> + */
> +static void __init populate_directmap(paddr_t pstart, paddr_t pend,
> +                                      unsigned int flags)
> +{
> +    unsigned long vstart = (unsigned long)__va(pstart);
> +    unsigned long vend = (unsigned long)__va(pend);
> +
> +    if ( pstart >= pend )
> +        return;
> +
> +    BUG_ON(vstart < DIRECTMAP_VIRT_START);
> +    BUG_ON(vend > DIRECTMAP_VIRT_END);
> +
> +    if ( has_directmap() )
> +        /* Populate valid direct map. */
> +        BUG_ON(map_pages_to_xen(vstart, maddr_to_mfn(pstart),
> +                                PFN_DOWN(pend - pstart), flags));
> +    else
> +    {
> +        /* Create empty L3 tables. */
> +        unsigned long vaddr = vstart & ~((1UL << L4_PAGETABLE_SHIFT) - 1);
> +
> +        for ( unsigned long idx = l4_table_offset(vaddr);
> +              idx <= l4_table_offset(vend); idx++ )
> +        {
> +            l4_pgentry_t *pl4e = &idle_pg_table[l4_table_offset(idx)];
As we are attempting to integrate this series with the per-CPU
mappings work, there's an issue here: the l4_table_offset() call is
duplicated, as idx is already the L4 table index. The loop should
instead read:

    for ( unsigned long idx = l4_table_offset(vaddr);
          idx <= l4_table_offset(vend); idx++ )
    {
        l4_pgentry_t *pl4e = &idle_pg_table[idx];

This probably went unnoticed on small systems that can fit all of the
direct map in a single L4 entry, but does explode on bigger ones.

Leaving a note here in case anyone else picks this up before a new
version is sent.
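
For illustration, here's a tiny standalone program (hypothetical, not
Xen code; it only assumes the standard x86-64 definition of the L4
index, i.e. bits 39-47 of the virtual address) showing how the doubled
call collapses every iteration onto slot 0:

    #include <stdio.h>

    /* Standard x86-64 4-level paging: the L4 index is bits 39-47. */
    #define L4_PAGETABLE_SHIFT 39
    #define l4_table_offset(va) (((va) >> L4_PAGETABLE_SHIFT) & 0x1ff)

    int main(void)
    {
        /* Xen's DIRECTMAP_VIRT_START; its L4 index is 262. */
        unsigned long vstart = 0xffff830000000000UL;

        for ( unsigned long idx = l4_table_offset(vstart);
              idx <= l4_table_offset(vstart) + 2; idx++ )
            /*
             * idx is already an L4 index (262..264). Applying
             * l4_table_offset() to such a small integer shifts it
             * down to 0, so the buggy expression always selects
             * slot 0 of the table.
             */
            printf("idx %lu -> slot %lu (want %lu)\n",
                   idx, l4_table_offset(idx), idx);

        return 0;
    }

With the doubled call, no slot other than 0 can ever be selected,
regardless of the range being mapped.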

Regards,
Roger.