Hi,
On 9/26/19 1:36 PM, hong...@amazon.com wrote:
On 26/09/2019 12:21, Julien Grall wrote:
Hi,
From the title, this patch is doing two things:
1) Map bootmem_region_list
2) Fix double unmap bug
It is not entirely clear how the latter is related to the former. Can
you explain it?
Actually they are not related. The second one should probably be
squashed into some other patch.
On 9/26/19 10:46 AM, hong...@amazon.com wrote:
From: Hongyan Xia <hong...@amazon.com>
Please provide a commit message description.
The description is just the one-liner from the subject. It should have been carried over when you imported this patch.
I am afraid this is not enough to understand the patch. You should explain in the patch why you need it...
Signed-off-by: Hongyan Xia <hong...@amazon.com>
---
xen/arch/x86/pv/dom0_build.c | 2 +-
xen/common/page_alloc.c | 12 ++++++++++--
2 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index 202edcaa17..1555a61b84 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -236,7 +236,7 @@ static __init void setup_pv_physmap(struct domain *d, unsigned long pgtbl_pfn,
if ( pl3e )
unmap_domain_page(pl3e);
- unmap_domain_page(l4start);
+ //unmap_domain_page(l4start);
I guess you wanted to remove it completely rather than comment it out?
Thanks. Will fix.
}
static struct page_info * __init alloc_chunk(struct domain *d,
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index deeeac065c..6acc1c78a4 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -247,6 +247,7 @@ mfn_t first_valid_mfn = INVALID_MFN_INITIALIZER;
static struct bootmem_region {
unsigned long s, e; /* MFNs @s through @e-1 inclusive are free */
} *__initdata bootmem_region_list;
+struct page_info *bootmem_region_list_pg;
I guess this should be static. But...
Yes.
static unsigned int __initdata nr_bootmem_regions;
struct scrub_region {
@@ -264,7 +265,11 @@ static void __init bootmem_region_add(unsigned long s, unsigned long e)
unsigned int i;
if ( (bootmem_region_list == NULL) && (s < e) )
- bootmem_region_list = mfn_to_virt(s++);
+ {
+ bootmem_region_list_pg = mfn_to_page(_mfn(s));
... at least on Arm, the frametable is allocated after the boot
allocator has been initialized. So mfn_to_page() will not work
properly here.
It works because bootmem_region_list_pg is only accessed later, in
end_boot_allocator(), once the frametable has been initialised. The pg
just records what the page pointer will be once the frametable is ready.
Of course, to avoid confusion, I could keep a bootmem_region_list_mfn
instead and only convert it to a pg later, in end_boot_allocator().
This only works on x86, because there mfn_to_page() does not depend on
anything initialized afterwards. This is not true on Arm: the helper
depends on frametable_base_pdx, which is not initialized until
setup_frametable_mappings() is called.
So you will end up with the wrong page pointer.
+ bootmem_region_list = map_domain_page(_mfn(s));
So I would suggest to look at statically allocating the
bootmem_region_list. This was actually discussed recently as part of
on-going problem with Arm32 (see [1]).
Actually, this patch series introduces the PMAP infrastructure for x86,
so this map_domain_page() works. It certainly won't work for Arm,
though, without also introducing PMAP.
Well, map_domain_page() is meant to be used for domain heap pages.
At the moment, the boot allocator requires a xenheap page on the first
call. So IMHO, you are misusing the function.
Hence, I strongly think the static allocation is the best here.
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel