Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
On 20/09/17 19:15, Julien Grall wrote:
> Hi Juergen,
>
> On 20/09/17 15:33, Juergen Gross wrote:
>> On 20/09/17 16:24, Julien Grall wrote:
>>> On 20/09/17 14:08, Juergen Gross wrote:
>>>> On 20/09/17 14:51, Julien Grall wrote:
>>>>> Hi Juergen,
>>>>>
>>>>> Sorry for the late comment.
>>>>>
>>>>> On 20/09/17 07:34, Juergen Gross wrote:
>>>>>> Add a function for obtaining the highest possible physical memory
>>>>>> address of the system. This value is influenced by:
>>>>>>
>>>>>> - hypervisor configuration (CONFIG_BIGMEM)
>>>>>> - processor capability (max. addressable physical memory)
>>>>>> - memory map at boot time
>>>>>> - memory hotplug capability
>>>>>>
>>>>>> The value is especially needed for dom0 to decide sizing of grant frame
>>>>>> limits of guests and for pv domains for selecting the grant interface
>>>>>
>>>>> Why limit this to PV domains? Arm domains may also need to switch to
>>>>> another interface, because v1 only supports 32-bit GFNs.
>>>>
>>>> Right. And I just used that reasoning for an answer to Jan. :-)
>>>>
>>>>>> version to use.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross
>>>>>
>>>>> [...]
>>>>>
>>>>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>>>>> index cd6dfb54b9..6aa8cba5e0 100644
>>>>>> --- a/xen/include/asm-arm/mm.h
>>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>>> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>>>>>>  void clear_and_clean_page(struct page_info *page);
>>>>>>
>>>>>> +static inline unsigned long arch_get_upper_mfn_bound(void)
>>>>>> +{
>>>>>> +    return 0;
>>>>>> +}
>>>>>
>>>>> I am not sure I understand the Arm implementation, given the
>>>>> description in the commit message.
>>>>>
>>>>> The guest layout is completely separate from the host layout. It might
>>>>> be possible to have all the memory below 40 bits on the host, but this
>>>>> does not preclude the guest from having memory above 40 bits (the
>>>>> hardware might support, for instance, up to 48 bits).
>>>>
>>>> Who is setting up the memory map for the guest then?
>>>
>>> The memory map is at the moment static and described in
>>> public/arch-arm.h. The guest is not allowed to assume it and should
>>> discover it through ACPI/DT.
>>
>> Is there any memory hotplug possible (host level, guest level)?
>
> It is not implemented at the moment.
>
>>> There are 2 banks of memory for the guest (it depends on the amount of
>>> memory requested by the user):
>>> - 3GB @ 1GB
>>> - 1016GB @ 8GB
>>>
>>> But the guest would be free to use the populate memory hypercall to
>>> allocate memory anywhere in the address space.
>>
>> Okay, so this is similar to x86 HVM then.
>
> You could compare an Arm guest to PVH.
>
>>> For Arm32, the maximum IPA (Intermediate Physical Address, aka guest
>>> physical address on Xen) we currently support is always 40 bits.
>>>
>>> For Arm64, this ranges from 32 bits to 48 bits. New hardware can support
>>> up to 52 bits.
>>
>> I guess this information is included in some tables like ACPI or DT?
>
> No. On Arm64, you can deduce the maximum size from ID_AA64MMFR0_EL1.
> But the hypervisor would be free to limit the number of guest physical
> bits. It could, however, never be larger than the physical address range
> supported.

Okay, so we have no need for an additional interface on ARM, right? It
can all be handled via the existing interfaces.

Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
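As an aside, the ID_AA64MMFR0_EL1 decoding Julien refers to can be sketched as below. This is an illustrative helper, not Xen code; the encodings are the architectural ones for the PARange field (bits [3:0]) of that register.

```c
#include <stdint.h>

/* Translate the PARange field (bits [3:0]) of ID_AA64MMFR0_EL1 into a
 * physical address width in bits.  Architectural encodings:
 * 0b0000 = 32, 0b0001 = 36, 0b0010 = 40, 0b0011 = 42, 0b0100 = 44,
 * 0b0101 = 48, 0b0110 = 52.  Returns 0 for reserved encodings. */
static unsigned int pa_range_to_bits(uint64_t mmfr0)
{
    static const unsigned int pa_bits[] = { 32, 36, 40, 42, 44, 48, 52 };
    uint64_t field = mmfr0 & 0xf;

    return field < sizeof(pa_bits) / sizeof(pa_bits[0]) ? pa_bits[field] : 0;
}
```

This is why no ACPI/DT table is needed for the host limit: the CPU advertises it directly, and the hypervisor may only narrow it further.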
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
Hi Juergen,

On 20/09/17 15:33, Juergen Gross wrote:
> On 20/09/17 16:24, Julien Grall wrote:
>> On 20/09/17 14:08, Juergen Gross wrote:
>>> On 20/09/17 14:51, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> Sorry for the late comment.
>>>>
>>>> On 20/09/17 07:34, Juergen Gross wrote:
>>>>> Add a function for obtaining the highest possible physical memory
>>>>> address of the system. This value is influenced by:
>>>>>
>>>>> - hypervisor configuration (CONFIG_BIGMEM)
>>>>> - processor capability (max. addressable physical memory)
>>>>> - memory map at boot time
>>>>> - memory hotplug capability
>>>>>
>>>>> The value is especially needed for dom0 to decide sizing of grant frame
>>>>> limits of guests and for pv domains for selecting the grant interface
>>>>
>>>> Why limit this to PV domains? Arm domains may also need to switch to
>>>> another interface, because v1 only supports 32-bit GFNs.
>>>
>>> Right. And I just used that reasoning for an answer to Jan. :-)
>>>
>>>>> version to use.
>>>>>
>>>>> Signed-off-by: Juergen Gross
>>>>
>>>> [...]
>>>>
>>>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>>>> index cd6dfb54b9..6aa8cba5e0 100644
>>>>> --- a/xen/include/asm-arm/mm.h
>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>>>>>  void clear_and_clean_page(struct page_info *page);
>>>>>
>>>>> +static inline unsigned long arch_get_upper_mfn_bound(void)
>>>>> +{
>>>>> +    return 0;
>>>>> +}
>>>>
>>>> I am not sure I understand the Arm implementation, given the
>>>> description in the commit message.
>>>>
>>>> The guest layout is completely separate from the host layout. It might
>>>> be possible to have all the memory below 40 bits on the host, but this
>>>> does not preclude the guest from having memory above 40 bits (the
>>>> hardware might support, for instance, up to 48 bits).
>>>
>>> Who is setting up the memory map for the guest then?
>>
>> The memory map is at the moment static and described in
>> public/arch-arm.h. The guest is not allowed to assume it and should
>> discover it through ACPI/DT.
>
> Is there any memory hotplug possible (host level, guest level)?

It is not implemented at the moment.

>> There are 2 banks of memory for the guest (it depends on the amount of
>> memory requested by the user):
>> - 3GB @ 1GB
>> - 1016GB @ 8GB
>>
>> But the guest would be free to use the populate memory hypercall to
>> allocate memory anywhere in the address space.
>
> Okay, so this is similar to x86 HVM then.

You could compare an Arm guest to PVH.

>> For Arm32, the maximum IPA (Intermediate Physical Address, aka guest
>> physical address on Xen) we currently support is always 40 bits.
>>
>> For Arm64, this ranges from 32 bits to 48 bits. New hardware can support
>> up to 52 bits.
>
> I guess this information is included in some tables like ACPI or DT?

No. On Arm64, you can deduce the maximum size from ID_AA64MMFR0_EL1.
But the hypervisor would be free to limit the number of guest physical
bits. It could, however, never be larger than the physical address range
supported.

Cheers,

--
Julien Grall
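The static layout Julien describes (two RAM banks, 3GiB at 1GiB and 1016GiB at 8GiB) can be sketched with a small helper. The bank bases and sizes are taken from the thread; the helper itself is hypothetical, for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define GIB (1ULL << 30)

/* The two guest RAM banks mentioned in the thread:
 * 3GiB starting at 1GiB, and 1016GiB starting at 8GiB. */
static const struct {
    uint64_t base, size;
} guest_ram_banks[] = {
    { 1 * GIB,    3 * GIB },
    { 8 * GIB, 1016 * GIB },
};

/* Hypothetical helper: does a guest physical address fall inside one of
 * the guest RAM banks? */
static bool gpa_is_guest_ram(uint64_t gpa)
{
    for (unsigned int i = 0;
         i < sizeof(guest_ram_banks) / sizeof(guest_ram_banks[0]); i++)
        if (gpa >= guest_ram_banks[i].base &&
            gpa - guest_ram_banks[i].base < guest_ram_banks[i].size)
            return true;

    return false;
}
```

Note the second bank ends just below 1TiB of guest physical address space, which lines up with the 40-bit minimum IPA width discussed below.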
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
On 20/09/17 16:24, Julien Grall wrote:
> Hi Juergen,
>
> On 20/09/17 14:08, Juergen Gross wrote:
>> On 20/09/17 14:51, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> Sorry for the late comment.
>>>
>>> On 20/09/17 07:34, Juergen Gross wrote:
>>>> Add a function for obtaining the highest possible physical memory
>>>> address of the system. This value is influenced by:
>>>>
>>>> - hypervisor configuration (CONFIG_BIGMEM)
>>>> - processor capability (max. addressable physical memory)
>>>> - memory map at boot time
>>>> - memory hotplug capability
>>>>
>>>> The value is especially needed for dom0 to decide sizing of grant frame
>>>> limits of guests and for pv domains for selecting the grant interface
>>>
>>> Why limit this to PV domains? Arm domains may also need to switch to
>>> another interface, because v1 only supports 32-bit GFNs.
>>
>> Right. And I just used that reasoning for an answer to Jan. :-)
>>
>>>> version to use.
>>>>
>>>> Signed-off-by: Juergen Gross
>>>
>>> [...]
>>>
>>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>>> index cd6dfb54b9..6aa8cba5e0 100644
>>>> --- a/xen/include/asm-arm/mm.h
>>>> +++ b/xen/include/asm-arm/mm.h
>>>> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>>>>  void clear_and_clean_page(struct page_info *page);
>>>>
>>>> +static inline unsigned long arch_get_upper_mfn_bound(void)
>>>> +{
>>>> +    return 0;
>>>> +}
>>>
>>> I am not sure I understand the Arm implementation, given the
>>> description in the commit message.
>>>
>>> The guest layout is completely separate from the host layout. It might
>>> be possible to have all the memory below 40 bits on the host, but this
>>> does not preclude the guest from having memory above 40 bits (the
>>> hardware might support, for instance, up to 48 bits).
>>
>> Who is setting up the memory map for the guest then?
>
> The memory map is at the moment static and described in
> public/arch-arm.h. The guest is not allowed to assume it and should
> discover it through ACPI/DT.

Is there any memory hotplug possible (host level, guest level)?

> There are 2 banks of memory for the guest (it depends on the amount of
> memory requested by the user):
> - 3GB @ 1GB
> - 1016GB @ 8GB
>
> But the guest would be free to use the populate memory hypercall to
> allocate memory anywhere in the address space.

Okay, so this is similar to x86 HVM then.

> For Arm32, the maximum IPA (Intermediate Physical Address, aka guest
> physical address on Xen) we currently support is always 40 bits.
>
> For Arm64, this ranges from 32 bits to 48 bits. New hardware can support
> up to 52 bits.

I guess this information is included in some tables like ACPI or DT?

Juergen
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
Hi Juergen,

On 20/09/17 14:08, Juergen Gross wrote:
> On 20/09/17 14:51, Julien Grall wrote:
>> Hi Juergen,
>>
>> Sorry for the late comment.
>>
>> On 20/09/17 07:34, Juergen Gross wrote:
>>> Add a function for obtaining the highest possible physical memory
>>> address of the system. This value is influenced by:
>>>
>>> - hypervisor configuration (CONFIG_BIGMEM)
>>> - processor capability (max. addressable physical memory)
>>> - memory map at boot time
>>> - memory hotplug capability
>>>
>>> The value is especially needed for dom0 to decide sizing of grant frame
>>> limits of guests and for pv domains for selecting the grant interface
>>
>> Why limit this to PV domains? Arm domains may also need to switch to
>> another interface, because v1 only supports 32-bit GFNs.
>
> Right. And I just used that reasoning for an answer to Jan. :-)
>
>>> version to use.
>>>
>>> Signed-off-by: Juergen Gross
>>
>> [...]
>>
>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>> index cd6dfb54b9..6aa8cba5e0 100644
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>>>  void clear_and_clean_page(struct page_info *page);
>>>
>>> +static inline unsigned long arch_get_upper_mfn_bound(void)
>>> +{
>>> +    return 0;
>>> +}
>>
>> I am not sure I understand the Arm implementation, given the
>> description in the commit message.
>>
>> The guest layout is completely separate from the host layout. It might
>> be possible to have all the memory below 40 bits on the host, but this
>> does not preclude the guest from having memory above 40 bits (the
>> hardware might support, for instance, up to 48 bits).
>
> Who is setting up the memory map for the guest then?

The memory map is at the moment static and described in
public/arch-arm.h. The guest is not allowed to assume it and should
discover it through ACPI/DT.

There are 2 banks of memory for the guest (it depends on the amount of
memory requested by the user):
- 3GB @ 1GB
- 1016GB @ 8GB

But the guest would be free to use the populate memory hypercall to
allocate memory anywhere in the address space.

For Arm32, the maximum IPA (Intermediate Physical Address, aka guest
physical address on Xen) we currently support is always 40 bits.

For Arm64, this ranges from 32 bits to 48 bits. New hardware can support
up to 52 bits.

Cheers,

--
Julien Grall
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
On 20/09/17 14:51, Julien Grall wrote:
> Hi Juergen,
>
> Sorry for the late comment.
>
> On 20/09/17 07:34, Juergen Gross wrote:
>> Add a function for obtaining the highest possible physical memory
>> address of the system. This value is influenced by:
>>
>> - hypervisor configuration (CONFIG_BIGMEM)
>> - processor capability (max. addressable physical memory)
>> - memory map at boot time
>> - memory hotplug capability
>>
>> The value is especially needed for dom0 to decide sizing of grant frame
>> limits of guests and for pv domains for selecting the grant interface
>
> Why limit this to PV domains? Arm domains may also need to switch to
> another interface, because v1 only supports 32-bit GFNs.

Right. And I just used that reasoning for an answer to Jan. :-)

>> version to use.
>>
>> Signed-off-by: Juergen Gross
>
> [...]
>
>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>> index cd6dfb54b9..6aa8cba5e0 100644
>> --- a/xen/include/asm-arm/mm.h
>> +++ b/xen/include/asm-arm/mm.h
>> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>>  void clear_and_clean_page(struct page_info *page);
>>
>> +static inline unsigned long arch_get_upper_mfn_bound(void)
>> +{
>> +    return 0;
>> +}
>
> I am not sure I understand the Arm implementation, given the
> description in the commit message.
>
> The guest layout is completely separate from the host layout. It might
> be possible to have all the memory below 40 bits on the host, but this
> does not preclude the guest from having memory above 40 bits (the
> hardware might support, for instance, up to 48 bits).

Who is setting up the memory map for the guest then?

Juergen
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
Hi Juergen,

Sorry for the late comment.

On 20/09/17 07:34, Juergen Gross wrote:
> Add a function for obtaining the highest possible physical memory
> address of the system. This value is influenced by:
>
> - hypervisor configuration (CONFIG_BIGMEM)
> - processor capability (max. addressable physical memory)
> - memory map at boot time
> - memory hotplug capability
>
> The value is especially needed for dom0 to decide sizing of grant frame
> limits of guests and for pv domains for selecting the grant interface

Why limit this to PV domains? Arm domains may also need to switch to
another interface, because v1 only supports 32-bit GFNs.

> version to use.
>
> Signed-off-by: Juergen Gross

[...]

> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index cd6dfb54b9..6aa8cba5e0 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -376,6 +376,11 @@ static inline void put_page_and_type(struct page_info *page)
>  void clear_and_clean_page(struct page_info *page);
>
> +static inline unsigned long arch_get_upper_mfn_bound(void)
> +{
> +    return 0;
> +}

I am not sure I understand the Arm implementation, given the
description in the commit message.

The guest layout is completely separate from the host layout. It might
be possible to have all the memory below 40 bits on the host, but this
does not preclude the guest from having memory above 40 bits (the
hardware might support, for instance, up to 48 bits).

Cheers,

--
Julien Grall
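The 32-bit GFN limitation driving this discussion is mechanical: a grant table v1 entry stores the frame number in a 32-bit field, so a guest frame at or above 2^32 (i.e. beyond 16TiB of guest physical space with 4KiB pages) cannot be expressed in v1. A minimal sketch of the resulting version choice — illustrative only, not Xen's actual selection code:

```c
#include <stdint.h>

/* Grant table v1 entries hold the granted frame number in a 32-bit
 * field.  If the highest guest frame number (GFN) does not fit in
 * 32 bits, the guest must use the v2 interface, whose entries carry
 * full-width frame numbers. */
static unsigned int grant_version_needed(uint64_t max_gfn)
{
    return (max_gfn >> 32) ? 2 : 1;
}
```

With 4KiB pages, GFN 0xFFFFFFFF corresponds to just under 16TiB, which is why the upper bound on physical addresses matters for the version decision.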
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
On 20/09/17 11:32, Jan Beulich wrote:
> On 20.09.17 at 10:58, wrote:
>> On 20/09/17 10:57, Jan Beulich wrote:
>>> On 20.09.17 at 08:34, wrote:
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -6312,6 +6312,17 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
>>>>      return 0;
>>>>  }
>>>>
>>>> +unsigned long arch_get_upper_mfn_bound(void)
>>>> +{
>>>> +    unsigned long max_mfn;
>>>> +
>>>> +    max_mfn = mem_hotplug ? PFN_DOWN(mem_hotplug) : max_page;
>>>
>>> Taking into account the code in the caller of this function as well
>>> as the ARM counterpart, I find the use of max_page here odd. I'd
>>> prefer if get_upper_mfn_bound() went away altogether and its sole
>>> caller (which, strangely enough, doesn't get introduced here) called
>>> the arch function directly. Additionally, with the caller being a
>>> sysctl, how is that supposed to help a PV DomU kernel in its choice
>>> of grant table version?
>>
>> Did you look at patch 15?
>
> Not yet, no (I had looked over the titles, but this one's didn't
> make me make the connection). So yes, that addresses the PV
> DomU concern. Still I'd like to get away without the thin common
> wrapper around the arch-specific actual implementation (and I
> don't care much whether the resulting function has an arch_
> prefix).

Okay, I'll remove the common wrapper and drop the arch_ prefix from the
x86 and arm variants.

Juergen
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
>>> On 20.09.17 at 10:58, wrote:
> On 20/09/17 10:57, Jan Beulich wrote:
>> On 20.09.17 at 08:34, wrote:
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -6312,6 +6312,17 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
>>>      return 0;
>>>  }
>>>
>>> +unsigned long arch_get_upper_mfn_bound(void)
>>> +{
>>> +    unsigned long max_mfn;
>>> +
>>> +    max_mfn = mem_hotplug ? PFN_DOWN(mem_hotplug) : max_page;
>>
>> Taking into account the code in the caller of this function as well
>> as the ARM counterpart, I find the use of max_page here odd. I'd
>> prefer if get_upper_mfn_bound() went away altogether and its sole
>> caller (which, strangely enough, doesn't get introduced here) called
>> the arch function directly. Additionally, with the caller being a
>> sysctl, how is that supposed to help a PV DomU kernel in its choice
>> of grant table version?
>
> Did you look at patch 15?

Not yet, no (I had looked over the titles, but this one's didn't
make me make the connection). So yes, that addresses the PV
DomU concern. Still I'd like to get away without the thin common
wrapper around the arch-specific actual implementation (and I
don't care much whether the resulting function has an arch_
prefix).

Jan
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
On 20/09/17 10:57, Jan Beulich wrote:
> On 20.09.17 at 08:34, wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -6312,6 +6312,17 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
>>      return 0;
>>  }
>>
>> +unsigned long arch_get_upper_mfn_bound(void)
>> +{
>> +    unsigned long max_mfn;
>> +
>> +    max_mfn = mem_hotplug ? PFN_DOWN(mem_hotplug) : max_page;
>
> Taking into account the code in the caller of this function as well
> as the ARM counterpart, I find the use of max_page here odd. I'd
> prefer if get_upper_mfn_bound() went away altogether and its sole
> caller (which, strangely enough, doesn't get introduced here) called
> the arch function directly. Additionally, with the caller being a
> sysctl, how is that supposed to help a PV DomU kernel in its choice
> of grant table version?

Did you look at patch 15?

Juergen
Re: [Xen-devel] [PATCH v8 04/15] xen: add function for obtaining highest possible memory address
>>> On 20.09.17 at 08:34, wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -6312,6 +6312,17 @@ int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs)
>      return 0;
>  }
>
> +unsigned long arch_get_upper_mfn_bound(void)
> +{
> +    unsigned long max_mfn;
> +
> +    max_mfn = mem_hotplug ? PFN_DOWN(mem_hotplug) : max_page;

Taking into account the code in the caller of this function as well
as the ARM counterpart, I find the use of max_page here odd. I'd
prefer if get_upper_mfn_bound() went away altogether and its sole
caller (which, strangely enough, doesn't get introduced here) called
the arch function directly. Additionally, with the caller being a
sysctl, how is that supposed to help a PV DomU kernel in its choice
of grant table version?

Jan
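For context, the x86 logic Jan quotes boils down to the following standalone sketch. `mem_hotplug` (the highest hotpluggable physical address) and `max_page` (one past the highest page frame present at boot) are mocked here as plain globals, so this is an illustration of the quoted expression rather than the real Xen code.

```c
#include <stdint.h>

#define PAGE_SHIFT 12
/* Convert a physical address to a page frame number. */
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* Mock globals standing in for Xen's state: mem_hotplug holds the
 * highest possible hotplug address (0 if hotplug is not possible),
 * max_page the boot-time page frame count. */
static uint64_t mem_hotplug;
static uint64_t max_page;

/* Sketch of the quoted arch_get_upper_mfn_bound(): if memory hotplug is
 * possible, the MFN bound derives from the hotplug limit (an address,
 * hence PFN_DOWN); otherwise it comes from the boot-time memory map,
 * which is already a frame count. */
static uint64_t upper_mfn_bound(void)
{
    return mem_hotplug ? PFN_DOWN(mem_hotplug) : max_page;
}
```

Jan's objection is about the second arm of the conditional: `max_page` reflects the boot-time map, not the architectural maximum, so the two architectures would report differently-flavored bounds through the same interface.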