Re: [Xen-devel] [RFC PATCH v1 00/21] ARM: Add Xen NUMA support

2017-03-02 Thread Vijay Kilari
Hi Konrad,

On Fri, Feb 10, 2017 at 11:00 PM, Konrad Rzeszutek Wilk
 wrote:
> On Thu, Feb 09, 2017 at 09:26:52PM +0530, vijay.kil...@gmail.com wrote:
>> From: Vijaya Kumar K 
>>
>> With this RFC patch series, NUMA support is added for the arm platform.
>> Both DT and ACPI based NUMA support is added.
>> Only Xen is made aware of the NUMA platform. Dom0 awareness is not
>> added.
>>
>> As part of this series, the code under the x86 architecture is
>> reused by moving it into common files.
>> New files xen/common/numa.c and xen/common/srat.c are added,
>> common to both x86 and arm.
>>
>> Patches 1 - 12 & 20 are for DT NUMA and 13 - 19 & 21 are for
>> ACPI NUMA.
>>
>> DT NUMA: The following major changes are performed:
>>  - Dropped numa-node-id information from the Dom0 DT, so that Dom0
>>    devices allocate from node 0 for devmalloc requests.
>>  - The memory DT node is not deleted by EFI; it is exposed to Xen
>>    to extract NUMA information.
>>  - On NUMA failure, fall back to non-NUMA booting, assuming all
>>    memory and CPUs are under node 0.
>>  - CONFIG_NUMA is introduced.
>>  - CONFIG_NUMA is introduced.
>>
>> ACPI NUMA:
>>  - The MADT is parsed before the SRAT table to extract
>>    CPU_ID to MPIDR mapping info. In Linux, while parsing the SRAT
>>    table, the MADT is opened to extract the MPIDR. However, this
>>    approach does not work on Xen, which allows only one table to
>>    be open at a time: when an ACPI table is opened, Xen maps it to
>>    a single region, so opening ACPI tables recursively overwrites
>>    the contents.
>
> Huh? Why can't you use vmap APIs to map them?
I see acpi_os_map_memory() could be used.

However, this approach of caching the CPU-to-MPIDR mapping by parsing the MADT
before processing the SRAT is more efficient.
In Linux, for every CPU_ID entry read from the SRAT, the MADT is opened,
searched for the CPU_ID to MPIDR mapping, and closed; so for n CPUs the
MADT is searched n times, i.e. O(n^2) work overall.
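The caching scheme described above can be sketched in plain C. All names here are illustrative, not the series' actual code: the MADT is walked once to record each CPU_ID -> MPIDR pair, and SRAT parsing then consults the cached table instead of re-mapping the MADT per entry.

```c
/*
 * Illustrative sketch of caching CPU_ID -> MPIDR from the MADT so
 * that SRAT parsing never has to re-open a second ACPI table.
 * Hypothetical names; the real code lives in the patch series.
 */
#include <assert.h>
#include <stdint.h>

#define NR_CPUS       8
#define INVALID_MPIDR (~(uint64_t)0)

static uint64_t cpuid_to_mpidr[NR_CPUS];

/* Called once per GICC entry while the MADT is mapped. */
void madt_record_cpu(unsigned int cpu_id, uint64_t mpidr)
{
    if (cpu_id < NR_CPUS)
        cpuid_to_mpidr[cpu_id] = mpidr;
}

/* Called per SRAT GICC affinity entry: a table lookup, no second
 * ACPI table mapping and no repeated MADT scan. */
uint64_t srat_lookup_mpidr(unsigned int cpu_id)
{
    return cpu_id < NR_CPUS ? cpuid_to_mpidr[cpu_id] : INVALID_MPIDR;
}
```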

Regards
Vijay

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [RFC PATCH v1 00/21] ARM: Add Xen NUMA support

2017-02-10 Thread Konrad Rzeszutek Wilk
On Thu, Feb 09, 2017 at 09:26:52PM +0530, vijay.kil...@gmail.com wrote:
> From: Vijaya Kumar K 
> 
> With this RFC patch series, NUMA support is added for the arm platform.
> Both DT and ACPI based NUMA support is added.
> Only Xen is made aware of the NUMA platform. Dom0 awareness is not
> added.
> 
> As part of this series, the code under the x86 architecture is
> reused by moving it into common files.
> New files xen/common/numa.c and xen/common/srat.c are added,
> common to both x86 and arm.
> 
> Patches 1 - 12 & 20 are for DT NUMA and 13 - 19 & 21 are for
> ACPI NUMA.
> 
> DT NUMA: The following major changes are performed:
>  - Dropped numa-node-id information from the Dom0 DT, so that Dom0
>    devices allocate from node 0 for devmalloc requests.
>  - The memory DT node is not deleted by EFI; it is exposed to Xen
>    to extract NUMA information.
>  - On NUMA failure, fall back to non-NUMA booting, assuming all
>    memory and CPUs are under node 0.
>  - CONFIG_NUMA is introduced.
> 
> ACPI NUMA:
>  - The MADT is parsed before the SRAT table to extract
>    CPU_ID to MPIDR mapping info. In Linux, while parsing the SRAT
>    table, the MADT is opened to extract the MPIDR. However, this
>    approach does not work on Xen, which allows only one table to
>    be open at a time: when an ACPI table is opened, Xen maps it to
>    a single region, so opening ACPI tables recursively overwrites
>    the contents.

Huh? Why can't you use vmap APIs to map them?



Re: [Xen-devel] [RFC PATCH v1 00/21] ARM: Add Xen NUMA support

2017-02-09 Thread Vijay Kilari
Hi Julien,

On Thu, Feb 9, 2017 at 10:01 PM, Julien Grall  wrote:
> Hi Vijay,
>
> On 02/09/2017 03:56 PM, vijay.kil...@gmail.com wrote:
>>
>> Note: Please use this patch series only for review.
>> For testing, a patch to the boot allocator is required, which will
>> be sent outside this series.
>
>
> Can you expand here? Is this patch NUMA specific?

Yes, it is NUMA specific; I have reported it here.
I have a workaround for this and need to prepare a patch. (I hope that,
so far, there is no patch from anyone else for this issue.)

https://www.mail-archive.com/xen-devel@lists.xen.org/msg92093.html

>
> Also, in a previous thread you mentioned an issue booting Xen with
> NUMA on Xen unstable. So how did you test it?

This issue (a panic in page_alloc.c) that I reported is seen when I
boot plain unstable Xen on a NUMA board, without any NUMA or ITS
patches. The issue is seen only on a NUMA board with DT.

I have tested this series with ACPI on the unstable version, and with
DT on the 4.7 version.
Also, I have prepared a small patch as below (just an ad hoc way),
wherein I call cpu_to_node() for all CPUs and print phys_to_nid()
to see whether the node id is correct or not.

-
diff --git a/xen/arch/arm/numa.c b/xen/arch/arm/numa.c
index d296523..d28e6bf 100644
--- a/xen/arch/arm/numa.c
+++ b/xen/arch/arm/numa.c
@@ -43,9 +43,11 @@ void __init numa_set_cpu_node(int cpu, unsigned long hwid)
     unsigned node;
 
     node = hwid >> 16 & 0xf;
+    printk("In %s cpu %d node %d\n", __func__, cpu, node);
     if ( !node_isset(node, numa_nodes_parsed) || node == MAX_NUMNODES )
         node = 0;
 
+    printk("In %s cpu %d node %d\n", __func__, cpu, node);
     numa_set_node(cpu, node);
     numa_add_cpu(cpu);
 }
@@ -245,3 +247,52 @@ int __init arch_numa_setup(char *opt)
 {
     return 1;
 }
+
+struct mem_list {
+    u64 start;
+    u64 end;
+};
+
+void numa_test(void)
+{
+    int i;
+
+    struct mem_list ml[] =
+    {
+        { 0x0140, 0xfffecfff },
+        { 0x0001, 0x000ff7ff },
+        { 0x000ff800, 0x000ff801 },
+        { 0x000ff802, 0x000fffa9cfff },
+        { 0x000fffa9d000, 0x000f },
+        { 0x0140, 0x010ff57b2fff },
+        { 0x010ff6618000, 0x010ff6ff0fff },
+        { 0x010ff6ff1000, 0x010ff724 },
+        { 0x010ff734b000, 0x010ff73defff },
+        { 0x010ff73f, 0x010ff73fbfff },
+        { 0x010ff73fc000, 0x010ff74defff },
+        { 0x010ff74df000, 0x010ff9718fff },
+        { 0x010ff97a2000, 0x010ff97acfff },
+        { 0x010ff97ad000, 0x010ff97b3fff },
+        { 0x010ff97b5000, 0x010ff9813fff },
+        { 0x010ff9814000, 0x010ff9819fff },
+        { 0x010ff981a000, 0x010ff984afff },
+        { 0x010ff984c000, 0x010ff9851fff },
+        { 0x010ff9935000, 0x010ffaeb5fff },
+        { 0x010ffaff5000, 0x010ffb008fff },
+        { 0x010ffb009000, 0x010fffe28fff },
+        { 0x010fffe29000, 0x010fffe70fff },
+        { 0x010fffe71000, 0x010b8fff },
+        { 0x010ff000, 0x010f },
+    };
+
+    for ( i = 0; i < ARRAY_SIZE(ml); i++ )
+    {
+        printk("NUMA MEM TEST: start 0x%lx in node %d end 0x%lx in node %d\n",
+               ml[i].start, phys_to_nid(ml[i].start), ml[i].end,
+               phys_to_nid(ml[i].end));
+    }
+
+    for ( i = 0; i < NR_CPUS; i++ )
+    {
+        printk("NUMA CPU TEST: cpu %d in node %d\n", i, cpu_to_node(i));
+    }
+}
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5612ba6..0598672 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -698,6 +698,7 @@ void __init setup_cache(void)
     cacheline_bytes = 1U << (4 + (ccsid & 0x7));
 }
 
+extern void numa_test(void);
 /* C entry point for boot CPU */
 void __init start_xen(unsigned long boot_phys_offset,
                       unsigned long fdt_paddr,
@@ -825,6 +826,7 @@ void __init start_xen(unsigned long boot_phys_offset,
         }
     }
 
+    numa_test();
     printk("Brought up %ld CPUs\n", (long)num_online_cpus());
     /* TODO: smp_cpus_done(); */

>
> Cheers,
>
> --
> Julien Grall



[Xen-devel] [RFC PATCH v1 00/21] ARM: Add Xen NUMA support

2017-02-09 Thread vijay . kilari
From: Vijaya Kumar K 

With this RFC patch series, NUMA support is added for the arm platform.
Both DT and ACPI based NUMA support is added.
Only Xen is made aware of the NUMA platform. Dom0 awareness is not
added.

As part of this series, the code under the x86 architecture is
reused by moving it into common files.
New files xen/common/numa.c and xen/common/srat.c are added,
common to both x86 and arm.

Patches 1 - 12 & 20 are for DT NUMA and 13 - 19 & 21 are for
ACPI NUMA.

DT NUMA: The following major changes are performed:
 - Dropped numa-node-id information from the Dom0 DT, so that Dom0
   devices allocate from node 0 for devmalloc requests.
 - The memory DT node is not deleted by EFI; it is exposed to Xen
   to extract NUMA information.
 - On NUMA failure, fall back to non-NUMA booting, assuming all
   memory and CPUs are under node 0.
 - CONFIG_NUMA is introduced.
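For context, the DT information a series like this parses follows the standard devicetree NUMA binding: a numa-node-id property on cpu and memory nodes, plus a numa-distance-map-v1 node for inter-node distances. An illustrative fragment (example values only, not taken from any particular board):

```dts
/* numa-node-id marks which node a cpu or memory range belongs to;
 * distance-matrix lists <from to distance> triplets. */
cpu@0 {
    device_type = "cpu";
    numa-node-id = <0>;
};

memory@80000000 {
    device_type = "memory";
    reg = <0x0 0x80000000 0x0 0x80000000>;
    numa-node-id = <1>;
};

distance-map {
    compatible = "numa-distance-map-v1";
    distance-matrix = <0 0 10>, <0 1 20>,
                      <1 0 20>, <1 1 10>;
};
```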

ACPI NUMA:
 - The MADT is parsed before the SRAT table to extract
   CPU_ID to MPIDR mapping info. In Linux, while parsing the SRAT
   table, the MADT is opened to extract the MPIDR. However, this
   approach does not work on Xen, which allows only one table to
   be open at a time: when an ACPI table is opened, Xen maps it to
   a single region, so opening ACPI tables recursively overwrites
   the contents.
 - The SRAT table is parsed for ACPI_SRAT_TYPE_GICC_AFFINITY to extract
   proximity info, with the MPIDR taken from the CPU_ID to MPIDR
   mapping table.
 - The SRAT table is parsed for ACPI_SRAT_TYPE_MEMORY_AFFINITY to extract
   memory proximity.
 - Re-use the SLIT parsing of x86 for node distance information.
 - CONFIG_ACPI_NUMA is introduced.

The node_distance() API is implemented separately for x86 and arm,
as arm has both DT and ACPI based distance information.
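As a rough illustration of that split (names and layout are my assumptions, not the series' actual code), an arm node_distance() could select between the two sources:

```c
/*
 * Illustrative sketch only: an arm node_distance() that returns the
 * ACPI SLIT entry when distances came from ACPI, and a DT-derived
 * matrix otherwise. All identifiers here are hypothetical.
 */
#include <assert.h>

#define MAX_NUMNODES     4
#define NUMA_NO_DISTANCE (-1)

static int acpi_slit[MAX_NUMNODES][MAX_NUMNODES];
static int dt_dist[MAX_NUMNODES][MAX_NUMNODES];
static int use_acpi_slit; /* non-zero: distances came from ACPI SLIT */

int node_distance(unsigned int from, unsigned int to)
{
    if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
        return NUMA_NO_DISTANCE;
    return use_acpi_slit ? acpi_slit[from][to] : dt_dist[from][to];
}
```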

No functional changes are made to the x86 implementation; the code is
only refactored. Hence x86 is only compile-tested.

Code is shared at https://github.com/vijaykilari/xen-numa rfc_1

Note: Please use this patch series only for review.
For testing, a patch to the boot allocator is required, which will
be sent outside this series.

Vijaya Kumar K (21):
  ARM: NUMA: Add existing ARM numa code under CONFIG_NUMA
  x86: NUMA: Refactor NUMA code
  NUMA: Move arch specific NUMA code as common
  NUMA: Refactor generic and arch specific code of numa_setup
  ARM: efi: Do not delete memory node from fdt
  ARM: NUMA: Parse CPU NUMA information
  ARM: NUMA: Parse memory NUMA information
  ARM: NUMA: Parse NUMA distance information
  ARM: NUMA: Add CPU NUMA support
  ARM: NUMA: Add memory NUMA support
  ARM: NUMA: Add fallback on NUMA failure
  ARM: NUMA: Do not expose numa info to DOM0
  ACPI: Refactor acpi SRAT and SLIT table handling code
  ACPI: Move srat_disabled to common code
  ARM: NUMA: Extract MPIDR from MADT table
  ARM: NUMA: Extract proximity from SRAT table
  ARM: NUMA: Extract memory proximity from SRAT table
  ARM: NUMA: update node_distance with ACPI support
  ARM: NUMA: Initialize ACPI NUMA
  ARM: NUMA: Enable CONFIG_NUMA config
  ARM: NUMA: Enable CONFIG_ACPI_NUMA config

 xen/arch/arm/Kconfig|   5 +
 xen/arch/arm/Makefile   |   3 +
 xen/arch/arm/acpi_numa.c| 257 +
 xen/arch/arm/bootfdt.c  |  21 +-
 xen/arch/arm/domain_build.c |   9 +
 xen/arch/arm/dt_numa.c  | 244 
 xen/arch/arm/efi/efi-boot.h |  25 --
 xen/arch/arm/numa.c | 249 
 xen/arch/arm/setup.c|   5 +
 xen/arch/arm/smpboot.c  |   3 +
 xen/arch/x86/domain_build.c |   1 +
 xen/arch/x86/numa.c | 318 +-
 xen/arch/x86/physdev.c  |   1 +
 xen/arch/x86/setup.c|   1 +
 xen/arch/x86/smpboot.c  |   1 +
 xen/arch/x86/srat.c | 183 +--
 xen/arch/x86/x86_64/mm.c|   1 +
 xen/common/Makefile |   2 +
 xen/common/numa.c   | 439 
 xen/common/srat.c   | 157 +
 xen/drivers/acpi/numa.c |  37 +++
 xen/drivers/acpi/osl.c  |   2 +
 xen/drivers/passthrough/vtd/iommu.c |   1 +
 xen/include/acpi/actbl1.h   |  17 +-
 xen/include/asm-arm/acpi.h  |   2 +
 xen/include/asm-arm/numa.h  |  41 
 xen/include/asm-x86/acpi.h  |   2 -
 xen/include/asm-x86/numa.h  |  53 +
 xen/include/xen/acpi.h  |  39 
 xen/include/xen/device_tree.h   |   7 +
 xen/include/xen/numa.h  |  61 -
 xen/include/xen/srat.h  |  15 ++
 32 files changed, 1620 insertions(+), 582 deletions(-)
 create mode 100644 xen/arch/arm/acpi_numa.c
 create mode 100644 xen/arch/arm/dt_numa.c
 create mode 100644 xen/arch/arm/numa.c
 create mode 100644 xen/common/numa.c
 create mode 100644 xen/common/srat.c
 create mode 100644 xen/include/xen/srat.h

-- 
2.7.4