Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeul...@suse.com>
> Sent: November 3, 2022 22:26
> To: Wei Chen <wei.c...@arm.com>
> Cc: nd <n...@arm.com>; Andrew Cooper <andrew.coop...@citrix.com>; Roger Pau
> Monné <roger....@citrix.com>; Wei Liu <w...@xen.org>; George Dunlap
> <george.dun...@citrix.com>; Julien Grall <jul...@xen.org>; Stefano
> Stabellini <sstabell...@kernel.org>; xen-devel@lists.xenproject.org
> Subject: Re: [PATCH v7 2/6] xen/x86: move generically usable NUMA code
> from x86 to common
> 
> On 20.10.2022 08:14, Wei Chen wrote:
> > Some code in x86/numa.c can be shared with other architectures to
> > implement NUMA support, such as the variables and functions that
> > check and store the NUMA memory map, and those that perform NUMA
> > initialization.
> >
> > In this patch, we move them to common/numa.c and xen/numa.h and use
> > CONFIG_NUMA to gate them on architectures without NUMA support. As
> > the target header file is Xen-style, we also trim some spaces and
> > replace tabs in the code that has been moved to xen/numa.h.
> >
> > As acpi_scan_nodes is now used in a common function, and it doesn't
> > make sense to use an acpi_xxx name in common code, we rename it to
> > numa_process_nodes in this patch too. After that, it no longer makes
> > sense to keep gating numa_process_nodes in numa_initmem_init with
> > CONFIG_ACPI_NUMA, as CONFIG_NUMA is selected by CONFIG_ACPI_NUMA on
> > x86. So we replace CONFIG_ACPI_NUMA with CONFIG_NUMA to gate
> > numa_process_nodes.
> >
> > As arch_numa_disabled has been implemented for ACPI NUMA, we can
> > rename srat_disabled to numa_disabled and move it to common code as
> > well.
> >
> > The macro node_to_first_cpu(node) hasn't been used anywhere,
> > so we drop it in this patch too.
> >
> > Because some architectures allow all 64 physical address bits to be
> > used while others do not (Arm64, for example, allows 52 or 48 bits),
> > this patch also uses min(PADDR_BITS, BITS_PER_LONG - 1) to calculate
> > the shift when only one node is present in the system.
> >
> > Signed-off-by: Wei Chen <wei.c...@arm.com>
> 
> Reviewed-by: Jan Beulich <jbeul...@suse.com>
> with one small further request (could be taken care of also while
> committing if no other need for a v8 arises):
> 

Thanks. This series currently has merge conflicts; do I need to send a
v8 to resolve them? If so, I will address the above as well, once
PATCH#5 has been reviewed.
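
By the way, to make the last paragraph of the commit message a bit more
concrete, the single-node shift calculation boils down to something like
the sketch below. This is illustrative only, not the exact hunk from the
patch, and the helper name single_node_shift is made up here:

#include <xen/kernel.h>   /* min_t() */
#include <asm/page.h>     /* PADDR_BITS (arch-specific) */

/*
 * Illustrative sketch (not the actual patch code): with only one node
 * online, every pdx maps to node 0, so a single memnodemap entry is
 * enough and the shift only has to cover the whole addressable range.
 * PADDR_BITS differs per architecture (e.g. 52 or 48 bits on Arm64),
 * and capping it at BITS_PER_LONG - 1 keeps the shift usable on
 * architectures that allow all 64 physical address bits.
 */
static unsigned int __init single_node_shift(void)
{
    return min_t(unsigned int, PADDR_BITS, BITS_PER_LONG - 1);
}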

> > --- /dev/null
> > +++ b/xen/common/numa.c
> > @@ -0,0 +1,464 @@
> > +/*
> > + * Generic VM initialization for NUMA setups.
> > + * Copyright 2002,2003 Andi Kleen, SuSE Labs.
> > + * Adapted for Xen: Ryan Harper <ry...@us.ibm.com>
> > + */
> > +
> > +#include <xen/init.h>
> > +#include <xen/keyhandler.h>
> > +#include <xen/mm.h>
> > +#include <xen/nodemask.h>
> > +#include <xen/numa.h>
> > +#include <xen/param.h>
> > +#include <xen/sched.h>
> > +#include <xen/softirq.h>
> > +
> > +struct node_data __ro_after_init node_data[MAX_NUMNODES];
> > +
> > +/* Mapping from pdx to node id */
> > +unsigned int __ro_after_init memnode_shift;
> > +unsigned long __ro_after_init memnodemapsize;
> > +nodeid_t *__ro_after_init memnodemap;
> > +static typeof(*memnodemap) __ro_after_init _memnodemap[64];
> > +
> > +nodeid_t __read_mostly cpu_to_node[NR_CPUS] = {
> > +    [0 ... NR_CPUS-1] = NUMA_NO_NODE
> > +};
> > +
> > +cpumask_t __read_mostly node_to_cpumask[MAX_NUMNODES];
> > +
> > +nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
> > +
> > +bool __ro_after_init numa_off;
> > +
> > +bool numa_disabled(void)
> > +{
> > +    return numa_off || arch_numa_disabled();
> > +}
> > +
> > +/*
> > + * Given a shift value, try to populate memnodemap[]
> > + * Returns :
> > + * 1 if OK
> > + * 0 if memnodmap[] too small (of shift too small)
> 
> May I ask that you correct this comment line: "of" (alone) makes no sense
> here. Either "or" was meant or it would want to be "because of". Unless
> this is a language tweak I'm entirely unaware of ...

Yes, if we need a v8, I will correct it.

Cheers,
Wei Chen

> 
> Jan
