On Fri, Sep 30, 2022 at 10:36:20AM +0200, Jan Beulich wrote:
> On 30.09.2022 10:25, Roger Pau Monné wrote:
> > On Tue, Sep 27, 2022 at 06:20:35PM +0200, Jan Beulich wrote:
> >> SRAT may describe individual nodes using multiple ranges. When they're
> >> adjacent (with or without a gap in between), only the start of the first
> >> such range actually needs accounting for. Furthermore the very first
> >> range doesn't need considering of its start address at all, as it's fine
> >> to associate all lower addresses (with no memory) with that same node.
> >> For this to work, the array of ranges needs to be sorted by address -
> >> adjust logic accordingly in acpi_numa_memory_affinity_init().
> >
> > Speaking for myself (due to the lack of knowledge of the NUMA stuff) I
> > would benefit from a bit of context on why and how memnode_shift is
> > used.
>
> It's used solely to shift PDXes right before indexing memnodemap[].
> Hence a larger shift allows for a smaller array (less memory and,
> perhaps more importantly, less cache footprint). Personally I don't
> think such needs mentioning in a patch, but I know others think
> differently (George for example looks to believe that the present
> situation should always be described). In the case here a simple
> grep for memnode_shift would tell you, and even if I described this
> here it wouldn't really help review I think: You'd still want to
> verify then that what I claim actually matches reality.
Right, that's why I like some context with the patches. Sometimes (and
I'm not saying it's the case here) the context analysis done by the
patch submitter is wrong, and hence it's easier to detect and point out
if it's part of the commit message. IMO it also helps when looking at
git history.

> >> @@ -413,14 +414,37 @@ acpi_numa_memory_affinity_init(const str
> >>  		       node, pxm, start, end - 1,
> >>  		       ma->flags & ACPI_SRAT_MEM_HOT_PLUGGABLE ? " (hotplug)" : "");
> >>
> >> -	node_memblk_range[num_node_memblks].start = start;
> >> -	node_memblk_range[num_node_memblks].end = end;
> >> -	memblk_nodeid[num_node_memblks] = node;
> >> +	/* Keep node_memblk_range[] sorted by address. */
> >> +	for (i = 0; i < num_node_memblks; ++i)
> >> +		if (node_memblk_range[i].start > start ||
> >> +		    (node_memblk_range[i].start == start &&
> >
> > Maybe I'm confused, but won't .start == start mean we have
> > overlapping ranges?
>
> Yes, except when a range is empty.

Hm, OK.  Won't overlapping ranges be bad if not empty?  Shouldn't Xen
complain if it finds overlapping ranges, or is that taken care of
somewhere else?

Thanks, Roger.
