On Mon, Feb 22, 2021 at 3:42 AM Joao Martins <[email protected]> wrote:
>
>
>
> On 2/20/21 3:34 AM, Dan Williams wrote:
> > On Tue, Dec 8, 2020 at 9:32 AM Joao Martins <[email protected]>
> > wrote:
> >>
> >> Introduce a new flag, MEMHP_REUSE_VMEMMAP, which signals that
> >> struct pages are onlined with a given alignment, and should reuse the
> >> tail pages vmemmap areas. On that circunstamce we reuse the PFN backing
> >
> > s/On that circunstamce we reuse/Reuse/
> >
> > Kills a "we" and switches to imperative tense. I noticed a couple
> > other "we"s in the previous patches, but this crossed my threshold to
> > make a comment.
> >
> /me nods. Will fix.
>
> >> only the tail pages subsections, while letting the head page PFN remain
> >> different. This presumes that the backing page structs are compound
> >> pages, such as the case for compound pagemaps (i.e. ZONE_DEVICE with
> >> PGMAP_COMPOUND set)
> >>
> >> On 2M compound pagemaps, it lets us save 6 pages out of the 8 necessary
> >
> > s/lets us save/saves/
> >
> Will fix.
>
> >> PFNs necessary
> >
> > s/8 necessary PFNs necessary/8 PFNs necessary/
>
> Will fix.
>
> >
> >> to describe the subsection's 32K struct pages we are
> >> onlining.
> >
> > s/we are onlining/being mapped/
> >
> > ...because ZONE_DEVICE pages are never "onlined".
> >
> >> On a 1G compound pagemap it let us save 4096 pages.
> >
> > s/lets us save/saves/
> >
>
> Will fix both.
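
FWIW the numbers check out for me, if I'm reading the patch right and
assuming 4K base pages and a 64-byte struct page: a 2M compound page spans
512 base pages, so 512 * 64 = 32K of struct pages, i.e. 8 vmemmap pages per
2M area, and keeping only the head page plus one tail page distinct is
where the 6-page saving comes from. A 1G compound page needs 262144 * 64 =
16M of struct pages, i.e. 4096 vmemmap pages per 1G area.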
>
> >>
> >> Sections are 128M (or bigger/smaller),
> >
> > Huh?
> >
>
> Section size is arch-dependent, if we are being holistic.
> On x86 it's 64M, 128M or 512M, right?
>
> #ifdef CONFIG_X86_32
> # ifdef CONFIG_X86_PAE
> # define SECTION_SIZE_BITS 29
> # define MAX_PHYSMEM_BITS 36
> # else
> # define SECTION_SIZE_BITS 26
> # define MAX_PHYSMEM_BITS 32
> # endif
> #else /* CONFIG_X86_32 */
> # define SECTION_SIZE_BITS 27 /* matt - 128 is convenient right now */
> # define MAX_PHYSMEM_BITS (pgtable_l5_enabled() ? 52 : 46)
> #endif
>
> Also, my point about section sizes is that a 1GB+ page vmemmap population
> will cross sections in how sparsemem populates the vmemmap. In that case
> we have to reuse the PTE/PMD pages across multiple invocations of
> vmemmap_populate_basepages(). Either that, or look at the previous page's
> PTE, but that might be inefficient.
Ok, makes sense. I think this description of needing to handle section
crossing is clearer than mentioning one of the section sizes.
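
For reference, the arithmetic behind that (assuming x86_64 with 128M
sections, 4K base pages, and a 64-byte struct page): each 128M section
needs 32768 * 64 = 2M of vmemmap, while a 1G compound page spans 8 sections
and needs 16M of vmemmap, so its vmemmap ends up populated across several
separate invocations and the reused tail page has to survive between them.
And yes, 1 << 26 / 1 << 27 / 1 << 29 works out to 64M / 128M / 512M, so the
sizes you list match the defines above.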
>
> >> @@ -229,38 +235,95 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> >> for (; addr < end; addr += PAGE_SIZE) {
> >> pgd = vmemmap_pgd_populate(addr, node);
> >> if (!pgd)
> >> - return -ENOMEM;
> >> + return NULL;
> >> p4d = vmemmap_p4d_populate(pgd, addr, node);
> >> if (!p4d)
> >> - return -ENOMEM;
> >> + return NULL;
> >> pud = vmemmap_pud_populate(p4d, addr, node);
> >> if (!pud)
> >> - return -ENOMEM;
> >> + return NULL;
> >> pmd = vmemmap_pmd_populate(pud, addr, node);
> >> if (!pmd)
> >> - return -ENOMEM;
> >> - pte = vmemmap_pte_populate(pmd, addr, node, altmap);
> >> + return NULL;
> >> + pte = vmemmap_pte_populate(pmd, addr, node, altmap, block);
> >> if (!pte)
> >> - return -ENOMEM;
> >> + return NULL;
> >> vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
> >> }
> >>
> >> + return __va(__pfn_to_phys(pte_pfn(*pte)));
> >> +}
> >> +
> >> +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
> >> +                                          int node, struct vmem_altmap *altmap)
> >> +{
> >> + if (!__vmemmap_populate_basepages(start, end, node, altmap, NULL))
> >> + return -ENOMEM;
> >> return 0;
> >> }
> >>
> >> +static struct page * __meminit vmemmap_populate_reuse(unsigned long start,
> >> + unsigned long end, int node,
> >> + struct vmem_context *ctx)
> >> +{
> >> + unsigned long size, addr = start;
> >> + unsigned long psize = PHYS_PFN(ctx->align) * sizeof(struct page);
> >> +
> >> + size = min(psize, end - start);
> >> +
> >> + for (; addr < end; addr += size) {
> >> + unsigned long head = addr + PAGE_SIZE;
> >> + unsigned long tail = addr;
> >> + unsigned long last = addr + size;
> >> + void *area;
> >> +
> >> + if (ctx->block_page &&
> >> + IS_ALIGNED((addr - ctx->block_page), psize))
> >> + ctx->block = NULL;
> >> +
> >> + area = ctx->block;
> >> + if (!area) {
> >> + if (!__vmemmap_populate_basepages(addr, head, node,
> >> +                                                          ctx->altmap, NULL))
> >> + return NULL;
> >> +
> >> + tail = head + PAGE_SIZE;
> >> +                        area = __vmemmap_populate_basepages(head, tail, node,
> >> +                                                            ctx->altmap, NULL);
> >> + if (!area)
> >> + return NULL;
> >> +
> >> + ctx->block = area;
> >> + ctx->block_page = addr;
> >> + }
> >> +
> >> + if (!__vmemmap_populate_basepages(tail, last, node,
> >> + ctx->altmap, area))
> >> + return NULL;
> >> + }
> >
> > I think that the compound page accounting and combined altmap accounting
> > make this difficult to read, and I think the compound page case
> > deserves its own first-class loop rather than reusing
> > vmemmap_populate_basepages(). With the suggestion to drop altmap
> > support I'd expect a vmemmap_populate_compound() that takes a compound
> > page size and does the right thing with respect to mapping all the
> > tail pages to the same pfn.
> >
> I can move this to a separate loop as suggested.
>
> But to be able to map all tail pages in one call of vmemmap_populate_compound(),
> this might require changes in sparsemem generic code that I am not so sure
> warrant the added complexity. Otherwise I'll probably have to keep this
> logic of @ctx to be able to pass the page to be reused (i.e. @block and
> @block_page). That's actually the main reason that made me introduce
> a struct vmem_context.
Do you need to pass in a vmem_context? Isn't that context local to
vmemmap_populate_compound_pages()?
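
To make the suggestion concrete, here is a rough, untested sketch of what I
have in mind, built on the __vmemmap_populate_basepages() helper from this
patch. The name vmemmap_populate_compound() and the @align parameter
(compound page size in bytes) are just placeholders, and per your note
above the cross-section case would still need a way to find the reused
tail page again:

static int __meminit vmemmap_populate_compound(unsigned long start,
					       unsigned long end, int node,
					       struct vmem_altmap *altmap,
					       unsigned long align)
{
	/* untested sketch: @align is assumed to be the compound page size */
	unsigned long psize = PHYS_PFN(align) * sizeof(struct page);
	unsigned long size = min(psize, end - start);
	unsigned long addr;

	for (addr = start; addr < end; addr += size) {
		unsigned long head = addr + PAGE_SIZE;
		unsigned long tail = head + PAGE_SIZE;
		unsigned long last = addr + size;
		void *block;

		/* the head struct page gets its own vmemmap page */
		if (!__vmemmap_populate_basepages(addr, head, node, altmap, NULL))
			return -ENOMEM;

		/* first tail vmemmap page, reused for the rest of the area */
		block = __vmemmap_populate_basepages(head, tail, node, altmap, NULL);
		if (!block)
			return -ENOMEM;

		/* remaining tail struct pages all map to the same pfn */
		if (!__vmemmap_populate_basepages(tail, last, node, altmap, block))
			return -ENOMEM;
	}

	return 0;
}

i.e. the @block / @block_page state stays on the stack of the compound
loop rather than leaking a vmem_context out into the generic sparsemem
code.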