On 2/22/21 10:40 PM, Dan Williams wrote:
> On Mon, Feb 22, 2021 at 3:42 AM Joao Martins <[email protected]> wrote:
>> On 2/20/21 3:34 AM, Dan Williams wrote:
>>> On Tue, Dec 8, 2020 at 9:32 AM Joao Martins <[email protected]> wrote:
>>>> Sections are 128M (or bigger/smaller),
>>>
>>> Huh?
>>>
>>
>> Section size is arch-dependent if we are being holistic.
>> On x86 it's 64M, 128M or 512M, right?
>>
>> #ifdef CONFIG_X86_32
>> # ifdef CONFIG_X86_PAE
>> # define SECTION_SIZE_BITS 29
>> # define MAX_PHYSMEM_BITS 36
>> # else
>> # define SECTION_SIZE_BITS 26
>> # define MAX_PHYSMEM_BITS 32
>> # endif
>> #else /* CONFIG_X86_32 */
>> # define SECTION_SIZE_BITS 27 /* matt - 128 is convenient right now */
>> # define MAX_PHYSMEM_BITS (pgtable_l5_enabled() ? 52 : 46)
>> #endif
>>
>> Also, my point about section sizes is that a 1GB+ page vmemmap
>> population will cross sections in how sparsemem populates the vmemmap.
>> In that case we have to reuse the PTE/PMD pages across multiple
>> invocations of vmemmap_populate_basepages(). Either that, or look at
>> the previous page's PTE, but that might be inefficient.
>
> Ok, makes sense. I think this description of needing to handle
> section crossing is clearer than mentioning one of the section sizes.
>
I'll amend the commit message to include this.
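
To make the crossing concrete (assuming 4K base pages and a 64-byte
struct page): a 128M section (1UL << 27 above) covers 32768 base pages,
i.e. 2M of vmemmap, while a 1G page needs 262144 struct pages, i.e. 16M
of vmemmap. So the memmap for a single 1G page spans 8 separate section
populations, and the PTE/PMD pages have to be carried across those
invocations.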
>>
>>>> @@ -229,38 +235,95 @@ int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
>>>> for (; addr < end; addr += PAGE_SIZE) {
>>>> pgd = vmemmap_pgd_populate(addr, node);
>>>> if (!pgd)
>>>> - return -ENOMEM;
>>>> + return NULL;
>>>> p4d = vmemmap_p4d_populate(pgd, addr, node);
>>>> if (!p4d)
>>>> - return -ENOMEM;
>>>> + return NULL;
>>>> pud = vmemmap_pud_populate(p4d, addr, node);
>>>> if (!pud)
>>>> - return -ENOMEM;
>>>> + return NULL;
>>>> pmd = vmemmap_pmd_populate(pud, addr, node);
>>>> if (!pmd)
>>>> - return -ENOMEM;
>>>> - pte = vmemmap_pte_populate(pmd, addr, node, altmap);
>>>> + return NULL;
>>>> + pte = vmemmap_pte_populate(pmd, addr, node, altmap, block);
>>>> if (!pte)
>>>> - return -ENOMEM;
>>>> + return NULL;
>>>> vmemmap_verify(pte, node, addr, addr + PAGE_SIZE);
>>>> }
>>>>
>>>> + return __va(__pfn_to_phys(pte_pfn(*pte)));
>>>> +}
>>>> +
>>>> +int __meminit vmemmap_populate_basepages(unsigned long start, unsigned long end,
>>>> +                                        int node, struct vmem_altmap *altmap)
>>>> +{
>>>> + if (!__vmemmap_populate_basepages(start, end, node, altmap, NULL))
>>>> + return -ENOMEM;
>>>> return 0;
>>>> }
>>>>
>>>> +static struct page * __meminit vmemmap_populate_reuse(unsigned long start,
>>>> + unsigned long end, int node,
>>>> + struct vmem_context *ctx)
>>>> +{
>>>> + unsigned long size, addr = start;
>>>> + unsigned long psize = PHYS_PFN(ctx->align) * sizeof(struct page);
>>>> +
>>>> + size = min(psize, end - start);
>>>> +
>>>> + for (; addr < end; addr += size) {
>>>> + unsigned long head = addr + PAGE_SIZE;
>>>> + unsigned long tail = addr;
>>>> + unsigned long last = addr + size;
>>>> + void *area;
>>>> +
>>>> + if (ctx->block_page &&
>>>> + IS_ALIGNED((addr - ctx->block_page), psize))
>>>> + ctx->block = NULL;
>>>> +
>>>> + area = ctx->block;
>>>> + if (!area) {
>>>> +                       if (!__vmemmap_populate_basepages(addr, head, node,
>>>> +                                                         ctx->altmap, NULL))
>>>> +                               return NULL;
>>>> +
>>>> +                       tail = head + PAGE_SIZE;
>>>> +                       area = __vmemmap_populate_basepages(head, tail, node,
>>>> +                                                           ctx->altmap, NULL);
>>>> + if (!area)
>>>> + return NULL;
>>>> +
>>>> + ctx->block = area;
>>>> + ctx->block_page = addr;
>>>> + }
>>>> +
>>>> + if (!__vmemmap_populate_basepages(tail, last, node,
>>>> + ctx->altmap, area))
>>>> + return NULL;
>>>> + }
>>>
>>> I think that compound page accounting and combined altmap accounting
>>> makes this difficult to read, and I think the compound page case
>>> deserves its own first-class loop rather than reusing
>>> vmemmap_populate_basepages(). With the suggestion to drop altmap
>>> support I'd expect a vmemmap_populate_compound() that takes a compound
>>> page size and does the right thing with respect to mapping all the
>>> tail pages to the same pfn.
>>>
>> I can move this to a separate loop as suggested.
>>
>> But to be able to map all tail pages in one call of
>> vmemmap_populate_compound(), this might require changes in sparsemem
>> generic code that I am not so sure warrant the added complexity.
>> Otherwise I'll probably have to keep this logic of @ctx to be able to
>> pass the page to be reused (i.e. @block and @block_page). That's
>> actually the main reason that made me introduce a struct vmem_context.
>
> Do you need to pass in a vmem_context, isn't that context local to
> vmemmap_populate_compound_pages()?
>
Hmm, so we allocate a vmem_context (initialized to zeroes) in
__add_pages(), and then we use the same vmem_context across all sections
we are onlining from the pfn range passed in __add_pages(). So all
sections use the same vmem_context. Then we take care in
vmemmap_populate_compound_pages() to check whether there was a @block
allocated that needs to be reused.

So while the content itself is private/local to
vmemmap_populate_compound_pages(), we still rely on
vmemmap_populate_compound_pages() always getting the same vmem_context
location passed in for all sections being onlined in the whole pfn
range.
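
To make that concrete, the context is essentially this (a rough sketch
only, using the field names from the patch above):

struct vmem_context {
	struct vmem_altmap *altmap;	/* optional altmap for the range */
	unsigned long align;		/* compound page size, e.g. 1G */
	void *block;			/* vmemmap page reused for tail pages */
	unsigned long block_page;	/* address where the reuse window began */
};

__add_pages() zero-initializes one of these and passes the same pointer
down for every section in the range, so @block/@block_page survive from
one vmemmap_populate_compound_pages() call to the next. If the generic
code were changed so that a single call covers a whole compound page,
the state could instead stay local, along these lines (hypothetical
sketch only; vmemmap_populate_address() stands in for a one-page
populate helper that maps to a given reuse block when it is non-NULL
and returns the mapped page's virtual address):

static int __meminit vmemmap_populate_compound(unsigned long start,
		unsigned long end, int node, unsigned long align)
{
	/* bytes of vmemmap needed per compound page */
	unsigned long size = PHYS_PFN(align) * sizeof(struct page);
	unsigned long addr;

	for (addr = start; addr < end; addr += size) {
		unsigned long head = addr + PAGE_SIZE;
		unsigned long tail = head + PAGE_SIZE;
		void *block;

		/* head struct pages: populated normally, unique pfns */
		if (!vmemmap_populate_address(addr, node, NULL))
			return -ENOMEM;

		/* first page of tail struct pages: the page we then reuse */
		block = vmemmap_populate_address(head, node, NULL);
		if (!block)
			return -ENOMEM;

		/* remaining tail vmemmap pages all map to the same pfn */
		for (; tail < addr + size; tail += PAGE_SIZE)
			if (!vmemmap_populate_address(tail, node, block))
				return -ENOMEM;
	}
	return 0;
}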