On Tue, Dec 16, 2025 at 01:57:19PM +0200 Mike Rapoport wrote:
> Hi Evangelos,
> 
> On Tue, Dec 16, 2025 at 08:49:12AM +0000, Evangelos Petrongonas wrote:
> > When `CONFIG_DEFERRED_STRUCT_PAGE_INIT` is enabled, struct page
> 
> No need for markup formatting in the changelog.
> 
ack

> > initialization is deferred to parallel kthreads that run later
> > in the boot process.
> > 
> > During KHO restoration, `deserialize_bitmap()` writes metadata for
> > each preserved memory region. However, if the struct page has not been
> > initialized, this write targets uninitialized memory, potentially
> > leading to errors like:
> > ```
> > BUG: unable to handle page fault for address: ...
> > ```
> > 
> > Fix this by introducing `kho_get_preserved_page()`,  which ensures
> > all struct pages in a preserved region are initialized by calling
> > `init_deferred_page()` which is a no-op when deferred init is disabled
> > or when the struct page is already initialized.
> > 
> > Fixes: 8b66ed2c3f42 ("kho: mm: don't allow deferred struct page with KHO")
> > Signed-off-by: Evangelos Petrongonas <[email protected]>
> > ---
> 
> ...
> 
> > +static struct page *__init kho_get_preserved_page(phys_addr_t phys,
> > +                                             unsigned int order)
> > +{
> > +   unsigned long pfn = PHYS_PFN(phys);
> > +   int nid = early_pfn_to_nid(pfn);
> > +
> > +   for (int i = 0; i < (1 << order); i++)
> > +           init_deferred_page(pfn + i, nid);
> 
> This will skip pages below node->first_deferred_pfn, we need to use
> __init_page_from_nid() here.
> 
Right, __init_page_from_nid() unconditionally initializes the page,
whereas init_deferred_page() assumes pfns below first_deferred_pfn were
already initialized during early boot and skips them.

> > +
> > +   return pfn_to_page(pfn);
> > +}
> > +
> >  static void __init deserialize_bitmap(unsigned int order,
> >                                   struct khoser_mem_bitmap_ptr *elm)
> >  {
> > @@ -449,7 +466,7 @@ static void __init deserialize_bitmap(unsigned int 
> > order,
> >             int sz = 1 << (order + PAGE_SHIFT);
> >             phys_addr_t phys =
> >                     elm->phys_start + (bit << (order + PAGE_SHIFT));
> > -           struct page *page = phys_to_page(phys);
> > +           struct page *page = kho_get_preserved_page(phys, order);
> 
> I think it's better to initialize deferred struct pages later in
> kho_restore_page. deserialize_bitmap() runs before SMP and it already does
> heavy lifting of memblock_reserve()s. Delaying struct page initialization
> until restore makes it at least run in parallel with other initialization
> tasks.
> 
> I started to work on this just before plumbers and I have something
> untested here:
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/rppt/linux.git/log/?h=kho/deferred-page/v0.1
> 

Nice suggestion! I looked at your branch and I agree; your
approach seems better.

I also noticed your debug check:
```
        if (IS_ENABLED(CONFIG_KEXEC_HANDOVER_DEBUG))
                WARN_ON(nid != early_pfn_to_nid(pfn + i));
```

This catches, or at least makes it easier to debug, another potential
(although unlikely?) issue that my patch missed: preserved pages
spanning multiple NUMA nodes within a single higher-order allocation.
Nice to have this :)

I am happy to drop my patch in favor of yours. FWIW, I quickly tested
it with both the modified selftest and a custom payload, and it seems
to work fine. Please let me know once you post the patches.

> >             union kho_page_info info;
> >  
> >             memblock_reserve(phys, sz);
> > -- 
> > 2.43.0
> > 
> > 
> > 
> > 
> > Amazon Web Services Development Center Germany GmbH
> > Tamara-Danz-Str. 13
> > 10243 Berlin
> > Geschaeftsfuehrung: Christof Hellmis, Andreas Stieger
> > Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
> > Sitz: Berlin
> > Ust-ID: DE 365 538 597
> > 
> 
> -- 
> Sincerely yours,
> Mike.

Kind Regards,
Evangelos


