On Thu, Feb 26, 2026 at 05:56:30PM +0530, Ritesh Harjani (IBM) wrote:
> Architecture like powerpc, checks for pfn_valid() in their
> virt_to_phys() implementation (when CONFIG_DEBUG_VIRTUAL is enabled) [1].
> Commit d49004c5f0c1 "arch, mm: consolidate initialization of nodes, zones and memory map"
> changed the order of initialization between hugetlb_bootmem_alloc() and
> free_area_init(). This means, pfn_valid() can now return false in
> alloc_bootmem() path, since sparse_init() is not yet done.
>
> Since, alloc_bootmem() uses memblock_alloc(.., MEMBLOCK_ALLOC_ACCESSIBLE), this
> means these allocations are always going to happen below high_memory, where
> __pa() should return valid physical addresses. Hence this patch converts
> the two callers of virt_to_phys() in alloc_bootmem() path to __pa() to avoid
> this bootup warning:
>
> ------------[ cut here ]------------
> WARNING: arch/powerpc/include/asm/io.h:879 at virt_to_phys+0x44/0x1b8, CPU#0: swapper/0
> Modules linked in:
> <...>
> NIP [c000000000601584] virt_to_phys+0x44/0x1b8
> LR [c000000004075de4] alloc_bootmem+0x144/0x1a8
> Call Trace:
> [c000000004d1fb50] [c000000004075dd4] alloc_bootmem+0x134/0x1a8
> [c000000004d1fba0] [c000000004075fac] __alloc_bootmem_huge_page+0x164/0x230
> [c000000004d1fbe0] [c000000004030bc4] alloc_bootmem_huge_page+0x44/0x138
> [c000000004d1fc10] [c000000004076e48] hugetlb_hstate_alloc_pages+0x350/0x5ac
> [c000000004d1fd30] [c0000000040782f0] hugetlb_bootmem_alloc+0x15c/0x19c
> [c000000004d1fd70] [c00000000406d7b4] mm_core_init_early+0x7c/0xdf4
> [c000000004d1ff30] [c000000004011d84] start_kernel+0xac/0xc58
> [c000000004d1ffe0] [c00000000000e99c] start_here_common+0x1c/0x20
>
> [1]: https://lore.kernel.org/linuxppc-dev/[email protected]/
>
> Fixes: d49004c5f0c1 ("arch, mm: consolidate initialization of nodes, zones and memory map")
> Signed-off-by: Ritesh Harjani (IBM) <[email protected]>
Reviewed-by: Mike Rapoport (Microsoft) <[email protected]>

> ---
>  mm/hugetlb.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6e855a32de3d..43e0c95738a6 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3101,7 +3101,7 @@ static __init void *alloc_bootmem(struct hstate *h, int nid, bool node_exact)
>  		 * extract the actual node first.
>  		 */
>  		if (m)
> -			listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
> +			listnode = early_pfn_to_nid(PHYS_PFN(__pa(m)));
>  	}
>
>  	if (m) {
> @@ -3160,7 +3160,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
>  	 * The head struct page is used to get folio information by the HugeTLB
>  	 * subsystem like zone id and node id.
>  	 */
> -	memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
> +	memblock_reserved_mark_noinit(__pa((void *)m + PAGE_SIZE),
>  			huge_page_size(h) - PAGE_SIZE);
>
>  	return 1;
> --
> 2.53.0

--
Sincerely yours,
Mike.
