Hi Mike, Andrew,

On Thu, Mar 13 2025, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <r...@kernel.org> > > high_memory defines upper bound on the directly mapped memory. > This bound is defined by the beginning of ZONE_HIGHMEM when a system has > high memory and by the end of memory otherwise. > > All this is known to generic memory management initialization code that > can set high_memory while initializing core mm structures. > > Add a generic calculation of high_memory to free_area_init() and remove > per-architecture calculation except for the architectures that set and > use high_memory earlier than that. > > Acked-by: Dave Hansen <dave.han...@linux.intel.com> # x86 > Signed-off-by: Mike Rapoport (Microsoft) <r...@kernel.org> > --- > arch/alpha/mm/init.c | 1 - > arch/arc/mm/init.c | 2 -- > arch/arm64/mm/init.c | 2 -- > arch/csky/mm/init.c | 1 - > arch/hexagon/mm/init.c | 6 ------ > arch/loongarch/kernel/numa.c | 1 - > arch/loongarch/mm/init.c | 2 -- > arch/microblaze/mm/init.c | 2 -- > arch/mips/mm/init.c | 2 -- > arch/nios2/mm/init.c | 6 ------ > arch/openrisc/mm/init.c | 2 -- > arch/parisc/mm/init.c | 1 - > arch/riscv/mm/init.c | 1 - > arch/s390/mm/init.c | 2 -- > arch/sh/mm/init.c | 7 ------- > arch/sparc/mm/init_32.c | 1 - > arch/sparc/mm/init_64.c | 2 -- > arch/um/kernel/um_arch.c | 1 - > arch/x86/kernel/setup.c | 2 -- > arch/x86/mm/init_32.c | 3 --- > arch/x86/mm/numa_32.c | 3 --- > arch/xtensa/mm/init.c | 2 -- > mm/memory.c | 8 -------- > mm/mm_init.c | 30 ++++++++++++++++++++++++++++++ > mm/nommu.c | 2 -- > 25 files changed, 30 insertions(+), 62 deletions(-) This patch causes a BUG() when built with CONFIG_DEBUG_VIRTUAL and passing in the cma= commandline parameter: ------------[ cut here ]------------ kernel BUG at arch/x86/mm/physaddr.c:23! 
exception 0x06 IP 10:ffffffff812ebbf8 error 0 cr2 0xffff88903ffff000
CPU: 0 UID: 0 PID: 0 Comm: swapper Not tainted 6.15.0-rc6+ #231 PREEMPT(undef)
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Arch Linux 1.16.3-1-1 04/01/2014
RIP: 0010:__phys_addr+0x58/0x60
Code: 01 48 89 c2 48 d3 ea 48 85 d2 75 05 e9 91 52 cf 00 0f 0b 48 3d ff ff ff 1f 77 0f 48 8b 05 20 54 55 01 48 01 d0 e9 78 52 cf 00 <0f> 0b 90 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 0000:ffffffff82803dd8 EFLAGS: 00010006 ORIG_RAX: 0000000000000000
RAX: 000000007fffffff RBX: 00000000ffffffff RCX: 0000000000000000
RDX: 000000007fffffff RSI: 0000000280000000 RDI: ffffffffffffffff
RBP: ffffffff82803e68 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff83153180 R11: ffffffff82803e48 R12: ffffffff83c9aed0
R13: 0000000000000000 R14: 0000001040000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:0000000000000000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff88903ffff000 CR3: 0000000002838000 CR4: 00000000000000b0
Call Trace:
 <TASK>
 ? __cma_declare_contiguous_nid+0x6e/0x340
 ? cma_declare_contiguous_nid+0x33/0x70
 ? dma_contiguous_reserve_area+0x2f/0x70
 ? setup_arch+0x6f1/0x870
 ? start_kernel+0x52/0x4b0
 ? x86_64_start_reservations+0x29/0x30
 ? x86_64_start_kernel+0x7c/0x80
 ? common_startup_64+0x13e/0x141

The reason is that __cma_declare_contiguous_nid() does:

	highmem_start = __pa(high_memory - 1) + 1;

If dma_contiguous_reserve_area() (or any other CMA declaration) is
called before free_area_init(), high_memory is still uninitialized.
Without CONFIG_DEBUG_VIRTUAL it will likely not crash, but it will
compute the wrong value for highmem_start.
Among the architectures this patch touches, the following call
dma_contiguous_reserve_area() _before_ free_area_init():

- x86
- s390
- mips
- riscv
- xtensa
- loongarch
- csky

The following call it _after_ free_area_init():

- arm64

And the following don't call it at all:

- sparc
- nios2
- openrisc
- hexagon
- sh
- um
- alpha

One possible fix would be to move the calls to
dma_contiguous_reserve_area() after free_area_init(). On x86, it would
look like the diff below. The obvious downside is that moving the call
later increases the chances of allocation failure. I'm not sure how much
that actually matters, but at least on x86 it means the crash kernel and
hugetlb reservations happen before the DMA reservation. Also, adding a
patch like that at rc7 is a bit risky.

The other option would be to revert this. I tried a revert, but it isn't
trivial: it runs into merge conflicts in pretty much all of the arch
files. Maybe reverting patches 11, 12, and 13 as well would make it
easier, but I didn't try that.

Which option should we take? If we want to move
dma_contiguous_reserve_area() a bit further down the line, I can send a
patch doing that for the rest of the architectures. Otherwise I can try
my hand at the revert.

--- 8< ---
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 9d2a13b37833..ca6928dde0c9 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1160,7 +1160,6 @@ void __init setup_arch(char **cmdline_p)
 	x86_flattree_get_config();
 
 	initmem_init();
-	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
 
 	if (boot_cpu_has(X86_FEATURE_GBPAGES)) {
 		hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
@@ -1178,6 +1177,8 @@ void __init setup_arch(char **cmdline_p)
 
 	x86_init.paging.pagetable_init();
 
+	dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
+
 	kasan_init();
 
 	/*

-- 
Regards,
Pratyush Yadav

Amazon Web Services Development Center Germany GmbH
Tamara-Danz-Str. 13
10243 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 257764 B
Sitz: Berlin
Ust-ID: DE 365 538 597