On Thu, Feb 06, 2025 at 08:50:30PM -0800, Andrew Morton wrote:
> My x86_64 allmodconfig sayeth:
>
> WARNING: modpost: vmlinux: section mismatch in reference:
> kho_reserve_scratch+0xca (section: .text) -> memblock_alloc_try_nid (section:
> .init.text)
> WARNING: modpost: vmlinux: section mismatch in reference:
> kho_reserve_scratch+0xf5 (section: .text) -> scratch_scale (section:
> .init.data)
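
(For context: modpost flags these because kho_reserve_scratch(), placed in
.text, references memblock_alloc_try_nid() and scratch_scale, which live in
.init.text and .init.data and are freed after boot. A minimal sketch of the
pattern, using hypothetical names rather than the actual KHO code:

	#include <linux/init.h>

	/* Illustrative names only, sketching the mismatch pattern. */
	static int __initdata my_scale = 4;	/* lands in .init.data */

	static int __init my_setup(void)	/* lands in .init.text */
	{
		return my_scale;
	}

	/*
	 * Without __init this lands in .text and outlives the init
	 * sections, so the reference below is what modpost reports.
	 */
	static void my_caller(void)
	{
		my_setup();
	}

Annotating the caller with __init moves it into .init.text, so caller and
callee share the same lifetime and the warning goes away.)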
This should fix it:

>From 176767698d4ac5b7cddffe16677b60cb18dce786 Mon Sep 17 00:00:00 2001
From: "Mike Rapoport (Microsoft)" <r...@kernel.org>
Date: Fri, 7 Feb 2025 09:57:09 +0200
Subject: [PATCH] kho: make kho_reserve_scratch and kho_init_reserved_pages
 __init

Signed-off-by: Mike Rapoport (Microsoft) <r...@kernel.org>
---
 kernel/kexec_handover.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
index c21ea2a09d47..e0b92011afe2 100644
--- a/kernel/kexec_handover.c
+++ b/kernel/kexec_handover.c
@@ -620,7 +620,7 @@ static phys_addr_t __init scratch_size(int nid)
  * active. This CMA region will only be used for movable pages which are not a
  * problem for us during KHO because we can just move them somewhere else.
  */
-static void kho_reserve_scratch(void)
+static void __init kho_reserve_scratch(void)
 {
 	phys_addr_t addr, size;
 	int nid, i = 1;
@@ -672,7 +672,7 @@ static void kho_reserve_scratch(void)
  * Scan the DT for any memory ranges and make sure they are reserved in
  * memblock, otherwise they will end up in a weird state on free lists.
  */
-static void kho_init_reserved_pages(void)
+static void __init kho_init_reserved_pages(void)
 {
 	const void *fdt = kho_get_fdt();
 	int offset = 0, depth = 0, initial_depth = 0, len;
-- 
2.47.2

-- 
Sincerely yours,
Mike.