On Mon, Nov 24 2025, [email protected] wrote:

> From: Ran Xiaokai <[email protected]>
>
> When booting with debug_pagealloc=on while having:
> CONFIG_KEXEC_HANDOVER_ENABLE_DEFAULT=y
> CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=n
> the system fails to boot due to page faults during kmemleak scanning.
>
> This occurs because:
> When debug_pagealloc is enabled, __free_pages() invokes
> debug_pagealloc_unmap_pages(), clearing the _PAGE_PRESENT bit for
> freed pages in the kernel page table.
> KHO scratch areas are allocated from memblock and registered with
> kmemleak. However, these areas do not remain reserved: they are later
> released to the page allocator via init_cma_reserved_pageblock().
> Subsequent kmemleak scans then access non-PRESENT pages, leading to
> fatal page faults.
>
> To fix this, mark the scratch areas with kmemleak_ignore_phys() after
> they are allocated from memblock, excluding them from kmemleak
> scanning before they are released to the buddy allocator.
>
> Fixes: 3dc92c311498 ("kexec: add Kexec HandOver (KHO) generation helpers")
> Signed-off-by: Ran Xiaokai <[email protected]>
> Reviewed-by: Mike Rapoport (Microsoft) <[email protected]>
> ---
>  kernel/liveupdate/kexec_handover.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index 224bdf5becb6..c729d455ee7b 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -11,6 +11,7 @@
>  
>  #include <linux/cleanup.h>
>  #include <linux/cma.h>
> +#include <linux/kmemleak.h>
>  #include <linux/count_zeros.h>
>  #include <linux/kexec.h>
>  #include <linux/kexec_handover.h>
> @@ -1369,6 +1370,7 @@ static __init int kho_init(void)
>               unsigned long count = kho_scratch[i].size >> PAGE_SHIFT;
>               unsigned long pfn;
>  
> +             kmemleak_ignore_phys(kho_scratch[i].addr);
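
For anyone following along: with CONFIG_DEBUG_PAGEALLOC, the unmap on
free boils down to roughly this (simplified from
debug_pagealloc_unmap_pages() in include/linux/mm.h):

	/* Called from the page free path when debug_pagealloc is on. */
	static inline void debug_pagealloc_unmap_pages(struct page *page,
						       int numpages)
	{
		if (debug_pagealloc_enabled_static())
			__kernel_map_pages(page, numpages, 0);
	}

and __kernel_map_pages(..., 0) clears _PAGE_PRESENT in the kernel page
table (on x86 at least), so a later kmemleak scan that touches the
freed scratch pages faults. The fix looks correct to me.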

Can you please add the explanation you gave in [0] for why this is not
needed on a KHO boot as a comment here?
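
Something along these lines, perhaps (the first part just restates the
commit message; I have left the KHO-boot rationale as a placeholder
since the exact wording from [0] is yours to pick):

		/*
		 * The scratch areas are allocated from memblock and hence
		 * registered with kmemleak. They are released to the buddy
		 * allocator below, and with debug_pagealloc the pages get
		 * unmapped once freed, so a later kmemleak scan would
		 * fault on them. Exclude them from scanning.
		 *
		 * <explanation from [0] for why this is not needed on a
		 * KHO boot>
		 */
		kmemleak_ignore_phys(kho_scratch[i].addr);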

After that,

Reviewed-by: Pratyush Yadav <[email protected]>

[0] https://lore.kernel.org/all/[email protected]/

>               for (pfn = base_pfn; pfn < base_pfn + count;
>                    pfn += pageblock_nr_pages)
>                       init_cma_reserved_pageblock(pfn_to_page(pfn));

-- 
Regards,
Pratyush Yadav
