On Mon, Nov 03, 2025 at 07:02:31PM +0100, Pratyush Yadav wrote:
> kho_vmalloc_unpreserve_chunk() calls __kho_unpreserve() with end_pfn as
> pfn + 1. This happens to work for 0-order pages, but leaks higher order
> pages.
> 
> For example, say order 2 pages back the allocation. During preservation,
> they get preserved in the order 2 bitmaps, but
> kho_vmalloc_unpreserve_chunk() would try to unpreserve them from the
> order 0 bitmaps, which should not have these bits set anyway, leaving
> the order 2 bitmaps untouched. This results in the pages being carried
> over to the next kernel. Nothing will free those pages in the next boot,
> leaking them.
> 
> Fix this by taking the order into account when calculating the end PFN
> for __kho_unpreserve().
> 
> Fixes: a667300bd53f2 ("kho: add support for preserving vmalloc allocations")
> Signed-off-by: Pratyush Yadav <[email protected]>

Reviewed-by: Mike Rapoport (Microsoft) <[email protected]>

> ---
> 
> Notes:
>     When Pasha's patch [0] to add kho_unpreserve_pages() is merged, maybe it
>     would be a better idea to use kho_unpreserve_pages() here? But that is
>     something for later I suppose.
>     
>     [0] https://lore.kernel.org/linux-mm/[email protected]/
> 
>  kernel/kexec_handover.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/kexec_handover.c b/kernel/kexec_handover.c
> index cc5aaa738bc50..c2bcbb10918ce 100644
> --- a/kernel/kexec_handover.c
> +++ b/kernel/kexec_handover.c
> @@ -862,7 +862,8 @@ static struct kho_vmalloc_chunk *new_vmalloc_chunk(struct kho_vmalloc_chunk *cur
>       return NULL;
>  }
>  
> -static void kho_vmalloc_unpreserve_chunk(struct kho_vmalloc_chunk *chunk)
> +static void kho_vmalloc_unpreserve_chunk(struct kho_vmalloc_chunk *chunk,
> +                                      unsigned short order)
>  {
>       struct kho_mem_track *track = &kho_out.ser.track;
>       unsigned long pfn = PHYS_PFN(virt_to_phys(chunk));
> @@ -871,7 +872,7 @@ static void kho_vmalloc_unpreserve_chunk(struct kho_vmalloc_chunk *chunk)
>  
>       for (int i = 0; i < ARRAY_SIZE(chunk->phys) && chunk->phys[i]; i++) {
>               pfn = PHYS_PFN(chunk->phys[i]);
> -             __kho_unpreserve(track, pfn, pfn + 1);
> +             __kho_unpreserve(track, pfn, pfn + (1 << order));
>       }
>  }
>  
> @@ -882,7 +883,7 @@ static void kho_vmalloc_free_chunks(struct kho_vmalloc *kho_vmalloc)
>       while (chunk) {
>               struct kho_vmalloc_chunk *tmp = chunk;
>  
> -             kho_vmalloc_unpreserve_chunk(chunk);
> +             kho_vmalloc_unpreserve_chunk(chunk, kho_vmalloc->order);
>  
>               chunk = KHOSER_LOAD_PTR(chunk->hdr.next);
>               free_page((unsigned long)tmp);
> -- 
> 2.47.3
> 
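As a side note for anyone following along: the leak is easy to demonstrate
with a toy model of the per-order preservation bitmaps. This is illustrative
C only, not the kernel's actual data structures; `preserved[][]` and
`unpreserve()` are stand-ins for the real tracker and __kho_unpreserve(),
which picks the order of each block from the PFN alignment and the size of
the span being cleared:

```c
/*
 * Toy model of KHO's per-order preservation bitmaps, showing why
 * calling unpreserve with end_pfn = pfn + 1 leaks higher-order pages.
 * All names here are illustrative, not the real kernel structures.
 */
#include <stdbool.h>

#define MAX_PFN   64
#define NR_ORDERS 4

/* preserved[order][pfn]: set while a page of that order is preserved */
static bool preserved[NR_ORDERS][MAX_PFN];

/*
 * Clear preservation bits covering [pfn, end_pfn), largest aligned
 * blocks first. The order of each cleared block is derived from the
 * PFN alignment and the remaining span, so a span of a single page
 * can only ever touch the order-0 bitmap.
 */
static void unpreserve(unsigned long pfn, unsigned long end_pfn)
{
	while (pfn < end_pfn) {
		unsigned int align = pfn ? __builtin_ctzl(pfn) : NR_ORDERS - 1;
		unsigned int fit   = 63 - __builtin_clzl(end_pfn - pfn);
		unsigned int order = align < fit ? align : fit;

		preserved[order][pfn] = false;
		pfn += 1UL << order;
	}
}
```

With an order-2 page preserved at pfn 8, `unpreserve(8, 8 + 1)` derives
order 0 from the one-page span and clears a bit that was never set, leaving
`preserved[2][8]` untouched; `unpreserve(8, 8 + (1 << 2))` derives order 2
and actually clears it, which is exactly what the fix does.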

-- 
Sincerely yours,
Mike.
