On Mon, Oct 20, 2025 at 4:05 AM Mike Rapoport <[email protected]> wrote:
>
> On Sat, Oct 18, 2025 at 01:17:56PM -0400, Pasha Tatashin wrote:
> > KHO allocates metadata for its preserved memory map using the slab
> > allocator via kzalloc(). This metadata is temporary and is used by the
> > next kernel during early boot to find preserved memory.
> >
> > A problem arises when KFENCE is enabled. kzalloc() calls can be
> > randomly intercepted by kfence_alloc(), which services the allocation
> > from a dedicated KFENCE memory pool. This pool is allocated early in
> > boot via memblock.
> >
> > When booting via KHO, the memblock allocator is restricted to a "scratch
> > area", forcing the KFENCE pool to be allocated within it. This creates a
> > conflict, as the scratch area is expected to be ephemeral and
> > overwritable by a subsequent kexec. If KHO metadata is placed in this
> > KFENCE pool, it leads to memory corruption when the next kernel is
> > loaded.
> >
> > To fix this, modify KHO to allocate its metadata directly from the buddy
> > allocator instead of slab.
> >
> > Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> > Signed-off-by: Pasha Tatashin <[email protected]>
> > Reviewed-by: Pratyush Yadav <[email protected]>
> > ---
> >  kernel/liveupdate/kexec_handover.c | 8 +++++---
> >  1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> > index 7c8e89a6b953..92662739a3a2 100644
> > --- a/kernel/liveupdate/kexec_handover.c
> > +++ b/kernel/liveupdate/kexec_handover.c
> > @@ -132,6 +132,8 @@ static struct kho_out kho_out = {
> >       .finalized = false,
> >  };
> >
> > +DEFINE_FREE(kho_free_page, void *, free_page((unsigned long)_T))
> > +
>
> Just drop kho_ prefix and stick it into include/linux/gfp.h

done
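
For reference, a minimal sketch of how the un-prefixed helper could look once
moved into include/linux/gfp.h (the exact placement and naming in the next
revision are an assumption on my part):

```c
/* Sketch only: un-prefixed scoped-cleanup helper as it might land in
 * include/linux/gfp.h. Frees a single page obtained from
 * get_zeroed_page()/__get_free_page() when the annotated pointer
 * goes out of scope, mirroring the existing DEFINE_FREE(kfree, ...)
 * pattern from include/linux/slab.h. */
DEFINE_FREE(free_page, void *, free_page((unsigned long)_T))
```

Callers in kexec_handover.c would then use `__free(free_page)` instead of
the kho_-prefixed variant.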

>
> >  static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
> >  {
> >       void *res = xa_load(xa, index);
> > @@ -139,7 +141,7 @@ static void *xa_load_or_alloc(struct xarray *xa, unsigned long index)
> >       if (res)
> >               return res;
> >
> > -     void *elm __free(kfree) = kzalloc(PAGE_SIZE, GFP_KERNEL);
> > +     void *elm __free(kho_free_page) = (void *)get_zeroed_page(GFP_KERNEL);
> >
> >       if (!elm)
> >               return ERR_PTR(-ENOMEM);
> > @@ -352,9 +354,9 @@ static_assert(sizeof(struct khoser_mem_chunk) == PAGE_SIZE);
> >  static struct khoser_mem_chunk *new_chunk(struct khoser_mem_chunk *cur_chunk,
> >                                         unsigned long order)
> >  {
> > -     struct khoser_mem_chunk *chunk __free(kfree) = NULL;
> > +     struct khoser_mem_chunk *chunk __free(kho_free_page) = NULL;
> >
> > -     chunk = kzalloc(PAGE_SIZE, GFP_KERNEL);
> > +     chunk = (void *)get_zeroed_page(GFP_KERNEL);
> >       if (!chunk)
> >               return ERR_PTR(-ENOMEM);
> >
> > --
> > 2.51.0.915.g61a8936c21-goog
> >
>
> --
> Sincerely yours,
> Mike.
