Mike Rapoport <[email protected]> writes:

> Hi Ritesh,
>
> On Fri, Feb 27, 2026 at 10:39:53PM +0530, Ritesh Harjani wrote:
>> Sourabh Jain <[email protected]> writes:
>> 
>> > I noticed CMA init for fadump crashkernel memory is failing.
>> >
>> > [    0.000000] cma: pageblock_order not yet initialized. Called during early boot?
>> > [    0.000000] fadump: Failed to init cma area for firmware-assisted dump,-22
>> >
>> >
>> > kernel command-line:
BOOT_IMAGE=(ieee1275//vdevice/v-scsi@30000070/disk@8100000000000000,msdos2)/vmlinuz-7.0.0-rc1+ root=/dev/mapper/rhel_ltcden3--lp12-root ro rd.lvm.lv=rhel_ltcden3-lp12/root rd.lvm.lv=rhel_ltcden3-lp12/swap fadump=on crashkernel=3G
>> >
>> >
>> > Same issue with kdump CMA reservation:
>> >
>> > [    0.000000][    T0] cma: pageblock_order not yet initialized. Called during early boot?
>> 
>> Good that we added those debug prints ;)
>> 
>> I think I know what went wrong: as part of the arch/mm consolidation
>> patch series [1], the order of initialization has changed.
>> 
>> With that series, the new order is:
>> start_kernel()
>>     - setup_arch()
>>        - xxx_cma_reserve();
>>     - mm_core_init_early()
>>        - free_area_init()
>>           - sparse_init()
>>              - set_pageblock_order() // this sets the pageblock_order.
>> 
>> Whereas earlier, set_pageblock_order() was called from initmem_init(),
>> just before the CMA reservations were made:
>> 
>> start_kernel()
>>     - setup_arch()
>>        - initmem_init()
>>          - sparse_init()
>>            - set_pageblock_order();  // this sets the pageblock_order
>>        - xxx_cma_reserve();
>> 
>> That means pageblock_order is no longer initialized before these CMA
>> reservation calls are made, hence the failures:
>> 
>> setup_arch() {
>>     ...
>> 
>>      /*
>>       * Reserve large chunks of memory for use by CMA for kdump, fadump,
>>       * KVM and hugetlb. These must be called after initmem_init(), so
>>       * that pageblock_order is initialised.
>>       */
>>      fadump_cma_init();
>>      kdump_cma_reserve();
>>      kvm_cma_reserve();
>> 
>>     ...
>> }
>> 
>> 
>> So what if we do:
>> 
>> start_kernel() {
>>   ...
>>      setup_arch(&command_line);
>>      mm_core_init_early();
>>      setup_arch_post_mm_core_init(); // and here we call the CMA reservation functions?
>  
> Unless I'm missing something, these CMA reservations can be moved to
> arch_mm_preinit(). It runs after mm_core_init_early() and before
> memblock moves the free memory to the buddy allocator.
>

Right. I think we should be able to use that...

@Sourabh, 

I don't have access to the systems (travelling back...). Could you
please give this a try?


diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index cb5b73adc250..b1761909c23f 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -35,7 +35,6 @@
 #include <linux/of_irq.h>
 #include <linux/hugetlb.h>
 #include <linux/pgtable.h>
-#include <asm/kexec.h>
 #include <asm/io.h>
 #include <asm/paca.h>
 #include <asm/processor.h>
@@ -995,15 +994,6 @@ void __init setup_arch(char **cmdline_p)
 
        initmem_init();
 
-       /*
-        * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and
-        * hugetlb. These must be called after initmem_init(), so that
-        * pageblock_order is initialised.
-        */
-       fadump_cma_init();
-       kdump_cma_reserve();
-       kvm_cma_reserve();
-
        early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
 
        if (ppc_md.setup_arch)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 29bf347f6012..5ba947e4fe37 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -30,6 +30,10 @@
 #include <asm/setup.h>
 #include <asm/fixmap.h>
 
+#include <asm/fadump.h>
+#include <asm/kexec.h>
+#include <asm/kvm_ppc.h>
+
 #include <mm/mmu_decl.h>
 
 unsigned long long memory_limit __initdata;
@@ -268,6 +272,15 @@ void __init paging_init(void)
 
 void __init arch_mm_preinit(void)
 {
+       /*
+        * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM
+        * and hugetlb. These must be called after pageblock_order is
+        * initialised.
+        */
+       fadump_cma_init();
+       kdump_cma_reserve();
+       kvm_cma_reserve();
+
        /*
         * book3s is limited to 16 page sizes due to encoding this in
         * a 4-bit field for slices.


-ritesh

>> References:
>> [1]: https://lore.kernel.org/linuxppc-dev/[email protected]/T/#m5adf1a845e0a0867066c4f7055f28e6304b73fa5
>> [2]: https://lore.kernel.org/all/3ae208e48c0d9cefe53d2dc4f593388067405b7d.1729146153.git.ritesh.l...@gmail.com/
>> 
>> 
>> -ritesh
>
> -- 
> Sincerely yours,
> Mike.
