Hi!

On 03/11/2020 17:31, Nicolas Saenz Julienne wrote:
> crashkernel might reserve memory located in ZONE_DMA. We plan to delay
> ZONE_DMA's initialization after unflattening the devicetree and ACPI's
> boot table initialization, so move it later in the boot process.
> Specifically into mem_init(), this is the last place crashkernel will be
> able to reserve the memory before the page allocator kicks in.

> There
> isn't any apparent reason for doing this earlier.

It's so that map_mem() can carve it out of the linear/direct map. This
is so that stray writes from a crashing kernel can't accidentally
corrupt the kdump kernel. We depend on this if we have to proceed with
kdump after failing to offline all the other CPUs, and we also depend
on it when skipping the checksum code in purgatory, which can be
exceedingly slow.
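
For reference, the carve-out in map_mem() looks roughly like this
(paraphrasing arch/arm64/mm/mmu.c from memory, so the details may not
be exact):

#ifdef CONFIG_KEXEC_CORE
        /* Skip the crashkernel region in the main linear-map loop below. */
        if (crashk_res.end)
                memblock_mark_nomap(crashk_res.start,
                                    resource_size(&crashk_res));
#endif

        /* ... map all the other memory banks, block mappings allowed ... */

#ifdef CONFIG_KEXEC_CORE
        /*
         * Map the crashkernel region with page granularity so that
         * arch_kexec_protect_crashkres() can later remove it from the
         * linear map without having to split block mappings.
         */
        if (crashk_res.end) {
                __map_memblock(pgdp, crashk_res.start, crashk_res.end + 1,
                               PAGE_KERNEL,
                               NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
                memblock_clear_nomap(crashk_res.start,
                                     resource_size(&crashk_res));
        }
#endif

None of this can happen if the reservation is only made in mem_init(),
as paging_init() has already run by then.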

Grepping around, the current order is:

start_kernel()
-> setup_arch()
        -> arm64_memblock_init()        /* reserve */
        -> paging_init()
                -> map_mem()            /* carve out reservation */
[...]
-> mm_init()
        -> mem_init()


I agree we should add comments to make this apparent!
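
Something along these lines next to the reserve_crashkernel() call
would do it (just a sketch of the wording, not a patch):

        /*
         * reserve_crashkernel() must run before paging_init(): map_mem()
         * needs to know the crashkernel range so it can map it at page
         * granularity and later remove it from the linear map. This is
         * what protects the kdump kernel from stray writes by a crashing
         * kernel.
         */
        reserve_crashkernel();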


Thanks,

James


> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 095540667f0f..fc4ab0d6d5d2 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -386,8 +386,6 @@ void __init arm64_memblock_init(void)
>       else
>               arm64_dma32_phys_limit = PHYS_MASK + 1;
>  
> -     reserve_crashkernel();
> -
>       reserve_elfcorehdr();
>  
>       high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> @@ -508,6 +506,8 @@ void __init mem_init(void)
>       else
>               swiotlb_force = SWIOTLB_NO_FORCE;
>  
> +     reserve_crashkernel();
> +
>       set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
>  
>  #ifndef CONFIG_SPARSEMEM_VMEMMAP
> 

