From: "Mike Rapoport (Microsoft)" <[email protected]> Move calculations of zone limits to a dedicated arch_zone_limits_init() function.
Later MM core will use this function as an architecture specific callback during nodes and zones initialization and thus there won't be a need to call free_area_init() from every architecture. Signed-off-by: Mike Rapoport (Microsoft) <[email protected]> --- arch/hexagon/mm/init.c | 23 +++++++++++++---------- 1 file changed, 13 insertions(+), 10 deletions(-) diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c index 34eb9d424b96..e2c9487d8d34 100644 --- a/arch/hexagon/mm/init.c +++ b/arch/hexagon/mm/init.c @@ -54,6 +54,18 @@ void sync_icache_dcache(pte_t pte) __vmcache_idsync(addr, PAGE_SIZE); } +void __init arch_zone_limits_init(unsigned long *max_zone_pfns) +{ + /* + * This is not particularly well documented anywhere, but + * give ZONE_NORMAL all the memory, including the big holes + * left by the kernel+bootmem_map which are already left as reserved + * in the bootmem_map; free_area_init should see those bits and + * adjust accordingly. + */ + max_zone_pfns[ZONE_NORMAL] = max_low_pfn; +} + /* * In order to set up page allocator "nodes", * somebody has to call free_area_init() for UMA. @@ -65,16 +77,7 @@ static void __init paging_init(void) { unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, }; - /* - * This is not particularly well documented anywhere, but - * give ZONE_NORMAL all the memory, including the big holes - * left by the kernel+bootmem_map which are already left as reserved - * in the bootmem_map; free_area_init should see those bits and - * adjust accordingly. - */ - - max_zone_pfn[ZONE_NORMAL] = max_low_pfn; - + arch_zone_limits_init(max_zone_pfn); free_area_init(max_zone_pfn); /* sets up the zonelists and mem_map */ /* -- 2.51.0
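For context, a minimal sketch of how the MM core side could consume this hook
once the callback is wired up. The __weak fallback and the mm_core_zone_init()
caller below are illustrative assumptions only; they are not part of this patch
or of the current free_area_init() API.

/*
 * Illustrative sketch only (not part of this patch): how generic MM code
 * could call an architecture's arch_zone_limits_init() hook so that
 * architectures no longer call free_area_init() themselves.
 * The __weak stub and mm_core_zone_init() are assumptions.
 */
#include <linux/init.h>
#include <linux/mm.h>

/* Architectures that don't provide their own hook fall back to this stub. */
void __weak arch_zone_limits_init(unsigned long *max_zone_pfns)
{
}

/* Hypothetical core-side path replacing per-arch free_area_init() calls. */
static void __init mm_core_zone_init(void)
{
	unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0 };

	arch_zone_limits_init(max_zone_pfns);	/* arch fills in zone PFN limits */
	free_area_init(max_zone_pfns);		/* core sets up nodes and zones */
}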
