On some architectures (reported on arm64), commit 864b75f9d6b01
("mm/page_alloc: fix memmap_init_zone pageblock alignment") causes a
boot hang.  This patch fixes the hang by making sure the alignment
never steps back.
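
For illustration, below is a minimal user-space sketch (not kernel code)
of why the unguarded alignment can hang: rounding the next valid pfn
down to a pageblock boundary can land behind the current pfn, so the
loop never advances.  The pfn values, the PAGEBLOCK_NR_PAGES value and
the toy next_valid_pfn() helper are hypothetical stand-ins for
memblock_next_valid_pfn() and the real memory layout.

/* step_back.c -- build with: gcc -Wall -o step_back step_back.c */
#include <stdio.h>

/* Hypothetical pageblock size; the real value depends on arch/config. */
#define PAGEBLOCK_NR_PAGES 512UL

/*
 * Toy stand-in for memblock_next_valid_pfn(): pretend pfns below 0x2104
 * sit in a hole and the next valid pfn is 0x2104.
 */
static unsigned long next_valid_pfn(unsigned long pfn)
{
	return pfn < 0x2104UL ? 0x2104UL : pfn;
}

int main(void)
{
	unsigned long pfn = 0x2100UL;	/* first pfn of the hole */
	int i;

	for (i = 0; i < 4; i++) {
		/*
		 * Unguarded update as after 864b75f9d6b01: round the next
		 * valid pfn down to a pageblock boundary, minus 1 to
		 * compensate for the loop's pfn++.
		 */
		unsigned long next_pfn = (next_valid_pfn(pfn) &
					  ~(PAGEBLOCK_NR_PAGES - 1)) - 1;

		printf("pfn=%#lx -> next_pfn=%#lx%s\n", pfn, next_pfn,
		       next_pfn <= pfn ? "  (stepped back, loop is stuck)" : "");

		pfn = next_pfn + 1;	/* what 'continue' plus pfn++ does */
	}
	return 0;
}

With the guard added by this patch (only advance when next_pfn > pfn),
the loop instead falls through to the next pfn and makes forward
progress through the hole rather than spinning forever.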

Link: http://lkml.kernel.org/r/0485727b2e82da7efbce5f6ba42524b429d0391a.1520011945.git.ne...@redhat.com
Fixes: 864b75f9d6b01 ("mm/page_alloc: fix memmap_init_zone pageblock alignment")
Signed-off-by: Daniel Vacek <ne...@redhat.com>
Tested-by: Sudeep Holla <sudeep.ho...@arm.com>
Tested-by: Naresh Kamboju <naresh.kamb...@linaro.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Mel Gorman <mgor...@techsingularity.net>
Cc: Michal Hocko <mho...@suse.com>
Cc: Paul Burton <paul.bur...@imgtec.com>
Cc: Pavel Tatashin <pasha.tatas...@oracle.com>
Cc: Vlastimil Babka <vba...@suse.cz>
Cc: <sta...@vger.kernel.org>
---
 mm/page_alloc.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3d974cb2a1a1..e033a6895c6f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5364,9 +5364,14 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
                         * is not. move_freepages_block() can shift ahead of
                         * the valid region but still depends on correct page
                         * metadata.
+                        * Also make sure we never step back.
                         */
-                       pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
+                       unsigned long next_pfn;
+
+                       next_pfn = (memblock_next_valid_pfn(pfn, end_pfn) &
                                        ~(pageblock_nr_pages-1)) - 1;
+                       if (next_pfn > pfn)
+                               pfn = next_pfn;
 #endif
                        continue;
                }
-- 
2.16.2
