Hi Mel,

On Wed, Apr 29, 2015 at 02:41:31PM +0100, Mel Gorman wrote:
> On Wed, Apr 29, 2015 at 09:28:17PM +0800, Fengguang Wu wrote:
> > Greetings,
> > 
> > 0day kernel testing robot got the below dmesg and the first bad commit is
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/mel/linux-balancenuma mm-deferred-meminit-v6r1
> > 
> > commit 285c36ab5b3e59865a0f4d79f4c1758455e684f7
> > Author:     Mel Gorman <[email protected]>
> > AuthorDate: Mon Sep 29 14:54:01 2014 +0100
> > Commit:     Mel Gorman <[email protected]>
> > CommitDate: Wed Apr 22 19:48:15 2015 +0100
> > 
> >     mm: meminit: Reduce number of times pageblocks are set during struct page init
> >     
> >     During parallel struct page initialisation, ranges are checked for every
> >     PFN unnecessarily, which increases boot times. This patch alters when the
> >     ranges are checked.
> >     
> >     Signed-off-by: Mel Gorman <[email protected]>
> > 
> 
> The series is old but I think it's still relevant. Can you try this
> please?

Yes, it fixed the problem.

Tested-by: Fengguang Wu <[email protected]>

Thanks,
Fengguang

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9c8f2a72263d..19543f708642 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4489,8 +4489,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
>               if (!(pfn & (pageblock_nr_pages - 1))) {
>                       struct page *page = pfn_to_page(pfn);
>  
> -                     set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>                       __init_single_page(page, pfn, zone, nid);
> +                     set_pageblock_migratetype(page, MIGRATE_MOVABLE);
>               } else {
>                       __init_single_pfn(pfn, zone, nid);
>               }
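
For readers following along: the ordering matters because set_pageblock_migratetype() ultimately derives the pageblock's zone from page->flags, which __init_single_page() is responsible for filling in, so calling it on an uninitialised struct page reads garbage. The sketch below is a minimal userspace model of that dependency, not the kernel code; the function bodies and the ZONE_MASK encoding are invented purely for illustration.

#include <stdio.h>

struct page { unsigned long flags; };   /* zone encoded in the low bits */

#define ZONE_MASK 0x3UL                 /* hypothetical encoding for this model */

/* Model of __init_single_page(): establishes the zone in page->flags. */
static void __init_single_page(struct page *page, unsigned long zone)
{
	page->flags = zone & ZONE_MASK;
}

/* Model of set_pageblock_migratetype(): reads the zone back out of flags. */
static void set_pageblock_migratetype(struct page *page, int migratetype)
{
	unsigned long zone = page->flags & ZONE_MASK;
	printf("pageblock in zone %lu -> migratetype %d\n", zone, migratetype);
}

int main(void)
{
	struct page page = { .flags = 0xdeadbeefUL };  /* memmap not initialised yet */

	/*
	 * Calling set_pageblock_migratetype() here, before
	 * __init_single_page(), would read a bogus zone from the
	 * uninitialised flags -- the bug the hunk above fixes by
	 * swapping the two calls.
	 */
	__init_single_page(&page, 1);
	set_pageblock_migratetype(&page, 0);
	return 0;
}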