On 05/05/20 at 09:20am, Qian Cai wrote:
> 
> 
> > On May 5, 2020, at 8:43 AM, Baoquan He <b...@redhat.com> wrote:
> > 
> > Hi,
> > 
> > On 04/24/20 at 09:45am, Qian Cai wrote:
> >> 
> >> 
> >>> On Apr 23, 2020, at 11:43 PM, Baoquan He <b...@redhat.com> wrote:
> >>> 
> >>> On 04/23/20 at 05:25pm, Qian Cai wrote:
> >>>> Compaction started to crash as below on today's linux-next. The faulty
> >>>> page belongs to the Node 0 DMA32 zone.
> >>>> I’ll continue to narrow it down, but just wanted to give a heads-up in
> >>>> case someone could beat me to it.
> >>>> 
> >>>> Debug output from free_area_init_core()
> >>>> [    0.000000] KK start page = ffffea0000000040, end page = ffffea0000040000, nid = 0 DMA
> >>>> [    0.000000] KK start page = ffffea0000040000, end page = ffffea0004000000, nid = 0 DMA32
> >>>> [    0.000000] KK start page = ffffea0004000000, end page = ffffea0012000000, nid = 0 NORMAL
> >>>> [    0.000000] KK start page = ffffea0012000000, end page = ffffea0021fc0000, nid = 4 NORMAL
> >>> 
> >>> Where are these printed? Are they the direct mapping addresses of the pages?
> >> 
> >> From this debug patch. Yes, direct mapping addresses.
> > 
> > Can you try the patch below? I think I see why this happens, though I'm
> > not sure this is the right place to fix it.
> > 
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 177c11a8f3b9..e26972f26414 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -1409,7 +1409,9 @@ fast_isolate_freepages(struct compact_control *cc)
> >                             cc->free_pfn = highest;
> >                     } else {
> >                             if (cc->direct_compaction && pfn_valid(min_pfn)) {
> > -                                   page = pfn_to_page(min_pfn);
> > +                                   page = pageblock_pfn_to_page(min_pfn,
> > +                                           pageblock_end_pfn(min_pfn),
> > +                                           cc->zone);
> >                                     cc->free_pfn = min_pfn;
> >                             }
> >                     }
> 
> I have not had any luck reproducing this again yet, but feel free to move 
> forward with the patch anyway if you are comfortable doing so, so that at 
> least people can review it properly.

OK, I will prepare a patch with the details in the log and post it. Thanks.
