> -----Original Message-----
> From: Matthias Brugger [mailto:matthias....@gmail.com]
> Sent: Monday, June 8, 2020 8:15 AM
> To: Roman Gushchin <g...@fb.com>; Song Bao Hua (Barry Song)
> <song.bao....@hisilicon.com>
> Cc: catalin.mari...@arm.com; John Garry <john.ga...@huawei.com>;
> linux-ker...@vger.kernel.org; Linuxarm <linux...@huawei.com>;
> iommu@lists.linux-foundation.org; Zengtao (B) <prime.z...@hisilicon.com>;
> Jonathan Cameron <jonathan.came...@huawei.com>;
> robin.mur...@arm.com; h...@lst.de; linux-arm-ker...@lists.infradead.org;
> m.szyprow...@samsung.com
> Subject: Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init
> 
> 
> 
> On 03/06/2020 05:22, Roman Gushchin wrote:
> > On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
> >> hugetlb_cma_reserve() is called at the wrong place. numa_init has not been
> >> done yet. so all reserved memory will be located at node0.
> >>
> >> Cc: Roman Gushchin <g...@fb.com>
> >> Signed-off-by: Barry Song <song.bao....@hisilicon.com>
> >
> > Acked-by: Roman Gushchin <g...@fb.com>
> >
> 
> When did this break or was it broken since the beginning?
> In any case, could you provide a "Fixes" tag for it, so that it can easily be
> backported to older releases.

I guess it was broken from the very beginning:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cf11e85fc08cc

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")

Do you think it would be better for me to send a v2 of this patch separately with
this tag, and take it out of my original per-numa CMA patch set?
Please let me know your preference.

Best Regards
Barry

> 
> Regards,
> Matthias
> 
> > Thanks!
> >
> >> ---
> >>  arch/arm64/mm/init.c | 10 +++++-----
> >>  1 file changed, 5 insertions(+), 5 deletions(-)
> >>
> >> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> >> index e42727e3568e..8f0e70ebb49d 100644
> >> --- a/arch/arm64/mm/init.c
> >> +++ b/arch/arm64/mm/init.c
> >> @@ -458,11 +458,6 @@ void __init arm64_memblock_init(void)
> >>    high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> >>
> >>    dma_contiguous_reserve(arm64_dma32_phys_limit);
> >> -
> >> -#ifdef CONFIG_ARM64_4K_PAGES
> >> -  hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> -#endif
> >> -
> >>  }
> >>
> >>  void __init bootmem_init(void)
> >> @@ -478,6 +473,11 @@ void __init bootmem_init(void)
> >>    min_low_pfn = min;
> >>
> >>    arm64_numa_init();
> >> +
> >> +#ifdef CONFIG_ARM64_4K_PAGES
> >> +  hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> >> +#endif
> >> +
> >>    /*
> >>     * Sparsemem tries to allocate bootmem in memory_present(), so must
> be
> >>     * done after the fixed reservations.
> >> --
> >> 2.23.0
