On 08/06/2020 02:50, Song Bao Hua (Barry Song) wrote:
>> -----Original Message-----
>> From: Matthias Brugger [mailto:matthias....@gmail.com]
>> Sent: Monday, June 8, 2020 8:15 AM
>> To: Roman Gushchin <g...@fb.com>; Song Bao Hua (Barry Song)
>> Cc: catalin.mari...@arm.com; John Garry <john.ga...@huawei.com>;
>> linux-ker...@vger.kernel.org; Linuxarm <linux...@huawei.com>;
>> email@example.com; Zengtao (B) <prime.z...@hisilicon.com>;
>> Jonathan Cameron <jonathan.came...@huawei.com>;
>> robin.mur...@arm.com; h...@lst.de; linux-arm-ker...@lists.infradead.org;
>> Subject: Re: [PATCH 2/3] arm64: mm: reserve hugetlb CMA after numa_init
>> On 03/06/2020 05:22, Roman Gushchin wrote:
>>> On Wed, Jun 03, 2020 at 02:42:30PM +1200, Barry Song wrote:
>>>> hugetlb_cma_reserve() is called in the wrong place: numa_init() has not
>>>> run yet, so all reserved memory ends up on node 0.
>>>> Cc: Roman Gushchin <g...@fb.com>
>>>> Signed-off-by: Barry Song <song.bao....@hisilicon.com>
>>> Acked-by: Roman Gushchin <g...@fb.com>
>> When did this break or was it broken since the beginning?
>> In any case, could you provide a "Fixes" tag for it, so that it can easily be
>> backported to older releases.
> I guess it was broken from the very beginning.
> Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages
> using cma")
> Do you think it would be better for me to send a v2 of this patch separately
> with this tag, taking it out of my original per-NUMA CMA patch set?
> Please share your suggestion.
I'm not the maintainer, but I think it could help get the patch accepted
earlier while you address the rest of the series.