Re: [PATCH v9 05/12] mm: HUGE_VMAP arch support cleanup

2021-01-23 Thread Nicholas Piggin
Excerpts from Ding Tianhong's message of January 4, 2021 10:33 pm:
> On 2020/12/5 14:57, Nicholas Piggin wrote:
>> This changes the awkward approach where architectures provide init
>> functions to determine which levels they can provide large mappings for,
>> to one where the arch is queried for each call.
>> 
>> This removes code and indirection, and allows constant-folding of dead
>> code for unsupported levels.
>> 
>> This also adds a prot argument to the arch query. This is unused
>> currently but could help with some architectures (e.g., some powerpc
>> processors can't map uncacheable memory with large pages).
>> 
>> Cc: linuxppc-dev@lists.ozlabs.org
>> Cc: Catalin Marinas 
>> Cc: Will Deacon 
>> Cc: linux-arm-ker...@lists.infradead.org
>> Cc: Thomas Gleixner 
>> Cc: Ingo Molnar 
>> Cc: Borislav Petkov 
>> Cc: x...@kernel.org
>> Cc: "H. Peter Anvin" 
>> Acked-by: Catalin Marinas  [arm64]
>> Signed-off-by: Nicholas Piggin 
>> ---
>>  arch/arm64/include/asm/vmalloc.h |  8 +++
>>  arch/arm64/mm/mmu.c  | 10 +--
>>  arch/powerpc/include/asm/vmalloc.h   |  8 +++
>>  arch/powerpc/mm/book3s64/radix_pgtable.c |  8 +--
>>  arch/x86/include/asm/vmalloc.h   |  7 ++
>>  arch/x86/mm/ioremap.c| 10 +--
>>  include/linux/io.h   |  9 ---
>>  include/linux/vmalloc.h  |  6 ++
>>  init/main.c  |  1 -
>>  mm/ioremap.c | 88 +---
>>  10 files changed, 77 insertions(+), 78 deletions(-)
>> 
>> diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
>> index 2ca708ab9b20..597b40405319 100644
>> --- a/arch/arm64/include/asm/vmalloc.h
>> +++ b/arch/arm64/include/asm/vmalloc.h
>> @@ -1,4 +1,12 @@
>>  #ifndef _ASM_ARM64_VMALLOC_H
>>  #define _ASM_ARM64_VMALLOC_H
>>  
>> +#include 
>> +
>> +#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
>> +bool arch_vmap_p4d_supported(pgprot_t prot);
>> +bool arch_vmap_pud_supported(pgprot_t prot);
>> +bool arch_vmap_pmd_supported(pgprot_t prot);
>> +#endif
>> +
>>  #endif /* _ASM_ARM64_VMALLOC_H */
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index ca692a815731..1b60079c1cef 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -1315,12 +1315,12 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
>>  return dt_virt;
>>  }
>>  
>> -int __init arch_ioremap_p4d_supported(void)
>> +bool arch_vmap_p4d_supported(pgprot_t prot)
>>  {
>> -return 0;
>> +return false;
>>  }
>>  
> 
> I think you should put this function inside CONFIG_HAVE_ARCH_HUGE_VMAP,
> otherwise it may break the compile when CONFIG_HAVE_ARCH_HUGE_VMAP is
> disabled, the same as x86 and ppc.

Ah, good catch. arm64 is okay because it always selects
HAVE_ARCH_HUGE_VMAP, and powerpc is okay because it places these
functions in a file that is only compiled for configs that select
huge vmap, but the x86-32 build without PAE breaks. I'll fix that.
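
For reference, a minimal sketch (illustrative only, not the actual respin)
of one way the x86 declarations and a generic fallback could be arranged so
that configs without HAVE_ARCH_HUGE_VMAP, such as x86-32 without PAE, still
build, and so that callers constant-fold the unsupported levels:

/* arch/x86/include/asm/vmalloc.h -- sketch; huge p4d/pud only on 64-bit */
#ifdef CONFIG_HAVE_ARCH_HUGE_VMAP
#ifdef CONFIG_X86_64
bool arch_vmap_p4d_supported(pgprot_t prot);
bool arch_vmap_pud_supported(pgprot_t prot);
#else
static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
#endif
bool arch_vmap_pmd_supported(pgprot_t prot);
#endif

/* include/linux/vmalloc.h -- sketch; generic stubs when the arch opts out */
#ifndef CONFIG_HAVE_ARCH_HUGE_VMAP
static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }
#endif

With constant-false stubs, a caller that does

	if (!arch_vmap_pmd_supported(prot))
		return 0;

lets the compiler discard the large-mapping path entirely on such configs.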

Thanks,
Nick


Re: [PATCH v2 2/2] memblock: do not start bottom-up allocations with kernel_end

2021-01-23 Thread Mike Rapoport
On Sat, Jan 23, 2021 at 06:09:11PM -0800, Andrew Morton wrote:
> On Fri, 22 Jan 2021 01:37:14 -0300 Thiago Jung Bauermann wrote:
> 
> > Mike Rapoport  writes:
> > 
> > > > Signed-off-by: Roman Gushchin 
> > > 
> > > Reviewed-by: Mike Rapoport 
> > 
> > I've seen a couple of spurious triggers of the WARN_ONCE() removed by this
> > patch. This happens on some ppc64le bare metal (powernv) server machines with
> > CONFIG_SWIOTLB=y and crashkernel=4G, as described in a candidate patch I posted
> > to solve this issue in a different way:
> > 
> > https://lore.kernel.org/linuxppc-dev/20201218062103.76102-1-bauer...@linux.ibm.com/
> > 
> > Since this patch solves that problem, is it possible to include it in the next
> > feasible v5.11-rcX, with the following tag?
> 
> We could do this, if we're confident that this patch doesn't depend on
> [1/2] "mm: cma: allocate cma areas bottom-up"?  I think it is...

I think it does not depend on the CMA bottom-up allocation; it's rather the
other way around: without this, CMA bottom-up allocation could fail with
KASLR enabled.

Still, this patch may need updates to the way x86 does early reservations:

https://lore.kernel.org/lkml/20210115083255.12744-1-r...@kernel.org
 
> > Fixes: 8fabc623238e ("powerpc: Ensure that swiotlb buffer is allocated from low memory")
> 
> I added that.
> 
> 

-- 
Sincerely yours,
Mike.


Re: [PATCH v2 2/2] memblock: do not start bottom-up allocations with kernel_end

2021-01-23 Thread Andrew Morton
On Fri, 22 Jan 2021 01:37:14 -0300 Thiago Jung Bauermann wrote:

> Mike Rapoport  writes:
> 
> > > Signed-off-by: Roman Gushchin 
> > 
> > Reviewed-by: Mike Rapoport 
> 
> I've seen a couple of spurious triggers of the WARN_ONCE() removed by this
> patch. This happens on some ppc64le bare metal (powernv) server machines with
> CONFIG_SWIOTLB=y and crashkernel=4G, as described in a candidate patch I posted
> to solve this issue in a different way:
> 
> https://lore.kernel.org/linuxppc-dev/20201218062103.76102-1-bauer...@linux.ibm.com/
> 
> Since this patch solves that problem, is it possible to include it in the next
> feasible v5.11-rcX, with the following tag?

We could do this, if we're confident that this patch doesn't depend on
[1/2] "mm: cma: allocate cma areas bottom-up"?  I think it is...

> Fixes: 8fabc623238e ("powerpc: Ensure that swiotlb buffer is allocated from low memory")

I added that.