[PATCH] powerpc/fixmap: Fix the size of the early debug area

2020-08-16 Thread Christophe Leroy
Commit ("03fd42d458fb powerpc/fixmap: Fix FIX_EARLY_DEBUG_BASE when page size is 256k") reworked the setup of the early debug area and mistakenly replaced 128 * 1024 by SZ_128. Change to SZ_128K to restore the original 128 kbytes size of the area. Fixes: 03fd42d458fb ("powerpc/fixmap: Fix FIX_EAR

[PATCH v3] powerpc/numa: Restrict possible nodes based on platform

2020-08-16 Thread Srikar Dronamraju
As per draft LoPAPR (Revision 2.9_pre7), section B.5.3 "Run Time Abstraction Services (RTAS) Node" at https://openpowerfoundation.org/wp-content/uploads/2020/07/LoPAR-20200611.pdf, there are two device tree properties: ibm,max-associativity-domains (which defines the maximum number of domains that the fir

[PATCH v1 3/4] powerpc/process: Remove useless #ifdef CONFIG_SPE

2020-08-16 Thread Christophe Leroy
cpu_has_feature(CPU_FTR_SPE) returns false when CONFIG_SPE is not set. There is no need to enclose the test in an #ifdef CONFIG_SPE. Remove it. CPU_FTR_SPE only exists on 32 bits. Define it as 0 on 64 bits. We have a couple of places like: #ifdef CONFIG_SPE if (cpu_has_feature(CPU_FTR_
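A sketch of the pattern being removed and its replacement (the helper name is hypothetical; this is not the actual process.c hunk):

    /* Before: the feature test is needlessly wrapped in a preprocessor guard. */
    #ifdef CONFIG_SPE
        if (cpu_has_feature(CPU_FTR_SPE))
            do_spe_work();              /* hypothetical helper */
    #endif

    /* After: with CPU_FTR_SPE defined as 0 on 64-bit, cpu_has_feature()
     * already evaluates to false when CONFIG_SPE is not set, so the compiler
     * drops the dead branch and the #ifdef can simply go away.
     */
        if (cpu_has_feature(CPU_FTR_SPE))
            do_spe_work();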

[PATCH v1 4/4] powerpc/process: Remove useless #ifdef CONFIG_PPC_FPU

2020-08-16 Thread Christophe Leroy
Add a stub for __giveup_fpu() when CONFIG_PPC_FPU is not selected, as done for CONFIG_SPE and CONFIG_ALTIVEC. This allows removing some #ifdef CONFIG_PPC_FPU blocks. Also change one to IS_ENABLED(). Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 9 - 1 file changed, 4 in
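A sketch of the stub pattern described, assuming __giveup_fpu() takes a struct task_struct pointer (prototype hedged; not the actual hunk):

    #ifdef CONFIG_PPC_FPU
    extern void __giveup_fpu(struct task_struct *tsk);
    #else
    /* Stub so callers need no #ifdef; the call compiles away to nothing. */
    static inline void __giveup_fpu(struct task_struct *tsk) { }
    #endif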

[PATCH v1 2/4] powerpc/process: Remove useless #ifdef CONFIG_ALTIVEC

2020-08-16 Thread Christophe Leroy
cpu_has_feature(CPU_FTR_ALTIVEC) returns false when CONFIG_ALTIVEC is not set. There is no need to enclose the test in an #ifdef CONFIG_ALTIVEC. Remove it. Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 4 1 file changed, 4 deletions(-) diff --git a/arch/powerpc/kerne

[PATCH v1 1/4] powerpc/process: Remove useless #ifdef CONFIG_VSX

2020-08-16 Thread Christophe Leroy
cpu_has_feature(CPU_FTR_VSX) returns false when CONFIG_VSX is not set. There is no need to enclose the test in an #ifdef CONFIG_VSX. Remove it. Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 13 + 1 file changed, 1 insertion(+), 12 deletions(-) diff --git a/arc

[PATCH v1] powerpc/process: Tag an #endif to help locate the matching #ifdef.

2020-08-16 Thread Christophe Leroy
That #endif is more than 100 lines after the matching #ifdef, and there are several #ifdef/#else/#endif in between. Tag it as /* CONFIG_PPC_BOOK3S_64 */ to help locate the matching #ifdef. Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 2 +- 1 file changed, 1 insertion(+), 1
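The resulting style, illustrated:

    #ifdef CONFIG_PPC_BOOK3S_64
        /* ... more than 100 lines, with nested #ifdef/#else/#endif ... */
    #endif /* CONFIG_PPC_BOOK3S_64 */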

[PATCH v1] powerpc/process: Replace an #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) by IS_ENABLED()

2020-08-16 Thread Christophe Leroy
The #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) encloses some printk which can be compiled in all cases. Replace by IS_ENABLED(). Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 13 +++-- 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/powerpc
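A before/after sketch of the conversion (the printed content is a placeholder, not the actual show_regs() lines):

    /* Before */
    #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
        printk("placeholder register dump\n");
    #endif

    /* After: the printk compiles on every platform, so a C-level test of the
     * config options is enough and dead code is still eliminated.
     */
        if (IS_ENABLED(CONFIG_4xx) || IS_ENABLED(CONFIG_BOOKE))
            printk("placeholder register dump\n");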

[PATCH v1] powerpc/process: Replace an #ifdef CONFIG_PPC_BOOK3S_64 by IS_ENABLED()

2020-08-16 Thread Christophe Leroy
The code guarded by this #ifdef CONFIG_PPC_BOOK3S_64 calls preload_new_slb_context() when radix is not enabled. radix_enabled() is always defined, and the prototype for preload_new_slb_context() is always present, so the #ifdef is unneeded. Replace it by IS_ENABLED(). Signed-off-by: Christophe Leroy --- arch/powe
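A sketch of the conversion, with the call's arguments elided (illustrative, not the actual start_thread() hunk):

    /* Before */
    #ifdef CONFIG_PPC_BOOK3S_64
        if (!radix_enabled())
            preload_new_slb_context(/* ... */);
    #endif

    /* After: radix_enabled() and the prototype are visible in all configs,
     * so IS_ENABLED() keeps the behaviour while letting the compiler drop
     * the call on other platforms.
     */
        if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled())
            preload_new_slb_context(/* ... */);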

[PATCH v1] powerpc/process: Remove unnecessary #ifdef CONFIG_FUNCTION_GRAPH_TRACER

2020-08-16 Thread Christophe Leroy
ftrace_graph_ret_addr() is always defined and returns 'ip' when CONFIG_FUNCTION_GRAPH_TRACER is not set. So the #ifdef is not needed, remove it. Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 4 1 file changed, 4 deletions(-) diff --git a/arch/powerpc/kernel/process.c

[PATCH v1] powerpc/process: Replace #ifdef CONFIG_KALLSYMS by IS_ENABLED()

2020-08-16 Thread Christophe Leroy
The #ifdef CONFIG_KALLSYMS encloses some printk which can be compiled in all cases. Replace by IS_ENABLED(). Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/ker

[PATCH v1] powerpc/process: Replace an #ifdef CONFIG_PPC_47x by IS_ENABLED()

2020-08-16 Thread Christophe Leroy
isync() is always defined, no need for an #ifdef. Replace it by IS_ENABLED(CONFIG_PPC_47x). Signed-off-by: Christophe Leroy --- arch/powerpc/kernel/process.c | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c in

Re: [PATCH v2 1/5] powerpc/mm: Introduce temporary mm

2020-08-16 Thread Christopher M. Riedl
On Thu Aug 6, 2020 at 6:27 AM CDT, Daniel Axtens wrote: > Hi Chris, > > > void __set_breakpoint(int nr, struct arch_hw_breakpoint *brk); > > +void __get_breakpoint(int nr, struct arch_hw_breakpoint *brk); > > bool ppc_breakpoint_available(void); > > #ifdef CONFIG_PPC_ADV_DEBUG_REGS > > exter

Re: [PATCH v2] powerpc/pseries: explicitly reschedule during drmem_lmb list traversal

2020-08-16 Thread Michael Ellerman
Nathan Lynch writes: > Michael Ellerman writes: >> Tyrel Datwyler writes: >>> On 8/11/20 6:20 PM, Nathan Lynch wrote: +static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb) +{ + const unsigned int resched_interval = 20; + + BUG_ON(lmb < drmem_inf
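A sketch of the periodic-reschedule idea under discussion (a variant that takes the array base explicitly; names and interval are illustrative, not the proposed drmem helper verbatim):

    static inline struct drmem_lmb *lmb_next_example(struct drmem_lmb *lmb,
                                                     struct drmem_lmb *base)
    {
        const unsigned int resched_interval = 20;

        /* Yield every resched_interval elements so a very long traversal
         * does not monopolise the CPU.
         */
        if ((lmb - base) % resched_interval == 0)
            cond_resched();

        return ++lmb;
    }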

Re: [PATCH v2 2/5] powerpc/lib: Initialize a temporary mm for code patching

2020-08-16 Thread Christopher M. Riedl
On Thu Aug 6, 2020 at 8:24 AM CDT, Daniel Axtens wrote: > "Christopher M. Riedl" writes: > > > When code patching a STRICT_KERNEL_RWX kernel the page containing the > > address to be patched is temporarily mapped with permissive memory > > protections. Currently, a per-cpu vmalloc patch area is us

[PATCH 1/2] powerpc/kernel/cputable: cleanup the function declarations

2020-08-16 Thread Madhavan Srinivasan
__machine_check_early_realmode_p*() are currently declared as extern in cputable.c, and because of this, compiling with "C=1" (which enables the semantic checker) produces these warnings. CHECK arch/powerpc/kernel/mce_power.c arch/powerpc/kernel/mce_power.c:709:6: warning: symbol '__machine_che
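The usual fix for this class of sparse warning is to put the prototypes in a header seen by both the definition and its users; a sketch with a hedged header location and signature (take the exact prototype from the tree):

    /* In a shared header (e.g. asm/mce.h) -- shape of the fix only. */
    long __machine_check_early_realmode_p7(struct pt_regs *regs);
    long __machine_check_early_realmode_p8(struct pt_regs *regs);

    /* cputable.c then includes that header instead of carrying local
     * 'extern' declarations, so mce_power.c and cputable.c agree on the
     * prototypes and the C=1 warnings go away.
     */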

[PATCH 2/2] powerpc: Add POWER10 raw mode cputable entry

2020-08-16 Thread Madhavan Srinivasan
Add a raw mode cputable entry for POWER10. Copies most of the fields from commit a3ea40d5c736 ("powerpc: Add POWER10 architected mode") except for the oprofile_cpu_type, machine_check_early, pvr_mask and pvr_value fields. On bare metal systems we use DT CPU features, which doesn't need a cputable entry.

Re: fsl_espi errors on v5.7.15

2020-08-16 Thread Chris Packham
On 14/08/20 6:19 pm, Heiner Kallweit wrote: > On 14.08.2020 04:48, Chris Packham wrote: >> Hi, >> >> I'm seeing a problem with accessing spi-nor after upgrading a T2081 >> based system to linux v5.7.15 >> >> For this board u-boot and the u-boot environment live on spi-nor. >> >> When I use fw_sete

[PATCH v4 8/8] mm/vmalloc: Hugepage vmalloc mappings

2020-08-16 Thread Nicholas Piggin
On platforms that define HAVE_ARCH_HUGE_VMAP and support PMD vmaps, vmalloc will attempt to allocate PMD-sized pages first, before falling back to small pages. Allocations which use something other than PAGE_KERNEL protections are not permitted to use huge pages yet, not all callers expect this (e
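A sketch of the try-huge-then-fallback strategy described (helper names are hypothetical, not the mm/vmalloc.c implementation):

    static void *huge_first_alloc_example(unsigned long size, pgprot_t prot,
                                          int node)
    {
        void *p = NULL;

        /* Only PAGE_KERNEL mappings may use huge pages at this stage. */
        if (pgprot_val(prot) == pgprot_val(PAGE_KERNEL))
            p = try_pmd_sized_alloc(size, node);    /* hypothetical */

        if (!p)
            p = small_page_alloc(size, prot, node); /* hypothetical */

        return p;
    }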

[PATCH v4 7/8] mm/vmalloc: add vmap_range_noflush variant

2020-08-16 Thread Nicholas Piggin
As a side-effect, the order of flush_cache_vmap() and arch_sync_kernel_mappings() calls are switched, but that now matches the other callers in this file. Signed-off-by: Nicholas Piggin --- mm/vmalloc.c | 17 + 1 file changed, 13 insertions(+), 4 deletions(-) diff --git a/mm/vma
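The usual shape of a *_noflush split, with the ordering noted in the changelog (function and helper names are illustrative, not the patch itself):

    static int vmap_range_noflush_example(unsigned long start, unsigned long end)
    {
        int err = walk_and_populate_tables(start, end); /* hypothetical */

        /* Per the description, this now runs before the cache flush. */
        arch_sync_kernel_mappings(start, end);
        return err;
    }

    static int vmap_range_example(unsigned long start, unsigned long end)
    {
        int err = vmap_range_noflush_example(start, end);

        flush_cache_vmap(start, end);
        return err;
    }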

[PATCH v4 6/8] mm: Move vmap_range from lib/ioremap.c to mm/vmalloc.c

2020-08-16 Thread Nicholas Piggin
This is a generic kernel virtual memory mapper, not specific to ioremap. Signed-off-by: Nicholas Piggin --- include/linux/vmalloc.h | 2 + mm/ioremap.c| 192 mm/vmalloc.c| 191 +++ 3 files chan

[PATCH v4 5/8] mm: HUGE_VMAP arch support cleanup

2020-08-16 Thread Nicholas Piggin
This changes the awkward approach where architectures provide init functions to determine which levels they can provide large mappings for, to one where the arch is queried for each call. This removes code and indirection, and allows constant-folding of dead code for unsupported levels. This also
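A sketch of the per-call query this describes (the arch hook name is an assumption; the merged interface may differ):

    static int try_pmd_mapping_example(pmd_t *pmd, unsigned long addr,
                                       unsigned long end, phys_addr_t phys,
                                       pgprot_t prot)
    {
        /* Ask the architecture at each call instead of consulting a table
         * filled in by an init function.
         */
        if (!arch_vmap_pmd_supported(prot))     /* assumed hook name */
            return 0;

        if ((end - addr) != PMD_SIZE || !IS_ALIGNED(addr, PMD_SIZE) ||
            !IS_ALIGNED(phys, PMD_SIZE))
            return 0;

        return pmd_set_huge(pmd, phys, prot);
    }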

[PATCH v4 4/8] lib/ioremap: rename ioremap_*_range to vmap_*_range

2020-08-16 Thread Nicholas Piggin
This will be moved to mm/ and used as a generic kernel virtual mapping function, so re-name it in preparation. Signed-off-by: Nicholas Piggin --- mm/ioremap.c | 55 ++-- 1 file changed, 23 insertions(+), 32 deletions(-) diff --git a/mm/ioremap.c b

[PATCH v4 3/8] mm/vmalloc: rename vmap_*_range vmap_pages_*_range

2020-08-16 Thread Nicholas Piggin
The vmalloc mapper operates on a struct page * array rather than a linear physical address, re-name it to make this distinction clear. Signed-off-by: Nicholas Piggin --- mm/vmalloc.c | 28 1 file changed, 12 insertions(+), 16 deletions(-) diff --git a/mm/vmalloc.c b

[PATCH v4 2/8] mm: apply_to_pte_range warn and fail if a large pte is encountered

2020-08-16 Thread Nicholas Piggin
Signed-off-by: Nicholas Piggin --- mm/memory.c | 60 +++-- 1 file changed, 44 insertions(+), 16 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index c39a13b09602..1d5f3093c249 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -2260,13 +2260,20 @@

[PATCH v4 1/8] mm/vmalloc: fix vmalloc_to_page for huge vmap mappings

2020-08-16 Thread Nicholas Piggin
vmalloc_to_page returns NULL for addresses mapped by larger pages[*]. Whether or not a vmap is huge depends on the architecture details, alignments, boot options, etc., which the caller can not be expected to know. Therefore HUGE_VMAP is a regression for vmalloc_to_page. This change teaches vmallo
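A sketch of the PMD-level case the fix has to handle (illustrative, not the actual vmalloc_to_page() hunk):

    /* If the walk finds a leaf PMD, derive the page from the huge entry plus
     * the offset of the address within the PMD-sized region.
     */
    static struct page *huge_pmd_to_page_example(pmd_t *pmdp, unsigned long addr)
    {
        pmd_t pmd = READ_ONCE(*pmdp);

        if (pmd_leaf(pmd))
            return pmd_page(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);

        return NULL;    /* fall through to the PTE-level lookup */
    }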

[PATCH v4 0/8] huge vmalloc mappings

2020-08-16 Thread Nicholas Piggin
Let's try again. Thanks, Nick Since v3: - Fixed an off-by-one bug in a loop - Fix !CONFIG_HAVE_ARCH_HUGE_VMAP build fail - Hopefully this time fix the arm64 vmap stack bug, thanks Jonathan Cameron for debugging the cause of this (hopefully). Since v2: - Rebased on vmalloc cleanups, split serie