Re: [PATCH] powerpc/32s: Fix RTAS machine check with VMAP stack

2020-12-22 Thread Christophe Leroy




On 22/12/2020 at 08:11, Christophe Leroy wrote:

When we have VMAP stack, exception prolog 1 sets r1, not r11.


But when the machine check happens in the kernel, exception prolog 1 uses r1
to set up the stack. So r1 must be restored when the branch is not taken; see
the subsequent patch I just sent out.

Christophe



Fixes: da7bb43ab9da ("powerpc/32: Fix vmap stack - Properly set r1 before activating MMU")
Fixes: d2e006036082 ("powerpc/32: Use SPRN_SPRG_SCRATCH2 in exception prologs")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy 
---
  arch/powerpc/kernel/head_book3s_32.S | 7 +++
  1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 349bf3f0c3af..fbc48a500846 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -260,9 +260,16 @@ __secondary_hold_acknowledge:
  MachineCheck:
EXCEPTION_PROLOG_0
  #ifdef CONFIG_PPC_CHRP
+#ifdef CONFIG_VMAP_STACK
+   mtspr   SPRN_SPRG_SCRATCH2,r1
+   mfspr   r1, SPRN_SPRG_THREAD
+   lwz r1, RTAS_SP(r1)
+   cmpwi   cr1, r1, 0
+#else
mfspr   r11, SPRN_SPRG_THREAD
lwz r11, RTAS_SP(r11)
cmpwi   cr1, r11, 0
+#endif
bne cr1, 7f
  #endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1 for_rtas=1



[PATCH] powerpc/32s: Fix RTAS machine check with VMAP stack - again

2020-12-22 Thread Christophe Leroy
When it is not an RTAS machine check, don't trash r1
because it is needed by prolog 1.

Fixes: 9c7422b92cb2 ("powerpc/32s: Fix RTAS machine check with VMAP stack")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy 
---
Sorry Michael for this last minute fix of the fix.

 arch/powerpc/kernel/head_book3s_32.S | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index fbc48a500846..858fbc8b19f3 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -265,12 +265,14 @@ MachineCheck:
mfspr   r1, SPRN_SPRG_THREAD
lwz r1, RTAS_SP(r1)
cmpwi   cr1, r1, 0
+   bne cr1, 7f
+   mfspr   r1, SPRN_SPRG_SCRATCH2
 #else
mfspr   r11, SPRN_SPRG_THREAD
lwz r11, RTAS_SP(r11)
cmpwi   cr1, r11, 0
-#endif
bne cr1, 7f
+#endif
 #endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1 for_rtas=1
 7: EXCEPTION_PROLOG_2
-- 
2.25.0
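In C terms, the net effect of the two patches on the CONFIG_VMAP_STACK path can be modeled as follows. This is an illustrative sketch only, not the real assembly; register and field names are just variables borrowed from the diff:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of the fixed MachineCheck entry with VMAP stack.
 * "r1" is the stack pointer, "scratch2" stands in for SPRN_SPRG_SCRATCH2,
 * and rtas_sp for the RTAS_SP field of the thread struct. */
static uintptr_t machine_check_pick_r1(uintptr_t r1, uintptr_t rtas_sp)
{
    uintptr_t scratch2 = r1;    /* mtspr SPRN_SPRG_SCRATCH2, r1 */
    r1 = rtas_sp;               /* mfspr/lwz: r1 = thread->rtas_sp */
    if (r1 != 0)                /* bne cr1, 7f: RTAS machine check */
        return r1;              /* run on the dedicated RTAS stack */
    return scratch2;            /* the "again" fix: restore r1 for prolog 1 */
}
```

Without the final restore, a non-RTAS machine check would reach EXCEPTION_PROLOG_1 with r1 zeroed, which is exactly what the follow-up patch repairs.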



[powerpc:merge] BUILD SUCCESS 409655c00c9ca27e768b09af3bae5bd675fbd994

2020-12-22 Thread kernel test robot
   randconfig-a001-20201221
x86_64   randconfig-a006-20201221
x86_64   randconfig-a002-20201221
x86_64   randconfig-a004-20201221
x86_64   randconfig-a003-20201221
x86_64   randconfig-a005-20201221
i386 randconfig-a005-20201222
i386 randconfig-a002-20201222
i386 randconfig-a006-20201222
i386 randconfig-a004-20201222
i386 randconfig-a003-20201222
i386 randconfig-a001-20201222
i386 randconfig-a011-20201221
i386 randconfig-a016-20201221
i386 randconfig-a014-20201221
i386 randconfig-a012-20201221
i386 randconfig-a015-20201221
i386 randconfig-a013-20201221
i386 randconfig-a016-20201222
i386 randconfig-a011-20201222
i386 randconfig-a014-20201222
i386 randconfig-a012-20201222
i386 randconfig-a015-20201222
i386 randconfig-a013-20201222
riscvnommu_virt_defconfig
riscv  rv32_defconfig
riscvnommu_k210_defconfig
riscvallyesconfig
riscv allnoconfig
riscv   defconfig
riscvallmodconfig
x86_64   rhel-8.3
x86_64  rhel-8.3-kbuiltin
x86_64   rhel
x86_64   allyesconfig
x86_64rhel-7.6-kselftests
x86_64  defconfig
x86_64  kexec

clang tested configs:
x86_64   randconfig-a015-20201221
x86_64   randconfig-a014-20201221
x86_64   randconfig-a016-20201221
x86_64   randconfig-a012-20201221
x86_64   randconfig-a013-20201221
x86_64   randconfig-a011-20201221
x86_64   randconfig-a001-20201222
x86_64   randconfig-a006-20201222
x86_64   randconfig-a002-20201222
x86_64   randconfig-a004-20201222
x86_64   randconfig-a003-20201222
x86_64   randconfig-a005-20201222

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-...@lists.01.org


[powerpc:fixes-test] BUILD SUCCESS 9c7422b92cb27369653c371ad9c44a502e5eea8f

2020-12-22 Thread kernel test robot
tree/branch: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git fixes-test
branch HEAD: 9c7422b92cb27369653c371ad9c44a502e5eea8f  powerpc/32s: Fix RTAS machine check with VMAP stack

elapsed time: 875m

configs tested: 118
configs skipped: 109

The following configs have been built successfully.
More configs may be tested in the coming days.

gcc tested configs:
arm defconfig
arm64allyesconfig
arm64   defconfig
arm  allyesconfig
arm  allmodconfig
powerpc kilauea_defconfig
powerpc tqm8555_defconfig
arc   tb10x_defconfig
xtensa   common_defconfig
c6xevmc6474_defconfig
powerpcfsp2_defconfig
m68k   sun3_defconfig
powerpc skiroot_defconfig
powerpc mpc834x_itx_defconfig
m68kq40_defconfig
m68k   m5208evb_defconfig
arm   h5000_defconfig
arm  pxa168_defconfig
powerpcicon_defconfig
powerpcge_imp3a_defconfig
sh   j2_defconfig
arm   spitz_defconfig
arm  badge4_defconfig
powerpc tqm8548_defconfig
powerpc mpc512x_defconfig
mips   xway_defconfig
powerpc  ppc64e_defconfig
arm  pxa255-idp_defconfig
arm   tegra_defconfig
arm  integrator_defconfig
powerpcadder875_defconfig
mips  fuloong2e_defconfig
powerpc   ebony_defconfig
armdove_defconfig
sh espt_defconfig
armrealview_defconfig
s390 alldefconfig
powerpc canyonlands_defconfig
powerpc   ppc64_defconfig
powerpcmvme5100_defconfig
arm   cns3420vb_defconfig
arm rpc_defconfig
arm palmz72_defconfig
powerpc mpc85xx_cds_defconfig
armvt8500_v6_v7_defconfig
arm eseries_pxa_defconfig
mips   mtx1_defconfig
um i386_defconfig
m68k   bvme6000_defconfig
sh   sh2007_defconfig
ia64 allmodconfig
ia64defconfig
ia64 allyesconfig
nios2   defconfig
arc  allyesconfig
nds32 allnoconfig
c6x  allyesconfig
xtensa   allyesconfig
h8300allyesconfig
arc defconfig
sh   allmodconfig
parisc  defconfig
s390 allyesconfig
parisc   allyesconfig
s390defconfig
i386 allyesconfig
sparcallyesconfig
sparc   defconfig
i386   tinyconfig
i386defconfig
mips allyesconfig
mips allmodconfig
powerpc  allyesconfig
powerpc  allmodconfig
powerpc   allnoconfig
x86_64   randconfig-a001-20201221
x86_64   randconfig-a006-20201221
x86_64   randconfig-a002-20201221
x86_64   randconfig-a004-20201221
x86_64   randconfig-a003-20201221
x86_64   randconfig-a005-20201221
i386 randconfig-a005-20201222
i386 randconfig-a002-20201222
i386 randconfig-a006-20201222
i386 randconfig-a004-20201222
i386 randconfig-a003-20201222
i386 randconfig-a001-20201222
i386 randconfig-a011-20201221
i386 randconfig-a016-20201221
i386 randconfig-a014-20201221
i386 randconfig-a012-20201221
i386 randconfig-a015-20201221
i386 randconfig-a013-20201221
riscvnommu_k210_defconfig
riscvallyesconfig
riscvnommu_virt_defconfig
riscv allnoconfig
riscv   defconfig
riscv  rv32_defconfig
riscvallmodconfig
x86_64

Re: [PATCH 3/3] powerpc: rewrite atomics to use ARCH_ATOMIC

2020-12-22 Thread Boqun Feng
On Tue, Dec 22, 2020 at 01:52:50PM +1000, Nicholas Piggin wrote:
> Excerpts from Boqun Feng's message of November 14, 2020 1:30 am:
> > Hi Nicholas,
> > 
> > On Wed, Nov 11, 2020 at 09:07:23PM +1000, Nicholas Piggin wrote:
> >> All the cool kids are doing it.
> >> 
> >> Signed-off-by: Nicholas Piggin 
> >> ---
> >>  arch/powerpc/include/asm/atomic.h  | 681 ++---
> >>  arch/powerpc/include/asm/cmpxchg.h |  62 +--
> >>  2 files changed, 248 insertions(+), 495 deletions(-)
> >> 
> >> diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
> >> index 8a55eb8cc97b..899aa2403ba7 100644
> >> --- a/arch/powerpc/include/asm/atomic.h
> >> +++ b/arch/powerpc/include/asm/atomic.h
> >> @@ -11,185 +11,285 @@
> >>  #include 
> >>  #include 
> >>  
> >> +#define ARCH_ATOMIC
> >> +
> >> +#ifndef CONFIG_64BIT
> >> +#include 
> >> +#endif
> >> +
> >>  /*
> >>   * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
> >>   * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
> >>   * on the platform without lwsync.
> >>   */
> >>  #define __atomic_acquire_fence()  \
> >> -  __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory")
> >> +  asm volatile(PPC_ACQUIRE_BARRIER "" : : : "memory")
> >>  
> >>  #define __atomic_release_fence()  \
> >> -  __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory")
> >> +  asm volatile(PPC_RELEASE_BARRIER "" : : : "memory")
> >>  
> >> -static __inline__ int atomic_read(const atomic_t *v)
> >> -{
> >> -  int t;
> >> +#define __atomic_pre_full_fence   smp_mb
> >>  
> >> -  __asm__ __volatile__("lwz%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
> >> +#define __atomic_post_full_fence  smp_mb
> >>  
> 
> Thanks for the review.
> 
> > Do you need to define __atomic_{pre,post}_full_fence for PPC? IIRC, they
> > default to smp_mb__{before,after}_atomic(), which are smp_mb() by default
> > on PPC.
> 
> Okay I didn't realise that's not required.
> 
> >> -  return t;
> >> +#define arch_atomic_read(v)		__READ_ONCE((v)->counter)
> >> +#define arch_atomic_set(v, i)		__WRITE_ONCE(((v)->counter), (i))
> >> +#ifdef CONFIG_64BIT
> >> +#define ATOMIC64_INIT(i)  { (i) }
> >> +#define arch_atomic64_read(v)		__READ_ONCE((v)->counter)
> >> +#define arch_atomic64_set(v, i)	__WRITE_ONCE(((v)->counter), (i))
> >> +#endif
> >> +
> > [...]
> >>  
> >> +#define ATOMIC_FETCH_OP_UNLESS_RELAXED(name, type, dtype, width, asm_op) \
> >> +static inline int arch_##name##_relaxed(type *v, dtype a, dtype u)	\
> > 
> > I don't think we have atomic_fetch_*_unless_relaxed() in the atomic APIs,
> > ditto for:
> > 
> > atomic_fetch_add_unless_relaxed()
> > atomic_inc_not_zero_relaxed()
> > atomic_dec_if_positive_relaxed()
> > 
> > , and we don't have the _acquire() and _release() variants for them
> > either, and if you don't define their fully-ordered version (e.g.
> > atomic_inc_not_zero()), atomic-arch-fallback.h will use read and cmpxchg
> > to implement them, which I think is not what we want.
> 
> Okay. How can those be added? The atomic generation is pretty 
> complicated.
> 

Yeah, I know ;-) I think you can just implement and define the fully-ordered
versions:

arch_atomic_fetch_*_unless()
arch_atomic_inc_not_zero()
arch_atomic_dec_if_positive()

, that should work.

Rules of atomic generation, IIRC:

1.  If you define _relaxed, _acquire, _release or fully-ordered
version, atomic generation will use that version

2.  If you define _relaxed, atomic generation will use that and
barriers to generate _acquire, _release and fully-ordered
versions, unless they are already defined (as Rule #1 says)

3.  If you don't define _relaxed, but define the fully-ordered
version, atomic generation will use the fully-ordered version
and use it as _relaxed variants and generate the rest using Rule
#2.
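To illustrate why falling back is undesirable here: when an arch provides neither the _relaxed nor the fully-ordered version, atomic-arch-fallback.h builds e.g. inc_not_zero out of a plain read plus a cmpxchg retry loop, roughly like this C11 sketch (illustrative only — the kernel does this with generated macros, and a native larx/stcx. loop on powerpc would be cheaper):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of the generic inc_not_zero fallback: read the value, refuse to
 * increment zero, otherwise retry a compare-and-swap until it lands. */
static bool fallback_inc_not_zero(atomic_int *v)
{
    int c = atomic_load_explicit(v, memory_order_relaxed);
    do {
        if (c == 0)
            return false;   /* never resurrect a zero count */
        /* on CAS failure, c is reloaded with the current value */
    } while (!atomic_compare_exchange_weak(v, &c, c + 1));
    return true;
}
```

The fully-ordered arch version Boqun suggests avoids this read-then-CAS shape entirely.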

> > [...]
> >>  
> >>  #endif /* __KERNEL__ */
> >>  #endif /* _ASM_POWERPC_ATOMIC_H_ */
> >> diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
> >> index cf091c4c22e5..181f7e8b3281 100644
> >> --- a/arch/powerpc/include/asm/cmpxchg.h
> >> +++ b/arch/powerpc/include/asm/cmpxchg.h
> >> @@ -192,7 +192,7 @@ __xchg_relaxed(void *ptr, unsigned long x, unsigned int size)
> >>(unsigned long)_x_, sizeof(*(ptr)));
> >>  \
> >>})
> >>  
> >> -#define xchg_relaxed(ptr, x)					\
> >> +#define arch_xchg_relaxed(ptr, x)				\
> >>  ({								\
> >>__typeof__(*(ptr)) _x_ = (x);   \
> >>(__typeof__(*(ptr))) 

Re: [PATCH] arch: consolidate pm_power_off callback

2020-12-22 Thread Enrico Weigelt, metux IT consult
On 22.12.20 19:54, Geert Uytterhoeven wrote:

Hi,

> On Tue, Dec 22, 2020 at 7:46 PM Enrico Weigelt, metux IT consult
>  wrote:
>> Move the pm_power_off callback into one global place and also add a
>> function for conditionally calling it (when not NULL), in order to remove
>> code duplication in all individual archs.
>>
>> Signed-off-by: Enrico Weigelt, metux IT consult 
> 
> Thanks for your patch!
> 
>> --- a/arch/alpha/kernel/process.c
>> +++ b/arch/alpha/kernel/process.c
>> @@ -43,12 +43,6 @@
>>  #include "proto.h"
>>  #include "pci_impl.h"
>>
>> -/*
>> - * Power off function, if any
>> - */
>> -void (*pm_power_off)(void) = machine_power_off;
> 
> Assignments like these are lost in the conversion.

Yes, but this doesn't ever seem to be called anyway (in arch/alpha).
And, by the way, letting it point to machine_power_off() doesn't make much
sense, since it is the arch's machine_power_off() function that calls
pm_power_off().

Actually, we could remove pm_power_off completely from here, assuming
nobody would *build* any drivers that register themselves into
pm_power_off.

If you feel better with it, I could post a patch that just removes
pm_power_off from arch/alpha.
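For reference, the consolidation boils down to one global pointer plus one NULL-checking wrapper. A minimal userspace model follows — do_power_off is the helper name suggested by the diff, and the driver hook is a hypothetical stand-in for a real PMIC driver:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* The single shared callback (the patch moves it into kernel/reboot.c)... */
void (*pm_power_off)(void) = NULL;

/* ...and the wrapper replacing every per-arch
 * "if (pm_power_off) pm_power_off();" sequence. */
static void do_power_off(void)
{
    if (pm_power_off)
        pm_power_off();
}

/* Hypothetical driver registering itself into the callback. */
static bool powered_off;
static void psu_power_off(void) { powered_off = true; }
```

Each arch's machine_power_off() then calls do_power_off() and no longer declares or exports pm_power_off itself.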


--mtx

-- 
---
Note: unencrypted e-mails can easily be intercepted and manipulated!
For confidential communication, please send your GPG/PGP key.
---
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
i...@metux.net -- +49-151-27565287


Re: [PATCH] arch: consolidate pm_power_off callback

2020-12-22 Thread Geert Uytterhoeven
Hi Enrico,

On Tue, Dec 22, 2020 at 7:46 PM Enrico Weigelt, metux IT consult
 wrote:
> Move the pm_power_off callback into one global place and also add a
> function for conditionally calling it (when not NULL), in order to remove
> code duplication in all individual archs.
>
> Signed-off-by: Enrico Weigelt, metux IT consult 

Thanks for your patch!

> --- a/arch/alpha/kernel/process.c
> +++ b/arch/alpha/kernel/process.c
> @@ -43,12 +43,6 @@
>  #include "proto.h"
>  #include "pci_impl.h"
>
> -/*
> - * Power off function, if any
> - */
> -void (*pm_power_off)(void) = machine_power_off;

Assignments like these are lost in the conversion.

> -EXPORT_SYMBOL(pm_power_off);

Gr{oetje,eeting}s,

Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds


[PATCH] arch: consolidate pm_power_off callback

2020-12-22 Thread Enrico Weigelt, metux IT consult
Move the pm_power_off callback into one global place and also add a
function for conditionally calling it (when not NULL), in order to remove
code duplication in all individual archs.

Signed-off-by: Enrico Weigelt, metux IT consult 
---
 arch/alpha/kernel/process.c|  6 --
 arch/arc/kernel/reset.c|  3 ---
 arch/arm/kernel/reboot.c   |  6 ++
 arch/arm64/kernel/process.c|  6 +-
 arch/c6x/kernel/process.c  | 10 ++
 arch/csky/kernel/power.c   | 10 +++---
 arch/h8300/kernel/process.c|  3 ---
 arch/hexagon/kernel/reset.c|  3 ---
 arch/ia64/kernel/process.c |  5 +
 arch/m68k/kernel/process.c |  3 ---
 arch/microblaze/kernel/process.c   |  3 ---
 arch/mips/kernel/reset.c   |  6 +-
 arch/nds32/kernel/process.c|  7 ++-
 arch/nios2/kernel/process.c|  3 ---
 arch/openrisc/kernel/process.c |  3 ---
 arch/parisc/kernel/process.c   |  9 +++--
 arch/powerpc/kernel/setup-common.c |  5 ++---
 arch/powerpc/xmon/xmon.c   |  4 ++--
 arch/riscv/kernel/reset.c  |  9 -
 arch/s390/kernel/setup.c   |  3 ---
 arch/sh/kernel/reboot.c|  6 +-
 arch/x86/kernel/reboot.c   | 15 ---
 arch/x86/xen/enlighten_pv.c|  4 ++--
 arch/xtensa/kernel/process.c   |  4 
 include/linux/pm.h |  2 ++
 kernel/reboot.c| 10 ++
 26 files changed, 42 insertions(+), 106 deletions(-)

diff --git a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
index 6c71554206cc..df0df869751d 100644
--- a/arch/alpha/kernel/process.c
+++ b/arch/alpha/kernel/process.c
@@ -43,12 +43,6 @@
 #include "proto.h"
 #include "pci_impl.h"
 
-/*
- * Power off function, if any
- */
-void (*pm_power_off)(void) = machine_power_off;
-EXPORT_SYMBOL(pm_power_off);
-
 #ifdef CONFIG_ALPHA_WTINT
 /*
  * Sleep the CPU.
diff --git a/arch/arc/kernel/reset.c b/arch/arc/kernel/reset.c
index fd6c3eb930ba..3a27b6a202d4 100644
--- a/arch/arc/kernel/reset.c
+++ b/arch/arc/kernel/reset.c
@@ -26,6 +26,3 @@ void machine_power_off(void)
/* FIXME ::  power off ??? */
machine_halt();
 }
-
-void (*pm_power_off) (void) = NULL;
-EXPORT_SYMBOL(pm_power_off);
diff --git a/arch/arm/kernel/reboot.c b/arch/arm/kernel/reboot.c
index 0ce388f15422..9e1bf0e9b3e0 100644
--- a/arch/arm/kernel/reboot.c
+++ b/arch/arm/kernel/reboot.c
@@ -6,6 +6,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -19,8 +20,6 @@ typedef void (*phys_reset_t)(unsigned long, bool);
  * Function pointers to optional machine specific functions
  */
 void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
-void (*pm_power_off)(void);
-EXPORT_SYMBOL(pm_power_off);
 
 /*
  * A temporary stack to use for CPU reset. This is static so that we
@@ -118,8 +117,7 @@ void machine_power_off(void)
local_irq_disable();
smp_send_stop();
 
-   if (pm_power_off)
-   pm_power_off();
+   do_power_off();
 }
 
 /*
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 6616486a58fe..a5d4c1e80abd 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -67,9 +67,6 @@ EXPORT_SYMBOL(__stack_chk_guard);
 /*
  * Function pointers to optional machine specific functions
  */
-void (*pm_power_off)(void);
-EXPORT_SYMBOL_GPL(pm_power_off);
-
 void (*arm_pm_restart)(enum reboot_mode reboot_mode, const char *cmd);
 
 static void noinstr __cpu_do_idle(void)
@@ -172,8 +169,7 @@ void machine_power_off(void)
 {
local_irq_disable();
smp_send_stop();
-   if (pm_power_off)
-   pm_power_off();
+   do_power_off();
 }
 
 /*
diff --git a/arch/c6x/kernel/process.c b/arch/c6x/kernel/process.c
index 9f4fd6a40a10..8b4b24476162 100644
--- a/arch/c6x/kernel/process.c
+++ b/arch/c6x/kernel/process.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -25,12 +26,6 @@ void (*c6x_halt)(void);
 extern asmlinkage void ret_from_fork(void);
 extern asmlinkage void ret_from_kernel_thread(void);
 
-/*
- * power off function, if any
- */
-void (*pm_power_off)(void);
-EXPORT_SYMBOL(pm_power_off);
-
 void arch_cpu_idle(void)
 {
unsigned long tmp;
@@ -71,8 +66,7 @@ void machine_halt(void)
 
 void machine_power_off(void)
 {
-   if (pm_power_off)
-   pm_power_off();
+   do_power_off();
halt_loop();
 }
 
diff --git a/arch/csky/kernel/power.c b/arch/csky/kernel/power.c
index 923ee4e381b8..c702e66ce03a 100644
--- a/arch/csky/kernel/power.c
+++ b/arch/csky/kernel/power.c
@@ -2,23 +2,19 @@
 // Copyright (C) 2018 Hangzhou C-SKY Microsystems co.,ltd.
 
 #include 
-
-void (*pm_power_off)(void);
-EXPORT_SYMBOL(pm_power_off);
+#include 
 
 void machine_power_off(void)
 {
local_irq_disable();
-   if (pm_power_off)
-   pm_power_off();
+   do_power_off();
asm volatile 

Re: [PATCH] powerpc:Don't print raw EIP/LR hex values in dump_stack() and show_regs()

2020-12-22 Thread Christophe Leroy




On 22/12/2020 at 18:29, Segher Boessenkool wrote:

On Tue, Dec 22, 2020 at 09:45:03PM +0800, Xiaoming Ni wrote:

On 2020/12/22 1:12, Segher Boessenkool wrote:

On Mon, Dec 21, 2020 at 04:42:23PM +, David Laight wrote:

From: Segher Boessenkool

Sent: 21 December 2020 16:32

On Mon, Dec 21, 2020 at 04:17:21PM +0100, Christophe Leroy wrote:

On 21/12/2020 at 04:27, Xiaoming Ni wrote:

Since the commit 2b0e86cc5de6 ("powerpc/fsl_booke/32: implement KASLR
infrastructure"), the powerpc system is ready to support KASLR.
To reduce the risk of defeating address randomization, don't print the
EIP/LR hex values in dump_stack() and show_regs().



I think your change is not enough to hide the EIP address, see below a dump
with your patch, you get "Faulting instruction address: 0xc03a0c14"


As far as I can see the patch does nothing to the GPR printout.  Often
GPRs contain code addresses.  As one example, the LR is moved via a GPR
(often GPR0, but not always) for storing on the stack.

So this needs more work.


If the dump_stack() is from an oops you need the real EIP value
in order to stand any chance of making headway.


Or at least the function name + offset, yes.


When the system is healthy, only symbols and offsets are printed;
the address and symbol + offset are output when the system is dying.
Does this meet both debugging and security requirements?


If you have the vmlinux, sym+off is enough to find what instruction
caused the crash.

It does of course not give all the information you get in a crash dump
with all the registers, so it does hinder debugging a bit.  This is a
tradeoff.

Most debugging will need xmon or similar (or printf-style debugging)
anyway; and otoh the register dump will render KASLR largely
ineffective.


For example:

+static void __show_regs_ip_lr(const char *flag, unsigned long addr)
+{
+ if (system_going_down()) { /* panic oops reboot */
+ pr_cont("%s["REG"] %pS", flag, addr, (void *)addr);
+ } else {
+ pr_cont("%s%pS", flag, (void *)addr);
+ }
+}


*If* you are certain the system goes down immediately, and you are also
certain this information will not help defeat ASLR after a reboot, you
could just print whatever, sure.

Otherwise, you only want to show some very few registers.  Or, make sure
no attackers can ever see these dumps (which is hard, many systems trust
all (local) users with it!)  Which means we first will need some very
different patches, before any of this can be much useful :-(



So IIUC, on one side we enlarge the dumping of registers with commits like
https://github.com/linuxppc/linux/commit/bf13718bc57ada25016d9fe80323238d0b94506e#diff-8b965e0e62fc1b6ad5e51bf0a539941e929754cdb716041b06b4f4a5f73590f9,
and on the other side we want to narrow it and hide registers? I'm lost.


Christophe


Re: [PATCH 3/3] ibmvfc: use correlation token to tag commands

2020-12-22 Thread Tyrel Datwyler
On 12/21/20 10:24 PM, Nathan Chancellor wrote:
> On Tue, Nov 17, 2020 at 12:50:31PM -0600, Tyrel Datwyler wrote:
>> The vfcFrame correlation field is a 64-bit handle that is intended to trace
>> I/O operations through both the client stack and VIOS stack when the
>> underlying physical FC adapter supports tagging.
>>
>> Tag vfcFrames with the associated ibmvfc_event pointer handle.
>>
>> Signed-off-by: Tyrel Datwyler 
>> ---
>>  drivers/scsi/ibmvscsi/ibmvfc.c | 4 
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
>> index 0cab4b852b48..3922441a117d 100644
>> --- a/drivers/scsi/ibmvscsi/ibmvfc.c
>> +++ b/drivers/scsi/ibmvscsi/ibmvfc.c
>> @@ -1693,6 +1693,8 @@ static int ibmvfc_queuecommand_lck(struct scsi_cmnd 
>> *cmnd,
>>  vfc_cmd->iu.pri_task_attr = IBMVFC_SIMPLE_TASK;
>>  }
>>  
>> +vfc_cmd->correlation = cpu_to_be64(evt);
>> +
>>  if (likely(!(rc = ibmvfc_map_sg_data(cmnd, evt, vfc_cmd, vhost->dev
>>  return ibmvfc_send_event(evt, vhost, 0);
>>  
>> @@ -2370,6 +2372,8 @@ static int ibmvfc_abort_task_set(struct scsi_device 
>> *sdev)
>>  tmf->iu.tmf_flags = IBMVFC_ABORT_TASK_SET;
>>  evt->sync_iu = _iu;
>>  
>> +tmf->correlation = cpu_to_be64(evt);
>> +
>>  init_completion(>comp);
>>  rsp_rc = ibmvfc_send_event(evt, vhost, default_timeout);
>>  }
>> -- 
>> 2.27.0
>>
> 
> This patch introduces a clang warning; is this intentional behavior?

Nope, I just missed the required cast. I've got a fixes patch queued up. I just
haven't sent it yet.

-Tyrel



Re: [PATCH] powerpc:Don't print raw EIP/LR hex values in dump_stack() and show_regs()

2020-12-22 Thread Segher Boessenkool
On Tue, Dec 22, 2020 at 09:45:03PM +0800, Xiaoming Ni wrote:
> On 2020/12/22 1:12, Segher Boessenkool wrote:
> >On Mon, Dec 21, 2020 at 04:42:23PM +, David Laight wrote:
> >>From: Segher Boessenkool
> >>>Sent: 21 December 2020 16:32
> >>>
> >>>On Mon, Dec 21, 2020 at 04:17:21PM +0100, Christophe Leroy wrote:
> On 21/12/2020 at 04:27, Xiaoming Ni wrote:
> >Since the commit 2b0e86cc5de6 ("powerpc/fsl_booke/32: implement KASLR
> >infrastructure"), the powerpc system is ready to support KASLR.
> >To reduce the risk of defeating address randomization, don't print the
> >EIP/LR hex values in dump_stack() and show_regs().
> >>>
> I think your change is not enough to hide the EIP address, see below a dump
> with your patch, you get "Faulting instruction address: 0xc03a0c14"
> >>>
> >>>As far as I can see the patch does nothing to the GPR printout.  Often
> >>>GPRs contain code addresses.  As one example, the LR is moved via a GPR
> >>>(often GPR0, but not always) for storing on the stack.
> >>>
> >>>So this needs more work.
> >>
> >>If the dump_stack() is from an oops you need the real EIP value
> >>in order to stand any chance of making headway.
> >
> >Or at least the function name + offset, yes.
> >
> When the system is healthy, only symbols and offsets are printed;
> the address and symbol + offset are output when the system is dying.
> Does this meet both debugging and security requirements?

If you have the vmlinux, sym+off is enough to find what instruction
caused the crash.

It does of course not give all the information you get in a crash dump
with all the registers, so it does hinder debugging a bit.  This is a
tradeoff.

Most debugging will need xmon or similar (or printf-style debugging)
anyway; and otoh the register dump will render KASLR largely
ineffective.

> For example:
> 
> +static void __show_regs_ip_lr(const char *flag, unsigned long addr)
> +{
> + if (system_going_down()) { /* panic oops reboot */
> + pr_cont("%s["REG"] %pS", flag, addr, (void *)addr);
> + } else {
> + pr_cont("%s%pS", flag, (void *)addr);
> + }
> +}

*If* you are certain the system goes down immediately, and you are also
certain this information will not help defeat ASLR after a reboot, you
could just print whatever, sure.

Otherwise, you only want to show some very few registers.  Or, make sure
no attackers can ever see these dumps (which is hard, many systems trust
all (local) users with it!)  Which means we first will need some very
different patches, before any of this can be much useful :-(


Segher


Re: [PATCH v3 03/19] powerpc: bad_page_fault, do_break get registers from regs

2020-12-22 Thread Christophe Leroy




On 28/11/2020 at 15:40, Nicholas Piggin wrote:

Similar to the previous patch this makes interrupt handler function
types more regular so they can be wrapped with the next patch.

bad_page_fault and do_break are not performance critical.


I partly took your changes into one of my series, in a different order though.

Please have a look at https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=221656 patches 4 to 7


I think some of the changes are missing in your series, especially the
changes in entry_32.S from patch 7.


We will see how our two series make their way into mainline; yours needs a
rebase anyway.

Christophe



[32s DABR code from Christophe Leroy ]
Signed-off-by: Nicholas Piggin 
---
  arch/powerpc/include/asm/bug.h |  2 +-
  arch/powerpc/include/asm/debug.h   |  3 +--
  arch/powerpc/kernel/entry_32.S | 18 +-
  arch/powerpc/kernel/exceptions-64e.S   |  3 +--
  arch/powerpc/kernel/exceptions-64s.S   |  3 +--
  arch/powerpc/kernel/head_8xx.S |  5 ++---
  arch/powerpc/kernel/head_book3s_32.S   |  3 +++
  arch/powerpc/kernel/process.c  |  7 +++
  arch/powerpc/kernel/traps.c|  2 +-
  arch/powerpc/mm/book3s64/hash_utils.c  |  4 ++--
  arch/powerpc/mm/book3s64/slb.c |  2 +-
  arch/powerpc/mm/fault.c| 10 +-
  arch/powerpc/platforms/8xx/machine_check.c |  2 +-
  13 files changed, 23 insertions(+), 41 deletions(-)

diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index 897bad6b6bbb..49162faba33f 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -113,7 +113,7 @@
  struct pt_regs;
  long do_page_fault(struct pt_regs *);
  long hash__do_page_fault(struct pt_regs *);
-extern void bad_page_fault(struct pt_regs *, unsigned long, int);
+void bad_page_fault(struct pt_regs *, int);
  extern void _exception(int, struct pt_regs *, int, unsigned long);
  extern void _exception_pkey(struct pt_regs *, unsigned long, int);
  extern void die(const char *, struct pt_regs *, long);
diff --git a/arch/powerpc/include/asm/debug.h b/arch/powerpc/include/asm/debug.h
index ec57daf87f40..0550eceab3ca 100644
--- a/arch/powerpc/include/asm/debug.h
+++ b/arch/powerpc/include/asm/debug.h
@@ -52,8 +52,7 @@ extern void do_send_trap(struct pt_regs *regs, unsigned long 
address,
 unsigned long error_code, int brkpt);
  #else
  
-extern void do_break(struct pt_regs *regs, unsigned long address,
-		     unsigned long error_code);
+void do_break(struct pt_regs *regs);
  #endif
  
  #endif /* _ASM_POWERPC_DEBUG_H */

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 8cdc8bcde703..57b8e95ea2a0 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -657,10 +657,6 @@ ppc_swapcontext:
.globl  handle_page_fault
  handle_page_fault:
addir3,r1,STACK_FRAME_OVERHEAD
-#ifdef CONFIG_PPC_BOOK3S_32
-   andis.  r0,r5,DSISR_DABRMATCH@h
-   bne-handle_dabr_fault
-#endif
bl  do_page_fault
cmpwi   r3,0
beq+ret_from_except
@@ -668,23 +664,11 @@ handle_page_fault:
lwz r0,_TRAP(r1)
clrrwi  r0,r0,1
stw r0,_TRAP(r1)
-   mr  r5,r3
+   mr  r4,r3   /* err arg for bad_page_fault */
addir3,r1,STACK_FRAME_OVERHEAD
-   lwz r4,_DAR(r1)
bl  bad_page_fault
b   ret_from_except_full
  
-#ifdef CONFIG_PPC_BOOK3S_32

-   /* We have a data breakpoint exception - handle it */
-handle_dabr_fault:
-   SAVE_NVGPRS(r1)
-   lwz r0,_TRAP(r1)
-   clrrwi  r0,r0,1
-   stw r0,_TRAP(r1)
-   bl  do_break
-   b   ret_from_except_full
-#endif
-
  /*
   * This routine switches between two different tasks.  The process
   * state of one is saved on its kernel stack.  Then the state
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 25fa7d5a643c..dc728bb1c89a 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1018,9 +1018,8 @@ storage_fault_common:
bne-1f
b   ret_from_except_lite
  1:bl  save_nvgprs
-   mr  r5,r3
+   mr  r4,r3
addir3,r1,STACK_FRAME_OVERHEAD
-   ld  r4,_DAR(r1)
bl  bad_page_fault
b   ret_from_except
  
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 690058043b17..77b730f515c4 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2136,8 +2136,7 @@ EXC_COMMON_BEGIN(h_data_storage_common)
GEN_COMMON h_data_storage
addir3,r1,STACK_FRAME_OVERHEAD
  BEGIN_MMU_FTR_SECTION
-   ld  r4,_DAR(r1)
-   li  r5,SIGSEGV
+   li  r4,SIGSEGV
bl  

Re: [PATCH] powerpc:Don't print raw EIP/LR hex values in dump_stack() and show_regs()

2020-12-22 Thread Xiaoming Ni

On 2020/12/22 1:12, Segher Boessenkool wrote:

On Mon, Dec 21, 2020 at 04:42:23PM +, David Laight wrote:

From: Segher Boessenkool

Sent: 21 December 2020 16:32

On Mon, Dec 21, 2020 at 04:17:21PM +0100, Christophe Leroy wrote:

On 21/12/2020 at 04:27, Xiaoming Ni wrote:

Since the commit 2b0e86cc5de6 ("powerpc/fsl_booke/32: implement KASLR
infrastructure"), the powerpc system is ready to support KASLR.
To reduce the risk of defeating address randomization, don't print the
EIP/LR hex values in dump_stack() and show_regs().



I think your change is not enough to hide the EIP address, see below a dump
with your patch, you get "Faulting instruction address: 0xc03a0c14"


As far as I can see the patch does nothing to the GPR printout.  Often
GPRs contain code addresses.  As one example, the LR is moved via a GPR
(often GPR0, but not always) for storing on the stack.

So this needs more work.


If the dump_stack() is from an oops you need the real EIP value
in order to stand any chance of making headway.


Or at least the function name + offset, yes.


When the system is healthy, only symbols and offsets are printed;
the address and symbol + offset are output when the system is dying.
Does this meet both debugging and security requirements?
For example:

+static void __show_regs_ip_lr(const char *flag, unsigned long addr)
+{
+ if (system_going_down()) { /* panic oops reboot */
+ pr_cont("%s["REG"] %pS", flag, addr, (void *)addr);
+ } else {
+ pr_cont("%s%pS", flag, (void *)addr);
+ }
+}
+
 static void __show_regs(struct pt_regs *regs)
 {
int i, trap;

-   printk("NIP:  "REG" LR: "REG" CTR: "REG"\n",
-  regs->nip, regs->link, regs->ctr);
+ __show_regs_ip_lr("NIP: ", regs->nip);
+ __show_regs_ip_lr(" LR: ", regs->link);
+ pr_cont(" CTR: "REG"\n", regs->ctr);
printk("REGS: %px TRAP: %04lx   %s  (%s)\n",
   regs, regs->trap, print_tainted(), init_utsname()->release);
printk("MSR:  "REG" ", regs->msr);





Otherwise you might just as well print 'borked - tough luck'.


Yes.  ASLR is a house of cards.  But that isn't constructive wrt this
patch :-)


Segher



Thanks
Xiaoming Ni


Re: [PATCH] tpm: ibmvtpm: fix error return code in tpm_ibmvtpm_probe()

2020-12-22 Thread Stefan Berger

On 11/25/20 10:35 PM, Jarkko Sakkinen wrote:

On Tue, 2020-11-24 at 21:52 +0800, Wang Hai wrote:

Fix to return a negative error code from the error handling
case instead of 0, as done elsewhere in this function.

Fixes: d8d74ea3c002 ("tpm: ibmvtpm: Wait for buffer to be set before proceeding")
Reported-by: Hulk Robot 
Signed-off-by: Wang Hai 

Provide a reasoning for -ETIMEOUT in the commit message.

/Jarkko



Was this patch ever applied? I don't seem to find the infradead git tree ...




[PATCH -next] ide/pmac: use DIV_ROUND_UP macro to do calculation

2020-12-22 Thread Zheng Yongjun
Don't open-code the DIV_ROUND_UP() kernel macro.

Signed-off-by: Zheng Yongjun 
---
 drivers/ide/pmac.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/ide/pmac.c b/drivers/ide/pmac.c
index ea0b064b5f56..6c0237af610d 100644
--- a/drivers/ide/pmac.c
+++ b/drivers/ide/pmac.c
@@ -105,8 +105,8 @@ static const char* model_name[] = {
  */
 
 /* Number of IDE_SYSCLK_NS ticks, argument is in nanoseconds */
-#define SYSCLK_TICKS(t)(((t) + IDE_SYSCLK_NS - 1) / IDE_SYSCLK_NS)
-#define SYSCLK_TICKS_66(t) (((t) + IDE_SYSCLK_66_NS - 1) / IDE_SYSCLK_66_NS)
+#define SYSCLK_TICKS(t)DIV_ROUND_UP(t, IDE_SYSCLK_NS)
+#define SYSCLK_TICKS_66(t) DIV_ROUND_UP(t, IDE_SYSCLK_66_NS)
 #define IDE_SYSCLK_NS  30  /* 33Mhz cell */
 #define IDE_SYSCLK_66_NS   15  /* 66Mhz cell */
 
-- 
2.22.0



[PATCH v1 15/15] powerpc/32: Use r11 to store DSISR in prolog

2020-12-22 Thread Christophe Leroy
We now have r11 available. Use it to avoid reloading DSISR
from the stack when needed.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/head_6xx_8xx.h   | 4 ++--
 arch/powerpc/kernel/head_8xx.S   | 3 +--
 arch/powerpc/kernel/head_book3s_32.S | 3 +--
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/head_6xx_8xx.h b/arch/powerpc/kernel/head_6xx_8xx.h
index 5a90bafee536..7116162dae9d 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -72,9 +72,9 @@
tovirt(r12, r12)
.if \handle_dar_dsisr
lwz r10, DAR(r12)
+   lwz r11, DSISR(r12)
stw r10, _DAR(r1)
-   lwz r10, DSISR(r12)
-   stw r10, _DSISR(r1)
+   stw r11, _DSISR(r1)
.endif
lwz r9, SRR1(r12)
lwz r12, SRR0(r12)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 7a078b26d24c..7e9cbd64efd9 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -335,9 +335,8 @@ DARFixed:/* Return from dcbx instruction bug workaround */
mtspr   SPRN_DAR, r11   /* Tag DAR, to be used in DTLB Error */
EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
+   andis.  r10,r11,DSISR_NOHPTE@h
lwz r4, _DAR(r1)
-   lwz r5, _DSISR(r1)
-   andis.  r10,r5,DSISR_NOHPTE@h
beq+.Ldtlbie
tlbie   r4
 .Ldtlbie:
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 40ee63af84f2..c0db295734f5 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -653,8 +653,7 @@ alignment_exception_tramp:
 
 handle_page_fault_tramp_1:
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
-   lwz r5, _DSISR(r1)
-   andis.  r0, r5, DSISR_DABRMATCH@h
+   andis.  r0, r11, DSISR_DABRMATCH@h
bne-1f
EXC_XFER_LITE(0x300, handle_page_fault)
 1: EXC_XFER_STD(0x300, do_break)
-- 
2.25.0



[PATCH v1 14/15] powerpc/32: Use r1 directly instead of r11 in exception prologs on 6xx/8xx

2020-12-22 Thread Christophe Leroy
r1 and r11 are both pointing to the stack. Use r1 and free up r11.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/entry_32.S   |  4 
 arch/powerpc/kernel/head_6xx_8xx.h   | 28 ++--
 arch/powerpc/kernel/head_8xx.S   | 10 +-
 arch/powerpc/kernel/head_book3s_32.S |  6 +++---
 4 files changed, 26 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 2c38106c2c93..2ec3aa712282 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -318,7 +318,11 @@ stack_ovf:
ori r12,r12,_end@l
cmplw   r1,r12
ble 5b  /* r1 <= &_end is OK */
+#ifdef CONFIG_HAVE_ARCH_VMAP_STACK
+   SAVE_NVGPRS(r1)
+#else
SAVE_NVGPRS(r11)
+#endif
addir3,r1,STACK_FRAME_OVERHEAD
lis r1,init_thread_union@ha
addir1,r1,init_thread_union@l
diff --git a/arch/powerpc/kernel/head_6xx_8xx.h b/arch/powerpc/kernel/head_6xx_8xx.h
index bedbf37c2a0c..5a90bafee536 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -59,34 +59,33 @@
 1:
stw r11,GPR1(r1)
stw r11,0(r1)
-   mr  r11, r1
-   stw r10,_CCR(r11)   /* save registers */
-   stw r12,GPR12(r11)
-   stw r9,GPR9(r11)
+   stw r10,_CCR(r1)/* save registers */
+   stw r12,GPR12(r1)
+   stw r9,GPR9(r1)
mfspr   r10,SPRN_SPRG_SCRATCH0
mfspr   r12,SPRN_SPRG_SCRATCH1
-   stw r10,GPR10(r11)
-   stw r12,GPR11(r11)
+   stw r10,GPR10(r1)
+   stw r12,GPR11(r1)
mflrr10
-   stw r10,_LINK(r11)
+   stw r10,_LINK(r1)
mfspr   r12, SPRN_SPRG_THREAD
tovirt(r12, r12)
.if \handle_dar_dsisr
lwz r10, DAR(r12)
-   stw r10, _DAR(r11)
+   stw r10, _DAR(r1)
lwz r10, DSISR(r12)
-   stw r10, _DSISR(r11)
+   stw r10, _DSISR(r1)
.endif
lwz r9, SRR1(r12)
lwz r12, SRR0(r12)
li  r10, MSR_KERNEL /* can take exceptions */
mtmsr   r10 /* (except for mach check in rtas) */
-   stw r0,GPR0(r11)
+   stw r0,GPR0(r1)
lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
addir10,r10,STACK_FRAME_REGS_MARKER@l
-   stw r10,8(r11)
-   SAVE_4GPRS(3, r11)
-   SAVE_2GPRS(7, r11)
+   stw r10,8(r1)
+   SAVE_4GPRS(3, r1)
+   SAVE_2GPRS(7, r1)
 .endm
 
 .macro SYSCALL_ENTRY trapno
@@ -196,7 +195,8 @@
 
 #define EXC_XFER_TEMPLATE(hdlr, trap, tfer, ret)   \
li  r10,trap;   \
-   stw r10,_TRAP(r11); \
+   mr  r11, r1;\
+   stw r10,_TRAP(r1);  \
bl  tfer;   \
.long   hdlr;   \
.long   ret
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 6fa8e58c6e4c..7a078b26d24c 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -316,8 +316,8 @@ InstructionTLBError:
tlbie   r12
/* 0x400 is InstructionAccess exception, needed by bad_page_fault() */
 .Litlbie:
-   stw r12, _DAR(r11)
-   stw r5, _DSISR(r11)
+   stw r12, _DAR(r1)
+   stw r5, _DSISR(r1)
EXC_XFER_LITE(0x400, handle_page_fault)
 
 /* This is the data TLB error on the MPC8xx.  This could be due to
@@ -335,8 +335,8 @@ DARFixed:/* Return from dcbx instruction bug workaround */
mtspr   SPRN_DAR, r11   /* Tag DAR, to be used in DTLB Error */
EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
-   lwz r4, _DAR(r11)
-   lwz r5, _DSISR(r11)
+   lwz r4, _DAR(r1)
+   lwz r5, _DSISR(r1)
andis.  r10,r5,DSISR_NOHPTE@h
beq+.Ldtlbie
tlbie   r4
@@ -358,7 +358,7 @@ do_databreakpoint:
EXCEPTION_PROLOG_2 handle_dar_dsisr=1
addir3,r1,STACK_FRAME_OVERHEAD
mfspr   r4,SPRN_BAR
-   stw r4,_DAR(r11)
+   stw r4,_DAR(r1)
EXC_XFER_STD(0x1c00, do_break)
 
. = 0x1c00
diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S
index 19a1ae0697fc..40ee63af84f2 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -333,8 +333,8 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
EXCEPTION_PROLOG_1
EXCEPTION_PROLOG_2
andis.  r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
-   stw r12, _DAR(r11)
-   stw r5, _DSISR(r11)
+   stw r12, _DAR(r1)
+   stw r5, _DSISR(r1)

[PATCH v1 13/15] powerpc/32: Enable instruction translation at the same time as data translation

2020-12-22 Thread Christophe Leroy
On 8xx, kernel text is pinned.
On book3s/32, kernel text is mapped by BATs.

Enable instruction translation at the same time as data translation; it
makes things simpler.

In syscall handler, MSR_RI can also be set at the same time because
srr0/srr1 are already saved and r1 is set properly.

Also update comment in power_save_ppc32_restore().

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/entry_32.S | 15 -
 arch/powerpc/kernel/head_6xx_8xx.h | 35 +++---
 arch/powerpc/kernel/idle_6xx.S |  4 +---
 3 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 9ef75efaff47..2c38106c2c93 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -213,12 +213,8 @@ transfer_to_handler_cont:
 3:
mflrr9
tovirt_novmstack r2, r2 /* set r2 to current */
-   tovirt_vmstack r9, r9
lwz r11,0(r9)   /* virtual address of handler */
lwz r9,4(r9)/* where to go when done */
-#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
-   mtspr   SPRN_NRI, r0
-#endif
 #ifdef CONFIG_TRACE_IRQFLAGS
/*
 * When tracing IRQ state (lockdep) we enable the MMU before we call
@@ -235,6 +231,11 @@ transfer_to_handler_cont:
 
/* MSR isn't changing, just transition directly */
 #endif
+#ifdef CONFIG_HAVE_ARCH_VMAP_STACK
+   mtctr   r11
+   mtlrr9
+   bctr/* jump to handler */
+#else
mtspr   SPRN_SRR0,r11
mtspr   SPRN_SRR1,r10
mtlrr9
@@ -242,6 +243,7 @@ transfer_to_handler_cont:
 #ifdef CONFIG_40x
b . /* Prevent prefetch past rfi */
 #endif
+#endif
 
 #if defined (CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
 4: rlwinm  r12,r12,0,~_TLF_NAPPING
@@ -261,7 +263,9 @@ _ASM_NOKPROBE_SYMBOL(transfer_to_handler)
 _ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)
 
 #ifdef CONFIG_TRACE_IRQFLAGS
-1: /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
+1:
+#ifndef CONFIG_HAVE_ARCH_VMAP_STACK
+   /* MSR is changing, re-enable MMU so we can notify lockdep. We need to
 * keep interrupts disabled at this point otherwise we might risk
 * taking an interrupt before we tell lockdep they are enabled.
 */
@@ -276,6 +280,7 @@ _ASM_NOKPROBE_SYMBOL(transfer_to_handler_cont)
 #endif
 
 reenable_mmu:
+#endif
/*
 * We save a bunch of GPRs,
 * r3 can be different from GPR3(r1) at this point, r9 and r11
diff --git a/arch/powerpc/kernel/head_6xx_8xx.h b/arch/powerpc/kernel/head_6xx_8xx.h
index 11b608b6f4b7..bedbf37c2a0c 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -49,10 +49,14 @@
 .endm
 
 .macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
-   li  r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
-   mtmsr   r11
-   isync
+   li  r11, MSR_KERNEL & ~MSR_RI /* re-enable MMU */
+   mtspr   SPRN_SRR1, r11
+   lis r11, 1f@h
+   ori r11, r11, 1f@l
+   mtspr   SPRN_SRR0, r11
mfspr   r11, SPRN_SPRG_SCRATCH2
+   rfi
+1:
stw r11,GPR1(r1)
stw r11,0(r1)
mr  r11, r1
@@ -75,7 +79,7 @@
.endif
lwz r9, SRR1(r12)
lwz r12, SRR0(r12)
-   li  r10, MSR_KERNEL & ~MSR_IR /* can take exceptions */
+   li  r10, MSR_KERNEL /* can take exceptions */
mtmsr   r10 /* (except for mach check in rtas) */
stw r0,GPR0(r11)
lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
@@ -95,9 +99,13 @@
lwz r1,TASK_STACK-THREAD(r12)
beq-99f
addir1, r1, THREAD_SIZE - INT_FRAME_SIZE
-   li  r10, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
-   mtmsr   r10
-   isync
+   li  r10, MSR_KERNEL /* can take exceptions */
+   mtspr   SPRN_SRR1, r10
+   lis r10, 1f@h
+   ori r10, r10, 1f@l
+   mtspr   SPRN_SRR0, r10
+   rfi
+1:
tovirt(r12, r12)
stw r11,GPR1(r1)
stw r11,0(r1)
@@ -108,8 +116,6 @@
mfcrr10
rlwinm  r10,r10,0,4,2   /* Clear SO bit in CR */
stw r10,_CCR(r1)/* save registers */
-   LOAD_REG_IMMEDIATE(r10, MSR_KERNEL & ~MSR_IR) /* can take exceptions */
-   mtmsr   r10 /* (except for mach check in rtas) */
lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
stw r2,GPR2(r1)
addir10,r10,STACK_FRAME_REGS_MARKER@l
@@ -126,8 +132,6 @@
ACCOUNT_CPU_USER_ENTRY(r2, r11, r12)
 
 3:
-   lis r11, transfer_to_syscall@h
-   ori r11, r11, transfer_to_syscall@l
 #ifdef CONFIG_TRACE_IRQFLAGS
/*
 * If MSR is changing we need to keep interrupts disabled at 

[PATCH v1 12/15] powerpc/32: Remove msr argument in EXC_XFER_TEMPLATE() on 6xx/8xx

2020-12-22 Thread Christophe Leroy
Only MSR_KERNEL is used as the msr value in EXC_XFER_TEMPLATE(), so there
is no need to make it an argument.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/head_6xx_8xx.h | 10 --
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/head_6xx_8xx.h b/arch/powerpc/kernel/head_6xx_8xx.h
index 2536f0a660af..11b608b6f4b7 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -194,21 +194,19 @@
addir3,r1,STACK_FRAME_OVERHEAD; \
xfer(n, hdlr)
 
-#define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret)  \
+#define EXC_XFER_TEMPLATE(hdlr, trap, tfer, ret)   \
li  r10,trap;   \
stw r10,_TRAP(r11); \
-   LOAD_REG_IMMEDIATE(r10, msr);   \
+   LOAD_REG_IMMEDIATE(r10, MSR_KERNEL);\
bl  tfer;   \
.long   hdlr;   \
.long   ret
 
 #define EXC_XFER_STD(n, hdlr)  \
-   EXC_XFER_TEMPLATE(hdlr, n, MSR_KERNEL, transfer_to_handler_full, \
- ret_from_except_full)
+   EXC_XFER_TEMPLATE(hdlr, n, transfer_to_handler_full, ret_from_except_full)
 
 #define EXC_XFER_LITE(n, hdlr) \
-   EXC_XFER_TEMPLATE(hdlr, n+1, MSR_KERNEL, transfer_to_handler, \
- ret_from_except)
+   EXC_XFER_TEMPLATE(hdlr, n + 1, transfer_to_handler, ret_from_except)
 
 .macro vmap_stack_overflow_exception
 #ifdef CONFIG_SMP
-- 
2.25.0



[PATCH v1 10/15] powerpc/32: Make VMAP stack code depend on HAVE_ARCH_VMAP_STACK

2020-12-22 Thread Christophe Leroy
If the code can use a stack in a vm area, it can also use a
stack in linear space.

Simplify the code by removing the old non-VMAP stack code on 6xx and 8xx.

In common code, depend on HAVE_ARCH_VMAP_STACK instead of
depending on VMAP_STACK.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/processor.h |  2 +-
 arch/powerpc/kernel/asm-offsets.c|  2 +-
 arch/powerpc/kernel/entry_32.S   |  5 +-
 arch/powerpc/kernel/fpu.S|  2 +-
 arch/powerpc/kernel/head_6xx_8xx.h   | 82 +---
 arch/powerpc/kernel/head_8xx.S   | 17 ++
 arch/powerpc/kernel/head_book3s_32.S | 38 +
 arch/powerpc/kernel/idle_6xx.S   |  8 ---
 arch/powerpc/kernel/vector.S |  2 +-
 arch/powerpc/mm/book3s32/hash_low.S  | 14 -
 10 files changed, 11 insertions(+), 161 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 8acc3590c971..16442a770050 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -152,7 +152,7 @@ struct thread_struct {
 #if defined(CONFIG_PPC_BOOK3S_32) && defined(CONFIG_PPC_KUAP)
unsigned long   kuap;   /* opened segments for user access */
 #endif
-#ifdef CONFIG_VMAP_STACK
+#ifdef CONFIG_HAVE_ARCH_VMAP_STACK
unsigned long   srr0;
unsigned long   srr1;
unsigned long   dar;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index b12d7c049bfe..e2b5d25d16f4 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -132,7 +132,7 @@ int main(void)
OFFSET(KSP_VSID, thread_struct, ksp_vsid);
 #else /* CONFIG_PPC64 */
OFFSET(PGDIR, thread_struct, pgdir);
-#ifdef CONFIG_VMAP_STACK
+#ifdef CONFIG_HAVE_ARCH_VMAP_STACK
OFFSET(SRR0, thread_struct, srr0);
OFFSET(SRR1, thread_struct, srr1);
OFFSET(DAR, thread_struct, dar);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index c1687f3cd0ca..9ef75efaff47 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -321,9 +321,6 @@ stack_ovf:
lis r9,StackOverflow@ha
addir9,r9,StackOverflow@l
LOAD_REG_IMMEDIATE(r10,MSR_KERNEL)
-#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
-   mtspr   SPRN_NRI, r0
-#endif
mtspr   SPRN_SRR0,r9
mtspr   SPRN_SRR1,r10
rfi
@@ -1353,7 +1350,7 @@ _GLOBAL(enter_rtas)
mtspr   SPRN_SRR1,r9
rfi
 1: tophys_novmstack r9, r1
-#ifdef CONFIG_VMAP_STACK
+#ifdef CONFIG_HAVE_ARCH_VMAP_STACK
li  r0, MSR_KERNEL & ~MSR_IR/* can take DTLB miss */
mtmsr   r0
isync
diff --git a/arch/powerpc/kernel/fpu.S b/arch/powerpc/kernel/fpu.S
index 3ff9a8fafa46..5be78db32257 100644
--- a/arch/powerpc/kernel/fpu.S
+++ b/arch/powerpc/kernel/fpu.S
@@ -92,7 +92,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
/* enable use of FP after return */
 #ifdef CONFIG_PPC32
mfspr   r5,SPRN_SPRG_THREAD /* current task's THREAD (phys) */
-#ifdef CONFIG_VMAP_STACK
+#ifdef CONFIG_HAVE_ARCH_VMAP_STACK
tovirt(r5, r5)
 #endif
lwz r4,THREAD_FPEXC_MODE(r5)
diff --git a/arch/powerpc/kernel/head_6xx_8xx.h b/arch/powerpc/kernel/head_6xx_8xx.h
index b70d50efc961..540092fb90a9 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -19,7 +19,6 @@
 .macro EXCEPTION_PROLOG_0 handle_dar_dsisr=0
mtspr   SPRN_SPRG_SCRATCH0,r10
mtspr   SPRN_SPRG_SCRATCH1,r11
-#ifdef CONFIG_VMAP_STACK
mfspr   r10, SPRN_SPRG_THREAD
.if \handle_dar_dsisr
mfspr   r11, SPRN_DAR
@@ -29,17 +28,13 @@
.endif
mfspr   r11, SPRN_SRR0
stw r11, SRR0(r10)
-#endif
mfspr   r11, SPRN_SRR1  /* check whether user or kernel */
-#ifdef CONFIG_VMAP_STACK
stw r11, SRR1(r10)
-#endif
mfcrr10
andi.   r11, r11, MSR_PR
 .endm
 
 .macro EXCEPTION_PROLOG_1
-#ifdef CONFIG_VMAP_STACK
mtspr   SPRN_SPRG_SCRATCH2,r1
subir1, r1, INT_FRAME_SIZE  /* use r1 if kernel */
beq 1f
@@ -47,20 +42,13 @@
lwz r1,TASK_STACK-THREAD(r1)
addir1, r1, THREAD_SIZE - INT_FRAME_SIZE
 1:
+#ifdef CONFIG_VMAP_STACK
mtcrf   0x3f, r1
bt  32 - THREAD_ALIGN_SHIFT, stack_overflow
-#else
-   subir11, r1, INT_FRAME_SIZE /* use r1 if kernel */
-   beq 1f
-   mfspr   r11,SPRN_SPRG_THREAD
-   lwz r11,TASK_STACK-THREAD(r11)
-   addir11, r11, THREAD_SIZE - INT_FRAME_SIZE
-1: tophys(r11, r11)
 #endif
 .endm
 
 .macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
-#ifdef CONFIG_VMAP_STACK
li  r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
mtmsr   r11
isync
@@ -68,11 +56,6 @@
stw r11,GPR1(r1)
stw r11,0(r1)
mr  r11, r1
-#else
-   stw 

[PATCH v1 11/15] powerpc/32: Use r1 directly instead of r11 in syscall prolog

2020-12-22 Thread Christophe Leroy
In the syscall prolog, we don't need to keep the stack pointer in r11 as
we do in the exception prolog, so r1 can be used directly, freeing r11.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/head_6xx_8xx.h | 21 ++---
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/kernel/head_6xx_8xx.h b/arch/powerpc/kernel/head_6xx_8xx.h
index 540092fb90a9..2536f0a660af 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -101,26 +101,25 @@
tovirt(r12, r12)
stw r11,GPR1(r1)
stw r11,0(r1)
-   mr  r11, r1
mflrr10
-   stw r10, _LINK(r11)
+   stw r10, _LINK(r1)
mfctr   r10
-   stw r10,_NIP(r11)
+   stw r10,_NIP(r1)
mfcrr10
rlwinm  r10,r10,0,4,2   /* Clear SO bit in CR */
-   stw r10,_CCR(r11)   /* save registers */
+   stw r10,_CCR(r1)/* save registers */
LOAD_REG_IMMEDIATE(r10, MSR_KERNEL & ~MSR_IR) /* can take exceptions */
mtmsr   r10 /* (except for mach check in rtas) */
lis r10,STACK_FRAME_REGS_MARKER@ha /* exception frame marker */
-   stw r2,GPR2(r11)
+   stw r2,GPR2(r1)
addir10,r10,STACK_FRAME_REGS_MARKER@l
-   stw r9,_MSR(r11)
+   stw r9,_MSR(r1)
li  r2, \trapno + 1
-   stw r10,8(r11)
-   stw r2,_TRAP(r11)
-   SAVE_GPR(0, r11)
-   SAVE_4GPRS(3, r11)
-   SAVE_2GPRS(7, r11)
+   stw r10,8(r1)
+   stw r2,_TRAP(r1)
+   SAVE_GPR(0, r1)
+   SAVE_4GPRS(3, r1)
+   SAVE_2GPRS(7, r1)
addir11,r1,STACK_FRAME_OVERHEAD
addir2,r12,-THREAD
stw r11,PT_REGS(r12)
-- 
2.25.0



[PATCH v1 05/15] powerpc: Remove address argument from bad_page_fault()

2020-12-22 Thread Christophe Leroy
The address argument is not used by bad_page_fault().

Remove it.

Suggested-by: Nicholas Piggin 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/bug.h | 4 ++--
 arch/powerpc/kernel/entry_32.S | 4 +---
 arch/powerpc/kernel/exceptions-64e.S   | 3 +--
 arch/powerpc/kernel/exceptions-64s.S   | 8 +++-
 arch/powerpc/kernel/traps.c| 2 +-
 arch/powerpc/mm/book3s64/hash_utils.c  | 2 +-
 arch/powerpc/mm/book3s64/slb.c | 2 +-
 arch/powerpc/mm/fault.c| 6 +++---
 arch/powerpc/platforms/8xx/machine_check.c | 2 +-
 9 files changed, 14 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index 464f8ca8a5c9..af8c164254d0 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -112,8 +112,8 @@
 
 struct pt_regs;
 extern int do_page_fault(struct pt_regs *, unsigned long, unsigned long);
-extern void bad_page_fault(struct pt_regs *, unsigned long, int);
-void __bad_page_fault(struct pt_regs *regs, unsigned long address, int sig);
+void bad_page_fault(struct pt_regs *regs, int sig);
+void __bad_page_fault(struct pt_regs *regs, int sig);
 extern void _exception(int, struct pt_regs *, int, unsigned long);
 extern void _exception_pkey(struct pt_regs *, unsigned long, int);
 extern void die(const char *, struct pt_regs *, long);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 238eacfda7b0..abd95aebe73a 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -671,15 +671,13 @@ ppc_swapcontext:
 handle_page_fault:
addir3,r1,STACK_FRAME_OVERHEAD
bl  do_page_fault
-   cmpwi   r3,0
+   mr. r4,r3
beq+ret_from_except
SAVE_NVGPRS(r1)
lwz r0,_TRAP(r1)
clrrwi  r0,r0,1
stw r0,_TRAP(r1)
-   mr  r5,r3
addir3,r1,STACK_FRAME_OVERHEAD
-   lwz r4,_DAR(r1)
bl  __bad_page_fault
b   ret_from_except_full
 
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 74d07dc0bb48..e6fa10fc5d67 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1020,9 +1020,8 @@ storage_fault_common:
bne-1f
b   ret_from_except_lite
 1: bl  save_nvgprs
-   mr  r5,r3
+   mr  r4,r3
addir3,r1,STACK_FRAME_OVERHEAD
-   ld  r4,_DAR(r1)
bl  __bad_page_fault
b   ret_from_except
 
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index e02ad6fefa46..cfbd1d690033 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -2137,8 +2137,7 @@ EXC_COMMON_BEGIN(h_data_storage_common)
GEN_COMMON h_data_storage
addir3,r1,STACK_FRAME_OVERHEAD
 BEGIN_MMU_FTR_SECTION
-   ld  r4,_DAR(r1)
-   li  r5,SIGSEGV
+   li  r4,SIGSEGV
bl  bad_page_fault
 MMU_FTR_SECTION_ELSE
bl  unknown_exception
@@ -3256,9 +3255,8 @@ handle_page_fault:
bl  do_page_fault
cmpdi   r3,0
beq+interrupt_return
-   mr  r5,r3
+   mr  r4,r3
addir3,r1,STACK_FRAME_OVERHEAD
-   ld  r4,_DAR(r1)
bl  __bad_page_fault
b   interrupt_return
 
@@ -3295,6 +3293,6 @@ handle_dabr_fault:
  * the access, or panic if there isn't a handler.
  */
 77:addir3,r1,STACK_FRAME_OVERHEAD
-   li  r5,SIGSEGV
+   li  r4,SIGSEGV
bl  bad_page_fault
b   interrupt_return
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 3ec7b443fe6b..f3f6af3141ee 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1612,7 +1612,7 @@ void alignment_exception(struct pt_regs *regs)
if (user_mode(regs))
_exception(sig, regs, code, regs->dar);
else
-   bad_page_fault(regs, regs->dar, sig);
+   bad_page_fault(regs, sig);
 
 bail:
exception_exit(prev_state);
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 73b06adb6eeb..a181eaba3349 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1859,7 +1859,7 @@ void low_hash_fault(struct pt_regs *regs, unsigned long address, int rc)
 #endif
_exception(SIGBUS, regs, BUS_ADRERR, address);
} else
-   bad_page_fault(regs, address, SIGBUS);
+   bad_page_fault(regs, SIGBUS);
 
exception_exit(prev_state);
 }
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index 584567970c11..8aa01c92e28b 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -871,7 +871,7 @@ void 

[PATCH v1 08/15] powerpc/32: Split head_32.h into head_40x.h and head_6xx_8xx.h

2020-12-22 Thread Christophe Leroy
The book3s/32 (aka 6xx) and 8xx heads will be reworked to re-enable the MMU
earlier.

Split the 40x head.h out so that we can keep 40x as-is until it
is phased out.

There is no plan to implement VMAP stack on 40x in the near future,
so remove everything related to it.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/entry_32.S|   6 +-
 arch/powerpc/kernel/head_40x.S|   2 +-
 arch/powerpc/kernel/{head_32.h => head_40x.h} | 185 +-
 .../kernel/{head_32.h => head_6xx_8xx.h}  |  39 +---
 arch/powerpc/kernel/head_8xx.S|   2 +-
 arch/powerpc/kernel/head_book3s_32.S  |   4 +-
 6 files changed, 18 insertions(+), 220 deletions(-)
 copy arch/powerpc/kernel/{head_32.h => head_40x.h} (52%)
 rename arch/powerpc/kernel/{head_32.h => head_6xx_8xx.h} (89%)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 05904334c0ff..c1687f3cd0ca 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -33,7 +33,11 @@
 #include 
 #include 
 
-#include "head_32.h"
+#ifdef CONFIG_40x
+#include "head_40x.h"
+#else
+#include "head_6xx_8xx.h"
+#endif
 
 /*
  * powerpc relies on return from interrupt/syscall being context synchronising
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 16dc0eecbdf9..050b5fdc0438 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -37,7 +37,7 @@
 #include 
 #include 
 
-#include "head_32.h"
+#include "head_40x.h"
 
 /* As with the other PowerPC ports, it is expected that when code
  * execution begins here, the following registers contain valid, yet
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_40x.h
similarity index 52%
copy from arch/powerpc/kernel/head_32.h
copy to arch/powerpc/kernel/head_40x.h
index a2f72c966baf..9e27c07f5f2b 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_40x.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef __HEAD_32_H__
-#define __HEAD_32_H__
+#ifndef __HEAD_40x_H__
+#define __HEAD_40x_H__
 
 #include /* for STACK_FRAME_REGS_MARKER */
 
@@ -10,69 +10,21 @@
  * We assume sprg3 has the physical address of the current
  * task's thread_struct.
  */
-.macro EXCEPTION_PROLOG handle_dar_dsisr=0
-   EXCEPTION_PROLOG_0  handle_dar_dsisr=\handle_dar_dsisr
-   EXCEPTION_PROLOG_1
-   EXCEPTION_PROLOG_2  handle_dar_dsisr=\handle_dar_dsisr
-.endm
-
-.macro EXCEPTION_PROLOG_0 handle_dar_dsisr=0
+.macro EXCEPTION_PROLOG
mtspr   SPRN_SPRG_SCRATCH0,r10
mtspr   SPRN_SPRG_SCRATCH1,r11
-#ifdef CONFIG_VMAP_STACK
-   mfspr   r10, SPRN_SPRG_THREAD
-   .if \handle_dar_dsisr
-   mfspr   r11, SPRN_DAR
-   stw r11, DAR(r10)
-   mfspr   r11, SPRN_DSISR
-   stw r11, DSISR(r10)
-   .endif
-   mfspr   r11, SPRN_SRR0
-   stw r11, SRR0(r10)
-#endif
mfspr   r11, SPRN_SRR1  /* check whether user or kernel */
-#ifdef CONFIG_VMAP_STACK
-   stw r11, SRR1(r10)
-#endif
mfcrr10
andi.   r11, r11, MSR_PR
-.endm
-
-.macro EXCEPTION_PROLOG_1 for_rtas=0
-#ifdef CONFIG_VMAP_STACK
-   mtspr   SPRN_SPRG_SCRATCH2,r1
-   subir1, r1, INT_FRAME_SIZE  /* use r1 if kernel */
-   beq 1f
-   mfspr   r1,SPRN_SPRG_THREAD
-   lwz r1,TASK_STACK-THREAD(r1)
-   addir1, r1, THREAD_SIZE - INT_FRAME_SIZE
-1:
-   mtcrf   0x7f, r1
-   bt  32 - THREAD_ALIGN_SHIFT, stack_overflow
-#else
subir11, r1, INT_FRAME_SIZE /* use r1 if kernel */
beq 1f
mfspr   r11,SPRN_SPRG_THREAD
lwz r11,TASK_STACK-THREAD(r11)
addir11, r11, THREAD_SIZE - INT_FRAME_SIZE
 1: tophys(r11, r11)
-#endif
-.endm
-
-.macro EXCEPTION_PROLOG_2 handle_dar_dsisr=0
-#ifdef CONFIG_VMAP_STACK
-   li  r11, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
-   mtmsr   r11
-   isync
-   mfspr   r11, SPRN_SPRG_SCRATCH2
-   stw r11,GPR1(r1)
-   stw r11,0(r1)
-   mr  r11, r1
-#else
stw r1,GPR1(r11)
stw r1,0(r11)
tovirt(r1, r11) /* set new kernel sp */
-#endif
stw r10,_CCR(r11)   /* save registers */
stw r12,GPR12(r11)
stw r9,GPR9(r11)
@@ -82,31 +34,9 @@
stw r12,GPR11(r11)
mflrr10
stw r10,_LINK(r11)
-#ifdef CONFIG_VMAP_STACK
-   mfspr   r12, SPRN_SPRG_THREAD
-   tovirt(r12, r12)
-   .if \handle_dar_dsisr
-   lwz r10, DAR(r12)
-   stw r10, _DAR(r11)
-   lwz r10, DSISR(r12)
-   stw r10, _DSISR(r11)
-   .endif
-   lwz r9, SRR1(r12)
-   lwz r12, SRR0(r12)
-#else
mfspr   r12,SPRN_SRR0
mfspr   r9,SPRN_SRR1
-#endif
-#ifdef CONFIG_40x
rlwinm  r9,r9,0,14,12   /* clear MSR_WE (necessary?) */
-#else
-#ifdef CONFIG_VMAP_STACK
-

[PATCH v1 09/15] powerpc/32: Preserve cr1 in exception prolog stack check

2020-12-22 Thread Christophe Leroy
THREAD_ALIGN_SHIFT = THREAD_SHIFT + 1 = PAGE_SHIFT + 1.
The maximum PAGE_SHIFT is 18 for 256k pages, so
THREAD_ALIGN_SHIFT is at most 19.

There is no need to clobber cr1; it can be preserved when moving r1
into the CR while checking for stack overflow.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/head_6xx_8xx.h   | 2 +-
 arch/powerpc/kernel/head_book3s_32.S | 6 --
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/arch/powerpc/kernel/head_6xx_8xx.h 
b/arch/powerpc/kernel/head_6xx_8xx.h
index 0e4ce6746443..b70d50efc961 100644
--- a/arch/powerpc/kernel/head_6xx_8xx.h
+++ b/arch/powerpc/kernel/head_6xx_8xx.h
@@ -47,7 +47,7 @@
lwz r1,TASK_STACK-THREAD(r1)
addir1, r1, THREAD_SIZE - INT_FRAME_SIZE
 1:
-   mtcrf   0x7f, r1
+   mtcrf   0x3f, r1
bt  32 - THREAD_ALIGN_SHIFT, stack_overflow
 #else
subir11, r1, INT_FRAME_SIZE /* use r1 if kernel */
diff --git a/arch/powerpc/kernel/head_book3s_32.S 
b/arch/powerpc/kernel/head_book3s_32.S
index ccc691d67b0c..89f38e9ec7cc 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -276,12 +276,6 @@ MachineCheck:
 7: EXCEPTION_PROLOG_2
addir3,r1,STACK_FRAME_OVERHEAD
 #ifdef CONFIG_PPC_CHRP
-#ifdef CONFIG_VMAP_STACK
-   mfspr   r4, SPRN_SPRG_THREAD
-   tovirt(r4, r4)
-   lwz r4, RTAS_SP(r4)
-   cmpwi   cr1, r4, 0
-#endif
beq cr1, machine_check_tramp
twi 31, 0, 0
 #else
-- 
2.25.0



[PATCH v1 07/15] powerpc: Remove address and errorcode arguments from do_page_fault()

2020-12-22 Thread Christophe Leroy
Let do_page_fault() retrieve the address and error code from regs.

This simplifies the code and shouldn't impede performance, as the
address and error code are likely still hot in the cache.

Additional cleanup could be done in the book3s/64 code once
the same changes have been applied to the hash_fault() handling.

Suggested-by: Nicholas Piggin 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/bug.h   |  2 +-
 arch/powerpc/kernel/entry_32.S   |  7 +--
 arch/powerpc/kernel/exceptions-64e.S |  2 --
 arch/powerpc/kernel/head_40x.S   |  6 +++---
 arch/powerpc/kernel/head_8xx.S   |  6 +++---
 arch/powerpc/kernel/head_book3s_32.S |  5 ++---
 arch/powerpc/kernel/head_booke.h |  4 +---
 arch/powerpc/mm/fault.c  | 10 +-
 8 files changed, 16 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index af8c164254d0..5a05f43b2984 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -111,7 +111,7 @@
 #ifndef __ASSEMBLY__
 
 struct pt_regs;
-extern int do_page_fault(struct pt_regs *, unsigned long, unsigned long);
+int do_page_fault(struct pt_regs *regs);
 void bad_page_fault(struct pt_regs *regs, int sig);
 void __bad_page_fault(struct pt_regs *regs, int sig);
 extern void _exception(int, struct pt_regs *, int, unsigned long);
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index abd95aebe73a..05904334c0ff 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -276,8 +276,7 @@ reenable_mmu:
 * We save a bunch of GPRs,
 * r3 can be different from GPR3(r1) at this point, r9 and r11
 * contains the old MSR and handler address respectively,
-* r4 & r5 can contain page fault arguments that need to be passed
-* along as well. r0, r6-r8, r12, CCR, CTR, XER etc... are left
+* r0, r4-r8, r12, CCR, CTR, XER etc... are left
 * clobbered as they aren't useful past this point.
 */
 
@@ -285,15 +284,11 @@ reenable_mmu:
stw r9,8(r1)
stw r11,12(r1)
stw r3,16(r1)
-   stw r4,20(r1)
-   stw r5,24(r1)
 
/* If we are disabling interrupts (normal case), simply log it with
 * lockdep
 */
 1: bl  trace_hardirqs_off
-   lwz r5,24(r1)
-   lwz r4,20(r1)
lwz r3,16(r1)
lwz r11,12(r1)
lwz r9,8(r1)
diff --git a/arch/powerpc/kernel/exceptions-64e.S 
b/arch/powerpc/kernel/exceptions-64e.S
index e6fa10fc5d67..52421042a020 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1011,8 +1011,6 @@ storage_fault_common:
std r14,_DAR(r1)
std r15,_DSISR(r1)
addir3,r1,STACK_FRAME_OVERHEAD
-   mr  r4,r14
-   mr  r5,r15
ld  r14,PACA_EXGEN+EX_R14(r13)
ld  r15,PACA_EXGEN+EX_R15(r13)
bl  do_page_fault
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index a1ae00689e0f..16dc0eecbdf9 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -191,9 +191,9 @@ _ENTRY(saved_ksp_limit)
  */
START_EXCEPTION(0x0400, InstructionAccess)
EXCEPTION_PROLOG
-   mr  r4,r12  /* Pass SRR0 as arg2 */
-   stw r4, _DEAR(r11)
-   li  r5,0/* Pass zero as arg3 */
+   stw r12, _DEAR(r11) /* SRR0 as DEAR */
+   li  r5,0
+   stw r5, _ESR(r11)   /* Zero ESR */
EXC_XFER_LITE(0x400, handle_page_fault)
 
 /* 0x0500 - External Interrupt Exception */
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 81f3c984f50c..7dce277c8a2a 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -312,14 +312,14 @@ DataStoreTLBMiss:
. = 0x1300
 InstructionTLBError:
EXCEPTION_PROLOG
-   mr  r4,r12
andis.  r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
andis.  r10,r9,SRR1_ISI_NOPT@h
beq+.Litlbie
-   tlbie   r4
+   tlbie   r12
/* 0x400 is InstructionAccess exception, needed by bad_page_fault() */
 .Litlbie:
-   stw r4, _DAR(r11)
+   stw r12, _DAR(r11)
+   stw r5, _DSISR(r11)
EXC_XFER_LITE(0x400, handle_page_fault)
 
 /* This is the data TLB error on the MPC8xx.  This could be due to
diff --git a/arch/powerpc/kernel/head_book3s_32.S 
b/arch/powerpc/kernel/head_book3s_32.S
index 15e6003fd3b8..0133a02d1d47 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -369,9 +369,9 @@ BEGIN_MMU_FTR_SECTION
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
 #endif
 #endif /* CONFIG_VMAP_STACK */
-1: mr  r4,r12
andis.  r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
-   stw r4, _DAR(r11)
+   stw 

[PATCH v1 06/15] powerpc: Remove address and errorcode arguments from do_break()

2020-12-22 Thread Christophe Leroy
Let do_break() retrieve address and errorcode from regs.

This simplifies the code and shouldn't impede performance as
address and errorcode are likely still hot in the cache.

Suggested-by: Nicholas Piggin 
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/include/asm/debug.h | 3 +--
 arch/powerpc/kernel/exceptions-64s.S | 2 --
 arch/powerpc/kernel/head_8xx.S   | 5 -
 arch/powerpc/kernel/process.c| 8 +++-
 4 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/debug.h b/arch/powerpc/include/asm/debug.h
index ec57daf87f40..0550eceab3ca 100644
--- a/arch/powerpc/include/asm/debug.h
+++ b/arch/powerpc/include/asm/debug.h
@@ -52,8 +52,7 @@ extern void do_send_trap(struct pt_regs *regs, unsigned long 
address,
 unsigned long error_code, int brkpt);
 #else
 
-extern void do_break(struct pt_regs *regs, unsigned long address,
-unsigned long error_code);
+void do_break(struct pt_regs *regs);
 #endif
 
 #endif /* _ASM_POWERPC_DEBUG_H */
diff --git a/arch/powerpc/kernel/exceptions-64s.S 
b/arch/powerpc/kernel/exceptions-64s.S
index cfbd1d690033..3ea067bcbb95 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -3262,8 +3262,6 @@ handle_page_fault:
 
 /* We have a data breakpoint exception - handle it */
 handle_dabr_fault:
-   ld  r4,_DAR(r1)
-   ld  r5,_DSISR(r1)
addir3,r1,STACK_FRAME_OVERHEAD
bl  do_break
/*
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 52702f3db6df..81f3c984f50c 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -364,11 +364,6 @@ do_databreakpoint:
addir3,r1,STACK_FRAME_OVERHEAD
mfspr   r4,SPRN_BAR
stw r4,_DAR(r11)
-#ifdef CONFIG_VMAP_STACK
-   lwz r5,_DSISR(r11)
-#else
-   mfspr   r5,SPRN_DSISR
-#endif
EXC_XFER_STD(0x1c00, do_break)
 
. = 0x1c00
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index a66f435dabbf..99c5e4fc5ff1 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -659,12 +659,10 @@ static void do_break_handler(struct pt_regs *regs)
}
 }
 
-void do_break (struct pt_regs *regs, unsigned long address,
-   unsigned long error_code)
+void do_break(struct pt_regs *regs)
 {
current->thread.trap_nr = TRAP_HWBKPT;
-   if (notify_die(DIE_DABR_MATCH, "dabr_match", regs, error_code,
-   11, SIGSEGV) == NOTIFY_STOP)
+   if (notify_die(DIE_DABR_MATCH, "dabr_match", regs, regs->dsisr, 11, 
SIGSEGV) == NOTIFY_STOP)
return;
 
if (debugger_break_match(regs))
@@ -681,7 +679,7 @@ void do_break (struct pt_regs *regs, unsigned long address,
do_break_handler(regs);
 
/* Deliver the signal to userspace */
-   force_sig_fault(SIGTRAP, TRAP_HWBKPT, (void __user *)address);
+   force_sig_fault(SIGTRAP, TRAP_HWBKPT, (void __user *)regs->dar);
 }
 #endif /* CONFIG_PPC_ADV_DEBUG_REGS */
 
-- 
2.25.0



[PATCH v1 01/15] powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on syscall too

2020-12-22 Thread Christophe Leroy
We need r1 to be properly set before activating MMU, otherwise any new
exception taken while saving registers into the stack in syscall
prologs will use the user stack, which is wrong and will even lock up
or crash when KUAP is selected.

Do that by switching the meaning of r11 and r1 until we have saved r1
to the stack: copy r1 into r11 and set up the new stack pointer in r1.
To avoid complicating and impacting all generic and specific prolog
code (and more), copy r1 back into r11 once r11 is saved onto
the stack.

We could get rid of copying r1 back and forth at the cost of rewriting
everything to use r1 instead of r11 all the way when CONFIG_VMAP_STACK
is set, but the effort is probably not worth it for now.

Fixes: da7bb43ab9da ("powerpc/32: Fix vmap stack - Properly set r1 before 
activating MMU")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/head_32.h | 25 -
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 541664d95702..a2f72c966baf 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -121,18 +121,28 @@
 #ifdef CONFIG_VMAP_STACK
mfspr   r11, SPRN_SRR0
mtctr   r11
-#endif
andi.   r11, r9, MSR_PR
-   lwz r11,TASK_STACK-THREAD(r12)
+   mr  r11, r1
+   lwz r1,TASK_STACK-THREAD(r12)
beq-99f
-   addir11, r11, THREAD_SIZE - INT_FRAME_SIZE
-#ifdef CONFIG_VMAP_STACK
+   addir1, r1, THREAD_SIZE - INT_FRAME_SIZE
li  r10, MSR_KERNEL & ~(MSR_IR | MSR_RI) /* can take DTLB miss */
mtmsr   r10
isync
+   tovirt(r12, r12)
+   stw r11,GPR1(r1)
+   stw r11,0(r1)
+   mr  r11, r1
+#else
+   andi.   r11, r9, MSR_PR
+   lwz r11,TASK_STACK-THREAD(r12)
+   beq-99f
+   addir11, r11, THREAD_SIZE - INT_FRAME_SIZE
+   tophys(r11, r11)
+   stw r1,GPR1(r11)
+   stw r1,0(r11)
+   tovirt(r1, r11) /* set new kernel sp */
 #endif
-   tovirt_vmstack r12, r12
-   tophys_novmstack r11, r11
mflrr10
stw r10, _LINK(r11)
 #ifdef CONFIG_VMAP_STACK
@@ -140,9 +150,6 @@
 #else
mfspr   r10,SPRN_SRR0
 #endif
-   stw r1,GPR1(r11)
-   stw r1,0(r11)
-   tovirt_novmstack r1, r11/* set new kernel sp */
stw r10,_NIP(r11)
mfcrr10
rlwinm  r10,r10,0,4,2   /* Clear SO bit in CR */
-- 
2.25.0



[PATCH v1 03/15] powerpc/32s: Only build hash code when CONFIG_PPC_BOOK3S_604 is selected

2020-12-22 Thread Christophe Leroy
It is now possible to build a book3s/32 kernel only for
CPUs without a hash table.

Opt out hash-related code when CONFIG_PPC_BOOK3S_604 is not selected.

Signed-off-by: Christophe Leroy 
---
v2: Rebased
---
 arch/powerpc/kernel/head_book3s_32.S | 12 
 arch/powerpc/mm/book3s32/Makefile|  4 +++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/head_book3s_32.S 
b/arch/powerpc/kernel/head_book3s_32.S
index fbc48a500846..f6355fcca86a 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -293,6 +293,7 @@ MachineCheck:
DO_KVM  0x300
 DataAccess:
 #ifdef CONFIG_VMAP_STACK
+#ifdef CONFIG_PPC_BOOK3S_604
 BEGIN_MMU_FTR_SECTION
mtspr   SPRN_SPRG_SCRATCH2,r10
mfspr   r10, SPRN_SPRG_THREAD
@@ -309,12 +310,14 @@ BEGIN_MMU_FTR_SECTION
 MMU_FTR_SECTION_ELSE
b   1f
 ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
+#endif
 1: EXCEPTION_PROLOG_0 handle_dar_dsisr=1
EXCEPTION_PROLOG_1
b   handle_page_fault_tramp_1
 #else  /* CONFIG_VMAP_STACK */
EXCEPTION_PROLOG handle_dar_dsisr=1
get_and_save_dar_dsisr_on_stack r4, r5, r11
+#ifdef CONFIG_PPC_BOOK3S_604
 BEGIN_MMU_FTR_SECTION
andis.  r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h
bne handle_page_fault_tramp_2   /* if not, try to put a PTE */
@@ -322,8 +325,11 @@ BEGIN_MMU_FTR_SECTION
bl  hash_page
b   handle_page_fault_tramp_1
 MMU_FTR_SECTION_ELSE
+#endif
b   handle_page_fault_tramp_2
+#ifdef CONFIG_PPC_BOOK3S_604
 ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_HPTE_TABLE)
+#endif
 #endif /* CONFIG_VMAP_STACK */
 
 /* Instruction access exception. */
@@ -339,12 +345,14 @@ InstructionAccess:
mfspr   r11, SPRN_SRR1  /* check whether user or kernel */
stw r11, SRR1(r10)
mfcrr10
+#ifdef CONFIG_PPC_BOOK3S_604
 BEGIN_MMU_FTR_SECTION
andis.  r11, r11, SRR1_ISI_NOPT@h   /* no pte found? */
bne hash_page_isi
 .Lhash_page_isi_cont:
mfspr   r11, SPRN_SRR1  /* check whether user or kernel */
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
+#endif
andi.   r11, r11, MSR_PR
 
EXCEPTION_PROLOG_1
@@ -355,9 +363,11 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
beq 1f  /* if so, try to put a PTE */
li  r3,0/* into the hash table */
mr  r4,r12  /* SRR0 is fault address */
+#ifdef CONFIG_PPC_BOOK3S_604
 BEGIN_MMU_FTR_SECTION
bl  hash_page
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
+#endif
 #endif /* CONFIG_VMAP_STACK */
 1: mr  r4,r12
andis.  r5,r9,DSISR_SRR1_MATCH_32S@h /* Filter relevant SRR1 bits */
@@ -690,6 +700,7 @@ handle_page_fault_tramp_2:
EXC_XFER_LITE(0x300, handle_page_fault)
 
 #ifdef CONFIG_VMAP_STACK
+#ifdef CONFIG_PPC_BOOK3S_604
 .macro save_regs_thread	thread
stw r0, THR0(\thread)
stw r3, THR3(\thread)
@@ -761,6 +772,7 @@ fast_hash_page_return:
mfspr   r11, SPRN_SPRG_SCRATCH1
mfspr   r10, SPRN_SPRG_SCRATCH0
rfi
+#endif /* CONFIG_PPC_BOOK3S_604 */
 
 stack_overflow:
vmap_stack_overflow_exception
diff --git a/arch/powerpc/mm/book3s32/Makefile 
b/arch/powerpc/mm/book3s32/Makefile
index 3f972db17761..446d9de88ce4 100644
--- a/arch/powerpc/mm/book3s32/Makefile
+++ b/arch/powerpc/mm/book3s32/Makefile
@@ -6,4 +6,6 @@ ifdef CONFIG_KASAN
 CFLAGS_mmu.o   += -DDISABLE_BRANCH_PROFILING
 endif
 
-obj-y += mmu.o hash_low.o mmu_context.o tlb.o nohash_low.o
+obj-y += mmu.o mmu_context.o
+obj-$(CONFIG_PPC_BOOK3S_603) += nohash_low.o
+obj-$(CONFIG_PPC_BOOK3S_604) += hash_low.o tlb.o
-- 
2.25.0



[PATCH v1 04/15] powerpc/32s: Do DABR match out of handle_page_fault()

2020-12-22 Thread Christophe Leroy
handle_page_fault() has some code dedicated to book3s/32 to
call do_break() when the DSI is a DABR match.

On other platforms, do_break() is handled separately.

Do the same for book3s/32, doing it earlier in the DSI processing.

This change also avoids doing the test on ISI.

Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/entry_32.S   | 15 ---
 arch/powerpc/kernel/head_book3s_32.S |  3 +++
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 1c9b0ccc2172..238eacfda7b0 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -670,10 +670,6 @@ ppc_swapcontext:
.globl  handle_page_fault
 handle_page_fault:
addir3,r1,STACK_FRAME_OVERHEAD
-#ifdef CONFIG_PPC_BOOK3S_32
-   andis.  r0,r5,DSISR_DABRMATCH@h
-   bne-handle_dabr_fault
-#endif
bl  do_page_fault
cmpwi   r3,0
beq+ret_from_except
@@ -687,17 +683,6 @@ handle_page_fault:
bl  __bad_page_fault
b   ret_from_except_full
 
-#ifdef CONFIG_PPC_BOOK3S_32
-   /* We have a data breakpoint exception - handle it */
-handle_dabr_fault:
-   SAVE_NVGPRS(r1)
-   lwz r0,_TRAP(r1)
-   clrrwi  r0,r0,1
-   stw r0,_TRAP(r1)
-   bl  do_break
-   b   ret_from_except_full
-#endif
-
 /*
  * This routine switches between two different tasks.  The process
  * state of one is saved on its kernel stack.  Then the state
diff --git a/arch/powerpc/kernel/head_book3s_32.S 
b/arch/powerpc/kernel/head_book3s_32.S
index f6355fcca86a..15e6003fd3b8 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -697,7 +697,10 @@ handle_page_fault_tramp_1:
lwz r5, _DSISR(r11)
/* fall through */
 handle_page_fault_tramp_2:
+   andis.  r0, r5, DSISR_DABRMATCH@h
+   bne-1f
EXC_XFER_LITE(0x300, handle_page_fault)
+1: EXC_XFER_STD(0x300, do_break)
 
 #ifdef CONFIG_VMAP_STACK
 #ifdef CONFIG_PPC_BOOK3S_604
-- 
2.25.0



[PATCH v1 00/15] powerpc/32: Reduce head complexity and re-activate MMU earlier

2020-12-22 Thread Christophe Leroy
This series aims at reducing exception/syscall prologs complexity.
It also brings earlier MMU re-activation.

At present, we have two paths in the prologs: one for
when we have CONFIG_VMAP_STACK and one for when we don't.

Among 40x, 6xx and 8xx, only 40x doesn't support VMAP stack.

When VMAP stack is supported, there is special prolog code to
allow accessing the stack with the MMU on.

That code, which accesses the VM stack with the MMU on, can also
access linear memory, so it can access a non-VM stack with the MMU on
as well.

CONFIG_VMAP_STACK has been on by default on 6xx and 8xx for some
kernel releases now, so it is known to work.

On the 8xx, null_syscall runs in 292 cycles with VMAP_STACK and in
296 cycles without VMAP stack.
On the 832x, null_syscall runs in 224 cycles with VMAP_STACK and in
213 cycles without VMAP stack.

By removing the old non-VMAP stack code and using the same prolog
regardless of whether VMAP stacks are active, we make the code a lot
simpler and open the way to further simplification.

Once this is done, we can easily go one step further and re-activate
instruction translation at the same time as data translation.

At the end, null_syscall runs in 286 cycles on the 8xx and in 216
cycles on the 832x

To do this, I split head_32.h into two files: one for 40x, which
doesn't have VMAP stack, and one for 6xx and 8xx, which have it.

Now that we have the MMU back on earlier on the 6xx and 8xx, once the 40x is
gone it will be possible to have more in common with book3e/32, which
has the MMU always on.

Christophe Leroy (15):
  powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on
syscall too
  powerpc/32s: Fix RTAS machine check with VMAP stack
  powerpc/32s: Only build hash code when CONFIG_PPC_BOOK3S_604 is
selected
  powerpc/32s: Do DABR match out of handle_page_fault()
  powerpc: Remove address argument from bad_page_fault()
  powerpc: Remove address and errorcode arguments from do_break()
  powerpc: Remove address and errorcode arguments from do_page_fault()
  powerpc/32: Split head_32.h into head_40x.h and head_6xx_8xx.h
  powerpc/32: Preserve cr1 in exception prolog stack check
  powerpc/32: Make VMAP stack code depend on HAVE_ARCH_VMAP_STACK
  powerpc/32: Use r1 directly instead of r11 in syscall prolog
  powerpc/32: Remove msr argument in EXC_XFER_TEMPLATE() on 6xx/8xx
  powerpc/32: Enable instruction translation at the same time as data
translation
  powerpc/32: Use r1 directly instead of r11 in exception prologs on
6xx/8xx
  powerpc/32: Use r11 to store DSISR in prolog

 arch/powerpc/include/asm/bug.h|   6 +-
 arch/powerpc/include/asm/debug.h  |   3 +-
 arch/powerpc/include/asm/processor.h  |   2 +-
 arch/powerpc/kernel/asm-offsets.c |   2 +-
 arch/powerpc/kernel/entry_32.S|  56 ++---
 arch/powerpc/kernel/exceptions-64e.S  |   5 +-
 arch/powerpc/kernel/exceptions-64s.S  |  10 +-
 arch/powerpc/kernel/fpu.S |   2 +-
 arch/powerpc/kernel/head_40x.S|   8 +-
 arch/powerpc/kernel/{head_32.h => head_40x.h} | 186 +--
 .../kernel/{head_32.h => head_6xx_8xx.h}  | 222 +-
 arch/powerpc/kernel/head_8xx.S|  33 +--
 arch/powerpc/kernel/head_book3s_32.S  |  64 ++---
 arch/powerpc/kernel/head_booke.h  |   4 +-
 arch/powerpc/kernel/idle_6xx.S|  12 +-
 arch/powerpc/kernel/process.c |   8 +-
 arch/powerpc/kernel/traps.c   |   2 +-
 arch/powerpc/kernel/vector.S  |   2 +-
 arch/powerpc/mm/book3s32/Makefile |   4 +-
 arch/powerpc/mm/book3s32/hash_low.S   |  14 --
 arch/powerpc/mm/book3s64/hash_utils.c |   2 +-
 arch/powerpc/mm/book3s64/slb.c|   2 +-
 arch/powerpc/mm/fault.c   |  16 +-
 arch/powerpc/platforms/8xx/machine_check.c|   2 +-
 24 files changed, 154 insertions(+), 513 deletions(-)
 copy arch/powerpc/kernel/{head_32.h => head_40x.h} (53%)
 rename arch/powerpc/kernel/{head_32.h => head_6xx_8xx.h} (50%)

-- 
2.25.0



[PATCH v1 02/15] powerpc/32s: Fix RTAS machine check with VMAP stack

2020-12-22 Thread Christophe Leroy
When we have VMAP stack, exception prolog 1 sets r1, not r11.

Fixes: da7bb43ab9da ("powerpc/32: Fix vmap stack - Properly set r1 before 
activating MMU")
Fixes: d2e006036082 ("powerpc/32: Use SPRN_SPRG_SCRATCH2 in exception prologs")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy 
---
 arch/powerpc/kernel/head_book3s_32.S | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/kernel/head_book3s_32.S 
b/arch/powerpc/kernel/head_book3s_32.S
index 349bf3f0c3af..fbc48a500846 100644
--- a/arch/powerpc/kernel/head_book3s_32.S
+++ b/arch/powerpc/kernel/head_book3s_32.S
@@ -260,9 +260,16 @@ __secondary_hold_acknowledge:
 MachineCheck:
EXCEPTION_PROLOG_0
 #ifdef CONFIG_PPC_CHRP
+#ifdef CONFIG_VMAP_STACK
+   mtspr   SPRN_SPRG_SCRATCH2,r1
+   mfspr   r1, SPRN_SPRG_THREAD
+   lwz r1, RTAS_SP(r1)
+   cmpwi   cr1, r1, 0
+#else
mfspr   r11, SPRN_SPRG_THREAD
lwz r11, RTAS_SP(r11)
cmpwi   cr1, r11, 0
+#endif
bne cr1, 7f
 #endif /* CONFIG_PPC_CHRP */
EXCEPTION_PROLOG_1 for_rtas=1
-- 
2.25.0



Re: GIT kernel with the PowerPC updates 5.11-1 doesn't boot on a FSL P5040 board and in a virtual e5500 QEMU machine

2020-12-22 Thread Michael Ellerman
Christian Zigotzky  writes:
>
...
> Download: http://www.xenosoft.de/MintPPC32-X5000.tar.gz (md5sum: 
> b31c1c1ca1fcf5d4cdf110c4bce11654) The password for both 'root' and 
> 'mintppc' is 'mintppc'.
...
>
> QEMU command without KVM on macOS Intel: qemu-system-ppc64 -M ppce500 
> -cpu e5500 -m 1024 -kernel uImage -drive 
> format=raw,file=MintPPC32-X5000.img,index=0,if=virtio -netdev 
> user,id=mynet0 -device virtio-net-pci,netdev=mynet0 -append "rw 
> root=/dev/vda" -device virtio-vga -usb -device usb-ehci,id=ehci -device 
> usb-tablet -device virtio-keyboard-pci -smp 4 -vnc :1

I was able to boot the above (on powerpc, but not using KVM), using my
fixes branch.

Please give that branch a test:
  https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/log/?h=fixes


cheers


Re: [PATCH] powerpc/boot: Fix build of dts/fsl

2020-12-22 Thread Michael Ellerman
On Tue, 15 Dec 2020 14:29:06 +1100, Michael Ellerman wrote:
> The lkp robot reported that some configs fail to build, for example
> mpc85xx_smp_defconfig, with:
> 
>   cc1: fatal error: opening output file 
> arch/powerpc/boot/dts/fsl/.mpc8540ads.dtb.dts.tmp: No such file or directory
> 
> This bisects to:
>   cc8a51ca6f05 ("kbuild: always create directories of targets")
> 
> [...]

Applied to powerpc/fixes.

[1/1] powerpc/boot: Fix build of dts/fsl
  https://git.kernel.org/powerpc/c/b36f835b636908e4122f2e17310b1dbc380a3b19

cheers


Re: [PATCH 1/3] powerpc/vdso: Block R_PPC_REL24 relocations

2020-12-22 Thread Michael Ellerman
On Fri, 18 Dec 2020 22:16:17 +1100, Michael Ellerman wrote:
> Add R_PPC_REL24 relocations to the list of relocations we do NOT
> support in the VDSO.
> 
> These are generated in some cases and we do not support relocating
> them at runtime, so if they appear then the VDSO will not work at
> runtime, therefore it's preferable to break the build if we see them.

Applied to powerpc/fixes.

[1/3] powerpc/vdso: Block R_PPC_REL24 relocations
  https://git.kernel.org/powerpc/c/42ed6d56ade21f367f27aa5915cc397510cfdef5
[2/3] powerpc/vdso: Don't pass 64-bit ABI cflags to 32-bit VDSO
  https://git.kernel.org/powerpc/c/107521e8039688f7a9548f17919dfde670b911c1
[3/3] powerpc/vdso: Fix DOTSYM for 32-bit LE VDSO
  https://git.kernel.org/powerpc/c/2eda7f11000646909a10298951c9defb2321b240

cheers


Re: [PATCH] powerpc/smp: Add __init to init_big_cores()

2020-12-22 Thread Michael Ellerman
On Mon, 21 Dec 2020 08:41:54 +0100, Cédric Le Goater wrote:
> It fixes this link warning:
> 
> WARNING: modpost: vmlinux.o(.text.unlikely+0x2d98): Section mismatch in 
> reference from the function init_big_cores.isra.0() to the function 
> .init.text:init_thread_group_cache_map()
> The function init_big_cores.isra.0() references
> the function __init init_thread_group_cache_map().
> This is often because init_big_cores.isra.0 lacks a __init
> annotation or the annotation of init_thread_group_cache_map is wrong.

Applied to powerpc/fixes.

[1/1] powerpc/smp: Add __init to init_big_cores()
  https://git.kernel.org/powerpc/c/9014eab6a38c60fd185bc92ed60f46cf99a462ab

cheers


Re: [PATCH] powerpc/time: Force inlining of get_tb()

2020-12-22 Thread Michael Ellerman
On Sun, 20 Dec 2020 18:18:26 + (UTC), Christophe Leroy wrote:
> Force inlining of get_tb() in order to avoid getting
> following function in vdso32, leading to suboptimal
> performance in clock_gettime()
> 
> 0688 <.get_tb>:
>  688: 7c 6d 42 a6 mftbu   r3
>  68c: 7c 8c 42 a6 mftbr4
>  690: 7d 2d 42 a6 mftbu   r9
>  694: 7c 03 48 40 cmplw   r3,r9
>  698: 40 e2 ff f0 bne+688 <.get_tb>
>  69c: 4e 80 00 20 blr

Applied to powerpc/fixes.

[1/1] powerpc/time: Force inlining of get_tb()
  https://git.kernel.org/powerpc/c/0faa22f09caadc11af2aa7570870ebd2ac5b8170

cheers


Re: [PATCH] powerpc/32s: Fix RTAS machine check with VMAP stack

2020-12-22 Thread Michael Ellerman
On Tue, 22 Dec 2020 07:11:18 + (UTC), Christophe Leroy wrote:
> When we have VMAP stack, exception prolog 1 sets r1, not r11.

Applied to powerpc/fixes.

[1/1] powerpc/32s: Fix RTAS machine check with VMAP stack
  https://git.kernel.org/powerpc/c/9c7422b92cb27369653c371ad9c44a502e5eea8f

cheers


Re: [PATCH] powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on syscall too

2020-12-22 Thread Michael Ellerman
On Mon, 21 Dec 2020 06:18:03 + (UTC), Christophe Leroy wrote:
> We need r1 to be properly set before activating MMU, otherwise any new
> exception taken while saving registers into the stack in syscall
> prologs will use the user stack, which is wrong and will even lockup
> or crash when KUAP is selected.
> 
> Do that by switching the meaning of r11 and r1 until we have saved r1
> to the stack: copy r1 into r11 and setup the new stack pointer in r1.
> To avoid complicating and impacting all generic and specific prolog
> code (and more), copy back r1 into r11 once r11 is save onto
> the stack.
> 
> [...]

Applied to powerpc/fixes.

[1/1] powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on 
syscall too
  https://git.kernel.org/powerpc/c/d5c243989fb0cb03c74d7340daca3b819f706ee7

cheers


Re: GIT kernel with the PowerPC updates 5.11-1 doesn't boot on a FSL P5040 board and in a virtual e5500 QEMU machine

2020-12-22 Thread Christian Zigotzky

Hello,

I compiled the latest Git kernel today and unfortunately the boot issue 
still exists.


I was able to reduce the patch that reverts the changes, so we now
know the problematic code.


vdso-v2.patch:

diff -rupN a/arch/powerpc/kernel/vdso32/vgettimeofday.c b/arch/powerpc/kernel/vdso32/vgettimeofday.c
--- a/arch/powerpc/kernel/vdso32/vgettimeofday.c	2020-12-19 00:01:16.829846652 +0100
+++ b/arch/powerpc/kernel/vdso32/vgettimeofday.c	2020-12-19 00:00:37.817369691 +0100
@@ -10,12 +10,6 @@ int __c_kernel_clock_gettime(clockid_t c
 	return __cvdso_clock_gettime32_data(vd, clock, ts);
 }
 
-int __c_kernel_clock_gettime64(clockid_t clock, struct __kernel_timespec *ts,
-			       const struct vdso_data *vd)
-{
-	return __cvdso_clock_gettime_data(vd, clock, ts);
-}
-
 int __c_kernel_gettimeofday(struct __kernel_old_timeval *tv, struct timezone *tz,
 			    const struct vdso_data *vd)
 {



With this patch, the uImage boots without any problems on my FSL P5040 
board and in a virtual e5500 QEMU machine. Please check the problematic 
code.


Thanks,
Christian



On 19 December 2020 at 01:33pm, Christian Zigotzky wrote:

On 19 December 2020 at 07:49am, Christophe Leroy wrote:



Le 18/12/2020 à 23:49, Christian Zigotzky a écrit :

On 18 December 2020 at 10:25pm, Denis Kirjanov wrote:
 > On Friday, December 18, 2020, Christian Zigotzky  wrote:
 >
 > Hello,
 >
 > I compiled the latest Git kernel with the new PowerPC updates
 > 5.11-1 [1] today. Unfortunately this kernel doesn't boot on my FSL
 > P5040 board [2] and in a virtual e5500 QEMU machine [3].
 >
 > I was able to revert the new PowerPC updates 5.11-1 [4] and after a
 > new compile, the kernel boots without any problems on my FSL P5040
 > board.
 >
 > Please check the new PowerPC updates 5.11-1.
 >
 > Can you bisect the bad commit?
 >
Hello Denis,

I have bisected [5] and d0e3fc69d00d1f50d22d6b6acfc555ccda80ad1e 
(powerpc/vdso: Provide __kernel_clock_gettime64() on vdso32) [6] is 
the first bad commit.


I was able to revert this bad commit and after a new compiling, the 
kernel boots without any problems.


That's puzzling.

Can you describe the symptoms exactly? What do you mean by "the
kernel doesn't boot"? Where and how does it stop booting?

It stops during the disk initialisation.


This commit only adds a new VDSO call, for getting y2038-compliant
time. At the time I implemented it there was no libc using it yet. Is
your libc using it?
I tested it with Ubuntu MATE 16.04.7 LTS (32-bit userland + 64-bit
kernel) and with Debian Sid (MintPPC and Fienix 32-bit userland +
64-bit kernel) on my FSL P5040 board and in a virtual e5500 QEMU
machine. How can I figure out whether the libc uses it?


Where can I find all the elements you are using to boot with QEMU?
Especially the file MintPPC32-X5000.img
Download: http://www.xenosoft.de/MintPPC32-X5000.tar.gz (md5sum: 
b31c1c1ca1fcf5d4cdf110c4bce11654) The password for both 'root' and 
'mintppc' is 'mintppc'.


QEMU command with KVM on my P5040 board: qemu-system-ppc64 -M ppce500 
-cpu e5500 -enable-kvm -m 1024 -kernel uImage -drive 
format=raw,file=MintPPC32-X5000.img,index=0,if=virtio -netdev 
user,id=mynet0 -device e1000,netdev=mynet0 -append "rw root=/dev/vda" 
-device virtio-vga -device virtio-mouse-pci -device 
virtio-keyboard-pci -device pci-ohci,id=newusb -device 
usb-audio,bus=newusb.0 -smp 4


QEMU command without KVM on macOS Intel: qemu-system-ppc64 -M ppce500 
-cpu e5500 -m 1024 -kernel uImage -drive 
format=raw,file=MintPPC32-X5000.img,index=0,if=virtio -netdev 
user,id=mynet0 -device virtio-net-pci,netdev=mynet0 -append "rw 
root=/dev/vda" -device virtio-vga -usb -device usb-ehci,id=ehci 
-device usb-tablet -device virtio-keyboard-pci -smp 4 -vnc :1


Can you also share your kernel config?

See attachment.


Thanks
Christophe

Thanks
Christian