The system reset interrupt may itself use the HSRR registers (e.g., to call into OPAL). By convention, code that uses HSRRs is not required to clear MSR[RI], so a system reset can interrupt a recoverable (MSR[RI]=1) region whose HSRRs are still live, and the handler would then clobber them.
Rather than introduce that requirement, have the system reset interrupt save the HSRRs before they might be used.

Signed-off-by: Nicholas Piggin <npig...@gmail.com>
---
 arch/powerpc/kernel/traps.c | 25 ++++++++++++++++++++++++-
 1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index b429b2264a1f..d54459152a2b 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -438,14 +438,32 @@ void hv_nmi_check_nonrecoverable(struct pt_regs *regs)
 
 void system_reset_exception(struct pt_regs *regs)
 {
+	unsigned long hsrr0, hsrr1;
+	bool hsrrs_saved = false;
+	bool nested = in_nmi();
+
 	/*
 	 * Avoid crashes in case of nested NMI exceptions. Recoverability
 	 * is determined by RI and in_nmi
 	 */
-	bool nested = in_nmi();
 	if (!nested)
 		nmi_enter();
 
+	/*
+	 * System reset can interrupt a region where HSRRs are live and
+	 * MSR[RI]=1, and it may clobber HSRRs itself (e.g., to call OPAL),
+	 * so save them before doing anything.
+	 *
+	 * Machine checks should be okay to avoid this, as the real mode
+	 * handler is careful to avoid HSRRs, and the virt code is not
+	 * delivered as an NMI.
+	 */
+	if (cpu_has_feature(CPU_FTR_HVMODE)) {
+		hsrrs_saved = true;
+		hsrr0 = mfspr(SPRN_HSRR0);
+		hsrr1 = mfspr(SPRN_HSRR1);
+	}
+
 	hv_nmi_check_nonrecoverable(regs);
 
 	__this_cpu_inc(irq_stat.sreset_irqs);
@@ -495,6 +513,11 @@ void system_reset_exception(struct pt_regs *regs)
 	if (!(regs->msr & MSR_RI))
 		nmi_panic(regs, "Unrecoverable System Reset");
 
+	if (hsrrs_saved) {
+		mtspr(SPRN_HSRR0, hsrr0);
+		mtspr(SPRN_HSRR1, hsrr1);
+	}
+
 	if (!nested)
 		nmi_exit();
 
-- 
2.18.0
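
For readers who don't page through the powerpc SPR helpers often, here is a minimal, self-contained sketch of the flow the hunks above add to system_reset_exception(): save HSRR0/HSRR1 on entry when the hypervisor feature is present, do the NMI work (which may clobber them, e.g. via an OPAL call), then restore them before heading back to the interrupted MSR[RI]=1 context. The SPR numbers, accessors, feature test and fake SPR values below are stand-ins for the kernel's mfspr()/mtspr()/cpu_has_feature() interfaces, so treat this as an illustration of the pattern, not the kernel code.

/*
 * Illustrative only: the SPR numbers, accessors and feature test are
 * stand-ins backed by a fake SPR array, so the sketch runs as a normal
 * userspace program.
 */
#include <stdbool.h>
#include <stdio.h>

#define SPRN_HSRR0	314	/* stand-in SPR numbers */
#define SPRN_HSRR1	315

static unsigned long fake_sprs[1024];

static unsigned long mfspr(int spr)		{ return fake_sprs[spr]; }
static void mtspr(int spr, unsigned long v)	{ fake_sprs[spr] = v; }
static bool cpu_has_hvmode(void)		{ return true; }

/* NMI work that may clobber HSRRs, e.g. calling into OPAL. */
static void pretend_opal_call(void)
{
	mtspr(SPRN_HSRR0, 0xdead);
	mtspr(SPRN_HSRR1, 0xbeef);
}

/* Shape of the flow the patch adds to system_reset_exception(). */
static void sketch_system_reset(void)
{
	unsigned long hsrr0 = 0, hsrr1 = 0;
	bool hsrrs_saved = false;

	/* Save HSRRs first, before anything in the handler can clobber them. */
	if (cpu_has_hvmode()) {
		hsrrs_saved = true;
		hsrr0 = mfspr(SPRN_HSRR0);
		hsrr1 = mfspr(SPRN_HSRR1);
	}

	pretend_opal_call();

	/* Restore before returning to the interrupted MSR[RI]=1 context. */
	if (hsrrs_saved) {
		mtspr(SPRN_HSRR0, hsrr0);
		mtspr(SPRN_HSRR1, hsrr1);
	}
}

int main(void)
{
	/* The interrupted context's live HSRR values. */
	mtspr(SPRN_HSRR0, 0x1000);
	mtspr(SPRN_HSRR1, 0x8000);

	sketch_system_reset();

	/* Prints HSRR0=0x1000 HSRR1=0x8000: the live values survive the NMI. */
	printf("HSRR0=%#lx HSRR1=%#lx\n",
	       mfspr(SPRN_HSRR0), mfspr(SPRN_HSRR1));
	return 0;
}

The point the patch hinges on is ordering: the save happens before hv_nmi_check_nonrecoverable() and anything else that might call into OPAL, and the restore happens after the panic checks but before the exit path back to the interrupted code.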