On Sat, 2017-09-09 at 14:59 +0200, Joakim Tjernlund wrote:
> On Sat, 2017-09-09 at 14:45 +0200, Joakim Tjernlund wrote:
> > On Fri, 2017-09-08 at 22:27 +0000, Leo Li wrote:
> > > > -----Original Message-----
> > > > From: Joakim Tjernlund [mailto:joakim.tjernl...@infinera.com]
> > > > Sent: Friday, September 08, 2017 7:51 AM
> > > > To: linuxppc-dev@lists.ozlabs.org; Leo Li <leoyang...@nxp.com>; York Sun <york....@nxp.com>
> > > > Subject: Re: Machine Check in P2010(e500v2)
> > > >
> > > > On Fri, 2017-09-08 at 11:54 +0200, Joakim Tjernlund wrote:
> > > > > On Thu, 2017-09-07 at 18:54 +0000, Leo Li wrote:
> > > > > > > -----Original Message-----
> > > > > > > From: Joakim Tjernlund [mailto:joakim.tjernl...@infinera.com]
> > > > > > > Sent: Thursday, September 07, 2017 3:41 AM
> > > > > > > To: linuxppc-dev@lists.ozlabs.org; Leo Li <leoyang...@nxp.com>; York Sun <york....@nxp.com>
> > > > > > > Subject: Re: Machine Check in P2010(e500v2)
> > > > > > >
> > > > > > > On Thu, 2017-09-07 at 00:50 +0200, Joakim Tjernlund wrote:
> > > > > > > > On Wed, 2017-09-06 at 21:13 +0000, Leo Li wrote:
> > > > > > > > > > -----Original Message-----
> > > > > > > > > > From: Joakim Tjernlund [mailto:joakim.tjernl...@infinera.com]
> > > > > > > > > > Sent: Wednesday, September 06, 2017 3:54 PM
> > > > > > > > > > To: linuxppc-dev@lists.ozlabs.org; Leo Li <leoyang...@nxp.com>; York Sun <york....@nxp.com>
> > > > > > > > > > Subject: Re: Machine Check in P2010(e500v2)
> > > > > > > > > >
> > > > > > > > > > On Wed, 2017-09-06 at 20:28 +0000, Leo Li wrote:
> > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > From: Joakim Tjernlund [mailto:joakim.tjernl...@infinera.com]
> > > > > > > > > > > > Sent: Wednesday, September 06, 2017 3:17 PM
> > > > > > > > > > > > To: linuxppc-dev@lists.ozlabs.org; Leo Li <leoyang...@nxp.com>; York Sun <york....@nxp.com>
> > > > > > > > > > > > Subject: Re: Machine Check in P2010(e500v2)
> > > > > > > > > > > >
> > > > > > > > > > > > On Wed, 2017-09-06 at 19:31 +0000, Leo Li wrote:
> > > > > > > > > > > > > > -----Original Message-----
> > > > > > > > > > > > > > From: York Sun
> > > > > > > > > > > > > > Sent: Wednesday, September 06, 2017 10:38 AM
> > > > > > > > > > > > > > To: Joakim Tjernlund <joakim.tjernl...@infinera.com>; linuxppc-d...@lists.ozlabs.org; Leo Li <leoyang...@nxp.com>
> > > > > > > > > > > > > > Subject: Re: Machine Check in P2010(e500v2)
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Scott is no longer with Freescale/NXP. Adding Leo.
> > > > > > > > > > > > > > On 09/05/2017 01:40 AM, Joakim Tjernlund wrote:
> > > > > > > > > > > > > > > So after some debugging I found this bug:
> > > > > > > > > > > > > > > @@ -996,7 +998,7 @@ int fsl_pci_mcheck_exception(struct pt_regs *regs)
> > > > > > > > > > > > > > >          if (is_in_pci_mem_space(addr)) {
> > > > > > > > > > > > > > >                  if (user_mode(regs)) {
> > > > > > > > > > > > > > >                          pagefault_disable();
> > > > > > > > > > > > > > > -                        ret = get_user(regs->nip, &inst);
> > > > > > > > > > > > > > > +                        ret = get_user(inst,
> > > > > > > > > > > > > > > +                                       (__u32 __user *)regs->nip);
> > > > > > > > > > > > > > >                          pagefault_enable();
> > > > > > > > > > > > > > >                  } else {
> > > > > > > > > > > > > > >                          ret = probe_kernel_address(regs->nip, inst);
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > However, the kernel still locked up after fixing that.
> > > > > > > > > > > > > > > Now I wonder why this fixup is there in the first place?
> > > > > > > > > > > > > > > The routine will not really fixup the insn, just return 0xffffffff
> > > > > > > > > > > > > > > for the failing read and then advance the process NIP.
> > > > > > > > > > > > >
> > > > > > > > > > > > > You are right. The code here only gives 0xffffffff to the load
> > > > > > > > > > > > > instructions and continues with the next instruction when the load
> > > > > > > > > > > > > instruction is causing the machine check. This will prevent a system
> > > > > > > > > > > > > lockup when reading from a PCI/RapidIO device whose link is down.
> > > > > > > > > > > > >
> > > > > > > > > > > > > I don't know what the actual problem is in your case. Maybe it is a
> > > > > > > > > > > > > write instruction instead of a read? Or the code is in an infinite
> > > > > > > > > > > > > loop waiting for a valid read result? Are you able to do some further
> > > > > > > > > > > > > debugging with the NIP correctly printed?
> > > > > > > > > > > >
> > > > > > > > > > > > According to the MC it is a Read and the NIP also leads to a read in
> > > > > > > > > > > > the program.
> > > > > > > > > > > > ATM, I have disabled the fixup but I will enable that again.
> > > > > > > > > > > > Question, is it safe to add a small printk when this MC happens (after
> > > > > > > > > > > > fixing up)? I need to see that it has happened as the error is somewhat
> > > > > > > > > > > > random.
> > > > > > > > > > >
> > > > > > > > > > > I think it is safe to add printk as the current machine check handlers
> > > > > > > > > > > are also using printk.
> > > > > > > > > >
> > > > > > > > > > I hope so, but if the fixup fires there is no printk at all so I was a
> > > > > > > > > > bit unsure.
> > > > > > > > > > Don't like this fixup though, is there not a better way than faking a
> > > > > > > > > > read to user space (or kernel for that matter)?
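For reference, here is a minimal sketch of how I read the fixup path in
fsl_pci_mcheck_exception() (arch/powerpc/sysdev/fsl_pci.c) with the get_user()
arguments corrected. It is simplified from memory, so helper names such as
mcheck_handle_load() and the exact surrounding code may not match the real
file:

        /* Sketch of fsl_pci_mcheck_exception() with the get_user() fix.
         * Simplified; helper names and details may differ from the real file. */
        int fsl_pci_mcheck_exception(struct pt_regs *regs)
        {
                u32 inst = 0;
                int ret = 0;
                phys_addr_t addr = mfspr(SPRN_MCAR);    /* faulting address */

                if (is_in_pci_mem_space(addr)) {
                        if (user_mode(regs)) {
                                pagefault_disable();
                                /* fetch the faulting instruction from the user NIP */
                                ret = get_user(inst, (__u32 __user *)regs->nip);
                                pagefault_enable();
                        } else {
                                ret = probe_kernel_address(regs->nip, inst);
                        }

                        /* fake a load result of 0xffffffff in the destination
                         * GPR and resume at the next instruction */
                        if (!ret && mcheck_handle_load(regs, inst)) {
                                regs->nip += 4;
                                return 1;
                        }
                }
                return 0;
        }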
> > > > > > > > >
> > > > > > > > > I don't have a better idea. Without the fixup, the offending load
> > > > > > > > > instruction will never finish if there is anything wrong with the
> > > > > > > > > backing device and freeze the whole system. Do you have any suggestion
> > > > > > > > > in mind?
> > > > > > > >
> > > > > > > > But it never finishes the load, it just fakes a load of 0xffffffff, for
> > > > > > > > user space I would rather have it signal a SIGBUS but that does not seem
> > > > > > > > to work either, at least not for us but that could be a bug in general
> > > > > > > > MC code maybe.
> > > > > > > > This fixup might be valid for kernel only as it has never worked for
> > > > > > > > user space due to the bug I found.
> > > > > > > >
> > > > > > > > Where can I read about this errata?
> > > > > > >
> > > > > > > I have looked high and low and cannot find an erratum which maps to this
> > > > > > > fixup.
> > > > > > > The closest I get is A-005125 which seems to have another workaround, I
> > > > > > > cannot find any evidence that this workaround has been applied in Linux,
> > > > > > > can you?
> > > > > >
> > > > > > This is not A-005125. There was an erratum for this issue with older
> > > > > > silicons (e.g. erratum PCI-ex 3 for MPC8572).
> > > > > > "When its link goes down, the PCI Express controller clears all
> > > > > > outstanding transactions with an error indicator and sends a link down
> > > > > > exception to the interrupt controller if PEX_PME_MES_DISR[LDDD] = 0. If,
> > > > > > however, any transactions are sent to the controller after the link down
> > > > > > event, they are accepted by the controller and wait for the link to come
> > > > > > back up before starting any timeout counters (for example, completion
> > > > > > timeout). There is no mechanism to cancel the new transactions short of a
> > > > > > device HRESET."
> > > > > >
> > > > > > But it was removed in newer silicon like P2020/P2010, probably because a
> > > > > > Machine Check will be triggered in this situation to deal with the
> > > > > > stalled instruction and it is no longer considered a hardware issue.
> > > > >
> > > > > Maybe this fixup should be configurable then?
> > >
> > > No. My point is that the problem was no longer considered a hardware
> > > issue because the machine check mechanism is in place to handle it.
> > > If there is no handling of this special case, we would still experience a
> > > system hang if this situation really occurs.
> > >
> > > > > > The A-005125 is dealt with in u-boot.
> > > > > > https://lists.denx.de/pipermail/u-boot/2013-August/161185.html
> > > > >
> > > > > Yes, I found it eventually :)
> > > > >
> > > > > However, I cannot return to normal execution. I can follow the code to
> > > > > returning from machine_check_exception() and moving into the ASM handler
> > > > > for returning from a MC but then I am a bit lost. There does not seem to
> > > > > be any problem executing, it feels more like a SW bug dealing with
> > > > > machine checks.
> > > > > Don't know how to diagnose this further and could use some pointers.
> > >
> > > Is the execution returned to the user application? I doubt the system
> > > hang is caused by the machine check handling.
> > > You can try to comment out the machine check handling code and check if
> > > there is any improvement and see if this is related to the machine check
> > > handling.
> >
> > It tries to return to the user app but I cannot see what happens as the
> > system locks up when the MC returns.
> > How do you mean comment out MC handling? The simplest path is the PCI
> > fixup which will just do regs->nip += 4; and then return to user space.
> > That still does not work, as soon as MC handling returns, the system is
> > locked up.
> >
> > > Machine check is a serious situation and not always possible to be
> > > recovered from.
> >
> > This one should at least not kill the whole system. It is a simple bus
> > error in user space and the app should get SIGBUS and the system should
> > carry on.
> >
> > > I would focus more on debugging why the machine check is triggered by
> > > the user space application.
> > > Can you locate what code is causing this machine check from user space?
> > > Is it accessing some hardware related space which is not ready?
> > > Or is it accessing an address that it shouldn't have accessed?
> >
> > Of course, this is ongoing and getting closer to a solution. The MC
> > locking the machine completely does not make this any easier though.
> > These are 2 separate things, fixing the cause and not having a simple bus
> > error lock up the machine. I am focusing on fixing the lockup.
> >
> > I have been following the execution in the kernel and I always end up in
> > the ASM returning from the MC.
> > The other day we got a similar PCI MC (bus error) on a T1042 CPU
> > (e5500/e500mc) and there the system survived. The one thing I see
> > different there is that MSR RI is set when entering MC, why is that?
>
> Before you ask, I have tried to add MSR_RI to both msr and mcsrr1. Didn't
> help.
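To make the SIGBUS idea concrete, what I have in mind for the user-space case
is something along these lines instead of faking the read. This is only a
sketch, not tested code, and it assumes _exception() from
arch/powerpc/kernel/traps.c behaves the way I understand it:

        /* Hypothetical alternative for the user-mode case: deliver SIGBUS
         * instead of faking the read (sketch only, not the current fsl_pci.c
         * behaviour). _exception() queues the signal against current. */
        if (user_mode(regs)) {
                _exception(SIGBUS, regs, BUS_ADRERR, regs->nip);
                return 1;       /* report the machine check as handled */
        }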
I managed to provoke another Machine Check, much earlier this time:

[   15.047108] Machine check in kernel mode.
[   15.051120] Caused by (from MCSR=10008): Bus - Read Data Bus Error
[   15.057302] Oops: Machine check, sig: 7 [#1]
[   15.061567] P1010 RDB
[   15.063832] Modules linked in: linux_bcm_knet(PO) linux_user_bde(PO) linux_kernel_bde(PO)
[   15.072022] CPU: 0 PID: 472 Comm: emxp2_hw_bl Tainted: P O 4.1.43+ #52
[   15.079680] task: db1a7990 ti: df18c000 task.ti: df18c000
[   15.085075] NIP: 00000000 LR: 109e7648 CTR: 00000000
[   15.090036] REGS: df18df10 TRAP: 0204 Tainted: P O (4.1.43+)
[   15.097082] MSR: 0002d000 <CE,EE,PR,ME> CR: 280004e8 XER: 20000000
[   15.103448] DEAR: b6e44140 ESR: 00000000
GPR00: 10ac1160 bfa44010 b79734a0 136eb4a0 bfa44030 01010101 bfa44038 00000020
GPR08: 00000000 b6e13000 063e521e 0f9ed9c4 22000422 11db7334 00000000 00000000
GPR16: 10f8b054 10f895e5 10f8a8bf 00031150 136eb4d0 00030000 00031140 00031140
GPR24: 00000000 00000000 136f10a0 00000000 00000000 00000000 00031140 136eb4a0
[   15.135690] NIP [00000000]           (null)
[   15.139174] LR [109e7648] 0x109e7648
[   15.142743] Call Trace:
[   15.145184] ---[ end trace c00af6117685cb6e ]---

The fun part is that now the OS did NOT lock up!
Looking at the faulting process, emxp2_hw_bl, I see it is in Zombie state (cd /proc/472):

cat status
Name:	emxp2_hw_bl
State:	Z (zombie)
Tgid:	472
Ngid:	0
Pid:	472
PPid:	468
TracerPid:	0
Uid:	0	0	0	0
Gid:	0	0	0	0
FDSize:	0
Groups:
Threads:	8
SigQ:	0/3462
SigPnd:	0000000000000000
ShdPnd:	0000000000000000
SigBlk:	0000000000000000
SigIgn:	0000000000001000
SigCgt:	00000001c0000628
CapInh:	0000000000000000
CapPrm:	0000003fffffffff
CapEff:	0000003fffffffff
CapBnd:	0000003fffffffff
Cpus_allowed:	1
Cpus_allowed_list:	0
voluntary_ctxt_switches:	1126
nonvoluntary_ctxt_switches:	376

This is even after the parent process has called waitid(2) for emxp2_hw_bl.
If I now do a kill -s SIGBUS/TERM <pid of emxp2_hw_bl>, this signal is
propagated to the parent and emxp2_hw_bl goes away.

Stack:
cat stack
[<c0071c04>] do_futex+0x150/0x874
[<c0027670>] do_exit+0x4e8/0x7d0
[<c000a164>] die+0x178/0x1d8
[<c000a7c8>] machine_check_exception+0xcc/0x17c
[<c000dd94>] ret_from_mcheck_exc+0x0/0x144

So emxp2_hw_bl is stuck somewhere down in machine_check_exception().
This all looks like Linux bugs when asked to kill a user process from a
Machine Check.

I don't think I will get any further without some pointers now.

 Jocke
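P.S. For anyone following along, this is roughly the path I believe is taken
in 4.1 when the fixup does not recover. It is reconstructed from memory of
arch/powerpc/kernel/traps.c, so the details may well differ:

        /* Rough sketch of the 4.1 flow; simplified and possibly inexact. */
        void machine_check_exception(struct pt_regs *regs)
        {
                int recover = 0;

                if (ppc_md.machine_check_exception)
                        recover = ppc_md.machine_check_exception(regs);
                else if (cur_cpu_spec->machine_check)
                        recover = cur_cpu_spec->machine_check(regs);
                if (recover > 0)
                        return;         /* e.g. the fsl_pci fixup handled it */

                die("Machine check", regs, SIGBUS);     /* also for user mode */

                /* Must die if the interrupt is not recoverable */
                if (!(regs->msr & MSR_RI))
                        panic("Unrecoverable Machine check");
        }

        /* die() ends up in oops_end(), which finishes with do_exit(SIGBUS),
         * i.e. the task is torn down directly from exception context rather
         * than getting a normal SIGBUS delivered on return to user space,
         * which would match the do_exit()/do_futex() frames in the stack
         * above. */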