Excerpts from Nicholas Piggin's message of October 19, 2020 11:00 am:
> Excerpts from Michal Suchánek's message of October 17, 2020 6:14 am:
>> On Mon, Sep 07, 2020 at 11:13:47PM +1000, Nicholas Piggin wrote:
>>> Excerpts from Michael Ellerman's message of August 31, 2020 8:50 pm:
>>> > Michal Suchánek <msucha...@suse.de> writes:
>>> >> On Mon, Aug 31, 2020 at 11:14:18AM +1000, Nicholas Piggin wrote:
>>> >>> Excerpts from Michal Suchánek's message of August 31, 2020 6:11 am:
>>> >>> > Hello,
>>> >>> >
>>> >>> > On POWER8, KVM hosts lock up since commit 10d91611f426 ("powerpc/64s:
>>> >>> > Reimplement book3s idle code in C").
>>> >>> >
>>> >>> > The symptom is the host locking up completely after some hours of KVM
>>> >>> > workload, with messages like
>>> >>> >
>>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 71
>>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 71
>>> >>> > 2020-08-30T10:51:31+00:00 obs-power8-01 kernel: KVM: couldn't grab cpu 47
>>> >>> >
>>> >>> > printed before the host locks up.
>>> >>> >
>>> >>> > The machines run sandboxed builds, which is a mixed workload resulting in
>>> >>> > IO/single-core/multiple-core load over time, and there are periods of no
>>> >>> > activity and no VMs running as well. The VMs are short-lived, so VM
>>> >>> > setup/teardown is somewhat exercised as well.
>>> >>> >
>>> >>> > POWER9 with the new guest entry fast path does not seem to be affected.
>>> >>> >
>>> >>> > I reverted the patch and the follow-up idle fixes on top of 5.2.14 and
>>> >>> > re-applied commit a3f3072db6ca ("powerpc/powernv/idle: Restore IAMR
>>> >>> > after idle"), which gives the same idle code as 5.1.16, and the kernel
>>> >>> > seems stable.
>>> >>> >
>>> >>> > Config is attached.
>>> >>> >
>>> >>> > I cannot easily revert this commit, especially if I want to use the same
>>> >>> > kernel on POWER8 and POWER9 - many of the POWER9 fixes are applicable
>>> >>> > only to the new idle code.
>>> >>> >
>>> >>> > Any idea what the problem could be?
>>> >>>
>>> >>> So hwthread_state is never getting back to HWTHREAD_IN_IDLE on
>>> >>> those threads. I wonder what they are doing. POWER8 doesn't have a good
>>> >>> NMI IPI, and I don't know if it supports pdbg dumping registers from the
>>> >>> BMC, unfortunately.
>>> >>
>>> >> It may be possible to set up fadump with a later kernel version that
>>> >> supports it on powernv and dump the whole kernel.
>>> >
>>> > Your firmware won't support it AFAIK.
>>> >
>>> > You could try kdump, but if we have CPUs stuck in KVM then there's a
>>> > good chance it won't work :/
>>>
>>> I haven't had any luck reproducing this yet. Testing with subcores in
>>> various different combinations, etc. I'll keep trying, though.
>>
>> Hello,
>>
>> I tried running some KVM guests to simulate the workload, and what I get
>> is guests failing to start with an RCU stall.
>> Tried both 5.3 and 5.9 kernels, and qemu 4.2.1 and 5.1.0.
>>
>> To start some guests I run
>>
>> for i in $(seq 0 9) ; do
>>     /opt/qemu/bin/qemu-system-ppc64 -m 2048 -accel kvm -smp 8 \
>>         -kernel /boot/vmlinux -initrd /boot/initrd -nodefaults -nographic \
>>         -serial mon:telnet::444$i,server,wait &
>> done
>>
>> To simulate some workload I run
>>
>> xz -zc9T0 < /dev/zero > /dev/null &
>> while true; do
>>     killall -STOP xz; sleep 1; killall -CONT xz; sleep 1;
>> done &
>>
>> on the host, and add a job that executes this to the ramdisk. However, most
>> guests never get to the point where the job is executed.
>>
>> Any idea what might be the problem?
>
> I would say try without pv queued spin locks (but if the same thing is
> happening with 5.3 then it must be something else, I guess).
>
> I'll try to test a similar setup on a POWER8 here.
Couldn't reproduce the guest hang; the guests seem to run fine even with
queued spinlocks. Might have a different .config.

I might have got a lockup in the host (although with different symptoms than
the original report). I'll look into that a bit further.

Thanks,
Nick
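P.S. If anyone wants to rule queued spinlocks out on the host side, here is a
minimal sketch. The PPC_QUEUED_SPINLOCKS option name (added during the 5.9
cycle, if I remember right) is an assumption, so double-check it against the
tree you are building:

  # sketch only: rebuild the host kernel with queued spinlocks disabled
  cd /path/to/linux                        # host kernel source tree
  scripts/config --file .config -d PPC_QUEUED_SPINLOCKS
  make olddefconfig                        # keep the rest of the config as-is
  make -j"$(nproc)"

A 5.3 host does not have the option at all, so if the same hang shows up there
it has to be something else, as noted above.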