[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
Theory given what we know so far:
- only fails if LVL1 is at 4.4
- not failing if LVL1 is at 3.13
- 4.4 might have more CPU features
- qemu 2.0 when using host-model is passing ALL features
- qemu 2.5 works, but we now know it filters some flags that 2.0 doesn't
=> one of these extra flags
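A quick way to check that theory is to diff the feature flags the lvl2 guest actually sees under each lvl1 qemu (a sketch; the output file names are just placeholders):

  # inside the lvl2 guest, once while lvl1 runs qemu 2.0 and once while it runs 2.5
  grep '^flags' /proc/cpuinfo | head -1 | tr ' ' '\n' | sort > /tmp/flags-q20.txt
  # ...the second run writes /tmp/flags-q25.txt, then compare:
  diff /tmp/flags-q20.txt /tmp/flags-q25.txt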

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
That finds us the fix as:
  commit 120eee7d1fdb2eba15766cfff7b9bcdc902690b4
  Author: Eduardo Habkost
  Date: Tue Jun 17 17:31:53 2014 -0300

      target-i386: Set migratable=yes by default on "host" CPU model

      Having only migratable flags reported by default on the "host" CPU model is
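In other words, from that commit on "-cpu host" drops non-migratable feature flags by default. A hedged way to get the old 2.0 behaviour back on a newer qemu is to disable that filtering explicitly (assuming the property is exposed on the build in question):

  qemu-system-x86_64 -cpu host,migratable=off ...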

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
Bisect result is:
  git bisect start
  # good: [a9e8aeb3755bccb7b51174adcf4a3fc427e0d147] Update version for v2.0.0 release
  git bisect good a9e8aeb3755bccb7b51174adcf4a3fc427e0d147
  # bad: [a8c40fa2d667e585382080db36ac44e216b37a1c] Update version for v2.5.0 release
  git bisect bad
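Between those two release tags the candidate CPU-model changes can be pre-scanned to narrow the bisect (a sketch; the pre-2.7 source path target-i386/cpu.c is an assumption):

  git log --oneline v2.0.0..v2.5.0 -- target-i386/cpu.c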

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
My initial testcase will be 07 "T4.4 / Q2.0 T3.13".
Bisect is rather complex as we'd need the md-clear patches on top at each step. Sorry that it took a while.
Adaptations (domain-XML sketch below):
- non-Ubuntu machine type (using 2.0 to work on all builds)
- remove VNC in the XML as we built a reduced-feature qemu
- place
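In domain-XML terms the adaptations above would look roughly like this (a sketch; the exact machine-type string is an assumption):

  <os>
    <type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>
  </os>
  <!-- no <graphics type='vnc' .../> element, since the reduced-feature build has no VNC support -->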

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
I re-checked all that you and I found; let's write a list of all that we know, to see if there are patterns.
Host (should not matter, but should be rather new) - in my case B4.18 Q2.11
For new qemu I'm using Mitaka. In this case being from

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
Hmm, with that workaround for 4.15 the guest reports the mds bug, but not the md-clear feature.
  $ uname -r; cat /sys/devices/system/cpu/vulnerabilities/mds; cat /proc/cpuinfo | grep -e ^bug -e ^flags | grep md
  4.15.0-50-generic
  Vulnerable: Clear CPU buffers attempted, no microcode; SMT
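The distinction checked above is between the bug bit and the mitigation flag; a slightly tidier variant of the same check would be (a sketch):

  grep -m1 '^bugs' /proc/cpuinfo                          # should list mds
  grep -m1 '^flags' /proc/cpuinfo | grep -o md_clear || echo 'md_clear missing'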

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
Of course I spoke too soon; on T3.13/Q2.0/B4.15 I now hit an FPU issue that builds up to a (recursive) kernel stack crash:
  [2.394255] Bad FPU state detected at fpu__clear+0x6b/0xd0, reinitializing FPU registers.
  [...]
  BUG: stack guard page was hit at (ptrval) (stack is

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
oO :-/, that crash also happens with the older qemu on trusty as soon as you have more than one lvl1 or lvl2 guest, it seems. Anyway, the same workaround to test MDS applies.

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
Further, I realized that I can trigger this with T3.13/Q2.5/B4.15:
  trusty-lvl1-mitaka kernel: [ 931.946357] kvm [2356]: vcpu0 unhandled rdmsr: 0x140
  trusty-lvl1-mitaka kernel: [ 932.236914] kvm [2356]: vcpu0 unhandled rdmsr: 0x1c9
  trusty-lvl1-mitaka kernel: [ 932.238337] kvm [2356]: vcpu0
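Those rdmsr warnings land in the lvl1 guest's kernel log, so a simple way to spot them after a reproduction attempt is (a sketch):

  dmesg | grep 'unhandled rdmsr'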

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-21 Thread Christian Ehrhardt 
Note: support for nested virtualization is, and always was, "best effort", as it is famously known to work great until it doesn't. Recently upstream's stance on this changed, and in the last few versions nested x86 got some love (due to some big players using it now), but I'm looking more to 20.04 than anything before

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-20 Thread Steve Beattie
Interesting, I see different behaviors in my setup:

4.4 on lvl1, 4.15 lvl2, trusty lvl1 qemu: /proc/cpuinfo in lvl2 contains:
  bugs: cpu_meltdown spectre_v1 spectre_v2 l1tf
  (note missing mds, hence "Not Affected")

4.4 on lvl1, 4.15 lvl2, xenial lvl1 qemu: /proc/cpuinfo in lvl2
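To put the two setups side by side, the check in each lvl2 guest is just (a sketch; the bugs line repeats once per CPU, hence the sort -u):

  grep '^bugs' /proc/cpuinfo | sort -u
  cat /sys/devices/system/cpu/vulnerabilities/mds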

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-20 Thread Christian Ehrhardt 
E.g. in /proc/cpuinfo the whole section is missing:
  bugs: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds
  ^^ this is not there when "Not affected" is reported

Separating kernels:
- 3.13 on both: Works
- 4.4 on lvl1, 3.13 on lvl2: Fail
- 3.13 on lvl1, 4.4 on lvl2: Works
So

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-20 Thread Christian Ehrhardt 
TL;DR: only occurs with the HWE kernel - odd.

Results of: $ cat /sys/devices/system/cpu/vulnerabilities/mds

Host Bionic: 4.18.0-20-generic / 2.11+dfsg-1ubuntu7.13
Guest-lvl1: 3.13.0-170-generic / 2.0.0+dfsg-2ubuntu1.46
  Vulnerable: Clear CPU buffers attempted, SMT Host state unknown
Guest-lvl2:
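To line up reports like this, the same tuple can be collected on every level with something like (a sketch; assumes the qemu binary comes from the qemu-system-x86 package):

  echo "$(uname -r) / $(dpkg-query -W -f='${Version}' qemu-system-x86 2>/dev/null || echo n/a) / $(cat /sys/devices/system/cpu/vulnerabilities/mds 2>/dev/null || echo missing)"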

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-17 Thread Steve Beattie
For the record, attempting to boot a bionic guest in a trusty vm running the current trusty 3.13 kernel 3.13.0-170.220-generic results in the bionic kernel oopsing repeatedly.

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-17 Thread Steve Beattie
Oh, I just realized the trusty and xenial 1st level VMs are both using the 4.4 kernel, so this is likely an issue with trusty's qemu.

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-17 Thread Steve Beattie
Information above collected from the trusty 1st level guest.

** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-17 Thread Steve Beattie
apport information

** Tags added: apport-collected

** Description changed:
  When nested kvm virtualization is used (with host-passthrough), if the first level guest is a trusty vm, odd behavior is seen in the second level guest:
  host os: disco/5.0.0-15.16-generic/qemu
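For reference, "host-passthrough" is the libvirt CPU mode set in the domain XML, roughly (a sketch):

  <cpu mode='host-passthrough'/>

which libvirt turns into qemu's "-cpu host", the CPU model whose flag filtering is discussed in the comments above.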

[Kernel-packages] [Bug 1829555] Re: nested virtualization w/first level trusty guests has odd MDS behavior

2019-05-17 Thread Steve Beattie
** Also affects: qemu (Ubuntu)
   Importance: Undecided
   Status: New