Re: [RFC PATCH 00/62] Linux as SEV-ES Guest Support
On Tue, Feb 11, 2020 at 07:48:12PM -0800, Andy Lutomirski wrote:
> > On Feb 11, 2020, at 5:53 AM, Joerg Roedel wrote:
> >
> > * Putting some NMI-load on the guest will make it crash usually
> >   within a minute
>
> Suppose you do CPUID or some MMIO and get #VC. You fill in the GHCB to
> ask for help. Some time between when you start filling it out and when
> you do VMGEXIT, you get NMI. If the NMI does its own GHCB access [0],
> it will clobber the outer #VC’s state, resulting in a failure when
> VMGEXIT happens. There’s a related failure mode if the NMI is after
> the VMGEXIT but before the result is read.
>
> I suspect you can fix this by saving the GHCB at the beginning of
> do_nmi and restoring it at the end. This has the major caveat that it
> will not work if do_nmi comes from user mode and schedules, but I
> don’t believe this can happen.
>
> [0] Due to the NMI_COMPLETE catastrophe, there is a 100% chance that
> this happens.

Very true, thank you! You probably saved me a few hours of debugging
this further :) I will implement better handling for nested #VC
exceptions, which hopefully solves the NMI crashes.

Thanks again,

	Joerg

___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Re: [RFC PATCH 00/62] Linux as SEV-ES Guest Support
On Tue, Feb 11, 2020 at 02:12:04PM -0800, Andy Lutomirski wrote:
> On Tue, Feb 11, 2020 at 7:43 AM Joerg Roedel wrote:
> >
> > On Tue, Feb 11, 2020 at 03:50:08PM +0100, Peter Zijlstra wrote:
> > >
> > > Oh gawd; so instead of improving the whole NMI situation, AMD went and
> > > made it worse still ?!?
> >
> > Well, depends on how you want to see it. Under SEV-ES an IRET will not
> > re-open the NMI window, but the guest has to tell the hypervisor
> > explicitly when it is ready to receive new NMIs via the NMI_COMPLETE
> > message. NMIs stay blocked even when an exception happens in the
> > handler, so this could also be seen as a (slight) improvement.
>
> I don't get it. VT-x has a VMCS bit "Interruptibility
> state"."Blocking by NMI" that tracks the NMI masking state. Would it
> have killed AMD to solve the problem the same way to retain
> architectural behavior inside a SEV-ES VM?

No, but it wouldn't solve the problem. Inside an NMI handler there
could be #VC exceptions, which do an IRET on their own. Hardware NMI
state tracking would re-enable NMIs when the #VC exception returns to
the NMI handler, which is not what every OS is comfortable with.

Yes, there are many ways to hack around this. The GHCB spec mentions
the single-stepping-over-IRET idea, which I also prototyped in a
previous version of this patch-set. I gave up on it when I discovered
that NMIs that happen when executing in kernel-mode but on the entry
stack will cause the #VC handler to call into C code while on the
entry stack, because neither paranoid_entry nor error_entry handle the
from-kernel-with-entry-stack case. This could of course also be fixed,
but it further complicates things already complicated enough by the
PTI changes and nested-NMI support.

My patch for using the NMI_COMPLETE message is certainly not perfect
and needs changes, but having the message specified in the protocol
gives the guest the best flexibility in deciding when it is ready to
receive new NMIs, imho.

Regards,

	Joerg
Re: [RFC PATCH 00/62] Linux as SEV-ES Guest Support
> On Feb 11, 2020, at 5:53 AM, Joerg Roedel wrote:
>
> * Putting some NMI-load on the guest will make it crash usually
>   within a minute

Suppose you do CPUID or some MMIO and get #VC. You fill in the GHCB to
ask for help. Some time between when you start filling it out and when
you do VMGEXIT, you get NMI. If the NMI does its own GHCB access [0],
it will clobber the outer #VC’s state, resulting in a failure when
VMGEXIT happens. There’s a related failure mode if the NMI is after
the VMGEXIT but before the result is read.

I suspect you can fix this by saving the GHCB at the beginning of
do_nmi and restoring it at the end. This has the major caveat that it
will not work if do_nmi comes from user mode and schedules, but I
don’t believe this can happen.

[0] Due to the NMI_COMPLETE catastrophe, there is a 100% chance that
this happens.
Re: [RFC PATCH 00/62] Linux as SEV-ES Guest Support
On Tue, Feb 11, 2020 at 7:43 AM Joerg Roedel wrote:
>
> On Tue, Feb 11, 2020 at 03:50:08PM +0100, Peter Zijlstra wrote:
> >
> > Oh gawd; so instead of improving the whole NMI situation, AMD went and
> > made it worse still ?!?
>
> Well, depends on how you want to see it. Under SEV-ES an IRET will not
> re-open the NMI window, but the guest has to tell the hypervisor
> explicitly when it is ready to receive new NMIs via the NMI_COMPLETE
> message. NMIs stay blocked even when an exception happens in the
> handler, so this could also be seen as a (slight) improvement.

I don't get it. VT-x has a VMCS bit "Interruptibility
state"."Blocking by NMI" that tracks the NMI masking state. Would it
have killed AMD to solve the problem the same way to retain
architectural behavior inside a SEV-ES VM?

--Andy
Re: [RFC PATCH 00/62] Linux as SEV-ES Guest Support
On Tue, Feb 11, 2020 at 03:50:08PM +0100, Peter Zijlstra wrote:
> Oh gawd; so instead of improving the whole NMI situation, AMD went and
> made it worse still ?!?

Well, depends on how you want to see it. Under SEV-ES an IRET will not
re-open the NMI window, but the guest has to tell the hypervisor
explicitly when it is ready to receive new NMIs via the NMI_COMPLETE
message. NMIs stay blocked even when an exception happens in the
handler, so this could also be seen as a (slight) improvement.

Regards,

	Joerg
Re: [RFC PATCH 00/62] Linux as SEV-ES Guest Support
On Tue, Feb 11, 2020 at 02:51:54PM +0100, Joerg Roedel wrote:
> NMI Special Handling
> --------------------
>
> The last thing that needs special handling with SEV-ES are NMIs.
> Hypervisors usually start to intercept IRET instructions when an NMI got
> injected to find out when the NMI window is re-opened. But handling IRET
> intercepts requires the hypervisor to access guest register state and is
> not possible with SEV-ES. The specification under [1] solves this
> problem with an NMI_COMPLETE message sent by the guest to the
> hypervisor, upon which the hypervisor re-opens the NMI window for the
> guest.
>
> This patch-set sends the NMI_COMPLETE message before the actual IRET,
> while the kernel is still on a valid stack and kernel cr3. This opens
> the NMI-window a few instructions early, but this is fine as under
> x86-64 Linux NMI-nesting is safe. The alternative would be to
> single-step over the IRET, but that requires more intrusive changes to
> the entry code because it does not handle entries from kernel-mode while
> on the entry stack.
>
> Besides the special handling above the patch-set contains the handlers
> for the #VC exception and all the exit-codes specified in [1].

Oh gawd; so instead of improving the whole NMI situation, AMD went and
made it worse still ?!?
[RFC PATCH 00/62] Linux as SEV-ES Guest Support
Hi,

here is the first public post of the patch-set to enable Linux to run
under SEV-ES enabled hypervisors. The code is mostly feature-complete,
but there are still a couple of bugs to fix. Nevertheless, given the
size of the patch-set, I think it is about time to ask for initial
feedback on the changes that come with it. To better understand the
code, here is a quick explanation of SEV-ES first.

This patch-set does not contain the hypervisor changes necessary to
run SEV-ES enabled KVM guests. These patches will be sent separately
when they are ready to be sent out.

What is SEV-ES
==============

SEV-ES is an acronym for 'Secure Encrypted Virtualization - Encrypted
State' and denotes a hardware feature of AMD processors which hides
the register state of VCPUs from the hypervisor by encrypting it. The
hypervisor can't read or make changes to the guest's register state.

Most intercepts set by the hypervisor do not cause a #VMEXIT of the
guest anymore, but turn into a VMM Communication Exception (#VC
exception, vector 29) inside the guest. The error-code of this
exception is the intercept exit-code that caused the exception. The
guest handles the #VC exception by communicating with the hypervisor
through a shared data structure, the
'Guest-Hypervisor-Communication-Block' (GHCB). The layout of that
data-structure and the protocol is specified in [1]. A description of
the SEV-ES hardware interface can be found in the AMD64 Architecture
Programmer's Manual Volume 2, Section 15.35 [2].

Implementation Details
======================

SEV-ES guests will always boot via UEFI firmware and use the 64-bit
EFI entry point into the kernel. This implies that only 64-bit Linux
x86 guests are supported.

Pre-Decompression Boot Code and Early Exception Support
-------------------------------------------------------

Intercepts that cause exceptions in the guest include instructions
like CPUID, RDMSR/WRMSR, IOIO instructions and a couple more. Some of
them are executed very early during boot, which means that exceptions
need to work that early.
That is the reason big parts of this patch-set enable support for
early exceptions, first in the pre-decompression boot-code and later
also in the early boot-code of the kernel image. As these patches add
exception support to the pre-decompression boot code, they also
implement a page-fault handler to create the identity-mapped
page-table on-demand. One reason for this change is to make use of the
exception handling code in non SEV-ES guests too, so that it is less
likely to break in the future. The other reason is that for SEV-ES
guests the code needs to set up its own page-table to map the GHCB
unencrypted. Without these patches the pre-decompression code only
uses its own page-table when KASLR is enabled and used.

SIPI and INIT Handling
----------------------

The hypervisor also can't make changes to the guest register state,
which implies that it can't emulate SIPI and INIT messages. This means
that any CPU register state reset needs to be done inside the guest.
Most of this is handled in the firmware, but the Linux kernel has to
set up an AP Jump Table to boot secondary processors. CPU
online/offline handling also needs special handling, where this
patch-set implements a shortcut. An offlined CPU will not go back to
real-mode when it is woken up again, but stays in long-mode and just
jumps back to the trampoline code.

NMI Special Handling
--------------------

The last thing that needs special handling with SEV-ES are NMIs.
Hypervisors usually start to intercept IRET instructions when an NMI
got injected to find out when the NMI window is re-opened. But
handling IRET intercepts requires the hypervisor to access guest
register state and is not possible with SEV-ES. The specification
under [1] solves this problem with an NMI_COMPLETE message sent by the
guest to the hypervisor, upon which the hypervisor re-opens the NMI
window for the guest.

This patch-set sends the NMI_COMPLETE message before the actual IRET,
while the kernel is still on a valid stack and kernel cr3.
This opens the NMI-window a few instructions early, but this is fine
as under x86-64 Linux NMI-nesting is safe. The alternative would be to
single-step over the IRET, but that requires more intrusive changes to
the entry code because it does not handle entries from kernel-mode
while on the entry stack.

Besides the special handling above, the patch-set contains the
handlers for the #VC exception and all the exit-codes specified in
[1].

Current State of the Patches
============================

The patch-set posted here can boot an SMP Linux guest under
SEV-ES-enabled KVM and the guest survives some load-testing
(kernel-compiles). The guest boots to the graphical desktop and is
usable. But there are still known bugs and issues:

	* Putting some NMI-load on the guest will make it crash usually
	  within a minute

	* The handler for MMIO events needs more security checks when
	  walking the