This is a fix on top of the KAISER [v3] patches I posted earlier.
It is a fix for:

        [PATCH 05/30] x86, kaiser: prepare assembly for entry/exit CR3 switching

I made a mistake and stopped running the 32-bit selftests at
some point.  The changes I made in response to one of Borislav's
review comments ended up breaking the 32-bit SYSENTER path.

The issue was that we switched over to the process stack and
wrote to it before we switched CR3.  Since the process stack is
unmapped under the user CR3, this access faulted.

I can also send a consolidated 05/30 patch that contains this
fix if that would be easier.

Signed-off-by: Dave Hansen <dave.han...@linux.intel.com>
Cc: Moritz Lipp <moritz.l...@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gr...@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schw...@iaik.tugraz.at>
Cc: Richard Fellner <richard.fell...@student.tugraz.at>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Kees Cook <keesc...@google.com>
Cc: Hugh Dickins <hu...@google.com>
Cc: x...@kernel.org
---

 b/arch/x86/entry/entry_64_compat.S |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff -puN arch/x86/entry/entry_64_compat.S~kaiser-mpx-32-splat 
arch/x86/entry/entry_64_compat.S
--- a/arch/x86/entry/entry_64_compat.S~kaiser-mpx-32-splat      2017-11-10 
15:44:42.893205660 -0800
+++ b/arch/x86/entry/entry_64_compat.S  2017-11-10 16:01:14.880203186 -0800
@@ -48,6 +48,10 @@
 ENTRY(entry_SYSENTER_compat)
        /* Interrupts are off on entry. */
        SWAPGS
+
+       /* We are about to clobber %rsp anyway, clobbering here is OK */
+       SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
+
        movq    PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 
        /*
@@ -91,9 +95,6 @@ ENTRY(entry_SYSENTER_compat)
        pushq   $0                      /* pt_regs->r15 = 0 */
        cld
 
-       /* We just saved all the registers, so safe to clobber %rdi */
-       SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-
        /*
         * SYSENTER doesn't filter flags, so we need to clear NT and AC
         * ourselves.  To save a few cycles, we can check whether
@@ -245,7 +246,6 @@ sysret32_from_system_call:
        popq    %rsi                    /* pt_regs->si */
        popq    %rdi                    /* pt_regs->di */
 
-       SWITCH_TO_USER_CR3 scratch_reg=%r8
         /*
          * USERGS_SYSRET32 does:
          *  GSBASE = user's GS base
@@ -261,10 +261,18 @@ sysret32_from_system_call:
         * when the system call started, which is already known to user
         * code.  We zero R8-R10 to avoid info leaks.
          */
+       movq    RSP-ORIG_RAX(%rsp), %rsp
+
+       /*
+        * %rsp is not mapped to userspace so the switch to the user
+        * CR3 cannot be done until after all references to it are
+        * complete.
+        */
+       SWITCH_TO_USER_CR3 scratch_reg=%r8
+
        xorq    %r8, %r8
        xorq    %r9, %r9
        xorq    %r10, %r10
-       movq    RSP-ORIG_RAX(%rsp), %rsp
        swapgs
        sysretl
 END(entry_SYSCALL_compat)
_
