Commit 7f2590a110b8 ("x86/entry/64: Use a per-CPU trampoline stack for
IDT entries") resulted in the kernel (error_entry), on an exception from
userspace, always pushing pt_regs onto the entry stack (sp0) and then
copying it to the kernel stack.

And the recent x86/entry work makes interrupts also use idtentry, so all
the interrupt code now saves pt_regs on the sp0 stack and then copies it
to the thread stack, just like exceptions.

These are hot paths (page fault, IPI), so such overhead should be
avoided. The original interrupt_entry switched directly to the kernel
stack and pushed pt_regs onto the kernel stack; error_entry should do
the same. That is the job of patch 1.

Patches 2-4 simplify the stack switching for .Lerror_bad_iret by doing
all the work in one function (fixup_bad_iret()).

The patch set is based on tip/x86/entry (28447ea41542) (May 20).

Changes since V1:
        based on tip/master -> based on tip/x86/entry

        patch 1 replaces patches 1 and 2 of V1; it borrows the
        original interrupt_entry's code into error_entry.

        patches 2-4 are V1's patches 3-5, unchanged (but rebased)

Cc: Andy Lutomirski <[email protected]>,
Cc: Thomas Gleixner <[email protected]>,
Cc: Ingo Molnar <[email protected]>,
Cc: Borislav Petkov <[email protected]>,
Cc: [email protected],
Cc: "H. Peter Anvin" <[email protected]>,
Cc: Peter Zijlstra <[email protected]>,
Cc: Alexandre Chartre <[email protected]>,
Cc: "Eric W. Biederman" <[email protected]>,
Cc: Jann Horn <[email protected]>,
Cc: Dave Hansen <[email protected]>

Lai Jiangshan (4):
  x86/entry: avoid calling into sync_regs() when entering from userspace
  x86/entry: directly switch to kernel stack when .Lerror_bad_iret
  x86/entry: remove unused sync_regs()
  x86/entry: don't copy to tmp in fixup_bad_iret

 arch/x86/entry/entry_64.S    | 52 +++++++++++++++++++++++-------------
 arch/x86/include/asm/traps.h |  1 -
 arch/x86/kernel/traps.c      | 42 ++++++++++++-----------------
 3 files changed, 51 insertions(+), 44 deletions(-)

-- 
2.20.1
