This makes failed execve() calls return faster. It also enables a possible future optimization: SAVE_EXTRA_REGS can be avoided in the execve stubs.
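For illustration only (a hypothetical userspace test, not part of this patch): a successful execve() never returns to the caller, while a failed one returns -1 with errno set, and it is that error-return path which can now take the fast SYSRET exit:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char *argv[] = { "/nonexistent", NULL };
	char *envp[] = { NULL };

	/* execve() of a missing binary fails and returns to us;
	 * this failed-execve return is the path sped up by the patch. */
	if (execve("/nonexistent", argv, envp) == -1)
		printf("execve failed: %s\n", strerror(errno));
	return 0;
}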
Run-tested.

Signed-off-by: Denys Vlasenko <[email protected]>
CC: Linus Torvalds <[email protected]>
CC: Steven Rostedt <[email protected]>
CC: Ingo Molnar <[email protected]>
CC: Borislav Petkov <[email protected]>
CC: "H. Peter Anvin" <[email protected]>
CC: Andy Lutomirski <[email protected]>
CC: Oleg Nesterov <[email protected]>
CC: Frederic Weisbecker <[email protected]>
CC: Alexei Starovoitov <[email protected]>
CC: Will Drewry <[email protected]>
CC: Kees Cook <[email protected]>
CC: [email protected]
CC: [email protected]
---
 arch/x86/kernel/entry_64.S | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 060cb2e..e8f2aeb 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -421,6 +421,11 @@ ENTRY(stub_execve)
 	DEFAULT_FRAME 0, 8
 	SAVE_EXTRA_REGS 8
 	call	sys_execve
+return_from_exec:
+	testl	%eax,%eax
+	jz	return_from_stub
+	/* exec failed, can use fast SYSRET code path in this case */
+	ret
 return_from_stub:
 	addq	$8, %rsp
 	movq	%rax,RAX(%rsp)
@@ -434,7 +439,7 @@ ENTRY(stub_execveat)
 	DEFAULT_FRAME 0, 8
 	SAVE_EXTRA_REGS 8
 	call	sys_execveat
-	jmp	return_from_stub
+	jmp	return_from_exec
 	CFI_ENDPROC
 END(stub_execveat)

@@ -471,7 +476,7 @@ ENTRY(stub_x32_execve)
 	DEFAULT_FRAME 0, 8
 	SAVE_EXTRA_REGS 8
 	call	compat_sys_execve
-	jmp	return_from_stub
+	jmp	return_from_exec
 	CFI_ENDPROC
 END(stub_x32_execve)

@@ -480,7 +485,7 @@ ENTRY(stub_x32_execveat)
 	DEFAULT_FRAME 0, 8
 	SAVE_EXTRA_REGS 8
 	call	compat_sys_execveat
-	jmp	return_from_stub
+	jmp	return_from_exec
 	CFI_ENDPROC
 END(stub_x32_execveat)
-- 
1.8.1.4

