The inactive GS base refers to the base backed up at kernel entries; it belongs to the inactive (user) task.
The bug that returns a stale FS/GS base value (when the index is nonzero) is preserved here and will be fixed by the next patch.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/kernel/process_64.c | 67 +---
1 file changed, 51
Andy Lutomirski (1):
x86/fsgsbase/64: Make ptrace read FS/GS base accurately
Chang S. Bae (6):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Use FS/GS base helpers in core dump
x86/fsgsbase/64: Factor out load FS/GS segments from __switch_to
x86/segments/64
When the new FSGSBASE instructions are enabled, this read will become faster.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Andy Lutomirski
---
arch/x86/include/asm/elf.h
the kernel
and userspace unconditionally available much sooner.
(Thanks to HPA for suggesting the cleanup)
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/vdso/vma.c | 41
Instead of open coding it, load_fsgs() will clean up __switch_to() and be symmetric with the FS/GS segment save. When FSGSBASE is enabled, an X86_FEATURE_FSGSBASE check will be incorporated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
64-bit doesn't use the entry for per-CPU data, but for CPU numbers. The change clarifies the real usage of this entry in the GDT.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andi Kleen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Acked-by: Andy Lutomirski
---
arch
CPU number initialization in vDSO is now a bit cleaned up by the new helper functions. The helper functions will take care of combining CPU and node numbers and reading each from the combined value.
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc
to do that. The measured overhead was (almost) offset by the benefit.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/asm/fsgsbase.h | 17 --
arch/x86/kernel/process_64
-off-by: Andi Kleen
Signed-off-by: Andy Lutomirski
[chang: Replace the new instruction macros with GAS-compatible ones and rename them. Note: if GCC supports it, we can add -mfsgsbase to CFLAGS and use the builtins here for extra performance.]
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Dave Hansen
From: Andy Lutomirski
This is temporary. It will allow the next few patches to be tested
incrementally.
Setting unsafe_fsgsbase is a root hole. Don't do it.
Signed-off-by: Andy Lutomirski
[chang: Fix the deactivated flag. Add TAINT_INSECURE flag.]
Signed-off-by: Chang S. Bae
Reviewed
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/kernel/process_64.c | 59
1 file changed, 49
. (Thanks to HPA for suggesting the
cleanup)
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/vdso/vgetcpu.c | 2 +-
arch/x86/entry/vdso/vma.c | 38
4.6-rc1 behavior on my
Skylake laptop.
Signed-off-by: Andy Lutomirski
[chang: 5~10% performance improvement on context switch micro-
benchmark, when FS/GS base is actively switched.]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc
When adding new feature support, patches need to be applied and tested incrementally with temporary parameters. For such testing (or root-only) purposes, the new flag will serve to tag the kernel taint state properly.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
From: Andy Lutomirski
Now that FSGSBASE is fully supported, remove unsafe_fsgsbase, enable
FSGSBASE by default, and add nofsgsbase to disable it.
Signed-off-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc
When FSGSBASE is enabled, copy the real FS/GS base values instead of the approximation.
Factoring out to save_fsgs() does not yield exactly the same behavior, because save_base_legacy() does not copy the FS/GS base when the index is zero.
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Andi
.
GAS-compatible RDPID macro is included.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andi Kleen
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/entry_64.S | 74 +
arch/x86/include/asm
and edit the patch note accordingly]
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/uapi/asm/hwcap2.h | 3 +++
arch/x86/kernel/cpu/common.c | 4 +++-
2 files changed, 6 insertions(+), 1 deletion
From: Andi Kleen
v2: Minor updates to documentation requested in review.
v3: Update for new gcc and various improvements.
Signed-off-by: Andi Kleen
[chang: Minor edit and include descriptions for entry
changes by FSGSBASE]
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: H. Peter Anvin
by default and add a chicken bit
Chang S. Bae (8):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Use FS/GS base helpers in core dump
x86/fsgsbase/64: Factor out load FS/GS segments from __switch_to
x86/vdso: Move out the CPU number store
taint: Add taint
When the new FSGSBASE instructions are enabled, this read will be switched to the faster path.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/asm/elf.h | 6 +++---
1 file
Instead of open coding it, load_fsgs() will clean up __switch_to() and be symmetric with the FS/GS segment save. When FSGSBASE is enabled, an X86_FEATURE_FSGSBASE check will be incorporated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas
"inactive" are used to distinguish GS bases between "kernel" and "user". The "inactive" GS base is the GS base, backed up at kernel entries, of the inactive (user) task.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Using wrmsr_safe() can make the code a bit simpler by removing some condition checks.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/asm/msr.h | 2 +-
1 file changed, 1
When the new FSGSBASE instructions are enabled, this read will become faster.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Andy Lutomirski
---
arch/x86/include/asm/elf.h
. (Thanks to HPA for suggesting the
cleanup)
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/vdso/vgetcpu.c | 4 ++--
arch/x86/entry/vdso/vma.c | 38
] FSGSBASE patch set V2: https://lkml.org/lkml/2018/5/31/686
Andy Lutomirski (1):
x86/fsgsbase/64: Make ptrace read FS/GS base accurately
Chang S. Bae (5):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Use FS/GS base helpers in core dump
x86/fsgsbase/64: Factor out
Instead of open coding it, load_fsgs() will clean up __switch_to() and be symmetric with the FS/GS segment save. When FSGSBASE is enabled, an X86_FEATURE_FSGSBASE check will be incorporated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas
The inactive GS base refers to the base backed up at kernel entries; it belongs to the inactive (user) task.
The bug that returns a stale FS/GS base value (when the index is nonzero) is preserved here and will be fixed by the next patch.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/kernel/process_64.c | 67 +---
1 file changed, 51
The inactive GS base refers to the base backed up at kernel entries; it belongs to the inactive (user) task.
The bug that returns a stale FS/GS base value (when the index is nonzero) is preserved here and will be fixed by the next patch.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
When the new FSGSBASE instructions are enabled, this read will become faster.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Andy Lutomirski
---
arch/x86/include/asm/elf.h
64-bit doesn't use the entry for per-CPU data, but for CPU numbers. The change clarifies the real usage of this entry in the GDT.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86
/lkml/2018/6/4/887
Andy Lutomirski (1):
x86/fsgsbase/64: Make ptrace read FS/GS base accurately
Chang S. Bae (7):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Use FS/GS base helpers in core dump
x86/fsgsbase/64: Factor out load FS/GS segments from __switch_to
x86
(flat, initial) user space %ss. %ss is used rather than %ds because it is less likely to be changed, as 64-bit has %ss defined.
Suggested-by: H. Peter
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/asm
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/kernel/process_64.c | 67 +---
1 file changed, 51
. (Thanks to HPA for suggesting the
cleanup)
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/vdso/vgetcpu.c | 4 ++--
arch/x86/entry/vdso/vma.c | 41
Using wrmsr_safe() can make the code a bit simpler by removing some condition checks.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/asm/msr.h | 2 +-
1 file changed, 1
Instead of open coding it, load_fsgs() will clean up __switch_to() and be symmetric with the FS/GS segment save. When FSGSBASE is enabled, an X86_FEATURE_FSGSBASE check will be incorporated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas
The open-coded access, which might prevent use of the enhanced FSGSBASE mechanism, is now replaced.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Reviewed-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Ingo Molnar
Cc
Instead of open coding the calls to load_seg_legacy(), add a
load_fsgs() helper to handle fs and gs. When FSGSBASE is enabled,
load_fsgs() will be updated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Reviewed-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: H. Peter Anvin
Cc
://lkml.org/lkml/2018/6/7/975
[5] V4: https://lkml.org/lkml/2018/6/20/1045
Andy Lutomirski (1):
x86/arch_prctl/64: Make ptrace read FS/GS base accurately
Chang S. Bae (7):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Make ptrace use FS/GS base helpers
x86/fsgsbase/64: Use
functions are implemented as closely coupled. When the next patch makes ptrace use the helpers, it won't be directly accessed from ptrace.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
64-bit doesn't use the entry for per-CPU data, but for CPU (and node) numbers. The change clarifies the real usage of this entry in the GDT.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Acked-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andi Kleen
Cc: Dave
The CPU initialization in vDSO is now a bit cleaned up by
the new helper functions. The helper functions will take
care of combining CPU and node number and reading each from
the combined value.
Suggested-by: Andy Lutomirski
Suggested-by: Thomas Gleixner
Signed-off-by: Chang S. Bae
Cc: H
and hotplug
notifier are removed.
Suggested-by: H. Peter Anvin
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Thomas Gleixner
Cc: Dave Hansen
Cc: Ingo Molnar
Cc: Andi Kleen
---
arch/x86/entry/vdso/vma.c| 33 +
arch/x86/kernel/cpu/common.c | 24
The FS/GS base helper functions are used in the ptrace APIs (PTRACE_ARCH_PRCTL, PTRACE_SETREG, PTRACE_GETREG, etc.). The FS/GS update mechanism is now a bit more organized.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
---
arch/x86/kernel/ptrace.c | 62
1 file changed, 52
64-bit doesn't use the entry for per-CPU data, but for CPU numbers. The change clarifies the real usage of this entry in the GDT.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andi Kleen
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Acked-by: Andy Lutomirski
---
arch
The inactive GS base refers to the base backed up at kernel entries; it belongs to the inactive (user) task.
The bug that returns a stale FS/GS base value (when the index is nonzero) is preserved here and will be fixed by the next patch.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/kernel/process_64.c | 67 +---
1 file changed, 51
CPU number initialization in vDSO is now a bit cleaned up by the new helper functions. The helper functions will take care of combining CPU and node numbers and reading each from the combined value.
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc
When the new FSGSBASE instructions are enabled, this read will become faster.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed-by: Andy Lutomirski
---
arch/x86/include/asm/elf.h
Instead of open coding the calls to load_seg_legacy(), add a
load_fsgs() helper to handle fs and gs. When FSGSBASE is enabled,
load_fsgs() will be updated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
Reviewed
[3] V2: https://lkml.org/lkml/2018/6/6/582
[4] V3: https://lkml.org/lkml/2018/6/7/975
Andy Lutomirski (1):
x86/fsgsbase/64: Make ptrace read FS/GS base accurately
Chang S. Bae (6):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Use FS/GS base helpers in core dump
for suggesting the cleanup)
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Andi Kleen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/vdso/vma.c| 41 +
arch/x86/kernel/cpu/common.c | 28
-by: Andy Lutomirski
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
---
arch/x86/kernel/ptrace.c | 62
1 file changed, 52
The FS/GS base helper functions are used in the ptrace APIs (PTRACE_ARCH_PRCTL, PTRACE_SETREG, PTRACE_GETREG, etc.). The FS/GS update mechanism is now a bit more organized.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave
64-bit doesn't use the entry for per-CPU data, but for CPU (and node) numbers. The change clarifies the real usage of this entry in the GDT.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Acked-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andi Kleen
Cc: Dave
The CPU initialization in vDSO is now a bit cleaned up by
the new helper functions. The helper functions will take
care of combining CPU and node number and reading each from
the combined value.
Suggested-by: Andy Lutomirski
Suggested-by: Thomas Gleixner
Signed-off-by: Chang S. Bae
Cc: H
The open-coded access, which might prevent use of the enhanced FSGSBASE mechanism, is now replaced.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Reviewed-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Ingo Molnar
Cc
Instead of open coding the calls to load_seg_legacy(), add a
load_fsgs() helper to handle fs and gs. When FSGSBASE is enabled,
load_fsgs() will be updated.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Reviewed-by: Andy Lutomirski
Reviewed-by: Thomas Gleixner
Cc: H. Peter Anvin
Cc
FS/GS base accurately
Chang S. Bae (7):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86/fsgsbase/64: Make ptrace use FS/GS base helpers
x86/fsgsbase/64: Use FS/GS base helpers in core dump
x86/fsgsbase/64: Factor out load FS/GS segments from __switch_to
x86/segments/64: Rename
and hotplug
notifier are removed.
Suggested-by: H. Peter Anvin
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Thomas Gleixner
Cc: Dave Hansen
Cc: Ingo Molnar
Cc: Andi Kleen
---
arch/x86/entry/vdso/vma.c| 33 +
arch/x86/kernel/cpu/common.c | 24
functions are implemented as closely coupled. When the next patch makes ptrace use the helpers, it won't be directly accessed from ptrace.
Based-on-code-from: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
mpared to the baseline 4.6-rc1 behavior on my
Skylake laptop.
Signed-off-by: Andy Lutomirski <l...@kernel.org>
[chang: 5~10% performance improvement on context switch micro-
benchmark, when FS/GS base is actively switched.]
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Revie
and selector are
covered.
When FSGSBASE is enabled, an arbitrary base value is possible anyway, so it is reasonable to write the base last.
Suggested-by: H. Peter Anvin <h...@zytor.com>
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Cc: Markus T. Metzger <markus.t.metz..
From: Andy Lutomirski <l...@kernel.org>
Now that FSGSBASE is fully supported, remove unsafe_fsgsbase, enable
FSGSBASE by default, and add nofsgsbase to disable it.
Signed-off-by: Andy Lutomirski <l...@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Revie
From: Andy Lutomirski <l...@kernel.org>
This is temporary. It will allow the next few patches to be tested
incrementally.
Setting unsafe_fsgsbase is a root hole. Don't do it.
Signed-off-by: Andy Lutomirski <l...@kernel.org>
[chang: Fix the deactivated flag]
Signed-off-by:
to do that. The measured overhead was (almost) offset by the benefit.
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Reviewed-by: Andi Kleen <a...@linux.intel.com>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: H. Peter Anvin <h...@zytor.com>
---
arch/x86/inc
When FSGSBASE is enabled, copy the real FS/GS base values instead of the approximation.
Factoring out to save_fsgs() does not yield exactly the same behavior, because save_base_legacy() does not copy the FS/GS base when the index is zero.
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Cc: Andy Lutomir
Proliferation of offsetof() for user_regs_struct is trimmed
down with the USER_REGS_OFFSET macro.
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Cc: Andi Kleen <a...@linux.intel.com>
Cc: H. Peter Anvin <h...@zytor.com>
Cc: Andy Lutomirski <l...@kernel.org>
---
GDT/LDT (legacy behavior)
- When FS/GS base (regardless of selector) changed, tracee
will have the base
Suggested-by: Markus T. Metzger <markus.t.metz...@intel.com>
Suggested-by: H. Peter Anvin <h...@zytor.com>
Signed-off-by: Chang S. Bae <chang.seok@intel.com&
accurately
x86/fsgsbase/64: Add 'unsafe_fsgsbase' to enable CR4.FSGSBASE
x86/fsgsbase/64: Preserve FS/GS state in __switch_to if FSGSBASE is on
x86/fsgsbase/64: Enable FSGSBASE by default and add a chicken bit
Chang S. Bae (10):
x86/fsgsbase/64: Introduce FS/GS base helper functions
x86
the offset table with the CPU number.
GAS-compatible RDPID macro is included.
Suggested-by: H. Peter Anvin <h...@zytor.com>
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Cc: Andi Kleen <a...@linux.intel.com>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Dave Hansen <
2: Use __always_inline
Signed-off-by: Andi Kleen <a...@linux.intel.com>
Signed-off-by: Andy Lutomirski <l...@kernel.org>
[chang: Replace the new instruction macros with GAS-compatible ones and rename them]
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Cc: H. Peter Anvin <h...@zytor
putregs() can be used to handle multiple elements flexibly. It is useful when there are interdependencies in updating a group of context entries, as will be the case with FSGSBASE.
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Cc: Markus T. Metzger <markus.t.metz...@intel.com>
nd "shadow" are used to distinguish GS bases between "kernel" and "user". The "shadow" GS base refers to the GS base backed up at kernel entries; it is the inactive (user) task's GS base.
Based-on-code-from: Andy Lutomirski <l...@kernel.org>
Signed-off-by: Chang S
Instead of open coding it, load_fsgs() will clean up __switch_to() and be symmetric with the FS/GS segment save. When FSGSBASE is enabled, an X86_FEATURE_FSGSBASE check will be incorporated.
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Reviewed-by: Andi Kleen <a...@linux.intel.com>
Cc: Andy L
When the new FSGSBASE instructions are enabled, this read will be switched to the faster path.
Based-on-code-from: Andy Lutomirski <l...@kernel.org>
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Reviewed-by: Andi Kleen <a...@linux.intel.com>
Cc: H. Peter Anvin <h...@zytor.com&g
nd etc).
Signed-off-by: Andy Lutomirski <l...@kernel.org>
[chang: Rebase and revise patch description]
Signed-off-by: Chang S. Bae <chang.seok@intel.com>
Reviewed-by: Andi Kleen <a...@linux.intel.com>
Cc: H. Peter Anvin <h...@zytor.com>
---
arc
GSBASE read into a new macro,
READ_MSR_GSBASE.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andi Kleen
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/entry/entry_64.S | 73 ++---
arch/x86/include/asm
From: Andi Kleen
v2: Minor updates to documentation requested in review.
v3: Update for new gcc and various improvements.
[ chang: Fix some typo. Fix the example code. ]
Signed-off-by: Andi Kleen
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc
From: Andy Lutomirski
Now that FSGSBASE is fully supported, remove unsafe_fsgsbase, enable
FSGSBASE by default, and add nofsgsbase to disable it.
Signed-off-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc
to the baseline 4.6-rc1 behavior on my
Skylake laptop.
[ chang: 5~10% performance improvements were seen by a context switch
benchmark that ran threads with different FS/GSBASE values. Minor
edit on the changelog. ]
Signed-off-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: H
.
The new macro will be used on a following patch.
Suggested-by: H. Peter Anvin
Signed-off-by: Chang S. Bae
Cc: Andi Kleen
Cc: Andy Lutomirski
Cc: Dave Hansen
Cc: Thomas Gleixner
Cc: Ingo Molnar
---
arch/x86/include/asm/fsgsbase.h | 52 +
arch/x86/include/asm
accordingly. ]
Signed-off-by: Andi Kleen
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
---
arch/x86/include/uapi/asm/hwcap2.h | 3 +++
arch/x86/kernel/cpu/common.c | 4 +++-
2 files changed, 6 insertions(+), 1
. However, it seems to spend more cycles on saves and restores. Little or no benefit was measured in experiments.
Signed-off-by: Chang S. Bae
Reviewed-by: Andi Kleen
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
---
arch/x86/include/asm
From: Andy Lutomirski
This validates that GS and GSBASE are independently preserved across
context switches.
Signed-off-by: Andy Lutomirski
Reviewed-by: Andi Kleen
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Dave Hansen
---
tools/testing
Copy real FS/GSBASE values instead of approximation when FSGSBASE is
enabled.
Factoring out to save_fsgs() does not result in the same behavior because
save_base_legacy() does not copy FS/GSBASE when the index is zero.
Signed-off-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: H. Peter Anvin
Cc
state in __switch_to() if FSGSBASE is
on
selftests/x86/fsgsbase: Test WRGSBASE
x86/fsgsbase/64: Enable FSGSBASE by default and add a chicken bit
Chang S. Bae (5):
taint: Introduce a new taint flag (insecure)
x86/fsgsbase/64: Enable FSGSBASE instructions in the helper functions
x86
From: Andy Lutomirski
This is temporary. It will allow the next few patches to be tested
incrementally.
Setting unsafe_fsgsbase is a root hole. Don't do it.
[ chang: Minor fix. Add the TAINT_INSECURE flag. ]
Signed-off-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Reviewed-by: Andi
[ chang: Revise the changelog. Place them in . Replace
the macros with GAS-compatible ones. ]
If GCC supports it, we can add -mfsgsbase to CFLAGS and use the builtins
here for extra performance.
Signed-off-by: Andi Kleen
Signed-off-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: H. Peter Anvin
-by: Chang S. Bae
Cc: Andy Lutomirski
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Andi Kleen
Cc: Dave Hansen
---
Documentation/sysctl/kernel.txt | 1 +
include/linux/kernel.h | 3 ++-
kernel/panic.c | 1 +
3 files changed, 4 insertions(+), 1 deletion(-)
diff --git
task, but a stopped task.
v2: Further fix the task write functions. Revert the changes on the task read helpers.
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc: Dave Hansen
---
arch/x86/kernel/process_64
task, but a stopped task.
v2: Further fix the task write functions. Revert the changes on the task read helpers.
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc: Dave Hansen
---
arch/x86/kernel/process_64
to
change the index.
putreg() in ptrace does not write the current task, but a stopped task.
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc: Dave Hansen
---
arch/x86/kernel/process_64.c | 67
the changes on the
task read helpers.
v3: Fix putreg(). Edit the changelog.
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc: Dave Hansen
---
arch/x86/kernel/process_64.c | 48
and do_arch_prctl_64(). Fix
the comment in putreg().
Suggested-by: Andy Lutomirski
Signed-off-by: Chang S. Bae
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: H. Peter Anvin
Cc: Andi Kleen
Cc: Dave Hansen
---
arch/x86/include/asm/fsgsbase.h | 15 --
arch/x86/kernel/process_64.c| 84