From: Andi Kleen
Clear all registers on entering the 64bit kernel for exceptions and
interrupts.
Since there are no arguments, this is fairly simple.
Signed-off-by: Andi Kleen
---
arch/x86/entry/entry_64.S | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/x86/entry/entry_64.S b
From: Andi Kleen <a...@linux.intel.com>
In order to sanitize the system call arguments properly
we need to know the number of syscall arguments for each
syscall. Add a new column to the 32bit and 64bit syscall
tables to list the number of arguments.
Also fix the generation script to not confuse the number
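As a hedged illustration of what the generation script would have to consume (the real `.tbl` column layout, field order, and the exact name of the added column are assumptions, not the patch contents), a row carrying an extra argument-count column might be parsed like this:

```python
# Hypothetical sketch: parsing a syscall table row that carries an extra
# argument-count column, as the patch describes. The actual column layout
# in arch/x86/entry/syscalls/*.tbl may differ.

def parse_syscall_row(line):
    """Split one syscall table row into (number, abi, name, entry, nargs)."""
    fields = line.split()
    number, abi, name, entry, nargs = fields[:5]
    return int(number), abi, name, entry, int(nargs)

# read(fd, buf, count) takes 3 arguments
row = "0 common read sys_read 3"
num, abi, name, entry, nargs = parse_syscall_row(row)
print(num, name, nargs)
```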
From: Andi Kleen <a...@linux.intel.com>
Clear all registers for compat calls on 64bit kernels. All arguments
are initially passed through the stack, so this is fairly simple
without additional stubs.
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/entry/entry_64_compat.S | 6 ++
1 file changed, 6 insertions(+)
diff
From: Andi Kleen <a...@linux.intel.com>
Remove the partial stack frame in the 64bit syscall fast path.
In the next patch we want to clear the extra registers, which requires
to always save all registers. So remove the partial stack frame
in the syscall fast path and always save everything.
This actually simplifies
This patch kit implements clearing of all unused registers on kernel entries,
including system calls and all exceptions and interrupts.
This doesn't fix any known issue, but will make it harder in general
to exploit the kernel with speculation because it will be harder
to get user controlled
From: Andi Kleen <a...@linux.intel.com>
Add 64bit assembler macros to clear registers on kernel entry.
Used in follow-on patches.
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/entry/calling.h | 28
1 file changed, 28 insertions(+)
diff --git a/arch/x86/entry/calling.h b/arch/x86/entry
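The macros in question are not shown in this excerpt; as a hedged sketch, a register-clearing macro of the kind described would look roughly like this (the macro name and exact register list are assumptions):

```asm
/* Hypothetical sketch of a register-clearing macro of the kind the
 * patch adds to arch/x86/entry/calling.h; not the patch contents. */
.macro CLEAR_EXTRA_REGS
	/* xorl of the low half zero-extends to the full 64 bits and has
	 * a shorter encoding than xorq */
	xorl	%r15d, %r15d
	xorl	%r14d, %r14d
	xorl	%r13d, %r13d
	xorl	%r12d, %r12d
	xorl	%ebp, %ebp
	xorl	%ebx, %ebx
.endm
```

Zeroing the registers on entry means user-controlled values cannot survive into kernel speculation as gadget inputs, which is the point of the series.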
From: Andi Kleen <a...@linux.intel.com>
The main system call code doesn't know how many arguments each
system call has. So generate stubs that do the clearing.
Set up macros to generate stubs to clear unused argument registers
for each system call in a 64bit kernel. This uses the syscall
argument count from
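For a one-argument syscall, a generated stub would look roughly like the following sketch (the stub name, target, and the assumption that unused C-ABI argument registers rsi/rdx/rcx/r8/r9 are the ones cleared are illustrative, not taken from the patch):

```asm
/* Hypothetical generated stub for a 1-argument syscall: clear the
 * five unused argument registers before entering the real handler. */
stub_sys_close:
	xorl	%esi, %esi	/* arg2 */
	xorl	%edx, %edx	/* arg3 */
	xorl	%ecx, %ecx	/* arg4 */
	xorl	%r8d, %r8d	/* arg5 */
	xorl	%r9d, %r9d	/* arg6 */
	jmp	sys_close	/* tail-call the real implementation */
.endm
```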
> Then just make sure X86_FEATURE_RETPOLINE_AMD disables X86_FEATURE_RETPOLINE.
>
> That is both simpler and smaller, no?
Yes that works, and is clearly better/simpler.
Tested-by: Andi Kleen <a...@linux.intel.com>
Thomas, I assume you will fix it up, or let me know if I should
send another patch.
-Andi
From: Andi Kleen <a...@linux.intel.com>
With the latest tip x86/pti I get oopses when booting
a 64bit VM in qemu with RETPOLINE/gcc7 and PTI enabled.
The following patch fixes it for me. Something doesn't
seem to work with ALTERNATIVE_2. It adds only a few bytes
more code, so seems acceptable.
Signed-off-by: Andi Kleen
Commit-ID: 450c505047981e97471f0170e0102f613bba4739
Gitweb: https://git.kernel.org/tip/450c505047981e97471f0170e0102f613bba4739
Author: Andi Kleen <a...@linux.intel.com>
AuthorDate: Tue, 9 Jan 2018 14:43:17 +
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Tue, 9 Jan 2018 16:17:55 +0100
x86/retpoline: Avoid
Commit-ID: 3025d1ebb41bc8fc58fc050c6d4d6dd4d71ca5e8
Gitweb: https://git.kernel.org/tip/3025d1ebb41bc8fc58fc050c6d4d6dd4d71ca5e8
Author: Andi Kleen <a...@linux.intel.com>
AuthorDate: Tue, 9 Jan 2018 14:43:16 +
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Tue, 9 Jan 2018 16:17:54 +0100
x86/retpoline/irq32
Commit-ID: 61888594f2ff61633c7fb29b58c128d6dc850e7c
Gitweb: https://git.kernel.org/tip/61888594f2ff61633c7fb29b58c128d6dc850e7c
Author: Andi Kleen <a...@linux.intel.com>
AuthorDate: Tue, 9 Jan 2018 14:43:08 +
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Tue, 9 Jan 2018 16:17:51 +0100
x86/retpoline
> > On Skylake and Broadwell when the RSB underflows it will fall back to the
> > indirect branch predictor, which can be poisoned and we try to avoid
> > using with retpoline. So we try to avoid underflows, and this filling
> > helps us with that.
>
> That's no longer true for Broadwell with
On Mon, Jan 08, 2018 at 05:16:02PM -0800, Andi Kleen wrote:
> > If we clear the registers, what the hell are you going to put in the
> > RSB that helps you?
>
> RSB allows you to control chains of gadgets.
I admit the gadget thing is a bit obscure.
There's another case we
> If we clear the registers, what the hell are you going to put in the
> RSB that helps you?
RSB allows you to control chains of gadgets.
You can likely find some chain of gadgets that set up constants in registers in a
lot of useful ways. Perhaps not any way (so may be hard to scan through all
> So I was really hoping that in places like context switching etc, we'd
> be able to instead effectively kill off any exploits by clearing
> registers.
>
> That should make it pretty damn hard to then find a matching "gadget"
> that actually does anything interesting/powerful.
>
> Together with
> Probably doesn't matter right there but it's going to end up being used
> elsewhere with IBRS/IBPB, and the compiler is going to think it needs
> to save all the call-clobbered registers for that. Do we want to make
> it use inline asm instead?
You mean KVM?
All the other places have lots of
On Mon, Jan 08, 2018 at 03:56:30PM -0800, Linus Torvalds wrote:
> On Mon, Jan 8, 2018 at 3:44 PM, David Woodhouse wrote:
> >
> > To guard against this fill the return buffer with controlled
> > content during context switch. This prevents any underflows.
>
> Ugh. I really dislike this patch.
From: Andi Kleen <a...@linux.intel.com>
This is an extension of the earlier patch to fill the return buffer
on context switch. It uses the assembler macros added earlier.
When we go into deeper idle states the return buffer could be cleared
in MWAIT, but then another thread which wakes up earlier might
be poisoning
> > Why is none of that done here? Also, can we pretty please stop using
> > those retarded number labels, they make this stuff unreadable.
>
> Personally I find the magic labels with strange ASCII characters
> far less readable than a simple number.
Tried it and \@ is incompatible with .rept.
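The `.rept`-with-numeric-labels shape under discussion is roughly the following sketch (macro name and exact stack adjustment are illustrative; the count of 32 follows the Intel recommendation mentioned in this thread, and the numeric label is used precisely because `\@` does not work inside `.rept`):

```asm
/* Sketch of an RSB filling sequence. Each call pushes a benign return
 * address into the return stack buffer; a speculated ret that consumes
 * one of these entries lands on the pause and goes nowhere harmful. */
.macro FILL_RETURN_BUFFER
	.rept 32
	call	1f		/* push one entry into the RSB */
	pause			/* speculative ret lands here and stalls */
1:
	.endr
	addq	$(32*8), %rsp	/* drop the 32 return addresses again */
.endm
```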
> We want this on vmexit too, right?
Yes. KVM patches are done separately.
-Andi
> So pjt did alignment, a single unroll and per discussion earlier today
> (CET) or late last night (PST), he only does 16.
I used the Intel recommended sequence, which recommends 32.
Not sure if alignment makes a difference. I can check.
> Why is none of that done here? Also, can we pretty
From: Andi Kleen <a...@linux.intel.com>
[This is on top of David's retpoline branch, as of 08-01 this morning]
This patch further hardens retpoline.
CPUs have return buffers which store the return address for
RET to predict function returns. Some CPUs (Skylake, some Broadwells)
can fall back to indirect branch
> > Many of the x86 pipeline.json files have the brief description "Total
> > execution stalls" for both CYCLE_ACTIVITY.CYCLES_NO_EXECUTE and
> > CYCLE_ACTIVITY.STALLS_TOTAL. Should the case for
> > CYCLE_ACTIVITY.CYCLES_NO_EXECUTE have a brief description that mentions
> > cycles? Some of the
It's important to know the I/O statistics of them.
> Perf can collect physical addresses, but those are raw data.
> It still needs extra work to resolve the physical addresses.
> Provide a script to facilitate the physical addresses resolving and
> I/O statistics.
Reviewed-by: Andi Kleen <a...@linux.intel.com>
-Andi
From: Andi Kleen <a...@linux.intel.com>
The internal retpoline thunks used by the compiler contain a dot.
They have to be exported, but modversions cannot handle them
because they don't have a prototype due to the C incompatible
name (and it doesn't support asm("..."))
This leads to lots of warnings fr
> If the *compiler* uses the out-of-line version, that's a separate
> thing. But for our asm cases, let's just make it all be the inline
> case, ok?
Should be a simple change.
>
> It also should simplify the whole target generation. None of this
> silly "__x86.indirect_thunk.\reg" crap with
> Clearly Paul's approach to retpoline without lfence is faster.
> I'm guessing it wasn't shared with amazon/intel until now and
> this set of patches is going to adopt it, right?
>
> Paul, could you share a link to a set of alternative gcc patches
> that do retpoline similar to llvm diff ?
I don't
On Thu, Jan 04, 2018 at 04:02:06PM +0100, Juergen Gross wrote:
> On 04/01/18 15:37, David Woodhouse wrote:
> > Convert pvops invocations to use non-speculative call sequences, when
> > CONFIG_RETPOLINE is enabled.
> >
> > There is scope for future optimisation here — once the pvops methods are
>
On Thu, Jan 04, 2018 at 10:06:01AM -0600, Josh Poimboeuf wrote:
> On Thu, Jan 04, 2018 at 07:59:14AM -0800, Andi Kleen wrote:
> > > NAK. We can't blindly disable objtool warnings, that will break
> > > livepatch and the ORC unwinder. If you share a .o file (or the GCC
&
> NAK. We can't blindly disable objtool warnings, that will break
> livepatch and the ORC unwinder. If you share a .o file (or the GCC
> code) I can look at adding retpoline support.
I don't think we can wait for that. We can disable livepatch and the
unwinder for now. They are not essential.
> +.macro JMP_THUNK reg:req
> +#ifdef RETPOLINE
> + ALTERNATIVE __stringify(jmp __x86.indirect_thunk.\reg),
> __stringify(jmp *%\reg), X86_FEATURE_IBRS_ATT
> +#else
> + jmp *\reg
> +#endif
> +.endm
I removed that because what you're testing for doesn't exist in the tree yet.
Yes it
> > diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
> > b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
> > index 1743e6850e00..9cd8450a2050 100644
> > --- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
> > +++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
> > @@ -12,6 +12,7 @@
> >
>
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in xen inline assembler code to use
non speculative sequences.
Based on code from David Woodhouse and Tim Chen
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 1 +
arch/x86/include/asm/xen/hypercall.h | 3 ++-
2
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in hyperv inline asm code to use
non speculative sequences.
Based on code from David Woodhouse and Tim Chen
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/include/asm/mshyperv.h | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git
out of line trampoline used by the
compiler, and NOSPEC_JUMP / NOSPEC_CALL macros for assembler
[Originally from David and Tim, heavily hacked by AK]
v2: Add CONFIG_RETPOLINE option
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
Signed-off-by: Tim Chen <tim.c.c...@linux.intel.com>
Signed-off-by: Andi Kleen
---
arch/x86/Kconfig | 8 +
arch/x86/include/asm/jump-asm.h | 70
From: Andi Kleen <a...@linux.intel.com>
The speculative jump trampoline has to contain unreachable code.
objtool keeps complaining
arch/x86/lib/retpoline.o: warning: objtool: __x86.indirect_thunk()+0x8:
unreachable instruction
I tried to fix it here by adding ASM_UNREACHABLE annotation (after
supporting them
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in core 32/64bit entry assembler code to use
non speculative sequences.
Based on code from David Woodhouse and Tim Chen
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/entry/entry_32.S | 5 +++--
arch/x86/entry/entry_64.S | 12 +++-
2 files changed
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in crypto assembler code to use
non speculative sequences.
Based on code from David Woodhouse and Tim Chen
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/crypto/aesni-intel_asm.S | 5 +++--
arch/x86/crypto/camellia-aesni-avx-asm_64.S | 3
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in 32bit checksum assembler code to use
non speculative sequences.
Based on code from David Woodhouse and Tim Chen
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/lib/checksum_32.S | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git
This is a fix for Variant 2 in
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
Any speculative indirect calls in the kernel can be tricked
to execute any kernel code, which may allow side channel
attacks that can leak arbitrary kernel data.
So we want to
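The non-speculative call sequence referred to throughout the series (retpoline) has roughly the following shape; this is a sketch, and the label name and the choice of %rax as the target register are illustrative:

```asm
/* Sketch of a retpoline-style indirect jump through %rax. Instead of
 * `jmp *%rax`, which the indirect branch predictor can steer, the
 * target is reached through a RET whose prediction is captured. */
retpoline_rax:
	call	1f		/* push the address of the fixup code */
2:
	pause			/* speculation of the ret lands here... */
	jmp	2b		/* ...and spins in a harmless loop */
1:
	movq	%rax, (%rsp)	/* replace return address with real target */
	ret			/* architecturally jumps to *%rax */
```

The speculative path is trapped in the pause loop while the architectural path still reaches the intended target, which is why the predictor cannot be tricked into executing attacker-chosen kernel code.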
From: Andi Kleen <a...@linux.intel.com>
With the indirect call thunk enabled in the compiler, two objtool
warnings are triggered very frequently and make the build
very noisy.
I don't see a good way to avoid them, so just disable them
for now.
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
tools/objtool/check.c | 11 +++
1 file
From: Dave Hansen
From: David Woodhouse
Add retpoline compile option in Makefile
Update Makefile with retpoline compile options. This requires a gcc with the
retpoline compiler patches enabled.
Print a warning when the compiler doesn't support retpoline
[Originally from David and Tim, but
From: Andi Kleen <a...@linux.intel.com>
When the kernel or a module hasn't been compiled with a retpoline
aware compiler, print a warning and set a taint flag.
For modules it is checked at compile time, however it cannot
check assembler or other non compiled objects used in the module link.
Due to lack of better
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in ftrace assembler code to use
non speculative sequences.
Based on code from David Woodhouse and Tim Chen
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/kernel/ftrace_32.S | 3 ++-
arch/x86/kernel/ftrace_64.S | 6 +++---
2 files changed, 5 insertions
From: Andi Kleen <a...@linux.intel.com>
Convert all indirect jumps in 32bit irq inline asm code to use
non speculative sequences.
Signed-off-by: Andi Kleen <a...@linux.intel.com>
---
arch/x86/kernel/irq_32.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/irq_32.c b/arch/x86/kernel/irq_32.c
index
> So you say, that we finally need a perl interpreter in the kernel to do
> alternative patching?
I don't think perl or objtool makes sense. That would be just incredibly
fragile because compilers can reorder and mix code.
It could be done with a gcc change I suppose. That should be reliable.
On Wed, Jan 03, 2018 at 09:40:04AM +, Hugues FRUCHET wrote:
> Hi Andi,
> Thanks for the patch but I would suggest to use strlcpy instead, this
> will guard msg.name overwriting and add the NULL termination in case
> of truncation:
> - memcpy(msg.name, name, sizeof(msg.name));
> -
> It should be a CPU_BUG bit as we have for the other mess. And that can be
> used for patching.
It has to be done at compile time because it requires a compiler option.
Most of the indirect calls are in C code.
So it cannot just be patched in, only partially out.
-Andi
Hi Linus,
On Wed, Jan 03, 2018 at 03:51:35PM -0800, Linus Torvalds wrote:
> On Wed, Jan 3, 2018 at 3:09 PM, Andi Kleen <a...@firstfloor.org> wrote:
> > This is a fix for Variant 2 in
> > https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html