On Sun, 2018-01-21 at 19:37 +, Andrew Cooper wrote:
>
> It doesn't matter if an attacker can use SP1 to try and skip the IBPB.
>
> Exits to userspace/guest are serialising (with some retroactive updates
> to the architecture spec coming), so an attacker can't cause victim code
> to be
On Sun, 2018-01-21 at 20:01 +0100, Borislav Petkov wrote:
>
> so execution runs directly into the MSR write and the JMP is gone.
>
> So I don't see indirect branches anywhere...
Wait until the wind changes.
Congratulations, you've just turned a potential GCC missed optimisation
into a kernel
On Sun, 2018-01-21 at 20:04 +0100, Borislav Petkov wrote:
> On Sun, Jan 21, 2018 at 06:54:22PM +0000, David Woodhouse wrote:
> > Because we're backporting this to every stable kernel under the
> sun,
> > and they don't already require asm-goto.
>
> Considering the
On Sun, 2018-01-21 at 19:06 +0100, Borislav Petkov wrote:
>
> > switch to using ALTERNATIVES instead of static_cpu_has]
>
> Why?
>
> if (static_cpu_has(X86_FEATURE_IBPB))
> wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
>
> It can't get any more readable than this. Why
> On Sat, 20 Jan 2018, KarimAllah Ahmed wrote:
>> From: David Woodhouse <d...@amazon.co.uk>
>>
>> Not functional yet; just add the handling for it in the Spectre v2
>> mitigation selection, and the X86_FEATURE_IBRS flag which will control
>> the code to be added in later pa
> On 01/21/2018, 10:49 AM, David Woodhouse wrote:
>> Add MSR and bit definitions for SPEC_CTRL, PRED_CMD and
>> ARCH_CAPABILITIES.
>>
>> See Intel's 336996-Speculative-Execution-Side-Channel-Mitigations.pdf
>>
>> Signed-off-by: David Woodhouse <d...@ama
> On Sun, Jan 21, 2018 at 12:22:47PM -0000, David Woodhouse wrote:
>> Yeah, that's fat-fingered in a cut/paste in refactoring. Fixed in what I
>> posted this morning.
>
> Hmm, I better switch to v2 then. With the crazy amount of patchsets
> flying around, I could use
> On Sat, Jan 20, 2018 at 12:03:31PM +0000, David Woodhouse wrote:
>> AMD doesn't implement the Speculation Control MSR that Intel does, but
>> the Prediction Control MSR does exist and is advertised by a separate
>> CPUID bit. Add support for that.
>>
>>
> On Sat, Jan 20, 2018 at 08:22:55PM +0100, KarimAllah Ahmed wrote:
>> From: Tim Chen
>>
>> Flush indirect branches when switching into a process that marked
>> itself non dumpable. This protects high value processes like gpg
>> better, without having too high performance overhead.
>
> So if I
When they advertise the IA32_ARCH_CAPABILITIES MSR and it has the RDCL_NO
bit set, they don't need KPTI either.
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/kernel/cpu/common.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kern
From: Thomas Gleixner <t...@linutronix.de>
[peterz: comment]
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/mm/tlb.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index a156195
AMD doesn't implement the Speculation Control MSR that Intel does, but
the Prediction Control MSR does exist and is advertised by a separate
CPUID bit. Add support for that.
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/scattered.c | 1 +
2
From: Andi Kleen <a...@linux.intel.com>
Flush indirect branches when switching into a process that marked
itself non dumpable. This protects high value processes like gpg
better, without having too high performance overhead.
Signed-off-by: Andi Kleen <a...@linux.intel.com>
Signed-off-by: David Woodhouse
Signed-off-by: KarimAllah Ahmed
discussion of the final patch to tweak precisely when
we use IBPB in context switch.
---
v2: Fix STIPB/STIBP typo
Fix error in AMD CPUID bit definition (0x8000_0008 EBX[12])
Ashok Raj (1):
x86/kvm: Add IBPB support
David Woodhouse (4):
x86/cpufeatures: Add Intel feature bits for Speculation
tel.com>
Cc: Andy Lutomirski <l...@kernel.org>
Cc: Greg KH <gre...@linuxfoundation.org>
Cc: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Ashok Raj <ashok@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Link:
http://lkml.kernel.org/r/1515720739-43819-6-git-send-email-ashok@intel.com
Signed-off-by: David Woodhouse
Signed-off-by: KarimAllah Ahmed
---
arch/x86/kvm/svm.c | 14 ++
arch/x86/kvm/vmx.c | 11
-by: David Woodhouse <d...@amazon.co.uk>
Reviewed-by: Borislav Petkov <b...@suse.de>
---
arch/x86/include/asm/cpufeature.h | 7 +--
arch/x86/include/asm/cpufeatures.h | 12 +---
arch/x86/include/asm/disabled-features.h | 3 ++-
arch/x86/include/asm/required-features.h | 3 ++-
arch/x86/kernel/cpu/common.c
Add MSR and bit definitions for SPEC_CTRL, PRED_CMD and ARCH_CAPABILITIES.
See Intel's 336996-Speculative-Execution-Side-Channel-Mitigations.pdf
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/include/asm/msr-index.h | 11 +++
1 file changed, 11 insertions(+)
diff
he asm too so it gets NOP'd out]
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Signed-off-by: KarimAllah Ahmed <karah...@amazon.de>
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/include/asm/nospec-branch.h | 16 ++
arch/x86/kernel/cpu/bugs.c | 7 +++
3 files changed, 24 insertions
On Sat, 2018-01-20 at 13:51 -0800, Steven Noonan wrote:
>
> > +#define X86_FEATURE_STIPB (18*32+27) /* Speculation
> Control with STIPB (Intel) */
>
> Is this correct? I thought the acronym was "STIBP", i.e.
> "Single-Thread Indirect Branch Prediction"? If so, then you've got the
> B
Add three feature bits exposed by new microcode on Intel CPUs for
speculation control. We would now be up to five bits in CPUID(7).RDX
so take them out of the 'scattered' features and make a proper word
for them instead.
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/inclu
for Intel CPUs which say they don't need it.
The rest of the bits to actually *use* the features are still being
worked out, but this much is fairly straightforward so it's a good
start.
David Woodhouse (4):
x86/cpufeatures: Add Intel feature bits for Speculation Control
x86/cpufeature
Add MSR and bit definitions for SPEC_CTRL, PRED_CMD and ARCH_CAPABILITIES.
See Intel's 336996-Speculative-Execution-Side-Channel-Mitigations.pdf
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/include/asm/msr-index.h | 11 +++
1 file changed, 11 insertions(+)
diff
When they advertise the IA32_ARCH_CAPABILITIES MSR and it has the RDCL_NO
bit set, they don't need KPTI either.
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/kernel/cpu/common.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kern
AMD doesn't implement the Speculation Control MSR that Intel does, but
the Prediction Control MSR does exist and is advertised by a separate
CPUID bit. Add support for that.
Signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/scattered.c | 1 +
2
On Sat, 2018-01-20 at 17:00 +0800, Hou Tao wrote:
>
> So has anyone encountered a similar problem before, and any suggestions
> and directions for the hard LOCKUP problems ?
Arjan, what is the Intel recommendation here?
On Fri, 2018-01-19 at 16:25 +0100, Paolo Bonzini wrote:
> Without retpolines, KVM userspace is not protected from the guest
> poisoning the BTB, because there is no IBRS-barrier on the vmexit
> path.
> So there are two more IBPBs that are needed if retpolines are
> enabled:
>
> 1) in
direct thunk
For all three:
Acked-by: David Woodhouse <d...@amazon.co.uk>
Cc: sta...@vger.kernel.org
Thank you.
On Thu, 2018-01-18 at 14:48 +0100, Peter Zijlstra wrote:
> Now that we have objtool to validate the correctness of asm-goto
> constructs we can start using it to guarantee the absence of dynamic
> branches (and thus speculation).
>
> A primary prerequisite for this is of course that the compiler
piler can apply the proper speculation protection.
>
> Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Reviewed-by: David Woodhouse <d...@amazon.co.uk>
On Thu, 2018-01-18 at 05:01 -0800, Andi Kleen wrote:
> >
> > Side effect: [1/3] will move __x86_indirect_thunk_* functions
> > in kernel text area. Of course those functions were in the
> > .text area, but placed in right after _etext. This just moves
> > it right before the _etext.
> I assume
>
> This eliminates several instructions and avoids unnecessarily
> clobbering a register.
>
> Signed-off-by: Andi Kleen
We still clobber the register, but you're right it's now filled in the
__FILL_RETURN_BUFFER macro itself. It was a previous iteration which
had the loop count passed in.
Acked-by: David Woodhouse <d...@amazon.co.uk>
On Mon, 2018-01-15 at 10:06 -0800, Andy Lutomirski wrote:
>
> > Refill or not, you are aware that a correctly timed SMI in a leaf
> > function will cause the next ret to speculate into userspace, because
> > there is guaranteed perturbance in the RSB? (On the expectation that the
> > SMM handler
On Mon, 2018-01-15 at 11:22 -0600, Josh Poimboeuf wrote:
> And also, people without objtool enabled (i.e., no ORC or livepatch)
> won't see the assertion. Do we care about those people? :-)
I think that's OK. Peter is right that this *would* be a GCC
regression. Someone who *does* have objtool
On Mon, 2018-01-15 at 14:35 +, David Laight wrote:
> From: David Woodhouse
> >
> > Sent: 14 January 2018 17:04
> > x86/retpoline: Fill RSB on context switch for affected CPUs
> >
> > On context switch from a shallow call stack to a deeper one, as the CPU
On Mon, 2018-01-15 at 14:45 +0100, Peter Zijlstra wrote:
> On Fri, Jan 12, 2018 at 10:09:08AM +0000, David Woodhouse wrote:
> > static_cpu_has() + asm-goto is NOT SUFFICIENT.
> >
> > It's still *possible* for a missed optimisation in GCC to still leave
> > us with
On Mon, 2018-01-15 at 12:53 +, Van De Ven, Arjan wrote:
>
> binary what? ;-)
>
> retpoline (or lack thereof) is part of the kernel ABI at this point
Strictly speaking, only lack thereof.
If you build the kernel without CONFIG_RETPOLINE, you can't build
modules with retpoline and then
On Mon, 2018-01-15 at 11:03 +0100, Thomas Gleixner wrote:
>
> > Our numbers on Skylake weren't bad, and there seem to be all kinds of
> > corner cases, so again, it seems as if IBRS is the safest choice.
>
> Talk is cheap. Show numbers comparing the full retpoline/RSB mitigation
> compared to
On Sun, 2018-01-14 at 16:05 -0800, Andi Kleen wrote:
> > + if ((!boot_cpu_has(X86_FEATURE_PTI) &&
> > + !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
> > + setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
> > + pr_info("Filling RSB on context switch\n");
>
On Mon, 2018-01-15 at 03:26 -0500, Jon Masters wrote:
>
> Our numbers on Skylake weren't bad, and there seem to be all kinds of
> corner cases, so again, it seems as if IBRS is the safest choice.
If only someone were rapidly iterating the IBRS patch set on top of the
latest tip, fixing the
Commit-ID: c995efd5a740d9cbafbf58bde4973e8b50b4d761
Gitweb: https://git.kernel.org/tip/c995efd5a740d9cbafbf58bde4973e8b50b4d761
Author: David Woodhouse <d...@amazon.co.uk>
AuthorDate: Fri, 12 Jan 2018 17:49:25 +
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Mon, 15 Jan 2018 00:32:44 +0100
x86/retpoline
At the last minute, they were switched from __x86_indirect_thunk_rax to
__x86_indirect_thunk_ax without the 'r' or 'e' on the register name.
Except for the _r[89..] versions, obviously.
This is not entirely an improvement, IMO.
Reluctantly-signed-off-by: David Woodhouse <d...@amazon.co.uk>
---
I think we
On Sun, 2018-01-14 at 13:12 -0800, Linus Torvalds wrote:
> On Sun, Jan 14, 2018 at 1:01 PM, Thomas Gleixner
> wrote:
> >
> >
> > Good point. I'll queue a patch to that effect or do you just want
> > to do
> > that yourself?
> I don't think it's critical, and I don't care for rc8, so it's not
>
Commit-ID: a0ab15c0fb68e202bebd9b17fa49fd7ec48975b3
Gitweb: https://git.kernel.org/tip/a0ab15c0fb68e202bebd9b17fa49fd7ec48975b3
Author: David Woodhouse <d...@amazon.co.uk>
AuthorDate: Fri, 12 Jan 2018 17:49:25 +
Committer: Thomas Gleixner <t...@linutronix.de>
CommitDate: Sun, 14 Jan 2018 16:41:39 +0100
x86/retpoline
On Sat, 2018-01-13 at 14:10 +0100, Peter Zijlstra wrote:
> On Sat, Jan 13, 2018 at 12:30:11PM +0000, David Woodhouse wrote:
> >
> > On Sat, 2018-01-13 at 13:08 +0100, Peter Zijlstra wrote:
> > >
> > >
> > > ALTERNATIVE "
On Sat, 2018-01-13 at 13:08 +0100, Peter Zijlstra wrote:
>
> ALTERNATIVE "orq $(PTI_SWITCH_PGTABLE_MASK), \scratch_reg",
> "orq $(PTI_SWITCH_MASK), \scratch_reg", X86_FEATURE_PCID
>
> Is not wanting to compile though; probably that whole alternative vs
> macro thing
On Fri, 2018-01-12 at 10:45 -0800, Andi Kleen wrote:
> [This is an alternative to David's earlier patch to only
> handle context switch. It handles more cases.]
>
> Skylake needs some additional protections over plain RETPOLINE
> for Spectre_v2.
>
> The CPU can fall back to the potentially
On Fri, 2018-01-12 at 09:55 -0800, Andi Kleen wrote:
> From: Andi Kleen
>
> There's a risk that a kernel that has full retpoline mitigations
> becomes vulnerable when a module gets loaded that hasn't been
> compiled with the right compiler or the right option.
>
> We cannot fix it, but should
On Fri, 2018-01-12 at 18:05 +, Andrew Cooper wrote:
>
> If you unconditionally fill the RSB on every entry to supervisor mode,
> then there are never guest-controlled RSB values to be found.
>
> With that property (and IBRS to protect Skylake+), you shouldn't need
> RSB filling anywhere in
On Fri, 2018-01-12 at 10:02 -0800, Andi Kleen wrote:
> > + if ((!boot_cpu_has(X86_FEATURE_PTI) &&
> > + !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
> > + setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
> > + pr_info("Filling RSB on context switch\n");
>
lution for Skylake+ since there are many
other conditions which may result in the RSB becoming empty. The full
solution on Skylake+ is to use IBRS, which will prevent the problem even
when the RSB becomes empty. With IBRS, the RSB-stuffing will not be
required on context switch.
Signed-off-by: