/intel_mid_pci.c:303:2: error: implicit declaration of function
‘acpi_noirq_set’; did you mean ‘acpi_irq_get’?
[-Werror=implicit-function-declaration]
acpi_noirq_set();
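The error above is the classic symptom of a declaration hidden behind a config option. A minimal sketch of the usual kernel pattern, with illustrative names (not the actual acpi_noirq_set plumbing): the header declares the real function when the feature is configured and a static inline no-op stub otherwise, so callers compile either way.

```c
#ifdef CONFIG_DEMO_ACPI
void demo_noirq_set(void);                   /* real implementation elsewhere */
#else
static inline void demo_noirq_set(void) { }  /* stub when the feature is compiled out */
#endif

static int demo_init(void)
{
    demo_noirq_set();   /* resolves in both configurations, no implicit declaration */
    return 0;
}
```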
Signed-off-by: Randy Dunlap
Cc: Jacob Pan
Cc: Len Brown
Cc: Bjorn Helgaas
Cc: Jesse Barnes
Cc: Arjan van de Ven
Cc: linux
On 2/20/2019 7:35 AM, David Laight wrote:
From: Sent: 16 February 2019 12:56
To: Li, Aubrey
...
The above experiment just confirms what I said: The numbers are inaccurate
and potentially misleading to a large extent when the AVX using task is not
scheduled out for a longer time.
Not only
On 1/14/2019 5:06 AM, Jiri Kosina wrote:
On Mon, 14 Jan 2019, Pavel Machek wrote:
Frankly I'd not call it Meltdown, as it works only on data in the cache,
so the defense is completely different. Seems more like a l1tf
:-).
Meltdown on x86 also seems to work only for data in L1D, but the
On 12/31/2018 8:22 AM, Ben Greear wrote:
On 12/21/2018 05:17 PM, Tim Chen wrote:
On 12/21/18 1:59 PM, Ben Greear wrote:
On 12/21/18 9:44 AM, Tim Chen wrote:
Thomas,
Andi and I have made an update to our draft of the Spectre admin guide.
We may be out on Christmas vacation for a while. But
On 12/17/2018 3:29 AM, Paul E. McKenney wrote:
As does this sort of report on a line that contains simple integer
arithmetic and boolean operations. ;-)
Any chance of a bisection?
btw this looks like something caused a stack overflow and thus all the
weirdness that then happens
On 12/11/2018 3:46 PM, Li, Aubrey wrote:
On 2018/12/12 1:18, Dave Hansen wrote:
On 12/10/18 4:24 PM, Aubrey Li wrote:
The tracking turns on the usage flag at the next context switch of
the task, but requires 3 consecutive context switches with no usage
to clear it. This decay is required
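The tracking scheme described above can be sketched as a small state machine (names here are illustrative, not the patch's): the flag turns on as soon as AVX use is seen at a context switch, and only clears after 3 consecutive switches with no use, so short idle gaps don't flap the flag.

```c
#include <stdbool.h>

#define DECAY_SWITCHES 3   /* consecutive no-use switches before clearing */

struct task_avx {
    bool avx_active;       /* the tracked usage flag */
    int  idle_switches;    /* consecutive switches without AVX use */
};

/* called at each context switch of the task */
static void context_switch_update(struct task_avx *t, bool used_avx_since_last)
{
    if (used_avx_since_last) {
        t->avx_active = true;
        t->idle_switches = 0;
    } else if (t->avx_active && ++t->idle_switches >= DECAY_SWITCHES) {
        t->avx_active = false;         /* decayed: three quiet switches */
        t->idle_switches = 0;
    }
}
```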
On processors with enhanced IBRS support, we recommend setting IBRS to 1
and leaving it set.
Then why doesn't a CPU with EIBRS support actually *default* to '1', with
an opt-out possibility for the OS?
(slightly longer answer)
you can pretty much assume that on these CPUs, IBRS doesn't actually do anything
On processors with enhanced IBRS support, we recommend setting IBRS to 1
and leaving it set.
Then why doesn't a CPU with EIBRS support actually *default* to '1', with
an opt-out possibility for the OS?
the BIOSes could indeed get this set up this way.
do you want to trust the BIOS to get it right?
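The policy in this exchange can be sketched in a few lines (the MSR write itself is elided; the bit value matches IA32_SPEC_CTRL bit 0, but the function and its interface are illustrative): with enhanced IBRS enumerated, SPEC_CTRL.IBRS is written once at boot and left set, instead of being toggled around every kernel entry/exit as legacy IBRS requires.

```c
#include <stdbool.h>
#include <stdint.h>

#define SPEC_CTRL_IBRS (1u << 0)   /* IBRS enable bit in IA32_SPEC_CTRL */

/* boot-time SPEC_CTRL value: eIBRS is set once and never cleared,
 * so there is no per-entry MSR traffic; without eIBRS the kernel
 * would instead toggle the bit around entry/exit (not shown) */
static uint32_t boot_spec_ctrl(bool has_enhanced_ibrs)
{
    return has_enhanced_ibrs ? SPEC_CTRL_IBRS : 0;
}
```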
On 11/21/2018 2:53 PM, Borislav Petkov wrote:
On Wed, Nov 21, 2018 at 11:48:41PM +0100, Thomas Gleixner wrote:
Btw, I really do not like the app2app wording. I'd rather go for usr2usr,
but that's kinda horrible as well. But then, all of this is horrible.
Any better ideas?
It needs to have
On 11/20/2018 11:27 PM, Jiri Kosina wrote:
On Mon, 19 Nov 2018, Arjan van de Ven wrote:
In the documentation, AMD officially recommends against this by default,
and I can speak for Intel that our position is that as well: this really
must not be on by default.
Thanks for pointing to the AMD
On 11/19/2018 6:00 AM, Linus Torvalds wrote:
On Sun, Nov 18, 2018 at 1:49 PM Jiri Kosina wrote:
So why do that STIBP slow-down by default when the people who *really*
care already disabled SMT?
BTW for them, there is no impact at all.
Right. People who really care about security and are
I'd prefer the kernel to do such clustering...
I think that is a next step.
Also, while the kernel can do this at a best effort basis, it cannot
take into account things the kernel doesn't know about, like high
priority job peak load etc.., things a job scheduler would know.
Then again, a
On 7/13/2018 12:19 PM, patrickg wrote:
This RFC patch is intended to allow bypassing the CPUID, MSR and QuickPIT calibration
methods should the user desire to.
The current ordering in ML x86 tsc is to calibrate in the order listed above,
returning whenever there's a successful calibration. However
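The ordering described above can be sketched as a priority walk over calibration methods (the names and the bypass-mask interface are assumptions for illustration, not the RFC patch's actual code): skip any method the user bypassed, and return the first nonzero result.

```c
enum { CAL_CPUID, CAL_MSR, CAL_QUICKPIT, CAL_PIT, CAL_COUNT };

typedef unsigned long (*calibrate_fn)(void);

/* walk methods in priority order; 0 means "failed, try the next one";
 * bypass_mask lets the user skip individual methods */
static unsigned long calibrate_tsc(calibrate_fn methods[CAL_COUNT],
                                   unsigned bypass_mask)
{
    for (int i = 0; i < CAL_COUNT; i++) {
        unsigned long khz;
        if (bypass_mask & (1u << i))
            continue;                /* user asked to bypass this method */
        if (methods[i] && (khz = methods[i]()) != 0)
            return khz;              /* first successful calibration wins */
    }
    return 0;                        /* everything failed or was bypassed */
}
```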
> To add a bit more to this, Intel just updated their
> IA32_ARCH_CAPABILITIES_MSR
> to have a new bit to sample to figure out whether you need IBRS or not
> during runtime.
actually we updated the document on when you need RSB stuffing,
based on the request of various folks here on LKML.
> > In the past the only guidance was to not load microcode at the same time to the
> > thread siblings of a core. We now have new guidance that the sibling must be
> > spinning and not doing other things that can introduce instability around loading
> > microcode.
>
> Document that properly
> > I meant software wise. You're not going to live migrate from xen to
> > kvm or backwards. or between very radically different versions of the
> > kvm stack.
>
> Forwards migration to a radically newer version certainly happens. So
> when the source hypervisor was too old to tell the VM
tends to only work between HV's that are relatively
> > homogeneous, that's nothing new...
>
> No Arjan, this is just wrong. Well, I suppose it's right in the present
> tense with the IBRS mess on Skylake, but it's _not_ been true until last
> year.
I meant software wise. You're not going to
> > On Mon, Feb 19, 2018 at 4:13 PM, Alan Cox
> wrote:
> > >
> > > In theory there's nothing stopping a guest getting a 'you are about to
> > > gain/lose IBRS' message or having a new 'CPU' hotplugged and the old one
> > > removed.
> >
> > I'm not convinced we handle the case of hotplug CPU's
> On Mon, 19 Feb 2018 23:42:24 +, "Van De Ven, Arjan" said:
>
> > the guest is not the problem; guests obviously will already honor if
> > Enhanced
> > IBRS is enumerated. The problem is mixed migration pools where the
> hypervisor
> > may need to
>
> >>> Even if the guest doesn't have/support IBRS_ALL, and is frobbing the
> >>> (now emulated) MSR on every kernel entry/exit, that's *still* going to
> >>> be a metric shitload faster than what it *thought* it was doing.
>
> Is there any indication/log to the admin that VM doesn't know about
On 2/16/2018 11:43 AM, Linus Torvalds wrote:
On Fri, Feb 16, 2018 at 11:38 AM, Linus Torvalds
wrote:
Of course, your patch still doesn't allow for "we claim to be skylake
for various other independent reasons, but the RSB issue is fixed".
.. maybe nobody ever has a reason to do that,
On 2/14/2018 11:29 AM, Andy Shevchenko wrote:
On Mon, Feb 12, 2018 at 9:50 PM, Srinivas Pandruvada
wrote:
On systems supporting HWP (Hardware P-States) mode, we expected to
enumerate core priority via ACPI-CPPC tables. Unfortunately deployment of
TURBO 3.0 didn't use this method to show core
So, any hints on what you think should be the correct fix here?
the patch sure looks correct to me, it now has a nice table for CPU IDs
including all of AMD (and soon hopefully the existing Intel ones that are not
exposed to meltdown)
> > Raw diff between the mainline blacklist and the bulletin looks like:
> > @@ -1,5 +1,6 @@
> > { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
> > { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
> > +{ INTEL_FAM6_BROADWELL_X,0x01, 0x0b23 },
> > { INTEL_FAM6_BROADWELL_X,0x01,
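The diff above edits a blacklist table of (model, stepping, microcode revision) tuples. A sketch of how such a table is consulted, with the family-6 model numbers written as raw placeholders and the exact match condition simplified for illustration:

```c
#include <stdbool.h>
#include <stddef.h>

struct sku {
    unsigned model, stepping, microcode;
};

/* entries taken from the diff above; 0x3d/0x47/0x4f stand in for the
 * INTEL_FAM6_BROADWELL_* macros */
static const struct sku spectre_bad_microcodes[] = {
    { 0x3d /* BROADWELL_CORE */, 0x04, 0x28 },
    { 0x47 /* BROADWELL_GT3E */, 0x01, 0x1b },
    { 0x4f /* BROADWELL_X */,    0x01, 0x0b23 },
};

/* does the running CPU match a blacklisted microcode revision? */
static bool bad_spectre_microcode(unsigned model, unsigned stepping,
                                  unsigned microcode)
{
    for (size_t i = 0; i < sizeof(spectre_bad_microcodes) /
                           sizeof(spectre_bad_microcodes[0]); i++) {
        const struct sku *s = &spectre_bad_microcodes[i];
        if (s->model == model && s->stepping == stepping &&
            s->microcode == microcode)
            return true;
    }
    return false;
}
```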
> On Wed, Jan 31, 2018 at 8:55 AM, Paolo Bonzini wrote:
>
> > In fact this MSR can even be passed down unconditionally, since it needs
> > no save/restore and has no ill performance effect on the sibling
> > hyperthread.
>
> I'm a bit surprised to hear that IBPB has no ill performance impact
On 1/31/2018 2:15 AM, Thomas Gleixner wrote:
Good luck with making all that work.
on the Intel side we're checking what we can do that works and doesn't break
things right now; hopefully we just end up with a bit in the arch capabilities
MSR for "you should do RSB stuffing" and then the HV's
> > short term there was some extremely rudimentary static analysis done.
> > clearly
> > that has extreme limitations both in insane rate of false positives, and
> > missing
> > cases.
>
> What was the output roughly, how many suspect places that need
> array_idx_nospec()
> handling: a few, a
> > Anyway, I do think the patches I've seen so far are ok, and the real
> > reason I'm writing this email is actually more about future patches:
> > do we have a good handle on where these array index sanitations will
> > be needed?
the obvious cases are currently obviously being discussed.
but
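The array index sanitization under discussion boils down to one branchless masking idiom. A sketch mirroring the generic C fallback behind array_index_nospec() (the real kernel also has arch-specific versions, and this standalone form elides the optimization barriers): compute an all-ones mask when index < size and an all-zeroes mask otherwise, without a conditional branch the CPU could speculate past, then AND it into the index.

```c
#include <limits.h>

/* if index >= size, (size - 1 - index) underflows and sets the sign bit,
 * so the arithmetic right shift yields 0; otherwise it yields ~0UL */
static unsigned long index_mask_nospec(unsigned long index, unsigned long size)
{
    return ~(long)(index | (size - 1UL - index)) >>
           (sizeof(long) * CHAR_BIT - 1);
}

/* clamp a possibly-attacker-controlled index to 0 when out of bounds */
static unsigned long sanitized_index(unsigned long index, unsigned long size)
{
    return index & index_mask_nospec(index, size);
}
```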
On 1/30/2018 5:11 AM, Borislav Petkov wrote:
On Tue, Jan 30, 2018 at 01:57:21PM +0100, Thomas Gleixner wrote:
So much for the theory. That's not going to work. If the boot cpu has the
feature then the alternatives will have been applied. So even if the flag
mismatch can be observed when a
On 1/29/2018 7:32 PM, Linus Torvalds wrote:
On Mon, Jan 29, 2018 at 5:32 PM, Arjan van de Ven wrote:
the most simple solution is that we set the internal feature bit in Linux
to turn on the "stuff the RSB" workaround if we're on a SKL *or* as a guest
in a VM.
That sounds
On 1/29/2018 4:23 PM, Linus Torvalds wrote:
Why do you even _care_ about the guest, and how it acts wrt Skylake?
What you should care about is not so much the guests (which do their
own thing) but protect guests from each other, no?
the most simple solution is that we set the internal feature
On 1/29/2018 12:42 PM, Eduardo Habkost wrote:
The question is how the hypervisor could tell that to the guest.
If Intel doesn't give us a CPUID bit that can be used to tell
that retpolines are enough, maybe we should use a hypervisor
CPUID bit for that?
the objective is to have retpoline be
> On 29/01/2018 01:58, KarimAllah Ahmed wrote:
> > Add direct access to MSR_IA32_SPEC_CTRL for guests. Future intel processors
> > will use this MSR to indicate RDCL_NO (bit 0) and IBRS_ALL (bit 1).
>
> This has to be customizable per-VM (similar to the patches Amazon posted
> a while ago for
>
> On Sun, 2018-01-28 at 12:40 -0800, Andy Lutomirski wrote:
> >
> > Do you mean that the host would intercept the guest WRMSR and do
> > WRMSR itself? I would suggest that doing so is inconsistent with the
> > docs. As specified, doing WRMSR to write 1 to IBRS does *not*
> > protect the
>
> > you asked before and even before you sent the email I confirmed to
> > you that the document is correct
> >
> > I'm not sure what the point is to then question that again 15 minutes
> > later other than creating more noise.
>
> Apologies, I hadn't seen the comment on IRC.
>
> Sometimes the
> On Fri, 2018-01-26 at 10:12 -0800, Arjan van de Ven wrote:
> > On 1/26/2018 10:11 AM, David Woodhouse wrote:
> > >
> > > I am *actively* ignoring Skylake right now. This is about per-SKL
> > > userspace even with SMEP, because we think Intel's document lie
On 1/26/2018 10:11 AM, David Woodhouse wrote:
I am *actively* ignoring Skylake right now. This is about per-SKL
userspace even with SMEP, because we think Intel's document lies to us.
if you think we lie to you then I think we're done with the conversation?
Please tell us then what you
On 1/26/2018 7:27 AM, Dave Hansen wrote:
On 01/26/2018 04:14 AM, Yves-Alexis Perez wrote:
I know we'll still be able to manually enable PTI with a command line option,
but it's also a hardening feature which has the nice side effect of emulating
SMEP on CPU which don't support it (e.g the Atom
This patch tries to address the case when we do switch to init_mm and back.
Do you still have objections to the approach in this patch
to save the last active mm before switching to init_mm?
how do you know the last active mm did not go away and started a new process
with new content?
(other
The idea is simple, do what we do for virt. Don't send IPI's to CPUs
that don't need them (in virt's case because the vCPU isn't running, in
our case because we're not in fact running a user process), but mark the
CPU as having needed a TLB flush.
I am really uncomfortable with that idea.
You
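The virt-style deferral being proposed can be sketched in a few lines (heavily simplified: one mm, per-CPU state as plain arrays where the kernel uses percpu data and cpumasks): instead of IPI'ing CPUs that aren't running a user process, mark them as needing a flush and let them flush locally when they next pick up a user mm.

```c
#include <stdbool.h>

#define NR_DEMO_CPUS 4

static bool cpu_lazy[NR_DEMO_CPUS];          /* not running user code */
static bool cpu_needs_flush[NR_DEMO_CPUS];   /* deferred-flush marker */
static int  ipis_sent;

/* flush on all other CPUs: defer for lazy ones, IPI the rest */
static void flush_tlb_others(void)
{
    for (int cpu = 0; cpu < NR_DEMO_CPUS; cpu++) {
        if (cpu_lazy[cpu])
            cpu_needs_flush[cpu] = true;     /* no IPI: flush later */
        else
            ipis_sent++;                     /* would send the IPI here */
    }
}

/* called when a lazy CPU switches back to a user mm;
 * returns whether a deferred local flush was performed */
static bool exit_lazy_mode(int cpu)
{
    bool flushed = cpu_needs_flush[cpu];
    cpu_needs_flush[cpu] = false;
    cpu_lazy[cpu] = false;
    return flushed;
}
```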
On 1/25/2018 5:50 AM, Peter Zijlstra wrote:
On Thu, Jan 25, 2018 at 05:21:30AM -0800, Arjan van de Ven wrote:
This means that 'A -> idle -> A' should never pass through switch_mm to
begin with.
Please clarify how you think it does.
the idle code does leave_mm() to avoid having to IP
This means that 'A -> idle -> A' should never pass through switch_mm to
begin with.
Please clarify how you think it does.
the idle code does leave_mm() to avoid having to IPI CPUs in deep sleep states
for a tlb flush.
(trust me, that you really want, sequentially IPI's a pile of cores in a
kind of hate the whitelist, but
Arjan is very insistent...)
Ick, no, whitelists are a pain for everyone involved. Don't do that
unless it is absolutely the only way it will ever work.
Arjan, why do you think this can only be done as a whitelist?
I suggested a minimum version list for those cpus
> > It is a reasonable approach. Let a process who needs max security
> > opt in with disabled dumpable. It can have a flush with IBPB clear before
> > starting to run, and have STIBP set while running.
> >
>
> Do we maybe want a separate opt in? I can easily imagine things like
> web browsers
On 1/21/2018 8:21 AM, Ingo Molnar wrote:
So if it's only about the scheduler barrier, what cycle cost are we talking about here?
in the order of 5000 to 10000 cycles.
(depends a bit on the cpu generation but this range is a reasonable
approximation)
Because putting something like this
> Enabling IBRS does not prevent software from controlling the predicted
> targets of indirect branches of unrelated software executed later at
> the same predictor mode (for example, between two different user
> applications, or two different virtual machines). Such isolation can
> be ensured
Does anybody have any other ideas?
the only other weird case that comes to mind; what happens if there's a line
dirty in the caches,
but the memory is now mapped uncached. (Which could happen if kexec does muck
with MTRRs, CR0 or other similar
things in weird ways)... not sure what happens
Does anybody have any other ideas?
wbinvd is thankfully not common, but also not rare (MTRR setup and a bunch of
other cases)
and in some other operating systems it happens even more than on Linux.. it's
generally not totally broken like this.
I can only imagine a machine check case where a
> I just sent a v3 that changes the VERMAGIC only, based on Greg's
> earlier feedback.
>
> It has the drawbacks that it:
> - refuses loading instead of warns
> - doesn't stop refusing when the feature is runtime disabled
>
> But it's much simpler, just a few lines of ifdef.
I think simple is
> Having firmware refill the RSB only makes a difference if you are on
> Skylake+ where RSB underflows are bad, and you're not using IBRS to
> protect your indirect predictions.
... and before that you don't need it.
This would mean that userspace would see return predictions based
on the values the kernel 'stuffed' into the RSB to fill it.
Potentially this leaks a kernel address to userspace.
KASLR pretty much died in May this year to be honest with the KAISER paper (if
not before then)
also with KPTI