On Wed, Aug 28, 2019 at 09:13:31PM +0200, Borislav Petkov wrote:
> On Tue, Aug 27, 2019 at 05:56:30PM -0700, Raj, Ashok wrote:
> > > "Cloud customers have expressed discontent as services disappear for
> > > a prolonged time. The restriction is that only one core (or only one
> > > thread of a core in the case of an SMT system) goes through the update
> > > while other cores are quiesced."
On Tue, Aug 27, 2019 at 05:24:07PM -0400, Boris Ostrovsky wrote:
> This was a bit too aggressive with changes to arch-specific code, only
> changes to __reload_late() would be needed.
Yeah, it is not that ugly but the moment the microcode engine is not
shared between the SMT threads anymore, this
On Tue, Aug 27, 2019 at 05:56:30PM -0700, Raj, Ashok wrote:
> > "Cloud customers have expressed discontent as services disappear for
> > a prolonged time. The restriction is that only one core (or only one
> > thread of a core in the case of an SMT system) goes through the update
> > while other cores are quiesced."
On Tue, Aug 27, 2019 at 02:25:01PM +0200, Borislav Petkov wrote:
> On Mon, Aug 26, 2019 at 01:23:39PM -0700, Raj, Ashok wrote:
> > > Cloud customers have expressed discontent as services disappear for a
> > > prolonged time. The restriction is that only one core goes through the
> > s/one core/one thread of a core/
On 8/27/19 3:43 PM, Boris Ostrovsky wrote:
>
>
> Something like this. I only lightly tested this but if there is interest
> I can test it more.
This was a bit too aggressive with changes to arch-specific code, only
changes to __reload_late() would be needed.
-boris
On Mon, Aug 26, 2019 at 01:32:48PM -0700, Raj, Ashok wrote:
> On Mon, Aug 26, 2019 at 08:53:05AM -0400, Boris Ostrovsky wrote:
> > On 8/24/19 4:53 AM, Borislav Petkov wrote:
> > >
> > > +wait_for_siblings:
> > > +	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
> > > +		panic("Timeout during microcode update!\n");
On Mon, Aug 26, 2019 at 01:23:39PM -0700, Raj, Ashok wrote:
> > Cloud customers have expressed discontent as services disappear for a
> > prolonged time. The restriction is that only one core goes through the
> s/one core/one thread of a core/
>
> > update while other cores are quiesced.
>
On Mon, Aug 26, 2019 at 08:53:05AM -0400, Boris Ostrovsky wrote:
> On 8/24/19 4:53 AM, Borislav Petkov wrote:
> >
> > +wait_for_siblings:
> > +	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
> > +		panic("Timeout during microcode update!\n");
> > +
> >  	/*
> > -	 * Increase the
Hi Boris
Minor nit: Small commit log fixup below.
On Sat, Aug 24, 2019 at 10:53:00AM +0200, Borislav Petkov wrote:
> From: Ashok Raj
> Date: Thu, 22 Aug 2019 23:43:47 +0300
>
Microcode update was changed to be serialized due to restrictions after
Spectre days. Updating serially on a large multi-socket system can be
painful since it is being done on one CPU at a time.
On Mon, Aug 26, 2019 at 08:53:05AM -0400, Boris Ostrovsky wrote:
> What is the advantage of having those other threads go through
> find_patch() and (in Intel case) intel_get_microcode_revision() (which
> involves two MSR accesses) vs. having the master sibling update slaves'
> microcode
On 8/24/19 4:53 AM, Borislav Petkov wrote:
>
> +wait_for_siblings:
> +	if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC))
> +		panic("Timeout during microcode update!\n");
> +
>  	/*
> -	 * Increase the wait timeout to a safe value here since we're
> -	 * serializing the
From: Ashok Raj
Date: Thu, 22 Aug 2019 23:43:47 +0300
Microcode update was changed to be serialized due to restrictions after
Spectre days. Updating serially on a large multi-socket system can be
painful since it is being done on one CPU at a time.
Cloud customers have expressed discontent as services disappear for a
prolonged time. The restriction is that only one core goes through the
update while other cores are quiesced.
12 matches