On 11/27/2018 02:30 PM, Thomas Gleixner wrote:
> On Tue, 27 Nov 2018, Lendacky, Thomas wrote:
>> On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
>>> +/* Update x86_spec_ctrl_base in case SMT state changed. */
>>> +static void update_stibp_strict(void)
>>>  {
>>> -   wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>>> +   u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
>>> +
>>> +   if (sched_smt_active())
>>> +           mask |= SPEC_CTRL_STIBP;
>>> +
>>> +   if (mask == x86_spec_ctrl_base)
>>> +           return;
>>> +
>>> +   pr_info("Spectre v2 user space SMT mitigation: STIBP %s\n",
>>> +           mask & SPEC_CTRL_STIBP ? "always-on" : "off");
>>> +   x86_spec_ctrl_base = mask;
>>> +   on_each_cpu(update_stibp_msr, NULL, 1);
>>
>> After some more testing using spectre_v2_user=on, I've found that during
>> boot, once the first SMT thread is encountered, no more MSR updates for
>> STIBP are done for any CPUs brought up after that. The first SMT thread
>> causes mask != x86_spec_ctrl_base, but then x86_spec_ctrl_base is set
>> to mask and the check always causes an early return for subsequent CPUs
>> that are brought up.
> 
> The above code merely handles the switch between SMT and non-SMT mode,
> because there all other online CPUs need to be updated. After that, each
> upcoming CPU calls x86_spec_ctrl_setup_ap(), which writes the MSR. So
> it's all good.

Yup, sorry for the noise. I was trying to test different scenarios using
some code hacks and put them in the wrong place, so I wasn't triggering
the WRMSR in x86_spec_ctrl_setup_ap().

Thanks,
Tom

> 
> Thanks,
> 
>       tglx
> 
