On 1/10/20 4:00 am, Thomas Gleixner wrote:
> On Wed, Sep 30 2020 at 19:03, Peter Zijlstra wrote:
>> On Wed, Sep 30, 2020 at 05:40:08PM +0200, Thomas Gleixner wrote:
>> Also, that preempt_disable() in there doesn't actually do anything.
>> Worse, preempt_disable(); for_each_cpu(); is an anti-pattern. It mixes
>> static_cpu_has() and boot_cpu_has() in the same bloody condition and has
>> a pointless ret variable.
> 

I was being a bit paranoid in mixing the two, trying to allow for a system
where some CPUs support L1D flush and others do not (which would be an
insane configuration).

> I absolutely agree and I really missed it when looking at it before
> merging. cpus_read_lock()/unlock() is the right thing to do if at all.
> 

cpus_read_lock()/unlock() seems like the right thing to do; get_cpu() was
the old method. The idea is to read cpu_data(i) in a hotplug-safe manner.
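
To spell out what that buys us, a minimal sketch (the helper name is made
up, the rest mirrors the hunk further down):

/*
 * Illustration only (helper name invented): walk the task's allowed
 * CPUs under the hotplug read lock so cpu_data(i) stays valid for
 * every CPU inspected; get_cpu()/put_cpu() would only disable
 * preemption and pin the current CPU, it would not keep the other
 * CPUs in the mask from going away.
 */
static bool task_mask_has_smt(struct task_struct *tsk)
{
        bool smt = false;
        int i;

        cpus_read_lock();
        for_each_cpu(i, &tsk->cpus_mask) {
                if (cpu_data(i).smt_active) {
                        smt = true;
                        break;
                }
        }
        cpus_read_unlock();

        return smt;
}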

>> It's shoddy code, that only works if you align the planets right. We
>> really shouldn't provide interfaces that are this bad.
>>
>> Its correct operation is only by accident.
> 
> True :(
> 
> I understand Balbir's problem and it makes some sense to provide a
> solution. We can:
> 
>     1) reject set_affinity() if the task has that flush muck enabled
>        and user space tries to move it to an SMT enabled core

I thought of this, but it would be difficult for users to debug: taskset -c
would simply fail for applications that have the flush enabled, and so on.
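
A rough sketch of what 1) might look like; the helper and its call site in
the affinity-change path are hypothetical, only TIF_SPEC_L1D_FLUSH and
cpu_data(i).smt_active are taken from the patch. It also shows why the
failure is opaque: user space just gets -EINVAL back from
sched_setaffinity().

/*
 * Hypothetical sketch of 1): reject a new affinity mask when the task
 * has L1D flush enabled and the new mask contains a CPU with SMT
 * active.  Helper name and call site are made up.  Hotplug locking is
 * left to the caller (the affinity path).
 */
static int l1d_flush_check_new_mask(struct task_struct *tsk,
                                    const struct cpumask *new_mask)
{
        int i;

        if (!test_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH))
                return 0;

        for_each_cpu(i, new_mask) {
                if (cpu_data(i).smt_active)
                        return -EINVAL;
        }

        return 0;
}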

> 
>     2) disable the muck if it detects that it suddenly runs on an SMT
>        enabled core (hotplug says hello)
> 
>        This one is nasty because there is no feedback to user space
>        about the wreckage.

Yes, agreed.

> 

Looking at the concerns, I wonder if this can still be saved:

-       if (!boot_cpu_has_bug(X86_BUG_L1TF) ||
-                       !static_cpu_has(X86_FEATURE_FLUSH_L1D))
+       if (!static_cpu_has_bug(X86_BUG_L1TF) ||
+                       !static_cpu_has(X86_FEATURE_FLUSH_L1D))
                return -EINVAL;

-       cpu = get_cpu();

+       cpus_read_lock();
        for_each_cpu(i, &tsk->cpus_mask) {
                if (cpu_data(i).smt_active == true) {
-                       put_cpu();
+                       cpus_read_unlock();
                        return -EINVAL;
                }
        }
+       cpus_read_unlock();

        set_ti_thread_flag(&tsk->thread_info, TIF_SPEC_L1D_FLUSH);
-       put_cpu();
-       return ret;
+       return 0;
 }

I don't like the idea of iterating over the CPUs in the task's cpumask to
check that they all have SMT disabled, but that is a requirement for the
flush to make sense.
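
Purely as an idea: if the SMT hotplug code maintained a cpumask of cores
with SMT currently active (cpu_smt_active_mask below is an invented name,
no such mask exists today), the walk would collapse into one intersection
check:

/*
 * Invented alternative: cpu_smt_active_mask would have to be
 * maintained by the SMT hotplug code.  Shown only to illustrate how
 * the per-CPU loop could become a single mask operation.
 */
static int l1d_flush_check_mask(struct task_struct *tsk)
{
        int ret = 0;

        cpus_read_lock();
        if (cpumask_intersects(&tsk->cpus_mask, &cpu_smt_active_mask))
                ret = -EINVAL;
        cpus_read_unlock();

        return ret;
}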

Balbir
