On 8/12/20 8:54 AM, Oleg Nesterov wrote:
> On 08/11, Oleg Nesterov wrote:
>>
>> On 08/11, Jens Axboe wrote:
>>>
>>> --- a/kernel/task_work.c
>>> +++ b/kernel/task_work.c
>>> @@ -42,7 +42,8 @@ task_work_add(struct task_struct *task, struct callback_head *work, int notify)
>>>             set_notify_resume(task);
>>>             break;
>>>     case TWA_SIGNAL:
>>> -           if (lock_task_sighand(task, &flags)) {
>>> +           if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
>>> +               lock_task_sighand(task, &flags)) {
>>
>> Aaaaah, sorry Jens, now I think this is racy. So I am glad I didn't add
>> this optimization to the initial version ;)
>>
>> It is possible that JOBCTL_TASK_WORK is set but ->task_works == NULL. Say,
>> after task_work_add(TWA_SIGNAL) + task_work_cancel(), or after the target
>> task calls task_work_run() before it enters get_signal().
>>
>> And in this case another task_work_add(tsk, TWA_SIGNAL) can actually race
>> with get_signal() which does
>>
>>      current->jobctl &= ~JOBCTL_TASK_WORK;
>>      if (unlikely(current->task_works)) {
>>              spin_unlock_irq(&sighand->siglock);
>>              task_work_run();
>>
>> Nothing guarantees that get_signal() sees ->task_works != NULL. This is
>> probably what Jann meant.
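
To make the race concrete: with the READ_ONCE() optimization but without a
barrier in get_signal(), this is the classic store-buffer pattern. One losing
interleaving (a sketch; it assumes JOBCTL_TASK_WORK was left set by an earlier
add whose work was since cancelled or run):

    /*
     * target task, in get_signal()          another CPU, in task_work_add()
     *
     * current->jobctl &= ~JOBCTL_TASK_WORK;
     *   (plain store)                       cmpxchg() queues the new work
     *                                       READ_ONCE(task->jobctl) still sees
     *                                       JOBCTL_TASK_WORK set, so it skips
     *                                       lock_task_sighand()/signal_wake_up()
     * if (unlikely(current->task_works))
     *   (plain load, may not yet see the
     *    work queued by the cmpxchg())
     *
     * The work stays queued but the task is never signalled: a lost wakeup.
     */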
>>
>> We can probably add a barrier to get_signal(), but I didn't sleep today;
>> I'll try to think about it tomorrow.
> 
> I see nothing better than the additional change below. Peter, do you see
> another solution?
> 
> This needs a comment to explain that this mb() pairs with another barrier
> provided by cmpxchg() in task_work_add(). It ensures that either get_signal()
> sees the new work added by task_work_add(), or task_work_add() sees the
> result of "&= ~JOBCTL_TASK_WORK".
> 
> Oleg.
> 
> --- x/kernel/signal.c
> +++ x/kernel/signal.c
> @@ -2541,7 +2541,7 @@ bool get_signal(struct ksignal *ksig)
>  
>  relock:
>       spin_lock_irq(&sighand->siglock);
> -     current->jobctl &= ~JOBCTL_TASK_WORK;
> +     smp_store_mb(current->jobctl, current->jobctl & ~JOBCTL_TASK_WORK);
>       if (unlikely(current->task_works)) {
>               spin_unlock_irq(&sighand->siglock);
>               task_work_run();
> 
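
To spell out the pairing described above (a sketch of the argument, not the
final comment wording): as noted, this is a store-buffer pattern, so each side
needs a full barrier between its store and its load:

    /*
     * task_work_add()                       get_signal()
     *
     * cmpxchg(&task->task_works, ...);      smp_store_mb(current->jobctl,
     *   (store; full barrier on success)      current->jobctl & ~JOBCTL_TASK_WORK);
     * READ_ONCE(task->jobctl)                 (store + full barrier)
     *                                       read of current->task_works
     *
     * Either task_work_add() sees JOBCTL_TASK_WORK cleared and takes the
     * lock_task_sighand()/signal_wake_up() path, or get_signal() sees the
     * work queued by the cmpxchg() and runs it.
     */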

I think this should work when paired with the READ_ONCE() on the
task_work_add() side. I haven't managed to reproduce any badness with the
existing code, which doesn't have the smp_store_mb() here, so I can't
verify much beyond that...
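
For reference, the TWA_SIGNAL case in task_work_add() with both changes
applied would read roughly like this (a sketch assembled from the hunks
above, untested):

    case TWA_SIGNAL:
        /*
         * Only take ->siglock if JOBCTL_TASK_WORK isn't already set.
         * The cmpxchg() that queued the work is a full barrier and
         * pairs with the smp_store_mb() in get_signal(): either we
         * see the bit cleared here, or get_signal() sees the new work.
         */
        if (!(READ_ONCE(task->jobctl) & JOBCTL_TASK_WORK) &&
            lock_task_sighand(task, &flags)) {
            task->jobctl |= JOBCTL_TASK_WORK;
            signal_wake_up(task, 0);
            unlock_task_sighand(task, &flags);
        }
        break;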

Are you going to send this out as a complete patch?

-- 
Jens Axboe
