> On 2018/10/08 10:19, Yong-Taek Lee wrote:
>> @@ -1056,6 +1056,7 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
>>         struct mm_struct *mm = NULL;
>>         struct task_struct *task;
>>         int err = 0;
>> +       int mm_users = 0;
>>
>>         task = get_proc_task(file_inode(file));
>>         if (!task)
>> @@ -1092,7 +1093,8 @@ static int __set_oom_adj(struct file *file, int oom_adj, bool legacy)
>>                 struct task_struct *p = find_lock_task_mm(task);
>>
>>                 if (p) {
>> -                       if (atomic_read(&p->mm->mm_users) > 1) {
>> +                       mm_users = atomic_read(&p->mm->mm_users);
>> +                       if ((mm_users > 1) && (mm_users != get_nr_threads(p))) {
>
> How can this work (even before this patch)? When clone(CLONE_VM without 
> CLONE_THREAD/CLONE_SIGHAND)
> is requested, copy_process() calls copy_signal() in order to copy 
> sig->oom_score_adj and
> sig->oom_score_adj_min before calling copy_mm() in order to increment 
> mm->mm_users, doesn't it?
> Then, we will get two different "struct signal_struct" with different 
> oom_score_adj/oom_score_adj_min
> but one "struct mm_struct" shared by two thread groups.
>

Are you talking about a race between __set_oom_adj and copy_process?
If so, I agree with your opinion. __set_oom_adj cannot set oom_score_adj
properly for the copied process if it checks mm_users after copy_process
has called copy_signal but before copy_mm. Please correct me if I
misunderstood anything.
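
To make the scenario concrete, here is a minimal userspace sketch (my own
illustration, not part of the patch): clone() with CLONE_VM but without
CLONE_THREAD/CLONE_SIGHAND gives the child its own signal_struct, and so
its own oom_score_adj copied by copy_signal(), while the mm_struct stays
shared.

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg)
{
	/* Shares the parent's address space (CLONE_VM) but is a separate
	 * thread group, so this file is backed by a signal_struct of its
	 * own, not the parent's. */
	char buf[16] = { 0 };
	FILE *f = fopen("/proc/self/oom_score_adj", "r");

	if (f) {
		if (fgets(buf, sizeof(buf), f))
			printf("child oom_score_adj: %s", buf);
		fclose(f);
	}
	return 0;
}

int main(void)
{
	const size_t stack_size = 64 * 1024;
	char *stack = malloc(stack_size);
	pid_t pid;

	if (!stack)
		return 1;

	/* CLONE_VM without CLONE_THREAD/CLONE_SIGHAND: one mm_struct,
	 * two signal_structs. oom_score_adj is copied at clone time. */
	pid = clone(child_fn, stack + stack_size, CLONE_VM | SIGCHLD, NULL);
	if (pid < 0) {
		free(stack);
		return 1;
	}

	waitpid(pid, NULL, 0);
	free(stack);
	return 0;
}

If __set_oom_adj runs in the window after copy_signal() but before copy_mm()
has incremented mm_users, it only updates the parent's signal_struct, so the
two tasks end up sharing one mm with different oom_score_adj values, which
is exactly what __set_oom_adj is supposed to prevent.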

>>                                 mm = p->mm;
>>                                 atomic_inc(&mm->mm_count);
>>                         }
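
For what it's worth, my reading of the new test is "the mm is in use by
someone outside p's own thread group". As a paraphrase of the patch (my own
helper name, not code from the tree):

/*
 * Sketch of the intent behind the new check: only grab a reference to the
 * mm (and later update the other users of it) when mm_users holds
 * references that cannot all be accounted for by p's own threads.
 */
static bool mm_shared_outside_thread_group(struct task_struct *p)
{
	int mm_users = atomic_read(&p->mm->mm_users);

	return mm_users > 1 && mm_users != get_nr_threads(p);
}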
