From: Oleg Nesterov <[email protected]>

3.12-stable review patch.  If anyone has any objections, please let me know.
===============

commit 4d4048be8a93769350efa31d2482a038b7de73d0 upstream.

find_lock_task_mm() expects it is called under rcu or tasklist lock, but
it seems that at least oom_unkillable_task()->task_in_mem_cgroup() and
mem_cgroup_out_of_memory()->oom_badness() can call it locklessly.

Perhaps we could fix the callers, but this patch simply adds the rcu lock
into find_lock_task_mm().  This also allows us to simplify one of its
callers, oom_kill_process(), a bit.

Signed-off-by: Oleg Nesterov <[email protected]>
Cc: Sergey Dyasly <[email protected]>
Cc: Sameer Nanda <[email protected]>
Cc: "Eric W. Biederman" <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Mandeep Singh Baines <[email protected]>
Cc: "Ma, Xindong" <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: "Tu, Xiaobing" <[email protected]>
Acked-by: David Rientjes <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Jiri Slaby <[email protected]>
---
 mm/oom_kill.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 7cdd05e685dc..f5bed7f17463 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -102,14 +102,19 @@ struct task_struct *find_lock_task_mm(struct task_struct *p)
 {
 	struct task_struct *t;
 
+	rcu_read_lock();
+
 	for_each_thread(p, t) {
 		task_lock(t);
 		if (likely(t->mm))
-			return t;
+			goto found;
 		task_unlock(t);
 	}
+	t = NULL;
+found:
+	rcu_read_unlock();
 
-	return NULL;
+	return t;
 }
 
 /* return true if the task is not adequate as candidate victim task. */
@@ -461,10 +466,8 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 	}
 	read_unlock(&tasklist_lock);
 
-	rcu_read_lock();
 	p = find_lock_task_mm(victim);
 	if (!p) {
-		rcu_read_unlock();
 		put_task_struct(victim);
 		return;
 	} else if (victim != p) {
@@ -490,6 +493,7 @@ void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order,
 	 * That thread will now get access to memory reserves since it has a
 	 * pending fatal signal.
 	 */
+	rcu_read_lock();
 	for_each_process(p)
 		if (p->mm == mm && !same_thread_group(p, victim) &&
 		    !(p->flags & PF_KTHREAD)) {
-- 
1.9.3
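
For reference, this is how find_lock_task_mm() reads with the first hunk
applied.  It is a sketch reconstructed from the diff above and the context
lines it quotes, with a couple of explanatory comments added; it is not a
verbatim copy of the 3.12 tree:

struct task_struct *find_lock_task_mm(struct task_struct *p)
{
	struct task_struct *t;

	/* Pin the thread list so lockless callers can walk it safely. */
	rcu_read_lock();

	for_each_thread(p, t) {
		task_lock(t);
		if (likely(t->mm))
			goto found;	/* return with task_lock(t) held */
		task_unlock(t);
	}
	t = NULL;
found:
	rcu_read_unlock();

	return t;
}

Both the success and the failure path leave through the found label, so
rcu_read_lock() and rcu_read_unlock() stay balanced, and a successful lookup
still returns with task_lock() held on the chosen thread.  Because the
function now takes the rcu read lock itself, oom_kill_process() no longer
needs its own rcu_read_lock()/rcu_read_unlock() pair around the call, which
is the simplification the second and third hunks make.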
