KASAN splats indicate that in some cases we free a live mm, then
continue to access it, with potentially disastrous results. This is
likely due to a mismatched mmdrop() somewhere in the kernel, but so far
the culprit remains elusive.

Let's have __mmdrop() verify that the mm isn't live for the current
task, similar to the existing check for init_mm. This way, we can catch
this class of issue earlier, and without requiring KASAN.

Currently, idle_task_exit() leaves active_mm stale after it switches to
init_mm. This isn't harmful, but will trigger the new assertions, so we
must adjust idle_task_exit() to update active_mm.

Signed-off-by: Mark Rutland <mark.rutl...@arm.com>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Mathieu Desnoyers <mathieu.desnoy...@efficios.com>
Cc: Michal Hocko <mho...@suse.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rik van Riel <r...@redhat.com>
Cc: Will Deacon <will.dea...@arm.com>
 kernel/fork.c       | 2 ++
 kernel/sched/core.c | 1 +
 2 files changed, 3 insertions(+)

Since v1 [1]:
* Avoid spurious warning in idle_task_exit()


[1] https://lkml.kernel.org/r/20180228121458.2230-1-mark.rutl...@arm.com

diff --git a/kernel/fork.c b/kernel/fork.c
index e5d9d405ae4e..ac94ce894219 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -595,6 +595,8 @@ static void check_mm(struct mm_struct *mm)
 void __mmdrop(struct mm_struct *mm)
 {
        BUG_ON(mm == &init_mm);
+       WARN_ON_ONCE(mm == current->mm);
+       WARN_ON_ONCE(mm == current->active_mm);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e7c535eee0a6..0ef844abc2da 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5506,6 +5506,7 @@ void idle_task_exit(void)
        if (mm != &init_mm) {
                switch_mm(mm, &init_mm, current);
+               current->active_mm = &init_mm;
