In switch_mmu_context(), if we call steal_context_smp() to get a context
to use, we shouldn't fall through and then also call steal_context_up().
Doing so is problematic because the 'mm' that steal_context_up() ends up
stealing will not get marked in the stale_map[] of the other CPUs that
might have used that mm.  Those CPUs can then be left with stale TLB
entries, which can cause all kinds of havoc.
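
To illustrate the control flow with the fix applied, here is a minimal,
compilable sketch.  It is not the real kernel code: steal_context_smp()
and steal_context_up() are stubbed out, and pick_context() is a
hypothetical stand-in for the allocation path inside switch_mmu_context().

	#include <stdio.h>

	#define MMU_NO_CONTEXT	((unsigned int)-1)
	#define CONFIG_SMP	1

	/* Stub: on SMP this would pick a victim context, mark the victim mm
	 * in stale_map[] for every CPU that ran it, and return the context id
	 * (or MMU_NO_CONTEXT if it had to drop the lock and must retry). */
	static unsigned int steal_context_smp(unsigned int id)
	{
		return id;
	}

	/* Stub: the UP variant only flushes locally and never updates the
	 * stale_map[] of other CPUs, so it must not be reached on SMP. */
	static unsigned int steal_context_up(unsigned int id)
	{
		return id;
	}

	static int num_online_cpus(void)
	{
		return 2;
	}

	/* Simplified shape of the context-allocation path. */
	static unsigned int pick_context(unsigned int id, int out_of_contexts)
	{
	again:
		if (out_of_contexts) {
	#ifdef CONFIG_SMP
			if (num_online_cpus() > 1) {
				id = steal_context_smp(id);
				if (id == MMU_NO_CONTEXT)
					goto again;
				goto stolen;	/* the fix: skip the UP path */
			}
	#endif
			id = steal_context_up(id);
		}
	stolen:
		return id;
	}

	int main(void)
	{
		printf("got context %u\n", pick_context(5, 1));
		return 0;
	}

With the added 'goto stolen', the SMP case never falls through to
steal_context_up(), so the stale_map[] accounting done by
steal_context_smp() is the only path taken when more than one CPU is
online.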

Signed-off-by: Kumar Gala <ga...@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <b...@kernel.crashing.org>
---
 arch/powerpc/mm/mmu_context_nohash.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index 92a1971..b1a727d 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -217,6 +217,7 @@ void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
                        id = steal_context_smp(id);
                        if (id == MMU_NO_CONTEXT)
                                goto again;
+                       goto stolen;
                }
 #endif /* CONFIG_SMP */
                id = steal_context_up(id);
-- 
1.6.0.6
