[ 013/102] ARM: 7790/1: Fix deferred mm switch on VIVT processors

2013-08-08 Thread Greg Kroah-Hartman
3.10-stable review patch.  If anyone has any objections, please let me know.

--

From: Catalin Marinas <catalin.marinas@arm.com>

commit bdae73cd374e28db544fdd9b77de689a36e3c129 upstream.

As of commit b9d4d42ad9 (ARM: Remove __ARCH_WANT_INTERRUPTS_ON_CTXSW on
pre-ARMv6 CPUs), the mm switching on VIVT processors is done in the
finish_arch_post_lock_switch() function to avoid whole cache flushing
with interrupts disabled. The need for deferred mm switch is stored as a
thread flag (TIF_SWITCH_MM). However, with preemption enabled, we can
have another thread switch before finish_arch_post_lock_switch(). If the
new thread has the same mm as the previous 'next' thread, the scheduler
will not call switch_mm() and the TIF_SWITCH_MM flag won't be set for
the new thread.

This patch moves the switch pending flag to the mm_context_t structure
since this is specific to the mm rather than the thread.
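
For readers following the race, here is a minimal user-space sketch
(illustrative only; the structures, function names, and scheduler steps
below are simplified stand-ins, not kernel code). With the per-thread
flag, the preempting same-mm thread B never sees the pending switch;
with the per-mm flag it does:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for struct mm_struct and struct task_struct. */
struct mm   { bool switch_pending; };          /* per-mm flag (the fix) */
struct task { const char *name; struct mm *mm; bool tif_switch_mm; };

/* The scheduler only calls switch_mm() when the mm actually changes. */
static void sched_switch(struct task *prev, struct task *next, bool per_mm)
{
	if (prev->mm != next->mm) {            /* same mm: switch_mm() skipped */
		if (per_mm)
			next->mm->switch_pending = true;
		else
			next->tif_switch_mm = true;    /* pre-fix: per-thread flag */
	}
}

/* Models finish_arch_post_lock_switch() in the newly scheduled task. */
static void finish_post_lock_switch(struct task *cur, bool per_mm)
{
	bool pending = per_mm ? cur->mm->switch_pending : cur->tif_switch_mm;

	if (pending) {
		if (per_mm)
			cur->mm->switch_pending = false;
		else
			cur->tif_switch_mm = false;
		printf("  %s: cpu_switch_mm() performed\n", cur->name);
	} else {
		printf("  %s: deferred switch lost, stale page tables\n", cur->name);
	}
}

static void run(bool per_mm)
{
	struct mm x = { false }, m = { false };
	struct task p = { "P", &x, false };    /* previous task, mm X   */
	struct task a = { "A", &m, false };    /* 'next' task, mm M     */
	struct task b = { "B", &m, false };    /* preempting task, mm M */

	printf("%s flag:\n", per_mm ? "per-mm" : "per-thread");
	sched_switch(&p, &a, per_mm);  /* P -> A: mm changes, flag is set  */
	/* A is preempted before its finish_arch_post_lock_switch() runs. */
	sched_switch(&a, &b, per_mm);  /* A -> B: same mm, no switch_mm()  */
	finish_post_lock_switch(&b, per_mm);
}

int main(void)
{
	run(false);   /* per-thread flag: B misses the pending switch    */
	run(true);    /* per-mm flag: B sees it and performs the switch  */
	return 0;
}

Running it prints the deferred switch being lost in the per-thread case
and performed in the per-mm case.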

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Marc Kleine-Budde <mkl@pengutronix.de>
Tested-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm/include/asm/mmu.h |2 ++
 arch/arm/include/asm/mmu_context.h |   20 ++++++++++++++++----
 arch/arm/include/asm/thread_info.h |1 -
 3 files changed, 18 insertions(+), 5 deletions(-)

--- a/arch/arm/include/asm/mmu.h
+++ b/arch/arm/include/asm/mmu.h
@@ -6,6 +6,8 @@
 typedef struct {
 #ifdef CONFIG_CPU_HAS_ASID
atomic64_t  id;
+#else
+   int switch_pending;
 #endif
 	unsigned int	vmalloc_seq;
unsigned long   sigpage;
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -55,7 +55,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
 * on non-ASID CPUs, the old mm will remain valid until the
 * finish_arch_post_lock_switch() call.
 */
-   set_ti_thread_flag(task_thread_info(tsk), TIF_SWITCH_MM);
+   mm->context.switch_pending = 1;
else
cpu_switch_mm(mm->pgd, mm);
 }
@@ -64,9 +64,21 @@ static inline void check_and_switch_context(struct mm_struct *mm,
finish_arch_post_lock_switch
 static inline void finish_arch_post_lock_switch(void)
 {
-   if (test_and_clear_thread_flag(TIF_SWITCH_MM)) {
-   struct mm_struct *mm = current->mm;
-   cpu_switch_mm(mm->pgd, mm);
+   struct mm_struct *mm = current->mm;
+
+   if (mm && mm->context.switch_pending) {
+   /*
+* Preemption must be disabled during cpu_switch_mm() as we
+* have some stateful cache flush implementations. Check
+* switch_pending again in case we were preempted and the
+* switch to this mm was already done.
+*/
+   preempt_disable();
+   if (mm->context.switch_pending) {
+   mm->context.switch_pending = 0;
+   cpu_switch_mm(mm->pgd, mm);
+   }
+   preempt_enable_no_resched();
}
 }
 
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -156,7 +156,6 @@ extern int vfp_restore_user_hwstate(struct user_vfp __user *,
 #define TIF_USING_IWMMXT   17
 #define TIF_MEMDIE 18  /* is terminating due to OOM killer */
 #define TIF_RESTORE_SIGMASK	20
-#define TIF_SWITCH_MM  22  /* deferred switch_mm */
 
 #define _TIF_SIGPENDING	(1 << TIF_SIGPENDING)
 #define _TIF_NEED_RESCHED  (1 << TIF_NEED_RESCHED)

