[PATCH 4.4 12/78] x86/mm, sched/core: Turn off IRQs in switch_mm()

2017-12-22 Thread Greg Kroah-Hartman
4.4-stable review patch.  If anyone has any objections, please let me know.

--

From: Andy Lutomirski 

commit 078194f8e9fe3cf54c8fd8bded48a1db5bd8eb8a upstream.

Potential races between switch_mm() and TLB-flush or LDT-flush IPIs
could be very messy.  AFAICT the code is currently okay, whether by
accident or by careful design, but enabling PCID will make it
considerably more complicated and will no longer be obviously safe.

Fix it with a big hammer: run switch_mm() with IRQs off.

To avoid a performance hit in the scheduler, we take advantage of
our knowledge that the scheduler already has IRQs disabled when it
calls switch_mm().
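
Restated outside the diff below as a sketch (not a verbatim copy of the
resulting sources): switch_mm() keeps its prototype but becomes a thin
wrapper that masks interrupts around the real work, while a caller that
already runs with IRQs disabled (the scheduler, in a follow-up change
outside this diff) can invoke switch_mm_irqs_off() directly and skip
the flag save/restore:

  void switch_mm(struct mm_struct *prev, struct mm_struct *next,
                 struct task_struct *tsk)
  {
          unsigned long flags;

          local_irq_save(flags);                  /* callers may have IRQs enabled */
          switch_mm_irqs_off(prev, next, tsk);    /* the actual mm switch */
          local_irq_restore(flags);
  }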

Signed-off-by: Andy Lutomirski 
Reviewed-by: Borislav Petkov 
Cc: Borislav Petkov 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: http://lkml.kernel.org/r/f19baf759693c9dcae64bbff76189db77cb13398.1461688545.git.l...@kernel.org
Signed-off-by: Ingo Molnar 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/include/asm/mmu_context.h |    3 +++
 arch/x86/mm/tlb.c                  |   10 ++++++++++
 2 files changed, 13 insertions(+)

--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -107,6 +107,9 @@ static inline void enter_lazy_tlb(struct
 extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
  struct task_struct *tsk);
 
+extern void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+  struct task_struct *tsk);
+#define switch_mm_irqs_off switch_mm_irqs_off
 
 #define activate_mm(prev, next)\
 do {   \
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -64,6 +64,16 @@ EXPORT_SYMBOL_GPL(leave_mm);
 void switch_mm(struct mm_struct *prev, struct mm_struct *next,
   struct task_struct *tsk)
 {
+   unsigned long flags;
+
+   local_irq_save(flags);
+   switch_mm_irqs_off(prev, next, tsk);
+   local_irq_restore(flags);
+}
+
+void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+   struct task_struct *tsk)
+{
unsigned cpu = smp_processor_id();
 
if (likely(prev != next)) {
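
The self-referential "#define switch_mm_irqs_off switch_mm_irqs_off" in
the header hunk is the usual kernel idiom for telling generic code that
the architecture supplies its own implementation: generic headers can
test for the macro and fall back to switch_mm(), which disables IRQs
itself, on architectures that do not provide the variant. A sketch of
that generic-side fallback (illustrative, not part of this patch):

  #ifndef switch_mm_irqs_off
  # define switch_mm_irqs_off switch_mm   /* arch has no IRQs-off variant */
  #endif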



