4.14-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Sebastian Andrzej Siewior <[email protected]>

commit f77084d96355f5fba8e2c1fb3a51a393b1570de7 upstream.

The WARN_ON_ONCE(__read_cr3() != build_cr3()) in switch_mm_irqs_off()
triggers every once in a while during a snapshotted system upgrade.
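
For reference, the check in question sits in switch_mm_irqs_off() in
arch/x86/mm/tlb.c and, under CONFIG_DEBUG_VM, looks roughly like this
(paraphrased; the arguments are abbreviated in the message above):

  #ifdef CONFIG_DEBUG_VM
          /* Verify that the CR3 the CPU is actually running on matches
           * what cpu_tlbstate thinks the previous mm/ASID should be. */
          if (WARN_ON_ONCE(__read_cr3() !=
                           build_cr3(real_prev->pgd, prev_asid))) {
                  /* ... warn and repair CR3 ... */
          }
  #endif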

The warning has been triggering since commit decab0888e6e ("x86/mm: Remove
preempt_disable/enable() from __native_flush_tlb()"). The callchain is:

  get_page_from_freelist() -> post_alloc_hook() -> __kernel_map_pages()

with CONFIG_DEBUG_PAGEALLOC enabled.
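
For context, a simplified view of __native_flush_tlb() after that commit
(paraphrased from arch/x86/include/asm/tlbflush.h of this era; details such
as the user-ASID invalidation are omitted):

  static inline void __native_flush_tlb(void)
  {
          /*
           * Since decab0888e6e the function no longer disables
           * preemption itself; it only warns and relies on the caller.
           * Being preempted and migrated between the CR3 read and write
           * below leaves the new CPU's cpu_tlbstate out of sync with
           * the loaded CR3, which the check in switch_mm_irqs_off()
           * later catches.
           */
          WARN_ON_ONCE(preemptible());
          native_write_cr3(__native_read_cr3());
  }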

Disable preemption during CR3 reset / __flush_tlb_all() and add a comment
explaining why preemption has to be disabled, so it won't be removed
accidentally.

Add another preemptible() check in __flush_tlb_all() to catch callers that
run with preemption enabled when PGE is enabled, because the PGE path takes
__flush_tlb_global() and therefore never reaches the warning in
__native_flush_tlb(). Suggested by Andy Lutomirski.
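
The PGE case goes down a different path entirely: __flush_tlb_all() calls
__flush_tlb_global(), which flushes by toggling CR4.PGE and never touches
CR3, so neither the CR3 consistency check nor the preemptible() warning in
__native_flush_tlb() can catch a preemptible caller there. Roughly
(simplified from the same header; the real code also handles INVPCID and
interrupt disabling):

  static inline void __native_flush_tlb_global_irq_disabled(void)
  {
          unsigned long cr4 = this_cpu_read(cpu_tlbstate.cr4);

          /* Clearing and restoring CR4.PGE flushes the whole TLB,
           * including global entries -- CR3 is never read or written,
           * so nothing here trips over a preempted/migrated caller. */
          native_write_cr4(cr4 & ~X86_CR4_PGE);
          native_write_cr4(cr4);
  }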

Fixes: decab0888e6e ("x86/mm: Remove preempt_disable/enable() from __native_flush_tlb()")
Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: [email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 arch/x86/include/asm/tlbflush.h |    6 ++++++
 arch/x86/mm/pageattr.c          |    6 +++++-
 2 files changed, 11 insertions(+), 1 deletion(-)

--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -466,6 +466,12 @@ static inline void __native_flush_tlb_on
  */
 static inline void __flush_tlb_all(void)
 {
+       /*
+        * This is to catch users with enabled preemption and the PGE feature
+        * and don't trigger the warning in __native_flush_tlb().
+        */
+       VM_WARN_ON_ONCE(preemptible());
+
        if (boot_cpu_has(X86_FEATURE_PGE)) {
                __flush_tlb_global();
        } else {
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -2037,9 +2037,13 @@ void __kernel_map_pages(struct page *pag
 
        /*
         * We should perform an IPI and flush all tlbs,
-        * but that can deadlock->flush only current cpu:
+        * but that can deadlock->flush only current cpu.
+        * Preemption needs to be disabled around __flush_tlb_all() due to
+        * CR3 reload in __native_flush_tlb().
         */
+       preempt_disable();
        __flush_tlb_all();
+       preempt_enable();
 
        arch_flush_lazy_mmu_mode();
 }

