From: Peter Zijlstra <pet...@infradead.org>

commit 0758cd8304942292e95a0f750c374533db378b32 upstream

Aneesh reported that:

        tlb_flush_mmu()
          tlb_flush_mmu_tlbonly()
            tlb_flush()                 <-- #1
          tlb_flush_mmu_free()
            tlb_table_flush()
              tlb_table_invalidate()
                tlb_flush_mmu_tlbonly()
                  tlb_flush()           <-- #2

does two TLBIs when tlb->fullmm, because __tlb_reset_range() will not
clear tlb->end in that case.

Observe that any caller to __tlb_adjust_range() also sets at least one of
the tlb->freed_tables || tlb->cleared_p* bits, and those are
unconditionally cleared by __tlb_reset_range().

Change the condition for actually issuing TLBI to having one of those bits
set, as opposed to having tlb->end != 0.

Link: http://lkml.kernel.org/r/20200116064531.483522-4-aneesh.ku...@linux.ibm.com
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
Reported-by: "Aneesh Kumar K.V" <aneesh.ku...@linux.ibm.com>
Cc: <sta...@vger.kernel.org>  # 4.19
Signed-off-by: Santosh Sivaraj <sant...@fossix.org>
[santosh: backported to 4.19 stable]
---
 include/asm-generic/tlb.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 19934cdd143e..427a70c56ddd 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -179,7 +179,12 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-       if (!tlb->end)
+       /*
+        * Anything calling __tlb_adjust_range() also sets at least one of
+        * these bits.
+        */
+       if (!(tlb->freed_tables || tlb->cleared_ptes || tlb->cleared_pmds ||
+             tlb->cleared_puds || tlb->cleared_p4ds))
                return;
 
        tlb_flush(tlb);
-- 
2.25.4
