The nest MMU TLB flush needs to happen before the GPU translation shootdown
is launched; otherwise the GPU may refill its TLB with stale nest MMU
translations before the nest MMU flush has completed.

Signed-off-by: Alistair Popple <alist...@popple.id.au>
Cc: sta...@vger.kernel.org
---
 arch/powerpc/platforms/powernv/npu-dma.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index b5d960d..3d4f879 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -546,6 +546,12 @@ static void mmio_invalidate(struct npu_context *npu_context, int va,
        unsigned long pid = npu_context->mm->context.id;
 
        /*
+        * Unfortunately the nest mmu does not support flushing specific
+        * addresses so we have to flush the whole mm.
+        */
+       flush_tlb_mm(npu_context->mm);
+
+       /*
         * Loop over all the NPUs this process is active on and launch
         * an invalidate.
         */
@@ -576,12 +582,6 @@ static void mmio_invalidate(struct npu_context *npu_context, int va,
                }
        }
 
-       /*
-        * Unfortunately the nest mmu does not support flushing specific
-        * addresses so we have to flush the whole mm.
-        */
-       flush_tlb_mm(npu_context->mm);
-
        mmio_invalidate_wait(mmio_atsd_reg, flush);
        if (flush)
                /* Wait for the flush to complete */
-- 
2.1.4
