Hey Tom,

On Mon, Jun 05, 2017 at 02:52:35PM -0500, Tom Lendacky wrote:
> After reducing the amount of MMIO performed by the IOMMU during operation,
> perf data shows that flushing the TLB for all protection domains during
> DMA unmapping is a performance issue. It is not necessary to flush the
> TLBs for all protection domains, only the protection domains associated
> with iova's on the flush queue.
> 
> Create a separate queue that tracks the protection domains associated with
> the iova's on the flush queue. This new queue optimizes the flushing of
> TLBs to the required protection domains.
> 
> Reviewed-by: Arindam Nath <[email protected]>
> Signed-off-by: Tom Lendacky <[email protected]>
> ---
>  drivers/iommu/amd_iommu.c |   56 
> ++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 50 insertions(+), 6 deletions(-)

I also did a major rewrite of the AMD IOMMU queue handling and flushing
code last week. It is functionally complete and I am currently testing,
documenting it, and cleaning it up. I pushed the current state of it to

        git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux.git amd-iommu

It's quite intrusive, as it implements a per-domain flush-queue and uses
a ring buffer instead of a real queue. But you can see the details in
the code.

Can you please have a look and give it a test in your setup?


Thanks,

        Joerg

_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu