On 06/08/18 13:27, Zhen Lei wrote:
.flush_iotlb_all cannot just wait for previous TLBI operations to complete; it should also invalidate all TLBs of the related domain.
I think it was like that because the only caller in practice was iommu_group_create_direct_mappings(), and at that point any relevant invalidations would have already been issued anyway. Once flush_iotlb_all actually does what it's supposed to, we'll get a bit of unavoidable over-invalidation on that path, but it's no big deal.
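For reference, the split between the two callbacks on the core side looks roughly like the inline wrappers below (a paraphrase of the ~4.18-era include/linux/iommu.h helpers, not a verbatim copy); it shows why pointing .flush_iotlb_all at arm_smmu_iotlb_sync only ever waited for already-queued commands instead of invalidating anything:

/*
 * Rough paraphrase of the core wrappers: .flush_iotlb_all is expected to
 * invalidate the whole domain, while .iotlb_sync only waits for previously
 * issued invalidations to complete.
 */
static inline void iommu_flush_tlb_all(struct iommu_domain *domain)
{
	if (domain->ops->flush_iotlb_all)
		domain->ops->flush_iotlb_all(domain);	/* full-domain invalidation */
}

static inline void iommu_tlb_sync(struct iommu_domain *domain)
{
	if (domain->ops->iotlb_sync)
		domain->ops->iotlb_sync(domain);	/* completion wait only */
}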
Reviewed-by: Robin Murphy <[email protected]>
Signed-off-by: Zhen Lei <[email protected]>
---
 drivers/iommu/arm-smmu-v3.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4810f61..2f1304b 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -1770,6 +1770,14 @@ static int arm_smmu_map(struct iommu_domain *domain, unsigned long iova,
 	return ops->unmap(ops, iova, size);
 }
 
+static void arm_smmu_flush_iotlb_all(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+	if (smmu_domain->smmu)
+		arm_smmu_tlb_inv_context(smmu_domain);
+}
+
 static void arm_smmu_iotlb_sync(struct iommu_domain *domain)
 {
 	struct arm_smmu_device *smmu = to_smmu_domain(domain)->smmu;
@@ -1998,7 +2006,7 @@ static void arm_smmu_put_resv_regions(struct device *dev,
 	.map			= arm_smmu_map,
 	.unmap			= arm_smmu_unmap,
 	.map_sg			= default_iommu_map_sg,
-	.flush_iotlb_all	= arm_smmu_iotlb_sync,
+	.flush_iotlb_all	= arm_smmu_flush_iotlb_all,
 	.iotlb_sync		= arm_smmu_iotlb_sync,
 	.iova_to_phys		= arm_smmu_iova_to_phys,
 	.add_device		= arm_smmu_add_device,
-- 
1.8.3
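As a usage aside, a minimal caller-side sketch with the generic API of this era (example_teardown and its arguments are purely illustrative, not taken from the patch) shows what each callback ends up being used for:

#include <linux/iommu.h>

/* Illustrative only: dom, iova and size come from whatever driver context applies. */
static void example_teardown(struct iommu_domain *dom,
			     unsigned long iova, size_t size)
{
	iommu_unmap_fast(dom, iova, size);	/* may leave TLB invalidation pending */
	iommu_tlb_sync(dom);			/* .iotlb_sync: wait for the pending TLBIs */

	iommu_flush_tlb_all(dom);		/* .flush_iotlb_all: invalidate the whole domain */
}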
