[PATCH v7 7/7] dma-iommu: account for min_align_mask w/swiotlb

2021-08-29 Thread David Stevens
From: David Stevens

Pass the non-aligned size to __iommu_dma_map when using swiotlb bounce buffers in iommu_dma_map_page, to account for min_align_mask. To deal with granule alignment, __iommu_dma_map maps iova_align(size + iova_off) bytes starting at phys - iova_off. If iommu_dma_map_page …

[PATCH v7 6/7] swiotlb: support aligned swiotlb buffers

2021-08-29 Thread David Stevens
From: David Stevens

Add an argument to swiotlb_tbl_map_single that specifies the desired alignment of the allocated buffer. This is used by dma-iommu to ensure the buffer is aligned to the iova granule size when using swiotlb with untrusted sub-granule mappings. This addresses an issue where …

[PATCH v7 5/7] dma-iommu: Check CONFIG_SWIOTLB more broadly

2021-08-29 Thread David Stevens
From: David Stevens

Introduce a new dev_use_swiotlb function to guard swiotlb code, instead of overloading dev_is_untrusted. This allows CONFIG_SWIOTLB to be checked more broadly, so the swiotlb-related code can be removed more aggressively.

Signed-off-by: David Stevens
Reviewed-by: Robin …

[PATCH v7 4/7] dma-iommu: fold _swiotlb helpers into callers

2021-08-29 Thread David Stevens
From: David Stevens

Fold the _swiotlb helper functions into the respective _page functions, since recent fixes have moved all logic from the _page functions to the _swiotlb functions.

Signed-off-by: David Stevens
Reviewed-by: Christoph Hellwig
Reviewed-by: Robin Murphy
---

[PATCH v7 3/7] dma-iommu: skip extra sync during unmap w/swiotlb

2021-08-29 Thread David Stevens
From: David Stevens

Calling the iommu_dma_sync_*_for_cpu functions during unmap can cause two copies out of the swiotlb buffer. Do the arch sync directly in __iommu_dma_unmap_swiotlb instead to avoid this. This makes the call to iommu_dma_sync_sg_for_cpu for untrusted devices in …

[PATCH v7 2/7] dma-iommu: fix arch_sync_dma for map

2021-08-29 Thread David Stevens
From: David Stevens

When calling arch_sync_dma, we need to pass it the memory that's actually being used for dma. When using swiotlb bounce buffers, this is the bounce buffer. Move arch_sync_dma into the __iommu_dma_map_swiotlb helper, so it can use the bounce buffer address if necessary. Now …

[PATCH v7 1/7] dma-iommu: fix sync_sg with swiotlb

2021-08-29 Thread David Stevens
From: David Stevens

The is_swiotlb_buffer function takes the physical address of the swiotlb buffer, not the physical address of the original buffer. The sglist contains the physical addresses of the original buffer, so for the sync_sg functions to work properly when a bounce buffer might have …

[PATCH v7 0/7] Fixes for dma-iommu swiotlb bounce buffers

2021-08-29 Thread David Stevens
This patch set includes various fixes for dma-iommu's swiotlb bounce buffers for untrusted devices. The min_align_mask issue was found when running fio on an untrusted nvme device with bs=512. The other issues were found via code inspection, so I don't have any specific use cases where things …