Re: [PATCH v4 5/6] swiotlb: support aligned swiotlb buffers
Hi David,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on iommu/next]
[also build test ERROR on hch-configfs/for-next linus/master v5.14-rc5]
[cannot apply to swiotlb/linux-next next-20210813]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:      https://github.com/0day-ci/linux/commits/David-Stevens/Fixes-for-dma-iommu-swiotlb-bounce-buffers/20210813-154739
base:     https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config:   x86_64-randconfig-a003-20210812 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/50aeec27cc4ccaa914c0bbefa59e349278646b6e
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review David-Stevens/Fixes-for-dma-iommu-swiotlb-bounce-buffers/20210813-154739
        git checkout 50aeec27cc4ccaa914c0bbefa59e349278646b6e
        # save the attached .config to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All errors (new ones prefixed by >>):

   drivers/xen/swiotlb-xen.c: In function 'xen_swiotlb_map_page':
>> drivers/xen/swiotlb-xen.c:385:8: error: too few arguments to function 'swiotlb_tbl_map_single'
     385 |  map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
         |        ^~~~~~~~~~~~~~~~~~~~~~
   In file included from arch/x86/include/asm/swiotlb.h:5,
                    from arch/x86/include/asm/dma-mapping.h:12,
                    from include/linux/dma-map-ops.h:75,
                    from include/linux/dma-direct.h:10,
                    from drivers/xen/swiotlb-xen.c:30:
   include/linux/swiotlb.h:45:13: note: declared here
      45 | phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
         |             ^~~~~~~~~~~~~~~~~~~~~~

vim +/swiotlb_tbl_map_single +385 drivers/xen/swiotlb-xen.c

b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  352  
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  353  /*
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  354   * Map a single buffer of the indicated size for DMA in streaming mode. The
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  355   * physical address to use is returned.
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  356   *
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  357   * Once the device is given the dma address, the device owns this memory until
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  358   * either xen_swiotlb_unmap_page or xen_swiotlb_dma_sync_single is performed.
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  359   */
dceb1a6819ab2c Christoph Hellwig     2017-05-21  360  static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  361  				       unsigned long offset, size_t size,
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  362  				       enum dma_data_direction dir,
00085f1efa387a Krzysztof Kozlowski   2016-08-03  363  				       unsigned long attrs)
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  364  {
e05ed4d1fad9e7 Alexander Duyck       2012-10-15  365  	phys_addr_t map, phys = page_to_phys(page) + offset;
91ffe4ad534ab2 Stefano Stabellini    2020-07-10  366  	dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  367  
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  368  	BUG_ON(dir == DMA_NONE);
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  369  	/*
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  370  	 * If the address happens to be in the device's DMA window,
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  371  	 * we can safely return the device addr and not worry about bounce
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  372  	 * buffering it.
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  373  	 */
68a33b1794665b Christoph Hellwig     2019-11-19  374  	if (dma_capable(dev, dev_addr, size, true) &&
a4dba130891271 Stefano Stabellini    2014-11-21  375  	    !range_straddles_page_boundary(phys, size) &&
291be10fd75111 Julien Grall          2015-09-09  376  	    !xen_arch_need_swiotlb(dev, phys, dev_addr) &&
063b8271ec8f70 Christoph Hellwig     2019-04-11  377  	    swiotlb_force != SWIOTLB_FORCE)
063b8271ec8f70 Christoph Hellwig     2019-04-11  378  		goto done;
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  379  
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  380  	/*
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  381  	 * Oh well, have to allocate and map a bounce buffer.
b097186fd29d5b Konrad Rzeszutek Wilk 2010-05-11  382  	 */
2b2b614dd24e4e
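[Editor's note: the error is that this series adds an alloc_align_mask parameter to swiotlb_tbl_map_single() but the Xen caller in drivers/xen/swiotlb-xen.c still passes the old six arguments. A sketch of the missing follow-up hunk, assuming the same convention the patch uses for swiotlb_map() (passing 0 to request no additional alignment) — not part of the posted series:]

```diff
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
-	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
+	map = swiotlb_tbl_map_single(dev, phys, size, size, 0, dir, attrs);
```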
Re: [PATCH v4 5/6] swiotlb: support aligned swiotlb buffers
Looks good,

Reviewed-by: Christoph Hellwig

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH v4 5/6] swiotlb: support aligned swiotlb buffers
From: David Stevens

Add an argument to swiotlb_tbl_map_single that specifies the desired
alignment of the allocated buffer. This is used by dma-iommu to ensure
the buffer is aligned to the iova granule size when using swiotlb with
untrusted sub-granule mappings. This addresses an issue where adjacent
slots could be exposed to the untrusted device if IO_TLB_SIZE < iova
granule < PAGE_SIZE.

Signed-off-by: David Stevens
---
 drivers/iommu/dma-iommu.c |  4 ++--
 include/linux/swiotlb.h   |  3 ++-
 kernel/dma/swiotlb.c      | 11 +++++++----
 3 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index bad813d63ea6..b1b0327cc2f6 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -801,8 +801,8 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		size_t padding_size;
 
 		aligned_size = iova_align(iovad, size);
-		phys = swiotlb_tbl_map_single(dev, phys, size,
-					      aligned_size, dir, attrs);
+		phys = swiotlb_tbl_map_single(dev, phys, size, aligned_size,
+					      iova_mask(iovad), dir, attrs);
 
 		if (phys == DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..93d82e43eb3a 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -44,7 +44,8 @@ extern void __init swiotlb_update_mem_attributes(void);
 
 phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
 		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs);
+		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
+		unsigned long attrs);
 
 extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 				     phys_addr_t tlb_addr,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e50df8d8f87e..d4c45d8cd1fa 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -427,7 +427,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
  * allocate a buffer from that IO TLB pool.
  */
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
-		size_t alloc_size)
+		size_t alloc_size, unsigned int alloc_align_mask)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
@@ -450,6 +450,7 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	stride = (iotlb_align_mask >> IO_TLB_SHIFT) + 1;
 	if (alloc_size >= PAGE_SIZE)
 		stride = max(stride, stride << (PAGE_SHIFT - IO_TLB_SHIFT));
+	stride = max(stride, (alloc_align_mask >> IO_TLB_SHIFT) + 1);
 
 	spin_lock_irqsave(&mem->lock, flags);
 	if (unlikely(nslots > mem->nslabs - mem->used))
@@ -504,7 +505,8 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
-		enum dma_data_direction dir, unsigned long attrs)
+		unsigned int alloc_align_mask, enum dma_data_direction dir,
+		unsigned long attrs)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
@@ -524,7 +526,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		return (phys_addr_t)DMA_MAPPING_ERROR;
 	}
 
-	index = find_slots(dev, orig_addr, alloc_size + offset);
+	index = find_slots(dev, orig_addr,
+			   alloc_size + offset, alloc_align_mask);
 	if (index == -1) {
 		if (!(attrs & DMA_ATTR_NO_WARN))
 			dev_warn_ratelimited(dev,
@@ -636,7 +639,7 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 
 	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
 			      swiotlb_force);
 
-	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
+	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, 0, dir,
 			attrs);
 	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
-- 
2.33.0.rc1.237.g0d66db33f3-goog