On 28/12/15 16:05, Joerg Roedel wrote:
> On Fri, Dec 18, 2015 at 05:01:47PM +0000, Robin Murphy wrote:
>> When mapping a non-page-aligned scatterlist entry, we copy the original
>> offset to the output DMA address before aligning it to hand off to
>> iommu_map_sg(), then later adding the IOVA page address portion to get
>> the final mapped address. However, when the IOVA page size is smaller
>> than the CPU page size, it is the offset within the IOVA page we want,
>> not that within the CPU page, which can easily be larger than an IOVA
>> page and thus result in an incorrect final address.
>>
>> Fix the bug by taking only the IOVA-aligned part of the offset as the
>> basis of the DMA address, not the whole thing.
>>
>> Signed-off-by: Robin Murphy <[email protected]>
>> ---
>>  drivers/iommu/dma-iommu.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
>> index 982e716..03811e3 100644
>> --- a/drivers/iommu/dma-iommu.c
>> +++ b/drivers/iommu/dma-iommu.c
>> @@ -458,7 +458,7 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>>  		size_t s_length = s->length;
>>  		size_t pad_len = (mask - iova_len + 1) & mask;
>>
>> -		sg_dma_address(s) = s->offset;
>> +		sg_dma_address(s) = s_offset;
>
> Does not apply on v4.4-rc7.

Ah, I did these on top of the map_sg tweak[1] locally, and failed to
spot that that leaks into a context line here. There's no actual
dependency so I'll rebase and repost this bugfix momentarily - as that
other thread seems to have stalled, I'll probably have another crack at
a proper scatterlist segment merging implementation sometime after the
merge window.
Thanks,
Robin.
[1]: http://thread.gmane.org/gmane.linux.kernel.iommu/11512