From: Dominique Martinet <dominique.marti...@atmark-techno.com>

[ Upstream commit 868c9ddc182bc6728bb380cbfb3170734f72c599 ]

This is a follow-up to 5f89468e2f06 ("swiotlb: manipulate orig_addr
when tlb_addr has offset"), which fixed unaligned dma mappings.
Make sure the following overflows are caught:

- the offset of the start of the slot within the device is bigger than
the requested address' offset, in other words the base address
given to swiotlb_tbl_map_single to create the mapping (orig_addr)
was after the requested address for the sync (tlb_offset) in the
same block:

 |------------------------------------------| block
              <----------------------------> mapped part of the block
              ^
              orig_addr
       ^
       invalid tlb_addr for sync

- the resulting offset is bigger than the allocation size. This can
happen if the mapping did not extend to the end of the block, e.g.

 |------------------------------------------| block
      <---------------------> mapped part of the block
      ^                               ^
      orig_addr                       invalid tlb_addr

Neither case should ever happen, so print a warning and bail out without
trying to adjust the sizes/offsets: the first could in principle sync
from orig_addr to whatever is left of the requested size, but the latter
really has nothing to sync there...
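
For illustration only, the two bounds checks added by this patch can be
sketched outside the kernel as plain integer tests. validate_tlb_offset
below is a hypothetical stand-in (kernel types and warnings simplified),
not an actual kernel helper:

```c
#include <assert.h>
#include <stddef.h>

/* Given the offset of the sync address within its slot (tlb_offset),
 * the device alignment offset of the original mapping
 * (orig_addr_offset), and the mapped allocation size, return the
 * validated offset into the mapping, or -1 if the sync falls outside
 * it (the kernel code warns and bails out in those cases). */
static long validate_tlb_offset(unsigned int tlb_offset,
                                unsigned int orig_addr_offset,
                                size_t alloc_size)
{
        /* Case 1: sync address lies before the start of the mapping. */
        if (tlb_offset < orig_addr_offset)
                return -1;

        tlb_offset -= orig_addr_offset;

        /* Case 2: resulting offset lands past the end of the
         * allocation, i.e. the mapping did not reach that far. */
        if (tlb_offset > alloc_size)
                return -1;

        return tlb_offset;
}
```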

Signed-off-by: Dominique Martinet <dominique.marti...@atmark-techno.com>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Reviewed-by: Bumyong Lee <bumyong....@samsung.com>
Cc: Chanho Park <chanho61.p...@samsung.com>
Cc: Christoph Hellwig <h...@lst.de>
Signed-off-by: Konrad Rzeszutek Wilk <kon...@kernel.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 kernel/dma/swiotlb.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e50df8d8f87e..23f8d0b168c5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -354,13 +354,27 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
        size_t alloc_size = mem->slots[index].alloc_size;
        unsigned long pfn = PFN_DOWN(orig_addr);
        unsigned char *vaddr = phys_to_virt(tlb_addr);
-       unsigned int tlb_offset;
+       unsigned int tlb_offset, orig_addr_offset;
 
        if (orig_addr == INVALID_PHYS_ADDR)
                return;
 
-       tlb_offset = (tlb_addr & (IO_TLB_SIZE - 1)) -
-                    swiotlb_align_offset(dev, orig_addr);
+       tlb_offset = tlb_addr & (IO_TLB_SIZE - 1);
+       orig_addr_offset = swiotlb_align_offset(dev, orig_addr);
+       if (tlb_offset < orig_addr_offset) {
+               dev_WARN_ONCE(dev, 1,
+                       "Access before mapping start detected. orig offset %u, requested offset %u.\n",
+                       orig_addr_offset, tlb_offset);
+               return;
+       }
+
+       tlb_offset -= orig_addr_offset;
+       if (tlb_offset > alloc_size) {
+               dev_WARN_ONCE(dev, 1,
+                       "Buffer overflow detected. Allocation size: %zu. Mapping size: %zu+%u.\n",
+                       alloc_size, size, tlb_offset);
+               return;
+       }
 
        orig_addr += tlb_offset;
        alloc_size -= tlb_offset;
-- 
2.30.2

_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
