The boundary returned by dma_get_seg_boundary() might be as large as
ULONG_MAX, which means that the device has no specific boundary limit.
In that case the "+ 1" would overflow.

Also, following other places in the kernel, boundary_size should be
aligned to PAGE_SIZE before being right-shifted by PAGE_SHIFT. However,
passing the value through ALIGN() would potentially overflow too, since
ALIGN_MASK() adds the mask before masking.
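
As a minimal user-space sketch (illustration only, not part of this
patch), assuming PAGE_SHIFT is 13 as on Alpha and copying the two
defines quoted below, both expressions wrap around for a boundary of
ULONG_MAX:

    #include <stdio.h>
    #include <limits.h>

    #define PAGE_SHIFT  13
    #define PAGE_SIZE   (1UL << PAGE_SHIFT)
    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)

    int main(void)
    {
            unsigned long b = ULONG_MAX;  /* "no boundary" case */

            /* the "+ 1" wraps to 0, so the page count becomes 0 */
            printf("%#lx\n", (b + 1) >> PAGE_SHIFT);
            /* ALIGN()'s internal "+ mask" wraps the same way */
            printf("%#lx\n", ALIGN(b, PAGE_SIZE) >> PAGE_SHIFT);
            return 0;
    }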

According to the kernel defines:
    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)

We can simplify the logic here:
  ALIGN(boundary + 1, 1 << shift) >> shift
= ALIGN_MASK(b + 1, (1 << s) - 1) >> s
= {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s
= [b + 1 + (1 << s) - 1] >> s
= [b + (1 << s)] >> s
= (b >> s) + 1
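
As a quick sanity check (again a standalone user-space sketch, not part
of this patch, with shift 13 assumed for Alpha's PAGE_SHIFT), the
identity holds for sample boundaries that do not wrap, and the
right-hand side stays well-defined even for ULONG_MAX:

    #include <assert.h>
    #include <stdio.h>
    #include <limits.h>

    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)

    int main(void)
    {
            unsigned long b[] = { 0xffffffUL, 0xffffffffUL, 0xfffffffffUL };
            unsigned int s = 13;  /* PAGE_SHIFT on Alpha */

            /* ALIGN(b + 1, 1 << s) >> s == (b >> s) + 1 */
            for (int i = 0; i < 3; i++)
                    assert((ALIGN(b[i] + 1, 1UL << s) >> s) ==
                           (b[i] >> s) + 1);

            /* the shortcut cannot overflow in the "no boundary" case */
            printf("%#lx\n", (ULONG_MAX >> s) + 1);
            return 0;
    }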

So fix the potential overflow by using this safer shortcut.

Signed-off-by: Nicolin Chen <nicoleots...@gmail.com>
Cc: Christoph Hellwig <h...@lst.de>
---
 arch/alpha/kernel/pci_iommu.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index 81037907268d..1ef2c647bd3e 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -141,12 +141,10 @@ iommu_arena_find_pages(struct device *dev, struct pci_iommu_arena *arena,
        unsigned long boundary_size;
 
        base = arena->dma_base >> PAGE_SHIFT;
-       if (dev) {
-               boundary_size = dma_get_seg_boundary(dev) + 1;
-               boundary_size >>= PAGE_SHIFT;
-       } else {
-               boundary_size = 1UL << (32 - PAGE_SHIFT);
-       }
+
+       boundary_size = dev ? dma_get_seg_boundary(dev) : U32_MAX;
+       /* Overflow-free shortcut for: ALIGN(b + 1, 1 << s) >> s */
+       boundary_size = (boundary_size >> PAGE_SHIFT) + 1;
 
        /* Search forward for the first mask-aligned sequence of N free ptes */
        ptes = arena->ptes;
-- 
2.17.1
