2.6.39-stable review patch.  If anyone has any objections, please let us know.

------------------

From: Chris Wright <[email protected]>

commit 1c9fc3d11b84fbd0c4f4aa7855702c2a1f098ebb upstream.

Mike Travis and Mike Habeck reported an issue where iova allocation
would return a range above a device's dma mask.

https://lkml.org/lkml/2011/3/29/423

The dmar initialization code reserves all PCI MMIO regions and copies
those reservations into a domain-specific iova tree.  It is possible
for one of those regions to be above the dma mask of a device.  It is
typical to allocate iovas with a 32bit mask (despite the device's dma
mask possibly being larger) and to cache the result until the lower
32bit address space is exhausted.  Freeing an iova range that is >=
the last iova in the lower 32bit range, while an iova still exists
above the 32bit range, corrupts the cached iova by pointing it at that
region above 32bit.  If that region is also above the device's dma
mask, a subsequent allocation will return an unusable iova and cause
dma failure.

Simply don't cache an iova that is above the 32bit caching boundary.
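
To make the failure mode concrete, here is a minimal userspace sketch,
not the kernel code: a sorted singly-linked list stands in for the
rbtree, and iova_node, iova_domain, DMA_32BIT_PFN and
cached_node_delete_update() are illustrative stand-ins for the kernel's
struct iova, struct iova_domain, iovad->dma_32bit_pfn and
__cached_rbnode_delete_update().  Without the pfn_lo check below,
freeing the last sub-32bit range would leave cached32_node pointing
above the 32bit boundary.

/*
 * Minimal userspace sketch (NOT the kernel code) of the cached32_node
 * update on free.  A sorted singly-linked list stands in for the
 * kernel's rbtree; all names here are illustrative stand-ins.
 */
#include <stdio.h>

#define DMA_32BIT_PFN 0x100000UL /* stand-in for iovad->dma_32bit_pfn */

struct iova_node {
	unsigned long pfn_lo;
	struct iova_node *next;	/* next-higher range, like rb_next() */
};

struct iova_domain {
	struct iova_node *cached32_node; /* where 32bit allocs resume */
};

/* Mirrors the delete-update logic with the fix applied. */
static void cached_node_delete_update(struct iova_domain *iovad,
				      struct iova_node *free)
{
	struct iova_node *cached = iovad->cached32_node;

	if (!cached)
		return;

	if (free->pfn_lo >= cached->pfn_lo) {
		struct iova_node *node = free->next;

		/* only cache if it's below the 32bit pfn boundary */
		if (node && node->pfn_lo < DMA_32BIT_PFN)
			iovad->cached32_node = node;
		else
			iovad->cached32_node = NULL;
	}
}

int main(void)
{
	/* Two reserved ranges: one below 32bit, one above (e.g. MMIO). */
	struct iova_node high = { .pfn_lo = 0x200000UL, .next = NULL };
	struct iova_node low  = { .pfn_lo = 0x0f0000UL, .next = &high };
	struct iova_domain iovad = { .cached32_node = &low };

	/* Freeing the last sub-32bit range must not cache the high one. */
	cached_node_delete_update(&iovad, &low);

	printf("cached32_node: %s\n",
	       iovad.cached32_node ? "above 32bit (bug)" : "NULL (correct)");
	return 0;
}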

Reported-by: Mike Travis <[email protected]>
Reported-by: Mike Habeck <[email protected]>
Acked-by: Mike Travis <[email protected]>
Tested-by: Mike Habeck <[email protected]>
Signed-off-by: Chris Wright <[email protected]>
Signed-off-by: David Woodhouse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>

---
 drivers/pci/iova.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

--- a/drivers/pci/iova.c
+++ b/drivers/pci/iova.c
@@ -63,8 +63,16 @@ __cached_rbnode_delete_update(struct iov
        curr = iovad->cached32_node;
        cached_iova = container_of(curr, struct iova, node);
 
-       if (free->pfn_lo >= cached_iova->pfn_lo)
-               iovad->cached32_node = rb_next(&free->node);
+       if (free->pfn_lo >= cached_iova->pfn_lo) {
+               struct rb_node *node = rb_next(&free->node);
+               struct iova *iova = container_of(node, struct iova, node);
+
+               /* only cache if it's below 32bit pfn */
+               if (node && iova->pfn_lo < iovad->dma_32bit_pfn)
+                       iovad->cached32_node = node;
+               else
+                       iovad->cached32_node = NULL;
+       }
 }
 
 /* Computes the padding size required, to make the


_______________________________________________
stable mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/stable
