From: Robin Murphy <robin.mur...@arm.com>

[ Upstream commit 5b61343b50590fb04a3f6be2cdc4868091757262 ]

For various reasons based on the allocator behaviour and typical
use-cases at the time, when the max32_alloc_size optimisation was
introduced it seemed reasonable to couple the reset of the tracked
size to the update of cached32_node upon freeing a relevant IOVA.
However, since subsequent optimisations focused on helping genuine
32-bit devices make best use of even more limited address spaces, it
is now a lot more likely for cached32_node to be anywhere in a "full"
32-bit address space, and as such more likely for space to become
available from IOVAs below that node being freed.

At this point, the short-cut in __cached_rbnode_delete_update() really
doesn't hold up any more, and we need to fix the logic to reliably
provide the expected behaviour. We still want cached32_node to only move
upwards, but we should reset the allocation size if *any* 32-bit space
has become available.

Reported-by: Yunfei Wang <yf.w...@mediatek.com>
Signed-off-by: Robin Murphy <robin.mur...@arm.com>
Reviewed-by: Miles Chen <miles.c...@mediatek.com>
Link: https://lore.kernel.org/r/033815732d83ca73b13c11485ac39336f15c3b40.1646318408.git.robin.mur...@arm.com
Signed-off-by: Joerg Roedel <jroe...@suse.de>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 drivers/iommu/iova.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 906582a21124..628a586be695 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -138,10 +138,11 @@ __cached_rbnode_delete_update(struct iova_domain *iovad, struct iova *free)
        cached_iova = rb_entry(iovad->cached32_node, struct iova, node);
        if (free == cached_iova ||
            (free->pfn_hi < iovad->dma_32bit_pfn &&
-            free->pfn_lo >= cached_iova->pfn_lo)) {
+            free->pfn_lo >= cached_iova->pfn_lo))
                iovad->cached32_node = rb_next(&free->node);
+
+       if (free->pfn_lo < iovad->dma_32bit_pfn)
                iovad->max32_alloc_size = iovad->dma_32bit_pfn;
-       }
 
        cached_iova = rb_entry(iovad->cached_node, struct iova, node);
        if (free->pfn_lo >= cached_iova->pfn_lo)
-- 
2.34.1
