Hi John,

On 2021-12-20 08:49, John Garry wrote:
On 24/09/2021 11:01, John Garry wrote:
Only dma-iommu.c and vdpa actually use the "fast" mode of IOVA alloc and
free. As such, it's wasteful that all other IOVA domains hold the rcache
memories.

In addition, the current IOVA domain init implementation is poor
(init_iova_domain()), in that errors are ignored and not passed to the
caller. The only errors can come from the IOVA rcache init, and fixing up
all the IOVA domain init callsites to handle the errors would take some
work.

Separate the IOVA rcache out of the IOVA domain, and create a new IOVA
domain structure, iova_caching_domain.
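The shape of that split might look something like the following sketch. This is not the patch itself — the field names and the rcache count are illustrative assumptions, with userspace calloc/free standing in for the kernel allocators — but it shows the idea: embed the plain iova_domain inside a new iova_caching_domain so only fast-mode users (dma-iommu.c, vdpa) carry the rcaches, and let init report rcache allocation failure to the caller instead of ignoring it:

```c
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures; the fields here
 * are illustrative only, not the real include/linux/iova.h layout. */
struct iova_domain {
	unsigned long granule;   /* pfn granularity */
	unsigned long start_pfn;
};

struct iova_rcache {
	void *depot;             /* placeholder for the per-CPU caches */
};

/* The "super structure": embeds the plain domain and adds the
 * rcaches, so non-fast-mode users never hold the rcache memories. */
struct iova_caching_domain {
	struct iova_domain iovad;
	struct iova_rcache *rcaches;
};

/* Unlike init_iova_domain(), init can now return an error when the
 * rcache allocation fails (would be -ENOMEM in the kernel). */
static int init_iova_caching_domain(struct iova_caching_domain *rcd,
				    unsigned long granule,
				    unsigned long start_pfn)
{
	rcd->iovad.granule = granule;
	rcd->iovad.start_pfn = start_pfn;
	rcd->rcaches = calloc(16, sizeof(*rcd->rcaches));
	if (!rcd->rcaches)
		return -1;
	return 0;
}

static void put_iova_caching_domain(struct iova_caching_domain *rcd)
{
	free(rcd->rcaches);
	rcd->rcaches = NULL;
}
```

The cost of this shape is that every rcache-using callsite must switch from iova_domain to the new type, which is where the churn in the patch comes from.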

Signed-off-by: John Garry <john.ga...@huawei.com>

Hi Robin,

Do you have any thoughts on this patch? The decision is whether we stick with a single IOVA domain structure or add this super structure for IOVA domains which support the rcache. I did not try the former - it would be doable, but I am not sure how it would look.

TBH I feel inclined to take the simpler approach of just splitting the rcache array to a separate allocation, making init_iova_rcaches() public (with a proper return value), and tweaking put_iova_domain() to make rcache cleanup conditional. A residual overhead of 3 extra pointers in iova_domain doesn't seem like *too* much for non-DMA-API users to bear. Unless you want to try generalising the rcache mechanism completely away from IOVA API specifics, it doesn't seem like there's really enough to justify the bother of having its own distinct abstraction layer.
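For comparison, Robin's simpler approach could be sketched roughly as below. Again the field names and rcache count are assumptions and userspace calloc/free stand in for kernel allocators; the point is the shape: iova_domain keeps a single rcaches pointer (NULL for non-DMA-API users), init_iova_rcaches() becomes public with a real return value, and put_iova_domain() frees the rcaches only if they were ever allocated:

```c
#include <stdlib.h>

struct iova_rcache {
	void *depot;  /* placeholder for the per-CPU caches */
};

/* One extra pointer in iova_domain instead of a separate structure;
 * it stays NULL for domains that never use the fast path. */
struct iova_domain {
	unsigned long granule;
	struct iova_rcache *rcaches;
};

/* Public, with a proper return value, so fast-path users can handle
 * allocation failure (would be -ENOMEM in the kernel). */
static int init_iova_rcaches(struct iova_domain *iovad)
{
	iovad->rcaches = calloc(16, sizeof(*iovad->rcaches));
	return iovad->rcaches ? 0 : -1;
}

/* Rcache cleanup is conditional, so plain domains are unaffected. */
static void put_iova_domain(struct iova_domain *iovad)
{
	if (iovad->rcaches) {
		free(iovad->rcaches);
		iovad->rcaches = NULL;
	}
}
```

Under this shape no callsite has to change type; only the few fast-mode users gain an extra init_iova_rcaches() call they must check.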

Cheers,
Robin.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
