On 2022-05-09 18:26, Jason Gunthorpe wrote:
> On Mon, May 09, 2022 at 10:59:11AM +0100, Robin Murphy wrote:
>
>> IOMMU_DOMAIN_DMA_FQ now effectively takes over the original
>> semantics of IOMMU_DOMAIN_DMA as the one that depends on
>> driver-specific functionality.
>
> If I grasp the FQ stuff right, it seems that this only requires the
> driver to implement ops->flush_iotlb_all()? I don't see anything
> obvious in any driver that is specifically _FQ related?
>
> If yes, it makes me wonder why I see drivers implementing
> ops->flush_iotlb_all() but not supporting the _FQ domain during alloc?
>
> Further, if yes, wouldn't it make sense to just trigger FQ based on
> domain->op->flush_iotlb_all being set?

The main thing drivers need to do for flush queue support is to
actually implement the optimisations which make it worthwhile. It's up
to individual drivers how much use they want to make of the
iommu_iotlb_gather mechanism, and they're free to issue invalidations,
or even enforce their completion, directly within their unmap callback
if they so wish. In such cases, enabling a flush queue would do nothing
but hurt performance and waste memory.
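
To illustrate with a made-up driver (every my_* name below is
invented, not real kernel API): a driver which batches invalidations
via the gather mechanism leaves the expensive part to be deferred and
amortised, which is exactly what a flush queue then exploits, whereas
one which invalidates and syncs directly in .unmap gains nothing:

        /* Deferred flavour: record the range, invalidate later */
        static size_t my_iommu_unmap(struct iommu_domain *domain,
                                     unsigned long iova, size_t size,
                                     struct iommu_iotlb_gather *gather)
        {
                size_t unmapped = my_pgtable_unmap(domain, iova, size);

                /* No hardware invalidation yet, just accumulate */
                iommu_iotlb_gather_add_page(domain, gather, iova, size);
                return unmapped;
        }

        static void my_iommu_iotlb_sync(struct iommu_domain *domain,
                                        struct iommu_iotlb_gather *gather)
        {
                /* One ranged invalidation covers the whole batch
                 * (gather->end is the last byte, hence the +1) */
                my_hw_inv_range(domain, gather->start,
                                gather->end - gather->start + 1);
                my_hw_sync(domain);
        }

        /* ...and the op which the flush queue itself relies on */
        static void my_iommu_flush_iotlb_all(struct iommu_domain *domain)
        {
                my_hw_inv_all(domain);
                my_hw_sync(domain);
        }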

> It seems like there is some confusion here, because I see the sysfs
> default domain store path just does this:
>
>         /* We can bring up a flush queue without tearing down the domain */
>         if (type == IOMMU_DOMAIN_DMA_FQ && prev_dom->type == IOMMU_DOMAIN_DMA) {
>                 ret = iommu_dma_init_fq(prev_dom);
>                 if (!ret)
>                         prev_dom->type = IOMMU_DOMAIN_DMA_FQ;
>                 goto out;
>         }
>
> Which will allow a driver that rejected creating DMA_FQ during alloc
> to end up with DMA_FQ anyhow???

Yes, I can't remember if I ever mentioned it anywhere, but that is not
an oversight. The sysfs interface is a bit of a specialist sport, and
if a user really wants to go out of their way to bring up a flush
queue which won't help performance, they can, and they can observe the
zero-to-negative performance gain, and maybe learn not to bother again
on that system. It's just not worth the additional complexity to try
to save users from themselves.
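
(For reference, that's the group "type" attribute, so IIRC something
like

        echo DMA-FQ > /sys/kernel/iommu_groups/<N>/type

with any drivers in the group unbound first, as the ABI doc demands.)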

>> FWIW, mtk-iommu doesn't really have any need to reject
>> IOMMU_DOMAIN_UNMANAGED, they just don't have any relevant client drivers
>> that want it;
>
> Ok..
>
>> however arm-smmu definitely does need to continue rejecting
>> IOMMU_DOMAIN_DMA when it can't rely on the DT code ensuring the
>> correct probe ordering with the old DT binding (otherwise client
>> drivers are liable to get broken DMA ops).

> I saw this code and wondered what it does?
>
> smmu alloc_domain returns NULL, which if I read everything right
> causes NULL to become the default_domain.
>
> But then what happens? This driver has no detach_dev so after, say
> VFIO does stuff, how do we get back into whatever boot-time state NULL
> represents?

Shhh... the main point of the arm-smmu legacy binding support is to do
whatever the handful of people still using ancient AMD Seattle boards
with original firmware need to do. Clearly they haven't been
reassigning devices straight back from VFIO to a kernel driver for the
6-and-a-bit years that it's been broken, so I'm now discounting that
as a supported legacy-binding use case. Don't give them ideas! ;)

> Is it OK because dev_iommu_priv_get() in arm_smmu_attach_dev() will
> always fail if legacy? If that is the case then why allow allocating
> any domain at all?
>
> It feels like this really wants a 'IOMMU_DOMAIN_PLATFORM_DMA_OPS' set
> as the default_domain meaning that when that domain is assigned, the
> platform's DMA ops are handling the iommu? If I get it right..

Nah, we just need to actually finish introducing default domains. I'm
OK to tackle the remaining SoC IOMMU drivers once my bus ops work
meanders into the arch/arm iommu-dma conversion revival; beyond that
it just needs people who understand s390 and fsl-pamu well enough to
at least (presumably) bodge up enough IOMMU_DOMAIN_IDENTITY support to
replicate their current "detached" behaviours, and to force
CONFIG_IOMMU_DEFAULT_PASSTHROUGH on those architectures.
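
For the record, the sort of bodge I have in mind is roughly this shape
(an untested sketch with invented my_* names; the real drivers would
wire their existing "detached" paths into it):

        static struct iommu_domain *my_domain_alloc(unsigned type)
        {
                struct my_domain *dom;

                if (type != IOMMU_DOMAIN_IDENTITY &&
                    type != IOMMU_DOMAIN_UNMANAGED)
                        return NULL;

                dom = kzalloc(sizeof(*dom), GFP_KERNEL);
                if (!dom)
                        return NULL;
                return &dom->domain;
        }

        static int my_attach_dev(struct iommu_domain *domain,
                                 struct device *dev)
        {
                /* Identity == hardware bypass, i.e. the old
                 * "detached" state */
                if (domain->type == IOMMU_DOMAIN_IDENTITY)
                        return my_hw_set_bypass(dev);

                return my_hw_set_translate(dev, to_my_domain(domain));
        }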

Cheers,
Robin.