On 2020-07-17 14:22, Sironi, Filippo wrote:
On Fri, 2020-07-17 at 10:47 +0100, Robin Murphy wrote:

On 2020-07-17 10:20, Sebastian Ott via iommu wrote:
Hello Joerg,

On 2020-07-10 14:31, Joerg Roedel wrote:
On Wed, Jul 01, 2020 at 12:46:31AM +0200, Sebastian Ott wrote:
The IVRS ACPI table specifies maximum address sizes for I/O
addresses that can be handled by the IOMMUs in the system. Parse
data from the IVRS header to provide aperture information for
mappings and users of the iommu API.

Changes for V2:
   - use limits in iommu_setup_dma_ops()
   - rebased to current upstream

Sebastian Ott (3):
    iommu/amd: Parse supported address sizes from IVRS
    iommu/amd: Restrict aperture for domains to conform with IVRS
    iommu/amd: Actually enforce geometry aperture
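As a rough sketch of what the series describes (the helper name `ivrs_aperture_end` is hypothetical, not the actual patch code): a virtual-address-size field parsed from the IVRS header would translate into the last usable IOVA of the domain aperture, roughly what gets assigned to `domain->geometry.aperture_end`.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper: turn an IVRS virtual address size (in bits)
 * into the last usable IOVA of the aperture. A zero or >=64-bit
 * value means the full 64-bit address space. */
static uint64_t ivrs_aperture_end(unsigned int va_size_bits)
{
	if (va_size_bits == 0 || va_size_bits >= 64)
		return UINT64_MAX;
	return (1ULL << va_size_bits) - 1; /* 48 bits -> 0xffffffffffff */
}
```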

Thanks for the changes. May I ask what the reason for those changes is?
AFAIK all AMD IOMMU implementations (in hardware) support full address
spaces, and the IVRS table might actually be wrong, limiting the address
space in the worst case to only 32 bits.

It's not the IOMMU, but we've encountered devices that are capable of
more than 32- but less than 64-bit IOVA, and there's no way to express
that to the IOVA allocator in the PCIe spec. Our solution was to have
our platforms express an IVRS entry that says the IOMMU is capable of
48 bits, which these devices can generate. 48 bits is plenty of address
space in this generation for the application we have.

Hmm, for constraints of individual devices, it should really be their
drivers' job to call dma_set_mask*() with the appropriate value in the
first place, rather than just assuming that 64 means anything >32. If
it's a case where the device itself technically is 64-bit capable, but
an upstream bridge is constraining it, then that limit can also be
described either by dedicated firmware properties (e.g. ACPI _DMA) or
with a quirk like via_no_dac().
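The dma_set_mask*() approach mentioned above looks roughly like the probe-path sketch in the comment below (the probe snippet itself is illustrative, not from any particular driver); DMA_BIT_MASK is reproduced as defined in include/linux/dma-mapping.h.

```c
#include <assert.h>
#include <stdint.h>

/* DMA_BIT_MASK as defined in include/linux/dma-mapping.h */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* A driver for a 48-bit-capable device would advertise that limit
 * up front in its probe routine, e.g.:
 *
 *	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48)))
 *		return -EIO;
 *
 * so the DMA API never hands the device addresses above bit 47. */
```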


You cannot rely on the device driver alone, because the attached driver
might be a generic one like vfio-pci, for instance, which doesn't have
any device-specific knowledge.

Indeed, but on the other hand a generic driver that doesn't know the device is highly unlikely to set up any DMA transactions by itself either. In the case of VFIO, it would then be the guest/userspace driver's responsibility to take the equivalent action to avoid allocating addresses the hardware can't actually use.

I'm mostly just wary that trying to fake up a per-device restriction as a global one is a bit crude, and has the inherent problem that whatever you think the lowest common denominator is, there's the potential for some device to be hotplugged in later and break the assumption you've already had to commit to.

And of course I am taking a bit of a DMA-API-centric viewpoint here - I think exposing per-device properties like bus_dma_limit that aren't easily identifiable for VFIO users to take into account is still rather an open problem.
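For reference, the way the DMA API combines these per-device properties can be modelled as below: dma-direct takes the smaller of the driver's mask and the firmware-described bus_dma_limit, treating an unset (zero) bus limit as "no limit". The function name here is a simplified model, not the kernel's actual code.

```c
#include <assert.h>
#include <stdint.h>

/* Model of how dma-direct clamps the driver's DMA mask with a
 * firmware-described bridge limit (dev->bus_dma_limit): take the
 * smaller of the two, ignoring an unset (zero) bus limit. */
static uint64_t effective_dma_limit(uint64_t dev_mask, uint64_t bus_dma_limit)
{
	if (bus_dma_limit == 0)
		return dev_mask;
	return dev_mask < bus_dma_limit ? dev_mask : bus_dma_limit;
}
```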
