Hi,

On 2020/2/19 11:21, Daniel Drake wrote:
From: Jon Derrick<[email protected]>

The PCI devices handled by intel-iommu may have a DMA requester on
another bus, such as VMD subdevices needing to use the VMD endpoint.

The real DMA device is now used for the DMA mapping, but one case was
missed earlier: if the VMD device (and hence its subdevices too) is
under IOMMU_DOMAIN_IDENTITY, mappings do not work.

Codepaths like intel_map_page() handle the IOMMU_DOMAIN_DMA case by
creating an iommu DMA mapping, and fall back on dma_direct_map_page()
for the IOMMU_DOMAIN_IDENTITY case. However, handling of the IDENTITY
case is broken when intel_map_page() handles a subdevice.

We observe that at iommu attach time, dmar_insert_one_dev_info() for
the subdevices will never set dev->archdata.iommu. This is because
that function uses find_domain() to check if there is already an IOMMU
for the device, and find_domain() then defers to the real DMA device
which does have one. Thus dmar_insert_one_dev_info() returns without
assigning dev->archdata.iommu.

Then, later:

1. intel_map_page() checks if an IOMMU mapping is needed by calling
    iommu_need_mapping() on the subdevice. identity_mapping() returns
    false because dev->archdata.iommu is NULL, so iommu_need_mapping()
    returns true, indicating that a mapping is needed.
2. __intel_map_single() is called to create the mapping.
3. __intel_map_single() calls find_domain(). This function now returns
    the IDENTITY domain corresponding to the real DMA device.
4. __intel_map_single() calls domain_get_iommu() on this "real" domain.
    A failure is hit and the entire operation is aborted, because this
    codepath is not intended to handle IDENTITY mappings:
        if (WARN_ON(domain->domain.type != IOMMU_DOMAIN_DMA))
                return NULL;

Fix this by using the real DMA device when checking if a mapping is
needed. The IDENTITY case will then directly fall back on
dma_direct_map_page(). The subdevice DMA mask is still considered in
order to handle any situations where (e.g.) the subdevice only supports
32-bit DMA with the real DMA requester having a 64-bit DMA mask.

With respect, this is problematic. The parent and all subdevices share
a single translation entry, so the DMA mask should be consistent.

Otherwise, suppose subdevice A has 64-bit DMA capability and uses an
identity domain for DMA translation, while subdevice B has only 32-bit
DMA capability and is forced to switch to a DMA domain. Subdevice A
will then be impacted without any notification.

Best regards,
baolu
_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu