On 2022-05-05 20:27, Jason Gunthorpe wrote:
On Thu, May 05, 2022 at 07:56:59PM +0100, Robin Murphy wrote:

Ack to that, there are certainly further improvements to consider once we've
got known-working code into a released kernel, but let's not get ahead of
ourselves just now.

Yes please
(I'm pretty sure we could get away with a single blocking domain per IOMMU
instance, rather than per group, but I deliberately saved that one for later
- right now one per group to match default domains is simpler to reason
about and less churny in the context of the current ownership patches)
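
For illustration only, the per-group arrangement being described could look
roughly like the following in core code (as it would sit in
drivers/iommu/iommu.c, where struct group_device and __iommu_domain_alloc()
are visible) - a sketch, with the helper name and the group->blocking_domain
field assumed rather than taken from the actual ownership patches:

static int iommu_group_alloc_blocking_domain(struct iommu_group *group)
{
	/* Any member device will do for reaching the right IOMMU driver */
	struct group_device *gdev = list_first_entry(&group->devices,
						     struct group_device, list);

	if (group->blocking_domain)
		return 0;

	/* Allocated once per group, mirroring how default domains are handled */
	group->blocking_domain = __iommu_domain_alloc(gdev->dev->bus,
						      IOMMU_DOMAIN_BLOCKED);
	if (!group->blocking_domain)
		return -ENOMEM;
	return 0;
}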

I noticed this too..

But I thought the driver could do a better job of this. There is no
reason it has to allocate memory to return an IOMMU_DOMAIN_BLOCKED
domain - this struct could be a globally allocated singleton for the
entire driver, and that would be an even bigger memory saving.

I was thinking about the domain pointers themselves as well, but on
reflection, little systems are most likely to have the simpler IOMMUs
with one fixed group per instance anyway, while the kind of systems
which end up with a couple of hundred groups can probably bear the cost
of a couple of hundred extra pointers. So yeah, probably not worth the
code cost of chasing down group->devices->dev->iommu->blocking_domain
in practice.
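
Spelled out, sharing one per instance would mean a lookup along these
lines - purely hypothetical, since dev->iommu has no blocking_domain
member today:

static struct iommu_domain *
iommu_group_blocking_domain(struct iommu_group *group)
{
	/* Chase group->devices->dev->iommu->blocking_domain */
	struct group_device *gdev = list_first_entry(&group->devices,
						     struct group_device, list);

	/*
	 * A blocking_domain hanging off struct dev_iommu is made up here,
	 * standing in for per-IOMMU-instance storage.
	 */
	return gdev->dev->iommu->blocking_domain;
}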

For instance, here is a sketch for the Intel driver, based on Baolu's
remark that intel_iommu_detach_device() already establishes blocking
behavior on detach_dev (Baolu, if you like it, can you make a proper
patch?):

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index df5c62ecf942b8..317945f6134d42 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4324,6 +4324,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
                return domain;
        case IOMMU_DOMAIN_IDENTITY:
                return &si_domain->domain;
+       case IOMMU_DOMAIN_BLOCKED:
+               return &intel_blocking_domain;
        default:
                return NULL;
        }
@@ -4404,12 +4406,25 @@ static int intel_iommu_attach_device(struct iommu_domain *domain,
        return domain_add_dev_info(to_dmar_domain(domain), dev);
  }
-static void intel_iommu_detach_device(struct iommu_domain *domain,
-                                     struct device *dev)
+static int intel_iommu_block_dev(struct iommu_domain *domain,
+                                struct device *dev)
  {
+       if (!device_to_iommu(dev, NULL, NULL))
+               return -EINVAL;
+
        dmar_remove_one_dev_info(dev);
+       return 0;
  }
+static struct iommu_domain intel_blocking_domain = {
+       .type = IOMMU_DOMAIN_BLOCKED,
+       .ops = &(const struct iommu_domain_ops){
+               .attach_dev = intel_iommu_block_dev,
+               // Fix core code so this works
+               .free = NULL,

TBH unless and until it becomes commonplace, I'd just have drivers
provide a no-op .free callback for least surprise, if their .alloc does
something funky. But either way, with domain ops now we can so easily
do whatever we need, yay!
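
Something like the below, say - untested, and the function name is just
made up for the sketch:

static void intel_iommu_blocking_domain_free(struct iommu_domain *domain)
{
	/* The blocking domain is a static singleton; nothing to free */
}

...and then .free = intel_iommu_blocking_domain_free in
intel_blocking_domain's ops instead of the NULL above.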

Cheers,
Robin.

+       },
+};
+
  static int intel_iommu_map(struct iommu_domain *domain,
                           unsigned long iova, phys_addr_t hpa,
                           size_t size, int iommu_prot, gfp_t gfp)
@@ -4890,7 +4905,6 @@ const struct iommu_ops intel_iommu_ops = {
  #endif
        .default_domain_ops = &(const struct iommu_domain_ops) {
                .attach_dev             = intel_iommu_attach_device,
-               .detach_dev             = intel_iommu_detach_device,
                .map_pages              = intel_iommu_map_pages,
                .unmap_pages            = intel_iommu_unmap_pages,
                .iotlb_sync_map         = intel_iommu_iotlb_sync_map,

Thanks,
Jason