Re: [PATCH] iommu/arm-smmu-v3: add nr_ats_masters to avoid unnecessary operations

2019-08-13 Thread Leizhen (ThunderTown)



On 2019/8/14 1:10, Will Deacon wrote:
> On Mon, Aug 12, 2019 at 11:42:17AM +0100, John Garry wrote:
>> On 01/08/2019 13:20, Zhen Lei wrote:
>>> When (smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS) is true, even if an
>>> smmu domain does not contain any ATS master, the operations of
>>> arm_smmu_atc_inv_to_cmd() and the lock protection in arm_smmu_atc_inv_domain()
>>> are always executed. This impacts performance, especially in multi-core
>>> and stress scenarios. In my FIO test scenario, performance dropped by
>>> about 8%.
>>>
>>> In fact, we can use an atomic member to record how many ATS masters the
>>> smmu contains, and check it without traversing the list and checking all
>>> masters one by one under the lock.
>>>
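
For illustration, a minimal sketch of the counting scheme being proposed
(the names and the condensed structure below are hypothetical stand-ins,
not the actual v2 patch):

#include <linux/atomic.h>
#include <linux/spinlock.h>
#include <linux/list.h>

/* Sketch: a per-domain atomic count of attached ATS masters,
 * incremented when ATS is enabled for a master and decremented
 * when it is disabled. */
struct sketch_domain {
	atomic_t		nr_ats_masters;
	spinlock_t		devices_lock;
	struct list_head	devices;
};

static int sketch_atc_inv_domain(struct sketch_domain *d)
{
	/* Lock-free early out: no ATS masters means nothing to invalidate. */
	if (atomic_read(&d->nr_ats_masters) == 0)
		return 0;

	/*
	 * Only now pay for arm_smmu_atc_inv_to_cmd() and the list walk
	 * under devices_lock, as the real invalidation path does.
	 */
	return 0;
}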
>>
>> Hi Will, Robin, Jean-Philippe,
>>
>> Can you kindly check this issue? We have seen a significant performance
>> regression here.
> 
> Sorry, John: Robin and Jean-Philippe are off at the moment and I've been
> swamped dealing with the arm64 queue. I'll try to get to this tomorrow.

Hi, all:
   I found my patch has a mistake, see below. I'm sorry I didn't notice this
coupling.
I'm preparing v2.

> @@ -1915,10 +1921,10 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master)
>   list_del(&master->domain_head);
>   spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>  
> - master->domain = NULL;
>   arm_smmu_install_ste_for_dev(master);

"master->domain = NULL" is needed in arm_smmu_install_ste_for_dev().

>  
>   arm_smmu_disable_ats(master);
> + master->domain = NULL;
>  }
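
To make the coupling explicit, here is a hedged sketch of one way to
untangle it (not the actual v2 patch): cache the domain in a local so
the pointer can be cleared before the STE is rewritten, while the ATS
teardown still sees the old domain. sketch_disable_ats() is a
hypothetical variant that takes the domain as an explicit parameter.

static void sketch_detach_dev(struct arm_smmu_master *master)
{
	struct arm_smmu_domain *smmu_domain = master->domain;

	master->domain = NULL;			/* install_ste keys off NULL */
	arm_smmu_install_ste_for_dev(master);
	sketch_disable_ats(master, smmu_domain); /* still sees the old domain */
}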

> 
> Will



RE: [PATCH 15/15] iommu/arm-smmu: Add context init implementation hook

2019-08-13 Thread Krishna Reddy
Tested-by: Krishna Reddy

Validated the entire patch set on a Tegra194 SoC based platform and confirmed
that the arm-smmu driver remains functional.

-KR

-Original Message-
From: Robin Murphy  
Sent: Friday, August 9, 2019 10:08 AM
To: w...@kernel.org
Cc: iommu@lists.linux-foundation.org; linux-arm-ker...@lists.infradead.org; 
j...@8bytes.org; vivek.gau...@codeaurora.org; bjorn.anders...@linaro.org; 
Krishna Reddy ; gregory.clem...@bootlin.com; 
robdcl...@gmail.com
Subject: [PATCH 15/15] iommu/arm-smmu: Add context init implementation hook

Allocating and initialising a context for a domain is another point where 
certain implementations are known to want special behaviour.
Currently the other half of the Cavium workaround comes into play here, so 
let's finish the job to get the whole thing right out of the way.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/arm-smmu-impl.c | 39 +--
 drivers/iommu/arm-smmu.c  | 51 +++
 drivers/iommu/arm-smmu.h  | 42 +++--
 3 files changed, 86 insertions(+), 46 deletions(-)

diff --git a/drivers/iommu/arm-smmu-impl.c b/drivers/iommu/arm-smmu-impl.c
index c8904da08354..7a657d47b6ec 100644
--- a/drivers/iommu/arm-smmu-impl.c
+++ b/drivers/iommu/arm-smmu-impl.c
@@ -48,6 +48,12 @@ const struct arm_smmu_impl calxeda_impl = {
 };
 
 
+struct cavium_smmu {
+   struct arm_smmu_device smmu;
+   u32 id_base;
+};
+#define to_csmmu(s)	container_of(s, struct cavium_smmu, smmu)
+
 static int cavium_cfg_probe(struct arm_smmu_device *smmu)
 {
 	static atomic_t context_count = ATOMIC_INIT(0);
@@ -56,17 +62,46 @@ static int cavium_cfg_probe(struct arm_smmu_device *smmu)
 	 * Ensure ASID and VMID allocation is unique across all SMMUs in
 	 * the system.
 	 */
-	smmu->cavium_id_base = atomic_fetch_add(smmu->num_context_banks,
+	to_csmmu(smmu)->id_base = atomic_fetch_add(smmu->num_context_banks,
 						   &context_count);
 	dev_notice(smmu->dev, "\tenabling workaround for Cavium erratum 27704\n");
 
 	return 0;
 }
 
+int cavium_init_context(struct arm_smmu_domain *smmu_domain)
+{
+   u32 id_base = to_csmmu(smmu_domain->smmu)->id_base;
+
+   if (smmu_domain->stage == ARM_SMMU_DOMAIN_S2)
+   smmu_domain->cfg.vmid += id_base;
+   else
+   smmu_domain->cfg.asid += id_base;
+
+   return 0;
+}
+
 const struct arm_smmu_impl cavium_impl = {
.cfg_probe = cavium_cfg_probe,
+   .init_context = cavium_init_context,
 };
 
+struct arm_smmu_device *cavium_smmu_impl_init(struct arm_smmu_device *smmu)
+{
+	struct cavium_smmu *csmmu;
+
+	csmmu = devm_kzalloc(smmu->dev, sizeof(*csmmu), GFP_KERNEL);
+	if (!csmmu)
+		return ERR_PTR(-ENOMEM);
+
+	csmmu->smmu = *smmu;
+	csmmu->smmu.impl = &cavium_impl;
+
+	devm_kfree(smmu->dev, smmu);
+
+	return &csmmu->smmu;
+}
+
 
 #define ARM_MMU500_ACTLR_CPRE  (1 << 1)
 
@@ -121,7 +156,7 @@ struct arm_smmu_device *arm_smmu_impl_init(struct arm_smmu_device *smmu)
 		smmu->impl = &calxeda_impl;
 
 	if (smmu->model == CAVIUM_SMMUV2)
-		smmu->impl = &cavium_impl;
+		return cavium_smmu_impl_init(smmu);
 
 	if (smmu->model == ARM_MMU500)
 		smmu->impl = &arm_mmu500_impl;
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 298ab9e6a6cd..1c1c9ef91d7b 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -27,7 +27,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
 
@@ -111,44 +110,6 @@ struct arm_smmu_master_cfg {
 #define for_each_cfg_sme(fw, i, idx) \
 	for (i = 0; idx = fwspec_smendx(fw, i), i < fw->num_ids; ++i)
-enum arm_smmu_context_fmt {
-   ARM_SMMU_CTX_FMT_NONE,
-   ARM_SMMU_CTX_FMT_AARCH64,
-   ARM_SMMU_CTX_FMT_AARCH32_L,
-   ARM_SMMU_CTX_FMT_AARCH32_S,
-};
-
-struct arm_smmu_cfg {
-   u8  cbndx;
-   u8  irptndx;
-   union {
-   u16 asid;
-   u16 vmid;
-   };
-   enum arm_smmu_cbar_type cbar;
-   enum arm_smmu_context_fmt   fmt;
-};
-#define INVALID_IRPTNDX		0xff
-
-enum arm_smmu_domain_stage {
-   ARM_SMMU_DOMAIN_S1 = 0,
-   ARM_SMMU_DOMAIN_S2,
-   ARM_SMMU_DOMAIN_NESTED,
-   ARM_SMMU_DOMAIN_BYPASS,
-};
-
-struct arm_smmu_domain {
-   struct arm_smmu_device  *smmu;
-   struct io_pgtable_ops   *pgtbl_ops;
-   const struct iommu_gather_ops   *tlb_ops;
-   struct arm_smmu_cfg cfg;
-   enum arm_smmu_domain_stage  stage;
-   boolnon_strict;
-   struct mutexinit_mutex; /* Protects smmu pointer */
-	spinlock_t		cb_lock; /* Serialises ATS1* ops and TLB syncs */

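The subclassing idiom above is worth spelling out. A standalone sketch
of the pattern (plain C; the struct and function names are local to
this example):

#include <stddef.h>

/* Same macro the kernel provides in <linux/kernel.h>. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct smmu { int model; };

/* "Subclass": embed the generic struct by value, never as a pointer. */
struct cavium_priv {
	struct smmu	smmu;
	unsigned int	id_base;	/* vendor-private state */
};

/* Recover the outer struct from a pointer to the embedded member. */
static unsigned int id_base_of(struct smmu *s)
{
	return container_of(s, struct cavium_priv, smmu)->id_base;
}

This is why cavium_smmu_impl_init() copies the probed structure into the
embedded member and frees the original: every later caller holding a
struct arm_smmu_device pointer can then be upgraded with to_csmmu().
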
Re: [PATCH] iommu/arm-smmu-v3: add nr_ats_masters to avoid unnecessary operations

2019-08-13 Thread Will Deacon
On Mon, Aug 12, 2019 at 11:42:17AM +0100, John Garry wrote:
> On 01/08/2019 13:20, Zhen Lei wrote:
> > When (smmu_domain->smmu->features & ARM_SMMU_FEAT_ATS) is true, even if an
> > smmu domain does not contain any ATS master, the operations of
> > arm_smmu_atc_inv_to_cmd() and the lock protection in arm_smmu_atc_inv_domain()
> > are always executed. This impacts performance, especially in multi-core
> > and stress scenarios. In my FIO test scenario, performance dropped by
> > about 8%.
> > 
> > In fact, we can use an atomic member to record how many ATS masters the
> > smmu contains, and check it without traversing the list and checking all
> > masters one by one under the lock.
> > 
> 
> Hi Will, Robin, Jean-Philippe,
> 
> Can you kindly check this issue? We have seen a significant performance
> regression here.

Sorry, John: Robin and Jean-Philippe are off at the moment and I've been
swamped dealing with the arm64 queue. I'll try to get to this tomorrow.

Will


Re: [PATCH v4 13/22] iommu/vt-d: Enlightened PASID allocation

2019-08-13 Thread Jacob Pan
Hi Eric,

Apologies for the delayed response below,

On Tue, 16 Jul 2019 11:29:30 +0200
Auger Eric  wrote:

> Hi Jacob,
> On 6/9/19 3:44 PM, Jacob Pan wrote:
> > From: Lu Baolu 
> > 
> > If the Intel IOMMU runs in caching mode, a.k.a. virtual IOMMU, the
> > IOMMU driver should rely on the emulation software to allocate
> > and free PASIDs. The Intel vt-d spec revision 3.0 defines a
> > register set to support this. This includes a capability register,
> > a virtual command register and a virtual response register. Refer
> > to sections 10.4.42, 10.4.43 and 10.4.44 for more information.
> > 
> > This patch adds the enlightened PASID allocation/free interfaces
> > via the virtual command register.
> >
> > Cc: Ashok Raj 
> > Cc: Jacob Pan 
> > Cc: Kevin Tian 
> > Signed-off-by: Liu Yi L 
> > Signed-off-by: Lu Baolu 
> > Signed-off-by: Jacob Pan 
> > ---
> >  drivers/iommu/intel-pasid.c | 76 ++++++++++++++++++++++++++++++++++++
> >  drivers/iommu/intel-pasid.h | 13 +++-
> >  include/linux/intel-iommu.h |  2 ++
> >  3 files changed, 90 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/iommu/intel-pasid.c b/drivers/iommu/intel-pasid.c
> > index 2fefeaf..69fddd3 100644
> > --- a/drivers/iommu/intel-pasid.c
> > +++ b/drivers/iommu/intel-pasid.c
> > @@ -63,6 +63,82 @@ void *intel_pasid_lookup_id(int pasid)
> > return p;
> >  }
> >  
> > +int vcmd_alloc_pasid(struct intel_iommu *iommu, unsigned int *pasid)
> > +{
> > +   u64 res;
> > +   u64 cap;
> > +   u8 status_code;
> > +   unsigned long flags;
> > +   int ret = 0;
> > +
> > +   if (!ecap_vcs(iommu->ecap)) {
> > +   pr_warn("IOMMU: %s: Hardware doesn't support
> > virtual command\n",
> > +   iommu->name);
> > +   return -ENODEV;
> > +   }
> > +
> > +   cap = dmar_readq(iommu->reg + DMAR_VCCAP_REG);
> > +   if (!(cap & DMA_VCS_PAS)) {
> > +   pr_warn("IOMMU: %s: Emulation software doesn't
> > support PASID allocation\n",
> > +   iommu->name);
> > +   return -ENODEV;
> > +   }
> > +
> > +   raw_spin_lock_irqsave(&iommu->register_lock, flags);
> > +   dmar_writeq(iommu->reg + DMAR_VCMD_REG, VCMD_CMD_ALLOC);
> > +   IOMMU_WAIT_OP(iommu, DMAR_VCRSP_REG, dmar_readq,
> > + !(res & VCMD_VRSP_IP), res);
> > +   raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
> > +
> > +   status_code = VCMD_VRSP_SC(res);
> > +   switch (status_code) {
> > +   case VCMD_VRSP_SC_SUCCESS:
> > +   *pasid = VCMD_VRSP_RESULT(res);
> > +   break;
> > +   case VCMD_VRSP_SC_NO_PASID_AVAIL:
> > +   pr_info("IOMMU: %s: No PASID available\n",
> > iommu->name);
> > +   ret = -ENOMEM;
> > +   break;
> > +   default:
> > +   ret = -ENODEV;
> > +   pr_warn("IOMMU: %s: Unkonwn error code %d\n",  
> unknown
> s/unknown/unexpected
sounds good.
> > +   iommu->name, status_code);
> > +   }
> > +
> > +   return ret;
> > +}
> > +
> > +void vcmd_free_pasid(struct intel_iommu *iommu, unsigned int pasid)
> > +{
> > +   u64 res;
> > +   u8 status_code;
> > +   unsigned long flags;
> > +
> > +   if (!ecap_vcs(iommu->ecap)) {
> > +   pr_warn("IOMMU: %s: Hardware doesn't support
> > virtual command\n",
> > +   iommu->name);
> > +   return;
> > +   }  
> Logically shouldn't you also check DMAR_VCCAP_REG as well?
Good point, we may have more than just PASID allocation virtual
commands.
> > +
> > +   raw_spin_lock_irqsave(&iommu->register_lock, flags);
> > +   dmar_writeq(iommu->reg + DMAR_VCMD_REG, (pasid << 8) | VCMD_CMD_FREE);
> > +   IOMMU_WAIT_OP(iommu, DMAR_VCRSP_REG, dmar_readq,
> > + !(res & VCMD_VRSP_IP), res);
> > +   raw_spin_unlock_irqrestore(&iommu->register_lock, flags);
> > +
> > +   status_code = VCMD_VRSP_SC(res);
> > +   switch (status_code) {
> > +   case VCMD_VRSP_SC_SUCCESS:
> > +   break;
> > +   case VCMD_VRSP_SC_INVALID_PASID:
> > +   pr_info("IOMMU: %s: Invalid PASID\n", iommu->name);
> > +   break;
> > +   default:
> > +   pr_warn("IOMMU: %s: Unkonwn error code %d\n",
> > +   iommu->name, status_code);  
> s/Unkonwn/Unexpected
will fix. 
> > +   }
> > +}
> > +
> >  /*
> >   * Per device pasid table management:
> >   */
> > diff --git a/drivers/iommu/intel-pasid.h b/drivers/iommu/intel-pasid.h
> > index 23537b3..4b26ab5 100644
> > --- a/drivers/iommu/intel-pasid.h
> > +++ b/drivers/iommu/intel-pasid.h
> > @@ -19,6 +19,16 @@
> >  #define PASID_PDE_SHIFT			6
> >  #define MAX_NR_PASID_BITS		20
> >  
> > +/* Virtual command interface for enlightened pasid management. */
> > +#define VCMD_CMD_ALLOC 0x1
> > +#define VCMD_CMD_FREE  0x2
> > +#define VCMD_VRSP_IP   0x1
> > +#define VCMD_VRSP_SC(e)		(((e) >> 1) & 0x3)
> > +#define VCMD_VRSP_SC_SUCCESS   0
> > +#define VCMD_VRSP_SC_NO_PASID_AVAIL1
> > +#define VCMD_VRSP_SC_INVALID_PASID 
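
For context, a hypothetical caller sketch showing how the two interfaces
pair up in a guest driver (the vcmd_* names are from the patch; the
surrounding function is illustrative only, with error handling
abbreviated):

static int sketch_use_vcmd(struct intel_iommu *iommu)
{
	unsigned int pasid;
	int ret;

	ret = vcmd_alloc_pasid(iommu, &pasid);	/* VCMD_CMD_ALLOC, poll VCRSP */
	if (ret)
		return ret;			/* e.g. -ENOMEM: none available */

	/* ... program the PASID table entry and use the PASID ... */

	vcmd_free_pasid(iommu, pasid);		/* VCMD_CMD_FREE when done */
	return 0;
}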

RE: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-08-13 Thread Ram Pai
On Wed, Aug 14, 2019 at 12:24:39AM +1000, David Gibson wrote:
> On Tue, Aug 13, 2019 at 03:26:17PM +0200, Christoph Hellwig wrote:
> > On Mon, Aug 12, 2019 at 07:51:56PM +1000, David Gibson wrote:
> > > AFAICT we already kind of abuse this for the VIRTIO_F_IOMMU_PLATFORM,
> > > because, to handle cases where it *is* a device limitation, we
> > > assume that if the hypervisor presents VIRTIO_F_IOMMU_PLATFORM then
> > > the guest *must* select it.
> > > 
> > > What we actually need here is for the hypervisor to present
> > > VIRTIO_F_IOMMU_PLATFORM as available, but not required.  Then we need
> > > a way for the platform core code to communicate to the virtio driver
> > > that *it* requires the IOMMU to be used, so that the driver can select
> > > or not the feature bit on that basis.
> > 
> > I agree with the above, but that just brings us back to the original
> > issue - the whole bypass of the DMA OPS should be an option that the
> > device can offer, not the other way around.  And we really need to
> > fix that root cause instead of doctoring around it.
> 
> I'm not exactly sure what you mean by "device" in this context.  Do
> you mean the hypervisor (qemu) side implementation?
> 
> You're right that this was the wrong way around to begin with, but as
> well as being hard to change now, I don't see how it really addresses
> the current problem.  The device could default to IOMMU and allow
> bypass, but the driver would still need to get information from the
> platform to know that it *can't* accept that option in the case of a
> secure VM.  Reversed sense, but the same basic problem.
> 
> The hypervisor is not, and cannot be, aware of the secure VM
> restrictions - only the guest side platform code knows that.

This statement is almost entirely right. I will rephrase it to make it
entirely right.   

The hypervisor is not, and cannot be, aware of the secure VM
requirement that it needs to do some special processing that has nothing
to do with DMA address translation - only the guest side platform code
knows that.

BTW: I do not consider 'bounce buffering' to be 'DMA address translation'.
DMA address translation translates a CPU address to a DMA address. Bounce
buffering moves the data from one buffer at a given CPU address to
another buffer at a different CPU address. Unfortunately, the current
DMA ops conflate the two. The need to do 'DMA address translation'
is something the device can enforce. But the need to do bounce
buffering is something the device should not be aware of; it should be
entirely a decision made locally by the kernel/driver in the secure VM.
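
To make the distinction concrete, a schematic sketch (illustrative
pseudo-C; iommu_translate() and phys_addr_of() are made-up names, not
kernel APIs):

/* DMA address translation: same bytes, different address space. */
dma_addr = iommu_translate(cpu_addr);		/* no data is copied */

/* Bounce buffering: the data physically moves to another buffer. */
memcpy(bounce_buf, cpu_addr, len);		/* copy into accessible memory */
dma_addr = phys_addr_of(bounce_buf);		/* DMA targets the copy */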

RP

> 
> -- 
> David Gibson| I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
> | _way_ _around_!
> http://www.ozlabs.org/~dgibson



-- 
Ram Pai



Re: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-08-13 Thread David Gibson
On Tue, Aug 13, 2019 at 03:26:17PM +0200, Christoph Hellwig wrote:
> On Mon, Aug 12, 2019 at 07:51:56PM +1000, David Gibson wrote:
> > AFAICT we already kind of abuse this for the VIRTIO_F_IOMMU_PLATFORM,
> > because, to handle cases where it *is* a device limitation, we
> > assume that if the hypervisor presents VIRTIO_F_IOMMU_PLATFORM then
> > the guest *must* select it.
> > 
> > What we actually need here is for the hypervisor to present
> > VIRTIO_F_IOMMU_PLATFORM as available, but not required.  Then we need
> > a way for the platform core code to communicate to the virtio driver
> > that *it* requires the IOMMU to be used, so that the driver can select
> > or not the feature bit on that basis.
> 
> I agree with the above, but that just brings us back to the original
> issue - the whole bypass of the DMA OPS should be an option that the
> device can offer, not the other way around.  And we really need to
> fix that root cause instead of doctoring around it.

I'm not exactly sure what you mean by "device" in this context.  Do
you mean the hypervisor (qemu) side implementation?

You're right that this was the wrong way around to begin with, but as
well as being hard to change now, I don't see how it really addresses
the current problem.  The device could default to IOMMU and allow
bypass, but the driver would still need to get information from the
platform to know that it *can't* accept that option in the case of a
secure VM.  Reversed sense, but the same basic problem.

The hypervisor is not, and cannot be, aware of the secure VM
restrictions - only the guest side platform code knows that.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [RFC PATCH] virtio_ring: Use DMA API if guest memory is encrypted

2019-08-13 Thread Christoph Hellwig
On Mon, Aug 12, 2019 at 07:51:56PM +1000, David Gibson wrote:
> AFAICT we already kind of abuse this for the VIRTIO_F_IOMMU_PLATFORM,
> because, to handle cases where it *is* a device limitation, we
> assume that if the hypervisor presents VIRTIO_F_IOMMU_PLATFORM then
> the guest *must* select it.
> 
> What we actually need here is for the hypervisor to present
> VIRTIO_F_IOMMU_PLATFORM as available, but not required.  Then we need
> a way for the platform core code to communicate to the virtio driver
> that *it* requires the IOMMU to be used, so that the driver can select
> or not the feature bit on that basis.

I agree with the above, but that just brings us back to the original
issue - the whole bypass of the DMA OPS should be an option that the
device can offer, not the other way around.  And we really need to
fix that root cause instead of doctoring around it.
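
A sketch of the negotiation model being argued for (hypothetical helper
name platform_forces_dma_api(); this is not existing virtio core code):

/*
 * Guest-side decision: the device offers VIRTIO_F_IOMMU_PLATFORM, but
 * the platform - not the device - can force the DMA API even when the
 * feature is absent, e.g. for a secure VM that must bounce buffers.
 */
static bool sketch_use_dma_api(u64 device_features)
{
	if (device_features & BIT_ULL(VIRTIO_F_IOMMU_PLATFORM))
		return true;	/* device requires translation */

	return platform_forces_dma_api();
}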


Re: [PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api

2019-08-13 Thread Christoph Hellwig
On Tue, Aug 13, 2019 at 08:09:26PM +0800, Tom Murphy wrote:
> Hi Christoph,
> 
> I quit my job and am having a great time traveling South East Asia.

Enjoy!  I just returned from my vacation.

> I definitely don't want this work to go to waste and I hope to repost it
> later this week but I can't guarantee it.
> 
> Let me know if you need this urgently.

It isn't in any strict sense urgent.  I just have various DMA API plans
that I'd rather just implement in dma-direct and dma-iommu rather than
also in two additional commonly used iommu drivers.  So on the one hand
the sooner the better; on the other hand, there's no real urgency.


Re: [PATCH v4 0/5] iommu/amd: Convert the AMD iommu driver to the dma-iommu api

2019-08-13 Thread Tom Murphy
Hi Christoph,

I quit my job and am having a great time traveling South East Asia.

I definitely don't want this work to go to waste and I hope to repost it
later this week but I can't guarantee it.

Let me know if you need this urgently.

Thanks,
Tom

On Sat 10 Aug 2019, 3:20 p.m. Christoph Hellwig,  wrote:

> On Sun, Jun 23, 2019 at 11:19:45PM -0700, Christoph Hellwig wrote:
> > Tom,
> >
> > next time please cc Joerg as the AMD IOMMU maintainer.
> >
> > Joerg, any chance you could review this?  Toms patches to convert the
> > AMD and Intel IOMMU drivers to the dma-iommu code are going to make my
> > life in DMA land significantly easier, so I have a vested interest
> > in this series moving forward :)
>
> Tom, can you repost the series?  Seems like there hasn't been any
> news for a month.
>

Re: [PATCH 2/3] iommu/vt-d: Apply per-device dma_ops

2019-08-13 Thread Lu Baolu

Hi again,

On 8/7/19 11:06 AM, Lu Baolu wrote:
> Hi Christoph,
> 
> On 8/6/19 2:43 PM, Christoph Hellwig wrote:
>> Hi Lu,
>>
>> I really do like the switch to the per-device dma_map_ops, but:
>>
>> On Thu, Aug 01, 2019 at 02:01:55PM +0800, Lu Baolu wrote:
>>> The current Intel IOMMU driver sets the system level dma_ops. This
>>> implementation has at least the following drawbacks: 1) each
>>> dma API call will go through the IOMMU driver even when the devices
>>> are using identity mapped domains; 2) if the user requests an
>>> identity mapped domain (a.k.a. bypass iommu translation), the
>>> driver might fall back to a dma domain blindly if the device is
>>> not able to address all system memory.
>>
>> This is very clearly a behavioral regression.  The intel-iommu driver
>> has always used the iommu mapping to provide decent support for
>> devices that do not have the full 64-bit addressing capability, and
>> changing this will make a lot of existing setups go slower.
> 
> I agree with you that we should keep the capability and avoid possible
> performance regressions on some setups. But, instead of hard-coding this
> in the iommu driver, I prefer a more scalable way.
> 
> For example, the concept of a per-group default domain type [1] seems to
> be a good choice. The kernel could be statically compiled as by-default
> "pass through" or "translate everything". The per-group default domain
> type API could then be used by a privileged user to tweak some of the
> groups for better performance, either by 1) bypassing iommu translation
> for trusted super-speed devices, or 2) applying iommu translation to
> access system memory that is beyond the device's addressing capability
> (without the need for a bounce buffer).
> 
> [1] https://www.spinics.net/lists/iommu/msg37113.html

The code that this patch is trying to remove also looks buggy. The check
and replace of the domain happens in each DMA API call, but there isn't
any lock to serialize them.
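
A minimal sketch of that race, with the obvious (hypothetical) fix of
serializing the check-and-replace; the two sketch_* helpers stand in
for the driver's real check and switch logic:

static DEFINE_MUTEX(domain_switch_lock);	/* the lock the code lacks */

static void sketch_check_and_switch(struct device *dev)
{
	/*
	 * Without a lock, two concurrent DMA API calls can both observe an
	 * unsuitable domain and race to replace it. Doing the check and the
	 * replacement under one mutex closes that window.
	 */
	mutex_lock(&domain_switch_lock);
	if (sketch_needs_dma_domain(dev))
		sketch_replace_domain(dev);
	mutex_unlock(&domain_switch_lock);
}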

Best regards,
Lu Baolu