RE: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for NESTED stage with BTM

2021-07-21 Thread Shameerali Kolothum Thodi
Hi Jean,

> -Original Message-
> From: Jean-Philippe Brucker [mailto:jean-phili...@linaro.org]
> Sent: 04 March 2021 17:11
> To: Shameerali Kolothum Thodi 
> Cc: linux-arm-ker...@lists.infradead.org; iommu@lists.linux-foundation.org;
> kvm...@lists.cs.columbia.edu; m...@kernel.org;
> alex.william...@redhat.com; eric.au...@redhat.com;
> zhangfei@linaro.org; Jonathan Cameron
> ; Zengtao (B) ;
> linux...@openeuler.org
> Subject: Re: [RFC PATCH 4/5] iommu/arm-smmu-v3: Use pinned VMID for
> NESTED stage with BTM

[...]

> >
> > kfree(smmu_domain);
> > @@ -3199,6 +3230,17 @@ static int arm_smmu_attach_pasid_table(struct
> iommu_domain *domain,
> > !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB))
> > goto out;
> >
> > +   if (smmu->features & ARM_SMMU_FEAT_BTM) {
> > +   ret = arm_smmu_pinned_vmid_get(smmu_domain);
> > +   if (ret < 0)
> > +   goto out;
> > +
> > +   if (smmu_domain->s2_cfg.vmid)
> > +   arm_smmu_bitmap_free(smmu->vmid_map,
> smmu_domain->s2_cfg.vmid);
> > +
> > +   smmu_domain->s2_cfg.vmid = (u16)ret;
> 
> That will require a TLB invalidation on the old VMID, once the STE is
> rewritten.
> 
> More generally I think this pinned VMID set conflicts with that of
> stage-2-only domains (which is the default state until a guest attaches a
> PASID table). Say you have one guest using DOMAIN_NESTED without PASID
> table, just DMA to IPA using VMID 0x8000. Now another guest attaches a
> PASID table and obtains the same VMID from KVM. The stage-2 translation
> might use TLB entries from the other guest, no?  They'll both create
> stage-2 TLB entries with {StreamWorld=NS-EL1, VMID=0x8000}

Now that we are trying to align the KVM VMID allocation algorithm with that
of the ASID allocator [1], I attempted to use it for the SMMU pinned VMID
allocation. But the issue you mentioned above is still valid.

As a solution, what I have tried now is to follow what the pinned ASID does
in SVA:
 - Use an xarray for the private VMIDs
 - Get a pinned VMID from KVM for DOMAIN_NESTED with a PASID table
 - If the new pinned VMID is already in use as a private VMID, update that
   private VMID (a VMID update to a live STE).

This seems to work, but I still need to run more tests with it.
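
For illustration, a rough sketch of that xarray scheme (this is not the
posted patch; arm_smmu_realloc_private_vmid() and arm_smmu_update_s2_vmid()
are invented helper names for the "move the private VMID and rewrite the
live STE" step):

#include <linux/xarray.h>

/* Hypothetical helpers, for illustration only */
int arm_smmu_realloc_private_vmid(struct arm_smmu_domain *smmu_domain);
void arm_smmu_update_s2_vmid(struct arm_smmu_domain *smmu_domain, u16 vmid);

/* Private (stage-2-only) VMIDs, indexed by VMID value */
static DEFINE_XARRAY(arm_smmu_private_vmids);

static int arm_smmu_adopt_pinned_vmid(struct arm_smmu_domain *smmu_domain,
				      u16 pinned_vmid)
{
	struct arm_smmu_domain *victim;

	/* Is a private domain already using the VMID KVM pinned for us? */
	victim = xa_load(&arm_smmu_private_vmids, pinned_vmid);
	if (victim) {
		int new_vmid = arm_smmu_realloc_private_vmid(victim);

		if (new_vmid < 0)
			return new_vmid;

		/* Rewrite the victim's STE and invalidate TLBs for the old VMID */
		arm_smmu_update_s2_vmid(victim, new_vmid);
		xa_erase(&arm_smmu_private_vmids, pinned_vmid);
		xa_store(&arm_smmu_private_vmids, new_vmid, victim, GFP_KERNEL);
	}

	smmu_domain->s2_cfg.vmid = pinned_vmid;
	return 0;
}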

> It's tempting to allocate all VMIDs through KVM instead, but that will
> force a dependency on KVM to use VFIO_TYPE1_NESTING_IOMMU and might
> break
> existing users of that extension (though I'm not sure there are any).
> Instead we might need to restrict the SMMU VMID bitmap to match the
> private VMID set in KVM.

Another solution I have in mind is to make the new KVM VMID allocator common
between SMMUv3 and KVM. This would avoid the private/shared VMID split
altogether, and there would be no need for live updates of the STE VMID. One
possible drawback is a smaller number of available KVM VMIDs, but with a
16-bit VMID space I am not sure how much of a concern that is.

Please let me know your thoughts.

Thanks,
Shameer

[1]. 
https://lore.kernel.org/kvmarm/20210616155606.2806-1-shameerali.kolothum.th...@huawei.com/



Re: [Resend RFC PATCH V4 09/13] x86/Swiotlb/HV: Add Swiotlb bounce buffer remap function for HV IVM

2021-07-21 Thread Tianyu Lan

Thanks for review.

On 7/20/2021 9:54 PM, Christoph Hellwig wrote:


Please split the swiotlb changes into a separate patch from the
consumer.


OK. Will update.




  }
+
+/*
+ * hv_map_memory - map memory to extra space in the AMD SEV-SNP Isolation VM.
+ */
+unsigned long hv_map_memory(unsigned long addr, unsigned long size)
+{
+   unsigned long *pfns = kcalloc(size / HV_HYP_PAGE_SIZE,
+ sizeof(unsigned long),
+  GFP_KERNEL);
+   unsigned long vaddr;
+   int i;
+
+   if (!pfns)
+   return (unsigned long)NULL;
+
+   for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
+   pfns[i] = virt_to_hvpfn((void *)addr + i * HV_HYP_PAGE_SIZE) +
+   (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+   vaddr = (unsigned long)vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE,
+   PAGE_KERNEL_IO);
+   kfree(pfns);
+
+   return vaddr;


This seems to miss a 'select VMAP_PFN'. 


VMAP_PFN has been selected in the previous patch "RFC PATCH V4 08/13]
HV/Vmbus: Initialize VMbus ring buffer for Isolation VM"


But more importantly I don't
think this actually works.  Various DMA APIs do expect a struct page
backing, so how is this going to work with say dma_mmap_attrs or
dma_get_sgtable_attrs?


dma_mmap_attrs() and dma_get_sgtable_attrs() take an input virtual address
that belongs to backing memory with a struct page, and return the bounce
buffer's DMA physical address, which is below shared_gpa_boundary (vTOM) and
is passed to Hyper-V via the vmbus protocol.

The newly mapped virtual address is only used to access the bounce buffer in
the swiotlb code and is not used anywhere else; it is stored in mem->vstart.
So the new API set_memory_decrypted_map() in this series is only called from
the swiotlb code. Other platforms may replace set_memory_decrypted() with
set_memory_decrypted_map() as requested.




+static unsigned long __map_memory(unsigned long addr, unsigned long size)
+{
+   if (hv_is_isolation_supported())
+   return hv_map_memory(addr, size);
+
+   return addr;
+}
+
+static void __unmap_memory(unsigned long addr)
+{
+   if (hv_is_isolation_supported())
+   hv_unmap_memory(addr);
+}
+
+unsigned long set_memory_decrypted_map(unsigned long addr, unsigned long size)
+{
+   if (__set_memory_enc_dec(addr, size / PAGE_SIZE, false))
+   return (unsigned long)NULL;
+
+   return __map_memory(addr, size);
+}
+
+int set_memory_encrypted_unmap(unsigned long addr, unsigned long size)
+{
+   __unmap_memory(addr);
+   return __set_memory_enc_dec(addr, size / PAGE_SIZE, true);
+}


Why this obfuscation into all kinds of strange helpers?  Also I think
we want an ops vector (or alternative calls) instead of the random
if checks here.


Yes, agree and will add ops for different platforms to map/unmap memory.




+ * @vstart:The virtual start address of the swiotlb memory pool. The 
swiotlb
+ * memory pool may be remapped in the memory encrypted case and 
store


Normally we'd call this vaddr or cpu_addr.


OK. Will update.




-   set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
-   memset(vaddr, 0, bytes);
+   mem->vstart = (void *)set_memory_decrypted_map((unsigned long)vaddr, 
bytes);


Please always pass kernel virtual addresses as pointers.

And I think these APIs might need better names, e.g.

arch_dma_map_decrypted and arch_dma_unmap_decrypted.

Also these will need fallback versions for non-x86 architectures that
currently use memory encryption.


Sure. Will update in the next version.
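
For illustration, a generic fallback for the suggested hooks might look
roughly like this. The arch_dma_map_decrypted()/arch_dma_unmap_decrypted()
names are only Christoph's suggestion, not an existing kernel API, and the
signatures here are assumptions:

/* Sketch of possible no-op defaults, overridden by architectures that remap */
#ifndef arch_dma_map_decrypted
static inline void *arch_dma_map_decrypted(void *addr, unsigned long size)
{
	/* Most architectures need no extra remapping of decrypted memory */
	return addr;
}
#endif

#ifndef arch_dma_unmap_decrypted
static inline void arch_dma_unmap_decrypted(void *addr, unsigned long size)
{
}
#endif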



Re: [PATCH v2 04/11] memory: mtk-smi: Rename smi_gen to smi_type

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:14 PM Yong Wu  wrote:
>
> Prepare for adding smi sub common. Only rename from smi_gen to smi_type.
> No functional change.
>
> About the current "smi_gen", we have gen1/gen2 that stand for the
> generation number for HW. I plan to add a new type(sub_common), then the
> name "gen" is not proper.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
>  drivers/memory/mtk-smi.c | 24 
>  1 file changed, 12 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index a2213452059d..ff07b14bcd66 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -55,7 +55,7 @@
>  /* All are MMU0 defaultly. Only specialize mmu1 here. */
>  #define F_MMU1_LARB(larbid)(0x1 << SMI_BUS_LARB_SHIFT(larbid))
>
> -enum mtk_smi_gen {
> +enum mtk_smi_type {
> MTK_SMI_GEN1,
> MTK_SMI_GEN2
>  };
> @@ -68,8 +68,8 @@ static const char * const mtk_smi_common_clks_optional[] = 
> {"gals0", "gals1"};
>  static const char * const mtk_smi_larb_clks_optional[] = {"gals"};
>
>  struct mtk_smi_common_plat {
> -   enum mtk_smi_gen gen;
> -   u32  bus_sel; /* Balance some larbs to enter mmu0 or mmu1 
> */
> +   enum mtk_smi_type   type;
> +   u32 bus_sel; /* Balance some larbs to enter mmu0 
> or mmu1 */
>  };
>
>  struct mtk_smi_larb_gen {
> @@ -402,27 +402,27 @@ static struct platform_driver mtk_smi_larb_driver = {
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_common_gen1 = {
> -   .gen = MTK_SMI_GEN1,
> +   .type = MTK_SMI_GEN1,
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_common_gen2 = {
> -   .gen = MTK_SMI_GEN2,
> +   .type = MTK_SMI_GEN2,
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_common_mt6779 = {
> -   .gen= MTK_SMI_GEN2,
> -   .bus_sel= F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(4) |
> - F_MMU1_LARB(5) | F_MMU1_LARB(6) | F_MMU1_LARB(7),
> +   .type = MTK_SMI_GEN2,
> +   .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(4) |
> +   F_MMU1_LARB(5) | F_MMU1_LARB(6) | F_MMU1_LARB(7),
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_common_mt8183 = {
> -   .gen  = MTK_SMI_GEN2,
> +   .type = MTK_SMI_GEN2,
> .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(5) |
> F_MMU1_LARB(7),
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_common_mt8192 = {
> -   .gen  = MTK_SMI_GEN2,
> +   .type = MTK_SMI_GEN2,
> .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(5) |
> F_MMU1_LARB(6),
>  };
> @@ -483,7 +483,7 @@ static int mtk_smi_common_probe(struct platform_device 
> *pdev)
>  * clock into emi clock domain, but for mtk smi gen2, there's no smi 
> ao
>  * base.
>  */
> -   if (common->plat->gen == MTK_SMI_GEN1) {
> +   if (common->plat->type == MTK_SMI_GEN1) {
> res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> common->smi_ao_base = devm_ioremap_resource(dev, res);
> if (IS_ERR(common->smi_ao_base))
> @@ -523,7 +523,7 @@ static int __maybe_unused mtk_smi_common_resume(struct 
> device *dev)
> if (ret)
> return ret;
>
> -   if (common->plat->gen == MTK_SMI_GEN2 && bus_sel)
> +   if (common->plat->type == MTK_SMI_GEN2 && bus_sel)
> writel(bus_sel, common->base + SMI_BUS_SEL);
> return 0;
>  }
> --
> 2.18.0


Re: [PATCH v2 03/11] memory: mtk-smi: Use clk_bulk clock ops

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:23 PM Yong Wu  wrote:
>
> Use the clk_bulk interface instead of the original one to simplify the code.
>
> SMI has several clocks: apb/smi/gals. The apb/smi clocks are required
> for both smi-common and smi-larb, while the gals clocks are optional.
> Thus, use devm_clk_bulk_get for apb/smi and
> devm_clk_bulk_get_optional for gals.
>
> For gals clocks, we already use get_optional for it, then the flag
> "has_gals" is not helpful now, remove it.
>
> Also remove clk fail logs since bulk interface already output fail log.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
>  drivers/memory/mtk-smi.c | 138 +--
>  1 file changed, 47 insertions(+), 91 deletions(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index c5fb51f73b34..a2213452059d 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -60,9 +60,15 @@ enum mtk_smi_gen {
> MTK_SMI_GEN2
>  };
>
> +#define MTK_SMI_CLK_NR_MAX 4
> +
> +/* Always require apb/smi clocks while gals clocks are optional. */
> +static const char * const mtk_smi_clks_required[] = {"apb", "smi"};
> +static const char * const mtk_smi_common_clks_optional[] = {"gals0", 
> "gals1"};
> +static const char * const mtk_smi_larb_clks_optional[] = {"gals"};
> +
>  struct mtk_smi_common_plat {
> enum mtk_smi_gen gen;
> -   bool has_gals;
> u32  bus_sel; /* Balance some larbs to enter mmu0 or mmu1 
> */
>  };
>
> @@ -70,13 +76,12 @@ struct mtk_smi_larb_gen {
> int port_in_larb[MTK_LARB_NR_MAX + 1];
> void (*config_port)(struct device *dev);
> unsigned intlarb_direct_to_common_mask;
> -   boolhas_gals;
>  };
>
>  struct mtk_smi {
> struct device   *dev;
> -   struct clk  *clk_apb, *clk_smi;
> -   struct clk  *clk_gals0, *clk_gals1;
> +   unsigned intclk_num;
> +   struct clk_bulk_dataclks[MTK_SMI_CLK_NR_MAX];
> struct clk  *clk_async; /*only needed by mt2701*/
> union {
> void __iomem*smi_ao_base; /* only for gen1 */
> @@ -95,45 +100,6 @@ struct mtk_smi_larb { /* larb: local arbiter */
> unsigned char   *bank;
>  };
>
> -static int mtk_smi_clk_enable(const struct mtk_smi *smi)
> -{
> -   int ret;
> -
> -   ret = clk_prepare_enable(smi->clk_apb);
> -   if (ret)
> -   return ret;
> -
> -   ret = clk_prepare_enable(smi->clk_smi);
> -   if (ret)
> -   goto err_disable_apb;
> -
> -   ret = clk_prepare_enable(smi->clk_gals0);
> -   if (ret)
> -   goto err_disable_smi;
> -
> -   ret = clk_prepare_enable(smi->clk_gals1);
> -   if (ret)
> -   goto err_disable_gals0;
> -
> -   return 0;
> -
> -err_disable_gals0:
> -   clk_disable_unprepare(smi->clk_gals0);
> -err_disable_smi:
> -   clk_disable_unprepare(smi->clk_smi);
> -err_disable_apb:
> -   clk_disable_unprepare(smi->clk_apb);
> -   return ret;
> -}
> -
> -static void mtk_smi_clk_disable(const struct mtk_smi *smi)
> -{
> -   clk_disable_unprepare(smi->clk_gals1);
> -   clk_disable_unprepare(smi->clk_gals0);
> -   clk_disable_unprepare(smi->clk_smi);
> -   clk_disable_unprepare(smi->clk_apb);
> -}
> -
>  int mtk_smi_larb_get(struct device *larbdev)
>  {
> int ret = pm_runtime_resume_and_get(larbdev);
> @@ -270,7 +236,6 @@ static const struct mtk_smi_larb_gen mtk_smi_larb_mt6779 
> = {
>  };
>
>  static const struct mtk_smi_larb_gen mtk_smi_larb_mt8183 = {
> -   .has_gals   = true,
> .config_port= mtk_smi_larb_config_port_gen2_general,
> .larb_direct_to_common_mask = BIT(2) | BIT(3) | BIT(7),
>   /* IPU0 | IPU1 | CCU */
> @@ -312,6 +277,27 @@ static const struct of_device_id mtk_smi_larb_of_ids[] = 
> {
> {}
>  };
>
> +static int mtk_smi_dts_clk_init(struct device *dev, struct mtk_smi *smi,
> +   unsigned int clk_nr_optional,
> +   const char * const clk_optional[])
> +{
> +   int i, ret, clk_nr_required;
> +
> +   clk_nr_required = ARRAY_SIZE(mtk_smi_clks_required);
> +   for (i = 0; i < clk_nr_required; i++)
> +   smi->clks[i].id = mtk_smi_clks_required[i];
> +   ret = devm_clk_bulk_get(dev, clk_nr_required, smi->clks);
> +   if (ret)
> +   return ret;
> +
> +   for (i = 0; i < clk_nr_optional; i++)
> +   smi->clks[i + clk_nr_required].id = clk_optional[i];
> +   ret = devm_clk_bulk_get_optional(dev, clk_nr_optional,
> +smi->clks + clk_nr_required);
> +   smi->clk_num = clk_nr_required + clk_nr_optional;
> +

Re: [PATCH v2 06/11] memory: mtk-smi: Add error handle for smi_probe

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:23 PM Yong Wu  wrote:
>
> Add error handling for when component_add() fails.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
> It didn't have the error handling in v1. It is not a fatal error,
> thus no Fixes tag is added.
> ---
>  drivers/memory/mtk-smi.c | 10 +-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index 6f8e582bace5..e68cbb51dd12 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -330,7 +330,15 @@ static int mtk_smi_larb_probe(struct platform_device 
> *pdev)
>
> pm_runtime_enable(dev);
> platform_set_drvdata(pdev, larb);
> -   return component_add(dev, &mtk_smi_larb_component_ops);
> +   ret = component_add(dev, &mtk_smi_larb_component_ops);
> +   if (ret)
> +   goto err_pm_disable;
> +   return 0;
> +
> +err_pm_disable:
> +   pm_runtime_disable(dev);
> +   device_link_remove(dev, larb->smi_common_dev);
> +   return ret;
>  }
>
>  static int mtk_smi_larb_remove(struct platform_device *pdev)
> --
> 2.18.0


Re: [PATCH v2 05/11] memory: mtk-smi: Adjust some code position

2021-07-21 Thread Ikjoon Jang
Hi,

On Thu, Jul 15, 2021 at 8:23 PM Yong Wu  wrote:
>
> No functional change. Only move the code position to make the code more
> readable.
> 1. Put the smi-common registers above the smi-larb ones. Prepare to add some
>    other register settings.
> 2. Put mtk_smi_larb_unbind near larb_bind.
> 3. Sort the SoC data alphabetically and put them on one line as the
>    current kernel allows it.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
>  drivers/memory/mtk-smi.c | 185 +++
>  1 file changed, 73 insertions(+), 112 deletions(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index ff07b14bcd66..6f8e582bace5 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -17,12 +17,15 @@
>  #include 
>  #include 
>
> -/* mt8173 */
> -#define SMI_LARB_MMU_EN0xf00
> +/* SMI COMMON */
> +#define SMI_BUS_SEL0x220
> +#define SMI_BUS_LARB_SHIFT(larbid) ((larbid) << 1)
> +/* All are MMU0 defaultly. Only specialize mmu1 here. */
> +#define F_MMU1_LARB(larbid)(0x1 << SMI_BUS_LARB_SHIFT(larbid))
>
> -/* mt8167 */
> -#define MT8167_SMI_LARB_MMU_EN 0xfc0
> +/* SMI LARB */
>
> +/* Below are about mmu enable registers, they are different in SoCs */
>  /* mt2701 */
>  #define REG_SMI_SECUR_CON_BASE 0x5c0
>
> @@ -41,20 +44,20 @@
>  /* mt2701 domain should be set to 3 */
>  #define SMI_SECUR_CON_VAL_DOMAIN(id)   (0x3 << ((((id) & 0x7) << 2) + 1))
>
> -/* mt2712 */
> -#define SMI_LARB_NONSEC_CON(id)(0x380 + ((id) * 4))
> -#define F_MMU_EN   BIT(0)
> -#define BANK_SEL(id)   ({  \
> +/* mt8167 */
> +#define MT8167_SMI_LARB_MMU_EN 0xfc0
> +
> +/* mt8173 */
> +#define MT8173_SMI_LARB_MMU_EN 0xf00
> +
> +/* larb gen2 */
> +#define SMI_LARB_NONSEC_CON(id)(0x380 + ((id) * 4))
> +#define F_MMU_EN   BIT(0)
> +#define BANK_SEL(id)   ({  \
> u32 _id = (id) & 0x3;   \
> (_id << 8 | _id << 10 | _id << 12 | _id << 14); \
>  })
>
> -/* SMI COMMON */
> -#define SMI_BUS_SEL0x220
> -#define SMI_BUS_LARB_SHIFT(larbid) ((larbid) << 1)
> -/* All are MMU0 defaultly. Only specialize mmu1 here. */
> -#define F_MMU1_LARB(larbid)(0x1 << SMI_BUS_LARB_SHIFT(larbid))
> -
>  enum mtk_smi_type {
> MTK_SMI_GEN1,
> MTK_SMI_GEN2
> @@ -132,36 +135,16 @@ mtk_smi_larb_bind(struct device *dev, struct device 
> *master, void *data)
> return -ENODEV;
>  }
>
> -static void mtk_smi_larb_config_port_gen2_general(struct device *dev)
> -{
> -   struct mtk_smi_larb *larb = dev_get_drvdata(dev);
> -   u32 reg;
> -   int i;
> -
> -   if (BIT(larb->larbid) & larb->larb_gen->larb_direct_to_common_mask)
> -   return;
> -
> -   for_each_set_bit(i, (unsigned long *)larb->mmu, 32) {
> -   reg = readl_relaxed(larb->base + SMI_LARB_NONSEC_CON(i));
> -   reg |= F_MMU_EN;
> -   reg |= BANK_SEL(larb->bank[i]);
> -   writel(reg, larb->base + SMI_LARB_NONSEC_CON(i));
> -   }
> -}
> -
> -static void mtk_smi_larb_config_port_mt8173(struct device *dev)
> +static void
> +mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data)
>  {
> -   struct mtk_smi_larb *larb = dev_get_drvdata(dev);
> -
> -   writel(*larb->mmu, larb->base + SMI_LARB_MMU_EN);
> +   /* Do nothing as the iommu is always enabled. */
>  }
>
> -static void mtk_smi_larb_config_port_mt8167(struct device *dev)
> -{
> -   struct mtk_smi_larb *larb = dev_get_drvdata(dev);
> -
> -   writel(*larb->mmu, larb->base + MT8167_SMI_LARB_MMU_EN);
> -}
> +static const struct component_ops mtk_smi_larb_component_ops = {
> +   .bind = mtk_smi_larb_bind,
> +   .unbind = mtk_smi_larb_unbind,
> +};
>
>  static void mtk_smi_larb_config_port_gen1(struct device *dev)
>  {
> @@ -194,26 +177,36 @@ static void mtk_smi_larb_config_port_gen1(struct device 
> *dev)
> }
>  }
>
> -static void
> -mtk_smi_larb_unbind(struct device *dev, struct device *master, void *data)
> +static void mtk_smi_larb_config_port_mt8167(struct device *dev)
>  {
> -   /* Do nothing as the iommu is always enabled. */
> +   struct mtk_smi_larb *larb = dev_get_drvdata(dev);
> +
> +   writel(*larb->mmu, larb->base + MT8167_SMI_LARB_MMU_EN);
>  }
>
> -static const struct component_ops mtk_smi_larb_component_ops = {
> -   .bind = mtk_smi_larb_bind,
> -   .unbind = mtk_smi_larb_unbind,
> -};
> +static void mtk_smi_larb_config_port_mt8173(struct device *dev)
> +{
> +   struct mtk_smi_larb *larb = dev_get_drvdata(dev);
>
> -static const struct mtk_smi_larb_gen mtk_smi_larb_mt8173 = {
> -   /* mt8173 do not need the port in larb */
> -   .config_port = mtk_smi_larb_config_port_mt8173,
> -};
> +   writel(*larb->mmu, larb->base + MT8173_SMI_LARB_MMU_EN);

Re: [PATCH -next] iommu/arm-smmu-v3: Add suspend and resume support

2021-07-21 Thread Robin Murphy

[ +Marc for MSI bits ]

On 2021-07-21 02:33, Bixuan Cui wrote:

Add suspend and resume support for arm-smmu-v3 by low-power mode.

When the smmu is suspended, it is powered off and the registers are
cleared. So saves the msi_msg context during msi interrupt initialization
of smmu. When resume happens it calls arm_smmu_device_reset() to restore
the registers.

Signed-off-by: Bixuan Cui 
Reviewed-by: Wei Yongjun 
Reviewed-by: Zhen Lei 
Reviewed-by: Ding Tianhong 
Reviewed-by: Hanjun Guo 
---

  drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 72 ++---
  1 file changed, 64 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 235f9bdaeaf2..bf1163acbcb1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -40,6 +40,7 @@ MODULE_PARM_DESC(disable_bypass,
  
  static bool disable_msipolling;

  module_param(disable_msipolling, bool, 0444);
+static bool bypass;
  MODULE_PARM_DESC(disable_msipolling,
"Disable MSI-based polling for CMD_SYNC completion.");
  
@@ -3129,11 +3130,37 @@ static void arm_smmu_write_msi_msg(struct msi_desc *desc, struct msi_msg *msg)

doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
doorbell &= MSI_CFG0_ADDR_MASK;
  
+	/* Saves the msg context for resume if desc->msg is empty */

+   if (desc->msg.address_lo == 0 && desc->msg.address_hi == 0) {
+   desc->msg.address_lo = msg->address_lo;
+   desc->msg.address_hi = msg->address_hi;
+   desc->msg.data = msg->data;
+   }


My gut feeling is that this is something a device driver maybe shouldn't 
be poking into, but I'm not entirely familiar with the area :/



+
writeq_relaxed(doorbell, smmu->base + cfg[0]);
writel_relaxed(msg->data, smmu->base + cfg[1]);
writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
  }
  
+static void arm_smmu_resume_msis(struct arm_smmu_device *smmu)

+{
+   struct msi_desc *desc;
+   struct device *dev = smmu->dev;
+
+   for_each_msi_entry(desc, dev) {
+   switch (desc->platform.msi_index) {
+   case EVTQ_MSI_INDEX:
+   case GERROR_MSI_INDEX:
+   case PRIQ_MSI_INDEX:
+   arm_smmu_write_msi_msg(desc, &(desc->msg));
+   break;
+   default:
+   continue;
+
+   }
+   }
+}
+
  static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
  {
struct msi_desc *desc;
@@ -3184,11 +3211,17 @@ static void arm_smmu_setup_msis(struct arm_smmu_device 
*smmu)
devm_add_action(dev, arm_smmu_free_msis, dev);
  }
  
-static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)

+static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu, bool 
resume_mode)
  {
int irq, ret;
  
-	arm_smmu_setup_msis(smmu);

+   if (!resume_mode)
+   arm_smmu_setup_msis(smmu);
+   else {
+   /* The irq doesn't need to be re-requested during resume */
+   arm_smmu_resume_msis(smmu);
+   return;


What about wired IRQs?


+   }
  
  	/* Request interrupt lines */

irq = smmu->evtq.q.irq;
@@ -3230,7 +3263,7 @@ static void arm_smmu_setup_unique_irqs(struct 
arm_smmu_device *smmu)
}
  }
  
-static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)

+static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu, bool resume_mode)
  {
int ret, irq;
u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
@@ -3257,7 +3290,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device 
*smmu)
if (ret < 0)
dev_warn(smmu->dev, "failed to enable combined irq\n");
} else
-   arm_smmu_setup_unique_irqs(smmu);
+   arm_smmu_setup_unique_irqs(smmu, resume_mode);
  
  	if (smmu->features & ARM_SMMU_FEAT_PRI)

irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
@@ -3282,7 +3315,7 @@ static int arm_smmu_device_disable(struct arm_smmu_device 
*smmu)
return ret;
  }
  
-static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)

+static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool 
resume_mode)


Er, what about the use of "bypass" towards the end of the function. Have 
you even compiled this?



  {
int ret;
u32 reg, enables;
@@ -3392,7 +3425,7 @@ static int arm_smmu_device_reset(struct arm_smmu_device 
*smmu, bool bypass)
}
}
  
-	ret = arm_smmu_setup_irqs(smmu);

+   ret = arm_smmu_setup_irqs(smmu, resume_mode);
if (ret) {
dev_err(smmu->dev, "failed to setup irqs\n");
return ret;
@@ -3749,6 +3782,24 @@ static void __iomem *arm_smmu_ioremap(struct device 
*dev, resource_size_t start,
return devm_ioremap_resource(dev, &res);
  

Re: [bug report] iommu_dma_unmap_sg() is very slow then running IO from remote numa node

2021-07-21 Thread Ming Lei
On Wed, Jul 21, 2021 at 10:23:38AM +0100, John Garry wrote:
> On 21/07/2021 02:40, Ming Lei wrote:
> > > I think that you should see a significant performance boost.
> > There is build issue, please check your tree:
> > 
> >MODPOST vmlinux.symvers
> >MODINFO modules.builtin.modinfo
> >GEN modules.builtin
> >LD  .tmp_vmlinux.btf
> > ld: Unexpected GOT/PLT entries detected!
> > ld: Unexpected run-time procedure linkages detected!
> > ld: drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.o: in function 
> > `smmu_test_store':
> > /root/git/linux/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3892: undefined 
> > reference to `smmu_test_core'
> >BTF .btf.vmlinux.bin.o
> > pahole: .tmp_vmlinux.btf: No such file or directory
> >LD  .tmp_vmlinux.kallsyms1
> > .btf.vmlinux.bin.o: file not recognized: file format not recognized
> > make: *** [Makefile:1177: vmlinux] Error 1
> 
> Ah, sorry. I had some test code which was not properly guarded with
> necessary build switches.
> 
> I have now removed that from the tree, so please re-pull.

Now the kernel can be built successfully, but not see obvious improvement
on the reported issue:

[root@ampere-mtjade-04 ~]# uname -a
Linux ampere-mtjade-04.khw4.lab.eng.bos.redhat.com 5.14.0-rc2_smmu_fix+ #2 SMP 
Wed Jul 21 05:49:03 EDT 2021 aarch64 aarch64 aarch64 GNU/Linux

[root@ampere-mtjade-04 ~]# taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 
/dev/nvme1n1 4k
+ fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
--iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
--filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread 
--name=test --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=io_uring, iodepth=64
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=1503MiB/s][r=385k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3143: Wed Jul 21 05:58:14 2021
  read: IOPS=384k, BW=1501MiB/s (1573MB/s)(14.7GiB/10001msec)

[root@ampere-mtjade-04 ~]# taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 
/dev/nvme1n1 4k
+ fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
--iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
--filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread 
--name=test --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=io_uring, iodepth=64
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=138MiB/s][r=35.4k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3063: Wed Jul 21 05:55:31 2021
  read: IOPS=35.4k, BW=138MiB/s (145MB/s)(1383MiB/10001msec)


Thanks, 
Ming



Re: [bug report] iommu_dma_unmap_sg() is very slow then running IO from remote numa node

2021-07-21 Thread John Garry

On 21/07/2021 10:59, Ming Lei wrote:

I have now removed that from the tree, so please re-pull.

Now the kernel can be built successfully, but not see obvious improvement
on the reported issue:

[root@ampere-mtjade-04 ~]# uname -a
Linux ampere-mtjade-04.khw4.lab.eng.bos.redhat.com 5.14.0-rc2_smmu_fix+ #2 SMP 
Wed Jul 21 05:49:03 EDT 2021 aarch64 aarch64 aarch64 GNU/Linux

[root@ampere-mtjade-04 ~]# taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 
/dev/nvme1n1 4k
+ fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
--iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
--filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread 
--name=test --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=io_uring, iodepth=64
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=1503MiB/s][r=385k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3143: Wed Jul 21 05:58:14 2021
   read: IOPS=384k, BW=1501MiB/s (1573MB/s)(14.7GiB/10001msec)


I am not sure what baseline you used previously, but you were getting 
327K then, so at least this would be an improvement.




[root@ampere-mtjade-04 ~]# taskset -c 80 ~/git/tools/test/nvme/io_uring 10 1 
/dev/nvme1n1 4k
+ fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
--iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
--filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread 
--name=test --group_reporting
test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=io_uring, iodepth=64
fio-3.27
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=138MiB/s][r=35.4k IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3063: Wed Jul 21 05:55:31 2021
   read: IOPS=35.4k, BW=138MiB/s (145MB/s)(1383MiB/10001msec)


I can try similar on our arm64 board when I get a chance.

Thanks,
John


Re: [bug report] iommu_dma_unmap_sg() is very slow then running IO from remote numa node

2021-07-21 Thread John Garry

On 21/07/2021 02:40, Ming Lei wrote:

I think that you should see a significant performance boost.

There is build issue, please check your tree:

   MODPOST vmlinux.symvers
   MODINFO modules.builtin.modinfo
   GEN modules.builtin
   LD  .tmp_vmlinux.btf
ld: Unexpected GOT/PLT entries detected!
ld: Unexpected run-time procedure linkages detected!
ld: drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.o: in function `smmu_test_store':
/root/git/linux/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3892: undefined 
reference to `smmu_test_core'
   BTF .btf.vmlinux.bin.o
pahole: .tmp_vmlinux.btf: No such file or directory
   LD  .tmp_vmlinux.kallsyms1
.btf.vmlinux.bin.o: file not recognized: file format not recognized
make: *** [Makefile:1177: vmlinux] Error 1


Ah, sorry. I had some test code which was not properly guarded with 
necessary build switches.


I have now removed that from the tree, so please re-pull.

Thanks,
John


Re: [PATCH 4/5] iommu/vt-d: Disallow SVA if devices don't support 64-bit address

2021-07-21 Thread Robin Murphy

On 2021-07-21 02:50, Lu Baolu wrote:

Hi Robin,

Thanks a lot for reviewing my patch!

On 7/20/21 5:27 PM, Robin Murphy wrote:

On 2021-07-20 02:38, Lu Baolu wrote:

When the device and CPU share an address space (such as SVA), the device
must support the same addressing capability as the CPU. The CPU does not
consider the addressing ability of any device when managing the page 
table

of a process, so the device must have enough addressing ability to bind
the page table of the process.

Signed-off-by: Lu Baolu 
---
  drivers/iommu/intel/iommu.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index f45c80ce2381..f3cca1dd384d 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5372,6 +5372,9 @@ static int intel_iommu_enable_sva(struct device 
*dev)

  if (!(iommu->flags & VTD_FLAG_SVM_CAPABLE))
  return -ENODEV;
+    if (!dev->dma_mask || *dev->dma_mask != DMA_BIT_MASK(64))


Careful - VFIO doesn't set DMA masks (since it doesn't use the DMA API),


SVA doesn't work through the VFIO framework.


Did anyone say it does? My point is that, as far as I understand, the 
SVA UAPI is very much intended to work *with* VFIO, and even if the 
finer details are still mired in the /dev/ioasid discussion today we 
should definitely expect to see VFIO-like use-cases at some point. I 
certainly don't see why any of the guest SVA stuff exists already if not 
for VFIO-assigned devices?



so this appears to be relying on another driver having bound previously,


Yes. You are right.

otherwise the mask would still be the default 32-bit one from 
pci_setup_device(). I'm not sure that's an entirely robust assumption.


Currently the SVA implementation always requires a native kernel driver. The
assumption is that drivers should check and set the 64-bit addressing
capability before calling the iommu_sva_xxx() APIs (see the sketch at the end
of this message).


...and given that that is not a documented requirement, and certainly 
not a technical one (even a self-contained kernel driver could choose to 
only use SVA contexts and not touch the DMA API), it's an inherently 
fragile assumption which I'm confident *will* be broken eventually :)


Robin.


+    return -ENODEV;
+
  if (intel_iommu_enable_pasid(iommu, dev))
  return -ENODEV;



Best regards,
baolu
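
For illustration, a hypothetical driver fragment showing the ordering Baolu
describes above. This is not from the patch; my_driver_enable_sva() and
my_driver_bind_mm() are invented names, only the iommu/DMA API calls are real:

#include <linux/dma-mapping.h>
#include <linux/iommu.h>

static int my_driver_enable_sva(struct device *dev)
{
	int ret;

	/* Advertise full 64-bit addressing before any iommu_sva_*() call */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
	if (ret)
		return ret;

	return iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
}

/* Called later, from a user context, to share that process's page tables */
static struct iommu_sva *my_driver_bind_mm(struct device *dev,
					   struct mm_struct *mm)
{
	return iommu_sva_bind_device(dev, mm, NULL);
}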


Re: [PATCH] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permenent module

2021-07-21 Thread Greg Kroah-Hartman
On Wed, Jul 07, 2021 at 04:53:20AM +, John Stultz wrote:
> Allow the qcom_scm driver to be loadable as a permenent module.

This feels like a regression, it should be allowed to be a module.

> This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> ensure that drivers that call into the qcom_scm driver are
> also built as modules. While not ideal in some cases its the
> only safe way I can find to avoid build errors without having
> those drivers select QCOM_SCM and have to force it on (as
> QCOM_SCM=n can be valid for those drivers).
> 
> Reviving this now that Saravana's fw_devlink defaults to on,
> which should avoid loading troubles seen before.

fw_devlink was supposed to resolve these issues and _allow_ code to be
built as modules and not forced to be built into the kernel.

Let's not go backwards please.

thanks,

greg k-h


Re: [PATCH v2 09/11] memory: mtk-smi: mt8195: Add smi support

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:22 PM Yong Wu  wrote:
>
> MT8195 has two smi-common, their IP are the same. Only the larbs that
> connect with the smi-common are different. thus the bus_sel are different
> for the two smi-common.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
>  drivers/memory/mtk-smi.c | 23 +++
>  1 file changed, 23 insertions(+)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index e5a34b3952a0..3c288716a378 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -250,6 +250,10 @@ static const struct mtk_smi_larb_gen mtk_smi_larb_mt8192 
> = {
> .config_port= mtk_smi_larb_config_port_gen2_general,
>  };
>
> +static const struct mtk_smi_larb_gen mtk_smi_larb_mt8195 = {
> +   .config_port= mtk_smi_larb_config_port_gen2_general,
> +};
> +
>  static const struct of_device_id mtk_smi_larb_of_ids[] = {
> {.compatible = "mediatek,mt2701-smi-larb", .data = 
> _smi_larb_mt2701},
> {.compatible = "mediatek,mt2712-smi-larb", .data = 
> _smi_larb_mt2712},
> @@ -258,6 +262,7 @@ static const struct of_device_id mtk_smi_larb_of_ids[] = {
> {.compatible = "mediatek,mt8173-smi-larb", .data = 
> _smi_larb_mt8173},
> {.compatible = "mediatek,mt8183-smi-larb", .data = 
> _smi_larb_mt8183},
> {.compatible = "mediatek,mt8192-smi-larb", .data = 
> _smi_larb_mt8192},
> +   {.compatible = "mediatek,mt8195-smi-larb", .data = 
> _smi_larb_mt8195},
> {}
>  };
>
> @@ -430,6 +435,21 @@ static const struct mtk_smi_common_plat 
> mtk_smi_common_mt8192 = {
> F_MMU1_LARB(6),
>  };
>
> +static const struct mtk_smi_common_plat mtk_smi_common_mt8195_vdo = {
> +   .type = MTK_SMI_GEN2,
> +   .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(3) | F_MMU1_LARB(5) |
> +   F_MMU1_LARB(7),
> +};
> +
> +static const struct mtk_smi_common_plat mtk_smi_common_mt8195_vpp = {
> +   .type = MTK_SMI_GEN2,
> +   .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(7),
> +};
> +
> +static const struct mtk_smi_common_plat mtk_smi_sub_common_mt8195 = {
> +   .type = MTK_SMI_GEN2_SUB_COMM,
> +};
> +
>  static const struct of_device_id mtk_smi_common_of_ids[] = {
> {.compatible = "mediatek,mt2701-smi-common", .data = 
> _smi_common_gen1},
> {.compatible = "mediatek,mt2712-smi-common", .data = 
> _smi_common_gen2},
> @@ -438,6 +458,9 @@ static const struct of_device_id mtk_smi_common_of_ids[] 
> = {
> {.compatible = "mediatek,mt8173-smi-common", .data = 
> _smi_common_gen2},
> {.compatible = "mediatek,mt8183-smi-common", .data = 
> _smi_common_mt8183},
> {.compatible = "mediatek,mt8192-smi-common", .data = 
> _smi_common_mt8192},
> +   {.compatible = "mediatek,mt8195-smi-common-vdo", .data = 
> _smi_common_mt8195_vdo},
> +   {.compatible = "mediatek,mt8195-smi-common-vpp", .data = 
> _smi_common_mt8195_vpp},
> +   {.compatible = "mediatek,mt8195-smi-sub-common", .data = 
> _smi_sub_common_mt8195},
> {}
>  };
>
> --
> 2.18.0


Re: [PATCH v2 10/11] memory: mtk-smi: mt8195: Add initial setting for smi-common

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:25 PM Yong Wu  wrote:
>
> To improve the performance, add an initial setting for smi-common.
> Some registers use fixed settings (suggested by the DE).
>
> Signed-off-by: Yong Wu 
> ---
>  drivers/memory/mtk-smi.c | 42 
>  1 file changed, 38 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index 3c288716a378..c52bf02458ff 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -18,11 +18,19 @@
>  #include 
>
>  /* SMI COMMON */
> +#define SMI_L1LEN  0x100
> +
>  #define SMI_BUS_SEL0x220
>  #define SMI_BUS_LARB_SHIFT(larbid) ((larbid) << 1)
>  /* All are MMU0 defaultly. Only specialize mmu1 here. */
>  #define F_MMU1_LARB(larbid)(0x1 << SMI_BUS_LARB_SHIFT(larbid))
>
> +#define SMI_M4U_TH 0x234
> +#define SMI_FIFO_TH1   0x238
> +#define SMI_FIFO_TH2   0x23c
> +#define SMI_DCM0x300
> +#define SMI_DUMMY  0x444
> +
>  /* SMI LARB */
>
>  /* Below are about mmu enable registers, they are different in SoCs */
> @@ -58,6 +66,13 @@
> (_id << 8 | _id << 10 | _id << 12 | _id << 14); \
>  })
>
> +#define SMI_COMMON_INIT_REGS_NR6
> +
> +struct mtk_smi_reg_pair {
> +   unsigned intoffset;
> +   u32 value;
> +};
> +
>  enum mtk_smi_type {
> MTK_SMI_GEN1,
> MTK_SMI_GEN2,   /* gen2 smi common */
> @@ -74,6 +89,8 @@ static const char * const mtk_smi_larb_clks_optional[] = 
> {"gals"};
>  struct mtk_smi_common_plat {
> enum mtk_smi_type   type;
> u32 bus_sel; /* Balance some larbs to enter mmu0 
> or mmu1 */
> +
> +   const struct mtk_smi_reg_pair   *init;
>  };
>
>  struct mtk_smi_larb_gen {
> @@ -409,6 +426,15 @@ static struct platform_driver mtk_smi_larb_driver = {
> }
>  };
>
> +static const struct mtk_smi_reg_pair 
> mtk_smi_common_mt8195_init[SMI_COMMON_INIT_REGS_NR] = {
> +   {SMI_L1LEN, 0xb},
> +   {SMI_M4U_TH, 0xe100e10},
> +   {SMI_FIFO_TH1, 0x506090a},
> +   {SMI_FIFO_TH2, 0x506090a},
> +   {SMI_DCM, 0x4f1},
> +   {SMI_DUMMY, 0x1},
> +};
> +
>  static const struct mtk_smi_common_plat mtk_smi_common_gen1 = {
> .type = MTK_SMI_GEN1,
>  };
> @@ -439,11 +465,13 @@ static const struct mtk_smi_common_plat 
> mtk_smi_common_mt8195_vdo = {
> .type = MTK_SMI_GEN2,
> .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(3) | F_MMU1_LARB(5) |
> F_MMU1_LARB(7),
> +   .init = mtk_smi_common_mt8195_init,
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_common_mt8195_vpp = {
> .type = MTK_SMI_GEN2,
> .bus_sel  = F_MMU1_LARB(1) | F_MMU1_LARB(2) | F_MMU1_LARB(7),
> +   .init = mtk_smi_common_mt8195_init,
>  };
>
>  static const struct mtk_smi_common_plat mtk_smi_sub_common_mt8195 = {
> @@ -530,15 +558,21 @@ static int mtk_smi_common_remove(struct platform_device 
> *pdev)
>  static int __maybe_unused mtk_smi_common_resume(struct device *dev)
>  {
> struct mtk_smi *common = dev_get_drvdata(dev);
> -   u32 bus_sel = common->plat->bus_sel;
> -   int ret;
> +   const struct mtk_smi_reg_pair *init = common->plat->init;
> +   u32 bus_sel = common->plat->bus_sel; /* default is 0 */
> +   int ret, i;
>
> ret = clk_bulk_prepare_enable(common->clk_num, common->clks);
> if (ret)
> return ret;
>
> -   if (common->plat->type == MTK_SMI_GEN2 && bus_sel)
> -   writel(bus_sel, common->base + SMI_BUS_SEL);
> +   if (common->plat->type != MTK_SMI_GEN2)
> +   return 0;
> +
> +   for (i = 0; i < SMI_COMMON_INIT_REGS_NR && init && init[i].offset; 
> i++)
> +   writel_relaxed(init[i].value, common->base + init[i].offset);

I'm not sure whether this array of register settings can be applied to other
platforms in the future or only to mt8195. If it's only for mt8195,
I think taking a callback function instead of mtk_smi_reg_pair[] as the init
member would be better, e.g. (a fuller sketch follows below):

if (common->plat->init)
common->plat->init(...);
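
A rough sketch of that alternative, for illustration only. It assumes .init
becomes a function pointer instead of a table pointer and reuses the register
offsets and values from this patch:

static void mtk_smi_common_mt8195_init(struct mtk_smi *common)
{
	/* mt8195-specific tuning, values taken from this patch */
	writel_relaxed(0xb, common->base + SMI_L1LEN);
	writel_relaxed(0xe100e10, common->base + SMI_M4U_TH);
	writel_relaxed(0x506090a, common->base + SMI_FIFO_TH1);
	writel_relaxed(0x506090a, common->base + SMI_FIFO_TH2);
	writel_relaxed(0x4f1, common->base + SMI_DCM);
	writel_relaxed(0x1, common->base + SMI_DUMMY);
}

static const struct mtk_smi_common_plat mtk_smi_common_mt8195_vdo = {
	.type    = MTK_SMI_GEN2,
	.bus_sel = F_MMU1_LARB(1) | F_MMU1_LARB(3) | F_MMU1_LARB(5) |
		   F_MMU1_LARB(7),
	.init    = mtk_smi_common_mt8195_init,	/* callback instead of a table */
};

and then mtk_smi_common_resume() would only do
"if (common->plat->init) common->plat->init(common);".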

> +
> +   writel(bus_sel, common->base + SMI_BUS_SEL);
> return 0;
>  }
>
> --
> 2.18.0


Re: [PATCH v2 11/11] memory: mtk-smi: mt8195: Add initial setting for smi-larb

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:23 PM Yong Wu  wrote:
>
> To improve the performance, we add some initial settings for the smi larbs.
> There are two parts:
> 1) Each port has a special ostd (outstanding) value in each larb.
> 2) Two general settings for each larb.
>
> In some SoCs, this setting may be changed dynamically for some special cases
> like 4K, but this initial setting is enough for mt8195.
>
> Signed-off-by: Yong Wu 
> ---
>  drivers/memory/mtk-smi.c | 74 +++-
>  1 file changed, 73 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index c52bf02458ff..1d9e67520433 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -32,6 +32,14 @@
>  #define SMI_DUMMY  0x444
>
>  /* SMI LARB */
> +#define SMI_LARB_CMD_THRT_CON  0x24
> +#define SMI_LARB_THRT_EN   0x370256
> +
> +#define SMI_LARB_SW_FLAG   0x40
> +#define SMI_LARB_SW_FLAG_1 0x1
> +
> +#define SMI_LARB_OSTDL_PORT0x200
> +#define SMI_LARB_OSTDL_PORTx(id)   (SMI_LARB_OSTDL_PORT + (((id) & 0x1f) 
> << 2))
>
>  /* Below are about mmu enable registers, they are different in SoCs */
>  /* mt2701 */
> @@ -67,6 +75,11 @@
>  })
>
>  #define SMI_COMMON_INIT_REGS_NR6
> +#define SMI_LARB_PORT_NR_MAX   32
> +
> +#define MTK_SMI_FLAG_LARB_THRT_EN  BIT(0)
> +#define MTK_SMI_FLAG_LARB_SW_FLAG  BIT(1)
> +#define MTK_SMI_CAPS(flags, _x)(!!((flags) & (_x)))
>
>  struct mtk_smi_reg_pair {
> unsigned intoffset;
> @@ -97,6 +110,8 @@ struct mtk_smi_larb_gen {
> int port_in_larb[MTK_LARB_NR_MAX + 1];
> void (*config_port)(struct device *dev);
> unsigned intlarb_direct_to_common_mask;
> +   unsigned intflags_general;
> +   const u8(*ostd)[SMI_LARB_PORT_NR_MAX];
>  };
>
>  struct mtk_smi {
> @@ -213,12 +228,22 @@ static void mtk_smi_larb_config_port_mt8173(struct 
> device *dev)
>  static void mtk_smi_larb_config_port_gen2_general(struct device *dev)
>  {
> struct mtk_smi_larb *larb = dev_get_drvdata(dev);
> -   u32 reg;
> +   u32 reg, flags_general = larb->larb_gen->flags_general;
> +   const u8 *larbostd = larb->larb_gen->ostd[larb->larbid];
> int i;
>
> if (BIT(larb->larbid) & larb->larb_gen->larb_direct_to_common_mask)
> return;
>
> +   if (MTK_SMI_CAPS(flags_general, MTK_SMI_FLAG_LARB_THRT_EN))
> +   writel_relaxed(SMI_LARB_THRT_EN, larb->base + 
> SMI_LARB_CMD_THRT_CON);
> +
> +   if (MTK_SMI_CAPS(flags_general, MTK_SMI_FLAG_LARB_SW_FLAG))
> +   writel_relaxed(SMI_LARB_SW_FLAG_1, larb->base + 
> SMI_LARB_SW_FLAG);
> +
> +   for (i = 0; i < SMI_LARB_PORT_NR_MAX && larbostd && !!larbostd[i]; 
> i++)
> +   writel_relaxed(larbostd[i], larb->base + 
> SMI_LARB_OSTDL_PORTx(i));

Do all other mtk platforms' larbs have the same format for SMI_LARB_OSTDL_PORTx()
registers at the same offset, or is this a unique feature of mt8195?

> +
> for_each_set_bit(i, (unsigned long *)larb->mmu, 32) {
> reg = readl_relaxed(larb->base + SMI_LARB_NONSEC_CON(i));
> reg |= F_MMU_EN;
> @@ -227,6 +252,51 @@ static void mtk_smi_larb_config_port_gen2_general(struct 
> device *dev)
> }
>  }
>
> +static const u8 mtk_smi_larb_mt8195_ostd[][SMI_LARB_PORT_NR_MAX] = {
> +   [0] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb0 */
> +   [1] = {0x0a, 0xc, 0x22, 0x22, 0x01, 0x0a,}, /* larb1 */
> +   [2] = {0x12, 0x12, 0x12, 0x12, 0x0a,},  /* ... */
> +   [3] = {0x12, 0x12, 0x12, 0x12, 0x28, 0x28, 0x0a,},
> +   [4] = {0x06, 0x01, 0x17, 0x06, 0x0a,},
> +   [5] = {0x06, 0x01, 0x17, 0x06, 0x06, 0x01, 0x06, 0x0a,},
> +   [6] = {0x06, 0x01, 0x06, 0x0a,},
> +   [7] = {0x0c, 0x0c, 0x12,},
> +   [8] = {0x0c, 0x0c, 0x12,},
> +   [9] = {0x0a, 0x08, 0x04, 0x06, 0x01, 0x01, 0x10, 0x18, 0x11, 0x0a,
> +   0x08, 0x04, 0x11, 0x06, 0x02, 0x06, 0x01, 0x11, 0x11, 0x06,},
> +   [10] = {0x18, 0x08, 0x01, 0x01, 0x20, 0x12, 0x18, 0x06, 0x05, 0x10,
> +   0x08, 0x08, 0x10, 0x08, 0x08, 0x18, 0x0c, 0x09, 0x0b, 0x0d,
> +   0x0d, 0x06, 0x10, 0x10,},
> +   [11] = {0x0e, 0x0e, 0x0e, 0x0e, 0x0e, 0x0e, 0x01, 0x01, 0x01, 0x01,},
> +   [12] = {0x09, 0x09, 0x05, 0x05, 0x0c, 0x18, 0x02, 0x02, 0x04, 0x02,},
> +   [13] = {0x02, 0x02, 0x12, 0x12, 0x02, 0x02, 0x02, 0x02, 0x08, 0x01,},
> +   [14] = {0x12, 0x12, 0x02, 0x02, 0x02, 0x02, 0x16, 0x01, 0x16, 0x01,
> +   0x01, 0x02, 0x02, 0x08, 0x02,},
> +   [15] = {},
> +   [16] = {0x28, 0x02, 0x02, 0x12, 0x02, 0x12, 0x10, 0x02, 0x02, 0x0a,
> +   0x12, 0x02, 0x0a, 0x16, 0x02, 0x04,},
> +   [17] = {0x1a, 0x0e, 0x0a, 0x0a, 0x0c, 0x0e, 0x10,},
> +   [18] = {0x12, 0x06, 0x12, 0x06,},
> +   

Re: [PATCH v2 08/11] memory: mtk-smi: Use devm_platform_ioremap_resource

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:24 PM Yong Wu  wrote:
>
> No functional change. Simplify probing code.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
>  drivers/memory/mtk-smi.c | 11 +++
>  1 file changed, 3 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index ee49bb50f5f5..e5a34b3952a0 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -317,7 +317,6 @@ static int mtk_smi_dts_clk_init(struct device *dev, 
> struct mtk_smi *smi,
>  static int mtk_smi_larb_probe(struct platform_device *pdev)
>  {
> struct mtk_smi_larb *larb;
> -   struct resource *res;
> struct device *dev = &pdev->dev;
> int ret;
>
> @@ -326,8 +325,7 @@ static int mtk_smi_larb_probe(struct platform_device 
> *pdev)
> return -ENOMEM;
>
> larb->larb_gen = of_device_get_match_data(dev);
> -   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> -   larb->base = devm_ioremap_resource(dev, res);
> +   larb->base = devm_platform_ioremap_resource(pdev, 0);
> if (IS_ERR(larb->base))
> return PTR_ERR(larb->base);
>
> @@ -447,7 +445,6 @@ static int mtk_smi_common_probe(struct platform_device 
> *pdev)
>  {
> struct device *dev = &pdev->dev;
> struct mtk_smi *common;
> -   struct resource *res;
> int ret;
>
> common = devm_kzalloc(dev, sizeof(*common), GFP_KERNEL);
> @@ -468,8 +465,7 @@ static int mtk_smi_common_probe(struct platform_device 
> *pdev)
>  * base.
>  */
> if (common->plat->type == MTK_SMI_GEN1) {
> -   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> -   common->smi_ao_base = devm_ioremap_resource(dev, res);
> +   common->smi_ao_base = devm_platform_ioremap_resource(pdev, 0);
> if (IS_ERR(common->smi_ao_base))
> return PTR_ERR(common->smi_ao_base);
>
> @@ -481,8 +477,7 @@ static int mtk_smi_common_probe(struct platform_device 
> *pdev)
> if (ret)
> return ret;
> } else {
> -   res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> -   common->base = devm_ioremap_resource(dev, res);
> +   common->base = devm_platform_ioremap_resource(pdev, 0);
> if (IS_ERR(common->base))
> return PTR_ERR(common->base);
> }
> --
> 2.18.0


Re: [PATCH v2 07/11] memory: mtk-smi: Add smi sub common support

2021-07-21 Thread Ikjoon Jang
On Thu, Jul 15, 2021 at 8:25 PM Yong Wu  wrote:
>
> In mt8195, there are some larbs connect with the smi-sub-common, then
> connect with smi-common.

Not critical, but I suggest describing what smi-sub-common is,
e.g. "some larbs are not directly connected to smi-common,
they are connected to smi-sub-common, which is a bridge(?) interface to..."

>
> Before we create device link between smi-larb with smi-common. If we have
> sub-common, we should use device link the smi-larb and smi-sub-common,
> then use device link between the smi-sub-common with smi-common. This is
> for enabling clock/power automatically.
>
> Move the device link code to a new interface for reusing.
>
> There is no SW extra setting for smi-sub-common.
>
> Signed-off-by: Yong Wu 

Reviewed-by: Ikjoon Jang 

> ---
>  drivers/memory/mtk-smi.c | 75 +++-
>  1 file changed, 51 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c
> index e68cbb51dd12..ee49bb50f5f5 100644
> --- a/drivers/memory/mtk-smi.c
> +++ b/drivers/memory/mtk-smi.c
> @@ -60,7 +60,8 @@
>
>  enum mtk_smi_type {
> MTK_SMI_GEN1,
> -   MTK_SMI_GEN2
> +   MTK_SMI_GEN2,   /* gen2 smi common */
> +   MTK_SMI_GEN2_SUB_COMM,  /* gen2 smi sub common */
>  };
>
>  #define MTK_SMI_CLK_NR_MAX 4
> @@ -90,13 +91,14 @@ struct mtk_smi {
> void __iomem*smi_ao_base; /* only for gen1 */
> void __iomem*base;/* only for gen2 */
> };
> +   struct device   *smi_common_dev; /* for sub common */
> const struct mtk_smi_common_plat *plat;
>  };
>
>  struct mtk_smi_larb { /* larb: local arbiter */
> struct mtk_smi  smi;
> void __iomem*base;
> -   struct device   *smi_common_dev;
> +   struct device   *smi_common_dev; /* common or 
> sub-common dev */
> const struct mtk_smi_larb_gen   *larb_gen;
> int larbid;
> u32 *mmu;
> @@ -259,6 +261,38 @@ static const struct of_device_id mtk_smi_larb_of_ids[] = 
> {
> {}
>  };
>
> +static int mtk_smi_device_link_common(struct device *dev, struct device 
> **com_dev)
> +{
> +   struct platform_device *smi_com_pdev;
> +   struct device_node *smi_com_node;
> +   struct device *smi_com_dev;
> +   struct device_link *link;
> +
> +   smi_com_node = of_parse_phandle(dev->of_node, "mediatek,smi", 0);
> +   if (!smi_com_node)
> +   return -EINVAL;
> +
> +   smi_com_pdev = of_find_device_by_node(smi_com_node);
> +   of_node_put(smi_com_node);
> +   if (smi_com_pdev) {
> +   /* smi common is the supplier, Make sure it is ready before */
> +   if (!platform_get_drvdata(smi_com_pdev))
> +   return -EPROBE_DEFER;
> +   smi_com_dev = &smi_com_pdev->dev;
> +   link = device_link_add(dev, smi_com_dev,
> +  DL_FLAG_PM_RUNTIME | 
> DL_FLAG_STATELESS);
> +   if (!link) {
> +   dev_err(dev, "Unable to link smi-common dev\n");
> +   return -ENODEV;
> +   }
> +   *com_dev = smi_com_dev;
> +   } else {
> +   dev_err(dev, "Failed to get the smi_common device\n");
> +   return -EINVAL;
> +   }
> +   return 0;
> +}
> +
>  static int mtk_smi_dts_clk_init(struct device *dev, struct mtk_smi *smi,
> unsigned int clk_nr_optional,
> const char * const clk_optional[])
> @@ -285,9 +319,6 @@ static int mtk_smi_larb_probe(struct platform_device 
> *pdev)
> struct mtk_smi_larb *larb;
> struct resource *res;
> -   struct device *dev = &pdev->dev;
> -   struct device_node *smi_node;
> -   struct platform_device *smi_pdev;
> -   struct device_link *link;
> int ret;
>
> larb = devm_kzalloc(dev, sizeof(*larb), GFP_KERNEL);
> @@ -307,26 +338,10 @@ static int mtk_smi_larb_probe(struct platform_device 
> *pdev)
> return ret;
>
> larb->smi.dev = dev;
> -   smi_node = of_parse_phandle(dev->of_node, "mediatek,smi", 0);
> -   if (!smi_node)
> -   return -EINVAL;
>
> -   smi_pdev = of_find_device_by_node(smi_node);
> -   of_node_put(smi_node);
> -   if (smi_pdev) {
> -   if (!platform_get_drvdata(smi_pdev))
> -   return -EPROBE_DEFER;
> -   larb->smi_common_dev = &smi_pdev->dev;
> -   link = device_link_add(dev, larb->smi_common_dev,
> -  DL_FLAG_PM_RUNTIME | 
> DL_FLAG_STATELESS);
> -   if (!link) {
> -   dev_err(dev, "Unable to link smi-common dev\n");
> -   

Re: [PATCH -next] iommu/arm-smmu-v3: Add suspend and resume support

2021-07-21 Thread Marc Zyngier
On Wed, 21 Jul 2021 12:42:14 +0100,
Robin Murphy  wrote:
> 
> [ +Marc for MSI bits ]
> 
> On 2021-07-21 02:33, Bixuan Cui wrote:
> > Add suspend and resume support for arm-smmu-v3 by low-power mode.
> > 
> > When the smmu is suspended, it is powered off and the registers are
> > cleared. So saves the msi_msg context during msi interrupt initialization
> > of smmu. When resume happens it calls arm_smmu_device_reset() to restore
> > the registers.
> > 
> > Signed-off-by: Bixuan Cui 
> > Reviewed-by: Wei Yongjun 
> > Reviewed-by: Zhen Lei 
> > Reviewed-by: Ding Tianhong 
> > Reviewed-by: Hanjun Guo 
> > ---
> > 
> >   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 72 ++---
> >   1 file changed, 64 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
> > b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > index 235f9bdaeaf2..bf1163acbcb1 100644
> > --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> > @@ -40,6 +40,7 @@ MODULE_PARM_DESC(disable_bypass,
> > static bool disable_msipolling;
> >   module_param(disable_msipolling, bool, 0444);
> > +static bool bypass;
> >   MODULE_PARM_DESC(disable_msipolling,
> > "Disable MSI-based polling for CMD_SYNC completion.");
> >   @@ -3129,11 +3130,37 @@ static void arm_smmu_write_msi_msg(struct
> > msi_desc *desc, struct msi_msg *msg)
> > doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> > doorbell &= MSI_CFG0_ADDR_MASK;
> >   + /* Saves the msg context for resume if desc->msg is empty */
> > +   if (desc->msg.address_lo == 0 && desc->msg.address_hi == 0) {
> > +   desc->msg.address_lo = msg->address_lo;
> > +   desc->msg.address_hi = msg->address_hi;
> > +   desc->msg.data = msg->data;
> > +   }
> 
> My gut feeling is that this is something a device driver maybe
> shouldn't be poking into, but I'm not entirely familiar with the area
> :/

Certainly not. If you rely on the message being stored into the
descriptors, then implement this in the core code, like we do for PCI.

> 
> > +
> > writeq_relaxed(doorbell, smmu->base + cfg[0]);
> > writel_relaxed(msg->data, smmu->base + cfg[1]);
> > writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
> >   }
> >   +static void arm_smmu_resume_msis(struct arm_smmu_device *smmu)
> > +{
> > +   struct msi_desc *desc;
> > +   struct device *dev = smmu->dev;
> > +
> > +   for_each_msi_entry(desc, dev) {
> > +   switch (desc->platform.msi_index) {
> > +   case EVTQ_MSI_INDEX:
> > +   case GERROR_MSI_INDEX:
> > +   case PRIQ_MSI_INDEX:
> > +   arm_smmu_write_msi_msg(desc, &(desc->msg));

Consider using get_cached_msi_msg() instead of using the internals of
the descriptor.
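
For illustration, a minimal sketch of that suggestion (not from the patch; it
simply replays the message cached by the MSI core via get_cached_msi_msg()
instead of stashing it in the driver):

static void arm_smmu_resume_msis(struct arm_smmu_device *smmu)
{
	struct msi_desc *desc;

	for_each_msi_entry(desc, smmu->dev) {
		struct msi_msg msg;

		switch (desc->platform.msi_index) {
		case EVTQ_MSI_INDEX:
		case GERROR_MSI_INDEX:
		case PRIQ_MSI_INDEX:
			/* Replay the doorbell address/data cached by the MSI core */
			get_cached_msi_msg(desc->irq, &msg);
			arm_smmu_write_msi_msg(desc, &msg);
			break;
		default:
			break;
		}
	}
}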

> > +   break;
> > +   default:
> > +   continue;
> > +
> > +   }
> > +   }
> > +}
> > +
> >   static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
> >   {
> > struct msi_desc *desc;
> > @@ -3184,11 +3211,17 @@ static void arm_smmu_setup_msis(struct 
> > arm_smmu_device *smmu)
> > devm_add_action(dev, arm_smmu_free_msis, dev);
> >   }
> >   -static void arm_smmu_setup_unique_irqs(struct arm_smmu_device
> > *smmu)
> > +static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu, bool 
> > resume_mode)
> >   {
> > int irq, ret;
> >   - arm_smmu_setup_msis(smmu);
> > +   if (!resume_mode)
> > +   arm_smmu_setup_msis(smmu);
> > +   else {
> > +   /* The irq doesn't need to be re-requested during resume */
> > +   arm_smmu_resume_msis(smmu);
> > +   return;
> 
> What about wired IRQs?

Yeah. I assume the SMMU needs to be told which event gets signalled
using MSIs or IRQs? Or is that implied by the MSI being configured or
not (I used to know the answer to that, but I have long paged it out).

> 
> > +   }
> > /* Request interrupt lines */
> > irq = smmu->evtq.q.irq;
> > @@ -3230,7 +3263,7 @@ static void arm_smmu_setup_unique_irqs(struct 
> > arm_smmu_device *smmu)
> > }
> >   }
> >   -static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
> > +static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu, bool 
> > resume_mode)
> >   {
> > int ret, irq;
> > u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
> > @@ -3257,7 +3290,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device 
> > *smmu)
> > if (ret < 0)
> > dev_warn(smmu->dev, "failed to enable combined irq\n");
> > } else
> > -   arm_smmu_setup_unique_irqs(smmu);
> > +   arm_smmu_setup_unique_irqs(smmu, resume_mode);
> > if (smmu->features & ARM_SMMU_FEAT_PRI)
> > irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
> > @@ -3282,7 +3315,7 @@ static int arm_smmu_device_disable(struct 
> > arm_smmu_device *smmu)
> > return ret;
> >   }
> >   -static int 

Re: [bug report] iommu_dma_unmap_sg() is very slow when running IO from remote numa node

2021-07-21 Thread Ming Lei
On Wed, Jul 21, 2021 at 12:07:22PM +0100, John Garry wrote:
> On 21/07/2021 10:59, Ming Lei wrote:
> > > I have now removed that from the tree, so please re-pull.
> > Now the kernel can be built successfully, but not see obvious improvement
> > on the reported issue:
> > 
> > [root@ampere-mtjade-04 ~]# uname -a
> > Linux ampere-mtjade-04.khw4.lab.eng.bos.redhat.com 5.14.0-rc2_smmu_fix+ #2 
> > SMP Wed Jul 21 05:49:03 EDT 2021 aarch64 aarch64 aarch64 GNU/Linux
> > 
> > [root@ampere-mtjade-04 ~]# taskset -c 0 ~/git/tools/test/nvme/io_uring 10 1 
> > /dev/nvme1n1 4k
> > + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
> > --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
> > --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread 
> > --name=test --group_reporting
> > test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
> > 4096B-4096B, ioengine=io_uring, iodepth=64
> > fio-3.27
> > Starting 1 process
> > Jobs: 1 (f=1): [r(1)][100.0%][r=1503MiB/s][r=385k IOPS][eta 00m:00s]
> > test: (groupid=0, jobs=1): err= 0: pid=3143: Wed Jul 21 05:58:14 2021
> >read: IOPS=384k, BW=1501MiB/s (1573MB/s)(14.7GiB/10001msec)
> 
> I am not sure what baseline you used previously, but you were getting 327K
> then, so at least this would be an improvement.

Yeah, that might be one improvement, but I haven't verified it yet since the
code base has changed.

> 
> > 
> > [root@ampere-mtjade-04 ~]# taskset -c 80 ~/git/tools/test/nvme/io_uring 10 
> > 1 /dev/nvme1n1 4k
> > + fio --bs=4k --ioengine=io_uring --fixedbufs --registerfiles --hipri 
> > --iodepth=64 --iodepth_batch_submit=16 --iodepth_batch_complete_min=16 
> > --filename=/dev/nvme1n1 --direct=1 --runtime=10 --numjobs=1 --rw=randread 
> > --name=test --group_reporting
> > test: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
> > 4096B-4096B, ioengine=io_uring, iodepth=64
> > fio-3.27
> > Starting 1 process
> > Jobs: 1 (f=1): [r(1)][100.0%][r=138MiB/s][r=35.4k IOPS][eta 00m:00s]
> > test: (groupid=0, jobs=1): err= 0: pid=3063: Wed Jul 21 05:55:31 2021
> >read: IOPS=35.4k, BW=138MiB/s (145MB/s)(1383MiB/10001msec)
> 
> I can try similar on our arm64 board when I get a chance.

The issue I reported is this one.

Thanks,
Ming



[PATCH] iommu/amd: Fix printing of IOMMU events when rate limiting kicks in

2021-07-21 Thread Lennert Buytenhek
For the printing of RMP_HW_ERROR / RMP_PAGE_FAULT / IO_PAGE_FAULT
events, the AMD IOMMU code uses such logic:

	if (pdev)
		dev_data = dev_iommu_priv_get(&pdev->dev);

	if (dev_data && __ratelimit(&dev_data->rs)) {
		pci_err(pdev, ...
	} else {
		printk_ratelimit() / pr_err{,_ratelimited}(...
	}

This means that if we receive an event for a PCI devid which actually
does have a struct pci_dev and an attached struct iommu_dev_data, but
rate limiting kicks in, we'll fall back to the non-PCI branch of the
test, and print the event in a different format.

Fix this by changing the logic to:

	if (dev_data) {
		if (__ratelimit(&dev_data->rs)) {
			pci_err(pdev, ...
		}
	} else {
		pr_err_ratelimited(...
	}
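
For reference, the per-device limiter referenced above (dev_data->rs) is just a
struct ratelimit_state; a minimal sketch of how such a per-device limiter is
typically initialised and consumed, with illustrative names rather than the
driver's actual fields:

	#include <linux/ratelimit.h>
	#include <linux/jiffies.h>
	#include <linux/printk.h>

	struct my_dev_data {
		struct ratelimit_state rs;	/* per-device state, not the shared printk one */
	};

	static void my_dev_data_init(struct my_dev_data *dd)
	{
		/* allow at most 10 messages per 5-second window for this device */
		ratelimit_state_init(&dd->rs, 5 * HZ, 10);
	}

	static void my_report_event(struct my_dev_data *dd, u64 address)
	{
		if (dd) {
			if (__ratelimit(&dd->rs))	/* consumes only this device's budget */
				pr_err("event logged [address=0x%llx]\n", address);
		} else {
			pr_err_ratelimited("event logged [address=0x%llx, unknown device]\n",
					   address);
		}
	}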

Suggested-by: Suravee Suthikulpanit 
Signed-off-by: Lennert Buytenhek 
---
 drivers/iommu/amd/iommu.c | 28 +---
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 811a49a95d04..a7d6d78147b7 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -425,9 +425,11 @@ static void amd_iommu_report_rmp_hw_error(volatile u32 
*event)
if (pdev)
dev_data = dev_iommu_priv_get(&pdev->dev);
 
-   if (dev_data && __ratelimit(&dev_data->rs)) {
-   pci_err(pdev, "Event logged [RMP_HW_ERROR vmg_tag=0x%04x, 
spa=0x%llx, flags=0x%04x]\n",
-   vmg_tag, spa, flags);
+   if (dev_data) {
+   if (__ratelimit(&dev_data->rs)) {
+   pci_err(pdev, "Event logged [RMP_HW_ERROR 
vmg_tag=0x%04x, spa=0x%llx, flags=0x%04x]\n",
+   vmg_tag, spa, flags);
+   }
} else {
pr_err_ratelimited("Event logged [RMP_HW_ERROR 
device=%02x:%02x.%x, vmg_tag=0x%04x, spa=0x%llx, flags=0x%04x]\n",
PCI_BUS_NUM(devid), PCI_SLOT(devid), PCI_FUNC(devid),
@@ -456,9 +458,11 @@ static void amd_iommu_report_rmp_fault(volatile u32 *event)
if (pdev)
dev_data = dev_iommu_priv_get(&pdev->dev);
 
-   if (dev_data && __ratelimit(&dev_data->rs)) {
-   pci_err(pdev, "Event logged [RMP_PAGE_FAULT vmg_tag=0x%04x, 
gpa=0x%llx, flags_rmp=0x%04x, flags=0x%04x]\n",
-   vmg_tag, gpa, flags_rmp, flags);
+   if (dev_data) {
+   if (__ratelimit(&dev_data->rs)) {
+   pci_err(pdev, "Event logged [RMP_PAGE_FAULT 
vmg_tag=0x%04x, gpa=0x%llx, flags_rmp=0x%04x, flags=0x%04x]\n",
+   vmg_tag, gpa, flags_rmp, flags);
+   }
} else {
pr_err_ratelimited("Event logged [RMP_PAGE_FAULT 
device=%02x:%02x.%x, vmg_tag=0x%04x, gpa=0x%llx, flags_rmp=0x%04x, 
flags=0x%04x]\n",
PCI_BUS_NUM(devid), PCI_SLOT(devid), PCI_FUNC(devid),
@@ -480,11 +484,13 @@ static void amd_iommu_report_page_fault(u16 devid, u16 
domain_id,
if (pdev)
dev_data = dev_iommu_priv_get(&pdev->dev);
 
-   if (dev_data && __ratelimit(&dev_data->rs)) {
-   pci_err(pdev, "Event logged [IO_PAGE_FAULT domain=0x%04x 
address=0x%llx flags=0x%04x]\n",
-   domain_id, address, flags);
-   } else if (printk_ratelimit()) {
-   pr_err("Event logged [IO_PAGE_FAULT device=%02x:%02x.%x 
domain=0x%04x address=0x%llx flags=0x%04x]\n",
+   if (dev_data) {
+   if (__ratelimit(&dev_data->rs)) {
+   pci_err(pdev, "Event logged [IO_PAGE_FAULT 
domain=0x%04x address=0x%llx flags=0x%04x]\n",
+   domain_id, address, flags);
+   }
+   } else {
+   pr_err_ratelimited("Event logged [IO_PAGE_FAULT 
device=%02x:%02x.%x domain=0x%04x address=0x%llx flags=0x%04x]\n",
PCI_BUS_NUM(devid), PCI_SLOT(devid), PCI_FUNC(devid),
domain_id, address, flags);
}
-- 
2.31.1


Re: [PATCH] iommu/amd: Fix I/O page fault logging ratelimit test

2021-07-21 Thread Lennert Buytenhek
On Tue, Jul 20, 2021 at 07:05:50PM -0500, Suthikulpanit, Suravee wrote:

> Hi Lennert,

Hi Suravee,


> > On an AMD system, I/O page faults are usually logged like this:
> > 
> > diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> > index 811a49a95d04..7ae426b092f2 100644
> > --- a/drivers/iommu/amd/iommu.c
> > +++ b/drivers/iommu/amd/iommu.c
> > @@ -483,7 +483,7 @@ static void amd_iommu_report_page_fault(u16 devid, u16 
> > domain_id,
> > if (dev_data && __ratelimit(_data->rs)) {
> > pci_err(pdev, "Event logged [IO_PAGE_FAULT domain=0x%04x 
> > address=0x%llx flags=0x%04x]\n",
> > domain_id, address, flags);
> > -   } else if (printk_ratelimit()) {
> > +   } else if (!dev_data && printk_ratelimit()) {
> 
> This seems a bit confusing. Also, according to the following comment in 
> include/linux/printk.h:
> 
> /*
>  * Please don't use printk_ratelimit(), because it shares ratelimiting state
>  * with all other unrelated printk_ratelimit() callsites.  Instead use
>  * printk_ratelimited() or plain old __ratelimit().
>  */
> 
> We probably should move away from using printk_ratelimit() here.
> What about the following change instead?
> 
> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
> index 811a49a95d04..8eb5d3519743 100644
> --- a/drivers/iommu/amd/iommu.c
> +++ b/drivers/iommu/amd/iommu.c
> @@ -480,11 +480,12 @@ static void amd_iommu_report_page_fault(u16 devid, u16 
> domain_id,
> if (pdev)
> dev_data = dev_iommu_priv_get(&pdev->dev);
> 
> -   if (dev_data && __ratelimit(&dev_data->rs)) {
> -   pci_err(pdev, "Event logged [IO_PAGE_FAULT domain=0x%04x 
> address=0x%llx flags=0x%04x]\n",
> -   domain_id, address, flags);
> -   } else if (printk_ratelimit()) {
> -   pr_err("Event logged [IO_PAGE_FAULT device=%02x:%02x.%x 
> domain=0x%04x address=0x%llx flags=0x%04x]\n",
> +   if (dev_data) {
> +   if (__ratelimit(&dev_data->rs))
> +   pci_err(pdev, "Event logged [IO_PAGE_FAULT 
> domain=0x%04x address=0x%llx flags=0x%04x]\n",
> +   domain_id, address, flags);
> +   } else {
> +   pr_err_ratelimited("Event logged [IO_PAGE_FAULT 
> device=%02x:%02x.%x domain=0x%04x address=0x%llx flags=0x%04x]\n",
> PCI_BUS_NUM(devid), PCI_SLOT(devid), PCI_FUNC(devid),
> domain_id, address, flags);
> }

Looks good!


> Note also that there might be other places in this file that would need
> similar modification as well.

Indeed, there are two more sites like these.

I've sent a new patch that incorporates your feedback.  Thank you!


Cheers,
Lennert


Re: [PATCH -next] iommu/arm-smmu-v3: Add suspend and resume support

2021-07-21 Thread Robin Murphy

On 2021-07-21 14:12, Marc Zyngier wrote:

On Wed, 21 Jul 2021 12:42:14 +0100,
Robin Murphy  wrote:


[ +Marc for MSI bits ]

On 2021-07-21 02:33, Bixuan Cui wrote:

Add suspend and resume support for arm-smmu-v3 by low-power mode.

When the smmu is suspended, it is powered off and the registers are
cleared. So saves the msi_msg context during msi interrupt initialization
of smmu. When resume happens it calls arm_smmu_device_reset() to restore
the registers.

Signed-off-by: Bixuan Cui 
Reviewed-by: Wei Yongjun 
Reviewed-by: Zhen Lei 
Reviewed-by: Ding Tianhong 
Reviewed-by: Hanjun Guo 
---

   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 72 ++---
   1 file changed, 64 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 235f9bdaeaf2..bf1163acbcb1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -40,6 +40,7 @@ MODULE_PARM_DESC(disable_bypass,
 static bool disable_msipolling;
   module_param(disable_msipolling, bool, 0444);
+static bool bypass;
   MODULE_PARM_DESC(disable_msipolling,
"Disable MSI-based polling for CMD_SYNC completion.");
   @@ -3129,11 +3130,37 @@ static void arm_smmu_write_msi_msg(struct
msi_desc *desc, struct msi_msg *msg)
doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
doorbell &= MSI_CFG0_ADDR_MASK;
   +/* Saves the msg context for resume if desc->msg is empty */
+   if (desc->msg.address_lo == 0 && desc->msg.address_hi == 0) {
+   desc->msg.address_lo = msg->address_lo;
+   desc->msg.address_hi = msg->address_hi;
+   desc->msg.data = msg->data;
+   }


My gut feeling is that this is something a device driver maybe
shouldn't be poking into, but I'm not entirely familiar with the area
:/


Certainly not. If you rely on the message being stored into the
descriptors, then implement this in the core code, like we do for PCI.


Ah, so it would be an acceptable compromise to *read* desc->msg (and 
thus avoid having to store our own copy of the message) if the core was 
guaranteed to cache it? That's good to know, thanks.



+
writeq_relaxed(doorbell, smmu->base + cfg[0]);
writel_relaxed(msg->data, smmu->base + cfg[1]);
writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + cfg[2]);
   }
   +static void arm_smmu_resume_msis(struct arm_smmu_device *smmu)
+{
+   struct msi_desc *desc;
+   struct device *dev = smmu->dev;
+
+   for_each_msi_entry(desc, dev) {
+   switch (desc->platform.msi_index) {
+   case EVTQ_MSI_INDEX:
+   case GERROR_MSI_INDEX:
+   case PRIQ_MSI_INDEX:
+   arm_smmu_write_msi_msg(desc, &(desc->msg));


Consider using get_cached_msi_msg() instead of using the internals of
the descriptor.


Oh, there's even a proper API for it, marvellous! I hadn't managed to 
dig that far myself :)
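
(For the record, a rough sketch of what the resume path could look like once it
leans on get_cached_msi_msg() instead of reading desc->msg directly; this
assumes the core has cached the composed message for these platform-MSI
descriptors:)

	static void arm_smmu_resume_msis(struct arm_smmu_device *smmu)
	{
		struct msi_desc *desc;

		for_each_msi_entry(desc, smmu->dev) {
			struct msi_msg msg;

			switch (desc->platform.msi_index) {
			case EVTQ_MSI_INDEX:
			case GERROR_MSI_INDEX:
			case PRIQ_MSI_INDEX:
				/* fetch the cached message and reprogram the doorbell */
				get_cached_msi_msg(desc->irq, &msg);
				arm_smmu_write_msi_msg(desc, &msg);
				break;
			default:
				break;
			}
		}
	}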



+   break;
+   default:
+   continue;
+
+   }
+   }
+}
+
   static void arm_smmu_setup_msis(struct arm_smmu_device *smmu)
   {
struct msi_desc *desc;
@@ -3184,11 +3211,17 @@ static void arm_smmu_setup_msis(struct arm_smmu_device 
*smmu)
devm_add_action(dev, arm_smmu_free_msis, dev);
   }
   -static void arm_smmu_setup_unique_irqs(struct arm_smmu_device
*smmu)
+static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu, bool 
resume_mode)
   {
int irq, ret;
   -arm_smmu_setup_msis(smmu);
+   if (!resume_mode)
+   arm_smmu_setup_msis(smmu);
+   else {
+   /* The irq doesn't need to be re-requested during resume */
+   arm_smmu_resume_msis(smmu);
+   return;


What about wired IRQs?


Yeah. I assume the SMMU needs to be told which event gets signalled
using MSIs or IRQs? Or is that implied by the MSI being configured or
not (I used to know the answer to that, but I have long paged it out).


My mistake there - I misread the rather convoluted existing flow to 
think that bailing here would fail to enable wired IRQs, but of course 
the signalling decision is based on whether the MSI address is non-zero, 
and the enables in ARM_SMMU_IRQ_CTRL still apply either way.


Given that, I think this is the point at which to refactor this whole 
part so that logically requesting and physically programming the 
interrupts are split into distinct stages, which can then be called 
separately as appropriate. Personally I have a particular dislike of 
this general anti-pattern:


int do_a_thing(bool but_only_do_part_of_it)
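
(In other words, rather than one entry point whose behaviour forks on a mode
flag, split it into two functions that each do one whole thing; a rough sketch
of the shape being suggested, with made-up names:)

	/* Probe only: allocate MSIs and request the interrupt lines. */
	static int arm_smmu_request_irqs(struct arm_smmu_device *smmu)
	{
		/* platform-MSI allocation and devm_request_irq() calls live here */
		return 0;
	}

	/* Probe and resume: (re)program MSI doorbells and ARM_SMMU_IRQ_CTRL enables. */
	static void arm_smmu_program_irqs(struct arm_smmu_device *smmu)
	{
		/* write the doorbell registers and the IRQ enable bits here */
	}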


+   }
/* Request interrupt lines */
irq = smmu->evtq.q.irq;
@@ -3230,7 +3263,7 @@ static void arm_smmu_setup_unique_irqs(struct 
arm_smmu_device *smmu)
}
   }
   -static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
+static int 

Re: [PATCH 4/5] iommu/vt-d: Disallow SVA if devices don't support 64-bit address

2021-07-21 Thread Lu Baolu

On 2021/7/21 19:12, Robin Murphy wrote:

On 2021-07-21 02:50, Lu Baolu wrote:

Hi Robin,

Thanks a lot for reviewing my patch!

On 7/20/21 5:27 PM, Robin Murphy wrote:

On 2021-07-20 02:38, Lu Baolu wrote:
When the device and CPU share an address space (such as SVA), the 
device
must support the same addressing capability as the CPU. The CPU does 
not
consider the addressing ability of any device when managing the page 
table

of a process, so the device must have enough addressing ability to bind
the page table of the process.

Signed-off-by: Lu Baolu 
---
  drivers/iommu/intel/iommu.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index f45c80ce2381..f3cca1dd384d 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -5372,6 +5372,9 @@ static int intel_iommu_enable_sva(struct 
device *dev)

  if (!(iommu->flags & VTD_FLAG_SVM_CAPABLE))
  return -ENODEV;
+    if (!dev->dma_mask || *dev->dma_mask != DMA_BIT_MASK(64))


Careful - VFIO doesn't set DMA masks (since it doesn't use the DMA API),


SVA doesn't work through the VFIO framework.


Did anyone say it does? My point is that, as far as I understand, the 
SVA UAPI is very much intended to work *with* VFIO, and even if the 
finer details are still mired in the /dev/ioasid discussion today we 
should definitely expect to see VFIO-like use-cases at some point. I 
certainly don't see why any of the guest SVA stuff exists already if not 
for VFIO-assigned devices?


Agreed. From /dev/ioasid design point of view, it's possible to have
VFIO-like use case of SVA. Perhaps the device addressing capability
could be included in GET_DEV_INFO of /dev/ioasid UAPI.




so this appears to be relying on another driver having bound previously,


Yes. You are right.

otherwise the mask would still be the default 32-bit one from 
pci_setup_device(). I'm not sure that's an entirely robust assumption.


Currently the SVA implementation always requires a native kernel driver. The
assumption is that drivers should check and set the 64-bit addressing
capability before calling the iommu_sva_xxx() APIs.
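
(As an illustration of that assumption, a hypothetical client driver would do
something along these lines before binding an SVA context; this is a sketch
only, not code from any in-tree driver:)

	#include <linux/dma-mapping.h>
	#include <linux/iommu.h>

	static int my_driver_setup_sva(struct device *dev)
	{
		int ret;

		/* SVA shares the CPU page table, so the device must be able to
		 * address the full 64-bit virtual address space. */
		ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
		if (ret)
			return ret;

		return iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA);
	}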


...and given that that is not a documented requirement, and certainly 
not a technical one (even a self-contained kernel driver could choose to 
only use SVA contexts and not touch the DMA API), it's an inherently 
fragile assumption which I'm confident *will* be broken eventually :)




Fair enough. I will drop this patch.

Thanks a lot for the comments.

Best regards,
baolu


Robin.


+    return -ENODEV;
+
  if (intel_iommu_enable_pasid(iommu, dev))
  return -ENODEV;



Best regards,
baolu


Re: [Resend RFC PATCH V4 09/13] x86/Swiotlb/HV: Add Swiotlb bounce buffer remap function for HV IVM

2021-07-21 Thread Christoph Hellwig
On Wed, Jul 21, 2021 at 06:28:48PM +0800, Tianyu Lan wrote:
> dma_mmap_attrs() and dma_get_sgtable_attrs() get input virtual address
> belonging to backing memory with struct page and returns bounce buffer
> dma physical address which is below shared_gpa_boundary(vTOM) and passed
> to Hyper-V via vmbus protocol.
>
> The new map virtual address is only to access bounce buffer in swiotlb
> code and will not be used other places. It's stored in the mem->vstart.
> So the new API set_memory_decrypted_map() in this series is only called
> in the swiotlb code. Other platforms may replace set_memory_decrypted()
> with set_memory_decrypted_map() as requested.

It seems like you are indeed not using these new helpers in
dma_direct_alloc.  How is dma_alloc_attrs/dma_alloc_coherent going to
work on these systems?


Re: [PATCH -next] iommu/arm-smmu-v3: Add suspend and resume support

2021-07-21 Thread Marc Zyngier
On Wed, 21 Jul 2021 14:59:47 +0100,
Robin Murphy  wrote:
> 
> On 2021-07-21 14:12, Marc Zyngier wrote:
> > On Wed, 21 Jul 2021 12:42:14 +0100,
> > Robin Murphy  wrote:
> >> 
> >> [ +Marc for MSI bits ]
> >> 
> >> On 2021-07-21 02:33, Bixuan Cui wrote:
> >>> Add suspend and resume support for arm-smmu-v3 by low-power mode.
> >>> 
> >>> When the smmu is suspended, it is powered off and the registers are
> >>> cleared. So saves the msi_msg context during msi interrupt initialization
> >>> of smmu. When resume happens it calls arm_smmu_device_reset() to restore
> >>> the registers.
> >>> 
> >>> Signed-off-by: Bixuan Cui 
> >>> Reviewed-by: Wei Yongjun 
> >>> Reviewed-by: Zhen Lei 
> >>> Reviewed-by: Ding Tianhong 
> >>> Reviewed-by: Hanjun Guo 
> >>> ---
> >>> 
> >>>drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 72 ++---
> >>>1 file changed, 64 insertions(+), 8 deletions(-)
> >>> 
> >>> diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
> >>> b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> >>> index 235f9bdaeaf2..bf1163acbcb1 100644
> >>> --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> >>> +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
> >>> @@ -40,6 +40,7 @@ MODULE_PARM_DESC(disable_bypass,
> >>>  static bool disable_msipolling;
> >>>module_param(disable_msipolling, bool, 0444);
> >>> +static bool bypass;
> >>>MODULE_PARM_DESC(disable_msipolling,
> >>>   "Disable MSI-based polling for CMD_SYNC completion.");
> >>>@@ -3129,11 +3130,37 @@ static void arm_smmu_write_msi_msg(struct
> >>> msi_desc *desc, struct msi_msg *msg)
> >>>   doorbell = (((u64)msg->address_hi) << 32) | msg->address_lo;
> >>>   doorbell &= MSI_CFG0_ADDR_MASK;
> >>>+  /* Saves the msg context for resume if desc->msg is empty */
> >>> + if (desc->msg.address_lo == 0 && desc->msg.address_hi == 0) {
> >>> + desc->msg.address_lo = msg->address_lo;
> >>> + desc->msg.address_hi = msg->address_hi;
> >>> + desc->msg.data = msg->data;
> >>> + }
> >> 
> >> My gut feeling is that this is something a device driver maybe
> >> shouldn't be poking into, but I'm not entirely familiar with the area
> >> :/
> > 
> > Certainly not. If you rely on the message being stored into the
> > descriptors, then implement this in the core code, like we do for PCI.
> 
> Ah, so it would be an acceptable compromise to *read* desc->msg (and
> thus avoid having to store our own copy of the message) if the core
> was guaranteed to cache it? That's good to know, thanks.

Yeah, vfio, a couple of other weird drivers and (*surprise!*) ia64 are
using this kind of trick. I don't see a reason not to implement that
for platform-MSI (although level signalling may be interesting...), or
even to move it into the core MSI code.
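
(Conceptually, caching in the core just means taking a copy of the composed
message before handing it to the irqchip; a hand-wavy sketch with a made-up
helper name:)

	static void msi_cache_and_write_msg(struct irq_data *d, struct msi_msg *msg)
	{
		struct msi_desc *desc = irq_data_get_msi_desc(d);

		/* keep a copy so get_cached_msi_msg() works later, e.g. across suspend */
		desc->msg = *msg;
		d->chip->irq_write_msi_msg(d, msg);
	}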

> 
> >>> +
> >>>   writeq_relaxed(doorbell, smmu->base + cfg[0]);
> >>>   writel_relaxed(msg->data, smmu->base + cfg[1]);
> >>>   writel_relaxed(ARM_SMMU_MEMATTR_DEVICE_nGnRE, smmu->base + 
> >>> cfg[2]);
> >>>}
> >>>+static void arm_smmu_resume_msis(struct arm_smmu_device *smmu)
> >>> +{
> >>> + struct msi_desc *desc;
> >>> + struct device *dev = smmu->dev;
> >>> +
> >>> + for_each_msi_entry(desc, dev) {
> >>> + switch (desc->platform.msi_index) {
> >>> + case EVTQ_MSI_INDEX:
> >>> + case GERROR_MSI_INDEX:
> >>> + case PRIQ_MSI_INDEX:
> >>> + arm_smmu_write_msi_msg(desc, &(desc->msg));
> > 
> > Consider using get_cached_msi_msg() instead of using the internals of
> > the descriptor.
> 
> Oh, there's even a proper API for it, marvellous! I hadn't managed to
> dig that far myself :)

It is a bit odd in the sense that it takes a copy of the message
instead of returning a pointer, but at least this solves lifetime
issues.

Thanks,

M.

-- 
Without deviation from the norm, progress is not possible.


Re: [Resend RFC PATCH V4 09/13] x86/Swiotlb/HV: Add Swiotlb bounce buffer remap function for HV IVM

2021-07-21 Thread Tianyu Lan




On 7/21/2021 10:33 PM, Christoph Hellwig wrote:

On Wed, Jul 21, 2021 at 06:28:48PM +0800, Tianyu Lan wrote:

dma_mmap_attrs() and dma_get_sgtable_attrs() get input virtual address
belonging to backing memory with struct page and returns bounce buffer
dma physical address which is below shared_gpa_boundary(vTOM) and passed
to Hyper-V via vmbus protocol.

The new map virtual address is only to access bounce buffer in swiotlb
code and will not be used other places. It's stored in the mem->vstart.
So the new API set_memory_decrypted_map() in this series is only called
in the swiotlb code. Other platforms may replace set_memory_decrypted()
with set_memory_decrypted_map() as requested.


It seems like you are indeed not using these new helpers in
dma_direct_alloc.  How is dma_alloc_attrs/dma_alloc_coherent going to
work on these systems?



Current vmbus device drivers don't use dma_alloc_attrs()/dma_alloc_coherent()
because the vmbus driver allocates the ring buffer for all devices. The ring
buffer has already been marked decrypted and remapped in the vmbus driver.
The netvsc and storvsc drivers just need to use dma_map_single()/dma_map_page().
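
(For reference, the streaming-DMA pattern those drivers rely on is just the
usual map/unmap pair; a generic sketch, not actual netvsc/storvsc code:)

	static int send_buffer(struct device *dev, void *buf, size_t len)
	{
		dma_addr_t dma_addr;

		dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
		if (dma_mapping_error(dev, dma_addr))
			return -ENOMEM;

		/* hand dma_addr to the hardware / host here ... */

		dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
		return 0;
	}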






Re: [PATCH] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module

2021-07-21 Thread Saravana Kannan via iommu
On Wed, Jul 21, 2021 at 1:23 PM Bjorn Andersson
 wrote:
>
> On Wed 21 Jul 13:00 CDT 2021, Saravana Kannan wrote:
>
> > On Wed, Jul 21, 2021 at 10:24 AM John Stultz  wrote:
> > >
> > > On Wed, Jul 21, 2021 at 4:54 AM Greg Kroah-Hartman
> > >  wrote:
> > > >
> > > > On Wed, Jul 07, 2021 at 04:53:20AM +, John Stultz wrote:
> > > > > Allow the qcom_scm driver to be loadable as a permenent module.
> > > >
> > > > This feels like a regression, it should be allowed to be a module.
> > >
> > > I'm sorry, I'm not sure I'm following you, Greg.  This patch is trying
> > > to enable the driver to be able to be loaded as a module.
> >
> > I think the mix up might be that Greg mentally read "permanent module"
> > as "builtin"?
> >
> > "permanent module" is just something that can't be unloaded once it's
> > loaded. It's not "builtin".
> >
>
> Afaict there's nothing in this patch that makes it more or less
> permanent.

The lack of a module_exit() makes it a permanent module. If you do
lsmod, it'll mark this as "[permanent]".
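
(A minimal sketch of what makes a module "permanent": it registers an init
function but no module_exit(), so once loaded it can never be removed:)

	#include <linux/module.h>
	#include <linux/init.h>

	static int __init example_init(void)
	{
		pr_info("example: loaded; no exit handler, so lsmod shows [permanent]\n");
		return 0;
	}
	module_init(example_init);
	/* deliberately no module_exit(): rmmod will refuse to unload this module */

	MODULE_LICENSE("GPL");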

-Saravana

> The module will be quite permanent (in practice) because
> several other core modules reference symbols in the qcom_scm module.
>
> But thanks to a previous patch, the qcom_scm device comes with
> suppress_bind_attrs, to prevent that the device goes away from a simple
> unbind operation - which the API and client drivers aren't designed to
> handle.
>
> So, it would have been better in this case to omit the word "permanent"
> from the commit message, but the change is good and I don't want to
> rebase my tree to drop that word.
>
> Thanks,
> Bjorn
>
> > -Saravana
> >
> > >
> > > > > This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> > > > > ensure that drivers that call into the qcom_scm driver are
> > > > > also built as modules. While not ideal in some cases its the
> > > > > only safe way I can find to avoid build errors without having
> > > > > those drivers select QCOM_SCM and have to force it on (as
> > > > > QCOM_SCM=n can be valid for those drivers).
> > > > >
> > > > > Reviving this now that Saravana's fw_devlink defaults to on,
> > > > > which should avoid loading troubles seen before.
> > > >
> > > > fw_devlink was supposed to resolve these issues and _allow_ code to be
> > > > built as modules and not forced to be built into the kernel.
> > >
> > > Right. I'm re-submitting this patch to enable a driver to work as a
> > > module, because earlier attempts to submit it ran into boot trouble
> > > because fw_devlink wasn't yet enabled.
> > >
> > > I worry something in my description made it seem otherwise, so let me
> > > know how you read it and I'll try to avoid such confusion in the
> > > future.
> > >
> > > thanks
> > > -john


Re: [PATCH] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module

2021-07-21 Thread John Stultz
On Wed, Jul 21, 2021 at 4:54 AM Greg Kroah-Hartman
 wrote:
>
> On Wed, Jul 07, 2021 at 04:53:20AM +, John Stultz wrote:
> > Allow the qcom_scm driver to be loadable as a permenent module.
>
> This feels like a regression, it should be allowed to be a module.

I'm sorry, I'm not sure I'm following you, Greg.  This patch is trying
to enable the driver to be able to be loaded as a module.

> > This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> > ensure that drivers that call into the qcom_scm driver are
> > also built as modules. While not ideal in some cases its the
> > only safe way I can find to avoid build errors without having
> > those drivers select QCOM_SCM and have to force it on (as
> > QCOM_SCM=n can be valid for those drivers).
> >
> > Reviving this now that Saravana's fw_devlink defaults to on,
> > which should avoid loading troubles seen before.
>
> fw_devlink was supposed to resolve these issues and _allow_ code to be
> built as modules and not forced to be built into the kernel.

Right. I'm re-submitting this patch to enable a driver to work as a
module, because earlier attempts to submit it ran into boot trouble
because fw_devlink wasn't yet enabled.

I worry something in my description made it seem otherwise, so let me
know how you read it and I'll try to avoid such confusion in the
future.

thanks
-john


Re: [PATCH] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module

2021-07-21 Thread Bjorn Andersson
On Mon 19 Jul 16:53 CDT 2021, Saravana Kannan wrote:

> On Mon, Jul 19, 2021 at 12:36 PM Bjorn Andersson
>  wrote:
> >
> > On Mon 19 Jul 14:00 CDT 2021, John Stultz wrote:
> >
> > > On Fri, Jul 16, 2021 at 10:01 PM Bjorn Andersson
> > >  wrote:
> > > > On Tue 06 Jul 23:53 CDT 2021, John Stultz wrote:
> > > > > Allow the qcom_scm driver to be loadable as a permenent module.
> > > > >
> > > > > This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> > > > > ensure that drivers that call into the qcom_scm driver are
> > > > > also built as modules. While not ideal in some cases its the
> > > > > only safe way I can find to avoid build errors without having
> > > > > those drivers select QCOM_SCM and have to force it on (as
> > > > > QCOM_SCM=n can be valid for those drivers).
> > > > >
> > > > > Reviving this now that Saravana's fw_devlink defaults to on,
> > > > > which should avoid loading troubles seen before.
> > > > >
> > > >
> > > > Are you (in this last paragraph) saying that all those who have been
> > > > burnt by fw_devlink during the last months and therefor run with it
> > > > disabled will have a less fun experience once this is merged?
> > > >
> 
> Bjorn,
> 
> I jump in and help with any reports of issues with fw_devlink if I'm
> cc'ed. Please feel free to add me and I'll help fix any issues you
> have with fw_devlink=on.
> 

Thanks Saravana, unfortunately I've only heard these reports second hand
so far, not been able to reproduce them on my own. I appreciate your
support and will certainly reach out if I need some assistance.

> > >
> > > I guess potentially. So way back when this was originally submitted,
> > > some folks had trouble booting if it was set as a module due to it
> > > loading due to the deferred_probe_timeout expiring.
> > > My attempts to change the default timeout value to be larger ran into
> > > trouble, but Saravana's fw_devlink does manage to resolve things
> > > properly for this case.
> > >
> >
> > Unfortunately I see really weird things coming out of that, e.g. display
> > on my db845c is waiting for the USB hub on PCIe to load its firmware,
> > which typically times out after 60 seconds.
> >
> > I've stared at it quite a bit and I don't understand how they are
> > related.
> 
> Can you please add me to any email thread with the details? I'd be
> happy to help.
> 
> First step is to make sure all the devices probe as with
> fw_devlink=permissive. After that if you are still seeing issues, it's
> generally timing issues in the driver. But if the actual timing issue
> is identified (by you or whoever knows the driver seeing the issue),
> then I can help with fixes or suggestions for fixes.
> 
> > > But if folks are having issues w/ fw_devlink, and have it disabled,
> > > and set QCOM_SCM=m they could still trip over the issue with the
> > > timeout firing before it is loaded (especially if they are loading
> > > modules from late mounted storage rather than ramdisk).
> > >
> >
> > I guess we'll have to force QCOM_SCM=y in the defconfig and hope people
> > don't make it =m.
> >
> > > > (I'm picking this up, but I don't fancy the idea that some people are
> > > > turning the boot process into a lottery)
> > >
> > > Me neither, and I definitely think the deferred_probe_timeout logic is
> > > way too fragile, which is why I'm eager for fw_devlink as it's a much
> > > less racy approach to handling module loading dependencies.
> >
> > Right, deferred_probe_timeout is the main issue here. Without it we
> > might get some weird probe deferral runs, but either some driver is
> > missing or it settles eventually.
> >
> > With deferred_probe_timeout it's rather common for me to see things
> > end up probe out of order (even more now with fw_devlink finding cyclic
> > dependencies) and deferred_probe_timeout just breaking things.
> 
> Again, please CC me on these threads and I'd be happy to help.
> 
> >
> > > So if you
> > > want to hold on this, while any remaining fw_devlink issues get
> > > sorted, that's fine.  But I'd also not cast too much ire at
> > > fw_devlink, as the global probe timeout approach for handling optional
> > > links isn't great, and we need a better solution.
> > >
> >
> > There's no end to the possible and valid ways you can setup your
> > defconfig and run into the probe deferral issues, so I see no point in
> > holding this one back any longer. I just hope that one day it will be
> > possible to boot the upstream kernel in a reliable fashion.
> 
> Might not be believable, but I'm hoping fw_devlink helps you meet this goal :)
> 

Sounds good, I hope so too :)

Regards,
Bjorn


[PATCH 14/23] iommu: Introduce explicit type for non-strict DMA domains

2021-07-21 Thread Robin Murphy
Promote the difference between strict and non-strict DMA domains from an
internal detail to a distinct domain feature and type, to pave the way
for exposing it through the sysfs default domain interface.
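
(Illustration of how the new encoding is meant to be consumed by drivers once
this series is applied: membership of the DMA-API family becomes a flag test,
while the flush-queue flavour remains distinguishable. Sketch only:)

	#include <linux/iommu.h>

	/* true for both IOMMU_DOMAIN_DMA and IOMMU_DOMAIN_DMA_FQ */
	static bool domain_is_dma_api(struct iommu_domain *domain)
	{
		return domain->type & __IOMMU_DOMAIN_DMA_API;
	}

	/* true only for the lazy, flush-queue variant */
	static bool domain_uses_flush_queue(struct iommu_domain *domain)
	{
		return domain->type == IOMMU_DOMAIN_DMA_FQ;
	}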

Signed-off-by: Robin Murphy 
---
 drivers/iommu/dma-iommu.c | 2 +-
 drivers/iommu/iommu.c | 6 +-
 include/linux/iommu.h | 6 ++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e28396cea6eb..b1af1ff324c5 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1311,7 +1311,7 @@ void iommu_setup_dma_ops(struct device *dev, u64 
dma_base, u64 dma_limit)
 * The IOMMU core code allocates the default DMA domain, which the
 * underlying IOMMU driver needs to support via the dma-iommu layer.
 */
-   if (domain->type == IOMMU_DOMAIN_DMA) {
+   if (domain->type & __IOMMU_DOMAIN_DMA_API) {
if (iommu_dma_init_domain(domain, dma_base, dma_limit, dev))
goto out_err;
dev->dma_ops = &iommu_dma_ops;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index e23d0c9be9db..8333c334891e 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -114,6 +114,7 @@ static const char *iommu_domain_type_str(unsigned int t)
case IOMMU_DOMAIN_UNMANAGED:
return "Unmanaged";
case IOMMU_DOMAIN_DMA:
+   case IOMMU_DOMAIN_DMA_FQ:
return "Translated";
default:
return "Unknown";
@@ -547,6 +548,9 @@ static ssize_t iommu_group_show_type(struct iommu_group 
*group,
case IOMMU_DOMAIN_DMA:
type = "DMA\n";
break;
+   case IOMMU_DOMAIN_DMA_FQ:
+   type = "DMA-FQ\n";
+   break;
}
}
mutex_unlock(&group->mutex);
@@ -1942,7 +1946,7 @@ static struct iommu_domain *__iommu_domain_alloc(struct 
bus_type *bus,
/* Assume all sizes by default; the driver may override this later */
domain->pgsize_bitmap  = bus->iommu_ops->pgsize_bitmap;
 
-   if (type == IOMMU_DOMAIN_DMA && iommu_get_dma_cookie(domain)) {
+   if ((type & __IOMMU_DOMAIN_DMA_API) && iommu_get_dma_cookie(domain)) {
iommu_domain_free(domain);
domain = NULL;
}
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 007662b65474..56519110d43f 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -61,6 +61,7 @@ struct iommu_domain_geometry {
 #define __IOMMU_DOMAIN_DMA_API (1U << 1)  /* Domain for use in DMA-API
  implementation  */
 #define __IOMMU_DOMAIN_PT  (1U << 2)  /* Domain is identity mapped   */
+#define __IOMMU_DOMAIN_DMA_FQ  (1U << 3)  /* DMA-API uses flush queue*/
 
 /*
  * This are the possible domain-types
@@ -73,12 +74,17 @@ struct iommu_domain_geometry {
  * IOMMU_DOMAIN_DMA- Internally used for DMA-API implementations.
  *   This flag allows IOMMU drivers to implement
  *   certain optimizations for these domains
+ * IOMMU_DOMAIN_DMA_FQ - As above, but definitely using batched TLB
+ *   invalidation.
  */
 #define IOMMU_DOMAIN_BLOCKED   (0U)
 #define IOMMU_DOMAIN_IDENTITY  (__IOMMU_DOMAIN_PT)
 #define IOMMU_DOMAIN_UNMANAGED (__IOMMU_DOMAIN_PAGING)
 #define IOMMU_DOMAIN_DMA   (__IOMMU_DOMAIN_PAGING |\
 __IOMMU_DOMAIN_DMA_API)
+#define IOMMU_DOMAIN_DMA_FQ(__IOMMU_DOMAIN_PAGING |\
+__IOMMU_DOMAIN_DMA_API |   \
+__IOMMU_DOMAIN_DMA_FQ)
 
 struct iommu_domain {
unsigned type;
-- 
2.25.1



[PATCH 15/23] iommu/amd: Prepare for multiple DMA domain types

2021-07-21 Thread Robin Murphy
The DMA ops setup can simply be unconditional, since iommu-dma
already knows not to touch identity domains.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/amd/iommu.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 40ae7130fc80..1e992ef21f34 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1707,14 +1707,9 @@ static struct iommu_device 
*amd_iommu_probe_device(struct device *dev)
 
 static void amd_iommu_probe_finalize(struct device *dev)
 {
-   struct iommu_domain *domain;
-
/* Domains are initialized for this device - have a look what we ended 
up with */
-   domain = iommu_get_domain_for_dev(dev);
-   if (domain->type == IOMMU_DOMAIN_DMA)
-   iommu_setup_dma_ops(dev, 0, U64_MAX);
-   else
-   set_dma_ops(dev, NULL);
+   set_dma_ops(dev, NULL);
+   iommu_setup_dma_ops(dev, 0, U64_MAX);
 }
 
 static void amd_iommu_release_device(struct device *dev)
-- 
2.25.1



[PATCH 16/23] iommu/arm-smmu: Prepare for multiple DMA domain types

2021-07-21 Thread Robin Murphy
In preparation for the strict vs. non-strict decision for DMA domains to
be expressed in the domain type, make sure we expose our flush queue
awareness by accepting the new domain type, and test the specific
feature flag where we want to identify DMA domains in general.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 1 +
 drivers/iommu/arm/arm-smmu/arm-smmu.c   | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 039f371d2f8b..fa41026d272e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1972,6 +1972,7 @@ static struct iommu_domain 
*arm_smmu_domain_alloc(unsigned type)
 
if (type != IOMMU_DOMAIN_UNMANAGED &&
type != IOMMU_DOMAIN_DMA &&
+   type != IOMMU_DOMAIN_DMA_FQ &&
type != IOMMU_DOMAIN_IDENTITY)
return NULL;
 
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c 
b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 354be8f4c0ef..dbc14c265b15 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -870,10 +870,11 @@ static struct iommu_domain 
*arm_smmu_domain_alloc(unsigned type)
 
if (type != IOMMU_DOMAIN_UNMANAGED &&
type != IOMMU_DOMAIN_DMA &&
+   type != IOMMU_DOMAIN_DMA_FQ &&
type != IOMMU_DOMAIN_IDENTITY)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA && using_legacy_binding)
+   if ((type & __IOMMU_DOMAIN_DMA_API) && using_legacy_binding)
return NULL;
/*
 * Allocate the domain and initialise some of its data structures.
-- 
2.25.1



[PATCH 18/23] iommu: Express DMA strictness via the domain type

2021-07-21 Thread Robin Murphy
Eliminate the iommu_get_dma_strict() indirection and pipe the
information through the domain type from the beginning. Besides
the flow simplification this also has several nice side-effects:

 - Automatically implies strict mode for untrusted devices by
   virtue of their IOMMU_DOMAIN_DMA override.
 - Ensures that we only end up using flush queues for drivers
   which are aware of them and can actually benefit.
 - Allows us to handle flush queue init failure by falling back
   to strict mode instead of leaving it to possibly blow up later.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  2 +-
 drivers/iommu/arm/arm-smmu/arm-smmu.c   |  2 +-
 drivers/iommu/dma-iommu.c   | 10 ++
 drivers/iommu/iommu.c   | 14 --
 include/linux/iommu.h   |  1 -
 5 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index fa41026d272e..260b560d0075 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2175,7 +2175,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain 
*domain,
.iommu_dev  = smmu->dev,
};
 
-   if (!iommu_get_dma_strict(domain))
+   if (domain->type == IOMMU_DOMAIN_DMA_FQ)
pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
 
pgtbl_ops = alloc_io_pgtable_ops(fmt, &pgtbl_cfg, smmu_domain);
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c 
b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index dbc14c265b15..2c717f3be056 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -765,7 +765,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain 
*domain,
.iommu_dev  = smmu->dev,
};
 
-   if (!iommu_get_dma_strict(domain))
+   if (domain->type == IOMMU_DOMAIN_DMA_FQ)
pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
 
if (smmu->impl && smmu->impl->init_context) {
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index b1af1ff324c5..a114a7ad88ec 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -363,13 +363,15 @@ static int iommu_dma_init_domain(struct iommu_domain 
*domain, dma_addr_t base,
 
init_iova_domain(iovad, 1UL << order, base_pfn);
 
-   if (!cookie->fq_domain && !dev_is_untrusted(dev) &&
-   domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
+   if (domain->type == IOMMU_DOMAIN_DMA_FQ && !cookie->fq_domain &&
+   domain->ops->flush_iotlb_all) {
if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
- iommu_dma_entry_dtor))
+ iommu_dma_entry_dtor)) {
pr_warn("iova flush queue initialization failed\n");
-   else
+   domain->type = IOMMU_DOMAIN_DMA;
+   } else {
cookie->fq_domain = domain;
+   }
}
 
return iova_reserve_iommu_regions(dev, domain);
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 8333c334891e..d7eaacae0944 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -135,6 +135,9 @@ static int __init iommu_subsys_init(void)
}
}
 
+   if (!iommu_default_passthrough() && !iommu_dma_strict)
+   iommu_def_domain_type = IOMMU_DOMAIN_DMA_FQ;
+
pr_info("Default domain type: %s %s\n",
iommu_domain_type_str(iommu_def_domain_type),
(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API) ?
@@ -352,15 +355,6 @@ void iommu_set_dma_strict(bool strict)
iommu_dma_strict = strict;
 }
 
-bool iommu_get_dma_strict(struct iommu_domain *domain)
-{
-   /* only allow lazy flushing for DMA domains */
-   if (domain->type == IOMMU_DOMAIN_DMA)
-   return iommu_dma_strict;
-   return true;
-}
-EXPORT_SYMBOL_GPL(iommu_get_dma_strict);
-
 static ssize_t iommu_group_attr_show(struct kobject *kobj,
 struct attribute *__attr, char *buf)
 {
@@ -764,7 +758,7 @@ static int iommu_create_device_direct_mappings(struct 
iommu_group *group,
unsigned long pg_size;
int ret = 0;
 
-   if (!domain || domain->type != IOMMU_DOMAIN_DMA)
+   if (!domain || !(domain->type & __IOMMU_DOMAIN_DMA_API))
return 0;
 
BUG_ON(!domain->pgsize_bitmap);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 56519110d43f..557c4c12e2cf 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -484,7 +484,6 @@ int iommu_set_pgtable_quirks(struct iommu_domain *domain,
unsigned long quirks);
 
 void iommu_set_dma_strict(bool val);
-bool iommu_get_dma_strict(struct iommu_domain *domain);
 
 

[PATCH 17/23] iommu/vt-d: Prepare for multiple DMA domain types

2021-07-21 Thread Robin Murphy
In preparation for the strict vs. non-strict decision for DMA domains to
be expressed in the domain type, make sure we expose our flush queue
awareness by accepting the new domain type, and test the specific
feature flag where we want to identify DMA domains in general. The DMA
ops setup can simply be made unconditional, since iommu-dma already
knows not to touch identity domains.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/intel/iommu.c | 15 ++-
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index e2add5a0caef..77d322272743 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -601,7 +601,7 @@ struct intel_iommu *domain_get_iommu(struct dmar_domain 
*domain)
int iommu_id;
 
/* si_domain and vm domain should not get here. */
-   if (WARN_ON(domain->domain.type != IOMMU_DOMAIN_DMA))
+   if (WARN_ON(!(domain->domain.type & __IOMMU_DOMAIN_DMA)))
return NULL;
 
for_each_domain_iommu(iommu_id, domain)
@@ -1035,7 +1035,7 @@ static struct dma_pte *pfn_to_dma_pte(struct dmar_domain 
*domain,
pteval = ((uint64_t)virt_to_dma_pfn(tmp_page) << 
VTD_PAGE_SHIFT) | DMA_PTE_READ | DMA_PTE_WRITE;
if (domain_use_first_level(domain)) {
pteval |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
-   if (domain->domain.type == IOMMU_DOMAIN_DMA)
+   if (domain->domain.type & 
__IOMMU_DOMAIN_DMA_API)
pteval |= DMA_FL_PTE_ACCESS;
}
if (cmpxchg64(&pte->val, 0ULL, pteval))
@@ -2346,7 +2346,7 @@ __domain_mapping(struct dmar_domain *domain, unsigned 
long iov_pfn,
if (domain_use_first_level(domain)) {
attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
 
-   if (domain->domain.type == IOMMU_DOMAIN_DMA) {
+   if (domain->domain.type & __IOMMU_DOMAIN_DMA_API) {
attr |= DMA_FL_PTE_ACCESS;
if (prot & DMA_PTE_WRITE)
attr |= DMA_FL_PTE_DIRTY;
@@ -4528,6 +4528,7 @@ static struct iommu_domain 
*intel_iommu_domain_alloc(unsigned type)
 
switch (type) {
case IOMMU_DOMAIN_DMA:
+   case IOMMU_DOMAIN_DMA_FQ:
case IOMMU_DOMAIN_UNMANAGED:
dmar_domain = alloc_domain(0);
if (!dmar_domain) {
@@ -5164,12 +5165,8 @@ static void intel_iommu_release_device(struct device 
*dev)
 
 static void intel_iommu_probe_finalize(struct device *dev)
 {
-   struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
-
-   if (domain && domain->type == IOMMU_DOMAIN_DMA)
-   iommu_setup_dma_ops(dev, 0, U64_MAX);
-   else
-   set_dma_ops(dev, NULL);
+   set_dma_ops(dev, NULL);
+   iommu_setup_dma_ops(dev, 0, U64_MAX);
 }
 
 static void intel_iommu_get_resv_regions(struct device *device,
-- 
2.25.1



[PATCH 22/23] iommu: Allow enabling non-strict mode dynamically

2021-07-21 Thread Robin Murphy
Allocating and enabling a flush queue is in fact something we can
reasonably do while a DMA domain is active, without having to rebuild it
from scratch. Thus we can allow a strict -> non-strict transition from
sysfs without requiring the device's driver to be unbound, which is of
particular interest to users who want to make selective relaxations for
critical devices like the one serving their root filesystem.

Disabling and draining a queue also seems technically possible to
achieve without rebuilding the whole domain, but would certainly be more
involved. Furthermore there's not such a clear use-case for tightening
up security *after* the device may already have done whatever it is that
you don't trust it not to do, so we only consider the relaxation case.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/iommu.c | 16 
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 4fad6d427d9d..43a2041d9629 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3130,6 +3130,13 @@ static int iommu_change_dev_def_domain(struct 
iommu_group *group,
goto out;
}
 
+   /* We can bring up a flush queue without tearing down the domain */
+   if (type == IOMMU_DOMAIN_DMA_FQ && prev_dom->type == IOMMU_DOMAIN_DMA) {
+   prev_dom->type = IOMMU_DOMAIN_DMA_FQ;
+   ret = iommu_dma_init_fq(prev_dom);
+   goto out;
+   }
+
/* Sets group->default_domain to the newly allocated domain */
ret = iommu_group_alloc_default_domain(dev->bus, group, type);
if (ret)
@@ -3170,9 +3177,9 @@ static int iommu_change_dev_def_domain(struct iommu_group 
*group,
 }
 
 /*
- * Changing the default domain through sysfs requires the users to ubind the
- * drivers from the devices in the iommu group. Return failure if this doesn't
- * meet.
+ * Changing the default domain through sysfs requires the users to unbind the
+ * drivers from the devices in the iommu group, except for a DMA -> DMA-FQ
+ * transition. Return failure if this isn't met.
  *
  * We need to consider the race between this and the device release path.
  * device_lock(dev) is used here to guarantee that the device release path
@@ -3248,7 +3255,8 @@ static ssize_t iommu_group_store_type(struct iommu_group 
*group,
 
/* Check if the device in the group still has a driver bound to it */
device_lock(dev);
-   if (device_is_bound(dev)) {
+   if (device_is_bound(dev) && !(req_type == IOMMU_DOMAIN_DMA_FQ &&
+   group->default_domain->type == IOMMU_DOMAIN_DMA)) {
pr_err_ratelimited("Device is still bound to driver\n");
ret = -EBUSY;
goto out;
-- 
2.25.1



[PATCH 23/23] iommu/arm-smmu: Allow non-strict in pgtable_quirks interface

2021-07-21 Thread Robin Murphy
To make io-pgtable aware of a flush queue being dynamically enabled,
allow IO_PGTABLE_QUIRK_NON_STRICT to be set even after a domain has been
attached to, and hook up the final piece of the puzzle in iommu-dma.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 15 +++
 drivers/iommu/arm/arm-smmu/arm-smmu.c   | 11 +++
 drivers/iommu/dma-iommu.c   |  3 +++
 3 files changed, 29 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 260b560d0075..ca19e4551468 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2714,6 +2714,20 @@ static int arm_smmu_enable_nesting(struct iommu_domain 
*domain)
return ret;
 }
 
+static int arm_smmu_set_pgtable_quirks(struct iommu_domain *domain,
+   unsigned long quirks)
+{
+   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+   if (quirks == IO_PGTABLE_QUIRK_NON_STRICT && smmu_domain->pgtbl_ops) {
+   struct io_pgtable *iop = 
io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);
+
+   iop->cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+   return 0;
+   }
+   return -EINVAL;
+}
+
 static int arm_smmu_of_xlate(struct device *dev, struct of_phandle_args *args)
 {
return iommu_fwspec_add_ids(dev, args->args, 1);
@@ -2828,6 +2842,7 @@ static struct iommu_ops arm_smmu_ops = {
.release_device = arm_smmu_release_device,
.device_group   = arm_smmu_device_group,
.enable_nesting = arm_smmu_enable_nesting,
+   .set_pgtable_quirks = arm_smmu_set_pgtable_quirks,
.of_xlate   = arm_smmu_of_xlate,
.get_resv_regions   = arm_smmu_get_resv_regions,
.put_resv_regions   = generic_iommu_put_resv_regions,
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c 
b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index 2c717f3be056..0f181f76c31b 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -1522,6 +1522,17 @@ static int arm_smmu_set_pgtable_quirks(struct 
iommu_domain *domain,
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
int ret = 0;
 
+   if (quirks == IO_PGTABLE_QUIRK_NON_STRICT) {
+   struct io_pgtable *iop;
+
+   if (!smmu_domain->pgtbl_ops)
+   return -EINVAL;
+
+   iop = io_pgtable_ops_to_pgtable(smmu_domain->pgtbl_ops);
+   iop->cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
+   return 0;
+   }
+
mutex_lock(&smmu_domain->init_mutex);
if (smmu_domain->smmu)
ret = -EPERM;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 64e9eefce00e..6b51e45e2358 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -16,6 +16,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -326,6 +327,8 @@ int iommu_dma_init_fq(struct iommu_domain *domain)
return -ENODEV;
}
cookie->fq_domain = domain;
+   if (domain->ops->set_pgtable_quirks)
+   domain->ops->set_pgtable_quirks(domain, 
IO_PGTABLE_QUIRK_NON_STRICT);
return 0;
 }
 
-- 
2.25.1



[PATCH 08/23] iommu/rockchip: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/rockchip-iommu.c | 11 +--
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 9febfb7f3025..c24561f54f32 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -1074,10 +1074,6 @@ static struct iommu_domain 
*rk_iommu_domain_alloc(unsigned type)
if (!rk_domain)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&rk_domain->domain))
-   goto err_free_domain;
-
/*
 * rk32xx iommus use a 2 level pagetable.
 * Each level1 (dt) and level2 (pt) table has 1024 4-byte entries.
@@ -1085,7 +1081,7 @@ static struct iommu_domain 
*rk_iommu_domain_alloc(unsigned type)
 */
rk_domain->dt = (u32 *)get_zeroed_page(GFP_KERNEL | GFP_DMA32);
if (!rk_domain->dt)
-   goto err_put_cookie;
+   goto err_free_domain;
 
rk_domain->dt_dma = dma_map_single(dma_dev, rk_domain->dt,
   SPAGE_SIZE, DMA_TO_DEVICE);
@@ -1106,9 +1102,6 @@ static struct iommu_domain 
*rk_iommu_domain_alloc(unsigned type)
 
 err_free_dt:
free_page((unsigned long)rk_domain->dt);
-err_put_cookie:
-   if (type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(&rk_domain->domain);
 err_free_domain:
kfree(rk_domain);
 
@@ -1137,8 +1130,6 @@ static void rk_iommu_domain_free(struct iommu_domain 
*domain)
 SPAGE_SIZE, DMA_TO_DEVICE);
free_page((unsigned long)rk_domain->dt);
 
-   if (domain->type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(&rk_domain->domain);
kfree(rk_domain);
 }
 
-- 
2.25.1



[PATCH 07/23] iommu/mtk: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/mtk_iommu.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 6f7c69688ce2..e39a6d1da28d 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -441,17 +441,11 @@ static struct iommu_domain 
*mtk_iommu_domain_alloc(unsigned type)
if (!dom)
return NULL;
 
-   if (iommu_get_dma_cookie(&dom->domain)) {
-   kfree(dom);
-   return NULL;
-   }
-
return &dom->domain;
 }
 
 static void mtk_iommu_domain_free(struct iommu_domain *domain)
 {
-   iommu_put_dma_cookie(domain);
kfree(to_mtk_domain(domain));
 }
 
-- 
2.25.1



[PATCH 09/23] iommu/sprd: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/sprd-iommu.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/drivers/iommu/sprd-iommu.c b/drivers/iommu/sprd-iommu.c
index 73dfd9946312..2bc1de6e823d 100644
--- a/drivers/iommu/sprd-iommu.c
+++ b/drivers/iommu/sprd-iommu.c
@@ -144,11 +144,6 @@ static struct iommu_domain 
*sprd_iommu_domain_alloc(unsigned int domain_type)
if (!dom)
return NULL;
 
-   if (iommu_get_dma_cookie(&dom->domain)) {
-   kfree(dom);
-   return NULL;
-   }
-
spin_lock_init(>pgtlock);
 
dom->domain.geometry.aperture_start = 0;
@@ -161,7 +156,6 @@ static void sprd_iommu_domain_free(struct iommu_domain 
*domain)
 {
struct sprd_iommu_domain *dom = to_sprd_domain(domain);
 
-   iommu_put_dma_cookie(domain);
kfree(dom);
 }
 
-- 
2.25.1



[PATCH 10/23] iommu/sun50i: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/sun50i-iommu.c | 12 +---
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/drivers/iommu/sun50i-iommu.c b/drivers/iommu/sun50i-iommu.c
index 181bb1c3437c..c349a95ec7bd 100644
--- a/drivers/iommu/sun50i-iommu.c
+++ b/drivers/iommu/sun50i-iommu.c
@@ -610,14 +610,10 @@ static struct iommu_domain 
*sun50i_iommu_domain_alloc(unsigned type)
if (!sun50i_domain)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&sun50i_domain->domain))
-   goto err_free_domain;
-
sun50i_domain->dt = (u32 *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
get_order(DT_SIZE));
if (!sun50i_domain->dt)
-   goto err_put_cookie;
+   goto err_free_domain;
 
	refcount_set(&sun50i_domain->refcnt, 1);
 
@@ -627,10 +623,6 @@ static struct iommu_domain 
*sun50i_iommu_domain_alloc(unsigned type)
 
	return &sun50i_domain->domain;
 
-err_put_cookie:
-   if (type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(&sun50i_domain->domain);
-
 err_free_domain:
kfree(sun50i_domain);
 
@@ -644,8 +636,6 @@ static void sun50i_iommu_domain_free(struct iommu_domain 
*domain)
free_pages((unsigned long)sun50i_domain->dt, get_order(DT_SIZE));
sun50i_domain->dt = NULL;
 
-   iommu_put_dma_cookie(domain);
-
kfree(sun50i_domain);
 }
 
-- 
2.25.1



[PATCH 11/23] iommu/virtio: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/virtio-iommu.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 6abdcab7273b..80930ce04a16 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -598,12 +598,6 @@ static struct iommu_domain *viommu_domain_alloc(unsigned 
type)
	spin_lock_init(&vdomain->mappings_lock);
vdomain->mappings = RB_ROOT_CACHED;
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&vdomain->domain)) {
-   kfree(vdomain);
-   return NULL;
-   }
-
	return &vdomain->domain;
 }
 
@@ -643,8 +637,6 @@ static void viommu_domain_free(struct iommu_domain *domain)
 {
struct viommu_domain *vdomain = to_viommu_domain(domain);
 
-   iommu_put_dma_cookie(domain);
-
/* Free all remaining mappings (size 2^64) */
viommu_del_mappings(vdomain, 0, 0);
 
-- 
2.25.1



[PATCH 12/23] iommu/dma: Unexport IOVA cookie management

2021-07-21 Thread Robin Murphy
IOVA cookies are now got and put by core code, so we no longer need to
export these to modular drivers. The export for getting MSI cookies
stays, since VFIO can still be a module, but it was already relying on
someone else putting them, so that aspect is unaffected.
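
For context, a rough sketch (not from this patch) of the one user that keeps
the export: a modular, VFIO-style caller still obtains an MSI cookie for its
unmanaged domain along these lines, and relies on the core to put it when the
domain is freed. The helper name, the msi_remap flag and resv_msi_base are
made up for the illustration.

	/* Illustrative only: modular user of the still-exported MSI cookie API */
	static struct iommu_domain *example_alloc_domain(struct bus_type *bus,
							 bool msi_remap,
							 dma_addr_t resv_msi_base)
	{
		struct iommu_domain *domain = iommu_domain_alloc(bus);

		if (domain && msi_remap &&
		    iommu_get_msi_cookie(domain, resv_msi_base)) {
			iommu_domain_free(domain);	/* the core puts the cookie here */
			return NULL;
		}
		return domain;
	}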

Signed-off-by: Robin Murphy 
---
 drivers/iommu/dma-iommu.c | 7 ---
 drivers/iommu/iommu.c | 3 +--
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 98ba927aee1a..10067fbc4309 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -98,9 +98,6 @@ static struct iommu_dma_cookie *cookie_alloc(enum 
iommu_dma_cookie_type type)
 /**
  * iommu_get_dma_cookie - Acquire DMA-API resources for a domain
  * @domain: IOMMU domain to prepare for DMA-API usage
- *
- * IOMMU drivers should normally call this from their domain_alloc
- * callback when domain->type == IOMMU_DOMAIN_DMA.
  */
 int iommu_get_dma_cookie(struct iommu_domain *domain)
 {
@@ -113,7 +110,6 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
 
return 0;
 }
-EXPORT_SYMBOL(iommu_get_dma_cookie);
 
 /**
  * iommu_get_msi_cookie - Acquire just MSI remapping resources
@@ -151,8 +147,6 @@ EXPORT_SYMBOL(iommu_get_msi_cookie);
  * iommu_put_dma_cookie - Release a domain's DMA mapping resources
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie() or
  *  iommu_get_msi_cookie()
- *
- * IOMMU drivers should normally call this from their domain_free callback.
  */
 void iommu_put_dma_cookie(struct iommu_domain *domain)
 {
@@ -172,7 +166,6 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
kfree(cookie);
domain->iova_cookie = NULL;
 }
-EXPORT_SYMBOL(iommu_put_dma_cookie);
 
 /**
  * iommu_dma_get_resv_regions - Reserved region driver helper
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 0952de57c466..e23d0c9be9db 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1942,8 +1942,7 @@ static struct iommu_domain *__iommu_domain_alloc(struct 
bus_type *bus,
/* Assume all sizes by default; the driver may override this later */
domain->pgsize_bitmap  = bus->iommu_ops->pgsize_bitmap;
 
-   /* Temporarily ignore -EEXIST while drivers still get their own cookies */
-   if (type == IOMMU_DOMAIN_DMA && iommu_get_dma_cookie(domain) == -ENOMEM) {
+   if (type == IOMMU_DOMAIN_DMA && iommu_get_dma_cookie(domain)) {
iommu_domain_free(domain);
domain = NULL;
}
-- 
2.25.1



[PATCH 13/23] iommu/dma: Remove redundant "!dev" checks

2021-07-21 Thread Robin Murphy
iommu_dma_init_domain() is now only called from iommu_setup_dma_ops(),
which has already assumed dev to be non-NULL.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/dma-iommu.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 10067fbc4309..e28396cea6eb 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -363,7 +363,7 @@ static int iommu_dma_init_domain(struct iommu_domain 
*domain, dma_addr_t base,
 
init_iova_domain(iovad, 1UL << order, base_pfn);
 
-   if (!cookie->fq_domain && (!dev || !dev_is_untrusted(dev)) &&
+   if (!cookie->fq_domain && !dev_is_untrusted(dev) &&
domain->ops->flush_iotlb_all && !iommu_get_dma_strict(domain)) {
if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
  iommu_dma_entry_dtor))
@@ -372,9 +372,6 @@ static int iommu_dma_init_domain(struct iommu_domain 
*domain, dma_addr_t base,
cookie->fq_domain = domain;
}
 
-   if (!dev)
-   return 0;
-
return iova_reserve_iommu_regions(dev, domain);
 }
 
-- 
2.25.1



[PATCH 02/23] iommu/amd: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/amd/iommu.c | 12 
 1 file changed, 12 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 811a49a95d04..40ae7130fc80 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1924,16 +1924,7 @@ static struct iommu_domain 
*amd_iommu_domain_alloc(unsigned type)
domain->domain.geometry.aperture_end   = ~0ULL;
domain->domain.geometry.force_aperture = true;
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&domain->domain) == -ENOMEM)
-   goto free_domain;
-
	return &domain->domain;
-
-free_domain:
-   protection_domain_free(domain);
-
-   return NULL;
 }
 
 static void amd_iommu_domain_free(struct iommu_domain *dom)
@@ -1950,9 +1941,6 @@ static void amd_iommu_domain_free(struct iommu_domain 
*dom)
if (!dom)
return;
 
-   if (dom->type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(&domain->domain);
-
if (domain->flags & PD_IOMMUV2_MASK)
free_gcr3_table(domain);
 
-- 
2.25.1



[PATCH 03/23] iommu/arm-smmu: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c |  7 ---
 drivers/iommu/arm/arm-smmu/arm-smmu.c   | 10 +++---
 drivers/iommu/arm/arm-smmu/qcom_iommu.c |  8 
 3 files changed, 3 insertions(+), 22 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c 
b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 235f9bdaeaf2..039f371d2f8b 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1984,12 +1984,6 @@ static struct iommu_domain 
*arm_smmu_domain_alloc(unsigned type)
if (!smmu_domain)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&smmu_domain->domain)) {
-   kfree(smmu_domain);
-   return NULL;
-   }
-
	mutex_init(&smmu_domain->init_mutex);
	INIT_LIST_HEAD(&smmu_domain->devices);
	spin_lock_init(&smmu_domain->devices_lock);
@@ -2021,7 +2015,6 @@ static void arm_smmu_domain_free(struct iommu_domain 
*domain)
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_device *smmu = smmu_domain->smmu;
 
-   iommu_put_dma_cookie(domain);
free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
/* Free the CD and ASID, if we allocated them */
diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c 
b/drivers/iommu/arm/arm-smmu/arm-smmu.c
index f22dbeb1e510..354be8f4c0ef 100644
--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
+++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
@@ -872,6 +872,9 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned 
type)
type != IOMMU_DOMAIN_DMA &&
type != IOMMU_DOMAIN_IDENTITY)
return NULL;
+
+   if (type == IOMMU_DOMAIN_DMA && using_legacy_binding)
+   return NULL;
/*
 * Allocate the domain and initialise some of its data structures.
 * We can't really do anything meaningful until we've added a
@@ -881,12 +884,6 @@ static struct iommu_domain *arm_smmu_domain_alloc(unsigned 
type)
if (!smmu_domain)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA && (using_legacy_binding ||
-   iommu_get_dma_cookie(&smmu_domain->domain))) {
-   kfree(smmu_domain);
-   return NULL;
-   }
-
	mutex_init(&smmu_domain->init_mutex);
	spin_lock_init(&smmu_domain->cb_lock);
 
@@ -901,7 +898,6 @@ static void arm_smmu_domain_free(struct iommu_domain 
*domain)
 * Free the domain resources. We assume that all devices have
 * already been detached.
 */
-   iommu_put_dma_cookie(domain);
arm_smmu_destroy_domain_context(domain);
kfree(smmu_domain);
 }
diff --git a/drivers/iommu/arm/arm-smmu/qcom_iommu.c 
b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
index 021cf8f65ffc..4b7eca5f5148 100644
--- a/drivers/iommu/arm/arm-smmu/qcom_iommu.c
+++ b/drivers/iommu/arm/arm-smmu/qcom_iommu.c
@@ -335,12 +335,6 @@ static struct iommu_domain 
*qcom_iommu_domain_alloc(unsigned type)
if (!qcom_domain)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&qcom_domain->domain)) {
-   kfree(qcom_domain);
-   return NULL;
-   }
-
	mutex_init(&qcom_domain->init_mutex);
	spin_lock_init(&qcom_domain->pgtbl_lock);
 
@@ -351,8 +345,6 @@ static void qcom_iommu_domain_free(struct iommu_domain 
*domain)
 {
struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
 
-   iommu_put_dma_cookie(domain);
-
if (qcom_domain->iommu) {
/*
 * NOTE: unmap can be called after client device is powered
-- 
2.25.1



[PATCH 05/23] iommu/exynos: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/exynos-iommu.c | 18 --
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
index d0fbf1d10e18..34085d069cda 100644
--- a/drivers/iommu/exynos-iommu.c
+++ b/drivers/iommu/exynos-iommu.c
@@ -735,20 +735,16 @@ static struct iommu_domain 
*exynos_iommu_domain_alloc(unsigned type)
/* Check if correct PTE offsets are initialized */
BUG_ON(PG_ENT_SHIFT < 0 || !dma_dev);
 
+   if (type != IOMMU_DOMAIN_DMA && type != IOMMU_DOMAIN_UNMANAGED)
+   return NULL;
+
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
return NULL;
 
-   if (type == IOMMU_DOMAIN_DMA) {
-   if (iommu_get_dma_cookie(&domain->domain) != 0)
-   goto err_pgtable;
-   } else if (type != IOMMU_DOMAIN_UNMANAGED) {
-   goto err_pgtable;
-   }
-
domain->pgtable = (sysmmu_pte_t *)__get_free_pages(GFP_KERNEL, 2);
if (!domain->pgtable)
-   goto err_dma_cookie;
+   goto err_pgtable;
 
domain->lv2entcnt = (short *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, 
1);
if (!domain->lv2entcnt)
@@ -779,9 +775,6 @@ static struct iommu_domain 
*exynos_iommu_domain_alloc(unsigned type)
free_pages((unsigned long)domain->lv2entcnt, 1);
 err_counter:
free_pages((unsigned long)domain->pgtable, 2);
-err_dma_cookie:
-   if (type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(&domain->domain);
 err_pgtable:
kfree(domain);
return NULL;
@@ -809,9 +802,6 @@ static void exynos_iommu_domain_free(struct iommu_domain 
*iommu_domain)
 
	spin_unlock_irqrestore(&domain->lock, flags);
 
-   if (iommu_domain->type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(iommu_domain);
-
dma_unmap_single(dma_dev, virt_to_phys(domain->pgtable), LV1TABLE_SIZE,
 DMA_TO_DEVICE);
 
-- 
2.25.1



[PATCH 04/23] iommu/vt-d: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/intel/iommu.c | 8 
 1 file changed, 8 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index dd22fc7d5176..e2add5a0caef 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1980,10 +1980,6 @@ static void domain_exit(struct dmar_domain *domain)
/* Remove associated devices and clear attached or cached domains */
domain_remove_dev_info(domain);
 
-   /* destroy iovas */
-   if (domain->domain.type == IOMMU_DOMAIN_DMA)
-   iommu_put_dma_cookie(&domain->domain);
-
if (domain->pgd) {
struct page *freelist;
 
@@ -4544,10 +4540,6 @@ static struct iommu_domain 
*intel_iommu_domain_alloc(unsigned type)
return NULL;
}
 
-   if (type == IOMMU_DOMAIN_DMA &&
-   iommu_get_dma_cookie(&dmar_domain->domain))
-   return NULL;
-
	domain = &dmar_domain->domain;
domain->geometry.aperture_start = 0;
domain->geometry.aperture_end   =
-- 
2.25.1



[PATCH 06/23] iommu/ipmmu-vmsa: Drop IOVA cookie management

2021-07-21 Thread Robin Murphy
The core code bakes its own cookies now.

Signed-off-by: Robin Murphy 
---
 drivers/iommu/ipmmu-vmsa.c | 27 ---
 1 file changed, 4 insertions(+), 23 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 51ea6f00db2f..31252268f0d0 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -564,10 +564,13 @@ static irqreturn_t ipmmu_irq(int irq, void *dev)
  * IOMMU Operations
  */
 
-static struct iommu_domain *__ipmmu_domain_alloc(unsigned type)
+static struct iommu_domain *ipmmu_domain_alloc(unsigned type)
 {
struct ipmmu_vmsa_domain *domain;
 
+   if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
+   return NULL;
+
domain = kzalloc(sizeof(*domain), GFP_KERNEL);
if (!domain)
return NULL;
@@ -577,27 +580,6 @@ static struct iommu_domain *__ipmmu_domain_alloc(unsigned 
type)
	return &domain->io_domain;
 }
 
-static struct iommu_domain *ipmmu_domain_alloc(unsigned type)
-{
-   struct iommu_domain *io_domain = NULL;
-
-   switch (type) {
-   case IOMMU_DOMAIN_UNMANAGED:
-   io_domain = __ipmmu_domain_alloc(type);
-   break;
-
-   case IOMMU_DOMAIN_DMA:
-   io_domain = __ipmmu_domain_alloc(type);
-   if (io_domain && iommu_get_dma_cookie(io_domain)) {
-   kfree(io_domain);
-   io_domain = NULL;
-   }
-   break;
-   }
-
-   return io_domain;
-}
-
 static void ipmmu_domain_free(struct iommu_domain *io_domain)
 {
struct ipmmu_vmsa_domain *domain = to_vmsa_domain(io_domain);
@@ -606,7 +588,6 @@ static void ipmmu_domain_free(struct iommu_domain 
*io_domain)
 * Free the domain resources. We assume that all devices have already
 * been detached.
 */
-   iommu_put_dma_cookie(io_domain);
ipmmu_domain_destroy_context(domain);
free_io_pgtable_ops(domain->iop);
kfree(domain);
-- 
2.25.1



[PATCH 19/23] iommu: Expose DMA domain strictness via sysfs

2021-07-21 Thread Robin Murphy
The sysfs interface for default domain types exists primarily so users
can choose the performance/security tradeoff relevant to their own
workload. As such, the choice between the policies for DMA domains fits
perfectly as an additional point on that scale - downgrading a
particular device from a strict default to non-strict may be enough to
let it reach the desired level of performance, while still retaining
more peace of mind than with a wide-open identity domain. Now that we've
abstracted non-strict mode as a distinct type of DMA domain, allow it to
be chosen through the user interface as well.
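
As a usage illustration (not part of the patch), a userspace helper could ask
for the lazy flavour much like it already selects "DMA" or "identity"; the
group number and helper name below are made up, and as with the existing
values the devices in the group must not be bound to a driver while the type
is being changed.

	/* Illustrative only: switch iommu group 0 to lazy invalidation */
	#include <stdio.h>

	static int set_group_lazy(void)
	{
		FILE *f = fopen("/sys/kernel/iommu_groups/0/type", "w");

		if (!f)
			return -1;
		fputs("DMA-FQ", f);
		return fclose(f);
	}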

Signed-off-by: Robin Murphy 
---
 Documentation/ABI/testing/sysfs-kernel-iommu_groups | 2 ++
 drivers/iommu/iommu.c   | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-kernel-iommu_groups 
b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
index eae2f1c1e11e..43ba764ba5b7 100644
--- a/Documentation/ABI/testing/sysfs-kernel-iommu_groups
+++ b/Documentation/ABI/testing/sysfs-kernel-iommu_groups
@@ -42,6 +42,8 @@ Description:  /sys/kernel/iommu_groups//type shows 
the type of default
  ==
DMA   All the DMA transactions from the device in this group
  are translated by the iommu.
+   DMA-FQ    As above, but using batched invalidation to lazily
+ remove translations after use.
identity  All the DMA transactions from the device in this group
  are not translated by the iommu.
auto  Change to the type the device was booted with.
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d7eaacae0944..d3b562a33ac4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3195,6 +3195,8 @@ static ssize_t iommu_group_store_type(struct iommu_group 
*group,
req_type = IOMMU_DOMAIN_IDENTITY;
else if (sysfs_streq(buf, "DMA"))
req_type = IOMMU_DOMAIN_DMA;
+   else if (sysfs_streq(buf, "DMA-FQ"))
+   req_type = IOMMU_DOMAIN_DMA_FQ;
else if (sysfs_streq(buf, "auto"))
req_type = 0;
else
-- 
2.25.1



[PATCH 20/23] iommu: Allow choosing DMA strictness at build time

2021-07-21 Thread Robin Murphy
To parallel the sysfs behaviour, extend the build-time configuration
for default domains to include the new type as well.

Signed-off-by: Robin Murphy 

---

This effectively replaces patch #3 of John's "iommu: Enhance IOMMU
default DMA mode build options" series.
---
 drivers/iommu/Kconfig | 48 +++
 drivers/iommu/iommu.c |  2 +-
 2 files changed, 41 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 07b7c25cbed8..e3f7990046ae 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -79,16 +79,48 @@ config IOMMU_DEBUGFS
  debug/iommu directory, and then populate a subdirectory with
  entries as required.
 
-config IOMMU_DEFAULT_PASSTHROUGH
-   bool "IOMMU passthrough by default"
+choice
+   prompt "Default IOMMU domain type"
depends on IOMMU_API
-   help
- Enable passthrough by default, removing the need to pass in
- iommu.passthrough=on or iommu=pt through command line. If this
- is enabled, you can still disable with iommu.passthrough=off
- or iommu=nopt depending on the architecture.
+   default IOMMU_DEFAULT_DMA_LAZY if INTEL_IOMMU || AMD_IOMMU
+   default IOMMU_DEFAULT_DMA_STRICT
 
- If unsure, say N here.
+config IOMMU_DEFAULT_DMA_STRICT
+   bool "Translated - Strict"
+   help
+ Trusted devices use translation to restrict their access to only
+ DMA-mapped pages, with strict TLB invalidation on unmap. Equivalent
+ to passing "iommu.passthrough=0 iommu.strict=1" on the command line.
+
+ Untrusted devices always use this mode, with an additional layer of
+ bounce-buffering such that they cannot gain access to any unrelated
+ data within a mapped page.
+
+config IOMMU_DEFAULT_DMA_LAZY
+   bool "Translated - Lazy"
+   help
+ Trusted devices use translation to restrict their access to only
+ DMA-mapped pages, but with "lazy" batched TLB invalidation. This
+ mode allows higher performance with some IOMMUs due to reduced TLB
+ flushing, but at the cost of reduced isolation since devices may be
+ able to access memory for some time after it has been unmapped.
+ Equivalent to passing "iommu.passthrough=0 iommu.strict=0" on the
+ command line.
+
+ If this mode is not supported by the IOMMU driver, the effective
+ runtime default will fall back to IOMMU_DEFAULT_DMA_STRICT.
+
+config IOMMU_DEFAULT_PASSTHROUGH
+   bool "Passthrough"
+   help
+ Trusted devices are identity-mapped, giving them unrestricted access
+ to memory with minimal performance overhead. Equivalent to passing
+ "iommu.passthrough=1" (historically "iommu=pt") on the command line.
+
+ If this mode is not supported by the IOMMU driver, the effective
+ runtime default will fall back to IOMMU_DEFAULT_DMA_STRICT.
+
+endchoice
 
 config OF_IOMMU
def_bool y
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index d3b562a33ac4..4fad6d427d9d 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -30,7 +30,7 @@ static struct kset *iommu_group_kset;
 static DEFINE_IDA(iommu_group_ida);
 
 static unsigned int iommu_def_domain_type __read_mostly;
-static bool iommu_dma_strict __read_mostly = true;
+static bool iommu_dma_strict __read_mostly = 
IS_ENABLED(CONFIG_IOMMU_DEFAULT_DMA_STRICT);
 static u32 iommu_cmd_line __read_mostly;
 
 struct iommu_group {
-- 
2.25.1



[PATCH 21/23] iommu/dma: Factor out flush queue init

2021-07-21 Thread Robin Murphy
Factor out flush queue setup from the initial domain init so that we
can potentially trigger it from sysfs later on in a domain's lifetime.
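
To illustrate the intent (the sysfs hook itself comes later in the series), a
future caller could upgrade a live default domain roughly as below; the
wrapper name is invented for the example and error handling is reduced to the
bare minimum.

	/* Sketch only: flip an existing domain to lazy mode at runtime */
	static int example_enable_lazy(struct iommu_domain *domain)
	{
		domain->type = IOMMU_DOMAIN_DMA_FQ;
		if (iommu_dma_init_fq(domain)) {
			domain->type = IOMMU_DOMAIN_DMA;	/* stay strict on failure */
			return -EINVAL;
		}
		return 0;
	}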

Signed-off-by: Robin Murphy 
---
 drivers/iommu/dma-iommu.c | 31 ---
 include/linux/dma-iommu.h |  9 ++---
 2 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index a114a7ad88ec..64e9eefce00e 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -310,6 +310,25 @@ static bool dev_is_untrusted(struct device *dev)
return dev_is_pci(dev) && to_pci_dev(dev)->untrusted;
 }
 
+int iommu_dma_init_fq(struct iommu_domain *domain)
+{
+   struct iommu_dma_cookie *cookie = domain->iova_cookie;
+
+   if (domain->type != IOMMU_DOMAIN_DMA_FQ || 
!domain->ops->flush_iotlb_all)
+   return -EINVAL;
+   if (cookie->fq_domain)
+   return 0;
+
+   if (init_iova_flush_queue(&cookie->iovad, iommu_dma_flush_iotlb_all,
+ iommu_dma_entry_dtor)) {
+   pr_warn("iova flush queue initialization failed\n");
+   domain->type = IOMMU_DOMAIN_DMA;
+   return -ENODEV;
+   }
+   cookie->fq_domain = domain;
+   return 0;
+}
+
 /**
  * iommu_dma_init_domain - Initialise a DMA mapping domain
  * @domain: IOMMU domain previously prepared by iommu_get_dma_cookie()
@@ -362,17 +381,7 @@ static int iommu_dma_init_domain(struct iommu_domain 
*domain, dma_addr_t base,
}
 
init_iova_domain(iovad, 1UL << order, base_pfn);
-
-   if (domain->type == IOMMU_DOMAIN_DMA_FQ && !cookie->fq_domain &&
-   domain->ops->flush_iotlb_all) {
-   if (init_iova_flush_queue(iovad, iommu_dma_flush_iotlb_all,
- iommu_dma_entry_dtor)) {
-   pr_warn("iova flush queue initialization failed\n");
-   domain->type = IOMMU_DOMAIN_DMA;
-   } else {
-   cookie->fq_domain = domain;
-   }
-   }
+   iommu_dma_init_fq(domain);
 
return iova_reserve_iommu_regions(dev, domain);
 }
diff --git a/include/linux/dma-iommu.h b/include/linux/dma-iommu.h
index 758ca4694257..81ab647f1618 100644
--- a/include/linux/dma-iommu.h
+++ b/include/linux/dma-iommu.h
@@ -20,6 +20,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain);
 
 /* Setup call for arch DMA mapping code */
 void iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 dma_limit);
+int iommu_dma_init_fq(struct iommu_domain *domain);
 
 /* The DMA API isn't _quite_ the whole story, though... */
 /*
@@ -37,9 +38,6 @@ void iommu_dma_compose_msi_msg(struct msi_desc *desc,
 
 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
 
-void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
-   struct iommu_domain *domain);
-
 extern bool iommu_dma_forcedac;
 
 #else /* CONFIG_IOMMU_DMA */
@@ -54,6 +52,11 @@ static inline void iommu_setup_dma_ops(struct device *dev, 
u64 dma_base,
 {
 }
 
+static inline int iommu_dma_init_fq(struct iommu_domain *domain)
+{
+   return -EINVAL;
+}
+
 static inline int iommu_get_dma_cookie(struct iommu_domain *domain)
 {
return -ENODEV;
-- 
2.25.1



[PATCH 00/23] iommu: Refactor DMA domain strictness

2021-07-21 Thread Robin Murphy
Hi all,

First off, yes, this conflicts with just about everything else
currently in-flight. Sorry about that. If it stands up to initial review
then I'll start giving some thought to how to fit everything together
(particularly John's cleanup of strictness defaults, which I'd be
inclined to fold into a v2 of this series).

Anyway, this is my take on promoting the strict vs. non-strict DMA
domain choice to distinct domain types, so that it can fit logically
into the existing sysfs and Kconfig controls. The first 13 patches are
effectively preparatory cleanup to reduce churn in the later changes,
but could be merged in their own right even if the rest is too
contentious. I ended up splitting patches #2-#11 by driver for ease of
review, since some of them are more than just trivial deletions, but
they could readily be squashed (even as far as with #1 and #12 too).

I'm slightly surprised at how straightforward it's turned out, but it
has survived some very basic smoke testing for arm-smmu using dmatest
on my Arm Juno board. Branch here for convenience:

https://gitlab.arm.com/linux-arm/linux-rm/-/tree/iommu/fq

Please let me know what you think!

Robin.


Robin Murphy (23):
  iommu: Pull IOVA cookie management into the core
  iommu/amd: Drop IOVA cookie management
  iommu/arm-smmu: Drop IOVA cookie management
  iommu/vt-d: Drop IOVA cookie management
  iommu/exynos: Drop IOVA cookie management
  iommu/ipmmu-vmsa: Drop IOVA cookie management
  iommu/mtk: Drop IOVA cookie management
  iommu/rockchip: Drop IOVA cookie management
  iommu/sprd: Drop IOVA cookie management
  iommu/sun50i: Drop IOVA cookie management
  iommu/virtio: Drop IOVA cookie management
  iommu/dma: Unexport IOVA cookie management
  iommu/dma: Remove redundant "!dev" checks
  iommu: Introduce explicit type for non-strict DMA domains
  iommu/amd: Prepare for multiple DMA domain types
  iommu/arm-smmu: Prepare for multiple DMA domain types
  iommu/vt-d: Prepare for multiple DMA domain types
  iommu: Express DMA strictness via the domain type
  iommu: Expose DMA domain strictness via sysfs
  iommu: Allow choosing DMA strictness at build time
  iommu/dma: Factor out flush queue init
  iommu: Allow enabling non-strict mode dynamically
  iommu/arm-smmu: Allow non-strict in pgtable_quirks interface

 .../ABI/testing/sysfs-kernel-iommu_groups |  2 +
 drivers/iommu/Kconfig | 48 +++
 drivers/iommu/amd/iommu.c | 21 +---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c   | 25 ++
 drivers/iommu/arm/arm-smmu/arm-smmu.c | 24 ++
 drivers/iommu/arm/arm-smmu/qcom_iommu.c   |  8 
 drivers/iommu/dma-iommu.c | 44 +
 drivers/iommu/exynos-iommu.c  | 18 ++-
 drivers/iommu/intel/iommu.c   | 23 +++--
 drivers/iommu/iommu.c | 44 +++--
 drivers/iommu/ipmmu-vmsa.c| 27 ++-
 drivers/iommu/mtk_iommu.c |  6 ---
 drivers/iommu/rockchip-iommu.c| 11 +
 drivers/iommu/sprd-iommu.c|  6 ---
 drivers/iommu/sun50i-iommu.c  | 12 +
 drivers/iommu/virtio-iommu.c  |  8 
 include/linux/dma-iommu.h |  9 ++--
 include/linux/iommu.h | 10 +++-
 18 files changed, 160 insertions(+), 186 deletions(-)

-- 
2.25.1



[PATCH 01/23] iommu: Pull IOVA cookie management into the core

2021-07-21 Thread Robin Murphy
Now that everyone has converged on iommu-dma for IOMMU_DOMAIN_DMA
support, we can abandon the notion of drivers being responsible for the
cookie type, and consolidate all the management into the core code.
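
Concretely, the boilerplate that the following per-driver patches delete is
the pattern sketched below (driver and structure names are placeholders);
after this change the core performs the equivalent get in
__iommu_domain_alloc() and the put in iommu_domain_free().

	/* Placeholder names - the shape of the code each driver can now drop */
	static struct iommu_domain *foo_domain_alloc(unsigned type)
	{
		struct foo_domain *fd = kzalloc(sizeof(*fd), GFP_KERNEL);

		if (!fd)
			return NULL;
		if (type == IOMMU_DOMAIN_DMA &&
		    iommu_get_dma_cookie(&fd->domain)) {	/* now done by the core */
			kfree(fd);
			return NULL;
		}
		return &fd->domain;
	}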

Signed-off-by: Robin Murphy 
---
 drivers/iommu/iommu.c | 7 +++
 include/linux/iommu.h | 3 ++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 5419c4b9f27a..0952de57c466 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -7,6 +7,7 @@
 #define pr_fmt(fmt)"iommu: " fmt
 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1941,6 +1942,11 @@ static struct iommu_domain *__iommu_domain_alloc(struct 
bus_type *bus,
/* Assume all sizes by default; the driver may override this later */
domain->pgsize_bitmap  = bus->iommu_ops->pgsize_bitmap;
 
+   /* Temporarily ignore -EEXIST while drivers still get their own cookies */
+   if (type == IOMMU_DOMAIN_DMA && iommu_get_dma_cookie(domain) == -ENOMEM) {
+   iommu_domain_free(domain);
+   domain = NULL;
+   }
return domain;
 }
 
@@ -1952,6 +1958,7 @@ EXPORT_SYMBOL_GPL(iommu_domain_alloc);
 
 void iommu_domain_free(struct iommu_domain *domain)
 {
+   iommu_put_dma_cookie(domain);
domain->ops->domain_free(domain);
 }
 EXPORT_SYMBOL_GPL(iommu_domain_free);
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..007662b65474 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -40,6 +40,7 @@ struct iommu_domain;
 struct notifier_block;
 struct iommu_sva;
 struct iommu_fault_event;
+struct iommu_dma_cookie;
 
 /* iommu fault flags */
 #define IOMMU_FAULT_READ   0x0
@@ -86,7 +87,7 @@ struct iommu_domain {
iommu_fault_handler_t handler;
void *handler_token;
struct iommu_domain_geometry geometry;
-   void *iova_cookie;
+   struct iommu_dma_cookie *iova_cookie;
 };
 
 enum iommu_cap {
-- 
2.25.1



Re: [PATCH] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permenent module

2021-07-21 Thread Bjorn Andersson
On Wed 21 Jul 13:00 CDT 2021, Saravana Kannan wrote:

> On Wed, Jul 21, 2021 at 10:24 AM John Stultz  wrote:
> >
> > On Wed, Jul 21, 2021 at 4:54 AM Greg Kroah-Hartman
> >  wrote:
> > >
> > > On Wed, Jul 07, 2021 at 04:53:20AM +, John Stultz wrote:
> > > > Allow the qcom_scm driver to be loadable as a permenent module.
> > >
> > > This feels like a regression, it should be allowed to be a module.
> >
> > I'm sorry, I'm not sure I'm following you, Greg.  This patch is trying
> > to enable the driver to be able to be loaded as a module.
> 
> I think the mix up might be that Greg mentally read "permanent module"
> as "builtin"?
> 
> "permanent module" is just something that can't be unloaded once it's
> loaded. It's not "builtin".
> 

Afaict there's nothing in this patch that makes it more or less
permanent. The module will be quite permanent (in practice) because
several other core modules reference symbols in the qcom_scm module.

But thanks to a previous patch, the qcom_scm device comes with
suppress_bind_attrs, which prevents the device from going away through a
simple unbind operation - something the API and client drivers aren't
designed to handle.

So, it would have been better in this case to omit the word "permanent"
from the commit message, but the change is good and I don't want to
rebase my tree to drop that word.

Thanks,
Bjorn

> -Saravana
> 
> >
> > > > This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> > > > ensure that drivers that call into the qcom_scm driver are
> > > > also built as modules. While not ideal in some cases its the
> > > > only safe way I can find to avoid build errors without having
> > > > those drivers select QCOM_SCM and have to force it on (as
> > > > QCOM_SCM=n can be valid for those drivers).
> > > >
> > > > Reviving this now that Saravana's fw_devlink defaults to on,
> > > > which should avoid loading troubles seen before.
> > >
> > > fw_devlink was supposed to resolve these issues and _allow_ code to be
> > > built as modules and not forced to be built into the kernel.
> >
> > Right. I'm re-submitting this patch to enable a driver to work as a
> > module, because earlier attempts to submit it ran into boot trouble
> > because fw_devlink wasn't yet enabled.
> >
> > I worry something in my description made it seem otherwise, so let me
> > know how you read it and I'll try to avoid such confusion in the
> > future.
> >
> > thanks
> > -john


Re: [PATCH v2 01/11] dt-bindings: memory: mediatek: Add mt8195 smi binding

2021-07-21 Thread Rob Herring
On Thu, 15 Jul 2021 20:11:59 +0800, Yong Wu wrote:
> Add mt8195 smi supporting in the bindings.
> 
> In mt8195, there are two smi-common HW, one is for vdo(video output),
> the other is for vpp(video processing pipe). They connect with different
> smi-larbs, then some setting(bus_sel) is different. Differentiate them
> with the compatible string.
> 
> Something like this:
> 
> IOMMU(VDO)  IOMMU(VPP)
>|   |
>  SMI_COMMON_VDO  SMI_COMMON_VPP
>   
>   |  |   ...  |  | ...
> larb0 larb2  ...larb1 larb3...
> 
> Signed-off-by: Yong Wu 
> ---
>  .../bindings/memory-controllers/mediatek,smi-common.yaml| 6 +-
>  .../bindings/memory-controllers/mediatek,smi-larb.yaml  | 3 +++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 

Acked-by: Rob Herring 


[PATCH v2] iommu/vt-d: Dump DMAR translation structure

2021-07-21 Thread Kyung Min Park
When a DMAR translation fault happens, the kernel prints a single-line
fault reason with the corresponding hexadecimal code defined in the Intel
VT-d specification.

Currently, when a user wants to debug a translation fault in detail,
debugfs is used to dump the dmar_translation_struct, which is not
available when the kernel fails to boot.

Dump the DMAR translation structure, walk the IO page table and print the
page table entries when the fault happens.

Signed-off-by: Kyung Min Park 
---
ChangeLog:
- Change from v1 to v2:
  1. fix compilation issue with different IOMMU config option.
---
 drivers/iommu/intel/dmar.c  |   8 ++-
 drivers/iommu/intel/iommu.c | 107 
 include/linux/dmar.h|   4 ++
 3 files changed, 116 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index d66f79acd14d..c74d97c5ec21 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -1948,19 +1948,21 @@ static int dmar_fault_do_one(struct intel_iommu *iommu, 
int type,
   source_id >> 8, PCI_SLOT(source_id & 0xFF),
   PCI_FUNC(source_id & 0xFF), addr >> 48,
   fault_reason, reason);
-   else if (pasid == INVALID_IOASID)
+   else if (pasid == INVALID_IOASID) {
pr_err("[%s NO_PASID] Request device [0x%02x:0x%02x.%d] fault 
addr 0x%llx [fault reason 0x%02x] %s\n",
   type ? "DMA Read" : "DMA Write",
   source_id >> 8, PCI_SLOT(source_id & 0xFF),
   PCI_FUNC(source_id & 0xFF), addr,
   fault_reason, reason);
-   else
+   dmar_fault_dump_ptes(iommu, source_id, addr, pasid);
+   } else {
pr_err("[%s PASID 0x%x] Request device [0x%02x:0x%02x.%d] fault 
addr 0x%llx [fault reason 0x%02x] %s\n",
   type ? "DMA Read" : "DMA Write", pasid,
   source_id >> 8, PCI_SLOT(source_id & 0xFF),
   PCI_FUNC(source_id & 0xFF), addr,
   fault_reason, reason);
-
+   dmar_fault_dump_ptes(iommu, source_id, addr, pasid);
+   }
return 0;
 }
 
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index a6a07d985709..859ec0832429 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -173,6 +173,8 @@ static struct intel_iommu **g_iommus;
 
 static void __init check_tylersburg_isoch(void);
 static int rwbf_quirk;
+static inline struct device_domain_info *
+dmar_search_domain_by_dev_info(int segment, int bus, int devfn);
 
 /*
  * set to 1 to panic kernel if can't successfully enable VT-d
@@ -998,6 +1000,111 @@ static void free_context_table(struct intel_iommu *iommu)
	spin_unlock_irqrestore(&iommu->lock, flags);
 }
 
+static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn, u8 bus, 
u8 devfn)
+{
+   struct dma_pte *parent, *pte = NULL;
+   struct device_domain_info *info;
+   struct dmar_domain *domain = NULL;
+   int offset, level;
+
+   info = dmar_search_domain_by_dev_info(iommu->segment, bus, devfn);
+   if (!info) {
+   pr_info("no iommu info for this device.\n");
+   return;
+   }
+
+   domain = info->domain;
+   if (!domain) {
+   pr_info("iommu domain does not exist for this device.\n");
+   return;
+   }
+   level = agaw_to_level(domain->agaw);
+   parent = domain->pgd;
+   if (!parent) {
+   pr_info("NULL pointer of page table entry.\n");
+   return;
+   }
+
+   while (1) {
+   offset = pfn_level_offset(pfn, level);
+   pte = &parent[offset];
+   if (!pte || (dma_pte_superpage(pte) || !dma_pte_present(pte))) {
+   pr_info("pte not present at level %d", level);
+   break;
+   }
+   pr_info("pte level: %d, pte value: %llx\n", level, pte->val);
+
+   if (level == 1)
+   break;
+   parent = phys_to_virt(dma_pte_addr(pte));
+   level--;
+   }
+}
+
+void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id, unsigned 
long long addr,
+ u32 pasid)
+{
+   struct pasid_dir_entry *dir, *pde;
+   struct pasid_entry *entries, *pte;
+   struct context_entry *ctx_entry;
+   struct root_entry *rt_entry;
+   u8 devfn = source_id & 0xff;
+   u8 bus = source_id >> 8;
+   int i, dir_index, index;
+
+   /* root entry dump */
+   rt_entry = &iommu->root_entry[bus];
+   if (!rt_entry) {
+   pr_info("root table entry is not present\n");
+   return;
+   }
+   if (!sm_supported(iommu))
+   pr_info("%s, root_entry[63:0]: %llx", iommu->name, 
rt_entry->lo);
+   else
+   pr_info("%s, root_entry[127:0]: %llx, 

Re: [PATCH v2 02/11] dt-bindings: memory: mediatek: Add mt8195 smi sub common

2021-07-21 Thread Rob Herring
On Thu, Jul 15, 2021 at 08:12:00PM +0800, Yong Wu wrote:
> Add the binding for smi-sub-common. The SMI block diagram like this:
> 
> IOMMU
>  |  |
>   smi-common
>   --
>   |    |
>  larb0   larb7   <-max is 8
> 
> The smi-common connects with smi-larb and IOMMU. The maximum larbs number
> that connects with a smi-common is 8. If the engines number is over 8,
> sometimes we use a smi-sub-common which is nearly same with smi-common.
> It supports up to 8 input and 1 output(smi-common has 2 output)
> 
> Something like:
> 
> IOMMU
>  |  |
>   smi-common
>   -
>   |  |  ...
> larb0  sub-common   ...   <-max is 8
>   ---
>||...   <-max is 8 too.
>  larb2 larb5
> 
> We don't need extra SW setting for smi-sub-common, only the sub-common has
> special clocks need to enable when the engines access dram.
> 
> If it is sub-common, it should have a "mediatek,smi" phandle to point to
> its smi-common. meanwhile, the sub-common only has one gals clock.
> 
> Signed-off-by: Yong Wu 
> ---
>  .../mediatek,smi-common.yaml  | 25 +++
>  1 file changed, 25 insertions(+)
> 
> diff --git 
> a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
>  
> b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
> index 602592b6c3f5..f79d99ebc440 100644
> --- 
> a/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
> +++ 
> b/Documentation/devicetree/bindings/memory-controllers/mediatek,smi-common.yaml
> @@ -38,6 +38,7 @@ properties:
>- mediatek,mt8192-smi-common
>- mediatek,mt8195-smi-common-vdo
>- mediatek,mt8195-smi-common-vpp
> +  - mediatek,mt8195-smi-sub-common
>  
>- description: for mt7623
>  items:
> @@ -67,6 +68,10 @@ properties:
>  minItems: 2
>  maxItems: 4
>  
> +  mediatek,smi:
> +$ref: /schemas/types.yaml#/definitions/phandle-array

If only a phandle, then s/phandle-array/phandle/

> +description: a phandle to the smi-common node above. Only for sub-common.
> +
>  required:
>- compatible
>- reg
> @@ -93,6 +98,26 @@ allOf:
>  - const: smi
>  - const: async
>  
> +  - if:  # only for sub common
> +  properties:
> +compatible:
> +  contains:
> +enum:
> +  - mediatek,mt8195-smi-sub-common
> +then:
> +  required:
> +- mediatek,smi
> +  properties:
> +clock:
> +  items:
> +minItems: 3
> +maxItems: 3
> +clock-names:
> +  items:
> +- const: apb
> +- const: smi
> +- const: gals0
> +
>- if:  # for gen2 HW that have gals
>properties:
>  compatible:
> -- 
> 2.18.0
> 
> 


Re: [PATCH] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permenent module

2021-07-21 Thread Saravana Kannan via iommu
On Wed, Jul 21, 2021 at 10:24 AM John Stultz  wrote:
>
> On Wed, Jul 21, 2021 at 4:54 AM Greg Kroah-Hartman
>  wrote:
> >
> > On Wed, Jul 07, 2021 at 04:53:20AM +, John Stultz wrote:
> > > Allow the qcom_scm driver to be loadable as a permenent module.
> >
> > This feels like a regression, it should be allowed to be a module.
>
> I'm sorry, I'm not sure I'm following you, Greg.  This patch is trying
> to enable the driver to be able to be loaded as a module.

I think the mix up might be that Greg mentally read "permanent module"
as "builtin"?

"permanent module" is just something that can't be unloaded once it's
loaded. It's not "builtin".
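
A minimal sketch of the distinction (illustrative, not from the original
mail): a loadable module that registers no module_exit() handler loads like
any other .ko but can never be unloaded afterwards - "permanent" without
being builtin.

	#include <linux/module.h>
	#include <linux/init.h>

	static int __init example_init(void)
	{
		return 0;
	}
	module_init(example_init);
	/* No module_exit(): rmmod refuses to remove the module, yet it is
	 * still a loadable module rather than builtin code. */
	MODULE_LICENSE("GPL");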

-Saravana

>
> > > This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> > > ensure that drivers that call into the qcom_scm driver are
> > > also built as modules. While not ideal in some cases its the
> > > only safe way I can find to avoid build errors without having
> > > those drivers select QCOM_SCM and have to force it on (as
> > > QCOM_SCM=n can be valid for those drivers).
> > >
> > > Reviving this now that Saravana's fw_devlink defaults to on,
> > > which should avoid loading troubles seen before.
> >
> > fw_devlink was supposed to resolve these issues and _allow_ code to be
> > built as modules and not forced to be built into the kernel.
>
> Right. I'm re-submitting this patch to enable a driver to work as a
> module, because earlier attempts to submit it ran into boot trouble
> because fw_devlink wasn't yet enabled.
>
> I worry something in my description made it seem otherwise, so let me
> know how you read it and I'll try to avoid such confusion in the
> future.
>
> thanks
> -john


Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses

2021-07-21 Thread Roman Skakun
> Fine with.  I've queued up the modified patch.

Good. Thanks!

>
> On Sat, Jul 17, 2021 at 11:39:21AM +0300, Roman Skakun wrote:
> > > We can merge this patch and create a new one for
> > > xen_swiotlb_free_coherent() later.
> > > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page
> > > was problematic.
> > >
> > > This patch is fine by me.
> >
> > Good. I'm agreed too. Waiting for Christoph.
>
> Fine with.  I've queued up the modified patch.



--
Best Regards, Roman.