Re: [PATCH v3 00/13] MT8195 SMI support

2021-08-18 Thread Yong Wu (吴勇)
On Wed, 2021-08-18 at 22:41 +0200, Krzysztof Kozlowski wrote:
> On 10/08/2021 10:08, Yong Wu wrote:
> > This patchset mainly adds SMI support for mt8195.
> > 
> > Compared with the previous version, this adds two new functions:
> > a) add smi sub-common support
> > b) add initial settings for smi-common and smi-larb.
> > 
> > Change note:
> > v3: 1) In the dt-binding:
> >        a. Change the mediatek,smi type from phandle-array to phandle,
> >           as suggested by Rob.
> >        b. Add a new bool property (mediatek,smi_sub_common)
> >           to indicate if this is a smi-sub-common.
> >     2) Change the clocks to use the bulk interface.
> >        Keep the smi-common's has_gals flag to be more strict.
> >     3) More comments about the larb initial setting.
> >     4) Add a maintainer patch.
> 
> The patchset looks good to me, but I now saw comments from Rob, so I
> expect a resend. Therefore there is also time for additional review -
> maybe continued by Ikjoon Jang?
> 
> If you send a v4 with the fixes rather soon and get an ack from Rob, I
> could still try to get it into the next merge window. After this weekend
> I won't be taking patches for this cycle and it will wait for the merge
> window to finish.

Hi Krzysztof,

Thanks very much for your information.

It looks like the time is too short to get Rob's ack by this weekend. I
will wait for one or two weeks to see if there are any other comments,
then resend.

> 
> 
> Best regards,
> Krzysztof


Re: [PATCH v3 00/13] MT8195 SMI support

2021-08-18 Thread Krzysztof Kozlowski
On 10/08/2021 10:08, Yong Wu wrote:
> This patchset mainly adds SMI support for mt8195.
> 
> Compared with the previous version, this adds two new functions:
> a) add smi sub-common support
> b) add initial settings for smi-common and smi-larb.
> 
> Change note:
> v3: 1) In the dt-binding:
>        a. Change the mediatek,smi type from phandle-array to phandle,
>           as suggested by Rob.
>        b. Add a new bool property (mediatek,smi_sub_common)
>           to indicate if this is a smi-sub-common.
>     2) Change the clocks to use the bulk interface.
>        Keep the smi-common's has_gals flag to be more strict.
>     3) More comments about the larb initial setting.
>     4) Add a maintainer patch.

The patchset looks good to me, but I now saw comments from Rob, so I
expect a resend. Therefore there is also time for additional review -
maybe continued by Ikjoon Jang?

If you send a v4 with the fixes rather soon and get an ack from Rob, I
could still try to get it into the next merge window. After this weekend
I won't be taking patches for this cycle and it will wait for the merge
window to finish.


Best regards,
Krzysztof


Re: [PATCH v4 00/24] iommu: Refactor DMA domain strictness

2021-08-18 Thread Robin Murphy

On 2021-08-18 12:29, Joerg Roedel wrote:

On Wed, Aug 11, 2021 at 01:21:14PM +0100, Robin Murphy wrote:

Robin Murphy (24):
   iommu: Pull IOVA cookie management into the core
   iommu/amd: Drop IOVA cookie management
   iommu/arm-smmu: Drop IOVA cookie management
   iommu/vt-d: Drop IOVA cookie management
   iommu/exynos: Drop IOVA cookie management
   iommu/ipmmu-vmsa: Drop IOVA cookie management
   iommu/mtk: Drop IOVA cookie management
   iommu/rockchip: Drop IOVA cookie management
   iommu/sprd: Drop IOVA cookie management
   iommu/sun50i: Drop IOVA cookie management
   iommu/virtio: Drop IOVA cookie management
   iommu/dma: Unexport IOVA cookie management
   iommu/dma: Remove redundant "!dev" checks
   iommu: Indicate queued flushes via gather data
   iommu/io-pgtable: Remove non-strict quirk
   iommu: Introduce explicit type for non-strict DMA domains
   iommu/amd: Prepare for multiple DMA domain types
   iommu/arm-smmu: Prepare for multiple DMA domain types
   iommu/vt-d: Prepare for multiple DMA domain types
   iommu: Express DMA strictness via the domain type
   iommu: Expose DMA domain strictness via sysfs
   iommu: Only log strictness for DMA domains
   iommu: Merge strictness and domain type configs
   iommu: Allow enabling non-strict mode dynamically


Applied all except patch 12. Please re-submit patch 12 together with the
APPLE DART fixups after v5.15-rc1 is out.


Brilliant, thanks for fixing that up!

Sven - I've prepared the follow-up patches already[1], so consider 
yourself off the hook (I see no point in trying to fix the nominal DART 
cookie bugs between now and then) :)


Cheers,
Robin.

[1] https://gitlab.arm.com/linux-arm/linux-rm/-/commits/iommu/fq-fixes


Re: [PATCH 7/7] hexagon: use the generic global coherent pool

2021-08-18 Thread 'Christoph Hellwig'
Thanks,

I've pulled the whole series into the dma-mapping for-next tree.


[PATCH 9/9] iommu/vt-d: Add present bit check in pasid entry setup helpers

2021-08-18 Thread Lu Baolu
From: Liu Yi L 

The helper functions should not modify the PASID entries which are still
in use. Add a check against the present bit.
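
The guarded pattern can be sketched in plain C (a userspace model with a
simplified one-bit layout, not the driver's struct pasid_entry):

  #include <stdio.h>
  #include <stdint.h>
  #include <errno.h>

  struct pasid_entry {          /* simplified stand-in; only bit 0 matters here */
          uint64_t val[8];
  };

  static int pasid_pte_is_present(const struct pasid_entry *pte)
  {
          return pte->val[0] & 1;        /* bit 0 == P (Present) */
  }

  /* Mirrors the new guard in the setup helpers: refuse to touch an entry
   * that is already live; the caller must tear it down first. */
  static int setup_entry(struct pasid_entry *pte)
  {
          if (pasid_pte_is_present(pte))
                  return -EBUSY;

          pte->val[0] = 1;               /* pretend we programmed it, set P */
          return 0;
  }

  int main(void)
  {
          struct pasid_entry pte = { { 0 } };

          printf("first setup:  %d\n", setup_entry(&pte));   /* 0 */
          printf("second setup: %d\n", setup_entry(&pte));   /* -EBUSY (-16) */
          return 0;
  }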

Signed-off-by: Liu Yi L 
Link: https://lore.kernel.org/r/20210817042425.1784279-1-yi.l@intel.com
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/pasid.c | 16 
 1 file changed, 16 insertions(+)

diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index eda599e70a68..07c390aed1fe 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -540,6 +540,10 @@ void intel_pasid_tear_down_entry(struct intel_iommu 
*iommu, struct device *dev,
devtlb_invalidation_with_pasid(iommu, dev, pasid);
 }
 
+/*
+ * This function flushes cache for a newly setup pasid table entry.
+ * Caller of it should not modify the in-use pasid table entries.
+ */
 static void pasid_flush_caches(struct intel_iommu *iommu,
struct pasid_entry *pte,
   u32 pasid, u16 did)
@@ -591,6 +595,10 @@ int intel_pasid_setup_first_level(struct intel_iommu 
*iommu,
if (WARN_ON(!pte))
return -EINVAL;
 
+   /* Caller must ensure PASID entry is not in use. */
+   if (pasid_pte_is_present(pte))
+   return -EBUSY;
+
pasid_clear_entry(pte);
 
/* Setup the first level page table pointer: */
@@ -690,6 +698,10 @@ int intel_pasid_setup_second_level(struct intel_iommu 
*iommu,
return -ENODEV;
}
 
+   /* Caller must ensure PASID entry is not in use. */
+   if (pasid_pte_is_present(pte))
+   return -EBUSY;
+
pasid_clear_entry(pte);
pasid_set_domain_id(pte, did);
pasid_set_slptr(pte, pgd_val);
@@ -729,6 +741,10 @@ int intel_pasid_setup_pass_through(struct intel_iommu 
*iommu,
return -ENODEV;
}
 
+   /* Caller must ensure PASID entry is not in use. */
+   if (pasid_pte_is_present(pte))
+   return -EBUSY;
+
pasid_clear_entry(pte);
pasid_set_domain_id(pte, did);
pasid_set_address_width(pte, iommu->agaw);
-- 
2.25.1



[PATCH 8/9] iommu/vt-d: Use pasid_pte_is_present() helper function

2021-08-18 Thread Lu Baolu
From: Liu Yi L 

Use the pasid_pte_is_present() helper for the present bit check in
intel_pasid_tear_down_entry().

Signed-off-by: Liu Yi L 
Link: https://lore.kernel.org/r/20210817042425.1784279-1-yi.l@intel.com
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/pasid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 9ec374e17469..eda599e70a68 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -517,7 +517,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, 
struct device *dev,
if (WARN_ON(!pte))
return;
 
-   if (!(pte->val[0] & PASID_PTE_PRESENT))
+   if (!pasid_pte_is_present(pte))
return;
 
did = pasid_get_domain_id(pte);
-- 
2.25.1



[PATCH 7/9] iommu/vt-d: Drop the kernel doc annotation

2021-08-18 Thread Lu Baolu
From: Andy Shevchenko 

The kernel-doc validator is unhappy with the following:

.../perf.c:16: warning: Function parameter or member 'latency_lock' not 
described in 'DEFINE_SPINLOCK'
.../perf.c:16: warning: expecting prototype for perf.c(). Prototype was for 
DEFINE_SPINLOCK() instead

Drop the kernel-doc annotation since the top comment is not in the required format.

Signed-off-by: Andy Shevchenko 
Link: 
https://lore.kernel.org/r/20210729163538.40101-1-andriy.shevche...@linux.intel.com
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/perf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel/perf.c b/drivers/iommu/intel/perf.c
index 73b7ec705552..0e8e03252d92 100644
--- a/drivers/iommu/intel/perf.c
+++ b/drivers/iommu/intel/perf.c
@@ -1,5 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
-/**
+/*
  * perf.c - performance monitor
  *
  * Copyright (C) 2021 Intel Corporation
-- 
2.25.1



[PATCH 6/9] iommu/vt-d: Allow devices to have more than 32 outstanding PRs

2021-08-18 Thread Lu Baolu
The minimum per-IOMMU PRQ queue size is one 4K page, which already holds
more entries than the hardcoded limit of 32 in the current VT-d code. Some
devices can support up to 512 outstanding PRQs but are underutilized by
this limit of 32. While 32 gives some rough fairness when multiple devices
share the same IOMMU PRQ queue, it is far from optimal for customized use
cases. This extends the per-IOMMU PRQ queue size to four 4K pages and lets
the devices have as many outstanding page requests as they can.
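
A quick userspace check of the queue-size arithmetic behind this change (a
standalone sketch, not driver code; it only evaluates the macros from the
hunk below):

  #include <stdio.h>

  /* Macros as added to include/linux/intel-svm.h by this patch. Each VT-d
   * page-request descriptor is 32 bytes, hence the ">> 5". */
  #define PRQ_ORDER      2
  #define PRQ_RING_MASK  ((0x1000 << PRQ_ORDER) - 0x20)
  #define PRQ_DEPTH      ((0x1000 << PRQ_ORDER) >> 5)

  int main(void)
  {
          /* Old layout: a single 4K page (order 0) already holds 128
           * descriptors, yet devices were capped at 32 outstanding PRs. */
          printf("order 0 depth: %d\n", (0x1000 << 0) >> 5);      /* 128 */

          /* New layout: four 4K pages hold 512 descriptors, and
           * pci_enable_pri() is now passed PRQ_DEPTH instead of 32. */
          printf("order %d depth: %d\n", PRQ_ORDER, PRQ_DEPTH);   /* 512 */
          printf("ring mask: 0x%x\n", PRQ_RING_MASK);             /* 0x3fe0 */
          return 0;
  }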

Signed-off-by: Jacob Pan 
Signed-off-by: Lu Baolu 
Link: 
https://lore.kernel.org/r/20210720013856.4143880-1-baolu...@linux.intel.com
---
 include/linux/intel-svm.h   | 5 +
 drivers/iommu/intel/iommu.c | 3 ++-
 drivers/iommu/intel/svm.c   | 4 
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/intel-svm.h b/include/linux/intel-svm.h
index 10fa80eef13a..57cceecbe37f 100644
--- a/include/linux/intel-svm.h
+++ b/include/linux/intel-svm.h
@@ -14,6 +14,11 @@
 #define SVM_REQ_EXEC   (1<<1)
 #define SVM_REQ_PRIV   (1<<0)
 
+/* Page Request Queue depth */
+#define PRQ_ORDER  2
+#define PRQ_RING_MASK  ((0x1000 << PRQ_ORDER) - 0x20)
+#define PRQ_DEPTH  ((0x1000 << PRQ_ORDER) >> 5)
+
 /*
  * The SVM_FLAG_SUPERVISOR_MODE flag requests a PASID which can be used only
  * for access to kernel addresses. No IOTLB flushes are automatically done
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 8d4d49e12c51..d75f59ae28e6 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -33,6 +33,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -1541,7 +1542,7 @@ static void iommu_enable_dev_iotlb(struct 
device_domain_info *info)
 
if (info->pri_supported &&
(info->pasid_enabled ? pci_prg_resp_pasid_required(pdev) : 1)  &&
-   !pci_reset_pri(pdev) && !pci_enable_pri(pdev, 32))
+   !pci_reset_pri(pdev) && !pci_enable_pri(pdev, PRQ_DEPTH))
info->pri_enabled = 1;
 #endif
if (info->ats_supported && pci_ats_page_aligned(pdev) &&
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 4b9b3f35ba0e..2014fe8695ac 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -31,8 +31,6 @@ static irqreturn_t prq_event_thread(int irq, void *d);
 static void intel_svm_drain_prq(struct device *dev, u32 pasid);
 #define to_intel_svm_dev(handle) container_of(handle, struct intel_svm_dev, 
sva)
 
-#define PRQ_ORDER 0
-
 static DEFINE_XARRAY_ALLOC(pasid_private_array);
 static int pasid_private_add(ioasid_t pasid, void *priv)
 {
@@ -725,8 +723,6 @@ struct page_req_dsc {
u64 priv_data[2];
 };
 
-#define PRQ_RING_MASK  ((0x1000 << PRQ_ORDER) - 0x20)
-
 static bool is_canonical_address(u64 addr)
 {
int shift = 64 - (__VIRTUAL_MASK_SHIFT + 1);
-- 
2.25.1



[PATCH 5/9] iommu/vt-d: Preset A/D bits for user space DMA usage

2021-08-18 Thread Lu Baolu
We preset the access and dirty bits for IOVA over first-level usage only
for kernel DMA (i.e., when the domain type is IOMMU_DOMAIN_DMA). We should
also preset the first-level A/D bits for user space DMA usage: the A/D bit
memory writes are unnecessary there as well, so we should avoid them to
minimize the overhead.
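
The effect on the first-level PTE attribute word can be modeled with a few
lines of plain C (a standalone sketch; the bit positions are illustrative
placeholders, not the DMA_FL_PTE_* values from intel-iommu.h):

  #include <stdio.h>
  #include <stdint.h>

  /* Placeholder bits -- the real values live in include/linux/intel-iommu.h
   * and differ from these. */
  #define PTE_READ        (1ull << 0)
  #define PTE_WRITE       (1ull << 1)
  #define FL_PTE_ACCESS   (1ull << 5)
  #define FL_PTE_DIRTY    (1ull << 6)

  /* After the patch: A is always preset for first-level mappings and D is
   * preset whenever the mapping is writable, no longer only for
   * IOMMU_DOMAIN_DMA domains, so user space DMA mappings also avoid the
   * hardware A/D memory writes. */
  static uint64_t fl_attr(uint64_t prot)
  {
          uint64_t attr = prot & (PTE_READ | PTE_WRITE);

          attr |= FL_PTE_ACCESS;
          if (prot & PTE_WRITE)
                  attr |= FL_PTE_DIRTY;
          return attr;
  }

  int main(void)
  {
          printf("read-only : 0x%llx\n",
                 (unsigned long long)fl_attr(PTE_READ));
          printf("read-write: 0x%llx\n",
                 (unsigned long long)fl_attr(PTE_READ | PTE_WRITE));
          return 0;
  }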

Suggested-by: Sanjay Kumar 
Signed-off-by: Lu Baolu 
Link: 
https://lore.kernel.org/r/20210720013856.4143880-1-baolu...@linux.intel.com
---
 drivers/iommu/intel/iommu.c | 10 +++---
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 8c9a9ed7dc09..8d4d49e12c51 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -2334,13 +2334,9 @@ __domain_mapping(struct dmar_domain *domain, unsigned 
long iov_pfn,
attr = prot & (DMA_PTE_READ | DMA_PTE_WRITE | DMA_PTE_SNP);
attr |= DMA_FL_PTE_PRESENT;
if (domain_use_first_level(domain)) {
-   attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US;
-
-   if (iommu_is_dma_domain(&domain->domain)) {
-   attr |= DMA_FL_PTE_ACCESS;
-   if (prot & DMA_PTE_WRITE)
-   attr |= DMA_FL_PTE_DIRTY;
-   }
+   attr |= DMA_FL_PTE_XD | DMA_FL_PTE_US | DMA_FL_PTE_ACCESS;
+   if (prot & DMA_PTE_WRITE)
+   attr |= DMA_FL_PTE_DIRTY;
}
 
pteval = ((phys_addr_t)phys_pfn << VTD_PAGE_SHIFT) | attr;
-- 
2.25.1



[PATCH 4/9] iommu/vt-d: Enable Intel IOMMU scalable mode by default

2021-08-18 Thread Lu Baolu
Commit 8950dcd83ae7d ("iommu/vt-d: Leave scalable mode default off") left
the scalable mode off by default, and end users could turn it on with
"intel_iommu=sm_on". Since then, support for using the Intel IOMMU scalable
mode for kernel DMA, user-level device access and Shared Virtual Address
has been enabled. This enables the scalable mode by default if the hardware
advertises support, and adds the kernel options "intel_iommu=sm_on/sm_off"
so end users can still configure it through the kernel parameters.
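
A minimal userspace model of the option handling this patch adds (a sketch
only; the real parsing lives in intel_iommu_setup() and the default comes
from the new Kconfig option):

  #include <stdio.h>
  #include <string.h>

  /* Assume CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON=y, the new default. */
  static int intel_iommu_sm = 1;

  static void parse_intel_iommu_option(const char *str)
  {
          if (!strncmp(str, "sm_on", 5))
                  intel_iommu_sm = 1;             /* force scalable mode on */
          else if (!strncmp(str, "sm_off", 6))
                  intel_iommu_sm = 0;             /* disallow scalable mode */
  }

  int main(void)
  {
          parse_intel_iommu_option("sm_off");
          printf("after sm_off: %d\n", intel_iommu_sm);    /* 0 */
          parse_intel_iommu_option("sm_on");
          printf("after sm_on:  %d\n", intel_iommu_sm);    /* 1 */
          return 0;
  }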

Suggested-by: Ashok Raj 
Suggested-by: Sanjay Kumar 
Signed-off-by: Lu Baolu 
Cc: Kevin Tian 
Link: 
https://lore.kernel.org/r/20210720013856.4143880-1-baolu...@linux.intel.com
---
 Documentation/admin-guide/kernel-parameters.txt | 11 ++-
 drivers/iommu/intel/iommu.c |  5 -
 drivers/iommu/intel/Kconfig |  1 +
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt 
b/Documentation/admin-guide/kernel-parameters.txt
index 19192b39952a..87d46cb76121 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1946,11 +1946,12 @@
By default, super page will be supported if Intel IOMMU
has the capability. With this option, super page will
not be supported.
-   sm_on [Default Off]
-   By default, scalable mode will be disabled even if the
-   hardware advertises that it has support for the scalable
-   mode translation. With this option set, scalable mode
-   will be used on hardware which claims to support it.
+   sm_on
+   Enable the Intel IOMMU scalable mode if the hardware
+   advertises that it has support for the scalable mode
+   translation.
+   sm_off
+   Disallow use of the Intel IOMMU scalable mode.
tboot_noforce [Default Off]
Do not force the Intel IOMMU enabled under tboot.
By default, tboot will force Intel IOMMU on, which
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index acb91ddf32d0..8c9a9ed7dc09 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -432,8 +432,11 @@ static int __init intel_iommu_setup(char *str)
pr_info("Disable supported super page\n");
intel_iommu_superpage = 0;
} else if (!strncmp(str, "sm_on", 5)) {
-   pr_info("Intel-IOMMU: scalable mode supported\n");
+   pr_info("Enable scalable mode if hardware supports\n");
intel_iommu_sm = 1;
+   } else if (!strncmp(str, "sm_off", 6)) {
+   pr_info("Scalable mode is disallowed\n");
+   intel_iommu_sm = 0;
} else if (!strncmp(str, "tboot_noforce", 13)) {
pr_info("Intel-IOMMU: not forcing on after tboot. This 
could expose security risk for tboot\n");
intel_iommu_tboot_noforce = 1;
diff --git a/drivers/iommu/intel/Kconfig b/drivers/iommu/intel/Kconfig
index c1a92c3049d0..0ddb77115be7 100644
--- a/drivers/iommu/intel/Kconfig
+++ b/drivers/iommu/intel/Kconfig
@@ -84,6 +84,7 @@ config INTEL_IOMMU_FLOPPY_WA
 
 config INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
bool "Enable Intel IOMMU scalable mode by default"
+   default y
help
  Selecting this option will enable by default the scalable mode if
  hardware presents the capability. The scalable mode is defined in
-- 
2.25.1



[PATCH 2/9] iommu/vt-d: Remove unnecessary oom message

2021-08-18 Thread Lu Baolu
From: Zhen Lei 

Fix the following scripts/checkpatch.pl warning:
WARNING: Possible unnecessary 'out of memory' message

Removing it can help us save a bit of memory.

Signed-off-by: Zhen Lei 
Link: 
https://lore.kernel.org/r/20210609124937.14260-1-thunder.leiz...@huawei.com
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/dmar.c  | 2 --
 drivers/iommu/intel/iommu.c | 6 +-
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
index d66f79acd14d..0ec5514c9980 100644
--- a/drivers/iommu/intel/dmar.c
+++ b/drivers/iommu/intel/dmar.c
@@ -149,8 +149,6 @@ dmar_alloc_pci_notify_info(struct pci_dev *dev, unsigned 
long event)
} else {
info = kzalloc(size, GFP_KERNEL);
if (!info) {
-   pr_warn("Out of memory when allocating notify_info "
-   "for %s.\n", pci_name(dev));
if (dmar_dev_scope_status == 0)
dmar_dev_scope_status = -ENOMEM;
return NULL;
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 8fc46c9d6b96..36ce79c55766 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1779,11 +1779,8 @@ static int iommu_init_domains(struct intel_iommu *iommu)
spin_lock_init(&iommu->lock);
 
iommu->domain_ids = kcalloc(nlongs, sizeof(unsigned long), GFP_KERNEL);
-   if (!iommu->domain_ids) {
-   pr_err("%s: Allocating domain id array failed\n",
-  iommu->name);
+   if (!iommu->domain_ids)
return -ENOMEM;
-   }
 
size = (ALIGN(ndomains, 256) >> 8) * sizeof(struct dmar_domain **);
iommu->domains = kzalloc(size, GFP_KERNEL);
@@ -3224,7 +3221,6 @@ static int __init init_dmars(void)
g_iommus = kcalloc(g_num_of_iommus, sizeof(struct intel_iommu *),
GFP_KERNEL);
if (!g_iommus) {
-   pr_err("Allocating global iommu array failed\n");
ret = -ENOMEM;
goto error;
}
-- 
2.25.1



[PATCH 3/9] iommu/vt-d: Refactor Kconfig a bit

2021-08-18 Thread Lu Baolu
Put all sub-options inside an "if INTEL_IOMMU" block so that they don't
each need to depend on INTEL_IOMMU explicitly. Use IS_ENABLED() instead of
#ifdef as well.
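
For reference, the IS_ENABLED() trick that replaces the #ifdef pairs can be
reproduced in a standalone C file (a simplified reimplementation of the
helpers in include/linux/kconfig.h; the kernel's version also handles =m):

  #include <stdio.h>

  /* CONFIG_* symbols are defined to 1 when enabled and left undefined
   * otherwise; IS_ENABLED() folds either case to a constant 1 or 0. */
  #define __ARG_PLACEHOLDER_1                     0,
  #define __take_second_arg(__ignored, val, ...)  val
  #define ____is_defined(arg1_or_junk)  __take_second_arg(arg1_or_junk 1, 0)
  #define ___is_defined(val)            ____is_defined(__ARG_PLACEHOLDER_##val)
  #define __is_defined(x)               ___is_defined(x)
  #define IS_ENABLED(option)            __is_defined(option)

  #define CONFIG_INTEL_IOMMU_DEFAULT_ON 1
  /* CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON deliberately left unset */

  int dmar_disabled = !IS_ENABLED(CONFIG_INTEL_IOMMU_DEFAULT_ON);
  int intel_iommu_sm = IS_ENABLED(CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON);

  int main(void)
  {
          printf("dmar_disabled  = %d\n", dmar_disabled);   /* 0 */
          printf("intel_iommu_sm = %d\n", intel_iommu_sm);  /* 0 */
          return 0;
  }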

Signed-off-by: Lu Baolu 
Link: 
https://lore.kernel.org/r/20210720013856.4143880-1-baolu...@linux.intel.com
---
 drivers/iommu/intel/iommu.c | 13 ++---
 drivers/iommu/intel/Kconfig | 18 ++
 2 files changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 36ce79c55766..acb91ddf32d0 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -327,17 +327,8 @@ static int intel_iommu_attach_device(struct iommu_domain 
*domain,
 static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
dma_addr_t iova);
 
-#ifdef CONFIG_INTEL_IOMMU_DEFAULT_ON
-int dmar_disabled = 0;
-#else
-int dmar_disabled = 1;
-#endif /* CONFIG_INTEL_IOMMU_DEFAULT_ON */
-
-#ifdef CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
-int intel_iommu_sm = 1;
-#else
-int intel_iommu_sm;
-#endif /* CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON */
+int dmar_disabled = !IS_ENABLED(CONFIG_INTEL_IOMMU_DEFAULT_ON);
+int intel_iommu_sm = IS_ENABLED(CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON);
 
 int intel_iommu_enabled = 0;
 EXPORT_SYMBOL_GPL(intel_iommu_enabled);
diff --git a/drivers/iommu/intel/Kconfig b/drivers/iommu/intel/Kconfig
index 43ebd8af11c5..c1a92c3049d0 100644
--- a/drivers/iommu/intel/Kconfig
+++ b/drivers/iommu/intel/Kconfig
@@ -25,9 +25,11 @@ config INTEL_IOMMU
  and include PCI device scope covered by these DMA
  remapping devices.
 
+if INTEL_IOMMU
+
 config INTEL_IOMMU_DEBUGFS
bool "Export Intel IOMMU internals in Debugfs"
-   depends on INTEL_IOMMU && IOMMU_DEBUGFS
+   depends on IOMMU_DEBUGFS
select DMAR_PERF
help
  !!!WARNING!!!
@@ -41,7 +43,7 @@ config INTEL_IOMMU_DEBUGFS
 
 config INTEL_IOMMU_SVM
bool "Support for Shared Virtual Memory with Intel IOMMU"
-   depends on INTEL_IOMMU && X86_64
+   depends on X86_64
select PCI_PASID
select PCI_PRI
select MMU_NOTIFIER
@@ -53,9 +55,8 @@ config INTEL_IOMMU_SVM
  means of a Process Address Space ID (PASID).
 
 config INTEL_IOMMU_DEFAULT_ON
-   def_bool y
-   prompt "Enable Intel DMA Remapping Devices by default"
-   depends on INTEL_IOMMU
+   bool "Enable Intel DMA Remapping Devices by default"
+   default y
help
  Selecting this option will enable a DMAR device at boot time if
  one is found. If this option is not selected, DMAR support can
@@ -63,7 +64,7 @@ config INTEL_IOMMU_DEFAULT_ON
 
 config INTEL_IOMMU_BROKEN_GFX_WA
bool "Workaround broken graphics drivers (going away soon)"
-   depends on INTEL_IOMMU && BROKEN && X86
+   depends on BROKEN && X86
help
  Current Graphics drivers tend to use physical address
  for DMA and avoid using DMA APIs. Setting this config
@@ -74,7 +75,7 @@ config INTEL_IOMMU_BROKEN_GFX_WA
 
 config INTEL_IOMMU_FLOPPY_WA
def_bool y
-   depends on INTEL_IOMMU && X86
+   depends on X86
help
  Floppy disk drivers are known to bypass DMA API calls
  thereby failing to work when IOMMU is enabled. This
@@ -83,7 +84,6 @@ config INTEL_IOMMU_FLOPPY_WA
 
 config INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
bool "Enable Intel IOMMU scalable mode by default"
-   depends on INTEL_IOMMU
help
  Selecting this option will enable by default the scalable mode if
  hardware presents the capability. The scalable mode is defined in
@@ -92,3 +92,5 @@ config INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
  is not selected, scalable mode support could also be enabled by
  passing intel_iommu=sm_on to the kernel. If not sure, please use
  the default value.
+
+endif # INTEL_IOMMU
-- 
2.25.1



[PATCH 1/9] iommu/vt-d: Update the virtual command related registers

2021-08-18 Thread Lu Baolu
The VT-d spec Revision 3.3 updated the virtual command registers, virtual
command opcode B register, virtual command response register and virtual
command capability register (Section 10.4.43, 10.4.44, 10.4.45, 10.4.46).
This updates the virtual command interface implementation in the Intel
IOMMU driver accordingly.

Fixes: 24f27d32ab6b7 ("iommu/vt-d: Enlightened PASID allocation")
Signed-off-by: Lu Baolu 
Cc: Ashok Raj 
Cc: Sanjay Kumar 
Cc: Kevin Tian 
Link: 
https://lore.kernel.org/r/20210713042649.3547403-1-baolu...@linux.intel.com
---
 include/linux/intel-iommu.h |  6 +++---
 drivers/iommu/intel/pasid.h | 10 +-
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index d0fa0b31994d..05a65eb155f7 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -124,9 +124,9 @@
 #define DMAR_MTRR_PHYSMASK8_REG 0x208
 #define DMAR_MTRR_PHYSBASE9_REG 0x210
 #define DMAR_MTRR_PHYSMASK9_REG 0x218
-#define DMAR_VCCAP_REG 0xe00 /* Virtual command capability register */
-#define DMAR_VCMD_REG  0xe10 /* Virtual command register */
-#define DMAR_VCRSP_REG 0xe20 /* Virtual command response register */
+#define DMAR_VCCAP_REG 0xe30 /* Virtual command capability register */
+#define DMAR_VCMD_REG  0xe00 /* Virtual command register */
+#define DMAR_VCRSP_REG 0xe10 /* Virtual command response register */
 
 #define DMAR_IQER_REG_IQEI(reg)FIELD_GET(GENMASK_ULL(3, 0), 
reg)
 #define DMAR_IQER_REG_ITESID(reg)  FIELD_GET(GENMASK_ULL(47, 32), reg)
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index c11bc8b833b8..d5552e2c160d 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -28,12 +28,12 @@
 #define VCMD_CMD_ALLOC 0x1
 #define VCMD_CMD_FREE  0x2
 #define VCMD_VRSP_IP   0x1
-#define VCMD_VRSP_SC(e)(((e) >> 1) & 0x3)
+#define VCMD_VRSP_SC(e)(((e) & 0xff) >> 1)
 #define VCMD_VRSP_SC_SUCCESS   0
-#define VCMD_VRSP_SC_NO_PASID_AVAIL2
-#define VCMD_VRSP_SC_INVALID_PASID 2
-#define VCMD_VRSP_RESULT_PASID(e)  (((e) >> 8) & 0xf)
-#define VCMD_CMD_OPERAND(e)((e) << 8)
+#define VCMD_VRSP_SC_NO_PASID_AVAIL16
+#define VCMD_VRSP_SC_INVALID_PASID 16
+#define VCMD_VRSP_RESULT_PASID(e)  (((e) >> 16) & 0xf)
+#define VCMD_CMD_OPERAND(e)((e) << 16)
 /*
  * Domain ID reserved for pasid entries programmed for first-level
  * only and pass-through transfer modes.
-- 
2.25.1



[PATCH 0/9] [PULL REQUEST] Intel IOMMU Updates for Linux v5.15

2021-08-18 Thread Lu Baolu
Hi Joerg,

The patches queued in this series are for v5.15.
It includes:

 - Update the virtual command related registers
 - Enable Intel IOMMU scalable mode by default
 - Preset A/D bits for user space DMA usage
 - Allow devices to have more than 32 outstanding PRs 
 - Various cleanups

Please pull.

Best regards,
Baolu

Andy Shevchenko (1):
  iommu/vt-d: Drop the kernel doc annotation

Liu Yi L (2):
  iommu/vt-d: Use pasid_pte_is_present() helper function
  iommu/vt-d: Add present bit check in pasid entry setup helpers

Lu Baolu (5):
  iommu/vt-d: Update the virtual command related registers
  iommu/vt-d: Refactor Kconfig a bit
  iommu/vt-d: Enable Intel IOMMU scalable mode by default
  iommu/vt-d: Preset A/D bits for user space DMA usage
  iommu/vt-d: Allow devices to have more than 32 outstanding PRs

Zhen Lei (1):
  iommu/vt-d: Remove unnecessary oom message

 .../admin-guide/kernel-parameters.txt | 11 +++---
 include/linux/intel-iommu.h   |  6 +--
 include/linux/intel-svm.h |  5 +++
 drivers/iommu/intel/pasid.h   | 10 ++---
 drivers/iommu/intel/dmar.c|  2 -
 drivers/iommu/intel/iommu.c   | 37 ++-
 drivers/iommu/intel/pasid.c   | 18 -
 drivers/iommu/intel/perf.c|  2 +-
 drivers/iommu/intel/svm.c |  4 --
 drivers/iommu/intel/Kconfig   | 19 ++
 10 files changed, 60 insertions(+), 54 deletions(-)

-- 
2.25.1



Re: [GIT PULL] iommu/arm-smmu: Updates for 5.15

2021-08-18 Thread Joerg Roedel
On Wed, Aug 18, 2021 at 01:17:29PM +0100, Will Deacon wrote:
> Ok, I won't hold my breath!

Compile tests went fine and the kernel booted fine on my workstation, so
I pushed things out. Let's see whether testing in linux-next breaks
anything.

Regards,

Joerg


Re: [GIT PULL] iommu/arm-smmu: Updates for 5.15

2021-08-18 Thread Will Deacon
On Wed, Aug 18, 2021 at 02:08:25PM +0200, Joerg Roedel wrote:
> On Fri, Aug 13, 2021 at 05:47:35PM +0100, Will Deacon wrote:
> > This applies cleanly against iommu/next, but I suspect it will conflict
> > with Robin's series on the list. Please shout if you need anything from
> > me to help with that (e.g. rebase, checking a merge conflict).
> 
> For now there were at least no conflicts which git couldn't resolve
> automatically, but the compile tests are still running :)

Ok, I won't hold my breath!

> > The following changes since commit ff1176468d368232b684f75e82563369208bc371:
> > 
> >   Linux 5.14-rc3 (2021-07-25 15:35:14 -0700)
> > 
> > are available in the Git repository at:
> > 
> >   git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git 
> > tags/arm-smmu-updates
> 
> So this is pulled now, thanks.

Cheers,

Will


[PATCH v11 12/12] Documentation: Add documentation for VDUSE

2021-08-18 Thread Xie Yongji
VDUSE (vDPA Device in Userspace) is a framework to support
implementing software-emulated vDPA devices in userspace. This
document is intended to clarify the VDUSE design and usage.

Signed-off-by: Xie Yongji 
---
 Documentation/userspace-api/index.rst |   1 +
 Documentation/userspace-api/vduse.rst | 233 ++
 2 files changed, 234 insertions(+)
 create mode 100644 Documentation/userspace-api/vduse.rst

diff --git a/Documentation/userspace-api/index.rst 
b/Documentation/userspace-api/index.rst
index 0b5eefed027e..c432be070f67 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -27,6 +27,7 @@ place where this information is gathered.
iommu
media/index
sysfs-platform_profile
+   vduse
 
 .. only::  subproject and html
 
diff --git a/Documentation/userspace-api/vduse.rst 
b/Documentation/userspace-api/vduse.rst
new file mode 100644
index ..42ef59ea5314
--- /dev/null
+++ b/Documentation/userspace-api/vduse.rst
@@ -0,0 +1,233 @@
+==
+VDUSE - "vDPA Device in Userspace"
+==
+
+vDPA (virtio data path acceleration) device is a device that uses a
+datapath which complies with the virtio specifications with vendor
+specific control path. vDPA devices can be both physically located on
+the hardware or emulated by software. VDUSE is a framework that makes it
+possible to implement software-emulated vDPA devices in userspace. And
+to make the device emulation more secure, the emulated vDPA device's
+control path is handled in the kernel and only the data path is
+implemented in the userspace.
+
+Note that only virtio block device is supported by VDUSE framework now,
+which can reduce security risks when the userspace process that implements
+the data path is run by an unprivileged user. The support for other device
+types can be added after the security issue of corresponding device driver
+is clarified or fixed in the future.
+
+Create/Destroy VDUSE devices
+
+
+VDUSE devices are created as follows:
+
+1. Create a new VDUSE instance with ioctl(VDUSE_CREATE_DEV) on
+   /dev/vduse/control.
+
+2. Setup each virtqueue with ioctl(VDUSE_VQ_SETUP) on /dev/vduse/$NAME.
+
+3. Begin processing VDUSE messages from /dev/vduse/$NAME. The first
+   messages will arrive while attaching the VDUSE instance to vDPA bus.
+
+4. Send the VDPA_CMD_DEV_NEW netlink message to attach the VDUSE
+   instance to vDPA bus.
+
+VDUSE devices are destroyed as follows:
+
+1. Send the VDPA_CMD_DEV_DEL netlink message to detach the VDUSE
+   instance from vDPA bus.
+
+2. Close the file descriptor referring to /dev/vduse/$NAME.
+
+3. Destroy the VDUSE instance with ioctl(VDUSE_DESTROY_DEV) on
+   /dev/vduse/control.
+
+The netlink messages can be sent via vdpa tool in iproute2 or use the
+below sample codes:
+
+.. code-block:: c
+
+   static int netlink_add_vduse(const char *name, enum vdpa_command cmd)
+   {
+   struct nl_sock *nlsock;
+   struct nl_msg *msg;
+   int famid;
+
+   nlsock = nl_socket_alloc();
+   if (!nlsock)
+   return -ENOMEM;
+
+   if (genl_connect(nlsock))
+   goto free_sock;
+
+   famid = genl_ctrl_resolve(nlsock, VDPA_GENL_NAME);
+   if (famid < 0)
+   goto close_sock;
+
+   msg = nlmsg_alloc();
+   if (!msg)
+   goto close_sock;
+
+   if (!genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, famid, 0, 0, 
cmd, 0))
+   goto nla_put_failure;
+
+   NLA_PUT_STRING(msg, VDPA_ATTR_DEV_NAME, name);
+   if (cmd == VDPA_CMD_DEV_NEW)
+   NLA_PUT_STRING(msg, VDPA_ATTR_MGMTDEV_DEV_NAME, 
"vduse");
+
+   if (nl_send_sync(nlsock, msg))
+   goto close_sock;
+
+   nl_close(nlsock);
+   nl_socket_free(nlsock);
+
+   return 0;
+   nla_put_failure:
+   nlmsg_free(msg);
+   close_sock:
+   nl_close(nlsock);
+   free_sock:
+   nl_socket_free(nlsock);
+   return -1;
+   }
+
+How VDUSE works
+---
+
+As mentioned above, a VDUSE device is created by ioctl(VDUSE_CREATE_DEV) on
+/dev/vduse/control. With this ioctl, userspace can specify some basic 
configuration
+such as device name (uniquely identify a VDUSE device), virtio features, virtio
+configuration space, the number of virtqueues and so on for this emulated 
device.
+Then a char device interface (/dev/vduse/$NAME) is exported to userspace for 
device
+emulation. Userspace can use the VDUSE_VQ_SETUP ioctl on /dev/vduse/$NAME to
+add per-virtqueue configuration such as the max size of virtqueue to the 
device.
+
+After the initialization, the VDUSE device can be attached to vDPA bus via
+the 

[PATCH v11 11/12] vduse: Introduce VDUSE - vDPA Device in Userspace

2021-08-18 Thread Xie Yongji
This VDUSE driver enables implementing software-emulated vDPA
devices in userspace. The vDPA device is created by
ioctl(VDUSE_CREATE_DEV) on /dev/vduse/control. Then a char device
interface (/dev/vduse/$NAME) is exported to userspace for device
emulation.

In order to make the device emulation more secure, the device's
control path is handled in the kernel. A message mechanism is introduced
to forward some dataplane-related control messages to userspace.

In the data path, the DMA buffer is mapped into the userspace address
space in different ways depending on the vDPA bus to which the vDPA
device is attached. In the virtio-vdpa case, the MMU-based software IOTLB
is used to achieve that. In the vhost-vdpa case, the DMA buffer resides
in a userspace memory region which can be shared with the VDUSE userspace
process by transferring the shmfd.

For more details on VDUSE design and usage, please see the follow-on
Documentation commit.
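
A hypothetical userspace sketch of the creation flow described above (the
/dev/vduse paths and the VDUSE_CREATE_DEV/VDUSE_VQ_SETUP ioctls come from
this patch, but struct fake_dev_config and the placeholder ioctl number
below are stand-ins, not the real ABI from include/uapi/linux/vduse.h):

  #include <stdio.h>
  #include <string.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>

  #define VDUSE_CREATE_DEV 0           /* placeholder only; see vduse.h */

  struct fake_dev_config {             /* stand-in for struct vduse_dev_config */
          char name[256];
  };

  int main(void)
  {
          struct fake_dev_config cfg = { .name = "vduse-blk0" };
          char path[300];
          int ctrl, dev;

          ctrl = open("/dev/vduse/control", O_RDWR);
          if (ctrl < 0)
                  return 1;

          /* 1. Create the VDUSE instance. */
          if (ioctl(ctrl, VDUSE_CREATE_DEV, &cfg) < 0)
                  return 1;

          /* 2./3. Open the per-device node, set up each virtqueue with
           * VDUSE_VQ_SETUP and start reading VDUSE messages from it. */
          snprintf(path, sizeof(path), "/dev/vduse/%s", cfg.name);
          dev = open(path, O_RDWR);

          /* 4. Attach to the vDPA bus with a VDPA_CMD_DEV_NEW netlink
           * message (see the libnl sample in the documentation patch). */

          if (dev >= 0)
                  close(dev);
          close(ctrl);
          return 0;
  }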

Signed-off-by: Xie Yongji 
---
 Documentation/userspace-api/ioctl/ioctl-number.rst |1 +
 drivers/vdpa/Kconfig   |   10 +
 drivers/vdpa/Makefile  |1 +
 drivers/vdpa/vdpa_user/Makefile|5 +
 drivers/vdpa/vdpa_user/vduse_dev.c | 1611 
 include/uapi/linux/vduse.h |  304 
 6 files changed, 1932 insertions(+)
 create mode 100644 drivers/vdpa/vdpa_user/Makefile
 create mode 100644 drivers/vdpa/vdpa_user/vduse_dev.c
 create mode 100644 include/uapi/linux/vduse.h

diff --git a/Documentation/userspace-api/ioctl/ioctl-number.rst 
b/Documentation/userspace-api/ioctl/ioctl-number.rst
index 1409e40e6345..293ca3aef358 100644
--- a/Documentation/userspace-api/ioctl/ioctl-number.rst
+++ b/Documentation/userspace-api/ioctl/ioctl-number.rst
@@ -300,6 +300,7 @@ Code  Seq#Include File  
 Comments
 'z'   10-4F  drivers/s390/crypto/zcrypt_api.hconflict!
 '|'   00-7F  linux/media.h
 0x80  00-1F  linux/fb.h
+0x81  00-1F  linux/vduse.h
 0x89  00-06  arch/x86/include/asm/sockios.h
 0x89  0B-DF  linux/sockios.h
 0x89  E0-EF  linux/sockios.h 
SIOCPROTOPRIVATE range
diff --git a/drivers/vdpa/Kconfig b/drivers/vdpa/Kconfig
index a503c1b2bfd9..6e23bce6433a 100644
--- a/drivers/vdpa/Kconfig
+++ b/drivers/vdpa/Kconfig
@@ -33,6 +33,16 @@ config VDPA_SIM_BLOCK
  vDPA block device simulator which terminates IO request in a
  memory buffer.
 
+config VDPA_USER
+   tristate "VDUSE (vDPA Device in Userspace) support"
+   depends on EVENTFD && MMU && HAS_DMA
+   select DMA_OPS
+   select VHOST_IOTLB
+   select IOMMU_IOVA
+   help
+ With VDUSE it is possible to emulate a vDPA Device
+ in a userspace program.
+
 config IFCVF
tristate "Intel IFC VF vDPA driver"
depends on PCI_MSI
diff --git a/drivers/vdpa/Makefile b/drivers/vdpa/Makefile
index 67fe7f3d6943..f02ebed33f19 100644
--- a/drivers/vdpa/Makefile
+++ b/drivers/vdpa/Makefile
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_VDPA) += vdpa.o
 obj-$(CONFIG_VDPA_SIM) += vdpa_sim/
+obj-$(CONFIG_VDPA_USER) += vdpa_user/
 obj-$(CONFIG_IFCVF)+= ifcvf/
 obj-$(CONFIG_MLX5_VDPA) += mlx5/
 obj-$(CONFIG_VP_VDPA)+= virtio_pci/
diff --git a/drivers/vdpa/vdpa_user/Makefile b/drivers/vdpa/vdpa_user/Makefile
new file mode 100644
index ..260e0b26af99
--- /dev/null
+++ b/drivers/vdpa/vdpa_user/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+vduse-y := vduse_dev.o iova_domain.o
+
+obj-$(CONFIG_VDPA_USER) += vduse.o
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c 
b/drivers/vdpa/vdpa_user/vduse_dev.c
new file mode 100644
index ..ce081b7895d5
--- /dev/null
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -0,0 +1,1611 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VDUSE: vDPA Device in Userspace
+ *
+ * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights 
reserved.
+ *
+ * Author: Xie Yongji 
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "iova_domain.h"
+
+#define DRV_AUTHOR   "Yongji Xie "
+#define DRV_DESC "vDPA Device in Userspace"
+#define DRV_LICENSE  "GPL v2"
+
+#define VDUSE_DEV_MAX (1U << MINORBITS)
+#define VDUSE_BOUNCE_SIZE (64 * 1024 * 1024)
+#define VDUSE_IOVA_SIZE (128 * 1024 * 1024)
+#define VDUSE_MSG_DEFAULT_TIMEOUT 30
+
+struct vduse_virtqueue {
+   u16 index;
+   u16 num_max;
+   u32 num;
+   u64 desc_addr;
+   u64 driver_addr;
+   u64 device_addr;
+   struct vdpa_vq_state state;
+   bool ready;
+   bool kicked;
+   spinlock_t kick_lock;
+   spinlock_t irq_lock;
+   struct eventfd_ctx *kickfd;
+   struct vdpa_callback cb;

Re: [GIT PULL] iommu/arm-smmu: Updates for 5.15

2021-08-18 Thread Joerg Roedel
On Fri, Aug 13, 2021 at 05:47:35PM +0100, Will Deacon wrote:
> This applies cleanly against iommu/next, but I suspect it will conflict
> with Robin's series on the list. Please shout if you need anything from
> me to help with that (e.g. rebase, checking a merge conflict).

For now there were at least no conflicts which git couldn't resolve
automatically, but the compile tests are still running :)

> The following changes since commit ff1176468d368232b684f75e82563369208bc371:
> 
>   Linux 5.14-rc3 (2021-07-25 15:35:14 -0700)
> 
> are available in the Git repository at:
> 
>   git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git 
> tags/arm-smmu-updates

So this is pulled now, thanks.



[PATCH v11 09/12] vdpa: Support transferring virtual addressing during DMA mapping

2021-08-18 Thread Xie Yongji
This patch introduces an attribute for the vDPA device to indicate
whether virtual addresses can be used. If the vDPA device driver sets
it, the vhost-vdpa bus driver will not pin user pages and will transfer
the userspace virtual address instead of the physical address during
DMA mapping. The corresponding vma->vm_file and offset will also be
passed as an opaque pointer.
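
An illustrative model of what one IOTLB mapping carries after this change
(plain userspace C, not kernel code; struct map_file stands in for the
real struct vdpa_map_file, whose opaque pointer holds a struct file * and
an offset):

  #include <stdio.h>
  #include <stdint.h>

  struct map_file {             /* stand-in for struct vdpa_map_file */
          const char *file;     /* a struct file * in the kernel */
          uint64_t offset;
  };

  struct iotlb_map {
          uint64_t iova, size;
          uint64_t addr;        /* pinned PA normally, VA when use_va is set */
          void *opaque;         /* NULL, or a struct map_file * with use_va */
  };

  int main(void)
  {
          struct map_file mf = { "memfd:guest-ram", 0x10000 };
          struct iotlb_map va_map = { 0x0, 0x4000, 0x7f0000001000ull, &mf };
          struct iotlb_map pa_map = { 0x0, 0x4000, 0x12345000ull, NULL };
          struct map_file *ctx = va_map.opaque;

          printf("use_va map: addr=0x%llx backed by %s +0x%llx\n",
                 (unsigned long long)va_map.addr, ctx->file,
                 (unsigned long long)ctx->offset);
          printf("pinned map: addr=0x%llx (physical)\n",
                 (unsigned long long)pa_map.addr);
          return 0;
  }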

Suggested-by: Jason Wang 
Signed-off-by: Xie Yongji 
Acked-by: Jason Wang 
---
 drivers/vdpa/ifcvf/ifcvf_main.c   |  2 +-
 drivers/vdpa/mlx5/net/mlx5_vnet.c |  2 +-
 drivers/vdpa/vdpa.c   |  9 +++-
 drivers/vdpa/vdpa_sim/vdpa_sim.c  |  2 +-
 drivers/vdpa/virtio_pci/vp_vdpa.c |  2 +-
 drivers/vhost/vdpa.c  | 99 ++-
 include/linux/vdpa.h  | 20 ++--
 7 files changed, 117 insertions(+), 19 deletions(-)

diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index ef10404e9deb..ff434070be94 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -492,7 +492,7 @@ static int ifcvf_probe(struct pci_dev *pdev, const struct 
pci_device_id *id)
}
 
adapter = vdpa_alloc_device(struct ifcvf_adapter, vdpa,
-   dev, &ifc_vdpa_ops, NULL);
+   dev, &ifc_vdpa_ops, NULL, false);
if (IS_ERR(adapter)) {
IFCVF_ERR(pdev, "Failed to allocate vDPA structure");
return PTR_ERR(adapter);
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c 
b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index b60c398a8d86..5ccc430906a0 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -2040,7 +2040,7 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev 
*v_mdev, const char *name)
max_vqs = min_t(u32, max_vqs, MLX5_MAX_SUPPORTED_VQS);
 
ndev = vdpa_alloc_device(struct mlx5_vdpa_net, mvdev.vdev, 
mdev->device, &mlx5_vdpa_ops,
-name);
+name, false);
if (IS_ERR(ndev))
return PTR_ERR(ndev);
 
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index d77d59811389..41377df674d5 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -71,6 +71,7 @@ static void vdpa_release_dev(struct device *d)
  * @config: the bus operations that is supported by this device
  * @size: size of the parent structure that contains private data
  * @name: name of the vdpa device; optional.
+ * @use_va: indicate whether virtual address must be used by this device
  *
  * Driver should use vdpa_alloc_device() wrapper macro instead of
  * using this directly.
@@ -80,7 +81,8 @@ static void vdpa_release_dev(struct device *d)
  */
 struct vdpa_device *__vdpa_alloc_device(struct device *parent,
const struct vdpa_config_ops *config,
-   size_t size, const char *name)
+   size_t size, const char *name,
+   bool use_va)
 {
struct vdpa_device *vdev;
int err = -EINVAL;
@@ -91,6 +93,10 @@ struct vdpa_device *__vdpa_alloc_device(struct device 
*parent,
if (!!config->dma_map != !!config->dma_unmap)
goto err;
 
+   /* It should only work for the device that use on-chip IOMMU */
+   if (use_va && !(config->dma_map || config->set_map))
+   goto err;
+
err = -ENOMEM;
vdev = kzalloc(size, GFP_KERNEL);
if (!vdev)
@@ -106,6 +112,7 @@ struct vdpa_device *__vdpa_alloc_device(struct device 
*parent,
vdev->index = err;
vdev->config = config;
vdev->features_valid = false;
+   vdev->use_va = use_va;
 
if (name)
err = dev_set_name(&vdev->dev, "%s", name);
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 827d613c4eb6..37070c3ec396 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -250,7 +250,7 @@ struct vdpasim *vdpasim_create(struct vdpasim_dev_attr 
*dev_attr)
ops = &vdpasim_config_ops;
 
vdpasim = vdpa_alloc_device(struct vdpasim, vdpa, NULL, ops,
-   dev_attr->name);
+   dev_attr->name, false);
if (IS_ERR(vdpasim)) {
ret = PTR_ERR(vdpasim);
goto err_alloc;
diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c 
b/drivers/vdpa/virtio_pci/vp_vdpa.c
index aec14f8c20fc..ff8d54606ea8 100644
--- a/drivers/vdpa/virtio_pci/vp_vdpa.c
+++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
@@ -435,7 +435,7 @@ static int vp_vdpa_probe(struct pci_dev *pdev, const struct 
pci_device_id *id)
return ret;
 
vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
-   dev, _vdpa_ops, NULL);
+   dev, _vdpa_ops, NULL, false);
if (IS_ERR(vp_vdpa)) {
dev_err(dev, "vp_vdpa: 

[PATCH v11 10/12] vduse: Implement an MMU-based software IOTLB

2021-08-18 Thread Xie Yongji
This implements an MMU-based software IOTLB to support dynamically
mapping kernel DMA buffers into userspace. The basic idea behind it is
treating the MMU (VA->PA) as an IOMMU (IOVA->PA). The software IOTLB
sets up an MMU mapping instead of an IOMMU mapping for the DMA transfer
so that the userspace process is able to use its virtual address to
access the DMA buffer in the kernel.

To avoid security issues, a bounce-buffering mechanism is introduced to
prevent userspace from directly accessing the original buffer, which may
contain other kernel data. During mapping and unmapping, the software
IOTLB copies the data between the original buffer and the bounce buffer,
depending on the direction of the transfer. And the bounce-buffer
addresses are mapped into the user address space instead of the original
ones.
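
The bounce-buffering idea can be sketched in a few lines of standalone C
(a toy model assuming a single 4K bounce page; the real driver tracks one
bounce page per IOVA page and maps those pages into the user process):

  #include <stdio.h>
  #include <string.h>

  enum dma_dir { TO_DEVICE, FROM_DEVICE };

  static char bounce[4096];    /* the page the userspace "device" sees */

  static void bounce_map(char *orig, size_t len, enum dma_dir dir)
  {
          if (dir == TO_DEVICE)          /* driver -> device: copy in on map */
                  memcpy(bounce, orig, len);
  }

  static void bounce_unmap(char *orig, size_t len, enum dma_dir dir)
  {
          if (dir == FROM_DEVICE)        /* device -> driver: copy back on unmap */
                  memcpy(orig, bounce, len);
  }

  int main(void)
  {
          char req[64] = "read sector 42";
          char resp[64] = { 0 };

          bounce_map(req, sizeof(req), TO_DEVICE);
          /* ...the userspace device reads the request from the bounce page... */

          bounce_map(resp, sizeof(resp), FROM_DEVICE);
          strcpy(bounce, "sector 42 data");   /* the device writes its reply */
          bounce_unmap(resp, sizeof(resp), FROM_DEVICE);

          printf("response: %s\n", resp);
          return 0;
  }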

Signed-off-by: Xie Yongji 
Acked-by: Jason Wang 
---
 drivers/vdpa/vdpa_user/iova_domain.c | 545 +++
 drivers/vdpa/vdpa_user/iova_domain.h |  73 +
 2 files changed, 618 insertions(+)
 create mode 100644 drivers/vdpa/vdpa_user/iova_domain.c
 create mode 100644 drivers/vdpa/vdpa_user/iova_domain.h

diff --git a/drivers/vdpa/vdpa_user/iova_domain.c 
b/drivers/vdpa/vdpa_user/iova_domain.c
new file mode 100644
index ..1daae2608860
--- /dev/null
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -0,0 +1,545 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * MMU-based software IOTLB.
+ *
+ * Copyright (C) 2020-2021 Bytedance Inc. and/or its affiliates. All rights 
reserved.
+ *
+ * Author: Xie Yongji 
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "iova_domain.h"
+
+static int vduse_iotlb_add_range(struct vduse_iova_domain *domain,
+u64 start, u64 last,
+u64 addr, unsigned int perm,
+struct file *file, u64 offset)
+{
+   struct vdpa_map_file *map_file;
+   int ret;
+
+   map_file = kmalloc(sizeof(*map_file), GFP_ATOMIC);
+   if (!map_file)
+   return -ENOMEM;
+
+   map_file->file = get_file(file);
+   map_file->offset = offset;
+
+   ret = vhost_iotlb_add_range_ctx(domain->iotlb, start, last,
+   addr, perm, map_file);
+   if (ret) {
+   fput(map_file->file);
+   kfree(map_file);
+   return ret;
+   }
+   return 0;
+}
+
+static void vduse_iotlb_del_range(struct vduse_iova_domain *domain,
+ u64 start, u64 last)
+{
+   struct vdpa_map_file *map_file;
+   struct vhost_iotlb_map *map;
+
+   while ((map = vhost_iotlb_itree_first(domain->iotlb, start, last))) {
+   map_file = (struct vdpa_map_file *)map->opaque;
+   fput(map_file->file);
+   kfree(map_file);
+   vhost_iotlb_map_free(domain->iotlb, map);
+   }
+}
+
+int vduse_domain_set_map(struct vduse_iova_domain *domain,
+struct vhost_iotlb *iotlb)
+{
+   struct vdpa_map_file *map_file;
+   struct vhost_iotlb_map *map;
+   u64 start = 0ULL, last = ULLONG_MAX;
+   int ret;
+
+   spin_lock(&domain->iotlb_lock);
+   vduse_iotlb_del_range(domain, start, last);
+
+   for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
+map = vhost_iotlb_itree_next(map, start, last)) {
+   map_file = (struct vdpa_map_file *)map->opaque;
+   ret = vduse_iotlb_add_range(domain, map->start, map->last,
+   map->addr, map->perm,
+   map_file->file,
+   map_file->offset);
+   if (ret)
+   goto err;
+   }
+   spin_unlock(&domain->iotlb_lock);
+
+   return 0;
+err:
+   vduse_iotlb_del_range(domain, start, last);
+   spin_unlock(&domain->iotlb_lock);
+   return ret;
+}
+
+void vduse_domain_clear_map(struct vduse_iova_domain *domain,
+   struct vhost_iotlb *iotlb)
+{
+   struct vhost_iotlb_map *map;
+   u64 start = 0ULL, last = ULLONG_MAX;
+
+   spin_lock(&domain->iotlb_lock);
+   for (map = vhost_iotlb_itree_first(iotlb, start, last); map;
+map = vhost_iotlb_itree_next(map, start, last)) {
+   vduse_iotlb_del_range(domain, map->start, map->last);
+   }
+   spin_unlock(&domain->iotlb_lock);
+}
+
+static int vduse_domain_map_bounce_page(struct vduse_iova_domain *domain,
+u64 iova, u64 size, u64 paddr)
+{
+   struct vduse_bounce_map *map;
+   u64 last = iova + size - 1;
+
+   while (iova <= last) {
+   map = &domain->bounce_maps[iova >> PAGE_SHIFT];
+   if (!map->bounce_page) {
+   map->bounce_page = alloc_page(GFP_ATOMIC);
+   if (!map->bounce_page)
+   return -ENOMEM;
+

[PATCH v11 08/12] vdpa: factor out vhost_vdpa_pa_map() and vhost_vdpa_pa_unmap()

2021-08-18 Thread Xie Yongji
The upcoming patch is going to support VA mapping/unmapping. So let's
factor out the logic of PA mapping/unmapping first to make the code
more readable.

Suggested-by: Jason Wang 
Signed-off-by: Xie Yongji 
Acked-by: Jason Wang 
---
 drivers/vhost/vdpa.c | 55 +---
 1 file changed, 35 insertions(+), 20 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 87ab104792fb..80c7dd168b57 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -507,7 +507,7 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
return r;
 }
 
-static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
+static void vhost_vdpa_pa_unmap(struct vhost_vdpa *v, u64 start, u64 last)
 {
struct vhost_dev *dev = &v->vdev;
struct vhost_iotlb *iotlb = dev->iotlb;
@@ -529,6 +529,11 @@ static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, 
u64 start, u64 last)
}
 }
 
+static void vhost_vdpa_iotlb_unmap(struct vhost_vdpa *v, u64 start, u64 last)
+{
+   return vhost_vdpa_pa_unmap(v, start, last);
+}
+
 static void vhost_vdpa_iotlb_free(struct vhost_vdpa *v)
 {
struct vhost_dev *dev = >vdev;
@@ -609,38 +614,28 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v, u64 
iova, u64 size)
}
 }
 
-static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
-  struct vhost_iotlb_msg *msg)
+static int vhost_vdpa_pa_map(struct vhost_vdpa *v,
+u64 iova, u64 size, u64 uaddr, u32 perm)
 {
struct vhost_dev *dev = &v->vdev;
-   struct vhost_iotlb *iotlb = dev->iotlb;
struct page **page_list;
unsigned long list_size = PAGE_SIZE / sizeof(struct page *);
unsigned int gup_flags = FOLL_LONGTERM;
unsigned long npages, cur_base, map_pfn, last_pfn = 0;
unsigned long lock_limit, sz2pin, nchunks, i;
-   u64 iova = msg->iova;
+   u64 start = iova;
long pinned;
int ret = 0;
 
-   if (msg->iova < v->range.first || !msg->size ||
-   msg->iova > U64_MAX - msg->size + 1 ||
-   msg->iova + msg->size - 1 > v->range.last)
-   return -EINVAL;
-
-   if (vhost_iotlb_itree_first(iotlb, msg->iova,
-   msg->iova + msg->size - 1))
-   return -EEXIST;
-
/* Limit the use of memory for bookkeeping */
page_list = (struct page **) __get_free_page(GFP_KERNEL);
if (!page_list)
return -ENOMEM;
 
-   if (msg->perm & VHOST_ACCESS_WO)
+   if (perm & VHOST_ACCESS_WO)
gup_flags |= FOLL_WRITE;
 
-   npages = PAGE_ALIGN(msg->size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
+   npages = PAGE_ALIGN(size + (iova & ~PAGE_MASK)) >> PAGE_SHIFT;
if (!npages) {
ret = -EINVAL;
goto free;
@@ -654,7 +649,7 @@ static int vhost_vdpa_process_iotlb_update(struct 
vhost_vdpa *v,
goto unlock;
}
 
-   cur_base = msg->uaddr & PAGE_MASK;
+   cur_base = uaddr & PAGE_MASK;
iova &= PAGE_MASK;
nchunks = 0;
 
@@ -685,7 +680,7 @@ static int vhost_vdpa_process_iotlb_update(struct 
vhost_vdpa *v,
csize = (last_pfn - map_pfn + 1) << PAGE_SHIFT;
ret = vhost_vdpa_map(v, iova, csize,
 map_pfn << PAGE_SHIFT,
-msg->perm);
+perm);
if (ret) {
/*
 * Unpin the pages that are left 
unmapped
@@ -714,7 +709,7 @@ static int vhost_vdpa_process_iotlb_update(struct 
vhost_vdpa *v,
 
/* Pin the rest chunk */
ret = vhost_vdpa_map(v, iova, (last_pfn - map_pfn + 1) << PAGE_SHIFT,
-map_pfn << PAGE_SHIFT, msg->perm);
+map_pfn << PAGE_SHIFT, perm);
 out:
if (ret) {
if (nchunks) {
@@ -733,13 +728,33 @@ static int vhost_vdpa_process_iotlb_update(struct 
vhost_vdpa *v,
for (pfn = map_pfn; pfn <= last_pfn; pfn++)
unpin_user_page(pfn_to_page(pfn));
}
-   vhost_vdpa_unmap(v, msg->iova, msg->size);
+   vhost_vdpa_unmap(v, start, size);
}
 unlock:
mmap_read_unlock(dev->mm);
 free:
free_page((unsigned long)page_list);
return ret;
+
+}
+
+static int vhost_vdpa_process_iotlb_update(struct vhost_vdpa *v,
+  struct vhost_iotlb_msg *msg)
+{
+   struct vhost_dev *dev = &v->vdev;
+   struct vhost_iotlb *iotlb = dev->iotlb;
+
+   if (msg->iova < v->range.first || !msg->size ||
+   msg->iova > U64_MAX - msg->size + 1 ||
+   

[PATCH v11 07/12] vdpa: Add an opaque pointer for vdpa_config_ops.dma_map()

2021-08-18 Thread Xie Yongji
Add an opaque pointer for DMA mapping.

Suggested-by: Jason Wang 
Signed-off-by: Xie Yongji 
Acked-by: Jason Wang 
---
 drivers/vdpa/vdpa_sim/vdpa_sim.c | 6 +++---
 drivers/vhost/vdpa.c | 2 +-
 include/linux/vdpa.h | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index e30d89b399d9..827d613c4eb6 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -544,14 +544,14 @@ static int vdpasim_set_map(struct vdpa_device *vdpa,
 }
 
 static int vdpasim_dma_map(struct vdpa_device *vdpa, u64 iova, u64 size,
-  u64 pa, u32 perm)
+  u64 pa, u32 perm, void *opaque)
 {
struct vdpasim *vdpasim = vdpa_to_sim(vdpa);
int ret;
 
spin_lock(&vdpasim->iommu_lock);
-   ret = vhost_iotlb_add_range(vdpasim->iommu, iova, iova + size - 1, pa,
-   perm);
+   ret = vhost_iotlb_add_range_ctx(vdpasim->iommu, iova, iova + size - 1,
+   pa, perm, opaque);
spin_unlock(&vdpasim->iommu_lock);
 
return ret;
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index d99d75ad30cc..87ab104792fb 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -574,7 +574,7 @@ static int vhost_vdpa_map(struct vhost_vdpa *v,
return r;
 
if (ops->dma_map) {
-   r = ops->dma_map(vdpa, iova, size, pa, perm);
+   r = ops->dma_map(vdpa, iova, size, pa, perm, NULL);
} else if (ops->set_map) {
if (!v->in_batch)
r = ops->set_map(vdpa, dev->iotlb);
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index af7ea5ad795f..18f81612217e 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -271,7 +271,7 @@ struct vdpa_config_ops {
/* DMA ops */
int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
int (*dma_map)(struct vdpa_device *vdev, u64 iova, u64 size,
-  u64 pa, u32 perm);
+  u64 pa, u32 perm, void *opaque);
int (*dma_unmap)(struct vdpa_device *vdev, u64 iova, u64 size);
 
/* Free device resources */
-- 
2.11.0



[PATCH v11 05/12] vhost-vdpa: Handle the failure of vdpa_reset()

2021-08-18 Thread Xie Yongji
vdpa_reset() may fail now. This adds a check on its return value and
fails vhost_vdpa_open() accordingly.

Signed-off-by: Xie Yongji 
---
 drivers/vhost/vdpa.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index b1c91b4db0ba..d99d75ad30cc 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -116,12 +116,13 @@ static void vhost_vdpa_unsetup_vq_irq(struct vhost_vdpa 
*v, u16 qid)
irq_bypass_unregister_producer(&vq->call_ctx.producer);
 }
 
-static void vhost_vdpa_reset(struct vhost_vdpa *v)
+static int vhost_vdpa_reset(struct vhost_vdpa *v)
 {
struct vdpa_device *vdpa = v->vdpa;
 
-   vdpa_reset(vdpa);
v->in_batch = 0;
+
+   return vdpa_reset(vdpa);
 }
 
 static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp)
@@ -868,7 +869,9 @@ static int vhost_vdpa_open(struct inode *inode, struct file 
*filep)
return -EBUSY;
 
nvqs = v->nvqs;
-   vhost_vdpa_reset(v);
+   r = vhost_vdpa_reset(v);
+   if (r)
+   goto err;
 
vqs = kmalloc_array(nvqs, sizeof(*vqs), GFP_KERNEL);
if (!vqs) {
-- 
2.11.0



[PATCH v11 06/12] vhost-iotlb: Add an opaque pointer for vhost IOTLB

2021-08-18 Thread Xie Yongji
Add an opaque pointer to the vhost IOTLB map entries and introduce
vhost_iotlb_add_range_ctx() to accept it.

Suggested-by: Jason Wang 
Signed-off-by: Xie Yongji 
Acked-by: Jason Wang 
---
 drivers/vhost/iotlb.c   | 20 
 include/linux/vhost_iotlb.h |  3 +++
 2 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/vhost/iotlb.c b/drivers/vhost/iotlb.c
index 0582079e4bcc..670d56c879e5 100644
--- a/drivers/vhost/iotlb.c
+++ b/drivers/vhost/iotlb.c
@@ -36,19 +36,21 @@ void vhost_iotlb_map_free(struct vhost_iotlb *iotlb,
 EXPORT_SYMBOL_GPL(vhost_iotlb_map_free);
 
 /**
- * vhost_iotlb_add_range - add a new range to vhost IOTLB
+ * vhost_iotlb_add_range_ctx - add a new range to vhost IOTLB
  * @iotlb: the IOTLB
  * @start: start of the IOVA range
  * @last: last of IOVA range
  * @addr: the address that is mapped to @start
  * @perm: access permission of this range
+ * @opaque: the opaque pointer for the new mapping
  *
  * Returns an error last is smaller than start or memory allocation
  * fails
  */
-int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
- u64 start, u64 last,
- u64 addr, unsigned int perm)
+int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb,
+ u64 start, u64 last,
+ u64 addr, unsigned int perm,
+ void *opaque)
 {
struct vhost_iotlb_map *map;
 
@@ -71,6 +73,7 @@ int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
map->last = last;
map->addr = addr;
map->perm = perm;
+   map->opaque = opaque;
 
iotlb->nmaps++;
vhost_iotlb_itree_insert(map, &iotlb->root);
@@ -80,6 +83,15 @@ int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
 
return 0;
 }
+EXPORT_SYMBOL_GPL(vhost_iotlb_add_range_ctx);
+
+int vhost_iotlb_add_range(struct vhost_iotlb *iotlb,
+ u64 start, u64 last,
+ u64 addr, unsigned int perm)
+{
+   return vhost_iotlb_add_range_ctx(iotlb, start, last,
+addr, perm, NULL);
+}
 EXPORT_SYMBOL_GPL(vhost_iotlb_add_range);
 
 /**
diff --git a/include/linux/vhost_iotlb.h b/include/linux/vhost_iotlb.h
index 6b09b786a762..2d0e2f52f938 100644
--- a/include/linux/vhost_iotlb.h
+++ b/include/linux/vhost_iotlb.h
@@ -17,6 +17,7 @@ struct vhost_iotlb_map {
u32 perm;
u32 flags_padding;
u64 __subtree_last;
+   void *opaque;
 };
 
 #define VHOST_IOTLB_FLAG_RETIRE 0x1
@@ -29,6 +30,8 @@ struct vhost_iotlb {
unsigned int flags;
 };
 
+int vhost_iotlb_add_range_ctx(struct vhost_iotlb *iotlb, u64 start, u64 last,
+ u64 addr, unsigned int perm, void *opaque);
 int vhost_iotlb_add_range(struct vhost_iotlb *iotlb, u64 start, u64 last,
  u64 addr, unsigned int perm);
 void vhost_iotlb_del_range(struct vhost_iotlb *iotlb, u64 start, u64 last);
-- 
2.11.0

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v11 04/12] vdpa: Add reset callback in vdpa_config_ops

2021-08-18 Thread Xie Yongji
This adds a new callback to support device-specific reset
behavior. The vdpa bus driver will call the reset function
instead of setting the status to zero during reset if the
device driver supports the new callback.
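
A device driver opting in would wire the new op up roughly like this (a
minimal sketch; struct my_vdpa, my_vdpa_reset(), my_vdpa_set_status()
and my_hw_reset() are hypothetical names, not part of this patch):

  #include <linux/vdpa.h>

  static int my_vdpa_reset(struct vdpa_device *vdev)
  {
          struct my_vdpa *my = container_of(vdev, struct my_vdpa, vdpa);

          /* Device-specific teardown that may fail, e.g. waiting for
           * outstanding requests to drain. */
          return my_hw_reset(my);
  }

  static const struct vdpa_config_ops my_vdpa_config_ops = {
          .reset          = my_vdpa_reset,
          .set_status     = my_vdpa_set_status,
          /* ... the remaining ops ... */
  };

Drivers that do not provide .reset keep the old behavior: vdpa_reset()
falls back to set_status(vdev, 0).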

Signed-off-by: Xie Yongji 
---
 drivers/vhost/vdpa.c |  9 +++--
 include/linux/vdpa.h | 11 ++-
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index b07aa161f7ad..b1c91b4db0ba 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -157,7 +157,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp)
struct vdpa_device *vdpa = v->vdpa;
const struct vdpa_config_ops *ops = vdpa->config;
u8 status, status_old;
-   int nvqs = v->nvqs;
+   int ret, nvqs = v->nvqs;
u16 i;
 
	if (copy_from_user(&status, statusp, sizeof(status)))
@@ -172,7 +172,12 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 __user *statusp)
if (status != 0 && (ops->get_status(vdpa) & ~status) != 0)
return -EINVAL;
 
-   ops->set_status(vdpa, status);
+   if (status == 0 && ops->reset) {
+   ret = ops->reset(vdpa);
+   if (ret)
+   return ret;
+   } else
+   ops->set_status(vdpa, status);
 
	if ((status & VIRTIO_CONFIG_S_DRIVER_OK) && !(status_old & VIRTIO_CONFIG_S_DRIVER_OK))
for (i = 0; i < nvqs; i++)
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 8a645f8f4476..af7ea5ad795f 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -196,6 +196,9 @@ struct vdpa_iova_range {
  * @vdev: vdpa device
  * Returns the iova range supported by
  * the device.
+ * @reset: Reset device (optional)
+ * @vdev: vdpa device
+ * Returns integer: success (0) or error (< 0)
  * @set_map:   Set device memory mapping (optional)
  * Needed for device that using device
  * specific DMA translation (on-chip IOMMU)
@@ -263,6 +266,7 @@ struct vdpa_config_ops {
   const void *buf, unsigned int len);
u32 (*get_generation)(struct vdpa_device *vdev);
struct vdpa_iova_range (*get_iova_range)(struct vdpa_device *vdev);
+   int (*reset)(struct vdpa_device *vdev);
 
/* DMA ops */
int (*set_map)(struct vdpa_device *vdev, struct vhost_iotlb *iotlb);
@@ -351,12 +355,17 @@ static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev)
return vdev->dma_dev;
 }
 
-static inline void vdpa_reset(struct vdpa_device *vdev)
+static inline int vdpa_reset(struct vdpa_device *vdev)
 {
const struct vdpa_config_ops *ops = vdev->config;
 
vdev->features_valid = false;
+   if (ops->reset)
+   return ops->reset(vdev);
+
ops->set_status(vdev, 0);
+
+   return 0;
 }
 
 static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features)
-- 
2.11.0

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v11 03/12] vdpa: Fix some coding style issues

2021-08-18 Thread Xie Yongji
Fix some code indentation issues and the following checkpatch warning:

WARNING: Prefer 'unsigned int' to bare use of 'unsigned'
371: FILE: include/linux/vdpa.h:371:
+static inline void vdpa_get_config(struct vdpa_device *vdev, unsigned offset,

Signed-off-by: Xie Yongji 
---
 include/linux/vdpa.h | 34 +-
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 954b340f6c2f..8a645f8f4476 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -43,17 +43,17 @@ struct vdpa_vq_state_split {
  * @last_used_idx: used index
  */
 struct vdpa_vq_state_packed {
-u16last_avail_counter:1;
-u16last_avail_idx:15;
-u16last_used_counter:1;
-u16last_used_idx:15;
+   u16 last_avail_counter:1;
+   u16 last_avail_idx:15;
+   u16 last_used_counter:1;
+   u16 last_used_idx:15;
 };
 
 struct vdpa_vq_state {
- union {
-  struct vdpa_vq_state_split split;
-  struct vdpa_vq_state_packed packed;
- };
+   union {
+   struct vdpa_vq_state_split split;
+   struct vdpa_vq_state_packed packed;
+   };
 };
 
 struct vdpa_mgmt_dev;
@@ -131,7 +131,7 @@ struct vdpa_iova_range {
  * @vdev: vdpa device
  * @idx: virtqueue index
  * @state: pointer to returned state (last_avail_idx)
- * @get_vq_notification:   Get the notification area for a virtqueue
+ * @get_vq_notification:   Get the notification area for a virtqueue
  * @vdev: vdpa device
  * @idx: virtqueue index
  * Returns the notifcation area
@@ -353,25 +353,25 @@ static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev)
 
 static inline void vdpa_reset(struct vdpa_device *vdev)
 {
-const struct vdpa_config_ops *ops = vdev->config;
+   const struct vdpa_config_ops *ops = vdev->config;
 
vdev->features_valid = false;
-ops->set_status(vdev, 0);
+   ops->set_status(vdev, 0);
 }
 
 static inline int vdpa_set_features(struct vdpa_device *vdev, u64 features)
 {
-const struct vdpa_config_ops *ops = vdev->config;
+   const struct vdpa_config_ops *ops = vdev->config;
 
vdev->features_valid = true;
-return ops->set_features(vdev, features);
+   return ops->set_features(vdev, features);
 }
 
-
-static inline void vdpa_get_config(struct vdpa_device *vdev, unsigned offset,
-  void *buf, unsigned int len)
+static inline void vdpa_get_config(struct vdpa_device *vdev,
+  unsigned int offset, void *buf,
+  unsigned int len)
 {
-const struct vdpa_config_ops *ops = vdev->config;
+   const struct vdpa_config_ops *ops = vdev->config;
 
/*
 * Config accesses aren't supposed to trigger before features are set.
-- 
2.11.0

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v11 02/12] file: Export receive_fd() to modules

2021-08-18 Thread Xie Yongji
Export receive_fd() so that some modules can use
it to pass file descriptors between processes without
missing any of the security checks.
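
A minimal sketch of how a module might use it (my_pass_file_to_user()
is a hypothetical helper, not part of this patch):

  #include <linux/file.h>
  #include <linux/fcntl.h>

  /* Install a new fd for 'file' in the current process, going through
   * the usual security hooks, and hand the number back to the caller
   * (e.g. to be returned to userspace in an ioctl reply). */
  static int my_pass_file_to_user(struct file *file)
  {
          return receive_fd(file, O_CLOEXEC);
  }

A negative return value is an errno from the fd allocation or from the
security checks.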

Signed-off-by: Xie Yongji 
Acked-by: Jason Wang 
---
 fs/file.c| 6 ++
 include/linux/file.h | 7 +++
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/fs/file.c b/fs/file.c
index 86dc9956af32..210e540672aa 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -1134,6 +1134,12 @@ int receive_fd_replace(int new_fd, struct file *file, unsigned int o_flags)
return new_fd;
 }
 
+int receive_fd(struct file *file, unsigned int o_flags)
+{
+   return __receive_fd(file, NULL, o_flags);
+}
+EXPORT_SYMBOL_GPL(receive_fd);
+
 static int ksys_dup3(unsigned int oldfd, unsigned int newfd, int flags)
 {
int err = -EBADF;
diff --git a/include/linux/file.h b/include/linux/file.h
index 2de2e4613d7b..51e830b4fe3a 100644
--- a/include/linux/file.h
+++ b/include/linux/file.h
@@ -94,6 +94,9 @@ extern void fd_install(unsigned int fd, struct file *file);
 
 extern int __receive_fd(struct file *file, int __user *ufd,
unsigned int o_flags);
+
+extern int receive_fd(struct file *file, unsigned int o_flags);
+
 static inline int receive_fd_user(struct file *file, int __user *ufd,
  unsigned int o_flags)
 {
@@ -101,10 +104,6 @@ static inline int receive_fd_user(struct file *file, int __user *ufd,
return -EFAULT;
return __receive_fd(file, ufd, o_flags);
 }
-static inline int receive_fd(struct file *file, unsigned int o_flags)
-{
-   return __receive_fd(file, NULL, o_flags);
-}
 int receive_fd_replace(int new_fd, struct file *file, unsigned int o_flags);
 
 extern void flush_delayed_fput(void);
-- 
2.11.0

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v11 01/12] iova: Export alloc_iova_fast() and free_iova_fast()

2021-08-18 Thread Xie Yongji
Export alloc_iova_fast() and free_iova_fast() so that
some modules can make use of the per-CPU cache to get
rid of the rbtree spinlock in alloc_iova() and free_iova()
during IOVA allocation.
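
A minimal usage sketch for such a module (the my_* helpers are
illustrative only, not part of this patch; sizes are in IOVA pages):

  #include <linux/iova.h>

  /* Returns the allocated pfn, or 0 on failure. */
  static unsigned long my_alloc_iova(struct iova_domain *iovad,
                                     unsigned long npages,
                                     unsigned long limit_pfn)
  {
          /* true: flush the rcaches and retry once before giving up */
          return alloc_iova_fast(iovad, npages, limit_pfn, true);
  }

  static void my_free_iova(struct iova_domain *iovad, unsigned long pfn,
                           unsigned long npages)
  {
          free_iova_fast(iovad, pfn, npages);
  }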

Signed-off-by: Xie Yongji 
---
 drivers/iommu/iova.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b6cf5f16123b..3941ed6bb99b 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -521,6 +521,7 @@ alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
 
return new_iova->pfn_lo;
 }
+EXPORT_SYMBOL_GPL(alloc_iova_fast);
 
 /**
  * free_iova_fast - free iova pfn range into rcache
@@ -538,6 +539,7 @@ free_iova_fast(struct iova_domain *iovad, unsigned long pfn, unsigned long size)
 
free_iova(iovad, pfn);
 }
+EXPORT_SYMBOL_GPL(free_iova_fast);
 
 #define fq_ring_for_each(i, fq) \
	for ((i) = (fq)->head; (i) != (fq)->tail; (i) = ((i) + 1) % IOVA_FQ_SIZE)
-- 
2.11.0

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v11 00/12] Introduce VDUSE - vDPA Device in Userspace

2021-08-18 Thread Xie Yongji
This series introduces a framework that makes it possible to implement
software-emulated vDPA devices in userspace. To make the device
emulation more secure, the emulated vDPA device's control path is handled
in the kernel and only the data path is implemented in userspace.

Since the emulated vDPA device's control path is handled in the kernel,
a message mechanism is introduced to make userspace aware of data path
related changes. Userspace can use read()/write() to receive and reply
to the control messages.
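
Roughly, the daemon's control-message loop looks like this (a sketch
only; the struct layouts below are placeholders, the real ones are
defined by the VDUSE uapi header added later in this series):

  #include <unistd.h>

  /* Placeholder message layouts, for illustration only. */
  struct vduse_request  { unsigned int type; unsigned int id; };
  struct vduse_response { unsigned int id; int result; };

  static void serve(int vduse_fd)
  {
          for (;;) {
                  struct vduse_request req;
                  struct vduse_response resp = { 0 };

                  if (read(vduse_fd, &req, sizeof(req)) != sizeof(req))
                          break;

                  resp.id = req.id;
                  /* react to the control message, e.g. a virtqueue or
                   * status change, then acknowledge it */

                  if (write(vduse_fd, &resp, sizeof(resp)) != sizeof(resp))
                          break;
          }
  }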

In the data path, the core maps the dma buffer into the VDUSE daemon's
address space, which can be done in different ways depending on
the vdpa bus to which the vDPA device is attached.

In the virtio-vdpa case, we implement an MMU-based software IOTLB with a
bounce-buffering mechanism to achieve that. In the vhost-vdpa case, the
dma buffer resides in a userspace memory region which can be shared with
the VDUSE userspace process by transferring the shmfd.

The details and our use case are shown below:

[The original ASCII architecture diagram was mangled by line wrapping in
the list archive. In outline it shows: a container using /dev/vdx through
the virtio-blk device, virtio bus and virtio-vdpa driver, and a QEMU VM
using /dev/vhost-vdpa-x through the vhost device and vhost-vdpa driver;
both sit on the vdpa bus and attach to a vdpa device backed by the vduse
driver. The VDUSE daemon in userspace provides the vDPA device emulation
and a block driver that reaches the remote storages over TCP/IP via the
NIC.]

We make use of it to implement a block device connecting to
our distributed storage, which can be used both in containers and
VMs. Thus, we can have a unified technology stack in these two cases.

To test it with null-blk:

  $ qemu-storage-daemon \
  --chardev socket,id=charmonitor,path=/tmp/qmp.sock,server,nowait \
  --monitor chardev=charmonitor \
  --blockdev driver=host_device,cache.direct=on,aio=native,filename=/dev/nullb0,node-name=disk0 \
  --export 

Re: [PATCH v2 13/63] iommu/amd: Use struct_group() for memcpy() region

2021-08-18 Thread Joerg Roedel
On Tue, Aug 17, 2021 at 11:04:43PM -0700, Kees Cook wrote:
> In preparation for FORTIFY_SOURCE performing compile-time and run-time
> field bounds checking for memcpy(), memmove(), and memset(), avoid
> intentionally writing across neighboring fields.
> 
> Use struct_group() in struct ivhd_entry around members ext and hidh, so
> they can be referenced together. This will allow memcpy() and sizeof()
> to more easily reason about sizes, improve readability, and avoid future
> warnings about writing beyond the end of ext.
> 
> "pahole" shows no size nor member offset changes to struct ivhd_entry.
> "objdump -d" shows no object code changes.
> 
> Cc: Joerg Roedel 
> Cc: Will Deacon 
> Cc: iommu@lists.linux-foundation.org
> Signed-off-by: Kees Cook 

Acked-by: Joerg Roedel 

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH v4 00/24] iommu: Refactor DMA domain strictness

2021-08-18 Thread Joerg Roedel
On Wed, Aug 11, 2021 at 01:21:14PM +0100, Robin Murphy wrote:
> Robin Murphy (24):
>   iommu: Pull IOVA cookie management into the core
>   iommu/amd: Drop IOVA cookie management
>   iommu/arm-smmu: Drop IOVA cookie management
>   iommu/vt-d: Drop IOVA cookie management
>   iommu/exynos: Drop IOVA cookie management
>   iommu/ipmmu-vmsa: Drop IOVA cookie management
>   iommu/mtk: Drop IOVA cookie management
>   iommu/rockchip: Drop IOVA cookie management
>   iommu/sprd: Drop IOVA cookie management
>   iommu/sun50i: Drop IOVA cookie management
>   iommu/virtio: Drop IOVA cookie management
>   iommu/dma: Unexport IOVA cookie management
>   iommu/dma: Remove redundant "!dev" checks
>   iommu: Indicate queued flushes via gather data
>   iommu/io-pgtable: Remove non-strict quirk
>   iommu: Introduce explicit type for non-strict DMA domains
>   iommu/amd: Prepare for multiple DMA domain types
>   iommu/arm-smmu: Prepare for multiple DMA domain types
>   iommu/vt-d: Prepare for multiple DMA domain types
>   iommu: Express DMA strictness via the domain type
>   iommu: Expose DMA domain strictness via sysfs
>   iommu: Only log strictness for DMA domains
>   iommu: Merge strictness and domain type configs
>   iommu: Allow enabling non-strict mode dynamically

Applied all except patch 12. Please re-submit patch 12 together with the
APPLE DART fixups after v5.15-rc1 is out.

Thanks,

Joerg
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH 0/2] [PULL REQUEST] iommu/vt-d: Fixes for v5.14-rc7

2021-08-18 Thread Joerg Roedel
On Tue, Aug 17, 2021 at 08:43:19PM +0800, Lu Baolu wrote:
> Fenghua Yu (1):
>   iommu/vt-d: Fix PASID reference leak
> 
> Liu Yi L (1):
>   iommu/vt-d: Fix incomplete cache flush in
> intel_pasid_tear_down_entry()

Applied both, thanks.

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v2 2/2] iommu/arm-smmu-v3: Simplify useless instructions in arm_smmu_cmdq_build_cmd()

2021-08-18 Thread Zhen Lei
Although the parameter 'cmd' always points to a local array in the callers,
and only this function modifies it, the compiler does not know this. It
therefore almost always re-reads cmd[i] from memory rather than reusing the
value already cached in a register. This generates many useless instructions
and affects performance to some extent.

To guide the compiler towards proper optimization, 'cmd' is built in a local
array variable, marked as register, and copied to the output parameter in
one go when the function returns.
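
The pattern, reduced to a generic example (illustration only, not the
SMMU code itself):

  #include <linux/string.h>
  #include <linux/types.h>

  /* Before: each |= may force a load/store through 'out', because the
   * compiler cannot prove 'out' is not aliased elsewhere. */
  static void build_slow(u64 *out, u64 op, u64 arg)
  {
          memset(out, 0, 2 * sizeof(u64));
          out[0] |= op;
          out[1] |= arg;
  }

  /* After: accumulate in a local the compiler can keep in registers,
   * then store through 'out' once on return. */
  static void build_fast(u64 *out, u64 op, u64 arg)
  {
          u64 tmp[2];

          tmp[0] = op;
          tmp[1] = arg;

          out[0] = tmp[0];
          out[1] = tmp[1];
  }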

The optimization effect can be viewed by running the "size arm-smmu-v3.o"
command.

Before:
   textdata bss dec hex
  269541348  56   283586ec6

After:
   textdata bss dec hex
  267621348  56   281666e06

Signed-off-by: Zhen Lei 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 11 ---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 01e95b56ffa07d1..7cec0c967f71d86 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -234,10 +234,12 @@ static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
 }
 
 /* High-level queue accessors */
-static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+static int arm_smmu_cmdq_build_cmd(u64 *out_cmd, struct arm_smmu_cmdq_ent *ent)
 {
-   memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
-   cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+   register u64 cmd[CMDQ_ENT_DWORDS];
+
+   cmd[0] = FIELD_PREP(CMDQ_0_OP, ent->opcode);
+   cmd[1] = 0;
 
switch (ent->opcode) {
case CMDQ_OP_TLBI_EL2_ALL:
@@ -332,6 +334,9 @@ static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
return -ENOENT;
}
 
+   out_cmd[0] = cmd[0];
+   out_cmd[1] = cmd[1];
+
return 0;
 }
 
-- 
2.26.0.106.g9fadedd

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v2 1/2] iommu/arm-smmu-v3: Properly handle the return value of arm_smmu_cmdq_build_cmd()

2021-08-18 Thread Zhen Lei
1. Building the CMD_SYNC command cannot fail, so its return value can be ignored.
2. arm_smmu_cmdq_build_cmd() almost never fails, so adding "unlikely()"
   helps the compiler optimize the instruction pipeline.
3. Check the return value in arm_smmu_cmdq_batch_add().

Signed-off-by: Zhen Lei 
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 18 --
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 3646bf8f021cd4c..01e95b56ffa07d1 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -409,10 +409,7 @@ static void __arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu,
dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
 
/* Convert the erroneous command into a CMD_SYNC */
-   if (arm_smmu_cmdq_build_cmd(cmd, &cmd_sync)) {
-   dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
-   return;
-   }
+   arm_smmu_cmdq_build_cmd(cmd, &cmd_sync);
 
queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
 }
@@ -860,7 +857,7 @@ static int __arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
 {
u64 cmd[CMDQ_ENT_DWORDS];
 
-   if (arm_smmu_cmdq_build_cmd(cmd, ent)) {
+   if (unlikely(arm_smmu_cmdq_build_cmd(cmd, ent))) {
dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
 ent->opcode);
return -EINVAL;
@@ -885,11 +882,20 @@ static void arm_smmu_cmdq_batch_add(struct arm_smmu_device *smmu,
struct arm_smmu_cmdq_batch *cmds,
struct arm_smmu_cmdq_ent *cmd)
 {
+   int index;
+
if (cmds->num == CMDQ_BATCH_ENTRIES) {
arm_smmu_cmdq_issue_cmdlist(smmu, cmds->cmds, cmds->num, false);
cmds->num = 0;
}
-   arm_smmu_cmdq_build_cmd(&cmds->cmds[cmds->num * CMDQ_ENT_DWORDS], cmd);
+
+   index = cmds->num * CMDQ_ENT_DWORDS;
+   if (unlikely(arm_smmu_cmdq_build_cmd(&cmds->cmds[index], cmd))) {
+   dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+cmd->opcode);
+   return;
+   }
+
cmds->num++;
 }
 
-- 
2.26.0.106.g9fadedd

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v2 0/2] iommu/arm-smmu-v3: Perform some simple optimizations for arm_smmu_cmdq_build_cmd()

2021-08-18 Thread Zhen Lei
v1 --> v2:
1. Add patch 1: Properly handle the return value of arm_smmu_cmdq_build_cmd()
2. Remove arm_smmu_cmdq_copy_cmd(). In addition, when building the command
   fails, out_cmd is not filled.


Zhen Lei (2):
  iommu/arm-smmu-v3: Properly handle the return value of
arm_smmu_cmdq_build_cmd()
  iommu/arm-smmu-v3: Simplify useless instructions in
arm_smmu_cmdq_build_cmd()

 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 29 ++---
 1 file changed, 20 insertions(+), 9 deletions(-)

-- 
2.26.0.106.g9fadedd

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH v2 13/63] iommu/amd: Use struct_group() for memcpy() region

2021-08-18 Thread Kees Cook
In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memcpy(), memmove(), and memset(), avoid
intentionally writing across neighboring fields.

Use struct_group() in struct ivhd_entry around members ext and hidh, so
they can be referenced together. This will allow memcpy() and sizeof()
to more easily reason about sizes, improve readability, and avoid future
warnings about writing beyond the end of ext.

"pahole" shows no size nor member offset changes to struct ivhd_entry.
"objdump -d" shows no object code changes.

Cc: Joerg Roedel 
Cc: Will Deacon 
Cc: iommu@lists.linux-foundation.org
Signed-off-by: Kees Cook 
---
 drivers/iommu/amd/init.c | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index bdcf167b4afe..70506d6175e9 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -121,8 +121,10 @@ struct ivhd_entry {
u8 type;
u16 devid;
u8 flags;
-   u32 ext;
-   u32 hidh;
+   struct_group(ext_hid,
+   u32 ext;
+   u32 hidh;
+   );
u64 cid;
u8 uidf;
u8 uidl;
@@ -1377,7 +1379,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
break;
}
 
-   memcpy(hid, (u8 *)(&e->ext), ACPIHID_HID_LEN - 1);
+   BUILD_BUG_ON(sizeof(e->ext_hid) != ACPIHID_HID_LEN - 1);
+   memcpy(hid, &e->ext_hid, ACPIHID_HID_LEN - 1);
hid[ACPIHID_HID_LEN - 1] = '\0';
 
if (!(*hid)) {
-- 
2.30.2

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu