[PATCH] iommu: fix return error code in iommu_probe_device()
If iommu_group_get() fails, iommu_probe_device() needs to return an error code.

Fixes: cf193888bfbd ("iommu: Move new probe_device path...")
Reported-by: Hulk Robot
Signed-off-by: Yang Yingliang
---
 drivers/iommu/iommu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b53446bb8c6b..6f4a32df90f6 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -253,8 +253,10 @@ int iommu_probe_device(struct device *dev)
 		goto err_out;
 
 	group = iommu_group_get(dev);
-	if (!group)
+	if (!group) {
+		ret = -ENODEV;
 		goto err_release;
+	}
 
 	/*
 	 * Try to allocate a default domain - needs support from the
-- 
2.25.1
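For context, a minimal sketch of the error-handling shape this fix produces; the code around the hunk is abridged, so treat the helpers outside the hunk as assumptions rather than the exact upstream source:

	int iommu_probe_device(struct device *dev)
	{
		struct iommu_group *group;
		int ret;

		ret = __iommu_probe_device(dev, NULL);
		if (ret)
			goto err_out;

		group = iommu_group_get(dev);
		if (!group) {
			ret = -ENODEV;	/* the fix: propagate a real error code */
			goto err_release;
		}
		/* ... default domain setup ... */

	err_release:
		iommu_release_device(dev);
	err_out:
		return ret;
	}

Without the assignment, ret still holds 0 from the earlier success path, so a device that was never added to a group would be reported as successfully probed.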
Re: [PATCH v11 01/13] vfio: VFIO_IOMMU_SET_PASID_TABLE
Hi Eric,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on v5.10-rc4]
[also build test ERROR on next-20201116]
[cannot apply to vfio/next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20201116-190742
base:   09162bc32c880a791c6c0668ce0745cf7958f576
config: x86_64-randconfig-s022-20201115 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-107-gaf3512a6-dirty
        # https://github.com/0day-ci/linux/commit/ce7c900f2d50c7e8cc62d54836369a3f3150adaf
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20201116-190742
        git checkout ce7c900f2d50c7e8cc62d54836369a3f3150adaf
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All errors (new ones prefixed by >>):

   In file included from <command-line>:32:
>> ./usr/include/linux/vfio.h:1197:34: error: field 'config' has incomplete type
    1197 |  struct iommu_pasid_table_config config; /* used on SET */
         |                                  ^~
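An incomplete-type error like this usually means the UAPI vfio header references struct iommu_pasid_table_config without pulling in the header that defines it for userspace builds. A sketch of the kind of fix, assuming the struct lives in the UAPI <linux/iommu.h> as in this series (the exact include set is an assumption, not the author's confirmed change):

	--- a/include/uapi/linux/vfio.h
	+++ b/include/uapi/linux/vfio.h
	@@
	 #include <linux/types.h>
	 #include <linux/ioctl.h>
	+#include <linux/iommu.h>	/* assumption: provides struct iommu_pasid_table_config */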
Re: iommu/vt-d: Cure VF irqdomain hiccup
On Thu, Nov 12, 2020 at 08:15:02PM +0100, Thomas Gleixner wrote:
> The recent changes to store the MSI irqdomain pointer in struct device
> missed that Intel DMAR does not register virtual function devices. Due to
> that a VF device gets the plain PCI-MSI domain assigned and then issues
> compat MSI messages which get caught by the interrupt remapping unit.
>
> Cure that by inheriting the irq domain from the physical function
> device.
>
> That's a temporary workaround. The correct fix is to inherit the irq domain
> from the bus, but that's a larger effort which needs quite some other
> changes to the way how x86 manages PCI and MSI domains.
>
> Fixes: 85a8dfc57a0b ("iommm/vt-d: Store irq domain in struct device")
> Reported-by: Jason Gunthorpe
> Signed-off-by: Thomas Gleixner
> ---
>  drivers/iommu/intel/dmar.c | 19 ++-
>  1 file changed, 18 insertions(+), 1 deletion(-)

Our QA says it solves the issue:

Tested-by: Itay Aveksis

Thanks,
Jason
Re: [PATCH v6 3/3] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module
On Mon, Nov 16, 2020 at 8:36 AM Will Deacon wrote:
> On Mon, Nov 16, 2020 at 04:59:36PM +0100, Thierry Reding wrote:
> > On Fri, Nov 06, 2020 at 04:27:10AM +, John Stultz wrote:
> >
> > Unfortunately, the ARM SMMU module will eventually end up being loaded
> > once the root filesystem has been mounted (for example via SDHCI or
> > Ethernet, both using just plain, non-IOMMU-backed DMA API) and then
> > initialize, configuring as "fault by default", which then results in a
> > slew of SMMU faults from all the devices that have previously configured
> > themselves without IOMMU support.
>
> I wonder if fw_devlink=on would help here?
>
> But either way, I'd be more inclined to revert this change if it's causing
> problems for !QCOM devices.
>
> Linus -- please can you drop this one (patch 3/3) for now, given that it's
> causing problems?

Agreed. Apologies again for the trouble. I do feel like the probe timeout
to handle optional links is causing a lot of the trouble here. I expect
fw_devlink would solve this, but it may be awhile before it can be always
enabled. I may see about pushing the default probe timeout value to be a
little further out than init (I backed away from my last attempt as I
didn't want to cause long (30 second) delays for cases like NFS root, but
maybe 2-5 seconds would be enough to make things work better for everyone).

thanks
-john
Re: [PATCH 1/1] vfio/type1: Add subdev_ioasid callback to vfio_iommu_driver_ops
On Thu, 12 Nov 2020 10:24:07 +0800
Lu Baolu wrote:
> Add API for getting the ioasid of a subdevice (vfio/mdev). This calls
> into the backend IOMMU module to get the actual value or error number
> if ioasid for subdevice is not supported. The physical device driver
> implementations which rely on the vfio/mdev framework for mediated
> device user level access could typically consume this interface like
> below:
>
> 	struct device *dev = mdev_dev(mdev);
> 	unsigned int pasid;
> 	int ret;
>
> 	ret = vfio_subdev_ioasid(dev, &pasid);
> 	if (ret < 0)
> 		return ret;
>
> 	/* Program device context with pasid value. */
>

Seems like an overly specific callback. We already export means for you
to get a vfio_group, test that a device is an mdev, and get the iommu
device from an mdev. So you can already test whether a given device is
an mdev with an iommu backing device that supports aux domains. The only
missing piece seems to be that you can't get the domain for a group in
order to retrieve the pasid. So why aren't we exporting a callback that
given a vfio_group provides the iommu domain? Thanks,

Alex

> Signed-off-by: Lu Baolu
> ---
>  drivers/vfio/vfio.c             | 34 ++++++++++
>  drivers/vfio/vfio_iommu_type1.c | 57 +++++++++++++++
>  include/linux/vfio.h            |  4 +++
>  3 files changed, 95 insertions(+)
>
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index 2151bc7f87ab..4931e1492921 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -2331,6 +2331,40 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
>  }
>  EXPORT_SYMBOL(vfio_unregister_notifier);
>
> +int vfio_subdev_ioasid(struct device *dev, unsigned int *id)
> +{
> +	struct vfio_container *container;
> +	struct vfio_iommu_driver *driver;
> +	struct vfio_group *group;
> +	int ret;
> +
> +	if (!dev || !id)
> +		return -EINVAL;
> +
> +	group = vfio_group_get_from_dev(dev);
> +	if (!group)
> +		return -ENODEV;
> +
> +	ret = vfio_group_add_container_user(group);
> +	if (ret)
> +		goto out;
> +
> +	container = group->container;
> +	driver = container->iommu_driver;
> +	if (likely(driver && driver->ops->subdev_ioasid))
> +		ret = driver->ops->subdev_ioasid(container->iommu_data,
> +						 group->iommu_group, id);
> +	else
> +		ret = -ENOTTY;
> +
> +	vfio_group_try_dissolve_container(group);
> +
> +out:
> +	vfio_group_put(group);
> +	return ret;
> +}
> +EXPORT_SYMBOL(vfio_subdev_ioasid);
> +
>  /**
>   * Module/class support
>   */
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index 67e827638995..f94cc7707d7e 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -2980,6 +2980,62 @@ static int vfio_iommu_type1_dma_rw(void *iommu_data, dma_addr_t user_iova,
>  	return ret;
>  }
>
> +static int vfio_iommu_type1_subdev_ioasid(void *iommu_data,
> +					   struct iommu_group *iommu_group,
> +					   unsigned int *id)
> +{
> +	struct vfio_iommu *iommu = iommu_data;
> +	struct vfio_domain *domain = NULL, *d;
> +	struct device *iommu_device = NULL;
> +	struct bus_type *bus = NULL;
> +	int ret;
> +
> +	if (!iommu || !iommu_group || !id)
> +		return -EINVAL;
> +
> +	mutex_lock(&iommu->lock);
> +	ret = iommu_group_for_each_dev(iommu_group, &bus, vfio_bus_type);
> +	if (ret)
> +		goto out;
> +
> +	if (!vfio_bus_is_mdev(bus)) {
> +		ret = -EINVAL;
> +		goto out;
> +	}
> +
> +	ret = iommu_group_for_each_dev(iommu_group, &iommu_device,
> +				       vfio_mdev_iommu_device);
> +	if (ret || !iommu_device ||
> +	    !iommu_dev_feature_enabled(iommu_device, IOMMU_DEV_FEAT_AUX)) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	list_for_each_entry(d, &iommu->domain_list, next) {
> +		if (find_iommu_group(d, iommu_group)) {
> +			domain = d;
> +			break;
> +		}
> +	}
> +
> +	if (!domain) {
> +		ret = -ENODEV;
> +		goto out;
> +	}
> +
> +	ret = iommu_aux_get_pasid(domain->domain, iommu_device);
> +	if (ret > 0) {
> +		*id = ret;
> +		ret = 0;
> +	} else {
> +		ret = -ENOSPC;
> +	}
> +
> +out:
> +	mutex_unlock(&iommu->lock);
> +	return ret;
> +}
> +
>  static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
>  	.name			= "vfio-iommu-type1",
>  	.owner			= THIS_MODULE,
> @@ -2993,6 +3049,7 @@ static const struct vfio_iommu_driver_ops vfio_iommu_driver_ops_type1 = {
>  	.register_notifier
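A sketch of the alternative Alex is suggesting — export the group-to-domain lookup and let drivers derive the pasid themselves (the function name and exact signature here are hypothetical, not part of the posted series):

	/* hypothetical export: the iommu domain backing a vfio_group */
	struct iommu_domain *vfio_group_iommu_domain(struct vfio_group *group);

	/* caller side, in a driver that already holds the group and the
	 * mdev's backing iommu device:
	 */
	domain = vfio_group_iommu_domain(group);
	if (IS_ERR_OR_NULL(domain))
		return -ENODEV;

	pasid = iommu_aux_get_pasid(domain, iommu_device);

This keeps the vfio core generic: it answers "which domain backs this group" and leaves the aux-domain/pasid specifics to the caller.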
Re: [PATCH v6 3/3] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module
On Mon, Nov 16, 2020 at 7:59 AM Thierry Reding wrote:
>
> On Fri, Nov 06, 2020 at 04:27:10AM +, John Stultz wrote:
> > diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> > index 04878caf6da49..c64d7a2b65134 100644
> > --- a/drivers/iommu/Kconfig
> > +++ b/drivers/iommu/Kconfig
> > @@ -248,6 +248,7 @@ config SPAPR_TCE_IOMMU
> >  config ARM_SMMU
> >  	tristate "ARM Ltd. System MMU (SMMU) Support"
> >  	depends on ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64)
> > +	depends on QCOM_SCM || !QCOM_SCM #if QCOM_SCM=m this can't be =y
> >  	select IOMMU_API
> >  	select IOMMU_IO_PGTABLE_LPAE
> >  	select ARM_DMA_USE_IOMMU if ARM
>
> This, in conjunction with deferred probe timeout, causes mayhem on
> Tegra186. The problem, as far as I can tell, is that there are various
> devices that are hooked up to the ARM SMMU, but if ARM SMMU ends up
> being built as a loadable module, then those devices will initialize
> without IOMMU support (because deferred probe will time out before the
> ARM SMMU module can be loaded from the root filesystem).
>
> Unfortunately, the ARM SMMU module will eventually end up being loaded
> once the root filesystem has been mounted (for example via SDHCI or
> Ethernet, both using just plain, non-IOMMU-backed DMA API) and then
> initialize, configuring as "fault by default", which then results in a
> slew of SMMU faults from all the devices that have previously configured
> themselves without IOMMU support.

Oof. My apologies for the trouble. Thanks so much for the report.

Out of curiosity, does booting with deferred_probe_timeout=30 avoid the
issue for you?

thanks
-john
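For anyone wanting to reproduce the experiment John suggests: deferred_probe_timeout is an existing kernel command-line parameter, so this is just a boot-argument change (bootloader specifics vary):

	# append to the kernel command line; 30 is the value suggested above
	deferred_probe_timeout=30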
Re: [PATCH v11 05/13] vfio/pci: Register an iommu fault handler
Hi Eric,

I love your patch! Perhaps something to improve:

[auto build test WARNING on v5.10-rc4]
[also build test WARNING on next-20201116]
[cannot apply to vfio/next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20201116-190742
base:   09162bc32c880a791c6c0668ce0745cf7958f576
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/747ef402696e1192684908ca99f06f3d68466c04
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20201116-190742
        git checkout 747ef402696e1192684908ca99f06f3d68466c04
        # save the attached .config to linux build tree
        make W=1 ARCH=x86_64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All warnings (new ones prefixed by >>):

   In file included from include/linux/vfio.h:16,
                    from drivers/vfio/pci/vfio_pci.c:26:
   include/uapi/linux/vfio.h:1231:34: error: field 'config' has incomplete type
    1231 |  struct iommu_pasid_table_config config; /* used on SET */
         |                                  ^~
>> drivers/vfio/pci/vfio_pci.c:339:5: warning: no previous prototype for 'vfio_pci_iommu_dev_fault_handler' [-Wmissing-prototypes]
     339 | int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
         |     ^~~~

vim +/vfio_pci_iommu_dev_fault_handler +339 drivers/vfio/pci/vfio_pci.c

   338	
 > 339	int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
   340	{
   341		struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
   342		struct vfio_region_dma_fault *reg =
   343			(struct vfio_region_dma_fault *)vdev->fault_pages;
   344		struct iommu_fault *new;
   345		u32 head, tail, size;
   346		int ret = -EINVAL;
   347	
   348	
   349		if (WARN_ON(!reg))
   350			return ret;
   351	
   352		mutex_lock(&vdev->fault_queue_lock);
   353	
   354		head = reg->head;
   355		tail = reg->tail;
   356		size = reg->nb_entries;
   357	
   358		new = (struct iommu_fault *)(vdev->fault_pages + reg->offset +
   359					     head * reg->entry_size);
   360	
   361		if (CIRC_SPACE(head, tail, size) < 1) {
   362			ret = -ENOSPC;
   363			goto unlock;
   364		}
   365	
   366		*new = *fault;
   367		reg->head = (head + 1) % size;
   368		ret = 0;
   369	unlock:
   370		mutex_unlock(&vdev->fault_queue_lock);
   371		return ret;
   372	}
   373	
Re: [PATCH v4 04/24] dt-bindings: memory: mediatek: Add domain definition
On Wed, 11 Nov 2020 20:38:18 +0800, Yong Wu wrote:
> In the latest SoC, there are several HW IPs that require a special iova
> range, mainly CCU and VPU have this requirement. Take CCU as an example:
> CCU requires its iova to be located in the range (0x4000_ ~ 0x43ff_).
>
> In this patch we add a domain definition for the special port. In the
> example of CCU, if we preassign the CCU port in domain1, then the iommu
> driver will prepare an independent iommu domain of the special iova range
> for it, and the iova got from dma_alloc_attrs(ccu-dev) will be located in
> its special range.
>
> This is a preparatory patch for multi-domain support.
>
> Signed-off-by: Yong Wu
> ---
>  include/dt-bindings/memory/mtk-smi-larb-port.h | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)

Acked-by: Rob Herring
Re: [PATCH v4 01/24] dt-bindings: iommu: mediatek: Convert IOMMU to DT schema
On Wed, 11 Nov 2020 20:38:15 +0800, Yong Wu wrote:
> Convert MediaTek IOMMU to DT schema.
>
> Signed-off-by: Yong Wu
> ---
>  .../bindings/iommu/mediatek,iommu.txt  | 105 ---------
>  .../bindings/iommu/mediatek,iommu.yaml | 167 ++++++++++++
>  2 files changed, 167 insertions(+), 105 deletions(-)
>  delete mode 100644 Documentation/devicetree/bindings/iommu/mediatek,iommu.txt
>  create mode 100644 Documentation/devicetree/bindings/iommu/mediatek,iommu.yaml

Reviewed-by: Rob Herring
Re: [PATCH v6 3/3] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module
On Mon, Nov 16, 2020 at 04:59:36PM +0100, Thierry Reding wrote:
> On Fri, Nov 06, 2020 at 04:27:10AM +, John Stultz wrote:
> > diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
> > index 7be48c1bec96d..6f431b73e617d 100644
> > --- a/drivers/firmware/qcom_scm.c
> > +++ b/drivers/firmware/qcom_scm.c
> > @@ -1280,6 +1280,7 @@ static const struct of_device_id qcom_scm_dt_match[] = {
> >  	{ .compatible = "qcom,scm" },
> >  	{}
> >  };
> > +MODULE_DEVICE_TABLE(of, qcom_scm_dt_match);
> >  
> >  static struct platform_driver qcom_scm_driver = {
> >  	.driver = {
> > @@ -1295,3 +1296,6 @@ static int __init qcom_scm_init(void)
> >  	return platform_driver_register(&qcom_scm_driver);
> >  }
> >  subsys_initcall(qcom_scm_init);
> > +
> > +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. SCM driver");
> > +MODULE_LICENSE("GPL v2");
> > diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> > index 04878caf6da49..c64d7a2b65134 100644
> > --- a/drivers/iommu/Kconfig
> > +++ b/drivers/iommu/Kconfig
> > @@ -248,6 +248,7 @@ config SPAPR_TCE_IOMMU
> >  config ARM_SMMU
> >  	tristate "ARM Ltd. System MMU (SMMU) Support"
> >  	depends on ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64)
> > +	depends on QCOM_SCM || !QCOM_SCM #if QCOM_SCM=m this can't be =y
> >  	select IOMMU_API
> >  	select IOMMU_IO_PGTABLE_LPAE
> >  	select ARM_DMA_USE_IOMMU if ARM
>
> This, in conjunction with deferred probe timeout, causes mayhem on
> Tegra186. The problem, as far as I can tell, is that there are various
> devices that are hooked up to the ARM SMMU, but if ARM SMMU ends up
> being built as a loadable module, then those devices will initialize
> without IOMMU support (because deferred probe will time out before the
> ARM SMMU module can be loaded from the root filesystem).
>
> Unfortunately, the ARM SMMU module will eventually end up being loaded
> once the root filesystem has been mounted (for example via SDHCI or
> Ethernet, both using just plain, non-IOMMU-backed DMA API) and then
> initialize, configuring as "fault by default", which then results in a
> slew of SMMU faults from all the devices that have previously configured
> themselves without IOMMU support.

I wonder if fw_devlink=on would help here?

But either way, I'd be more inclined to revert this change if it's causing
problems for !QCOM devices.

Linus -- please can you drop this one (patch 3/3) for now, given that it's
causing problems?

Cheers,

Will
Re: [PATCH v12 15/15] iommu/smmuv3: Add PASID cache invalidation per PASID
Hi Eric,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iommu/next]
[also build test WARNING on linus/master v5.10-rc4 next-20201116]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-IOMMU-part/20201116-185039
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: arm64-randconfig-r034-20201115 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project c044709b8fbea2a9a375e4173a6bd735f6866c0c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # https://github.com/0day-ci/linux/commit/95e4ccc61b7a7c06e1e0c6c01f362d590136ad3c
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Eric-Auger/SMMUv3-Nested-Stage-Setup-IOMMU-part/20201116-185039
        git checkout 95e4ccc61b7a7c06e1e0c6c01f362d590136ad3c
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All warnings (new ones prefixed by >>):

>> drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3010:8: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
           if (!info->flags & IOMMU_INV_PASID_FLAGS_PASID)
               ^            ~
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3010:8: note: add parentheses after the '!' to evaluate the bitwise operator first
           if (!info->flags & IOMMU_INV_PASID_FLAGS_PASID)
               ^
                (                                         )
   drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c:3010:8: note: add parentheses around left hand side expression to silence this warning
           if (!info->flags & IOMMU_INV_PASID_FLAGS_PASID)
               ^
               (           )
   1 warning generated.

vim +3010 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c

  2960	
  2961	static int
  2962	arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev,
  2963				  struct iommu_cache_invalidate_info *inv_info)
  2964	{
  2965		struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
  2966		struct arm_smmu_device *smmu = smmu_domain->smmu;
  2967	
  2968		if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
  2969			return -EINVAL;
  2970	
  2971		if (!smmu)
  2972			return -EINVAL;
  2973	
  2974		if (inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1)
  2975			return -EINVAL;
  2976	
  2977		if (inv_info->cache & IOMMU_CACHE_INV_TYPE_IOTLB) {
  2978			if (inv_info->granularity == IOMMU_INV_GRANU_PASID) {
  2979				struct iommu_inv_pasid_info *info =
  2980					&inv_info->granu.pasid_info;
  2981	
  2982				if (!(info->flags & IOMMU_INV_PASID_FLAGS_ARCHID) ||
  2983				    (info->flags & IOMMU_INV_PASID_FLAGS_PASID))
  2984					return -EINVAL;
  2985	
  2986				__arm_smmu_tlb_inv_context(smmu_domain, info->archid);
  2987	
  2988			} else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR) {
  2989				struct iommu_inv_addr_info *info = &inv_info->granu.addr_info;
  2990				size_t size = info->nb_granules * info->granule_size;
  2991				bool leaf = info->flags & IOMMU_INV_ADDR_FLAGS_LEAF;
  2992	
  2993				if (!(info->flags & IOMMU_INV_ADDR_FLAGS_ARCHID) ||
  2994				    (info->flags & IOMMU_INV_ADDR_FLAGS_PASID))
  2995					return -EINVAL;
  2996	
  2997				__arm_smmu_tlb_inv_range(info->addr, size,
  2998							 info->granule_size, leaf,
  2999							 smmu_domain, info->archid);
  3000	
  3001				arm_smmu_cmdq_issue_sync(smmu);
  3002			} else {
  3003	
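Clang is flagging an operator-precedence bug: '!' binds tighter than '&', so the expression at line 3010 evaluates '(!info->flags) & FLAG'. Judging from the ARCHID/PASID checks earlier in the same function, the likely intent is (a sketch of the fix, not the author's confirmed change):

	/* likely intent: bail out when the PASID flag is not set */
	if (!(info->flags & IOMMU_INV_PASID_FLAGS_PASID))
		return -EINVAL;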
Re: [PATCH v6 3/3] firmware: QCOM_SCM: Allow qcom_scm driver to be loadable as a permanent module
On Fri, Nov 06, 2020 at 04:27:10AM +, John Stultz wrote:
> Allow the qcom_scm driver to be loadable as a permanent module.
>
> This still uses the "depends on QCOM_SCM || !QCOM_SCM" bit to
> ensure that drivers that call into the qcom_scm driver are
> also built as modules. While not ideal in some cases it's the
> only safe way I can find to avoid build errors without having
> those drivers select QCOM_SCM and have to force it on (as
> QCOM_SCM=n can be valid for those drivers).
>
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: Andy Gross
> Cc: Bjorn Andersson
> Cc: Joerg Roedel
> Cc: Thomas Gleixner
> Cc: Jason Cooper
> Cc: Marc Zyngier
> Cc: Linus Walleij
> Cc: Vinod Koul
> Cc: Kalle Valo
> Cc: Maulik Shah
> Cc: Lina Iyer
> Cc: Saravana Kannan
> Cc: Todd Kjos
> Cc: Greg Kroah-Hartman
> Cc: linux-arm-...@vger.kernel.org
> Cc: iommu@lists.linux-foundation.org
> Cc: linux-g...@vger.kernel.org
> Acked-by: Kalle Valo
> Acked-by: Greg Kroah-Hartman
> Reviewed-by: Bjorn Andersson
> Signed-off-by: John Stultz
> ---
> v3:
> * Fix __arm_smccc_smc build issue reported by
>   kernel test robot
> v4:
> * Add "depends on QCOM_SCM || !QCOM_SCM" bit to ath10k
>   config that requires it.
> v5:
> * Fix QCOM_QCM typo in Kconfig, it should be QCOM_SCM
> ---
>  drivers/firmware/Kconfig                | 4 ++--
>  drivers/firmware/Makefile               | 3 ++-
>  drivers/firmware/qcom_scm.c             | 4 ++++
>  drivers/iommu/Kconfig                   | 2 ++
>  drivers/net/wireless/ath/ath10k/Kconfig | 1 +
>  5 files changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
> index 3315e3c215864..5e369928bc567 100644
> --- a/drivers/firmware/Kconfig
> +++ b/drivers/firmware/Kconfig
> @@ -235,8 +235,8 @@ config INTEL_STRATIX10_RSU
>  	  Say Y here if you want Intel RSU support.
>  
>  config QCOM_SCM
> -	bool
> -	depends on ARM || ARM64
> +	tristate "Qcom SCM driver"
> +	depends on (ARM && HAVE_ARM_SMCCC) || ARM64
>  	select RESET_CONTROLLER
>  
>  config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
> diff --git a/drivers/firmware/Makefile b/drivers/firmware/Makefile
> index 5e013b6a3692e..523173cbff335 100644
> --- a/drivers/firmware/Makefile
> +++ b/drivers/firmware/Makefile
> @@ -17,7 +17,8 @@ obj-$(CONFIG_ISCSI_IBFT)	+= iscsi_ibft.o
>  obj-$(CONFIG_FIRMWARE_MEMMAP)	+= memmap.o
>  obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o
>  obj-$(CONFIG_FW_CFG_SYSFS)	+= qemu_fw_cfg.o
> -obj-$(CONFIG_QCOM_SCM)		+= qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o
> +obj-$(CONFIG_QCOM_SCM)		+= qcom-scm.o
> +qcom-scm-objs += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o
>  obj-$(CONFIG_TI_SCI_PROTOCOL)	+= ti_sci.o
>  obj-$(CONFIG_TRUSTED_FOUNDATIONS) += trusted_foundations.o
>  obj-$(CONFIG_TURRIS_MOX_RWTM)	+= turris-mox-rwtm.o
> diff --git a/drivers/firmware/qcom_scm.c b/drivers/firmware/qcom_scm.c
> index 7be48c1bec96d..6f431b73e617d 100644
> --- a/drivers/firmware/qcom_scm.c
> +++ b/drivers/firmware/qcom_scm.c
> @@ -1280,6 +1280,7 @@ static const struct of_device_id qcom_scm_dt_match[] = {
>  	{ .compatible = "qcom,scm" },
>  	{}
>  };
> +MODULE_DEVICE_TABLE(of, qcom_scm_dt_match);
>  
>  static struct platform_driver qcom_scm_driver = {
>  	.driver = {
> @@ -1295,3 +1296,6 @@ static int __init qcom_scm_init(void)
>  	return platform_driver_register(&qcom_scm_driver);
>  }
>  subsys_initcall(qcom_scm_init);
> +
> +MODULE_DESCRIPTION("Qualcomm Technologies, Inc. SCM driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 04878caf6da49..c64d7a2b65134 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -248,6 +248,7 @@ config SPAPR_TCE_IOMMU
>  config ARM_SMMU
>  	tristate "ARM Ltd. System MMU (SMMU) Support"
>  	depends on ARM64 || ARM || (COMPILE_TEST && !GENERIC_ATOMIC64)
> +	depends on QCOM_SCM || !QCOM_SCM #if QCOM_SCM=m this can't be =y
>  	select IOMMU_API
>  	select IOMMU_IO_PGTABLE_LPAE
>  	select ARM_DMA_USE_IOMMU if ARM

This, in conjunction with deferred probe timeout, causes mayhem on
Tegra186. The problem, as far as I can tell, is that there are various
devices that are hooked up to the ARM SMMU, but if ARM SMMU ends up
being built as a loadable module, then those devices will initialize
without IOMMU support (because deferred probe will time out before the
ARM SMMU module can be loaded from the root filesystem).

Unfortunately, the ARM SMMU module will eventually end up being loaded
once the root filesystem has been mounted (for example via SDHCI or
Ethernet, both using just plain, non-IOMMU-backed DMA API) and then
initialize, configuring as "fault by default", which then results in a
slew of SMMU faults from all the devices that have previously configured
themselves without IOMMU support.

One way to work around this is to just disable all QCOM-related drivers
for the
Re: [PATCH v12 01/15] iommu: Introduce attach/detach_pasid_table API
Hi Eric,

I love your patch! Perhaps something to improve:

[auto build test WARNING on iommu/next]
[also build test WARNING on linus/master v5.10-rc4 next-20201116]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-IOMMU-part/20201116-185039
base:   https://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git next
config: arm64-randconfig-r034-20201115 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project c044709b8fbea2a9a375e4173a6bd735f6866c0c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install arm64 cross compiling tool for clang build
        # apt-get install binutils-aarch64-linux-gnu
        # https://github.com/0day-ci/linux/commit/54be9a9e014a566f9c7640da201c24cfb1eda06e
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Eric-Auger/SMMUv3-Nested-Stage-Setup-IOMMU-part/20201116-185039
        git checkout 54be9a9e014a566f9c7640da201c24cfb1eda06e
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=arm64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All warnings (new ones prefixed by >>):

>> drivers/iommu/iommu.c:2225:34: warning: overlapping comparisons always evaluate to false [-Wtautological-overlap-compare]
           if (pasid_table_data.config < 1 && pasid_table_data.config > 3)
               ^~~~
   1 warning generated.

vim +2225 drivers/iommu/iommu.c

  2182	
  2183	int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
  2184					  void __user *uinfo)
  2185	{
  2186		struct iommu_pasid_table_config pasid_table_data = { 0 };
  2187		u32 minsz;
  2188	
  2189		if (unlikely(!domain->ops->attach_pasid_table))
  2190			return -ENODEV;
  2191	
  2192		/*
  2193		 * No new spaces can be added before the variable sized union, the
  2194		 * minimum size is the offset to the union.
  2195		 */
  2196		minsz = offsetof(struct iommu_pasid_table_config, vendor_data);
  2197	
  2198		/* Copy minsz from user to get flags and argsz */
  2199		if (copy_from_user(&pasid_table_data, uinfo, minsz))
  2200			return -EFAULT;
  2201	
  2202		/* Fields before the variable size union are mandatory */
  2203		if (pasid_table_data.argsz < minsz)
  2204			return -EINVAL;
  2205	
  2206		/* PASID and address granu require additional info beyond minsz */
  2207		if (pasid_table_data.version != PASID_TABLE_CFG_VERSION_1)
  2208			return -EINVAL;
  2209		if (pasid_table_data.format == IOMMU_PASID_FORMAT_SMMUV3 &&
  2210		    pasid_table_data.argsz <
  2211			offsetofend(struct iommu_pasid_table_config, vendor_data.smmuv3))
  2212			return -EINVAL;
  2213	
  2214		/*
  2215		 * User might be using a newer UAPI header which has a larger data
  2216		 * size, we shall support the existing flags within the current
  2217		 * size. Copy the remaining user data _after_ minsz but not more
  2218		 * than the current kernel supported size.
  2219		 */
  2220		if (copy_from_user((void *)&pasid_table_data + minsz, uinfo + minsz,
  2221				   min_t(u32, pasid_table_data.argsz, sizeof(pasid_table_data)) - minsz))
  2222			return -EFAULT;
  2223	
  2224		/* Now the argsz is validated, check the content */
> 2225		if (pasid_table_data.config < 1 && pasid_table_data.config > 3)
  2226			return -EINVAL;
  2227	
  2228		return domain->ops->attach_pasid_table(domain, &pasid_table_data);
  2229	}
  2230	EXPORT_SYMBOL_GPL(iommu_uapi_attach_pasid_table);
  2231	
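No value can be both below 1 and above 3, so the sanity check at line 2225 can never fire. The range check presumably wants ||; a sketch of the likely fix:

	/* reject config values outside the valid 1..3 range */
	if (pasid_table_data.config < 1 || pasid_table_data.config > 3)
		return -EINVAL;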
Re: [PATCH v11 05/13] vfio/pci: Register an iommu fault handler
Hi Eric,

I love your patch! Perhaps something to improve:

[auto build test WARNING on v5.10-rc4]
[also build test WARNING on next-20201116]
[cannot apply to vfio/next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20201116-190742
base:   09162bc32c880a791c6c0668ce0745cf7958f576
config: powerpc64-randconfig-r026-20201116 (attached as .config)
compiler: clang version 12.0.0 (https://github.com/llvm/llvm-project c044709b8fbea2a9a375e4173a6bd735f6866c0c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install powerpc64 cross compiling tool for clang build
        # apt-get install binutils-powerpc64-linux-gnu
        # https://github.com/0day-ci/linux/commit/747ef402696e1192684908ca99f06f3d68466c04
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Eric-Auger/SMMUv3-Nested-Stage-Setup-VFIO-part/20201116-190742
        git checkout 747ef402696e1192684908ca99f06f3d68466c04
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=powerpc64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All warnings (new ones prefixed by >>):

   In file included from drivers/vfio/pci/vfio_pci.c:26:
   In file included from include/linux/vfio.h:16:
   include/uapi/linux/vfio.h:1231:34: error: field has incomplete type 'struct iommu_pasid_table_config'
           struct iommu_pasid_table_config config; /* used on SET */
                                           ^
   include/uapi/linux/vfio.h:1231:9: note: forward declaration of 'struct iommu_pasid_table_config'
           struct iommu_pasid_table_config config; /* used on SET */
           ^
>> drivers/vfio/pci/vfio_pci.c:339:5: warning: no previous prototype for function 'vfio_pci_iommu_dev_fault_handler' [-Wmissing-prototypes]
   int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
       ^
   drivers/vfio/pci/vfio_pci.c:339:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
   int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
   ^
   static
   1 warning and 1 error generated.

vim +/vfio_pci_iommu_dev_fault_handler +339 drivers/vfio/pci/vfio_pci.c

   338	
 > 339	int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
   340	{
   341		struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
   342		struct vfio_region_dma_fault *reg =
   343			(struct vfio_region_dma_fault *)vdev->fault_pages;
   344		struct iommu_fault *new;
   345		u32 head, tail, size;
   346		int ret = -EINVAL;
   347	
   348	
   349		if (WARN_ON(!reg))
   350			return ret;
   351	
   352		mutex_lock(&vdev->fault_queue_lock);
   353	
   354		head = reg->head;
   355		tail = reg->tail;
   356		size = reg->nb_entries;
   357	
   358		new = (struct iommu_fault *)(vdev->fault_pages + reg->offset +
   359					     head * reg->entry_size);
   360	
   361		if (CIRC_SPACE(head, tail, size) < 1) {
   362			ret = -ENOSPC;
   363			goto unlock;
   364		}
   365	
   366		*new = *fault;
   367		reg->head = (head + 1) % size;
   368		ret = 0;
   369	unlock:
   370		mutex_unlock(&vdev->fault_queue_lock);
   371		return ret;
   372	}
   373	
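The -Wmissing-prototypes warning is addressed exactly as the robot's note suggests; the handler is only referenced through iommu_register_device_fault_handler(), so it can be made static (a sketch of the one-line change):

	-int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
	+static int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)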
Re: iommu/vt-d: Cure VF irqdomain hiccup
On 2020/11/16 17:47, Geert Uytterhoeven wrote:
> Hi Thomas,
>
> On Thu, Nov 12, 2020 at 8:16 PM Thomas Gleixner wrote:
>> The recent changes to store the MSI irqdomain pointer in struct device
>> missed that Intel DMAR does not register virtual function devices. Due to
>> that a VF device gets the plain PCI-MSI domain assigned and then issues
>> compat MSI messages which get caught by the interrupt remapping unit.
>>
>> Cure that by inheriting the irq domain from the physical function
>> device.
>>
>> That's a temporary workaround. The correct fix is to inherit the irq domain
>> from the bus, but that's a larger effort which needs quite some other
>> changes to the way how x86 manages PCI and MSI domains.
>>
>> Fixes: 85a8dfc57a0b ("iommm/vt-d: Store irq domain in struct device")
>> Reported-by: Jason Gunthorpe
>> Signed-off-by: Thomas Gleixner
>> ---
>>  drivers/iommu/intel/dmar.c | 19 ++-
>>  1 file changed, 18 insertions(+), 1 deletion(-)
>>
>> --- a/drivers/iommu/intel/dmar.c
>> +++ b/drivers/iommu/intel/dmar.c
>> @@ -333,6 +333,11 @@ static void dmar_pci_bus_del_dev(struct
>>  	dmar_iommu_notify_scope_dev(info);
>>  }
>>
>> +static inline void vf_inherit_msi_domain(struct pci_dev *pdev)
>> +{
>> +	dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pdev->physfn->dev));
>
> If CONFIG_PCI_ATS is not set:
>
>     error: 'struct pci_dev' has no member named 'physfn'
>
> http://kisskb.ellerman.id.au/kisskb/buildresult/14400927/

Maybe pci_physfn() helper should be used here.

Best regards,
baolu
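For illustration, a minimal sketch of Baolu's suggestion; pci_physfn() is defined for both CONFIG_PCI_ATS=y and =n (it simply returns the device itself when there is no VF data), so it avoids the build failure without extra #ifdefs. Treat this as a sketch, not the final patch:

	static inline void vf_inherit_msi_domain(struct pci_dev *pdev)
	{
		/* pci_physfn() degrades to pdev itself when pdev is not a VF */
		struct pci_dev *pf = pci_physfn(pdev);

		dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pf->dev));
	}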
Re: iommu/vt-d: Cure VF irqdomain hiccup
Geert,

On Mon, Nov 16 2020 at 10:47, Geert Uytterhoeven wrote:
> On Thu, Nov 12, 2020 at 8:16 PM Thomas Gleixner wrote:
>> The recent changes to store the MSI irqdomain pointer in struct device
>> missed that Intel DMAR does not register virtual function devices. Due to
>> that a VF device gets the plain PCI-MSI domain assigned and then issues
>> compat MSI messages which get caught by the interrupt remapping unit.
>>
>> Cure that by inheriting the irq domain from the physical function
>> device.
>>
>> That's a temporary workaround. The correct fix is to inherit the irq domain
>> from the bus, but that's a larger effort which needs quite some other
>> changes to the way how x86 manages PCI and MSI domains.
>>
>> Fixes: 85a8dfc57a0b ("iommm/vt-d: Store irq domain in struct device")
>> Reported-by: Jason Gunthorpe
>> Signed-off-by: Thomas Gleixner
>> ---
>>  drivers/iommu/intel/dmar.c | 19 ++-
>>  1 file changed, 18 insertions(+), 1 deletion(-)
>>
>> --- a/drivers/iommu/intel/dmar.c
>> +++ b/drivers/iommu/intel/dmar.c
>> @@ -333,6 +333,11 @@ static void dmar_pci_bus_del_dev(struct
>>  	dmar_iommu_notify_scope_dev(info);
>>  }
>>
>> +static inline void vf_inherit_msi_domain(struct pci_dev *pdev)
>> +{
>> +	dev_set_msi_domain(&pdev->dev, dev_get_msi_domain(&pdev->physfn->dev));
>
> If CONFIG_PCI_ATS is not set:
>
>     error: 'struct pci_dev' has no member named 'physfn'

thanks for pointing that out. Yet moar ifdeffery, oh well...

Thanks,

        tglx
[PATCH v11 13/13] vfio/pci: Inject page response upon response region fill
When the userspace increments the head of the page response buffer ring, let's push the response into the iommu layer. This is done through a workqueue that pops the responses from the ring buffer and increments the tail.

Signed-off-by: Eric Auger
---
 drivers/vfio/pci/vfio_pci.c         | 40 +++++++++++
 drivers/vfio/pci/vfio_pci_private.h |  8 ++
 drivers/vfio/pci/vfio_pci_rdwr.c    |  1 +
 3 files changed, 49 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index e9a904ce3f0d..beea70d70151 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -542,6 +542,32 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
 	return ret;
 }
 
+static void dma_response_inject(struct work_struct *work)
+{
+	struct vfio_pci_dma_fault_response_work *rwork =
+		container_of(work, struct vfio_pci_dma_fault_response_work, inject);
+	struct vfio_region_dma_fault_response *header = rwork->header;
+	struct vfio_pci_device *vdev = rwork->vdev;
+	struct iommu_page_response *resp;
+	u32 tail, head, size;
+
+	mutex_lock(&vdev->fault_response_queue_lock);
+
+	tail = header->tail;
+	head = header->head;
+	size = header->nb_entries;
+
+	while (CIRC_CNT(head, tail, size) >= 1) {
+		resp = (struct iommu_page_response *)(vdev->fault_response_pages + header->offset +
+						      tail * header->entry_size);
+
+		/* TODO: properly handle the return value */
+		iommu_page_response(&vdev->pdev->dev, resp);
+		header->tail = tail = (tail + 1) % size;
+	}
+	mutex_unlock(&vdev->fault_response_queue_lock);
+}
+
 #define DMA_FAULT_RESPONSE_RING_LENGTH 512
 
 static int vfio_pci_dma_fault_response_init(struct vfio_pci_device *vdev)
@@ -585,8 +611,22 @@ static int vfio_pci_dma_fault_response_init(struct vfio_pci_device *vdev)
 	header->nb_entries = DMA_FAULT_RESPONSE_RING_LENGTH;
 	header->offset = PAGE_SIZE;
 
+	vdev->response_work = kzalloc(sizeof(*vdev->response_work), GFP_KERNEL);
+	if (!vdev->response_work)
+		goto out;
+	vdev->response_work->header = header;
+	vdev->response_work->vdev = vdev;
+
+	/* launch the thread that will extract the response */
+	INIT_WORK(&vdev->response_work->inject, dma_response_inject);
+	vdev->dma_fault_response_wq =
+		create_singlethread_workqueue("vfio-dma-fault-response");
+	if (!vdev->dma_fault_response_wq)
+		return -ENOMEM;
+
 	return 0;
 out:
+	kfree(vdev->fault_response_pages);
 	vdev->fault_response_pages = NULL;
 	return ret;
 }
diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h
index 035634521cd0..5944f96ced0c 100644
--- a/drivers/vfio/pci/vfio_pci_private.h
+++ b/drivers/vfio/pci/vfio_pci_private.h
@@ -52,6 +52,12 @@ struct vfio_pci_irq_ctx {
 	struct irq_bypass_producer	producer;
 };
 
+struct vfio_pci_dma_fault_response_work {
+	struct work_struct inject;
+	struct vfio_region_dma_fault_response *header;
+	struct vfio_pci_device *vdev;
+};
+
 struct vfio_pci_device;
 struct vfio_pci_region;
 
@@ -145,6 +151,8 @@ struct vfio_pci_device {
 	struct eventfd_ctx	*req_trigger;
 	u8			*fault_pages;
 	u8			*fault_response_pages;
+	struct workqueue_struct *dma_fault_response_wq;
+	struct vfio_pci_dma_fault_response_work *response_work;
 	struct mutex		fault_queue_lock;
 	struct mutex		fault_response_queue_lock;
 	struct list_head	dummy_resources_list;
diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index efde0793360b..78c494fe35cc 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -430,6 +430,7 @@ size_t vfio_pci_dma_fault_response_rw(struct vfio_pci_device *vdev, char __user
 		mutex_lock(&vdev->fault_response_queue_lock);
 		header->head = new_head;
 		mutex_unlock(&vdev->fault_response_queue_lock);
+		queue_work(vdev->dma_fault_response_wq, &vdev->response_work->inject);
 	} else {
 		if (copy_to_user(buf, base + pos, count))
 			return -EFAULT;
-- 
2.21.3
[PATCH v11 09/13] vfio: Add new IRQ for DMA fault reporting
Add a new IRQ type/subtype to get notification on nested stage DMA faults.

Signed-off-by: Eric Auger
---
 include/uapi/linux/vfio.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 0e2bfbeccd08..1e5c82f9d14d 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -722,6 +722,9 @@ struct vfio_irq_info_cap_type {
 	__u32 subtype;  /* type specific */
 };
 
+#define VFIO_IRQ_TYPE_NESTED			(1)
+#define VFIO_IRQ_SUBTYPE_DMA_FAULT		(1)
+
 /**
  * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
  *
-- 
2.21.3
[PATCH v11 12/13] vfio/pci: Register a DMA fault response region
In preparation for vSVA, let's register a DMA fault response region, where the userspace will push the page responses and increment the head of the buffer. The kernel will pop those responses and inject them on the iommu side.

Signed-off-by: Eric Auger
---
 drivers/vfio/pci/vfio_pci.c         | 114 +++++++++++++++++++++++++---
 drivers/vfio/pci/vfio_pci_private.h |   5 ++
 drivers/vfio/pci/vfio_pci_rdwr.c    |  39 ++++++++++
 include/uapi/linux/vfio.h           |  32 ++++++++
 4 files changed, 181 insertions(+), 9 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 65a83fd0e8c0..e9a904ce3f0d 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -318,9 +318,20 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
 	kfree(vdev->fault_pages);
 }
 
-static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
-				   struct vfio_pci_region *region,
-				   struct vm_area_struct *vma)
+static void
+vfio_pci_dma_fault_response_release(struct vfio_pci_device *vdev,
+				    struct vfio_pci_region *region)
+{
+	if (vdev->dma_fault_response_wq)
+		destroy_workqueue(vdev->dma_fault_response_wq);
+	kfree(vdev->fault_response_pages);
+	vdev->fault_response_pages = NULL;
+}
+
+static int __vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+				     struct vfio_pci_region *region,
+				     struct vm_area_struct *vma,
+				     u8 *pages)
 {
 	u64 phys_len, req_len, pgoff, req_start;
 	unsigned long long addr;
@@ -333,14 +344,14 @@ static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
 		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
 	req_start = pgoff << PAGE_SHIFT;
 
-	/* only the second page of the producer fault region is mmappable */
+	/* only the second page of the fault region is mmappable */
 	if (req_start < PAGE_SIZE)
 		return -EINVAL;
 
 	if (req_start + req_len > phys_len)
 		return -EINVAL;
 
-	addr = virt_to_phys(vdev->fault_pages);
+	addr = virt_to_phys(pages);
 	vma->vm_private_data = vdev;
 	vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
 
@@ -349,13 +360,29 @@ static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
 	return ret;
 }
 
-static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
-					     struct vfio_pci_region *region,
-					     struct vfio_info_cap *caps)
+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+				   struct vfio_pci_region *region,
+				   struct vm_area_struct *vma)
+{
+	return __vfio_pci_dma_fault_mmap(vdev, region, vma, vdev->fault_pages);
+}
+
+static int
+vfio_pci_dma_fault_response_mmap(struct vfio_pci_device *vdev,
+				 struct vfio_pci_region *region,
+				 struct vm_area_struct *vma)
+{
+	return __vfio_pci_dma_fault_mmap(vdev, region, vma, vdev->fault_response_pages);
+}
+
+static int __vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
+					       struct vfio_pci_region *region,
+					       struct vfio_info_cap *caps,
+					       u32 cap_id)
 {
 	struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
 	struct vfio_region_info_cap_fault cap = {
-		.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
+		.header.id = cap_id,
 		.header.version = 1,
 		.version = 1,
 	};
@@ -383,6 +410,14 @@ static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
 	return ret;
 }
 
+static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
+					     struct vfio_pci_region *region,
+					     struct vfio_info_cap *caps)
+{
+	return __vfio_pci_dma_fault_add_capability(vdev, region, caps,
+						   VFIO_REGION_INFO_CAP_DMA_FAULT);
+}
+
 static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.rw		= vfio_pci_dma_fault_rw,
 	.release	= vfio_pci_dma_fault_release,
@@ -390,6 +425,13 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.add_capability = vfio_pci_dma_fault_add_capability,
 };
 
+static const struct vfio_pci_regops vfio_pci_dma_fault_response_regops = {
+	.rw		= vfio_pci_dma_fault_response_rw,
+	.release	= vfio_pci_dma_fault_response_release,
+	.mmap		= vfio_pci_dma_fault_response_mmap,
+
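Reading the kernel-side consumer (dma_response_inject() in the next patch) together with the rdwr hook, the expected userspace flow is roughly the following. This is a hedged sketch: hdr, ring, device_fd and region_offset are placeholders, the ring is assumed to be mmapped from the second page of the region, and the response header layout is assumed to mirror the fault region header:

	struct iommu_page_response resp = { /* filled from the fault data */ };
	uint32_t head = hdr->head;	/* header read through the region rw interface */

	if ((head + 1) % hdr->nb_entries != hdr->tail) {	/* space available */
		ring[head] = resp;				/* publish one response */
		head = (head + 1) % hdr->nb_entries;
		/* writing head back through the region triggers the inject work */
		pwrite(device_fd, &head, sizeof(head),
		       region_offset + offsetof(struct vfio_region_dma_fault_response, head));
	}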
[PATCH v11 07/13] vfio: Use capability chains to handle device specific irq
From: Tina Zhang

Caps the number of irqs with fixed indexes and uses capability chains to chain device specific irqs.

Signed-off-by: Tina Zhang
Signed-off-by: Eric Auger

[Eric: Put cap_offset at the end of the vfio_irq_info struct, remove GFX IRQ
at the moment and remove any reference to this latter in the commit message]
---
---
 include/uapi/linux/vfio.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 629dfb38d9e7..0e2bfbeccd08 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -701,11 +701,27 @@ struct vfio_irq_info {
 #define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
 #define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
 #define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
+#define VFIO_IRQ_INFO_FLAG_CAPS		(1 << 4) /* Info supports caps */
 	__u32	index;		/* IRQ index */
 	__u32	count;		/* Number of IRQs within this index */
+	__u32	cap_offset;	/* Offset within info struct of first cap */
 };
 #define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)
 
+/*
+ * The irq type capability allows IRQs unique to a specific device or
+ * class of devices to be exposed.
+ *
+ * The structures below define version 1 of this capability.
+ */
+#define VFIO_IRQ_INFO_CAP_TYPE      3
+
+struct vfio_irq_info_cap_type {
+	struct vfio_info_cap_header header;
+	__u32 type;     /* global per bus driver */
+	__u32 subtype;  /* type specific */
+};
+
 /**
  * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
  *
@@ -807,7 +823,8 @@ enum {
 	VFIO_PCI_MSIX_IRQ_INDEX,
 	VFIO_PCI_ERR_IRQ_INDEX,
 	VFIO_PCI_REQ_IRQ_INDEX,
-	VFIO_PCI_NUM_IRQS
+	VFIO_PCI_NUM_IRQS = 5	/* Fixed user ABI, IRQ indexes >=5 use   */
+				/* device specific cap to define content */
 };
 
 /*
-- 
2.21.3
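For reference, a short sketch of how userspace would walk this chain, mirroring how VFIO region-info capability chains are already consumed (buffer management and error handling elided; assume info was re-queried with a buffer of info->argsz bytes):

	struct vfio_irq_info *info;	/* result of VFIO_DEVICE_GET_IRQ_INFO */
	struct vfio_info_cap_header *hdr;
	__u32 off;

	if (info->flags & VFIO_IRQ_INFO_FLAG_CAPS) {
		for (off = info->cap_offset; off; off = hdr->next) {
			hdr = (struct vfio_info_cap_header *)((char *)info + off);
			if (hdr->id == VFIO_IRQ_INFO_CAP_TYPE) {
				struct vfio_irq_info_cap_type *cap =
					(struct vfio_irq_info_cap_type *)hdr;
				/* match cap->type / cap->subtype, e.g. the
				 * nested DMA fault IRQ added later in the series */
			}
		}
	}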
[PATCH v11 05/13] vfio/pci: Register an iommu fault handler
Register an IOMMU fault handler which records faults in the DMA FAULT region ring buffer. In a subsequent patch, we will add the signaling of a specific eventfd to allow the userspace to be notified whenever a new fault has shown up.

Signed-off-by: Eric Auger

---
v11 -> v12:
- take the fault_queue_lock before reading header (Zenghui)
- also record recoverable errors

v10 -> v11:
- move iommu_unregister_device_fault_handler into vfio_pci_disable
- check fault_pages != 0

v8 -> v9:
- handler now takes an iommu_fault handle
- eventfd signaling moved to a subsequent patch
- check the fault type and return an error if != UNRECOV
- still the fault handler registration can fail. We need to reach
  an agreement about how to deal with the situation

v3 -> v4:
- move iommu_unregister_device_fault_handler to vfio_pci_release
---
 drivers/vfio/pci/vfio_pci.c | 45 +++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index 7546a81e7fb6..b39d6ed66c71 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <linux/circ_buf.h>
 
 #include "vfio_pci_private.h"
 
@@ -335,6 +336,41 @@ static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.add_capability = vfio_pci_dma_fault_add_capability,
 };
 
+int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data)
+{
+	struct vfio_pci_device *vdev = (struct vfio_pci_device *)data;
+	struct vfio_region_dma_fault *reg =
+		(struct vfio_region_dma_fault *)vdev->fault_pages;
+	struct iommu_fault *new;
+	u32 head, tail, size;
+	int ret = -EINVAL;
+
+
+	if (WARN_ON(!reg))
+		return ret;
+
+	mutex_lock(&vdev->fault_queue_lock);
+
+	head = reg->head;
+	tail = reg->tail;
+	size = reg->nb_entries;
+
+	new = (struct iommu_fault *)(vdev->fault_pages + reg->offset +
+				     head * reg->entry_size);
+
+	if (CIRC_SPACE(head, tail, size) < 1) {
+		ret = -ENOSPC;
+		goto unlock;
+	}
+
+	*new = *fault;
+	reg->head = (head + 1) % size;
+	ret = 0;
+unlock:
+	mutex_unlock(&vdev->fault_queue_lock);
+	return ret;
+}
+
 #define DMA_FAULT_RING_LENGTH 512
 
 static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
@@ -376,6 +412,13 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
 	header->entry_size = sizeof(struct iommu_fault);
 	header->nb_entries = DMA_FAULT_RING_LENGTH;
 	header->offset = sizeof(struct vfio_region_dma_fault);
+
+	ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
+					vfio_pci_iommu_dev_fault_handler,
+					vdev);
+	if (ret) /* the dma fault region is freed in vfio_pci_disable() */
+		goto out;
+
 	return 0;
 out:
 	kfree(vdev->fault_pages);
@@ -508,6 +551,8 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
 					VFIO_IRQ_SET_ACTION_TRIGGER,
 					vdev->irq_type, 0, 0, NULL);
 
+	WARN_ON(iommu_unregister_device_fault_handler(&vdev->pdev->dev));
+
 	/* Device closed, don't need mutex here */
 	list_for_each_entry_safe(ioeventfd, ioeventfd_tmp,
 				 &vdev->ioeventfds_list, next) {
-- 
2.21.3
[PATCH v11 03/13] vfio: VFIO_IOMMU_SET_MSI_BINDING
This patch adds the VFIO_IOMMU_SET_MSI_BINDING ioctl which aims to (un)register the guest MSI binding to the host. The latter then can use those stage 1 bindings to build a nested stage binding targeting the physical MSIs.

Signed-off-by: Eric Auger

---
v10 -> v11:
- renamed ustruct into msi_binding
- return 0 on unbind

v8 -> v9:
- merge VFIO_IOMMU_BIND_MSI/VFIO_IOMMU_UNBIND_MSI into a single
  VFIO_IOMMU_SET_MSI_BINDING ioctl
- ioctl id changed

v6 -> v7:
- removed the dev arg

v3 -> v4:
- add UNBIND
- unwind on BIND error

v2 -> v3:
- adapt to new proto of bind_guest_msi
- directly use vfio_iommu_for_each_dev

v1 -> v2:
- s/vfio_iommu_type1_guest_msi_binding/vfio_iommu_type1_bind_guest_msi
---
 drivers/vfio/vfio_iommu_type1.c | 63 +++++++++++++++++++++++++++++++++
 include/uapi/linux/vfio.h       | 20 +++++++++++
 2 files changed, 83 insertions(+)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 966909f542f1..bb2bc0971fb0 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -2657,6 +2657,41 @@ static int vfio_cache_inv_fn(struct device *dev, void *data)
 	return iommu_uapi_cache_invalidate(dc->domain, dev, (void __user *)arg);
 }
 
+static int
+vfio_bind_msi(struct vfio_iommu *iommu,
+	      dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+	struct vfio_domain *d;
+	int ret = 0;
+
+	mutex_lock(&iommu->lock);
+
+	list_for_each_entry(d, &iommu->domain_list, next) {
+		ret = iommu_bind_guest_msi(d->domain, giova, gpa, size);
+		if (ret)
+			goto unwind;
+	}
+	goto unlock;
+unwind:
+	list_for_each_entry_continue_reverse(d, &iommu->domain_list, next) {
+		iommu_unbind_guest_msi(d->domain, giova);
+	}
+unlock:
+	mutex_unlock(&iommu->lock);
+	return ret;
+}
+
+static void
+vfio_unbind_msi(struct vfio_iommu *iommu, dma_addr_t giova)
+{
+	struct vfio_domain *d;
+
+	mutex_lock(&iommu->lock);
+	list_for_each_entry(d, &iommu->domain_list, next)
+		iommu_unbind_guest_msi(d->domain, giova);
+	mutex_unlock(&iommu->lock);
+}
+
 static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu,
 					   struct vfio_info_cap *caps)
 {
@@ -2866,6 +2901,32 @@ static int vfio_iommu_type1_cache_invalidate(struct vfio_iommu *iommu,
 	return ret;
 }
 
+static int vfio_iommu_type1_set_msi_binding(struct vfio_iommu *iommu,
+					    unsigned long arg)
+{
+	struct vfio_iommu_type1_set_msi_binding msi_binding;
+	unsigned long minsz;
+	int ret = -EINVAL;
+
+	minsz = offsetofend(struct vfio_iommu_type1_set_msi_binding,
+			    size);
+
+	if (copy_from_user(&msi_binding, (void __user *)arg, minsz))
+		return -EFAULT;
+
+	if (msi_binding.argsz < minsz)
+		return -EINVAL;
+
+	if (msi_binding.flags == VFIO_IOMMU_UNBIND_MSI) {
+		vfio_unbind_msi(iommu, msi_binding.iova);
+		ret = 0;
+	} else if (msi_binding.flags == VFIO_IOMMU_BIND_MSI) {
+		ret = vfio_bind_msi(iommu, msi_binding.iova,
+				    msi_binding.gpa, msi_binding.size);
+	}
+	return ret;
+}
+
 static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
 					unsigned long arg)
 {
@@ -2990,6 +3051,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
 		return vfio_iommu_type1_set_pasid_table(iommu, arg);
 	case VFIO_IOMMU_CACHE_INVALIDATE:
 		return vfio_iommu_type1_cache_invalidate(iommu, arg);
+	case VFIO_IOMMU_SET_MSI_BINDING:
+		return vfio_iommu_type1_set_msi_binding(iommu, arg);
 	default:
 		return -ENOTTY;
 	}
diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
index 0e6d94cc2ba4..b352e76cfb71 100644
--- a/include/uapi/linux/vfio.h
+++ b/include/uapi/linux/vfio.h
@@ -1212,6 +1212,26 @@ struct vfio_iommu_type1_cache_invalidate {
 };
 #define VFIO_IOMMU_CACHE_INVALIDATE      _IO(VFIO_TYPE, VFIO_BASE + 23)
 
+/**
+ * VFIO_IOMMU_SET_MSI_BINDING - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
+ *			struct vfio_iommu_type1_set_msi_binding)
+ *
+ * Pass a stage 1 MSI doorbell mapping to the host so that this
+ * latter can build a nested stage2 mapping. Or conversely tear
+ * down a previously bound stage 1 MSI binding.
+ */
+struct vfio_iommu_type1_set_msi_binding {
+	__u32   argsz;
+	__u32   flags;
+#define VFIO_IOMMU_BIND_MSI	(1 << 0)
+#define VFIO_IOMMU_UNBIND_MSI	(1 << 1)
+	__u64	iova;	/* MSI guest IOVA */
+	/* Fields below are used on BIND */
+	__u64	gpa;	/* MSI guest physical address */
+	__u64	size;	/* size of stage1 mapping (bytes) */
+};
+#define VFIO_IOMMU_SET_MSI_BINDING      _IO(VFIO_TYPE, VFIO_BASE + 24)
+
 /* Additional API for
-- 
2.21.3
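A minimal sketch of how userspace (e.g. a VMM) would drive the new ioctl; the fd and address values are placeholders:

	struct vfio_iommu_type1_set_msi_binding bind = {
		.argsz = sizeof(bind),
		.flags = VFIO_IOMMU_BIND_MSI,
		.iova  = giova,		/* guest MSI doorbell IOVA */
		.gpa   = gpa,		/* guest physical doorbell address */
		.size  = 0x1000,	/* size of the stage 1 mapping */
	};

	ioctl(container_fd, VFIO_IOMMU_SET_MSI_BINDING, &bind);

	/* teardown uses the same ioctl with the UNBIND flag */
	bind.flags = VFIO_IOMMU_UNBIND_MSI;
	ioctl(container_fd, VFIO_IOMMU_SET_MSI_BINDING, &bind);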
[PATCH v11 06/13] vfio/pci: Allow to mmap the fault queue
The DMA FAULT region contains the fault ring buffer. There is benefit in letting the userspace mmap this area. Expose this mmappable area through a sparse mmap entry and implement the mmap operation.

Signed-off-by: Eric Auger

---
v8 -> v9:
- remove unused index local variable in vfio_pci_fault_mmap
---
 drivers/vfio/pci/vfio_pci.c | 61 +++++++++++++++++++++++++++++++++++--
 1 file changed, 58 insertions(+), 3 deletions(-)

diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index b39d6ed66c71..2a6cc1a87323 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -318,21 +318,75 @@ static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev,
 	kfree(vdev->fault_pages);
 }
 
+static int vfio_pci_dma_fault_mmap(struct vfio_pci_device *vdev,
+				   struct vfio_pci_region *region,
+				   struct vm_area_struct *vma)
+{
+	u64 phys_len, req_len, pgoff, req_start;
+	unsigned long long addr;
+	unsigned int ret;
+
+	phys_len = region->size;
+
+	req_len = vma->vm_end - vma->vm_start;
+	pgoff = vma->vm_pgoff &
+		((1U << (VFIO_PCI_OFFSET_SHIFT - PAGE_SHIFT)) - 1);
+	req_start = pgoff << PAGE_SHIFT;
+
+	/* only the second page of the producer fault region is mmappable */
+	if (req_start < PAGE_SIZE)
+		return -EINVAL;
+
+	if (req_start + req_len > phys_len)
+		return -EINVAL;
+
+	addr = virt_to_phys(vdev->fault_pages);
+	vma->vm_private_data = vdev;
+	vma->vm_pgoff = (addr >> PAGE_SHIFT) + pgoff;
+
+	ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
+			      req_len, vma->vm_page_prot);
+	return ret;
+}
+
 static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev,
 					     struct vfio_pci_region *region,
 					     struct vfio_info_cap *caps)
 {
+	struct vfio_region_info_cap_sparse_mmap *sparse = NULL;
 	struct vfio_region_info_cap_fault cap = {
 		.header.id = VFIO_REGION_INFO_CAP_DMA_FAULT,
 		.header.version = 1,
 		.version = 1,
 	};
-	return vfio_info_add_capability(caps, &cap, sizeof(cap));
+	size_t size = sizeof(*sparse) + sizeof(*sparse->areas);
+	int ret;
+
+	ret = vfio_info_add_capability(caps, &cap, sizeof(cap));
+	if (ret)
+		return ret;
+
+	sparse = kzalloc(size, GFP_KERNEL);
+	if (!sparse)
+		return -ENOMEM;
+
+	sparse->header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+	sparse->header.version = 1;
+	sparse->nr_areas = 1;
+	sparse->areas[0].offset = PAGE_SIZE;
+	sparse->areas[0].size = region->size - PAGE_SIZE;
+
+	ret = vfio_info_add_capability(caps, &sparse->header, size);
+	if (ret)
+		kfree(sparse);
+
+	return ret;
 }
 
 static const struct vfio_pci_regops vfio_pci_dma_fault_regops = {
 	.rw		= vfio_pci_dma_fault_rw,
 	.release	= vfio_pci_dma_fault_release,
+	.mmap		= vfio_pci_dma_fault_mmap,
 	.add_capability = vfio_pci_dma_fault_add_capability,
 };
 
@@ -403,7 +457,8 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
 			VFIO_REGION_TYPE_NESTED,
 			VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT,
 			&vfio_pci_dma_fault_regops, size,
-			VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE,
+			VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE |
+			VFIO_REGION_INFO_FLAG_MMAP,
 			vdev->fault_pages);
 	if (ret)
 		goto out;
@@ -411,7 +466,7 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev)
 	header = (struct vfio_region_dma_fault *)vdev->fault_pages;
 	header->entry_size = sizeof(struct iommu_fault);
 	header->nb_entries = DMA_FAULT_RING_LENGTH;
-	header->offset = sizeof(struct vfio_region_dma_fault);
+	header->offset = PAGE_SIZE;
 
 	ret = iommu_register_device_fault_handler(&vdev->pdev->dev,
 					vfio_pci_iommu_dev_fault_handler,
-- 
2.21.3
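Combined with the header fields set up in the previous patch (entry_size, nb_entries, offset = PAGE_SIZE), a hedged sketch of a userspace fault consumer; hdr, device_fd and region_offset are placeholders read via the region's rw interface:

	/* mmap the ring, which starts at the second page of the region */
	struct iommu_fault *ring = mmap(NULL, ring_bytes, PROT_READ | PROT_WRITE,
					MAP_SHARED, device_fd,
					region_offset + PAGE_SIZE);
	uint32_t tail = hdr.tail;

	while (tail != hdr.head) {	/* drain pending fault records */
		handle_fault(&ring[tail]);
		tail = (tail + 1) % hdr.nb_entries;
	}
	/* write tail back through the region so the kernel sees freed slots */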
[PATCH v11 04/13] vfio/pci: Add VFIO_REGION_TYPE_NESTED region type
Add a new specific DMA_FAULT region aiming to exposed nested mode translation faults. This region only is exposed if the device is attached to a nested domain. The region has a ring buffer that contains the actual fault records plus a header allowing to handle it (tail/head indices, max capacity, entry size). At the moment the region is dimensionned for 512 fault records. Signed-off-by: Eric Auger --- v11 -> v12: - set fault_pages to NULL after free - check new_tail >= header->nb_entries (Zenghui) v10 -> v11: - rename vfio_pci_init_dma_fault_region into vfio_pci_dma_fault_init - free fault_pages in vfio_pci_dma_fault_release - only register the region if the device is attached to a nested domain v8 -> v9: - Use a single region instead of a prod/cons region v4 -> v5 - check cons is not null in vfio_pci_check_cons_fault v3 -> v4: - use 2 separate regions, respectively in read and write modes - add the version capability --- drivers/vfio/pci/vfio_pci.c | 76 + drivers/vfio/pci/vfio_pci_private.h | 6 +++ drivers/vfio/pci/vfio_pci_rdwr.c| 44 + include/uapi/linux/vfio.h | 34 + 4 files changed, 160 insertions(+) diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index e6190173482c..7546a81e7fb6 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -311,6 +311,78 @@ int vfio_pci_set_power_state(struct vfio_pci_device *vdev, pci_power_t state) return ret; } +static void vfio_pci_dma_fault_release(struct vfio_pci_device *vdev, + struct vfio_pci_region *region) +{ + kfree(vdev->fault_pages); +} + +static int vfio_pci_dma_fault_add_capability(struct vfio_pci_device *vdev, +struct vfio_pci_region *region, +struct vfio_info_cap *caps) +{ + struct vfio_region_info_cap_fault cap = { + .header.id = VFIO_REGION_INFO_CAP_DMA_FAULT, + .header.version = 1, + .version = 1, + }; + return vfio_info_add_capability(caps, , sizeof(cap)); +} + +static const struct vfio_pci_regops vfio_pci_dma_fault_regops = { + .rw = vfio_pci_dma_fault_rw, + .release= vfio_pci_dma_fault_release, + .add_capability = vfio_pci_dma_fault_add_capability, +}; + +#define DMA_FAULT_RING_LENGTH 512 + +static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev) +{ + struct vfio_region_dma_fault *header; + struct iommu_domain *domain; + size_t size; + bool nested; + int ret; + + domain = iommu_get_domain_for_dev(>pdev->dev); + ret = iommu_domain_get_attr(domain, DOMAIN_ATTR_NESTING, ); + if (ret || !nested) + return ret; + + mutex_init(>fault_queue_lock); + + /* +* We provision 1 page for the header and space for +* DMA_FAULT_RING_LENGTH fault records in the ring buffer. 
+*/ + size = ALIGN(sizeof(struct iommu_fault) * +DMA_FAULT_RING_LENGTH, PAGE_SIZE) + PAGE_SIZE; + + vdev->fault_pages = kzalloc(size, GFP_KERNEL); + if (!vdev->fault_pages) + return -ENOMEM; + + ret = vfio_pci_register_dev_region(vdev, + VFIO_REGION_TYPE_NESTED, + VFIO_REGION_SUBTYPE_NESTED_DMA_FAULT, + &vfio_pci_dma_fault_regops, size, + VFIO_REGION_INFO_FLAG_READ | VFIO_REGION_INFO_FLAG_WRITE, + vdev->fault_pages); + if (ret) + goto out; + + header = (struct vfio_region_dma_fault *)vdev->fault_pages; + header->entry_size = sizeof(struct iommu_fault); + header->nb_entries = DMA_FAULT_RING_LENGTH; + header->offset = sizeof(struct vfio_region_dma_fault); + return 0; +out: + kfree(vdev->fault_pages); + vdev->fault_pages = NULL; + return ret; +} + static int vfio_pci_enable(struct vfio_pci_device *vdev) { struct pci_dev *pdev = vdev->pdev; @@ -409,6 +481,10 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev) } } + ret = vfio_pci_dma_fault_init(vdev); + if (ret) + goto disable_exit; + vfio_pci_probe_mmaps(vdev); return 0; diff --git a/drivers/vfio/pci/vfio_pci_private.h b/drivers/vfio/pci/vfio_pci_private.h index 5c90e560c5c7..1d9b0f648133 100644 --- a/drivers/vfio/pci/vfio_pci_private.h +++ b/drivers/vfio/pci/vfio_pci_private.h @@ -134,6 +134,8 @@ struct vfio_pci_device { int ioeventfds_nr; struct eventfd_ctx *err_trigger; struct eventfd_ctx *req_trigger; + u8 *fault_pages; + struct mutex fault_queue_lock; struct list_head dummy_resources_list; struct mutex ioeventfds_lock; struct list_head
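Not part of the patch: a possible consumer loop for the ring initialized above, assuming the vfio_region_dma_fault header also carries producer/consumer indices (named head and tail below, following the commit message). The mirror struct is purely illustrative; the authoritative layout is the uAPI hunk of this patch, which is not visible here, so both the field order and the tail write-back are assumptions.

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/*
 * Illustrative mirror of the region header: only entry_size, nb_entries
 * and offset appear in this hunk; head/tail and the field order are
 * assumptions made for the sake of the example.
 */
struct dma_fault_header {
	uint32_t tail;		/* consumer index, written by userspace */
	uint32_t entry_size;	/* sizeof(struct iommu_fault) */
	uint32_t nb_entries;	/* ring capacity, 512 here */
	uint32_t offset;	/* start of the ring inside the region */
	uint32_t head;		/* producer index, written by the kernel */
};

/*
 * 'region_off' is the region offset reported by GET_REGION_INFO,
 * 'ring' the area mmapped at region_off + header offset.
 */
static void drain_faults(int device, uint64_t region_off, const char *ring)
{
	struct dma_fault_header hdr;
	uint32_t idx;

	pread(device, &hdr, sizeof(hdr), region_off);

	for (idx = hdr.tail; idx != hdr.head;
	     idx = (idx + 1) % hdr.nb_entries) {
		const void *rec = ring + (size_t)idx * hdr.entry_size;
		/* decode the struct iommu_fault record at 'rec' here */
		(void)rec;
	}

	/* publish the new consumer index through the write interface */
	pwrite(device, &idx, sizeof(idx),
	       region_off + offsetof(struct dma_fault_header, tail));
}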
[PATCH v11 10/13] vfio/pci: Register and allow DMA FAULT IRQ signaling
Register the VFIO_IRQ_TYPE_NESTED/VFIO_IRQ_SUBTYPE_DMA_FAULT IRQ that allows to signal a nested mode DMA fault. Signed-off-by: Eric Auger --- v10 -> v11: - the irq now is registered in vfio_pci_dma_fault_init() in case the domain is nested --- drivers/vfio/pci/vfio_pci.c | 21 - 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index 93e03a4a5f32..65a83fd0e8c0 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -397,6 +397,7 @@ int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data) (struct vfio_region_dma_fault *)vdev->fault_pages; struct iommu_fault *new; u32 head, tail, size; + int ext_irq_index; int ret = -EINVAL; @@ -422,7 +423,19 @@ int vfio_pci_iommu_dev_fault_handler(struct iommu_fault *fault, void *data) ret = 0; unlock: mutex_unlock(>fault_queue_lock); - return ret; + if (ret) + return ret; + + ext_irq_index = vfio_pci_get_ext_irq_index(vdev, VFIO_IRQ_TYPE_NESTED, + VFIO_IRQ_SUBTYPE_DMA_FAULT); + if (ext_irq_index < 0) + return -EINVAL; + + mutex_lock(>igate); + if (vdev->ext_irqs[ext_irq_index].trigger) + eventfd_signal(vdev->ext_irqs[ext_irq_index].trigger, 1); + mutex_unlock(>igate); + return 0; } #define DMA_FAULT_RING_LENGTH 512 @@ -474,6 +487,12 @@ static int vfio_pci_dma_fault_init(struct vfio_pci_device *vdev) if (ret) /* the dma fault region is freed in vfio_pci_disable() */ goto out; + ret = vfio_pci_register_irq(vdev, VFIO_IRQ_TYPE_NESTED, + VFIO_IRQ_SUBTYPE_DMA_FAULT, + VFIO_IRQ_INFO_EVENTFD); + if (ret) /* the fault handler is also freed in vfio_pci_disable() */ + goto out; + return 0; out: kfree(vdev->fault_pages); -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
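Not part of the patch: a minimal sketch of how userspace could attach an eventfd to this IRQ, assuming the extended index was discovered beforehand through the IRQ info capability chain added by the custom interrupt index patch of this series.

#include <stdint.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * 'index' is the extended IRQ index whose capability chain reported
 * VFIO_IRQ_TYPE_NESTED / VFIO_IRQ_SUBTYPE_DMA_FAULT.
 */
static int dma_fault_set_eventfd(int device, uint32_t index)
{
	char buf[sizeof(struct vfio_irq_set) + sizeof(int32_t)];
	struct vfio_irq_set *set = (struct vfio_irq_set *)buf;
	int32_t efd = eventfd(0, EFD_CLOEXEC);

	if (efd < 0)
		return -1;

	set->argsz = sizeof(buf);
	set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
	set->index = index;
	set->start = 0;
	set->count = 1;
	memcpy(set->data, &efd, sizeof(efd));

	return ioctl(device, VFIO_DEVICE_SET_IRQS, set) ? -1 : efd;
}

A read() or poll() on the returned eventfd then wakes the VMM whenever the fault handler queues a new record and signals the trigger, at which point it can drain the fault region.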
[PATCH v11 08/13] vfio/pci: Add framework for custom interrupt indices
Implement IRQ capability chain infrastructure. All interrupt indexes beyond VFIO_PCI_NUM_IRQS are handled as extended interrupts. They are registered with a specific type/subtype and supported flags. Signed-off-by: Eric Auger --- drivers/vfio/pci/vfio_pci.c | 99 +++-- drivers/vfio/pci/vfio_pci_intrs.c | 62 ++ drivers/vfio/pci/vfio_pci_private.h | 14 3 files changed, 157 insertions(+), 18 deletions(-) diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c index 2a6cc1a87323..93e03a4a5f32 100644 --- a/drivers/vfio/pci/vfio_pci.c +++ b/drivers/vfio/pci/vfio_pci.c @@ -608,6 +608,14 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev) WARN_ON(iommu_unregister_device_fault_handler(>pdev->dev)); + for (i = 0; i < vdev->num_ext_irqs; i++) + vfio_pci_set_irqs_ioctl(vdev, VFIO_IRQ_SET_DATA_NONE | + VFIO_IRQ_SET_ACTION_TRIGGER, + VFIO_PCI_NUM_IRQS + i, 0, 0, NULL); + vdev->num_ext_irqs = 0; + kfree(vdev->ext_irqs); + vdev->ext_irqs = NULL; + /* Device closed, don't need mutex here */ list_for_each_entry_safe(ioeventfd, ioeventfd_tmp, >ioeventfds_list, next) { @@ -823,6 +831,9 @@ static int vfio_pci_get_irq_count(struct vfio_pci_device *vdev, int irq_type) return 1; } else if (irq_type == VFIO_PCI_REQ_IRQ_INDEX) { return 1; + } else if (irq_type >= VFIO_PCI_NUM_IRQS && + irq_type < VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs) { + return 1; } return 0; @@ -1008,7 +1019,7 @@ static long vfio_pci_ioctl(void *device_data, info.flags |= VFIO_DEVICE_FLAGS_RESET; info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions; - info.num_irqs = VFIO_PCI_NUM_IRQS; + info.num_irqs = VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs; if (IS_ENABLED(CONFIG_VFIO_PCI_ZDEV)) { int ret = vfio_pci_info_zdev_add_caps(vdev, ); @@ -1187,36 +1198,87 @@ static long vfio_pci_ioctl(void *device_data, } else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) { struct vfio_irq_info info; + struct vfio_info_cap caps = { .buf = NULL, .size = 0 }; + unsigned long capsz; minsz = offsetofend(struct vfio_irq_info, count); + /* For backward compatibility, cannot require this */ + capsz = offsetofend(struct vfio_irq_info, cap_offset); + if (copy_from_user(, (void __user *)arg, minsz)) return -EFAULT; - if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS) + if (info.argsz < minsz || + info.index >= VFIO_PCI_NUM_IRQS + vdev->num_ext_irqs) return -EINVAL; - switch (info.index) { - case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX: - case VFIO_PCI_REQ_IRQ_INDEX: - break; - case VFIO_PCI_ERR_IRQ_INDEX: - if (pci_is_pcie(vdev->pdev)) - break; - fallthrough; - default: - return -EINVAL; - } + if (info.argsz >= capsz) + minsz = capsz; info.flags = VFIO_IRQ_INFO_EVENTFD; - info.count = vfio_pci_get_irq_count(vdev, info.index); - - if (info.index == VFIO_PCI_INTX_IRQ_INDEX) + switch (info.index) { + case VFIO_PCI_INTX_IRQ_INDEX: info.flags |= (VFIO_IRQ_INFO_MASKABLE | VFIO_IRQ_INFO_AUTOMASKED); - else + break; + case VFIO_PCI_MSI_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX: + case VFIO_PCI_REQ_IRQ_INDEX: info.flags |= VFIO_IRQ_INFO_NORESIZE; + break; + case VFIO_PCI_ERR_IRQ_INDEX: + info.flags |= VFIO_IRQ_INFO_NORESIZE; + if (!pci_is_pcie(vdev->pdev)) + return -EINVAL; + break; + default: + { + struct vfio_irq_info_cap_type cap_type = { + .header.id = VFIO_IRQ_INFO_CAP_TYPE, + .header.version = 1 }; + int ret, i; + + if (info.index >= VFIO_PCI_NUM_IRQS + + vdev->num_ext_irqs) + return -EINVAL; + info.index = array_index_nospec(info.index, +
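Not part of the patch: a sketch of how userspace could walk the extended indices, assuming it is built against this series' headers (the cap_offset field of vfio_irq_info, VFIO_IRQ_INFO_CAP_TYPE and the type/subtype members of vfio_irq_info_cap_type are introduced by this series and are not in mainline vfio.h; their exact names are assumptions taken from the hunk above).

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Walk the indices past VFIO_PCI_NUM_IRQS and return the one whose
 * capability chain advertises the requested type/subtype, or -1.
 * 'num_irqs' comes from VFIO_DEVICE_GET_INFO.
 */
static int find_ext_irq(int device, uint32_t num_irqs,
			uint32_t type, uint32_t subtype)
{
	size_t argsz = 4096;	/* assumed large enough for the chain */
	struct vfio_irq_info *info = calloc(1, argsz);
	uint32_t i;

	if (!info)
		return -1;

	for (i = VFIO_PCI_NUM_IRQS; i < num_irqs; i++) {
		struct vfio_info_cap_header *hdr;

		memset(info, 0, argsz);
		info->argsz = argsz;
		info->index = i;

		if (ioctl(device, VFIO_DEVICE_GET_IRQ_INFO, info) ||
		    !info->cap_offset)
			continue;

		for (hdr = (void *)((char *)info + info->cap_offset); hdr;
		     hdr = hdr->next ?
			   (void *)((char *)info + hdr->next) : NULL) {
			struct vfio_irq_info_cap_type *cap = (void *)hdr;

			if (hdr->id == VFIO_IRQ_INFO_CAP_TYPE &&
			    cap->type == type && cap->subtype == subtype) {
				free(info);
				return i;
			}
		}
	}
	free(info);
	return -1;
}

find_ext_irq(device, device_info.num_irqs, VFIO_IRQ_TYPE_NESTED, VFIO_IRQ_SUBTYPE_DMA_FAULT) would then return the index to pass to VFIO_DEVICE_SET_IRQS.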
[PATCH v11 01/13] vfio: VFIO_IOMMU_SET_PASID_TABLE
From: "Liu, Yi L" This patch adds an VFIO_IOMMU_SET_PASID_TABLE ioctl which aims to pass the virtual iommu guest configuration to the host. This latter takes the form of the so-called PASID table. Signed-off-by: Jacob Pan Signed-off-by: Liu, Yi L Signed-off-by: Eric Auger --- v11 -> v12: - use iommu_uapi_set_pasid_table - check SET and UNSET are not set simultaneously (Zenghui) v8 -> v9: - Merge VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE into a single VFIO_IOMMU_SET_PASID_TABLE ioctl. v6 -> v7: - add a comment related to VFIO_IOMMU_DETACH_PASID_TABLE v3 -> v4: - restore ATTACH/DETACH - add unwind on failure v2 -> v3: - s/BIND_PASID_TABLE/SET_PASID_TABLE v1 -> v2: - s/BIND_GUEST_STAGE/BIND_PASID_TABLE - remove the struct device arg --- drivers/vfio/vfio_iommu_type1.c | 65 + include/uapi/linux/vfio.h | 19 ++ 2 files changed, 84 insertions(+) diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 67e827638995..87ddd9e882dc 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -2587,6 +2587,41 @@ static int vfio_iommu_iova_build_caps(struct vfio_iommu *iommu, return ret; } +static void +vfio_detach_pasid_table(struct vfio_iommu *iommu) +{ + struct vfio_domain *d; + + mutex_lock(>lock); + list_for_each_entry(d, >domain_list, next) + iommu_detach_pasid_table(d->domain); + + mutex_unlock(>lock); +} + +static int +vfio_attach_pasid_table(struct vfio_iommu *iommu, unsigned long arg) +{ + struct vfio_domain *d; + int ret = 0; + + mutex_lock(>lock); + + list_for_each_entry(d, >domain_list, next) { + ret = iommu_uapi_attach_pasid_table(d->domain, (void __user *)arg); + if (ret) + goto unwind; + } + goto unlock; +unwind: + list_for_each_entry_continue_reverse(d, >domain_list, next) { + iommu_detach_pasid_table(d->domain); + } +unlock: + mutex_unlock(>lock); + return ret; +} + static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu, struct vfio_info_cap *caps) { @@ -2747,6 +2782,34 @@ static int vfio_iommu_type1_unmap_dma(struct vfio_iommu *iommu, -EFAULT : 0; } +static int vfio_iommu_type1_set_pasid_table(struct vfio_iommu *iommu, + unsigned long arg) +{ + struct vfio_iommu_type1_set_pasid_table spt; + unsigned long minsz; + int ret = -EINVAL; + + minsz = offsetofend(struct vfio_iommu_type1_set_pasid_table, flags); + + if (copy_from_user(, (void __user *)arg, minsz)) + return -EFAULT; + + if (spt.argsz < minsz) + return -EINVAL; + + if (spt.flags & VFIO_PASID_TABLE_FLAG_SET && + spt.flags & VFIO_PASID_TABLE_FLAG_UNSET) + return -EINVAL; + + if (spt.flags & VFIO_PASID_TABLE_FLAG_SET) + ret = vfio_attach_pasid_table(iommu, arg + minsz); + else if (spt.flags & VFIO_PASID_TABLE_FLAG_UNSET) { + vfio_detach_pasid_table(iommu); + ret = 0; + } + return ret; +} + static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu, unsigned long arg) { @@ -2867,6 +2930,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data, return vfio_iommu_type1_unmap_dma(iommu, arg); case VFIO_IOMMU_DIRTY_PAGES: return vfio_iommu_type1_dirty_pages(iommu, arg); + case VFIO_IOMMU_SET_PASID_TABLE: + return vfio_iommu_type1_set_pasid_table(iommu, arg); default: return -ENOTTY; } diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h index 2f313a238a8f..78ce3ce6c331 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -14,6 +14,7 @@ #include #include +#include #define VFIO_API_VERSION 0 @@ -1180,6 +1181,24 @@ struct vfio_iommu_type1_dirty_bitmap_get { #define VFIO_IOMMU_DIRTY_PAGES _IO(VFIO_TYPE, VFIO_BASE + 17) +/* + * 
VFIO_IOMMU_SET_PASID_TABLE - _IOWR(VFIO_TYPE, VFIO_BASE + 22, + * struct vfio_iommu_type1_set_pasid_table) + * + * The SET operation passes a PASID table to the host while the + * UNSET operation detaches the one currently programmed. Setting + * a table while another is already programmed replaces the old table. + */ +struct vfio_iommu_type1_set_pasid_table { + __u32 argsz; + __u32 flags; +#define VFIO_PASID_TABLE_FLAG_SET (1 << 0) +#define VFIO_PASID_TABLE_FLAG_UNSET (1 << 1) + struct iommu_pasid_table_config config; /* used on SET */ +}; + +#define VFIO_IOMMU_SET_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22) + /* Additional API for SPAPR TCE (Server POWERPC) IOMMU */ /* -- 2.21.3
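Not part of the patch: a sketch of the call a VMM could issue on the container fd when the guest programs its CD (PASID) table, assuming the iommu_pasid_table_config layout of the companion IOMMU series; the field names below are taken from the SMMUv3 attach_pasid_table code in that series and any argsz/padding handling inside the config is glossed over.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/iommu.h>
#include <linux/vfio.h>

/*
 * Propagate the guest CD table to the host SMMU. 'cd_table_gpa' is the
 * table base programmed by the guest, 'pasid_bits' the number of
 * substream ID bits it uses.
 */
static int set_guest_pasid_table(int container, uint64_t cd_table_gpa,
				 uint8_t pasid_bits)
{
	struct vfio_iommu_type1_set_pasid_table spt;

	memset(&spt, 0, sizeof(spt));
	spt.argsz = sizeof(spt);
	spt.flags = VFIO_PASID_TABLE_FLAG_SET;
	spt.config.version = PASID_TABLE_CFG_VERSION_1;
	spt.config.format = IOMMU_PASID_FORMAT_SMMUV3;
	spt.config.config = IOMMU_PASID_CONFIG_TRANSLATE;
	spt.config.base_ptr = cd_table_gpa;
	spt.config.pasid_bits = pasid_bits;
	spt.config.vendor_data.smmuv3.version = PASID_TABLE_SMMUV3_CFG_VERSION_1;

	return ioctl(container, VFIO_IOMMU_SET_PASID_TABLE, &spt);
}

Unsetting works the same way with VFIO_PASID_TABLE_FLAG_UNSET and no config, which ends up in the driver's detach_pasid_table path.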
[PATCH v11 02/13] vfio: VFIO_IOMMU_CACHE_INVALIDATE
From: "Liu, Yi L" When the guest "owns" the stage 1 translation structures, the host IOMMU driver has no knowledge of caching structure updates unless the guest invalidation requests are trapped and passed down to the host. This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl with aims at propagating guest stage1 IOMMU cache invalidations to the host. Signed-off-by: Liu, Yi L Signed-off-by: Eric Auger --- v10 -> v11: - renamed ustruct into cache_inv v8 -> v9: - change the ioctl ID v6 -> v7: - Use iommu_capsule struct - renamed vfio_iommu_for_each_dev into vfio_iommu_lookup_dev due to checkpatch error related to for_each_dev suffix v2 -> v3: - introduce vfio_iommu_for_each_dev back in this patch v1 -> v2: - s/TLB/CACHE - remove vfio_iommu_task usage - commit message rewording --- drivers/vfio/vfio_iommu_type1.c | 58 + include/uapi/linux/vfio.h | 13 2 files changed, 71 insertions(+) diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 87ddd9e882dc..966909f542f1 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -143,6 +143,34 @@ struct vfio_regions { #define DIRTY_BITMAP_PAGES_MAX ((u64)INT_MAX) #define DIRTY_BITMAP_SIZE_MAX DIRTY_BITMAP_BYTES(DIRTY_BITMAP_PAGES_MAX) +struct domain_capsule { + struct iommu_domain *domain; + void *data; +}; + +/* iommu->lock must be held */ +static int +vfio_iommu_lookup_dev(struct vfio_iommu *iommu, + int (*fn)(struct device *dev, void *data), + unsigned long arg) +{ + struct domain_capsule dc = {.data = }; + struct vfio_domain *d; + struct vfio_group *g; + int ret = 0; + + list_for_each_entry(d, >domain_list, next) { + dc.domain = d->domain; + list_for_each_entry(g, >group_list, next) { + ret = iommu_group_for_each_dev(g->iommu_group, + , fn); + if (ret) + break; + } + } + return ret; +} + static int put_pfn(unsigned long pfn, int prot); static struct vfio_group *vfio_iommu_find_iommu_group(struct vfio_iommu *iommu, @@ -2621,6 +2649,13 @@ vfio_attach_pasid_table(struct vfio_iommu *iommu, unsigned long arg) mutex_unlock(>lock); return ret; } +static int vfio_cache_inv_fn(struct device *dev, void *data) +{ + struct domain_capsule *dc = (struct domain_capsule *)data; + unsigned long arg = *(unsigned long *)dc->data; + + return iommu_uapi_cache_invalidate(dc->domain, dev, (void __user *)arg); +} static int vfio_iommu_migration_build_caps(struct vfio_iommu *iommu, struct vfio_info_cap *caps) @@ -2810,6 +2845,27 @@ static int vfio_iommu_type1_set_pasid_table(struct vfio_iommu *iommu, return ret; } +static int vfio_iommu_type1_cache_invalidate(struct vfio_iommu *iommu, + unsigned long arg) +{ + struct vfio_iommu_type1_cache_invalidate cache_inv; + unsigned long minsz; + int ret; + + minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate, flags); + + if (copy_from_user(_inv, (void __user *)arg, minsz)) + return -EFAULT; + + if (cache_inv.argsz < minsz || cache_inv.flags) + return -EINVAL; + + mutex_lock(>lock); + ret = vfio_iommu_lookup_dev(iommu, vfio_cache_inv_fn, arg + minsz); + mutex_unlock(>lock); + return ret; +} + static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu, unsigned long arg) { @@ -2932,6 +2988,8 @@ static long vfio_iommu_type1_ioctl(void *iommu_data, return vfio_iommu_type1_dirty_pages(iommu, arg); case VFIO_IOMMU_SET_PASID_TABLE: return vfio_iommu_type1_set_pasid_table(iommu, arg); + case VFIO_IOMMU_CACHE_INVALIDATE: + return vfio_iommu_type1_cache_invalidate(iommu, arg); default: return -ENOTTY; } diff --git a/include/uapi/linux/vfio.h 
b/include/uapi/linux/vfio.h index 78ce3ce6c331..0e6d94cc2ba4 100644 --- a/include/uapi/linux/vfio.h +++ b/include/uapi/linux/vfio.h @@ -1199,6 +1199,19 @@ struct vfio_iommu_type1_set_pasid_table { #define VFIO_IOMMU_SET_PASID_TABLE _IO(VFIO_TYPE, VFIO_BASE + 22) +/** + * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 23, + * struct vfio_iommu_type1_cache_invalidate) + * + * Propagate guest IOMMU cache invalidation to the host. + */ +struct vfio_iommu_type1_cache_invalidate { + __u32 argsz; + __u32 flags; + struct iommu_cache_invalidate_info info; +}; +#define VFIO_IOMMU_CACHE_INVALIDATE _IO(VFIO_TYPE, VFIO_BASE + 23) + /* Additional API for SPAPR TCE (Server POWERPC) IOMMU */
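Not part of the patch: a sketch of forwarding a guest ASID-wide invalidation (for instance a trapped TLBI NH_ASID) through the new ioctl, using the iommu_cache_invalidate_info layout from <linux/iommu.h> that the SMMUv3 cache_invalidate patch of the companion series consumes; the argsz fields are filled as the uAPI helpers expect.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/iommu.h>
#include <linux/vfio.h>

/* 'archid' carries the guest ASID found in the trapped command. */
static int invalidate_guest_asid(int container, uint32_t archid)
{
	struct vfio_iommu_type1_cache_invalidate inv;

	memset(&inv, 0, sizeof(inv));
	inv.argsz = sizeof(inv);
	inv.flags = 0;
	inv.info.argsz = sizeof(inv.info);
	inv.info.version = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1;
	inv.info.cache = IOMMU_CACHE_INV_TYPE_IOTLB;
	inv.info.granularity = IOMMU_INV_GRANU_PASID;
	inv.info.granu.pasid_info.flags = IOMMU_INV_PASID_FLAGS_ARCHID;
	inv.info.granu.pasid_info.archid = archid;

	return ioctl(container, VFIO_IOMMU_CACHE_INVALIDATE, &inv);
}

The same ioctl carries range-based and PASID-cache invalidations by changing the cache/granularity fields; vfio_iommu_lookup_dev() then fans the payload out to every device of every domain in the container.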
[PATCH v11 00/13] SMMUv3 Nested Stage Setup (VFIO part)
This series brings the VFIO part of HW nested paging support in the SMMUv3. This is a rebase on top of v5.10-rc4 The series depends on: [PATCH v12 00/15] SMMUv3 Nested Stage Setup (IOMMU part) 3 new IOCTLs are introduced that allow the userspace to 1) pass the guest stage 1 configuration 2) pass stage 1 MSI bindings 3) invalidate stage 1 related caches They map onto the related new IOMMU API functions. We introduce the capability to register specific interrupt indexes (see [1]). A new DMA_FAULT interrupt index allows to register an eventfd to be signaled whenever a stage 1 related fault is detected at physical level. Also two specific regions allow to - expose the fault records to the user space and - inject page responses. This latter functionality is not exercised in this series but is provided as a POC for further vSVA activities (Shameer's input). Best Regards Eric This series can be found at: https://github.com/eauger/linux/tree/5.10-rc4-2stage-v12 The series series includes Tina's patch steming from [1] "[RFC PATCH v2 1/3] vfio: Use capability chains to handle device specific irq" plus patches originally contributed by Yi. History: v10 -> v11: - rebase on top of v5.10-rc4 - adapt to changes on the IOMMU API (compliant with the doc written by Jacob/Yi) - addition of the page response region - Took into account Zenghui's comments - In this version I have kept the ioctl separate. Since Yi's series [2] is currently stalled, I've just rebased here. [2] [PATCH v7 00/16] vfio: expose virtual Shared Virtual Addressing to VMs v9 -> v10 - rebase on top of 5.6.0-rc3 (no change versus v9) v8 -> v9: - introduce specific irq framework - single fault region - iommu_unregister_device_fault_handler failure case not handled yet. v7 -> v8: - rebase on top of v5.2-rc1 and especially 8be39a1a04c1 iommu/arm-smmu-v3: Add a master->domain pointer - dynamic alloc of s1_cfg/s2_cfg - __arm_smmu_tlb_inv_asid/s1_range_nosync - check there is no HW MSI regions - asid invalidation using pasid extended struct (change in the uapi) - add s1_live/s2_live checks - move check about support of nested stages in domain finalise - fixes in error reporting according to the discussion with Robin - reordered the patches to have first iommu/smmuv3 patches and then VFIO patches v6 -> v7: - removed device handle from bind/unbind_guest_msi - added "iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement" - added few uapi comments as suggested by Jean, Jacop and Alex v5 -> v6: - Fix compilation issue when CONFIG_IOMMU_API is unset v4 -> v5: - fix bug reported by Vincent: fault handler unregistration now happens in vfio_pci_release - IOMMU_FAULT_PERM_* moved outside of struct definition + small uapi changes suggested by Kean-Philippe (except fetch_addr) - iommu: introduce device fault report API: removed the PRI part. - see individual logs for more details - reset the ste abort flag on detach v3 -> v4: - took into account Alex, jean-Philippe and Robin's comments on v3 - rework of the smmuv3 driver integration - add tear down ops for msi binding and PASID table binding - fix S1 fault propagation - put fault reporting patches at the beginning of the series following Jean-Philippe's request - update of the cache invalidate and fault API uapis - VFIO fault reporting rework with 2 separate regions and one mmappable segment for the fault queue - moved to PATCH v2 -> v3: - When registering the S1 MSI binding we now store the device handle. 
This addresses Robin's comment about discimination of devices beonging to different S1 groups and using different physical MSI doorbells. - Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to set the eventfd and expose the faults through an mmappable fault region v1 -> v2: - Added the fault reporting capability - asid properly passed on invalidation (fix assignment of multiple devices) - see individual change logs for more info Eric Auger (10): vfio: VFIO_IOMMU_SET_MSI_BINDING vfio/pci: Add VFIO_REGION_TYPE_NESTED region type vfio/pci: Register an iommu fault handler vfio/pci: Allow to mmap the fault queue vfio/pci: Add framework for custom interrupt indices vfio: Add new IRQ for DMA fault reporting vfio/pci: Register and allow DMA FAULT IRQ signaling vfio: Document nested stage control vfio/pci: Register a DMA fault response region vfio/pci: Inject page response upon response region fill Liu, Yi L (2): vfio: VFIO_IOMMU_SET_PASID_TABLE vfio: VFIO_IOMMU_CACHE_INVALIDATE Tina Zhang (1): vfio: Use capability chains to handle device specific irq Documentation/driver-api/vfio.rst | 77 + drivers/vfio/pci/vfio_pci.c | 430 ++-- drivers/vfio/pci/vfio_pci_intrs.c | 62 drivers/vfio/pci/vfio_pci_private.h | 33 +++ drivers/vfio/pci/vfio_pci_rdwr.c| 84 ++ drivers/vfio/vfio_iommu_type1.c | 186 include/uapi/linux/vfio.h | 140 -
[PATCH v12 15/15] iommu/smmuv3: Add PASID cache invalidation per PASID
In order to cascade guest CFGI_CD commands, add PASID cache invalidation on a per-PASID basis. Signed-off-by: Eric Auger --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 16 +--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 6549c3ee6af6..eb0e09936803 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -3002,9 +3002,19 @@ arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev, } else { return -EINVAL; } - } - if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID || - inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) { + } else if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID) { + if (inv_info->granularity == IOMMU_INV_GRANU_PASID) { + struct iommu_inv_pasid_info *info = + &inv_info->granu.pasid_info; + + if (!(info->flags & IOMMU_INV_PASID_FLAGS_PASID)) + return -EINVAL; + + arm_smmu_sync_cd(smmu_domain, info->pasid, true); + } else { + return -ENOENT; + } + } else { /* IOMMU_CACHE_INV_TYPE_DEV_IOTLB */ return -ENOENT; } return 0; -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
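Not part of the patch: for completeness, the invalidation payload that exercises this new path would look roughly like the sketch below, mirroring the field names used in the hunk above.

#include <stdint.h>
#include <string.h>
#include <linux/iommu.h>

/*
 * Payload matching the new PASID cache path: invalidate the cached CD
 * of a single PASID, as a trapped CFGI_CD would require.
 */
static void fill_cfgi_cd(struct iommu_cache_invalidate_info *info,
			 uint32_t pasid)
{
	memset(info, 0, sizeof(*info));
	info->argsz = sizeof(*info);
	info->version = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1;
	info->cache = IOMMU_CACHE_INV_TYPE_PASID;
	info->granularity = IOMMU_INV_GRANU_PASID;
	info->granu.pasid_info.flags = IOMMU_INV_PASID_FLAGS_PASID;
	info->granu.pasid_info.pasid = pasid;
}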
[PATCH v12 12/15] iommu/smmuv3: Implement bind/unbind_guest_msi
The bind/unbind_guest_msi() callbacks check the domain is NESTED and redirect to the dma-iommu implementation. Signed-off-by: Eric Auger --- v6 -> v7: - remove device handle argument --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 43 + 1 file changed, 43 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 0c5ab4005f76..5aa9e0e747fa 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2738,6 +2738,47 @@ static void arm_smmu_get_resv_regions(struct device *dev, iommu_dma_get_resv_regions(dev, head); } +static int +arm_smmu_bind_guest_msi(struct iommu_domain *domain, + dma_addr_t giova, phys_addr_t gpa, size_t size) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu; + int ret = -EINVAL; + + mutex_lock(_domain->init_mutex); + smmu = smmu_domain->smmu; + if (!smmu) + goto out; + + if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED) + goto out; + + ret = iommu_dma_bind_guest_msi(domain, giova, gpa, size); +out: + mutex_unlock(_domain->init_mutex); + return ret; +} + +static void +arm_smmu_unbind_guest_msi(struct iommu_domain *domain, dma_addr_t giova) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu; + + mutex_lock(_domain->init_mutex); + smmu = smmu_domain->smmu; + if (!smmu) + goto unlock; + + if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED) + goto unlock; + + iommu_dma_unbind_guest_msi(domain, giova); +unlock: + mutex_unlock(_domain->init_mutex); +} + static int arm_smmu_attach_pasid_table(struct iommu_domain *domain, struct iommu_pasid_table_config *cfg) { @@ -2970,6 +3011,8 @@ static struct iommu_ops arm_smmu_ops = { .attach_pasid_table = arm_smmu_attach_pasid_table, .detach_pasid_table = arm_smmu_detach_pasid_table, .cache_invalidate = arm_smmu_cache_invalidate, + .bind_guest_msi = arm_smmu_bind_guest_msi, + .unbind_guest_msi = arm_smmu_unbind_guest_msi, .dev_has_feat = arm_smmu_dev_has_feature, .dev_feat_enabled = arm_smmu_dev_feature_enabled, .dev_enable_feat= arm_smmu_dev_enable_feature, -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH v12 13/15] iommu/smmuv3: Report non recoverable faults
When a stage 1 related fault event is read from the event queue, let's propagate it to potential external fault listeners, ie. users who registered a fault handler. Signed-off-by: Eric Auger --- v8 -> v9: - adapt to the removal of IOMMU_FAULT_UNRECOV_PERM_VALID: only look at IOMMU_FAULT_UNRECOV_ADDR_VALID which comes with perm - do not advertise IOMMU_FAULT_UNRECOV_PASID_VALID faults for translation faults - trace errors if !master - test nested before calling iommu_report_device_fault - call the fault handler unconditionnally in non nested mode v4 -> v5: - s/IOMMU_FAULT_PERM_INST/IOMMU_FAULT_PERM_EXEC --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 102 +--- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 80 +++ 2 files changed, 171 insertions(+), 11 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 5aa9e0e747fa..31a2500bde32 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1379,7 +1379,6 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid) return 0; } -__maybe_unused static struct arm_smmu_master * arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) { @@ -1405,25 +1404,106 @@ arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid) return master; } +/* Populates the record fields according to the input SMMU event */ +static bool arm_smmu_transcode_fault(u64 *evt, u8 type, +struct iommu_fault_unrecoverable *record) +{ + const struct arm_smmu_fault_propagation_data *data; + u32 fields; + + if (type >= ARRAY_SIZE(fault_propagation)) + return false; + + data = _propagation[type]; + if (!data->reason) + return false; + + fields = data->fields; + + if (data->s1_check & FIELD_GET(EVTQ_1_S2, evt[1])) + return false; /* S2 related fault, don't propagate */ + + if (fields & IOMMU_FAULT_UNRECOV_PASID_VALID) + record->pasid = FIELD_GET(EVTQ_0_SUBSTREAMID, evt[0]); + else { + /* all other transcoded errors have SSV */ + if (FIELD_GET(EVTQ_0_SSV, evt[0])) { + record->pasid = FIELD_GET(EVTQ_0_SUBSTREAMID, evt[0]); + fields |= IOMMU_FAULT_UNRECOV_PASID_VALID; + } + } + + if (fields & IOMMU_FAULT_UNRECOV_ADDR_VALID) { + if (FIELD_GET(EVTQ_1_RNW, evt[1])) + record->perm = IOMMU_FAULT_PERM_READ; + else + record->perm = IOMMU_FAULT_PERM_WRITE; + if (FIELD_GET(EVTQ_1_PNU, evt[1])) + record->perm |= IOMMU_FAULT_PERM_PRIV; + if (FIELD_GET(EVTQ_1_IND, evt[1])) + record->perm |= IOMMU_FAULT_PERM_EXEC; + record->addr = evt[2]; + } + + if (fields & IOMMU_FAULT_UNRECOV_FETCH_ADDR_VALID) + record->fetch_addr = FIELD_GET(EVTQ_3_FETCH_ADDR, evt[3]); + + record->flags = fields; + record->reason = data->reason; + return true; +} + +static void arm_smmu_report_event(struct arm_smmu_device *smmu, u64 *evt) +{ + u32 sid = FIELD_GET(EVTQ_0_STREAMID, evt[0]); + u8 type = FIELD_GET(EVTQ_0_ID, evt[0]); + struct arm_smmu_master *master; + struct iommu_fault_event event = {}; + bool nested; + int i; + + master = arm_smmu_find_master(smmu, sid); + if (!master || !master->domain) + goto out; + + event.fault.type = IOMMU_FAULT_DMA_UNRECOV; + + nested = (master->domain->stage == ARM_SMMU_DOMAIN_NESTED); + + if (nested) { + if (arm_smmu_transcode_fault(evt, type, )) { + /* +* Only S1 related faults should be reported to the +* guest and must not flood the host log. 
+* Also a fault handler should have been registered +* to guarantee the full nested functionality +*/ + WARN_ON_ONCE(iommu_report_device_fault(master->dev, + )); + return; + } + } else { + iommu_report_device_fault(master->dev, ); + } +out: + dev_info(smmu->dev, "event 0x%02x received:\n", type); + for (i = 0; i < EVTQ_ENT_DWORDS; ++i) { + dev_info(smmu->dev, "\t0x%016llx\n", +(unsigned long long)evt[i]); + } +} + /* IRQ and event handlers */ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev) { - int i; struct arm_smmu_device *smmu = dev; struct arm_smmu_queue *q = >evtq.q; struct arm_smmu_ll_queue *llq = >llq; u64 evt[EVTQ_ENT_DWORDS];
[PATCH v12 08/15] iommu/smmuv3: Implement cache_invalidate
Implement domain-selective and page-selective IOTLB invalidations. Signed-off-by: Eric Auger --- v7 -> v8: - ASID based invalidation using iommu_inv_pasid_info - check ARCHID/PASID flags in addr based invalidation - use __arm_smmu_tlb_inv_context and __arm_smmu_tlb_inv_range_nosync v6 -> v7 - check the uapi version v3 -> v4: - adapt to changes in the uapi - add support for leaf parameter - do not use arm_smmu_tlb_inv_range_nosync or arm_smmu_tlb_inv_context anymore v2 -> v3: - replace __arm_smmu_tlb_sync by arm_smmu_cmdq_issue_sync v1 -> v2: - properly pass the asid --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 53 + 1 file changed, 53 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 73f7a56101dd..4b796693d697 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2774,6 +2774,58 @@ static void arm_smmu_detach_pasid_table(struct iommu_domain *domain) mutex_unlock(_domain->init_mutex); } +static int +arm_smmu_cache_invalidate(struct iommu_domain *domain, struct device *dev, + struct iommu_cache_invalidate_info *inv_info) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct arm_smmu_device *smmu = smmu_domain->smmu; + + if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED) + return -EINVAL; + + if (!smmu) + return -EINVAL; + + if (inv_info->version != IOMMU_CACHE_INVALIDATE_INFO_VERSION_1) + return -EINVAL; + + if (inv_info->cache & IOMMU_CACHE_INV_TYPE_IOTLB) { + if (inv_info->granularity == IOMMU_INV_GRANU_PASID) { + struct iommu_inv_pasid_info *info = + _info->granu.pasid_info; + + if (!(info->flags & IOMMU_INV_PASID_FLAGS_ARCHID) || +(info->flags & IOMMU_INV_PASID_FLAGS_PASID)) + return -EINVAL; + + __arm_smmu_tlb_inv_context(smmu_domain, info->archid); + + } else if (inv_info->granularity == IOMMU_INV_GRANU_ADDR) { + struct iommu_inv_addr_info *info = _info->granu.addr_info; + size_t size = info->nb_granules * info->granule_size; + bool leaf = info->flags & IOMMU_INV_ADDR_FLAGS_LEAF; + + if (!(info->flags & IOMMU_INV_ADDR_FLAGS_ARCHID) || +(info->flags & IOMMU_INV_ADDR_FLAGS_PASID)) + return -EINVAL; + + __arm_smmu_tlb_inv_range(info->addr, size, +info->granule_size, leaf, + smmu_domain, info->archid); + + arm_smmu_cmdq_issue_sync(smmu); + } else { + return -EINVAL; + } + } + if (inv_info->cache & IOMMU_CACHE_INV_TYPE_PASID || + inv_info->cache & IOMMU_CACHE_INV_TYPE_DEV_IOTLB) { + return -ENOENT; + } + return 0; +} + static bool arm_smmu_dev_has_feature(struct device *dev, enum iommu_dev_features feat) { @@ -2857,6 +2909,7 @@ static struct iommu_ops arm_smmu_ops = { .put_resv_regions = generic_iommu_put_resv_regions, .attach_pasid_table = arm_smmu_attach_pasid_table, .detach_pasid_table = arm_smmu_detach_pasid_table, + .cache_invalidate = arm_smmu_cache_invalidate, .dev_has_feat = arm_smmu_dev_has_feature, .dev_feat_enabled = arm_smmu_dev_feature_enabled, .dev_enable_feat= arm_smmu_dev_enable_feature, -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
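Not part of the patch: a sketch of one of the two IOTLB payloads this code accepts, here the range-based one; the field names mirror the IOMMU_INV_GRANU_ADDR branch above.

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <linux/iommu.h>

/* Range invalidation tagged with the guest ASID (archid). */
static void fill_tlbi_range(struct iommu_cache_invalidate_info *info,
			    uint32_t archid, uint64_t iova,
			    uint64_t granule_size, uint64_t nb_granules,
			    bool leaf)
{
	memset(info, 0, sizeof(*info));
	info->argsz = sizeof(*info);
	info->version = IOMMU_CACHE_INVALIDATE_INFO_VERSION_1;
	info->cache = IOMMU_CACHE_INV_TYPE_IOTLB;
	info->granularity = IOMMU_INV_GRANU_ADDR;
	info->granu.addr_info.flags = IOMMU_INV_ADDR_FLAGS_ARCHID |
				      (leaf ? IOMMU_INV_ADDR_FLAGS_LEAF : 0);
	info->granu.addr_info.archid = archid;
	info->granu.addr_info.addr = iova;
	info->granu.addr_info.granule_size = granule_size;
	info->granu.addr_info.nb_granules = nb_granules;
}

The ASID-wide variant instead uses IOMMU_INV_GRANU_PASID with only IOMMU_INV_PASID_FLAGS_ARCHID set, which lands in __arm_smmu_tlb_inv_context() above.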
[PATCH v12 09/15] dma-iommu: Implement NESTED_MSI cookie
Up to now, when the type was UNMANAGED, we used to allocate IOVA pages within a reserved IOVA MSI range. If both the host and the guest are exposed with SMMUs, each would allocate an IOVA. The guest allocates an IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The Host allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB). So we end up with 2 unrelated mappings, at S1 and S2: S1 S2 gIOVA-> gDB hIOVA->hDB The PCI device would be programmed with hIOVA. No stage 1 mapping would existing, causing the MSIs to fault. iommu_dma_bind_guest_msi() allows to pass gIOVA/gDB to the host so that gIOVA can be used by the host instead of re-allocating a new hIOVA. S1 S2 gIOVA->gDB->hDB this time, the PCI device can be programmed with the gIOVA MSI doorbell which is correctly mapped through both stages. Nested mode is not compatible with HW MSI regions as in that case gDB and hDB should have a 1-1 mapping. This check will be done when attaching each device to the IOMMU domain. Signed-off-by: Eric Auger --- v10 -> v11: - fix compilation if !CONFIG_IOMMU_DMA v7 -> v8: - correct iommu_dma_(un)bind_guest_msi when !CONFIG_IOMMU_DMA - Mentioned nested mode is not compatible with HW MSI regions in commit message - protect with msi_lock on unbind v6 -> v7: - removed device handle v3 -> v4: - change function names; add unregister - protect with msi_lock v2 -> v3: - also store the device handle on S1 mapping registration. This garantees we associate the associated S2 mapping binds to the correct physical MSI controller. v1 -> v2: - unmap stage2 on put() --- drivers/iommu/dma-iommu.c | 142 +- include/linux/dma-iommu.h | 16 + 2 files changed, 155 insertions(+), 3 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 0cbcd3fc3e7e..a14ecad6b79b 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -19,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -27,12 +28,15 @@ struct iommu_dma_msi_page { struct list_headlist; dma_addr_t iova; + dma_addr_t gpa; phys_addr_t phys; + size_t s1_granule; }; enum iommu_dma_cookie_type { IOMMU_DMA_IOVA_COOKIE, IOMMU_DMA_MSI_COOKIE, + IOMMU_DMA_NESTED_MSI_COOKIE, }; struct iommu_dma_cookie { @@ -44,6 +48,7 @@ struct iommu_dma_cookie { dma_addr_t msi_iova; }; struct list_headmsi_page_list; + spinlock_t msi_lock; /* Domain for flush queue callback; NULL if flush queue not in use */ struct iommu_domain *fq_domain; @@ -62,6 +67,7 @@ static struct iommu_dma_cookie *cookie_alloc(enum iommu_dma_cookie_type type) cookie = kzalloc(sizeof(*cookie), GFP_KERNEL); if (cookie) { + spin_lock_init(>msi_lock); INIT_LIST_HEAD(>msi_page_list); cookie->type = type; } @@ -95,14 +101,17 @@ EXPORT_SYMBOL(iommu_get_dma_cookie); * * Users who manage their own IOVA allocation and do not want DMA API support, * but would still like to take advantage of automatic MSI remapping, can use - * this to initialise their own domain appropriately. Users should reserve a + * this to initialise their own domain appropriately. Users may reserve a * contiguous IOVA region, starting at @base, large enough to accommodate the * number of PAGE_SIZE mappings necessary to cover every MSI doorbell address - * used by the devices attached to @domain. + * used by the devices attached to @domain. 
The other way round is to provide + * usable iova pages through the iommu_dma_bind_doorbell API (nested stages + * use case) */ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base) { struct iommu_dma_cookie *cookie; + int nesting, ret; if (domain->type != IOMMU_DOMAIN_UNMANAGED) return -EINVAL; @@ -110,7 +119,12 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base) if (domain->iova_cookie) return -EEXIST; - cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE); + ret = iommu_domain_get_attr(domain, DOMAIN_ATTR_NESTING, ); + if (!ret && nesting) + cookie = cookie_alloc(IOMMU_DMA_NESTED_MSI_COOKIE); + else + cookie = cookie_alloc(IOMMU_DMA_MSI_COOKIE); + if (!cookie) return -ENOMEM; @@ -131,6 +145,7 @@ void iommu_put_dma_cookie(struct iommu_domain *domain) { struct iommu_dma_cookie *cookie = domain->iova_cookie; struct iommu_dma_msi_page *msi, *tmp; + bool s2_unmap = false; if (!cookie) return; @@ -138,7 +153,15 @@ void iommu_put_dma_cookie(struct
[PATCH v12 07/15] iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs
With nested stage support, soon we will need to invalidate S1 contexts and ranges tagged with an unmanaged asid, this latter being managed by the guest. So let's introduce 2 helpers that allow to invalidate with externally managed ASIDs Signed-off-by: Eric Auger --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 35 + 1 file changed, 29 insertions(+), 6 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 08ab0dd81049..73f7a56101dd 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1679,9 +1679,9 @@ static int arm_smmu_atc_inv_domain(struct arm_smmu_domain *smmu_domain, } /* IO_PGTABLE API */ -static void arm_smmu_tlb_inv_context(void *cookie) +static void __arm_smmu_tlb_inv_context(struct arm_smmu_domain *smmu_domain, + int ext_asid) { - struct arm_smmu_domain *smmu_domain = cookie; struct arm_smmu_device *smmu = smmu_domain->smmu; struct arm_smmu_cmdq_ent cmd; @@ -1692,7 +1692,11 @@ static void arm_smmu_tlb_inv_context(void *cookie) * insertion to guarantee those are observed before the TLBI. Do be * careful, 007. */ - if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { + if (ext_asid >= 0) { /* guest stage 1 invalidation */ + cmd.opcode = CMDQ_OP_TLBI_NH_ASID; + cmd.tlbi.asid = ext_asid; + cmd.tlbi.vmid = smmu_domain->s2_cfg->vmid; + } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg->cd.asid); } else { cmd.opcode = CMDQ_OP_TLBI_S12_VMALL; @@ -1703,9 +1707,17 @@ static void arm_smmu_tlb_inv_context(void *cookie) arm_smmu_atc_inv_domain(smmu_domain, 0, 0, 0); } -static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size, +static void arm_smmu_tlb_inv_context(void *cookie) +{ + struct arm_smmu_domain *smmu_domain = cookie; + + __arm_smmu_tlb_inv_context(smmu_domain, -1); +} + +static void __arm_smmu_tlb_inv_range(unsigned long iova, size_t size, size_t granule, bool leaf, - struct arm_smmu_domain *smmu_domain) + struct arm_smmu_domain *smmu_domain, + int ext_asid) { struct arm_smmu_device *smmu = smmu_domain->smmu; unsigned long start = iova, end = iova + size, num_pages = 0, tg = 0; @@ -1720,7 +1732,11 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size, if (!size) return; - if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { + if (ext_asid >= 0) { /* guest stage 1 invalidation */ + cmd.opcode = CMDQ_OP_TLBI_NH_VA; + cmd.tlbi.asid = ext_asid; + cmd.tlbi.vmid = smmu_domain->s2_cfg->vmid; + } else if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) { cmd.opcode = CMDQ_OP_TLBI_NH_VA; cmd.tlbi.asid = smmu_domain->s1_cfg->cd.asid; } else { @@ -1780,6 +1796,13 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size, arm_smmu_atc_inv_domain(smmu_domain, 0, start, size); } +static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size, + size_t granule, bool leaf, + struct arm_smmu_domain *smmu_domain) +{ + __arm_smmu_tlb_inv_range(iova, size, granule, leaf, smmu_domain, -1); +} + static void arm_smmu_tlb_inv_page_nosync(struct iommu_iotlb_gather *gather, unsigned long iova, size_t granule, void *cookie) -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH v12 14/15] iommu/smmuv3: Accept configs with more than one context descriptor
In preparation for vSVA, let's accept userspace-provided configs with more than one CD. We check the maximum number of CDs against the host IOMMU capability and also the format (linear versus 2-level). Signed-off-by: Eric Auger Signed-off-by: Shameer Kolothum --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 13 - 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 31a2500bde32..6549c3ee6af6 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2901,11 +2901,12 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain, if (smmu_domain->s1_cfg) goto out; - /* -* we currently support a single CD so s1fmt and s1dss -* fields are also ignored -*/ - if (cfg->pasid_bits) + list_for_each_entry(master, &smmu_domain->devices, domain_head) { + if (cfg->pasid_bits > master->ssid_bits) + goto out; + } + if (cfg->vendor_data.smmuv3.s1fmt == STRTAB_STE_0_S1FMT_64K_L2 && + !(smmu->features & ARM_SMMU_FEAT_2_LVL_CDTAB)) goto out; smmu_domain->s1_cfg = kzalloc(sizeof(*smmu_domain->s1_cfg), @@ -2916,6 +2917,8 @@ static int arm_smmu_attach_pasid_table(struct iommu_domain *domain, } smmu_domain->s1_cfg->cdcfg.cdtab_dma = cfg->base_ptr; + smmu_domain->s1_cfg->s1cdmax = cfg->pasid_bits; + smmu_domain->s1_cfg->s1fmt = cfg->vendor_data.smmuv3.s1fmt; smmu_domain->abort = false; break; default: -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH v12 10/15] iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement
In nested mode we enforce the rule that all devices belonging to the same iommu_domain share the same msi_domain. Indeed if there were several physical MSI doorbells being used within a single iommu_domain, it becomes really difficult to resolve the nested stage mapping translating into the correct physical doorbell. So let's forbid this situation. Signed-off-by: Eric Auger --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 41 + 1 file changed, 41 insertions(+) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 4b796693d697..de03ac111f76 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2265,6 +2265,37 @@ static void arm_smmu_detach_dev(struct arm_smmu_master *master) arm_smmu_install_ste_for_dev(master); } +static bool arm_smmu_share_msi_domain(struct iommu_domain *domain, + struct device *dev) +{ + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + struct irq_domain *irqd = dev_get_msi_domain(dev); + struct arm_smmu_master *master; + unsigned long flags; + bool share = false; + + if (!irqd) + return true; + + spin_lock_irqsave(_domain->devices_lock, flags); + list_for_each_entry(master, _domain->devices, domain_head) { + struct irq_domain *d = dev_get_msi_domain(master->dev); + + if (!d) + continue; + if (irqd != d) { + dev_info(dev, "Nested mode forbids to attach devices " +"using different physical MSI doorbells " +"to the same iommu_domain"); + goto unlock; + } + } + share = true; +unlock: + spin_unlock_irqrestore(_domain->devices_lock, flags); + return share; +} + static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) { int ret = 0; @@ -2316,6 +2347,16 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) ret = -EINVAL; goto out_unlock; } + /* +* In nested mode we must check all devices belonging to the +* domain share the same physical MSI doorbell. Otherwise nested +* stage MSI binding is not supported. +*/ + if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED && + !arm_smmu_share_msi_domain(domain, dev)) { + ret = -EINVAL; + goto out_unlock; + } master->domain = smmu_domain; -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH v12 11/15] iommu/smmuv3: Enforce incompatibility between nested mode and HW MSI regions
Nested mode currently is not compatible with HW MSI reserved regions: MSI transactions targeting these doorbells bypass the SMMU. Let's check that nested mode is not attempted in such a configuration. Signed-off-by: Eric Auger --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 23 +++-- 1 file changed, 21 insertions(+), 2 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index de03ac111f76..0c5ab4005f76 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -2296,6 +2296,23 @@ static bool arm_smmu_share_msi_domain(struct iommu_domain *domain, return share; } +static bool arm_smmu_has_hw_msi_resv_region(struct device *dev) +{ + struct iommu_resv_region *region; + bool has_msi_resv_region = false; + LIST_HEAD(resv_regions); + + iommu_get_resv_regions(dev, &resv_regions); + list_for_each_entry(region, &resv_regions, list) { + if (region->type == IOMMU_RESV_MSI) { + has_msi_resv_region = true; + break; + } + } + iommu_put_resv_regions(dev, &resv_regions); + return has_msi_resv_region; +} + static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) { int ret = 0; @@ -2350,10 +2367,12 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev) /* * In nested mode we must check all devices belonging to the * domain share the same physical MSI doorbell. Otherwise nested -* stage MSI binding is not supported. +* stage MSI binding is not supported. Also nested mode is not +* compatible with MSI HW reserved regions. */ if (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED && - !arm_smmu_share_msi_domain(domain, dev)) { + (!arm_smmu_share_msi_domain(domain, dev) || +arm_smmu_has_hw_msi_resv_region(dev))) { ret = -EINVAL; goto out_unlock; } -- 2.21.3 ___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH v12 05/15] iommu/smmuv3: Get prepared for nested stage support
When nested stage translation is setup, both s1_cfg and s2_cfg are allocated. We introduce a new smmu domain abort field that will be set upon guest stage1 configuration passing. arm_smmu_write_strtab_ent() is modified to write both stage fields in the STE and deal with the abort field. In nested mode, only stage 2 is "finalized" as the host does not own/configure the stage 1 context descriptor; guest does. Signed-off-by: Eric Auger --- v10 -> v11: - Fix an issue reported by Shameer when switching from with vSMMU to without vSMMU. Despite the spec does not seem to mention it seems to be needed to reset the 2 high 64b when switching from S1+S2 cfg to S1 only. Especially dst[3] needs to be reset (S2TTB). On some implementations, if the S2TTB is not reset, this causes a C_BAD_STE error --- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 66 + drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 2 + 2 files changed, 58 insertions(+), 10 deletions(-) diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c index 4baf9fafe462..9580090bd0c9 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c @@ -1181,8 +1181,10 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, * three cases at the moment: * * 1. Invalid (all zero) -> bypass/fault (init) -* 2. Bypass/fault -> translation/bypass (attach) -* 3. Translation/bypass -> bypass/fault (detach) +* 2. Bypass/fault -> single stage translation/bypass (attach) +* 3. Single or nested stage Translation/bypass -> bypass/fault (detach) +* 4. S2 -> S1 + S2 (attach_pasid_table) +* 5. S1 + S2 -> S2 (detach_pasid_table) * * Given that we can't update the STE atomically and the SMMU * doesn't read the thing in a defined order, that leaves us @@ -1193,7 +1195,8 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, * 3. 
Update Config, sync */ u64 val = le64_to_cpu(dst[0]); - bool ste_live = false; + bool abort, translate, s1_live = false, s2_live = false, ste_live; + bool nested = false; struct arm_smmu_device *smmu = NULL; struct arm_smmu_s1_cfg *s1_cfg = NULL; struct arm_smmu_s2_cfg *s2_cfg = NULL; @@ -1213,6 +1216,7 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, if (smmu_domain) { s1_cfg = smmu_domain->s1_cfg; s2_cfg = smmu_domain->s2_cfg; + nested = (smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED); } if (val & STRTAB_STE_0_V) { @@ -1220,23 +1224,37 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, case STRTAB_STE_0_CFG_BYPASS: break; case STRTAB_STE_0_CFG_S1_TRANS: + s1_live = true; + break; case STRTAB_STE_0_CFG_S2_TRANS: - ste_live = true; + s2_live = true; + break; + case STRTAB_STE_0_CFG_NESTED: + s1_live = true; + s2_live = true; break; case STRTAB_STE_0_CFG_ABORT: - BUG_ON(!disable_bypass); break; default: BUG(); /* STE corruption */ } } + ste_live = s1_live || s2_live; + /* Nuke the existing STE_0 value, as we're going to rewrite it */ val = STRTAB_STE_0_V; /* Bypass/fault */ - if (!smmu_domain || !(s1_cfg || s2_cfg)) { - if (!smmu_domain && disable_bypass) + + if (!smmu_domain) + abort = disable_bypass; + else + abort = smmu_domain->abort; + translate = s1_cfg || s2_cfg; + + if (abort || !translate) { + if (abort) val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT); else val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS); @@ -1254,8 +1272,18 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid, return; } + /* S1 or S2 translation */ + + BUG_ON(ste_live && !nested); + + if (ste_live) { + /* First invalidate the live STE */ + dst[0] = cpu_to_le64(STRTAB_STE_0_CFG_ABORT); + arm_smmu_sync_ste_for_sid(smmu, sid); + } + if (s1_cfg) { - BUG_ON(ste_live); + BUG_ON(s1_live); dst[1] = cpu_to_le64( FIELD_PREP(STRTAB_STE_1_S1DSS, STRTAB_STE_1_S1DSS_SSID0) | FIELD_PREP(STRTAB_STE_1_S1CIR,
[PATCH v12 00/15] SMMUv3 Nested Stage Setup (IOMMU part)
This series brings the IOMMU part of HW nested paging support in the SMMUv3. The VFIO part is submitted separately. The IOMMU API is extended to support 2 new API functionalities: 1) pass the guest stage 1 configuration 2) pass stage 1 MSI bindings Then those capabilities gets implemented in the SMMUv3 driver. The virtualizer passes information through the VFIO user API which cascades them to the iommu subsystem. This allows the guest to own stage 1 tables and context descriptors (so-called PASID table) while the host owns stage 2 tables and main configuration structures (STE). Best Regards Eric This series can be found at: https://github.com/eauger/linux/tree/5.10-rc4-2stage-v12 (including the VFIO part) The series includes a patch from Jean-Philippe. It is better to review the original patch: [PATCH v8 2/9] iommu/arm-smmu-v3: Maintain a SID->device structure The VFIO series is sent separately. History: v11 -> v12: - rebase on top of v5.10-rc4 Two new patches paving the way for vSVA/ARM (Shameer's input) - iommu/smmuv3: Accept configs with more than one context descriptor - iommu/smmuv3: Add PASID cache invalidation per PASID v10 -> v11: - S2TTB reset when S2 is off - fix compil issue when CONFIG_IOMMU_DMA is not set v9 -> v10: - rebase on top of 5.6.0-rc3 v8 -> v9: - rebase on 5.3 - split iommu/vfio parts v6 -> v8: - Implement VFIO-PCI device specific interrupt framework v7 -> v8: - rebase on top of v5.2-rc1 and especially 8be39a1a04c1 iommu/arm-smmu-v3: Add a master->domain pointer - dynamic alloc of s1_cfg/s2_cfg - __arm_smmu_tlb_inv_asid/s1_range_nosync - check there is no HW MSI regions - asid invalidation using pasid extended struct (change in the uapi) - add s1_live/s2_live checks - move check about support of nested stages in domain finalise - fixes in error reporting according to the discussion with Robin - reordered the patches to have first iommu/smmuv3 patches and then VFIO patches v6 -> v7: - removed device handle from bind/unbind_guest_msi - added "iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement" - added few uapi comments as suggested by Jean, Jacop and Alex v5 -> v6: - Fix compilation issue when CONFIG_IOMMU_API is unset v4 -> v5: - fix bug reported by Vincent: fault handler unregistration now happens in vfio_pci_release - IOMMU_FAULT_PERM_* moved outside of struct definition + small uapi changes suggested by Kean-Philippe (except fetch_addr) - iommu: introduce device fault report API: removed the PRI part. - see individual logs for more details - reset the ste abort flag on detach v3 -> v4: - took into account Alex, jean-Philippe and Robin's comments on v3 - rework of the smmuv3 driver integration - add tear down ops for msi binding and PASID table binding - fix S1 fault propagation - put fault reporting patches at the beginning of the series following Jean-Philippe's request - update of the cache invalidate and fault API uapis - VFIO fault reporting rework with 2 separate regions and one mmappable segment for the fault queue - moved to PATCH v2 -> v3: - When registering the S1 MSI binding we now store the device handle. This addresses Robin's comment about discimination of devices beonging to different S1 groups and using different physical MSI doorbells. 
- Change the fault reporting API: use VFIO_PCI_DMA_FAULT_IRQ_INDEX to set the eventfd and expose the faults through an mmappable fault region v1 -> v2: - Added the fault reporting capability - asid properly passed on invalidation (fix assignment of multiple devices) - see individual change logs for more info Eric Auger (15): iommu: Introduce attach/detach_pasid_table API iommu: Introduce bind/unbind_guest_msi iommu/arm-smmu-v3: Maintain a SID->device structure iommu/smmuv3: Dynamically allocate s1_cfg and s2_cfg iommu/smmuv3: Get prepared for nested stage support iommu/smmuv3: Implement attach/detach_pasid_table iommu/smmuv3: Allow stage 1 invalidation with unmanaged ASIDs iommu/smmuv3: Implement cache_invalidate dma-iommu: Implement NESTED_MSI cookie iommu/smmuv3: Nested mode single MSI doorbell per domain enforcement iommu/smmuv3: Enforce incompatibility between nested mode and HW MSI regions iommu/smmuv3: Implement bind/unbind_guest_msi iommu/smmuv3: Report non recoverable faults iommu/smmuv3: Accept configs with more than one context descriptor iommu/smmuv3: Add PASID cache invalidation per PASID drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 650 ++-- drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 98 ++- drivers/iommu/dma-iommu.c | 142 - drivers/iommu/iommu.c | 104 include/linux/dma-iommu.h | 16 + include/linux/iommu.h | 41 ++ include/uapi/linux/iommu.h | 54 ++ 7 files changed, 1035 insertions(+), 70 deletions(-) -- 2.21.3 ___ iommu mailing list
[PATCH v12 06/15] iommu/smmuv3: Implement attach/detach_pasid_table
On attach_pasid_table() we program STE S1 related info set
by the guest into the actual physical STEs. At minimum we
need to program the context descriptor GPA and compute
whether the stage1 is translated/bypassed or aborted.

Signed-off-by: Eric Auger

---
v7 -> v8:
- remove smmu->features check, now done on domain finalize

v6 -> v7:
- check versions and comment the fact we don't need to take
  into account s1dss and s1fmt

v3 -> v4:
- adapt to changes in iommu_pasid_table_config
- different programming convention at s1_cfg/s2_cfg/ste.abort

v2 -> v3:
- callback now is named set_pasid_table and struct fields
  are laid out differently.

v1 -> v2:
- invalidate the STE before changing them
- hold init_mutex
- handle new fields
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 98 +
 1 file changed, 98 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 9580090bd0c9..08ab0dd81049 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -2655,6 +2655,102 @@ static void arm_smmu_get_resv_regions(struct device *dev,
 	iommu_dma_get_resv_regions(dev, head);
 }
 
+static int arm_smmu_attach_pasid_table(struct iommu_domain *domain,
+				       struct iommu_pasid_table_config *cfg)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_master *master;
+	struct arm_smmu_device *smmu;
+	unsigned long flags;
+	int ret = -EINVAL;
+
+	if (cfg->format != IOMMU_PASID_FORMAT_SMMUV3)
+		return -EINVAL;
+
+	if (cfg->version != PASID_TABLE_CFG_VERSION_1 ||
+	    cfg->vendor_data.smmuv3.version != PASID_TABLE_SMMUV3_CFG_VERSION_1)
+		return -EINVAL;
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	smmu = smmu_domain->smmu;
+
+	if (!smmu)
+		goto out;
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+		goto out;
+
+	switch (cfg->config) {
+	case IOMMU_PASID_CONFIG_ABORT:
+		kfree(smmu_domain->s1_cfg);
+		smmu_domain->s1_cfg = NULL;
+		smmu_domain->abort = true;
+		break;
+	case IOMMU_PASID_CONFIG_BYPASS:
+		kfree(smmu_domain->s1_cfg);
+		smmu_domain->s1_cfg = NULL;
+		smmu_domain->abort = false;
+		break;
+	case IOMMU_PASID_CONFIG_TRANSLATE:
+		/* we do not support S1 <-> S1 transitions */
+		if (smmu_domain->s1_cfg)
+			goto out;
+
+		/*
+		 * we currently support a single CD so s1fmt and s1dss
+		 * fields are also ignored
+		 */
+		if (cfg->pasid_bits)
+			goto out;
+
+		smmu_domain->s1_cfg = kzalloc(sizeof(*smmu_domain->s1_cfg),
+					      GFP_KERNEL);
+		if (!smmu_domain->s1_cfg) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		smmu_domain->s1_cfg->cdcfg.cdtab_dma = cfg->base_ptr;
+		smmu_domain->abort = false;
+		break;
+	default:
+		goto out;
+	}
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head)
+		arm_smmu_install_ste_for_dev(master);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+	ret = 0;
+out:
+	mutex_unlock(&smmu_domain->init_mutex);
+	return ret;
+}
+
+static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
+{
+	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+	struct arm_smmu_master *master;
+	unsigned long flags;
+
+	mutex_lock(&smmu_domain->init_mutex);
+
+	if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
+		goto unlock;
+
+	kfree(smmu_domain->s1_cfg);
+	smmu_domain->s1_cfg = NULL;
+	smmu_domain->abort = true;
+
+	spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+	list_for_each_entry(master, &smmu_domain->devices, domain_head)
+		arm_smmu_install_ste_for_dev(master);
+	spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+unlock:
+	mutex_unlock(&smmu_domain->init_mutex);
+}
+
 static bool arm_smmu_dev_has_feature(struct device *dev,
 				     enum iommu_dev_features feat)
 {
@@ -2736,6 +2832,8 @@ static struct iommu_ops arm_smmu_ops = {
 	.of_xlate		= arm_smmu_of_xlate,
 	.get_resv_regions	= arm_smmu_get_resv_regions,
 	.put_resv_regions	= generic_iommu_put_resv_regions,
+	.attach_pasid_table	= arm_smmu_attach_pasid_table,
+	.detach_pasid_table	= arm_smmu_detach_pasid_table,
 	.dev_has_feat		= arm_smmu_dev_has_feature,
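To make the guest-visible semantics above concrete, here is a minimal
sketch of how a caller (e.g. the VFIO nested setup path) might drive the
new callback through the API from patch 01/15; the helper name and the
guest CD-table GPA argument are illustrative assumptions, not part of
this series:

/*
 * Sketch only: request TRANSLATE mode so the guest's CD table GPA is
 * programmed into the STEs. The helper name and guest_cd_table_gpa
 * are assumptions for illustration.
 */
#include <linux/iommu.h>
#include <uapi/linux/iommu.h>

static int example_install_guest_cd_table(struct iommu_domain *domain,
					  u64 guest_cd_table_gpa)
{
	struct iommu_pasid_table_config cfg = {
		.argsz		= sizeof(cfg),
		.version	= PASID_TABLE_CFG_VERSION_1,
		.format		= IOMMU_PASID_FORMAT_SMMUV3,
		.base_ptr	= guest_cd_table_gpa,
		.pasid_bits	= 0,	/* single CD: s1fmt/s1dss ignored */
		.config		= IOMMU_PASID_CONFIG_TRANSLATE,
		.vendor_data.smmuv3.version = PASID_TABLE_SMMUV3_CFG_VERSION_1,
	};

	/* installs the CD table GPA and reinstalls the STEs (see above) */
	return iommu_attach_pasid_table(domain, &cfg);
}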
[PATCH v12 03/15] iommu/arm-smmu-v3: Maintain a SID->device structure
When handling faults from the event or PRI queue, we need to find the
struct device associated with a SID. Add an rb-tree to keep track of
SIDs.

Signed-off-by: Eric Auger
Signed-off-by: Jean-Philippe Brucker
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 99 +
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 10 +++
 2 files changed, 109 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index e634bbe60573..d828d6cbeb0e 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -1350,6 +1350,32 @@ static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
 	return 0;
 }
 
+__maybe_unused
+static struct arm_smmu_master *
+arm_smmu_find_master(struct arm_smmu_device *smmu, u32 sid)
+{
+	struct rb_node *node;
+	struct arm_smmu_stream *stream;
+	struct arm_smmu_master *master = NULL;
+
+	mutex_lock(&smmu->streams_mutex);
+	node = smmu->streams.rb_node;
+	while (node) {
+		stream = rb_entry(node, struct arm_smmu_stream, node);
+		if (stream->id < sid) {
+			node = node->rb_right;
+		} else if (stream->id > sid) {
+			node = node->rb_left;
+		} else {
+			master = stream->master;
+			break;
+		}
+	}
+	mutex_unlock(&smmu->streams_mutex);
+
+	return master;
+}
+
 /* IRQ and event handlers */
 static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
 {
@@ -2306,6 +2332,69 @@ static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
 	return sid < limit;
 }
 
+static int arm_smmu_insert_master(struct arm_smmu_device *smmu,
+				  struct arm_smmu_master *master)
+{
+	int i;
+	int ret = 0;
+	struct arm_smmu_stream *new_stream, *cur_stream;
+	struct rb_node **new_node, *parent_node = NULL;
+
+	master->streams = kcalloc(master->num_sids,
+				  sizeof(struct arm_smmu_stream), GFP_KERNEL);
+	if (!master->streams)
+		return -ENOMEM;
+
+	mutex_lock(&smmu->streams_mutex);
+	for (i = 0; i < master->num_sids && !ret; i++) {
+		new_stream = &master->streams[i];
+		new_stream->id = master->sids[i];
+		new_stream->master = master;
+
+		new_node = &(smmu->streams.rb_node);
+		while (*new_node) {
+			cur_stream = rb_entry(*new_node, struct arm_smmu_stream,
+					      node);
+			parent_node = *new_node;
+			if (cur_stream->id > new_stream->id) {
+				new_node = &((*new_node)->rb_left);
+			} else if (cur_stream->id < new_stream->id) {
+				new_node = &((*new_node)->rb_right);
+			} else {
+				dev_warn(master->dev,
+					 "stream %u already in tree\n",
+					 cur_stream->id);
+				ret = -EINVAL;
+				break;
+			}
+		}
+
+		if (!ret) {
+			rb_link_node(&new_stream->node, parent_node, new_node);
+			rb_insert_color(&new_stream->node, &smmu->streams);
+		}
+	}
+	mutex_unlock(&smmu->streams_mutex);
+
+	return ret;
+}
+
+static void arm_smmu_remove_master(struct arm_smmu_device *smmu,
+				   struct arm_smmu_master *master)
+{
+	int i;
+
+	if (!master->streams)
+		return;
+
+	mutex_lock(&smmu->streams_mutex);
+	for (i = 0; i < master->num_sids; i++)
+		rb_erase(&master->streams[i].node, &smmu->streams);
+	mutex_unlock(&smmu->streams_mutex);
+
+	kfree(master->streams);
+}
+
 static struct iommu_ops arm_smmu_ops;
 
 static struct iommu_device *arm_smmu_probe_device(struct device *dev)
@@ -2369,6 +2458,10 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
 	master->ssid_bits = min_t(u8, master->ssid_bits,
 				  CTXDESC_LINEAR_CDMAX);
 
+	ret = arm_smmu_insert_master(smmu, master);
+	if (ret)
+		goto err_free_master;
+
 	return &smmu->iommu;
 
 err_free_master:
@@ -2381,14 +2474,17 @@ static void arm_smmu_release_device(struct device *dev)
 {
 	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 	struct arm_smmu_master *master;
+	struct arm_smmu_device *smmu;
 
 	if (!fwspec || fwspec->ops != &arm_smmu_ops)
 		return;
 
 	master = dev_iommu_priv_get(dev);
+	smmu = master->smmu;
 	WARN_ON(arm_smmu_master_sva_enabled(master));
 	arm_smmu_detach_dev(master);
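For context, a hedged sketch of the intended consumer on the fault
path; the handler shape and the EVTQ_0_SID mask are assumptions here,
since the real event/PRI consumers only arrive later in the series:

/*
 * Sketch only: resolve a SID back to its master when draining the
 * event queue. EVTQ_0_SID is assumed to mask the StreamID field of
 * the first event record doubleword (EVTQ_0_ID already exists in
 * arm-smmu-v3.h); <linux/bitfield.h> is already included by the driver.
 */
static void example_report_event(struct arm_smmu_device *smmu, u64 *evt)
{
	u32 sid = FIELD_GET(EVTQ_0_SID, evt[0]);
	struct arm_smmu_master *master;

	master = arm_smmu_find_master(smmu, sid);
	if (!master)
		return;	/* SID no longer registered: device was released */

	dev_dbg(master->dev, "event 0x%02llx for SID %u\n",
		FIELD_GET(EVTQ_0_ID, evt[0]), sid);
}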
[PATCH v12 02/15] iommu: Introduce bind/unbind_guest_msi
On ARM, MSIs are translated by the SMMU. An IOVA is allocated for each
MSI doorbell. If both the host and the guest are exposed with SMMUs, we
end up with 2 different IOVAs allocated by each. The guest allocates an
IOVA (gIOVA) to map onto the guest MSI doorbell (gDB). The host
allocates another IOVA (hIOVA) to map onto the physical doorbell (hDB).

So we end up with 2 untied mappings:

         S1            S2
gIOVA    ->    gDB
               hIOVA    ->    hDB

Currently the PCI device is programmed by the host with hIOVA as the
MSI doorbell, so this does not work.

This patch introduces an API to pass gIOVA/gDB to the host so that
gIOVA can be reused by the host instead of re-allocating a new IOVA.
The goal is to create the following nested mapping:

         S1            S2
gIOVA    ->    gDB     ->    hDB

and program the PCI device with the gIOVA MSI doorbell.

In case we have several devices attached to this nested domain (devices
belonging to the same group), they cannot be isolated on the guest side
either, so they should also end up in the same domain on the guest
side. We will enforce that all the devices attached to the host iommu
domain use the same physical doorbell, and similarly that a single
virtual doorbell mapping gets registered (a single virtual doorbell is
used on the guest as well).

Signed-off-by: Eric Auger

---
v7 -> v8:
- dummy iommu_unbind_guest_msi turned into a void function

v6 -> v7:
- remove the device handle parameter.
- Add comments saying there can only be a single MSI binding
  registered per iommu_domain

v5 -> v6:
- fix compile issue when IOMMU_API is not set

v3 -> v4:
- add unbind

v2 -> v3:
- add a struct device handle
---
 drivers/iommu/iommu.c | 37 +
 include/linux/iommu.h | 20 
 2 files changed, 57 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b061bf4c3bb2..3f311e25d6e2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2251,6 +2251,43 @@ static void __iommu_detach_device(struct iommu_domain *domain,
 	trace_detach_device_from_domain(dev);
 }
 
+/**
+ * iommu_bind_guest_msi - Passes the stage1 GIOVA/GPA mapping of a
+ * virtual doorbell
+ *
+ * @domain: iommu domain the stage 1 mapping will be attached to
+ * @giova: iova allocated by the guest
+ * @gpa: guest physical address of the virtual doorbell
+ * @size: granule size used for the mapping
+ *
+ * The associated IOVA can be reused by the host to create a nested
+ * stage2 binding mapping translating into the physical doorbell used
+ * by the devices attached to the domain.
+ *
+ * All devices within the domain must share the same physical doorbell.
+ * A single MSI GIOVA/GPA mapping can be attached to an iommu_domain.
+ */
+int iommu_bind_guest_msi(struct iommu_domain *domain,
+			 dma_addr_t giova, phys_addr_t gpa, size_t size)
+{
+	if (unlikely(!domain->ops->bind_guest_msi))
+		return -ENODEV;
+
+	return domain->ops->bind_guest_msi(domain, giova, gpa, size);
+}
+EXPORT_SYMBOL_GPL(iommu_bind_guest_msi);
+
+void iommu_unbind_guest_msi(struct iommu_domain *domain,
+			    dma_addr_t iova)
+{
+	if (unlikely(!domain->ops->unbind_guest_msi))
+		return;
+
+	domain->ops->unbind_guest_msi(domain, iova);
+}
+EXPORT_SYMBOL_GPL(iommu_unbind_guest_msi);
+
 void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
 {
 	struct iommu_group *group;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 464fcbecf841..35819bff03bc 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -225,6 +225,8 @@ struct iommu_iotlb_gather {
  * @sva_unbind_gpasid: unbind guest pasid and mm
  * @attach_pasid_table: attach a pasid table
  * @detach_pasid_table: detach the pasid table
+ * @bind_guest_msi: provides a stage1 giova/gpa MSI doorbell mapping
+ * @unbind_guest_msi: withdraw a stage1 giova/gpa MSI doorbell mapping
  * @def_domain_type: device default domain type, return value:
  *		- IOMMU_DOMAIN_IDENTITY: must use an identity domain
  *		- IOMMU_DOMAIN_DMA: must use a dma domain
@@ -305,6 +307,10 @@ struct iommu_ops {
 	int (*def_domain_type)(struct device *dev);
 
+	int (*bind_guest_msi)(struct iommu_domain *domain,
+			      dma_addr_t giova, phys_addr_t gpa, size_t size);
+	void (*unbind_guest_msi)(struct iommu_domain *domain, dma_addr_t giova);
+
 	unsigned long pgsize_bitmap;
 	struct module *owner;
 };
@@ -444,6 +450,10 @@ extern int iommu_attach_pasid_table(struct iommu_domain *domain,
 extern int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
 					 void __user *udata);
 extern void iommu_detach_pasid_table(struct iommu_domain *domain);
+extern int iommu_bind_guest_msi(struct iommu_domain *domain,
+				dma_addr_t giova, phys_addr_t gpa, size_t size);
+extern void iommu_unbind_guest_msi(struct iommu_domain *domain,
+				   dma_addr_t giova);
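To make the flow concrete, a minimal sketch of the expected bind/unbind
pairing from the VMM/VFIO side; the helper names and the 4K granule are
illustrative assumptions, not part of this patch:

/*
 * Sketch: tie the guest's MSI doorbell mapping (gIOVA -> gDB) to the
 * nested domain before MSIs are configured, and withdraw it on
 * teardown. SZ_4K stands in for whatever granule the guest used.
 */
#include <linux/iommu.h>
#include <linux/sizes.h>

static int example_setup_guest_msi(struct iommu_domain *nested_domain,
				   dma_addr_t giova, phys_addr_t gdb)
{
	/* lets the host reuse gIOVA: gIOVA -> gDB -> hDB (S1 then S2) */
	return iommu_bind_guest_msi(nested_domain, giova, gdb, SZ_4K);
}

static void example_teardown_guest_msi(struct iommu_domain *nested_domain,
				       dma_addr_t giova)
{
	iommu_unbind_guest_msi(nested_domain, giova);
}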
[PATCH v12 01/15] iommu: Introduce attach/detach_pasid_table API
In the virtualization use case, when a guest is assigned a PCI host
device, protected by a virtual IOMMU on the guest, the physical IOMMU
must be programmed to be consistent with the guest mappings. If the
physical IOMMU supports two translation stages, it makes sense to
program guest mappings onto the first stage/level (ARM/Intel
terminology) while the host owns stage/level 2.

In that case, it is mandatory to trap guest configuration settings and
pass them to the physical iommu driver.

This patch adds a new API to the iommu subsystem that allows setting
and unsetting the pasid table information. A generic
iommu_pasid_table_config struct is introduced in a new iommu.h uapi
header. This is going to be used by the VFIO user API.

Signed-off-by: Jean-Philippe Brucker
Signed-off-by: Liu, Yi L
Signed-off-by: Ashok Raj
Signed-off-by: Jacob Pan
Signed-off-by: Eric Auger

---
v11 -> v12:
- add argsz, name the union
---
 drivers/iommu/iommu.c      | 67 ++
 include/linux/iommu.h      | 21 
 include/uapi/linux/iommu.h | 54 ++
 3 files changed, 142 insertions(+)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b53446bb8c6b..b061bf4c3bb2 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2171,6 +2171,73 @@ int iommu_uapi_sva_unbind_gpasid(struct iommu_domain *domain, struct device *dev
 }
 EXPORT_SYMBOL_GPL(iommu_uapi_sva_unbind_gpasid);
 
+int iommu_attach_pasid_table(struct iommu_domain *domain,
+			     struct iommu_pasid_table_config *cfg)
+{
+	if (unlikely(!domain->ops->attach_pasid_table))
+		return -ENODEV;
+
+	return domain->ops->attach_pasid_table(domain, cfg);
+}
+
+int iommu_uapi_attach_pasid_table(struct iommu_domain *domain,
+				  void __user *uinfo)
+{
+	struct iommu_pasid_table_config pasid_table_data = { 0 };
+	u32 minsz;
+
+	if (unlikely(!domain->ops->attach_pasid_table))
+		return -ENODEV;
+
+	/*
+	 * No new spaces can be added before the variable sized union, the
+	 * minimum size is the offset to the union.
+	 */
+	minsz = offsetof(struct iommu_pasid_table_config, vendor_data);
+
+	/* Copy minsz from user to get flags and argsz */
+	if (copy_from_user(&pasid_table_data, uinfo, minsz))
+		return -EFAULT;
+
+	/* Fields before the variable size union are mandatory */
+	if (pasid_table_data.argsz < minsz)
+		return -EINVAL;
+
+	/* PASID and address granu require additional info beyond minsz */
+	if (pasid_table_data.version != PASID_TABLE_CFG_VERSION_1)
+		return -EINVAL;
+	if (pasid_table_data.format == IOMMU_PASID_FORMAT_SMMUV3 &&
+	    pasid_table_data.argsz <
+		offsetofend(struct iommu_pasid_table_config, vendor_data.smmuv3))
+		return -EINVAL;
+
+	/*
+	 * User might be using a newer UAPI header which has a larger data
+	 * size, we shall support the existing flags within the current
+	 * size. Copy the remaining user data _after_ minsz but not more
+	 * than the current kernel supported size.
+	 */
+	if (copy_from_user((void *)&pasid_table_data + minsz, uinfo + minsz,
+			   min_t(u32, pasid_table_data.argsz,
+				 sizeof(pasid_table_data)) - minsz))
+		return -EFAULT;
+
+	/* Now the argsz is validated, check the content */
+	if (pasid_table_data.config < 1 || pasid_table_data.config > 3)
+		return -EINVAL;
+
+	return domain->ops->attach_pasid_table(domain, &pasid_table_data);
+}
+EXPORT_SYMBOL_GPL(iommu_uapi_attach_pasid_table);
+
+void iommu_detach_pasid_table(struct iommu_domain *domain)
+{
+	if (unlikely(!domain->ops->detach_pasid_table))
+		return;
+
+	domain->ops->detach_pasid_table(domain);
+}
+EXPORT_SYMBOL_GPL(iommu_detach_pasid_table);
+
 static void __iommu_detach_device(struct iommu_domain *domain,
 				  struct device *dev)
 {
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index b95a6f8db6ff..464fcbecf841 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -223,6 +223,8 @@ struct iommu_iotlb_gather {
  * @cache_invalidate: invalidate translation caches
  * @sva_bind_gpasid: bind guest pasid and mm
  * @sva_unbind_gpasid: unbind guest pasid and mm
+ * @attach_pasid_table: attach a pasid table
+ * @detach_pasid_table: detach the pasid table
  * @def_domain_type: device default domain type, return value:
  *		- IOMMU_DOMAIN_IDENTITY: must use an identity domain
  *		- IOMMU_DOMAIN_DMA: must use a dma domain
@@ -287,6 +289,9 @@ struct iommu_ops {
 			      void *drvdata);
 	void (*sva_unbind)(struct iommu_sva *handle);
 	u32 (*sva_get_pasid)(struct iommu_sva *handle);
+	int (*attach_pasid_table)(struct iommu_domain *domain,
+				  struct iommu_pasid_table_config *cfg);
+	void (*detach_pasid_table)(struct iommu_domain *domain);
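Seen from userspace, the argsz handshake above is what lets an older
binary keep working against a newer kernel: the kernel copies exactly
min(argsz, its own struct size) bytes past minsz. A minimal user-side
sketch (the VFIO ioctl that carries the struct belongs to a separate
part of the series and is assumed here):

/*
 * Userspace sketch: fill the config so the kernel knows how much of
 * the struct this binary was built against. cd_table_gpa is an
 * illustrative assumption.
 */
#include <string.h>
#include <linux/iommu.h>

static void example_fill_pasid_table_config(struct iommu_pasid_table_config *cfg,
					    __u64 cd_table_gpa)
{
	memset(cfg, 0, sizeof(*cfg));
	cfg->argsz    = sizeof(*cfg);	/* size this binary knows about */
	cfg->version  = PASID_TABLE_CFG_VERSION_1;
	cfg->format   = IOMMU_PASID_FORMAT_SMMUV3;
	cfg->base_ptr = cd_table_gpa;
	cfg->config   = IOMMU_PASID_CONFIG_TRANSLATE;
	cfg->vendor_data.smmuv3.version = PASID_TABLE_SMMUV3_CFG_VERSION_1;
}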
[PATCH v12 04/15] iommu/smmuv3: Dynamically allocate s1_cfg and s2_cfg
In preparation for the introduction of nested stages let's turn s1_cfg
and s2_cfg fields into pointers which are dynamically allocated
depending on the smmu_domain stage. In nested mode, both stages will
coexist and s1_cfg will be allocated when the guest configuration gets
passed.

Signed-off-by: Eric Auger
---
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 83 -
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h |  6 +-
 2 files changed, 48 insertions(+), 41 deletions(-)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index d828d6cbeb0e..4baf9fafe462 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -953,9 +953,9 @@ static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
 	unsigned int idx;
 	struct arm_smmu_l1_ctx_desc *l1_desc;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg->cdcfg;
 
-	if (smmu_domain->s1_cfg.s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
+	if (smmu_domain->s1_cfg->s1fmt == STRTAB_STE_0_S1FMT_LINEAR)
 		return cdcfg->cdtab + ssid * CTXDESC_CD_DWORDS;
 
 	idx = ssid >> CTXDESC_SPLIT;
@@ -990,7 +990,7 @@ int arm_smmu_write_ctx_desc(struct arm_smmu_domain *smmu_domain, int ssid,
 	__le64 *cdptr;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
 
-	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg.s1cdmax)))
+	if (WARN_ON(ssid >= (1 << smmu_domain->s1_cfg->s1cdmax)))
 		return -E2BIG;
 
 	cdptr = arm_smmu_get_cd_ptr(smmu_domain, ssid);
@@ -1056,7 +1056,7 @@ static int arm_smmu_alloc_cd_tables(struct arm_smmu_domain *smmu_domain)
 	size_t l1size;
 	size_t max_contexts;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
+	struct arm_smmu_s1_cfg *cfg = smmu_domain->s1_cfg;
 	struct arm_smmu_ctx_desc_cfg *cdcfg = &cfg->cdcfg;
 
 	max_contexts = 1 << cfg->s1cdmax;
@@ -1104,7 +1104,7 @@ static void arm_smmu_free_cd_tables(struct arm_smmu_domain *smmu_domain)
 	int i;
 	size_t size, l1size;
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
-	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg.cdcfg;
+	struct arm_smmu_ctx_desc_cfg *cdcfg = &smmu_domain->s1_cfg->cdcfg;
 
 	if (cdcfg->l1_desc) {
 		size = CTXDESC_L2_ENTRIES * (CTXDESC_CD_DWORDS << 3);
@@ -1211,17 +1211,8 @@ static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
 	}
 
 	if (smmu_domain) {
-		switch (smmu_domain->stage) {
-		case ARM_SMMU_DOMAIN_S1:
-			s1_cfg = &smmu_domain->s1_cfg;
-			break;
-		case ARM_SMMU_DOMAIN_S2:
-		case ARM_SMMU_DOMAIN_NESTED:
-			s2_cfg = &smmu_domain->s2_cfg;
-			break;
-		default:
-			break;
-		}
+		s1_cfg = smmu_domain->s1_cfg;
+		s2_cfg = smmu_domain->s2_cfg;
 	}
 
 	if (val & STRTAB_STE_0_V) {
@@ -1664,10 +1655,10 @@ static void arm_smmu_tlb_inv_context(void *cookie)
 	 * careful, 007.
 	 */
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
-		arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg.cd.asid);
+		arm_smmu_tlb_inv_asid(smmu, smmu_domain->s1_cfg->cd.asid);
 	} else {
 		cmd.opcode	= CMDQ_OP_TLBI_S12_VMALL;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg->vmid;
 		arm_smmu_cmdq_issue_cmd(smmu, &cmd);
 		arm_smmu_cmdq_issue_sync(smmu);
 	}
@@ -1693,10 +1684,10 @@ static void arm_smmu_tlb_inv_range(unsigned long iova, size_t size,
 
 	if (smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		cmd.opcode	= CMDQ_OP_TLBI_NH_VA;
-		cmd.tlbi.asid	= smmu_domain->s1_cfg.cd.asid;
+		cmd.tlbi.asid	= smmu_domain->s1_cfg->cd.asid;
 	} else {
 		cmd.opcode	= CMDQ_OP_TLBI_S2_IPA;
-		cmd.tlbi.vmid	= smmu_domain->s2_cfg.vmid;
+		cmd.tlbi.vmid	= smmu_domain->s2_cfg->vmid;
 	}
 
 	if (smmu->features & ARM_SMMU_FEAT_RANGE_INV) {
@@ -1846,24 +1837,25 @@ static void arm_smmu_domain_free(struct iommu_domain *domain)
 {
 	struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
 	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	struct arm_smmu_s1_cfg *s1_cfg = smmu_domain->s1_cfg;
+	struct arm_smmu_s2_cfg *s2_cfg = smmu_domain->s2_cfg;
 
 	iommu_put_dma_cookie(domain);
 	free_io_pgtable_ops(smmu_domain->pgtbl_ops);
 
 	/* Free the CD and ASID, if we allocated them */
-
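As a hedged illustration of the ownership rule this patch establishes
(a missing stage is simply a NULL pointer; the helper name is
illustrative, not part of the diff):

/*
 * Sketch of the lifetime after this patch: s1_cfg/s2_cfg are heap
 * objects hanging off the domain, so S1 and S2 can coexist in nested
 * mode and teardown is uniform.
 */
static void example_free_stage_cfgs(struct arm_smmu_domain *smmu_domain)
{
	/* kfree(NULL) is a no-op, so unallocated stages need no check */
	kfree(smmu_domain->s1_cfg);
	smmu_domain->s1_cfg = NULL;
	kfree(smmu_domain->s2_cfg);
	smmu_domain->s2_cfg = NULL;
}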
Re: iommu/vt-d: Cure VF irqdomain hickup
Hi Thomas,

On Thu, Nov 12, 2020 at 8:16 PM Thomas Gleixner wrote:
> The recent changes to store the MSI irqdomain pointer in struct device
> missed that Intel DMAR does not register virtual function devices. Due to
> that a VF device gets the plain PCI-MSI domain assigned and then issues
> compat MSI messages which get caught by the interrupt remapping unit.
>
> Cure that by inheriting the irq domain from the physical function
> device.
>
> That's a temporary workaround. The correct fix is to inherit the irq domain
> from the bus, but that's a larger effort which needs quite some other
> changes to the way how x86 manages PCI and MSI domains.
>
> Fixes: 85a8dfc57a0b ("iommm/vt-d: Store irq domain in struct device")
> Reported-by: Jason Gunthorpe
> Signed-off-by: Thomas Gleixner
> ---
>  drivers/iommu/intel/dmar.c | 19 ++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
>
> --- a/drivers/iommu/intel/dmar.c
> +++ b/drivers/iommu/intel/dmar.c
> @@ -333,6 +333,11 @@ static void dmar_pci_bus_del_dev(struct dmar_pci_notify_info *info)
>  	dmar_iommu_notify_scope_dev(info);
>  }
>
> +static inline void vf_inherit_msi_domain(struct pci_dev *pdev)
> +{
> +	dev_set_msi_domain(&pdev->dev,
> +			   dev_get_msi_domain(&pdev->physfn->dev));

If CONFIG_PCI_ATS is not set:

    error: 'struct pci_dev' has no member named 'physfn'

http://kisskb.ellerman.id.au/kisskb/buildresult/14400927/

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like
that.
                                -- Linus Torvalds
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
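For reference, one conventional way out, as a sketch: pci_dev::physfn
only exists under CONFIG_PCI_ATS, so either guard the helper or use
pci_physfn(), which is defined in both configurations and falls back to
the device itself when ATS support is compiled out. Which route to take
is of course the maintainer's call.

/*
 * Sketch: compile-time guard so !CONFIG_PCI_ATS builds keep working.
 * The stub variant simply leaves the default MSI domain in place.
 */
#ifdef CONFIG_PCI_ATS
static inline void vf_inherit_msi_domain(struct pci_dev *pdev)
{
	dev_set_msi_domain(&pdev->dev,
			   dev_get_msi_domain(&pdev->physfn->dev));
}
#else
static inline void vf_inherit_msi_domain(struct pci_dev *pdev) { }
#endif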