Re: [PATCH v2] iommu/ipmmu-vmsa: Add r8a7796 DT binding
On Thu, Oct 27, 2016 at 07:45:10PM +0900, Magnus Damm wrote:
> From: Magnus Damm
>
> Update the IPMMU DT binding documentation to include the r8a7796 compat
> string for R-Car M3-W.
>
> Signed-off-by: Magnus Damm
> Acked-by: Laurent Pinchart
> ---
>
> Changes since V1:
> - Added Acked-by from Laurent - thanks!
>
> Now broken out, however earlier V1 posted as part of:
> [PATCH 0/3] iommu/ipmmu-vmsa: Initial r8a7796 support
>
>  Documentation/devicetree/bindings/iommu/renesas,ipmmu-vmsa.txt | 1 +
>  1 file changed, 1 insertion(+)

Acked-by: Rob Herring

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
Re: [PATCH] iommu/vt-d: Fix IOMMU lookup for VF's
On Fri, 2016-10-21 at 15:32 -0700, Ashok Raj wrote:
> The IOMMU driver must pick the same IOMMU as that of a Physical Function
> (PF) for any of its Virtual Functions (VFs). It is not practical to list
> all the VFs in the DMAR scope, as this list could be quite large. Linux
> also ignores any VFs listed in DMAR. See dmar_pci_bus_notifier() for
> virtfn handling.
>
> Since the driver is looking for the BDF of the VF, it will not find one
> from searching the DRHD listed in the BIOS. As a result, the IOMMU driver
> incorrectly associates the VFs with the INCLUDE_ALL iommu.
>
> This patch looks up the IOMMU of the PF when handling VFs.

I've made some cosmetic fixes — adding comments, and tweaking the commit
message — and applied this to my tree. Thanks.

-- 
dwmw2
[RFC PATCH] x86: Allow IOMMU drivers to Hook arch_setup_dma_ops()
The Intel IOMMU code registers a notifier and plays nasty tricks to
fill in device pointers in its DMAR scope tables when devices are
detected, then later compares the device pointers in those tables when
it needs to find the IOMMU unit for a given device. If we let it use
arch_setup_dma_ops() to match the device to an IOMMU unit at
device_register() time, this gets a whole lot saner.

Signed-off-by: David Woodhouse
---
Of course it's still not actually *sane*. We have per-device DMA ops,
but per-bus IOMMU ops. What we actually want to do is ditch the DMA ops
entirely and simply do it all through the IOMMU ops, like ARM does in
its arch/arm/mm/dma-mapping.c. I think we can do an iova-dma-mapping.c
with just an additional flag in the IOMMU domain which says that it can
do lazy unmaps. And IOMMU ops should be per-device of course, not
per-bus. But this is a start...

 arch/x86/include/asm/dma-mapping.h | 2 ++
 arch/x86/include/asm/x86_init.h    | 3 +++
 arch/x86/kernel/x86_init.c         | 3 +++
 3 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/dma-mapping.h b/arch/x86/include/asm/dma-mapping.h
index 4446162..b3888a0 100644
--- a/arch/x86/include/asm/dma-mapping.h
+++ b/arch/x86/include/asm/dma-mapping.h
@@ -42,6 +42,8 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 bool arch_dma_alloc_attrs(struct device **dev, gfp_t *gfp);
 #define arch_dma_alloc_attrs arch_dma_alloc_attrs
 
+#define arch_setup_dma_ops x86_platform.iommu_setup_dma_ops
+
 #define HAVE_ARCH_DMA_SUPPORTED 1
 extern int dma_supported(struct device *hwdev, u64 mask);
 
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 6ba7931..6872dcc 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -7,6 +7,7 @@ struct mpc_bus;
 struct mpc_cpu;
 struct mpc_table;
 struct cpuinfo_x86;
+struct iommu_ops;
 
 /**
  * struct x86_init_mpparse - platform specific mpparse ops
@@ -207,6 +208,8 @@ struct x86_platform_ops {
 	void (*get_wallclock)(struct timespec *ts);
 	int (*set_wallclock)(const struct timespec *ts);
 	void (*iommu_shutdown)(void);
+	void (*iommu_setup_dma_ops)(struct device *dev, u64 dma_base, u64 size,
+				    const struct iommu_ops *iommu, bool coherent);
 	bool (*is_untracked_pat_range)(u64 start, u64 end);
 	void (*nmi_init)(void);
 	unsigned char (*get_nmi_reason)(void);
diff --git a/arch/x86/kernel/x86_init.c b/arch/x86/kernel/x86_init.c
index 0bd9f12..5e54a72 100644
--- a/arch/x86/kernel/x86_init.c
+++ b/arch/x86/kernel/x86_init.c
@@ -28,6 +28,8 @@ void x86_init_noop(void) { }
 void __init x86_init_uint_noop(unsigned int unused) { }
 int __init iommu_init_noop(void) { return 0; }
 void iommu_shutdown_noop(void) { }
+void iommu_setup_dma_ops_noop(struct device *dev, u64 dma_base, u64 size,
+			      const struct iommu_ops *iommu, bool coherent) { }
 
 /*
  * The platform setup functions are preset with the default functions
@@ -97,6 +99,7 @@ struct x86_platform_ops x86_platform __ro_after_init = {
 	.get_wallclock			= mach_get_cmos_time,
 	.set_wallclock			= mach_set_rtc_mmss,
 	.iommu_shutdown			= iommu_shutdown_noop,
+	.iommu_setup_dma_ops		= iommu_setup_dma_ops_noop,
 	.is_untracked_pat_range		= is_ISA_range,
 	.nmi_init			= default_nmi_init,
 	.get_nmi_reason			= default_get_nmi_reason,
-- 
2.5.5
Re: [PATCH] iommu/vt-d: Fix the size calculation of pasid table
On Wed, 2016-10-12 at 13:17 +0100, David Woodhouse wrote:
> Yes, that looks correct. I think we may also need to limit it, because
> full 20-bit PASID support means we'll attempt an order 11 allocation.
> But that's certainly correct for now

Actually, not quite correct. You fixed the allocation but not the free.
And Mika had reported that even the *correct* allocation was failing
because it was too large. So I've done it differently (untested)...

-
Subject: [PATCH] iommu/vt-d: Fix PASID table allocation

Somehow I ended up with an off-by-three error in calculating the size
of the PASID and PASID State tables, which triggers allocation failures
as those tables unfortunately have to be physically contiguous.

In fact, even the *correct* maximum size of 8MiB is problematic and is
wont to lead to allocation failures. Since I have extracted a promise
that this *will* be fixed in hardware, I'm happy to limit it on the
current hardware to a maximum of 0x20000 PASIDs, which gives us 1MiB
tables — still not ideal, but better than before.

Reported by Mika Kuoppala and also by Xunlei Pang who submitted a
simpler patch to fix only the allocation (and not the free) to the
"correct" limit... which was still problematic.

Signed-off-by: David Woodhouse
---
 drivers/iommu/intel-svm.c   | 28 ++++++++++++++++++-----------
 include/linux/intel-iommu.h |  1 +
 2 files changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
index 8ebb353..cb72e00 100644
--- a/drivers/iommu/intel-svm.c
+++ b/drivers/iommu/intel-svm.c
@@ -39,10 +39,18 @@ int intel_svm_alloc_pasid_tables(struct intel_iommu *iommu)
 	struct page *pages;
 	int order;
 
-	order = ecap_pss(iommu->ecap) + 7 - PAGE_SHIFT;
-	if (order < 0)
-		order = 0;
-
+	/* Start at 2 because it's defined as 2^(1+PSS) */
+	iommu->pasid_max = 2 << ecap_pss(iommu->ecap);
+
+	/* Eventually I'm promised we will get a multi-level PASID table
+	 * and it won't have to be physically contiguous. Until then,
+	 * limit the size because 8MiB contiguous allocations can be hard
+	 * to come by. The limit of 0x20000, which is 1MiB for each of
+	 * the PASID and PASID-state tables, is somewhat arbitrary. */
+	if (iommu->pasid_max > 0x20000)
+		iommu->pasid_max = 0x20000;
+
+	order = get_order(sizeof(struct pasid_entry) * iommu->pasid_max);
 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!pages) {
 		pr_warn("IOMMU: %s: Failed to allocate PASID table\n",
@@ -53,6 +61,8 @@ int intel_svm_alloc_pasid_tables(struct intel_iommu *iommu)
 	pr_info("%s: Allocated order %d PASID table.\n", iommu->name, order);
 
 	if (ecap_dis(iommu->ecap)) {
+		/* Just making it explicit... */
+		BUILD_BUG_ON(sizeof(struct pasid_entry) != sizeof(struct pasid_state_entry));
 		pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 		if (pages)
 			iommu->pasid_state_table = page_address(pages);
@@ -68,11 +78,7 @@ int intel_svm_alloc_pasid_tables(struct intel_iommu *iommu)
 
 int intel_svm_free_pasid_tables(struct intel_iommu *iommu)
 {
-	int order;
-
-	order = ecap_pss(iommu->ecap) + 7 - PAGE_SHIFT;
-	if (order < 0)
-		order = 0;
+	int order = get_order(sizeof(struct pasid_entry) * iommu->pasid_max);
 
 	if (iommu->pasid_table) {
 		free_pages((unsigned long)iommu->pasid_table, order);
@@ -371,8 +377,8 @@ int intel_svm_bind_mm(struct device *dev, int *pasid, int flags, struct svm_dev_
 	}
 	svm->iommu = iommu;
 
-	if (pasid_max > 2 << ecap_pss(iommu->ecap))
-		pasid_max = 2 << ecap_pss(iommu->ecap);
+	if (pasid_max > iommu->pasid_max)
+		pasid_max = iommu->pasid_max;
 
 	/* Do not use PASID 0 in caching mode (virtualised IOMMU) */
 	ret = idr_alloc(&iommu->pasid_idr, svm,
diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
index 2d9b6500..d49e26c 100644
--- a/include/linux/intel-iommu.h
+++ b/include/linux/intel-iommu.h
@@ -429,6 +429,7 @@ struct intel_iommu {
 	struct page_req_dsc *prq;
 	unsigned char prq_name[16];	/* Name for PRQ interrupt */
 	struct idr pasid_idr;
+	u32 pasid_max;
 #endif
 	struct q_inval *qi;		/* Queued invalidation info */
 	u32 *iommu_state;	/* Store iommu states between suspend and resume. */
-- 
2.5.5

-- 
dwmw2