Re: [PATCH] iommu/amd: Fix IOMMU page flush when detach all devices from a domain
Hi Suravee,

On Tue, Jan 22, 2019 at 03:53:18PM +, Suthikulpanit, Suravee wrote:
> Thanks for the detail. Alright then, let's just go with the version you
> sent on 1/16/19. Do you want me to resend V3 with those changes, or
> would you be taking care of that?

Please send me a v3 based on the diff I sent last week. Also add a
separate patch which adds the missing DTE flush for the alias entry.

Thanks,

	Joerg
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
[PATCH] iommu/mediatek: Use correct fwspec in mtk_iommu_add_device()
From: Joerg Roedel

The mtk_iommu_add_device() function keeps the fwspec in an on-stack
pointer and calls mtk_iommu_create_mapping(), which might change its
source, dev->iommu_fwspec. This causes the on-stack pointer to become
stale and the device initialization to fail.

Update the on-stack fwspec pointer after mtk_iommu_create_mapping()
has been called.

Reported-by: Frank Wunderlich
Fixes: a9bf2eec5a6f ('iommu/mediatek: Use helper functions to access dev->iommu_fwspec')
Signed-off-by: Joerg Roedel
---
 drivers/iommu/mtk_iommu_v1.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index 6ede4286b835..f60bdb85c4c0 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -442,6 +442,10 @@ static int mtk_iommu_add_device(struct device *dev)
 		iommu_spec.args_count = count;
 
 		mtk_iommu_create_mapping(dev, &iommu_spec);
+
+		/* dev->iommu_fwspec might have changed */
+		fwspec = dev_iommu_fwspec_get(dev);
+
 		of_node_put(iommu_spec.np);
 	}
-- 
2.13.7
Re: [PATCH v6 0/3] iommu/io-pgtable-arm-v7s: Use DMA32 zone for page tables
Hi Andrew,

On Fri, Jan 11, 2019 at 6:21 PM Joerg Roedel wrote:
> On Wed, Jan 02, 2019 at 01:51:45PM +0800, Nicolas Boichat wrote:
> > Does anyone have any further comment on this series? If not, which
> > maintainer is going to pick this up? I assume Andrew Morton?
>
> Probably, yes. I don't like to carry the mm-changes in iommu-tree, so
> this should go through mm.

Gentle ping on this series; it seems like it's better if it goes through
your tree. The series still applies cleanly on linux-next, but I'm happy
to resend if that helps.

Thanks!

> Regards,
>
> 	Joerg
Re: [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages
Lendacky, Thomas writes:

> On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
>> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
>> index 355d16acee6d..bc78c37220ba 100644
>> --- a/kernel/dma/direct.c
>> +++ b/kernel/dma/direct.c
>> @@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>>
>>  	ret = page_address(page);
>>  	if (force_dma_unencrypted()) {
>> -		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
>> +		set_memory_decrypted((unsigned long)ret, 1);
>
> The get_order() function will return the order for the specified size. To
> then get the number of pages you perform the shift as is being done. The
> change is definitely wrong since you are now hardcoding the page count to
> 1. The call to __dma_direct_alloc_pages() will allocate more than one page
> if the size is greater than a page.

You are correct, of course. Sorry for the noise and thanks for
explaining.

-- 
Thiago Jung Bauermann
IBM Linux Technology Center
Re: [PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes number of pages
On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
> From: Ram Pai
>
> set_memory_decrypted() expects the number of PAGE_SIZE pages to decrypt.
> kvmclock_init_mem() instead passes number of bytes. This decrypts a huge
> number of pages resulting in data corruption.

Same comment as patch 1/2 in this series. This is not correct. See
comments below.

>
> Fixed it.
>
> [ bauermann: Slightly reworded commit message and added Fixes: tag. ]
> Fixes: 6a1cac56f41f ("x86/kvm: Use __bss_decrypted attribute in shared variables")
> Signed-off-by: Ram Pai
> Signed-off-by: Thiago Jung Bauermann
> ---
>  arch/x86/kernel/kvmclock.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> Note: Found by code inspection. I don't have a way to test.
>
> diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> index e811d4d1c824..b5c867dd2c8d 100644
> --- a/arch/x86/kernel/kvmclock.c
> +++ b/arch/x86/kernel/kvmclock.c
> @@ -251,8 +251,7 @@ static void __init kvmclock_init_mem(void)
>  	 * be mapped decrypted.
>  	 */
>  	if (sev_active()) {
> -		r = set_memory_decrypted((unsigned long) hvclock_mem,
> -					 1UL << order);
> +		r = set_memory_decrypted((unsigned long) hvclock_mem, 1);

Again, not correct. A number of pages were allocated based on the order.
That number is calculated using the shift in the call. Hardcoding this
to 1 is wrong.

Thanks,
Tom

>  		if (r) {
>  			__free_pages(p, order);
>  			hvclock_mem = NULL;
Re: [PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages
On 1/22/19 3:17 PM, Thiago Jung Bauermann wrote:
> From: Ram Pai
>
> set_memory_encrypted() and set_memory_decrypted() expect the number of
> PAGE_SIZE pages to encrypt or decrypt. dma_direct_alloc() and
> dma_direct_free() instead pass number of bytes. This encrypts/decrypts a
> huge number of pages resulting in data corruption.

That is not what it is doing. See comments below.

>
> Fixed it.
>
> [ bauermann: Slightly reworded commit message and added Fixes: tag. ]
> Fixes: d7b417fa08d1 ("x86/mm: Add DMA support for SEV memory encryption")
> Signed-off-by: Ram Pai
> Signed-off-by: Thiago Jung Bauermann
> ---
>  kernel/dma/direct.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
>
> Notes:
>
> 1. This was tested on powerpc with patches adding support for running
>    under the ultravisor, which are not yet upstream.
>
> 2. The lines changed in this patch were added by commit c10f07aa27da
>    ("dma/direct: Handle force decryption for DMA coherent buffers in
>    common code"), but it only moves the code from an x86-specific file.
>    Therefore the Fixes tag references the commit that first introduced
>    the code.
>
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 355d16acee6d..bc78c37220ba 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
>
>  	ret = page_address(page);
>  	if (force_dma_unencrypted()) {
> -		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
> +		set_memory_decrypted((unsigned long)ret, 1);

The get_order() function will return the order for the specified size. To
then get the number of pages you perform the shift as is being done. The
change is definitely wrong since you are now hardcoding the page count to
1. The call to __dma_direct_alloc_pages() will allocate more than one page
if the size is greater than a page.

>  		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
>  	} else {
>  		*dma_handle = phys_to_dma(dev, page_to_phys(page));
> @@ -186,10 +186,8 @@ void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page)
>  void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
>  		dma_addr_t dma_addr, unsigned long attrs)
>  {
> -	unsigned int page_order = get_order(size);
> -
>  	if (force_dma_unencrypted())
> -		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
> +		set_memory_encrypted((unsigned long)cpu_addr, 1);

Same comment here as above.

Thanks,
Tom

>  	__dma_direct_free_pages(dev, size, virt_to_page(cpu_addr));
>  }
>
[PATCH v2] iommu/vt-d: Implement dma_[un]map_resource()
Currently the Intel IOMMU uses the default dma_[un]map_resource()
implementation, which does nothing and simply returns the physical
address unmodified. However, this doesn't create the IOVA entries
necessary for addresses mapped this way to work when the IOMMU is
enabled. Thus, when the IOMMU is enabled, drivers relying on
dma_map_resource() will trigger DMAR errors. We see this when running
ntb_transport with the IOMMU enabled, DMA, and switchtec hardware.

The implementation for intel_map_resource() is nearly identical to
intel_map_page(), we just have to re-create __intel_map_single().
dma_unmap_resource() uses intel_unmap_page() directly as the functions
are identical.

Signed-off-by: Logan Gunthorpe
Cc: David Woodhouse
Cc: Joerg Roedel
---

Changes in v2:
* Revert to a single __intel_map_single() helper instead of having a
  second small __intel_map_page() helper. As requested by Christoph.

 drivers/iommu/intel-iommu.c | 23 +++++++++++++++-------
 1 file changed, 16 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 2bd9ac285c0d..64dab37c0b96 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3649,11 +3649,9 @@ static int iommu_no_mapping(struct device *dev)
 	return 0;
 }
 
-static dma_addr_t __intel_map_page(struct device *dev, struct page *page,
-				   unsigned long offset, size_t size, int dir,
-				   u64 dma_mask)
+static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
+				     size_t size, int dir, u64 dma_mask)
 {
-	phys_addr_t paddr = page_to_phys(page) + offset;
 	struct dmar_domain *domain;
 	phys_addr_t start_paddr;
 	unsigned long iova_pfn;
@@ -3715,7 +3713,15 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
 				 enum dma_data_direction dir,
 				 unsigned long attrs)
 {
-	return __intel_map_page(dev, page, offset, size, dir, *dev->dma_mask);
+	return __intel_map_single(dev, page_to_phys(page) + offset, size,
+				  dir, *dev->dma_mask);
+}
+
+static dma_addr_t intel_map_resource(struct device *dev, phys_addr_t phys_addr,
+				     size_t size, enum dma_data_direction dir,
+				     unsigned long attrs)
+{
+	return __intel_map_single(dev, phys_addr, size, dir, *dev->dma_mask);
 }
 
 static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
@@ -3806,8 +3812,9 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
 		return NULL;
 	memset(page_address(page), 0, size);
 
-	*dma_handle = __intel_map_page(dev, page, 0, size, DMA_BIDIRECTIONAL,
-				       dev->coherent_dma_mask);
+	*dma_handle = __intel_map_single(dev, page_to_phys(page), size,
+					 DMA_BIDIRECTIONAL,
+					 dev->coherent_dma_mask);
 	if (*dma_handle != DMA_MAPPING_ERROR)
 		return page_address(page);
 	if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
@@ -3924,6 +3931,8 @@ static const struct dma_map_ops intel_dma_ops = {
 	.unmap_sg = intel_unmap_sg,
 	.map_page = intel_map_page,
 	.unmap_page = intel_unmap_page,
+	.map_resource = intel_map_resource,
+	.unmap_resource = intel_unmap_page,
 	.dma_supported = dma_direct_supported,
 };
-- 
2.19.0
[PATCH 2/2] x86/kvmclock: set_memory_decrypted() takes number of pages
From: Ram Pai

set_memory_decrypted() expects the number of PAGE_SIZE pages to decrypt.
kvmclock_init_mem() instead passes number of bytes. This decrypts a huge
number of pages resulting in data corruption.

Fixed it.

[ bauermann: Slightly reworded commit message and added Fixes: tag. ]
Fixes: 6a1cac56f41f ("x86/kvm: Use __bss_decrypted attribute in shared variables")
Signed-off-by: Ram Pai
Signed-off-by: Thiago Jung Bauermann
---
 arch/x86/kernel/kvmclock.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Note: Found by code inspection. I don't have a way to test.

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index e811d4d1c824..b5c867dd2c8d 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -251,8 +251,7 @@ static void __init kvmclock_init_mem(void)
 	 * be mapped decrypted.
 	 */
 	if (sev_active()) {
-		r = set_memory_decrypted((unsigned long) hvclock_mem,
-					 1UL << order);
+		r = set_memory_decrypted((unsigned long) hvclock_mem, 1);
 		if (r) {
 			__free_pages(p, order);
 			hvclock_mem = NULL;
[PATCH 1/2] dma-direct: set_memory_{en,de}crypted() take number of pages
From: Ram Pai

set_memory_encrypted() and set_memory_decrypted() expect the number of
PAGE_SIZE pages to encrypt or decrypt. dma_direct_alloc() and
dma_direct_free() instead pass number of bytes. This encrypts/decrypts a
huge number of pages resulting in data corruption.

Fixed it.

[ bauermann: Slightly reworded commit message and added Fixes: tag. ]
Fixes: d7b417fa08d1 ("x86/mm: Add DMA support for SEV memory encryption")
Signed-off-by: Ram Pai
Signed-off-by: Thiago Jung Bauermann
---
 kernel/dma/direct.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

Notes:

1. This was tested on powerpc with patches adding support for running
   under the ultravisor, which are not yet upstream.

2. The lines changed in this patch were added by commit c10f07aa27da
   ("dma/direct: Handle force decryption for DMA coherent buffers in
   common code"), but it only moves the code from an x86-specific file.
   Therefore the Fixes tag references the commit that first introduced
   the code.

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 355d16acee6d..bc78c37220ba 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -166,7 +166,7 @@ void *dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	ret = page_address(page);
 	if (force_dma_unencrypted()) {
-		set_memory_decrypted((unsigned long)ret, 1 << get_order(size));
+		set_memory_decrypted((unsigned long)ret, 1);
 		*dma_handle = __phys_to_dma(dev, page_to_phys(page));
 	} else {
 		*dma_handle = phys_to_dma(dev, page_to_phys(page));
@@ -186,10 +186,8 @@ void __dma_direct_free_pages(struct device *dev, size_t size, struct page *page)
 void dma_direct_free_pages(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_addr, unsigned long attrs)
 {
-	unsigned int page_order = get_order(size);
-
	if (force_dma_unencrypted())
-		set_memory_encrypted((unsigned long)cpu_addr, 1 << page_order);
+		set_memory_encrypted((unsigned long)cpu_addr, 1);
 	__dma_direct_free_pages(dev, size, virt_to_page(cpu_addr));
 }
Re: [PATCH] dma: debug: no need to check return value of debugfs_create functions
On Tue, Jan 22, 2019 at 06:44:38PM +, Robin Murphy wrote:
> Hi Greg,
>
> On 22/01/2019 15:21, Greg Kroah-Hartman wrote:
> > When calling debugfs functions, there is no need to ever check the
> > return value. The function can work or not, but the code logic should
> > never do something different based on this.
> >
> > Also delete the variables for the file dentries for the debugfs entries
> > as they are never used at all once they are created.
>
> Modulo one nit below, I certainly approve of the cleanup :)
>
> Reviewed-by: Robin Murphy
>
> > Cc: Christoph Hellwig
> > Cc: Marek Szyprowski
> > Cc: Robin Murphy
> > Cc: iommu@lists.linux-foundation.org
> > Signed-off-by: Greg Kroah-Hartman
> > ---
> >  kernel/dma/debug.c | 81 ++--------------------------------------------
> >  1 file changed, 10 insertions(+), 71 deletions(-)
> >
> > diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
> > index 23cf5361bcf1..2f5fc8b9d39f 100644
> > --- a/kernel/dma/debug.c
> > +++ b/kernel/dma/debug.c
> > @@ -136,14 +136,6 @@ static u32 nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
> >
> >  /* debugfs dentry's for the stuff above */
> >  static struct dentry *dma_debug_dent __read_mostly;
>
> Does this actually need to be at file scope, or could it be punted to a
> local in dma_debug_fs_init() while we're here?

It can be moved to the function scope if you want me to, I was trying to
do the least needed here :)

thanks,

greg k-h
Re: [PATCH] dma: debug: no need to check return value of debugfs_create functions
Hi Greg,

On 22/01/2019 15:21, Greg Kroah-Hartman wrote:
> When calling debugfs functions, there is no need to ever check the
> return value. The function can work or not, but the code logic should
> never do something different based on this.
>
> Also delete the variables for the file dentries for the debugfs entries
> as they are never used at all once they are created.

Modulo one nit below, I certainly approve of the cleanup :)

Reviewed-by: Robin Murphy

> Cc: Christoph Hellwig
> Cc: Marek Szyprowski
> Cc: Robin Murphy
> Cc: iommu@lists.linux-foundation.org
> Signed-off-by: Greg Kroah-Hartman
> ---
>  kernel/dma/debug.c | 81 ++--------------------------------------------
>  1 file changed, 10 insertions(+), 71 deletions(-)
>
> diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
> index 23cf5361bcf1..2f5fc8b9d39f 100644
> --- a/kernel/dma/debug.c
> +++ b/kernel/dma/debug.c
> @@ -136,14 +136,6 @@ static u32 nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
>
>  /* debugfs dentry's for the stuff above */
>  static struct dentry *dma_debug_dent __read_mostly;

Does this actually need to be at file scope, or could it be punted to a
local in dma_debug_fs_init() while we're here?

Robin.

> -static struct dentry *global_disable_dent __read_mostly;
> -static struct dentry *error_count_dent __read_mostly;
> -static struct dentry *show_all_errors_dent __read_mostly;
> -static struct dentry *show_num_errors_dent __read_mostly;
> -static struct dentry *num_free_entries_dent __read_mostly;
> -static struct dentry *min_free_entries_dent __read_mostly;
> -static struct dentry *nr_total_entries_dent __read_mostly;
> -static struct dentry *filter_dent __read_mostly;
>
>  /* per-driver filter related state */
>
> @@ -840,66 +832,18 @@ static const struct file_operations filter_fops = {
>  	.llseek = default_llseek,
>  };
>
> -static int dma_debug_fs_init(void)
> +static void dma_debug_fs_init(void)
>  {
>  	dma_debug_dent = debugfs_create_dir("dma-api", NULL);
> -	if (!dma_debug_dent) {
> -		pr_err("can not create debugfs directory\n");
> -		return -ENOMEM;
> -	}
> -
> -	global_disable_dent = debugfs_create_bool("disabled", 0444,
> -						  dma_debug_dent,
> -						  &global_disable);
> -	if (!global_disable_dent)
> -		goto out_err;
> -
> -	error_count_dent = debugfs_create_u32("error_count", 0444,
> -					      dma_debug_dent, &error_count);
> -	if (!error_count_dent)
> -		goto out_err;
> -
> -	show_all_errors_dent = debugfs_create_u32("all_errors", 0644,
> -						  dma_debug_dent,
> -						  &show_all_errors);
> -	if (!show_all_errors_dent)
> -		goto out_err;
> -
> -	show_num_errors_dent = debugfs_create_u32("num_errors", 0644,
> -						  dma_debug_dent,
> -						  &show_num_errors);
> -	if (!show_num_errors_dent)
> -		goto out_err;
> -
> -	num_free_entries_dent = debugfs_create_u32("num_free_entries", 0444,
> -						   dma_debug_dent,
> -						   &num_free_entries);
> -	if (!num_free_entries_dent)
> -		goto out_err;
> -
> -	min_free_entries_dent = debugfs_create_u32("min_free_entries", 0444,
> -						   dma_debug_dent,
> -						   &min_free_entries);
> -	if (!min_free_entries_dent)
> -		goto out_err;
> -
> -	nr_total_entries_dent = debugfs_create_u32("nr_total_entries", 0444,
> -						   dma_debug_dent,
> -						   &nr_total_entries);
> -	if (!nr_total_entries_dent)
> -		goto out_err;
> -
> -	filter_dent = debugfs_create_file("driver_filter", 0644,
> -					  dma_debug_dent, NULL, &filter_fops);
> -	if (!filter_dent)
> -		goto out_err;
> -
> -	return 0;
> -
> -out_err:
> -	debugfs_remove_recursive(dma_debug_dent);
> -
> -	return -ENOMEM;
> +	debugfs_create_bool("disabled", 0444, dma_debug_dent, &global_disable);
> +	debugfs_create_u32("error_count", 0444, dma_debug_dent, &error_count);
> +	debugfs_create_u32("all_errors", 0644, dma_debug_dent, &show_all_errors);
> +	debugfs_create_u32("num_errors", 0644, dma_debug_dent, &show_num_errors);
> +	debugfs_create_u32("num_free_entries", 0444, dma_debug_dent, &num_free_entries);
> +	debugfs_create_u32("min_free_entries", 0444, dma_debug_dent, &min_free_entries);
> +	debugfs_create_u32("nr_total_entries", 0444, dma_debug_dent, &nr_total_entries);
> +	debugfs_create_file("driver_filter", 0644, dma_debug_dent, NULL, &filter_fops);
>  }
>
>  static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
>
> @@ -985,12 +929,7 @@ static int dma_debug_init(void)
>  		spin_lock_init(&dma_entry_hash[i].lock);
>  	}
>
> -	if (dma_debug_fs_init() != 0) {
> -		pr_err("error creating debugfs entries - disabling\n");
> -		global_disable = t
Re: [PATCH v1] iommu/s390: Declare s390 iommu reserved regions
On Mon, 21 Jan 2019 12:51:14 +0100 Pierre Morel wrote:

> On 18/01/2019 14:51, Jean-Philippe Brucker wrote:
> > Hi Pierre,
> >
> > On 18/01/2019 13:29, Pierre Morel wrote:
> >> On 17/01/2019 14:02, Robin Murphy wrote:
> >>> On 15/01/2019 17:37, Pierre Morel wrote:
> >>>> The s390 iommu can only allow DMA transactions between the zPCI device
> >>>> entries start_dma and end_dma.
> >
> > ...
> >
> >> I already posted a patch retrieving the geometry through
> >> VFIO_IOMMU_GET_INFO using a specific capability for the geometry [1],
> >> and AFAIU, Alex did not agree with this.
> >
> > On arm we also need to report the IOMMU geometry to userspace (max IOVA
> > size in particular). Shameer has been working on a solution [2] that
> > presents a unified view of both geometry and reserved regions into the
> > VFIO_IOMMU_GET_INFO call, and I think we should go with that. If I
> > understand correctly it's currently blocked on the RMRR problem and
> > we're waiting for Jacob or Ashok to take a look at it, as Kevin pinged
> > them on thread [1]?
> >
> > [2] https://lkml.org/lkml/2018/4/18/293
> >
> > Thanks,
> > Jean
>
> Hi Jean,
>
> I hoped that this proposition went in the same direction based on the
> following assumptions:
>
> - The goal of the get_resv_region is defined in iommu.h as:
>   * @get_resv_regions: Request list of reserved regions for a device
>
> - An iommu reserved region is a region which should not be mapped.
>   Isn't that exactly what happens outside the aperture?
>   Shouldn't it be reported by the iommu reserved region?
>
> - If we use VFIO and want to get all reserved regions we will have the
>   VFIO_IOMMU_GET_INFO call provided by Shameer and it can get all
>   reserved regions depending on the iommu driver itself at once by
>   calling the get_reserved_region callback instead of having to merge
>   them with the aperture.
>
> - If there are other reserved regions, depending on the system
>   configuration and not on the IOMMU itself, the VFIO_IOMMU_GET_INFO
>   call will have to merge them with the regions gotten from the iommu
>   driver.
>
> - If we do not use QEMU nor VFIO at all, AFAIU, the standard way to
>   retrieve the reserved regions associated with a device is to call the
>   get_reserved_region callback from the associated iommu.
>
> Please tell me where I am wrong.

The existing proposal by Shameer exposes an IOVA list, which includes
ranges that the user _can_ map through the IOMMU, therefore this
original patch is unnecessary: the IOMMU geometry is already the
starting point of the IOVA list, creating the original node, which is
split as necessary to account for the reserved regions.

To me, presenting a user interface that exposes ranges that _cannot_ be
mapped is a strange interface. If we started from a position of what
information do we want to provide to the user and how will they consume
it, would we present a list of reserved ranges? I think the only
argument for the negative ranges is that we already have them in the
kernel, but I don't see that it necessarily makes them the right
solution for userspace.

> >> What is different in what you propose?
> >>
> >> @Alex: I was hoping that this patch goes in your direction. What do
> >> you think?

I think it's unnecessary given that in Shameer's proposal:

- We start from the IOMMU exposed geometry
- We present a list of usable IOVA ranges, not a list of reserved ranges

Thanks,
Alex
Re: [PATCH] arm64/xen: fix xen-swiotlb cache flushing
On Mon, Jan 21, 2019 at 03:56:29PM -0800, Stefano Stabellini wrote:
> > Where should we pick this up? I could pick it up through the dma-mapping
> > tree given that is where the problem is introduced, but the Xen or arm64
> > trees would also fit.
>
> I am happy for you to carry it in the dma-mapping tree, especially if
> you have other fixes to send. Otherwise, let me know.

Thanks, I've queued it up for Linus.
Aw: Re: [BUG] "access dev->iommu_fwspec" cause crash on BPI-R2
Hi,

thanks for the quick reply. This seems to fix it: no crash, and the X
server works, like the revert of the commit. I pushed the fix to the
github repo below.

regards Frank

> Sent: Tuesday, January 22, 2019 at 17:49
> From: "Joerg Roedel"
> To: "Frank Wunderlich"
> Cc: "Matthias Brugger" , iommu@lists.linux-foundation.org,
> linux-arm-ker...@lists.infradead.org, linux-media...@lists.infradead.org,
> linux-ker...@vger.kernel.org, "Ryder Lee"
> Subject: Re: [BUG] "access dev->iommu_fwspec" cause crash on BPI-R2
>
> Hi Frank,
>
> thanks for the report!
>
> On Tue, Jan 22, 2019 at 05:09:09PM +0100, Frank Wunderlich wrote:
> > Hi,
> >
> > the following Patch breaks hdmi (at least) on Bananapi R2 (mt7623):
> >
> > a9bf2eec5a6fc01a0a5250eaf0bf61dfd382a78a "iommu/mediatek: Use helper
> > functions to access dev->iommu_fwspec"
>
> Does the attached diff fix the issue for you?
>
> Thanks,
>
> 	Joerg
>
> diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
> index 6ede4286b835..f60bdb85c4c0 100644
> --- a/drivers/iommu/mtk_iommu_v1.c
> +++ b/drivers/iommu/mtk_iommu_v1.c
> @@ -442,6 +442,10 @@ static int mtk_iommu_add_device(struct device *dev)
>  		iommu_spec.args_count = count;
>
>  		mtk_iommu_create_mapping(dev, &iommu_spec);
> +
> +		/* dev->iommu_fwspec might have changed */
> +		fwspec = dev_iommu_fwspec_get(dev);
> +
>  		of_node_put(iommu_spec.np);
>  	}
Re: [PATCH 1/3] iommu/arm-smmu: Move to bitmap for arm_smmu_domain atrributes
On Mon, Jan 21, 2019 at 7:23 PM Robin Murphy wrote:
>
> On 21/01/2019 05:53, Vivek Gautam wrote:
> > A number of arm_smmu_domain's attributes can be assigned based
> > on the iommu domain's attributes. These local attributes are better
> > managed by a bitmap.
> > So remove boolean flags and move to a 32-bit bitmap, and enable
> > each bit separately.
> >
> > Signed-off-by: Vivek Gautam
> > ---
> >  drivers/iommu/arm-smmu.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
> > index 7ebbcf1b2eb3..52b300dfc096 100644
> > --- a/drivers/iommu/arm-smmu.c
> > +++ b/drivers/iommu/arm-smmu.c
> > @@ -257,10 +257,11 @@ struct arm_smmu_domain {
> >  	const struct iommu_gather_ops	*tlb_ops;
> >  	struct arm_smmu_cfg		cfg;
> >  	enum arm_smmu_domain_stage	stage;
> > -	bool				non_strict;
> >  	struct mutex			init_mutex; /* Protects smmu pointer */
> >  	spinlock_t			cb_lock; /* Serialises ATS1* ops and TLB syncs */
> >  	struct iommu_domain		domain;
> > +#define ARM_SMMU_DOMAIN_ATTR_NON_STRICT	BIT(0)
> > +	unsigned int			attr;
> >  };
> >
> >  struct arm_smmu_option_prop {
> > @@ -901,7 +902,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
> >  	if (smmu->features & ARM_SMMU_FEAT_COHERENT_WALK)
> >  		pgtbl_cfg.quirks = IO_PGTABLE_QUIRK_NO_DMA;
> >
> > -	if (smmu_domain->non_strict)
> > +	if (smmu_domain->attr & ARM_SMMU_DOMAIN_ATTR_NON_STRICT)
> >  		pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_NON_STRICT;
> >
> >  	/* Non coherent page table mappings only for Stage-1 */
> > @@ -1598,7 +1599,8 @@ static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
> >  	case IOMMU_DOMAIN_DMA:
> >  		switch (attr) {
> >  		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> > -			*(int *)data = smmu_domain->non_strict;
> > +			*(int *)data = !!(smmu_domain->attr &
> > +					  ARM_SMMU_DOMAIN_ATTR_NON_STRICT);
> >  			return 0;
> >  		default:
> >  			return -ENODEV;
> > @@ -1638,7 +1640,7 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
> >  	case IOMMU_DOMAIN_DMA:
> >  		switch (attr) {
> >  		case DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE:
> > -			smmu_domain->non_strict = *(int *)data;
> > +			smmu_domain->attr |= ARM_SMMU_DOMAIN_ATTR_NON_STRICT;
>
> But what if *data == 0?

Right, a check for data here also, similar to what we are doing for
QCOM_SYS_CACHE [1].

[1] https://lore.kernel.org/patchwork/patch/1033796/

Regards
Vivek

> Robin.
>
> >  			break;
> >  		default:
> >  			ret = -ENODEV;

-- 
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
of Code Aurora Forum, hosted by The Linux Foundation
[BUG] "access dev->iommu_fwspec" cause crash on BPI-R2
Hi,

the following patch breaks HDMI (at least) on Bananapi R2 (mt7623):

a9bf2eec5a6fc01a0a5250eaf0bf61dfd382a78a "iommu/mediatek: Use helper
functions to access dev->iommu_fwspec"

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/iommu/mtk_iommu_v1.c?id=a9bf2eec5a6fc01a0a5250eaf0bf61dfd382a78a

[5.363509] Backtrace:
[5.365946] [] (mtk_iommu_domain_free) from [] (iommu_domain_free+0x20/)
[5.374670] r4:decd57a0
[5.377192] [] (iommu_domain_free) from [] (release_iommu_mapping+0x24/)
[5.385922] [] (release_iommu_mapping) from [] (arm_iommu_release_mappi)
[5.395943] r7: r6:decd5780 r5: r4:decd57a0
[5.401567] [] (arm_iommu_release_mapping.part.0) from [] (arch_setup_d)
[5.411412] r5: r4:dead1410
[5.414968] [] (arch_setup_dma_ops) from [] (of_dma_configure+0x27c/0x3)
[5.423521] r6:c0b58e20 r5: r4:
[5.428109] [] (of_dma_configure) from [] (platform_dma_configure+0x28/)
[5.436838] r10:c107efdc r9: r8:c10c0008 r7: r6:c1117b34 r5:dead1410
[5.444612] r4:c1117b30
[5.447131] [] (platform_dma_configure) from [] (really_probe+0xc4/0x42)
[5.455602] [] (really_probe) from [] (driver_probe_device+0x88/0x1e0)
[5.463814] r10: r9:c060a25c r8: r7:c1008c48 r6:c107efdc r5:c107efdc
[5.471588] r4:dead1410
[5.474107] [] (driver_probe_device) from [] (__driver_attach+0x134/0x1)
[5.482663] r9:c060a25c r8: r7:c1008c48 r6:c107efdc r5:dead1444 r4:dead1410
[5.490358] [] (__driver_attach) from [] (bus_for_each_dev+0x84/0xc4)
[5.498480] r7:c1008c48 r6:c06080c8 r5:c107efdc r4:
[5.504102] [] (bus_for_each_dev) from [] (driver_attach+0x2c/0x30)
[5.512052] r7:c10bff30 r6:c107fad8 r5:decde780 r4:c107efdc
[5.517675] [] (driver_attach) from [] (bus_add_driver+0x1d0/0x274)
[5.525629] [] (bus_add_driver) from [] (driver_register+0x84/0x118)
[5.533666] r8:c060a20c r7:c0609c60 r6:c10c0230 r5:0002 r4:c107efdc
[5.540323] [] (driver_register) from [] (__platform_register_drivers+0)

After reverting it I can boot without crash and start the X server.

My repo, just for reference (revert not pushed yet):
https://github.com/frank-w/BPI-R2-4.14/tree/5.0-hdmi

I hope I had chosen the right way to report this...

regards Frank
Re: [PATCH] iommu/intel: quirk to disable DMAR for QM57 igfx
On Tue, Jan 22, 2019 at 04:48:26PM +0200, Joonas Lahtinen wrote:
> According to our IOMMU folks there exists some desire to be able to assign
> the iGFX device aka have intel_iommu=on instead of intel_iommu=igfx_off
> due to how the devices might be grouped in IOMMU groups. Even when you
> would not be using the iGFX device.

You can force the igfx device into a SI domain, or does that also
trigger the iommu issues on the chipset?

In any case, if iommu=on breaks these systems I want to make them work
again with the opt-out, even at the cost of breaking assignability.

Regards,

	Joerg
Re: [BUG] "access dev->iommu_fwspec" cause crash on BPI-R2
Hi Frank,

thanks for the report!

On Tue, Jan 22, 2019 at 05:09:09PM +0100, Frank Wunderlich wrote:
> Hi,
>
> the following Patch breaks hdmi (at least) on Bananapi R2 (mt7623):
>
> a9bf2eec5a6fc01a0a5250eaf0bf61dfd382a78a "iommu/mediatek: Use helper
> functions to access dev->iommu_fwspec"

Does the attached diff fix the issue for you?

Thanks,

	Joerg

diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index 6ede4286b835..f60bdb85c4c0 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -442,6 +442,10 @@ static int mtk_iommu_add_device(struct device *dev)
 		iommu_spec.args_count = count;
 		mtk_iommu_create_mapping(dev, &iommu_spec);
+
+		/* dev->iommu_fwspec might have changed */
+		fwspec = dev_iommu_fwspec_get(dev);
+
 		of_node_put(iommu_spec.np);
 	}
Re: [PATCH] iommu/amd: Fix IOMMU page flush when detach all devices from a domain
Joerg,

On 1/22/19 5:44 PM, j...@8bytes.org wrote:
> Hi Suravee,
>
> On Thu, Jan 17, 2019 at 08:44:36AM +, Suthikulpanit, Suravee wrote:
>> Then, in __domain_flush_pages, we issue command when the dev_iommu[] >= 0.
>> This should preserve previous behavior, and only add flushing condition to
>> the specific IOMMU in detached state. Please let me know what you think.
>
> I think the whole point why VFIO is detaching all devices first and then
> goes into unmapping pages is that there are no flushes needed anymore
> when devices are detached.
>
> But when we follow your proposal we would still do IOTLB flushes even
> when no devices are attached anymore. So I'd like to avoid this, given
> the implications on unmapping performance. We should just flush the
> IOMMU TLB at detach time and be done with it.
>
> Makes sense?
>
> Regards,
>
>	Joerg

Thanks for the detail. Alright then, let's just go with the version you
sent on 1/16/19. Do you want me to resend V3 with those changes, or
would you be taking care of that?

Thanks,
Suravee
[PATCH] dma: debug: no need to check return value of debugfs_create functions
When calling debugfs functions, there is no need to ever check the
return value. The function can work or not, but the code logic should
never do something different based on this.

Also delete the variables for the file dentries for the debugfs entries
as they are never used at all once they are created.

Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: iommu@lists.linux-foundation.org
Signed-off-by: Greg Kroah-Hartman
---
 kernel/dma/debug.c | 81 ++
 1 file changed, 10 insertions(+), 71 deletions(-)

diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 23cf5361bcf1..2f5fc8b9d39f 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -136,14 +136,6 @@ static u32 nr_prealloc_entries = PREALLOC_DMA_DEBUG_ENTRIES;
 /* debugfs dentry's for the stuff above */
 static struct dentry *dma_debug_dent        __read_mostly;
-static struct dentry *global_disable_dent   __read_mostly;
-static struct dentry *error_count_dent      __read_mostly;
-static struct dentry *show_all_errors_dent  __read_mostly;
-static struct dentry *show_num_errors_dent  __read_mostly;
-static struct dentry *num_free_entries_dent __read_mostly;
-static struct dentry *min_free_entries_dent __read_mostly;
-static struct dentry *nr_total_entries_dent __read_mostly;
-static struct dentry *filter_dent           __read_mostly;
 
 /* per-driver filter related state */
 
@@ -840,66 +832,18 @@ static const struct file_operations filter_fops = {
 	.llseek = default_llseek,
 };
 
-static int dma_debug_fs_init(void)
+static void dma_debug_fs_init(void)
 {
 	dma_debug_dent = debugfs_create_dir("dma-api", NULL);
-	if (!dma_debug_dent) {
-		pr_err("can not create debugfs directory\n");
-		return -ENOMEM;
-	}
-
-	global_disable_dent = debugfs_create_bool("disabled", 0444,
-			dma_debug_dent,
-			&global_disable);
-	if (!global_disable_dent)
-		goto out_err;
-
-	error_count_dent = debugfs_create_u32("error_count", 0444,
-			dma_debug_dent, &error_count);
-	if (!error_count_dent)
-		goto out_err;
-
-	show_all_errors_dent = debugfs_create_u32("all_errors", 0644,
-			dma_debug_dent,
-			&show_all_errors);
-	if (!show_all_errors_dent)
-		goto out_err;
-
-	show_num_errors_dent = debugfs_create_u32("num_errors", 0644,
-			dma_debug_dent,
-			&show_num_errors);
-	if (!show_num_errors_dent)
-		goto out_err;
-
-	num_free_entries_dent = debugfs_create_u32("num_free_entries", 0444,
-			dma_debug_dent,
-			&num_free_entries);
-	if (!num_free_entries_dent)
-		goto out_err;
-
-	min_free_entries_dent = debugfs_create_u32("min_free_entries", 0444,
-			dma_debug_dent,
-			&min_free_entries);
-	if (!min_free_entries_dent)
-		goto out_err;
-
-	nr_total_entries_dent = debugfs_create_u32("nr_total_entries", 0444,
-			dma_debug_dent,
-			&nr_total_entries);
-	if (!nr_total_entries_dent)
-		goto out_err;
-
-	filter_dent = debugfs_create_file("driver_filter", 0644,
-			dma_debug_dent, NULL, &filter_fops);
-	if (!filter_dent)
-		goto out_err;
-
-	return 0;
-
-out_err:
-	debugfs_remove_recursive(dma_debug_dent);
-
-	return -ENOMEM;
+	debugfs_create_bool("disabled", 0444, dma_debug_dent, &global_disable);
+	debugfs_create_u32("error_count", 0444, dma_debug_dent, &error_count);
+	debugfs_create_u32("all_errors", 0644, dma_debug_dent, &show_all_errors);
+	debugfs_create_u32("num_errors", 0644, dma_debug_dent, &show_num_errors);
+	debugfs_create_u32("num_free_entries", 0444, dma_debug_dent, &num_free_entries);
+	debugfs_create_u32("min_free_entries", 0444, dma_debug_dent, &min_free_entries);
+	debugfs_create_u32("nr_total_entries", 0444, dma_debug_dent, &nr_total_entries);
+	debugfs_create_file("driver_filter", 0644, dma_debug_dent, NULL, &filter_fops);
 }
 
 static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry)
@@ -985,12 +929,7 @@ static int dma_debug_init(void)
 		spin_lock_init(&dma_entry_hash[i].lock);
 	}
 
-	if (dma_debug_fs_init() != 0) {
-		pr_err("error creating debugfs entries - disabling\n");
-		global_disable = true;
-
-		return 0;
-	}
+	dma_debug_fs_init();
 
 	nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES);
 	for (i = 0; i < nr_pages; ++i)
-- 
2.20.1
Re: [PATCH] iommu/intel: quirk to disable DMAR for QM57 igfx
Quoting Joerg Roedel (2019-01-22 13:01:09)
> Hi Daniel,
>
> On Tue, Jan 22, 2019 at 11:46:39AM +0100, Daniel Vetter wrote:
> > Note that the string of platforms which have various issues with iommu
> > and igfx is very long, thus far we only disabled it where there's no
> > workaround to stop it from hanging the box, but otherwise left it
> > enabled. So if we make a policy change to also disable it anywhere
> > where it doesn't work well (instead of not at all), there's a pile
> > more platforms to switch.
>
> I think its best to just disable iommu for the igfx devices on these
> platforms. Can you pick up Eric's patch and extend it with the list of
> affected platforms?

We've been discussing this again more actively since a few months ago,
and the discussion is still ongoing internally.

According to our IOMMU folks there exists some desire to be able to
assign the iGFX device, aka have intel_iommu=on instead of
intel_iommu=igfx_off, due to how the devices might be grouped in IOMMU
groups. Even when you would not be using the iGFX device. So for some
uses, the fact that the device (group) is assignable seems to be more
important than the iGFX device working.

I'm afraid that retroactively disabling the assignment for such an old
platform might break those usage scenarios. By my quick reading of the
code, there's no way for the user to turn the iGFX DMAR on once the
quirk disables it.

I guess one solution would be to default to intel_iommu=igfx_off for
platforms that are older than a certain threshold, but still allow the
user to enable it. But that then requires duplicating the PCI ID
database into iommu code.

I don't really have winning moves to present, but I'm open to hearing
how we can avoid more damage than starting to default to intel_iommu=on
did already.

Regards, Joonas

> Thanks,
>
> 	Joerg
Re: [PATCH] dma-debug: add dumping facility via debugfs
On Fri, Jan 18, 2019 at 12:35:43PM +0100, Christoph Hellwig wrote:
> On Wed, Jan 16, 2019 at 06:10:13PM +, Robin Murphy wrote:
> > It's a shame that this is pretty much a duplication of
> > debug_dma_dump_mappings(), but there seems no straightforward way to define
> > one in terms of the other :/
>
> We could always play some macro magic, but I think that is worse than
> duplicating a little bit of functionality.

I haven't checked in detail — can seq_buf be used somehow to write a
function that fits both cases?

> Btw, the dev argument to debug_dma_dump_mappings is always NULL and
> could be removed..

This function was also written as a debug helper for driver developers.
As such it might not make it into the final driver, but the developer
might want to use it to dump the mappings for her device only. So I'd
like to keep the parameter, even when it is always NULL for in-kernel
uses.

> > (although given that we'd rather not have that weird external interface
> > anyway, maybe "now you can use debugfs instead" might be justification
> > enough for cleaning it up...)
>
> One argument against that is the system might be in a shape where you
> can't easily read a file when it is in that shape. The argument for
> that is that in many systems the full list of mappings might overwhelm
> the kernel log.

For driver developers a file is easier to use, but in case of an
unusable system one can at least easily read out parts of the
dma-mappings from a crash-dump if it is in the kernel log. So I think it
makes sense to have both.

Regards,

	Joerg
Re: [PATCH] iommu/intel-iommu: fix memory leak in intel_iommu_put_resv_regions()
On Wed, Jan 16, 2019 at 08:11:44PM +0100, Gerald Schaefer wrote:
> Commit 9d3a4de4cb8d ("iommu: Disambiguate MSI region types") changed
> the reserved region type in intel_iommu_get_resv_regions() from
> IOMMU_RESV_RESERVED to IOMMU_RESV_MSI, but it forgot to also change
> the type in intel_iommu_put_resv_regions().
>
> This leads to a memory leak, because now the check in
> intel_iommu_put_resv_regions() for IOMMU_RESV_RESERVED will never
> be true, and no allocated regions will be freed.
>
> Fix this by changing the region type in intel_iommu_put_resv_regions()
> to IOMMU_RESV_MSI, matching the type of the allocated regions.
>
> Fixes: 9d3a4de4cb8d ("iommu: Disambiguate MSI region types")
> Cc: # v4.11+
> Signed-off-by: Gerald Schaefer
> ---
>  drivers/iommu/intel-iommu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Applied, thanks.
Re: [PATCH] iommu/intel: quirk to disable DMAR for QM57 igfx
Hi Daniel,

On Tue, Jan 22, 2019 at 11:46:39AM +0100, Daniel Vetter wrote:
> Note that the string of platforms which have various issues with iommu
> and igfx is very long, thus far we only disabled it where there's no
> workaround to stop it from hanging the box, but otherwise left it
> enabled. So if we make a policy change to also disable it anywhere
> where it doesn't work well (instead of not at all), there's a pile
> more platforms to switch.

I think it's best to just disable the IOMMU for the igfx devices on
these platforms. Can you pick up Eric's patch and extend it with the
list of affected platforms?

Thanks,

	Joerg
Re: [PATCH] iommu: amd: call free_iova_fast with pfn in map_sg
On Thu, Jan 17, 2019 at 12:29:02PM -0700, Jerry Snitselaar wrote:
> In the error path of map_sg, free_iova_fast is being called with
> address instead of the pfn. This results in a bad value getting into
> the rcache, and can result in hitting a BUG_ON when
> iova_magazine_free_pfns is called.
>
> Cc: Joerg Roedel
> Cc: Suravee Suthikulpanit
> Signed-off-by: Jerry Snitselaar

Applied, thanks.
Re: [PATCH] iommu/intel: quirk to disable DMAR for QM57 igfx
On Tue, Jan 22, 2019 at 11:39 AM Joerg Roedel wrote:
>
> On Fri, Jan 18, 2019 at 12:17:05PM +, Eric Wong wrote:
> > @@ -5411,6 +5411,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e20, quirk_iommu_g4x_gfx);
> > DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e30, quirk_iommu_g4x_gfx);
> > DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e40, quirk_iommu_g4x_gfx);
> > DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e90, quirk_iommu_g4x_gfx);
> > +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_iommu_g4x_gfx);
> >
> > static void quirk_iommu_rwbf(struct pci_dev *dev)
> > {
> > @@ -5457,7 +5458,6 @@ static void quirk_calpella_no_shadow_gtt(struct pci_dev *dev)
> > }
> > }
> > DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0040, quirk_calpella_no_shadow_gtt);
> > -DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_calpella_no_shadow_gtt);
> > DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0062, quirk_calpella_no_shadow_gtt);
> > DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x006a, quirk_calpella_no_shadow_gtt);
>
> This seems to make sense to me. Joonas, any comments or objections?

This is Ironlake, which has a huge iommu hack in the gpu driver to work
around hard hangs, which:
- causes massive stalls and kills performance
- isn't well tested (it's the only one that needs this), so tends to break

So if we do this then imo we should:
- probably nuke that w/a too (check for needs_idle_maps and all the
  related stuff in i915_gem_gtt.c)
- roll it out for all affected chips (i.e. need to include 0x0040).

Note that the string of platforms which have various issues with iommu
and igfx is very long; thus far we only disabled it where there's no
workaround to stop it from hanging the box, but otherwise left it
enabled. So if we make a policy change to also disable it anywhere
where it doesn't work well (instead of not at all), there's a pile
more platforms to switch.

-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
Re: [PATCH] iommu/amd: Fix IOMMU page flush when detach all devices from a domain
Hi Suravee,

On Thu, Jan 17, 2019 at 08:44:36AM +, Suthikulpanit, Suravee wrote:
> Then, in __domain_flush_pages, we issue command when the dev_iommu[] >= 0.
> This should preserve previous behavior, and only add flushing condition to
> the specific IOMMU in detached state. Please let me know what you think.

I think the whole point why VFIO is detaching all devices first and then
goes into unmapping pages is that there are no flushes needed anymore
when devices are detached.

But when we follow your proposal we would still do IOTLB flushes even
when no devices are attached anymore. So I'd like to avoid this, given
the implications on unmapping performance. We should just flush the
IOMMU TLB at detach time and be done with it.

Makes sense?

Regards,

	Joerg
Re: [PATCH] iommu/intel: quirk to disable DMAR for QM57 igfx
On Fri, Jan 18, 2019 at 12:17:05PM +, Eric Wong wrote:
> @@ -5411,6 +5411,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e20, quirk_iommu_g4x_gfx);
> DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e30, quirk_iommu_g4x_gfx);
> DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e40, quirk_iommu_g4x_gfx);
> DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e90, quirk_iommu_g4x_gfx);
> +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_iommu_g4x_gfx);
>
> static void quirk_iommu_rwbf(struct pci_dev *dev)
> {
> @@ -5457,7 +5458,6 @@ static void quirk_calpella_no_shadow_gtt(struct pci_dev *dev)
> }
> }
> DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0040, quirk_calpella_no_shadow_gtt);
> -DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0044, quirk_calpella_no_shadow_gtt);
> DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x0062, quirk_calpella_no_shadow_gtt);
> DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x006a, quirk_calpella_no_shadow_gtt);

This seems to make sense to me. Joonas, any comments or objections?

Regards,

	Joerg