Hi David,
On 6/17/21 1:22 PM, David Gibson wrote:
The iommu_group can guarantee isolation among different physical
devices (represented by RIDs). But when it comes to sub-devices (e.g. mdev or
vDPA devices represented by RID + SSID), we have to rely on the
device driver for isolation. The
On 2021-06-15 17:21, Sai Prakash Ranjan wrote:
> Hi Krishna,
>
> On 2021-06-14 23:18, Krishna Reddy wrote:
>>> Right, but we won't know until we profile the specific use cases or try
>>> them in a generic workload to see if they affect the performance. Sure,
>>> over-invalidation is a
On Thu, Jun 17, 2021 at 4:34 PM He Zhe wrote:
>
>
>
> On 6/15/21 10:13 PM, Xie Yongji wrote:
> > Increase the recursion depth of eventfd_signal() to 1. This
> > is the maximum recursion depth we have found so far, which
> > can be triggered with the following call chain:
> >
> >
Currently, for iommu_unmap() of a large scatter-gather list with page-size
elements, the majority of time is spent flushing partial walks in
__arm_lpae_unmap(). This is a VA-based TLB invalidation that invalidates
page-by-page on IOMMUs like arm-smmu-v2 (TLBIVA), which do not support
range based
Set the pgtable quirk IO_PGTABLE_QUIRK_TLB_INV_ALL for QTI SoC
implementation to use ::tlb_flush_all() for partial walk flush
to improve unmap performance.
Signed-off-by: Sai Prakash Ranjan
---
drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c | 11 +++
1 file changed, 11 insertions(+)
diff
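The patch body itself is truncated above, but the change it describes can be sketched in a few lines. This is a simplified, self-contained model rather than the actual arm-smmu-qcom patch: the quirk name follows the description, while the struct layout, the bit value, and the hook name `qcom_smmu_init_context()` are assumptions.

```c
#include <assert.h>

/* Simplified stand-in for include/linux/io-pgtable.h; the bit value
 * chosen for the quirk here is arbitrary. */
#define IO_PGTABLE_QUIRK_TLB_INV_ALL (1UL << 7)

struct io_pgtable_cfg {
	unsigned long quirks;
};

/* Hypothetical shape of the QTI init_context impl hook: opt this
 * implementation into full-context invalidation for partial walk flush. */
static int qcom_smmu_init_context(struct io_pgtable_cfg *pgtbl_cfg)
{
	/* Prefer one full-context invalidation over page-by-page flushes. */
	pgtbl_cfg->quirks |= IO_PGTABLE_QUIRK_TLB_INV_ALL;
	return 0;
}

/* Helper so the effect is observable in isolation. */
static unsigned long quirks_after_init(void)
{
	struct io_pgtable_cfg cfg = { 0 };

	qcom_smmu_init_context(&cfg);
	return cfg.quirks;
}
```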
Add a quirk IO_PGTABLE_QUIRK_TLB_INV_ALL to invalidate the entire context
with the tlb_flush_all() callback in partial walk flush, to improve unmap
performance on the select few platforms where the cost of over-invalidation
is less than the unmap latency.
Signed-off-by: Sai Prakash Ranjan
---
Hi Krishna,
On 2021-06-18 02:48, Krishna Reddy wrote:
Instead of flush_ops in the init_context hook, perhaps an io_pgtable quirk,
since this is related to TLB. Probably a bad name, but IO_PGTABLE_QUIRK_TLB_INV,
which will be set in the init_context impl hook, and the prev condition in
On 6/17/21 3:41 PM, John Garry wrote:
@@ -349,10 +349,9 @@ static int __init iommu_dma_setup(char *str)
}
early_param("iommu.strict", iommu_dma_setup);
-void iommu_set_dma_strict(bool strict)
+void iommu_set_dma_strict(void)
{
- if (strict || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT))
Hi Robin,
On 6/18/21 2:56 AM, Robin Murphy wrote:
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 60b1ec42e73b..ff221d3ddcbc 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -349,10 +349,9 @@ static int __init iommu_dma_setup(char *str)
}
Hi John,
On 6/17/21 4:00 PM, John Garry wrote:
On 17/06/2021 08:32, Lu Baolu wrote:
On 6/16/21 7:03 PM, John Garry wrote:
@@ -4382,9 +4380,9 @@ int __init intel_iommu_init(void)
* is likely to be much lower than the overhead of synchronizing
* the virtual and physical
On Thu, Jun 17, 2021 at 07:31:03AM +, Tian, Kevin wrote:
> > > Yes. function 1 is block-DMA while function 0 still attached to IOASID.
> > > Actually unbind from IOMMU fd doesn't change the security context.
> > > the change is conducted when attaching/detaching device to/from an
> > > IOASID.
On Thu, Jun 17, 2021 at 03:14:52PM -0600, Alex Williamson wrote:
> I've referred to this as a limitation of type1, that we can't put
> devices within the same group into different address spaces, such as
> behind separate vRoot-Ports in a vIOMMU config, but really, who cares?
> As isolation
On Tue, Jun 15, 2021 at 10:12:15AM -0600, Alex Williamson wrote:
>
> 1) A dual-function PCIe e1000e NIC where the functions are grouped
>    together due to ACS isolation issues.
>
>    a) Initial state: functions 0 & 1 are both bound to e1000e driver.
>
>    b) Admin uses driverctl to bind
On Thu, 17 Jun 2021, Claire Chang wrote:
> Add the functions, swiotlb_{alloc,free} and is_swiotlb_for_alloc to
> support the memory allocation from restricted DMA pool.
>
> The restricted DMA pool is preferred if available.
>
> Note that since coherent allocation needs remapping, one must set up
On Thu, 17 Jun 2021, Claire Chang wrote:
> Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
> use it to determine whether to bounce the data or not. This will be
> useful later to allow for different pools.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
>
On Thu, 17 Jun 2021, Claire Chang wrote:
> Update is_swiotlb_active to add a struct device argument. This will be
> useful later to allow for different pools.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
> Tested-by: Stefano Stabellini
> Tested-by: Will Deacon
Acked-by:
On Thu, 17 Jun 2021, Claire Chang wrote:
> Update is_swiotlb_buffer to add a struct device argument. This will be
> useful later to allow for different pools.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
> Tested-by: Stefano Stabellini
> Tested-by: Will Deacon
Acked-by:
On Thu, 17 Jun 2021, Claire Chang wrote:
> Always have the pointer to the swiotlb pool used in struct device. This
> could help simplify the code for other pools.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
> Tested-by: Stefano Stabellini
> Tested-by: Will Deacon
On Thu, 17 Jun 2021, Claire Chang wrote:
> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> initialization to make the code reusable.
>
> Signed-off-by: Claire Chang
> Reviewed-by: Christoph Hellwig
> Tested-by: Stefano Stabellini
> Tested-by: Will Deacon
> ---
>
On Thu, Jun 17, 2021 at 02:45:46PM +1000, David Gibson wrote:
> On Wed, Jun 09, 2021 at 09:39:19AM -0300, Jason Gunthorpe wrote:
> > On Wed, Jun 09, 2021 at 02:24:03PM +0200, Joerg Roedel wrote:
> > > On Mon, Jun 07, 2021 at 02:58:18AM +, Tian, Kevin wrote:
> > > > - Device-centric (Jason)
On Thu, Jun 17, 2021 at 03:02:33PM +1000, David Gibson wrote:
> In other words, do we really have use cases where we need to identify
> different devices IDs, even though we know they're not isolated.
I think when PASID is added in and all the complexity that brings, it
does become more
On Thu, Jun 17, 2021 at 10:06 AM Suman Anna wrote:
>
> Hi Rob,
>
> On 6/15/21 2:15 PM, Rob Herring wrote:
> > If a property has an 'items' list, then a 'minItems' or 'maxItems' with the
> > same size as the list is redundant and can be dropped. Note that this is DT
> > schema specific behavior and not
On 6/17/2021 1:30 PM, Joerg Roedel wrote:
On Thu, Jun 17, 2021 at 10:16:50AM -0700, Nick Desaulniers wrote:
On Thu, Jun 17, 2021 at 7:54 AM Joerg Roedel wrote:
From: Joerg Roedel
Fix this warning when compiled with clang and W=1:
drivers/iommu/intel/perf.c:16: warning: Function
> Instead of flush_ops in the init_context hook, perhaps an io_pgtable quirk,
> since this is related to TLB. Probably a bad name, but IO_PGTABLE_QUIRK_TLB_INV,
> which will be set in the init_context impl hook, and the prev condition in
> io_pgtable_tlb_flush_walk()
> becomes something like below. Seems
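The code Krishna's reply alludes to is cut off above. As an illustration only, a quirk-gated dispatch in io_pgtable_tlb_flush_walk() might look like the following self-contained sketch; the structs are trimmed stand-ins, not the kernel's real io-pgtable definitions, and the quirk's bit value is arbitrary.

```c
#include <assert.h>
#include <stddef.h>

#define IO_PGTABLE_QUIRK_TLB_INV_ALL (1UL << 7)

struct iommu_flush_ops {
	void (*tlb_flush_all)(void *cookie);
	void (*tlb_flush_walk)(unsigned long iova, size_t size,
			       size_t granule, void *cookie);
};

struct io_pgtable_cfg {
	unsigned long quirks;
	const struct iommu_flush_ops *tlb;
};

/* With the quirk set, a partial-walk flush becomes one full-context
 * invalidation instead of a page-by-page walk flush. */
static void io_pgtable_tlb_flush_walk(struct io_pgtable_cfg *cfg,
				      unsigned long iova, size_t size,
				      size_t granule, void *cookie)
{
	if (cfg->quirks & IO_PGTABLE_QUIRK_TLB_INV_ALL)
		cfg->tlb->tlb_flush_all(cookie);
	else
		cfg->tlb->tlb_flush_walk(iova, size, granule, cookie);
}

/* Tiny harness recording which path was taken. */
static int last_flush; /* 1 = flush_all, 2 = flush_walk */

static void rec_flush_all(void *cookie)
{
	(void)cookie;
	last_flush = 1;
}

static void rec_flush_walk(unsigned long iova, size_t size, size_t granule,
			   void *cookie)
{
	(void)iova; (void)size; (void)granule; (void)cookie;
	last_flush = 2;
}

static const struct iommu_flush_ops rec_ops = {
	.tlb_flush_all	= rec_flush_all,
	.tlb_flush_walk	= rec_flush_walk,
};

static int flush_kind(unsigned long quirks)
{
	struct io_pgtable_cfg cfg = { .quirks = quirks, .tlb = &rec_ops };

	io_pgtable_tlb_flush_walk(&cfg, 0, 0x200000, 0x1000, NULL);
	return last_flush;
}
```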
On Thu, 17 Jun 2021 07:31:03 +
"Tian, Kevin" wrote:
> > From: Alex Williamson
> > Sent: Thursday, June 17, 2021 3:40 AM
> >
> > On Wed, 16 Jun 2021 06:43:23 +
> > "Tian, Kevin" wrote:
> >
> > > > From: Alex Williamson
> > > > Sent: Wednesday, June 16, 2021 12:12 AM
> > > >
> > > >
From: Krishna Reddy
An iommu_group is getting created more than once during asynchronous probe of
multiple display heads (devices) on the Tegra194 SoC. All the display heads
share the same SID and are expected to be in the same iommu_group.
As arm_smmu_device_group() is not protecting group creation across
A domain is getting created more than once during asynchronous probe of
multiple display heads (devices). All the display heads share the same SID and
are expected to be in the same domain. As the iommu_alloc_default_domain() call
is not protected, it ends up creating two domains for the two display
devices, which
On Thu, Jun 17, 2021 at 10:16:50AM -0700, Nick Desaulniers wrote:
> On Thu, Jun 17, 2021 at 7:54 AM Joerg Roedel wrote:
> >
> > From: Joerg Roedel
> >
> > Fix this warning when compiled with clang and W=1:
> >
> > drivers/iommu/intel/perf.c:16: warning: Function parameter or
> > member
Multiple iommu domains and iommu groups are getting created for devices
sharing the same SID. Devices sharing the same SID are expected to be in the
same iommu group and the same iommu domain.
This leads to context faults when one device is accessing an IOVA from
another device, which shouldn't be the
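The fix these reports converge on is serializing the lookup-or-create step so that concurrent probes of same-SID devices reuse one group. A self-contained model of that idea, with illustrative names throughout: a pthread mutex stands in for the kernel lock, and a flat array stands in for the real SID-to-group lookup.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct iommu_group {
	int id;
};

#define MAX_SID 64

static pthread_mutex_t group_lock = PTHREAD_MUTEX_INITIALIZER;
static struct iommu_group *groups_by_sid[MAX_SID];
static int next_group_id;

/* Lookup-or-create under one lock: the first probe with a given SID
 * creates the group; later probes, even concurrent ones, find and
 * reuse it instead of racing to create a second group/domain. */
static struct iommu_group *device_group_for_sid(unsigned int sid)
{
	struct iommu_group *grp;

	pthread_mutex_lock(&group_lock);
	grp = groups_by_sid[sid];
	if (!grp) {
		grp = malloc(sizeof(*grp));
		grp->id = next_group_id++;
		groups_by_sid[sid] = grp;
	}
	pthread_mutex_unlock(&group_lock);
	return grp;
}
```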
On 2021-06-17 09:00, John Garry wrote:
On 17/06/2021 08:32, Lu Baolu wrote:
On 6/16/21 7:03 PM, John Garry wrote:
@@ -4382,9 +4380,9 @@ int __init intel_iommu_init(void)
* is likely to be much lower than the overhead of synchronizing
* the virtual and physical IOMMU
On 2021-06-16 12:03, John Garry wrote:
Now that the x86 drivers support iommu.strict, deprecate the custom
methods.
Signed-off-by: John Garry
---
Documentation/admin-guide/kernel-parameters.txt | 5 +++--
drivers/iommu/amd/init.c | 4 +++-
drivers/iommu/intel/iommu.c
On 2021-06-17 08:36, Lu Baolu wrote:
On 6/16/21 7:03 PM, John Garry wrote:
We now only ever set strict mode enabled in iommu_set_dma_strict(), so
just remove the argument.
Signed-off-by: John Garry
Reviewed-by: Robin Murphy
---
drivers/iommu/amd/init.c | 2 +-
On Thu, Jun 17, 2021 at 11:21:39AM +0530, Ashish Mhetre wrote:
>
>
> On 6/11/2021 6:19 PM, Robin Murphy wrote:
> > External email: Use caution opening links or attachments
> >
> >
> > On 2021-06-11 11:45, Will Deacon wrote:
> > > On Thu, Jun 10, 2021 at 09:46:53AM +0530, Ashish Mhetre wrote:
>
On Thu, Jun 17, 2021 at 7:54 AM Joerg Roedel wrote:
>
> From: Joerg Roedel
>
> Fix this warning when compiled with clang and W=1:
>
> drivers/iommu/intel/perf.c:16: warning: Function parameter or member
> 'latency_lock' not described in 'DEFINE_SPINLOCK'
>
On Mon, Jun 14, 2021 at 03:57:26PM +0100, Robin Murphy wrote:
> Consolidating the flush queue logic also meant that the "iommu.strict"
> option started taking effect on x86 as well. Make sure we document that.
>
> Fixes: a250c23f15c2 ("iommu: remove DOMAIN_ATTR_DMA_USE_FLUSH_QUEUE")
>
On Wed, Jun 16, 2021 at 11:58:13AM +0100, Will Deacon wrote:
> The following changes since commit c4681547bcce777daf576925a966ffa824edd09d:
>
> Linux 5.13-rc3 (2021-05-23 11:42:48 -1000)
>
> are available in the Git repository at:
>
>
From: Joerg Roedel
Fix this warning when compiled with clang and W=1:
drivers/iommu/intel/perf.c:16: warning: Function parameter or member
'latency_lock' not described in 'DEFINE_SPINLOCK'
drivers/iommu/intel/perf.c:16: warning: expecting prototype for
perf.c(). Prototype was
On Tue, Jun 15, 2021 at 2:15 PM Rob Herring wrote:
>
> If a property has an 'items' list, then a 'minItems' or 'maxItems' with the
> same size as the list is redundant and can be dropped. Note that this is DT
> schema specific behavior and not standard json-schema behavior. The tooling
> will fixup
On Thu, Jun 10, 2021 at 10:03 AM Jean-Philippe Brucker
wrote:
>
> The ACPI Virtual I/O Translation Table describes topology of
> para-virtual platforms, similarly to vendor tables DMAR, IVRS and IORT.
> For now it describes the relation between virtio-iommu and the endpoints
> it manages.
>
>
On Wed, Jun 16, 2021 at 08:27:39PM -0400, Konrad Rzeszutek Wilk wrote:
> How unique is this NVMe? Should I be able to reproduce this with any
> type or is it specific to Google Cloud?
With swiotlb=force this should be reproducible everywhere.
On Tue, 15 Jun 2021 at 21:15, Rob Herring wrote:
>
> If a property has an 'items' list, then a 'minItems' or 'maxItems' with the
> same size as the list is redundant and can be dropped. Note that this is DT
> schema specific behavior and not standard json-schema behavior. The tooling
> will fixup the
On 6/15/21 10:13 PM, Xie Yongji wrote:
> Increase the recursion depth of eventfd_signal() to 1. This
> is the maximum recursion depth we have found so far, which
> can be triggered with the following call chain:
>
> kvm_io_bus_write[kvm]
> --> ioeventfd_write
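The guard under discussion can be modeled in a few lines. This is a simplified stand-in, not the kernel's exact code: `EFD_WAKE_DEPTH`, the plain counter (which is per-CPU in the kernel), and both helper names are assumptions chosen to illustrate "allow one level of nesting, reject anything deeper".

```c
#include <assert.h>
#include <stdbool.h>

/* Permit one nested eventfd_signal() (depth 1), as the patch describes,
 * so a chain like kvm_io_bus_write -> ioeventfd_write -> ... ->
 * eventfd_signal is not treated as unbounded recursion. */
#define EFD_WAKE_DEPTH 1

static int eventfd_wake_count; /* per-CPU in the real kernel */

static bool eventfd_signal_begin(void)
{
	/* Reject what would look like unbounded recursion. */
	if (eventfd_wake_count > EFD_WAKE_DEPTH)
		return false;
	eventfd_wake_count++;
	return true;
}

static void eventfd_signal_end(void)
{
	eventfd_wake_count--;
}
```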
On 17/06/2021 08:32, Lu Baolu wrote:
On 6/16/21 7:03 PM, John Garry wrote:
@@ -4382,9 +4380,9 @@ int __init intel_iommu_init(void)
* is likely to be much lower than the overhead of synchronizing
* the virtual and physical IOMMU page-tables.
*/
- if
@@ -349,10 +349,9 @@ static int __init iommu_dma_setup(char *str)
}
early_param("iommu.strict", iommu_dma_setup);
-void iommu_set_dma_strict(bool strict)
+void iommu_set_dma_strict(void)
{
- if (strict || !(iommu_cmd_line & IOMMU_CMD_LINE_STRICT))
- iommu_dma_strict = strict;
+
On 6/16/21 7:03 PM, John Garry wrote:
We now only ever set strict mode enabled in iommu_set_dma_strict(), so
just remove the argument.
Signed-off-by: John Garry
Reviewed-by: Robin Murphy
---
drivers/iommu/amd/init.c | 2 +-
drivers/iommu/intel/iommu.c | 6 +++---
drivers/iommu/iommu.c
On 6/16/21 7:03 PM, John Garry wrote:
@@ -4382,9 +4380,9 @@ int __init intel_iommu_init(void)
* is likely to be much lower than the overhead of synchronizing
* the virtual and physical IOMMU page-tables.
*/
- if
> From: Alex Williamson
> Sent: Thursday, June 17, 2021 3:40 AM
>
> On Wed, 16 Jun 2021 06:43:23 +
> "Tian, Kevin" wrote:
>
> > > From: Alex Williamson
> > > Sent: Wednesday, June 16, 2021 12:12 AM
> > >
> > > On Tue, 15 Jun 2021 02:31:39 +
> > > "Tian, Kevin" wrote:
> > >
> > > > >
On Wed, Jun 09, 2021 at 09:39:19AM -0300, Jason Gunthorpe wrote:
> On Wed, Jun 09, 2021 at 02:24:03PM +0200, Joerg Roedel wrote:
> > On Mon, Jun 07, 2021 at 02:58:18AM +, Tian, Kevin wrote:
> > > - Device-centric (Jason) vs. group-centric (David) uAPI. David is not
> > > fully
> > >
On Fri, Jun 11, 2021 at 01:45:29PM -0300, Jason Gunthorpe wrote:
> On Thu, Jun 10, 2021 at 09:38:42AM -0600, Alex Williamson wrote:
>
> > Opening the group is not the extent of the security check currently
> > required, the group must be added to a container and an IOMMU model
> > configured for
On Thu, Jun 10, 2021 at 01:50:22PM +0800, Lu Baolu wrote:
> On 6/9/21 8:39 PM, Jason Gunthorpe wrote:
> > On Wed, Jun 09, 2021 at 02:24:03PM +0200, Joerg Roedel wrote:
> > > On Mon, Jun 07, 2021 at 02:58:18AM +, Tian, Kevin wrote:
> > > > - Device-centric (Jason) vs. group-centric (David)
On Tue, Jun 08, 2021 at 10:17:56AM -0300, Jason Gunthorpe wrote:
> On Tue, Jun 08, 2021 at 12:37:04PM +1000, David Gibson wrote:
>
> > > The PPC/SPAPR support allows KVM to associate a vfio group to an IOMMU
> > > page table so that it can handle iotlb programming from pre-registered
> > > memory
On Wed, Jun 09, 2021 at 10:15:32AM -0600, Alex Williamson wrote:
> On Wed, 9 Jun 2021 17:51:26 +0200
> Joerg Roedel wrote:
>
> > On Wed, Jun 09, 2021 at 12:00:09PM -0300, Jason Gunthorpe wrote:
> > > Only *drivers* know what the actual device is going to do, devices do
> > > not. Since the group
On Thu, Jun 03, 2021 at 08:12:27AM +, Tian, Kevin wrote:
> > From: David Gibson
> > Sent: Wednesday, June 2, 2021 2:15 PM
> >
> [...]
>
> > >
> > > /*
> > > * Get information about an I/O address space
> > > *
> > > * Supported capabilities:
> > > * - VFIO type1 map/unmap;
> >
On Tue, Jun 08, 2021 at 04:04:06PM -0300, Jason Gunthorpe wrote:
> On Tue, Jun 08, 2021 at 10:53:02AM +1000, David Gibson wrote:
> > On Thu, Jun 03, 2021 at 08:52:24AM -0300, Jason Gunthorpe wrote:
> > > On Thu, Jun 03, 2021 at 03:13:44PM +1000, David Gibson wrote:
> > >
> > > > > We can still
On Thu, Jun 10, 2021 at 06:37:31PM +0200, Jean-Philippe Brucker wrote:
> On Tue, Jun 08, 2021 at 04:31:50PM +1000, David Gibson wrote:
> > For the qemu case, I would imagine a two stage fallback:
> >
> > 1) Ask for the exact IOMMU capabilities (including pagetable
> >    format) that the
On 6/16/21 9:38 PM, Georgi Djakov wrote:
From: "Isaac J. Manjarres"
Since iommu_pgsize can calculate how many pages of the
same size can be mapped/unmapped before the next largest
page size boundary, add support for invoking an IOMMU
driver's map_pages() callback, if it provides one.
On 6/16/21 9:38 PM, Georgi Djakov wrote:
From: Will Deacon
Extend iommu_pgsize() to populate an optional 'count' parameter so that
we can direct unmapping operation to the ->unmap_pages callback if it
has been provided by the driver.
Signed-off-by: Will Deacon
Signed-off-by: Isaac J.
On 6/16/21 9:38 PM, Georgi Djakov wrote:
From: Will Deacon
The 'addr_merge' parameter to iommu_pgsize() is a fabricated address
intended to describe the alignment requirements to consider when
choosing an appropriate page size. On the iommu_map() path, this address
is the logical OR of the
On 6/16/21 9:38 PM, Georgi Djakov wrote:
From: Will Deacon
Avoid the potential for shifting values by amounts greater than the
width of their type by using a bitmap to compute page size in
iommu_pgsize().
Signed-off-by: Will Deacon
Signed-off-by: Isaac J. Manjarres
Signed-off-by: Georgi
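The hazard this patch removes is a shift whose amount can reach or exceed the type width, which is undefined behavior in C. A self-contained sketch of the bitmap approach: keep only the supported page-size bits that fit the size and alignment constraints, then take the highest remaining bit. `pick_pgsize()` and its helpers are illustrative names; the real logic lives in `iommu_pgsize()`.

```c
#include <assert.h>
#include <stddef.h>

/* Highest set bit, or -1 for zero (like the kernel's fls() - 1). */
static int fls_ul(unsigned long x)
{
	return x ? (int)(sizeof(x) * 8 - 1) - __builtin_clzl(x) : -1;
}

/* pgsize_bitmap has one bit set per supported page size. Return the
 * largest supported page size that fits in 'size' and respects the
 * alignment of the fabricated 'addr_merge' address, or 0 if none. */
static size_t pick_pgsize(unsigned long pgsize_bitmap,
			  unsigned long addr_merge, size_t size)
{
	unsigned long pgsizes, mask;
	int hi = fls_ul(size);

	/* Keep only page sizes <= size (bits 0..fls(size)), without
	 * ever shifting by the full type width. */
	mask = (hi >= (int)(sizeof(mask) * 8 - 1)) ?
		~0UL : (1UL << (hi + 1)) - 1;
	pgsizes = pgsize_bitmap & mask;

	/* The fabricated address further constrains us to its alignment. */
	if (addr_merge) {
		int lo = __builtin_ctzl(addr_merge);

		if (lo < (int)(sizeof(pgsizes) * 8 - 1))
			pgsizes &= (1UL << (lo + 1)) - 1;
	}

	/* Largest remaining candidate; this shift is always in range. */
	return pgsizes ? (size_t)1 << fls_ul(pgsizes) : 0;
}
```

For example, with a bitmap supporting 4K/64K/2M/1G pages, a 3 MB region maps with 2 MB pages unless a 64K-aligned address caps the choice at 64K.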
v13: https://lore.kernel.org/patchwork/cover/1448001/
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
If a device is not behind an IOMMU, we look up the device node and set
up the restricted DMA when the restricted-dma-pool is present.
Signed-off-by: Claire Chang
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
drivers/of/address.c | 33 +
Introduce the new compatible string, restricted-dma-pool, for restricted
DMA. One can specify the address and length of the restricted DMA memory
region by restricted-dma-pool in the reserved-memory node.
Signed-off-by: Claire Chang
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.
Regardless of the swiotlb setting, the restricted DMA pool is preferred if
available.
The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at
Add the functions, swiotlb_{alloc,free} and is_swiotlb_for_alloc to
support the memory allocation from restricted DMA pool.
The restricted DMA pool is preferred if available.
Note that since coherent allocation needs remapping, one must set up
another device coherent pool by shared-dma-pool and
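The allocation-side policy described here ("the restricted DMA pool is preferred if available") can be modeled compactly. Everything below is an illustrative stand-in, not the kernel's swiotlb code: the pool struct, the bump allocator, and `dma_alloc_sketch()` are assumptions made so the preference logic is runnable in isolation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

struct io_tlb_pool {
	unsigned char *buf;
	size_t size, used;
};

struct device {
	struct io_tlb_pool *restricted_pool; /* NULL if none attached */
};

/* Bump-allocate from the device's restricted pool (stand-in for the
 * real slot-based swiotlb_alloc). */
static void *swiotlb_alloc(struct device *dev, size_t size)
{
	struct io_tlb_pool *pool = dev->restricted_pool;
	void *p;

	if (!pool || pool->used + size > pool->size)
		return NULL;
	p = pool->buf + pool->used;
	pool->used += size;
	return p;
}

/* The policy: restricted pool first, normal allocator as fallback. */
static void *dma_alloc_sketch(struct device *dev, size_t size)
{
	if (dev->restricted_pool)
		return swiotlb_alloc(dev, size);
	return malloc(size); /* stand-in for the page allocator path */
}

static bool alloc_comes_from_pool(void)
{
	static unsigned char backing[4096];
	struct io_tlb_pool pool = { backing, sizeof(backing), 0 };
	struct device dev = { &pool };
	unsigned char *p = dma_alloc_sketch(&dev, 128);

	return p >= backing && p < backing + sizeof(backing);
}

static bool fallback_works(void)
{
	struct device dev = { NULL };
	void *p = dma_alloc_sketch(&dev, 16);
	bool ok = p != NULL;

	free(p);
	return ok;
}
```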
Add a new function, swiotlb_release_slots, to make the code reusable for
supporting different bounce buffer pools.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
kernel/dma/swiotlb.c | 35 ---
Rename find_slots to swiotlb_find_slots and move the maintenance of
alloc_size to it for better code reusability later.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
kernel/dma/swiotlb.c | 16
1 file
Propagate the swiotlb_force into io_tlb_default_mem->force_bounce and
use it to determine whether to bounce the data or not. This will be
useful later to allow for different pools.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
Update is_swiotlb_active to add a struct device argument. This will be
useful later to allow for different pools.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
Update is_swiotlb_buffer to add a struct device argument. This will be
useful later to allow for different pools.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
drivers/iommu/dma-iommu.c | 12 ++--
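The interface change described above can be illustrated with a trimmed model: `is_swiotlb_buffer()` consults the pool hanging off the device instead of a single global pool, which is what later enables per-device restricted pools. The structs below are simplified stand-ins for the kernel's `struct device` and `io_tlb_mem`, and the field name is an assumption.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long long phys_addr_t;

struct io_tlb_mem {
	phys_addr_t start;
	phys_addr_t end;
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem; /* set up at probe time */
};

/* Before: is_swiotlb_buffer(paddr) checked one global pool.
 * After: the struct device argument selects the device's pool. */
static bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
{
	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

	return mem && paddr >= mem->start && paddr < mem->end;
}

/* Helper to exercise the check with an ad-hoc pool. */
static bool check(phys_addr_t start, phys_addr_t end, phys_addr_t pa)
{
	struct io_tlb_mem mem = { start, end };
	struct device dev = { &mem };

	return is_swiotlb_buffer(&dev, pa);
}
```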
Always have the pointer to the swiotlb pool used in struct device. This
could help simplify the code for other pools.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
drivers/base/core.c | 4
include/linux/device.h |
Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
kernel/dma/swiotlb.c | 21 ++---
1 file changed, 14
Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
initialization to make the code reusable.
Signed-off-by: Claire Chang
Reviewed-by: Christoph Hellwig
Tested-by: Stefano Stabellini
Tested-by: Will Deacon
---
kernel/dma/swiotlb.c | 50
This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.
For example, we plan to use the PCI-e bus for