> From: Joao Martins
> Sent: Friday, April 29, 2022 5:09 AM
>
> Presented herewith is a series that extends IOMMUFD to have IOMMU
> hardware support for dirty bit in the IOPTEs.
>
> Today, AMD Milan (which has been out for a year now) supports it while ARM
> SMMUv3.2+ alongside VT-D rev3.x are expected to eventually come along.
On 2022/4/29 2:00 AM, Fenghua Yu wrote:
The PASID is being freed too early. It needs to stay around until after
device drivers that might be using it have had a chance to clear it out
of the hardware.
As a reminder:
mmget()/mmput() refcount the mm's address space
mmgrab()/mmdrop() refcount the mm itself
> From: Jason Gunthorpe
> Sent: Thursday, April 28, 2022 11:11 PM
>
>
> > 3) "dynamic DMA windows" (DDW). The IBM IOMMU hardware allows for
> 2 IOVA
> > windows, which aren't contiguous with each other. The base addresses
> > of each of these are fixed, but the size of each window, the
On Thu, 28 Apr 2022, Boris Ostrovsky wrote:
> On 4/28/22 5:49 PM, Stefano Stabellini wrote:
> > On Thu, 28 Apr 2022, Christoph Hellwig wrote:
> > > On Tue, Apr 26, 2022 at 04:07:45PM -0700, Stefano Stabellini wrote:
> > > > > Reported-by: Rahul Singh
> > > > > Signed-off-by: Christoph Hellwig
>
On 4/28/22 5:49 PM, Stefano Stabellini wrote:
On Thu, 28 Apr 2022, Christoph Hellwig wrote:
On Tue, Apr 26, 2022 at 04:07:45PM -0700, Stefano Stabellini wrote:
Reported-by: Rahul Singh
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
Do you want to take this through the
On Thu, 28 Apr 2022, Christoph Hellwig wrote:
> On Tue, Apr 26, 2022 at 04:07:45PM -0700, Stefano Stabellini wrote:
> > > Reported-by: Rahul Singh
> > > Signed-off-by: Christoph Hellwig
> >
> > Reviewed-by: Stefano Stabellini
>
> Do you want to take this through the Xen tree or should I pick
From: Kunkun Jiang
This detects the BBML feature and, if the SMMU supports it, transfers the
BBMLx quirk to io-pgtable.
BBML1 still requires marking the PTE nT prior to performing a
translation table update, while BBML2 requires neither break-before-make
nor the PTE nT bit being set. For dirty tracking it needs to
Similar to other IOMMUs, base unmap_read_dirty() on the existing unmap()
code, with the exception of having a non-racy clear of the PTE to return
whether it was dirty or not.
Signed-off-by: Joao Martins
---
drivers/iommu/intel/iommu.c | 43 -
include/linux/intel-iommu.h |
IOMMU advertises Access/Dirty bits if the extended capability
DMAR register reports it (ECAP, mnemonic ECAP.SSADS). The first
stage table, though, has no bit for advertising, unless referenced via
a scalable-mode PASID Entry. Relevant Intel IOMMU SDM ref for first stage
table "3.6.2 Accessed,
The .read_and_clear_dirty() IOMMU domain op takes care of
reading the dirty bits (i.e. PTE has both DBM and AP[2] set)
and marshalling into a bitmap of a given page size.
While reading the dirty bits we also clear the PTE AP[2]
bit to mark it as writable-clean.
Structure it in a way that the IOPTE
Mostly reuses unmap existing code with the extra addition of
marshalling into a bitmap of a page size. To tackle the race,
switch away from a plain store to a cmpxchg() and check whether
IOVA was dirtied or not once it succeeds.
Signed-off-by: Joao Martins
---
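The race described above — the IOMMU setting the dirty bit between the CPU's read and write of the IOPTE — is the reason a plain store is not enough. A minimal userspace sketch with C11 atomics (the `PTE_DIRTY` bit position and helper names are illustrative, not the kernel's):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define PTE_DIRTY (1ULL << 6)   /* illustrative dirty-bit position */

/* Atomically clear a PTE (as the unmap path would) and report whether
 * hardware had marked it dirty at the moment it was torn down. */
static bool clear_pte_read_dirty(_Atomic uint64_t *pte)
{
    uint64_t old = atomic_load(pte);

    /* A plain store could lose a dirty bit set by the IOMMU between
     * our read and our write; the cmpxchg loop cannot. */
    while (!atomic_compare_exchange_weak(pte, &old, 0))
        ;   /* 'old' is refreshed with the current value on failure */

    return old & PTE_DIRTY;
}

/* Marshal one dirty IOVA into a bitmap at page-size granularity. */
static void set_dirty_bit(unsigned long *bitmap, uint64_t iova,
                          uint64_t base, uint64_t pgsize)
{
    uint64_t idx = (iova - base) / pgsize;
    unsigned int bits = 8 * sizeof(unsigned long);

    bitmap[idx / bits] |= 1UL << (idx % bits);
}
```

Once the exchange succeeds, the returned old value is the authoritative snapshot of the PTE at teardown time, so the dirty bit cannot be missed.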
From: Kunkun Jiang
As nested mode is not upstreamed now, we just aim to support dirty
log tracking for stage 1 with io-pgtable mapping (meaning SVA mapping
is not supported). If HTTU is supported, we enable HA/HD bits in the SMMU
CD and transfer ARM_HD quirk to io-pgtable.
We additionally filter out
Similar to .read_and_clear_dirty() use the page table
walker helper functions and set DBM|RDONLY bit, thus
switching the IOPTE to writeable-clean.
Signed-off-by: Joao Martins
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 29
drivers/iommu/io-pgtable-arm.c | 52
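The writeable-clean scheme described above can be sketched with the stage-1 descriptor bits involved (a userspace model; bit positions follow the Arm VMSA, but the helper names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stage-1 descriptor bits relevant to hardware dirty tracking
 * (per the Arm VMSA: AP[2] is bit 7, DBM is bit 51). */
#define ARM_LPAE_PTE_AP_RDONLY (1ULL << 7)   /* AP[2]: read-only */
#define ARM_LPAE_PTE_DBM       (1ULL << 51)  /* dirty-bit modifier */

/* With HTTU, hardware clears AP[2] on the first write to a DBM page,
 * so "DBM set and AP[2] clear" means writeable-dirty. */
static bool pte_is_dirty(uint64_t pte)
{
    return (pte & ARM_LPAE_PTE_DBM) && !(pte & ARM_LPAE_PTE_AP_RDONLY);
}

/* Transition back to writeable-clean: keep DBM, set AP[2] again. */
static uint64_t pte_mk_clean(uint64_t pte)
{
    return pte | ARM_LPAE_PTE_AP_RDONLY;
}
```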
From: Jean-Philippe Brucker
Probe support for Hardware Translation Table Update (HTTU), which
essentially enables hardware update of access and dirty flags, if the
SMMU supports it and the kernel was built with HTTU support.
Probe and set the smmu::features for Hardware Dirty and Hardware
Print the feature, much like other kernel-supported features.
One can still probe its actual hw support via sysfs, regardless
of what the kernel does.
Signed-off-by: Joao Martins
---
drivers/iommu/amd/init.c | 4
1 file changed, 4 insertions(+)
diff --git a/drivers/iommu/amd/init.c
IOMMU advertises Access/Dirty bits if the extended feature register
reports it. Relevant AMD IOMMU SDM ref[0]
"1.3.8 Enhanced Support for Access and Dirty Bits"
To enable it, set the DTE flag in bits 7 and 8 to enable access, or
access+dirty. With that, the IOMMU starts marking the D and A flags
The AMD implementation of unmap_read_dirty() is pretty simple as it
mostly reuses unmap code with the extra addition of marshalling
the dirty bit into the bitmap as it walks the to-be-unmapped
IOPTE.
Extra care is taken, though, to switch over to cmpxchg() as opposed
to a non-serialized store to the PTE
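A sketch of the DTE programming mentioned above (the HAD field layout follows the AMD IOMMU specification section cited earlier; the helper names are illustrative):

```c
#include <stdint.h>

/* Device Table Entry "Host Access/Dirty" (HAD) field, DTE bits 8:7:
 *   01b - hardware updates the Accessed bit only
 *   11b - hardware updates both Accessed and Dirty bits */
#define DTE_FLAG_HA  (1ULL << 7)   /* access tracking */
#define DTE_FLAG_HAD (3ULL << 7)   /* access + dirty tracking */

static uint64_t dte_enable_dirty_tracking(uint64_t dte)
{
    return dte | DTE_FLAG_HAD;
}

static int dte_dirty_tracking_enabled(uint64_t dte)
{
    return (dte & DTE_FLAG_HAD) == DTE_FLAG_HAD;
}
```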
Add a new test ioctl for simulating dirty IOVAs
in the mock domain, and implement the mock iommu domain ops
that support dirty tracking.
The selftest exercises the usual main workflow of:
1) Setting/Clearing dirty tracking from the iommu domain
2) Read and clear dirty IOPTEs
3)
Add the corresponding APIs for performing VFIO dirty tracking,
particularly VFIO_IOMMU_DIRTY_PAGES ioctl subcmds:
* VFIO_IOMMU_DIRTY_PAGES_FLAG_START: Start dirty tracking and allocate
the area @dirty_bitmap
* VFIO_IOMMU_DIRTY_PAGES_FLAG_STOP: Stop dirty
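The bitmap that START allocates is sized at one bit per page, rounded up to 64-bit words. A small sketch of that arithmetic (illustrative, mirroring VFIO's dirty-bitmap layout):

```c
#include <stdint.h>

/* One bit per page in the IOVA range, rounded up to whole u64 words,
 * as VFIO's dirty-bitmap layout does. */
static uint64_t dirty_bitmap_bytes(uint64_t iova_len, uint64_t pgsize)
{
    uint64_t npages = (iova_len + pgsize - 1) / pgsize;

    return ((npages + 63) / 64) * 8;
}
```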
Today, the dirty state is lost and the page wouldn't be migrated to the
destination, potentially leading the guest into an error.
Add an unmap API that reads the dirty bit and sets it in the
user passed bitmap. This unmap iommu API tackles a potentially
racy update to the dirty bit *when* doing DMA on a
Every IOMMU driver should be able to implement the needed
iommu domain ops to perform dirty tracking.
Connect a hw_pagetable to the IOMMU core dirty tracking ops.
It exposes all of the functionality for the UAPI:
- Enable/Disable dirty tracking on an IOMMU domain (hw_pagetable id)
- Read the
Add to the iommu domain operations a set of callbacks to
perform dirty tracking, particularly to start and stop
tracking and finally to test and clear the dirty data.
Drivers are expected to dynamically change their hw protection
domain bits to toggle the tracking and flush some form of
control state
Presented herewith is a series that extends IOMMUFD to have IOMMU
hardware support for dirty bit in the IOPTEs.
Today, AMD Milan (which has been out for a year now) supports it while ARM
SMMUv3.2+ alongside VT-D rev3.x are expected to eventually come along.
The intended use-case is to support Live
Add an argument to the kAPI that unmaps an IOVA from the attached
domains, to also receive a bitmap.
When passed an iommufd_dirty_data::bitmap we call out to the special
dirty unmap (iommu_unmap_read_dirty()). The bitmap data is
iterated, similarly to read_and_clear_dirty(), in IOVA
chunks
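The chunked iteration over an IOVA range can be sketched as a generic walker (the function names and chunk policy here are illustrative):

```c
#include <stdint.h>

/* Walk an IOVA range in fixed-size chunks, as the dirty-bitmap
 * iteration does, invoking `fn` once per chunk. Returns the number
 * of chunks visited. */
static int for_each_iova_chunk(uint64_t iova, uint64_t length,
                               uint64_t chunk,
                               void (*fn)(uint64_t iova, uint64_t len,
                                          void *data),
                               void *data)
{
    int n = 0;
    uint64_t end = iova + length;

    while (iova < end) {
        /* Trim the final chunk to the end of the range. */
        uint64_t len = (end - iova < chunk) ? end - iova : chunk;

        fn(iova, len, data);
        iova += len;
        n++;
    }
    return n;
}

/* Example callback: accumulate the bytes covered by each chunk. */
static void count_bytes(uint64_t iova, uint64_t len, void *data)
{
    (void)iova;
    *(uint64_t *)data += len;
}
```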
Add an io_pagetable kernel API to toggle dirty tracking:
* iopt_set_dirty_tracking(iopt, [domain], state)
It receives either NULL (which means all domains) or an
iommu_domain. The intended caller of this is via the hw_pagetable
object that is created on device attach, which passes an
Add an IO pagetable API iopt_read_and_clear_dirty_data() that
performs the reading of dirty IOPTEs for a given IOVA range and
then copying back to userspace from each area-internal bitmap.
Underneath it uses the IOMMU equivalent API which will read the
dirty bits, as well as atomically clearing
The PASID is being freed too early. It needs to stay around until after
device drivers that might be using it have had a chance to clear it out
of the hardware.
As a reminder:
mmget()/mmput() refcount the mm's address space
mmgrab()/mmdrop() refcount the mm itself
The PASID is currently tied
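The two counters in the reminder above can be modelled as a toy C sketch. The names mirror the kernel API, but the bodies are purely illustrative of the lifetime rules: the last mm_users reference tears down the address space and drops one pin on mm_count, while an mmgrab() holder keeps the struct itself alive past that point.

```c
/* Toy model: mm_users guards the address space (page tables),
 * mm_count keeps struct mm_struct itself alive. */
struct mm {
    int mm_users;     /* mmget()/mmput() */
    int mm_count;     /* mmgrab()/mmdrop() */
    int aspace_freed;
    int struct_freed;
};

static void mmgrab(struct mm *mm) { mm->mm_count++; }

static void mmdrop(struct mm *mm)
{
    if (--mm->mm_count == 0)
        mm->struct_freed = 1;        /* free_mm() */
}

static void mmget(struct mm *mm) { mm->mm_users++; }

static void mmput(struct mm *mm)
{
    if (--mm->mm_users == 0) {
        mm->aspace_freed = 1;        /* exit_mmap() */
        mmdrop(mm);                  /* drop the users' pin on mm_count */
    }
}
```

This is why a driver that still has the PASID programmed needs its own mmgrab()-style pin: the address space can be gone while the struct must not be.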
On 4/20/22 6:29 PM, Suravee Suthikulpanit wrote:
On AMD system with SNP enabled, IOMMU hardware checks the host translation
valid (TV) and guest translation valid (GV) bits in the device
table entry (DTE) before accessing the corresponding page tables.
However, current IOMMU driver sets the
On 2022-04-28 17:02, Andi Kleen wrote:
On 4/28/2022 8:07 AM, Robin Murphy wrote:
On 2022-04-28 15:55, Andi Kleen wrote:
On 4/28/2022 7:45 AM, Christoph Hellwig wrote:
On Thu, Apr 28, 2022 at 03:44:36PM +0100, Robin Murphy wrote:
Rather than introduce this extra level of allocator
On 4/28/22 09:01, Jean-Philippe Brucker wrote:
>> But, this misses an important point: even after the address space is
>> gone, the PASID will still be programmed into a device. Device drivers
>> might, for instance, still need to flush operations that are outstanding
>> and need to use that
On 4/28/2022 8:07 AM, Robin Murphy wrote:
On 2022-04-28 15:55, Andi Kleen wrote:
On 4/28/2022 7:45 AM, Christoph Hellwig wrote:
On Thu, Apr 28, 2022 at 03:44:36PM +0100, Robin Murphy wrote:
Rather than introduce this extra level of allocator complexity, how
about
just dividing up the
On Thu, Apr 28, 2022 at 08:09:04AM -0700, Dave Hansen wrote:
> On 4/25/22 21:20, Fenghua Yu wrote:
> > From 84aa68f6174439d863c40cdc2db0e1b89d620dd0 Mon Sep 17 00:00:00 2001
> > From: Fenghua Yu
> > Date: Fri, 15 Apr 2022 00:51:33 -0700
> > Subject: [PATCH] iommu/sva: Fix PASID use-after-free
On 4/28/2022 10:44 PM, Robin Murphy wrote:
On 2022-04-28 15:14, Tianyu Lan wrote:
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently
On 4/28/2022 10:14 PM, Tianyu Lan wrote:
From: Tianyu Lan
In a SEV/TDX Confidential VM, device DMA transactions need to use the
swiotlb bounce buffer to share data with the host/hypervisor. The swiotlb
spinlock introduces overhead among devices if they share io tlb mem. To
avoid this issue, introduce
On 4/28/22 08:28, Fenghua Yu wrote:
> Do you want me to change the changelog to add both this paragraph and the
> following paragraph?
Yes, as long as everyone agrees that it captures the issue well.
___
iommu mailing list
Hi, Dave,
On Thu, Apr 28, 2022 at 08:09:04AM -0700, Dave Hansen wrote:
> On 4/25/22 21:20, Fenghua Yu wrote:
> > From 84aa68f6174439d863c40cdc2db0e1b89d620dd0 Mon Sep 17 00:00:00 2001
> > From: Fenghua Yu
> > Date: Fri, 15 Apr 2022 00:51:33 -0700
> > Subject: [PATCH] iommu/sva: Fix PASID
On Thu, Apr 07, 2022 at 04:32:28PM +0800, Yong Wu wrote:
> Yong Wu (2):
> dt-bindings: mediatek: mt8186: Add binding for MM iommu
> iommu/mediatek: Add mt8186 iommu support
>
> .../bindings/iommu/mediatek,iommu.yaml| 4 +
> drivers/iommu/mtk_iommu.c | 17 ++
>
On 4/28/2022 8:05 AM, Christoph Hellwig wrote:
On Thu, Apr 28, 2022 at 07:55:39AM -0700, Andi Kleen wrote:
At least for TDX need parallelism with a single device for performance.
So find a way to make it happen without exposing details to random
drivers.
That's what the original patch
On Fri, Apr 29, 2022 at 12:53:16AM +1000, David Gibson wrote:
> 2) Costly GUPs. pseries (the most common ppc machine type) always
> expects a (v)IOMMU. That means that unlike the common x86 model of a
> host with IOMMU, but guests with no-vIOMMU, guest initiated
> maps/unmaps can be a hot path.
On 4/25/22 21:20, Fenghua Yu wrote:
> From 84aa68f6174439d863c40cdc2db0e1b89d620dd0 Mon Sep 17 00:00:00 2001
> From: Fenghua Yu
> Date: Fri, 15 Apr 2022 00:51:33 -0700
> Subject: [PATCH] iommu/sva: Fix PASID use-after-free issue
>
> A PASID might be still used on ARM after it is freed in
On 2022-04-28 15:55, Andi Kleen wrote:
On 4/28/2022 7:45 AM, Christoph Hellwig wrote:
On Thu, Apr 28, 2022 at 03:44:36PM +0100, Robin Murphy wrote:
Rather than introduce this extra level of allocator complexity, how
about
just dividing up the initial SWIOTLB allocation into multiple
On Thu, Apr 28, 2022 at 07:55:39AM -0700, Andi Kleen wrote:
> At least for TDX need parallelism with a single device for performance.
So find a way to make it happen without exposing details to random
drivers.
On Thu, Apr 21, 2022 at 01:21:20PM +0800, Lu Baolu wrote:
> static void iopf_handle_group(struct work_struct *work)
> {
> struct iopf_group *group;
> @@ -134,12 +78,23 @@ static void iopf_handle_group(struct work_struct *work)
> group = container_of(work, struct iopf_group, work);
>
On 2022-04-28 15:45, Christoph Hellwig wrote:
On Thu, Apr 28, 2022 at 03:44:36PM +0100, Robin Murphy wrote:
Rather than introduce this extra level of allocator complexity, how about
just dividing up the initial SWIOTLB allocation into multiple io_tlb_mem
instances?
Yeah. We're almost done
On 4/28/2022 7:45 AM, Christoph Hellwig wrote:
On Thu, Apr 28, 2022 at 03:44:36PM +0100, Robin Murphy wrote:
Rather than introduce this extra level of allocator complexity, how about
just dividing up the initial SWIOTLB allocation into multiple io_tlb_mem
instances?
Yeah. We're almost done
On Thu, Apr 21, 2022 at 01:21:12PM +0800, Lu Baolu wrote:
> Attaching an IOMMU domain to a PASID of a device is a generic operation
> for modern IOMMU drivers which support PASID-granular DMA address
> translation. Currently visible usage scenarios include (but not limited):
>
> - SVA (Shared
On Thu, Mar 24, 2022 at 04:04:03PM -0600, Alex Williamson wrote:
> On Wed, 23 Mar 2022 21:33:42 -0300
> Jason Gunthorpe wrote:
>
> > On Wed, Mar 23, 2022 at 04:51:25PM -0600, Alex Williamson wrote:
> >
> > > My overall question here would be whether we can actually achieve a
> > > compatibility
On 07/04/2022 09:57, Yong Wu wrote:
Add a new flag, NON_STD_AXI. All the previous SoCs support this flag.
This prepares for adding the infra and apu iommus, which don't support it.
Signed-off-by: Yong Wu
Reviewed-by: AngeloGioacchino Del Regno
---
drivers/iommu/mtk_iommu.c | 16 ++--
1
On Fri, Mar 25, 2022 at 2:13 PM wrote:
>
> From: Xiaoke Wang
>
> kzalloc() is a memory allocation function which can return NULL when
> some internal memory errors happen. So it is better to check it to
> prevent potential wrong memory access.
>
> Signed-off-by: Xiaoke Wang
> ---
>
Hi Baolu,
On Thu, Apr 21, 2022 at 01:21:19PM +0800, Lu Baolu wrote:
> +/*
> + * Get the attached domain for asynchronous usage, for example the I/O
> + * page fault handling framework. The caller get a reference counter
> + * of the domain automatically on a successful return and should put
> + *
On Thu, Apr 28, 2022 at 03:44:36PM +0100, Robin Murphy wrote:
> Rather than introduce this extra level of allocator complexity, how about
> just dividing up the initial SWIOTLB allocation into multiple io_tlb_mem
> instances?
Yeah. We're almost done removing all knowledge of swiotlb from
On 2022-04-28 15:14, Tianyu Lan wrote:
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO
On 22/04/2022 17:28, Shameer Kolothum wrote:
> Hi
>
> v9 --> v10
> -Addressed Christoph's comments. We now have a callback to
> struct iommu_resv_region to free all related memory and also dropped
> the FW specific union and now has a container struct iommu_iort_rmr_data.
> See patches #1
On Thu, Apr 28, 2022 at 03:58:30PM +1000, David Gibson wrote:
> On Thu, Mar 31, 2022 at 09:58:41AM -0300, Jason Gunthorpe wrote:
> > On Thu, Mar 31, 2022 at 03:36:29PM +1100, David Gibson wrote:
> >
> > > > +/**
> > > > + * struct iommu_ioas_iova_ranges - ioctl(IOMMU_IOAS_IOVA_RANGES)
> > > > + *
On 07/04/2022 09:57, Yong Wu wrote:
Add an IOMMU_TYPE definition. In mt8195 we have another IOMMU_TYPE,
infra iommu, and there will also be another APU_IOMMU; thus, use 2 bits
for the IOMMU_TYPE.
Signed-off-by: Yong Wu
Reviewed-by: AngeloGioacchino Del Regno
---
drivers/iommu/mtk_iommu.c |
On 2022-04-28 14:18, Robin Murphy wrote:
v1:
https://lore.kernel.org/linux-iommu/cover.1649935679.git.robin.mur...@arm.com/
Hi all,
Just some minor updates for v2, adding a workaround to avoid changing
VT-d behaviour for now, cleaning up the extra include I missed in
virtio-iommu, and
From: Tianyu Lan
In a SEV/TDX Confidential VM, device DMA transactions need to use the
swiotlb bounce buffer to share data with the host/hypervisor. The swiotlb
spinlock introduces overhead among devices if they share io tlb mem. To
avoid this issue, introduce swiotlb_device_allocate() to allocate device bounce
On 07/04/2022 09:57, Yong Wu wrote:
We preassign some ports to a special bank via the newly defined
banks_portmsk. Putting it in the plat_data means it is not expected to be
adjusted dynamically.
If the iommu id in the iommu consumer's dtsi node is inside this
banks_portmsk, then we switch it to
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
From: Tianyu Lan
Traditionally swiotlb was not performance critical because it was only
used for slow devices. But in some setups, like TDX/SEV confidential
guests, all IO has to go through swiotlb. Currently swiotlb only has a
single lock. Under high IO load with multiple CPUs this can lead to
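The direction the thread converges on — dividing the pool into multiple areas so CPUs rarely contend on a single lock — can be modelled in a toy sketch (illustrative only: C11 atomics stand in for per-area spinlocks, and all sizes and names are made up):

```c
#include <stdatomic.h>

#define NR_AREAS 4
#define SLOTS_PER_AREA 64

/* One counter per area; in the real thing each area would have its
 * own spinlock plus a slot index. */
struct io_tlb_area {
    atomic_int used;
};

static struct io_tlb_area areas[NR_AREAS];

/* Allocate one slot, starting from the caller's "home" area and
 * falling back to the others only when it is full. Returns the area
 * index used, or -1 when the whole pool is exhausted. */
static int swiotlb_alloc_slot(int cpu)
{
    for (int i = 0; i < NR_AREAS; i++) {
        int idx = (cpu + i) % NR_AREAS;
        int old = atomic_fetch_add(&areas[idx].used, 1);

        if (old < SLOTS_PER_AREA)
            return idx;
        atomic_fetch_sub(&areas[idx].used, 1); /* area full, back off */
    }
    return -1;
}
```

Since each CPU starts at a different area, concurrent allocations mostly touch disjoint cache lines instead of serializing on one lock.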
Joerg Roedel writes:
> Acked-by: Joerg Roedel
>
> Jonathan, will you merge that through the documentation tree?
Done.
Thanks,
jon
Hi Lorenzo,
> -----Original Message-----
> From: Lorenzo Pieralisi [mailto:lorenzo.pieral...@arm.com]
> Sent: 26 April 2022 16:30
> To: Shameerali Kolothum Thodi
> Cc: linux-arm-ker...@lists.infradead.org; linux-a...@vger.kernel.org;
> iommu@lists.linux-foundation.org; Linuxarm ;
>
On Thu, Apr 28, 2022 at 08:54:11AM -0300, Jason Gunthorpe wrote:
> Can we get this on a topic branch so Alex can pull it? There are
> conflicts with other VFIO patches
Right, we already discussed this. Moved the patches to a separate topic
branch. It will appear as 'vfio-notifier-fix' once I
On Tue, Apr 26, 2022 at 04:07:45PM -0700, Stefano Stabellini wrote:
> > Reported-by: Rahul Singh
> > Signed-off-by: Christoph Hellwig
>
> Reviewed-by: Stefano Stabellini
Do you want to take this through the Xen tree or should I pick it up?
Either way I'd love to see some testing on x86 as
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the probe failure path accordingly.
Reviewed-by: Jean-Philippe Brucker
Signed-off-by: Robin Murphy
---
drivers/iommu/virtio-iommu.c | 25 -
1 file changed, 25 deletions(-)
diff --git
Clean up the remaining trivial bus_set_iommu() callsites along
with the implementation. Now drivers only have to know and care
about iommu_device instances, phew!
Signed-off-by: Robin Murphy
---
drivers/iommu/arm/arm-smmu/qcom_iommu.c | 4
drivers/iommu/fsl_pamu_domain.c | 4
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the probe failure path accordingly.
Signed-off-by: Robin Murphy
---
drivers/iommu/tegra-smmu.c | 29 ++---
1 file changed, 6 insertions(+), 23 deletions(-)
diff --git a/drivers/iommu/tegra-smmu.c
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the probe failure paths accordingly.
Signed-off-by: Robin Murphy
---
drivers/iommu/mtk_iommu.c| 13 +
drivers/iommu/mtk_iommu_v1.c | 13 +
2 files changed, 2 insertions(+), 24 deletions(-)
diff
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the init failure path accordingly.
Signed-off-by: Robin Murphy
---
drivers/iommu/omap-iommu.c | 6 --
1 file changed, 6 deletions(-)
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index
Stop calling bus_set_iommu() since it's now unnecessary. This also
leaves the custom initcall effectively doing nothing but register
the driver, which no longer needs to happen early either, so convert
it to builtin_platform_driver().
Signed-off-by: Robin Murphy
---
drivers/iommu/ipmmu-vmsa.c |
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the init failure path accordingly.
Tested-by: Marek Szyprowski
Signed-off-by: Robin Murphy
---
drivers/iommu/exynos-iommu.c | 9 -
1 file changed, 9 deletions(-)
diff --git a/drivers/iommu/exynos-iommu.c
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the probe failure path accordingly.
Tested-by: Sven Peter
Reviewed-by: Sven Peter
Signed-off-by: Robin Murphy
---
drivers/iommu/apple-dart.c | 30 +-
1 file changed, 1 insertion(+), 29
Stop calling bus_set_iommu() since it's now unnecessary, and simplify
the probe failure path accordingly.
Acked-by: Will Deacon
Signed-off-by: Robin Murphy
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 53 +
1 file changed, 2 insertions(+), 51 deletions(-)
diff --git
Stop calling bus_set_iommu() since it's now unnecessary. With device
probes now replayed for every IOMMU instance registration, the whole
sorry ordering workaround for legacy DT bindings goes too, hooray!
Acked-by: Will Deacon
Signed-off-by: Robin Murphy
---
Stop calling bus_set_iommu() since it's now unnecessary, and
garbage-collect the last remnants of amd_iommu_init_api().
Signed-off-by: Robin Murphy
---
drivers/iommu/amd/amd_iommu.h | 1 -
drivers/iommu/amd/init.c | 9 +
drivers/iommu/amd/iommu.c | 21 -
3
Move the bus setup to iommu_device_register(). This should allow
bus_iommu_probe() to be correctly replayed for multiple IOMMU instances,
and leaves bus_set_iommu() as a glorified no-op to be cleaned up next.
At this point we can also handle cleanup better than just rolling back
the
Although the driver has some support implemented for non-PCI devices via
ANDD, it only registers itself for pci_bus_type, so has never actually
seen probe_device for a non-PCI device. Once the bus details move into
the IOMMU core, it appears there may be some issues with correctly
rejecting
The number of bus types that the IOMMU subsystem deals with is small and
manageable, so pull that list into core code as a first step towards
cleaning up all the boilerplate bus-awareness from drivers. Calling
iommu_probe_device() before bus->iommu_ops is set will simply return
-ENODEV and not
v1:
https://lore.kernel.org/linux-iommu/cover.1649935679.git.robin.mur...@arm.com/
Hi all,
Just some minor updates for v2, adding a workaround to avoid changing
VT-d behaviour for now, cleaning up the extra include I missed in
virtio-iommu, and collecting all the acks so far. As before, this is
On Thu, Apr 28, 2022 at 11:32:04AM +0200, Joerg Roedel wrote:
> On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> > Lu Baolu (10):
> > iommu: Add DMA ownership management interfaces
> > driver core: Add dma_cleanup callback in bus_type
> > amba: Stop sharing
On 2022-04-28 02:04, Samuel Holland wrote:
So far, the driver has relied on arch/arm64/Kconfig to select IOMMU_DMA.
Unsurprisingly, this does not work on RISC-V, so the driver must select
IOMMU_DMA itself.
No, IOMMU_DMA should only be selected by the architecture code that's
also responsible
On Thu, Apr 28, 2022 at 04:52:39PM +0800, xkernel.w...@foxmail.com wrote:
> From: Xiaoke Wang
>
> kzalloc() is a memory allocation function which can return NULL when
> some internal memory errors happen. So it is better to check it to
> prevent potential wrong memory access.
>
> Besides, to
Hi Vasant, Hi Suravee,
On Mon, Apr 25, 2022 at 05:03:38PM +0530, Vasant Hegde wrote:
> Newer AMD systems can support multiple PCI segments, where each segment
> contains one or more IOMMU instances. However, an IOMMU instance can only
> support a single PCI segment.
Thanks for doing this, making
On Mon, Apr 25, 2022 at 05:04:15PM +0530, Vasant Hegde wrote:
> + seg_id = (iommu_fault->sbdf >> 16) & 0xffff;
> + devid = iommu_fault->sbdf & 0xffff;
This deserves some macros for readability.
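Joerg's suggestion amounts to something like the following (hypothetical macro names — the actual patch may spell them differently; the bit layout assumed is sbdf[31:16] = PCI segment, sbdf[15:0] = device id, per the open-coded shifts above):

```c
#include <stdint.h>

/* Readability helpers for the open-coded shifts and masks. */
#define SBDF_TO_SEGID(sbdf) (((sbdf) >> 16) & 0xffff)
#define SBDF_TO_DEVID(sbdf) ((sbdf) & 0xffff)
```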
On Mon, Apr 25, 2022 at 05:04:05PM +0530, Vasant Hegde wrote:
> From: Suravee Suthikulpanit
>
> Replace global amd_iommu_dev_table with per PCI segment device table.
> Also remove "dev_table_size".
>
> Co-developed-by: Vasant Hegde
> Signed-off-by: Vasant Hegde
> Signed-off-by: Suravee
On Mon, Apr 25, 2022 at 05:03:49PM +0530, Vasant Hegde wrote:
> + /* Size of the device table */
> + u32 dev_table_size;
Same here and with all other size indicators. If they are always going
to have their maximum value anyways, we can drop them.
On Mon, Apr 25, 2022 at 05:03:48PM +0530, Vasant Hegde wrote:
> + /* Largest PCI device id we expect translation requests for */
> + u16 last_bdf;
How does the IVRS table look like on these systems? Do they still
enumerate the whole PCI Bus/Dev/Fn space? If so I am fine with getting
rid
On Mon, Apr 25, 2022 at 05:03:39PM +0530, Vasant Hegde wrote:
Subject: iommu/amd: Update struct iommu_dev_data defination
^^ Typo
Hi Vasant,
On Mon, Apr 25, 2022 at 05:03:40PM +0530, Vasant Hegde wrote:
> +/*
> + * This structure contains information about one PCI segment in the system.
> + */
> +struct amd_iommu_pci_seg {
> + struct list_head list;
The purpose of this list_head needs a comment.
> +
> + /* PCI
On Mon, Apr 25, 2022 at 05:08:26PM +0800, Yang Yingliang wrote:
> It will cause a null-ptr-deref in resource_size() if platform_get_resource()
> returns NULL. Move the call to resource_size() after devm_ioremap_resource(),
> which will check 'res', to avoid the null-ptr-deref.
> And use
On Fri, Apr 22, 2022 at 02:21:03PM -0500, Rob Herring wrote:
> There's no need to show consumer side in provider examples. The ones
> used here are undocumented or undocumented in schemas which results in
> warnings.
>
> Signed-off-by: Rob Herring
Applied, thanks Rob.
On Fri, Apr 22, 2022 at 12:22:34PM +0100, Will Deacon wrote:
> git://git.kernel.org/pub/scm/linux/kernel/git/will/linux.git
> tags/arm-smmu-fixes
Pulled, thanks Will.
On Mon, Apr 18, 2022 at 08:49:49AM +0800, Lu Baolu wrote:
> Lu Baolu (10):
> iommu: Add DMA ownership management interfaces
> driver core: Add dma_cleanup callback in bus_type
> amba: Stop sharing platform_dma_configure()
> bus: platform,amba,fsl-mc,PCI: Add device DMA ownership management
On 28/04/2022 11:23, Robin Murphy wrote:
> On 2022-04-28 07:56, Krzysztof Kozlowski wrote:
>> On 27/04/2022 13:25, Andre Przywara wrote:
>>> The Page Request Interface (PRI) is an optional PCIe feature. As such, a
>>> SMMU would not need to handle it if the PCIe host bridge or the SMMU
>>> itself
On 2022-04-28 07:56, Krzysztof Kozlowski wrote:
On 27/04/2022 13:25, Andre Przywara wrote:
The Page Request Interface (PRI) is an optional PCIe feature. As such, a
SMMU would not need to handle it if the PCIe host bridge or the SMMU
itself do not implement it. Also an SMMU could be connected to
On Tue, Apr 12, 2022 at 06:12:11PM +0200, Sven Peter wrote:
> It's the same people anyway.
>
> Signed-off-by: Sven Peter
> ---
> MAINTAINERS | 10 ++
> 1 file changed, 2 insertions(+), 8 deletions(-)
Applied, thanks.
On Sat, Apr 23, 2022 at 04:23:29PM +0800, Lu Baolu wrote:
> Hi Joerg,
>
> One fix is queued for v5.18. It aims to fix:
>
> - Handle PCI stop marker messages in IOMMU driver to meet the
>requirement of I/O page fault handling framework.
>
> Please consider it for the iommu/fix branch.
>
>
On Sun, Apr 10, 2022 at 09:35:32AM +0800, Lu Baolu wrote:
> Hi Joerg,
>
> One fix is queued for v5.18. It aims to fix:
>
> - Calculate a feasible mask for non-aligned page-selective
>IOTLB invalidation.
>
> Please consider it for the iommu/fix branch.
>
> Best regards,
> Lu Baolu
>
>
From: Xiaoke Wang
kzalloc() is a memory allocation function which can return NULL when
some internal memory errors happen. So it is better to check it to
prevent potential wrong memory access.
Besides, to propagate the error to the caller, the type of
insert_iommu_master() is changed to `int`.
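The shape of the fix, modelled in userspace with calloc() standing in for kzalloc() (the struct and function names mirror the patch but are illustrative):

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>

struct iommu_master {
    char name[32];
};

/* Check the allocation and propagate -ENOMEM to the caller instead of
 * dereferencing a NULL pointer; hence the `int` return type. */
static int insert_iommu_master(const char *name, struct iommu_master **out)
{
    struct iommu_master *m = calloc(1, sizeof(*m)); /* kzalloc() analogue */

    if (!m)
        return -ENOMEM;  /* was: unchecked use of the allocation */

    strncpy(m->name, name, sizeof(m->name) - 1);
    *out = m;
    return 0;
}
```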
On Mon, Apr 11, 2022 at 12:16:04PM -0300, Jason Gunthorpe wrote:
> Jason Gunthorpe (4):
> iommu: Introduce the domain op enforce_cache_coherency()
> vfio: Move the Intel no-snoop control off of IOMMU_CACHE
> iommu: Redefine IOMMU_CAP_CACHE_COHERENCY as the cap flag for
> IOMMU_CACHE
>