Invalidate the caching of the intermediate L1ST descriptor after it has
been updated.
Signed-off-by: Zhen Lei
---
drivers/iommu/arm-smmu-v3.c | 16 ++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index
The expression (STRTAB_L1_DESC_DWORDS << 3) appears more than once, so replace
it with STRTAB_L1_DESC_SIZE to eliminate the duplication. The latter is also
clearer when it is used to calculate a memory size. The same applies to
STRTAB_STE_DWORDS and CTXDESC_CD_DWORDS.
Signed-off-by: Zhen Lei
---
Some boards may not implement STE.config=0b000 correctly and report event
C_BAD_STE when a transaction comes in. To make the kdump kernel work well in
this situation, back up the strtab_base used by the first kernel, so that the
unexpected devices can reuse the old mapping.
No functional change, just prepare for the next patch.
Signed-off-by: Zhen Lei
---
drivers/iommu/arm-smmu-v3.c | 44 ++--
1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index
To reduce the risk of a further crash, device_shutdown() is not called by the
first kernel. That means some devices may still be working in the secondary
kernel. For example, a network card may still be using its ring buffer to
receive broadcast messages in the kdump kernel. No events are reported
until
This patch series includes two parts:
1. Patches 1-2 use dummy STE tables with the "ste abort" hardware feature to
abort unexpected device accesses. For more details, see the description in
patch 2.
2. If the "ste abort" feature is not supported, force the unexpected devices in
the secondary
Hi Kirti,
On 2/15/19 4:14 AM, Alex Williamson wrote:
On Wed, 13 Feb 2019 12:02:52 +0800
Lu Baolu wrote:
Hi,
The Mediated Device is a framework for fine-grained physical device
sharing across isolated domains. Currently the mdev framework
is designed to be independent of the platform
Hi Jean,
On 2/16/19 2:46 AM, Jean-Philippe Brucker wrote:
On 14/02/2019 20:14, Alex Williamson wrote:
This patch series extends both IOMMU and vfio components to support
mdev device passing through when it could be isolated and protected
by the IOMMU units. The first part of this series (PATCH
I tried the latest Linux on an HP A180C (32-bit PA-RISC). It works, but the
Zalon SCSI driver barfs warnings for the GSC add-on differential SCSI board.
The warnings seem to be DMA API related. The packaged 4.19 and a self-compiled
5.0.0-rc7 exhibit the same problem.
[0.00] Linux version
On 18/02/2019 14:37, Stanislaw Gruszka wrote:
[...]
Another issue is that dma_map_sg() & dma_map_page() may require some
constraints. I'm not sure about that and I want to clarify it with the
CCed mm maintainers. I think DMA drivers may expect sg->offset < PAGE_SIZE
for both dma_map_sg() and
On Sun, 17 Feb 2019 17:04:39 +0800, Yong Wu wrote:
> This patch adds descriptions for mt8183 IOMMU and SMI.
>
> mt8183 has only one M4U like mt8173 and is also MTK IOMMU gen2 which
> uses ARM Short-Descriptor translation table format.
>
> The mt8183 M4U-SMI HW diagram is as below:
> (cc: IOMMU & page_frag_alloc maintainers)
>
> On Tue, Jan 15, 2019 at 10:04:01AM +0100, Lorenzo Bianconi wrote:
> > > On Mon, Jan 14, 2019 at 1:18 AM Lorenzo Bianconi wrote:
> > > > > On Sun, Jan 13, 2019 at 11:00 AM Lorenzo Bianconi wrote:
Add a new VFIO_PCI_DMA_FAULT_IRQ_INDEX index. This allows setting/unsetting
an eventfd that will be triggered when DMA translation faults are detected
at the physical level when nested mode is used.
Signed-off-by: Eric Auger
---
drivers/vfio/pci/vfio_pci.c | 3 +++
This patch registers a fault handler which records faults in
a circular buffer and then signals an eventfd. This buffer is
exposed within the fault region.
Signed-off-by: Eric Auger
---
drivers/vfio/pci/vfio_pci.c | 49 +
drivers/vfio/pci/vfio_pci_private.h |
New ioctls were introduced to pass information about the guest stage 1
to the host through VFIO. Let's document the nested stage control.
Signed-off-by: Eric Auger
---
v2 -> v3:
- document the new fault API
v1 -> v2:
- use the new ioctl names
- add doc related to fault handling
---
The Producer Fault region contains the fault queue in its second page.
There is a benefit in letting userspace mmap this area, so let's expose
this mmappable area through a sparse mmap entry and implement the mmap
operation.
Signed-off-by: Eric Auger
---
drivers/vfio/pci/vfio_pci.c | 61
This patch adds two new regions aiming to handle nested mode
translation faults.
The first region (two host kernel pages) is read-only from the
user-space perspective. The first page contains a header
that provides information about the circular buffer located in the
second page. The circular
When a stage 1 related fault event is read from the event queue,
let's propagate it to potential external fault listeners, i.e. users
who registered a fault handler.
Signed-off-by: Eric Auger
---
drivers/iommu/arm-smmu-v3.c | 169 +---
1 file changed, 158
The bind/unbind_guest_msi() callbacks check that the domain
is NESTED and redirect to the dma-iommu implementation.
Signed-off-by: Eric Auger
---
drivers/iommu/arm-smmu-v3.c | 44 +
1 file changed, 44 insertions(+)
diff --git a/drivers/iommu/arm-smmu-v3.c
Up to now, when the type was UNMANAGED, we used to
allocate IOVA pages within a range provided by the user.
This does not work in nested mode.
If both the host and the guest are exposed with SMMUs, each
would allocate an IOVA. The guest allocates an IOVA (gIOVA)
to map onto the guest MSI doorbell
From: Jean-Philippe Brucker
When handling faults from the event or PRI queue, we need to find the
struct device associated with a SID. Add an rb_tree to keep track of SIDs.
Signed-off-by: Jean-Philippe Brucker
---
drivers/iommu/arm-smmu-v3.c | 136 ++--
1 file
From: "Liu, Yi L"
When the guest "owns" the stage 1 translation structures, the host
IOMMU driver has no knowledge of caching structure updates unless
the guest invalidation requests are trapped and passed down to the
host.
This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl which aims
at
To allow nested stage support, we need to store both
stage 1 and stage 2 configurations (and remove the former
union).
A nested setup is characterized by both s1_cfg and s2_cfg being set.
If s1_cfg is NULL and ste.abort is set, traffic can't pass; if abort
is not set, S1 is bypassed. Abort can be
Implement domain-selective and page-selective IOTLB invalidations.
Signed-off-by: Eric Auger
---
v3 -> v4:
- adapt to changes in the uapi
- add support for leaf parameter
- do not use arm_smmu_tlb_inv_range_nosync or arm_smmu_tlb_inv_context
anymore
v2 -> v3:
- replace __arm_smmu_tlb_sync
On attach_pasid_table() we program STE S1 related info set
by the guest into the actual physical STEs. At minimum
we need to program the context descriptor GPA and compute
whether the stage1 is translated/bypassed or aborted.
Signed-off-by: Eric Auger
---
v3 -> v4:
- adapt to changes in
From: "Liu, Yi L"
In any virtualization use case, when the first translation stage
is "owned" by the guest OS, the host IOMMU driver has no knowledge
of caching structure updates unless the guest invalidation activities
are trapped by the virtualizer and passed down to the host.
Since the
From: "Liu, Yi L"
This patch adds the VFIO_IOMMU_ATTACH/DETACH_PASID_TABLE ioctls
which aim to pass/withdraw the virtual iommu guest configuration
to/from the VFIO driver, down to the iommu subsystem.
Signed-off-by: Jacob Pan
Signed-off-by: Liu, Yi L
Signed-off-by: Eric Auger
---
v3 -> v4:
-
This patch adds the VFIO_IOMMU_BIND/UNBIND_MSI ioctls which aim
to pass/withdraw the guest MSI binding to/from the host.
Signed-off-by: Eric Auger
---
v3 -> v4:
- add UNBIND
- unwind on BIND error
v2 -> v3:
- adapt to new proto of bind_guest_msi
- directly use vfio_iommu_for_each_dev
v1 -> v2:
From: Jean-Philippe Brucker
When removing a mapping from a domain, we need to send an invalidation to
all devices that might have stored it in their Address Translation Cache
(ATC). In addition with SVM, we'll need to invalidate context descriptors
of all devices attached to a live domain.
On ARM, MSIs are translated by the SMMU. An IOVA is allocated
for each MSI doorbell. If both the host and the guest are exposed
with SMMUs, we end up with 2 different IOVAs allocated by each:
the guest allocates an IOVA (gIOVA) to map onto the guest MSI
doorbell (gDB). The host allocates another IOVA
From: Jacob Pan
In a virtualization use case, when a guest is assigned
a PCI host device protected by a virtual IOMMU on the guest,
the physical IOMMU must be programmed to be consistent with
the guest mappings. If the physical IOMMU supports two
translation stages, it makes sense to program guest
From: Jacob Pan
Traditionally, device-specific faults are detected and handled within
their own device drivers. When an IOMMU is enabled, faults such as those
on DMA transactions are detected by the IOMMU. There is no generic
reporting mechanism to report faults back to the in-kernel device
driver or
From: Jacob Pan
DMA faults can be detected by the IOMMU at the device level. Adding a
pointer to struct device allows the IOMMU subsystem to report relevant
faults back to the device driver for further handling.
For a directly assigned device (or user-space drivers), the guest OS
holds responsibility to handle and
From: Jacob Pan
Device faults detected by IOMMU can be reported outside the IOMMU
subsystem for further processing. This patch introduces
a generic device fault data structure.
The fault can be either an unrecoverable fault or a page request,
also referred to as a recoverable fault.
We only
This series allows a virtualizer to program the nested stage mode.
This is useful when both the host and the guest are exposed with
an SMMUv3 and a PCI device is assigned to the guest using VFIO.
In this mode, the physical IOMMU must be programmed to translate
the two stages: the one set up by
Hi Nicolin,
On 2019-02-15 21:06, Nicolin Chen wrote:
> The addresses within a single page are always contiguous, so it's
> not so necessary to always allocate one single page from CMA area.
> Since the CMA area has a limited predefined size of space, it may
> run out of space in heavy use cases,