> From: Baolu Lu
> Sent: Friday, May 6, 2022 1:57 PM
>
> On 2022/5/6 03:46, Steve Wahl wrote:
> > Increase DMAR_UNITS_SUPPORTED to support 64 sockets with 10 DMAR units
> > each, for a total of 640.
> >
> > If the available hardware exceeds DMAR_UNITS_SUPPORTED (previously set
> > to MAX_IO_A
> From: Lu Baolu
> Sent: Friday, May 6, 2022 1:27 PM
> +
> +/*
> + * Set the page snoop control for a pasid entry which has been set up.
> + */
> +void intel_pasid_setup_page_snoop_control(struct intel_iommu *iommu,
> + struct device *dev, u32 pasid)
> +{
> +
On 2022/5/6 03:46, Steve Wahl wrote:
Increase DMAR_UNITS_SUPPORTED to support 64 sockets with 10 DMAR units
each, for a total of 640.
If the available hardware exceeds DMAR_UNITS_SUPPORTED (previously set
to MAX_IO_APICS, or 128), it causes these messages: "DMAR: Failed to
allocate seq_id", "DMA
On 2022/5/5 21:38, Jean-Philippe Brucker wrote:
Hi Baolu,
On Thu, May 05, 2022 at 04:31:38PM +0800, Baolu Lu wrote:
On 2022/5/4 02:20, Jean-Philippe Brucker wrote:
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7cae631c1baa..33449523afbe 100644
--- a/drivers/iommu/iommu.c
+++
On Thu, 2022-05-05 at 21:27 +0800, Miles Chen wrote:
> When larbdev is NULL (in the case I hit, the node is incorrectly set
> iommus = <&iommu NUM>), it will cause device_link_add() fail and
> kernel crashes when we try to print dev_name(larbdev).
>
> Let's fail the probe if a larbdev is NULL to a
On 2022/5/3 15:49, Jean-Philippe Brucker wrote:
On Sat, Apr 30, 2022 at 03:33:17PM +0800, Baolu Lu wrote:
Jean, another quick question about the iommu_sva_bind_device()
/**
* iommu_sva_bind_device() - Bind a process address space to a device
* @dev: the device
* @mm: the mm to bind, calle
As enforce_cache_coherency has been introduced into the iommu_domain_ops,
the kernel component which owns the iommu domain is able to opt in to its
requirement for force snooping support. The iommu driver has no need to
hard code the page snoop control bit in the PASID table entries anymore.
Signed-o
The IOMMU force snooping capability is not required to be consistent
among all the IOMMUs anymore. Remove force snooping capability check
in the IOMMU hot-add path, and domain_update_iommu_snooping() becomes
dead code.
Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunthorpe
Reviewed-by: Kevin
As domain->force_snooping only impacts the devices attached with the
domain, there's no need to check against all IOMMU units. On the other
hand, force_snooping could be set on a domain no matter whether it has
been attached or not, and once set it is an immutable flag. If no
device attached, the o
In the attach_dev callback of the default domain ops, if the domain has
been set force_snooping, but the iommu hardware of the device does not
support SC(Snoop Control) capability, the callback should block it and
return a corresponding error code.
Signed-off-by: Lu Baolu
Reviewed-by: Jason Gunth
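The attach-time check described in this patch can be sketched in userspace C. The struct and error values below are illustrative mocks, not the Intel driver's real symbols:

```c
#include <assert.h>

#define MOCK_EINVAL (-22) /* stand-in for the kernel's -EINVAL */

/* Mock of the relevant state: the domain's force_snooping flag and the
 * device's IOMMU Snoop Control (SC) capability bit. */
struct mock_domain { int force_snooping; };
struct mock_iommu  { int ecap_sc; };

/* Sketch of the check: refuse to attach a device whose IOMMU lacks SC
 * when the domain already enforces snooping. */
static int mock_attach_dev(struct mock_domain *domain, struct mock_iommu *iommu)
{
    if (domain->force_snooping && !iommu->ecap_sc)
        return MOCK_EINVAL; /* block the attach */
    return 0;               /* attach proceeds */
}
```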
Hi folks,
Previously, the IOMMU capability of enforcing cache coherency was queried
through iommu_capable(IOMMU_CAP_CACHE_COHERENCY). This is a global
capability, hence the IOMMU driver reports support for this capability
only when all IOMMUs in the system has this support.
Commit 6043257b1de06 (
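The "support only when all IOMMUs agree" semantic described above amounts to an AND-reduction over the per-unit capability bits. A minimal sketch with hypothetical names:

```c
#include <assert.h>
#include <stddef.h>

/* Global capability model: report cache-coherency enforcement only if
 * every IOMMU unit in the system supports it. */
static int mock_capable_cache_coherency(const int *iommu_snoop_caps, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!iommu_snoop_caps[i])
            return 0; /* one unit without support clears the whole system */
    return 1;
}
```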
On Thu, May 05, 2022 at 04:07:28PM -0300, Jason Gunthorpe wrote:
> On Mon, May 02, 2022 at 05:30:05PM +1000, David Gibson wrote:
>
> > > It is a bit more CPU work since maps in the lower range would have to
> > > be copied over, but conceptually the model matches the HW nesting.
> >
> > Ah.. ok.
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 10:08 PM
>
> On Thu, May 05, 2022 at 07:40:37AM +, Tian, Kevin wrote:
>
> > In concept this is an iommu property instead of a domain property.
>
> Not really, domains shouldn't be changing behaviors once they are
> created. If a domain s
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 9:55 PM
>
> On Thu, May 05, 2022 at 11:03:18AM +, Tian, Kevin wrote:
>
> > iiuc the purpose of 'write-protection' here is to capture in-fly dirty pages
> > in the said race window until the unmap and IOTLB invalidation are completed.
>
> No
> From: Joao Martins
> Sent: Thursday, May 5, 2022 7:51 PM
>
> On 5/5/22 12:03, Tian, Kevin wrote:
> >> From: Joao Martins
> >> Sent: Thursday, May 5, 2022 6:07 PM
> >>
> >> On 5/5/22 08:42, Tian, Kevin wrote:
> From: Jason Gunthorpe
> Sent: Tuesday, May 3, 2022 2:53 AM
>
>
On 2022/4/29 9:39 AM, Zhangfei Gao wrote:
On 2022/4/29 2:00 AM, Fenghua Yu wrote:
The PASID is being freed too early. It needs to stay around until after
device drivers that might be using it have had a chance to clear it out
of the hardware.
As a reminder:
mmget() /mmput() refcount the mm
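The get/put lifetime pattern being recapped here can be modeled in a few lines of userspace C. The struct is a toy stand-in (the real mm_struct tracks mm_users and mm_count); the point is that teardown must wait for the last reference, which is why freeing the PASID before drivers drop their use is too early:

```c
#include <assert.h>

/* Toy refcounted object standing in for an mm. */
struct mock_mm { int users; int freed; };

static void mock_mmget(struct mock_mm *mm) { mm->users++; }

static void mock_mmput(struct mock_mm *mm)
{
    if (--mm->users == 0)
        mm->freed = 1; /* last reference gone: safe to tear down */
}
```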
The HPET-based hardlockup detector relies on the TSC to determine if an
observed NMI interrupt originated from the HPET timer. Hence, this detector
can no longer be used with an unstable TSC.
In such a case, permanently stop the HPET-based hardlockup detector and
start the perf-based detector.
Cc: An
The generic hardlockup detector is based on perf. It also provides a set
of weak functions that CPU architectures can override. Add a shim
hardlockup detector for x86 that overrides such functions and can
select between perf and HPET implementations of the detector.
For clarity, add the intermedia
It is not possible to determine the source of a non-maskable interrupt
(NMI) on x86. When dealing with an HPET channel, the only direct method to
determine whether it caused an NMI would be to read the Interrupt Status
register.
However, reading HPET registers is slow and, therefore, not to be don
When there are multiple implementations of the NMI watchdog, there may be
situations in which switching from one to another is needed. If the time-
stamp counter becomes unstable, the HPET-based NMI watchdog can no longer
be used. Similarly, the HPET-based NMI watchdog relies on tsc_khz and
needs t
The HPET hardlockup detector relies on tsc_khz to estimate the value
that the TSC will have when its HPET channel fires. A refined tsc_khz
helps to estimate better the expected TSC value.
Using the early value of tsc_khz may lead to a large error in the expected
TSC value. Restarting the NMI wa
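The arithmetic at issue is straightforward: the expected TSC value is the current TSC plus the watchdog period scaled by the TSC rate. A sketch with illustrative names (tsc_khz counts TSC ticks per millisecond):

```c
#include <assert.h>
#include <stdint.h>

/* Expected TSC reading when the HPET channel next fires: current TSC
 * plus the watchdog period (in milliseconds) times ticks-per-millisecond. */
static uint64_t mock_expected_tsc(uint64_t tsc_now, uint64_t period_ms,
                                  uint64_t tsc_khz)
{
    return tsc_now + period_ms * tsc_khz;
}
```

An early, coarse tsc_khz makes this product, and hence the expected value, proportionally wrong, which is why the detector benefits from restarting once the refined value is available.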
Keep the HPET-based hardlockup detector disabled unless explicitly enabled
via a command-line argument. If such a parameter is not given, the
initialization of the HPET-based hardlockup detector fails and the NMI
watchdog will fall back to use the perf-based implementation.
Implement the command-lin
Implement a hardlockup detector that uses an HPET channel as the source
of the non-maskable interrupt. Implement the basic functionality to
start, stop, and configure the timer.
Designate as the handling CPU one of the CPUs that the detector monitors.
Use it to service the NMI from the HPET channe
Prepare hardlockup_panic_setup() to handle a comma-separated list of
options. Thus, it can continue parsing its own command-line options while
ignoring parameters that are relevant only to specific implementations of
the hardlockup detector. Such implementations may use an early_param to
parse thei
Certain implementations of the hardlockup detector require support for
Inter-Processor Interrupt shorthands. On x86, support for these can only
be determined after all the possible CPUs have booted once (in
smp_init()). Other architectures may not need such a check.
lockup_detector_init() only perfo
Add an NMI_WATCHDOG as a new category of NMI handler. This new category
is to be used with the HPET-based hardlockup detector. This detector
does not have a direct way of checking if the HPET timer is the source of
the NMI. Instead, it indirectly estimates it using the time-stamp counter.
Therefore
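The indirect estimation described above can be sketched as a window check: the NMI is attributed to the HPET channel if the TSC read in the handler lies within an error window around the precomputed expectation. Names and window semantics here are illustrative, not the actual detector code:

```c
#include <assert.h>
#include <stdint.h>

/* Attribute an NMI to the HPET channel if the TSC value observed in the
 * handler is within +/- error_window ticks of the expected value.
 * Assumes tsc_expected >= error_window (no underflow handling here). */
static int mock_is_hpet_nmi(uint64_t tsc_now, uint64_t tsc_expected,
                            uint64_t error_window)
{
    uint64_t lo = tsc_expected - error_window;
    uint64_t hi = tsc_expected + error_window;
    return tsc_now >= lo && tsc_now <= hi;
}
```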
The procedure to detect hardlockups is independent of the underlying
mechanism that generates the non-maskable interrupt used to drive the
detector. Thus, it can be put in a separate, generic function. In this
manner, it can be invoked by various implementations of the NMI watchdog.
For this purpo
The current default implementation of the hardlockup detector assumes that
it is implemented using perf events. However, the hardlockup detector can
be driven by other sources of non-maskable interrupts (e.g., a properly
configured timer).
Group and wrap in #ifdef CONFIG_HARDLOCKUP_DETECTOR_PERF a
The HPET hardlockup detector needs a dedicated HPET channel. Hence, create
a new HPET_MODE_NMI_WATCHDOG mode category to indicate that it cannot be
used for other purposes. Using MSI interrupts greatly simplifies the
implementation of the detector. Specifically, it helps to avoid the
complexities o
struct irq_cfg::delivery_mode specifies the delivery mode of each IRQ
separately. Configuring the delivery mode of an IRTE would require adding
a third argument to prepare_irte(). Instead, simply take a pointer to the
irq_cfg for which an IRTE is being configured. This change does not cause
functio
Certain types of interrupts, such as NMI, do not have an associated vector.
They, however, target specific CPUs. Thus, when assigning the destination
CPU, it is beneficial to select the one with the lowest number of vectors.
Prepend the functions matrix_find_best_cpu_managed() and
matrix_find_best_
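The selection policy described (pick the target CPU with the fewest allocated vectors) reduces to a linear scan. The real code walks the irq matrix's per-CPU maps; the flat array below is an illustrative stand-in:

```c
#include <assert.h>
#include <stddef.h>

/* Pick the CPU with the lowest number of allocated vectors; ties go to
 * the lowest-numbered CPU. Returns -1 for an empty CPU set. */
static int mock_find_best_cpu(const unsigned int *vectors_per_cpu,
                              size_t ncpus)
{
    int best = -1;
    unsigned int best_count = 0;

    for (size_t cpu = 0; cpu < ncpus; cpu++) {
        if (best < 0 || vectors_per_cpu[cpu] < best_count) {
            best = (int)cpu;
            best_count = vectors_per_cpu[cpu];
        }
    }
    return best;
}
```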
The flag X86_ALLOC_AS_NMI indicates that the IRQs to be allocated in an
IRQ domain need to be configured as NMIs. Add an as_nmi argument to
hpet_assign_irq(). Even though the HPET clock events do not need NMI
IRQs, the HPET hardlockup detector does. A subsequent changeset will
implement the reserv
Programming an HPET channel as periodic requires setting the
HPET_TN_SETVAL bit in the channel configuration. Plus, the comparator
register must be written twice (once for the comparator value and once for
the periodic value). Since this programming might be needed in several
places (e.g., the HPET
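The write sequence described (set HPET_TN_SETVAL in the channel config, then write the comparator register twice) can be sketched against a mock register file. The bit position and struct layout below are illustrative, not the real HPET register map:

```c
#include <assert.h>
#include <stdint.h>

#define MOCK_TN_SETVAL (1u << 6) /* stand-in for HPET_TN_SETVAL */

struct mock_hpet_channel {
    uint32_t config;
    uint32_t comparator;      /* last value written */
    int comparator_writes;    /* counts writes, to show the double write */
};

/* Program a channel as periodic: SETVAL is set in the channel config,
 * then the comparator register is written twice (comparator value first,
 * then the periodic value). */
static void mock_hpet_set_periodic(struct mock_hpet_channel *ch,
                                   uint32_t compare, uint32_t period)
{
    ch->config |= MOCK_TN_SETVAL;
    ch->comparator = compare; ch->comparator_writes++;
    ch->comparator = period;  ch->comparator_writes++;
}
```

Factoring this sequence into one helper is exactly what lets multiple callers (clock events, the hardlockup detector) share it.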
There are no restrictions in hardware to set MSI messages with their
own delivery mode. Use the mode specified in the provided IRQ hardware
configuration data. Since most of the IRQs are configured to use the
delivery mode of the APIC driver in use (set in all of them to
APIC_DELIVERY_MODE_FIXED), t
These functions are used to check and set specific bits in a Device Table
Entry. For instance, they can be used to modify the setting of the NMIPass
field.
Currently, these functions are used only for ACPI-specified devices.
However, if an interrupt is to be allocated with NMI as the delivery mode, the
D
In order to allow hpet_writel() to be used by other components (e.g.,
the HPET-based hardlockup detector), expose it in the HPET header file.
Cc: Andi Kleen
Cc: Stephane Eranian
Cc: "Ravi V. Shankar"
Cc: iommu@lists.linux-foundation.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: x...@kernel.org
Revi
If NMIPass is enabled in a device's DTE, the IOMMU lets NMI interrupt
messages pass through unmapped. Therefore, the contents of the MSI
message, not an IRTE, determine how and where the NMI is delivered.
Since the IOMMU driver owns the MSI message of the NMI irq, compose
it using the non-interrup
As per the AMD I/O Virtualization Technology (IOMMU) Specification, the
AMD IOMMU only remaps fixed and arbitrated MSIs. NMIs are controlled
by the NMIPass bit of a Device Table Entry. When set, the IOMMU passes
through NMI interrupt messages unmapped. Otherwise, they are aborted.
Furthermore, Sec
Hi,
This is the sixth version of this patchset. It again took me a while to
post this version as I have been working on other projects and implemented
major reworks in the series.
This work has gone through several versions. I have incorporated feedback
from Thomas Gleixner and others. Many of th
When the destination mode of an interrupt is physical APICID, the interrupt
is delivered only to the single CPU of which the physical APICID is
specified in the destination ID field. Therefore, the redirection hint is
meaningless.
Furthermore, on certain processors, the IOMMU does not deliver the
The Intel IOMMU interrupt remapping driver already correctly programs the
delivery mode of individual irqs as per their irq_data. Improve handling
of NMIs. Allow only one irq per NMI. Also, it is not necessary to clean up
irq vectors after updating affinity. NMIs do not have associated vectors.
Cc:
Currently, the delivery mode of all interrupts is set to the mode of the
APIC driver in use. There are no restrictions in hardware to configure the
delivery mode of each interrupt individually. Also, certain IRQs need to be
configured with a specific delivery mode (e.g., NMI).
Add a new member, de
There are no hardware requirements to use the same delivery mode for all
interrupts. Use the mode specified in the provided IRQ hardware
configuration data. Since all IRQs are configured to use the delivery mode
of the APIC driver, the only functional changes are where IRQs are
configured to use a s
There are cases in which it is necessary to set the delivery mode of an
interrupt as NMI. Add a new flag that callers can specify when allocating
an IRQ.
Cc: Andi Kleen
Cc: "Ravi V. Shankar"
Cc: Stephane Eranian
Cc: iommu@lists.linux-foundation.org
Cc: linuxppc-...@lists.ozlabs.org
Cc: x...@ker
Vectors are meaningless when allocating IRQs with NMI as the delivery mode.
In such a case, skip the reservation of IRQ vectors. Do it in the lowest-
level functions where the actual IRQ reservation takes place.
Since NMIs target specific CPUs, keep the functionality to find the best
CPU.
Cc: Andi
The flag X86_IRQ_ALLOC_AS_NMI indicates to the interrupt controller that
it should configure the delivery mode of an IRQ as NMI. Implement such
request. This causes irq_domain children in the hierarchy to configure
their irq_chips accordingly. When no specific delivery mode is requested,
continue u
> From: Jason Gunthorpe
> Sent: Friday, May 6, 2022 12:16 AM
>
> Call iommu_group_do_attach_device() only from
> __iommu_group_attach_domain() which should be used to attach any
> domain to
> the group.
>
> The only unique thing __iommu_attach_group() does is to check if the group
> is already a
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 11:33 PM
> > > /*
> > > - * If the group has been claimed already, do not re-attach the default
> > > - * domain.
> > > + * New drivers should support default domains and so the
> > > detach_dev() op
> > > + * will never be called. Otherwi
Increase DMAR_UNITS_SUPPORTED to support 64 sockets with 10 DMAR units
each, for a total of 640.
If the available hardware exceeds DMAR_UNITS_SUPPORTED (previously set
to MAX_IO_APICS, or 128), it causes these messages: "DMAR: Failed to
allocate seq_id", "DMAR: Parse DMAR table failure.", and "x2a
On 2022-05-05 20:27, Jason Gunthorpe wrote:
On Thu, May 05, 2022 at 07:56:59PM +0100, Robin Murphy wrote:
Ack to that, there are certainly further improvements to consider once we've
got known-working code into a released kernel, but let's not get ahead of
ourselves just now.
Yes please
(
On Thu, May 05, 2022 at 07:56:59PM +0100, Robin Murphy wrote:
> Ack to that, there are certainly further improvements to consider once we've
> got known-working code into a released kernel, but let's not get ahead of
> ourselves just now.
Yes please
> (I'm pretty sure we could get away with a s
On Mon, May 02, 2022 at 05:30:05PM +1000, David Gibson wrote:
> > It is a bit more CPU work since maps in the lower range would have to
> > be copied over, but conceptually the model matches the HW nesting.
>
> Ah.. ok. IIUC what you're saying is that the kernel-side IOASes have
> fixed windows,
On 2022-05-05 16:33, Jason Gunthorpe wrote:
On Thu, May 05, 2022 at 10:56:28AM +, Tian, Kevin wrote:
From: Jason Gunthorpe
Sent: Thursday, May 5, 2022 3:09 AM
Once the group enters 'owned' mode it can never be assigned back to the
default_domain or to a NULL domain. It must always be activ
Hi Leo,
Thanks for your review, some replies below.
On 2022/4/30 15:35, Leo Yan wrote:
On Thu, Apr 07, 2022 at 08:58:39PM +0800, Yicong Yang via iommu wrote:
From: Qi Liu
'perf record' and 'perf report --dump-raw-trace' supported in this
patch.
Example usage:
Output will contain raw PTT
Call iommu_group_do_attach_device() only from
__iommu_group_attach_domain() which should be used to attach any domain to
the group.
The only unique thing __iommu_attach_group() does is to check if the group
is already attached to some caller-specified group. Put this test into
__iommu_group_is_cor
On Thu, May 05, 2022 at 10:56:28AM +, Tian, Kevin wrote:
> > From: Jason Gunthorpe
> > Sent: Thursday, May 5, 2022 3:09 AM
> >
> > Once the group enters 'owned' mode it can never be assigned back to the
> > default_domain or to a NULL domain. It must always be actively assigned to
>
> worth
On Thu, May 05, 2022 at 03:53:08PM +0100, Will Deacon wrote:
> On Thu, May 05, 2022 at 04:15:29PM +0200, Thierry Reding wrote:
> > On Fri, Apr 29, 2022 at 10:22:40AM +0200, Thierry Reding wrote:
> > > From: Thierry Reding
> > >
> > > Hi Joerg,
> > >
> > > this is essentially a resend of v2 with
Il 05/05/22 15:27, Miles Chen ha scritto:
When larbdev is NULL (in the case I hit, the node is incorrectly set
iommus = <&iommu NUM>), it will cause device_link_add() fail and
kernel crashes when we try to print dev_name(larbdev).
Let's fail the probe if a larbdev is NULL to avoid invalid inputs
On Thu, May 05, 2022 at 04:15:29PM +0200, Thierry Reding wrote:
> On Fri, Apr 29, 2022 at 10:22:40AM +0200, Thierry Reding wrote:
> > From: Thierry Reding
> >
> > Hi Joerg,
> >
> > > this is essentially a resend of v2 with Acked-by:s from Robin and Will
> > added. These have been on the list for
On Fri, Apr 29, 2022 at 10:22:40AM +0200, Thierry Reding wrote:
> From: Thierry Reding
>
> Hi Joerg,
>
> > this is essentially a resend of v2 with Acked-by:s from Robin and Will
> added. These have been on the list for quite a while now, but apparently
> there was a misunderstanding, so neither
On Thu, May 05, 2022 at 07:40:37AM +, Tian, Kevin wrote:
> In concept this is an iommu property instead of a domain property.
Not really, domains shouldn't be changing behaviors once they are
created. If a domain supports dirty tracking and I attach a new device
then it still must support di
On Thu, May 05, 2022 at 11:03:18AM +, Tian, Kevin wrote:
> iiuc the purpose of 'write-protection' here is to capture in-fly dirty pages
> in the said race window until the unmap and IOTLB invalidation are completed.
No, the purpose is to perform "unmap" without destroying the dirty bit
in the pr
Hi Baolu,
On Thu, May 05, 2022 at 04:31:38PM +0800, Baolu Lu wrote:
> On 2022/5/4 02:20, Jean-Philippe Brucker wrote:
> > > diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> > > index 7cae631c1baa..33449523afbe 100644
> > > --- a/drivers/iommu/iommu.c
> > > +++ b/drivers/iommu/iommu.c
>
When larbdev is NULL (in the case I hit, the node is incorrectly set
iommus = <&iommu NUM>), it will cause device_link_add() fail and
kernel crashes when we try to print dev_name(larbdev).
Let's fail the probe if a larbdev is NULL to avoid invalid inputs from
dts.
It should work for normal correc
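The defensive check proposed in this patch is a plain NULL guard before the device is used. A userspace sketch (error value and struct are illustrative stand-ins for the MediaTek driver's real types):

```c
#include <assert.h>
#include <stddef.h>

#define MOCK_EINVAL 22 /* stand-in for the kernel's EINVAL */

struct mock_larb { const char *name; };

/* Fail the probe early when the looked-up larb device is NULL, instead
 * of letting a later dev_name(larbdev) dereference crash the kernel. */
static int mock_probe_check_larb(struct mock_larb *larbdev)
{
    if (!larbdev)
        return -MOCK_EINVAL; /* invalid DT input: reject the probe */
    return 0;
}
```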
On Mon, May 2, 2022 at 3:35 PM Geert Uytterhoeven
wrote:
>
> Despite the name, R-Car V3U is the first member of the R-Car Gen4
> family. Hence move its compatible value to the R-Car Gen4 section.
>
> Signed-off-by: Geert Uytterhoeven
> ---
> Documentation/devicetree/bindings/gpio/renesas,rcar-g
On 2022/5/5 16:46, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 5, 2022 9:07 AM
As enforce_cache_coherency has been introduced into the
iommu_domain_ops,
the kernel component which owns the iommu domain is able to opt-in its
requirement for force snooping support. The iommu driver has
Hi Leo,
Thanks for the comments. Some questions and replies below.
On 2022/4/30 0:00, Leo Yan wrote:
> On Thu, Apr 07, 2022 at 08:58:36PM +0800, Yicong Yang via iommu wrote:
>> HiSilicon PCIe tune and trace device (PTT) is a PCIe Root Complex integrated
>> Endpoint(RCiEP) device, providing the cap
On 2022/5/5 16:43, Tian, Kevin wrote:
From: Lu Baolu
Sent: Thursday, May 5, 2022 9:07 AM
As domain->force_snooping only impacts the devices attached with the
domain, there's no need to check against all IOMMU units. At the same
time, for a brand new domain (hasn't been attached to any device),
On 5/5/22 12:03, Tian, Kevin wrote:
>> From: Joao Martins
>> Sent: Thursday, May 5, 2022 6:07 PM
>>
>> On 5/5/22 08:42, Tian, Kevin wrote:
From: Jason Gunthorpe
Sent: Tuesday, May 3, 2022 2:53 AM
On Mon, May 02, 2022 at 12:11:07PM -0600, Alex Williamson wrote:
> On Fri, 29
> From: Joao Martins
> Sent: Thursday, May 5, 2022 6:07 PM
>
> On 5/5/22 08:42, Tian, Kevin wrote:
> >> From: Jason Gunthorpe
> >> Sent: Tuesday, May 3, 2022 2:53 AM
> >>
> >> On Mon, May 02, 2022 at 12:11:07PM -0600, Alex Williamson wrote:
> >>> On Fri, 29 Apr 2022 05:45:20 +
> >>> "Tian, K
> From: Jason Gunthorpe
> Sent: Thursday, May 5, 2022 3:09 AM
>
> Once the group enters 'owned' mode it can never be assigned back to the
> default_domain or to a NULL domain. It must always be actively assigned to
worth pointing out that a NULL domain is not always translated to DMA
blocked on
On 5/5/22 08:42, Tian, Kevin wrote:
>> From: Jason Gunthorpe
>> Sent: Tuesday, May 3, 2022 2:53 AM
>>
>> On Mon, May 02, 2022 at 12:11:07PM -0600, Alex Williamson wrote:
>>> On Fri, 29 Apr 2022 05:45:20 +
>>> "Tian, Kevin" wrote:
> From: Joao Martins
> 3) Unmapping an IOVA range whi
On 5/5/22 08:25, Shameerali Kolothum Thodi wrote:
>> -Original Message-
>> From: Joao Martins [mailto:joao.m.mart...@oracle.com]
>> Sent: 29 April 2022 12:05
>> To: Tian, Kevin
>> Cc: Joerg Roedel ; Suravee Suthikulpanit
>> ; Will Deacon ; Robin
>> Murphy ; Jean-Philippe Brucker
>> ; zhuke
Hi Joerg,
On 5/2/2022 4:24 PM, Joerg Roedel wrote:
> Hi Vasant,
>
> On Fri, Apr 29, 2022 at 08:15:49PM +0530, Vasant Hegde wrote:
>> We still need to parse IVHD to find max devices supported by each PCI segment
>> (same as the way its doing it today). Hence we need all these variables.
>
> From
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> As enforce_cache_coherency has been introduced into the
> iommu_domain_ops,
> the kernel component which owns the iommu domain is able to opt-in its
> requirement for force snooping support. The iommu driver has no need to
> hard code the
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> The IOMMU force snooping capability is not required to be consistent
> among all the IOMMUs anymore. Remove force snooping capability check
> in the IOMMU hot-add path and domain_update_iommu_snooping()
> becomes
> a dead code now.
>
> S
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> As domain->force_snooping only impacts the devices attached with the
> domain, there's no need to check against all IOMMU units. At the same
> time, for a brand new domain (hasn't been attached to any device), the
> force_snooping field c
> From: Lu Baolu
> Sent: Thursday, May 5, 2022 9:07 AM
>
> In the attach_dev callback of the default domain ops, if the domain has
> been set force_snooping, but the iommu hardware of the device does not
> support SC(Snoop Control) capability, the callback should block it and
> return a correspon
On 2022/5/4 02:20, Jean-Philippe Brucker wrote:
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 7cae631c1baa..33449523afbe 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -3174,3 +3174,24 @@ void iommu_detach_device_pasid(struct iommu_domain
*domain,
iomm
> From: Jason Gunthorpe
> Sent: Tuesday, May 3, 2022 2:53 AM
>
> On Mon, May 02, 2022 at 12:11:07PM -0600, Alex Williamson wrote:
> > On Fri, 29 Apr 2022 05:45:20 +
> > "Tian, Kevin" wrote:
> > > > From: Joao Martins
> > > > 3) Unmapping an IOVA range while returning its dirty bit prior to
> From: Jason Gunthorpe
> Sent: Friday, April 29, 2022 8:39 PM
>
> > >> * There's no capabilities API in IOMMUFD, and in this RFC each vendor
> tracks
> > >
> > > there was discussion adding device capability uAPI somewhere.
> > >
> > ack let me know if there was snippets to the conversation as I
> -Original Message-
> From: Joao Martins [mailto:joao.m.mart...@oracle.com]
> Sent: 29 April 2022 12:05
> To: Tian, Kevin
> Cc: Joerg Roedel ; Suravee Suthikulpanit
> ; Will Deacon ; Robin
> Murphy ; Jean-Philippe Brucker
> ; zhukeqian ;
> Shameerali Kolothum Thodi ;
> David Woodhouse
On 2022/5/4 02:12, Jean-Philippe Brucker wrote:
On Mon, May 02, 2022 at 09:48:37AM +0800, Lu Baolu wrote:
Add support for SVA domain allocation and provide an SVA-specific
iommu_domain_ops.
Signed-off-by: Lu Baolu
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h | 14 +++
.../iommu/arm
83 matches