Am Freitag, 23. Juli 2021, 19:50:08 CEST schrieb Logan Gunthorpe:
> Now that all the .map_sg operations have been converted to returning
> proper error codes, drop the code to handle a zero return value,
> add a warning if a zero is returned and update the comment for the
> map_sg operation.
I see
Hi Robin,
On 2021/7/22 2:20, Robin Murphy wrote:
Eliminate the iommu_get_dma_strict() indirection and pipe the
information through the domain type from the beginning. Besides
the flow simplification this also has several nice side-effects:
- Automatically implies strict mode for untrusted devices
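For context, a minimal sketch of what the consumer side looks like once
strictness is carried by the domain type (illustrative only;
IOMMU_DOMAIN_DMA_FQ is the flush-queue domain type this series introduces,
and dev_is_untrusted() stands in for the driver's untrusted-device check):

/* Strict vs. non-strict is now just a type check: */
static bool domain_uses_flush_queue(struct iommu_domain *domain)
{
	return domain->type == IOMMU_DOMAIN_DMA_FQ;
}

/* Untrusted devices simply never get an FQ domain (sketch): */
static int preferred_domain_type(struct device *dev)
{
	return dev_is_untrusted(dev) ? IOMMU_DOMAIN_DMA : IOMMU_DOMAIN_DMA_FQ;
}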
Hi Robin,
On 2021/7/22 2:20, Robin Murphy wrote:
In preparation for the strict vs. non-strict decision for DMA domains to
be expressed in the domain type, make sure we expose our flush queue
awareness by accepting the new domain type, and test the specific
feature flag where we want to identify
On Tue, Jul 20, 2021 at 02:38:22PM +0100, Will Deacon wrote:
> Hi again, folks,
>
> This is version two of the patch series I posted yesterday:
>
> https://lore.kernel.org/r/20210719123054.6844-1-w...@kernel.org
>
> The only changes since v1 are:
>
> * Squash patches 2 and 3, amending the commit message
On Sat, Jul 24, 2021 at 01:17:46AM +0200, Halil Pasic wrote:
> Since commit 903cd0f315fe ("swiotlb: Use is_swiotlb_force_bounce for
> swiotlb data bouncing") if code sets swiotlb_force it needs to do so
> before the swiotlb is initialised. Otherwise
> io_tlb_default_mem->force_bounce will not get set to true, and devices
> that use (the default) swiotlb will not bounce.
Since commit 903cd0f315fe ("swiotlb: Use is_swiotlb_force_bounce for
swiotlb data bouncing") if code sets swiotlb_force it needs to do so
before the swiotlb is initialised. Otherwise
io_tlb_default_mem->force_bounce will not get set to true, and devices
that use (the default) swiotlb will not bounce.
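The ordering constraint is easiest to see in a sketch (my_arch_mem_init()
is a hypothetical arch hook; is_prot_virt_guest() is the s390
protected-virtualization predicate):

void __init my_arch_mem_init(void)
{
	/* Must happen before swiotlb_init(), which latches this flag
	 * into io_tlb_default_mem->force_bounce at creation time. */
	if (is_prot_virt_guest())
		swiotlb_force = SWIOTLB_FORCE;

	swiotlb_init(1);

	/* Setting swiotlb_force here instead would be too late. */
}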
On Fri, 23 Jul 2021 19:53:58 +0200
Christian Borntraeger wrote:
> On 23.07.21 16:01, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jul 23, 2021 at 10:50:57AM +0200, Christian Borntraeger wrote:
> >>
> >>
> >> On 23.07.21 10:47, Halil Pasic wrote:
> >>> On Fri, 23 Jul 2021 08:14:19 +0200
> >>> Christian Borntraeger wrote:
On Sun, Jul 11, 2021 at 11:25 PM Christoph Hellwig wrote:
>
> Only build the code to support the global coherent pool if support for
> it is enabled.
>
> Signed-off-by: Christoph Hellwig
> Tested-by: Dillon Min
> ---
> include/linux/dma-map-ops.h | 18 +++---
> kernel/dma/coherent.c
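A sketch of the shape this kind of change typically takes in
include/linux/dma-map-ops.h: real declarations under CONFIG_DMA_GLOBAL_POOL,
inline no-op stubs otherwise (illustrative; the exact mainline hunk may
differ):

#ifdef CONFIG_DMA_GLOBAL_POOL
void *dma_alloc_from_global_coherent(struct device *dev, ssize_t size,
				     dma_addr_t *dma_handle);
int dma_release_from_global_coherent(int order, void *vaddr);
#else
static inline void *dma_alloc_from_global_coherent(struct device *dev,
		ssize_t size, dma_addr_t *dma_handle)
{
	return NULL;	/* no global pool: force callers to fall back */
}
static inline int dma_release_from_global_coherent(int order, void *vaddr)
{
	return 0;	/* nothing to release */
}
#endif /* CONFIG_DMA_GLOBAL_POOL */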
Currently, linux,dma-default is used to reserve a global non-coherent pool
from which to allocate memory for DMA operations. This can be useful for
RISC-V as well, since the ISA specification doesn't yet specify a method to
modify PMA attributes or page table entries to define a non-cacheable area.
A non-cacheable
The RISC-V privilege specification doesn't define an IOMMU or any method
to modify PMA attributes or PTE entries to allow non-coherent mappings yet.
In the beginning, this approach was adopted assuming that most RISC-V
platforms would support fully cache-coherent IO. Here is an excerpt from
the priv spec
To facilitate the streaming DMA APIs, this patch introduces a set of generic
cache operations related to DMA sync. Any platform can use the generic ops
to provide platform specific cache management operations. Once the
standard RISC-V CMO extension is available, it can be built on top of it.
Signed-off-b
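As a rough illustration of the idea (struct dma_cache_ops and its fields
are hypothetical names, not the patch's actual interface;
arch_sync_dma_for_device() is the real arch hook):

struct dma_cache_ops {
	void (*clean)(phys_addr_t paddr, size_t size);	/* writeback */
	void (*inval)(phys_addr_t paddr, size_t size);	/* discard */
	void (*flush)(phys_addr_t paddr, size_t size);	/* wb + inv */
};

static const struct dma_cache_ops *dma_cache_ops;	/* set by platform code */

void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:		/* CPU wrote, device will read */
		dma_cache_ops->clean(paddr, size);
		break;
	case DMA_FROM_DEVICE:		/* device writes, CPU will read */
		dma_cache_ops->inval(paddr, size);
		break;
	default:			/* DMA_BIDIRECTIONAL */
		dma_cache_ops->flush(paddr, size);
		break;
	}
}

Once the standard CMO extension is available, only the ops table needs
replacing.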
Currently, of_dma_get_range() is kept in of_private.h, as it is used only
by the OF support code.
Move it to of_address.h so that it can be used by code outside the OF
support code.
Signed-off-by: Atish Patra
---
drivers/of/of_private.h| 10 --
include/linux/of_address.h | 12
In the future, there will be more RISC-V platforms with non-coherent DMA.
Instead of selecting all the required config options in every SoC, create
a new config that selects all the required configs related to non-coherent
DMA.
Signed-off-by: Atish Patra
---
arch/riscv/Kconfig | 8
1 file chan
The DMA_GLOBAL_POOL config may be enabled even on platforms where a global
pool is not supported, because a generic defconfig is expected to boot on
different platforms. Specifically, some RISC-V platforms may use the global
pool for non-coherent devices while other platforms are completely coherent.
However
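At runtime the same concern shows up as a NULL-pool guard; roughly
(dma_coherent_default_memory is the pool pointer in kernel/dma/coherent.c,
body abridged):

void *dma_alloc_from_global_coherent(struct device *dev, ssize_t size,
				     dma_addr_t *dma_handle)
{
	/* No linux,dma-default pool was reserved on this (fully
	 * coherent) platform: report failure so callers fall back. */
	if (!dma_coherent_default_memory)
		return NULL;

	return __dma_alloc_from_coherent(dev, dma_coherent_default_memory,
					 size, dma_handle);
}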
From: Nate Watterson
Copy the arm-smmu driver's infrastructure for handling implementation
specific details outside the flow of architectural code.
Signed-off-by: Nate Watterson
Signed-off-by: Nicolin Chen
---
drivers/iommu/arm/arm-smmu-v3/Makefile | 2 +-
drivers/iommu/arm/arm-smmu
From: Nate Watterson
NVIDIA's Grace SoC includes custom CMDQ-Virtualization (CMDQV)
hardware, which adds multiple VCMDQ interfaces to supplement
the architected SMMU_CMDQ in an effort to reduce contention.
To make use of these supplemental CMDQs in arm-smmu-v3 driver,
we borrow the "implementati
From: Nicolin Chen
The SMMUv3 devices implemented in the Grace SoC support NVIDIA's custom
CMDQ-Virtualization (CMDQV) hardware. Like the new ECMDQ feature first
introduced in the ARM SMMUv3.3 specification, CMDQV adds multiple VCMDQ
interfaces to supplement the single architected SMMU_CMDQ in an effort
to reduce contention.
On 23.07.21 16:01, Konrad Rzeszutek Wilk wrote:
On Fri, Jul 23, 2021 at 10:50:57AM +0200, Christian Borntraeger wrote:
On 23.07.21 10:47, Halil Pasic wrote:
On Fri, 23 Jul 2021 08:14:19 +0200
Christian Borntraeger wrote:
Resending with the correct email of Heiko
On 23.07.21 03:12,
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
pci_map_single_1() can fail for different reasons, but since the only
supported type of error return is DMA_MAPPING_ERROR, we coalesce those
errors into -EIO.
-ENOMEM is returned when no page tables can be allocated.
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of the
->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.gc13...@lst.de/
Suggested-by: Christoph Hellwig
Signed-off-by: Logan Gunthorpe
Cc: Russell King
Cc: Thomas Bogendoerfer
---
arch
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
In the case of a DMA_MAPPING_ERROR, -EIO is returned. Otherwise,
-ENOMEM or -EINVAL is returned depending on the error from
__map_sg_chunk().
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorp
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
In the case of a dma_mapping_error(), return -EIO, as the actual cause
is opaque here.
sba_coalesce_chunks() may presently only fail if sba_alloc_range()
fails, which in turn only fails if the iommu is out of mapping resources.
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
vdma_alloc() may fail for different reasons, but since it only supports
indicating an error via a return of DMA_MAPPING_ERROR, we coalesce the
different reasons into -EIO, as is documented for dma_map_sgtable().
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of
the ->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.gc13...@lst.de/
Suggested-by: Christoph Hellwig
Signed-off-by: Logan Gunthorpe
Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Propagate the error up if vio_dma_iommu_map_sg() fails.
ppc_iommu_map_sg() may fail either because of iommu_range_alloc() or
because of tbl->it_ops->set(). The former only supports returning an
error with DMA_MAPPING_ERROR.
Convert to ssize_t return code so the return code from __iommu_map()
can be returned all the way down through dma_iommu_map_sg().
Signed-off-by: Logan Gunthorpe
Cc: Joerg Roedel
Cc: Will Deacon
---
drivers/iommu/iommu.c | 15 +++
include/linux/iommu.h | 22 +++---
2
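Roughly, the reworked signature looks like this (simplified sketch; the
per-chunk mapping is elided):

static ssize_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
			      struct scatterlist *sg, unsigned int nents,
			      int prot, gfp_t gfp)
{
	size_t mapped = 0;
	int ret = 0;

	/* ... call __iommu_map() per contiguous chunk, growing 'mapped',
	 * and set 'ret' on the first failure ... */

	if (ret) {
		iommu_unmap(domain, iova, mapped);	/* undo partial work */
		return ret;				/* negative errno */
	}
	return mapped;					/* bytes mapped */
}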
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
So propagate the error from __s390_dma_map_sg() up. __s390_dma_map_sg()
returns either -ENOMEM on allocation failure or -EINVAL which is
the same as what's expected by dma_map_sgtable().
Signed-off-by:
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of
the ->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.gc13...@lst.de/
Suggested-by: Christoph Hellwig
Signed-off-by: Logan Gunthorpe
Cc: Niklas Schnelle
Cc: Gerald Schaefer
Cc: Heiko
Return the appropriate error code, -EINVAL or -ENOMEM, from
iommu_dma_map_sg(). If lower-level code returns -ENOMEM, then we
return it; other errors are coalesced into -EINVAL.
iommu_dma_map_sg_swiotlb() returns -EIO, as it's an unknown error
from a call that returns DMA_MAPPING_ERROR.
Signed-off-by: Logan
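The coalescing pattern reads roughly like this inside iommu_dma_map_sg()
(a sketch of the policy described above, not the verbatim hunk):

	ssize_t ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);

	if (ret < 0) {
		/* Keep -ENOMEM distinguishable; fold the rest into -EINVAL. */
		if (ret != -ENOMEM)
			ret = -EINVAL;
		goto out_free_iova;
	}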
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of
the ->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.gc13...@lst.de/
Suggested-by: Christoph Hellwig
Signed-off-by: Logan Gunthorpe
Cc: "David S. Miller"
Cc: Niklas Schnelle
Cc: Mich
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Returning an errno from __sbus_iommu_map_sg() results in
sbus_iommu_map_sg_gflush() and sbus_iommu_map_sg_pflush() returning an
errno, as those functions are wrappers around __sbus_iommu_map_sg().
Signe
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
Return -EINVAL if the ioc cannot be obtained.
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorpe
Cc: "James E.J. Bottomley"
Cc: Helge Deller
---
drivers/parisc/ccio-dma.c | 2 +-
drivers
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
So make __dma_map_cont() return a valid errno (which is then propagated
to gart_map_sg() via dma_map_cont()) and return it in case of failure.
Also, return -EINVAL in case of invalid nents.
Signed-off-
Setting the ->dma_address to DMA_MAPPING_ERROR is not part of
the ->map_sg calling convention, so remove it.
Link: https://lore.kernel.org/linux-mips/20210716063241.gc13...@lst.de/
Suggested-by: Christoph Hellwig
Signed-off-by: Logan Gunthorpe
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav P
Allow dma_map_sgtable() to pass errors from the map_sg() ops. This
will be required for returning appropriate error codes when mapping
P2PDMA memory.
Introduce __dma_map_sg_attrs() which will return the raw error code
from the map_sg operation (whether it be negative or zero). Then add a
dma_map_s
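In other words, a sketch consistent with that split (dma_map_sgtable()
keeps a 0-or-negative-errno contract while __dma_map_sg_attrs() passes the
op's raw result through):

int dma_map_sgtable(struct device *dev, struct sg_table *sgt,
		    enum dma_data_direction dir, unsigned long attrs)
{
	int nents = __dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
				       dir, attrs);

	if (nents < 0)
		return nents;	/* negative errno straight from the op */
	sgt->nents = nents;	/* success: record the mapped entry count */
	return 0;
}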
Hi,
This v2 of the series is spun out and expanded from my work to add
P2PDMA support to DMA map operations[1]. v1 is at [2]. The main changes
since v1 are to more carefully define the meaning of the error codes for
dma_map_sgtable().
The P2PDMA work requires distinguishing different error conditions
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
xen_swiotlb_map_sg() may only fail if xen_swiotlb_map_page() fails, but
xen_swiotlb_map_page() only supports returning errors as
DMA_MAPPING_ERROR. So coalesce all errors into -EIO per the documentation
for dma_map_sgtable().
From: Martin Oliveira
The .map_sg() op now expects an error code instead of zero on failure.
The only errno to return is -EINVAL in the case when DMA is not
supported.
Signed-off-by: Martin Oliveira
Signed-off-by: Logan Gunthorpe
---
kernel/dma/dummy.c | 2 +-
1 file changed, 1 insertion(+),
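The whole conversion is essentially one line; a sketch:

static int dma_dummy_map_sg(struct device *dev, struct scatterlist *sgl,
			    int nelems, enum dma_data_direction dir,
			    unsigned long attrs)
{
	return -EINVAL;		/* was 0: DMA is simply not supported */
}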
Now that all the .map_sg operations have been converted to returning
proper error codes, drop the code to handle a zero return value,
add a warning if a zero is returned and update the comment for the
map_sg operation.
Signed-off-by: Logan Gunthorpe
---
kernel/dma/mapping.c | 7 ---
1 file c
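A sketch of the idea in kernel/dma/mapping.c (illustrative, not the exact
hunk):

static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
			      int nents, enum dma_data_direction dir,
			      unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	int ents;

	/* ... dispatch to dma_direct_map_sg() or ops->map_sg() ... */
	ents = ops->map_sg(dev, sg, nents, dir, attrs);

	/* Zero is no longer a valid failure indication: ops must
	 * return a negative errno, so a zero return is a driver bug. */
	if (WARN_ON_ONCE(ents == 0))
		return -EIO;

	return ents;
}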
Now that the map_sg() op expects error codes instead of returning zero on
error, convert dma_direct_map_sg() to return an error code. Per the
documentation for dma_map_sgtable(), -EIO is returned due to a
DMA_MAPPING_ERROR with unknown cause.
Signed-off-by: Logan Gunthorpe
---
kernel/dma/direct.c
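The failure tail of dma_direct_map_sg() then becomes, roughly:

out_unmap:
	/* Unwind whatever was mapped so far, then report the generic
	 * cause: DMA_MAPPING_ERROR carries no more detail than -EIO. */
	dma_direct_unmap_sg(dev, sgl, i, dir, attrs | DMA_ATTR_SKIP_CPU_SYNC);
	return -EIO;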
From: Nadav Amit
When running on an AMD vIOMMU, it is better to avoid TLB flushes
of unmodified PTEs. vIOMMUs require the hypervisor to synchronize the
virtualized IOMMU's PTEs with the physical ones. This process induces
overhead.
The AMD IOMMU allows us to flush any range that is aligned to a power of two.
From: Nadav Amit
On virtual machines, software must flush the IOTLB after each page table
entry update.
The iommu_map_sg() code iterates through the given scatter-gather list
and invokes iommu_map() for each of its elements, which calls into the
vendor IOMMU driver through iom
From: Nadav Amit
AMD's IOMMU can flush efficiently (i.e., in a single flush) any range.
This is in contrast, for instance, to Intel IOMMUs that have a limit on
the number of pages that can be flushed in a single flush. In addition,
AMD's IOMMU does not care about the page-size, so changes of the p
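To make the contrast concrete, an illustrative helper (hypothetical name,
not from the patch) that computes the single naturally-aligned power-of-two
window AMD hardware can invalidate in one command:

#include <linux/log2.h>

static u64 covering_window(u64 address, size_t size, u64 *win_start)
{
	u64 win = roundup_pow_of_two(size);

	/* Widen the window until its naturally-aligned start still
	 * covers the end of the requested range. */
	while ((address & ~(win - 1)) + win < address + size)
		win <<= 1;

	*win_start = address & ~(win - 1);
	return win;	/* flush [*win_start, *win_start + win) at once */
}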
From: Nadav Amit
Refactor iommu_iotlb_gather_add_page() and factor out the logic that
detects whether the IOTLB gather range and a new range are disjoint. This
will be used by the next patch, which implements different gathering
logic for AMD.
Note that updating gather->pgsize unconditionally does not affect
correctness.
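The factored-out check ends up roughly as below (lightly annotated):

static inline bool
iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
			       unsigned long iova, size_t size)
{
	unsigned long start = iova, end = start + size - 1;

	/* An empty gather (end == 0) is never disjoint; otherwise two
	 * ranges are disjoint iff they neither touch nor overlap. */
	return gather->end != 0 &&
		(end + 1 < gather->start || start > gather->end + 1);
}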
From: Nadav Amit
A recent patch attempted to enable selective page flushes on AMD IOMMU but
neglected to adapt amd_iommu_iotlb_sync() to use the selective flushes.
Adapt amd_iommu_iotlb_sync() to use selective flushes and change
amd_iommu_unmap() to collect the flushes. As a defensive measure, to
From: Robin Murphy
The Mediatek driver is not the only one which might want a basic
address-based gathering behaviour, so although it's arguably simple
enough to open-code, let's factor it out for the sake of cleanliness.
Let's also take this opportunity to document the intent of these
helpers fo
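The resulting address-based helper is essentially a bounding-box update
(close to the upstream form):

static inline void
iommu_iotlb_gather_add_range(struct iommu_iotlb_gather *gather,
			     unsigned long iova, size_t size)
{
	unsigned long end = iova + size - 1;

	/* Grow the gathered range to the union of the old and new ones. */
	if (gather->start > iova)
		gather->start = iova;
	if (gather->end < end)
		gather->end = end;
}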
From: Nadav Amit
Do not use a flush queue in virtualized environments, where the NpCache
capability of the IOMMU is set. This is required to reduce
virtualization overheads.
This change follows a similar change to Intel VT-d; a detailed
explanation of the rationale is given in commit
From: Nadav Amit
The previous patch, commit 268aa4548277 ("iommu/amd: Page-specific
invalidations for more than one page") was supposed to enable
page-selective IOTLB flushes on AMD.
Besides the bug that was already fixed by commit a017c567915f
("iommu/amd: Fix wrong parentheses on page-specific
On Wed 21 Jul 16:07 CDT 2021, Saravana Kannan wrote:
> On Wed, Jul 21, 2021 at 1:23 PM Bjorn Andersson
> wrote:
> >
> > On Wed 21 Jul 13:00 CDT 2021, Saravana Kannan wrote:
> >
> > > On Wed, Jul 21, 2021 at 10:24 AM John Stultz
> > > wrote:
> > > >
> > > > On Wed, Jul 21, 2021 at 4:54 AM Greg K
On Fri, Jul 23, 2021 at 10:50:57AM +0200, Christian Borntraeger wrote:
>
>
> On 23.07.21 10:47, Halil Pasic wrote:
> > On Fri, 23 Jul 2021 08:14:19 +0200
> > Christian Borntraeger wrote:
> >
> > > Resending with the correct email of Heiko
> > >
> > > On 23.07.21 03:12, Halil Pasic wrote:
>
On Wed, Jul 21, 2021 at 10:24:01AM -0700, John Stultz wrote:
> On Wed, Jul 21, 2021 at 4:54 AM Greg Kroah-Hartman
> wrote:
> >
> > On Wed, Jul 07, 2021 at 04:53:20AM +, John Stultz wrote:
> > > Allow the qcom_scm driver to be loadable as a permanent module.
> >
> > This feels like a regression
On Thu, Jul 22, 2021 at 06:40:18PM +0100, Robin Murphy wrote:
> On 2021-07-22 16:54, Ming Lei wrote:
> [...]
> > > If you are still keen to investigate more, then can try either of these:
> > >
> > > - add iommu.strict=0 to the cmdline
> > >
> > > - use perf record+annotate to find the hotspot
>
On 23.07.21 10:47, Halil Pasic wrote:
On Fri, 23 Jul 2021 08:14:19 +0200
Christian Borntraeger wrote:
Resending with the correct email of Heiko
On 23.07.21 03:12, Halil Pasic wrote:
On Thu, 22 Jul 2021 21:22:58 +0200
Christian Borntraeger wrote:
On 20.07.21 15:38, Will Deacon wr
On Fri, 23 Jul 2021 08:14:19 +0200
Christian Borntraeger wrote:
> Resending with the correct email of Heiko
>
> On 23.07.21 03:12, Halil Pasic wrote:
> > On Thu, 22 Jul 2021 21:22:58 +0200
> > Christian Borntraeger wrote:
> >
> >> On 20.07.21 15:38, Will Deacon wrote:
> >>> Hi again, f
> -----Original Message-----
> From: Jean-Philippe Brucker [mailto:jean-phili...@linaro.org]
> Sent: 22 July 2021 17:46
> To: Shameerali Kolothum Thodi
> Cc: linux-arm-ker...@lists.infradead.org; iommu@lists.linux-foundation.org;
> kvm...@lists.cs.columbia.edu; m...@kernel.org;
> alex.william..