Hi Will,
On 14/11/14 18:56, Will Deacon wrote:
of_dma_configure determines the size of the DMA range for a device by
either parsing the dma-ranges property or inspecting the coherent DMA
mask. This same information can be used to initialise the max segment
size and boundary_mask to a default

Hello everybody,
Here is the fourth iteration of the RFC I've previously posted here:
RFCv1:
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-August/283023.html
RFCv2:
In order to share the IOVA allocator with other architectures, break
the unnecessary dependency on the Intel IOMMU driver and move the
remaining IOVA internals to iova.c
Signed-off-by: Robin Murphy <robin.mur...@arm.com>
---
drivers/iommu/intel-iommu.c | 33 ++---
Hi all,
I've been implementing IOMMU DMA mapping for arm64, based on tidied-up
parts of the existing arch/arm/mm/dma-mapping.c with a clear divide
between the arch-specific parts and the general DMA-API to IOMMU-API layer
so that it can be shared; similar to what Ritesh started before and was
Systems may contain heterogeneous IOMMUs supporting differing minimum
page sizes, which may also differ from the CPU page size.
Thus it is practical to have an explicit notion of IOVA granularity
to simplify handling of mapping and allocation constraints.
As an initial step, move the IOVA
In preparation for sharing the IOVA allocator, build it for all
IOMMU API users.
Signed-off-by: Robin Murphy <robin.mur...@arm.com>
---
drivers/iommu/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index
To share the IOVA allocator with other architectures, it needs to
accommodate more general aperture restrictions; move the lower limit
from a compile-time constant to a runtime domain property to allow
IOVA domains with different requirements to co-exist.
Also reword the slightly unclear
If the IOMMU supports pages smaller than the CPU page size, segments
which lie at offsets within the CPU page may be mapped based on the
finer-grained IOMMU page boundaries. This minimises the amount of
non-buffer memory between the CPU page boundary and the start of the
segment which must be
There's an off-by-one bug in function __domain_mapping(), which may
trigger the BUG_ON(nr_pages < lvl_pages) when
(nr_pages + 1) & superpage_mask == 0.
The issue was introduced by commit 9051aa0268dc ("intel-iommu: Combine
domain_pfn_mapping() and domain_sg_mapping()"), which sets sg_res to
Enhance the MSI code to support hierarchical irqdomains, which helps
make the architecture clearer.
Signed-off-by: Jiang Liu <jiang@linux.intel.com>
---
Hi Thomas,
Sorry, my branch hasn't been updated to the latest tip/x86/apic
branch. With this patch rebased, all following patches should