On Tue, Jul 28, 2020 at 08:11:40AM +0300, Mike Rapoport wrote:
> From: Mike Rapoport
>
> The memory size calculation in cma_early_percent_memory() traverses
> memblock.memory rather than simply calling memblock_phys_mem_size(). The
> comment in that function suggests that at some point there should have
> been a call to memblock_analyze() before memblock_phys_mem_size() could
> be used.
From the IOMMU p.o.v., PASIDs allocated and managed by external components
(e.g. VFIO) will be passed in for gpasid_bind/unbind operations. The IOMMU
needs some knowledge to check the PASID ownership, hence add an interface
for those components to report the PASID owner.
In the latest kernel design, the PASID owner
Nesting translation allows two levels/stages of page tables, with the 1st
level for guest translations (e.g. GVA->GPA) and the 2nd level for host
translations (e.g. GPA->HPA). This patch adds an interface for binding
guest page tables to a PASID. This PASID must have been allocated by
userspace before the binding.
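As a rough illustration of the flow this enables, here is a minimal
userspace sketch; the ioctl number and structure below are invented
placeholders for this discussion, not the proposed uAPI.

#include <stdint.h>
#include <sys/ioctl.h>

/* Placeholder request number; the real uAPI defines its own. */
#define HYPOTHETICAL_VFIO_BIND_PGTBL	_IO(';', 200)

/* Hypothetical layout: just enough to carry the guest page-table root. */
struct hypothetical_bind_pgtbl {
	uint32_t argsz;		/* size of this structure, for extensibility */
	uint32_t flags;		/* reserved, must be 0 */
	uint64_t gpgd;		/* guest page directory pointer (GPA) */
	uint32_t pasid;		/* PASID allocated earlier by userspace */
	uint32_t addr_width;	/* guest address width, e.g. 48 */
};

static int bind_guest_pgtbl(int container_fd, uint64_t gpgd, uint32_t pasid)
{
	struct hypothetical_bind_pgtbl bind = {
		.argsz		= sizeof(bind),
		.gpgd		= gpgd,
		.pasid		= pasid,
		.addr_width	= 48,
	};

	return ioctl(container_fd, HYPOTHETICAL_VFIO_BIND_PGTBL, &bind);
}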
This patch reports the nesting info, and only supports the case where all
the physical IOMMUs have the same CAP/ECAP masks.
Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
Cc: Jean-Philippe Brucker
Cc: Joerg Roedel
Cc: Lu Baolu
Signed-off-by: Liu Yi L
Signed-off-by: Jacob Pan
---
This patch is added because, instead of returning a boolean for
DOMAIN_ATTR_NESTING, iommu_domain_get_attr() should return an
iommu_nesting_info handle. For now, return an empty nesting info struct,
as true nesting is not yet supported by the SMMUs.
Cc: Will Deacon
Cc: Robin Murphy
Cc: Eric Auger
Shared Virtual Addressing (SVA), a.k.a. Shared Virtual Memory (SVM) on
Intel platforms, allows address space sharing between device DMA and
applications. SVA can reduce programming complexity and enhance security.
This VFIO series is intended to expose SVA usage to VMs, i.e. sharing
guest application address spaces with assigned devices.
Shared Virtual Addressing (a.k.a. Shared Virtual Memory) allows sharing
multiple process virtual address spaces with the device for a simplified
programming model. A PASID is used to tag a virtual address space in DMA
requests and to identify the related translation structure in the IOMMU.
When a PASID-cap
This patch allows userspace to request PASID allocation/free, e.g. when
serving the request from the guest.
PASIDs that are not freed by userspace are automatically freed when the
IOASID set is destroyed on process exit.
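For a feel of how a VMM serving guest requests might drive this, a
minimal sketch follows; the request number, structure, and flag names
are illustrative assumptions, not the proposed uAPI.

#include <stdint.h>
#include <sys/ioctl.h>

/* Placeholder request number and flags for illustration only. */
#define HYPOTHETICAL_VFIO_PASID_REQUEST	_IO(';', 202)
#define HYPOTHETICAL_PASID_ALLOC	(1 << 0)
#define HYPOTHETICAL_PASID_FREE		(1 << 1)

struct hypothetical_pasid_request {
	uint32_t argsz;		/* size of this structure */
	uint32_t flags;		/* ALLOC or FREE */
	uint32_t range_min;	/* smallest acceptable PASID */
	uint32_t range_max;	/* largest acceptable PASID */
};

static int pasid_alloc(int container_fd)
{
	struct hypothetical_pasid_request req = {
		.argsz	   = sizeof(req),
		.flags	   = HYPOTHETICAL_PASID_ALLOC,
		.range_min = 1,
		.range_max = (1u << 20) - 1,	/* PASID is a 20-bit value */
	};

	/* in this sketch, returns the allocated PASID on success */
	return ioctl(container_fd, HYPOTHETICAL_VFIO_PASID_REQUEST, &req);
}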
Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
Cc: J
This patch refactors vfio_iommu_type1_ioctl() to use a switch instead of
if-else, and gives each command a helper function.
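After the refactor, the dispatcher has roughly this shape:

static long vfio_iommu_type1_ioctl(void *iommu_data,
				   unsigned int cmd, unsigned long arg)
{
	struct vfio_iommu *iommu = iommu_data;

	switch (cmd) {
	case VFIO_CHECK_EXTENSION:
		return vfio_iommu_type1_check_extension(iommu, arg);
	case VFIO_IOMMU_GET_INFO:
		return vfio_iommu_type1_get_info(iommu, arg);
	case VFIO_IOMMU_MAP_DMA:
		return vfio_iommu_type1_map_dma(iommu, arg);
	case VFIO_IOMMU_UNMAP_DMA:
		return vfio_iommu_type1_unmap_dma(iommu, arg);
	default:
		return -ENOTTY;
	}
}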
Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
Cc: Jean-Philippe Brucker
Cc: Joerg Roedel
Cc: Lu Baolu
Reviewed-by: Eric Auger
Suggested-by: Christop
From: Eric Auger
The VFIO API was enhanced to support nested stage control: a bunch of
new ioctls and a usage guideline.
Let's document the process to follow to set up nested mode.
Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
Cc: Jean-Philippe Brucker
Cc: Joerg Roedel
Cc:
When an IOMMU domain with the nesting attribute is used for guest SVA, a
system-wide PASID is allocated for binding with the device and the domain.
For security reasons, we need to check the PASID passed from user-space,
e.g. for page table bind/unbind and PASID-related cache invalidation.
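A kernel-side sketch of such a check, assuming the 5.8-era
ioasid_find() helper; how the ioasid_set is associated with the
container is an assumption here.

#include <linux/ioasid.h>
#include <linux/err.h>

/*
 * Sketch: accept a user-supplied PASID only if it was allocated from
 * this container's ioasid_set.
 */
static bool vfio_pasid_is_owned(struct ioasid_set *set, ioasid_t pasid)
{
	/* ioasid_find() returns ERR_PTR(-ENOENT) if @pasid is not in @set */
	return !IS_ERR(ioasid_find(set, pasid, NULL));
}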
Cc: Kevin Tian
This patch provides an interface allowing userspace to invalidate the
IOMMU cache for the first-level page table. It is required when the
first-level IOMMU page table is not managed by the host kernel in the
nested translation setup.
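Driving the invalidation from userspace could then look roughly like
the sketch below; once more, the request number and layout are
placeholders for illustration, not the proposed uAPI.

#include <stdint.h>
#include <sys/ioctl.h>

#define HYPOTHETICAL_VFIO_CACHE_INVALIDATE	_IO(';', 201)

struct hypothetical_cache_inv {
	uint32_t argsz;		/* size of this structure */
	uint32_t pasid;		/* PASID whose first-level mappings changed */
	uint64_t addr;		/* start of the stale range (guest VA) */
	uint64_t size;		/* length of the stale range */
};

static int inv_first_level(int fd, uint32_t pasid, uint64_t addr,
			   uint64_t size)
{
	struct hypothetical_cache_inv inv = {
		.argsz = sizeof(inv),
		.pasid = pasid,
		.addr  = addr,
		.size  = size,
	};

	return ioctl(fd, HYPOTHETICAL_VFIO_CACHE_INVALIDATE, &inv);
}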
Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
IOMMUs that support nesting translation need to report the capability info
to userspace. It gives information about the requirements the userspace
needs to implement, plus other features characterizing the physical
implementation.
This patch reports the nesting info via DOMAIN_ATTR_NESTING. Callers can
get the nesting info by querying this attribute through
iommu_domain_get_attr().
In recent years, the mediated device pass-through framework (e.g.
vfio-mdev) has been used to achieve flexible device sharing across domains
(e.g. VMs). There are also hardware-assisted mediated pass-through
solutions from platform vendors, e.g. Intel VT-d scalable mode, which
supports Intel Scalable I/O Virtualization.
This patch exports the iommu nesting capability info to user space through
VFIO. Userspace is expected to check this info for supported uAPIs (e.g.
PASID alloc/free, bind page table, and cache invalidation) and the
vendor-specific format information for the first-level/stage page table
that will be bound to a PASID.
From: Yi Sun
The current interface is good enough for SVA virtualization on an assigned
physical PCI device, but when it comes to mediated devices, a physical
device may be attached with multiple aux-domains. Also, for guest unbind,
the PASID to be unbound should have been allocated to the VM. This check requir
This patch exposes the PCIe PASID capability to the guest for assigned
devices. The existing vfio_pci driver hides it from the guest by setting
the capability length to 0 in pci_ext_cap_length[].
This patch only exposes the PASID capability for devices that have the
PCIe PASID extended structure in their configuration space.
From: Mike Rapoport
The numa_clear_kernel_node_hotplug() function first traverses numa_meminfo
regions to set the node ID in memblock.reserved and then traverses
memblock.reserved to update reserved_nodemask to include node IDs that were
set in the first loop.
Remove the redundant traversal over memblock.reserved.
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
	start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
	end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));
	/* do something with start and end */
}
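With the simplified iterator this series moves toward, the PFN
round-trip disappears; a sketch, assuming the three-parameter
for_each_mem_range() form:

phys_addr_t start, end;
u64 i;

for_each_mem_range(i, &start, &end) {
	/* start and end are physical addresses already */
}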
From: Mike Rapoport
for_each_memblock() is used exclusively to iterate over memblock.memory in
a few places that use data from memblock_region rather than the memory
ranges.
Remove the type parameter from the for_each_memblock() iterator to improve
encapsulation of memblock internals from its users.
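A before/after sketch, assuming the dedicated iterator name the series
introduces for memblock.memory:

struct memblock_region *reg;

/* before: the type parameter leaks memblock internals */
for_each_memblock(memory, reg)
	pr_debug("region: %pa + %pa\n", &reg->base, &reg->size);

/* after: an iterator dedicated to memblock.memory */
for_each_mem_region(reg)
	pr_debug("region: %pa + %pa\n", &reg->base, &reg->size);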
From: Mike Rapoport
for_each_memblock_type() is not used outside mm/memblock.c, move it there
from include/linux/memblock.h
Signed-off-by: Mike Rapoport
---
include/linux/memblock.h | 5 -
mm/memblock.c            | 5 +
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/in
From: Mike Rapoport
There are several occurrences of the following pattern:
for_each_memblock(memory, reg) {
	start_pfn = memblock_region_memory_base_pfn(reg);
	end_pfn = memblock_region_memory_end_pfn(reg);
	/* do something with start_pfn and end_pfn */
}
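The long-standing for_each_mem_pfn_range() iterator already yields
exactly these PFNs; a sketch of the replacement:

unsigned long start_pfn, end_pfn;
int i;

for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, NULL) {
	/* start_pfn and end_pfn bound each memory region */
}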
From: Mike Rapoport
Currently the for_each_mem_range() iterator is the most generic way to
traverse memblock regions. As such, it has 8 parameters and it is hardly
convenient for users. Most users choose to utilize one of its wrappers and
the only user that actually needs most of the parameters outsid
From: Mike Rapoport
microblaze supports neither NUMA nor SPARSEMEM, so there is no point in
calling memblock_set_node() and sparse_memory_present_with_active_regions()
during microblaze memory initialization.
Remove these calls and the surrounding code.
Signed-off-by: Mike Rapoport
From: Mike Rapoport
fadump_reserve_crash_area() reserves memory from a specified base address
till the end of the RAM.
Replace the iteration through memblock.memory with a single call to
memblock_reserve() with an appropriate size that will take care of proper
memory reservation.
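That is, the loop collapses into a single reservation; a sketch, where
base stands for the crash-area start computed earlier:

	/* reserve from the crash-area base to the end of RAM in one call */
	memblock_reserve(base, memblock_end_of_DRAM() - base);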
Signed-off-by: Mike Rapoport
From: Mike Rapoport
RISC-V does not (yet) support NUMA and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(), remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
---
arch/riscv/mm/init.c | 9 --
From: Mike Rapoport
The only user of memblock_dbg() outside memblock was the s390 setup code,
and it is converted to use pr_debug() instead.
This allows us to stop exposing memblock_debug and memblock_dbg() to the
rest of the kernel.
Signed-off-by: Mike Rapoport
---
arch/s390/kernel/setup.c | 4 ++--
From: Mike Rapoport
The memory size calculation in kvm_cma_reserve() traverses memblock.memory
rather than simply calling memblock_phys_mem_size(). The comment in that
function suggests that at some point there should have been a call to
memblock_analyze() before memblock_phys_mem_size() could be used.
From: Mike Rapoport
Instead of traversing memblock.memory regions to find memory_start and
memory_end, simply query memblock_{start,end}_of_DRAM().
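The replacement is a pair of direct queries:

	memory_start = memblock_start_of_DRAM();
	memory_end = memblock_end_of_DRAM();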
Signed-off-by: Mike Rapoport
---
arch/h8300/kernel/setup.c | 8 +++-
arch/nds32/kernel/setup.c | 8 ++--
arch/openrisc/kernel/setup.
From: Mike Rapoport
The memory size calculation in cma_early_percent_memory() traverses
memblock.memory rather than simply calling memblock_phys_mem_size(). The
comment in that function suggests that at some point there should have been
a call to memblock_analyze() before memblock_phys_mem_size() could be used.
From: Mike Rapoport
The function free_highpages() in both arm and xtensa essentially open-codes
a for_each_free_mem_range() loop to detect high memory pages that were not
reserved and that should be initialized and passed to the buddy allocator.
Replace the open-coded implementation with for_each_free_mem_range().
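A sketch of what the shared loop looks like with the generic iterator:

phys_addr_t range_start, range_end;
u64 i;

/* walk free (i.e. not reserved) memory and clamp to whole page frames */
for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
			&range_start, &range_end, NULL) {
	unsigned long start_pfn = PFN_UP(range_start);
	unsigned long end_pfn = PFN_DOWN(range_end);

	/* hand high pages in [start_pfn, end_pfn) to the buddy allocator */
}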
From: Mike Rapoport
dummy_numa_init() loops over memblock.memory and passes nid=0 to
numa_add_memblk(), which essentially wraps memblock_set_node(). However,
memblock_set_node() can cope with the entire memory span itself, so the
loop over memblock.memory regions is redundant.
Replace the loop with a single call to numa_add_memblk() covering the
entire memory span.
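A sketch of that single call, assuming the span is bounded by the usual
memblock queries:

	/* one call for the whole span instead of iterating memblock.memory */
	numa_add_memblk(0, memblock_start_of_DRAM(), memblock_end_of_DRAM());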
From: Mike Rapoport
Hi,
These patches simplify several uses of memblock iterators and hide some of
the memblock implementation details from the rest of the system.
The patches are on top of v5.8-rc7 + cherry-pick of "mm/sparse: cleanup the
code surrounding memory_present()" [1] from mmotm tree.
v2 that reuses SWIOTLB here: https://lore.kernel.org/patchwork/cover/1280705/
Thanks,
Claire
If a device is not behind an IOMMU, we look up the device node and set
up the restricted DMA when the restricted-dma property is present.
One can specify two reserved-memory nodes in the device tree: one with
shared-dma-pool to handle the coherent DMA buffer allocation, and
another one with device-swiotlb-pool
Introduce the new compatible string, device-swiotlb-pool, for restricted
DMA. One can specify the address and length of the device swiotlb memory
region using device-swiotlb-pool in the device tree.
Signed-off-by: Claire Chang
---
.../reserved-memory/reserved-memory.txt | 35 +
Add the initialization function to create device swiotlb pools from
matching reserved-memory nodes in the device tree.
Signed-off-by: Claire Chang
---
include/linux/device.h | 4 ++
kernel/dma/swiotlb.c | 148 +
2 files changed, 126 insertions(+), 26 deletions(-)
Added a new struct, io_tlb_mem, as the IO TLB memory pool descriptor and
moved relevant global variables into that struct.
This will be useful later to allow for per-device swiotlb regions.
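For orientation, the descriptor plausibly gathers the old globals along
these lines; the exact field set in the patch may differ.

#include <linux/spinlock.h>
#include <linux/types.h>

/* Sketch: field names mirror the former io_tlb_* globals. */
struct io_tlb_mem {
	phys_addr_t start;	/* physical start of the IO TLB pool */
	phys_addr_t end;	/* physical end of the IO TLB pool */
	unsigned long nslabs;	/* number of IO TLB slabs */
	unsigned long used;	/* slabs currently in use */
	unsigned int index;	/* where to start searching next */
	spinlock_t lock;	/* protects allocation bookkeeping */
};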
Signed-off-by: Claire Chang
---
drivers/iommu/intel/iommu.c | 2 +-
drivers/xen/swiotlb-xen.c | 4 +-
Regardless of the swiotlb setting, the device swiotlb pool is preferred if
available.
The device swiotlb pools provide a basic level of protection against
the DMA overwriting buffer contents at unexpected times. However, to
protect against general data leakage and system memory corruption, the
system
This series implements mitigations for the lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.
For example, we plan to use the PCI-e bus for Wi-Fi
Jerry Snitselaar @ 2020-06-30 13:06 MST:
> This patchset implements the suggestion from Linus to move the
> Kconfig and Makefile bits for AMD and Intel into their respective
> directories.
>
> v2: Rebase against v5.8-rc3. Dropped ---help--- changes from Kconfig as
> that was dealt with i
Hi
[This is an automated email]
This commit has been processed because it contains a "Fixes:" tag
fixing commit: b16d0cb9e2fc ("iommu/vt-d: Always enable PASID/PRI PCI
capabilities before ATS").
The bot has tested the following trees: v5.7.10, v5.4.53, v4.19.134, v4.14.189,
v4.9.231, v4.4.231.
A few exported functions from the AMD IOMMU driver are missing prototypes.
They have declarations in arch/x86/events/amd/iommu.h but this file
cannot be included in the driver. Add prototypes to fix W=1 warnings
like:
drivers/iommu/amd/init.c:3066:19: warning:
no previous prototype for 'get_
The of_device_id struct is included unconditionally via the of.h header
and the table is used in the driver as well. Remove of_match_ptr() to fix
a W=1 compile test warning with !CONFIG_OF:
drivers/iommu/mtk_iommu.c:833:34: warning: 'mtk_iommu_of_ids' defined but
not used [-Wunused-const-variable=]
833 | static co
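The fix is the usual one-liner for drivers whose match table is
referenced unconditionally; roughly (surrounding fields abridged):

static struct platform_driver mtk_iommu_driver = {
	.probe	= mtk_iommu_probe,
	.driver	= {
		.name		= "mtk-iommu",
		/* was: .of_match_table = of_match_ptr(mtk_iommu_of_ids),
		 * which evaluates to NULL under !CONFIG_OF and leaves the
		 * table as an unused const variable */
		.of_match_table	= mtk_iommu_of_ids,
	},
};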
Hi Christoph,
thanks for having a look at this!
On Fri, 2020-07-24 at 15:41 +0200, Christoph Hellwig wrote:
> Yes, the iommu is an interesting case, and the current code is
> wrong for that.
Care to expand on this? I do get that checking dma_coherent_ok() on memory
that'll later on be mapped into
On Mon, Jul 27, 2020 at 8:03 AM Jordan Crouse wrote:
>
> On Sun, Jul 26, 2020 at 10:03:07AM -0700, Rob Clark wrote:
> > On Mon, Jul 20, 2020 at 8:41 AM Jordan Crouse
> > wrote:
> > >
> > > The Adreno GPU has the capability to manage its own pagetables and switch
> > > them dynamically from the h
On Sun, Jul 26, 2020 at 10:03:07AM -0700, Rob Clark wrote:
> On Mon, Jul 20, 2020 at 8:41 AM Jordan Crouse wrote:
> >
> > The Adreno GPU has the capability to manage its own pagetables and switch
> > them dynamically from the hardware. To do this the GPU uses TTBR1 for
> > "global" GPU memory and
On Sun, Jul 26, 2020 at 11:27:03PM -0700, Bjorn Andersson wrote:
> On Mon 20 Jul 08:40 PDT 2020, Jordan Crouse wrote:
> > diff --git a/drivers/iommu/arm-smmu-qcom.c b/drivers/iommu/arm-smmu-qcom.c
> [..]
> > +static int qcom_adreno_smmu_alloc_context_bank(struct arm_smmu_domain
> > *smmu_domain,
>
On Sat, Jul 25, 2020 at 1:46 AM Jonathan Bakker wrote:
>
> Hi Tomasz,
>
> On 2020-07-20 6:10 a.m., Tomasz Figa wrote:
> > On Sat, Jul 11, 2020 at 8:17 PM Jonathan Bakker wrote:
> >>
> >> Hi Tomasz,
> >>
> >> On 2020-07-07 11:44 a.m., Tomasz Figa wrote:
> >>> Hi Jonathan,
> >>>
> >>> On Sat, Apr 2
Hi Joerg,
As requested in [1], here is a second Arm SMMU pull request for 5.9, moving
the driver files into their own subdirectory to avoid cluttering
drivers/iommu/.
Cheers,
Will
[1] https://lore.kernel.org/r/20200722133323.gg27...@8bytes.org
--->8
The following changes since commit aa7ec732
On Sat, 2020-07-11 at 14:48 +0800, Yong Wu wrote:
> In the previous SoC, the M4U HW is in the EMI power domain which is
> always on. The latest M4U is in the display power domain which may be
> turned on/off, thus we have to add a pm_runtime interface for it.
>
> we should enable its power before M4