Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jason Gunthorpe
On Thu, May 27, 2021 at 07:58:12AM +, Tian, Kevin wrote:

> 2.1. /dev/ioasid uAPI
> +++++++++++++++++++++
> 
> /*
>   * Check whether an uAPI extension is supported. 
>   *
>   * This is for FD-level capabilities, such as locked page pre-registration. 
>   * IOASID-level capabilities are reported through IOASID_GET_INFO.
>   *
>   * Return: 0 if not supported, 1 if supported.
>   */
> #define IOASID_CHECK_EXTENSION  _IO(IOASID_TYPE, IOASID_BASE + 0)

 
> /*
>   * Register user space memory where DMA is allowed.
>   *
>   * It pins user pages and does the locked memory accounting so sub-
>   * sequent IOASID_MAP/UNMAP_DMA calls get faster.
>   *
>   * When this ioctl is not used, one user page might be accounted
>   * multiple times when it is mapped by multiple IOASIDs which are
>   * not nested together.
>   *
>   * Input parameters:
>   *   - vaddr;
>   *   - size;
>   *
>   * Return: 0 on success, -errno on failure.
>   */
> #define IOASID_REGISTER_MEMORY  _IO(IOASID_TYPE, IOASID_BASE + 1)
> #define IOASID_UNREGISTER_MEMORY  _IO(IOASID_TYPE, IOASID_BASE + 2)

So VA ranges are pinned and stored in a tree and later references to
those VA ranges by any other IOASID use the pin cached in the tree?

It seems reasonable and is similar to the ioasid parent/child I
suggested for PPC.

IMHO this should be merged with the all-SW IOASID that is required for
today's mdev drivers. If that can be done while keeping this uAPI then
great; otherwise I don't think it is so bad to weakly nest a physical
IOASID under a SW one just to optimize page pinning.
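
To make the pin-cache concrete, here is roughly how the flow could look
with this RFC's ioctls (the mem_info/dma_map shapes follow the examples
later in the proposal; the names and the fast-path behavior are my
reading, not settled API):

    /* Pin and account the VA range once */
    mem_info = { .vaddr = hva_base, .size = guest_mem_size };
    ioctl(ioasid_fd, IOASID_REGISTER_MEMORY, &mem_info);

    /* Any IOASID in this FD that maps inside the registered range
     * then reuses the cached pins instead of re-pinning and
     * re-accounting the same pages. */
    ioctl(ioasid_fd, IOASID_MAP_DMA, &dma_map);   /* fast path */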

Either way this seems like a smart direction.

> /*
>   * Allocate an IOASID. 
>   *
>   * IOASID is the FD-local software handle representing an I/O address 
>   * space. Each IOASID is associated with a single I/O page table. User 
>   * must call this ioctl to get an IOASID for every I/O address space that is
>   * intended to be enabled in the IOMMU.
>   *
>   * A newly-created IOASID doesn't accept any command before it is 
>   * attached to a device. Once attached, an empty I/O page table is 
>   * bound with the IOMMU then the user could use either DMA mapping 
>   * or pgtable binding commands to manage this I/O page table.

Can the IOASID be populated before being attached?

>   * Device attachment is initiated through device driver uAPI (e.g. VFIO)
>   *
>   * Return: allocated ioasid on success, -errno on failure.
>   */
> #define IOASID_ALLOC  _IO(IOASID_TYPE, IOASID_BASE + 3)
> #define IOASID_FREE   _IO(IOASID_TYPE, IOASID_BASE + 4)

I assume alloc will include quite a big structure to satisfy the
various vendor needs?
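
Purely guessing at the shape - none of these fields are in the RFC - I'd
expect something extensible like:

    struct ioasid_alloc_req {
            __u32 argsz;            /* room to grow, like VFIO's argsz */
            __u32 flags;
            __u32 type;             /* kernel-managed map/unmap vs. user pgtable bind */
            __u32 parent_ioasid;    /* nesting intent declared up front */
            __u64 format;           /* vendor pgtable format the user intends to bind */
    };

Declaring type/format/parent at alloc time would also give device
attach something concrete to check compatibility against.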

> /*
>   * Get information about an I/O address space
>   *
>   * Supported capabilities:
>   *   - VFIO type1 map/unmap;
>   *   - pgtable/pasid_table binding
>   *   - hardware nesting vs. software nesting;
>   *   - ...
>   *
>   * Related attributes:
>   *   - supported page sizes, reserved IOVA ranges (DMA mapping);
>   *   - vendor pgtable formats (pgtable binding);
>   *   - number of child IOASIDs (nesting);
>   *   - ...
>   *
>   * Above information is available only after one or more devices are
>   * attached to the specified IOASID. Otherwise the IOASID is just a
>   * number w/o any capability or attribute.

This feels wrong to learn most of these attributes of the IOASID after
attaching to a device.

The user should have some idea how it intends to use the IOASID when
it creates it and the rest of the system should match the intention.

For instance if the user is creating a IOASID to cover the guest GPA
with the intention of making children it should indicate this during
alloc.

If the user is intending to point a child IOASID to a guest page table
in a certain descriptor format then it should indicate it during
alloc.

Device bind should fail if the device somehow isn't compatible with
the scheme the user is trying to use.

> /*
>   * Map/unmap process virtual addresses to I/O virtual addresses.
>   *
>   * Provide VFIO type1 equivalent semantics. Start with the same 
>   * restriction e.g. the unmap size should match those used in the 
>   * original mapping call. 
>   *
>   * If IOASID_REGISTER_MEMORY has been called, the mapped vaddr
>   * must be already in the preregistered list.
>   *
>   * Input parameters:
>   *   - u32 ioasid;
>   *   - refer to vfio_iommu_type1_dma_{un}map
>   *
>   * Return: 0 on success, -errno on failure.
>   */
> #define IOASID_MAP_DMA  _IO(IOASID_TYPE, IOASID_BASE + 6)
> #define IOASID_UNMAP_DMA  _IO(IOASID_TYPE, IOASID_BASE + 7)

What about nested IOASIDs?

> /*
>   * Create a nesting IOASID (child) on an existing IOASID (parent)
>   *
>   * IOASIDs can be nested together, implying that the output address 
>   * from one I/O page table (child) must be further translated by 
>   * another I/O page table (parent).
>   *
>   * As the child adds essentially another reference to the I/O page table 
>   * represented by the parent, any device attached to the child ioasid

Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jason Gunthorpe
On Fri, May 28, 2021 at 10:24:56AM +0800, Jason Wang wrote:
> > IOASID nesting can be implemented in two ways: hardware nesting and
> > software nesting. With hardware support the child and parent I/O page
> > tables are walked consecutively by the IOMMU to form a nested translation.
> > When it's implemented in software, the ioasid driver
> 
> Need to explain what "ioasid driver" means.

I think it means "drivers/iommu"

> And if yes, does it allow a software-specific implementation for the device:
> 
> 1) swiotlb or

I think it is necessary to have a 'software page table' which is
required to do all the mdevs we have today.

> 2) device specific IOASID implementation

"drivers/iommu" is pluggable, so I guess it can exist? I've never seen
it done before though

Whether we'd want this to drive an on-device translation table is an
interesting question. I don't have an answer.

> > I/O page tables routed through PASID are installed in a per-RID PASID
> > table structure.
> 
> I'm not sure this is true for all archs.

It must be true. For security reasons access to a PASID must be
limited by RID.

RID_A assigned to guest A should not be able to access a PASID being
used by RID_B in guest B. Only a per-RID restriction can accomplish
this.
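
Conceptually the lookup is always scoped by the RID - illustrative
pseudo-kernel code with invented names, not a real driver interface:

    /* PASID #5 under RID_A and PASID #5 under RID_B go through
     * different tables and can never reach each other's page tables. */
    void *pasid_to_pgtable(struct rid_state *rid, u32 pasid)
    {
            if (pasid >= rid->pasid_table_size)
                    return NULL;
            return rid->pasid_table[pasid];     /* private to this RID */
    }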

> I would like to know the reason for such indirection.
> 
> It looks to me the ioasid fd is sufficient for performing any operations.
> 
> Such allocation only works if an ioasid fd can have multiple ioasids, which
> seems not to be the case you describe here.

It is the case, read the examples section. One had 3 interrelated
IOASID objects inside the same FD.
 
> > 5.3. IOASID nesting (software)
> > ++++++++++++++++++++++++++++++
> > 
> > Same usage scenario as 5.2, with software-based IOASID nesting
> > available. In this mode it is the kernel instead of the user that creates
> > the shadow mapping.
> > 
> > The flow before the guest boots is the same as in 5.2, except for one point.
> > Because giova_ioasid is nested on gpa_ioasid, locked accounting is only
> > conducted for gpa_ioasid. So it's not necessary to pre-register virtual
> > memory.
> > 
> > To save space we only list the steps after the guest boots (i.e. both dev1/dev2
> > have been attached to gpa_ioasid before the guest boots):
> > 
> > /* After boots */
> > /* Make GIOVA space nested on GPA space */
> > giova_ioasid = ioctl(ioasid_fd, IOASID_CREATE_NESTING,
> > gpa_ioasid);
> > 
> > /* Attach dev2 to the new address space (child)
> >   * Note dev2 is still attached to gpa_ioasid (parent)
> >   */
> > at_data = { .ioasid = giova_ioasid};
> > ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> 
> 
> For vDPA, we need something similar. And in the future, vDPA may allow
> multiple ioasid to be attached to a single device. It should work with the
> current design.

What do you imagine multiple IOASIDs being used for in vDPA?

Jason


Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jason Gunthorpe
On Fri, May 28, 2021 at 06:23:07PM +0200, Jean-Philippe Brucker wrote:

> Regarding the invalidation, I think limiting it to IOASID may work but it
> does bother me that we can't directly forward all invalidations received
> on the vIOMMU: if the guest sends a device-wide invalidation, do we
> iterate over all IOASIDs and issue one ioctl for each?  Sure the guest is
> probably sending that because of detaching the PASID table, for which the
> kernel did perform the invalidation, but we can't just assume that and
> ignore the request, there may be a different reason. Iterating is going to
> take a lot of time, whereas with the current API we can send a single request
> and issue a single command to the IOMMU hardware.

I think the invalidation could stand some improvement, but that also
feels basically incremental to the essence of the proposal.

I agree with the general goal that the uAPI should be able to issue
invalidates that directly map to HW invalidations.

> Similarly, if the guest sends an ATC invalidation for a whole device (in
> the SMMU, that's an ATC_INV without SSID), we'll have to transform that
> into multiple IOTLB invalidations?  We can't just send it on IOASID #0,
> because it may not have been created by the guest.

For instance, adding device labels allows an invalidate-device
operation to exist, and the "generic" kernel driver can iterate over
all IOASIDs hooked to the device, overridable by the IOMMU driver.
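
Roughly like this on the kernel side (sketch only, names invented):

    /* Generic fallback for a device-wide invalidate: walk every
     * IOASID the labeled device is attached to and flush each one.
     * An IOMMU driver with a real device-wide op can override this. */
    static void ioasid_invalidate_device(struct ioasid_fd *ifd, u32 dev_label)
    {
            struct ioasid *ioasid;

            list_for_each_entry(ioasid, &ifd->ioasid_list, node)
                    if (ioasid_attached_to(ioasid, dev_label))
                            ioasid_invalidate_all(ioasid);
    }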

> > Notes:
> > -   It might be confusing as IOASID is also used in the kernel (drivers/
> > iommu/ioasid.c) to represent PCI PASID or ARM substream ID. We need to
> > find a better name later to differentiate.
> 
> Yes this isn't just about allocating PASIDs anymore. /dev/iommu or
> /dev/ioas would make more sense.

Either makes sense to me

Calling it /dev/iommu, with the internal IOASID objects called IOAS
(== iommu_domain), is not bad.

> >   * Get information about an I/O address space
> >   *
> >   * Supported capabilities:
> >   * - VFIO type1 map/unmap;
> >   * - pgtable/pasid_table binding
> >   * - hardware nesting vs. software nesting;
> >   * - ...
> >   *
> >   * Related attributes:
> >   * - supported page sizes, reserved IOVA ranges (DMA mapping);
> >   * - vendor pgtable formats (pgtable binding);
> >   * - number of child IOASIDs (nesting);
> >   * - ...
> >   *
> >   * Above information is available only after one or more devices are
> >   * attached to the specified IOASID. Otherwise the IOASID is just a
> >   * number w/o any capability or attribute.
> >   *
> >   * Input parameters:
> >   * - u32 ioasid;
> >   *
> >   * Output parameters:
> >   * - many. TBD.
> 
> We probably need a capability format similar to PCI and VFIO.

Designing this kind of uAPI where it is half HW and half generic is
really tricky to get right. Probably best to take the detailed design
of the IOCTL structs later.

Jason


Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jason Gunthorpe
On Thu, May 27, 2021 at 07:58:12AM +, Tian, Kevin wrote:
> /dev/ioasid provides a unified interface for managing I/O page tables for
> devices assigned to userspace. Device passthrough frameworks (VFIO, vDPA,
> etc.) are expected to use this interface instead of creating their own logic
> to isolate untrusted device DMAs initiated by userspace.

It is very long, but I think this has turned out quite well. It
certainly matches the basic sketch I had in my head when we were
talking about how to create vDPA devices a few years ago.

When you get down to the operations they all seem pretty common sense
and straightforward. Create an IOASID. Connect to a device. Fill the
IOASID with pages somehow. Worry about PASID labeling.

It really is critical to get all the vendor IOMMU people to go over it
and see how their HW features map into this.

Thanks,
Jason


Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jason Gunthorpe
On Thu, May 27, 2021 at 07:58:12AM +, Tian, Kevin wrote:
> 
> 5. Use Cases and Flows
> ----------------------
> Here assume VFIO will support a new model where every bound device
> is explicitly listed under /dev/vfio thus a device fd can be acquired w/o 
> going through legacy container/group interface. For illustration purpose
> those devices are just called dev[1...N]:
> 
>   device_fd[1...N] = open("/dev/vfio/devices/dev[1...N]", mode);
> 
> As explained earlier, one IOASID fd is sufficient for all intended use cases:
> 
>   ioasid_fd = open("/dev/ioasid", mode);
> 
> For simplicity below examples are all made for the virtualization story.
> They are representative and could be easily adapted to a non-virtualization
> scenario.

For others, I don't think this is *strictly* necessary; we can
probably still get to the device_fd using the group_fd and fit in
/dev/ioasid. It does make the rest of this more readable though.


> Three types of IOASIDs are considered:
> 
>   gpa_ioasid[1...N]:  for GPA address space
>   giova_ioasid[1...N]:for guest IOVA address space
>   gva_ioasid[1...N]:  for guest CPU VA address space
> 
> At least one gpa_ioasid must always be created per guest, while the other 
> two are relevant as far as vIOMMU is concerned.
> 
> Examples here apply to both pdev and mdev, if not explicitly marked out
> (e.g. in section 5.5). VFIO device driver in the kernel will figure out the 
> associated routing information in the attaching operation.
> 
> For illustration simplicity, IOASID_CHECK_EXTENSION and
> IOASID_GET_INFO are skipped in these examples.
> 
> 5.1. A simple example
> +++++++++++++++++++++
> 
> Dev1 is assigned to the guest. One gpa_ioasid is created. The GPA address
> space is managed through DMA mapping protocol:
> 
>   /* Bind device to IOASID fd */
>   device_fd = open("/dev/vfio/devices/dev1", mode);
>   ioasid_fd = open("/dev/ioasid", mode);
>   ioctl(device_fd, VFIO_BIND_IOASID_FD, ioasid_fd);
> 
>   /* Attach device to IOASID */
>   gpa_ioasid = ioctl(ioasid_fd, IOASID_ALLOC);
>   at_data = { .ioasid = gpa_ioasid};
>   ioctl(device_fd, VFIO_ATTACH_IOASID, &at_data);
> 
>   /* Setup GPA mapping */
>   dma_map = {
>   .ioasid = gpa_ioasid;
>   .iova   = 0;// GPA
>   .vaddr  = 0x4000;   // HVA
>   .size   = 1GB;
>   };
>   ioctl(ioasid_fd, IOASID_DMA_MAP, &dma_map);
> 
> If the guest is assigned more devices than dev1, the user follows the above
> sequence to attach the other devices to the same gpa_ioasid, i.e. sharing the
> GPA address space across all assigned devices.

e.g.

device2_fd = open("/dev/vfio/devices/dev2", mode);
ioctl(device2_fd, VFIO_BIND_IOASID_FD, ioasid_fd);
ioctl(device2_fd, VFIO_ATTACH_IOASID, &at_data);

Right?

> 
> 5.2. Multiple IOASIDs (no nesting)
> 
> ++++++++++++++++++++++++++++++++++
> Dev1 and dev2 are assigned to the guest. vIOMMU is enabled. Initially
> both devices are attached to gpa_ioasid. After boot the guest creates 
> a GIOVA address space (giova_ioasid) for dev2, leaving dev1 in pass
> through mode (gpa_ioasid).
> 
> Suppose IOASID nesting is not supported in this case. QEMU needs to
> generate shadow mappings in userspace for giova_ioasid (like how
> VFIO works today).
> 
> To avoid duplicated locked page accounting, it's recommended to pre-
> register the virtual address range that will be used for DMA:
> 
>   device_fd1 = open("/dev/vfio/devices/dev1", mode);
>   device_fd2 = open("/dev/vfio/devices/dev2", mode);
>   ioasid_fd = open("/dev/ioasid", mode);
>   ioctl(device_fd1, VFIO_BIND_IOASID_FD, ioasid_fd);
>   ioctl(device_fd2, VFIO_BIND_IOASID_FD, ioasid_fd);
> 
>   /* pre-register the virtual address range for accounting */
>   mem_info = { .vaddr = 0x4000; .size = 1GB };
>   ioctl(ioasid_fd, IOASID_REGISTER_MEMORY, &mem_info);
> 
>   /* Attach dev1 and dev2 to gpa_ioasid */
>   gpa_ioasid = ioctl(ioasid_fd, IOASID_ALLOC);
>   at_data = { .ioasid = gpa_ioasid};
>   ioctl(device_fd1, VFIO_ATTACH_IOASID, &at_data);
>   ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> 
>   /* Setup GPA mapping */
>   dma_map = {
>   .ioasid = gpa_ioasid;
>   .iova   = 0;// GPA
>   .vaddr  = 0x4000;   // HVA
>   .size   = 1GB;
>   };
>   ioctl(ioasid_fd, IOASID_DMA_MAP, &dma_map);
> 
>   /* After boot, guest enables a GIOVA space for dev2 */
>   giova_ioasid = ioctl(ioasid_fd, IOASID_ALLOC);
> 
>   /* First detach dev2 from previous address space */
>   at_data = { .ioasid = gpa_ioasid};
>   ioctl(device_fd2, VFIO_DETACH_IOASID, &at_data);
> 
>   /* Then attach dev2 to the new address space */
>   at_data = { .ioasid = giova_ioasid};
>   ioctl(device_fd2, VFIO_ATTACH_IOASID, &at_data);
> 
>   /* Setup a shadow DMA mapping according to vIOMMU
> 

Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jason Gunthorpe
On Thu, May 27, 2021 at 07:58:12AM +, Tian, Kevin wrote:

> IOASID nesting can be implemented in two ways: hardware nesting and 
> software nesting. With hardware support the child and parent I/O page 
> tables are walked consecutively by the IOMMU to form a nested translation. 
> When it's implemented in software, the ioasid driver is responsible for 
> merging the two-level mappings into a single-level shadow I/O page table. 
> Software nesting requires both child/parent page tables operated through 
> the dma mapping protocol, so any change in either level can be captured 
> by the kernel to update the corresponding shadow mapping.

Why? A SW emulation could do this synchronization during invalidation
processing if invalidation contained an IOVA range.
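
eg something like this in the ioasid driver (sketch with invented
names, just to show the idea):

    /* On a ranged invalidate of the child IOASID: re-walk both levels
     * for the affected range and refresh the shadow, instead of
     * trapping every map/unmap through the dma mapping protocol. */
    static void shadow_sync_range(struct ioasid *child, u64 iova, u64 size)
    {
            u64 cur;

            for (cur = iova; cur < iova + size; cur += PAGE_SIZE) {
                    u64 ipa = child_walk(child, cur);       /* guest level */

                    if (ipa == INVALID_ADDR)
                            shadow_unmap(child, cur);
                    else
                            shadow_map(child, cur,
                                       parent_walk(child->parent, ipa));
            }
    }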

I think this document would be stronger if it included some "Rationale"
statements in key places.

> Based on the underlying IOMMU capability one device might be allowed 
> to attach to multiple I/O address spaces, with DMAs accessing them by 
> carrying different routing information. One of them is the default I/O 
> address space routed by PCI Requestor ID (RID) or ARM Stream ID. The 
> remaining are routed by RID + Process Address Space ID (PASID) or 
> Stream+Substream ID. For simplicity the following context uses RID and
> PASID when talking about the routing information for I/O address spaces.

I wonder if we should just adopt the ARM naming as the API
standard. It is general and doesn't have the SVA connotation that
"Process Address Space ID" carries.
 
> Device must be bound to an IOASID FD before attach operation can be
> conducted. This is also through VFIO uAPI. In this proposal one device 
> should not be bound to multiple FD's. Not sure about the gain of 
> allowing it except adding unnecessary complexity. But if others have 
> different view we can further discuss.

Unless there is some internal kernel design reason to block it, I
wouldn't go out of my way to prevent it.

> VFIO must ensure its device composes DMAs with the routing information
> attached to the IOASID. For pdev it naturally happens since vPASID is 
> directly programmed to the device by guest software. For mdev this 
> implies any guest operation carrying a vPASID on this device must be 
> trapped into VFIO and then converted to pPASID before sent to the 
> device. A detail explanation about PASID virtualization policies can be 
> found in section 4. 

vPASID and related seems like it needs other IOMMU vendors to take a
very careful look. I'm really glad to see this starting to be spelled
out in such a clear way, as it was hard to see from the patches there
is vendor variation.

> With above design /dev/ioasid uAPI is all about I/O address spaces. 
> It doesn't include any device routing information, which is only 
> indirectly registered to the ioasid driver through VFIO uAPI. For
> example, I/O page fault is always reported to userspace per IOASID,
> although it's physically reported per device (RID+PASID). 

I agree with Jean-Philippe - at the very least erasing this
information needs a major rationale - but I don't really see why it
must be erased? The HW reports the originating device, is it just a
matter of labeling the devices attached to the /dev/ioasid FD so it
can be reported to userspace?
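
eg the fault record delivered on the IOASID could just carry the
label - invented layout, only to show the idea:

    struct ioasid_fault_event {
            __u32 ioasid;
            __u32 dev_label;    /* user-assigned label of the faulting device */
            __u32 pasid;        /* physical routing detail, if exposed */
            __u32 flags;
            __u64 iova;
    };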

> multiple attached devices) and then generates a per-device virtual I/O 
> page fault into guest. Similarly the iotlb invalidation uAPI describes the 
> granularity in the I/O address space (all, or a range), different from the 
> underlying IOMMU semantics (domain-wide, PASID-wide, range-based).

This seems OK though, I can't think of a reason to allow an IOASID to
be left partially invalidated???
 
> I/O page tables routed through PASID are installed in a per-RID PASID 
> table structure. Some platforms implement the PASID table in the guest 
> physical space (GPA), expecting it managed by the guest. The guest
> PASID table is bound to the IOMMU also by attaching to an IOASID, 
> representing the per-RID vPASID space. 
> 
> We propose the host kernel needs to explicitly track guest I/O page
> tables even on these platforms, i.e. the same pgtable binding protocol 
> should be used universally on all platforms (with only difference on who
> actually writes the PASID table). One opinion from previous discussion 
> was treating this special IOASID as a container for all guest I/O page 
> tables i.e. hiding them from the host. 

> However this way significantly 
> violates the philosophy in this /dev/ioasid proposal. It is not one IOASID 
> one address space any more. Device routing information (indirectly 
> marking hidden I/O spaces) has to be carried in iotlb invalidation and 
> page faulting uAPI to help connect vIOMMU with the underlying 
> pIOMMU. This is one design choice to be confirmed with ARM guys.

I'm confused by this rationale.

For a vIOMMU that has IO page tables in the guest the basic
choices are:
 - Do we have a hypervisor trap to bind the page table or not? (RID

Re: [PATCH v2 00/10] arm64: tegra: Prevent early SMMU faults

2021-05-28 Thread Thierry Reding
On Tue, Apr 20, 2021 at 07:26:09PM +0200, Thierry Reding wrote:
> From: Thierry Reding 
> 
> Hi,
> 
> this is a set of patches that is the result of earlier discussions
> regarding early identity mappings that are needed to avoid SMMU faults
> during early boot.
> 
> The goal here is to avoid early identity mappings altogether and instead
> postpone the need for the identity mappings to when devices are attached
> to the SMMU. This works by making the SMMU driver coordinate with the
> memory controller driver on when to start enforcing SMMU translations.
> This makes Tegra behave in a more standard way and pushes the code to
> deal with the Tegra-specific programming into the NVIDIA SMMU
> implementation.
> 
> Compared to the original version of these patches, I've split the
> preparatory work into a separate patch series because it became very
> large and will be mostly uninteresting for this audience.
> 
> Patch 1 provides a mechanism to program SID overrides at runtime. Patch
> 2 updates the ARM SMMU device tree bindings to include the Tegra186
> compatible string as suggested by Robin during review.
> 
> Patches 3 and 4 create the fundamentals in the SMMU driver to support
> this and also make this functionality available on Tegra186. Patch 5
> hooks the ARM SMMU up to the memory controller so that the memory client
> stream ID overrides can be programmed at the right time.
> 
> Patch 6 extends this mechanism to Tegra186 and patches 7-9 enable all of
> this through device tree updates. Patch 10 is included here to show how
> SMMU will be enabled for display controllers. However, it cannot be
> applied yet because the code to create identity mappings for potentially
> live framebuffers hasn't been merged yet.
> 
> The end result is that various peripherals will have SMMU enabled, while
> the display controllers will keep using passthrough, as initially set up
> by firmware. Once the device tree bindings have been accepted and the
> SMMU driver has been updated to create identity mappings for the display
> controllers, they can be hooked up to the SMMU and the code in this
> series will automatically program the SID overrides to enable SMMU
> translations at the right time.
> 
> Note that the series creates a compile time dependency between the
> memory controller and IOMMU trees. If it helps I can provide a branch
> for each tree, modelling the dependency, once the series has been
> reviewed.
> 
> Changes in v2:
> - split off the preparatory work into a separate series (that needs to
>   be applied first)
> - address review comments by Robin
> 
> Thierry
> 
> Thierry Reding (10):
>   memory: tegra: Implement SID override programming
>   dt-bindings: arm-smmu: Add Tegra186 compatible string
>   iommu/arm-smmu: Implement ->probe_finalize()
>   iommu/arm-smmu: tegra: Detect number of instances at runtime
>   iommu/arm-smmu: tegra: Implement SID override programming
>   iommu/arm-smmu: Use Tegra implementation on Tegra186
>   arm64: tegra: Use correct compatible string for Tegra186 SMMU
>   arm64: tegra: Hook up memory controller to SMMU on Tegra186
>   arm64: tegra: Enable SMMU support on Tegra194
>   arm64: tegra: Enable SMMU support for display on Tegra194
> 
>  .../devicetree/bindings/iommu/arm,smmu.yaml   |  11 +-
>  arch/arm64/boot/dts/nvidia/tegra186.dtsi  |   4 +-
>  arch/arm64/boot/dts/nvidia/tegra194.dtsi  | 166 ++
>  drivers/iommu/arm/arm-smmu/arm-smmu-impl.c|   3 +-
>  drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c  |  90 --
>  drivers/iommu/arm/arm-smmu/arm-smmu.c |  13 ++
>  drivers/iommu/arm/arm-smmu/arm-smmu.h |   1 +
>  drivers/memory/tegra/mc.c |   9 +
>  drivers/memory/tegra/tegra186.c   |  72 
>  include/soc/tegra/mc.h|   3 +
>  10 files changed, 349 insertions(+), 23 deletions(-)

Will, Robin,

do you have any more comments on the ARM SMMU bits of this series? If
not, can you guys provide an Acked-by so that Krzysztof can pick this
(modulo the DT patches) up into the memory-controller tree for v5.14?

I'll send out a v3 with the bisectability fix that Krishna pointed
out.

Thanks,
Thierry



Re: [PATCH v2 1/5] dt-bindings: reserved-memory: Document memory region specifier

2021-05-28 Thread Thierry Reding
On Thu, May 20, 2021 at 05:03:06PM -0500, Rob Herring wrote:
> On Fri, Apr 23, 2021 at 06:32:30PM +0200, Thierry Reding wrote:
> > From: Thierry Reding 
> > 
> > Reserved memory region phandle references can be accompanied by a
> > specifier that provides additional information about how that specific
> > reference should be treated.
> > 
> > One use-case is to mark a memory region as needing an identity mapping
> > in the system's IOMMU for the device that references the region. This is
> > needed for example when the bootloader has set up hardware (such as a
> > display controller) to actively access a memory region (e.g. a boot
> > splash screen framebuffer) during boot. The operating system can use the
> > identity mapping flag from the specifier to make sure an IOMMU identity
> > mapping is set up for the framebuffer before IOMMU translations are
> > enabled for the display controller.
> > 
> > Signed-off-by: Thierry Reding 
> > ---
> >  .../reserved-memory/reserved-memory.txt   | 21 +++
> >  include/dt-bindings/reserved-memory.h |  8 +++
> >  2 files changed, 29 insertions(+)
> >  create mode 100644 include/dt-bindings/reserved-memory.h
> 
> Sorry for being slow on this. I have 2 concerns.
> 
> First, this creates an ABI issue. A DT with cells in 'memory-region' 
> will not be understood by an existing OS. I'm less concerned about this 
> if we address that with a stable fix. (Though I'm pretty sure we've 
> naively added #?-cells in the past ignoring this issue.)

A while ago I had proposed adding memory-region*s* as an alternative
name for memory-region to make the naming more consistent with other
types of properties (think clocks, resets, gpios, ...). If we added
that, we could easily differentiate between the "legacy" cases where
no #memory-region-cells was allowed and the new cases where it was.

> Second, it could be the bootloader setting up the reserved region. If a 
> node already has 'memory-region', then adding more regions is more 
> complicated compared to adding new properties. And defining what each 
> memory-region entry is or how many in schemas is impossible.

It's true that updating the property gets a bit complicated, but it's
not exactly rocket science. We really just need to splice the array. I
have a working implementation for this in U-Boot.
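
The gist of it, for plain phandles without specifier cells (a rough
sketch from memory using libfdt, not the actual U-Boot code):

    #include <libfdt.h>
    #include <string.h>

    int append_memory_region(void *fdt, int node, uint32_t phandle)
    {
            const void *old;
            int len;

            old = fdt_getprop(fdt, node, "memory-region", &len);
            if (!old)
                    return fdt_setprop_u32(fdt, node, "memory-region", phandle);

            /* splice: copy the existing entries, append the new one */
            fdt32_t buf[len / sizeof(fdt32_t) + 1];

            memcpy(buf, old, len);
            buf[len / sizeof(fdt32_t)] = cpu_to_fdt32(phandle);
            return fdt_setprop(fdt, node, "memory-region", buf,
                               len + sizeof(fdt32_t));
    }

With #memory-region-cells in the mix, the splice additionally has to
account for the specifier cells per entry, but the principle is the
same.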

For what it's worth, we could run into the same issue with any new
property that we add. Even if we renamed this to iommu-memory-region,
it's still possible that a bootloader may have to update this property
if it already exists (it could be hard-coded in DT, or it could have
been added by some earlier bootloader or firmware).

> Both could be addressed with a new property. Perhaps something like 
> 'iommu-memory-region = <&phandle>;'. I think the 'iommu' prefix is 
> appropriate given this is entirely because of the IOMMU being in the 
> mix. I might feel differently if we had other uses for cells, but I 
> don't really see it in this case. 

I'm afraid that down the road we'll end up with other cases and then we
might proliferate a number of *-memory-region properties with varying
prefixes.

I am aware of one other case where we might need something like this: on
some Tegra SoCs we have audio processors that will access memory buffers
using a DMA engine. These processors are booted from early firmware
using firmware from system memory. In order to avoid trashing the
firmware, we need to reserve memory. We can do this using reserved
memory nodes. However, the audio DMA engine also uses the SMMU, so we
need to make sure that the firmware memory is marked as reserved within
the SMMU. This is similar to the identity mapping case, but not exactly
the same. Instead of creating a 1:1 mapping, we just want that IOVA
region to be reserved (i.e. IOMMU_RESV_RESERVED instead of
IOMMU_RESV_DIRECT{,_RELAXABLE}).

That would also fall into the IOMMU domain, but we can't reuse the
iommu-memory-region property for that because then we don't have enough
information to decide which type of reservation we need.

We could obviously make iommu-memory-region take a specifier, but we
could just as well use memory-regions in that case since we have
something more generic anyway.

With the #memory-region-cells proposal, we can easily extend the cell in
the specifier with an additional MEMORY_REGION_IOMMU_RESERVE flag to
take that other use case into account. If we then also change to the new
memory-regions property name, we avoid the ABI issue (and we gain a bit
of consistency while at it).
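
ie the dt-bindings header would just grow a second flag next to the
identity mapping one (sketch; only the identity mapping flag exists in
the posted patch, the reserve flag is the hypothetical addition):

    /* include/dt-bindings/reserved-memory.h */
    #define MEMORY_REGION_IDENTITY_MAPPING      (1 << 0)
    /* hypothetical: reserve the IOVA range, no 1:1 mapping */
    #define MEMORY_REGION_IOMMU_RESERVE         (1 << 1)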

Thierry



Re: [PATCH v4 21/26] iommu/arm-smmu-v3: Ratelimit event dump

2021-05-28 Thread Jean-Philippe Brucker
Hi Aaro,

On Fri, May 28, 2021 at 11:09:58AM +0300, Aaro Koskinen wrote:
> Hi,
> 
> On Mon, Feb 24, 2020 at 07:23:56PM +0100, Jean-Philippe Brucker wrote:
> > When a device or driver misbehaves, it is possible to receive events
> > much faster than we can print them out. Ratelimit the printing of
> > events.
> > 
> > Signed-off-by: Jean-Philippe Brucker 
> 
> Tested-by: Aaro Koskinen 
> 
> > During the SVA tests when the device driver didn't properly stop DMA
> > before unbinding, the event queue thread would almost lock-up the server
> > with a flood of event 0xa. This patch helped recover from the error.
> 
> I was just debugging a similar case, and this patch was required to
> prevent system from locking up.
> 
> Could you please resend this patch independently from the other patches
> in the series, as it seems it's a worthwhile fix and still relevant for
> current kernels. Thanks,

OK, I'll resend it.

Thanks,
Jean

> 
> A.
> 
> > ---
> >  drivers/iommu/arm-smmu-v3.c | 13 -
> >  1 file changed, 8 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> > index 28f8583cd47b..6a5987cce03f 100644
> > --- a/drivers/iommu/arm-smmu-v3.c
> > +++ b/drivers/iommu/arm-smmu-v3.c
> > @@ -2243,17 +2243,20 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
> > struct arm_smmu_device *smmu = dev;
> > struct arm_smmu_queue *q = &smmu->evtq.q;
> > struct arm_smmu_ll_queue *llq = &q->llq;
> > +   static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
> > + DEFAULT_RATELIMIT_BURST);
> > u64 evt[EVTQ_ENT_DWORDS];
> >  
> > do {
> > while (!queue_remove_raw(q, evt)) {
> > u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
> >  
> > -   dev_info(smmu->dev, "event 0x%02x received:\n", id);
> > -   for (i = 0; i < ARRAY_SIZE(evt); ++i)
> > -   dev_info(smmu->dev, "\t0x%016llx\n",
> > -(unsigned long long)evt[i]);
> > -
> > +   if (__ratelimit(&rs)) {
> > +   dev_info(smmu->dev, "event 0x%02x received:\n", id);
> > +   for (i = 0; i < ARRAY_SIZE(evt); ++i)
> > +   dev_info(smmu->dev, "\t0x%016llx\n",
> > +(unsigned long long)evt[i]);
> > +   }
> > }
> >  
> > /*


Re: [RFC] /dev/ioasid uAPI proposal

2021-05-28 Thread Jean-Philippe Brucker
On Thu, May 27, 2021 at 07:58:12AM +, Tian, Kevin wrote:
> /dev/ioasid provides a unified interface for managing I/O page tables for
> devices assigned to userspace. Device passthrough frameworks (VFIO, vDPA,
> etc.) are expected to use this interface instead of creating their own logic
> to isolate untrusted device DMAs initiated by userspace.
> 
> This proposal describes the uAPI of /dev/ioasid and also sample sequences 
> with VFIO as example in typical usages. The driver-facing kernel API provided 
> by the iommu layer is still TBD, which can be discussed after consensus is 
> made on this uAPI.
> 
> It's based on a lengthy discussion starting from here:
>   
> https://lore.kernel.org/linux-iommu/20210330132830.go2356...@nvidia.com/ 
> 
> It ends up being a long write-up due to many things to be summarized and
> non-trivial effort required to connect them into a complete proposal.
> Hope it provides a clean base to converge.

Firstly thanks for writing this up and for your patience. I've not read in
detail the second half yet, will take another look later.

> 1. Terminologies and Concepts
> -
> 
> IOASID FD is the container holding multiple I/O address spaces. User 
> manages those address spaces through FD operations. Multiple FD's are 
> allowed per process, but with this proposal one FD should be sufficient for 
> all intended usages.
> 
> IOASID is the FD-local software handle representing an I/O address space. 
> Each IOASID is associated with a single I/O page table. IOASIDs can be 
> nested together, implying the output address from one I/O page table 
> (represented by child IOASID) must be further translated by another I/O 
> page table (represented by parent IOASID).
> 
> I/O address space can be managed through two protocols, according to 
> whether the corresponding I/O page table is constructed by the kernel or 
> the user. When kernel-managed, a dma mapping protocol (similar to 
> existing VFIO iommu type1) is provided for the user to explicitly specify 
> how the I/O address space is mapped. Otherwise, a different protocol is 
> provided for the user to bind a user-managed I/O page table to the
> IOMMU, plus necessary commands for iotlb invalidation and I/O fault 
> handling. 
> 
> Pgtable binding protocol can be used only on the child IOASID's, implying 
> IOASID nesting must be enabled. This is because the kernel doesn't trust 
> userspace. Nesting allows the kernel to enforce its DMA isolation policy 
> through the parent IOASID.
> 
> IOASID nesting can be implemented in two ways: hardware nesting and 
> software nesting. With hardware support the child and parent I/O page 
> tables are walked consecutively by the IOMMU to form a nested translation. 
> When it's implemented in software, the ioasid driver is responsible for 
> merging the two-level mappings into a single-level shadow I/O page table. 
> Software nesting requires both child/parent page tables operated through 
> the dma mapping protocol, so any change in either level can be captured 
> by the kernel to update the corresponding shadow mapping.

Is there an advantage to moving software nesting into the kernel?
We could just have the guest do its usual combined map/unmap on the child
fd.

> 
> An I/O address space takes effect in the IOMMU only after it is attached 
> to a device. The device in the /dev/ioasid context always refers to a 
> physical one or 'pdev' (PF or VF). 
> 
> One I/O address space could be attached to multiple devices. In this case, 
> /dev/ioasid uAPI applies to all attached devices under the specified IOASID.
> 
> Based on the underlying IOMMU capability one device might be allowed 
> to attach to multiple I/O address spaces, with DMAs accessing them by 
> carrying different routing information. One of them is the default I/O 
> address space routed by PCI Requestor ID (RID) or ARM Stream ID. The 
> remaining are routed by RID + Process Address Space ID (PASID) or 
> Stream+Substream ID. For simplicity the following context uses RID and
> PASID when talking about the routing information for I/O address spaces.
> 
> Device attachment is initiated through passthrough framework uAPI (use
> VFIO for simplicity in following context). VFIO is responsible for 
> identifying 
> the routing information and registering it to the ioasid driver when calling 
> ioasid attach helper function. It could be RID if the assigned device is 
> pdev (PF/VF) or RID+PASID if the device is mediated (mdev). In addition, 
> user might also provide its view of virtual routing information (vPASID) in 
> the attach call, e.g. when multiple user-managed I/O address spaces are 
> attached to the vfio_device. In this case VFIO must figure out whether 
> vPASID should be directly used (for pdev) or converted to a kernel-
> allocated one (pPASID, for mdev) for physical routing (see section 4).
> 
> Device must be bound to an IOASID FD before attach operation can be
> conducted. Th

Re: [PATCH 2/2] iommu: Drop unnecessary of_iommu.h includes

2021-05-28 Thread Heiko Stübner
On Thursday, 27 May 2021 at 21:37:10 CEST, Rob Herring wrote:
> The only place of_iommu.h is needed is in drivers/of/device.c. Remove it
> from everywhere else.
> 
> Cc: Will Deacon 
> Cc: Robin Murphy 
> Cc: Joerg Roedel 
> Cc: Rob Clark 
> Cc: Marek Szyprowski 
> Cc: Krzysztof Kozlowski 
> Cc: Bjorn Andersson 
> Cc: Yong Wu 
> Cc: Matthias Brugger 
> Cc: Heiko Stuebner 
> Cc: Jean-Philippe Brucker 
> Cc: Frank Rowand 
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: iommu@lists.linux-foundation.org
> Signed-off-by: Rob Herring 

> diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
> index 7a2932772fdf..bb50e015b1d5 100644
> --- a/drivers/iommu/rockchip-iommu.c
> +++ b/drivers/iommu/rockchip-iommu.c
> @@ -21,7 +21,6 @@
>  #include <linux/mm.h>
>  #include <linux/init.h>
>  #include <linux/of.h>
> -#include <linux/of_iommu.h>
>  #include <linux/of_platform.h>
>  #include <linux/platform_device.h>
>  #include <linux/pm_runtime.h>

for Rockchip:
Acked-by: Heiko Stuebner 




[PATCH] iommu: Print default strict or lazy mode at init time

2021-05-28 Thread John Garry
As well as the default domain type, it's useful to know whether strict
or lazy mode is default for DMA domains, so add this info in a separate
print.

Signed-off-by: John Garry 

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 808ab70d5df5..f25fae62f077 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -138,6 +138,11 @@ static int __init iommu_subsys_init(void)
(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API) ?
"(set via kernel command line)" : "");
 
+   pr_info("Default DMA domain mode: %s %s\n",
+   iommu_dma_strict ? "strict" : "lazy",
+   (iommu_cmd_line & IOMMU_CMD_LINE_STRICT) ?
+   "(set via kernel command line)" : "");
+
return 0;
 }
 subsys_initcall(iommu_subsys_init);
-- 
2.26.2



[PATCH -next] iommu/vt-d: use DEVICE_ATTR_RO macro

2021-05-28 Thread YueHaibing
Use DEVICE_ATTR_RO() helper instead of plain DEVICE_ATTR(),
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing 
---
 drivers/iommu/intel/iommu.c | 42 -
 1 file changed, 18 insertions(+), 24 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index be35284a2016..0638ea8f6f7d 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -4138,62 +4138,56 @@ static inline struct intel_iommu *dev_to_intel_iommu(struct device *dev)
return container_of(iommu_dev, struct intel_iommu, iommu);
 }
 
-static ssize_t intel_iommu_show_version(struct device *dev,
-   struct device_attribute *attr,
-   char *buf)
+static ssize_t version_show(struct device *dev,
+   struct device_attribute *attr, char *buf)
 {
struct intel_iommu *iommu = dev_to_intel_iommu(dev);
u32 ver = readl(iommu->reg + DMAR_VER_REG);
return sprintf(buf, "%d:%d\n",
   DMAR_VER_MAJOR(ver), DMAR_VER_MINOR(ver));
 }
-static DEVICE_ATTR(version, S_IRUGO, intel_iommu_show_version, NULL);
+static DEVICE_ATTR_RO(version);
 
-static ssize_t intel_iommu_show_address(struct device *dev,
-   struct device_attribute *attr,
-   char *buf)
+static ssize_t address_show(struct device *dev,
+   struct device_attribute *attr, char *buf)
 {
struct intel_iommu *iommu = dev_to_intel_iommu(dev);
return sprintf(buf, "%llx\n", iommu->reg_phys);
 }
-static DEVICE_ATTR(address, S_IRUGO, intel_iommu_show_address, NULL);
+static DEVICE_ATTR_RO(address);
 
-static ssize_t intel_iommu_show_cap(struct device *dev,
-   struct device_attribute *attr,
-   char *buf)
+static ssize_t cap_show(struct device *dev,
+   struct device_attribute *attr, char *buf)
 {
struct intel_iommu *iommu = dev_to_intel_iommu(dev);
return sprintf(buf, "%llx\n", iommu->cap);
 }
-static DEVICE_ATTR(cap, S_IRUGO, intel_iommu_show_cap, NULL);
+static DEVICE_ATTR_RO(cap);
 
-static ssize_t intel_iommu_show_ecap(struct device *dev,
-   struct device_attribute *attr,
-   char *buf)
+static ssize_t ecap_show(struct device *dev,
+struct device_attribute *attr, char *buf)
 {
struct intel_iommu *iommu = dev_to_intel_iommu(dev);
return sprintf(buf, "%llx\n", iommu->ecap);
 }
-static DEVICE_ATTR(ecap, S_IRUGO, intel_iommu_show_ecap, NULL);
+static DEVICE_ATTR_RO(ecap);
 
-static ssize_t intel_iommu_show_ndoms(struct device *dev,
- struct device_attribute *attr,
- char *buf)
+static ssize_t domains_supported_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
 {
struct intel_iommu *iommu = dev_to_intel_iommu(dev);
return sprintf(buf, "%ld\n", cap_ndoms(iommu->cap));
 }
-static DEVICE_ATTR(domains_supported, S_IRUGO, intel_iommu_show_ndoms, NULL);
+static DEVICE_ATTR_RO(domains_supported);
 
-static ssize_t intel_iommu_show_ndoms_used(struct device *dev,
-  struct device_attribute *attr,
-  char *buf)
+static ssize_t domains_used_show(struct device *dev,
+struct device_attribute *attr, char *buf)
 {
struct intel_iommu *iommu = dev_to_intel_iommu(dev);
return sprintf(buf, "%d\n", bitmap_weight(iommu->domain_ids,
  cap_ndoms(iommu->cap)));
 }
-static DEVICE_ATTR(domains_used, S_IRUGO, intel_iommu_show_ndoms_used, NULL);
+static DEVICE_ATTR_RO(domains_used);
 
 static struct attribute *intel_iommu_attrs[] = {
&dev_attr_version.attr,
-- 
2.17.1



[PATCH -next] iommu/amd: use DEVICE_ATTR_RO macro

2021-05-28 Thread YueHaibing
Use DEVICE_ATTR_RO() helper instead of plain DEVICE_ATTR(),
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing 
---
 drivers/iommu/amd/init.c | 14 ++
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
index d006724f4dc2..4ffb694bd297 100644
--- a/drivers/iommu/amd/init.c
+++ b/drivers/iommu/amd/init.c
@@ -1731,23 +1731,21 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
return;
 }
 
-static ssize_t amd_iommu_show_cap(struct device *dev,
- struct device_attribute *attr,
- char *buf)
+static ssize_t cap_show(struct device *dev,
+   struct device_attribute *attr, char *buf)
 {
struct amd_iommu *iommu = dev_to_amd_iommu(dev);
return sprintf(buf, "%x\n", iommu->cap);
 }
-static DEVICE_ATTR(cap, S_IRUGO, amd_iommu_show_cap, NULL);
+static DEVICE_ATTR_RO(cap);
 
-static ssize_t amd_iommu_show_features(struct device *dev,
-  struct device_attribute *attr,
-  char *buf)
+static ssize_t features_show(struct device *dev,
+struct device_attribute *attr, char *buf)
 {
struct amd_iommu *iommu = dev_to_amd_iommu(dev);
return sprintf(buf, "%llx\n", iommu->features);
 }
-static DEVICE_ATTR(features, S_IRUGO, amd_iommu_show_features, NULL);
+static DEVICE_ATTR_RO(features);
 
 static struct attribute *amd_iommu_attrs[] = {
&dev_attr_cap.attr,
-- 
2.17.1



Re: [PATCH v4 21/26] iommu/arm-smmu-v3: Ratelimit event dump

2021-05-28 Thread Aaro Koskinen
Hi,

On Mon, Feb 24, 2020 at 07:23:56PM +0100, Jean-Philippe Brucker wrote:
> When a device or driver misbehaves, it is possible to receive events
> much faster than we can print them out. Ratelimit the printing of
> events.
> 
> Signed-off-by: Jean-Philippe Brucker 

Tested-by: Aaro Koskinen 

> During the SVA tests when the device driver didn't properly stop DMA
> before unbinding, the event queue thread would almost lock-up the server
> with a flood of event 0xa. This patch helped recover from the error.

I was just debugging a similar case, and this patch was required to
prevent system from locking up.

Could you please resend this patch independently from the other patches
in the series, as it seems it's a worthwhile fix and still relevant for
current kernels. Thanks,

A.

> ---
>  drivers/iommu/arm-smmu-v3.c | 13 -
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
> index 28f8583cd47b..6a5987cce03f 100644
> --- a/drivers/iommu/arm-smmu-v3.c
> +++ b/drivers/iommu/arm-smmu-v3.c
> @@ -2243,17 +2243,20 @@ static irqreturn_t arm_smmu_evtq_thread(int irq, void *dev)
>   struct arm_smmu_device *smmu = dev;
>   struct arm_smmu_queue *q = &smmu->evtq.q;
>   struct arm_smmu_ll_queue *llq = &q->llq;
> + static DEFINE_RATELIMIT_STATE(rs, DEFAULT_RATELIMIT_INTERVAL,
> +   DEFAULT_RATELIMIT_BURST);
>   u64 evt[EVTQ_ENT_DWORDS];
>  
>   do {
>   while (!queue_remove_raw(q, evt)) {
>   u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
>  
> - dev_info(smmu->dev, "event 0x%02x received:\n", id);
> - for (i = 0; i < ARRAY_SIZE(evt); ++i)
> - dev_info(smmu->dev, "\t0x%016llx\n",
> -  (unsigned long long)evt[i]);
> -
> + if (__ratelimit(&rs)) {
> + dev_info(smmu->dev, "event 0x%02x received:\n", id);
> + for (i = 0; i < ARRAY_SIZE(evt); ++i)
> + dev_info(smmu->dev, "\t0x%016llx\n",
> +  (unsigned long long)evt[i]);
> + }
>   }
>  
>   /*