On 2020/10/15 7:10 AM, Alex Williamson wrote:
On Wed, 14 Oct 2020 03:08:31 +0000
"Tian, Kevin" <kevin.t...@intel.com> wrote:

From: Jason Wang <jasow...@redhat.com>
Sent: Tuesday, October 13, 2020 2:22 PM


On 2020/10/12 4:38 PM, Tian, Kevin wrote:
From: Jason Wang <jasow...@redhat.com>
Sent: Monday, September 14, 2020 12:20 PM
[...]
If it's possible, I would suggest a generic uAPI instead of a
VFIO-specific one.

Jason suggests something like /dev/sva. There will be a lot of other
subsystems that could benefit from this (e.g. vDPA).

Have you ever considered this approach?
Hi, Jason,

We did some study on this approach and below is the output. It's a
long write-up, but I didn't find a way to abstract it further without
losing necessary context. Sorry about that.

Overall the real purpose of this series is to enable IOMMU nested
translation capability with vSVA as one major usage, through
below new uAPIs:
        1) Report/enable IOMMU nested translation capability;
        2) Allocate/free PASID;
        3) Bind/unbind guest page table;
        4) Invalidate IOMMU cache;
        5) Handle IOMMU page request/response (not in this series);
1), 3) and 4) form the minimal set for using IOMMU nested translation;
the other two are optional. For example, the guest may enable vSVA on
a device without using PASID. Or, it may bind its gIOVA page table,
which doesn't require page fault support. Finally, all operations can
be applied to either a physical device or a subdevice.
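
To make the shape of these operations concrete, here is a rough sketch
of how a VMM might drive 1)-4) through a passthrough framework's fd.
Every X_IOMMU_* ioctl and the struct below are illustrative placeholders
only, not the uAPI actually proposed in this series:

#include <linux/ioctl.h>
#include <sys/ioctl.h>
#include <stdint.h>

/* placeholders for illustration, not real uAPI */
#define X_IOMMU_ENABLE_NESTING   _IO('x', 0)  /* 1) report/enable nesting  */
#define X_IOMMU_ALLOC_PASID      _IO('x', 1)  /* 2) allocate a PASID       */
#define X_IOMMU_BIND_GPASID      _IO('x', 2)  /* 3) bind guest page table  */
#define X_IOMMU_CACHE_INVALIDATE _IO('x', 3)  /* 4) invalidate IOMMU cache */

struct x_bind_gpasid {
        uint32_t pasid;   /* PASID the guest page table is bound to    */
        uint64_t gpgd;    /* GPA of the guest's first-level page table */
};

static void example_flow(int container_fd, uint64_t gpgd)
{
        uint32_t pasid = 0;
        struct x_bind_gpasid bind;

        ioctl(container_fd, X_IOMMU_ENABLE_NESTING);
        ioctl(container_fd, X_IOMMU_ALLOC_PASID, &pasid);

        bind.pasid = pasid;
        bind.gpgd = gpgd;
        ioctl(container_fd, X_IOMMU_BIND_GPASID, &bind);

        /* on a guest TLB flush, forward the invalidation to the host;
         * the invalidation payload is omitted in this sketch */
        ioctl(container_fd, X_IOMMU_CACHE_INVALIDATE, 0);
}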

Then we evaluated, for each uAPI, whether generalizing it is a good
thing both in concept and in terms of complexity.

First, unlike the other uAPIs, which are all backed by iommu_ops, PASID
allocation/free goes through the IOASID sub-system.

A question here, is IOASID expected to be the single management
interface for PASID?
yes

(I'm asking since there are already vendor-specific, IDA-based PASID
allocators, e.g. amdgpu_pasid_alloc().)
That predates the introduction of the IOASID core. I think it should be
changed to use the new generic interface. Jacob/Jean can comment better
if another reason exists for this exception.

From this angle we feel generalizing PASID management does make some
sense. First, a PASID is just a number and is not related to any device
before it's bound to a page table and an IOMMU domain. Second, PASID is
a global resource (at least on Intel VT-d),

I think we need a definition of "global" here. It looks to me that for
VT-d the PASID table is per-device.
The PASID table is per-device, thus VT-d could in concept support
per-device PASIDs. However, on Intel platforms we require PASIDs to be
managed system-wide (across host and guest) when combining vSVA, SIOV,
SR-IOV and ENQCMD together. Thus the host creates only one 'global'
PASID namespace but does use per-device PASID tables to assure isolation
between devices on Intel platforms. But ARM does it differently, as Jean
explained. They have a global namespace for host processes on all
host-owned devices (same as Intel), but a per-device namespace when a
device (and its PASID table) is assigned to userspace.

Another question: is it possible to have two DMAR hardware units (at
least I can see two even on my laptop)? In this case, is PASID still a
global resource?
yes

while having separate VFIO/VDPA allocation interfaces may easily cause
confusion in userspace, e.g. which interface to use if both VFIO and
VDPA devices exist. Moreover, a unified interface allows centralized
control over how many PASIDs are allowed per process.
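
For reference, funneling all allocations through the existing IOASID
core would look roughly like the sketch below (assuming the ~v5.9
DECLARE_IOASID_SET()/ioasid_alloc() interface; the set name and PASID
range are made up):

#include <linux/ioasid.h>

/* one set per consumer (e.g. per process/VM), so allocations can be
 * tracked, and later capped, as a group */
static DECLARE_IOASID_SET(example_pasid_set);

static ioasid_t example_alloc_pasid(void *private)
{
        /* PASID 0 is reserved; the real upper bound would come from the
         * IOMMU's reported PASID width, 20 bits is just an example */
        return ioasid_alloc(&example_pasid_set, 1, (1U << 20) - 1, private);
}

static void example_free_pasid(ioasid_t pasid)
{
        ioasid_free(pasid);
}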

Yes.

One unclear part of this generalization is permission. Do we open this
interface to any process, or only to those which have assigned devices?
If the latter, what would be the mechanism to coordinate between this
new interface and the specific passthrough frameworks?

I'm not sure, but if you just want a permission check, you can probably
introduce a new capability (CAP_XXX) for this.
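
A minimal sketch of that idea, assuming a hypothetical CAP_SVA (no such
capability exists today) gating the /dev/sva open path:

#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/fs.h>

/* no CAP_SVA exists upstream; alias it purely so this sketch compiles.
 * A real series would define a new capability number instead. */
#ifndef CAP_SVA
#define CAP_SVA CAP_SYS_ADMIN
#endif

static int sva_dev_open(struct inode *inode, struct file *filp)
{
        if (!capable(CAP_SVA))
                return -EPERM;

        /* ... set up the per-process PASID/SVA context ... */
        return 0;
}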

A trickier case: vSVA support on ARM (Eric/Jean please correct me)
plans to do a per-device PASID namespace, which is built on a
bind_pasid_table iommu callback to allow the guest to fully manage its
PASIDs on a given passthrough device.

I see, so I think the answer is to prepare for the namespace support
from the start. (btw, I don't see how the namespace is handled in the
current IOASID module?)
The PASID table is based on GPA when nested translation is enabled on
ARM SMMU. This design implies that the guest manages the PASID table,
and thus the PASIDs, instead of going through a host-side API on the
assigned device. From this angle we don't need an explicit namespace in
the host API; we just need a way to control how many PASIDs a process
is allowed to allocate in the global namespace. Btw, the IOASID module
already has a per-process 'set' concept and PASIDs are managed per-set,
so the quota control can easily be introduced at the 'set' level.
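
A sketch of what that 'set'-level quota could look like; note that the
struct and helper below are purely illustrative and not part of the
current IOASID code:

#include <linux/atomic.h>
#include <linux/types.h>

struct example_pasid_quota {
        unsigned int limit;    /* max PASIDs this set (process/VM) may hold */
        atomic_t     nr_used;  /* PASIDs currently allocated from the set   */
};

/* called before ioasid_alloc(); undone with atomic_dec() on free */
static bool example_quota_charge(struct example_pasid_quota *q)
{
        if ((unsigned int)atomic_inc_return(&q->nr_used) > q->limit) {
                atomic_dec(&q->nr_used);
                return false;
        }
        return true;
}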

I'm not sure how such a requirement can be unified without involving
the passthrough frameworks, or whether ARM could also switch to the
global PASID style...

Second, IOMMU nested translation is a per-IOMMU-domain capability.
Since IOMMU domains are managed by VFIO/VDPA (alloc/free domain,
attach/detach device, set/get domain attribute, etc.), reporting/
enabling the nesting capability is a natural extension to the domain
uAPI of the existing passthrough frameworks. Actually, VFIO already
included a nesting enable interface even before this series. So it
doesn't make sense to generalize this uAPI out.

So my understanding is that VFIO already:

1) uses multiple fds
2) separates IOMMU ops into a dedicated container fd (type1 iommu)
3) provides an API to associate devices/groups with a container
This is not really correct, or at least doesn't match my mental model.
A vfio container represents a set of groups (one or more devices per
group), which share an IOMMU model and context.  The user separately
opens a vfio container and group device files.  A group is associated
to the container via ioctl on the group, providing the container fd.
The user then sets the IOMMU model on the container, which selects the
vfio IOMMU uAPI they'll use.  We support multiple IOMMU models where
each vfio IOMMU backend registers a set of callbacks with vfio-core.
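
In user-space terms the sequence above is roughly the following (error
handling omitted, group number made up):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
        int container = open("/dev/vfio/vfio", O_RDWR);
        int group = open("/dev/vfio/26", O_RDWR);       /* example group */

        /* associate the group with the container */
        ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);

        /* select the IOMMU model, i.e. which vfio IOMMU backend to use */
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1v2_IOMMU);

        /* from here: VFIO_IOMMU_MAP_DMA on the container,
         * VFIO_GROUP_GET_DEVICE_FD on the group, etc. */
        return 0;
}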


Yes.



And the proposal in this series is to reuse the container fd. It should
be possible to replace e.g. the type1 IOMMU with a unified module.
yes, this is the alternative option that I raised in the last paragraph.
"[R]euse the container fd" is where I get lost here.  The container is
a fundamental part of vfio.  Does this instead mean to introduce a new
vfio IOMMU backend model?


Yes, a new backend model, or allowing an external module to be used as
its IOMMU backend.


   The module would need to interact with vfio
via vfio_iommu_driver_ops callbacks, so this "unified module" requires
a vfio interface.  I don't understand how this contributes to something
that vdpa would also make use of.


If an external module is allowed, then it could be reused by vDPA and any other subsystems that want to do vSVA.
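
For reference, such a backend (whether built in or external) would plug
into vfio-core roughly as below. This is a partial sketch: only a few of
the vfio_iommu_driver_ops callbacks are shown, and the backend name is
made up:

#include <linux/module.h>
#include <linux/vfio.h>

static void *sva_iommu_open(unsigned long arg)
{
        /* create per-container state for this backend (sketch only) */
        return NULL;
}

static void sva_iommu_release(void *iommu_data)
{
}

static long sva_iommu_ioctl(void *iommu_data, unsigned int cmd,
                            unsigned long arg)
{
        /* PASID alloc / bind / invalidate uAPIs would be dispatched here */
        return -ENOTTY;
}

static const struct vfio_iommu_driver_ops sva_iommu_ops = {
        .name    = "vfio-iommu-sva",       /* hypothetical backend name */
        .owner   = THIS_MODULE,
        .open    = sva_iommu_open,
        .release = sva_iommu_release,
        .ioctl   = sva_iommu_ioctl,
        /* .attach_group/.detach_group and others omitted */
};

static int __init sva_iommu_init(void)
{
        return vfio_register_iommu_driver(&sva_iommu_ops);
}
module_init(sva_iommu_init);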




Then the tricky part comes with the remaining operations (3/4/5),
which are all backed by iommu_ops and thus effective only within an
IOMMU domain. To generalize them, the first thing is to find a way
to associate the sva_FD (opened through the generic /dev/sva) with an
IOMMU domain that is created by VFIO/VDPA. The second thing is to
replicate the {domain <-> device/subdevice} association in the /dev/sva
path, because some operations (e.g. page faults) are triggered/handled
per device/subdevice.

Is there any reason that the #PF cannot be handled via the SVA fd?
Using per-device FDs or multiplexing all fault info through one sva_FD
is just an implementation choice. The key is that faults must be marked
per device/subdevice, which anyway requires a userspace-visible
handle/tag to represent the device/subdevice, and the domain/device
association must be constructed in this new path.

Therefore, /dev/sva must provide both per-domain and per-device uAPIs,
similar to what VFIO/VDPA already do. Moreover, mapping a page fault to
a subdevice requires pre-registering subdevice fault data with the
IOMMU layer when binding the guest page table, while such fault data
can only be retrieved from the parent driver through VFIO/VDPA.

However, we failed to find a good way even for the 1st step, domain
association. IOMMU domains are not exposed to userspace, and there is
no 1:1 mapping between domain and device. In VFIO, all devices within
the same VFIO container share the address space, but they may be
organized into multiple IOMMU domains based on their bus type. How
(should we let) userspace know the domain information and open an
sva_FD for each domain is the main problem here.

The SVA fd is not necessarily opened by userspace directly. It could
be obtained through subsystem-specific uAPIs.

E.g. for vDPA, if a vDPA device contains several vSVA-capable domains,
we can:

1) introduce a uAPI for userspace to know the number of vSVA-capable
domains
2) introduce e.g. VDPA_GET_SVA_FD to get the fd for each vSVA-capable
domain
and also a new interface to notify userspace when a domain disappears
or a device is detached? In the end it looks like we are creating a
complete set of new subsystem-specific uAPIs just to generalize another
set of subsystem-specific uAPIs. Remember, after separating PASID mgmt.
out, most of the remaining vSVA uAPIs are simple wrappers of the IOMMU
API. Replicating them is much simpler than developing a new glue
mechanism in each subsystem.
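
To show what "simple wrapper" means here, the shape such a handler takes
is roughly the below (assuming the iommu_uapi_sva_bind_gpasid()-style
entry points from the recent IOMMU uAPI work; the signature and the
domain/device lookup are approximated, not copied from the series):

#include <linux/iommu.h>

static long example_bind_gpasid(struct iommu_domain *domain,
                                struct device *dev,
                                void __user *udata)
{
        /*
         * The VFIO/VDPA ioctl mostly just locates the right domain and
         * device, then forwards the user buffer to the IOMMU layer.
         */
        return iommu_uapi_sva_bind_gpasid(domain, dev, udata);
}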
Right, I don't see the advantage here; subsystem-specific uAPIs using
common internal interfaces is what was being proposed.


The problem is that if PASID were per-device, this could work. But if
it's not, we will get conflicts if more than one device (subsystem)
wants to use the same PASID to identify the same process address space.
If this is true, we need a uAPI beyond a VFIO-specific one.



In the end we realized that doing such a generalization doesn't really
lead to a clear design and instead requires tight coordination between
/dev/sva and VFIO/VDPA for almost every new uAPI (especially around
synchronization when the domain/device association changes or when the
device/subdevice is being reset/drained). Finally, it may become a
usability burden on userspace to use the two interfaces properly on the
assigned device.

Based on the above analysis, we feel that just generalizing PASID mgmt.
might be a good thing to look at, while the remaining operations are
better off as VFIO/VDPA-specific uAPIs. Anyway, in concept those are
just a subset of the page table management capabilities that an IOMMU
domain affords. Since all other aspects of the IOMMU domain are managed
by VFIO/VDPA already, continuing this path for the new nesting
capability sounds natural. There is another option of generalizing the
entire IOMMU domain management (sort of the entire vfio_iommu_type1),
but it's unclear whether such an intrusive change is worthwhile
(especially when VFIO/VDPA already go different routes even in the
legacy mapping uAPI: map/unmap vs. IOTLB).
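
(For reference, the two legacy mapping paths mentioned here, abbreviated
from the uapi headers: VFIO maps with an ioctl per range, vhost/vDPA
updates the IOTLB with messages.)

#include <linux/types.h>

/* include/uapi/linux/vfio.h: payload of VFIO_IOMMU_MAP_DMA */
struct vfio_iommu_type1_dma_map {
        __u32   argsz;
        __u32   flags;
        __u64   vaddr;          /* process virtual address */
        __u64   iova;           /* IO virtual address      */
        __u64   size;           /* size of mapping (bytes) */
};

/* include/uapi/linux/vhost_types.h: IOTLB update/invalidate message */
struct vhost_iotlb_msg {
        __u64   iova;
        __u64   size;
        __u64   uaddr;
        __u8    perm;           /* VHOST_ACCESS_RO/WO/RW             */
        __u8    type;           /* VHOST_IOTLB_UPDATE/INVALIDATE/... */
};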

Thoughts?

I'm ok with starting with unified PASID management and considering the
unified vSVA/vIOMMU uAPI later.
Glad to see that we have consensus here. :)
I see the benefit in a common PASID quota mechanism rather than the
ad-hoc limits introduced for vfio, but vfio integration does have the
benefit of being tied to device access, whereas it seems a
user will need to be granted some CAP_SVA capability separate from the
device to make use of this interface.  It's possible for vfio to honor
shared limits, just as we make use of locked memory limits shared by
the task, so I'm not sure yet the benefit provided by a separate
userspace interface outside of vfio.  A separate interface also throws
a kink in userspace use of vfio, where we expect the interface is
largely self contained, ie. if a user has access to the vfio group and
container device files, they can fully make use of their device, up to
limits imposed by things like locked memory.  I'm concerned that
management tools will actually need to understand the intended usage of
a device in order to grant new capabilities, file access, and limits to
a process making use of these features.  Hopefully your prototype will
clarify some of those aspects.  Thanks,

Alex
