RE: [PATCH v5 14/15] vfio: Document dual stage control
Hi Eric,

> From: Auger Eric
> Sent: Saturday, July 18, 2020 9:40 PM
>
> Hi Yi,
>
> On 7/12/20 1:21 PM, Liu Yi L wrote:
> > From: Eric Auger
> >
> > The VFIO API was enhanced to support nested stage control: a bunch of
> > new iotcls and usage guideline.
> ioctls

got it.

> > Let's document the process to follow to set up nested mode.
> >
> > Cc: Kevin Tian
> > CC: Jacob Pan
> > Cc: Alex Williamson
> > Cc: Eric Auger
> > Cc: Jean-Philippe Brucker
> > Cc: Joerg Roedel
> > Cc: Lu Baolu
> > Reviewed-by: Stefan Hajnoczi
> > Signed-off-by: Eric Auger
> > Signed-off-by: Liu Yi L
> > ---
> > v3 -> v4:
> > *) add review-by from Stefan Hajnoczi
> >
> > v2 -> v3:
> > *) address comments from Stefan Hajnoczi
> >
> > v1 -> v2:
> > *) new in v2, compared with Eric's original version, pasid table bind
> >    and fault reporting is removed as this series doesn't cover them.
> >    Original version from Eric.
> >    https://lkml.org/lkml/2020/3/20/700
> > ---
> >  Documentation/driver-api/vfio.rst | 67 +++
> >  1 file changed, 67 insertions(+)
> >
> > diff --git a/Documentation/driver-api/vfio.rst
> > b/Documentation/driver-api/vfio.rst
> > index f1a4d3c..0672c45 100644
> > --- a/Documentation/driver-api/vfio.rst
> > +++ b/Documentation/driver-api/vfio.rst
> > @@ -239,6 +239,73 @@ group and can access them as follows::
> >
> >  	/* Gratuitous device reset and go... */
> >  	ioctl(device, VFIO_DEVICE_RESET);
> >
> > +IOMMU Dual Stage Control
> > +------------------------
> > +
> > +Some IOMMUs support 2 stages/levels of translation. Stage corresponds to
> > +the ARM terminology while level corresponds to Intel's VTD terminology.
> > +In the following text we use either without distinction.
> > +
> > +This is useful when the guest is exposed with a virtual IOMMU and some
> > +devices are assigned to the guest through VFIO. Then the guest OS can use
> > +stage 1 (GIOVA -> GPA or GVA->GPA), while the hypervisor uses stage 2 for
> > +VM isolation (GPA -> HPA).
> > +
> > +Under dual stage translation, the guest gets ownership of the stage 1 page
> > +tables and also owns stage 1 configuration structures. The hypervisor owns
> > +the root configuration structure (for security reason), including stage 2
> > +configuration. This works as long as configuration structures and page table
> > +formats are compatible between the virtual IOMMU and the physical IOMMU.
> > +
> > +Assuming the HW supports it, this nested mode is selected by choosing the
> > +VFIO_TYPE1_NESTING_IOMMU type through:
> > +
> > +    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
> > +
> > +This forces the hypervisor to use the stage 2, leaving stage 1 available
> > +for guest usage. The guest stage 1 format depends on IOMMU vendor, and
> > +it is the same with the nesting configuration method. User space should
> > +check the format and configuration method after setting nesting type by
> > +using:
> > +
> > +    ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
> > +
> > +Details can be found in Documentation/userspace-api/iommu.rst. For Intel
> > +VT-d, each stage 1 page table is bound to host by:
> > +
> > +    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
> > +    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
> > +    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> > +
> > +As mentioned above, guest OS may use stage 1 for GIOVA->GPA or GVA->GPA.
> the guest OS, here and below?

got it.

> > +GVA->GPA page tables are available when PASID (Process Address Space ID)
> > +is exposed to guest. e.g. guest with PASID-capable devices assigned. For
> > +such page table binding, the bind_data should include PASID info, which
> > +is allocated by guest itself or by host. This depends on hardware vendor.
> > +e.g. Intel VT-d requires to allocate PASID from host. This requirement is
> > +defined by the Virtual Command Support in VT-d 3.0 spec, guest software
> > +running on VT-d should allocate PASID from host kernel.
> because VTD 3.0 requires the unicity of the PASID, system wide, instead
> of the above repetition.

I see. perhaps better to say Intel platform. :-) will refine it.

> > +... To allocate PASID
> > +from host, user space should check the IOMMU_NESTING_FEAT_SYSWIDE_PASID
> > +bit of the nesting info reported from host kernel. VFIO reports the nesting
> > +info by VFIO_IOMMU_GET_INFO. User space could allocate PASID from host by:
> if SYSWIDE_PASID requirement is exposed, the userspace *must* allocate ...

got it.

> > +
> > +    req.flags = VFIO_IOMMU_ALLOC_PASID;
> > +    ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
> > +
> > +With first stage/level page table bound to host, it allows to combine the
> > +guest stage 1 translation along with the hypervisor stage 2 translation to
> > +get final address.
> > +
> > +When the guest invalidates stage 1 related caches, invalidations must be
> > +forwarded to the host through
> > +
> > +    nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
> > +    memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
> > +    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
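To make the rule discussed above concrete, here is a minimal userspace
sketch of the host-side PASID allocation path. It assumes the uapi
proposed in this (unmerged) series: struct vfio_iommu_type1_pasid_request,
its argsz/flags/range fields, the nesting_info.features bit test, and the
PASID being returned as the ioctl return value are all assumptions beyond
the names quoted in the patch text.

    /* Hypothetical sketch based on this series' proposed uapi. */
    struct vfio_iommu_type1_pasid_request req = {
        .argsz = sizeof(req),               /* assumed: usual VFIO argsz convention */
        .flags = VFIO_IOMMU_ALLOC_PASID,    /* flag name taken from the patch text */
        .range = { .min = 1, .max = 1024 }, /* assumed: PASID range wanted by the vIOMMU */
    };
    int pasid = -1;

    /* If the host mandates system-wide PASIDs, allocation must come
     * from the host kernel, not from a guest-local allocator. */
    if (nesting_info.features & IOMMU_NESTING_FEAT_SYSWIDE_PASID) {
        pasid = ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
        if (pasid < 0)
            perror("VFIO_IOMMU_PASID_REQUEST");
    }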
Re: [PATCH v5 14/15] vfio: Document dual stage control
Hi Yi,

On 7/12/20 1:21 PM, Liu Yi L wrote:
> From: Eric Auger
>
> The VFIO API was enhanced to support nested stage control: a bunch of
> new iotcls and usage guideline.
ioctls
>
> Let's document the process to follow to set up nested mode.
>
> Cc: Kevin Tian
> CC: Jacob Pan
> Cc: Alex Williamson
> Cc: Eric Auger
> Cc: Jean-Philippe Brucker
> Cc: Joerg Roedel
> Cc: Lu Baolu
> Reviewed-by: Stefan Hajnoczi
> Signed-off-by: Eric Auger
> Signed-off-by: Liu Yi L
> ---
> v3 -> v4:
> *) add review-by from Stefan Hajnoczi
>
> v2 -> v3:
> *) address comments from Stefan Hajnoczi
>
> v1 -> v2:
> *) new in v2, compared with Eric's original version, pasid table bind
>    and fault reporting is removed as this series doesn't cover them.
>    Original version from Eric.
>    https://lkml.org/lkml/2020/3/20/700
> ---
>  Documentation/driver-api/vfio.rst | 67 +++
>  1 file changed, 67 insertions(+)
>
> diff --git a/Documentation/driver-api/vfio.rst
> b/Documentation/driver-api/vfio.rst
> index f1a4d3c..0672c45 100644
> --- a/Documentation/driver-api/vfio.rst
> +++ b/Documentation/driver-api/vfio.rst
> @@ -239,6 +239,73 @@ group and can access them as follows::
>
>  	/* Gratuitous device reset and go... */
>  	ioctl(device, VFIO_DEVICE_RESET);
>
> +IOMMU Dual Stage Control
> +------------------------
> +
> +Some IOMMUs support 2 stages/levels of translation. Stage corresponds to
> +the ARM terminology while level corresponds to Intel's VTD terminology.
> +In the following text we use either without distinction.
> +
> +This is useful when the guest is exposed with a virtual IOMMU and some
> +devices are assigned to the guest through VFIO. Then the guest OS can use
> +stage 1 (GIOVA -> GPA or GVA->GPA), while the hypervisor uses stage 2 for
> +VM isolation (GPA -> HPA).
> +
> +Under dual stage translation, the guest gets ownership of the stage 1 page
> +tables and also owns stage 1 configuration structures. The hypervisor owns
> +the root configuration structure (for security reason), including stage 2
> +configuration. This works as long as configuration structures and page table
> +formats are compatible between the virtual IOMMU and the physical IOMMU.
> +
> +Assuming the HW supports it, this nested mode is selected by choosing the
> +VFIO_TYPE1_NESTING_IOMMU type through:
> +
> +    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
> +
> +This forces the hypervisor to use the stage 2, leaving stage 1 available
> +for guest usage. The guest stage 1 format depends on IOMMU vendor, and
> +it is the same with the nesting configuration method. User space should
> +check the format and configuration method after setting nesting type by
> +using:
> +
> +    ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
> +
> +Details can be found in Documentation/userspace-api/iommu.rst. For Intel
> +VT-d, each stage 1 page table is bound to host by:
> +
> +    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
> +    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
> +    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> +
> +As mentioned above, guest OS may use stage 1 for GIOVA->GPA or GVA->GPA.
the guest OS, here and below?
> +GVA->GPA page tables are available when PASID (Process Address Space ID)
> +is exposed to guest. e.g. guest with PASID-capable devices assigned. For
> +such page table binding, the bind_data should include PASID info, which
> +is allocated by guest itself or by host. This depends on hardware vendor.
> +e.g. Intel VT-d requires to allocate PASID from host. This requirement is
> +defined by the Virtual Command Support in VT-d 3.0 spec, guest software
> +running on VT-d should allocate PASID from host kernel.
because VTD 3.0 requires the unicity of the PASID, system wide, instead
of the above repetition.
> +... To allocate PASID
> +from host, user space should check the IOMMU_NESTING_FEAT_SYSWIDE_PASID
> +bit of the nesting info reported from host kernel. VFIO reports the nesting
> +info by VFIO_IOMMU_GET_INFO. User space could allocate PASID from host by:
if SYSWIDE_PASID requirement is exposed, the userspace *must* allocate ...
> +
> +    req.flags = VFIO_IOMMU_ALLOC_PASID;
> +    ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
> +
> +With first stage/level page table bound to host, it allows to combine the
> +guest stage 1 translation along with the hypervisor stage 2 translation to
> +get final address.
> +
> +When the guest invalidates stage 1 related caches, invalidations must be
> +forwarded to the host through
> +
> +    nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
> +    memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
> +    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
> +
> +Those invalidations can happen at various granularity levels, page, context,
> +...
> +
>  VFIO User API
>  -------------

I see you drop
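For reference, a sketch of the probing step the patch describes, under the
assumption that the nesting info is returned through the capability chain
of VFIO_IOMMU_GET_INFO (VFIO_IOMMU_GET_INFO, struct vfio_iommu_type1_info
and VFIO_IOMMU_INFO_CAPS are existing uapi; the nesting capability id and
layout come from the unmerged series and are assumptions here):

    /* Standard two-call VFIO argsz protocol; error handling trimmed. */
    struct vfio_iommu_type1_info info = { .argsz = sizeof(info) };

    if (ioctl(container, VFIO_IOMMU_GET_INFO, &info) < 0)
        perror("VFIO_IOMMU_GET_INFO");

    if ((info.flags & VFIO_IOMMU_INFO_CAPS) && info.argsz > sizeof(info)) {
        struct vfio_iommu_type1_info *bigger = malloc(info.argsz);

        bigger->argsz = info.argsz;
        ioctl(container, VFIO_IOMMU_GET_INFO, bigger);
        /* Walk the capability chain starting at bigger->cap_offset to
         * find the nesting capability, which carries the stage 1 format
         * and the IOMMU_NESTING_FEAT_* bits (e.g. SYSWIDE_PASID). */
    }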
[PATCH v5 14/15] vfio: Document dual stage control
From: Eric Auger

The VFIO API was enhanced to support nested stage control: a bunch of
new iotcls and usage guideline.

Let's document the process to follow to set up nested mode.

Cc: Kevin Tian
CC: Jacob Pan
Cc: Alex Williamson
Cc: Eric Auger
Cc: Jean-Philippe Brucker
Cc: Joerg Roedel
Cc: Lu Baolu
Reviewed-by: Stefan Hajnoczi
Signed-off-by: Eric Auger
Signed-off-by: Liu Yi L
---
v3 -> v4:
*) add review-by from Stefan Hajnoczi

v2 -> v3:
*) address comments from Stefan Hajnoczi

v1 -> v2:
*) new in v2, compared with Eric's original version, pasid table bind
   and fault reporting is removed as this series doesn't cover them.
   Original version from Eric.
   https://lkml.org/lkml/2020/3/20/700
---
 Documentation/driver-api/vfio.rst | 67 +++
 1 file changed, 67 insertions(+)

diff --git a/Documentation/driver-api/vfio.rst b/Documentation/driver-api/vfio.rst
index f1a4d3c..0672c45 100644
--- a/Documentation/driver-api/vfio.rst
+++ b/Documentation/driver-api/vfio.rst
@@ -239,6 +239,73 @@ group and can access them as follows::

 	/* Gratuitous device reset and go... */
 	ioctl(device, VFIO_DEVICE_RESET);

+IOMMU Dual Stage Control
+------------------------
+
+Some IOMMUs support 2 stages/levels of translation. Stage corresponds to
+the ARM terminology while level corresponds to Intel's VTD terminology.
+In the following text we use either without distinction.
+
+This is useful when the guest is exposed with a virtual IOMMU and some
+devices are assigned to the guest through VFIO. Then the guest OS can use
+stage 1 (GIOVA -> GPA or GVA->GPA), while the hypervisor uses stage 2 for
+VM isolation (GPA -> HPA).
+
+Under dual stage translation, the guest gets ownership of the stage 1 page
+tables and also owns stage 1 configuration structures. The hypervisor owns
+the root configuration structure (for security reason), including stage 2
+configuration. This works as long as configuration structures and page table
+formats are compatible between the virtual IOMMU and the physical IOMMU.
+
+Assuming the HW supports it, this nested mode is selected by choosing the
+VFIO_TYPE1_NESTING_IOMMU type through:
+
+    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);
+
+This forces the hypervisor to use the stage 2, leaving stage 1 available
+for guest usage. The guest stage 1 format depends on IOMMU vendor, and
+it is the same with the nesting configuration method. User space should
+check the format and configuration method after setting nesting type by
+using:
+
+    ioctl(container->fd, VFIO_IOMMU_GET_INFO, &nesting_info);
+
+Details can be found in Documentation/userspace-api/iommu.rst. For Intel
+VT-d, each stage 1 page table is bound to host by:
+
+    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
+    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
+    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+As mentioned above, guest OS may use stage 1 for GIOVA->GPA or GVA->GPA.
+GVA->GPA page tables are available when PASID (Process Address Space ID)
+is exposed to guest. e.g. guest with PASID-capable devices assigned. For
+such page table binding, the bind_data should include PASID info, which
+is allocated by guest itself or by host. This depends on hardware vendor.
+e.g. Intel VT-d requires to allocate PASID from host. This requirement is
+defined by the Virtual Command Support in VT-d 3.0 spec, guest software
+running on VT-d should allocate PASID from host kernel. To allocate PASID
+from host, user space should check the IOMMU_NESTING_FEAT_SYSWIDE_PASID
+bit of the nesting info reported from host kernel. VFIO reports the nesting
+info by VFIO_IOMMU_GET_INFO. User space could allocate PASID from host by:
+
+    req.flags = VFIO_IOMMU_ALLOC_PASID;
+    ioctl(container, VFIO_IOMMU_PASID_REQUEST, &req);
+
+With first stage/level page table bound to host, it allows to combine the
+guest stage 1 translation along with the hypervisor stage 2 translation to
+get final address.
+
+When the guest invalidates stage 1 related caches, invalidations must be
+forwarded to the host through
+
+    nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
+    memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
+    ioctl(container->fd, VFIO_IOMMU_NESTING_OP, nesting_op);
+
+Those invalidations can happen at various granularity levels, page, context,
+...
+
 VFIO User API
 -------------
--
2.7.4
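Pulling the documented steps together, a condensed end-to-end sketch of the
sequence follows. VFIO_CHECK_EXTENSION, VFIO_SET_IOMMU and the container fd
are existing uapi; group setup (VFIO_GROUP_SET_CONTAINER, which must happen
before VFIO_SET_IOMMU) is elided, and nesting_op, bind_data and inv_data
follow this series' unmerged proposal rather than a mainline uapi:

    /* Sketch only: group attach and all error handling elided. */
    int container = open("/dev/vfio/vfio", O_RDWR);

    /* 1. Pick the nesting-capable type1 flavour, if the host supports it. */
    if (ioctl(container, VFIO_CHECK_EXTENSION, VFIO_TYPE1_NESTING_IOMMU))
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_NESTING_IOMMU);

    /* 2. Learn the stage 1 format, configuration method and feature bits. */
    ioctl(container, VFIO_IOMMU_GET_INFO, &nesting_info);

    /* 3. Bind the guest stage 1 page table (e.g. when the guest programs
     *    its virtual IOMMU); bind_data layout is vendor-specific. */
    nesting_op->flags = VFIO_IOMMU_NESTING_OP_BIND_PGTBL;
    memcpy(&nesting_op->data, &bind_data, sizeof(bind_data));
    ioctl(container, VFIO_IOMMU_NESTING_OP, nesting_op);

    /* 4. Forward guest stage 1 TLB invalidations as they happen. */
    nesting_op->flags = VFIO_IOMMU_NESTING_OP_CACHE_INVLD;
    memcpy(&nesting_op->data, &inv_data, sizeof(inv_data));
    ioctl(container, VFIO_IOMMU_NESTING_OP, nesting_op);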