On Fri, May 16, 2025 at 02:02:29AM +0800, Xu Yilun wrote:
> > IMHO, I think it might be helpful that you can picture out what are the
> > minimum requirements (function/life cycle) to the current IOMMUFD TSM
> > bind architecture:
> > 
> > 1. host tsm_bind (preparation) is in IOMMUFD, triggered by QEMU handling
> >    the TVM-HOST call.
> > 2. TDI acceptance is handled in guest_request() to accept the TDI after
> >    the validation in the TVM.
> 
> I'll try my best to brainstorm and make a flow in ASCII. 
> 
> (*) means new feature
> 
> 
>     (actors: Guest, Guest TSM, QEMU, VFIO, IOMMUFD, host TSM, KVM)
> 
>  1. host TSM:   *Connect(IDE)
>  2. QEMU:        Init vdev

Opening /dev/vfio/XX is a VFIO action.

Then VFIO attaches to IOMMUFD as an iommufd action, creating the idev.
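
Roughly, assuming the current vfio cdev + iommufd UAPI (the device path
is just an example, error handling trimmed), that step looks like:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Open the VFIO cdev and bind it to an iommufd; the bind creates the idev. */
int create_idev(int iommufd)
{
        int devfd = open("/dev/vfio/devices/vfio0", O_RDWR); /* example path */
        struct vfio_device_bind_iommufd bind = {
                .argsz = sizeof(bind),
                .iommufd = iommufd,
        };

        if (devfd < 0 || ioctl(devfd, VFIO_DEVICE_BIND_IOMMUFD, &bind) < 0)
                return -1;
        /* bind.out_devid now names the idev within this iommufd context */
        return devfd;
}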

>  3. QEMU:       *create dmabuf
>  4. VFIO:       *export dmabuf
>  5. QEMU:        create memslot
>  6. KVM:        *import dmabuf
>  7. QEMU:        setup shared DMA
>  8. IOMMUFD:     create hwpt
>  9. VFIO:        attach hwpt
> 10. QEMU:        kvm run
> 11. Guest:       enum shared dev
> 12. Guest:      *Connect(Bind)
> 13. Guest TSM:  *GHCI Bind
> 14. QEMU:       *Bind
> 15. IOMMUFD:     CC viommu alloc
> 16. IOMMUFD:     vdevice alloc
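
FWIW steps 8 and 9 map onto the existing UAPI roughly like this (a
sketch, assuming an IOAS has already been set up; error handling
trimmed):

#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <linux/iommufd.h>

/* Allocate a hwpt for the device over an existing IOAS, then attach
 * the device to it through the vfio cdev. */
int attach_hwpt(int iommufd, int devfd, __u32 dev_id, __u32 ioas_id)
{
        struct iommu_hwpt_alloc hwpt = {
                .size = sizeof(hwpt),
                .dev_id = dev_id,
                .pt_id = ioas_id,
        };
        struct vfio_device_attach_iommufd_pt attach = {
                .argsz = sizeof(attach),
        };

        if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &hwpt) < 0)
                return -1;
        attach.pt_id = hwpt.out_hwpt_id;
        return ioctl(devfd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach);
}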

viommu and vdevice creation happen before KVM run. The vPCI function
is visible to the guest from the very start, even though it is in T=0
mode. If a platform does not require any special CC steps prior to KVM
run then it just has a NOP for these functions.

What you have here is some new BIND operation against the already
existing vdevice as we discussed earlier.
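
For reference, that existing allocation path is roughly (a sketch
assuming the current iommufd UAPI; the viommu type is platform
specific and error handling is trimmed):

#include <sys/ioctl.h>
#include <linux/iommufd.h>

/* Allocate a viommu over a nesting-parent hwpt, then a vdevice that
 * records the device's virtual ID (e.g. the guest vRID) in that viommu. */
int alloc_vdevice(int iommufd, __u32 dev_id, __u32 hwpt_id, __u64 virt_id)
{
        struct iommu_viommu_alloc viommu = {
                .size = sizeof(viommu),
                .type = IOMMU_VIOMMU_TYPE_DEFAULT, /* platform specific */
                .dev_id = dev_id,
                .hwpt_id = hwpt_id,
        };
        struct iommu_vdevice_alloc vdev = {
                .size = sizeof(vdev),
                .dev_id = dev_id,
                .virt_id = virt_id,
        };

        if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &viommu) < 0)
                return -1;
        vdev.viommu_id = viommu.out_viommu_id;
        if (ioctl(iommufd, IOMMU_VDEVICE_ALLOC, &vdev) < 0)
                return -1;
        return vdev.out_vdevice_id;
}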

> 17. VFIO:       *attach vdev
> 18. IOMMUFD:    *setup CC viommu
> 19. IOMMUFD:    *tsm_bind
> 20. host TSM:   *bind
> 21. Guest:      *Attest
> 22. Guest TSM:  *GHCI get CC info
> 23. QEMU:       *get CC info
> 24. IOMMUFD:    *vdev guest req
> 25. host TSM:   *guest req
> 26. Guest:      *Accept
> 27. Guest TSM:  *GHCI accept MMIO/DMA
> 28. QEMU:       *accept MMIO/DMA
> 29. IOMMUFD:    *vdev guest req
> 30. host TSM:   *guest req
> 31. KVM:        *map private MMIO
> 32. Guest TSM:  *GHCI start tdi
> 33. QEMU:       *start tdi
> 34. IOMMUFD:    *vdev guest req
> 35. host TSM:   *guest req

This seems reasonable. You want some generic RPC scheme to carry
messages from the VM to the TSM, tunneled through the iommufd vdevice
(because the vdevice has the vPCI ID, the KVM ID, the viommu ID and so
on).
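
Purely to illustrate the shape of such a tunnel (this struct is
hypothetical, nothing like it exists today; every name in it is made
up):

#include <linux/types.h>

/* Hypothetical UAPI sketch only: forward an opaque guest TSM message
 * to the host TSM on behalf of a given vdevice. */
struct iommu_vdevice_guest_req {
        __u32 size;
        __u32 flags;
        __u32 vdevice_id;        /* resolves the vPCI/KVM/viommu binding */
        __u32 req_len;
        __aligned_u64 req_uptr;  /* opaque request from the guest TSM */
        __u32 resp_len;
        __u32 __reserved;
        __aligned_u64 resp_uptr; /* opaque response from the host TSM */
};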

> 36. Guest:       Workload...
> 37. Guest:      *disconnect(Unbind)
> 38. Guest TSM:  *GHCI unbind
> 39. QEMU:       *Unbind
> 40. VFIO:       *detach vdev

Unbind the vdev; the vdev itself remains until KVM is stopped.

> 41. IOMMUFD:    *tsm_unbind
> 42. host TSM:   *TDX stop tdi
> 43. host TSM:   *TDX disable mmio cb
> 44. VFIO:       *cb dmabuf revoke
> 45. KVM:        *unmap private MMIO
> 46. host TSM:   *TDX disable dma cb
> 47. IOMMUFD:    *cb disable CC viommu

I don't know why you'd disable a viommu while the VM is running; that
doesn't make sense.

> 48. host TSM:   *TDX tdi free
> 49. host TSM:   *enable mmio
> 50. VFIO:       *cb dmabuf recover
> 51. Guest:       workable shared dev

This is a nice chart; it would be good to see comparable charts for
AMD and ARM.

Jason
