On Sun, Jul 12, 2020 at 6:16 PM Yv Lin <yvl...@gmail.com> wrote:

>
> Here is a summary of what I've learned from your explanation:
> 1) If a device is passed through to the guest OS via vfio and there is
> no IOMMU present in the guest OS, all memory regions within the device
> address space will be pinned down. If an IOMMU is present in the guest
> OS, QEMU can pin and map only the needed pages (those specified by
> dma_map_page() calls in the guest OS device driver), but because the
> vIOMMU is emulated, the performance is not good.
> 2) Even if a PCIe device supports the ATS/PRI capabilities and is
> passed through to the guest OS, the above statement still holds; IO
> page faults and demand paging won't be utilized anyway.
>
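
(For context on point (1): the mapping call there is the regular kernel
DMA API.  A minimal sketch of a guest driver doing such a mapping; the
DMA API calls are real kernel interfaces, everything else, including
mydev_map_buffer, is hypothetical:)

  /* Hypothetical guest driver fragment.  With a vIOMMU present, every
   * map/unmap below has to be shadowed by QEMU into the host IOMMU via
   * VFIO, which is where the emulation overhead comes from. */
  #include <linux/dma-mapping.h>

  static int mydev_map_buffer(struct device *dev, struct page *page)
  {
      dma_addr_t iova;

      /* Map one page for device-read DMA; the returned IOVA is what
       * the driver would program into the device. */
      iova = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
      if (dma_mapping_error(dev, iova))
          return -ENOMEM;

      /* ... program 'iova' into the device's DMA registers ... */

      dma_unmap_page(dev, iova, PAGE_SIZE, DMA_TO_DEVICE);
      return 0;
  }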

Correct. Also note that a vIOMMU is never enabled during early boot on
x86/64, therefore all guest memory will be pinned initially.
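
(The pinning itself happens through the VFIO type1 MAP ioctl.  A minimal
userspace sketch, assuming 'container' is an open /dev/vfio/vfio fd that
already has a group attached and VFIO_TYPE1_IOMMU selected; without a
vIOMMU, QEMU issues this once per guest RAM region at startup:)

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  static int map_guest_ram(int container, void *vaddr,
                           unsigned long long iova, size_t size)
  {
      struct vfio_iommu_type1_dma_map map;

      memset(&map, 0, sizeof(map));
      map.argsz = sizeof(map);
      map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
      map.vaddr = (unsigned long long)(unsigned long)vaddr;
      map.iova  = iova;   /* guest-physical address */
      map.size  = size;

      /* Every page in [vaddr, vaddr+size) is pinned until the matching
       * VFIO_IOMMU_UNMAP_DMA. */
      return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
  }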
A vIOMMU also introduces locked-memory accounting issues: each device
address space uses a separate VFIO container, and each container
accounts its pinned pages independently.
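
(The practical fallout is that the RLIMIT_MEMLOCK allowance has to cover
the pinned memory once per container, so with a vIOMMU and N devices you
can in the worst case need N times guest RAM in locked-memory limit.  A
trivial sketch of checking the current limit:)

  #include <stdio.h>
  #include <sys/resource.h>

  int main(void)
  {
      struct rlimit rl;

      if (getrlimit(RLIMIT_MEMLOCK, &rl))
          return 1;
      /* Each VFIO container's pinned pages are charged against this
       * limit separately. */
      printf("RLIMIT_MEMLOCK: cur=%llu max=%llu bytes\n",
             (unsigned long long)rl.rlim_cur,
             (unsigned long long)rl.rlim_max);
      return 0;
  }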
And finally, ATS implies that we honor devices issuing "pre-translated"
DMA, which implies a degree of trust that the user/device cannot use
this as a vector to exploit the host.
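
(Whether a device even advertises ATS/PRI is visible in the PCIe
extended capability chain; "lspci -vvv" will show it, or here is a
minimal C sketch that walks the chain from the sysfs config file.  The
device path is illustrative, and reading past offset 0x100 of 'config'
typically requires root:)

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      /* Illustrative BDF; substitute the passed-through device. */
      FILE *f = fopen("/sys/bus/pci/devices/0000:01:00.0/config", "rb");
      uint32_t hdr;
      unsigned off = 0x100;   /* extended capabilities start here */

      if (!f)
          return 1;
      do {
          if (fseek(f, off, SEEK_SET) || fread(&hdr, 4, 1, f) != 1)
              break;
          if ((hdr & 0xffff) == 0x000f)
              printf("ATS capability at 0x%x\n", off);
          if ((hdr & 0xffff) == 0x0013)
              printf("PRI capability at 0x%x\n", off);
          off = (hdr >> 20) & 0xffc;  /* next capability pointer */
      } while (off);
      fclose(f);
      return 0;
  }

Thanks,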

Alex
_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users
