On Sun, Jul 12, 2020 at 6:36 PM Yv Lin <yvl...@gmail.com> wrote:

> After more thought, I guess that
> 1) normally people don't enable vIOMMU unless they need to run a nested
> guest, since vIOMMU is slow and because of the memory accounting issue you just mentioned.
>

vIOMMU w/ device assignment is more often used for DPDK in a guest than for
nested guests.


> 2) the host IOMMU driver actually can do IO page faults and on-demand
> pinning/mapping for ATS/PRI-capable devices, but currently QEMU can't tell
> whether a passed-through device and the host IOMMU support it. If this is
> true, maybe we can remove the pinning of all guest memory for this type of
> device?
>

To some extent this is under development in the work Intel and others are
contributing for SVA and SIOV support.  The primary focus is nested paging
with PASID support, and page fault interfaces have been proposed.  Thanks,

Alex
_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users