After giving it more thought, my guess is that:
1) People normally don't enable a vIOMMU unless they need a nested guest,
since the vIOMMU is slow and has the locked-memory accounting issue you just
mentioned (illustrated in the second sketch below).
2) The host IOMMU driver actually can do IO page faults and on-demand
pinning/mapping for an ATS/PRI-capable device, but currently QEMU doesn't
know whether a pass-through device and the host IOMMU can do it or not (a
device-side check is sketched right after this list). If that is true, maybe
we could drop the pinning of all guest memory for this type of device?
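To make point 2 concrete, here is a minimal sketch (my own illustration, not
QEMU code) of how the device-side half of that question could be answered
from userspace: walk the PCIe extended capability list in the device's sysfs
config space and look for the ATS, PRI and PASID capabilities. Whether the
host IOMMU driver actually supports IO page faults for the device is a
separate question that this does not answer.

/*
 * Minimal sketch (my own illustration, not QEMU code): walk the PCIe
 * extended capability list of a device's config space through sysfs and
 * report whether the ATS, PRI and PASID capabilities are present.
 *
 * Assumes a little-endian host (config space is stored little-endian).
 * Build: gcc -o check_ats check_ats.c
 * Usage (root is needed to read extended config space):
 *   ./check_ats /sys/bus/pci/devices/0000:01:00.0/config
 */
#include <stdio.h>
#include <stdint.h>

#define PCI_EXT_CAP_START    0x100   /* extended capabilities begin here */
#define PCI_EXT_CAP_ID_ATS   0x000f
#define PCI_EXT_CAP_ID_PRI   0x0013
#define PCI_EXT_CAP_ID_PASID 0x001b

static uint32_t read_dword(FILE *f, long off)
{
    uint32_t v = 0;

    if (fseek(f, off, SEEK_SET) || fread(&v, sizeof(v), 1, f) != 1)
        return 0;                       /* short config space, treat as end */
    return v;
}

int main(int argc, char **argv)
{
    int has_ats = 0, has_pri = 0, has_pasid = 0;
    int ttl = (4096 - 256) / 8;         /* loop guard, like the kernel uses */
    long off = PCI_EXT_CAP_START;
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <sysfs config file>\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    while (off && ttl-- > 0) {
        uint32_t hdr = read_dword(f, off);

        if (hdr == 0 || hdr == 0xffffffff)  /* no (more) extended caps */
            break;
        switch (hdr & 0xffff) {             /* bits 15:0 = capability ID */
        case PCI_EXT_CAP_ID_ATS:   has_ats = 1;   break;
        case PCI_EXT_CAP_ID_PRI:   has_pri = 1;   break;
        case PCI_EXT_CAP_ID_PASID: has_pasid = 1; break;
        }
        off = (hdr >> 20) & 0xffc;          /* bits 31:20 = next cap offset */
    }
    fclose(f);

    printf("ATS: %s  PRI: %s  PASID: %s\n",
           has_ats ? "yes" : "no",
           has_pri ? "yes" : "no",
           has_pasid ? "yes" : "no");
    return 0;
}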

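And to illustrate the accounting side of point 1: below is a rough sketch of
the userspace VFIO type1 DMA map call that QEMU uses under the hood
(container/group setup is omitted and the container fd is assumed to already
exist). Every page in the mapped range gets pinned and charged against
RLIMIT_MEMLOCK per container, which is why several vIOMMU address spaces
backed by separate containers can multiply the accounted amount.

/*
 * Rough sketch of the userspace side of VFIO type1 DMA mapping, only to
 * show where pinning and locked-memory accounting happen.  Container,
 * group and device setup (VFIO_GROUP_SET_CONTAINER, VFIO_SET_IOMMU, ...)
 * is omitted; "container_fd" is assumed to be an already configured
 * container, as QEMU sets up at startup.
 */
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/resource.h>
#include <linux/vfio.h>

/*
 * Map one buffer into the device's IOVA space through a container fd.
 * The kernel pins every page in [vaddr, vaddr + size) and charges it
 * against RLIMIT_MEMLOCK of the calling process, per container -- so two
 * vIOMMU address spaces backed by two containers account the same guest
 * RAM twice.
 */
static int map_for_dma(int container_fd, void *vaddr, __u64 iova, __u64 size)
{
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (__u64)(unsigned long)vaddr,
        .iova  = iova,
        .size  = size,
    };

    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}

int main(void)
{
    struct rlimit rl;

    /*
     * Without a vIOMMU, QEMU maps (and the kernel therefore pins) all of
     * guest RAM up front this way, so RLIMIT_MEMLOCK must cover the whole
     * guest.  With a vIOMMU, only what the guest driver maps gets pinned,
     * but each container is accounted separately.
     */
    if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
        printf("RLIMIT_MEMLOCK: soft=%llu hard=%llu bytes\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);

    (void)map_for_dma;  /* e.g. map_for_dma(container_fd, ram, 0, ram_size) */
    return 0;
}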
Thanks.

On Sun, Jul 12, 2020 at 5:24 PM Alex Williamson <alex.l.william...@gmail.com>
wrote:

> On Sun, Jul 12, 2020 at 6:16 PM Yv Lin <yvl...@gmail.com> wrote:
>
>>
>> Here is a summary of what I learned from your explanation:
>> 1) If a device is passed through to the guest OS via vfio and there is
>> no IOMMU present in the guest OS, all memory regions within the device
>> address space will be pinned down. If an IOMMU is presented to the
>> guest OS, QEMU can pin and map only the needed pages (those specified
>> by dma_map_page() calls in the guest OS device driver), but since the
>> vIOMMU is emulated, the performance is not good.
>> 2) Even if a PCIe device supports the ATS/PRI capabilities and is
>> passed through to the guest OS, the statement above still holds: IO
>> page faults and demand paging won't be utilized anyway.
>>
>
> Correct, also note that a vIOMMU is never enabled during early boot on
> x86/64, therefore all guest memory will be pinned initially.  Also a vIOMMU
> introduces locked memory accounting issues as each device address space
> makes use of a separate VFIO container, which does accounting separately.
> And finally, ATS implies that we honor devices making use of
> "pre-translated" DMA, which implies a degree of trust that the user/device
> cannot make use of this as a vector to exploit the host.  Thanks,
>
> Alex
>