I'm new to QEMU and VFIO, and I'm looking for information about whether a PCIe device passed through to the guest OS via VFIO requires the hypervisor to pin down all guest memory (all VM memory). I found this link: https://lkml.org/lkml/2018/10/30/221. It says "it shows the whole guest memory were pinned (vfio_pin_pages()), viewed by...". However, when I searched for callers of vfio_pin_pages(), it seems to be used only by some mdev drivers such as the Intel vGPU driver, and they appear to pin only the pages they need to SPA. So why does the link say "the whole guest memory"? What does "the whole guest memory" mean here?
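For context, my current understanding (a hedged sketch of the concept, not the actual QEMU or kernel source) is that without a vIOMMU, QEMU's VFIO memory listener issues VFIO_IOMMU_MAP_DMA for every guest RAM region up front, and the type1 IOMMU driver pins each mapped page, so "the whole guest memory" ends up pinned. A minimal illustration of that upfront walk, with a struct standing in for vfio_iommu_type1_dma_map and map_dma() stubbed out (both names here are my own placeholders):

```c
#include <stdint.h>

/* Placeholder loosely modelled on struct vfio_iommu_type1_dma_map
 * from <linux/vfio.h>; redefined so this sketch compiles without
 * kernel headers. */
struct dma_map_req {
    uint64_t vaddr; /* HVA: where QEMU mmap'ed this guest RAM region */
    uint64_t iova;  /* GPA: the device-visible DMA address */
    uint64_t size;
};

/* Stub for ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &req): in the real
 * type1 path the kernel pins every page in [vaddr, vaddr + size). */
static uint64_t map_dma(const struct dma_map_req *req)
{
    return req->size; /* bytes the type1 driver would pin */
}

/* Without a vIOMMU, the listener walks *all* guest RAM regions once
 * at setup, so the total pinned equals the whole guest memory size. */
uint64_t map_all_guest_ram(const struct dma_map_req *regions, int n)
{
    uint64_t pinned = 0;
    for (int i = 0; i < n; i++)
        pinned += map_dma(&regions[i]);
    return pinned;
}
```

The point of the sketch is only that pinning here is driven by guest RAM layout, not by what the device actually DMAs to.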
In the link https://terenceli.github.io/%E6%8A%80%E6%9C%AF/2019/08/31/vfio-passthrough, it mentions the vfio_dma_map() function, which is called by a memory listener for an IOMMU memory region. It looks to me like vfio_dma_map() is called this way when the amd_iommu device (qemu/hw/i386/amd_iommu.c) is emulated in QEMU; I assume this is the so-called vIOMMU. So it seems that only when a vIOMMU is enabled does QEMU have a chance to pin down just the needed regions of memory, which matches the answer in the first link. But how is this related to AMD IOMMU nested address translation? I'm not considering a nested guest (L2 guest) yet. Thanks.
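To make the contrast I'm asking about concrete, here is how I picture the vIOMMU case (again a sketch with made-up names, not QEMU's actual notifier API): the region is an IOMMU region, QEMU registers a notifier on it, and the vfio_dma_map()/vfio_dma_unmap() paths run only when the guest programs or tears down a mapping in its vIOMMU, so only those IOVA ranges get pinned:

```c
#include <stdint.h>

/* Hypothetical notifier event, loosely modelled on QEMU's
 * IOMMUTLBEntry: the guest just mapped or unmapped this IOVA range
 * in its vIOMMU. */
struct iommu_event {
    uint64_t iova;
    uint64_t size;
    int      map;   /* 1 = map (pin), 0 = unmap (unpin) */
};

/* With a vIOMMU, pinning is driven by guest DMA mappings rather than
 * by walking all guest RAM, so the pinned total tracks only the
 * ranges the guest's drivers actually programmed. */
uint64_t replay_iommu_events(const struct iommu_event *ev, int n)
{
    uint64_t pinned = 0;
    for (int i = 0; i < n; i++) {
        if (ev[i].map)
            pinned += ev[i].size;   /* vfio_dma_map() path */
        else
            pinned -= ev[i].size;   /* vfio_dma_unmap() path */
    }
    return pinned;
}
```

If this picture is right, the pinned set shrinks and grows with guest DMA activity instead of staying fixed at the full VM size, which is what I understood the first link's answer to be getting at.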
_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users