On Sun, Jul 12, 2020 at 4:57 PM Alex Williamson <alex.l.william...@gmail.com>
wrote:

> On Sun, Jul 12, 2020 at 5:38 PM Yv Lin <yvl...@gmail.com> wrote:
>
>>
>>
>> On Sun, Jul 12, 2020 at 1:59 PM Alex Williamson <
>> alex.l.william...@gmail.com> wrote:
>>
>>> On Sun, Jul 12, 2020 at 12:25 PM Yv Lin <yvl...@gmail.com> wrote:
>>>
>>>> Btw, IOMMUv2 can support peripheral page requests (PPR), so in theory, if
>>>> an endpoint PCIe device supports ATS/PRI, pinning down all memory is
>>>> not necessary. Does the current vfio driver or qemu have corresponding
>>>> support to reduce the amount of pinned memory?
>>>>
>>>
>>> I think you're very much overestimating the difference between
>>> VFIO_TYPE1_IOMMU and VFIO_TYPE1v2_IOMMU, if this is what you're referring
>>> to.  The difference is only subtle unmapping semantics, none of what you
>>> mention above.
>>>
>>
>> I was referring to AMD IOMMUv2 (drivers/iommu/amd_iommu_v2.c in the linux
>> kernel tree).  If the host machine has an AMD IOMMUv2 with PPR
>> capability, does it help vfio/qemu avoid pinning all memory?
>>
>
> No, there are not yet any interfaces to handle PPR through VFIO and I'm
> not even sure AMD vIOMMU works with VFIO.  Thanks,
>

Here is a summary of what I've learned from your answers:
1) If a device is passed through to the guest OS via vfio and no IOMMU is
presented to the guest, all of guest memory (the device's entire DMA address
space) is pinned and mapped up front (see the sketch after this list). If a
vIOMMU is presented to the guest, qemu only has to pin and map the pages the
guest actually maps (via dma_map_page() in the guest device driver), but
because the vIOMMU is emulated, performance suffers.
2) Even if a PCIe device supports ATS/PRI and is passed through to the guest,
the above still holds; I/O page faults and demand paging will not be used.
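To make point 1) concrete for myself, here is a minimal, untested sketch of
the vfio type1 path a userspace driver like qemu takes when no vIOMMU is
present: every page of the memory handed to VFIO_IOMMU_MAP_DMA is pinned and
an IOMMU mapping is installed up front. The group path "/dev/vfio/26", the
1 GiB size, and the anonymous mmap standing in for guest RAM are placeholders,
and the usual API-version/extension checks are omitted for brevity.

/* Sketch of pinning/mapping memory through the vfio type1 backend. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* placeholder group */

    if (container < 0 || group < 0) {
        perror("open");
        return 1;
    }

    /* Attach the group to the container and select the type1 backend. */
    if (ioctl(group, VFIO_GROUP_SET_CONTAINER, &container) ||
        ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU)) {
        perror("ioctl");
        return 1;
    }

    /* Stand-in for guest RAM; qemu would use the guest's memory regions. */
    size_t size = 1UL << 30;        /* 1 GiB */
    void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /*
     * VFIO_IOMMU_MAP_DMA pins every page in [vaddr, vaddr + size) and
     * programs the IOMMU so the device can DMA to iova.  Without a
     * vIOMMU in the guest, qemu issues this for all of guest RAM.
     */
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)ram,
        .iova  = 0,                 /* guest physical address 0 */
        .size  = size,
    };
    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map))
        perror("VFIO_IOMMU_MAP_DMA");

    return 0;
}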

thanks.


>
> Alex
>
>>