On Sun, Jul 12, 2020 at 1:59 PM Alex Williamson <alex.l.william...@gmail.com>
wrote:

> On Sun, Jul 12, 2020 at 12:25 PM Yv Lin <yvl...@gmail.com> wrote:
>
>> Btw, IOMMUv2 can support Peripheral Page Requests (PPR), so in theory, if
>> an endpoint PCIe device supports ATS/PRI, pinning all memory is not
>> necessary. Does the current vfio driver or QEMU have corresponding support
>> to avoid pinning memory?
>>
>>
>
> I think you're very much overestimating the difference between
> VFIO_TYPE1_IOMMU and VFIO_TYPE1v2_IOMMU, if this is what you're referring
> to.  The difference is only subtle unmapping semantics, none of what you
> mention above.
>

I was referring to AMD IOMMUv2 (drivers/iommu/amd_iommu_v2.c in the Linux
kernel tree).  If the host machine has an AMD IOMMUv2 with PPR capability,
does that help vfio/QEMU avoid pinning all guest memory?
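(For context: drivers/iommu/amd_iommu_v2.c exposes an in-kernel PASID/PPR API
through <linux/amd-iommu.h> that kernel drivers such as amdkfd consume
directly; I don't see it wired into the vfio ioctl path. Below is only a rough
sketch of how a kernel driver would use it, under the assumption that the
signatures are as in that era's <linux/amd-iommu.h>; MY_MAX_PASIDS, MY_PASID
and my_driver_enable_ppr() are made-up names, not real kernel code.)

/*
 * Sketch only: an in-kernel consumer of the amd_iommu_v2 API binding a
 * process address space to a device PASID so that device faults are
 * resolved via PPR instead of pinning memory up front.
 */
#include <linux/pci.h>
#include <linux/sched.h>
#include <linux/amd-iommu.h>

#define MY_MAX_PASIDS 16   /* assumed device PASID limit */
#define MY_PASID      1    /* assumed; PASID allocation is up to the driver */

static int my_driver_enable_ppr(struct pci_dev *pdev)
{
	int ret;

	/* Enables ATS/PRI/PASID on the device and registers it with IOMMUv2. */
	ret = amd_iommu_init_device(pdev, MY_MAX_PASIDS);
	if (ret)
		return ret;

	/*
	 * Bind the current process' page tables to the PASID.  Faulting DMA
	 * on this PASID raises a PPR, which the IOMMUv2 driver services
	 * against the CPU page tables, so the pages need not be pinned
	 * beforehand.
	 */
	ret = amd_iommu_bind_pasid(pdev, MY_PASID, current);
	if (ret)
		amd_iommu_free_device(pdev);

	return ret;
}

Whether vfio or QEMU could plug a passthrough device into something like this
is exactly what I'm asking about.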


>
>> On Sun, Jul 12, 2020 at 11:03 AM Yv Lin <yvl...@gmail.com> wrote:
>>
>>> Hi Alex,
>>> Thanks for the detailed explanation, it clarifies things for me. I read
>>> vfio_listener_region_add() more carefully. It seems to check every memory
>>> region against the container's host windows; for an IOMMUv1 vfio device,
>>> the host window is always the full 64-bit range
>>> (vfio_host_win_add(container, 0, (hwaddr)-1, info.iova_pgsizes); in
>>> vfio_connect_container()), which basically means every memory region will
>>> be pinned and mapped into the host IOMMU. Is that understanding right?
>>>
>>
> The listener maps everything within the address space of the device; the
> extent of that address space depends on whether a vIOMMU is present and
> active.  When there is no vIOMMU, the full address space of the VM is
> mapped.  Thanks,
>
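(To connect that to the ioctl level: as I understand it, for each guest RAM
section the listener ends up issuing a type1 DMA map, and that is what pins
the pages. A minimal userspace sketch of that call follows; container and
group setup plus error handling are omitted, and map_region() is just a
hypothetical helper, not QEMU code.)

#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Sketch of the VFIO type1 map call made for each guest RAM section
 * (vaddr = host virtual address of the RAM in QEMU, iova = guest
 * physical address when there is no vIOMMU).
 */
static int map_region(int container_fd, void *vaddr, __u64 iova, __u64 size)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (__u64)(unsigned long)vaddr,
		.iova  = iova,
		.size  = size,
	};

	/* The kernel pins the backing pages and programs the host IOMMU. */
	return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
}

With no vIOMMU the whole of guest RAM is covered by such calls, which matches
the "full address space of the VM is mapped" answer above.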

Thanks.


> Alex
>