I'm trying to use the vfio-pci driver to pass two PCIe endpoint devices
through to a VM. On the host, each of these PCIe endpoint devices is in
its own IOMMU group. From inside the VM, I would like to perform P2P DMA
operations; that is, program the DMA engine of one device to write
directly to a BAR-mapped region of the other device.
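
To make the intent concrete, here is a rough guest-side sketch. The device
handles and my_dma_set_dest() are made-up stand-ins for our real driver;
with no vIOMMU in the guest, I'd expect the DMA destination to simply be
the peer BAR's guest physical address:

    /* Guest-side sketch: point one device's DMA engine at the other
     * device's BAR0. my_dma_set_dest() is a hypothetical stand-in for
     * our device-specific DMA programming. */
    #include <linux/pci.h>
    #include <linux/types.h>

    extern int my_dma_set_dest(struct pci_dev *dma_dev, dma_addr_t dst);

    static int setup_p2p_write(struct pci_dev *dma_dev, struct pci_dev *peer)
    {
        /* With no vIOMMU, the gPA of the peer's BAR0 is the address the
         * DMA engine should use as its destination. */
        resource_size_t dst = pci_resource_start(peer, 0);

        return my_dma_set_dest(dma_dev, (dma_addr_t)dst);
    }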


Is this supported by the vfio driver when working with QEMU? Are there
any VM configuration gotchas I need to keep in mind for this particular
use case? I'm on an AMD Rome server, FWIW.


This works on the host (when I'm not using VMs) with the IOMMU disabled.
It also works on the host with the IOMMU enabled, as long as I add the
appropriate IOMMU mapping of the other device's BAR address to the
appropriate IOMMU group.
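
In case it helps, the host-only mapping I add looks roughly like the
following. This is just a sketch, assuming a custom driver using the
kernel IOMMU API (the exact iommu_map() signature varies a bit by kernel
version), and it identity-maps the peer's BAR0:

    /* Host-side sketch: identity-map (IOVA == phys) the peer's BAR0 into
     * the IOMMU domain of the device that performs the DMA. */
    #include <linux/iommu.h>
    #include <linux/pci.h>

    static int map_peer_bar(struct pci_dev *dma_dev, struct pci_dev *peer)
    {
        struct iommu_domain *dom = iommu_get_domain_for_dev(&dma_dev->dev);
        phys_addr_t bar = pci_resource_start(peer, 0);
        size_t size = pci_resource_len(peer, 0);

        if (!dom)
            return -ENODEV;

        /* Newer kernels add a gfp_t argument to iommu_map(). */
        return iommu_map(dom, bar, bar, size, IOMMU_READ | IOMMU_WRITE);
    }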


But from what I can tell, when the endpoint devices are passed through to
the VM, no IOMMU mappings are created on the host to translate the guest
physical address (gPA) of the other endpoint's BAR-mapped region. DMA
to/from host DRAM does work in that same configuration, though, so I know
IOMMU mappings are being created to translate the gPAs of DRAM.
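
To illustrate what I'd expect to be needed: once the peer's BAR has been
mmap()'ed through its VFIO region, something would have to install a
mapping in the container from the BAR's gPA to that mapping, roughly like
this sketch against the VFIO type1 API (the fd, address and size
parameters here are placeholders):

    /* Userspace sketch: map an mmap()'ed BAR into the VFIO container at
     * the IOVA (gPA) the guest uses for that BAR. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int map_bar_at_gpa(int container_fd, void *bar_vaddr,
                              size_t bar_size, uint64_t bar_gpa)
    {
        struct vfio_iommu_type1_dma_map map;

        memset(&map, 0, sizeof(map));
        map.argsz = sizeof(map);
        map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
        map.vaddr = (uint64_t)(uintptr_t)bar_vaddr; /* mmap() of the BAR region */
        map.iova  = bar_gpa;                        /* gPA of the BAR in the guest */
        map.size  = bar_size;

        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }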


Thanks,

Maran