On Tue, 8 Sep 2020 11:31:42 -0700
Maran Wilson <maran.wil...@gmail.com> wrote:

> On Tue, Sep 8, 2020 at 10:22 AM Alex Williamson <alex.william...@redhat.com>
> wrote:
> 
> > On Tue, 8 Sep 2020 09:59:46 -0700
> > Maran Wilson <maran.wil...@gmail.com> wrote:
> >  
> > > I'm trying to use the vfio-pci driver to pass through two PCIe endpoint
> > > devices into a VM. On the host, each of these PCIe endpoint devices is
> > > in its own IOMMU group. From inside the VM, I would like to perform P2P
> > > DMA operations. So basically, programming the DMA engine of one of the
> > > devices to write directly to a BAR-mapped region of the other device.
> > >
> > > Is this something that is supported by the vfio driver, working with
> > > QEMU? Are there any VM configuration gotchas I need to keep in mind for
> > > this particular use case? I'm on an AMD Rome server, FWIW.
> > >
> > > This works on the host (when I'm not using VMs) with the IOMMU disabled.
> > > And it also works on the host with the IOMMU enabled, as long as I add
> > > the appropriate IOMMU mapping of the other device's BAR-mapped address
> > > to the appropriate IOMMU group.
> > >
> > > But from what I can tell, when the endpoint devices are passed through
> > > to the VM, it doesn't appear that any IOMMU mappings are created on the
> > > host to translate the GPA of the other endpoint's BAR-mapped address.
> > > But of course, DMA to/from host DRAM does work in that same
> > > configuration, so I know IOMMU mappings are being created to translate
> > > the GPA of DRAM.
> >
> > As long as we can mmap the endpoint BAR (>= PAGE_SIZE) then the BAR GPA
> > should be mapped through the IOMMU to enable p2p within the VM.  You
> >  
> 
> Thanks Alex. Does it require that the endpoints share a common root port
> inside the VM? Or does that part not matter?

There could be platform specific topology requirements, but since you
indicate it works on the host with the IOMMU enabled, I assume we're ok.

> If you happen to know a routine name or two in the driver and/or QEMU
> that handles this, it would help me get my bearings sooner and let me
> instrument the kernel side as well to see why my experiment is not working.

In QEMU all vfio mappings are handled through a MemoryListener, where
vfio_listener_region_{add,del} are the callbacks.  vfio_dma_map() and
vfio_dma_unmap() are the wrappers around the ioctl into the kernel.
BAR mappings will report true for memory_region_is_ram_device(), so we
won't consider it a fatal error when we can't map them, but we'll
certainly try to map them.
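
One way to watch those callbacks fire is QEMU's trace option on the command
line. This is only a sketch: the PCI addresses are placeholders, and the
exact trace event names vary by QEMU version (list them with `-trace help`
on your build):

```shell
# Placeholder PCI addresses; substitute your two endpoints.
# 'vfio*' matches all vfio trace events; narrow to e.g.
# 'vfio_listener_region_add*' once you see which events exist.
qemu-system-x86_64 \
    -device vfio-pci,host=0000:41:00.0 \
    -device vfio-pci,host=0000:42:00.0 \
    -trace 'vfio*' \
    ... # rest of the VM configuration
```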

On the kernel side, you'd be using the type1 IOMMU backend, where the
ioctl will go through vfio_iommu_type1_ioctl() and should land in
vfio_dma_do_map() for the mapping case.

> > should be able to see this with tracing enabled in QEMU for vfio*.
> >  
> 
> I will try that too, thanks!
> 
> By the way, is this functionality present as far back as the 4.15 kernel?

It's essentially always been present.  Thanks,

Alex

_______________________________________________
vfio-users mailing list
vfio-users@redhat.com
https://www.redhat.com/mailman/listinfo/vfio-users
