Just wanted to wrap up this thread by confirming that what Alex said is true
(in case anyone else is interested in this topic in the future). After
enabling IOMMU tracing on the host, I was able to confirm that IOMMU mappings
were, in fact, being created properly to map the gPA to the hPA of both
devices' BAR resources.

It turns out that our hardware device provides a backdoor way of reading
PCI config space via BAR-mapped register space. The driver inside the VM
was using that path and thereby reading back the hPA of the BAR (and using
it to program the DMA controller). This breaks the whole pass-through model,
so I'll have to close that loophole on the driver/device side so that the
driver inside the VM is forced to use the standard Linux APIs to read PCI
config space. That way KVM/QEMU can properly intercept the access and return
the gPA values.
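
For anyone who lands here later, the driver-side fix boils down to going
through the kernel's standard (and, in a VM, emulated) view of config space
instead of the device's backdoor registers. Here is a minimal sketch of what
I mean; the peer device pointer and the choice of BAR 2/3 as a 64-bit pair
are made up for illustration and are not our actual driver code:

  #include <linux/pci.h>

  /* Read the peer device's 64-bit memory BAR through the standard config
   * space accessors. Inside the VM these accesses are trapped by KVM/QEMU,
   * so the address we get back is the gPA rather than the hPA. */
  static u64 peer_bar_addr(struct pci_dev *peer)
  {
          u32 lo, hi;

          pci_read_config_dword(peer, PCI_BASE_ADDRESS_2, &lo);
          pci_read_config_dword(peer, PCI_BASE_ADDRESS_3, &hi);

          return ((u64)hi << 32) | (lo & PCI_BASE_ADDRESS_MEM_MASK);
  }

  /* Equivalently, pci_resource_start(peer, 2) reports the same address as
   * seen by the (guest) kernel, and that is the value that should be
   * handed to the other device's DMA engine. */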

Thanks,
-Maran

On Tue, Sep 8, 2020 at 1:14 PM Alex Williamson <alex.william...@redhat.com>
wrote:

> On Tue, 8 Sep 2020 11:31:42 -0700
> Maran Wilson <maran.wil...@gmail.com> wrote:
>
> > On Tue, Sep 8, 2020 at 10:22 AM Alex Williamson <alex.william...@redhat.com>
> > wrote:
> >
> > > On Tue, 8 Sep 2020 09:59:46 -0700
> > > Maran Wilson <maran.wil...@gmail.com> wrote:
> > >
> > > > I'm trying to use the vfio-pci driver to pass-through two PCIe endpoint
> > > > devices into a VM. On the host, each of these PCIe endpoint devices is in
> > > > its own IOMMU group. From inside the VM, I would like to perform P2P DMA
> > > > operations. So basically, programming the DMA engine of one of the devices
> > > > to write directly to a BAR mapped region of the other device.
> > > >
> > > >
> > > > Is this something that is supported by the vfio driver, working with Qemu?
> > > > Are there any VM configuration gotchas I need to keep in mind for this
> > > > particular use-case? I'm on an AMD Rome server, FWIW.
> > > >
> > > >
> > > > This works on the host (when I'm not using VMs) with IOMMU disabled. And it
> > > > also works on the host with the IOMMU enabled as long as I add the
> > > > appropriate IOMMU mapping of the other device's BAR mapped address to the
> > > > appropriate IOMMU group.
> > > >
> > > >
> > > > But from what I can tell, when the endpoint devices are passed through to
> > > > the VM, it doesn't appear that any IOMMU mappings are created on the host
> > > > to translate gPA of the other endpoint's BAR mapped address. But of course
> > > > DMA to/from host DRAM does work in that same configuration, so I know IOMMU
> > > > mappings are being created to translate gPA of DRAM.
> > >
> > > As long as we can mmap the endpoint BAR (>= PAGE_SIZE) then the BAR GPA
> > > should be mapped through the IOMMU to enable p2p within the VM.
> > >
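(For anyone following along later: one way to check from userspace whether
vfio will let you mmap a given BAR is to look at the region info flags. This
is just a sketch against the standard vfio uapi; device_fd and the BAR index
are placeholders, not code from our setup.)

  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Query BAR2's region info from an already-open vfio device fd and check
   * whether vfio advertises it as mmap-capable. It typically won't be if
   * the BAR is smaller than PAGE_SIZE. */
  int bar2_is_mmapable(int device_fd)
  {
          struct vfio_region_info info = {
                  .argsz = sizeof(info),
                  .index = VFIO_PCI_BAR2_REGION_INDEX,
          };

          if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info) < 0)
                  return 0;

          return !!(info.flags & VFIO_REGION_INFO_FLAG_MMAP);
  }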
> >
> > Thanks Alex. Does it require the endpoints to have a common root port inside
> > the VM? Or does that part not matter?
>
> There could be platform specific topology requirements, but since you
> indicate it works on the host with the IOMMU enabled, I assume we're ok.
>
> > If you happen to know a routine name or two in the driver and/or qemu
> > that handles this, it would help me get my bearings sooner and allow me to
> > instrument from the kernel side too to see why my experiment is not working.
>
> In QEMU all vfio mappings are handled through a MemoryListener where
> vfio_listener_region_{add,del} are the callbacks.  vfio_dma_map() and
> vfio_dma_unmap() are the wrappers for the ioctl call into the kernel.
> BAR mappings will report true for memory_region_is_ram_device(), so we
> won't consider it a fatal error when we can't map them, but we'll
> certainly try to map them.
>
> On the kernel side, you'd be using the type1 IOMMU backend, where the
> ioctl will go through vfio_iommu_type1_ioctl() and should land in
> vfio_dma_do_map() for the mapping case.
>
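(Tying those two ends together for anyone reading along later: what QEMU's
vfio_dma_map() wrapper ultimately issues, and what vfio_dma_do_map() handles
in the kernel, is the type1 VFIO_IOMMU_MAP_DMA ioctl. A rough userspace
sketch of that call follows; container_fd, the vaddr, the IOVA and the size
are placeholders:)

  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Map one page of process-virtual memory (e.g. an mmap of a device BAR)
   * at a chosen IOVA (the gPA, in QEMU's case) in the container's IOMMU
   * domain. */
  int map_one_page(int container_fd, void *vaddr, __u64 iova)
  {
          struct vfio_iommu_type1_dma_map map = {
                  .argsz = sizeof(map),
                  .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
                  .vaddr = (__u64)(unsigned long)vaddr,
                  .iova  = iova,
                  .size  = 4096,
          };

          return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
  }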
> > > You should be able to see this with tracing enabled in QEMU for vfio*.
> > >
> >
> > I will try that too, thanks!
> >
> > By the way, is this functionality present as far back as the 4.15 kernel?
>
> It's essentially always been present.  Thanks,
>
> Alex
>
>