On Wed, 3 Dec 2025 14:18:17 -0700
Stephen Bates <[email protected]> wrote:

> This patch introduces a PCI MMIO Bridge device that enables PCI devices
> to perform MMIO operations on other PCI devices via command packets. This
> provides software-defined PCIe peer-to-peer (P2P) communication without
> requiring specific hardware topology.

Who is supposed to use this and why wouldn't they just use bounce
buffering through a guest kernel driver?  Is rudimentary data movement
something we really want/need to push to the VMM?  The device seems
inherently insecure.

> Configuration:
>   qemu-system-x86_64 -machine q35 \
>       -device pci-mmio-bridge,shadow-gpa=0x80000000,shadow-size=4096
> 
> - shadow-gpa: Guest physical address (default: 0x80000000, 0=auto)
> - shadow-size: Buffer size in bytes (default: 4096, min: 4096)
> - poll-interval-ns: Polling interval (default: 1000000 = 1ms)
> - enabled: Enable/disable bridge (default: true)

Wouldn't it make more sense if the buffer were allocated by the guest
driver and programmed at runtime?  Polling just adds yet more
questionable VMM overhead; why not ioeventfds?
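
As a rough sketch of what I mean (the state layout, names, and the
doorbell-at-offset-0 convention below are purely illustrative, not
taken from this patch): a guest write to a doorbell register in the
bridge's BAR can signal an eventfd directly from KVM, so the VMM only
does work when there is actually a command to process:

  #include "qemu/osdep.h"
  #include "qapi/error.h"
  #include "qemu/event_notifier.h"
  #include "qemu/main-loop.h"
  #include "hw/pci/pci_device.h"

  /* Hypothetical device state; field names are illustrative only. */
  typedef struct PCIMMIOBridgeState {
      PCIDevice parent_obj;
      MemoryRegion mmio;       /* BAR 0, registered elsewhere in realize */
      EventNotifier doorbell;  /* signalled by KVM on guest doorbell writes */
  } PCIMMIOBridgeState;

  static void bridge_doorbell_notify(EventNotifier *e)
  {
      PCIMMIOBridgeState *s = container_of(e, PCIMMIOBridgeState, doorbell);

      event_notifier_test_and_clear(e);
      /* Drain command packets from the shadow buffer here, rather than
       * waking up on a timer. */
      (void)s;
  }

  static void bridge_setup_doorbell(PCIMMIOBridgeState *s, Error **errp)
  {
      if (event_notifier_init(&s->doorbell, 0) < 0) {
          error_setg(errp, "pci-mmio-bridge: cannot create doorbell eventfd");
          return;
      }
      event_notifier_set_handler(&s->doorbell, bridge_doorbell_notify);

      /* Guest writes to offset 0 of BAR 0 now signal the eventfd in the
       * kernel; no MMIO dispatch in the VMM and no polling loop. */
      memory_region_add_eventfd(&s->mmio, 0 /* addr */, 4 /* size */,
                                false /* match_data */, 0 /* data */,
                                &s->doorbell);
  }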

> The bridge exposes shadow buffer information via a vendor-specific PCI config
> space:
> 
>   Offset 0x40: GPA bits [31:0]
>   Offset 0x44: GPA bits [63:32]
>   Offset 0x48: Buffer size
>   Offset 0x4C: Queue depth

Arbitrary registers like this should be exposed via BARs, or at least in
a vendor-specific capability within config space.
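
Roughly along these lines; treat it as a sketch of the shape only, the
layout below is made up and the exact pci_add_capability() signature
has varied across QEMU versions:

  #include "qemu/osdep.h"
  #include "hw/pci/pci_device.h"

  #define BRIDGE_VNDR_CAP_LEN  0x14   /* header + 4 dwords, illustrative */

  /* Describe the shadow buffer via a vendor-specific capability (ID 0x09)
   * instead of bare registers at a fixed 0x40 offset. */
  static void bridge_add_vendor_cap(PCIDevice *pdev, uint64_t shadow_gpa,
                                    uint32_t shadow_size, uint32_t qdepth)
  {
      int cap = pci_add_capability(pdev, PCI_CAP_ID_VNDR, 0,
                                   BRIDGE_VNDR_CAP_LEN);

      /* Byte 2 of a vendor-specific capability is its length. */
      pdev->config[cap + 2] = BRIDGE_VNDR_CAP_LEN;

      pci_set_long(pdev->config + cap + 0x04, shadow_gpa & 0xffffffff);
      pci_set_long(pdev->config + cap + 0x08, shadow_gpa >> 32);
      pci_set_long(pdev->config + cap + 0x0c, shadow_size);
      pci_set_long(pdev->config + cap + 0x10, qdepth);
  }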

...
> +
> +VFIO can only map guest RAM not emulated PCI MMIO space. And, at the present

The "only guest RAM" part is false; the "not emulated PCI MMIO space"
part is true.

> +time, VFIO cannot map MMIO space into an IOVA mapping. Therefore the PCI MMIO

Other assigned device MMIO it absolutely can map.  The legacy type1
support has always had this, and IOMMUFD-based vfio is about to gain it
via dma-buf sharing as well.
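
For the record, mapping another assigned device's BAR into the
container's IOVA space with type1 is just the usual mmap + MAP_DMA
sequence; a rough sketch (error handling trimmed, region index and iova
choice left to the caller):

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/vfio.h>

  /* Map BAR0 of an already-opened vfio device into the IOVA space of
   * its type1 container, so a peer device can DMA to it. */
  static int map_bar_into_iova(int container_fd, int device_fd,
                               uint64_t iova)
  {
      struct vfio_region_info reg = {
          .argsz = sizeof(reg),
          .index = VFIO_PCI_BAR0_REGION_INDEX,
      };
      if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0) {
          return -1;
      }

      void *bar = mmap(NULL, reg.size, PROT_READ | PROT_WRITE,
                       MAP_SHARED, device_fd, reg.offset);
      if (bar == MAP_FAILED) {
          return -1;
      }

      struct vfio_iommu_type1_dma_map map = {
          .argsz = sizeof(map),
          .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
          .vaddr = (uintptr_t)bar,
          .iova = iova,
          .size = reg.size,
      };
      return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
  }

Thanks,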

Alex
