On Wed, Jan 28, 2026 at 10:42:53AM -0800, Matthew Brost wrote:
> Yes, this is exactly what I envision here. First, let me explain the
> possible addressing modes on the UAL fabric:
>
> - Physical (akin to IOMMU passthrough)
> - Virtual (akin to IOMMU enabled)
>
> Physical mode is straightforward — resolve the PFN to a cross-device
> physical address, then install it into the initiator’s page tables
> along with a bit indicating routing over the network. In this mode,
> the vfuncs here are basically NOPs.
Ugh, of course they would invent something so complicated.

I'm not convinced this should be hidden inside DRM. The DMA API is the
place to make things mappable, and for an open standard like UALink it
would make sense for the DMA API to be the broker that connects things,
as it will be more than just one GPU driver talking to itself.

There is a journey to get there, but I don't think it is too
complicated. It also probably ties in fairly nicely with the ideas
coming for multi-path PCIe fabrics.

> So, if it isn’t clear — these vfuncs hide whether PCIe P2P is being
> used (IOMMU in passthrough or enabled) or UAL is being used (physical
> or virtual) from the DRM common layer. They manage the resources for
> the connection and provide the information needed to program the
> initiator PTEs (address + a “use interconnect” vs. “use PCIe P2P”
> bit).

This looks like it is taking the DMA API and sticking drm_ in front of
it :(

I don't think this is a good direction for the kernel; DRM should not
be internally building such key infrastructure. I'm confident we will
see NICs and storage wired up to these fabrics as well.

Jason
