On Thu, Jan 29, 2026 at 11:28:38AM -0800, Matthew Brost wrote:
> > The DMA API is already bus agnostic; I think there is no issue plugging
> > in a ualink_device or whatever under there and making it do something
> 
> I have thought about this, which is why our idea was to roughly
> duplicate the DMA API and layer it in almost exactly the same way. My
> only concern is the semantics.
> 
> dma_iova_alloc() ← This is reclaim-safe currently, AFAIK.
> 
> ual_iova_alloc() ← If this allocates GPU memory for page tables, it is
> basically impossible to make reclaim-safe (i.e., callable under a
> notifier lock) or to keep clear of dma-resv locks (i.e., callable in
> map_dma_buf) without subsystem-level rewrites in DRM for allocating
> memory and driver-level rewrites of the bind code for Xe, Nouveau
> (likely Nova), and AMDGPU.

If GFP_NO_RECLAIM is your only issue, I'm sure that can be dealt with.
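
For instance, the usual pattern is to pre-allocate the page-table pages
in sleepable context and only consume them under the notifier lock, so
nothing under the lock ever enters reclaim. A rough sketch (the
ual_pt_cache names are hypothetical, not in-tree API):

struct ual_pt_cache {
	struct page *pages[8];	/* hypothetical prealloc bound */
	unsigned int nr;
};

/* Sleepable context, before taking the notifier lock. */
static int ual_pt_cache_fill(struct ual_pt_cache *cache)
{
	while (cache->nr < ARRAY_SIZE(cache->pages)) {
		struct page *p = alloc_page(GFP_KERNEL);

		if (!p)
			return -ENOMEM;
		cache->pages[cache->nr++] = p;
	}
	return 0;
}

/* Under the notifier lock; never allocates, so never reclaims. */
static struct page *ual_pt_cache_get(struct ual_pt_cache *cache)
{
	return cache->nr ? cache->pages[--cache->nr] : NULL;
}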

> Then of course dma_addr_t now means something entirely different from
> the original intent.

No, dma_addr_t means an address the DMA API created for a specific
struct device; it represents that device's address space.

There is no issue with a ual_link device having an address space
separate from a pci_device's.
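
To make that concrete, a minimal sketch of what the mirrored address
space could look like (ual_addr_t, struct ual_device and
ual_iova_alloc() are hypothetical, following the naming above):

#include <linux/device.h>
#include <linux/types.h>

/* An address meaningful only within one ualink device's fabric
 * address space, exactly as a dma_addr_t is only meaningful for
 * the struct device the DMA API created it for. */
typedef u64 ual_addr_t;

struct ual_device {
	struct device dev;	/* the ualink link endpoint */
	/* fabric routing / window state would live here */
};

/* Hypothetical allocator mirroring the IOVA-style DMA API: the
 * returned address is only valid in @ual's address space. */
ual_addr_t ual_iova_alloc(struct ual_device *ual, size_t size);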

> DMA API, which I believe should work aside from the semantic changes
> and perhaps minor tweaks to go from struct page -> physical address
> over the network.

We got rid of struct page from the core DMA API already.

I think your biggest challenge will be to describe the GPU VRAM in a
way that is meaningful relative to the ualink networking... a
phys_addr_t might not cut it.
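
That is, something along these lines rather than a bare phys_addr_t
(struct ual_mem_desc is a made-up illustration):

/* A CPU phys_addr_t has no meaning on the fabric; the VRAM
 * description likely needs a fabric-level identity plus an offset
 * within that endpoint's memory. */
struct ual_mem_desc {
	u16 fabric_id;	/* which ualink endpoint owns the VRAM */
	u64 offset;	/* offset within that endpoint's VRAM */
	u64 len;	/* length of the region */
};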

Jason
