On 23.09.25 15:12, Jason Gunthorpe wrote:
>> When you want to communicate addresses in a device-specific address
>> space you need a device-specific type for that and not abuse
>> phys_addr_t.
>
> I'm not talking about abusing phys_addr_t, I'm talking about putting a
> legitimate CPU address in there.
>
> You can argue it is a hack in Xe to reverse engineer the VRAM offset
> from a CPU physical, and I would be sympathetic, but it does allow
> VFIO to be general, not specialized to Xe.
No, exactly that doesn't work for all use cases. That's why I'm pushing
back so hard on using phys_addr_t or CPU addresses.

See, the CPU address is only temporarily valid because the VF BAR is
only a window into the device memory. This window is open as long as
the CPU is using it, but as soon as that is no longer the case the
window might close, creating tons of lifetime issues.

>> The real question is where does VFIO get the necessary information
>> about which parts of the BAR to expose?
>
> It needs a variant driver that understands how to reach into the PF
> parent and extract this information.
>
> There is a healthy amount of annoyance to building something like this.
>
>>> From this thread I think if VFIO had the negotiated option to export a
>>> CPU phys_addr_t then the Xe PF driver could reliably convert that to a
>>> VRAM offset.
>>>
>>> We need to add a CPU phys_addr_t option for VFIO to iommufd and KVM
>>> anyhow; those cases can't use dma_addr_t.
>>
>> Clear NAK to using CPU phys_addr_t. This is just a horrible idea.
>
> We already talked about this, and Simona agreed: we need to get
> phys_addr_t optionally out of VFIO's dmabuf for a few importers. We
> cannot use dma_addr_t.

I'm not saying that we should use dma_addr_t, but using phys_addr_t is
equally broken, and I will certainly NAK any approach using it as a
general interface between drivers.

What Simona agreed on is exactly what I proposed as well: that you get
a private interface for exactly that use case.

Regards,
Christian.

>
> Jason
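PS: Just to sketch what I mean by a private interface (illustration
only; every name below is made up, nothing like this exists today): the
importer gets an offset in the device's own VRAM address space plus a
revocation callback for when the BAR window closes, never a CPU
physical address.

#include <linux/types.h>

struct pci_dev;

/*
 * Hypothetical sketch of a driver-private contract between the Xe PF
 * driver and a VFIO variant driver. None of these symbols exist.
 */
struct xe_vram_region {
	u64 offset;	/* offset in the device-private VRAM address space */
	u64 size;
};

/*
 * Registered by the importer so the PF driver can revoke the region
 * when the VF BAR window is about to close; this is the lifetime
 * problem a raw CPU address cannot express.
 */
struct xe_vram_notifier {
	void (*invalidate)(struct xe_vram_notifier *notifier);
};

/*
 * Resolve a VF BAR to a region in the device's own address space.
 * The result stays valid only until @notifier->invalidate() runs.
 */
int xe_pf_get_vf_vram_region(struct pci_dev *vf_dev, unsigned int bar,
			     struct xe_vram_notifier *notifier,
			     struct xe_vram_region *region);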
