Hi Christian,

I have a few more questions inline.

From: Christian König <christian.koe...@amd.com>
Sent: Wednesday, January 24, 2024 3:33 AM
To: Zeng, Oak <oak.z...@intel.com>; Danilo Krummrich <d...@redhat.com>; Dave 
Airlie <airl...@redhat.com>; Daniel Vetter <dan...@ffwll.ch>; Felix Kuehling 
<felix.kuehl...@amd.com>
Cc: Welty, Brian <brian.we...@intel.com>; dri-devel@lists.freedesktop.org; 
intel...@lists.freedesktop.org; Bommu, Krishnaiah <krishnaiah.bo...@intel.com>; 
Ghimiray, Himal Prasad <himal.prasad.ghimi...@intel.com>; 
thomas.hellst...@linux.intel.com; Vishwanathapura, Niranjana 
<niranjana.vishwanathap...@intel.com>; Brost, Matthew 
<matthew.br...@intel.com>; Gupta, saurabhg <saurabhg.gu...@intel.com>
Subject: Re: Making drm_gpuvm work across gpu devices

On 23.01.24 at 20:37, Zeng, Oak wrote:

[SNIP]



Yes, most APIs are per-device based.



One exception I know of is the kfd SVM API. If you look at the svm_ioctl 
function, it is per-process based. Each kfd_process represents a process 
across N GPU devices.
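
For reference, a rough sketch of the shape of that uAPI; this approximates 
the definitions in include/uapi/linux/kfd_ioctl.h rather than quoting them 
verbatim:

/* Approximate shape of the KFD SVM uAPI. The key point: the ioctl
 * addresses a CPU VA range of the *process*, and the attributes name
 * devices by ID -- there is no per-device file descriptor involved.
 */
struct kfd_ioctl_svm_attribute {
	__u32 type;	/* e.g. preferred location, access flags */
	__u32 value;	/* often a GPU id, so one call can span N devices */
};

struct kfd_ioctl_svm_args {
	__u64 start_addr;	/* CPU VA; applies to the whole process */
	__u64 size;
	__u32 op;		/* set or get attributes */
	__u32 nattr;
	struct kfd_ioctl_svm_attribute attrs[];
};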

Yeah and that was a big mistake in my opinion. We should really not do that 
ever again.



It should be said that kfd SVM represents a shared virtual address space 
across the CPU and all GPU devices on the system. This follows from the 
definition of SVM (shared virtual memory). It is very different from our 
legacy GPU *device* drivers, which work with only one device (i.e., if you 
want one device to access another device's memory, you have to use dma-buf 
export/import, etc.).
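
For contrast, here is a minimal sketch of that legacy cross-device path 
using the generic DRM PRIME ioctls; error handling is omitted and the 
helper name share_bo() is made up for illustration:

#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

/* Export a BO from device A as a dma-buf fd, then import it on device B.
 * Returns the handle valid on device B. Error handling omitted.
 */
static uint32_t share_bo(int fd_gpu_a, int fd_gpu_b, uint32_t bo_handle)
{
	struct drm_prime_handle exp = { .handle = bo_handle, .flags = DRM_CLOEXEC };
	struct drm_prime_handle imp = { 0 };

	ioctl(fd_gpu_a, DRM_IOCTL_PRIME_HANDLE_TO_FD, &exp);	/* fills exp.fd */
	imp.fd = exp.fd;
	ioctl(fd_gpu_b, DRM_IOCTL_PRIME_FD_TO_HANDLE, &imp);	/* fills imp.handle */
	return imp.handle;
}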

Exactly that thinking is what we have currently found to be a blocker for 
virtualization projects. Having SVM as a device-independent feature which 
somehow ties to the process address space turned out to be an extremely bad 
idea.

The background is that this only works for some use cases but not all of them.

What's working much better is to just have a mirror functionality which says 
that a range A..B of the process address space is mapped into a range C..D of 
the GPU address space.

Those ranges can then be used to implement the SVM feature required by 
higher-level APIs; it is not something you need at the UAPI level or even 
inside the low-level kernel memory management.
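
To make the idea concrete, a minimal sketch of such driver-internal range 
bookkeeping; all names here (gpu_mirror_range, gpu_vm) are hypothetical:

/* One mirror mapping: CPU VA range [cpu_start, cpu_start + size) of a
 * given mm is mirrored into GPU VA range [gpu_start, gpu_start + size)
 * of one device's VM. Nothing assumes CPU VA == GPU VA, and every
 * mapping is tied to exactly one device.
 */
struct gpu_mirror_range {
	struct mm_struct *mm;		/* process address space being mirrored */
	unsigned long cpu_start;	/* A: start of the CPU VA range */
	unsigned long gpu_start;	/* C: start of the GPU VA range */
	unsigned long size;		/* B - A == D - C */
	struct gpu_vm *vm;		/* per-device GPU VM context */
};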


The whole purpose of the HMM design is to create a shared address space 
between the CPU and GPU programs. See here: 
https://www.kernel.org/doc/Documentation/vm/hmm.rst. Mapping a process 
address range A..B to a range C..D of the GPU address space is exactly what 
is referred to as the “split address space” model in the HMM design.
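
For illustration, a sketch roughly following the hmm_range_fault() usage 
pattern documented in hmm.rst; device_update_page_table() is a hypothetical 
driver hook, and the driver-side locking around the retry check is elided:

/* Mirror the CPU pages backing [start, end) into a device page table. */
static int mirror_range(struct mm_struct *mm,
			struct mmu_interval_notifier *notifier,
			unsigned long start, unsigned long end,
			unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)
			goto again;
		return ret;
	}
	/* A real driver holds its page-table lock across the retry check
	 * and the update; elided here for brevity. */
	if (mmu_interval_read_retry(notifier, range.notifier_seq))
		goto again;
	return device_update_page_table(pfns, start, end);	/* hypothetical */
}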



When you talk about migrating memory to a device, you also do this on a 
per-device basis and *not* tied to the process address space. If you then 
get crappy performance because userspace gave contradicting information 
about where to migrate memory, then that's a bug in userspace and not 
something the kernel should try to prevent somehow.
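
As a concrete illustration of that per-device nature, a sketch along the 
lines of the migrate_vma_*() helpers; alloc_device_page() is a hypothetical 
driver allocator and the actual copy is only indicated by a comment:

/* Migrate the pages of [start, end) in vma to one specific device's
 * memory. The target is chosen by the driver per device; nothing here
 * is a property of the process address space itself.
 */
static int migrate_range_to_device(struct vm_area_struct *vma,
				   unsigned long start, unsigned long end,
				   unsigned long *src, unsigned long *dst,
				   void *pgmap_owner)
{
	struct migrate_vma migrate = {
		.vma = vma,
		.start = start,
		.end = end,
		.src = src,
		.dst = dst,
		.pgmap_owner = pgmap_owner,
		.flags = MIGRATE_VMA_SELECT_SYSTEM,
	};
	struct page *dpage;
	unsigned long i;
	int ret;

	ret = migrate_vma_setup(&migrate);
	if (ret)
		return ret;
	for (i = 0; i < migrate.npages; i++) {
		if (!(migrate.src[i] & MIGRATE_PFN_MIGRATE))
			continue;
		dpage = alloc_device_page();	/* hypothetical */
		/* ... copy the source page to dpage with the device's
		 * copy engine ... */
		migrate.dst[i] = migrate_pfn(page_to_pfn(dpage));
	}
	migrate_vma_pages(&migrate);
	migrate_vma_finalize(&migrate);
	return 0;
}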

[SNIP]


I think if you start using the same drm_gpuvm for multiple devices you will 
sooner or later start to run into the same mess we have seen with KFD, 
where we moved more and more functionality from the KFD to the DRM render 
node because we found that a lot of the stuff simply doesn't work correctly 
with a single object to maintain the state.



As I understand it, KFD is designed to work across devices. A single pseudo 
/dev/kfd device represents all hardware GPU devices. That is why during kfd 
open, multiple pdds (process device data) are created, one for each 
hardware device for this process.
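
For readers unfamiliar with KFD, a simplified sketch of that per-process 
bookkeeping, approximating (not quoting) the structures in 
drivers/gpu/drm/amd/amdkfd/kfd_priv.h:

/* One kfd_process spans the whole CPU process; one kfd_process_device
 * (pdd) is attached per GPU the process has opened.
 */
struct kfd_process_device {
	struct kfd_process *process;	/* back-pointer to owning process */
	struct kfd_node *dev;		/* the one GPU this pdd is bound to */
	/* per-device queues, doorbells, VM, ... */
};

struct kfd_process {
	struct mm_struct *mm;		/* the CPU address space being shared */
	struct kfd_process_device *pdds[MAX_GPU_INSTANCE];
	uint32_t n_pdds;		/* one entry per GPU device */
};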

Yes, I'm perfectly aware of that. And I can only repeat myself that I see this 
design as a rather extreme failure. And I think it's one of the reasons why 
NVidia is so dominant with Cuda.

This whole approach KFD takes was designed with the idea of extending the CPU 
process into the GPUs, but this idea only works for a few use cases and is not 
something we should apply to drivers in general.

A very good example is the virtualization use case where you end up with 
CPU address != GPU address because the VAs actually come from the guest VM 
and not the host process.


Are you talking about a general virtualization setup such as SR-IOV or GPU 
device pass-through, or something else?

In a typical virtualization setup, a GPU driver such as xekmd or amdgpu is 
always a guest driver. In the xekmd case, xekmd doesn't need to know it is 
operating in a virtualized environment, so the virtual addresses in the 
driver are guest virtual addresses. From the KMD's perspective, there is no 
difference between bare metal and virtualized.

Are you talking about a special virtualized setup such as 
para-virtualization/VirGL? I need more background info to understand why 
you end up with CPU address != GPU address in SVM.


SVM is a high-level concept of OpenCL, Cuda, ROCm, etc. It should not have 
any influence on the design of the kernel UAPI.


Maybe there is a terminology problem here. I agree with what you said 
above. We have also achieved the SVM design with our BO-centric drivers 
such as i915 and xekmd.

But we are mainly talking about the system allocator here, i.e., using 
malloc'ed memory directly in a GPU program, and we want to leverage HMM for 
that. The system allocator can be used to implement the same SVM concept as 
in OpenCL/Cuda/ROCm, but SVM can also be implemented with a BO-centric 
driver.
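
To make the distinction concrete, a sketch of what "system allocator" means 
from the application's point of view; gpu_launch() stands in for whatever 
runtime entry point actually submits work and is purely hypothetical:

#include <stdlib.h>

/* Hypothetical runtime entry point -- think of an OpenCL/Level Zero
 * submission that accepts a raw pointer. */
void gpu_launch(const char *kernel_name, float *ptr, size_t n);

int main(void)
{
	size_t n = 1 << 20;
	float *data = malloc(n * sizeof(*data));	/* ordinary CPU malloc */
	size_t i;

	for (i = 0; i < n; i++)
		data[i] = (float)i;

	/* The GPU dereferences the very same pointer; the kernel driver
	 * faults the pages in via HMM instead of requiring a BO. */
	gpu_launch("scale_kernel", data, n);

	free(data);
	return 0;
}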


If you want to do something similar to KFD for Xe, I think you need to get 
explicit permission to do this from Dave and Daniel and maybe even Linus.

If you look at my series 
https://lore.kernel.org/dri-devel/20231221043812.3783313-1-oak.z...@intel.com/, 
I am not doing things the way KFD does.

Regards,
Oak


Regards,
Christian.
