Hi Chunming,

From: 周春明(日月) <riyue....@alibaba-inc.com>
Sent: Thursday, January 25, 2024 6:01 AM
To: Zeng, Oak <oak.z...@intel.com>; Christian König <christian.koe...@amd.com>; 
Danilo Krummrich <d...@redhat.com>; Dave Airlie <airl...@redhat.com>; Daniel 
Vetter <dan...@ffwll.ch>; Felix Kuehling <felix.kuehl...@amd.com>; Shah, Ankur 
N <ankur.n.s...@intel.com>; Winiarski, Michal <michal.winiar...@intel.com>
Cc: Brost, Matthew <matthew.br...@intel.com>; thomas.hellst...@linux.intel.com; 
Welty, Brian <brian.we...@intel.com>; dri-devel@lists.freedesktop.org; 
Ghimiray, Himal Prasad <himal.prasad.ghimi...@intel.com>; Gupta, saurabhg 
<saurabhg.gu...@intel.com>; Bommu, Krishnaiah <krishnaiah.bo...@intel.com>; 
Vishwanathapura, Niranjana <niranjana.vishwanathap...@intel.com>; 
intel...@lists.freedesktop.org
Subject: Re: Making drm_gpuvm work across gpu devices

[snip]

fd0 = open(card0)
fd1 = open(card1)

vm0 = xe_vm_create(fd0)   // driver creates the process's xe_svm on the process's first vm_create
vm1 = xe_vm_create(fd1)   // driver re-uses the xe_svm created above if called from the same process

queue0 = xe_exec_queue_create(fd0, vm0)
queue1 = xe_exec_queue_create(fd1, vm1)

// check p2p capability by calling the L0 API…

ptr = malloc()            // this replaces bo_create, vm_bind, dma-buf import/export

xe_exec(queue0, ptr)      // submit a gpu job which uses ptr, on card0
xe_exec(queue1, ptr)      // submit a gpu job which uses ptr, on card1

// GPU page faults handle memory allocation/migration/mapping to the GPU
[snip]
Hi Oak,
From your sample code, you need a VA manager not only across GPU devices, but also 
across the CPU, right?

No. Per the feedback from Christian and Danilo, I will give up the idea of making 
drm_gpuvm work across gpu devices. I might come back to it later, but for now it is 
no longer the plan.

I think you need a UVA (unified VA) manager in user space, and the range used by 
drm_gpuvm should be reserved from the CPU VA space. That way, malloc'ed VAs and GPU 
VAs live in the same space and will not conflict. Then, via the HMM mechanism, GPU 
devices can safely use the VAs passed from HMM.
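
A rough sketch of that reservation idea (illustration only, not an existing 
interface): carve a range out of the CPU VA space with an inaccessible anonymous 
mapping, so malloc() never returns addresses from it, and hand that range to the 
GPU VA manager.

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

static void *reserve_gpu_va_range(size_t size)
{
        /* PROT_NONE + MAP_NORESERVE consume address space only, no memory */
        void *base = mmap(NULL, size, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

        return base == MAP_FAILED ? NULL : base;
}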

Under HMM, the GPU and CPU simply share the same address space: the same virtual 
address represents the same allocation for both the CPU and the GPUs. See the HMM 
doc here: https://www.kernel.org/doc/Documentation/vm/hmm.rst. The user space 
program doesn't need to reserve any address range; all address ranges are managed 
by the Linux kernel core mm. Today the GPU KMD only has some structures to save 
address-range-based memory attributes.
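
For reference, a minimal sketch of the mirroring pattern described in the hmm.rst 
document above: fault in a range of the process address space with 
hmm_range_fault() and program the GPU page tables from the result. The 
my_driver_*() helpers and the driver lock are placeholders, not xekmd code.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

extern void my_driver_lock(void);                  /* placeholder */
extern void my_driver_unlock(void);                /* placeholder */
extern void my_driver_update_gpu_page_tables(unsigned long start,
                                             unsigned long end,
                                             unsigned long *pfns);  /* placeholder */

static int mirror_range(struct mmu_interval_notifier *notifier,
                        unsigned long start, unsigned long end,
                        unsigned long *pfns)
{
        struct hmm_range range = {
                .notifier      = notifier,
                .start         = start,
                .end           = end,
                .hmm_pfns      = pfns,
                .default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
        };
        struct mm_struct *mm = notifier->mm;
        int ret;

again:
        range.notifier_seq = mmu_interval_read_begin(notifier);

        mmap_read_lock(mm);
        ret = hmm_range_fault(&range);        /* populate pfns for [start, end) */
        mmap_read_unlock(mm);
        if (ret == -EBUSY)
                goto again;                   /* range was invalidated, retry */
        if (ret)
                return ret;

        my_driver_lock();                     /* serialize against invalidation */
        if (mmu_interval_read_retry(notifier, range.notifier_seq)) {
                my_driver_unlock();
                goto again;
        }
        my_driver_update_gpu_page_tables(start, end, pfns);
        my_driver_unlock();
        return 0;
}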

Regards,
Oak

By the way, I'm not familiar with drm_gpuvm. Traditionally, GPU drivers often put 
the VA manager in user space, so I'm not sure what benefit we get from a drm_gpuvm 
invented in kernel space. Can anyone help explain more?

- Chunming
------------------------------------------------------------------
From: Zeng, Oak <oak.z...@intel.com>
Sent: Thursday, January 25, 2024 09:17
To: "Christian König" <christian.koe...@amd.com>; Danilo Krummrich <d...@redhat.com>; 
Dave Airlie <airl...@redhat.com>; Daniel Vetter <dan...@ffwll.ch>; Felix Kuehling 
<felix.kuehl...@amd.com>; "Shah, Ankur N" <ankur.n.s...@intel.com>; "Winiarski, 
Michal" <michal.winiar...@intel.com>
Cc: "Brost, Matthew" <matthew.br...@intel.com>; thomas.hellst...@linux.intel.com; 
"Welty, Brian" <brian.we...@intel.com>; dri-devel@lists.freedesktop.org; 
"Ghimiray, Himal Prasad" <himal.prasad.ghimi...@intel.com>; "Gupta, saurabhg" 
<saurabhg.gu...@intel.com>; "Bommu, Krishnaiah" <krishnaiah.bo...@intel.com>; 
"Vishwanathapura, Niranjana" <niranjana.vishwanathap...@intel.com>; 
intel...@lists.freedesktop.org
Subject: RE: Making drm_gpuvm work across gpu devices

Hi Christian,

Even though I mentioned the KFD design, I didn't mean to copy it. I also had a hard 
time understanding KFD's difficulty in a virtualized environment.

For us, xekmd doesn't need to know whether it is running on bare metal or in a 
virtualized environment; xekmd is always a guest driver. All the virtual addresses 
used in xekmd are guest virtual addresses. For SVM, we require all the VF devices 
to share one single address space with the guest CPU program, so any design that 
works in a bare metal environment automatically works in a virtualized environment. 
+@Shah, Ankur N +@Winiarski, Michal to back me up if I am wrong.

Again, a shared virtual address space between the CPU and all GPU devices is a hard 
requirement for our system allocator design (which means malloc'ed memory, CPU 
stack variables, and globals can be used directly in a GPU program; the same 
requirement as the KFD SVM design). This was aligned with our user space software 
stack.

For anyone who wants to implement a system allocator, or SVM, this is a hard 
requirement. I started this thread hoping I could leverage the drm_gpuvm design to 
manage the shared virtual address space (the address range split/merge logic was 
scary to me and I didn't want to re-invent it). I guess my takeaway from you and 
Danilo is that this approach is a NAK. Thomas also mentioned to me that drm_gpuvm 
is overkill for our SVM address range split/merge, so I will first make things work 
by managing the address ranges internally in xekmd. I can revisit the drm_gpuvm 
approach in the future.
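
As a rough illustration of what managing the address ranges internally could look 
like, here is a sketch built on the kernel's generic interval tree; the my_svm / 
my_svm_range names are illustrative, not Xe structures.

#include <linux/interval_tree.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct my_svm_range {
        struct interval_tree_node it;   /* it.start / it.last bound the range */
        u32 attributes;                 /* e.g. preferred placement, caching */
};

struct my_svm {
        struct rb_root_cached ranges;   /* all ranges of this process */
        struct mutex lock;
};

static int my_svm_add_range(struct my_svm *svm, unsigned long start,
                            unsigned long last, u32 attributes)
{
        struct my_svm_range *range = kzalloc(sizeof(*range), GFP_KERNEL);

        if (!range)
                return -ENOMEM;

        range->it.start = start;
        range->it.last = last;
        range->attributes = attributes;

        mutex_lock(&svm->lock);
        interval_tree_insert(&range->it, &svm->ranges);
        mutex_unlock(&svm->lock);
        return 0;
}

static struct my_svm_range *my_svm_find_range(struct my_svm *svm,
                                              unsigned long addr)
{
        struct interval_tree_node *node;

        node = interval_tree_iter_first(&svm->ranges, addr, addr);
        return node ? container_of(node, struct my_svm_range, it) : NULL;
}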

Maybe a pseudo user program can illustrate our programming model:


fd0 = open(card0)
fd1 = open(card1)

vm0 = xe_vm_create(fd0)   // driver creates the process's xe_svm on the process's first vm_create
vm1 = xe_vm_create(fd1)   // driver re-uses the xe_svm created above if called from the same process

queue0 = xe_exec_queue_create(fd0, vm0)
queue1 = xe_exec_queue_create(fd1, vm1)

// check p2p capability by calling the L0 API…

ptr = malloc()            // this replaces bo_create, vm_bind, dma-buf import/export

xe_exec(queue0, ptr)      // submit a gpu job which uses ptr, on card0
xe_exec(queue1, ptr)      // submit a gpu job which uses ptr, on card1

// GPU page faults handle memory allocation/migration/mapping to the GPU

As you can see from the above model, our design is a little different from the KFD 
design: the user needs to explicitly create a gpuvm (vm0 and vm1 above) for each 
GPU device. Internally, the driver has an xe_svm representing the shared address 
space between the CPU and multiple GPU devices, but the end user doesn't see it and 
doesn't need to create it. The shared virtual address space is really managed by 
the Linux core mm (through the vma struct, mm_struct, etc.). From each GPU device's 
perspective, it just operates under its own gpuvm, unaware of the existence of the 
other gpuvms, even though in reality all those gpuvms share the same virtual 
address space.
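
A minimal sketch of that structure (illustrative names only, not the actual xekmd 
implementation): one per-process SVM object shared by all devices, keyed by the 
process's mm_struct so every vm_create from the same process finds the same object, 
plus one per-device VM that points at it. Locking around the lookup/insert is 
omitted for brevity, and the range tracking would be the kind of interval tree 
sketched earlier.

#include <linux/err.h>
#include <linux/kref.h>
#include <linux/mm_types.h>
#include <linux/slab.h>
#include <linux/xarray.h>

struct my_device;

struct my_svm {                          /* one per process, shared by all devices */
        struct mm_struct *mm;            /* the CPU address space being mirrored */
        struct kref ref;
        /* plus the address range tracking sketched earlier */
};

struct my_vm {                           /* one per device, made by xe_vm_create */
        struct my_device *dev;
        struct my_svm *svm;              /* same object for every vm of this process */
        /* per-device GPU page tables, ... */
};

static DEFINE_XARRAY(svm_by_mm);         /* mm -> my_svm, so vm_create can re-use it */

static struct my_svm *my_svm_get(struct mm_struct *mm)
{
        struct my_svm *svm = xa_load(&svm_by_mm, (unsigned long)mm);

        if (svm) {                       /* second, third, ... vm_create of the process */
                kref_get(&svm->ref);
                return svm;
        }

        svm = kzalloc(sizeof(*svm), GFP_KERNEL);   /* first vm_create of the process */
        if (!svm)
                return ERR_PTR(-ENOMEM);
        svm->mm = mm;
        kref_init(&svm->ref);
        xa_store(&svm_by_mm, (unsigned long)mm, svm, GFP_KERNEL);
        return svm;
}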

See one more comment inline

From: Christian König <christian.koe...@amd.com>
Sent: Wednesday, January 24, 2024 3:33 AM
To: Zeng, Oak <oak.z...@intel.com>; Danilo Krummrich <d...@redhat.com>; Dave Airlie 
<airl...@redhat.com>; Daniel Vetter <dan...@ffwll.ch>; Felix Kuehling 
<felix.kuehl...@amd.com>
Cc: Welty, Brian <brian.we...@intel.com>; dri-devel@lists.freedesktop.org; 
intel...@lists.freedesktop.org; Bommu, Krishnaiah <krishnaiah.bo...@intel.com>; 
Ghimiray, Himal Prasad <himal.prasad.ghimi...@intel.com>; 
thomas.hellst...@linux.intel.com; Vishwanathapura, Niranjana 
<niranjana.vishwanathap...@intel.com>; Brost, Matthew <matthew.br...@intel.com>; 
Gupta, saurabhg <saurabhg.gu...@intel.com>
Subject: Re: Making drm_gpuvm work across gpu devices

On 23.01.24 at 20:37, Zeng, Oak wrote:
[SNIP]

Yes, most APIs are per-device based.

One exception I know of is actually the KFD SVM API. If you look at the svm_ioctl 
function, it is per-process based. Each kfd_process represents a process across N 
GPU devices.

Yeah and that was a big mistake in my opinion. We should really not do that 
ever again.


I need to say, KFD SVM represents a shared virtual address space across the CPU and 
all GPU devices on the system. This is by the definition of SVM (shared virtual 
memory). It is very different from our legacy GPU *device* drivers, which work with 
only one device (i.e., if you want one device to access another device's memory, 
you have to use dma-buf export/import, etc.).
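
For comparison, a sketch of that legacy dma-buf export/import flow using libdrm's 
PRIME helpers: a buffer handle on card0 is exported as a dma-buf fd and imported on 
card1. The creation of handle0 and detailed error handling are omitted; this is an 
illustration only.

#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>

static int share_bo(int fd0, int fd1, uint32_t handle0, uint32_t *handle1)
{
        int prime_fd;
        int ret;

        /* export the card0 buffer object as a dma-buf file descriptor */
        ret = drmPrimeHandleToFD(fd0, handle0, DRM_CLOEXEC, &prime_fd);
        if (ret)
                return ret;

        /* import that dma-buf on card1, producing a card1-local handle */
        ret = drmPrimeFdToHandle(fd1, prime_fd, handle1);
        close(prime_fd);
        return ret;
}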

Exactly that thinking is what we have currently found to be a blocker for 
virtualization projects. Having SVM as a device-independent feature which somehow 
ties to the process address space turned out to be an extremely bad idea.

The background is that this only works for some use cases but not all of them.

What's working much better is to just have a mirror functionality which says 
that a range A..B of the process address space is mapped into a range C..D of 
the GPU address space.

Those ranges can then be used to implement the SVM feature required by higher level 
APIs; it is not something you need at the UAPI level or even inside the low-level 
kernel memory management.
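
A minimal sketch of such a per-device mirror: an explicit mapping of a CPU VA range 
A..B onto a GPU VA range C..D, tracked per device rather than per process. The 
struct and function names are illustrative only.

#include <linux/list.h>
#include <linux/types.h>

struct my_mirror_range {
        u64 cpu_start;          /* A: start of the mirrored CPU VA range */
        u64 cpu_end;            /* B: end of the mirrored CPU VA range */
        u64 gpu_start;          /* C: where the range starts in this device's GPU VA */
        struct list_head link;
};

/* translate a CPU address to this one device's GPU address, if mirrored */
static bool my_mirror_translate(struct list_head *ranges, u64 cpu_addr,
                                u64 *gpu_addr)
{
        struct my_mirror_range *r;

        list_for_each_entry(r, ranges, link) {
                if (cpu_addr >= r->cpu_start && cpu_addr < r->cpu_end) {
                        *gpu_addr = r->gpu_start + (cpu_addr - r->cpu_start);
                        return true;
                }
        }
        return false;
}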

When you talk about migrating memory to a device, you also do this on a per-device 
basis and *not* tied to the process address space. If you then get crappy 
performance because userspace gave contradicting information about where to migrate 
memory, then that's a bug in userspace and not something the kernel should try to 
prevent somehow.

[SNIP]

I think if you start using the same drm_gpuvm for multiple devices you will sooner 
or later start to run into the same mess we have seen with KFD, where we moved more 
and more functionality from the KFD to the DRM render node because we found that a 
lot of the stuff simply doesn't work correctly with a single object to maintain the 
state.



As I understand it, KFD is designed to work across devices. A single pseudo 
/dev/kfd device represents all hardware GPU devices. That is why during kfd open, 
many pdds (process device data) are created, each for one hardware device used by 
this process.

Yes, I'm perfectly aware of that. And I can only repeat myself that I see this 
design as a rather extreme failure. And I think it's one of the reasons why 
NVidia is so dominant with Cuda.

This whole approach KFD takes was designed with the idea of extending the CPU 
process into the GPUs, but this idea only works for a few use cases and is not 
something we should apply to drivers in general.

A very good example is virtualization use cases, where you end up with CPU address 
!= GPU address because the VAs actually come from the guest VM and not the host 
process.


I don't get the problem here. For us, under virtualization, both the CPU addresses 
and the GPU virtual addresses operated on in xekmd are guest virtual addresses. 
They can still share the same virtual address space (as SVM requires).

Oak


SVM is a high-level concept of OpenCL, Cuda, ROCm, etc. It should not have any 
influence on the design of the kernel UAPI.

If you want to do something similar as KFD for Xe I think you need to get 
explicit permission to do this from Dave and Daniel and maybe even Linus.

Regards,
Christian.
