RE: re:Making drm_gpuvm work across gpu devices

2024-01-29 Thread Zeng, Oak
Hi Chunming,

In this email thread, Christian mentioned a very special virtualization
environment where multiple guest processes rely on a host proxy process to
talk to kfd. Such a setup conflicts fundamentally with the SVM concept: SVM
means a shared virtual address space within *one* process, while the host
proxy process in this setup has to represent multiple guest processes. Thus
SVM doesn't work in such a setup.

Normal GPU virtualization such as SR-IOV, or system virtualization (such as
passing the whole GPU device through to a guest machine), works perfectly
fine with the SVM design.
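
For illustration only (an editorial sketch, not code from this thread): a
minimal user-space C program showing why one host address space cannot hold
two guests' views of the same virtual address at once, which is the core of
the conflict described above. The fixed address is arbitrary.

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* Arbitrary, page-aligned VA that both guests happen to use. */
	void *guest_va = (void *)0x7f0000000000ull;

	/* Backing for guest A at that VA: fine. */
	void *a = mmap(guest_va, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	/*
	 * Backing for guest B at the *same* VA: MAP_FIXED silently
	 * replaces guest A's mapping.  One host address space can hold
	 * only one mapping per virtual address, so a single proxy
	 * process cannot represent both guests' address spaces the way
	 * SVM requires.
	 */
	void *b = mmap(guest_va, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	printf("guest A mapping: %p, guest B mapping: %p\n", a, b);
	return 0;
}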

Regards,
Oak

From: 周春明(日月) 
Sent: Monday, January 29, 2024 10:55 PM
To: Felix Kuehling ; Christian König 
; Daniel Vetter 
Cc: Brost, Matthew ; thomas.hellst...@linux.intel.com; 
Welty, Brian ; Ghimiray, Himal Prasad 
; dri-devel@lists.freedesktop.org; Gupta, 
saurabhg ; Danilo Krummrich ; Zeng, 
Oak ; Bommu, Krishnaiah ; Dave 
Airlie ; Vishwanathapura, Niranjana 
; intel...@lists.freedesktop.org; 毛钧(落提) 

Subject: re:Making drm_gpuvm work across gpu devices


Hi Felix,

Following your thread, you mentioned many times that the SVM API cannot run
in a virtualization environment, but I still don't get why.
Why do you keep saying a host proxy process is needed? Can't the HW report
page fault interrupts per VF via the vfid? Isn't this an SR-IOV env?

Regards,
-Chunming
--
From: Felix Kuehling <felix.kuehl...@amd.com>
Sent: Tuesday, January 30, 2024 04:24
To: "Christian König" <christian.koe...@amd.com>; Daniel Vetter <dan...@ffwll.ch>
Cc: "Brost, Matthew" <matthew.br...@intel.com>; thomas.hellst...@linux.intel.com;
"Welty, Brian" <brian.we...@intel.com>; "Ghimiray, Himal Prasad"
<himal.prasad.ghimi...@intel.com>; dri-devel@lists.freedesktop.org;
"Gupta, saurabhg" <saurabhg.gu...@intel.com>; Danilo Krummrich <d...@redhat.com>;
"Zeng, Oak" <oak.z...@intel.com>; "Bommu, Krishnaiah" <krishnaiah.bo...@intel.com>;
Dave Airlie <airl...@redhat.com>; "Vishwanathapura, Niranjana"
<niranjana.vishwanathap...@intel.com>; intel...@lists.freedesktop.org
Subject: Re: Making drm_gpuvm work across gpu devices


On 2024-01-29 14:03, Christian König wrote:
> Am 29.01.24 um 18:52 schrieb Felix Kuehling:
>> On 2024-01-29 11:28, Christian König wrote:
>>> Am 29.01.24 um 17:24 schrieb Felix Kuehling:
>>>> On 2024-01-29 10:33, Christian König wrote:
>>>>> Am 29.01.24 um 16:03 schrieb Felix Kuehling:
>>>>>> On 2024-01-25 13:32, Daniel Vetter wrote:
>>>>>>> On Wed, Jan 24, 2024 at 09:33:12AM +0100, Christian König wrote:
>>>>>>>> Am 23.01.24 um 20:37 schrieb Zeng, Oak:
>>>>>>>>> [SNIP]
>>>>>>>>> Yes most API are per device based.
>>>>>>>>>
>>>>>>>>> One exception I know is actually the kfd SVM API. If you look
>>>>>>>>> at the svm_ioctl function, it is per-process based. Each
>>>>>>>>> kfd_process represent a process across N gpu devices.
>>>>>>>> Yeah and that was a big mistake in my opinion. We should really
>>>>>>>> not do that
>>>>>>>> ever again.
>>>>>>>>
>>>>>>>>> Need to say, kfd SVM represent a shared virtual address space
>>>>>>>>> across CPU and all GPU devices on the system. This is by the
>>>>>>>>> definition of SVM (shared virtual memory). This is very
>>>>>>>>> different from our legacy gpu *device* driver which works for
>>>>>>>>> only one device (i.e., if you want one device to access
>>>>>>>>> another device's memory, you will have to use dma-buf
>>>>>>>>> export/import etc).
>>>>>>>> Exactly that thinking is what we have currently found as
>>>>>>>> blocker for a
>>>>>>>> virtualization projects. Having SVM as device independent
>>>>>>>> feature which
>>>>>>>> somehow ties to the process address space turned out to be an
>>>>>>>> extremely bad
>>>>>>>> idea.
>>>>>>>>
>>
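
As a rough sketch of the per-process vs. per-device contrast discussed in the
quoted exchange above (hypothetical stub functions for illustration only, not
the real KFD or drm_gpuvm UAPI):

#include <stdint.h>
#include <stdio.h>

/*
 * Per-process model (KFD-style SVM): one call, keyed only by a CPU
 * virtual address range, applies to every GPU the process has opened.
 */
static void svm_set_range_attr(uint64_t cpu_addr, uint64_t size, int preferred_gpu)
{
	printf("SVM: [%#llx, +%#llx) preferred on GPU %d (all devices affected)\n",
	       (unsigned long long)cpu_addr, (unsigned long long)size, preferred_gpu);
}

/*
 * Per-device model (typical DRM render node): the same range has to be
 * bound separately in each device's own GPU VM.
 */
static void gpuvm_bind(int device_fd, uint64_t gpu_va, uint64_t size)
{
	printf("device fd %d: bind GPU VA [%#llx, +%#llx)\n",
	       device_fd, (unsigned long long)gpu_va, (unsigned long long)size);
}

int main(void)
{
	uint64_t addr = 0x7f0000000000ull, size = 1ull << 20;

	svm_set_range_attr(addr, size, /* preferred_gpu = */ 1);

	/* For SVM-like semantics the per-device model picks gpu_va == cpu_va. */
	for (int fd = 3; fd <= 5; fd++)
		gpuvm_bind(fd, addr, size);

	return 0;
}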

re:Making drm_gpuvm work across gpu devices

2024-01-29 Thread 周春明(日月)
Hi Felix,
Following your thread, you mentioned many times that the SVM API cannot run
in a virtualization environment, but I still don't get why.
Why do you keep saying a host proxy process is needed? Can't the HW report
page fault interrupts per VF via the vfid? Isn't this an SR-IOV env?
Regards,
-Chunming
--
From: Felix Kuehling <felix.kuehl...@amd.com>
Sent: Tuesday, January 30, 2024 04:24
To: "Christian König" <christian.koe...@amd.com>; Daniel Vetter <dan...@ffwll.ch>
Cc: "Brost, Matthew" <matthew.br...@intel.com>; thomas.hellst...@linux.intel.com;
"Welty, Brian" <brian.we...@intel.com>; "Ghimiray, Himal Prasad"
<himal.prasad.ghimi...@intel.com>; dri-devel@lists.freedesktop.org;
"Gupta, saurabhg" <saurabhg.gu...@intel.com>; Danilo Krummrich <d...@redhat.com>;
"Zeng, Oak" <oak.z...@intel.com>; "Bommu, Krishnaiah" <krishnaiah.bo...@intel.com>;
Dave Airlie <airl...@redhat.com>; "Vishwanathapura, Niranjana"
<niranjana.vishwanathap...@intel.com>; intel...@lists.freedesktop.org
Subject: Re: Making drm_gpuvm work across gpu devices
On 2024-01-29 14:03, Christian König wrote:
> Am 29.01.24 um 18:52 schrieb Felix Kuehling:
>> On 2024-01-29 11:28, Christian König wrote:
>>> Am 29.01.24 um 17:24 schrieb Felix Kuehling:
>>>> On 2024-01-29 10:33, Christian König wrote:
>>>>> Am 29.01.24 um 16:03 schrieb Felix Kuehling:
>>>>>> On 2024-01-25 13:32, Daniel Vetter wrote:
>>>>>>> On Wed, Jan 24, 2024 at 09:33:12AM +0100, Christian König wrote:
>>>>>>>> Am 23.01.24 um 20:37 schrieb Zeng, Oak:
>>>>>>>>> [SNIP]
>>>>>>>>> Yes most API are per device based.
>>>>>>>>>
>>>>>>>>> One exception I know is actually the kfd SVM API. If you look
>>>>>>>>> at the svm_ioctl function, it is per-process based. Each
>>>>>>>>> kfd_process represent a process across N gpu devices.
>>>>>>>> Yeah and that was a big mistake in my opinion. We should really
>>>>>>>> not do that
>>>>>>>> ever again.
>>>>>>>>
>>>>>>>>> Need to say, kfd SVM represent a shared virtual address space
>>>>>>>>> across CPU and all GPU devices on the system. This is by the
>>>>>>>>> definition of SVM (shared virtual memory). This is very
>>>>>>>>> different from our legacy gpu *device* driver which works for
>>>>>>>>> only one device (i.e., if you want one device to access
>>>>>>>>> another device's memory, you will have to use dma-buf
>>>>>>>>> export/import etc).
>>>>>>>> Exactly that thinking is what we have currently found as
>>>>>>>> blocker for a
>>>>>>>> virtualization projects. Having SVM as device independent
>>>>>>>> feature which
>>>>>>>> somehow ties to the process address space turned out to be an
>>>>>>>> extremely bad
>>>>>>>> idea.
>>>>>>>>
>>>>>>>> The background is that this only works for some use cases but
>>>>>>>> not all of
>>>>>>>> them.
>>>>>>>>
>>>>>>>> What's working much better is to just have a mirror
>>>>>>>> functionality which says
>>>>>>>> that a range A..B of the process address space is mapped into a
>>>>>>>> range C..D
>>>>>>>> of the GPU address space.
>>>>>>>>
>>>>>>>> Those ranges can then be used to implement the SVM feature
>>>>>>>> required for
>>>>>>>> higher level APIs and not something you need at the UAPI or
>>>>>>>> even inside the
>>>>>>>> low level kernel memory management.
>>>>>>>>
>>>>>>>> When you talk about migrating memory to a device you also do
>>>>>>>> this on a per
>>>>>>>> device basis and *not* tied to the process address space. If
>>>>>>>> you then get
>>>>>>>> crappy performance because userspace gave contradicting
>>>>>>>> information where to
>>>>>>>> migrate memory then that's a bug in userspace and not something
>>>>>>>> the kernel
>>>>>>>> should try to prevent somehow.
>>>>>>>>
>>>>>>>> [SNIP]
>>>>>>>>>> I think if you start using the same drm_gpuvm for multiple
>>>>>>>>>> devices you
>>>>>>>>>> will sooner or later start to run into the same mess we have
>>>>>>>>>> seen with
>>>>>>>>>> KFD, where we moved more and more functionality from the KFD
>>>>>>>>>> to the DRM
>>>>>>>>>> render node because we found that a lot of the stuff simply
>>>>>>>>>> doesn't work
>>>>>>>>>> correctly with a single object to maintain the state.
>>>>>>>>> As I understand it, KFD is designed to work across devices. A
>>>>>>>>> single pseudo /dev/kfd device represent all hardware gpu
>>>>>>>>> devices. That is why during kfd open, many pdd (process device
>>>>>>>>> data) is created, each for one hardware device for this process.
>>>>>>>> Yes, I'm perfectly aware of that. And I can only repeat myself
>>>>>>>> that I see
>>>>>>>> this design as a rather extreme failure. And I think it's one
>>>>>>>> of the reasons
>>>>>>>> why NVidia is so dominant with Cuda.
>>>>>>>>
>>>>>>>> This whole approach KFD takes was designed with the idea of
>>>>>>>> extending the
>>>>>>>> CPU process into the GPUs, but this idea only works for a few
>>>>>>>> use cases and
>>>>>>>> is not something we should apply to drivers in general.
>>>>>>>>
>>>>>>>> A very good example are virtualization use cases where you end
>>>>>>>> up with CPU
>>>>>>>> address != GPU address because the VAs are actually coming from
>>>>>>>> the guest VM
>>>>>>>> and not the host process.
>>>>>>>>
>>>>>>>> SVM is a high level concept of OpenCL, Cuda, ROCm etc.. This
>>>>>>>> should not have
>>>>>>>> any infl
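
The "mirror a range A..B of the process address space into a range C..D of
the GPU address space" idea quoted above might look roughly like the
following sketch (hypothetical structure, field names and values, not an
existing UAPI):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical descriptor for mirroring CPU VA range A..B into GPU VA
 * range C..D; everything here is made up for illustration. */
struct gpu_mirror_range {
	uint64_t cpu_start;	/* A: start of the process VA range */
	uint64_t length;	/* B - A */
	uint64_t gpu_start;	/* C: start of the GPU VA range */
};

int main(void)
{
	/* An SVM-style runtime would simply pick gpu_start == cpu_start. */
	struct gpu_mirror_range svm_like = {
		.cpu_start = 0x7f0000000000ull,
		.length    = 1ull << 21,
		.gpu_start = 0x7f0000000000ull,
	};

	/* A virtualization stack, where CPU VA != GPU VA because the VA
	 * comes from the guest, would pick an unrelated gpu_start. */
	struct gpu_mirror_range guest_vm = {
		.cpu_start = 0x7f0000000000ull,	/* host proxy VA */
		.length    = 1ull << 21,
		.gpu_start = 0x0000040000000ull,	/* guest-chosen GPU VA */
	};

	printf("svm:   cpu %#llx -> gpu %#llx\n",
	       (unsigned long long)svm_like.cpu_start,
	       (unsigned long long)svm_like.gpu_start);
	printf("guest: cpu %#llx -> gpu %#llx\n",
	       (unsigned long long)guest_vm.cpu_start,
	       (unsigned long long)guest_vm.gpu_start);
	return 0;
}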