Am 18.03.21 um 08:23 schrieb Dennis Li:
We have defined two variables, in_gpu_reset and reset_sem, in the adev object. The 
atomic variable in_gpu_reset is used to prevent the recovery thread from 
re-entering and to make lower-level functions return early when recovery starts, 
but it cannot block other threads from accessing hardware while the recovery 
thread does. The r/w semaphore reset_sem is used to solve the synchronization 
issues between the recovery thread and the other threads.

The original solution took locks around register access in the lower-level 
functions, which introduces the following issues:

1) Many lower-level functions are used by both the recovery thread and other 
threads. First, we must audit all of these functions, and it is easy to miss 
some. Second, each function must select which lock (read lock or write lock) to 
take according to the thread context it runs in; if the context isn't 
considered, the added lock easily introduces deadlocks. Besides that, 
developers easily forget to add locks in new functions.

2) Performance drops, because the lower-level functions are called very frequently.

3) It easily introduces false-positive lockdep complaints, because the write 
lock spans a large range in the recovery thread, while the read locks taken in 
low-level functions may be nested inside other locks held by other threads.

Therefore the new solution adds lock protection at the kfd ioctl level instead. 
Its goal is that no thread other than the recovery thread or its children (for 
XGMI) accesses hardware during GPU reset and resume. The recovery thread is 
refined as follows:

Step 0: atomic_cmpxchg(&adev->in_gpu_reset, 0, 1)
    1) If it fails, a recovery thread is already running; the current 
thread exits directly.
    2) If it succeeds, enter the recovery thread.

Step 1: Cancel all delayed works, stop the drm scheduler, complete all 
unsignaled fences, and so on. This tries to stop or pause the other threads.

Step 2: Call down_write(&adev->reset_sem) to take the write lock, which blocks 
the recovery thread until the other threads have released their read locks.

Those two steps need to be exchanged, otherwise it is possible that new delayed work items etc. are started before the lock is taken.

Just to make it clear: until this is fixed, the whole patch set is a NAK.

Regards,
Christian.


Step 3: Normally only the recovery thread is now running and accessing 
hardware, so it is safe to do the GPU reset.

Step 4: Do the post-reset work, such as calling all IPs' resume functions.

Step 5: Atomically set adev->in_gpu_reset back to 0, wake up the other threads 
and release the write lock. The recovery thread then exits normally.

The other threads call amdgpu_read_lock to synchronize with the recovery 
thread. If a thread finds that in_gpu_reset is 1, it releases the read lock if 
it holds one, and then blocks itself to wait for the recovery-finished event. 
If it successfully holds the read lock and in_gpu_reset is 0, it continues; it 
will either exit normally or be stopped by the recovery thread in step 1.

Dennis Li (4):
   drm/amdgpu: remove reset lock from low level functions
   drm/amdgpu: refine the GPU recovery sequence
   drm/amdgpu: instead of using down/up_read directly
   drm/amdkfd: add reset lock protection for kfd entry functions

  drivers/gpu/drm/amd/amdgpu/amdgpu.h           |   6 +
  drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c   |  14 +-
  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    | 173 +++++++++++++-----
  .../gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c    |   8 -
  drivers/gpu/drm/amd/amdgpu/gmc_v10_0.c        |   4 +-
  drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c         |   9 +-
  drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c         |   5 +-
  drivers/gpu/drm/amd/amdgpu/mxgpu_nv.c         |   5 +-
  drivers/gpu/drm/amd/amdkfd/kfd_chardev.c      | 172 ++++++++++++++++-
  drivers/gpu/drm/amd/amdkfd/kfd_priv.h         |   3 +-
  drivers/gpu/drm/amd/amdkfd/kfd_process.c      |   4 +
  .../amd/amdkfd/kfd_process_queue_manager.c    |  17 ++
  12 files changed, 345 insertions(+), 75 deletions(-)


_______________________________________________
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
