Modern data center dGPUs are usually equipped with very large VRAM. On a server with such dGPUs (192GB VRAM * 8) and 2TB of system memory, hibernation will fail due to insufficient free memory.
The root cause is that during hibernation all VRAM gets evicted to GTT or shmem. In either case the data ends up in system memory, and the kernel then tries to copy those pages into the hibernation image. In the worst case this leaves 2 copies of the VRAM contents in system memory, and 2TB is not enough for the hibernation image: 192GB * 8 * 2 = 3TB > 2TB.

The fix consists of the following 2 changes. With them, far fewer pages need to be copied into the hibernation image and hibernation can succeed.

1. Move GTT to shmem after evicting VRAM, so that the GTT pages can be freed.
2. Force shmem pages to be written out to the swap disk and free them.

After swapping GTT out to shmem in the hibernation prepare stage, swapping back in and restoring the BOs in the thaw stage takes a long time (50 minutes observed for 8 dGPUs). This is unnecessary, since in the successful hibernation case the follow-up hibernation stages do not use the GPU. The third patch simply skips the BO restore in the thaw stage to reduce the hibernation time.

Samuel Zhang (3):
  drm/amdgpu: move GTT to SHM after eviction for hibernation
  PM: hibernate: shrink shmem pages after dev_pm_ops.prepare()
  drm/amdgpu: skip kfd resume_process for dev_pm_ops.thaw()

 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c    |  2 ++
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c    | 13 ++++++++++++-
 drivers/gpu/drm/ttm/ttm_resource.c         | 18 ++++++++++++++++++
 include/drm/ttm/ttm_resource.h             |  1 +
 kernel/power/hibernate.c                   | 13 +++++++++++++
 6 files changed, 47 insertions(+), 2 deletions(-)

-- 
2.43.5
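
For readers less familiar with the hibernation callback flow, the skeleton below sketches where each of the three changes sits in a driver's dev_pm_ops sequence (prepare -> freeze -> snapshot -> thaw). It is only an illustrative sketch: the example_* callbacks are hypothetical placeholders and do not correspond to the actual amdgpu or kernel/power code in this series.

/*
 * Illustrative sketch only. Shows where the three changes conceptually
 * hook into the hibernation callback sequence; the example_* functions
 * are hypothetical, not the amdgpu implementation.
 */
#include <linux/device.h>
#include <linux/pm.h>

static int example_prepare(struct device *dev)
{
	/*
	 * Patch 1: after evicting VRAM, move GTT-backed BOs to shmem so
	 * the GTT pages can be freed before the hibernation image is built.
	 *
	 * Patch 2: core hibernation code then writes the shmem pages out
	 * to swap and frees them after this callback returns.
	 */
	return 0;
}

static int example_freeze(struct device *dev)
{
	/* Quiesce the device before the snapshot is taken. */
	return 0;
}

static int example_thaw(struct device *dev)
{
	/*
	 * Patch 3: on the successful-hibernation path the GPU is not used
	 * again before power-off, so swapping the evicted BOs back in here
	 * is skipped to avoid the long (~50 minute) restore.
	 */
	return 0;
}

static const struct dev_pm_ops example_pm_ops = {
	.prepare = example_prepare,
	.freeze  = example_freeze,
	.thaw    = example_thaw,
};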