When application A submits jobs (a1, a2, a3) and application B submits job b1
with a dependency on a2's scheduler fence, the normal execution flow is:

1. a1 gets popped from the entity by the scheduler
2. run_job(a1) executes
3. a1's scheduled fence gets signaled
4. drm_sched_run_job_work() calls drm_sched_run_job_queue() at the end
5. The scheduler wakes up and re-selects entities to pop jobs
6. Since b1's dependency is cleared, the scheduler can select b1 and continue
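The re-queueing in steps 4-6 is what normally keeps the scheduler running once
a job has been consumed. The sketch below illustrates that path; it is a
heavily simplified paraphrase of drm_sched_run_job_work() in
drivers/gpu/drm/scheduler/sched_main.c (credit accounting, tracing and error
handling dropped), meant as an illustration of the flow rather than the actual
implementation, whose exact code differs between kernel versions.

/* Simplified paraphrase of drm_sched_run_job_work(); not the upstream code. */
static void drm_sched_run_job_work(struct work_struct *w)
{
	struct drm_gpu_scheduler *sched =
		container_of(w, typeof(*sched), work_run_job);
	struct drm_sched_entity *entity;
	struct drm_sched_job *sched_job;
	struct dma_fence *hw_fence;

	entity = drm_sched_select_entity(sched);
	if (!entity)
		return;					/* nothing runnable, stay idle */

	sched_job = drm_sched_entity_pop_job(entity);	/* step 1: pop a1 */
	if (!sched_job) {
		drm_sched_run_job_queue(sched);		/* retry with another entity */
		return;
	}

	hw_fence = sched->ops->run_job(sched_job);	/* step 2: run_job(a1) */
	/* step 3: signal a1's scheduled fence */
	drm_sched_fence_scheduled(sched_job->s_fence, hw_fence);

	/* step 4: re-queue work_run_job, enabling steps 5-6 for b1 */
	drm_sched_run_job_queue(sched);
}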
However, if application A is killed before a1 gets popped by the scheduler,
then a1, a2 and a3 are killed sequentially by drm_sched_entity_kill_jobs_cb().
During the kill process their scheduled fences are still signaled, but, unlike
the normal path, nothing queues work_run_job again (in the normal flow
drm_sched_run_job_work() re-queues it at the end). This means b1's dependency
gets cleared, but there is no work_run_job to drive the scheduler to continue
running, so the scheduler goes idle and application B hangs.

Add drm_sched_wakeup() in drm_sched_entity_kill_jobs_work() to prevent the
scheduler from going idle and applications from hanging as a consequence.

v2:
 - Move drm_sched_wakeup() to after drm_sched_fence_scheduled()
v3:
 - Clarify the normal flow vs kill process comparison

Signed-off-by: Lin.Cao <linca...@amd.com>
---
 drivers/gpu/drm/scheduler/sched_entity.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index e671aa241720..66f2a43c58fd 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -177,6 +177,7 @@ static void drm_sched_entity_kill_jobs_work(struct work_struct *wrk)
 	struct drm_sched_job *job = container_of(wrk, typeof(*job), work);
 
 	drm_sched_fence_scheduled(job->s_fence, NULL);
+	drm_sched_wakeup(job->sched);
 	drm_sched_fence_finished(job->s_fence, -ESRCH);
 	WARN_ON(job->s_fence->parent);
 	job->sched->ops->free_job(job);
-- 
2.46.1
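Why a single drm_sched_wakeup() call is enough: on the kernels this patch
appears to target, drm_sched_wakeup() essentially just queues work_run_job
again, which is exactly the step the kill path was missing. The sketch below
is a paraphrase of that helper from drivers/gpu/drm/scheduler/sched_main.c,
not its exact body, which varies between kernel versions.

/* Paraphrased; the exact body differs between kernel versions. */
void drm_sched_wakeup(struct drm_gpu_scheduler *sched)
{
	/* Re-arm the run-job work so the scheduler picks the next job (e.g. b1). */
	drm_sched_run_job_queue(sched);
}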