On Thu, Apr 18, 2019 at 5:00 PM Andrey Grodzovsky wrote:
>
> From: Christian König
>
> We now destroy finished jobs from the worker thread to make sure that
> we never destroy a job currently in timeout processing.
> By this we avoid holding lock around ring mirror list in drm_sched_stop
> which
mplify the handling.
Andrey
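The deferred-destruction idea above can be sketched in plain userspace C: the fence-callback analogue only marks a job finished, and a separate worker step does the unlink and free under the list lock, so nothing that races with the callback can ever see a freed job. All names here (job, mirror_list, cleanup_finished_jobs) are illustrative stand-ins, not the actual scheduler API.

```c
/* Userspace sketch of the pattern, NOT the real scheduler code: the
 * fence callback never frees; freeing happens in a worker step. */
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct job {
    struct job *next;       /* singly linked "mirror list" (illustrative) */
    bool finished;
};

static struct job *mirror_list;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fence-callback analogue: cheap, no free() here. */
static void process_job(struct job *j)
{
    pthread_mutex_lock(&list_lock);
    j->finished = true;
    pthread_mutex_unlock(&list_lock);
}

/* Worker-thread analogue: unlink and free finished jobs under the lock. */
static int cleanup_finished_jobs(void)
{
    int freed = 0;
    pthread_mutex_lock(&list_lock);
    struct job **pp = &mirror_list;
    while (*pp) {
        struct job *j = *pp;
        if (j->finished) {
            *pp = j->next;  /* unlink while holding the lock */
            free(j);
            freed++;
        } else {
            pp = &j->next;
        }
    }
    pthread_mutex_unlock(&list_lock);
    return freed;
}

static struct job *push_job(void)
{
    struct job *j = calloc(1, sizeof(*j));
    j->next = mirror_list;
    mirror_list = j;
    return j;
}
```

Because only cleanup_finished_jobs() frees, a timeout handler inspecting the list (under the same lock) can never hold a pointer to already-freed memory.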
-------- Original Message --------
Subject: Re: [PATCH v5 3/6] drm/scheduler: rework job destruction
From: "Grodzovsky, Andrey"
To: "Zhou, David(ChunMing)"
,dri-devel@lists.freedesktop.org,amd-...@lists.freedesktop.org,e...@anholt.net,etna...@lists.
This patch is to fix the deadlock between fence->lock and sched->job_list_lock,
right?
So I suggest just moving list_del_init(&s_job->node) from
drm_sched_process_job to the work thread. That will avoid the deadlock
described in the link.
-------- Original Message --------
Subject: Re: [PATCH v5 3/6] drm/scheduler: rework job destruction
On 4/22/19 8:48 AM, Chunming Zhou wrote:
> Hi Andrey,
>
> static void drm_sched_process_job(struct dma_fence *f, struct
> dma_fence_cb *cb)
> {
> ...
> spin_lock_irqsave(&sched->job_list_lock, flags);
> /* remove job from ring_mirror_list */
> list_del_init(&s_job->node);
> spin_unlock_irqrestore(&sched->job_list_lock, flags);
> ...
> }
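The list_del_init() call in the snippet above relies on a property of the kernel's list API: after removal the node is re-initialized to point at itself, so a later removal of the same node is harmless. A minimal userspace re-creation of that semantics (a sketch mimicking <linux/list.h>, not the kernel code itself):

```c
/* Userspace re-creation of the kernel's list_head / list_del_init()
 * semantics, for illustration only. */
#include <assert.h>
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h; h->prev = h; }

static void list_add(struct list_head *n, struct list_head *head)
{
    n->next = head->next;
    n->prev = head;
    head->next->prev = n;
    head->next = n;
}

static void list_del_init(struct list_head *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    INIT_LIST_HEAD(n);      /* self-linked again: a second del is a no-op */
}

static bool list_empty(const struct list_head *h) { return h->next == h; }
```

That self-re-initialization is what makes it safe for either the worker thread or the timeout path to be the one that actually takes the job off the mirror list.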