Re: [PATCH] drm/gpu-sched: fix force APP kill hang(v3)

2018-04-13 Thread Christian König

Hi Monk/Emily,

Give me the weekend to take a closer look, since I'm very busy this morning.

In general the order of ctx_fini and vm_fini is very important, because 
otherwise we dereference invalid pointers here.


Regards,
Christian.

Am 13.04.2018 um 08:18 schrieb Deng, Emily:

Hi Monk,
 Another consideration: it would be better to put amdgpu_ctx_mgr_fini() 
after amdgpu_vm_fini(). In that case it will first set all ctx and vm 
entity run queues to NULL, and then signal the fences of all the jobs that 
were never scheduled.
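
A minimal sketch of the ordering under discussion in
amdgpu_driver_postclose_kms(); which helper plays which role is assumed
from the v3 notes quoted below:

	amdgpu_ctx_mgr_entity_fini(&fpriv->ctx_mgr);	/* part 1: wait, set entity rq to NULL */
	amdgpu_vm_fini(adev, &fpriv->vm);		/* VM teardown */
	amdgpu_ctx_mgr_fini(&fpriv->ctx_mgr);		/* part 2: signal never-scheduled jobs, free ctx */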

Best Wishes,
Emily Deng


-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
Of Deng, Emily
Sent: Friday, April 13, 2018 2:08 PM
To: Liu, Monk <monk@amd.com>; Christian König
<ckoenig.leichtzumer...@gmail.com>
Cc: amd-gfx@lists.freedesktop.org
Subject: RE: [PATCH] drm/gpu-sched: fix force APP kill hang(v3)

Hi Monk,
  Thanks for your review. I will refine the code per your suggestions 1), 
2), 3), 4), 5).
  About 6): I think it makes no difference whether amdgpu_ctx_mgr_fini is 
called before or after amdgpu_vm_fini.

Hi Christian,
  Do you have any thoughts about 6)?

Best Wishes,
Emily Deng


-Original Message-
From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
Of monk
Sent: Friday, April 13, 2018 1:01 PM
To: Deng, Emily <emily.d...@amd.com>; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH] drm/gpu-sched: fix force APP kill hang(v3)

Hi Christian & Emily

This v3 version looks pretty good to me, but some parts still need
improvement:

e.g.

1) entity->finished doesn't convey what it really means; better to
rename it to entity->last_scheduled.
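
i.e. in struct drm_sched_entity the field would read, as a sketch:

	struct dma_fence *last_scheduled;	/* fence of the last job pushed to the hardware ring */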

2) drm_sched_entity_fini_job_cb() would be better renamed to
drm_sched_entity_kill_jobs_cb().

3) No need to pass "entity->finished" (line 275 of gpu_scheduler.c) to
drm_sched_entity_fini_job_cb() when dma_fence_add_callback() returns
-ENOENT, since this parameter is not needed at all in the callback
routine.
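
A sketch of that fallback path, using the renames from 1) and 2)
(illustrative, not the final patch):

	r = dma_fence_add_callback(entity->last_scheduled, &job->finish_cb,
				   drm_sched_entity_kill_jobs_cb);
	if (r == -ENOENT)
		/* fence already signaled: run the same cleanup directly */
		drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
	else if (r)
		DRM_ERROR("fence add callback failed (%d)\n", r);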

4) Better to change the type of entity->fini_status to "int" instead of
"uint32_t"; it should match the return type of
wait_event_killable().
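
The reason, as a sketch (the wait condition here is illustrative):

	/* wait_event_killable() returns 0 or a negative errno such as
	 * -ERESTARTSYS; a uint32_t would turn that error into a large
	 * positive value and break later "< 0" checks. */
	entity->fini_status = wait_event_killable(sched->job_scheduled,
						  drm_sched_entity_is_idle(entity));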

5)

+	if (entity->finished) {
+		dma_fence_put(entity->finished);
+		entity->finished = NULL;
 	}

no need to check entity->finished, because dma_fence_put() already does
the NULL check internally (see the helper quoted after these examples)...



and the same here in job_recovery:

+
+   if (s_job->entity->finished)
+   dma_fence_put(s_job->entity->finished);

and the same here in sched_main:

+   if (entity->finished)
+   dma_fence_put(entity->finished);
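
For reference, the put helper in include/linux/dma-fence.h already
tolerates a NULL fence (paraphrased):

static inline void dma_fence_put(struct dma_fence *fence)
{
	if (fence)
		kref_put(&fence->refcount, dma_fence_release);
}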


6) Why put amdgpu_ctx_mgr_fini() after amdgpu_vm_fini()? Any reason
for that?


thanks

/Monk






On 04/13/2018 10:06 AM, Deng, Emily wrote:

Ping

Best Wishes,
Emily Deng





-Original Message-
From: Emily Deng [mailto:emily.d...@amd.com]
Sent: Thursday, April 12, 2018 6:22 PM
To: amd-gfx@lists.freedesktop.org
Cc: Deng, Emily <emily.d...@amd.com>; Liu, Monk

<monk@amd.com>

Subject: [PATCH] drm/gpu-sched: fix force APP kill hang(v3)

issue:
VMC page faults occurred when an app was force-killed during a 3DMark
test. The cause is that in entity_fini() we manually signal all the
jobs in the entity's queue, which confuses the sync/dependency
mechanism:

1) A page fault occurred in SDMA's clear job, which operates on the
shadow buffer: the shadow buffer's GART table was cleaned by
ttm_bo_release(), since the fence in its reservation was fake-signaled
by entity_fini() after SIGKILL was received.

2) A page fault occurred in a GFX job because, during the lifetime of
the GFX job, we manually fake-signal all jobs from its entity in
entity_fini(); thus the unmap/clear-PTE jobs that depend on those
result fences are satisfied, and SDMA starts clearing the PTEs,
leading to a GFX page fault.

fix:
1) We should at least wait for all already-scheduled jobs to complete
in entity_fini() in the SIGKILL case.

2) If a fence is signaled only to clear some entity's dependency, we
should set that entity guilty to prevent its job from really running,
since the dependency was fake-signaled.
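
A sketch of what 2) amounts to in the scheduler's dependency callback;
the guilty handling shown here is an assumption for illustration, not
the final patch:

static void drm_sched_entity_clear_dep(struct dma_fence *f,
				       struct dma_fence_cb *cb)
{
	struct drm_sched_entity *entity =
		container_of(cb, struct drm_sched_entity, cb);

	/* Assumed: if this wakeup came from a fence that was only
	 * fake-signaled during teardown, flag the entity so its jobs
	 * are skipped instead of run against torn-down state. */
	if (entity->guilty)
		atomic_set(entity->guilty, 1);

	entity->dependency = NULL;
	dma_fence_put(f);
}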

v2:
split drm_sched_entity_fini() into two functions:
1) The first one does the waiting, removes the entity from the
runqueue and returns an error when the process was killed.
2) The second one then goes over the entity, installs it as the
completion signal for the remaining jobs, and signals all jobs with an
error code.
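
So callers end up with a wrapper along these lines (fini1/fini2 are the
placeholder names used above; v3 below replaces them):

void drm_sched_entity_fini(struct drm_gpu_scheduler *sched,
			   struct drm_sched_entity *entity)
{
	drm_sched_entity_fini1(sched, entity);	/* wait and remove from runqueue */
	drm_sched_entity_fini2(sched, entity);	/* signal/kill the jobs left in the queue */
}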

v3:
1) Replace fini1 and fini2 with better names.
2) Call the first part before the VM teardown in
amdgpu_driver_postclose_kms() and the second part after the VM
teardown.
3) Keep the original function drm_sched_entity_fini to refine the
code.

Signed-off-by: Monk Liu <monk@amd.com>
Signed-off-by: Emily Deng <emily.d...@amd.com>
---
   drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  2 +
   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 64 ++
   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   |  5 ++-
   drivers/gpu/drm/scheduler/gpu_scheduler.c | 74 ++-
   include/drm/gpu_scheduler.h   |  7 +++
   5 files changed, 131 insertions(+), 21 deletions(-)

RE: [PATCH] drm/gpu-sched: fix force APP kill hang(v3)

2018-04-12 Thread Deng, Emily
Ping

Best Wishes,
Emily Deng




> -Original Message-
> From: Emily Deng [mailto:emily.d...@amd.com]
> Sent: Thursday, April 12, 2018 6:22 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Deng, Emily ; Liu, Monk 
> Subject: [PATCH] drm/gpu-sched: fix force APP kill hang(v3)
> 
> issue:
> VMC page faults occurred when an app was force-killed during a 3DMark test.
> The cause is that in entity_fini() we manually signal all the jobs in the
> entity's queue, which confuses the sync/dependency mechanism:
> 
> 1) A page fault occurred in SDMA's clear job, which operates on the shadow
> buffer: the shadow buffer's GART table was cleaned by ttm_bo_release(), since
> the fence in its reservation was fake-signaled by entity_fini() after SIGKILL
> was received.
> 
> 2) A page fault occurred in a GFX job because, during the lifetime of the GFX
> job, we manually fake-signal all jobs from its entity in entity_fini(); thus
> the unmap/clear-PTE jobs that depend on those result fences are satisfied,
> and SDMA starts clearing the PTEs, leading to a GFX page fault.
> 
> fix:
> 1) We should at least wait for all already-scheduled jobs to complete in
> entity_fini() in the SIGKILL case.
> 
> 2) If a fence is signaled only to clear some entity's dependency, we should
> set that entity guilty to prevent its job from really running, since the
> dependency was fake-signaled.
> 
> v2:
> split drm_sched_entity_fini() into two functions:
> 1) The first one does the waiting, removes the entity from the runqueue and
> returns an error when the process was killed.
> 2) The second one then goes over the entity, installs it as the completion
> signal for the remaining jobs, and signals all jobs with an error code.
> 
> v3:
> 1) Replace fini1 and fini2 with better names.
> 2) Call the first part before the VM teardown in amdgpu_driver_postclose_kms()
> and the second part after the VM teardown.
> 3) Keep the original function drm_sched_entity_fini to refine the code.
> 
> Signed-off-by: Monk Liu 
> Signed-off-by: Emily Deng 
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h   |  2 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c   | 64 ++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c   |  5 ++-
>  drivers/gpu/drm/scheduler/gpu_scheduler.c | 74 ++-
>  include/drm/gpu_scheduler.h   |  7 +++
>  5 files changed, 131 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 2babfad..200db73 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -681,6 +681,8 @@ int amdgpu_ctx_ioctl(struct drm_device *dev, void *data,
>  int amdgpu_ctx_wait_prev_fence(struct amdgpu_ctx *ctx, unsigned ring_id);
> 
>  void amdgpu_ctx_mgr_init(struct amdgpu_ctx_mgr *mgr);
> +void amdgpu_ctx_mgr_entity_cleanup(struct amdgpu_ctx_mgr *mgr);
> +void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr);
>  void amdgpu_ctx_mgr_fini(struct amdgpu_ctx_mgr *mgr);
> 
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index 09d35051..659add4 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -111,8 +111,9 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
> 	return r;
>  }
> 
> -static void amdgpu_ctx_fini(struct amdgpu_ctx *ctx)
> +static void amdgpu_ctx_fini(struct kref *ref)
>  {
> +	struct amdgpu_ctx *ctx = container_of(ref, struct amdgpu_ctx, refcount);
> 	struct amdgpu_device *adev = ctx->adev;
> 	unsigned i, j;
> 
> @@ -125,13 +126,11 @@ static void amdgpu_ctx_fini(struct amdgpu_ctx *ctx)
> 	kfree(ctx->fences);
> 	ctx->fences = NULL;
> 
> -	for (i = 0; i < adev->num_rings; i++)
> -		drm_sched_entity_fini(&adev->rings[i]->sched,
> -				      &ctx->rings[i].entity);
> -
> 	amdgpu_queue_mgr_fini(adev, &ctx->queue_mgr);
> 
> 	mutex_destroy(&ctx->lock);
> +
> +	kfree(ctx);
>  }
> 
>  static int amdgpu_ctx_alloc(struct amdgpu_device *adev,
> @@ -170,12 +169,15 @@ static int amdgpu_ctx_alloc(struct amdgpu_device *adev,
>  static void amdgpu_ctx_do_release(struct kref *ref)
>  {
> 	struct amdgpu_ctx *ctx;
> +	u32 i;
> 
> 	ctx = container_of(ref, struct amdgpu_ctx, refcount);
> 
> -	amdgpu_ctx_fini(ctx);
> +	for (i = 0; i < ctx->adev->num_rings; i++)
> +		drm_sched_entity_fini(&ctx->adev->rings[i]->sched,
> +				      &ctx->rings[i].entity);
> 
> -	kfree(ctx);
> +	amdgpu_ctx_fini(ref);
>  }
> 
>  static int amdgpu_ctx_free(struct amdgpu_fpriv *fpriv, uint32_t id)
> @@ -435,16 +437,62 @@ void amdgpu_ctx_mgr_init(struct amdgpu_ctx_mgr *mgr)
> 	idr_init(&mgr->ctx_handles);
>  }
> 
> +void amdgpu_ctx_mgr_entity_fini(struct amdgpu_ctx_mgr *mgr)
> +{
> +	struct amdgpu_ctx *ctx;
> +	struct idr *idp;
> +	uint32_t