Chris Wilson <[email protected]> writes:

> If we detect a hang in a closed context, just flush all of its requests
> and cancel any remaining execution along the context. Note that after
> closing the context, the last reference to the context may be dropped,
> leaving it only valid under RCU.

Sounds good. But is there a window for userspace to start
seeing -EIO if it resubmits to a closed context?

In other words, after userspace does gem_ctx_destroy(ctx_handle),
would we return -EINVAL due to ctx_handle being stale
before we ever check the banned status and return -EIO?

-Mika

>
> Signed-off-by: Chris Wilson <[email protected]>
> ---
>  drivers/gpu/drm/i915/gt/intel_reset.c | 7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/intel_reset.c b/drivers/gpu/drm/i915/gt/intel_reset.c
> index f03e000051c1..a6b0d00c3a51 100644
> --- a/drivers/gpu/drm/i915/gt/intel_reset.c
> +++ b/drivers/gpu/drm/i915/gt/intel_reset.c
> @@ -81,6 +81,11 @@ static bool context_mark_guilty(struct i915_gem_context *ctx)
>       bool banned;
>       int i;
>  
> +     if (i915_gem_context_is_closed(ctx)) {
> +             i915_gem_context_set_banned(ctx);
> +             return true;
> +     }
> +
>       atomic_inc(&ctx->guilty_count);
>  
>       /* Cool contexts are too cool to be banned! (Used for reset testing.) */
> @@ -124,6 +129,7 @@ void __i915_request_reset(struct i915_request *rq, bool guilty)
>  
>       GEM_BUG_ON(i915_request_completed(rq));
>  
> +     rcu_read_lock(); /* protect the GEM context */
>       if (guilty) {
>               i915_request_skip(rq, -EIO);
>               if (context_mark_guilty(rq->gem_context))
> @@ -132,6 +138,7 @@ void __i915_request_reset(struct i915_request *rq, bool guilty)
>               dma_fence_set_error(&rq->fence, -EAGAIN);
>               context_mark_innocent(rq->gem_context);
>       }
> +     rcu_read_unlock();
>  }
>  
>  static bool i915_in_reset(struct pci_dev *pdev)
> -- 
> 2.24.0
>
> _______________________________________________
> Intel-gfx mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/intel-gfx