Re: [Intel-gfx] [PATCH v2] drm/i915/execlists: Delay updating ring register state after resume

2018-03-28 Thread Mika Kuoppala
Chris Wilson  writes:

> Now that we reload both RING_HEAD and RING_TAIL when rebinding the
> context, we do not need to scrub those registers immediately on resume.
>
> v2: Handle the perma-pinned contexts.
>
> Signed-off-by: Chris Wilson 
> Cc: Tvrtko Ursulin 
> Cc: Mika Kuoppala 
> ---
>  drivers/gpu/drm/i915/intel_lrc.c | 31 ++++++++++++-------------------
>  1 file changed, 12 insertions(+), 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 654634254b64..2bf5128efb26 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -2536,13 +2536,14 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
>   return ret;
>  }
>  
> -void intel_lr_context_resume(struct drm_i915_private *dev_priv)
> +void intel_lr_context_resume(struct drm_i915_private *i915)
>  {
>   struct intel_engine_cs *engine;
>   struct i915_gem_context *ctx;
>   enum intel_engine_id id;
>  
> - /* Because we emit WA_TAIL_DWORDS there may be a disparity
> + /*
> +  * Because we emit WA_TAIL_DWORDS there may be a disparity
>* between our bookkeeping in ce->ring->head and ce->ring->tail and
>* that stored in context. As we only write new commands from
>* ce->ring->tail onwards, everything before that is junk. If the GPU
> @@ -2552,27 +2553,19 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
>* So to avoid that we reset the context images upon resume. For
>* simplicity, we just zero everything out.
>*/
> - list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
> - for_each_engine(engine, dev_priv, id) {
> - struct intel_context *ce = &ctx->engine[engine->id];
> - u32 *reg;
> -
> - if (!ce->state)
> - continue;
> + list_for_each_entry(ctx, &i915->contexts.list, link) {
> + for_each_engine(engine, i915, id) {
> + struct intel_context *ce = &ctx->engine[id];
>  
> - reg = i915_gem_object_pin_map(ce->state->obj,
> -   I915_MAP_WB);
> - if (WARN_ON(IS_ERR(reg)))
> + if (!ce->ring)
>   continue;
>  
> - reg += LRC_STATE_PN * PAGE_SIZE / sizeof(*reg);
> - reg[CTX_RING_HEAD+1] = 0;
> - reg[CTX_RING_TAIL+1] = 0;
> -
> - ce->state->obj->mm.dirty = true;
> - i915_gem_object_unpin_map(ce->state->obj);
> -
>   intel_ring_reset(ce->ring, 0);
> +
> + if (ce->pin_count) { /* otherwise done in context_pin */

From my understanding, this is for the kernel context only, so the comment
should mention the kernel context.

Reviewed-by: Mika Kuoppala 

> + ce->lrc_reg_state[CTX_RING_HEAD+1] = 0;
> + ce->lrc_reg_state[CTX_RING_TAIL+1] = 0;
> + }
>   }
>   }
>  }
> -- 
> 2.16.3
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx


[Intel-gfx] [PATCH v2] drm/i915/execlists: Delay updating ring register state after resume

2018-03-27 Thread Chris Wilson
Now that we reload both RING_HEAD and RING_TAIL when rebinding the
context, we do not need to scrub those registers immediately on resume.

v2: Handle the perma-pinned contexts.

Signed-off-by: Chris Wilson 
Cc: Tvrtko Ursulin 
Cc: Mika Kuoppala 
---
 drivers/gpu/drm/i915/intel_lrc.c | 31 ++++++++++++-------------------
 1 file changed, 12 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 654634254b64..2bf5128efb26 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2536,13 +2536,14 @@ static int execlists_context_deferred_alloc(struct i915_gem_context *ctx,
return ret;
 }
 
-void intel_lr_context_resume(struct drm_i915_private *dev_priv)
+void intel_lr_context_resume(struct drm_i915_private *i915)
 {
struct intel_engine_cs *engine;
struct i915_gem_context *ctx;
enum intel_engine_id id;
 
-   /* Because we emit WA_TAIL_DWORDS there may be a disparity
+   /*
+* Because we emit WA_TAIL_DWORDS there may be a disparity
 * between our bookkeeping in ce->ring->head and ce->ring->tail and
 * that stored in context. As we only write new commands from
 * ce->ring->tail onwards, everything before that is junk. If the GPU
@@ -2552,27 +2553,19 @@ void intel_lr_context_resume(struct drm_i915_private *dev_priv)
 * So to avoid that we reset the context images upon resume. For
 * simplicity, we just zero everything out.
 */
-   list_for_each_entry(ctx, &dev_priv->contexts.list, link) {
-   for_each_engine(engine, dev_priv, id) {
-   struct intel_context *ce = &ctx->engine[engine->id];
-   u32 *reg;
-
-   if (!ce->state)
-   continue;
+   list_for_each_entry(ctx, &i915->contexts.list, link) {
+   for_each_engine(engine, i915, id) {
+   struct intel_context *ce = &ctx->engine[id];
 
-   reg = i915_gem_object_pin_map(ce->state->obj,
- I915_MAP_WB);
-   if (WARN_ON(IS_ERR(reg)))
+   if (!ce->ring)
continue;
 
-   reg += LRC_STATE_PN * PAGE_SIZE / sizeof(*reg);
-   reg[CTX_RING_HEAD+1] = 0;
-   reg[CTX_RING_TAIL+1] = 0;
-
-   ce->state->obj->mm.dirty = true;
-   i915_gem_object_unpin_map(ce->state->obj);
-
intel_ring_reset(ce->ring, 0);
+
+   if (ce->pin_count) { /* otherwise done in context_pin */
+   ce->lrc_reg_state[CTX_RING_HEAD+1] = 0;
+   ce->lrc_reg_state[CTX_RING_TAIL+1] = 0;
+   }
}
}
 }
-- 
2.16.3
