On 7/11/2017 3:37 PM, Chris Wilson wrote:
Quoting Michel Thierry (2017-07-11 22:29:39)
Using the HWSP ggtt_offset to get the lrca offset is only correct if the
HWSP happens to be before it (when we reuse the PPHWSP of the kernel
context as the engine HWSP). Instead of making this assumption, get the
lrca offset from the kernel_context engine state.

And while looking at this part of the GuC interaction, it was also
noticed that the firmware expects the size of only the engine context
(context minus the execlist part, i.e. don't include the first 80
dwords), so pass the right size.

Reported-by: Daniele Ceraolo Spurio <daniele.ceraolospu...@intel.com>
Signed-off-by: Michel Thierry <michel.thie...@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospu...@intel.com>
Cc: Michal Wajdeczko <michal.wajdec...@intel.com>
Cc: Oscar Mateo <oscar.ma...@intel.com>
---
  drivers/gpu/drm/i915/i915_guc_submission.c | 15 ++++++++++++---
  1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 48a1e9349a2c..97ec11a7cc9a 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -1018,6 +1018,12 @@ static void guc_policies_init(struct guc_policies *policies)
         policies->is_valid = 1;
  }
+/*
+ * The first 80 dwords of the register state context, containing the
+ * execlists and ppgtt registers.
+ */
+#define LR_HW_CONTEXT_SIZE     (80 * sizeof(u32))
+
  static int guc_ads_create(struct intel_guc *guc)
  {
         struct drm_i915_private *dev_priv = guc_to_i915(guc);
@@ -1033,6 +1039,7 @@ static int guc_ads_create(struct intel_guc *guc)
         struct intel_engine_cs *engine;
         enum intel_engine_id id;
         u32 base;
+       u32 lrca_offset = LRC_PPHWSP_PN * PAGE_SIZE;

Make it const and try to avoid the visual breaks in line length? (Whilst
also trying not to gratuitously mix types.)

ok

        GEM_BUG_ON(guc->ads_vma);
@@ -1062,13 +1069,15 @@ static int guc_ads_create(struct intel_guc *guc)
          * engines after a reset. Here we use the Render ring default
          * context, which must already exist and be pinned in the GGTT,
          * so its address won't change after we've told the GuC where
-        * to find it.
+        * to find it. GuC will skip the PPHWSP and the 'Execlist Context',
+        * so we reduce the size by 1 page (PPHWSP) plus 80 dwords.
          */
         blob->ads.golden_context_lrca =
-               dev_priv->engine[RCS]->status_page.ggtt_offset;
+               guc_ggtt_offset(dev_priv->kernel_context->engine[RCS].state) + lrca_offset;
+       /* context_size - PPHWSP - first 80 dwords */
         for_each_engine(engine, dev_priv, id)
-               blob->ads.eng_state_size[engine->guc_id] = engine->context_size;
+               blob->ads.eng_state_size[engine->guc_id] = engine->context_size - lrca_offset - LR_HW_CONTEXT_SIZE;

The first LRC_PPHWSP_PN pages are not included in engine->context_size,
they are added on by execlist_context_deferred_alloc().


I think there is only one extra page added in execlists_context_deferred_alloc, and it is the "GuC shared data" page. And yes, we don't include that one in the context_size. (Also, __intel_engine_context_size has a comment saying the size includes the HWSP.)

I'm finding it odd that we pass in a pointer to the start of the context
image, but that the guc wants to be told the size of the context minus
the bytes that *it* is choosing to skip at the start. (May be correct,
but looks weird and merits a warning.) The alternative explanation is
that it is not skipping the first 80 dwords, but the last.

Yes, it's a really odd style. Daniele & I looked at it, and the GuC takes the pointer and uses "size of the PPHWSP + size of the execlist context (those 80 dwords)" as the starting point of the engine state... that's why ads.eng_state_size needs to be decreased.

Anyway, since we always round up the ctx size to the next page, I
imagine we've been lucky and there are enough zeroes at the end of the context.

-Chris


Thanks!
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx