When unwinding requests on a reset context, if other requests from the
same context are already in the priority list, the unwound requests
could be resubmitted out of seqno order. Fix this by traversing the
list of active requests in reverse and adding each request to the head
of the priority list.
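
As an illustration only (not part of the patch), here is a minimal
userspace sketch of the ordering property the fix relies on. It uses a
simplified re-implementation of the kernel's circular doubly-linked
list and made-up request/seqno values: iterating the source list in
reverse and inserting each entry at the head of the destination keeps
the batch in seqno order and places it ahead of anything already
queued.

  #include <stdio.h>
  #include <stddef.h>

  struct list_head {
          struct list_head *next, *prev;
  };

  static void INIT_LIST_HEAD(struct list_head *h)
  {
          h->next = h->prev = h;
  }

  /* kernel-style list_add(): insert @n right after @head */
  static void list_add(struct list_head *n, struct list_head *head)
  {
          n->next = head->next;
          n->prev = head;
          head->next->prev = n;
          head->next = n;
  }

  struct req {
          int seqno;
          struct list_head link;
  };

  int main(void)
  {
          struct list_head pl;
          struct req unwound[3] = { { .seqno = 1 }, { .seqno = 2 }, { .seqno = 3 } };
          struct req other = { .seqno = 9 };      /* already on the priority list */
          struct list_head *pos;
          int i;

          INIT_LIST_HEAD(&pl);
          list_add(&other.link, &pl);

          /* reverse traversal of the context's requests + head insertion */
          for (i = 2; i >= 0; i--)
                  list_add(&unwound[i].link, &pl);

          /* prints: 1 2 3 9 -- seqno order intact, unwound requests first */
          for (pos = pl.next; pos != &pl; pos = pos->next) {
                  struct req *rq = (struct req *)((char *)pos -
                                   offsetof(struct req, link));
                  printf("%d ", rq->seqno);
          }
          printf("\n");
          return 0;
  }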

Fixes: eb5e7da736f3 ("drm/i915/guc: Reset implementation for new GuC interface")
Signed-off-by: Matthew Brost <matthew.br...@intel.com>
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospu...@intel.com>
Cc: <sta...@vger.kernel.org>
---
 drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
index aff5dd247a88..0c1e6b465fba 100644
--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
@@ -806,9 +806,9 @@ __unwind_incomplete_requests(struct intel_context *ce)
 
        spin_lock_irqsave(&sched_engine->lock, flags);
        spin_lock(&ce->guc_active.lock);
-       list_for_each_entry_safe(rq, rn,
-                                &ce->guc_active.requests,
-                                sched.link) {
+       list_for_each_entry_safe_reverse(rq, rn,
+                                        &ce->guc_active.requests,
+                                        sched.link) {
                if (i915_request_completed(rq))
                        continue;
 
@@ -825,7 +825,7 @@ __unwind_incomplete_requests(struct intel_context *ce)
                }
                GEM_BUG_ON(i915_sched_engine_is_empty(sched_engine));
 
-               list_add_tail(&rq->sched.link, pl);
+               list_add(&rq->sched.link, pl);
                set_bit(I915_FENCE_FLAG_PQUEUE, &rq->fence.flags);
 
                spin_lock(&ce->guc_active.lock);
-- 
2.32.0
