Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-04 Thread Imre Deak
On Sat, 2016-11-05 at 00:32 +0200, Imre Deak wrote:
> On Fri, 2016-11-04 at 21:01 +0000, Chris Wilson wrote:
> > On Fri, Nov 04, 2016 at 10:33:24PM +0200, Imre Deak wrote:
> > > On Thu, 2016-11-03 at 21:14 +0000, Chris Wilson wrote:
> > > > Where is that guaranteed? I thought we only serialised with the
> > > > pm
> > > > interrupts. Remember this happens before rpm suspend, since
> > > > gem_idle_work_handler is responsible for dropping the GPU
> > > > wakelock.
> > > 
> > > I meant that the 100msec delay between the last request signaling
> > > completion and this handler being scheduled is normally enough for
> > > the context complete interrupt to get delivered. But yeah, it's not
> > > a guarantee.
> > 
> > If only it was that deterministic! The idle_worker was scheduled
> > 100ms
> > after some retire_worker, just not necessarily the most recent. So
> > it
> > could be running exactly as active_requests -> 0 and so before the
> > context-interrupt.
> 
> Right, but we don't poll in that case, so there is no overhead.

Ok, there is a small window in the idle_worker after the unlocked poll
and before taking the lock where a new request could be submitted and
retired. In that case active_requests could be 0 after taking the lock
and we'd have the poll overhead there.

We could detect this by checking whether a new idle_worker is pending
and bailing out if so (see the sketch below); we shouldn't idle the GPU
in that case anyway.
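
A minimal sketch of that bail-out inside the idle handler (an
illustration only, not the patch; gt.idle_work being a delayed_work is
an assumption based on the v4.9-era i915 code):

	mutex_lock(&dev_priv->drm.struct_mutex);

	/*
	 * A new request may have been submitted and retired between the
	 * unlocked ELSP poll and taking struct_mutex. If so, a fresh
	 * idle_worker has been queued: bail out and let it decide whether
	 * to idle the GPU instead of paying the poll overhead here.
	 */
	if (delayed_work_pending(&dev_priv->gt.idle_work))
		goto out_unlock;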

> > Anyway, it was a good find!
> > -Chris
> > 


Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-04 Thread Imre Deak
On Fri, 2016-11-04 at 21:01 +0000, Chris Wilson wrote:
> On Fri, Nov 04, 2016 at 10:33:24PM +0200, Imre Deak wrote:
> > On Thu, 2016-11-03 at 21:14 +0000, Chris Wilson wrote:
> > > Where is that guaranteed? I thought we only serialised with the
> > > pm
> > > interrupts. Remember this happens before rpm suspend, since
> > > gem_idle_work_handler is responsible for dropping the GPU
> > > wakelock.
> > 
> > I meant that the 100msec delay between the last request signaling
> > completion and this handler being scheduled is normally enough for
> > the context complete interrupt to get delivered. But yeah, it's not
> > a guarantee.
> 
> If only it was that deterministic! The idle_worker was scheduled
> 100ms
> after some retire_worker, just not necessarily the most recent. So it
> could be running exactly as active_requests -> 0 and so before the
> context-interrupt.

Right, but we don't poll in that case, so there is no overhead.

> Anyway, it was a good find!
> -Chris
> 


Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-04 Thread Chris Wilson
On Fri, Nov 04, 2016 at 10:33:24PM +0200, Imre Deak wrote:
> On Thu, 2016-11-03 at 21:14 +0000, Chris Wilson wrote:
> > Where is that guaranteed? I thought we only serialised with the pm
> > interrupts. Remember this happens before rpm suspend, since
> > gem_idle_work_handler is responsible for dropping the GPU wakelock.
> 
> I meant that the 100msec delay between the last request signaling
> completion and this handler being scheduled is normally enough for the
> context complete interrupt to get delivered. But yeah, it's not a
> guarantee.

If only it was that deterministic! The idle_worker was scheduled 100ms
after some retire_worker, just not necessarily the most recent. So it
could be running exactly as active_requests -> 0 and so before the
context-interrupt.

Anyway, it was a good find!
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-04 Thread Imre Deak
On Thu, 2016-11-03 at 21:14 +0000, Chris Wilson wrote:
> On Thu, Nov 03, 2016 at 10:57:23PM +0200, Imre Deak wrote:
> > On Thu, 2016-11-03 at 18:59 +0000, Chris Wilson wrote:
> > > On Thu, Nov 03, 2016 at 06:19:37PM +0200, Imre Deak wrote:
> > > > We assume that the GPU is idle once receiving the seqno via the last
> > > > request's user interrupt. In execlist mode the corresponding context
> > > > completed interrupt can be delayed though and until this latter
> > > > interrupt arrives we consider the request to be pending on the ELSP
> > > > submit port. This can cause a problem during system suspend where this
> > > > last request will be seen by the resume code as still pending. Such
> > > > pending requests are normally replayed after a GPU reset, but during
> > > > resume we reset both SW and HW tracking of the ring head/tail pointers,
> > > > so replaying the pending request with its stale tail pointer will leave
> > > > the ring in an inconsistent state. A subsequent request submission can
> > > > lead then to the GPU executing from uninitialized area in the ring
> > > > behind the above stale tail pointer.
> > > > 
> > > > Fix this by making sure any pending request on the ELSP port is
> > > > completed before suspending. I used a polling wait since the completion
> > > > time I measured was <1ms and since normally we only need to wait during
> > > > system suspend. GPU idling during runtime suspend is scheduled with a
> > > > delay (currently 50-100ms) after the retirement of the last request at
> > > > which point the context completed interrupt must have arrived already.
> > > > 
> > > > The chance of hitting this bug was increased by
> > > > 
> > > > commit 1c777c5d1dcdf8fa0223fcff35fb387b5bb9517a
> > > > Author: Imre Deak 
> > > > Date:   Wed Oct 12 17:46:37 2016 +0300
> > > > 
> > > > drm/i915/hsw: Fix GPU hang during resume from S3-devices state
> > > > 
> > > > but it could happen even without the explicit GPU reset, since we
> > > > disable interrupts afterwards during the suspend sequence.
> > > > 
> > > > Cc: Chris Wilson 
> > > > Cc: Mika Kuoppala 
> > > > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98470
> > > > Signed-off-by: Imre Deak 
> > > > ---
> > > >  drivers/gpu/drm/i915/i915_gem.c  |  3 +++
> > > >  drivers/gpu/drm/i915/intel_lrc.c | 12 
> > > >  drivers/gpu/drm/i915/intel_lrc.h |  1 +
> > > >  3 files changed, 16 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > > > index 1f995ce..5ff02b5 100644
> > > > --- a/drivers/gpu/drm/i915/i915_gem.c
> > > > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > > > @@ -2766,6 +2766,9 @@ i915_gem_idle_work_handler(struct work_struct *work)
> > > >     if (dev_priv->gt.active_requests)
> > > >     goto out_unlock;
> > > >  
> > > > +   if (i915.enable_execlists)
> > > > +   intel_lr_wait_engines_idle(dev_priv);
> > > 
> > > Idle work handler... So runtime suspend.
> > > Anyway this is not an ideal place for a stall under struct_mutex (even if
> > > 16x10us, it's the principle!).
> > 
> > During runtime suspend this won't add any overhead since the context
> > done interrupt happened already (unless there is a bug somewhere else).
> 
> Where is that guaranteed? I thought we only serialised with the pm
> interrupts. Remember this happens before rpm suspend, since
> gem_idle_work_handler is responsible for dropping the GPU wakelock.

I meant that the 100msec delay between the last request signaling
completion and this handler being scheduled is normally enough for the
context complete interrupt to get delivered. But yeah, it's not a
guarantee.

> > > Move this to before the first READ_ONCE(dev_priv->gt.active_requests);
> > > so we stall before taking the lock, and skip if any new requests arrive
> > > whilst waiting.
> > > 
> > > (Also i915.enable_execlists is forbidden. But meh)
> > > 
> > > static struct drm_i915_gem_request *
> > > execlists_active_port(struct intel_engine_cs *engine)
> > > {
> > >   struct drm_i915_gem_request *request;
> > > 
> > >   request = READ_ONCE(engine->execlist_port[1]);
> > >   if (request)
> > >   return request;
> > > 
> > >   return READ_ONCE(engine->execlist_port[0]);
> > > }
> > > 
> > > /* Wait for execlists to settle, but bail if any new requests come in */
> > > for_each_engine(engine, dev_priv, id) {
> > >   struct drm_i915_gem_request *request;
> > > 
> > >   request = execlists_active_port(engine);
> > >   if (!request)
> > >   continue;
> > > 
> > >   if (wait_for(execlists_active_port(engine) != request, 10))
> > >   DRM_ERROR("Timeout waiting for %s to idle\n", engine->name);
> > > }
> > 
> > Hm, but we still need to re-check and bail out if not idle with
> > struct_mutex held, since gt.active_requests could go 0->1->0 before
> > taking struct_mutex? I can rewrite things with that check added, using
> > the above.

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-03 Thread Chris Wilson
On Thu, Nov 03, 2016 at 10:57:23PM +0200, Imre Deak wrote:
> On Thu, 2016-11-03 at 18:59 +0000, Chris Wilson wrote:
> > On Thu, Nov 03, 2016 at 06:19:37PM +0200, Imre Deak wrote:
> > > We assume that the GPU is idle once receiving the seqno via the last
> > > request's user interrupt. In execlist mode the corresponding context
> > > completed interrupt can be delayed though and until this latter
> > > interrupt arrives we consider the request to be pending on the ELSP
> > > submit port. This can cause a problem during system suspend where this
> > > last request will be seen by the resume code as still pending. Such
> > > pending requests are normally replayed after a GPU reset, but during
> > > resume we reset both SW and HW tracking of the ring head/tail pointers,
> > > so replaying the pending request with its stale tail pointer will leave
> > > the ring in an inconsistent state. A subsequent request submission can
> > > lead then to the GPU executing from uninitialized area in the ring
> > > behind the above stale tail pointer.
> > > 
> > > Fix this by making sure any pending request on the ELSP port is
> > > completed before suspending. I used a polling wait since the completion
> > > time I measured was <1ms and since normally we only need to wait during
> > > system suspend. GPU idling during runtime suspend is scheduled with a
> > > delay (currently 50-100ms) after the retirement of the last request at
> > > which point the context completed interrupt must have arrived already.
> > > 
> > > The chance of hitting this bug was increased by
> > > 
> > > commit 1c777c5d1dcdf8fa0223fcff35fb387b5bb9517a
> > > Author: Imre Deak 
> > > Date:   Wed Oct 12 17:46:37 2016 +0300
> > > 
> > > drm/i915/hsw: Fix GPU hang during resume from S3-devices state
> > > 
> > > but it could happen even without the explicit GPU reset, since we
> > > disable interrupts afterwards during the suspend sequence.
> > > 
> > > Cc: Chris Wilson 
> > > Cc: Mika Kuoppala 
> > > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98470
> > > Signed-off-by: Imre Deak 
> > > ---
> > >  drivers/gpu/drm/i915/i915_gem.c  |  3 +++
> > >  drivers/gpu/drm/i915/intel_lrc.c | 12 
> > >  drivers/gpu/drm/i915/intel_lrc.h |  1 +
> > >  3 files changed, 16 insertions(+)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > > index 1f995ce..5ff02b5 100644
> > > --- a/drivers/gpu/drm/i915/i915_gem.c
> > > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > > @@ -2766,6 +2766,9 @@ i915_gem_idle_work_handler(struct work_struct *work)
> > >   if (dev_priv->gt.active_requests)
> > >   goto out_unlock;
> > >  
> > > + if (i915.enable_execlists)
> > > + intel_lr_wait_engines_idle(dev_priv);
> > 
> > Idle work handler... So runtime suspend.
> > Anyway this is not an ideal place for a stall under struct_mutex (even if
> > 16x10us, it's the principle!).
> 
> During runtime suspend this won't add any overhead since the context
> done interrupt happened already (unless there is a bug somewhere else).

Where is that guaranteed? I thought we only serialised with the pm
interrupts. Remember this happens before rpm suspend, since
gem_idle_work_handler is responsible for dropping the GPU wakelock.
 
> > Move this to before the first READ_ONCE(dev_priv->gt.active_requests);
> > so we stall before taking the lock, and skip if any new requests arrive
> > whilst waiting.
> > 
> > (Also i915.enable_execlists is forbidden. But meh)
> > 
> > static struct drm_i915_gem_request *
> > execlists_active_port(struct intel_engine_cs *engine)
> > {
> > 	struct drm_i915_gem_request *request;
> > 
> > 	request = READ_ONCE(engine->execlist_port[1]);
> > 	if (request)
> > 		return request;
> > 
> > 	return READ_ONCE(engine->execlist_port[0]);
> > }
> > 
> > /* Wait for execlists to settle, but bail if any new requests come in */
> > for_each_engine(engine, dev_priv, id) {
> > 	struct drm_i915_gem_request *request;
> > 
> > 	request = execlists_active_port(engine);
> > 	if (!request)
> > 		continue;
> > 
> > 	if (wait_for(execlists_active_port(engine) != request, 10))
> > 		DRM_ERROR("Timeout waiting for %s to idle\n", engine->name);
> > }
> 
> Hm, but we still need to re-check and bail out if not idle with
> struct_mutex held, since gt.active_requests could go 0->1->0 before
> taking struct_mutex? I can rewrite things with that check added, using
> the above.

Hmm, apparently we don't care ;) If the context-done interrupt is
serialised with runtime suspend, then we don't need a wait here at all.
On the system suspend path there are no new requests and we are just
flushing the idle worker.

But yes, for the sake of correctness do both an unlocked wait followed
by a locked wait.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
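
A minimal sketch of the unlocked-then-locked wait agreed on above,
reusing execlists_active_port() from Chris's snippet; the wrapper name
and locking details are assumptions based on the v4.9-era i915 code,
not the final patch:

static void idle_worker_wait_for_execlists(struct drm_i915_private *dev_priv)
{
	struct intel_engine_cs *engine;
	enum intel_engine_id id;

	/* Unlocked: poll the ELSP ports before taking struct_mutex, so
	 * submitters are not stalled while we wait for the context-done
	 * interrupts. Moves on per engine once its active port flips. */
	for_each_engine(engine, dev_priv, id) {
		struct drm_i915_gem_request *request;

		request = execlists_active_port(engine);
		if (request &&
		    wait_for(execlists_active_port(engine) != request, 10))
			DRM_ERROR("Timeout waiting for %s to idle\n",
				  engine->name);
	}

	mutex_lock(&dev_priv->drm.struct_mutex);

	/* Locked: active_requests may have gone 0->1->0 while we were
	 * polling, so for correctness re-check the ports with struct_mutex
	 * held before declaring the GPU idle. */
	if (!dev_priv->gt.active_requests)
		intel_lr_wait_engines_idle(dev_priv);

	mutex_unlock(&dev_priv->drm.struct_mutex);
}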

Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-03 Thread Imre Deak
On Thu, 2016-11-03 at 18:59 +0000, Chris Wilson wrote:
> On Thu, Nov 03, 2016 at 06:19:37PM +0200, Imre Deak wrote:
> > We assume that the GPU is idle once receiving the seqno via the last
> > request's user interrupt. In execlist mode the corresponding context
> > completed interrupt can be delayed though and until this latter
> > interrupt arrives we consider the request to be pending on the ELSP
> > submit port. This can cause a problem during system suspend where this
> > last request will be seen by the resume code as still pending. Such
> > pending requests are normally replayed after a GPU reset, but during
> > resume we reset both SW and HW tracking of the ring head/tail pointers,
> > so replaying the pending request with its stale tail pointer will leave
> > the ring in an inconsistent state. A subsequent request submission can
> > lead then to the GPU executing from uninitialized area in the ring
> > behind the above stale tail pointer.
> > 
> > Fix this by making sure any pending request on the ELSP port is
> > completed before suspending. I used a polling wait since the completion
> > time I measured was <1ms and since normally we only need to wait during
> > system suspend. GPU idling during runtime suspend is scheduled with a
> > delay (currently 50-100ms) after the retirement of the last request at
> > which point the context completed interrupt must have arrived already.
> > 
> > The chance of hitting this bug was increased by
> > 
> > commit 1c777c5d1dcdf8fa0223fcff35fb387b5bb9517a
> > Author: Imre Deak 
> > Date:   Wed Oct 12 17:46:37 2016 +0300
> > 
> > drm/i915/hsw: Fix GPU hang during resume from S3-devices state
> > 
> > but it could happen even without the explicit GPU reset, since we
> > disable interrupts afterwards during the suspend sequence.
> > 
> > Cc: Chris Wilson 
> > Cc: Mika Kuoppala 
> > Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98470
> > Signed-off-by: Imre Deak 
> > ---
> >  drivers/gpu/drm/i915/i915_gem.c  |  3 +++
> >  drivers/gpu/drm/i915/intel_lrc.c | 12 
> >  drivers/gpu/drm/i915/intel_lrc.h |  1 +
> >  3 files changed, 16 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 1f995ce..5ff02b5 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -2766,6 +2766,9 @@ i915_gem_idle_work_handler(struct work_struct *work)
> >     if (dev_priv->gt.active_requests)
> >     goto out_unlock;
> >  
> > +   if (i915.enable_execlists)
> > +   intel_lr_wait_engines_idle(dev_priv);
> 
> Idle work handler... So runtime suspend.
> Anyway this is not an ideal place for a stall under struct_mutex (even if
> 16x10us, it's the principle!).

During runtime suspend this won't add any overhead since the context
done interrupt happened already (unless there is a bug somewhere else).

> Move this to before the first READ_ONCE(dev_priv->gt.active_requests);
> so we stall before taking the lock, and skip if any new requests arrive
> whilst waiting.
> 
> (Also i915.enable_execlists is forbidden. But meh)
> 
> static struct drm_i915_gem_request *
> execlists_active_port(struct intel_engine_cs *engine)
> {
>   struct drm_i915_gem_request *request;
> 
>   request = READ_ONCE(engine->execlist_port[1]);
>   if (request)
>   return request;
> 
>   return READ_ONCE(engine->execlist_port[0]);
> }
> 
> /* Wait for execlists to settle, but bail if any new requests come in */
> for_each_engine(engine, dev_priv, id) {
>   struct drm_i915_gem_request *request;
> 
>   request = execlists_active_port(engine);
>   if (!request)
>   continue;
> 
>   if (wait_for(execlists_active_port(engine) != request, 10))
>   DRM_ERROR("Timeout waiting for %s to idle\n", engine->name);
> }

Hm, but we still need to re-check and bail out if not idle with
struct_mutex held, since gt.active_requests could go 0->1->0 before
taking struct_mutex? I can rewrite things with that check added, using
the above.

--Imre


Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-03 Thread Chris Wilson
On Thu, Nov 03, 2016 at 06:19:37PM +0200, Imre Deak wrote:
> We assume that the GPU is idle once receiving the seqno via the last
> request's user interrupt. In execlist mode the corresponding context
> completed interrupt can be delayed though and until this latter
> interrupt arrives we consider the request to be pending on the ELSP
> submit port. This can cause a problem during system suspend where this
> last request will be seen by the resume code as still pending. Such
> pending requests are normally replayed after a GPU reset, but during
> resume we reset both SW and HW tracking of the ring head/tail pointers,
> so replaying the pending request with its stale tail pointer will leave
> the ring in an inconsistent state. A subsequent request submission can
> lead then to the GPU executing from uninitialized area in the ring
> behind the above stale tail pointer.
> 
> Fix this by making sure any pending request on the ELSP port is
> completed before suspending. I used a polling wait since the completion
> time I measured was <1ms and since normally we only need to wait during
> system suspend. GPU idling during runtime suspend is scheduled with a
> delay (currently 50-100ms) after the retirement of the last request at
> which point the context completed interrupt must have arrived already.
> 
> The chance of hitting this bug was increased by
> 
> commit 1c777c5d1dcdf8fa0223fcff35fb387b5bb9517a
> Author: Imre Deak 
> Date:   Wed Oct 12 17:46:37 2016 +0300
> 
> drm/i915/hsw: Fix GPU hang during resume from S3-devices state
> 
> but it could happen even without the explicit GPU reset, since we
> disable interrupts afterwards during the suspend sequence.
> 
> Cc: Chris Wilson 
> Cc: Mika Kuoppala 
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98470
> Signed-off-by: Imre Deak 
> ---
>  drivers/gpu/drm/i915/i915_gem.c  |  3 +++
>  drivers/gpu/drm/i915/intel_lrc.c | 12 
>  drivers/gpu/drm/i915/intel_lrc.h |  1 +
>  3 files changed, 16 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 1f995ce..5ff02b5 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -2766,6 +2766,9 @@ i915_gem_idle_work_handler(struct work_struct *work)
>   if (dev_priv->gt.active_requests)
>   goto out_unlock;
>  
> + if (i915.enable_execlists)
> + intel_lr_wait_engines_idle(dev_priv);

Idle work handler... So runtime suspend. Anyway this is not an
ideal place for a stall under struct_mutex (even if 16x10us, it's the
principle!).

Move this to before the first READ_ONCE(dev_priv->gt.active_requests);
so we stall before taking the lock, and skip if any new requests arrive
whilst waiting.

(Also i915.enable_execlists is forbidden. But meh)

static struct drm_i915_gem_request *
execlists_active_port(struct intel_engine_cs *engine)
{
	struct drm_i915_gem_request *request;

	request = READ_ONCE(engine->execlist_port[1]);
	if (request)
		return request;

	return READ_ONCE(engine->execlist_port[0]);
}

/* Wait for execlists to settle, but bail if any new requests come in */
for_each_engine(engine, dev_priv, id) {
	struct drm_i915_gem_request *request;

	request = execlists_active_port(engine);
	if (!request)
		continue;

	if (wait_for(execlists_active_port(engine) != request, 10))
		DRM_ERROR("Timeout waiting for %s to idle\n", engine->name);
}

-- 
Chris Wilson, Intel Open Source Technology Centre


Re: [Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-03 Thread Tvrtko Ursulin


On 03/11/2016 16:19, Imre Deak wrote:

We assume that the GPU is idle once receiving the seqno via the last
request's user interrupt. In execlist mode the corresponding context
completed interrupt can be delayed though and until this latter
interrupt arrives we consider the request to be pending on the ELSP
submit port. This can cause a problem during system suspend where this
last request will be seen by the resume code as still pending. Such
pending requests are normally replayed after a GPU reset, but during
resume we reset both SW and HW tracking of the ring head/tail pointers,
so replaying the pending request with its stale tail pointer will leave
the ring in an inconsistent state. A subsequent request submission can
lead then to the GPU executing from uninitialized area in the ring
behind the above stale tail pointer.

Fix this by making sure any pending request on the ELSP port is
completed before suspending. I used a polling wait since the completion
time I measured was <1ms and since normally we only need to wait during
system suspend. GPU idling during runtime suspend is scheduled with a
delay (currently 50-100ms) after the retirement of the last request at
which point the context completed interrupt must have arrived already.

The chance of hitting this bug was increased by

commit 1c777c5d1dcdf8fa0223fcff35fb387b5bb9517a
Author: Imre Deak 
Date:   Wed Oct 12 17:46:37 2016 +0300

drm/i915/hsw: Fix GPU hang during resume from S3-devices state

but it could happen even without the explicit GPU reset, since we
disable interrupts afterwards during the suspend sequence.

Cc: Chris Wilson 
Cc: Mika Kuoppala 
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98470
Signed-off-by: Imre Deak 
---
 drivers/gpu/drm/i915/i915_gem.c  |  3 +++
 drivers/gpu/drm/i915/intel_lrc.c | 12 
 drivers/gpu/drm/i915/intel_lrc.h |  1 +
 3 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1f995ce..5ff02b5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2766,6 +2766,9 @@ i915_gem_idle_work_handler(struct work_struct *work)
 	if (dev_priv->gt.active_requests)
 		goto out_unlock;
 
+	if (i915.enable_execlists)
+		intel_lr_wait_engines_idle(dev_priv);
+
 	for_each_engine(engine, dev_priv, id)
 		i915_gem_batch_pool_fini(&engine->batch_pool);

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fa3012c..ee4aaf1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -522,6 +522,18 @@ static bool execlists_elsp_idle(struct intel_engine_cs *engine)
 	return !engine->execlist_port[0].request;
 }

+void intel_lr_wait_engines_idle(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, dev_priv, id) {
+		if (wait_for(execlists_elsp_idle(engine), 10))
+			DRM_ERROR("Timeout waiting for engine %s to idle\n",
+				  engine->name);


Just noticed engine names are currently like "render ring", etc., so you
can drop the 'engine' from the message.
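
That is, the message would become something like:

	DRM_ERROR("Timeout waiting for %s to idle\n", engine->name);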



+	}
+}
+
 static bool execlists_elsp_ready(struct intel_engine_cs *engine)
 {
 	int port;
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 4fed816..bf3775e 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -87,6 +87,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,

 struct drm_i915_private;

+void intel_lr_wait_engines_idle(struct drm_i915_private *dev_priv);
 void intel_lr_context_resume(struct drm_i915_private *dev_priv);
 uint64_t intel_lr_context_descriptor(struct i915_gem_context *ctx,
 				     struct intel_engine_cs *engine);



Regards,

Tvrtko


[Intel-gfx] [PATCH 1/2] drm/i915: Make sure engines are idle during GPU idling in LR mode

2016-11-03 Thread Imre Deak
We assume that the GPU is idle once receiving the seqno via the last
request's user interrupt. In execlist mode the corresponding context
completed interrupt can be delayed though and until this latter
interrupt arrives we consider the request to be pending on the ELSP
submit port. This can cause a problem during system suspend where this
last request will be seen by the resume code as still pending. Such
pending requests are normally replayed after a GPU reset, but during
resume we reset both SW and HW tracking of the ring head/tail pointers,
so replaying the pending request with its stale tail pointer will leave
the ring in an inconsistent state. A subsequent request submission can
lead then to the GPU executing from uninitialized area in the ring
behind the above stale tail pointer.

Fix this by making sure any pending request on the ELSP port is
completed before suspending. I used a polling wait since the completion
time I measured was <1ms and since normally we only need to wait during
system suspend. GPU idling during runtime suspend is scheduled with a
delay (currently 50-100ms) after the retirement of the last request at
which point the context completed interrupt must have arrived already.

The chance of hitting this bug was increased by

commit 1c777c5d1dcdf8fa0223fcff35fb387b5bb9517a
Author: Imre Deak 
Date:   Wed Oct 12 17:46:37 2016 +0300

drm/i915/hsw: Fix GPU hang during resume from S3-devices state

but it could happen even without the explicit GPU reset, since we
disable interrupts afterwards during the suspend sequence.

Cc: Chris Wilson 
Cc: Mika Kuoppala 
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=98470
Signed-off-by: Imre Deak 
---
 drivers/gpu/drm/i915/i915_gem.c  |  3 +++
 drivers/gpu/drm/i915/intel_lrc.c | 12 
 drivers/gpu/drm/i915/intel_lrc.h |  1 +
 3 files changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1f995ce..5ff02b5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2766,6 +2766,9 @@ i915_gem_idle_work_handler(struct work_struct *work)
 	if (dev_priv->gt.active_requests)
 		goto out_unlock;
 
+	if (i915.enable_execlists)
+		intel_lr_wait_engines_idle(dev_priv);
+
 	for_each_engine(engine, dev_priv, id)
 		i915_gem_batch_pool_fini(&engine->batch_pool);
 
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fa3012c..ee4aaf1 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -522,6 +522,18 @@ static bool execlists_elsp_idle(struct intel_engine_cs *engine)
 	return !engine->execlist_port[0].request;
 }
 
+void intel_lr_wait_engines_idle(struct drm_i915_private *dev_priv)
+{
+	struct intel_engine_cs *engine;
+	enum intel_engine_id id;
+
+	for_each_engine(engine, dev_priv, id) {
+		if (wait_for(execlists_elsp_idle(engine), 10))
+			DRM_ERROR("Timeout waiting for engine %s to idle\n",
+				  engine->name);
+	}
+}
+
 static bool execlists_elsp_ready(struct intel_engine_cs *engine)
 {
 	int port;
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 4fed816..bf3775e 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -87,6 +87,7 @@ void intel_lr_context_unpin(struct i915_gem_context *ctx,
 
 struct drm_i915_private;
 
+void intel_lr_wait_engines_idle(struct drm_i915_private *dev_priv);
 void intel_lr_context_resume(struct drm_i915_private *dev_priv);
 uint64_t intel_lr_context_descriptor(struct i915_gem_context *ctx,
 				     struct intel_engine_cs *engine);
-- 
2.5.0
