Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-13 Thread Rodrigo Vivi
Rodrigo Vivi  writes:

> On Sat, Feb 10, 2018 at 09:43:59AM +, Chris Wilson wrote:
>> Quoting Andy Lutomirski (2018-02-09 17:55:38)
>> > On Fri, Feb 9, 2018 at 7:39 AM, Rodrigo Vivi  
>> > wrote:
>> > > Rodrigo Vivi  writes:
>> > > So, I moved the hacked schedule to 10s and forced psr_activate 10ms
>> > > after that, and the result is this:
>> > >
>> > >
>> > > [   11.757909] [drm:intel_psr_enable [i915]] *ERROR* I915-DEBUG:
>> > > Scheduling 10s
>> > > [   11.778586] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work 
>> > > busy!
>> > > ...
>> > > a bunch more of Work busy
>> > > ...
>> > > [   21.980643] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work 
>> > > busy!
>> > 
>> > This line ("Work busy!") is intel_psr_flush() noticing that
>> > work_busy() returns true (which it does indeed do when the work is
>> > scheduled for the future).  But this means that intel_psr_flush()
>> > wants to keep PSR off for a little bit (10s with your patch applied)
>> > because the screen isn't idle.
>> > 
>> > > [   21.983060] [drm:intel_psr_work [i915]] *ERROR* I915-DEBUG: Work 
>> > > running
>> > 
>> > And here's intel_psr_work() turning PSR back on 3ms later.
>> > 
>> > So I think you're seeing exactly the bug I described.
>
> Well, not in the way you described though.
>
> As we can see in the log, the work only gets executed 10 seconds after
> intel_psr_enable wanted it. None of the thousands of flushes happening in
> between triggers psr_activate before the intended 10s.
>
> So the hack indeed works as expected.
>
> Given that description I proved that it works as expected and moved on.
> But since you insisted, I stepped back and re-read your patch as if you had
> never mentioned the hack in intel_psr_enable at all, focusing only
> on the regular calls from intel_psr_flush, and you are right:
> we do have a bug here that would affect VLV/CHV.
>
> In the current code psr_activate can happen at any moment from 0ms to 100ms
> after the flush.
>
> The 100ms just reduces the number of fast exit-activate cases;
> it does not eliminate the possibility of activating before 100ms.
>
> On core platforms it shouldn't be a problem because HW tracking takes
> care of waiting the extra idle frames.
>
> This could be a particular problem for the VLV/CHV platforms, where I believe
> we should wait a few frames before really activating it again.
>
> So, we can probably go ahead with your patch so we get this covered for
> VLV/CHV, but I'd like a version that doesn't mention the intel_psr_enable
> case, which is already proven to work as expected.
>
> But meanwhile I will check back on the hack and see if we can already
> remove it, since many things got fixed and improved around eDP link training
> and PSR itself.
>
> Also, I will discuss with DK and check the possibility of an immediate activate
> on platforms with HW tracking, especially now that we are moving towards more
> use of HW tracking.
>
> In that case we would restrict this scheduled/timer-based solution to
> VLV/CHV only.
>
> And then on the VLV/CHV front there are even more issues, like syncing with
> VBLANK, etc., on a platform that probably went to market without a single
> unit shipping a PSR panel, since those panels were too expensive when those
> platforms got released.
>
> I'm even considering removing the VLV/CHV code completely.
>
>> 
>> It's also evident in the tests that PSR isn't being disabled as
>> often or as quickly as we need for frontbuffer updates to be seen.
>
> Yeap, it is evident we have many bugs there, but I'm afraid this is not
> the answer to the lag.
> DK found some promising ones...

Ok. I was wrong. This also explains and fixes some cases here like
fbcon/fbdev... So:

Tested-by: Rodrigo Vivi 

I'd still want a version without mention of the schedule case in
intel_psr_enable, to avoid confusion here. But since I'm now
basing my patches on top of this:

Acked-by: Rodrigo Vivi 

>
>> -Chris


Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-12 Thread Rodrigo Vivi
On Sat, Feb 10, 2018 at 09:43:59AM +, Chris Wilson wrote:
> Quoting Andy Lutomirski (2018-02-09 17:55:38)
> > On Fri, Feb 9, 2018 at 7:39 AM, Rodrigo Vivi  wrote:
> > > Rodrigo Vivi  writes:
> > > So, I moved the hacked schedule to 10s and forced psr_activate 10ms
> > > after that, and the result is this:
> > >
> > >
> > > [   11.757909] [drm:intel_psr_enable [i915]] *ERROR* I915-DEBUG:
> > > Scheduling 10s
> > > [   11.778586] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> > > ...
> > > a bunch more of Work busy
> > > ...
> > > [   21.980643] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> > 
> > This line ("Work busy!") is intel_psr_flush() noticing that
> > work_busy() returns true (which it does indeed do when the work is
> > scheduled for the future).  But this means that intel_psr_flush()
> > wants to keep PSR off for a little bit (10s with your patch applied)
> > because the screen isn't idle.
> > 
> > > [   21.983060] [drm:intel_psr_work [i915]] *ERROR* I915-DEBUG: Work 
> > > running
> > 
> > And here's intel_psr_work() turning PSR back on 3ms later.
> > 
> > So I think you're seeing exactly the bug I described.

Well, not in the way you described though.

As we can see in the log, the work only gets executed 10 seconds after
intel_psr_enable wanted it. None of the thousands of flushes happening in
between triggers psr_activate before the intended 10s.

So the hack indeed works as expected.

Given that description I proved that it works as expected and moved on.
But since you insisted, I stepped back and re-read your patch as if you had
never mentioned the hack in intel_psr_enable at all, focusing only
on the regular calls from intel_psr_flush, and you are right:
we do have a bug here that would affect VLV/CHV.

In the current code psr_activate can happen at any moment from 0ms to 100ms
after the flush.

The 100ms just reduces the number of fast exit-activate cases;
it does not eliminate the possibility of activating before 100ms.
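For reference, this is roughly the flush-side scheduling path we are talking
about (a simplified sketch from memory, not a verbatim copy of the code):

	/* sketch of the scheduling done at the end of intel_psr_flush() */
	if (!work_busy(&dev_priv->psr.work.work))
		schedule_delayed_work(&dev_priv->psr.work,
				      msecs_to_jiffies(100));

Since schedule_delayed_work() is a no-op when the work is already pending,
whichever caller armed the work first wins, and nothing on this path ever
pushes an already-armed expiry further out.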

On core platforms it shouldn't be a problem because HW tracking takes
care of waiting the extra idle frames.

This could be a particular problem for the VLV/CHV platforms, where I believe
we should wait a few frames before really activating it again.

So, we can probably go ahead with your patch so we get this covered for
VLV/CHV, but I'd like a version that doesn't mention the intel_psr_enable
case, which is already proven to work as expected.

But meanwhile I will check back on the hack and see if we can already
remove it, since many things got fixed and improved around eDP link training
and PSR itself.

Also, I will discuss with DK and check the possibility of an immediate activate
on platforms with HW tracking, especially now that we are moving towards more
use of HW tracking.

In that case we would restrict this scheduled/timer-based solution to
VLV/CHV only.

And then on the VLV/CHV front there are even more issues, like syncing with
VBLANK, etc., on a platform that probably went to market without a single
unit shipping a PSR panel, since those panels were too expensive when those
platforms got released.

I'm even considering removing the VLV/CHV code completely.

> 
> It's also evident in the tests that PSR isn't being disabled as
> often or as quickly as we need for frontbuffer updates to be seen.

Yeap, it is evident we have many bugs there, but I'm afraid this is not
the answer to the lag.
DK found some promising ones...

> -Chris


Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-10 Thread Chris Wilson
Quoting Andy Lutomirski (2018-02-09 17:55:38)
> On Fri, Feb 9, 2018 at 7:39 AM, Rodrigo Vivi  wrote:
> > Rodrigo Vivi  writes:
> > So, I moved the hacked schedule to 10s and forced psr_activate 10ms
> > after that, and the result is this:
> >
> >
> > [   11.757909] [drm:intel_psr_enable [i915]] *ERROR* I915-DEBUG:
> > Scheduling 10s
> > [   11.778586] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> > ...
> > a bunch more of Work busy
> > ...
> > [   21.980643] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> 
> This line ("Work busy!") is intel_psr_flush() noticing that
> work_busy() returns true (which it does indeed do when the work is
> scheduled for the future).  But this means that intel_psr_flush()
> wants to keep PSR off for a little bit (10s with your patch applied)
> because the screen isn't idle.
> 
> > [   21.983060] [drm:intel_psr_work [i915]] *ERROR* I915-DEBUG: Work running
> 
> And here's intel_psr_work() turning PSR back on 3ms later.
> 
> So I think you're seeing exactly the bug I described.

It's also evident in the tests that PSR isn't being disabled as
often or as quickly as we need for frontbuffer updates to be seen.
-Chris


Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-09 Thread Pandiyan, Dhinakaran

On Thu, 2018-02-08 at 23:39 -0800, Rodrigo Vivi wrote:
> Rodrigo Vivi  writes:
> 
> > "Pandiyan, Dhinakaran"  writes:
> >
> >> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
> >>> Hi Andy,
> >>>
> >>> thanks for getting involved with PSR and sorry for not replying sooner.
> >>>
> >>> I first saw this patch on that bugzilla entry, but only now did I stop to
> >>> really think about why I wrote the code that way.
> >>>
> >>> So some clarity below.
> >>>
> >>> On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
> >>> > The current PSR code has two call sites that each schedule delayed
> >>> > work to activate PSR.  As far as I can tell, each call site intends
> >>> > to keep PSR inactive for the given amount of time and then allow it
> >>> > to be activated.
> >>> >
> >>> > The call sites are:
> >>> >
> >>> >  - intel_psr_enable(), which explicitly states in a comment that
> >>> >it's trying to keep PSR off a short time after the display is
> >>> >initialized as a workaround.
> >>>
> >>> First of all I really want to kill this call here and remove the
> >>> FIXME. It was an ugly hack that I added to solve a corner case
> >>> that was leaving me with blank screens when activating too soon.
> >>>
> >>> >
> >>> >  - intel_psr_flush().  There isn't an explicit explanation, but the
> >>> >intent is presumably to keep PSR off until the display has been
> >>> >idle for 100ms.
> >>>
> >>> The reason for 100 is a kind of ugly, empirical, nonsense value
> >>> I concluded from VLV/CHV experience.
> >>> On platforms with HW tracking, the HW waits a few identical frames
> >>> before really activating PSR. VLV/CHV activation is immediate.
> >>> But the HW is also different there, and it seemed that it needed
> >>> a bit more time before starting the transitions.
> >>> Furthermore, I didn't want to activate that quickly because I didn't
> >>> want to take the risk of killing the battery when doing transitions
> >>> so quickly with software tracking.
> >>>
> >>> >
> >>> > The current code doesn't actually accomplish either of these goals.
> >>> > Rather than keeping PSR inactive for the given amount of time, it
> >>> > will schedule PSR for activation after the given time, with the
> >>> > earliest target time in such a request winning.
> >>>
> >>> Put that way, I was asking myself how that hack had ever fixed
> >>> my issue, because the way you explained it here makes it seem obvious
> >>> that it would never fix my bug or any other.
> >>>
> >>> So I applied your patch and it made even more sense (without considering
> >>> the fact that I want to kill the first call anyway).
> >>>
> >>> So I came back, removed your patch and tried to understand how it
> >>> had ever worked.
> >>>
> >>> So, the thing is that intel_psr_flush will never be really executed
> >>> if intel_psr_enable wasn't executed. That is guaranteed by:
> >>>
> >>> mutex_lock(&dev_priv->psr.lock);
> >>>   if (!dev_priv->psr.enabled) {
> >>>
> >>> So, intel_psr_enable will be for sure the first one to schedule the
> >>> work delayed to the ugly higher delay.
> >>>
> >>> >
> >>> > In other words, if intel_psr_enable() is immediately followed by
> >>> > intel_psr_flush(), then PSR will be activated after 100ms even if
> >>> > intel_psr_enable() wanted a longer delay.  And, if the screen is
> >>> > being constantly updated so that intel_psr_flush() is called once
> >>> > per frame at 60Hz, PSR will still be activated once every 100ms.
> >>>
> >>> During this time you are right, many calls to intel_psr_exit
> >>> coming from the flush functions can happen... But none of
> >>> them will schedule the work with the 100ms delay.
> >>>
> >>> they will skip on
> >>> if (!work_busy(&dev_priv->psr.work.work))
> >>
> >> Wouldn't work_busy() return false until the work is actually queued
> >> which is 100ms after calling schedule_delayed_work()?
> >
> > That's not my understanding of work_busy man.
> >
> > "work_busy - test whether a work is currently pending or running"

Sorry about this, WORK_STRUCT_PENDING_BIT is indeed set as soon as
schedule_delayed_work() is called.

> >
> > I consider it as pending one...
> >
> > But yeap... it was a long time ago that I did this so I'm not sure...
> 
> I wouldn't be able to sleep today if I hadn't experimented with this.
> 
> So, I moved the hacked schedule to 10s and forced psr_activate 10ms
> after that, and the result is this:
> 
> 
> [   11.757909] [drm:intel_psr_enable [i915]] *ERROR* I915-DEBUG:
> Scheduling 10s
> [   11.778586] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> ...
> a bunch more of Work busy
> ...
> [   21.980643] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> [   21.983060] [drm:intel_psr_work [i915]] *ERROR* I915-DEBUG: Work running
> [   21.986353] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Scheduling 
> normal 100ms
> So, the schedules are working as expected, I'm not crazy and I will be
> able to sleep :)
> 
> (I will also attach my patch and dmesg to the bugzilla entry and close the bug.)

Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-09 Thread Andy Lutomirski
On Fri, Feb 9, 2018 at 7:39 AM, Rodrigo Vivi  wrote:
> Rodrigo Vivi  writes:
>
>> "Pandiyan, Dhinakaran"  writes:
>>
>>> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
 Hi Andy,

 thanks for getting involved with PSR and sorry for not replying sooner.

 I first saw this patch on that bugzilla entry, but only now did I stop to
 really think about why I wrote the code that way.

 So some clarity below.

 On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
 > The current PSR code has two call sites that each schedule delayed
 > work to activate PSR.  As far as I can tell, each call site intends
 > to keep PSR inactive for the given amount of time and then allow it
 > to be activated.
 >
 > The call sites are:
 >
 >  - intel_psr_enable(), which explicitly states in a comment that
 >it's trying to keep PSR off a short time after the display is
 >initialized as a workaround.

 First of all I really want to kill this call here and remove the
 FIXME. It was an ugly hack that I added to solve a corner case
 that was leaving me with blank screens when activating too soon.

 >
 >  - intel_psr_flush().  There isn't an explicit explanation, but the
 >intent is presumably to keep PSR off until the display has been
 >idle for 100ms.

 The reason for 100 is a kind of ugly, empirical, nonsense value
 I concluded from VLV/CHV experience.
 On platforms with HW tracking, the HW waits a few identical frames
 before really activating PSR. VLV/CHV activation is immediate.
 But the HW is also different there, and it seemed that it needed
 a bit more time before starting the transitions.
 Furthermore, I didn't want to activate that quickly because I didn't
 want to take the risk of killing the battery when doing transitions
 so quickly with software tracking.

 >
 > The current code doesn't actually accomplish either of these goals.
 > Rather than keeping PSR inactive for the given amount of time, it
 > will schedule PSR for activation after the given time, with the
 > earliest target time in such a request winning.

 Put that way, I was asking myself how that hack had ever fixed
 my issue, because the way you explained it here makes it seem obvious
 that it would never fix my bug or any other.

 So I applied your patch and it made even more sense (without considering
 the fact that I want to kill the first call anyway).

 So I came back, removed your patch and tried to understand how it
 had ever worked.

 So, the thing is that intel_psr_flush will never be really executed
 if intel_psr_enable wasn't executed. That is guaranteed by:

 mutex_lock(&dev_priv->psr.lock);
 if (!dev_priv->psr.enabled) {

 So, intel_psr_enable will be for sure the first one to schedule the
 work delayed to the ugly higher delay.

 >
 > In other words, if intel_psr_enable() is immediately followed by
 > intel_psr_flush(), then PSR will be activated after 100ms even if
 > intel_psr_enable() wanted a longer delay.  And, if the screen is
 > being constantly updated so that intel_psr_flush() is called once
 > per frame at 60Hz, PSR will still be activated once every 100ms.

 During this time you are right, many calls to intel_psr_exit
 coming from the flush functions can happen... But none of
 them will schedule the work with the 100ms delay.

 they will skip on
 if (!work_busy(&dev_priv->psr.work.work))
>>>
>>> Wouldn't work_busy() return false until the work is actually queued
>>> which is 100ms after calling schedule_delayed_work()?
>>
>> That's not my understanding of work_busy man.
>>
>> "work_busy - test whether a work is currently pending or running"
>>
>> I consider it as pending one...
>>
>> But yeap... it was a long time ago that I did this so I'm not sure...
>
> I wouldn't be able to sleep today if I hadn't experimented with this.
>
> So, I moved the hacked schedule to 10s and forced psr_activate 10ms
> after that, and the result is this:
>
>
> [   11.757909] [drm:intel_psr_enable [i915]] *ERROR* I915-DEBUG:
> Scheduling 10s
> [   11.778586] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
> ...
> a bunch more of Work busy
> ...
> [   21.980643] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!

This line ("Work busy!") is intel_psr_flush() noticing that
work_busy() returns true (which it does indeed do when the work is
scheduled for the future).  But this means that intel_psr_flush()
wants to keep PSR off for a little bit (10s with your patch applied)
because the screen isn't idle.

> [   21.983060] [drm:intel_psr_work [i915]] *ERROR* I915-DEBUG: Work running

And here's intel_psr_work() turning PSR back on 3ms later.

So I think you're seeing exactly the bug I described.

Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-08 Thread Rodrigo Vivi
Rodrigo Vivi  writes:

> "Pandiyan, Dhinakaran"  writes:
>
>> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
>>> Hi Andy,
>>>
>>> thanks for getting involved with PSR and sorry for not replying sooner.
>>>
>>> I first saw this patch on that bugzilla entry, but only now did I stop to
>>> really think about why I wrote the code that way.
>>>
>>> So some clarity below.
>>>
>>> On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
>>> > The current PSR code has two call sites that each schedule delayed
>>> > work to activate PSR.  As far as I can tell, each call site intends
>>> > to keep PSR inactive for the given amount of time and then allow it
>>> > to be activated.
>>> >
>>> > The call sites are:
>>> >
>>> >  - intel_psr_enable(), which explicitly states in a comment that
>>> >it's trying to keep PSR off a short time after the display is
>>> >initialized as a workaround.
>>>
>>> First of all I really want to kill this call here and remove the
>>> FIXME. It was an ugly hack that I added to solve a corner case
>>> that was leaving me with blank screens when activating too soon.
>>>
>>> >
>>> >  - intel_psr_flush().  There isn't an explicit explanation, but the
>>> >intent is presumably to keep PSR off until the display has been
>>> >idle for 100ms.
>>>
>>> The reason for 100 is a kind of ugly, empirical, nonsense value
>>> I concluded from VLV/CHV experience.
>>> On platforms with HW tracking, the HW waits a few identical frames
>>> before really activating PSR. VLV/CHV activation is immediate.
>>> But the HW is also different there, and it seemed that it needed
>>> a bit more time before starting the transitions.
>>> Furthermore, I didn't want to activate that quickly because I didn't
>>> want to take the risk of killing the battery when doing transitions
>>> so quickly with software tracking.
>>>
>>> >
>>> > The current code doesn't actually accomplish either of these goals.
>>> > Rather than keeping PSR inactive for the given amount of time, it
>>> > will schedule PSR for activation after the given time, with the
>>> > earliest target time in such a request winning.
>>>
>>> Put that way, I was asking myself how that hack had ever fixed
>>> my issue, because the way you explained it here makes it seem obvious
>>> that it would never fix my bug or any other.
>>>
>>> So I applied your patch and it made even more sense (without considering
>>> the fact that I want to kill the first call anyway).
>>>
>>> So I came back, removed your patch and tried to understand how it
>>> had ever worked.
>>>
>>> So, the thing is that intel_psr_flush will never be really executed
>>> if intel_psr_enable wasn't executed. That is guaranteed by:
>>>
>>> mutex_lock(&dev_priv->psr.lock);
>>> if (!dev_priv->psr.enabled) {
>>>
>>> So, intel_psr_enable will be for sure the first one to schedule the
>>> work delayed to the ugly higher delay.
>>>
>>> >
>>> > In other words, if intel_psr_enable() is immediately followed by
>>> > intel_psr_flush(), then PSR will be activated after 100ms even if
>>> > intel_psr_enable() wanted a longer delay.  And, if the screen is
>>> > being constantly updated so that intel_psr_flush() is called once
>>> > per frame at 60Hz, PSR will still be activated once every 100ms.
>>>
>>> During this time you are right, many calls to intel_psr_exit
>>> coming from the flush functions can happen... But none of
>>> them will schedule the work with the 100ms delay.
>>>
>>> they will skip on
>>> if (!work_busy(&dev_priv->psr.work.work))
>>
>> Wouldn't work_busy() return false until the work is actually queued
>> which is 100ms after calling schedule_delayed_work()?
>
> That's not my understanding of work_busy man.
>
> "work_busy - test whether a work is currently pending or running"
>
> I consider it as pending one...
>
> But yeap... it was a long time ago that I did this so I'm not sure...

I wouldn't be able to sleep today if I hadn't experimented with this.

So, I moved the hacked schedule to 10s and forced psr_activate 10ms
after that, and the result is this:


[   11.757909] [drm:intel_psr_enable [i915]] *ERROR* I915-DEBUG:
Scheduling 10s
[   11.778586] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
...
a bunch more of Work busy
...
[   21.980643] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Work busy!
[   21.983060] [drm:intel_psr_work [i915]] *ERROR* I915-DEBUG: Work running
[   21.986353] [drm:intel_psr_flush [i915]] *ERROR* I915-DEBUG: Scheduling 
normal 100ms

So, the schedules are working as expected, I'm not crazy and I will be
able to sleep :)

(I will also attach my patch and dmesg to the bugzilla entry and close
the bug.)

But again, thank you very much for jumping in here on PSR and actively
working to improve the code. I really appreciate that.

Oh well... maybe I won't be able to sleep... I could swear this hack
here was for SKL, but when doing the patch I noticed that the hack is
for gen < 9... I will have to check some git 

Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-08 Thread Rodrigo Vivi
"Pandiyan, Dhinakaran"  writes:

> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
>> Hi Andy,
>> 
>> thanks for getting involved with PSR and sorry for not replying sooner.
>> 
>> I first saw this patch on that bugzilla entry, but only now did I stop to
>> really think about why I wrote the code that way.
>> 
>> So some clarity below.
>> 
>> On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
>> > The current PSR code has two call sites that each schedule delayed
>> > work to activate PSR.  As far as I can tell, each call site intends
>> > to keep PSR inactive for the given amount of time and then allow it
>> > to be activated.
>> >
>> > The call sites are:
>> >
>> >  - intel_psr_enable(), which explicitly states in a comment that
>> >it's trying to keep PSR off a short time after the display is
>> >initialized as a workaround.
>> 
>> First of all I really want to kill this call here and remove the
>> FIXME. It was an ugly hack that I added to solve a corner case
>> that was leaving me with blank screens when activating too soon.
>> 
>> >
>> >  - intel_psr_flush().  There isn't an explicit explanation, but the
>> >intent is presumably to keep PSR off until the display has been
>> >idle for 100ms.
>> 
>> The reason for 100 is a kind of ugly, empirical, nonsense value
>> I concluded from VLV/CHV experience.
>> On platforms with HW tracking, the HW waits a few identical frames
>> before really activating PSR. VLV/CHV activation is immediate.
>> But the HW is also different there, and it seemed that it needed
>> a bit more time before starting the transitions.
>> Furthermore, I didn't want to activate that quickly because I didn't
>> want to take the risk of killing the battery when doing transitions
>> so quickly with software tracking.
>> 
>> >
>> > The current code doesn't actually accomplish either of these goals.
>> > Rather than keeping PSR inactive for the given amount of time, it
>> > will schedule PSR for activation after the given time, with the
>> > earliest target time in such a request winning.
>> 
>> Put that way, I was asking myself how that hack had ever fixed
>> my issue, because the way you explained it here makes it seem obvious
>> that it would never fix my bug or any other.
>>
>> So I applied your patch and it made even more sense (without considering
>> the fact that I want to kill the first call anyway).
>>
>> So I came back, removed your patch and tried to understand how it
>> had ever worked.
>> 
>> So, the thing is that intel_psr_flush will never be really executed
>> if intel_psr_enable wasn't executed. That is guaranteed by:
>> 
>> mutex_lock(&dev_priv->psr.lock);
>>  if (!dev_priv->psr.enabled) {
>> 
>> So, intel_psr_enable will be for sure the first one to schedule the
>> work delayed to the ugly higher delay.
>> 
>> >
>> > In other words, if intel_psr_enable() is immediately followed by
>> > intel_psr_flush(), then PSR will be activated after 100ms even if
>> > intel_psr_enable() wanted a longer delay.  And, if the screen is
>> > being constantly updated so that intel_psr_flush() is called once
>> > per frame at 60Hz, PSR will still be activated once every 100ms.
>> 
>> During this time you are right, many calls to intel_psr_exit
>> coming from the flush functions can happen... But none of
>> them will schedule the work with the 100ms delay.
>> 
>> they will skip on
>> if (!work_busy(&dev_priv->psr.work.work))
>
> Wouldn't work_busy() return false until the work is actually queued
> which is 100ms after calling schedule_delayed_work()?

That's not my understanding of work_busy man.

"work_busy - test whether a work is currently pending or running"

I consider it as pending one...

But yeap... it was a long time ago that I did this so I'm not sure...

>
> For example, flushes at 0, 16, 32...96 will have work_busy() returning false
> until 100ms.
>
> The first psr_work will end up getting scheduled at 100ms, which I
> believe is not what we want. 
>
>
> However, I think 
>
>   if (dev_priv->psr.busy_frontbuffer_bits)
>   goto unlock;
>
>   intel_psr_activate(intel_dp);
>
> in psr_work might prevent activate being called at 100ms if an
> invalidate happened to be called before that.
>
>
>
>
>> 
>> So, the higher delayed *hack* will be respected and PSR won't get
>> activated before that.
>> 
>> On the other hand you might ask what if,
>> for some strange reason,
>> (intel_dp->panel_power_cycle_delay * 5) is less than 100.
>> Well, in this case this would be OK, because it happens only
>> once and only on gen > 9 where hw tracking will wait the minimal
>> number of frames before the actual transition to PSR.
>> 
>> In either case I believe we are safe.
>> 
>> Thanks,
>> Rodrigo.

Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-08 Thread Pandiyan, Dhinakaran



On Thu, 2018-02-08 at 17:01 -0800, Andy Lutomirski wrote:
> 
> 
> > On Feb 8, 2018, at 4:39 PM, Pandiyan, Dhinakaran 
> >  wrote:
> > 
> > 
> >> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
> >> Hi Andy,
> >> 
> >> thanks for getting involved with PSR and sorry for not replying sooner.
> >> 
> >> I first saw this patch on that bugzilla entry, but only now did I stop to
> >> really think about why I wrote the code that way.
> >> 
> >> So some clarity below.
> >> 
> >>> On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
> >>> The current PSR code has two call sites that each schedule delayed
> >>> work to activate PSR.  As far as I can tell, each call site intends
> >>> to keep PSR inactive for the given amount of time and then allow it
> >>> to be activated.
> >>> 
> >>> The call sites are:
> >>> 
> >>> - intel_psr_enable(), which explicitly states in a comment that
> >>>   it's trying to keep PSR off a short time after the display is
> >>>   initialized as a workaround.
> >> 
> >> First of all I really want to kill this call here and remove the
> >> FIXME. It was an ugly hack that I added to solve a corner case
> >> that was leaving me with blank screens when activating too soon.
> >> 
> >>> 
> >>> - intel_psr_flush().  There isn't an explicit explanation, but the
> >>>   intent is presumably to keep PSR off until the display has been
> >>>   idle for 100ms.
> >> 
> >> The reason for 100 is a kind of ugly, empirical, nonsense value
> >> I concluded from VLV/CHV experience.
> >> On platforms with HW tracking, the HW waits a few identical frames
> >> before really activating PSR. VLV/CHV activation is immediate.
> >> But the HW is also different there, and it seemed that it needed
> >> a bit more time before starting the transitions.
> >> Furthermore, I didn't want to activate that quickly because I didn't
> >> want to take the risk of killing the battery when doing transitions
> >> so quickly with software tracking.
> >> 
> >>> 
> >>> The current code doesn't actually accomplish either of these goals.
> >>> Rather than keeping PSR inactive for the given amount of time, it
> >>> will schedule PSR for activation after the given time, with the
> >>> earliest target time in such a request winning.
> >> 
> >> Put that way, I was asking myself how that hack had ever fixed
> >> my issue, because the way you explained it here makes it seem obvious
> >> that it would never fix my bug or any other.
> >>
> >> So I applied your patch and it made even more sense (without considering
> >> the fact that I want to kill the first call anyway).
> >>
> >> So I came back, removed your patch and tried to understand how it
> >> had ever worked.
> >> 
> >> So, the thing is that intel_psr_flush will never be really executed
> >> if intel_psr_enable wasn't executed. That is guaranteed by:
> >> 
> >> mutex_lock(&dev_priv->psr.lock);
> >>if (!dev_priv->psr.enabled) {
> >> 
> >> So, intel_psr_enable will be for sure the first one to schedule the
> >> work delayed to the ugly higher delay.
> >> 
> >>> 
> >>> In other words, if intel_psr_enable() is immediately followed by
> >>> intel_psr_flush(), then PSR will be activated after 100ms even if
> >>> intel_psr_enable() wanted a longer delay.  And, if the screen is
> >>> being constantly updated so that intel_psr_flush() is called once
> >>> per frame at 60Hz, PSR will still be activated once every 100ms.
> >> 
> >> During this time you are right, many calls to intel_psr_exit
> >> coming from the flush functions can happen... But none of
> >> them will schedule the work with the 100ms delay.
> >> 
> >> they will skip on
> >> if (!work_busy(&dev_priv->psr.work.work))
> 
> As below, the first call will.  Then, 100ms later, the work will fire.  Then 
> the next flush will schedule it again, etc.
> 
> > 
> > Wouldn't work_busy() return false until the work is actually queued
> > which is 100ms after calling schedule_delayed_work()?
> > 
> > For example, flushes at 0, 16, 32...96 will have work_busy() returning false
> > until 100ms.
> > 
> > The first psr_work will end up getting scheduled at 100ms, which I
> > believe is not what we want. 
> 
> Indeed.  I stuck some printks in and this seems to be what happens.
> 
> > 
> > 
> > However, I think 
> > 
> >if (dev_priv->psr.busy_frontbuffer_bits)
> >goto unlock;
> > 
> >intel_psr_activate(intel_dp);
> > 
> > in psr_work might prevent activate being called at 100ms if an
> > invalidate happened to be called before that.
> > 
> 
> On my system, invalidate is never called.
I noticed the same in console mode with fbdev; there are no invalidates.

>   Even if it were called, that check would only help if we got lucky and the 
> work fired after invalidate and before the subsequent flush.
> 
Agreed. This dependency on invalidate sure looks wrong. 

> > 
> > 
> > 
> >> 
> >> So, the higher delayed *hack* will be respected and PSR won't get
> >> activated before that.
> >> 
> >> On the other hand you 

Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-08 Thread Andy Lutomirski



> On Feb 8, 2018, at 4:39 PM, Pandiyan, Dhinakaran 
>  wrote:
> 
> 
>> On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
>> Hi Andy,
>> 
>> thanks for getting involved with PSR and sorry for not replying sooner.
>> 
>> I first saw this patch on that bugzilla entry, but only now did I stop to
>> really think about why I wrote the code that way.
>> 
>> So some clarity below.
>> 
>>> On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
>>> The current PSR code has two call sites that each schedule delayed
>>> work to activate PSR.  As far as I can tell, each call site intends
>>> to keep PSR inactive for the given amount of time and then allow it
>>> to be activated.
>>> 
>>> The call sites are:
>>> 
>>> - intel_psr_enable(), which explicitly states in a comment that
>>>   it's trying to keep PSR off a short time after the display is
>>>   initialized as a workaround.
>> 
>> First of all I really want to kill this call here and remove the
>> FIXME. It was an ugly hack that I added to solve a corner case
>> that was leaving me with blank screens when activating too soon.
>> 
>>> 
>>> - intel_psr_flush().  There isn't an explicit explanation, but the
>>>   intent is presumably to keep PSR off until the display has been
>>>   idle for 100ms.
>> 
>> The reason for 100 is a kind of ugly, empirical, nonsense value
>> I concluded from VLV/CHV experience.
>> On platforms with HW tracking, the HW waits a few identical frames
>> before really activating PSR. VLV/CHV activation is immediate.
>> But the HW is also different there, and it seemed that it needed
>> a bit more time before starting the transitions.
>> Furthermore, I didn't want to activate that quickly because I didn't
>> want to take the risk of killing the battery when doing transitions
>> so quickly with software tracking.
>> 
>>> 
>>> The current code doesn't actually accomplish either of these goals.
>>> Rather than keeping PSR inactive for the given amount of time, it
>>> will schedule PSR for activation after the given time, with the
>>> earliest target time in such a request winning.
>> 
>> Put that way, I was asking myself how that hack had ever fixed
>> my issue, because the way you explained it here makes it seem obvious
>> that it would never fix my bug or any other.
>>
>> So I applied your patch and it made even more sense (without considering
>> the fact that I want to kill the first call anyway).
>>
>> So I came back, removed your patch and tried to understand how it
>> had ever worked.
>> 
>> So, the thing is that intel_psr_flush will never be really executed
>> if intel_psr_enable wasn't executed. That is guaranteed by:
>> 
>> mutex_lock(&dev_priv->psr.lock);
>>if (!dev_priv->psr.enabled) {
>> 
>> So, intel_psr_enable will be for sure the first one to schedule the
>> work delayed to the ugly higher delay.
>> 
>>> 
>>> In other words, if intel_psr_enable() is immediately followed by
>>> intel_psr_flush(), then PSR will be activated after 100ms even if
>>> intel_psr_enable() wanted a longer delay.  And, if the screen is
>>> being constantly updated so that intel_psr_flush() is called once
>>> per frame at 60Hz, PSR will still be activated once every 100ms.
>> 
>> During this time you are right, many calls to intel_psr_exit
>> coming from the flush functions can happen... But none of
>> them will schedule the work with the 100ms delay.
>> 
>> they will skip on
>> if (!work_busy(&dev_priv->psr.work.work))

As below, the first call will.  Then, 100ms later, the work will fire.  Then 
the next flush will schedule it again, etc.

> 
> Wouldn't work_busy() return false until the work is actually queued
> which is 100ms after calling schedule_delayed_work()?
> 
> For example, flushes at 0, 16, 32...96 will have work_busy() returning false
> until 100ms.
> 
> The first psr_work will end up getting scheduled at 100ms, which I
> believe is not what we want. 

Indeed.  I stuck some printks in and this seems to be what happens.

> 
> 
> However, I think 
> 
>if (dev_priv->psr.busy_frontbuffer_bits)
>goto unlock;
> 
>intel_psr_activate(intel_dp);
> 
> in psr_work might prevent activate being called at 100ms if an
> invalidate happened to be called before that.
> 

On my system, invalidate is never called.  Even if it were called, that check 
would only help if we got lucky and the work fired after invalidate and before 
the subsequent flush.

> 
> 
> 
>> 
>> So, the higher delayed *hack* will be respected and PSR won't get
>> activated before that.
>> 
>> On the other hand you might ask what if,
>> for some strange reason,
>> (intel_dp->panel_power_cycle_delay * 5) is less than 100.
>> Well, in this case this would be OK, because it happens only
>> once and only on gen > 9 where hw tracking will wait the minimal
>> number of frames before the actual transition to PSR.
>> 
>> In either case I believe we are safe.
>> 
>> Thanks,
>> Rodrigo.

Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-08 Thread Pandiyan, Dhinakaran

On Thu, 2018-02-08 at 14:48 -0800, Rodrigo Vivi wrote:
> Hi Andy,
> 
> thanks for getting involved with PSR and sorry for not replying sooner.
> 
> I first saw this patch on that bugzilla entry, but only now did I stop to
> really think about why I wrote the code that way.
> 
> So some clarity below.
> 
> On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
> > The current PSR code has two call sites that each schedule delayed
> > work to activate PSR.  As far as I can tell, each call site intends
> > to keep PSR inactive for the given amount of time and then allow it
> > to be activated.
> >
> > The call sites are:
> >
> >  - intel_psr_enable(), which explicitly states in a comment that
> >it's trying to keep PSR off a short time after the display is
> >initialized as a workaround.
> 
> First of all I really want to kill this call here and remove the
> FIXME. It was an ugly hack that I added to solve a corner case
> that was leaving me with blank screens when activating too soon.
> 
> >
> >  - intel_psr_flush().  There isn't an explicit explanation, but the
> >intent is presumably to keep PSR off until the display has been
> >idle for 100ms.
> 
> The reason for 100 is a kind of ugly, empirical, nonsense value
> I concluded from VLV/CHV experience.
> On platforms with HW tracking, the HW waits a few identical frames
> before really activating PSR. VLV/CHV activation is immediate.
> But the HW is also different there, and it seemed that it needed
> a bit more time before starting the transitions.
> Furthermore, I didn't want to activate that quickly because I didn't
> want to take the risk of killing the battery when doing transitions
> so quickly with software tracking.
> 
> >
> > The current code doesn't actually accomplish either of these goals.
> > Rather than keeping PSR inactive for the given amount of time, it
> > will schedule PSR for activation after the given time, with the
> > earliest target time in such a request winning.
> 
> Put that way, I was asking myself how that hack had ever fixed
> my issue, because the way you explained it here makes it seem obvious
> that it would never fix my bug or any other.
>
> So I applied your patch and it made even more sense (without considering
> the fact that I want to kill the first call anyway).
>
> So I came back, removed your patch and tried to understand how it
> had ever worked.
> 
> So, the thing is that intel_psr_flush will never be really executed
> if intel_psr_enable wasn't executed. That is guaranteed by:
> 
> mutex_lock(&dev_priv->psr.lock);
>   if (!dev_priv->psr.enabled) {
> 
> So, intel_psr_enable will be for sure the first one to schedule the
> work delayed to the ugly higher delay.
> 
> >
> > In other words, if intel_psr_enable() is immediately followed by
> > intel_psr_flush(), then PSR will be activated after 100ms even if
> > intel_psr_enable() wanted a longer delay.  And, if the screen is
> > being constantly updated so that intel_psr_flush() is called once
> > per frame at 60Hz, PSR will still be activated once every 100ms.
> 
> During this time you are right, many calls to intel_psr_exit
> coming from the flush functions can happen... But none of
> them will schedule the work with the 100ms delay.
> 
> they will skip on
> if (!work_busy(&dev_priv->psr.work.work))

Wouldn't work_busy() return false until the work is actually queued
which is 100ms after calling schedule_delayed_work()?

For example, flushes at 0, 16, 32...96 will have work_busy() returning false
until 100ms.

The first psr_work will end up getting scheduled at 100ms, which I
believe is not what we want. 


However, I think 

if (dev_priv->psr.busy_frontbuffer_bits)
goto unlock;

intel_psr_activate(intel_dp);

in psr_work might prevent activate being called at 100ms if an
invalidate happened to be called before that.
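(For context, the relevant part of psr_work looks roughly like this; a
paraphrase from memory that omits the hardware-idle wait, not an exact copy:

static void intel_psr_work(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, typeof(*dev_priv), psr.work.work);
	struct intel_dp *intel_dp;

	mutex_lock(&dev_priv->psr.lock);
	intel_dp = dev_priv->psr.enabled;
	if (!intel_dp)
		goto unlock;

	/* The delayed work can race with an invalidate, hence the recheck. */
	if (dev_priv->psr.busy_frontbuffer_bits)
		goto unlock;

	intel_psr_activate(intel_dp);
unlock:
	mutex_unlock(&dev_priv->psr.lock);
}

so the activate at 100ms is only skipped if busy_frontbuffer_bits happens to
still be set at that moment.)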




> 
> So, the higher delayed *hack* will be respected and PSR won't get
> activated before that.
> 
> On the other hand you might ask what if,
> for some strange reason,
> (intel_dp->panel_power_cycle_delay * 5) is less than 100.
> Well, in this case this would be OK, because it happens only
> once and only on gen > 9 where hw tracking will wait the minimal
> number of frames before the actual transition to PSR.
> 
> In either case I believe we are safe.
> 
> Thanks,
> Rodrigo.


Re: [Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-08 Thread Rodrigo Vivi

Hi Andy,

thanks for getting involved with PSR and sorry for not replying sooner.

I first saw this patch on that bugzilla entry, but only now did I stop to
really think about why I wrote the code that way.

So some clarity below.

On Mon, Feb 05, 2018 at 10:07:09PM +, Andy Lutomirski wrote:
> The current PSR code has two call sites that each schedule delayed
> work to activate PSR.  As far as I can tell, each call site intends
> to keep PSR inactive for the given amount of time and then allow it
> to be activated.
>
> The call sites are:
>
>  - intel_psr_enable(), which explicitly states in a comment that
>it's trying to keep PSR off a short time after the display is
>initialized as a workaround.

First of all I really want to kill this call here and remove the
FIXME. It was an ugly hack that I added to solve a corner case
that was leaving me with blank screens when activating too soon.

>
>  - intel_psr_flush().  There isn't an explicit explanation, but the
>intent is presumably to keep PSR off until the display has been
>idle for 100ms.

The reason for 100 is a kind of ugly, empirical, nonsense value
I concluded from VLV/CHV experience.
On platforms with HW tracking, the HW waits a few identical frames
before really activating PSR. VLV/CHV activation is immediate.
But the HW is also different there, and it seemed that it needed
a bit more time before starting the transitions.
Furthermore, I didn't want to activate that quickly because I didn't
want to take the risk of killing the battery when doing transitions
so quickly with software tracking.

>
> The current code doesn't actually accomplish either of these goals.
> Rather than keeping PSR inactive for the given amount of time, it
> will schedule PSR for activation after the given time, with the
> earliest target time in such a request winning.

Put that way, I was asking myself how that hack had ever fixed
my issue, because the way you explained it here makes it seem obvious
that it would never fix my bug or any other.

So I applied your patch and it made even more sense (without considering
the fact that I want to kill the first call anyway).

So I came back, removed your patch and tried to understand how it
had ever worked.

So, the thing is that intel_psr_flush will never be really executed
if intel_psr_enable wasn't executed. That is guaranteed by:

mutex_lock(&dev_priv->psr.lock);
if (!dev_priv->psr.enabled) {

So, intel_psr_enable will be for sure the first one to schedule the
work delayed to the ugly higher delay.

>
> In other words, if intel_psr_enable() is immediately followed by
> intel_psr_flush(), then PSR will be activated after 100ms even if
> intel_psr_enable() wanted a longer delay.  And, if the screen is
> being constantly updated so that intel_psr_flush() is called once
> per frame at 60Hz, PSR will still be activated once every 100ms.

During this time you are right, many calls to intel_psr_exit
coming from the flush functions can happen... But none of
them will schedule the work with the 100ms delay.

they will skip on
if (!work_busy(&dev_priv->psr.work.work))

So, the higher delayed *hack* will be respected and PSR won't get
activated before that.

On the other hand you might ask what if,
for some strange reason,
(intel_dp->panel_power_cycle_delay * 5) is less than 100.
Well, in this case this would be OK, because it happens only
once and only on gen > 9 where hw tracking will wait the minimal
number of frames before the actual transition to PSR.

In either case I believe we are safe.

Thanks,
Rodrigo.

>
> Rewrite the code so that it does what was intended.  This adds
> a new function intel_psr_schedule(), which will enable PSR after
> the requested time but no sooner.
>
> Signed-off-by: Andy Lutomirski 
> ---
>  drivers/gpu/drm/i915/i915_debugfs.c |  9 +++--
>  drivers/gpu/drm/i915/i915_drv.h |  4 ++-
>  drivers/gpu/drm/i915/intel_psr.c| 69 
> -
>  3 files changed, 71 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
> b/drivers/gpu/drm/i915/i915_debugfs.c
> index c65e381b85f3..b67db93f905d 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -2663,8 +2663,13 @@ static int i915_edp_psr_status(struct seq_file *m, 
> void *data)
>   seq_printf(m, "Active: %s\n", yesno(dev_priv->psr.active));
>   seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
>  dev_priv->psr.busy_frontbuffer_bits);
> - seq_printf(m, "Re-enable work scheduled: %s\n",
> -yesno(work_busy(&dev_priv->psr.work.work)));
> +
> + if (timer_pending(&dev_priv->psr.activate_timer))
> + seq_printf(m, "Activate scheduled: yes, in %ldms\n",
> +(long)(dev_priv->psr.earliest_activate - jiffies) *
> +1000 / HZ);
> + else
> + seq_printf(m, "Re-enable scheduled: no\n");
>
>   if (HAS_DDI(dev_priv)) {
>   

[Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-07 Thread Chris Wilson
From: Andy Lutomirski 

The current PSR code has two call sites that each schedule delayed
work to activate PSR.  As far as I can tell, each call site intends
to keep PSR inactive for the given amount of time and then allow it
to be activated.

The call sites are:

 - intel_psr_enable(), which explicitly states in a comment that
   it's trying to keep PSR off a short time after the display is
   initialized as a workaround.

 - intel_psr_flush().  There isn't an explicit explanation, but the
   intent is presumably to keep PSR off until the display has been
   idle for 100ms.

The current code doesn't actually accomplish either of these goals.
Rather than keeping PSR inactive for the given amount of time, it
will schedule PSR for activation after the given time, with the
earliest target time in such a request winning.

In other words, if intel_psr_enable() is immediately followed by
intel_psr_flush(), then PSR will be activated after 100ms even if
intel_psr_enable() wanted a longer delay.  And, if the screen is
being constantly updated so that intel_psr_flush() is called once
per frame at 60Hz, PSR will still be activated once every 100ms.

Rewrite the code so that it does what was intended.  This adds
a new function intel_psr_schedule(), which will enable PSR after
the requested time but no sooner.

Signed-off-by: Andy Lutomirski 
---

Rebase into drm-tip.
-Chris

---
 drivers/gpu/drm/i915/i915_debugfs.c |  8 +++--
 drivers/gpu/drm/i915/i915_drv.h |  3 +-
 drivers/gpu/drm/i915/intel_psr.c| 66 -
 3 files changed, 66 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
b/drivers/gpu/drm/i915/i915_debugfs.c
index 2bdce9fea671..e0de912a5ec9 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2528,8 +2528,12 @@ static int i915_edp_psr_status(struct seq_file *m, void 
*data)
seq_printf(m, "Active: %s\n", yesno(dev_priv->psr.active));
seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
   dev_priv->psr.busy_frontbuffer_bits);
-   seq_printf(m, "Re-enable work scheduled: %s\n",
-  yesno(work_busy(&dev_priv->psr.work.work)));
+
+   if (timer_pending(&dev_priv->psr.activate_timer))
+   seq_printf(m, "Activate scheduled: yes, in %dms\n",
+  jiffies_to_msecs(dev_priv->psr.activate_timer.expires - jiffies));
+   else
+   seq_printf(m, "Activate scheduled: no\n");
 
if (HAS_DDI(dev_priv)) {
if (dev_priv->psr.psr2_support)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7db3557b945c..26b9fb0eda40 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -762,7 +762,8 @@ struct i915_psr {
bool sink_support;
struct intel_dp *enabled;
bool active;
-   struct delayed_work work;
+   struct timer_list activate_timer;
+   struct work_struct activate_work;
unsigned busy_frontbuffer_bits;
bool psr2_support;
bool aux_frame_sync;
diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
index e9feffdea899..087c0726aa8a 100644
--- a/drivers/gpu/drm/i915/intel_psr.c
+++ b/drivers/gpu/drm/i915/intel_psr.c
@@ -450,6 +450,28 @@ static void intel_psr_activate(struct intel_dp *intel_dp)
dev_priv->psr.active = true;
 }
 
+static void intel_psr_schedule(struct drm_i915_private *i915,
+  unsigned long min_wait_ms)
+{
+   unsigned long next;
+
+   lockdep_assert_held(&i915->psr.lock);
+
+   /*
+* We update the next enable time and call mod_timer() because it's
+* possible that intel_psr_work() has already been called and is
+* waiting for psr.lock. If that's the case, we don't want it
+* to immediately enable PSR.
+*
+* We also need to make sure that PSR is never activated earlier
+* than requested to avoid breaking intel_psr_enable()'s workaround
+* for pre-gen9 hardware.
+*/
+   next = jiffies + msecs_to_jiffies(min_wait_ms);
+   if (time_after(next, i915->psr.activate_timer.expires))
+   mod_timer(&i915->psr.activate_timer, next);
+}
+
 static void hsw_psr_enable_source(struct intel_dp *intel_dp,
  const struct intel_crtc_state *crtc_state)
 {
@@ -534,8 +556,8 @@ void intel_psr_enable(struct intel_dp *intel_dp,
 * - On HSW/BDW we get a recoverable frozen screen until
 *   next exit-activate sequence.
 */
-   schedule_delayed_work(&dev_priv->psr.work,
- msecs_to_jiffies(intel_dp->panel_power_cycle_delay * 5));
+   intel_psr_schedule(dev_priv,
+  intel_dp->panel_power_cycle_delay * 5);
}
 
 unlock:
@@ -653,13 +675,14 @@ void 

[Intel-gfx] [PATCH] drm/i915: Improve PSR activation timing

2018-02-05 Thread Andy Lutomirski
The current PSR code has two call sites that each schedule delayed
work to activate PSR.  As far as I can tell, each call site intends
to keep PSR inactive for the given amount of time and then allow it
to be activated.

The call sites are:

 - intel_psr_enable(), which explicitly states in a comment that
   it's trying to keep PSR off a short time after the display is
   initialized as a workaround.

 - intel_psr_flush().  There isn't an explicit explanation, but the
   intent is presumably to keep PSR off until the display has been
   idle for 100ms.

The current code doesn't actually accomplish either of these goals.
Rather than keeping PSR inactive for the given amount of time, it
will schedule PSR for activation after the given time, with the
earliest target time in such a request winning.

In other words, if intel_psr_enable() is immediately followed by
intel_psr_flush(), then PSR will be activated after 100ms even if
intel_psr_enable() wanted a longer delay.  And, if the screen is
being constantly updated so that intel_psr_flush() is called once
per frame at 60Hz, PSR will still be activated once every 100ms.

Rewrite the code so that it does what was intended.  This adds
a new function intel_psr_schedule(), which will enable PSR after
the requested time but no sooner.

Signed-off-by: Andy Lutomirski 
---
 drivers/gpu/drm/i915/i915_debugfs.c |  9 +++--
 drivers/gpu/drm/i915/i915_drv.h |  4 ++-
 drivers/gpu/drm/i915/intel_psr.c| 69 -
 3 files changed, 71 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c 
b/drivers/gpu/drm/i915/i915_debugfs.c
index c65e381b85f3..b67db93f905d 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2663,8 +2663,13 @@ static int i915_edp_psr_status(struct seq_file *m, void 
*data)
seq_printf(m, "Active: %s\n", yesno(dev_priv->psr.active));
seq_printf(m, "Busy frontbuffer bits: 0x%03x\n",
   dev_priv->psr.busy_frontbuffer_bits);
-   seq_printf(m, "Re-enable work scheduled: %s\n",
-  yesno(work_busy(&dev_priv->psr.work.work)));
+
+   if (timer_pending(&dev_priv->psr.activate_timer))
+   seq_printf(m, "Activate scheduled: yes, in %ldms\n",
+  (long)(dev_priv->psr.earliest_activate - jiffies) *
+  1000 / HZ);
+   else
+   seq_printf(m, "Re-enable scheduled: no\n");
 
if (HAS_DDI(dev_priv)) {
if (dev_priv->psr.psr2_support)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 46eb729b367d..c0fb7d65cda6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1192,7 +1192,9 @@ struct i915_psr {
bool source_ok;
struct intel_dp *enabled;
bool active;
-   struct delayed_work work;
+   struct timer_list activate_timer;
+   struct work_struct activate_work;
+   unsigned long earliest_activate;
unsigned busy_frontbuffer_bits;
bool psr2_support;
bool aux_frame_sync;
diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
index 55ea5eb3b7df..333d90d4e5af 100644
--- a/drivers/gpu/drm/i915/intel_psr.c
+++ b/drivers/gpu/drm/i915/intel_psr.c
@@ -461,6 +461,30 @@ static void intel_psr_activate(struct intel_dp *intel_dp)
dev_priv->psr.active = true;
 }
 
+static void intel_psr_schedule(struct drm_i915_private *dev_priv,
+  unsigned long min_wait_ms)
+{
+   unsigned long next;
+
+   lockdep_assert_held(&dev_priv->psr.lock);
+
+   /*
+* We update next_enable *and* call mod_timer() because it's
+* possible that intel_psr_work() has already been called and is
+* waiting for psr.lock.  If that's the case, we don't want it
+* to immediately enable PSR.
+*
+* We also need to make sure that PSR is never activated earlier
+* than requested to avoid breaking intel_psr_enable()'s workaround
+* for pre-gen9 hardware.
+*/
+   next = jiffies + msecs_to_jiffies(min_wait_ms);
+   if (time_after(next, dev_priv->psr.earliest_activate)) {
+   dev_priv->psr.earliest_activate = next;
+   mod_timer(&dev_priv->psr.activate_timer, next);
+   }
+}
+
 static void hsw_psr_enable_source(struct intel_dp *intel_dp,
  const struct intel_crtc_state *crtc_state)
 {
@@ -544,8 +568,7 @@ void intel_psr_enable(struct intel_dp *intel_dp,
 * - On HSW/BDW we get a recoverable frozen screen until
 *   next exit-activate sequence.
 */
-   schedule_delayed_work(&dev_priv->psr.work,
- msecs_to_jiffies(intel_dp->panel_power_cycle_delay * 5));
+   intel_psr_schedule(dev_priv, intel_dp->panel_power_cycle_delay * 5);
}
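The remainder of the diff is truncated here. As a rough illustration of the
shape the missing pieces could take (the names intel_psr_activate_timer_fn()
and intel_psr_activate_work_fn() below are placeholders, not taken from the
patch; this is a sketch under the assumption that the timer only kicks a work
item which re-checks state under psr.lock):

static void intel_psr_activate_timer_fn(struct timer_list *t)
{
	/* Assumes timer_setup(&dev_priv->psr.activate_timer, ...) was used. */
	struct drm_i915_private *dev_priv =
		from_timer(dev_priv, t, psr.activate_timer);

	/* Timer context cannot take the psr.lock mutex; defer to process context. */
	schedule_work(&dev_priv->psr.activate_work);
}

static void intel_psr_activate_work_fn(struct work_struct *work)
{
	struct drm_i915_private *dev_priv =
		container_of(work, typeof(*dev_priv), psr.activate_work);

	mutex_lock(&dev_priv->psr.lock);

	/* A later flush may have pushed earliest_activate out again. */
	if (dev_priv->psr.enabled &&
	    !dev_priv->psr.active &&
	    !dev_priv->psr.busy_frontbuffer_bits &&
	    !time_before(jiffies, dev_priv->psr.earliest_activate))
		intel_psr_activate(dev_priv->psr.enabled);

	mutex_unlock(&dev_priv->psr.lock);
}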