Re: [Intel-gfx] [RFC PATCH] drm/i915: Debugfs statistics interface for psr

2022-05-20 Thread Hogander, Jouni
Hello Jose,

Thank you for your comments, see my response below.
 
On Thu, 2022-05-19 at 13:47 +, Souza, Jose wrote:
> On Thu, 2022-05-19 at 12:17 +, Hogander, Jouni wrote:
> > On Wed, 2022-05-18 at 20:00 +, Souza, Jose wrote:
> > > On Wed, 2022-05-18 at 10:45 +0300, Jouni Högander wrote:
> > > > Currently there is no means to understand how PSR is utilized,
> > > > e.g. how much PSR is actually enabled or how often the driver
> > > > falls back to full update.
> > > 
> > > But with this information what can you do?
> > 
> > If you are doing PnP analysis for use cases with a PSR2-capable
> > panel, having such information on how PSR is utilized is valuable
> > for judging whether there is room for optimization.
> > 
> > or
> > 
> > you are doing power measurement follow-up and facing a regression
> > in a certain use case. You can check possible changes in PSR
> > utilization and judge whether it is involved or not. E.g. the recent
> > changes to fall back to full update if area calculation fails, or
> > the one using continuous full update instead of PSR disable, would
> > have been visible in these statistics.
> 
> The goal is to get rid of all full frame update cases; knowing that
> it has 10% full updates does not add anything.

I was thinking it could help us determine where to invest our time,
and possibly also estimate the impact of changes we are planning to
make in the future.

> About PnP this information is not helpful: most of the power saved
> with PSR comes when the display is idle and goes to DC5/DC6 states,
> which can be measured in i915_dmc_info and via pkg10 residency from
> the SoC.

Yes, this is true. But PSR is not the only factor affecting DC5/DC6
usage, right? I.e. when you see a change in DC5/DC6 residencies, we
currently have nothing available to determine whether it's due to PSR
or something else. Of course this can be debugged, but having proper
information on PSR utilization could give us a lead.

As an example, I have taken a snapshot of the stats over the
psr_stress_test written by you. The comparison is against the recent
patches to the PSR code (cff + sff on calc failure):

Without recent patches:

PSR disabled vblank count : 48
Total vblank count : 371
Selective update count : 152
Full update count : 148

With those WAs:

PSR disabled vblank count : 7
Total vblank count : 326
Selective update count : 152
Full update count : 63

The drop in "PSR disabled vblank count" is quite obvious given the
changes in those patches. The "Full update count" was a bit of a
surprise, at least to me.
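For PnP analysis, raw counters like these are usually turned into ratios. As a quick illustration (plain Python, my own sketch, not part of the patch; the function and field names are mine, using the "without recent patches" sample above):

```python
# Derive utilization ratios from the i915_edp_psr_stats counters.
def psr_ratios(disabled_vblanks, total_vblanks, su_count, full_count):
    updates = su_count + full_count
    return {
        # Fraction of vblank periods spent with PSR disabled.
        "psr_disabled": disabled_vblanks / total_vblanks,
        # Fraction of PSR2 updates that fell back to a full update.
        "full_update_share": full_count / updates if updates else 0.0,
    }

r = psr_ratios(disabled_vblanks=48, total_vblanks=371,
               su_count=152, full_count=148)
print(f"PSR disabled {r['psr_disabled']:.1%}, "
      f"full updates {r['full_update_share']:.1%}")
```

Comparing these two ratios before and after a change is the kind of statement the statistics are meant to support.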

> 
> Another thing is that this only covers Tigerlake and Alderlake-P; I
> guess it would only print the number of vblanks for older platforms.

The interface is meant to tell how many vblank periods there were with
PSR disabled on devices with PSR1/without selective fetch.

> > In addition to these, I have been thinking of extending this to
> > gather statistics or a history of the used update areas as well.
> > Considering the recent bugs we have fixed in the PSR code: as a
> > first step you need to start instrumenting the PSR code and ask the
> > reporter to take a trace with your instrumentation. Instead we
> > could keep some statistics, or even the last SU areas when a CRC
> > error is triggered, and make these available in logs or via some
> > debugfs interface.
> > 
> > I'm pretty sure we will revisit the PSR code in the future and
> > introduce new bugs and new workarounds along with new features.
> > I'm trying to figure out something to ease analyzing their impact
> > and maybe even root-causing bugs.
> 
> If you have something that has proved to be useful in a few bugs I
> would consider this.
> But merging a bunch of code that returns information that will not
> be useful is not good and would bring more maintenance burden in the
> future. If you check the PSR debugfs, we already have entries there
> that have not been very useful.

This is a valid point/requirement. I can keep this patch on my own and
maybe add that SU area trace to it.
> 
> > > I don't see much value in it.
> > > We now have some cases that lead to full update; those need to
> > > be properly fixed so it can always do partial updates.
> > 
> > There are still these cases, but the frontbuffer invalidate/flush
> > callbacks are there as well. E.g. currently there is no way to
> > tell how often we are doing a full update. Before "continuous full
> > update" we didn't know how much PSR was actually disabled due to
> > invalidate/flush.
> > 
> > > I did not check the implementation details.
> > > 
> > > > This patch addresses this by adding a new debugfs interface,
> > > > i915_edp_psr_stats. Statistics gathering is started by writing
> > > > 1/y/Y and stopped by writing 0/n/N into this new interface.
> > > > 
> > > > The following fields are provided for reading by this new
> > > > interface:
> > > > 
> > > > "PSR disabled vblank count"
> > > > 
> > > > Over how many vblank periods PSR was disabled after statistics
> > > > gathering was started, i.e. how many normal updates were sent
> > > > to the panel.

Re: [Intel-gfx] [RFC PATCH] drm/i915: Debugfs statistics interface for psr

2022-05-19 Thread Souza, Jose
On Thu, 2022-05-19 at 12:17 +, Hogander, Jouni wrote:
> On Wed, 2022-05-18 at 20:00 +, Souza, Jose wrote:
> > On Wed, 2022-05-18 at 10:45 +0300, Jouni Högander wrote:
> > > Currently there is no means to understand how PSR is utilized,
> > > e.g. how much PSR is actually enabled or how often the driver
> > > falls back to full update.
> > 
> > But with this information what can you do?
> 
> If you are doing PnP analysis for use cases with a PSR2-capable panel,
> having such information on how PSR is utilized is valuable for judging
> whether there is room for optimization.
> 
> or
> 
> you are doing power measurement follow-up and facing a regression in a
> certain use case. You can check possible changes in PSR utilization
> and judge whether it is involved or not. E.g. the recent changes to
> fall back to full update if area calculation fails, or the one using
> continuous full update instead of PSR disable, would have been
> visible in these statistics.

The goal is to get rid of all full frame update cases; knowing that it has 10%
of full updates does not add anything.

About PnP this information is not helpful: most of the power saved with PSR
comes when the display is idle and goes to DC5/DC6 states, which can be
measured in i915_dmc_info and via pkg10 residency from the SoC.

Another thing is that this only covers Tigerlake and Alderlake-P; I guess it
would only print the number of vblanks for older platforms.

> 
> In addition to these, I have been thinking of extending this to gather
> statistics or a history of the used update areas as well. Considering
> the recent bugs we have fixed in the PSR code: as a first step you need
> to start instrumenting the PSR code and ask the reporter to take a
> trace with your instrumentation. Instead we could keep some statistics,
> or even the last SU areas when a CRC error is triggered, and make these
> available in logs or via some debugfs interface.
> 
> I'm pretty sure we will revisit the PSR code in the future and
> introduce new bugs and new workarounds along with new features. I'm
> trying to figure out something to ease analyzing their impact and
> maybe even root-causing bugs.

If you have something that has proved to be useful in a few bugs I would
consider this.
But merging a bunch of code that returns information that will not be useful
is not good and would bring more maintenance burden in the future.
If you check the PSR debugfs, we already have entries there that have not
been very useful.

> 
> > 
> > I don't see much value in it.
> > We now have some cases that lead to full update; those need to be
> > properly fixed so it can always do partial updates.
> 
> There are still these cases, but the frontbuffer invalidate/flush
> callbacks are there as well. E.g. currently there is no way to tell
> how often we are doing a full update. Before "continuous full update"
> we didn't know how much PSR was actually disabled due to invalidate/flush.
> 
> > I did not check the implementation details.
> > 
> > This patch addresses this by adding a new debugfs interface,
> > i915_edp_psr_stats. Statistics gathering is started by writing
> > 1/y/Y and stopped by writing 0/n/N into this new interface.
> > > 
> > The following fields are provided for reading by this new interface:
> > > 
> > > "PSR disabled vblank count"
> > > 
> > Over how many vblank periods PSR was disabled after statistics
> > gathering was started, i.e. how many normal updates were sent to
> > the panel.
> > > 
> > > "Total vblank count"
> > > 
> > > Total vblank count after statistics gathering got started.
> > > 
> > > "Selective update count"
> > > 
> > > How many selective updates (PSR2) were done after statistics
> > > gathering
> > > got started.
> > > 
> > > "Full update count"
> > > 
> > > How many times the driver decided to fall back to a full update
> > > when trying to perform a selective update.
> > > 
> > > Cc: José Roberto de Souza 
> > > Cc: Mika Kahola 
> > > Cc: Uma Shankar 
> > > Cc: Nischal Varide 
> > > Signed-off-by: Jouni Högander 
> > > ---
> > >  .../drm/i915/display/intel_display_debugfs.c  | 100 
> > >  .../drm/i915/display/intel_display_types.h|  16 ++
> > >  drivers/gpu/drm/i915/display/intel_psr.c  | 144
> > > ++
> > >  drivers/gpu/drm/i915/display/intel_psr.h  |   2 +
> > >  4 files changed, 236 insertions(+), 26 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > > b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > > index 452d773fd4e3..c29f151062e4 100644
> > > --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > > +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > > @@ -9,6 +9,7 @@
> > >  #include 
> > > 
> > >  #include "i915_debugfs.h"
> > > +#include "intel_crtc.h"
> > >  #include "intel_de.h"
> > >  #include "intel_display_debugfs.h"
> > >  #include "intel_display_power.h"
> > > @@ -1868,6 +1869,95 @@ i915_fifo_underrun_reset_write(struct file
> > > *filp,
> > >  return cnt;

Re: [Intel-gfx] [RFC PATCH] drm/i915: Debugfs statistics interface for psr

2022-05-19 Thread Hogander, Jouni
On Wed, 2022-05-18 at 20:00 +, Souza, Jose wrote:
> On Wed, 2022-05-18 at 10:45 +0300, Jouni Högander wrote:
> > Currently there is no means to understand how PSR is utilized,
> > e.g. how much PSR is actually enabled or how often the driver
> > falls back to full update.
> 
> But with this information what can you do?

If you are doing PnP analysis for use cases with a PSR2-capable panel,
having such information on how PSR is utilized is valuable for judging
whether there is room for optimization.

or

you are doing power measurement follow-up and facing a regression in a
certain use case. You can check possible changes in PSR utilization
and judge whether it is involved or not. E.g. the recent changes to
fall back to full update if area calculation fails, or the one using
continuous full update instead of PSR disable, would have been
visible in these statistics.

In addition to these, I have been thinking of extending this to gather
statistics or a history of the used update areas as well. Considering the
recent bugs we have fixed in the PSR code: as a first step you need to
start instrumenting the PSR code and ask the reporter to take a trace with
your instrumentation. Instead we could keep some statistics, or even the
last SU areas when a CRC error is triggered, and make these available in
logs or via some debugfs interface.

I'm pretty sure we will revisit the PSR code in the future and introduce
new bugs and new workarounds along with new features. I'm trying to
figure out something to ease analyzing their impact and maybe even
root-causing bugs.

> 
> I don't see much value in it.
> We now have some cases that lead to full update; those need to be
> properly fixed so it can always do partial updates.

There are still these cases, but the frontbuffer invalidate/flush
callbacks are there as well. E.g. currently there is no way to tell how
often we are doing a full update. Before "continuous full update" we
didn't know how much PSR was actually disabled due to invalidate/flush.

> I did not check the implementation details.
> 
> > This patch addresses this by adding a new debugfs interface,
> > i915_edp_psr_stats. Statistics gathering is started by writing
> > 1/y/Y and stopped by writing 0/n/N into this new interface.
> > 
> > The following fields are provided for reading by this new interface:
> > 
> > "PSR disabled vblank count"
> > 
> > Over how many vblank periods PSR was disabled after statistics
> > gathering was started, i.e. how many normal updates were sent to
> > the panel.
> > 
> > "Total vblank count"
> > 
> > Total vblank count after statistics gathering got started.
> > 
> > "Selective update count"
> > 
> > How many selective updates (PSR2) were done after statistics
> > gathering
> > got started.
> > 
> > "Full update count"
> > 
> > How many times the driver decided to fall back to a full update
> > when trying to perform a selective update.
> > 
> > Cc: José Roberto de Souza 
> > Cc: Mika Kahola 
> > Cc: Uma Shankar 
> > Cc: Nischal Varide 
> > Signed-off-by: Jouni Högander 
> > ---
> >  .../drm/i915/display/intel_display_debugfs.c  | 100 
> >  .../drm/i915/display/intel_display_types.h|  16 ++
> >  drivers/gpu/drm/i915/display/intel_psr.c  | 144
> > ++
> >  drivers/gpu/drm/i915/display/intel_psr.h  |   2 +
> >  4 files changed, 236 insertions(+), 26 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > index 452d773fd4e3..c29f151062e4 100644
> > --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> > @@ -9,6 +9,7 @@
> >  #include 
> > 
> >  #include "i915_debugfs.h"
> > +#include "intel_crtc.h"
> >  #include "intel_de.h"
> >  #include "intel_display_debugfs.h"
> >  #include "intel_display_power.h"
> > @@ -1868,6 +1869,95 @@ i915_fifo_underrun_reset_write(struct file
> > *filp,
> >  return cnt;
> >  }
> > 
> > +static int intel_psr_stats(struct seq_file *m, struct intel_dp *intel_dp)
> > +{
> > +struct drm_i915_private *dev_priv = m->private;
> > +struct intel_psr *psr = &intel_dp->psr;
> > +struct drm_crtc *crtc = &intel_crtc_for_pipe(dev_priv, psr->pipe)->base;
> > +u64 total_vblank_count = psr->stats.total_vblank_count,
> > +non_psr_vblank_count = psr->stats.non_psr_vblank_count;
> > +ktime_t vblanktime;
> > +
> > +if (!psr->active)
> > +non_psr_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
> > +psr->stats.psr_disable_vblank;
> > +
> > +seq_printf(m, "PSR disabled vblank count : %llu\n", non_psr_vblank_count);
> > +
> > +if (psr->stats.enable)
> > +total_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
> > +psr->stats.start_vblank;
> > +
> > +seq_printf(m, "Total vblank count : %llu\n", total_vblank_count);
> > +seq_printf(m, "Selective update count : %llu\n", psr->stats.selective_update_count);
> > +seq_printf(m, "Full update count : %llu\n", psr->stats.full_update_count);

Re: [Intel-gfx] [RFC PATCH] drm/i915: Debugfs statistics interface for psr

2022-05-18 Thread Souza, Jose
On Wed, 2022-05-18 at 10:45 +0300, Jouni Högander wrote:
> Currently there is no means to understand how PSR is utilized,
> e.g. how much PSR is actually enabled or how often the driver falls back
> to full update.

But with this information what can you do?

I don't see much value in it.
We now have some cases that lead to full update; those need to be properly
fixed so it can always do partial updates.

I did not check the implementation details.

> 
> This patch addresses this by adding a new debugfs interface,
> i915_edp_psr_stats. Statistics gathering is started by writing 1/y/Y and
> stopped by writing 0/n/N into this new interface.
> 
> The following fields are provided for reading by this new interface:
> 
> "PSR disabled vblank count"
> 
> Over how many vblank periods PSR was disabled after statistics
> gathering was started, i.e. how many normal updates were sent to the panel.
> 
> "Total vblank count"
> 
> Total vblank count after statistics gathering got started.
> 
> "Selective update count"
> 
> How many selective updates (PSR2) were done after statistics gathering
> got started.
> 
> "Full update count"
> 
> How many times the driver decided to fall back to a full update when
> trying to perform a selective update.
> 
> Cc: José Roberto de Souza 
> Cc: Mika Kahola 
> Cc: Uma Shankar 
> Cc: Nischal Varide 
> Signed-off-by: Jouni Högander 
> ---
>  .../drm/i915/display/intel_display_debugfs.c  | 100 
>  .../drm/i915/display/intel_display_types.h|  16 ++
>  drivers/gpu/drm/i915/display/intel_psr.c  | 144 ++
>  drivers/gpu/drm/i915/display/intel_psr.h  |   2 +
>  4 files changed, 236 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
> b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> index 452d773fd4e3..c29f151062e4 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> @@ -9,6 +9,7 @@
>  #include 
>  
>  #include "i915_debugfs.h"
> +#include "intel_crtc.h"
>  #include "intel_de.h"
>  #include "intel_display_debugfs.h"
>  #include "intel_display_power.h"
> @@ -1868,6 +1869,95 @@ i915_fifo_underrun_reset_write(struct file *filp,
>   return cnt;
>  }
>  
> +static int intel_psr_stats(struct seq_file *m, struct intel_dp *intel_dp)
> +{
> + struct drm_i915_private *dev_priv = m->private;
> + struct intel_psr *psr = &intel_dp->psr;
> + struct drm_crtc *crtc = &intel_crtc_for_pipe(dev_priv, psr->pipe)->base;
> + u64 total_vblank_count = psr->stats.total_vblank_count,
> + non_psr_vblank_count = psr->stats.non_psr_vblank_count;
> + ktime_t vblanktime;
> +
> + if (!psr->active)
> + non_psr_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
> + psr->stats.psr_disable_vblank;
> +
> + seq_printf(m, "PSR disabled vblank count : %llu\n", non_psr_vblank_count);
> +
> + if (psr->stats.enable)
> + total_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
> + psr->stats.start_vblank;
> +
> + seq_printf(m, "Total vblank count : %llu\n", total_vblank_count);
> + seq_printf(m, "Selective update count : %llu\n", psr->stats.selective_update_count);
> + seq_printf(m, "Full update count : %llu\n", psr->stats.full_update_count);
> +
> + return 0;
> +}
> +
> +static int i915_edp_psr_stats_show(struct seq_file *m, void *data)
> +{
> + struct drm_i915_private *dev_priv = m->private;
> + struct intel_dp *intel_dp = NULL;
> + struct intel_encoder *encoder;
> +
> + if (!HAS_PSR(dev_priv))
> + return -ENODEV;
> +
> + /* Find the first EDP which supports PSR */
> + for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
> + intel_dp = enc_to_intel_dp(encoder);
> + break;
> + }
> +
> + if (!intel_dp)
> + return -ENODEV;
> +
> + return intel_psr_stats(m, intel_dp);
> +}
> +
> +static ssize_t
> +i915_edp_psr_stats_write(struct file *filp, const char __user *ubuf,
> +  size_t cnt, loff_t *ppos)
> +{
> + struct seq_file *m = filp->private_data;
> + struct drm_i915_private *dev_priv = m->private;
> + struct intel_dp *intel_dp = NULL;
> + struct intel_encoder *encoder;
> + int ret;
> + bool enable_stats;
> +
> + ret = kstrtobool_from_user(ubuf, cnt, &enable_stats);
> + if (ret)
> + return ret;
> +
> + if (!HAS_PSR(dev_priv))
> + return -ENODEV;
> +
> + /* Find the first EDP which supports PSR */
> + for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
> + intel_dp = enc_to_intel_dp(encoder);
> + break;
> + }
> +
> + if (!intel_dp)
> + return -ENODEV;
> +
> + if (enable_stats)
> + intel_psr_stats_enable_stats(intel_dp);
> + else
> + intel_psr_stats_disable_stats(intel_dp);
> +
> + return cnt;

Re: [Intel-gfx] [RFC PATCH] drm/i915: Debugfs statistics interface for psr

2022-05-18 Thread Jani Nikula
On Wed, 18 May 2022, Jouni Högander  wrote:
> Currently there is no means to understand how PSR is utilized,
> e.g. how much PSR is actually enabled or how often the driver falls back
> to full update.
>
> This patch addresses this by adding a new debugfs interface,
> i915_edp_psr_stats. Statistics gathering is started by writing 1/y/Y and
> stopped by writing 0/n/N into this new interface.

IMO time to move all PSR debugfs handling to intel_psr.c before
this. See e.g. commit d2de8ccfb299 ("drm/i915/fbc: Move FBC debugfs
stuff into intel_fbc.c") for FBC.

With that, I think you could also split out intel_psr_regs.h and
encapsulate psr register access to intel_psr.c.

BR,
Jani.

>
> The following fields are provided for reading by this new interface:
>
> "PSR disabled vblank count"
>
> Over how many vblank periods PSR was disabled after statistics
> gathering was started, i.e. how many normal updates were sent to the panel.
>
> "Total vblank count"
>
> Total vblank count after statistics gathering got started.
>
> "Selective update count"
>
> How many selective updates (PSR2) were done after statistics gathering
> got started.
>
> "Full update count"
>
> How many times the driver decided to fall back to a full update when
> trying to perform a selective update.
>
> Cc: José Roberto de Souza 
> Cc: Mika Kahola 
> Cc: Uma Shankar 
> Cc: Nischal Varide 
> Signed-off-by: Jouni Högander 
> ---
>  .../drm/i915/display/intel_display_debugfs.c  | 100 
>  .../drm/i915/display/intel_display_types.h|  16 ++
>  drivers/gpu/drm/i915/display/intel_psr.c  | 144 ++
>  drivers/gpu/drm/i915/display/intel_psr.h  |   2 +
>  4 files changed, 236 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
> b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> index 452d773fd4e3..c29f151062e4 100644
> --- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> +++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
> @@ -9,6 +9,7 @@
>  #include 
>  
>  #include "i915_debugfs.h"
> +#include "intel_crtc.h"
>  #include "intel_de.h"
>  #include "intel_display_debugfs.h"
>  #include "intel_display_power.h"
> @@ -1868,6 +1869,95 @@ i915_fifo_underrun_reset_write(struct file *filp,
>   return cnt;
>  }
>  
> +static int intel_psr_stats(struct seq_file *m, struct intel_dp *intel_dp)
> +{
> + struct drm_i915_private *dev_priv = m->private;
> + struct intel_psr *psr = &intel_dp->psr;
> + struct drm_crtc *crtc = &intel_crtc_for_pipe(dev_priv, psr->pipe)->base;
> + u64 total_vblank_count = psr->stats.total_vblank_count,
> + non_psr_vblank_count = psr->stats.non_psr_vblank_count;
> + ktime_t vblanktime;
> +
> + if (!psr->active)
> + non_psr_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
> + psr->stats.psr_disable_vblank;
> +
> + seq_printf(m, "PSR disabled vblank count : %llu\n", non_psr_vblank_count);
> +
> + if (psr->stats.enable)
> + total_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
> + psr->stats.start_vblank;
> +
> + seq_printf(m, "Total vblank count : %llu\n", total_vblank_count);
> + seq_printf(m, "Selective update count : %llu\n", psr->stats.selective_update_count);
> + seq_printf(m, "Full update count : %llu\n", psr->stats.full_update_count);
> +
> + return 0;
> +}
> +
> +static int i915_edp_psr_stats_show(struct seq_file *m, void *data)
> +{
> + struct drm_i915_private *dev_priv = m->private;
> + struct intel_dp *intel_dp = NULL;
> + struct intel_encoder *encoder;
> +
> + if (!HAS_PSR(dev_priv))
> + return -ENODEV;
> +
> + /* Find the first EDP which supports PSR */
> + for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
> + intel_dp = enc_to_intel_dp(encoder);
> + break;
> + }
> +
> + if (!intel_dp)
> + return -ENODEV;
> +
> + return intel_psr_stats(m, intel_dp);
> +}
> +
> +static ssize_t
> +i915_edp_psr_stats_write(struct file *filp, const char __user *ubuf,
> +  size_t cnt, loff_t *ppos)
> +{
> + struct seq_file *m = filp->private_data;
> + struct drm_i915_private *dev_priv = m->private;
> + struct intel_dp *intel_dp = NULL;
> + struct intel_encoder *encoder;
> + int ret;
> + bool enable_stats;
> +
> + ret = kstrtobool_from_user(ubuf, cnt, &enable_stats);
> + if (ret)
> + return ret;
> +
> + if (!HAS_PSR(dev_priv))
> + return -ENODEV;
> +
> + /* Find the first EDP which supports PSR */
> + for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
> + intel_dp = enc_to_intel_dp(encoder);
> + break;
> + }
> +
> + if (!intel_dp)
> + return -ENODEV;
> +
> + if (enable_stats)
> + intel_psr_stats_enable_stats(intel_dp);
> + else
> + intel_psr_stats_disable_stats(intel_dp);

[Intel-gfx] [RFC PATCH] drm/i915: Debugfs statistics interface for psr

2022-05-18 Thread Jouni Högander
Currently there is no means to understand how PSR is utilized,
e.g. how much PSR is actually enabled or how often the driver falls back
to full update.

This patch addresses this by adding a new debugfs interface,
i915_edp_psr_stats. Statistics gathering is started by writing 1/y/Y and
stopped by writing 0/n/N into this new interface.

The following fields are provided for reading by this new interface:

"PSR disabled vblank count"

Over how many vblank periods PSR was disabled after statistics
gathering was started, i.e. how many normal updates were sent to the panel.

"Total vblank count"

Total vblank count after statistics gathering got started.

"Selective update count"

How many selective updates (PSR2) were done after statistics gathering
got started.

"Full update count"

How many times the driver decided to fall back to a full update when trying
to perform a selective update.
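As a usage sketch (mine, not part of the patch): the interface is driven through debugfs and its "name : value" output can be parsed with a few lines of Python. The debugfs path in the comments is an assumption based on where other i915 debugfs files live; the field names follow the description above.

```python
# Parse the "name : value" lines produced by i915_edp_psr_stats.
def parse_psr_stats(text):
    stats = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        if sep:  # skip lines without a "name : value" separator
            stats[name.strip()] = int(value)
    return stats

# Typical session (requires root and a PSR-capable eDP panel; path assumed):
#   echo 1 > /sys/kernel/debug/dri/0/i915_edp_psr_stats   # start gathering
#   ... run the workload ...
#   cat /sys/kernel/debug/dri/0/i915_edp_psr_stats        # read counters
sample = """\
PSR disabled vblank count : 7
Total vblank count : 326
Selective update count : 152
Full update count : 63
"""
print(parse_psr_stats(sample))
```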

Cc: José Roberto de Souza 
Cc: Mika Kahola 
Cc: Uma Shankar 
Cc: Nischal Varide 
Signed-off-by: Jouni Högander 
---
 .../drm/i915/display/intel_display_debugfs.c  | 100 
 .../drm/i915/display/intel_display_types.h|  16 ++
 drivers/gpu/drm/i915/display/intel_psr.c  | 144 ++
 drivers/gpu/drm/i915/display/intel_psr.h  |   2 +
 4 files changed, 236 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/i915/display/intel_display_debugfs.c 
b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
index 452d773fd4e3..c29f151062e4 100644
--- a/drivers/gpu/drm/i915/display/intel_display_debugfs.c
+++ b/drivers/gpu/drm/i915/display/intel_display_debugfs.c
@@ -9,6 +9,7 @@
 #include 
 
 #include "i915_debugfs.h"
+#include "intel_crtc.h"
 #include "intel_de.h"
 #include "intel_display_debugfs.h"
 #include "intel_display_power.h"
@@ -1868,6 +1869,95 @@ i915_fifo_underrun_reset_write(struct file *filp,
return cnt;
 }
 
+static int intel_psr_stats(struct seq_file *m, struct intel_dp *intel_dp)
+{
+   struct drm_i915_private *dev_priv = m->private;
+   struct intel_psr *psr = &intel_dp->psr;
+   struct drm_crtc *crtc = &intel_crtc_for_pipe(dev_priv, psr->pipe)->base;
+   u64 total_vblank_count = psr->stats.total_vblank_count,
+   non_psr_vblank_count = psr->stats.non_psr_vblank_count;
+   ktime_t vblanktime;
+
+   if (!psr->active)
+   non_psr_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
+   psr->stats.psr_disable_vblank;
+
+   seq_printf(m, "PSR disabled vblank count : %llu\n", non_psr_vblank_count);
+
+   if (psr->stats.enable)
+   total_vblank_count += drm_crtc_vblank_count_and_time(crtc, &vblanktime) -
+   psr->stats.start_vblank;
+
+   seq_printf(m, "Total vblank count : %llu\n", total_vblank_count);
+   seq_printf(m, "Selective update count : %llu\n", psr->stats.selective_update_count);
+   seq_printf(m, "Full update count : %llu\n", psr->stats.full_update_count);
+
+   return 0;
+}
+
+static int i915_edp_psr_stats_show(struct seq_file *m, void *data)
+{
+   struct drm_i915_private *dev_priv = m->private;
+   struct intel_dp *intel_dp = NULL;
+   struct intel_encoder *encoder;
+
+   if (!HAS_PSR(dev_priv))
+   return -ENODEV;
+
+   /* Find the first EDP which supports PSR */
+   for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
+   intel_dp = enc_to_intel_dp(encoder);
+   break;
+   }
+
+   if (!intel_dp)
+   return -ENODEV;
+
+   return intel_psr_stats(m, intel_dp);
+}
+
+static ssize_t
+i915_edp_psr_stats_write(struct file *filp, const char __user *ubuf,
+size_t cnt, loff_t *ppos)
+{
+   struct seq_file *m = filp->private_data;
+   struct drm_i915_private *dev_priv = m->private;
+   struct intel_dp *intel_dp = NULL;
+   struct intel_encoder *encoder;
+   int ret;
+   bool enable_stats;
+
+   ret = kstrtobool_from_user(ubuf, cnt, &enable_stats);
+   if (ret)
+   return ret;
+
+   if (!HAS_PSR(dev_priv))
+   return -ENODEV;
+
+   /* Find the first EDP which supports PSR */
+   for_each_intel_encoder_with_psr(&dev_priv->drm, encoder) {
+   intel_dp = enc_to_intel_dp(encoder);
+   break;
+   }
+
+   if (!intel_dp)
+   return -ENODEV;
+
+   if (enable_stats)
+   intel_psr_stats_enable_stats(intel_dp);
+   else
+   intel_psr_stats_disable_stats(intel_dp);
+
+   return cnt;
+}
+
+static int i915_edp_psr_stats_open(struct inode *inode, struct file *file)
+{
+   struct drm_i915_private *dev_priv = inode->i_private;
+
+   return single_open(file, i915_edp_psr_stats_show, dev_priv);
+}
+
 static const struct file_operations i915_fifo_underrun_reset_ops = {
.owner = THIS_MODULE,
.open = simple_open,
@@ -1875,6 +1965,15 @@ static const struct file_operations 
i915_fifo_underrun_reset_ops = {
.llseek =