Re: [Intel-gfx] Possible regression in drm/i915 driver: memleak

2022-12-20 Thread srinivas pandruvada
+Added DRM mailing list and maintainers

On Tue, 2022-12-20 at 15:33 +0100, Mirsad Todorovac wrote:
> Hi all,
> 
> I have been unable to find any particular Intel i915 maintainer
> emails, so my best bet is to post here, as you most assuredly
> already know them.
> 
> The problem is a kernel memory leak that is repeatedly triggered
> during the execution of the Chrome browser under the latest 6.1.0+
> kernel of this morning, on AlmaLinux 8.6 on a Lenovo desktop box
> with an Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz.
> 
> The kernel was built with KMEMLEAK, KASAN and MGLRU turned on, from
> a vanilla mainline kernel from Mr. Torvalds' tree.
> 
> The leaks look like this one:
> 
> unreferenced object 0x888131754880 (size 64):
>    comm "chrome", pid 13058, jiffies 4298568878 (age 3708.084s)
>    hex dump (first 32 bytes):
>  01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
>  00 00 00 00 00 00 00 00 00 80 1e 3e 83 88 ff ff  ...........>....
>    backtrace:
>  [] slab_post_alloc_hook+0xb2/0x340
>  [] __kmem_cache_alloc_node+0x1bf/0x2c0
>  [] kmalloc_trace+0x2a/0xb0
>  [] drm_vma_node_allow+0x45/0x150 [drm]
>  [] __assign_mmap_offset_handle+0x615/0x820 [i915]
>  [] i915_gem_mmap_offset_ioctl+0x77/0x110 [i915]
>  [] drm_ioctl_kernel+0x181/0x280 [drm]
>  [] drm_ioctl+0x2dd/0x6a0 [drm]
>  [] __x64_sys_ioctl+0xc4/0x100
>  [] do_syscall_64+0x58/0x80
>  [] entry_SYSCALL_64_after_hwframe+0x72/0xdc
> 
> The complete list of leaks is in the attachment, but they all seem
> similar or the same.
> 
> Please find attached lshw and kernel build config file.
> 
> I will probably check the same parms on my laptop at home, which is
> also a Lenovo, but with a different hw config and Ubuntu 22.10.
> 
> Thanks,
> Mirsad
> 
> -- 
> Mirsad Goran Todorovac
> Sistem inženjer
> Grafički fakultet | Akademija likovnih umjetnosti
> Sveučilište u Zagrebu
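
For context, the backtrace above ends in drm_vma_node_allow(), which
kmallocs a small per-file entry (the 64-byte object in the report).
kmemleak flags that entry when no matching drm_vma_node_revoke() call
releases it before the object becomes unreachable. A minimal sketch of
the expected pairing, assuming the drm_vma_manager API of current
kernels (hypothetical driver code, not taken from i915):

/*
 * Hypothetical sketch, not from this thread: the allow/revoke pairing
 * a driver must keep balanced when exposing a mmap offset.
 */
#include <drm/drm_vma_manager.h>

static int example_expose_mmap_offset(struct drm_vma_offset_node *node,
				      struct drm_file *file)
{
	/* Allocates the per-file entry seen in the leak report above. */
	return drm_vma_node_allow(node, file);
}

static void example_close_mmap_offset(struct drm_vma_offset_node *node,
				      struct drm_file *file)
{
	/* Without this on every close/error path, the entry leaks. */
	drm_vma_node_revoke(node, file);
}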



Re: [Intel-gfx] [PATCH V2 5/13] hid: use time_is_after_jiffies() instead of jiffies judgment

2022-02-11 Thread srinivas pandruvada
On Thu, 2022-02-10 at 18:30 -0800, Qing Wang wrote:
> From: Wang Qing 
> 
> It is better to use the time_xxx() helpers directly instead of an
> open-coded jiffies comparison, for readability.
> 
> Signed-off-by: Wang Qing 
Acked-by: Srinivas Pandruvada 

> ---
>  drivers/hid/intel-ish-hid/ipc/ipc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
> index 8ccb246..15e1423
> --- a/drivers/hid/intel-ish-hid/ipc/ipc.c
> +++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
> @@ -578,7 +578,7 @@ static void _ish_sync_fw_clock(struct ishtp_device *dev)
> 	static unsigned long	prev_sync;
> 	uint64_t	usec;
>  
> -   if (prev_sync && jiffies - prev_sync < 20 * HZ)
> +   if (prev_sync && time_is_after_jiffies(prev_sync + 20 * HZ))
> return;
>  
> prev_sync = jiffies;
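
As a note on the equivalence, sketched from the helper definitions in
<linux/jiffies.h> (this reasoning is not part of the patch itself):

/*
 * time_is_after_jiffies(a)  is  time_after(a, jiffies)
 * time_after(a, b)          is  ((long)((b) - (a)) < 0)
 *
 * so, with a = prev_sync + 20 * HZ:
 *
 *   time_is_after_jiffies(prev_sync + 20 * HZ)
 *     == (long)(jiffies - prev_sync - 20 * HZ) < 0
 *     ~= jiffies - prev_sync < 20 * HZ
 *
 * i.e. "the 20-second deadline is still in the future", the same test
 * as the open-coded original, expressed through the standard helpers.
 */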



Re: [Intel-gfx] [PATCH 0/9] GPU-bound energy efficiency improvements for the intel_pstate driver.

2018-04-17 Thread Srinivas Pandruvada
On Tue, 2018-04-17 at 15:03 +0100, Chris Wilson wrote:
> I have to ask, if this is all just to work around iowait triggering
> high
> frequencies for GPU bound applications, does it all just boil down to
> i915 incorrectly using iowait. Does this patch set perform better
> than
> 
> diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
> index 9ca9c24b4421..7e7c95411bcd 100644
> --- a/drivers/gpu/drm/i915/i915_request.c
> +++ b/drivers/gpu/drm/i915/i915_request.c
> @@ -1267,7 +1267,7 @@ long i915_request_wait(struct i915_request *rq,
> goto complete;
> }
>  
> -   timeout = io_schedule_timeout(timeout);
> +   timeout = schedule_timeout(timeout);
> } while (1);
>  
> GEM_BUG_ON(!intel_wait_has_seqno());
> 
> Quite clearly the general framework could prove useful in a broader
> range of situations, but does the above suffice? (And can be
> backported
> to stable.)

Definitely a very good test to do.

Thanks,
Srinivas

> -Chris
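
For readers wondering what the suggested one-liner removes:
io_schedule_timeout() differs from plain schedule_timeout() only in
the iowait accounting around the sleep, which cpufreq governors read
as a frequency-boost hint. A rough paraphrase of its implementation
from kernel/sched/core.c in recent kernels (simplified, shown here for
context only):

/*
 * Rough paraphrase of io_schedule_timeout(): the only difference from
 * plain schedule_timeout() is the in_iowait accounting around the
 * sleep, which intel_pstate and schedutil treat as an IOWAIT boost
 * hint. Chris's diff above drops exactly that hint for GPU waits.
 */
long io_schedule_timeout(long timeout)
{
	int token;
	long ret;

	token = io_schedule_prepare();	/* marks current as in_iowait */
	ret = schedule_timeout(timeout);
	io_schedule_finish(token);	/* restores the previous state */

	return ret;
}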


Re: [Intel-gfx] [PATCH 0/9] GPU-bound energy efficiency improvements for the intel_pstate driver.

2018-04-16 Thread Srinivas Pandruvada
On Mon, 2018-04-16 at 17:04 +0300, Eero Tamminen wrote:
> Hi,
> 
> On 14.04.2018 07:01, Srinivas Pandruvada wrote:
> > Hi Francisco,
> > 
> > [...]
> > 
> > > Are you no longer interested in improving those aspects of the
> > > non-
> > > HWP
> > > governor?  Is it that you're planning to delete it and move back
> > > to a
> > > generic cpufreq governor for non-HWP platforms in the near
> > > future?
> > 
> > Yes, that is the plan for Atom platforms, which are the only non-
> > HWP platforms so far. You have to show a good gain in performance
> > and performance/watt to justify carrying and maintaining such a
> > big change. So we have to see your performance and power numbers.
> 
> For the active cases, you can look at the links at the beginning / 
> bottom of this mail thread.  Francisco provided performance results
> for 
>  >100 benchmarks.
It looks like you didn't test the idle cases, which are more important.
Systems tend to be idle most of the time (and idle power is increased
by +50% by the patches). Once you fix the idle regression, you have to
retest, and then the results will be interesting.

Once you fix this, it is purely an algorithm question; whether it is
done in intel-pstate or the sched-util governor is not a big
difference. It is better to do it in sched-util, as that will benefit
all architectures and will get better test coverage and maintenance.

Thanks,
Srinivas



> 
> 
> At this side of Atlantic, we've been testing different versions of
> the 
> patchset in past few months for >50 Linux 3D benchmarks on 6
> different 
> platforms.
> 
> On Geminilake and few BXT configurations (where 3D benchmarks are
> TDP 
> limited), many tests' performance improves by 5-15%, also complex
> ones. 
> And more importantly, there were no regressions.
> 
> (You can see details + links to more info in Jira ticket VIZ-12078.)
> 
> *On (fully) TDP limited cases, power usage (obviously) keeps the
> same, 
> so performance/watt improvements can be derived from the measured 
> performance improvements.*
> 
> 
> We have data also for earlier platforms from slightly older versions
> of 
> the patchset, but on those it didn't have any significant impact on 
> performance.
> 
> I think the main reason for this is that BYT & BSW NUCs that we
> have, 
> have space only for single memory module.  Without dual-memory
> channel 
> configuration, benchmarks are too memory-bottlenecked to utilize the
> GPU 
> enough to make things TDP limited on those platforms.
> 
> However, now that I look at the old BYT & BSW data (for few
> benchmarks 
> which improved most on BXT & GLK), I see that there's a reduction in
> the 
> CPU power utilization according to RAPL, at least on BSW.
> 
> 
>   - Eero
> 
> 
> > > > This will benefit all architectures including x86 + non i915.
> > > > 
> > > 
> > > The current design encourages re-use of the IO utilization
> > > statistic
> > > (see PATCH 1) by other governors as a mechanism driving the
> > > trade-off
> > > between energy efficiency and responsiveness based on whether the
> > > system
> > > is close to CPU-bound, in whatever way is applicable to each
> > > governor
> > > (e.g. it would make sense for it to be hooked up to the EPP
> > > preference
> > > knob in the case of the intel_pstate HWP governor, which would
> > > allow
> > > it
> > > to achieve better energy efficiency in IO-bound situations just
> > > like
> > > this series does for non-HWP parts).  There's nothing really x86-
> > > nor
> > > i915-specific about it.
> > > 
> > > > BTW intel-pstate can be driven by sched-util governor (passive
> > > > mode),
> > > > so if you prove benefits on Broxton, this can become the
> > > > default. As before:
> > > > - No regression to idle power at all. This is more important
> > > > than
> > > > benchmarks
> > > > - Not just score, performance/watt is important
> > > > 
> > > 
> > > Is schedutil actually on par with the intel_pstate non-HWP
> > > governor
> > > as
> > > of today, according to these metrics and the overall benchmark
> > > numbers?
> > 
> > Yes, except for a few cases. I have not tested recently, so it
> > may be better now.
> > 
> > Thanks,
> > Srinivas
> > 
> > 
> > > > Thanks,
> > > > Srinivas
> > > > 
> > > > 
> > > > > > controller does, even though the frequent IO waits may
> 

Re: [Intel-gfx] [PATCH 0/9] GPU-bound energy efficiency improvements for the intel_pstate driver.

2018-04-13 Thread Srinivas Pandruvada
Hi Francisco,

[...]

> Are you no longer interested in improving those aspects of the non-
> HWP
> governor?  Is it that you're planning to delete it and move back to a
> generic cpufreq governor for non-HWP platforms in the near future?

Yes, that is the plan for Atom platforms, which are the only non-HWP
platforms so far. You have to show a good gain in performance and
performance/watt to justify carrying and maintaining such a big
change. So we have to see your performance and power numbers.

> 
> > This will benefit all architectures including x86 + non i915.
> > 
> 
> The current design encourages re-use of the IO utilization statistic
> (see PATCH 1) by other governors as a mechanism driving the trade-off
> between energy efficiency and responsiveness based on whether the
> system
> is close to CPU-bound, in whatever way is applicable to each governor
> (e.g. it would make sense for it to be hooked up to the EPP
> preference
> knob in the case of the intel_pstate HWP governor, which would allow
> it
> to achieve better energy efficiency in IO-bound situations just like
> this series does for non-HWP parts).  There's nothing really x86- nor
> i915-specific about it.
> 
> > BTW intel-pstate can be driven by sched-util governor (passive
> > mode),
> > so if you prove benefits on Broxton, this can become the default.
> > As before:
> > - No regression to idle power at all. This is more important than
> > benchmarks
> > - Not just score, performance/watt is important
> > 
> 
> Is schedutil actually on par with the intel_pstate non-HWP governor
> as
> of today, according to these metrics and the overall benchmark
> numbers?
Yes, except for a few cases. I have not tested recently, so it may be
better now.

Thanks,
Srinivas


> > Thanks,
> > Srinivas
> > 
> > 
> > > > controller does, even though the frequent IO waits may actually
> > > > be
> > > > an
> > > > indication that the system is IO-bound (which means that the
> > > > large
> > > > energy usage increase may not be translated in any performance
> > > > benefit
> > > > in practice, not to speak of performance being impacted
> > > > negatively
> > > > in
> > > > TDP-bound scenarios like GPU rendering).
> > > > 
> > > > Regarding run-time complexity, I haven't observed this governor
> > > > to
> > > > be
> > > > measurably more computationally intensive than the present
> > > > one.  It's a
> > > > bunch more instructions indeed, but still within the same
> > > > ballpark
> > > > as
> > > > the current governor.  The average increase in CPU utilization
> > > > on
> > > > my BXT
> > > > with this series is less than 0.03% (sampled via ftrace for v1,
> > > > I
> > > > can
> > > > repeat the measurement for the v2 I have in the works, though I
> > > > don't
> > > > expect the result to be substantially different).  If this is a
> > > > problem
> > > > for you there are several optimization opportunities that would
> > > > cut
> > > > down
> > > > the number of CPU cycles get_target_pstate_lp() takes to
> > > > execute by
> > > > a
> > > > large percent (most of the optimization ideas I can think of
> > > > right
> > > > now
> > > > though would come at some
> > > > accuracy/maintainability/debuggability
> > > > cost,
> > > > but may still be worth pursuing), but the computational
> > > > overhead is
> > > > low
> > > > enough at this point that the impact on any benchmark or real
> > > > workload
> > > > would be orders of magnitude lower than its variance, which
> > > > makes
> > > > it
> > > > kind of difficult to keep the discussion data-driven [as
> > > > possibly
> > > > any
> > > > performance optimization discussion should ever be ;)].
> > > > 
> > > > > 
> > > > > Thanks,
> > > > > Srinivas
> > > > > 
> > > > > 
> > > > > 
> > > > > > 
> > > > > > > [Absolute benchmark results are unfortunately omitted
> > > > > > > from
> > > > > > > this
> > > > > > > letter
> > > > > > > due to company policies, but the percent change and
> > > > > > > Student's
> > > > > > > T
> > > > > > > p-value are included above and in the referenced
> > > > > > > benchmark
> > > > > > > results]
> > > > > > > 
> > > > > > > The most obvious impact of this series will likely be the
> > > > > > > overall
> > > > > > > improvement in graphics performance on systems with an
> > > > > > > IGP
> > > > > > > integrated
> > > > > > > into the processor package (though for the moment this is
> > > > > > > only
> > > > > > > enabled
> > > > > > > on BXT+), because the TDP budget shared among CPU and GPU
> > > > > > > can
> > > > > > > frequently become a limiting factor in low-power
> > > > > > > devices.  On
> > > > > > > heavily
> > > > > > > TDP-bound devices this series improves performance of
> > > > > > > virtually any
> > > > > > > non-trivial graphics rendering by a significant amount
> > > > > > > (of
> > > > > > > the
> > > > > > > order
> > > > > > > of the energy efficiency improvement for that workload
> > > > > > > assuming the
> > > > > > > optimization didn't cause it to become non-TDP-bound).
> 

Re: [Intel-gfx] [PATCH 0/9] GPU-bound energy efficiency improvements for the intel_pstate driver.

2018-04-12 Thread Srinivas Pandruvada
On Wed, 2018-04-11 at 09:26 -0700, Francisco Jerez wrote:
> 
> "just like" here is possibly somewhat unfair to the schedutil
> governor,
> admittedly its progressive IOWAIT boosting behavior seems somewhat
> less
> wasteful than the intel_pstate non-HWP governor's IOWAIT boosting
> behavior, but it's still largely unhelpful on IO-bound conditions.
> 

OK, if you think so, then improve it for sched-util governor or other
mechanisms (as Juri suggested) instead of intel-pstate. This will
benefit all architectures including x86 + non i915.

BTW intel-pstate can be driven by the sched-util governor (passive
mode), so if you prove benefits on Broxton, this can become the default.
As before:
- No regression to idle power at all. This is more important than
benchmarks
- Not just score, performance/watt is important

Thanks,
Srinivas


> > controller does, even though the frequent IO waits may actually be
> > an
> > indication that the system is IO-bound (which means that the large
> > energy usage increase may not be translated in any performance
> > benefit
> > in practice, not to speak of performance being impacted negatively
> > in
> > TDP-bound scenarios like GPU rendering).
> > 
> > Regarding run-time complexity, I haven't observed this governor to
> > be
> > measurably more computationally intensive than the present
> > one.  It's a
> > bunch more instructions indeed, but still within the same ballpark
> > as
> > the current governor.  The average increase in CPU utilization on
> > my BXT
> > with this series is less than 0.03% (sampled via ftrace for v1, I
> > can
> > repeat the measurement for the v2 I have in the works, though I
> > don't
> > expect the result to be substantially different).  If this is a
> > problem
> > for you there are several optimization opportunities that would cut
> > down
> > the number of CPU cycles get_target_pstate_lp() takes to execute by
> > a
> > large percent (most of the optimization ideas I can think of right
> > now
> > though would come at some accuracy/maintainability/debuggability
> > cost,
> > but may still be worth pursuing), but the computational overhead is
> > low
> > enough at this point that the impact on any benchmark or real
> > workload
> > would be orders of magnitude lower than its variance, which makes
> > it
> > kind of difficult to keep the discussion data-driven [as possibly
> > any
> > performance optimization discussion should ever be ;)].
> > 
> > > 
> > > Thanks,
> > > Srinivas
> > > 
> > > 
> > > 
> > > > 
> > > > > [Absolute benchmark results are unfortunately omitted from
> > > > > this
> > > > > letter
> > > > > due to company policies, but the percent change and Student's
> > > > > T
> > > > > p-value are included above and in the referenced benchmark
> > > > > results]
> > > > > 
> > > > > The most obvious impact of this series will likely be the
> > > > > overall
> > > > > improvement in graphics performance on systems with an IGP
> > > > > integrated
> > > > > into the processor package (though for the moment this is
> > > > > only
> > > > > enabled
> > > > > on BXT+), because the TDP budget shared among CPU and GPU can
> > > > > frequently become a limiting factor in low-power devices.  On
> > > > > heavily
> > > > > TDP-bound devices this series improves performance of
> > > > > virtually any
> > > > > non-trivial graphics rendering by a significant amount (of
> > > > > the
> > > > > order
> > > > > of the energy efficiency improvement for that workload
> > > > > assuming the
> > > > > optimization didn't cause it to become non-TDP-bound).
> > > > > 
> > > > > See [1]-[5] for detailed numbers including various graphics
> > > > > benchmarks
> > > > > and a sample of the Phoronix daily-system-tracker.  Some
> > > > > popular
> > > > > graphics benchmarks like GfxBench gl_manhattan31 and gl_4
> > > > > improve
> > > > > between 5% and 11% on our systems.  The exact improvement can
> > > > > vary
> > > > > substantially between systems (compare the benchmark results
> > > > > from
> > > > > the
> > > > > two different J3455 systems [1] and [3]) due to a number of
> > > > > factors,
> > > > > including the ratio between CPU and GPU processing power, the
> > > > > behavior
> > > > > of the userspace graphics driver, the windowing system and
> > > > > resolution,
> > > > > the BIOS (which has an influence on the package TDP), the
> > > > > thermal
> > > > > characteristics of the system, etc.
> > > > > 
> > > > > Unigine Valley and Heaven improve by a similar factor on some
> > > > > systems
> > > > > (see the J3455 results [1]), but on others the improvement is
> > > > > lower
> > > > > because the benchmark fails to fully utilize the GPU, which
> > > > > causes
> > > > > the
> > > > > heuristic to remain in low-latency state for longer, which
> > > > > leaves a
> > > > > reduced TDP budget available to the GPU, which prevents
> > > > > performance
> > > > > from increasing further.  This can be avoided by using the
> > > > > alternative
> > > > > 

Re: [Intel-gfx] [PATCH 0/9] GPU-bound energy efficiency improvements for the intel_pstate driver.

2018-04-10 Thread Srinivas Pandruvada
On Tue, 2018-04-10 at 15:28 -0700, Francisco Jerez wrote:
> Francisco Jerez  writes:
> 
[...]


> For the case anyone is wondering what's going on, Srinivas pointed me
> at
> a larger idle power usage increase off-list, ultimately caused by the
> low-latency heuristic as discussed in the paragraph above.  I have a
> v2
> of PATCH 6 that gives the controller a third response curve roughly
> intermediate between the low-latency and low-power states of this
> revision, which avoids the energy usage increase while C0 residency
> is
> low (e.g. during idle) expected for v1.  The low-latency behavior of
> this revision is still going to be available based on a heuristic (in
> particular when a realtime-priority task is scheduled).  We're
> carrying
> out some additional testing, I'll post the code here eventually.

Please try the sched-util governor also. There is a frequency-invariance
patch, which I can send you (it will eventually be pushed by Peter).
We want to avoid adding complexity to intel-pstate for non-HWP
power-sensitive platforms as far as possible.


Thanks,
Srinivas
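
For readers unfamiliar with the term: "frequency invariance" here means
scaling the measured utilization by the current/max frequency ratio, so
the governor's signal reflects work done rather than the clock the task
happened to run at. A simplified, illustrative sketch of the idea (the
real kernel plumbing goes through arch_scale_freq_capacity(); this
helper is hypothetical):

/*
 * Illustrative helper, not kernel code: busy time accumulated at a low
 * clock is scaled down, so a CPU that is 50% busy at half clock
 * reports ~25% utilization -- what the work would need at full clock.
 */
static unsigned long freq_invariant_util(unsigned long raw_util,
					 unsigned long cur_freq,
					 unsigned long max_freq)
{
	return raw_util * cur_freq / max_freq;
}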



> 
> > [Absolute benchmark results are unfortunately omitted from this
> > letter
> > due to company policies, but the percent change and Student's T
> > p-value are included above and in the referenced benchmark results]
> > 
> > The most obvious impact of this series will likely be the overall
> > improvement in graphics performance on systems with an IGP
> > integrated
> > into the processor package (though for the moment this is only
> > enabled
> > on BXT+), because the TDP budget shared among CPU and GPU can
> > frequently become a limiting factor in low-power devices.  On
> > heavily
> > TDP-bound devices this series improves performance of virtually any
> > non-trivial graphics rendering by a significant amount (of the
> > order
> > of the energy efficiency improvement for that workload assuming the
> > optimization didn't cause it to become non-TDP-bound).
> > 
> > See [1]-[5] for detailed numbers including various graphics
> > benchmarks
> > and a sample of the Phoronix daily-system-tracker.  Some popular
> > graphics benchmarks like GfxBench gl_manhattan31 and gl_4 improve
> > between 5% and 11% on our systems.  The exact improvement can vary
> > substantially between systems (compare the benchmark results from
> > the
> > two different J3455 systems [1] and [3]) due to a number of
> > factors,
> > including the ratio between CPU and GPU processing power, the
> > behavior
> > of the userspace graphics driver, the windowing system and
> > resolution,
> > the BIOS (which has an influence on the package TDP), the thermal
> > characteristics of the system, etc.
> > 
> > Unigine Valley and Heaven improve by a similar factor on some
> > systems
> > (see the J3455 results [1]), but on others the improvement is lower
> > because the benchmark fails to fully utilize the GPU, which causes
> > the
> > heuristic to remain in low-latency state for longer, which leaves a
> > reduced TDP budget available to the GPU, which prevents performance
> > from increasing further.  This can be avoided by using the
> > alternative
> > heuristic parameters suggested in the commit message of PATCH 8,
> > which
> > provide a lower IO utilization threshold and hysteresis for the
> > controller to attempt to save energy.  I'm not proposing those for
> > upstream (yet) because they would also increase the risk for
> > latency-sensitive IO-heavy workloads to regress (like SynMark2
> > OglTerrainFly* and some arguably poorly designed IPC-bound X11
> > benchmarks).
> > 
> > Discrete graphics aren't likely to experience that much of a
> > visible
> > improvement from this, even though many non-IGP workloads *could*
> > benefit by reducing the system's energy usage while the discrete
> > GPU
> > (or really, any other IO device) becomes a bottleneck, but this is
> > not
> > attempted in this series, since that would involve making an energy
> > efficiency/latency trade-off that only the maintainers of the
> > respective drivers are in a position to make.  The cpufreq
> > interface
> > introduced in PATCH 1 to achieve this is left as an opt-in for that
> > reason, only the i915 DRM driver is hooked up since it will get the
> > most direct pay-off due to the increased energy budget available to
> > the GPU, but other power-hungry third-party gadgets built into the
> > same package (*cough* AMD *cough* Mali *cough* PowerVR *cough*) may
> > be
> > able to benefit from this interface eventually by instrumenting the
> > driver in a similar way.
> > 
> > The cpufreq interface is not exclusively tied to the intel_pstate
> > driver, because other governors can make use of the statistic
> > calculated as result to avoid over-optimizing for latency in
> > scenarios
> > where a lower frequency would be able to achieve similar throughput
> > while using less energy.  The interpretation of this statistic
> > relies
> > on the observation that for as long as 

Re: [Intel-gfx] [PATCH] drm/i915: Quirk to ignore VBT bpp

2013-09-23 Thread Srinivas Pandruvada

On 09/22/2013 11:18 PM, Jani Nikula wrote:

On Sat, 21 Sep 2013, Ben Widawsky benjamin.widaw...@intel.com wrote:

We've had several reports of an Asus Zenbook reporting an 18bpp eDP
display, which then proceeds to not work. Using the default 24 works
just fine. Since it appears this is somewhat common in the budding world
of eDP, make a new quirk for it, and use it.

Srinivas, are you using UEFI boot? Does the problem go away if you try
enabling CSM or legacy boot?

That would be [1]. On certain machines we need to use the bpp from VBT,
otherwise eDP fails. For some reason the VBT on certain other machines
reports a different bpp value depending on UEFI vs. CSM/legacy boot,
where the former fails but latter works.

There are more affected machines than just Asus Zenbook UX31A IVB, and
I'm not sure if quirking is the right option... still hoping to find a
good solution that works out of the box for everyone.

I see that for some folks enabling legacy mode solves this issue and
for some it doesn't (there are a number of reports of this issue).
For me UEFI boot is important because I am working on a feature that
requires UEFI boot.

BR,
Jani.


[1] https://bugzilla.kernel.org/show_bug.cgi?id=59841



This code has been changed several times. Amongst the most recent with
the best history are:

commit 57c219633275c7e7413f8bc7be250dc092887458
Author: Daniel Vetter daniel.vet...@ffwll.ch
Date:   Thu Apr 4 17:19:37 2013 +0200

 drm/i915: revert eDP bpp clamping code changes

and

commit af13188a1a6623fc8b4b6c42178046fb80f8b1d0
Author: Daniel Vetter daniel.vet...@ffwll.ch
Date:   Tue Feb 19 17:45:00 2013 +0100

 drm/i915: force bpp for eDP panels

Reported-by: Srinivas Pandruvada srinivas.pandruv...@linux.intel.com
CC: Adam Jackson a...@redhat.com
CC: Daniel Vetter daniel.vet...@ffwll.ch
Signed-off-by: Ben Widawsky b...@bwidawsk.net
---
  drivers/gpu/drm/i915/i915_drv.h      |  1 +
  drivers/gpu/drm/i915/intel_display.c | 12 ++++++++++++
  drivers/gpu/drm/i915/intel_dp.c      |  4 +++-
  3 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8c52cbd..bc8ff0a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -672,6 +672,7 @@ enum intel_sbi_destination {
  #define QUIRK_LVDS_SSC_DISABLE (1<<1)
  #define QUIRK_INVERT_BRIGHTNESS (1<<2)
  #define QUIRK_NO_PCH_PWM_ENABLE (1<<3)
 +#define QUIRK_IGNORE_VBT_BPP	(1<<4)
  
  struct intel_fbdev;

  struct intel_fbc_work;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 8206ee7..c364377 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -10139,6 +10139,15 @@ static void quirk_no_pcm_pwm_enable(struct drm_device *dev)
 	DRM_INFO("applying no-PCH_PWM_ENABLE quirk\n");
 }
 
+/* Some machines (ux31a) advertise the panel should use 18bpp, but it lies.
+ */
+static void quirk_ignore_vbt_bpp(struct drm_device *dev)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+	dev_priv->quirks |= QUIRK_IGNORE_VBT_BPP;
+	DRM_INFO("applying IGNORE_VBT_BPP quirk\n");
+}
+
  struct intel_quirk {
int device;
int subsystem_vendor;
@@ -10213,6 +10222,9 @@ static struct intel_quirk intel_quirks[] = {
{ 0x0116, 0x1028, 0x052e, quirk_no_pcm_pwm_enable },
/* Dell XPS13 HD and XPS13 FHD Ivy Bridge */
{ 0x0166, 0x1028, 0x058b, quirk_no_pcm_pwm_enable },
+
+   /* Asus Zenbook UX31A Ivybridge eDP */
+   { 0x0166, 0x1043, 0x1517, quirk_ignore_vbt_bpp },
  };
  
  static void intel_init_quirks(struct drm_device *dev)

diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
index 9770160..fd47be8 100644
--- a/drivers/gpu/drm/i915/intel_dp.c
+++ b/drivers/gpu/drm/i915/intel_dp.c
@@ -805,7 +805,9 @@ intel_dp_compute_config(struct intel_encoder *encoder,
/* Walk through all bpp values. Luckily they're all nicely spaced with 2
 * bpc in between. */
 	bpp = pipe_config->pipe_bpp;
-	if (is_edp(intel_dp) && dev_priv->vbt.edp_bpp) {
+	if (is_edp(intel_dp) &&
+	    dev_priv->vbt.edp_bpp &&
+	    (dev_priv->quirks & QUIRK_IGNORE_VBT_BPP) == 0) {
 		DRM_DEBUG_KMS("clamping bpp for eDP panel to BIOS-provided %i\n",
 			      dev_priv->vbt.edp_bpp);
 		bpp = min_t(int, bpp, dev_priv->vbt.edp_bpp);
--
1.8.4
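
For context on how the new table entry takes effect: intel_init_quirks()
walks intel_quirks[] at driver load and fires each entry's hook when the
PCI device and subsystem IDs match. A rough paraphrase of that loop as
it looked in this era (simplified, not part of the patch):

/*
 * Rough paraphrase of intel_init_quirks() from intel_display.c of this
 * era: the new line above makes quirk_ignore_vbt_bpp() run on the
 * UX31A (device 0x0166, subsystem 1043:1517).
 */
static void intel_init_quirks(struct drm_device *dev)
{
	struct pci_dev *d = dev->pdev;
	int i;

	for (i = 0; i < ARRAY_SIZE(intel_quirks); i++) {
		struct intel_quirk *q = &intel_quirks[i];

		if (d->device == q->device &&
		    (d->subsystem_vendor == q->subsystem_vendor ||
		     q->subsystem_vendor == PCI_ANY_ID) &&
		    (d->subsystem_device == q->subsystem_device ||
		     q->subsystem_device == PCI_ANY_ID))
			q->hook(dev);
	}
}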
