On 14.04.2018 07:01, Srinivas Pandruvada wrote:
Are you no longer interested in improving those aspects of the non-HWP
governor? Is it that you're planning to delete it and move back to a
generic cpufreq governor for non-HWP platforms in the near future?
Yes, that is the plan for Atom platforms, which are the only non-HWP
platforms so far. You have to show good gains in performance and
performance/watt to justify carrying and maintaining such a big change.
So we have to see your performance and power numbers.
For the active cases, you can look at the links at the beginning /
bottom of this mail thread. Francisco provided performance results for
those.
On this side of the Atlantic, we've been testing different versions of
the patchset over the past few months on >50 Linux 3D benchmarks across
6 different platforms. On Geminilake and a few BXT configurations
(where 3D benchmarks are TDP limited), many tests' performance improves
by 5-15%, including complex ones.
And more importantly, there were no regressions.
(You can see details + links to more info in Jira ticket VIZ-12078.)
*On (fully) TDP limited cases, power usage (obviously) stays the same,
so performance/watt improvements can be derived directly from the
measured performance improvements.
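To make that derivation concrete with made-up numbers: if a benchmark
runs pinned at, say, a 6 W package TDP both before and after the
change, and its score goes from 40 to 44 FPS, then performance rose 10%
at constant power, so performance/watt (40/6 ≈ 6.7 vs. 44/6 ≈ 7.3
FPS/W) also rose by exactly 10%.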
We have data also for earlier platforms from slightly older versions of
the patchset, but on those it didn't have any significant impact on
performance. I think the main reason for this is that the BYT & BSW
NUCs we have only have space for a single memory module. Without a
dual-channel memory configuration, benchmarks are too
memory-bottlenecked to utilize the GPU enough to make things TDP
limited on those platforms.
However, now that I look at the old BYT & BSW data (for the few
benchmarks which improved most on BXT & GLK), I see that there's a
reduction in CPU power utilization according to RAPL, at least on BSW.
This will benefit all architectures, including x86 + non-i915.
The current design encourages re-use of the IO utilization statistic
(see PATCH 1) by other governors as a mechanism driving the trade-off
between energy efficiency and responsiveness based on whether the
system is close to CPU-bound, in whatever way is applicable to each
governor (e.g. it would make sense for it to be hooked up to the EPP
knob in the case of the intel_pstate HWP governor, which would allow it
to achieve better energy efficiency in IO-bound situations just like
this series does for non-HWP parts). There's nothing really x86- nor
i915-specific about it.
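As a rough illustration of the EPP idea (not part of this series; the
helper name, thresholds and return values below are made up, the only
fixed point being that HWP EPP ranges from 0 = maximum performance to
255 = maximum energy savings), the hook-up could look something like:

  /*
   * Hypothetical mapping from the IO utilization statistic to an EPP
   * value: the more IO-bound the workload looks, the further the
   * hardware is biased toward energy efficiency.
   */
  static u8 select_epp_from_io_util(unsigned int io_util_pct)
  {
          if (io_util_pct > 80)
                  return 192;     /* heavily IO-bound: save energy */
          if (io_util_pct > 40)
                  return 128;     /* mixed workload: balanced */
          return 32;              /* close to CPU-bound: stay responsive */
  }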
BTW intel-pstate can be driven by the schedutil governor (passive
mode), so if you prove benefits on Broxton, this can be made the
default.
- No regression to idle power at all. This is more important than
  performance.
- Not just score, performance/watt is important.
Is schedutil actually on par with the intel_pstate non-HWP governor
of today, according to these metrics and the overall benchmark
numbers?
Yes, except for a few cases. I have not tested recently, so it may be
better now.
In IO-bound conditions schedutil boosts the frequency in response to
IO waits, much like the current intel_pstate controller does, even
though the frequent IO waits may actually be an indication that the
system is IO-bound (which means that the energy usage increase may not
be translated into any performance benefit in practice, not to speak
of performance being impacted negatively in TDP-bound scenarios like
GPU rendering).
Regarding run-time complexity, I haven't observed this governor to be
measurably more computationally intensive than the present one. It's a
bunch more instructions indeed, but still within the same ballpark as
the current governor. The average increase in CPU utilization
with this series is less than 0.03% (sampled via ftrace for v1, I'll
repeat the measurement for the v2 I have in the works, though I don't
expect the result to be substantially different). If this is a concern
for you there are several optimization opportunities that would cut
the number of CPU cycles get_target_pstate_lp() takes to execute by a
large percent (most of the optimization ideas I can think of right now
though would come at some accuracy, maintainability or debuggability
cost, but may still be worth pursuing), but the computational overhead
is low enough at this point that the impact on any benchmark or real
workload would be orders of magnitude lower than its variance, which
makes it kind of difficult to keep the discussion data-driven [as
possibly any performance optimization discussion should ever be ;)].
[Absolute benchmark results are unfortunately omitted from this letter
due to company policies, but the percent change and p-value are
included above and in the referenced benchmark results.]
The most obvious impact of this series will likely be the overall
improvement in graphics performance on systems with an IGP integrated
into the processor package (though for the moment this is only enabled
on BXT+), because the TDP budget shared among CPU and GPU can
frequently become a limiting factor in low-power devices. On heavily
TDP-bound devices this series improves performance of virtually any
non-trivial graphics rendering by a significant amount (of the order
of the energy efficiency improvement for that workload, assuming the
optimization didn't cause it to become non-TDP-bound).
See - for detailed numbers including various graphics benchmarks
and a sample of the Phoronix daily-system-tracker. Some popular
graphics benchmarks like GfxBench gl_manhattan31 and gl_4 improve
between 5% and 11% on our systems. The exact improvement can vary
substantially between systems (compare the benchmark results from the
two different J3455 systems  and ) due to a number of factors,
including the ratio between CPU and GPU processing power, the behavior
of the userspace graphics driver, the windowing system and resolution,
the BIOS (which has an influence on the package TDP), the thermal
characteristics of the system, etc.
Unigine Valley and Heaven improve by a similar factor on some systems
(see the J3455 results ), but on others the improvement is lower
because the benchmark fails to fully utilize the GPU, which causes the
heuristic to remain in low-latency state for longer, which leaves a
reduced TDP budget available to the GPU, which prevents performance
from increasing further. This can be avoided by using the alternative
heuristic parameters suggested in the commit message of PATCH 8, which
provide a lower IO utilization threshold and hysteresis for the
controller to attempt to save energy. I'm not proposing those
parameters upstream (yet) because they would also increase the risk
for latency-sensitive IO-heavy workloads to regress (like SynMark2
OglTerrainFly* and some arguably poorly designed IPC-bound
benchmarks).
Discrete graphics aren't likely to experience that much of an
improvement from this, even though many non-IGP workloads could
benefit by reducing the system's energy usage while the discrete GPU
(or really, any other IO device) becomes a bottleneck, but this is not
attempted in this series, since that would involve making an energy
efficiency/latency trade-off that only the maintainers of the
respective drivers are in a position to make. The cpufreq interface
introduced in PATCH 1 to achieve this is left as an opt-in for that
reason; only the i915 DRM driver is hooked up since it gets the
most direct pay-off due to the increased energy budget available to
the GPU, but other power-hungry third-party gadgets built into the
same package (*cough* AMD *cough* Mali *cough* PowerVR *cough*) may be
able to benefit from this interface eventually by instrumenting the
driver in a similar way.
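To give an idea of what instrumenting a driver "in a similar way"
could look like, here's a hypothetical sketch; the entry points and
types (cpufreq_io_active_begin/end, struct gadget) are placeholders
rather than the actual interface added by PATCH 1:

  /* Submission path of a made-up IO device driver. */
  static void gadget_submit_job(struct gadget *g, struct gadget_job *job)
  {
          /* Tell cpufreq the device has started processing IO. */
          if (g->active_jobs++ == 0)
                  cpufreq_io_active_begin(&g->io_stats);

          gadget_hw_queue(g, job);
  }

  static void gadget_retire_job(struct gadget *g)
  {
          /* Tell cpufreq the device has gone idle again. */
          if (--g->active_jobs == 0)
                  cpufreq_io_active_end(&g->io_stats);
  }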
The cpufreq interface is not exclusively tied to the intel_pstate
driver, because other governors can make use of the statistic
calculated as result to avoid over-optimizing for latency in scenarios
where a lower frequency would be able to achieve similar throughput
while using less energy. The interpretation of this statistic relies
on the observation that for as long as the system is CPU-bound, any IO
load occurring as a result of the execution of a program will scale
roughly linearly with the clock frequency the program is run at, so
(assuming that the CPU has enough processing power) a point will be
reached at which the program won't be able to execute faster with
increasing CPU frequency because the throughput limits of some device
will have been attained. Increasing frequencies past that point only
pessimizes energy usage for no real benefit -- the optimal behavior is
for the CPU to lock to the minimum frequency that is able to keep the
IO devices involved fully utilized (assuming we are past the
maximum-efficiency inflection point of the CPU's power-to-frequency
curve), which is roughly the goal of this series.
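As a toy model of that goal (deliberately simplified, not the actual
get_target_pstate_lp() logic; io_bound and io_device_busy_pct stand in
for whatever estimates the controller derives from the statistic):

  static int toy_target_pstate(struct cpudata *cpu, bool io_bound,
                               int io_device_busy_pct)
  {
          int cur = cpu->pstate.current_pstate;

          /* Close to CPU-bound: prioritize responsiveness. */
          if (!io_bound)
                  return cpu->pstate.max_pstate;

          /*
           * IO-bound: seek the lowest P-state that still keeps the IO
           * device saturated.  If it stopped being saturated we
           * clocked down too far; otherwise probe one step lower to
           * save energy.
           */
          if (io_device_busy_pct < 100)
                  return min(cur + 1, cpu->pstate.max_pstate);

          return max(cur - 1, cpu->pstate.min_pstate);
  }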
PELT could be a useful extension for this model since its largely
heuristic assumptions would become more accurate if the IO and CPU
load could be tracked separately for each scheduling entity, but this
is not attempted in this series because the additional complexity and
computational cost of such an approach is hard to justify at this
stage, particularly since the current governor has similar limitations.
Various frequency and step-function response graphs are available
- for comparison (obtained empirically on a BXT J3455 system).
The response curves for the low-latency and low-power states of the
heuristic are shown separately -- as you can see they roughly bracket
the frequency response curve of the current governor. The step
response of the aggressive heuristic is within a single update period
(even though it's not quite obvious from the graph with the level of
zoom provided). I'll attach benchmark results from a slower but
non-TDP-limited machine (which means there will be no TDP budget
increase that could possibly mask a performance regression of another
kind) as soon as they come out.
Thanks to Eero and Valtteri for testing a number of intermediate
revisions of this series (and there were quite a few of them) on more
than half a dozen systems; they helped spot quite a few issues in
earlier versions of this heuristic.
[PATCH 1/9] cpufreq: Implement infrastructure keeping track of
  aggregated IO active time.
[PATCH 2/9] Revert "cpufreq: intel_pstate: Replace bxt_funcs with core_funcs"
[PATCH 3/9] Revert "cpufreq: intel_pstate: Shorten a couple of long names"
[PATCH 4/9] Revert "cpufreq: intel_pstate: Simplify intel_pstate_adjust_pstate()"
[PATCH 5/9] Revert "cpufreq: intel_pstate: Drop ->update_util from pstate_funcs"
[PATCH 6/9] cpufreq/intel_pstate: Implement variably low-pass filtering
  controller for small core.
[PATCH 7/9] SQUASH: cpufreq/intel_pstate: Enable LP controller based on
  ACPI FADT profile.
[PATCH 8/9] OPTIONAL: cpufreq/intel_pstate: Expose LP controller
  parameters via debugfs.
[PATCH 9/9] drm/i915/execlists: Report GPU rendering as IO activity to
  cpufreq.