No assumption can be made about the rate at which frequency updates are triggered, as there are scheduling policies (like SCHED_DEADLINE) which don't trigger them very frequently.
Remove such an assumption from the code.

Signed-off-by: Juri Lelli <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Rafael J. Wysocki <[email protected]>
Cc: Viresh Kumar <[email protected]>
Cc: Luca Abeni <[email protected]>
Cc: Claudio Scordino <[email protected]>
---
 kernel/sched/cpufreq_schedutil.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index da67a1cf91e7..40f30373b709 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -233,14 +233,13 @@ static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
 		 * If the CPU utilization was last updated before the previous
 		 * frequency update and the time elapsed between the last update
 		 * of the CPU utilization and the last frequency update is long
-		 * enough, don't take the CPU into account as it probably is
-		 * idle now (and clear iowait_boost for it).
+		 * enough, reset iowait_boost, as it probably is not boosted
+		 * anymore now.
 		 */
 		delta_ns = last_freq_update_time - j_sg_cpu->last_update;
-		if (delta_ns > TICK_NSEC) {
+		if (delta_ns > TICK_NSEC)
 			j_sg_cpu->iowait_boost = 0;
-			continue;
-		}
+
 		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
 			return policy->cpuinfo.max_freq;
 
-- 
2.10.0
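
For context, this is roughly what the shared-policy aggregation loop in
sugov_next_freq_shared() looks like with the patch applied. It is a minimal
sketch, not the verbatim upstream function: the surrounding locals (util, max,
last_freq_update_time, j) and helpers such as sugov_iowait_boost() are assumed
from kernel/sched/cpufreq_schedutil.c. The point is that a CPU whose
utilization data is stale now only loses its iowait boost; its utilization
still contributes to the aggregate, since a long gap between updates (e.g.
under SCHED_DEADLINE) no longer implies the CPU is idle.

	for_each_cpu(j, policy->cpus) {
		struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
		unsigned long j_util, j_max;
		s64 delta_ns;

		/*
		 * Stale data only means the iowait boost has expired; the
		 * CPU's utilization is still taken into account below.
		 */
		delta_ns = last_freq_update_time - j_sg_cpu->last_update;
		if (delta_ns > TICK_NSEC)
			j_sg_cpu->iowait_boost = 0;

		/* RT requests pin the policy to the maximum frequency. */
		if (j_sg_cpu->flags & SCHED_CPUFREQ_RT)
			return policy->cpuinfo.max_freq;

		/* Keep the (util, max) pair with the highest relative load. */
		j_util = j_sg_cpu->util;
		j_max = j_sg_cpu->max;
		if (j_util * max > j_max * util) {
			util = j_util;
			max = j_max;
		}

		sugov_iowait_boost(j_sg_cpu, &util, &max);
	}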

