On 22-Jan 12:30, Quentin Perret wrote:
> On Tuesday 15 Jan 2019 at 10:15:06 (+0000), Patrick Bellasi wrote:
> > diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> > index 520ee2b785e7..38a05a4f78cc 100644
> > --- a/kernel/sched/cpufreq_schedutil.c
> > +++ b/kernel/sched/cpufreq_schedutil.c
> > @@ -201,9 +201,6 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> >     unsigned long dl_util, util, irq;
> >     struct rq *rq = cpu_rq(cpu);
> >  
> > -   if (type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt))
> > -           return max;
> > -
> >     /*
> >      * Early check to see if IRQ/steal time saturates the CPU, can be
> >      * because of inaccuracies in how we track these -- see
> > @@ -219,15 +216,19 @@ unsigned long schedutil_freq_util(int cpu, unsigned long util_cfs,
> >      * utilization (PELT windows are synchronized) we can directly add them
> >      * to obtain the CPU's actual utilization.
> >      *
> > -    * CFS utilization can be boosted or capped, depending on utilization
> > -    * clamp constraints requested by currently RUNNABLE tasks.
> > +    * CFS and RT utilization can be boosted or capped, depending on
> > +    * utilization clamp constraints requested by currently RUNNABLE
> > +    * tasks.
> >      * When there are no CFS RUNNABLE tasks, clamps are released and
> >      * frequency will be gracefully reduced with the utilization decay.
> >      */
> > -   util = (type == ENERGY_UTIL)
> > -           ? util_cfs
> > -           : uclamp_util(rq, util_cfs);
> > -   util += cpu_util_rt(rq);
> > +   util = cpu_util_rt(rq);
> > +   if (type == FREQUENCY_UTIL) {
> > +           util += cpu_util_cfs(rq);
> > +           util  = uclamp_util(rq, util);
> 
> So with this we don't go to max anymore for CONFIG_UCLAMP_TASK=n, no?

Mmm... good point!

I need to guard this change for the !CONFIG_UCLAMP_TASK case!

> 
> Thanks,
> Quentin

Cheers, Patrick

-- 
#include <best/regards.h>

Patrick Bellasi
