From: Juri Lelli juri.le...@arm.com
When a CPU is going idle, it is pointless to ask for an OPP update: we
would wake up another task only to request the same capacity we are already
running at (utilization gets moved to blocked_utilization). We thus add
a cpufreq_sched_reset_capacity() interface
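A minimal sketch of the idea, assuming a hypothetical per-CPU capacity-request array (the real interface manipulates per-CPU cpufreq governor data; all names here are illustrative, not the actual kernel API):

```c
#define NR_CPUS_DEMO 4

/* Per-CPU capacity requests, in [0..1024] capacity units (illustrative). */
static unsigned long capacity_reqs[NR_CPUS_DEMO];

/*
 * On idle entry, zero this CPU's capacity request directly instead of
 * waking the frequency-change thread: the utilization has moved to
 * blocked_utilization, so no OPP change request is needed.
 */
static void cpufreq_sched_reset_capacity(int cpu)
{
	capacity_reqs[cpu] = 0;
}
```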
From: Juri Lelli juri.le...@arm.com
Patch "sched/fair: add triggers for OPP change requests" introduced OPP
change triggers for enqueue_task_fair(), but the trigger was operating only
for wakeups. In fact, it also makes sense to consider wakeup_new (i.e.,
fork()), as we don't know anything
From: Juri Lelli juri.le...@arm.com
Introduce a static key so that scheduler hot paths are only affected when
the sched governor is enabled.
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Juri Lelli juri.le...@arm.com
---
kernel/sched/cpufreq_sched.c | 14
triggers for load balancing
Patrick Bellasi (7):
sched/tune: add detailed documentation
sched/tune: add sysctl interface to define a boost value
sched/fair: add function to convert boost value into margin
sched/fair: add boosted CPU usage
sched/tune: add initial support for CGroups based
From: Juri Lelli juri.le...@arm.com
Since the true utilization of a long-running task is not detectable while
it is running and might be bigger than the current cpu capacity, create the
maximum cpu capacity headroom by requesting the maximum cpu capacity once
the cpu usage plus the capacity
From: Juri Lelli juri.le...@arm.com
As we don't trigger freq changes from {en,de}queue_task_fair() during load
balancing, we need to do so explicitly on load-balancing paths.
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Juri Lelli juri.le...@arm.com
---
...@infradead.org
Signed-off-by: Patrick Bellasi patrick.bell...@arm.com
---
kernel/sched/fair.c | 38 ++
1 file changed, 38 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 955dfe1..15fde75 100644
--- a/kernel/sched/fair.c
+++ b/kernel
in between these two boundaries is used to bias the
power/performance trade-off; the higher the boost value, the more the
scheduler is biased toward performance boosting instead of energy
efficiency.
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Patrick Bellasi
.
This patch provides a detailed description of the motivations and design
decisions behind the implementation of SchedTune.
cc: Jonathan Corbet cor...@lwn.net
cc: linux-...@vger.kernel.org
Signed-off-by: Patrick Bellasi patrick.bell...@arm.com
---
Documentation/scheduler/sched-tune.txt | 367
From: Juri Lelli juri.le...@arm.com
Use the cpu argument of cpufreq_sched_set_cap() to handle per_cpu writes,
as the thing can be called remotely (e.g., from load balancing code).
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Juri Lelli juri.le...@arm.com
From: Juri Lelli juri.le...@arm.com
Each time a task is {en,de}queued we might need to adapt the current
frequency to the new usage. Add triggers on {en,de}queue_task_fair() for
this purpose. Only trigger a freq request if we are effectively waking up
or going to sleep. Filter out load
to
different boost groups.
cc: Tejun Heo t...@kernel.org
cc: Li Zefan lize...@huawei.com
cc: Johannes Weiner han...@cmpxchg.org
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Patrick Bellasi patrick.bell...@arm.com
---
include/linux/cgroup_subsys.h | 4
.
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Patrick Bellasi patrick.bell...@arm.com
---
kernel/sched/fair.c | 32 +++-
1 file changed, 31 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index
Signed-off-by: Patrick Bellasi patrick.bell...@arm.com
---
kernel/sched/tune.c | 100
1 file changed, 100 insertions(+)
diff --git a/kernel/sched/tune.c b/kernel/sched/tune.c
index a26295c..3223ef3 100644
--- a/kernel/sched/tune.c
+++ b/kernel
boost value required by all its
currently RUNNABLE tasks.
cc: Ingo Molnar mi...@redhat.com
cc: Peter Zijlstra pet...@infradead.org
Signed-off-by: Patrick Bellasi patrick.bell...@arm.com
---
kernel/sched/fair.c | 17 +++---
kernel/sched/tune.c | 94
task to run on the little
> core and the light task to run on the big core.
That's an interesting point we should take into consideration for the
design of the complete solution.
I would prefer to postpone this discussion to the list once we
present the next extension of SchedTune which
On Wed, Sep 16, 2015 at 12:55:12AM +0100, Steve Muckle wrote:
> On 09/15/2015 08:00 AM, Patrick Bellasi wrote:
> >> Agreed, though I also think those tunable values might also change for a
> >> given set of tasks in different circumstances.
> >
> > Could you provi
On Wed, Sep 09, 2015 at 09:16:10PM +0100, Steve Muckle wrote:
> Hi Patrick,
Hi Steve,
> On 09/03/2015 02:18 AM, Patrick Bellasi wrote:
> > In my view, one of the main goals of sched-DVFS is actually that to be
> > a solid and generic replacement of different CPUFreq governors
On Mon, Sep 14, 2015 at 09:00:51PM +0100, Steve Muckle wrote:
> Hi Patrick,
>
> On 09/11/2015 04:09 AM, Patrick Bellasi wrote:
> >> It's also worth noting that mobile vendors typically add all sorts of
> >> hacks on top of the existing cpufreq governors which fur
ns and comparison, can potentially be reduced to a single
comparison, e.g.
next_freq = util > (curr_cap - margin)
? curr_freq + 1
: curr_freq
where margin is pre-computed to be for example 51 (i.e. 5% of 1024) as
well as (curr_cap - margin), which can be cached at each OPP change.
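As a hedged sketch of the single-comparison idea above (names are illustrative; the margin is hard-coded to 51, i.e. ~5% of SCHED_CAPACITY_SCALE = 1024, and the threshold is the quantity that would be cached at each OPP change):

```c
#define SCHED_CAPACITY_SCALE 1024

/* ~5% of the capacity scale, pre-computed once: 1024 * 5 / 100 = 51. */
#define MARGIN (SCHED_CAPACITY_SCALE * 5 / 100)

/*
 * Single-comparison OPP selection: step one OPP up only when the
 * utilization crosses the (curr_cap - MARGIN) threshold, otherwise
 * keep the current frequency index.
 */
static int next_freq_idx(long util, long curr_cap, int curr_idx)
{
	long threshold = curr_cap - MARGIN; /* cacheable at each OPP change */

	return util > threshold ? curr_idx + 1 : curr_idx;
}
```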
--
#include
Patrick Bellasi
) non-boosted tasks (and in general when SchedTune is not in use)
get OPP jumps based on the hardcoded M margin
b) boosted tasks can get more aggressive OPP jumps based on the B
margin
While the M margin is hardcoded, the B one is defined via CGroups
depending on how much tasks need to be boosted.
--
#include
Patrick Bellasi
un-time update based on the "limits" schema of
the RDM.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Tejun Heo <t...@kernel.org>
Cc: linux-kernel@vger.kernel.org
---
init/Kc
sched/core and is publicly available
from this repository:
git://www.linux-arm.com/linux-pb eas/stune/rfcv3
Cheers Patrick
.:: References
[1] https://lkml.org/lkml/2016/10/27/503
[2] https://lkml.org/lkml/2016/11/25/342
[3] https://lkml.org/lkml/2016/10/14/312
Patrick Bellasi (5):
sched
the original cgroup's RQ followed by an enqueue in the new one.
The same argument holds for task migrations; thus, task migrations
between CPUs and CGroups are ultimately managed like task
wakeups/sleeps.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redha
the
fast path: {enqueue,dequeue}_task
and the
slow path: cpu_capacity_{min,max}_write_u64
is provided in a dedicated patch.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Tejun Heo &l
sk belongs to.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org
---
ker
. frequency)
The default values for boosting and capping are defined to be:
- capacity_min: 0
- capacity_max: SCHED_CAPACITY_SCALE
which means that by default no boosting/capping is enforced.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Pe
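The resulting boosting/capping semantics can be sketched as a simple clamp (the function name is illustrative, not the proposed kernel API):

```c
#define SCHED_CAPACITY_SCALE 1024

/*
 * Clamp a CPU's utilization into [cap_min, cap_max]: cap_min boosts
 * (raises the floor), cap_max caps. With the defaults
 * (0, SCHED_CAPACITY_SCALE) this is a no-op.
 */
static long clamp_capacity(long util, long cap_min, long cap_max)
{
	if (util < cap_min)
		return cap_min;
	if (util > cap_max)
		return cap_max;
	return util;
}
```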
u, struct task_struct *p)
> return cpu_util(cpu);
>
> capacity = capacity_orig_of(cpu);
> - util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
> + util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util_peak(p),
> 0);
>
> return (util >= capacity) ? capacity : util;
> }
> @@ -5476,7 +5481,7 @@ static int wake_cap(struct task_struct *p, int cpu, int
> prev_cpu)
> /* Bring task utilization in sync with prev_cpu */
> sync_entity_load_avg(&p->se);
>
> - return min_cap * 1024 < task_util(p) * capacity_margin;
> + return min_cap * 1024 < task_util_peak(p) * capacity_margin;
> }
>
> /*
> --
> 1.9.1
>
--
#include
Patrick Bellasi
On 27-Oct 14:30, Tejun Heo wrote:
> Hello, Patrick.
Hi Tejun,
> On Thu, Oct 27, 2016 at 06:41:05PM +0100, Patrick Bellasi wrote:
> > To support task performance boosting, the usage of a single knob has the
> > advantage of being a simple solution, both from the implementation and
On 27-Oct 16:39, Tejun Heo wrote:
> Hello, Patrick.
>
> On Thu, Oct 27, 2016 at 09:14:39PM +0100, Patrick Bellasi wrote:
> > I'm wondering also how confusing and complex it can be to
> > configure a system where you have non-overlapping groups of tasks with
&g
On 04-Nov 15:16, Viresh Kumar wrote:
> On 27-10-16, 18:41, Patrick Bellasi wrote:
> > +This last requirement is especially important if we consider that
> > schedutil can
> > +potentially replace all currently available CPUFreq policies. Since
> > schedutil
>
t your example was intentionally simplified, however it
suggested to me that maybe we should try to start a "campaign" to collect
a description of the use-cases we would like to optimize for.
Knowing timings and desirable behaviours can, in the end, also help to
design and implement better solutions.
--
#include
Patrick Bellasi
are:
1) collect further feedback to properly refine the design of
what will be the next RFCv3 of SchedTune
2) develop and present on LKML the RFCv3 for SchedTune which should
implement the consensus driven design from the previous step
References
==
[1] https://marc.info/?i=2016102
ation.
>
> > We could fairly easy; if this is so desired; make the PELT window size a
> > CONFIG option (hidden by default).
> >
> > But like everything; patches should come with numbers justifying them
> > etc..
> >
>
> Sure. :)
>
> > > > Also, there was the idea of; once the above ideas have all been
> > > > explored; tying the freq ram rate to the power curve.
> > > >
> > >
> > > Yep. That's an interesting one to look at, but it might require some
> > > time.
> >
> > Sure, just saying that we should resist knobs until all other avenues
> > have been explored. Never start with a knob.
--
#include
Patrick Bellasi
On 21-Nov 16:26, Peter Zijlstra wrote:
> On Mon, Nov 21, 2016 at 02:59:19PM +0000, Patrick Bellasi wrote:
>
> > A fundamental problem IMO is that we are trying to use a "dynamic
> > metric" to act as a "predictor".
> >
> > PELT is a "dy
have a good framework for implementing such a tunable.
This patch provides a detailed description of the motivations and design
decisions behind the implementation of SchedTune.
Cc: Jonathan Corbet <cor...@lwn.net>
Cc: linux-...@vger.kernel.org
Signed-off-by: Patrick Bellasi <pat
oost a CPU to the maximum boost value required by all its
currently RUNNABLE tasks.
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
---
kernel/exit.c | 5 ++
kernel/sched/fair.c | 28
power/performance trade-off; the higher the boost value, the more the
scheduler is biased toward performance boosting instead of energy
efficiency.
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
-
Cc: Peter Zijlstra <pet...@infradead.org>
Suggested-by: Srinath Sridharan <srinat...@google.com>
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
---
Documentation/scheduler/sched-tune.txt | 44 ++
include/linux/sched/sysc
-grade devices
Credits
===
[*] This work has been supported by an extensive collaborative effort between
ARM, Linaro and Google, targeting production devices.
References
==
[1] https://lkml.org/lkml/2015/8/19/419
[2] https://github.com/ARM-software/lisa
Patrick Bellasi (8):
touch boost behavior on Android
systems).
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
---
kernel/sched/fair.c | 2 +-
kernel/sched/tune.c | 73 +
nd
only while there are RUNNABLE tasks on that CPU).
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
---
kernel/sched/cpufreq_schedutil.c | 4 ++--
kernel/sched/fair.c | 36 +
red to
compute the boost value for CPUs which have RUNNABLE tasks belonging to
different boost groups.
Cc: Tejun Heo <t...@kernel.org>
Cc: Li Zefan <lize...@huawei.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <
which is:
50% boosting means to run at half-way between the current and the
maximum performance which a task could achieve on that system
NOTE: this code is suitable for all signals operating in range
[0..SCHED_CAPACITY_SCALE]
Cc: Ingo Molnar <mi...@kernel.org>
Cc: Peter Zijlstra &
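The boost-to-headroom conversion described above can be sketched as follows (a simplified integer-percent version; the actual SchedTune code uses fixed-point arithmetic, so treat this as an approximation with illustrative names):

```c
#define SCHED_CAPACITY_SCALE 1024

/*
 * Boost a signal by boost_pct percent of its headroom to the maximum:
 * a 50% boost lands exactly half-way between the current value and
 * SCHED_CAPACITY_SCALE. Valid for any signal in
 * [0..SCHED_CAPACITY_SCALE].
 */
static long boosted_value(long signal, long boost_pct)
{
	long headroom = SCHED_CAPACITY_SCALE - signal;

	return signal + headroom * boost_pct / 100;
}
```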
kernel/sched/fair.c. The tracepoint is defined in:
include/trace/events/sched.h
Here it is:
https://android.googlesource.com/kernel/common/+/android-3.18/include/trace/events/sched.h#822
--
#include
Patrick Bellasi
On 27-Oct 22:58, Peter Zijlstra wrote:
> On Thu, Oct 27, 2016 at 06:41:00PM +0100, Patrick Bellasi wrote:
> >
> > This RFC is an update to the initial SchedTune proposal [1] for a central
> > scheduler-driven power-performance control.
> > The posting is being made ahe
On 22-Mar 17:28, Joel Fernandes (Google) wrote:
> Hi,
>
> On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > On 20-Mar 13:15, Tejun Heo wrote:
> >> Hello,
> >>
> >> On Tue, Feb
On 21-Mar 16:18, Peter Zijlstra wrote:
> On Tue, Mar 21, 2017 at 03:08:20PM +0000, Patrick Bellasi wrote:
>
> > And than we can move this bit into an inline function, something like e.g.:
> >
> >static inline bool sugov_this_cpu_is_busy()
> >{
>
On 23-Mar 12:01, Tejun Heo wrote:
> Hello,
Hi Tejun,
> On Thu, Mar 23, 2017 at 10:32:54AM +, Patrick Bellasi wrote:
> > > But then we would lose out on being able to attach capacity
> > > constraints to specific tasks or groups of tasks?
> >
> > Yes, righ
ribute__((nonnull (6)));
> > +{
> > + return ___update_load_avg(now, cpu, sa, weight, running, cfs_rq);
>
> Although ideally we'd be able to tell the compiler that cfs_rq will not
> be NULL here. Hurmph.. no __builtin for that I think :/
What about the above attribute?
>
> > +}
--
#include
Patrick Bellasi
On 13-Mar 03:08, Joel Fernandes (Google) wrote:
> Hi Patrick,
>
> On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > Currently schedutil enforce a maximum OPP when RT/DL tasks are RUNNABLE.
> > Such a mandatory policy can be ma
Few comments inline, otherwise LGTM.
Cheers Patrick
On 10-Mar 12:47, Joel Fernandes wrote:
> This patch rewrites comments related task priorities and CPU usage
> along with an example to show how it works.
>
> Cc: Juri Lelli <juri.le...@arm.com>
> Cc: Patrick Bellasi <
On 20-Mar 23:51, Rafael J. Wysocki wrote:
> On Thu, Mar 16, 2017 at 4:15 AM, Joel Fernandes <joe...@google.com> wrote:
> > Hi Rafael,
>
> Hi,
>
> > On Wed, Mar 15, 2017 at 6:04 PM, Rafael J. Wysocki <raf...@kernel.org>
> > wrote:
> >> On Wed,
On 15-Mar 12:41, Rafael J. Wysocki wrote:
> On Tuesday, February 28, 2017 02:38:37 PM Patrick Bellasi wrote:
> > Was: SchedTune: central, scheduler-driven, power-perfomance control
> >
> > This series presents a possible alternative design for what has been
> >
On 13-Mar 03:46, Joel Fernandes (Google) wrote:
> On Tue, Feb 28, 2017 at 6:38 AM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > The CPU CGroup controller allows assigning a specified (maximum)
> > bandwidth to tasks within a group, however it does not enforce a
On 15-Mar 05:35, Joel Fernandes wrote:
> On Wed, Mar 15, 2017 at 5:04 AM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > Few comments inline, otherwise LGTM.
>
> Ok, I'll take that as an Acked-by with the following comment addressed
> if that's Ok with you.
On 15-Mar 12:52, Rafael J. Wysocki wrote:
> On Friday, March 03, 2017 12:38:30 PM Patrick Bellasi wrote:
> > On 03-Mar 14:01, Viresh Kumar wrote:
> > > On 02-03-17, 15:45, Patrick Bellasi wrote:
> > > > diff --git a/kernel/sched/cpufreq_schedutil.c
> > &g
On 15-Mar 09:10, Paul E. McKenney wrote:
> On Wed, Mar 15, 2017 at 06:20:28AM -0700, Joel Fernandes wrote:
> > On Wed, Mar 15, 2017 at 4:20 AM, Patrick Bellasi
> > <patrick.bell...@arm.com> wrote:
> > > On 13-Mar 03:46, Joel Fernandes (Google) wrote:
> > >>
On 16-Mar 02:04, Rafael J. Wysocki wrote:
> On Wed, Mar 15, 2017 at 1:59 PM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > On 15-Mar 12:41, Rafael J. Wysocki wrote:
> >> On Tuesday, February 28, 2017 02:38:37 PM Patrick Bellasi wrote:
> >> > Wa
right?
> >
>
> When the task is enqueued back we select the frequency considering its
> bandwidth request (and the bandwidth/utilization of the others). So,
> when it actually starts running it will already have enough capacity to
> finish in time.
Here we are factoring out the time required to actually switch to the
required OPP. I think Joel was referring to this time.
That time cannot really be eliminated except by having faster OPP
switching HW support. Still, jumping straight to the "optimal" OPP
instead of ramping up is a big improvement.
--
#include
Patrick Bellasi
mind as things that might already use something like that.
Maybe the problem is not going down (e.g., when there are only small
CFS tasks it makes perfect sense) but instead not being fast enough
on ramping up when a new RT task is activated.
And this boils down to two main points:
1) throttling for up transitions is perhaps only harmful
2) the call sites for schedutil updates are not properly positioned
at specific scheduler decision points.
The proposed patch is adding yet another throttling mechanism, perhaps
on top of one which already needs to be improved.
--
#include
Patrick Bellasi
next_f = policy->cpuinfo.max_freq;
> >> } else {
> >> sugov_get_util(&util, &max);
> >> @@ -215,6 +247,7 @@ static void sugov_update_single(struct u
> >> next_f = get_next_freq(sg_policy, util, max);
> >> }
> >> sugov_update_commit(sg_policy, time, next_f);
> >> + sugov_save_idle_calls(sg_cpu);
> >> }
> >>
> >> static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu)
> >> @@ -278,12 +311,13 @@ static void sugov_update_shared(struct u
> >> sg_cpu->last_update = time;
> >>
> >> if (sugov_should_update_freq(sg_policy, time)) {
> >> - if (flags & SCHED_CPUFREQ_RT_DL)
> >> + if ((flags & SCHED_CPUFREQ_RT_DL) ||
> >> sugov_cpu_is_busy(sg_cpu))
> >
> > What about others CPUs in this policy?
> >
> >> next_f = sg_policy->policy->cpuinfo.max_freq;
> >> else
> >> next_f = sugov_next_freq_shared(sg_cpu);
> >>
> >> sugov_update_commit(sg_policy, time, next_f);
> >> + sugov_save_idle_calls(sg_cpu);
> >> }
> >>
> >> raw_spin_unlock(&sg_policy->update_lock);
> >
> > --
> > viresh
--
#include
Patrick Bellasi
On 20-Mar 13:15, Tejun Heo wrote:
> Hello,
>
> On Tue, Feb 28, 2017 at 02:38:38PM +, Patrick Bellasi wrote:
> > This patch extends the CPU controller by adding a couple of new
> > attributes, capacity_min and capacity_max, which can be used to enforce
> > bandwidth
On 20-Mar 14:05, Rafael J. Wysocki wrote:
> On Monday, March 20, 2017 01:06:15 PM Patrick Bellasi wrote:
> > On 20-Mar 13:50, Peter Zijlstra wrote:
> > > On Mon, Mar 20, 2017 at 01:35:12PM +0100, Rafael J. Wysocki wrote:
> > > > On Monday, March 20, 2017 11
On 20-Mar 10:51, Tejun Heo wrote:
> Hello, Patrick.
Hi Tejun,
> On Tue, Feb 28, 2017 at 02:38:37PM +, Patrick Bellasi wrote:
> > a) Boosting of important tasks, by enforcing a minimum capacity in the
> > CPUs where they are enqueued for execution.
> > b) Cap
On 16-Mar 00:32, Rafael J. Wysocki wrote:
> On Wed, Mar 15, 2017 at 3:40 PM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > On 15-Mar 12:52, Rafael J. Wysocki wrote:
> >> On Friday, March 03, 2017 12:38:30 PM Patrick Bellasi wrote:
> >> > On 03-Mar 14
hz_get_sleep_length(void)
> return ts->sleep_length;
> }
>
> +/**
> + * tick_nohz_get_idle_calls - return the current idle calls counter value
> + *
> + * Called from the schedutil frequency scaling governor in scheduler context.
> + */
> +unsigned long tick_nohz_get_idle_calls(void)
> +{
> + struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
> +
> + return ts->idle_calls;
> +}
> +
> static void tick_nohz_account_idle_ticks(struct tick_sched *ts)
> {
> #ifndef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
>
--
#include
Patrick Bellasi
< sg_policy->next_freq)
> > + next_freq = sg_policy->next_freq;
> > +
> > if (policy->fast_switch_enabled) {
> > if (sg_policy->next_freq == next_freq) {
> > trace_cpu_frequency(policy->cur,
> > smp_processor_id());
> > @@ -214,7 +234,7 @@ static void sugov_update_single(struct u
> > sugov_iowait_boost(sg_cpu, &util, &max);
> > next_f = get_next_freq(sg_policy, util, max);
> > }
> > - sugov_update_commit(sg_policy, time, next_f);
> > + sugov_update_commit(sg_cpu, sg_policy, time, next_f);
> > }
> >
[...]
--
#include
Patrick Bellasi
On 15-Mar 10:24, Paul E. McKenney wrote:
> On Wed, Mar 15, 2017 at 04:44:39PM +0000, Patrick Bellasi wrote:
> > On 15-Mar 09:10, Paul E. McKenney wrote:
> > > On Wed, Mar 15, 2017 at 06:20:28AM -0700, Joel Fernandes wrote:
> > > > On Wed, Mar 15, 20
On 06-Mar 09:35, Steven Rostedt wrote:
> On Thu, 2 Mar 2017 15:45:03 +
> Patrick Bellasi <patrick.bell...@arm.com> wrote:
>
> > @@ -287,6 +289,10 @@ static void sugov_update_shared(struct
> > update_util_data *hook, u64 time,
> > goto done
On 06-Mar 09:29, Steven Rostedt wrote:
> On Fri, 3 Mar 2017 09:11:25 +0530
> Viresh Kumar <viresh.ku...@linaro.org> wrote:
>
> > On 02-03-17, 15:45, Patrick Bellasi wrote:
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index e2ed
000, Patrick Bellasi wrote:
>
> > a) Bias OPP selection.
> >Thus granting that certain critical tasks always run at least at a
> >specified frequency.
> >
> > b) Bias TASKS placement, which requires an additional extension not
> >yet posted
On 12-Apr 16:34, Peter Zijlstra wrote:
> On Wed, Apr 12, 2017 at 02:27:41PM +0100, Patrick Bellasi wrote:
> > On 12-Apr 14:48, Peter Zijlstra wrote:
> > > On Tue, Apr 11, 2017 at 06:58:33PM +0100, Patrick Bellasi wrote:
> > > > > illustrated per your above poin
On 12-Apr 14:15, Peter Zijlstra wrote:
> On Tue, Apr 11, 2017 at 06:58:33PM +0100, Patrick Bellasi wrote:
> > We should consider also that at the CPUFreq side we already expose
> > knobs like scaling_{min,max}_freq which are much more platform
> > dependant than capacity
On 12-Apr 14:10, Peter Zijlstra wrote:
> Let me reply in parts as I read this.. easy things first :-)
>
> On Tue, Apr 11, 2017 at 06:58:33PM +0100, Patrick Bellasi wrote:
> > On 10-Apr 09:36, Peter Zijlstra wrote:
>
> > > 4) they have muddled semantics, because wh
On 12-Apr 14:22, Peter Zijlstra wrote:
> On Tue, Apr 11, 2017 at 06:58:33PM +0100, Patrick Bellasi wrote:
> > Sorry, I don't get instead what are the "confusing nesting properties"
> > you are referring to?
>
> If a parent group sets min=.2 and max=.8, what are
On 12-Apr 14:48, Peter Zijlstra wrote:
> On Tue, Apr 11, 2017 at 06:58:33PM +0100, Patrick Bellasi wrote:
> > > illustrated per your above points in that it affects both, while in
> > > fact it actually modifies another metric, namely util_avg.
> >
> >
On 12-Apr 18:14, Peter Zijlstra wrote:
> On Wed, Apr 12, 2017 at 03:43:10PM +0100, Patrick Bellasi wrote:
> > On 12-Apr 16:34, Peter Zijlstra wrote:
> > > On Wed, Apr 12, 2017 at 02:27:41PM +0100, Patrick Bellasi wrote:
> > > > On 12-Apr 14:48, Peter Zijlstra wrote:
On 12-Apr 17:37, Peter Zijlstra wrote:
> On Wed, Apr 12, 2017 at 02:55:38PM +0100, Patrick Bellasi wrote:
> > On 12-Apr 14:10, Peter Zijlstra wrote:
>
> > > Even for the cgroup interface, I think they should set a per-task
> > > property, not a group property.
> &
On 02-Mar 17:09, Vincent Guittot wrote:
> On 2 March 2017 at 16:45, Patrick Bellasi <patrick.bell...@arm.com> wrote:
> > The current version of schedutil has some issues related to the management
> > of update flags used by systems with frequency domains spanning multiple
&
On 03-Mar 14:01, Viresh Kumar wrote:
> On 02-03-17, 15:45, Patrick Bellasi wrote:
> > diff --git a/kernel/sched/cpufreq_schedutil.c
> > b/kernel/sched/cpufreq_schedutil.c
> > @@ -293,15 +305,29 @@ static void sugov_update_shared(struct
> > update_util_data *hook,
On 03-Mar 10:49, Viresh Kumar wrote:
> On 02-03-17, 15:45, Patrick Bellasi wrote:
> > In systems where multiple CPUs share the same frequency domain, a small
> > workload on a CPU can still be subject to frequency spikes, generated by
> > the activation of the sugov's kthread.
>
not make sense for it to bias
the schedutil's frequency selection.
This patch exploits the information related to the current task to silently
ignore cpufreq_update_this_cpu() calls, coming from the RT scheduler, while
the sugov kthread is running.
Signed-off-by: Patrick Bellasi <patrick.b
Patrick
[1] https://gist.github.com/d6a21b459a18091b2b058668a550010d
Patrick Bellasi (6):
cpufreq: schedutil: reset sg_cpus's flags at IDLE enter
cpufreq: schedutil: ignore the sugov kthread for frequencies
selections
cpufreq: schedutil: ensure max frequency while running RT/DL tasks
domain to scale down the frequency in
case that should be needed.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
Cc: Viresh Kumar <viresh.
-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
Cc: Viresh Kumar <viresh.ku...@linaro.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vge
Under certain conditions (i.e. CPU entering idle and current task being
the sugov thread) we can skip a frequency update.
Thus, let's postpone the collection of the FAIR utilisation until it is
really needed.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redha
update
calls in the only sensible places, which are:
- when an RT task wakes up and is enqueued on a CPU
- when we actually pick a RT task for execution
- at each tick time
- when a task is set to be RT
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redha
is not in progress, a frequency switch is always authorized when
running in "rt_mode", i.e. the current task in a CPU belongs to the
RT/DL class.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.o
Hi Paul,
On 30-Mar 14:15, Paul Turner wrote:
> On Mon, Mar 20, 2017 at 11:08 AM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
> > On 20-Mar 13:15, Tejun Heo wrote:
> >> Hello,
> >>
> >> On Tue, Feb 28, 2017 at 02:38:38PM +, Patrick Bel
T_DL)
> + if ((j_sg_cpu->flags & SCHED_CPUFREQ_RT_DL) ||
> j_sg_cpu->overload)
> return policy->cpuinfo.max_freq;
>
> j_util = j_sg_cpu->util;
> @@ -273,12 +274,13 @@ static void sugov_update_shared(struct u
> sg_cpu->util = util;
> sg_cpu->max = max;
> sg_cpu->flags = flags;
> + sg_cpu->overload = this_rq()->rd->overload;
>
> sugov_set_iowait_boost(sg_cpu, time, flags);
> sg_cpu->last_update = time;
>
> if (sugov_should_update_freq(sg_policy, time)) {
> - if (flags & SCHED_CPUFREQ_RT_DL)
> + if ((flags & SCHED_CPUFREQ_RT_DL) || sg_cpu->overload)
> next_f = sg_policy->policy->cpuinfo.max_freq;
> else
> next_f = sugov_next_freq_shared(sg_cpu);
>
--
#include
Patrick Bellasi
On 21-Mar 15:46, Rafael J. Wysocki wrote:
> On Tuesday, March 21, 2017 02:38:42 PM Patrick Bellasi wrote:
> > On 21-Mar 15:26, Rafael J. Wysocki wrote:
> > > On Tuesday, March 21, 2017 02:37:08 PM Vincent Guittot wrote:
> > > > On 21 March 2017 at 14:22, Peter Zijlstr
; Note that utilization is an absolute metric, not a windowed one. That
> is, there is no actual time associated with it. Now, for practical
> purposes we end up using windowed things in many places,
>
--
#include
Patrick Bellasi
On 07-Apr 17:30, Peter Zijlstra wrote:
> On Thu, Mar 02, 2017 at 03:45:04PM +0000, Patrick Bellasi wrote:
> > + struct task_struct *curr = cpu_curr(smp_processor_id());
>
> Isn't that a weird way of writing 'current' ?
Right... (cough)... it's a new-fangled way. :-/
Will
On 29-Mar 00:18, Rafael J. Wysocki wrote:
> On Thursday, March 02, 2017 03:45:02 PM Patrick Bellasi wrote:
> > Currently, sg_cpu's flags are set to the value defined by the last call of
> > the cpufreq_update_util()/cpufreq_update_this_cpu(); for RT/DL classes
>
to be better in
sync with the current status of a CPU.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
Cc: Viresh Kumar <viresh.ku...@l
sense to have flags
aggregation in the schedutil code instead of the core scheduler.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
Cc:
[1] https://lkml.org/lkml/2017/3/2/385
[2] https://gist.github.com/derkling/0cd7210e4fa6f2ec3558073006e5ad70
Patrick Bellasi (6):
cpufreq: schedutil: ignore sugov kthreads
cpufreq: schedutil: reset sg_cpus's flags at IDLE enter
cpufreq: schedutil: ensure max frequency while running RT/DL tasks
not make sense for it to bias
the schedutil's frequency selection policy.
This patch exploits the information related to the current task to silently
ignore cpufreq_update_this_cpu() calls, coming from the RT scheduler, while
the sugov kthread is running.
Signed-off-by: Patrick Bellasi
e possible.
Signed-off-by: Patrick Bellasi <patrick.bell...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
Cc: Viresh Kumar <viresh.ku...@linaro.org>
Cc: linux-kernel@vger