On 11-Apr 10:07, Viresh Kumar wrote:
> On 10-04-18, 16:59, Patrick Bellasi wrote:
> > The iowait boosting code has been recently updated to add a progressive
> > boosting behavior which allows to be less aggressive in boosting tasks
> > doing only sporadic IO operations, t
On 10-Apr 21:37, Peter Zijlstra wrote:
> On Tue, Apr 10, 2018 at 04:59:31PM +0100, Patrick Bellasi wrote:
> > The iowait boosting code has been recently updated to add a progressive
> > boosting behavior which allows to be less aggressive in boosting tasks
> > doing only s
On 11-Apr 08:57, Vincent Guittot wrote:
> On 10 April 2018 at 13:04, Patrick Bellasi wrote:
> > On 09-Apr 10:51, Vincent Guittot wrote:
> >> On 6 April 2018 at 19:28, Patrick Bellasi wrote:
> >> Peter,
> >> what was your goal with adding the cond
On 11-Apr 09:57, Vincent Guittot wrote:
> On 6 April 2018 at 19:28, Patrick Bellasi wrote:
>
> > }
> > @@ -5454,8 +5441,11 @@ static void dequeue_task_fair(struct rq *rq, struct
> > task_struct *p, int flags)
> > update_cfs_group(se);
> &g
Hi Tejun,
On 09-Apr 15:24, Tejun Heo wrote:
> On Mon, Apr 09, 2018 at 05:56:12PM +0100, Patrick Bellasi wrote:
> > This patch extends the CPU controller by adding a couple of new attributes,
> > util_min and util_max, which can be used to enforce frequency boosting and
> >
functions and better align the in-code documentation.
Signed-off-by: Patrick Bellasi
Reported-by: Viresh Kumar
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Joel Fernandes
Cc: Steve Muckle
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org
On 10-Apr 16:26, Viresh Kumar wrote:
> On 10-04-18, 11:43, Patrick Bellasi wrote:
> > On 05-Apr 15:28, Viresh Kumar wrote:
> > What about this new version for the two functions,
> > just compile tested:
> >
> > ---8<---
> >
> > static void sugo
Hi Joel,
On 06-Apr 16:48, Joel Fernandes wrote:
> On Fri, Apr 6, 2018 at 10:28 AM, Patrick Bellasi
> wrote:
> > Schedutil is not properly updated when the first FAIR task wakes up on a
> > CPU and when a RQ is (un)throttled. This is mainly due to the current
> > int
Hi Vincent,
On 09-Apr 10:51, Vincent Guittot wrote:
> Hi Patrick
>
> On 6 April 2018 at 19:28, Patrick Bellasi wrote:
> > Schedutil is not properly updated when the first FAIR task wakes up on a
> > CPU and when a RQ is (un)throttled. This is mainly due to the current
>
Hi Vincent,
On 05-Apr 15:28, Viresh Kumar wrote:
> On 28-03-18, 10:07, Patrick Bellasi wrote:
> > diff --git a/kernel/sched/cpufreq_schedutil.c
> > b/kernel/sched/cpufreq_schedutil.c
> > index 2b124811947d..c840b0626735 100644
> > --- a/kernel/sched/cpufreq_schedut
for a
specified task by extending sched_setattr, a syscall which already
allows to define task specific properties for different scheduling
classes.
Specifically, a new pair of attributes allows to specify a minimum and
maximum utilization which the scheduler should consider for a task.
Signed-off-by:
boosting and capping are defined to be:
- util_min: 0
- util_max: SCHED_CAPACITY_SCALE
which means that by default no boosting/capping is enforced on FAIR
tasks, and thus the frequency will be selected considering the actual
utilization value of each CPU.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
*cgroup_subsys_state, a
clamp group is assigned to the task, which is possibly different than
the task specific clamp group. We then ensure to update the current
clamp group accounting for all the tasks which are currently runnable on
the cgroup via a new uclamp_group_get_tg() call.
Signed-off-by: Pa
always run at the maximum OPP if not otherwise
constrained by userspace.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc
return the properly aggregated
constraints as described above. This will also make sched_getattr a
convenient userspace API to know the utilization constraints enforced on
a task by the cgroup's CPU controller.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Tejun Heo
Cc
ceed the number of maximum different
clamp values supported.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Paul Turner
Cc: Joel Fernandes
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: Morten Rasmussen
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org
---
or going to schedule a less boosted
or more clamped task.
Moreover, the expected number of different clamp values, which can be
configured at build time, is usually so small that a more advanced
ordering algorithm is not worth the complexity. In real use-cases we
expect fewer than 10 different values.
Signed-off
em at the maximum
frequency is not strictly required.
Cheers Patrick
Patrick Bellasi (7):
sched/core: uclamp: add CPU clamp groups accounting
sched/core: uclamp: map TASK clamp values into CPU clamp groups
sched/core: uclamp: extend sched_setattr to support utilization
clamping
sched
s have been verified to give
PELT a further improvement in performance, compared to other out-of-tree
load tracking solutions, when it comes to track interactive workloads
thus better supporting both tasks placements and frequencies selections.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc
utilization, which is updated by a following sched event
This new proposal also allows to better aggregate the schedutil-related
flags, which are required only at enqueue_task_fair() time.
Indeed, IOWAIT and MIGRATION flags are now requested only when a task is
actually visible at the root cfs_rq level.
t; # CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
The governor in use is not schedutil... thus util_est could affect the
test just because of signal-tracking overheads, or because of the way
we affect task placement in the WK and LB paths... which can be
correlated to the impact on task migrations and preemptions...
> CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
> # CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
> # CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
> CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
> CONFIG_CPU_FREQ_GOV_POWERSAVE=y
> CONFIG_CPU_FREQ_GOV_USERSPACE=y
> CONFIG_CPU_FREQ_GOV_ONDEMAND=y
> CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
> # CONFIG_CPU_FREQ_GOV_SCHEDUTIL is not set
>
> #
> # CPU frequency scaling drivers
> #
> CONFIG_X86_INTEL_PSTATE=y
> CONFIG_X86_PCC_CPUFREQ=m
> CONFIG_X86_ACPI_CPUFREQ=m
> CONFIG_X86_ACPI_CPUFREQ_CPB=y
> CONFIG_X86_POWERNOW_K8=m
> CONFIG_X86_AMD_FREQ_SENSITIVITY=m
> # CONFIG_X86_SPEEDSTEP_CENTRINO is not set
> CONFIG_X86_P4_CLOCKMOD=m
>
--
#include
Patrick Bellasi
.
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Joel Fernandes
Cc: Steve Muckle
Cc: Juri Lelli
Cc: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org
Cc: linux...@vger.kernel.org
---
Based on today's tip/sched/core:
b720342 sched
the if (!sd) ?
That's the same thing I was also proposing in my reply to this patch.
But in my case the point was mainly to make the code easier to
follow... which in the end also avoids all the considerations on
dependencies you describe above.
Joel, can you have a look at what I proposed... I was not entirely
sure whether we miss some code paths doing it that way.
> If you still want to keep the logic this way, then probably you should
> also check if (tmp->flags & sd_flag) == true in the loop? That way
> energy_sd wont be set at all (Since we're basically saying we dont
> want to do wake up across this sd (in energy aware fashion in this
> case) if the domain flags don't watch the wake up sd_flag.
>
> thanks,
>
> - Joel
--
#include
Patrick Bellasi
On 21-Mar 14:26, Quentin Perret wrote:
> On Wednesday 21 Mar 2018 at 12:39:21 (+), Patrick Bellasi wrote:
> > On 20-Mar 09:43, Dietmar Eggemann wrote:
> > > From: Quentin Perret
> >
> > [...]
> >
> > > +static unsigned long comp
ine)
> @@ -6586,6 +6652,8 @@ select_task_rq_fair(struct task_struct *p, int
> prev_cpu, int sd_flag, int wake_f
> if (want_affine)
> current->recent_used_cpu = cpu;
> }
> + } else if (energy_sd) {
> + new_cpu = find_energy_efficient_cpu(energy_sd, p, prev_cpu);
> } else {
> new_cpu = find_idlest_cpu(sd, p, cpu, prev_cpu, sd_flag);
> }
--
#include
Patrick Bellasi
> + return energy;
> +}
> +
> /*
> * select_task_rq_fair: Select target runqueue for the waking task in domains
> * that have the 'sd_flag' flag set. In practice, this is SD_BALANCE_WAKE,
> --
> 2.11.0
>
--
#include
Patrick Bellasi
t; work only when schedutil is in use (if so we should probably make it
> conditional on that)?
Yes, I would say that EAS mostly makes sense when you have a minimum
of control over OPPs... otherwise all the energy estimations are really
fuzzy.
> Also, even when schedutil is in use, shouldn't we ask it for a util
> "computation" instead of replicating its _current_ heuristic?
Are you proposing to have the 1.25 factor only here and remove it from
schedutil?
> I fear the two might diverge in the future.
That could be avoided by factoring out from schedutil the
"compensation" factor into a proper function to be used by all the
interested players, couldn't it?
--
#include
Patrick Bellasi
Commit-ID: a07630b8b2c16f82fd5b71d890079f4dd7599c1d
Gitweb: https://git.kernel.org/tip/a07630b8b2c16f82fd5b71d890079f4dd7599c1d
Author: Patrick Bellasi
AuthorDate: Fri, 9 Mar 2018 09:52:44 +
Committer: Ingo Molnar
CommitDate: Tue, 20 Mar 2018 08:11:08 +0100
sched/cpufreq/schedutil
Commit-ID: d519329f72a6f36bc4f2b85452640cfe583b4f81
Gitweb: https://git.kernel.org/tip/d519329f72a6f36bc4f2b85452640cfe583b4f81
Author: Patrick Bellasi
AuthorDate: Fri, 9 Mar 2018 09:52:45 +
Committer: Ingo Molnar
CommitDate: Tue, 20 Mar 2018 08:11:09 +0100
sched/fair: Update
Commit-ID: f9be3e5961c5554879a491961187472e923f5ee0
Gitweb: https://git.kernel.org/tip/f9be3e5961c5554879a491961187472e923f5ee0
Author: Patrick Bellasi
AuthorDate: Fri, 9 Mar 2018 09:52:43 +
Committer: Ingo Molnar
CommitDate: Tue, 20 Mar 2018 08:11:07 +0100
sched/fair: Use
Commit-ID: 7f65ea42eb00bc902f1c37a71e984e4f4064cfa9
Gitweb: https://git.kernel.org/tip/7f65ea42eb00bc902f1c37a71e984e4f4064cfa9
Author: Patrick Bellasi
AuthorDate: Fri, 9 Mar 2018 09:52:42 +
Committer: Ingo Molnar
CommitDate: Tue, 20 Mar 2018 08:11:06 +0100
sched/fair: Add
of the estimated utilization (at
previous dequeue time) of all the tasks currently RUNNABLE on that CPU.
This allows to properly represent the spare capacity of a CPU which, for
example, has just got a big task running after a long sleep period.
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar
is tracked only for
objects of interest, specifically:
- Tasks: to better support tasks placement decisions
- root cfs_rqs: to better support both tasks placement decisions as
well as frequencies selection
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
l/2018/2/22/639
20180222170153.673-1-patrick.bell...@arm.com
[2] git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
sched/core (commit 083c6eeab2cc)
[3] https://lkml.org/lkml/2018/1/23/645
20180123180847.4477-1-patrick.bell...@arm.com
Patrick Bellasi (4):
sched/fair: add util_est o
-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Acked-by: Rafael J. Wysocki
Acked-by: Viresh Kumar
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Paul Turner
Cc: Vincent Guittot
Cc: Morten Rasmussen
Cc: Dietmar Eggemann
Cc: linux-kernel
-by: Chris Redpath
Signed-off-by: Patrick Bellasi
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Paul Turner
Cc: Vincent Guittot
Cc: Morten Rasmussen
Cc: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org
---
Changes in v6:
- remove READ_ONCE from rq-lock protected code paths
- change flag name
On 08-Mar 10:48, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:53PM +0000, Patrick Bellasi wrote:
> > +#define UTIL_EST_NEED_UPDATE_FLAG 0x1
>
> > @@ -5321,12 +5345,19 @@ static inline void util_est_dequeue(struct cfs_rq
> > *cfs_rq,
> > if (!task
On 07-Mar 10:39, Peter Zijlstra wrote:
> On Tue, Mar 06, 2018 at 07:58:51PM +0100, Peter Zijlstra wrote:
> > On Thu, Feb 22, 2018 at 05:01:50PM +, Patrick Bellasi wrote:
> > > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> > > +
On 07-Mar 13:24, Peter Zijlstra wrote:
> On Wed, Mar 07, 2018 at 11:31:49AM +0000, Patrick Bellasi wrote:
> > > It appears to me this isn't a stable situation and completely relies on
> > > the !nr_running case to recalibrate. If we ensure that doesn't happen
> > > f
On 07-Mar 13:26, Peter Zijlstra wrote:
> On Wed, Mar 07, 2018 at 11:47:11AM +0000, Patrick Bellasi wrote:
> > On 06-Mar 20:02, Peter Zijlstra wrote:
> > > On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > > > +struct util_est
On 06-Mar 19:56, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > +/**
> > + * Estimation Utilization for FAIR tasks.
> > + *
> > + * Support data structure to track an Exponential Weighted Moving Average
> > + * (EWMA)
On 06-Mar 20:02, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > +struct util_est {
> > + unsigned intenqueued;
> > + unsigned intewma;
> > +#define UTIL_EST_WEIGHT_SHIFT
On 06-Mar 19:58, Peter Zijlstra wrote:
> On Thu, Feb 22, 2018 at 05:01:50PM +0000, Patrick Bellasi wrote:
> > +static inline void util_est_enqueue(struct cfs_rq *cfs_rq,
> > + struct task_struct *p)
> > +{
> > + unsigned int enqueued;
&g
The changelog is missing the below CCs. :(
Since that's a new patch in this series, I expect some feedback and
thus I'll add them on the next respin.
On 22-Feb 17:01, Patrick Bellasi wrote:
> The estimated utilization of a task is currently updated every time the
> task is dequeued. H
This is missing the below #ifdef guards; adding a note here for
the next respin on the list.
On 22-Feb 17:01, Patrick Bellasi wrote:
[...]
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e1febd252a84..c8526687f107 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/
.
This allows to properly represent the spare capacity of a CPU which, for
example, has just got a big task running after a long sleep period.
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Paul Turner
Cc
-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Acked-by: Rafael J. Wysocki
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
Cc: Paul Turner
Cc: Vincent Guittot
Cc: Morten Rasmussen
Cc: Dietmar Eggemann
Cc: linux-kernel@vger.kernel.org
Cc: linux
-by: Chris Redpath
Signed-off-by: Patrick Bellasi
---
Changes in v5:
- set SCHED_FEAT(UTIL_EST, true) as default (Peter)
---
kernel/sched/fair.c | 39 +++
kernel/sched/features.h | 2 +-
2 files changed, 36 insertions(+), 5 deletions(-)
diff --git
: to better support tasks placement decisions
- root cfs_rqs: to better support both tasks placement decisions as
well as frequencies selection
Signed-off-by: Patrick Bellasi
Reviewed-by: Dietmar Eggemann
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Rafael J. Wysocki
Cc: Viresh Kumar
t?h=v4.16-rc2#n1508
[4] https://lkml.org/lkml/2018/1/23/645
20180123180847.4477-1-patrick.bell...@arm.com
Patrick Bellasi (4):
sched/fair: add util_est on top of PELT
sched/fair: use util_est in LB and WU paths
sched/cpufreq_schedutil: use util_est for OPP selection
sched/fair: