[1] https://lkml.org/lkml/2013/10/18/121
[2] https://lkml.org/lkml/2013/11/5/239
[3] https://lkml.org/lkml/2013/11/5/449
Hi Vincent,
given the discussion we had for v1-v3 and a short boot test of v4:
For patch 1/5, 4/5, 5/5 on ARM TC2 (heterogeneous dual socket w/o SMT
machine):
Reviewed-by: Dietmar Eggemann
Test
On 21/03/14 11:04, Vincent Guittot wrote:
On 20 March 2014 18:18, Dietmar Eggemann wrote:
On 20/03/14 17:02, Vincent Guittot wrote:
On 20 March 2014 13:41, Dietmar Eggemann wrote:
On 19/03/14 16:22, Vincent Guittot wrote:
We replace the old way to configure the scheduler topology with a
[...]
>> In that same discussion ISTR a suggestion about adding avg_running time,
>> as opposed to the current avg_runnable. The sum of avg_running should be
>> much more accurate, and still react correctly to migrations.
>
> I haven't looked in detail but I agree that avg_running would be much
>
Hi Bruno and Josh,
On 16/07/14 17:17, Josh Boyer wrote:
Adding Dietmar in since he is the original author.
josh
On Wed, Jul 16, 2014 at 09:55:46AM -0500, Bruno Wolff III wrote:
caffcdd8d27ba78730d5540396ce72ad022aff2c has been causing crashes
early in the boot process on one of three machines
Hi Greg,
On 16/07/14 19:52, Greg Donald wrote:
On Wed, Jul 16, 2014 at 05:27:36PM +0200, Peter Zijlstra wrote:
Could you confirm if reverting caffcdd8d27ba78730d5540396ce72ad022aff2c
cures things for you?
Otherwise there's two very similar issues, see also:
lkml.kernel.org/r/2014071614554
On 16/07/14 21:54, Bruno Wolff III wrote:
On Wed, Jul 16, 2014 at 21:17:32 +0200,
Dietmar Eggemann wrote:
Hi Bruno and Josh,
From the issue, I see that the machine making trouble is a Xeon (2
processors w/ hyper-threading).
Could you please share:
cat /proc/cpuinfo and
I have attached
On 17/07/14 05:09, Bruno Wolff III wrote:
On Thu, Jul 17, 2014 at 01:18:36 +0200,
Dietmar Eggemann wrote:
So the output of
$ cat /proc/sys/kernel/sched_domain/cpu*/domain*/*
would be handy too.
Thanks, this was helpful.
I see from the sched domain layout that you have SMT (domain0) and
On 17/07/14 11:04, Peter Zijlstra wrote:
On Thu, Jul 17, 2014 at 10:57:55AM +0200, Dietmar Eggemann wrote:
There is also the possibility that the memory for sched_group sg is not
(completely) zeroed out:
sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size
0 0 0 0 0 16 1196 1080
46 43570 75 0 0 1080 0 0 0 0 0 0 0 0 0 2947 280 0
domain1 1f 18768 18763 3 3006 2 0 9 18055 6 6 0 0 0 0 0 1 1125 996 94
81038 43 0 18 978 0 0 0 0 0 0 0 0 0 1582 172 0
# cat /proc/sys/kernel/sched_domain/cpu0/domain*/name
GMC
DIE
so MC level gets changed to mask 0-1.
On 23/07/14 10:31, Michel Dänzer wrote:
> On 23.07.2014 18:25, Peter Zijlstra wrote:
>> On Wed, Jul 23, 2014 at 10:28:19AM +0200, Peter Zijlstra wrote:
>>
>>> Of course, the other thing that patch did is clear sgp->power (now
>>> sgc->capacity).
>>
>> Hmm, re-reading the thread there isn't a clear
... turned out that probably the cc list was too big for lkml. Dropping
all the individual email addresses on CC.
... it seems that this message hasn't made it to the list. Apologies to
everyone on To: and Cc: receiving it again.
On 03/06/14 13:14, Peter Zijlstra wrote:
> On Fri, May 30, 2014 at
On 09/06/14 22:18, Yuyang Du wrote:
> On Mon, Jun 09, 2014 at 06:56:17PM +0100, Dietmar Eggemann wrote:
>
> Thanks, Dietmar.
>
>> I'm running these patches on my ARM TC2 on top of
>> kernel/git/torvalds/linux.git (v3.15-rc7-79-gfe45736f4134). There're
>> c
On 10/06/14 19:09, Yuyang Du wrote:
> On Tue, Jun 10, 2014 at 12:52:06PM +0100, Dietmar Eggemann wrote:
>
> Hi Dietmar,
>
>> Not in this sense but there is no functionality in the scheduler right
>> now to check constantly if an sd flag has been set/unset via sysctl.
Hi Vincent & Peter,
On 28/05/14 07:49, Vincent Guittot wrote:
[...]
>
> Nick,
>
> While doing some rework on the wake affine part of the scheduler, I
> failed to catch the use case that takes advantage of a condition that
> you added a while ago with the commit
> a3f21bce1fefdf92a4d1705e888d3
On 23/05/14 16:53, Vincent Guittot wrote:
> Monitor the activity level of each group of each sched_domain level. The
> activity is the amount of cpu_power that is currently used on a CPU or group
> of CPUs. We use the runnable_avg_sum and _period to evaluate this activity
> level. In the special us
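As a rough model of what this metric computes, here is a minimal Python sketch (the function names, tuple layout and the `+ 1` divisor guard are assumptions for illustration, not code taken from the patch):

```python
def cpu_activity(runnable_avg_sum, runnable_avg_period, cpu_power):
    # Share of cpu_power currently in use on one CPU, taken as the
    # ratio of the runnable geometric series to the period series.
    # The "+ 1" guards against a zero period right after boot.
    return cpu_power * runnable_avg_sum // (runnable_avg_period + 1)

def group_activity(cpus):
    # Activity of a sched_group: the sum over its CPUs, each given as
    # a (runnable_avg_sum, runnable_avg_period, cpu_power) tuple.
    return sum(cpu_activity(*c) for c in cpus)

# A CPU busy half the time at cpu_power 1024 reports about 512:
print(cpu_activity(24000, 47999, 1024))
```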
On 23/05/14 16:53, Vincent Guittot wrote:
> If the CPU is used for handling a lot of IRQs, trigger a load balance to check
> if it's worth moving its tasks to another CPU that has more capacity
>
> Signed-off-by: Vincent Guittot
> ---
> kernel/sched/fair.c | 13 +
> 1 file changed, 13 ins
On 23/05/14 16:52, Vincent Guittot wrote:
> power_orig is only changed for systems with an SMT sched_domain level in order
> to reflect the lower capacity of CPUs. Heterogeneous systems also have to
> reflect an original capacity that is different from the default value.
>
> Create a more generi
On 30/05/14 20:20, Vincent Guittot wrote:
On 30 May 2014 11:50, Dietmar Eggemann wrote:
On 23/05/14 16:53, Vincent Guittot wrote:
Monitor the activity level of each group of each sched_domain level. The
activity is the amount of cpu_power that is currently used on a CPU or group
of CPUs. We
[...]
>> (1) We assume that the current way (update_cpu_power() calls
>> arch_scale_freq_power() to get the avg power(freq) over the time period
>> since the last call to arch_scale_freq_power()) is suitable
>> for us. Do you have another opinion here?
>
> Using power (or power_freq as you mention
[...]
>> Firstly, we need to scale cpu power in update_cpu_power() regarding
>> uArch, frequency and rt/irq pressure.
>> Here the freq related value we get back from arch_scale_freq_power(...,
>> cpu) could be an instantaneous value (curr_freq(cpu)/max_freq(cpu)).
>>
>> Secondly, to be able to scal
From: Dietmar Eggemann
Since is_same_group is only used in group scheduling code, there is
no need to define it outside CONFIG_FAIR_GROUP_SCHED.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel
On 31/01/14 14:04, Daniel Lezcano wrote:
> On 01/31/2014 10:39 AM, Preeti U Murthy wrote:
>> Hi Peter,
>>
>> On 01/31/2014 02:32 PM, Peter Zijlstra wrote:
>>> On Fri, Jan 31, 2014 at 02:15:47PM +0530, Preeti Murthy wrote:
>
> If the driver does its own random mapping that will break the gov
On 12/11/13 18:08, Peter Zijlstra wrote:
> On Tue, Nov 12, 2013 at 05:43:36PM +0000, Dietmar Eggemann wrote:
>> This patch removes the sched_domain initializer macros
>> SD_[SIBLING|MC|BOOK|CPU]_INIT in core.c and in archs and replaces them
>> with calls to the new fun
On 20/12/13 14:04, Peter Zijlstra wrote:
>> +/*
>> + * SD_flags allowed in topology descriptions.
>> + *
>> + * SD_SHARE_CPUPOWER - describes SMT topologies
>> + * SD_SHARE_PKG_RESOURCES - describes shared caches
>> + * SD_NUMA - describes NUMA topologies
>> + *
>> + * Odd one o
On 20/12/13 14:08, Peter Zijlstra wrote:
> On Fri, Dec 13, 2013 at 12:11:28PM +, dietmar.eggem...@arm.com wrote:
>> From: Dietmar Eggemann
>>
>> In case the arch is allowed to define the conventional scheduler domain
>> topology level (i.e. the one without SD_NUMA t
On 20/12/13 14:00, Peter Zijlstra wrote:
> On Fri, Dec 13, 2013 at 12:11:20PM +, dietmar.eggem...@arm.com wrote:
>> From: Dietmar Eggemann
>>
>> This patch-set cleans up the scheduler domain level initialization code.
>> It is based on the idea of Peter Zijlstr
Hi Vincent,
On 18/12/13 14:13, Vincent Guittot wrote:
This patch applies on top of the two patches [1][2] that have been proposed by
Peter for creating a new way to initialize sched_domain. It includes some minor
compilation fixes and a trial of using this new method on ARM platform.
[1] https:/
On 04/08/14 04:20, Michael Ellerman wrote:
> On Fri, 2014-08-01 at 14:24 -0700, Sukadev Bhattiprolu wrote:
>> Dietmar Eggemann [dietmar.eggem...@arm.com] wrote:
>> | > ltcbrazos2-lp07 login: [ 181.915974] ------------[ cut here ]------------
>> | > [ 181.91
On 10/10/14 04:21, Yuyang Du wrote:
[...]
@@ -331,21 +330,16 @@ struct cfs_rq {
#ifdef CONFIG_SMP
/*
-* CFS Load tracking
-* Under CFS, load is tracked on a per-entity basis and aggregated up.
-* This allows for the description of both thread and group usage
On 22/10/14 07:07, Mike Turquette wrote:
> Building on top of the scale invariant capacity patches and earlier
We don't have scale-invariant capacity yet, only scale-invariant
load/utilization.
> patches in this series that prepare CFS for scaling cpu frequency, this
> patch implements a simple, na
Hi Mike,
On 22/10/14 07:07, Mike Turquette wrote:
> {en,de}queue_task_fair are updated to track which cpus will have changed
> utilization values as function of task queueing.
The sentence is a little bit misleading. We update the se utilization
contrib and the cfs_rq utilization in {en,de}queue_
Hi Yuyang,
On 15/07/15 01:04, Yuyang Du wrote:
[...]
> @@ -4674,7 +4487,7 @@ static long effective_load(struct task_group *tg, int
> cpu, long wl, long wg)
> /*
> * w = rw_i + @wl
> */
> - w = se->my_q->load.weight + wl;
> +
On 15/07/15 01:04, Yuyang Du wrote:
> For cfs_rq, we have load.weight, runnable_load_avg, and load_avg. We
> now start to clean up how they are used.
>
> First, as group sched_entity already largely uses load_avg, we now expand
> to use load_avg in all cases.
You're talking about group se's or cf
Hi Vincent,
On 03/08/15 10:22, Vincent Guittot wrote:
> Hi Morten,
>
>
> On 7 July 2015 at 20:23, Morten Rasmussen wrote:
>> From: Morten Rasmussen
>>
>
> [snip]
>
>> -
>> #endif
>> diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
>> index 08b7847..9c09e6e 100644
>> ---
On 14/08/15 17:23, Morten Rasmussen wrote:
> From: Dietmar Eggemann
[...]
> @@ -2596,7 +2597,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg
> *sa,
> }
> }
> if (running)
> - sa->ut
On 13/04/16 19:44, Yuyang Du wrote:
> On Wed, Apr 13, 2016 at 05:28:18PM +0200, Vincent Guittot wrote:
[...]
> By "bailing out", you mean return without update because the delta is less
> than 1ms?
yes.
>
>>> Examples of 1 periodic task pinned to a cpu on an ARM64 system, HZ=250
>>> in steady
Replace all rvalue uses of se->my_q with group_cfs_rq(se)
so it is used consistently to access the cfs_rq owned by this se/tg.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/sched/fair.
From: Morten Rasmussen
cpu_avg_load_per_task() is called in situations where the local
sched_group currently has no runnable tasks according to the group
statistics (sum of rq->cfs.h_nr_running) to calculate the local
load_per_task based on the destination cpu average load_per_task. Since
group h
Do the update of total load and total capacity of sched_domain
statistics before detecting if it is the local group. This and the
inclusion of sg=sg->next into the condition of the do...while loop
make the code easier to read.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c |
( +- 0.50% )7.151685712 ( +- 0.64% )
Dietmar Eggemann (5):
sched/fair: Remove remaining power aware scheduling comments
sched/fair: Fix comment in calculate_imbalance()
sched/fair: Clean up the logic in fix_small_imbalance()
sched/fair: Reorder code in update_sd_lb_stats()
sched
Commit 8e7fbcbc22c1 ("sched: Remove stale power aware scheduling remnants
and dysfunctional knobs") deleted the power aware scheduling support.
This patch gets rid of the remaining power aware scheduling comments.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 13 +++---
From: Morten Rasmussen
In calculate_imbalance() load_above_capacity currently has the unit
[load] while it is used as being [load/capacity]. Not only is it wrong, it
also makes it unlikely that load_above_capacity is ever used as the
subsequent code picks the smaller of load_above_capacity and the
("sched/balancing: Fix 'local->avg_load >
sds->avg_load' case in calculate_imbalance()") added the second
operand of the or operator.
Update this comment accordingly and also use the current variable
names.
Signed-off-by: Dietmar Eggemann
---
kernel/sched/fair.c | 7 +++
ion was
if (max_load - this_load >= busiest_load_per_task * imbn)
which over time changed into the current version where
scaled_busy_load_per_task is to be found on both sides of
the if condition.
Signed-off-by: Dietmar Eggemann
---
The original smpnice implementation sets imbalance to the
On 03/05/16 11:12, Peter Zijlstra wrote:
> On Fri, Apr 29, 2016 at 08:32:41PM +0100, Dietmar Eggemann wrote:
>> Avoid the need to add scaled_busy_load_per_task on both sides of the if
>> condition to determine whether imbalance has to be set to
>> busiest->load_per_tas
Hi Vincent,
On 04/05/16 08:17, Vincent Guittot wrote:
> Ensure that changes of the utilization of a sched_entity will be
> reflected in the task_group hierarchy down to the root cfs.
>
> This patch tries another way than the flat utilization hierarchy proposal to
> ensure that the changes will be
Hi Vincent,
On 24/05/16 10:55, Vincent Guittot wrote:
> Fix the insertion of cfs_rq in rq->leaf_cfs_rq_list to ensure that
> a child will always be called before its parent.
>
> The hierarchical order in shares update list has been introduced by
> commit 67e86250f8ea ("sched: Introduce hierarchal or
On 25/05/16 16:01, Vincent Guittot wrote:
> The cfs_rq->avg.last_update_time is initialized to 0 with the main effect
> that the 1st sched_entity that will be attached will keep its
> last_update_time set to 0 and will be attached once again during the
> enqueue.
> Initialize cfs_rq->avg.last_update_t
On 27/05/16 18:16, Vincent Guittot wrote:
> On 27 May 2016 at 17:48, Dietmar Eggemann wrote:
>> On 25/05/16 16:01, Vincent Guittot wrote:
>>> The cfs_rq->avg.last_update_time is initialized to 0 with the main effect
>>> that the 1st sched_entity that w
On 09/05/16 11:48, Peter Zijlstra wrote:
Couldn't you just always access sd->shared via
sd = rcu_dereference(per_cpu(sd_llc, cpu)) for
updating nr_busy_cpus?
The call_rcu() thing is on the sd any way.
@@ -5879,7 +5879,6 @@ static void destroy_sched_domains(struct sched_domain *sd)
DEFINE_PER_CP
On 16/05/16 18:02, Peter Zijlstra wrote:
> On Mon, May 16, 2016 at 04:31:08PM +0100, Dietmar Eggemann wrote:
>> On 09/05/16 11:48, Peter Zijlstra wrote:
>>
>> Couldn't you just always access sd->shared via
>> sd = rcu_dereference(per_cpu(sd_llc, cpu)) for
>&
On 04/09/15 08:26, Vincent Guittot wrote:
> On 3 September 2015 at 21:58, Dietmar Eggemann
> wrote:
[...]
> So, with the patch below that updates the arm definition of
> arch_scale_cpu_capacity, you can add my Acked-by: Vincent Guittot
> on this patch and the additional one
On 04/09/15 10:08, Vincent Guittot wrote:
> On 14 August 2015 at 18:23, Morten Rasmussen wrote:
>> From: Dietmar Eggemann
>>
>> Use the advent of the per-entity load tracking rewrite to streamline the
>> naming of utilization related data and functions by usi
On 03/02/16 11:59, Juri Lelli wrote:
> Define arch_wants_init_cpu_capacity() to return true; so that
> cpufreq_init_cpu_capacity() can go ahead and profile CPU capacities
> at boot time.
[...]
>
> +bool arch_wants_init_cpu_capacity(void)
> +{
> + return true;
Isn't this a little bit too si
On 08/02/16 13:13, Mark Brown wrote:
> On Mon, Feb 08, 2016 at 12:28:39PM +0000, Dietmar Eggemann wrote:
>> On 03/02/16 11:59, Juri Lelli wrote:
>
>>> +bool arch_wants_init_cpu_capacity(void)
>>> +{
>>> + return true;
>
>> Isn't this a lit
On 05/02/16 09:30, Juri Lelli wrote:
> On 04/02/16 16:46, Vincent Guittot wrote:
>> On 4 February 2016 at 16:44, Vincent Guittot
>> wrote:
>>> On 4 February 2016 at 15:13, Juri Lelli wrote:
On 04/02/16 13:35, Vincent Guittot wrote:
> On 4 February 2016 at 13:16, Juri Lelli wrote:
>
On 21/11/14 12:35, Morten Rasmussen wrote:
> On Mon, Nov 03, 2014 at 04:54:41PM +, Vincent Guittot wrote:
>> From: Morten Rasmussen
>>
Could we rename this patch to 'sched: Make usage tracking frequency
scale-invariant'?
The reason is, since we scale sched_avg::running_avg_sum according to
t
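The scaling being discussed can be sketched as a toy model of frequency invariance (the parameter names and the 1024 fixed-point shift are assumptions for illustration, not the kernel implementation):

```python
SCHED_CAPACITY_SHIFT = 10
SCHED_CAPACITY_SCALE = 1 << SCHED_CAPACITY_SHIFT  # 1024

def scale_freq(curr_freq, max_freq):
    # Frequency-invariance factor curr_freq/max_freq in fixed point.
    return curr_freq * SCHED_CAPACITY_SCALE // max_freq

def freq_scaled_delta(delta_us, curr_freq, max_freq):
    # A segment of running time contributes proportionally less to
    # running_avg_sum when the CPU runs below its maximum frequency.
    return (delta_us * scale_freq(curr_freq, max_freq)) >> SCHED_CAPACITY_SHIFT

# 1000us of running time at half the maximum frequency counts as 500us:
print(freq_scaled_delta(1000, 700000, 1400000))
```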
Hi Yuyang,
On 08/10/14 01:50, Yuyang Du wrote:
> Hi Morten,
>
> Sorry for late jumping in.
>
> The problem seems to be self-evident. But for the implementation to be
> equally attractive it needs to account for every freq change for every task,
> or anything less than that makes it less attracti
On 07/10/14 13:13, Vincent Guittot wrote:
> Add new statistics which reflect the average time a task is running on the CPU
> and the sum of the running times of the tasks on a runqueue. The latter is
> named utilization_avg_contrib.
>
> This patch is based on the usage metric that was proposed in
On 12/05/15 20:39, Morten Rasmussen wrote:
> Let available compute capacity and estimated energy impact select
> wake-up target cpu when energy-aware scheduling is enabled and the
> system is not over-utilized (above the tipping point).
>
> energy_aware_wake_cpu() attempts to find group of cpus wi
On 30/04/15 08:46, pang.xun...@zte.com.cn wrote:
> linux-kernel-ow...@vger.kernel.org wrote 2015-03-26 AM 02:44:48:
>
>> Dietmar Eggemann
>>
>> Re: [RFCv3 PATCH 45/48] sched: Skip cpu as lb src which has one task
>> and capacity gte the dst cpu
>>
On 03/05/15 07:27, pang.xun...@zte.com.cn wrote:
> Hi Dietmar,
>
> Dietmar Eggemann wrote 2015-03-24 AM 03:19:41:
>>
>> Re: [RFCv3 PATCH 12/48] sched: Make usage tracking cpu scale-invariant
[...]
>> In the previous patch-set https://lkml.org/lkml/2014/12/2/332 we
On 01/05/15 10:56, pang.xun...@zte.com.cn wrote:
> Hi Dietmar,
>
> Dietmar Eggemann wrote 2015-05-01 AM 04:17:51:
>>
>> Re: [RFCv3 PATCH 37/48] sched: Determine the current sched_group
> idle-state
>>
>> On 30/04/15 06:12, pang.xun...@zte.com.cn wrote:
>
n too.
-- Dietmar
https://git.linaro.org/people/mturquette/linux.git eas-next
commit 1fadb581b0be9420b143e43ff2f4a07ea7e45f6c
Author: Dietmar Eggemann
AuthorDate: Tue Dec 2 14:06:24 2014 +
Commit: Michael Turquette
CommitDate: Tue Dec 9 20:33:17 2014 -0800
sched: Make usag
On 24/02/15 10:21, Vincent Guittot wrote:
> On 19 February 2015 at 18:18, Morten Rasmussen
> wrote:
>> On Thu, Feb 19, 2015 at 04:52:41PM +, Peter Zijlstra wrote:
>>> On Thu, Jan 15, 2015 at 11:09:25AM +0100, Vincent Guittot wrote:
[...]
>> Agreed. I think it is reasonable to assume that th
On 27/02/15 15:54, Vincent Guittot wrote:
When a CPU is used to handle a lot of IRQs or some RT tasks, the remaining
capacity for CFS tasks can be significantly reduced. Once we detect such a
situation by comparing cpu_capacity_orig and cpu_capacity, we trigger an idle
load balance to check if it's wo
On 24/03/15 17:39, Morten Rasmussen wrote:
> On Tue, Mar 24, 2015 at 04:10:37PM +, Peter Zijlstra wrote:
>> On Tue, Mar 24, 2015 at 10:44:24AM +, Morten Rasmussen wrote:
> Maybe remind us why this needs to be tied to sched_groups ? Why can't we
> attach the energy information to the
On 25/03/15 23:50, Sai Gurrappadi wrote:
> On 02/04/2015 10:31 AM, Morten Rasmussen wrote:
>> From: Dietmar Eggemann
>> while (!list_empty(tasks)) {
>> @@ -6121,6 +6121,20 @@ static int detach_tasks(struct lb_env *env)
>> i
On 23/03/15 14:46, Peter Zijlstra wrote:
On Wed, Feb 04, 2015 at 06:30:49PM +, Morten Rasmussen wrote:
From: Dietmar Eggemann
Besides the existing frequency scale-invariance correction factor, apply
cpu scale-invariance correction factor to usage tracking.
Cpu scale-invariance takes cpu
On 23/03/15 16:47, Peter Zijlstra wrote:
On Mon, Mar 16, 2015 at 02:15:46PM +, Morten Rasmussen wrote:
You are absolutely right. The current code is broken for system
topologies where all cpus share the same clock source. To be honest, it
is actually worse than that and you already pointed o
On 24/03/15 13:41, Peter Zijlstra wrote:
On Wed, Feb 04, 2015 at 06:31:15PM +, Morten Rasmussen wrote:
+ .use_ea = (energy_aware() && sd->groups->sge) ? true : false,
The return value of a logical and should already be a boolean.
Indeed, thanks for spotting this!
On 24/03/15 13:56, Peter Zijlstra wrote:
On Wed, Feb 04, 2015 at 06:31:15PM +, Morten Rasmussen wrote:
From: Dietmar Eggemann
Energy-aware load balancing should only happen if the ENERGY_AWARE feature
is turned on and the sched domain on which the load balancing is performed
on contains
On 24/03/15 15:21, Peter Zijlstra wrote:
On Wed, Feb 04, 2015 at 06:31:19PM +, Morten Rasmussen wrote:
+++ b/kernel/sched/fair.c
@@ -7216,6 +7216,37 @@ static struct rq *find_busiest_queue(struct lb_env *env,
unsigned long busiest_load = 0, busiest_capacity = 1;
int i;
+
On 24/03/15 15:26, Peter Zijlstra wrote:
On Wed, Feb 04, 2015 at 06:31:21PM +, Morten Rasmussen wrote:
From: Dietmar Eggemann
Energy-aware load balancing is based on cpu usage, so the upper bound of its
operational range is a fully utilized cpu. Above this tipping point it
makes more sense to
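The tipping-point condition can be sketched like this (the 25% headroom margin is an assumed value chosen for illustration; the patch set defines its own threshold):

```python
def cpu_overutilized(usage, capacity_orig, margin_pct=25):
    # Above the tipping point the CPU has less than margin_pct of its
    # original capacity left, so packing for energy stops making sense
    # and normal load balancing should take over.
    return usage * 100 > capacity_orig * (100 - margin_pct)

print(cpu_overutilized(900, 1024))  # True: above the tipping point
print(cpu_overutilized(400, 1024))  # False: plenty of headroom left
```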
On 24/03/15 15:27, Peter Zijlstra wrote:
On Wed, Feb 04, 2015 at 06:31:22PM +, Morten Rasmussen wrote:
From: Dietmar Eggemann
Skip cpu as a potential src (costliest) in case it has only one task
running and its original capacity is greater than or equal to the
original capacity of the dst
On 27/02/15 15:54, Vincent Guittot wrote:
> Monitor the usage level of each group of each sched_domain level. The usage is
> the portion of cpu_capacity_orig that is currently used on a CPU or group of
> CPUs. We use the utilization_load_avg to evaluate the usage level of each
> group.
>
> The uti
On 27/02/15 15:54, Vincent Guittot wrote:
> From: Morten Rasmussen
>
> Apply frequency scale-invariance correction factor to usage tracking.
> Each segment of the running_load_avg geometric series is now scaled by the
The same comment I sent out on [PATCH v10 07/11]:
The use of underscores in r
Hi Morten,
On 04/02/15 18:31, Morten Rasmussen wrote:
> With energy-aware scheduling enabled nohz_kick_needed() generates many
> nohz idle-balance kicks which lead to nothing when multiple tasks get
> packed on a single cpu to save energy. This causes unnecessary wake-ups
> and hence wastes energy
Hi Mike,
On 14/03/16 05:22, Michael Turquette wrote:
> From: Dietmar Eggemann
>
> Implements cpufreq_scale_freq_capacity() to provide the scheduler with a
> frequency scaling correction factor for more accurate load-tracking.
>
> The factor is:
>
>
On 14/03/16 05:22, Michael Turquette wrote:
> arch_scale_freq_capacity is weird. It specifies an arch hook for an
> implementation that could easily vary within an architecture or even a
> chip family.
>
> This patch helps to mitigate this weirdness by defaulting to the
> cpufreq-provided implemen
On 15/03/16 20:46, Michael Turquette wrote:
> Quoting Dietmar Eggemann (2016-03-15 12:13:58)
>> On 14/03/16 05:22, Michael Turquette wrote:
[...]
>> For me, this independence of the scheduler code from the actual
>> implementation of the Frequency Invariant Engine
Hi Vincent,
On 04/07/2016 02:04 PM, Vincent Guittot wrote:
Hi Dietmar,
On 6 April 2016 at 20:53, Dietmar Eggemann wrote:
On 06/04/16 09:37, Morten Rasmussen wrote:
On Tue, Apr 05, 2016 at 06:00:40PM +0100, Dietmar Eggemann wrote:
[...]
@@ -2910,8 +2920,13 @@ static void
agated down to the root cfs_rq of that
cpu.
This makes decisions based on cpu_util() for scheduling or cpu frequency
settings less accurate in case tasks are running in task groups.
This patch aggregates the task utilization only on the root cfs_rq,
essentially bypassing cfs_rq's and se's r
On 06/04/16 09:37, Morten Rasmussen wrote:
> On Tue, Apr 05, 2016 at 06:00:40PM +0100, Dietmar Eggemann wrote:
>> @@ -2893,8 +2906,12 @@ static void attach_entity_load_avg(struct cfs_rq
>> *cfs_rq, struct sched_entity *s
>> se->avg.last_update_time = cfs_
On 10/04/16 23:36, Yuyang Du wrote:
> __compute_runnable_contrib() uses a loop to compute the sum, whereas a
> table lookup can do it faster in constant time.
>
> The following python script can be used to generate the constants:
>
> print " #: yN_inv yN_sum"
> print "---"
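A minimal version of such a script, assuming the usual PELT parameters (a half-life of 32 periods, i.e. y^32 = 0.5, and 1024 units of weight per full period); the per-term truncation is an assumption about how the table is rounded:

```python
HALF_LIFE = 32           # periods until a contribution halves
WEIGHT = 1024            # fixed-point weight of one full period
y = 0.5 ** (1.0 / HALF_LIFE)

def yn_inv(n):
    # y^n as a 32-bit fixed-point multiplier: x*y^n ~= (x*yn_inv(n)) >> 32
    return int(y ** n * (1 << 32))

def yn_sum(n):
    # 1024*(y + y^2 + ... + y^n), truncating each term
    return sum(int(WEIGHT * y ** k) for k in range(1, n + 1))

print(" #: yN_inv yN_sum")
print("---")
for n in range(HALF_LIFE + 1):
    print("%3d: %#010x %6d" % (n, yn_inv(n), yn_sum(n)))
```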
On 10/04/16 23:36, Yuyang Du wrote:
> After we dropped the incomplete period, the current period should be
> a complete "past" period, since all period boundaries, in the past
> or in the future, are predetermined.
>
> With incomplete current period:
>
On 10/04/16 23:36, Yuyang Du wrote:
[...]
> @@ -2704,11 +2694,14 @@ static __always_inline int
> __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
> unsigned long weight, int running, struct cfs_rq *cfs_rq)
> {
> - u64 delta, scaled_delta, periods;
> - u32 contri
On 10/04/16 23:36, Yuyang Du wrote:
> In __update_load_avg(), the current period is never complete. This
> basically leads to a slightly over-decayed average, say on average we
> have 50% current period, then we will lose 1.08% (= 1 - 0.5^(1/64)) of
> past avg. More importantly, the incomplete current
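The 1.08% figure in the quote follows directly from the decay constant: with y^32 = 0.5, missing half a period on average decays the past sum by a factor of y^(1/2) = 0.5^(1/64). A quick check of that arithmetic:

```python
# With the PELT decay constant y defined by y^32 = 0.5, an average of
# half an unaccounted period decays the past sum by y^(1/2) = 0.5^(1/64).
loss = 1.0 - 0.5 ** (1.0 / 64)
print("average loss of past avg: %.2f%%" % (loss * 100))  # 1.08%
```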
On 05/11/15 10:12, Peter Zijlstra wrote:
>
> People, trim your emails!
>
> On Wed, Nov 04, 2015 at 08:58:30AM -0800, Jacob Pan wrote:
>
>>> I also like #2 too. Specially now that it is not limited to a specific
>>> platform. One question though, could you still keep the cooling device
>>> suppor
On 11/06/2015 07:10 PM, Jacob Pan wrote:
On Fri, 6 Nov 2015 18:30:01 +
Dietmar Eggemann wrote:
On 05/11/15 10:12, Peter Zijlstra wrote:
People, trim your emails!
On Wed, Nov 04, 2015 at 08:58:30AM -0800, Jacob Pan wrote:
I also like #2 too. Specially now that it is not limited to a
On 10/29/2015 05:19 PM, Catalin Marinas wrote:
On Mon, Oct 19, 2015 at 05:55:49PM +0100, Dietmar Eggemann wrote:
Make sure that the task scheduler domain hierarchy is set-up correctly
on systems with single or multi-cluster topology.
Signed-off-by: Dietmar Eggemann
---
arch/arm64/configs
On 28/08/2019 15:14, Rik van Riel wrote:
> On Wed, 2019-08-28 at 09:51 +0200, Dietmar Eggemann wrote:
>> On 22/08/2019 04:17, Rik van Riel wrote:
>>> Now that enqueue_task_fair and dequeue_task_fair no longer iterate
>>> up
>>> the hierarchy all th
On 22/08/2019 04:17, Rik van Riel wrote:
> The current implementation of the CPU controller uses hierarchical
> runqueues, where on wakeup a task is enqueued on its group's runqueue,
> the group is enqueued on the runqueue of the group above it, etc.
>
> This increases a fairly large amount of ove
On 04/09/2019 10:55, Ingo Molnar wrote:
>
> * Ingo Molnar wrote:
>
>> + if (!access_ok(uattr, ksize))
>> + return -EFAULT;
>
> How about we pretend that I never sent v2? ;-)
>
> -v3 attached. Build and minimally boot tested.
>
> Thanks,
>
> Ingo
>
This patch fixes the is
On 22/08/2019 04:17, Rik van Riel wrote:
> Make it clear that update_curr only works on tasks any more.
>
> There is no need for task_tick_fair to call it on every sched entity up
> the hierarchy, so move the call out of entity_tick.
>
> Signed-off-by: Rik van Riel
On 22/08/2019 04:17, Rik van Riel wrote:
> Now that enqueue_task_fair and dequeue_task_fair no longer iterate up
> the hierarchy all the time, a method to lazily propagate sum_exec_runtime
> up the hierarchy is necessary.
>
> Once a tick, propagate the newly accumulated exec_runtime up the hierarc
On 22/08/2019 04:17, Rik van Riel wrote:
> Sometimes the hierarchical load of a sched_entity needs to be calculated.
> Rename task_h_load to task_se_h_load, and directly pass a sched_entity to
> that function.
>
> Move the function declaration up above where it will be used later.
>
> No funct
On 22/08/2019 04:17, Rik van Riel wrote:
> Flatten the hierarchical runqueues into just the per CPU rq.cfs runqueue.
>
> Iteration of the sched_entity hierarchy is rate limited to once per jiffy
> per sched_entity, which is a smaller change than it seems, because load
> average adjustments were al
On 26/08/2019 11:40, Peter Zijlstra wrote:
> On Mon, Aug 26, 2019 at 11:10:52AM +0200, Rafael J. Wysocki wrote:
>> On Wednesday, August 7, 2019 5:33:40 PM CEST Douglas RAILLARD wrote:
>>> Fast switching path only emits an event for the CPU of interest, whereas the
>>> regular path emits an event fo