Re: [PATCH v4 0/5] rework sched_domain topology description

2014-04-12 Thread Dietmar Eggemann
/lkml/2013/10/18/121 [2] https://lkml.org/lkml/2013/11/5/239 [3] https://lkml.org/lkml/2013/11/5/449 Hi Vincent, given the discussion we had for v1-v3 and a short boot test of v4: For patch 1/5, 4/5, 5/5 on ARM TC2 (heterogeneous dual socket w/o SMT machine): Reviewed-by: Dietmar Eggemann Test

Re: [PATCH v3 1/6] sched: rework of sched_domain topology definition

2014-03-24 Thread Dietmar Eggemann
On 21/03/14 11:04, Vincent Guittot wrote: On 20 March 2014 18:18, Dietmar Eggemann wrote: On 20/03/14 17:02, Vincent Guittot wrote: On 20 March 2014 13:41, Dietmar Eggemann wrote: On 19/03/14 16:22, Vincent Guittot wrote: We replace the old way to configure the scheduler topology with a

Re: [PATCH v3 09/12] Revert "sched: Put rq's sched_avg under CONFIG_FAIR_GROUP_SCHED"

2014-07-14 Thread Dietmar Eggemann
[...] >> In that same discussion ISTR a suggestion about adding avg_running time, >> as opposed to the current avg_runnable. The sum of avg_running should be >> much more accurate, and still react correctly to migrations. > > I haven't look in details but I agree that avg_running would be much >

Re: Scheduler regression from caffcdd8d27ba78730d5540396ce72ad022aff2c

2014-07-16 Thread Dietmar Eggemann
Hi Bruno and Josh, On 16/07/14 17:17, Josh Boyer wrote: Adding Dietmar in since he is the original author. josh On Wed, Jul 16, 2014 at 09:55:46AM -0500, Bruno Wolff III wrote: caffcdd8d27ba78730d5540396ce72ad022aff2c has been causing crashes early in the boot process on one of three machines

Re: find_busiest_group divide error

2014-07-16 Thread Dietmar Eggemann
Hi Greg, On 16/07/14 19:52, Greg Donald wrote: On Wed, Jul 16, 2014 at 05:27:36PM +0200, Peter Zijlstra wrote: Could you confirm if reverting caffcdd8d27ba78730d5540396ce72ad022aff2c cures things for you? Otherwise there's two very similar issues, see also: lkml.kernel.org/r/2014071614554

Re: Scheduler regression from caffcdd8d27ba78730d5540396ce72ad022aff2c

2014-07-16 Thread Dietmar Eggemann
On 16/07/14 21:54, Bruno Wolff III wrote: On Wed, Jul 16, 2014 at 21:17:32 +0200, Dietmar Eggemann wrote: Hi Bruno and Josh, From the issue, I see that the machine making trouble is an Xeon (2 processors w/ hyper-threading). Could you please share: cat /proc/cpuinfo and I have attached

Re: Scheduler regression from caffcdd8d27ba78730d5540396ce72ad022aff2c

2014-07-17 Thread Dietmar Eggemann
On 17/07/14 05:09, Bruno Wolff III wrote: On Thu, Jul 17, 2014 at 01:18:36 +0200, Dietmar Eggemann wrote: So the output of $ cat /proc/sys/kernel/sched_domain/cpu*/domain*/* would be handy too. Thanks, this was helpful. I see from the sched domain layout that you have SMT (domain0) and

Re: Scheduler regression from caffcdd8d27ba78730d5540396ce72ad022aff2c

2014-07-17 Thread Dietmar Eggemann
On 17/07/14 11:04, Peter Zijlstra wrote: On Thu, Jul 17, 2014 at 10:57:55AM +0200, Dietmar Eggemann wrote: There is also the possibility that the memory for sched_group sg is not (completely) zeroed out: sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size

Re: Scheduler regression from caffcdd8d27ba78730d5540396ce72ad022aff2c

2014-07-22 Thread Dietmar Eggemann
0 0 0 0 0 16 1196 1080 46 43570 75 0 0 1080 0 0 0 0 0 0 0 0 0 2947 280 0 domain1 1f 18768 18763 3 3006 2 0 9 18055 6 6 0 0 0 0 0 1 1125 996 94 81038 43 0 18 978 0 0 0 0 0 0 0 0 0 1582 172 0 # cat /proc/sys/kernel/sched_domain/cpu0/domain*/name GMC DIE so MC level gets changed to mask 0-1. &

Re: Random panic in load_balance() with 3.16-rc

2014-07-23 Thread Dietmar Eggemann
On 23/07/14 10:31, Michel Dänzer wrote: > On 23.07.2014 18:25, Peter Zijlstra wrote: >> On Wed, Jul 23, 2014 at 10:28:19AM +0200, Peter Zijlstra wrote: >> >>> Of course, the other thing that patch did is clear sgp->power (now >>> sgc->capacity). >> >> Hmm, re-reading the thread there isn't a clear

Re: [RFC PATCH 07/16 v3] Init Workload Consolidation flags in sched_domain

2014-06-09 Thread Dietmar Eggemann
... turned out that probably the cc list was too big for lkml. Dropping all the individual email addresses on CC. ... it seems that this message hasn't made it to the list. Apologies to everyone on To: and Cc: receiving it again. On 03/06/14 13:14, Peter Zijlstra wrote: > On Fri, May 30, 2014 at

Re: [RFC PATCH 07/16 v3] Init Workload Consolidation flags in sched_domain

2014-06-10 Thread Dietmar Eggemann
On 09/06/14 22:18, Yuyang Du wrote: > On Mon, Jun 09, 2014 at 06:56:17PM +0100, Dietmar Eggemann wrote: > > Thanks, Dietmar. > >> I'm running these patches on my ARM TC2 on top of >> kernel/git/torvalds/linux.git (v3.15-rc7-79-gfe45736f4134). There're >> c

Re: [RFC PATCH 07/16 v3] Init Workload Consolidation flags in sched_domain

2014-06-11 Thread Dietmar Eggemann
On 10/06/14 19:09, Yuyang Du wrote: > On Tue, Jun 10, 2014 at 12:52:06PM +0100, Dietmar Eggemann wrote: > > Hi Dietmar, > >> Not in this sense but there is no functionality in the scheduler right >> now to check constantly if an sd flag has been set/unset via sysctl. &

Re: [PATCH v2 02/11] sched: remove a wake_affine condition

2014-05-28 Thread Dietmar Eggemann
Hi Vincent & Peter, On 28/05/14 07:49, Vincent Guittot wrote: [...] > > Nick, > > While doing some rework on the wake affine part of the scheduler, i > failed to catch the use case that takes advantage of a condition that > you added some while ago with the commit > a3f21bce1fefdf92a4d1705e888d3

Re: [PATCH v2 08/11] sched: get CPU's activity statistic

2014-05-30 Thread Dietmar Eggemann
On 23/05/14 16:53, Vincent Guittot wrote: > Monitor the activity level of each group of each sched_domain level. The > activity is the amount of cpu_power that is currently used on a CPU or group > of CPUs. We use the runnable_avg_sum and _period to evaluate this activity > level. In the special us

Re: [PATCH v2 10/11] sched: move cfs task on a CPU with higher capacity

2014-05-30 Thread Dietmar Eggemann
On 23/05/14 16:53, Vincent Guittot wrote: > If the CPU is used for handling a lot of IRQs, trigger a load balance to check if > it's worth moving its tasks to another CPU that has more capacity > > Signed-off-by: Vincent Guittot > --- > kernel/sched/fair.c | 13 + > 1 file changed, 13 ins

Re: [PATCH v2 04/11] sched: Allow all archs to set the power_orig

2014-05-30 Thread Dietmar Eggemann
On 23/05/14 16:52, Vincent Guittot wrote: > power_orig is only changed for systems with an SMT sched_domain level in order > to > reflect the lower capacity of CPUs. Heterogeneous systems also have to reflect > an > original capacity that is different from the default value. > > Create a more generi

Re: [PATCH v2 08/11] sched: get CPU's activity statistic

2014-06-01 Thread Dietmar Eggemann
On 30/05/14 20:20, Vincent Guittot wrote: On 30 May 2014 11:50, Dietmar Eggemann wrote: On 23/05/14 16:53, Vincent Guittot wrote: Monitor the activity level of each group of each sched_domain level. The activity is the amount of cpu_power that is currently used on a CPU or group of CPUs. We

Re: [PATCH v2 04/11] sched: Allow all archs to set the power_orig

2014-06-04 Thread Dietmar Eggemann
[...] >> (1) We assume that the current way (update_cpu_power() calls >> arch_scale_freq_power() to get the avg power(freq) over the time period >> since the last call to arch_scale_freq_power()) is suitable >> for us. Do you have another opinion here? > > Using power (or power_freq as you mention

Re: [PATCH v2 04/11] sched: Allow all archs to set the power_orig

2014-06-05 Thread Dietmar Eggemann
[...] >> Firstly, we need to scale cpu power in update_cpu_power() regarding >> uArch, frequency and rt/irq pressure. >> Here the freq related value we get back from arch_scale_freq_power(..., >> cpu) could be an instantaneous value (curr_freq(cpu)/max_freq(cpu)). >> >> Secondly, to be able to scal

[PATCH] sched: delete is_same_group outside CONFIG_FAIR_GROUP_SCHED

2014-01-29 Thread dietmar . eggemann
From: Dietmar Eggemann Since is_same_group is only used in group scheduling code, there is no need to define it outside CONFIG_FAIR_GROUP_SCHED. Signed-off-by: Dietmar Eggemann --- kernel/sched/fair.c |6 -- 1 file changed, 6 deletions(-) diff --git a/kernel/sched/fair.c b/kernel

Re: [RFC PATCH 3/3] idle: store the idle state index in the struct rq

2014-01-31 Thread Dietmar Eggemann
On 31/01/14 14:04, Daniel Lezcano wrote: > On 01/31/2014 10:39 AM, Preeti U Murthy wrote: >> Hi Peter, >> >> On 01/31/2014 02:32 PM, Peter Zijlstra wrote: >>> On Fri, Jan 31, 2014 at 02:15:47PM +0530, Preeti Murthy wrote: > > If the driver does its own random mapping that will break the gov

Re: [RFC][PATCH v5 01/14] sched: add a new arch_sd_local_flags for sched_domain init

2013-11-13 Thread Dietmar Eggemann
On 12/11/13 18:08, Peter Zijlstra wrote: > On Tue, Nov 12, 2013 at 05:43:36PM +0000, Dietmar Eggemann wrote: >> This patch removes the sched_domain initializer macros >> SD_[SIBLING|MC|BOOK|CPU]_INIT in core.c and in archs and replaces them >> with calls to the new fun

Re: [RFC PATCH 5/8] sched: introduce common topology level init function

2014-01-06 Thread Dietmar Eggemann
On 20/12/13 14:04, Peter Zijlstra wrote: >> +/* >> + * SD_flags allowed in topology descriptions. >> + * >> + * SD_SHARE_CPUPOWER - describes SMT topologies >> + * SD_SHARE_PKG_RESOURCES - describes shared caches >> + * SD_NUMA- describes NUMA topologies >> + * >> + * Odd one o

Re: [RFC PATCH 8/8] sched: remove scheduler domain naming

2014-01-06 Thread Dietmar Eggemann
On 20/12/13 14:08, Peter Zijlstra wrote: > On Fri, Dec 13, 2013 at 12:11:28PM +, dietmar.eggem...@arm.com wrote: >> From: Dietmar Eggemann >> >> In case the arch is allowed to define the conventional scheduler domain >> topology level (i.e. the one without SD_NUMA t

Re: [RFC PATCH 0/8] change scheduler domain hierarchy set-up

2014-01-06 Thread Dietmar Eggemann
On 20/12/13 14:00, Peter Zijlstra wrote: > On Fri, Dec 13, 2013 at 12:11:20PM +, dietmar.eggem...@arm.com wrote: >> From: Dietmar Eggemann >> >> This patch-set cleans up the scheduler domain level initialization code. >> It is based on the idea of Peter Zijlstr

Re: [RFC] sched: CPU topology try

2013-12-23 Thread Dietmar Eggemann
Hi Vincent, On 18/12/13 14:13, Vincent Guittot wrote: This patch applies on top of the two patches [1][2] that have been proposed by Peter for creating a new way to initialize sched_domain. It includes some minor compilation fixes and a trial of using this new method on ARM platform. [1] https:/

Re: scheduler crash on Power

2014-08-04 Thread Dietmar Eggemann
On 04/08/14 04:20, Michael Ellerman wrote: > On Fri, 2014-08-01 at 14:24 -0700, Sukadev Bhattiprolu wrote: >> Dietmar Eggemann [dietmar.eggem...@arm.com] wrote: >> | > ltcbrazos2-lp07 login: [ 181.915974] [ cut here >> ] >> | > [ 181.91

Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking

2014-10-23 Thread Dietmar Eggemann
On 10/10/14 04:21, Yuyang Du wrote: [...] @@ -331,21 +330,16 @@ struct cfs_rq { #ifdef CONFIG_SMP /* -* CFS Load tracking -* Under CFS, load is tracked on a per-entity basis and aggregated up. -* This allows for the description of both thread and group usage

Re: [PATCH RFC 7/7] sched: energy_model: simple cpu frequency scaling policy

2014-10-27 Thread Dietmar Eggemann
On 22/10/14 07:07, Mike Turquette wrote: > Building on top of the scale invariant capacity patches and earlier We don't have scale invariant capacity yet but scale invariant load/utilization. > patches in this series that prepare CFS for scaling cpu frequency, this > patch implements a simple, na

Re: [PATCH RFC 6/7] sched: cfs: cpu frequency scaling based on task placement

2014-10-27 Thread Dietmar Eggemann
Hi Mike, On 22/10/14 07:07, Mike Turquette wrote: > {en,de}queue_task_fair are updated to track which cpus will have changed > utilization values as function of task queueing. The sentence is a little bit misleading. We update the se utilization contrib and the cfs_rq utilization in {en,de}queue_

Re: [PATCH v10 2/7] sched: Rewrite runnable load and utilization average tracking

2015-07-24 Thread Dietmar Eggemann
Hi Yuyang, On 15/07/15 01:04, Yuyang Du wrote: [...] > @@ -4674,7 +4487,7 @@ static long effective_load(struct task_group *tg, int > cpu, long wl, long wg) > /* > * w = rw_i + @wl > */ > - w = se->my_q->load.weight + wl; > +

Re: [PATCH v10 7/7] sched: Clean up load average references

2015-07-24 Thread Dietmar Eggemann
On 15/07/15 01:04, Yuyang Du wrote: > For cfs_rq, we have load.weight, runnable_load_avg, and load_avg. We > now start to clean up how they are used. > > First, as group sched_entity already largely uses load_avg, we now expand > to use load_avg in all cases. You're talking about group se's or cf

Re: [RFCv5 PATCH 01/46] arm: Frequency invariant scheduler load-tracking support

2015-08-17 Thread Dietmar Eggemann
Hi Vincent, On 03/08/15 10:22, Vincent Guittot wrote: > Hi Morten, > > > On 7 July 2015 at 20:23, Morten Rasmussen wrote: >> From: Morten Rasmussen >> > > [snip] > >> - >> #endif >> diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c >> index 08b7847..9c09e6e 100644 >> ---

Re: [PATCH 3/6] sched/fair: Make utilization tracking cpu scale-invariant

2015-08-14 Thread Dietmar Eggemann
On 14/08/15 17:23, Morten Rasmussen wrote: > From: Dietmar Eggemann [...] > @@ -2596,7 +2597,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg > *sa, > } > } > if (running) > - sa->ut

Re: [PATCH 2/4] sched/fair: Drop out incomplete current period when sched averages accrue

2016-04-14 Thread Dietmar Eggemann
On 13/04/16 19:44, Yuyang Du wrote: > On Wed, Apr 13, 2016 at 05:28:18PM +0200, Vincent Guittot wrote: [...] > By "bailing out", you mean return without update because the delta is less > than 1ms? yes. > >>> Examples of 1 periodic task pinned to a cpu on an ARM64 system, HZ=250 >>> in steady

[PATCH 7/7] sched/fair: Use group_cfs_rq(se) instead of se->my_q

2016-04-29 Thread Dietmar Eggemann
Replace all rvalue occurrences of se->my_q with group_cfs_rq(se) so the accessor is used consistently to reach the cfs_rq owned by this se/tg. Signed-off-by: Dietmar Eggemann --- kernel/sched/fair.c | 8 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/kernel/sched/fair.

[PATCH 5/7] sched/fair: Remove cpu_avg_load_per_task()

2016-04-29 Thread Dietmar Eggemann
From: Morten Rasmussen cpu_avg_load_per_task() is called in situations where the local sched_group currently has no runnable tasks according to the group statistics (sum of rq->cfs.h_nr_running) to calculate the local load_per_task based on the destination cpu average load_per_task. Since group h

[PATCH 6/7] sched/fair: Reorder code in update_sd_lb_stats()

2016-04-29 Thread Dietmar Eggemann
Do the update of total load and total capacity of sched_domain statistics before detecting if it is the local group. This and the inclusion of sg=sg->next into the condition of the do...while loop makes the code easier to read. Signed-off-by: Dietmar Eggemann --- kernel/sched/fair.c |

[PATCH 0/7] sched/fair: fixes and cleanups

2016-04-29 Thread Dietmar Eggemann
( +- 0.50% )7.151685712 ( +- 0.64% ) Dietmar Eggemann (5): sched/fair: Remove remaining power aware scheduling comments sched/fair: Fix comment in calculate_imbalance() sched/fair: Clean up the logic in fix_small_imbalance() sched/fair: Reorder code in update_sd_lb_stats() sched

[PATCH 1/7] sched/fair: Remove remaining power aware scheduling comments

2016-04-29 Thread Dietmar Eggemann
Commit 8e7fbcbc22c1 ("sched: Remove stale power aware scheduling remnants and dysfunctional knobs") deleted the power aware scheduling support. This patch gets rid of the remaining power aware scheduling comments. Signed-off-by: Dietmar Eggemann --- kernel/sched/fair.c | 13 +++---

[PATCH 3/7] sched/fair: Correct unit of load_above_capacity

2016-04-29 Thread Dietmar Eggemann
From: Morten Rasmussen In calculate_imbalance() load_above_capacity currently has the unit [load] while it is used as being [load/capacity]. Not only is it wrong, it also makes it unlikely that load_above_capacity is ever used as the subsequent code picks the smaller of load_above_capacity and the

[PATCH 2/7] sched/fair: Fix comment in calculate_imbalance()

2016-04-29 Thread Dietmar Eggemann
("sched/balancing: Fix 'local->avg_load > sds->avg_load' case in calculate_imbalance()") added the second operand of the or operator. Update this comment accordingly and also use the current variable names. Signed-off-by: Dietmar Eggemann --- kernel/sched/fair.c | 7 +++

[PATCH 4/7] sched/fair: Clean up the logic in fix_small_imbalance()

2016-04-29 Thread Dietmar Eggemann
ion was if (max_load - this_load >= busiest_load_per_task * imbn) which over time changed into the current version where scaled_busy_load_per_task is to be found on both sides of the if condition. Signed-off-by: Dietmar Eggemann --- The original smpnice implementation sets imbalance to the

Re: [PATCH 4/7] sched/fair: Clean up the logic in fix_small_imbalance()

2016-05-03 Thread Dietmar Eggemann
On 03/05/16 11:12, Peter Zijlstra wrote: > On Fri, Apr 29, 2016 at 08:32:41PM +0100, Dietmar Eggemann wrote: >> Avoid the need to add scaled_busy_load_per_task on both sides of the if >> condition to determine whether imbalance has to be set to >> busiest->load_per_tas

Re: [RFC PATCH] sched: reflect sched_entity movement into task_group's utilization

2016-05-11 Thread Dietmar Eggemann
Hi Vincent, On 04/05/16 08:17, Vincent Guittot wrote: > Ensure that changes of the utilization of a sched_entity will be > reflected in the task_group hierarchy down to the root cfs. > > This patch tries another way than the flat utilization hierarchy proposal to > ensure that the changes will be

Re: [RFC PATCH] sched: fix hierarchical order in rq->leaf_cfs_rq_list

2016-05-25 Thread Dietmar Eggemann
Hi Vincent, On 24/05/16 10:55, Vincent Guittot wrote: > Fix the insertion of cfs_rq in rq->leaf_cfs_rq_list to ensure that > a child will always be called before its parent. > > The hierarchical order in shares update list has been introduced by > commit 67e86250f8ea ("sched: Introduce hierarchal or

Re: [PATCH v2] sched: fix first task of a task group is attached twice

2016-05-27 Thread Dietmar Eggemann
On 25/05/16 16:01, Vincent Guittot wrote: > The cfs_rq->avg.last_update_time is initialized to 0 with the main effect > that the 1st sched_entity that will be attached, will keep its > last_update_time set to 0 and will be attached once again during the > enqueue. > Initialize cfs_rq->avg.last_update_t

Re: [PATCH v2] sched: fix first task of a task group is attached twice

2016-05-27 Thread Dietmar Eggemann
On 27/05/16 18:16, Vincent Guittot wrote: > On 27 May 2016 at 17:48, Dietmar Eggemann wrote: >> On 25/05/16 16:01, Vincent Guittot wrote: >>> The cfs_rq->avg.last_update_time is initialize to 0 with the main effect >>> that the 1st sched_entity that w

Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with sched_domain_shared

2016-05-16 Thread Dietmar Eggemann
On 09/05/16 11:48, Peter Zijlstra wrote: Couldn't you just always access sd->shared via sd = rcu_dereference(per_cpu(sd_llc, cpu)) for updating nr_busy_cpus? The call_rcu() thing is on the sd anyway. @@ -5879,7 +5879,6 @@ static void destroy_sched_domains(struct sched_domain *sd) DEFINE_PER_CP

Re: [RFC][PATCH 4/7] sched: Replace sd_busy/nr_busy_cpus with sched_domain_shared

2016-05-16 Thread Dietmar Eggemann
On 16/05/16 18:02, Peter Zijlstra wrote: > On Mon, May 16, 2016 at 04:31:08PM +0100, Dietmar Eggemann wrote: >> On 09/05/16 11:48, Peter Zijlstra wrote: >> >> Couldn't you just always access sd->shared via >> sd = rcu_dereference(per_cpu(sd_llc, cpu)) for >&

Re: [PATCH 2/6] sched/fair: Convert arch_scale_cpu_capacity() from weak function to #define

2015-09-11 Thread Dietmar Eggemann
On 04/09/15 08:26, Vincent Guittot wrote: > On 3 September 2015 at 21:58, Dietmar Eggemann > wrote: [...] > So, with the patch below that updates the arm definition of > arch_scale_cpu_capacity, you can add my Acked-by: Vincent Guittot > on this patch and the additional one &g

Re: [PATCH 4/6] sched/fair: Name utilization related data and functions consistently

2015-09-11 Thread Dietmar Eggemann
On 04/09/15 10:08, Vincent Guittot wrote: > On 14 August 2015 at 18:23, Morten Rasmussen wrote: >> From: Dietmar Eggemann >> >> Use the advent of the per-entity load tracking rewrite to streamline the >> naming of utilization related data and functions by usi

Re: [PATCH v3 4/6] arm64: Enable dynamic CPU capacity initialization

2016-02-08 Thread Dietmar Eggemann
On 03/02/16 11:59, Juri Lelli wrote: > Define arch_wants_init_cpu_capacity() to return true; so that > cpufreq_init_cpu_capacity() can go ahead and profile CPU capacities > at boot time. [...] > > +bool arch_wants_init_cpu_capacity(void) > +{ > + return true; Isn't this a little bit too si

Re: [PATCH v3 4/6] arm64: Enable dynamic CPU capacity initialization

2016-02-08 Thread Dietmar Eggemann
On 08/02/16 13:13, Mark Brown wrote: > On Mon, Feb 08, 2016 at 12:28:39PM +0000, Dietmar Eggemann wrote: >> On 03/02/16 11:59, Juri Lelli wrote: > >>> +bool arch_wants_init_cpu_capacity(void) >>> +{ >>> + return true; > >> Isn't this a lit

Re: [PATCH v3 2/6] drivers/cpufreq: implement init_cpu_capacity_default()

2016-02-09 Thread Dietmar Eggemann
On 05/02/16 09:30, Juri Lelli wrote: > On 04/02/16 16:46, Vincent Guittot wrote: >> On 4 February 2016 at 16:44, Vincent Guittot >> wrote: >>> On 4 February 2016 at 15:13, Juri Lelli wrote: On 04/02/16 13:35, Vincent Guittot wrote: > On 4 February 2016 at 13:16, Juri Lelli wrote: >

Re: [PATCH v9 04/10] sched: Make sched entity usage tracking scale-invariant

2014-11-26 Thread Dietmar Eggemann
On 21/11/14 12:35, Morten Rasmussen wrote: > On Mon, Nov 03, 2014 at 04:54:41PM +, Vincent Guittot wrote: >> From: Morten Rasmussen >> Could we rename this patch to 'sched: Make usage tracking frequency scale-invariant'? The reason is, since we scale sched_avg::running_avg_sum according to t

Re: [PATCH 1/7] sched: Introduce scale-invariant load tracking

2014-10-08 Thread Dietmar Eggemann
Hi Yuyang, On 08/10/14 01:50, Yuyang Du wrote: > Hi Morten, > > Sorry for late jumping in. > > The problem seems to be self-evident. But for the implementation to be > equally attractive it needs to account for every freq change for every task, > or anything less than that makes it less attracti

Re: [PATCH v7 3/7] sched: add utilization_avg_contrib

2014-10-08 Thread Dietmar Eggemann
On 07/10/14 13:13, Vincent Guittot wrote: > Add new statistics which reflect the average time a task is running on the CPU > and the sum of these running time of the tasks on a runqueue. The latter is > named utilization_avg_contrib. > > This patch is based on the usage metric that was proposed in

Re: [RFCv4 PATCH 31/34] sched: Energy-aware wake-up task placement

2015-05-14 Thread Dietmar Eggemann
On 12/05/15 20:39, Morten Rasmussen wrote: > Let available compute capacity and estimated energy impact select > wake-up target cpu when energy-aware scheduling is enabled and the > system is not over-utilized (above the tipping point). > > energy_aware_wake_cpu() attempts to find group of cpus wi

Re: [RFCv3 PATCH 45/48] sched: Skip cpu as lb src which has one task and capacity gte the dst cpu

2015-05-05 Thread Dietmar Eggemann
On 30/04/15 08:46, pang.xun...@zte.com.cn wrote: > linux-kernel-ow...@vger.kernel.org wrote 2015-03-26 AM 02:44:48: > >> Dietmar Eggemann >> >> Re: [RFCv3 PATCH 45/48] sched: Skip cpu as lb src which has one task >> and capacity gte the dst cpu >> >&g

Re: [RFCv3 PATCH 12/48] sched: Make usage tracking cpu scale-invariant

2015-05-06 Thread Dietmar Eggemann
On 03/05/15 07:27, pang.xun...@zte.com.cn wrote: > Hi Dietmar, > > Dietmar Eggemann wrote 2015-03-24 AM 03:19:41: >> >> Re: [RFCv3 PATCH 12/48] sched: Make usage tracking cpu scale-invariant [...] >> In the previous patch-set https://lkml.org/lkml/2014/12/2/332we &g

Re: [RFCv3 PATCH 37/48] sched: Determine the current sched_group idle-state

2015-05-01 Thread Dietmar Eggemann
On 01/05/15 10:56, pang.xun...@zte.com.cn wrote: > Hi Dietmar, > > Dietmar Eggemann wrote 2015-05-01 AM 04:17:51: >> >> Re: [RFCv3 PATCH 37/48] sched: Determine the current sched_group > idle-state >> >> On 30/04/15 06:12, pang.xun...@zte.com.cn wrote: >

Re: [sched] WARNING: CPU: 0 PID: 0 at arch/x86/kernel/cpu/common.c:1439 warn_pre_alternatives()

2014-12-19 Thread Dietmar Eggemann
n too. -- Dietmar https://git.linaro.org/people/mturquette/linux.git eas-next commit 1fadb581b0be9420b143e43ff2f4a07ea7e45f6c Author: Dietmar Eggemann AuthorDate: Tue Dec 2 14:06:24 2014 + Commit: Michael Turquette CommitDate: Tue Dec 9 20:33:17 2014 -0800 sched: Make usag

Re: [PATCH RESEND v9 05/10] sched: make scale_rt invariant with frequency

2015-02-24 Thread Dietmar Eggemann
On 24/02/15 10:21, Vincent Guittot wrote: > On 19 February 2015 at 18:18, Morten Rasmussen > wrote: >> On Thu, Feb 19, 2015 at 04:52:41PM +, Peter Zijlstra wrote: >>> On Thu, Jan 15, 2015 at 11:09:25AM +0100, Vincent Guittot wrote: [...] >> Agreed. I think it is reasonable to assume that th

Re: [PATCH v10 11/11] sched: move cfs task on a CPU with higher capacity

2015-03-26 Thread Dietmar Eggemann
On 27/02/15 15:54, Vincent Guittot wrote: When a CPU is used to handle a lot of IRQs or some RT tasks, the remaining capacity for CFS tasks can be significantly reduced. Once we detect such a situation by comparing cpu_capacity_orig and cpu_capacity, we trigger an idle load balance to check if it's wo

Re: [RFCv3 PATCH 30/48] sched: Calculate energy consumption of sched_group

2015-03-26 Thread Dietmar Eggemann
On 24/03/15 17:39, Morten Rasmussen wrote: > On Tue, Mar 24, 2015 at 04:10:37PM +, Peter Zijlstra wrote: >> On Tue, Mar 24, 2015 at 10:44:24AM +, Morten Rasmussen wrote: > Maybe remind us why this needs to be tied to sched_groups ? Why can't we > attach the energy information to the

Re: [RFCv3 PATCH 43/48] sched: Introduce energy awareness into detach_tasks

2015-03-27 Thread Dietmar Eggemann
On 25/03/15 23:50, Sai Gurrappadi wrote: > On 02/04/2015 10:31 AM, Morten Rasmussen wrote: >> From: Dietmar Eggemann >> while (!list_empty(tasks)) { >> @@ -6121,6 +6121,20 @@ static int detach_tasks(struct lb_env *env) >> i

Re: [RFCv3 PATCH 12/48] sched: Make usage tracking cpu scale-invariant

2015-03-23 Thread Dietmar Eggemann
On 23/03/15 14:46, Peter Zijlstra wrote: On Wed, Feb 04, 2015 at 06:30:49PM +, Morten Rasmussen wrote: From: Dietmar Eggemann Besides the existing frequency scale-invariance correction factor, apply cpu scale-invariance correction factor to usage tracking. Cpu scale-invariance takes cpu

Re: [RFCv3 PATCH 30/48] sched: Calculate energy consumption of sched_group

2015-03-23 Thread Dietmar Eggemann
On 23/03/15 16:47, Peter Zijlstra wrote: On Mon, Mar 16, 2015 at 02:15:46PM +, Morten Rasmussen wrote: You are absolutely right. The current code is broken for system topologies where all cpus share the same clock source. To be honest, it is actually worse than that and you already pointed o

Re: [RFCv3 PATCH 38/48] sched: Infrastructure to query if load balancing is energy-aware

2015-03-24 Thread Dietmar Eggemann
On 24/03/15 13:41, Peter Zijlstra wrote: On Wed, Feb 04, 2015 at 06:31:15PM +, Morten Rasmussen wrote: + .use_ea = (energy_aware() && sd->groups->sge) ? true : false, The return value of a logical and should already be a boolean. Indeed, thanks for spotting this!

Re: [RFCv3 PATCH 38/48] sched: Infrastructure to query if load balancing is energy-aware

2015-03-24 Thread Dietmar Eggemann
On 24/03/15 13:56, Peter Zijlstra wrote: On Wed, Feb 04, 2015 at 06:31:15PM +, Morten Rasmussen wrote: From: Dietmar Eggemann Energy-aware load balancing should only happen if the ENERGY_AWARE feature is turned on and the sched domain on which the load balancing is performed contains

Re: [RFCv3 PATCH 42/48] sched: Introduce energy awareness into find_busiest_queue

2015-03-24 Thread Dietmar Eggemann
On 24/03/15 15:21, Peter Zijlstra wrote: On Wed, Feb 04, 2015 at 06:31:19PM +, Morten Rasmussen wrote: +++ b/kernel/sched/fair.c @@ -7216,6 +7216,37 @@ static struct rq *find_busiest_queue(struct lb_env *env, unsigned long busiest_load = 0, busiest_capacity = 1; int i; +

Re: [RFCv3 PATCH 44/48] sched: Tipping point from energy-aware to conventional load balancing

2015-03-24 Thread Dietmar Eggemann
On 24/03/15 15:26, Peter Zijlstra wrote: On Wed, Feb 04, 2015 at 06:31:21PM +, Morten Rasmussen wrote: From: Dietmar Eggemann Energy-aware load balancing bases on cpu usage so the upper bound of its operational range is a fully utilized cpu. Above this tipping point it makes more sense to

Re: [RFCv3 PATCH 45/48] sched: Skip cpu as lb src which has one task and capacity gte the dst cpu

2015-03-25 Thread Dietmar Eggemann
On 24/03/15 15:27, Peter Zijlstra wrote: On Wed, Feb 04, 2015 at 06:31:22PM +, Morten Rasmussen wrote: From: Dietmar Eggemann Skip cpu as a potential src (costliest) in case it has only one task running and its original capacity is greater than or equal to the original capacity of the dst

Re: [PATCH v10 07/11] sched: get CPU's usage statistic

2015-03-03 Thread Dietmar Eggemann
On 27/02/15 15:54, Vincent Guittot wrote: > Monitor the usage level of each group of each sched_domain level. The usage is > the portion of cpu_capacity_orig that is currently used on a CPU or group of > CPUs. We use the utilization_load_avg to evaluate the usage level of each > group. > > The uti

Re: [PATCH v10 04/11] sched: Make sched entity usage tracking scale-invariant

2015-03-03 Thread Dietmar Eggemann
On 27/02/15 15:54, Vincent Guittot wrote: > From: Morten Rasmussen > > Apply frequency scale-invariance correction factor to usage tracking. > Each segment of the running_load_avg geometric series is now scaled by the The same comment I sent out on [PATCH v10 07/11]: The use of underscores in r

Re: [RFCv3 PATCH 48/48] sched: Disable energy-unfriendly nohz kicks

2015-02-20 Thread Dietmar Eggemann
Hi Morten, On 04/02/15 18:31, Morten Rasmussen wrote: > With energy-aware scheduling enabled nohz_kick_needed() generates many > nohz idle-balance kicks which lead to nothing when multiple tasks get > packed on a single cpu to save energy. This causes unnecessary wake-ups > and hence wastes energy

Re: [PATCH 7/8] cpufreq: Frequency invariant scheduler load-tracking support

2016-03-15 Thread Dietmar Eggemann
Hi Mike, On 14/03/16 05:22, Michael Turquette wrote: > From: Dietmar Eggemann > > Implements cpufreq_scale_freq_capacity() to provide the scheduler with a > frequency scaling correction factor for more accurate load-tracking. > > The factor is: > >

Re: [PATCH 8/8] sched: prefer cpufreq_scale_freq_capacity

2016-03-15 Thread Dietmar Eggemann
On 14/03/16 05:22, Michael Turquette wrote: > arch_scale_freq_capacity is weird. It specifies an arch hook for an > implementation that could easily vary within an architecture or even a > chip family. > > This patch helps to mitigate this weirdness by defaulting to the > cpufreq-provided implemen

Re: [PATCH 8/8] sched: prefer cpufreq_scale_freq_capacity

2016-03-18 Thread Dietmar Eggemann
On 15/03/16 20:46, Michael Turquette wrote: > Quoting Dietmar Eggemann (2016-03-15 12:13:58) >> On 14/03/16 05:22, Michael Turquette wrote: [...] >> For me this independence of the scheduler code towards the actual >> implementation of the Frequency Invariant Engine

Re: [PATCH RFC] sched/fair: let cpu's cfs_rq to reflect task migration

2016-04-07 Thread Dietmar Eggemann
Hi Vincent, On 04/07/2016 02:04 PM, Vincent Guittot wrote: Hi Dietmar, On 6 April 2016 at 20:53, Dietmar Eggemann wrote: On 06/04/16 09:37, Morten Rasmussen wrote: On Tue, Apr 05, 2016 at 06:00:40PM +0100, Dietmar Eggemann wrote: [...] @@ -2910,8 +2920,13 @@ static void

Re: [PATCH RFC] sched/fair: let cpu's cfs_rq to reflect task migration

2016-04-05 Thread Dietmar Eggemann
agated down to the root cfs_rq of that cpu. This makes decisions based on cpu_util() for scheduling or cpu frequency settings less accurate in case tasks are running in task groups. This patch aggregates the task utilization only on the root cfs_rq, essentially bypassing cfs_rq's and se's r

Re: [PATCH RFC] sched/fair: let cpu's cfs_rq to reflect task migration

2016-04-06 Thread Dietmar Eggemann
On 06/04/16 09:37, Morten Rasmussen wrote: > On Tue, Apr 05, 2016 at 06:00:40PM +0100, Dietmar Eggemann wrote: >> @@ -2893,8 +2906,12 @@ static void attach_entity_load_avg(struct cfs_rq >> *cfs_rq, struct sched_entity *s >> se->avg.last_update_time = cfs_

Re: [PATCH 1/4] sched/fair: Optimize sum computation with a lookup table

2016-04-11 Thread Dietmar Eggemann
On 10/04/16 23:36, Yuyang Du wrote: > __compute_runnable_contrib() uses a loop to compute sum, whereas a > table lookup can do it faster in constant time. > > The following python script can be used to generate the constants: > > print " #: yN_inv yN_sum" > print "---"
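The generator script itself is cut off in the preview. A minimal sketch of what such a script computes, assuming the PELT decay factor y with y^32 = 0.5 and table entries that accumulate floor(1024 * y^n), matching the kernel's runnable_avg_yN_sum[] table:

```python
# Sketch under stated assumptions: y^32 = 0.5 (32 ms half-life) and
# per-period contributions of 1024 weight, truncated to integers.
LOAD_AVG_PERIOD = 32

y = 0.5 ** (1.0 / LOAD_AVG_PERIOD)  # per-period decay factor

# Cumulative contribution of n fully decayed periods.
yN_sum = []
total = 0
for n in range(1, LOAD_AVG_PERIOD + 1):
    total += int(1024 * y ** n)  # floor, as fixed-point truncation would
    yN_sum.append(total)

print(yN_sum[:3])  # -> [1002, 1982, 2941]
```

The precomputed table lets __compute_runnable_contrib() replace its per-period loop with a single indexed read.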

Re: [PATCH 3/4] sched/fair: Modify accumulated sums for load/util averages

2016-04-11 Thread Dietmar Eggemann
On 10/04/16 23:36, Yuyang Du wrote: > After we dropped the incomplete period, the current period should be > a complete "past" period, since all period boundaries, in the past > or in the future, are predetermined. > > With incomplete current period: > > ||||| >--

Re: [PATCH 2/4] sched/fair: Drop out incomplete current period when sched averages accrue

2016-04-12 Thread Dietmar Eggemann
On 10/04/16 23:36, Yuyang Du wrote: [...] > @@ -2704,11 +2694,14 @@ static __always_inline int > __update_load_avg(u64 now, int cpu, struct sched_avg *sa, > unsigned long weight, int running, struct cfs_rq *cfs_rq) > { > - u64 delta, scaled_delta, periods; > - u32 contri

Re: [PATCH 2/4] sched/fair: Drop out incomplete current period when sched averages accrue

2016-04-13 Thread Dietmar Eggemann
On 10/04/16 23:36, Yuyang Du wrote: > In __update_load_avg(), the current period is never complete. This > basically leads to a slightly over-decayed average, say on average we > have 50% current period, then we will lose 1.08%(=(1-0.5^(1/64)) of > past avg. More importantly, the incomplete current
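The 1.08% figure quoted above can be checked directly: with a half-life of 64 half-periods (on average half of a 32-period half-life is outstanding), the past average loses a fraction 1 - 0.5^(1/64) of its weight.

```python
# Verify the figure from the mail: loss of past average when half a
# decay period is unaccounted, i.e. 1 - 0.5 ** (1/64).
loss = 1 - 0.5 ** (1.0 / 64)
print(f"{loss * 100:.2f}%")  # -> 1.08%
```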

Re: [RFC PATCH 0/3] CFS idle injection

2015-11-06 Thread Dietmar Eggemann
On 05/11/15 10:12, Peter Zijlstra wrote: > > People, trim your emails! > > On Wed, Nov 04, 2015 at 08:58:30AM -0800, Jacob Pan wrote: > >>> I also like #2 too. Specially now that it is not limited to a specific >>> platform. One question though, could you still keep the cooling device >>> suppor

Re: [RFC PATCH 0/3] CFS idle injection

2015-11-06 Thread Dietmar Eggemann
On 11/06/2015 07:10 PM, Jacob Pan wrote: On Fri, 6 Nov 2015 18:30:01 + Dietmar Eggemann wrote: On 05/11/15 10:12, Peter Zijlstra wrote: People, trim your emails! On Wed, Nov 04, 2015 at 08:58:30AM -0800, Jacob Pan wrote: I also like #2 too. Specially now that it is not limited to a

Re: [PATCH] ARM64: Enable multi-core scheduler support by default

2015-10-30 Thread Dietmar Eggemann
On 10/29/2015 05:19 PM, Catalin Marinas wrote: On Mon, Oct 19, 2015 at 05:55:49PM +0100, Dietmar Eggemann wrote: Make sure that the task scheduler domain hierarchy is set-up correctly on systems with single or multi-cluster topology. Signed-off-by: Dietmar Eggemann --- arch/arm64/configs

Re: [PATCH 13/15] sched,fair: propagate sum_exec_runtime up the hierarchy

2019-08-29 Thread Dietmar Eggemann
On 28/08/2019 15:14, Rik van Riel wrote: > On Wed, 2019-08-28 at 09:51 +0200, Dietmar Eggemann wrote: >> On 22/08/2019 04:17, Rik van Riel wrote: >>> Now that enqueue_task_fair and dequeue_task_fair no longer iterate >>> up >>> the hierarchy all th

Re: [PATCH RFC v4 0/15] sched,fair: flatten CPU controller runqueues

2019-09-02 Thread Dietmar Eggemann
On 22/08/2019 04:17, Rik van Riel wrote: > The current implementation of the CPU controller uses hierarchical > runqueues, where on wakeup a task is enqueued on its group's runqueue, > the group is enqueued on the runqueue of the group above it, etc. > > This increases a fairly large amount of ove

Re: [PATCH v3] sched/core: Fix uclamp ABI bug, clean up and robustify sched_read_attr() ABI logic and code

2019-09-04 Thread Dietmar Eggemann
On 04/09/2019 10:55, Ingo Molnar wrote: > > * Ingo Molnar wrote: > >> +if (!access_ok(uattr, ksize)) >> return -EFAULT; > > How about we pretend that I never sent v2? ;-) > > -v3 attached. Build and minimally boot tested. > > Thanks, > > Ingo > This patch fixes the is

Re: [PATCH 12/15] sched,fair: flatten update_curr functionality

2019-08-27 Thread Dietmar Eggemann
On 22/08/2019 04:17, Rik van Riel wrote: > Make it clear that update_curr only works on tasks any more. > > There is no need for task_tick_fair to call it on every sched entity up > the hierarchy, so move the call out of entity_tick. > > Signed-off-by: Rik van Riel

Re: [PATCH 13/15] sched,fair: propagate sum_exec_runtime up the hierarchy

2019-08-28 Thread Dietmar Eggemann
On 22/08/2019 04:17, Rik van Riel wrote: > Now that enqueue_task_fair and dequeue_task_fair no longer iterate up > the hierarchy all the time, a method to lazily propagate sum_exec_runtime > up the hierarchy is necessary. > > Once a tick, propagate the newly accumulated exec_runtime up the hierarc

Re: [PATCH 01/15] sched: introduce task_se_h_load helper

2019-08-23 Thread Dietmar Eggemann
On 22/08/2019 04:17, Rik van Riel wrote: > Sometimes the hierarchical load of a sched_entity needs to be calculated. > Rename task_h_load to task_se_h_load, and directly pass a sched_entity to > that function. > > Move the function declaration up above where it will be used later. > > No funct

Re: [PATCH 11/15] sched,fair: flatten hierarchical runqueues

2019-08-23 Thread Dietmar Eggemann
On 22/08/2019 04:17, Rik van Riel wrote: > Flatten the hierarchical runqueues into just the per CPU rq.cfs runqueue. > > Iteration of the sched_entity hierarchy is rate limited to once per jiffy > per sched_entity, which is a smaller change than it seems, because load > average adjustments were al

Re: [PATCH] sched/cpufreq: Align trace event behavior of fast switching

2019-08-26 Thread Dietmar Eggemann
On 26/08/2019 11:40, Peter Zijlstra wrote: > On Mon, Aug 26, 2019 at 11:10:52AM +0200, Rafael J. Wysocki wrote: >> On Wednesday, August 7, 2019 5:33:40 PM CEST Douglas RAILLARD wrote: >>> Fast switching path only emits an event for the CPU of interest, whereas the >>> regular path emits an event fo
