).
So we define the range as [0..SCHED_SCALE_CAPACITY] in order to avoid overflow.
cc: Paul Turner p...@google.com
cc: Ben Segall bseg...@google.com
Signed-off-by: Morten Rasmussen morten.rasmus...@arm.com
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/fair.c | 21
/2014/8/12/295
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/core.c | 12 -
kernel/sched/fair.c | 150 +++
kernel/sched/sched.h | 2 +-
3 files changed, 69 insertions(+), 95 deletions(-)
diff --git a/kernel
because I'm not sure whether we can rely on
arch_scale_freq_capacity to be short and efficient ?
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/fair.c | 17 +
kernel/sched/sched.h | 4 +++-
2 files changed, 8 insertions(+), 13 deletions(-)
diff --git
with frequency
scaling invariance on the running_load_avg.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/fair.c | 29 +
1 file changed, 29 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9ab5233..7ca5656 100644
for migrating the task.
The nohz_kick_needed function has been cleaned up a bit while adding the new
test
env.src_cpu and env.src_rq must be set unconditionally because they are used
in need_active_balance, which is called even if busiest->nr_running equals 1
Signed-off-by: Vincent Guittot
tracking and not in the CPU capacity. arch_scale_freq_capacity will be
revisited for scaling load with the current frequency of the CPUs in a later
patch.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/fair.c | 7 ---
1 file changed, 7 deletions(-)
diff --git a/kernel
. As an example, we
can detect when a CPU handles a significant amount of irq
(with CONFIG_IRQ_TIME_ACCOUNTING) but this CPU is seen as idle by the
scheduler whereas CPUs that are really idle are available.
- evaluate the available capacity for CFS tasks
Signed-off-by: Vincent Guittot
Rasmussen morten.rasmus...@arm.com
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/debug.c | 2 ++
kernel/sched/fair.c | 3 +++
2 files changed, 5 insertions(+)
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index f384452..efb47ed 100644
--- a/kernel/sched
On 19 November 2014 16:15, pang.xunlei pang.xun...@linaro.org wrote:
On 4 November 2014 00:54, Vincent Guittot vincent.guit...@linaro.org wrote:
[snip]
+static inline bool
+group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
{
- unsigned int capacity_factor, smt, cpus
On 10 September 2014 15:50, Peter Zijlstra pet...@infradead.org wrote:
On Sat, Aug 30, 2014 at 10:37:40PM +0530, Preeti U Murthy wrote:
- if ((sd->flags & SD_SHARE_CPUCAPACITY) && weight > 1) {
- if (sched_feat(ARCH_CAPACITY))
Aren't you missing this check above? I understand that it
On 10 September 2014 15:53, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Aug 26, 2014 at 01:06:49PM +0200, Vincent Guittot wrote:
This new field cpu_capacity_orig reflects the available capacity of a CPU,
unlike cpu_capacity, which reflects the current capacity that can be
altered
On 11 September 2014 12:07, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Aug 26, 2014 at 01:06:51PM +0200, Vincent Guittot wrote:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 18db43e..60ae1ce 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6049,6
On 11 September 2014 12:13, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Aug 26, 2014 at 01:06:51PM +0200, Vincent Guittot wrote:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 18db43e..60ae1ce 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6049,6
On 11 September 2014 13:17, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Aug 26, 2014 at 01:06:52PM +0200, Vincent Guittot wrote:
index 5c2c885..7dfd584 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1073,10 +1073,10 @@ struct sched_avg {
* above by 1024/(1-y)
On 11 September 2014 13:17, Peter Zijlstra pet...@infradead.org wrote:
};
Man, I should go look at Yuyang's rewrite of this all again. I just
tried to figure out the decay stuff and my head hurts ;-)
Regarding Yuyang's rewrite, I had a patch above his patches to add the
running figure that
On 11 September 2014 14:34, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Aug 26, 2014 at 01:06:53PM +0200, Vincent Guittot wrote:
Monitor the utilization level of each group of each sched_domain level. The
utilization is the amount of cpu_capacity that is currently used on a CPU or
group
On 11 September 2014 18:15, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Aug 26, 2014 at 01:06:54PM +0200, Vincent Guittot wrote:
+static inline int group_has_free_capacity(struct sg_lb_stats *sgs,
+ struct lb_env *env)
{
+ if ((sgs->group_capacity_orig * 100
On 30 August 2014 14:00, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
Hi Vincent,
On 08/26/2014 04:36 PM, Vincent Guittot wrote:
The computation of avg_load and avg_load_per_task should only take into
account the number of CFS tasks. The non-CFS tasks are already taken into
account
On 3 September 2014 11:11, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
On 09/01/2014 02:15 PM, Vincent Guittot wrote:
On 30 August 2014 19:50, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
Hi Vincent,
index 18db43e..60ae1ce 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
On 3 September 2014 14:26, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
On 09/03/2014 05:14 PM, Vincent Guittot wrote:
On 3 September 2014 11:11, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
On 09/01/2014 02:15 PM, Vincent Guittot wrote:
[snip]
Ok I understand your explanation above
On 3 September 2014 14:21, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
Hi,
Hi Preeti,
There are places in kernel/sched/fair.c in the load balancing part where
rq->nr_running is used rather than cfs_rq->nr_running. At least I could
not make out why the former was used in the following
On 4 September 2014 01:43, Tim Chen tim.c.c...@linux.intel.com wrote:
On Wed, 2014-09-03 at 13:09 +0200, Vincent Guittot wrote:
On 30 August 2014 14:00, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
Hi Vincent,
On 08/26/2014 04:36 PM, Vincent Guittot wrote:
The computation of avg_load
The update of the update_rq_runnable_avg interface is missing for
CONFIG_FAIR_GROUP_SCHED in the original patch
[PATCH v5 09/12] sched: add usage_load_avg
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
Hi Peter,
Do you prefer that I sent a new version of
[PATCH v5 09/12] sched: add
On 21 November 2014 06:36, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 9/26/14, 8:17 PM, Vincent Guittot wrote:
[snip]
You add up the individual cpu usage values for a group by
sgs->group_usage += get_cpu_usage(i) in update_sg_lb_stats and later use
sgs->group_usage
On 23 November 2014 at 01:22, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 10/3/14, 8:50 PM, Vincent Guittot wrote:
On 3 October 2014 11:35, Morten Rasmussen morten.rasmus...@arm.com
wrote:
On Fri, Oct 03, 2014 at 08:24:23AM +0100, Vincent Guittot wrote:
On 2 October 2014 18:57
On 23 November 2014 at 02:03, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 10/9/14, 10:18 PM, Vincent Guittot wrote:
On 9 October 2014 14:16, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Oct 07, 2014 at 02:13:36PM +0200, Vincent Guittot wrote:
+static inline bool
On 23 November 2014 at 11:25, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 7/29/14, 1:51 AM, Vincent Guittot wrote:
The imbalance flag can stay set even though there is no imbalance.
Let's assume that we have 3 tasks that run on a dual-core / dual-cluster
system.
We will have some idle
On 24 November 2014 at 01:34, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 5/28/14, 7:15 PM, Vincent Guittot wrote:
On 28 May 2014 12:58, Peter Zijlstra pet...@infradead.org wrote:
On Fri, May 23, 2014 at 05:53:03PM +0200, Vincent Guittot wrote:
[snip]
Now I'm only struggling
On 21 November 2014 at 13:34, Morten Rasmussen morten.rasmus...@arm.com wrote:
Should the subject mention that the patch adds utilization tracking?
Maybe: 'sched: Add utilization tracking' ?
On Mon, Nov 03, 2014 at 04:54:38PM +, Vincent Guittot wrote:
Add new statistics which reflect
On 21 November 2014 at 13:35, Morten Rasmussen morten.rasmus...@arm.com wrote:
s/usage/utilization/ in subject.
On Mon, Nov 03, 2014 at 04:54:39PM +, Vincent Guittot wrote:
From: Morten Rasmussen morten.rasmus...@arm.com
Adds usage contribution tracking for group entities. Unlike
s
On 21 November 2014 at 13:35, Morten Rasmussen morten.rasmus...@arm.com wrote:
On Mon, Nov 03, 2014 at 04:54:42PM +, Vincent Guittot wrote:
[snip]
The average running time of RT tasks is used to estimate the remaining
compute
@@ -5801,19 +5801,12 @@ static unsigned long
On 21 November 2014 at 13:37, Morten Rasmussen morten.rasmus...@arm.com wrote:
On Mon, Nov 03, 2014 at 04:54:45PM +, Vincent Guittot wrote:
[snip]
*/
if (prefer_sibling && sds->local &&
- sds->local_stat.group_has_free_capacity
On 21 November 2014 at 13:37, Morten Rasmussen morten.rasmus...@arm.com wrote:
On Mon, Nov 03, 2014 at 04:54:47PM +, Vincent Guittot wrote:
+ /*
+ * The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
+ * It's worth migrating the task if the src_cpu's capacity
On 11 September 2014 21:02, Nicolas Pitre nicolas.pi...@linaro.org wrote:
On Tue, 26 Aug 2014, Vincent Guittot wrote:
This new field cpu_capacity_orig reflects the available capacity of a CPUs
s/a CPUs/a CPU/
good catch
unlike the cpu_capacity which reflects the current capacity that can
On 15 September 2014 13:42, Peter Zijlstra pet...@infradead.org wrote:
On Sun, Sep 14, 2014 at 09:41:56PM +0200, Peter Zijlstra wrote:
On Thu, Sep 11, 2014 at 07:26:48PM +0200, Vincent Guittot wrote:
On 11 September 2014 18:15, Peter Zijlstra pet...@infradead.org wrote:
I'm confused about
On 15 September 2014 21:15, Morten Rasmussen morten.rasmus...@arm.com wrote:
On Tue, Aug 26, 2014 at 12:06:52PM +0100, Vincent Guittot wrote:
Add new statistics which reflect the average time a task is running on the
CPU and the sum of the running time of the tasks on a runqueue. The latter is named
On 16 September 2014 00:14, Vincent Guittot vincent.guit...@linaro.org wrote:
On 15 September 2014 13:42, Peter Zijlstra pet...@infradead.org wrote:
On Sun, Sep 14, 2014 at 09:41:56PM +0200, Peter Zijlstra wrote:
On Thu, Sep 11, 2014 at 07:26:48PM +0200, Vincent Guittot wrote:
On 11 September
On 15 September 2014 12:45, Morten Rasmussen morten.rasmus...@arm.com wrote:
On Thu, Sep 11, 2014 at 03:04:44PM +0100, Peter Zijlstra wrote:
On Thu, Sep 11, 2014 at 03:07:52PM +0200, Vincent Guittot wrote:
Also I'm not entirely sure I like the usage, utilization names/metrics.
I would
On 17 September 2014 15:25, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Sep 16, 2014 at 12:14:54AM +0200, Vincent Guittot wrote:
On 15 September 2014 13:42, Peter Zijlstra pet...@infradead.org wrote:
OK, I've reconsidered _again_, I still don't get it.
So fundamentally I think its
On 25 September 2014 19:23, Morten Rasmussen morten.rasmus...@arm.com wrote:
[snip]
/* Remainder of delta accrued against u_0` */
if (runnable)
- sa->runnable_avg_sum += delta;
+ sa->runnable_avg_sum += (delta * scale_cap)
+
On 25 September 2014 21:05, Dietmar Eggemann dietmar.eggem...@arm.com wrote:
On 23/09/14 17:08, Vincent Guittot wrote:
Monitor the usage level of each group of each sched_domain level. The usage
is
the amount of cpu_capacity that is currently used on a CPU or group of CPUs.
We use
On 25 September 2014 21:19, Dietmar Eggemann dietmar.eggem...@arm.com wrote:
On 25/09/14 09:35, Vincent Guittot wrote:
[snip]
In case sgs->group_type is group_overloaded you could set
sgs->group_out_of_capacity to 1 without calling group_is_overloaded again.
I prefer to keep sgs
On 10 November 2014 06:54, l...@01.org wrote:
FYI, we noticed the below changes on
https://git.linaro.org/people/mturquette/linux.git eas-next
commit 9597d64116d0d441dea32e7f5f05fa135d16f44b (sched: replace
capacity_factor by usage)
b57a1e0afff2cbac 9597d64116d0d441dea32e7f5f
On 14 November 2014 04:35, Yuanhan Liu yuanhan@linux.intel.com wrote:
On Wed, Nov 12, 2014 at 03:44:34PM +0100, Vincent Guittot wrote:
On 10 November 2014 06:54, l...@01.org wrote:
FYI, we noticed the below changes on
https://git.linaro.org/people/mturquette/linux.git eas-next
in get_cpu_usage. But the scaling invariance will come
in another patchset.
Finally, the sched_group->sched_group_capacity->capacity_orig has been removed
because it's no longer used during load balance.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/core.c | 12
a group of CPUs can handle
Rename runnable_avg_period into avg_period as it is now used with both
runnable_avg_sum and running_avg_sum
Add some descriptions of the variables to explain their differences
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
include/linux/sched.h | 19
cpu capacity
[1] https://lkml.org/lkml/2014/7/18/110
[2] https://lkml.org/lkml/2014/7/25/589
Vincent Guittot (6):
sched: add per rq cpu_capacity_orig
sched: move cfs task on a CPU with higher capacity
sched: add utilization_avg_contrib
sched: get CPU's usage statistic
sched: replace
add the SD_PREFER_SIBLING flag for SMT level in order to ensure that
the scheduler will put at least 1 task per core.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/core.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
Monitor the usage level of each group of each sched_domain level. The usage is
the amount of cpu_capacity that is currently used on a CPU or group of CPUs.
We use the utilization_load_avg to evaluate the usage level of each group.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/fair.c | 70 ++---
1 file changed, 50 insertions(+), 20 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 622f8b0..7422044 100644
--- a/kernel/sched
to evaluate the usage of
the CPU by CFS tasks
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
Reviewed-by: Kamalesh Babulal kamal...@linux.vnet.ibm.com
---
kernel/sched/core.c | 2 +-
kernel/sched/fair.c | 8 +++-
kernel/sched/sched.h | 1 +
3 files changed, 9 insertions(+), 2 deletions
On 24 September 2014 19:48, Dietmar Eggemann dietmar.eggem...@arm.com wrote:
On 23/09/14 17:08, Vincent Guittot wrote:
[snip]
This review (by PeterZ) during v5 of your patch-set recommended some
renaming (e.g. s/group_has_free_capacity/group_has_capacity and
s/group_out_of_capacity
in get_cpu_usage. But the scaling invariance will come
in another patchset.
Finally, the sched_group->sched_group_capacity->capacity_orig has been removed
because it's no longer used during load balance.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
Hi,
This update of the patch takes
On 24 September 2014 14:27, Preeti U Murthy pre...@linux.vnet.ibm.com wrote:
On 09/23/2014 09:38 PM, Vincent Guittot wrote:
add the SD_PREFER_SIBLING flag for SMT level in order to ensure that
the scheduler will put at least 1 task per core.
Signed-off-by: Vincent Guittot vincent.guit
On 22 September 2014 18:24, Morten Rasmussen morten.rasmus...@arm.com wrote:
From: Dietmar Eggemann dietmar.eggem...@arm.com
The per-entity load-tracking currently neither accounts for frequency
changes due to frequency scaling (cpufreq) nor for micro-architectural
differences between cpus
On 3 November 2014 03:12, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 14/10/31, 4:47 PM, Vincent Guittot wrote:
This patchset consolidates several changes in the capacity and the usage
tracking of the CPU. It provides a frequency invariant metric of the usage
of
CPUs and generally
On 3 November 2014 08:01, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 14/10/31, 4:47 PM, Vincent Guittot wrote:
The scheduler tries to compute how many tasks a group of CPUs can handle
by
assuming that a task's load is SCHED_LOAD_SCALE and a CPU's capacity is
SCHED_CAPACITY_SCALE
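The heuristic quoted above (estimating how many tasks a group can handle from its capacity) can be sketched as a standalone model. This is a hypothetical simplification: the real capacity_factor computation also dealt with SMT via DIV_ROUND_UP logic, and the rounding here is an assumption for illustration only.

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024UL

/* Hypothetical model: the number of tasks a group is assumed to handle,
 * treating each task as contributing a full SCHED_CAPACITY_SCALE of load
 * and rounding the group capacity to the nearest whole task. */
static unsigned long capacity_factor(unsigned long group_capacity)
{
	return (group_capacity + SCHED_CAPACITY_SCALE / 2) / SCHED_CAPACITY_SCALE;
}
```

A group whose capacity is reduced (by RT tasks or a lower micro-architecture) below ~1.5x of SCHED_CAPACITY_SCALE is still counted as able to run one task, which is exactly the coarseness the patch set replaces with usage tracking.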
On 3 November 2014 16:51, Peter Zijlstra pet...@infradead.org wrote:
On Fri, Oct 31, 2014 at 09:47:32AM +0100, Vincent Guittot wrote:
The call to arch_scale_freq_capacity in the rt scheduling path might be
a concern for RT folks because I'm not sure whether we can rely
/10/131
[2] https://lkml.org/lkml/2014/7/25/589
Morten Rasmussen (2):
sched: Track group sched_entity usage contributions
sched: Make sched entity usage tracking scale-invariant
Vincent Guittot (8):
sched: add utilization_avg_contrib
sched: remove frequency scaling from cpu_capacity
sched
of CPUs can handle.
Rename runnable_avg_period into avg_period as it is now used with both
runnable_avg_sum and running_avg_sum
Add some descriptions of the variables to explain their differences
cc: Paul Turner p...@google.com
cc: Ben Segall bseg...@google.com
Signed-off-by: Vincent Guittot
Add the SD_PREFER_SIBLING flag for SMT level in order to ensure that
the scheduler will put at least 1 task per core.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
Reviewed-by: Preeti U. Murthy pre...@linux.vnet.ibm.com
---
kernel/sched/core.c | 1 +
1 file changed, 1 insertion
with frequency
scaling invariance on the running_load_avg.
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/fair.c | 29 +
1 file changed, 29 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4782733..884578e 100644
/2014/8/12/295
Signed-off-by: Vincent Guittot vincent.guit...@linaro.org
---
kernel/sched/core.c | 12 -
kernel/sched/fair.c | 150 +--
kernel/sched/sched.h | 2 +-
3 files changed, 75 insertions(+), 89 deletions(-)
diff --git a/kernel
On 25 November 2014 at 00:47, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 7/29/14, 1:51 AM, Vincent Guittot wrote:
The imbalance flag can stay set even though there is no imbalance.
Let's assume that we have 3 tasks that run on a dual-core / dual-cluster
system.
We will have some idle
On 24 November 2014 at 18:05, Morten Rasmussen morten.rasmus...@arm.com wrote:
On Mon, Nov 24, 2014 at 02:24:00PM +, Vincent Guittot wrote:
On 21 November 2014 at 13:35, Morten Rasmussen morten.rasmus...@arm.com
wrote:
On Mon, Nov 03, 2014 at 04:54:42PM +, Vincent Guittot wrote
On 25 November 2014 at 03:24, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 11/4/14, 12:54 AM, Vincent Guittot wrote:
The average running time of RT tasks is used to estimate the remaining
compute
capacity for CFS tasks. This remaining capacity is the original capacity
scaled
down
On 26 November 2014 at 06:18, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 11/25/14, 9:52 PM, Vincent Guittot wrote:
On 25 November 2014 at 03:24, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 11/4/14, 12:54 AM, Vincent Guittot wrote:
The average running time of RT tasks
On 18 November 2014 11:47, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 10/31/14, 4:47 PM, Vincent Guittot wrote:
When a CPU is used to handle a lot of IRQs or some RT tasks, the remaining
capacity for CFS tasks can be significantly reduced. Once we detect such
I see the cpu
On 2 December 2014 at 15:06, Morten Rasmussen morten.rasmus...@arm.com wrote:
From: Dietmar Eggemann dietmar.eggem...@arm.com
Besides the existing frequency scale-invariance correction factor, apply
cpu scale-invariance correction factor to usage and load tracking.
Cpu scale-invariance takes
On 2 December 2014 at 15:06, Morten Rasmussen morten.rasmus...@arm.com wrote:
From: Morten Rasmussen morten.rasmus...@arm.com
Architectures that don't have any other means for tracking cpu frequency
changes need a callback from cpufreq to implement a scaling factor to
enable scale-invariant
On 2 December 2014 at 15:06, Morten Rasmussen morten.rasmus...@arm.com wrote:
Introduces the blocked utilization, the utilization counter-part to
cfs_rq->utilization_load_avg. It is the sum of sched_entity utilization
contributions of entities that were recently on the cfs_rq that are
currently
On 2 December 2014 at 15:06, Morten Rasmussen morten.rasmus...@arm.com wrote:
Add the blocked utilization contribution to group sched_entity
utilization (se->avg.utilization_avg_contrib) and to get_cpu_usage().
With this change cpu usage now includes recent usage by currently
non-runnable
+ sa->runnable_avg_sum += scaled_delta;
if (running)
- sa->running_avg_sum += delta * scale_freq
- >> SCHED_CAPACITY_SHIFT;
+ sa->running_avg_sum += scaled_delta;
sa->avg_period += delta;
return decayed;
Acked-by: Vincent Guittot
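The scaled_delta change acked above boils down to scaling an elapsed-time delta by the current frequency capacity before accumulating it. A minimal standalone sketch of that scaling step (hypothetical helper name; not the kernel function):

```c
#include <assert.h>

#define SCHED_CAPACITY_SHIFT 10	/* 1024 == capacity at max frequency */

/* Frequency scale-invariant accumulation: a delta of running time is
 * weighted by the current frequency capacity, so time spent at half the
 * max frequency counts half as much toward the running sums. */
static unsigned long scale_delta(unsigned long delta, unsigned long scale_freq)
{
	return (delta * scale_freq) >> SCHED_CAPACITY_SHIFT;
}
```

With this, a task that keeps a CPU busy at 50% of max frequency accumulates the same utilization as one running half the time at full frequency, which is the invariance property the patch is after.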
On 9 October 2014 17:18, Peter Zijlstra pet...@infradead.org wrote:
On Thu, Oct 09, 2014 at 04:18:02PM +0200, Vincent Guittot wrote:
On 9 October 2014 14:16, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Oct 07, 2014 at 02:13:36PM +0200, Vincent Guittot wrote:
+static inline bool
On 10 October 2014 09:17, Vincent Guittot vincent.guit...@linaro.org wrote:
yes i think it latter because it give a more stable view of the
s/latter/matter/
overload state and have free capacity state of the CPU.
One additional point is that the imbalance_pct will ensure that a
cpu/group
On 9 October 2014 17:30, Peter Zijlstra pet...@infradead.org wrote:
On Thu, Oct 09, 2014 at 04:59:36PM +0200, Vincent Guittot wrote:
On 9 October 2014 13:23, Peter Zijlstra pet...@infradead.org wrote:
On Tue, Oct 07, 2014 at 02:13:32PM +0200, Vincent Guittot wrote:
+++ b/kernel/sched/fair.c
On 9 October 2014 17:12, Peter Zijlstra pet...@infradead.org wrote:
+static int get_cpu_usage(int cpu)
+{
+ unsigned long usage = cpu_rq(cpu)->cfs.utilization_load_avg;
+ unsigned long capacity = capacity_orig_of(cpu);
+
+ if (usage >= SCHED_LOAD_SCALE)
+
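The get_cpu_usage() snippet quoted above clamps the utilization signal to the CPU's original capacity. A standalone sketch of that behavior (a model under the assumption, consistent with the thread, that utilization_load_avg can transiently exceed the capacity while blocked utilization decays):

```c
#include <assert.h>

#define SCHED_LOAD_SHIFT 10
#define SCHED_LOAD_SCALE (1UL << SCHED_LOAD_SHIFT)	/* 1024 */

/* Clamp raw utilization to the CPU's original capacity, then scale it
 * into the capacity range so usage is directly comparable to capacity. */
static unsigned long cpu_usage(unsigned long util, unsigned long capacity_orig)
{
	if (util >= SCHED_LOAD_SCALE)
		return capacity_orig;

	return (util * capacity_orig) >> SCHED_LOAD_SHIFT;
}
```

The clamp is what lets later code such as group_has_capacity() compare summed usage against group capacity without overshoot from decaying blocked load.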
On 4 November 2014 04:21, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
On 14/10/31 下午4:47, Vincent Guittot wrote:
Add the SD_PREFER_SIBLING flag for SMT level in order to ensure that
the scheduler will put at least 1 task per core.
What's the behavior before this patch?
Before
On 4 November 2014 09:30, Wanpeng Li kernel...@gmail.com wrote:
On 14/10/31 下午4:47, Vincent Guittot wrote:
When a CPU is used to handle a lot of IRQs or some RT tasks, the remaining
capacity for CFS tasks can be significantly reduced. Once we detect such
situation by comparing
On 4 November 2014 11:42, Wanpeng Li kernel...@gmail.com wrote:
Hi Vincent,
+
+/*
* Group imbalance indicates (and tries to solve) the problem where balancing
* groups is inadequate due to tsk_cpus_allowed() constraints.
*
@@ -6562,6 +6574,28 @@ static int
On 4 November 2014 13:07, Hillf Danton hillf...@alibaba-inc.com wrote:
+ /*
+* The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
Why specify one task instead of not less than one?
if cfs.h_nr_running == 0 (which should not occur at that point), we
don't need
On 4 November 2014 13:54, Hillf Danton hillf...@alibaba-inc.com wrote:
On 4 November 2014 13:07, Hillf Danton hillf...@alibaba-inc.com wrote:
+ /*
+* The dst_cpu is idle and the src_cpu CPU has only 1 CFS task.
Why specify one task instead of not less than one?
On 4 November 2014 14:31, Hillf Danton hillf...@alibaba-inc.com wrote:
I wonder if you can please shed light on the case that
the dst_cpu is newly idle.
The main problem if we do the test only for newly idle case, is that
we are not sure to move the task because we must rely on the
On 4 December 2014 at 10:05, Hillf Danton hillf...@alibaba-inc.com wrote:
From: zhang jun jun.zh...@intel.com
when cpu == -1 and sd->child == NULL, select_task_rq_fair returns -1 and the
system panics.
[ 0.738326] BUG: unable to handle kernel paging request at 8800997ea928
[ 0.746138] IP:
On 4 December 2014 at 11:23, Liu, Chuansheng chuansheng@intel.com wrote:
-Original Message-
From: Vincent Guittot [mailto:vincent.guit...@linaro.org]
Sent: Thursday, December 04, 2014 6:08 PM
To: Hillf Danton
Cc: Zhang, Jun; Ingo Molnar; Peter Zijlstra; linux-kernel; Liu
On 4 December 2014 at 12:10, Hillf Danton hillf...@alibaba-inc.com wrote:
The change below will give behavior similar to 3.18 for 3.14, and
we still match the condition if (new_cpu == -1 || new_cpu == cpu) in
And -1 is no longer needed.
yes indeed
order to go in the child level
---
On 4 December 2014 at 12:43, jun.zh...@intel.com wrote:
From: zhang jun jun.zh...@intel.com
in function select_task_rq_fair, when find_idlest_cpu returns -1 and
sd->child == NULL, select_task_rq_fair returns -1 and the system panics.
you forgot to mention which kernel version this patch applies to.
We