On 18 October 2016 at 11:07, Peter Zijlstra wrote:
> On Mon, Oct 17, 2016 at 11:52:39PM +0100, Dietmar Eggemann wrote:
>>
>> Something looks weird related to the use of for_each_possible_cpu(i) in
>> online_fair_sched_group() on my i5-3320M CPU (4 logical cpus).
>>
>> In
On 13 October 2016 at 23:34, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 13 October 2016 at 20:49, Dietmar Eggemann <dietmar.eggem...@arm.com>
> wrote:
>> On 13/10/16 17:48, Vincent Guittot wrote:
>>> On 13 October 2016 at 17:52, Joseph Salisbury
>
On Friday 14 Oct 2016 at 14:10:07 (+0100), Dietmar Eggemann wrote:
> On 14/10/16 09:24, Vincent Guittot wrote:
> > On 13 October 2016 at 23:34, Vincent Guittot <vincent.guit...@linaro.org>
> > wrote:
> >> On 13 October 2016 at 20:49, Dietmar Eggemann <diet
is greater than its share so it will
contribute the same load as a task of equal weight.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 200 ++-
kernel/sched/sched.h | 1 +
2 files changed, 200 insertions
Task moves are now propagated down to the root, and the utilization
of each cfs_rq reflects reality, so it doesn't need to be estimated at init.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
any pending changes. The propagation relies on the patch
"sched: fix hierarchical order in rq->leaf_cfs_rq_list", which orders
children before parents, to ensure that it's done in one pass.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 6 ++
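The one-pass guarantee described above can be sketched in standalone C. The `node` type and its fields are illustrative toys, not the kernel's cfs_rq; the point is only why child-before-parent ordering lets one traversal finish the job:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for a cfs_rq with a pending load change to push upward;
 * all names here are illustrative, not the kernel's. */
struct node {
    struct node *parent;
    long load;    /* load accounted at this level */
    long delta;   /* change not yet propagated to the parent */
};

/*
 * Walk a list ordered child-before-parent. Because every child appears
 * before its parent, a single pass suffices: by the time we reach a
 * parent, all of its children have already flushed their deltas into it.
 */
static void propagate_one_pass(struct node **list, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        struct node *nd = list[i];

        if (nd->parent) {
            nd->parent->load  += nd->delta;
            nd->parent->delta += nd->delta; /* keep pushing upward */
        }
        nd->delta = 0;
    }
}
```

With the opposite ordering, a parent visited before its child would miss the child's delta and need a second pass; the ordering patch removes that need.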
s before, and after any potential parents that
are already in the list. The easiest way is to put the cfs_rq just after the
last inserted one and to keep track of it until the branch is fully added.
Signed-off-by: Vincent Guittot <vincent.guit..
Factorize post_init_entity_util_avg and part of attach_task_cfs_rq
into one function, attach_entity_cfs_rq.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 24 +---
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/kernel
sched_entity
and cfs_rq metrics to now.
Use update_load_avg every time we have to update and sync the cfs_rq and
sched_entity before changing the state of a sched_entity.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.
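The "sync before changing state" rule above can be illustrated with a toy time-decayed signal (vastly simpler than PELT; all names are illustrative assumptions):

```c
#include <assert.h>

/* Toy time-decayed sum: halves once per elapsed period. */
struct toy_avg {
    unsigned long last_update; /* in whole periods */
    unsigned long sum;
};

static void sync_to_now(struct toy_avg *a, unsigned long now)
{
    for (; a->last_update < now; a->last_update++)
        a->sum /= 2; /* decay one period */
}

/* The rule from the changelog: before any state change (attach, detach,
 * migrate), first sync the signal to now, so the quantity we add or
 * remove is measured at the same instant as the signal itself. */
static void attach_load(struct toy_avg *a, unsigned long now,
                        unsigned long weight)
{
    sync_to_now(a, now);
    a->sum += weight;
}
```

Attaching without the sync would mix a stale decayed sum with a fresh contribution, which is exactly the inconsistency the changelog is avoiding.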
sent them as a single patchset because the fix is
independent of this one
- Merge some functions that are always used together
- During update of blocked load, ensure that the sched_entity is synced
with the cfs_rq applying changes
- Fix an issue when task changes its cpu affinity
Vincent Guittot (
On Friday 14 Oct 2016 at 12:04:02 (-0400), Joseph Salisbury wrote:
> On 10/14/2016 11:18 AM, Vincent Guittot wrote:
> > On Friday 14 Oct 2016 at 14:10:07 (+0100), Dietmar Eggemann wrote:
> >> On 14/10/16 09:24, Vincent Guittot wrote:
> >>> On 13 October
On Tuesday 18 Oct 2016 at 12:34:12 (+0200), Peter Zijlstra wrote:
> On Tue, Oct 18, 2016 at 11:45:48AM +0200, Vincent Guittot wrote:
> > On 18 October 2016 at 11:07, Peter Zijlstra <pet...@infradead.org> wrote:
> > > So aside from funny BIOSes, this should al
On 18 October 2016 at 13:09, Peter Zijlstra <pet...@infradead.org> wrote:
> On Wed, Oct 12, 2016 at 09:41:36AM +0200, Vincent Guittot wrote:
>
>> ok. In fact, I have noticed another regression with tip/sched/core and
>> hackbench while looking at yours.
>> I have bis
On 19 October 2016 at 11:46, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 18/10/16 12:56, Vincent Guittot wrote:
>> On Tuesday 18 Oct 2016 at 12:34:12 (+0200), Peter Zijlstra wrote:
>>> On Tue, Oct 18, 2016 at 11:45:48AM +0200, Vincent Guittot wrote:
>>
On 19 October 2016 at 13:33, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Oct 18, 2016 at 01:56:51PM +0200, Vincent Guittot wrote:
>
>> ---
>> kernel/sched/fair.c | 9 -
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff
On 19 October 2016 at 16:49, Joseph Salisbury
<joseph.salisb...@canonical.com> wrote:
> On 10/18/2016 07:56 AM, Vincent Guittot wrote:
>> On Tuesday 18 Oct 2016 at 12:34:12 (+0200), Peter Zijlstra wrote:
>>> On Tue, Oct 18, 2016 at 11:45:48AM +0200, Vincent Guittot wrote:
On 19 October 2016 at 15:30, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> On Tue, Oct 18, 2016 at 01:56:51PM +0200, Vincent Guittot wrote:
>> On Tuesday 18 Oct 2016 at 12:34:12 (+0200), Peter Zijlstra wrote:
>> > On Tue, Oct 18, 2016 at 11:45:48AM +020
ed to something else
than 0 because their load will increase when the entity is attached.
Fixes: 3d30544f0212 ("sched/fair: Apply more PELT fixes")
Reported-by: Joseph Salisbury <joseph.salisb...@canonical.com>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
On 19 October 2016 at 17:33, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 19/10/16 12:25, Vincent Guittot wrote:
>> On 19 October 2016 at 11:46, Dietmar Eggemann <dietmar.eggem...@arm.com>
>> wrote:
>>> On 18/10/16 12:56, Vincent Guittot wrote:
>
ched_domain_span(sd), target, wrap) {
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa47589..820a787 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sc
057 0.057(0%) 0.057(0%) 0.055(+5%)
max 0.066 0.068 0.070 0.063
stdev +/-9% +/-9% +/-8% +/-9%
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Changes since v2:
- Rebase on latest sched/core
- Get same results with the rebase an
/2016/10/18/206
Vincent Guittot (2):
sched: fix find_idlest_group for fork
sched: use load_avg for selecting idlest group
kernel/sched/fair.c | 54 +++--
1 file changed, 44 insertions(+), 10 deletions(-)
--
2.7.4
On 23 November 2016 at 16:51, Kevin Hilman <khil...@baylibre.com> wrote:
> Vincent Guittot <vincent.guit...@linaro.org> writes:
>
>> On 22 November 2016 at 19:12, Kevin Hilman <khil...@baylibre.com> wrote:
>>> Viresh Kumar <viresh.ku...@linaro.org> wri
> >
>> > [1] http://www.96boards.org/product/hikey
>> > [2] https://play.google.com/store/apps/details?id=com.quicinc.vellamo
>> >
>> >
>> >> From: Vincent Guittot <vincent.guit...@linaro.org>
>> >> Date: Tue, Nov 8, 2016 at 12:26 AM
On 21 November 2016 at 15:37, Juri Lelli wrote:
> On 21/11/16 15:17, Peter Zijlstra wrote:
>> On Mon, Nov 21, 2016 at 01:53:08PM +, Juri Lelli wrote:
>> > On 21/11/16 13:26, Peter Zijlstra wrote:
>>
>> > > So the limited decay would be the dominant factor in ramp-up time,
On 28 November 2016 at 18:01, Matt Fleming <m...@codeblueprint.co.uk> wrote:
> On Fri, 25 Nov, at 04:34:32PM, Vincent Guittot wrote:
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index aa47589..820a787 100644
>> --- a/kernel/sched/fa
On 28 November 2016 at 18:02, Matt Fleming <m...@codeblueprint.co.uk> wrote:
> On Fri, 25 Nov, at 04:34:31PM, Vincent Guittot wrote:
>> This patchset was originally 1 patch but a perf regression happened during
>> the rebase.
>> The patch 01 fixes the perf regression d
On 22 November 2016 at 19:12, Kevin Hilman wrote:
> Viresh Kumar writes:
>
>> On 21-11-16, 09:07, Rob Herring wrote:
>>> On Fri, Nov 18, 2016 at 02:53:12PM +0530, Viresh Kumar wrote:
>>> > Some platforms have the capability to configure the
On 10 November 2016 at 18:04, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 08/11/16 09:53, Vincent Guittot wrote:
>> Ensure that the move of a sched_entity will be reflected in load and
>> utilization of the task_group hierarchy.
>>
>> When a sched_en
Hi,
My hikey board failed to detect and mount sdcard with v4.9-rc1 and i
have bisected the issue to this patch. Once reverted, the sdcard is
detected again.
Regards,
Vincent
On 25 August 2016 at 05:00, Guodong Xu wrote:
> Add resets property into dwmmc_0, dwmmc_1 and
On 21 October 2016 at 14:19, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
>
> On 10/17/2016 10:14 AM, Vincent Guittot wrote:
>>
>> When a task moves from/to a cfs_rq, we set a flag which is then used to
>> propagate the change at parent level (sched_entity and
On 26 October 2016 at 13:16, Peter Zijlstra <pet...@infradead.org> wrote:
> On Wed, Oct 26, 2016 at 09:05:49AM +0200, Vincent Guittot wrote:
>> >
>> > The 'detach across' and 'attach across' in detach_task_cfs_rq() and
>> > attach_entity_cfs_rq() do
On 26 October 2016 at 13:41, Peter Zijlstra <pet...@infradead.org> wrote:
> On Mon, Oct 17, 2016 at 11:14:10AM +0200, Vincent Guittot wrote:
>> @@ -3110,11 +3116,12 @@ static inline void update_load_avg(struct
>> sched_entity *se, int update_tg)
>>* Track tas
On 26 October 2016 at 12:54, Peter Zijlstra <pet...@infradead.org> wrote:
> On Mon, Oct 17, 2016 at 11:14:11AM +0200, Vincent Guittot wrote:
>> /*
>> + * Signed add and clamp on underflow.
>> + *
>> + * Explicitly do a load-store to ensure the intermedia
On 11 October 2016 at 12:24, Matt Fleming <m...@codeblueprint.co.uk> wrote:
> On Mon, 10 Oct, at 07:34:40PM, Vincent Guittot wrote:
>>
>> Subject: [PATCH] sched: use load_avg for selecting idlest group
>>
>> select_busiest_group only compares the runnable_load_a
On 7 October 2016 at 01:11, Vincent Guittot <vincent.guit...@linaro.org> wrote:
>
> On 5 October 2016 at 11:38, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> > On 09/26/2016 01:19 PM, Vincent Guittot wrote:
> >>
> >> Factorize post_init_enti
On 11 October 2016 at 20:57, Matt Fleming <m...@codeblueprint.co.uk> wrote:
> On Tue, 11 Oct, at 03:14:47PM, Vincent Guittot wrote:
>> >
>> > I see a regression,
>> >
>> > baseline: 2.41228
>> > patched : 2.64528 (-9.7%)
>>
On 13 October 2016 at 17:52, Joseph Salisbury
<joseph.salisb...@canonical.com> wrote:
> On 10/13/2016 06:58 AM, Vincent Guittot wrote:
>> Hi,
>>
>> On 12 October 2016 at 18:21, Joseph Salisbury
>> <joseph.salisb...@canonical.com> wrote:
>>> On 10/12
On 13 October 2016 at 20:49, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 13/10/16 17:48, Vincent Guittot wrote:
>> On 13 October 2016 at 17:52, Joseph Salisbury
>> <joseph.salisb...@canonical.com> wrote:
>>> On 10/13/2016 06:58 AM, Vincent Gui
On 8 October 2016 at 13:49, Mike Galbraith <efa...@gmx.de> wrote:
> On Sat, 2016-10-08 at 13:37 +0200, Vincent Guittot wrote:
>> On 8 October 2016 at 10:39, Ingo Molnar <mi...@kernel.org> wrote:
>> >
>> > * Peter Zijlstra <pet...@infradead.org> wrote:
>
Hi,
On 12 October 2016 at 18:21, Joseph Salisbury
<joseph.salisb...@canonical.com> wrote:
> On 10/12/2016 08:20 AM, Vincent Guittot wrote:
>> On 8 October 2016 at 13:49, Mike Galbraith <efa...@gmx.de> wrote:
>>> On Sat, 2016-10-08 at 13:37 +0200, Vincent Guittot
On 12 October 2016 at 17:03, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 26/09/16 13:19, Vincent Guittot wrote:
>> A task can be asynchronously detached from cfs_rq when migrating
>> between CPUs. The load of the migrated task is then removed from
>> so
On 17 October 2016 at 15:19, Peter Zijlstra wrote:
> On Mon, Oct 17, 2016 at 12:49:55PM +0100, Dietmar Eggemann wrote:
>
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index 8b03fb5..8926685 100644
>> > --- a/kernel/sched/fair.c
>> > +++
On 8 December 2016 at 15:09, Matt Fleming <m...@codeblueprint.co.uk> wrote:
> On Mon, 05 Dec, at 01:35:46PM, Matt Fleming wrote:
>> On Mon, 05 Dec, at 10:27:36AM, Vincent Guittot wrote:
>> >
>> > Hi Matt,
>> >
>> > Thanks for the results.
>
comparing runnable_load
[1] https://lkml.org/lkml/2016/10/18/206
[2] https://lkml.org/lkml/2016/12/8/260
[3] https://lkml.org/lkml/2016/12/8/260
Vincent Guittot (2):
sched: fix find_idlest_group for fork
sched: use load_avg for selecting idlest group
kernel/sched/fair.c | 54
/-9% +/-9% +/-8% +/-9%
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 48 ++--
1 file changed, 38 insertions(+), 10 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
inde
d_domain_span(sd), target, wrap) {
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
Acked-by: Morten Rasmussen <morten.rasmus...@arm.com>
---
kernel/sched/fair.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 92cb50d.
On 9 December 2016 at 16:22, Peter Zijlstra <pet...@infradead.org> wrote:
> On Thu, Dec 08, 2016 at 05:56:54PM +0100, Vincent Guittot wrote:
>> @@ -5449,14 +5456,32 @@ find_idlest_group(struct sched_domain *sd, struct
>> task_struct *p,
>> }
>>
>
Gentle ping ...
Vincent
On 1 December 2016 at 17:38, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> The update of the share of a cfs_rq is done when its load_avg is updated
> but before the group_entity's load_avg has been updated for the past time
> slot. This generates
On 15 December 2016 at 22:42, Peter Zijlstra <pet...@infradead.org> wrote:
>
> On Thu, Dec 01, 2016 at 05:38:53PM +0100, Vincent Guittot wrote:
> > The update of the share of a cfs_rq is done when its load_avg is updated
> > but before the group_entity's load_avg has been u
Hi Ying,
On 12 December 2016 at 06:43, kernel test robot
wrote:
> Greeting,
>
> FYI, we noticed a 149% regression of ftq.noise.50% due to commit:
>
>
> commit: 4e5160766fcc9f41bbd38bac11f92dce993644aa ("sched/fair: Propagate
> asynchrous detach")
>
ntity is updated only once its load_avg
has been synced with current time.
Cc: <sta...@vger.kernel.org>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
I have seen the problem on tip/sched/core, v4.8 and v4.7. Previous versions
might also have the problem but I haven'
On 29 November 2016 at 11:57, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> On Fri, Nov 25, 2016 at 04:34:32PM +0100, Vincent Guittot wrote:
>> During fork, the utilization of a task is initialized once the rq has been
>> selected because the current utilization level of
On 29 November 2016 at 15:50, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> On Tue, Nov 29, 2016 at 02:04:27PM +0100, Vincent Guittot wrote:
>> On 29 November 2016 at 11:57, Morten Rasmussen <morten.rasmus...@arm.com>
>> wrote:
>> > On Fri, Nov 25, 2016
On 4 December 2016 at 00:25, Matt Fleming <m...@codeblueprint.co.uk> wrote:
> On Fri, 25 Nov, at 04:34:32PM, Vincent Guittot wrote:
>> During fork, the utilization of a task is initialized once the rq has been
>> selected because the current utilization level of the rq is used to s
On Saturday 03 Dec 2016 at 21:47:07 (+), Matt Fleming wrote:
> On Fri, 02 Dec, at 07:31:04PM, Brendan Gregg wrote:
> >
> > For background, is this from the "A decade of wasted cores" paper's
> > patches?
>
> No, this patch fixes an issue I originally reported here,
>
>
On 30 November 2016 at 13:49, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> On Fri, Nov 25, 2016 at 04:34:33PM +0100, Vincent Guittot wrote:
>> find_idlest_group() only compares the runnable_load_avg when looking for
>> the least loaded group. But on fork intensive us
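The quoted point, that runnable_load_avg alone misses blocked load, can be sketched with a simplified two-signal comparison. `group_stats` and `pick_idlest_group` are assumed names; the real patch uses imbalance margins rather than this strict tie-break:

```c
#include <assert.h>
#include <stddef.h>

/* Toy per-group statistics; field names are illustrative. */
struct group_stats {
    unsigned long runnable_load; /* load of currently runnable tasks */
    unsigned long load_avg;      /* also counts blocked (sleeping) load */
};

/*
 * runnable_load alone can make a group look idle even though it carries
 * a lot of blocked load about to wake up (fork-intensive workloads where
 * children sleep briefly). Break ties on runnable_load with load_avg.
 */
static size_t pick_idlest_group(const struct group_stats *g, size_t n)
{
    size_t best = 0;

    for (size_t i = 1; i < n; i++) {
        if (g[i].runnable_load < g[best].runnable_load ||
            (g[i].runnable_load == g[best].runnable_load &&
             g[i].load_avg < g[best].load_avg))
            best = i;
    }
    return best;
}
```

With only the first comparison, two groups with no runnable tasks look identical even if one is about to be flooded by waking children; the second signal breaks that tie.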
On 30 November 2016 at 14:49, Vincent Guittot
<vincent.guit...@linaro.org> wrote:
> On 30 November 2016 at 13:49, Morten Rasmussen <morten.rasmus...@arm.com>
> wrote:
>> On Fri, Nov 25, 2016 at 04:34:33PM +0100, Vincent Guittot wrote:
>>> find_idlest_group() on
On 30 November 2016 at 15:24, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> On Wed, Nov 30, 2016 at 02:54:00PM +0100, Vincent Guittot wrote:
>> On 30 November 2016 at 14:49, Vincent Guittot
>> <vincent.guit...@linaro.org> wrote:
>> > On 30 Nove
Hi Dietmar and Ying,
On Tuesday 03 Jan 2017 at 11:38:39 (+0100), Dietmar Eggemann wrote:
> Hi Vincent and Ying,
>
> On 01/02/2017 04:42 PM, Vincent Guittot wrote:
> >Hi Ying,
> >
> >On 28 December 2016 at 09:17, Huang, Ying <ying.hu...@intel.com> wrote:
ong...@intel.com> wrote:
> On 01/02, Vincent Guittot wrote:
>>Hi Xiaolong,
>>
>>On Monday 19 Dec 2016 at 08:14:53 (+0800), kernel test robot wrote:
>>>
>>> Greeting,
>>>
>>> FYI, we noticed a -4.5% regression of unixbench.score due to comm
On 4 January 2017 at 04:08, Huang, Ying <ying.hu...@intel.com> wrote:
> Vincent Guittot <vincent.guit...@linaro.org> writes:
>
>>>
>>> Vincent, like we discussed in September last year, the proper fix would
>>> probably be a cfs-rq->nr_attached
Hi Ying,
On 28 December 2016 at 09:17, Huang, Ying <ying.hu...@intel.com> wrote:
> Vincent Guittot <vincent.guit...@linaro.org> writes:
>
>> On Tuesday 13 Dec 2016 at 09:47:30 (+0800), Huang, Ying wrote:
>>> Hi, Vincent,
>>>
>>> Vincent Guitto
Hi Xiaolong,
On Monday 19 Dec 2016 at 08:14:53 (+0800), kernel test robot wrote:
>
> Greeting,
>
> FYI, we noticed a -4.5% regression of unixbench.score due to commit:
I have been able to restore performance on my platform with the patch below.
Could you test it?
---
kernel/sched/core.c |
On 4 January 2017 at 18:20, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 21/12/16 15:50, Vincent Guittot wrote:
>
> IMHO, the overall idea makes sense to me. Just a couple of small
> questions ...
>
>> The update of the share of a cfs_rq is done w
On 16 December 2016 at 09:55, Vincent Guittot
<vincent.guit...@linaro.org> wrote:
> On 15 December 2016 at 22:42, Peter Zijlstra <pet...@infradead.org> wrote:
>>
>> On Thu, Dec 01, 2016 at 05:38:53PM +0100, Vincent Guittot wrote:
>> > The update of the share o
the
cfs_rq and the weight of the group_entity is updated only once its load_avg
has been synced with current time.
Cc: <sta...@vger.kernel.org> #v4.4+
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 53 +++
On Tuesday 13 Dec 2016 at 09:47:30 (+0800), Huang, Ying wrote:
> Hi, Vincent,
>
> Vincent Guittot <vincent.guit...@linaro.org> writes:
>
> > Hi Ying,
> >
> > On 12 December 2016 at 06:43, kernel test robot
> > <ying.hu...@linux.intel.com> wrote
On 23 March 2017 at 00:56, Joel Fernandes <joe...@google.com> wrote:
> On Mon, Mar 20, 2017 at 5:34 AM, Patrick Bellasi
> <patrick.bell...@arm.com> wrote:
>> On 20-Mar 09:26, Vincent Guittot wrote:
>>> On 20 March 2017 at 04:57, Viresh Kumar <viresh.ku...@lina
> more threads, like in Power 8, SMT 8 mode.
>
> Fix this by only allowing local group to pull a task, if the source group
> has more number of tasks than the local group.
>
> Signed-off-by: Srikar Dronamraju <sri...@linux.vnet.ibm.com>
Acked-by: Vincent Guittot <vincent.gui
On 21 March 2017 at 15:58, Peter Zijlstra <pet...@infradead.org> wrote:
>
> On Tue, Mar 21, 2017 at 03:16:19PM +0100, Vincent Guittot wrote:
> > On 21 March 2017 at 15:03, Peter Zijlstra <pet...@infradead.org> wrote:
> >
> > > On Tue, Mar 21, 2017 at
On 22 March 2017 at 17:22, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 22/03/17 09:22, Vincent Guittot wrote:
>> On 21 March 2017 at 18:46, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
>>> Hi Vincent,
>>>
>&
On 28 March 2017 at 08:35, Dietmar Eggemann wrote:
> This patch-set introduces trace events for load (and utilization)
> tracking for the following three cfs scheduler bricks: cfs_rq,
> sched_entity and task_group.
>
> I've decided to sent it out because people are
On 27 March 2017 at 18:50, Peter Zijlstra wrote:
> On Fri, Mar 24, 2017 at 02:08:58PM +, Juri Lelli wrote:
>> Worker kthread needs to be able to change frequency for all other
>> threads.
>>
>> Make it special, just under STOP class.
>
> *yuck* ;-)
>
> So imagine our
tilization, decreases from 223ms with
current scale invariance down to 121ms with the new algorithm. For this
test, I have enabled arch_scale_freq for arm64.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
include/linux/sched.h | 1 +
kernel/sched/fair.c | 49
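Independently of the new lost-idle-time algorithm, the frequency scaling both approaches build on reduces to scaling the accounted delta by the current capacity ratio. A standalone sketch with assumed names (`scale_delta`, `TOY_CAPACITY_SCALE`), not the kernel's exact API:

```c
#include <assert.h>

#define TOY_CAPACITY_SCALE 1024UL

/*
 * Frequency invariance: running delta_us at half the max frequency is
 * accounted as half the contribution, so the tracked signal measures
 * work done rather than wall-clock running time. freq_capacity is the
 * current capacity on a 0..1024 scale, as an arch_scale_freq-style
 * callback would report it.
 */
static unsigned long scale_delta(unsigned long delta_us,
                                 unsigned long freq_capacity)
{
    return delta_us * freq_capacity / TOY_CAPACITY_SCALE;
}
```

At max frequency the delta passes through unchanged; at lower frequencies less work is credited per unit of wall-clock time.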
On 25 March 2017 at 02:14, Sai Gurrappadi wrote:
> Hi Rafael,
>
> On 03/21/2017 04:08 PM, Rafael J. Wysocki wrote:
>> From: Rafael J. Wysocki
>>
>> The way the schedutil governor uses the PELT metric causes it to
>> underestimate the CPU
On 25 March 2017 at 04:48, Joel Fernandes <joe...@google.com> wrote:
> Hi Vincent,
>
> On Thu, Mar 23, 2017 at 3:08 PM, Vincent Guittot
> <vincent.guit...@linaro.org> wrote:
> [..]
>>>>
>>>>> So I'm not really aligned with the description of y
On 30 March 2017 at 10:58, Juri Lelli wrote:
> Hi,
>
> On 30/03/17 00:41, Rafael J. Wysocki wrote:
>> On Friday, March 24, 2017 02:08:59 PM Juri Lelli wrote:
>> > No assumption can be made upon the rate at which frequency updates get
>> > triggered, as there are scheduling
On 27 March 2017 at 15:18, Juri Lelli wrote:
> parse_cpu_capacity() has to return 0 on failure, but it currently returns
> 1 instead if raw_capacity kcalloc failed.
>
> Fix it by removing the negation of the return value.
>
> Cc: Russell King
>
ested-by: Sudeep Holla <sudeep.ho...@arm.com>
> Fixes: 7e5930aaef5d ('ARM: 8622/3: add sysfs cpu_capacity attribute')
> Signed-off-by: Juri Lelli <juri.le...@arm.com>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
> ---
> arch/arm/kernel/topology.c | 2 --
>
On 28 March 2017 at 17:35, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> The current implementation of load tracking invariance scales the contribution
> with current frequency and uarch performance (only for utilization) of the
> CPU. One main result of this formula is t
On 20 March 2017 at 04:57, Viresh Kumar wrote:
> On 19-03-17, 14:34, Rafael J. Wysocki wrote:
>> From: Rafael J. Wysocki
>>
>> The PELT metric used by the schedutil governor underestimates the
>> CPU utilization in some cases. The reason for
On 20 March 2017 at 22:46, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> The way the schedutil governor uses the PELT metric causes it to
> underestimate the CPU utilization in some cases.
>
> That can be easily demonstrated by running
On 21 March 2017 at 14:22, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Mar 21, 2017 at 09:50:28AM +0100, Vincent Guittot wrote:
>> On 20 March 2017 at 22:46, Rafael J. Wysocki <r...@rjwysocki.net> wrote:
>
>> > To work around this issue use the observat
On 20 March 2017 at 13:59, Rafael J. Wysocki <r...@rjwysocki.net> wrote:
> On Monday, March 20, 2017 09:26:34 AM Vincent Guittot wrote:
>> On 20 March 2017 at 04:57, Viresh Kumar <viresh.ku...@linaro.org> wrote:
>> > On 19-03-17, 14:34, Rafael J. Wysocki wrote:
aggressive
optimization has been tried but has shown a worse score.
Reported-by: ying.hu...@linux.intel.com
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
Fixes: 4e5160766fcc ("sched/fair: Propagate asynchrous detach")
---
kernel/sched/fair.c | 39 +
On 21 March 2017 at 15:03, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Mar 21, 2017 at 02:37:08PM +0100, Vincent Guittot wrote:
>> On 21 March 2017 at 14:22, Peter Zijlstra <pet...@infradead.org> wrote:
>
>> For the not overloaded case, it makes sense t
On 21 March 2017 at 18:46, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> Hi Vincent,
>
> On 17/03/17 13:47, Vincent Guittot wrote:
>
> [...]
>
>> Reported-by: ying.hu...@linux.intel.com
>> Signed-off-by: Vincent Guittot <vincent.guit...@linaro.o
On Wednesday 12 Apr 2017 at 13:28:58 (+0200), Peter Zijlstra wrote:
> On Tue, Apr 11, 2017 at 03:09:20PM +0200, Vincent Guittot wrote:
> > On Tuesday 11 Apr 2017 at 12:49:49 (+0200), Peter Zijlstra wrote:
> > >
> > > Lets go back to the unscaled version:
>
tilization, decreases from 223ms with
current scale invariance down to 121ms with the new algorithm. For this
test, I have enabled arch_scale_freq for arm64.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Update since v1:
- rebase on latest tip/sched/core which includes
"Opt
On Monday 10 Apr 2017 at 19:38:02 (+0200), Peter Zijlstra wrote:
>
> Thanks for the rebase.
>
> On Mon, Apr 10, 2017 at 11:18:29AM +0200, Vincent Guittot wrote:
>
> Ok, so let me try and paraphrase what this patch does.
>
> So consider a task that runs 1
On 11 April 2017 at 11:12, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Apr 11, 2017 at 09:52:21AM +0200, Vincent Guittot wrote:
>
>> > > + } else if (!weight) {
>> > > + if (sa->util_sum < (LOAD_AVG_MAX * 1000)) {
>> >
&
On Tuesday 11 Apr 2017 at 12:41:36 (+0200), Peter Zijlstra wrote:
> On Tue, Apr 11, 2017 at 11:40:21AM +0200, Vincent Guittot wrote:
> > On Tuesday 11 Apr 2017 at 10:53:05 (+0200), Peter Zijlstra wrote:
> > > On Tue, Apr 11, 2017 at 09:52:21AM +0200, Vincent Guittot wrote:
>
On Tuesday 11 Apr 2017 at 10:53:05 (+0200), Peter Zijlstra wrote:
> On Tue, Apr 11, 2017 at 09:52:21AM +0200, Vincent Guittot wrote:
> > On Monday 10 Apr 2017 at 19:38:02 (+0200), Peter Zijlstra wrote:
> > >
> > > Thanks for the rebase.
> > >
> >
On Tuesday 11 Apr 2017 at 12:49:49 (+0200), Peter Zijlstra wrote:
>
> Lets go back to the unscaled version:
>
> running idle
>|*|-|
>
> With the current code, that would effectively end up like (again
> assuming 50%):
>
> running idle
>
On 12 April 2017 at 17:44, Peter Zijlstra <pet...@infradead.org> wrote:
> On Wed, Apr 12, 2017 at 04:50:47PM +0200, Vincent Guittot wrote:
>> On Wednesday 12 Apr 2017 at 13:28:58 (+0200), Peter Zijlstra wrote:
>
>> >
>> > |
On 13 April 2017 at 20:06, Peter Zijlstra <pet...@infradead.org> wrote:
> On Thu, Apr 13, 2017 at 04:59:15PM +0200, Vincent Guittot wrote:
>> On 13 April 2017 at 15:32, Peter Zijlstra <pet...@infradead.org> wrote:
>> > On Wed, Apr 12, 2017 at 01:28:58PM +0200, Pete
On 13 April 2017 at 18:13, Peter Zijlstra <pet...@infradead.org> wrote:
> On Thu, Apr 13, 2017 at 05:16:20PM +0200, Vincent Guittot wrote:
>> On 13 April 2017 at 15:39, Peter Zijlstra <pet...@infradead.org> wrote:
>
>> > OK, so the reason util_avg varies is b
On 13 April 2017 at 15:32, Peter Zijlstra wrote:
> On Wed, Apr 12, 2017 at 01:28:58PM +0200, Peter Zijlstra wrote:
>
>> I still wonder about the whole !running vs !weight thing.,
>
> Ah, since we use this for both util _and_ load, we need !running &&
> !weight, and it so
On 13 April 2017 at 15:39, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Apr 11, 2017 at 09:52:21AM +0200, Vincent Guittot wrote:
>
>> > Secondly, what's up with the util_sum < LOAD_AVG_MAX * 1000 thing?
>>
>> The lost idle time makes sense only if