pet...@infradead.org>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Sorry, some unexpected characters appeared in the commit message of the previous
version.
kernel/sched/fair.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/
value instead. Both solutions
keep load/util_avg stable, with the advantage that the 2nd patch uses the
most up-to-date value. I have split it into 2 patches to show the 2
versions, but if the 2nd patch looks ok, we should probably squash them
into one.
Vincent Guittot (2):
sched/cfs: make
n load/util_sum, we update the max
value according to the current position in the time segment instead of
removing its contribution.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 9 +++--
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/
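The diff is truncated in this listing; as a rough, self-contained sketch of the idea in the commit message above (LOAD_AVG_MAX, the 1024us segment and period_contrib mirror PELT's constants, but the function and the scaling convention are assumptions for illustration only):

#include <stdint.h>
#include <stdio.h>

#define LOAD_AVG_MAX 47742  /* maximum value a PELT sum can reach */

/*
 * Illustrative only: util_sum is assumed to be kept in capacity-scaled
 * units (up to 1024 per accounted chunk).  Instead of dividing by the
 * absolute maximum, divide by the maximum actually reachable at the
 * current position in the ongoing 1024us segment (period_contrib is how
 * far we are into that segment), so a partially elapsed segment no
 * longer deflates the average.
 */
static uint32_t util_avg(uint64_t util_sum, uint32_t period_contrib)
{
    uint32_t divider = LOAD_AVG_MAX - 1024 + period_contrib;

    return (uint32_t)(util_sum / divider);
}

int main(void)
{
    /* a fully busy entity observed half-way through a segment */
    uint64_t sum = (uint64_t)(LOAD_AVG_MAX - 1024 + 512) * 1024;

    printf("%u\n", util_avg(sum, 512)); /* prints 1024, not ~1013 */
    return 0;
}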
ty: failed to allocate
> memory for raw capacities\n");
> cap_parsing_failed = true;
> - return !ret;
> + return 0;
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
> }
> }
> capacity_scale = max(cpu_capacity, capacity_scale);
> --
> 2.10.0
>
Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/k
value instead. Both solutions
keep load/util_avg stable, with the advantage that the 2nd patch uses the
most up-to-date value. I have split it into 2 patches to show the 2
versions, but if the 2nd patch looks ok, we should probably squash them
into one.
Vincent Guittot (2):
sched/cfs: make
On 14 April 2017 at 10:49, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 13 April 2017 at 18:13, Peter Zijlstra <pet...@infradead.org> wrote:
>> On Thu, Apr 13, 2017 at 05:16:20PM +0200, Vincent Guittot wrote:
>>> On 13 April 2017 at 15:39, Peter Zijlstr
he max
value according to the current position in the time segment instead of
removing its contribution.
Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Fold both patches in one
kernel/sched/fair.c | 6 +++---
On 2 March 2017 at 16:45, Patrick Bellasi wrote:
> The current version of schedutil has some issues related to the management
> of update flags used by systems with frequency domains spanning multiple CPUs.
>
> Each time a CPU utilisation update is issued by the scheduler
On 21 March 2017 at 18:00, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 21 March 2017 at 15:58, Peter Zijlstra <pet...@infradead.org> wrote:
>>
>> On Tue, Mar 21, 2017 at 03:16:19PM +0100, Vincent Guittot wrote:
>> > On 21 March 2017 at 15:03,
On 9 August 2017 at 19:51, Joel Fernandes <joe...@google.com> wrote:
> Hi Vincent,
>
> On Wed, Aug 9, 2017 at 3:23 AM, Vincent Guittot
> <vincent.guit...@linaro.org> wrote:
>
>>>
>>> Yes this is true, however since I'm using the 'delta' instead of
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 008c514..5fdcb42 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@
On 30 June 2017 at 15:58, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> The running state is a subset of runnable state which means that running
> can't be set if runnable (weight) is cleared. There are corner cases
> where the current sched_entity has been already dequ
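A minimal sketch of how that invariant can be enforced where the sums are accumulated (the helper and its signature are assumptions for illustration, not the kernel function):

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustrative helper: running is a subset of runnable, so an entity
 * with a zero weight (already dequeued) must not be accounted as
 * running, otherwise util_sum keeps growing in the corner cases
 * mentioned above.
 */
static void accumulate(uint64_t *load_sum, uint64_t *util_sum,
                       uint32_t delta, uint32_t weight, bool running)
{
    if (!weight)        /* not runnable -> cannot be running */
        running = false;

    *load_sum += (uint64_t)weight * delta;
    if (running)
        *util_sum += delta;
}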
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/fair.c | 12
1 file changed, 12 insertions(+)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 008c514..bc36a75 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@
On 4 July 2017 at 11:44, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Jul 04, 2017 at 11:12:34AM +0200, Vincent Guittot wrote:
>> On 4 July 2017 at 10:34, Peter Zijlstra <pet...@infradead.org> wrote:
>> > On Tue, Jul 04, 2017 at 09:27:07AM +0200, Peter Zij
On 4 July 2017 at 09:27, Peter Zijlstra <pet...@infradead.org> wrote:
> On Sat, Jul 01, 2017 at 07:06:13AM +0200, Vincent Guittot wrote:
>> The running state is a subset of runnable state which means that running
>> can't be set if runnable (weight) is cleared. There are
On 4 July 2017 at 10:34, Peter Zijlstra <pet...@infradead.org> wrote:
> On Tue, Jul 04, 2017 at 09:27:07AM +0200, Peter Zijlstra wrote:
>> On Sat, Jul 01, 2017 at 07:06:13AM +0200, Vincent Guittot wrote:
>> > The running state is a subset of runnable state which means
Hi Josef,
On 30 June 2017 at 03:56, wrote:
> From: Josef Bacik
>
> We only track the load avg of a se in 1024 ns chunks, so in order to
> make up for the loss of the < 1024 ns part of a run/sleep delta we only
> add the time we processed to the
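A minimal sketch of the bookkeeping being described (the struct and field names are made up for illustration): only whole 1024ns chunks are folded into the sum, and the sub-1024ns remainder is carried over instead of being lost.

#include <stdint.h>

struct acct {
    uint64_t sum_chunks;   /* accumulated time, in 1024ns units */
    uint32_t leftover_ns;  /* < 1024ns part carried to the next update */
};

static void account_delta(struct acct *a, uint64_t delta_ns)
{
    delta_ns += a->leftover_ns;        /* include what was carried */
    a->sum_chunks += delta_ns >> 10;   /* whole 1024ns chunks */
    a->leftover_ns = delta_ns & 1023;  /* carry the rest */
}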
Hi Georgi,
On 27 June 2017 at 19:49, Georgi Djakov wrote:
[snip]
> +
> +static int interconnect_aggregate(struct interconnect_node *node,
> + struct interconnect_creq *creq)
> +{
> + int ret = 0;
> +
> +
he max
value according to the current position in the time segment instead of
removing its contribution.
Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Changes:
-Correct typo in commit message: s/MAX_LOAD_AVG/LOAD_A
On 25 April 2017 at 23:08, Tejun Heo wrote:
> On Tue, Apr 25, 2017 at 11:49:41AM -0700, Tejun Heo wrote:
>> > I have run a quick test with your patches and schbench on my platform.
>> > I haven't been able to reproduce your regression but my platform is
>> > quite different from
On 24 April 2017 at 22:14, Tejun Heo wrote:
> We noticed that with cgroup CPU controller in use, the scheduling
>
> Note the drastic increase in p99 scheduling latency. After
> investigation, it turned out that the update_sd_lb_stats(), which is
> used by load_balance() to pick
On Tuesday 25 Apr 2017 at 11:12:19 (-0700), Tejun Heo wrote:
> Hello,
>
> On Tue, Apr 25, 2017 at 10:35:53AM +0200, Vincent Guittot wrote:
> > not sure I catch your example:
> > a task TA with a load_avg = 1 is the only task in a task group GB so
> >
onal and keeps the
> parent's runnable_load_avg true to the sum of scaled loads of all
> tasks queued under it, which is critical for the correct operation
> of the load balancer. The next patch will depend on it.
>
> Signed-off-by: Tejun Heo <t...@kernel.org>
> Cc: Vincent Guit
On 27 April 2017 at 00:40, Tejun Heo <t...@kernel.org> wrote:
> Hello,
>
> On Wed, Apr 26, 2017 at 06:51:23PM +0200, Vincent Guittot wrote:
>> > It's not temporary. The weight of a group is its shares, which is its
>> > load fraction of the configured
On 27 April 2017 at 00:52, Tejun Heo <t...@kernel.org> wrote:
> Hello,
>
> On Wed, Apr 26, 2017 at 08:12:09PM +0200, Vincent Guittot wrote:
>> On 24 April 2017 at 22:14, Tejun Heo <t...@kernel.org> wrote:
>> Can the problem be on the load balance side instead? an
On 27 April 2017 at 02:30, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Wed, Apr 26, 2017 at 12:21:52PM +0200, Vincent Guittot wrote:
>> > This is from the follow-up patch. I was confused. Because we don't
>> > propagate decays, we still s
On 27 April 2017 at 00:27, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Wed, Apr 26, 2017 at 06:14:17PM +0200, Vincent Guittot wrote:
>> > + if (gcfs_rq->load.weight) {
>> > + long shares = calc_cfs_shares(gcfs_rq, gcfs_rq-
correct operation
> of the load balancer. The next patch will depend on it.
>
> Signed-off-by: Tejun Heo <t...@kernel.org>
> Cc: Vincent Guittot <vincent.guit...@linaro.org>
> Cc: Ingo Molnar <mi...@redhat.com>
> Cc: Peter Zijl
entiles (usec)
> 50.0000th: 40
> 75.0000th: 71
> 90.0000th: 89
> 95.0000th: 108
> *99.0000th: 679
> 99.5000th: 3500
> 99.9000th: 10960
> min=0, max=13790
>
> [1] git://git.kernel.org/pub/scm/linux/k
On 25 April 2017 at 10:46, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 24 April 2017 at 22:14, Tejun Heo <t...@kernel.org> wrote:
>> We noticed that with cgroup CPU controller in use, the scheduling
>> latency gets wonky regardless of nesting level
On 25 April 2017 at 13:05, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 19/04/17 17:54, Vincent Guittot wrote:
>> In the current implementation of load/util_avg, we assume that the ongoing
>> time segment has fully elapsed, and util/load_sum is divided by LOAD_AVG
On 25 April 2017 at 11:05, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 25 April 2017 at 10:46, Vincent Guittot <vincent.guit...@linaro.org> wrote:
>> On 24 April 2017 at 22:14, Tejun Heo <t...@kernel.org> wrote:
>>> We noticed that with cgroup C
tilization, decreases from 223ms with the
current scale invariance down to 121ms with the new algorithm. For this
test, I have enabled arch_scale_freq for arm64.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Change since v3
- Add comments
- With patch ("sched/cfs: make util/load_av
On 25 April 2017 at 16:53, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> On 25/04/17 13:40, Vincent Guittot wrote:
>> On 25 April 2017 at 13:05, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
>>> On 19/04/17 17:54, Vincent Guittot wrote:
>>>
On 9 August 2017 at 01:11, Joel Fernandes <joe...@google.com> wrote:
> Hi Vincent,
>
> On Mon, Aug 7, 2017 at 6:24 AM, Vincent Guittot
> <vincent.guit...@linaro.org> wrote:
>> Hi Joel,
>>
>> On 4 August 2017 at 17:40, Joel Fernandes <joe...@google.
On 7 August 2017 at 18:44, Peter Zijlstra <pet...@infradead.org> wrote:
> On Fri, Aug 04, 2017 at 03:40:21PM +0200, Vincent Guittot wrote:
>
>> There were several comments on v1:
>> - As raised by Peter for v1, if IRQ time is taken into account in
>> rt_avg,
d to the inaccuracy
I agree that there is an inaccuracy (a max absolute value of 22), but
that's in favor of less overhead. Have you seen wrong behavior because
of this inaccuracy?
>
> With the patch, the error in the signal is significantly reduced, and is
> non-existent beyond a s
mechanism.
We don't use rt_avg, which doesn't have the same dynamics as PELT and which
can include IRQ time.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
Change since v1:
- rebase on tip/sched/core
There were several comments on v1:
- As raised by Peter for v1, if IRQ time is
version in
patch 1/2
Vincent Guittot (2):
sched/rt: add utilization tracking
cpufreq/schedutil: add rt utilization tracking
kernel/sched/cpufreq_schedutil.c | 2 +-
kernel/sched/fair.c | 21 +
kernel/sched/rt.c | 9 +
kernel/sched
Add both cfs and rt utilization when selecting an OPP as rt can preempt and
steal cfs's running time.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
kernel/sched/cpufreq_schedutil.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel
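The diff itself is cut off above; a minimal sketch of the intended OPP-selection change (the names and signature are assumptions for illustration, not the actual schedutil code): the utilization used to pick a frequency becomes the sum of the cfs and rt contributions, clamped to the CPU capacity.

#include <stdint.h>

/*
 * Since rt preempts cfs, the time it steals must be added back,
 * otherwise the cfs-only figure under-estimates the real demand.
 * Values are on the usual [0..1024] capacity scale.
 */
static uint64_t util_for_freq_selection(uint64_t cfs_util, uint64_t rt_util,
                                        uint64_t max_capacity)
{
    uint64_t util = cfs_util + rt_util;

    return util < max_capacity ? util : max_capacity;
}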
d yet.
>
> On Mon, Apr 10, 2017 at 11:18:29AM +0200, Vincent Guittot wrote:
> > The current implementation of load tracking invariance scales the
> > contribution with current frequency and uarch performance (only for
> > utilization) of the CPU. One main result of this
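As a rough sketch of what "scaling the contribution" means in practice (the helpers mirror the kernel's cap_scale() idea but are written stand-alone, with illustrative names): each elapsed delta is weighted by the current frequency, and for utilization also by the CPU's uarch capacity, before it is added to the sums.

#include <stdint.h>

#define SCHED_CAPACITY_SHIFT 10

/* scale a time delta by a factor expressed on the [0..1024] scale */
static uint64_t cap_scale(uint64_t delta, uint64_t scale)
{
    return (delta * scale) >> SCHED_CAPACITY_SHIFT;
}

/*
 * Contribution scaling: e.g. running 1ms at half the max frequency is
 * accounted as 0.5ms of contribution (and further reduced by cpu_scale
 * for utilization on smaller CPUs).
 */
static uint64_t scaled_contrib(uint64_t delta, uint64_t freq_scale,
                               uint64_t cpu_scale)
{
    delta = cap_scale(delta, freq_scale);  /* frequency invariance */
    delta = cap_scale(delta, cpu_scale);   /* uarch invariance */
    return delta;
}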
Hi Tejun,
On 10 May 2017 at 16:44, Tejun Heo <t...@kernel.org> wrote:
> Hello,
>
> On Wed, May 10, 2017 at 08:50:14AM +0200, Vincent Guittot wrote:
>> On 9 May 2017 at 18:18, Tejun Heo <t...@kernel.org> wrote:
>> > Currently, rq->leaf_cfs_rq_list is a
On 10 May 2017 at 17:09, Tejun Heo wrote:
> Hello, Vincent.
>
> On Fri, May 05, 2017 at 11:30:31AM -0400, Tejun Heo wrote:
>> > For shares_runnable, it should be
>> >
>> > group_entity->runnable_load_avg = cfs_rq->runnable_load_avg *
>> > group_entity->avg.load_avg /
Hi Peter,
On 12 May 2017 at 18:44, Peter Zijlstra wrote:
> Remove the load from the load_sum for sched_entities, basically
> turning load_sum into runnable_sum. This prepares for better
> reweighting of group entities.
>
> Since we now have different rules for computing
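A minimal sketch of the reweighting this enables (names are illustrative only): once the weight is no longer baked into the sum, load_avg can simply be recomputed from the new weight and the runnable sum when shares change, instead of rewriting the accumulated history.

#include <stdint.h>

#define LOAD_AVG_MAX 47742  /* maximum value a PELT sum can reach */

/* apply the (possibly new) weight only when producing the average */
static uint64_t load_avg_from(uint64_t runnable_sum, uint64_t weight)
{
    return weight * runnable_sum / LOAD_AVG_MAX;
}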
On 12 May 2017 at 22:19, Rohit Jain wrote:
> On 05/12/2017 12:46 PM, Peter Zijlstra wrote:
>>
>> On Fri, May 12, 2017 at 11:04:26AM -0700, Rohit Jain wrote:
>>>
>>> The patch avoids CPUs which might be considered interrupt-heavy when
>>> trying to schedule threads (on the
On Wednesday 17 May 2017 at 09:04:47 (+0200), Vincent Guittot wrote:
> Hi Peter,
>
> On 12 May 2017 at 18:44, Peter Zijlstra <pet...@infradead.org> wrote:
> > Remove the load from the load_sum for sched_entities, basically
> > turning load_sum into runnable_sum
On 12 May 2017 at 18:44, Peter Zijlstra <pet...@infradead.org> wrote:
> Most call sites of update_load_avg() already have cfs_rq_of(se)
> available, pass it down instead of recomputing it.
>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Vince
On 12 May 2017 at 15:16, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Thu, May 11, 2017 at 09:02:22AM +0200, Vincent Guittot wrote:
>> Sorry, what I mean is:
>> When the group entity of a cfs_rq is enqueued, we are sure that either
>>
On 12 May 2017 at 18:44, Peter Zijlstra <pet...@infradead.org> wrote:
> For consistencies sake, we should have only a single reading of
> tg->shares.
>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
Hi Tejun,
On 9 May 2017 at 18:18, Tejun Heo wrote:
> Currently, rq->leaf_cfs_rq_list is a traversal ordered list of all
> live cfs_rqs which have ever been active on the CPU; unfortunately,
> this makes update_blocked_averages() O(# total cgroups) which isn't
> scalable at all.
Hi Kevin,
On 5 June 2017 at 11:07, Tao Wang wrote:
> The cpu idle cooling driver performs synchronized idle injection across
> all cpus in the same cluster, offering a new method of cooling down the cpu.
> It is similar to the intel_powerclamp driver, but is basically
> designed for
/sched/sched.h.
>
> Cc: Catalin Marinas <catalin.mari...@arm.com>
> Cc: Will Deacon <will.dea...@arm.com>
> Cc: Juri Lelli <juri.le...@arm.com>
> Signed-off-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
ue.
>
> Cc: Catalin Marinas <catalin.mari...@arm.com>
> Cc: Will Deacon <will.dea...@arm.com>
> Cc: Russell King <li...@arm.linux.org.uk>
> Cc: Greg Kroah-Hartman <gre...@linuxfoundation.org>
> Cc: Juri Lelli <juri.le...@arm.com>
> Signed-off-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
;
> Cc: Russell King <li...@arm.linux.org.uk>
> Cc: Juri Lelli <juri.le...@arm.com>
> Signed-off-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
> ---
> arch/arm/include/asm/topology.h | 5 +
> arch/
> Cc: Catalin Marinas <catalin.mari...@arm.com>
> Cc: Will Deacon <will.dea...@arm.com>
> Cc: Juri Lelli <juri.le...@arm.com>
> Signed-off-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
n
> kernel/sched/sched.h.
>
> Cc: Russell King <li...@arm.linux.org.uk>
> Cc: Juri Lelli <juri.le...@arm.com>
> Signed-off-by: Dietmar Eggemann <dietmar.eggem...@arm.com>
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
> ---
> arch/arm/include/asm/topology.
On 8 June 2017 at 09:55, Dietmar Eggemann wrote:
> Implements an arch-specific frequency-scaling function
> topology_get_freq_scale() which provides the following frequency
> scaling factor:
>
> current_freq(cpu) << SCHED_CAPACITY_SHIFT / max_supported_freq(cpu)
>
>
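Read with explicit grouping, the factor is (current_freq << SCHED_CAPACITY_SHIFT) / max_supported_freq; a tiny stand-alone sketch with made-up frequencies:

#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT 10

/* returns a factor on the [0..1024] scale */
static uint64_t freq_scale(uint64_t cur_khz, uint64_t max_khz)
{
    return (cur_khz << SCHED_CAPACITY_SHIFT) / max_khz;
}

int main(void)
{
    /* running at 1.0GHz on a CPU whose max frequency is 2.0GHz -> 512 */
    printf("%llu\n", (unsigned long long)freq_scale(1000000, 2000000));
    return 0;
}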
fs_rq averages up-to-date (which means having done
> the attach), but we need the cfs_rq->avg.runnable_avg to not yet
> include the se's contribution (since se->on_rq == 0).
>
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Vincent Guit
On 19 May 2017 at 08:07, kernel test robot wrote:
>
> Greeting,
>
> FYI, we noticed a -7.4% regression of unixbench.score due to commit:
That's interesting because it's just the opposite of what I received 4
days ago for the unixbench shell1 test. I'm going to have a look:
utilization and take
it into account when selecting the OPP.
Patch 1 tracks the utilization of rt_rq.
Patch 2 adds the rt_rq's utilization when selecting the OPP for cfs tasks.
This patchset doesn't change the OPP selection policy for RT tasks.
Vincent Guittot (2):
sched/rt: add utilization tracking
cpufreq
mechanism.
We don't use rt_avg, which doesn't have the same dynamics as PELT and which
can include IRQ time that is also accounted in cfs task utilization.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
---
If the changes are reasonable, it might be worth moving the PELT fu
Add both cfs_rq's and rt_rq's utilization when selecting an OPP for cfs tasks,
as rt tasks can preempt and steal cfs's running time.
This prevents frequency drops when rt tasks steal running time from cfs tasks,
which makes the cfs utilization appear lower than it is.
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.
On 31 May 2017 at 11:40, Peter Zijlstra <pet...@infradead.org> wrote:
> On Wed, May 24, 2017 at 11:00:51AM +0200, Vincent Guittot wrote:
>> schedutil governor relies on cfs_rq's util_avg to choose the OPP when cfs
>> tasks are running. When the CPU is overloaded by cfs an
On 15 May 2017 at 17:35, Georgi Djakov wrote:
> This patch introduce a new API to get the requirement and configure the
> interconnect buses across the entire chipset to fit with the current demand.
>
> The API is using a consumer/provider-based model, where the
On 8 June 2017 at 14:59, Jean Wangtao <jean.wang...@linaro.org> wrote:
>
> Hi Vincent,
>
On 8 June 2017 at 15:19, "Vincent Guittot" <vincent.guit...@linaro.org> wrote:
>
> Hi Kevin,
>
> On 5 June 2017 at 11:07, Tao Wang <kevin.wang...@hisilicon.com> wro
On 14 June 2017 at 14:55, Daniel Lezcano wrote:
> On Sat, Jun 10, 2017 at 08:00:28PM +0200, Jean Wangtao wrote:
>> On 9 June 2017 at 10:20, Daniel Lezcano wrote:
>>
>> > On Tue, Jun 06, 2017 at 09:11:35AM +0530, viresh kumar wrote:
>> > > +
On 14 June 2017 at 09:55, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
>
> On 06/12/2017 04:27 PM, Vincent Guittot wrote:
> > On 8 June 2017 at 09:55, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
>
> Hi Vincent,
>
> Thanks for the review!
>
>
Add both cfs and rt utilization when selecting an OPP, as rt can preempt and
steal cfs's running time.
---
kernel/sched/cpufreq_schedutil.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
On 3 May 2017 at 23:49, Tejun Heo <t...@kernel.org> wrote:
> On Wed, May 03, 2017 at 03:09:38PM +0200, Peter Zijlstra wrote:
>> On Wed, May 03, 2017 at 12:37:37PM +0200, Vincent Guittot wrote:
>> > On 3 May 2017 at 11:37, Peter Zijlstra <pet...@infradead.org> wrote:
On Wednesday 03 May 2017 at 20:00:28 (+0200), Peter Zijlstra wrote:
>
[snip]
>
> Just FUDGE2 on its own seems to be the best on my system and is a change
> that makes sense (and something Paul recently pointed out as well).
>
> The implementation isn't particularly pretty or fast, but
On 28 April 2017 at 18:14, Tejun Heo wrote:
> Hello, Vincent.
>
>>
>> The only interest of runnable_load_avg is that it is null when a cfs_rq is
>> idle (whereas load_avg is not), not that it should be higher than load_avg. The
>> root cause is that load_balance only looks at "load" but not
On 28 April 2017 at 22:33, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Thu, Apr 27, 2017 at 10:29:10AM +0200, Vincent Guittot wrote:
>> On 27 April 2017 at 00:52, Tejun Heo <t...@kernel.org> wrote:
>> > Hello,
>> >
>> > On
Hi Tejun,
On 2 May 2017 at 23:50, Tejun Heo <t...@kernel.org> wrote:
> Hello,
>
> On Tue, May 02, 2017 at 09:18:53AM +0200, Vincent Guittot wrote:
>> > dbg_odd: odd: dst=28 idle=2 brk=32 lbtgt=0-31 type=2
>> > dbg_odd_dump: A: grp=1,17 w=2 avg=7.247
On 2 May 2017 at 22:56, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Tue, May 02, 2017 at 08:56:52AM +0200, Vincent Guittot wrote:
>> On 28 April 2017 at 18:14, Tejun Heo <t...@kernel.org> wrote:
>> > I'll follow up in the other subth
On 3 May 2017 at 09:25, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 2 May 2017 at 22:56, Tejun Heo <t...@kernel.org> wrote:
>> Hello, Vincent.
>>
>> On Tue, May 02, 2017 at 08:56:52AM +0200, Vincent Guittot wrote:
>>> On 28 April 2017 a
On 3 May 2017 at 11:37, Peter Zijlstra <pet...@infradead.org> wrote:
>
> On Wed, May 03, 2017 at 09:34:51AM +0200, Vincent Guittot wrote:
>
> > We use load_avg for calculating a stable share and we want to use it
> > more and more. So breaking it becaus
On 4 May 2017 at 22:29, Tejun Heo wrote:
> From: Peter Zijlstra
>
> This patch is combination of
>
>
> http://lkml.kernel.org/r/20170502081905.ga4...@worktop.programming.kicks-ass.net
> +
>
With the patch applied, the p99 latency from inside a cgroup is
> equivalent to the root cgroup case.
>
> # ~/schbench -m 2 -t 16 -s 1 -c 15000 -r 30
> Latency percentiles (usec)
> 50.0000th: 40
> 75.0000th: 71
> 90.0000th: 89
> 95.t
On 5 May 2017 at 12:42, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> On 4 May 2017 at 22:30, Tejun Heo <t...@kernel.org> wrote:
>> We noticed that with cgroup CPU controller in use, the scheduling
>> latency gets wonky regardless of nesting level
Hi Tejun,
On 4 May 2017 at 22:28, Tejun Heo wrote:
> Hello,
>
> v1 posting can be found at
>
> http://lkml.kernel.org/r/20170424201344.ga14...@wtj.duckdns.org
>
> The patchset is still RFC and based on v4.11. I used Peter's updated
> calc_cfs_shares() instead of scaling
On 5 May 2017 at 15:28, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Fri, May 05, 2017 at 10:46:53AM +0200, Vincent Guittot wrote:
>> schbench results look better with this version
>> Latency percentiles (usec)
>> 50.0000th: 212
>> 75.0000th:
On 4 May 2017 at 22:30, Tejun Heo wrote:
[snip]
> /* Take into account change of load of a child task group */
> static inline void
> update_tg_cfs_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> @@ -3120,17 +3144,6 @@ update_tg_cfs_load(struct cfs_rq *cfs_rq
>
>
> Fix the problem by always considering now (time) as the reference for
> deciding when CPUs have stale contributions.
>
> Signed-off-by: Juri Lelli <juri.le...@arm.com>
> Cc: Rafael J. Wysocki <rafael.j.wyso...@intel.com>
> Cc: Viresh Kumar <viresh.ku...@linaro.org>
FWIW
Acked-by: Vincent Guittot <vincent.guit...@linaro.org>
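A minimal sketch of the staleness test being described (the TICK_NSEC value and the names are illustrative only): whether another CPU's utilization contribution is stale is decided against the time of the current update, not against the time of the last frequency change.

#include <stdbool.h>
#include <stdint.h>

#define TICK_NSEC 4000000ULL  /* illustrative: one tick at HZ=250 */

/*
 * A sibling CPU's contribution is considered stale when it has not been
 * updated for more than a tick, measured from the current update time.
 */
static bool contribution_is_stale(uint64_t now_ns, uint64_t last_update_ns)
{
    return now_ns - last_update_ns > TICK_NSEC;
}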
On 28 April 2017 at 19:46, Tejun Heo <t...@kernel.org> wrote:
> Hello, Vincent.
>
> On Thu, Apr 27, 2017 at 10:59:12AM +0200, Vincent Guittot wrote:
>> > But the only difference there is that we lose accuracy in calculation;
>> > otherwise, the end results ar
Hi Tejun,
On Tuesday 02 May 2017 at 09:18:53 (+0200), Vincent Guittot wrote:
> On 28 April 2017 at 22:33, Tejun Heo <t...@kernel.org> wrote:
> > Hello, Vincent.
> >
> > On Thu, Apr 27, 2017 at 10:29:10AM +0200, Vincent Guittot wrote:
> >> On 27 April 2017
On 1 May 2017 at 11:00, Peter Zijlstra <pet...@infradead.org> wrote:
> On Sat, Apr 29, 2017 at 12:09:24AM +0200, Peter Zijlstra wrote:
>> On Mon, Apr 10, 2017 at 11:18:29AM +0200, Vincent Guittot wrote:
>> > +++ b/include/linux/sched.h
>> > @@ -3
Hi Tejun,
On 4 May 2017 at 19:43, Tejun Heo <t...@kernel.org> wrote:
> Hello,
>
> On Thu, May 04, 2017 at 10:19:46AM +0200, Vincent Guittot wrote:
>> > schbench inside a cgroup and have some base load, it is actually
>> > expected to show worse latency
On 30 May 2017 at 17:50, Morten Rasmussen <morten.rasmus...@arm.com> wrote:
> On Wed, May 24, 2017 at 11:00:51AM +0200, Vincent Guittot wrote:
>> schedutil governor relies on cfs_rq's util_avg to choose the OPP when cfs
>> tasks are running. When the CPU is overloaded by
On 9 October 2017 at 17:03, Vincent Guittot <vincent.guit...@linaro.org> wrote:
> Hi Peter,
>
> On 1 September 2017 at 15:21, Peter Zijlstra <pet...@infradead.org> wrote:
>> When an entity migrates in (or out) of a runqueue, we need to add (or
>> remove) its
On 10 October 2017 at 09:29, Peter Zijlstra <pet...@infradead.org> wrote:
> On Mon, Oct 09, 2017 at 05:29:04PM +0200, Vincent Guittot wrote:
>> On 9 October 2017 at 17:03, Vincent Guittot <vincent.guit...@linaro.org>
>> wrote:
>> > On 1 Septembe
Hi Peter,
On 1 September 2017 at 15:21, Peter Zijlstra wrote:
> When an entity migrates in (or out) of a runqueue, we need to add (or
> remove) its contribution from the entire PELT hierarchy, because even
> non-runnable entities are included in the load average sums.
>
>
On 13 October 2017 at 22:41, Peter Zijlstra <pet...@infradead.org> wrote:
> On Fri, Oct 13, 2017 at 05:22:54PM +0200, Vincent Guittot wrote:
>>
>> I have studied a bit more how to improve the propagation formula and the
>> changes below are doing the job for the UCs that
city-dmips-mhz' based solution is now the only one. It is
> shared between arm and arm64 and works for every big.LITTLE system no
> matter which core types it consists of.
>
> Cc: Russell King <li...@arm.linux.org.uk>
> Cc: Vincent Guittot <vincent.guit...@linaro.org>
>
Hi Peter,
On Tuesday 10 Oct 2017 at 09:44:53 (+0200), Vincent Guittot wrote:
> On 10 October 2017 at 09:29, Peter Zijlstra <pet...@infradead.org> wrote:
> > On Mon, Oct 09, 2017 at 05:29:04PM +0200, Vincent Guittot wrote:
> >> On 9 October 2017 at 17:03, Vince
Hi Peter,
On Friday 13 Oct 2017 at 22:41:11 (+0200), Peter Zijlstra wrote:
> On Fri, Oct 13, 2017 at 05:22:54PM +0200, Vincent Guittot wrote:
> >
> > I have studied a bit more how to improve the propagation formula and the
> > changes below are doing the job for the U
On 6 September 2017 at 13:43, Dietmar Eggemann <dietmar.eggem...@arm.com> wrote:
> Hi Vincent,
>
> On 04/09/17 08:49, Vincent Guittot wrote:
>> Hi Dietmar,
>>
>> Removing the cpu efficiency table looks good to me. Nevertheless, I have
>> some comments below fo