Hi Rafael,
On Mon, Apr 10 2017 at 00:10, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki
>
> Make the schedutil governor compute the initial (default) value of
> the rate_limit_us sysfs attribute by multiplying the transition
> latency by a multiplier depending on
Hi Rafael,
On Fri, Apr 14 2017 at 22:51, Rafael J. Wysocki wrote:
> On Tuesday, April 11, 2017 12:20:41 AM Rafael J. Wysocki wrote:
>> From: Rafael J. Wysocki
>>
>> Make the schedutil governor take the initial (default) value of the
>> rate_limit_us sysfs attribute
On Wed, Aug 02 2017 at 13:24, Peter Zijlstra wrote:
> On Wed, Aug 02, 2017 at 02:10:02PM +0100, Brendan Jackman wrote:
>> We use task_util in find_idlest_group via capacity_spare_wake. This
>> task_util is updated in wake_cap. However wake_cap is not the only
>>
Hi,
On Fri, Jun 30 2017 at 17:55, Josef Bacik wrote:
> On Fri, Jun 30, 2017 at 07:02:20PM +0200, Mike Galbraith wrote:
>> On Fri, 2017-06-30 at 10:28 -0400, Josef Bacik wrote:
>> > On Thu, Jun 29, 2017 at 08:04:59PM -0700, Joel Fernandes wrote:
>> >
>> > > That makes sense that we multiply
On Thu, Aug 03 2017 at 13:15, Josef Bacik wrote:
> On Thu, Aug 03, 2017 at 11:53:19AM +0100, Brendan Jackman wrote:
>>
>> Hi,
>>
>> On Fri, Jun 30 2017 at 17:55, Josef Bacik wrote:
>> > On Fri, Jun 30, 2017 at 07:02:20PM +0200, Mike Galbraith wrote:
>> &
have relatively few CPUs, which
suggests the trade-off makes sense here.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Josef Bacik <jo...@toxicpanda.com>
Cc: Joel Fernandes
On Wed, Aug 09 2017 at 21:22, Atish Patra wrote:
> On 08/03/2017 10:05 AM, Brendan Jackman wrote:
>>
>> On Thu, Aug 03 2017 at 13:15, Josef Bacik wrote:
>>> On Thu, Aug 03, 2017 at 11:53:19AM +0100, Brendan Jackman wrote:
>>>>
>>>> Hi,
>>&
cing, we won't need the task_util and
we'd just clobber the last_update_time, which is supposed to be 0.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Josef Bacik <jo...
Hi Josef,
I happened to be thinking about something like this while investigating
a totally different issue with ARM big.LITTLE. Comment below...
On Fri, Jul 14 2017 at 13:21, Josef Bacik wrote:
> From: Josef Bacik
>
> The wake affinity logic will move tasks between two CPUs
the force_balance case means
there's an upper bound on the time before we can attempt to solve the
underutilization: after DIE's sd->balance_interval has passed the
next nohz balance kick will help us out.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Ingo Molnar <mi...@re
Hi Viresh,
On Mon, May 22 2017 at 05:10, Viresh Kumar wrote:
> The rate_limit_us for the schedutil governor is getting set to 500 ms by
> default for the ARM64 Hikey board. And it's way too much, even for the
> default value. Let's set the default transition_delay_ns to something
> more realistic
On Mon, Sep 18 2017 at 22:15, Joel Fernandes wrote:
> Hi Brendan,
Hi Joel,
Thanks for taking a look :)
> On Fri, Aug 11, 2017 at 2:45 AM, Brendan Jackman
> <brendan.jack...@arm.com> wrote:
>> This patch adds a parameter to select_task_rq, sibling_count_hint
>> allowi
On Wed, Sep 20 2017 at 05:06, Joel Fernandes wrote:
>> On Tue, Sep 19, 2017 at 3:05 AM, Brendan Jackman
>> <brendan.jack...@arm.com> wrote:
>>> On Mon, Sep 18 2017 at 22:15, Joel Fernandes wrote:
> [..]
>>>>> IIUC, if wake_affine() behaves correctly
Hi Joel,
Sorry I didn't see your comments on the code before. I think they're
orthogonal to the other thread about the overall design, so I'll just
respond here.
On Tue, Sep 19 2017 at 05:15, Joel Fernandes wrote:
> Hi Brendan,
>
> On Fri, Aug 11, 2017 at 2:45 AM, Brendan Jackman
[snip]
&
Hi Peter, Josef,
Do you have any thoughts on this one?
On Mon, Aug 07 2017 at 16:39, Brendan Jackman wrote:
> The "goto force_balance" here is intended to mitigate the fact that
> avg_load calculations can result in bad placement decisions when
> priority is asymmetrical
Hi Peter,
Ping.
Log of previous discussion: https://patchwork.kernel.org/patch/9876769/
Cheers,
Brendan
On Tue, Aug 08 2017 at 09:55, Brendan Jackman wrote:
> We use task_util in find_idlest_group via capacity_spare_wake. This
> task_util is updated in wake_cap. However wa
just initialise @new_cpu to
@cpu instead of @prev_cpu (which is what PeterZ suggested in v1 review). In
that case, some extra code could be removed in & around
find_idlest_group_cpu.
Brendan Jackman (5):
sched/fair: Move select_task_rq_fair slow-path into its own function
sched/fair
is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair.
Suggested-by: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
C
and we return @prev_cpu from select_task_rq_fair.
This is fixed by initialising @new_cpu to @cpu instead of
@prev_cpu.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
C
Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu") find_idlest_group_cpu (formerly
find_idlest_cpu) no longer returns -1.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...
in that case.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Josef Bacik <jo...@toxicpanda.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <morten.rasm
for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot &
On Mon, Aug 28 2017 at 08:56, Vincent Guittot wrote:
> On 25 August 2017 at 17:51, Brendan Jackman <brendan.jack...@arm.com> wrote:
>>
>> On Fri, Aug 25 2017 at 13:38, Vincent Guittot wrote:
>>> On 25 August 2017 at 12:16, Brendan Jackman <brendan.jack...@arm.c
Hi PeterZ,
I just got this in my inbox and noticed I didn't address it to anyone. I
meant to address it to you.
On Fri, Sep 29 2017 at 17:05, Brendan Jackman wrote:
> There has been a bit of discussion on this RFC, but before I do any
> more work I'd really like your input on the basi
, Brendan Jackman wrote:
> This patch adds a parameter to select_task_rq, sibling_count_hint
> allowing the caller, where it has this information, to inform the
> sched_class the number of tasks that are being woken up as part of
> the same event.
>
> The wake_q mechanism
Since 83a0a96a5f26 (sched/fair: Leverage the idle state info when
choosing the "idlest" cpu) find_idlest_cpu no longer returns -1.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit
patch also re-words the check for whether the group in
consideration is local, under the assumption that the first group in
the sched domain is always the local one.
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <mor
This patchset optimises away an unused comparison, and fixes some corner cases
in
the find_idlest_group path of select_task_rq_fair.
Brendan Jackman (2):
sched/fair: Remove unnecessary comparison with -1
sched/fair: Fix use of NULL with find_idlest_group
kernel/sched/fair.c | 36
On Tue, Aug 22 2017 at 04:34, Joel Fernandes wrote:
> Hi Peter,
>
> On Mon, Aug 21, 2017 at 2:14 PM, Peter Zijlstra <pet...@infradead.org> wrote:
>> On Mon, Aug 21, 2017 at 04:21:28PM +0100, Brendan Jackman wrote:
>>> The current use of returning NULL from find_idl
On Tue, Aug 22 2017 at 10:39, Brendan Jackman wrote:
> On Tue, Aug 22 2017 at 04:34, Joel Fernandes wrote:
>> Hi Peter,
>>
>> On Mon, Aug 21, 2017 at 2:14 PM, Peter Zijlstra <pet...@infradead.org> wrote:
>>> On Mon, Aug 21, 2017 at 04:21:28PM +0100, Brendan
On Tue, Aug 22 2017 at 07:48, Vincent Guittot wrote:
> On 21 August 2017 at 17:21, Brendan Jackman <brendan.jack...@arm.com> wrote:
>> The current use of returning NULL from find_idlest_group is broken in
> [snip]
>> ---
>> kernel/sched/fair.c | 34 +++
On Tue, Aug 22 2017 at 11:03, Peter Zijlstra wrote:
> On Tue, Aug 22, 2017 at 11:39:26AM +0100, Brendan Jackman wrote:
>
>> However the code movement helps - I'll combine it with Vincent's
>> suggestions and post a v2.
>
> Please also split into multiple patches, as
-off-by: Brendan Jackman <brendan.jack...@arm.com>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Josef Bacik <jo...@toxicpanda.com>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <morten.rasmus...
erZ suggested in v1 review). In
that case, some extra code could be removed in & around
find_idlest_group_cpu.
Brendan Jackman (5):
sched/fair: Move select_task_rq_fair slow-path into its own function
sched/fair: Remove unnecessary comparison with -1
sched/fair: Fix find_idlest_group
On Fri, Aug 25 2017 at 13:38, Vincent Guittot wrote:
> On 25 August 2017 at 12:16, Brendan Jackman <brendan.jack...@arm.com> wrote:
>> find_idlest_group currently returns NULL when the local group is
>> idlest. The caller then continues the find_idlest_group searc
Hi Josef,
Thanks for taking a look.
On Mon, Aug 21 2017 at 17:26, Josef Bacik wrote:
> On Mon, Aug 21, 2017 at 04:21:28PM +0100, Brendan Jackman wrote:
[...]
>> -local_group = cpumask_test_cpu(this_cpu,
>> - sched_g
into the future. This field is used to determine
the need for triggering the newly-added NOHZ kick. So if such
newly-idle balances are happening often enough, no additional CPU
wakeups are required to keep all the CPUs' loads updated.
Brendan Jackman (1):
sched/fair: Update blocked load
edhat.com>
Cc: Morten Rasmussen <morten.rasmus...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Vincent Guittot <vincent.guit...@linaro.org>
[Switched remote update interval to use PELT half life]
[Moved update_blocked_averages call outside rebalance_domains
the rq lock.
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Morten Rasmussen <morten.rasmus...@arm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Signed-off-by: Brendan Ja
Hi Vincent,
On Mon, Nov 20 2017 at 09:04, Vincent Guittot wrote:
> On 24 October 2017 at 14:25, Brendan Jackman <brendan.jack...@arm.com> wrote:
>> @@ -9062,7 +9109,12 @@ static __latent_entropy void
>> run_rebalance_domains(struct softirq_action *h)
>> *
Hi Todd,
On Thu, Nov 09 2017 at 19:56, Todd Kjos wrote:
>> @@ -8683,6 +8692,10 @@ static void nohz_balancer_kick(void)
>>
>> if (test_and_set_bit(NOHZ_BALANCE_KICK, nohz_flags(ilb_cpu)))
>> return;
>> +
>> + if (only_update)
>> +
CPUHP_AP_SCHED_MIGRATE_DYING doesn't exist, it looks like this was
supposed to refer to CPUHP_AP_SCHED_STARTING's teardown callback
i.e. sched_cpu_dying.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Sebastian Andrzej
ing
CPU B convert the pending/ongoing stats kick to a proper balance
by clearing the NOHZ_STATS_KICK bit in nohz_kick_needed.
Brendan Jackman (1):
sched/fair: Update blocked load from newly idle balance
Vincent Guittot (1):
sched: force update of blocked load of idle cpus
kernel/sc
.@infradead.org>
Signed-off-by: Brendan Jackman <brendan.jack...@arm.com>
---
kernel/sched/core.c | 1 +
kernel/sched/fair.c | 44 ++--
kernel/sched/sched.h | 1 +
3 files changed, 40 insertions(+), 6 deletions(-)
diff --git a/kernel/sche
edhat.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Brendan Jackman <brendan.jack...@arm.com>
> Cc: Dietmar <dietmar.eggem...@arm.com>
> Signed-off-by: Joel Fernandes <joe...@google.com>
FWIW:
Reviewed-by: Brendan Jackman <brendan.jack...@arm.com>
Since commit 83a0a96a5f26 ("sched/fair: Leverage the idle state info
when choosing the "idlest" cpu") find_idlest_group_cpu (formerly
find_idlest_cpu) no longer returns -1.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ing
in that case.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
Reviewed-by: Vincent Guittot
---
kernel/sched/fair.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/kernel/sched
and we return @prev_cpu from select_task_rq_fair.
This is fixed by initialising @new_cpu to @cpu instead of
@prev_cpu.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
kernel/sched/fair.c | 2
for this case, and a comment to
find_idlest_group. Now when find_idlest_group returns NULL, it always
means that the local group is allowed and idlest.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter
is added as a
variable in the new function, with the same initial value as the
@new_cpu in select_task_rq_fair.
Suggested-by: Peter Zijlstra
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
cing, we won't need the task_util and
we'd just clobber the last_update_time, which is supposed to be 0.
Signed-off-by: Brendan Jackman
Cc: Dietmar Eggemann
Cc: Vincent Guittot
Cc: Josef Bacik
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
---
changes v1 -> v2: Just cosmetic
the force_balance case means
there's an upper bound on the time before we can attempt to solve the
underutilization: after DIE's sd->balance_interval has passed the
next nohz balance kick will help us out.
Signed-off-by: Brendan Jackman
Cc: Ingo Molnar
Cc: Morten Rasmussen
Cc: Peter Zijlstra
ch
> 'cpu' belongs to is chosen. So we're always guaranteed to call
> find_idlest_group_cpu with a group to which cpu is non-local. This makes one
> of
> the conditions in find_idlest_group_cpu an impossible one, which we can get
> rid
> off.
>
> Cc: Ingo Molnar
> Cc: Peter Z