On 05/07/2013 02:17 PM, Alex Shi wrote:
> On 05/06/2013 07:10 PM, Peter Zijlstra wrote:
>> The runnable_avgs themselves actually have a fair bit of history in
>> them already (50% is from the last 32ms); but given that they don't
>> need to be cut off to respond to load being migrated, I'm guessing we
>> could actually potentially get by wit[...]
On 05/07/2013 02:34 AM, Paul Turner wrote:
>> Current load balance doesn't consider a sleeping task's load, which is
>> represented by blocked_load_avg. And a sleeping task is not on_rq, so
>> considering it in load balance is a little strange.
>
> The load-balancer has a longer time horizon; think of blocked_load_avg
> as a signal for the load, already assigned to this cpu, which is
> expected to appear (within roughly the next quantum).
>
> Consider the following scenario:
>
>   tasks: A, B (40% busy), C (90% busy)
>
> Suppose we have[...]
On Mon, May 6, 2013 at 8:00 AM, Alex Shi wrote:
>> blocked_load_avg is the expected "to wake" contribution from tasks
>> already assigned to this rq.
>>
>> e.g. this could be:
>>   load = this_rq->cfs.runnable_load_avg + this_rq->cfs.blocked_load_avg;
>
> Current load balance doesn't consider a sleeping task's load, which is
> represented by blocked_load_avg. And a sleeping task is not on_rq, so
> considering it in load balance is a little strange.
> blocked_load_avg is the expected "to wake" contribution from tasks
> already assigned to this rq.
>
> e.g. this could be:
>   load = this_rq->cfs.runnable_load_avg + this_rq->cfs.blocked_load_avg;

Current load balance doesn't consider a sleeping task's load, which is
represented by blocked_load_avg. And a sleeping task is not on_rq, so
considering it in load balance is a little strange.
On Mon, May 06, 2013 at 03:33:45AM -0700, Paul Turner wrote:
> Yeah, most of the rationale is super hand-wavy; especially the fairly
> arbitrary choice of periods (e.g. busy_idx vs newidle).
>
> I think the other rationale is:
> For smaller indices (e.g. newidle) we speed up response time by[...]
On Mon, May 6, 2013 at 3:19 AM, Peter Zijlstra wrote:
> On Mon, May 06, 2013 at 01:46:19AM -0700, Paul Turner wrote:
>> On Sun, May 5, 2013 at 6:45 PM, Alex Shi wrote:
>>> @@ -2536,7 +2536,7 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>>>  void update_idl[...]
On Mon, May 06, 2013 at 01:46:19AM -0700, Paul Turner wrote:
> On Sun, May 5, 2013 at 6:45 PM, Alex Shi wrote:
>> @@ -2536,7 +2536,7 @@ static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
>>  void update_idle_cpu_load(struct rq *this_rq)
>>  {
>>  	unsigned lo[...]
On Sun, May 5, 2013 at 6:45 PM, Alex Shi wrote:
> They are the base values used in load balance; update them with the rq
> runnable load average, and load balance will then consider the runnable
> load avg naturally.
>
> Signed-off-by: Alex Shi
> ---
>  kernel/sched/core.c | 4 ++--
>  kernel/sched/fair.c | 4 ++--
They are the base values used in load balance; update them with the rq
runnable load average, and load balance will then consider the runnable
load avg naturally.

Signed-off-by: Alex Shi
---
 kernel/sched/core.c | 4 ++--
 kernel/sched/fair.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git [...]