On 05/07/2013 01:43 PM, Alex Shi wrote:
>> > This also brings forth another question: should we modify wake_affine()
>> > to pass the runnable load average of the waking-up task to
>> > effective_load()?
>> >
>> > What do you think?
> I am not Paul. :)
>
> The acceptable patch of pgbench
On 05/06/2013 05:59 PM, Preeti U Murthy wrote:
> Suggestion1: Would change the CPU share calculation to use runnable load
> average all the time.
>
> Suggestion2: Did the opposite of point 2 above; it used runnable load average
> while calculating the CPU share *before* a new task has been woken up
>
Hi Alex, Michael,

Can you try out the below patch and check? I have the reason mentioned in the
changelog. If this also causes a performance regression, you probably need to
remove the changes made in effective_load(), as Michael points out. I believe
the below patch should not cause a performance regression.
Hi, Preeti

On 05/06/2013 03:10 PM, Preeti U Murthy wrote:
> Hi Alex, Michael,
>
> Can you try out the below patch and check? I have the reason mentioned in
> the changelog.

Sure, I will take a try also.

> If this also causes performance regression, you probably need to remove
> changes made in
On Mon, May 6, 2013 at 2:35 AM, Alex Shi wrote:
> On 05/06/2013 05:06 PM, Paul Turner wrote:
>> I don't think this is a good idea:
>>
>> The problem with not using the instantaneous weight here is that you
>> potentially penalize the latency of interactive tasks (similarly,
>> potentially
>
> But actually I'm wondering whether it is necessary to change
> effective_load()?
>
> It only serves wake-affine, and the whole stuff is still in the
> dark; if patches 1~6 already show good results, why don't we leave it there?

It is used for pipe-connected processes, and your testing showed
On 05/06/2013 05:06 PM, Paul Turner wrote:
> I don't think this is a good idea:
>
> The problem with not using the instantaneous weight here is that you
> potentially penalize the latency of interactive tasks (similarly,
> potentially important background threads -- e.g. garbage collection).
>
>
On 05/06/2013 03:49 PM, Michael Wang wrote:
> On 05/06/2013 01:39 PM, Alex Shi wrote:
> [snip]
>
> Rough test done:
>
>>
>> 1, change back the tg_weight in calc_tg_weight() to use tg_load_contrib,
>> not direct load.
>
> This way stops the regression of patch 7.
>
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
Hi, Alex
On 05/06/2013 09:45 AM, Alex Shi wrote:
> effective_load calculates the load change as seen from the
> root_task_group. It needs to engage the runnable average
> of changed task.
[snip]
> */
> @@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg, int
> cpu, long wl, long wg)
effective_load calculates the load change as seen from the
root_task_group. It needs to engage the runnable average
of the changed task.

Thanks to Morten Rasmussen and PeterZ for the reminder of this.

Signed-off-by: Alex Shi <alex@intel.com>
---
 kernel/sched/fair.c | 24
 1 file changed, 12
On 05/06/2013 11:34 AM, Michael Wang wrote:
@@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg,
int cpu, long wl, long wg)
/*
 * w = rw_i + @wl
 */
- w = se->my_q->load.weight + wl;
+ w = se->my_q->tg_load_contrib + wl;