Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-29 Thread Alex Shi
On 05/30/2013 01:00 AM, Jason Low wrote:
> On Fri, 2013-05-10 at 23:17 +0800, Alex Shi wrote:
>> blocked_load_avg is sometimes too heavy and far bigger than the
>> runnable load avg, which makes load balancing take wrong decisions,
>> so it is better not to consider it.
>>
>> Signed-off-by: Alex Shi 
> 
> Hi Alex,
> 
> I have been testing these patches with a Java server workload on an 8
> socket (80 core) box with Hyperthreading enabled, and I have been seeing
> good results with these patches.
> 
> When using a 3.10-rc2 tip kernel with patches 1-8, there was about a 40%
> improvement in performance of the workload compared to when using the
> vanilla 3.10-rc2 tip kernel with no patches. When using a 3.10-rc2 tip
> kernel with just patches 1-7, the performance improvement of the
> workload over the vanilla 3.10-rc2 tip kernel was about 25%.
> 
> Tested-by: Jason Low 
> 

That is impressive!

Thanks a lot for your testing! Just curious, what benchmark are you
using? :)

> Thanks,
> Jason
> 


-- 
Thanks
Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-29 Thread Jason Low
On Fri, 2013-05-10 at 23:17 +0800, Alex Shi wrote:
> blocked_load_avg is sometimes too heavy and far bigger than the
> runnable load avg, which makes load balancing take wrong decisions,
> so it is better not to consider it.
> 
> Signed-off-by: Alex Shi 

Hi Alex,

I have been testing these patches with a Java server workload on an 8
socket (80 core) box with Hyperthreading enabled, and I have been seeing
good results with these patches.

When using a 3.10-rc2 tip kernel with patches 1-8, there was about a 40%
improvement in performance of the workload compared to when using the
vanilla 3.10-rc2 tip kernel with no patches. When using a 3.10-rc2 tip
kernel with just patches 1-7, the performance improvement of the
workload over the vanilla 3.10-rc2 tip kernel was about 25%.

Tested-by: Jason Low 

Thanks,
Jason





Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-28 Thread Alex Shi
On 05/16/2013 05:23 PM, Peter Zijlstra wrote:
> On Tue, May 14, 2013 at 07:35:25PM +0800, Alex Shi wrote:
> 
>> > I tested all the benchmarks mentioned in the cover letter, aim7,
>> > kbuild, etc., with autogroup enabled. There is no clear performance
>> > change. But since the machine just runs the benchmark without any
>> > other load, that isn't enough.
> Back when we started with smp-fair cgroup muck someone wrote a test for it. I
> _think_ it ended up in the LTP test-suite.

Peter:

Copying Changlong's testing results again: the LTP cgroup stress
testing shows this patchset can reduce the stress testing time:

# run test
7. sudo ./runltp -p -l /tmp/cgroup.results.log  -d /tmp -o
/tmp/cgroup.log -f cgroup

my test results:
3.10-rc1  patch1-7 patch1-8
duration=764   duration=754   duration=750
duration=764   duration=754   duration=751
duration=763   duration=755   duration=751

duration is the test run time in seconds.

Tested-by: Changlong Xie 

Paul, would you like to give some comments?

> 
> Now I don't know if that's up-to-date enough to catch some of the cases we've
> recently fixed (as in the past few years) so it might want to be updated.
> 
> Paul, do you guys at Google have some nice test-cases for all this?



-- 
Thanks
Alex




Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-23 Thread Alex Shi
On 05/23/2013 03:32 PM, Changlong Xie wrote:
> 2013/5/16 Peter Zijlstra :
>> On Tue, May 14, 2013 at 07:35:25PM +0800, Alex Shi wrote:
>>
>>> I tested all the benchmarks mentioned in the cover letter, aim7,
>>> kbuild, etc., with autogroup enabled. There is no clear performance
>>> change. But since the machine just runs the benchmark without any
>>> other load, that isn't enough.
>>
>> Back when we started with smp-fair cgroup muck someone wrote a test for it. I
>> _think_ it ended up in the LTP test-suite.
>>
> 
> Hi Peter
> 

> my test results:
> 3.10-rc1  patch1-7 patch1-8
> duration=764   duration=754   duration=750
> duration=764   duration=754   duration=751
> duration=763   duration=755   duration=751
> 
> duration is the test run time in seconds.
> 
> Tested-by: Changlong Xie 

It seems the 8th patch is helpful for cgroups. Thanks, Changlong!

-- 
Thanks
Alex


Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-23 Thread Changlong Xie
2013/5/16 Peter Zijlstra :
> On Tue, May 14, 2013 at 07:35:25PM +0800, Alex Shi wrote:
>
>> I tested all the benchmarks mentioned in the cover letter, aim7,
>> kbuild, etc., with autogroup enabled. There is no clear performance
>> change. But since the machine just runs the benchmark without any
>> other load, that isn't enough.
>
> Back when we started with smp-fair cgroup muck someone wrote a test for it. I
> _think_ it ended up in the LTP test-suite.
>

Hi Peter

I just downloaded the latest LTP from
http://sourceforge.net/projects/ltp/files/LTP%20Source/ltp-20130503/
and ran the cgroup benchmark tests on our SB-EP machine with
2 sockets * 8 cores * 2 SMT and 64G memory.

The following is my testing procedure:
1. tar -xvf ltp-full-20130503.tar
2. cd ltp-full-20130503
3. ./configure prefix=/mnt/ltp && make -j32 && sudo make install
4. cd /mnt/ltp

# create general testcase named cgroup_fj
5. echo -e "cgroup_fj  run_cgroup_test_fj.sh" > runtest/cgroup

# we only test the cpuset/cpu/cpuacct cgroup benchmark cases; here is my
cgroup_fj_testcases.sh
6. [changlox@lkp-sb03 bin]$ cat testcases/bin/cgroup_fj_testcases.sh
stress 2 2 1 1 1
stress 4 2 1 1 1
stress 5 2 1 1 1
stress 2 1 1 1 2
stress 2 1 1 2 1
stress 2 1 1 2 2
stress 2 1 1 2 3
stress 2 1 2 1 1
stress 2 1 2 1 2
stress 2 1 2 1 3
stress 2 1 2 2 1
stress 2 1 2 2 2
stress 4 1 1 1 2
stress 4 1 2 1 1
stress 4 1 2 1 2
stress 4 1 2 1 3
stress 5 1 1 1 2
stress 5 1 1 2 1
stress 5 1 1 2 2
stress 5 1 1 2 3
stress 5 1 2 1 1
stress 5 1 2 1 2
stress 5 1 2 1 3
stress 5 1 2 2 1
stress 5 1 2 2 2

# run test
7. sudo ./runltp -p -l /tmp/cgroup.results.log  -d /tmp -o
/tmp/cgroup.log -f cgroup

my test results:
3.10-rc1  patch1-7 patch1-8
duration=764   duration=754   duration=750
duration=764   duration=754   duration=751
duration=763   duration=755   duration=751

duration is the test run time in seconds.

Tested-by: Changlong Xie 

> Now I don't know if that's up-to-date enough to catch some of the cases we've
> recently fixed (as in the past few years) so it might want to be updated.
>
> Paul, do you guys at Google have some nice test-cases for all this?



--
Best regards
Changlox




Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-16 Thread Peter Zijlstra
On Tue, May 14, 2013 at 07:35:25PM +0800, Alex Shi wrote:

> I tested all the benchmarks mentioned in the cover letter, aim7,
> kbuild, etc., with autogroup enabled. There is no clear performance
> change. But since the machine just runs the benchmark without any
> other load, that isn't enough.

Back when we started with smp-fair cgroup muck someone wrote a test for it. I
_think_ it ended up in the LTP test-suite.

Now I don't know if that's up-to-date enough to catch some of the cases we've
recently fixed (as in the past few years) so it might want to be updated.

Paul, do you guys at Google have some nice test-cases for all this?




Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-14 Thread Alex Shi
On 05/14/2013 05:05 PM, Paul Turner wrote:
>> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> > index 91e60ac..75c200c 100644
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -1339,7 +1339,7 @@ static inline void 
>> > __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
>> > struct task_group *tg = cfs_rq->tg;
>> > s64 tg_contrib;
>> >
>> > -   tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
> Nack -- This is necessary for correct shares distribution.

I was going to set this patch as RFC. :)

BTW, did you do any testing of this part?

-- 
Thanks
Alex


Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-14 Thread Alex Shi
On 05/14/2013 04:31 PM, Peter Zijlstra wrote:
> On Fri, May 10, 2013 at 11:17:29PM +0800, Alex Shi wrote:
>> > blocked_load_avg is sometimes too heavy and far bigger than the
>> > runnable load avg, which makes load balancing take wrong decisions,
>> > so it is better not to consider it.
> Would you happen to have an example around that illustrates this? 

Sorry, No.
> 
> Also, you've just changed the cgroup balancing -- did you run any tests on 
> that?
> 

I tested all the benchmarks mentioned in the cover letter, aim7,
kbuild, etc., with autogroup enabled. There is no clear performance
change. But since the machine just runs the benchmark without any
other load, that isn't enough.

-- 
Thanks
Alex


Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-14 Thread Paul Turner
On Fri, May 10, 2013 at 8:17 AM, Alex Shi  wrote:
> blocked_load_avg is sometimes too heavy and far bigger than the
> runnable load avg, which makes load balancing take wrong decisions,
> so it is better not to consider it.
>
> Signed-off-by: Alex Shi 
> ---
>  kernel/sched/fair.c |2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 91e60ac..75c200c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1339,7 +1339,7 @@ static inline void 
> __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
> struct task_group *tg = cfs_rq->tg;
> s64 tg_contrib;
>
> -   tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;

Nack -- This is necessary for correct shares distribution.
> +   tg_contrib = cfs_rq->runnable_load_avg;
> tg_contrib -= cfs_rq->tg_load_contrib;
>
> if (force_update || abs64(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
> --
> 1.7.5.4
>


Re: [patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-14 Thread Peter Zijlstra
On Fri, May 10, 2013 at 11:17:29PM +0800, Alex Shi wrote:
> blocked_load_avg is sometimes too heavy and far bigger than the
> runnable load avg, which makes load balancing take wrong decisions,
> so it is better not to consider it.

Would you happen to have an example around that illustrates this? 

Also, you've just changed the cgroup balancing -- did you run any tests on that?

> Signed-off-by: Alex Shi 
> ---
>  kernel/sched/fair.c |2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 91e60ac..75c200c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1339,7 +1339,7 @@ static inline void 
> __update_cfs_rq_tg_load_contrib(struct cfs_rq *cfs_rq,
>   struct task_group *tg = cfs_rq->tg;
>   s64 tg_contrib;
>  
> - tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
> + tg_contrib = cfs_rq->runnable_load_avg;
>   tg_contrib -= cfs_rq->tg_load_contrib;
>  
>   if (force_update || abs64(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
> -- 
> 1.7.5.4
> 




[patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-10 Thread Alex Shi
blocked_load_avg is sometimes too heavy and far bigger than the
runnable load avg, which makes load balancing take wrong decisions,
so it is better not to consider it.

Signed-off-by: Alex Shi 
---
 kernel/sched/fair.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 91e60ac..75c200c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1339,7 +1339,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct 
cfs_rq *cfs_rq,
struct task_group *tg = cfs_rq->tg;
s64 tg_contrib;
 
-   tg_contrib = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;
+   tg_contrib = cfs_rq->runnable_load_avg;
tg_contrib -= cfs_rq->tg_load_contrib;
 
if (force_update || abs64(tg_contrib) > cfs_rq->tg_load_contrib / 8) {
-- 
1.7.5.4



[patch v6 8/8] sched: remove blocked_load_avg in tg

2013-05-10 Thread Alex Shi
blocked_load_avg sometime is too heavy and far bigger than runnable load
avg. that make balance make wrong decision. So better don't consider it.

Signed-off-by: Alex Shi alex@intel.com
---
 kernel/sched/fair.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 91e60ac..75c200c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1339,7 +1339,7 @@ static inline void __update_cfs_rq_tg_load_contrib(struct 
cfs_rq *cfs_rq,
struct task_group *tg = cfs_rq-tg;
s64 tg_contrib;
 
-   tg_contrib = cfs_rq-runnable_load_avg + cfs_rq-blocked_load_avg;
+   tg_contrib = cfs_rq-runnable_load_avg;
tg_contrib -= cfs_rq-tg_load_contrib;
 
if (force_update || abs64(tg_contrib)  cfs_rq-tg_load_contrib / 8) {
-- 
1.7.5.4

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/