Re: [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-03 Thread Michael Gerdau
> regarding the fairness of the different schedulers, please note the
> different runtimes for each component of the workload:
>
> LTMM: 5655.07/ 5682
> LTMB: 7729.81/ 7755
> LTBM: 7720.70/ 7746
>
> this means that a fair scheduler would _not_ be the one that finishes
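
The point being made is that the three jobs do different amounts of work, so a genuinely fair scheduler should not make them finish together. A minimal processor-sharing sketch in C illustrates this; the work figures are illustrative stand-ins loosely based on the quoted runtimes, and the two-CPU fluid model is a simplification, not how any of the tested schedulers actually works:

#include <stdio.h>

#define NCPU 2
#define NJOB 3

int main(void)
{
    const char *name[NJOB] = { "LTMM", "LTMB", "LTBM" };
    double left[NJOB]      = { 5655.0, 7730.0, 7721.0 }; /* CPU-seconds of work, illustrative */
    int done[NJOB] = { 0 };
    int remaining = NJOB;
    double now = 0.0;

    while (remaining > 0) {
        /* under perfect fairness, runnable jobs share the CPUs equally,
         * but a single job can never use more than one CPU */
        double rate = (double)NCPU / remaining;
        if (rate > 1.0)
            rate = 1.0;

        /* the job with the least work left finishes next */
        int next = -1;
        for (int i = 0; i < NJOB; i++)
            if (!done[i] && (next < 0 || left[i] < left[next]))
                next = i;

        double dt = left[next] / rate;
        now += dt;
        for (int i = 0; i < NJOB; i++)
            if (!done[i])
                left[i] -= rate * dt;

        done[next] = 1;
        remaining--;
        printf("%s finishes at wall-clock %.0f s\n", name[next], now);
    }
    return 0;
}

With equal CPU shares the two larger jobs finish roughly 2000 s after the smallest one, so identical completion times would actually be evidence of unfairness.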

Re: [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-03 Thread Ingo Molnar
* Michael Gerdau <[EMAIL PROTECTED]> wrote:
> There are 3 scenarios:
> j1 - all 3 tasks run sequentially
>   /proc/sys/kernel/sched_granularity_ns=400
>   /proc/sys/kernel/rr_interval=16
> j3 - all 3 tasks run in parallel
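
The two /proc knobs quoted above belong to different patch sets: sched_granularity_ns exists on the CFS-patched kernel and rr_interval on the SD-patched one, so a test setup only writes whichever file the running kernel provides. A small helper for setting them before a run could look like the following sketch; the values written are placeholders, not the settings used in the report:

#include <stdio.h>

/* write a value to a /proc tunable if it exists on the running kernel */
static void set_tunable(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        fprintf(stderr, "skipping %s (not present on this kernel)\n", path);
        return;
    }
    fprintf(f, "%s\n", val);
    fclose(f);
}

int main(void)
{
    /* CFS patch set: granularity, in nanoseconds (placeholder value) */
    set_tunable("/proc/sys/kernel/sched_granularity_ns", "4000000");
    /* SD patch set: round-robin interval knob (placeholder value) */
    set_tunable("/proc/sys/kernel/rr_interval", "16");
    return 0;
}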

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-03 Thread Al Boldi
William Lee Irwin III wrote:
> On Thu, May 03, 2007 at 09:42:51AM +0300, Al Boldi wrote:
> > sched_rr_get_interval(0, &ts);
> > printf("pid %d, prio %3d, interval of %d nsec\n", getpid(),
> >     getpriority(PRIO_PROCESS, 0), ts.tv_nsec);
>
> Oh dear. What are you trying to figure out
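
The archive has mangled the snippet (the second argument to sched_rr_get_interval and the string quotes were eaten in some renderings). A self-contained version of what is evidently being run, reconstructed from the fragments above, would be roughly:

#include <stdio.h>
#include <sched.h>
#include <time.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    struct timespec ts;

    /* ask the kernel how long this task's timeslice is */
    if (sched_rr_get_interval(0, &ts) != 0) {
        perror("sched_rr_get_interval");
        return 1;
    }
    /* the original uses %d for tv_nsec; %ld is the portable choice */
    printf("pid %d, prio %3d, interval of %ld nsec\n",
           getpid(), getpriority(PRIO_PROCESS, 0), ts.tv_nsec);
    return 0;
}

For a SCHED_OTHER task the value reported is whatever the scheduler chooses to expose as a timeslice, which is exactly what the follow-up question below is probing.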

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-03 Thread William Lee Irwin III
> William Lee Irwin III wrote:
On Thu, May 03, 2007 at 09:42:51AM +0300, Al Boldi wrote:
> sched_rr_get_interval(0, &ts);
> printf("pid %d, prio %3d, interval of %d nsec\n", getpid(),
>     getpriority(PRIO_PROCESS, 0), ts.tv_nsec);

Oh dear. What are you trying to figure out from the

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-03 Thread Al Boldi
William Lee Irwin III wrote:
> William Lee Irwin III wrote:
> >> That's odd. The ->load_weight changes should've improved that quite
> >> a bit. There may be something slightly off in how lag is computed,
> >> or maybe the O(n) lag issue Ying Tang spotted is biting you.
>
> On Thu, May 03, 2007 at
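
For readers unfamiliar with the term, "lag" here is the usual fair-queueing notion: the CPU time a task is entitled to under ideal proportional sharing minus the CPU time it has actually received. A generic sketch follows; it is not the exact -cfs-v6 code, and the struct and field names are illustrative:

#include <stdio.h>

struct task {
    const char *name;
    double load_weight;   /* weight derived from the nice level */
    double exec_time;     /* CPU time actually received */
};

/* total_exec: CPU time handed out to all runnable tasks so far
 * total_weight: sum of load_weight over those tasks */
static double lag(const struct task *t, double total_exec, double total_weight)
{
    double entitled = total_exec * t->load_weight / total_weight;
    return entitled - t->exec_time;   /* positive: the task is owed CPU time */
}

int main(void)
{
    struct task t = { "worker", 1024.0, 4.7 };
    printf("%s lag: %.2f s\n", t.name, lag(&t, 10.0, 2048.0));
    return 0;
}

Recomputing the totals by walking every runnable task on each update is the kind of place an O(n) cost, such as the one mentioned above, would come from.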

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-02 Thread William Lee Irwin III
William Lee Irwin III wrote:
>> That's odd. The ->load_weight changes should've improved that quite
>> a bit. There may be something slightly off in how lag is computed,
>> or maybe the O(n) lag issue Ying Tang spotted is biting you.

On Thu, May 03, 2007 at 06:51:43AM +0300, Al Boldi wrote:
> Is

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-02 Thread Al Boldi
William Lee Irwin III wrote:
> Con Kolivas wrote:
> >> Looks good, thanks. Ingo's been hard at work since then and has v8 out
> >> by now. SD has not changed so you wouldn't need to do the whole lot of
> >> tests on SD again unless you don't trust some of the results.
>
> On Thu, May 03, 2007 at

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-02 Thread William Lee Irwin III
Con Kolivas wrote:
>> Looks good, thanks. Ingo's been hard at work since then and has v8 out by
>> now. SD has not changed so you wouldn't need to do the whole lot of tests
>> on SD again unless you don't trust some of the results.

On Thu, May 03, 2007 at 02:11:39AM +0300, Al Boldi wrote:
> Well,

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-02 Thread Al Boldi
Con Kolivas wrote:
> On Monday 30 April 2007 18:05, Michael Gerdau wrote:
> > meanwhile I've redone my numbercrunching tests with the following
> > kernels: 2.6.21.1 (mainline)
> > 2.6.21-sd046
> > 2.6.21-cfs-v6
> > running on a dualcore x86_64.
> > [I will run the same test with

Re: [ck] [REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-05-02 Thread Con Kolivas
On Monday 30 April 2007 18:05, Michael Gerdau wrote:
> Hi list,
>
> meanwhile I've redone my numbercrunching tests with the following kernels:
> 2.6.21.1 (mainline)
> 2.6.21-sd046
> 2.6.21-cfs-v6
> running on a dualcore x86_64.
> [I will run the same test with 2.6.21.1-cfs-v7 over the

[REPORT] 2.6.21.1 vs 2.6.21-sd046 vs 2.6.21-cfs-v6

2007-04-30 Thread Michael Gerdau
Hi list,

meanwhile I've redone my numbercrunching tests with the following kernels:
2.6.21.1 (mainline)
2.6.21-sd046
2.6.21-cfs-v6
running on a dualcore x86_64.
[I will run the same test with 2.6.21.1-cfs-v7 over the next days, likely tonight]

The tests consist of 3 tasks (named LTMM,
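
The excerpt ends before describing how the runs were timed. A minimal sketch of the general approach, launching the three number-crunchers concurrently and recording wall-clock time versus aggregate child CPU time, could look like the following; the ./ltmm, ./ltmb and ./ltbm paths are placeholders, not the actual binaries from the report:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/wait.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
    const char *jobs[] = { "./ltmm", "./ltmb", "./ltbm" };   /* placeholders */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 3; i++) {
        if (fork() == 0) {
            execl(jobs[i], jobs[i], (char *)NULL);
            _exit(127);                    /* exec failed */
        }
    }
    while (wait(NULL) > 0)                 /* wait for all three children */
        ;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    struct rusage ru;
    getrusage(RUSAGE_CHILDREN, &ru);       /* CPU time summed over the children */
    printf("wall %.1f s, children user %.1f s, sys %.1f s\n",
           (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9,
           ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6,
           ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);
    return 0;
}

Per-job CPU figures, as quoted elsewhere in the thread, would need per-child accounting (for example wait4 with its rusage argument) rather than the aggregate used here.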
