Re: [patch] CFS scheduler, -v6

2007-05-01 Thread William Lee Irwin III
On Tue, May 01, 2007 at 09:55:15AM +0200, Nick Piggin wrote:
> People seem to be confusing scheduler policy with a modular framework.
> First of all, I don't know that any of the schedulers can "just go in"
> and replace the mainline one, because they are still under development
> and have not been sufficiently tested and contrasted IMO.

I've already made noise about separate modular framework patches, so
recast whatever confusion seems apparent to you in those terms.


On Tue, May 01, 2007 at 09:55:15AM +0200, Nick Piggin wrote:
> Secondly, if we want to have a modular framework, it should:
> - be a separate patchset to any new scheduler policy
> - obviously retain the existing policy for testing / comparison purposes
> - be able to be compiled out. I don't know whether it's CFS policy or
>   the framework (maybe both), but CFS is quite a bit slower at context
>   switching when I last measured with lmbench (several releases ago).

Compiling such things out is interesting at best, as they typically
require significant code restructuring. You can make the indirect calls
conditional by calling some particular driver directly inside wrapper
macros for the indirect calls via case analysis on ->policy, I suppose.

There are issues with using cfs' notion of a modular framework to
verify performance non-regression, in particular the fact that it's
incapable of representing mainline. There is also the problem that it
does very little in the way of data hiding: in the event of using it
to compare different implementations of the same policy, e.g. competing
SCHED_OTHER implementations, each is stuck maintaining the others'
state variables. The exception is the case where one or both happen to
have all their state-variable updates fit entirely inside the driver
operations, in which case they still take the hit for bloating the
task_struct; worse yet, an asymmetry arises in terms of which is exempt
from maintaining the others' state variables, one that furthermore
penalizes the competitor maintaining the least state or doing the
fewest state updates. Such a state of affairs must not be allowed to stand.


On Tue, May 01, 2007 at 09:55:15AM +0200, Nick Piggin wrote:
> I still would rather not have a modular framework unless we decide
> that is the only way to make upstream progress. But if we did have
> the modular framework, we still need to decide on the process of
> avoiding proliferation, selecting a default scheduler, and a plan for
> future phasing out of non-default GP schedulers once a new one gets
> selected.

It certainly cuts down on the eye bleed but I suppose that takes a
back seat to performance considerations.


-- wli
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-05-01 Thread Nick Piggin
On Sun, Apr 29, 2007 at 03:37:47PM +0200, Thomas Gleixner wrote:
> On Sun, 2007-04-29 at 05:55 -0700, William Lee Irwin III wrote:
> > You'll also hit the same holes should you attempt to write such a
> > modularity patch for mainline as opposed to porting current mainline to
> > the driver API as-given. It takes a bit more work to get something that
> > actually works for all this, and it borders on disingenuity to
> > suggest that the scheduler class/driver API as it now stands is
> > capable of any such thing as porting current mainline, nicksched, or SD
> > to it without significant code impact to the core scheduler code.
> 
> I never said that the current implementation of CFS fits the criteria
> of modularity, but it is a step in that direction. I'm well aware that
> there is a bunch of things missing and it has hard coded leftovers,
> which are related to the current two hard coded policy classes.

[ I've tuned out of most of the scheduler discussion lately, sorry ;) ]


People seem to be confusing scheduler policy with a modular framework.

First of all, I don't know that any of the schedulers can "just go in"
and replace the mainline one, because they are still under development
and have not been sufficiently tested and contrasted IMO.

Secondly, if we want to have a modular framework, it should:
- be a separate patchset to any new scheduler policy
- obviously retain the existing policy for testing / comparison purposes
- be able to be compiled out. I don't know whether it's CFS policy or
  the framework (maybe both), but CFS is quite a bit slower at context
  switching when I last measured with lmbench (several releases ago).

I still would rather not have a modular framework unless we decide
that is the only way to make upstream progress. But if we did have
the modular framework, we still need to decide on the process of
avoiding proliferation, selecting a default scheduler, and a plan for
future phasing out of non-default GP schedulers once a new one gets
selected.





Re: 3d smoothness (was: Re: [patch] CFS scheduler, -v6)

2007-04-30 Thread Kasper Sandberg
On Mon, 2007-04-30 at 22:17 +0200, Ingo Molnar wrote:
> * Kasper Sandberg <[EMAIL PROTECTED]> wrote:
> 
> > This patch makes things much worse, [...]
> 
> yeah, the small patch i sent to you in private mail was indeed buggy,
> please disregard it.
It also hardlocked my box :) but it was worth a shot.
> 
>   Ingo
> 



Re: 3d smoothness (was: Re: [patch] CFS scheduler, -v6)

2007-04-30 Thread Ingo Molnar

* Kasper Sandberg <[EMAIL PROTECTED]> wrote:

> This patch makes things much worse, [...]

yeah, the small patch i sent to you in private mail was indeed buggy,
please disregard it.

Ingo


3d smoothness (was: Re: [patch] CFS scheduler, -v6)

2007-04-30 Thread Kasper Sandberg
On Sunday 29 April 2007 19:39, Ingo Molnar wrote:
> hi Kasper,
>
> i found an aspect of CFS that could cause the kind of 'stuttering' you
> described in such detail. I'm wondering whether you could try the
> attached -v8-rc1 patch ontop of the -v7 CFS patch - does it improve the
> 'games FPS' situation in any way? Thanks in advance,
>
>   Ingo

This patch makes things much worse; I'd categorize it as a severe
regression compared to CFS v7. It makes the cursor in X stutter
enormously, it even caused my entire X session to lock up for a second,
and events like keyboard input are totally wrecked; it lagged as I
wrote in xchat. As for under load, it seems only worse.

Also, if I just press a link in konqueror on some website, the mouse
stutters while it loads, until the page has finished loading.

This seems weird because it's such a relatively simple patch.

In the patch's defense, GTK seems to redraw faster (when, and only
when, 3D is NOT running).

I also discovered another thing about CFS v7 (without this patch): it
shows the same behavior as old staircase/vanilla, which SD actually fixes.

This is a Wine case: when it loads a level in World of Warcraft, the
audio skips. I believe this to be a problem in Wine; however, under SD
it actually does not skip. On the desktop, the audio issues were
totally fixed in v7.

mvh.
Kasper Sandberg




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mark Lord

Willy Tarreau wrote:

..
Contrary to most people, I don't see them as competitors. I see SD as
a first step with a low risk of regression, and CFS as an ultimate
solution relying on a more solid framework.


I see SD as a 100% chance of regression on my main machine.

But I will retest (on Monday?) with the latest, just to see
if it has improved closer to mainline or not.

-ml


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 08:42 -0700, Ray Lee wrote:
> On 4/29/07, Kasper Sandberg <[EMAIL PROTECTED]> wrote:
> > On Sun, 2007-04-29 at 08:59 +0200, Ingo Molnar wrote:
> > > well, there are several reports of CFS being significantly better than
> > > SD on a number of workloads - and i know of only two reports where SD
> > > was reported to be better than CFS: in Kasper's test (where i'd like to
> > > know what the "3D stuff" he uses is and take a good look at that
> > > workload), and another 3D report which was done against -v6. (And even
> > > in these two reports the 'smoothness advantage' was not dramatic. If you
> > > know of any other reports then please let me know!)
> >
> > I can tell you one thing, it's not just me that has observed the
> > smoothness in 3d stuff, after i tried rsdl first i've had lots of people
> > try rsdl and subsequently sd because of the significant improvement in
> > smoothness, and they have all found the same results.
> >
> > The stuff i have tested with in particular is unreal tournament 2004 and
> > world of warcraft through wine, both running opengl, and consuming all
> > the cpu time it can get.
> 
> [snip more of sd smoother than cfs report]
> 
> WINE is an interesting workload as it does most of its work out of
> process to the 'wineserver', which then does more work out of process
> to the X server. So, it's three mutually interacting processes total,
> once one includes the original client (Unreal Tournament or World of
> Warcraft, in this case).
The wineserver process uses next to no CPU time compared to the main
process.

> 
> Perhaps running one of the windows system performance apps (that can
> be freely downloaded) under WINE would give some hard numbers people
> could use to try to reproduce the report.
> 
> Ray
> 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ray Lee

On 4/29/07, Kasper Sandberg <[EMAIL PROTECTED]> wrote:

On Sun, 2007-04-29 at 08:59 +0200, Ingo Molnar wrote:
> well, there are several reports of CFS being significantly better than
> SD on a number of workloads - and i know of only two reports where SD
> was reported to be better than CFS: in Kasper's test (where i'd like to
> know what the "3D stuff" he uses is and take a good look at that
> workload), and another 3D report which was done against -v6. (And even
> in these two reports the 'smoothness advantage' was not dramatic. If you
> know of any other reports then please let me know!)

I can tell you one thing, it's not just me that has observed the
smoothness in 3d stuff, after i tried rsdl first i've had lots of people
try rsdl and subsequently sd because of the significant improvement in
smoothness, and they have all found the same results.

The stuff i have tested with in particular is unreal tournament 2004 and
world of warcraft through wine, both running opengl, and consuming all
the cpu time it can get.


[snip more of sd smoother than cfs report]

WINE is an interesting workload as it does most of its work out of
process to the 'wineserver', which then does more work out of process
to the X server. So, it's three mutually interacting processes total,
once one includes the original client (Unreal Tournament or World of
Warcraft, in this case).

Perhaps running one of the Windows system-performance apps (which can
be freely downloaded) under WINE would give some hard numbers people
could use to try to reproduce the report.

Ray


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Gene Heskett
On Sunday 29 April 2007, Paolo Ciarrocchi wrote:
[...]

>> > > CFS modifies the scheduler and nothing else, SD fiddles all over the
>> > > kernel in interesting ways.

Huh?  Doesn't grok.

>> > Hmmm I guess you confused both of them this time. CFS touches many
>> > places, which is why I think the testing coverage is still very low. SD
>> > can be tested faster. My real concern is : are there still people
>> > observing regressions with it ? If yes, they should be fixed before even
>> > being merged. If no, why not merge it as a fix for the many known corner
>> > cases of current scheduler ? After all, it's already in -mm.
>> >
>> > Willy
>>
>> Willy, you're making far too much sense. Are you replying to the correct
>> mailing list?
>
>FWIW, I strongly agree with Willy.

If we're putting it to a vote, I'm with Willy.  But this is a dictatorship and 
we shouldn't forget it. :)

-- 
Cheers, Gene
"There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
An ambassador is an honest man sent abroad to lie and intrigue for the
benefit of his country.
-- Sir Henry Wotton, 1568-1639


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Gene Heskett
On Sunday 29 April 2007, Willy Tarreau wrote:
>On Sun, Apr 29, 2007 at 08:59:01AM +0200, Ingo Molnar wrote:
>> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
>> > I don't know if Mike still has problems with SD, but there are now
>> > several interesting reports of SD giving better feedback than CFS on
>> > real work. In my experience, CFS seems smoother on *technical* tests,
>> > which I agree that they do not really simulate real work.
>>
>> well, there are several reports of CFS being significantly better than
>> SD on a number of workloads - and i know of only two reports where SD
>> was reported to be better than CFS: in Kasper's test (where i'd like to
>> know what the "3D stuff" he uses is and take a good look at that
>> workload), and another 3D report which was done against -v6. (And even
>> in these two reports the 'smoothness advantage' was not dramatic. If you
>> know of any other reports then please let me know!)
>
>There was Caglar Onur too, but he said he would redo all the tests. I'm
>not tracking all tests nor versions, so it might be possible that some
>of the differences vanish with v7.
>
>In fact, what I'd like to see in 2.6.22 is something better for everybody
>and with *no* regression, even if it's not perfect. I had the feeling
>that SD matched that goal right now, except for Mike who has not tested
>recent versions. Don't get me wrong, I still think that CFS is a more
>interesting long-term target. But it may require more time to satisfy
>everyone. At least with one of them in 2.6.22, we won't waste time
>comparing to current mainline.
>
>>  Ingo
>
>Willy

In the FWIW category, I haven't built and tested a 'mainline' since at least 
2-3 weeks ago.  That's how dramatic the differences are here.  Here, my main 
notifier of scheduling fubar artifacts is usually kmail, which in itself 
seems to have a poor threading model, giving the composer pauses whenever it's 
off sorting incoming mail, or compacting a folder, all the usual stuff that 
it needs to do in the background.  Those lags were from 10 to 30 seconds 
long, and I could type whole sentences before they showed up on screen with 
mainline.

The best either of these schedulers can do is hold that down to 3 or 4 words,
but that's an amazing difference in itself. With either of these schedulers,
a gzip session that amanda launched in the background, which used to make
kmail take 5-30 seconds to display a new message after the + key was tapped,
is now in the sub-4-second range and often much less. SD seems to, as it
states, give everyone a turn at the well, so the slowdowns when gzip is
running are somewhat more noticeable, whereas with CFS, gzip seems to be
pretty well preempted long enough to process most user keypresses. Not all,
because tapping the + key to display the next message can at times be a
pretty complex operation.

For my workload, CFS seems to be a marginally better solution, but either is 
so much better than mainline that there cannot be a reversion to mainline 
performance here without a lot of kicking and screaming.

'vmstat -n 1' results show that CFS spends a lot less time doing context
switches, which, as I understand it, counts against OS overhead since no
productive work gets done while the switch is in progress. For CFS that's
generally less than 500/second, averaging around 350; compared to SD046's
average of about 18,000/second, it would appear that CFS allows more REAL
work to get done by holding down the non-productive time that context
switches require.

FWIW, amanda runtimes tend to back that up: most CFS runs are sub 2 hours,
SD runs are seemingly around 2h:10m. But that again is not over a sufficiently
large sample to be a benchmark tool either, just one person's observation. I
should have marked the amdump logs so I could have determined that more
easily by tracking which scheduler was running for each dump. amplot can be
informative, but one must also correlate, and a week ago is ancient history
as I have no way to verify which I was running then.

The X86's poor register architecture pretty well chains us to the 'context 
switch' if we want multitasking.

I'm reminded of how that was handled on a now largely dead architecture some
here may never have seen an example of: TI's 99xx chips. There, all
accumulators and registers were actually stored in memory, and a context
switch was simply a matter of reloading the register that pointed into this
memory array with a new address; switching contexts meant reading the next
process's address and storing it in the address register, itself also just a
location in memory. The state image of the process being 'put to sleep' was
therefore maintained indefinitely, as long as the memory was refreshed. Too
bad we can't do that on the x86, but I assume TI has patent lawyers standing
by ready to jump on that one. However, with today's L1 cache being the speed
and size that it is, it sure looks like a doable thing even 

Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 05:55 -0700, William Lee Irwin III wrote:
> You'll also hit the same holes should you attempt to write such a
> modularity patch for mainline as opposed to porting current mainline to
> the driver API as-given. It takes a bit more work to get something that
> actually works for all this, and it borders on disingenuity to
> suggest that the scheduler class/driver API as it now stands is
> capable of any such thing as porting current mainline, nicksched, or SD
> to it without significant code impact to the core scheduler code.

I never said that the current implementation of CFS fits the criteria
of modularity, but it is a step in that direction. I'm well aware that
there is a bunch of things missing and it has hard coded leftovers,
which are related to the current two hard coded policy classes.

> So on both these points, I don't see cfs as being adequate as it now
> stands for a modular, hierarchical scheduler design. If we want a truly
> modular and hierarchical scheduler design, I'd suggest pursuing it
> directly and independently of policy, and furthermore considering the
> representability of various policies in the scheduling class/driver API
> as a test of its adequacy.

Ack. I don't worry much whether the CFS policy is better than the SD
one. I'm all for a truly modular design. SD and SCHED_FAIR are good
proofs for it.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 02:13:30PM +0200, Thomas Gleixner wrote:
> SD is a one to one replacement of the existing scheduler guts - with a
> different behaviour.
> CFS is a huge step into a modular and hierarchical scheduler design,
> which allows more than just implementing a clever scheduler for a single
> purpose. In a hierarchical scheduler you can implement resource
> management and other fancy things, in the monolithic design of the
> current scheduler (and it's proposed replacement SD) you can't. But SD
> can be made one of the modular variants.

The modularity provided is not enough to allow an implementation of
mainline, SD, or nicksched without significant core scheduler impact.

CFS doesn't have all that much to do with scheduler classes. A weak form
of them was done in tandem with the scheduler itself. The modularity
provided is sufficiently weak that the advantage is largely the prettiness of the
code. So essentially CFS is every bit as monolithic as mainline, SD, et
al, with some dressing that suggests modularity without actually making
any accommodations for alternative policies (e.g. reverting to mainline).

You'll hit the holes in the driver API quite quickly should you attempt
to port mainline to it. You'll hit several missing driver operations
right in schedule(), for starters. At some point you may also notice
that simple enqueue operations are not all that's there. Representing
enqueueing to active vs. expired and head vs. tail are needed for
current mainline to be representible by a set of driver operations.
It's also a bit silly to remove and re-insert a queue element for cfs
(or anything else using a tree-structured heap, which yes, a search
tree is, even if a slow one), which could use a reprioritization driver
operation, but I suppose it won't oops.

You'll also hit the same holes should you attempt to write such a
modularity patch for mainline as opposed to porting current mainline to
the driver API as-given. It takes a bit more work to get something that
actually works for all this, and it borders on disingenuity to
suggest that the scheduler class/driver API as it now stands is
capable of any such thing as porting current mainline, nicksched, or SD
to it without significant code impact to the core scheduler code.

So on both these points, I don't see cfs as being adequate as it now
stands for a modular, hierarchical scheduler design. If we want a truly
modular and hierarchical scheduler design, I'd suggest pursuing it
directly and independently of policy, and furthermore considering the
representability of various policies in the scheduling class/driver API
as a test of its adequacy.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 01:59:13PM +0200, Thomas Gleixner wrote:
> On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
> > > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> > 
> > Contrarily to most people, I don't see them as competitors. I see SD as
> > a first step with a low risk of regression, and CFS as an ultimate
> > solution relying on a more solid framework.
> 
> That's the whole reason why I don't see any usefulness in merging SD
> now. When we merge SD now, then we need to care of both - the real
> solution and the fixup of regressions. Right now we have a not perfect
> scheduler with known weak points. Ripping it out and replacing it is
> going to introduce regressions, what ever low risk you see.

Of course, but that's also the purpose of -rc. And given its small
footprint, it will be as easy to revert it as to apply it, should any
big problem appear.

> And I still do not see a benefit of an intermediate step with a in my
> opinion medium to high risk of regressions, instead of going the full
> way, when we agree that this is the correct solution.

The only difference is the time to get it in the right shape. If it
requires 3 versions (6 months), it may be worth "upgrading" the current
scheduler to make users happy. I'm not kidding, I've switched the default
boot to 2.6 on my notebook after trying SD and CFS. It was the first time
I got my system in 2.6 at least as usable as in 2.4. And I know I'm not
the only one.

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 14:13 +0200, Thomas Gleixner wrote:
> On Sun, 2007-04-29 at 14:00 +0200, Kasper Sandberg wrote:
> > On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
> > > On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
> > 
> > > Contrarily to most people, I don't see them as competitors. I see SD as
> > > a first step with a low risk of regression, and CFS as an ultimate
> > > solution relying on a more solid framework.
> > > 
> > See, this is the part I don't understand: what makes CFS the ultimate
> > solution compared to SD?
> 
> SD is a one to one replacement of the existing scheduler guts - with a
> different behaviour.
> 
> CFS is a huge step into a modular and hierarchical scheduler design,
> which allows more than just implementing a clever scheduler for a single
> purpose. In a hierarchical scheduler you can implement resource
> management and other fancy things, in the monolithic design of the
> current scheduler (and its proposed replacement SD) you can't. But SD
> can be made one of the modular variants.
But all these things, aren't they just in the modular scheduler policy
code, and not the actual sched_cfs one?

> 
>   tglx
> 
> 
> 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 14:00 +0200, Kasper Sandberg wrote:
> On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
> > On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
> 
> > Contrarily to most people, I don't see them as competitors. I see SD as
> > a first step with a low risk of regression, and CFS as an ultimate
> > solution relying on a more solid framework.
> > 
> See, this is the part I don't understand: what makes CFS the ultimate
> solution compared to SD?

SD is a one to one replacement of the existing scheduler guts - with a
different behaviour.

CFS is a huge step into a modular and hierarchical scheduler design,
which allows more than just implementing a clever scheduler for a single
purpose. In a hierarchical scheduler you can implement resource
management and other fancy things, in the monolithic design of the
current scheduler (and its proposed replacement SD) you can't. But SD
can be made one of the modular variants.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Paolo Ciarrocchi

On 4/29/07, Con Kolivas <[EMAIL PROTECTED]> wrote:

On Sunday 29 April 2007 21:11, Willy Tarreau wrote:
> On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
> > Willy,
> >
> > On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
> > > In fact, what I'd like to see in 2.6.22 is something better for
> > > everybody and with *no* regression, even if it's not perfect. I had the
> > > feeling that SD matched that goal right now, except for Mike who has
> > > not tested recent versions. Don't get me wrong, I still think that CFS
> > > is a more interesting long-term target. But it may require more time to
> > > satisfy everyone. At least with one of them in 2.6.22, we won't waste
> > > time comparing to current mainline.
> >
> > Oh no, we really do _NOT_ want to throw SD or anything else at mainline
> > in a hurry just for not wasting time on comparing to the current
> > scheduler.
>
> It is not about doing it in a hurry. I see SD as a small yet efficient
> update to current scheduler. It's not perfect, probably not much extensible
> but the risks of breaking anything are small given the fact that it does
> not change much of the code or behaviour.
>
> IMHO, it is something which can provide users with a useful update while
> leaving us with some more time to carefully implement the features of CFS
> one at a time, and if it requires 5 versions, it's not a problem.
>
> > I agree that CFS is the more interesting target and I prefer to push the
> > more interesting one even if it takes a release cycle longer. The main
> > reason for me is the design of CFS. Even if it is not really modular
> > right now, it is not rocket science to make it fully modular.
> >
> > Looking at the areas where people work on, e.g. containers, resource
> > management, cpu isolation, fully tickless systems , we really need
> > to go into that direction, when we want to avoid permanent tinkering in
> > the core scheduler code for the next five years.
> >
> > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
>
> Contrarily to most people, I don't see them as competitors. I see SD as
> a first step with a low risk of regression, and CFS as an ultimate
> solution relying on a more solid framework.
>
> > CFS modifies the scheduler and nothing else, SD fiddles all over the
> > kernel in interesting ways.
>
> Hmmm I guess you confused both of them this time. CFS touches many places,
> which is why I think the testing coverage is still very low. SD can be
> tested faster. My real concern is : are there still people observing
> regressions with it ? If yes, they should be fixed before even being
> merged. If no, why not merge it as a fix for the many known corner cases
> of current scheduler ? After all, it's already in -mm.
>
> Willy

Willy, you're making far too much sense. Are you replying to the correct
mailing list?


FWIW, I strongly agree with Willy.

Ciao,
--
Paolo
"Tutto cio' che merita di essere fatto,merita di essere fatto bene"
Philip Stanhope IV conte di Chesterfield


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
> On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:

> Contrarily to most people, I don't see them as competitors. I see SD as
> a first step with a low risk of regression, and CFS as an ultimate
> solution relying on a more solid framework.
> 
See, this is the part I don't understand: what makes CFS the ultimate
solution compared to SD?


> 
> Willy
> 
> 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
> > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> 
> Contrarily to most people, I don't see them as competitors. I see SD as
> a first step with a low risk of regression, and CFS as an ultimate
> solution relying on a more solid framework.

That's the whole reason why I don't see any usefulness in merging SD
now. When we merge SD now, then we need to care of both - the real
solution and the fixup of regressions. Right now we have a not perfect
scheduler with known weak points. Ripping it out and replacing it is
going to introduce regressions, what ever low risk you see.

And I still do not see a benefit of an intermediate step with a in my
opinion medium to high risk of regressions, instead of going the full
way, when we agree that this is the correct solution.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 20:53 +1000, Con Kolivas wrote:
> On Sunday 29 April 2007 20:30, Thomas Gleixner wrote:
> > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> > CFS modifies the scheduler and nothing else, SD fiddles all over the
> > kernel in interesting ways.
> 
> This is a WTF if ever I saw one.

Sorry. My dumbness. I went into the wrong directory to run the diffstat.
It had the full ck set applied.

tglx

/me goes off to clean the harddisk




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Con Kolivas
On Sunday 29 April 2007 21:11, Willy Tarreau wrote:
> On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
> > Willy,
> >
> > On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
> > > In fact, what I'd like to see in 2.6.22 is something better for
> > > everybody and with *no* regression, even if it's not perfect. I had the
> > > feeling that SD matched that goal right now, except for Mike who has
> > > not tested recent versions. Don't get me wrong, I still think that CFS
> > > is a more interesting long-term target. But it may require more time to
> > > satisfy everyone. At least with one of them in 2.6.22, we won't waste
> > > time comparing to current mainline.
> >
> > Oh no, we really do _NOT_ want to throw SD or anything else at mainline
> > in a hurry just for not wasting time on comparing to the current
> > scheduler.
>
> It is not about doing it in a hurry. I see SD as a small yet efficient
> update to current scheduler. It's not perfect, probably not much extensible
> but the risks of breaking anything are small given the fact that it does
> not change much of the code or behaviour.
>
> IMHO, it is something which can provide users with a useful update while
> leaving us with some more time to carefully implement the features of CFS
> one at a time, and if it requires 5 versions, it's not a problem.
>
> > I agree that CFS is the more interesting target and I prefer to push the
> > more interesting one even if it takes a release cycle longer. The main
> > reason for me is the design of CFS. Even if it is not really modular
> > right now, it is not rocket science to make it fully modular.
> >
> > Looking at the areas where people work on, e.g. containers, resource
> > management, cpu isolation, fully tickless systems , we really need
> > to go into that direction, when we want to avoid permanent tinkering in
> > the core scheduler code for the next five years.
> >
> > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
>
> Contrarily to most people, I don't see them as competitors. I see SD as
> a first step with a low risk of regression, and CFS as an ultimate
> solution relying on a more solid framework.
>
> > CFS modifies the scheduler and nothing else, SD fiddles all over the
> > kernel in interesting ways.
>
> Hmmm I guess you confused both of them this time. CFS touches many places,
> which is why I think the testing coverage is still very low. SD can be
> tested faster. My real concern is : are there still people observing
> regressions with it ? If yes, they should be fixed before even being
> merged. If no, why not merge it as a fix for the many known corner cases
> of current scheduler ? After all, it's already in -mm.
>
> Willy

Willy, you're making far too much sense. Are you replying to the correct 
mailing list?

-- 
-ck


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 12:48 +0200, Kasper Sandberg wrote:
> On Sun, 2007-04-29 at 12:30 +0200, Thomas Gleixner wrote:
> > Willy,
> 
> > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> > CFS modifies the scheduler and nothing else, SD fiddles all over the
> > kernel in interesting ways. 
> > 
> 
> have you looked at diffstat lately? :)
> 
> sd:
>  Documentation/sched-design.txt  |  241 +++
>  Documentation/sysctl/kernel.txt |   14
>  Makefile|2
>  fs/pipe.c   |7
>  fs/proc/array.c |2
>  include/linux/init_task.h   |4
>  include/linux/sched.h   |   32 -
>  kernel/sched.c  | 1279 +++-
>  kernel/softirq.c|2
>  kernel/sysctl.c |   26
>  kernel/workqueue.c  |2
>  11 files changed, 919 insertions(+), 692 deletions(-)
> 
> cfs:
>  Documentation/kernel-parameters.txt |   43
>  Documentation/sched-design-CFS.txt  |  107 +
>  Makefile|2
>  arch/i386/kernel/smpboot.c  |   13
>  arch/i386/kernel/tsc.c  |8
>  arch/ia64/kernel/setup.c|6
>  arch/mips/kernel/smp.c  |   11
>  arch/sparc/kernel/smp.c |   10
>  arch/sparc64/kernel/smp.c   |   36
>  fs/proc/array.c |   11
>  fs/proc/base.c  |2
>  fs/proc/internal.h  |1
>  include/asm-i386/unistd.h   |3
>  include/asm-x86_64/unistd.h |4
>  include/linux/hardirq.h |   13
>  include/linux/sched.h   |   94 +
>  init/main.c |2
>  kernel/exit.c   |3
>  kernel/fork.c   |4
>  kernel/posix-cpu-timers.c   |   34
>  kernel/sched.c  | 2288 +---
>  kernel/sched_debug.c|  152 ++
>  kernel/sched_fair.c |  601 +
>  kernel/sched_rt.c   |  184 ++
>  kernel/sched_stats.h|  235 +++
>  kernel/sysctl.c |   32
>  26 files changed, 2062 insertions(+), 1837 deletions(-)

Sorry, my bad. I looked at a diffstat of the full ck set, not only the
SD part of it.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
> Willy,
> 
> On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
> > In fact, what I'd like to see in 2.6.22 is something better for everybody
> > and with *no* regression, even if it's not perfect. I had the feeling
> > that SD matched that goal right now, except for Mike who has not tested
> > recent versions. Don't get me wrong, I still think that CFS is a more
> > interesting long-term target. But it may require more time to satisfy
> > everyone. At least with one of them in 2.6.22, we won't waste time
> > comparing to current mainline.
> 
> Oh no, we really do _NOT_ want to throw SD or anything else at mainline
> in a hurry just for not wasting time on comparing to the current
> scheduler.

It is not about doing it in a hurry. I see SD as a small yet efficient
update to current scheduler. It's not perfect, probably not much extensible
but the risks of breaking anything are small given the fact that it does
not change much of the code or behaviour.

IMHO, it is something which can provide users with a useful update while
leaving us with some more time to carefully implement the features of CFS
one at a time, and if it requires 5 versions, it's not a problem.

> I agree that CFS is the more interesting target and I prefer to push the
> more interesting one even if it takes a release cycle longer. The main
> reason for me is the design of CFS. Even if it is not really modular
> right now, it is not rocket science to make it fully modular.
> 
> Looking at the areas where people work on, e.g. containers, resource
> management, cpu isolation, fully tickless systems , we really need
> to go into that direction, when we want to avoid permanent tinkering in
> the core scheduler code for the next five years.
> 
> As a sidenote: I really wonder if anybody noticed yet, that the whole
> CFS / SD comparison is so ridiculous, that it is not even funny anymore.

Contrarily to most people, I don't see them as competitors. I see SD as
a first step with a low risk of regression, and CFS as an ultimate
solution relying on a more solid framework.

> CFS modifies the scheduler and nothing else, SD fiddles all over the
> kernel in interesting ways. 

Hmmm I guess you confused both of them this time. CFS touches many places,
which is why I think the testing coverage is still very low. SD can be
tested faster. My real concern is : are there still people observing
regressions with it ? If yes, they should be fixed before even being
merged. If no, why not merge it as a fix for the many known corner cases
of current scheduler ? After all, it's already in -mm.

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread hui
On Sun, Apr 29, 2007 at 08:53:36PM +1000, Con Kolivas wrote:
> On Sunday 29 April 2007 20:30, Thomas Gleixner wrote:
> > As a sidenote: I really wonder if anybody noticed yet, that the whole
> > CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> > CFS modifies the scheduler and nothing else, SD fiddles all over the
> > kernel in interesting ways.
> 
> This is a WTF if ever I saw one.

You should look at the progression of SD versus CFS. You'll find the exact
opposite has happened and it's kind of baffling that you'd say something like
that. So I don't know what is coloring your experiences with this.

SD is a highly regular patch and applies cleanly to that portion of the kernel.
Folks have been asking for some kind of pluggability for this kind of
development for years and had it repeatedly blocked in various ways. So it
seems quite odd that you'd say something like that.

bill



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Con Kolivas
On Sunday 29 April 2007 20:30, Thomas Gleixner wrote:
> As a sidenote: I really wonder if anybody noticed yet, that the whole
> CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> CFS modifies the scheduler and nothing else, SD fiddles all over the
> kernel in interesting ways.

This is a WTF if ever I saw one.

-- 
-ck


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 12:30 +0200, Thomas Gleixner wrote:
> Willy,

> As a sidenote: I really wonder if anybody noticed yet, that the whole
> CFS / SD comparison is so ridiculous, that it is not even funny anymore.
> CFS modifies the scheduler and nothing else, SD fiddles all over the
> kernel in interesting ways. 
> 

have you looked at diffstat lately? :)

sd:
 Documentation/sched-design.txt  |  241 +++
 Documentation/sysctl/kernel.txt |   14
 Makefile|2
 fs/pipe.c   |7
 fs/proc/array.c |2
 include/linux/init_task.h   |4
 include/linux/sched.h   |   32 -
 kernel/sched.c  | 1279 +++-
 kernel/softirq.c|2
 kernel/sysctl.c |   26
 kernel/workqueue.c  |2
 11 files changed, 919 insertions(+), 692 deletions(-)

cfs:
 Documentation/kernel-parameters.txt |   43
 Documentation/sched-design-CFS.txt  |  107 +
 Makefile|2
 arch/i386/kernel/smpboot.c  |   13
 arch/i386/kernel/tsc.c  |8
 arch/ia64/kernel/setup.c|6
 arch/mips/kernel/smp.c  |   11
 arch/sparc/kernel/smp.c |   10
 arch/sparc64/kernel/smp.c   |   36
 fs/proc/array.c |   11
 fs/proc/base.c  |2
 fs/proc/internal.h  |1
 include/asm-i386/unistd.h   |3
 include/asm-x86_64/unistd.h |4
 include/linux/hardirq.h |   13
 include/linux/sched.h   |   94 +
 init/main.c |2
 kernel/exit.c   |3
 kernel/fork.c   |4
 kernel/posix-cpu-timers.c   |   34
 kernel/sched.c  | 2288 +---
 kernel/sched_debug.c|  152 ++
 kernel/sched_fair.c |  601 +
 kernel/sched_rt.c   |  184 ++
 kernel/sched_stats.h|  235 +++
 kernel/sysctl.c |   32
 26 files changed, 2062 insertions(+), 1837 deletions(-)


> This is worse than apples and oranges, it's more like apples and
> screwdrivers. 

> 
>   tglx
> 
> 
> 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
> Can we please stop this useless pissing contest and sit down and get a
> modular design into mainline, which allows folks to work and integrate
> their "workload X perfect scheduler" and gives us the flexibility to
> adjust to the needs of upcoming functionality.

If I don't see some sort of modularity patch soon I'll post one myself.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
Willy,

On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
> In fact, what I'd like to see in 2.6.22 is something better for everybody
> and with *no* regression, even if it's not perfect. I had the feeling
> that SD matched that goal right now, except for Mike who has not tested
> recent versions. Don't get me wrong, I still think that CFS is a more
> interesting long-term target. But it may require more time to satisfy
> everyone. At least with one of them in 2.6.22, we won't waste time
> comparing to current mainline.

Oh no, we really do _NOT_ want to throw SD or anything else at mainline
in a hurry just for not wasting time on comparing to the current
scheduler.

I agree that CFS is the more interesting target and I prefer to push the
more interesting one even if it takes a release cycle longer. The main
reason for me is the design of CFS. Even if it is not really modular
right now, it is not rocket science to make it fully modular.

Looking at the areas where people work on, e.g. containers, resource
management, cpu isolation, fully tickless systems, we really need
to go into that direction, when we want to avoid permanent tinkering in
the core scheduler code for the next five years.

As a sidenote: I really wonder if anybody noticed yet, that the whole
CFS / SD comparison is so ridiculous, that it is not even funny anymore.
CFS modifies the scheduler and nothing else, SD fiddles all over the
kernel in interesting ways. 

This is worse than apples and oranges, it's more like apples and
screwdrivers. 

Can we please stop this useless pissing contest and sit down and get a
modular design into mainline, which allows folks to work and integrate
their "workload X perfect scheduler" and gives us the flexibility to
adjust to the needs of upcoming functionality.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mike Galbraith
On Sun, 2007-04-29 at 19:52 +1000, Con Kolivas wrote:
> On Sunday 29 April 2007 18:00, Ingo Molnar wrote:
> > * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> > > > > [...] except for Mike who has not tested recent versions. [...]
> > > >
> > > > actually, dont discount Mark Lord's test results either. And it
> > > > might be a good idea for Mike to re-test SD 0.46?
> > >
> > > In any case, it might be a good idea because Mike encountered a
> > > problem that nobody could reproduce. [...]
> >
> > actually, Mark Lord too reproduced something similar to Mike's results.
> > Please try those workloads yourself.
> 
> I see no suggestion that either Mark or Mike have tested, or for that matter 
> _have any intention of testing_, the current version of SD without fancy 
> renicing or anything involved. Willy I greatly appreciate you trying, but I
> don't know why you're bothering even trying here since clearly 1. Ingo is the 
> scheduler maintainer 2. he's working on a competing implementation and 3. in 
> my excellent physical and mental state I seem to have slighted the two 
> testers (both?) somewhere along the line. Mike feels his testing was a 
> complete waste of time yet it would be ludicrous for me to say that SD didn't 
> evolve 20 versions further due to his earlier testing, and was the impetus 
> for you to start work on CFS. The crunch came that we couldn't agree that 
> fair was appropriate for mainline and we parted ways. That fairness has not 
> been a problem for his view on CFS though but he has only tested older 
> versions of SD that still had bugs.

The crunch for me came when you started hand-waving and spin-doctoring
as you are doing now.  Listening to twisted echoes of my voice is not my
idea of a good time.

-Mike



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Con Kolivas
On Sunday 29 April 2007 18:00, Ingo Molnar wrote:
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> > > > [...] except for Mike who has not tested recent versions. [...]
> > >
> > > actually, dont discount Mark Lord's test results either. And it
> > > might be a good idea for Mike to re-test SD 0.46?
> >
> > In any case, it might be a good idea because Mike encountered a
> > problem that nobody could reproduce. [...]
>
> actually, Mark Lord too reproduced something similar to Mike's results.
> Please try those workloads yourself.

I see no suggestion that either Mark or Mike have tested, or for that matter 
_have any intention of testing_, the current version of SD without fancy 
renicing or anything involved. Willy I greatly appreciate you trying, but I 
don't know why you're bothering even trying here since clearly 1. Ingo is the 
scheduler maintainer 2. he's working on a competing implementation and 3. in 
my excellent physical and mental state I seem to have slighted the two 
testers (both?) somewhere along the line. Mike feels his testing was a 
complete waste of time yet it would be ludicrous for me to say that SD didn't 
evolve 20 versions further due to his earlier testing, and was the impetus 
for you to start work on CFS. The crunch came that we couldn't agree that 
fair was appropriate for mainline and we parted ways. That fairness has not 
been a problem for his view on CFS though but he has only tested older 
versions of SD that still had bugs.

Given facts 1 and 2 above I have all but resigned myself to the fact that SD 
has -less than zero- chance of ever being considered for mainline and it's my 
job to use it as something to compare your competing design with to make sure 
that when (and I do mean when since there seems no doubt in everyone else's 
mind) CFS becomes part of mainline that it is as good as SD. Saying people 
found CFS better than SD is, in my humble opinion, an exaggeration since 
every one I could find was a glowing standalone report of CFS rather than any 
comparison to the current very stable bug free version of SD. On the other 
hand I still see that when people compare them side to side they find SD is 
better, so I will hold CFS against that comparison - when comparing fairness 
based designs.

On a related note - implementing a framework is nice but doesn't address any 
of the current fairness/starvation/corner case problems mainline has. I don't 
see much point in rushing the framework merging since it's still in flux.

-- 
-ck


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 12:54:36AM -0700, William Lee Irwin III wrote:
>> Common code for rbtree-based priority queues can be factored out of
>> cfq, cfs, and hrtimers.

On Sun, Apr 29, 2007 at 10:13:17AM +0200, Willy Tarreau wrote:
> In my experience, rbtrees are painfully slow. Yesterday, I spent the
> day replacing them in haproxy with other trees I developed a few
> years ago, which look like radix trees. They are about 2-3 times as
> fast to insert 64-bit data, and you walk through them in O(1). I have
> many changes to apply to them before they could be used in kernel, but
> at least I think we already have code available for other types of trees.

Dynamic allocation of auxiliary indexing structures is problematic for
the scheduler, which significantly constrains the algorithms one may
use for this purpose.

rbtrees are not my favorite either. Faster alternatives to rbtrees
exist even among binary trees; for instance, it's not so difficult to
implement a heap-ordered tree maintaining the red-black invariant with
looser constraints on the tree structure and hence less rebalancing.
One could always try implementing a van Emde Boas queue, if he felt
particularly brave.

Some explanation of the structure may be found at:
http://courses.csail.mit.edu/6.897/spring03/scribe_notes/L1/lecture1.pdf

According to that, y-trees use less space, and exponential trees are
asymptotically faster with a worst-case asymptotic running time of

O(min(lg(lg(u))*lg(lg(n))/lg(lg(lg(u))), sqrt(lg(n)/lg(lg(n)))))

for all operations, so van Emde Boas is not the ultimate algorithm by
any means at O(lg(lg(u))); in these estimates, u is the size of the
"universe," or otherwise the range of the key data type. Not to say
that any of those are appropriate for the kernel; it's rather likely
we'll have to settle for something less interesting, if we bother
ditching rbtrees at all, on account of the constraints of the kernel
environment.

I'll see what I can do about a userspace test harness for priority
queues more comprehensive than smart-queue.c. I have in mind the
ability to replay traces obtained from queues in the kernel and to load
priority queue implementations via dlopen()/dlsym() et al. valgrind can
do most of the dirty work. Otherwise running a trace for some period of
time and emitting the number of operations it got through should serve
as a benchmark. With that in hand, people can grind out priority queue
implementations and see how they compare on real operation sequences
logged from live kernels.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> I think it'd be a good idea to merge scheduler classes before changing 
>> over the policy so future changes to policy have smaller code impact. 
>> Basically, get scheduler classes going with the mainline scheduler.

On Sun, Apr 29, 2007 at 10:03:59AM +0200, Ingo Molnar wrote:
> i've got a split up patch for the class stuff already, but lets first 
> get some wider test-coverage before even thinking about upstream 
> integration. This is all v2.6.22 stuff at the earliest.

I'd like to get some regression testing (standard macrobenchmarks) in
on the scheduler class bits in isolation, as they have rather
non-negligible impacts on the load-balancing code, to which such
macrobenchmarks are quite sensitive.

This shouldn't take much more than kicking off a benchmark on an
internal box at work already set up to do such testing routinely.
I won't need to write any fresh testcases etc. for it. Availability
of the test systems may have to wait until Monday, since various people
not wanting benchmarks disturbed are likely to be out for the weekend.

It would also be beneficial for the other schedulers to be able to
standardize on the scheduling class framework as far in advance as
possible. That would facilitate comparative testing by end-users as
well as more industrial regression testing.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 12:54:36AM -0700, William Lee Irwin III wrote:
> On Sun, Apr 29, 2007 at 09:16:27AM +0200, Willy Tarreau wrote:
> > In fact, what I'd like to see in 2.6.22 is something better for everybody
> > and with *no* regression, even if it's not perfect. I had the feeling
> > that SD matched that goal right now, except for Mike who has not tested
> > recent versions. Don't get me wrong, I still think that CFS is a more
> > interesting long-term target. But it may require more time to satisfy
> > everyone. At least with one of them in 2.6.22, we won't waste time
> > comparing to current mainline.
> 
> I think it'd be a good idea to merge scheduler classes before changing
> over the policy so future changes to policy have smaller code impact.
> Basically, get scheduler classes going with the mainline scheduler.
> 
> There are other pieces that can be merged earlier, too, for instance,
> the correction to the comment in init/main.c. Directed yields can
> probably also go in as nops or -ENOSYS returns if not fully implemented,
> though I suspect there shouldn't be much in the way of implementing them.
> p->array vs. p->on_rq can be merged early too.

I agree that merging some framework is a good way to proceed.

> Common code for rbtree-based priority queues can be factored out of
> cfq, cfs, and hrtimers.

In my experience, rbtrees are painfully slow. Yesterday, I spent the
day replacing them in haproxy with other trees I developed a few
years ago, which look like radix trees. They are about 2-3 times as
fast at inserting 64-bit data, and you walk through them in O(1). I have
many changes to apply to them before they could be used in the kernel, but
at least I think we already have code available for other types of trees.

> There are extensive /proc/ reporting changes, large chunks of which
> could go in before the policy as well.
> 
> I'm camping in this weekend, so I'll see what I can eke out.

good luck !

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mike Galbraith
On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:

> In fact, what I'd like to see in 2.6.22 is something better for everybody
> and with *no* regression, even if it's not perfect. I had the feeling
> that SD matched that goal right now, except for Mike who has not tested
> recent versions. Don't get me wrong, I still think that CFS is a more
> interesting long-term target.

While I haven't tested recent SD versions, unless its design has
radically changed recently, I know what to expect.  CFS is giving me a
very high quality experience already (it's at a whopping v7), while
RSDL/SD irritated me greatly at version v40.  As far as I'm concerned,
CFS is the superior target, short-term, long-term whatever-term.  For
the tree where I make the decisions, the hammer has fallen, and RSDL/SD
is history.  Heck, I'm _almost_ ready to rm -rf my own scheduler trees
as well... I could really use some free disk space.

-Mike



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Kasper Sandberg <[EMAIL PROTECTED]> wrote:

> If you have some ideas on how these problems might be fixed, I'd surely 
> try fixes, or if you have some data you need me to collect to better 
> understand what's going on. But I suspect any somewhat demanding 3D 
> application will do, and the difference is so staggering that when you 
> see it in effect, you can't miss it.

it would be great if you could try a simple experiment: does something 
as simple as glxgears resized to a large window trigger this 
'stuttering' phenomenon when other stuff is running? If not, could you 
try to find the simplest 3D stuff under Linux that already triggers it 
so that i can reproduce it?

(Also, as an independent debug-test, could you try CONFIG_PREEMPT too 
perhaps? I.e. is this 'stuttering' behavior independent of the 
preemption model and a general property of CFS?)

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* William Lee Irwin III <[EMAIL PROTECTED]> wrote:

> I think it'd be a good idea to merge scheduler classes before changing 
> over the policy so future changes to policy have smaller code impact. 
> Basically, get scheduler classes going with the mainline scheduler.

i've got a split up patch for the class stuff already, but lets first 
get some wider test-coverage before even thinking about upstream 
integration. This is all v2.6.22 stuff at the earliest.

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 10:00:28AM +0200, Ingo Molnar wrote:
> 
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> 
> > > > [...] except for Mike who has not tested recent versions. [...]
> > > 
> > > actually, dont discount Mark Lord's test results either. And it 
> > > might be a good idea for Mike to re-test SD 0.46?
> > 
> > In any case, it might be a good idea because Mike encountered a 
> > problem that nobody could reproduce. [...]
> 
> actually, Mark Lord too reproduced something similar to Mike's results. 

OK.

> Please try those workloads yourself.

Unfortunately, I do not have their tools, environments, or hardware.
That's the advantage of having multiple testers ;-)

>   Ingo

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Willy Tarreau <[EMAIL PROTECTED]> wrote:

> > > [...] except for Mike who has not tested recent versions. [...]
> > 
> > actually, dont discount Mark Lord's test results either. And it 
> > might be a good idea for Mike to re-test SD 0.46?
> 
> In any case, it might be a good idea because Mike encountered a 
> problem that nobody could reproduce. [...]

actually, Mark Lord too reproduced something similar to Mike's results. 
Please try those workloads yourself.

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 08:59 +0200, Ingo Molnar wrote:
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> 
> > I don't know if Mike still has problems with SD, but there are now 
> > several interesting reports of SD giving better feedback than CFS on 
> > real work. In my experience, CFS seems smoother on *technical* tests, 
> > which, I agree, do not really simulate real work.
> 
> well, there are several reports of CFS being significantly better than 
> SD on a number of workloads - and i know of only two reports where SD 
> was reported to be better than CFS: in Kasper's test (where i'd like to 
> know what the "3D stuff" he uses is and take a good look at that 
> workload), and another 3D report which was done against -v6. (And even 
> in these two reports the 'smoothness advantage' was not dramatic. If you 
> know of any other reports then please let me know!)

I can tell you one thing: it's not just me who has observed the
smoothness in 3D stuff. After I tried RSDL, I had lots of people try
RSDL, and subsequently SD, because of the significant improvement in
smoothness, and they have all found the same results.

The things I have tested with in particular are Unreal Tournament 2004
and World of Warcraft through Wine, both running OpenGL and consuming
all the CPU time they can get.

What happens is simply that even when there's only that one process, SD
is still smoother, but the difference is much larger once something
else starts. If the mail client starts fetching mail and running
somewhat demanding stuff like SpamAssassin, the only way you notice it
is by the drop in fps; smoothness is 100% intact with SD (of course, if
you started a HUGE load it would probably get so little CPU that it
would stutter). With every other scheduler, though, you will notice
immediate and quite severe stuttering; in fact, to many it will seem
intolerable.

I can tell you how I first noticed this. I was experimenting in UT2k4
with SD, and usually I always have to close my mail client, because
when SpamAssassin starts (nice 0) the game stutters quite a lot. But
while I was playing I noticed some I/O activity and work noises from my
disk, and that was all: no noticeable stutter or problems with the 3D.
I couldn't figure out why, then discovered I had forgotten to close my
mail client, which I previously ALWAYS had to do.

If you have some ideas on how these problems might be fixed, I'd surely
try fixes, or if you have some data you need me to collect to better
understand what's going on. But I suspect any somewhat demanding 3D
application will do, and the difference is so staggering that when you
see it in effect, you can't miss it.

> 
>   Ingo
> 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 09:16:27AM +0200, Willy Tarreau wrote:
> In fact, what I'd like to see in 2.6.22 is something better for everybody
> and with *no* regression, even if it's not perfect. I had the feeling
> that SD matched that goal right now, except for Mike who has not tested
> recent versions. Don't get me wrong, I still think that CFS is a more
> interesting long-term target. But it may require more time to satisfy
> everyone. At least with one of them in 2.6.22, we won't waste time
> comparing to current mainline.

I think it'd be a good idea to merge scheduler classes before changing
over the policy so future changes to policy have smaller code impact.
Basically, get scheduler classes going with the mainline scheduler.

There are other pieces that can be merged earlier, too, for instance,
the correction to the comment in init/main.c. Directed yields can
probably also go in as nops or -ENOSYS returns if not fully implemented,
though I suspect there shouldn't be much in the way of implementing them.
p->array vs. p->on_rq can be merged early too. Common code for rbtree-
based priority queues can be factored out of cfq, cfs, and hrtimers.
There are extensive /proc/ reporting changes, large chunks of which
could go in before the policy as well.

I'm camping in this weekend, so I'll see what I can eke out.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 09:30:30AM +0200, Ingo Molnar wrote:
> > In fact, what I'd like to see in 2.6.22 is something better for 
> > everybody and with *no* regression, even if it's not perfect.
> >
> > I had the feeling that SD matched that goal right now, [...]
> 
> curious, which are the reports where in your opinion CFS behaves worse 
> than vanilla?

see below :-)

> There were two audio skipping reports against CFS, the 
> most serious one got resolved and i hope the other one has been resolved 
> by the same fix as well. (i'm still waiting for feedback on that one)

your answer to your question above ;-)
Yes, we're all waiting for feedback. And I said I did not track the
versions involved, so it is possible that all previously encountered
regressions are fixed by now.

> > [...] except for Mike who has not tested recent versions. [...]
> 
> actually, dont discount Mark Lord's test results either. And it might be 
> a good idea for Mike to re-test SD 0.46?

In any case, it might be a good idea because Mike encountered a problem
that nobody could reproduce. It may come from the hardware, the scheduler
design, a scheduler bug, or some other bug, but whatever the cause, it
would be interesting to get to the bottom of it.

>   Ingo

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Willy Tarreau <[EMAIL PROTECTED]> wrote:

> > know of any other reports then please let me know!)
> 
> There was Caglar Onur too but he said he will redo all the tests. 
> [...]

well, Caglar said CFSv7 works as well as CFSv6 in his latest tests and 
that he'll redo all the tests to re-verify his original regression 
report :)

> In fact, what I'd like to see in 2.6.22 is something better for 
> everybody and with *no* regression, even if it's not perfect.
>
> I had the feeling that SD matched that goal right now, [...]

curious, which are the reports where in your opinion CFS behaves worse 
than vanilla? There were two audio skipping reports against CFS, the 
most serious one got resolved and i hope the other one has been resolved 
by the same fix as well. (i'm still waiting for feedback on that one)

> [...] except for Mike who has not tested recent versions. [...]

actually, dont discount Mark Lord's test results either. And it might be 
a good idea for Mike to re-test SD 0.46?

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 08:59:01AM +0200, Ingo Molnar wrote:
> 
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> 
> > I don't know if Mike still has problems with SD, but there are now 
> > several interesting reports of SD giving better feedback than CFS on 
> > real work. In my experience, CFS seems smoother on *technical* tests, 
> > which, I agree, do not really simulate real work.
> 
> well, there are several reports of CFS being significantly better than 
> SD on a number of workloads - and i know of only two reports where SD 
> was reported to be better than CFS: in Kasper's test (where i'd like to 
> know what the "3D stuff" he uses is and take a good look at that 
> workload), and another 3D report which was done against -v6. (And even 
> in these two reports the 'smoothness advantage' was not dramatic. If you 
> know of any other reports then please let me know!)

There was Caglar Onur too but he said he will redo all the tests. I'm
not tracking all tests nor versions, so it might be possible that some
of the differences vanish with v7.

In fact, what I'd like to see in 2.6.22 is something better for everybody
and with *no* regression, even if it's not perfect. I had the feeling
that SD matched that goal right now, except for Mike who has not tested
recent versions. Don't get me wrong, I still think that CFS is a more
interesting long-term target. But it may require more time to satisfy
everyone. At least with one of them in 2.6.22, we won't waste time
comparing to current mainline.

>   Ingo

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Willy Tarreau <[EMAIL PROTECTED]> wrote:

> I don't know if Mike still has problems with SD, but there are now 
> several interesting reports of SD giving better feedback than CFS on 
> real work. In my experience, CFS seems smoother on *technical* tests, 
> which, I agree, do not really simulate real work.

well, there are several reports of CFS being significantly better than 
SD on a number of workloads - and i know of only two reports where SD 
was reported to be better than CFS: in Kasper's test (where i'd like to 
know what the "3D stuff" he uses is and take a good look at that 
workload), and another 3D report which was done against -v6. (And even 
in these two reports the 'smoothness advantage' was not dramatic. If you 
know of any other reports then please let me know!)

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Kasper Sandberg <[EMAIL PROTECTED]> wrote:

> Okay so i've tried with cfs 7 now, and the completely broken audio 
> behavior is fixed.

great! :) This worried me a lot!

> Im not sure im describing properly, but say it takes 35fps for the 3d 
> stuff to seem perfect, the fps monitor updates once every 1 or two 
> seconds, showing average fps(havent looked at the code, but i assume 
> it spans those 1-2 seconds), usually i have like 60 fps, but under 
> load it can go down to 35, [...]

What is this "3D stuff" exactly, and what are you using to monitor the 
framerates? (Also, could you please try another experiment and enable 
CONFIG_PREEMPT? CFS works smoothest with that enabled.)

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mike Galbraith
On Sun, 2007-04-29 at 07:30 +0200, Willy Tarreau wrote:

> I don't know if Mike still has problems with SD...

I'm neither testing recent SD releases nor looking at the source.  All
the testing I did was a waste of my time and lkml bandwidth.

-Mike

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mike Galbraith
On Sun, 2007-04-29 at 07:30 +0200, Willy Tarreau wrote:

 I don't know if Mike still has problems with SD...

I'm neither testing recent SD releases nor looking at the source.  All
the testing I did was a waste of my time and lkml bandwidth.

-Mike

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Kasper Sandberg [EMAIL PROTECTED] wrote:

 Okay so i've tried with cfs 7 now, and the completely broken audio 
 behavior is fixed.

great! :) This worried me alot!

 Im not sure im describing properly, but say it takes 35fps for the 3d 
 stuff to seem perfect, the fps monitor updates once every 1 or two 
 seconds, showing average fps(havent looked at the code, but i assume 
 it spans those 1-2 seconds), usually i have like 60 fps, but under 
 load it can go down to 35, [...]

What is this 3D stuff exactly, and what are you using to monitor the 
framerates? (Also, could you please try another experiment and enable 
CONFIG_PREEMPT? CFS works smoothest with that enabled.)

Ingo
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Willy Tarreau [EMAIL PROTECTED] wrote:

 I don't know if Mike still has problems with SD, but there are now 
 several interesting reports of SD giving better feedback than CFS on 
 real work. In my experience, CFS seems smoother on *technical* tests, 
 which I agree that they do not really simulate real work.

well, there are several reports of CFS being significantly better than 
SD on a number of workloads - and i know of only two reports where SD 
was reported to be better than CFS: in Kasper's test (where i'd like to 
know what the 3D stuff he uses is and take a good look at that 
workload), and another 3D report which was done against -v6. (And even 
in these two reports the 'smoothness advantage' was not dramatic. If you 
know of any other reports then please let me know!)

Ingo
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 08:59:01AM +0200, Ingo Molnar wrote:
 
 * Willy Tarreau [EMAIL PROTECTED] wrote:
 
  I don't know if Mike still has problems with SD, but there are now 
  several interesting reports of SD giving better feedback than CFS on 
  real work. In my experience, CFS seems smoother on *technical* tests, 
  which I agree that they do not really simulate real work.
 
 well, there are several reports of CFS being significantly better than 
 SD on a number of workloads - and i know of only two reports where SD 
 was reported to be better than CFS: in Kasper's test (where i'd like to 
 know what the 3D stuff he uses is and take a good look at that 
 workload), and another 3D report which was done against -v6. (And even 
 in these two reports the 'smoothness advantage' was not dramatic. If you 
 know of any other reports then please let me know!)

There was Caglar Onur too but he said he will redo all the tests. I'm
not tracking all tests nor versions, so it might be possible that some
of the differences vanish with v7.

In fact, what I'd like to see in 2.6.22 is something better for everybody
and with *no* regression, even if it's not perfect. I had the feeling
that SD matched that goal right now, except for Mike who has not tested
recent versions. Don't get me wrong, I still think that CFS is a more
interesting long-term target. But it may require more time to satisfy
everyone. At least with one of them in 2.6.22, we won't waste time
comparing to current mainline.

   Ingo

Willy

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Willy Tarreau [EMAIL PROTECTED] wrote:

  know of any other reports then please let me know!)
 
 There was Caglar Onur too but he said he will redo all the tests. 
 [...]

well, Caglar said CFSv7 works as well as CFSv6 in his latest tests and 
that he'll redo all the tests to re-verify his original regression 
report :)

 In fact, what I'd like to see in 2.6.22 is something better for 
 everybody and with *no* regression, even if it's not perfect.

 I had the feeling that SD matched that goal right now, [...]

curious, which are the reports where in your opinion CFS behaves worse 
than vanilla? There were two audio skipping reports against CFS, the 
most serious one got resolved and i hope the other one has been resolved 
by the same fix as well. (i'm still waiting for feedback on that one)

 [...] except for Mike who has not tested recent versions. [...]

actually, dont discount Mark Lord's test results either. And it might be 
a good idea for Mike to re-test SD 0.46?

Ingo
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 09:30:30AM +0200, Ingo Molnar wrote:
  In fact, what I'd like to see in 2.6.22 is something better for 
  everybody and with *no* regression, even if it's not perfect.
 
  I had the feeling that SD matched that goal right now, [...]
 
 curious, which are the reports where in your opinion CFS behaves worse 
 than vanilla?

see below :-)

 There were two audio skipping reports against CFS, the 
 most serious one got resolved and i hope the other one has been resolved 
 by the same fix as well. (i'm still waiting for feedback on that one)

your answer to your question above ;-)
Yes, we're all waiting for feedback. And I said I did not track the
versions involved, so it is possible that all previously encountered
regressions are fixed by now.

  [...] except for Mike who has not tested recent versions. [...]
 
 actually, dont discount Mark Lord's test results either. And it might be 
 a good idea for Mike to re-test SD 0.46?

In any case, it might be a good idea because Mike encountered a problem
that nobody could reproduce. It may come from hardware, scheduler design,
scheduler bug, or any other bug, but whatever the cause, it would be
interesting to conclude on it.

   Ingo

Willy

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 09:16:27AM +0200, Willy Tarreau wrote:
 In fact, what I'd like to see in 2.6.22 is something better for everybody
 and with *no* regression, even if it's not perfect. I had the feeling
 that SD matched that goal right now, except for Mike who has not tested
 recent versions. Don't get me wrong, I still think that CFS is a more
 interesting long-term target. But it may require more time to satisfy
 everyone. At least with one of them in 2.6.22, we won't waste time
 comparing to current mainline.

I think it'd be a good idea to merge scheduler classes before changing
over the policy so future changes to policy have smaller code impact.
Basically, get scheduler classes going with the mainline scheduler.

There are other pieces that can be merged earlier, too, for instance,
the correction to the comment in init/main.c. Directed yields can
probably also go in as nops or -ENOSYS returns if not fully implemented,
though I suspect there shouldn't be much in the way of implementing them.
p-array vs. p-on_rq can be merged early too. Common code for rbtree-
based priority queues can be factored out of cfq, cfs, and hrtimers.
There are extensive /proc/ reporting changes, large chunks of which
could go in before the policy as well.

I'm camping in this weekend, so I'll see what I can eke out.


-- wli
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 08:59 +0200, Ingo Molnar wrote:
 * Willy Tarreau [EMAIL PROTECTED] wrote:
 
  I don't know if Mike still has problems with SD, but there are now 
  several interesting reports of SD giving better feedback than CFS on 
  real work. In my experience, CFS seems smoother on *technical* tests, 
  which I agree that they do not really simulate real work.
 
 well, there are several reports of CFS being significantly better than 
 SD on a number of workloads - and i know of only two reports where SD 
 was reported to be better than CFS: in Kasper's test (where i'd like to 
 know what the 3D stuff he uses is and take a good look at that 
 workload), and another 3D report which was done against -v6. (And even 
 in these two reports the 'smoothness advantage' was not dramatic. If you 
 know of any other reports then please let me know!)

I can tell you one thing: it's not just me who has observed the
smoothness in 3d stuff. After I first tried RSDL I've had lots of people
try RSDL, and subsequently SD, because of the significant improvement in
smoothness, and they have all found the same results.

The things I have tested with in particular are Unreal Tournament 2004
and World of Warcraft through Wine, both running OpenGL and consuming
all the CPU time they can get.

And the thing that happens is simply that even when there's only that
process, SD is still smoother, but the difference is much larger once
something else starts. If the mail client starts fetching mail and
running somewhat demanding stuff like SpamAssassin, the only way you
notice it with SD is by the drop in fps; smoothness is 100% intact
(of course, if you started a HUGE load it would probably get so little
CPU that it would stutter). With every other scheduler you will notice
immediate and quite severe stuttering; in fact, to many it will seem
intolerable.

I can tell you how I first noticed this. I was experimenting in UT2k4
with SD, and usually I always have to close my mail client, because when
SpamAssassin starts (nice 0) the game would stutter quite a lot. But
while playing I noticed some I/O activity and work noises from my disk,
and that was all: no noticeable stutter or problems with the 3d. I
couldn't figure out why, until I discovered I had forgotten to close my
mail client, which I previously ALWAYS had to do.

If you have some ideas on how these problems might be fixed, I'd surely
try fixes, or collect whatever data you need to better understand what's
going on. But I suspect any somewhat demanding 3d application will do,
and the difference is so staggering that when you see it in effect, you
can't miss it.

 
   Ingo
 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Willy Tarreau [EMAIL PROTECTED] wrote:

   [...] except for Mike who has not tested recent versions. [...]
  
  actually, dont discount Mark Lord's test results either. And it 
  might be a good idea for Mike to re-test SD 0.46?
 
 In any case, it might be a good idea because Mike encountered a 
 problem that nobody could reproduce. [...]

actually, Mark Lord too reproduced something similar to Mike's results. 
Please try those workloads yourself.

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 10:00:28AM +0200, Ingo Molnar wrote:
 
 * Willy Tarreau [EMAIL PROTECTED] wrote:
 
[...] except for Mike who has not tested recent versions. [...]
   
   actually, dont discount Mark Lord's test results either. And it 
   might be a good idea for Mike to re-test SD 0.46?
  
  In any case, it might be a good idea because Mike encountered a 
  problem that nobody could reproduce. [...]
 
 actually, Mark Lord too reproduced something similar to Mike's results. 

OK.

 Please try those workloads yourself.

Unfortunately, I do not have their tools, environments, or hardware.
That's the advantage of having multiple testers ;-)

   Ingo

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* William Lee Irwin III [EMAIL PROTECTED] wrote:

 I think it'd be a good idea to merge scheduler classes before changing 
 over the policy so future changes to policy have smaller code impact. 
 Basically, get scheduler classes going with the mainline scheduler.

i've got a split-up patch for the class stuff already, but let's first 
get some wider test coverage before even thinking about upstream 
integration. This is all v2.6.22 stuff at the earliest.

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ingo Molnar

* Kasper Sandberg [EMAIL PROTECTED] wrote:

 If you have some ideas on how these problems might be fixed, I'd surely 
 try fixes, or collect whatever data you need to better understand 
 what's going on. But I suspect any somewhat demanding 3d application 
 will do, and the difference is so staggering that when you see it in 
 effect, you can't miss it.

it would be great if you could try a simple experiment: does something 
as simple as glxgears resized to a large window trigger this 
'stuttering' phenomenon when other stuff is running? If not, could you 
try to find the simplest 3D stuff under Linux that already triggers it 
so that i can reproduce it?

(Also, as an independent debug-test, could you try CONFIG_PREEMPT too 
perhaps? I.e. is this 'stuttering' behavior independent of the 
preemption model and a general property of CFS?)

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mike Galbraith
On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:

 In fact, what I'd like to see in 2.6.22 is something better for everybody
 and with *no* regression, even if it's not perfect. I had the feeling
 that SD matched that goal right now, except for Mike who has not tested
 recent versions. Don't get me wrong, I still think that CFS is a more
 interesting long-term target.

While I haven't tested recent SD versions, unless its design has
radically changed recently, I know what to expect.  CFS is giving me a
very high quality experience already (it's at a whopping v7), while
RSDL/SD irritated me greatly at version v40.  As far as I'm concerned,
CFS is the superior target, short-term, long-term whatever-term.  For
the tree where I make the decisions, the hammer has fallen, and RSDL/SD
is history.  Heck, I'm _almost_ ready to rm -rf my own scheduler trees
as well... I could really use some free disk space.

-Mike



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 12:54:36AM -0700, William Lee Irwin III wrote:
 On Sun, Apr 29, 2007 at 09:16:27AM +0200, Willy Tarreau wrote:
  In fact, what I'd like to see in 2.6.22 is something better for everybody
  and with *no* regression, even if it's not perfect. I had the feeling
  that SD matched that goal right now, except for Mike who has not tested
  recent versions. Don't get me wrong, I still think that CFS is a more
  interesting long-term target. But it may require more time to satisfy
  everyone. At least with one of them in 2.6.22, we won't waste time
  comparing to current mainline.
 
 I think it'd be a good idea to merge scheduler classes before changing
 over the policy so future changes to policy have smaller code impact.
 Basically, get scheduler classes going with the mainline scheduler.
 
 There are other pieces that can be merged earlier, too, for instance,
 the correction to the comment in init/main.c. Directed yields can
 probably also go in as nops or -ENOSYS returns if not fully implemented,
 though I suspect there shouldn't be much in the way of implementing them.
 p-array vs. p-on_rq can be merged early too.

I agree that merging some framework is a good way to proceed.

 Common code for rbtree-based priority queues can be factored out of
 cfq, cfs, and hrtimers.

In my experience, rbtrees are painfully slow. Yesterday, I spent the
day replacing them in haproxy with other trees I developed a few
years ago, which look like radix trees. They are about 2-3 times as
fast at inserting 64-bit data, and you can walk through them in
amortized O(1) per element. I have many changes to apply to them before
they could be used in the kernel, but at least I think we already have
code available for other types of trees.

 There are extensive /proc/ reporting changes, large chunks of which
 could go in before the policy as well.
 
 I'm camping in this weekend, so I'll see what I can eke out.

good luck !

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
* William Lee Irwin III [EMAIL PROTECTED] wrote:
 I think it'd be a good idea to merge scheduler classes before changing 
 over the policy so future changes to policy have smaller code impact. 
 Basically, get scheduler classes going with the mainline scheduler.

On Sun, Apr 29, 2007 at 10:03:59AM +0200, Ingo Molnar wrote:
 i've got a split-up patch for the class stuff already, but let's first 
 get some wider test coverage before even thinking about upstream 
 integration. This is all v2.6.22 stuff at the earliest.

I'd like to get some regression testing (standard macrobenchmarks) in
on the scheduler class bits in isolation, as they have a rather
non-negligible impact on the load-balancing code, to which such
macrobenchmarks are quite sensitive.

This shouldn't take much more than kicking off a benchmark on an
internal box at work already set up to do such testing routinely.
I won't need to write any fresh testcases etc. for it. Availability
of the test systems may have to wait until Monday, since various people
not wanting benchmarks disturbed are likely to be out for the weekend.

It would also be beneficial for the other schedulers to be able to
standardize on the scheduling class framework as far in advance as
possible. In such a manner comparative testing by end-users and more
industrial regression testing can be facilitated.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 12:54:36AM -0700, William Lee Irwin III wrote:
 Common code for rbtree-based priority queues can be factored out of
 cfq, cfs, and hrtimers.

On Sun, Apr 29, 2007 at 10:13:17AM +0200, Willy Tarreau wrote:
 In my experience, rbtrees are painfully slow. Yesterday, I spent the
 day replacing them in haproxy with other trees I developed a few
 years ago, which look like radix trees. They are about 2-3 times as
 fast at inserting 64-bit data, and you can walk through them in
 amortized O(1) per element. I have many changes to apply to them before
 they could be used in the kernel, but at least I think we already have
 code available for other types of trees.

Dynamic allocation of auxiliary indexing structures is problematic for
the scheduler, which significantly constrains the algorithms one may
use for this purpose.

rbtrees are not my favorite either. Faster alternatives to rbtrees
exist even among binary trees; for instance, it's not so difficult to
implement a heap-ordered tree that maintains the red-black invariant
with looser constraints on the tree structure, and hence less
rebalancing. One could always try implementing a van Emde Boas queue,
if one felt particularly brave.

Some explanation of the structure may be found at:
http://courses.csail.mit.edu/6.897/spring03/scribe_notes/L1/lecture1.pdf

According to that, y-trees use less space, and exponential trees are
asymptotically faster with a worst-case asymptotic running time of

O(min(lg(lg(u))*lg(lg(n))/lg(lg(lg(u))), sqrt(lg(n)/lg(lg(n)))))

for all operations, so van Emde Boas is not the ultimate algorithm by
any means at O(lg(lg(u))); in these estimates, u is the size of the
universe, or otherwise the range of the key data type. Not to say
that any of those are appropriate for the kernel; it's rather likely
we'll have to settle for something less interesting, if we bother
ditching rbtrees at all, on account of the constraints of the kernel
environment.

I'll see what I can do about a userspace test harness for priority
queues more comprehensive than smart-queue.c. I have in mind the
ability to replay traces obtained from queues in the kernel, loading
priority queue implementations via dlopen()/dlsym() et al.; valgrind can
do most of the dirty work.
time and emitting the number of operations it got through should serve
as a benchmark. With that in hand, people can grind out priority queue
implementations and see how they compare on real operation sequences
logged from live kernels.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Con Kolivas
On Sunday 29 April 2007 18:00, Ingo Molnar wrote:
 * Willy Tarreau [EMAIL PROTECTED] wrote:
[...] except for Mike who has not tested recent versions. [...]
  
   actually, dont discount Mark Lord's test results either. And it
   might be a good idea for Mike to re-test SD 0.46?
 
  In any case, it might be a good idea because Mike encountered a
  problem that nobody could reproduce. [...]

 actually, Mark Lord too reproduced something similar to Mike's results.
 Please try those workloads yourself.

I see no suggestion that either Mark or Mike have tested, or for that matter 
_have any intention of testing_, the current version of SD without fancy 
renicing or anything involved. Willy, I greatly appreciate you trying, but I 
don't know why you're bothering even trying here since clearly 1. Ingo is the 
scheduler maintainer 2. he's working on a competing implementation and 3. in 
my excellent physical and mental state I seem to have slighted the two 
testers (both?) somewhere along the line. Mike feels his testing was a 
complete waste of time yet it would be ludicrous for me to say that SD didn't 
evolve 20 versions further due to his earlier testing, and was the impetus 
for you to start work on CFS. The crunch came that we couldn't agree that 
fair was appropriate for mainline and we parted ways. That fairness has not 
been a problem for his view on CFS though but he has only tested older 
versions of SD that still had bugs.

Given facts 1 and 2 above I have all but resigned myself to the fact that SD 
has -less than zero- chance of ever being considered for mainline and it's my 
job to use it as something to compare your competing design with to make sure 
that when (and I do mean when since there seems no doubt in everyone else's 
mind) CFS becomes part of mainline that it is as good as SD. Saying people 
found CFS better than SD is, in my humble opinion, an exaggeration since 
every one I could find was a glowing standalone report of CFS rather than any 
comparison to the current, very stable, bug-free version of SD. On the other 
hand I still see that when people compare them side by side they find SD is 
better, so I will hold CFS to that comparison - when comparing fairness-based 
designs.

On a related note - implementing a framework is nice, but it doesn't address 
any of the current fairness/starvation/corner-case problems mainline has. I 
don't see much point in rushing the framework merging since it's still in flux.

-- 
-ck


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mike Galbraith
On Sun, 2007-04-29 at 19:52 +1000, Con Kolivas wrote:
 On Sunday 29 April 2007 18:00, Ingo Molnar wrote:
  * Willy Tarreau [EMAIL PROTECTED] wrote:
 [...] except for Mike who has not tested recent versions. [...]
   
actually, dont discount Mark Lord's test results either. And it
might be a good idea for Mike to re-test SD 0.46?
  
   In any case, it might be a good idea because Mike encountered a
   problem that nobody could reproduce. [...]
 
  actually, Mark Lord too reproduced something similar to Mike's results.
  Please try those workloads yourself.
 
 I see no suggestion that either Mark or Mike have tested, or for that matter 
 _have any intention of testing_, the current version of SD without fancy 
 renicing or anything involved. Willy, I greatly appreciate you trying, but I 
 don't know why you're bothering even trying here since clearly 1. Ingo is the 
 scheduler maintainer 2. he's working on a competing implementation and 3. in 
 my excellent physical and mental state I seem to have slighted the two 
 testers (both?) somewhere along the line. Mike feels his testing was a 
 complete waste of time yet it would be ludicrous for me to say that SD didn't 
 evolve 20 versions further due to his earlier testing, and was the impetus 
 for you to start work on CFS. The crunch came that we couldn't agree that 
 fair was appropriate for mainline and we parted ways. That fairness has not 
 been a problem for his view on CFS though but he has only tested older 
 versions of SD that still had bugs.

The crunch for me came when you started hand-waving and spin-doctoring
as you are doing now.  Listening to twisted echoes of my voice is not my
idea of a good time.

-Mike



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
Willy,

On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
 In fact, what I'd like to see in 2.6.22 is something better for everybody
 and with *no* regression, even if it's not perfect. I had the feeling
 that SD matched that goal right now, except for Mike who has not tested
 recent versions. Don't get me wrong, I still think that CFS is a more
 interesting long-term target. But it may require more time to satisfy
 everyone. At least with one of them in 2.6.22, we won't waste time
 comparing to current mainline.

Oh no, we really do _NOT_ want to throw SD or anything else at mainline
in a hurry just to avoid wasting time comparing against the current
scheduler.

I agree that CFS is the more interesting target and I prefer to push the
more interesting one even if it takes a release cycle longer. The main
reason for me is the design of CFS. Even if it is not really modular
right now, it is not rocket science to make it fully modular.

Looking at the areas people are working on, e.g. containers, resource
management, CPU isolation, fully tickless systems, we really need to go
in that direction if we want to avoid permanent tinkering in the core
scheduler code for the next five years.

As a sidenote: I really wonder if anybody noticed yet, that the whole
CFS / SD comparison is so ridiculous, that it is not even funny anymore.
CFS modifies the scheduler and nothing else, SD fiddles all over the
kernel in interesting ways. 

This is worse than apples and oranges; it's more like apples and
screwdrivers.

Can we please stop this useless pissing contest, sit down, and get a
modular design into mainline which allows folks to work on and integrate
their workload-X-perfect scheduler, and gives us the flexibility to
adjust to the needs of upcoming functionality?

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
 Can we please stop this useless pissing contest and sit down and get a
 modular design into mainline, which allows folks to work and integrate
 their workload X perfect scheduler and gives us the flexibility to
 adjust to the needs of upcoming functionality.

If I don't see some sort of modularity patch soon I'll post one myself.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 12:30 +0200, Thomas Gleixner wrote:
 Willy,
snip
 As a sidenote: I really wonder if anybody noticed yet, that the whole
 CFS / SD comparison is so ridiculous, that it is not even funny anymore.
 CFS modifies the scheduler and nothing else, SD fiddles all over the
 kernel in interesting ways. 
 

have you looked at diffstat lately? :)

sd:
 Documentation/sched-design.txt  |  241 +++
 Documentation/sysctl/kernel.txt |   14
 Makefile|2
 fs/pipe.c   |7
 fs/proc/array.c |2
 include/linux/init_task.h   |4
 include/linux/sched.h   |   32 -
 kernel/sched.c                 | 1279 +++-
 kernel/softirq.c|2
 kernel/sysctl.c |   26
 kernel/workqueue.c  |2
 11 files changed, 919 insertions(+), 692 deletions(-)

cfs:
 Documentation/kernel-parameters.txt |   43
 Documentation/sched-design-CFS.txt  |  107 +
 Makefile|2
 arch/i386/kernel/smpboot.c  |   13
 arch/i386/kernel/tsc.c  |8
 arch/ia64/kernel/setup.c|6
 arch/mips/kernel/smp.c  |   11
 arch/sparc/kernel/smp.c |   10
 arch/sparc64/kernel/smp.c   |   36
 fs/proc/array.c |   11
 fs/proc/base.c  |2
 fs/proc/internal.h  |1
 include/asm-i386/unistd.h   |3
 include/asm-x86_64/unistd.h |4
 include/linux/hardirq.h |   13
 include/linux/sched.h   |   94 +
 init/main.c |2
 kernel/exit.c   |3
 kernel/fork.c   |4
 kernel/posix-cpu-timers.c   |   34
 kernel/sched.c                      | 2288 +---
 kernel/sched_debug.c|  152 ++
 kernel/sched_fair.c |  601 +
 kernel/sched_rt.c   |  184 ++
 kernel/sched_stats.h|  235 +++
 kernel/sysctl.c |   32
 26 files changed, 2062 insertions(+), 1837 deletions(-)


 This is worse than apples and oranges, it's more like apples and
 screwdrivers. 
snip
 
   tglx
 
 
 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Con Kolivas
On Sunday 29 April 2007 20:30, Thomas Gleixner wrote:
 As a sidenote: I really wonder if anybody noticed yet, that the whole
 CFS / SD comparison is so ridiculous, that it is not even funny anymore.
 CFS modifies the scheduler and nothing else, SD fiddles all over the
 kernel in interesting ways.

This is a WTF if ever I saw one.

-- 
-ck


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread hui
On Sun, Apr 29, 2007 at 08:53:36PM +1000, Con Kolivas wrote:
 On Sunday 29 April 2007 20:30, Thomas Gleixner wrote:
  As a sidenote: I really wonder if anybody noticed yet, that the whole
  CFS / SD comparison is so ridiculous, that it is not even funny anymore.
  CFS modifies the scheduler and nothing else, SD fiddles all over the
  kernel in interesting ways.
 
 This is a WTF if ever I saw one.

You should look at the progression of SD versus CFS. You'll find the exact
opposite has happened, and it's kind of baffling that you'd say something like
that, so I don't know what is coloring your experience here.

SD is a highly regular patch and applies cleanly to that portion of the kernel.
Folks have been asking for some kind of pluggability for this kind of
development for years and have had it repeatedly blocked in various ways,
which makes the comment seem all the odder.

bill



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
 Willy,
 
 On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
  In fact, what I'd like to see in 2.6.22 is something better for everybody
  and with *no* regression, even if it's not perfect. I had the feeling
  that SD matched that goal right now, except for Mike who has not tested
  recent versions. Don't get me wrong, I still think that CFS is a more
  interesting long-term target. But it may require more time to satisfy
  everyone. At least with one of them in 2.6.22, we won't waste time
  comparing to current mainline.
 
 Oh no, we really do _NOT_ want to throw SD or anything else at mainline
 in a hurry just for not wasting time on comparing to the current
 scheduler.

It is not about doing it in a hurry. I see SD as a small yet efficient
update to the current scheduler. It's not perfect, and probably not very
extensible, but the risk of breaking anything is small given that it
does not change much of the code or behaviour.

IMHO, it is something which can provide users with a useful update while
leaving us more time to carefully implement the features of CFS one at a
time, and if that requires 5 versions, it's not a problem.

 I agree that CFS is the more interesting target and I prefer to push the
 more interesting one even if it takes a release cycle longer. The main
 reason for me is the design of CFS. Even if it is not really modular
 right now, it is not rocket science to make it fully modular.
 
 Looking at the areas where people work on, e.g. containers, resource
 management, cpu isolation, fully tickless systems , we really need
 to go into that direction, when we want to avoid permanent tinkering in
 the core scheduler code for the next five years.
 
 As a sidenote: I really wonder if anybody noticed yet, that the whole
 CFS / SD comparison is so ridiculous, that it is not even funny anymore.

Contrary to most people, I don't see them as competitors. I see SD as
a first step with a low risk of regression, and CFS as an ultimate
solution relying on a more solid framework.

 CFS modifies the scheduler and nothing else, SD fiddles all over the
 kernel in interesting ways. 

Hmmm, I guess you confused the two this time. CFS touches many places,
which is why I think its testing coverage is still very low; SD can be
tested faster. My real concern is: are there still people observing
regressions with it? If yes, they should be fixed before it is even
merged. If no, why not merge it as a fix for the many known corner cases
of the current scheduler? After all, it's already in -mm.

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 12:48 +0200, Kasper Sandberg wrote:
 On Sun, 2007-04-29 at 12:30 +0200, Thomas Gleixner wrote:
  Willy,
 snip
  As a sidenote: I really wonder if anybody noticed yet, that the whole
  CFS / SD comparison is so ridiculous, that it is not even funny anymore.
  CFS modifies the scheduler and nothing else, SD fiddles all over the
  kernel in interesting ways. 
  
 
 have you looked at diffstat lately? :)
 
 sd:
  Documentation/sched-design.txt  |  241 +++
  Documentation/sysctl/kernel.txt |   14
  Makefile|2
  fs/pipe.c   |7
  fs/proc/array.c |2
  include/linux/init_task.h   |4
  include/linux/sched.h   |   32 -
  kernel/sched.c                 | 1279 +++-
  kernel/softirq.c|2
  kernel/sysctl.c |   26
  kernel/workqueue.c  |2
  11 files changed, 919 insertions(+), 692 deletions(-)
 
 cfs:
  Documentation/kernel-parameters.txt |   43
  Documentation/sched-design-CFS.txt  |  107 +
  Makefile|2
  arch/i386/kernel/smpboot.c  |   13
  arch/i386/kernel/tsc.c  |8
  arch/ia64/kernel/setup.c|6
  arch/mips/kernel/smp.c  |   11
  arch/sparc/kernel/smp.c |   10
  arch/sparc64/kernel/smp.c   |   36
  fs/proc/array.c |   11
  fs/proc/base.c  |2
  fs/proc/internal.h  |1
  include/asm-i386/unistd.h   |3
  include/asm-x86_64/unistd.h |4
  include/linux/hardirq.h |   13
  include/linux/sched.h   |   94 +
  init/main.c |2
  kernel/exit.c   |3
  kernel/fork.c   |4
  kernel/posix-cpu-timers.c   |   34
  kernel/sched.c                      | 2288 +---
  kernel/sched_debug.c|  152 ++
  kernel/sched_fair.c |  601 +
  kernel/sched_rt.c   |  184 ++
  kernel/sched_stats.h|  235 +++
  kernel/sysctl.c |   32
  26 files changed, 2062 insertions(+), 1837 deletions(-)

Sorry, my bad. I looked at a diffstat of the full -ck set, not only the
SD part of it.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Con Kolivas
On Sunday 29 April 2007 21:11, Willy Tarreau wrote:
 On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
  Willy,
 
  On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
   In fact, what I'd like to see in 2.6.22 is something better for
   everybody and with *no* regression, even if it's not perfect. I had the
   feeling that SD matched that goal right now, except for Mike who has
   not tested recent versions. Don't get me wrong, I still think that CFS
   is a more interesting long-term target. But it may require more time to
   satisfy everyone. At least with one of them in 2.6.22, we won't waste
   time comparing to current mainline.
 
  Oh no, we really do _NOT_ want to throw SD or anything else at mainline
  in a hurry just for not wasting time on comparing to the current
  scheduler.

 It is not about doing it in a hurry. I see SD as a small yet efficient
 update to the current scheduler. It's not perfect, and probably not very
 extensible, but the risk of breaking anything is small given that it
 does not change much of the code or behaviour.

 IMHO, it is something which can provide users with a useful update while
 leaving us with some more time to carefully implement the features of CFS
 one at a time, and if it requires 5 versions, it's not a problem.

  I agree that CFS is the more interesting target and I prefer to push the
  more interesting one even if it takes a release cycle longer. The main
  reason for me is the design of CFS. Even if it is not really modular
  right now, it is not rocket science to make it fully modular.
 
  Looking at the areas people are working on, e.g. containers, resource
  management, cpu isolation, fully tickless systems, we really need
  to go in that direction if we want to avoid permanent tinkering in
  the core scheduler code for the next five years.
 
  As a sidenote: I really wonder if anybody noticed yet, that the whole
  CFS / SD comparison is so ridiculous, that it is not even funny anymore.

 Contrarily to most people, I don't see them as competitors. I see SD as
 a first step with a low risk of regression, and CFS as an ultimate
 solution relying on a more solid framework.

  CFS modifies the scheduler and nothing else, SD fiddles all over the
  kernel in interesting ways.

 Hmmm I guess you confused both of them this time. CFS touches many places,
 which is why I think the testing coverage is still very low. SD can be
 tested faster. My real concern is: are there still people observing
 regressions with it? If yes, they should be fixed before it is even
 merged. If no, why not merge it as a fix for the many known corner cases
 of the current scheduler? After all, it's already in -mm.

 Willy

Willy, you're making far too much sense. Are you replying to the correct 
mailing list?

-- 
-ck


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 20:53 +1000, Con Kolivas wrote:
 On Sunday 29 April 2007 20:30, Thomas Gleixner wrote:
  As a sidenote: I really wonder if anybody noticed yet, that the whole
  CFS / SD comparison is so ridiculous, that it is not even funny anymore.
  CFS modifies the scheduler and nothing else, SD fiddles all over the
  kernel in interesting ways.
 
 This is a WTF if ever I saw one.

Sorry. My dumbness. I went into the wrong directory to run the diffstat.
It had the full ck set applied.

tglx

/me goes off to clean the harddisk




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
  As a sidenote: I really wonder if anybody noticed yet, that the whole
  CFS / SD comparison is so ridiculous, that it is not even funny anymore.
 
 Contrarily to most people, I don't see them as competitors. I see SD as
 a first step with a low risk of regression, and CFS as an ultimate
 solution relying on a more solid framework.

That's the whole reason why I don't see any usefulness in merging SD
now. When we merge SD now, then we need to take care of both - the real
solution and the fixup of regressions. Right now we have a not-perfect
scheduler with known weak points. Ripping it out and replacing it is
going to introduce regressions, whatever low risk you see.

And I still do not see the benefit of an intermediate step with what
is, in my opinion, a medium-to-high risk of regressions, instead of
going the full way, when we agree that this is the correct solution.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
 On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
snip
 Contrarily to most people, I don't see them as competitors. I see SD as
 a first step with a low risk of regression, and CFS as an ultimate
 solution relying on a more solid framework.
 
See, this is the part I don't understand: what makes CFS the ultimate
solution compared to SD?

snip
 
 Willy
 
 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Paolo Ciarrocchi

On 4/29/07, Con Kolivas [EMAIL PROTECTED] wrote:

On Sunday 29 April 2007 21:11, Willy Tarreau wrote:
 On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
  Willy,
 
  On Sun, 2007-04-29 at 09:16 +0200, Willy Tarreau wrote:
   In fact, what I'd like to see in 2.6.22 is something better for
   everybody and with *no* regression, even if it's not perfect. I had the
   feeling that SD matched that goal right now, except for Mike who has
   not tested recent versions. Don't get me wrong, I still think that CFS
   is a more interesting long-term target. But it may require more time to
   satisfy everyone. At least with one of them in 2.6.22, we won't waste
   time comparing to current mainline.
 
  Oh no, we really do _NOT_ want to throw SD or anything else at mainline
  in a hurry just for not wasting time on comparing to the current
  scheduler.

 It is not about doing it in a hurry. I see SD as a small yet efficient
 update to current scheduler. It's not perfect, probably not much extensible
 but the risks of breaking anything are small given the fact that it does
 not change much of the code or behaviour.

 IMHO, it is something which can provide users with a useful update while
 leaving us with some more time to carefully implement the features of CFS
 one at a time, and if it requires 5 versions, it's not a problem.

  I agree that CFS is the more interesting target and I prefer to push the
  more interesting one even if it takes a release cycle longer. The main
  reason for me is the design of CFS. Even if it is not really modular
  right now, it is not rocket science to make it fully modular.
 
  Looking at the areas where people work on, e.g. containers, resource
  management, cpu isolation, fully tickless systems , we really need
  to go into that direction, when we want to avoid permanent tinkering in
  the core scheduler code for the next five years.
 
  As a sidenote: I really wonder if anybody noticed yet, that the whole
  CFS / SD comparison is so ridiculous, that it is not even funny anymore.

 Contrarily to most people, I don't see them as competitors. I see SD as
 a first step with a low risk of regression, and CFS as an ultimate
 solution relying on a more solid framework.

  CFS modifies the scheduler and nothing else, SD fiddles all over the
  kernel in interesting ways.

 Hmmm I guess you confused both of them this time. CFS touches many places,
 which is why I think the testing coverage is still very low. SD can be
 tested faster. My real concern is : are there still people observing
 regressions with it ? If yes, they should be fixed before even being
 merged. If no, why not merge it as a fix for the many known corner cases
 of current scheduler ? After all, it's already in -mm.

 Willy

Willy, you're making far too much sense. Are you replying to the correct
mailing list?


FWIW, I strongly agree with Willy.

Ciao,
--
Paolo
"Everything that deserves to be done deserves to be done well."
- Philip Stanhope IV, Earl of Chesterfield


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 14:00 +0200, Kasper Sandberg wrote:
 On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
  On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
 snip
  Contrarily to most people, I don't see them as competitors. I see SD as
  a first step with a low risk of regression, and CFS as an ultimate
  solution relying on a more solid framework.
  
 See this is the part i dont understand, what makes CFS the ultimate
 solution compared to SD?

SD is a one-to-one replacement of the existing scheduler guts - with a
different behaviour.

CFS is a huge step towards a modular and hierarchical scheduler design,
which allows more than just implementing a clever scheduler for a single
purpose. In a hierarchical scheduler you can implement resource
management and other fancy things; in the monolithic design of the
current scheduler (and its proposed replacement, SD) you can't. But SD
can be made one of the modular variants.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 14:13 +0200, Thomas Gleixner wrote:
 On Sun, 2007-04-29 at 14:00 +0200, Kasper Sandberg wrote:
  On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
   On Sun, Apr 29, 2007 at 12:30:54PM +0200, Thomas Gleixner wrote:
  snip
   Contrarily to most people, I don't see them as competitors. I see SD as
   a first step with a low risk of regression, and CFS as an ultimate
   solution relying on a more solid framework.
   
  See this is the part i dont understand, what makes CFS the ultimate
  solution compared to SD?
 
 SD is a one to one replacement of the existing scheduler guts - with a
 different behaviour.
 
 CFS is a huge step into a modular and hierarchical scheduler design,
 which allows more than just implementing a clever scheduler for a single
 purpose. In a hierarchical scheduler you can implement resource
 management and other fancy things, in the monolithic design of the
 current scheduler (and its proposed replacement SD) you can't. But SD
 can be made one of the modular variants.
But all these things, aren't they just in the modular scheduler policy
code, and not the actual sched_cfs one?

 
   tglx
 
 
 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Willy Tarreau
On Sun, Apr 29, 2007 at 01:59:13PM +0200, Thomas Gleixner wrote:
 On Sun, 2007-04-29 at 13:11 +0200, Willy Tarreau wrote:
   As a sidenote: I really wonder if anybody noticed yet, that the whole
   CFS / SD comparison is so ridiculous, that it is not even funny anymore.
  
  Contrarily to most people, I don't see them as competitors. I see SD as
  a first step with a low risk of regression, and CFS as an ultimate
  solution relying on a more solid framework.
 
 That's the whole reason why I don't see any usefulness in merging SD
 now. When we merge SD now, then we need to care of both - the real
 solution and the fixup of regressions. Right now we have a not perfect
 scheduler with known weak points. Ripping it out and replacing it is
 going to introduce regressions, what ever low risk you see.

Of course, but that's also the purpose of -rc. And given its small
footprint, it will be as easy to revert it as to apply it, should any
big problem appear.

 And I still do not see a benefit of an intermediate step with a in my
 opinion medium to high risk of regressions, instead of going the full
 way, when we agree that this is the correct solution.

The only difference is the time to get it in the right shape. If it
requires 3 versions (6 months), it may be worth upgrading the current
scheduler to make users happy. I'm not kidding, I've switched the default
boot to 2.6 on my notebook after trying SD and CFS. It was the first time
I got my system in 2.6 at least as usable as in 2.4. And I know I'm not
the only one.

Willy



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread William Lee Irwin III
On Sun, Apr 29, 2007 at 02:13:30PM +0200, Thomas Gleixner wrote:
 SD is a one to one replacement of the existing scheduler guts - with a
 different behaviour.
 CFS is a huge step into a modular and hierarchical scheduler design,
 which allows more than just implementing a clever scheduler for a single
 purpose. In a hierarchical scheduler you can implement resource
 management and other fancy things, in the monolitic design of the
 current scheduler (and it's proposed replacement SD) you can't. But SD
 can be made one of the modular variants.

The modularity provided is not enough to allow an implementation of
mainline, SD, or nicksched without significant core scheduler impact.

CFS doesn't have all that much to do with scheduler classes. A weak form
of them was done in tandem with the scheduler itself. The modularity
provided is sufficiently weak that the advantage is largely the
prettiness of the code. So essentially CFS is every bit as monolithic as
mainline, SD, et al., with some dressing that suggests modularity
without actually making any accommodations for alternative policies
(e.g. reverting to mainline).

You'll hit the holes in the driver API quite quickly should you attempt
to port mainline to it. You'll hit several missing driver operations
right in schedule(), for starters. At some point you may also notice
that simple enqueue operations are not all that's there. Representing
enqueueing to active vs. expired and head vs. tail is needed for
current mainline to be representable by a set of driver operations.
It's also a bit silly to remove and re-insert a queue element for cfs
(or anything else using a tree-structured heap, which yes, a search
tree is, even if a slow one), which could use a reprioritization driver
operation, but I suppose it won't oops.

You'll also hit the same holes should you attempt to write such a
modularity patch for mainline as opposed to porting current mainline to
the driver API as-given. It takes a bit more work to get something that
actually works for all this, and it borders on disingenuity to
suggest that the scheduler class/driver API as it now stands is
capable of any such thing as porting current mainline, nicksched, or SD
to it without significant code impact to the core scheduler code.

So on both these points, I don't see cfs as being adequate as it now
stands for a modular, hierarchical scheduler design. If we want a truly
modular and hierarchical scheduler design, I'd suggest pursuing it
directly and independently of policy, and furthermore considering the
representability of various policies in the scheduling class/driver API
as a test of its adequacy.


-- wli


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Thomas Gleixner
On Sun, 2007-04-29 at 05:55 -0700, William Lee Irwin III wrote:
 You'll also hit the same holes should you attempt to write such a
 modularity patch for mainline as opposed to porting current mainline to
 the driver API as-given. It takes a bit more work to get something that
 actually works for all this, and it borders on disingenuity to
 suggest that the scheduler class/driver API as it now stands is
 capable of any such thing as porting current mainline, nicksched, or SD
 to it without significant code impact to the core scheduler code.

I never said that the current implementation of CFS fits the criteria
of modularity, but it is a step in that direction. I'm well aware that
there is a bunch of things missing and it has hard-coded leftovers,
which are related to the current two hard-coded policy classes.

 So on both these points, I don't see cfs as being adequate as it now
 stands for a modular, hierarchical scheduler design. If we want a truly
 modular and hierarchical scheduler design, I'd suggest pursuing it
 directly and independently of policy, and furthermore considering the
 representability of various policies in the scheduling class/driver API
 as a test of its adequacy.

Ack. I don't worry much about whether the CFS policy is better than the
SD one. I'm all for a truly modular design. SD and SCHED_FAIR are good
proofs for it.

tglx




Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Gene Heskett
On Sunday 29 April 2007, Willy Tarreau wrote:
On Sun, Apr 29, 2007 at 08:59:01AM +0200, Ingo Molnar wrote:
 * Willy Tarreau [EMAIL PROTECTED] wrote:
  I don't know if Mike still has problems with SD, but there are now
  several interesting reports of SD giving better feedback than CFS on
  real work. In my experience, CFS seems smoother on *technical* tests,
  which I agree that they do not really simulate real work.

 well, there are several reports of CFS being significantly better than
 SD on a number of workloads - and i know of only two reports where SD
 was reported to be better than CFS: in Kasper's test (where i'd like to
 know what the 3D stuff he uses is and take a good look at that
 workload), and another 3D report which was done against -v6. (And even
 in these two reports the 'smoothness advantage' was not dramatic. If you
 know of any other reports then please let me know!)

There was Caglar Onur too, but he said he would redo all the tests. I'm
not tracking all tests nor versions, so it might be possible that some
of the differences vanish with v7.

In fact, what I'd like to see in 2.6.22 is something better for everybody
and with *no* regression, even if it's not perfect. I had the feeling
that SD matched that goal right now, except for Mike who has not tested
recent versions. Don't get me wrong, I still think that CFS is a more
interesting long-term target. But it may require more time to satisfy
everyone. At least with one of them in 2.6.22, we won't waste time
comparing to current mainline.

  Ingo

Willy

In the FWIW category, I haven't built and tested a 'mainline' since at least 
2-3 weeks ago.  That's how dramatic the differences are here.  My main 
notifier of scheduling fubar artifacts is usually kmail, which in itself 
seems to have a poor threading model, giving the composer pauses whenever it's 
off sorting incoming mail, or compacting a folder - all the usual stuff that 
it needs to do in the background.  Those lags were from 10 to 30 seconds 
long, and I could type whole sentences before they showed up on screen with 
mainline.

The best either of these schedulers can do is hold that down to 3 or 4 words, 
but that's an amazing difference in itself.  With either of these schedulers, 
a running gzip session that amanda launched in the background used to cause 
kmail to display a new message 5-30 seconds after the + key was tapped; that 
is now into the sub-4-second range, often much less.  SD seems to, as it 
states, give everyone a turn at the well, so the slowdowns when gzip is 
running are somewhat more noticeable, whereas with CFS, gzip seems to be 
pretty well preempted long enough to process most user keypresses.  Not all, 
because tapping the + key to display the next message can at times be a 
pretty complex operation.

For my workload, CFS seems to be a marginally better solution, but either is 
so much better than mainline that there cannot be a reversion to mainline 
performance here without a lot of kicking and screaming.

'vmstat -n 1' results show that CFS spends a lot less time doing context 
switches, which, as I understand it, is to be counted against OS overhead 
since no productive work is done while the switch is in progress.  For CFS, 
that's generally less than 500/second, averaging around 350; compared to 
SD046's average of about 18,000/second, it would appear that CFS allows more 
REAL work to get done by holding down the non-productive time that context 
switches require.
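For anyone wanting to cross-check that kind of measurement without vmstat, the kernel exports the same counter as the `ctxt` line in /proc/stat; a rough per-second rate can be sampled like this (Linux-specific sketch):

```sh
#!/bin/sh
# Rough context-switch rate, cross-checking `vmstat -n 1`:
# /proc/stat's "ctxt" line is a monotonically increasing count of
# context switches since boot.
c1=$(awk '/^ctxt/ {print $2}' /proc/stat)
sleep 5
c2=$(awk '/^ctxt/ {print $2}' /proc/stat)
echo "context switches/sec: $(( (c2 - c1) / 5 ))"
```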

FWIW, amanda runtimes tend to back that up: most CFS runs are sub-2 hours, SD 
runs are seemingly around 2h:10m.  But that again is not over a sufficiently 
large sample to be a benchmark either, just one person's observation.  I 
should have marked the amdump logs so I could have determined that more 
easily by tracking which scheduler was running for each dump.  amplot can be 
informative, but one must also correlate, and a week ago is ancient history 
as I have no way to verify which I was running then.

The X86's poor register architecture pretty well chains us to the 'context 
switch' if we want multitasking.

I'm reminded of how that was handled on a now largely dead architecture some 
here may never have seen an example of: TI's 99xx chips, where all 
accumulators and registers were actually stored in memory.  A context switch 
was a simple matter of reloading the register that pointed into this memory 
array with a new address - just reading the next process's address and 
storing it in the address register, itself also just a location in memory.  
The state image of the process being 'put to sleep' was therefore maintained 
indefinitely, as long as the memory was refreshed.  Too bad we can't do that 
on the x86, but I assume TI has patent lawyers standing by ready to jump on 
that one.  However, with today's L1 cache being the speed and size that it 
is, it sure looks like a doable thing even at 2GHz+ clocks.

Yup, we've tried lots of very 

Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Gene Heskett
On Sunday 29 April 2007, Paolo Ciarrocchi wrote:
[...]

   CFS modifies the scheduler and nothing else, SD fiddles all over the
   kernel in interesting ways.

Huh?  Doesn't grok.

  Hmmm I guess you confused both of them this time. CFS touches many
  places, which is why I think the testing coverage is still very low. SD
  can be tested faster. My real concern is : are there still people
  observing regressions with it ? If yes, they should be fixed before even
  being merged. If no, why not merge it as a fix for the many known corner
  cases of current scheduler ? After all, it's already in -mm.
 
  Willy

 Willy, you're making far too much sense. Are you replying to the correct
 mailing list?

FWIW, I strongly agree with Willy.

If we're putting it to a vote, I'm with Willy.  But this is a dictatorship and 
we shouldn't forget it. :)

-- 
Cheers, Gene
There are four boxes to be used in defense of liberty:
 soap, ballot, jury, and ammo. Please use in that order.
-Ed Howdershelt (Author)
An ambassador is an honest man sent abroad to lie and intrigue for the
benefit of his country.
-- Sir Henry Wotton, 1568-1639


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Ray Lee

On 4/29/07, Kasper Sandberg [EMAIL PROTECTED] wrote:

On Sun, 2007-04-29 at 08:59 +0200, Ingo Molnar wrote:
 well, there are several reports of CFS being significantly better than
 SD on a number of workloads - and i know of only two reports where SD
 was reported to be better than CFS: in Kasper's test (where i'd like to
 know what the 3D stuff he uses is and take a good look at that
 workload), and another 3D report which was done against -v6. (And even
 in these two reports the 'smoothness advantage' was not dramatic. If you
 know of any other reports then please let me know!)

I can tell you one thing, it's not just me that has observed the
smoothness in 3d stuff. After I tried rsdl first, I've had lots of people
try rsdl, and subsequently sd, because of the significant improvement in
smoothness, and they have all found the same results.

The stuff I have tested with in particular is Unreal Tournament 2004 and
World of Warcraft through wine, both running opengl, and consuming all
the cpu time they can get.


[snip more of sd smoother than cfs report]

WINE is an interesting workload as it does most of its work out of
process to the 'wineserver', which then does more work out of process
to the X server. So, it's three mutually interacting processes total,
once one includes the original client (Unreal Tournament or World of
Warcraft, in this case).

Perhaps running one of the windows system performance apps (that can
be freely downloaded) under WINE would give some hard numbers people
could use to try to reproduce the report.

Ray


Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Kasper Sandberg
On Sun, 2007-04-29 at 08:42 -0700, Ray Lee wrote:
 On 4/29/07, Kasper Sandberg [EMAIL PROTECTED] wrote:
  On Sun, 2007-04-29 at 08:59 +0200, Ingo Molnar wrote:
   well, there are several reports of CFS being significantly better than
   SD on a number of workloads - and i know of only two reports where SD
   was reported to be better than CFS: in Kasper's test (where i'd like to
   know what the 3D stuff he uses is and take a good look at that
   workload), and another 3D report which was done against -v6. (And even
   in these two reports the 'smoothness advantage' was not dramatic. If you
   know of any other reports then please let me know!)
 
  I can tell you one thing, its not just me that has observed the
  smoothness in 3d stuff, after i tried rsdl first i've had lots of people
  try rsdl and subsequently sd because of the significant improvement in
  smoothness, and they have all found the same results.
 
  The stuff i have tested with in particular is unreal tournament 2004 and
  world of warcraft through wine, both running opengl, and consuming all
  the cpu time it can get.
 
 [snip more of sd smoother than cfs report]
 
 WINE is an interesting workload as it does most of its work out of
 process to the 'wineserver', which then does more work out of process
 to the X server. So, it's three mutually interacting processes total,
 once one includes the original client (Unreal Tournament or World of
 Warcraft, in this case).
the wineserver process is using next to no cpu-time compared to the main
process.

 
 Perhaps running one of the windows system performance apps (that can
 be freely downloaded) under WINE would give some hard numbers people
 could use to try to reproduce the report.
 
 Ray
 



Re: [patch] CFS scheduler, -v6

2007-04-29 Thread Mark Lord

Willy Tarreau wrote:

..
Contrarily to most people, I don't see them as competitors. I see SD as
a first step with a low risk of regression, and CFS as an ultimate
solution relying on a more solid framework.


I see SD as 100% chance of regression on my main machine.

But I will retest (on Monday?) with the latest, just to see
if it has improved closer to mainline or not.

-ml


Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Willy Tarreau
Hi,

On Sun, Apr 29, 2007 at 03:18:32AM +0200, Kasper Sandberg wrote:
> Okay, so I've tried with cfs 7 now, and the completely broken audio
> behavior is fixed.
> 
> The only things I really notice now are that gtk apps seem to redraw
> somewhat slower, and renicing X doesn't seem to be able to bring it on
> par with SD or vanilla.
> 
> And smoothness just doesn't match SD. It may be a bit better than
> vanilla/staircase, I can't really say definitively, but SD just has the
> smoothness factor, which is extremely attractive.

(...)

I don't know if Mike still has problems with SD, but there are now several
interesting reports of SD giving better feedback than CFS on real work. In
my experience, CFS seems smoother on *technical* tests, which I agree do
not really simulate real work.

I really think that if we merged SD in 2.6.22, at least we could focus
more on the differences between it (which would become mainline) and CFS,
in order to improve CFS for later inclusion when mature enough. Or maybe
only the relevant parts of CFS will be merged into mainline later. But at
least, testers would not have to patch anymore to report feedback with
SD during -rc, and above all they would not compare anymore against
old-vanilla, thus reducing the number of tests.

Just my 2 cents,
Willy



Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Kasper Sandberg
Okay, so I've tried with cfs 7 now, and the completely broken audio
behavior is fixed.

The only things I really notice now are that gtk apps seem to redraw
somewhat slower, and renicing X doesn't seem to be able to bring it on
par with SD or vanilla.

And smoothness just doesn't match SD. It may be a bit better than
vanilla/staircase, I can't really say definitively, but SD just has the
smoothness factor, which is extremely attractive.

This is with 3d stuff, like through wine or natively on linux, under
load (and even just minor things like using a browser or other things,
like spamassassin): vanilla/staircase (not rsdl or sd)/cfs will go down in
FPS, but at the same time stutter; it goes down to around the same fps
under the same load as SD, but SD is completely smooth.

I'm not sure I'm describing it properly, but say it takes 35fps for the 3d
stuff to seem perfect. The fps monitor updates once every 1 or 2
seconds, showing average fps (haven't looked at the code, but I assume it
spans those 1-2 seconds). Usually I have like 60 fps, but under load it
can go down to 35, but under anything but SD it's not smooth: it will do
the 35 fps, but I suspect it does it in chunks; for example it will skip
for 200 ms and then hurry to display the 35 frames. This means it does
get the workload done, but not in a very pleasant manner, and it's here I
see SD as being in such a high league that it's really impossible to
describe the results with any other word than Perfect.

Best regards,
Kasper Sandberg



Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Lee Revell

On 4/28/07, Kasper Sandberg <[EMAIL PROTECTED]> wrote:

tried looking for buffer stuff in /proc/asound, couldnt find anything,
im using the via82xx driver.



Use fuser to see which sound device is used:

$ fuser /dev/snd/*
/dev/snd/controlC0:  14028
/dev/snd/pcmC0D0c:   14028m
/dev/snd/pcmC0D0p:   14028m

So process 14028 is using capture device 0 substream 0 and playback
device 0 substream 0.  Examine the hw_params for playback device like
so:

$ cat /proc/asound/card0/pcm0p/sub0/hw_params
access: MMAP_INTERLEAVED
format: S16_LE
subformat: STD
channels: 2
rate: 48000 (48000/1)
period_size: 1024
buffer_size: 2048
tick_time: 1000

This application (jackd) is a sophisticated user of the ALSA API and
lets the user set the period and buffer sizes, but many apps just use
the defaults they get from ALSA.  These apps will work well with a
driver that happens to have a large default buffer but will fail
(skip) with drivers that default to a small one.
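The relationship between buffer size and skip tolerance can be sketched as follows (the 256-frame figure is a hypothetical small default for illustration, not taken from any particular driver):

```python
# Sketch: how long the hardware can keep playing from its ring buffer
# alone, i.e. how much scheduling delay it absorbs before an underrun (skip).
def underrun_headroom_ms(buffer_size_frames, rate_hz):
    """Playback time covered by a full buffer, in milliseconds."""
    return 1000.0 * buffer_size_frames / rate_hz

# jackd's hw_params shown above: 2048 frames at 48 kHz
print(f"{underrun_headroom_ms(2048, 48000):.1f} ms")  # ~42.7 ms of slack
# A hypothetical driver defaulting to a small 256-frame buffer:
print(f"{underrun_headroom_ms(256, 48000):.1f} ms")   # ~5.3 ms of slack
```

An app relying on a small default must be rescheduled within a few milliseconds of the buffer draining, which is exactly where scheduler latency shows up as audible skips.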

Lee


Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Kasper Sandberg
On Fri, 2007-04-27 at 13:55 +0200, Ingo Molnar wrote:
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
> 
> > update for lkml readers: this is some really 'catastrophic' condition 
> > triggering on your box. Here ogg123 just never skips on an older 750 
> > MHz box, which is 4-5 times slower than your 2GHz box - while i have 
> > _fourty nice-0 infinite loops_ running. I.e. at this clearly 
> > ridiculous load, at just 2.5% of CPU time ogg123 is just chugging 
> > along nicely and never leaves out a beat.
> 
> Kasper, just to exclude the possibility that this is somehow related to 
> IO scheduling, could you copy the OGG file over to /dev/shm and play it 
> from there? Do you still get the bad skips?
Just copied it to a tmpfs, and it still skips badly.

In response to your question, Ingo: yes, I see those "at least 0 ms"
messages.

I am not running esd; I use ALSA directly from ogg123.

But it's not just ogg123, mplayer does it too. Just moving a window can
trigger it. Even scrolling in my mail list causes it.

And this ONLY happens on CFS, not vanilla, not staircase, not SD.

While I look at top, the load average is 0.11.

It's definitely not an IO issue, because I just tried creating some IO load,
like reading files, and it doesn't skip. But moving windows triggers it
better than anything (mplayer seems more sensitive than ogg123); it seems
anything X-related makes it explode.

Tried looking for buffer stuff in /proc/asound, couldn't find anything;
I'm using the via82xx driver.


> 
>   Ingo
> 



Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Ingo Molnar

* Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:

> > yeah, indeed. Would you like to do a patch for that?
> 
> My pleasure :)

thanks! I've applied your patch to my tree and it will be in -v7 which 
i'll release in a few minutes.

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Srivatsa Vaddagiri
On Sat, Apr 28, 2007 at 08:53:27PM +0530, Srivatsa Vaddagiri wrote:
> With the patch below applied, I ran a "time -p make -s -j10 bzImage"
> test.

On a 4-CPU (counting HT) Intel Xeon 3.6GHz box

> 
> 2.6.20 + cfs-v6              -> 186.45 (real)
> 2.6.20 + cfs-v6 + this_patch -> 184.55 (real)
> 
> or about a 1% improvement in real wall-clock time. This was with the default
> sched_granularity_ns of 600. I suspect the larger the value of
> sched_granularity_ns and the number of (SCHED_NORMAL) tasks in the system,
> the better the benefit from this caching.

-- 
Regards,
vatsa


Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Srivatsa Vaddagiri
On Sat, Apr 28, 2007 at 03:53:38PM +0200, Ingo Molnar wrote:
> > Won't it help if you update rq->rb_leftmost above from the value 
> > returned by rb_first(), so that subsequent calls to first_fair will be 
> > sped up?
> 
> yeah, indeed. Would you like to do a patch for that?

My pleasure :)

With the patch below applied, I ran a "time -p make -s -j10 bzImage"
test.

2.6.20 + cfs-v6              -> 186.45 (real)
2.6.20 + cfs-v6 + this_patch -> 184.55 (real)

or about a 1% improvement in real wall-clock time. This was with the default
sched_granularity_ns of 600. I suspect the larger the value of
sched_granularity_ns and the number of (SCHED_NORMAL) tasks in the system, the
better the benefit from this caching.


Cache value returned by rb_first(), for faster subsequent lookups.

Signed-off-by : Srivatsa Vaddagiri <[EMAIL PROTECTED]>


---


diff -puN kernel/sched_fair.c~speedup kernel/sched_fair.c
--- linux-2.6.21/kernel/sched_fair.c~speedup	2007-04-28 19:28:08.000000000 +0530
+++ linux-2.6.21-vatsa/kernel/sched_fair.c	2007-04-28 19:34:55.000000000 +0530
@@ -86,7 +86,9 @@ static inline struct rb_node * first_fai
 {
if (rq->rb_leftmost)
return rq->rb_leftmost;
-   return rb_first(&rq->tasks_timeline);
+   /* Cache the value returned by rb_first() */
+   rq->rb_leftmost = rb_first(&rq->tasks_timeline);
+   return rq->rb_leftmost;
 }
 
 static struct task_struct * __pick_next_task_fair(struct rq *rq)
_






-- 
Regards,
vatsa


Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Ingo Molnar

* Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:

> On Wed, Apr 25, 2007 at 11:47:04PM +0200, Ingo Molnar wrote:
> > The CFS patch against v2.6.21-rc7 or against v2.6.20.7 can be downloaded 
> > from the usual place:
> > 
> > http://redhat.com/~mingo/cfs-scheduler/
> 
> +static inline struct rb_node * first_fair(struct rq *rq)
> +{
> + if (rq->rb_leftmost)
> + return rq->rb_leftmost;
> + return rb_first(&rq->tasks_timeline);
> +}
> 
> Won't it help if you update rq->rb_leftmost above from the value 
> returned by rb_first(), so that subsequent calls to first_fair will be 
> sped up?

yeah, indeed. Would you like to do a patch for that?

Ingo


Re: [patch] CFS scheduler, -v6

2007-04-28 Thread Srivatsa Vaddagiri
On Wed, Apr 25, 2007 at 11:47:04PM +0200, Ingo Molnar wrote:
> The CFS patch against v2.6.21-rc7 or against v2.6.20.7 can be downloaded 
> from the usual place:
> 
> http://redhat.com/~mingo/cfs-scheduler/

+static inline struct rb_node * first_fair(struct rq *rq)
+{
+   if (rq->rb_leftmost)
+   return rq->rb_leftmost;
+   return rb_first(&rq->tasks_timeline);
+}

Won't it help if you update rq->rb_leftmost above from the value
returned by rb_first(), so that subsequent calls to first_fair will be
sped up?

-- 
Regards,
vatsa

