Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-19 Thread Helge Hafting

Antonio Vargas wrote:


IIRC, about two or three years ago (or maybe in the 2.6.10 timeframe),
there was a patch which managed to pass interactivity from one app
to another when there was a pipe or UDP connection between them. This
meant that a marked-as-interactive xterm would, when blocked waiting
for an X server response, transfer some of its interactiveness to the
X server, and apparently it worked very well for desktop workloads, so
maybe adapting it for this new scheduler would be good.

And it was dropped because of some very nasty side effect,
probably a DoS opportunity.

Helge Hafting

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-18 Thread jos poortvliet
On Sunday 18 March 2007, Con Kolivas wrote:
> On Monday 12 March 2007 22:26, Al Boldi wrote:
> > Con Kolivas wrote:
> > > On Monday 12 March 2007 15:42, Al Boldi wrote:
> > > > Con Kolivas wrote:
> > > > > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > > > > And thank you! I think I know what's going on now. I think each
> > > > > > rotation is followed by another rotation before the higher
> > > > > > priority task is getting a look in in schedule() to even get
> > > > > > quota and add it to the runqueue quota. I'll try a simple change
> > > > > > to see if that helps. Patch coming up shortly.
> > > > >
> > > > > Can you try the following patch and see if it helps. There's also
> > > > > one minor preemption logic fix in there that I'm planning on
> > > > > including. Thanks!
> > > >
> > > > Applied on top of v0.28 mainline, and there is no difference.
> > > >
> > > > What's it look like on your machine?
> > >
> > > The higher priority one always gets 6-7ms whereas the lower priority one
> > > runs 6-7ms and then one larger perfectly bound expiration amount.
> > > Basically exactly as I'd expect. The higher priority task gets
> > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > realise that the max latency ends up being longer intermittently on the
> > > niced task but that's -in my opinion- perfectly fine as a compromise to
> > > ensure the nice 0 one always gets low latency.
> >
> > I think, it should be possible to spread this max expiration latency
> > across the rotation, should it not?
>
> There is a way that I toyed with of creating maps of slots to use for each
> different priority, but it broke the O(1) nature of the virtual deadline
> management. Minimising algorithmic complexity seemed more important to
> maintain than getting slightly better latency spreads for niced tasks. It
> also appeared to be less cache friendly in design. I could certainly try
> and implement it but how much importance are we to place on latency of
> niced tasks? Are you aware of any usage scenario where latency sensitive
> tasks are ever significantly niced in the real world?

I always nice down heavy games; it makes them smoother...

-- 
Disclaimer:

Everything I do, think and say is based on the worldview I currently hold.
I am not responsible for changes in the world, or in the image I have of it,
nor for my own behaviour that follows from them.
Everything I say is kindly meant, unless explicitly stated otherwise.




Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-17 Thread Bill Davidsen

David Lang wrote:

On Fri, 9 Mar 2007, Al Boldi wrote:




My preferred sphere of operation is the Manichean domain of faster vs.
slower, functionality vs. non-functionality, and the like. For me, such
design concerns are like the need for a kernel to format pagetables so
the x86 MMU decodes what was intended, or for a compiler to emit valid
assembly instructions, or for a programmer to write C the compiler
won't reject with parse errors.


Sure, but I think, even from a technical point of view, competition is a good
thing to have.  Pluggable schedulers give us this kind of competition, that
forces each scheduler to refine or become obsolete.  Think evolution.


The point Linus is making is that with pluggable schedulers there isn't 
competition between them; the various developer teams would go off in 
their own direction, and any drawbacks to their scheduler could be 
answered with "that's not what we are good at, use a different 
scheduler", with the very real possibility that a person could get this 
answer from ALL schedulers, leaving them with nothing good to use.


Have you noticed that currently that is exactly what happens? If the 
default scheduler doesn't handle your load well you have the option of 
rewriting it and maintaining it, or doing without, or trying to fix your 
case without breaking others, or patching in some other, non-mainline, 
scheduler.


The default scheduler has been around long enough that I don't see it 
being tuned for any A without making some B perform worse. Thus multiple 
schedulers are a possible solution.


They don't need to be available as runtime choices; boot-time selection 
would still allow reasonable testing. I can see myself using a compile-time 
option and building multiple kernels, but not the average user.


--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-17 Thread Bill Davidsen

Con Kolivas wrote:

On Monday 12 March 2007 22:26, Al Boldi wrote:

Con Kolivas wrote:

On Monday 12 March 2007 15:42, Al Boldi wrote:

Con Kolivas wrote:

On Monday 12 March 2007 08:52, Con Kolivas wrote:

And thank you! I think I know what's going on now. I think each
rotation is followed by another rotation before the higher priority
task is getting a look in in schedule() to even get quota and add
it to the runqueue quota. I'll try a simple change to see if that
helps. Patch coming up shortly.

Can you try the following patch and see if it helps. There's also one
minor preemption logic fix in there that I'm planning on including.
Thanks!

Applied on top of v0.28 mainline, and there is no difference.

What's it look like on your machine?

The higher priority one always gets 6-7ms whereas the lower priority one
runs 6-7ms and then one larger perfectly bound expiration amount.
Basically exactly as I'd expect. The higher priority task gets precisely
RR_INTERVAL maximum latency whereas the lower priority task gets
RR_INTERVAL min and full expiration (according to the virtual deadline)
as a maximum. That's exactly how I intend it to work. Yes I realise that
the max latency ends up being longer intermittently on the niced task but
that's -in my opinion- perfectly fine as a compromise to ensure the nice
0 one always gets low latency.

I think, it should be possible to spread this max expiration latency across
the rotation, should it not?


There is a way that I toyed with of creating maps of slots to use for each 
different priority, but it broke the O(1) nature of the virtual deadline 
management. Minimising algorithmic complexity seemed more important to 
maintain than getting slightly better latency spreads for niced tasks. It 
also appeared to be less cache friendly in design. I could certainly try and 
implement it but how much importance are we to place on latency of niced 
tasks? Are you aware of any usage scenario where latency sensitive tasks are 
ever significantly niced in the real world?


It depends on how you reconcile "completely fair" and "order of 
magnitude blips in latency." It looks (from the results, not the code) 
as if nice is implemented by round-robin scheduling followed by once in 
a while just not giving the CPU to the nice task for a while. Given the 
smooth nature of the performance otherwise, it's more obvious than if 
you weren't doing such a good job most of the time.


Ugly stands out more on something beautiful!

--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot




Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Antonio Vargas

On 3/12/07, jos poortvliet <[EMAIL PROTECTED]> wrote:

On Monday 12 March 2007, Con Kolivas wrote:
> On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> > Con Kolivas wrote:
> > > > > The higher priority one always gets 6-7ms whereas the lower priority
> > > > > one runs 6-7ms and then one larger perfectly bound expiration
> > > > > amount. Basically exactly as I'd expect. The higher priority task
> > > > > gets precisely RR_INTERVAL maximum latency whereas the lower
> > > > > priority task gets RR_INTERVAL min and full expiration (according
> > > > > to the virtual deadline) as a maximum. That's exactly how I intend
> > > > > it to work. Yes I realise that the max latency ends up being longer
> > > > > intermittently on the niced task but that's -in my opinion-
> > > > > perfectly fine as a compromise to ensure the nice 0 one always gets
> > > > > low latency.
> > > >
> > > > I think, it should be possible to spread this max expiration latency
> > > > across the rotation, should it not?
> > >
> > > There is a way that I toyed with of creating maps of slots to use for
> > > each different priority, but it broke the O(1) nature of the virtual
> > > deadline management. Minimising algorithmic complexity seemed more
> > > important to maintain than getting slightly better latency spreads for
> > > niced tasks. It also appeared to be less cache friendly in design. I
> > > could certainly try and implement it but how much importance are we to
> > > place on latency of niced tasks? Are you aware of any usage scenario
> > > where latency sensitive tasks are ever significantly niced in the real
> > > world?
> >
> > It only takes one negatively nice'd proc to affect X adversely.
>
> I have an idea. Give me some time to code up my idea. Lack of sleep is
> making me very unpleasant.

You're excited by RSDL and the positive comments, aren't you? Well, don't
forget to sleep, sleeping makes ppl smarter you know ;-)




IIRC, about two or three years ago (or maybe in the 2.6.10 timeframe),
there was a patch which managed to pass interactivity from one app
to another when there was a pipe or UDP connection between them. This
meant that a marked-as-interactive xterm would, when blocked waiting
for an X server response, transfer some of its interactiveness to the
X server, and apparently it worked very well for desktop workloads, so
maybe adapting it for this new scheduler would be good.



--
Greetz, Antonio Vargas aka winden of network

http://network.amigascne.org/
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Every day, every year
you have to work
you have to study
you have to scene.


Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread jos poortvliet
On Monday 12 March 2007, Con Kolivas wrote:
> On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> > Con Kolivas wrote:
> > > > > The higher priority one always gets 6-7ms whereas the lower priority
> > > > > one runs 6-7ms and then one larger perfectly bound expiration
> > > > > amount. Basically exactly as I'd expect. The higher priority task
> > > > > gets precisely RR_INTERVAL maximum latency whereas the lower
> > > > > priority task gets RR_INTERVAL min and full expiration (according
> > > > > to the virtual deadline) as a maximum. That's exactly how I intend
> > > > > it to work. Yes I realise that the max latency ends up being longer
> > > > > intermittently on the niced task but that's -in my opinion-
> > > > > perfectly fine as a compromise to ensure the nice 0 one always gets
> > > > > low latency.
> > > >
> > > > I think, it should be possible to spread this max expiration latency
> > > > across the rotation, should it not?
> > >
> > > There is a way that I toyed with of creating maps of slots to use for
> > > each different priority, but it broke the O(1) nature of the virtual
> > > deadline management. Minimising algorithmic complexity seemed more
> > > important to maintain than getting slightly better latency spreads for
> > > niced tasks. It also appeared to be less cache friendly in design. I
> > > could certainly try and implement it but how much importance are we to
> > > place on latency of niced tasks? Are you aware of any usage scenario
> > > where latency sensitive tasks are ever significantly niced in the real
> > > world?
> >
> > It only takes one negatively nice'd proc to affect X adversely.
>
> I have an idea. Give me some time to code up my idea. Lack of sleep is
> making me very unpleasant.

You're excited by RSDL and the positive comments, aren't you? Well, don't 
forget to sleep, sleeping makes ppl smarter you know ;-)




Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Con Kolivas
On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> Con Kolivas wrote:
> > > > The higher priority one always gets 6-7ms whereas the lower priority
> > > > one runs 6-7ms and then one larger perfectly bound expiration amount.
> > > > Basically exactly as I'd expect. The higher priority task gets
> > > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > > realise that the max latency ends up being longer intermittently on
> > > > the niced task but that's -in my opinion- perfectly fine as a
> > > > compromise to ensure the nice 0 one always gets low latency.
> > >
> > > I think, it should be possible to spread this max expiration latency
> > > across the rotation, should it not?
> >
> > There is a way that I toyed with of creating maps of slots to use for
> > each different priority, but it broke the O(1) nature of the virtual
> > deadline management. Minimising algorithmic complexity seemed more
> > important to maintain than getting slightly better latency spreads for
> > niced tasks. It also appeared to be less cache friendly in design. I
> > could certainly try and implement it but how much importance are we to
> > place on latency of niced tasks? Are you aware of any usage scenario
> > where latency sensitive tasks are ever significantly niced in the real
> > world?
>
> It only takes one negatively nice'd proc to affect X adversely.

I have an idea. Give me some time to code up my idea. Lack of sleep is making 
me very unpleasant.

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Al Boldi
jos poortvliet wrote:
> > It only takes one negatively nice'd proc to affect X adversely.
>
> Then, maybe, we should start nicing X again, like we did/had to do until a
> few years ago? Or should we just wait until X gets fixed (after all,
> development goes faster than ever)? Or is this really the scheduler's
> fault?

It's not enough to renice X.  You would have to renice it, and any app that 
needed fixed latency, to the same nice level as the negatively nice'd proc, 
which defeats the purpose...


Thanks!

--
Al



Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread michael chang

On 3/12/07, jos poortvliet <[EMAIL PROTECTED]> wrote:

On Monday 12 March 2007, Al Boldi wrote:
>
> It only takes one negatively nice'd proc to affect X adversely.

goes faster than ever)? Or is this really the scheduler's fault?



Take this with a grain of salt, but, I don't think this is the
scheduler's _fault_. That said, if the scheduler can fix it, it's not
necessarily a bad thing.

--
~Mike
- Just the crazy copy cat.


Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread jos poortvliet
On Monday 12 March 2007, Al Boldi wrote:
> Con Kolivas wrote:
> > > > The higher priority one always gets 6-7ms whereas the lower priority
> > > > one runs 6-7ms and then one larger perfectly bound expiration amount.
> > > > Basically exactly as I'd expect. The higher priority task gets
> > > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > > realise that the max latency ends up being longer intermittently on
> > > > the niced task but that's -in my opinion- perfectly fine as a
> > > > compromise to ensure the nice 0 one always gets low latency.
> > >
> > > I think, it should be possible to spread this max expiration latency
> > > across the rotation, should it not?
> >
> > There is a way that I toyed with of creating maps of slots to use for
> > each different priority, but it broke the O(1) nature of the virtual
> > deadline management. Minimising algorithmic complexity seemed more
> > important to maintain than getting slightly better latency spreads for
> > niced tasks. It also appeared to be less cache friendly in design. I
> > could certainly try and implement it but how much importance are we to
> > place on latency of niced tasks? Are you aware of any usage scenario
> > where latency sensitive tasks are ever significantly niced in the real
> > world?
>
> It only takes one negatively nice'd proc to affect X adversely.

Then, maybe, we should start nicing X again, like we did/had to do until a few 
years ago? Or should we just wait until X gets fixed (after all, development 
goes faster than ever)? Or is this really the scheduler's fault?

> Thanks!
>
> --
> Al
>
> ___
> http://ck.kolivas.org/faqs/replying-to-mailing-list.txt
> ck mailing list - mailto: [EMAIL PROTECTED]
> http://vds.kolivas.org/mailman/listinfo/ck



-- 
Disclaimer:

Everything I do, think and say is based on the worldview I currently hold.
I am not responsible for changes in the world, or in the image I have of it,
nor for my own behaviour that follows from them.
Everything I say is kindly meant, unless explicitly stated otherwise.




Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Al Boldi
Con Kolivas wrote:
> > > The higher priority one always gets 6-7ms whereas the lower priority
> > > one runs 6-7ms and then one larger perfectly bound expiration amount.
> > > Basically exactly as I'd expect. The higher priority task gets
> > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > realise that the max latency ends up being longer intermittently on
> > > the niced task but that's -in my opinion- perfectly fine as a
> > > compromise to ensure the nice 0 one always gets low latency.
> >
> > I think, it should be possible to spread this max expiration latency
> > across the rotation, should it not?
>
> There is a way that I toyed with of creating maps of slots to use for each
> different priority, but it broke the O(1) nature of the virtual deadline
> management. Minimising algorithmic complexity seemed more important to
> maintain than getting slightly better latency spreads for niced tasks. It
> also appeared to be less cache friendly in design. I could certainly try
> and implement it but how much importance are we to place on latency of
> niced tasks? Are you aware of any usage scenario where latency sensitive
> tasks are ever significantly niced in the real world?

It only takes one negatively nice'd proc to affect X adversely.


Thanks!

--
Al



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Con Kolivas
On Monday 12 March 2007 22:26, Al Boldi wrote:
> Con Kolivas wrote:
> > On Monday 12 March 2007 15:42, Al Boldi wrote:
> > > Con Kolivas wrote:
> > > > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > > > And thank you! I think I know what's going on now. I think each
> > > > > rotation is followed by another rotation before the higher priority
> > > > > task is getting a look in in schedule() to even get quota and add
> > > > > it to the runqueue quota. I'll try a simple change to see if that
> > > > > helps. Patch coming up shortly.
> > > >
> > > > Can you try the following patch and see if it helps. There's also one
> > > > minor preemption logic fix in there that I'm planning on including.
> > > > Thanks!
> > >
> > > Applied on top of v0.28 mainline, and there is no difference.
> > >
> > > What's it look like on your machine?
> >
> > The higher priority one always gets 6-7ms whereas the lower priority one
> > runs 6-7ms and then one larger perfectly bound expiration amount.
> > Basically exactly as I'd expect. The higher priority task gets precisely
> > RR_INTERVAL maximum latency whereas the lower priority task gets
> > RR_INTERVAL min and full expiration (according to the virtual deadline)
> > as a maximum. That's exactly how I intend it to work. Yes I realise that
> > the max latency ends up being longer intermittently on the niced task but
> > that's -in my opinion- perfectly fine as a compromise to ensure the nice
> > 0 one always gets low latency.
>
> I think, it should be possible to spread this max expiration latency across
> the rotation, should it not?

There is a way that I toyed with of creating maps of slots to use for each 
different priority, but it broke the O(1) nature of the virtual deadline 
management. Minimising algorithmic complexity seemed more important to 
maintain than getting slightly better latency spreads for niced tasks. It 
also appeared to be less cache friendly in design. I could certainly try and 
implement it but how much importance are we to place on latency of niced 
tasks? Are you aware of any usage scenario where latency sensitive tasks are 
ever significantly niced in the real world?

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Al Boldi
Con Kolivas wrote:
> On Monday 12 March 2007 15:42, Al Boldi wrote:
> > Con Kolivas wrote:
> > > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > > And thank you! I think I know what's going on now. I think each
> > > > rotation is followed by another rotation before the higher priority
> > > > task is getting a look in in schedule() to even get quota and add it
> > > > to the runqueue quota. I'll try a simple change to see if that
> > > > helps. Patch coming up shortly.
> > >
> > > Can you try the following patch and see if it helps. There's also one
> > > minor preemption logic fix in there that I'm planning on including.
> > > Thanks!
> >
> > Applied on top of v0.28 mainline, and there is no difference.
> >
> > What's it look like on your machine?
>
> The higher priority one always get 6-7ms whereas the lower priority one
> runs 6-7ms and then one larger perfectly bound expiration amount.
> Basically exactly as I'd expect. The higher priority task gets precisely
> RR_INTERVAL maximum latency whereas the lower priority task gets
> RR_INTERVAL min and full expiration (according to the virtual deadline) as
> a maximum. That's exactly how I intend it to work. Yes I realise that the
> max latency ends up being longer intermittently on the niced task but
> that's -in my opinion- perfectly fine as a compromise to ensure the nice 0
> one always gets low latency.

I think it should be possible to spread this max expiration latency across
the rotation, should it not?


Thanks!

--
Al









Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread jos poortvliet
On Monday 12 March 2007, Al Boldi wrote:
> Con Kolivas wrote:
> > > > The higher priority one always get 6-7ms whereas the lower priority
> > > > one runs 6-7ms and then one larger perfectly bound expiration amount.
> > > > Basically exactly as I'd expect. The higher priority task gets
> > > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > > realise that the max latency ends up being longer intermittently on
> > > > the niced task but that's -in my opinion- perfectly fine as a
> > > > compromise to ensure the nice 0 one always gets low latency.
> > >
> > > I think, it should be possible to spread this max expiration latency
> > > across the rotation, should it not?
> >
> > There is a way that I toyed with of creating maps of slots to use for
> > each different priority, but it broke the O(1) nature of the virtual
> > deadline management. Minimising algorithmic complexity seemed more
> > important to maintain than getting slightly better latency spreads for
> > niced tasks. It also appeared to be less cache friendly in design. I
> > could certainly try and implement it but how much importance are we to
> > place on latency of niced tasks? Are you aware of any usage scenario
> > where latency sensitive tasks are ever significantly niced in the real
> > world?
>
> It only takes one negatively nice'd proc to affect X adversely.

Then, maybe, we should start nicing X again, like we did/had to do until a few
years ago? Or should we just wait until X gets fixed (after all, development
goes faster than ever)? Or is this really the scheduler's fault?

> Thanks!
>
> --
> Al
>
> ___
> http://ck.kolivas.org/faqs/replying-to-mailing-list.txt
> ck mailing list - mailto: [EMAIL PROTECTED]
> http://vds.kolivas.org/mailman/listinfo/ck



-- 
Disclaimer:

Everything I do, think and say is based on the worldview I currently have.
I am not responsible for changes to the world, or to my view of it, nor for
my own behaviour that follows from them. Everything I say is meant kindly,
unless explicitly stated otherwise.




Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread michael chang

On 3/12/07, jos poortvliet [EMAIL PROTECTED] wrote:
> On Monday 12 March 2007, Al Boldi wrote:
>
> > It only takes one negatively nice'd proc to affect X adversely.
>
> goes faster than ever)? Or is this really the scheduler's fault?



Take this with a grain of salt, but, I don't think this is the
scheduler's _fault_. That said, if the scheduler can fix it, it's not
necessarily a bad thing.

--
~Mike
- Just the crazy copy cat.


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Al Boldi
jos poortvliet wrote:
> > It only takes one negatively nice'd proc to affect X adversely.
>
> Then, maybe, we should start nicing X again, like we did/had to do until a
> few years ago? Or should we just wait until X gets fixed (after all,
> development goes faster than ever)? Or is this really the scheduler's
> fault?

It's not enough to renice X.  You would have to renice it, and any app that 
needed fixed latency, to the same nice of the negatively nice'd proc, which 
defeats the purpose...


Thanks!

--
Al



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Con Kolivas
On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> Con Kolivas wrote:
> > > > The higher priority one always get 6-7ms whereas the lower priority
> > > > one runs 6-7ms and then one larger perfectly bound expiration amount.
> > > > Basically exactly as I'd expect. The higher priority task gets
> > > > precisely RR_INTERVAL maximum latency whereas the lower priority task
> > > > gets RR_INTERVAL min and full expiration (according to the virtual
> > > > deadline) as a maximum. That's exactly how I intend it to work. Yes I
> > > > realise that the max latency ends up being longer intermittently on
> > > > the niced task but that's -in my opinion- perfectly fine as a
> > > > compromise to ensure the nice 0 one always gets low latency.
> > >
> > > I think, it should be possible to spread this max expiration latency
> > > across the rotation, should it not?
> >
> > There is a way that I toyed with of creating maps of slots to use for
> > each different priority, but it broke the O(1) nature of the virtual
> > deadline management. Minimising algorithmic complexity seemed more
> > important to maintain than getting slightly better latency spreads for
> > niced tasks. It also appeared to be less cache friendly in design. I
> > could certainly try and implement it but how much importance are we to
> > place on latency of niced tasks? Are you aware of any usage scenario
> > where latency sensitive tasks are ever significantly niced in the real
> > world?
>
> It only takes one negatively nice'd proc to affect X adversely.

I have an idea. Give me some time to code up my idea. Lack of sleep is making 
me very unpleasant.

-- 
-ck


Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread jos poortvliet
On Monday 12 March 2007, Con Kolivas wrote:
> On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> > Con Kolivas wrote:
> > > > > The higher priority one always get 6-7ms whereas the lower priority
> > > > > one runs 6-7ms and then one larger perfectly bound expiration
> > > > > amount. Basically exactly as I'd expect. The higher priority task
> > > > > gets precisely RR_INTERVAL maximum latency whereas the lower
> > > > > priority task gets RR_INTERVAL min and full expiration (according
> > > > > to the virtual deadline) as a maximum. That's exactly how I intend
> > > > > it to work. Yes I realise that the max latency ends up being longer
> > > > > intermittently on the niced task but that's -in my opinion-
> > > > > perfectly fine as a compromise to ensure the nice 0 one always gets
> > > > > low latency.
> > > >
> > > > I think, it should be possible to spread this max expiration latency
> > > > across the rotation, should it not?
> > >
> > > There is a way that I toyed with of creating maps of slots to use for
> > > each different priority, but it broke the O(1) nature of the virtual
> > > deadline management. Minimising algorithmic complexity seemed more
> > > important to maintain than getting slightly better latency spreads for
> > > niced tasks. It also appeared to be less cache friendly in design. I
> > > could certainly try and implement it but how much importance are we to
> > > place on latency of niced tasks? Are you aware of any usage scenario
> > > where latency sensitive tasks are ever significantly niced in the real
> > > world?
> >
> > It only takes one negatively nice'd proc to affect X adversely.
>
> I have an idea. Give me some time to code up my idea. Lack of sleep is
> making me very unpleasant.

You're excited by RSDL and the positive comments, aren't you? Well, don't 
forget to sleep, sleeping makes ppl smarter you know ;-)




Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-12 Thread Antonio Vargas

On 3/12/07, jos poortvliet [EMAIL PROTECTED] wrote:

> On Monday 12 March 2007, Con Kolivas wrote:
> > On Tuesday 13 March 2007 01:14, Al Boldi wrote:
> > > Con Kolivas wrote:
> > > > > > The higher priority one always get 6-7ms whereas the lower priority
> > > > > > one runs 6-7ms and then one larger perfectly bound expiration
> > > > > > amount. Basically exactly as I'd expect. The higher priority task
> > > > > > gets precisely RR_INTERVAL maximum latency whereas the lower
> > > > > > priority task gets RR_INTERVAL min and full expiration (according
> > > > > > to the virtual deadline) as a maximum. That's exactly how I intend
> > > > > > it to work. Yes I realise that the max latency ends up being longer
> > > > > > intermittently on the niced task but that's -in my opinion-
> > > > > > perfectly fine as a compromise to ensure the nice 0 one always gets
> > > > > > low latency.
> > > > >
> > > > > I think, it should be possible to spread this max expiration latency
> > > > > across the rotation, should it not?
> > > >
> > > > There is a way that I toyed with of creating maps of slots to use for
> > > > each different priority, but it broke the O(1) nature of the virtual
> > > > deadline management. Minimising algorithmic complexity seemed more
> > > > important to maintain than getting slightly better latency spreads for
> > > > niced tasks. It also appeared to be less cache friendly in design. I
> > > > could certainly try and implement it but how much importance are we to
> > > > place on latency of niced tasks? Are you aware of any usage scenario
> > > > where latency sensitive tasks are ever significantly niced in the real
> > > > world?
> > >
> > > It only takes one negatively nice'd proc to affect X adversely.
> >
> > I have an idea. Give me some time to code up my idea. Lack of sleep is
> > making me very unpleasant.
>
> You're excited by RSDL and the positive comments, aren't you? Well, don't
> forget to sleep, sleeping makes ppl smarter you know ;-)




IIRC, about two or three years ago (maybe in the 2.6.10 timeframe),
there was a patch which managed to pass interactivity from one app
to another when there was a pipe or udp connection between them. This
meant that a marked-as-interactive xterm would, when blocked waiting
for an Xserver response, transfer some of its interactiveness to the
Xserver, and apparently it worked very well for desktop workloads, so
maybe adapting it for this new scheduler would be good.



--
Greetz, Antonio Vargas aka winden of network

http://network.amigascne.org/
[EMAIL PROTECTED]
[EMAIL PROTECTED]

Every day, every year
you have to work
you have to study
you have to scene.


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 15:42, Al Boldi wrote:
> Con Kolivas wrote:
> > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > And thank you! I think I know what's going on now. I think each
> > > rotation is followed by another rotation before the higher priority
> > > task is getting a look in in schedule() to even get quota and add it to
> > > the runqueue quota. I'll try a simple change to see if that helps.
> > > Patch coming up shortly.
> >
> > Can you try the following patch and see if it helps. There's also one
> > minor preemption logic fix in there that I'm planning on including.
> > Thanks!
>
> Applied on top of v0.28 mainline, and there is no difference.
>
> What's it look like on your machine?

The higher priority one always gets 6-7ms whereas the lower priority one runs 
6-7ms and then one larger perfectly bound expiration amount. Basically 
exactly as I'd expect. The higher priority task gets precisely RR_INTERVAL 
maximum latency whereas the lower priority task gets RR_INTERVAL min and full 
expiration (according to the virtual deadline) as a maximum. That's exactly 
how I intend it to work. Yes I realise that the max latency ends up being 
longer intermittently on the niced task but that's -in my opinion- perfectly 
fine as a compromise to ensure the nice 0 one always gets low latency.

Eg:
nice 0 vs nice 10

nice 0:
pid 6288, prio   0, out for    7 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms
pid 6288, prio   0, out for    6 ms

nice 10:
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for   66 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms
pid 6290, prio  10, out for    6 ms

exactly as I'd expect. If you want fixed latencies _of niced tasks_ in the 
presence of less niced tasks you will not get them with this scheduler. What 
you will get, though, is a perfectly bound relationship knowing exactly what 
the maximum latency will ever be.

Thanks for the test case. It's interesting and nice that it confirms this 
scheduler works as I expect it to.

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Al Boldi
Con Kolivas wrote:
> On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > And thank you! I think I know what's going on now. I think each rotation
> > is followed by another rotation before the higher priority task is
> > getting a look in in schedule() to even get quota and add it to the
> > runqueue quota. I'll try a simple change to see if that helps. Patch
> > coming up shortly.
>
> Can you try the following patch and see if it helps. There's also one
> minor preemption logic fix in there that I'm planning on including.
> Thanks!

Applied on top of v0.28 mainline, and there is no difference.

What's it look like on your machine?


Thanks!

--
Al



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 09:29, bert hubert wrote:
> Con,
>
> Recent kernel versions have real problems for me on the interactivity
> front, with even a simple 'make' of my C++ program (PowerDNS) causing
> Firefox to slow down to a crawl.
>
> RSDL fixed all that, the system is noticeably snappier.
>
> As a case in point, I used to notice when a compile was done because the
> system stopped being sluggish.
>
> Today, a few times, I only noticed 'make' was done because the fans of my
> computer slowed down.
>
> Thanks for the good work! I'm on 2.6.21-rc3-rsdl-0.29.

You're most welcome, and thank you for the report :)

>   Bert

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread bert hubert
Con,

Recent kernel versions have real problems for me on the interactivity front,
with even a simple 'make' of my C++ program (PowerDNS) causing Firefox to
slow down to a crawl.

RSDL fixed all that, the system is noticeably snappier.

As a case in point, I used to notice when a compile was done because the
system stopped being sluggish.

Today, a few times, I only noticed 'make' was done because the fans of my
computer slowed down.

Thanks for the good work! I'm on 2.6.21-rc3-rsdl-0.29.

Bert

-- 
http://www.PowerDNS.com  Open source, database driven DNS Software 
http://netherlabs.nl  Open and Closed source services


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 08:52, Con Kolivas wrote:
> And thank you! I think I know what's going on now. I think each rotation is
> followed by another rotation before the higher priority task is getting a
> look in in schedule() to even get quota and add it to the runqueue quota.
> I'll try a simple change to see if that helps. Patch coming up shortly.

Can you try the following patch and see if it helps. There's also one minor
preemption logic fix in there that I'm planning on including. Thanks!

---
 kernel/sched.c |   24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)

Index: linux-2.6.21-rc3-mm2/kernel/sched.c
===================================================================
--- linux-2.6.21-rc3-mm2.orig/kernel/sched.c 2007-03-12 08:47:43.0 +1100
+++ linux-2.6.21-rc3-mm2/kernel/sched.c 2007-03-12 09:10:33.0 +1100
@@ -96,10 +96,9 @@ unsigned long long __attribute__((weak))
  * provided it is not a realtime comparison.
  */
 #define TASK_PREEMPTS_CURR(p, curr) \
-   (((p)->prio < (curr)->prio) || (((p)->prio == (curr)->prio) && \
+   (((p)->prio < (curr)->prio) || (!rt_task(p) && \
((p)->static_prio < (curr)->static_prio && \
-   ((curr)->static_prio > (curr)->prio)) && \
-   !rt_task(p)))
+   ((curr)->static_prio > (curr)->prio))))
 
 /*
  * This is the time all tasks within the same priority round robin.
@@ -3323,7 +3322,7 @@ static inline void major_prio_rotation(s
  */
 static inline void rotate_runqueue_priority(struct rq *rq)
 {
-   int new_prio_level, remaining_quota;
+   int new_prio_level;
struct prio_array *array;
 
/*
@@ -3334,7 +3333,6 @@ static inline void rotate_runqueue_prior
if (unlikely(sched_find_first_bit(rq->dyn_bitmap) < rq->prio_level))
return;
 
-   remaining_quota = rq_quota(rq, rq->prio_level);
array = rq->active;
if (rq->prio_level > MAX_PRIO - 2) {
/* Major rotation required */
@@ -3368,10 +3366,11 @@ static inline void rotate_runqueue_prior
}
rq->prio_level = new_prio_level;
/*
-* While we usually rotate with the rq quota being 0, it is possible
-* to be negative so we subtract any deficit from the new level.
+* As we are merging to a prio_level that may not have anything in
+* its quota we add 1 to ensure the tasks get to run in schedule() to
+* add their quota to it.
 */
-   rq_quota(rq, new_prio_level) += remaining_quota;
+   rq_quota(rq, new_prio_level) += 1;
 }
 
 static void task_running_tick(struct rq *rq, struct task_struct *p)
@@ -3397,12 +3396,11 @@ static void task_running_tick(struct rq 
if (!--p->time_slice)
task_expired_entitlement(rq, p);
/*
-* The rq quota can become negative due to a task being queued in
-* scheduler without any quota left at that priority level. It is
-* cheaper to allow it to run till this scheduler tick and then
-* subtract it from the quota of the merged queues.
+* We only employ the deadline mechanism if we run over the quota.
+* It allows aliasing problems around the scheduler_tick to be
+* less harmful.
 */
-   if (!rt_task(p) && --rq_quota(rq, rq->prio_level) <= 0) {
+   if (!rt_task(p) && --rq_quota(rq, rq->prio_level) < 0) {
if (unlikely(p->first_time_slice))
p->first_time_slice = 0;
rotate_runqueue_priority(rq);


-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 05:11, Al Boldi wrote:
> Al Boldi wrote:
> > BTW, another way to show these hiccups would be through some kind of a
> > cpu/proc timing-tracer.  Do we have something like that?
>
> Here is something like a tracer.
>
> Original idea by Chris Friesen, thanks, from this post:
> http://marc.theaimsgroup.com/?l=linux-kernel&m=117331003029329&w=4
>
> Try attached chew.c like this:
> Boot into /bin/sh.
> Run chew in one console.
> Run nice chew in another console.
> Watch timings.
>
> Console 1: ./chew

> Console 2: nice -10 ./chew

> pid 669, prio  10, out for    5 ms
> pid 669, prio  10, out for   65 ms

One full expiration

> pid 669, prio  10, out for    6 ms
> pid 669, prio  10, out for   65 ms

again

> Console 2: nice -15 ./chew
> pid 673, prio  15, out for    6 ms
> pid 673, prio  15, out for   95 ms

again and so on..

> OTOH, mainline is completely smooth, albeit huge drop-outs.

Heh. That's not much good either is it.

> Thanks!

And thank you! I think I know what's going on now. I think each rotation is 
followed by another rotation before the higher priority task is getting a 
look in in schedule() to even get quota and add it to the runqueue quota. 
I'll try a simple change to see if that helps. Patch coming up shortly.

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Al Boldi
Al Boldi wrote:
> BTW, another way to show these hiccups would be through some kind of a
> cpu/proc timing-tracer.  Do we have something like that?

Here is something like a tracer.

Original idea by Chris Friesen, thanks, from this post:
http://marc.theaimsgroup.com/?l=linux-kernel&m=117331003029329&w=4

Try attached chew.c like this:
Boot into /bin/sh.
Run chew in one console.
Run nice chew in another console.
Watch timings.

Console 1: ./chew
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    5 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    6 ms
pid 655, prio   0, out for    5 ms

Console 2: nice -10 ./chew
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    5 ms
pid 669, prio  10, out for   65 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    5 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    5 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for   65 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms
pid 669, prio  10, out for    6 ms

Console 2: nice -15 ./chew
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for   95 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for   95 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for   95 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for   95 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for5 ms
pid 673, prio  15, out for6 ms
pid 673, prio  15, out for5 ms

Console 2: nice -18 ./chew
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for6 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for6 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for5 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for6 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for5 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for5 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for5 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for6 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for6 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for5 ms
pid 677, prio  18, out for  113 ms
pid 677, prio  18, out for5 ms

Console 2: nice -19 ./chew
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms
pid 679, prio  19, out for  119 ms

Now with negative nice:
Console 1: ./chew
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for  125 ms
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for6 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out for5 ms
pid 674, prio   0, out 




Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 08:52, Con Kolivas wrote:
> And thank you! I think I know what's going on now. I think each rotation is
> followed by another rotation before the higher priority task is getting a
> look in in schedule() to even get quota and add it to the runqueue quota.
> I'll try a simple change to see if that helps. Patch coming up shortly.

Can you try the following patch and see if it helps. There's also one minor
preemption logic fix in there that I'm planning on including. Thanks!

---
 kernel/sched.c |   24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)

Index: linux-2.6.21-rc3-mm2/kernel/sched.c
===================================================================
--- linux-2.6.21-rc3-mm2.orig/kernel/sched.c	2007-03-12 08:47:43.000000000 +1100
+++ linux-2.6.21-rc3-mm2/kernel/sched.c	2007-03-12 09:10:33.000000000 +1100
@@ -96,10 +96,9 @@ unsigned long long __attribute__((weak))
  * provided it is not a realtime comparison.
  */
 #define TASK_PREEMPTS_CURR(p, curr) \
-	(((p)->prio < (curr)->prio) || (((p)->prio == (curr)->prio) && \
+	(((p)->prio < (curr)->prio) || (!rt_task(p) && \
 	((p)->static_prio < (curr)->static_prio && \
-	((curr)->static_prio < (curr)->prio)) && \
-	!rt_task(p)))
+	((curr)->static_prio < (curr)->prio))))
 
 /*
  * This is the time all tasks within the same priority round robin.
@@ -3323,7 +3322,7 @@ static inline void major_prio_rotation(s
  */
 static inline void rotate_runqueue_priority(struct rq *rq)
 {
-   int new_prio_level, remaining_quota;
+   int new_prio_level;
struct prio_array *array;
 
/*
@@ -3334,7 +3333,6 @@ static inline void rotate_runqueue_prior
 	if (unlikely(sched_find_first_bit(rq->dyn_bitmap) < rq->prio_level))
 		return;
 
-	remaining_quota = rq_quota(rq, rq->prio_level);
 	array = rq->active;
 	if (rq->prio_level > MAX_PRIO - 2) {
/* Major rotation required */
@@ -3368,10 +3366,11 @@ static inline void rotate_runqueue_prior
}
 	rq->prio_level = new_prio_level;
 	/*
-	 * While we usually rotate with the rq quota being 0, it is possible
-	 * to be negative so we subtract any deficit from the new level.
+	 * As we are merging to a prio_level that may not have anything in
+	 * its quota we add 1 to ensure the tasks get to run in schedule() to
+	 * add their quota to it.
 	 */
-	rq_quota(rq, new_prio_level) += remaining_quota;
+	rq_quota(rq, new_prio_level) += 1;
 }
 
 static void task_running_tick(struct rq *rq, struct task_struct *p)
@@ -3397,12 +3396,11 @@ static void task_running_tick(struct rq 
 	if (!--p->time_slice)
 		task_expired_entitlement(rq, p);
 	/*
-	 * The rq quota can become negative due to a task being queued in
-	 * scheduler without any quota left at that priority level. It is
-	 * cheaper to allow it to run till this scheduler tick and then
-	 * subtract it from the quota of the merged queues.
+	 * We only employ the deadline mechanism if we run over the quota.
+	 * It allows aliasing problems around the scheduler_tick to be
+	 * less harmful.
 	 */
-	if (!rt_task(p) && --rq_quota(rq, rq->prio_level) <= 0) {
+	if (!rt_task(p) && --rq_quota(rq, rq->prio_level) < 0) {
 		if (unlikely(p->first_time_slice))
 			p->first_time_slice = 0;
 		rotate_runqueue_priority(rq);


-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread bert hubert
Con,

Recent kernel versions have real problems for me on the interactivity front,
with even a simple 'make' of my C++ program (PowerDNS) causing Firefox to
slow down to a crawl.

RSDL fixed all that, the system is noticeably snappier.

As a case in point, I used to notice when a compile was done because the
system stopped being sluggish.

Today, a few times, I only noticed 'make' was done because the fans of my
computer slowed down.

Thanks for the good work! I'm on 2.6.21-rc3-rsdl-0.29.

Bert

-- 
http://www.PowerDNS.com  Open source, database driven DNS Software 
http://netherlabs.nl  Open and Closed source services


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 09:29, bert hubert wrote:
> Con,
>
> Recent kernel versions have real problems for me on the interactivity
> front, with even a simple 'make' of my C++ program (PowerDNS) causing
> Firefox to slow down to a crawl.
>
> RSDL fixed all that, the system is noticeably snappier.
>
> As a case in point, I used to notice when a compile was done because the
> system stopped being sluggish.
>
> Today, a few times, I only noticed 'make' was done because the fans of my
> computer slowed down.
>
> Thanks for the good work! I'm on 2.6.21-rc3-rsdl-0.29.

You're most welcome, and thank you for the report :)

>   Bert

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Al Boldi
Con Kolivas wrote:
> On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > And thank you! I think I know what's going on now. I think each rotation
> > is followed by another rotation before the higher priority task is
> > getting a look in in schedule() to even get quota and add it to the
> > runqueue quota. I'll try a simple change to see if that helps. Patch
> > coming up shortly.
>
> Can you try the following patch and see if it helps. There's also one
> minor preemption logic fix in there that I'm planning on including.
> Thanks!

Applied on top of v0.28 mainline, and there is no difference.

What's it look like on your machine?


Thanks!

--
Al



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-11 Thread Con Kolivas
On Monday 12 March 2007 15:42, Al Boldi wrote:
> Con Kolivas wrote:
> > On Monday 12 March 2007 08:52, Con Kolivas wrote:
> > > And thank you! I think I know what's going on now. I think each
> > > rotation is followed by another rotation before the higher priority
> > > task is getting a look in in schedule() to even get quota and add it to
> > > the runqueue quota. I'll try a simple change to see if that helps.
> > > Patch coming up shortly.
> >
> > Can you try the following patch and see if it helps. There's also one
> > minor preemption logic fix in there that I'm planning on including.
> > Thanks!
>
> Applied on top of v0.28 mainline, and there is no difference.
>
> What's it look like on your machine?

The higher priority one always gets 6-7ms whereas the lower priority one runs 
6-7ms and then one larger perfectly bound expiration amount. Basically 
exactly as I'd expect. The higher priority task gets precisely RR_INTERVAL 
maximum latency whereas the lower priority task gets RR_INTERVAL min and full 
expiration (according to the virtual deadline) as a maximum. That's exactly 
how I intend it to work. Yes I realise that the max latency ends up being 
longer intermittently on the niced task but that's -in my opinion- perfectly 
fine as a compromise to ensure the nice 0 one always gets low latency.

Eg:
nice 0 vs nice 10

nice 0:
pid 6288, prio   0, out for7 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms
pid 6288, prio   0, out for6 ms

nice 10:
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for   66 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms
pid 6290, prio  10, out for6 ms

exactly as I'd expect. If you want fixed latencies _of niced tasks_ in the 
presence of less niced tasks you will not get them with this scheduler. What 
you will get, though, is a perfectly bound relationship knowing exactly what 
the maximum latency will ever be.

Thanks for the test case. It's interesting and nice that it confirms this 
scheduler works as I expect it to.

-- 
-ck


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-10 Thread William Lee Irwin III
William Lee Irwin III wrote:
>> Last I checked there were limits to runtime configurability centering
>> around only supporting a compiled-in set of scheduling drivers, unless
>> Peter's taken it the rest of the way without my noticing. It's unclear
>> what you have in mind in terms of dynamic extensibility. My only guess
>> would be pluggable scheduling policy/class support for individual
>> schedulers in addition to plugging the individual schedulers, except
>> I'm rather certain that Williams' code doesn't do anything with modules.

On Sat, Mar 10, 2007 at 07:47:11PM +0300, Al Boldi wrote:
> Correct, it doesn't, yet.  But do you think that PlugSched has the basic 
> infrastructure in place to support this, or would it require a complete 
> redesign/rewrite.

The piece I got done was just representing schedulers as driver-like
affairs (which, embarrassingly enough, needed lots of bugfixing), and
everyone's just been running with that and boot-time switching ever
since. Runtime switching (to module-loaded schedulers or otherwise)
needs a lot of hotplug-esque work. Scheduler class support, pluggable
or otherwise, needs per-scheduler abstracting things out along the same
lines as what was originally done for the overall schedulers
surrounding enqueueing and dequeueing so the scheduler itself only
plucks tasks out of and stuffs tasks into some sort of abstracted-out
queue or set of queues, though I did try to break things down at a low
enough level where they'd be plausible for more than just the one
driver (never distributed) I used to test the design. I dumped the
entire project long before ever getting to where modules entered the
picture, and have never touched modules otherwise, so I'm not entirely
sure what other issues would come up with those after the smoke clears
from runtime switching.

I don't plan on doing anything here myself, since the boot-time
switching etc. is likely already considered offensive enough.

The next time something comes up that bears a risk of positioning me
against the kernel's political winds, I'll just rm it or not write it
at all instead of leaving code around (or worse yet, passing it around)
to be taken up by others. It just leaves a lot of embarrassed explaining
to do when it resurfaces years later, or otherwise leaves a rather bad
taste in my mouth when NIH'd years later like other things not mentioned
here (VM code kept quiet similarly to plugsched) and everyone approves
so long as it didn't come from me.


-- wli


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-10 Thread Al Boldi
William Lee Irwin III wrote:
> William Lee Irwin III wrote:
> >> A useful exercise may also be enumerating
> >> your expectations and having those who actually work with the code
> >> describe how well those are actually met.
>
> On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
> > A runtime configurable framework that allows for dynamically extensible
> > schedulers.  PlugSched seems to be a good start.
>
> Last I checked there were limits to runtime configurability centering
> around only supporting a compiled-in set of scheduling drivers, unless
> Peter's taken it the rest of the way without my noticing. It's unclear
> what you have in mind in terms of dynamic extensibility. My only guess
> would be pluggable scheduling policy/class support for individual
> schedulers in addition to plugging the individual schedulers, except
> I'm rather certain that Williams' code doesn't do anything with modules.

Correct, it doesn't, yet.  But do you think that PlugSched has the basic 
infrastructure in place to support this, or would it require a complete 
redesign/rewrite.


Thanks!

--
Al




Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
William Lee Irwin III wrote:
>> This sort of concern is too subjective for me to have an opinion on it.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
>>> How diplomatic.

William Lee Irwin III wrote:
>> Impoliteness doesn't accomplish anything I want to do.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
> Fair enough.  But being honest about it, without flaming, may be more 
> constructive.

There was no flamage. It is literally true.


William Lee Irwin III wrote:
>> I'm more of a cooperative than competitive person, not to say that
>> flies well in Linux. There are more productive uses of time than having
>> everyone NIH'ing everyone else's code. If the result isn't so great,
>> I'd rather send them code or talk them about what needs to be done.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
> Ok, let's call it cooperative competitiveness.  You know, the kind of 
> competitiveness that drives improvements that helps everybody

This trips over ideological issues best not discussed on lkml.


William Lee Irwin III wrote:
>> The extant versions of it fall well short of Linus' challenge as well
>> as my original goals for it.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
> Do you mean Peter Williams' PlugSched-6.5-for-2.6.20?

You'd be well-served by talking to Peter Williams sometime. He's a
knowledgable individual. I should also mention that Con Kolivas did
significant amounts of work to get the early codebase he inherited
from me working before things were handed off to Peter Williams.


William Lee Irwin III wrote:
>> A useful exercise may also be enumerating
>> your expectations and having those who actually work with the code
>> describe how well those are actually met.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
> A runtime configurable framework that allows for dynamically extensible 
> schedulers.  PlugSched seems to be a good start.

Last I checked there were limits to runtime configurability centering
around only supporting a compiled-in set of scheduling drivers, unless
Peter's taken it the rest of the way without my noticing. It's unclear
what you have in mind in terms of dynamic extensibility. My only guess
would be pluggable scheduling policy/class support for individual
schedulers in addition to plugging the individual schedulers, except
I'm rather certain that Williams' code doesn't do anything with modules.


-- wli


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Al Boldi
William Lee Irwin III wrote:
> William Lee Irwin III wrote:
> >> This sort of concern is too subjective for me to have an opinion on it.
>
> On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> > How diplomatic.
>
> Impoliteness doesn't accomplish anything I want to do.

Fair enough.  But being honest about it, without flaming, may be more 
constructive.

> William Lee Irwin III wrote:
> >> My preferred sphere of operation is the Manichean domain of faster vs.
> >> slower, functionality vs. non-functionality, and the like. For me, such
> >> design concerns are like the need for a kernel to format pagetables so
> >> the x86 MMU decodes what was intended, or for a compiler to emit valid
> >> assembly instructions, or for a programmer to write C the compiler
> >> won't reject with parse errors.
>
> On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> > Sure, but I think, even from a technical point of view, competition is a
> > good thing to have.  Pluggable schedulers give us this kind of
> > competition, that forces each scheduler to refine or become obsolete. 
> > Think evolution.
>
> I'm more of a cooperative than competitive person, not to say that
> flies well in Linux. There are more productive uses of time than having
> everyone NIH'ing everyone else's code. If the result isn't so great,
> I'd rather send them code or talk them about what needs to be done.

Ok, let's call it cooperative competitiveness.  You know, the kind of
competitiveness that drives improvements that help everybody.

> Linus Torvalds wrote:
> >> And hey, you can try to prove me wrong. Code talks. So far, nobody has
> >> really ever come close.
> >> So go and code it up, and show the end result. So far, nobody who
> >> actually *does* CPU schedulers have really wanted to do it, because
> >> they all want to muck around with their own private versions of the
> >> data structures.
>
> On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> > What about PlugSched?
>
> The extant versions of it fall well short of Linus' challenge as well
> as my original goals for it.

Do you mean Peter Williams' PlugSched-6.5-for-2.6.20?

> A useful exercise may also be enumerating
> your expectations and having those who actually work with the code
> describe how well those are actually met.

A runtime-configurable framework that allows for dynamically extensible 
schedulers.  PlugSched seems to be a good start.


Thanks!

--
Al





Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
On Fri, Mar 09, 2007 at 05:18:31PM -0500, Ryan Hope wrote:
> from what I understood, there is a performance loss in plugsched
> schedulers because they have to share code
> even if pluggable schedulers is not a viable option, being able to
> choose which one was built into the kernel would be easy (only takes a
> few ifdefs), i too think competition would be good

Neither sharing code nor data structures is strictly necessary for a
pluggable scheduler. For instance, backing out per-cpu runqueues in
favor of a single locklessly-accessed queue or similar per-leaf-domain
queues is one potential design alternative (never mind difficulties
with ->cpus_allowed) explicitly considered for the sake of sched_yield()
semantics on SMP, among other concerns. What plugsched originally did
was to provide a set of driver functions and allow each scheduler to
play with its private data declared static in separate C files in what
were later intended to become kernel modules. As far as I know, runtime
switchover code to complement all that has never been written in such a
form. One possibility abandoned early-on was to have multiple schedulers
simultaneously active to manage different portions of the system with
different policies, in no small part due to the difficulty of load
balancing between the partitions associated with the different schedulers.
Some misguided attempts were made to export the lowest-level API possible,
which I rather quickly deemed a mistake, but they still largely held to the
design considerations I described above.


-- wli


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread David Lang

On Fri, 9 Mar 2007, Al Boldi wrote:

> William Lee Irwin III wrote:
> >> My preferred sphere of operation is the Manichean domain of faster vs.
> >> slower, functionality vs. non-functionality, and the like. For me, such
> >> design concerns are like the need for a kernel to format pagetables so
> >> the x86 MMU decodes what was intended, or for a compiler to emit valid
> >> assembly instructions, or for a programmer to write C the compiler
> >> won't reject with parse errors.
>
> Sure, but I think, even from a technical point of view, competition is a good
> thing to have.  Pluggable schedulers give us this kind of competition, that
> forces each scheduler to refine or become obsolete.  Think evolution.

The point Linus is making is that with pluggable schedulers there isn't 
competition between them; the various developer teams would go off in their 
own direction, and any drawbacks to their scheduler could be answered with 
"that's not what we are good at, use a different scheduler", with the very 
real possibility that a person could get this answer from ALL schedulers, 
leaving them with nothing good to use.


David Lang


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
William Lee Irwin III wrote:
>> The short translation of my message for you is "Linus, please don't
>> LART me too hard."

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> Right.

Given where the code originally came from, I've got bullets to dodge.


William Lee Irwin III wrote:
>> This sort of concern is too subjective for me to have an opinion on it.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> How diplomatic.

Impoliteness doesn't accomplish anything I want to do.


William Lee Irwin III wrote:
>> My preferred sphere of operation is the Manichean domain of faster vs.
>> slower, functionality vs. non-functionality, and the like. For me, such
>> design concerns are like the need for a kernel to format pagetables so
>> the x86 MMU decodes what was intended, or for a compiler to emit valid
>> assembly instructions, or for a programmer to write C the compiler
>> won't reject with parse errors.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> Sure, but I think, even from a technical point of view, competition is a good 
> thing to have.  Pluggable schedulers give us this kind of competition, that 
> forces each scheduler to refine or become obsolete.  Think evolution.

I'm more of a cooperative than competitive person, not to say that
flies well in Linux. There are more productive uses of time than having
everyone NIH'ing everyone else's code. If the result isn't so great,
I'd rather send them code or talk with them about what needs to be done.


William Lee Irwin III wrote:
>> If Linus, akpm, et al object to the
>> design, then invalid output was produced. Please refer to Linus, akpm,
>> et al for these sorts of design concerns.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> Point taken.

Decisions with respect to overall kernel design are made from well
above my level. Similarly with coding style, release management, code
directory hierarchy, nomenclature, and more. These things are Linus'
and devolved to those who go along with him on those fronts. If I
made those decisions, you might as well call it "wlix" not "Linux."


Linus Torvalds wrote:
>> And hey, you can try to prove me wrong. Code talks. So far, nobody has
>> really ever come close.
>> So go and code it up, and show the end result. So far, nobody who actually
>> *does* CPU schedulers have really wanted to do it, because they all want
>> to muck around with their own private versions of the data structures.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
> What about PlugSched?

The extant versions of it fall well short of Linus' challenge as well
as my original goals for it. A useful exercise may also be enumerating
your expectations and having those who actually work with the code
describe how well those are actually met.


-- wli


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Ryan Hope

From what I understood, there is a performance loss with plugsched
schedulers because they have to share code.

Even if pluggable schedulers are not a viable option, being able to
choose which one is built into the kernel would be easy (it only takes a
few ifdefs); I too think competition would be good.

On 3/9/07, Al Boldi <[EMAIL PROTECTED]> wrote:

William Lee Irwin III wrote:
> William Lee Irwin III wrote:
> >> I consider policy issues to be hopeless political quagmires and
> >> therefore stick to mechanism. So even though I may have started the
> >> code in question, I have little or nothing to say about that sort of
> >> use for it.
> >> There's my longwinded excuse for having originated that tidbit of code.
>
> On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
> > I've no idea what both of you are talking about.
>
> The short translation of my message for you is "Linus, please don't
> LART me too hard."

Right.

> On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
> > How can giving people the freedom of choice be in any way
> > counter-productive?
>
> This sort of concern is too subjective for me to have an opinion on it.

How diplomatic.

> My preferred sphere of operation is the Manichean domain of faster vs.
> slower, functionality vs. non-functionality, and the like. For me, such
> design concerns are like the need for a kernel to format pagetables so
> the x86 MMU decodes what was intended, or for a compiler to emit valid
> assembly instructions, or for a programmer to write C the compiler
> won't reject with parse errors.

Sure, but I think, even from a technical point of view, competition is a good
thing to have.  Pluggable schedulers give us this kind of competition, that
forces each scheduler to refine or become obsolete.  Think evolution.

> If Linus, akpm, et al object to the
> design, then invalid output was produced. Please refer to Linus, akpm,
> et al for these sorts of design concerns.

Point taken.

Linus Torvalds wrote:
> And hey, you can try to prove me wrong. Code talks. So far, nobody has
> really ever come close.
>
> So go and code it up, and show the end result. So far, nobody who actually
> *does* CPU schedulers have really wanted to do it, because they all want
> to muck around with their own private versions of the data structures.

What about PlugSched?


Thanks!

--
Al



Re: [ck] [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread Rodney Gordon II
On Sunday 04 March 2007 01:00, Con Kolivas wrote:
> This message is to announce the first general public release of the
> "Rotating Staircase DeadLine" cpu scheduler.
>
> Based on previous work from the staircase cpu scheduler I set out to
> design, from scratch, a new scheduling policy design which satisfies every
> requirement for SCHED_NORMAL (otherwise known as SCHED_OTHER) task
> management.
>

Con, you've really outdone yourself this time ! :D

As a long time user of the -ck patchset, RSDL is a welcome change, and a great 
piece of code to play around with, and USE!

Booted up on my system perfectly, Pentium-D 830 3GHz, 1.5GB RAM.

No problems whatsoever so far, using 0.26. I can launch a bunch of encode 
jobs, even in SCHED_NORMAL, and still have low latency on my desktop (I know 
it's not low-latency-specific code, but it works very well).

I guess all I can say is... wow. This code isn't "prime time" ready yet, but 
it can be, and would be a great addition to mainline.

Hell, a little tuning and merging this with a few current ck patches could 
make a damn fine kernel, and probably beat out the original staircase in 
desktops. :)

Keep up the good work !
-r

-- 
Rodney "meff" Gordon II -*- [EMAIL PROTECTED]
Systems Administrator / Coder Geek -*- Open yourself to OpenSource


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Al Boldi
William Lee Irwin III wrote:
> William Lee Irwin III wrote:
> >> I consider policy issues to be hopeless political quagmires and
> >> therefore stick to mechanism. So even though I may have started the
> >> code in question, I have little or nothing to say about that sort of
> >> use for it.
> >> There's my longwinded excuse for having originated that tidbit of code.
>
> On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
> > I've no idea what both of you are talking about.
>
> The short translation of my message for you is "Linus, please don't
> LART me too hard."

Right.

> On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
> > How can giving people the freedom of choice be in any way
> > counter-productive?
>
> This sort of concern is too subjective for me to have an opinion on it.

How diplomatic.

> My preferred sphere of operation is the Manichean domain of faster vs.
> slower, functionality vs. non-functionality, and the like. For me, such
> design concerns are like the need for a kernel to format pagetables so
> the x86 MMU decodes what was intended, or for a compiler to emit valid
> assembly instructions, or for a programmer to write C the compiler
> won't reject with parse errors.

Sure, but I think, even from a technical point of view, competition is a good 
thing to have.  Pluggable schedulers give us this kind of competition, that 
forces each scheduler to refine or become obsolete.  Think evolution.

> If Linus, akpm, et al object to the
> design, then invalid output was produced. Please refer to Linus, akpm,
> et al for these sorts of design concerns.

Point taken.

Linus Torvalds wrote:
> And hey, you can try to prove me wrong. Code talks. So far, nobody has
> really ever come close.
>
> So go and code it up, and show the end result. So far, nobody who actually
> *does* CPU schedulers have really wanted to do it, because they all want
> to muck around with their own private versions of the data structures.

What about PlugSched?


Thanks!

--
Al



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread Linus Torvalds


On Fri, 9 Mar 2007, Bill Davidsen wrote:
>
> But it IS okay for people to make special-case schedulers. Because it's MY
> machine,

Sure.

Go wild. It's what open-source is all about.

I'm not stopping you.

I'm just not merging code that makes the scheduler unreadable, even hard 
to understand, and slows things down. I'm also not merging code that sets 
some scheduler policy limits by having specific "pluggable scheduler 
interfaces".

Different schedulers tend to need different data structures in some *very* 
core data, like the per-cpu run-queues, in "struct task_struct", in 
"struct thread_struct" etc etc. Those are some of *the* most low-level 
structures in the kernel. And those are things that get set up to have as 
little cache footprint as possible, etc.

IO schedulers have basically none of those issues. Once you need to do IO, 
you'll happily use a few indirect pointers; it's not going to show up 
anywhere. But in the scheduler, 10 cycles here and there will be a big 
deal.

And hey, you can try to prove me wrong. Code talks. So far, nobody has 
really ever come close.

So go and code it up, and show the end result. So far, nobody who actually 
*does* CPU schedulers has really wanted to do it, because they all want 
to muck around with their own private versions of the data structures.

Linus


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread Bill Davidsen

Linus Torvalds wrote:
> On Thu, 8 Mar 2007, Bill Davidsen wrote:
> > Please, could you now rethink pluggable schedulers as well? Even if one
> > had to be chosen at boot time and couldn't be changed thereafter, it
> > would still allow a few new thoughts to be included.
>
> No. Really.
>
> I absolutely *detest* pluggable schedulers. They have a huge downside: 
> they allow people to think that it's ok to make special-case schedulers. 

But it IS okay for people to make special-case schedulers. Because it's 
MY machine, and how it behaves under mixed load is not a technical 
issue, it's a POLICY issue, and therefore the only way you can allow the 
admin to implement that policy is to either provide several schedulers 
or to provide all sorts of tunable knobs. And by having a few schedulers 
which have been heavily tested and reviewed, you can define the policy 
the scheduler implements and document it. Instead of people writing 
their own, or hacking the code, they could have a few well-tested 
choices, with known policy goals.

> And I simply very fundamentally disagree.
>
> If you want to play with a scheduler of your own, go wild. It's easy 
> (well, you'll find out that getting good results isn't, but that's a 
> different thing). But actual pluggable schedulers just cause people to 
> think that "oh, the scheduler performs badly under circumstance X, so 
> let's tell people to use special scheduler Y for that case".

And has that been a problem with io schedulers? I don't see any vast 
proliferation of them, I don't see contentious exchanges on LKML, or 
people asking how to get yet another into mainline. In fact, I would say 
that the io scheduler situation is as right as anything can be: choices 
for special cases, and a lack of requests for anything else.

> And CPU scheduling really isn't that complicated. It's *way* simpler than 
> IO scheduling. There simply is *no*excuse* for not trying to do it well 
> enough for all cases, or for having special-case stuff.

This supposes that the desired behavior, the policy, is the same on all 
machines, or that there is currently a way to set the target. If I want 
interactive response with no consideration to batch (and can't trust 
users to use nice), I want one policy. If I want a compromise, the 
current scheduler or RSDL are candidates, but they do different things.

> But even IO scheduling actually ends up being largely the same. Yes, we 
> have pluggable schedulers, and we even allow switching them, but in the 
> end, we don't want people to actually do it. It's much better to have a 
> scheduler that is "good enough" than it is to have five that are "perfect" 
> for five particular cases.

We not only have multiple io schedulers, we have many tunable io 
parameters, all of which allow people to make their system behave the 
way they think is best. It isn't causing complaint, confusion, or 
instability. We have many people requesting a different scheduler, so 
obviously what we have isn't "good enough", and I doubt any one scheduler 
can be, given that the target behavior is driven by non-technical choices.


--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
William Lee Irwin III wrote:
>> I consider policy issues to be hopeless political quagmires and
>> therefore stick to mechanism. So even though I may have started the
>> code in question, I have little or nothing to say about that sort of
>> use for it.
>> There's my longwinded excuse for having originated that tidbit of code.

On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
> I've no idea what both of you are talking about.

The short translation of my message for you is "Linus, please don't
LART me too hard."


On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
> How can giving people the freedom of choice be in any way counter-productive?

This sort of concern is too subjective for me to have an opinion on it.
My preferred sphere of operation is the Manichean domain of faster vs.
slower, functionality vs. non-functionality, and the like. For me, such
design concerns are like the need for a kernel to format pagetables so
the x86 MMU decodes what was intended, or for a compiler to emit valid
assembly instructions, or for a programmer to write C the compiler
won't reject with parse errors. If Linus, akpm, et al object to the
design, then invalid output was produced. Please refer to Linus, akpm,
et al for these sorts of design concerns.


-- wli


Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Al Boldi
William Lee Irwin III wrote:
> On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
> > No. Really.
> > I absolutely *detest* pluggable schedulers. They have a huge downside:
> > they allow people to think that it's ok to make special-case schedulers.
> > And I simply very fundamentally disagree.
> > If you want to play with a scheduler of your own, go wild. It's easy
> > (well, you'll find out that getting good results isn't, but that's a
> > different thing). But actual pluggable schedulers just cause people to
> > think that "oh, the scheduler performs badly under circumstance X, so
> > let's tell people to use special scheduler Y for that case".
> > And CPU scheduling really isn't that complicated. It's *way* simpler
> > than IO scheduling. There simply is *no*excuse* for not trying to do it
> > well enough for all cases, or for having special-case stuff.
> > But even IO scheduling actually ends up being largely the same. Yes, we
> > have pluggable schedulers, and we even allow switching them, but in the
> > end, we don't want people to actually do it. It's much better to have a
> > scheduler that is "good enough" than it is to have five that are
> > "perfect" for five particular cases.
>
> For the most part I was trying to assist development, but ran out of
> patience and interest before getting much of anywhere. The basic idea
> was to be able to fork over a kernel to a benchmark team and have them
> run head-to-head comparisons, switching schedulers on the fly,
> particularly on machines that took a very long time to boot. The
> concept ideally involved making observations and loading fresh
> schedulers based on them as kernel modules on the fly. I was more
> interested in rapid incremental changes than total rewrites, though I
> considered total rewrites to be tests of adequacy, since somewhere in
> the back of my mind I had thoughts about experimenting with gang
> scheduling policies on those machines taking very long times to boot.
>
> What actually got written, the result of it being picked up by others,
> and how it's getting used are all rather far from what I had in mind,
> not that I'm offended in the least by any of it. I also had little or
> no interest in mainline for it. The intention was more on the order of
> an elaborate instrumentation patch for systems where the time required
> to reboot is prohibitive and the duration of access strictly limited.
> (In fact, downward-revised estimates of the likelihood of such access
> also factored into the abandonment of the codebase.)
>
> I consider policy issues to be hopeless political quagmires and
> therefore stick to mechanism. So even though I may have started the
> code in question, I have little or nothing to say about that sort of
> use for it.
>
> There's my longwinded excuse for having originated that tidbit of code.

I've no idea what both of you are talking about.

How can giving people the freedom of choice be in any way counter-productive?


Thanks!

--
Al



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread William Lee Irwin III
On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
> No. Really.
> I absolutely *detest* pluggable schedulers. They have a huge downside: 
> they allow people to think that it's ok to make special-case schedulers. 
> And I simply very fundamentally disagree.
> If you want to play with a scheduler of your own, go wild. It's easy 
> (well, you'll find out that getting good results isn't, but that's a 
> different thing). But actual pluggable schedulers just cause people to 
> think that "oh, the scheduler performs badly under circumstance X, so 
> let's tell people to use special scheduler Y for that case".
> And CPU scheduling really isn't that complicated. It's *way* simpler than 
> IO scheduling. There simply is *no*excuse* for not trying to do it well 
> enough for all cases, or for having special-case stuff.
> But even IO scheduling actually ends up being largely the same. Yes, we 
> have pluggable schedulers, and we even allow switching them, but in the 
> end, we don't want people to actually do it. It's much better to have a 
> scheduler that is "good enough" than it is to have five that are "perfect" 
> for five particular cases.

For the most part I was trying to assist development, but ran out of
patience and interest before getting much of anywhere. The basic idea
was to be able to fork over a kernel to a benchmark team and have them
run head-to-head comparisons, switching schedulers on the fly,
particularly on machines that took a very long time to boot. The
concept ideally involved making observations and loading fresh
schedulers based on them as kernel modules on the fly. I was more
interested in rapid incremental changes than total rewrites, though I
considered total rewrites to be tests of adequacy, since somewhere in
the back of my mind I had thoughts about experimenting with gang
scheduling policies on those machines taking very long times to boot.

What actually got written, the result of it being picked up by others,
and how it's getting used are all rather far from what I had in mind,
not that I'm offended in the least by any of it. I also had little or
no interest in mainline for it. The intention was more on the order of
an elaborate instrumentation patch for systems where the time required
to reboot is prohibitive and the duration of access strictly limited.
(In fact, downward-revised estimates of the likelihood of such access
also factored into the abandonment of the codebase.)

I consider policy issues to be hopeless political quagmires and
therefore stick to mechanism. So even though I may have started the
code in question, I have little or nothing to say about that sort of
use for it.

There's my longwinded excuse for having originated that tidbit of code.


-- wli


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread William Lee Irwin III
On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
 No. Really.
 I absolutely *detest* pluggable schedulers. They have a huge downside: 
 they allow people to think that it's ok to make special-case schedulers. 
 And I simply very fundamentally disagree.
 If you want to play with a scheduler of your own, go wild. It's easy 
 (well, you'll find out that getting good results isn't, but that's a 
 different thing). But actual pluggable schedulers just cause people to 
 think that oh, the scheduler performs badly under circumstance X, so 
 let's tell people to use special scheduler Y for that case.
 And CPU scheduling really isn't that complicated. It's *way* simpler than 
 IO scheduling. There simply is *no*excuse* for not trying to do it well 
 enough for all cases, or for having special-case stuff.
 But even IO scheduling actually ends up being largely the same. Yes, we 
 have pluggable schedulers, and we even allow switching them, but in the 
 end, we don't want people to actually do it. It's much better to have a 
 scheduler that is good enough than it is to have five that are perfect 
 for five particular cases.

For the most part I was trying to assist development, but ran out of
patience and interest before getting much of anywhere. The basic idea
was to be able to fork over a kernel to a benchmark team and have them
run head-to-head comparisons, switching schedulers on the fly,
particularly on machines that took a very long time to boot. The
concept ideally involved making observations and loading fresh
schedulers based on them as kernel modules on the fly. I was more
interested in rapid incremental changes than total rewrites, though I
considered total rewrites to be tests of adequacy, since somewhere in
the back of my mind I had thoughts about experimenting with gang
scheduling policies on those machines taking very long times to boot.

What actually got written, the result of it being picked up by others,
and how it's getting used are all rather far from what I had in mind,
not that I'm offended in the least by any of it. I also had little or
no interest in mainline for it. The intention was more on the order of
an elaborate instrumentation patch for systems where the time required
to reboot is prohibitive and the duration of access strictly limited.
(In fact, downward-revised estimates of the likelihood of such access
also factored into the abandonment of the codebase.)

I consider policy issues to be hopeless political quagmires and
therefore stick to mechanism. So even though I may have started the
code in question, I have little or nothing to say about that sort of
use for it.

There's my longwinded excuse for having originated that tidbit of code.


-- wli
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Al Boldi
William Lee Irwin III wrote:
 On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
  No. Really.
  I absolutely *detest* pluggable schedulers. They have a huge downside:
  they allow people to think that it's ok to make special-case schedulers.
  And I simply very fundamentally disagree.
  If you want to play with a scheduler of your own, go wild. It's easy
  (well, you'll find out that getting good results isn't, but that's a
  different thing). But actual pluggable schedulers just cause people to
  think that "oh, the scheduler performs badly under circumstance X, so
  let's tell people to use special scheduler Y for that case".
  And CPU scheduling really isn't that complicated. It's *way* simpler
  than IO scheduling. There simply is *no*excuse* for not trying to do it
  well enough for all cases, or for having special-case stuff.
  But even IO scheduling actually ends up being largely the same. Yes, we
  have pluggable schedulers, and we even allow switching them, but in the
  end, we don't want people to actually do it. It's much better to have a
  scheduler that is "good enough" than it is to have five that are
  "perfect" for five particular cases.

 For the most part I was trying to assist development, but ran out of
 patience and interest before getting much of anywhere. The basic idea
 was to be able to fork over a kernel to a benchmark team and have them
 run head-to-head comparisons, switching schedulers on the fly,
 particularly on machines that took a very long time to boot. The
 concept ideally involved making observations and loading fresh
 schedulers based on them as kernel modules on the fly. I was more
 interested in rapid incremental changes than total rewrites, though I
 considered total rewrites to be tests of adequacy, since somewhere in
 the back of my mind I had thoughts about experimenting with gang
 scheduling policies on those machines taking very long times to boot.

 What actually got written, the result of it being picked up by others,
 and how it's getting used are all rather far from what I had in mind,
 not that I'm offended in the least by any of it. I also had little or
 no interest in mainline for it. The intention was more on the order of
 an elaborate instrumentation patch for systems where the time required
 to reboot is prohibitive and the duration of access strictly limited.
 (In fact, downward-revised estimates of the likelihood of such access
 also factored into the abandonment of the codebase.)

 I consider policy issues to be hopeless political quagmires and
 therefore stick to mechanism. So even though I may have started the
 code in question, I have little or nothing to say about that sort of
 use for it.

 There's my longwinded excuse for having originated that tidbit of code.

I've no idea what both of you are talking about.

How can giving people the freedom of choice be in any way counter-productive?


Thanks!

--
Al



Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
William Lee Irwin III wrote:
 I consider policy issues to be hopeless political quagmires and
 therefore stick to mechanism. So even though I may have started the
 code in question, I have little or nothing to say about that sort of
 use for it.
 There's my longwinded excuse for having originated that tidbit of code.

On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
 I've no idea what both of you are talking about.

The short translation of my message for you is "Linus, please don't
LART me too hard."


On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
 How can giving people the freedom of choice be in any way counter-productive?

This sort of concern is too subjective for me to have an opinion on it.
My preferred sphere of operation is the Manichean domain of faster vs.
slower, functionality vs. non-functionality, and the like. For me, such
design concerns are like the need for a kernel to format pagetables so
the x86 MMU decodes what was intended, or for a compiler to emit valid
assembly instructions, or for a programmer to write C the compiler
won't reject with parse errors. If Linus, akpm, et al object to the
design, then invalid output was produced. Please refer to Linus, akpm,
et al for these sorts of design concerns.


-- wli


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread Bill Davidsen

Linus Torvalds wrote:

On Thu, 8 Mar 2007, Bill Davidsen wrote:
  

Please, could you now rethink pluggable schedulers as well? Even if one had to
be chosen at boot time and couldn't be changed thereafter, it would still allow
a few new thoughts to be included.



No. Really.

I absolutely *detest* pluggable schedulers. They have a huge downside: 
they allow people to think that it's ok to make special-case schedulers. 
  
But it IS okay for people to make special-case schedulers. Because it's 
MY machine, and how it behaves under mixed load is not a technical 
issue, it's a POLICY issue, and therefore the only way you can allow the 
admin to implement that policy is to either provide several schedulers 
or to provide all sorts of tunable knobs. And by having a few schedulers 
which have been heavily tested and reviewed, you can define the policy 
the scheduler implements and document it. Instead of people writing 
their own, or hacking the code, they could have a few well-tested 
choices, with known policy goals.

And I simply very fundamentally disagree.

If you want to play with a scheduler of your own, go wild. It's easy 
(well, you'll find out that getting good results isn't, but that's a 
different thing). But actual pluggable schedulers just cause people to 
think that "oh, the scheduler performs badly under circumstance X, so 
let's tell people to use special scheduler Y for that case".
  
And has that been a problem with io schedulers? I don't see any vast 
proliferation of them, I don't see contentious exchanges on LKML, or 
people asking how to get yet another into mainline. In fact, I would say 
that the io scheduler situation is as right as anything can be, choices 
for special cases, lack of requests for something else.
And CPU scheduling really isn't that complicated. It's *way* simpler than 
IO scheduling. There simply is *no*excuse* for not trying to do it well 
enough for all cases, or for having special-case stuff.
  
This supposes that the desired behavior, the policy, is the same on all 
machines or that there is currently a way to set the target. If I want 
interactive response with no consideration to batch (and can't trust 
users to use nice), I want one policy. If I want a compromise, the 
current scheduler or RSDL are candidates, but they do different things.
But even IO scheduling actually ends up being largely the same. Yes, we 
have pluggable schedulers, and we even allow switching them, but in the 
end, we don't want people to actually do it. It's much better to have a 
scheduler that is "good enough" than it is to have five that are "perfect" 
for five particular cases.
  
We not only have multiple io schedulers, we have many tunable io 
parameters, all of which allow people to make their system behave the 
way they think is best. It isn't causing complaint, confusion, or 
instability. We have many people requesting a different scheduler, so 
obviously what we have isn't good enough and I doubt any one scheduler 
can be, given that the target behavior is driven by non-technical choices.


--
bill davidsen [EMAIL PROTECTED]
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread Linus Torvalds


On Fri, 9 Mar 2007, Bill Davidsen wrote:

 But it IS okay for people to make special-case schedulers. Because it's MY
 machine,

Sure.

Go wild. It's what open-source is all about.

I'm not stopping you.

I'm just not merging code that makes the scheduler unreadable, even hard 
to understand, and slows things down. I'm also not merging code that sets 
some scheduler policy limits by having specific pluggable scheduler 
interfaces.

Different schedulers tend to need different data structures in some *very* 
core data, like the per-cpu run-queues, in struct task_struct, in 
struct thread_struct etc etc. Those are some of *the* most low-level 
structures in the kernel. And those are things that get set up to have as 
little cache footprint as possible, etc.

IO schedulers have basically none of those issues. Once you need to do IO, 
you'll happily use a few indirect pointers; it's not going to show up 
anywhere. But in the scheduler, 10 cycles here and there will be a big 
deal.

And hey, you can try to prove me wrong. Code talks. So far, nobody has 
really ever come close.

So go and code it up, and show the end result. So far, nobody who actually 
*does* CPU schedulers has really wanted to do it, because they all want 
to muck around with their own private versions of the data structures.

Linus


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Al Boldi
William Lee Irwin III wrote:
 William Lee Irwin III wrote:
  I consider policy issues to be hopeless political quagmires and
  therefore stick to mechanism. So even though I may have started the
  code in question, I have little or nothing to say about that sort of
  use for it.
  There's my longwinded excuse for having originated that tidbit of code.

 On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
  I've no idea what both of you are talking about.

 The short translation of my message for you is Linus, please don't
 LART me too hard.

Right.

 On Fri, Mar 09, 2007 at 04:25:55PM +0300, Al Boldi wrote:
  How can giving people the freedom of choice be in any way
  counter-productive?

 This sort of concern is too subjective for me to have an opinion on it.

How diplomatic.

 My preferred sphere of operation is the Manichean domain of faster vs.
 slower, functionality vs. non-functionality, and the like. For me, such
 design concerns are like the need for a kernel to format pagetables so
 the x86 MMU decodes what was intended, or for a compiler to emit valid
 assembly instructions, or for a programmer to write C the compiler
 won't reject with parse errors.

Sure, but I think, even from a technical point of view, competition is a good 
thing to have.  Pluggable schedulers give us this kind of competition, which 
forces each scheduler to refine or become obsolete.  Think evolution.

 If Linus, akpm, et al object to the
 design, then invalid output was produced. Please refer to Linus, akpm,
 et al for these sorts of design concerns.

Point taken.

Linus Torvalds wrote:
 And hey, you can try to prove me wrong. Code talks. So far, nobody has
 really ever come close.

 So go and code it up, and show the end result. So far, nobody who actually
 *does* CPU schedulers have really wanted to do it, because they all want
 to muck around with their own private versions of the data structures.

What about PlugSched?


Thanks!

--
Al



Re: [ck] [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-09 Thread Rodney Gordon II
On Sunday 04 March 2007 01:00, Con Kolivas wrote:
 This message is to announce the first general public release of the
 Rotating Staircase DeadLine cpu scheduler.

 Based on previous work from the staircase cpu scheduler I set out to
 design, from scratch, a new scheduling policy design which satisfies every
 requirement for SCHED_NORMAL (otherwise known as SCHED_OTHER) task
 management.


Con, you've really outdone yourself this time ! :D

As a long time user of the -ck patchset, RSDL is a welcome change, and a great 
piece of code to play around with, and USE!

Booted up on my system perfectly, Pentium-D 830 3GHz, 1.5GB RAM.

No problems whatsoever so far, using 0.26. I can launch up a bunch of encode 
jobs, in SCHED_NORMAL even, and still have low latency on my desktop (I know 
it's not low latency _specific_ code, but it works very well).

I guess all I can say is... wow. This code isn't prime-time ready yet, but 
it can be, and would be a great addition to mainline.

Hell, a little tuning and merging this with a few current ck patches could 
make a damn fine kernel, and probably beat out the original staircase in 
desktops. :)

Keep up the good work !
-r

-- 
Rodney meff Gordon II -*- [EMAIL PROTECTED]
Systems Administrator / Coder Geek -*- Open yourself to OpenSource


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Ryan Hope

From what I understood, there is a performance loss with plugsched
schedulers because they have to share code.

Even if pluggable schedulers are not a viable option, being able to
choose which one is built into the kernel would be easy (it only takes a
few ifdefs). I too think competition would be good.

On 3/9/07, Al Boldi [EMAIL PROTECTED] wrote:
[...]


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
William Lee Irwin III wrote:
 The short translation of my message for you is Linus, please don't
 LART me too hard.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
 Right.

Given where the code originally came from, I've got bullets to dodge.


William Lee Irwin III wrote:
 This sort of concern is too subjective for me to have an opinion on it.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
 How diplomatic.

Impoliteness doesn't accomplish anything I want to do.


William Lee Irwin III wrote:
 My preferred sphere of operation is the Manichean domain of faster vs.
 slower, functionality vs. non-functionality, and the like. For me, such
 design concerns are like the need for a kernel to format pagetables so
 the x86 MMU decodes what was intended, or for a compiler to emit valid
 assembly instructions, or for a programmer to write C the compiler
 won't reject with parse errors.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
 Sure, but I think, even from a technical point of view, competition is a good 
 thing to have.  Pluggable schedulers give us this kind of competition, that 
 forces each scheduler to refine or become obsolete.  Think evolution.

I'm more of a cooperative than competitive person, not to say that
flies well in Linux. There are more productive uses of time than having
everyone NIH'ing everyone else's code. If the result isn't so great,
I'd rather send them code or talk to them about what needs to be done.


William Lee Irwin III wrote:
 If Linus, akpm, et al object to the
 design, then invalid output was produced. Please refer to Linus, akpm,
 et al for these sorts of design concerns.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
 Point taken.

Decisions with respect to overall kernel design are made from well
above my level. Similarly with coding style, release management, code
directory hierarchy, nomenclature, and more. These things are Linus'
and devolved to those who go along with him on those fronts. If I
made those decisions, you might as well call it wlix not Linux.


Linus Torvalds wrote:
 And hey, you can try to prove me wrong. Code talks. So far, nobody has
 really ever come close.
 So go and code it up, and show the end result. So far, nobody who actually
 *does* CPU schedulers have really wanted to do it, because they all want
 to muck around with their own private versions of the data structures.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
 What about PlugSched?

The extant versions of it fall well short of Linus' challenge as well
as my original goals for it. A useful exercise may also be enumerating
your expectations and having those who actually work with the code
describe how well those are actually met.


-- wli


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread David Lang

On Fri, 9 Mar 2007, Al Boldi wrote:




My preferred sphere of operation is the Manichean domain of faster vs.
slower, functionality vs. non-functionality, and the like. For me, such
design concerns are like the need for a kernel to format pagetables so
the x86 MMU decodes what was intended, or for a compiler to emit valid
assembly instructions, or for a programmer to write C the compiler
won't reject with parse errors.


Sure, but I think, even from a technical point of view, competition is a good
thing to have.  Pluggable schedulers give us this kind of competition, that
forces each scheduler to refine or become obsolete.  Think evolution.


The point Linus is making is that with pluggable schedulers there isn't 
competition between them; the various developer teams would go off in their own 
direction, and any drawbacks to their scheduler could be answered with "that's 
not what we are good at, use a different scheduler", with the very real 
possibility that a person could get this answer from ALL schedulers, leaving 
them with nothing good to use.


David Lang


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
On Fri, Mar 09, 2007 at 05:18:31PM -0500, Ryan Hope wrote:
 from what I understood, there is a performance loss in plugsched
 schedulers because they have to share code
 even if pluggable schedulers is not a viable option, being able to
 choose which one was built into the kernel would be easy (only takes a
 few ifdefs), i too think competition would be good

Neither sharing code nor data structures is strictly necessary for a
pluggable scheduler. For instance, backing out per-cpu runqueues in
favor of a single locklessly-accessed queue or similar per-leaf-domain
queues is one potential design alternative (never mind difficulties
with ->cpus_allowed) explicitly considered for the sake of sched_yield()
semantics on SMP, among other concerns. What plugsched originally did
was to provide a set of driver functions and allow each scheduler to
play with its private data declared static in separate C files in what
were later intended to become kernel modules. As far as I know, runtime
switchover code to complement all that has never been written in such a
form. One possibility abandoned early-on was to have multiple schedulers
simultaneously active to manage different portions of the system with
different policies, in no small part due to the difficulty of load
balancing between the partitions associated with the different schedulers.
Some misguided attempts were made to export the lowest-level API possible,
which I rather quickly deemed a mistake, but they largely held to such
design considerations as I described above.


-- wli


Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread Al Boldi
William Lee Irwin III wrote:
 William Lee Irwin III wrote:
  This sort of concern is too subjective for me to have an opinion on it.

 On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
  How diplomatic.

 Impoliteness doesn't accomplish anything I want to do.

Fair enough.  But being honest about it, without flaming, may be more 
constructive.

 William Lee Irwin III wrote:
  My preferred sphere of operation is the Manichean domain of faster vs.
  slower, functionality vs. non-functionality, and the like. For me, such
  design concerns are like the need for a kernel to format pagetables so
  the x86 MMU decodes what was intended, or for a compiler to emit valid
  assembly instructions, or for a programmer to write C the compiler
  won't reject with parse errors.

 On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
  Sure, but I think, even from a technical point of view, competition is a
  good thing to have.  Pluggable schedulers give us this kind of
  competition, that forces each scheduler to refine or become obsolete. 
  Think evolution.

 I'm more of a cooperative than competitive person, not to say that
 flies well in Linux. There are more productive uses of time than having
 everyone NIH'ing everyone else's code. If the result isn't so great,
 I'd rather send them code or talk them about what needs to be done.

Ok, let's call it cooperative competitiveness.  You know, the kind of 
competitiveness that drives improvements that help everybody.
 Linus Torvalds wrote:
  And hey, you can try to prove me wrong. Code talks. So far, nobody has
  really ever come close.
  So go and code it up, and show the end result. So far, nobody who
  actually *does* CPU schedulers have really wanted to do it, because
  they all want to muck around with their own private versions of the
  data structures.

 On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
  What about PlugSched?

 The extant versions of it fall well short of Linus' challenge as well
 as my original goals for it.

Do you mean Peter Williams' PlugSched-6.5-for-2.6.20?

 A useful exercise may also be enumerating
 your expectations and having those who actually work with the code
 describe how well those are actually met.

A runtime configurable framework that allows for dynamically extensible 
schedulers.  PlugSched seems to be a good start.


Thanks!

--
Al





Re: Pluggable Schedulers (was: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler)

2007-03-09 Thread William Lee Irwin III
William Lee Irwin III wrote:
 This sort of concern is too subjective for me to have an opinion on it.

On Fri, Mar 09, 2007 at 11:43:46PM +0300, Al Boldi wrote:
 How diplomatic.

William Lee Irwin III wrote:
 Impoliteness doesn't accomplish anything I want to do.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
 Fair enough.  But being honest about it, without flaming, may be more 
 constructive.

There was no flamage. It is literally true.


William Lee Irwin III wrote:
 I'm more of a cooperative than competitive person, not to say that
 flies well in Linux. There are more productive uses of time than having
 everyone NIH'ing everyone else's code. If the result isn't so great,
 I'd rather send them code or talk them about what needs to be done.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
 Ok, let's call it cooperative competitiveness.  You know, the kind of 
 competitiveness that drives improvements that help everybody.

This trips over ideological issues best not discussed on lkml.


William Lee Irwin III wrote:
 The extant versions of it fall well short of Linus' challenge as well
 as my original goals for it.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
 Do you mean Peter Williams' PlugSched-6.5-for-2.6.20?

You'd be well-served by talking to Peter Williams sometime. He's a
knowledgeable individual. I should also mention that Con Kolivas did
significant amounts of work to get the early codebase he inherited
from me working before things were handed off to Peter Williams.


William Lee Irwin III wrote:
 A useful exercise may also be enumerating
 your expectations and having those who actually work with the code
 describe how well those are actually met.

On Sat, Mar 10, 2007 at 08:34:25AM +0300, Al Boldi wrote:
 A runtime configurable framework that allows for dynamically extensible 
 schedulers.  PlugSched seems to be a good start.

Last I checked there were limits to runtime configurability centering
around only supporting a compiled-in set of scheduling drivers, unless
Peter's taken it the rest of the way without my noticing. It's unclear
what you have in mind in terms of dynamic extensibility. My only guess
would be pluggable scheduling policy/class support for individual
schedulers in addition to plugging the individual schedulers, except
I'm rather certain that Williams' code doesn't do anything with modules.


-- wli


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread hui
On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
> On Thu, 8 Mar 2007, Bill Davidsen wrote:
> > Please, could you now rethink pluggable schedulers as well? Even if one had
> > to be chosen at boot time and couldn't be changed thereafter, it would
> > still allow a few new thoughts to be included.
> 
> No. Really.
> 
> I absolutely *detest* pluggable schedulers. They have a huge downside: 
> they allow people to think that it's ok to make special-case schedulers. 
> And I simply very fundamentally disagree.

Linus,

This is where I have to respectfully disagree. There are types of loads
that aren't covered in SCHED_OTHER. They are typically certain real time
loads and those folks (regardless of -rt patch) would benefit greatly
from having something like that in place. Those scheduler developers can
plug in (at compile time) their work without having to track and
forward-port their code constantly so that non-SCHED_OTHER policies can be
experimented with easily.

This is especially so with rate monotonic influenced schedulers that are
in the works by real time folks, stock kernel or not. This is about
making Linux generally accessible to those folks and not folks doing
SCHED_OTHER work. They are orthogonal.

bill



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Linus Torvalds


On Thu, 8 Mar 2007, Bill Davidsen wrote:
>
> Please, could you now rethink pluggable schedulers as well? Even if one had to
> be chosen at boot time and couldn't be changed thereafter, it would still allow
> a few new thoughts to be included.

No. Really.

I absolutely *detest* pluggable schedulers. They have a huge downside: 
they allow people to think that it's ok to make special-case schedulers. 
And I simply very fundamentally disagree.

If you want to play with a scheduler of your own, go wild. It's easy 
(well, you'll find out that getting good results isn't, but that's a 
different thing). But actual pluggable schedulers just cause people to 
think that "oh, the scheduler performs badly under circumstance X, so 
let's tell people to use special scheduler Y for that case".

And CPU scheduling really isn't that complicated. It's *way* simpler than 
IO scheduling. There simply is *no*excuse* for not trying to do it well 
enough for all cases, or for having special-case stuff.

But even IO scheduling actually ends up being largely the same. Yes, we 
have pluggable schedulers, and we even allow switching them, but in the 
end, we don't want people to actually do it. It's much better to have a 
scheduler that is "good enough" than it is to have five that are "perfect" 
for five particular cases.

Linus


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Bill Davidsen

Con Kolivas wrote:

On Wednesday 07 March 2007 04:50, Bill Davidsen wrote:



With luck I'll get to shake out that patch in combination with kvm later
today.


Great thanks!. I've appreciated all the feedback so far.

I did try, but 2.6.21-rc3-git3 doesn't want to run kvm for me, and your 
patch may not be doing what it should. I'm falling back to 2.6.20 and 
will retest after I document my kvm issues.


--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Bill Davidsen

Linus Torvalds wrote:


On Mon, 5 Mar 2007, Ed Tomlinson wrote:
The patch _does_ make a difference.  For instance reading mail with freenet working 
hard  (threaded java application) and gentoo's emerge triggering compiles to update the 
box is much smoother.


Think this scheduler needs serious looking at.  


I agree, partly because it's obviously been getting rave reviews so far, 
but mainly because it looks like you can think about behaviour a lot 
better, something that was always very hard with the interactivity 
boosters with process state history.


I'm not at all opposed to this, but we do need:
 - to not do it at this stage in the stable kernel
 - to let it sit in -mm for at least a short while
 - and generally more people testing more loads.

Please, could you now rethink pluggable schedulers as well? Even if one 
had to be chosen at boot time and couldn't be changed thereafter, it 
would still allow a few new thoughts to be included.


I don't actually worry too much about switching out a CPU scheduler: those 
things are places where you *can* largely read the source code and get an 
idea for them (although with the kind of history state that we currently 
have, it's really really hard). But at the very least they aren't likely 
to have subtle bugs that show up elsewhere, so...


I confess that the default scheduler works for me most of the time; i/o 
tuning is more productive. I want to test with kvm load, but 
2.6.21-rc3-git3 doesn't want to run kvm at all. I'm looking to see what 
I broke, since nbd doesn't work, either.


I'm collecting OOPS now, will forward when I have a few more.

So as long as the generic concerns above are under control, I'll happily 
try something like this if it can be merged early in a merge window..


Linus



--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Fabio Comolli

Well, downloaded - compiled - booted: initng measures 17.369 seconds
to complete the boot process; without the patch the same kernel booted
in 21.553 seconds.

Very impressive.
Many thanks for your work.

Fabio






On 3/8/07, Con Kolivas <[EMAIL PROTECTED]> wrote:

On Friday 09 March 2007 07:25, Fabio Comolli wrote:
> Hi Con
> It would be nice if you could rebase this patch to latest git or at
> least to 2.6.21-rc3.
> Regards,

Check in http://ck.kolivas.org/patches/staircase-deadline/
There's an -rc3 patch there.

--
-ck




Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Con Kolivas
On Friday 09 March 2007 07:25, Fabio Comolli wrote:
> Hi Con
> It would be nice if you could rebase this patch to latest git or at
> least to 2.6.21-rc3.
> Regards,

Check in http://ck.kolivas.org/patches/staircase-deadline/
There's an -rc3 patch there.

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Fabio Comolli

Hi Con
It would be nice if you could rebase this patch to latest git or at
least to 2.6.21-rc3.
Regards,
Fabio




On 3/4/07, Con Kolivas <[EMAIL PROTECTED]> wrote:

This message is to announce the first general public release of the "Rotating
Staircase DeadLine" cpu scheduler.

Based on previous work from the staircase cpu scheduler I set out to design,
from scratch, a new scheduling policy design which satisfies every
requirement for SCHED_NORMAL (otherwise known as SCHED_OTHER) task management.

Available for download are:

 A full rollup of the patch for 2.6.20:
http://ck.kolivas.org/patches/staircase-deadline/sched-rsdl-0.26.patch

 Split patches for 2.6.20 (which will follow this email):
http://ck.kolivas.org/patches/staircase-deadline/split-out/

 The readme (which will also constitute the rest of this email):
http://ck.kolivas.org/patches/staircase-deadline/rsdl_scheduler.readme


The following readme is also included as documentation in
Documentation/sched-design.txt


Rotating Staircase Deadline cpu scheduler policy


Design summary
==

A novel design which incorporates a foreground-background descending priority
system (the staircase) with runqueue managed minor and major epochs (rotation
and deadline).


Features


A starvation free, strict fairness O(1) scalable design with interactivity
as good as the above restrictions can provide. There is no interactivity
estimator, no sleep/run measurements and only simple fixed accounting.
The design and its accounting are strict enough that task behaviour
can be modelled and maximum scheduling latencies can be predicted by
the virtual deadline mechanism that manages runqueues. The prime concern
in this design is to maintain fairness at all costs as determined by nice level,
yet to maintain as good interactivity as can be allowed within the
constraints of strict fairness.


Design description
==

RSDL works off the principle of providing each task a quota of runtime that
it is allowed to run at each priority level equal to its static priority
(ie. its nice level) and every priority below that. When each task is queued,
the cpu that it is queued onto also keeps a record of that quota. If the
task uses up its quota it is demoted one priority level. Also, if the cpu
notices a full quota has been used for that priority level, it pushes
everything remaining at that priority level to the next lowest priority
level. Once the runtime quota of every priority level has been consumed,
a task is queued on the "expired" array. When no other tasks exist with
quota, the expired array is activated and fresh quotas are handed out. This
is all done in O(1).
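The rotation described above can be sketched as a toy single-CPU model. The names (prio_quota, prio_rotation, QUOTA) mirror the readme, but this is an illustrative sketch under assumed sizes, not the patch's actual code:

```c
#include <assert.h>

#define PRIO_LEVELS 40
#define QUOTA 6            /* ms handed out per level, cf. RR_INTERVAL */

struct rq_model {
    int prio_quota[PRIO_LEVELS];   /* runtime left at each level this epoch */
    unsigned long prio_rotation;   /* major epoch counter */
};

/* Start a major epoch: the expired array becomes active and every
 * priority level is handed a fresh quota. */
static void major_epoch(struct rq_model *rq)
{
    rq->prio_rotation++;
    for (int i = 0; i < PRIO_LEVELS; i++)
        rq->prio_quota[i] = QUOTA;
}

/* Charge 'ran' ms at level 'prio' and return the level the task occupies
 * next: once a level's quota is exhausted, what remains there is pushed
 * to the next lower priority (a minor rotation). */
static int charge(struct rq_model *rq, int prio, int ran)
{
    rq->prio_quota[prio] -= ran;
    if (rq->prio_quota[prio] <= 0 && prio + 1 < PRIO_LEVELS)
        return prio + 1;
    return prio;
}
```

Everything is a constant-time array update or increment, which is what keeps the scheme O(1).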


Design details
==

Each cpu has its own runqueue which micromanages its own epochs, and each
task keeps a record of its own entitlement of cpu time. Most of the rest
of these details apply to non-realtime tasks as rt task management is
straightforward.

Each runqueue keeps a record of what major epoch it is up to in the
rq->prio_rotation field which is incremented on each major epoch. It also
keeps a record of quota available to each priority value valid for that
major epoch in rq->prio_quota[].

Each task keeps a record of what major runqueue epoch it was last running
on in p->rotation. It also keeps a record of what priority levels it has
already been allocated quota from during this epoch in a bitmap p->bitmap.

The only tunable that determines all other details is the RR_INTERVAL. This
is set to 6ms (minimum on 1000HZ, higher at different HZ values).

All tasks are initially given a quota based on RR_INTERVAL. This is equal to
RR_INTERVAL between nice values of 0 and 19, and progressively larger for
nice values from -1 to -20. This is assigned to p->quota and only changes
with changes in nice level.
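As a sketch of the rule above: nice 0..19 all receive RR_INTERVAL, while negative nice values receive progressively more. The readme does not spell out the exact scaling for negative nice, so the linear growth below is an assumption made purely for illustration:

```c
#include <assert.h>

#define RR_INTERVAL 6   /* ms; the single tunable (minimum at 1000HZ) */

/* Per-task quota (p->quota) derived from nice level.  The linear
 * scaling for negative nice is assumed, not taken from the patch. */
static int task_quota_ms(int nice)
{
    if (nice >= 0)
        return RR_INTERVAL;          /* nice 0..19: one RR_INTERVAL */
    return RR_INTERVAL * (1 - nice); /* nice -1..-20: progressively larger */
}
```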

As a task is first queued, it checks in recalc_task_prio to see if it has
run at this runqueue's current priority rotation. If it has not, it will
have its p->prio level set to equal its p->static_prio (nice level), will
be given a p->time_slice equal to p->quota, and will have its allocation
bitmap bit set in p->bitmap for its static priority (nice value). This
quota is then also added to the current runqueue's rq->prio_quota[p->prio].
It is then queued on the current active priority array.

If a task has already been running during this major epoch, and it has
p->time_slice left and the rq->prio_quota for the task's p->prio still
has quota, it will be placed back on the active array, but no more quota
will be added to either the task's or the runqueue's quota.

If a task has been running during this major epoch, but does not have
p->time_slice left or the runqueue's prio_quota for this task's p->prio
does not have quota, it will find the next lowest priority in its bitmap
that it has not been allocated quota from. It then gets a full quota
in p->time_slice and adds that to the quota value for the relevant priority
in rq->prio_quota.
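The bitmap search just described can be sketched as follows. The helper name and the flat 40-level encoding are hypothetical; the patch's real bitmap helpers differ:

```c
#include <assert.h>

#define PRIO_LEVELS 40

/* Find the next lowest priority whose bit in p->bitmap is still clear,
 * i.e. a level this task has not yet drawn quota from during the
 * current major epoch.  Illustrative sketch only. */
static int next_unused_prio(unsigned long bitmap, int prio)
{
    for (int p = prio + 1; p < PRIO_LEVELS; p++)
        if (!(bitmap & (1UL << p)))
            return p;
    return -1;  /* every level consumed: task goes to the expired array */
}
```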

Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Tim Tassonis

Hi Con

Just also wanted to throw in my less than two cents: I applied the patch 
and also have the very strong subjective impression that my system 
"feels" much more responsive than with stock 2.6.20.


Thanks for the great work.

Bye
Tim


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Con Kolivas
On Thursday 08 March 2007 19:53, Ingo Molnar wrote:
> * Con Kolivas <[EMAIL PROTECTED]> wrote:
> > This message is to announce the first general public release of the
> > "Rotating Staircase DeadLine" cpu scheduler.
> >
> > Based on previous work from the staircase cpu scheduler I set out to
> > design, from scratch, a new scheduling policy design which satisfies
> > every requirement for SCHED_NORMAL (otherwise known as SCHED_OTHER)
> > task management.
>
> cool! I like this even more than i liked your original staircase
> scheduler from 2 years ago :) Lets try what we did back then: put it
> into -mm and see what breaks (if anything). But in general, it is
> becoming increasingly clear that the interactivity estimator is a more
> fragile concept than the built-in quota mechanism of the staircase
> scheduler, so if it works in practice i'm quite in favor of it, even if
> it regresses /some/ workloads.

Great! Thanks for your support. 

After futzing around for all that time I've become sure that an approach 
without an interactive estimator is our only way forward. So far the 
throughput benchmarks are encouraging too so I suspect the estimator may be 
causing harm there too.

I will likely need help ensuring the different arches and cpuidle work 
properly, though, so I'd appreciate any help from people if they see 
something obvious and can get a grip on my code.

Thanks!

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread Ingo Molnar

* Con Kolivas <[EMAIL PROTECTED]> wrote:

> This message is to announce the first general public release of the 
> "Rotating Staircase DeadLine" cpu scheduler.
> 
> Based on previous work from the staircase cpu scheduler I set out to 
> design, from scratch, a new scheduling policy design which satisfies 
> every requirement for SCHED_NORMAL (otherwise known as SCHED_OTHER) 
> task management.

cool! I like this even more than i liked your original staircase 
scheduler from 2 years ago :) Lets try what we did back then: put it 
into -mm and see what breaks (if anything). But in general, it is 
becoming increasingly clear that the interactivity estimator is a more 
fragile concept than the built-in quota mechanism of the staircase 
scheduler, so if it works in practice i'm quite in favor of it, even if 
it regresses /some/ workloads.

Ingo


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-08 Thread hui
On Thu, Mar 08, 2007 at 10:31:48PM -0800, Linus Torvalds wrote:
> On Thu, 8 Mar 2007, Bill Davidsen wrote:
> > Please, could you now rethink pluggable schedulers as well? Even if one
> > had to be chosen at boot time and couldn't be changed thereafter, it
> > would still allow a few new thoughts to be included.
>
> No. Really.
>
> I absolutely *detest* pluggable schedulers. They have a huge downside:
> they allow people to think that it's ok to make special-case schedulers.
> And I simply very fundamentally disagree.

Linus,

This is where I have to respectfully disagree. There are types of loads
that aren't covered in SCHED_OTHER. They are typically certain real time
loads and those folks (regardless of -rt patch) would benefit greatly
from having something like that in place. Those scheduler developers can
plug in (at compile time) their work without having to track and forward
port their code constantly so that non-SCHED_OTHER policies can be
experimented with easily.

This is especially so with rate monotonic influenced schedulers that are
in the works by real time folks, stock kernel or not. This is about
making Linux generally accessible to those folks and not folks doing
SCHED_OTHER work. They are orthogonal.

bill



Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread Willy Tarreau
Hi Bill,

On Tue, Mar 06, 2007 at 04:37:37PM -0500, Bill Davidsen wrote:
(...)
> The point is that no one CPU scheduler will satisfy the policy needs of 
> all users, any more than one i/o scheduler does so. We have realtime 
> scheduling, preempt both voluntary and involuntary, why should we not 
> have multiple CPU schedulers. If Linus has an objection to pluggable 
> schedulers, then let's identify what the problem is and address it. If 
> that means one scheduler or the other must be compiled in, or all 
> compiled in and selected, so be it.

I'm not in Linus' head, but I think that he wanted the recurrent scheduler
problems to be addressed first for most users before going further. Too
much choice is often dangerous for quality. For instance, look at all the
netfilter modules. Many of them were completely bogus in their early stages,
and some of them even do mostly the same jobs, and many of them have never
left the "extra" stage. Choice is good to detect users' needs, it's good
for global evolution, but it's not as good when you want to have something
good enough for most people.

> >Then, when we have a generic, good enough scheduler for most situations, I
> >think that it could be good to get the plugsched for very specific usages.
> >People working in HPC may prefer to allocate resources differently for
> >instance. There may also be people refusing to mix tasks from different 
> >users
> >on two different siblings of one CPU for security reasons, etc... All those
> >would justify a plugable scheduler. But it should not be an excuse to 
> >provide
> >a set of bad schedulers and no good one.
> >
> >  
> Unless you force the definition of "good" to "what the default 
> scheduler does," there can be no "one" good one. Choice is good, no one 
> is calling for bizarre niche implementations, but we have at minimum 
> three CPU schedulers which are "best" for a large number of users. 
> (current default, and Con's fair and interactive flavors, before you ask).

By "good", I mean a scheduler that is not trivially DoSable, and which
does not cause unexpected long pauses to some processes without any reason
(processes which cannot get any time slice for tens of seconds, or ssh
daemons which freeze under system load, to the point of totally preventing
remote administration past 50% CPU usage on some systems).

> >The CPU scheduler is often compared to the I/O schedulers while in fact 
> >this
> >is a completely different story. The I/O schedulers are needed because the
> >hardware and filesystems may lead to very different behaviours, and the
> >workload may vary a lot (eg: news server, ftp server, cache, desktop, real
> >time streaming, ...). But at least, the default I/O scheduler was good 
> >enough
> >for most usages, and alternative ones are here to provide optimal solutions
> >to specific needs.
> And multiple schedulers are needed because the type of load, mix of 
> loads, and admin preference all require decisions at the policy level 
> which can't be covered by a single solution. Or at least none of the 
> existing solutions, and I think letting people tune the guts of scheduler 
> policy is more dangerous than giving a selection of solutions. Linux has 
> been about choice all along; I hope it's nearly time for a solution better 
> than patches to be presented.

There's a difference between the CPU and I/O schedulers, though. With the CPU
scheduler, you've always had the choice to assign per-process priorities 
with "nice". Don't get me wrong, I'm all for pluggable schedulers, as I'm
an ever-unsatisfied optimizer. It's just that I think it has been good to
encourage people to focus on real issues before dispersing efforts on
different needs. I hope that Con's work will eventually get merged and
that the door will then be opened towards pluggable schedulers.
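For reference, the "nice" knob mentioned above can be exercised entirely from a shell. A minimal sketch (the task, `sleep 2`, and the nice values are arbitrary examples, not anything from this thread):

```shell
# Start a background task at reduced priority (the value 11 is an
# arbitrary example), then read the nice value back from ps.
nice -n 11 sleep 2 &
pid=$!
prio=$(ps -o ni= -p "$pid" | tr -d ' ')
echo "pid=$pid nice=$prio"
# An unprivileged user may only move a process towards lower priority:
renice 19 -p "$pid" >/dev/null 2>&1
kill "$pid" 2>/dev/null
wait 2>/dev/null
```

This is policy input to whatever scheduler is running, of course, not a scheduler choice in itself.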

Best regards,
Willy

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread Bill Davidsen

Willy Tarreau wrote:

> On Tue, Mar 06, 2007 at 11:18:44AM +1100, Con Kolivas wrote:
> > On Tuesday 06 March 2007 10:05, Bill Davidsen wrote:
> > > jos poortvliet wrote:
> > > > Well, imho his current staircase scheduler already does a better job
> > > > compared to mainline, but it won't make it in (or at least, it's not
> > > > likely). So we can hope this WILL make it into mainline, but I wouldn't
> > > > count on it.
> > >
> > > Wrong problem; what is really needed is to get CPU scheduler choice into
> > > mainline, just as i/o scheduler choice finally did. Con has noted that for
> > > some loads this will present suboptimal performance, as will his -ck
> > > patches, as will the default scheduler. Instead of trying to make ANY one
> > > size fit all, we should have a means to select, at runtime, between any of
> > > the schedulers, and preferably to define an interface by which a user
> > > can insert a new scheduler in the kernel (compiled in, I don't mean
> > > pluggable) with clear and well defined rules for how that can be done.
> >
> > Been there, done that. Wli wrote the infrastructure for plugsched; I took
> > his code and got it booting and ported 3 or so different scheduler designs.
> > It allowed you to build as few or as many different schedulers into the
> > kernel as you liked, and either boot the only one you built into your
> > kernel, or choose a scheduler at boot time. That code got permavetoed by
> > both Ingo and Linus. After that I gave up on that code and handed it over
> > to Peter Williams, who still maintains it. So please note that I pushed the
> > plugsched barrow previously and still don't think it's a bad idea, but the
> > maintainers think it's the wrong approach.
>
> In a way, I think they are right. Let me explain. Pluggable schedulers are
> useful when you want to switch away from the default one. This is very
> useful during development of a new scheduler, as well as when you're not
> satisfied with the default scheduler. Having this feature will incite many
> people to develop their own scheduler for their very specific workload, and
> nothing generic. That's a bit what happened, after all: you, Peter, Nick,
> and Mike have worked a lot trying to provide alternative solutions.
>
> But when you think about it, there are other OSes which have only one
> scheduler and which behave very well with tens of thousands of tasks and
> scale very well with lots of CPUs (e.g. Solaris). So there is a real
> challenge here to try to provide something at least as good and universal,
> because we know that it can exist. And this is what you finally did: work
> on a scheduler which ought to be good with any workload.

The problem is not with "any workload," because that's not the issue;
the issue is the definition of "good" matching the administrator's
policy. And that's where the problem comes in. We have the default
scheduler, which favors interactive jobs. We have Con's staircase
scheduler, which is part of an interactivity package. We have the
absolutely fair scheduler which is, well... fair, and keeps things
smooth and, under reasonable load, crisp.

There are other schedulers in the pluggable package; I did a doorknob
scheduler for 2.2 (everybody gets a turn, a special case of round-robin).
I'm sure people have quietly hacked many more, which have never been
presented to public view.

The point is that no one CPU scheduler will satisfy the policy needs of
all users, any more than one i/o scheduler does so. We have realtime
scheduling, preempt both voluntary and involuntary, so why should we not
have multiple CPU schedulers? If Linus has an objection to pluggable
schedulers, then let's identify what the problem is and address it. If
that means one scheduler or the other must be compiled in, or all
compiled in and selected, so be it.

> Then, when we have a generic, good enough scheduler for most situations, I
> think that it could be good to get the plugsched for very specific usages.
> People working in HPC may prefer to allocate resources differently, for
> instance. There may also be people refusing to mix tasks from different
> users on two different siblings of one CPU for security reasons, etc. All
> those would justify a pluggable scheduler. But it should not be an excuse
> to provide a set of bad schedulers and no good one.

Unless you force the definition of "good" to "what the default
scheduler does," there can be no "one" good one. Choice is good; no one
is calling for bizarre niche implementations, but we have at minimum
three CPU schedulers which are "best" for a large number of users
(current default, and Con's fair and interactive flavors, before you ask).

> The CPU scheduler is often compared to the I/O schedulers while in fact
> this is a completely different story. The I/O schedulers are needed because
> the hardware and filesystems may lead to very different behaviours, and the
> workload may vary a lot (eg: news server, ftp server, cache, desktop, real
> time streaming, ...). But at least, the default I/O scheduler was good
> enough for most usages, and alternative ones are here to provide optimal 

Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread Con Kolivas
On Wednesday 07 March 2007 04:50, Bill Davidsen wrote:
> Gene Heskett wrote:
> > On Monday 05 March 2007, Nicolas Mailhot wrote:
> >> This looks like -mm stuff if you want it in 2.6.22
> >
> > This needs to get to 2.6.21, it really is that big an improvement.
>
> As Con pointed out, for some workloads and desired behaviour this is not
> as good as the existing scheduler. Therefore it should go in -mm and
> hopefully give the user an option to select which is appropriate.

Actually, I wasn't saying that for some workloads mainline will be better. What 
I was saying was that there will be some bizarre scenarios where the intrinsic 
unfairness in mainline towards certain interactive tasks will make them 
appear to run better. After fiddling with scheduler code for the last few 
years I've come to believe that it may _appear to look better_, but is 
worse, since that behaviour can be exploited and leads to scheduling delays 
elsewhere.

> With luck I'll get to shake out that patch in combination with kvm later
> today.

Great, thanks! I've appreciated all the feedback so far.

-- 
-ck


Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread jos poortvliet
On Tuesday 06 March 2007, Willy Tarreau wrote:
> In a way, I think they are right. Let me explain. Pluggable schedulers are
> useful when you want to switch away from the default one. This is very
> useful during development of a new scheduler, as well as when you're not
> satisfied with the default scheduler. Having this feature will incite
> many people to develop their own scheduler for their very specific
> workload, and nothing generic. That's a bit what happened, after all: you,
> Peter, Nick, and Mike have worked a lot trying to provide alternative
> solutions.

Did that happen for I/O? There are a few schedulers, e.g. some for servers, 
others more for desktop use or throughput. But not 10 or anything like that...

> But when you think about it, there are other OSes which have only one
> scheduler and which behave very well with tens of thousands of tasks and
> scale very well with lots of CPUs (e.g. Solaris). So there is a real
> challenge here to try to provide something at least as good and universal,
> because we know that it can exist. And this is what you finally did: work
> on a scheduler which ought to be good with any workload.

I can imagine a desktop works optimally with a different scheduler than a tiny 
embedded OS in a phone, an 8-core system serving a website, or a 
distributed 512-core system doing heavy scientific calculations?!?

Optimizing for all at the same time involves some compromises, and thus limits 
performance in certain scenarios, right?

> Then, when we have a generic, good enough scheduler for most situations, I
> think that it could be good to get the plugsched for very specific usages.
> People working in HPC may prefer to allocate resources differently, for
> instance. There may also be people refusing to mix tasks from different
> users on two different siblings of one CPU for security reasons, etc. All
> those would justify a pluggable scheduler. But it should not be an excuse to
> provide a set of bad schedulers and no good one.

CFQ does pretty well at most workloads; that's why it's the default, right? But 
there is choice, which is a good thing. And the current mainline CPU 
scheduler isn't bad at all, so having 'no good one' won't happen anyway.

> The CPU scheduler is often compared to the I/O schedulers while in fact
> this is a completely different story. The I/O schedulers are needed because
> the hardware and filesystems may lead to very different behaviours, and the
> workload may vary a lot (eg: news server, ftp server, cache, desktop, real
> time streaming, ...). But at least, the default I/O scheduler was good
> enough for most usages, and alternative ones are here to provide optimal
> solutions to specific needs.

OK, for I/O the difference could be pretty big. But still, there are CPU 
workloads which could be improved by a particular scheduler, right?

And wouldn't it make sense then to have a choice in the default kernel at 
boot time? If that wouldn't hurt performance, it would be an improvement for 
desktop distributions like (K)ubuntu, which can set staircase by default, and 
server distros offering RSDL...

At least having a desktop/interactivity-optimized scheduler like staircase and 
a fair, throughput-optimized scheduler like RSDL sounds sane. RSDL does 
better at the MySQL testcase, staircase is better on the desktop... We're not 
talking about huge amounts of code, or 10 schedulers, and a diff of a few 
percent plus better scaling on many CPUs and processes versus better 
interactivity on the desktop sounds like it's worth it.

> Regards,
> Willy




Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread Bill Davidsen

Gene Heskett wrote:

> On Monday 05 March 2007, Nicolas Mailhot wrote:
> > This looks like -mm stuff if you want it in 2.6.22
>
> This needs to get to 2.6.21, it really is that big an improvement.

As Con pointed out, for some workloads and desired behaviour this is not 
as good as the existing scheduler. Therefore it should go in -mm and 
hopefully give the user an option to select which is appropriate.

With luck I'll get to shake out that patch in combination with kvm later 
today.


--
Bill Davidsen <[EMAIL PROTECTED]>
  "We have more to fear from the bungling of the incompetent than from
the machinations of the wicked."  - from Slashdot


Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread Al Boldi
Xavier Bestel wrote:
> On Tue, 2007-03-06 at 09:10 +1100, Con Kolivas wrote:
> > Hah I just wish gears would go away. If I get hardware where it runs at
> > just the right speed it looks like it doesn't move at all. On other
> > hardware the wheels go backwards and forwards where the screen refresh
> > rate is just perfectly a factor of the frames per second (or something
> > like that).
> >
> > This is not a cpu scheduler test and you're inferring that there are cpu
> > scheduling artefacts based on an application that has bottlenecks at
> > different places depending on the hardware combination.
>
> I'd add that Xorg has its own scheduler (for X11 operations, of course),
> that has its own quirks, and chances are that it is the one you're
> testing with glxgears. And as Con said, as long as glxgears does more
> FPS than your screen refresh rate, its flickering is completely
> meaningless: it doesn't even attempt to sync with vblank. Al, you'd
> better try with Quake3 or Nexuiz, or even Blender if you want to test 3D
> interactivity under load.

Actually, games aren't really useful for evaluating scheduler performance, due 
to their bursty nature.

OTOH, gears runs full throttle, including any of its bottlenecks. In fact, 
it's the bottlenecks that add to its realism.  It exposes underlying 
scheduler hiccups visually, unless buffered by the display driver, in which 
case you just use the vesa driver to be sure.

If gears starts to flicker on you, just slow it down with a cpu hog like:

# while :; do :; done &

Add as many hogs as you need to make the hiccups visible.

Again, these hiccups are only visible when using uneven nice+ levels.
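To make the setup concrete, the hog recipe can be scripted; a small sketch, where the uneven nice levels (0, 5, 10) are arbitrary examples:

```shell
# Start one busy-loop hog per nice level; the uneven levels are what
# make the scheduling hiccups visible in gears.
PIDS=""
COUNT=0
for n in 0 5 10; do
    nice -n "$n" sh -c 'while :; do :; done' &
    PIDS="$PIDS $!"
    COUNT=$((COUNT+1))
done
echo "running $COUNT hogs:$PIDS"
sleep 1            # watch gears during this window in real use
kill $PIDS 2>/dev/null
wait 2>/dev/null
```

Lengthen the sleep (or drop the kill) for real observation runs.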

BTW, another way to show these hiccups would be through some kind of a 
cpu/proc timing tracer.  Do we have something like that?
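Short of a real tracer, a crude approximation can be scripted against Linux procfs; a sketch (the sampling interval and sample count are arbitrary):

```shell
# Sample a process's cumulative CPU time (utime + stime, fields 14 and
# 15 of Linux's /proc/<pid>/stat, in clock ticks) at a fixed interval;
# scheduling stalls would show up as flat spots in the deltas.
sh -c 'while :; do :; done' &     # hog to observe
pid=$!
samples=0
while [ "$samples" -lt 3 ]; do
    set -- $(cat "/proc/$pid/stat")
    echo "sample $samples: ticks=$(( ${14} + ${15} ))"
    sleep 1
    samples=$((samples+1))
done
kill "$pid" 2>/dev/null
wait 2>/dev/null
```

A real tracer would timestamp context switches in the kernel rather than polling, but even this shows gross starvation.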


Thanks!

--
Al



Re: [ck] Re: [ANNOUNCE] RSDL completely fair starvation free interactive cpu scheduler

2007-03-06 Thread Xavier Bestel
On Tue, 2007-03-06 at 09:10 +1100, Con Kolivas wrote:
> Hah I just wish gears would go away. If I get hardware where it runs at just 
> the right speed it looks like it doesn't move at all. On other hardware the 
> wheels go backwards and forwards where the screen refresh rate is just 
> perfectly a factor of the frames per second (or something like that). 
> 
> This is not a cpu scheduler test and you're inferring that there are cpu 
> scheduling artefacts based on an application that has bottlenecks at 
> different places depending on the hardware combination. 

I'd add that Xorg has its own scheduler (for X11 operations, of course),
that has its own quirks, and chances are that it is the one you're
testing with glxgears. And as Con said, as long as glxgears does more
FPS than your screen refresh rate, its flickering is completely
meaningless: it doesn't even attempt to sync with vblank. Al, you'd
better try with Quake3 or Nexuiz, or even Blender if you want to test 3D
interactivity under load.

Xav



