On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
>
> * William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>
> > [...] Also rest assured that the tone of the critique is not hostile,
> > and wasn't meant to sound that way.
>
> ok :) (And i guess i was too touchy - sorry about coming
On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> > On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
> > >> All things are not equal; they all have different properties. I like
> >
> > On
On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
> Nonlinear is a must IMO. I would suggest X = exp(ln(10)/10) ~= 1.2589
> That value has the property that a nice=10 task gets 1/10th the cpu of a
> nice=0 task, and a nice=20 task gets 1/100 of nice=0. I think that
> would be fairly
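James Bruce's proposed base can be sanity-checked numerically; a minimal sketch (the helper name is mine, not from any scheduler):

```python
import math

# Proposed multiplicative base: X = exp(ln(10)/10) = 10**(1/10) ~= 1.2589,
# so each nice step scales CPU share by X and ten steps scale it by exactly 10.
X = math.exp(math.log(10) / 10)

def relative_share(nice):
    """CPU share at a given nice level, relative to a nice=0 task."""
    return X ** (-nice)

assert abs(relative_share(10) - 1 / 10) < 1e-12   # nice=10 -> 1/10th of nice=0
assert abs(relative_share(20) - 1 / 100) < 1e-12  # nice=20 -> 1/100th
```

The exponential form is what makes the ratio between any two adjacent nice levels constant.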
On Tue, Apr 17, 2007 at 05:31:20AM +0200, Nick Piggin wrote:
> On Mon, Apr 16, 2007 at 09:28:24AM -0500, Matt Mackall wrote:
> > On Mon, Apr 16, 2007 at 05:03:49AM +0200, Nick Piggin wrote:
> > > I'd prefer if we kept a single CPU scheduler in mainline, because I
> > > think that simplifies
Peter Williams wrote:
Chris Friesen wrote:
Scuse me if I jump in here, but doesn't the load balancer need some
way to figure out a) when to run, and b) which tasks to pull and where
to push them?
Yes, but both of these are independent of the scheduler discipline in force.
It is not clear
William Lee Irwin III wrote:
William Lee Irwin III wrote:
Comments on which directions you'd like this to go in these respects
would be appreciated, as I regard you as the current "project owner."
On Tue, Apr 17, 2007 at 06:00:06PM +1000, Peter Williams wrote:
I'd do a scan through LKML from
Ingo Molnar wrote:
* Nick Piggin <[EMAIL PROTECTED]> wrote:
Maybe the progress is that more key people are becoming open to
the idea of changing the scheduler.
Could be. All was quiet for quite a while, but when RSDL showed up,
it aroused enough interest to show that scheduling woes are on
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 05:48:55PM +1000, Peter Williams wrote:
Nick Piggin wrote:
Other hints that it was a bad idea was the need to transfer time slices
between children and parents during fork() and exit().
I don't see how that has anything to do with dual arrays.
It's
Chris Friesen wrote:
William Lee Irwin III wrote:
The sorts of explicit decisions I'd like to be made for these are:
(1) In a mixture of tasks with varying nice numbers, a given nice number
corresponds to some share of CPU bandwidth. Implementations
should not have the freedom to
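The property wli asks for, that a mix of nice numbers determines a definite bandwidth split regardless of implementation, can be stated directly. A toy model (the base 1.25 is an arbitrary illustration, not a value from the thread):

```python
BASE = 1.25  # illustrative multiplicative nice base, not a real scheduler's value

def weight(nice):
    return BASE ** (-nice)

def shares(nice_levels):
    """Bandwidth share of each task: its weight over the total weight."""
    total = sum(weight(n) for n in nice_levels)
    return [weight(n) / total for n in nice_levels]

# Two nice=0 tasks and one nice=5 task: the split is fully determined by
# the nice numbers, leaving an implementation no freedom over the shares.
print(shares([0, 0, 5]))
```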
On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > 2.6.21-rc7-cfs-v2
> > 534.80user 30.92system 2:23.64elapsed 393%CPU
> > 534.75user 31.01system 2:23.70elapsed 393%CPU
> > 534.66user 31.07system 2:23.76elapsed 393%CPU
> > 534.56user
William Lee Irwin III wrote:
>> Comments on which directions you'd like this to go in these respects
>> would be appreciated, as I regard you as the current "project owner."
On Tue, Apr 17, 2007 at 06:00:06PM +1000, Peter Williams wrote:
> I'd do a scan through LKML from about 18 months ago looking
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
>
> > until now the main approach for nice levels in Linux was always:
> > "implement your main scheduling logic for nice 0 and then look for
> > some low-overhead method that can
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> 2.6.21-rc7-cfs-v2
> 534.80user 30.92system 2:23.64elapsed 393%CPU
> 534.75user 31.01system 2:23.70elapsed 393%CPU
> 534.66user 31.07system 2:23.76elapsed 393%CPU
> 534.56user 30.91system 2:23.76elapsed 393%CPU
> 534.66user 31.07system 2:23.67elapsed
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> Also, given the general comments it appears clear that some
>> statistical metric of deviation from the intended behavior furthermore
>> qualified by timescale is necessary, so this appears to be headed
>> toward a sort of performance metric
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > Maybe the progress is that more key people are becoming open to
> > > the idea of changing the scheduler.
> >
> > Could be. All was quiet for quite a while, but when RSDL showed up,
> > it aroused enough interest to show that scheduling woes are
* Peter Williams <[EMAIL PROTECTED]> wrote:
> There's a lot of ugly code in the load balancer that is only there to
> overcome the side effects of SMT and dual core. A lot of it was put
> there by Intel employees trying to make load balancing more friendly
> to their systems. What I'm
On Tue, Apr 17, 2007 at 08:56:27AM +0100, Andy Whitcroft wrote:
> >
> > as usual, any sort of feedback, bugreports, fixes and suggestions are
> > more than welcome,
>
> Pushed this through the test.kernel.org and nothing new blew up.
> Notably the kernbench figures are within expectations even
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> [...] Also rest assured that the tone of the critique is not hostile,
> and wasn't meant to sound that way.
ok :) (And i guess i was too touchy - sorry about coming out swinging.)
> Also, given the general comments it appears clear that
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> The additive nice_offset breaks nice levels. A multiplicative priority
>> weighting of a different, nonnegative metric of cpu utilization from
>> what's now used is required for nice levels to work. I've been trying
>> to point this out
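wli's objection can be illustrated with a toy model (both key functions below are hypothetical, named only for the comparison): an additive offset changes the ratio between adjacent nice levels depending on the magnitude of the underlying metric, while a multiplicative weight keeps that ratio constant.

```python
def additive_key(metric, nice, offset=100):
    # additive nice_offset style: nice shifts the metric by a constant
    return metric + nice * offset

def multiplicative_key(metric, nice, base=1.25):
    # multiplicative style: nice scales the metric by a constant factor
    return metric * base ** nice

for metric in (10, 100_000):
    ratio = additive_key(metric, 1) / additive_key(metric, 0)
    print("additive", metric, ratio)        # 11.0 vs 1.001: no stable meaning

for metric in (10, 100_000):
    ratio = multiplicative_key(metric, 1) / multiplicative_key(metric, 0)
    print("multiplicative", metric, ratio)  # 1.25 both times
```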
* Peter Williams <[EMAIL PROTECTED]> wrote:
> > And my scheduler for example cuts down the amount of policy code and
> > code size significantly.
>
> Yours is one of the smaller patches mainly because you perpetuate (or
> you did in the last one I looked at) the (horrible to my eyes) dual
>
On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
>> This observation of Peter's is the best thing to come out of this
>> whole foofaraw. Looking at what's happening in CPU-land, I think it's
>> going to be necessary, within a couple of years, to replace the whole
>> idea of
On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
>> Any chance you'd be willing to put down a few thoughts on what sorts
>> of standards you'd like to set for both correctness (i.e. the bare
>> minimum a scheduler implementation must do to be considered valid
>> beyond not
William Lee Irwin III wrote:
On Tue, Apr 17, 2007 at 04:34:36PM +1000, Peter Williams wrote:
This doesn't make any sense to me.
For a start, exact simultaneous operation would be impossible to achieve
except with highly specialized architecture such as the long departed
transputer. And
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > Anyone who thinks that there exists only two kinds of code: 100%
> > correct and 100% incorrect with no shades of grey in between is in
> > reality a sort of an extremist: whom, depending on mood and
> > affection, we could call either a 'coding
Ingo Molnar wrote:
> [announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]
>
> i'm pleased to announce the first release of the "Modular Scheduler Core
> and Completely Fair Scheduler [CFS]" patchset:
>
>http://redhat.com/~mingo/cfs-scheduler/sched-modular+cfs.patch
On Tue, Apr 17, 2007 at 05:48:55PM +1000, Peter Williams wrote:
> Nick Piggin wrote:
> >>Other hints that it was a bad idea was the need to transfer time slices
> >>between children and parents during fork() and exit().
> >
> >I don't see how that has anything to do with dual arrays.
>
> It's
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 04:23:37PM +1000, Peter Williams wrote:
Nick Piggin wrote:
And my scheduler for example cuts down the amount of policy code and
code size significantly.
Yours is one of the smaller patches mainly because you perpetuate (or
you did in the last one I
On Tue, Apr 17, 2007 at 09:33:08AM +0200, Ingo Molnar wrote:
>
> * William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>
> > On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> > > I had a quick look at Ingo's code yesterday. Ingo is always smart to
> > > prepare a main dish
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> > I had a quick look at Ingo's code yesterday. Ingo is always smart to
> > prepare a main dish (feature) with a nice side dish (code cleanup) to
> > Linus ;) And even this code
On Tue, Apr 17, 2007 at 12:27:28AM -0700, Davide Libenzi wrote:
> On Tue, 17 Apr 2007, William Lee Irwin III wrote:
>
> > On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> > > I would suggest to thoroughly test all your alternatives before deciding.
> > > Some code and design may
On Tue, 17 Apr 2007, William Lee Irwin III wrote:
> On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> > I would suggest to thoroughly test all your alternatives before deciding.
> > Some code and design may look very good and small at the beginning, but
> > when you start
On Tue, Apr 17, 2007 at 12:09:49AM -0700, William Lee Irwin III wrote:
>
> The trouble with thorough testing right now is that no one agrees on
> what the tests should be and a number of the testcases are not in great
> shape. An agreed-upon set of testcases for basic correctness should be
>
On Tue, 17 Apr 2007, Nick Piggin wrote:
> To be clear, I'm not saying O(logN) itself is a big problem. Type
>
> plot [10:100] x with lines, log(x) with lines, 1 with lines
Haha, Nick, I know what a log() looks like :)
The Time Ring I posted as an example (which is nothing other than a
ring-based
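Nick's gnuplot one-liner is just contrasting growth rates; the same comparison in a few lines of Python (illustrative only):

```python
import math

# O(log N) stays tiny even as the runnable task count N grows by orders of
# magnitude, which is why O(log N) per decision is rarely the real problem.
for n in (10, 100, 1000, 10**6):
    print(f"N={n:>7}  log2(N)={math.log2(n):.1f}")
```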
William Lee Irwin III wrote:
On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
I had a quick look at Ingo's code yesterday. Ingo is always smart to
prepare a main dish (feature) with a nice side dish (code cleanup) to Linus ;)
And even this code does that pretty nicely. The deadline
On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> On Tue, 17 Apr 2007, Nick Piggin wrote:
>
> > > All things are not equal; they all have different properties. I like
> >
> > Exactly. So we have to explore those properties and evaluate performance
> > (in all meanings of the
On Mon, Apr 16, 2007 at 11:50:03PM -0700, Davide Libenzi wrote:
> I had a quick look at Ingo's code yesterday. Ingo is always smart to
> prepare a main dish (feature) with a nice side dish (code cleanup) to Linus ;)
> And even this code does that pretty nicely. The deadline design looks
> good,
On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
> >> All things are not equal; they all have different properties. I like
>
> On Tue, Apr 17, 2007 at 08:15:03AM +0200, Nick Piggin wrote:
> > Exactly.
On Tue, 17 Apr 2007, Nick Piggin wrote:
> > All things are not equal; they all have different properties. I like
>
> Exactly. So we have to explore those properties and evaluate performance
> (in all meanings of the word). That's only logical.
I had a quick look at Ingo's code yesterday. Ingo
On Tue, Apr 17, 2007 at 04:23:37PM +1000, Peter Williams wrote:
> Nick Piggin wrote:
> >And my scheduler for example cuts down the amount of policy code and
> >code size significantly.
>
> Yours is one of the smaller patches mainly because you perpetuate (or
> you did in the last one I looked
On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
>> All things are not equal; they all have different properties. I like
On Tue, Apr 17, 2007 at 08:15:03AM +0200, Nick Piggin wrote:
> Exactly. So we have to explore those properties and evaluate performance
> (in all meanings
Nick Piggin wrote:
Well I know people have had woes with the scheduler for ever (I guess that
isn't going to change :P). I think people generally lost a bit of interest
in trying to improve the situation because of the upstream problem.
Yes.
Peter
--
Peter Williams
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 02:17:22PM +1000, Peter Williams wrote:
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 04:29:01AM +0200, Mike Galbraith wrote:
On Tue, 2007-04-17 at 10:06 +1000, Peter Williams wrote:
Mike Galbraith wrote:
Demystify what? The casual observer need
On Tue, Apr 17, 2007 at 04:03:41PM +1000, Peter Williams wrote:
> Nick Piggin wrote:
> >
> >But you add extra code for that on top of what we have, and are also
> >prevented from making per-cpu assumptions.
> >
> >And you can get N CPUs per runqueue behaviour by having them in a domain
> >with no
On Tue, Apr 17, 2007 at 04:03:41PM +1000, Peter Williams wrote:
> There's a lot of ugly code in the load balancer that is only there to
> overcome the side effects of SMT and dual core. A lot of it was put
> there by Intel employees trying to make load balancing more friendly to
> their
On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
> On Tue, Apr 17, 2007 at 02:17:22PM +1000, Peter Williams wrote:
> >> I myself was thinking of this as the chance for a much needed
> >> simplification of the scheduling code and if this can be done with the
> >> result
On Tue, Apr 17, 2007 at 07:53:55AM +0200, Willy Tarreau wrote:
> Hi Nick,
>
> On Tue, Apr 17, 2007 at 06:29:54AM +0200, Nick Piggin wrote:
> (...)
> > And my scheduler for example cuts down the amount of policy code and
> > code size significantly. I haven't looked at Con's ones for a while,
> >
On Tue, Apr 17, 2007 at 02:17:22PM +1000, Peter Williams wrote:
>> I myself was thinking of this as the chance for a much needed
>> simplification of the scheduling code and if this can be done with the
>> result being "reasonable" it then gives us the basis on which to propose
>> improvements
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 02:25:39PM +1000, Peter Williams wrote:
Nick Piggin wrote:
On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
On 4/16/07, Peter Williams <[EMAIL PROTECTED]> wrote:
Note that I talk of run queues
not CPUs as I think a shift to
On Tue, 17 Apr 2007, William Lee Irwin III wrote:
On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
Latency. Given N tasks in the system, an arbitrary task should get
onto the CPU in a bounded amount of time (excluding events like freak
IRQ holdoffs and such, obviously -- ie.
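A back-of-envelope version of that latency criterion, assuming (purely for illustration) strict round-robin with a fixed timeslice:

```python
# Under round-robin, a runnable task waits at most until every other runnable
# task has used one full timeslice, so the latency bound is linear in N.
def worst_case_wait_ms(runnable_tasks, timeslice_ms):
    return (runnable_tasks - 1) * timeslice_ms

print(worst_case_wait_ms(50, 10))  # 50 runnable tasks, 10 ms slices -> 490 ms
```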
On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
yeah. If you could come up with a sane definition that also translates
into low overhead on the algorithm side that would be great!
On Tue, Apr 17, 2007 at 05:08:09PM -0500, Matt Mackall wrote:
How's this:
If you're running two
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
yeah. If you could come up with a sane definition that also translates
into low overhead on the algorithm side that would be great!
On Tue, Apr 17, 2007 at
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
I'm already working with this as my assumed nice semantics (actually
something with a specific exponential base, suggested in other emails)
until others start saying they want something different and agree.
On Tue, Apr 17,
On 4/17/07, Peter Williams [EMAIL PROTECTED] wrote:
The other way in which the code deviates from the original is that (for
a few years now) I no longer calculate CPU bandwidth usage directly.
I've found that the overhead is less if I keep a running average of the
size of a task's CPU bursts and
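Peter doesn't give his formula in this snippet, but a running average of burst sizes is typically an exponentially weighted one; a sketch under that assumption (`alpha` is a made-up smoothing factor, not his value):

```python
def update_avg(avg_burst, new_burst, alpha=0.25):
    # exponentially weighted moving average: O(1) state per task, cheap to update
    return (1 - alpha) * avg_burst + alpha * new_burst

avg = 0.0
for burst_ms in (4.0, 4.0, 12.0, 4.0):  # CPU burst lengths in ms
    avg = update_avg(avg, burst_ms)
print(avg)  # -> 4.234375
```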
201 - 300 of 623 matches