On Wed, 18 Apr 2007 10:22:59 -0700 (PDT), Linus Torvalds <[EMAIL PROTECTED]>
wrote:
> So if you have 2 users on a machine running CPU hogs, you should *first*
> try to be fair among users. If one user then runs 5 programs, and the
> other one runs just 1, then the *one* program should get
On Wed, 18 Apr 2007 10:22:59 -0700 (PDT)
Linus Torvalds <[EMAIL PROTECTED]> wrote:
> So if you have 2 users on a machine running CPU hogs, you should
> *first* try to be fair among users. If one user then runs 5 programs,
> and the other one runs just 1, then the *one* program should get 50%
> of
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> It does largely achieve the sort of fairness it set out for itself as
> its design goal. One should also note that the queueing mechanism is
> more than flexible enough to handle prioritization by a number of
> different methods, and the
On Wed, Apr 18, 2007 at 10:22:59AM -0700, Linus Torvalds wrote:
> So I claim that anything that cannot be fair by user ID is actually really
> REALLY unfair. I think it's absolutely humongously STUPID to call
> something the "Completely Fair Scheduler", and then just be fair on a
> thread
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> In that sense 'fairness' is not global (and in fact it is almost
> _never_ a global property, as X runs under root uid [*]), it is only
> the most lowlevel scheduling machinery that can then be built upon.
> [...]
perhaps a more fitting term would
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> The fact is:
>
> - "fairness" is *not* about giving everybody the same amount of CPU
>   time (scaled by some niceness level or not). Anybody who thinks
>   that is "fair" is just being silly and hasn't thought it through.
yeah, very much so.
On Wed, 18 Apr 2007, Matt Mackall wrote:
>
> On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> > And "fairness by euid" is probably a hell of a lot easier to do than
> > trying to figure out the wakeup matrix.
>
> For the record, you actually don't need to track a whole NxN
* Christian Hesse <[EMAIL PROTECTED]> wrote:
> Hi Ingo and all,
>
> On Friday 13 April 2007, Ingo Molnar wrote:
> > as usual, any sort of feedback, bugreports, fixes and suggestions are
> > more than welcome,
>
> I just gave CFS a try on my system. From a user's point of view it
> looks good
Hi Ingo and all,
On Friday 13 April 2007, Ingo Molnar wrote:
> as usual, any sort of feedback, bugreports, fixes and suggestions are
> more than welcome,
I just gave CFS a try on my system. From a user's point of view it looks good
so far. Thanks for your work.
However I found a problem: When
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> And "fairness by euid" is probably a hell of a lot easier to do than
> trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole NxN matrix
(or do the implied O(n**3) matrix inversion!) to
On Wed, 18 Apr 2007, Matt Mackall wrote:
>
> Why is X special? Because it does work on behalf of other processes?
> Lots of things do this. Perhaps a scheduler should focus entirely on
> the implicit and directed wakeup matrix and optimizing that
> instead[1].
I 100% agree - the perfect
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
> Why are processes special? Should user A be able to get more CPU time
> for his job than user B by splitting it into N parallel jobs? Should
> we be fair per process, per user, per thread group, per session, per
> controlling
Chris Friesen wrote:
Peter Williams wrote:
Chris Friesen wrote:
Suppose I have a really high priority task running. Another very
high priority task wakes up and would normally preempt the first one.
However, there happens to be another cpu available. It seems like it
would be a win if we
On Wednesday 18 April 2007 22:13, Nick Piggin wrote:
> On Wed, Apr 18, 2007 at 11:53:34AM +0200, Ingo Molnar wrote:
> > * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > So looking at elapsed time, a granularity of 100ms is just behind the
> > > mainline score. However it is using slightly less user
On Wednesday 18 April 2007 22:14, Nick Piggin wrote:
> On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
> > On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> > > Again, for comparison 2.6.21-rc7 mainline:
> > >
> > > 508.87user 32.47system 2:17.82elapsed 392%CPU
> > > 509.05user
On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
> On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> > Again, for comparison 2.6.21-rc7 mainline:
> >
> > 508.87user 32.47system 2:17.82elapsed 392%CPU
> > 509.05user 32.25system 2:17.84elapsed 392%CPU
> > 508.75user 32.26system
On Wed, Apr 18, 2007 at 11:53:34AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > So looking at elapsed time, a granularity of 100ms is just behind the
> > mainline score. However it is using slightly less user time and
> > slightly more idle time, which indicates
* Andy Whitcroft <[EMAIL PROTECTED]> wrote:
> > as usual, any sort of feedback, bugreports, fixes and suggestions
> > are more than welcome,
>
> Pushed this through the test.kernel.org and nothing new blew up.
> Notably the kernbench figures are within expectations even on the
> bigger numa
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > 535.43user 30.62system 2:23.72elapsed 393%CPU
> >
> > Thanks for testing this! Could you please try this also with:
> >
> >echo 1 > /proc/sys/kernel/sched_granularity
>
> 507.68user 31.87system 2:18.05elapsed 390%CPU
> 507.99user
On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
> > * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > 2.6.21-rc7-cfs-v2
> > > 534.80user 30.92system 2:23.64elapsed 393%CPU
> > > 534.75user 31.01system 2:23.70elapsed 393%CPU
> > >
On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > 2.6.21-rc7-cfs-v2
> > 534.80user 30.92system 2:23.64elapsed 393%CPU
> > 534.75user 31.01system 2:23.70elapsed 393%CPU
> > 534.66user 31.07system 2:23.76elapsed 393%CPU
> > 534.56user
Matt Mackall wrote:
On Tue, Apr 17, 2007 at 03:59:02PM -0700, William Lee Irwin III wrote:
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
I'm working with the following suggestion:
On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
Nonlinear is a must IMO. I
On Wed, Apr 18, 2007 at 01:55:34AM -0500, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
> > I don't know how that supports your argument for unfairness,
>
> I never had such an argument. I like fairness.
>
> My argument is that -you- don't have an argument
On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
> I don't know how that supports your argument for unfairness,
I never had such an argument. I like fairness.
My argument is that -you- don't have an argument for making fairness a
-requirement-.
> processes are special only because
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 07:00:24AM +0200, Nick Piggin wrote:
> > > It's also not yet clear that a scheduler can't be taught to do the
> > > right thing with X without fiddling with nice levels.
> >
> > Being fair doesn't prevent
On Wed, Apr 18, 2007 at 07:00:24AM +0200, Nick Piggin wrote:
> > It's also not yet clear that a scheduler can't be taught to do the
> > right thing with X without fiddling with nice levels.
>
> Being fair doesn't prevent that. Implicit unfairness is wrong though,
> because it will bite people.
>
On Wed, Apr 18, 2007 at 07:00:24AM +0200, Nick Piggin wrote:
It's also not yet clear that a scheduler can't be taught to do the
right thing with X without fiddling with nice levels.
Being fair doesn't prevent that. Implicit unfairness is wrong though,
because it will bite people.
What's
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
On Wed, Apr 18, 2007 at 07:00:24AM +0200, Nick Piggin wrote:
It's also not yet clear that a scheduler can't be taught to do the
right thing with X without fiddling with nice levels.
Being fair doesn't prevent that. Implicit
On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
I don't know how that supports your argument for unfairness,
I never had such an argument. I like fairness.
My argument is that -you- don't have an argument for making fairness a
-requirement-.
processes are special only because
On Wed, Apr 18, 2007 at 01:55:34AM -0500, Matt Mackall wrote:
On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
I don't know how that supports your argument for unfairness,
I never had such an argument. I like fairness.
My argument is that -you- don't have an argument for
Matt Mackall wrote:
On Tue, Apr 17, 2007 at 03:59:02PM -0700, William Lee Irwin III wrote:
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
I'm working with the following suggestion:
On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
Nonlinear is a must IMO. I
On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
* Nick Piggin [EMAIL PROTECTED] wrote:
2.6.21-rc7-cfs-v2
534.80user 30.92system 2:23.64elapsed 393%CPU
534.75user 31.01system 2:23.70elapsed 393%CPU
534.66user 31.07system 2:23.76elapsed 393%CPU
534.56user 30.91system
On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
* Nick Piggin [EMAIL PROTECTED] wrote:
2.6.21-rc7-cfs-v2
534.80user 30.92system 2:23.64elapsed 393%CPU
534.75user 31.01system 2:23.70elapsed 393%CPU
534.66user
* Nick Piggin [EMAIL PROTECTED] wrote:
535.43user 30.62system 2:23.72elapsed 393%CPU
Thanks for testing this! Could you please try this also with:
echo 1 > /proc/sys/kernel/sched_granularity
507.68user 31.87system 2:18.05elapsed 390%CPU
507.99user 31.93system
* Andy Whitcroft [EMAIL PROTECTED] wrote:
as usual, any sort of feedback, bugreports, fixes and suggestions
are more than welcome,
Pushed this through the test.kernel.org and nothing new blew up.
Notably the kernbench figures are within expectations even on the
bigger numa systems,
On Wed, Apr 18, 2007 at 11:53:34AM +0200, Ingo Molnar wrote:
* Nick Piggin [EMAIL PROTECTED] wrote:
So looking at elapsed time, a granularity of 100ms is just behind the
mainline score. However it is using slightly less user time and
slightly more idle time, which indicates that
On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
Again, for comparison 2.6.21-rc7 mainline:
508.87user 32.47system 2:17.82elapsed 392%CPU
509.05user 32.25system 2:17.84elapsed 392%CPU
508.75user 32.26system 2:17.83elapsed
On Wednesday 18 April 2007 22:14, Nick Piggin wrote:
On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
Again, for comparison 2.6.21-rc7 mainline:
508.87user 32.47system 2:17.82elapsed 392%CPU
509.05user 32.25system
On Wednesday 18 April 2007 22:13, Nick Piggin wrote:
On Wed, Apr 18, 2007 at 11:53:34AM +0200, Ingo Molnar wrote:
* Nick Piggin [EMAIL PROTECTED] wrote:
So looking at elapsed time, a granularity of 100ms is just behind the
mainline score. However it is using slightly less user time and
Chris Friesen wrote:
Peter Williams wrote:
Chris Friesen wrote:
Suppose I have a really high priority task running. Another very
high priority task wakes up and would normally preempt the first one.
However, there happens to be another cpu available. It seems like it
would be a win if we
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
Why are processes special? Should user A be able to get more CPU time
for his job than user B by splitting it into N parallel jobs? Should
we be fair per process, per user, per thread group, per session, per
controlling terminal?
On Wed, 18 Apr 2007, Matt Mackall wrote:
Why is X special? Because it does work on behalf of other processes?
Lots of things do this. Perhaps a scheduler should focus entirely on
the implicit and directed wakeup matrix and optimizing that
instead[1].
I 100% agree - the perfect scheduler
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
And fairness by euid is probably a hell of a lot easier to do than
trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole NxN matrix
(or do the implied O(n**3) matrix inversion!) to get to
Hi Ingo and all,
On Friday 13 April 2007, Ingo Molnar wrote:
as usual, any sort of feedback, bugreports, fixes and suggestions are
more than welcome,
I just gave CFS a try on my system. From a user's point of view it looks good
so far. Thanks for your work.
However I found a problem: When
* Christian Hesse [EMAIL PROTECTED] wrote:
Hi Ingo and all,
On Friday 13 April 2007, Ingo Molnar wrote:
as usual, any sort of feedback, bugreports, fixes and suggestions are
more than welcome,
I just gave CFS a try on my system. From a user's point of view it
looks good so far.
On Wed, 18 Apr 2007, Matt Mackall wrote:
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
And fairness by euid is probably a hell of a lot easier to do than
trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole NxN matrix
(or
* Linus Torvalds [EMAIL PROTECTED] wrote:
The fact is:
- fairness is *not* about giving everybody the same amount of CPU
time (scaled by some niceness level or not). Anybody who thinks
that is fair is just being silly and hasn't thought it through.
yeah, very much so.
But note
* Ingo Molnar [EMAIL PROTECTED] wrote:
In that sense 'fairness' is not global (and in fact it is almost
_never_ a global property, as X runs under root uid [*]), it is only
the most lowlevel scheduling machinery that can then be built upon.
[...]
perhaps a more fitting term would be
On Wed, Apr 18, 2007 at 10:22:59AM -0700, Linus Torvalds wrote:
So I claim that anything that cannot be fair by user ID is actually really
REALLY unfair. I think it's absolutely humongously STUPID to call
something the Completely Fair Scheduler, and then just be fair on a
thread level.
* William Lee Irwin III [EMAIL PROTECTED] wrote:
It does largely achieve the sort of fairness it set out for itself as
its design goal. One should also note that the queueing mechanism is
more than flexible enough to handle prioritization by a number of
different methods, and the large
On Wed, 18 Apr 2007 10:22:59 -0700 (PDT)
Linus Torvalds [EMAIL PROTECTED] wrote:
So if you have 2 users on a machine running CPU hogs, you should
*first* try to be fair among users. If one user then runs 5 programs,
and the other one runs just 1, then the *one* program should get 50%
of the
On Wed, 18 Apr 2007 10:22:59 -0700 (PDT), Linus Torvalds [EMAIL PROTECTED]
wrote:
So if you have 2 users on a machine running CPU hogs, you should *first*
try to be fair among users. If one user then runs 5 programs, and the
other one runs just 1, then the *one* program should get 50% of
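The arithmetic in Linus's two-user example can be sketched as a two-level split: divide CPU time equally among users first, then equally among each user's runnable tasks. A toy illustration only (this is not CFS code; the function and user names are made up):

```python
def per_task_share(users):
    """Two-level fair split: equal share per user, then equal share
    per runnable task within each user. `users` maps a user name to
    their number of runnable CPU hogs."""
    user_share = 1.0 / len(users)
    return {user: user_share / ntasks for user, ntasks in users.items()}

# Two users on the machine: one runs 5 CPU hogs, the other runs 1.
shares = per_task_share({"alice": 5, "bob": 1})
# Bob's single program gets 50% of the CPU; each of Alice's five gets 10%.
```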
On Wed, 18 Apr 2007, Matt Mackall wrote:
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
And fairness by euid is probably a hell of a lot easier to do than
trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole NxN matrix
(or do
On 4/18/07, Matt Mackall [EMAIL PROTECTED] wrote:
For the record, you actually don't need to track a whole NxN matrix
(or do the implied O(n**3) matrix inversion!) to get to the same
result. You can converge on the same node weightings (ie dynamic
priorities) by applying a damped function at
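The damped-function idea Matt Mackall describes — converging on per-task weightings without storing an NxN waker/wakee matrix — can be sketched roughly like this. Everything here (the `Task` record, the `boost` field, the damping constant) is an illustrative assumption, not kernel code:

```python
class Task:
    """Toy per-task record: a damped 'boost' accumulated at wakeup
    time, standing in for a full NxN waker/wakee matrix."""
    def __init__(self, name):
        self.name = name
        self.boost = 0.0

DAMPING = 0.5  # fraction of the waker's weight passed on per wakeup (assumed)

def wakeup(waker, wakee):
    # Each wakeup folds a damped portion of the waker's boost (plus one
    # unit for the event itself) into the wakee, so repeated application
    # converges to a fixed point instead of growing without bound.
    wakee.boost = DAMPING * wakee.boost + DAMPING * (1.0 + waker.boost)

x = Task("X")
client = Task("client")
for _ in range(50):        # a steady stream of wakeups directed at X
    wakeup(client, x)
# x.boost converges to DAMPING * (1 + 0) / (1 - DAMPING) = 1.0
```

The point of the damping is exactly the convergence comment at the end: the per-task weight settles without ever materializing the full matrix or inverting it.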
On Wed, 18 Apr 2007, Ingo Molnar wrote:
But note that most of the reported CFS interactivity wins, as surprising
as it might be, were due to fairness between _the same user's tasks_.
And *ALL* of the CFS interactivity *losses* and complaints have been
because it did the wrong thing
Mark Glines wrote:
One minor question: is it even possible to be completely fair on SMP?
For instance, if you have a 2-way SMP box running 3 applications, one of
which has 2 threads, will the threaded app have an advantage here? (The
current system seems to try to keep each thread on a
On Wed, 18 Apr 2007, Ingo Molnar wrote:
perhaps a more fitting term would be 'precise group-scheduling'. Within
the lowest level task group entity (be that thread group or uid group,
etc.) 'precise scheduling' is equivalent to 'fairness'.
Yes. Absolutely. Except I think that at least if
* Linus Torvalds [EMAIL PROTECTED] wrote:
For example, maybe we can approximate it by spreading out the
statistics: right now you have things like
- last_ran, wait_runtime, sum_wait_runtime..
be per-thread things. [...]
yes, yes, yes! :) My thinking is struct sched_group embedded into
On Wed, 18 Apr 2007, William Lee Irwin III wrote:
Thinking of the scheduler as a CPU bandwidth allocator, this means
handing out shares of CPU bandwidth to all users on the system, which
in turn hand out shares of bandwidth to all sessions, which in turn
hand out shares of bandwidth to all
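The recursive hand-out of bandwidth shares that William Lee Irwin III describes (users, then sessions, then further down) can be modelled as a tree walk. A toy model with equal weights at every level, assumed for illustration:

```python
def allocate(node, bandwidth=1.0, path=()):
    """Recursively split CPU bandwidth equally at each level of a
    user -> session -> ... hierarchy (toy model of the share
    hand-out; equal per-child weights assumed)."""
    children = node.get("children")
    if not children:
        return {path: bandwidth}
    share = bandwidth / len(children)
    out = {}
    for name, child in children.items():
        out.update(allocate(child, share, path + (name,)))
    return out

tree = {"children": {
    "userA": {"children": {"sess1": {}, "sess2": {}}},
    "userB": {"children": {"sess1": {}}},
}}
shares = allocate(tree)
# userB's single session gets 50%; userA's two sessions get 25% each.
```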
On Wed, 18 Apr 2007, Linus Torvalds wrote:
I'm not arguing against fairness. I'm arguing against YOUR notion of
fairness, which is obviously bogus. It is *not* fair to try to give out
CPU time evenly!
Perhaps on the rare occasion pursuing the right course demands an act of
unfairness,
On Wed, 18 Apr 2007, Linus Torvalds wrote:
For example, maybe we can approximate it by spreading out the statistics:
right now you have things like
- last_ran, wait_runtime, sum_wait_runtime..
be per-thread things. Maybe some of those can be spread out, so that you
put a part of them
On Wed, 18 Apr 2007, Davide Libenzi wrote:
Perhaps on the rare occasion pursuing the right course demands an act of
unfairness, unfairness itself can be the right course?
I don't think that's the right issue.
It's just that fairness != equal.
Do you think it fair to pay everybody the
* Linus Torvalds [EMAIL PROTECTED] wrote:
perhaps a more fitting term would be 'precise group-scheduling'.
Within the lowest level task group entity (be that thread group or
uid group, etc.) 'precise scheduling' is equivalent to 'fairness'.
Yes. Absolutely. Except I think that at
* Davide Libenzi [EMAIL PROTECTED] wrote:
I think Ingo's idea of a new sched_group to contain the generic
parameters needed for the key calculation, works better than adding
more fields to existing structures (that would, of course, host
pointers to it). Otherwise I can already see the
On Wednesday 18 April 2007 22:33, Con Kolivas wrote:
On Wednesday 18 April 2007 22:14, Nick Piggin wrote:
On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
Again, for comparison 2.6.21-rc7 mainline:
508.87user
On Wed, 18 Apr 2007, Ingo Molnar wrote:
That's one reason why i dont think it's necessarily a good idea to
group-schedule threads, we dont really want to do a per thread group
percpu_alloc().
I still do not have clear how much overhead this will bring into the
table, but I think (like
On Wed, 18 Apr 2007, Linus Torvalds wrote:
On Wed, 18 Apr 2007, Davide Libenzi wrote:
Perhaps on the rare occasion pursuing the right course demands an act of
unfairness, unfairness itself can be the right course?
I don't think that's the right issue.
It's just that fairness !=
Linus Torvalds wrote:
On Wed, 18 Apr 2007, Matt Mackall wrote:
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
And fairness by euid is probably a hell of a lot easier to do than
trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole
On Wed, 18 Apr 2007, Davide Libenzi wrote:
I know, we agree there. But that did not fit my Pirates of the Caribbean
quote :)
Ahh, I'm clearly not cultured enough, I didn't catch that reference.
Linus
yes, I've seen the movie, but it apparently left more of a mark
Chris Friesen wrote:
Mark Glines wrote:
One minor question: is it even possible to be completely fair on SMP?
For instance, if you have a 2-way SMP box running 3 applications, one of
which has 2 threads, will the threaded app have an advantage here? (The
current system seems to try to keep
Ingo Molnar wrote:
* Peter Williams [EMAIL PROTECTED] wrote:
And my scheduler for example cuts down the amount of policy code and
code size significantly.
Yours is one of the smaller patches mainly because you perpetuate (or
you did in the last one I looked at) the (horrible to my eyes) dual
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
On Wed, 18 Apr 2007, Matt Mackall wrote:
Why is X special? Because it does work on behalf of other processes?
Lots of things do this. Perhaps a scheduler should focus entirely on
the implicit and directed wakeup matrix
On Wed, Apr 18, 2007 at 10:49:45PM +1000, Con Kolivas wrote:
On Wednesday 18 April 2007 22:13, Nick Piggin wrote:
The kernel compile (make -j8 on 4 thread system) is doing 1800 total
context switches per second (450/s per runqueue) for cfs, and 670
for mainline. Going up to 20ms
On Thu, 19 Apr 2007 05:18:07 +0200 Nick Piggin [EMAIL PROTECTED] wrote:
And yes, by fairly, I mean fairly among all threads as a base resource
class, because that's what Linux has always done
Yes, there are potential compatibility problems. Example: a machine with
100 busy httpd processes and
Peter Williams wrote:
Chris Friesen wrote:
Suppose I have a really high priority task running. Another very high
priority task wakes up and would normally preempt the first one.
However, there happens to be another cpu available. It seems like it
would be a win if we moved one of those
On Tue, Apr 17, 2007 at 11:38:31PM -0500, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 05:15:11AM +0200, Nick Piggin wrote:
> >
> > I don't know why this would be a useful feature (of course I'm talking
> > about processes at the same nice level). One of the big problems with
> > the current
On Wed, Apr 18, 2007 at 05:15:11AM +0200, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> > On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> > > On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> > > > On Mon, Apr 16, 2007 at
On Tue, Apr 17, 2007 at 11:16:54PM +1000, Peter Williams wrote:
> Nick Piggin wrote:
> >I don't like the timeslice based nice in mainline. It's too nasty
> >with latencies. nicksched is far better in that regard IMO.
> >
> >But I don't know how you can assert a particular way is the best way
> >to
On Tue, 17 Apr 2007, William Lee Irwin III wrote:
> 100**(1/39.0) ~= 1.12534 may do if so, but it seems a little weak, and
> even 1000**(1/39.0) ~= 1.19378 still seems weak.
>
> I suspect that in order to get low nice numbers strong enough without
> making high nice numbers too strong something
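The per-step bases quoted above come from spreading a total end-to-end CPU ratio across the 39 steps of the nice range (-20..19), and can be checked directly:

```python
def per_step_base(total_ratio, steps=39):
    # Base b such that b**steps == total_ratio across the full
    # nice range (-20..19, i.e. 39 one-step increments).
    return total_ratio ** (1.0 / steps)

b100 = per_step_base(100)    # ~1.12534: 100x spread from nice -20 to 19
b1000 = per_step_base(1000)  # ~1.19378: 1000x spread from nice -20 to 19
```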
On Wed, 2007-04-18 at 05:56 +0200, Nick Piggin wrote:
> On Wed, Apr 18, 2007 at 05:45:20AM +0200, Mike Galbraith wrote:
> > On Wed, 2007-04-18 at 05:15 +0200, Nick Piggin wrote:
> > >
> > >
> > > So on what basis would you allow unfairness? On the basis that it doesn't
> > > seem to harm anyone?
On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
>>> Nonlinear is a must IMO. I would suggest X = exp(ln(10)/10) ~= 1.2589
>>> That value has the property that a nice=10 task gets 1/10th the cpu of a
>>> nice=0 task, and a nice=20 task gets 1/100 of nice=0. I think that
>>> would be
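James Bruce's suggested base and its stated property can be verified numerically: with a purely exponential weighting (an illustrative model, not any scheduler's actual code), X = exp(ln(10)/10) gives each 10-nice step exactly a factor of 10 in CPU share:

```python
import math

X = math.exp(math.log(10) / 10)   # ~1.2589, i.e. the 10th root of 10

def relative_cpu(nice):
    # CPU share of a task at `nice` relative to a nice=0 task,
    # under an exponential per-step weighting (toy model).
    return X ** (-nice)

# nice=10 -> 1/10th of nice=0; nice=20 -> 1/100th of nice=0.
```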
On Wed, Apr 18, 2007 at 05:45:20AM +0200, Mike Galbraith wrote:
> On Wed, 2007-04-18 at 05:15 +0200, Nick Piggin wrote:
> > On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> > >
> > > I'm a big fan of fairness, but I think it's a bit early to declare it
> > > a mandatory feature.
On Wed, 2007-04-18 at 05:15 +0200, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> >
> > I'm a big fan of fairness, but I think it's a bit early to declare it
> > a mandatory feature. Bounded unfairness is probably something we can
> > agree on, ie "if we
On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> > On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> > > On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
> > > >> All things
Michael K. Edwards wrote:
On 4/17/07, Peter Williams <[EMAIL PROTECTED]> wrote:
The other way in which the code deviates from the original is that (for
a few years now) I no longer calculated CPU bandwidth usage directly.
I've found that the overhead is less if I keep a running average of the
William Lee Irwin III wrote:
Peter Williams wrote:
William Lee Irwin III wrote:
I was tempted to restart from scratch given Ingo's comments, but I
reconsidered and I'll be working with your code (and the German
students' as well). If everything has to change, so be it, but it'll
still be a
> Peter Williams wrote:
> >William Lee Irwin III wrote:
> >>I was tempted to restart from scratch given Ingo's comments, but I
> >>reconsidered and I'll be working with your code (and the German
> >>students' as well). If everything has to change, so be it, but it'll
> >>still be a derived work.
Peter Williams wrote:
William Lee Irwin III wrote:
I was tempted to restart from scratch given Ingo's comments, but I
reconsidered and I'll be working with your code (and the German
students' as well). If everything has to change, so be it, but it'll
still be a derived work. It would be
On Tue, Apr 17, 2007 at 04:52:08PM -0700, Michael K. Edwards wrote:
> On 4/17/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> >The ongoing scheduler work is on a much more basic level than these
> >affairs I'm guessing you googled. When the basics work as intended it
> >will be possible to
On 4/17/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
The ongoing scheduler work is on a much more basic level than these
affairs I'm guessing you googled. When the basics work as intended it
will be possible to move on to more advanced issues.
OK, let me try this in smaller words for
Chris Friesen wrote:
Peter Williams wrote:
Chris Friesen wrote:
Scuse me if I jump in here, but doesn't the load balancer need some
way to figure out a) when to run, and b) which tasks to pull and
where to push them?
Yes but both of these are independent of the scheduler discipline in
On Wed, Apr 18, 2007 at 09:23:42AM +1000, Peter Williams wrote:
> Matt Mackall wrote:
> >On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> >>On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> >>>On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III
Matt Mackall wrote:
On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
All things are not equal; they all have different properties. I like
On Tue, Apr 17, 2007 at 03:59:02PM -0700, William Lee Irwin III wrote:
> On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
> >> I'm already working with this as my assumed nice semantics (actually
> >> something with a specific exponential base, suggested in other emails)
> >>
On Tue, Apr 17, 2007 at 04:00:53PM -0700, Michael K. Edwards wrote:
> Works, that is, right up until you add nonlinear interactions with CPU
> speed scaling. From my perspective as an embedded platform
> integrator, clock/voltage scaling is the elephant in the scheduler's
> living room. Patch in
On 4/17/07, Peter Williams <[EMAIL PROTECTED]> wrote:
The other way in which the code deviates from the original is that (for
a few years now) I no longer calculated CPU bandwidth usage directly.
I've found that the overhead is less if I keep a running average of the
size of a tasks CPU bursts
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
>> I'm already working with this as my assumed nice semantics (actually
>> something with a specific exponential base, suggested in other emails)
>> until others start saying they want something different and agree.
On Tue,
On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
> On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
> >> yeah. If you could come up with a sane definition that also translates
> >> into low overhead on the algorithm side that would be great!
>
> On Tue, Apr 17,
On Tue, Apr 17, 2007 at 11:24:22AM +0200, Ingo Molnar wrote:
>> yeah. If you could come up with a sane definition that also translates
>> into low overhead on the algorithm side that would be great!
On Tue, Apr 17, 2007 at 05:08:09PM -0500, Matt Mackall wrote:
> How's this:
> If you're running
On Tue, 17 Apr 2007, William Lee Irwin III wrote:
> On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> > Latency. Given N tasks in the system, an arbitrary task should get
> > onto the CPU in a bounded amount of time (excluding events like freak
> > IRQ holdoffs and such, obviously --
101 - 200 of 623 matches