Hi,
On Sun, 5 Aug 2007, Arjan van de Ven wrote:
There's no problem with providing a high-resolution sleep, but there is also
no reason to mess with msleep; don't fix what ain't broken...
Jonathan Corbet provided the patch because he had a problem with the current
msleep... in that it didn't
Hi,
On Sat, 4 Aug 2007, Arjan van de Ven wrote:
> > hr_msleep makes no sense. Why should we tie this interface to millisecond
> > resolution?
>
> because a lot of parts of the kernel think and work in milliseconds,
> it's logical and USEFUL to at least provide an interface that works on
>
Hi,
On Sun, 5 Aug 2007, Arjan van de Ven wrote:
Timers are coarse resolution that is highly HZ-value dependent. For
cases where you want a finer resolution, the kernel now has a way to
provide that functionality... so why not use the quality of service this
provides..
We're going in circles
Hi,
On Fri, 3 Aug 2007, Jonathan Corbet wrote:
> Most comments last time were favorable. The one dissenter was Roman,
> who worries about the overhead of using hrtimers for this operation; my
> understanding is that he would rather see a really_msleep() function for
> those who actually want
Hi,
On Fri, 3 Aug 2007, Arjan van de Ven wrote:
On Fri, 2007-08-03 at 21:19 +0200, Roman Zippel wrote:
Hi,
On Fri, 3 Aug 2007, Jonathan Corbet wrote:
Most comments last time were favorable. The one dissenter was Roman,
who worries about the overhead of using hrtimers
Hi,
On Fri, 3 Aug 2007, Arjan van de Ven wrote:
Actually the hrsleep() function would allow for submillisecond sleeps,
which might be what some of the 450 users really want and they only use
msleep(1) because it's the next best thing.
A hrsleep() function is really what makes most sense
Hi,
On Wed, 1 Aug 2007, Linus Torvalds wrote:
> So I think it would be entirely appropriate to
>
> - do something that *approximates* microseconds.
>
> Using microseconds instead of nanoseconds would likely allow us to do
> 32-bit arithmetic in more areas, without any real overflow.
Hi,
On Wed, 1 Aug 2007, Peter Zijlstra wrote:
> Took me most of today trying to figure out WTH you did in fs2.c, more
> math and fundamental explanations would have been good. So please bear
> with me as I try to recap this thing. (No, your code was very much _not_
> obvious, a few comments and
Hi,
On Thu, 2 Aug 2007, Ingo Molnar wrote:
Most importantly, CFS _already_ includes a number of measures that act
against too frequent math. So even though you can see 64-bit math code
in it, it's only rarely called if your clock has a low resolution - and
that happens all automatically!
Hi,
On Wed, 1 Aug 2007, Ingo Molnar wrote:
[...] e.g. in this example there are three tasks that run only for
about 1ms every 3ms, but they get far more time than should have
gotten fairly:
4544 roman 20 0 1796 520 432 S 32.1 0.4 0:21.08 lt
4545 roman 20 0
Hi,
On Wed, 1 Aug 2007, Ingo Molnar wrote:
* Roman Zippel [EMAIL PROTECTED] wrote:
[...] the increase in code size:
2.6.22:
   text    data     bss     dec     hex filename
  10150      24    3344   13518    34ce kernel/sched.o
recent git:
   text    data     bss     dec
Hi,
On Wed, 1 Aug 2007, Andi Kleen wrote:
especially if one already knows that
scheduler clock has only limited resolution (because it's based on
jiffies), it becomes possible to use mostly 32bit values.
jiffies based sched_clock should be soon very rare. It's probably
not worth
Hi,
On Wed, 1 Aug 2007, Ingo Molnar wrote:
Please also send me the output of this script:
http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh
Send privately.
Could you also please send the source code for the "l.c" and "lt.c" apps
you used for your testing so i can have a
Hi,
On Wed, 1 Aug 2007, Ingo Molnar wrote:
in that case 'top' accounting symptoms similar to the above are not
due to the scheduler starvation you suspected, but due to the effect of
a low-resolution scheduler clock and a tightly coupled
timer/scheduler tick to it.
Well, it
Hi,
On Wed, 1 Aug 2007, Ingo Molnar wrote:
Andi's theory cannot be true either, Roman's debug info also shows this
/proc/PID/sched data:
clock-delta : 95
that means that sched_clock() is in high-res mode, the TSC is alive and
kicking and a sched_clock()
Hi,
On Wed, 1 Aug 2007, Ingo Molnar wrote:
[...] I didn't say 'sleeper starvation' or 'rounding error', these are
your words and it's your perception of what I said.
Oh dear :-) It was indeed my perception that yesterday you said:
*sigh* and here you go off again nitpicking on a minor
Hi,
On Sat, 28 Jul 2007, Linus Torvalds wrote:
> We've had people go with a splash before. Quite frankly, the current
> scheduler situation looks very much like the CML2 situation. Anybody
> remember that? The developer there also got rejected, the improvement was
> made differently (and much
Hi,
On Sat, 14 Jul 2007, Mike Galbraith wrote:
On Fri, 13 Jul 2007, Mike Galbraith wrote:
The new scheduler does _a_lot_ of heavy 64 bit calculations without any
attempt to scale that down a little...
See prio_to_weight[], prio_to_wmult[] and sysctl_sched_stat_granularity.
Hi,
On Tuesday 24 July 2007, Rodolfo Giometti wrote:
> By doing:
>
> struct pps_ktime {
> __u64 sec;
> - __u32 nsec;
> + __u64 nsec;
> };
Just using __u32 for both works as well...
bye, Roman
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the
Hi,
On Saturday 21 July 2007, Andrew Morton wrote:
On Sat, 21 Jul 2007 00:58:01 +0200 Sebastian Siewior
[EMAIL PROTECTED] wrote:
Got with randconfig
randconfig apparently generates impossible configs. Please always
run `make oldconfig' after the randconfig, then do the test build.
If
Hi,
On Mon, 16 Jul 2007, Jonathan Corbet wrote:
That's a bit of my problem - we have to consider other setups as well.
Is it worth converting all msleep users behind their back or should we
just provide a separate function for those who care?
Any additional overhead is clearly small -
Hi,
On Wed, 18 Jul 2007, Ingo Molnar wrote:
[more rude insults deleted]
I've been waiting for that obvious question, and i _might_ be able
to answer it, but somehow it never occurred to you ;-) Thanks,
the ";-)" emoticon (and its contents) clearly signals this as a
sarcastic,
Hi,
On Wed, 18 Jul 2007, Ingo Molnar wrote:
Why do you constantly stress level 19? Yes, that one is special, all
other positive levels were already relatively consistent.
i constantly stress it for the reason i mentioned a good number of
times: because it's by far the most commonly
Hi,
On Wed, 18 Jul 2007, Peter Zijlstra wrote:
> By breaking the UNIX model of nice levels. Not an option in my book.
Breaking user expectations of nice levels is?
bye, Roman
Hi,
On Wed, 18 Jul 2007, Ingo Molnar wrote:
Roman, please do me a favor, and ask me the following question:
[insult deleted]
In this discussion about
nice levels you were (very) aggressively asserting things that were
untrue,
Instead of simply asserting things, how about you
Hi,
On Wed, 18 Jul 2007, Peter Zijlstra wrote:
I actually like the extra range, it allows for a much softer punch of
background tasks even on somewhat slower boxen.
The extra range is not really a problem, in
http://www.ussg.iu.edu/hypermail/linux/kernel/0707.2/0850.html
I suggested how we
Hi,
On Wed, 18 Jul 2007, Peter Zijlstra wrote:
By breaking the UNIX model of nice levels. Not an option in my book.
BTW what is the "UNIX model of nice levels"?
SUS specifies the limit via NZERO, which is defined as "Minimum Acceptable
Value: 20", I can't find any information that it must be 20.
Hi,
On Wed, 18 Jul 2007, Peter Zijlstra wrote:
The only expectation is that a process with a lower nice level gets more
time. Any other expectation is a bug.
Yes, users are buggy, they expect a lot of stupid things...
Is this really reason enough to break this?
What exactly is the damage if
Hi,
On Wed, 18 Jul 2007, Ingo Molnar wrote:
_changing_ it is an option within reason, and we've done it a couple of
times already in the past, and even within CFS (as Peter correctly
observed) we've been through a couple of iterations already. And as i
mentioned it before, the outer edge
Hi,
On Tue, 17 Jul 2007, Ingo Molnar wrote:
Roman, please do me a favor, and ask me the following question:
Ingo, you've been maintaining the scheduler for years. In fact you
wrote the old nice code we are talking about here. You changed it a
number of times since then. So you
Hi,
On Tue, 17 Jul 2007, Ingo Molnar wrote:
* Roman Zippel [EMAIL PROTECTED] wrote:
It's nice that these artifacts are gone, but that still doesn't
explain why this ratio had to be increased that much from around
1:10 to 1:69.
More dynamic range is better? If you actually
Hi,
On Mon, 16 Jul 2007, Linus Torvalds wrote:
> How about trying a much less aggressive nice-level (and preferably linear,
> not exponential)?
I think the exponential increase isn't the problem. The old code did
approximate something like this rather crudely with the result that there
was a
Hi,
On Sun, 15 Jul 2007, Jonathan Corbet wrote:
> The OLPC folks and I recently discovered something interesting: on a
> HZ=100 system, a call to msleep(1) will delay for about 20ms. The
> combination of jiffies timekeeping and rounding up means that the
> minimum delay from msleep will be two
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
> to sum it up: a nice +19 task (the most commonly used nice level in
> practice) gets 9.1%, 3.9%, 3.1% of CPU time on the old scheduler,
> depending on the value of HZ. This is quite inconsistent and illogical.
You're correct that you can find
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
> > As soon as you add another loop the difference changes again, while
> > it's always correct to say it gets 25% more cpu time [...]
>
> yep, and i'll add the relative effect to the comment too.
Why did you cut off the rest of the sentence?
To
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
> i dont think there's any significant overhead. The OLPC folks are pretty
> sensitive to performance,
How is a sleep function relevant to performance?
bye, Roman
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
yes, the weight multiplier 1.25, but the actual difference in CPU
utilization, when running two CPU intense tasks, is ~10%:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
8246 mingo 20 0 1576 244 196 R 55 0.0
Hi,
On Sun, 15 Jul 2007, Jonathan Corbet wrote:
Here's another approach: a reimplementation of msleep() and
msleep_interruptible() using hrtimers. On a system without real
hrtimers this code will at least drop down to single-jiffy delays much
of the time (though not deterministically so).
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
i'm not sure how your question relates/connects to what i wrote above,
could you please re-phrase your question into a bit more verbose form so
that i can answer it? Thanks,
Well, you cut out the major question from my initial mail:
One question
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
As soon as you add another loop the difference changes again,
while it's always correct to say it gets 25% more cpu time [...]
yep, and i'll add the relative effect to the comment too.
Why did you cut off the rest of the sentence?
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
Well, you cut out the major question from my initial mail:
One question here would be, is it really a problem to sleep a little more?
oh, i did not want to embarrass you (and distract the discussion) with
answering a pretty stupid, irrelevant
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
because when i assumed the obvious, you called it an
insult so please dont leave any room for assumptions and remove any
ambiguity - especially as our communication seems to be marred by what
appears to be frequent misunderstandings ;-)
What
Hi,
On Mon, 16 Jul 2007, Jonathan Corbet wrote:
One possible problem here is that setting up that timer can be
considerably more expensive, for a relative timer you have to read the
current time, which can be quite expensive (e.g. your machine now uses the
PIT timer, because TSC was
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
I explained it numerous times (remember the 'timeout' vs. 'timer event'
discussion?) that i consider timer granularity important to scalability.
Basically, in every case where we know with great certainty that a
time-out will _not_ occur (where
Hi,
On Mon, 16 Jul 2007, Matt Mackall wrote:
It's nice that these artifacts are gone, but that still doesn't explain
why this ratio had to be increased that much from around 1:10 to 1:69.
More dynamic range is better? If you actually want a task to get 20x
the CPU time of another, the
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
and note that even on the old scheduler, nice-0 was "3200% more
powerful" than nice +19 (with CONFIG_HZ=300),
How did you get that value? At any HZ the ratio should be around 1:10
(+- rounding error).
in fact i like it that nice -20 has a slightly
Hi,
On Tue, 17 Jul 2007, Ingo Molnar wrote:
* Roman Zippel [EMAIL PROTECTED] wrote:
Hi,
On Mon, 16 Jul 2007, Ingo Molnar wrote:
and note that even on the old scheduler, nice-0 was 3200% more
powerful than nice +19 (with CONFIG_HZ=300),
How did you get that value? At any
Hi,
On Tue, 17 Jul 2007, I wrote:
Playing around with some other nice levels, confirms the theory that
something is a little off, so I'm quite correct in saying that the ratio
_should_ be 1:10.
Rechecking everything there was actually a small error in my test program,
so the ratio should
Hi,
On Fri, 13 Jul 2007, Mike Galbraith wrote:
> > The new scheduler does _a_lot_ of heavy 64 bit calculations without any
> > attempt to scale that down a little...
>
> See prio_to_weight[], prio_to_wmult[] and sysctl_sched_stat_granularity.
> Perhaps more can be done, but "without any
Hi,
On Fri, 13 Jul 2007, Mike Galbraith wrote:
The new scheduler does _a_lot_ of heavy 64 bit calculations without any
attempt to scale that down a little...
See prio_to_weight[], prio_to_wmult[] and sysctl_sched_stat_granularity.
Perhaps more can be done, but "without any attempt..."
Hi,
On Wed, 11 Jul 2007, Linus Torvalds wrote:
> Sure, bugs happen, but code that everybody runs the same generally doesn't
> break. So a CPU scheduler doesn't worry me all that much. CPU schedulers
> are "easy".
A little more advance warning wouldn't have hurt though.
The new scheduler does
Hi,
On Thu, 12 Jul 2007, I wrote:
> On Wed, 11 Jul 2007, Rob Landley wrote:
>
> > Replace name "Linux Kernel" in menuconfig with a macro (defaulting to "Linux
> > Kernel" if not -Ddefined by the makefile), and remove a few unnecessary
> > occurrences of "kernel" in pop-up text.
>
> Could you
Hi,
On Wed, 11 Jul 2007, Rob Landley wrote:
> Replace name "Linux Kernel" in menuconfig with a macro (defaulting to "Linux
> Kernel" if not -Ddefined by the makefile), and remove a few unnecessary
> occurrences of "kernel" in pop-up text.
Could you drop the PROJECT_NAME changes for now? The