I don't want to open any old wounds, but I just got a summary from a
colleague of mine, Dan Tsafrir, who measured the context switch overhead
on Linux with multiple processes.
You can find the document at:
http://www.cs.huji.ac.il/~dants/linux-2.2.18-context-switch.ps
The measurements were
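The pointer above is just a summary; for readers who want the flavor of such a measurement, here is a minimal user-space sketch (my own illustration, not Dan Tsafrir's harness) of the classic lmbench-style lat_ctx trick: bounce a byte between two processes over pipes, so each round trip forces at least two context switches. The function name is invented.

```python
# Illustrative only: estimate context-switch cost by ping-ponging one byte
# between parent and child over a pipe pair (lmbench lat_ctx style).
import os
import time

def switch_cost_usec(iters=2000):
    p2c_r, p2c_w = os.pipe()   # parent -> child
    c2p_r, c2p_w = os.pipe()   # child -> parent
    pid = os.fork()
    if pid == 0:               # child: echo every byte straight back
        for _ in range(iters):
            os.write(c2p_w, os.read(p2c_r, 1))
        os._exit(0)
    t0 = time.perf_counter()
    for _ in range(iters):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    t1 = time.perf_counter()
    os.waitpid(pid, 0)
    # two switches per round trip; pipe overhead makes this an upper bound
    return (t1 - t0) / (2 * iters) * 1e6

print("~%.2f usec per switch (upper bound)" % switch_cost_usec())
```

This deliberately reports an upper bound, since the pipe read/write syscalls are timed along with the switches.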
On Fri, Apr 06, 2001 at 11:06:03AM -0700, Timothy D. Witham wrote:
> Timothy D. Witham wrote :
> > I propose that we work on setting up a straight forward test harness
> > that allows developers to quickly test a kernel patch against
> > various performance yardsticks.
The Linux Test Project
Missing an important one: our VPNs routinely run on 16 MB of RAM, no HD or swap..
Loaded from an initrd on a floppy..
Don't we need to test on minimalistic machines as well :)
> So the server hardware configurations have evolved to look like
> the following.
>
> 1 way, 512 MB, 2 IDE
>
Timothy D. Witham wrote :
[...]
> I propose that we work on setting up a straight forward test harness
> that allows developers to quickly test a kernel patch against
> various performance yardsticks.
[...
(proposed large server testbeds)
...]
OK, so I have received some feedback on my
--On Thursday, April 05, 2001 15:38:41 -0700 "Timothy D. Witham"
<[EMAIL PROTECTED]> wrote:
> Database performance:
> Raw storage I/O performance
> OLTP workload
You probably want to add an OLAP scenario as well.
--Chris
Timothy D. Witham wrote :
[...]
> I propose that we work on setting up a straight forward test harness
> that allows developers to quickly test a kernel patch against
> various performance yardsticks.
[...
(proposed large server testbeds)
...]
I like this idea, but could the testbeds also
I have been following this thread and thinking that everybody has some truth in
what they are saying but with the absence of a repeatable test environment there
really isn't a way of arriving at a data driven decision.
Given the following conditions.
1) The diversity of the problem sets that
This concept I think is used in Solaris .. as they have dynamically loadable
schedulers..
Zdenek Kabelac <[EMAIL PROTECTED]> on 04/05/2001 05:43:15 PM
To: Andrea Arcangeli <[EMAIL PROTECTED]>
cc:(bcc: Amol Lad/HSS)
Subject: Re: [Lse-tech] Re: a quest for a better schedule
Hello
Just a dumb idea - why not make the scheduler switchable with modules - so
users could select any scheduler they want?
This should not be that hard and would make it easy to replace scheduler
at runtime so everyone could easily try what's the best for him/her.
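To make the "switchable scheduler" idea concrete, here is a toy user-space model in Python (nothing like a real kernel interface; every class and field name is invented for illustration): the scheduler exposes one stable pick_next() entry point, and the policy behind it can be swapped at runtime, which is essentially what a loadable scheduler module would buy you.

```python
# Toy model of runtime-swappable scheduling policies.
class FIFOPolicy:
    def pick_next(self, runqueue):
        # run the task that has waited longest
        return min(runqueue, key=lambda t: t["enqueued"])

class PriorityPolicy:
    def pick_next(self, runqueue):
        # run the highest-priority task
        return max(runqueue, key=lambda t: t["prio"])

class Scheduler:
    def __init__(self, policy):
        self.policy = policy

    def switch_policy(self, policy):
        # the "load a module" step: replace the policy behind a stable API
        self.policy = policy

    def pick_next(self, runqueue):
        return self.policy.pick_next(runqueue)

rq = [{"pid": 1, "prio": 5, "enqueued": 0},
      {"pid": 2, "prio": 20, "enqueued": 1}]

sched = Scheduler(FIFOPolicy())
assert sched.pick_next(rq)["pid"] == 1   # FIFO: longest waiter wins
sched.switch_policy(PriorityPolicy())
assert sched.pick_next(rq)["pid"] == 2   # priority: highest prio wins
```

The hard part in a real kernel is of course not the indirection but the locking and the tasks in flight while the policy changes, which this sketch ignores.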
--On Wednesday, April 04, 2001 15:16:32 -0700 Tim Wright <[EMAIL PROTECTED]>
wrote:
> On Wed, Apr 04, 2001 at 03:23:34PM +0200, Ingo Molnar wrote:
>> nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
>> You are playing word games by suggesting that the current behavior
>>
On Wed, Apr 04, 2001 at 03:23:34PM +0200, Ingo Molnar wrote:
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > I understand the dilemma that the Linux scheduler is in, namely
> > satisfy the low end at all cost. [...]
>
> nope. The goal is to satisfy runnable processes in the range of
Mark Hahn <[EMAIL PROTECTED]> on 04/04/2001 02:28:42 PM
To: Hubertus Franke/Watson/IBM@IBMUS
cc:
Subject: Re: a quest for a better scheduler
> ok if the runqueue length is limited to a very small multiple of the
> #cpus.
> But that is not what high end server systems encounter.
do you have
On Wed, Apr 04, 2001 at 10:49:04AM -0700, Kanoj Sarcar wrote:
> Imagine that most of the program's memory is on node 1, it was scheduled
> on node 2 cpu 8 momentarily (maybe because kswapd ran on node 1, other
> higher priority processes took over other cpus on node 1, etc).
>
> Then, your
>
> It helps by keeping the task in the same node if it cannot keep it in
> the same cpu anymore.
>
> Assume task A is sleeping and it last run on cpu 8 node 2. It gets a wakeup
> and it gets running and for some reason cpu 8 is busy and there are other
> cpus idle in the system. Now with the
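The wakeup placement rule quoted above can be sketched in a few lines of user-space Python (illustrative only; CPUS_PER_NODE and the helper names are my assumptions, not kernel code): prefer the task's last CPU, then an idle CPU on the same node, then any idle CPU.

```python
# Toy model of node-affine wakeup placement.
CPUS_PER_NODE = 4

def node_of(cpu):
    return cpu // CPUS_PER_NODE

def place_on_wakeup(last_cpu, idle_cpus):
    if last_cpu in idle_cpus:
        return last_cpu                      # best: cache-warm CPU
    same_node = [c for c in idle_cpus if node_of(c) == node_of(last_cpu)]
    if same_node:
        return same_node[0]                  # second best: memory is local
    return idle_cpus[0] if idle_cpus else None  # anywhere idle, else wait

# task last ran on cpu 8 (node 2); cpu 8 busy, cpu 9 idle on the same node
assert place_on_wakeup(8, [3, 9, 12]) == 9
# nothing idle on node 2: fall back to any idle cpu
assert place_on_wakeup(8, [3, 12]) == 3
```

This captures the argument in the thread: even when the exact CPU is lost, keeping the task on its home node preserves memory locality.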
> Just a quick comment. Andrea, unless your machine has some hardware
> that imply pernode runqueues will help (nodelevel caches etc), I fail
> to understand how this is helping you ... here's a simple theory though.
> If your system is lightly loaded, your pernode queues are actually
>
Kanoj Sarcar <[EMAIL PROTECTED]> on 04/04/2001 01:14:28 PM
To: Hubertus Franke/Watson/IBM@IBMUS
cc: [EMAIL PROTECTED] (Linux Kernel List),
[EMAIL PROTECTED]
Subject: Re: [Lse-tech] Re: a quest for a better scheduler
On Tue, Apr 03, 2001 at 09:21:57PM -0700, Fabio Riccardi wrote:
> I was actually suspecting that the extra lines in your patch were there for a
> reason :)
>
> A few questions:
>
> What is the real impact of a (slight) change in scheduling semantics?
>
> Under which situation one should notice
Hubertus Franke/Watson/IBM@IBMUS,
Mike Kravetz <[EMAIL PROTECTED]>, Fabio Riccardi
<[EMAIL PROTECTED]>
Subject: Re: a quest for a better scheduler
On 04-Apr-2001 Ingo Molnar wrote:
>
> On Tue, 3 Apr 2001, Fabio Riccardi wrote:
>
>> I've spent my afternoon runn
On Wed, Apr 04, 2001 at 09:50:58AM -0700, Kanoj Sarcar wrote:
> >
> > I didn't seen anything from Kanoj but I did something myself for the wildfire:
> >
> >
>ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
> >
> > this is mostly an userspace
>
>
>
> Kanoj, our cpu-pooling + loadbalancing allows you to do that.
> The system adminstrator can specify at runtime through a
> /proc filesystem interface the cpu-pool-size, whether loadbalacing
> should take place.
Yes, I think this approach can support the various requirements
put on the
[EMAIL PROTECTED] (Andrea Arcangeli)
cc: [EMAIL PROTECTED] (Ingo Molnar), Hubertus Franke/Watson/IBM@IBMUS,
[EMAIL PROTECTED] (Mike Kravetz), [EMAIL PROTECTED] (Fabio
Riccardi), [EMAIL PROTECTED] (Linux Kernel List),
[EMAIL PROTECTED]
Subject: Re: [Lse-tech] Re: a quest for a better sch
On Wed, Apr 04, 2001 at 09:39:23AM -0700, Kanoj Sarcar wrote:
> example, for NUMA, we need to try hard to schedule a thread on the
> node that has most of its memory (for no reason other than to decrease
> memory latency). Independently, some NUMA machines build in multilevel
> caches and local
>
> I didn't seen anything from Kanoj but I did something myself for the wildfire:
>
>
>ftp://ftp.us.kernel.org/pub/linux/kernel/people/andrea/kernels/v2.4/2.4.3aa1/10_numa-sched-1
>
> this is mostly an userspace issue, not really intended as a kernel optimization
> (however it's also
>
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another point to raise is that the current scheduler does a exhaustive
> > search for the "best" task to run. It touches every process in the
> > runqueue. this is ok if the runqueue length is limited to a very small
> > multiple of the
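For readers following along, the "exhaustive search" being criticized looks roughly like this (a toy Python model, not the real 2.4 goodness() function; the scoring and the bonus constant are stand-ins): every call to schedule() walks the entire runqueue once.

```python
# Toy model of the O(n) goodness scan over a single global runqueue.
def goodness(task, this_cpu):
    score = task["counter"] + task["prio"]   # stand-in scoring
    if task["last_cpu"] == this_cpu:
        score += 15                          # CPU-affinity bonus (illustrative)
    return score

def schedule(runqueue, this_cpu):
    best, best_score = None, -1
    for task in runqueue:                    # touches EVERY runnable task
        s = goodness(task, this_cpu)
        if s > best_score:
            best, best_score = task, s
    return best

rq = [{"pid": 1, "counter": 6, "prio": 20, "last_cpu": 0},
      {"pid": 2, "counter": 6, "prio": 20, "last_cpu": 1}]
# on cpu 1 the affinity bonus tips the choice to pid 2
assert schedule(rq, 1)["pid"] == 2
```

The complaint in the thread is exactly this loop: fine when the runqueue holds a handful of tasks, painful when a loaded server has hundreds runnable.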
On 04-Apr-2001 Ingo Molnar wrote:
>
> On Tue, 3 Apr 2001, Fabio Riccardi wrote:
>
>> I've spent my afternoon running some benchmarks to see if MQ patches
>> would degrade performance in the "normal case".
>
> no doubt priority-queue can run almost as fast as the current scheduler.
> What i'm
On Wed, Apr 04, 2001 at 09:44:22AM -0600, Khalid Aziz wrote:
> Let me stress that HP scheduler is not meant to be a replacement for the
> current scheduler. The HP scheduler patch allows the current scheduler
> to be replaced by another scheduler by loading a module in special
> cases.
HP also
Andrea Arcangeli wrote:
>
> On Wed, Apr 04, 2001 at 10:03:10AM -0400, Hubertus Franke wrote:
> > I understand the dilemma that the Linux scheduler is in, namely satisfy
> > the low end at all cost. [..]
>
> We can satisfy the low end by making the numa scheduler at compile time (that's
> what I
Hubertus Franke wrote:
>
> This is an important point that Mike is raising and it also addresses a
> critique that Ingo issued yesterday, namely interactivity and fairness.
> The HP scheduler completely separates the per-CPU runqueues and does
> not take preemption goodness or alike into
Hubertus Franke/Watson/IBM@IBMUS
cc: Mike Kravetz <[EMAIL PROTECTED]>, Fabio Riccardi
<[EMAIL PROTECTED]>, Linux Kernel List
<[EMAIL PROTECTED]>
Subject: Re: a quest for a better scheduler
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> I understand the dilemma that th
Fabio Riccardi <[EMAIL PROTECTED]>, Linux
Kernel List <[EMAIL PROTECTED]>,
[EMAIL PROTECTED]
Subject: Re: a quest for a better scheduler
On Wed, Apr 04, 2001 at 03:34:22PM +0200, Ingo Molnar wrote:
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another poin
On Wed, Apr 04, 2001 at 10:03:10AM -0400, Hubertus Franke wrote:
> I understand the dilemma that the Linux scheduler is in, namely satisfy
> the low end at all cost. [..]
We can satisfy the low end by making the numa scheduler at compile time (that's
what I did in my patch at least).
Andrea
-
On Wed, Apr 04, 2001 at 03:34:22PM +0200, Ingo Molnar wrote:
>
> On Wed, 4 Apr 2001, Hubertus Franke wrote:
>
> > Another point to raise is that the current scheduler does a exhaustive
> > search for the "best" task to run. It touches every process in the
> > runqueue. this is ok if the
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> It is not clear that yielding the same decision as the current
> scheduler is the ultimate goal to shoot for, but it allows
> comparision.
obviously the current scheduler is not cast into stone, it never was,
never will be.
but determining whether
On Wed, 4 Apr 2001, Hubertus Franke wrote:
> I understand the dilemma that the Linux scheduler is in, namely
> satisfy the low end at all cost. [...]
nope. The goal is to satisfy runnable processes in the range of NR_CPUS.
You are playing word games by suggesting that the current behavior
<[EMAIL PROTECTED]>
To: Mike Kravetz <[EMAIL PROTECTED]>
cc: Hubertus Franke/Watson/IBM@IBMUS, Fabio Riccardi
<[EMAIL PROTECTED]>, Linux Kernel List
<[EMAIL PROTECTED]>
Subject: Re: a quest for a better scheduler
On Tue, 3 Apr 2001, Mike Kravetz wrote:
>
On Wed, 4 Apr 2001, Alan Cox wrote:
> The problem has always been - alternative scheduler, crappier
> performance for 2 tasks running (which is most boxes). [...]
it's not only the 2-task case, but also less flexibility or lost
semantics.
> Indeed. I'd love to see you beat tux entirely in
On Tue, 3 Apr 2001, Fabio Riccardi wrote:
> I agree that a better threading model would surely help in a web
> server, but to me this is not an excuse to live up with a broken
> scheduler.
believe me, there are many other parts of the kernel that are not
optimized for the nutcase. In this case
On Tue, 3 Apr 2001, Fabio Riccardi wrote:
> I've spent my afternoon running some benchmarks to see if MQ patches
> would degrade performance in the "normal case".
no doubt priority-queue can run almost as fast as the current scheduler.
What i'm worried about is the restriction of the
On Tue, 3 Apr 2001, Mike Kravetz wrote:
> Our 'priority queue' implementation uses almost the same goodness
> function as the current scheduler. The main difference between our
> 'priority queue' scheduler and the current scheduler is the structure
> of the runqueue. We break up the single
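The structural change Mike describes can be modeled in a few lines (illustrative Python, not the actual MQ patch; locking and priorities are elided): one runqueue per CPU, with the local queue served first and remote queues touched only when the CPU would otherwise idle.

```python
# Toy model of multi-queue (per-CPU runqueue) scheduling.
class MQScheduler:
    def __init__(self, ncpus):
        self.runqueues = [[] for _ in range(ncpus)]  # one queue per CPU

    def enqueue(self, task, cpu):
        self.runqueues[cpu].append(task)

    def pick_next(self, cpu):
        rq = self.runqueues[cpu]
        if rq:
            return rq.pop(0)             # common case: local work only
        for other in self.runqueues:     # idle: steal from a remote queue
            if other:
                return other.pop(0)
        return None

mq = MQScheduler(ncpus=2)
mq.enqueue("A", cpu=0)
mq.enqueue("B", cpu=1)
assert mq.pick_next(0) == "A"    # cpu 0 serves its own queue
assert mq.pick_next(0) == "B"    # cpu 0 idle, steals from cpu 1
assert mq.pick_next(1) is None
```

The point of the layout is that the common-case scan (and, in the real patch, its lock) is confined to the local queue, instead of every CPU contending on one global list.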
If we are facing these problems for the "normal case", then I hope Solaris is
handling it !!
Amol
Fabio Riccardi <[EMAIL PROTECTED]> on 04/04/2001 07:03:57 AM
To: Alan Cox <[EMAIL PROTECTED]>
cc: [EMAIL PROTECTED] (bcc: Amol Lad/HSS)
Subject: Re: a quest
On Tue, Apr 03, 2001 at 05:18:03PM -0700, Fabio Riccardi wrote:
>
> I have measured the HP and not the "scalability" patch because the two do more
> or less the same thing and give me the same performance advantages, but the
> former is a lot simpler and I could port it with no effort on any
--On Tuesday, April 03, 2001 18:17:30 -0700 Fabio Riccardi
<[EMAIL PROTECTED]> wrote:
> Alan Cox wrote:
> Indeed, I'm using RT sigio/sigwait event scheduling, bare clone threads
> and zero-copy io.
Fabio, I'm working on a similar solution, although I'm experimenting with
SGI's KAIO patch to
Alan Cox wrote:
> > for the "normal case" performance see my other message.
>
> I did - and with a lot of interest
thanks! :)
> > I agree that a better threading model would surely help in a web server, but to
> > me this is not an excuse to live up with a broken scheduler.
>
> The problem has
> for the "normal case" performance see my other message.
I did - and with a lot of interest
> I agree that a better threading model would surely help in a web server, but to
> me this is not an excuse to live up with a broken scheduler.
The problem has always been - alternative scheduler,
Alan,
for the "normal case" performance see my other message.
I agree that a better threading model would surely help in a web server, but to
me this is not an excuse to live up with a broken scheduler.
The X15 server I'm working on now is a sort of user-space TUX, it uses only 8
threads per
Dear all,
I've spent my afternoon running some benchmarks to see if MQ patches would
degrade performance in the "normal case".
To measure performance I've used the latest lmbench and I have measured the
kernel compile times on a dual Pentium III box running at 1GHz with a 133MHz
bus.
Results
On Tue, Apr 03, 2001 at 08:47:52PM +0200, Ingo Molnar wrote:
>
> this restriction (independence of the priority from the previous process)
> is a fundamentally bad property, scheduling is nonlinear and affinity
> decisions depend on the previous context. [i had a priority-queue SMP
> scheduler
On Tue, 3 Apr 2001, Mike Kravetz wrote:
> [...] Currently, in this implementation we only deviate from the
> current scheduler in a small number of cases where tasks get a boost
> due to having the same memory map.
thread-thread-affinity pretty much makes it impossible to use a priority
queue.
On Tue, Apr 03, 2001 at 10:55:12AM +0200, Ingo Molnar wrote:
>
> you can easily make the scheduler fast in the many-processes case by
> sacrificing functionality in the much more realistic, few-processes case.
> None of the patch i've seen so far maintained the current scheduler's
>
> Is there any special reason why any of those patches didn't make it to
> the mainstream kernel code?
All of them are worse for the normal case. Also 1500 running apache's isnt
a remotely useful situation, you are thrashing the cache even if you are now
not thrashing the scheduler. Use an httpd
On Mon, 2 Apr 2001, Fabio Riccardi wrote:
> I sent a message a few days ago about some limitations I found in the
> linux scheduler.
>
> In servers like Apache where a large (> 1000) number of processes can
> be running at the same time and where many of them are runnable at the
> same time,