Subject: Re: sched_yield proposals/rationale

> Mark Lord wrote:
> >
> > Cool. You *do know* that there is a brand new CPU scheduler
> > scheduled to replace the current one for the 2.6.22 kernel, right?
> >
Having tried both nicksched and Con's fair scheduler on some [...]
Mark Lord wrote:
> [EMAIL PROTECTED] wrote:
> > From: Bill Davidsen
> >
> > And having gotten same, are you going to code up what appears to be a
> > solution, based on this feedback?
>
> The feedback was helpful in verifying whether there are any arguments
> against my approach. The real proof is in the pudding.
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:linux-kernel-
[EMAIL PROTECTED]] On Behalf Of Bill Davidsen
Sent: Tuesday, 17 April 2007 21:38
To: linux-kernel@vger.kernel.org
Cc: Buytaert, Steven; [EMAIL PROTECTED]; linux-kernel@vger.kernel.org
Subject: Re: sched_yield proposals

> From: Bill Davidsen
>
> And having gotten same, are you going to code up what appears to be a
> solution, based on this feedback?

The feedback was helpful in verifying whether there are any arguments against
my approach. The real proof is in the pudding. I'm running a kernel with these
changes, [...]
[EMAIL PROTECTED] wrote:
> -----Original Message-----
>
> Besides - but I guess you're aware of it - randomized algorithms
> tend to drive benchmarkers and performance analysts crazy because
> their performance cannot be repeated. So it's usually better to
> avoid them unless there is really no [...]
On Thu, Apr 12, 2007 at 11:27:22PM +1000, Nick Piggin wrote:
> This one should be pretty rare (actually I think it is dead code in
> practice, due to the way the page allocator works).
> Avoiding sched_yield is a really good idea outside realtime scheduling.
> Since we have gone this far with the [...]
On Thu, Apr 12, 2007 at 03:31:31PM +0200, Andi Kleen wrote:
> The only way I could think of to make sched_yield work the way they
> expect would be to define some way of gang scheduling and give
> sched_yield semantics that it preferably yields to other members
> of the gang.
> But it would be still [...]
On Thu, 2007-04-12 at 10:15 -0400, [EMAIL PROTECTED] wrote:
> > -----Original Message-----
> >
> > Agreed, but $ find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc over
> > the 2.6.16 kernel yields 105 hits (note: including comments)... An
> > interesting spot is e.g. fs/buffer.c free_more_memory()
>
> A lot of those are probably broken in some way, agreed.
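The pipeline quoted above can be sanity-checked on a throwaway tree before running it over a kernel source checkout (the `/tmp/yield_demo` path is hypothetical, not from the thread):

```shell
# Recreate the search from the message above on a tiny scratch tree.
# /tmp/yield_demo is a hypothetical path, not from the thread.
mkdir -p /tmp/yield_demo
cd /tmp/yield_demo
cat > sample.c <<'EOF'
void f(void) { yield(); }          /* direct call */
void g(void) { sched_yield (); }   /* space before paren still matches */
EOF
# Same pattern as in the thread; "yield[ ]*\(" tolerates spaces before the
# paren, and wc -l counts matching lines (comments included).
find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc -l
```

Note that the pattern matches comment lines too, which is why the thread flags the 105-hit figure as including comments.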
On Thu, Apr 12, 2007 at 09:05:25AM -0400, [EMAIL PROTECTED] wrote:
> > -----Original Message-----
> > From: Andi Kleen
> > [ ... about use of sched_yield ...]
> >
> > On the other hand when they fix their code to not rely on sched_yield
> > but use [...]
>
> Agreed, but $ find . -name "*.[ch]" | xargs grep -E "yield[ ]*\(" | wc over
> the 2.6.16 kernel yields 105 hits (note: including comments)...
[EMAIL PROTECTED] writes:
> Since the new 2.6.x O(1) scheduler I'm having latency problems, probably
> due to excessive use of sched_yield in code in components I don't have
> control over. This 'problem'/behavioral change has also been reported by
> other applications (e.g. OpenLDAP, Gnome netmeeting, Postgres, e.google...)
On Thu, 2007-04-12 at 04:31 -0400, [EMAIL PROTECTED] wrote:
> Since the new 2.6.x O(1) scheduler I'm having latency problems.
1. Have you elevated the process priority?
2. Have you tried running SCHED_FIFO, or SCHED_RR?