On Fri, 2007-06-22 at 23:59 +0200, Ingo Molnar wrote:
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now. We'll also do a
proactive search for such places. We can convert those places to
softirqs, or move them back
On Mon, 2007-06-25 at 14:48 -0400, Kristian Høgsberg wrote:
OK, here's a yell. I'm using tasklets in the new firewire stack for all
Thanks for speaking up!
interrupt handling. All my interrupt handler does is read out the event
mask and schedule the appropriate tasklets. Most of these
On Mon, 25 Jun 2007 18:50:03 +0200
Tilman Schmidt [EMAIL PROTECTED] wrote:
Ingo Molnar [EMAIL PROTECTED] wrote:
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now.
Getting rid of tasklets may seem like a good
On Mon, 2007-06-25 at 15:11 -0400, Steven Rostedt wrote:
On Mon, 2007-06-25 at 14:48 -0400, Kristian Høgsberg wrote:
...
However, I don't really understand how you can discuss a wholesale
replacing of tasklets with workqueues, given the very different
execution semantics of the two
On Mon, 2007-06-25 at 16:07 -0400, Kristian Høgsberg wrote:
Maybe we should be looking at something like GENERIC_SOFTIRQ to run
functions that a driver could add. But they would run only on the CPU
that scheduled them, and would not guarantee non-reentrancy as tasklets do
today.
Sounds
On 25.06.2007 19:06, Steven Rostedt wrote:
On Mon, 2007-06-25 at 18:50 +0200, Tilman Schmidt wrote:
The Siemens Gigaset ISDN base driver uses tasklets in its isochronous
data paths. [...]
Does that qualify as performance sensitive for the purpose of this
discussion?
Actually, no. 16ms,
On Mon, 2007-06-25 at 22:50 +0200, Tilman Schmidt wrote:
Ok, I'm reassured. I'll look into converting these to a work queue
then, although I can't promise when I'll get around to it.
In fact, if these timing requirements are so easy to meet, perhaps
it doesn't even need its own work queue,
On Mon, 2007-06-25 at 16:31 -0400, Steven Rostedt wrote:
On Mon, 2007-06-25 at 16:07 -0400, Kristian Høgsberg wrote:
Maybe we should be looking at something like GENERIC_SOFTIRQ to run
functions that a driver could add. But they would run only on the CPU
that scheduled them, and do not
* Kristian Høgsberg [EMAIL PROTECTED] wrote:
OK, here's a yell. I'm using tasklets in the new firewire stack for
all interrupt handling. All my interrupt handler does is read out the
event mask and schedule the appropriate tasklets. Most of these
tasklets typically just end up
Ingo Molnar wrote:
regarding workqueues - would it be possible for you to test Steve's
patch and get us performance numbers? Do you have any test with tons of
tasklet activity that would definitely show the performance impact of
workqueues?
I can't speak for Kristian, nor do I have test
A couple of days ago I said:
The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
the DMA buffers in the streaming I/O path
Obviously some testing is called for here. I will make an attempt to do
that testing
I've done that testing - I have an OLPC B3 unit running
On Tue, 2007-06-26 at 01:36 +0200, Stefan Richter wrote:
I can't speak for Kristian, nor do I have test equipment for isochronous
applications, but I know that there are people out there who do data
acquisition on as many FireWire buses as they can stuff boards into
their boxes. There are
On Mon, 2007-06-25 at 18:00 -0600, Jonathan Corbet wrote:
A couple of days ago I said:
The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
the DMA buffers in the streaming I/O path
Obviously some testing is called for here. I will make an attempt to do
that
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now. We'll also do a
proactive search for such places. We can convert those places to
softirqs, or move them back into hardirq context. Once this is done -
and i doubt it
On Mon, 2007-06-25 at 18:46 -0700, Dan Williams wrote:
Context switches on this platform flush the L1 cache so bouncing
between a workqueue and the MD thread is painful.
Why are context switches between two kernel threads flushing the L1
cache? Is this a flaw in the ARM arch? I would think
On 6/25/07, Steven Rostedt [EMAIL PROTECTED] wrote:
On Mon, 2007-06-25 at 18:46 -0700, Dan Williams wrote:
Context switches on this platform flush the L1 cache so bouncing
between a workqueue and the MD thread is painful.
Why are context switches between two kernel threads flushing the L1
On Sun, 24 Jun 2007, Jonathan Corbet wrote:
>
> The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
> the DMA buffers in the streaming I/O path. With this change in place,
> I'd worry that the possibility of dropping frames would increase,
> especially considering that (1)
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> so how about the following, different approach: anyone who has a tasklet
> in any performance-sensitive codepath, please yell now.
The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
the DMA buffers in the streaming I/O path. With
Ingo Molnar [EMAIL PROTECTED] wrote:
so how about the following, different approach: anyone who has a tasklet
in any performance-sensitive codepath, please yell now.
The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
the DMA buffers in the streaming I/O path. With this
On Sun, 24 Jun 2007, Jonathan Corbet wrote:
The cafe_ccic (OLPC) camera driver uses a tasklet to move frames out of
the DMA buffers in the streaming I/O path. With this change in place,
I'd worry that the possibility of dropping frames would increase,
especially considering that (1) this is
Most of the tasklet uses are in rarely used or arcane drivers - in fact
none of my 10 test-boxes utilizes _any_ tasklet in any way that could
even get close to mattering to performance. In other words: i just
cannot test this, nor do i think that others will really test this. I.e.
if we dont
On Fri, 22 Jun 2007 00:00:14 -0400
Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> There's a very nice paper by Matthew Willcox that describes Softirqs,
> Tasklets, Bottom Halves, Task Queues, Work Queues and Timers[1].
> In the paper it describes the history of these items. Softirqs and
>
On Sat, 2007-06-23 at 00:44 +0200, Ingo Molnar wrote:
> * Daniel Walker <[EMAIL PROTECTED]> wrote:
>
> > > remember, these changes have been in use in -rt for a while. there's
> > > reason to believe that they aren't going to cause drastic problems.
> >
> > Since I've been working with -rt (~2
On Fri, 2007-06-22 at 23:59 +0200, Ingo Molnar wrote:
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > If the numbers say that there is no performance difference (or even
> > better: that the new code performs better or fixes some latency issue
> > or whatever), I'll be very happy. But if
> As a second example, msr_seek() in arch/i386/kernel/msr.c... is the
> inode semaphore enough or not? Who understands the implications well
> enough to say?
lseek is one of the nasty remaining cases. tty is another real horror
that needs further work but we slowly get closer - drivers/char is
* Daniel Walker <[EMAIL PROTECTED]> wrote:
> > remember, these changes have been in use in -rt for a while. there's
> > reason to believe that they aren't going to cause drastic problems.
>
> Since I've been working with -rt (~2 years now I think) it's clear
> that the number of testers of
> > [ and on a similar notion, i still havent given up on seeing all BKL
> > use gone from the kernel. I expect it to happen any decade now ;-) ]
> 2.6.21 had 476 lock_kernel() calls. 2.6.22-git has 473 lock_kernel()
> calls currently. With that kind of flux we'll see the BKL gone in
On Fri, 2007-06-22 at 15:09 -0700, [EMAIL PROTECTED] wrote:
> On Fri, 22 Jun 2007, Daniel Walker wrote:
>
> >
> > On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
> >
> >>
> >> - tasklets have certain fairness limitations. (they are executed in
> >>softirq context and thus preempt
* Daniel Walker <[EMAIL PROTECTED]> wrote:
> > - tasklets have certain fairness limitations. (they are executed in
> >softirq context and thus preempt everything, even if there is
> >some potentially more important, high-priority task waiting to be
> >executed.)
>
> Since -rt has
On Fri, 22 Jun 2007, Daniel Walker wrote:
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
- tasklets have certain fairness limitations. (they are executed in
softirq context and thus preempt everything, even if there is some
potentially more important, high-priority task
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [ and on a similar notion, i still havent given up on seeing all BKL
> use gone from the kernel. I expect it to happen any decade now ;-) ]
2.6.21 had 476 lock_kernel() calls. 2.6.22-git has 473 lock_kernel()
calls currently. With that kind of flux
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> If the numbers say that there is no performance difference (or even
> better: that the new code performs better or fixes some latency issue
> or whatever), I'll be very happy. But if the numbers say that it's
> worse, no amount of cleanliness
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
>
> - tasklets have certain fairness limitations. (they are executed in
>softirq context and thus preempt everything, even if there is some
>potentially more important, high-priority task waiting to be
>executed.)
Since -rt has
On Fri, 22 Jun 2007, Ingo Molnar wrote:
>
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > Whether we actually then want to do 6 is another matter. I think we'd
> > need some measuring and discussion about that.
>
> basically tasklets have a number of limitations:
I'm not disputing that
On Fri, 2007-06-22 at 22:00 +0100, Christoph Hellwig wrote:
> Note that we also have a lot of inefficiency in the way we do deferred
> processing. Think of a setup where an XFS filesystem runs over
> a megaraid adapter.
>
> (1) we get a real hardirq, which just clears the interrupt and
* Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> Note that we also have a lot of inefficiency in the way we do deferred
> processing. Think of a setup where an XFS filesystem runs over
> a megaraid adapter.
>
> (1) we get a real hardirq, which just clears the interrupt and then
>
On Fri, Jun 22, 2007 at 10:40:58PM +0200, Ingo Molnar wrote:
> when it comes to 'deferred processing', we've basically got two 'prime'
> choices for deferred processing:
>
> - if it's high-performance then it goes into a softirq.
>
> - if performance is not important, or robustness and
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> Whether we actually then want to do 6 is another matter. I think we'd
> need some measuring and discussion about that.
basically tasklets have a number of limitations:
- tasklets have certain latency limitations over real tasks. (for
example
On Fri, Jun 22, 2007 at 10:16:47AM -0700, Linus Torvalds wrote:
>
>
> On Fri, 22 Jun 2007, Steven Rostedt wrote:
> >
> > I just want to state that tasklets served their time well. But it's time
> > to give them an honorable discharge. So let's get rid of tasklets and
> > give them a standing
On Fri, 2007-06-22 at 10:16 -0700, Linus Torvalds wrote:
>
> So patches 1-4 all look fine to me. In fact, 5 looks ok too.
Great!
> Leaving patch 6 as a "only makes sense after we actually have some numbers
> about it", and patch 5 is a "could go either way" as far as I'm concerned
> (ie I
On Fri, 22 Jun 2007, Steven Rostedt wrote:
>
> I just want to state that tasklets served their time well. But it's time
> to give them an honorable discharge. So let's get rid of tasklets and
> give them a standing salute as they leave :-)
Well, independently of whether we actually discharge
>
> This is stated on the assumption that pretty much all performance
> critical tasklets have been removed (although Christoph just mentioned
> megaraid_sas, but after I made this statement).
>
> We've been running tasklets as threads in the -rt kernel for some time
> now, and that hasn't
On Fri, 2007-06-22 at 07:25 -0700, Arjan van de Ven wrote:
> > For the most part, tasklets today are not used for time critical functions.
> > Running tasklets in thread context is not harmful to performance of
> > the overall system.
>
> That is a bold statement...
>
> > But running them in
On Fri, 2007-06-22 at 15:12 +0200, Ingo Molnar wrote:
> * Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> yes, the softirq based tasklet implementation with workqueue based
> implementation, but the tasklet API itself should still stay.
done.
>
> ok, enough idle talking, lets see the next
> For the most part, tasklets today are not used for time critical functions.
> Running tasklets in thread context is not harmful to performance of
> the overall system.
That is a bold statement...
> But running them in interrupt context is, since
> they increase the overall latency for high
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > there are 120 tasklet_init()s in the tree and 224
> > tasklet_schedule()s.
>
> couple of hours?
hm, what would you replace it with? Another new API? Or to workqueues
with a manual adding of a local_bh_disable()/enable() pair around the
worker
> On Fri, 22 Jun 2007 15:26:22 +0200 Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > I do think that would be a better approach. Apart from the
> > cleanliness issue, the driver-by-driver conversion would make it much
> > easier to hunt down any
On Fri, 2007-06-22 at 06:13 -0700, Andrew Morton wrote:
> > On Fri, 22 Jun 2007 08:58:44 -0400 Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
> > > * Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > > > Honestly, I highly doubted that this would
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> I do think that would be a better approach. Apart from the
> cleanliness issue, the driver-by-driver conversion would make it much
> easier to hunt down any regressions or various funninesses.
there are 120 tasklet_init()s in the tree and 224
> On Fri, 22 Jun 2007 08:58:44 -0400 Steven Rostedt <[EMAIL PROTECTED]> wrote:
> On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
> > * Steven Rostedt <[EMAIL PROTECTED]> wrote:
> >
> > > > And this is something that might be fine for benchmarking, but not
> > > > something
> > > > we
* Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > that's where it belongs - but it first needs the cleanups suggested
> > by Christoph.
>
> I had the impression that he didn't want it in, but instead wanted
> each driver to be changed separately.
that can be done too in a later stage. We
On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
> * Steven Rostedt <[EMAIL PROTECTED]> wrote:
>
> > > And this is something that might be fine for benchmarking, but not
> > > something
> > > we should put in. Keeping two wildly different implementations of core
> > > functionality with
* Steven Rostedt <[EMAIL PROTECTED]> wrote:
> > And this is something that might be fine for benchmarking, but not something
> > we should put in. Keeping two wildly different implementations of core
> > functionality with very different behaviour around is quite bad. Better
> > kill tasklets
Christoph,
Thanks for taking the time to look at my patches!
On Fri, 2007-06-22 at 08:09 +0100, Christoph Hellwig wrote:
> > I've developed this way to replace all tasklets with work queues without
> > having to change all the drivers that use them. I created an API that
> > uses the tasklet
* Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> > which actual in-kernel tasklets do you have in mind? I'm not aware
> > of any in performance critical code. (now that both the RCU and the
> > sched tasklet has been fixed.)
>
> the one in megaraid_sas for example is in a performance-critical
* Christoph Hellwig <[EMAIL PROTECTED]> wrote:
> I think we probably want some numbers, at least for tasklets used in
> potentially performance critical code.
which actual in-kernel tasklets do you have in mind? I'm not aware of
any in performance critical code. (now that both the RCU and the
On Fri, Jun 22, 2007 at 09:51:35AM +0200, Ingo Molnar wrote:
>
> * Christoph Hellwig <[EMAIL PROTECTED]> wrote:
>
> > I think we probably want some numbers, at least for tasklets used in
> > potentially performance critical code.
>
> which actual in-kernel tasklets do you have in mind? I'm not
On Fri, Jun 22, 2007 at 12:00:14AM -0400, Steven Rostedt wrote:
> For the most part, tasklets today are not used for time critical functions.
> Running tasklets in thread context is not harmful to performance of
> the overall system. But running them in interrupt context is, since
> they increase the
On Fri, Jun 22, 2007 at 12:00:14AM -0400, Steven Rostedt wrote:
For the most part, tasklets today are not used for time critical functions.
Running tasklets in thread context is not harmful to performance of
the overall system. But running them in interrupt context is, since
they increase the
* Christoph Hellwig [EMAIL PROTECTED] wrote:
I think we probably want some numbers, at least for tasklets used in
potentially performance critical code.
which actual in-kernel tasklets do you have in mind? I'm not aware of
any in performance critical code. (now that both the RCU and the
On Fri, Jun 22, 2007 at 09:51:35AM +0200, Ingo Molnar wrote:
* Christoph Hellwig [EMAIL PROTECTED] wrote:
I think we probably want some numbers, at least for tasklets used in
potentially performance critical code.
which actual in-kernel tasklets do you have in mind? I'm not aware of
* Christoph Hellwig [EMAIL PROTECTED] wrote:
which actual in-kernel tasklets do you have in mind? I'm not aware
of any in performance critical code. (now that both the RCU and the
sched tasklet has been fixed.)
the one in megaraid_sas for example is in a performance-critical path
Christoph,
Thanks for taking the time to look at my patches!
On Fri, 2007-06-22 at 08:09 +0100, Christoph Hellwig wrote:
I've developed this way to replace all tasklets with work queues without
having to change all the drivers that use them. I created an API that
uses the tasklet API as
* Steven Rostedt [EMAIL PROTECTED] wrote:
And this is something that might be fine for benchmarking, but not something
we should put in. Keeping two wildly different implementations of core
functionality with very different behaviour around is quite bad. Better
kill tasklets once for
On Fri, 22 Jun 2007 08:58:44 -0400 Steven Rostedt [EMAIL PROTECTED] wrote:
On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
* Steven Rostedt [EMAIL PROTECTED] wrote:
And this is something that might be fine for benchmarking, but not
something
we should put in. Keeping two
On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
* Steven Rostedt [EMAIL PROTECTED] wrote:
And this is something that might be fine for benchmarking, but not
something
we should put in. Keeping two wildly different implementations of core
functionality with very different
* Steven Rostedt [EMAIL PROTECTED] wrote:
that's where it belongs - but it first needs the cleanups suggested
by Christoph.
I had the impression that he didn't want it in, but instead wanted
each driver to be changed separately.
that can be done too in a later stage. We cannot
On Fri, 2007-06-22 at 06:13 -0700, Andrew Morton wrote:
On Fri, 22 Jun 2007 08:58:44 -0400 Steven Rostedt [EMAIL PROTECTED] wrote:
On Fri, 2007-06-22 at 14:38 +0200, Ingo Molnar wrote:
* Steven Rostedt [EMAIL PROTECTED] wrote:
Honestly, I highly doubted that this would make it up to
On Fri, 22 Jun 2007 15:26:22 +0200 Ingo Molnar [EMAIL PROTECTED] wrote:
* Andrew Morton [EMAIL PROTECTED] wrote:
I do think that would be a better approach. Apart from the
cleanliness issue, the driver-by-driver conversion would make it much
easier to hunt down any regressions or
* Andrew Morton [EMAIL PROTECTED] wrote:
I do think that would be a better approach. Apart from the
cleanliness issue, the driver-by-driver conversion would make it much
easier to hunt down any regressions or various funninesses.
there are 120 tasklet_init()s in the tree and 224
On Fri, 2007-06-22 at 07:25 -0700, Arjan van de Ven wrote:
For the most part, tasklets today are not used for time critical functions.
Running tasklets in thread context is not harmful to performance of
the overall system.
That is a bold statement...
But running them in interrupt context
This is stated on the assumption that pretty much all performance
critical tasklets have been removed (although Christoph just mentioned
megaraid_sas, but after I made this statement).
We've been running tasklets as threads in the -rt kernel for some time
now, and that hasn't bothered
On Fri, 2007-06-22 at 15:12 +0200, Ingo Molnar wrote:
* Steven Rostedt [EMAIL PROTECTED] wrote:
yes, the softirq based tasklet implementation with workqueue based
implementation, but the tasklet API itself should still stay.
done.
ok, enough idle talking, lets see the next round of
For the most part, tasklets today are not used for time critical functions.
Running tasklets in thread context is not harmful to performance of
the overall system.
That is a bold statement...
But running them in interrupt context is, since
they increase the overall latency for high priority
* Andrew Morton [EMAIL PROTECTED] wrote:
there are 120 tasklet_init()s in the tree and 224
tasklet_schedule()s.
couple of hours?
hm, what would you replace it with? Another new API? Or to workqueues
with a manual adding of a local_bh_disable()/enable() pair around the
worker function?
On Fri, 22 Jun 2007, Steven Rostedt wrote:
I just want to state that tasklets served their time well. But it's time
to give them an honorable discharge. So let's get rid of tasklets and
give them a standing salute as they leave :-)
Well, independently of whether we actually discharge them
On Fri, 2007-06-22 at 10:16 -0700, Linus Torvalds wrote:
So patches 1-4 all look fine to me. In fact, 5 looks ok too.
Great!
Leaving patch 6 as a "only makes sense after we actually have some numbers
about it", and patch 5 is a "could go either way" as far as I'm concerned
(ie I could merge
On Fri, Jun 22, 2007 at 10:16:47AM -0700, Linus Torvalds wrote:
On Fri, 22 Jun 2007, Steven Rostedt wrote:
I just want to state that tasklets served their time well. But it's time
to give them an honorable discharge. So let's get rid of tasklets and
give them a standing salute as
* Linus Torvalds [EMAIL PROTECTED] wrote:
Whether we actually then want to do 6 is another matter. I think we'd
need some measuring and discussion about that.
basically tasklets have a number of limitations:
- tasklets have certain latency limitations over real tasks. (for
example they
On Fri, Jun 22, 2007 at 10:40:58PM +0200, Ingo Molnar wrote:
when it comes to 'deferred processing', we've basically got two 'prime'
choices for deferred processing:
- if it's high-performance then it goes into a softirq.
- if performance is not important, or robustness and flexibility
* Christoph Hellwig [EMAIL PROTECTED] wrote:
Note that we also have a lot of inefficiency in the way we do deferred
processing. Think of a setup where an XFS filesystem runs over
a megaraid adapter.
(1) we get a real hardirq, which just clears the interrupt and then
On Fri, 2007-06-22 at 22:00 +0100, Christoph Hellwig wrote:
Note that we also have a lot of inefficiency in the way we do deferred
processing. Think of a setup where an XFS filesystem runs over
a megaraid adapter.
(1) we get a real hardirq, which just clears the interrupt and then
On Fri, 22 Jun 2007, Ingo Molnar wrote:
* Linus Torvalds [EMAIL PROTECTED] wrote:
Whether we actually then want to do 6 is another matter. I think we'd
need some measuring and discussion about that.
basically tasklets have a number of limitations:
I'm not disputing that they aren't
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
- tasklets have certain fairness limitations. (they are executed in
softirq context and thus preempt everything, even if there is some
potentially more important, high-priority task waiting to be
executed.)
Since -rt has been
* Linus Torvalds [EMAIL PROTECTED] wrote:
If the numbers say that there is no performance difference (or even
better: that the new code performs better or fixes some latency issue
or whatever), I'll be very happy. But if the numbers say that it's
worse, no amount of cleanliness really
* Ingo Molnar [EMAIL PROTECTED] wrote:
[ and on a similar notion, i still havent given up on seeing all BKL
use gone from the kernel. I expect it to happen any decade now ;-) ]
2.6.21 had 476 lock_kernel() calls. 2.6.22-git has 473 lock_kernel()
calls currently. With that kind of flux
On Fri, 22 Jun 2007, Daniel Walker wrote:
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
- tasklets have certain fairness limitations. (they are executed in
softirq context and thus preempt everything, even if there is some
potentially more important, high-priority task
* Daniel Walker [EMAIL PROTECTED] wrote:
- tasklets have certain fairness limitations. (they are executed in
softirq context and thus preempt everything, even if there is
some potentially more important, high-priority task waiting to be
executed.)
Since -rt has been
On Fri, 2007-06-22 at 15:09 -0700, [EMAIL PROTECTED] wrote:
On Fri, 22 Jun 2007, Daniel Walker wrote:
On Fri, 2007-06-22 at 22:40 +0200, Ingo Molnar wrote:
- tasklets have certain fairness limitations. (they are executed in
softirq context and thus preempt everything, even if
[ and on a similar notion, i still havent given up on seeing all BKL
use gone from the kernel. I expect it to happen any decade now ;-) ]
2.6.21 had 476 lock_kernel() calls. 2.6.22-git has 473 lock_kernel()
calls currently. With that kind of flux we'll see the BKL gone in about
* Daniel Walker [EMAIL PROTECTED] wrote:
remember, these changes have been in use in -rt for a while. there's
reason to believe that they aren't going to cause drastic problems.
Since I've been working with -rt (~2 years now I think) it's clear
that the number of testers of the patch
As a second example, msr_seek() in arch/i386/kernel/msr.c... is the
inode semaphore enough or not? Who understands the implications well
enough to say?
lseek is one of the nasty remaining cases. tty is another real horror
that needs further work but we slowly get closer - drivers/char is
On Fri, 2007-06-22 at 23:59 +0200, Ingo Molnar wrote:
* Linus Torvalds [EMAIL PROTECTED] wrote:
If the numbers say that there is no performance difference (or even
better: that the new code performs better or fixes some latency issue
or whatever), I'll be very happy. But if the numbers
On Sat, 2007-06-23 at 00:44 +0200, Ingo Molnar wrote:
* Daniel Walker [EMAIL PROTECTED] wrote:
remember, these changes have been in use in -rt for a while. there's
reason to believe that they aren't going to cause drastic problems.
Since I've been working with -rt (~2 years now I
On Fri, 22 Jun 2007 00:00:14 -0400
Steven Rostedt [EMAIL PROTECTED] wrote:
There's a very nice paper by Matthew Willcox that describes Softirqs,
Tasklets, Bottom Halves, Task Queues, Work Queues and Timers[1].
In the paper it describes the history of these items. Softirqs and
tasklets were
There's a very nice paper by Matthew Willcox that describes Softirqs,
Tasklets, Bottom Halves, Task Queues, Work Queues and Timers[1].
In the paper it describes the history of these items. Softirqs and
tasklets were created to replace bottom halves after a company (Mindcraft)
showed that