Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-14 Thread James Antill
On Sat, 10 Feb 2007 18:49:56 -0800, Linus Torvalds wrote: > And I actually talked about that in one of the emails already. There is no > way you can beat an event-based thing for things that _are_ event-based. > That means mainly networking. > > For things that aren't event-based, but based on

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-11 Thread Alan
> I think we only do it for fget_light and some VM TLB simplification, so it > shouldn't be a big burden to check. And all the permission management stuff that relies on one thread not being able to manipulate the uid/gid of another to race security checks. Alan

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread Linus Torvalds
On Sat, 10 Feb 2007, David Miller wrote: > > Even if you have everything, every page, every log file, in the page > cache, everything talking over the network wants to block. > > Will you create a thread every time tcp_sendmsg() hits the send queue > limits? No. You use epoll() for those. >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread David Miller
From: Linus Torvalds <[EMAIL PROTECTED]> Date: Fri, 9 Feb 2007 14:33:01 -0800 (PST) > So I actually like this, because it means that while we slow down > real IO, we don't slow down the cached cases at all. Even if you have everything, every page, every log file, in the page cache, everything

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread Davide Libenzi
On Sat, 10 Feb 2007, Linus Torvalds wrote: > On Sat, 10 Feb 2007, Davide Libenzi wrote: > > > > For the queue approach, I meant the async_submit() to simply add the > > request (cookie, syscall number and params) inside queue, and not trying > > to execute the syscall. Once you're inside

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread Linus Torvalds
On Sat, 10 Feb 2007, Linus Torvalds wrote: > > But that makes it impossible to do things synchronously, which I think is > a *major* mistake. > > The whole (and really _only_) point of my patch was really the whole > "synchronous call" part. I'm personally of the opinion that if you cannot

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread Linus Torvalds
On Sat, 10 Feb 2007, Davide Libenzi wrote: > > For the queue approach, I meant the async_submit() to simply add the > request (cookie, syscall number and params) inside queue, and not trying > to execute the syscall. Once you're inside schedule, "stuff" has already > partially happened, and

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread Davide Libenzi
On Fri, 9 Feb 2007, Linus Torvalds wrote: > > Another, even simpler way IMO, is to just have a plain per-task kthread > > pool, and a queue. > > Yes, that is actually quite doable with basically the same interface. It's > literally a "small decision" inside of "schedule_async()" on how it >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread Davide Libenzi
On Sat, 10 Feb 2007, bert hubert wrote: > On Fri, Feb 09, 2007 at 02:33:01PM -0800, Linus Torvalds wrote: > > > - IF the system call blocks, we call the architecture-specific > >"schedule_async()" function before we even get any scheduler locks, and > >it can just do a fork() at that

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-10 Thread bert hubert
On Fri, Feb 09, 2007 at 02:33:01PM -0800, Linus Torvalds wrote: > - IF the system call blocks, we call the architecture-specific >"schedule_async()" function before we even get any scheduler locks, and >it can just do a fork() at that time, and let the *child* return to the >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-09 Thread Linus Torvalds
On Sat, 10 Feb 2007, Eric Dumazet wrote: > > Well, I guess if the original program was mono-threaded, and syscall used > fget_light(), we might have a problem here if the child try a close(). So you > may have to disable fget_light() magic if async call is the originator of the > syscall. Yes.

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-09 Thread Eric Dumazet
Linus Torvalds wrote: Ok, here's another entry in this discussion. - IF the system call blocks, we call the architecture-specific "schedule_async()" function before we even get any scheduler locks, and it can just do a fork() at that time, and let the *child* return to the

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-09 Thread Linus Torvalds
On Fri, 9 Feb 2007, Davide Libenzi wrote: > > That's another way to do it. But you end up creating/destroying a new > thread for every request. May be performing just fine. Well, I actually wanted to add a special CLONE_ASYNC flag, because I think we could do it better if we know it's a

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-09 Thread Davide Libenzi
On Fri, 9 Feb 2007, Linus Torvalds wrote: > > Ok, here's another entry in this discussion. That's another way to do it. But you end up creating/destroying a new thread for every request. May be performing just fine. Another, even simpler way IMO, is to just have a plain per-task kthread pool,

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-09 Thread Linus Torvalds
Ok, here's another entry in this discussion. This is a *really* small patch. Yes, it adds 174 lines, and yes it's actually x86 (32-bit) only, but about half of it is totally generic, and *all* of it is almost ludicrously simple. There's no new assembly language. The one-liner addition to

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-04 Thread Davide Libenzi
On Sat, 3 Feb 2007, Davide Libenzi wrote: > > - Signals. I have no idea what behaviour we want. Help? My first guess is > > that we'll want signal state to be shared by fibrils by keeping it in the > > task_struct. If we want something like individual cancellation, we'll > > augment > >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-02-03 Thread Davide Libenzi
On Tue, 30 Jan 2007, Zach Brown wrote: > This very rough patch series introduces a different way to provide AIO support > for system calls. Zab, great stuff! I've found a little time to take a look at the patches and throw some comments at you. Keep in mind though, that the last time I

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
Take FPU state: memory copies and RAID xor functions use MMX/SSE and require that the full task state be saved and restored. Sure, that much is obvious. I was hoping to see what FPU state juggling actually requires. I'm operating under the assumption that it won't be *terrible*. Task

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Joel Becker
On Tue, Jan 30, 2007 at 10:06:48PM -0800, Linus Torvalds wrote: > (Sadly, some of the people who really _use_ AIO are the database people, > and they really only care about a particularly stupid and trivial case: > pure reads and writes. A lot of other loads care about much more complex >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Benjamin LaHaise
On Wed, Jan 31, 2007 at 11:25:30AM -0800, Zach Brown wrote: > >without linking it into the system lists. The reason I don't think > >this > >approach works (and I looked at it a few times) is that many things > >end > >up requiring special handling: things like permissions, signals, > >FPU

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
On Jan 31, 2007, at 2:50 AM, Xavier Bestel wrote: On Tue, 2007-01-30 at 19:02 -0800, Linus Torvalds wrote: Btw, this is also something where we should just disallow certain system calls from being done through the asynchronous method. Does that mean that doing an AIO-disabled syscall will

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
without linking it into the system lists. The reason I don't think this approach works (and I looked at it a few times) is that many things end up requiring special handling: things like permissions, signals, FPU state, segment registers Can you share a specific example of the

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
The only thing I saw in Zach's post against the use of threads is that some kernel API would change. But surely if this is the showstopper then there must be some better argument than sys_getpid()?! Haha, yeah, that's the silly example I keep throwing around :). I guess it does leave a

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
Btw, this is also something where we should just disallow certain system calls from being done through the asynchronous method. Yeah. Maybe just a bitmap built from __NR_ constants? I don't know if we can do it in a way that doesn't require arch maintainer's attention. It seems like

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Benjamin LaHaise
On Wed, Jan 31, 2007 at 09:38:11AM -0800, Zach Brown wrote: > Indeed, that was my first reaction too. I dismissed the idea for a > good six months after initially realizing that it implied sharing > journal_info, etc. > > But when I finally sat down and started digging through the >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
Does that mean that we might not have some cases where we'd need to make sure we do things differently? Of course not. Something might show up. Might, and has. For a good time, take journal_info out of per_call_chain() in the patch set and watch it helpfully and loudly wet itself. There

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Zach Brown
- We would now have some measure of task_struct concurrency. Read that twice, it's scary. That's the one scaring me in fact ... Maybe it will end up being an easy one but I don't feel too comfortable... Indeed, that was my first reaction too. I dismissed the idea for a good six

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Xavier Bestel
On Tue, 2007-01-30 at 19:02 -0800, Linus Torvalds wrote: > Btw, this is also something where we should just disallow certain system > calls from being done through the asynchronous method. Does that mean that doing an AIO-disabled syscall will wait for all in- flight AIO syscalls to finish ?

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-31 Thread Ingo Molnar
* Linus Torvalds <[EMAIL PROTECTED]> wrote: > [ Of course, that used to also be the claim by all the people who > thought we couldn't do native kernel threads for "normal" threading > either, and should go with the n*m setup. Shows how much they knew > ;^] > > But I've certainly

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Linus Torvalds
On Wed, 31 Jan 2007, Nick Piggin wrote: > > I always thought that the AIO people didn't do this because they wanted > to avoid context switch overhead. I don't think that scheduling overhead was ever really the reason, at least not the primary one, and at least not on Linux. Sure, we can

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Nick Piggin
Nick Piggin wrote: Linus Torvalds wrote: On Wed, 31 Jan 2007, Benjamin Herrenschmidt wrote: - We would now have some measure of task_struct concurrency. Read that twice, it's scary. As two fibrils execute and block in turn they'll each be referencing current->. It means that we need to

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Nick Piggin
Linus Torvalds wrote: On Wed, 31 Jan 2007, Benjamin Herrenschmidt wrote: - We would now have some measure of task_struct concurrency. Read that twice, it's scary. As two fibrils execute and block in turn they'll each be referencing current->. It means that we need to audit task_struct to

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Benjamin Herrenschmidt
> NOTE! This is with the understanding that we *never* do any preemption. > The whole point of the microthreading as far as I'm concerned is exactly > that it is cooperative. It's not preemptive, and it's emphatically *not* > concurrent (ie you'd never have two fibrils running at the same time

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Linus Torvalds
On Tue, 30 Jan 2007, Linus Torvalds wrote: > > Does that mean that we might not have some cases where we'd need to make > sure we do things differently? Of course not. Something might show up. But > this actually makes it very clear what the difference between "struct > thread_struct" and

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Linus Torvalds
On Wed, 31 Jan 2007, Benjamin Herrenschmidt wrote: > > - We would now have some measure of task_struct concurrency. Read that > > twice, > > it's scary. As two fibrils execute and block in turn they'll each be > > referencing current->. It means that we need to audit task_struct to make >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Benjamin Herrenschmidt
On Tue, 2007-01-30 at 15:45 -0800, Zach Brown wrote: > > Btw, I noticed that you didn't Cc Ingo. Definitely worth doing. Not > > just > > because he's basically the normal scheduler maintainer, but also > > because > > he's historically been involved in things like the async filename > >

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Benjamin Herrenschmidt
> - We would now have some measure of task_struct concurrency. Read that twice, > it's scary. As two fibrils execute and block in turn they'll each be > referencing current->. It means that we need to audit task_struct to make > sure > that paths can handle racing as its scheduled away. The

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Zach Brown
Btw, I noticed that you didn't Cc Ingo. Definitely worth doing. Not just because he's basically the normal scheduler maintainer, but also because he's historically been involved in things like the async filename lookup that the in-kernel web server thing used. Yeah, that was dumb. I had

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Linus Torvalds
On Tue, 30 Jan 2007, Zach Brown wrote: > > I think we'll also want to flesh out the submission and completion interface > so that we don't find ourselves frustrated with it in another 5 years. What's > there now is just scaffolding to support the interesting kernel-internal part. > No doubt the

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Zach Brown
So we should limit these to basically have some maximum concurrency factor, but rather than consider it an error to go over it, we'd just cap the concurrency by default, so that people can freely use asynchronous interfaces without having to always worry about what happens if their resources

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Zach Brown
I looked at this approach a long time ago, and basically gave up because it looked like too much work. Indeed, your mention of it in that thread.. a year ago?.. is what got this notion sitting in the back of my head. I didn't like it at first, but it grew on me. I heartily approve,

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Linus Torvalds
On Tue, 30 Jan 2007, Linus Torvalds wrote: > > But from a quick overview of the patches, I really don't see anything > fundamentally wrong. It needs some error checking and some limiting (I > _really_ don't think we want a regular user starting a thousand fibrils > concurrently), but it

Re: [PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Linus Torvalds
On Tue, 30 Jan 2007, Zach Brown wrote: > > This very rough patch series introduces a different way to provide AIO support > for system calls. Yee-haa! I looked at this approach a long time ago, and basically gave up because it looked like too much work. I heartily approve, although I only

[PATCH 0 of 4] Generic AIO by scheduling stacks

2007-01-30 Thread Zach Brown
This very rough patch series introduces a different way to provide AIO support for system calls. Right now to provide AIO support for a system call you have to express your interface in the iocb argument struct for sys_io_submit(), teach fs/aio.c to translate this into some call path in the
