direct-io.c is evil. Ridiculously.
You will have a hard time finding someone to defend it, I predict :).
There is good news on that front, too. Chris (Mason) is making
progress on getting rid of the worst of the Magical Locking that
makes buffered races with O_DIRECT ops so awful.
to do anything but chase pointers through cache. Done right, it
hardly even branches (although the branch misprediction penalty is a
lot less of a worry on current x86_64 than it was in the
mega-superscalar-out-of-order-speculative-execution days). It's damn
Actually it costs a lot more on
It's brilliant for disk I/O, not for networking for which
blocking is the norm not the exception.
So people will have to likely do something like divide their
applications into handling for I/O to files and I/O to networking.
So beautiful. :-)
Nobody has proposed anything yet which scales
On 2/22/07, Alan [EMAIL PROTECTED] wrote:
* Ulrich Drepper <[EMAIL PROTECTED]> wrote:
> Ingo Molnar wrote:
> > in terms of AIO, the best queueing model is i think what the kernel uses
> > internally: freely ordered, with barrier support.
>
> Speaking of AIO, how do you imagine lio_listio is implemented? If
> there is no asynchronous syscall
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> [...] Unless you essentially lock one application thread to each CPU
> core, with a complete understanding of its cache sharing and latency
> relationships to all the other cores, and do your own userspace I/O
> scheduling and dispatching
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> [...] As for threadlets, making them kernel threads is not such a good
> design feature, O(1) scheduler or not. You take the full hit of
> kernel task creation, on the spot, for every threadlet that blocks.
> [...]
this is very much not how
On 2/21/07, Michael K. Edwards <[EMAIL PROTECTED]> wrote:
You won't be able to do it later if you don't design for it now.
Don't reinvent the square wheel -- there's a model to follow that was
so successful that it has killed all alternate models in its sphere.
Namely, IEEE 754. But please try
On 2/21/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
threadlets (and syslets) are parallel contexts and they behave so -
queuing and execution semantics are then on top of that, implemented
either by glibc, or implemented by the application. There is no
'pipeline' of requests imposed - the
On 2/21/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
pthread_cancel() [if/once threadlets are integrated into pthreads] ought
to do that. A threadlet, if it gets moved to an async context, is a
full-blown thread.
The fact that you are proposing pthread_cancel as a model for how to
abort an
On 2/21/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
threadlets, when they don't block, are just regular user-space function
calls - so no need to schedule or throttle them. [*]
Right. That's a great design feature.
threadlets, when they block, are regular kernel threads, so the regular
O(1)
Ingo Molnar wrote:
> in terms of AIO, the best queueing model is i think what the kernel uses
> internally: freely ordered, with barrier support.
Speaking of AIO, how do you imagine lio_listio is implemented? If there
is no asynchronous syscall it would mean creating a threadlet for each
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> 4) AIO vsyscalls whose semantics resemble those of IEEE 754 floating
> point operations, with a clear distinction between a) pipeline state
> vs. operands, b) results vs. side effects, and c) coding errors vs.
> not-a-number results vs. exceptions
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> 2) Facilities for manipulating pools of threadlets, so you can
> throttle their concurrency, reprioritize them, and cancel them in
> bulk, disposing safely of any dynamically allocated memory,
> synchronization primitives, and so forth that
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> 1) Documentation of what you can and can't do safely from a threadlet,
> given that it runs in an unknown thread context;
you can do just about anything from a threadlet, using bog standard
procedural programming. (Certain system-calls are
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> 3) Reworked threadlet scheduling to allow tens of thousands of blocked
> threadlets to be dispatched efficiently in a controlled, throttled,
> non-cache-and-MMU-thrashing manner, immediately following the softirq
> that unblocks the I/O
On 2/21/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
I believe this threadlet concept is what user-space will want to use for
programmable parallelism.
This is brilliant. Now it needs just four more things:
1) Documentation of what you can and can't do safely from a threadlet,
given that it runs in an unknown thread context;
this is the v3 release of the syslet/threadlet subsystem:
http://redhat.com/~mingo/syslet-patches/
This release came a few days later than i originally wanted, because
i've implemented many fundamental changes to the code. The biggest
highlights of v3 are:
- "Threadlets": the introduction