Some more results, using a larger number of processes and io depths. A
repeat of the tests from friday, with added depth 2 for syslet and
libaio:
Engine Depth Processes Bw (MiB/sec)
libaio    1       1
On Mon, Feb 26, 2007 at 01:39:23PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Kevent is a _very_ small entity and there is _no_ cost of
requeueing (well, there is list_add guarded by lock) - after it is
done, process can start real
On Mon, Feb 26, 2007 at 02:57:36PM +0100, Jens Axboe wrote:
Some more results, using a larger number of processes and io depths. A
repeat of the tests from friday, with added depth 2 for syslet and
libaio:
Engine Depth Processes Bw (MiB/sec)
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
your whole reasoning seems to be faith-based:
[...] Anyway, kevents are very small, threads are very big, [...]
How about following the scientific method instead?
Those are only rhetorical words, as you have understood I bet; I meant
* Suparna Bhattacharya [EMAIL PROTECTED] wrote:
syslet still on top. Measuring O_DIRECT reads (of 4kb size) on ramfs
with 100 processes each with a depth of 200, reading a per-process
private file of 10mb (need to fit in my ram...) 10 times each. IOW,
doing 10,000MiB of IO in total:
On Mon, Feb 26 2007, Suparna Bhattacharya wrote:
On Mon, Feb 26, 2007 at 02:57:36PM +0100, Jens Axboe wrote:
Some more results, using a larger number of processes and io depths. A
repeat of the tests from friday, with added depth 2 for syslet and
libaio:
Engine Depth
On Mon, Feb 26, 2007 at 01:51:23PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Even having main dispatcher as epoll/kevent loop, the _whole_
threadlet model is absolutely micro-thread in nature and not state
machine/event.
Evgeniy, i'm
On Mon, Feb 26, 2007 at 03:15:18PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
your whole reasoning seems to be faith-based:
[...] Anyway, kevents are very small, threads are very big, [...]
How about following the scientific method instead?
Those are only rhetorical words
On Sun, Feb 25, 2007 at 02:44:11PM -0800, Linus Torvalds ([EMAIL PROTECTED])
wrote:
On Thu, 22 Feb 2007, Evgeniy Polyakov wrote:
My tests show that with 4k connections per second (8k concurrency) more
than 20k connections of 80k total block in tcp_sendmsg() over gigabit
lan
On Mon, Feb 26, 2007 at 02:11:33PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Linus Torvalds [EMAIL PROTECTED] wrote:
My tests show that with 4k connections per second (8k concurrency)
more than 20k connections of 80k total block in tcp_sendmsg() over
gigabit lan between quite
On Mon, 26 Feb 2007, Evgeniy Polyakov wrote:
Linus, you made your point clearly - generic AIO should not be used for
the cases, when it is supposed to block 90% of the time - only when it
almost never blocks, like in case of buffered IO.
I don't think it's quite that simple.
EVEN *IF* it
On Mon, 2007-02-26 at 20:37 +0300, Evgeniy Polyakov wrote:
I tend to agree.
Yes, some loads require an event driven model, others can be done using
threads.
event driven model is really complex though. For event driven to work
well you basically can't tolerate blocking calls at all ...
open()
On Mon, Feb 26, 2007 at 09:57:00AM -0800, Linus Torvalds ([EMAIL PROTECTED])
wrote:
Similarly, even for a simple read() on a filesystem, there is no way to
just say block until data is available like there is for a socket,
because on a filesystem, the data may be available, BUT AT THE WRONG
On Mon, Feb 26, 2007 at 10:19:03AM -0800, Arjan van de Ven ([EMAIL PROTECTED])
wrote:
On Mon, 2007-02-26 at 20:37 +0300, Evgeniy Polyakov wrote:
I tend to agree.
Yes, some loads require an event driven model, others can be done using
threads.
event driven model is really complex though.
Evgeniy Polyakov wrote:
I never ever tried to say _everything_ must be driven by events.
IO must be driven, it is a must IMO.
Do you disagree with Linus' post about the difficulty of treating
open(), fstat(), page faults, etc. as events? Or do you not consider
them to be IO?
Chris
On Mon, 26 Feb 2007, Evgeniy Polyakov wrote:
I want to say, that read() consists of tons of events, but programmer
needs only one - data is ready in requested buffer. Programmer might
not even know what is the object behind provided file descriptor.
One only wants data in the buffer.
On Mon, Feb 26, 2007 at 12:56:33PM -0600, Chris Friesen ([EMAIL PROTECTED])
wrote:
Evgeniy Polyakov wrote:
I never ever tried to say _everything_ must be driven by events.
IO must be driven, it is a must IMO.
Do you disagree with Linus' post about the difficulty of treating
open(),
On Mon, Feb 26, 2007 at 11:22:46AM -0800, Linus Torvalds ([EMAIL PROTECTED])
wrote:
See? Stop blathering about how everything is an event. THAT'S NOT
RELEVANT. I've told you a hundred times - they may be logically
equivalent, but that doesn't change ANYTHING. Event-based programming
simply
On Sun, 25 Feb 2007, Evgeniy Polyakov wrote:
Why is userspace rescheduling on the order of tens of times faster than
kernel/user?
About 50 times in my Opteron 254 actually. That's libpcl's (swapcontext
based) cobench against lat_ctx.
- Davide
On Mon, 26 Feb 2007, Ingo Molnar wrote:
* Ingo Molnar [EMAIL PROTECTED] wrote:
please also try evserver_epoll_threadlet.c that i've attached below -
it uses epoll as the main event mechanism but does threadlets for
request handling.
find updated code below - your evserver_epoll.c
On Mon, 26 Feb 2007, Evgeniy Polyakov wrote:
Will you argue that people do things like
num = epoll_wait()
for (i=0; i<num; ++i) {
process(event[i])?
}
I have several times told you that I argue for a *combination* of
event-based interfaces and thread-like code. And that the choice
* Linus Torvalds [EMAIL PROTECTED] wrote:
Reading from the disk is _exactly_ the same - the same waiting for
buffer_heads/pages, and (since it is bigger) it can be easily
transferred to event driven model. Ugh, wait, it not only _can_ be
transferred, it is already done in kevent AIO,
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
If kernelspace rescheduling is that fast, then please explain to me why
the userspace one always beats kernel/userspace?
because 'user space scheduling' makes no sense? I explained my thinking
about that in a past mail:
--
One
On Mon, 26 Feb 2007, Jens Axboe wrote:
Some more results, using a larger number of processes and io depths. A
repeat of the tests from friday, with added depth 2 for syslet and
libaio:
Engine Depth Processes Bw (MiB/sec)
On Mon, Feb 26, 2007 at 09:35:43PM +0100, Ingo Molnar wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
If kernelspace rescheduling is that fast, then please explain to me why
the userspace one always beats kernel/userspace?
because 'user space scheduling' makes no sense? I explained my
On Mon, 26 Feb 2007, Evgeniy Polyakov wrote:
2. its notifications do not go through the second loop, i.e. it is O(1),
not O(ready_num), and notifications happen directly from the internals of
the appropriate subsystem, which does not require special wakeup
(although it can be done too).
Sorry
On Mon, Feb 26, 2007 at 03:45:48PM +0100, Jens Axboe wrote:
On Mon, Feb 26 2007, Suparna Bhattacharya wrote:
On Mon, Feb 26, 2007 at 02:57:36PM +0100, Jens Axboe wrote:
Some more results, using a larger number of processes and io depths. A
repeat of the tests from friday, with added
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
On Mon, Feb 26, 2007 at 01:51:23PM +0100, Ingo Molnar ([EMAIL PROTECTED])
wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Even having main dispatcher as epoll/kevent loop, the _whole_
threadlet model is absolutely micro-thread in
On 2/25/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
Fundamentally a kernel thread is just its
EIP/ESP [on x86, similar on other architectures] - which can be
saved/restored in near zero time.
That's because the kernel address space is identical in every
process's MMU context, so the MMU doesn't
On Thu, 22 Feb 2007, Evgeniy Polyakov wrote:
>
> My tests show that with 4k connections per second (8k concurrency) more
> than 20k connections of 80k total block in tcp_sendmsg() over gigabit
> lan between quite fast machines.
Why do people *keep* taking this up as an issue?
Use
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> Interesting discussion, that will be very fun if kevent will lose
> badly :)
with your keepalive test no way can it lose against 80,000 sync
threadlets - it's pretty much the worst-case thing for threadlets while
it's the best-case for kevents.
On Sun, Feb 25, 2007 at 07:34:38PM +0100, Ingo Molnar wrote:
>
> * Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
>
> > > thx - i guess i should just run them without any options and they
> > > bind themselves to port 80? What 'ab' options are you using
> > > typically to measure them?
> >
> >
On Sun, Feb 25, 2007 at 08:04:15PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
>
> * Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
>
> > Kevent is a _very_ small entity and there is _no_ cost of requeueing
> > (well, there is list_add guarded by lock) - after it is done, process
> > can start
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> > thx - i guess i should just run them without any options and they
> > bind themselves to port 80? What 'ab' options are you using
> > typically to measure them?
>
> Yes, but they require /tmp/index.html to have http header and actual
> data
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> Kevent is a _very_ small entity and there is _no_ cost of requeueing
> (well, there is list_add guarded by lock) - after it is done, process
> can start real work. With rescheduling there are _too_ many things to
> be done before we can start
Ingo Molnar wrote:
> now look at kevents as the queueing model. It does not queue 'tasks', it
> lets user-space queue requests in essence, in various states. But it's
> still the same conceptual thing: a memory buffer with some state
> associated to it. Yes, it has no legacies, it has no
Ingo Molnar wrote:
> if you create a threadlet based test-webserver, could you please do a
> comparable kevents implementation as well? I.e. same HTTP parser (or
> non-parser, as usually the case is with prototypes ;). Best would be
> something that one could trigger between threadlet and kevent
On Sun, Feb 25, 2007 at 07:22:30PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
> > > Do you have any link where i could check the type of HTTP parsing
> > > and send transport you are (or will be) using? What type of http
> > > client are you using to measure, with precisely what options?
> >
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> > thx - i guess i should just run them without any options and they
> > bind themselves to port 80? What 'ab' options are you using
> > typically to measure them?
>
> Yes, but they require /tmp/index.html to have http header and actual
> data
On Sun, Feb 25, 2007 at 09:21:35PM +0300, Evgeniy Polyakov ([EMAIL PROTECTED])
wrote:
> > Do you have any link where i could check the type of HTTP parsing and
> > send transport you are (or will be) using? What type of http client are
> > you using to measure, with precisely what options?
>
>
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> > > Do you have any link where i could check the type of HTTP parsing
> > > and send transport you are (or will be) using? What type of http
> > > client are you using to measure, with precisely what options?
> >
> > For example this ones
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> > Do you have any link where i could check the type of HTTP parsing
> > and send transport you are (or will be) using? What type of http
> > client are you using to measure, with precisely what options?
>
> For example this ones (essentially the
On Sun, Feb 25, 2007 at 06:54:37PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
>
> * Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
>
> > > hm, what tree are you using as a base? The syslet patches are
> > > against v2.6.20 at the moment. (the x86 PDA changes will probably
> > > interfere with
On Sun, Feb 25, 2007 at 06:45:05PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
>
> * Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
>
> > My main concern was only about the situation when we end up with a
> > truly blocking context (like the network), and this results in having
> > thousands of
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> > hm, what tree are you using as a base? The syslet patches are
> > against v2.6.20 at the moment. (the x86 PDA changes will probably
> > interfere with it on v2.6.21-rc1-ish kernels) Note that otherwise
> > the syslet/threadlet patches are for
On Sun, Feb 25, 2007 at 06:23:38PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
>
> * Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
>
> > On Wed, Feb 21, 2007 at 10:13:55PM +0100, Ingo Molnar ([EMAIL PROTECTED])
> > wrote:
> > > this is the v3 release of the syslet/threadlet subsystem:
> > >
>
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> My main concern was only about the situation when we end up with a
> truly blocking context (like the network), and this results in having
> thousands of threads doing the work - even with most of them
> sleeping, there is a problem with memory
* Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> On Wed, Feb 21, 2007 at 10:13:55PM +0100, Ingo Molnar ([EMAIL PROTECTED])
> wrote:
> > this is the v3 release of the syslet/threadlet subsystem:
> >
> >http://redhat.com/~mingo/syslet-patches/
>
> There is no %xgs.
>
> ---
On Wed, Feb 21, 2007 at 10:13:55PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
> this is the v3 release of the syslet/threadlet subsystem:
>
>http://redhat.com/~mingo/syslet-patches/
There is no %xgs.
--- ./arch/i386/kernel/process.c~ 2007-02-24 22:56:14.0 +0300
+++
On Wed, Feb 21, 2007 at 10:13:55PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
this is the v3 release of the syslet/threadlet subsystem:
http://redhat.com/~mingo/syslet-patches/
There is no %xgs.
--- ./arch/i386/kernel/process.c~ 2007-02-24 22:56:14.0 +0300
+++
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
On Wed, Feb 21, 2007 at 10:13:55PM +0100, Ingo Molnar ([EMAIL PROTECTED])
wrote:
this is the v3 release of the syslet/threadlet subsystem:
http://redhat.com/~mingo/syslet-patches/
There is no %xgs.
--- ./arch/i386/kernel/process.c~
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
My main concern was only about the situation when we end up with a
truly blocking context (like the network), and this results in having
thousands of threads doing the work - even with most of them
sleeping, there is a problem with memory overhead
On Sun, Feb 25, 2007 at 06:23:38PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
On Wed, Feb 21, 2007 at 10:13:55PM +0100, Ingo Molnar ([EMAIL PROTECTED])
wrote:
this is the v3 release of the syslet/threadlet subsystem:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
hm, what tree are you using as a base? The syslet patches are
against v2.6.20 at the moment. (the x86 PDA changes will probably
interfere with it on v2.6.21-rc1-ish kernels) Note that otherwise
the syslet/threadlet patches are for x86 only
On Sun, Feb 25, 2007 at 06:45:05PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
My main concern was only about the situation when we end up with a
truly blocking context (like the network), and this results in having
thousands of threads doing
On Sun, Feb 25, 2007 at 06:54:37PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
hm, what tree are you using as a base? The syslet patches are
against v2.6.20 at the moment. (the x86 PDA changes will probably
interfere with it on
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Do you have any link where i could check the type of HTTP parsing
and send transport you are (or will be) using? What type of http
client are you using to measure, with precisely what options?
For example this ones (essentially the same,
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Do you have any link where i could check the type of HTTP parsing
and send transport you are (or will be) using? What type of http
client are you using to measure, with precisely what options?
For example this ones (essentially the
On Sun, Feb 25, 2007 at 09:21:35PM +0300, Evgeniy Polyakov ([EMAIL PROTECTED])
wrote:
Do you have any link where i could check the type of HTTP parsing and
send transport you are (or will be) using? What type of http client are
you using to measure, with precisely what options?
For
On Sun, Feb 25, 2007 at 07:22:30PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
Do you have any link where i could check the type of HTTP parsing
and send transport you are (or will be) using? What type of http
client are you using to measure, with precisely what options?
For
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
thx - i guess i should just run them without any options and they
bind themselves to port 80? What 'ab' options are you using
typically to measure them?
Yes, but they require /tmp/index.html to have http header and actual
data page. They
Ingo Molnar wrote:
if you create a threadlet based test-webserver, could you please do a
comparable kevents implementation as well? I.e. same HTTP parser (or
non-parser, as usually the case is with prototypes ;). Best would be
something that one could trigger between threadlet and kevent mode,
Ingo Molnar wrote:
now look at kevents as the queueing model. It does not queue 'tasks', it
lets user-space queue requests in essence, in various states. But it's
still the same conceptual thing: a memory buffer with some state
associated to it. Yes, it has no legacies, it has no priorities
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Kevent is a _very_ small entity and there is _no_ cost of requeueing
(well, there is list_add guarded by lock) - after it is done, process
can start real work. With rescheduling there are _too_ many things to
be done before we can start new
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
thx - i guess i should just run them without any options and they
bind themselves to port 80? What 'ab' options are you using
typically to measure them?
Yes, but they require /tmp/index.html to have http header and actual
data page. They
On Sun, Feb 25, 2007 at 08:04:15PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Kevent is a _very_ small entity and there is _no_ cost of requeueing
(well, there is list_add guarded by lock) - after it is done, process
can start real work.
On Sun, Feb 25, 2007 at 07:34:38PM +0100, Ingo Molnar wrote:
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
thx - i guess i should just run them without any options and they
bind themselves to port 80? What 'ab' options are you using
typically to measure them?
Yes, but they
* Evgeniy Polyakov [EMAIL PROTECTED] wrote:
Interesting discussion, that will be very fun if kevent will lose
badly :)
with your keepalive test no way can it lose against 80,000 sync
threadlets - it's pretty much the worst-case thing for threadlets while
it's the best-case for kevents. Try
On Thu, 22 Feb 2007, Evgeniy Polyakov wrote:
My tests show that with 4k connections per second (8k concurrency) more
than 20k connections of 80k total block in tcp_sendmsg() over gigabit
lan between quite fast machines.
Why do people *keep* taking this up as an issue?
Use
On 2/25/07, Ingo Molnar [EMAIL PROTECTED] wrote:
Fundamentally a kernel thread is just its
EIP/ESP [on x86, similar on other architectures] - which can be
saved/restored in near zero time.
That's because the kernel address space is identical in every
process's MMU context, so the MMU doesn't
On 2/24/07, Davide Libenzi wrote:
Ok, roger that. But why are you playing "Google & Preach" games to Ingo,
that ate bread and CPUs for the last 15 years?
Sure I used Google -- for clickable references so that lurkers can
tell I'm not making these things up as I go along. Ingo and Alan have
On Sat, 24 Feb 2007, Michael K. Edwards wrote:
> The preceding may contain errors in detail -- I am neither a CPU
> architect nor an x86 compiler writer nor even a serious kernel hacker.
Ok, roger that. But why are you playing "Google & Preach" games to Ingo,
that ate bread and CPUs for the
On 2/23/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> This is a fundamental misconception. [...]
> The scheduler, on the other hand, has to blow and reload all of the
> hidden state associated with force-loading the PC and wherever your
> architecture keeps its TLS (maybe not the whole TLB, but
On Fri, Feb 23 2007, Joel Becker wrote:
> On Fri, Feb 23, 2007 at 01:52:47PM +0100, Jens Axboe wrote:
> > Results:
> >
> > Engine Depth Bw (MiB/sec)
> >
> > libaio    1       441
> > syslet    1       574
On Fri, Feb 23 2007, Joel Becker wrote:
On Fri, Feb 23, 2007 at 01:52:47PM +0100, Jens Axboe wrote:
Results:
Engine Depth Bw (MiB/sec)
libaio    1       441
syslet    1       574
sync
On 2/23/07, Ingo Molnar [EMAIL PROTECTED] wrote:
This is a fundamental misconception. [...]
The scheduler, on the other hand, has to blow and reload all of the
hidden state associated with force-loading the PC and wherever your
architecture keeps its TLS (maybe not the whole TLB, but not
On Sat, 24 Feb 2007, Michael K. Edwards wrote:
The preceding may contain errors in detail -- I am neither a CPU
architect nor an x86 compiler writer nor even a serious kernel hacker.
Ok, roger that. But why are you playing "Google & Preach" games to Ingo,
that ate bread and CPUs for the last 15
On 2/24/07, Davide Libenzi davidel@xmailserver.org wrote:
Ok, roger that. But why are you playing "Google & Preach" games to Ingo,
that ate bread and CPUs for the last 15 years?
Sure I used Google -- for clickable references so that lurkers can
tell I'm not making these things up as I go along.
On 2/23/07, Michael K. Edwards <[EMAIL PROTECTED]> wrote:
which costs you a D-cache stall.) Now put an sprintf with a %d in it
between a couple of the syscalls, and _your_ arch is hurting. ...
er, that would be a %f. :-)
Cheers,
- Michael
I wrote:
(On a pre-EABI ARM, there is even a substantial
cache-related penalty for encoding the syscall number in the syscall
opcode, because you have to peek back at the text segment to see it,
which costs you a D-cache stall.)
Before you say it, I'm aware that this is not directly relevant
Thanks for taking me at least minimally seriously, Alan. Pretty
generous of you, all things considered.
On 2/23/07, Alan <[EMAIL PROTECTED]> wrote:
That example touches back into user space, but doesnt involve MMU changes
or cache flushes, or tlb flushes, or floating point.
True -- on an
> long my_threadlet_fn(void *data)
> {
>         char *name = data;
>         int fd;
>
>         fd = open(name, O_RDONLY);
>         if (fd < 0)
>                 goto out;
>
>         fstat(fd, &stat);
>         read(fd, buf, count)
>         ...
>
> out:
>         return threadlet_complete();
> }
>
On 2/23/07, Alan <[EMAIL PROTECTED]> wrote:
> Do you not understand that real user code touches FPU state at
> unpredictable (to the kernel) junctures? Maybe not in a database or a
We don't care. We don't have to care. The kernel threadlets don't execute
in user space and don't do FP.
On Fri, Feb 23, 2007 at 01:52:47PM +0100, Jens Axboe wrote:
> Results:
>
> Engine Depth Bw (MiB/sec)
>
> libaio    1       441
> syslet    1       574
> sync      1       589
> libaio
On Fri, 23 Feb 2007, Evgeniy Polyakov wrote:
> I was not clear - I meant why do we need to do that when we can run the
> same code in userspace? And better if we can have non-blocking dataflows
> and number of threads equal to number of processors...
I've a userspace library that does exactly
On Fri, Feb 23 2007, Suparna Bhattacharya wrote:
> On Fri, Feb 23, 2007 at 05:25:08PM +0100, Jens Axboe wrote:
> > On Fri, Feb 23 2007, Suparna Bhattacharya wrote:
> > > On Fri, Feb 23, 2007 at 03:58:26PM +0100, Ingo Molnar wrote:
> > > >
> > > > * Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
On Fri, Feb 23, 2007 at 09:43:14AM -0800, Davide Libenzi
(davidel@xmailserver.org) wrote:
> On Fri, 23 Feb 2007, Evgeniy Polyakov wrote:
>
> > On Thu, Feb 22, 2007 at 11:46:48AM -0800, Davide Libenzi
> > (davidel@xmailserver.org) wrote:
> > >
> > > A dynamic pool will smooth thread
On Fri, 23 Feb 2007, Evgeniy Polyakov wrote:
> On Thu, Feb 22, 2007 at 11:46:48AM -0800, Davide Libenzi
> (davidel@xmailserver.org) wrote:
> >
> > A dynamic pool will smooth thread creation/freeing up by a lot.
> > And, in my box a *pthread* create/free takes ~10us, at 1000/s is 10ms, 1%.
> >
On Fri, Feb 23, 2007 at 05:25:08PM +0100, Jens Axboe wrote:
> On Fri, Feb 23 2007, Suparna Bhattacharya wrote:
> > On Fri, Feb 23, 2007 at 03:58:26PM +0100, Ingo Molnar wrote:
> > >
> > > * Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
> > >
> > > > As a really crude (and not very realistic)
* Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
> > my expectation is that it should behave better with iodepth=2
> > (although i havent tried that yet).
>
> I picked up the fio snapshot from 22nd Feb
> (fio-git-2007012513.tar.gz) and used the v3 syslet patches from
> your
On Fri, Feb 23 2007, Suparna Bhattacharya wrote:
> On Fri, Feb 23, 2007 at 03:58:26PM +0100, Ingo Molnar wrote:
> >
> > * Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
> >
> > > As a really crude (and not very realistic) example of the potential
> > > impact of large numbers of outstanding
On Fri, Feb 23, 2007 at 03:58:26PM +0100, Ingo Molnar wrote:
>
> * Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
>
> > As a really crude (and not very realistic) example of the potential
> > impact of large numbers of outstanding IOs, I tried some quick direct
> > IO comparisons using fio:
>
* Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
> As a really crude (and not very realistic) example of the potential
> impact of large numbers of outstanding IOs, I tried some quick direct
> IO comparisons using fio:
>
> [global]
> ioengine=syslet-rw
> buffered=0
> rw=randread
> bs=64k
>
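The quoted job file is cut off; a complete fio job along those lines might look as follows. Everything beyond the four quoted lines (size, iodepth, the job section and its filename) is an illustrative guess, not Suparna's actual configuration:

```ini
[global]
ioengine=syslet-rw
buffered=0
rw=randread
bs=64k
size=1g         ; guess
iodepth=64      ; guess

[randread]      ; guess
filename=testfile   ; guess
```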
On Thu, Feb 22, 2007 at 03:36:58PM +0100, Ingo Molnar wrote:
>
> * Suparna Bhattacharya <[EMAIL PROTECTED]> wrote:
>
> > > maybe it will, maybe it wont. Lets try? There is no true difference
> > > between having a 'request structure' that represents the current
> > > state of the HTTP
On Fri, Feb 23, 2007 at 01:52:47PM +0100, Jens Axboe wrote:
> On Wed, Feb 21 2007, Ingo Molnar wrote:
> > this is the v3 release of the syslet/threadlet subsystem:
> >
> >http://redhat.com/~mingo/syslet-patches/
>
> [snip]
>
> Ingo, some testing of the experimental syslet queueing stuff, in
On Wed, Feb 21 2007, Ingo Molnar wrote:
> this is the v3 release of the syslet/threadlet subsystem:
>
>http://redhat.com/~mingo/syslet-patches/
[snip]
Ingo, some testing of the experimental syslet queueing stuff, in the
syslet-testing branch of fio.
Fio job file:
[global]
bs=8k
size=1g
On Fri, Feb 23, 2007 at 03:22:25PM +0300, Evgeniy Polyakov ([EMAIL PROTECTED])
wrote:
> I meant that we end up with having one thread per IO - they were
> preallocated, but that does not matter. And what about your idea of
> switching userspace threads to cachemiss threads?
>
> My main concern
* Michael K. Edwards <[EMAIL PROTECTED]> wrote:
> On 2/22/07, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > maybe it will, maybe it wont. Lets try? There is no true difference
> > between having a 'request structure' that represents the current
> > state of the HTTP connection plus a
On Fri, Feb 23, 2007 at 12:51:52PM +0100, Ingo Molnar ([EMAIL PROTECTED]) wrote:
> > [...] Those 20k blocked requests were created in about 20 seconds, so
> > roughly saying we have 1k of thread creation/freeing per second - do
> > we want this?
>
> i'm not sure why you mention thread creation
On Thu, Feb 22, 2007 at 11:46:48AM -0800, Davide Libenzi
(davidel@xmailserver.org) wrote:
> > I tried already :) - I just made allocations atomic in tcp_sendmsg() and
> > ended up with 1/4 of the sends blocking (I counted both allocation
> > failure and socket queue overflow). Those 20k blocked