On 10/22/2013 12:56 AM, Vadim wrote:
> What I have in mind is not nginx, but rather something like a pub-sub
> service, with millions of idle connections, which don't do much most
> of the time. And no, I don't want an async I/O API :-) Not if I can
> help it. I am still hoping that Rust can avoid having separate sync
> and async I/O APIs.
Millions of concurrent tasks was always a long shot anyway. Even when we
had segmented stacks and put some effort into tuning, the most tasks we
ever ran in a 32-bit process was 500,000; 1 million was my goal. At that
time our minimum stack allocation was still around 3 kB. With the new
strategy we're talking more like 4 kB of stack allocation. So some use
cases will be served slightly worse, but we weren't doing that much
better before anyway.
> Not sure what context switching you are referring to... Surely nginx
> also needs to save and restore registers when it goes into kernel mode
> for the select syscall (or whatever it uses)?
The context switching cost of converting async I/O into synchronous I/O.
In the current task model every sync I/O operation incurs two context
switches: a switch from the task to the scheduler, which makes an async
I/O call and waits for completion, then a switch back to the task. That's
the minimum cost of not blocking other tasks while doing synchronous I/O.
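That round trip can be sketched with a toy model; the names here are illustrative, not the actual libgreen/scheduler API, and the "switches" are just counters standing in for userspace register save-and-restore:

```rust
// Toy model of the M:N cost described above: each "blocking" I/O call
// hands control to a scheduler, which performs the async operation and
// then resumes the calling task.
struct Scheduler {
    context_switches: u64,
}

impl Scheduler {
    fn new() -> Scheduler {
        Scheduler { context_switches: 0 }
    }

    // What a task's "synchronous" read actually does under the hood:
    fn sync_read(&mut self, buf: &mut [u8]) -> usize {
        self.context_switches += 1;   // 1. switch: task -> scheduler
        let n = self.async_read(buf); // 2. issue async op, wait for completion
        self.context_switches += 1;   // 3. switch: scheduler -> task
        n
    }

    fn async_read(&mut self, buf: &mut [u8]) -> usize {
        // Stand-in for the event-loop I/O; pretend data arrived.
        for b in buf.iter_mut() {
            *b = 0;
        }
        buf.len()
    }
}

fn main() {
    let mut sched = Scheduler::new();
    let mut buf = [0u8; 16];
    for _ in 0..100 {
        sched.sync_read(&mut buf);
    }
    // 100 sync I/O calls => 200 userspace context switches.
    println!("{}", sched.context_switches);
}
```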
> Regarding "microthreads": wouldn't those need to check the stack limit
> in every function in order to detect overflow, just as segmented stack
> prologues do? So what's the difference?
>
> Vadim
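For reference, the per-function check being discussed looks roughly like this sketch. Real compilers emit it as a couple of assembly instructions comparing the stack pointer against a limit in thread-local storage; this Rust version only simulates that with an explicit call and an approximate stack-pointer read:

```rust
// Toy version of a stack-limit prologue check. The limit lives in TLS,
// and every function entry compares the current stack pointer against it.
use std::cell::Cell;

thread_local! {
    static STACK_LIMIT: Cell<usize> = Cell::new(0);
}

fn check_stack(frame_size: usize) {
    let marker = 0u8;
    let sp = &marker as *const u8 as usize; // approximate current stack pointer
    STACK_LIMIT.with(|limit| {
        if sp - frame_size < limit.get() {
            panic!("stack overflow detected");
        }
    });
}

fn fib(n: u32) -> u32 {
    check_stack(64); // the "prologue" cost paid on every call
    if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
}

fn main() {
    let base = 0u8;
    let here = &base as *const u8 as usize;
    STACK_LIMIT.with(|l| l.set(here - 64 * 1024)); // pretend a 64 kB stack
    println!("{}", fib(20)); // well within the budget, so no panic
}
```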
> On Mon, Oct 21, 2013 at 9:52 PM, Patrick Walton <pwal...@mozilla.com> wrote:
> > On 10/21/13 8:48 PM, Daniel Micay wrote:
> > > Segmented stacks result in extra code being added to every function,
> > > loss of memory locality, high overhead for calls into C, and
> > > unpredictable performance hits due to segment thrashing.
> > >
> > > They do seem important for making the paradigm of one task per
> > > connection viable for servers, but it's hard to balance that with
> > > other needs.
> > I'm not sure they're that important even for that use case. Is 4 kB
> > (page size) per connection that bad? You won't compete with nginx's
> > memory usage (2.5 MB for 10,000 connections, compared to 40 MB for
> > the same with 4 kB stacks), but you do compete with Go (4 kB initial
> > stack segment) and Erlang (2.4 kB on 64-bit).
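Scaling those quoted per-connection figures up to the "millions of idle connections" target looks like this back-of-the-envelope sketch (the nginx per-connection cost is derived from the 2.5 MB / 10,000 connections figure above; everything else is taken directly from this thread):

```rust
fn main() {
    // Per-connection memory costs, in bytes, from the figures in this thread.
    let per_conn: [(&str, u64); 4] = [
        ("nginx (async state)", 2_500_000 / 10_000), // 250 B, from 2.5 MB / 10k conns
        ("Rust task (4 kB stack)", 4 * 1024),
        ("Go goroutine (4 kB segment)", 4 * 1024),
        ("Erlang process (2.4 kB)", 2_400),
    ];
    let conns: u64 = 1_000_000; // the "millions of idle connections" target

    for &(name, bytes) in per_conn.iter() {
        // Total memory for a million connections, in (decimal) MB.
        println!("{}: ~{} MB", name, bytes * conns / 1_000_000);
    }
}
```

At a million connections the stack allocations dominate: roughly 250 MB for nginx-style async state versus about 4 GB for 4 kB stacks.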
> > Besides, if we really wanted to go head-to-head with nginx we could
> > introduce "microthreads" with very small stack limits (256 bytes or
> > whatever) that just fail if you run off the end. Such a routine would
> > be utterly miserable to program correctly, but would be necessary if
> > you want to compete with nginx in the task model anyhow :)
> > Realistically, though, if you are writing an nginx killer you will
> > want to use async I/O and avoid the task model, as even the overhead
> > of context switching via userspace register save-and-restore is going
> > to put you at a disadvantage. Given what I've seen of the nginx code,
> > you aren't going to beat it without counting every cycle.
> >
> > Patrick
_______________________________________________
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev