On 10/21/13 8:48 PM, Daniel Micay wrote:
> Segmented stacks result in extra code being added to every function,
> loss of memory locality, high overhead for calls into C and
> unpredictable performance hits due to segment thrashing.
>
> They do seem important for making the paradigm of one task per
> connection viable for servers, but it's hard to balance that with other
> needs.
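For concreteness, the "extra code in every function" is a prologue check along these lines (a schematic with made-up addresses, not actual compiler output): every function compares the stack pointer against the current segment's limit and calls into the runtime for a new segment if the frame won't fit.

```rust
// Schematic of a segmented-stack function prologue. Real compilers emit
// this inline as a couple of instructions; the branch runs on every call,
// which is where the per-function cost and C-interop overhead come from.

struct Segment {
    limit: usize, // lowest usable address of the current stack segment
}

/// Returns true when the incoming frame would run past the segment limit,
/// i.e. when the runtime would have to allocate a fresh segment.
fn needs_new_segment(sp: usize, frame_size: usize, seg: &Segment) -> bool {
    sp - frame_size < seg.limit
}

fn main() {
    let seg = Segment { limit: 0x1000 };
    // A frame that underflows the segment forces an allocation...
    assert!(needs_new_segment(0x1100, 0x200, &seg));
    // ...while one that fits proceeds normally.
    assert!(!needs_new_segment(0x2000, 0x200, &seg));
}
```

Segment thrashing is this check flipping back and forth across a call made in a hot loop: each iteration allocates a segment on the way in and frees it on the way out.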
I'm not sure they're that important even for that use case. Is 4 kB
(page size) per connection that bad? You won't compete with nginx's
memory usage (2.5 MB for 10,000 connections, compared to 40 MB for the
same with 4 kB stacks), but you do compete with Go (4 kB initial stack
segment) and Erlang (2.4 kB on 64-bit).
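Spelling out the arithmetic behind those figures, using the round numbers above:

```rust
// Back-of-the-envelope check of the memory figures in the post:
// 10,000 connections with one 4 kB (4096-byte) stack each, versus
// nginx's reported ~2.5 MB total for the same number of connections.
fn main() {
    let connections: u64 = 10_000;
    let stack_bytes: u64 = 4096; // one page per task
    let task_total = connections * stack_bytes;
    assert_eq!(task_total, 40_960_000); // ~40 MB, as stated

    let nginx_total: u64 = 2_500_000; // ~2.5 MB
    // nginx is spending on the order of 250 bytes per connection --
    // well under even a single page of stack.
    assert_eq!(nginx_total / connections, 250);

    println!("task model: {} bytes, nginx: {} bytes", task_total, nginx_total);
}
```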
Besides, if we really wanted to go head-to-head with nginx we could
introduce "microthreads" with very small stack limits (256 bytes or
whatever) that just fail if you run off the end. Such a routine would be
utterly miserable to program correctly but would be necessary if you
want to compete with nginx in the task model anyhow :)
Realistically, though, if you are writing an nginx killer you will want
to use async I/O and avoid the task model, as even the overhead of
context switching via userspace register save-and-restore is going to
put you at a disadvantage. Given what I've seen of the nginx code you
aren't going to beat it without counting every cycle.
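As a sketch of what that async shape looks like (field names are hypothetical, not from any real server): per-connection state becomes a small explicit struct advanced by an event loop, rather than a suspended task with its own stack, which is how the per-connection footprint gets below a page.

```rust
// Minimal sketch of event-loop-style per-connection state. Instead of a
// stack per task, each connection carries a tiny state machine that the
// event loop advances when its fd becomes readable or writable.
use std::mem::size_of;

enum ConnState {
    ReadingRequest { bytes_read: u16 },
    WritingResponse { bytes_written: u16 },
}

struct Conn {
    fd: i32, // the socket this state belongs to
    state: ConnState,
}

fn main() {
    // The whole per-connection record is a handful of bytes -- orders of
    // magnitude below a 4 kB stack, and below even a 256-byte microthread.
    assert!(size_of::<Conn>() <= 256);
    println!("per-connection state: {} bytes", size_of::<Conn>());
}
```

The price, of course, is exactly what the task model hides: control flow is turned inside out, and every await point must be written as an explicit state transition.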
Patrick
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev