Another thing to keep in mind: "awkward I/O multiplexing magic" is
likely *necessary* on some platforms to scale well. Or at least this
is the mythology. This is a numeric question that demands research.
Try writing a C program that does "the smallest thread you can make"
on each OS and tries to make 100,000 of them doing concurrent
blocking reads on 100,000 file descriptors. See if it scales as well
as a 100,000-way IOCP/kqueue/epoll approach. It might. It might not.
Kernel people are always tilting the balance one way or another,
sometimes userspace is just misinformed, working on old information.
There is something really wrong with the kernel if many blocking
threads are as cheap as an epoll loop :-)
We should still provide an API for doing pools, but they get used by the
user when needed, without the overhead of passing file descriptors to an
IO thread, doing the epoll, figuring out which task was handling that fd,
and switching back.
With lightweight tasks that can be migrated, it is possible to mix and
match. For example, have one thread doing accept and adding to epoll, and
creating tasks to serve the requests when data is available.
-Graydon
Cheers,
Rafael
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev