> The issue with async/await is that while it maps very well to the AIO
> primitives like IOCP and POSIX AIO, it doesn't map well to something
> that's solid on Linux. It's just not how I/O is done on the platform.
> It uses *non-blocking* I/O to scale up socket servers, with
> notification of ready state rather than completion state. This doesn't
> work for file system access though.

Well, I/O would work exactly the same as in the current libuv-based Rust tasks 
approach, except that you need a stack for each CPU thread rather than for each 
task: a task's functions return to the scheduler when the task wants to block, 
so there is no stack to save and restore.
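A minimal sketch of that idea (all names here are hypothetical, not from any 
Rust library): a task keeps its state in an explicit enum and its step function 
simply returns at each would-be blocking point, so nothing has to survive on a 
per-task stack.

```rust
// Hypothetical sketch: a task as an explicit state machine. Instead of
// blocking, step() does a bounded amount of work and returns to the
// scheduler; the task's progress lives in the enum, not on a saved stack.
enum Task {
    Counting { n: u32, target: u32 },
    Done { total: u32 },
}

impl Task {
    fn step(self) -> Task {
        match self {
            // Still work to do: advance one step and return to the scheduler.
            Task::Counting { n, target } if n < target => {
                Task::Counting { n: n + 1, target }
            }
            // Reached the target: transition to the terminal state.
            Task::Counting { n, .. } => Task::Done { total: n },
            done => done,
        }
    }
}

fn main() {
    // A trivial "scheduler": resume the task until it finishes.
    let mut task = Task::Counting { n: 0, target: 3 };
    loop {
        task = task.step();
        if let Task::Done { total } = task {
            println!("total = {}", total); // prints "total = 3"
            break;
        }
    }
}
```

A real scheduler would hold many such tasks and resume whichever one epoll 
reports as ready, all on one small per-CPU-thread stack.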

On Linux, the epoll interface makes it possible to implement this, and I 
believe it's what libuv uses there.
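The readiness model that epoll exposes can be seen even without raw epoll 
calls; this sketch uses only std (the raw epoll_wait loop is omitted): a 
non-blocking socket reports "not ready" as a WouldBlock error instead of 
parking the thread, and the event loop would retry once epoll says the fd is 
ready.

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Readiness-style I/O: the listener is non-blocking, so accept() on an
    // empty queue returns WouldBlock instead of blocking the thread.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    listener.set_nonblocking(true)?;

    match listener.accept() {
        Ok((_stream, peer)) => println!("accepted {}", peer),
        Err(e) if e.kind() == ErrorKind::WouldBlock => {
            // No pending connection: hand control back to the event loop,
            // which would epoll_wait() on the fd and retry when it's ready.
            println!("not ready yet");
        }
        Err(e) => return Err(e),
    }
    Ok(())
}
```

This is notification of *ready* state, as the quoted mail says, rather than 
completion state as with IOCP or POSIX AIO.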

The only problem is that Linux doesn't really support asynchronously resolving 
file paths to inodes (i.e., opening files), but that can be done on a dedicated 
thread pool, with the advantage that those threads do nothing stack-intensive, 
so they don't accumulate zombie stacks.
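A minimal sketch of such a pool, using one worker thread and std channels (the 
channel layout and names are my own; "/dev/null" assumes a Unix system): the 
blocking open happens off the event loop, on a thread whose stack can stay 
tiny because it runs no user code.

```rust
use std::fs::File;
use std::sync::mpsc;
use std::thread;

fn main() {
    // Requests carry a path; responses carry the result of the blocking open.
    let (req_tx, req_rx) = mpsc::channel::<String>();
    let (res_tx, res_rx) = mpsc::channel::<std::io::Result<File>>();

    let worker = thread::Builder::new()
        // A small stack suffices: open(2) itself is not stack-hungry.
        // (The platform may round this up to its minimum stack size.)
        .stack_size(32 * 1024)
        .spawn(move || {
            for path in req_rx {
                // The blocking path-to-inode resolution happens here,
                // never on the event-loop thread.
                let _ = res_tx.send(File::open(path));
            }
        })
        .unwrap();

    req_tx.send("/dev/null".to_string()).unwrap();
    let opened = res_rx.recv().unwrap();
    println!("open succeeded: {}", opened.is_ok());

    drop(req_tx); // close the queue so the worker exits
    worker.join().unwrap();
}
```

Capping the number of such workers only limits how many opens are in flight at 
once; callers just queue up, which is exactly the "no user-visible results" 
property mentioned below.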

The problem with blocking on all threads (the 1:1 thread model) is that if a 
thread runs a computation requiring 8MB of stack and then blocks with a shallow 
call stack, those 8MB are not freed. In a TCP server that means wasting 8MB per 
connection, so a system with 32GB of RAM can service only 4096 TCP connections 
without swapping to disk, instead of millions.
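The arithmetic behind that figure, spelled out:

```rust
fn main() {
    // Back-of-the-envelope for the thread-per-connection case:
    // 8 MiB of resident stack held by each blocked connection,
    // against 32 GiB of RAM on the machine.
    let ram: u64 = 32 * 1024 * 1024 * 1024;        // 32 GiB
    let stack_per_conn: u64 = 8 * 1024 * 1024;     // 8 MiB
    println!("connections before swapping: {}", ram / stack_per_conn);
    // prints "connections before swapping: 4096"
}
```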

A dedicated thread pool for blocking I/O doesn't have this issue: since its 
threads do nothing stack-intensive, they can run with only 4KB of stack or so, 
and the number of threads can be limited without user-visible effects.

_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
