On Wed, Nov 13, 2013 at 5:32 PM, Bill Myers <[email protected]> wrote:
>> The issue with async/await is that while it maps very well to the AIO
>> primitives like IOCP and POSIX AIO, it doesn't map well to something
>> that's solid on Linux. It's just not how I/O is done on the platform.
>> It uses *non-blocking* I/O to scale up socket servers, with
>> notification of ready state rather than completion state. This doesn't
>> work for file system access though.
>
> Well, I/O would work exactly the same as the current libuv-based Rust tasks
> approach, except that one needs a stack for each CPU thread and not for each
> task, because task creation functions return when a task wants to block, and
> thus there is no stack to save and restore.
>
> On Linux, the epoll interface allows one to implement this, and I
> think it's what libuv uses.

The epoll interface only allows waiting for ready (not completion)
state, and only on sockets and similar file descriptors. It does not
work for file system input/output or metadata operations. As a hack,
timers, signals and pluggable events have been exposed as file
descriptors with ready events, but there is nothing for doing truly
non-blocking file I/O. The libuv implementation of non-socket I/O uses
a thread pool on Linux. Note that *writes* to sockets can still block,
so a thread pool may be needed even for those.

> The only problem is that Linux doesn't really support asynchronously
> resolving file paths to inodes (aka opening files), but that can be done on
> a dedicated thread pool, with the advantage that the threads don't do
> anything, so they don't have zombie stacks.

The file metadata cache makes opening files fast enough that there's
no point in treating it as blocking. At that extreme you might as well
consider memory accesses blocking, because they might require fetching
data into the CPU cache.

> The problem with blocking on all threads/the 1:1 thread model is that if you
> do a computation that requires 8MB of stack, and then block with a shallow
> call stack, the 8MB of stack are not freed, so you waste 8MB per TCP
> connection in a TCP server example, which means that on a system with 32GB
> RAM you can only service 4096 TCP connections without swapping to disk
> instead of millions.

There's no difference between M:N scheduling and 1:1 scheduling with
regard to resource usage and stacks: every task has to have a stack
either way.

> A dedicated thread pool for blocking I/O doesn't have this issue because it
> can run with only 4KB stack or so since it doesn't do anything stack
> intensive, and the number of its threads can be limited without user-visible
> results.

Using a thread pool for blocking I/O is a significant performance
issue. Is it okay for small operations to be 5x, 10x, 20x or 30x
slower than C in a *systems language*? Perhaps 15% overhead would be
acceptable, but context switches to and from a thread pool aren't
going to get you that.

It's always going to be much slower to pretend Linux I/O is
non-blocking via layers of indirection and dispatching to other
threads. The model Rust wants to use is appreciated enough that the
Linux kernel will likely support it directly via 1:1 threading like
Win7/8 already do.
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev
