On 05/31/2013 10:18 AM, Vadim wrote:

    ## Design and implement some solution for select / async events

    We need a way to efficiently wait on multiple types of events at
    once, including port receives, I/O reads, socket accepts, timers.
    This has some very complicated requirements to satisfy, as
    detailed in the linked issue, and I'm not sure what the right
    abstractions are here. This is super important and the biggest
    risk to the whole effort. If anybody has opinions about this topic
    I would love to hear them.

    https://github.com/mozilla/rust/issues/6842


Hi Brian,

So you mention .NET asyncs in the linked issue... The .NET way of doing this is to have functions that initiate an async operation and return a future. You then explicitly wait for that future to resolve. If several asyncs need to run concurrently, you start them all, then pass the list of futures to a helper function that returns another future, which resolves when all of the underlying futures have resolved. Another helper function lets you wait until any one of the futures resolves. Putting all this together (loosely translating C# into Rust), you'd write something like this:

let operations = [start_operation1(), start_operation2(), ...];
let timer = sleep(timeout); // returns a future that resolves after the specified timeout expires

let top_level_future = resolve_when_any( [resolve_when_all(operations), timer] );

await top_level_future; // wait until the future has been resolved.

if timer.is_complete() {
    // we timed out
} else {
    // all operations completed (or failed) before the timer expired
}

Thank you for the clear example. I like this model.

With this problem in general, I think the obvious solutions amount to taking one of two approaches: translate I/O events into pipe events, or translate pipe events into I/O events. Solving the problem efficiently for either one alone is rather simpler than solving both. The example you show is a promising model that looks like it could naively be implemented by buffering I/O into pipes. bblum and I talked about an implementation that would work well for this approach, but it has costs. I imagine it working like this (a rough sketch in code follows the list):

1) The resolve_xxx function partitions the elements into pipe-based types and I/O types.
2) For each of the I/O types it creates a new pipe and registers a uv event. Note that because of I/O-scheduler affinity, some of these may cause the task to migrate between threads.
3) Now we just wait on all the pipes.
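
To make the buffering idea concrete, here is a very rough sketch. It is written against plain OS threads and mpsc channels purely for illustration, not against our actual pipes/uv machinery, and names like `into_receiver` and the funneling channel are made up. The point is just that once every operation is reduced to a port/receiver, "wait for any" collapses into a single receive:

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Wrap a blocking operation in its own thread and expose it as a channel
// receiver -- the moral equivalent of registering a uv event on a fresh pipe
// and buffering the result into it.
fn into_receiver<T, F>(op: F) -> mpsc::Receiver<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(op()); // buffer the result into the channel
    });
    rx
}

fn main() {
    // Two "I/O" operations and a timer, all reduced to channel receivers.
    let op1 = into_receiver(|| { thread::sleep(Duration::from_millis(50)); "op1 done" });
    let op2 = into_receiver(|| { thread::sleep(Duration::from_millis(80)); "op2 done" });
    let timer = into_receiver(|| { thread::sleep(Duration::from_millis(200)); "timed out" });

    // "resolve_when_any": funnel everything into one channel and take the
    // first message; a real implementation would use a proper select.
    let (any_tx, any_rx) = mpsc::channel();
    for rx in [op1, op2, timer] {
        let any_tx = any_tx.clone();
        thread::spawn(move || {
            if let Ok(v) = rx.recv() {
                let _ = any_tx.send(v);
            }
        });
    }
    println!("first to resolve: {}", any_rx.recv().unwrap());
}

The extra channel per operation and the forwarding threads are exactly the costs mentioned above; this only shows the shape of the approach, not an efficient implementation.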



Composition of futures works out pretty nicely; however, it also has downsides:

1. The simple case of a single blocking I/O operation now looks like "result = await(start_operation())" instead of just "result = operation()". .NET solves this by providing both synchronous and asynchronous variants of each operation in most cases.

I think we would want to do this too for efficiency reasons. The above outline has two major costs: the first is the extra pipes and the second is the buffering of the I/O. For example, the synchronous read method looks like `read(buf: &mut [u8])` where the buf is typically allocated on the stack. In the future scenario presumably it would be more like `read() -> Future<~[u8]>`, forcing a heap allocation, but maybe `read(buf: &mut [u8]) -> Future<&mut [u8]>` is workable.
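
To put the alternatives side by side, here is what the three shapes might look like. This is purely illustrative: `Future` is a stand-in type rather than a real API, and `Vec<u8>` stands in for `~[u8]`:

use std::marker::PhantomData;

// Placeholder for whatever future abstraction Rust ends up with.
struct Future<T> {
    _marker: PhantomData<T>,
}

trait SyncRead {
    // Synchronous variant: the caller supplies the buffer, which can live
    // on the stack; the call blocks the task until the read completes.
    fn read(&mut self, buf: &mut [u8]) -> usize;
}

trait AsyncRead {
    // Owning-future variant: the result buffer is allocated for you,
    // which forces a heap allocation per read.
    fn read_owned(&mut self) -> Future<Vec<u8>>;

    // Borrowing-future variant: reuses the caller's buffer and avoids the
    // allocation, at the cost of tying the future to the borrow.
    fn read_into<'a>(&'a mut self, buf: &'a mut [u8]) -> Future<&'a mut [u8]>;
}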

2. Each I/O operation now needs to allocate heap memory for the future object. This has been known to create GC performance problems for .NET web apps that process large numbers of small requests. If these can live on the stack, though, maybe this wouldn't be a problem for Rust.

Haha, yep that's a concern.
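
Something like the difference below is what I have in mind; again, `Future` is just a placeholder type used for illustration:

struct Future<T> {
    value: Option<T>, // the resolved value, once available
}

fn by_value() {
    // Lives entirely in the caller's frame: no per-operation allocation,
    // so no allocator/GC pressure per request.
    let f: Future<u32> = Future { value: Some(42) };
    drop(f);
}

fn boxed() {
    // The .NET-style shape: each operation allocates its future object.
    let f: Box<Future<u32>> = Box::new(Future { value: Some(42) });
    drop(f);
}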



As a side note, your list doesn't have anything regarding cancellation of async operations. In the example above you'd probably want to attempt to cancel the outstanding I/O operations when the timeout expires. .NET provides standard infrastructure for this, and I think Rust will need something similar too.

Yes.
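
For reference, a cooperative cancellation token along the lines of .NET's CancellationToken could be as small as the sketch below; the names are hypothetical, not an existing or proposed API:

use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Clone)]
struct CancellationToken {
    cancelled: Arc<AtomicBool>,
}

impl CancellationToken {
    fn new() -> CancellationToken {
        CancellationToken { cancelled: Arc::new(AtomicBool::new(false)) }
    }

    // Flips the shared flag; every clone of the token observes it.
    fn cancel(&self) {
        self.cancelled.store(true, Ordering::SeqCst);
    }

    fn is_cancelled(&self) -> bool {
        self.cancelled.load(Ordering::SeqCst)
    }
}

// An operation would poll the token between units of work and bail out
// early, e.g. when the timer future in the earlier example fires first.
fn do_work(token: &CancellationToken) {
    while !token.is_cancelled() {
        // ...perform one bounded unit of the operation...
    }
}

The caller of resolve_when_any would hand a clone of the token to each operation and call cancel() on the survivors once the first future (or the timeout) resolves.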


HTH,
Vadim

