On Fri, May 31, 2013 at 3:45 PM, Brian Anderson <[email protected]> wrote:
>
> With this problem in general I think the obvious solutions amount to
> taking one of two approaches: translate I/O events into pipe events;
> translate pipe events into I/O events. Solving the problem efficiently for
> either one is rather simpler than solving both. The example you show is a
> promising model that looks like it could naively be implemented by
> buffering I/O into pipes. bblum and I talked about an implementation that
> would work well for this approach, but it has costs. I imagine it working
> like this.
>
> 1) The resolve_xxx function partitions the elements into pipesy types and
> I/O types.
> 2) For each of the I/O types it creates a new pipe and registers a uv
> event. Note that because of I/O-scheduler affinity some of these may cause
> the task to migrate between threads.
> 3) Now we just wait on all the pipes.
>
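The translation outlined above can be sketched in modern Rust syntax, using std::sync::mpsc channels as stand-ins for pipes and a spawned thread as a stand-in for a registered uv event; `Source` and `resolve` are hypothetical names, not anything from the actual runtime:

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

// Hypothetical event sources: some are already "pipesy", others are raw I/O.
enum Source {
    Pipe(Receiver<Vec<u8>>),
    Io(&'static str), // stand-in for an I/O operation registered with uv
}

// Sketch of the resolve step: every I/O source is turned into a pipe by
// spawning a task that performs the (simulated) I/O and sends the result,
// leaving a uniform set of receivers to wait on.
fn resolve(sources: Vec<Source>) -> Vec<Receiver<Vec<u8>>> {
    sources
        .into_iter()
        .map(|s| match s {
            Source::Pipe(rx) => rx,
            Source::Io(name) => {
                let (tx, rx) = channel();
                // In a real implementation this would register a uv event;
                // here a thread simulates the I/O completing.
                thread::spawn(move || {
                    let _ = tx.send(name.as_bytes().to_vec());
                });
                rx
            }
        })
        .collect()
}
```

The buffering cost Brian mentions shows up here as the `Vec<u8>` each simulated I/O sends through its pipe.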
What if futures were treated as one-shot pipes with a one-element queue
capacity, and "real" pipes were used only when there's a need for
buffering? That would help to reduce per-operation costs (also see the
notes below about allocation).
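A sketch of that idea in modern Rust, modeling a one-shot future as a sync_channel with a one-element buffer; `spawn_future` is a hypothetical name, and the receiver plays the role of the future:

```rust
use std::sync::mpsc::{sync_channel, Receiver};
use std::thread;

// A future modeled as a one-shot pipe: the computation runs in another
// task, the returned receiver acts as the future, and `recv` as a
// blocking `get`.
fn spawn_future<T, F>(f: F) -> Receiver<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    // Capacity 1: the single result is buffered, so the producer task
    // never blocks waiting for the consumer.
    let (tx, rx) = sync_channel(1);
    thread::spawn(move || {
        let _ = tx.send(f());
    });
    rx
}
```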
>
> I think we would want to do this too for efficiency reasons. The above
> outline has two major costs: the first is the extra pipes and the second is
> the buffering of the I/O. For example, the synchronous read method looks
> like `read(buf: &mut [u8])` where the buf is typically allocated on the
> stack. In the future scenario presumably it would be more like `read() ->
> Future<~[u8]>`, forcing a heap allocation, but maybe `read(buf: &mut [u8])
> -> Future<&mut [u8]>` is workable.
>
Not necessarily. In fact, .NET's signature for the read method is something
like this: fn read(buf: &mut [u8]) -> ~Future<int>; it returns just the
count of bytes read. This is perfectly fine for simple usage, because the
caller still has a reference to the buffer.
Now, if you want to send it over to another task, this is indeed a problem;
however, composition of futures comes to the rescue. Each .NET future has
a method that allows attaching a continuation, yielding a new future of
the continuation's result type:
trait Future<T> {
    fn continue_with<T1>(&self, cont: fn(T) -> T1) -> Future<T1>;
}

let mut buf = ~[0, ..1024];
let f1 = stream.read(&mut buf);
let f2: Future<(~[u8], int)> = f1.continue_with(|read| (buf, read));
Now f2 contains all the information needed to process the received data in
another task.
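For illustration, here is a rough modern-Rust analogue of continue_with, written as a free function over a channel-backed future (the receiver stands in for the future, and all names are hypothetical, not the .NET API):

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

// Wait for the first future's result in a helper task, apply the
// continuation, and send its output through a new channel, yielding a
// new "future" of the continuation's result type.
fn continue_with<T, U, F>(fut: Receiver<T>, cont: F) -> Receiver<U>
where
    T: Send + 'static,
    U: Send + 'static,
    F: FnOnce(T) -> U + Send + 'static,
{
    let (tx, rx) = channel();
    thread::spawn(move || {
        if let Ok(v) = fut.recv() {
            let _ = tx.send(cont(v));
        }
    });
    rx
}
```

Packaging the byte count together with the buffer, as in the f2 example above, is then just a continuation that moves the buffer into the tuple it returns.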
>
> 2. Each i/o operation now needs to allocate heap memory for the future
> object. This has been known to create GC performance problems for .NET
> web apps which process large numbers of small requests. If these can live
> on the stack, though, maybe this wouldn't be a problem for Rust.
>
>
> Haha, yep that's a concern.
>
I know that Rust doesn't currently support this, but what if futures could
use a custom allocator? Then it could work like this:
1. Futures use a custom free-list allocator for performance.
2. The I/O request allocates a new future object, registers a uv event, then
returns a unique pointer to the future to its caller. However, the I/O
manager retains an internal reference to the future, so that it can be
resolved once the I/O completes.
3. The future object also has a flag indicating that there's an outstanding
I/O, so if the caller drops its reference, the object won't be returned to
the free list until the I/O completes.
4. When the I/O is complete, the future gets resolved and all attached
continuations are run.
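Steps 1-3 can be sketched in modern Rust, assuming the "outstanding I/O" flag is modeled by a shared reference count: the slot returns to the free list only when both the caller's handle and the I/O manager's handle have been dropped. Pool, FutureSlot, and alloc are hypothetical names, not a proposed API:

```rust
use std::sync::{Arc, Mutex};

// Recycled storage for a future object (step 1: a free-list pool).
type Storage = Vec<u8>;

struct Pool {
    free: Mutex<Vec<Storage>>,
}

struct FutureSlot {
    pool: Arc<Pool>,
    storage: Option<Storage>,
}

// The Arc count plays the role of the outstanding-I/O flag (step 3):
// only when the last handle is dropped does the storage go back to the
// free list.
impl Drop for FutureSlot {
    fn drop(&mut self) {
        if let Some(s) = self.storage.take() {
            self.pool.free.lock().unwrap().push(s);
        }
    }
}

// Step 2: allocate from the free list (or fresh), handing out a shared
// handle that both the caller and the I/O manager can clone.
fn alloc(pool: &Arc<Pool>) -> Arc<FutureSlot> {
    let storage = pool
        .free
        .lock()
        .unwrap()
        .pop()
        .unwrap_or_else(|| vec![0u8; 64]);
    Arc::new(FutureSlot {
        pool: Arc::clone(pool),
        storage: Some(storage),
    })
}
```

Dropping the caller's handle while the I/O manager still holds its clone leaves the slot out of the free list, which is exactly the behavior step 3 asks for.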
Vadim
_______________________________________________
Rust-dev mailing list
[email protected]
https://mail.mozilla.org/listinfo/rust-dev