Alright, done. It's a pretty interesting proposal: the constructs it describes are effectively closures with coroutine-like semantics. It seems like the overhead for a complex system might actually be greater than with classic coroutines, since closure data allocations could be happening all over the place, but that's pure speculation.
I think a direct comparison can be drawn between their API and ours, since std.concurrency now has a Generator object and one of his early examples is a generator as well. From a usage perspective the two are really quite similar, though our Generator allocates an entire stack (it runs on a fiber) while theirs allocates N function-level context blocks, one per contained awaitable.

Overall, I see this proposal as complementary to actors as provided by std.concurrency. Theirs is a fairly simple and lightweight model for composing code that doesn't normally compose well (recursive iterators, for example), which is one traditional use of coroutines. But to achieve high levels of concurrency, a scheduler needs to sit behind the await mechanism so that other work can happen while execution is suspended waiting on a result. This could integrate well with the Scheduler that is now part of std.concurrency, as it would be fairly trivial to trigger a context switch whenever an awaitable suspends.
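For reference, the std.concurrency side of that comparison looks roughly like this in use; the fiber backing the Generator is where the full stack allocation comes from:

import std.concurrency : Generator, yield;
import std.stdio : writeln;

void main()
{
    // The delegate runs on a dedicated fiber, so the Generator gets a full
    // stack of its own rather than per-function context blocks.
    auto gen = new Generator!int({
        foreach (i; 0 .. 5)
            yield(i);
    });

    // Generator is an input range, so it works directly with foreach and
    // range-based code.
    foreach (v; gen)
        writeln(v);
}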

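To make that last point concrete, here is a rough, hypothetical sketch of how an await could hand control back to std.concurrency's FiberScheduler while a result is pending. Awaitable and await below are made up purely for illustration; FiberScheduler, the scheduler variable, and Scheduler.spawn/yield are the actual std.concurrency pieces.

import std.concurrency : FiberScheduler, scheduler;
import std.stdio : writeln;

// Hypothetical awaitable: a result that becomes available at some later point.
struct Awaitable(T)
{
    bool ready;
    T value;
}

// Hypothetical await: while the result isn't ready, yield to the scheduler so
// other fibers can run.  This is the context switch described above.
T await(T)(ref Awaitable!T a)
{
    while (!a.ready)
        scheduler.yield();
    return a.value;
}

void main()
{
    scheduler = new FiberScheduler;
    scheduler.start({
        auto pending = Awaitable!int(false, 0);

        // Another fiber eventually produces the result.
        scheduler.spawn({
            pending.ready = true;
            pending.value = 42;
        });

        // Suspends this fiber until the producer has run, then prints 42.
        writeln(await(pending));
    });
}

In a real implementation the resume would presumably be driven by the awaitable itself (completion of an I/O request, say) rather than by polling, but the idea is the same: the suspended fiber gets parked and the scheduler runs something else in the meantime.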