On Aug 31, 2017, at 11:35 AM, Joe Groff via swift-evolution 
<[email protected]> wrote:
> The coroutine proposal as it stands essentially exposes raw delimited 
> continuations. While this is a flexible and expressive feature in the 
> abstract, for the concrete purpose of representing asynchronous coroutines, 
> it provides weak user-level guarantees about where their code might be 
> running after being resumed from suspension, and puts a lot of pressure on 
> APIs to be well-behaved in this respect. And if we're building toward actors, 
> where async actor methods should be guaranteed to run "in the actor", I think 
> we'll *need* something more than the bare-bones delimited continuation 
> approach to get there. I think the proposal's desire to keep coroutines 
> independent of a specific runtime model is a good idea, but I also think 
> there are a couple possible modifications we could add to the design to make 
> it easier to reason about what context things run in for any runtime model 
> that benefits from async/await:
> 
> # Coroutine context
> 
> Associating a context value with a coroutine would let us thread useful 
> information through the execution of the coroutine. This is particularly 
> useful for GCD, so you could attach a queue, QoS, and other attributes to the 
> coroutine, since these aren't reliably available from the global environment. 
> It could be a performance improvement even for things like per-pthread 
> queues, since coroutine context should be cheaper to access than 
> pthread_self. 
> 
> For example, a coroutine-aware `dispatch_async` could spawn a coroutine with 
> the queue object and other interesting attributes as its context:
> 
> extension DispatchQueue {
>   func `async`(_ body: () async -> ()) {
>     dispatch_async(self, {
>       beginAsync(context: self) { await body() }
>     })
>   }
> }

I think it makes perfect sense to add a magically available context to async 
functions, and something like the above is a good way to populate it.  Because 
it is a magic value that is only available in async functions, giving it a 
keyword like asyncContext might make sense.  That said, I don’t understand how 
(by itself) this helps the queue-hopping problem.
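
To make that concrete, here is a rough sketch of a suspending API that 
voluntarily consults the context before resuming.  It assumes the proposal’s 
suspendAsync primitive (or something like it); someIOLibrary and readFile are 
invented purely for illustration:

func readFile(_ path: String) async -> Data {
  // Capture whatever beginAsync(context:) installed for this coroutine.
  let context = asyncContext
  return await suspendAsync { continuation in
    someIOLibrary.read(path) { data in
      if let queue = context as? DispatchQueue {
        // The suspending API has to *choose* to hop back to the context’s
        // queue; nothing in the language forces it to.
        queue.async { continuation(data) }
      } else {
        continuation(data)  // resumes on whatever thread the callback used
      }
    }
  }
}

Unless every suspending API in every framework plays along like this, the 
context by itself doesn’t keep the caller on the right queue.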

> # `onResume` hooks
> 
> Relying on coroutine context alone still leaves responsibility wholly on 
> suspending APIs to pay attention to the coroutine context and schedule the 
> continuation correctly. You'd still have the expression problem when 
> coroutine-spawning APIs from one framework interact with suspending APIs from 
> another framework that doesn't understand the spawning framework's desired 
> scheduling policy. We could provide some defense against this by letting the 
> coroutine control its own resumption with an "onResume" hook, which would run 
> when a suspended continuation is invoked instead of immediately resuming the 
> coroutine. That would let the coroutine-aware dispatch_async example from 
> above do something like this, to ensure the continuation always ends up back 
> on the correct queue:

Yes, we need something like this, though I’m not sure how your proposal works:

> extension DispatchQueue {
>   func `async`(_ body: () async -> ()) {
>     dispatch_async(self, {
>       beginAsync(
>         context: self,
>         body: { await body() },
>         onResume: { continuation in
>           // Defensively hop to the right queue
>           dispatch_async(self, continuation)

If I’m running on a pthread, and use “someQueue.async {…}”, I don’t see how 
DispatchQueue.async can know how to take me back to a pthread.  If I understand 
your example code above, it looks like the call will run the continuation on 
someQueue instead.

That said, I think that your idea of context pretty much covers it: a non-async 
function cannot have any idea whether it is run on a queue or a thread, but 
there is also no language-level way to call an async function from a non-async 
function.  I think this means that beginAsync and DispatchQueue.async will have 
to define some policy: for example, the implementation of DispatchQueue.async 
could use its own internal data structures to decide whether the current task 
is running on some dispatch queue (using the maligned “give me the current 
queue” operation), and ensure that it returns to the originating queue if it 
can find one.
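
For reference, the maligned operation here is presumably 
dispatch_get_current_queue, which is deprecated, so a library would probably 
approximate “which queue am I on?” with queue-specific data instead.  A minimal 
sketch using real GCD API (the registration convention itself is invented):

import Dispatch

// Invented convention: queues that spawn coroutines register themselves under
// this key, so code running on them (or on a queue targeting them) can
// rediscover the originating queue later.
let coroutineQueueKey = DispatchSpecificKey<DispatchQueue>()

func register(_ queue: DispatchQueue) {
  // Note: this keeps the queue alive for its own lifetime, which is fine for
  // long-lived queues.
  queue.setSpecific(key: coroutineQueueKey, value: queue)
}

// The “give me the current queue, if any” check: returns nil when running on
// a bare pthread or an unregistered queue.
func currentRegisteredQueue() -> DispatchQueue? {
  return DispatchQueue.getSpecific(key: coroutineQueueKey)
}

DispatchQueue.async (or beginAsync) could consult something like this to pick 
the queue to return to, and fall back to “stay wherever you are” when it comes 
back nil.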

Chains of async functions calling each other would maintain their context, so 
the other transitions we have to worry about are when an async function calls a 
non-async function (this just drops the context) or when you get to the bottom 
of the pile of async 🐢’s and want to actually do something on another context.  
This implies you’d actually want an async form of DispatchQueue.async, 
something like this:

extension DispatchQueue {
  func `async`(_ body: () async -> ()) async {
    dispatch_async(self) {
      beginAsync(
        context: self,
        body: {
          await body()
          asyncContext.restore()
        })
    }
  }
}

Going back to the silly example, if you call DispatchQueue.async from an async 
function on a pthread, the asyncContext would be the pthread’s, and 
asyncContext.restore() would take you back to it.
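
Spelled out as usage, that would look roughly like the following.  Image, 
expensiveFilter, and updateUI are invented placeholders, and beginAsync / 
asyncContext are still the proposal’s hypothetical API:

// Assume this coroutine was kicked off via beginAsync(...) from a plain
// pthread, so asyncContext describes that pthread.
func processImage(_ image: Image) async {
  await DispatchQueue.global().async {
    expensiveFilter(image)  // runs on the global concurrent queue
  }
  // The async form of `async` above ended with asyncContext.restore(), so by
  // the time execution reaches this line we are back on the originating
  // pthread.
  updateUI(image)
}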

Another nice thing about this is that it gets you back to a single context per 
async thing.

-Chris

