> On Aug 18, 2017, at 6:17 AM, Thomas via swift-evolution 
> <[email protected]> wrote:
> 
> I have been writing a lot of fully async code over recent years (in objc), 
> and this all seems to fit well with what we're doing; it looks like it would 
> alleviate a lot of the pain we have writing async code.

Great.

> 
> # Extending the model through await
> 
> I'm a bit worried about the mention of dispatch_sync() here (although it may 
> just be there to illustrate the deadlock possibility). I know the actor 
> runtime implementation is not yet defined, but just wanted to mention that 
> dispatch_sync() will lead to problems such as this annoying thing called 
> thread explosion. This is why we currently cannot use properties in our code 
> (getters would require us to call dispatch_sync() and we want to avoid that), 
> instead we are writing custom async getters/setters with callback blocks. 
> Having async property getters would be pretty awesome.

I think that awaiting on the result of an actor method ends up being pretty 
similar (in terms of implementation and design tradeoffs) to dispatch_sync.  
That said, my understanding is that thread explosion in GCD happens whenever 
something blocks a GCD thread, not when it politely yields control back to GCD. 
Am I misunderstanding what you mean?
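
To make the comparison concrete, here is a rough sketch of what Thomas 
describes today next to what the proposal would allow. suspendAsync is the 
proposal's straw-man primitive (not shipped Swift), and DataStore, getValue, 
and cachedValue are made-up names for illustration:

  import Dispatch

  // Hypothetical type, for illustration only.
  class DataStore {
      private let queue = DispatchQueue(label: "com.example.datastore")
      private var cachedValue = ""

      // Today's workaround: a custom async getter taking a callback block,
      // so callers never have to dispatch_sync onto this queue.
      func getValue(completion: @escaping (String) -> Void) {
          queue.async { completion(self.cachedValue) }
      }

      // With the proposal: an async accessor the caller simply awaits.
      // suspendAsync captures the continuation; the caller's thread is
      // yielded back to GCD rather than blocked.
      func value() async -> String {
          return await suspendAsync { continuation in
              self.getValue(completion: continuation)
          }
      }
  }

The important difference from dispatch_sync is in the last method: the awaiting 
caller suspends and frees its thread rather than parking it, which is the 
"politely yields control back to GCD" case above.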

> Another thing: it is not clearly mentioned here that we're getting back on 
> the caller actor's queue after awaiting on another actor's async method.

I omitted it simply because that is related to the runtime model, which I’m 
trying to leave unspecified.  I agree with you that that is the most likely 
answer.
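
To illustrate that expectation, here is a small sketch in straw-man actor 
syntax close to what the manifesto describes; Controller, NetworkActor, and 
fetchLatest are made-up names:

  import Foundation

  actor NetworkActor {
      func fetchLatest() async -> Data { Data() }  // placeholder work
  }

  actor Controller {
      var latest: Data?
      let network: NetworkActor
      init(network: NetworkActor) { self.network = network }

      func refresh() async {
          // Executing on this actor's queue here.
          let data = await network.fetchLatest()
          // After the await, execution resumes back on this actor's queue
          // (the behavior both messages expect), so touching `latest` is safe.
          latest = data
      }
  }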

> # Scalable Runtime
> 
> About the problem of creating too many queues: this is something that 
> annoyed me at this year's WWDC. Back when libdispatch was introduced in 
> 10.6, we were told that queues were very cheap, that we could create 
> thousands of them and not worry about threads, because libdispatch would do 
> the right thing internally and adjust to the available hardware (the number 
> of threads would more or less match the number of cores in your machine). 
> Somehow this has changed: now we're being told we need to worry about the 
> threads behind the queues and not have too many of them. I'm not sure 
> whether this is inevitable given the underlying reality of the system, but 
> the way things were presented back then (think in terms of queues, don't 
> worry about threads) was very compelling.

I don’t know why the messaging changed, but I agree with you: the ideal is to 
have a simple and predictable model.
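
For what it's worth, one mitigation that works with today's GCD is to give the 
many small queues a shared target queue, so the number of queues no longer 
dictates the number of threads (labels and counts below are arbitrary):

  import Dispatch

  // One target queue; GCD manages how many threads actually serve it.
  let shared = DispatchQueue(label: "com.example.shared", attributes: .concurrent)

  // Thousands of cheap per-object queues that all funnel into the shared
  // target, so they do not bring thousands of threads with them.
  let perObjectQueues = (0..<1000).map { i in
      DispatchQueue(label: "com.example.object-\(i)", target: shared)
  }

  perObjectQueues[0].async {
      // Blocks submitted here ultimately run on threads serving `shared`.
      print("running on a queue that targets the shared pool")
  }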

> # Entering and leaving async code
> 
> Certainly seems like the beginAsync(), suspendAsync() primitives would be 
> useful outside of the stdlib. The Future<T> example makes use of 
> suspendAsync() to store the continuation block and call it later; other code 
> could do just as well.
> 
> Shouldn't this:
> 
>> let imageTmp    = await decodeImage(dataResource.get(), imageResource.get())
> 
> rather be:
> 
>> let imageTmp    = await decodeImage(await dataResource.get(), await 
>> imageResource.get())

As designed (and as implemented in the PR), “await” distributes across all of 
the calls in a subexpression, so you only need it at the top level.  This is 
one of the differences from the C# design.
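
Spelled out with the names from the proposal's example, the two forms below 
mean the same thing; the first is all the proposal requires, while the second 
writes each suspension explicitly the way C# would:

  // One top-level await covers the async calls in the whole subexpression.
  let imageTmp  = await decodeImage(dataResource.get(), imageResource.get())

  // Explicit C#-style spelling; the inner awaits are unnecessary here.
  let imageTmp2 = await decodeImage(await dataResource.get(), await imageResource.get())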

-Chris
 