Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-12 Thread Pierre Habouzit via swift-evolution
> On Sep 12, 2017, at 12:31 PM, John McCall via swift-evolution wrote:
> 
>> 
>> On Sep 12, 2017, at 2:19 AM, Pierre Habouzit via swift-evolution wrote:
>> 
>>> On Sep 11, 2017, at 9:00 PM, Chris Lattner via swift-evolution wrote:
>>> 
>>> On Sep 4, 2017, at 12:18 PM, Pierre Habouzit wrote:
 Something else I realized is that this code is fundamentally broken in
 Swift:
 
 actor func foo()
 {
     let lock = NSLock()
     lock.lock()
 
     let compute = await someCompute() // <--- this will really break `foo`
     // into two pieces of code that can execute on two different physical threads.
     lock.unlock()
 }
 
 
 The reason why it is broken is that mutexes (whether NSLock,
 pthread_mutex, or os_unfair_lock) have to be unlocked from the same thread
 that took them. The await right in the middle here means that we can't
 guarantee it.
>>> 
>>> Agreed, this is just as broken as:
>>> 
>>> func foo()
>>> {
>>> let lock = NSLock()
>>> lock.lock()
>>> 
>>> someCompute {
>>> lock.unlock()
>>> }
>>> }
>>> 
>>> and it is just as broken as trying to do the same thing across queues.  
>>> Stuff like this, or the use of TLS, is just inherently broken, both with 
>>> GCD and with any sensible model underlying actors.  Trying to fix this is 
>>> not worth it IMO, it is better to be clear that they are different things 
>>> and that (as a programmer) you should *expect* your tasks to run on 
>>> multiple kernel threads.
>>> 
>>> BTW, why are you using a lock in a single threaded context in the first 
>>> place??? ;-)
>> 
>> I don't do locks, I do atomics for a living.
>> 
>> Joke aside, it's easy to write this bug; we should try to have the
>> compiler/analyzer help here with these broken patterns.
>> TSD is IMO less of a problem because people using it are aware of its
>> sharp edges. Not so much for locks.
> 
> Maybe we could somehow mark a function to cause a warning/error when directly 
> using it from an async function.  You'd want to use that on locks, 
> synchronous I/O, probably some other things.

Well, the problem is not quite *using* them (malloc takes a lock internally,
e.g., and there's not really a way around that); the problem is holding a lock
across an await.

> 
> Trying to hard-enforce it would pretty quickly turn into a big, annoying 
> effects-system problem, where even a program not using async at all would 
> suddenly have to mark a ton of functions as "async-unsafe".  I'm not sure 
> this problem is worth that level of intrusion for most programmers.  But a 
> soft enforcement, maybe an opt-in one like the Clang static analyzer, could 
> do a lot to prod people in the right direction.

Sure. I'm worried about the fact that because POSIX is a piece of cr^W^W^W^Wso
beautifully designed, if you unlock a mutex from a thread other than the one
that locked it, you're not allowed to crash to tell the client they made a
programming mistake.

The unfair lock (os_unfair_lock) on Darwin will abort if you try to do
something like that, though.

We'll see how often users make these mistakes, I guess.

-Pierre
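The ownership rule Pierre describes can be seen with any owner-tracking lock. A minimal sketch in Python (used here only as an illustration, since the Swift model under discussion was not yet implemented): `threading.RLock` records its owning thread much like os_unfair_lock does, whereas a default-attribute POSIX mutex silently tolerates the misuse.

```python
import threading

# Owner-tracking locks must be released by the thread that acquired them,
# much like os_unfair_lock (and unlike a default-attribute pthread_mutex,
# which silently tolerates release from a foreign thread).
lock = threading.RLock()   # RLock records its owning thread
lock.acquire()

errors = []

def unlock_from_other_thread():
    try:
        lock.release()     # wrong thread: raises RuntimeError
    except RuntimeError as exc:
        errors.append(exc)

t = threading.Thread(target=unlock_from_other_thread)
t.start()
t.join()

assert errors, "releasing from a foreign thread must fail"
lock.release()             # the owning thread may release it
```

The same asymmetry is why an `await` between `lock()` and `unlock()` is fatal: the resumed half of the function may run on a thread that does not own the lock.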
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-12 Thread John McCall via swift-evolution

> On Sep 12, 2017, at 2:19 AM, Pierre Habouzit via swift-evolution wrote:
> 
>> On Sep 11, 2017, at 9:00 PM, Chris Lattner via swift-evolution wrote:
>> 
>> On Sep 4, 2017, at 12:18 PM, Pierre Habouzit wrote:
>>> Something else I realized is that this code is fundamentally broken in
>>> Swift:
>>> 
>>> actor func foo()
>>> {
>>>     let lock = NSLock()
>>>     lock.lock()
>>> 
>>>     let compute = await someCompute() // <--- this will really break `foo`
>>>     // into two pieces of code that can execute on two different physical threads.
>>>     lock.unlock()
>>> }
>>> 
>>> 
>>> The reason why it is broken is that mutexes (whether NSLock,
>>> pthread_mutex, or os_unfair_lock) have to be unlocked from the same thread
>>> that took them. The await right in the middle here means that we can't
>>> guarantee it.
>> 
>> Agreed, this is just as broken as:
>> 
>> func foo()
>> {
>> let lock = NSLock()
>> lock.lock()
>> 
>> someCompute {
>>  lock.unlock()
>> }
>> }
>> 
>> and it is just as broken as trying to do the same thing across queues.  
>> Stuff like this, or the use of TLS, is just inherently broken, both with GCD 
>> and with any sensible model underlying actors.  Trying to fix this is not 
>> worth it IMO, it is better to be clear that they are different things and 
>> that (as a programmer) you should *expect* your tasks to run on multiple 
>> kernel threads.
>> 
>> BTW, why are you using a lock in a single threaded context in the first 
>> place??? ;-)
> 
> I don't do locks, I do atomics for a living.
> 
> Joke aside, it's easy to write this bug; we should try to have the
> compiler/analyzer help here with these broken patterns.
> TSD is IMO less of a problem because people using it are aware of its
> sharp edges. Not so much for locks.

Maybe we could somehow mark a function to cause a warning/error when directly 
using it from an async function.  You'd want to use that on locks, synchronous 
I/O, probably some other things.

Trying to hard-enforce it would pretty quickly turn into a big, annoying 
effects-system problem, where even a program not using async at all would 
suddenly have to mark a ton of functions as "async-unsafe".  I'm not sure this 
problem is worth that level of intrusion for most programmers.  But a soft 
enforcement, maybe an opt-in one like the Clang static analyzer, could do a lot 
to prod people in the right direction.

John.
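John's soft, analyzer-style enforcement can be prototyped even without compiler support. A hypothetical sketch in Python (the `async_unsafe` decorator and its detection trick are inventions for illustration, not an existing API): warn when a marked blocking primitive is called while an event loop is running on the current thread.

```python
import asyncio
import functools
import warnings

def async_unsafe(fn):
    """Soft enforcement: warn when a blocking primitive is invoked
    from async code (i.e. while an event loop is running)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            pass                       # synchronous caller: fine
        else:
            warnings.warn(f"{fn.__name__} is async-unsafe here",
                          stacklevel=2)
        return fn(*args, **kwargs)
    return wrapper

@async_unsafe
def blocking_read():                   # stand-in for a lock or synchronous I/O
    return "data"

async def main():
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        blocking_read()
    return [str(w.message) for w in caught]

msgs = asyncio.run(main())
assert any("async-unsafe" in m for m in msgs)
```

Like the Clang static analyzer, this flags the call site without rejecting the program, which matches the opt-in enforcement John suggests.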


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-12 Thread Johannes Weiß via swift-evolution


> On 11 Sep 2017, at 10:04 pm, Adam Kemp via swift-evolution wrote:
> 
> 
> 
>> On Sep 11, 2017, at 1:15 PM, Kenny Leung via swift-evolution wrote:
>> 
>> I found a decent description about async/await here:
>> 
>> https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/
>> 
>> So it’s much more like runloop-based callbacks in Foundation than
>> libdispatch. It’s complicated. Even the simplest example is super
>> complicated. 
>> 
>> It’s super implicit (like, “I’m going to package up every line of code from 
>> await to the end of this function and turn it into a continuation”). That 
>> seems to go against one of the primary principles of Swift, which is to make 
>> things plain to the reader. I’d be interested to know what the call stack 
>> looks like on the line after await.
> 
> This is pretty much how it would have to work for Swift as well. The call 
> stack after the await (in C#) would either start at the runloop and go 
> through the futures API (usually Task) or it would start at whatever code 
> satisfied the async request.
> 
> It’s true that this can make it more difficult to understand stack traces. In 
> most cases the original call stack is lost. Microsoft has made changes to 
> Visual Studio in order to show kind of an alternative stack trace for tasks 
> to try to make this better.

just FYI, Xcode does that too these days. If you breakpoint/crash within 
something that got asynchronously dispatched, you'll see a synthesised stack 
frame that shows you where it got enqueued from. The same could be done for 
async/await.


> I think they also made things like F10 (step over) and F11 (step into) do the 
> natural thing (i.e., wait for the continuation).
> 
>> 
>> The doc takes away some of the mystery, but leaves major questions, like: 
>> await is used to yield control to the parent, but at the bottom of the call 
>> stack, presumably you’re going to do something blocking, so how do you call 
>> await?
> 
> One of the common misconceptions about async/await (which I also had when I 
> first encountered it) is that there must be a blocking thread somewhere. It 
> doesn’t work that way. The “bottom of the call stack” is typically either a 
> run loop or a thread pool with a work queue (really just another kind of run 
> loop). I guess you’re right in the sense that those kinds of run loops do 
> block, but they’re not blocking on any particular piece of work to be done. 
> They’re blocking waiting for ANY more work to be done (either events or items 
> placed in the work queue).
> 
> The way that the continuation works is that it is placed onto one of those 
> queues. For the UI thread it’s kind of like doing 
> performSelectorOnMainThread: (the .Net equivalent is usually called 
> BeginInvokeOnMainThread). For a thread pool there’s another API. For GCD this 
> would be like doing a dispatch_async. It’s putting the continuation callback 
> block onto a queue, and that callback will be called when the run loop or the 
> thread pool is able to do so.
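The enqueue-the-continuation mechanism Adam describes is observable in any event-loop framework. A sketch in Python's asyncio (purely as an illustration of the pattern): `call_soon_threadsafe` plays the role of performSelectorOnMainThread:/dispatch_async, handing the continuation to the loop's queue from a worker thread.

```python
import asyncio
import threading

# A worker thread finishes background work and hands the continuation back
# to the run loop's queue (call_soon_threadsafe ~ dispatch_async to main).
def work_then_resume(loop, fut):
    result = "image"                        # pretend background computation
    loop.call_soon_threadsafe(fut.set_result, result)

async def main():
    loop = asyncio.get_running_loop()
    loop_thread = threading.current_thread()
    fut = loop.create_future()
    threading.Thread(target=work_then_resume, args=(loop, fut)).start()
    value = await fut                       # suspends; the loop resumes us
    resumed_on_loop = threading.current_thread() is loop_thread
    return value, resumed_on_loop

value, resumed_on_loop = asyncio.run(main())
assert value == "image"
assert resumed_on_loop                      # continuation ran on the loop thread
```

Note that the code after `await fut` runs on the loop's thread again, even though the result was produced on a worker thread: the worker only enqueued the continuation.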



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-12 Thread Pierre Habouzit via swift-evolution
> On Sep 11, 2017, at 9:00 PM, Chris Lattner via swift-evolution wrote:
> 
> On Sep 4, 2017, at 12:18 PM, Pierre Habouzit wrote:
>> Something else I realized is that this code is fundamentally broken in
>> Swift:
>> 
>> actor func foo()
>> {
>>     let lock = NSLock()
>>     lock.lock()
>> 
>>     let compute = await someCompute() // <--- this will really break `foo`
>>     // into two pieces of code that can execute on two different physical threads.
>>     lock.unlock()
>> }
>> 
>> 
>> The reason why it is broken is that mutexes (whether NSLock, pthread_mutex,
>> or os_unfair_lock) have to be unlocked from the same thread that took them.
>> The await right in the middle here means that we can't guarantee it.
> 
> Agreed, this is just as broken as:
> 
> func foo()
> {
> let lock = NSLock()
> lock.lock()
> 
> someCompute {
>   lock.unlock()
> }
> }
> 
> and it is just as broken as trying to do the same thing across queues.  Stuff 
> like this, or the use of TLS, is just inherently broken, both with GCD and 
> with any sensible model underlying actors.  Trying to fix this is not worth 
> it IMO, it is better to be clear that they are different things and that (as 
> a programmer) you should *expect* your tasks to run on multiple kernel 
> threads.
> 
> BTW, why are you using a lock in a single threaded context in the first 
> place??? ;-)

I don't do locks, I do atomics for a living.

Joke aside, it's easy to write this bug; we should try to have the
compiler/analyzer help here with these broken patterns.
TSD is IMO less of a problem because people using it are aware of its sharp
edges. Not so much for locks.

-Pierre

> 
> -Chris
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Chris Lattner via swift-evolution
On Sep 4, 2017, at 12:18 PM, Pierre Habouzit wrote:
> Something else I realized is that this code is fundamentally broken in Swift:
> 
> actor func foo()
> {
>     let lock = NSLock()
>     lock.lock()
> 
>     let compute = await someCompute() // <--- this will really break `foo`
>     // into two pieces of code that can execute on two different physical threads.
>     lock.unlock()
> }
> 
> 
> The reason why it is broken is that mutexes (whether NSLock, pthread_mutex,
> or os_unfair_lock) have to be unlocked from the same thread that took them.
> The await right in the middle here means that we can't guarantee it.

Agreed, this is just as broken as:

func foo()
{
let lock = NSLock()
lock.lock()

someCompute {
lock.unlock()
}
}

and it is just as broken as trying to do the same thing across queues.  Stuff 
like this, or the use of TLS, is just inherently broken, both with GCD and with 
any sensible model underlying actors.  Trying to fix this is not worth it IMO, 
it is better to be clear that they are different things and that (as a 
programmer) you should *expect* your tasks to run on multiple kernel threads.

BTW, why are you using a lock in a single threaded context in the first 
place??? ;-)

-Chris



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Chris Lattner via swift-evolution

> On Sep 11, 2017, at 4:19 PM, Marc Schlichte wrote:
> 
> 
>> On Sep 7, 2017, at 07:05, Chris Lattner via swift-evolution wrote:
>> 
>> 
>> Imagine you are maintaining a large codebase, and you come across this 
>> (intentionally abstract) code:
>> 
>>  foo()
>>  await bar()
>>  baz()
>> 
>> Regardless of what is the most useful, I’d argue that it is only natural to 
>> expect baz() to run on the same queue/thread/execution-context as foo and 
>> bar. 
> 
> But what if `bar` was defined like this in a pre async/await world:
> 
> `bar(queue: DispatchQueue, continuation: (value: Value?, error: Error?) -> 
> Void)`
> 
> ^ There are several existing APIs which use this pattern of explicitly 
> providing the queue on which the continuation should run.
> 
> My expectation (especially as a maintainer) would be that the async/await 
> version exhibits the same queueing semantics as the `old` CPS style - 
> whatever that was (implicitly on the main-queue, implicitly on some 
> background queue or explicitly on a provided queue).

I can understand that expectation shortly after the migration from Swift 4 to 
Swift 5 (or whatever).  However, in 6 months or a year, when you’ve forgotten 
about the fact that it happened to be implemented with callbacks, this will not 
be obvious.  Nor would it be obvious to the people who maintain the code but 
were never aware of the original API.

We should design around the long term view, not momentary transition issues IMO.

> Also, a related question I have: Will / should it be possible to
> mix-and-match CPS and async/await style for system APIs? I would say yes, so
> that we can transition to the new async/await style at our own pace. 

The proposal does not include any changes to system APIs at all; such a design
will be the subject of a follow-on proposal.

-Chris




Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Marc Schlichte via swift-evolution

> On Sep 7, 2017, at 07:05, Chris Lattner via swift-evolution wrote:
> 
> 
> Imagine you are maintaining a large codebase, and you come across this 
> (intentionally abstract) code:
> 
>   foo()
>   await bar()
>   baz()
> 
> Regardless of what is the most useful, I’d argue that it is only natural to 
> expect baz() to run on the same queue/thread/execution-context as foo and 
> bar. 

But what if `bar` was defined like this in a pre async/await world:

`bar(queue: DispatchQueue, continuation: (value: Value?, error: Error?) -> 
Void)`

^ There are several existing APIs which use this pattern of explicitly 
providing the queue on which the continuation should run.

My expectation (especially as a maintainer) would be that the async/await 
version exhibits the same queueing semantics as the `old` CPS style - whatever 
that was (implicitly on the main-queue, implicitly on some background queue or 
explicitly on a provided queue).


Also, a related question I have: Will / should it be possible to mix-and-match
CPS and async/await style for system APIs? I would say yes, so that we can
transition to the new async/await style at our own pace. 

Cheers
Marc






Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Adam Kemp via swift-evolution


> On Sep 11, 2017, at 1:15 PM, Kenny Leung via swift-evolution wrote:
> 
> I found a decent description about async/await here:
> 
> https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/
>  
> 
> 
> So it’s much more like runloop-based callbacks in Foundation than
> libdispatch. It’s complicated. Even the simplest example is super
> complicated. 
> 
> It’s super implicit (like, “I’m going to package up every line of code from 
> await to the end of this function and turn it into a continuation”). That 
> seems to go against one of the primary principles of Swift, which is to make 
> things plain to the reader. I’d be interested to know what the call stack 
> looks like on the line after await.

This is pretty much how it would have to work for Swift as well. The call stack 
after the await (in C#) would either start at the runloop and go through the 
futures API (usually Task) or it would start at whatever code satisfied the 
async request.

It’s true that this can make it more difficult to understand stack traces. In 
most cases the original call stack is lost. Microsoft has made changes to 
Visual Studio in order to show kind of an alternative stack trace for tasks to 
try to make this better. I think they also made things like F10 (step over) and 
F11 (step into) do the natural thing (i.e., wait for the continuation).

> 
> The doc takes away some of the mystery, but leaves major questions, like: 
> await is used to yield control to the parent, but at the bottom of the call 
> stack, presumably you’re going to do something blocking, so how do you call 
> await?

One of the common misconceptions about async/await (which I also had when I 
first encountered it) is that there must be a blocking thread somewhere. It 
doesn’t work that way. The “bottom of the call stack” is typically either a run 
loop or a thread pool with a work queue (really just another kind of run loop). 
I guess you’re right in the sense that those kinds of run loops do block, but 
they’re not blocking on any particular piece of work to be done. They’re 
blocking waiting for ANY more work to be done (either events or items placed in 
the work queue).

The way that the continuation works is that it is placed onto one of those 
queues. For the UI thread it’s kind of like doing performSelectorOnMainThread: 
(the .Net equivalent is usually called BeginInvokeOnMainThread). For a thread 
pool there’s another API. For GCD this would be like doing a dispatch_async. 
It’s putting the continuation callback block onto a queue, and that callback 
will be called when the run loop or the thread pool is able to do so.
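The "bottom of the call stack" Adam describes reduces to a few lines. A sketch in Python (names illustrative) of a run loop that blocks only while waiting for ANY work item, never on one particular result:

```python
import queue

# A minimal run loop: it blocks in get() waiting for ANY work item
# (an event or an enqueued continuation), never on one particular result.
work = queue.Queue()
log = []

def run_loop():
    while True:
        item = work.get()       # blocks until any work arrives
        if item is None:        # sentinel: shut the loop down
            return
        item()                  # run the enqueued callback

work.put(lambda: log.append("first"))
work.put(lambda: log.append("second"))
work.put(None)
run_loop()
assert log == ["first", "second"]
```

Continuations produced by `await` are just more items placed on `work`; nothing in the system blocks waiting for a specific awaited result.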


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Kenny Leung via swift-evolution
I found a decent description about async/await here:

https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/concepts/async/
 


So it’s much more like runloop-based callbacks in Foundation than libdispatch. 
It’s complicated. Even the simplest example is super complicated. 

It’s super implicit (like, “I’m going to package up every line of code from 
await to the end of this function and turn it into a continuation”). That seems 
to go against one of the primary principles of Swift, which is to make things 
plain to the reader. I’d be interested to know what the call stack looks like 
on the line after await.

The doc takes away some of the mystery, but leaves major questions, like: await 
is used to yield control to the parent, but at the bottom of the call stack, 
presumably you’re going to do something blocking, so how do you call await?

-Kenny



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Thorsten Seitz via swift-evolution


> On Aug 21, 2017, at 22:32, Brent Royal-Gordon via swift-evolution wrote:
> 
>> On Aug 21, 2017, at 12:41 PM, Wallacy via swift-evolution wrote:
>> 
>> Based on these same concerns, how would you do this using async/await?
>> 
>> func process() {
>>     loadWebResource("bigData.txt") { dataResource in
>>         //
>>     }
>>     print("BigData scheduled to load")
>>     loadWebResource("smallData.txt") { dataResource in
>>         //
>>     }
>>     print("SmallData scheduled to load")
>> }
> 
> 
> You would use something like the `Future` type mentioned in the proposal:
> 
>     func process() {
>         let bigDataFuture = Future { await loadWebResource("bigData.txt") }
>         print("BigData scheduled to load")
> 
>         let smallDataFuture = Future { await loadWebResource("smallData.txt") }
>         print("SmallData scheduled to load")
> 
>         let bigDataResource = await bigDataFuture.get()
>         let smallDataResource = await smallDataFuture.get()
>         // or whatever; you could probably chain off the futures to
>         // handle whichever happens first, first.
>         ...
>     }

Like others have already proposed, I would imagine being able to write something
like this (adding a return type to do something with the data).

func process() async -> (Data, Data) {
    let bigData = async loadWebResource("bigData.txt")
    print("BigData scheduled to load")
    let smallData = async loadWebResource("smallData.txt")
    print("SmallData scheduled to load")
    return await (bigData, smallData)
}

where bigData and smallData have the type `async Data` which has 
to be `await`ed upon to get at the `Data`.

-Thorsten
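The start-eagerly/await-later shape Thorsten sketches (what the proposal discusses as an `async` call prefix) can be transliterated into Python's asyncio for illustration; `create_task` starts the work immediately, and the later `await` only suspends until both complete. All names here are illustrative.

```python
import asyncio

async def load_web_resource(name):
    await asyncio.sleep(0)                  # stand-in for network I/O
    return f"contents of {name}"

async def process():
    # Both loads are started eagerly, like `async loadWebResource(...)`.
    big = asyncio.create_task(load_web_resource("bigData.txt"))
    print("BigData scheduled to load")
    small = asyncio.create_task(load_web_resource("smallData.txt"))
    print("SmallData scheduled to load")
    return (await big, await small)         # like `await (bigData, smallData)`

big, small = asyncio.run(process())
assert big == "contents of bigData.txt"
assert small == "contents of smallData.txt"
```

The two loads run concurrently because each task is scheduled before either is awaited; awaiting serially at the end does not serialize the work.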


> 
> -- 
> Brent Royal-Gordon
> Architechies
> 


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Thorsten Seitz via swift-evolution
If I understand correctly, this queue/thread-hopping problem arises because we
do not want `await` to block. If `await` blocked, we would trivially stay on the
same thread/queue.
Actually, I think the mental model and the semantics should work as if `await`
did block. Anything else will make concurrency with async/await harder to use
than GCD directly, not easier.
To preserve that mental model and semantics, the continuation has to be enqueued
on the same queue where the code encountering the `await` is currently running.

-Thorsten

> On Aug 21, 2017, at 20:04, Adam Kemp via swift-evolution wrote:
> 
> 
> 
>> On Aug 18, 2017, at 8:38 PM, Chris Lattner wrote:
>> 
>>> On Aug 18, 2017, at 2:09 PM, Adam Kemp wrote:
>>> Maybe I’m still missing something, but how does this help when you are 
>>> interacting only with Swift code? If I were to write an asynchronous method 
>>> in Swift then how could I do the same thing that you propose that the 
>>> Objective-C importer do? That is, how do I write my function such that it 
>>> calls back on the same queue?
>> 
>> You’re right: if you’re calling something written in Swift, the ObjC 
>> importer isn’t going to help you.
>> 
>> However, if you’re writing an async function in Swift, then it is reasonable 
>> for us to say what the convention is and expect you to follow it.  
>> Async/await doesn’t itself help you implement an async operation: it would 
>> be turtles all the way down… until you get to GCD, which is where you do the 
>> async thing.
>> 
>> As such, as part of rolling out async/await in Swift, I’d expect that GCD 
>> would introduce new API or design patterns to support doing the right thing 
>> here.  That is TBD as far as the proposal goes, because it doesn’t go into 
>> runtime issues.
> 
> The point I’m trying to make is that this is so important that I don’t think 
> it’s wise to leave it up to possible future library improvements, and 
> especially not to convention. Consider this example again from your proposal:
> 
> @IBAction func buttonDidClick(sender:AnyObject) {  
> doSomethingOnMainThread();
> beginAsync {
> let image = await processImage()
> imageView.image = image
> }
> doSomethingElseOnMainThread();
> }
> 
> The line that assigns the image to the image view is very likely running on 
> the wrong thread. That code looks simple, but it is not safe. You would have 
> to insert a line like your other examples to ensure it’s on the right thread:
> 
> @IBAction func buttonDidClick(sender:AnyObject) {  
> doSomethingOnMainThread();
> beginAsync {
> let image = await processImage()
> await DispatchQueue.main.asyncCoroutine()
> imageView.image = image
> }
> doSomethingElseOnMainThread();
> }
> 
> You would have to litter your code with that kind of stuff just in case 
> you’re on the wrong thread because there’s no way to tell where you’ll end up 
> after the await. In fact, this feature would make it much easier to end up 
> calling back on different queues in different circumstances because it makes 
> queue hopping invisible. From another example:
> 
> func processImageData1() async -> Image {
>   let dataResource  = await loadWebResource("dataprofile.txt")
>   let imageResource = await loadWebResource("imagedata.dat")
>   let imageTmp  = await decodeImage(dataResource, imageResource)
>   let imageResult   =  await dewarpAndCleanupImage(imageTmp)
>   return imageResult
> }
> 
> Which queue does a caller end up in? Whichever queue that last awaited call 
> gives you. This function does nothing to try to ensure that you always end up 
> on the same queue. If someone changes the code by adding or removing one of 
> those await calls then the final callback queue would change. If there were 
> conditionals in there that changed the code flow at runtime then you could 
> end up calling back on different queues depending on some runtime state.
> 
> IMO this would make doing safe async programming actually more difficult to 
> get right. It would be tedious and error prone. This simplified async/await 
> model may work well for JavaScript, which generally doesn’t have shared 
> mutable state across threads, but it seems dangerous in a language that does.
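Adam's worry ("which queue does a caller end up in?") is easy to reproduce outside Swift with raw callback futures. A sketch in Python's `concurrent.futures` (names illustrative): the done-callback runs on whichever pool thread completed the work, not on the thread that registered it. The two events exist only to make the outcome deterministic for the demo.

```python
import concurrent.futures
import threading

# With raw callback futures, the "continuation" runs on whichever thread
# completed the last step, not on the thread that registered it.
resumed_on = []
started = threading.Event()
release = threading.Event()

def step():
    started.set()
    release.wait()                      # hold until the callback is registered
    return 2

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(step)
    started.wait()                      # the future is now pending on a worker
    fut.add_done_callback(
        lambda f: resumed_on.append(threading.current_thread()))
    release.set()
    assert fut.result() == 2

# The continuation ran on a pool thread, not the registering thread.
assert resumed_on[0] is not threading.main_thread()
```

This is exactly the invisible hop Adam describes: change which awaited call completes last and the resumption thread changes with it.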
> 
>> This isn’t a fair transformation though, and isn’t related to whether 
>> futures is part of the library or language.  The simplification you got here 
>> is by making IBAction’s implicitly async.  I don’t see that that is 
>> possible, since they have a very specific calling convention (which returns 
>> void) and are invoked by objc_msgSend.  OTOH, if it were possible to do 
>> this, it would be possible to do it with the proposal as outlined.
> 
> I didn’t mean to imply that all IBActions are implicitly async. I just allowed 
> for an entire method to be async without being awaitable. In C# an async void 
> function 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Thorsten Seitz via swift-evolution

> On Aug 20, 2017, at 04:38, Thomas via swift-evolution wrote:
> 
> 
>>> On 20 Aug 2017, at 03:36, Brent Royal-Gordon wrote:
>>> 
 On Aug 19, 2017, at 2:25 AM, Thomas wrote:
 
 I think we need to be a little careful here—the mere fact that a message 
 returns `Void` doesn't mean the caller shouldn't wait until it's done to 
 continue. For instance:
 
 listActor.delete(at: index)            // Void, so it doesn't wait
 let count = await listActor.getCount() // But we want the count *after* the deletion!
>>> 
>>> In fact this will just work. Because both messages happen on the actor's 
>>> internal serial queue, the "get count" message will only happen after the 
>>> deletion. Therefore the "delete" message can return immediately to the 
>>> caller (you just need the dispatch call on the queue to be made).
>> 
>> Suppose `delete(at:)` needs to do something asynchronous, like ask a server 
>> to do the deletion. Is processing of other messages to the actor suspended 
>> until it finishes? (Maybe the answer is "yes"—I don't have experience with 
>> proper actors.)
> 
> It seems like the answer should be "yes". But then how do you implement 
> something like a cancel() method? I don't know how the actor model solves 
> that problem.

Processing of other messages to the actor should be suspended until 
`delete(at:)` finishes. Otherwise the actor's state would not be protected 
properly. But obviously this does not help if `delete(at:)` itself delegates 
the deletion to another actor with a fire-and-forget message.

-Thorsten
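Thorsten's point (the actor's serial mailbox suspends later messages until the current one finishes, so `getCount` observes the earlier `delete`) can be sketched with an asyncio queue. The `ListActor` type and its methods are invented for illustration.

```python
import asyncio

class ListActor:
    def __init__(self, items):
        self.items = list(items)
        self.mailbox = asyncio.Queue()

    async def run(self):
        # The serial mailbox: one message at a time, in order.
        while True:
            msg = await self.mailbox.get()
            if msg is None:             # sentinel: stop the actor
                return
            await msg()

    def delete(self, index):
        # Fire-and-forget: enqueue and return to the caller immediately.
        async def do_delete():
            del self.items[index]
        self.mailbox.put_nowait(do_delete)

    async def get_count(self):
        fut = asyncio.get_running_loop().create_future()
        async def do_count():
            fut.set_result(len(self.items))
        self.mailbox.put_nowait(do_count)
        return await fut                # ordered after the pending delete

async def main():
    actor = ListActor(["a", "b", "c"])
    runner = asyncio.create_task(actor.run())
    actor.delete(1)                     # Void message: caller does not wait
    count = await actor.get_count()     # observes the state *after* deletion
    actor.mailbox.put_nowait(None)
    await runner
    return count

count = asyncio.run(main())
assert count == 2
```

The caveat in the thread still applies: if `do_delete` itself fired off work to another actor and returned, the ordering guarantee would be lost.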


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-11 Thread Cory via swift-evolution

> On 11 Sep 2017, at 00:10, Howard Lovatt via swift-evolution wrote:
> 
> Not really certain what async/await adds, using this library (note self 
> promotion) which is built on top of GCD:
> 
> https://github.com/hlovatt/Concurrency-Utilities 
> 
> 
> You can write:
>> func doit() {
>>     AsynchronousFuture { // Executes in background and therefore does not block main.
>>         let dataResource  = loadWebResource("dataprofile.txt") // Returns a future and therefore runs concurrently in background.
>>         let imageResource = loadWebResource("imagedata.dat") // Future, therefore concurrent.
>>         let imageTmp      = decodeImage(dataResource.get ?? defaultText, imageResource.get ?? defaultData) // Handles errors with defaults easily, including timeout.
>>         let imageResult   = dewarpAndCleanupImage(imageTmp)
>>         Thread.executeOnMain {
>>             self.imageResult = imageResult
>>         }
>>     }
>> }
> 
> So why bother with async/await?

Because async/await serves a different (but related) purpose than background 
threads. Remember that concurrency is not parallelism: running in parallel in 
the background is a useful construction, but not the *only* construction.

If you have multiple tasks you’d like to kick off in parallel, async/await can 
support that by providing some kind of awaitable Future object. You’d then call 
multiple functions that return those futures and then await on the completion 
of all of the Futures (Future.gatherResults is a nice function name).

However, async/await also allows you to more directly handle the notion that 
you are making a single blocking call that you need the result of before your 
computation can complete. For example, if you only had to load *one* web 
resource instead of two, your code with async/await would look exactly like 
synchronous code with the word ‘await’ scattered around:

func doit() async -> Image {
    let dataResource = await loadWebResource("dataprofile.txt")
    let imageTmp = decodeImage(dataResource)
    return dewarpAndCleanupImage(imageTmp)
}

At this point scheduling this function becomes the calling programmer’s 
concern. It also allows multiple calls to doit() to be interleaved without 
deferring them to background threads and relying on the OS scheduler to 
schedule them appropriately. This is very useful on server-side applications 
that do not want to be dispatching large numbers of threads, and that are 
likely already running an I/O select() loop that can handle the most common 
cause of awaits (namely, please do this I/O).

For functions that are computationally heavy, no problem: most languages 
provide built-in support for scheduling async functions into threadpools and 
providing a Future that completes on the completion of the background task.

I guess the TL;DR here is that async/await allows you to have concurrency with 
or without parallelism, while using thread pools means you can only have 
concurrency with parallelism.

Cory
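Cory's TL;DR (concurrency with or without parallelism) can be made concrete: two calls to an async function interleave on a single event-loop thread, with no background threads at all. A sketch in Python's asyncio:

```python
import asyncio
import threading

# Concurrency without parallelism: two calls to doit() interleave on a
# single event-loop thread; no background threads are involved.
order = []

async def doit(name):
    order.append((name, "start", threading.current_thread().name))
    await asyncio.sleep(0)            # suspension point: the other call runs
    order.append((name, "end", threading.current_thread().name))

async def main():
    await asyncio.gather(doit("a"), doit("b"))

asyncio.run(main())
assert [e[:2] for e in order] == [("a", "start"), ("b", "start"),
                                  ("a", "end"), ("b", "end")]
assert len({e[2] for e in order}) == 1   # everything ran on one thread
```

A thread pool, by contrast, can only give you interleaving by way of parallel threads and the OS scheduler.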


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-10 Thread Howard Lovatt via swift-evolution
Not really certain what async/await adds, using this library (note self
promotion) which is built on top of GCD:

https://github.com/hlovatt/Concurrency-Utilities


You can write:

func doit() {
    AsynchronousFuture { // Executes in background and therefore does not block main.
        let dataResource  = loadWebResource("dataprofile.txt") // Returns a future and therefore runs concurrently in background.
        let imageResource = loadWebResource("imagedata.dat")   // Future, therefore concurrent.
        let imageTmp      = decodeImage(dataResource.get ?? defaultText, imageResource.get ?? defaultData) // Handles errors with defaults easily, including timeout.
        let imageResult   = dewarpAndCleanupImage(imageTmp)

        Thread.executeOnMain {
            self.imageResult = imageResult
        }
    }
}

So why bother with async/await?

PS I also agree with the comments that there is no point writing the 1st
two lines of the example with async and then calling them with await - you
might as well write serial code.

  -- Howard.

On 10 September 2017 at 10:33, Wallacy via swift-evolution <
swift-evolution@swift.org> wrote:

> This is the only part of the proposal with which I can't concur!
>
> ^async^ at the call site solves this nicely! And Pierre also showed how common
> people are doing it wrong! And they will make this mistake using Futures too.
>
> func doit() async {
> let dataResource = async loadWebResource("dataprofile.txt")
> let imageResource = async loadWebResource("imagedata.dat")
> let imageTmp = await decodeImage(dataResource, imageResource)
> self.imageResult = await dewarpAndCleanupImage(imageTmp)
> }
>
> Anyway, we have time to think about it.
>
>
>
> Em sáb, 9 de set de 2017 às 20:30, David Hart via swift-evolution <
> swift-evolution@swift.org> escreveu:
>
>> On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>> Then isn’t the example functionally equivalent to:
>>
>> func doit() {
>> DispatchQueue.global().async {
>> let dataResource  = loadWebResource("dataprofile.txt")
>> let imageResource = loadWebResource("imagedata.dat")
>> let imageTmp  = decodeImage(dataResource, imageResource)
>> let imageResult   = dewarpAndCleanupImage(imageTmp)
>> DispatchQueue.main.async {
>> self.imageResult = imageResult
>> }
>> }
>> }
>>
>> if all of the API were synchronous? Why wouldn’t we just exhort people to
>> write synchronous API code and continue using libdispatch? What am I
>> missing?
>>
>>
>> There are probably very good optimisations for going asynchronous, but
>> I’m not the right person for that part of the answer.
>>
>> But I can give another answer: once we have an async/await pattern, we
>> can build Futures/Promises on top of them and then we can await on multiple
>> asynchronous calls in parallel. But it won’t be a feature of async/await in
>> itself:
>>
>> func doit() async {
>> let dataResource  = Future({ loadWebResource("dataprofile.txt") })
>> let imageResource = Future({ loadWebResource("imagedata.dat") })
>> let imageTmp = await decodeImage(dataResource.get, imageResource.get)
>> self.imageResult = await dewarpAndCleanupImage(imageTmp)
>> }
>>
>> -Kenny
>>
>>
>> On Sep 8, 2017, at 2:33 PM, David Hart  wrote:
>>
>>
>> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>> Hi All.
>>
>> A point of clarification in this example:
>>
>> func loadWebResource(_ path: String) async -> Resource
>> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
>> func dewarpAndCleanupImage(_ i : Image) async -> Image
>>
>> func processImageData1() async -> Image {
>> let dataResource  = await loadWebResource("dataprofile.txt")
>> let imageResource = await loadWebResource("imagedata.dat")
>> let imageTmp  = await decodeImage(dataResource, imageResource)
>> let imageResult   = await dewarpAndCleanupImage(imageTmp)
>> return imageResult
>> }
>>
>>
>> Do these:
>>
>> await loadWebResource("dataprofile.txt")
>>
>> await loadWebResource("imagedata.dat")
>>
>>
>> happen in parallel?
>>
>>
>> They don’t happen in parallel.
>>
>> If so, how can I make the second one wait on the first one? If not, how
>> can I make them go in parallel?
>>
>> Thanks!
>>
>> -Kenny
>>
>> ___
>> swift-evolution mailing list
>> swift-evolution@swift.org
>> https://lists.swift.org/mailman/listinfo/swift-evolution
>>
>
> 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-10 Thread Thorsten Seitz via swift-evolution
First off, I’m still catching up with all those (very welcome :-) threads about 
concurrency, so bear with me if I’m commenting on topics that have been settled 
in the meantime.


> Am 18.08.2017 um 17:13 schrieb Johannes Weiß via swift-evolution 
> :
> 
> Hi Chris & swift-evo,
> 
> (Given the already lengthy thread I tried to separate my points and keep them 
> reasonably short to allow people to skip points they don't care about. I'm 
> very happy to expand on the points.)
> 
> Thanks very much for writing up your thoughts/proposal, I've been waiting to 
> see the official kick-off for the concurrency discussions :).
> 
> I) Let's start with the async/await proposal. Personally I think this is the 
> right direction for Swift given the reality that we need to interface with 
> incredibly large existing code-bases and APIs. Further thoughts:
> 
> - ❓ GCD: dispatching onto calling queue, how?
> GCD doesn't actually allow you to dispatch back to the original queue, so I 
> find it unclear how you'd achieve that. IMHO the main reason is that 
> conceptually at a given time you can be on more than one queue (nested 
> q.sync{}/target queues). So which is 'the' current queue?
> 
> - ⊥ first class coroutine model => async & throws should be orthogonal
> given that the proposal pitches to be the beginning of a first class 
> coroutine model (which I think is great), I think `async` and `throws` do 
> need to be two orthogonal concepts. I wouldn't want automatically throwing 
> generators in the future ;). Also I think we shouldn't throw a spanner in the 
> works of people who do like to use Result types to hold the errors or 
> values. I'd be fine with async(nothrow) or something though.

I, too, would like to keep async & throws orthogonal for these reasons. Even in 
the case of async meaning asynchronous or parallel execution I would expect 
that throwing is not implied as long as we are not using distributed execution 
or making things cancellable. As long as I am on the same machine just 
executing something in parallel on another CPU (but within the same runtime) 
does not make it failable, does it?
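Keeping the two modifiers orthogonal would give independent combinations; a sketch in the proposal's assumed syntax (names invented for illustration):

```swift
// Sketch of async and throws as orthogonal effects (assumed syntax).
func fetchData() async -> Data             // asynchronous, cannot fail
func parse(_ data: Data) throws -> Model   // synchronous, can fail
func fetchModel() async throws -> Model {  // both: call sites write `try await`
    let data = await fetchData()           // no `try`: fetchData does not throw
    return try parse(data)                 // no `await`: parse is synchronous
}
```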


> - what do we do with functions that invoke their closure multiple times? Like 
> DispatchIO.read/write.
> 
> 
> II) the actor model part
> 
> -  Erlang runtime and the actor model go hand in hand 
> I really like the Erlang actor model but I don't think it can be separated 
> from Erlang's runtime. The runtime offers green threads (which allow an actor 
> to block without blocking an OS thread) and prevents you from sharing memory 
> (which makes it possible to kill any actor at any point and still have a 
> reliable system). I don't see these two things happening in Swift. To a 
> lesser extent these issues are also present in Scala/Akka; they mitigate some 
> of the problems by having Akka Streams. Akka Streams are important to 
> establish back-pressure if you have faster producers than consumers. Note 
> that we often can't control the producer, they might be on the other side of 
> a network connection. So it's often very important to not read the available 
> bytes to communicate to the kernel that we can't consume bytes that fast. If 
> we're networking with TCP the kernel can then use the TCP flow-control to 
> signal to the other side that they better slow down (or else packets will be 
> dropped and then need to be resent later).
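The back-pressure technique described here (deliberately not reading, so that TCP flow control throttles the sender) can be sketched with invented names:

```swift
// Illustrative sketch only -- Channel/Consumer and their methods are invented.
// By not reading, bytes accumulate in the kernel's receive buffer; once it is
// full, TCP flow control makes the remote sender slow down.
func pump(_ channel: Channel, into consumer: Consumer) {
    channel.onReadable {
        guard consumer.hasCapacity else {
            channel.suspendReading()   // leave bytes in the kernel buffer
            return
        }
        consumer.consume(channel.read())
    }
    consumer.onDrain {
        channel.resumeReading()        // capacity is back: accept bytes again
    }
}
```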
> 
> -  regarding fatal failure in actors
> in the server world we need to be able to accept hundreds of thousands 
> (millions) of connections at the same time. There are quite a few cases where 
> these connections are long-lived and paused for most of the time. So I 
> don't really see the value in introducing a 'reliable' actor model where the 
> system stops accepting new connections if one actor fatalError'd and then 
> 'just' finishes up serving the existing connections. So I believe there are 
> only two possible routes: 1) treat it like C/C++ and make sure your code 
> doesn't fatalError or the whole process blows up (what we have right now) 2) 
> treat it like Erlang and let things die. IMHO Erlang wouldn't be successful 
> if actors couldn't just die or couldn't be linked. Linking propagates 
> failures to all linked processes. A common thing to do is to 1) spawn a new 
> actor 2) link yourself to the newly spawned actor 3) send a message to that 
> actor and at some point eventually await a reply message sent by the actor 
> spawned earlier. As you mentioned in the writeup it is a problem if the actor 
> doesn't actually reply which is why in Erlang you'd link them. The effect is 
> that if the actor we spawned dies, any linked actor will die too which will 
> then propagate the error to an appropriate place. That allows the programmer 
> to control where an error should propagate to. I realise I'm doing a poor 
> job in explaining what is best explained by documentation around Erlang: 
> supervision [1] and the 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-10 Thread Thorsten Seitz via swift-evolution

> Am 21.08.2017 um 22:09 schrieb Karim Nassar via swift-evolution 
> :
> 
> Thought about it in more depth, and I’m now firmly in the camp of: 
> ‘throws’/‘try' and ‘async’/‘await' should be orthogonal features. I think the 
> slight call-site reduction in typed characters ('try await’ vs ‘await’) is 
> heavily outweighed by the loss of clarity on all the edge cases.

+1

-Thorsten


> 
> —Karim
> 
>> On Aug 21, 2017, at 1:56 PM, John McCall > > wrote:
>> 
>>> 
>>> On Aug 20, 2017, at 3:56 PM, Yuta Koshizawa >> > wrote:
>>> 
>>> 2017-08-21 2:20 GMT+09:00 John McCall via swift-evolution 
>>> >:
 On Aug 19, 2017, at 7:17 PM, Chris Lattner via swift-evolution 
 > wrote:
> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
> > wrote:
> 
> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
> Actors pattern looks really compelling.
> 
> One thought that occurred to me reading through the section of the 
> "async/await" proposal on whether async implies throws:
> 
> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we 
> want to suppress the catch block with ?/!, does that mean we do it on the 
> ‘await’ ? 
> 
> guard let foo = await? getAFoo() else {  …  }
 
 Interesting question, I’d lean towards “no, we don’t want await? and 
 await!”.  My sense is that the try? and try! forms are only occasionally 
 used, and await? implies heavily that the optional behavior has something 
 to do with the async, not with the try.  I think it would be ok to have to 
 write “try? await foo()” in the case that you’d want the thrown error to 
 turn into an optional.  That would be nice and explicit.
>>> 
>>> try? and try! are quite common from what I've seen.
>>> 
>>> As analogous to `throws` and `try`, I think we have an option that `await!` 
>>> means blocking.
>>> 
>>> First, if we introduce something like `do/catch` for `async/await`, I think 
>>> it should be for blocking. For example:
>>> 
>>> ```
>>> do {
>>>   return await foo()
>>> } block
>>> ```
>>> 
>>> It is consistent with `do/try/catch` because it should allow to return a 
>>> value from inside `do` blocks for an analogy of `throws/try`.
>>> 
>>> ```
>>> // `throws/try`
>>> func foo() -> Int {
>>>   do {
>>> return try bar()
>>>   } catch {
>>> ...
>>>   }
>>> }
>>> 
>>> // `async/await`
>>> func foo() -> Int {
>>>   do {
>>> return await bar()
>>>   } block
>>> }
>>> ```
>>> 
>>> And `try!` is similar to `do/try/catch`.
>>> 
>>> ```
>>> // `try!`
>>> let x = try! foo()
>>> // uses `x` here
>>> 
>>> // `do/try/catch`
>>> do {
>>>   let x = try foo()
>>>   // uses `x` here
>>> } catch {
>>>   fatalError()
>>> }
>>> ```
>>> 
>>> If `try!` is a sugar of `do/try/catch`, it also seems natural that `await!` 
>>> is a sugar of `do/await/block`. However, currently all `!` in Swift are 
>>> related to a logic failure. So I think using `!` for blocking is not so 
>>> natural in point of view of symbology.
>>> 
>>> Anyway, I think it is valuable to think about what `do` blocks for 
>>> `async/await` mean. It is also interesting that thinking about combinations 
>>> of `catch` and `block` for `async throws` functions: e.g. If only `block`, 
>>> the enclosing function should be `throws`.
>> 
>> Personally, I think these sources of confusion are a good reason to keep the 
>> feature separate.
>> 
>> The idea of using await! to block a thread is interesting but, as you say, 
>> does not fit with the general meaning of ! for logic errors.  I think it's 
>> fine to just have an API to block waiting for an async operation, and we can 
>> choose the name carefully to call out the danger of deadlocks.
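Such a deliberately named blocking API might look like the following sketch (all names invented; `Task` is used for illustration and postdates this thread):

```swift
import Dispatch

// Invented name chosen to advertise the deadlock risk. Blocks the calling
// thread until the async operation finishes -- never call it from a thread
// that the async work itself needs in order to make progress.
func dangerouslyBlockAndWait<T>(_ body: @escaping () async -> T) -> T {
    let done = DispatchSemaphore(value: 0)
    var result: T!
    Task {
        result = await body()
        done.signal()
    }
    done.wait()
    return result
}
```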
>> 
>> John.
>> 
>>> 
>>> That aside, I think `try!` is not so occasional and is so important. Static 
>>> typing has limitations. For example, even if we have a text field which 
>>> allows input of only numbers, we still get the input value as a string, and 
>>> parsing it may fail on its type even though it actually never fails. If we did 
>>> not have easy ways to convert such a simple domain error or a recoverable 
>>> error to a logic failure, people would start ignoring them, as we have seen 
>>> in Java with `catch(Exception e) {}`. Now we have `JSONDecoder` and we will 
>>> see much more `try!` for bundled JSON files in apps or generated JSONs by 
>>> code, for which decoding fails as a logic failure.
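The `JSONDecoder` point can be made concrete: for a bundled, developer-controlled JSON payload, a decode failure is a programmer error, so trapping with `try!` is defensible (the `Config` type and payload below are illustrative):

```swift
import Foundation

struct Config: Decodable {
    let name: String
}

// The JSON here is controlled by the developer (e.g. bundled with the app),
// so a decoding failure is a logic error; `try!` converts it into a trap.
let bundled = "{\"name\": \"example\"}".data(using: .utf8)!
let config = try! JSONDecoder().decode(Config.self, from: bundled)
// config.name == "example"
```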
>>> 
>>> --
>>> Yuta
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-09 Thread Wallacy via swift-evolution
This is the only part of the proposal with which I can't concur!

^async^ at the call site solves this nicely! And Pierre also showed how common
people are doing it wrong! And they will make this mistake using Futures too.

func doit() async {
let dataResource = async loadWebResource("dataprofile.txt")
let imageResource = async loadWebResource("imagedata.dat")
let imageTmp = await decodeImage(dataResource, imageResource)
self.imageResult = await dewarpAndCleanupImage(imageTmp)
}
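As an illustrative reconstruction (not part of the 2017 proposal text), this call-site `async` shape is essentially what Swift later shipped as `async let`: the bound call starts immediately, and the first use of its value awaits it.

```swift
// Reconstruction using `async let` (later Swift syntax, shown for comparison;
// the surrounding names mirror the pseudocode in the message above).
func doit() async {
    async let dataResource  = loadWebResource("dataprofile.txt")  // starts now
    async let imageResource = loadWebResource("imagedata.dat")    // runs in parallel
    // The single `await` covers reading both deferred values and the call itself.
    let imageTmp = await decodeImage(dataResource, imageResource)
    self.imageResult = await dewarpAndCleanupImage(imageTmp)
}
```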

Anyway, we have time to think about it.


Em sáb, 9 de set de 2017 às 20:30, David Hart via swift-evolution <
swift-evolution@swift.org> escreveu:

> On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Then isn’t the example functionally equivalent to:
>
> func doit() {
> DispatchQueue.global().async {
> let dataResource  = loadWebResource("dataprofile.txt")
> let imageResource = loadWebResource("imagedata.dat")
> let imageTmp  = decodeImage(dataResource, imageResource)
> let imageResult   = dewarpAndCleanupImage(imageTmp)
> DispatchQueue.main.async {
> self.imageResult = imageResult
> }
> }
> }
>
> if all of the API were synchronous? Why wouldn’t we just exhort people to
> write synchronous API code and continue using libdispatch? What am I
> missing?
>
>
> There are probably very good optimisations for going asynchronous, but I’m
> not the right person for that part of the answer.
>
> But I can give another answer: once we have an async/await pattern, we can
> build Futures/Promises on top of them and then we can await on multiple
> asynchronous calls in parallel. But it won’t be a feature of async/await in
> itself:
>
> func doit() async {
> let dataResource  = Future({ loadWebResource("dataprofile.txt") })
> let imageResource = Future({ loadWebResource("imagedata.dat") })
> let imageTmp = await decodeImage(dataResource.get, imageResource.get)
> self.imageResult = await dewarpAndCleanupImage(imageTmp)
> }
>
> -Kenny
>
>
> On Sep 8, 2017, at 2:33 PM, David Hart  wrote:
>
>
> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Hi All.
>
> A point of clarification in this example:
>
> func loadWebResource(_ path: String) async -> Resource
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
> func dewarpAndCleanupImage(_ i : Image) async -> Image
>
> func processImageData1() async -> Image {
> let dataResource  = await loadWebResource("dataprofile.txt")
> let imageResource = await loadWebResource("imagedata.dat")
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
>
>
> Do these:
>
> await loadWebResource("dataprofile.txt")
>
> await loadWebResource("imagedata.dat")
>
>
> happen in parallel?
>
>
> They don’t happen in parallel.
>
> If so, how can I make the second one wait on the first one? If not, how
> can I make them go in parallel?
>
> Thanks!
>
> -Kenny
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
>
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-09 Thread David Hart via swift-evolution

> On 10 Sep 2017, at 00:40, Kenny Leung via swift-evolution 
>  wrote:
> 
> Then isn’t the example functionally equivalent to:
> 
> func doit() {
> DispatchQueue.global().async {
> let dataResource  = loadWebResource("dataprofile.txt")
> let imageResource = loadWebResource("imagedata.dat")
> let imageTmp  = decodeImage(dataResource, imageResource)
> let imageResult   = dewarpAndCleanupImage(imageTmp)
> DispatchQueue.main.async {
> self.imageResult = imageResult
> }
> }
> }
> 
> if all of the API were synchronous? Why wouldn’t we just exhort people to 
> write synchronous API code and continue using libdispatch? What am I missing?

There are probably very good optimisations for going asynchronous, but I’m not 
the right person for that part of the answer.

But I can give another answer: once we have an async/await pattern, we can 
build Futures/Promises on top of them and then we can await on multiple 
asynchronous calls in parallel. But it won’t be a feature of async/await in 
itself:

func doit() async {
let dataResource  = Future({ loadWebResource("dataprofile.txt") })
let imageResource = Future({ loadWebResource("imagedata.dat") })
let imageTmp = await decodeImage(dataResource.get, imageResource.get)
self.imageResult = await dewarpAndCleanupImage(imageTmp)
}

> -Kenny
> 
> 
>> On Sep 8, 2017, at 2:33 PM, David Hart > > wrote:
>> 
>> 
>>> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution 
>>> > wrote:
>>> 
>>> Hi All.
>>> 
>>> A point of clarification in this example:
>>> 
>>> func loadWebResource(_ path: String) async -> Resource
>>> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
>>> func dewarpAndCleanupImage(_ i : Image) async -> Image
>>> 
>>> func processImageData1() async -> Image {
>>> let dataResource  = await loadWebResource("dataprofile.txt")
>>> let imageResource = await loadWebResource("imagedata.dat")
>>> let imageTmp  = await decodeImage(dataResource, imageResource)
>>> let imageResult   = await dewarpAndCleanupImage(imageTmp)
>>> return imageResult
>>> }
>>> 
>>> Do these:
>>> 
>>> await loadWebResource("dataprofile.txt")
>>> await loadWebResource("imagedata.dat")
>>> 
>>> happen in parallel?
>> 
>> They don’t happen in parallel.
>> 
>>> If so, how can I make the second one wait on the first one? If not, how can 
>>> I make them go in parallel?
>>> 
>>> Thanks!
>>> 
>>> -Kenny
>>> 
>>> ___
>>> swift-evolution mailing list
>>> swift-evolution@swift.org 
>>> https://lists.swift.org/mailman/listinfo/swift-evolution 
>>> 
>> 
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-09 Thread Kenny Leung via swift-evolution
Then isn’t the example functionally equivalent to:

func doit() {
DispatchQueue.global().async {
let dataResource  = loadWebResource("dataprofile.txt")
let imageResource = loadWebResource("imagedata.dat")
let imageTmp  = decodeImage(dataResource, imageResource)
let imageResult   = dewarpAndCleanupImage(imageTmp)
DispatchQueue.main.async {
self.imageResult = imageResult
}
}
}

if all of the API were synchronous? Why wouldn’t we just exhort people to write 
synchronous API code and continue using libdispatch? What am I missing?

-Kenny


> On Sep 8, 2017, at 2:33 PM, David Hart  wrote:
> 
> 
>> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution 
>> > wrote:
>> 
>> Hi All.
>> 
>> A point of clarification in this example:
>> 
>> func loadWebResource(_ path: String) async -> Resource
>> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
>> func dewarpAndCleanupImage(_ i : Image) async -> Image
>> 
>> func processImageData1() async -> Image {
>> let dataResource  = await loadWebResource("dataprofile.txt")
>> let imageResource = await loadWebResource("imagedata.dat")
>> let imageTmp  = await decodeImage(dataResource, imageResource)
>> let imageResult   = await dewarpAndCleanupImage(imageTmp)
>> return imageResult
>> }
>> 
>> Do these:
>> 
>> await loadWebResource("dataprofile.txt")
>> await loadWebResource("imagedata.dat")
>> 
>> happen in parallel?
> 
> They don’t happen in parallel.
> 
>> If so, how can I make the second one wait on the first one? If not, how can 
>> I make them go in parallel?
>> 
>> Thanks!
>> 
>> -Kenny
>> 
>> ___
>> swift-evolution mailing list
>> swift-evolution@swift.org 
>> https://lists.swift.org/mailman/listinfo/swift-evolution
> 

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-08 Thread David Hart via swift-evolution

> On 8 Sep 2017, at 20:34, Kenny Leung via swift-evolution 
>  wrote:
> 
> Hi All.
> 
> A point of clarification in this example:
> 
> func loadWebResource(_ path: String) async -> Resource
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
> func dewarpAndCleanupImage(_ i : Image) async -> Image
> 
> func processImageData1() async -> Image {
> let dataResource  = await loadWebResource("dataprofile.txt")
> let imageResource = await loadWebResource("imagedata.dat")
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
> 
> Do these:
> 
> await loadWebResource("dataprofile.txt")
> await loadWebResource("imagedata.dat")
> 
> happen in parallel?

They don’t happen in parallel.

> If so, how can I make the second one wait on the first one? If not, how can I 
> make them go in parallel?
> 
> Thanks!
> 
> -Kenny
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-08 Thread Kenny Leung via swift-evolution
Hi All.

A point of clarification in this example:

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
func dewarpAndCleanupImage(_ i : Image) async -> Image

func processImageData1() async -> Image {
let dataResource  = await loadWebResource("dataprofile.txt")
let imageResource = await loadWebResource("imagedata.dat")
let imageTmp  = await decodeImage(dataResource, imageResource)
let imageResult   = await dewarpAndCleanupImage(imageTmp)
return imageResult
}

Do these:

await loadWebResource("dataprofile.txt")
await loadWebResource("imagedata.dat")

happen in parallel? If so, how can I make the second one wait on the first 
one? If not, how can I make them go in parallel?

Thanks!

-Kenny

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-07 Thread Pierre Habouzit via swift-evolution
> On Sep 7, 2017, at 1:04 AM, Howard Lovatt via swift-evolution 
>  wrote:
> 
> I would argue that given:
> 
> foo()
> await bar()
> baz()
> 
> That foo and baz should run on the same queue (using queue in the GCD sense) 
> but bar should determine which queue it runs on. I say this because:
> foo and baz are running synchronously with respect to each other (though they 
> could be running asynchronously with respect to some other process if all the 
> lines shown are inside an async function).
> bar is running asynchronously relative to foo and baz, potentially on a 
> different queue.

This isn't true: in the code above, foo(), bar() and baz() all execute serially 
(synchronously is a weird word to use here IMO).
Serial code that you write with or without async/await in the middle will run 
serially independently from where it physically executes.

And for the record I do agree with Chris that by default foo() and baz() should 
execute on the same context. This is quite tricky if the caller is a pthread 
though, and the three possibilities I see for this on a manually made thread 
are:
- we assert at runtime
- await synchronously blocks in that case
- baz() doesn't execute on the thread

I think the 3rd one is a non-starter; (1) would be nice but may prove 
impractical. A (4) would be to require people making manual threads and 
using async/await to drain something themselves from that thread through an 
event loop of theirs. But the danger of (4) is that if the client doesn't do 
it, then the failure mode is silent.

> I say bar is potentially on a different queue because the user of bar, the 
> person who wrote these 3 lines above, cannot be presumed to be the writer of 
> foo, baz, and particularly not bar and therefore have no detailed knowledge 
> about which queue is appropriate.
> 
> Therefore I would suggest either using a Future or expanding async so that 
> you can say:
> 
> func bar() async(qos: .userInitiated) { ... }
> 
> You also probably need the ability to specify a timeout and queue type, e.g.:
> 
>func bar() async(type: .serial, qos: .utility, timeout: .seconds(10)) 
> throws { ... }
> 
> If a timeout is specified then await would have to throw to enable the 
> timeout, i.e. call would become:
> 
>try await bar()
> 
> Defaults could be provided for qos (.default works well), timeout (1 second 
> works well), and type (.concurrent works well).
> 
> However a Future does all this already :).
> 
>   -- Howard.
> 
> On 7 September 2017 at 15:13, David Hart via swift-evolution 
> > wrote:
> 
> 
> > On 7 Sep 2017, at 07:05, Chris Lattner via swift-evolution 
> > > wrote:
> >
> >
> >> On Sep 5, 2017, at 7:31 PM, Eagle Offshore via swift-evolution 
> >> > wrote:
> >>
> >> OK, I've been watching this thing for a couple weeks.
> >>
> >> I've done a lot of GCD network code.  Invariably my completion method 
> >> starts with
> >>
> >> dispatch_async(queue_want_to_handle_this_on,)
> >>
> >> Replying on the same queue would be nice I guess, only often all I need to 
> >> do is update the UI in the completion code.
> >>
> >> OTOH, I have situations where the reply is complicated and I need to 
> >> persist a lot of data, then update the UI.
> >>
> >> So honestly, any assumption you make about how this is supposed to work is 
> >> going to be wrong about half the time unless
> >>
> >> you let me specify the reply queue directly.
> >>
> >> That is the only thing that works all the time.  Even then, I'm very apt 
> >> to make the choice to do some of the work off the main thread and then 
> >> queue up the minimal amount of work onto the main thread.
> >
> > I (think that I) understand what you’re saying here, but I don’t think that 
> > we’re talking about the same thing.
> >
> > You seem to be making an argument about what is most *useful* (being able 
> > to vector a completion handler to a specific queue), but I’m personally 
> > concerned about what is most *surprising* and therefore unnatural and prone 
> > to introduce bugs and misunderstandings by people who haven’t written the 
> > code.  To make this more concrete, shift from the “person who writes to 
> > code” to the “person who has to maintain someone else's code”:
> >
> > Imagine you are maintaining a large codebase, and you come across this 
> > (intentionally abstract) code:
> >
> >foo()
> >await bar()
> >baz()
> >
> > Regardless of what is the most useful, I’d argue that it is only natural to 
> > expect baz() to run on the same queue/thread/execution-context as foo and 
> > bar.  If, in the same model, you see something like:
> >
> >foo()
> >await bar()
> >anotherQueue.async {
> >baz()
> >}
> 
> Couldn’t it end up being:
> 
> foo()
> await bar()
> await 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-07 Thread Howard Lovatt via swift-evolution
I would argue that given:

foo()
await bar()
baz()

That foo and baz should run on the same queue (using queue in the GCD
sense) but bar should determine which queue it runs on. I say this because:

   1. foo and baz are running synchronously with respect to each other
   (though they could be running asynchronously with respect to some other
   process if all the lines shown are inside an async function).
   2. bar is running asynchronously relative to foo and baz, potentially on
   a different queue.

I say bar is potentially on a different queue because the user of bar, the
person who wrote these 3 lines above, cannot be presumed to be the writer
of foo, baz, and particularly not bar and therefore have no detailed
knowledge about which queue is appropriate.

Therefore I would suggest either using a Future or expanding async so that
you can say:

func bar() async(qos: .userInitiated) { ... }

You also probably need the ability to specify a timeout and queue type,
e.g.:

   func bar() async(type: .serial, qos: .utility, timeout: .seconds(10))
throws { ... }

If a timeout is specified then await would have to throw to enable the
timeout, i.e. call would become:

   try await bar()

Defaults could be provided for qos (.default works well), timeout (1 second
works well), and type (.concurrent works well).

However a Future does all this already :).

  -- Howard.

On 7 September 2017 at 15:13, David Hart via swift-evolution <
swift-evolution@swift.org> wrote:

>
>
> > On 7 Sep 2017, at 07:05, Chris Lattner via swift-evolution <
> swift-evolution@swift.org> wrote:
> >
> >
> >> On Sep 5, 2017, at 7:31 PM, Eagle Offshore via swift-evolution <
> swift-evolution@swift.org> wrote:
> >>
> >> OK, I've been watching this thing for a couple weeks.
> >>
> >> I've done a lot of GCD network code.  Invariably my completion method
> starts with
> >>
> >> dispatch_async(queue_want_to_handle_this_on,)
> >>
> >> Replying on the same queue would be nice I guess, only often all I need
> to do is update the UI in the completion code.
> >>
> >> OTOH, I have situations where the reply is complicated and I need to
> persist a lot of data, then update the UI.
> >>
> >> So honestly, any assumption you make about how this is supposed to work
> is going to be wrong about half the time unless
> >>
> >> you let me specify the reply queue directly.
> >>
> >> That is the only thing that works all the time.  Even then, I'm very
> apt to make the choice to do some of the work off the main thread and then
> queue up the minimal amount of work onto the main thread.
> >
> > I (think that I) understand what you’re saying here, but I don’t think
> that we’re talking about the same thing.
> >
> > You seem to be making an argument about what is most *useful* (being
> able to vector a completion handler to a specific queue), but I’m
> personally concerned about what is most *surprising* and therefore
> unnatural and prone to introduce bugs and misunderstandings by people who
> haven’t written the code.  To make this more concrete, shift from the
> “person who writes to code” to the “person who has to maintain someone
> else's code”:
> >
> > Imagine you are maintaining a large codebase, and you come across this
> (intentionally abstract) code:
> >
> >foo()
> >await bar()
> >baz()
> >
> > Regardless of what is the most useful, I’d argue that it is only natural
> to expect baz() to run on the same queue/thread/execution-context as foo
> and bar.  If, in the same model, you see something like:
> >
> >foo()
> >await bar()
> >anotherQueue.async {
> >baz()
> >}
>
> Couldn’t it end up being:
>
> foo()
> await bar()
> await anotherQueue.async()
> // on another queue
>
> > Then it is super clear what is going on: an intentional queue hop from
> whatever foo/bar are run on to anotherQueue.
> >
> > I interpret your email as arguing for something like this:
> >
> >foo()
> >await(anotherQueue) bar()
> >baz()
> >
> > I’m not sure if that’s exactly the syntax you’re arguing for, but
> anything like this presents a number of challenges:
> >
> > 1) it is “just sugar” over the basic model, so we could argue to add it
> at any time (and would argue strongly to defer it out of this round of
> discussions).
> >
> > 2) We’d have to find a syntax that implies that baz() runs on
> anotherQueue, but bar() runs on the existing queue.  The syntax I sketched
> above does NOT provide this indication.
> >
> > -Chris
> >
> >
> > ___
> > swift-evolution mailing list
> > swift-evolution@swift.org
> > https://lists.swift.org/mailman/listinfo/swift-evolution
>
>

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-06 Thread David Hart via swift-evolution


> On 7 Sep 2017, at 07:05, Chris Lattner via swift-evolution 
>  wrote:
> 
> 
>> On Sep 5, 2017, at 7:31 PM, Eagle Offshore via swift-evolution 
>>  wrote:
>> 
>> OK, I've been watching this thing for a couple weeks.
>> 
>> I've done a lot of GCD network code.  Invariably my completion method starts 
>> with
>> 
>> dispatch_async(queue_want_to_handle_this_on,)
>> 
>> Replying on the same queue would be nice I guess, only often all I need to 
>> do is update the UI in the completion code.
>> 
>> OTOH, I have situations where the reply is complicated and I need to persist 
>> a lot of data, then update the UI.
>> 
>> So honestly, any assumption you make about how this is supposed to work is 
>> going to be wrong about half the time unless
>> 
>> you let me specify the reply queue directly.
>> 
>> That is the only thing that works all the time.  Even then, I'm very apt to 
>> make the choice to do some of the work off the main thread and then queue up 
>> the minimal amount of work onto the main thread.
> 
> I (think that I) understand what you’re saying here, but I don’t think that 
> we’re talking about the same thing.  
> 
> You seem to be making an argument about what is most *useful* (being able to 
> vector a completion handler to a specific queue), but I’m personally 
> concerned about what is most *surprising* and therefore unnatural and prone 
> to introduce bugs and misunderstandings by people who haven’t written the 
> code.  To make this more concrete, shift from the “person who writes to code” 
> to the “person who has to maintain someone else's code”:
> 
> Imagine you are maintaining a large codebase, and you come across this 
> (intentionally abstract) code:
> 
>foo()
>await bar()
>baz()
> 
> Regardless of what is the most useful, I’d argue that it is only natural to 
> expect baz() to run on the same queue/thread/execution-context as foo and 
> bar.  If, in the same model, you see something like:
> 
>foo()
>await bar()
>anotherQueue.async {
>baz()
>}

Couldn’t it end up being:

foo()
await bar()
await anotherQueue.async()
// on another queue

> Then it is super clear what is going on: an intentional queue hop from 
> whatever foo/bar are run on to anotherQueue.
> 
> I interpret your email as arguing for something like this:
> 
>foo()
>await(anotherQueue) bar()
>baz()
> 
> I’m not sure if that’s exactly the syntax you’re arguing for, but anything 
> like this presents a number of challenges:
> 
> 1) it is “just sugar” over the basic model, so we could argue to add it at 
> any time (and would argue strongly to defer it out of this round of 
> discussions).
> 
> 2) We’d have to find a syntax that implies that baz() runs on anotherQueue, 
> but bar() runs on the existing queue.  The syntax I sketched above does NOT 
> provide this indication.
> 
> -Chris
> 
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-05 Thread Eagle Offshore via swift-evolution
OK, I've been watching this thing for a couple weeks.

I've done a lot of GCD network code.  Invariably my completion method starts 
with

dispatch_async(queue_want_to_handle_this_on,)

Replying on the same queue would be nice I guess, only often all I need to do 
is update the UI in the completion code.

OTOH, I have situations where the reply is complicated and I need to persist a 
lot of data, then update the UI.

So honestly, any assumption you make about how this is supposed to work is 
going to be wrong about half the time unless

you let me specify the reply queue directly.

That is the only thing that works all the time.  Even then, I'm very apt to 
make the choice to do some of the work off the main thread and then queue up 
the minimal amount of work onto the main thread.
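The caller-picks-the-reply-queue pattern described above is easy to spell out in GCD-based Swift; `fetchValue` and every name below are made up for illustration, not an API from any proposal:

```swift
import Dispatch

// A completion-handler API where the caller chooses the reply queue
// explicitly, as described above.
func fetchValue(replyQueue: DispatchQueue, completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {
        let value = 6 * 7                        // stand-in for the real work
        replyQueue.async { completion(value) }   // hop to the caller's queue
    }
}

let replies = DispatchQueue(label: "replies")    // stands in for the main queue
let done = DispatchSemaphore(value: 0)
var received = -1
fetchValue(replyQueue: replies) { value in
    received = value                             // runs on `replies`
    done.signal()
}
done.wait()
print(received)  // 42
```

The semaphore is only there to keep the sketch synchronous; a UI app would simply pass `DispatchQueue.main` as the reply queue.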

Finally, I don't think this is properly a language feature.  I think it's a 
library feature.  I think Swift's tendency is to push way too much into the 
language rather than the library, and I personally STRONGLY prefer tiny 
languages with rich libraries rather than the opposite.  

That's my $0.02.  I don't care about keywords for async stuff.   I'm more than 
happy with GCD the library as long as I have the building blocks (closures) to 
take advantage of it.

> On Sep 5, 2017, at 6:06 PM, Pierre Habouzit via swift-evolution 
>  wrote:
> 
>> On Sep 5, 2017, at 5:29 PM, Elliott Harris via swift-evolution 
>> > wrote:
>> 
>>> 
>>> On Sep 4, 2017, at 11:40 AM, Pierre Habouzit via swift-evolution 
>>> > wrote:
>>> 
 On Sep 4, 2017, at 10:36 AM, Chris Lattner via swift-evolution 
 > wrote:
 
 On Sep 3, 2017, at 12:44 PM, Pierre Habouzit > wrote:
> My currently not very well formed opinion on this subject is that GCD 
> queues are just what you need with these possibilities:
> - this Actor queue can be targeted to other queues by the developer 
> when he means for these actors to be executed in an existing execution 
> context / locking domain,
> - we disallow Actors to be directly targeted to GCD global concurrent 
> queues ever
> - for the other ones we create a new abstraction with stronger and 
> better guarantees (typically limiting the number of possible threads 
> servicing actors to a low number, not greater than NCPU).
 
 Is there a specific important use case for being able to target an 
 actor to an existing queue?  Are you looking for advanced patterns 
 where multiple actors (each providing disjoint mutable state) share an 
 underlying queue? Would this be for performance reasons, for 
 compatibility with existing code, or something else?
>>> 
>>> Mostly for interaction with current designs where being on a given 
>>> bottom serial queue gives you the locking context for resources 
>>> naturally attached to it.
>> 
>> Ok.  I don’t understand the use-case well enough to know how we should 
>> model this.  For example, is it important for an actor to be able to 
>> change its queue dynamically as it goes (something that sounds really 
>> scary to me) or can the “queue to use” be specified at actor 
>> initialization time?
> 
> I think I need to read more on actors, because the same way you're not an 
> OS runtime expert, I'm not (or rather no longer, I started down that path 
> a lifetime ago) a language expert at all, and I feel like I need to 
> understand your world better to try to explain this part better to you.
 
 No worries.  Actually, after thinking about it a bit, I don’t think that 
 switching underlying queues at runtime is scary.
 
 The important semantic invariant which must be maintained is that there is 
 only one thread executing within an actor context at a time.  Switching 
 around underlying queues (or even having multiple actors on the same 
 queue) shouldn’t be a problem.
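That invariant, multiple exclusive "actors" sharing one underlying queue, can already be sketched with plain GCD queue targeting; all names below are illustrative, not proposed API:

```swift
import Dispatch

// Two "actors" whose private serial queues target one shared bottom serial
// queue: each keeps the one-runner-at-a-time invariant, and both share a
// single locking context (Pierre's "bottom serial queue").
let bottom = DispatchQueue(label: "locking-context")

final class Counter {
    let queue: DispatchQueue
    private var value = 0
    init(label: String, target: DispatchQueue) {
        queue = DispatchQueue(label: label, target: target)
    }
    func increment(_ done: @escaping (Int) -> Void) {
        queue.async {
            self.value += 1
            done(self.value)
        }
    }
}

let a = Counter(label: "actor-A", target: bottom)
let b = Counter(label: "actor-B", target: bottom)

let group = DispatchGroup()
var observed = [Int]()
for _ in 0..<100 {
    group.enter()
    a.increment { _ in group.leave() }
    group.enter()
    b.increment { v in
        // Appending here is safe only because both actors funnel into the
        // same serial bottom queue, so these callbacks never race.
        observed.append(v)
        group.leave()
    }
}
group.wait()
print(observed.count)  // 100
```

Because b's serial queue is FIFO, its values arrive strictly in order even though a's work is interleaved on the same bottom queue.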
 
 OTOH, you don’t want an actor “listening” to two unrelated queues, because 
 there is nothing to synchronize between the queues, and you could have 
 multiple actor methods invoked at the same time: you lose the protection 
 of a single serial queue. 
 
 The only concern I’d have with an actor switching queues at runtime is 
 that you don’t want a race condition where an item on QueueA goes to the 
 actor, then it switches to QueueB, then another item from QueueB runs 
 while the actor is already doing something for QueueA.
 
 
>>> I think what you said made sense.
>> 
>> Ok, I captured this in yet-another speculative section:
>> 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-05 Thread Pierre Habouzit via swift-evolution
> On Sep 5, 2017, at 5:29 PM, Elliott Harris via swift-evolution 
>  wrote:
> 
>> 
>> On Sep 4, 2017, at 11:40 AM, Pierre Habouzit via swift-evolution 
>> > wrote:
>> 
>>> On Sep 4, 2017, at 10:36 AM, Chris Lattner via swift-evolution 
>>> > wrote:
>>> 
>>> On Sep 3, 2017, at 12:44 PM, Pierre Habouzit >> > wrote:
 My currently not very well formed opinion on this subject is that GCD 
 queues are just what you need with these possibilities:
 - this Actor queue can be targeted to other queues by the developer 
 when he means for these actors to be executed in an existing execution 
 context / locking domain,
 - we disallow Actors to be directly targeted to GCD global concurrent 
 queues ever
 - for the other ones we create a new abstraction with stronger and 
 better guarantees (typically limiting the number of possible threads 
 servicing actors to a low number, not greater than NCPU).
>>> 
>>> Is there a specific important use case for being able to target an 
>>> actor to an existing queue?  Are you looking for advanced patterns 
>>> where multiple actors (each providing disjoint mutable state) share an 
>>> underlying queue? Would this be for performance reasons, for 
>>> compatibility with existing code, or something else?
>> 
>> Mostly for interaction with current designs where being on a given 
>> bottom serial queue gives you the locking context for resources 
>> naturally attached to it.
> 
> Ok.  I don’t understand the use-case well enough to know how we should 
> model this.  For example, is it important for an actor to be able to 
> change its queue dynamically as it goes (something that sounds really 
> scary to me) or can the “queue to use” be specified at actor 
> initialization time?
 
 I think I need to read more on actors, because the same way you're not an 
 OS runtime expert, I'm not (or rather no longer, I started down that path 
 a lifetime ago) a language expert at all, and I feel like I need to 
 understand your world better to try to explain this part better to you.
>>> 
>>> No worries.  Actually, after thinking about it a bit, I don’t think that 
>>> switching underlying queues at runtime is scary.
>>> 
>>> The important semantic invariant which must be maintained is that there is 
>>> only one thread executing within an actor context at a time.  Switching 
>>> around underlying queues (or even having multiple actors on the same queue) 
>>> shouldn’t be a problem.
>>> 
>>> OTOH, you don’t want an actor “listening” to two unrelated queues, because 
>>> there is nothing to synchronize between the queues, and you could have 
>>> multiple actor methods invoked at the same time: you lose the protection of 
>>> a single serial queue. 
>>> 
>>> The only concern I’d have with an actor switching queues at runtime is that 
>>> you don’t want a race condition where an item on QueueA goes to the actor, 
>>> then it switches to QueueB, then another item from QueueB runs while the 
>>> actor is already doing something for QueueA.
>>> 
>>> 
>> I think what you said made sense.
> 
> Ok, I captured this in yet-another speculative section:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
>  
> 
 Great. BTW I agree 100% with:
 
 That said, this is definitely a power-user feature, and we should 
 understand, build, and get experience using the basic system before 
 considering adding something like this.
 
 Private concurrent queues are not a success in dispatch and cause several 
 issues, these queues are second class citizens in GCD in terms of feature 
 they support, and building something with concurrency *within* is hard. I 
 would keep it as "that's where we'll go some day" but not try to attempt 
 it until we've built the simpler (or rather less hard) purely serial case 
 first.
>>> 
>>> Right, I agree this is not important for the short term.  To clarify 
>>> though, I meant to indicate that these actors would be implemented 
>>> completely independently of dispatch, not that they’d build on private 
>>> concurrent queues.
>>> 
>>> 
>> Another problem I haven't touched either is kernel-issued events 
>> (inbound IPC from other processes, networking events, etc...). Dispatch 
>> for the longest time used an indirection through a manager thread for 
>> all such events, and that had two major issues:
>> 
>> - the thread hops it caused, causing networking workloads to utilize up 
>> to 15-20% more CPU time than an 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-05 Thread Elliott Harris via swift-evolution

> On Sep 4, 2017, at 11:40 AM, Pierre Habouzit via swift-evolution 
>  wrote:
> 
>> On Sep 4, 2017, at 10:36 AM, Chris Lattner via swift-evolution 
>> > wrote:
>> 
>> On Sep 3, 2017, at 12:44 PM, Pierre Habouzit > > wrote:
>>> My currently not very well formed opinion on this subject is that GCD 
>>> queues are just what you need with these possibilities:
>>> - this Actor queue can be targeted to other queues by the developer 
>>> when he means for these actors to be executed in an existing execution 
>>> context / locking domain,
>>> - we disallow Actors to be directly targeted to GCD global concurrent 
>>> queues ever
>>> - for the other ones we create a new abstraction with stronger and 
>>> better guarantees (typically limiting the number of possible threads 
>>> servicing actors to a low number, not greater than NCPU).
>> 
>> Is there a specific important use case for being able to target an actor 
>> to an existing queue?  Are you looking for advanced patterns where 
>> multiple actors (each providing disjoint mutable state) share an 
>> underlying queue? Would this be for performance reasons, for 
>> compatibility with existing code, or something else?
> 
> Mostly for interaction with current designs where being on a given bottom 
> serial queue gives you the locking context for resources naturally 
> attached to it.
 
 Ok.  I don’t understand the use-case well enough to know how we should 
 model this.  For example, is it important for an actor to be able to 
 change its queue dynamically as it goes (something that sounds really 
 scary to me) or can the “queue to use” be specified at actor 
 initialization time?
>>> 
>>> I think I need to read more on actors, because the same way you're not an 
>>> OS runtime expert, I'm not (or rather no longer, I started down that path a 
>>> lifetime ago) a language expert at all, and I feel like I need to 
>>> understand your world better to try to explain this part better to you.
>> 
>> No worries.  Actually, after thinking about it a bit, I don’t think that 
>> switching underlying queues at runtime is scary.
>> 
>> The important semantic invariant which must be maintained is that there is 
>> only one thread executing within an actor context at a time.  Switching 
>> around underlying queues (or even having multiple actors on the same queue) 
>> shouldn’t be a problem.
>> 
>> OTOH, you don’t want an actor “listening” to two unrelated queues, because 
>> there is nothing to synchronize between the queues, and you could have 
>> multiple actor methods invoked at the same time: you lose the protection of 
>> a single serial queue. 
>> 
>> The only concern I’d have with an actor switching queues at runtime is that 
>> you don’t want a race condition where an item on QueueA goes to the actor, 
>> then it switches to QueueB, then another item from QueueB runs while the 
>> actor is already doing something for QueueA.
>> 
>> 
> I think what you said made sense.
 
 Ok, I captured this in yet-another speculative section:
 https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
  
 
>>> Great. BTW I agree 100% with:
>>> 
>>> That said, this is definitely a power-user feature, and we should 
>>> understand, build, and get experience using the basic system before 
>>> considering adding something like this.
>>> 
>>> Private concurrent queues are not a success in dispatch and cause several 
>>> issues, these queues are second class citizens in GCD in terms of feature 
>>> they support, and building something with concurrency *within* is hard. I 
>>> would keep it as "that's where we'll go some day" but not try to attempt it 
>>> until we've built the simpler (or rather less hard) purely serial case 
>>> first.
>> 
>> Right, I agree this is not important for the short term.  To clarify though, 
>> I meant to indicate that these actors would be implemented completely 
>> independently of dispatch, not that they’d build on private concurrent 
>> queues.
>> 
>> 
> Another problem I haven't touched either is kernel-issued events (inbound 
> IPC from other processes, networking events, etc...). Dispatch for the 
> longest time used an indirection through a manager thread for all such 
> events, and that had two major issues:
> 
> - the thread hops it caused, causing networking workloads to utilize up 
> to 15-20% more CPU time than an equivalent manually made pthread parked 
> in kevent(), because networking pace even when busy idles back all the 
> time as far as the CPU is concerned, so dispatch queues never stay hot, 
> and the context switch is not 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-05 Thread Wallacy via swift-evolution
Fair enough! Thanks!

On Tue, Sep 5, 2017 at 13:48, Pierre Habouzit wrote:

> On Sep 5, 2017, at 9:29 AM, Wallacy via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> "Actors are serial and exclusive, so this concurrent queue thing is not
> relevant."
>
> Always? That is something I can't understand. The proposal actually cites
> the "Intra-actor concurrency"
>
>
> As a future extension yes, I don't think we should rush there ;)
> Dispatch has clearly failed at making intra-queue concurrency a first
> class citizen atm.
>
>
> "Also, in the QoS world, using reader-writer locks or private concurrent
> queues this way is not terribly great."
>
> This I understand, makes sense.
>
> "lastly for a simple writer like that you want dispatch_barrier_sync() not
> async (async will create a thread and it's terribly wasteful for so little
> work)."
>
> Yes, dispatch_barrier_sync makes more sense here...
>
> My point is:
>
> The proposal already defines something like actor var, in other words "a
> special kind of var", and "Improve Performance with Reader-Writer Access"
> is not only a "special case" in the concurrency world but, if done the
> right way, the only reasonable way to use a "class variable" (an actor is
> a special class, right?) in a multithreaded environment. If I'm not wrong
> (again), queues (concurrent/serial) help with the "lock hell" problem.
>
> It is just something to be considered before the final model is defined,
> thus avoiding a big refactoring in the future to solve something that was
> not considered now.
>
> Okay to start small, I'm just trying to better visualize what may be
> necessary in the future to make sure that what has been done now will be
> compatible.
>
> Thanks.
>
>
> On Mon, Sep 4, 2017 at 16:06, Pierre Habouzit wrote:
>
>> On Sep 4, 2017, at 7:27 AM, Wallacy via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>> Hello,
>>
>> I have a little question about the actors.
>>
>> On WWDC 2012 Session 712 one of the most important tips (for me at least)
>> was: Improve Performance with Reader-Writer Access
>>
>> Basically:
>> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
>> • Use synchronous concurrent “reads”: dispatch_sync()
>> • Use asynchronous serialized “writes”: dispatch_barrier_async()
>>
>> Example:
>>
>> // ...
>> _someManagerQueue = dispatch_queue_create("SomeManager", DISPATCH_QUEUE_CONCURRENT);
>> // ...
>>
>>
>> And then:
>>
>> - (id)getSomeArrayItem:(NSUInteger)index {
>>     __block id importantObj = nil;
>>     dispatch_sync(_someManagerQueue, ^{
>>         importantObj = [_importantArray objectAtIndex:index];
>>     });
>>     return importantObj;
>> }
>> - (void)removeSomeArrayItem:(id)object {
>>     dispatch_barrier_async(_someManagerQueue, ^{
>>         [_importantArray removeObject:object];
>>     });
>> }
>> - (void)addSomeArrayItem:(id)object {
>>     dispatch_barrier_async(_someManagerQueue, ^{
>>         [_importantArray addObject:object];
>>     });
>> }
>>
>>
>> That way you ensure that whenever you read a piece of information (e.g. an
>> array), all the "changes" have been made or are "waiting". And every time
>> you write information, your program will not be blocked waiting for the
>> operation to complete.
>>
>> That way, if you use several threads, none will have to wait for another to
>> get a value unless one of them is "writing", which is the right thing to do.
>>
>> Will it be possible to compose this with actors? I see a lot of discussion
>> about using serial queues, and I also have not seen any mechanism similar
>> to dispatch_barrier_async being discussed here or in other threads.
>>
>>
>> Actors are serial and exclusive, so this concurrent queue thing is not
>> relevant.
>> Also, in the QoS world, using reader-writer locks or private concurrent
>> queues this way is not terribly great.
>> Lastly, for a simple writer like that you want dispatch_barrier_sync(), not
>> async (async will create a thread and it's terribly wasteful for so little
>> work).
>>
>> We covered these subtleties in this year's WWDC GCD session.
>>
>>
>> -Pierre
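For reference, the same reader-writer shape in Swift, with the barrier-sync correction applied for the small writes; `SomeManager` and its members are an illustrative sketch, not an API from the proposal:

```swift
import Dispatch

// Reader-writer over a private concurrent queue: concurrent sync reads,
// serialized barrier-sync writes (sync, per the advice above, because the
// work items are tiny and an async barrier would spin up a thread).
final class SomeManager {
    private let queue = DispatchQueue(label: "SomeManager", attributes: .concurrent)
    private var items: [String] = []

    func item(at index: Int) -> String {
        // Concurrent read: many readers may run at once.
        return queue.sync { items[index] }
    }

    func add(_ item: String) {
        // Barrier write: waits for in-flight reads, then runs exclusively.
        queue.sync(flags: .barrier) { items.append(item) }
    }

    func remove(_ item: String) {
        queue.sync(flags: .barrier) {
            if let i = items.firstIndex(of: item) { items.remove(at: i) }
        }
    }

    var count: Int { return queue.sync { items.count } }
}

let manager = SomeManager()
manager.add("a")
manager.add("b")
manager.remove("a")
print(manager.count)       // 1
print(manager.item(at: 0)) // b
```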
>>
>>
>>
>> On Mon, Sep 4, 2017 at 08:20, Daniel Vollmer via swift-evolution <
>> swift-evolution@swift.org> wrote:
>>
>>> Hello,
>>>
>>> first off, I’m following this discussion with great interest, even
>>> though my background (simulation software on HPC) has a different focus
>>> than the “usual” paradigms Swift seeks to (primarily) address.
>>>
>>> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution <
>>> swift-evolution@swift.org> wrote:
>>> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit 
>>> wrote:
>>> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit 
>>> wrote:
>>> >>>
>>> >>> Is there a specific important use case for being able to target an
>>> actor to an existing queue?  Are you looking for advanced patterns where
>>> multiple actors (each providing 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-05 Thread Pierre Habouzit via swift-evolution
> On Sep 5, 2017, at 9:29 AM, Wallacy via swift-evolution 
>  wrote:
> 
> "Actors are serial and exclusive, so this concurrent queue thing is not 
> relevant."
> 
> Always? That is something I can't understand. The proposal actually cites the 
> "Intra-actor concurrency"

As a future extension yes, I don't think we should rush there ;)
Dispatch has clearly failed at making intra-queue concurrency a first class 
citizen atm.

> 
> "Also, in the QoS world, using reader-writer locks or private concurrent 
> queues this way is not terribly great."
> 
> This I understand, makes sense.
> 
> "lastly for a simple writer like that you want dispatch_barrier_sync() not 
> async (async will create a thread and it's terribly wasteful for so little 
> work)."
> 
> Yes, dispatch_barrier_sync makes more sense here...
> 
> My point is:
> 
> The proposal already defines something like actor var, in other words "a 
> special kind of var", and "Improve Performance with Reader-Writer Access" is 
> not only a "special case" in the concurrency world but, if done the right way, 
> the only reasonable way to use a "class variable" (an actor is a special class, 
> right?) in a multithreaded environment. If I'm not wrong (again), queues 
> (concurrent/serial) help with the "lock hell" problem.
> 
> It is just something to be considered before the final model is defined, thus 
> avoiding a big refactoring in the future to solve something that was not 
> considered now.
> 
> Okay to start small, I'm just trying to better visualize what may be 
> necessary in the future to make sure that what has been done now will be 
> compatible.
> 
> Thanks.
> 
> 
> On Mon, Sep 4, 2017 at 16:06, Pierre Habouzit wrote:
>> On Sep 4, 2017, at 7:27 AM, Wallacy via swift-evolution 
>> > wrote:
>> 
>> Hello,
>> 
>> I have a little question about the actors.
>> 
>> On WWDC 2012 Session 712 one of the most important tips (for me at least) 
>> was: Improve Performance with Reader-Writer Access
>> 
>> Basically:
>> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
>> • Use synchronous concurrent “reads”: dispatch_sync()
>> • Use asynchronous serialized “writes”: dispatch_barrier_async()
>> 
>> Example:
>> // ...
>> _someManagerQueue = dispatch_queue_create("SomeManager", DISPATCH_QUEUE_CONCURRENT);
>> // ...
>> 
>> And then:
>> 
>> - (id)getSomeArrayItem:(NSUInteger)index {
>>     __block id importantObj = nil;
>>     dispatch_sync(_someManagerQueue, ^{
>>         importantObj = [_importantArray objectAtIndex:index];
>>     });
>>     return importantObj;
>> }
>> - (void) removeSomeArrayItem:(id) object {
>>  dispatch_barrier_async(_someManagerQueue,^{
>>  [_importantArray removeObject:object];
>>  });
>>  }
>> - (void) addSomeArrayItem:(id) object {
>>  dispatch_barrier_async(_someManagerQueue,^{
>>  [_importantArray addObject:object];
>>  });
>>  }
>> 
>> That way you ensure that whenever you read a piece of information (e.g. an 
>> array), all the "changes" have been made or are "waiting". And every time you 
>> write information, your program will not be blocked waiting for the operation 
>> to complete.
>> 
>> That way, if you use several threads, none will have to wait for another to 
>> get a value unless one of them is "writing", which is the right thing to do.
>> 
>> Will it be possible to compose this with actors? I see a lot of discussion 
>> about using serial queues, and I also have not seen any mechanism similar to 
>> dispatch_barrier_async being discussed here or in other threads.
> 
> Actors are serial and exclusive, so this concurrent queue thing is not 
> relevant.
> Also, in the QoS world, using reader-writer locks or private concurrent 
> queues this way is not terribly great.
> Lastly, for a simple writer like that you want dispatch_barrier_sync(), not 
> async (async will create a thread and it's terribly wasteful for so little 
> work).
> 
> We covered these subtleties in this year's WWDC GCD session.
> 
> 
> -Pierre
> 
> 
>> 
>> On Mon, Sep 4, 2017 at 08:20, Daniel Vollmer via swift-evolution wrote:
>> Hello,
>> 
>> first off, I’m following this discussion with great interest, even though my 
>> background (simulation software on HPC) has a different focus than the 
>> “usual” paradigms Swift seeks to (primarily) address.
>> 
>> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>> > > wrote:
>> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit > >> > wrote:
>> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit > >>> > wrote:
>> >>>
>> >>> Is there a specific important use case for being able to target an 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-05 Thread Wallacy via swift-evolution
"Actors are serial and exclusive, so this concurrent queue thing is not
relevant."

Always? That is something I can't understand. The proposal actually cites
the "Intra-actor concurrency"

"Also, in the QoS world, using reader-writer locks or private concurrent
queues this way is not terribly great."

This I understand, makes sense.

"lastly for a simple writer like that you want dispatch_barrier_sync() not
async (async will create a thread and it's terribly wasteful for so little
work)."

Yes, dispatch_barrier_sync makes more sense here...

My point is:

The proposal already defines something like actor var, in other words "a
special kind of var", and "Improve Performance with Reader-Writer Access"
is not only a "special case" in the concurrency world but, if done the right
way, the only reasonable way to use a "class variable" (an actor is a special
class, right?) in a multithreaded environment. If I'm not wrong (again), queues
(concurrent/serial) help with the "lock hell" problem.

It is just something to be considered before the final model is defined,
thus avoiding a big refactoring in the future to solve something that was
not considered now.

Okay to start small, I'm just trying to better visualize what may be
necessary in the future to make sure that what has been done now will be
compatible.

Thanks.


On Mon, Sep 4, 2017 at 16:06, Pierre Habouzit wrote:

> On Sep 4, 2017, at 7:27 AM, Wallacy via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Hello,
>
> I have a little question about the actors.
>
> On WWDC 2012 Session 712 one of the most important tips (for me at least)
> was: Improve Performance with Reader-Writer Access
>
> Basically:
> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
> • Use synchronous concurrent “reads”: dispatch_sync()
> • Use asynchronous serialized “writes”: dispatch_barrier_async()
>
> Example:
>
> // ...
> _someManagerQueue = dispatch_queue_create("SomeManager", DISPATCH_QUEUE_CONCURRENT);
> // ...
>
>
> And then:
>
> - (id)getSomeArrayItem:(NSUInteger)index {
>     __block id importantObj = nil;
>     dispatch_sync(_someManagerQueue, ^{
>         importantObj = [_importantArray objectAtIndex:index];
>     });
>     return importantObj;
> }
> - (void)removeSomeArrayItem:(id)object {
>     dispatch_barrier_async(_someManagerQueue, ^{
>         [_importantArray removeObject:object];
>     });
> }
> - (void)addSomeArrayItem:(id)object {
>     dispatch_barrier_async(_someManagerQueue, ^{
>         [_importantArray addObject:object];
>     });
> }
>
>
> That way you ensure that whenever you read a piece of information (e.g. an
> array), all the "changes" have been made or are "waiting". And every time you
> write a piece of information, your program will not be blocked waiting for the
> operation to be completed.
>
> That way, if you use several threads, none will have to wait for another to
> get any value unless one of them is "writing", which is the right thing to do.
>
> How will this compose with actors? I see a lot of discussion
> about using serial queues, and I also have not seen any mechanism similar
> to dispatch_barrier_async being discussed here or in other threads.
>
>
> Actors are serial and exclusive, so this concurrent queue thing is not
> relevant.
> Also, in the QoS world, using reader-writer locks or private concurrent
> queues this way is not terribly great.
> Lastly, for a simple writer like that you want dispatch_barrier_sync(), not
> async (async will create a thread, and it's terribly wasteful for so little
> work).
>
> We covered these subtleties in this year's WWDC GCD session.
>
>
> -Pierre
>
>
>
> On Mon, Sep 4, 2017 at 08:20, Daniel Vollmer via swift-evolution <
> swift-evolution@swift.org> wrote:
>
>> Hello,
>>
>> first off, I’m following this discussion with great interest, even though
>> my background (simulation software on HPC) has a different focus than the
>> “usual” paradigms Swift seeks to (primarily) address.
>>
>> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution <
>> swift-evolution@swift.org> wrote:
>> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit 
>> wrote:
>> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit 
>> wrote:
>> >>>
>> >>> Is there a specific important use case for being able to target an
>> actor to an existing queue?  Are you looking for advanced patterns where
>> multiple actors (each providing disjoint mutable state) share an underlying
>> queue? Would this be for performance reasons, for compatibility with
>> existing code, or something else?
>> >>
>> >> Mostly for interaction with current designs where being on a given
>> bottom serial queue gives you the locking context for resources naturally
>> attached to it.
>> >
>> > Ok.  I don’t understand the use-case well enough to know how we should
>> model this.  For example, is it important for an actor to be able to change
>> its queue dynamically as it goes 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Pierre Habouzit via swift-evolution

-Pierre

> On Sep 4, 2017, at 9:10 AM, Chris Lattner via swift-evolution wrote:
> 
> 
>> On Sep 4, 2017, at 9:05 AM, Jean-Daniel wrote:
>> 
 Sometimes it’d probably make sense (or even be required) to fix this to a 
 certain queue (in the thread(-pool?) sense), but at other times it may just 
 make sense to execute the messages in place on the sender if they don’t 
 block, so no context switch is incurred.
>>> 
>>> Do you mean kernel context switch?  With well behaved actors, the runtime 
>>> should be able to run work items from many different queues on the same 
>>> kernel thread.  The “queue switch cost” is designed to be very very low.  
>>> The key thing is that the runtime needs to know when work on a queue gets 
>>> blocked so the kernel thread can move on to servicing some other queues 
>>> work.
>> 
>> My understanding is that a kernel thread can’t move on to servicing a different 
>> queue while a block is executing on it. The runtime already knows when a 
>> queue is blocked, and the only way it has to mitigate the problem is to 
>> spawn another kernel thread to serve the other queues. This is what causes 
>> the kernel thread explosion.
> 
> I’m not sure what you mean by “executing on it”.  A work item that currently 
> has a kernel thread can be doing one of two things: “executing work” (like 
> number crunching) or “being blocked in the kernel on something that GCD 
> doesn’t know about”. 
> 
> However, the whole point is that work items shouldn’t do this: as you say it 
> causes thread explosions.  It is better for them to yield control back to 
> GCD, which allows GCD to use the kernel thread for other queues, even though 
> the original *queue* is blocked.


You're forgetting two things:

First off, when the work item stops doing work and gives up control, the kernel 
thread doesn't become instantaneously available. If you want the thread to be 
reusable to execute some asynchronously waited on work that the actor is 
handling, then you have to make sure to defer scheduling this work until the 
thread is in a reusable state.

Second, there may be other work enqueued already in this context, in which 
case, even if the current work item yields, what it's waiting on will create a 
new thread because the current context is in use.

The first issue is something we can optimize (despite GCD not doing it), with 
tons of techniques, so let's not rathole into a discussion on it.
The second one is not something we can "fix". There will be cases when the 
correct thing to do is to linearize, and some cases when it's not. And you 
can't know upfront what the right decision was.



Something else I realized is that this code is fundamentally broken in Swift:

actor func foo()
{
    let lock = NSLock()
    lock.lock()

    let compute = await someCompute() // <--- this will really break `foo` in two
                                      // pieces of code that can execute on two
                                      // different physical threads.
    lock.unlock()
}


The reason why it is broken is that mutexes (whether NSLock, 
pthread_mutex, or os_unfair_lock) have to be unlocked from the same thread that 
took them. The await right in the middle here means that we can't guarantee it.

There are numerous primitives that can't be used across an await call in this 
way:
- things that use the calling context identity in some object (such as locks, 
mutexes, ...)
- anything that attaches data to the context (TSDs)

The things in the first category probably have to be typed in a way that using 
them across an async or await is disallowed at compile time.
The things in the second category are actor-unsafe and need to move to other 
ways of doing the same thing.



-Pierre

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Pierre Habouzit via swift-evolution
> On Sep 4, 2017, at 7:27 AM, Wallacy via swift-evolution 
>  wrote:
> 
> Hello,
> 
> I have a little question about the actors.
> 
> On WWDC 2012 Session 712 one of the most important tips (for me at least) 
> was: Improve Performance with Reader-Writer Access
> 
> Basically:
> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
> • Use synchronous concurrent “reads”: dispatch_sync()
> • Use asynchronous serialized “writes”: dispatch_barrier_async()
> 
> Example:
> // ...
>_someManagerQueue = dispatch_queue_create("SomeManager", 
> DISPATCH_QUEUE_CONCURRENT);
> // ...
> 
> And then:
> 
> - (id)getSomeArrayItem:(NSUInteger)index {
>   __block id importantObj = NULL;
>   dispatch_sync(_someManagerQueue, ^{
>     importantObj = [_importantArray objectAtIndex:index];
>   });
>   return importantObj;
> }
> - (void)removeSomeArrayItem:(id)object {
>   dispatch_barrier_async(_someManagerQueue, ^{
>     [_importantArray removeObject:object];
>   });
> }
> - (void)addSomeArrayItem:(id)object {
>   dispatch_barrier_async(_someManagerQueue, ^{
>     [_importantArray addObject:object];
>   });
> }
> 
> That way you ensure that whenever you read a piece of information (e.g. an array) all 
> the "changes" have been made or are "waiting". And every time you write a piece of 
> information, your program will not be blocked waiting for the operation to be 
> completed.
> 
> That way, if you use several threads, none will have to wait for another to get 
> any value unless one of them is "writing", which is the right thing to do.
> 
> How will this compose with actors? I see a lot of discussion about 
> using serial queues, and I also have not seen any mechanism similar to 
> dispatch_barrier_async being discussed here or in other threads.

Actors are serial and exclusive, so this concurrent queue thing is not relevant.
Also, in the QoS world, using reader-writer locks or private concurrent queues 
this way is not terribly great.
Lastly, for a simple writer like that you want dispatch_barrier_sync(), not async 
(async will create a thread, and it's terribly wasteful for so little work).

We covered these subtleties in this year's WWDC GCD session.

-Pierre

> 
> On Mon, Sep 4, 2017 at 08:20, Daniel Vollmer via swift-evolution 
> <swift-evolution@swift.org> wrote:
> Hello,
> 
> first off, I’m following this discussion with great interest, even though my 
> background (simulation software on HPC) has a different focus than the 
> “usual” paradigms Swift seeks to (primarily) address.
> 
> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution wrote:
> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit wrote:
> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit wrote:
> >>>
> >>> Is there a specific important use case for being able to target an actor 
> >>> to an existing queue?  Are you looking for advanced patterns where 
> >>> multiple actors (each providing disjoint mutable state) share an 
> >>> underlying queue? Would this be for performance reasons, for 
> >>> compatibility with existing code, or something else?
> >>
> >> Mostly for interaction with current designs where being on a given bottom 
> >> serial queue gives you the locking context for resources naturally 
> >> attached to it.
> >
> > Ok.  I don’t understand the use-case well enough to know how we should 
> > model this.  For example, is it important for an actor to be able to change 
> > its queue dynamically as it goes (something that sounds really scary to me) 
> > or can the “queue to use” be specified at actor initialization time?
> 
> I’m confused, but that may just be me misunderstanding things again. I’d 
> assume each actor has its own (serial) queue that is used to serialize its 
> messages, so the queue above refers to the queue used to actually process the 
> messages the actor receives, correct?
> 
> Sometimes it’d probably make sense (or even be required) to fix this to a 
> certain queue (in the thread(-pool?) sense), but at other times it may just make 
> sense to execute the messages in place on the sender if they don’t block, so 
> no context switch is incurred.
> 
> > One plausible way to model this is to say that it is a “multithreaded 
> > actor” of some sort, where the innards of the actor allow arbitrary number 
> > of client threads to call into it concurrently.  The onus would be on the 
> > implementor of the NIC or database to implement the proper synchronization 
> > on the mutable state within the actor.
> >>
> >> I think what you said made sense.
> >
> > Ok, I captured this in yet-another speculative section:
> > https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Pierre Habouzit via swift-evolution
> On Sep 4, 2017, at 10:36 AM, Chris Lattner via swift-evolution wrote:
> 
> On Sep 3, 2017, at 12:44 PM, Pierre Habouzit wrote:
>> My currently not very well formed opinion on this subject is that GCD 
>> queues are just what you need with these possibilities:
>> - this Actor queue can be targeted to other queues by the developer when 
>> he means for these actors to be executed in an existing execution context 
>> / locking domain,
>> - we disallow Actors to be directly targeted to GCD global concurrent 
>> queues ever
>> - for the other ones we create a new abstraction with stronger and 
>> better guarantees (typically limiting the number of possible threads 
>> servicing actors to a low number, not greater than NCPU).
> 
> Is there a specific important use case for being able to target an actor 
> to an existing queue?  Are you looking for advanced patterns where 
> multiple actors (each providing disjoint mutable state) share an 
> underlying queue? Would this be for performance reasons, for 
> compatibility with existing code, or something else?
 
 Mostly for interaction with current designs where being on a given bottom 
 serial queue gives you the locking context for resources naturally 
 attached to it.
>>> 
>>> Ok.  I don’t understand the use-case well enough to know how we should 
>>> model this.  For example, is it important for an actor to be able to change 
>>> its queue dynamically as it goes (something that sounds really scary to me) 
>>> or can the “queue to use” be specified at actor initialization time?
>> 
>> I think I need to read more on actors, because the same way you're not an OS 
>> runtime expert, I'm not (or rather no longer, I started down that path a 
>> lifetime ago) a language expert at all, and I feel like I need to understand 
>> your world better to try to explain this part better to you.
> 
> No worries.  Actually, after thinking about it a bit, I don’t think that 
> switching underlying queues at runtime is scary.
> 
> The important semantic invariant which must be maintained is that there is 
> only one thread executing within an actor context at a time.  Switching 
> around underlying queues (or even having multiple actors on the same queue) 
> shouldn’t be a problem.
> 
> OTOH, you don’t want an actor “listening” to two unrelated queues, because 
> there is nothing to synchronize between the queues, and you could have 
> multiple actor methods invoked at the same time: you lose the protection of a 
> single serial queue. 
> 
> The only concern I’d have with an actor switching queues at runtime is that 
> you don’t want a race condition where an item on QueueA goes to the actor, 
> then it switches to QueueB, then another item from QueueB runs while the 
> actor is already doing something for QueueA.
> 
> 
 I think what you said made sense.
>>> 
>>> Ok, I captured this in yet-another speculative section:
>>> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
>> Great. BTW I agree 100% with:
>> 
>> That said, this is definitely a power-user feature, and we should 
>> understand, build, and get experience using the basic system before 
>> considering adding something like this.
>> 
>> Private concurrent queues are not a success in dispatch and cause several 
>> issues, these queues are second class citizens in GCD in terms of feature 
>> they support, and building something with concurrency *within* is hard. I 
>> would keep it as "that's where we'll go some day" but not try to attempt it 
>> until we've built the simpler (or rather less hard) purely serial case first.
> 
> Right, I agree this is not important for the short term.  To clarify though, 
> I meant to indicate that these actors would be implemented completely 
> independently of dispatch, not that they’d build on private concurrent queues.
> 
> 
 Another problem I haven't touched either is kernel-issued events (inbound 
 IPC from other processes, networking events, etc...). Dispatch for the 
 longest time used an indirection through a manager thread for all such 
 events, and that had two major issues:
 
 - the thread hops it caused, causing networking workloads to utilize up to 
 15-20% more CPU time than an equivalent manually made pthread parked in 
 kevent(), because networking pace even when busy idles back all the time 
 as far as the CPU is concerned, so dispatch queues never stay hot, and the 
 context switch is not only a scheduled context switch but also has the 
 cost of a thread bring up
 
 - if you deliver all possible events this way you also deliver events that 
 cannot possibly make progress because the execution context that will 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Chris Lattner via swift-evolution
On Sep 3, 2017, at 12:44 PM, Pierre Habouzit  wrote:
> My currently not very well formed opinion on this subject is that GCD 
> queues are just what you need with these possibilities:
> - this Actor queue can be targeted to other queues by the developer when 
> he means for these actors to be executed in an existing execution context 
> / locking domain,
> - we disallow Actors to be directly targeted to GCD global concurrent 
> queues ever
> - for the other ones we create a new abstraction with stronger and better 
> guarantees (typically limiting the number of possible threads servicing 
> actors to a low number, not greater than NCPU).
 
 Is there a specific important use case for being able to target an actor 
 to an existing queue?  Are you looking for advanced patterns where 
 multiple actors (each providing disjoint mutable state) share an 
 underlying queue? Would this be for performance reasons, for compatibility 
 with existing code, or something else?
>>> 
>>> Mostly for interaction with current designs where being on a given bottom 
>>> serial queue gives you the locking context for resources naturally attached 
>>> to it.
>> 
>> Ok.  I don’t understand the use-case well enough to know how we should model 
>> this.  For example, is it important for an actor to be able to change its 
>> queue dynamically as it goes (something that sounds really scary to me) or 
>> can the “queue to use” be specified at actor initialization time?
> 
> I think I need to read more on actors, because the same way you're not an OS 
> runtime expert, I'm not (or rather no longer, I started down that path a 
> lifetime ago) a language expert at all, and I feel like I need to understand 
> your world better to try to explain this part better to you.

No worries.  Actually, after thinking about it a bit, I don’t think that 
switching underlying queues at runtime is scary.

The important semantic invariant which must be maintained is that there is only 
one thread executing within an actor context at a time.  Switching around 
underlying queues (or even having multiple actors on the same queue) shouldn’t 
be a problem.

OTOH, you don’t want an actor “listening” to two unrelated queues, because 
there is nothing to synchronize between the queues, and you could have multiple 
actor methods invoked at the same time: you lose the protection of a single 
serial queue. 

The only concern I’d have with an actor switching queues at runtime is that you 
don’t want a race condition where an item on QueueA goes to the actor, then it 
switches to QueueB, then another item from QueueB runs while the actor is 
already doing something for QueueA.


>>> I think what you said made sense.
>> 
>> Ok, I captured this in yet-another speculative section:
>> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
> Great. BTW I agree 100% with:
> 
> That said, this is definitely a power-user feature, and we should understand, 
> build, and get experience using the basic system before considering adding 
> something like this.
> 
> Private concurrent queues are not a success in dispatch and cause several 
> issues, these queues are second class citizens in GCD in terms of feature 
> they support, and building something with concurrency *within* is hard. I 
> would keep it as "that's where we'll go some day" but not try to attempt it 
> until we've built the simpler (or rather less hard) purely serial case first.

Right, I agree this is not important for the short term.  To clarify though, I 
meant to indicate that these actors would be implemented completely 
independently of dispatch, not that they’d build on private concurrent queues.


>>> Another problem I haven't touched either is kernel-issued events (inbound 
>>> IPC from other processes, networking events, etc...). Dispatch for the 
>>> longest time used an indirection through a manager thread for all such 
>>> events, and that had two major issues:
>>> 
>>> - the thread hops it caused, causing networking workloads to utilize up to 
>>> 15-20% more CPU time than an equivalent manually made pthread parked in 
>>> kevent(), because networking pace even when busy idles back all the time as 
>>> far as the CPU is concerned, so dispatch queues never stay hot, and the 
>>> context switch is not only a scheduled context switch but also has the cost 
>>> of a thread bring up
>>> 
>>> - if you deliver all possible events this way you also deliver events that 
>>> cannot possibly make progress because the execution context that will 
>>> handle them is already "locked" (as in busy running something else).
>>> 
>>> It took us several years to get to the point we presented at WWDC this year 
>>> where we deliver events directly to the right dispatch queue. If you only 
>>> have very 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Chris Lattner via swift-evolution

> On Sep 4, 2017, at 9:05 AM, Jean-Daniel  wrote:
> 
>>> Sometimes it’d probably make sense (or even be required) to fix this to a 
>>> certain queue (in the thread(-pool?) sense), but at other times it may just make 
>>> sense to execute the messages in place on the sender if they don’t block, so 
>>> no context switch is incurred.
>> 
>> Do you mean kernel context switch?  With well behaved actors, the runtime 
>> should be able to run work items from many different queues on the same 
>> kernel thread.  The “queue switch cost” is designed to be very very low.  
>> The key thing is that the runtime needs to know when work on a queue gets 
>> blocked so the kernel thread can move on to servicing some other queues work.
> 
> My understanding is that a kernel thread can’t move on to servicing a different 
> queue while a block is executing on it. The runtime already knows when a queue 
> is blocked, and the only way it has to mitigate the problem is to spawn 
> another kernel thread to serve the other queues. This is what causes the kernel 
> thread explosion.

I’m not sure what you mean by “executing on it”.  A work item that currently 
has a kernel thread can be doing one of two things: “executing work” (like 
number crunching) or “being blocked in the kernel on something that GCD doesn’t 
know about”. 

However, the whole point is that work items shouldn’t do this: as you say it 
causes thread explosions.  It is better for them to yield control back to GCD, 
which allows GCD to use the kernel thread for other queues, even though the 
original *queue* is blocked.

-Chris



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Jean-Daniel via swift-evolution

> On Sep 4, 2017, at 17:54, Chris Lattner via swift-evolution 
> <swift-evolution@swift.org> wrote:
> 
> On Sep 4, 2017, at 4:19 AM, Daniel Vollmer wrote:
>> 
>> Hello,
>> 
>> first off, I’m following this discussion with great interest, even though my 
>> background (simulation software on HPC) has a different focus than the 
>> “usual” paradigms Swift seeks to (primarily) address.
>> 
>>> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>>>  wrote:
 On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
> 
> Is there a specific important use case for being able to target an actor 
> to an existing queue?  Are you looking for advanced patterns where 
> multiple actors (each providing disjoint mutable state) share an 
> underlying queue? Would this be for performance reasons, for 
> compatibility with existing code, or something else?
 
 Mostly for interaction with current designs where being on a given bottom 
 serial queue gives you the locking context for resources naturally 
 attached to it.
>>> 
>>> Ok.  I don’t understand the use-case well enough to know how we should 
>>> model this.  For example, is it important for an actor to be able to change 
>>> its queue dynamically as it goes (something that sounds really scary to me) 
>>> or can the “queue to use” be specified at actor initialization time?
>> 
>> I’m confused, but that may just be me misunderstanding things again. I’d 
>> assume each actor has its own (serial) queue that is used to serialize its 
>> messages, so the queue above refers to the queue used to actually process 
>> the messages the actor receives, correct?
> 
> Right.
> 
>> Sometimes it’d probably make sense (or even be required) to fix this to a 
>> certain queue (in the thread(-pool?) sense), but at other times it may just make 
>> sense to execute the messages in place on the sender if they don’t block, so 
>> no context switch is incurred.
> 
> Do you mean kernel context switch?  With well behaved actors, the runtime 
> should be able to run work items from many different queues on the same 
> kernel thread.  The “queue switch cost” is designed to be very very low.  The 
> key thing is that the runtime needs to know when work on a queue gets blocked 
> so the kernel thread can move on to servicing some other queues work.

My understanding is that a kernel thread can’t move on to servicing a different 
queue while a block is executing on it. The runtime already knows when a queue 
is blocked, and the only way it has to mitigate the problem is to spawn 
another kernel thread to serve the other queues. This is what causes the kernel 
thread explosion.



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Chris Lattner via swift-evolution
On Sep 4, 2017, at 4:19 AM, Daniel Vollmer  wrote:
> 
> Hello,
> 
> first off, I’m following this discussion with great interest, even though my 
> background (simulation software on HPC) has a different focus than the 
> “usual” paradigms Swift seeks to (primarily) address.
> 
>> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>>  wrote:
>>> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
 On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
 
 Is there a specific important use case for being able to target an actor 
 to an existing queue?  Are you looking for advanced patterns where 
 multiple actors (each providing disjoint mutable state) share an 
 underlying queue? Would this be for performance reasons, for compatibility 
 with existing code, or something else?
>>> 
>>> Mostly for interaction with current designs where being on a given bottom 
>>> serial queue gives you the locking context for resources naturally attached 
>>> to it.
>> 
>> Ok.  I don’t understand the use-case well enough to know how we should model 
>> this.  For example, is it important for an actor to be able to change its 
>> queue dynamically as it goes (something that sounds really scary to me) or 
>> can the “queue to use” be specified at actor initialization time?
> 
> I’m confused, but that may just be me misunderstanding things again. I’d 
> assume each actor has its own (serial) queue that is used to serialize its 
> messages, so the queue above refers to the queue used to actually process the 
> messages the actor receives, correct?

Right.

> Sometimes it’d probably make sense (or even be required) to fix this to a 
> certain queue (in the thread(-pool?) sense), but at other times it may just make 
> sense to execute the messages in place on the sender if they don’t block, so 
> no context switch is incurred.

Do you mean kernel context switch?  With well behaved actors, the runtime 
should be able to run work items from many different queues on the same kernel 
thread.  The “queue switch cost” is designed to be very very low.  The key 
thing is that the runtime needs to know when work on a queue gets blocked so 
the kernel thread can move on to servicing some other queues work.

-Chris
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Gwendal Roué via swift-evolution

> On Sep 4, 2017, at 16:28, Wallacy via swift-evolution 
> <swift-evolution@swift.org> wrote:
> 
> Hello,
> 
> I have a little question about the actors.
> 
> On WWDC 2012 Session 712 one of the most important tips (for me at least) 
> was: Improve Performance with Reader-Writer Access
> 
> Basically:
> • Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
> • Use synchronous concurrent “reads”: dispatch_sync()
> • Use asynchronous serialized “writes”: dispatch_barrier_async()
> 
> [...]
> 
> With this will it be composed using actors? I see a lot of discussion about 
> using serial queues, and I also have not seen any mechanism similar to 
> dispatch_barrier_async being discussed here or in other threads.

I tend to believe that such read/write optimization could at least be 
implemented using the "Intra-actor concurrency" described by Chris Lattner at 
https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency

But you generally raise the question of reader vs. writer actor methods, which 
could be backed by dispatch_xxx/dispatch_barrier_xxx. I'm not sure it's as 
simple as mutating vs. non-mutating. For example, a non-mutating method can 
still cache the result of some expensive computation without breaking the 
non-mutating contract. Unless this cache is itself a read/write-safe actor, 
such a non-mutating method is not a real reader method.

That's a very interesting topic, Wallacy!

Gwendal



Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Wallacy via swift-evolution
Hello,

I have a little question about the actors.

On WWDC 2012 Session 712 one of the most important tips (for me at least)
was: Improve Performance with Reader-Writer Access

Basically:
• Use concurrent subsystem queue: DISPATCH_QUEUE_CONCURRENT
• Use synchronous concurrent “reads”: dispatch_sync()
• Use asynchronous serialized “writes”: dispatch_barrier_async()

Example:

// ...
   _someManagerQueue = dispatch_queue_create("SomeManager",
DISPATCH_QUEUE_CONCURRENT);
// ...


And then:

- (id)getSomeArrayItem:(NSUInteger)index {
  __block id importantObj = NULL;
  dispatch_sync(_someManagerQueue, ^{
    importantObj = [_importantArray objectAtIndex:index];
  });
  return importantObj;
}
- (void)removeSomeArrayItem:(id)object {
  dispatch_barrier_async(_someManagerQueue, ^{
    [_importantArray removeObject:object];
  });
}
- (void)addSomeArrayItem:(id)object {
  dispatch_barrier_async(_someManagerQueue, ^{
    [_importantArray addObject:object];
  });
}


That way you ensure that whenever you read a piece of information (e.g. an array)
all the "changes" have been made or are "waiting". And every time you write
a piece of information, your program will not be blocked waiting for the operation
to be completed.

That way, if you use several threads, none will have to wait for another to get
any value unless one of them is "writing", which is the right thing to do.

How will this compose with actors? I see a lot of discussion about
using serial queues, and I also have not seen any mechanism similar to
dispatch_barrier_async being discussed here or in other threads.

On Mon, Sep 4, 2017 at 08:20, Daniel Vollmer via swift-evolution <
swift-evolution@swift.org> wrote:

> Hello,
>
> first off, I’m following this discussion with great interest, even though
> my background (simulation software on HPC) has a different focus than the
> “usual” paradigms Swift seeks to (primarily) address.
>
> > On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution <
> swift-evolution@swift.org> wrote:
> >> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit 
> wrote:
> >>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit 
> wrote:
> >>>
> >>> Is there a specific important use case for being able to target an
> actor to an existing queue?  Are you looking for advanced patterns where
> multiple actors (each providing disjoint mutable state) share an underlying
> queue? Would this be for performance reasons, for compatibility with
> existing code, or something else?
> >>
> >> Mostly for interaction with current designs where being on a given
> bottom serial queue gives you the locking context for resources naturally
> attached to it.
> >
> > Ok.  I don’t understand the use-case well enough to know how we should
> model this.  For example, is it important for an actor to be able to change
> its queue dynamically as it goes (something that sounds really scary to me)
> or can the “queue to use” be specified at actor initialization time?
>
> I’m confused, but that may just be me misunderstanding things again. I’d
> assume each actor has its own (serial) queue that is used to serialize its
> messages, so the queue above refers to the queue used to actually process
> the messages the actor receives, correct?
>
> Sometimes it’d probably make sense (or even be required) to fix this to a
> certain queue (in the thread(-pool?) sense), but at other times it may just
> make sense to execute the messages in-place by the sender if they don’t
> block, so that no context switch is incurred.
>
> > One plausible way to model this is to say that it is a “multithreaded
> actor” of some sort, where the innards of the actor allow arbitrary number
> of client threads to call into it concurrently.  The onus would be on the
> implementor of the NIC or database to implement the proper synchronization
> on the mutable state within the actor.
> >>
> >> I think what you said made sense.
> >
> > Ok, I captured this in yet-another speculative section:
> >
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency
>
> This seems like an interesting extension (where the actor-internal serial
> queue is not used / bypassed).
>
>
> Daniel.
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-04 Thread Daniel Vollmer via swift-evolution
Hello,

first off, I’m following this discussion with great interest, even though my 
background (simulation software on HPC) has a different focus than the “usual” 
paradigms Swift seeks to (primarily) address.

> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution 
>  wrote:
>> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
>>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
>>> 
>>> Is there a specific important use case for being able to target an actor to 
>>> an existing queue?  Are you looking for advanced patterns where multiple 
>>> actors (each providing disjoint mutable state) share an underlying queue? 
>>> Would this be for performance reasons, for compatibility with existing 
>>> code, or something else?
>> 
>> Mostly for interaction with current designs where being on a given bottom 
>> serial queue gives you the locking context for resources naturally attached 
>> to it.
> 
> Ok.  I don’t understand the use-case well enough to know how we should model 
> this.  For example, is it important for an actor to be able to change its 
> queue dynamically as it goes (something that sounds really scary to me) or 
> can the “queue to use” be specified at actor initialization time?

I’m confused, but that may just be me misunderstanding things again. I’d assume 
each actor has its own (serial) queue that is used to serialize its messages, 
so the queue above refers to the queue used to actually process the messages 
the actor receives, correct?

Sometimes it’d probably make sense (or even be required) to fix this to a
certain queue (in the thread(-pool?) sense), but at other times it may just
make sense to execute the messages in-place by the sender if they don’t block,
so that no context switch is incurred.

> One plausible way to model this is to say that it is a “multithreaded actor” 
> of some sort, where the innards of the actor allow arbitrary number of client 
> threads to call into it concurrently.  The onus would be on the implementor 
> of the NIC or database to implement the proper synchronization on the mutable 
> state within the actor.
>> 
>> I think what you said made sense.
> 
> Ok, I captured this in yet-another speculative section:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency

This seems like an interesting extension (where the actor-internal serial queue 
is not used / bypassed).


Daniel.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-03 Thread Chris Lattner via swift-evolution
On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit > > wrote:
> What do you mean by this?
 
 My understanding is that GCD doesn’t currently scale to 1M concurrent 
 queues / tasks.
>>> 
>>> It completely does provided these 1M queues / tasks are organized on 
>>> several well known independent contexts.
>> 
>> Ok, I stand corrected.  My understanding was that you could run into 
>> situations where you get stack explosions, fragment your VM and run out of 
>> space, but perhaps that is a relic of 32-bit systems.
> 
> a queue on 64bit systems is 128 bytes (nowadays). Provided you have that 
> amount of VM available to you (1M queues is 128M after all) then you're good.
> If a large number of them fragments the VM beyond that, it's a malloc/VM bug
> on 64-bit systems, which are supposed to have enough address space.

Right, I was referring to the fragmentation you get from a large number of
2MB stacks, one allocated for each kernel thread.  I recognize the queues
themselves are small.

> What doesn't scale is asking for threads, not having queues.

Right, agreed.

>> Agreed, to be clear, I have no objection to building actors on top of 
>> (perhaps enhanced) GCD queues.  In fact I *hope* that this can work, since 
>> it leads to a naturally more incremental path forward, which is therefore 
>> much more likely to actually happen.
> 
> Good :)

I meant to be pretty clear about that all along, but perhaps I missed the mark. 
 In any case, I’ve significantly revised the “scalable runtime” section of the 
doc to reflect some of this discussion, please let me know what you think:
https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#scalable-runtime

> 
>>> My currently not very well formed opinion on this subject is that GCD 
>>> queues are just what you need with these possibilities:
> >>> - this Actor queue can be targeted to other queues by the developer when
> >>> they mean for the actor to be executed in an existing execution context /
>>> locking domain,
>>> - we disallow Actors to be directly targeted to GCD global concurrent 
>>> queues ever
>>> - for the other ones we create a new abstraction with stronger and better 
>>> guarantees (typically limiting the number of possible threads servicing 
>>> actors to a low number, not greater than NCPU).
>> 
>> Is there a specific important use case for being able to target an actor to 
>> an existing queue?  Are you looking for advanced patterns where multiple 
>> actors (each providing disjoint mutable state) share an underlying queue? 
>> Would this be for performance reasons, for compatibility with existing code, 
>> or something else?
> 
> Mostly for interaction with current designs where being on a given bottom 
> serial queue gives you the locking context for resources naturally attached 
> to it.

Ok.  I don’t understand the use-case well enough to know how we should model 
this.  For example, is it important for an actor to be able to change its queue 
dynamically as it goes (something that sounds really scary to me) or can the 
“queue to use” be specified at actor initialization time?

>> One plausible way to model this is to say that it is a “multithreaded actor” 
>> of some sort, where the innards of the actor allow arbitrary number of 
>> client threads to call into it concurrently.  The onus would be on the 
>> implementor of the NIC or database to implement the proper synchronization 
>> on the mutable state within the actor.
> 
> I think what you said made sense.

Ok, I captured this in yet-another speculative section:
https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782#intra-actor-concurrency


> But it wasn't what I meant. I was really thinking of sqlite, where the
> database is strongly serial (you can't use it well in a multi-threaded way,
> or rather you can, but it has a big lock inside). It is much better to
> interact with that dude on the same exclusion context all the time. What I
> meant is really having some actors that have a "strong affinity" with a given
> execution context, which eases the task of the actor scheduler.

Ah ok.  Yes, I think that wrapping a “subsystem with a big lock” in an actor is 
a very natural thing to do, just as much as it makes sense to wrap a 
non-threadsafe API in an actor.  Any internal locking would be subsumed by the 
outer actor queue, but that’s ok - all the lock acquires would be uncontended 
and fast :)
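As a rough illustration of wrapping such a "subsystem with a big lock" in an actor, here is a speculative sketch: `Database` is a hypothetical name, the `sqlite3_*` calls are the real C API as exposed on Apple platforms, and all error handling is elided.

```swift
import SQLite3

// Speculative sketch: sqlite is strongly serial, so funnel all access
// through a single actor. The actor's serial executor is the only caller,
// so sqlite's internal lock acquires stay uncontended and cheap.
actor Database {
    private var handle: OpaquePointer?

    init(path: String) {
        sqlite3_open(path, &handle)     // error handling elided
    }

    func execute(_ sql: String) -> Bool {
        sqlite3_exec(handle, sql, nil, nil, nil) == SQLITE_OK
    }

    deinit {
        sqlite3_close(handle)
    }
}
```

Every caller reaches the database via `await db.execute(...)`, which is exactly the "same exclusion context all the time" affinity described above.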

> Another problem I haven't touched either is kernel-issued events (inbound IPC 
> from other processes, networking events, etc...). Dispatch for the longest 
> time used an indirection through a manager thread for all such events, and 
> that had two major issues:
> 
> - the thread hops it caused, causing networking workloads to utilize up to 
> 15-20% more CPU time than an equivalent manually made pthread parked in 
> kevent(), because 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-03 Thread Pierre Habouzit via swift-evolution
[Sorry I hit send too fast, let me fix two spots I didn't correct]

> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit  wrote:
> 
> 
> -Pierre
> 
>> On Sep 2, 2017, at 9:59 PM, Chris Lattner > > wrote:
>> 
>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit > > wrote:
> What do you mean by this?
 
 My understanding is that GCD doesn’t currently scale to 1M concurrent 
 queues / tasks.
>>> 
>>> It completely does provided these 1M queues / tasks are organized on 
>>> several well known independent contexts.
>> 
>> Ok, I stand corrected.  My understanding was that you could run into 
>> situations where you get stack explosions, fragment your VM and run out of 
>> space, but perhaps that is a relic of 32-bit systems.
> 
> a queue on 64bit systems is 128 bytes (nowadays). Provided you have that 
> amount of VM available to you (1M queues is 128M after all) then you're good.
> If a large number of them fragments the VM beyond that, it's a malloc/VM bug
> on 64-bit systems, which are supposed to have enough address space.
> 
>> 
> queues are serial/exclusive execution contexts, and if you're not 
> modeling actors as being serial queues, then these two concepts are just 
> disjoint. 
 
 AFAICT, the “one queue per actor” model is the only one that makes sense.  
 It doesn’t have to be FIFO, but it needs to be some sort of queue.  If you 
 allow servicing multiple requests within the actor at a time, then you 
 lose the advantages of “no shared mutable state”.
>>> 
>>> I agree, I don't quite care about how the actor is implemented here, what I 
>>> care about is where it runs. My wording was poor; what I really meant 
>>> is:
>>> 
>>> queues at the bottom of a queue hierarchy are serial/exclusive execution 
>>> contexts, and if you're not modeling actors as being such fully independent 
>>> serial queues, then these two concepts are just disjoint.
>>> 
>>> In GCD there's a very big difference between the one queue at the root of 
>>> your graph (just above the thread pool) and any other that is within. The 
>>> number that doesn't scale is the number of the former contexts, not the 
>>> latter.
>> 
>> I’m sorry, but I still don’t understand what you’re getting at here.
> 
> What doesn't scale is asking for threads, not having queues.
> 
>> 
>>> The pushback I have here is that today Runloops and dispatch queues on 
>>> iOS/macOS are already systems that have huge impedance mismatches, and do 
>>> not share the resources either (in terms of OS physical threads). I would 
>>> hate for us to bring on ourselves the pain of creating a third completely 
>>> different system that is using another way to use threads. When these 3 
>>> worlds would interoperate this would cause significant amount of context 
>>> switches just to move across the boundaries.
>> 
>> Agreed, to be clear, I have no objection to building actors on top of 
>> (perhaps enhanced) GCD queues.  In fact I *hope* that this can work, since 
>> it leads to a naturally more incremental path forward, which is therefore 
>> much more likely to actually happen.
> 
> Good :)
> 
>>> I'd like to dive and debunk this "GCD doesn't scale" point, that I'd almost 
>>> call a myth (and I'm relatively unhappy to see these words in your proposal 
>>> TBH because they send the wrong message).
>> 
>> I’m happy to revise the proposal, please let me know what you think makes 
>> sense.
> 
> What doesn't scale is the way GCD asks for threads, which is what the global 
> concurrent queues abstract.
> The way it works (or rather limps along) is what we should not reproduce for 
> Swift.
> 
> What you can write in your proposal, and which is true, is: "GCD's current
> relationship with the system threads doesn't scale". It's below the queues
> that the scalability issues lie.
> Dave Z. explained it in a mail earlier today in very good words.
> 
>>> My currently not very well formed opinion on this subject is that GCD 
>>> queues are just what you need with these possibilities:
>>> - this Actor queue can be targeted to other queues by the developer when
>>> they mean for the actor to be executed in an existing execution context /
>>> locking domain,
>>> - we disallow Actors to be directly targeted to GCD global concurrent 
>>> queues ever
>>> - for the other ones we create a new abstraction with stronger and better 
>>> guarantees (typically limiting the number of possible threads servicing 
>>> actors to a low number, not greater than NCPU).
>> 
>> Is there a specific important use case for being able to target an actor to 
>> an existing queue?  Are you looking for advanced patterns where multiple 
>> actors (each providing disjoint mutable state) share an underlying queue? 
>> Would this be for performance reasons, for compatibility with existing code, 
>> or something else?
> 
> Mostly for interaction with current 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-03 Thread Pierre Habouzit via swift-evolution

-Pierre

> On Sep 2, 2017, at 9:59 PM, Chris Lattner  wrote:
> 
> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  > wrote:
 What do you mean by this?
>>> 
>>> My understanding is that GCD doesn’t currently scale to 1M concurrent 
>>> queues / tasks.
>> 
>> It completely does provided these 1M queues / tasks are organized on several 
>> well known independent contexts.
> 
> Ok, I stand corrected.  My understanding was that you could run into 
> situations where you get stack explosions, fragment your VM and run out of 
> space, but perhaps that is a relic of 32-bit systems.

a queue on 64bit systems is 128 bytes (nowadays). Provided you have that amount 
of VM available to you (1M queues is 128M after all) then you're good.
If a large number of them fragments the VM beyond that, it's a malloc/VM bug on
64-bit systems, which are supposed to have enough address space.

> 
 queues are serial/exclusive execution contexts, and if you're not modeling 
 actors as being serial queues, then these two concepts are just disjoint. 
>>> 
>>> AFAICT, the “one queue per actor” model is the only one that makes sense.  
>>> It doesn’t have to be FIFO, but it needs to be some sort of queue.  If you 
>>> allow servicing multiple requests within the actor at a time, then you lose 
>>> the advantages of “no shared mutable state”.
>> 
>> I agree, I don't quite care about how the actor is implemented here, what I 
>> care about is where it runs. My wording was poor; what I really meant 
>> is:
>> 
>> queues at the bottom of a queue hierarchy are serial/exclusive execution 
>> contexts, and if you're not modeling actors as being such fully independent 
>> serial queues, then these two concepts are just disjoint.
>> 
>> In GCD there's a very big difference between the one queue at the root of 
>> your graph (just above the thread pool) and any other that is within. The 
>> number that doesn't scale is the number of the former contexts, not the 
>> latter.
> 
> I’m sorry, but I still don’t understand what you’re getting at here.

What doesn't scale is asking for threads, not having queues.

> 
>> The pushback I have here is that today Runloops and dispatch queues on 
>> iOS/macOS are already systems that have huge impedance mismatches, and do 
>> not share the resources either (in terms of OS physical threads). I would 
>> hate for us to bring on ourselves the pain of creating a third completely 
>> different system that is using another way to use threads. When these 3 
>> worlds would interoperate this would cause significant amount of context 
>> switches just to move across the boundaries.
> 
> Agreed, to be clear, I have no objection to building actors on top of 
> (perhaps enhanced) GCD queues.  In fact I *hope* that this can work, since it 
> leads to a naturally more incremental path forward, which is therefore much 
> more likely to actually happen.

Good :)

>> I'd like to dive and debunk this "GCD doesn't scale" point, that I'd almost 
>> call a myth (and I'm relatively unhappy to see these words in your proposal 
>> TBH because they send the wrong message).
> 
> I’m happy to revise the proposal, please let me know what you think makes 
> sense.

What doesn't scale is the way GCD asks for threads, which is what the global 
concurrent queues abstract.
The way it works (or rather limps along) is what we should not reproduce for 
Swift.

What you can write in your proposal, and which is true, is: "GCD's current
relationship with the system threads doesn't scale". It's below the queues that
the scalability issues lie.
Dave Z. explained it in a mail earlier today in very good words.

>> My currently not very well formed opinion on this subject is that GCD queues 
>> are just what you need with these possibilities:
>> - this Actor queue can be targeted to other queues by the developer when
>> they mean for the actor to be executed in an existing execution context /
>> locking domain,
>> - we disallow Actors to be directly targeted to GCD global concurrent queues 
>> ever
>> - for the other ones we create a new abstraction with stronger and better 
>> guarantees (typically limiting the number of possible threads servicing 
>> actors to a low number, not greater than NCPU).
> 
> Is there a specific important use case for being able to target an actor to 
> an existing queue?  Are you looking for advanced patterns where multiple 
> actors (each providing disjoint mutable state) share an underlying queue? 
> Would this be for performance reasons, for compatibility with existing code, 
> or something else?

Mostly for interaction with current designs where being on a given bottom 
serial queue gives you the locking context for resources naturally attached to 
it.

> I don’t see a problem with disallowing actors on the global concurrent queues 
> in general, but I do think it makes sense to be able to provide an 
> abstraction for homing code on the main 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread Chris Lattner via swift-evolution
On Sep 2, 2017, at 12:19 PM, Pierre Habouzit  wrote:
>>> What do you mean by this?
>> 
>> My understanding is that GCD doesn’t currently scale to 1M concurrent queues 
>> / tasks.
> 
> It completely does provided these 1M queues / tasks are organized on several 
> well known independent contexts.

Ok, I stand corrected.  My understanding was that you could run into situations 
where you get stack explosions, fragment your VM and run out of space, but 
perhaps that is a relic of 32-bit systems.

>>> queues are serial/exclusive execution contexts, and if you're not modeling 
>>> actors as being serial queues, then these two concepts are just disjoint. 
>> 
>> AFAICT, the “one queue per actor” model is the only one that makes sense.  
>> It doesn’t have to be FIFO, but it needs to be some sort of queue.  If you 
>> allow servicing multiple requests within the actor at a time, then you lose 
>> the advantages of “no shared mutable state”.
> 
> I agree, I don't quite care about how the actor is implemented here, what I 
> care about is where it runs. My wording was poor; what I really meant is:
> 
> queues at the bottom of a queue hierarchy are serial/exclusive execution 
> contexts, and if you're not modeling actors as being such fully independent 
> serial queues, then these two concepts are just disjoint.
> 
> In GCD there's a very big difference between the one queue at the root of 
> your graph (just above the thread pool) and any other that is within. The 
> number that doesn't scale is the number of the former contexts, not the 
> latter.

I’m sorry, but I still don’t understand what you’re getting at here.

> The pushback I have here is that today Runloops and dispatch queues on 
> iOS/macOS are already systems that have huge impedance mismatches, and do not 
> share the resources either (in terms of OS physical threads). I would hate 
> for us to bring on ourselves the pain of creating a third completely 
> different system that is using another way to use threads. When these 3 
> worlds would interoperate this would cause significant amount of context 
> switches just to move across the boundaries.

Agreed, to be clear, I have no objection to building actors on top of (perhaps 
enhanced) GCD queues.  In fact I *hope* that this can work, since it leads to a 
naturally more incremental path forward, which is therefore much more likely to 
actually happen.

> I'd like to dive and debunk this "GCD doesn't scale" point, that I'd almost 
> call a myth (and I'm relatively unhappy to see these words in your proposal 
> TBH because they send the wrong message).

I’m happy to revise the proposal, please let me know what you think makes sense.

> My currently not very well formed opinion on this subject is that GCD queues 
> are just what you need with these possibilities:
> - this Actor queue can be targeted to other queues by the developer when
> they mean for the actor to be executed in an existing execution context /
> locking domain,
> - we disallow Actors to be directly targeted to GCD global concurrent queues 
> ever
> - for the other ones we create a new abstraction with stronger and better 
> guarantees (typically limiting the number of possible threads servicing 
> actors to a low number, not greater than NCPU).

Is there a specific important use case for being able to target an actor to an 
existing queue?  Are you looking for advanced patterns where multiple actors 
(each providing disjoint mutable state) share an underlying queue? Would this 
be for performance reasons, for compatibility with existing code, or something 
else?

I don’t see a problem with disallowing actors on the global concurrent queues 
in general, but I do think it makes sense to be able to provide an abstraction 
for homing code on the main thread/queue/actor somehow. 

> I think this aligns with your idea, in the sense that if you exhaust the 
> Swift Actor Thread Pool, then you're screwed forever. But given that the 
> pattern above can be hidden inside framework code that the developer has *no 
> control over*, it is fairly easy to write actors that eventually through the 
> said framework, would result in this synchronization pattern happening. Even 
> if we can build the amazing debugging tools that make these immediately 
> obvious to the developer (as in understanding what is happening), I don't 
> know how the developer can do *anything* to work around these. The only 
> solution is to fix the frameworks. However the experience of the last few 
> years of maintaining GCD shows that the patterns above are not widely 
> perceived as a dramatic design issue, let alone a bug. It will be a very long
> road before most of the framework code out there is Swift actor async/await
> safe.
> 
> What is your proposal to address this? that we annotate functions that are 
> unsafe? And then, assuming we succeed at this Herculean task, what can 
> developers do anyway about it if the only way to do a 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread Pierre Habouzit via swift-evolution
> On Sep 2, 2017, at 2:19 PM, Charles Srstka via swift-evolution 
>  wrote:
> 
>> On Sep 2, 2017, at 4:05 PM, David Zarzycki via swift-evolution 
>> > wrote:
>> 
>>> On Sep 2, 2017, at 14:15, Chris Lattner via swift-evolution 
>>> > wrote:
>>> 
>>> My understanding is that GCD doesn’t currently scale to 1M concurrent 
>>> queues / tasks.
>> 
>> Hi Chris!
>> 
>> [As a preface, I’ve only read a few of these concurrency related emails on 
>> swift-evolution, so please forgive me if I missed something.]
>> 
>> When it comes to GCD scalability, the short answer is that millions of 
>> tiny heap allocations are cheap, be they queues or closures. And GCD has 
>> fairly linear performance so long as the millions of closures/queues are 
>> non-blocking.
>> 
>> The real world is far messier though. In practice, real world code blocks 
>> all of the time. In the case of GCD tasks, this is often tolerable for most 
>> apps, because their CPU usage is bursty and any accidental “thread 
>> explosion” that is created is super temporary. That being said, programs 
>> that create thousands of queues/closures that block on I/O will naturally 
>> get thousands of threads. GCD is efficient but not magic.
>> 
>> As an aside, there are things that future versions of GCD could do to 
>> minimize the “thread explosion” problem. For example, if GCD interposed the 
>> system call layer, it would gain visibility into *why* threads are stalled 
>> and therefore GCD could 1) be more conservative about when to fire up more 
>> worker threads and 2) defer resuming threads that are at “safe” stopping 
>> points if all of the CPUs are busy.
>> 
>> That being done though, the complaining would just shift. Instead of an 
>> “explosion of threads”, people would complain about an “explosion of stacks" 
>> that consume memory and address space. While I and others have argued in the 
>> past that solving this means that frameworks must embrace callback API 
>> design patterns, I personally am no longer of this opinion. As I see it, I 
>> don’t think the complexity (and bugs) of heavy async/callback/coroutine 
>> designs are worth the memory savings. Said differently, the stack is simple 
>> and efficient. Why fight it?
>> 
>> I think the real problem is that programmers cannot pretend that resources 
>> are infinite. For example, if one implements a photo library browsing app, 
>> it would be naive to try and load every image at launch (async or 
>> otherwise). That just won’t scale and that isn’t the operating system's 
>> fault.
> 
> Problems like thread explosion can be solved using higher-level constructs, 
> though. For example, (NS)OperationQueue has a .maxConcurrentOperationCount 
> property. If you make a global OperationQueue, set the maximum to whatever 
> you want it to be, and run all your “primitive” operations through the queue, 
> you can manage the thread count rather effectively.
> 
> I have a few custom Operation subclasses that easily wrap arbitrary 
> asynchronous operations as Operation objects; once the new async/await API 
> comes out, I plan to adapt my subclass to support it, and I’d be happy to 
> submit the code to swift-evolution if people are interested.

NSOperation has several implementation issues, and using it to encapsulate
asynchronous work means that you don't get the correct priorities (I don't say
it can't be fixed, I honestly don't know; I just know from the mouth of the
maintainer that NSOperation only makes guarantees if you do all your work from
-[NSOperation main]).

Second, what Dave is saying is exactly the opposite of what you just wrote. If
you use NSOperationQueue's maximum-concurrency knob and you throw an infinite
amount of work at it, *sure*, the thread explosion will be fixed, but:
- cancellation is a problem
- scalability is a problem
- memory growth is a problem.

The better design is to have a system that works like this:

(1) have a scheduler that knows how many operations are in flight and admits
two watermarks, "low" and "high".
(2) when you submit work to the scheduler, it tells you whether it can take it;
if it can't, then it's up to you to serialize the work somewhere "for later" or
to propagate the error to your client.
(3) the scheduler admits up to "high" work items and, as they finish, once you
reach "low", you use some notification mechanism to feed it again (and possibly
fetch the deferred work from the database).

This is how any OS construct with bounded resources works (network sockets,
file descriptors, ...); the notification mechanism that tells you these are
writable again is select/poll/epoll/kevent/... you name it.
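The three steps above can be sketched as follows. This is a minimal illustration under assumptions of my own (the class name, the two-watermark fields, and the drain callback are all hypothetical; only NSLock and DispatchQueue are real Foundation/Dispatch API), not an actual GCD or Swift runtime facility.

```swift
import Foundation

// Sketch of the two-watermark admission scheme described above.
final class AdmissionScheduler {
    private let low: Int
    private let high: Int
    private var inFlight = 0
    private let lock = NSLock()

    // (3) the "feed me again" notification, fired on draining back to `low`.
    var onDrainToLow: (() -> Void)?

    init(low: Int, high: Int) {
        self.low = low
        self.high = high
    }

    // (2) returns false when saturated; the caller must park the work
    // "for later" or propagate the error to its client.
    func trySubmit(_ work: @escaping () -> Void) -> Bool {
        lock.lock()
        guard inFlight < high else {    // (1)/(3): admit at most `high`
            lock.unlock()
            return false
        }
        inFlight += 1
        lock.unlock()
        DispatchQueue.global().async {
            work()
            self.finish()
        }
        return true
    }

    private func finish() {
        lock.lock()
        inFlight -= 1
        let drained = (inFlight == low)
        lock.unlock()
        if drained { onDrainToLow?() }  // refill, e.g. from the database
    }
}
```

The point of the hysteresis between "low" and "high" is that the (relatively expensive) computation of what to run next happens in batches, not once per completed item.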

By doing it this way, you can actually write smarter policies about what to run
next, because computing what you should do next is usually relatively
expensive, especially if work comes in all the time and your decision can
go

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread Charles Srstka via swift-evolution
> On Sep 2, 2017, at 4:05 PM, David Zarzycki via swift-evolution 
>  wrote:
> 
>> On Sep 2, 2017, at 14:15, Chris Lattner via swift-evolution 
>> > wrote:
>> 
>> My understanding is that GCD doesn’t currently scale to 1M concurrent queues 
>> / tasks.
> 
> Hi Chris!
> 
> [As a preface, I’ve only read a few of these concurrency related emails on 
> swift-evolution, so please forgive me if I missed something.]
> 
> When it comes to GCD scalability, the short answer is that millions of 
> tiny heap allocations are cheap, be they queues or closures. And GCD has 
> fairly linear performance so long as the millions of closures/queues are 
> non-blocking.
> 
> The real world is far messier though. In practice, real world code blocks all 
> of the time. In the case of GCD tasks, this is often tolerable for most apps, 
> because their CPU usage is bursty and any accidental “thread explosion” that 
> is created is super temporary. That being said, programs that create 
> thousands of queues/closures that block on I/O will naturally get thousands 
> of threads. GCD is efficient but not magic.
> 
> As an aside, there are things that future versions of GCD could do to 
> minimize the “thread explosion” problem. For example, if GCD interposed the 
> system call layer, it would gain visibility into *why* threads are stalled 
> and therefore GCD could 1) be more conservative about when to fire up more 
> worker threads and 2) defer resuming threads that are at “safe” stopping 
> points if all of the CPUs are busy.
> 
> That being done though, the complaining would just shift. Instead of an 
> “explosion of threads”, people would complain about an “explosion of stacks" 
> that consume memory and address space. While I and others have argued in the 
> past that solving this means that frameworks must embrace callback API design 
> patterns, I personally am no longer of this opinion. As I see it, I don’t 
> think the complexity (and bugs) of heavy async/callback/coroutine designs are 
> worth the memory savings. Said differently, the stack is simple and 
> efficient. Why fight it?
> 
> I think the real problem is that programmers cannot pretend that resources 
> are infinite. For example, if one implements a photo library browsing app, it 
> would be naive to try and load every image at launch (async or otherwise). 
> That just won’t scale and that isn’t the operating system's fault.

Problems like thread explosion can be solved using higher-level constructs, 
though. For example, (NS)OperationQueue has a .maxConcurrentOperationCount 
property. If you make a global OperationQueue, set the maximum to whatever you 
want it to be, and run all your “primitive” operations through the queue, you 
can manage the thread count rather effectively.
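
A minimal sketch of that setup (the queue name, cap value, and workload are illustrative, not from Charles's actual subclasses):

```swift
import Foundation

// One shared OperationQueue with a capped maxConcurrentOperationCount,
// so that many queued operations never fan out into many simultaneous
// threads. The cap of 4 here is an arbitrary example value.
let workQueue = OperationQueue()
workQueue.maxConcurrentOperationCount = 4

let lock = NSLock()
var completed = 0

for _ in 0..<100 {
    workQueue.addOperation {
        // Simulated "primitive" unit of work; a real app would run its
        // I/O or computation here. The lock guards the shared counter.
        lock.lock()
        completed += 1
        lock.unlock()
    }
}

workQueue.waitUntilAllOperationsAreFinished()
print(completed) // 100
```

At most 4 operations execute at once regardless of how many are enqueued, which is exactly the thread-count management described above.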

I have a few custom Operation subclasses that easily wrap arbitrary 
asynchronous operations as Operation objects; once the new async/await API 
comes out, I plan to adapt my subclass to support it, and I’d be happy to 
submit the code to swift-evolution if people are interested.

Charles

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread David Zarzycki via swift-evolution

> On Sep 2, 2017, at 14:15, Chris Lattner via swift-evolution 
>  wrote:
> 
> My understanding is that GCD doesn’t currently scale to 1M concurrent queues 
> / tasks.

Hi Chris!

[As a preface, I’ve only read a few of these concurrency related emails on 
swift-evolution, so please forgive me if I missed something.]

When it comes to GCD scalability, the short answer is that millions of tiny 
heap allocations are cheap, be they queues or closures. And GCD has fairly 
linear performance so long as the millions of closures/queues are non-blocking.

The real world is far messier though. In practice, real world code blocks all 
of the time. In the case of GCD tasks, this is often tolerable for most apps, 
because their CPU usage is bursty and any accidental “thread explosion” that is 
created is super temporary. That being said, programs that create thousands of 
queues/closures that block on I/O will naturally get thousands of threads. GCD 
is efficient but not magic.

As an aside, there are things that future versions of GCD could do to minimize 
the “thread explosion” problem. For example, if GCD interposed the system call 
layer, it would gain visibility into *why* threads are stalled and therefore 
GCD could 1) be more conservative about when to fire up more worker threads and 
2) defer resuming threads that are at “safe” stopping points if all of the CPUs 
are busy.

That being done though, the complaining would just shift. Instead of an 
“explosion of threads”, people would complain about an “explosion of stacks” 
that consume memory and address space. While I and others have argued in the 
past that solving this means that frameworks must embrace callback API design 
patterns, I personally am no longer of this opinion. As I see it, I don’t think 
the complexity (and bugs) of heavy async/callback/coroutine designs are worth 
the memory savings. Said differently, the stack is simple and efficient. Why 
fight it?

I think the real problem is that programmers cannot pretend that resources are 
infinite. For example, if one implements a photo library browsing app, it would 
be naive to try and load every image at launch (async or otherwise). That just 
won’t scale and that isn’t the operating system's fault.

Dave


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread John McCall via swift-evolution

> On Sep 2, 2017, at 3:19 PM, Pierre Habouzit via swift-evolution 
>  wrote:
> 
>> On Sep 2, 2017, at 11:15 AM, Chris Lattner > > wrote:
>> 
>> On Aug 31, 2017, at 7:24 PM, Pierre Habouzit > > wrote:
>>> 
>>> I failed at finding the initial mail and am quite late to the party of 
>>> commenters, but there are parts I don't understand or have questions about.
>>> 
>>> Scalable Runtime
>>> 
>>> [...]
>>> 
>>> The one problem I anticipate with GCD is that it doesn't scale well enough: 
>>> server developers in particular will want to instantiate hundreds of 
>>> thousands of actors in their application, at least one for every incoming 
>>> network connection. The programming model is substantially harmed when you 
>>> have to be afraid of creating too many actors: you have to start 
>>> aggregating logically distinct stuff together to reduce # queues, which 
>>> leads to complexity and loses some of the advantages of data isolation.
>>> 
>>> 
>>> What do you mean by this?
>> 
>> My understanding is that GCD doesn’t currently scale to 1M concurrent queues 
>> / tasks.
> 
> It completely does, provided these 1M queues / tasks are organized into 
> several well-known independent contexts.
> The place where GCD "fails" is when you target your individual serial 
> queues at the global concurrent queues (a.k.a. root queues), which means 
> "please, pool, do your job": then yes, it doesn't scale, because we take 
> these individual serial queues as proxies for OS threads.
> 
> If however you target these queues to either:
> - new serial queues to segregate your actors per subsystem yourself
> - or some more constrained pool than what the current GCD runtime offers 
> (where we don't create threads to run your work nearly as eagerly)
> 
> Then I don't see why the current implementation of GCD wouldn't scale.

More importantly, the basic interface of GCD doesn't seem to prevent an 
implementation from scaling to fill the resource constraints of a machine.   
The interface to dispatch queues does not imply any substantial persistent 
state besides the task queue itself, and tasks have pretty minimal quiescent 
storage requirements.  Queue-hopping is an unfortunate overhead, but a 
constant-time overhead doesn't really damage scalability and can be addressed 
without a major overhaul of the basic runtime interface.  OS threads can be 
blocked by tasks, but that's not a Dispatch-specific problem, and any solution 
that would fix it in other runtimes would equally fix it in Dispatch.

Now, an arbitrarily-scaling concurrent system is probably a system that's 
destined to eventually become distributed, and there's a strong argument that 
unbounded queues are an architectural mistake in a distributed system: instead, 
every channel of communication should have an opportunity to refuse further 
work, and the entire system should be architected to handle such failures 
gracefully.  But I think that can be implemented reasonably on top of a runtime 
where most local queues are still unbounded and "optimistic".
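
As a rough illustration of what "an opportunity to refuse further work" could mean (this is not from the proposal; the type and names are invented for the sketch), a bounded mailbox might look like:

```swift
// Illustrative only: a bounded mailbox that refuses new work when full,
// instead of queueing unboundedly like an "optimistic" local queue.
struct BoundedMailbox<Message> {
    private var buffer: [Message] = []
    let capacity: Int

    init(capacity: Int) { self.capacity = capacity }

    /// Returns false when the channel is full and refuses further work,
    /// giving the sender a chance to back off or fail gracefully.
    mutating func offer(_ message: Message) -> Bool {
        guard buffer.count < capacity else { return false }
        buffer.append(message)
        return true
    }

    /// Removes and returns the oldest message, or nil when empty.
    mutating func take() -> Message? {
        buffer.isEmpty ? nil : buffer.removeFirst()
    }
}

var box = BoundedMailbox<String>(capacity: 2)
print(box.offer("a"), box.offer("b"), box.offer("c")) // true true false
```

A real system would surface the `false` case as backpressure up the call chain rather than silently dropping the message.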

John.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread Pierre Habouzit via swift-evolution
> On Sep 2, 2017, at 11:15 AM, Chris Lattner  wrote:
> 
> On Aug 31, 2017, at 7:24 PM, Pierre Habouzit  > wrote:
>> 
>> I failed at finding the initial mail and am quite late to the party of 
>> commenters, but there are parts I don't understand or have questions about.
>> 
>> Scalable Runtime
>> 
>> [...]
>> 
>> The one problem I anticipate with GCD is that it doesn't scale well enough: 
>> server developers in particular will want to instantiate hundreds of 
>> thousands of actors in their application, at least one for every incoming 
>> network connection. The programming model is substantially harmed when you 
>> have to be afraid of creating too many actors: you have to start aggregating 
>> logically distinct stuff together to reduce # queues, which leads to 
>> complexity and loses some of the advantages of data isolation.
>> 
>> 
>> What do you mean by this?
> 
> My understanding is that GCD doesn’t currently scale to 1M concurrent queues 
> / tasks.

It completely does, provided these 1M queues / tasks are organized into several 
well-known independent contexts.
The place where GCD "fails" is when you target your individual serial queues at 
the global concurrent queues (a.k.a. root queues), which means "please, pool, do 
your job": then yes, it doesn't scale, because we take these individual serial 
queues as proxies for OS threads.

If however you target these queues to either:
- new serial queues to segregate your actors per subsystem yourself
- or some more constrained pool than what the current GCD runtime offers (where 
we don't create threads to run your work nearly as eagerly)

Then I don't see why the current implementation of GCD wouldn't scale.
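
A minimal sketch of the first option above, with illustrative labels: many per-connection serial queues all target one "subsystem" serial queue, so the runtime sees a single bottom-level context instead of thousands.

```swift
import Dispatch

// Sketch of segregating actors per subsystem: per-connection serial
// queues that all target one subsystem serial queue. Only the target
// queue is a bottom-level context; the per-connection queues are cheap.
// Labels and counts are illustrative.
let networkSubsystem = DispatchQueue(label: "example.subsystem.network")

let connectionQueues = (0..<1000).map { i in
    DispatchQueue(label: "example.connection.\(i)", target: networkSubsystem)
}

let group = DispatchGroup()
var completed = 0 // safe: every write funnels through networkSubsystem

for queue in connectionQueues {
    queue.async(group: group) { completed += 1 }
}
group.wait()
print(completed) // 1000
```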

> 
>> queues are serial/exclusive execution contexts, and if you're not modeling 
>> actors as being serial queues, then these two concepts are just disjoint. 
> 
> AFAICT, the “one queue per actor” model is the only one that makes sense.  It 
> doesn’t have to be FIFO, but it needs to be some sort of queue.  If you allow 
> servicing multiple requests within the actor at a time, then you lose the 
> advantages of “no shared mutable state”.

I agree. I don't quite care about how the actor is implemented here; what I 
care about is where it runs. My wording was poor; what I really meant is:

queues at the bottom of a queue hierarchy are serial/exclusive execution 
contexts, and if you're not modeling actors as being such fully independent 
serial queues, then these two concepts are just disjoint.

In GCD there's a very big difference between the one queue at the root of your 
graph (just above the thread pool) and any other that is within. The number 
that doesn't scale is the number of the former contexts, not the latter.

The pushback I have here is that today Runloops and dispatch queues on 
iOS/macOS are already systems that have huge impedance mismatches, and do not 
share the resources either (in terms of OS physical threads). I would hate for 
us to bring on ourselves the pain of creating a third, completely different 
system that uses threads in yet another way. When these 3 worlds interoperate, 
it would cause a significant number of context switches just to move across the 
boundaries.

"GCD doesn't scale, so let's build something new" will only create pain. We need 
a way for actors to inherently run on a thread pool that is shared with 
dispatch, that dispatch can reason about (and vice versa), and where the Swift 
runtime gives GCD enough information to execute the right work at the right 
time.

I'd like to dive into and debunk this "GCD doesn't scale" point, which I'd 
almost call a myth (and I'm relatively unhappy to see these words in your 
proposal, TBH, because they send the wrong message).

Way before I started working on it, probably to ease adoption, the decision was 
made that it was ok to write code such as this and have it run without problems 
(FSVO without problems):

dispatch_queue_t q = ...;
dispatch_semaphore_t sema = dispatch_semaphore_create(0);
dispatch_async(q, ^{ dispatch_semaphore_signal(sema); });
dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);


To accommodate this, when the caller of this code blocks a worker thread, the 
kernel will notice that your level of concurrency dropped and will bring up a 
new thread for you. This thread will likely be the one that picks up `q`, which 
got woken up by this async, and will unblock the caller.

If you never write such horrible code, then GCD scales *just fine*. The real 
problem is that if you go async you need to be async all the way. Node.js and 
other similar projects have understood that a very long time ago. If you 
express dependencies between asynchronous execution context with a blocking 
relationship such as above, then you're just committing performance suicide. 
GCD handles this by adding more threads and overcommitting the system, my 
understanding is that 
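
For contrast, here is a sketch of the "async all the way" shape of the same dependency: the caller hands over a continuation instead of parking a thread on a semaphore. Function and label names are illustrative.

```swift
import Dispatch

// Non-blocking counterpart to the semaphore example above: instead of
// blocking the caller until `q` runs, pass a completion handler and
// return immediately. No worker thread is ever parked waiting.
func compute(on q: DispatchQueue, completion: @escaping (Int) -> Void) {
    q.async {
        completion(42) // delivered asynchronously on q
    }
}

let worker = DispatchQueue(label: "example.worker")
let done = DispatchGroup()
var result = 0

done.enter()
compute(on: worker) { value in
    result = value
    done.leave()
}
done.wait() // only here, at the outermost edge, do we synchronize
print(result) // 42
```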

Re: [swift-evolution] [Concurrency] async/await + actors

2017-09-02 Thread Chris Lattner via swift-evolution
On Aug 31, 2017, at 7:24 PM, Pierre Habouzit  wrote:
> 
> I failed at finding the initial mail and am quite late to the party of 
> commenters, but there are parts I don't understand or have questions about.
> 
> Scalable Runtime
> 
> [...]
> 
> The one problem I anticipate with GCD is that it doesn't scale well enough: 
> server developers in particular will want to instantiate hundreds of 
> thousands of actors in their application, at least one for every incoming 
> network connection. The programming model is substantially harmed when you 
> have to be afraid of creating too many actors: you have to start aggregating 
> logically distinct stuff together to reduce # queues, which leads to 
> complexity and loses some of the advantages of data isolation.
> 
> 
> What do you mean by this?

My understanding is that GCD doesn’t currently scale to 1M concurrent queues / 
tasks.

> queues are serial/exclusive execution contexts, and if you're not modeling 
> actors as being serial queues, then these two concepts are just disjoint.

AFAICT, the “one queue per actor” model is the only one that makes sense.  It 
doesn’t have to be FIFO, but it needs to be some sort of queue.  If you allow 
servicing multiple requests within the actor at a time, then you lose the 
advantages of “no shared mutable state”.

> Actors are the way you present the various tasks/operations/activities that 
> you schedule. These contexts are a way for the developer to explain which 
> things are related in a consistent system, and give them access to state 
> which is local to this context (whether it's TSD for threads, or queue 
> specific data, or any similar context),

Just MHO, but I don’t think you’d need or want the concept of “actor local 
data” in the sense of TLS (e.g. __thread).  All actor methods have a ‘self’ 
already, and having something like TLS strongly encourages breaking the model.  
To me, the motivation for TLS is to provide an easier way to migrate 
single-threaded global variables, when introducing threading into legacy code.

This is not a problem we need or want to solve, given programmers would be 
rewriting their algorithm anyway to get it into the actor model.

> IMO, Swift as a runtime should define what an execution context is, and be 
> relatively oblivious of which context it is exactly as long it presents a few 
> common capabilities:
> - possibility to schedule work (async)
> - have a name
> - be an exclusion context
> - is an entity the kernel can reason about (if you want to be serious about 
> any integration on a real operating system with priority inheritance and 
> complex issues like this, which it is the OS's responsibility to handle, not 
> the language's)
> - ...
> 
> In that sense, whether your execution context is:
> - a dispatch serial queue
> - a CFRunloop
> - a libev/libevent/... event loop
> - your own hand rolled event loop

Generalizing the approach is completely possible, but it is also possible to 
introduce a language abstraction that is “underneath” the high level event 
loops.  That’s what I’m proposing.

> 
> Design sketch for interprocess and distributed compute
> 
> [...]
> 
> One of these principles is the concept of progressive disclosure of 
> complexity: a Swift developer shouldn't have to worry about IPC or 
> distributed compute if they don't care about it.
> 
> 
> While I agree with the sentiment, I don't think that anything useful can be 
> done without "distributed" computation. I like the loadResourceFromTheWeb 
> example, as we have something like this on our platform, which is the 
> NSURLSession APIs, or the CloudKit API Surface, that are about fetching some 
> resource from a server (URL or CloudKit database records). However, they 
> don't have a single result, they have:
> 
> - progress notification callbacks
> - broken down notifications for the results (e.g. headers first and body 
> second, or per-record for CloudKit operations)
> - various levels of error reporting.

I don’t understand the concern about this.  If you want low level control like 
this, it is quite easy to express that.  However, it is also quite common to 
just want to say “load a URL with this name”, which is super easy and awesome 
with async/await.

> I expect most developers will have to use such a construct, and for these, 
> having a single async pivot in your code that essentially fully serializes 
> your state machine on getting a full result from the previous step to be 
> lacking.

Agreed, the examples are not trying to show that.  It is perfectly fine to pass 
in additional callbacks (or delegates, etc) to async methods, which would be a 
natural way to express this… just like the current APIs do.
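
A minimal sketch of that shape (names and signatures are hypothetical, not from any existing API): an async function that returns a final value while also reporting broken-down progress through an ordinary callback.

```swift
// Illustrative only: the final result comes back via async/await, while
// intermediate notifications are delivered through a plain callback
// passed in by the caller, just like current delegate/callback APIs.
func fetchRecords(matching query: String,
                  onProgress: (Double) -> Void) async -> [String] {
    for step in 1...4 {
        onProgress(Double(step) / 4.0) // per-step progress notification
    }
    return ["record-for-\(query)"]
}
```

The single `await` pivot then only marks where the *final* result is needed; progress and partial results flow through the callback as they arrive.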

> Delivering all these notifications on the context of the initiator would be 
> quite inefficient as clearly there are in my example above two very different 
> contexts, and having to hop through one to reach 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-31 Thread Pierre Habouzit via swift-evolution
I failed at finding the initial mail and am quite late to the party of 
commenters, but there are parts I don't understand or have questions about.

Scalable Runtime

[...]

The one problem I anticipate with GCD is that it doesn't scale well enough: 
server developers in particular will want to instantiate hundreds of thousands 
of actors in their application, at least one for every incoming network 
connection. The programming model is substantially harmed when you have to be 
afraid of creating too many actors: you have to start aggregating logically 
distinct stuff together to reduce # queues, which leads to complexity and loses 
some of the advantages of data isolation.


What do you mean by this? Queues are serial/exclusive execution contexts, and 
if you're not modeling actors as being serial queues, then these two concepts 
are just disjoint. The former (queues) represent where the code runs 
physically, gives you some level of scheduling, possibly prioritization, and 
the context is the entity that is known to the kernel so that when you need 
synchronization between two execution context (because despite your best 
intentions there is global mutable state on the system that Swift uses all the 
time whether it's through frameworks, malloc or simply any syscall), it can 
resolve priority inversions and do smart things to schedule these contexts.

Actors are the way you present the various tasks/operations/activities that you 
schedule. These contexts are a way for the developer to explain which things 
are related in a consistent system, and give them access to state which is 
local to this context (whether it's TSD for threads, or queue specific data, or 
any similar context), which is data that is not shared horizontally (across 
several concurrent execution contexts) but vertically (across all the hierarchy 
of actors/work items/... that you schedule on these execution contexts, hence 
require no locks and are "good" for the system).

GCD is trying to be a very efficient way to communicate and message between 
execution contexts that you know about and that represent your software 
architecture in your product/app/server. Using queues for anything else will 
indeed scale poorly.

IMO, Swift as a runtime should define what an execution context is, and be 
relatively oblivious of which context it is exactly as long it presents a few 
common capabilities:
- possibility to schedule work (async)
- have a name
- be an exclusion context
- is an entity the kernel can reason about (if you want to be serious about any 
integration on a real operating system with priority inheritance and complex 
issues like this, which it is the OS's responsibility to handle, not the 
language's)
- ...

In that sense, whether your execution context is:
- a dispatch serial queue
- a CFRunloop
- a libev/libevent/... event loop
- your own hand rolled event loop

Then this is fine, this is something where Swift could enqueue its own 
"schedule Swift closures on this context" at the very least, and for the ones 
that have native integration do smarter things (I'd expect runloops or 
libdispatch to be such better integrated citizens given that they're part of 
the same umbrella ;p). If you layer the runtime this way, then I don't see how 
GCD can be a hindrance, it's just one of the several execution contexts that 
can host Actors.

While mentioning this, I've seen many people complain that 
dispatch_get_current_queue() is deprecated. It is so for tons of valid reasons, 
it's too sharp an API to use for developers, but as part of integrating with 
the swift runtime, having a "please give me a reference on the current 
execution context" is trivially implementable when we know what the Swift 
runtime will do with it and has a reasonable use.


Design sketch for interprocess and distributed compute

[...]

One of these principles is the concept of progressive disclosure of complexity: 
a Swift developer shouldn't have to worry about IPC or distributed compute if 
they don't care about it.


While I agree with the sentiment, I don't think that anything useful can be 
done without "distributed" computation. I like the loadResourceFromTheWeb 
example, as we have something like this on our platform, which is the 
NSURLSession APIs, or the CloudKit API Surface, that are about fetching some 
resource from a server (URL or CloudKit database records). However, they don't 
have a single result, they have:

- progress notification callbacks
- broken down notifications for the results (e.g. headers first and body 
second, or per-record for CloudKit operations)
- various levels of error reporting.


I expect most developers will have to use such a construct, and for these, 
having a single async pivot in your code that essentially fully serializes your 
state machine on getting a full result from the previous step to be lacking. 
Similarly, for the 3 categories I listed above, it's very likely 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-27 Thread Howard Lovatt via swift-evolution
To avoid or at least detect deadlocks you need: timeout (which will at
least generate an error), cancel (which will prevent zombie processes), and
status information (for debugging). It doesn’t make any difference if the
reference is strong or weak. There is an advantage in strong references
since you can fire and forget if you want a deamon process that is totally
self managing.

On Sun, 27 Aug 2017 at 12:53 am, Marc Schlichte via swift-evolution <
swift-evolution@swift.org> wrote:

> Am 26.08.2017 um 02:03 schrieb Adam Kemp via swift-evolution <
> swift-evolution@swift.org>:
>
> I’m not sure I understand. What is the connection between references and
> deadlocks?
>
>
>
> This is what I had in mind:
>
> To have a deadlock from async actor methods, you would need some mutual
> invocations of them - i.e. a cycle in the call graph.
>
> If your code is (strong) retain cycle free and you make invocations only
> on actors of which you have strong references, you will also have no cyclic
> call graph, hence no deadlocks.
>
>
> Now, unfortunately - and contrary to my claim - deadlocks still can happen:
>
> if you `await` in your async actor method on some state which can only be
> set via another actor method in your actor, a deadlock occurs:
>
> Example:
> ```
> actor class A {
>   var continuation: (() -> Void)?
>   actor func m1() async {
> await suspendAsync { cont in
>   continuation = cont
> }
>   }
>   actor func m2() {
> continuation?()
>   }
> }
> ```
>
> If someone calls `a.m1()`, and someone else `a.m2()`, `a.m1()` still does
> not complete as `a.m2()` is not allowed to run while `a.m1()` is not
> finished.
>
> Marking `m2` as an `interleaved actor func` would remedy that situation as
> it could then run when the next work item is picked from the serial GCD
> queue - which can happen while we `await` on the `suspendAsync` in the
> example above.
>
>
> Cheers
> Marc
>
>
> On Aug 25, 2017, at 1:07 PM, Marc Schlichte 
> wrote:
>
>
> Am 25.08.2017 um 19:08 schrieb Adam Kemp via swift-evolution <
> swift-evolution@swift.org>:
>
> I understand what you’re saying, but I just think trying to make
> synchronous, blocking actor methods goes against the fundamental ideal of
> the actor model, and it’s a recipe for disaster. When actors communicate
> with each other that communication needs to be asynchronous or you will get
> deadlocks. It’s not just going to be a corner case. It’s going to be a very
> frequent occurrence.
>
> One of the general rules of multithreaded programming is “don’t call
> unknown code while holding a lock”. Blocking a queue is effectively the
> same as holding a lock, and calling another actor is calling unknown code.
> So if the model works that way then the language itself will be encouraging
> people to call unknown code while holding locks. That is not going to go
> well.
>
>
> I would claim - without having a proof, though - that as long as you don’t
> invoke async actor methods on weak or unowned actor references and the code
> is retain cycle free, no deadlocks will happen.
>
> Cheers
> Marc
>
>
-- 
-- Howard.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-26 Thread Marc Schlichte via swift-evolution

> Am 26.08.2017 um 02:03 schrieb Adam Kemp via swift-evolution 
> :
> 
> I’m not sure I understand. What is the connection between references and 
> deadlocks?


This is what I had in mind:

To have a deadlock from async actor methods, you would need some mutual 
invocations of them - i.e. a cycle in the call graph.

If your code is (strong) retain cycle free and you make invocations only on 
actors of which you have strong references, you will also have no cyclic call 
graph, hence no deadlocks.


Now, unfortunately - and contrary to my claim - deadlocks still can happen:

if you `await` in your async actor method on some state which can only be set 
via another actor method in your actor, a deadlock occurs:

Example:
```
actor class A {
  var continuation: (() -> Void)?
  actor func m1() async {
await suspendAsync { cont in
  continuation = cont
}
  }
  actor func m2() {
continuation?()
  }
}
```

If someone calls `a.m1()`, and someone else `a.m2()`, `a.m1()` still does not 
complete as `a.m2()` is not allowed to run while `a.m1()` is not finished.

Marking `m2` as an `interleaved actor func` would remedy that situation as it 
could then run when the next work item is picked from the serial GCD queue - 
which can happen while we `await` on the `suspendAsync` in the example above. 


Cheers
Marc

> 
>> On Aug 25, 2017, at 1:07 PM, Marc Schlichte > > wrote:
>> 
>> 
>>> Am 25.08.2017 um 19:08 schrieb Adam Kemp via swift-evolution 
>>> >:
>>> 
>>> I understand what you’re saying, but I just think trying to make 
>>> synchronous, blocking actor methods goes against the fundamental ideal of 
>>> the actor model, and it’s a recipe for disaster. When actors communicate 
>>> with each other that communication needs to be asynchronous or you will get 
>>> deadlocks. It’s not just going to be a corner case. It’s going to be a very 
>>> frequent occurrence.
>>> 
>>> One of the general rules of multithreaded programming is “don’t call 
>>> unknown code while holding a lock”. Blocking a queue is effectively the 
>>> same as holding a lock, and calling another actor is calling unknown code. 
>>> So if the model works that way then the language itself will be encouraging 
>>> people to call unknown code while holding locks. That is not going to go 
>>> well.
>>> 
>> 
>> I would claim - without having a proof, though - that as long as you don’t 
>> invoke async actor methods on weak or unowned actor references and the code 
>> is retain cycle free, no deadlocks will happen.
>> 
>> Cheers
>> Marc
>> 
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-26 Thread Brent Royal-Gordon via swift-evolution
> On Aug 25, 2017, at 10:12 PM, Howard Lovatt via swift-evolution 
>  wrote:
> 
> I think we would be better off with a future type rather than async/await 
> since they can offer timeout, cancel, and control over which thread execution 
> occurs on.


async/await is a primitive you can build these high-level features on top of.

If you have async/await, you can temporarily handle timeout, cancel, and thread 
control manually until we have time to design features to address those. You 
can also ignore our features if you don't like them and use your own designs 
instead. Or you can substitute features more appropriate to your 
platform—imagine if Swift were a Linux language and you were writing the Mac 
port, and pthreads were so deeply baked into futures that our entire 
concurrency system couldn't be used with GCD.

You cannot design the entire world at once, or you'll end up with a huge, 
complicated, inflexible mess.

-- 
Brent Royal-Gordon
Architechies



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Howard Lovatt via swift-evolution
I think we would be better off with a future type rather than async/await
since they can offer timeout, cancel, and control over which thread
execution occurs on.

  -- Howard.

On 26 August 2017 at 00:06, Cavelle Benjamin via swift-evolution <
swift-evolution@swift.org> wrote:

> Disclaimer: not an expert
>
> Question
> I didn’t see any where the async is required to time out after a certain
> time frame. I would think that we would want to specify both on the
> function declaration side as a default and on the function call side as a
> customization. That being said, the return time then becomes an optional
> given the timeout and the calling code would need to unwrap.
>
> func loadWebResource(_ path: String) async -> Resource
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
> func dewarpAndCleanupImage(_ i: Image) async -> Image
>
> func processImageData1() async -> Image {
> let dataResource  = await loadWebResource("dataprofile.txt")
> let imageResource = await loadWebResource("imagedata.dat")
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
>
>
>
> So the prior code becomes…
>
> func loadWebResource(_ path: String) async(timeout: 1000) -> Resource?
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image?
> func dewarpAndCleanupImage(_ i: Image) async -> Image?
>
> func processImageData1() async -> Image? {
> guard let dataResource  = await loadWebResource("dataprofile.txt") 
> else { return nil /* handle timeout */ }
> guard let imageResource = await(timeout: 100) 
> loadWebResource("imagedata.dat") else { return nil /* handle timeout */ }
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
>
>
>
> Given this structure, the return type of all async’s would be optionals
> with now 3 return types??
>
> .continuation // suspends and picks back up
> .value // these are the values we are looking for
> .none // took too long, so you get nothing.
>
>
>
> On Aug 17, 2017, at 18:24, Chris Lattner via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> Hi all,
>
> As Ted mentioned in his email, it is great to finally kick off discussions
> for what concurrency should look like in Swift.  This will surely be an
> epic multi-year journey, but it is more important to find the right design
> than to get there fast.
>
> I’ve been advocating for a specific model involving async/await and actors
> for many years now.  Handwaving only goes so far, so some folks asked me to
> write them down to make the discussion more helpful and concrete.  While I
> hope these ideas help push the discussion on concurrency forward, this
> isn’t in any way meant to cut off other directions: in fact I hope it helps
> give proponents of other designs a model to follow: a discussion giving
> extensive rationale, combined with the long term story arc to show that the
> features fit together.
>
> Anyway, here is the document, I hope it is useful, and I’d love to hear
> comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
>
> -Chris
>
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
>
>
>
>
>


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Adam Kemp via swift-evolution
I’m not sure I understand. What is the connection between references and 
deadlocks?

> On Aug 25, 2017, at 1:07 PM, Marc Schlichte  
> wrote:
> 
> 
>> On 25.08.2017 at 19:08, Adam Kemp via swift-evolution wrote:
>> 
>> I understand what you’re saying, but I just think trying to make 
>> synchronous, blocking actor methods goes against the fundamental ideal of 
>> the actor model, and it’s a recipe for disaster. When actors communicate 
>> with each other that communication needs to be asynchronous or you will get 
>> deadlocks. It’s not just going to be a corner case. It’s going to be a very 
>> frequent occurrence.
>> 
>> One of the general rules of multithreaded programming is “don’t call unknown 
>> code while holding a lock”. Blocking a queue is effectively the same as 
>> holding a lock, and calling another actor is calling unknown code. So if the 
>> model works that way then the language itself will be encouraging people to 
>> call unknown code while holding locks. That is not going to go well.
>> 
> 
> I would claim - without having a proof, though - that as long as you don’t 
> invoke async actor methods on weak or unowned actor references and the code 
> is retain cycle free, no deadlocks will happen.
> 
> Cheers
> Marc
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Marc Schlichte via swift-evolution

> On 25.08.2017 at 19:08, Adam Kemp via swift-evolution wrote:
> 
> I understand what you’re saying, but I just think trying to make synchronous, 
> blocking actor methods goes against the fundamental ideal of the actor model, 
> and it’s a recipe for disaster. When actors communicate with each other that 
> communication needs to be asynchronous or you will get deadlocks. It’s not 
> just going to be a corner case. It’s going to be a very frequent occurrence.
> 
> One of the general rules of multithreaded programming is “don’t call unknown 
> code while holding a lock”. Blocking a queue is effectively the same as 
> holding a lock, and calling another actor is calling unknown code. So if the 
> model works that way then the language itself will be encouraging people to 
> call unknown code while holding locks. That is not going to go well.
> 

I would claim - without having a proof, though - that as long as you don’t 
invoke async actor methods on weak or unowned actor references and the code is 
retain cycle free, no deadlocks will happen.

Cheers
Marc



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Adam Kemp via swift-evolution

> On Aug 25, 2017, at 9:54 AM, Thomas  wrote:
> 
> I'd tend to think non-FIFO actor messaging will cause more trouble than 
> potential deadlocks. I'm re-reading the proposal and it seems to go this way 
> as well:
> 
> "An await on an actor method suspends the current task, and since you can get 
> circular waits, you can end up with deadlock. This is because only one 
> message is processed by the actor at a time. The trivial case like this can 
> also be trivially diagnosed by the compiler. The complex case would ideally 
> be diagnosed at runtime with a trap, depending on the runtime implementation 
> model."

I understand what you’re saying, but I just think trying to make synchronous, 
blocking actor methods goes against the fundamental ideal of the actor model, 
and it’s a recipe for disaster. When actors communicate with each other that 
communication needs to be asynchronous or you will get deadlocks. It’s not just 
going to be a corner case. It’s going to be a very frequent occurrence.

One of the general rules of multithreaded programming is “don’t call unknown 
code while holding a lock”. Blocking a queue is effectively the same as holding 
a lock, and calling another actor is calling unknown code. So if the model 
works that way then the language itself will be encouraging people to call 
unknown code while holding locks. That is not going to go well.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Thomas via swift-evolution

> On 25 Aug 2017, at 18:30, Adam Kemp  wrote:
> 
> 
> 
>> On Aug 25, 2017, at 1:14 AM, Thomas wrote:
>>> On 25 Aug 2017, at 01:15, Adam Kemp wrote:
>>> I don’t think await should cause the actor’s queue (or any queue) to be 
>>> suspended. Actor methods should not block waiting for asynchronous things. 
>>> That’s how you get deadlocks. If an actor method needs to be async then it 
>>> should work just like any async method on the main queue: it unblocks the 
>>> queue and allows other messages to be processed until it gets an answer.
>>> 
>>> You do have to be aware of the fact that things can happen in between an 
>>> await and the next line of code, but conveniently these places are all 
>>> marked for you. They all say “await”. :)
>> 
>> It is correct that suspending the queue allows for deadlocks, but not doing 
>> it means you can receive messages while still in the middle of another 
>> message. For the same reason you may need FIFO ordering in a class to 
>> guarantee coherency, you will want this to work in an asynchronous world as 
>> well. Take for example some storage class:
>> 
>> 1. store(object, key)
>> 2. fetch(key)
>> 
>> If you're doing these operations in order, you want the fetch to return the 
>> object you just stored. If the 'store' needs to await something in its 
>> implementation and we were to not suspend the queue, the fetch would be 
>> processed before the object is actually stored and it would return something 
>> unexpected.
> 
> Actors can use other means to serialize operations if they need to, for 
> instance by using an internal queue of pending operations. It’s better for 
> actors that need this kind of serialization to handle it explicitly than for 
> every actor to suffer from potential deadlocks when doing seemingly 
> straightforward things.
> 
> async/await in general is not meant to block anything. It’s explicitly meant 
> to avoid blocking things. That’s what the feature is for. It would be 
> confusing if await did something different for actor methods than it did for 
> every other context.

I'd tend to think non-FIFO actor messaging will cause more trouble than 
potential deadlocks. I'm re-reading the proposal and it seems to go this way as 
well:

"An await on an actor method suspends the current task, and since you can get 
circular waits, you can end up with deadlock. This is because only one message 
is processed by the actor at a time. The trivial case like this can also be 
trivially diagnosed by the compiler. The complex case would ideally be 
diagnosed at runtime with a trap, depending on the runtime implementation 
model."
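
The trivial circular-wait case reads roughly like this (a sketch in the manifesto's hypothetical `actor` syntax; the `Left`/`Right` types and their methods are illustrative placeholders, not real Swift):

```swift
// Each actor processes one message at a time, so if `ping` awaits
// `pong` while `pong` is awaiting `ping`, neither message can finish:
// a circular wait, i.e. a deadlock.
actor class Left {
    var other: Right?
    actor func ping() async {
        await other?.pong()  // suspends Left until Right answers...
    }
}
actor class Right {
    var other: Left?
    actor func pong() async {
        await other?.ping()  // ...while Right is waiting on Left
    }
}
```

This is the "trivial case" the proposal says a compiler could diagnose; longer cycles through more actors would need the runtime trap.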



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Adam Kemp via swift-evolution
Cancellation and time out can be built into futures, and async/await can 
interact with futures. I don’t think we need async/await itself to support 
either of those.

Just as a real-world example, C#’s async/await feature doesn’t have built-in 
timeout or cancellation support, but it’s still easy to handle both of those 
cases using the tools available. For example, one technique would be this (in 
C#):

var cts = new CancellationTokenSource();
cts.CancelAfter(TimeSpan.FromMilliseconds(2500));
try {
    await DoAsync(cts.Token);
}
catch (OperationCanceledException) {
    // Handle cancelled
}
catch (Exception) {
    // Handle other failure
}

There are other techniques that would let you distinguish between cancellation 
and timeout as well.
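
A Swift analogue of that pattern might look like the sketch below; `Future`, `timeout(after:)`, `TimeoutError`, and `loadWebResourceFuture` are hypothetical library types layered on top of async/await, not part of the proposal itself:

```swift
// Sketch: the timeout lives in the futures library, not in `await`.
func loadResourceOrNil(_ path: String) async -> Resource? {
    let future: Future<Resource> = loadWebResourceFuture(path)
    do {
        // The future enforces the 2500 ms deadline and throws on expiry.
        return try await future.timeout(after: .milliseconds(2500)).get()
    } catch is TimeoutError {
        return nil  // timed out; the caller decides how to react
    }
}
```

The point is the same as in the C# version: cancellation and timeout compose with async/await without the language itself needing a timeout parameter.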

> On Aug 25, 2017, at 7:06 AM, Cavelle Benjamin via swift-evolution 
>  wrote:
> 
> Disclaimer: not an expert
> 
> Question
> I didn’t see anywhere that the async is required to time out after a certain 
> time frame. I would think that we would want to specify both on the function 
> declaration side as a default and on the function call side as a 
> customization. That being said, the return type then becomes an optional 
> given the timeout and the calling code would need to unwrap.
> 
> func loadWebResource(_ path: String) async -> Resource
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
> func dewarpAndCleanupImage(_ i : Image) async -> Image
> 
> func processImageData1() async -> Image {
> let dataResource  = await loadWebResource("dataprofile.txt")
> let imageResource = await loadWebResource("imagedata.dat")
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
> 
> 
> So the prior code becomes… 
> 
> func loadWebResource(_ path: String) async(timeout: 1000) -> Resource?
> func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image?
> func dewarpAndCleanupImage(_ i : Image) async -> Image?
> 
> func processImageData1() async -> Image? {
> guard let dataResource  = await loadWebResource("dataprofile.txt") else { /* handle timeout */ }
> guard let imageResource = await(timeout: 100) loadWebResource("imagedata.dat") else { /* handle timeout */ }
> let imageTmp  = await decodeImage(dataResource, imageResource)
> let imageResult   = await dewarpAndCleanupImage(imageTmp)
> return imageResult
> }
> 
> 
> Given this structure, the return type of all async’s would be optionals with 
> now 3 return types??
> 
> .continuation // suspends and picks back up
> .value // these are the values we are looking for
> .none // took too long, so you get nothing.
> 
> 
> 
>> On Aug 17, 2017, at 18:24, Chris Lattner via swift-evolution wrote:
>> 
>> Hi all,
>> 
>> As Ted mentioned in his email, it is great to finally kick off discussions 
>> for what concurrency should look like in Swift.  This will surely be an epic 
>> multi-year journey, but it is more important to find the right design than 
>> to get there fast.
>> 
>> I’ve been advocating for a specific model involving async/await and actors 
>> for many years now.  Handwaving only goes so far, so some folks asked me to 
>> write them down to make the discussion more helpful and concrete.  While I 
>> hope these ideas help push the discussion on concurrency forward, this isn’t 
>> in any way meant to cut off other directions: in fact I hope it helps give 
>> proponents of other designs a model to follow: a discussion giving extensive 
>> rationale, combined with the long term story arc to show that the features 
>> fit together.
>> 
>> Anyway, here is the document, I hope it is useful, and I’d love to hear 
>> comments and suggestions for improvement:
>> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782 
>> 
>> 
>> -Chris
>> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Adam Kemp via swift-evolution


> On Aug 25, 2017, at 1:14 AM, Thomas  wrote:
>> On 25 Aug 2017, at 01:15, Adam Kemp wrote:
>> I don’t think await should cause the actor’s queue (or any queue) to be 
>> suspended. Actor methods should not block waiting for asynchronous things. 
>> That’s how you get deadlocks. If an actor method needs to be async then it 
>> should work just like any async method on the main queue: it unblocks the 
>> queue and allows other messages to be processed until it gets an answer.
>> 
>> You do have to be aware of the fact that things can happen in between an 
>> await and the next line of code, but conveniently these places are all 
>> marked for you. They all say “await”. :)
> 
> It is correct that suspending the queue allows for deadlocks, but not doing 
> it means you can receive messages while still in the middle of another 
> message. For the same reason you may need FIFO ordering in a class to 
> guarantee coherency, you will want this to work in an asynchronous world as 
> well. Take for example some storage class:
> 
> 1. store(object, key)
> 2. fetch(key)
> 
> If you're doing these operations in order, you want the fetch to return the 
> object you just stored. If the 'store' needs to await something in its 
> implementation and we were to not suspend the queue, the fetch would be 
> processed before the object is actually stored and it would return something 
> unexpected.

Actors can use other means to serialize operations if they need to, for 
instance by using an internal queue of pending operations. It’s better for 
actors that need this kind of serialization to handle it explicitly than for 
every actor to suffer from potential deadlocks when doing seemingly 
straightforward things.
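
Such explicit serialization might be sketched like this (hypothetical syntax; `reallyStore`, `enqueueAndAwaitFetch`, and the drain loop are illustrative placeholders):

```swift
// Sketch: the actor's message queue is never suspended, so it stays
// responsive, but store/fetch coherency is preserved by an explicit
// FIFO of pending operations that is drained one at a time.
actor class Storage {
    private var pending: [() async -> Void] = []
    private var draining = false

    actor func store(_ object: Object, key: Key) {
        pending.append { await self.reallyStore(object, key: key) }
        drainIfNeeded()
    }

    actor func fetch(_ key: Key) async -> Object? {
        // routed through the same FIFO so it sees all prior stores
        return await enqueueAndAwaitFetch(key)
    }

    private func drainIfNeeded() {
        guard !draining else { return }
        draining = true
        // pop operations in order, awaiting each before starting the next
    }
}
```

Only the actors that actually need strict ordering pay for it; the rest remain deadlock-free by construction.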

async/await in general is not meant to block anything. It’s explicitly meant to 
avoid blocking things. That’s what the feature is for. It would be confusing if 
await did something different for actor methods than it did for every other 
context.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Cavelle Benjamin via swift-evolution
Disclaimer: not an expert

Question
I didn’t see anywhere that the async is required to time out after a certain 
time frame. I would think that we would want to specify both on the function 
declaration side as a default and on the function call side as a customization. 
That being said, the return type then becomes an optional given the timeout and 
the calling code would need to unwrap.

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image
func dewarpAndCleanupImage(_ i : Image) async -> Image

func processImageData1() async -> Image {
    let dataResource  = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    let imageTmp      = await decodeImage(dataResource, imageResource)
    let imageResult   = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}


So the prior code becomes… 

func loadWebResource(_ path: String) async(timeout: 1000) -> Resource?
func decodeImage(_ r1: Resource, _ r2: Resource) async -> Image?
func dewarpAndCleanupImage(_ i : Image) async -> Image?

func processImageData1() async -> Image? {
    guard let dataResource = await loadWebResource("dataprofile.txt") else {
        // handle timeout
    }
    guard let imageResource = await(timeout: 100) loadWebResource("imagedata.dat") else {
        // handle timeout
    }
    let imageTmp    = await decodeImage(dataResource, imageResource)
    let imageResult = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}


Given this structure, the return type of all async’s would be optionals with 
now 3 return types??

.continuation // suspends and picks back up
.value // these are the values we are looking for
.none // took too long, so you get nothing.



> On Aug 17, 2017, at 18:24, Chris Lattner via swift-evolution 
>  wrote:
> 
> Hi all,
> 
> As Ted mentioned in his email, it is great to finally kick off discussions 
> for what concurrency should look like in Swift.  This will surely be an epic 
> multi-year journey, but it is more important to find the right design than to 
> get there fast.
> 
> I’ve been advocating for a specific model involving async/await and actors 
> for many years now.  Handwaving only goes so far, so some folks asked me to 
> write them down to make the discussion more helpful and concrete.  While I 
> hope these ideas help push the discussion on concurrency forward, this isn’t 
> in any way meant to cut off other directions: in fact I hope it helps give 
> proponents of other designs a model to follow: a discussion giving extensive 
> rationale, combined with the long term story arc to show that the features 
> fit together.
> 
> Anyway, here is the document, I hope it is useful, and I’d love to hear 
> comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
> 
> -Chris
> 


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Thomas via swift-evolution

> On 25 Aug 2017, at 10:14, Thomas via swift-evolution 
>  wrote:
> 
> 
>> On 25 Aug 2017, at 01:15, Adam Kemp wrote:
>> 
>> 
>> 
>>> On Aug 24, 2017, at 3:15 PM, Thomas wrote:
>>> 
>>> 
 On 24 Aug 2017, at 23:47, Adam Kemp wrote:
 
 
> On Aug 24, 2017, at 1:05 PM, Thomas via swift-evolution wrote:
> 
>> 
>> On 24 Aug 2017, at 21:48, Marc Schlichte wrote:
>> 
>> Yes, I think it is mandatory that we continue on the callers queue after 
>> an `await ` on some actor method.
>> 
>> If you `await` on a non-actor-method though, you would have to change 
>> queues manually if needed.
>> 
>> Any `actor` should have a `let actorQueue: DispatchQueue` property so 
>> that we can call in these cases:
>> 
>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
> 
> Wouldn't that be really confusing though? That awaiting certain methods 
> would bring us back to the actor's queue but awaiting others would 
> require manual queue hopping? What if the compiler was to always generate 
> the 'await actorQueue.asyncCoroutine()' queue hopping code after awaiting 
> on an async/actor method?
 
 Yes, it would be confusing. await should either always return to the same 
 queue or never do it. Otherwise it’s even more error-prone. I see the 
 actor feature as being just another demonstration of why solving the 
 queue-hopping problem is important for async/await to be useful.
>>> 
>>> So the way a non "fire and forget" actor method would work is:
>>> 
>>> - the actor's queue is in a suspended state until the method returns, this 
>>> is required so that messages sent to other actor methods are not processed 
>>> (they're added to the queue)
>>> - if the method body awaits on some other code, it automatically jumps back 
>>> on the actor's queue after awaiting, regardless of the queue's suspension 
>>> and content
>>> - when the method returns, the actor's queue is resumed and pending 
>>> messages can be processed (if any)
>>> 
>> 
>> I don’t think await should cause the actor’s queue (or any queue) to be 
>> suspended. Actor methods should not block waiting for asynchronous things. 
>> That’s how you get deadlocks. If an actor method needs to be async then it 
>> should work just like any async method on the main queue: it unblocks the 
>> queue and allows other messages to be processed until it gets an answer.
>> 
>> You do have to be aware of the fact that things can happen in between an 
>> await and the next line of code, but conveniently these places are all 
>> marked for you. They all say “await”. :)
> 
> It is correct that suspending the queue allows for deadlocks, but not doing 
> it means you can receive messages while still in the middle of another 
> message. For the same reason you may need FIFO ordering in a class to 
> guarantee coherency, you will want this to work in an asynchronous world as 
> well. Take for example some storage class:
> 
> 1. store(object, key)
> 2. fetch(key)
> 
> If you're doing these operations in order, you want the fetch to return the 
> object you just stored. If the 'store' needs to await something in its 
> implementation and we were to not suspend the queue, the fetch would be 
> processed before the object is actually stored and it would return something 
> unexpected.

Also think about this storage class being used concurrently. If the 'store' 
method is called concurrently and you don't suspend the queue, you'd end up 
with some of these 'store' requests processed while possibly in the middle of 
previous 'store' requests. That doesn't seem very safe. Soon enough, you'll end 
up wanting to wrap your class into an async FIFO pipeline.
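
Concretely, the hazardous interleaving could be sketched like this (hypothetical syntax; `network.write` stands in for whatever slow work `store` awaits):

```swift
actor class Storage {
    var cache: [Key: Object] = [:]

    actor func store(_ object: Object, key: Key) async {
        await network.write(object, key)  // suspension point
        cache[key] = object               // only runs after the write
    }

    actor func fetch(_ key: Key) async -> Object? {
        return cache[key]
    }
}
// If the queue is not suspended, `store(o, k)` parks at `network.write`
// and a queued `fetch(k)` runs first, returning nil instead of `o`.
```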

Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Thomas via swift-evolution

> On 25 Aug 2017, at 10:14, Thomas via swift-evolution 
>  wrote:
> 
> 
>> On 25 Aug 2017, at 01:15, Adam Kemp wrote:
>> 
>> 
>> 
>>> On Aug 24, 2017, at 3:15 PM, Thomas wrote:
>>> 
>>> 
 On 24 Aug 2017, at 23:47, Adam Kemp wrote:
 
 
> On Aug 24, 2017, at 1:05 PM, Thomas via swift-evolution wrote:
> 
>> 
>> On 24 Aug 2017, at 21:48, Marc Schlichte wrote:
>> 
>> Yes, I think it is mandatory that we continue on the callers queue after 
>> an `await ` on some actor method.
>> 
>> If you `await` on a non-actor-method though, you would have to change 
>> queues manually if needed.
>> 
>> Any `actor` should have a `let actorQueue: DispatchQueue` property so 
>> that we can call in these cases:
>> 
>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
> 
> Wouldn't that be really confusing though? That awaiting certain methods 
> would bring us back to the actor's queue but awaiting others would 
> require manual queue hopping? What if the compiler was to always generate 
> the 'await actorQueue.asyncCoroutine()' queue hopping code after awaiting 
> on an async/actor method?
 
 Yes, it would be confusing. await should either always return to the same 
 queue or never do it. Otherwise it’s even more error-prone. I see the 
 actor feature as being just another demonstration of why solving the 
 queue-hopping problem is important for async/await to be useful.
>>> 
>>> So the way a non "fire and forget" actor method would work is:
>>> 
>>> - the actor's queue is in a suspended state until the method returns, this 
>>> is required so that messages sent to other actor methods are not processed 
>>> (they're added to the queue)
>>> - if the method body awaits on some other code, it automatically jumps back 
>>> on the actor's queue after awaiting, regardless of the queue's suspension 
>>> and content
>>> - when the method returns, the actor's queue is resumed and pending 
>>> messages can be processed (if any)
>>> 
>> 
>> I don’t think await should cause the actor’s queue (or any queue) to be 
>> suspended. Actor methods should not block waiting for asynchronous things. 
>> That’s how you get deadlocks. If an actor method needs to be async then it 
>> should work just like any async method on the main queue: it unblocks the 
>> queue and allows other messages to be processed until it gets an answer.
>> 
>> You do have to be aware of the fact that things can happen in between an 
>> await and the next line of code, but conveniently these places are all 
>> marked for you. They all say “await”. :)
> 
> It is correct that suspending the queue allows for deadlocks, but not doing 
> it means you can receive messages while still in the middle of another 
> message. For the same reason you may need FIFO ordering in a class to 
> guarantee coherency, you will want this to work in an asynchronous world as 
> well. Take for example some storage class:
> 
> 1. store(object, key)
> 2. fetch(key)
> 
> If you're doing these operations in order, you want the fetch to return the 
> object you just stored. If the 'store' needs to await something in its 
> implementation and we were to not suspend the queue, the fetch would be 
> processed before the object is actually stored and it would return something 
> unexpected.

By the way, contrary to what I said earlier, that means we'd need to do this 
also for "fire and forget" methods, such as "store" in this example.



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Thomas via swift-evolution

> On 25 Aug 2017, at 10:17, Thomas via swift-evolution 
>  wrote:
> 
>> 
>> On 25 Aug 2017, at 09:04, Marc Schlichte wrote:
>> 
>>> On 24.08.2017 at 22:05, Thomas via swift-evolution wrote:
>>> 
 
 Yes, I think it is mandatory that we continue on the callers queue after 
 an `await ` on some actor method.
 
 If you `await` on a non-actor-method though, you would have to change 
 queues manually if needed.
 
 Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
 we can call in these cases:
 
 ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
>>> 
>>> Wouldn't that be really confusing though? That awaiting certain methods 
>>> would bring us back to the actor's queue but awaiting others would require 
>>> manual queue hopping? What if the compiler was to always generate the 
>>> 'await actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
>>> async/actor method?
>>> 
>>> Thomas
>>> 
>> 
>> I think we are not allowed to implicitly switch back to the actor's queue 
>> after awaiting non-actor methods. These might have been auto-converted from 
>> Continuation-Passing-Style (CPS) to async/await style. With the `mainActor` 
>> idea from the manifesto, all existing code will run in some actor, so 
>> changing the queue semantics could break existing code.
> 
> This would only happen when the caller is an actor, which means new code, so 
> I don't think we would be breaking any existing code.

Oh but yeah, the main actor will probably need migration/warnings from the 
compiler.

Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Thomas via swift-evolution

> On 25 Aug 2017, at 09:04, Marc Schlichte  
> wrote:
> 
> 
>> On 24.08.2017 at 22:05, Thomas via swift-evolution wrote:
>> 
>>> 
>>> Yes, I think it is mandatory that we continue on the callers queue after an 
>>> `await ` on some actor method.
>>> 
>>> If you `await` on a non-actor-method though, you would have to change 
>>> queues manually if needed.
>>> 
>>> Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
>>> we can call in these cases:
>>> 
>>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
>> 
>> Wouldn't that be really confusing though? That awaiting certain methods 
>> would bring us back to the actor's queue but awaiting others would require 
>> manual queue hopping? What if the compiler was to always generate the 'await 
>> actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
>> async/actor method?
>> 
>> Thomas
>> 
> 
> I think we are not allowed to implicitly switch back to the actor's queue 
> after awaiting non-actor methods. These might have been auto-converted from 
> Continuation-Passing-Style (CPS) to async/await style. With the `mainActor` 
> idea from the manifesto, all existing code will run in some actor, so 
> changing the queue semantics could break existing code.

This would only happen when the caller is an actor, which means new code, so I 
don't think we would be breaking any existing code.

Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Thomas via swift-evolution

> On 25 Aug 2017, at 01:15, Adam Kemp  wrote:
> 
> 
> 
>> On Aug 24, 2017, at 3:15 PM, Thomas wrote:
>> 
>> 
>>> On 24 Aug 2017, at 23:47, Adam Kemp wrote:
>>> 
>>> 
 On Aug 24, 2017, at 1:05 PM, Thomas via swift-evolution wrote:
 
> 
> On 24 Aug 2017, at 21:48, Marc Schlichte wrote:
> 
> Yes, I think it is mandatory that we continue on the callers queue after 
> an `await ` on some actor method.
> 
> If you `await` on a non-actor-method though, you would have to change 
> queues manually if needed.
> 
> Any `actor` should have a `let actorQueue: DispatchQueue` property so 
> that we can call in these cases:
> 
> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
 
 Wouldn't that be really confusing though? That awaiting certain methods 
 would bring us back to the actor's queue but awaiting others would require 
 manual queue hopping? What if the compiler was to always generate the 
 'await actorQueue.asyncCoroutine()' queue hopping code after awaiting on 
 an async/actor method?
>>> 
>>> Yes, it would be confusing. await should either always return to the same 
>>> queue or never do it. Otherwise it’s even more error-prone. I see the actor 
>>> feature as being just another demonstration of why solving the 
>>> queue-hopping problem is important for async/await to be useful.
>> 
>> So the way a non "fire and forget" actor method would work is:
>> 
>> - the actor's queue is in a suspended state until the method returns, this 
>> is required so that messages sent to other actor methods are not processed 
>> (they're added to the queue)
>> - if the method body awaits on some other code, it automatically jumps back 
>> on the actor's queue after awaiting, regardless of the queue's suspension 
>> and content
>> - when the method returns, the actor's queue is resumed and pending messages 
>> can be processed (if any)
>> 
> 
> I don’t think await should cause the actor’s queue (or any queue) to be 
> suspended. Actor methods should not block waiting for asynchronous things. 
> That’s how you get deadlocks. If an actor method needs to be async then it 
> should work just like any async method on the main queue: it unblocks the 
> queue and allows other messages to be processed until it gets an answer.
> 
> You do have to be aware of the fact that things can happen in between an 
> await and the next line of code, but conveniently these places are all marked 
> for you. They all say “await”. :)

It is correct that suspending the queue allows for deadlocks, but not doing it 
means you can receive messages while still in the middle of another message. 
For the same reason you may need FIFO ordering in a class to guarantee 
coherency, you will want this to work in an asynchronous world as well. Take 
for example some storage class:

1. store(object, key)
2. fetch(key)

If you're doing these operations in order, you want the fetch to return the 
object you just stored. If the 'store' needs to await something in its 
implementation and we were to not suspend the queue, the fetch would be 
processed before the object is actually stored and it would return something 
unexpected.
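The hazard can be sketched in the manifesto's proposed (hypothetical) actor syntax. `Storage`, `writeToDisk`, and `cache` are invented names for illustration, not real APIs:

```swift
// Hypothetical proposal-era syntax; not compilable today.
actor class Storage {
    var cache: [String: Data] = [:]

    actor func store(_ object: Data, key: String) async {
        // If the actor's queue is NOT suspended across this await,
        // a fetch(key) message sent right after store(object, key)
        // can run here, before the cache is updated, and miss the object.
        await writeToDisk(object, key: key)
        cache[key] = object
    }

    actor func fetch(_ key: String) async -> Data? {
        return cache[key]
    }
}
```

With queue suspension, `fetch` cannot run until `store` has returned, preserving the FIFO coherency described above at the cost of potential deadlocks.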

Thomas

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Marc Schlichte via swift-evolution

> Am 25.08.2017 um 01:15 schrieb Adam Kemp via swift-evolution 
> :
> 
> I don’t think await should cause the actor’s queue (or any queue) to be 
> suspended. Actor methods should not block waiting for asynchronous things. 
> That’s how you get deadlocks. If an actor method needs to be async then it 
> should work just like any async method on the main queue: it unblocks the 
> queue and allows other messages to be processed until it gets an answer.
> 
> You do have to be aware of the fact that things can happen in between an 
> await and the next line of code, but conveniently these places are all marked 
> for you. They all say “await”. :)

Yes, that is important to note: when we `await` on an `async` method, the 
caller's queue does not get blocked in any way. The control flow just continues 
- with the next instruction after the enclosing `beginAsync`, I suppose - and 
when done with the current DispatchWorkItem it just dequeues the next 
DispatchWorkItem and works on it. If there is no next item, the underlying 
thread might still not be suspended but be used to work on some other queue.

Despite that, we might still want to discuss whether actor methods should get 
serialized beyond that - think of an underlying GCD queue (as discussed above) 
and a separate (non-GCD) message queue where actor messages get queued up. 
In another thread I proposed to introduce a new modifier for actor methods 
which keeps them out of the message queue and thus allows them to 
run whenever the GCD queue picks them up:

serialized by message-queue: 
`actor func foo() async`

non-serialized:
`interleaved actor func bar() async`

This way, when you reason about your code and look at places marked with 
`await`, only `interleaved` methods (or code using explicit `beginAsync` calls) 
might have changed your state.
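A sketch of how the proposed modifier might be used (hypothetical syntax; `Downloader`, `performDownload`, and `cancelled` are made-up names):

```swift
// Hypothetical proposal-era syntax; not compilable today.
actor class Downloader {
    var cancelled = false

    // Serialized by the message-queue: queued behind other messages.
    actor func download(url: URL) async -> Data? {
        let data = await performDownload(url)
        return cancelled ? nil : data
    }

    // Interleaved: may run while download() is still awaiting,
    // which is exactly what a cancel message needs.
    interleaved actor func cancel() async {
        cancelled = true
    }
}
```

When auditing `await` points in `download`, only `interleaved` methods like `cancel` could have mutated `cancelled` in the meantime.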

Cheers
Marc



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-25 Thread Marc Schlichte via swift-evolution

> Am 24.08.2017 um 22:05 schrieb Thomas via swift-evolution 
> :
> 
>> 
>> Yes, I think it is mandatory that we continue on the callers queue after an 
>> `await ` on some actor method.
>> 
>> If you `await` on a non-actor-method though, you would have to change 
>> queues manually if needed.
>> 
>> Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
>> we can call in these cases:
>> 
>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
> 
> Wouldn't that be really confusing though? That awaiting certain methods would 
> bring us back to the actor's queue but awaiting others would require manual 
> queue hopping? What if the compiler was to always generate the 'await 
> actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
> async/actor method?
> 
> Thomas
> 

I think we are not allowed to implicitly switch back to the actor's queue after 
awaiting non-actor methods. These might have been auto-converted from 
Continuation-Passing-Style (CPS) to async/await style. With the `mainActor` 
idea from the manifesto, all existing code will run in some actor, so changing 
the queue semantics could break existing code.

Actually, I don’t find it confusing: when calling non-actor methods, you have 
to take care of the queue yourself - as today. When calling actor methods instead, 
you don’t have to bother about this any longer - which is actually a big incentive 
to 'actorify' many APIs ;-)

Cheers
Marc


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-24 Thread Adam Kemp via swift-evolution


> On Aug 24, 2017, at 3:15 PM, Thomas  wrote:
> 
> 
>> On 24 Aug 2017, at 23:47, Adam Kemp > > wrote:
>> 
>> 
>>> On Aug 24, 2017, at 1:05 PM, Thomas via swift-evolution 
>>> > wrote:
>>> 
 
 On 24 Aug 2017, at 21:48, Marc Schlichte > wrote:
 
 Yes, I think it is mandatory that we continue on the callers queue after 
 an `await ` on some actor method.
 
 If you `await` on a non-actor-method though, you would have to change 
 queues manually if needed.
 
 Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
 we can call in these cases:
 
 ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
>>> 
>>> Wouldn't that be really confusing though? That awaiting certain methods 
>>> would bring us back to the actor's queue but awaiting others would require 
>>> manual queue hopping? What if the compiler was to always generate the 
>>> 'await actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
>>> async/actor method?
>> 
>> Yes, it would be confusing. await should either always return to the same 
>> queue or never do it. Otherwise it’s even more error-prone. I see the actor 
>> feature as being just another demonstration of why solving the queue-hopping 
>> problem is important for async/await to be useful.
> 
> So the way a non "fire and forget" actor method would work is:
> 
> - the actor's queue is in a suspended state until the method returns, this is 
> required so that messages sent to other actor methods are not processed 
> (they're added to the queue)
> - if the method body awaits on some other code, it automatically jumps back 
> on the actor's queue after awaiting, regardless of the queue's suspension and 
> content
> - when the method returns, the actor's queue is resumed and pending messages 
> can be processed (if any)
> 

I don’t think await should cause the actor’s queue (or any queue) to be 
suspended. Actor methods should not block waiting for asynchronous things. 
That’s how you get deadlocks. If an actor method needs to be async then it 
should work just like any async method on the main queue: it unblocks the queue 
and allows other messages to be processed until it gets an answer.

You do have to be aware of the fact that things can happen in between an await 
and the next line of code, but conveniently these places are all marked for 
you. They all say “await”. :)


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-24 Thread Thomas via swift-evolution

> On 24 Aug 2017, at 23:47, Adam Kemp  wrote:
> 
> 
>> On Aug 24, 2017, at 1:05 PM, Thomas via swift-evolution 
>> > wrote:
>> 
>>> 
>>> On 24 Aug 2017, at 21:48, Marc Schlichte >> > wrote:
>>> 
>>> Yes, I think it is mandatory that we continue on the callers queue after an 
>>> `await ` on some actor method.
>>> 
>>> If you `await` on a non-actor-method though, you would have to change 
>>> queues manually if needed.
>>> 
>>> Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
>>> we can call in these cases:
>>> 
>>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
>> 
>> Wouldn't that be really confusing though? That awaiting certain methods 
>> would bring us back to the actor's queue but awaiting others would require 
>> manual queue hopping? What if the compiler was to always generate the 'await 
>> actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
>> async/actor method?
> 
> Yes, it would be confusing. await should either always return to the same 
> queue or never do it. Otherwise it’s even more error-prone. I see the actor 
> feature as being just another demonstration of why solving the queue-hopping 
> problem is important for async/await to be useful.

So the way a non "fire and forget" actor method would work is:

- the actor's queue is in a suspended state until the method returns, this is 
required so that messages sent to other actor methods are not processed 
(they're added to the queue)
- if the method body awaits on some other code, it automatically jumps back on 
the actor's queue after awaiting, regardless of the queue's suspension and 
content
- when the method returns, the actor's queue is resumed and pending messages 
can be processed (if any)



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-24 Thread Adam Kemp via swift-evolution

> On Aug 24, 2017, at 1:05 PM, Thomas via swift-evolution 
>  wrote:
> 
>> 
>> On 24 Aug 2017, at 21:48, Marc Schlichte > > wrote:
>> 
>> Yes, I think it is mandatory that we continue on the callers queue after an 
>> `await ` on some actor method.
>> 
>> If you `await` on a non-actor-method though, you would have to change 
>> queues manually if needed.
>> 
>> Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
>> we can call in these cases:
>> 
>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
> 
> Wouldn't that be really confusing though? That awaiting certain methods would 
> bring us back to the actor's queue but awaiting others would require manual 
> queue hopping? What if the compiler was to always generate the 'await 
> actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
> async/actor method?

Yes, it would be confusing. await should either always return to the same queue 
or never do it. Otherwise it’s even more error-prone. I see the actor feature 
as being just another demonstration of why solving the queue-hopping problem is 
important for async/await to be useful.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-24 Thread Ben Rimmington via swift-evolution

Chris Lattner recently commented in  
that the prototype could use  support.

In one of the CppCon videos, Gor Nishanov said that C++ coroutines won't have 
an `async` keyword, and will be compatible with function pointers in C and C++.


I couldn't find the reason for this decision; does anyone here know why C++ 
coroutines don't need an `async` keyword?

And/or why do Swift coroutines need the `async` keyword? Does it imply a hidden 
parameter, like the `throws` keyword?
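One way to picture it (a sketch of the CPS lowering the manifesto describes, not actual compiler output; `loadWebResource` and `Resource` are illustrative): just as `throws` effectively threads an error path through the call, `async` can be thought of as adding a hidden continuation parameter:

```swift
// Source-level declaration (proposed syntax):
//   func loadWebResource(_ path: String) async -> Resource
//
// Conceptual lowering (illustrative only):
func loadWebResource(_ path: String,
                     continuation: @escaping (Resource) -> Void) {
    // ... kick off the work, then call continuation(result)
    //     from whatever context the work completes on.
}
```

Under that picture, the `async` keyword marks in the signature that such a hidden parameter exists, so callers know the function may suspend.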

-- Ben



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-24 Thread Thomas via swift-evolution

> On 24 Aug 2017, at 22:05, Thomas via swift-evolution 
>  wrote:
> 
>> 
>> On 24 Aug 2017, at 21:48, Marc Schlichte > > wrote:
>> 
>> 
>>> Am 24.08.2017 um 01:56 schrieb Adam Kemp >> >:
>>> 
>>> 
>>> 
 On Aug 23, 2017, at 4:28 PM, Marc Schlichte via swift-evolution 
 > wrote:
 
 
> Am 23.08.2017 um 12:29 schrieb Thomas via swift-evolution 
> >:
> 
> 
>> On 23 Aug 2017, at 11:28, Thomas via swift-evolution 
>> > wrote:
>> 
>> 1. What happens to the actor's queue when the body of a (non 
>> void-returning) actor method awaits away on some other actor? Does it 
>> suspend the queue to prevent other messages from being processed? It 
>> would seem to be the expected behavior but we'd also need a way to 
>> detach from the actor's queue in order to allow patterns like starting a 
>> long-running background operation and still allowing other messages to 
>> be processed (for example, calling a cancel() method). We could still do 
>> these long-running operations by passing a completion block to the 
>> method, rather than via its return value. That would clarify this goes 
>> beyond this one actor message, but we're back to the old syntax...
> 
> Maybe that's where Futures would come in handy? Just return a Future from 
> the method so callers can await long-running operations.
 
 If you wrap the call to a long-running operation of another actor in a 
 `beginAsync`, I would assume that other `actor funcs` of your actor will 
 be able to run even while
  the long-running operation is pending:
 
 actor class Caller {
   let callee = Callee()
   var state = SomeState()
 
   actor func foo() {
 beginAsync {
   let result = await callee.longRunningOperation()
   // do something with result and maybe state
 }
   }
   actor func bar() {
 // modify actor state
   }
 }
>>> 
>>> As currently proposed, the “// do something with result and maybe state” 
>>> line would likely run on Callee’s queue, not Caller’s queue. I still 
>>> strongly believe that this behavior should be reconsidered.
>>> 
>>> It does, however, bring up some interesting questions about how actors 
>>> interact with themselves. One of the rules laid out in Chris’s document 
>>> says that “local state and non-actor methods may only be accessed by 
>>> methods defined lexically on the actor or in an extension to it (whether 
>>> they are marked actor or otherwise).” That means it would be allowed for 
>>> the code in foo() to access state and call non-actor methods, even after 
>>> the await. As proposed that would be unsafe, and since the queue is an 
>>> implementation detail inaccessible to your code there wouldn’t be a 
>>> straightforward way to get back on the right queue to make it safe. I 
>>> presume you could call another actor method to get back on the right queue, 
>>> but having to do that explicitly after every await in an actor method seems 
>>> tedious and error prone.
>>> 
>>> In order to have strong safety guarantees for actors you would want to 
>>> ensure that all the code that has access to the state runs on the actor’s 
>>> queue. There are currently two holes I can think of that would prevent us 
>>> from having that protection: await and escaping blocks. await could be made 
>>> safe if it were changed to return to the calling queue. Maybe escaping 
>>> blocks could be restricted to only calling actor methods.
>> 
>> Yes, I think it is mandatory that we continue on the callers queue after an 
>> `await ` on some actor method.
>> 
>> If you `await` on a non-actor-method though, you would have to change 
>> queues manually if needed.
>> 
>> Any `actor` should have a `let actorQueue: DispatchQueue` property so that 
>> we can call in these cases:
>> 
>> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.
> 
> Wouldn't that be really confusing though? That awaiting certain methods would 
> bring us back to the actor's queue but awaiting others would require manual 
> queue hopping? What if the compiler was to always generate the 'await 
> actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
> async/actor method?

Adding a bit more about that: I think it doesn't matter what the callee is 
(async vs. actor). What matters is we're calling from an actor method and the 
compiler should guarantee that we're running in the context of the actor's 
queue. Therefore I would tend to think there should be no need for manual queue 
hopping. The compiler should just take care of it.

Thomas


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-24 Thread Thomas via swift-evolution

> On 24 Aug 2017, at 21:48, Marc Schlichte  
> wrote:
> 
> 
>> Am 24.08.2017 um 01:56 schrieb Adam Kemp > >:
>> 
>> 
>> 
>>> On Aug 23, 2017, at 4:28 PM, Marc Schlichte via swift-evolution 
>>> > wrote:
>>> 
>>> 
 Am 23.08.2017 um 12:29 schrieb Thomas via swift-evolution 
 >:
 
 
> On 23 Aug 2017, at 11:28, Thomas via swift-evolution 
> > wrote:
> 
> 1. What happens to the actor's queue when the body of a (non 
> void-returning) actor method awaits away on some other actor? Does it 
> suspend the queue to prevent other messages from being processed? It 
> would seem to be the expected behavior but we'd also need a way to detach 
> from the actor's queue in order to allow patterns like starting a 
> long-running background operation and still allowing other messages to be 
> processed (for example, calling a cancel() method). We could still do 
> these long-running operations by passing a completion block to the 
> method, rather than via its return value. That would clarify this goes 
> beyond this one actor message, but we're back to the old syntax...
 
 Maybe that's where Futures would come in handy? Just return a Future from 
 the method so callers can await long-running operations.
>>> 
>>> If you wrap the call to a long-running operation of another actor in a 
>>> `beginAsync`, I would assume that other `actor funcs` of your actor will be 
>>> able to run even while
>>>  the long-running operation is pending:
>>> 
>>> actor class Caller {
>>>   let callee = Callee()
>>>   var state = SomeState()
>>> 
>>>   actor func foo() {
>>> beginAsync {
>>>   let result = await callee.longRunningOperation()
>>>   // do something with result and maybe state
>>> }
>>>   }
>>>   actor func bar() {
>>> // modify actor state
>>>   }
>>> }
>> 
>> As currently proposed, the “// do something with result and maybe state” 
>> line would likely run on Callee’s queue, not Caller’s queue. I still 
>> strongly believe that this behavior should be reconsidered.
>> 
>> It does, however, bring up some interesting questions about how actors 
>> interact with themselves. One of the rules laid out in Chris’s document says 
>> that “local state and non-actor methods may only be accessed by methods 
>> defined lexically on the actor or in an extension to it (whether they are 
>> marked actor or otherwise).” That means it would be allowed for the code in 
>> foo() to access state and call non-actor methods, even after the await. As 
>> proposed that would be unsafe, and since the queue is an implementation 
>> detail inaccessible to your code there wouldn’t be a straightforward way to 
>> get back on the right queue to make it safe. I presume you could call 
>> another actor method to get back on the right queue, but having to do that 
>> explicitly after every await in an actor method seems tedious and error 
>> prone.
>> 
>> In order to have strong safety guarantees for actors you would want to 
>> ensure that all the code that has access to the state runs on the actor’s 
>> queue. There are currently two holes I can think of that would prevent us 
>> from having that protection: await and escaping blocks. await could be made 
>> safe if it were changed to return to the calling queue. Maybe escaping 
>> blocks could be restricted to only calling actor methods.
> 
> Yes, I think it is mandatory that we continue on the callers queue after an 
> `await ` on some actor method.
> 
> If you `await` on a non-actor-method though, you would have to change queues 
> manually if needed.
> 
> Any `actor` should have a `let actorQueue: DispatchQueue` property so that we 
> can call in these cases:
> 
> ```await actorQueue.asyncCoroutine()``` as mentioned in the manifesto.

Wouldn't that be really confusing though? That awaiting certain methods would 
bring us back to the actor's queue but awaiting others would require manual 
queue hopping? What if the compiler was to always generate the 'await 
actorQueue.asyncCoroutine()' queue hopping code after awaiting on an 
async/actor method?

Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-23 Thread Adam Kemp via swift-evolution


> On Aug 23, 2017, at 4:28 PM, Marc Schlichte via swift-evolution 
>  wrote:
> 
> 
>> Am 23.08.2017 um 12:29 schrieb Thomas via swift-evolution 
>> >:
>> 
>> 
>>> On 23 Aug 2017, at 11:28, Thomas via swift-evolution 
>>> > wrote:
>>> 
>>> 1. What happens to the actor's queue when the body of a (non 
>>> void-returning) actor method awaits away on some other actor? Does it 
>>> suspend the queue to prevent other messages from being processed? It would 
>>> seem to be the expected behavior but we'd also need a way to detach from 
>>> the actor's queue in order to allow patterns like starting a long-running 
>>> background operation and still allowing other messages to be processed (for 
>>> example, calling a cancel() method). We could still do these long-running 
>>> operations by passing a completion block to the method, rather than via its 
>>> return value. That would clarify this goes beyond this one actor message, 
>>> but we're back to the old syntax...
>> 
>> Maybe that's where Futures would come in handy? Just return a Future from 
>> the method so callers can await long-running operations.
> 
> If you wrap the call to a long-running operation of another actor in a 
> `beginAsync`, I would assume that other `actor funcs` of your actor will be 
> able to run even while
>  the long-running operation is pending:
> 
> actor class Caller {
>   let callee = Callee()
>   var state = SomeState()
> 
>   actor func foo() {
> beginAsync {
>   let result = await callee.longRunningOperation()
>   // do something with result and maybe state
> }
>   }
>   actor func bar() {
> // modify actor state
>   }
> }

As currently proposed, the “// do something with result and maybe state” line 
would likely run on Callee’s queue, not Caller’s queue. I still strongly 
believe that this behavior should be reconsidered.

It does, however, bring up some interesting questions about how actors interact 
with themselves. One of the rules laid out in Chris’s document says that “local 
state and non-actor methods may only be accessed by methods defined lexically 
on the actor or in an extension to it (whether they are marked actor or 
otherwise).” That means it would be allowed for the code in foo() to access 
state and call non-actor methods, even after the await. As proposed that would 
be unsafe, and since the queue is an implementation detail inaccessible to your 
code there wouldn’t be a straightforward way to get back on the right queue to 
make it safe. I presume you could call another actor method to get back on the 
right queue, but having to do that explicitly after every await in an actor 
method seems tedious and error prone.

In order to have strong safety guarantees for actors you would want to ensure 
that all the code that has access to the state runs on the actor’s queue. There 
are currently two holes I can think of that would prevent us from having that 
protection: await and escaping blocks. await could be made safe if it were 
changed to return to the calling queue. Maybe escaping blocks could be 
restricted to only calling actor methods.
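For comparison, this is what "returning to the calling queue" looks like in today's GCD world, where the hop back has to be written by hand (illustrative code; `fetchData` is a made-up API):

```swift
import Foundation

// A completion-handler API that does its work off-queue but delivers
// its result back on the queue the caller designates.
func fetchData(on callerQueue: DispatchQueue,
               completion: @escaping (Data?) -> Void) {
    DispatchQueue.global().async {
        let result: Data? = nil  // ... do the slow work here ...
        // The explicit hop back that an actor-aware await could insert
        // automatically after the suspension point:
        callerQueue.async {
            completion(result)
        }
    }
}
```

Making `await` perform this hop implicitly would close the hole described above: code after the `await` would always run on the actor's queue, keeping access to actor state safe.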

> 
> Note, that in this case while waiting asynchronously on the long-running 
> operation, the state of the caller might get changed by another of its `actor 
> funcs` running.
> Sometimes this might be intended - e.g. for cancellation - but it also could 
> lead to hard to find bugs...
> 
>> 
>> Thomas
>> 
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-23 Thread Marc Schlichte via swift-evolution

> Am 23.08.2017 um 12:29 schrieb Thomas via swift-evolution 
> :
> 
> 
>> On 23 Aug 2017, at 11:28, Thomas via swift-evolution 
>> > wrote:
>> 
>> 1. What happens to the actor's queue when the body of a (non void-returning) 
>> actor method awaits away on some other actor? Does it suspend the queue to 
>> prevent other messages from being processed? It would seem to be the 
>> expected behavior but we'd also need a way to detach from the actor's queue 
>> in order to allow patterns like starting a long-running background operation 
>> and still allowing other messages to be processed (for example, calling a 
>> cancel() method). We could still do these long-running operations by passing 
>> a completion block to the method, rather than via its return value. That 
>> would clarify this goes beyond this one actor message, but we're back to the 
>> old syntax...
> 
> Maybe that's where Futures would come in handy? Just return a Future from the 
> method so callers can await long-running operations.

If you wrap the call to a long-running operation of another actor in a 
`beginAsync`, I would assume that other `actor funcs` of your actor will be 
able to run even while
 the long-running operation is pending:

actor class Caller {
  let callee = Callee()
  var state = SomeState()

  actor func foo() {
beginAsync {
  let result = await callee.longRunningOperation()
  // do something with result and maybe state
}
  }
  actor func bar() {
// modify actor state
  }
}

Note, that in this case while waiting asynchronously on the long-running 
operation, the state of the caller might get changed by another of its `actor 
funcs` running.
Sometimes this might be intended - e.g. for cancellation - but it also could 
lead to hard to find bugs...

> 
> Thomas
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-23 Thread Andre via swift-evolution
Hi Chris, 

This looks amazing(!) … I am really looking forward to the end result, whatever 
that may be, because I know it will be awesome.
I'm really excited, and a lot raced through my mind while I read it...

———
Part 1: Async/await

let dataResource  = await loadWebResource("dataprofile.txt")


Was there any thought about introducing the concept of a timeout when awaiting?

Something like an `await for:` optional parameter?
Then, if used with try, it could go like…

let dataResource = try await for: 10 /* seconds */ 
loadWebResource("dataprofile.txt") catch _ as Timeout { /* abort, retry, cancel */ } 

Timeouts should probably be handled at a higher level, but it's just something 
that jumped out at me, since it's something I notice people sometimes 
neglect to take care of… :/


———
Part 2: Actors

One other thing that jumped out at me: if we are going to have actors, and 
have them encapsulate/isolate state, would it also make sense to make sure 
that we can’t invoke state-changing messages when the state is invalid?
As an example, if we had a ”downloader” actor that downloads multiple files, we 
wouldn't want to be able to send invalid messages such as `begin` when 
downloading has already ”begun”…. Would it then make more sense to have 
callable messages be determined by the publicly visible state of the actor 
instead?

For example, if the downloader actor hasn’t begun downloading, then the only 
available messages are `begin` and `addItem`; conversely, if the actor is 
”downloading” then the only messages it should accept are `cancel` and 
`getProgress`…

I'm thinking something along the lines of merging a class with an enum, I 
suppose… 
I put a gist here if you want to see what I am thinking: 
https://gist.github.com/andrekandore/f2539a74002d1255cfc3da58faf0f007

It may add complexity, but I think (at least for me) instead of writing a lot of 
boilerplate, it would come naturally from the "state contract"… and it could be 
safer than manually doing it myself….
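A rough sketch of the idea under discussion, combining the proposal's hypothetical `actor class` syntax with an enum-modeled state (all names - `DownloaderState`, `Downloader`, `begin` - are invented for illustration):

```swift
// Hypothetical proposal-era syntax; not compilable today.
enum DownloaderState {
    case idle(items: [URL])
    case downloading(progress: Double)
}

actor class Downloader {
    var state = DownloaderState.idle(items: [])

    actor func begin() {
        // `begin` is only valid while idle; the "state contract" idea
        // would make sending it in any other state a compile-time error
        // rather than this runtime guard.
        guard case .idle(let items) = state else { return }
        state = .downloading(progress: 0)
        // start downloading `items`...
    }

    actor func cancel() {
        state = .idle(items: [])
    }
}
```

The runtime `guard` is the boilerplate the "state contract" would eliminate: the set of callable messages would be derived from the enum case instead.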

Anyway, maybe I'm wrong, but it's just something I thought about…

———

I would appreciate to hear what you think. ^_^

Cheers,

Andre



> On 2017/08/18 at 7:25, Chris Lattner via swift-evolution 
> wrote:
> 
> Hi all,
> 
> As Ted mentioned in his email, it is great to finally kick off discussions 
> for what concurrency should look like in Swift.  This will surely be an epic 
> multi-year journey, but it is more important to find the right design than to 
> get there fast.
> 
> I’ve been advocating for a specific model involving async/await and actors 
> for many years now.  Handwaving only goes so far, so some folks asked me to 
> write them down to make the discussion more helpful and concrete.  While I 
> hope these ideas help push the discussion on concurrency forward, this isn’t 
> in any way meant to cut off other directions: in fact I hope it helps give 
> proponents of other designs a model to follow: a discussion giving extensive 
> rationale, combined with the long term story arc to show that the features 
> fit together.
> 
> Anyway, here is the document, I hope it is useful, and I’d love to hear 
> comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
> 
> -Chris
> 



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-23 Thread Thomas via swift-evolution

> On 23 Aug 2017, at 11:28, Thomas via swift-evolution 
>  wrote:
> 
> 1. What happens to the actor's queue when the body of a (non void-returning) 
> actor method awaits away on some other actor? Does it suspend the queue to 
> prevent other messages from being processed? It would seem to be the expected 
> behavior but we'd also need a way to detach from the actor's queue in order 
> to allow patterns like starting a long-running background operation and still 
> allowing other messages to be processed (for example, calling a cancel() 
> method). We could still do these long-running operations by passing a 
> completion block to the method, rather than via its return value. That would 
> clarify this goes beyond this one actor message, but we're back to the old 
> syntax...

Maybe that's where Futures would come in handy? Just return a Future from the 
method so callers can await long-running operations.

Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-23 Thread Thomas via swift-evolution
Just wanted to sum up my actors interrogation here:

1. What happens to the actor's queue when the body of a (non void-returning) 
actor method awaits away on some other actor? Does it suspend the queue to 
prevent other messages from being processed? It would seem to be the expected 
behavior but we'd also need a way to detach from the actor's queue in order to 
allow patterns like starting a long-running background operation and still 
allowing other messages to be processed (for example, calling a cancel() 
method). We could still do these long-running operations by passing a 
completion block to the method, rather than via its return value. That would 
clarify this goes beyond this one actor message, but we're back to the old 
syntax...

2. Clarification about whether we are called back on the actor's queue after 
awaiting on some other code/actor.

3. How do we differentiate between void-returning methods that can be awaited 
and void-returning methods that are oneway "fire and forget". These two methods 
written as of now:

func doSomething(completionHandler: () -> ()) -> Void
func doSomething() -> Void

Currently they'd both translate to:

actor func doSomething() -> Void


Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-22 Thread Mike Sanderson via swift-evolution
On Mon, Aug 21, 2017 at 4:09 PM, Karim Nassar via swift-evolution <
swift-evolution@swift.org> wrote:

> Thought about it in more depth, and I’m now firmly in the camp of:
> ‘throws’/‘try' and ‘async’/‘await' should be orthogonal features. I think
> the slight call-site reduction in typed characters ('try await’ vs ‘await’)
> is heavily outweighed by the loss of clarity on all the edge cases.
>

My concern is less for ‘throws’/‘async' in declarations and ‘try’/‘await'
at call sites (I can live with both for clarity and explicitness) than it
is for the enclosing 'beginAsync/do'. Johnathan Hull made a similar point
on another thread.

The principle at work, that aligns with the concurrency manifesto 'design'
section, is that it should not be the case that handling errors is more
onerous than ignoring or discarding them. Nor should handling errors
produce ugly code.

If handling errors requires nesting a do/try/catch in a beginAsync block
every time, then code that ignored errors will always look cleaner than
responsible code that handles them. Not handling errors will be the
default. If there are specific recoverable errors, like moving a file from
a download task only to find something else already moved it there, now
there are multiple nested do/try/catch.

(I'm not sure what the reasoning is that most users won't have to interact
with the primitive `beginAsync`. That seems like it would be the starting
point for every use of any asynchronous function, especially in iOS
development, as seen in the IBAction example.)

Leaving explicit throws/async in declarations and try/await at call sites,
a narrower modification would be:

1. Make every `beginAsync` coroutine start also act as a `do` block. The
`catch` block would then be necessary only if something throws--currently a
`do` block can already be used this way, so it matches nicely.

(This is the same as Johnathan Hull noted on the thread, that if a goal is
to eliminate the pyramid of doom, requiring two levels of indentation to do
anything isn't a clear win.)
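To make the two shapes concrete, here is pseudocode in the proposal's own (never-shipped) syntax, contrasting the nesting required as proposed with the implicit-`do` form suggested here (`downloadFile`, `moveFile`, and `report` are hypothetical):

```
// As proposed: responsible code pays an extra indentation level.
beginAsync {
    do {
        let file = try await downloadFile()
        try moveFile(file)
    } catch {
        report(error)
    }
}

// As suggested: beginAsync itself acts as a `do`, so `catch` attaches directly.
beginAsync {
    let file = try await downloadFile()
    try moveFile(file)
} catch {
    report(error)
}
```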

2. The two suspendAsync overloads should be collapsed into only this one:
func suspendAsync<T>(
_ body: (_ continuation: @escaping (T) -> (),
_ error: @escaping (Error) -> ()) -> ()
) async throws -> T
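This two-closure shape is essentially what Swift later shipped as `withCheckedThrowingContinuation`: a single throwing suspension point whose continuation can be resumed with either a value or an error. A sketch (`legacyLoad` is a hypothetical callback-based API of the kind such a function is meant to wrap):

```swift
// A hypothetical callback-based API.
func legacyLoad(_ name: String,
                completion: @escaping (String?, Error?) -> Void) {
    completion("contents of \(name)", nil)
}

func load(_ name: String) async throws -> String {
    try await withCheckedThrowingContinuation { (cont: CheckedContinuation<String, Error>) in
        legacyLoad(name) { value, error in
            if let value = value {
                cont.resume(returning: value)     // the success continuation
            } else {
                cont.resume(throwing: error!)     // the error continuation
            }
        }
    }
}

print(try await load("data.txt"))   // prints "contents of data.txt"
```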

We can assume the programmers using this function basically know what
they're doing (which is not the same as being unwilling to cut corners, so
make them cut corners explicitly and visibly). If a user of this function
knows that no errors are possible, then it should be called with try! and
the error closure ignored, presumably by replacing it with `_`. To force
`try! suspendAsync` and then call the error closure would be a programmer
error. It would also be a programmer error to keep the throwing function
and then never call the error block, but that is already possible with the
API as proposed--worse, the single-continuation suspend overload actively
makes it easy to write code that ignores errors.

We are talking not only about a language change, but about making all Swift
programmers adopt a new methodology. It's an opportunity to build new
habits, as noted in the manifesto, by placing the right thing to do close
at hand. Two years ago, moving from passing an NSError pointer to
do-try-catch was a huge, obvious win--truly no one ever used the
NSError-pointer pattern outside the Cocoa frameworks, which was a
crisis-level problem in error handling--and it made dealing with errors so
much better. Completion blocks are not a crisis at the
NSError-pointer-pointer level, but this change should nevertheless bring a
similar improvement when doing the right thing.

(And my opinion on try?/try!: I seldom see `try?` used, and find it to be
an anti-pattern of ignoring errors instead of explicitly recovering; I
actually wish it wasn't in the language, but I guess some people find it
useful. `try!` is necessary and useful for cases where the compiler can't
guarantee success and the programmer is willing to assert it won't fail,
and `!` marks those points in code nicely, matching the optional syntax.)

About potential await? and await!: If we kept call sites `try await` (or
`await try`?) then the `try?/try!` semantics would behave the same,
another argument for that. I assume `await!` would have the current queue
block until the function returns.

The stronger need is for better recognition that often async functions will
_sometimes_ have their values or errors immediately, if cached, or known to
be impossible to get. In promise/futures, this is the equivalent of
creating a future already fulfilled with a value or error. This might just
be an educational point, though.

Maybe `await?` could mean exactly this check: produce the value without
suspending if it is already available, and return nil if it is not--or
throw if the error is already known, unless the call is also annotated with
try?.

Really interesting and insightful comments on this thread, look forward to
seeing how this further 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-22 Thread Félix Cloutier via swift-evolution
Alternatively, until futures are a thing, you can call `beginAsync` twice to 
start two async tasks:

func process() -> Void {
beginAsync {
let dataResource = await loadWebResource("bigData.txt")
//
}

print("BigData Scheduled to load")
beginAsync {
let dataResource = await loadWebResource("smallData.txt")
//
}
}

Futures have a number of advantages. For instance, you can use a nullable 
Future to keep track of whether the task has been started at all.
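That bookkeeping pattern can be sketched with the Task type Swift eventually shipped: an Optional task records whether the work was ever started (the `Loader` type and the literal 42 are illustrative only):

```swift
final class Loader {
    private var task: Task<Int, Never>? = nil

    var started: Bool { task != nil }

    func start() {
        guard task == nil else { return }   // don't start twice
        task = Task { 42 }                  // stand-in for the real async work
    }

    func result() async -> Int? {
        guard let task = task else { return nil }   // nil: never started
        return await task.value
    }
}

let loader = Loader()
print(loader.started)           // false
loader.start()
let value = await loader.result()
print(loader.started, value as Any)
```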

Félix

> Le 21 août 2017 à 13:32, Brent Royal-Gordon via swift-evolution 
>  a écrit :
> 
>> On Aug 21, 2017, at 12:41 PM, Wallacy via swift-evolution 
>> > wrote:
>> 
>> Based on these same concerns, how to do this using async/await ?
>> 
>> func process() -> Void) {
>> loadWebResource("bigData.txt") { dataResource in
>>//
>> }
>> printf("BigData Scheduled to load")
>> loadWebResource("smallData.txt") { dataResource in
>>//
>> }
>> printf("SmallData Scheduled to load")
>> 
>> }
> 
> 
> You would use something like the `Future` type mentioned in the proposal:
> 
>   func process() {
>   let bigDataFuture = Future { await 
> loadWebResource("bigData.txt") }
>   print("BigData scheduled to load")
>   
>   let smallDataFuture = Future { await 
> loadWebResource("smallData.txt") }
>   print("SmallData scheduled to load")
>   
>   let bigDataResource = await bigDataFuture.get()
>   let smallDataResource = await smallDataFuture.get()
>   // or whatever; you could probably chain off the futures to 
> handle whichever happens first first.
>   ...
>   }
> 
> -- 
> Brent Royal-Gordon
> Architechies
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-22 Thread Yuta Koshizawa via swift-evolution
>> On Aug 21, 2017, at 1:56 PM, John McCall  wrote:
>>
>> Personally, I think these sources of confusion are a good reason to keep the 
>> feature separate.
>>
>> The idea of using await! to block a thread is interesting but, as you say, 
>> does not fit with the general meaning of ! for logic errors.  I think it's 
>> fine to just have an API to block waiting for an async operation, and we can 
>> choose the name carefully to call out the danger of deadlocks.
>>
>> John.
>
> 2017-08-22 5:08 GMT+09:00 Karim Nassar :
>
> Thought about it in more depth, and I’m now firmly in the camp of: 
> ‘throws’/‘try' and ‘async’/‘await' should be orthogonal features. I think the 
> slight call-site reduction in typed characters ('try await’ vs ‘await’) is 
> heavily outweighed by the loss of clarity on all the edge cases.
>
> —Karim


I agree.

1. `async(nonthrowing)`, or `async` as a subtype of `throws`, can become an
obstacle when we want to add a third effect alongside `throws` and `async`,
though I think `async(nonthrowing)` is an interesting idea. Assuming the
new effect is named `foos`, we may want `async(nonfooing)`, or to make
`async` a subtype of `foos`. But both are hard because they are
source-breaking: we would need to modify all code which uses `async`. Yet
giving them up and making `foos` orthogonal to `async` would be
inconsistent with how `throws` was treated.

2. The same is true for `throws`. If we had introduced `async/await`
before `throws/try`, it would have been hard to introduce
`async(nonthrowing)` or `async` as a subtype of `throws`, because those
changes are source-breaking. (Although `async/await` without `throws/try`
seems impractical, it is not impossible: something like `func bar() async
-> Result` could stand in for throwing.)

So I think `async` and `throws` are essentially orthogonal, and merely
happen to be used together in most cases. I believe modeling the essential
relationship will keep the language simpler and prevent unexpected problems
in the future.

--
Yuta


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-21 Thread Brent Royal-Gordon via swift-evolution
> On Aug 21, 2017, at 12:41 PM, Wallacy via swift-evolution 
>  wrote:
> 
> Based on these same concerns, how to do this using async/await ?
> 
> func process() -> Void) {
> loadWebResource("bigData.txt") { dataResource in
> //
> }
> printf("BigData Scheduled to load")
> loadWebResource("smallData.txt") { dataResource in
> //
> }
> printf("SmallData Scheduled to load")
> 
> }


You would use something like the `Future` type mentioned in the proposal:

func process() {
let bigDataFuture = Future { await loadWebResource("bigData.txt") }
print("BigData scheduled to load")

let smallDataFuture = Future { await loadWebResource("smallData.txt") }
print("SmallData scheduled to load")

let bigDataResource = await bigDataFuture.get()
let smallDataResource = await smallDataFuture.get()
// or whatever; you could probably chain off the futures to
// handle whichever finishes first, first.
...
}
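In the concurrency model as it eventually shipped, the same shape is written with `async let` bindings instead of an explicit Future type: both loads start immediately and are awaited later. A sketch with the thread's hypothetical `loadWebResource` stubbed out:

```swift
func loadWebResource(_ name: String) async -> String {
    return "resource:\(name)"   // stand-in for real networking
}

func process() async {
    async let big = loadWebResource("bigData.txt")      // starts immediately
    print("BigData scheduled to load")

    async let small = loadWebResource("smallData.txt")  // also starts immediately
    print("SmallData scheduled to load")

    // Suspend only here, once both results are actually needed.
    let (bigDataResource, smallDataResource) = await (big, small)
    print(bigDataResource, smallDataResource)
}

await process()
```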

-- 
Brent Royal-Gordon
Architechies



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-21 Thread Karim Nassar via swift-evolution
Thought about it in more depth, and I’m now firmly in the camp of: 
‘throws’/‘try' and ‘async’/‘await' should be orthogonal features. I think the 
slight call-site reduction in typed characters ('try await’ vs ‘await’) is 
heavily outweighed by the loss of clarity on all the edge cases.

—Karim

> On Aug 21, 2017, at 1:56 PM, John McCall  wrote:
> 
>> 
>> On Aug 20, 2017, at 3:56 PM, Yuta Koshizawa > > wrote:
>> 
>> 2017-08-21 2:20 GMT+09:00 John McCall via swift-evolution 
>> >:
>>> On Aug 19, 2017, at 7:17 PM, Chris Lattner via swift-evolution 
>>> > wrote:
 On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
 > wrote:
 
 This looks fantastic. Can’t wait (heh) for async/await to land, and the 
 Actors pattern looks really compelling.
 
 One thought that occurred to me reading through the section of the 
 "async/await" proposal on whether async implies throws:
 
 If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we 
 want to suppress the catch block with ?/!, does that mean we do it on the 
 ‘await’ ? 
 
 guard let foo = await? getAFoo() else {  …  }
>>> 
>>> Interesting question, I’d lean towards “no, we don’t want await? and 
>>> await!”.  My sense is that the try? and try! forms are only occasionally 
>>> used, and await? implies heavily that the optional behavior has something 
>>> to do with the async, not with the try.  I think it would be ok to have to 
>>> write “try? await foo()” in the case that you’d want the thrown error to 
>>> turn into an optional.  That would be nice and explicit.
>> 
>> try? and try! are quite common from what I've seen.
>> 
>> As analogous to `throws` and `try`, I think we have an option that `await!` 
>> means blocking.
>> 
>> First, if we introduce something like `do/catch` for `async/await`, I think 
>> it should be for blocking. For example:
>> 
>> ```
>> do {
>>   return await foo()
>> } block
>> ```
>> 
>> It is consistent with `do/try/catch` because it should allow to return a 
>> value from inside `do` blocks for an analogy of `throws/try`.
>> 
>> ```
>> // `throws/try`
>> func foo() -> Int {
>>   do {
>> return try bar()
>>   } catch {
>> ...
>>   }
>> }
>> 
>> // `async/await`
>> func foo() -> Int {
>>   do {
>> return await bar()
>>   } block
>> }
>> ```
>> 
>> And `try!` is similar to `do/try/catch`.
>> 
>> ```
>> // `try!`
>> let x = try! foo()
>> // uses `x` here
>> 
>> // `do/try/catch`
>> do {
>>   let x = try foo()
>>   // uses `x` here
>> } catch {
>>   fatalError()
>> }
>> ```
>> 
>> If `try!` is a sugar of `do/try/catch`, it also seems natural that `await!` 
>> is a sugar of `do/await/block`. However, currently all `!` in Swift are 
>> related to a logic failure. So I think using `!` for blocking is not so 
>> natural in point of view of symbology.
>> 
>> Anyway, I think it is valuable to think about what `do` blocks for 
>> `async/await` mean. It is also interesting that thinking about combinations 
>> of `catch` and `block` for `async throws` functions: e.g. If only `block`, 
>> the enclosing function should be `throws`.
> 
> Personally, I think these sources of confusion are a good reason to keep the 
> feature separate.
> 
> The idea of using await! to block a thread is interesting but, as you say, 
> does not fit with the general meaning of ! for logic errors.  I think it's 
> fine to just have an API to block waiting for an async operation, and we can 
> choose the name carefully to call out the danger of deadlocks.
> 
> John.
> 
>> 
>> That aside, I think `try!` is not so occasional and is in fact important.
>> Static typing has limitations. For example, even if we have a text field
>> which only accepts numbers, we still get the input value as a string, and
>> parsing it can fail at the type level even though it never actually fails.
>> If we did not have easy ways to convert such a simple domain error or
>> recoverable error into a logic failure, people would start ignoring them,
>> as we have seen in Java with `catch(Exception e) {}`. Now that we have
>> `JSONDecoder`, we will see much more `try!` for bundled JSON files in apps
>> or JSON generated by code, for which a decoding failure is a logic failure.
>> 
>> --
>> Yuta



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-21 Thread Wallacy via swift-evolution
Based on these same concerns, how to do this using async/await ?

func process() {

loadWebResource("bigData.txt") { dataResource in
  //
}

print("BigData Scheduled to load")
loadWebResource("smallData.txt") { dataResource in
  //
}
print("SmallData Scheduled to load")

}


Small data will usually finish first using GCD, but with async/await there
appears to be no way to schedule other things (on another queue, maybe) in
the same function. I know this is about parallelism, but it is something we
would lose relative to GCD.

I know parallelism is not the focus, but it is important for people
migrating from GCD.

Maybe:

yield loadWebResource("bigData.txt") // has a void return.

yield loadWebResource("smallData.txt") // has a void return.


I don't know if yield is the right keyword here, but I think some way to
*not* wait is important.

Em seg, 21 de ago de 2017 às 15:04, Adam Kemp via swift-evolution <
swift-evolution@swift.org> escreveu:

> On Aug 18, 2017, at 8:38 PM, Chris Lattner  wrote:
>
> On Aug 18, 2017, at 2:09 PM, Adam Kemp  wrote:
>
> Maybe I’m still missing something, but how does this help when you are
> interacting only with Swift code? If I were to write an asynchronous method
> in Swift then how could I do the same thing that you propose that the
> Objective-C importer do? That is, how do I write my function such that it
> calls back on the same queue?
>
>
> You’re right: if you’re calling something written in Swift, the ObjC
> importer isn’t going to help you.
>
> However, if you’re writing an async function in Swift, then it is
> reasonable for us to say what the convention is and expect you to follow
> it.  Async/await doesn’t itself help you implement an async operation: it
> would be turtles all the way down… until you get to GCD, which is where you
> do the async thing.
>
> As such, as part of rolling out async/await in Swift, I’d expect that GCD
> would introduce new API or design patterns to support doing the right thing
> here.  That is TBD as far as the proposal goes, because it doesn’t go into
> runtime issues.
>
>
> The point I’m trying to make is that this is so important that I don’t
> think it’s wise to leave it up to possible future library improvements, and
> especially not to convention. Consider this example again from your
> proposal:
>
> @IBAction func buttonDidClick(sender:AnyObject) {
> doSomethingOnMainThread();
> beginAsync {
> let image = await processImage()
> imageView.image = image
> }
> doSomethingElseOnMainThread();
> }
>
> The line that assigns the image to the image view is very likely running
> on the wrong thread. That code looks simple, but it is not safe. You would
> have to insert a line like your other examples to ensure it’s on the right
> thread:
>
> @IBAction func buttonDidClick(sender:AnyObject) {
> doSomethingOnMainThread();
> beginAsync {
>
> let image = await processImage()
>
> await DispatchQueue.main.asyncCoroutine()
> imageView.image = image
> }
> doSomethingElseOnMainThread();
> }
>
>
> You would have to litter your code with that kind of stuff just in case
> you’re on the wrong thread because there’s no way to tell where you’ll end
> up after the await. In fact, this feature would make it much easier to end
> up calling back on different queues in different circumstances because it
> makes queue hopping invisible. From another example:
>
> func processImageData1() async -> Image {
>
>   let dataResource  = await loadWebResource("dataprofile.txt")
>
>   let imageResource = await loadWebResource("imagedata.dat")
>   let imageTmp  = await decodeImage(dataResource, imageResource)
>   let imageResult   =  await dewarpAndCleanupImage(imageTmp)
>   return imageResult
> }
>
>
> Which queue does a caller end up in? Whichever queue that last awaited
> call gives you. This function does nothing to try to ensure that you always
> end up on the same queue. If someone changes the code by adding or removing
> one of those await calls then the final callback queue would change. If
> there were conditionals in there that changed the code flow at runtime then
> you could end up calling back on different queues depending on some runtime
> state.
>
> IMO this would make doing safe async programming actually more difficult
> to get right. It would be tedious and error prone. This simplified
> async/await model may work well for JavaScript, which generally doesn’t
> have shared mutable state across threads, but it seems dangerous in a
> language that does.
>
> This isn’t a fair transformation though, and isn’t related to whether
> futures is part of the library or language.  The simplification you got
> here is by making IBAction’s implicitly async.  I don’t see that that is
> possible, since they have a very specific calling convention (which returns
> void) and are invoked by 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-21 Thread Adam Kemp via swift-evolution


> On Aug 18, 2017, at 8:38 PM, Chris Lattner  > wrote:
> 
> On Aug 18, 2017, at 2:09 PM, Adam Kemp  > wrote:
>> Maybe I’m still missing something, but how does this help when you are 
>> interacting only with Swift code? If I were to write an asynchronous method 
>> in Swift then how could I do the same thing that you propose that the 
>> Objective-C importer do? That is, how do I write my function such that it 
>> calls back on the same queue?
> 
> You’re right: if you’re calling something written in Swift, the ObjC importer 
> isn’t going to help you.
> 
> However, if you’re writing an async function in Swift, then it is reasonable 
> for us to say what the convention is and expect you to follow it.  
> Async/await doesn’t itself help you implement an async operation: it would be 
> turtles all the way down… until you get to GCD, which is where you do the 
> async thing.
> 
> As such, as part of rolling out async/await in Swift, I’d expect that GCD 
> would introduce new API or design patterns to support doing the right thing 
> here.  That is TBD as far as the proposal goes, because it doesn’t go into 
> runtime issues.

The point I’m trying to make is that this is so important that I don’t think 
it’s wise to leave it up to possible future library improvements, and 
especially not to convention. Consider this example again from your proposal:

@IBAction func buttonDidClick(sender:AnyObject) {  
doSomethingOnMainThread();
beginAsync {
let image = await processImage()
imageView.image = image
}
doSomethingElseOnMainThread();
}

The line that assigns the image to the image view is very likely running on the 
wrong thread. That code looks simple, but it is not safe. You would have to 
insert a line like your other examples to ensure it’s on the right thread:

@IBAction func buttonDidClick(sender:AnyObject) {  
doSomethingOnMainThread();
beginAsync {
let image = await processImage()
await DispatchQueue.main.asyncCoroutine()
imageView.image = image
}
doSomethingElseOnMainThread();
}

You would have to litter your code with that kind of stuff just in case you’re 
on the wrong thread because there’s no way to tell where you’ll end up after 
the await. In fact, this feature would make it much easier to end up calling 
back on different queues in different circumstances because it makes queue 
hopping invisible. From another example:

func processImageData1() async -> Image {
  let dataResource  = await loadWebResource("dataprofile.txt")
  let imageResource = await loadWebResource("imagedata.dat")
  let imageTmp  = await decodeImage(dataResource, imageResource)
  let imageResult   =  await dewarpAndCleanupImage(imageTmp)
  return imageResult
}

Which queue does a caller end up in? Whichever queue that last awaited call 
gives you. This function does nothing to try to ensure that you always end up 
on the same queue. If someone changes the code by adding or removing one of 
those await calls then the final callback queue would change. If there were 
conditionals in there that changed the code flow at runtime then you could end 
up calling back on different queues depending on some runtime state.

IMO this would make doing safe async programming actually more difficult to get 
right. It would be tedious and error prone. This simplified async/await model 
may work well for JavaScript, which generally doesn’t have shared mutable state 
across threads, but it seems dangerous in a language that does.
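As it turned out, the design Swift eventually shipped addressed exactly this concern with actor isolation: an awaited call resumes back on the caller's actor, so the queue hopping is handled by the language rather than by convention. A sketch of the buttonDidClick example using the later `@MainActor` attribute (`Controller` and `processImage` are hypothetical; the task is returned only to make the sketch observable):

```swift
@MainActor
final class Controller {
    var image: String? = nil

    @discardableResult
    func buttonDidClick() -> Task<Void, Never> {
        Task {   // inherits @MainActor isolation from this context
            // processImage may run on any executor internally, but after
            // the await this closure resumes on the main actor, so touching
            // main-actor state is safe without manual queue hopping.
            let image = await processImage()
            self.image = image
        }
    }

    nonisolated func processImage() async -> String {
        return "image"
    }
}
```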

> This isn’t a fair transformation though, and isn’t related to whether futures 
> is part of the library or language.  The simplification you got here is by 
> making IBAction’s implicitly async.  I don’t see that that is possible, since 
> they have a very specific calling convention (which returns void) and are 
> invoked by objc_msgSend.  OTOH, if it were possible to do this, it would be 
> possible to do it with the proposal as outlined.

I didn’t mean to imply that all IBActions are implicitly async. I just allowed for 
an entire method to be async without being awaitable. In C# an async void 
function is a “fire and forget” function. It executes in the context of the 
caller’s thread/stack up until the first await, at which point it returns to 
the caller like normal. The continuation just happens without the caller 
knowing about it. The method signature is the same, and they are callable by 
code that is unaware of async/await. C# supports async void functions 
specifically for the event handler use case (and it is generally discouraged 
for all other use cases).

Your proposal already has async void methods, but they are awaitable. You still 
need some ability to call an async method from a non-async method. The way that 
you solved that is a special function (beginAsync), which as I described 
earlier has some issues with readability. 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-21 Thread John McCall via swift-evolution

> On Aug 20, 2017, at 3:56 PM, Yuta Koshizawa  wrote:
> 
> 2017-08-21 2:20 GMT+09:00 John McCall via swift-evolution 
> >:
>> On Aug 19, 2017, at 7:17 PM, Chris Lattner via swift-evolution 
>> > wrote:
>>> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
>>> > wrote:
>>> 
>>> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
>>> Actors pattern looks really compelling.
>>> 
>>> One thought that occurred to me reading through the section of the 
>>> "async/await" proposal on whether async implies throws:
>>> 
>>> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we want 
>>> to suppress the catch block with ?/!, does that mean we do it on the 
>>> ‘await’ ? 
>>> 
>>> guard let foo = await? getAFoo() else {  …  }
>> 
>> Interesting question, I’d lean towards “no, we don’t want await? and 
>> await!”.  My sense is that the try? and try! forms are only occasionally 
>> used, and await? implies heavily that the optional behavior has something to 
>> do with the async, not with the try.  I think it would be ok to have to 
>> write “try? await foo()” in the case that you’d want the thrown error to 
>> turn into an optional.  That would be nice and explicit.
> 
> try? and try! are quite common from what I've seen.
> 
> As analogous to `throws` and `try`, I think we have an option that `await!` 
> means blocking.
> 
> First, if we introduce something like `do/catch` for `async/await`, I think 
> it should be for blocking. For example:
> 
> ```
> do {
>   return await foo()
> } block
> ```
> 
> It is consistent with `do/try/catch` because it should allow to return a 
> value from inside `do` blocks for an analogy of `throws/try`.
> 
> ```
> // `throws/try`
> func foo() -> Int {
>   do {
> return try bar()
>   } catch {
> ...
>   }
> }
> 
> // `async/await`
> func foo() -> Int {
>   do {
> return await bar()
>   } block
> }
> ```
> 
> And `try!` is similar to `do/try/catch`.
> 
> ```
> // `try!`
> let x = try! foo()
> // uses `x` here
> 
> // `do/try/catch`
> do {
>   let x = try foo()
>   // uses `x` here
> } catch {
>   fatalError()
> }
> ```
> 
> If `try!` is a sugar of `do/try/catch`, it also seems natural that `await!` 
> is a sugar of `do/await/block`. However, currently all `!` in Swift are 
> related to a logic failure. So I think using `!` for blocking is not so 
> natural in point of view of symbology.
> 
> Anyway, I think it is valuable to think about what `do` blocks for 
> `async/await` mean. It is also interesting that thinking about combinations 
> of `catch` and `block` for `async throws` functions: e.g. If only `block`, 
> the enclosing function should be `throws`.

Personally, I think these sources of confusion are a good reason to keep the 
feature separate.

The idea of using await! to block a thread is interesting but, as you say, does 
not fit with the general meaning of ! for logic errors.  I think it's fine to 
just have an API to block waiting for an async operation, and we can choose the 
name carefully to call out the danger of deadlocks.

John.

> 
> That aside, I think `try!` is not so occasional and is in fact important.
> Static typing has limitations. For example, even if we have a text field
> which only accepts numbers, we still get the input value as a string, and
> parsing it can fail at the type level even though it never actually fails.
> If we did not have easy ways to convert such a simple domain error or
> recoverable error into a logic failure, people would start ignoring them,
> as we have seen in Java with `catch(Exception e) {}`. Now that we have
> `JSONDecoder`, we will see much more `try!` for bundled JSON files in apps
> or JSON generated by code, for which a decoding failure is a logic failure.
> 
> --
> Yuta



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-21 Thread Philippe Hausler via swift-evolution
I have read over the proposal and already asked a few questions ahead of the 
email chain and I have some follow up points that are perhaps worth 
consideration.

First off, I think that async/await is a great concept! From personally using a 
similar concept in C#, I think it can be an approachable yet powerful tool to 
resolve a decent number of problems with threading. It is worth stating, 
however, that it does NOT solve all problems with threading, or at least, not 
on its own.

One thing I have in mind is that Swift, of course, does not exist in a vacuum. It, 
by nature, is being used for writing iOS, watchOS and macOS applications to be 
shipped on those platforms. On those platforms, libdispatch is often used for 
things like completion handlers and similar (or indirectly used via 
NSOperationQueue). Of course there are a few of those devices that only have 
one core or are thermally constrained under load. It is then imperative that 
applications (at least under the hood) utilize appropriate quality of service 
to ensure that certain workloads do not get scheduled as much as others. For 
example: if an application is synchronizing some sort of state over the 
network, it may be the best choice to run that work at a low 
quality-of-service. In this example, if a specific piece of work is then 
blocked by the work that is running at a low quality-of-service, it needs to 
temporarily override that low quality-of-service to match the blocked work. I 
think that any concurrency model we consider should be able to work 
constructively with a situation like this, and take QoS into account.

For sake of argument, though: let's presume that completion handlers are always 
going to be appropriately scheduled for their quality-of-service. So why is the 
override important to think about? Well... in the cases of single core, or 
other cases where a ton of work is limiting the number of cores available, 
there can be a problem known as a priority inversion. If no override is made, 
the low priority work can be starved by the scheduling of a waiter. This 
results in a deadlock. Now you might of course think: "oh hey, that's 
DispatchSemaphore's responsibility, or pthread_condition_t" etc... 
Unfortunately semaphores or conditions do not have the appropriate information 
to convey this. To offer a solution to this problem, the start of the 
asynchronous work needs to be recorded for the threads involved to the end of 
that asynchronous work, then at the beginning of the waiting section an 
override needs to be created against those asynchronous threads and the 
override is ended at the point that it is done waiting.

In short, naively waiting on a completion handler can cause severe 
performance problems - to resolve this it seems like we absolutely need 
more entry points that perform the correct promotions of QoS. 

What do you think is the best way of approaching a resolution for this 
potential pitfall?
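For reference, the Task API Swift later shipped bakes this behavior in: priority attaches to tasks, and awaiting a task from a higher-priority context lets the runtime escalate the awaited task, which is the override described above (the runtime, not the programmer, performs the boost). A sketch under those assumptions (the work closures are stand-ins):

```swift
// Low-priority, long-running synchronization work.
let background = Task(priority: .background) { () -> Int in
    return 7
}

// A user-visible task that suddenly depends on the background work.
let urgent = Task(priority: .userInitiated) { () -> Int in
    // Awaiting here records the dependency; the runtime can escalate the
    // background task's priority to avoid a priority inversion.
    await background.value
}

print(await urgent.value)   // prints 7
```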

> On Aug 17, 2017, at 3:24 PM, Chris Lattner via swift-evolution 
>  wrote:
> 
> Hi all,
> 
> As Ted mentioned in his email, it is great to finally kick off discussions 
> for what concurrency should look like in Swift.  This will surely be an epic 
> multi-year journey, but it is more important to find the right design than to 
> get there fast.
> 
> I’ve been advocating for a specific model involving async/await and actors 
> for many years now.  Handwaving only goes so far, so some folks asked me to 
> write them down to make the discussion more helpful and concrete.  While I 
> hope these ideas help push the discussion on concurrency forward, this isn’t 
> in any way meant to cut off other directions: in fact I hope it helps give 
> proponents of other designs a model to follow: a discussion giving extensive 
> rationale, combined with the long term story arc to show that the features 
> fit together.
> 
> Anyway, here is the document, I hope it is useful, and I’d love to hear 
> comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
> 
> -Chris
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution

___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-20 Thread Yuta Koshizawa via swift-evolution
2017-08-21 2:20 GMT+09:00 John McCall via swift-evolution <
swift-evolution@swift.org>:

> On Aug 19, 2017, at 7:17 PM, Chris Lattner via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution <
> swift-evolution@swift.org> wrote:
>
> This looks fantastic. Can’t wait (heh) for async/await to land, and the
> Actors pattern looks really compelling.
>
> One thought that occurred to me reading through the section of the
> "async/await" proposal on whether async implies throws:
>
> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we
> want to suppress the catch block with ?/!, does that mean we do it on the
> ‘await’ ?
>
>
> guard let foo = await? getAFoo() else {  …  }
>
>
> Interesting question, I’d lean towards “no, we don’t want await? and
> await!”.  My sense is that the try? and try! forms are only occasionally
> used, and await? implies heavily that the optional behavior has something
> to do with the async, not with the try.  I think it would be ok to have to
> write “try? await foo()” in the case that you’d want the thrown error to
> turn into an optional.  That would be nice and explicit.
>
>
> try? and try! are quite common from what I've seen.
>

By analogy with `throws` and `try`, I think we have the option that `await!`
means blocking.

First, if we introduce something like `do/catch` for `async/await`, I think
it should be for blocking. For example:

```
do {
  return await foo()
} block
```

It is consistent with `do/try/catch` because it should allow returning a
value from inside the `do` block, by analogy with `throws/try`.

```
// `throws/try`
func foo() -> Int {
  do {
return try bar()
  } catch {
...
  }
}

// `async/await`
func foo() -> Int {
  do {
return await bar()
  } block
}
```

And `try!` is similar to `do/try/catch`.

```
// `try!`
let x = try! foo()
// uses `x` here

// `do/try/catch`
do {
  let x = try foo()
  // uses `x` here
} catch {
  fatalError()
}
```

If `try!` is sugar for `do/try/catch`, it also seems natural that `await!`
would be sugar for `do/await/block`. However, currently every `!` in Swift is
related to a logic failure, so I think using `!` for blocking is not so
natural from the point of view of symbology.

Anyway, I think it is valuable to think about what `do` blocks for
`async/await` mean. It is also interesting to think about combinations
of `catch` and `block` for `async throws` functions: e.g. if there is only a
`block`, the enclosing function should be `throws`.

That aside, I think `try!` is not so occasional, and it is important. Static
typing has limitations. For example, even if we have a text field which
allows only numbers to be input, we still get the input value as a string, and
parsing it may fail at the type level even though it can never actually fail.
If we did not have easy ways to convert such a simple domain error or
recoverable error into a logic failure, people would start ignoring them, as
we have seen in Java with `catch(Exception e) {}`. Now that we have
`JSONDecoder`, we will see much more `try!` for bundled JSON files in apps, or
for JSON generated by code, where a decoding failure is a logic failure.
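The bundled-JSON point can be made concrete with a small sketch (the `Theme` type and the JSON here are hypothetical, not from the thread): when the input is shipped with the app and known to be well-formed, a decode failure is a programmer error, and `try!` turns it into a trap rather than forcing a meaningless catch.

```swift
import Foundation

struct Theme: Decodable {
    let name: String
    let accent: String
}

// JSON generated by our own build step: if this fails to decode,
// it is a logic failure (a bad build), not a recoverable error,
// so `try!` trapping is the appropriate response.
let bundled = Data("""
    { "name": "dark", "accent": "#ff9500" }
    """.utf8)

let theme = try! JSONDecoder().decode(Theme.self, from: bundled)
print(theme.name) // → dark
```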

--
Yuta


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-20 Thread Benjamin Spratling via swift-evolution
Howdy,

It’s good to have a formal, language supported way to manage these aspects of 
concurrency.

In particular, I like that it establishes the asynchronously provided values 
into the surrounding scope, like guard.  This helps me write a clean sequence 
of calls with access to values which ‘skip’ a call.  Often I need to make a few 
asynchronous calls which fetch disparate value types from disparate endpoints 
or frameworks and then provide them all together into some final API.

I also like that the actor model formally wraps up access to shared resources.  
It seems using these together, the compiler would be able to do some 
fundamental reasoning about whether deadlocks could occur (or preventing them 
entirely without expensive run-time mechanisms) and also determine if certain 
code paths were forgotten.  One question I have is “how do we assign queues or 
priorities to the async methods?”  I had been toying with the idea of declaring 
a reference to the queue on which async methods run at the definition of the 
method, but this actor model groups around the data which needs 
coordination instead, which is possibly better.

I haven’t yet read the philosophy links, so if I’m repeating ideas, or these 
ideas are moot in light of something, I guess ignore me, and I’ll just feel 
slightly embarrassed later.

However, at the UI level we often don’t even use the GCD methods because they 
do not directly support cancelation and progress reporting, so we use 
NSOperation and NSOperationQueue instead.  To me, adding a language feature 
which gets ignored when I build any full-featured app won’t be worth the time.  
Whatever is done here, we need to be cognizant that someone will implement 
another layer on top which does provide progress reporting and cancellation and 
make sure there’s a clean point of entry, much like writing the Integer 
protocols didn’t implement arbitrary-sized integers, but gave various 
implementations a common interface.

I know that data parallelism is out of scope, but GPU’s have been mentioned.  
When I write code which processes large amounts of data (whether that’s 
scientific data analysis or exporting audio or video tracks), it invariably 
uses many different hardware resources, files, GPU’s and others.  The biggest 
unsolved system-level problem I see (besides the inter-process APIs mentioned) 
is a way to effectively “pull” data through the system, instead of the current 
“push”-oriented API’s.  With push, we send tasks to queues.  Perhaps a 
resource, like reading data from 1000 files, is slower than the later stage, 
like using the GPU to perform optimal feature detection in each file.  So my 
code runs fine when executed.  However, later I add just a slightly slower GPU 
task, now my files fill up memory faster than my GPU’s drain it, and instead of 
everything running fine, my app exhausts memory and the entire process crashes. 
 Sure I can create a semaphore to read only the “next” file into memory at a 
time, but I suppose that’s my point.  Instead of getting to focus on my task of 
analyzing several GB’s of data, I’m spending time worrying about creating a 
pull asynchronous architecture.  I don’t know whether formal “pull” could be in 
scope for this next phase, but let’s learn from the problem of the deprecated 
“current queue” function in GCD which created a fundamental impossibility of 
writing run-time safe “dispatch_sync” calls, and provide at least the hooks 
into the system-detected available compute resources.  (If “get_current_queue” 
had provided a list of the stack of the queues, it would have been usable.)

Together with the preceding topic is the idea of cancellation of long-running 
processes.  Maybe that’s because the user needs to board the train and doesn’t 
want to continue draining power while exporting a large video document they 
could export later.  Or maybe it’s because the processing for this frame of the 
live stream of whatever is taking too long and is going to get dropped.  But 
here we are again, expressing dependencies and cancellation, like the high 
level frameworks.

I’m glad someone brought up NSURLSessionDataTask, because it provides a more 
realistic window into the kinds of features we’re currently using.  If we don’t 
design a system which improves this use, then I question whether we’ve made 
real progress.

NSBlockOperation was all but useless to me, since it didn’t provide a reference 
to ‘self’ when its block got called; I had to write a subclass which did, so 
that the block could ask the operation whether it had been cancelled, to 
implement cancellation of long-running tasks. NSOperation also lacks a clean 
way to pass data from one block to those dependent on it, so there are lots of 
subclasses to write to handle that in a generic way.  I feel like block-based 
undo methods understood 
this when they added the separate references to weakly-held targets.  That 
enabled me to use the framework methods out of the box.
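The kind of subclass being described might look roughly like this (a sketch of one common workaround; `CancellableOperation` is a hypothetical name, not code from the thread): the operation passes itself into its block so the body can poll `isCancelled` cooperatively.

```swift
import Foundation

// A block operation whose body receives the operation itself, so a
// long-running body can poll `isCancelled` cooperatively: the
// capability the author notes NSBlockOperation does not provide.
final class CancellableOperation: Operation {
    private let body: (Operation) -> Void

    init(body: @escaping (Operation) -> Void) {
        self.body = body
        super.init()
    }

    override func main() {
        guard !isCancelled else { return }
        body(self)
    }
}

let op = CancellableOperation { op in
    for chunk in 0..<1_000 {
        if op.isCancelled { return }   // cooperative cancellation point
        _ = chunk                      // process one chunk of work here
    }
}
op.start()
```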

So my point is 

Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-20 Thread John McCall via swift-evolution
> On Aug 19, 2017, at 7:17 PM, Chris Lattner via swift-evolution 
>  wrote:
>> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
>> > wrote:
>> 
>> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
>> Actors pattern looks really compelling.
>> 
>> One thought that occurred to me reading through the section of the 
>> "async/await" proposal on whether async implies throws:
>> 
>> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we want 
>> to suppress the catch block with ?/!, does that mean we do it on the ‘await’ 
>> ? 
>> 
>> guard let foo = await? getAFoo() else {  …  }
> 
> Interesting question, I’d lean towards “no, we don’t want await? and await!”. 
>  My sense is that the try? and try! forms are only occasionally used, and 
> await? implies heavily that the optional behavior has something to do with 
> the async, not with the try.  I think it would be ok to have to write “try? 
> await foo()” in the case that you’d want the thrown error to turn into an 
> optional.  That would be nice and explicit.

try? and try! are quite common from what I've seen.

John.


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Georgios Moschovitis via swift-evolution

> what's important is that the code may pause for a while during a given 
> expression and run other stuff in the meantime.

I think that’s what `yield` actually means. In your sentence there is nothing 
about (a)waiting, only about pausing and ‘yielding’ CPU time to ‘run other 
stuff’.

-g.



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread David Hart via swift-evolution


> On 20 Aug 2017, at 01:17, Chris Lattner via swift-evolution 
>  wrote:
> 
> 
>> On Aug 19, 2017, at 8:14 AM, Karim Nassar via swift-evolution 
>>  wrote:
>> 
>> This looks fantastic. Can’t wait (heh) for async/await to land, and the 
>> Actors pattern looks really compelling.
>> 
>> One thought that occurred to me reading through the section of the 
>> "async/await" proposal on whether async implies throws:
>> 
>> If ‘async' implies ‘throws' and therefore ‘await' implies ‘try’, if we want 
>> to suppress the catch block with ?/!, does that mean we do it on the ‘await’ 
>> ? 
>> 
>> guard let foo = await? getAFoo() else {  …  }
> 
> Interesting question, I’d lean towards “no, we don’t want await? and await!”. 
>  My sense is that the try? and try! forms are only occasionally used, and 
> await? implies heavily that the optional behavior has something to do with 
> the async, not with the try.  I think it would be ok to have to write “try? 
> await foo()” in the case that you’d want the thrown error to turn into an 
> optional.  That would be nice and explicit.

That seems like an argument in favor of having async and throws as orthogonal 
concepts.

> -Chris
> 


Re: [swift-evolution] [Concurrency] async/await + actors: cancellation

2017-08-19 Thread Jan Tuitman via swift-evolution
Hi Joe,

Thanks for the answers so far! 

Abrupt cancellation is indeed not a good idea, but I wonder whether it is 
possible, at every place where “await” is used, to have the compiler generate 
code which handles cancellation, assuming that can be cheap enough (and I am 
not qualified to judge whether that is indeed the case).

Especially in the case where “await” implies “throws”, part of what you need 
for that is already in place. I imagine that it would work like this:
I imagine f(x) -> T is compiled as something that looks like f(x, callback: (T) 
-> Void). What if this were f(x, process, callback), where process is a simple 
pointer that goes out of scope together with callback? The compiler can use 
this pointer to access compiler-generated mutable state, to see whether the 
top-level beginAsync { } in whose context the call is being executed has been 
canceled. The compiler could generate this check whenever it is about to make a 
new await call at a deeper level, and if the check says the top level has been 
canceled, it can throw an exception. 

Would that introduce too much overhead? It does not seem to need references to 
the top level any longer than the callback needs to be kept alive.

I am asking this because, once async/await is there, it will probably 
immediately become very popular, and the use case of having to abort a task 
from the same location where you started it is of course a very common one. 
Think of a view controller downloading some resources and then being moved 
off the screen by the user.

If everybody needs to wrap async/await in classes which handle cancellation 
and share state with the tasks that can be cancelled, it might be cleaner to 
solve this problem in an invisible way, so that it is also standardized. This 
way there is more separation between the code of the task and the code that 
starts and cancels the task.

I assume actors in the future also are going to need a way to tell each other 
that pending messages can be cancelled, so, I think, in the end you need 
something for cancellation anyways. 

For the programmer it would look like this:

var result
var process = beginAsync {
    result = await someSlowFunctionWithManyAwaitsInside(x)
}

// If it is no longer needed:
process.cancel()
// This will raise an exception inside someSlowFunction if that function enters
// an await, but not while it is waiting or actively doing something. So it is
// also not guaranteed to cancel the function.



Regards,
Jan



> Op 18 aug. 2017 om 21:04 heeft Joe Groff  het volgende 
> geschreven:
> 
> 
>> On Aug 17, 2017, at 11:53 PM, Jan Tuitman via swift-evolution 
>>  wrote:
>> 
>> Hi,
>> 
>> 
>> After reading Chris Lattners proposal for async/await I wonder if the 
>> proposal has any way to cancel outstanding tasks.
>> 
>> I saw these two:
>> 
>> @IBAction func buttonDidClick(sender:AnyObject) {
>> // 1
>> beginAsync {
>>  // 2
>>  let image = await processImage()
>>  imageView.image = image
>> }
>> // 3
>> } 
>> 
>> 
>> And:
>> 
>> /// Shut down the current coroutine and give its memory back to the
>> /// shareholders.
>> func abandon() async -> Never {
>> await suspendAsync { _ = $0 }
>> }
>> 
>> 
>> Now, if I understand this correctly, the second thing is abandoning the task 
>> from the context of the task by basically preventing the implicit callback 
>> of abandon() to ever be called.
>> 
>> But I don't see any way how the beginAsync {} block can be canceled after a 
>> certain amount of time by the synchronous thread/context that is running at 
>> location //3
> 
> This is not something the proposal aims to support, and as you noted, abrupt 
> cancellation from outside a thread is not something you should generally do, 
> and which is not really possible to do robustly with cooperatively-scheduled 
> fibers like the coroutine proposal aims to provide. The section above is 
> making the factual observation that, in our model, a coroutine once suspended 
> can end up being dropped entirely by releasing all references to its 
> continuation, and discusses the impact that possibility has on the model. 
> This shouldn't be mistaken for proper cancellation support; as David noted, 
> that's something you should still code explicit support for if you need it.
> 
> -Joe
> 


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Matthew Johnson via swift-evolution


Sent from my iPad

> On Aug 19, 2017, at 9:33 PM, Brent Royal-Gordon  
> wrote:
> 
>> On Aug 19, 2017, at 7:41 AM, Matthew Johnson  wrote:
>> 
>> Regardless of which approach we take, it feels like something that needs to 
>> be implicit for structs and enums where value semantics is trivially 
>> provable by way of transitivity. When that is not the case we could require 
>> an explicit `value` or `nonvalue` annotation (specific keywords subject to 
>> bikeshedding of course).
> 
> There is no such thing as "trivially provable by way of transitivity". This 
> type is comprised of only value types, and yet it has reference semantics:
> 
>   struct EntryRef {
>   private var index: Int
>   
>   var entry: Entry {
>   get { return entries[index] }
>   set { entries[index] = newValue }
>   }
>   }

This type uses global mutable state in its implementation.  This is not hard 
for the compiler to detect and is pretty rare in most code.

> 
> This type is comprised of only reference types, and yet it has value 
> semantics:
> 
>   struct OpaqueToken: Equatable {
>   class Token {}
>   private let token: Token
>   
>   static func == (lhs: OpaqueToken, rhs: OpaqueToken) -> Bool {
>   return lhs.token === rhs.token
>   }
>   }

Yes, of course this is possible.  I believe this type should have to include an 
annotation declaring value semantics and should also need to annotate the 
`token` property with an acknowledgement that value semantics is being 
preserved by the implementation of the type despite this member not having 
value semantics.  The annotation on the property is to prevent bugs that might 
occur because the programmer didn't realize this type does not have value 
semantics.

> 
> I think it's better to have types explicitly declare that they have value 
> semantics if they want to make that promise, and otherwise not have the 
> compiler make any assumptions either way. Safety features should not be 
> *guessing* that your code is safe. If you can somehow *prove* it safe, go 
> ahead—but I don't see how that can work without a lot of manual annotations 
> on bridged code.

I agree with you that *public* types should have to declare that they have 
value semantics.  And I'm not suggesting we attempt to *prove* value semantics 
everywhere. 

I'm suggesting that the proportion of value types in most applications for 
which we can reasonably infer value semantics is pretty large.  If the stored 
properties of a value type all have value semantics and the implementation of 
the type does not use global mutable state it has value semantics.  

Whether we require annotation or not, value semantics will be decided by the 
declaring module.  If we don't infer it we'll end up having to write `value 
struct` and `value enum` a lot.  The design of Swift has been vigorous in 
avoiding keyword soup and I really believe that rule applies here.  The primary 
argument I can think of against inferring value semantics for non-public value 
types in these cases is if proving a type does not use global mutable state in 
its implementation would have too large an impact on build times.

> 
> -- 
> Brent Royal-Gordon
> Architechies
> 


Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 1:29 PM, Michel Fortin via swift-evolution 
>  wrote:
> 
> I'm not actually that interested in the meaning of value semantics here. I'm 
> debating the appropriateness of determining whether something can be done in 
> another thread based on the type a function is attached to. Because that's 
> what the ValueSemantical protocol wants to do. ValueSemantical, as a 
> protocol, is whitelisting the whole type while in reality it should only 
> vouch for a specific set of safe functions on that type.


To state more explicitly what I think you might be implying here: In principle, 
we could have developers annotate value-semantic *members* instead of 
value-semantic *types* and only allow value-semantic members to be used on 
parameters to an actor. But I worry this might spread through the type system 
like `const` in C++, forcing large numbers of APIs to annotate parameters with 
`value` and restrict themselves to value-only APIs just in case they happen to 
be used in an actor.

-- 
Brent Royal-Gordon
Architechies



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Thomas via swift-evolution

> On 20 Aug 2017, at 03:36, Brent Royal-Gordon  wrote:
> 
>> On Aug 19, 2017, at 2:25 AM, Thomas wrote:
>> 
>>> I think we need to be a little careful here—the mere fact that a message 
>>> returns `Void` doesn't mean the caller shouldn't wait until it's done to 
>>> continue. For instance:
>>> 
>>> listActor.delete(at: index)           // Void, so it doesn't wait
>>> let count = await listActor.getCount() // But we want the count *after* 
>>> the deletion!
>> 
>> In fact this will just work. Because both messages happen on the actor's 
>> internal serial queue, the "get count" message will only happen after the 
>> deletion. Therefore the "delete" message can return immediately to the 
>> caller (you just need the dispatch call on the queue to be made).
> 
> Suppose `delete(at:)` needs to do something asynchronous, like ask a server 
> to do the deletion. Is processing of other messages to the actor suspended 
> until it finishes? (Maybe the answer is "yes"—I don't have experience with 
> proper actors.)

It seems like the answer should be "yes". But then how do you implement 
something like a cancel() method? I don't know how the actor model solves that 
problem.
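One pattern that approximates an answer with today's tools (a hedged sketch; all names are hypothetical, and a serial queue stands in for the actor) is to make cancel() just another message that flips a flag, which the in-flight operation polls between steps:

```swift
import Dispatch

// A serial-queue stand-in for an actor: `cancel()` is just another
// message on the queue, flipping a flag which the long-running
// `download` operation polls between steps.
final class Downloader {
    private let queue = DispatchQueue(label: "downloader")
    private var cancelled = false

    func cancel() {
        queue.async { self.cancelled = true }
    }

    func download(steps: Int, completion: @escaping (Bool) -> Void) {
        queue.async {
            for _ in 0..<steps {
                if self.cancelled {       // cooperative cancellation point
                    completion(false)
                    return
                }
                // one chunk of the long-running work would go here
            }
            completion(true)              // finished without being cancelled
        }
    }
}
```

This only works because the long operation is itself broken into messages; a single opaque await inside the actor would still block delivery of the cancel message, which is exactly the open question above.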

Thomas



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 7:41 AM, Matthew Johnson  wrote:
> 
> Regardless of which approach we take, it feels like something that needs to 
> be implicit for structs and enums where value semantics is trivially provable 
> by way of transitivity. When that is not the case we could require an 
> explicit `value` or `nonvalue` annotation (specific keywords subject to 
> bikeshedding of course).

There is no such thing as "trivially provable by way of transitivity". This 
type is comprised of only value types, and yet it has reference semantics:

struct EntryRef {
private var index: Int

var entry: Entry {
get { return entries[index] }
set { entries[index] = newValue }
}
}

This type is comprised of only reference types, and yet it has value semantics:

struct OpaqueToken: Equatable {
class Token {}
private let token: Token

static func == (lhs: OpaqueToken, rhs: OpaqueToken) -> Bool {
return lhs.token === rhs.token
}
}

I think it's better to have types explicitly declare that they have value 
semantics if they want to make that promise, and otherwise not have the 
compiler make any assumptions either way. Safety features should not be 
*guessing* that your code is safe. If you can somehow *prove* it safe, go 
ahead—but I don't see how that can work without a lot of manual annotations on 
bridged code.

-- 
Brent Royal-Gordon
Architechies



Re: [swift-evolution] [Concurrency] async/await + actors

2017-08-19 Thread Brent Royal-Gordon via swift-evolution
> On Aug 19, 2017, at 3:23 AM, Georgios Moschovitis via swift-evolution 
>  wrote:
> 
> I am wondering, am I the only one that *strongly* prefers `yield` over 
> `await`?
> 
> Superficially, `await` seems like the standard term, but given the fact that 
> the proposal is about coroutines, I think `yield` is actually the proper 
> name. Also, subjectively, it sounds much better/elegant to me!


Swift tends to take a pragmatic view of this kind of thing, naming features 
after their common uses rather than their formal names. For instance, there's 
no technical reason you *have* to use the error-handling features for 
errors—you could use them for routine but "special" return values like breaking 
out of a loop—but we still name things like the `Error` protocol and the `try` 
keyword in ways that emphasize their use for errors.

This feature is about coroutines, sure, but it's a coroutine feature strongly 
skewed towards use for asynchronous calls, so we prefer syntax that emphasizes 
its async-ness. When you're reading the code, the fact that you're calling a 
coroutine is not important; what's important is that the code may pause for a 
while during a given expression and run other stuff in the meantime. `await` 
says that more clearly than `yield` would.

-- 
Brent Royal-Gordon
Architechies


