> On Aug 17, 2017, at 3:24 PM, Chris Lattner via swift-evolution
> <[email protected]> wrote:
>
> Anyway, here is the document, I hope it is useful, and I’d love to hear
> comments and suggestions for improvement:
> https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9f782
I think you're selecting the right approaches and nailing many of the details,
but I have a lot of questions and thoughts. A few notes before I start:
* I only did one pass through this, so I probably missed or misunderstood some
things. Sorry.
* I think the document may have evolved since I started reading, so some of
this may be out of date.
* I haven't yet read the rest of the thread—this email is already long enough.
* I have a lot of experience with Cocoa-style callback-based concurrency, a
little bit (unfortunately) with JavaScript Promises, and basically none with
async/await. I've never worked with a language that formally supported actors,
although I've used similar patterns in Swift and Objective-C.
# async/await
I like the choice of async/await, and I agree that it's pretty much where
mainstream languages have ended up. But there are a few things you seem to
gloss over. You may simply have decided those details were too specific for
such a sweeping manifesto, but I wanted to point them out in case you missed
them.
## Dispatching back to the original queue
You correctly identify one of the problems with completion blocks as being that
you can't tell which queue the completion will run on, but I don't think you
actually discuss a solution to that in the async/await section. Do you think
async/await can solve that? How? Does GCD even have the primitives needed?
(`dispatch_get_current_queue()` was deprecated long ago and has never been
available in Swift.)
Or do you see this as the province of actors? If so, how does that work? Does
every piece of code inherently run inside one actor or another? Or does the
concurrency system only get you on the right queue if you explicitly use
actors? Can arbitrary classes "belong to" an actor, so that e.g. callbacks into
a view controller inherently go to the main queue/actor?
(If you *do* need actors to get sensible queue behavior, I'm not the biggest
fan of that; we could really use that kind of thing in many, many places that
aren't actors.)
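Here's the situation I'm worried about, in code. The URLSession call is today's
API; the types and the async call at the end are hypothetical:

// With completion blocks, the caller has no idea which queue `completion`
// runs on unless the API documents it (here, URLSession's delegate queue).
func loadAvatar(for user: User, completion: @escaping (UIImage?) -> Void) {
    URLSession.shared.dataTask(with: user.avatarURL) { data, _, _ in
        completion(data.flatMap { UIImage(data: $0) })
    }.resume()
}

// async/await makes the suspension explicit, but something still has to
// decide which queue the code *after* the await resumes on:
let avatar = await loadAvatar(for: user)   // hypothetical async overload
imageView.image = avatar                   // only safe back on the main queue/actor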
## Delayed `await`
Most languages I've seen with async/await seem to allow you to delay the
`await` call to do parallel work, but I don't see anything similar in your
examples. Do you envision that happening? What's the type of the intermediate
value, and what can you do with it? Can you return it to a caller?
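For comparison, the C#-style usage I have in mind looks roughly like this
(hypothetical syntax; I'm assuming the un-awaited value is some Future-like
type, and the download functions are made up):

let imageFuture = downloadImage(from: imageURL)          // starts work, no suspension yet
let thumbnailFuture = downloadThumbnail(from: imageURL)  // runs in parallel with it
doOtherSynchronousWork()
let image = await imageFuture                            // now we actually suspend
let thumbnail = await thumbnailFuture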
## Error handling
Do you imagine that `throws` and `async` would be orthogonal to one another? If
so, I suspect that we could benefit from adding typed `throws` and making
`Never` a subtype of `Error`, which would allow us to handle this through the
generics system.
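Concretely, with hypothetical typed-`throws` syntax and `Never: Error`, a single
generic signature could cover throwing and non-throwing async work alike:

// Sketch only: `throws(E)` is invented syntax for a typed throws clause.
func retrying<T, E: Error>(_ attempts: Int, _ body: () async throws(E) -> T) async throws(E) -> T {
    precondition(attempts >= 1)
    for _ in 1..<attempts {
        if let value = try? await body() { return value }
    }
    return try await body()   // when E == Never, callers wouldn't need `try` at all
}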
(Also, I notice that a fire-and-forget message can be thought of as an `async`
method returning `Never`, even though the computation *does* terminate
eventually. I'm not sure how to handle that, though.)
## Legacy interop
Another big topic I don't see discussed much is interop with existing APIs. I
think it's really important that we expose existing completion-based Cocoa APIs
with async/await. This ideally means automatic translation, much like we did
with errors. Moreover, I think we probably need to apply this translation to
Swift 4 libraries when you're using them from Swift 5+ (assuming this makes
Swift 5).
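For instance, URLSession's `getAllTasks(completionHandler:)` might be exposed
(or hand-wrapped) roughly like this, using something like the manifesto's
suspendAsync primitive; the wrapper shape is my guess, not anything from the
document:

extension URLSession {
    // Today: func getAllTasks(completionHandler: @escaping ([URLSessionTask]) -> Void)
    func allTasks() async -> [URLSessionTask] {
        return await suspendAsync { continuation in
            self.getAllTasks(completionHandler: continuation)
        }
    }
}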
## Implementation
The legacy interop requirement leans towards a particular model, in which
`await` calls are literally translated into completion blocks passed to the
original function. But there are other options, like generating a wrapper that
translates completion-based calls into promise-returning calls, with `await`
translated into a call on the promise. Or we could do proper continuations, but
as I understand it, that has impacts further up the call stack, so I'm not sure
how you'd make that work when some of the calls on the stack are from other
languages.
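To illustrate the first option: if `allTasks()` above is just a renamed import
of the completion-based method, the compiler could lower the caller directly
onto the completion block, something like this (purely illustrative; `updateBadge`
is a made-up helper):

// What the programmer writes:
let tasks = await session.allTasks()
updateBadge(with: tasks)

// One possible lowering onto the existing completion-based entry point:
session.getAllTasks { tasks in
    updateBadge(with: tasks)   // everything after the await becomes the block
}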
# Actors
I haven't used actors before, but they look like a really promising model, much
better than Go's channels. I do have a couple of concerns, though.
## Interop, again
There are a few actor-like types in the frameworks—the WatchKit UI classes are
the clearest examples—but I'm not too worried about them. What I'm more
concerned with is how this might interoperate with Cocoa delegates. Certain
APIs, like `URLSession`, either take a delegate and queue or take a delegate
and call it on arbitrary queues; these seem like excellent candidates for
actor-ization, especially when the calls are all one-way. But that means we
need to be able to create "actor protocols" or something. It's also hard to
square with the common Cocoa (anti?)pattern of implementing delegate protocols
on a controller—you would want that controller to also be an actor.
I don't have any specific answers here—I just wanted to point this out as
something we should consider in our actor design.
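Just to make the shape of the problem concrete, an "actor protocol" might look
something like this (entirely invented syntax and names):

actor protocol DownloadDelegate {
    // One-way messages, so the session never waits on the delegate.
    actor func downloadDidFinish(savingTo location: URL)
    actor func download(didFailWith error: Error)
}

// A view controller bound to the main queue/actor could conform, and its
// callbacks would automatically be delivered there.
extension DownloadsViewController: DownloadDelegate { ... }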
## Value-type annotation
The big problem I see with your `ValueSemantical` protocol is that developers
are very likely to abuse it. If there's a magic "let this get passed into
actors" switch, programmers will flip it for types that don't really qualify;
we don't want that switch to have too many other effects. I also worry that
the type behavior of a protocol is a bad fit for `ValueSemantical`. Retroactive
conformance to `ValueSemantical` is almost certain to be an unprincipled hack;
subclasses can very easily lose the value-semantic behavior of their
superclasses, but almost certainly can't have value semantics unless their
superclasses do. And yet having `ValueSemantical` conformance somehow be
uninherited would destroy Liskov substitutability.
One answer might be to narrow the scope of the annotation: Don't think of it as
indicating that it's a value type, merely think of it as a
"passable-to-`Actor`s" protocol. I'll call this alternate design `Actable` to
distinguish it from "is a value type". It's not an unprincipled hack to
retroactively conform a type to `Actable`—you're not stating an intrinsic
property of your type, just telling the actor system how to pass it. It's
totally coherent to have a subclass of a non-`Actable` class add `Actable` and
require its own subclasses to be `Actable`. And we can still synthesize
`Actable` on structs and enums.
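For instance (invented protocol shape, just to illustrate the retroactive and
subclassing stories):

protocol Actable {
    /// How the actor system should transfer this instance to another actor.
    func actableRepresentation() -> Self
}

class CanvasModel { /* freely shares mutable state; not Actable */ }

final class FrozenCanvasModel: CanvasModel, Actable {
    // A subclass of a non-Actable class can still opt in, because this is a
    // statement about how to pass it, not a claim of value semantics.
    func actableRepresentation() -> FrozenCanvasModel { return self }
}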
A middle ground would be to define the protocol as being for types which can be
safely passed to another thread—`Shareable`, say. That might even permit
implementations that used atomics or mutexes to protect a shared instance.
(Sorry if this comes off as bikeshedding. What I'm trying to say is, while the
exact name is unimportant, the semantic we want the protocol to represent *is*
important. I suspect that "has value semantics" is too broad and will lead
users into misbehavior.)
## Plain old classes
In the section on actors, you suggest that actors can either be a variant of
classes or a new fundamental type, but one option you don't seem to probe is
that actors could simply *be* subclasses of an `Actor` class:
class Storage: Actor {
    func fetchPerson(with uuid: UUID) async throws -> Person? {
        ...
    }
}
You might be able to use different concurrency backends by using different base
classes (`GCDActor` vs. `DillActor` vs. whatever), although that would have the
drawback of tightly coupling an actor class to its backend. Perhaps `Actor`
could instead be a generic class which took an `ActorBackend` type parameter;
subclasses could either fix that parameter (`Actor<DispatchQueue>`) or expose
it to their users.
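Roughly like this (invented `ActorBackend` requirement; the retroactive
`DispatchQueue` conformance is just for illustration):

protocol ActorBackend {
    func enqueue(_ message: @escaping () -> Void)
}

extension DispatchQueue: ActorBackend {
    func enqueue(_ message: @escaping () -> Void) { async(execute: message) }
}

class Actor<Backend: ActorBackend> { /* message serialization lives here */ }

// Subclasses can fix the backend or expose the choice to their users:
class Storage: Actor<DispatchQueue> { ... }
class Cache<Backend: ActorBackend>: Actor<Backend> { ... }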
Another possibility doesn't involve subclasses at all. In this model, an actor
is created by an `init() async` initializer. An async initializer on `Foo`
returns an instance of type `Foo.Async`, an implicitly created pseudo-class
which contains only the `async` members of `Foo`.
class Storage {
    let context: NSManagedObjectContext

    init(url: URL) async throws {
        // ...build a Core Data stack; the elided code creates `coordinator`...
        context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
        context.persistentStoreCoordinator = coordinator
    }

    func fetchPerson(with uuid: UUID) async throws -> Person? {
        let req = NSFetchRequest<NSManagedObject>(entityName: "Person")
        req.predicate = NSPredicate(format: "uuid = %@", uuid as CVarArg)
        req.fetchLimit = 1
        return try execute(req, for: Person.self).first
    }

    func execute<R: RecordConvertible>(_ req: NSFetchRequest<NSManagedObject>, for type: R.Type) throws -> [R] {
        let records = try context.fetch(req)
        return try records.map { try R(record: $0) }
    }
}
let store: Storage.Async = await try Storage(url: url)
// This is okay because `fetchPerson(with:)` is `async`.
let person = await try store.fetchPerson(with: personID)
// This is an error because `execute(_:for:)` is not `async`,
// so it's not exposed through `Storage.Async`.
let people = try store.execute(req, for: Person.self)
A third possibility is to think of the actor as a sort of proxy wrapper around
a (more) synchronous class, which exposes only `actor`-annotated members and
wraps calls to them in serialization logic. This would require some sort of
language feature to make transparent wrappers, though. This design would allow
the user, instead of the actor, to select a "backend" for it, so an iOS app
could use `GCDActor<Storage>` while its server backend could use
`DillActor<Storage>`. (`Storage` is a bad example for shared code, but you get
the idea.)
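Usage might look something like this (assuming such a transparent-wrapper
feature existed; `GCDActor(wrapping:)` is invented):

let store = GCDActor(wrapping: someStorage)               // or DillActor(wrapping:) on the server
let person = await try store.fetchPerson(with: personID)  // only actor-annotated members are exposed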
My point here is simply that, although you show the actor-ness of a type as
being fundamental to it, I'm not sure it needs to be.
### Lifting parameter type restrictions into `async`
The major downside of an "actors are not special types" model is that it
wouldn't enforce the parameter type restrictions. One solution would be to
apply those restrictions to *all* `async` functions—their parameters would all
have to conform to the magic "okay for actors" protocol (well, it'd be "okay
for async" now). That strikes me as a pretty sane restriction, since the
shared-state problems we want to avoid with actors are just as much of a concern
with other async calls.
However, this would move the design of the magic protocol forward in the
schedule, and might delay the deployment of async/await. If we *want* these
restrictions on all async calls, that might be worth it, but if not, that's a
problem.
We'd probably also need to provide an escape hatch—either a function-wide
`async(unsafelyShared)` annotation, or a per-parameter `@unsafelyShared`
attribute.
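The two spellings I have in mind (invented syntax, naturally):

// Function-wide: every parameter is exempt from the restriction.
func render(into buffer: UnsafeMutableRawPointer) async(unsafelyShared) { ... }

// Per-parameter: only the annotated parameter is exempt.
func render(into buffer: @unsafelyShared UnsafeMutableRawPointer) async { ... }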
## Function-typed parameters
You mention that function types would be unsafe to pass "because it could close
over arbitrary actor-local data", but closures over non-shared data would be
fine. Another carve-out that I *think* we could support is `async` functions in
general, because if they were closures, they could close over their original
actor and run inside it. This might be able to subsume the "closure over
non-shared data" case.
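A tiny sketch of what I mean (speculative actor syntax):

actor Counter {
    private var count = 0

    actor func makeIncrementer() -> () async -> Void {
        // This closure captures actor-local state, but if async closures always
        // run back on their originating actor, handing it to another actor is
        // still safe.
        return { self.count += 1 }
    }
}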
## The inevitable need for metadata
GCD started with a very simple model: you put blocks on a queue and the queue
runs them in order. This was much more lightweight than `NSOperationQueue`,
which had a lot of extra stuff for canceling operations, prioritizing them,
etc. Unfortunately, within a few years Apple decided that GCD *needed* to be
able to cancel and prioritize operations, so they had to pack this information
into weird pseudo-block objects. In Swift, this manifested as the
`DispatchWorkItem` class.
My point is, in anything that involves background processing, you always end up
needing more configurability than you think at the start. We should anticipate
this in our design and have a plan for how we'll attach metadata to actor
messages, even if we don't implement that feature right away, because we'll
surely need it sooner or later.
## Examples
In a previous section, I used a class called `Storage` as an actor; I think
that might be a good type to illustrate with. I envision this as a type that
translates between the Swift structs/enums you use in your model layer and the
REST server/SQLite database/Core Data stack/CloudKit database you use to
actually store it.
Other examples might include:
* A shared cache:
actor SharedCache<Key: Hashable, Value> {
    private var values: [Key: Value] = [:]

    actor func cachedValue(for key: Key, orMake makeValue: (Key) async throws -> Value) rethrows -> Value {
        if let value = values[key] {
            return value
        }
        let value = try await makeValue(key)
        values[key] = value
        return value
    }
}
* A spell checker:
actor SpellChecker {
    private var words: Set<String> = []

    actor func addWord(_ word: String) throws {
        words.insert(word)
        try await save()
    }

    actor func removeWord(_ word: String) throws {
        words.remove(word)
        try await save()
    }

    func save() async throws { ... }

    actor func checkText(_ text: String) -> Checker {
        return Checker(words: words, text: text, startIndex: text.startIndex)
    }

    actor Checker /* Hmm, can we get a SequenceActor and a `for await` loop? */ {
        fileprivate let words: Set<String>
        fileprivate let text: String
        fileprivate var startIndex: String.Index

        actor func next() -> Misspelling? { ... }
    }

    struct Misspelling: ValueSemantical {
        var substring: Substring
        var corrections: [String]
    }
}
# Reliability
Overall, I like reliability at the actor level; it seems like an appropriate
unit of trap-resistance.
I don't think we should incorporate traps into normal error-handling
mechanisms; that is, I don't think resilient actors should throw on traps. When
an invariant is violated within an actor, that means *something went wrong* in
a way that wasn't anticipated. The mistake may be completely internal to the
actor in question, but it may also have stemmed from invalid data passed into
it—data which may be present in other parts of the system. In other words, I
don't think we should think of reliable actors as a way to normalize trapping;
we should think of it as a way to mitigate the damage caused by a trap, to trap
gracefully. Failure handlers encourage the thinking we want; throwing errors
encourages the opposite.
To that end, I think failure handlers are the right approach. I also think we
should make it clear that, once a failure handler is called, there is no saving
the process—it is *going* to crash eventually. Maybe failure handlers are
`Never`-returning functions, or maybe we simply make it clear that we're going
to call `fatalError` after the failure handler runs, but in either case, a
failure handler is a point of no return.
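In code, I picture something along these lines (invented API):

// A failure handler that cannot return normally:
storage.onFailure { failure -> Never in
    // ...flush logs, warn the user, phone home...
    fatalError("Storage actor violated an invariant: \(failure)")
}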
(In theory, a failure handler could keep things going by pulling some
ridiculous shenanigans, like re-entering the runloop. We could try to prevent
that with a time limit on failure handlers, but that seems like
overengineering.)
I have a few points of confusion about failure handlers, though:
1. Who sets up a failure handler? The actor that might fail, or the actor which
owns that actor?
2. Can there be multiple failure handlers?
3. When does a failure handler get invoked? Is it queued like a normal message,
or does it run immediately? If the latter, and if it runs in the context of an
outside actor, how do we deal with the fact that invariants might not currently
hold?
# Distributed actors
I love the feature set you envision here, but I have two major criticisms.
## Heterogeneity is the rule
Swift everywhere is a fine idea, but heterogeneity is the reality. It's the
reality today and it will probably be the reality in twenty years. A magic
"distributed actor" model isn't going to do us much good if it doesn't work
when the actor behind it is implemented in Node, PHP, or Java.
That means that we should expect most distributed actors to be wrappers around
marshaling code. Dealing with things like XPC or Neo-Distributed Objects is
great, but we also need to think about "distributed actors" based on
`JSONEncoder`, `URLSession`, and some custom glue code to stick them together.
That's probably most of what we'll end up doing.
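Something like this, in other words (invented names; `body(from:)` stands in for
a hypothetical async-translated URLSession call that just returns the response
body):

// A "distributed actor" over a REST service written in who-knows-what:
actor RemotePersonStore {
    let base: URL

    actor func person(with uuid: UUID) async throws -> Person? {
        let url = base.appendingPathComponent("people/\(uuid.uuidString)")
        let data = await try URLSession.shared.body(from: url)   // assumed async wrapper
        return try JSONDecoder().decode(Person.self, from: data)
    }
}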
## It's just a tweaked backend
You describe this as a `distributed` keyword, but I don't think the keyword
actually adds much. I don't think there's a simple, binary distinction between
distributed and non-distributed actors. Rather, there are a variety of actor
"backends"—some in-process, some in-machine, some in-network—which vary in two
dimensions:
1. **Is the backend inherently error-prone?** Basically, should actor methods
that normally are not `throws` be exposed as `throws` methods because the
backend itself is expected to introduce errors in the normal course of
operation?
2. **How strictly does the backend constrain the types of parameters you can
pass?** In-process, anything that can be safely used by multiple threads is
fine. In-machine, it needs to be `Codable` or support `mmap`ing. In-network, it
needs to be `Codable`. But that's only the common case, of course! A simple
in-machine backend might not support `mmap`; a sophisticated in-network backend
might allow you to pass one of your `Actor`s to the other side (where calls
would be sent back the other way).
Handling these two dimensions of variation basically requires new protocol
features. For the error issue, we basically need typed `throws`, `Never` as a
universal subtype (or at least a universal subtype of all `Error`s), and an
operation equivalent to `#commonSupertype(BackendError, MethodError)`. For the
type-constraining issue, we need an "associated protocol" feature that allows
you to constrain `ActorBackend`'s parameters to a protocol specified by the
conforming type. And, y'know, a way to reject actor/backend combinations that
aren't compatible.
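Pulling those requirements together, a fuller version of the `ActorBackend` idea
from earlier might look vaguely like this; half of it needs features we don't
have, hence the pseudo-syntax:

protocol ActorBackend {
    // Dimension 1: the errors the transport itself injects (Never for in-process).
    associatedtype TransportError: Error

    // Dimension 2: what may cross the boundary. This needs an "associated
    // protocol" feature that doesn't exist today.
    associatedprotocol Passable

    func send<Message: Passable>(_ message: Message) async throws(TransportError)
}

// A method declared as `throws(FetchError)` on the actor would then surface to
// remote callers as `throws(#commonSupertype(FetchError, TransportError))`.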
--
Brent Royal-Gordon
Architechies