> On Dec 2, 2017, at 9:23 PM, Dave Abrahams <dabrah...@apple.com> wrote:
> 
> 
> On Nov 30, 2017, at 2:28 PM, Douglas Gregor via swift-evolution 
> <swift-evolution@swift.org> wrote:
>> What Does a Good Solution Look Like?
>> Our current system for associated type inference and associated type 
>> defaults is buggy and complicated.
> 
> Well, that’s the problem, then.  Don’t worry, I won’t suggest that you simply 
> fix the implementation, because even if there weren’t bugs and the system 
> were predictable I’d still think we could improve the situation for users by 
> making associated type default declarations more explicit.
> 
>> The compiler gets it right often enough that people depend on it, but I 
>> don’t think anyone can reasonably be expected to puzzle out what’s going to 
>> happen, and this area is rife with bugs. If we were to design a new solution 
>> from scratch, what properties should it have?
>> 
>> • It should allow the author of a protocol to provide reasonable defaults, so 
>> the user doesn’t have to write them.
>> • It shouldn’t require users to write typealiases for “obvious” cases, even 
>> when they aren’t due to defaults.
>> • It shouldn’t infer an inconsistent set of typealiases.
>> • It should be something that a competent Swift programmer could reason about: 
>> when it will succeed, when and why it will fail, and what the resulting 
>> inferred typealiases would be.
>> • It should admit a reasonable implementation in the compiler that is 
>> performant and robust.
> • It should cover all of the existing use cases.
> • It should not break code at this point.
> • We should have a migration strategy for existing code that avoids traps 
> like silent semantic changes.
> 
> My first bullet is important to me; I don’t think the existing use cases are 
> (inherently) so complex that we can afford to sacrifice them: drop almost any 
> of them and we won’t end up with a sufficiently useful system.  At the very 
> least, existing use cases provide the only guidance we really have as to what 
> the feature should do.

I honestly don’t feel like I have a good handle on all of the use cases for 
associated type inference, and it’s not something we can simply search for on 
GitHub. But I think this proposal covers most of them—and Matthew’s and Greg’s 
positive feedback helps my confidence here. The biggest potential issue, I 
think, is that we’ll no longer infer associated types from default 
implementations, which protocol vendors might be relying on.
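
For example, today a conformance can pick up an associated type purely from a 
default implementation in a protocol extension. A simplified sketch (the 
Counter protocol and TrivialCounter type are hypothetical, not real library 
code):

    protocol Counter {
      associatedtype Value
      func next() -> Value
    }

    extension Counter {
      // Today, a conformer that doesn't write its own next() gets Value
      // inferred as Int from this default; under the proposed rules it
      // no longer would.
      func next() -> Int { return 0 }
    }

    // Currently compiles, with Value == Int inferred from the default above.
    // Under the proposal, the author would need an associated type default
    // on Value or an explicit "typealias Value = Int" here.
    struct TrivialCounter: Counter { }

That’s exactly the kind of code I’d want the migration story (below) to catch.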

> 
> I think we need to acknowledge that my second bullet is unattainable, at 
> least if we want to improve type checking performance. Not breaking any code 
> means that given any existing code, the compiler would have to explore the 
> same solution space it currently does, and come up with the same answers.  
> Improving performance would require declarations to opt into new, totally 
> optional explicit syntax that prevents some of those explorations, and that’s 
> an untenable user experience.

Yes, I agree.

> Which brings me to my third bullet: unless we are willing to break the code 
> of protocol users (as opposed to vendors) we need to ensure that vendors can 
> confidently convert code to use the new system without changing semantics.

Yeah, (2) below is basically that feature.

>  
>> 
>> A Rough Proposal
>> I’ve been thinking about this for a bit, and I think there are three ways in 
>> which we should be able to infer an associated type witness:
>> 
>> 1. Associated type defaults, which are specified with the associated type 
>> itself, e.g.,
>> 
>>   associatedtype Indices = DefaultIndices<Self>
>> 
>> These are easy to reason about for both the programmer and the compiler.
>> 2. Typealiases in (possibly constrained) protocol extensions, e.g.,
>> 
>>   extension RandomAccessCollection
>>   where Index : Strideable, Index.Stride == IndexDistance {
>>     typealias RandomAccessCollection.Indices = CountableRange<Index>
>>   }
>> 
>> I’m intentionally using some odd ‘.’ syntax here to indicate that this 
>> typealias is intended only to be found when trying to satisfy an associated 
>> type requirement, and is not a general typealias that could be found by 
>> normal name lookup. Let’s set the syntax bike shed aside for the moment. The 
>> primary advantage of this approach (vs. inferring Indices from “var indices: 
>> CountableRange<Index>” in a constrained protocol extension) is that there’s 
>> a real typealias declaration that compiler and programmer alike can look at 
>> and reason about based just on the name “Indices”. 
>> 
>> Note that this mechanism technically obviates the need for (1), in the same 
>> sense that default implementations in protocols 
>> <https://github.com/apple/swift/blob/master/docs/GenericsManifesto.md#default-implementations-in-protocols->
>>  are merely syntactic sugar.
>> 3. Declarations within the nominal type declaration or extension that declares 
>> conformance to the protocol in question. This is generally the same approach 
>> as described in “associated type inference” above, where we match 
>> requirements of the protocol against declarations that could satisfy those 
>> requirements and infer associated types from there. However, I want to turn 
>> it around: instead of starting with the requirements of the protocol and 
>> looking basically anywhere in the type or any protocol to which it conforms 
>> (for implementations in protocol extensions), start with the declarations 
>> that the user explicitly wrote at the point of the conformance and look for 
>> requirements they might satisfy. For example, consider our initial example:
>> 
>>   extension MyCollection: RandomAccessCollection {    
>>     var startIndex: Int { return contents.startIndex }
>>     var endIndex: Int { return contents.endIndex }
>>     subscript(index: Int) -> T { return contents[index] }
>>   }
>> 
>> Since startIndex, endIndex, and subscript(_:) are declared in the same 
>> extension that declares conformance to RandomAccessCollection, we should look 
>> for requirements with the same name as these properties and subscript within 
>> RandomAccessCollection (or any protocol it inherits) and infer Index := Int 
>> and Element := T by matching the type signatures. This is still the most 
>> magical inference rule, because there is no declaration named “Index” or 
>> “Element” to look at. However, it is much narrower in scope than the current 
>> implementation, because it’s only going to reason from the (probably small) 
>> set of declarations that the user wrote alongside the conformance, so it’s 
>> more likely to be intentional. Note that this is again nudging programmers 
>> toward the style of programming where one puts one protocol conformance per 
>> extension, which is admittedly my personal preference.
>> 
>> Thoughts?
> 
> The thing that strikes me most about these is that the first two are explicit 
> declarations of intent: “In the absence of an explicit declaration, deduce 
> this associated type as follows,” while the third is still extremely 
> indirect.  While it hints to the compiler about which conformances’ 
> associated type requirements we are trying to satisfy, it never comes out and 
> says straight out what the associated type should be, even though it needs to 
> be mentioned.  As a generic programmer, I don’t value the concision gained 
> over the clarity lost, and I’d like to see the solutions to these problems 
> follow the explicit-declaration-of-intent pattern.  However, the code in #3 
> is not written by the protocol vendor, and for me it is (at least currently) 
> a stretch to think of breaking the code of protocol users, so I grudgingly 
> accept it.  

Sums up my feelings about #3 pretty well.
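
For concreteness, here’s roughly how I picture #3 playing out, including the 
wrapper type the examples in this thread assume (a sketch; MyCollection and 
its contents property are hypothetical):

    struct MyCollection<T> {
      var contents: [T]
    }

    extension MyCollection: RandomAccessCollection {
      // Under #3, only these declarations, because they are written alongside
      // the conformance, are matched against RandomAccessCollection's
      // requirements, giving Index := Int and Element := T.
      var startIndex: Int { return contents.startIndex }
      var endIndex: Int { return contents.endIndex }
      subscript(index: Int) -> T { return contents[index] }
    }

The fully explicit spelling this stands in for would add “typealias Index = Int” 
and “typealias Element = T” to the extension, which is the clarity-vs-concision 
trade-off Dave describes.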

> 
> If we were really starting from scratch I might suggest requiring that 
> conformances use the associated type name rather than some concrete type, 
> e.g. 
> 
> extension MyCollection: RandomAccessCollection {
>     typealias RandomAccessCollection.Index = Int
>     typealias RandomAccessCollection.Element = T
>     var startIndex: Index { return contents.startIndex }
>     var endIndex: Index { return contents.endIndex }
>     subscript(index: Index) -> Element { return contents[index] }
> }
> 
> But I suspect we’re well past the point in the language’s evolution where 
> that sort of change is possible.

I’d like to *allow* that, for all declarations that are meant to conform to a 
protocol, but we can’t (and IMO shouldn’t) require it.

> As for migration of protocol user code, I think we’d need to run both the new 
> and the old slow inference in the migrator and flag any differences.  I don’t 
> know what to do about protocol vendors’ code though.

Yeah. We might simply need to run the old inference in Swift 4 mode when the 
new inference doesn’t succeed, and warn + Fix-It the missing typealiases when 
the old succeeds but the new fails.
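
Concretely (a sketch; the exact diagnostic and Fix-It text are TBD), for 
something like the earlier Counter example, where Value was previously 
inferred only from a default implementation, the Fix-It would spell the old 
result out at the conformance:

    struct TrivialCounter: Counter {
      typealias Value = Int   // inserted by the Fix-It; previously inferred silently
    }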

>> I think this approach is more predictable and more implementable than the 
>> current model. I’m curious whether the above makes sense to someone other 
>> than me, and whether it covers existing use cases well enough. Thoughts?
> 
> Well, covering the use cases is definitely still a concern for me.  I don’t 
> think we’ll know for sure until we try it, but have you thought about how to 
> migrate each piece of code in the standard library?  Does it cover those 
> cases?

I’ve looked at the various defaulted associated types in the 
Sequence/Collection hierarchy, and I think they’ll work better with this scheme 
than they do currently. Honestly, I think I have to go implement it to see how 
things work out.
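
As one data point, Indices would end up combining (1) and (2) along these 
lines (a simplified sketch, not the actual standard library declarations):

    protocol MyRandomAccessCollection: Collection {
      // (1) An associated type default, stated with the associated type itself.
      associatedtype Indices = DefaultIndices<Self>
      var indices: Indices { get }
    }

    // (2) A typealias in a constrained protocol extension, found only when
    // satisfying the Indices requirement. This uses the straw-man '.' syntax
    // from above, so it isn't valid Swift today:
    //
    //   extension MyRandomAccessCollection
    //   where Index: Strideable, Index.Stride == IndexDistance {
    //     typealias MyRandomAccessCollection.Indices = CountableRange<Index>
    //   }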

        - Doug

