Thank you for the explanation, that makes sense. Do you think it makes sense to 
create a proposal to allow handling of specialized overloads in Swift? I 
suspect the issues caused by the current behavior: (a) will continue to confuse 
a lot of people coming from C++; and (b) affect a wider audience than just the 
library I'm developing.

Abe

> On Feb 6, 2017, at 1:06 PM, Douglas Gregor <[email protected]> wrote:
> 
>> 
>> On Feb 5, 2017, at 5:36 PM, Abe Schneider via swift-evolution 
>> <[email protected]> wrote:
>> 
>> Hi Robert,
>> 
>> Exactly. The benefit being that you can figure out the correct function to 
>> dispatch entirely at compile time. My understanding is that Swift doesn’t do 
>> this because of the associated code bloat (and it’s usually not necessary). 
>> However, I think there is some important functionality by allowing 
>> specialization to control dispatch in a similar way to c++. There is also 
>> the design element — my (fairly) succinct Tensor class that used to be ~300 
>> lines is now already close to an additional 1000 lines of code and growing. 
>> While the type of library I’m writing might be outside of what is normally 
>> done with Swift, I suspect the design pattern I’m using crops up in other 
>> places, as well as the need for dispatch on specialization (e.g. 
>> http://stackoverflow.com/questions/41640321/extending-collection-with-a-recursive-property-method-that-depends-on-the-elemen).
> 
> You can’t figure out the correct function to dispatch entirely at compile 
> time because Swift supports retroactive modeling. Let’s make this a 
> super-simple example:
> 
>       // Module A
>       public protocol P { }
>       public func f<T>(_: T) { print("unspecialized") }
>       public func f<T: P>(_: T) { print("specialized") }
> 
>       public func g<T>(_ x: T) { f(x) }
> 
>       // Module B
>       import A
>       func testG(x: Int) {
>         g(x)  // the best we can statically do is print "unspecialized"; Int 
> doesn't conform to A.P, but...
>       }
> 
>       // Module C
>       import A
>       extension Int: P { }   // dynamically, Int does conform to A.P!
> 
> Swift’s model is that the selection among ad hoc overloads is performed 
> statically based on local knowledge, and is consistent across all 
> “specializations” of a generic function. Protocol requirements and 
> overridable methods are the customization points.
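> 
> [Editor's sketch, not part of the original message: an unconstrained generic 
> function can only recover a retroactive conformance dynamically, e.g. with a 
> conditional cast.]

```swift
// Sketch: recovering retroactively-added behavior at runtime.
// P mirrors the protocol from the example above; the conditional cast
// stands in for the runtime check that the static model avoids.
protocol P {
    func f() -> String
}

// An unconstrained generic function cannot statically know whether T
// conforms to P, but it can check at runtime.
func g<T>(_ x: T) -> String {
    if let p = x as? P {
        return p.f()  // sees conformances added in any module
    }
    return "unspecialized"
}

// Retroactive conformance, possibly declared in a different module.
extension Int: P {
    func f() -> String { return "specialized" }
}

print(g(1))       // "specialized"
print(g("hi"))    // "unspecialized"
```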
> 
> Selecting ad hoc overloads at runtime is possible, but of course it has 
> downsides. You could run into run-time ambiguities, for example:
> 
>       // Module A
>       public protocol P { }
>       public protocol Q { }
>       public func f<T>(_: T) { print("unspecialized") }
>       public func f<T: P>(_: T) { print("specialized for P") }
>       public func f<T: Q>(_: T) { print("specialized for Q") }
> 
>       public func g<T>(_ x: T) { f(x) }
> 
>       // Module B
>       import A
>       extension Int: P { }
> 
>       // Module C
>       import A
>       extension Int: Q { }
> 
>       // Module D
>       import A
>       func testG(x: Int) {
>         g(x)   // run-time ambiguity: which specialized "f" do we get?
>       }
> 
> There are reasonable answers here if we know what the potential set of 
> overloads is at compile time. It's a problem I've been interested in for a 
> long time <https://parasol.tamu.edu/~jarvi/papers/pldi06.pdf>. That dynamic 
> dispatch can be implemented somewhat reasonably: the compiler can emit a 
> static decision tree so long as we're willing to limit the set of overloads 
> to the ones that are visible from g(_:), and that decision tree can be folded 
> away by the optimizer when we're specializing the function and the visibility 
> of the types and/or protocols in question is limited.
> 
>> As far as changes to Swift, `@_specialize` already does exactly this (except 
>> it is treated as a hint). You would need to transform the function to 
>> something like <function-name>_<mangled-type-name>(…) and a table of 
>> transformed functions, but after that you can just treat the functions as 
>> normal functions (and ignore the fact they were defined as generic). So, 
>> yes, specializations should be forced at every level. While this will lead 
>> to some code bloat, since it only occurs for the functions marked by the 
>> user, I would imagine it is: (a) limited in the extent to which it occurs; and (b) 
>> manageable by simply not using the attribute (and using protocol witness 
>> tables instead). But at least that way you give the user the choice to do 
>> what is best for the particular situation.
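>> 
>> [Editor's sketch, not part of the original message: a rough, hand-built 
>> version of the "table of specialized functions" idea described above; all 
>> names are hypothetical, and in the proposal the compiler would build this.]

```swift
// Hand-built sketch of dispatching through a table of specializations,
// keyed by the concrete generic argument. Illustrative only.
protocol Storage {}
struct CBlasStorage: Storage {}
struct DefaultStorage: Storage {}

struct Tensor<S: Storage> {}

// The "specialization table": concrete type -> implementation.
var dotTable: [ObjectIdentifier: (Any, Any) -> String] = [
    ObjectIdentifier(CBlasStorage.self): { _, _ in
        return "specialized dot for CBlasStorage"
    }
]

func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> String {
    // Consult the table using the runtime identity of S.
    if let specialized = dotTable[ObjectIdentifier(S.self)] {
        return specialized(lhs, rhs)
    }
    return "general dot"
}

print(dot(Tensor<CBlasStorage>(), Tensor<CBlasStorage>()))
print(dot(Tensor<DefaultStorage>(), Tensor<DefaultStorage>()))
```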
> 
> For reference, `@_specialize` is doing dynamic dispatch. That dynamic 
> dispatch gets optimized away when we specialize the generic function, the 
> same way I mentioned above.
> 
> There might be a reasonable solution to the problem you’re encountering. I 
> don’t think it’s “force specialization at compile time like C++”, but 
> something akin to grouping together multiple overloads where we want dynamic 
> dispatch of callers that invoke them, statically diagnosing when that set of 
> overloads can have ambiguities in it (see the paper I referenced above), and 
> teaching the optimizers to resolve that dynamic dispatch statically whenever 
> possible.
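> 
> [Editor's sketch, not part of the original message: written by hand, the 
> static decision tree for the ambiguous example might look like the 
> following; the arbitrary handling of the "both P and Q" case is exactly the 
> ambiguity a real design would have to diagnose statically.]

```swift
// Sketch of a runtime decision tree over a known set of overloads.
// The "both P and Q" branch is the run-time ambiguity from the example.
protocol P {}
protocol Q {}

struct OnlyP: P {}
struct OnlyQ: Q {}
struct Both: P, Q {}
struct Neither {}

func dispatchF<T>(_ x: T) -> String {
    let isP = x is P
    let isQ = x is Q
    if isP && isQ {
        // Neither overload is more specialized than the other.
        return "ambiguous: specialized for both P and Q"
    }
    if isP { return "specialized for P" }
    if isQ { return "specialized for Q" }
    return "unspecialized"
}

print(dispatchF(OnlyP()))   // "specialized for P"
print(dispatchF(Both()))    // "ambiguous: specialized for both P and Q"
```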
> 
>       - Doug
> 
>> 
>> Thanks!
>> A
>> 
>>> On Feb 5, 2017, at 1:46 PM, Robert Widmann <[email protected]> wrote:
>>> 
>>> Oh, I see.  The constraint solver is picking an overload that better 
>>> matches the caller rather than the callee's type, which differs from C++ 
>>> because the template expansion process considers specific-type overloads 
>>> more specific.  We don't consider less-generic prototypes than the caller 
>>> here because we aren't performing a (major) syntactic transformation in the 
>>> process of solving a system of type variables.   In order to change the 
>>> language to adopt this feature, Sema would have to have knowledge of the 
>>> candidate set of specializations, either user-specified or 
>>> SILOptimizer-generated, beforehand.  It's not impossible to imagine, but it 
>>> does create an interesting backdependency on future potential 
>>> optimizations, and would potentially majorly change the behavior of a Debug 
>>> or Release build (unless specialization were forced at all optimization 
>>> levels).
>>> 
>>> ~Robert Widmann
>>> 
>>> On 2017/02/05 at 12:37, Abe Schneider <[email protected]> wrote:
>>> 
>>>> Hi Robert,
>>>> 
>>>> Sorry, I’m not sure I understand your question. In c++ you can do the 
>>>> following:
>>>> 
>>>> #include <iostream>
>>>> 
>>>> struct Storage {};
>>>> struct CBlasStorage: Storage {};
>>>> 
>>>> template <typename S> class Tensor {};
>>>> 
>>>> template <typename S>
>>>> Tensor<S> dot(const Tensor<S> &lhs, const Tensor<S> &rhs) {
>>>>   std::cout << "general version called" << std::endl;
>>>>   Tensor<S> result;
>>>>   return result;
>>>> }
>>>> 
>>>> // specialized version for CBlasStorage
>>>> template <>
>>>> Tensor<CBlasStorage> dot(const Tensor<CBlasStorage> &lhs, const 
>>>> Tensor<CBlasStorage> &rhs) {
>>>>   std::cout << "specialized version called" << std::endl;
>>>>   Tensor<CBlasStorage> result;
>>>>   return result;
>>>> }
>>>> 
>>>> // this preserves type information and will call the appropriate `dot`
>>>> template <typename T>
>>>> void doSomething(const Tensor<T> &lhs, const Tensor<T> &rhs) {
>>>>   auto result = dot(lhs, rhs);
>>>> }
>>>> 
>>>> int main(int argc, char **argv) {
>>>>   Tensor<CBlasStorage> a, b;
>>>>   doSomething(a, b); // we should get "specialized version called"
>>>> }
>>>> 
>>>> 
>>>> The potential equivalent for Swift could look like:
>>>> 
>>>> @_specialize_all
>>>> func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … }
>>>> 
>>>> Which would cause the compiler to create a version of `dot` for each type S 
>>>> it gets called with. Thus, when `doSomething` is called, it would dispatch 
>>>> to that version of `dot`, allowing the type information to be preserved in 
>>>> the same way it is in C++.
>>>> 
>>>> Abe
>>>> 
>>>>> On Feb 5, 2017, at 11:35 AM, Robert Widmann <[email protected]> wrote:
>>>>> 
>>>>> I don't understand how this change would cause method dispatch to invoke 
>>>>> a different prototype.  Specialization in either language mentioned 
>>>>> doesn't do that.
>>>>> 
>>>>> ~Robert Widmann
>>>>> 
>>>>> On 2017/02/05 at 11:28, Abe Schneider via swift-evolution 
>>>>> <[email protected]> wrote:
>>>>> 
>>>>>> Hi all,
>>>>>> 
>>>>>> The current behavior of generics in Swift causes it to lose type 
>>>>>> information at compile time, due to the desire to maintain a single 
>>>>>> version of each function. This runs counter to how C++ works, which 
>>>>>> creates a new copy of a function per type but preserves the type 
>>>>>> information. This can cause unexpected behavior from the user's 
>>>>>> perspective:
>>>>>> 
>>>>>>   protocol DispatchType {}
>>>>>>   class DispatchType1: DispatchType {}
>>>>>> 
>>>>>>   func doBar<D: DispatchType>(value: D) {
>>>>>>       print("General function called")
>>>>>>   }
>>>>>> 
>>>>>>   func doBar(value:DispatchType1) {
>>>>>>       print("DispatchType1 called")
>>>>>>   }
>>>>>> 
>>>>>>   func test<D:DispatchType>(value:D) {
>>>>>>       doBar(value: value)
>>>>>>   }
>>>>>> 
>>>>>>   let d1 = DispatchType1()
>>>>>>   test(value: d1)     // "General function called", but it's not obvious 
>>>>>> why
>>>>>> 
>>>>>> 
>>>>>> The suggested method to get around this issue is to use a protocol to 
>>>>>> create a witness table, allowing for runtime dispatch. However, this 
>>>>>> approach is not ideal in all cases because: (a) the overhead of runtime 
>>>>>> dispatch may not be desirable, especially because this is something that 
>>>>>> can be determined at compile time; and (b) there are some designs in 
>>>>>> which this behavior can complicate things.
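>>>>>> 
>>>>>> [Editor's sketch, not part of the original message: a minimal version of 
>>>>>> the protocol-based workaround for the doBar example above; the 
>>>>>> requirement-with-default shape is illustrative.]

```swift
// Sketch: making doBar a protocol requirement with a default
// implementation, so calls from generic code dispatch through the
// witness table at runtime.
protocol DispatchType {
    func doBar() -> String
}

extension DispatchType {
    // Default implementation: the "general" behavior.
    func doBar() -> String { return "General function called" }
}

class DispatchType1: DispatchType {
    // Satisfies the requirement, so generic callers find this
    // implementation via the witness table.
    func doBar() -> String { return "DispatchType1 called" }
}

struct PlainType: DispatchType {}  // falls back to the default

func test<D: DispatchType>(value: D) -> String {
    return value.doBar()
}

print(test(value: DispatchType1()))  // "DispatchType1 called"
print(test(value: PlainType()))      // "General function called"
```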
>>>>>> 
>>>>>> One example of a design where this behavior can be problematic is when a 
>>>>>> protocol is used to determine what functions get dispatched:
>>>>>> 
>>>>>>   protocol Storage { … }
>>>>>>   class Tensor<S:Storage> { … }
>>>>>> 
>>>>>>   class CBlasStorage: Storage { … }
>>>>>>   class OpenCLStorage: Storage { … }
>>>>>> 
>>>>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … 
>>>>>> }
>>>>>> 
>>>>>>   // unlike the C++ behavior, these will not work if called from another 
>>>>>> generic function (but will work when called from non-generic code)
>>>>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> 
>>>>>> where S:CBlasStorage { … }
>>>>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> 
>>>>>> where S:OpenCLStorage { … }
>>>>>> 
>>>>>> In this case, depending on the underlying storage, we want an optimized 
>>>>>> version of `dot` to be called. To make this work correctly we can add 
>>>>>> static methods to `Tensor`, but this has several drawbacks: (a) it makes 
>>>>>> the `Tensor` class monolithic, as every possible method must be 
>>>>>> determined a priori and defined in the class; (b) it doesn't allow new 
>>>>>> methods to be added to `Tensor` without touching the main class; and (c) 
>>>>>> it unnecessarily forces users to use the more verbose `Tensor.dot(a, b)`.
>>>>>> 
>>>>>> Point (a) could in theory be improved by creating a `TensorOps` 
>>>>>> protocol. However, because type constraints cannot currently be placed 
>>>>>> on extensions that add a protocol conformance, this is not currently 
>>>>>> possible to implement.
>>>>>> 
>>>>>> 
>>>>>> One potential solution would be to add or extend an attribute for 
>>>>>> generic functions that would force multiple versions of that function to 
>>>>>> be created. There is already a `@_specialize` attribute, but: (a) you 
>>>>>> have to manually write out all the cases you want to cover; and (b) it 
>>>>>> only affects the compiled code, so it does not change this dispatch 
>>>>>> behavior. Given that `@_specialize` exists, I'm going to assume it 
>>>>>> wouldn't be a major change to the language to extend its behavior to 
>>>>>> compile-time dispatch.
>>>>>> 
>>>>>> 
>>>>>> Thanks!
>>>>>> Abe
>>>>>> _______________________________________________
>>>>>> swift-evolution mailing list
>>>>>> [email protected]
>>>>>> https://lists.swift.org/mailman/listinfo/swift-evolution
>>>> 
>> 
