Oh, I see.  The constraint solver is picking the overload that better matches 
the caller rather than the callee's type, which differs from C++ because its 
template expansion process treats overloads for specific types as more 
specialized.  We don't consider prototypes less generic than the caller here 
because we aren't performing a (major) syntactic transformation in the process 
of solving a system of type variables.  In order to change the language to 
adopt this feature, Sema would have to know the candidate set of 
specializations, either user-specified or SILOptimizer-generated, beforehand.  
It's not impossible to imagine, but it does create an interesting 
back-dependency on future, potential optimizations, and it could significantly 
change the behavior of a Debug versus a Release build (unless specialization 
were forced at all optimization levels).
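
Concretely, restating the `dot` example from the quoted messages below from 
the solver's point of view (a minimal sketch; the concrete bodies are 
placeholders filled in for illustration):

    protocol Storage {}
    class CBlasStorage: Storage {}
    class Tensor<S: Storage> {}

    func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> {
        print("general version called")
        return lhs
    }

    func dot(_ lhs: Tensor<CBlasStorage>, _ rhs: Tensor<CBlasStorage>) -> Tensor<CBlasStorage> {
        print("specialized version called")
        return lhs
    }

    func doSomething<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) {
        // The only fact the solver has about the arguments here is `S: Storage`,
        // so only the generic `dot` is applicable; nothing revisits that choice
        // when doSomething is later specialized for S == CBlasStorage.
        _ = dot(lhs, rhs)
    }

    doSomething(Tensor<CBlasStorage>(), Tensor<CBlasStorage>())   // "general version called"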

~Robert Widmann

On 2017/02/05, at 12:37, Abe Schneider <[email protected]> wrote:

> Hi Robert,
> 
> Sorry, I’m not sure I understand your question. In C++ you can do the 
> following:
> 
> #include <iostream>
> 
> struct Storage {};
> struct CBlasStorage: Storage {};
> 
> template <typename S> class Tensor {};
> 
> template <typename S>
> Tensor<S> dot(const Tensor<S> &lhs, const Tensor<S> &rhs) {
>   std::cout << "general version called" << std::endl;
>   Tensor<S> result;
>   return result;
> }
> 
> // specialized version for CBlasStorage
> template <>
> Tensor<CBlasStorage> dot(const Tensor<CBlasStorage> &lhs, const 
> Tensor<CBlasStorage> &rhs) {
>   std::cout << "specialized version called" << std::endl;
>   Tensor<CBlasStorage> result;
>   return result;
> }
> 
> // this preserves type information and will call the appropriate `dot`
> template <typename T>
> void doSomething(const Tensor<T> &lhs, const Tensor<T> &rhs) {
>   auto result = dot(lhs, rhs);
> }
> 
> int main(int argc, char **argv) {
>   Tensor<CBlasStorage> a, b;
>   doSomething(a, b); // we should get "specialized version called"
> }
> 
> 
> The potential equivalent for Swift could look like:
> 
> @_specialize_all
> func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … }
> 
> This would cause the compiler to create a version of `dot` for each S type it 
> gets called with. Thus, when `doSomething` is called, it would dispatch to 
> that version of `dot`, allowing the type information to be preserved in the 
> same way it is in C++.
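> 
> The overload set this would effectively generate is, conceptually, something 
> like the following (purely a hypothetical sketch of the proposal, written in 
> the same elided style as above):
> 
> // one concrete entry point per storage type `dot` is called with
> func dot(_ lhs:Tensor<CBlasStorage>, _ rhs:Tensor<CBlasStorage>) -> Tensor<CBlasStorage> { … }
> func dot(_ lhs:Tensor<OpenCLStorage>, _ rhs:Tensor<OpenCLStorage>) -> Tensor<OpenCLStorage> { … }
> 
> with the call inside `doSomething` resolved against that expanded set once 
> its own type parameter is known.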
> 
> Abe
> 
>> On Feb 5, 2017, at 11:35 AM, Robert Widmann <[email protected]> wrote:
>> 
>> I don't understand how this change would cause method dispatch to invoke a 
>> different prototype.  Specialization in either language mentioned doesn't do 
>> that.
>> 
>> ~Robert Widmann
>> 
>> On 2017/02/05, at 11:28, Abe Schneider via swift-evolution 
>> <[email protected]> wrote:
>> 
>>> Hi all,
>>> 
>>> The current behavior of generics in Swift causes type information to be lost 
>>> at compile time, because only a single version of the function is 
>>> maintained. This runs counter to how C++ works, which creates a new copy of 
>>> the function per type and thereby preserves the type information. This can 
>>> cause unexpected behavior from the user’s perspective:
>>> 
>>>   protocol DispatchType {}
>>>   class DispatchType1: DispatchType {}
>>> 
>>>   func doBar<D:DispatchType>(value:D) {    
>>>       print("General function called")
>>>   }
>>> 
>>>   func doBar(value:DispatchType1) {
>>>       print("DispatchType1 called")
>>>   }
>>> 
>>>   func test<D:DispatchType>(value:D) {
>>>       doBar(value: value)
>>>   }
>>> 
>>>   let d1 = DispatchType1()
>>>   test(value: d1)     // prints "General function called", but it’s not obvious why
>>> 
>>> 
>>> The suggested method to get around this issue is to use a protocol to 
>>> create a witness table, allowing for runtime dispatch. However, this 
>>> approach is not ideal in all cases because: (a) the overhead of runtime 
>>> dispatch may not be desirable, especially because this is something that 
>>> can be determined at compile time; and (b) there are some designs in which 
>>> this behavior can complicate things.
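>>> 
>>> For reference, here is a minimal sketch of that workaround applied to the 
>>> `doBar` example above (the protocol requirement gives us a witness table, so 
>>> the call is dispatched at runtime):
>>> 
>>>   protocol DispatchType {
>>>       func doBar()
>>>   }
>>> 
>>>   extension DispatchType {
>>>       func doBar() { print("General function called") }
>>>   }
>>> 
>>>   class DispatchType1: DispatchType {
>>>       func doBar() { print("DispatchType1 called") }
>>>   }
>>> 
>>>   func test<D:DispatchType>(value:D) {
>>>       value.doBar()                  // dispatched through the witness table
>>>   }
>>> 
>>>   test(value: DispatchType1())       // "DispatchType1 called"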
>>> 
>>> One example of a design where this behavior can be problematic is when a 
>>> protocol is used to determine what functions get dispatched:
>>> 
>>>   protocol Storage { … }
>>>   class Tensor<S:Storage> { … }
>>> 
>>>   class CBlasStorage: Storage { … }
>>>   class OpenCLStorage: Storage { … }
>>> 
>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … }
>>> 
>>>   // like the behavior above, these will not work if called from another
>>>   // generic function (but will work for non-generic functions)
>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> 
>>>       where S:CBlasStorage { … }
>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> 
>>>       where S:OpenCLStorage { … }
>>> 
>>> In this case, depending on the underlying storage, we want an optimized 
>>> version of `dot` to be called. To make this work correctly we can add 
>>> static methods to `Tensor`, but this has several drawbacks: (a) it makes 
>>> the `Tensor` class monolithic, since every possible method must be 
>>> determined a priori and defined in the class; (b) it doesn’t allow new 
>>> methods to be added to `Tensor` without touching the main class; and (c) it 
>>> unnecessarily forces users to use the more verbose `Tensor.dot(a, b)`.
>>> 
>>> Point (a) could in theory be made better by creating a `TensorOps` 
>>> protocol. However, because the necessary type constraints cannot currently 
>>> be placed on extensions, this is not currently possible to implement.
>>> 
>>> 
>>> One potential solution would be to add or extend an attribute for generic 
>>> functions that forces multiple versions of that function to be created. 
>>> There is already a `@_specialize` attribute, but: (a) you have to manually 
>>> write out all the cases you want to cover; and (b) it only affects the 
>>> compiled code, so it does not change this dispatch behavior. Because 
>>> `@_specialize` already exists, I’m going to assume it wouldn’t be a major 
>>> change to the language to extend its behavior to compile-time dispatch.
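>>> 
>>> For reference, a rough sketch of what using the existing attribute looks 
>>> like (the exact spelling has varied between compiler versions, so treat 
>>> this as illustrative only):
>>> 
>>>   @_specialize(where S == CBlasStorage)   // each case listed by hand
>>>   @_specialize(where S == OpenCLStorage)
>>>   func dot<S:Storage>(_ lhs:Tensor<S>, _ rhs:Tensor<S>) -> Tensor<S> { … }
>>> 
>>> This only emits specialized machine code for the listed cases; a call from 
>>> another generic function still resolves to the generic `dot`.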
>>> 
>>> 
>>> Thanks!
>>> Abe
> 
_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
