Sorry, just following up with a few more thoughts.
>
> I consider Java’s type erasure to be orthogonal to the
> overloading/customization point issue, but of course I agree that it’s
> surprising.
While the underlying reason may be different, Swift has a similar potential for
surprise with its generics. It may not be a surprise for people coming from
the world of Java, but mostly because Java’s generics are extremely limited.
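For anyone who hasn’t run into it, the surprise looks roughly like this (a minimal sketch with hypothetical `describe` overloads, unrelated to the Tensor code below):

```swift
func describe<T>(_ value: T) -> String { return "some T" }
func describe(_ value: Int) -> String { return "an Int" }

func describeGenerically<T>(_ value: T) -> String {
    // Overload resolution happens at compile time against the generic
    // parameter, so only the unconstrained overload is visible here.
    return describe(value)
}

describe(42)             // "an Int"
describeGenerically(42)  // "some T"
```

The concrete overload wins at the top level, but inside a generic context the compiler binds to the generic overload regardless of the runtime type.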
>
>> I can see the advantage of protocols if they allowed this type of design:
>>
>> protocol LinearOperations {
>>     associatedtype StorageType
>>     static func dot(_ lhs: Tensor<StorageType>, _ rhs: Tensor<StorageType>) -> Tensor<StorageType>
>>     ...
>> }
>>
>> extension Tensor: LinearOperations {
>>     ...
>> }
>>
>> extension Tensor: LinearOperations where StorageType: CBlasStorage<Float> {
>>     ...
>> }
>>
>> The advantage of this design is that the available functions are
>> clearly defined, but it still allows new operations to be defined
>> without having to touch the main code base.
>
> I’m assuming that both of these extensions implement the static func
> dot(_:_:). This is a more interesting direction for me, because it’s taking the
> existing notion of customization points via protocol requirements and
> extending that to also support some level of customization.
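As a point of comparison, today’s customization points via protocol requirements do dispatch correctly even from generic code, because the requirement is looked up through the witness table. A minimal sketch, using hypothetical types:

```swift
protocol Greeter {
    func greeting() -> String  // a customization point (protocol requirement)
}

extension Greeter {
    // Default implementation for conformers that don't customize.
    func greeting() -> String { return "hello" }
}

struct QuietGreeter: Greeter {}  // uses the default

struct LoudGreeter: Greeter {
    // Customization; recorded in the conformance's witness table.
    func greeting() -> String { return "HELLO" }
}

func greet<G: Greeter>(_ g: G) -> String {
    return g.greeting()  // dispatches through the witness table
}

greet(QuietGreeter())  // "hello"
greet(LoudGreeter())   // "HELLO"
```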
So what needs to change in the language to enable this behavior? The obvious
candidate is allowing a protocol conformance to be declared on a constrained
extension (i.e. conditional conformances). However, even if I keep all the
necessary functions within a single class (to sidestep that issue), I still run
into more design issues. Take this toy example (sorry for the verbosity; this
was the shortest version I could come up with):
class Tensor<S: Storage> {
    var storage: S

    init(size: Int) {
        storage = S(size: size)
    }

    // default implementation
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}
With specializations defined for the storage types:
extension Tensor where S: IntStorage {
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}

extension Tensor where S: FloatStorage {
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}
This works:
let floatTensor = Tensor<FloatStorage>(size: 10)
let result1 = Tensor.cos(floatTensor) // calls Tensor<FloatStorage>.cos(…)
let intTensor = Tensor<IntStorage>(size: 10)
let result2 = Tensor.cos(intTensor) // calls Tensor<IntStorage>.cos(…)
However, if I define an operation on the Tensor:
class SomeOp<S: Storage> {
    typealias StorageType = S
    var output: Tensor<S>

    init() {
        output = Tensor<S>(size: 10)
    }

    func apply() -> Tensor<S> {
        let result = Tensor.cos(output)
        return result
    }
}
let op1 = SomeOp<FloatStorage>()
let result3 = op1.apply() // calls default `cos` instead of FloatStorage version
So one question I have is: why doesn’t the correct version of `cos` get called?
Previously it was because there was no vtable available to figure out which
function to call. In this case, however, since the function is defined in the
class, I would assume there would be one (I also tried variants of this with an
accompanying protocol and with non-static versions of the function).
I can get `SomeOp` to work correctly if I create specializations of the class:
extension SomeOp where S: FloatStorage {
    func apply() -> Tensor<S> {
        let result = Tensor.cos(output)
        return result
    }
}

extension SomeOp where S: IntStorage {
    func apply() -> Tensor<S> {
        let result = Tensor.cos(output)
        return result
    }
}
However, this doesn’t seem like a good design to me, as it requires copying the
same code for each StorageType introduced.
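One workaround I can imagine (a sketch only, with hypothetical names like `cosKernel`, and with the caveat that it moves the operation onto the storage type rather than keeping it on Tensor) is to make the kernel a protocol requirement on `Storage`. Then the witness table carries the specialization, and `SomeOp` needs no per-storage extensions:

```swift
protocol Storage {
    init(size: Int)
    // Making the kernel a protocol *requirement* turns it into a real
    // customization point, dispatched through the witness table.
    static func cosKernel(_ input: [Float]) -> [Float]
}

extension Storage {
    // Default implementation for storage types that don't specialize.
    static func cosKernel(_ input: [Float]) -> [Float] {
        return input
    }
}

struct FloatStorage: Storage {
    var data: [Float]
    init(size: Int) { data = Array(repeating: 0, count: size) }
    // Specialized version; found via the witness table even from
    // generic code that only knows S: Storage.
    static func cosKernel(_ input: [Float]) -> [Float] {
        return input.map { _ in 1 }  // stand-in for a BLAS-backed kernel
    }
}

class Tensor<S: Storage> {
    var storage: S
    init(size: Int) { storage = S(size: size) }
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // Forward to the requirement; S's own version is chosen via the
        // conformance, not by overload resolution in this generic context.
        _ = S.cosKernel([0])
        return tensor
    }
}

class SomeOp<S: Storage> {
    var output = Tensor<S>(size: 10)
    func apply() -> Tensor<S> {
        return Tensor.cos(output)  // picks up FloatStorage's kernel
    }
}
```

The cost, of course, is that the specialized code has to live on (or be reachable from) the storage type, which may or may not fit the intended design.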
Thanks!

_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution