Re: [swift-evolution] Proposal: Introduce User-defined "Dynamic Member Lookup" Types

2017-12-05 Thread Abe Schneider via swift-evolution
I use python on a daily basis for scientific computing (PyTorch, Matplotlib, 
Numpy, Scipy, scikit-learn, etc.). Python is great for doing quick projects, 
but certain design features of the language make it less ideal for large 
projects (e.g. whitespace, weak typing, speed, etc.). Swift shares many of the 
nice qualities Python has but without some of the warts.

To answer your question, I think anyone who needs to integrate their algorithm 
into larger projects and/or deploy it on hardware cares about moving away from 
Python. 

I think trying to include Python libraries makes sense. Julia did the same 
thing, which allowed them to grow their user base very quickly (e.g. being able 
to use Matplotlib in Julia was a huge win). I believe the way Julia imports 
Python libraries is via its extremely powerful (and complex) macro system.
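For context, dynamic member lookup (the proposal under discussion) would let a type resolve arbitrary member names at runtime, which is what makes a Python bridge ergonomic. A minimal sketch of the proposed attribute; the type name and `Python.import` are illustrative, not part of the proposal text:

```swift
// Sketch only: the attribute comes from the proposal; everything else is made up.
@dynamicMemberLookup
struct PythonObject {
    // Any member access with no static declaration is rewritten to this
    // subscript, e.g. `np.ndarray` becomes `np[dynamicMember: "ndarray"]`.
    subscript(dynamicMember member: String) -> PythonObject {
        // look the attribute up in the Python runtime…
        fatalError("illustrative stub")
    }
}

// Hypothetical usage against an imported Python module:
// let np = Python.import("numpy")
// let x = np.array([1, 2, 3])   // member names resolved at runtime
```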

While using Python via Swift is a great way to use already written libraries, 
my preference would be to eventually write everything in Swift (trying to debug 
across languages can be painful). I did (a while ago) write a Tensor library 
for Swift, but ran into some issues when trying to make it run both on the CPU 
and GPU. A new approach I’m considering is to instead wrap a pre-existing C 
library.

My two cents, as the question of users was brought up.

A

> On Dec 4, 2017, at 12:15 PM, Tino Heth via swift-evolution 
>  wrote:
> 
> 
>> This is a bridge to allow easy access to the vast number of libraries that 
>> currently exist in those dynamic language domains, and to ease the 
>> transition of the multitudes of those programmers into Swift.
> 
> I’ve read several posts that gave me the impression that Python has a huge 
> user base of people who are tired of using that language (the cited statement 
> is just an arbitrary pick)… but is that actually true?
> Afaik, Python never became as common as Java, C# or C++, and it never had 
> much support from big companies — people decided to use Python not because 
> it’s some sort of standard, but because they liked it and found it to be a 
> language that’s easy to learn.
> 
> So the whole story of „let’s make it easier for those poor Python guys to 
> switch to a real language“ sounds very much like hubris to me.
> Of course, that statement is an exaggeration, but still:
> Did anyone ever ask the Python-community who actually wants to switch to 
> Swift? I don’t think there would be enough positive feedback to take it as a 
> justification for the proposed changes.
> 
> ___
> swift-evolution mailing list
> swift-evolution@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution



Re: [swift-evolution] Overloading Generic Types

2017-02-22 Thread Abe Schneider via swift-evolution
>> 
>> I'm starting to think my original motivation might’ve had something to do 
>> with normally needing storage for both some property and a T, but not 
>> needing (local) storage for said property if T conformed to a protocol which 
>> itself already required conforming types to have that property? Or maybe as 
>> a way to have the “bottom” variable in a hierarchy of wrapper types to break 
>> the “reference cycle” for some property? (Come to think of it, those could 
>> be the same thing)
>> 
>> That was (and is) an odd project... Anyway, it's been a while since I've 
>> thought about it. I'll try to find time today to poke through some old code 
>> and see if I still have a copy of what I’d gotten stuck on.
>> 
>> And thanks for those links :-)

It sounds like what you want is similar to C++ template specialization (also 
something I’ve been asking for). Another slightly different form you can 
imagine it taking is:

class Array<StorageType: Storage> {
    subscript(index: [Int]) -> StorageType.ElementType { /* … */ }
}

extension Array where StorageType: FloatStorage {
    // define specific variables for this specialization
    // and define behavior for subscript
}




Another example might be (if ints were allowed as generic parameters):

struct Factorial<N: Int> {
    static var value: Int { return N * Factorial<N - 1>.value }
}

struct Factorial<N: Int> where N == 1 {
    static var value: Int { return 1 }
}

let value = Factorial<10>.value




The first example can in theory be done using runtime information (though as 
stated in my previous posts, I still can’t get it to work correctly in Swift). 
The second clearly needs to be done at compile time and could potentially 
benefit from the `constexpr` discussed in the `pure` function thread. A 
slightly different formulation could be:

constexpr func factorial<N: Int>() -> Int { return N * factorial<N - 1>() }
constexpr func factorial<N: Int>() -> Int where N == 1 { return 1 }


Right now generics in Swift feel closer to Java’s generics than C++ templates. 
I think these types of constructs are extremely useful to a language and would 
disagree with anyone who says they aren’t needed (e.g. look at Boost and 
Eigen). However, I can also appreciate that adding these features to a language 
probably should be done with lots of care and thought.



A


Re: [swift-evolution] Basic element-wise operator set for Arrays, Arrays of Arrays, etc.

2017-02-20 Thread Abe Schneider via swift-evolution
> 
> Well I was rather thinking of making a Swift-only library (at least at first) 
> but that would also be available for other platforms, e.g. Linux or maybe some 
> day on Windows => also working with reduced performance without the 
> Accelerate Framework but leveraging it on Apple platforms (and possibly 
> leveraging others on the other platforms). This said, I am open to discussion 
> on this... but having a very nice syntax for Swift and having a close to 
> one-to-one equivalent also for Objective-C will probably add quite some 
> difficulties.
> 

While still very much in its infancy, just to add to the libraries out there, 
there is https://github.com/abeschneider/stem. However, the library currently 
suffers from design issues related to dispatching correctly from generic 
functions. That said, I was able to recreate a large part of the Numpy 
functionality while allowing the ability to leverage the Accelerate Framework 
and OpenCL/CUDA.

> > again, with the obvious implementation, this wastes space for temporaries 
> > and results in extraneous passes through the data. It is often *possible* 
> > to solve these issues (at least for some of the most common cases) by 
> > producing proxy objects that can fuse loops, but that gets very messy very 
> > fast, and it’s a ton of work to support all the interesting cases.
> 
> This is clear to me and to be honest with you I am not really sure of the 
> best strategy to make this. 

The most successful method I’ve seen for dealing with this is to let the user 
write what is most natural first (allowing for temporaries) but provide a path 
to optimize (using in-place operations). While expression trees can automate 
this for the user, they also have the potential to be much more difficult to 
debug and may not be as optimal as a hand-crafted expression.
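A sketch of that "natural first, optimize later" split; the `Vector` type and the `add(_:_:into:)` spelling are illustrative, not from any existing library:

```swift
struct Vector {
    var elements: [Double]

    // Natural spelling: allocates a temporary for the result.
    static func + (lhs: Vector, rhs: Vector) -> Vector {
        return Vector(elements: zip(lhs.elements, rhs.elements).map { $0 + $1 })
    }

    // Opt-in optimization path: writes into preallocated storage, no temporary.
    static func add(_ lhs: Vector, _ rhs: Vector, into out: inout Vector) {
        for i in lhs.elements.indices {
            out.elements[i] = lhs.elements[i] + rhs.elements[i]
        }
    }
}
```

Users start with `a + b` and switch hot loops to `Vector.add(a, b, into: &c)` only where profiling justifies it.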

> 
> I don't think that the primary target for the library should be to deliver 
> the highest performance possible.
>   => People who do need that level of performance would still need to analyze 
> and optimize their code by themselves and/or directly call the Accelerate 
> Framework or other specialized libraries.
> 
> What I would like to reach instead is rather what I would call "the highest 
> usability possible with decent performance". Some programmers will be 
> satisfied with that level of performance and will enjoy the readability and 
> maintainability of the code based on the library, whereas others will go for 
> more performant libraries (and that is perfectly fine!). Actually, I would 
> even expect later that some of those who belong to the latter category will 
> start experimenting with the easy but less performant library (let's call it 
> here the "easy maths library") and optimize their code based on a high 
> performance library only in a second step.

If you can define your operations at the right granularity, you can write 
really optimized Accelerate/OpenCL/CUDA code for the low-level parts and 
string it together with less optimized code.

> 
> My idea of a possibly pragmatic roadmap (which can be followed in that order) 
> to make such a library almost from scratch with the long-term goal of being 
> quite performant but primarily very easy to use could be:
> 
> 1) think about the integration to the language, the syntax, the high-level 
> user documentation, etc. and demonstrate all this based on a relatively low 
> performance implementation
> 
> 2) generate a collection of typical operations where the low-level libraries 
> offer very nice performance or where a clever handling of the temporary 
> variables is possible

For (1) and (2), it’s worth taking a look at what libraries exist already. 
People have spent a lot of time organizing and re-organizing these. While not 
perfect, Numpy has become one of the most successful matrix libraries out there.




Re: [swift-evolution] Compile-time generic specialization

2017-02-20 Thread Abe Schneider via swift-evolution
Sorry, I forgot to copy in its definition:

typealias T = Tensor<S>

As a quick sanity check I changed all `T.` syntax to `Tensor<S>.` and got the 
same behavior.

Thanks!

> On Feb 20, 2017, at 3:58 PM, David Sweeris <daveswee...@mac.com> wrote:
> 
> 
> On Feb 20, 2017, at 12:23, Abe Schneider via swift-evolution 
> <swift-evolution@swift.org <mailto:swift-evolution@swift.org>> wrote:
> 
>> However, if I define an operation on the Tensor:
>> 
>> class SomeOp<S: Storage> {
>> typealias StorageType = S
>> var output: Tensor<S>
>> 
>> init() {
>> output = Tensor<S>(size: 10)
>> }
>> 
>> func apply() -> Tensor<S> {
>> let result = T.cos(output)
>> return result
>> }
>> }
>> 
>> let op1 = SomeOp<FloatStorage>()
>> let result3 = op1.apply() // calls default `cos` instead of FloatStorage 
>> version
>> 
>> 
>> 
>> So one question I have is why doesn’t the correct version of `cos` get 
>> called? Before it was because there wasn’t a vtable available to figure out 
>> which function to call. However, in this case since the function was defined 
>> in the class, I would assume there would be (I also tried variants of this 
>> with an accompanying protocol and non-static versions of the function).
>> 
>> 
>> I can get `SomeOp` to work correctly if I create specializations of the 
>> class:
>> 
>> extension SomeOp where S: FloatStorage {
>> func apply() -> Tensor<S> {
>> let result = T.cos(output)
>> return result
>> }
>> }
>> 
>> extension SomeOp where S: IntStorage {
>> func apply() -> Tensor<S> {
>> let result = T.cos(output)
>> return result
>> }
>> }
>> 
>> 
>> However, this doesn’t seem like a good design to me, as it requires copying 
>> the same code for each StorageType introduced.
> 
> Where is T defined? What happens if you replace "T" with "Tensor"?
> 
> - Dave Sweeris 



Re: [swift-evolution] Compile-time generic specialization

2017-02-20 Thread Abe Schneider via swift-evolution
Sorry, just following up with a few more thoughts.

> 
> I consider Java’s type erasure to be orthogonal to the 
> overloading/customization point issue, but of course I agree that it’s 
> surprising. 

While the underlying reason may be different, Swift has a similar potential for 
surprise with generics. Yes, it might not be a surprise for people coming from 
the world of Java, but mostly because Java’s generics are extremely limited.
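A minimal example of the surprise in question: overload resolution inside a generic function is done statically, so a specialized overload is silently dropped.

```swift
func describe<T>(_ x: T) -> String { return "generic" }
func describe(_ x: Int) -> String { return "Int" }

func forward<T>(_ x: T) -> String {
    // T's concrete type is not consulted here; the unconstrained
    // overload is chosen at compile time.
    return describe(x)
}

describe(1) // "Int"
forward(1)  // "generic": the Int overload is lost
```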


> 
>> I can see the advantage of protocols if they allowed this type of design:
>> 
>> protocol LinearOperations {
>>  associatedtype StorageType
>>  static func dot(_ lhs: Tensor<StorageType>, _
>> rhs: Tensor<StorageType>) -> Tensor<StorageType>
>>  ...
>> }
>> 
>> extension Tensor: LinearOperations {
>> ...
>> }
>> 
>> extension Tensor: LinearOperations where StorageType:CBlasStorage {
>> ...
>> }
>> 
>> The advantage of this design is that the available functions are
>> clearly defined, but it still allows new operations to be defined
>> without having to touch the main code base. 
> 
> I’m assuming that both of these extensions implement the static func 
> dot(_:_:). This is a more interesting direction for me, because it’s taking the 
> existing notion of customization points via protocol requirements and 
> extending that to also support some level of customization.


So what needs to change in the language to enable this behavior? The obvious 
candidate is allowing protocol conformances to be declared on constrained 
extensions. However, even if I include all the necessary functions within a 
single class (to avoid that issue), I’m still running into more design issues. 
Take this toy example (sorry for the verbosity — this was the shortest version 
I could come up with):

class Tensor<S: Storage> {
    var storage: S

    init(size: Int) {
        storage = S(size: size)
    }

    // default implementation
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}




With specializations defined for the storage types:

extension Tensor where S: IntStorage {
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}

extension Tensor where S: FloatStorage {
    static func cos(_ tensor: Tensor<S>) -> Tensor<S> {
        // ...
    }
}




This works:

let floatTensor = Tensor<FloatStorage>(size: 10)
let result1 = T.cos(floatTensor) // calls Tensor<FloatStorage>.cos(…)

let intTensor = Tensor<IntStorage>(size: 10)
let result2 = T.cos(intTensor) // calls Tensor<IntStorage>.cos(…)





However, if I define an operation on the Tensor:

class SomeOp<S: Storage> {
    typealias StorageType = S
    var output: Tensor<S>

    init() {
        output = Tensor<S>(size: 10)
    }

    func apply() -> Tensor<S> {
        let result = T.cos(output)
        return result
    }
}

let op1 = SomeOp<FloatStorage>()
let result3 = op1.apply() // calls default `cos` instead of FloatStorage version



So one question I have is why doesn’t the correct version of `cos` get called? 
Before it was because there wasn’t a vtable available to figure out which 
function to call. However, in this case since the function was defined in the 
class, I would assume there would be (I also tried variants of this with an 
accompanying protocol and non-static versions of the function).


I can get `SomeOp` to work correctly if I create specializations of the class:

extension SomeOp where S: FloatStorage {
    func apply() -> Tensor<S> {
        let result = T.cos(output)
        return result
    }
}

extension SomeOp where S: IntStorage {
    func apply() -> Tensor<S> {
        let result = T.cos(output)
        return result
    }
}


However, this doesn’t seem like a good design to me, as it requires copying the 
same code for each StorageType introduced.
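One way to avoid the copying, at least in today's Swift, is to make the operation a requirement of the `Storage` protocol itself, so that the generic context dispatches through the witness table instead of resolving statically. A sketch under that assumption; the protocol shape shown here is illustrative:

```swift
protocol Storage {
    init(size: Int)
    // As a protocol requirement, `cos` becomes a customization point:
    // generic code calling S.cos dispatches through the witness table.
    static func cos(_ tensor: Tensor<Self>) -> Tensor<Self>
}

extension Storage {
    // Fallback for storage types with no specialized kernel.
    static func cos(_ tensor: Tensor<Self>) -> Tensor<Self> { return tensor }
}

final class FloatStorage: Storage {
    init(size: Int) {}
    // Provided at conformance time, so SomeOp<FloatStorage>.apply()
    // reaches this version even from a generic context.
    static func cos(_ tensor: Tensor<FloatStorage>) -> Tensor<FloatStorage> {
        /* specialized (e.g. Accelerate-backed) implementation */ return tensor
    }
}

class Tensor<S: Storage> { /* … */ }
```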


Thanks!


Re: [swift-evolution] [Pitch] Support for pure functions. Part n + 1.

2017-02-17 Thread Abe Schneider via swift-evolution
I might(?) agree with others that  `constexpr` might overlap but be
ultimately different from `pure`. However, to me `constexpr` is more
interesting because it would provide a potential macro language.

As for syntax, while I think that matters less, I suspect trying to
figure out why code isn't working and having to spot a `->` vs. `=>`
might be difficult.

As for the pure aspect, couldn't the compiler just look at the
arguments and determine that the function has no side effects? The
only complication comes when it's a method of a class. However,
something like the C++ notation of:

int bar() const { ... }

could be used. However, it might be more Swifty to do something like:

   const func bar() -> Int { ... }

as others have suggested. This would make many non-member functions
automatically get turned into purely-functional forms (with potential
optimizations). You should be able to do the same thing with type
methods (static/class).

On Fri, Feb 17, 2017 at 11:18 AM, Anton Zhilin  wrote:
> I didn’t mean to emphasize any specific syntax. I’m fine with either @const,
> @constexpr, @pure or =>.
> Anyway, I see no reason why generic functions shouldn’t be supported in any
> of the suggested models.
>
> 2017-02-17 19:08 GMT+03:00 Abe Schneider :
>>
>> +1. I think this is a great idea. As I was following this thread, I
>> was wondering if someone might suggest the C++ constexpr syntax.
>>
>> Would this support generics? E.g. could you do:
>>
>> @constexpr
>> func foo<S>(a: S, b: S) -> S {
>>    return a + b
>> }
>>
>> and have that be done at compile time? While this could potentially
>> add a huge amount of complication on the backend, I could see this as
>> being useful (also related to my previous postings as to having a way
>> of determining generic types at compile time).


Re: [swift-evolution] Basic element-wise operator set for Arrays, Arrays of Arrays, etc.

2017-02-17 Thread Abe Schneider via swift-evolution
If I read Nicolas's post correctly, I think he's more arguing for the
ability to create syntax that allows Swift to behave in a similar way
to Numpy/Matlab. While Swift already does allow you to define your own
operators, the main complaint is that he can't define the specific
operators he would like.

I've been working on a Tensor library that would also benefit from
this. I ended up creating unicode operators for inner product etc. and
then used the standard operators for elementwise operations. However,
I think there is some virtue in not having to use the unicode
characters (many people don't want to have to remap their keyboard),
so providing alternatives might be nice.

While I've never been a fan of Matlab's notation, other people might
be familiar with it, so there's some virtue in making it available.
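For example, a dot-prefixed elementwise operator is already expressible in Swift (operators that begin with `.` may contain further dots). A sketch for plain `[Double]` arrays:

```swift
infix operator .* : MultiplicationPrecedence

// Elementwise multiply for same-length arrays.
func .* (lhs: [Double], rhs: [Double]) -> [Double] {
    precondition(lhs.count == rhs.count, "shapes must match")
    return zip(lhs, rhs).map { $0 * $1 }
}

let r = [1.0, 2.5, 3.0] .* [2.0, 5.0, -1.0] // [2.0, 12.5, -3.0]
```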


On Fri, Feb 17, 2017 at 1:01 PM, Xiaodi Wu via swift-evolution
 wrote:
> If you're simply looking for elementwise multiply without performance
> requirements, map(*) is a very succinct spelling.
>
> Performant implementations for these operations like you have in Matlab rely
> on special math libraries. Apple platforms have Accelerate that makes this
> possible, and other implementations of BLAS/LAPACK do the same for Linux and
> Windows platforms.
>
> There has been talk on this list of writing Swifty wrappers for such
> libraries. The core team has said that the way to get such facilities into
> Swift corelibs is to write your own library, get broad adoption, then
> propose its acceptance here. Currently, several libraries like Surge and
> Upsurge offer vectorized wrappers in Swifty syntax for Apple platforms; it
> would be interesting to explore whether the same can be done in a
> cross-platform way.
>
> But simply adding sugar to the standard library will not give you the
> results you're looking for (by which I mean, the performance will be
> unacceptable), and there's no point in providing sugar for something that
> doesn't work like the operator implies (Matlab's elementwise operators offer
> _great_ performance).
>
>
>
>
> On Fri, Feb 17, 2017 at 11:46 Nicolas Fezans via swift-evolution
>  wrote:
>>
>> Dear all,
>>
>> In swift (just as in many other languages) I have been terribly
>> missing the operators like  .*  ./  .^  as I know them from
>> MATLAB/Scilab. These operators are very handy and do element-wise
>> operations on vectors or matrices of the same size.
>>
>> So for instance A*B is a matrix multiplication (and the number of
>> columns for A must correspond to the number of rows in B), whereas A.*B
>> (with A and B of same size) returns the matrix of that size whose
>> elements are obtained by making the product of each pair of elements
>> at the same location in A and B.
>>
>> So just a small example:
>> [1.0 , 2.5 , 3.0] .* [2.0 , 5.0 , -1.0] -> [2.0 , 12.5 , -3.0]
>>
>> The same exists for the division (./) or for instance for the power
>> function (.^). Here another example with *, .* , ^ , and .^ to show
>> the difference in behaviour in MATLAB/Scilab
>>
>> >> A = [1 2 3 ; 4 5 6 ; 7 8 9];
>> >> A*A
>>
>> ans =
>>
>>     30    36    42
>>     66    81    96
>>    102   126   150
>>
>> >> A.*A
>>
>> ans =
>>
>>      1     4     9
>>     16    25    36
>>     49    64    81
>>
>> >> A^2
>>
>> ans =
>>
>>     30    36    42
>>     66    81    96
>>    102   126   150
>>
>> >> A.^3
>>
>> ans =
>>
>>      1     8    27
>>     64   125   216
>>    343   512   729
>>
>> For addition and subtraction the regular operators (+ and -) and their
>> counterparts (.+ and .-) are actually doing the same. However, note
>> that since the + operator on arrays is defined differently (it does an
>> append operation), there is a clear use for a .+ operation in Swift.
>>
>> Version 1:
>> In principle, we can define it recursively, for instance ...+ would be
>> the element-wise application of the ..+ operator, which is itself the
>> element-wise application of the .+ operator, which is also the
>> element-wise application of the + operator.
>>
>> Version 2:
>> Alternatively we could have a concept where .+ is the element-wise
>> application of the .+ operator and finally when reaching the basic
>> type (e.g. Double when starting from Double) the .+ operator
>> needs to be defined as identical to the + operator. I do prefer this
>> version since it does not need to define various operators depending
>> on the "level" (i.e. Double -> level 0, [Double] -> level 1,
>> [[Double]] -> level 2, etc.). I could make this option work without
>> generics, but as I tried it with generics it generated a runtime error
>> as the call stack grew indefinitely (which does not seem like something
>> that should actually happen since at each call the level gets lower
>> and when reaching 0 it is all solvable).
>>
>>
>> Anyway, I would like to discuss first the basic idea of defining these
>> element-wise operators for Arrays, before seeing how far it would be
>> 

Re: [swift-evolution] [Pitch] Support for pure functions. Part n + 1.

2017-02-17 Thread Abe Schneider via swift-evolution
+1. I think this is a great idea. As I was following this thread, I
was wondering if someone might suggest the C++ constexpr syntax.

Would this support generics? E.g. could you do:

@constexpr
func foo<S>(a: S, b: S) -> S {
   return a + b
}

and have that be done at compile time? While this could potentially
add a huge amount of complication on the backend, I could see this as
being useful (also related to my previous postings as to having a way
of determining generic types at compile time).


On Fri, Feb 17, 2017 at 8:01 AM, Anton Zhilin via swift-evolution
 wrote:
> My vision of “pure” functions was the following:
>
> Compiler automatically marks all functions and expressions as pure, wherever
> possible
>
> We should be interested not in “Haskell-ish pure” functions, but in
> “computable during compilation” functions
> Therefore I prefer to use @constexpr or const instead of @pure
>
> We can mark a function as const to assert that it is indeed pure
> We can mark a variable as const to ensure that it’s computed at compilation
> time
>
> Compiler might compute some non-const expressions, but no guarantees given
>
> One issue is, we don’t have or suggest any facilities to make use of pure
> functions, other than some optimization, which can be performed anyway as of
> now.
>
> One use-case would be conversion of metatypes to types:
>
> const let x: Any = makeSomething()
> typealias T = type(of: x)
>
> This feature can be powerful enough to fill the niche of macros in Swift,
> without unsafety of C++ or specific syntax of Rust.
>
> 2017-02-17 14:14 GMT+03:00 Haravikk via swift-evolution
> :
>>
>> I like the idea of having pure functions in Swift, but my first thought
>> is; should we have to declare it at all? Is it not easier to just have the
>> compiler automatically flag a function as pure or not?
>>
>> With that in mind we don't need any new syntax, but a simple @pure
>> attribute should be sufficient. This can be used anywhere that a function is
>> declared, or a closure is accepted as a parameter, allowing us to be
>> explicit that we are trying to define a pure function, or only accept pure
>> closures.
>>
>> The big benefit of this is that it is retroactive; all existing functions
>> that are pure will be automatically detected as such, and can be passed into
>> any method accepting only pure functions. The new capability will be that
>> developers can specify that a function *must* be pure and thus produce an
>> error if it isn't.
>
>


Re: [swift-evolution] Compile-time generic specialization

2017-02-10 Thread Abe Schneider via swift-evolution
>> protocol LinearOperations {
>>   associatedtype StorageType
>>   static func dot(_ lhs:Tensor, _
>> rhs:Tensor) -> Tensor
>>   ...
>> }
>>
>> extension Tensor: LinearOperations {
>> ...
>> }
>>
>> extension Tensor: LinearOperations where StorageType:CBlasStorage {
>> ...
>> }
>>
> I’m assuming that both of these extensions implement the static func 
> dot(_:_:). This is a more interesting direction for me, because it’s taking the 
> existing notion of customization points via protocol requirements and 
> extending that to also support some level of customization.

Exactly (sorry, I should've made that explicit). I'd be super happy if
this functionality got added to Swift.

Abe


Re: [swift-evolution] Compile-time generic specialization

2017-02-10 Thread Abe Schneider via swift-evolution
>
> Other languages in the C family (e.g., C#, Java) that have both generics and 
> ad hoc overloading provide the same static-resolution behavior that Swift 
> does, so someone coming from a language in the general “C” family will be 
> confounded whatever we choose. Personally, I think C++ got this wrong—I feel 
> that generic algorithm customization points and algorithm specializations 
> should be explicitly stated, because it makes it easier to reason about the 
> generic code if you know where those points are. Swift uses protocol 
> requirements for customization points, but hasn’t tackled algorithm 
> specialization yet.

That's a fair point, though I think Java's type erasure in generics
surprises/confuses a lot of people (and takes away a lot of the
potential power of generics). That's not to say C++ templates are easy
to understand (e.g. SFINAE), but at least to me it operates in a more
intuitive manner until you get to the esoteric parts. And that is
admittedly a very subjective point.

I can see the advantage of protocols if they allowed this type of design:

protocol LinearOperations {
   associatedtype StorageType
   static func dot(_ lhs: Tensor<StorageType>, _
rhs: Tensor<StorageType>) -> Tensor<StorageType>
   ...
}

extension Tensor: LinearOperations {
...
}

extension Tensor: LinearOperations where StorageType:CBlasStorage {
...
}

The advantage of this design is that the available functions are
clearly defined, but it still allows new operations to be defined
without having to touch the main code base. You can also easily add
new functionality to the Tensor class by creating a new protocol:

protocol StatisticsOperations {
   associatedtype StorageType
   static func histogram(_ tensor: Tensor<StorageType>) -> Tensor<StorageType>
}

extension Tensor: StatisticsOperations {
...
}

The two disadvantages are: (a) Swift currently doesn't allow this; and
(b) it's a little more verbose because you have to write:

let result = Tensor.histogram(mydata)

versus:

let result = histogram(mydata)

which has the redundant piece of information that it's a Tensor (which
can be inferred from `mydata`).


Abe


Re: [swift-evolution] Compile-time generic specialization

2017-02-10 Thread Abe Schneider via swift-evolution
Hi Joe,

> If there's really an independent implementation for each `S: Storage`, then 
> you can make `tensorDot` a requirement of `Storage` and avoid the explosion 
> that way. Ad-hoc type dispatch by either overloading or if chains should be a 
> last resort when protocols really can't model what you're trying to do. 
> Ad-hoc overloading wouldn't really save you any work compared to the if 
> chain—you'd have all the exact problems you mentioned, having to add an 
> overload for every new combo of types, but you'd also have to also think 
> about the implicit relationships among the overloads according to the 
> language's overloading rules instead of in explicit logic.


You are correct in the number of Impls I need (I was incorrect in that
statement). But I think the if-branches are still problematic. I may
need the same number of functions as branches, but I think the code is
cleaner/easier to read and maintain:

   func dot<S>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S>
       where S: CBlasStorage<Float> { .. }
   func dot<S>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S>
       where S: CBlasStorage<Double> { .. }

   // NativeStorage has no optimization per type, so we can lump all
   // of these into a single Impl
   func dot<S: NativeStorage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> { .. }


The advantages from this approach are: (a) it has less repeated code
(i.e. I don't have to create both an Impl and an if-branch); (b)
adding a new storage type does require redefining some (if not all) of
the functions (though it provides a nice mechanism for dealing with
defaults), but that code can be kept as a separate module; and (c) You
are effectively rolling your own dynamic dispatch, which is something
I'd much rather leave up to the compiler to do.


Thanks!
Abe


Re: [swift-evolution] Compile-time generic specialization

2017-02-10 Thread Abe Schneider via swift-evolution
Hi Douglas,


> I don't think it's a particularly good time in Swift's evolution to
> introduce such a feature. Swift 4 actually has a pile of Generics
> improvements already, and relative to those, this kind of specialization is
> a bit of a niche feature. That said, it's not totally afield---the
> conditional conformances proposal talks about a similar issue in the context
> of existing dynamic dispatch (protocol requirements), and we're not quite
> sure how big of an issue it will be.

Okay, that's fair. My main goal was to at least raise the issue and
hope that at least some day it may get added to the roadmap. More
immediately, if some of the changes being discussed are made to how
protocols/extensions work, I think that could be a potential solution.

Also, since I don't want to come off as sounding like I'm just
complaining: thank you to everyone who has put so much effort and
thought into Swift! It has quickly become one of my favorite
languages.

>
> Swift's generics system is quite drastically different from C++ templates,
> so I (personally) am not strongly motivated by the first argument: there's a
> big leap to make going from C++ to Swift, particularly if you know C++
> templates well, and this seems a small part of that. The second argument I
> agree with---it does come up from time to time.

I wouldn't expect Swift's generics to work exactly the same as C++
templates. However, I have seen the point come up in discussion that
Swift should cause the least amount of surprise for people coming from
a C-based language. Thus, for people coming from C++, this will cause
a lot of surprise -- especially since the correct behavior occurs when
the call is made from a non-generic function. I hadn't noticed the
difference in behavior until much later in the development of my
library (which is now causing a lot of refactoring to occur).

Thanks!
Abe
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] Compile-time generic specialization

2017-02-10 Thread Abe Schneider via swift-evolution
Hi Slava,

I'm actually less worried about the performance issue and more about the
impact on design. Specifically, calling one generic function from
another effectively loses type information. Because of this,
specializations stop working, disallowing certain design patterns. While
you can put your functions inside a protocol or class to overcome this
problem, doing so can create large monolithic classes (in my case it
makes one of my classes go from ~300 lines to ~1500 lines of code).

I think it might be possible to deal with some of these issues if: (a)
extensions could define methods, not in the protocol, that got
dynamically called; (b) constraints could be placed on extensions of
protocols.
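Constrained extensions along the lines of point (b) did later land in
Swift; a sketch, with illustrative names of my own, of how a constraint
on an extension lets a storage-specific method live outside the main
class:

```swift
protocol Storage {}
struct NativeStorage: Storage {}
struct CBlasStorage: Storage {}

struct Tensor<S: Storage> {}

extension Tensor {
    func backend() -> String { return "generic backend" }
}

// Constrained extension: these members apply only when S == CBlasStorage,
// and are preferred at concrete call sites as the more specific overload.
extension Tensor where S == CBlasStorage {
    func backend() -> String { return "cblas backend" }
}

let a = Tensor<NativeStorage>()
let b = Tensor<CBlasStorage>()
```

Note that, per the behavior discussed in this thread, a call from inside
another generic function would still resolve to the unconstrained
member.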

My preference is still to allow generics to behave in a fashion similar
to C++ templates (regardless of the underlying implementation), as
making everything rely on protocols or classes makes Swift feel less
mixed-paradigm (like C++) and more OOP-focused (like Java). That said,
it sounds like that may be difficult to accomplish, at least in the
immediate future.

Thanks!
Abe

On Wed, Feb 8, 2017 at 12:03 AM, Slava Pestov <spes...@apple.com> wrote:
>
>> On Feb 5, 2017, at 8:28 AM, Abe Schneider via swift-evolution 
>> <swift-evolution@swift.org> wrote:
>
>> The suggested method to get around this issue is to use a protocol to create 
>> a witness table, allowing for runtime dispatch. However, this approach is 
>> not ideal in all cases because: (a) the overhead of runtime dispatch may not 
>> be desirable, especially because this is something that can be determined at 
>> compile time; and
>
> Just as the compiler is able to generate specializations of generic 
> functions, it can also devirtualize protocol method calls. The two 
> optimizations go hand-in-hand.
>
>> One potential solution would be to add/extend an attribute for generic 
>> functions that would force multiple versions of that function to be created. 
>> There already is a `@_specialize` attribute, but: (a) you have to manually 
>> write out all the cases you want to cover; and (b) it only affects the 
>> compiled code, which does not change this behavior. Due to the fact that 
>> `@_specialize` exists, I’m going to assume it wouldn’t be a major change to 
>> the language to extend the behavior to compile-time dispatch.
>
> In Swift, specialization and devirtualization are optimization passes which 
> are performed in the SIL intermediate representation, long after type 
> checking, name lookup and overload resolution. In this sense it is completely 
> different from C++, where parsed templates are stored as a sort of untyped 
> AST, allowing some delayed name lookup to be performed.
>
> Implementing C++-style templates would be a major complication in Swift and 
> not something we’re likely to attempt at any point in time. The combination 
> of specialization and devirtualization should give you similar performance 
> characteristics, with the improved type safety gained from being able to 
> type-check the unspecialized generic function itself.
>
>>
>>
>> Thanks!
>> Abe
>


Re: [swift-evolution] Compile-time generic specialization

2017-02-10 Thread Abe Schneider via swift-evolution
Hi Joe,

The issue with re-dispatching from a function is that it can make
maintenance of the library difficult. For every function I define, I
would need a large if-else tree. This means the introduction of both
new functions and new storage types becomes expensive. For example, if
I had:

func dot(_ lhs: Tensor, _ rhs: Tensor) -> Tensor {
    if let s = lhs as? Tensor<NativeStorage<Float>> { ... }
    else if let s = lhs as? Tensor<NativeStorage<Double>> { ... }
    else if let s = lhs as? Tensor<NativeStorage<Int>> { ... }
    else if let s = lhs as? Tensor<CBlasStorage<Float>> { ... }
    else if let s = lhs as? Tensor<CBlasStorage<Double>> { ... }
    else if let s = lhs as? Tensor<CBlasStorage<Int>> { ... }
    else if let s = lhs as? Tensor<OpenCLStorage<Float>> { ... }
    else if let s = lhs as? Tensor<OpenCLStorage<Double>> { ... }
    else if let s = lhs as? Tensor<OpenCLStorage<Int>> { ... }
}

with the same number of Impls to go along with them. If I added a new
storage type (e.g. CUDA), I would have to add each specialization (and I
haven't even added Byte and Short) to every function that can be
performed on a Tensor (currently ~20-30 functions). For my library this
doesn't lead to maintainable code.

In C++ this is exactly what templates help solve. In Swift, generics
solve this problem if called from a non-generic function, or if your
generic function is defined in a protocol/class, so it would seem to
fall within the pattern of what should be expected from generics.
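The witness-table alternative to the if-else tree can be sketched like
this (illustrative names and a toy kernel of my own; the point is that
dispatch goes through a protocol requirement, so adding a storage type
never touches the free functions):

```swift
// Each storage type supplies its own kernel as a protocol requirement,
// so the generic entry point below needs no per-type branches.
protocol Storage {
    static func dotKernel(_ lhs: [Float], _ rhs: [Float]) -> Float
}

struct NativeStorage: Storage {
    static func dotKernel(_ lhs: [Float], _ rhs: [Float]) -> Float {
        return zip(lhs, rhs).reduce(0) { $0 + $1.0 * $1.1 }
    }
}

struct Tensor<S: Storage> {
    var data: [Float]
}

// One generic entry point; dispatch goes through S's witness table
// (and can be devirtualized when the compiler specializes this call).
func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Float {
    return S.dotKernel(lhs.data, rhs.data)
}

let x = Tensor<NativeStorage>(data: [1, 2])
let y = Tensor<NativeStorage>(data: [3, 4])
// dot(x, y) == 1*3 + 2*4 == 11
```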

Thanks!
Abe

On Wed, Feb 8, 2017 at 12:55 PM, Joe Groff <jgr...@apple.com> wrote:
>
> On Feb 6, 2017, at 10:06 AM, Douglas Gregor via swift-evolution
> <swift-evolution@swift.org> wrote:
>
>
> On Feb 5, 2017, at 5:36 PM, Abe Schneider via swift-evolution
> <swift-evolution@swift.org> wrote:
>
> Hi Robert,
>
> Exactly. The benefit being that you can figure out the correct function to
> dispatch entirely at compile time. My understanding is that Swift doesn’t do
> this because of the associated code bloat (and it’s usually not necessary).
> However, I think there is some important functionality by allowing
> specialization to control dispatch in a similar way to c++. There is also
> the design element — my (fairly) succinct Tensor class that used to be ~300
> lines is now already close to an additional 1000 lines of code and growing.
> While the type of library I’m writing might be outside of what is normally
> done with Swift, I suspect the design pattern I’m using crops up in other
> places, as well as the need for dispatch on specialization (e.g.
> http://stackoverflow.com/questions/41640321/extending-collection-with-a-recursive-property-method-that-depends-on-the-elemen).
>
>
> You can’t figure out the correct function to dispatch entirely at compile
> time because Swift supports retroactive modeling. Let’s make this a
> super-simple example:
>
> // Module A
> public protocol P { }
> public func f<T>(_: T) { print("unspecialized") }
> public func f<T: P>(_: T) { print("specialized") }
>
> public func g<T>(_ x: T) { f(x) }
>
> // Module B
> import A
> func testG(x: Int) {
>   g(x)  // the best we can statically do is print “unspecialized”; Int
> doesn’t conform to A.P, but...
> }
>
> // Module C
> import A
> extension Int: P { }   // dynamically, Int does conform to A.P!
>
> Swift’s model is that the selection among ad hoc overloads is performed
> statically based on local knowledge, and is consistent across all
> “specializations” of a generic function. Protocol requirements and
> overridable methods are the customization points.
>
> Selecting ad hoc overloads at runtime is possible, but of course it has
> downsides. You could run into run-time ambiguities, for example:
>
> // Module A
> public protocol P { }
> public protocol Q { }
> public func f<T>(_: T) { print("unspecialized") }
> public func f<T: P>(_: T) { print("specialized for P") }
> public func f<T: Q>(_: T) { print("specialized for Q") }
>
> public func g<T>(_ x: T) { f(x) }
>
> // Module B
> import A
> public extension Int: P { }
>
> // Module C
> import A
> public extension Int: Q { }
>
> // Module C
> import A
> func testG(x: Int) {
>   g(x)   // run-time ambiguity: which specialized “f” do we get?
> }
>
> There are reasonable answers here if we know what the potential set of
> overloads is at compile time. It's a problem I've been interested in for a
> long time. That dynamic dispatch can be implemented somewhat reasonably (the
> compiler can emit a static decision tree, so long as we're willing to limit
> the set of overloads to the ones that are visible from g(_:)), and it can be
> folded away by the optimizer when we're specializing the function and the
> visibility of the types and/or protocols in question is limited.

Re: [swift-evolution] Compile-time generic specialization

2017-02-05 Thread Abe Schneider via swift-evolution
Hi Robert,

Exactly. The benefit being that you can figure out the correct function to 
dispatch entirely at compile time. My understanding is that Swift doesn’t do 
this because of the associated code bloat (and it’s usually not necessary). 
However, I think there is some important functionality by allowing 
specialization to control dispatch in a similar way to c++. There is also the 
design element — my (fairly) succinct Tensor class that used to be ~300 lines 
is now already close to an additional 1000 lines of code and growing. While the 
type of library I’m writing might be outside of what is normally done with 
Swift, I suspect the design pattern I’m using crops up in other places, as well 
as the need for dispatch on specialization (e.g. 
http://stackoverflow.com/questions/41640321/extending-collection-with-a-recursive-property-method-that-depends-on-the-elemen).

As far as changes to Swift, `@_specialize` already does exactly this (except 
it is treated as a hint). You would need to transform the function into one 
copy per specialization plus a table of the transformed functions, but after 
that you can just treat the copies as normal functions (and ignore the fact 
they were defined as generic). So, yes, specializations would be forced at 
every level. While this will lead to some code bloat, since it only occurs for 
the functions marked by the user, I would imagine it's: (a) limited in the 
extent to which it occurs; and (b) manageable by simply not using the 
attribute (and using protocol witness tables instead). But at least that way 
you give the user the choice to do what is best for the particular situation.
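For reference, a sketch of the existing underscored, unofficial
attribute. Its exact syntax has varied across compiler versions, and the
`where`-clause form shown here is an assumption on my part; the types
are illustrative:

```swift
struct NativeStorage {}
struct CBlasStorage {}

// Each specialization must be listed by hand, and the attribute only
// affects the emitted code -- overload resolution is unchanged.
@_specialize(where S == NativeStorage)
@_specialize(where S == CBlasStorage)
func identity<S>(_ x: S) -> S { return x }
```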

Thanks!
A

> On Feb 5, 2017, at 1:46 PM, Robert Widmann <devteam.cod...@gmail.com> wrote:
> 
> Oh, I see.  The constraint solver is picking an overload that better matches 
> the caller rather than the callee's type, which differs from C++ because the 
> template expansion process considers specific-type overloads more specific.  
> We don't consider less-generic prototypes than the caller here because we 
> aren't performing a (major) syntactic transformation in the process of 
> solving a system of type variables.   In order to change the language to 
> adopt this feature, Sema would have to have knowledge of the candidate set of 
> specializations, either user-specified or SILOptimizer-generated, beforehand. 
>  It's not impossible to imagine, but it does create an interesting 
> backdependency on future potential optimizations, and would potentially 
> majorly change the behavior of a Debug or Release build (unless 
> specialization were forced at all optimization levels).
> 
> ~Robert Widmann
> 
> 2017/02/05 12:37、Abe Schneider <abe.schnei...@gmail.com 
> <mailto:abe.schnei...@gmail.com>> のメッセージ:
> 
>> Hi Robert,
>> 
>> Sorry, I’m not sure I understand your question. In c++ you can do the 
>> following:
>> 
>> struct Storage {};
>> struct CBlasStorage: Storage {};
>> 
>> template <typename S> class Tensor {};
>> 
>> template <typename S>
>> Tensor<S> dot(const Tensor<S> &lhs, const Tensor<S> &rhs) {
>>   std::cout << "general version called" << std::endl;
>>   Tensor<S> result;
>>   return result;
>> }
>> 
>> // specialized version for CBlasStorage
>> template <>
>> Tensor<CBlasStorage> dot(const Tensor<CBlasStorage> &lhs, const 
>> Tensor<CBlasStorage> &rhs) {
>>   std::cout << "specialized version called" << std::endl;
>>   Tensor<CBlasStorage> result;
>>   return result;
>> }
>> 
>> // this preserves type information and will call the appropriate `dot`
>> template <typename S>
>> void doSomething(const Tensor<S> &lhs, const Tensor<S> &rhs) {
>>   auto result = dot(lhs, rhs);
>> }
>> 
>> int main(int argc, char **argv) {
>>   Tensor<CBlasStorage> a, b;
>>   doSomething(a, b); // we should get "specialized version called"
>> }
>> 
>> 
>> The potential equivalent for Swift could look like:
>> 
>> @_specialize_all
>> func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> { … }
>> 
>> Which would cause the compiler to create a version of `dot` for each type S 
>> that it gets called with. Thus, when `doSomething` is called, it would 
>> dispatch to that version of `dot`, allowing the type information to be 
>> preserved in the same way it is in C++.
>> 
>> Abe
>> 
>>> On Feb 5, 2017, at 11:35 AM, Robert Widmann <devteam.cod...@gmail.com 
>>> <mailto:devteam.cod...@gmail.com>> wrote:
>>> 
>>> I don't understand how this change would cause method dispatch to invoke a 
>>> different prototype.  Specialization in either language men

[swift-evolution] Compile-time generic specialization

2017-02-05 Thread Abe Schneider via swift-evolution
Hi all,

The current behavior of generics in Swift causes type information to be lost 
at compile time due to the desire to maintain a single version of each 
function. This runs counter to how C++ works, which creates a new copy of a 
function per type, preserving type information. This can cause unexpected 
behavior from the user's perspective:

protocol DispatchType {}
class DispatchType1: DispatchType {}

func doBar<D: DispatchType>(value: D) {
    print("General function called")
}

func doBar(value: DispatchType1) {
    print("DispatchType1 called")
}

func test<D: DispatchType>(value: D) {
    doBar(value: value)
}

let d1 = DispatchType1()
test(value: d1) // "General function called", but it's not obvious why


The suggested method to get around this issue is to use a protocol to create a 
witness table, allowing for runtime dispatch. However, this approach is not 
ideal in all cases because: (a) the overhead of runtime dispatch may not be 
desirable, especially because this is something that can be determined at 
compile time; and (b) there are some designs in which this behavior can 
complicate things.

One example of a design where this behavior can be problematic is when a 
protocol is used to determine what functions get dispatched:

protocol Storage { … }
class Tensor<S: Storage> { … }

class CBlasStorage: Storage { … }
class OpenCLStorage: Storage { … }

func dot<S: Storage>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S> { … }

// unlike C++ behavior, these will not work if called from another generic
// function (but will work for non-generic functions)
func dot<S>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S>
    where S: CBlasStorage { … }
func dot<S>(_ lhs: Tensor<S>, _ rhs: Tensor<S>) -> Tensor<S>
    where S: OpenCLStorage { … }

In this case, depending on the underlying storage, we want an optimized 
version of `dot` to be called. To make this work correctly we can add static 
methods to `Tensor`, but this has several drawbacks: (a) it makes the `Tensor` 
class monolithic; every possible method must be determined a priori and be 
defined in the class; (b) it doesn't allow new methods to be added to Tensor 
without touching the main class; and (c) it unnecessarily forces users to use 
the more verbose `Tensor.dot(a, b)`.

Point (a) in theory could be made better by creating a `TensorOps` protocol. 
However, because type constraints cannot currently be placed on extensions, it 
is not currently possible to implement.
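Conditional conformances (later accepted as SE-0143) address exactly
this; a sketch, with illustrative names of my own, of how the
`TensorOps` idea would look under that feature:

```swift
protocol Storage {}
protocol CBlasStorage: Storage {}

struct Tensor<S: Storage> {
    var data: [Float] = []
}

protocol TensorOps {
    func dot(_ other: Self) -> Self
}

// The conformance itself is constrained: only Tensors whose storage is
// CBlas-backed pick up these operations.
extension Tensor: TensorOps where S: CBlasStorage {
    func dot(_ other: Tensor<S>) -> Tensor<S> {
        return Tensor<S>() // a real implementation would call into CBLAS
    }
}
```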


One potential solution would be to add/extend an attribute for generic 
functions that would force multiple versions of that function to be created. 
There already is a `@_specialize` attribute, but: (a) you have to manually 
write out all the cases you want to cover; and (b) it only affects the 
compiled code, which does not change this dispatch behavior. Due to the fact 
that `@_specialize` exists, I'm going to assume it wouldn't be a major change 
to the language to extend the behavior to compile-time dispatch.


Thanks!
Abe