Re: [swift-evolution] Default Generic Arguments

2017-02-02 Thread Alexis via swift-evolution

> On Jan 27, 2017, at 4:43 PM, Anton Zhilin via swift-evolution 
>  wrote:
> 
> Current alternative to default generic arguments is typealias, like 
> basic_string and string in C++:
> 
> struct BasicBigInt<Storage> { ... }
> typealias BigInt = BasicBigInt<Int>
This is a really great point, but it should be noted that this is only 
sufficient to accomplish source stability. Once the standard library starts 
providing ABI stability, this solution won’t work for it — the type of BigInt 
will become BasicBigInt<Int>, which will change mangling and other things. 
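
To make the distinction concrete, here is a sketch of both approaches (the defaulted-parameter syntax is the proposed feature, not current Swift):

```swift
// Today’s workaround: an alias, like C++’s basic_string/string.
struct BasicBigInt<Storage> { /* ... */ }
typealias BigInt = BasicBigInt<Int>
// `BigInt` is only a source-level name: the underlying type, and its mangled
// symbol, is still BasicBigInt<Int>.

// First-class default (proposed syntax, not current Swift):
// struct BigInt<Storage = Int> { /* ... */ }
// Here `BigInt` written unqualified really is BigInt<Int>, so the compiler
// could keep old binaries resolving to the same underlying type.
```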

First-class generic defaults, on the other hand, have the potential to be built 
out so that any binary compiled against the old type definition continues to 
work. The details of what this looks like depends on precisely how the final 
ABI shakes out.
___
swift-evolution mailing list
swift-evolution@swift.org
https://lists.swift.org/mailman/listinfo/swift-evolution


Re: [swift-evolution] Default Generic Arguments

2017-01-26 Thread Alexis via swift-evolution
I don’t have much skin in the nuance of “prefer user” (PU) vs “do what I mean” 
(DWIM) since, as far as I can tell, it’s backwards compatible to update from PU 
to DWIM. So we could conservatively adopt PU and then migrate to DWIM if that’s 
found to be intolerable. I expect it will be intolerable, though.

Also, a language subtlety here: there are lots of changes which are *strictly* 
source breaking, but tend to work out 99% of the time anyway because of 
things like inference. I’m not at all opposed to making things work out 99.9% 
of the time instead. For instance, if I changed the Iterator type some 
collection yielded, almost no one would notice because they just pass it into a 
for loop or call a standard Sequence method on it. Still, strictly a source 
breaking change. Someone’s code could stop compiling.
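
As a sketch (with a hypothetical `Bag` type), that iterator-type change looks like this:

```swift
// Version 1 of a library:
struct Bag: Sequence {
    func makeIterator() -> IndexingIterator<[Int]> {
        return [1, 2, 3].makeIterator()
    }
}

// Version 2 changes only the iterator type:
//     func makeIterator() -> AnyIterator<Int> { ... }

// Most callers never name the iterator type, so they keep compiling:
for x in Bag() { print(x) }

// But a caller who spelled the type out breaks on the update:
let it: IndexingIterator<[Int]> = Bag().makeIterator()
```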


Re: [swift-evolution] Default Generic Arguments

2017-01-26 Thread Alexis via swift-evolution

> On Jan 26, 2017, at 4:26 PM, Xiaodi Wu  wrote:
> 
> Very interesting point, Alexis. So can you reiterate again which of the four 
> options you outlined earlier support this use case? And if there are 
> multiple, which would be the most consistent with the rest of the language?
> 

Both “prefer user” and “DWIM” are consistent with my desired solution for this 
specific problem (they pick Int64). DWIM seems more consistent with the rest of 
Swift to me in that it tries harder to find a reasonable interpretation of your 
code before giving up. I think it also ends up having the simplest 
implementation in the current compiler. You can potentially just add a new 
tie-breaker if-statement in this code: 
https://github.com/apple/swift/blob/master/lib/Sema/CSRanking.cpp#L1010 


Something to the effect of “if one of these was recommended by a generic 
default, that one’s better”. This of course requires threading that information 
through the compiler.
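
A minimal sketch of the tie being broken, using the proposed default syntax (hypothetical, not current Swift):

```swift
// Proposed syntax: a user-supplied generic default that competes with the
// integer-literal fallback.
func foo<T: ExpressibleByIntegerLiteral = Int64>(_ t: T) { /* ... */ }

foo(22)
// Two candidate solutions for T: the literal’s fallback type (Int) and the
// user-supplied generic default (Int64). The suggested tie-breaker ranks the
// solution “recommended by a generic default” higher, yielding T == Int64.
```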



Re: [swift-evolution] Default Generic Arguments

2017-01-26 Thread Alexis via swift-evolution

> On Jan 25, 2017, at 8:15 PM, Xiaodi Wu  wrote:
> 
> Srdan, I'm afraid I don't understand your discussion. Can you simplify it for 
> me by explaining your proposed solution in terms of Alexis's examples below?
> 
> ```
> // Example 1: user supplied default is IntegerLiteralConvertible
> 
> func 

Re: [swift-evolution] Default Generic Arguments

2017-01-25 Thread Alexis via swift-evolution
Yes, I agree with Xiaodi here. I don’t find this particular example very 
compelling, especially because it doesn’t follow the full evolution of the 
APIs and their usage, which is critical for understanding how defaults should 
work.


Let's look at the evolution of an API and its consumers with the example of a 
BigInt:


struct BigInt: Integer {
  var storage: Array<Int> = []
}


which a consumer is using like:


func process(_ input: BigInt) -> BigInt { ... }
let val1 = process(BigInt())
let val2 = process(0) 


Ok that's all fairly straightforward. Now we decide that BigInt should expose 
its storage type for power-users:


struct BigInt<Storage: Integer = Int>: Integer {
  var storage: Array<Storage> = []
}


Let's make sure our consumer still works:


func process(_ input: BigInt) -> BigInt { ... }
let val1 = process(BigInt())
let val2 = process(0) 


Ok, BigInt in process’s definition now means BigInt<Int>, so this all still 
works fine. Perfect!


But then the developer of the process function catches wind of this new power 
user feature, and wants to support it.
So they too become generic:


func process<Storage: Integer>(_ input: BigInt<Storage>) -> BigInt<Storage> { ... }


The usage sites are now more complicated, and whether they should compile is 
unclear:


let val1 = process(BigInt())
let val2 = process(0) 


For val1 you can take a hard stance with your rule: BigInt() means 
BigInt<Int>(), and that will work. But for val2 this rule doesn't work, because 
no one has written BigInt unqualified. However if you say that the 
`Storage=Int` default is allowed to participate in this expression, then we can 
still find the old behaviour by defaulting to it when we discover Storage is 
ambiguous.

We can also consider another power-user function:


func fastProcess(_ input: BigInt<Int64>) -> BigInt<Int64> { ... }
let val3 = fastProcess(BigInt())


Again, we must decide the interpretation of this. If we take the interpretation 
that BigInt() has an inferred type, then the type checker should discover that 
BigInt<Int64> is the correct result. If however we take the stance that BigInt() 
means BigInt<Int>(), then we'll get a type checking error which our users will 
consider ridiculous: *of course* they wanted a BigInt<Int64> here!

We do however have the problem that this won’t work:


let temp = BigInt()
fastProcess(temp) // ERROR — expected BigInt<Int64>, found BigInt<Int>


But that’s just as true for normal ints:


let temp = 0
takesAnInt64(temp) // ERROR — expected Int64, found Int


Such is the limit of Swift’s inference scheme.



Re: [swift-evolution] Default Generic Arguments

2017-01-24 Thread Alexis via swift-evolution
It’s worth noting that the question of “how do these defaults interact with 
other defaults” is an issue that has left this feature dead in the water in the 
Rust language despite being accepted for inclusion two years ago. See 
https://internals.rust-lang.org/t/interaction-of-user-defined-and-integral-fallbacks-with-inference/2496
for some discussion of the issues at hand.

For those who don’t want to click that link, or are having trouble translating 
the syntax/terms to Swift: the heart of Niko’s post is the following (note: 
functions are used here for expedience; you can imagine these are `inits` for a 
generic type if you wish):

// Example 1: user supplied default is IntegerLiteralConvertible

func 

Re: [swift-evolution] Default Generic Arguments

2017-01-23 Thread Alexis via swift-evolution

> On Jan 23, 2017, at 3:18 PM, Srđan Rašić via swift-evolution 
>  wrote:
> 
> 
> I think such cases would be extremely rare and one would have to be very 
> ignorant about the types he/she works with. Additionally, that syntax is 
> useful only for types with one generic argument. Say we have `Promise<T, E = Error>` and declare property as `let p: Promise<Int>`. How would you convey 
> the information that there is a second argument that could be changed? 
> Keeping the comma would be very ugly :)

To elaborate on this, default arguments are also a powerful tool for 
introducing new generic parameters in a way that’s source compatible. 
(potentially ABI compatible? Haven’t thought out implications of that). For 
instance, if you have a collection type, and decide to expose the allocator as 
a type parameter, defaults give you a backwards compatible way to do that. 
Making developers annotate “I’m using defaults” throws that away. If you make 
this “only” a warning then you’re just making busywork for the 99% of 
developers who always wanted the default behaviour, and couldn’t care less that 
it’s now configurable.
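
The collection/allocator evolution might look like this under the proposal (hypothetical syntax and names):

```swift
// Version 1:
struct MyVector<Element> { /* ... */ }

// Version 2 exposes the allocator, defaulted to the old behaviour
// (proposed syntax, not current Swift):
struct MyVector<Element, Allocator = SystemAllocator> { /* ... */ }

// Existing source keeps compiling unchanged, with no annotation required:
var v: MyVector<Int>  // still means MyVector<Int, SystemAllocator>
```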

This would also go against the massive precedent set by default function 
arguments, which never need to be acknowledged.


Re: [swift-evolution] [Proposal draft] Limiting @objc inference

2017-01-06 Thread Alexis via swift-evolution
I’m a big fan of making @objc less implicit — I’ve frequently been left 
wondering when it’s actually necessary or not. I think the standard library 
kinda just uses it haphazardly in its bridging stuff, leaving me to wonder when 
it actually does anything (this is probably due to how things once worked in 
the Long Long Ago). 

I don’t consider my opinion to be too valuable here though, as someone who’s 
never really done meaningful ObjC work.

What I do have a stronger opinion on is that I’m pretty scared about implicitly 
breaking tons of code in late-binding ways. I’d be more sympathetic to this if 
we hadn’t declared source stability, but we have, and it doesn’t seem like this 
breakage meets the bar. Especially since there's a 100% safe migration solution 
we can implement, if I’m understanding correctly. Adding @objc everywhere it 
was being inferred seems fairly reasonable to me. This means no immediate 
benefits for existing code bases, but it still means:

* All new code bases will benefit.

* Old code bases will have a very clear opportunity to audit, if they deem it 
worth their time. Realistically they probably won’t, but it’s not like their 
code will be any slower/bloated than it was.




Re: [swift-evolution] [Pitch] Add the DefaultConstructible protocol to the standard library

2017-01-03 Thread Alexis via swift-evolution
Since people keep chiming in with “Rust has this”, I figured I should give the 
context for what’s up with Default in Rust. Disclaimer: I wasn’t around for the 
actual design of this API, but I worked with it a lot. So any justification I 
give is mostly my own posthoc perception of the purpose it serves today. I’ll 
also be using Swift terminology/syntax here since there’s no interesting aspect 
of Rust involved in this design.

There are three major use-cases for Default, as I see it:


1) providing conditional default initializers for generic types
2) providing a standard hook for easily writing “obvious” default initializers 
3) refining another protocol for one-off convenience methods



The first case is easy. I have a `Mutex<T>`, `Box<T>`, `Rc<T>`, etc.: generic 
types which require an instance of their generic type to exist. So of course 
their initializer requires a T. But it would be nice to not have to do this for 
types which have default constructors. So you have `extension Mutex: Default 
where T: Default`, and now you can do `Mutex()` where inference makes it clear 
what the type is. 

Here there’s no need to care about the “semantics” of Default. We’re just 
saying “if you can init() I can too!”. 
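
Translated to Swift, the first case is just a conditionally available initializer. A sketch, assuming a hypothetical `DefaultConstructible` protocol in place of Rust’s `Default`:

```swift
protocol DefaultConstructible {
    init()
}

struct Mutex<T> {
    private var value: T
    init(_ value: T) { self.value = value }
}

// “If you can init(), I can too!” No semantic claim is made about what the
// default value means.
extension Mutex where T: DefaultConstructible {
    init() { self.init(T()) }
}

extension Int: DefaultConstructible {}  // Int already has init()

let m: Mutex<Int> = Mutex()  // inference picks T = Int; wraps Int() == 0
```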




The second case is fairly Rust-specific, in that it combines with other 
features to make default initialization more ergonomic. Default provides a 
custom deriver, which makes a super convenient way to write default 
constructors for Plain Old Data types. #[derive(Default)] just says “yeah add a 
default initializer that loads up every field with its default”. Often this is 
done on a concrete type full of integers/optionals, in which case it’s 
synonymous with zeroing.

Since initializers in Swift are totally first-class, one could conceivably 
create this kind of Derive system without the need for protocols. Although 
#[derive(Default)] is generics-aware, so it can provide conditional 
conformances for generic types too.
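
A sketch of what `#[derive(Default)]` amounts to in Swift terms, assuming a hypothetical `DefaultConstructible` protocol:

```swift
protocol DefaultConstructible {
    init()
}

// A Plain Old Data type: every stored property supplies its own default,
// so Swift’s synthesized init() loads every field with that default. For a
// type full of integers/optionals this is effectively zeroing.
struct Config: DefaultConstructible {
    var retries: Int = 0
    var timeout: Double = 0
    var name: String? = nil
}

let c = Config()  // retries == 0, timeout == 0, name == nil
```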




The third case is the most complex (and niche). In effect, there are several 
places where you can make a slightly more ergonomic thing if you refine a 
protocol with “has a default initializer”. These default initializers are in no 
sense a requirement of the protocol, so including the initializer as a 
requirement of the protocol is incorrect. At the same time, no one really wants 
a bunch of adhoc DefaultConstructibleX protocols that are used by maybe one or 
two functions in the entire world. 

So Default is used as a universal modifier that can be applied to any protocol 
to create DefaultConstructibleX without anyone having to actually define or 
know about it. It’s a kind of retroactive modelling. If your type has some kind 
of reasonable default value, you conform to Default and maybe someone uses it. 
A particular user of `X + Default` then infers by example what a reasonable 
Default would mean in this context. 

Examples in Rust:



* H: BuildHasher + Default — Default applies the BuildHasher’s default seeding 
algorithm. For some algorithms this will go out to /dev/urandom, for others 
this will just set it to 0. That’s the call of the BuildHasher’s designer, and 
is hopefully made clear in its docs. However, there’s no reason why default 
constructibility is fundamental to a hashing algorithm. One could reasonably 
make the call that there isn’t a good default, and require it to be manually 
constructed. Possibly they could provide a couple wrappers which provide a 
clear default (MySecureHasher, MyWeakHasher).

This constraint is used by HashMap<K, V, S>’s default constructor. So in a 
sense this is just a more complex version of the first case, but we’re 
definitely inferring some semantics here. If a Default implementation doesn’t 
exist, then one must use HashMap’s with_hasher constructor to provide an 
instance of BuildHasher. 



* R: Rng + Default — same basic idea. Default seeding strategy so you don’t 
have to pass an instance of Rng. No reason why all Rng’s must be default 
constructible.



* T: Extend<Item> + Default — if something can be Extended and provides a default 
constructor, then presumably it’s some kind of collection. So default is 
presumed to be the empty collection. Again, Extend is more primitive than 
collections — one of the ends of a channel reasonably implements Extend, but 
default construction doesn’t make sense in that context. This is used by 

partition(predicate: (Item) -> Bool) -> (C, C)
where C: Extend<Item> + Default

which is basically just:



var yes = C()
var no = C()

for x in self {
  if predicate(x) {
    yes.extend(x)
  } else {
    no.extend(x)
  }
}

return (yes, no)


This is used similarly for unzip, which converts Iterator<(A, B)> to 
(CollectionOfA, CollectionOfB) 



This case represents a situation where the Rust and Swift devs have diverged a 
bit philosophically. There’s a tendency in the Rust community to make small 
“lego” protocols which you snap together to get the semantics you want on the 
off chance 

Re: [swift-evolution] Switch statement tuple labels

2017-01-03 Thread Alexis via swift-evolution
If the input has labels, including them in the pattern has clear value: the 
compiler can check that the labels you expected are there, preventing value 
swapping bugs. Being able to omit the labels in the pattern is a reasonable 
convenience to avoid repeating yourself over and over. But being able to insert 
arbitrary labels that don’t match anything from the input is very weird, 
because it doesn’t really make anything more convenient or give the compiler 
fuel to catch mistakes. 

The proposal is essentially asking for slightly nicer syntax for comments in 
patterns. That is,


switch (1, 2) {
case (width: 0, height: 0): 
…
}


is entirely equivalent to:


switch (1, 2) {
case (/*width:*/ 0, /*height:*/ 0): 
…
}



That said, there’s a very clear inconsistency here between if-case-let and 
switch-case-let, which you would expect to be semantically equivalent:



let t1: (Int, Int) = (0, 0)

// This works
if case let (a: 0, b: y5) = t1 {
  print("hello")
}

// This doesn't
switch t1 {
case let (a: 0, b: y6): 
print("hallo")
}


So unless someone has a compelling argument that if-case-let and 
switch-case-let should be different, one of these should probably be changed. 
Unless the behaviour of if-case-let is *clearly* a random compiler bug that no 
one is relying on, then source stability dictates that switch should become 
more permissive and allow these “comment” labels. I don’t have enough 
background in this feature’s design to say either way.


Re: [swift-evolution] [Discussion] Generic protocols

2016-12-09 Thread Alexis via swift-evolution
It seems like a lot of you are just trying to make different syntaxes for 
generic protocols, which I’m pretty sure was never the concern about them? We 
already have reasonable prior art here from the syntax of generic structs. The 
problem is that they add significant additional complexity to the language 
(they effectively add Higher Kinded Types to the language, as protocol 
conformances become functions over types).

I’m also pretty confident that allowing protocols to be multi-conformed without 
requiring some specific annotation on them is totally busted. The fact that you 
can uniquely determine the associated type of a conformance from the type of 
Self is an important property of the type system.

That is, 

func foo<S: Sequence>(seq: S) -> S.Item? { … }

let x: [Int] = …
let y = foo(x)

Only works because ([Int] as Sequence).Item can be uniquely determined. 
Otherwise you would require an annotation at the call site to “pick” the 
conformance (in this case y: Int would probably be sufficient, but you can 
easily imagine more complex cases).

Because of retroactive modeling, you can’t include a rule that says 
disambiguation is only necessary if a type actually conforms multiple times. 
The other conformances could be in another library! This is why you want 
generic and associated type parameters to be different — it lets the type 
system know which types are independent, and which are dependent. 

* Iterator can only be implemented once 
* ConvertibleTo<T> can be implemented many times (one for each type T)
* (Self as ConvertibleTo<T>).SomeAssociatedType is still uniquely determined



> On Dec 9, 2016, at 2:16 PM, Anton Zhilin via swift-evolution 
>  wrote:
> 
> A fundamental problem is, how do we get rid of associatedtype duplicates?
> 
> A pedantic approach is to add trait operations for conformances:
> 
> protocol Constructible {
> associatedtype Value
> init(_ value: Value)
> }
> 
> struct MyStruct {
> conformance Constructible {
> rename associatedtype ValueInt = Value
> }
> conformance Constructible {
> rename associatedtype ValueString = Value
> }
> 
> typealias ValueInt = Int  // or can be inferred
> init(_ value: Int)
> 
> typealias ValueString = String  // or can be inferred
> init(_ value: String)
> }
> This way, if there is a conflicting member, which does not use 
> any of the associated types, like func foo(), then we can give it different 
> meanings in different conformances.
> Although this approach is the cleanest one from a theoretical point of view, 
> choosing different names for associated types would not look very good in 
> practice.
> 
> One possible solution is to always automatically match associatedtypes, 
> without using typealiases.
> 
> protocol ConstructibleFromBoth {
> associatedtype First
> associatedtype Second
> init(first: First)
> init(second: Second)
> }
> 
> struct MyStruct : ConstructibleFromBoth {
> init(first: Int)
> init(second: Double)
> }
> 
> extension MyStruct {
> init(first: String)   // now there are 2 distinct conformances
> }
> 
> extension MyStruct {
> init(second: Float)   // now there are 4 distinct conformances
> }
> It introduces another potentially exponential algorithm for the compiler. 
> Although, does it? During a conformance test in some generic function, the 
> compiler will only need to find the first match or two.
> Anyway, I guess, people would prefer to state explicitly that a type conforms 
> to multiple versions of a protocol.
> 
> Attempt #3. We can resolve the conflict between associated types if we 
> delete (in the trait sense) conflicting associated types from the type. But with 
> extensions, all associated types can be made conflicting. So there needs to 
> be some attribute, marking, which of the associated types we don’t want. It 
> can lie at the place of conformance:
> 
> struct MyStruct { }
> extension MyStruct : @dontCreateTypealiases (Constructible where Value == 
> Int) { ... }
> extension MyStruct : @dontCreateTypealiases (Constructible where Value == 
> String) { ... }
> // MyStruct.Value.self  // error, no such type
> 
> struct NormalConformanceTest : Constructible { init(_ value: Float) }
> NormalConformanceTest.Value.self  //=> Float
> Or we can let constrained protocols syntax carry this attribute by default:
> 
> extension MyStruct : (Constructible where Value == Int) { ... }
> // MyStruct.Value.self  // error, no such type
> 
> struct NormalConformanceTest: Constructible { init(_ value: Float) }
> NormalConformanceTest.Value.self  //=> Float
> The only thing left to solve is generic protocol declaration syntax and 
> protocol specialization syntax. I’d like to present two ways to do this. 
> First, taking ideas from Rust:
> 
> protocol ContainsCollection<Element> {
> associatedtype CollectionType : Collection where CollectionType.Element 
> == Element
> func collection() -> CollectionType
> }
> 
> 

Re: [swift-evolution] [swift-evolution-announce] [Review] SE-0145: Package Manager Version Pinning (Revised)

2016-12-01 Thread Alexis via swift-evolution
Haven’t had a chance to catch up on the latest discussion, but I just saw that 
the Yarn developers posted an excellent piece on lockfiles this week:

https://yarnpkg.com/blog/2016/11/24/lockfiles-for-all 


They argue lockfiles should be committed by libraries (but still ignored by 
applications). The essential point is that this makes it easier for developers 
of the library to maintain a coherent build of the library when dependencies 
ship a bug. The focus is particularly on new developers, who would otherwise 
lack a lockfile.

> On Nov 20, 2016, at 12:48 AM, Anders Bertelrud  wrote:
> 
> Hello Swift community,
> 
> The review of "SE-0145: Package Manager Version Pinning" begins again after 
> revisions, starting now and running through November 28th. The proposal is 
> available here:
> 
>   
> https://github.com/apple/swift-evolution/blob/master/proposals/0145-package-manager-version-pinning.md
>  
> 
> 
> Reviews are an important part of the Swift evolution process. All reviews 
> should be sent to the swift-build-dev and swift-evolution mailing lists at
> 
>   https://lists.swift.org/mailman/listinfo/swift-build-dev 
> 
>   https://lists.swift.org/mailman/listinfo/swift-evolution 
> 
> 
> or, if you would like to keep your feedback private, directly to the review 
> manager.
> 
> What goes into a review?
> 
> The goal of the review process is to improve the proposal under review 
> through constructive criticism and contribute to the direction of Swift. When 
> writing your review, here are some questions you might want to answer in your 
> review:
> 
>   * What is your evaluation of the proposal?
>   * Is the problem being addressed significant enough to warrant a change 
> to Swift?
>   * Does this proposal fit well with the feel and direction of Swift?
>   * If you have used other languages or libraries with a similar feature, 
> how do you feel that this proposal compares to those?
>   * How much effort did you put into your review? A glance, a quick 
> reading, or an in-depth study?
> 
> More information about the Swift evolution process is available at
> 
>   https://github.com/apple/swift-evolution/blob/master/process.md 
> 
> 
> Thank you,
> 
> Anders Bertelrud
> Review Manager
> ___
> swift-evolution-announce mailing list
> swift-evolution-annou...@swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution-announce



Re: [swift-evolution] [Out of scope] Discussion on general Darwin/GlibC module

2016-11-11 Thread Alexis via swift-evolution
I agree that trying to completely unify low-level platforms is usually a mess. 
That said, I also don’t think accessing platform specific behaviour needs to 
involve completely throwing away the nice abstractions in Foundation. Wherever 
possible, we should provide platform-specific extensions to the types in 
Foundation. For instance, we could expose methods/inits that operate in terms 
of file descriptors on unix-y systems, and handle_t on windows.
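
A sketch of what such an opt-in, platform-specific extension could look like (hypothetical API shape, not actual Foundation):

```swift
import Foundation

#if canImport(Darwin) || canImport(Glibc)
extension FileHandle {
    // Hypothetical: expose unix-y file descriptors only where they exist,
    // layered on the real init(fileDescriptor:).
    convenience init(unixDescriptor fd: Int32) {
        self.init(fileDescriptor: fd)
    }
}
#elseif os(Windows)
// A parallel extension could accept a handle_t here instead.
#endif
```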

But I also think there should be some opt-in to doing this, so that Foundation 
users can be confident they’re writing portable software by default. I don’t 
think imports should be the mechanism for this because this necessarily forces 
awkward divisions. I’m cautiously optimistic the feature flag system we need to 
build out for language evolution purposes will provide a good fit here. Opting 
into platform-specific behaviour is fairly similar to opting into experimental 
APIs. 

(Note: I haven’t actually used Foundation much, so this may be inconsistent 
with its overarching design)

> On Nov 10, 2016, at 10:48 PM, Drew Crawford via swift-evolution 
>  wrote:
> 
> grep -R "import Glibc" ~/Code --include "*.swift" | wc -l
> 297
> 
> As someone who might be characterized as suffering from the problem this 
> proposal purports to solve, I am not convinced.
> 
> The primary problem here is that "libc" is a misnomer.  Did you mean musl, 
> dietlibc, or glibc?  Did you mean "whatever libc my distro likes?"  Swift in 
> practice only supports one per platform, but that is a bug not a feature, and 
> that bug should not be standardized.  We could try to invent some syntax to 
> specify one but now we are back with the current system again.
> 
> The other problem is that in all my usages, "import Glibc" is not a real 
> problem I face.  The real problems are that "the libcs plural" are *just 
> different*.  Darwin has timeval64, glibc does not, and you'd better check 
> your arch and pick the right one, only on one platform.  SO_REUSEADDR has one 
> type in Brand X and another type in Brand Y.  Don't even get me *started* on 
> poll, EREs, or half a dozen other behavioral variations.  
> 
> Taking two different libraries and pretending they are the same is not the 
> solution, it's the disease.  The way out of this swamp for most developers is 
> to use a real Swift library, the same damn Swift library, on all platforms 
> (sadly, Foundation today does not meet this requirement).  The way out of 
> this swamp for crazy people like me who must write to the metal is to 
> actually write to the metal, to the particular libc being targeted, not to a 
> hypothetical platonic ideal libc which does not exist.  
> 
> I realize that four lines at the top of my files is a *visible* annoyance, 
> but fixing it just promotes it to an invisible one. 
> 
> Drew
> 
> --
>   Drew Crawford
>   d...@sealedabstract.com
> 
> 
> 
> On Wed, Nov 9, 2016, at 12:58 PM, Alex Blewitt via swift-evolution wrote:
>> Although out of scope for phase 1, something that keeps cropping up in a 
>> variety of Linux/Darwin Swift scripts is the conditional inclusion of Darwin 
>> or GlibC per platform. The last point was an observation that creating a 
>> 'nice' wrapper for LibC or a cleaned up POSIX API is a non-goal:
>> 
>> https://lists.swift.org/pipermail/swift-evolution/Week-of-Mon-20161003/027621.html
>>  
>> 
>> 
>>> I think it makes sense to have a cross platform “libc” which is an alias 
>>> for darwin, glibc, or whatever, and just leave it at that.
>>> 
>>> Other proposals for a “POSIX” module have gotten bogged down because 
>>> inevitably the idea comes up to make the resultant API nicer in various 
>>> ways: rename creat, handle errno more nicely, make use of multiple return 
>>> values, … etc.  The problem with this approach is that we don’t *want* 
>>> people using these layer of APIs, we want higher level Foundation-like APIs 
>>> to be used.
>>> 
>>> ...
>>> 
>>> I think we should formally decide that a “nice” wrapper for libc is a 
>>> non-goal.  There is too much that doesn’t make sense to wrap at this level 
>>> - the only Swift code that should be using this is the implementation of 
>>> higher level API, and such extremely narrow cases that we can live with 
>>> them having to handle the problems of dealing with the raw APIs directly.
>>> 
>>> -Chris
>> 
>> I have created a draft for a proposal to create such a module. Comments are 
>> welcome.
>> 
>> Alex
>> 
>> ---
>> 
>> # Libc module for Swift
>> 
>> * Proposal: [SE-](-filename.md)
>> * Authors: [Alex Blewitt](https://github.com/alblue)
>> * Review Manager: TBD
>> * Status: **Under discussion**
>> 
>> ## Introduction
>> 
>> When running on Darwin, the base module is called `Darwin`. When running
>> on Linux or other operating systems, it's called `GlibC`. 
>> 
>> This 

Re: [swift-evolution] Contiguous Memory and the Effect of Borrowing on Safety

2016-11-11 Thread Alexis via swift-evolution


> On Nov 10, 2016, at 8:17 PM, Dave Abrahams via swift-evolution 
>  wrote:
> 
> 
> on Thu Nov 10 2016, Joe Groff  > wrote:
> 
>>> On Nov 10, 2016, at 1:02 PM, Dave Abrahams  wrote:
>>> 
>>> 
>>> on Thu Nov 10 2016, Stephen Canon  wrote:
>>> 
>> 
> On Nov 10, 2016, at 1:30 PM, Dave Abrahams via swift-evolution 
>  wrote:
> 
> 
> on Thu Nov 10 2016, Joe Groff  wrote:
> 
 
>>> On Nov 8, 2016, at 9:29 AM, John McCall  wrote:
>>> 
 On Nov 8, 2016, at 7:44 AM, Joe Groff via swift-evolution 
  wrote:
> On Nov 7, 2016, at 3:55 PM, Dave Abrahams via swift-evolution 
>  wrote:
> 
>> 
> 
> on Mon Nov 07 2016, John McCall  wrote:
> 
>>> On Nov 6, 2016, at 1:20 PM, Dave Abrahams via swift-evolution 
>>>  wrote:
>>> 
>>> 
>>> Given that we're headed for ABI (and thus stdlib API) stability, 
>>> I've
>>> been giving lots of thought to the bottom layer of our collection
>> 
>>> abstraction and how it may limit our potential for efficiency.  In
>>> particular, I want to keep the door open for optimizations that 
>>> work on
>>> contiguous memory regions.  Every cache-friendly data structure, 
>>> even if
>>> it is not an array, contains contiguous memory regions over which
>>> operations can often be vectorized, that should define boundaries 
>>> for
>>> parallelism, etc.  Throughout Cocoa you can find patterns designed 
>>> to
>>> exploit this fact when possible (NSFastEnumeration).  Posix I/O 
>>> bottoms
>>> out in readv/writev, and MPI datatypes essentially boil down to
>>> identifying the contiguous parts of data structures.  My point is 
>>> that
>>> this is an important class of optimization, with numerous real-world
>>> examples.
>>> 
>>> If you think about what it means to build APIs for contiguous memory
>>> into abstractions like Sequence or Collection, at least without
>>> penalizing the lowest-level code, it means exposing 
>>> UnsafeBufferPointers
>>> as a first-class part of the protocols, which is really
>>> unappealing... unless you consider that *borrowed* 
>>> UnsafeBufferPointers
>>> can be made safe.  
>>> 
>>> [Well, it's slightly more complicated than that because
>>> UnsafeBufferPointer is designed to bypass bounds checking in release
>>> builds, and to ensure safety you'd need a BoundsCheckedBuffer—or
>>> something—that checks bounds unconditionally... but] the point 
>>> remains
>>> that
>>> 
>>> A thing that is unsafe when it's arbitrarily copied can become safe 
>>> if
>>> you ensure that it's only borrowed (in accordance with 
>>> well-understood
>>> lifetime rules).
>> 
>> UnsafeBufferPointer today is a copyable type.  Having a borrowed 
>> value
>> doesn't prevent you from making your own copy, which could then 
>> escape
>> the scope that was guaranteeing safety.
>> 
>> This is fixable, of course, but it's a more significant change to the
>> type and how it would be used.
> 
> It sounds like you're saying that, to get static safety benefits from
> ownership, we'll need a whole parallel universe of safe move-only
> types. Seems a cryin' shame.
 
 We've discussed the possibility of types being able to control
 their "borrowed" representation. Even if this isn't something we
 generalize, arrays and contiguous buffers might be important enough
 to the language that your safe BufferPointer could be called
 'borrowed ArraySlice', with the owner backreference optimized
 out of the borrowed representation. Perhaps Array's own borrowed
 representation would benefit from acting like a slice rather than a
 whole-buffer borrow too.
>>> 
>>> The disadvantage of doing this is that it much more heavily
>>> penalizes the case where we actually do a copy from a borrowed
>>> reference — it becomes an actual array copy, not just a reference
>>> bump.
>> 
>> Fair point, though the ArraySlice/Array dichotomy strikes me as
>> already kind of encouraging this—you might pass ArraySlices down into
>> your algorithm, but we encourage people to use Array at storage and
>> API boundaries, forcing copies.
>> 
>> From a philosophical perspective of making systems Swift feel like
>> "the same 

Re: [swift-evolution] Contiguous Memory and the Effect of Borrowing on Safety

2016-11-07 Thread Alexis via swift-evolution

> On Nov 6, 2016, at 4:20 PM, Dave Abrahams via swift-evolution 
>  wrote:
> 
> 
> Given that we're headed for ABI (and thus stdlib API) stability, I've
> been giving lots of thought to the bottom layer of our collection
> abstraction and how it may limit our potential for efficiency.  In
> particular, I want to keep the door open for optimizations that work on
> contiguous memory regions.  Every cache-friendly data structure, even if
> it is not an array, contains contiguous memory regions over which
> operations can often be vectorized, that should define boundaries for
> parallelism, etc.  Throughout Cocoa you can find patterns designed to
> exploit this fact when possible (NSFastEnumeration).  Posix I/O bottoms
> out in readv/writev, and MPI datatypes essentially boil down to
> identifying the contiguous parts of data structures.  My point is that
> this is an important class of optimization, with numerous real-world
> examples.
> 
> If you think about what it means to build APIs for contiguous memory
> into abstractions like Sequence or Collection, at least without
> penalizing the lowest-level code, it means exposing UnsafeBufferPointers
> as a first-class part of the protocols, which is really
> unappealing... unless you consider that *borrowed* UnsafeBufferPointers
> can be made safe.  
> 
> [Well, it's slightly more complicated than that because
> UnsafeBufferPointer is designed to bypass bounds checking in release
> builds, and to ensure safety you'd need a BoundsCheckedBuffer—or
> something—that checks bounds unconditionally... but] the point remains
> that
> 
>  A thing that is unsafe when it's arbitrarily copied can become safe if
>  you ensure that it's only borrowed (in accordance with well-understood
>  lifetime rules).
> 
> And this leads me to wonder about our practice of embedding the word
> "unsafe" in names.  A construct that is only conditionally unsafe
> shouldn't be spelled "unsafe" when used in a safe way, right?  So this
> *seems* to argue for an "unsafe" keyword that can be used to label
> the constructs that actually add unsafety (as has been previously
> suggested on this list).  Other ideas are of course most welcome.
> 

Yes, I’ve always found this more appealing (“operations are unsafe, not 
types”). This allows you to make more subtle distinctions, and expose “low 
level” APIs for otherwise safe types (e.g. unchecked indexing on Array). I 
believe Graydon made a draft proposal for this a while back, but neither of us 
can recall what became of it.
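As a concrete illustration of "operations are unsafe, not types": an unchecked indexing operation on Array can already be sketched in today's Swift. This is only a sketch — the `unchecked` argument label is invented here, not a stdlib API:

```swift
// Sketch only: an explicitly-unsafe *operation* on the otherwise-safe Array
// type. The "unchecked" label (hypothetical) marks the unsafety at the call
// site, instead of requiring a separate unsafe pointer type.
extension Array {
    /// Reads the element at `index`, bypassing the release-mode bounds check.
    /// Unchecked precondition: `indices.contains(index)`.
    subscript(unchecked index: Int) -> Element {
        return withUnsafeBufferPointer { $0[index] }
    }
}

let xs = [10, 20, 30]
let middle = xs[unchecked: 1]  // caller asserts the index is valid
```

(UnsafeBufferPointer's subscript only bounds-checks in debug builds, so in release this compiles down to an unchecked load.)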

That said, in this particular case the distinction isn’t very helpful: 
basically everything you can do with an Unsafe(Buffer)Pointer is truly unsafe 
today, and I wouldn’t really expect this to change with ownership stuff. You 
need a completely unchecked pointer type for the very lowest levels of 
abstractions, where scoped lifetimes can’t capture the relationships that are 
involved.

I would expect there to be two types of interest, one with safe borrowed 
semantics (Pointer/BufferPointer?), and one with unsafe unchecked semantics 
(today’s UnsafePointer/UnsafeBufferPointer). For those familiar with Rust, this 
is roughly equivalent to &T, &[T], *mut T, and *mut [T], respectively. Most 
APIs should operate in terms of the safe types, requiring the holder of an 
unsafe type to do some kind of cast, asserting that the whatever guarantees the 
safe types make will be upheld.

99% of code should subsequently never actually interact with the Unsafe types, 
instead using the safe ones. Anything that does use the Unsafe types should 
try to get into the world of safe types as fast as possible. For 
instance, much of Rust’s growable array type (Vec) is implemented as “convert 
my unsafe pointer into a safe, borrowed slice, then operate on the slice”. 
Similarly, any API which is interested in passing around a non-growable pile of 
memory communicates in terms of these slices.
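Swift's existing `with*` APIs already gesture at this "get safe fast" pattern: the unsafe view's validity is scoped to a closure, and all the real work happens against that scoped buffer. A small sketch:

```swift
// The unsafe buffer's lifetime is confined to the closure; the algorithm
// operates on the scoped view rather than holding a long-lived raw pointer.
func sum(_ values: [Int]) -> Int {
    return values.withUnsafeBufferPointer { buffer in
        var total = 0
        for element in buffer {
            total += element
        }
        return total
    }
}
```

The closure boundary is exactly the kind of lifetime rule that would let a borrowed buffer type be statically safe.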


> -- 
> -Dave
> 


Re: [swift-evolution] Why doesn't removeLast() on Collection return an optional?

2016-10-20 Thread Alexis via swift-evolution
I’m fairly confident the author of the collection has to make those checks for 
memory-safety, but in theory there are wins in doing the check only once, and as 
early as possible: smaller values to pass, and fewer checks. 

This is definitely micro-micro-optimization, though. Unlikely to matter for 
most cases.
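For reference, Array already exposes both flavors, so callers can choose whether to pay for the Optional:

```swift
var stack = [1, 2, 3]

// popLast() returns nil on an empty array instead of trapping...
let top: Int? = stack.popLast()    // Optional(3)

// ...while removeLast() assumes non-emptiness: no Optional to unwrap,
// but it traps if the array is empty.
let next: Int = stack.removeLast() // 2
```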

> On Oct 18, 2016, at 6:00 PM, Max Moiseev via swift-evolution 
>  wrote:
> 
> Yes, if the author of the collection you’re using performs the check in 
> `removeLast`, but they don’t have to.
> 
>> On Oct 18, 2016, at 1:28 PM, Jean-Daniel  wrote:
>> 
>> 
>>> Le 17 oct. 2016 à 23:20, Max Moiseev via swift-evolution 
>>>  a écrit :
>>> 
>>> Hi Louis,
>>> 
>>> I believe, sometimes there are situations where you know for sure that your 
>>> collection is not empty. Maybe you are already in the context where the 
>>> check has been performed. In these cases there is no reason you’d have to 
>>> pay the price of an emptiness check once again.
>> 
>> You have to pay the price anyway, as the check has to be performed to decide 
>> if the software should abort.
>> 
>> 
> 


Re: [swift-evolution] [Pitch] deprecating ManagedBufferPointer

2016-10-19 Thread Alexis via swift-evolution
A bit late to this game, because I didn’t fully understand the “point” of 
ManagedBuffer(Pointer). After a good week of messing around with these in 
Dictionary/Array/String, I now have Opinions.

I agree ManagedBufferPointer is largely unnecessary. However it’s seeming a lot 
like ManagedBuffer (and its equivalents) are suboptimal for the standard 
library’s purposes too!

In particular, pretty much every one of these buffers that I see wants to be a 
subclass of some NS* collection so that it can be toll-free bridged into 
objective C. This means that all those types are forced to directly drop down 
to allocWithTailElems, rather than using a nice abstraction that does it for 
them. Array does this right now, and I’ve got a PR up for review that’s doing 
the same thing to the HashedCollections. It’s an outstanding bug that String 
isn’t doing this (forcing its buffer to be wrapped in another class to be 
bridged).

I don’t really feel any pain from directly using allocWithTailElems, it’s a 
great API. It just leaves me at a loss for when I’d reach for ManagedBuffer at 
all, as it’s very limited.


> On Oct 13, 2016, at 3:11 PM, Erik Eckstein via swift-evolution 
>  wrote:
> 
> I created a proposal: https://github.com/apple/swift-evolution/pull/545 
> 
> 
>> On Oct 11, 2016, at 11:32 PM, Dave Abrahams via swift-evolution wrote:
>> 
>> 
>> on Tue Oct 11 2016, Károly Lőrentey wrote:
>>> +1
>>> 
>>> ManagedBuffer has been really useful a couple of times, but I never
>>> found a use for ManagedBufferPointer. I can’t even say I’m entirely
>>> sure what need it was originally designed to fulfill.
>> 
>> The real need is/was to be able to do the same kind of storage
>> management in classes not derived from ManagedBuffer.  This can be
>> important for bridging, where the buffers of various native swift
>> containers need to be derived from, e.g., NSString or NSArray.  That is,
>> however, an extremely stdlib-specifc need.
>> 
>> 
 On 2016-10-11, at 00:12, Erik Eckstein via swift-evolution wrote:
 
 The purpose of ManagedBufferPointer is to create a buffer with a custom 
 class-metadata to be able
>>> to implement a custom deinit (e.g. to destroy the tail allocated elements).
 It was used in Array (before I replaced it with the new 
 tail-allocated-array-built-ins). But now
>>> it’s not used anymore in the standard library.
 
 As a replacement for ManagedBufferPointer one can just derive a class from 
 ManagedBuffer and implement the deinit in the derived class.
 
 final class MyBuffer : ManagedBuffer<MyHeader, MyElement> {
   deinit {
     // do whatever needs to be done
   }
 }
 
 // creating MyBuffer:
 let b = MyBuffer.create(minimumCapacity: 27, makingHeaderWith: { myb in
   return MyHeader(...) })
 
 IMO ManagedBuffer is much cleaner than ManagedBufferPointer (it doesn’t 
 need this custom
>>> bufferClass to be passed to the constructor). Also ManagedBufferPointer 
>>> doesn’t use SIL
>>> tail-allocated arrays internally. Although this is not something visible to 
>>> the programmer, it makes
>>> life easier for the compiler.
 
 So I suggest that we deprecate ManagedBufferPointer.
 
 Erik
>> 
>> -- 
>> -Dave
>> 


Re: [swift-evolution] Proposal: Package Manager Version Pinning

2016-10-14 Thread Alexis via swift-evolution


> On Oct 14, 2016, at 2:01 AM, Ankit Aggarwal via swift-evolution 
>  wrote:
> 
> Hi,
> 
> We're proposing version pinning feature in Swift Package Manager. The 
> proposal is available here 
> 
>  and also in this email:
> 
> Feedback welcomed!
> 
> Thanks,
> Ankit
> 
> 
> 
> Package Manager Version Pinning
> Proposal: SE-
> Author: Daniel Dunbar , Ankit Aggarwal 
> 
> Review Manager: TBD
> Status: Discussion
> Introduction
> This is a proposal for adding package manager features to "pin" or "lock" 
> package dependencies to particular versions.
> 
> Motivation
> As used in this proposal, version pinning refers to the practice of 
> controlling exactly which specific version of a dependency is selected by the 
> dependency resolution algorithm, independent from the semantic versioning 
> specification. Thus, it is a way of instructing the package manager to select 
> a particular version from among all of the versions of a package which could 
> be chosen while honoring the dependency constraints.
> 
> Terminology
> 
> We have chosen to use "pinning" to refer to this feature, over "lockfiles", 
> since the term "lock" is already overloaded between POSIX file locks and 
> locks in concurrent programming.
> 
I’ve never seen this cause any actual confusion, nor has anyone I know who 
teaches/develops these sorts of tools. As far as I can tell, the broader 
programming community is rapidly converging on this as standard terminology:

* Gemfile.lock (Ruby)
* Cargo.lock (Rust)
* Composer.lock (PHP)
* yarn.lock (JS)
* pubspec.lock (Dart)
* Podfile.lock (Swift/Objc!)

Diverging from this seems counter-productive.
> Philosophy
> 
> Our philosophy with regard to pinning is that we actively want to encourage 
> packages to develop against the latest semantically appropriate versions of 
> their dependencies, in order to foster rapid development amongst the 
> ecosystem and strong reliance on the semantic versioning concept. Our design 
> for version pinning is thus intended to be a feature for package authors and 
> users to use in crafting specific workflows, not be a mechanism by which most 
> of the packages in the ecosystem pin themselves to specific versions of each 
> other.
> 
> Use Cases
> 
> Our proposal is designed to satisfy several different use cases for such a 
> behavior:
> 
> Standardizing team workflows
> 
> When collaborating on a package, it can be valuable for team members (and 
> continuous integration) to all know they are using the same exact version of 
> dependencies, to avoid "works for me" situations.
> 
> This can be particularly important for certain kinds of open source projects 
> which are actively being cloned by new users, and which want to have some 
> measure of control around exactly which available version of a dependency is 
> selected.
> 
> Difficult to test packages or dependencies
> 
> Complex packages which have dependencies which may be hard to test, or hard 
> to analyze when they break, may choose to maintain careful control over what 
> versions of their upstream dependencies they recommend -- even if 
> conceptually they regularly update those recommendations following the true 
> semantic version specification of the dependency.
> 
> Dependency locking w.r.t. deployment
> 
> When stabilizing a release for deployment, or building a version of a package 
> for deployment, it is important to be able to lock down the exact versions of 
> dependencies in use, so that the resulting product can be exactly recreated 
> later if necessary.
> 
> Proposed solution
> We will introduce support for an optional new file Package.pins adjacent to 
> the Package.swift manifest, called the "pins file". We will also introduce a 
> number of new commands (see below) for maintaining the pins file.
> 
> This file will record the active version pin information for the package, 
> including data such as the package identifier, the pinned version, and 
> explicit information on the pinned version (e.g., the commit hash/SHA for the 
> resolved tag).
> 
> The exact file format is unspecified/implementation defined, however, in 
> practice it will be a JSON data file.
> 
> This file may be checked into SCM by the user, so that its effects apply to 
> all users of the package. However, it may also be maintained only locally 
> (e.g., placed in the .gitignore file). We intend to leave it to package 
> authors to decide which use case is best for their project.
> 
> In the presence of a Package.pins file, the package manager will respect the 
> pinned dependencies recorded in the file whenever it needs to do dependency 
> resolution (e.g., on the initial checkout or when updating).
> 
> The pins file will not override Manifest specified version requirements and 
> it will be an error (with proper 

Re: [swift-evolution] [Proposal Draft] Provide Custom Collections for Dictionary Keys and Values

2016-10-12 Thread Alexis via swift-evolution
Just to clarify: It seems like the only ABI-affecting change here is the type 
of keys/values. As you note at the end of your proposal, this should just be 
Dictionary.Keys/Dictionary.Values regardless of whether we implement this 
proposal or not, in which case this can be punted for Swift 4. It should be 
fine to keep .Keys/.Values resilient so that we can change their implementation 
details later if we want.

On the actual proposal: this is pretty reasonable given Swift’s current 
design and constraints. That said, I expect pushing forward on this kind of 
thing right now is premature given the goals of Swift 4. A major aspect of 
Swift 4 is reworking the way CoW semantics function internally, which could 
drastically affect the way we approach this problem.

I’d really like if we could eliminate the “double search/hash” in the 
no-existing-key case. There are ways to do this really cleanly, but they 
probably involve more advanced CoW-safety propagation. In particular, you want 
some way for the collection to return its search state to the caller so that 
they can hand it back to insertion to just resume from there.

For instance:

map.entries[key]           // An enum like Found(Value) | NotFound(SearchState)
   .withDefault(value: []) // Unwrap the enum by completing the NotFound(SearchState)
   .append(1)              // Now we have a value in both cases, we can append!



Or more complex:

map.entries[key] 
   .withDefault { /* logic that computes value */ }
   .append(1)

I think this can be made to work in the current system if withDefault is 
actually `[withDefault:]`, which is fine but a bit weird from a user’s 
perspective.
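A minimal sketch of that `[withDefault:]` spelling, writable in today's Swift — note it still performs the double lookup this thread is trying to avoid (one search in the getter, one in the setter):

```swift
extension Dictionary {
    // Hypothetical convenience subscript: read with a fallback value,
    // and write straight back into the dictionary.
    subscript(key: Key, withDefault defaultValue: Value) -> Value {
        get { return self[key] ?? defaultValue }
        set { self[key] = newValue }
    }
}

var map: [String: [Int]] = [:]
map["a", withDefault: []].append(1)  // get falls back to [], set stores [1]
map["a", withDefault: []].append(2)  // get finds [1], set stores [1, 2]
// map["a"] == [1, 2]
```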

In an ideal world the user could actually pattern match on the result of 
`entries[key]`. In this way they could match on it and perform special logic in 
both cases for really complex situations. This would make withDefault “just a 
convenience”, so we aren’t pressured to add more methods like it every time 
someone has a new Even More Complex use-case. e.g.:

switch map.entries[key] {
case .Found(let entry):
  if entry.value == 10 {
    entry.remove()
    print(“Found a value too many times! Moving key to fast-path auxiliary structure…”)
  } else {
    entry.value += 1
  }
case .NotFound(let entry):
  entry.insert(1)
  print(“Found a value for the first time! Registering a bunch of extra stuff…”)
}


But again, this is all dependent on a much more powerful SIL/ARC, and we just 
don’t know what we’re going to get at this stage.


Re: [swift-evolution] [Proposal draft] Conditional conformances

2016-10-03 Thread Alexis via swift-evolution
Below I’ve provided a more fleshed out version of what Dave is suggesting, for 
anyone who had trouble parsing the very hypothetical example. It reflects the 
kind of implementation specialization I would expect to see in the standard 
library. In fact we have exactly this concern baked into value witness tables 
today by differentiating the various *Buffer methods so that they can be no-ops 
or memcpys for trivial values (rather than requiring code to loop over all the 
elements and call potentially-no-op methods on each element).

But the value witness table’s approach to this problem suggests some healthier 
solutions to the problem (for Swift’s particular set of constraints):

1) Add default-implemented fullAlgorithm() methods to the protocol. Anyone who 
can beat the default algorithm provides their own implementation. Consumers of 
the protocol then dispatch to fullAlgorithm(), rather than the lower-level 
primitives.

2) Add “let hasInterestingProperty: bool” flags to the protocol. Consumers of 
the protocol can then branch on these flags to choose a “slow generic” or “fast 
specific” implementation. (this is, after all, exactly what we’re asking the 
runtime to do for us!)

Of course, (1) and (2) aren’t always applicable solutions. Both only really 
apply if you’re the original creator of the protocol; otherwise no one will 
know about fullAlgorithm or hasInterestingProperty and be able to modify the 
default. It can also be really tedious to provide your own implementation of 
fullAlgorithm(), especially if everyone overloads it in the same way. These 
are, however, perfectly reasonable approaches if you’re just trying to 
specialize for a small, closed, set of types. Something like:

genericImpl()
stringImpl()
intImpl()

You can handle that pretty easily with extensions or super-protocols, I think.
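To make approach (1) concrete, here's a toy sketch (all names invented for illustration) of a default fullAlgorithm() that a conforming type can beat:

```swift
// Approach (1), sketched: consumers call fullSum() (the "fullAlgorithm"),
// never the lower-level primitives, so specializations are picked up by
// ordinary protocol dispatch rather than runtime type tests.
protocol Summable {
    var values: [Int] { get }
    func fullSum() -> Int
}

extension Summable {
    // General but slow default: walks every element.
    func fullSum() -> Int {
        return values.reduce(0, +)
    }
}

struct RepeatedValue: Summable {
    let value: Int
    let count: Int
    var values: [Int] { return Array(repeating: value, count: count) }
    // Beats the default: O(1) instead of O(n).
    func fullSum() -> Int { return value * count }
}

func total<T: Summable>(_ x: T) -> Int {
    return x.fullSum()  // dispatches to the specialization when one exists
}
```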

I’m cautiously optimistic we can get pretty far before we really feel the need 
to introduce specialization like this. Although I’m used to handling this issue 
in a world of monomorphic generics; so I’m not sure if the performance 
characteristics of polymorphic generics will shift the balance to making 
specialization more urgent. Or perhaps the opposite — the runtime impact of 
specialization could be too high!


// Some kind of "super copy" operation
public protocol Clone {
  func clone() -> Self
}

// Can just memcpy instead of calling Clone
public protocol TrivialClone: Clone { }

// A terrible data structure
public struct FakeArray<T> { let vals: (T, T, T) }



// --
// A dirty hack to get overlapping impls (specifically specialization)
// through overlapping extensions.

internal protocol CloneImpl {
  associatedtype TT: Clone
}

extension CloneImpl {
  static func clone(input: FakeArray<TT>) -> FakeArray<TT> {
// Have to manually invoke generic `clone` on each element
FakeArray(vals: (input.vals.0.clone(),
 input.vals.1.clone(),
 input.vals.2.clone()))
  }
}

extension CloneImpl where TT: TrivialClone {
  static func clone(input: FakeArray<TT>) -> FakeArray<TT> {
// Can just copy the whole buffer at once (ideally a memcpy)
FakeArray(vals: input.vals)
  }
}


// Inject our specialized Clone impl
// (doesn't compile today because this is a conditional conformance)
extension FakeArray: Clone where T: Clone {
  // A dummy to get our overlapping extensions
  // (doesn't compile today because we can't nest types in a generic type)
  struct CloneImplProvider : CloneImpl {
typealias TT = T
  }
  
  func clone() -> FakeArray<T> {
CloneImplProvider.clone(input: self)
  }
}

// -
// Using Clone and the specialization

// Some plain-old-data
struct POD : TrivialClone {
  func clone() -> POD { return self }
}

// Works with any Clone type
func generic<T: Clone>(_ value: T) -> T {
  return value.clone()
}

// Pass in a FakeArray that should use the fast specialization for Clone
generic(FakeArray(vals: (POD(), POD(), POD())))




> On Sep 30, 2016, at 11:18 PM, Dave Abrahams via swift-evolution 
>  wrote:
> 
> 
> on Fri Sep 30 2016, Matthew Johnson  wrote:
> 
>>> It’s a valid concern, and I’m sure it does come up in practice. Let’s 
>>> create a small, self-contained example:
>>> 
>>> protocol P {
>>>  func f()
>>> }
>>> 
>>> protocol Q: P { }
>>> 
>>> struct X<T> { let t: T }
>>> 
>>> extension X: P where T: P {
>>>  func f() {
>>>/* general but slow */
>>>  }
>>> }
>>> 
>>> extension X where T: Q {
>>>  func f() {
>>>/* fast because it takes advantage of T: Q */
>>>  }
>>> }
>>> 
>>> struct IsQ : Q { }
>>> 
>>> func generic<U: P>(_ value: U) {
>>>  value.f()
>>> }
>>> 
>>> generic(X(t: IsQ()))
>>> 
>>> We’d like for the call to “value.f()” to get the fast version of f()
>>> from the second extension, but the proposal doesn’t do that: the
>>> conformance to P is “locked in” to the first extension.
> 
> I suppose that's true even if the