Re: [swift-evolution] [Planning][Request] "constexpr" for Swift 5

2017-08-06 Thread John McCall via swift-evolution
> On Aug 6, 2017, at 11:59 PM, Daryle Walker wrote:
>> On Aug 1, 2017, at 2:58 PM, John McCall wrote:
>> 
>>> 
>>> On Aug 1, 2017, at 9:53 AM, Daryle Walker wrote:
>>> 
>>>> On Jul 31, 2017, at 4:37 PM, Gor Gyolchanyan wrote:
>>>> 
>>>> Well, yeah, knowing its size statically is not a requirement, but having a 
>>>> guarantee of in-place allocation is. As long as non-escaped local 
>>>> fixed-size arrays live on the stack, I'm happy. 
>>> 
>>> I was neutral on this, but after waking up I realized a problem. I want to 
>>> use the LLVM type primitives to implement fixed-size arrays. Doing a 
>>> run-time determination of layout and implementing it with alloca forfeits 
>>> that (AFAIK). Unless the Swift run-time library comes with LLVM (which I 
>>> doubt). Which means we do need compile-time constants after all.
>> 
>> We are not going to design the Swift language around the goal of producing 
>> exact LLVM IR sequences.  If you can't phrase this in real terms, it is 
>> irrelevant.
> 
> It isn’t being LLVM-specific, but for any similar system. The instruction 
> generator has certain primitives, like 16-bit integers or 32-bit floats. LLVM 
> (and probably rivals) also has aggregate primitives, heterogeneous and 
> homogeneous (and the latter as standard and vector-unit). I want to use those 
> primitives when possible. Saving sizing allocations until run-time, after 
> it’s too late for sized-array-specific generated instructions, means that the 
> array is probably implemented with general buffer pointer and length 
> instructions. Any opportunities for IR-level optimization of the types is 
> gone.

> How often do you expect a statically sized array to need said size determined 
> at run-time (with a function) versus a compile-time specification (with an 
> integer literal or “constexpr” expression)? This may enable a 1% solution 
> that anti-optimizes the 99% case.

If the array type is ultimately written with a constant bound, it will reliably 
end up having a constant static size, for the same reason that 
(Either<Int, String>?, Float) has a constant static size despite tuples, 
optionals, and Either all being generic types: the compiler automatically does 
this sort of deep substitution when it's computing type layouts.
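
To make that concrete, here is a small illustrative sketch (not part of the 
original message): a fully concrete composition of generic types has a layout 
the compiler computes statically, and MemoryLayout simply reports it.

// Optional and tuples are generic, yet this concrete composition has a fixed,
// statically computed size, because the compiler substitutes the concrete
// element types all the way down when laying out the type.
let value: (Int?, Float) = (42, 1.0)
print(MemoryLayout<(Int?, Float)>.size)    // known at compile time
print(MemoryLayout.size(ofValue: value))   // the same number at run time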

Now, a generic method on all bounded array types would not know the size of 
'self', for two reasons: it wouldn't know the bound, and it wouldn't know the 
layout of the element type.  But of course we do have optimizations to generate 
specialized implementations of generic functions, and the specialized 
implementation would obviously be able to compute a static size of 'self' 
again.  Moreover, a language design which required bounds to always be constant 
would only help this situation in an essentially trivial way: by outlawing such 
a method from being defined in the first place.
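
As a rough illustration of that specialization point (an illustrative sketch, 
not part of the original message): an unspecialized generic function sees 
neither the element layout nor the count, while a specialized copy for a 
concrete argument type does.

func total<C: Collection>(_ values: C) -> Int where C.Element == Int {
    // In the unspecialized body, the element layout and the count are opaque.
    return values.reduce(0, +)
}
let n = total([1, 2, 3])  // the optimizer can emit a specialization for [Int]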

John.



Re: [swift-evolution] [Pitch] #dup -- a duplication "macro"(?)

2017-08-06 Thread Daryle Walker via swift-evolution
> On Aug 3, 2017, at 12:39 AM, Daryle Walker wrote:
> 
> After a few hours, I figured out what was bugging me. Variadic generic 
> parameters, and the existing variadic function parameters, CONSUME 
> arbitrarily long comma-separated lists. The #dup facility PRODUCES those 
> kinds of lists. The features are duals, not the same. The features can 
> synergize.

I just uploaded a rough proposal at .

— 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com 



Re: [swift-evolution] [Planning][Request] "constexpr" for Swift 5

2017-08-06 Thread Daryle Walker via swift-evolution
> On Aug 3, 2017, at 8:20 PM, Karl Wagner via swift-evolution wrote:
> 
>>> The root cause, of course, is that the VLAs require new stack allocations 
>>> each time, and the stack is only deallocated as one lump when the frame 
>>> ends.
>> 
>> That is true of alloca(), but not of VLAs.  VLAs are freed when they go out 
>> of scope.
> 
> Learned something today.
> 
> Anyway, if the goal is stack allocation, I would prefer that we explored 
> other ways to achieve it before jumping to a new array-type. I’m not really a 
> fan of a future where [3; Double] is one type and (Double, Double, Double) is 
> something else, and Array<Double> is yet another thing.

Just about every system programming language has all three of these, since you 
can’t really stop these “similar” types from co-existing. The third type uses 
remote storage, while the first two are scoped storage. A heterogeneous product 
type template has to include homogeneous product types as a subset. And 
instruction generators can produce different code between tuples and arrays; 
are you willing to forfeit one set of optimizations?
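
For a concrete sense of the storage difference (an illustrative sketch, not 
part of the original message): a tuple stores its elements inline in the value, 
while Array holds only a reference to remote, copy-on-write storage.

let triple: (Double, Double, Double) = (1, 2, 3)
print(MemoryLayout.size(ofValue: triple))  // 24 on a 64-bit platform: elements stored inline

let array: [Double] = [1, 2, 3]
print(MemoryLayout.size(ofValue: array))   // 8: just the reference to the heap buffer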

— 
Daryle Walker
Mac, Internet, and Video Game Junkie
darylew AT mac DOT com 



Re: [swift-evolution] [Pitch] New Version of Array Proposal

2017-08-06 Thread Daryle Walker via swift-evolution
> On Aug 3, 2017, at 7:20 AM, Tino Heth <2...@gmx.de> wrote:
> 
> Hi Daryle,
> 
> I think we agree a lot on the importance of fixed-size arrays, but have a 
> different opinion on which aspect is the most valuable… (so we now only have 
> to agree that mine is better ;-) ;-)
> My motivation for FSA is safety and convenience:
> I want to iterate over C arrays in a straightforward way, but primarily, I 
> don't want to accidentally multiply a vector of size 3 with a 2×2 matrix.

Data modeling is important for me too. That’s why the proposal includes 
multi-dimensionality and why I didn’t just jam in Collection support. We don’t 
want to add conformance then find out that was a mistake.

> Of course, fast is cool, but I don't expect to suffer from bad performance 
> because of implementation details like how, where and when memory allocation 
> happens.
> Your focus, on the other hand, seems to be performance:
> You don't want to give guarantees about the order because of (hypothetical?) 
> optimisations that could be blocked by that, and avoid initialisation 
> overhead.

I also don’t want to block parallel/vector processing.

let a: @vector [4; Int] = //…
let b: @vector [4; Int] = //…
var c: @vector [4; Int] = //…
//…
loop: for i in a {
    c[ #indexOf(loop) ] = i * b[ #indexOf(loop) ]
}
//…

I want the compiler to potentially be able to see that the elements are being 
computed in formation and use vector-unit instructions instead of serial 
processing.
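
In today's Swift (an illustrative sketch, since @vector and #indexOf are only 
proposed syntax), the same computation has to be written as a serial index 
loop, which the optimizer may or may not auto-vectorize:

let a = [1, 2, 3, 4]
let b = [5, 6, 7, 8]
var c = [0, 0, 0, 0]
for i in a.indices {
    c[i] = a[i] * b[i]  // element-wise product, expressed serially
}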

> You brought "withUnsafeFlattening" to the table to add convenience, but I 
> think that is the wrong direction:
> Safe should be default, and it's quite common that you have to live with 
> "unsafe" when you need "fast".

The name has “Unsafe” because the Collection type used, UnsafeBufferPointer, 
does. I don’t know enough about what makes the existing “Unsafe” API that way 
for the flattening function to be declared “safe”. It could be safe for all I know.

A substitute Collection-access function besides “withUnsafeFlattening” would 
either copy the elements (i.e. be inefficient, especially for large arrays) or 
somehow secretly maintain a reference to the array in memory. The latter would 
then be “withUnsafeFlattening” with a prettier name.
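
For comparison, a sketch (not part of the original message) of the existing 
pattern this is modelled on: scoped access through withUnsafeBufferPointer, 
where the buffer is only valid inside the closure.

let values = [1, 2, 3, 4]
let sum = values.withUnsafeBufferPointer { buffer -> Int in
    // The buffer must not escape the closure; that constraint is what the
    // "Unsafe" spelling signals.
    return buffer.reduce(0, +)
}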

My first reason for “withUnsafeFlattening” was to allow API to use any FSA of a 
given element type, no matter the shape.

> As you don't want to conform to Sequence at all, it shouldn't bother you if 
> the iterator sacrifices a tiny bit of performance in exchange for a reliable 
> order, and when you really need piecemeal initialisation, you could take a 
> FSA-Variant that skips initialisation of elements.
> Of course, that wouldn't be ideal, and there should be an "Unsafe" in the 
> name of that type — but I don't think tuple-like delayed initialisation would 
> help when solving real-world problems:
> The "x.0 = 0; x.1 = 1" case is trivial and can be done with normal init, and 
> when this isn't enough, you most likely lose all guarantees because you use 
> a loop to fill that array*.

Using a loop for array initialization is why I suggested we should look into 
run-time DI, which should be kept to restricted circumstances.
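
The standard library later gained a restricted form of exactly this, 
Array.init(unsafeUninitializedCapacity:initializingWith:) (SE-0245); a minimal 
sketch:

let squares = Array<Int>(unsafeUninitializedCapacity: 5) { buffer, initializedCount in
    // The closure fills the raw storage in a loop and reports how many
    // elements it actually initialized.
    for i in 0..<5 {
        buffer[i] = i * i
    }
    initializedCount = 5
}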

> I really appreciate the effort you spend for digging into the low-level 
> details, and hope that we end up with a draft that satisfies your use case.
> 
> - Tino
> 
> * There are actually cases where you want to compute the value of one element 
> based on another one, but that might as well be an indicator that you are 
> better off with a tuple, instead of using an array.
> 
>>> So would you say Dictionary shouldn't conform to Collection either?
>>> Just because a type conforms to a protocol doesn't mean it can't add its 
>>> own methods on top.
>> 
>> But the FSA interface and the Sequence/Collection interface would be very 
>> similar, basically competing, leading to a schizophrenic interface. Since 
>> another part of the overall FSA interface implements Collection, just use 
>> that.
> Yes, I can't argue against the claim that Collection sometimes feels a little 
> bit odd :-( — but it is what it is, and maybe there will be improvements in 
> the future that could take into account the experience with FSA.
> 
>>> Swift has one-dimensional arrays, and they support Collection... this may 
>>> sound like nitpicking that only works because there is no explicit 
>>> "fixed-size" in you statement, but feel free to prove me wrong for FSAs.
>> 
>> Yes, I meant FSAs, not both them and Array; it’s long-winded to keep adding 
>> the “fixed-size” part.
> So we do agree that there is no fundamental reason that stops FSAs from being 
> collections? ;-)

Besides that they can’t be Sequences, unless you throw away allowing 
parallel/vector processing in the future. (That can’t be bolted onto Version 2, 
since committing to Sequence means you committed to single-thread iteration.) 
Just found out that C++17 (optionally) adds parallel/vector execution policies 
to its standard algorithms. I guess I caught on to a 

Re: [swift-evolution] [Planning][Request] "constexpr" for Swift 5

2017-08-06 Thread Félix Cloutier via swift-evolution

> On Aug 6, 2017, at 08:15, Karl Wagner wrote:
> 
>> 
>> On 4. Aug 2017, at 20:15, John McCall via swift-evolution wrote:
>> 
>>> 
>>> On Aug 4, 2017, at 1:19 PM, Félix Cloutier via swift-evolution wrote:
>>> 
>>> That's not a concern with the `let` case that Robert brought up, since you 
>>> can't mutate a `let` array at all.
>>> 
>>> The big thing is that unconstrained escape analysis is uncomputable. Since 
>>> Swift array storage is COW, any function that receives the array as a 
>>> parameter is allowed to take a reference on its storage. If the storage 
>>> lives on the stack and that reference outlives the stack frame, you've got 
>>> a problem. It's the same problem that motivated @escaping for closures.
>>> 
>>> You could allow storage to be on the stack by forcing the user to make a 
>>> pessimistic copy, which is possibly not an improvement.
>> 
>> Right.  I think maybe the name people keep using for this feature is 
>> misleading; a better name would be "inline arrays" or "directly-stored 
>> arrays".  Having a fixed size is a necessary condition for storing the array 
>> elements directly, but the people asking for this feature are really asking 
>> for the different representation, not just the ability to statically 
>> constrain the size of an array.
>> 
>> That representation difference comes with a lot of weaknesses and 
>> trade-offs, but it's also useful sometimes.
>> 
>> John.
>> 
> 
> Right, and the question I’ve been asking (indirectly) is: why is this only 
> useful for arrays?

One special thing about fixed-size arrays is that they address a sore spot in 
C interop.

> Doesn’t it really apply to any value-type which allocates storage which it 
> manages with COW semantics (e.g. Dictionary, Set, Data, your own custom 
> types…)? Really, we want to inform the compiler that the 
> dynamically-allocated memory is part of the value - and if it sees that the 
> storage is only allocated once, it should be allowed to allocate that storage 
> inline with the value, on the stack.

Assuming that fixed-size arrays are a different type (like FixedSizeArray<T> 
vs Array<T>), the feature work that supports them would likely be sufficient to 
support fixed-size whatever collections, too. However (and I'm talking a bit 
through my hat here; that's not my area of expertise), I think that there could 
be some drawbacks to implementing some other collections in a fixed amount of 
memory. For instance, you have to decide on a fixed number of buckets for sets 
and dictionaries, regardless of what ends up in the collection. Dictionaries 
and sets also tend to use more memory than arrays, which could cause storage 
size to balloon up to degrees that are not immediately obvious.
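
For comparison (an illustrative sketch, not part of the original message), the 
closest existing knob is reserving a bucket count up front; the storage is 
still heap-allocated and can still grow past it.

var lookup = Dictionary<String, Int>(minimumCapacity: 64)  // bucket count chosen up front
var names = Set<String>(minimumCapacity: 64)
lookup["answer"] = 42
names.insert("answer")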

> As I understand it, the only problem with this is when a function takes such 
> a value as a parameter and assigns it to some escaping reference (an ivar, 
> global, or capturing it inside an escaping closure).
> 
> So why can’t such assignments simply check if the value has inline storage 
> and copy it to the heap if necessary? The compiler should be able to optimise 
> the function so the check (which is really cheap anyway) only needs to happen 
> once per function. Because the entire type has value semantics, we can 
> substitute the original value with the copy for the rest of the function 
> (preventing further copies down the line).

The rest of the function might not be enough, especially if you use the same 
array for multiple calls. See:

> var globalArray = [[Int]]()
> func append(array: [Int]) {
>   globalArray.append(array)
> }
> 
> func foo() {
>   let bar = [1,2,3]
>   append(array: bar)
>   append(array: bar)
>   append(array: bar)
> }

The function that escapes the array is `append(array:)`, so you'd get one copy 
per call to `append(array:)`, unless functions start annotating the escaping 
behavior of each parameter. That could work in many cases, but the compiler 
would still have to be pessimistic in common situations, such as when calls to 
virtual functions are involved, including closures and calls to methods on 
values typed as protocols. (Also, the amount of work required is likely to be 
significant.)
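
The per-parameter annotation idea parallels what closures already do (an 
illustrative sketch, not part of the original message): the compiler only knows 
a closure argument may outlive the call because its signature says @escaping.

var pendingWork: [() -> Void] = []

func runLater(_ work: @escaping () -> Void) {
    pendingWork.append(work)  // storing is allowed only because the parameter is @escaping
}

func runNow(_ work: () -> Void) {
    work()  // non-escaping by default; storing it would be a compile-time error
}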

Félix



Re: [swift-evolution] Why you can't make someone else's class Decodable: a long-winded explanation of 'required' initializers

2017-08-06 Thread Charles Srstka via swift-evolution
> On Aug 3, 2017, at 12:05 PM, Itai Ferber via swift-evolution wrote:
> 
> Thanks for putting these thoughts together, Jordan! Some additional comments 
> inline.
> 
>> On Aug 2, 2017, at 5:08 PM, Jordan Rose wrote:
>> 
>> David Hart recently asked on Twitter if there was a good 
>> way to add Decodable support to somebody else's class. The short answer is 
>> "no, because you don't control all the subclasses", but David already 
>> understood that and wanted to know if there was anything working to mitigate 
>> the problem. So I decided to write up a long email about it instead. (Well, 
>> actually I decided to write a short email and then failed at doing so.)
>> 
>> The Problem
>> 
>> You can add Decodable to someone else's struct today with no problems:
>> 
>> extension Point: Decodable {
>>   enum CodingKeys: String, CodingKey {
>>     case x
>>     case y
>>   }
>>   public init(from decoder: Decoder) throws {
>>     let container = try decoder.container(keyedBy: CodingKeys.self)
>>     let x = try container.decode(Double.self, forKey: .x)
>>     let y = try container.decode(Double.self, forKey: .y)
>>     self.init(x: x, y: y)
>>   }
>> }
>> 
>> But if Point is a (non-final) class, then this gives you a pile of errors:
>> 
>> - init(from:) needs to be 'required' to satisfy a protocol requirement. 
>> 'required' means the initializer can be invoked dynamically on subclasses. 
>> Why is this important? Because someone might write code like this:
>> 
>> func decodeMe<Result: Decodable>() -> Result {
>>   let decoder = getDecoderFromSomewhere()
>>   return try! Result(from: decoder)
>> }
>> let specialPoint: VerySpecialSubclassOfPoint = decodeMe()
>> 
>> …and the compiler can't stop them, because VerySpecialSubclassOfPoint is a 
>> Point, and Point is Decodable, and therefore VerySpecialSubclassOfPoint is 
>> Decodable. A bit more on this later, but for now let's say that's a sensible 
>> requirement.
>> 
>> - init(from:) also has to be a 'convenience' initializer. That one makes 
>> sense too—if you're outside the module, you can't necessarily see private 
>> properties, and so of course you'll have to call another initializer that 
>> can.
>> 
>> But once it's marked 'convenience' and 'required' we get "'required' 
>> initializer must be declared directly in class 'Point' (not in an 
>> extension)", and that defeats the whole purpose. Why this restriction?
>> 
>> 
>> The Semantic Reason
>> 
>> The initializer is 'required', right? So all subclasses need to have access 
>> to it. But the implementation we provided here might not make sense for all 
>> subclasses—what if VerySpecialSubclassOfPoint doesn't have an 'init(x:y:)' 
>> initializer? Normally, the compiler checks for this situation and makes the 
>> subclass reimplement the 'required' initializer…but that only works if the 
>> 'required' initializers are all known up front. So it can't allow this new 
>> 'required' initializer to go by, because someone might try to call it 
>> dynamically on a subclass. Here's a dynamic version of the code from above:
>> 
>> func decodeDynamic(_ pointType: Point.Type) -> Point {
>>   let decoder = getDecoderFromSomewhere()
>>   return try! pointType.init(from: decoder)
>> }
>> let specialPoint = decodeDynamic(VerySpecialSubclassOfPoint.self)
>> 
>> 
>> The Implementation Reason
>> 
>> 'required' initializers are like methods: they may require dynamic dispatch. 
>> That means that they get an entry in the class's dynamic dispatch table, 
>> commonly known as its vtable. Unlike Objective-C method tables, vtables 
>> aren't set up to have entries arbitrarily added at run time.
>> 
>> (Aside: This is one of the reasons why non-@objc methods in Swift extensions 
>> can't be overridden; if we ever lift that restriction, it'll be by using a 
>> separate table and a form of dispatch similar to objc_msgSend. I sent a 
>> proposal to swift-evolution about this last year but there wasn't much 
>> interest.)
>> 
>> 
>> The Workaround
>> 
>> Today's answer isn't wonderful, but it does work: write a wrapper struct 
>> that conforms to Decodable instead:
>> 
>> struct DecodedPoint: Decodable {
>>   var value: Point
>>   enum CodingKeys: String, CodingKey {
>>     case x
>>     case y
>>   }
>>   public init(from decoder: Decoder) throws {
>>     let container = try decoder.container(keyedBy: CodingKeys.self)
>>     let x = try container.decode(Double.self, forKey: .x)
>>     let y = try container.decode(Double.self, forKey: .y)
>>     self.value = Point(x: x, y: y)
>>   }
>> }
>> 
>> This doesn't have any of the problems with inheritance, because it only 
>> handles the base class, Point. But it makes everywhere else a little less 
>> convenient—instead of directly encoding or decoding Point, you have to use 
>> the wrapper, and that means no implicitly-generated Codable implementations 
>> either.
>> 
>> 
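
A usage sketch of the wrapper approach (an illustration, not part of the 
original message, assuming the Point and DecodedPoint definitions quoted 
above):

import Foundation

let json = Data("{\"x\": 1.0, \"y\": 2.0}".utf8)
let point = try JSONDecoder().decode(DecodedPoint.self, from: json).value
// 'point' is a Point, but every call site has to go through the wrapper type.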

Re: [swift-evolution] [Planning][Request] "constexpr" for Swift 5

2017-08-06 Thread John McCall via swift-evolution

> On Aug 6, 2017, at 11:15 AM, Karl Wagner wrote:
> 
> 
>> On 4. Aug 2017, at 20:15, John McCall via swift-evolution wrote:
>> 
>>> 
>>> On Aug 4, 2017, at 1:19 PM, Félix Cloutier via swift-evolution wrote:
>>> 
>>> That's not a concern with the `let` case that Robert brought up, since you 
>>> can't mutate a `let` array at all.
>>> 
>>> The big thing is that unconstrained escape analysis is uncomputable. Since 
>>> Swift array storage is COW, any function that receives the array as a 
>>> parameter is allowed to take a reference on its storage. If the storage 
>>> lives on the stack and that reference outlives the stack frame, you've got 
>>> a problem. It's the same problem that motivated @escaping for closures.
>>> 
>>> You could allow storage to be on the stack by forcing the user to make a 
>>> pessimistic copy, which is possibly not an improvement.
>> 
>> Right.  I think maybe the name people keep using for this feature is 
>> misleading; a better name would be "inline arrays" or "directly-stored 
>> arrays".  Having a fixed size is a necessary condition for storing the array 
>> elements directly, but the people asking for this feature are really asking 
>> for the different representation, not just the ability to statically 
>> constrain the size of an array.
>> 
>> That representation difference comes with a lot of weaknesses and 
>> trade-offs, but it's also useful sometimes.
>> 
>> John.
>> 
> 
> Right, and the question I’ve been asking (indirectly) is: why is this only 
> useful for arrays? Doesn’t it really apply to any value-type which allocates 
> storage which it manages with COW semantics (e.g. Dictionary, Set, Data, your 
> own custom types…)? Really, we want to inform the compiler that the 
> dynamically-allocated memory is part of the value - and if it sees that the 
> storage is only allocated once, it should be allowed to allocate that storage 
> inline with the value, on the stack.

There are absolutely things we could do as part of the Array implementation to 
dynamically avoid heap allocations for temporary arrays.  However, it would be 
very difficult to take advantage of that for arrays stored in temporary structs 
or enums; worse, even if we could, it still wouldn't have the optimal 
representation that people want for mathematical types like vectors and 
matrices.
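
On Apple platforms, the simd module already gives a taste of the representation 
people want for these types (an illustrative sketch, not part of the original 
message): elements stored directly in the value, with element-wise arithmetic 
and no heap allocation.

import simd

let v = simd_double3(1, 2, 3)   // elements stored directly in the value
let w = v + v                   // element-wise arithmetic on the inline storage
print(w.x, w.y, w.z)            // 2.0 4.0 6.0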

John.


> 
> As I understand it, the only problem with this is when a function takes such 
> a value as a parameter and assigns it to some escaping reference (an ivar, 
> global, or capturing it inside an escaping closure).
> 
> So why can’t such assignments simply check if the value has inline storage 
> and copy it to the heap if necessary? The compiler should be able to optimise 
> the function so the check (which is really cheap anyway) only needs to happen 
> once per function. Because the entire type has value semantics, we can 
> substitute the original value with the copy for the rest of the function 
> (preventing further copies down the line).
> 
> 
> // Module one
> 
> import ModuleTwo
> 
> func doSomething() {
>     let values = (0..<5).map { _ in random() }  // allocated inline, since the size can never change
>     ModuleTwo.setGlobalItems(values)            // passes a stack-allocated array to the (opaque) function
> }
> 
> // Module two
> 
> var GlobalItems = [Int]()
> var MoreGlobalItems = [Int]()
> 
> func setGlobalItems(_ newItems: [Int]) {
>     GlobalItems = newItems      // assignment to escaping reference: checks for inline storage, copies to heap if needed
> 
>     // all references to ‘newItems’ from this point refer to the copy known to be on the heap
> 
>     MoreGlobalItems = newItems  // we already have a known out-of-line copy of the value; no checks or copying needed
> }
> 
> // To make it more explicit...
> 
> func setGlobalItems_explicit(_ newItems: [Int]) {
>     let newItems_heap = newItems.backing.isAllocatedInline ? newItems(withBacking: newItems.backing.clone()) : newItems
>     GlobalItems     = newItems_heap
>     MoreGlobalItems = newItems_heap
> }
> 
> 
> This would require some special language support for values that allocate 
> memory which is managed as COW.
> 
> - Karl
> 



Re: [swift-evolution] [Planning][Request] "constexpr" for Swift 5

2017-08-06 Thread Karl Wagner via swift-evolution

> On 6. Aug 2017, at 17:15, Karl Wagner via swift-evolution wrote:
> 
> let newItems_heap = newItems.backing.isAllocatedInline ? newItems(withBacking: newItems.backing.clone()) : newItems

Should, of course, be:

let newItems_heap = newItems.backing.isAllocatedInline ? Array(withBacking: newItems.backing.clone()) : newItems