> On Jul 31, 2017, at 4:37 PM, Gor Gyolchanyan <[email protected]> 
> wrote:
>> On Jul 31, 2017, at 11:23 PM, John McCall <[email protected]> wrote:
>> 
>>> 
>>> On Jul 31, 2017, at 4:00 PM, Gor Gyolchanyan <[email protected]> wrote:
>>> 
>>> 
>>>> On Jul 31, 2017, at 10:09 PM, John McCall <[email protected]> wrote:
>>>> 
>>>>> On Jul 31, 2017, at 3:15 AM, Gor Gyolchanyan <[email protected]> wrote:
>>>>>> On Jul 31, 2017, at 7:10 AM, John McCall via swift-evolution 
>>>>>> <[email protected]> wrote:
>>>>>> 
>>>>>>> On Jul 30, 2017, at 11:43 PM, Daryle Walker <[email protected]> wrote:
>>>>>>> The parameters for a fixed-size array type determine the type's 
>>>>>>> size/stride, so how could the bounds not be needed at compile time? 
>>>>>>> The compiler can't lay out objects otherwise. 
>>>>>> 
>>>>>> Swift is not C; it is perfectly capable of laying out objects at run 
>>>>>> time.  It already has to do that for generic types and types with 
>>>>>> resilient members.  That does, of course, have performance consequences, 
>>>>>> and those performance consequences might be unacceptable to you; but the 
>>>>>> fact that we can handle it means that we don't ultimately require a 
>>>>>> semantic concept of a constant expression, except inasmuch as we want to 
>>>>>> allow users to explicitly request guarantees about static layout.
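
A minimal sketch of that runtime-layout point, assuming current Swift (Box 
and describeLayout are purely illustrative names):

    struct Box<T> {
        var value: T
        var flag: Bool
    }

    func describeLayout<T>(_: T.Type) {
        // In unspecialized generic code, these values come from runtime
        // type metadata rather than from a compile-time constant.
        print(MemoryLayout<Box<T>>.size,
              MemoryLayout<Box<T>>.stride,
              MemoryLayout<Box<T>>.alignment)
    }

    describeLayout(Int.self)
    describeLayout(String.self)
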
>>>>> 
>>>>> Doesn't this defeat the purpose of generic value parameters? We might as 
>>>>> well use a regular parameter if there's no compile-time evaluation 
>>>>> involved. In that case, fixed-sized arrays will be useless, because 
>>>>> they'll be normal arrays with resizing disabled.
>>>> 
>>>> You're making huge leaps here.  The primary purpose of a fixed-size array 
>>>> feature is to allow the array to be allocated "inline" in its context 
>>>> instead of "out-of-line" using heap-allocated copy-on-write buffers.  
>>>> There is no reason that that representation would not be supportable just 
>>>> because the array's bound is not statically known; the only thing that 
>>>> matters is whether the bound is consistent for all instances of the 
>>>> container.
>>>> 
>>>> That is, it would not be okay to have a type like:
>>>>  struct Widget {
>>>>    let length: Int
>>>>    var array: [length x Int]
>>>>  }
>>>> because the value of the bound cannot be computed independently of a 
>>>> specific value.
>>>> 
>>>> But it is absolutely okay to have a type like:
>>>>  struct Widget {
>>>>    var array: [(isRunningOnIOS15() ? 20 : 10) x Int]
>>>>  }
>>>> It just means that the bound would get computed at runtime and, 
>>>> presumably, cached.  The fact that this type's size isn't known statically 
>>>> does mean that the compiler has to be more pessimistic, but its values 
>>>> would still get allocated inline into their containers and even on the 
>>>> stack, using pretty much the same techniques as C99 VLAs.
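
A minimal sketch of that VLA-like behavior using the standard library's 
withUnsafeTemporaryAllocation (assuming that API is available; the bound is 
only known at run time, yet the storage can still avoid a heap-allocated 
copy-on-write buffer):

    func sumOfFirst(_ n: Int) -> Int {
        return withUnsafeTemporaryAllocation(of: Int.self, capacity: n) { buffer in
            // Fill the runtime-sized, stack-preferred buffer in place.
            // Int is a trivial type, so plain assignment is fine here.
            for i in 0..<n { buffer[i] = i }
            return buffer.reduce(0, +)
        }
    }

    print(sumOfFirst(10))   // 45
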
>>> 
>>> I see your point. Dynamically-sized in-place allocation is something that 
>>> completely escaped me when I was thinking of fixed-size arrays. I can say 
>>> with confidence that a large portion of private-class-copy-on-write value 
>>> types would greatly benefit from this and would finally be able to become 
>>> true value types.
>> 
>> To be clear, it's not obvious that using an inline array is always a good 
>> move for performance!  But it would be a tool available for use when people 
>> felt it was important.
> 
> That's why I'm trying to push for a compile-time execution system. All these 
> problems (among many others) could be designed out of existence, and the 
> compiler would be incredibly simple in light of all the different specific 
> features that the community is asking for. But I do feel your urge to avoid 
> inventing a bulldozer factory just for digging a hole in a sandbox. 
> It doesn't have to be relied upon by the type checker or the generic 
> resolution mechanism. It would be purely auxiliary. But it would 
> single-handedly move a large chunk of the compiler into the stdlib, and a 
> huge portion of the various little incidental proposals would fade away 
> because they could then easily be implemented in Swift for specific purposes.

My point here had nothing to do with compile-time vs. dynamic-time evaluation 
of array bounds.  Inline array storage is not a performance panacea even if 
everything about it is static.  The exact balance point will vary by element 
type, machine, and the overall load on the memory system in your program, but 
even for an array of bytes, as the size of the array grows it will eventually 
become the case that retaining a buffer pointer will be cheaper than copying 
the buffer contents.
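
A rough illustration of that trade-off (the sizes and type names here are 
hypothetical):

    // Copying this value copies all 64 bytes of inline storage every time.
    struct InlineBlob {
        var storage: (Int64, Int64, Int64, Int64, Int64, Int64, Int64, Int64)
    }

    let a = InlineBlob(storage: (0, 0, 0, 0, 0, 0, 0, 0))
    var b = a                  // memcpy of the whole payload
    b.storage.0 += 1           // a and b were independent from the start

    // Copying an Array, by contrast, just retains the shared CoW buffer.
    let bytes = [UInt8](repeating: 0, count: 1 << 20)
    var copy = bytes           // O(1): retain a buffer pointer
    copy[0] = 1                // the 1 MB copy happens here, on first mutation
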

>>>>> As far as I know, the pinnacle of uses for fixed-size arrays is having a 
>>>>> compile-time pre-allocated space of the necessary size (either literally 
>>>>> at compile-time if that's a static variable, or added to the pre-computed 
>>>>> offset of the stack pointer in case of a local variable).
>>>> 
>>>> The difference between having to use dynamic offsets + alloca() and static 
>>>> offsets + a normal stack slot is noticeable but not nearly as extreme as 
>>>> you're imagining.  And again, in most common cases we would absolutely be 
>>>> able to fold a bound statically and fall into the optimal path you're 
>>>> talking about.  The critical guarantee, that the array does not get 
>>>> heap-allocated, is still absolutely intact.
>>> 
>>> Yet again, Swift (specifically, you in this case) is teaching me to trust 
>>> the compiler to optimize, which is still an alien feeling to me even after 
>>> all these years of heavy Swift usage. Damn you, C++, for corrupting my 
>>> brain 😀.
>> 
>> Well.  Trust but verify. 🙂
> 
> The only good way I can think of doing that is hand-crafting a lightning-fast 
> implementation in LLVM IR, then doing the same in Swift, decompiling the bitcode 
> and then doing a diff. It's going to be super tedious and painful, but it 
> seems to be the only way to prove that Swift can (hopefully, some day...) 
> replace C++ in sheer performance potential.

Or just run a benchmark and complain if the performance isn't as good as you 
expect?
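
For instance, a minimal benchmark sketch along those lines (the timing helper 
and workload are illustrative, not a rigorous harness):

    import Foundation

    func measure(_ label: String, _ body: () -> Void) {
        let start = Date()
        body()
        print("\(label): \(Date().timeIntervalSince(start))s")
    }

    measure("sum a million Ints") {
        var total = 0
        for i in 0..<1_000_000 { total &+= i }
        precondition(total != 0)   // keep the loop from being optimized away
    }
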

>>> In the specific case of having dynamic-sized in-place-allocated value types 
>>> this will absolutely work. But this raises a chicken-and-egg problem: 
>>> which is built in terms of which: in-place-allocated dynamic-sized value 
>>> types, or fixed-size arrays specifically? On the one hand, I'm tempted to 
>>> think that value 
>>> types should be able to dynamically decide (inside the initializer) the 
>>> exact size of the allocated memory (no less than the static size) that they 
>>> occupy (no matter if on the heap, on the stack or anywhere else), after 
>>> which they'd be able to access the "leftover" memory by a pointer and do 
>>> whatever they want with it. This approach seems more logical, since this is 
>>> essentially how fixed-size arrays would be implemented under the hood. But 
>>> on the other hand, this does make use of unsafe pointers (and no part of 
>>> Swift currently relies on unsafe pointers to function), so abstracting it 
>>> away behind a magical fixed-size array seems safer (with a hope that a 
>>> fixed-size array of UInt8 would be optimized down to exactly the first 
>>> case).
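
For reference, a minimal sketch of that private-class pattern as it can be 
written today with the standard library's ManagedBuffer (the type names are 
illustrative): the value type privately holds a class instance that is 
allocated with exactly the requested tail storage.

    // The header stores the element count; the elements are tail-allocated.
    final class ByteStorage: ManagedBuffer<Int, UInt8> {}

    struct DynamicBytes {
        private var storage: ByteStorage

        init(count: Int) {
            let raw = ByteStorage.create(minimumCapacity: count) { _ in count }
            storage = raw as! ByteStorage
            storage.withUnsafeMutablePointerToElements {
                $0.initialize(repeating: 0, count: count)
            }
        }

        var count: Int { return storage.header }
    }
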
>> 
>> Representationally, I think we would have a builtin fixed-size array type 
>> for that.  But "fixed-size" means "the size is an inherent part of the type", 
>> not "we actually know that size statically".  Swift would just be able to 
>> use more optimal code-generation patterns for types whose bounds it was 
>> actually able to compute statically.
> 
> Well, yeah, knowing its size statically is not a requirement, but having a 
> guarantee of in-place allocation is. As long as non-escaped local fixed-size 
> arrays live on the stack, I'm happy. 🙂

I figured.

John.

> 
>> John.
>> 
>>> 
>>>>>> Value equality would still affect the type-checker, but I think we could 
>>>>>> pretty easily just say that all bound expressions are assumed to 
>>>>>> potentially resolve unequally unless they are literals or references to 
>>>>>> the same 'let' constant.
>>>>> 
>>>>> Shouldn't the type-checker use the Equatable protocol conformance to test 
>>>>> for equality?
>>>> 
>>>> The Equatable protocol does guarantee reflexivity.
>>>> 
>>>>> Moreover, as far as I know, Equatable is not recognized by the compiler 
>>>>> in any way, so it's just a regular protocol.
>>>> 
>>>> That's not quite true: we synthesize Equatable instances in several places.
>>>> 
>>>>> What would make it special? Some types would implement operator == to 
>>>>> compare themselves to other types; that's beyond the scope of Equatable. 
>>>>> What about those? And how are custom operator implementations going to 
>>>>> serve this purpose at compile-time? Or will it just ignore the semantics 
>>>>> of the type and reduce it to a sequence of bits? Or maybe only a few 
>>>>> hand-picked types will be supported?
>>>> 
>>>>> 
>>>>> The seemingly simple generic value parameter concept gets vastly 
>>>>> complicated and/or poorly designed without an elaborate compile-time 
>>>>> execution system... Unless I'm missing an obvious way out.
>>>> 
>>>> The only thing the compiler really *needs* to know is whether two types 
>>>> are known to be the same, i.e. whether two values are known to be the 
>>>> same. 
>>> 
>>> I think having arbitrary value-type literals would be a great place to 
>>> start. Currently there are only these types of literals:
>>>     * nil
>>>     * boolean
>>>     * integer
>>>     * floating-point
>>>     * string, extended grapheme cluster, unicode scalar
>>>     * array
>>>     * dictionary
>>> 
>>> The last three are kinda weird because they're not really literals, 
>>> since they can contain dynamically generated values.
>>> If value types were permitted to have a special kind of initializer (I'll 
>>> call it a literal initializer for now), which only allows directly assigning 
>>> to its stored properties or to self from parameters, with no other 
>>> operations, then that initializer could be used to produce a compile-time 
>>> literal of that value type. A similar special equality operator would only 
>>> allow directly comparing stored properties between two literal-capable 
>>> value types.
>>> 
>>> struct Foo {
>>> 
>>>     literal init(one: Int, two: Float) {
>>>         self.one = one
>>>         self.two = two
>>>     }
>>> 
>>>     let one: Int
>>> 
>>>     let two: Float
>>> 
>>> }
>>> 
>>> literal func == (_ some: Foo, _ other: Foo) -> Bool {
>>>     return some.one == other.one && some.two == other.two
>>> }
>>> 
>>> Only assignment would be allowed in the initializer, and only equality 
>>> checks and boolean operations would be allowed inside the equality operator. 
>>> These limitations would guarantee completely deterministic literal creation 
>>> and equality conformance at compile-time.
>>> Types that conform to _BuiltinExpressibleBy*Literal would be magically 
>>> equipped with both of these.
>>> String, array and dictionary literals would be unavailable.
>>> 
>>>> An elaborate compile-time execution system would not be sufficient here, 
>>>> because again, Swift is not C or C++: we need to be able to answer that 
>>>> question even in generic code rather than relying on the ability to fold 
>>>> all computations statically.  We do not want to add an algebraic solver to 
>>>> the type-checker.  The obvious alternative is to simply be conservatively 
>>>> correct by treating independent complex expressions as always yielding 
>>>> different values.
>>> 
>>> How exactly does generic type resolution happen? Obviously, it's not all 
>>> compile-time, since it has to deal with existential containers. Without 
>>> customizable generic resolution, I don't see a way to implement 
>>> satisfactory generic value parameters. But if we settle on magical 
>>> fixed-size arrays, we wouldn't need generic value parameters; we would only 
>>> need to support constraining the size of the array with Comparable 
>>> operators:
>>> 
>>> func foo<T>(_ array: T) where T: [Int], T.count == 5 {
>>>     // ...
>>> } 
>>> 
>>> let array: [5 of Int] = [1, 2, 3, 4, 5]
>>> foo(array)
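
In the meantime, a rough approximation of "the count is part of the type" 
that works today is to encode the bound in a type parameter (FixedCount, 
Five and FixedArray are hypothetical names):

    protocol FixedCount { static var count: Int { get } }
    enum Five: FixedCount { static var count: Int { return 5 } }

    struct FixedArray<Count: FixedCount, Element> {
        private var storage: [Element]
        init(repeating value: Element) {
            storage = Array(repeating: value, count: Count.count)
        }
        subscript(index: Int) -> Element {
            get { return storage[index] }
            set { storage[index] = newValue }
        }
    }

    func foo(_ array: FixedArray<Five, Int>) {
        // ...
    }

    foo(FixedArray<Five, Int>(repeating: 0))
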
>>> 
>>>>>> The only hard constraint is that types need to be consistent, but that 
>>>>>> just means that we need to have a model in which bound expressions are 
>>>>>> evaluated exactly once at runtime (and of course typically folded at 
>>>>>> compile time).
>>>>> 
>>>>> What exactly would it take to be able to execute a select piece of code at 
>>>>> compile-time? Taking the AST, converting it to LLVM IR and feeding it to 
>>>>> the MCJIT engine seems to be easy enough. But I'm pretty sure it's more 
>>>>> tricky than that. Is there a special assumption or two made about the 
>>>>> code that prevents this from happening?
>>>> 
>>>> We already have the ability to fold simple expressions in SIL; we would 
>>>> just make sure that it could handle anything that we considered really 
>>>> important and allow everything else to be handled dynamically.
>>> 
>>> So, with some minor adjustments, we could get a well-defined subset of 
>>> Swift that can be executed at compile-time to yield values that would pass 
>>> as literals in any context?
>>> This would some day allow relaxing the limitations on literal initializers 
>>> and literal equality operators by pre-computing and caching values at 
>>> compile-time outside the scope of the type checker, allowing the type 
>>> checker to stay simple, while essentially allowing generics with complex 
>>> resolution logic.
>>> 
>>>> John.
>>>> 
>>>>> 
>>>>>> John.
>>>>>> 
>>>>>>> Or do you mean that the bounds are integer literals? (That's what I 
>>>>>>> have in the design document now.)
>>>>>>> 
>>>>>>>> On Jul 30, 2017, at 8:51 PM, John McCall <[email protected]> wrote:
>>>>>>> 
>>>>>>>>> On Jul 29, 2017, at 7:01 PM, Daryle Walker via swift-evolution 
>>>>>>>>> <[email protected]> wrote:
>>>>>>>>> The “constexpr” facility from C++ allows users to define constants 
>>>>>>>>> and functions that are determined and usable at compile-time, for 
>>>>>>>>> compile-time constructs but still usable at run-time. The facility is 
>>>>>>>>> a key step for value-based generic parameters (and fixed-size arrays 
>>>>>>>>> if you don’t want to be stuck with integer literals for bounds). Can 
>>>>>>>>> figuring out Swift’s story here be part of Swift 5?
>>>>>>>> 
>>>>>>>> Note that there's no particular reason that value-based generic 
>>>>>>>> parameters, including fixed-size arrays, actually need to be constant 
>>>>>>>> expressions in Swift.
>>>>>>>> 
>>>>>>>> John.
>>>>>> 

_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
