> On May 6, 2016, at 12:41 PM, Joe Groff via swift-evolution 
> <[email protected]> wrote:
> 
>> 
>> On May 6, 2016, at 2:24 AM, Morten Bek Ditlevsen via swift-evolution 
>> <[email protected]> wrote:
>> 
>> Currently, in order to conform to FloatLiteralConvertible you need to
>> implement an initializer accepting a floatLiteral of the typealias
>> FloatLiteralType. However, this typealias can only be Double, Float, Float80
>> and other built-in floating point types (to be honest, I do not know the
>> exact limitation, since I have not been able to find this in the
>> documentation).
>> 
>> These floating point types have precision limitations that are not
>> necessarily present in the type that you are making FloatLiteralConvertible.
>> 
>> Let’s imagine a CurrencyAmount type that uses an NSDecimalNumber as the
>> representation of the value:
>> 
>> 
>> public struct CurrencyAmount {
>>     public let value: NSDecimalNumber
>>     // .. other important currency-related stuff ..
>> }
>> 
>> extension CurrencyAmount: FloatLiteralConvertible {
>>     public typealias FloatLiteralType = Double
>> 
>>     public init(floatLiteral amount: FloatLiteralType) {
>>         print(amount.debugDescription)
>>         value = NSDecimalNumber(double: amount)
>>     }
>> }
>> 
>> let a: CurrencyAmount = 99.99
>> 
>> 
>> The printed value inside the initializer is 99.989999999999995 - so the
>> value has lost precision already in the intermediary Double representation.
>> 
>> I know that there is also an issue with the NSDecimalNumber double
>> initializer, but this is not the issue that we are seeing here.
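>> 
>> (A minimal way to see that the loss happens in the Double itself, with no
>> NSDecimalNumber involved at all, is something like:
>> 
>> let d: Double = 99.99
>> print(d.debugDescription) // prints "99.989999999999995"
>> 
>> so the literal is already inexact before my initializer ever runs.)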
>> 
>> 
>> One suggestion for a solution to this issue would be to allow the
>> FloatLiteralType to be aliased to a String. In this case the compiler should
>> parse the float literal token 99.99 to a String and use that as input for
>> the FloatLiteralConvertible initializer.
>> 
>> This would mean that arbitrary literal precision is allowed for
>> FloatLiteralConvertibles that implement their own parsing of a String value.
>> 
>> For instance, if the CurrencyAmount used a FloatLiteralType aliased to
>> String we would have:
>> 
>> extension CurrencyAmount: FloatLiteralConvertible {
>>     public typealias FloatLiteralType = String
>> 
>>     public init(floatLiteral amount: FloatLiteralType) {
>>         value = NSDecimalNumber(string: amount)
>>     }
>> }
>> 
>> and the precision would be the same as creating an NSDecimalNumber from a
>> String: 
>> 
>> let a: CurrencyAmount = 1.00000000000000000000000000000000001
>> 
>> print(a.value.debugDescription)
>> 
>> Would give: 1.00000000000000000000000000000000001
>> 
>> 
>> How does that sound? Is it completely irrational to allow the use of
>> Strings as the intermediary representation of float literals?
>> I think that it makes good sense, since it allows for arbitrary precision.
>> 
>> Please let me know what you think.
> 
> Like Dmitri said, a String is not a particularly efficient intermediate 
> representation. For common machine numeric types, we want it to be 
> straightforward for the compiler to constant-fold literals down to constants 
> in the resulting binary. For floating-point literals, I think we could 
> achieve this by changing the protocol to "deconstruct" the literal value into 
> integer significand and exponent, something like this:
> 
> // A type that can be initialized from a decimal literal such as
> // `1.1` or `2.3e5`.
> protocol DecimalLiteralConvertible {
>     // The integer type used to represent the significand and exponent
>     // of the value.
>     typealias Component: IntegerLiteralConvertible
> 
>     // Construct a value equal to `decimalSignificand * 10**decimalExponent`.
>     init(decimalSignificand: Component, decimalExponent: Component)
> }
> 
> // A type that can be initialized from a hexadecimal floating point
> // literal, such as `0x1.8p-2`.
> protocol HexFloatLiteralConvertible {
>     // The integer type used to represent the significand and exponent
>     // of the value.
>     typealias Component: IntegerLiteralConvertible
> 
>     // Construct a value equal to `hexadecimalSignificand * 2**binaryExponent`.
>     init(hexadecimalSignificand: Component, binaryExponent: Component)
> }
> 
> Literals would desugar to constructor calls as follows:
> 
> 1.0 // T(decimalSignificand: 1, decimalExponent: 0)
> 0.123 // T(decimalSignificand: 123, decimalExponent: -3)
> 1.23e-2 // same
> 
> 0x1.8p-2 // T(hexadecimalSignificand: 0x18, binaryExponent: -6)

This seems like a very good approach to me.
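
For concreteness, here is a rough sketch of how the CurrencyAmount type from
earlier in the thread might adopt the proposed protocol. This is only an
illustration of the shape of the conformance: it picks Int as the Component
and uses NSDecimalNumber's mantissa/exponent initializer to rebuild the value
without ever touching a Double.

extension CurrencyAmount: DecimalLiteralConvertible {
    public typealias Component = Int

    // Reconstruct `decimalSignificand * 10**decimalExponent` directly as an
    // NSDecimalNumber, so the literal never passes through a Double.
    public init(decimalSignificand: Component, decimalExponent: Component) {
        value = NSDecimalNumber(mantissa: UInt64(abs(decimalSignificand)),
                                exponent: Int16(decimalExponent),
                                isNegative: decimalSignificand < 0)
    }
}

With that, `let a: CurrencyAmount = 99.99` would desugar to
CurrencyAmount(decimalSignificand: 9999, decimalExponent: -2) and round-trip
exactly. And since Component only has to be IntegerLiteralConvertible, a type
that needs more than 64 bits of significand could presumably plug in a wider
or arbitrary-precision integer type instead of Int.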

– Steve
