> On Apr 19, 2017, at 16:17, Xiaodi Wu <[email protected]> wrote:
>
> On Wed, Apr 19, 2017 at 6:00 PM, Philippe Hausler <[email protected]> wrote:
>
>> On Apr 19, 2017, at 3:23 PM, Xiaodi Wu <[email protected]> wrote:
>>
>> On Wed, Apr 19, 2017 at 3:19 PM, Martin R <[email protected]> wrote:
>>
>>> On 19. Apr 2017, at 01:48, Xiaodi Wu <[email protected]> wrote:
>>>
>>> So, as I understand it, `Float.init(exactly: Double.pi) == nil`. I would
>>> expect NSNumber to behave similarly (a notion with which Martin disagrees,
>>> I guess). I don't see a test that shows whether NSNumber behaves or does
>>> not behave in that way.
>>
>> At present they behave differently:
>>
>>     print(Float(exactly: Double.pi) as Any)
>>     // nil
>>     print(Float(exactly: NSNumber(value: Double.pi)) as Any)
>>     // Optional(3.14159274)
>>
>> I realize that identical behavior would be logical and least surprising. My
>> only concern was about cases like
>>
>>     let num = ... // some NSNumber from a JSON deserialization
>>     let fval = Float(exactly: num)
>>
>> where one cannot know how the number is represented internally and what
>> precision it needs. But then one could use the truncating conversion or
>> `.floatValue` instead.
>>
>> JSON numbers are double-precision floating point, unless I'm misunderstanding
>> something. If someone writes `Float(exactly: valueParsedFromJSON)`, surely,
>> that can only mean that they *really, really* prefer nil over an imprecise
>> value. I can see no other reason to insist on using both Float and
>> `.init(exactly:)`.
>
> JSON does not claim 32-bit or 64-bit floating point, or for that matter
> 128-bit or infinite-precision floating point :(
>
> Oops, you're right. I see they wanted to future-proof this. That said,
> RFC 7159 *does* say:
>
>     This specification allows implementations to set limits on the range
>     and precision of numbers accepted. Since software that implements
>     IEEE 754-2008 binary64 (double precision) numbers [IEEE754] is
>     generally available and widely used, good interoperability can be
>     achieved by implementations that expect no more precision or range
>     than these provide, in the sense that implementations will
>     approximate JSON numbers within the expected precision.
>
> So JSON doesn't set limits on how numbers are represented, but JSON
> implementations are permitted to (and I'd imagine that all in fact do). A user
> of a JSON deserialization library can rightly expect to know the numeric
> limits of that implementation; for the purposes of bridging NSNumber, if the
> answer is that the implementation parses JSON numbers as double-precision
> values, Double(exactly:) would be the right choice; otherwise, if it's 80-bit
> values, then Float80(exactly:) would be the right choice, etc.
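For illustration, here is a minimal sketch of the "exact" bridging semantics being
debated above, assuming the NSNumber's Double view as the source of truth. The
helper name `exactFloat(from:)` is hypothetical and is not part of the proposal or
the branch; it simply defers to Float(exactly: Double).

    import Foundation

    // Hypothetical helper for illustration only -- not the proposal's API.
    // It treats NSNumber's double view as authoritative and succeeds only
    // when the value is exactly representable as a Float.
    func exactFloat(from number: NSNumber) -> Float? {
        let d = number.doubleValue      // widest floating-point view of the box
        return Float(exactly: d)        // nil if any rounding would occur
    }

    print(exactFloat(from: NSNumber(value: Double.pi)) as Any)  // nil
    print(exactFloat(from: NSNumber(value: 0.5)) as Any)        // Optional(0.5)

Note that this sketch only checks exactness relative to the Double view; an
NSNumber backed by a large Int64 already loses precision in `.doubleValue`, which
is the separate problem tracked as SR-4634 further down the thread.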
Float80 is not compatible with NSNumber, and is well out of scope for this proposal.

After thinking about it more, it seems reasonable to restrict it to the behavior
of Float(exactly: Double(…)). I am fairly certain this will, in the end, cause
more bugs for me to have to address and mark as “behaves correctly”, and confuse
a few new developers - but in the end they chose Swift, and the consistent story
would be the current behavior of Float(exactly: Double).

>>> On Tue, Apr 18, 2017 at 11:43 AM, Philippe Hausler <[email protected]> wrote:
>>>
>>>> On Apr 18, 2017, at 9:22 AM, Stephen Canon <[email protected]> wrote:
>>>>
>>>>> On Apr 18, 2017, at 12:17 PM, Joe Groff <[email protected]> wrote:
>>>>>
>>>>>> On Apr 17, 2017, at 5:56 PM, Xiaodi Wu via swift-evolution
>>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> It seems Float.init(exactly: NSNumber) has not been updated to behave
>>>>>> similarly?
>>>>>>
>>>>>> I would have to say, I would naively expect "exactly" to behave exactly
>>>>>> as it says, exactly. I don't think it should be a synonym for
>>>>>> Float(Double(exactly:)).
>>>>>>
>>>>>> On Mon, Apr 17, 2017 at 19:24 Philippe Hausler via swift-evolution
>>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> I posted my branch and fixed up the Double case to account for your
>>>>>> concerns (with a few inspired unit tests to validate):
>>>>>>
>>>>>> https://github.com/phausler/swift/tree/safe_nsnumber
>>>>>>
>>>>>> There is a built-in assumption here though: it does presume that Swift's
>>>>>> representations of Double and Float are IEEE-compliant. However, that is
>>>>>> a fairly reasonable assumption in the tests.
>>
>> Even with the updated code at
>> https://github.com/phausler/swift/tree/safe_nsnumber
>>
>>     print(Double(exactly: NSNumber(value: Int64(9000000000000000001))) as Any)
>>     // Optional(9e+18)
>>
>> still succeeds; however, the reason seems to be an error in the
>> `init(exactly value: someIntegerType)` initializers of Float/Double. I have
>> submitted a bug report: https://bugs.swift.org/browse/SR-4634.
>>
>>>>> (+Steve Canon) What is the behavior of Float.init(exactly: Double)?
>>>>> NSNumber's behavior would ideally be consistent with that.
>>>>
>>>> The implementation is essentially just:
>>>>
>>>>     self.init(other)
>>>>     guard Double(self) == other else {
>>>>         return nil
>>>>     }
>>>>
>>>> i.e. if the result is not equal to the source when round-tripped back to
>>>> double (which is always exact), the result is nil.
>>>>
>>>> – Steve
>>>
>>> Pretty much the same trick inside of CFNumber/NSNumber
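To make that round-trip trick concrete, here is a standalone paraphrase for
illustration (the helper name `floatIfExact(_:)` is made up; this is not the
standard-library source): widening the converted Float back to Double is always
exact, so any mismatch means the original value could not be represented. The
last line exercises the integer-backed case from SR-4634; the thread reports it
returning Optional(9e+18) at the time, whereas an exact conversion should yield
nil.

    // Standalone paraphrase of the round-trip check described above;
    // not the actual standard-library implementation.
    func floatIfExact(_ other: Double) -> Float? {
        let converted = Float(other)              // may round
        guard Double(converted) == other else {   // Float -> Double is exact
            return nil
        }
        return converted
    }

    print(floatIfExact(Double.pi) as Any)  // nil: the rounded Float differs from pi
    print(floatIfExact(2.0) as Any)        // Optional(2.0): exactly representable

    // The integer case from SR-4634: 9000000000000000001 is not representable
    // as a Double, so an exact conversion should fail (return nil).
    print(Double(exactly: Int64(9_000_000_000_000_000_001)) as Any)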
_______________________________________________
swift-evolution mailing list
[email protected]
https://lists.swift.org/mailman/listinfo/swift-evolution
