2017-11-09 21:55 GMT+01:00 Nicolas Cellier <[email protected]>:
> 2017-11-09 20:10 GMT+01:00 Raffaello Giulietti <[email protected]>:
>
>> On 2017-11-09 19:04, Nicolas Cellier wrote:
>>
>>> 2017-11-09 18:02 GMT+01:00 Raffaello Giulietti <[email protected]>:
>>>
>>> Anyway, relying upon Float equality should always be subject to
>>> extreme caution and examination.
>>>
>>> For example, what do you expect with plain old arithmetic in mind:
>>>
>>>     a := 0.1.
>>>     b := 0.3 - 0.2.
>>>     a = b
>>>
>>> This will lead to (a - b) reciprocal = 3.602879701896397e16.
>>> If it is in a Graphics context, I'm not sure that it's the expected
>>> scale...
>>>
>>> a = b evaluates to false in this example, so no wonder (a - b)
>>> evaluates to a big number.
>>>
>>> Writing a = b with floating point is rarely a good idea, so asking
>>> about the context which could justify such an approach makes sense IMO.
>>
>> Simple contexts, like the one which is the subject of this thread, are
>> the ones we should strive for, because they are the ones most likely
>> used in day-to-day work. Having useful properties and regularity for
>> simple cases might perhaps cover 99% of the everyday usages (just a
>> dishonestly biased estimate ;-) )
>>
>> Complex contexts, with heavy arithmetic, are best dealt with by
>> numericists when Floats are involved, or with unlimited-precision
>> numbers like Fractions by other programmers.
>
> This differs from my experience. Float strikes in the most simple
> places, where we put false expectations because of a different mental
> representation.
>
>>> But the example is not plain old arithmetic.
>>>
>>> Here, 0.1, 0.2, 0.3 are just shorthand for "the Floats closest to
>>> 0.1, 0.2, 0.3" (if implemented correctly, as it seems to be in Pharo).
>>> Every user of Floats should be fully aware of the implicit loss of
>>> precision that using Floats entails.
>>>
>>> Yes, it makes perfect sense!
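[Editor's note: the Smalltalk Floats above are IEEE 754 doubles, the same as
Python's floats, so the example can be replayed outside Smalltalk. A sketch,
not part of the thread; the 2**55 value is exactly the reciprocal quoted above.]

```python
# Replay of the thread's example with IEEE 754 doubles.
a = 0.1
b = 0.3 - 0.2

equal = (a == b)
scale = 1 / (a - b)

print(equal)   # False: neither 0.1 nor 0.3 - 0.2 is the exact decimal value
print(scale)   # 3.602879701896397e+16, which is exactly 2**55
```

The difference a - b happens to be exactly 2**-55, so its reciprocal is the
large "scale" Nicolas points at.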
>>> But precisely because you are aware that 0.1e0 is "the Float closest
>>> to 0.1" and not exactly 1/10, you should then not be surprised that
>>> they are not equal.
>>
>> Indeed, I'm not surprised. But then
>>
>>     0.1 - (1/10)
>>
>> shall not evaluate to 0. If it evaluates to 0, then the numbers shall
>> compare as being equal.
>>
>> The surprise lies in the inconsistency between the comparison and the
>> subtraction, not in the isolated operations.
>>
>>> I agree that the following assertion holds:
>>>
>>>     self assert: a ~= b & a isFloat & b isFloat & a isFinite & b isFinite
>>>         ==> (a - b) isZero not
>>
>> The arrow ==> is bidirectional even for finite Floats:
>>
>>     self assert: (a - b) isZero not & a isFloat & b isFloat & a isFinite
>>         & b isFinite ==> a ~= b
>>
>>> But (1/10) is not a Float, and there is no Float that can represent it
>>> exactly, so you simply cannot apply the rules of floating point to it.
>>>
>>> When you write (1/10) - 0.1, you implicitly perform (1/10) asFloat - 0.1.
>>> It is the rounding operation asFloat that makes the operation inexact,
>>> so it's no more surprising than other floating-point common sense.
>>
>> See above my observation about what I consider surprising.
>
> As already said, it's a false expectation in the context of mixed
> arithmetic.
>
>>> In the case of mixed-mode Float/Fraction operations, I personally
>>> prefer reducing the Fraction to a Float, because other commercial
>>> Smalltalk implementations do so, so there would be less pain porting
>>> code to Pharo, perhaps attracting more Smalltalkers to Pharo.
>>>
>>> Mixed arithmetic is problematic, and from my experience it mostly
>>> happens in graphics in Smalltalk.
>>>
>>> If ever I would change something according to this principle (but I'm
>>> not convinced it's necessary; it might lead to other strange side
>>> effects), maybe it would be how mixed arithmetic is performed...
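[Editor's note: Python's Fraction happens to implement the same pair of rules
debated here — exact comparison between Fraction and float, but coercion of the
Fraction to a float for arithmetic — so the "inconsistency" Raffaello describes
can be observed directly. A sketch, not part of the thread.]

```python
from fractions import Fraction

# Rule 1: equality between a Fraction and a float is decided exactly.
eq = (Fraction(1, 10) == 0.1)

# Rule 2: mixed arithmetic coerces the Fraction to a float first,
# so (1/10) becomes 0.1 before the subtraction.
diff = 0.1 - Fraction(1, 10)

print(eq)    # False: the double 0.1 is not exactly 1/10
print(diff)  # 0.0: yet the difference vanishes after coercion
```

This is exactly the pair "a ~= b, yet a - b = 0" that the thread calls
surprising when only one mental rule is expected.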
>>> Something like the exact difference, as Martin suggested, then
>>> converting to the nearest Float because the result is inexact:
>>>
>>>     ((1/10) - 0.1 asFraction) asFloat
>>>
>>> This way, you would have a less surprising result in most cases.
>>> But I could craft a fraction such that the difference underflows, and
>>> the assertion a ~= b ==> (a - b) isZero not would still not hold.
>>> Is it really worth it?
>>> Will it be adopted in other dialects?
>>
>> As an alternative, the Float>>asFraction method could return the
>> Fraction with the smallest denominator that would convert back to the
>> receiver by the Fraction>>asFloat method.
>>
>> So, 0.1 asFraction would return 1/10 rather than the beefy Fraction it
>> currently returns. To get the beast, one would have to intentionally
>> invoke asExactFraction or something similar.
>>
>> This might cause less surprising behavior. But I have to think more.
>
> No, the goal here was to have a non-null difference, because we need to
> preserve inequality for other features.
>
> Answering anything but a Float, at a high computation price, goes
> against the primary purpose of Float (speed, efficiency).
> If that's what we want, then we shall not use Float in the first place.
> That's why I don't believe in such a proposal.
>
> The minimal Fraction algorithm is an interesting challenge though. Not
> sure how to find it...
> Coming back to a bit of code, we only have a minimal decimal so far
> (with only powers of 2 & 5 in the denominator):
>
>     {[Float pi asFraction]. [Float pi asMinimalDecimalFraction]} collect: #bench.
>
>>> But the main point here, I repeat myself, is to be consistent and to
>>> have as much regularity as intrinsically possible.
>>>
>>> I think we have as much as possible already.
>>> Non-equality resolves more surprising behavior than it creates.
>>> It makes the implementation more mathematically consistent
>>> (understand: it preserves more properties).
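[Editor's note: both proposals above have close Python analogues, which may
help weigh them. Exact subtraction with rounding only at the end yields a tiny
nonzero difference, and the standard-library method limit_denominator is one
existing implementation of a "smallest denominator" conversion like the
asFraction change Raffaello suggests. A sketch, not part of the thread.]

```python
from fractions import Fraction

# Martin's proposal: subtract exactly, round only the final result.
# Fraction(0.1) is the exact value of the double 0.1.
exact_then_round = float(Fraction(1, 10) - Fraction(0.1))
print(exact_then_round)  # roughly -5.55e-18: small, but nonzero

# The "minimal fraction" idea: smallest denominator that still rounds
# back to the same double.
print(Fraction(0.1))                      # 3602879701896397/36028797018963968
print(Fraction(0.1).limit_denominator())  # 1/10
```

Note that limit_denominator bounds the denominator rather than searching for
the true minimum directly, but for 0.1 it recovers 1/10 as desired.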
>>> Tell me how you are going to sort these 3 numbers:
>>>
>>>     {1.0 . 1<<60+1/(1<<60). 1<<61+1/(1<<61)} sort.
>>>
>>> Tell me the expectation of:
>>>
>>>     {1.0 . 1<<60+1/(1<<60). 1<<61+1/(1<<61)} asSet size.
>>
>> A clearly stated rule, consistently applied and known to everybody,
>> helps.
>>
>> In the presence of heterogeneous numbers, the rule should state the
>> common denominator, so to say. Hence, the numbers involved in
>> mixed-mode arithmetic are either all converted to one representation or
>> all to the other, whether they are compared, added, subtracted,
>> divided, etc. One rule for mixed-mode conversions, not two.
>
> Having an economy of rules is always a good idea.
> If you can obtain a consistent system with 1 single rule rather than 2,
> then go for it.
> But if it comes at the price of sacrificing higher expectations, that's
> another matter.
>
> Languages that have a simpler arithmetic model (bounded integers, no
> Fraction) may stick to a single rule.
> More sofisticated models, like those you'll find in Lisp and Scheme,
> have the exact same logic as Squeak/Pharo.

sophisticated... (I'm on my way copying/pasting that one a thousand times)

> We don't have 2 rules gratuitously, as already explained:
> - A total order relation on non-NaN values, needed to be a good
>   Magnitude citizen, implies non-equality.
> - Producing a Float in the case of mixed arithmetic serves a practical
>   purpose: speed. (What are those damn Floats for otherwise?) It is
>   also justified a posteriori by (exact op: inexact) -> inexact.
>
> What are you ready to sacrifice/trade?
>
>>> Tell me why = is not an equivalence relation anymore (not transitive).
>>
>> Ensuring that equality is an equivalence is always a problem when the
>> entities involved are of a different nature, as here. This is not a new
>> problem and is not inherent to numbers. (Logicians and set theorists
>> would have much to tell.) Even comparing Points and ColoredPoints is
>> problematic, so I have no final answer.
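[Editor's note: Nicolas's sort/Set challenge can be replayed in Python, whose
Fraction/float comparison is exact like Squeak/Pharo's. The names f60 and f61
are mine. A sketch, not part of the thread.]

```python
from fractions import Fraction

# Two fractions barely above 1. Both round to the double 1.0, since the
# gap between adjacent doubles near 1.0 is 2**-52.
f60 = Fraction(2**60 + 1, 2**60)   # 1 + 2**-60
f61 = Fraction(2**61 + 1, 2**61)   # 1 + 2**-61
print(float(f60) == 1.0, float(f61) == 1.0)   # True True

# With exact comparison, the three values sort strictly and stay
# distinct in a set; rounding them for comparison would collapse all
# three onto 1.0.
nums = [f60, 1.0, f61]
print(sorted(nums))    # 1.0 first, then 1 + 2**-61, then 1 + 2**-60
print(len(set(nums)))  # 3
```

This is the "higher expectation" Nicolas defends: exact comparison keeps sort
order and Set membership meaningful for mixed Numbers.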
>> In Smalltalk, furthermore, implementing equality makes it necessary to
>> (publicly) expose many more internal details about an object than in
>> other environments.
>
> Let's focus on Number.
> Losing equivalence is losing the ability to mix Numbers in a Set.
> And not only Numbers: anything having a Number somewhere in an inst
> var, like (1/10)@0 and 0.1@0.
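[Editor's note: the point about (1/10)@0 versus 0.1@0 generalizes to any
composite object. Below, Python tuples stand in for Smalltalk Points (a
hypothetical simplification, not the thread's code): the composite inherits
the equality rule of the numbers it contains.]

```python
from fractions import Fraction

# Stand-ins for the Points (1/10)@0 and 0.1@0.
p = (Fraction(1, 10), 0)
q = (0.1, 0)

print(p == q)        # False, because Fraction(1, 10) != 0.1
print(len({p, q}))   # 2: a Set keeps both "points"
```

So whatever equality rule Number adopts propagates to every object with a
Number in an instance variable, which is why the choice matters beyond
arithmetic itself.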
