2014-07-12 11:55 GMT+02:00 Andres Valloud <[email protected]>:

>> You say starting from a more consistent place, but how consistent are =
>> and <= among Numbers without transitivity?
>>
>
> How are numbers consistent to begin with, when for all integers you have
>
> x + 1 ~= x
>
> but for floats, there are a multitude of values such that
>
> x + 1.0 = x
>
> holds?  Addition is hardly the only operation that exhibits this kind of
> behavior.  Or how about
>
> x = x
>
> being true for the vast majority of cases across all sorts of numbers,
> except when x is NaN?  Integers and fractions don't even have a notion of
> -0.0.  All of these are already pretty inconsistent without considering
> obvious issues such as
>
> 17 / 20 = 0.85
>
> which in at least some Smalltalks evaluates to true even though it is
> mathematically impossible.
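>
> For instance (any dialect with IEEE 754 doubles; the magnitudes are just
> illustrations):
>
> 1.0e17 + 1.0 = 1.0e17.   "true: 1.0 is less than half an ulp at 1.0e17"
> Float nan = Float nan.   "false"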
>
> The point is that classes like Float, Integer and Fraction may very well
> have a common superclass.  Nonetheless, floating point numbers and
> (effectively) rationals obey very different rules.  Both have their strong
> points, and those strengths are maximized with consistency.
>
> Andres.
>
>
Yes, all of that is true, but it is still the same argument:
- given that Floats are inexact, we have a license to waste ulps
- given that Float OPERATIONS do not obey algebraic rules, we have a license
to abandon mathematical properties at will

I claim we'd better not be so liberal.
Maybe I was a bit clumsy, but what I tried to say is that Floats are not
inexact per se.
They carry a well defined exact value.
Only the operations are inexact. But they are as little inexact as possible:
IEEE 754 requires that the operations behave as if performed EXACTLY, with
the result then rounded to the nearest double
(more precisely, rounded according to the rounding rule currently in effect,
round to nearest even being the default).
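
For example, each literal below is itself exactly rounded, and the sum is
then rounded exactly once (printed values assume a dialect that prints the
shortest decimal string reading back to the same Float, as Squeak/Pharo do):

0.1 + 0.2.         "0.30000000000000004"
0.1 + 0.2 = 0.3.   "false: the exactly rounded sum is not the double
                    nearest to 3/10"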

Comparisons can be exact, so I see no point in making them inexact.
That would be a bit against the spirit of IEEE 754, even if the standard did
not rule on mixed arithmetic.

By making these comparisons exact, we preserve the transitivity of = and <=,
which is not a bad thing.
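
Here is the kind of failure we avoid (values are illustrative; the contagion
comments describe what comparison-by-asFloat, as in C, would do):

| x y f |
x := 2 raisedTo: 53.   "9007199254740992"
y := x + 1.            "9007199254740993, not representable as a double"
f := x asFloat.
"With contagion, f = x and f = y (y asFloat rounds to f), and yet x ~= y:
= would not be transitive.  With exact comparison, f = y answers false."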

You know very well that VW has all the hacks needed to make hash agree with
the equality of Float/Fraction/etc...
What is the purpose of these hacks?
To make Dictionary & al work?
If so, why stop in the middle of the bridge and let examples like
http://bugs.squeak.org/view.php?id=3374 fail miserably?
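
From memory, that failure is of this shape (a sketch, not the literal
example from the report):

| s |
s := Set new.
s add: 0.5.
s includes: 1/2.
"If 1/2 = 0.5 answers true while their hashes disagree, the Set probes
the wrong bucket and can answer false here."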
Being consistent then means either abandoning 1/2 = 0.5, but we saw this has
nasty side effects, or denying the generality of Dictionary: not all objects
can be used as keys...
That's a possible choice, but IMO it will generate bad feedback from
customers.
Speaking of consistency, I strongly believe that Squeak/Pharo are on the
right track.

OK, we cannot magically erase the inexactness of floating point operations.
It is there on purpose, for the sake of speed and memory footprint.
Exceptional values like NaN and Inf add a great deal of complexity, and I
would always have preferred exceptions to exceptional values...
But when we can preserve some invariants, we'd better preserve them.
Once again, I did not invent anything; that's the approach of the Lispers,
and it seems wise.

And one last thing: I like your argumentation, it's very logical, so if you
have more, you're welcome,
but I have pretty much exhausted mine ;)

Nicolas


>> If we can maintain the invariant with a pair of double dispatching
>> methods and a coordinated hash, why shouldn't we?
>> Why did the Lispers do it? (Schemers too.)
>>
>> For me it's like saying: "since Floats are inexact, we have a license to
>> waste ulps".
>> We have not. The IEEE 754 model insists on operations being exactly
>> rounded.
>> These are painful contortions too, but most useful!
>> Extending the contortion to exact comparison sounds like a natural
>> extension to me; the main difference is that we do not round true or
>> false to the nearest float, so it's even nicer!
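>>
>> A workspace sketch of the shape I mean (asExactFraction is the
>> Squeak/Pharo conversion of a Float to its exact rational value; the
>> block stands in for the Float-side double dispatching method):
>>
>> | exactlyEqual |
>> exactlyEqual := [:aFloat :aRational |
>>     "never round the rational; compare in exact arithmetic"
>>     aFloat isFinite and: [aFloat asExactFraction = aRational]].
>> exactlyEqual value: 0.5 value: 1/2.    "true: 0.5 carries exactly 1/2"
>> exactlyEqual value: 0.1 value: 1/10.   "false: 0.1 is not exactly 1/10"
>>
>> The coordinated hash then has to make Fraction hash agree with Float
>> hash on exactly representable values like 1/2.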
>>
>>
>>     On 7/11/14 17:19, Nicolas Cellier wrote:
>>
>>         2014-07-12 1:29 GMT+02:00 Andres Valloud
>>         <[email protected]>:
>>
>>
>>                      I don't think it makes sense to compare floating
>>                      point numbers to other types of numbers with #=...
>>                      there's a world of approximations and other factors
>>                      hiding behind #=, and the occasional true answer
>>                      confuses more than it helps.  On top of that, then
>>                      you get x = y => x hash = y hash, and so the hash of
>>                      floating point values "has" to be synchronized with
>>                      integers, fractions, scaled decimals, etc...
>>                      _what a mess_...
>>
>>
>>                  Yes, that's true, hash gets more complex.
>>                  But then, this has been discussed before:
>>
>>                  {1/2 < 0.5. 1/2 = 0.5. 1/2 > 0.5} -> #(false false false).
>>
>>                  IOW, they are unordered.
>>                  Are we ready to lose the ordering of numbers?
>>                  Practically, this would have big impacts on the code base.
>>
>>
>>              IME, that's because loose code appears to work.  What
>>              enables that loose code to work is the loose mixed mode
>>              arithmetic.  I could understand integers and fractions.
>>              Adding floating point to the mix stops making as much sense
>>              to me.
>>
>>              Equality between floating point numbers does make sense.
>>              Equality between floating point numbers and scaled decimals
>>              or fractions... in general, I don't see how they could make
>>              sense.  I'd rather see the scaled decimals and fractions
>>              explicitly converted to floating point numbers, following a
>>              well defined procedure, and then compared...
>>
>>              Andres.
>>
>>
>>         Why do such mixed arithmetic comparisons make sense?
>>         Maybe we used floating point in some low level Graphics code for
>>         optimization reasons.
>>         After these optimized operations we get a Float result by
>>         contagion, but our intention is still to handle Numbers.
>>         It would be possible to riddle the code with explicit
>>         asFloat/asFraction conversions, but that does not feel like a
>>         superior solution...
>>
>>         OK, for this purpose we could as well convert to inexact first,
>>         before comparing.
>>         That's what C does, because C is too low level to ever care about
>>         transitivity and equivalence relations.
>>         It's not even safe in C, because the compiler can decide to
>>         promote to a larger precision behind your back...
>>         But let's ignore this "feature" and see what the Lispers
>>         recommend instead:
>>
>>         http://www.lispworks.com/documentation/lcl50/aug/aug-170.html
>>
>>
>>         It says:
>>
>>         In general, when an operation involves both a rational and a
>>         floating-point argument, the rational number is first converted
>>         to floating-point format, and then the operation is performed.
>>         This conversion process is called /floating-point contagion/
>>         <http://www.lispworks.com/reference/lcl50/aug/aug-193.html#MARKER-9-47>.
>>
>>         However, for numerical equality comparisons, the arguments are
>>         compared using rational arithmetic to ensure transitivity of the
>>         equality (or inequality) relation.
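>>
>>         In Smalltalk terms, the same rule would read roughly like this
>>         (assuming asExactFraction, as in Squeak/Pharo):
>>
>>         1/10 + 0.1.   "contagion: (1/10) asFloat + 0.1, answering 0.2"
>>         1/10 = 0.1.   "rational comparison: (1/10) = 0.1 asExactFraction,
>>                        answering false"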
>>
>>         So my POV is not very new; it's an old thing.
>>         It's also a well defined procedure, and somehow better to my
>>         taste because it preserves more mathematical properties.
>>         If Smalltalk wants to be a better Lisp, maybe it should not
>>         constantly ignore Lisp wisdom ;)
>>
>>         We can of course argue about the utility of transitivity...
>>         As a general library we provide tools like Dictionary that rely
>>         on transitivity.
>>         You can't tell how those Dictionaries will be used in real
>>         applications, so my rule of thumb is the principle of least
>>         astonishment.
>>         I got bitten by this once while memoizing... As a workaround I
>>         switched to a better strategy with double indirection: class ->
>>         value -> result, but it was surprising.
>>
>>
>>                  I'm pretty sure a Squeak/Pharo image wouldn't survive
>>                  such a change for long
>>                  (well, I tried it; the Pharo3.0 image survives, but
>>                  Graphics are badly broken, as I expected).
>>
>>                  That's what has always made me favour casual equality
>>                  over universal inequality.
>>
>>                  Also, should 0.1 = 0.1 ? In case those two floats have
>>                  been produced by different paths, with different
>>                  approximations, they might not be equal...
>>                  (| a b | a := 0.1. b := 1.0e-20. a+b=a.)
>>                  I prefer casual equality there too.
>>                  The two mathematical expressions a+b and a are different,
>>                  but both floating point expressions share the same
>>                  floating point approximation, and that's all that really
>>                  counts, because in the end we cannot distinguish an
>>                  exact from an inexact Float, nor two inexact Floats.
>>                  We lost the history...
>>
>>                  Also, the inexact flag is not attached to a Float; it is
>>                  only the result of an operation.
>>                  Statistically, it would waste one bit for nothing; most
>>                  floats are the result of an inexact operation.
>>                  But who knows, both might be the result of exact
>>                  operations too ;)
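>>
>>                  For instance, with every operand and result exactly
>>                  representable, no rounding occurs at all:
>>
>>                  0.25 + 0.25 = 0.5.   "true, computed exactly"
>>                  0.1 * 0.1 = 0.01.    "false, both sides were rounded"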
>>
>>
>>
>>                       On 7/11/14 10:46, stepharo wrote:
>>
>>                           I suggest you read the number chapter of the
>>                           Deep into Pharo book.
>>
>>                           Stef
>>
>>                           On 11/7/14 15:53, Natalia Tymchuk wrote:
>>
>>                               Hello.
>>                               I found an interesting thing:
>>                               Why is it like this?
>>
>>                               Best regards,
>>                               Natalia