On 2019/06/14 4:23 AM, Richard Damon wrote:
On 6/13/19 10:51 AM, R Smith wrote:
On 2019/06/13 4:44 PM, Doug Currie wrote:
Except by the rules of IEEE (as I understand them)

-0.0 < 0.0 is FALSE, so -0.0 is NOT "definitely left of true zero"

Except that 0.0 is also an approximation to zero, not "true zero."

Consider that 1/-0.0 is -inf whereas 1/0.0 is +inf

I do not know whether that is the actual result in any programming
language, but in mathematical terms it is simply not true.

1/0.0 --> undefined; it doesn't exist, cannot be computed, and should
error out. Anything returning +Inf or -Inf is plain wrong.
I posit the same holds true for 1/-0.0
Yes, 1.0/0.0 is undefined in the field of Real numbers, but IEEE isn't
the field of Real numbers. First, as pointed out, it has limited
precision, and second, it has values that are not in the field of Real
numbers, namely NaN and +/-Inf.

Note that with a computer, you need to do SOMETHING when asked for
1.0/0.0; it isn't good to just stop (and traps/exceptions are hard to
define for general computation systems), so defining the result is much
better than declaring that anything could happen. It could have been
defined as just a NaN, but having the special 'error' values +Inf and
-Inf turns out to be very useful in some fields.
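For illustration, here is a short Python sketch of those special values (Python chosen arbitrarily; the thread has no code, and note that Python itself raises ZeroDivisionError for 1.0/0.0 rather than returning +Inf, a language-level choice layered on top of the IEEE behavior):

```python
import math

# IEEE 754 adds values that have no counterpart in the field of
# real numbers: +Inf, -Inf, and NaN.
pos_inf = math.inf
neg_inf = -math.inf
nan = math.nan

# The infinities compare beyond every finite double.
assert pos_inf > 1.79e308
assert neg_inf < -1.79e308

# NaN is the "no meaningful result" value: it compares unequal to
# everything, including itself.
assert nan != nan

# Operations with no defined answer produce NaN, e.g. Inf - Inf.
assert math.isnan(pos_inf - pos_inf)
```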

I wasn't advocating doing anything weird when the value -0.0 exists in memory; how to display it is the larger question behind this thread.[**]

What I was objecting to is the claim (made in service of suggesting a use-case for -0.0) that the mathematical result of 1/-0.0 IS in fact "-Inf", and that computers should therefore conform. It simply isn't; it's an error and SHOULD be shown as such. Neither is the mathematical result of 0/-1 equal to -0.0: it isn't mathematically true (or rather, it isn't distinct from 0.0). I maintain that any system that stores -0.0 as the result of computing 0/-1 does so only because the computational method handles the sign bit separately from the division, and because IEEE 754 happens to allow -0.0 as a distinct value thanks to that same sign bit, not because it was ever mathematically necessary.
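That sign-bit-handled-separately behavior is easy to observe (again a Python sketch, assuming a platform with IEEE 754 doubles, which is essentially universal):

```python
import math
import struct

# 0.0 / -1.0 yields -0.0: the result's sign bit is computed from the
# operands' signs, independently of the magnitude.
neg_zero = 0.0 / -1.0

# Arithmetically the two zeros are indistinguishable...
assert neg_zero == 0.0
assert not (neg_zero < 0.0)

# ...but the stored bit patterns differ, in exactly the sign bit.
bits_neg = struct.pack('>d', neg_zero)
bits_pos = struct.pack('>d', 0.0)
assert bits_neg[0] == 0x80 and bits_pos[0] == 0x00

# copysign is one of the few operations that can see the difference.
assert math.copysign(1.0, neg_zero) == -1.0
```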

I'll be happy to eat my words if someone can produce a mathematical paper that argues for the inclusion of -0.0 in IEEE 754 to serve a mathematical concept. It's a fault, not a feature.


[** As to the greater question of representation: I'm now a bit on the fence about it. It isn't mathematical, but it does help represent the true bit-level content. I'm happy either way.]



_______________________________________________
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users