At the end of the day, we still insist on expecting there won't be loss
of information just because we wrote a decimal print string for a binary
floating point number. One can get offended or irritated all one wants,
but the reality of the situation won't change.
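
To see what that loss looks like in practice, here is a minimal C sketch
(1.2345 is just an example value; any decimal fraction whose denominator
is not a power of two behaves the same way):

  #include <stdio.h>

  int main(void)
  {
      double x = 1.2345;  /* stored as the nearest binary64 value, not as 1.2345 */

      /* The short print string "1.2345" round-trips, but printing more
         digits of the value actually stored shows it is not exactly 1.2345. */
      printf("%.25f\n", x);

      return 0;
  }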
I think the real solution to these "problems" is to implement decimal
floating point as per IEEE 754-2008. Then you can write something like
1.2345 and know for a fact that the number represented is *exactly* 1.2345.
At least one VisualWorks platform already supports decimal floating
point in hardware (IBM's POWER line). C99 extensions exist, at least in
draft form, that extend C to support decimal floating point. IBM has
released a GPL library in ANSI C that implements the feature. I am not
sure what the deal is with the license, and whether you can e.g.
interface to it without making your whole Smalltalk GPL.
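
As an illustrative sketch only, assuming a compiler that implements the
draft decimal extension (e.g. a recent GCC on x86-64 or POWER; the
_Decimal64 type and the DD literal suffix are that extension, not
standard C):

  #include <stdio.h>

  int main(void)
  {
      double     bsum = 0.0;
      _Decimal64 dsum = 0.0DD;
      int        i;

      for (i = 0; i < 10; i++) {
          bsum += 0.1;      /* binary: 0.1 has no exact representation */
          dsum += 0.1DD;    /* decimal: 0.1 is represented exactly     */
      }

      /* There is no portable printf format for _Decimal64 yet, so show
         the difference by comparison instead of by printing. */
      printf("binary  0.1 summed 10 times == 1.0 ? %s\n", bsum == 1.0   ? "yes" : "no");
      printf("decimal 0.1 summed 10 times == 1.0 ? %s\n", dsum == 1.0DD ? "yes" : "no");

      return 0;
  }

With binary doubles the first comparison answers "no"; with decimal64
the second answers "yes", which is exactly the property the 1.2345
example above relies on.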
Food for thought...
On 4/8/11 12:53, Marcus Denker wrote:
On Apr 8, 2011, at 7:48 PM, Hilaire Fernandes wrote:
Well, whatever the underlying representation, one can expect roundedTo:
2 to return a float with two decimals.
Sometimes I think we should use the resources that these amazing machines
give us these days to move programming languages closer to humans...
There are better Float models than the ones that are implemented in hardware.
E.g., another INRIA project is GNU MPFR:
"The MPFR library is a C library for multiple-precision floating-point computations
with correct rounding."
http://www.mpfr.org/
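
For what it's worth, using MPFR from C looks roughly like this (a
sketch; note that MPFR is still *binary* floating point, just with
arbitrary precision and correctly rounded operations, so it complements
rather than replaces the decimal idea above):

  /* Build with something like:  gcc demo.c -lmpfr -lgmp  */
  #include <stdio.h>
  #include <mpfr.h>

  int main(void)
  {
      mpfr_t x, y;

      mpfr_init2(x, 200);    /* 200-bit significands */
      mpfr_init2(y, 200);

      mpfr_set_str(x, "1.2345", 10, MPFR_RNDN);  /* nearest 200-bit value  */
      mpfr_sqrt(y, x, MPFR_RNDN);                /* correctly rounded sqrt */

      mpfr_printf("sqrt(1.2345) ~= %.40Rf\n", y);

      mpfr_clears(x, y, (mpfr_ptr) 0);
      return 0;
  }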
Why don't we make our language better at "real" math? The power in the machine
is definitely there...
I am quite convinced that if there are normal programming languages in 50 years,
the math part of them will be closer to Mathematica than to C...
Marcus
--
Marcus Denker -- http://www.marcusdenker.de
INRIA Lille -- Nord Europe. Team RMoD.