Hello NightStrike!

On Sat, Apr 9, 2011 at 2:41 AM, NightStrike <[email protected]> wrote:
> On Sun, Apr 3, 2011 at 7:07 AM, James K Beard <[email protected]> wrote:
>> A quick glance through the document seems to tell us that the decimal
>> arithmetic will incorporate checks to ensure that any rounding in binary
>> floating point does not compromise the accuracy of the final decimal
>> result.
>> ...
>
> I'm being a little OT here, but I'm curious.. does that mean that
> COBOL was a language that gave very high accuracy compared to C of the
> day?
>

No, COBOL, by virtue of using decimal arithmetic, would not have been more
accurate than C using binary floating-point, but rather "differently" accurate.
(This, of course, is only true if you make an apples-to-apples comparison.
If you use 64-bit decimal floating-point -- call this double precision --
this will be much more accurate than 32-bit single-precision binary
floating-point, and, of course, double-precision binary floating-point
will be much more accurate than single-precision decimal floating-point.)

That is, the set of real numbers that can be represented exactly as decimal
floating-point numbers is different than the set of exactly representable binary
floating-point numbers.

Let me illustrate this with an approximate example -- I won't get the exact
numbers and details of the floating-point representation correct, but the
core idea is spot-on.

Compare using three decimal digits (0 -- 999; 10^3 = 1000) with ten binary
digits (0 -- 1023; 2^10 = 1024), essentially the same accuracy.

Consider the two real numbers:

   1 - 1/100 = 0.99 = 99 * 10^-2,  an exact decimal floating-point number

   1 - 1/128 = 0.1111111 (binary) = 127 * 2^-7,  an exact binary floating-point number

The first, 1 - 1/100, is not exactly representable in binary, because
1/100 = 1 / (2^2 * 5^2), and you can't represent fractional (negative)
powers of five exactly in binary.
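
If you want to see this on a real machine, here is a tiny C sketch --
just an illustration with ordinary IEEE doubles, nothing COBOL-specific --
that prints how the two values are actually stored:

   #include <stdio.h>

   int main(void)
   {
       /* 0.99 = 99/100; the factor of 5^2 in the denominator means
          the stored binary double is only an approximation. */
       double ninety_nine_hundredths = 0.99;

       /* 0.9921875 = 127/128 = 127 * 2^-7, so it is stored exactly. */
       double one_minus_1_128 = 1.0 - 1.0 / 128.0;

       printf("0.99    stored as %.20f\n", ninety_nine_hundredths);
       printf("127/128 stored as %.20f\n", one_minus_1_128);
       return 0;
   }

On a typical build the first line prints something like
0.98999999999999999112, while the second comes out with all trailing zeros.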

The second, 1 - 1/128, is not exactly representable in decimal,
because we are only using three decimal digits.

   1/128 = 0.0078125 (exact),

so

   1 - 1/128 = 0.9921875 (exact)

If we give ourselves seven decimal digits, we can represent
1 - 1/128 exactly, but that wouldn't be an apples-to-apples
comparison.

The best we can do with our three-decimal-digit decimal
floating-point is

   1 - 1/128 ~= 0.992 = 992 * 10^-3 (approximate)

This shows that neither decimal nor binary is more accurate, but
simply that they are different.  If it is important that you can
represent things like 1/100 exactly, use decimal, but if you want
to represent things like 1/128 exactly, use binary.
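
Here is the classic money-style illustration in C (again just a quick
binary-double sketch of my own, not how COBOL actually does it) of why
people who care about hundredths reach for decimal:

   #include <stdio.h>

   int main(void)
   {
       /* Add one cent (0.01) one hundred times using binary doubles.
          Each 0.01 is already slightly rounded, so the sum drifts
          away from exactly 1.00. */
       double total = 0.0;
       for (int i = 0; i < 100; ++i)
           total += 0.01;

       printf("total          = %.20f\n", total);
       printf("total == 1.0 ?   %s\n", total == 1.0 ? "yes" : "no");
       return 0;
   }

A decimal floating-point type would land on exactly 1.00 here, because
0.01 and 1.00 are both among its exactly representable values.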

(In practice, for the same word size, binary is somewhat more
accurate, because in decimal a single decimal digit is usually
stored in four bits, wasting the difference between a decimal
digit and a hexadecimal (0 -- 15) digit.  Also, you can trade off
accuracy for range by moving bits from the mantissa to the
exponent.)
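
As a rough back-of-the-envelope on that "wasted bits" point (just my own
arithmetic sketch, using C's log2):

   #include <stdio.h>
   #include <math.h>

   int main(void)
   {
       /* A decimal digit carries log2(10) ~ 3.32 bits of information,
          but BCD spends a full 4 bits on it. */
       double bits_per_digit  = log2(10.0);
       double waste_per_digit = 4.0 - bits_per_digit;

       printf("bits needed per decimal digit: %.3f\n", bits_per_digit);
       printf("bits wasted per BCD digit:     %.3f\n", waste_per_digit);

       /* Over, say, a 16-digit significand that is roughly 11 bits
          binary could have spent on extra precision or range. */
       printf("waste over 16 digits:          %.1f bits\n",
              16.0 * waste_per_digit);
       return 0;
   }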

Happy Hacking!


K. Frank
