Carl Lowenstein wrote:

Aren't there just as many cases in which the
decimal-to-binary-to-decimal errors undercharge rather than
overcharge?

Yes, and that is also an error.

The problem is that monetary computations are *defined* using decimal arithmetic.

Neither binary FP nor decimal FP is inherently superior. Decimal is just "the way things are done" when money and accounting are involved. Binary is "the way things are done" when computers are involved.

The big problem is the algorithmic time required for decimal computations. Decimal computations are O(n), where n is the number of digits, while binary computations are O(1) as long as the values are in range. The combination of greater asymptotic time *and* the fact that computation time depends on the input data makes decimal a particularly poor match for VLSI hardware.
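A minimal sketch of that cost difference, assuming decimal numbers stored as digit arrays (least significant digit first, equal lengths) the way decimal software or BCD hardware must represent them; the digit-serial loop is the O(n) work, while the native long addition below it is a single fixed-width O(1) machine operation:

```java
public class DigitAdd {
    // Add two equal-length decimal numbers held as digit arrays,
    // least significant digit first, propagating a carry digit by digit.
    // One pass over all n digits: O(n) in the number of digits.
    static int[] addDecimal(int[] a, int[] b) {
        int[] sum = new int[a.length + 1];
        int carry = 0;
        for (int i = 0; i < a.length; i++) {
            int s = a[i] + b[i] + carry;
            sum[i] = s % 10;
            carry = s / 10;
        }
        sum[a.length] = carry;
        return sum;
    }

    public static void main(String[] args) {
        // 275 + 348 = 623, digits least-significant first
        int[] r = addDecimal(new int[]{5, 7, 2}, new int[]{8, 4, 3});
        System.out.println("" + r[3] + r[2] + r[1] + r[0]); // 0623

        // The binary case: one fixed-width add, O(1) for in-range values.
        long x = 275, y = 348;
        System.out.println(x + y); // 623
    }
}
```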

displays five numbers, whereas the similar loop:

for (double d = 1.1; d <= 1.5; d += 0.1) System.out.println(d);


displays only four numbers. (If d had a decimal type then five
numbers would be displayed in both cases.)
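The parenthetical claim can be checked with Java's own decimal type, java.math.BigDecimal (used here as a stand-in for "a decimal type"; the method names are mine, for illustration). The binary loop terminates one step early because repeated addition of the binary approximation of 0.1 accumulates error that pushes d past 1.5:

```java
import java.math.BigDecimal;

public class DecimalLoop {
    // The binary floating-point loop from the text.
    static int binaryCount() {
        int n = 0;
        for (double d = 1.1; d <= 1.5; d += 0.1) n++;
        return n; // 4: accumulated error makes the fifth d exceed 1.5
    }

    // The same loop using exact decimal values.
    static int decimalCount() {
        int n = 0;
        BigDecimal step = new BigDecimal("0.1");
        BigDecimal limit = new BigDecimal("1.5");
        for (BigDecimal d = new BigDecimal("1.1");
             d.compareTo(limit) <= 0;
             d = d.add(step)) {
            n++;
        }
        return n; // 5: 1.1, 1.2, 1.3, 1.4, 1.5
    }

    public static void main(String[] args) {
        System.out.println(binaryCount());  // 4
        System.out.println(decimalCount()); // 5
    }
}
```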



I think I learned this while writing my first Fortran program, circa 1959.
Count with integers, compute with floats.

However, there is no a priori reason why this should be so. It is just an artifact of a value that has an exact decimal string representation having only an approximate binary machine representation.
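That artifact is easy to make visible in Java: the BigDecimal(double) constructor preserves the double's exact binary value, while the BigDecimal(String) constructor holds the exact decimal the string names, so the two disagree for 0.1:

```java
import java.math.BigDecimal;

public class ExactVsApprox {
    public static void main(String[] args) {
        // Exact value of the nearest binary double to 0.1:
        // a long decimal expansion slightly above 0.1.
        BigDecimal fromDouble = new BigDecimal(0.1);
        // Exact decimal value named by the string "0.1".
        BigDecimal fromString = new BigDecimal("0.1");

        System.out.println(fromDouble);
        System.out.println(fromString); // 0.1
        System.out.println(fromDouble.compareTo(fromString) != 0); // true
    }
}
```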

It actually took me longer to bump into this particular problem, because I learned to code bitmapped graphics using fractional powers of two stored in floats: all numeric constants in Microsoft Basic were floats, and division/integer conversion was so painfully slow.

It wasn't until I actually started programming on Sun workstations with FP chips that I had to rearrange my thinking because division wasn't so painful.

-a


--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
