Hi, now that BigInteger handles large numbers better, it would be nice for that to translate into an improvement in BigDecimal performance, since BigDecimal is essentially a wrapper around BigInteger. Unfortunately, BigDecimal is still slower than BigInteger because it also has to scale and round.
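For context, a small sketch of where that extra work comes from: a BigDecimal is an unscaled BigInteger plus a decimal scale, and an operation like division forces an explicit rounding step on top of the underlying BigInteger arithmetic. The particular values below are just illustrative:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleDemo {
    public static void main(String[] args) {
        // A BigDecimal is an unscaled BigInteger plus a decimal scale:
        // value = unscaledValue * 10^(-scale)
        BigDecimal d = new BigDecimal("123.45");
        System.out.println(d.unscaledValue()); // 12345
        System.out.println(d.scale());         // 2

        // Division generally cannot be exact in base 10, so a rounding
        // mode (and hence extra work beyond the raw BigInteger divide)
        // is required.
        BigDecimal q = BigDecimal.ONE.divide(
                new BigDecimal(3), 20, RoundingMode.HALF_EVEN);
        System.out.println(q); // 0.33333333333333333333
    }
}
```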
I don't see a way to fix this without breaking the BigDecimal = BigInteger * 10^n paradigm, but it could be done by introducing something like a BigFloat class that wraps a BigInteger such that BigFloat = BigInteger * 2^n. I would expect the code to be less complex than BigDecimal's, because the only places it would have to deal with powers of ten are conversion to and from String or BigDecimal. It would also be faster than BigDecimal for the same reason. The downside is that it wouldn't represent decimal fractions exactly (just like float and double). Is this something that would be beneficial in the real world?

I also ran a small experiment to see how long a computation takes using BigDecimal vs. the same computation using fixed-point BigInteger arithmetic. I wrote two programs that calculate pi to a million digits. The BigInteger version took 3 minutes; the BigDecimal version took 28 minutes (both single-threaded).

Tim
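To make the fixed-point idea concrete (this is not the pi program, just a minimal sketch of the technique): every value is stored as a BigInteger scaled by a fixed 10^DIGITS, so no per-operation rounding decisions are needed, only a shift back by the fixed scale after a multiply or divide. The class and method names here are mine, and sqrt(2) via Newton's method is chosen only as a simple workload:

```java
import java.math.BigInteger;

public class FixedPointSqrt {
    // Precision in decimal digits; a value v is stored as v * 10^DIGITS.
    static final int DIGITS = 50;
    static final BigInteger ONE = BigInteger.TEN.pow(DIGITS);

    // Fixed-point multiply: compute a*b, then shift back by the fixed scale.
    static BigInteger mul(BigInteger a, BigInteger b) {
        return a.multiply(b).divide(ONE);
    }

    // Newton's iteration x' = (x + n/x) / 2 for sqrt(n), all in fixed point.
    static BigInteger sqrt(BigInteger n) {
        BigInteger x = n; // crude initial guess
        BigInteger prev;
        do {
            prev = x;
            // n/x in fixed point is (n * 10^DIGITS) / x
            BigInteger q = n.multiply(ONE).divide(x);
            x = x.add(q).shiftRight(1);
        } while (x.subtract(prev).abs().compareTo(BigInteger.ONE) > 0);
        return x;
    }

    public static void main(String[] args) {
        BigInteger two = BigInteger.valueOf(2).multiply(ONE); // 2.000...0
        BigInteger r = sqrt(two);
        System.out.println(r);         // digits of sqrt(2), times 10^50
        System.out.println(mul(r, r)); // squaring it recovers ~2, up to truncation
    }
}
```

The point of the comparison: every intermediate result here carries the same scale, so there is never a scale-matching or rounding-mode decision, which is where BigDecimal spends its extra time.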