Don:

> OTOH looks like this is a case where it'd be much faster to use fixed 
> length integers rather than BigInt (I think that's true of nearly 
> everything in RosettaCode) -- strict upper bounds are known on the 
> length of the integers, they aren't arbitrary precision.

Two problems with this idea:
1) The Python version of this coins program found a bug in the C version 
that uses 128-bit integers. When you use fixed-size numbers you risk 
overflows. If the overflows are silent (like the ones in the built-in D 
integral types) you don't know whether your code is giving bogus results, and 
you have to work and think to be sure there are no overflows. That work and 
thinking takes time that I'd often rather spend on something else. This is why 
multiprecision numbers (or non-silent overflows) are handy. In this program the 
upper bounds are known only if you compute them first, with Python or with 
multiprecision numbers like C plus GMP :-)
2) What if I want the coins for a somewhat larger number of euros, like 
200_000? Maybe the result or some intermediate value doesn't fit in 128 bits, 
so the C program becomes useless again, while the Python code is still useful. 
Multiprecision numbers are sometimes more useful.
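To make the silent-overflow point concrete, here is a small sketch (not the original coins program): Python's built-in ints are arbitrary precision, so a product simply grows, while the same operation truncated to 128 bits — as in C's unsigned __int128 — wraps around and quietly produces a bogus value. The `mul128` helper is hypothetical, just a simulation of fixed-width arithmetic:

```python
# Simulate unsigned 128-bit arithmetic with a mask, to contrast it
# with Python's arbitrary-precision integers.
MASK128 = (1 << 128) - 1

def mul128(a, b):
    # Fixed-width multiply: the product is silently truncated to the
    # low 128 bits, like overflow of C's unsigned __int128.
    return (a * b) & MASK128

x = 1 << 100
exact = x * x            # Python bignum: the correct 2**200
wrapped = mul128(x, x)   # all significant bits fall off the top

print(exact == 1 << 200)  # True
print(wrapped)            # 0 -- a silently bogus result
```

Nothing in the wrapped computation signals an error; that is exactly why fixed-size code needs a separate argument (or a prior bignum run) to establish that no intermediate value can exceed the bound.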

Bye,
bearophile
