On 4/19/2011 8:37 PM, bearophile wrote:
> dsimcha:
>> I know what you suggest could prevent bugs in a lot of
>> cases, but it also has the potential to get in the way in a lot of cases.
>
> You are right, and I am ready to close that enhancement request at once if the
> consensus is against it.
>
> double->float and real->float cases are not so common. How often do you use
> floats in your code? In my code it's uncommon to use floats, generally I use doubles.
Very often, actually. Basically, any time I have a lot of floating-point
numbers that aren't going to be extremely big or small in magnitude and
I'm interested in storing them and maybe performing a few _simple_
computations with them (sorting, statistical tests, most machine
learning algorithms, etc.). Good examples are gene expression levels
(or transformations thereof) and probabilities. Single precision is
plenty unless your numbers are extremely big or small, you need a
ridiculous number of significant figures, or you're performing intense
computations (for example matrix factorizations) where rounding error
may accumulate and turn a small loss of precision into a catastrophic one.
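
To make the accumulation point concrete, here's a minimal D sketch (the
loop count and increment are arbitrary, illustrative values) of how
rounding error piles up in a naive float summation while the same sum
done in double stays accurate:

import std.stdio;

void main()
{
    enum n = 10_000_000;
    float fsum = 0.0f;
    double dsum = 0.0;

    foreach (i; 0 .. n)
    {
        fsum += 0.1f;  // each float addition can discard low-order bits
        dsum += 0.1;   // double has ~9 more decimal digits to spare
    }

    // Exact answer is 1_000_000; the float total drifts noticeably,
    // while the double total is correct to many significant figures.
    writefln("float : %.6f", fsum);
    writefln("double: %.6f", dsum);
}

For storing values and doing a handful of simple operations, float is
perfectly fine; it's long chains of operations like this where the lost
bits start to show.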