In Nim, `float` is the same as `float64`. `float32` is smaller, which means 
less memory bandwidth is needed, as well as wider effective vector instructions 
on modern CPUs (e.g., with AVX you can fit 8 `float32` values in one 256-bit 
register but only 4 `float64`). Some neural net people want to use a couple of 
variants of 16-bit (or even 8-bit) floating point formats, while for certain 
very-high-precision cases 128-bit is becoming less rare.
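A minimal check of the size claims above, using Nim's `sizeof`:

```nim
# `float` is an alias for `float64` in Nim, so both occupy 8 bytes;
# `float32` takes half the space (hence half the memory bandwidth).
echo sizeof(float)    # 8
echo sizeof(float64)  # 8
echo sizeof(float32)  # 4
```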

So, I don't think there is any one perfect choice on the size/accuracy 
spectrum, and so your `SomeFloat` idea sounds smarter to me. You might also 
benefit from having "iterative" numerical routines accept some kind of 
"precision" parameter that defaults to something sensible (probably dependent 
upon the concrete floating point type in play) but lets users trade off speed 
and accuracy themselves. I.e., if you have some series approximation with an 
error bound, then allow users to pass the maximum tolerable error (in either 
absolute or relative terms). Nim's named parameter passing with defaults should 
make it easy to establish a common convention in your library.
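A sketch of what that convention might look like, assuming a hypothetical 
`myExp` routine and an `eps` parameter name (both made up for illustration): a 
Taylor series for e^x, generic over `SomeFloat`, that stops once the next term 
drops below a caller-tunable absolute tolerance.

```nim
# Hypothetical sketch: a series approximation generic over SomeFloat,
# with the max tolerable (absolute) error exposed as a defaulted parameter.
proc myExp*[F: SomeFloat](x: F; eps: F = F(1e-6)): F =
  ## Taylor series for e^x; stops when the current term's magnitude
  ## falls below `eps`.
  var term = F(1)   # running term x^n / n!
  var n = 1
  result = F(1)
  while abs(term) >= eps:
    term = term * x / F(n)
    result = result + term
    inc n

echo myExp(1.0)                  # float64, default tolerance
echo myExp(1.0'f32, 1e-3'f32)    # float32, looser tolerance
```

With named parameters, callers can then write `myExp(2.0, eps = 1e-9)` to pay 
for extra accuracy only when they need it.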
