On Sunday, 22 February 2015 at 02:32:38 UTC, Adam D. Ruppe wrote:
On Sunday, 22 February 2015 at 02:15:29 UTC, Almighty Bob wrote:
what bothers me is any automatic conversion from float to int. It should never be automatic.

Should 5.5 + 5 compile? I suppose it arguably shouldn't but that'd probably be pretty annoying and no information is lost - it can convert 5 (the int) to 5.0 (the float) and add it without dropping a part of the number.

Float arithmetic is already lossy: almost every float operation truncates and rounds, and sticking an int in the mix does not change that.

Int arithmetic is precise; putting a float in and automatically rounding it changes a precise equation into a lossy one.

The point is that if you have a float LHS, you are accepting lossy arithmetic, since float is inherently lossy. If you have an int LHS, you should be able to expect precise arithmetic.

Most of what I do is DSP / data analysis. Not only do I never want automatic float-to-int conversions, the majority of the time I actually want a specific rounding mode when I do convert from float to int.

IMO automatic float-to-int conversion is a terrible idea. It's worse than uninitialized floats because you can get results that look almost OK, and it's a bugger to track down. And it's worse when the compiler has lulled you into a false sense of security by giving errors about not being able to convert floats to ints in most other cases.
