On 2/26/18 6:34 PM, psychoticRabbit wrote:
> On Sunday, 25 February 2018 at 14:52:19 UTC, Steven Schveighoffer wrote:
>> 1 == 1.0, no?
>
> no. at least, not when a language forces you to think in terms of types.
But you aren't. You are thinking in terms of text representation of
values (which is what a literal is). This works just fine:
double x = 1;
double y = 1.0;
assert(x == y);
The exact same code is generated to store 1 into x as to store 1.0 into
y. There is no difference to the compiler; the two literals only differ
in the source text.
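
For instance, printing both values in hex-float form (%a) shows the
identical stored value, something like:

import std.stdio;

void main()
{
    double x = 1;
    double y = 1.0;
    writefln("%a", x); // 0x1p+0
    writefln("%a", y); // 0x1p+0, same bit pattern either way
}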
> I admit, I've never printed output without using format specifiers,
> but still, if I say write(1.0), it should not go off and print what
> looks to me, like an int.
If it didn't, I'm sure others would complain about it :)
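
If you want the decimals shown, a format specifier gets you there, e.g.:

import std.stdio;

void main()
{
    writeln(1.0);          // prints 1
    writefln("%f", 1.0);   // prints 1.000000
    writefln("%.1f", 1.0); // prints 1.0
}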
> Inheriting crap from C is no excuse ;-)
>
> and what's going on here btw?
>
> assert( 1 == 1.000000000000000001 );  // assertion error in DMD but not in LDC
> assert( 1 == 1.0000000000000000001 ); // no assertion error??
Floating point is not exact. In fact, even the one that asserts cannot
be represented exactly internally. At some decimal place it cannot store
any more significant digits, so it just approximates.
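
A quick way to see where double runs out of digits (assuming the usual
IEEE 754 double, about 16 significant decimal digits):

import std.stdio;

void main()
{
    writeln(double.epsilon);         // ~2.22e-16, smallest step above 1.0
    double d = 1.000000000000000001; // 1 + 1e-18 is below that step,
    writeln(d == 1.0);               // so it rounds to exactly 1.0: true
}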
You may want to just get used to this; it's the way floating point has
always worked.

-Steve