I have just added an enhancement request to Bugzilla that asks for a small
change in D; I don't know if it has already been discussed or if it's already
present in Bugzilla:
http://d.puremagic.com/issues/show_bug.cgi?id=5864
In a program like this:
void main() {
    uint x = 10_000;
    ubyte b = x;
}
DMD 2.052 raises a compile-time error here, because the b = x assignment may
lose information (some bits of x):
test.d(3): Error: cannot implicitly convert expression (x) of type uint to ubyte
I think a good, safe systems language has to help programmers avoid unwanted
(implicit) loss of information during data conversions.
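With the error in place the programmer has to state the intent explicitly; a
minimal sketch of the two usual ways (std.conv.to performs a run-time bounds
check and throws on overflow):

import std.conv: to;
void main() {
    uint x = 10_000;
    ubyte b1 = cast(ubyte)x; // explicit truncation, keeps only the low 8 bits
    ubyte b2 = to!ubyte(x);  // run-time check, throws ConvOverflowException here
}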
This is a case of loss of precision where D generates no compile errors:
import std.stdio;
void main() {
    real f1 = 1.0000111222222222333;
    writefln("%.19f", f1);
    double f2 = f1; // loss of FP precision
    writefln("%.19f", f2);
    float f3 = f2; // loss of FP precision
    writefln("%.19f", f3);
}
No error is raised, yet some information is lost, as the output shows:
1.0000111222222222332
1.0000111222222223261
1.0000110864639282226
So one possible way to face this situation is to statically disallow
double=>float, real=>float, and real=>double conversions (on some computers
real=>double conversions don't cause loss of information, but I suggest
ignoring this, to increase code portability), and to introduce compile-time
errors like:
test.d(5): Error: cannot implicitly convert expression (f1) of type real to
double
test.d(7): Error: cannot implicitly convert expression (f2) of type double to
float
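Explicit casts would keep such code compilable when the precision loss is
intended; a minimal sketch:

double f2 = cast(double)f1; // intended rounding, stated explicitly
float f3 = cast(float)f2;   // ditto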
Today float values seem less useful, because with scalar CPU instructions the
performance difference between operations on float and double is often not
important, and you often want the precision of double. But modern CPUs (and
current GPUs) have vector operations too. They are currently able to perform
operations on 4 floats or 2 doubles (or 8 floats or 4 doubles with wider
vector registers) per instruction. Such vector instructions are sometimes used
directly in C code compiled with GCC through SSE intrinsics, or come out of
the auto-vectorization of loops that GCC performs on normal scalar C code. In
this situation the use of float instead of double gives almost a twofold
performance increase. There are programs (like certain ray-tracing code) where
the precision of a float is enough. So a compile-time error that catches the
currently implicit double->float conversions may help the programmer avoid
unwanted uses of double, allowing the compiler to pack 4/8 floats in a vector
register during loop vectorization.
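A small (hypothetical) example of what I mean: here the double literal
silently promotes the whole loop body to double math, and the proposed error
on the implicit double->float conversion back into x would point the
programmer to the 0.5f fix:

// The 0.5 literal is a double, so x * 0.5 is computed in double and then
// implicitly narrowed back to float (an error under the proposal):
void scale(float[] a) {
    foreach (ref x; a)
        x = x * 0.5;
}

// The fixed version keeps the whole loop in float math, so the
// auto-vectorizer can pack 4/8 floats per vector register:
void scaleFixed(float[] a) {
    foreach (ref x; a)
        x = x * 0.5f;
}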
Partially related note: currently std.math doesn't seem to use the cosf and
sinf C functions, but it does use sqrtf:
import std.math: sqrt, sin, cos;
void main() {
    float x = 1.0f;
    static assert(is(typeof( sqrt(x) ) == float)); // OK
    static assert(is(typeof( sin(x) ) == float));  // ERR
    static assert(is(typeof( cos(x) ) == float));  // ERR
}
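A minimal sketch of the float overloads that std.math could add, assuming
that wrapping the C functions from core.stdc.math is acceptable:

import core.stdc.math: sinf, cosf;

// Float overloads that keep the result type float, like sqrt already does:
float sin(float x) { return sinf(x); }
float cos(float x) { return cosf(x); }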
Bye,
bearophile