On Tuesday, 22 July 2014 at 15:31:22 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 22 July 2014 at 11:40:08 UTC, Artur Skawina via Digitalmars-d wrote:
>> obey the exact same rules as RT. Would you really like to use a language in which 'enum x = (a+b)/2;' and 'immutable x = (a+b)/2;' results in different values?...
> With the exception of hash-functions the result will be wrong if you don't predict that the value is wrapping. If you do, I think you should make the masking explicit, e.g. specifying '(a+b)&0xffffffff' or something similar, which the optimizer can reduce to a single addition.
>> That's how it is in D - the arguments are only about the /default/, and in this case about /using a different default at CT and RT/. Using a non-wrapping default would be a bad idea (perf implications, both direct and
> Yes, but there is a difference between saying "it is ok that it wraps on addition, but it shouldn't overflow before a store takes place" and "it should be masked to N bits or fail on overflow even though the end-result is known to be correct". A system level language should encourage using the fastest opcode, so you shouldn't enforce 32 bit masking when the fastest register size is 64 bit etc. It should also encourage reordering so you get to use efficient SIMDy instructions.
>> Not possible (for integers), unless you'd be ok with getting different results at CT.
> You don't get different results at compile time if you are explicit about wrapping.
>
> NUMBER f(NUMBER a, NUMBER b) ...
>> Not sure what you mean here. 'f' is a perfectly fine existing function, which is used at RT. It needs to be usable at CT as is.
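
For concreteness, the "explicit about wrapping" style might look like this in D - a sketch only (the function name and the constants are mine), but it shows the point being argued: once the truncation is written into the expression, the compile-time and run-time results agree by construction, regardless of which default the language picks.

uint avgWrapped(uint a, uint b)
{
    // Do the addition in a wider register, then truncate explicitly to
    // 32 bits; on a 64-bit target this can still be lowered to an
    // ordinary 32-bit add followed by a shift.
    ulong sum = (cast(ulong)a + b) & 0xFFFF_FFFF;
    return cast(uint)(sum / 2);
}

// CTFE and run-time evaluation cannot diverge, because the wrapping is
// part of the expression rather than an implementation default.
enum ct = avgWrapped(3_000_000_000u, 3_000_000_000u);
static assert(ct == 852_516_352);

void main()
{
    uint a = 3_000_000_000u;
    assert(avgWrapped(a, a) == ct);
}
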
> D claims to focus on generic programming. So it should also encourage pure functions that can be specified for floats, ints and other numeric types that are subtypes of (true) reals in the same clean definition.
I think it's a complete fantasy to think you can write generic
code that will work for both floats and ints. The algorithms are
completely different.
One of the simplest examples is that given 'float f; int i;', (f + 1) and (i + 1) have totally different semantics.
There are no values of i for which i + 1 == i, but if abs(f) > 1/real.epsilon, then f + 1 == f.
Likewise there is no value of i for which i != 0 && i+1 == 1, but for any abs(f) < real.epsilon, f + 1 == 1.
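
Those claims are easy to check; here is a minimal sketch (the values are mine, picked well away from the boundary cases, and each sum is stored back into a float so the comparison happens at float precision rather than in a wider intermediate):

void main()
{
    int i = int.max;
    assert(i + 1 != i);          // adding 1 always changes an int (here it wraps)

    float big = 1.0e8f;          // well above 1/float.epsilon (~8.4e6)
    float bigPlusOne = big + 1;
    assert(bigPlusOne == big);   // 1 is smaller than the gap between adjacent floats

    float tiny = 1.0e-10f;       // well below float.epsilon (~1.2e-7)
    float tinyPlusOne = tiny + 1;
    assert(tinyPlusOne == 1);    // tiny is absorbed entirely by rounding
}
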
> If you express the expression in a clean way to get down to the actual (more limited) type, then the optimizer sometimes can pick an efficient sequence of instructions that might be a very fast approximation if you reduce the precision sufficiently in the end-result. To get there you need to differentiate between a truncating division and a non-truncating division etc.
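
A tiny sketch of that last point (the halve template is mine) - the same generic expression already behaves differently for the two kinds of type:

T halve(T)(T x)
{
    return x / 2;   // truncates toward zero for integers, rounds to nearest for floating point
}

void main()
{
    assert(halve(7)   == 3);     // integer division truncates
    assert(halve(7.0) == 3.5);   // floating-point division does not
}
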
Well, it's not a small number of differences. Almost every
operation is different. Maybe all of them. I can't actually think
of a single operation where the semantics are the same for
integers and floating point.
Negation comes close, but even then you have the special cases
-0.0 and -(-int.max - 1).
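
Both corner cases are easy to observe (a short sketch; the sign of -0.0 only shows up indirectly, e.g. through division):

void main()
{
    double z = 0.0;
    assert(-z == z);                      // -0.0 compares equal to 0.0...
    assert(1.0 / -z == -double.infinity); // ...yet it is a distinct value

    int m = -int.max - 1;                 // this is int.min
    assert(-m == m);                      // negating int.min wraps back to int.min
}
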
> The philosophy behind generic programming, and the requirements for efficient generic programming, are quite different from the machine-level hand-optimizing philosophy of classic C, IMO.
I think that, unfortunately, it's a quest that is doomed to fail. Producing generic code that works for both floats and ints is a fool's errand.