or give you the "default" precision. And note that this problem is not what you pointed out initially, but something I raised.

That is the whole point: what the heck is a "default" literal or precision in generic code?

"double occupies more space but can use SSE instructions"; "float is faster and uses half the space of a double". Those are not valid arguments, and they are not valid answers to the problem at hand. If I want a double I must state it ("0.5d"), a float ("0.5f"), a real ("0.5L"). We are talking, again, about generic code.

There are too many implicit casts, and the main reason is that there are default literal types. Am I wrong?

Invalid code? Probably:

T inv(T)(T m) {
        return 1 / m;   // the literal 1 is an int; for an integral T this truncates
}

Now you are just doing it for the sake of argument. :P
This is another function, for another purpose.

Integers are the only values that can be represented without any approximation (unless you resort to rationals, or to inefficient representations that are probably OK only at compile time or in very specific applications).
So generic code should use integers, not floating point.
You might argue that the way to get a floating point literal of type T is ugly:
        cast(T)1.0
Fortran, for example, uses 1.0_T, but one can definitely live with the cast. Anyway, normally you should use integers; a generic floating point literal is not really something that is well defined...

This is not only about floating point types; the same applies to integers.

The literal 3 defaults to int, right?
Imagine what would happen if it didn't.

ushort m0 = 30000000000000000000000000000000000; // error: does not fit
ubyte m1 = 3; // fine: the constant fits, so the implicit conversion is allowed
byte m2 = 3; // same
short m3 = 3; // same
ubyte m5 = -3; // error: -3 is not representable
ushort m6 = -3; // error: same

And for the floating point case:

float m = 0.5;
double n = 0.5;
real r = 0.5;

What is wrong with this?

Thanks.

Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
