On Mon, 29 Mar 2010 01:45:09 +0400, Fawzi Mohamed <[email protected]> wrote:

Ok, if you have a real message, I am sorry to have been so harsh; I was a bit hard because of other trolls going on. Sorry if I did not really catch your message, but next time try to bring an actual example of what the compiler does wrong; it will help us understand your problem.

D is the best language I have seen, and that is why I am here.
The point I am trying to make is something most of you already say in this newsgroup.

Even though C++ has been my language of choice, it made some major mistakes by blindly copying some of C's rules and syntax and then building many things on top of them, as Walter said about arrays not being first-class citizens in C and, unfortunately, C++. For me this is a case exactly like arrays... For a mainly generic/systems language, 0 shouldn't be an int and 0.0 shouldn't be a double, and I am having a hard time finding reasons why it is still this way other than backwards/root compatibility.

Thanks.


(1) Converting a floating point literal into a double is usually not lossless. 0.5f, 0.5, and 0.5L are all exactly the same number, since 0.5 is exactly representable.
But 0.1 is not the same as 0.1L.
So it's a bit odd that this silent lossy conversion is taking place.
It does have a very strong precedent from C, however.
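For reference, the loss is easy to demonstrate (a quick sketch; it assumes real is wider than double, as with the 80-bit x87 type on x86):

    import std.stdio;

    void main()
    {
        // 0.5 has an exact binary representation, so the float,
        // double and real literals all denote the same number
        static assert(0.5f == 0.5 && 0.5 == 0.5L);

        // 0.1 does not: rounded to double it differs from the
        // real literal
        writeln(0.1 == 0.1L); // false when real is wider than double
    }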

Well, the correct thing to do (I thought that was what was done, possibly excluding CTFE) is to always do constant folding at the maximum precision available (maybe even more than real), and only at the end convert to the target type.

so that one could write
cast(real)0.555_555_555_555_555_555
and have it be equivalent to
0.555_555_555_555_555_555L

Thus cast(T)xxx would basically be the generic way to write a floating point literal of type T.

If this is not how it works, then it should indeed be changed...
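One way to probe what the compiler actually does (a sketch; whether the assert passes depends on how the front end folds the literal):

    // if the unsuffixed literal is folded at (at least) real
    // precision before the cast, both sides denote the same value;
    // if it is truncated to double first, the static assert fails
    enum r1 = cast(real)0.555_555_555_555_555_555;
    enum r2 = 0.555_555_555_555_555_555L;
    static assert(r1 == r2);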

I still argue that using integers, where possible, is often a better choice.

The language that solved the problem of floating point numbers, integers & co. most cleanly is probably Aldor (http://www.aldor.org/), but it is a dead language.

(2) The interaction between implicit casting and template parameters is quite poor. E.g., the fact that '0' is an int, not a floating point type, means that something simple like:
add(T)(T x) if (isFloatingPoint!(T))
doesn't work properly. It is not the same as:
add(real x)
since it won't allow add(0).

Which is pretty annoying. Why can't 0 just mean zero???

And maybe 1 the identity?

The Aldor solution is probably a bit too challenging, but it allowed both things... Integers were arbitrary-length integrals converted to the final type with an implicit cast (fromInteger); floats were converted to rationals and then to the correct type with an implicit cast (fromRational). 0 and 1 could be implicitly cast separately, to allow things like vectors and matrices to use them without defining a full fromInteger cast.
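Something vaguely similar can be sketched in D with an explicit constructor (Mat2 is a made-up type here, and since D has no user-defined implicit conversions from literals, the construction stays explicit):

    struct Mat2
    {
        double[2][2] a;

        // accepts exactly the literals 0 (zero matrix) and
        // 1 (identity), in the spirit of Aldor's special
        // treatment of these two values
        this(int n)
        {
            assert(n == 0 || n == 1);
            a = [[cast(double)n, 0.0], [0.0, cast(double)n]];
        }
    }

    unittest
    {
        auto zero = Mat2(0); // zero matrix
        auto id   = Mat2(1); // identity
    }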

I find that for integers D normally does the correct thing, and for floats I thought it was closer to what it should be (i.e. arbitrary-precision constant folding and a cast to the final type).

About the isFloatingPoint issue: one can still easily write
add(T)(T x) if (is(T : real))
or something like that, and probably solve the problem.
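A small sketch of that workaround (the body of add is made up here; only the constraint matters):

    real add(T)(T x) if (is(T : real))
    {
        return x + 1.0L;
    }

    unittest
    {
        // with isFloatingPoint!T the first call would not
        // compile, because the literal 0 is an int
        assert(add(0) == 1.0L);
        assert(add(0.5) == 1.5L);
    }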

What would be nice is to have templates to find types with a given precision and the like, as one does in Fortran.
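Something along the lines of Fortran's selected_real_kind could look like this (a sketch; FloatWithDigits is an assumed name, not library code):

    // pick the narrowest floating point type with at least
    // `digits` decimal digits of precision
    template FloatWithDigits(int digits)
    {
        static if (digits <= float.dig)
            alias FloatWithDigits = float;
        else static if (digits <= double.dig)
            alias FloatWithDigits = double;
        else static if (digits <= real.dig)
            alias FloatWithDigits = real;
        else
            static assert(false, "no floating point type with enough precision");
    }

    static assert(is(FloatWithDigits!6 == float));
    static assert(is(FloatWithDigits!15 == double));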

Also, I use templates to get the complex type or the real type of a given type, and so on. These things should probably be in the library...
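Roughly the kind of helpers I mean (RealType and ComplexType are assumed names, not library code; Complex is from std.complex):

    import std.complex : Complex;

    // map a type to the type of its real part...
    template RealType(T)
    {
        static if (is(T : Complex!F, F))
            alias RealType = F;
        else
            alias RealType = T;
    }

    // ...and to the matching complex type
    template ComplexType(T)
    {
        alias ComplexType = Complex!(RealType!T);
    }

    static assert(is(RealType!(Complex!double) == double));
    static assert(is(ComplexType!float == Complex!float));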

So with some extra templates these things could probably be solved.


--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
