On 28-mar-10, at 13:39, so wrote:

That also means you can't write generic code in a C-derived language "by definition", right?

Thanks.

On Sun, 28 Mar 2010 16:30:28 +0400, bearophile <[email protected] > wrote:

so:
Why "3" is an int?
Why "0.3" is a double?

A possible answer: because D2 has no polysemous literals yet :-)


No, the answer is more complex.

For integers, normally the smallest suitable integral type is used (though not exactly: int wins in some cases, so check the exact rules), and then implicit conversions can take place. The rules are designed so that code does what one means most of the time (so conversion to long or uint is fine), but there are some pitfalls, often associated with conversion to unsigned types (which I find useful nevertheless, even if it can lead to bugs on occasion, and which is one reason to like the arbitrary-precision integers some languages have). Still, if you write a very large literal it becomes a long, for example.
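
Here is a minimal sketch of what I mean (assuming a reasonably recent D2 compiler; the last comparison is the kind of unsigned pitfall I am talking about):

    import std.stdio;

    void main() {
        // an unsuffixed literal that fits in an int is an int
        static assert(is(typeof(3) == int));
        // a literal too large for an int becomes a long
        static assert(is(typeof(3_000_000_000) == long));
        // implicit conversion to long or uint is fine
        long x = 3;
        uint y = 3;
        // pitfall: mixing signed and unsigned converts to unsigned,
        // so s becomes uint.max in the comparison and this prints false
        int s = -1;
        uint u = 1;
        writeln(s < u);
    }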

Floating-point numbers are a bit different (because they are always approximated), but the philosophy is similar.
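
For example (again a sketch; the suffixes select the other floating-point types):

    import std.stdio;

    void main() {
        static assert(is(typeof(0.3) == double)); // unsuffixed: double
        static assert(is(typeof(0.3f) == float)); // f suffix: float
        static assert(is(typeof(0.3L) == real));  // L suffix: real
        // the quality of the approximation depends on the chosen type
        writefln("%.20f", 0.3f);
        writefln("%.20f", 0.3);
    }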

The absence of polysemous literals is not a problem in my opinion; you can avoid the need for them with good implicit conversion rules.
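
A small example of what good implicit conversion rules buy you (a sketch assuming D2's value range propagation; the commented-out lines are ones the compiler rejects):

    void main() {
        ubyte a = 200;      // fine: the int literal 200 fits in a ubyte
        // ubyte b = 300;   // error: 300 does not fit, no silent truncation
        int i = 1000;
        // ubyte c = i;     // error: a runtime int may not fit in a ubyte
        ubyte d = i & 0xFF; // fine: the range of (i & 0xFF) fits a ubyte
    }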

Unexpected results normally happen only when you have something very ambiguous. For example, if you have an overloaded function and you call it with a literal, then in *any* language you need to decide which default type the literal has (to decide which type to start from before looking at implicit conversions). D (like C) chooses int for 3 and double for 0.3 (the latter, I suppose, for C compatibility and efficiency reasons, because one could argue that real would be a "safer" choice).
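
To make that concrete (a sketch of the overload resolution I mean):

    import std.stdio;

    void f(int x)    { writeln("f(int)"); }
    void f(double x) { writeln("f(double)"); }

    void main() {
        f(3);   // the literal 3 defaults to int, so f(int) is chosen
        f(0.3); // the literal 0.3 defaults to double, so f(double) is chosen
    }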

Yes, there are some dark corners, but I don't see anything that makes generic programming impossible. Maybe if you said what you are trying to do you would receive more meaningful answers.
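
For instance, templates simply infer the type a literal defaults to, so generic code works fine (a minimal sketch):

    import std.stdio;

    T twice(T)(T x) { return x + x; }

    void main() {
        writeln(twice(3));    // T is inferred as int
        writeln(twice(0.3));  // T is inferred as double
        writeln(twice(3.0f)); // a suffix lets you pick another type
    }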

Fawzi
