Andrei Alexandrescu wrote:
Denis Koroskin wrote:
On Wed, 26 Nov 2008 18:24:17 +0300, Andrei Alexandrescu
<[EMAIL PROTECTED]> wrote:
Also consider:
auto delta = a1.length - a2.length;
What should the type of delta be? Well, it depends. In my scheme that
wouldn't even compile, which I think is a good thing; you must decide
whether prior information makes it an unsigned or a signed integral.
Sure, it shouldn't compile. But explicit casting to either type won't
help. Let's say you expect that a1.length > a2.length and thus expect
a strictly positive result. Putting in an explicit cast will not detect
the error but suppress it, silently giving you an erroneous result.
But "silently" and "putting a cast" don't go together. It's the cast
that makes the erroneous result non-silent.
Besides, you don't need to cast. You can always use a function that does
the requisite checks. std.conv will have some of those, should any
change in the rules make it necessary.
I doubt that would be used in practice.
By this I'm essentially replying to Don's message in the bugs newsgroup:
nobody puts a gun to your head to cast.
Putting an assert(a1.length > a2.length) might help, but the check
will be unavailable unless the code is compiled with asserts enabled.
Put an enforce(a1.length > a2.length) then.
A better solution would be to write code as follows:
auto delta = unsigned(a1.length - a2.length); // returns an unsigned
value; throws on overflow (e.g., "2 - 4")
auto delta = signed(a1.length - a2.length); // returns the result as a
signed value; throws on overflow (e.g., "int.min - 1")
auto delta = a1.length - a2.length; // won't compile
Amazingly, this solution was discussed with these exact names! The signed
and unsigned functions can be implemented as library code, but
unfortunately (or fortunately, I guess) that means the bits32 and bits64
types are available to all code.
One fear of mine is the reaction of throwing hands in the air: "how
many integral types are enough???". However, if we're to judge by the
addition of long long and a slew of typedefs to C99 and C++0x, the
answer is "plenty". I'd be interested in gauging how people feel about
adding two (bits64, bits32) or even four (bits64, bits32, bits16, and
bits8) types as basic types. They'd be bitbags with undecided sign, ready
to be converted to their counterparts of decided sign.
Here I think we have a fundamental disagreement: what is an 'unsigned
int'? There are two disparate ideas:
(A) You think that it is an approximation to a natural number, ie, a
'positive int'.
(B) I think that it is a 'number with NO sign'; that is, the sign
depends on context. It may, for example, be part of a larger number.
Thus, I largely agree with the C behaviour -- once you have an unsigned
in a calculation, it's up to the programmer to provide an interpretation.
Unfortunately, the two concepts are mashed together in C-family
languages. (B) is the concept supported by the language typing rules,
but usage of (A) is widespread in practice.
If we were going to introduce a slew of new types, I'd want them to be
for 'positive int'/'natural int', 'positive byte', etc.
Natural int can always be implicitly converted to either int or uint,
with perfect safety. No other conversions are possible without a cast.
Non-negative literals and manifest constants are naturals.
The rules are:
1. Anything involving unsigned is unsigned, (same as C).
2. Else if it contains an integer, it is an integer.
3. (Now we know all quantities are natural):
If it contains a subtraction, it is an integer [Probably allow
subtraction of compile-time quantities to remain natural, if the values
stay in range; flag an error if an overflow occurs].
4. Else it is a natural.
The reason I think literals and manifest constants are so important is
that they are a significant fraction of the natural numbers in a program.
[Just before posting I discovered that other people have posted some
similar ideas.]