On 2008-11-26 10:24:17 -0500, Andrei Alexandrescu <[EMAIL PROTECTED]> said:

> Well that at best takes care of _some_ operations involving constants, but for example does not quite take care of array.length - 1.

How does it not solve the problem? array.length is of type uint, and 1 is polysemous (byte, ubyte, short, ushort, int, uint, long, ulong). Only "uint - uint" is acceptable, and its result is "uint".
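
For instance, here's a small sketch of how I imagine it working, assuming a 32-bit size_t so that .length is effectively uint:

        int[] array = new int[3];
        auto last = array.length - 1;  // 1 adapts to uint; last is uint
        uint ok = last;                // fine
        // int bad = last;             // under my scheme: error, uint doesn't implicitly convert to int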


> Also consider:
> 
> auto delta = a1.length - a2.length;
> 
> What should the type of delta be? Well, it depends. In my scheme that wouldn't even compile, which I think is a good thing; you must decide whether prior information makes it an unsigned or a signed integral.

In my scheme it would give you a uint. You'd have to cast to get a signed integer... I see how it's not ideal, but I can't imagine how it could be coherent otherwise.

        auto diff = cast(int)a1.length - cast(int)a2.length;

By casting explicitly, you indicate in the code that if a1.length or a2.length contain numbers which are too big to be represented as int, you'll get garbage. In this case, it'd be pretty surprising to get that problem. In other cases it may not be so clear-cut.
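
Just to show the kind of garbage I mean (an illustrative case, unlikely for real array lengths):

        uint big = 3_000_000_000;   // fits in a uint, too big for an int
        int  bad = cast(int)big;    // -1_294_967_296: silently wrong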

Perhaps we could add a "sign" property to uint and an "unsign" property to int that'd give you the corresponding signed or unsigned value, and which could do range checking at runtime (enabled by a compiler flag).

        auto diff = a1.length.sign - a2.length.sign;
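
Here's a rough sketch of what the two properties could do, written as plain functions since I'm not sure property syntax can be hooked onto the built-in integer types (the names and the assert-based check are just my assumption of how it'd work):

        int sign(uint value)
        {
            // range check; could be enabled or disabled by a compiler flag
            assert(value <= int.max, "value too large for int");
            return cast(int)value;
        }

        uint unsign(int value)
        {
            assert(value >= 0, "negative value can't become uint");
            return cast(uint)value;
        }

With those, the example above would read sign(a1.length) - sign(a2.length) and give an int.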

And for the general problem of "uint - uint" giving a result below uint.min, as I said in my other post, that could be handled by a runtime check (enabled by a compiler flag), just like array bounds checking.
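
In code, the check would amount to something like this (a hypothetical helper; in practice the compiler would insert it, you wouldn't write it by hand):

        uint checkedSub(uint a, uint b)
        {
            assert(a >= b, "uint subtraction went below uint.min");
            return a - b;
        }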

One last thing. I think that in general it's a much better habit to change the type to signed prior to doing the subtraction. It may be harmless in the case of a subtraction, but as you said when starting the thread, it isn't for others (multiply, divide, modulo). I think the scheme above promotes this good habit by making it easier to change the type at the operands rather than at the result.
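
A small worked example of why the place of the cast matters once a division is involved (values picked to make the wrap-around obvious):

        uint a = 2, b = 5;
        auto atResult   = cast(int)((a - b) / 2);         // 2147483646: division happened on the wrapped uint
        auto atOperands = (cast(int)a - cast(int)b) / 2;  // -1: what was actually meant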


>> I'd make "auto x = 1" create a signed integer variable for the sake of simplicity.

> That can be formalized by having polysemous types have a "lemma", a default type.

That's indeed what I'm suggesting.


>> And all this would also make "uint x = -1" illegal... but then you can easily use "uint x = uint.max" if you want to enable all the bits. It's easier than in C: you don't have to include the right header and remember the name of a constant.

> Fine. With constants there is some mileage that can be squeezed. But let's keep in mind that that doesn't solve the larger issue.

Well, by making implicit conversions between uint and int illegal, we're solving the larger issue. Just not in a seamless manner.


--
Michel Fortin
[EMAIL PROTECTED]
http://michelf.com/
