http://d.puremagic.com/issues/show_bug.cgi?id=1977

------- Comment #17 from [EMAIL PROTECTED]  2008-11-24 18:51 -------
(In reply to comment #16)
> I searched around, and you are right that C# disallows byte + byte where the
> result is assigned back to a byte, while it does allow +=.  The reason given
> was not a fear of overflow on reassignment to the same type (as is obvious
> from the fact that += is allowed); the point is to prevent operation overflow
> where it is not expected.  For example:
> 
> int x = (byte)64 + (byte)64;
> 
> should result in x == 128, not x == -128.
> 
> And the enforcement is not in the compiler warning system; the enforcement is
> that only opcodes for integer arithmetic are defined, so the compiler promotes
> the bytes to integers, which yields an integer result.

That's not quite accurate. Again, it's one thing to pass typechecking and
another to generate code. Any desired rule could have been implemented with
only int arithmetic and subsequent masking.
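
For illustration, here is a minimal D sketch of that masking idea (the helper
name addAsByte is hypothetical): the arithmetic itself is done on int, and
byte semantics are recovered afterwards by masking and narrowing.

import std.stdio;

// Hypothetical helper: do the addition in int arithmetic (the only kind
// assumed available), then mask back down to 8 bits for byte semantics.
byte addAsByte(byte a, byte b)
{
    int wide = a + b;               // operands promote to int; no overflow here
    return cast(byte)(wide & 0xFF); // subsequent masking narrows the result
}

void main()
{
    byte b = 64;
    writeln(addAsByte(b, b)); // prints -128: byte wraparound via int + mask
}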

> But C++ does not forbid it, at least with g++ (even with -Wall).

C++ operates in a similar way (values are conceptually promoted to int before
arithmetic operations) but it's much more lax with narrowing conversions.
That's why there is no problem with assigning the result of adding e.g. two
shorts back to a short: the computation really yields an int, but C++ has no
qualms about narrowing that into a short, regardless of the potential loss of
data.
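
For contrast, a minimal sketch of how the same situation fares in D (assuming
current compiler behavior; the exact diagnostic wording varies):

import std.stdio;

void main()
{
    short a = 30000, b = 10000;
    // short sum = a + b;          // rejected: a + b has type int, and D does
    //                             // not implicitly narrow int to short
    short sum = cast(short)(a + b); // explicit cast required; wraps to -25536
    writeln(sum);
}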

> This is not to say that the choices C# made are correct; it's just that there
> is precedent in C# (I couldn't find a Java reference, but I'm sure it's the
> same).
> 
> Here is a possible solution that preserves the current safe behavior and
> relaxes the implicit casting rules enough that overflow is allowed to happen
> in the appropriate situations:
> 
> I think everyone agrees that the following:
> 
> byte b = 64;
> int i = b + b;
> 
> should produce i == 128.

I think there is agreement on that, too.

> And most believe that:
> 
> byte b2 = b + b;
> 
> should produce b2 == -128 without error, and should be equivalent semantically
> to:
> 
> byte b2 = b;
> b2 += b;
> 
> We don't want adding 2 bytes together to yield a byte result in all cases,
> only in cases where the result is actually assigned to or used as a byte.

Well the "most" part doesn't quite pan out, and to me it looks like the
argument fails here. For one thing, we need to eliminate people who accept Java
and C#. They would believe that what their language does is the better thing to
do. Also, C and C++ are getting that right by paying a very large cost - of
allowing all narrowing integral conversions. I believe there is a reasonable
level of agreement that automatic lossy conversions are not to be encouraged.
This puts C and C++ behind Java and C# in terms of "getting it right".

> What if we defined several 'internal' types that were only used by the
> compiler?
> 
> pbyte -> byte promoted to an int (represented as an int internally)
> pubyte -> ubyte promoted to an int
> pshort -> short promoted to an int
> pushort -> ushort promoted to an int
> etc...
> 
> The 'promoted' types internally work just like int except in certain cases:
> 
> If you have (px or x) <op> (px or x), the resulting type is px
> 
> If you have (px or x) <op> (py or y), or (py or y) <op> (px or x), and the
> rules of promotion allow x to be implicitly cast to y, the resulting type is
> py.  Otherwise, the resulting type is int.
> 
> px is implicitly castable to x.
> 
> If the rules of promotion allow x to be implicitly cast to y, px is implicitly
> castable to y.  Otherwise, assigning px to y requires an explicit cast.
> 
> If calling a function foo with an argument of type px, where foo accepts type
> x, it is allowed.
> 
> If calling a function foo with argument type px, where foo accepts type y, and
> x is implicitly castable to y, it is allowed.  If x is not implicitly castable
> to y, it requires a cast.
> 
> If a variable is declared with 'auto', and the initializer is of type px, then
> the variable is declared as an int.
> 
> You can't declare any variables of type pbyte, etc.; the types don't even have
> symbolic names, as they are used only internally by the compiler.
> 
> Now you have correct resolution of homogeneous operations, and no overflow of
> data where it is not desired.
> 
> examples:
> 
> byte b = 64;
> b + b -> evaluates to pbyte(128)
> b = b + b -> evaluates to b = pbyte(128), results in b == -128
> int i = b + b -> evaluates to int i = pbyte(128), results in i == 128.
> short s = b + b -> evaluates to short s = pbyte(128), results in s == 128.
> 
> short s = 64;
> byte b = s + s; -> evaluates to byte b = pshort(128), requires a cast because
> short does not fit into byte.
> 
> void foo(byte b);
> void foo2(short s);
> 
> byte x;
> short s;
> foo(x + x); // allowed
> foo2(x + x); // allowed
> foo(s + s); // requires cast
> foo2(s + s); // allowed 
> 
> Does this cover the common cases?  Is there a reason why this can't be
> implemented?  Is there a reason why this *shouldn't* be implemented?

IMHO not enough rationale has been brought forth on why this *should* be
implemented. It would make D implement an arcane set of rules for a dubious
benefit, if any.

A better problem to spend energy on is the signed <-> unsigned morass. We've
discussed that many times and could not come up with a reasonable solution. For
now, D has borrowed the C rule "if any operand is unsigned then the result is
unsigned", which leads to the occasional puzzling results known from C and C++.
Eliminating those fringe cases without losing compatibility with C and C++ is a
tough challenge.
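
A minimal D sketch of the kind of puzzle meant here, with the C rule in action:

import std.stdio;

void main()
{
    int  i = -1;
    uint u = 1;
    // One operand is unsigned, so the result is unsigned: i is converted
    // to uint (4294967295) before the multiplication takes place.
    auto r = i * u;
    writeln(typeof(r).stringof); // prints "uint"
    writeln(r);                  // prints 4294967295, not -1
}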

