From a discussion on D.learn.

If x and y are different integral types, then in an expression like
  x >> y
the integral promotion rules are applied to x and y.
This behaviour is obviously inherited from C, but why did C use such a counter-intuitive and bug-prone rule?
Why isn't typeof(x >> y) simply typeof(x) ?
What would break if it did?

You might think the rule is that typeof(x >> y) is typeof(x + y),
but it isn't: the usual arithmetic conversions are NOT applied.
Each operand is promoted separately, and the result has the promoted
type of the left operand only:
typeof(int >> long) is int, not long, BUT
typeof(short >> int) is int (short promotes to int).
And we have this death trap (bug 2809):

void main()
{
    short s = -1;
    ushort u = s;       // u is 0xFFFF
    assert(u == s);
    // s is promoted to int before the shift, so s >>> 1 operates on
    // 0xFFFF_FFFF and yields 0x7FFF_FFFF, while u >>> 1 operates on
    // 0x0000_FFFF and yields 0x7FFF.
    assert((s >>> 1) == (u >>> 1)); // FAILS
}
