Don wrote:
Andrei Alexandrescu wrote:
I've had a talk with Walter today, and two interesting things transpired.

First off, Walter pointed out that I was wrong about one conversion rule (google value preservation vs. type preservation). It turns out that everything that's unsigned and smaller than int is actually promoted to int, NOT to unsigned int as I thought. This is the case in C, C++, and D.

That has some interesting consequences.

  ushort x = 0xFFFF;
  short y = x;       // same bits, so y is -1
  printf("%d %d %d\n", x>>1, y>>1, y>>>1);

// prints: 32767 -1 2147483647
//   x>>1  : x promotes to int 65535, so the shift yields 32767
//   y>>1  : y promotes to int -1; the signed shift keeps it -1
//   y>>>1 : an unsigned shift of the promoted 32-bit -1 gives 2147483647

What a curious beast the >>> operator is!

I'm very excited about polysemy. It's entirely original to D, covers a class of problems that can't be addressed with any of the known techniques (subtyping, coercion...) and has a kick-ass name to boot.

I agree. By making the type system looser in the one place where you actually need it to be loose, you can tighten it everywhere else. Fantastic.

My enthusiasm about polysemy cooled considerably when I realized that what polysemy promises for integral operations can be provided (and actually outdone) by range analysis, a well-known technique.

The way it works: assign to each integral expression in the program two numbers: the smallest and the largest value it could possibly take. Literals therefore have a salami-slice-thin range associated with them. Whenever code asks for a lossy implicit conversion, check the range; if it fits within the target type, let the code go through.
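A minimal sketch of that bookkeeping, written as ordinary run-time D (the real analysis would live in the compiler; Range and fitsIn are names invented here for illustration):

  import std.stdio;

  struct Range { long min, max; }  // smallest and largest possible value

  // Can every value in r be represented by the integral type T?
  bool fitsIn(T)(Range r)
  {
      return r.min >= T.min && r.max <= T.max;
  }

  void main()
  {
      auto r = Range(0, 255);
      writeln(fitsIn!ubyte(r)); // true  -> allow the implicit conversion
      writeln(fitsIn!byte(r));  // false -> reject it
  }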

Each operation computes its range from the ranges of its operands, and the computation is operation-specific. For example, for non-negative operands the range of a & b is 0 to min(a.range.max, b.range.max): the result can never exceed either operand, and can always drop to 0. Sign considerations complicate this a bit, but not by much.
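Continuing the sketch (rangeAnd is again a made-up name, and it handles only non-negative operands):

  import std.algorithm : min;
  import std.stdio;

  struct Range { long min, max; }  // as in the sketch above

  // Bitwise AND can only clear bits: the result never exceeds either
  // operand, and can always drop to 0.
  Range rangeAnd(Range a, Range b)
  {
      return Range(0, min(a.max, b.max));
  }

  void main()
  {
      // 3 & 5 == 1, safely inside the conservative result [0, 3]
      writeln(rangeAnd(Range(3, 3), Range(5, 5))); // Range(0, 3)
  }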

The precision of range analysis can be quite impressive. For example:

uint b = ...;
ubyte a = ((b & 2) << 6) | (b >> 24);

typechecks without complaint, because the analysis can prove there is no loss of information for any value of b.
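Tracing the ranges the analysis would compute, step by step (a sketch under the rules above):

  b             -> [0, 4294967295]  // the full uint range
  b & 2         -> [0, 2]
  (b & 2) << 6  -> [0, 128]
  b >> 24       -> [0, 255]         // only the top 8 bits survive
  the bitwise | -> [0, 255]         // fits ubyte exactly, so it's accepted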


Andrei
