Andrei Alexandrescu wrote:
Sergey Gromov wrote:
Thu, 04 Dec 2008 09:54:32 -0800, Andrei Alexandrescu wrote:
Fawzi Mohamed wrote:
On 2008-12-01 22:30:54 +0100, Walter Bright
<[EMAIL PROTECTED]> said:
Fawzi Mohamed wrote:
On 2008-12-01 21:16:58 +0100, Walter Bright
<[EMAIL PROTECTED]> said:
Andrei Alexandrescu wrote:
I'm very excited about polysemy. It's entirely original to D,
I accused Andrei of making up the word 'polysemy', but it turns
out it is a real word! <g>
Is this the beginning of discriminating among overloads based on
return values as well?
No. I think return type overloading looks good in trivial cases,
but as things get more complex it gets inscrutable.
I agree that return type overloading can go very bad, but a little
bit of it can be very nice.
Polysemy makes more expressions typecheck, but I am not sure that I
want that.
For example, with size_t & co I would almost always want stronger
typechecking, as if size_t were a typedef, but with the usual rules
with respect to ptrdiff_t, size_t, ... (i.e. no implicit casts between
them). This is because mixing size_t with int or long is almost always
suspicious, but you might see the problem only on the other platform
(32 vs. 64 bit), and not on your own.
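A hypothetical sketch of that portability trap (the function and array
are made up for illustration; D syntax):

    void process(int[] arr)
    {
        size_t n = arr.length;
        int i = n; // accepted where size_t is 32 bits wide, but a
                   // narrowing conversion, rejected, on 64-bit targets
    }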
Something that I would find nice, on the other hand, is a kind of
integer literal that automatically converts to the type that makes the
most sense.
Wouldn't value range propagation take care of that (and actually
more)? A literal such as 5 will have a support range [5, 5], which
provides enough information to compute the best type down the road.
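For instance (a sketch of the proposed behavior, not actual compiler
output):

    ubyte a = 5;    // range [5, 5] fits in ubyte: no cast needed
    ubyte b = 300;  // range [300, 300] exceeds ubyte's [0, 255]: error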
It sounds very nice and right, except it's incompatible with C.
Well, you can safely reduce the bit count, so that assigning "1025 & 15"
to "byte" would go through without either a cast or a warning/error. But
you cannot grow the bit count beyond the C limits, that is, you cannot
return long for "1024 << 30". You should probably report an error, and
provide some way to tell the compiler, "I mean it."
In the worst case, any shift, multiplication, or addition will result
in a compiler error. Am I missing something?
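In code, the two cases might look like this (hypothetical sketch):

    byte b = 1025 & 15;  // result range is [0, 15]: safe to narrow to byte
    long l = 1024 << 30; // the C rules keep the shift at type int, so it
                         // overflows; it cannot silently be widened to long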
Well, any integral value carries:
a) its type, as per the C rules;
b) the minimum possible value;
c) the maximum possible value.
The type stays as per the C rules, so there's no change there. If (and
only if) a *narrower* type is requested as a conversion target for the
value, the range is consulted. If the range is too large for the target,
the conversion fails.
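A sketch of how those rules could play out (hypothetical, following the
description above; D syntax):

    int x = 1025 & 15; // type stays int per the C rules; range is [0, 15]
    ubyte u = x;       // narrower target requested: [0, 15] fits, OK
    int y = x * 100;   // type still int; range grows to [0, 1500]
    ubyte v = y;       // narrower target requested: 1500 > 255, fails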
Andrei
Any idea how hard this would be to implement?
Also, we've got an interesting case in D that other languages don't
have: CTFE functions.
I presume that range propagation would not apply during evaluation of
the CTFE function, but once evaluation is complete, the result becomes
a known literal, which can have precise range propagation. But there
are still some funny issues:
uint foo(int x) { return 5; }

int bar(int y)
{
    ubyte w = foo(7); // narrowing conversion: generates a compiler
                      // warning (foo is not called as CTFE)
    return 6;
}

enum ubyte z = foo(7); // this is range propagated, so narrowing is OK
enum int q = bar(3);   // still gets a warning, because bar() didn't compile

int gar(T)(int y)
{
    ubyte w = foo(7);
    return 6;
}

enum int v = gar!(int)(3); // is this OK???