On Tuesday, 2 July 2013 at 02:15:09 UTC, Andrei Alexandrescu wrote:
On 7/1/13 6:29 PM, JS wrote:
What would be nice is an experimental version of D where one could easily extend the language to try out such concepts, to see whether they are truly useful and how difficult they are to implement. E.g., I could attempt to add said "feature", it could be merged into the experimental compiler, and those interested could download the compiler and test the feature out... all without negatively affecting D directly. If such features could be implemented dynamically, it would probably be pretty powerful.

I don't think such a feature would make it into D, even if the implementation cost were already sunk (i.e. an implementation was already done and one pull request away).

Ascribing distinct objects to the same symbol is a very core feature that affects and is affected by everything else. We'd design a lot of D differently if that particular feature were desired, and now the fundamentals of the design are long frozen. For a very simple example, consider:

auto a = 2.5; // fine, a is double
...
a = 3;


No, not under what I am talking about. You can't downgrade a type, only upgrade it. After a = 3, a is still a double. Under the concept I am describing, your example does nothing new.

but reverse the numbers:

auto a = 3;
a = 2.5;

and a is now a double, and your logic then becomes correct, EXCEPT that a is widened, which is safe.
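For reference, here is a minimal sketch of what current D does with both orderings (the variable names are illustrative): the int-first version is rejected today, while the double-first version compiles because int implicitly converts to double.

```d
void main()
{
    auto a = 3;     // a is inferred as int
    // a = 2.5;     // error under current rules: cannot implicitly
                    //   convert expression `2.5` of type `double` to `int`

    auto b = 2.5;   // b is inferred as double
    b = 3;          // fine: the int literal widens to double
    assert(b == 3.0);
}
```

The proposal under discussion would instead make the first snippet legal by inferring `a` as double from the later assignment.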

I really don't know how to make it any clearer, but I'm not sure anyone understands what I'm talking about ;/



By the proposed rule a will become an entirely different variable of type int, and the previous double variable would disappear. But current rules dictate that the type stays double. So we'd either have an unthinkably massive breakage, or we'd patch the language with a million exceptions.

Even so! If the feature were bringing amazing power, there may still be a case in its favor. But fundamentally it doesn't bring anything new - it's just alpha renaming; it doesn't enable doing anything that couldn't be done without it.
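The alpha-renaming point can be made concrete (names below are illustrative): under the discussed rule, re-assigning a different type would simply introduce a fresh variable, and that fresh variable can already be written explicitly today with no loss of expressiveness.

```d
void main()
{
    auto a = 2.5;   // a is double
    // Under the proposed rule, `a = 3` would create an entirely new
    // int variable shadowing the old one. That is just renaming --
    // today you write the fresh variable yourself:
    auto a2 = 3;    // a2 is int; use a2 wherever the "re-typed" a was meant
    assert(a == 2.5 && a2 == 3);
}
```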



Expanding a type is always valid because it just consumes more memory. A double can always masquerade as an int without issue, because one merely wastes 4 bytes. An int can't masquerade as a double, because any function that uses it as a double will cause corruption of 4 bytes of memory.

(I'm ignoring that double and int use different CPU instructions. This is irrelevant unless we are hacking things up.)
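For what it's worth, current D already encodes this asymmetry in its implicit conversion rules: an int widens to a double implicitly (a value conversion, so no bits are misread), while the narrowing direction requires an explicit cast. A minimal sketch:

```d
void main()
{
    int i = 42;
    double d = i;          // fine: implicit widening conversion
    // int j = d;          // error: narrowing needs an explicit cast
    int j = cast(int) d;   // explicit, potentially lossy narrowing
    assert(j == 42 && d == 42.0);
}
```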


The simplest example I can give is:

auto x = 2;
x = 2.5;

x IS a double, even though auto x = 2; makes it look like an int, BECAUSE that is how auto is currently defined (which might be the source of the confusion).

The reason is that the compiler looked at all assignments to x in the scope and was able to determine automatically that x needed to be a double.


I'll give one more way to look at this; it is a sort of in-between but necessary logical step:

Currently, auto looks at the assignment immediately following its keyword to determine the type, correct?

e.g.,

auto x = 3;

What if we allow auto to look at the first assignment to x, not necessarily the immediate assignment, e.g.:

auto x;
x = 3;

(should be identical to above)

or

auto x;
.... (no assignments to x)
x = 3;

All of this should be semantically equivalent, correct? To me, the last case is more powerful since it is more general. Of course, one could argue that it makes it harder to know the type of x, but I doubt this would be a huge issue.
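Today's D rejects an uninitialized auto declaration outright, so the closest you can get to this proposal is writing the widest assigned type by hand; the proposal amounts to letting the compiler derive that type from the scope. A sketch of the manual version (under the proposal, `double` below would be replaced by `auto`):

```d
void main()
{
    // auto x;     // illegal today: an auto declaration needs an initializer
    double x;      // written by hand; the proposal would infer this by
    x = 3;         //   scanning every assignment to x in the scope...
    x = 2.5;       //   ...and picking the widest type assigned (double here)
    assert(x == 2.5);
}
```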

