On Monday, 1 July 2013 at 23:30:19 UTC, Andrei Alexandrescu wrote:
> On 7/1/13 9:57 AM, JS wrote:
>> I think there is big confusion in what I'm "suggesting" (it's not a proposal, because I don't expect anyone to take the time to prove its validity... and you can't know how useful it could be if you don't have any way to test it out).
>>
>> It's two distinctly different concepts when you allow a "multi-variable" and one that can be "up-typed" by inference (the compiler automatically "up-types").
> To me the basic notion was very clear from day one. Changing the type of a variable is equivalent to "unaliasing" the existing variable (i.e. destroying it and forcing it out of the symbol table) and defining an entirely different variable, with its own lifetime. It just so happens it has the same name.
>
> It's a reasonable feature to have -- a nice cheat that brings a statically-typed language closer to the look-and-feel of dynamic languages. It saves on names, which is more helpful than one might think. In D, things like overloading and implicit conversions would probably make it too confusing to be useful.
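For what it's worth, Rust's `let` shadowing works essentially the way described above: each new `let` retires the old binding and creates a fresh variable, with its own type and lifetime, that merely reuses the name. A minimal sketch:

```rust
fn main() {
    // `x` is born as an integer.
    let x = 3;
    println!("{}", x + 1); // prints 4

    // Shadowing: the previous `x` is gone from the symbol table; an
    // entirely new variable, with its own type and lifetime, takes
    // over the name.
    let x = "three";
    println!("{}", x.len()); // prints 5
}
```

Note that this is explicit rebinding by the programmer, not the compiler silently changing a variable's type.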
>> I'm more interested in a true counterexample where my concept (which I've not seen in any language before) results in an invalid context.... It's obvious to me that the concept is sound within reasonable use bounds.
>> The problem is that when such an "idea" is presented, you get people who are automatically against it out of various irrational fears, and they won't take any serious look at it to see if it has any merit... If you jump to the conclusion that something is useless without any real thought on it, then it obviously is... but the same type of mentality has been used to "prove" just about anything was useless at one time or another. (And if that mentality ruled, we'd still be using 640k of memory.)
> I think this is an unfair characterization. The discussion was pretty good and gave the notion a fair shake.
>> I have a very basic question for you and would like a simple answer:
>>
>> In some programming languages, one can do the following type of code:
>>
>>     var x; // x is some type of variable that holds data. Its type is not
>>            // statically defined and can change at run time.
>>     x = 3; // x holds some type of number... usually an integer, but the
>>            // language may store all numbers as doubles or even strings.
>>
>> Now, suppose we have a program that contains essentially the following:
>>
>>     var x;
>>     x = 3;
>>
>> Is it possible for the compiler to optimize such code to find the least amount of data needed to represent x without issue? Yes or no?
> Yes, and in fact it's already done. Consider:
>
>     if (expr)
>     {
>         int a;
>         ...
>     }
>     else
>     {
>         int b;
>         ...
>     }
>
> In some C implementations, a and b have the same physical address; in others, they have distinct addresses. This appears unrelated, but it is relevant insofar as a and b have non-overlapping lifetimes.
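The same slot reuse can be probed (though never relied upon) in any language with scoped locals. A Rust sketch; whether the two addresses coincide depends entirely on the compiler and optimization level, so the final line is printed rather than asserted:

```rust
fn main() {
    let addr_a;
    {
        let a: i32 = 1;
        addr_a = &a as *const i32 as usize;
        println!("{}", a);
    } // `a`'s lifetime ends here...

    // ...so the compiler MAY (but need not) reuse its stack slot for `b`,
    // since the two lifetimes never overlap.
    let b: i32 = 2;
    let addr_b = &b as *const i32 as usize;
    println!("{}", b);

    // Implementation- and flag-dependent; either answer is conforming.
    println!("slot reused: {}", addr_a == addr_b);
}
```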
>> Is this a good thing? Yes or no?
> It's marginally good - it increases stack locality and makes things simpler for a register allocator. (In fact, all register allocators do that already; otherwise they'd suck.)
>> (I don't need and don't want any explanation)
> Too late :o).
>
> Andrei
To be honest, your reply seems to be the only one that attempts to discuss exactly what I asked; nothing more, nothing less. I do realize there was some confusion between what Crystal does and what I'm talking about... I still think the two are confused by some, and I'm not sure anyone quite gets exactly what I am talking about (which is not re-aliasing any variables, using a sort of variant type (directly at least), or having a multi-variable (e.g., Crystal)).
What would be nice is an experimental version of D where we could easily extend the language to try out such concepts, to see whether they truly are useful and how difficult they are to implement. E.g., I could attempt to add said "feature", it could be merged with the experimental compiler, and those interested could download the compiler and test the feature out... all without negatively affecting D directly. If such features could be implemented dynamically, it would probably be pretty powerful.
The example I gave was sort of the reverse: instead of expanding the type into a supertype, we are reducing it.

    float x;
    x = 3;

Here x could be stored as a byte, which would potentially be an increase in performance. Reducing the type can be pretty dangerous, though, unless it is verifiable. I'm somewhat convinced that expanding the type is almost always safe (at least in safe code), although not necessarily performant. IMO it makes auto more powerful in most cases, but only a test bed can really say how much.
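A hypothetical sketch (the function name is mine, not an existing API) of the value-range check such a narrowing optimizer would have to perform before shrinking a variable's storage:

```rust
// Report the smallest built-in integer type that can hold `v` --
// the decision a narrowing pass would make for each known assignment.
fn smallest_int_type(v: i64) -> &'static str {
    if i8::try_from(v).is_ok() {
        "i8" // one byte, as in the `x = 3` case above
    } else if i16::try_from(v).is_ok() {
        "i16"
    } else if i32::try_from(v).is_ok() {
        "i32"
    } else {
        "i64"
    }
}

fn main() {
    println!("{}", smallest_int_type(3));      // prints i8
    println!("{}", smallest_int_type(70_000)); // prints i32
}
```

The "verifiable" part is exactly the hard bit: the narrowing is only sound if every later use of the variable is also proven to stay within the reduced range.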