> Yes, I know, we have already discussed several models like that. But I think
> it's a good idea to re-examine them because I believe they are more
> attractive today.
Indeed, this has come up several times. It is attractive to think of
flattening entirely as a ‘storage class’, and fair to reexamine it (this also
came up in an internal discussion recently) but I think in the end this still
will be a choice that we regret.
> The main issue with the .val model is that it presents two *types* to the
> user, while what we really want is mostly to flatten the storage and have a
> precise method calling convention.
> Those two goals are not equal; the first is far more important than the
> second, to the point where the coding guideline proposed by Brian is to use
> .ref for parameters and .val for fields and arrays.
FTR, the motivation for the guideline here is “use .val where it makes the
most difference.” There’s nothing *wrong* with using val types on the stack,
you just don’t get the enormous payback you do with heap variables. But I can
imagine — especially in a specialized-generics world — that there is value to
using .val in APIs as well, because it carries the semantic “not null”
information as well as the flattening hint.
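For concreteness, the guideline might look something like this (a sketch in hypothetical Valhalla-style syntax; `Point` and `Path` are made-up names, not anything from the proposal):

```java
value class Point { int x; int y; }

class Path {
    // .val on fields and arrays: this is where flattening pays off,
    // and where the non-null constraint matters for heap layout
    Point.val start;
    Point.val[] waypoints;

    // .ref on parameters: no heap layout at stake, so the default
    // nullable reference form costs little
    void moveTo(Point.ref p) { /* ... */ }
}
```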
> We still need .val and .ref to be able to specialize generics, right? No, I
> don't think so; we technically do not have to pass a .val as type argument to
> be able to specialize a generic class, we just need to pass a type argument
> that can be flattened if possible.
Here’s where I disagree. If field declaration and array creation expressions
were the only places you needed to say .val, I’d be much more sympathetic to
the container-properties model. But in a world with specialized generics, we
want to flow the types throughout, not only to field layout, but flowing the
non-null constraint to the JIT, etc. The `T.flat` approach will feel like a
hack, because it is, and as an unbonus, people will forget almost all the time
because having to select a storage class for an abstractly typed variable will
feel unnatural. When I say ArrayList<Foo.val>, I want the properties of
Foo.val to flow to *all* the places where a T is being moved around.
(This scheme rests on a clever but implicit assumption: that `T.flat` really
means “as flat as T can be”, which for a ref, is “not at all.” It's clever, but
for this reason `T.flat` is kind of a misnomer.)
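To make the contrast concrete, here is a sketch (hypothetical syntax on both sides; `MyList` and `Foo` are illustrative names) of how the two models distribute the information:

```java
// Parametric model: saying ArrayList<Foo.val> once lets the properties of
// Foo.val (flattenable, non-null) flow to every use of T inside the class.
ArrayList<Foo.val> list = new ArrayList<>();

// Storage-class model: the generic author must remember T.flat at each
// declaration site, and every forgotten site silently falls back to refs.
class MyList<T> {
    T.flat[] elements;   // remembered here: flattened storage
    T scratch;           // forgotten here: plain nullable reference
}
```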
> we can write instead
> value class C {
> // ...
> }
>
> class Container<T> {
> private T.flat value;
Yeah, this is where you lose me. When you’re writing a generic class like
ArrayList<T>, you’re abstracted from the details of heap layout, and it seems
overwhelmingly likely you’d forget to say T.flat somewhere. It also feels very
“nonparametric”, because we’ve created a second, ad-hoc channel through which
information flows, and that channel is “bumpier.” But it's worse than that,
because there’s less type information in the program, and therefore the VM has
to make more conservative assumptions about nullity.
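As a concrete illustration of that conservatism (again hypothetical syntax; `Box` and `Foo` are made-up names):

```java
class Box<T> {
    T.flat stored;       // the storage class says "flatten if you can"
    T inFlight;          // but plain T carries no non-null information
}

Box<Foo.val> b = new Box<>();
b.inFlight = null;       // accepted: the type system has lost the constraint
b.stored = b.inFlight;   // the NPE only surfaces here, at the flattened write
```

Because `inFlight` is just `T`, the VM must treat it as nullable even when the type argument is `Foo.val`, and a null can travel that much further before failing.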
I get what you are trying to accomplish; the ref/val distinction feels like it
is almost something we can get rid of. But I think swapping it for a storage
class model is worse, because it is asking users to think about low-level
details in more places, rather than using types and having the information flow
with the types. And as you point out, it means there are more possible ways
nulls can get deeper into the system before NPEing.