> `{.prototype(extensible).}` These could also be valid inside the defining 
> module.

Yes, but then you have submodules. There is no way to make this explicit in current Nim; it's not in the language yet. Your syntactic extension looks reasonable.

> A general thought: prototypical/exemplary definitions seem to go against most 
> peoples' intuition in the context of statically typed languages. It feels 
> JavaScript-y.

A prototype in JavaScript is a convenient way to construct local data and local 
functions via delegate bindings. The prototype provides the constructors. You 
don't need to know about the concrete types the prototype works with; the 
types remain abstract for you. Prototypes can be regarded as (sub)modules. 
Let's say you are writing a user app Uapp and you have a prototypical module 
Pmodule providing an abstract type T. So you have the pair 
(Uapp, Pmodule[exT]). JS will bind Uapp to the prototype. The "exT" stands for 
"existentially bound"; this is the abstraction. Since you can never define a 
free-floating [exT] within JS, you always have a monotyped implementation type.

> Comparable to the witness construct you proposed

The witness. (It is called this way; not my invention, though...) At this point, 
the abstraction is a mental model only. JS doesn't know about the exT, nor can 
it declare it. JS determines (standard) types at runtime and is typeless 
otherwise. Now the other way round, (Uapp[exT], Pmodule), where [exT] is abstract 
for Pmodule. JS is typeless, so you can always pass your own types to Pmodule. 
Because the prototype is complete (by definition), Pmodule will work "out of 
the box" with your exT. Moreover, due to JS's dynamic contexts (the key feature 
of JS), and runtime name lookup in particular, you can overload Pmodule's 
functions and therefore "extend" it.

Now to Nim. Consider the Nim pair (Uapp.nim, Pmodule.nim) with the two abstract 
cases (Uapp, Pmodule[exT]) and (Uapp[exT], Pmodule). Let us postpone the former 
for good reasons. That leaves (Uapp[exT], Pmodule), and this is idiomatic Nim; 
it should always work, even if in practice it does not. {.prototype.} will help 
here. If some overloads are specified, it will check the overloads. The most 
general overload is the unit type - it is completely abstract. If Uapp wants to 
specify its own overload, it can do so by "extending" the prototypically 
bound type, e.g. Gstack[U], with its own exT and an appropriate implementation. 
Again, {.prototype.} will check this. If something is missing, it will tell you 
what to do.
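
To make this concrete, here is a minimal sketch of the (Uapp[exT], Pmodule) case 
using only today's Nim generics. Gstack, push, top and MyItem are illustrative 
names of my own; the {.prototype.} pragma does not exist yet, so the checks 
described above remain hypothetical.

```nim
# Pmodule: a generic module playing the role of the "prototype".
type Gstack[U] = object
  data: seq[U]

proc push[U](s: var Gstack[U], x: U) =
  s.data.add x

proc top[U](s: Gstack[U]): U =
  s.data[^1]

# Uapp: supplies its own witness type exT.
type MyItem = object
  name: string

var s: Gstack[MyItem]
s.push MyItem(name: "a")
echo s.top.name   # works "out of the box", as long as no hidden
                  # constraint on U (e.g. `$`, `==`, `<`) sneaks in
```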

Why are these checks useful? An existentially bound type together with its 
implementation is a dependent sum type: a pair (a, B(a)) where a type `a` 
produces an implementation type B(a) that contains `a`. The unit type aside, 
this will not work in general. In fact, you can expect to get (a and C(a), 
B(a)), where C(a) extends `a` with constraints, (a and C(a)) being a subtype of 
`a`. If C(a) is already in the context, you are lucky. If not, compilation will 
fail. Another issue: since the generic parameters are analyzed pointwise, 
function by function only, C(a) is not stable; it depends on how many B(a) are 
involved (there are no module-wide generic annotations, which is a severe problem).
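
A small illustration of this pointwise analysis, assuming nothing beyond stock 
Nim: each generic proc smuggles its own requirements on T into C(a) through its 
body, so the effective constraint depends on which procs get instantiated.

```nim
proc show[T](x: T) = echo x       # body implicitly requires `$` for T
proc smallest[T](a, b: T): T =    # body implicitly requires `<` for T
  if a < b: a else: b

type Point = object
  x, y: int

show Point(x: 1, y: 2)            # fine: system provides `$` for objects
# echo smallest(Point(x: 1, y: 2), Point(x: 3, y: 4))
#   ^ fails to compile: Point has no `<`; the constraint is visible
#     nowhere in the signature, only in the body
```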

In this regard, concepts are a misconception. They add new constraints (they 
are destructors). They do define new subclassing properties (see the examples 
in the concept RFC), which is very nice, but they don't remove the implicit 
constraint subtyping. If they could, they would have to construct the 
appropriate context so that (a and C(a)) always matches the context.
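
For comparison, a sketch with a current (old-style) Nim concept; Comparable, 
smallest and Opaque are made-up names. The concept states the constraint 
explicitly, but it still only filters candidate types - it does not construct 
the missing context.

```nim
type
  Comparable = concept x, y
    (x < y) is bool

type Opaque = object

proc smallest[T: Comparable](a, b: T): T =
  if a < b: a else: b

echo smallest(3, 5)                 # int satisfies the concept
# echo smallest(Opaque(), Opaque()) # rejected: Opaque lacks `<`; the concept
#                                   # only filters, it cannot supply the
#                                   # context that the body needs
```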

{.prototype.} would clear up the mess. It provides a (very) basic framework for 
type abstraction. It defines the set of `a` valid for B(a) via a common supertype 
"Shape". So it gives a pair (a: Shape, B(a)) without further constraints. BTW, it 
can help with vtables too.
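
For contrast, here is how the "common supertype Shape" role is usually played 
today, with inheritance and methods; this is the vtable route, whose price is 
ref semantics and runtime dispatch.

```nim
type
  Shape = ref object of RootObj
  Circle = ref object of Shape
    r: float

method area(s: Shape): float {.base.} = 0.0
method area(c: Circle): float = 3.14159 * c.r * c.r

let x: Shape = Circle(r: 2.0)   # concrete type hidden behind the supertype
echo x.area()                   # dispatched at runtime via a vtable-like
                                # mechanism
```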

I will not claim that {.prototype.} is a perfect solution, but it is a solution 
compliant with Nim - and a rather cheap one, cheaper than concepts.

> If it was easier to experiment with types, people could get creative and I 
> believe some substantial practical progress could be made. A DSL for type 
> calculus and transformation...

Creative and complicated... Well, Haskell relies on higher-order unification 
(restricted to keep it decidable) for typeclass reasoning. It might be possible 
to do this more simply. Perhaps Araq could demonstrate how monads get verified 
with the current concept design. I tried it, but I could not figure out how to 
name type abstractions like (U->V)->V, so I failed.

The last round now: (Uapp, Pmodule[exT]). Here, the user app doesn't know the 
witness type exT. It knows about the supertype only; let's call it Shape again. 
So we have (Shape, B(a: Shape)). Example: Uapp can define 
`var x: Shape = Pmodule.new()`, where x gets "opened" with the type Shape. From 
then on, any specific operation on x has to be done with functions imported 
from Pmodule. Again, we could use {.prototype.} for this. Example with a 
`string` witness type:

`Exttype {.prototype.} = string`

The implementation type is a monotype and can therefore be used in Uapp for 
declarations. As said, any function that gets applied to Exttype has to be 
taken from Pmodule, for the sake of type abstraction. No function mentioning 
Exttype may be declared in Uapp, and the compiler should check this.
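
A rough emulation of this with today's Nim, using `distinct` to hide the 
`string` witness. The file names pmodule.nim and uapp.nim and the proc names 
are illustrative; the check that Uapp declares no functions over Exttype is 
exactly what {.prototype.} would add and what plain Nim does not enforce.

```nim
# pmodule.nim
type Exttype* = distinct string          # the `string` witness stays hidden

proc newExt*(): Exttype = Exttype("")

proc put*(x: var Exttype, s: string) =
  x = Exttype(s)

proc show*(x: Exttype): string = string(x)
```

```nim
# uapp.nim
import pmodule

var x = newExt()       # x is "opened" with the abstract Exttype
x.put "hello"
echo x.show            # only Pmodule's procs apply; `x.len` or `x & "!"`
                       # are rejected because Exttype is distinct from string
```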

TL;DR: Generic annotations [T] introduce implicit and unpredictable 
constraints/downcasts in Nim that make type reasoning infeasible. Downcasts can 
be removed with abstract types. Abstract types are missing, but they can be 
brought to Nim without any runtime cost - in particular, no virtual functions 
are needed.
