I don't have a lot of time, so my answer will be a bit terse. I'll mostly stick 
to clarifying what was not clear from the manual.

**1:**

Well, I respectfully disagree. Indeed, this rule is very handy: it leads to
shorter code with a greatly increased signal-to-noise ratio. Your argument about
macros and templates is a bit moot, because this is not a special syntax, but
merely special handling of symbols within the existing syntax. You can still
generate the usual `nkCall`, `nkIdent` and `nkSym` nodes from macros and
templates.
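
For instance, a macro can keep producing ordinary call nodes exactly as before;
here is a throwaway sketch with nothing concept-specific in it:

    import macros

    # an ordinary macro building a plain nkCall node out of an nkIdent;
    # the concept rules change nothing about this
    macro callIt(fn, arg: untyped): untyped =
      result = newCall(fn, arg)

    echo callIt(len, "hello")   # expands to len("hello") and prints 5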

Regarding your question about the explicit types rule, you must have missed the
sentence in the manual immediately following the one you quoted:

> The named instance of the type, following the `concept` keyword is also 
> considered an explicit `typedesc` value that will be matched only as a type.

Furthermore, the convenience rule applies only to types appearing in proc param 
positions. The right-hand side of the `is` operator always expects a type.
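
To make the last point concrete, here is a tiny sketch in the concept syntax of
the current manual (names made up, illustrative only):

    type
      HasLen = concept c
        # the right-hand side of `is` always names a type
        c.len is int

    proc describe(x: HasLen): string =
      "a value with " & $x.len & " elements"

    echo describe(@[1, 2, 3])   # seq[int] satisfies HasLen
    echo describe("hello")      # and so does string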

**2:**

`basis` is an upcoming type trait for which I haven't settled on a name yet. It
takes an instantiated type such as `Matrix[int, int]` and returns the generic
type in its uninstantiated form (i.e. `Matrix`). From the particular example it
should be clear why this is necessary here. `basis` is probably not the best
name and I'm currently considering alternatives such as `GenericHead` or
simply `uninstantiated`.
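
For illustration only, and assuming the trait ends up exported from
`typetraits` under the `genericHead` spelling, usage would look roughly like
this:

    import typetraits

    type
      Matrix[T, U] = object
      MatInst = Matrix[int, int]
      # the trait strips the instantiation, leaving the generic type itself
      MatHead = genericHead(MatInst)

    # MatHead now denotes the uninstantiated Matrix
    doAssert MatHead is Matrix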

Regarding your next comment, clearly the manual provides only examples, not the
complete concept definitions that may appear in the standard library. With that
said, I would recommend keeping concepts small in general and requiring
conformance to multiple concept types in algorithms and libraries where this
makes sense.
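
For example, an algorithm can require several small concepts at once by
combining them with `and`. A sketch in the current concept syntax (all names
made up):

    type
      WithLen = concept c
        c.len is int
      Indexable = concept c
        c[0]

    # a proc can require conformance to several small concepts at once
    proc lastItem(c: WithLen and Indexable): auto =
      c[c.len - 1]

    echo lastItem(@[1, 2, 3])   # 3
    echo lastItem("abc")        # c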

**3:**

Regarding your concerns in the second paragraph, the compiler will infer the
types to the best of its ability. Luckily, we have a very rich type system that
can express all kinds of types. It's not illegal for a concept parameter type
to be inferred to what we call a metatype (a type class, not a concrete type).
The semantics of these are well-defined in other parts of the language, and if
your algorithm needs to rule them out, this is easily achieved: you can use the
inferred type in a context requiring a concrete type, or you can add a boolean
predicate using another to-be-added type trait such as `isConcrete`.
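
Here is a trivial illustration of the difference, with nothing
concept-specific in it:

    type Number = int or float   # a metatype (a type class), not a concrete type

    proc double[T: Number](x: T): T =
      # at each call site T is inferred to a concrete type
      x + x

    echo double(3)     # T is int
    echo double(2.5)   # T is float

    # a context that requires a concrete type rejects the metatype itself;
    # the following would not compile:
    # var n: Number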

**4:**

I've omitted the mentioned proc definitions for brevity. There are comments 
right above the usages that provide some explanation:
    
    
    # the following would be an overloaded proc for cstring, string, seq and
    # other user-defined types, returning either a StringRefValue[char] or
    # StringRefValue[wchar]
    return makeStringRefValue(x)
    

We already have experience with a similar feature in the compiler: the 
``varargs[string, `$`]`` param type. The capabilities here are similar.
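
As a reminder, that mechanism looks like this (`logAll` is just a made-up
name):

    # every argument is converted with `$` at the call site, so the proc
    # body only ever sees plain strings
    proc logAll(parts: varargs[string, `$`]) =
      for p in parts:
        stdout.write p
      stdout.write '\n'

    logAll("x = ", 42, ", pi = ", 3.14)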

**5:**

The VTable types are similar to type classes in Haskell or Rust's [trait 
objects](https://doc.rust-lang.org/book/trait-objects.html). There are no 
run-time checks of any kind. Let's say a proc expects a VTable type. When you 
pass a reference to a concrete type to the proc, the compiler checks whether 
the concrete type satisfies the requirements of the VTable type and converts 
the reference to a VTable value at the call site. From this point on, the 
VTable value is just another concretely typed value that can be stored in a 
field, in a polymorphic collection and so on.
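
The feature is not implemented yet, but the mechanism is easy to emulate by
hand to show roughly what the compiler will do for you (the names below are
made up for the example and this is not the proposed syntax):

    type
      DrawableVT = object
        # one closure per operation required by the would-be VTable type
        drawImpl: proc (): string

    proc toDrawable[T](x: T): DrawableVT =
      # the compiler would perform this check and conversion implicitly at
      # the call site; done by hand, the "check" is simply that x.draw()
      # compiles for the concrete type T
      DrawableVT(drawImpl: proc (): string = x.draw())

    type Circle = object
      r: float

    proc draw(c: Circle): string = "circle with radius " & $c.r

    # after the conversion the value is an ordinary, concretely typed value
    # that can be stored in fields, collections and so on
    var shapes: seq[DrawableVT]
    shapes.add toDrawable(Circle(r: 2.0))
    for s in shapes:
      echo s.drawImpl()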
