My first impression of this is very positive. It would be basically everything
I'd want concepts to do, and slightly more. My biggest concern would be the
additional complexity added to the language, but as long as it doesn't require
sweeping modifications to symbol matching that should be fine?
About the notation thingy - it would be viable to keep special notations to a
minimum, but that would introduce a significant amount of noise, which would
probably defeat the purpose of doing so. Example:
type MyConcept[T] = concept
  var
    c: MyConcept[T]
    t: T
  c.foo() is T
  c.bar(t) is T
I don't think that is more readable.
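For contrast, here is roughly how I understand the proposal's signature-based
notation would express the same requirements (my reading of it, so the details
may well be off):

type MyConcept[T] = concept
  proc foo(c: Self): T
  proc bar(c: Self, t: T): T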
I'd like your thoughts on meta annotations. Should concepts be able to limit
what a proc may raise, or force it to have no side effects? What about
limiting whether an argument in a proc call is mutable? Or would that already
be covered by term rewriting macros?
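To make that concrete, something along these lines is what I have in mind -
purely made-up notation on top of the proposal, reusing the existing
raises/noSideEffect pragmas and var parameters, not anything that actually
exists:

type Logger = concept
  # hypothetical: the matching proc must raise nothing and be side-effect free
  proc log(c: Self, msg: string) {.raises: [], noSideEffect.}
  # hypothetical: the argument must be accepted as mutable
  proc fill(c: Self, buf: var seq[byte])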
The second feature I'd like has more to do with concept usage. If you require
a parameter to match multiple concepts, you currently have to introduce a
separate concept that combines (refines?) them. Having some notation for
combining them inline, like proc foo(bar: Copyable and Comparable), might be
useful. It would lose a lot of the power of a full concept definition, though,
and concept definitions are pretty compact, so there might not be any need for this.
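For reference, with today's concepts the two options would look roughly like
this (Copyable and Comparable are just made-up illustrations):

type
  Copyable = concept x
    copy(x) is type(x)
  Comparable = concept x
    (x < x) is bool
  # option 1: a separate concept that combines (refines) the two
  CopyableComparable = concept x
    x is Copyable
    x is Comparable

# option 2: combining them inline at the use site
proc foo(bar: Copyable and Comparable) = discard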
Fairly minor side note: I would prefer c.keys() is seq[K], or even c.keys()[int]
is K.
@mora I don't really understand your last example, mostly because I don't know
what the definition of myX is supposed to be. You had the definition of
MapComparable as

type MapComparable[K, V] = concept c
  c[K] = V
  K < V
so for type(myX) to count as MapComparable[int, int] there would have to be a
proc `[]=`(self: type(myX), key, value: int) and a `<`(a, b: int): any, which
obviously holds. This might be an example of why forcing myX to be a var arg in
the concept might be useful, but I don't see why running f would be wrong if
that is the case? I very well might have misunderstood your example, though.
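Just to spell out how I read the matching rules (MyTable here is a made-up
stand-in for whatever type(myX) is, not something from your example):

type
  MapComparable[K, V] = concept c
    c[K] = V    # needs a proc `[]=`(self: type(c), key: K, value: V)
    K < V       # needs some `<` between K and V

  MyTable = object
    data: seq[(int, int)]

# satisfies the `[]=` requirement for K = V = int; note the var self,
# which is exactly where the "var arg in the concept" question comes in
proc `[]=`(t: var MyTable, key, value: int) =
  t.data.add((key, value))

var myX = MyTable()
myX[1] = 2   # compiles, and `<`(a, b: int) is built in, so both requirements hold for int, int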