@Araq I feel that restricting `Concept[T]` to containers is very limiting and will in fact make it impossible to define many abstractions.
Let me give an example. In [emmy](https://github.com/unicredit/emmy) I have more or less the following definition of a field:

```nim
type Field = concept x, y
  x + y is type(x)
  zero(type(x)) is type(x)
  -x is type(x)
  x - y is type(x)
  x * y is type(x)
  id(type(x)) is type(x)
  x / y is type(x)
```

Here a field is defined as a type that admits the usual four operations and their identity elements; no need for generic concepts here (and in fact it works, more or less).

Say that now I want to define a vector space over a field. The obvious definition would be

```nim
type VectorSpace[K] = concept x, y
  x + y is type(x)
  zero(type(x)) is type(x)
  -x is type(x)
  x - y is type(x)
  var k: K
  k * x is type(x)
```

and this requires generic concepts.

Now the problem is: how does one infer `K` in this example (or `T` in any of the examples above)? My proposal would be to make the compiler only test for concepts when all types are fully specialized. This is the same as what happens for generics right now. So there would be no way to check that a type `T` belongs to `VectorSpace[K]` for some `K`, but we could check that it belongs to `VectorSpace[float]`.

We can still write generic functions with such a constraint, e.g. `T: VectorSpace[K]` (see the sketch at the end of this comment). The only problem is that the compiler will bail out at this point because `K` cannot be inferred. This can be easily fixed by

* spitting out a suitable error message (`K cannot be inferred in concept ... defined at line ...`)
* letting the user explicitly opt into the concept, say with a syntax like `static: assert T is VectorSpace[float]` or whatever we come up with. The compiler will then accumulate such facts and use them to resolve `K` in such situations as above.

What do you think?
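To make the inference problem concrete, here is a rough sketch of what I have in mind. `Vec2` and `axpy` are made-up names for illustration, and the last part relies on the proposed semantics, so none of this is meant to compile today:

```nim
# Hypothetical example type that should satisfy VectorSpace[float].
type Vec2 = object
  a, b: float

proc `+`(x, y: Vec2): Vec2 = Vec2(a: x.a + y.a, b: x.b + y.b)
proc `-`(x: Vec2): Vec2 = Vec2(a: -x.a, b: -x.b)
proc `-`(x, y: Vec2): Vec2 = x + (-y)
proc zero(T: typedesc[Vec2]): Vec2 = Vec2()
proc `*`(k: float; x: Vec2): Vec2 = Vec2(a: k * x.a, b: k * x.b)

# A generic function constrained by the generic concept: to check
# `T: VectorSpace[K]` the compiler would have to infer K, which is
# exactly the problem described above.
proc axpy[K; T: VectorSpace[K]](k: K; x, y: T): T =
  k * x + y

# Proposed opt-in: record the fact that Vec2 is a VectorSpace[float],
# so that K can be resolved to float when axpy is instantiated at Vec2.
static: assert Vec2 is VectorSpace[float]

let v = axpy(2.0, Vec2(a: 1.0, b: 2.0), Vec2(a: 0.5, b: 0.5))
```

Without the `static: assert` line the compiler would simply report that `K` cannot be inferred; with it, the accumulated fact lets it resolve `K = float` at the call site.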
