there were a couple of issues Simon raised that I hadn't responded to in my earlier reply. since no-one else has taken them on so far, either, ..

- Haskell would need to be a lot more specific about exactly where
context reduction takes place.  Consider

   f xs x = xs == [x]

Do we infer the type (Eq a) => [a] -> a -> Bool, thereby committing
to a particular choice of instance?  Or do we (as GHC does) infer the
type (Eq [a]) => [a] -> a -> Bool, so that if f is applied at (say)
type Char, an Eq [Char] instance would apply?  GHC is careful to do
the latter.

my general idea about that would be never to commit unless we know it
is the only choice, which seems to be in line with what GHC is doing
in this case. of course, it follows that we'd like to be able to
specify choices unambiguously, to avoid delayed commits.
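
to make that concrete, here's a sketch of Simon's example (the CI
type and its instances are mine, purely for illustration; the exact
extension names have varied between GHC versions):

   {-# LANGUAGE FlexibleContexts, FlexibleInstances,
                OverlappingInstances #-}

   newtype CI = CI Char          -- a toy wrapper type

   instance Eq CI where
     CI x == CI y = x == y       -- elementwise equality

   -- a specialised list instance, overlapping the Prelude's
   -- generic 'instance Eq a => Eq [a]', with deliberately
   -- different behaviour
   instance Eq [CI] where
     xs == ys = length xs == length ys

   -- with the unreduced context Eq [a], applying f at CI picks
   -- the specialised instance above; eager reduction to Eq a
   -- would have committed f to the generic instance for all lists
   f :: Eq [a] => [a] -> a -> Bool
   f xs x = xs == [x]

here, f [CI 'a'] (CI 'b') yields True via the length-based instance,
whereas the eagerly reduced type would have given False.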

Concerning using the instance context, yes, it's attractive, but it
involves *search* which the current mechanism does not.  Presumably you
have in mind that the type system should "commit" only when there is
only one remaining instance declaration that can fit.  You need to be
very careful not to prune off search branches prematurely, because in a
traditional HM type checker you don't know what all the types are
completely.  And you need to take functional dependencies into account
during the search (but not irrevocably).   I have not implemented this
in GHC.  I don't know anyone who has.   I don't even know anyone who has
specified it.

search, yes, but with deterministic result (similar to HM inference).
so the main issue is that we need to be able to perform inferences
without committing to their conclusions, or to set up encapsulated
inference processes with their own assumptions. that isn't surprising,
given that we're dealing with implications, or type class functions,
where the usual proof rule is "if we can prove the conclusion assuming
the prerequisites, then we have proven the implication".

that may be substantially more complicated to implement, but it is
just what Prolog, or even simple HM type inference for functions, has
been doing for a long time. and it is a pain to see the current
situation, where Haskell implementations treat the conclusions as if
there were no prerequisites (Haskell: these instances are overlapping;
Programmer: no, they are not, just look at the code!).
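
a minimal example of the complaint (the class names are made up;
assume that, by construction, no type in the program inhabits both
Atom and Molecule):

   {-# LANGUAGE FlexibleInstances, UndecidableInstances #-}

   class Atom a
   class Molecule a

   class Describe a where
     describe :: a -> String

   -- rejected as "Duplicate instance declarations": instance
   -- selection matches the heads only, so the mutually exclusive
   -- contexts are never even consulted
   instance Atom a     => Describe a where describe _ = "atom"
   instance Molecule a => Describe a where describe _ = "molecule"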

can we agree, at least in principle, that in the long term this needs to change?

since the general implementation techniques aren't exactly new, are
there any specific reasons why they couldn't be applied to type
classes? we'd have a state for the constraint store, and a
backtracking monad with deterministic result for the inference, just
as we have for implementing HM inference (see the sketch below). if
we want a more efficient, more low-level implementation, we could use
the WAM's idea of variable trails: proceed as if there were no search,
but record all variable substitutions, so that we can undo them if
the current branch fails. or is there a pragmatic issue with current
type-class implementations, which have grown out of simpler type-class
beginnings, and have become so complex that they couldn't go in that
direction without a major rewrite?
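
to sketch the shape I have in mind (all names are mine, and
Constraint is a mere placeholder):

   import Control.Monad.State

   type Constraint = String        -- placeholder for real constraints
   type Store      = [Constraint]  -- the constraint store

   -- StateT Store [] combines state (the store) with backtracking
   -- (the list monad); each branch carries its own copy of the
   -- store, so failed branches are discarded without explicit undo
   type Infer a = StateT Store [] a

   assume :: Constraint -> Infer ()
   assume c = modify (c:)

   -- commit only when the search yields exactly one solution,
   -- i.e. the result is deterministic despite the search
   runDeterministic :: Infer a -> Store -> Maybe (a, Store)
   runDeterministic m s =
     case runStateT m s of
       [r] -> Just r               -- unique solution: commit
       _   -> Nothing              -- none, or several: don't

the WAM-style trail would then be the low-level refinement of the
same idea: a single mutable store, plus a log of substitutions to
undo whenever a branch fails.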

in the short term, I'd be quite willing to aim for a compromise, where
we'd not look at all constraints in the context, but just at a few specific ones, for which we know that the search involved will be very shallow.
whether to do that via strictness annotations in contexts, as Bulat has
suggested, or by reserving a separate syntactic position for constraints
known to have shallow proofs, is another question.

the outstanding example of this would be type inequalities, which I'd
really like to see in Haskell', because they remove a whole class of
instance overlaps. and with FDs, one can build on that foundation.
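
purely hypothetical syntax, to make the intent concrete (no such
'/~' constraint exists in Haskell or GHC today):

   -- hypothetical: 'a /~ Char' asserts that a is not Char, so the
   -- last two instance heads are non-overlapping by construction,
   -- with no recourse to overlap resolution needed
   class Pretty a where pretty :: a -> String

   instance Pretty Char where pretty c = [c]
   instance Pretty [Char] where pretty s = s
   instance (a /~ Char, Pretty a) => Pretty [a] where
     pretty xs = unwords (map pretty xs)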

I'm not sure I have a good handle on when or how searches could be
hampered by incomplete types. naively, I'd expect residuation, i.e.,
delaying partially instantiated constraints until their variables are
specific enough to proceed with inference; I think CHR already does
this. if that means that instance-context constraints cannot be
completely resolved without looking at concrete uses of those
instances, then we'd have a problem, but no more than at the moment.
and I suspect that problem will not be a showstopper. on the contrary,
it may help to keep those searches shallow.
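
as a toy model of that residuation step (the names are mine):
constraints whose variables aren't specific enough yet are kept as
residue, instead of being guessed at:

   -- 'trySolve' answers Just True/False once a constraint's
   -- variables are specific enough, and Nothing while they are
   -- not; undecided constraints are delayed, never guessed at
   residuate :: (c -> Maybe Bool) -> [c] -> Either c [c]
   residuate trySolve = foldr keep (Right [])
     where
       keep c rest = case trySolve c of
         Just True  -> rest            -- discharged, drop it
         Just False -> Left c          -- provably violated: report
         Nothing    -> fmap (c:) rest  -- residuate: delay it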

in my experience, it seems quite possible to arrange instance contexts
in such a way that even such incomplete resolution is sufficient to
show that they ensure mutual exclusion of their instances (via type
inequalities, type-level conditionals, FDs, closed classes, ..). and
that would be all that is needed at that point.

once we start looking, we could probably find more ways to help such
incomplete inferences along. e.g., if there were a built-in class
Fail a (built-in only so that the system could know there can be no
instances), then we could declare mutual exclusion like this:

   instance (Num a, Function a) => Fail a

[an exclusion which, incidentally, some folks might disagree with;-]

and, independently of the specific a, just by looking at the classes
mentioned in the instance contexts and the instances visible in the
current module, the system could infer that the following instance
constraints are mutually exclusive, hence that the instances cannot
be overlapping:

   instance (Num a, ..other constraints..) => MyClass a where ..
   instance (Function a, ..other constraints..) => MyClass a where ..

Oleg, e.g., has shown how such an is-a-function constraint could be
used to solve several problems, but his code depends on GHC-specific
tricks from HList, iirc.
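
for reference, the core of the trick looks roughly like this
(reproduced from memory, so details may be off; it needs GHC's
overlapping and undecidable instances):

   {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                FlexibleInstances, UndecidableInstances,
                OverlappingInstances, EmptyDataDecls #-}

   data HTrue
   data HFalse

   -- IsFunction a f holds with f = HTrue exactly when a is a
   -- function type; both heads are maximally general in f, so
   -- overlap resolution only ever compares the first argument
   class IsFunction a f | a -> f
   instance TypeCast f HTrue  => IsFunction (x -> y) f
   instance TypeCast f HFalse => IsFunction a f

   -- TypeCast, from HList: a delayed unification device that
   -- improves f only after an instance head has been selected
   class TypeCast   a b   | a -> b, b -> a     where typeCast   :: a -> b
   class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t -> a -> b
   class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t -> a -> b
   instance TypeCast'  () a b => TypeCast a b where
     typeCast x = typeCast' () x
   instance TypeCast'' t a b => TypeCast' t a b where
     typeCast' = typeCast''
   instance TypeCast'' () a a where
     typeCast'' _ x = x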

cheers,
claus

