Do you have a revised set of restrictions on the form of instances?

sure, but you won't like them any better now than the last time I mentioned them :-)

- as for confluence, none of the restrictions seem necessary

   without FDs, type class inference proceeds by _matching_
   instance heads against constraints, and without overlapping
   instances, there cannot be more than one matching head. so
   the only divergence comes from reductions on separate parts
   of the constraint store; those reductions are independent,
   so the divergence is temporary.
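   a minimal sketch of head matching, with a made-up class
   (the names are just for illustration): each constraint
   matches at most one instance head, so there is nothing to
   choose between.

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- hypothetical class for illustration: without FDs, inference
-- matches constraints against instance heads, and since the
-- heads below do not overlap, each constraint matches at most one
class Describe a where
  describe :: a -> String

instance Describe Bool where
  describe _ = "a Bool"

instance Describe [Char] where
  describe _ = "a String"

main :: IO ()
main = do
  putStrLn (describe True)   -- matches only the Bool head
  putStrLn (describe "hi")   -- matches only the [Char] head
```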

   with FDs, type class inference adds _unification_, but only
   for FD range parameters. instance heads that differ only in
   FD range positions are now overlapping, but only as far as
   improvement is concerned. note also that the resulting
   overlap in the left hand side of rules is over variables, so
   no variable instantiation can disable such rules. to be
   initially diverging, such overlapping improvement rules must
   have different unifications as their right hand sides.
   memoisation ensures that any improvement that was possible
   at any point in the reduction process remains possible
   throughout. so following either of two conflicting
   improvements leaves the other enabled, ensuring failure of
   the constraint store instead of non-deterministic results.
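   a small illustration of improvement by unification, using the
   familiar Collects class (the particular method names are just
   for illustration):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances #-}

-- the FD "ce -> e" licenses improvement: once ce is known,
-- the element type e is obtained by unification, not matching
class Collects e ce | ce -> e where
  empty  :: ce
  insert :: e -> ce -> ce

instance Collects a [a] where
  empty  = []
  insert = (:)

-- annotating the collection type as [Int] is enough: the FD
-- improves e to Int, so the numeric literals are resolved
test :: [Int]
test = insert 1 (insert 2 empty)

main :: IO ()
main = print test   -- [1,2]
```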
   consistency condition: neither necessary nor sufficient

   coverage/bound variable condition: as far as confluence is
   concerned, these were possibly motivated by the wish to have
   variable-restricted CHR, without which arbitrary rule systems
   might introduce non-confluence by following different
   instantiations of the fresh variables, through unification
   with different left hand sides. but due to the peculiarities
   of Haskell's type classes, such instantiations do not appear
   to be a problem, as sketched above. (as far as termination is
   concerned, I remain unconvinced that these two conditions are
   the right way to exclude problematic cases, as they exclude
   many valid programs just because these might be invoked with
   problematic queries. in such cases, I'd prefer to exclude the
   problematic queries.)
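   for instance, the well-known Mul example (illustrative only)
   fails GHC's coverage condition, yet every ordinary query
   terminates; it is only certain pathological queries that send
   improvement into a loop, which is what I would rather rule
   out instead.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

class Mul a b c | a b -> c where
  mul :: a -> b -> c

instance Mul Int Int Int where
  mul = (*)

-- this head fails the coverage condition: the range variable c
-- does not occur in the domain positions a, [b] of the head,
-- so GHC demands UndecidableInstances -- even though ordinary
-- queries like the one in main are perfectly well-behaved
instance Mul a b c => Mul a [b] [c] where
  mul x = map (mul x)

main :: IO ()
main = print (mul (2 :: Int) [1, 2, 3 :: Int])   -- [2,4,6]
```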

   early checking for superclass instances: also does not seem
   to be needed any more. since the superclass constraints
   will be introduced as proof obligations in the same step
   that introduces infer_Class and memo_Class constraints,
   they cannot be "forgotten", and inference cannot terminate
   successfully if those instances are missing (they may, however,
   be provided later than now, accepting more valid programs).

(sigh, I'm sure I've written this before, as so many other things here.. I may well be wrong in what I'm saying - if that is the case, please tell me so, but _please_ don't keep asking me to repeat myself..)

- as for termination, the checks not related to the conditions above
   are a useful start, but they are incomplete, and overly restrictive.
   in fact, we know that we are dealing with an undecidable problem
   here, so no finite system of restrictions will be able to do the
   job statically. we've already discussed various improvements to
   the current set of checks, as have others, here and elsewhere.

   my preference here is unchanged: the variant without static
   termination checks needs to be the basis, and it needs to be
   defined clearly, so that all implementations of Haskell' accept
   the same programs and give the same results. the reason for
   this preference is my opinion that Haskell has become too
   complex a language already. heaping special cases on top of
   restrictions that are necessarily incomplete adds complexity
   to the language, while restricting its expressiveness.

   on top of that basis, we should have termination conditions,
   and we should keep trying to push the envelope by making them
   encompass more valid programs; but that also means that these
   termination conditions need to keep evolving! so we should
   have warnings if the current set of termination checks
   cannot statically determine whether or not some type class
   program will terminate, and by all means, have a flag to decide
   whether or not those warnings should become error messages
   (in teaching contexts, or in software development processes
   based on strict coding-standard conformance, turning those
   warnings into errors might be the right default).

   but in the near future, I do not expect to see either of the
   following two: a set of termination conditions that will cover
   and permit all practically relevant programs, or a correct
   and terminating, practically motivated type class program
   that will run much longer than it takes me to read and
   understand the warnings produced by the current set of
   termination checks.

   oh, two more things: if type class inference fails to terminate,
   I never get to run my program, so that is perfectly type safe;
   and if, as will eventually happen, parts of my terminating type
   class programs are beyond the abilities of the current set of
   termination checks, I do not want to have to resort to a
   module- or project-wide license to get my code through.

   I'm no expert in this, but I believe that those folks who are
   concerned with termination proofs accept it as a matter of
   routine that tailoring a termination ordering for each particular
   problem is an important part of their business. but if Haskell'
   does not support this, and still insists on doing mandatory
   termination checks by default, it should at least allow me to
   indicate which of the type classes/instances I want to take
   responsibility for (via a pragma, most likely), using its checks
   on all others.
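   for comparison, GHC's current escape hatch is exactly the
   module-wide kind: the UndecidableInstances pragma lifts the
   termination checks for every instance in the module at once
   (the class and instance below are made up for illustration):

```haskell
{-# LANGUAGE FlexibleInstances, FlexibleContexts,
             UndecidableInstances #-}
-- UndecidableInstances is a module-wide license: it disables
-- the termination checks for *all* instances in this module,
-- not just the one instance that actually needs it

class C a where
  size :: a -> Int

-- this instance fails GHC's termination check (the context
-- Show (f a) is not smaller than the head), so the pragma above
-- is required -- even though this particular instance is harmless
instance Show (f a) => C (f a) where
  size = length . show

main :: IO ()
main = print (size (Just (1 :: Int)))   -- length "Just 1" = 6
```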

cheers,
claus

_______________________________________________
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime
