Hello everyone,

I think I'm close to useful results on the instance restrictions.
First, there's an obvious extension to the Haskell 98 rule. The H98 rule says the instance head must be a type constructor applied to type variables, and the context may mention only those type variables. This gives a termination proof by counting constructors. If the rule is weakened to allow an arbitrary type expression in the head, and to require that the context mention only strict subexpressions of the head, the same proof still applies. I'm not sure how useful this is, but we might as well allow it.

Second, I have half a result in the direction of allowing the context to mention types that arise from applying type constructors used in the instance head. This requires accepting regular derivations, which means the compiler would need to track all previous goals while deriving an instance, and handle a second occurrence of a goal by producing a reference to the dictionary for the first occurrence (which may not be constructed yet), rather than blindly continuing the derivation.

First I will explain the proof method. It's related to structural induction, but not quite the same. Suppose we have a subexpression relation on type expressions such that every type expression has only a finite number of subexpressions. If instance contexts mention only subexpressions of the head, then searching for an instance for a type can generate at most (number of subexpressions) * (number of classes) distinct goals. Therefore, in finite time either the derivation will fail, or we will produce a regular derivation. Put another way, we only try to derive an instance the first time it arises as a goal, so each time we apply an instance rule there is one less goal in the pool of possible goals, which must eventually be exhausted.

The syntactic subexpression relation obviously has these properties, but it's often useful to refer to types that show up when we apply type constructors. For example, my case and a simplification of Ashley's:

  data Mu f = In (f (Mu f))
  instance C (f (Mu f)) => C (Mu f)

(A concrete sketch of the recursive derivation this produces is given below.) On the other hand, we can't unfold all type constructors, because some types are irregular: we encounter an infinite number of distinct types while expanding the type constructors.

Define a kind-indexed family of predicates on type constructors, R_K(T), which holds if T :: K, T is regular (including expanding the insides of lambdas), and, when K = K1 -> K2, R_K2(T t) holds for every t such that R_K1(t) holds. Say a type is regularity-preserving if it satisfies the predicate corresponding to its kind. Any type expression built entirely from regularity-preserving type constructors will be regular. I think that a subexpression relation that allows expanding applications of regularity-preserving type constructors will give any type a finite number of subexpressions, but I don't know enough about the structure of regularity-preserving type constructors to prove it.

The missing half here is an algorithm for testing whether a type constructor is regularity-preserving. For this, the body of the type constructor can be simplified to just the type constructor applications it contains. Apply the type constructor to skolem arguments, and check whether the resulting tree is regular. I don't know how to do this.

Another approach is to draw out a dependency graph between type constructors, with an edge from A to B for each use of B in the definition of A, labeled with the arguments used (a rough sketch of this representation is also given below).
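To make the Mu example above concrete, here is a minimal sketch of the recursive derivation, assuming GHC with FlexibleContexts and UndecidableInstances (which, as far as I know, already builds this kind of recursive dictionary); the class C and its size method are made up purely for illustration:

  {-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

  data Mu f = In (f (Mu f))

  class C a where
    size :: a -> Int

  -- The instance from above: the context mentions f (Mu f), which is not a
  -- syntactic subexpression of the head Mu f.
  instance C (f (Mu f)) => C (Mu f) where
    size (In x) = 1 + size x

  instance C a => C [a] where
    size xs = 1 + sum (map size xs)

  -- Deriving C (Mu []) needs C [Mu []], which needs C (Mu []) again; the
  -- second occurrence of the goal is answered by a reference back to the
  -- dictionary already under construction, giving a regular derivation
  -- instead of an infinite one.
  test :: Int
  test = size (In [In [], In []] :: Mu [])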
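And here is a rough sketch of the labeled dependency graph just described; everything in it (TyExp, Edge, step, the Irreg example) is a made-up illustration, not a worked-out algorithm:

  -- Hypothetical representation of type expressions and labeled edges.
  data TyExp = TyVar String            -- a parameter of the defining constructor
             | TyApp String [TyExp]    -- a type constructor applied to arguments
    deriving (Eq, Show)

  -- An edge A -> B, labeled with the arguments B is applied to in A's body,
  -- written in terms of A's parameters.
  data Edge = Edge { from :: String, to :: String, label :: [TyExp] }

  -- Moving along an edge: substitute the current arguments for the source
  -- constructor's parameters in the edge label, giving the arguments at the
  -- next node.
  step :: [String] -> [TyExp] -> Edge -> [TyExp]
  step params current e = map subst (label e)
    where
      env = zip params current
      subst (TyVar v)    = maybe (TyVar v) id (lookup v env)
      subst (TyApp c ts) = TyApp c (map subst ts)

  -- data Mu f = In (f (Mu f))                    gives  Mu -> Mu       labeled [f]
  -- data Irreg a = Leaf a | Node (Irreg (a, a))  gives  Irreg -> Irreg labeled [(a, a)]
  muEdge, irregEdge :: Edge
  muEdge    = Edge "Mu"    "Mu"    [TyVar "f"]
  irregEdge = Edge "Irreg" "Irreg" [TyApp "(,)" [TyVar "a", TyVar "a"]]

Iterating step around the Mu self-loop leaves the argument list fixed at [f], while iterating it around the Irreg self-loop keeps growing it ((a, a), then ((a, a), (a, a)), and so on), which is the kind of infinite family of types the search described next would have to detect.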
Then the question is whether, starting from our type applied to type variables, we can find some path through the graph that generates an infinite number of types, where we keep track of the current node and the current arguments, and modify the arguments as directed by the edge's label when moving along it. I don't know whether such a search can even find a tail-repeating path that witnesses the irregularity, let alone a family of paths that can be tested. Any assistance here would be appreciated.

Thanks
Brandon