When I said "type class acrobats" I didn't mean to make fun
of people like you who use type classes in a very skillful way.
Though I did try (successfully :) ) to be provocative.

I agree with you there's a real need for unrestricted type class
programming. After all, the Hindley/Milner subset of Haskell already
allows us to write non-terminating, "buggy" programs. So, what's
the difference if we allow the same at the level of types!?
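
For instance, the following is already a well-typed but non-terminating
value-level program (a minimal sketch, the name is of course just
illustrative):

loop :: a
loop = loop

Unrestricted type class programming merely admits the same kind of
unbounded recursion during instance resolution.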

The trouble I have with the current practice of type class programming
is that it's not very principled (yes, I'm being provocative again).
People learn how to write type class programs by observing the
behavior of a specific implementation and sometimes the results
are quite unexpected.

I have mentioned CHRs (Constraint Handling Rules) before as a very
useful tool for studying FDs. In fact, CHRs go way beyond FDs; see
"A Theory of Overloading". I claim that when it comes to unrestricted
type class programming, CHRs are the way to go. CHRs have a clean
semantics and precisely explain the meaning of type class programs.

Martin



Claus Reinke writes:
 > > I'm forwarding an email that Martin Sulzmann asked me to post on his
 > > behalf.
 > 
 > thanks. well, that is one view of things. but it is certainly not the only 
 > one.
 > 
 > first, about those "acrobatics": my type-class-level programs tend to
 > start out as straightforward logic programs over types - no acrobatics 
 > involved, easy to understand. the acrobatics start when I'm trying to 
 > fit those straightforward programs into the straitjackets imposed
 > by current systems - be they explicit restrictions or accidental 
 > limitations. wrestling them leads to the need to introduce complex 
 > work-arounds and implementation-specific tricks into the formerly
 > clean logic programs.
 > 
 > the restrictions tend to be artificial and full of odd corners, rather 
 > than smooth or natural, and the limitations tend to be the result of 
 > not thinking the problem through. so one could say that neither of
 > the levels proposed by Martin is ready for standardization. or, one
 > could be pragmatic and standardize both levels, well knowing that
 > neither is going to be the final word.
 > 
 > second, about the proposal to "ignore overlapping instances for the
 > moment": that moment has long passed - iirc, undecidable and 
 > overlapping instances have been around for longer than the current
 > standard Haskell 98. they provide recursion and conditionals (via
 > best-match pattern-matching) for logic programming over types.
 > they are overdue for inclusion in the standard, and it is time to stop 
 > closing our eyes to this fact.
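 > 
 > to make that concrete, here is a rough sketch of the kind of
 > type-level logic program I have in mind - type-level booleans and
 > list membership, with best-match overlap providing the conditional
 > (the names are purely illustrative, and in practice one needs the
 > usual TypeEq/TypeCast tricks to make this robust):
 > 
 > -- needs MultiParamTypeClasses, FunctionalDependencies,
 > -- FlexibleInstances, UndecidableInstances, OverlappingInstances
 > data HTrue  = HTrue
 > data HFalse = HFalse
 > data HNil       = HNil
 > data HCons x xs = HCons x xs
 > 
 > class HMember x xs b | x xs -> b
 > instance HMember x HNil          HFalse   -- not found
 > instance HMember x (HCons x xs)  HTrue    -- best match: found at the head
 > instance HMember x xs b =>
 >          HMember x (HCons y xs)  b        -- otherwise recurse on the tail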
 > 
 > third, understanding FDs via CHRs is very nice, and a good start
 > that I hope to see expanded on rather sooner than later. but (a) 
 > there is a big discrepancy between the language studied there and 
 > current practice, so this work will have to catch on before it can
 > be applied; and (b) the main issue with unconstrained type-class-
 > level programming is not that it tackles a semi-decidable problem.
 > 
 > in fact, the usual Prolog system only provides an _incomplete_ 
 > implementation of almost the same semi-decidable problem. but it 
 > provides a well-defined and predictable implementation, so
 > programmers can live with that (with a few well-known exceptions).
 > 
 > the two problems are not quite the same, because we extract
 > programs from type-class-level proofs. so not only do we need
 > the result to be uniquely determined (coherence), we also need
 > the proof to be un-ambiguous (or we might get different programs
 > depending on the proof variant chosen).
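 > 
 > the classic sketch of that problem, with overlapping instances
 > (illustrative names only; GHC needs FlexibleInstances and
 > overlapping-instance support for this):
 > 
 > class C a where c :: a -> String
 > instance C [a]   where c _ = "any list"
 > instance C [Int] where c _ = "list of Int"
 > 
 > f :: [a] -> String
 > f xs = c xs   -- only the C [a] proof is available here
 > 
 > -- f [1,2::Int] yields "any list", but c [1,2::Int] yields
 > -- "list of Int": the result depends on which proof was chosen.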
 > 
 > but, and this is an important "but": trying to restrict the permissible
 > programs so that neither incoherence nor ambiguity can arise in 
 > the first place is only _one_ way to tackle the issue; the other
 > direction that needs to be explored is to give programmers control
 > enough to disambiguate and to restore coherence where needed.
 > 
 > so, I am quite happy with splitting the bigger problem into two
 > levels: restrictive programs corresponding to the current state of
 > the art in formal studies of the problem, and unconstrained programs
 > catering for the current state of the art in programming practice. just
 > as long as both levels are included in the standard, and both levels
 > receive the further attention they need.
 > 
 > as far as I understand, this should not pose a problem for 
 > implementors, as they usually implement the simpler unconstrained 
 > variant first, then add restrictions to conform to the standard.
 > 
 > cheers,
 > claus
 > 
 > > - There's a class of MPTC/FD programs which enjoy sound, complete
 > >  and decidable type inference. See Result 1 below. I believe that
 > >  hugs and ghc faithfully implement this class.
 > >  Unfortunately, for advanced type class acrobats this class of
 > >  programs is too restrictive.
 > > 
 > > - There's a more expressive class of MPTC/FD programs which enjoy
 > >  sound and complete type inference if we can guarantee termination
 > >  by, for example, imposing a dynamic check. See Result 2 below. Again,
 > >  I believe that hugs and ghc faithfully implement this class if they
 > >  additionally implement a dynamic termination check.
 > >  Most type class acrobats should be happy with this class I believe.
 > > 
 > > Let's take a look at the combination of FDs and "well-behaved" instances.
 > > By well-behaved instances I mean instances which are non-overlapping and
 > > terminating. From now on I will assume that instances must be well-behaved.
 > > The (maybe) surprising observation is that the combination
 > > of FDs and well-behaved instances easily breaks completeness and
 > > decidability of type inference. Well, all this is well-studied.
 > > Check out
 > > [February 2006] Understanding Functional Dependencies via Constraint
 > > Handling Rules by Martin Sulzmann, Gregory J. Duck, Simon Peyton-Jones
 > > and Peter J. Stuckey
 > > which is available via my home-page.
 > > 
 > > Here's a short summary of the results in the paper:
 > > 
 > > Result 1:
 > > To obtain sound (we always have that), complete and decidable type 
 > > inference
 > > we need to impose
 > >    - the Basic Conditions (see Sect4)
 > >      (we can replace the Basic Conditions by any condition which
 > >       guarantees that instances are well-behaved. I'm ignoring
 > >       superclasses here for simplicity)
 > >    - Jones's FD restrictions and the Bound Variable Condition (see Sect4.1)
 > > 
 > > The trouble is that these restrictions are quite "severe". In fact,
 > > there's not much hope to improve on these results. See the discussion
 > > in Sect5.1-5.3.
 > > 
 > > Let's take a look at a simple example to understand the gist of the 
 > > problem.
 > > 
 > > Ex: Consider
 > > 
 > > class F a b | a->b
 > > instance F a b => F [a] [b] -- (F)
 > > 
 > > Assume some program text generates the constraint F [a] a.
 > > Type inference will improve a to [b] (because of the 
 > > functional dependency in connection with the instance).
 > > However, we then find the constraint F [[b]] [b],
 > > which can be resolved by the instance to F [b] b.
 > > But this is a renamed copy of the original constraint;
 > > hence, type inference will not terminate here.
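 > > 
 > > (To see how such a constraint can arise from ordinary program text,
 > > here is a hypothetical variant of the declarations above, extended
 > > with a made-up method f and a function g:
 > > 
 > > class F a b | a -> b where
 > >   f :: a -> b
 > > instance F a b => F [a] [b] where
 > >   f = map f
 > > 
 > > g x = f [x] `asTypeOf` x
 > > 
 > > Here f [x] has some type b under the constraint F [a] b, where
 > > x :: a, and asTypeOf forces b = a, so inferring g's type generates
 > > exactly the constraint F [a] a.)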
 > > 
 > > If we translate the instance and the FD of
 > > the above program to CHRs, the problem becomes obvious.
 > > 
 > > rule F a b, F a c  ==> b=c    -- the FD rule
 > > rule F [a] [b]    <==> F a b  -- the instance rule
 > > rule F [a] c       ==> c=[b]  -- the improvement rule
 > > 
 > > The rules should be read from left to right where
 > > "==>" stands for propagation and "<==>" for simplification.
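 > > 
 > > (Spelling out the derivation for the constraint from the example
 > > makes the loop visible:
 > > 
 > >   F [a] a
 > >   ==>  a=[b], F [[b]] [b]   -- improvement rule, fresh b
 > >   <==> a=[b], F [b] b       -- instance rule: a renamed copy of F [a] a
 > >   ==>  ...                  -- and so on, forever)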
 > > 
 > > My point: The above improvement rule is (potentially) dangerous
 > > because we introduce a fresh variable b. And that's why type inference
 > > may not terminate. In the terminology of the paper, the instance (F)
 > > violates one of Jones's FD restrictions (the Coverage Condition):
 > > the variables in the FD's range at the instance head (here b in [b])
 > > must already occur in its domain (here [a]), which fails for (F).
 > > 
 > > So, is all lost? Not quite. A second major result in the paper says
 > > that if we can guarantee termination then we can guarantee completeness.
 > > Of course, we need to find some appropriate replacement for 
 > > Jones' FD restrictions.
 > > 
 > > Result2:
 > > Assuming we can guarantee termination, then type inference
 > > is complete if we can satisfy
 > >   - the Bound Variable Condition,
 > >   - the Weak Coverage Condition, 
 > >   - the Consistency Condition, and
 > >   - the requirement that FDs are full.
 > > Effectively, the above says that type inference is sound and
 > > complete, but semi-decidable. That is, we're complete
 > > if each inference goal terminates.
 > > 
 > > Here's a short explanation of the conditions.
 > > The Bound Variable Condition says that all variables which appear
 > > in the instance head must appear in the instance body.
 > > Weak Coverage says that any of the "unbound" variables (see b
 > > in the above example) must be captured by a FD in the instance context.
 > > This sounds complicated but is fairly simple. Please see the paper.
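 > > 
 > > (For the running example this means: in
 > > 
 > > class F a b | a -> b
 > > instance F a b => F [a] [b]
 > > 
 > > the head variable b is not covered by the domain [a], but it is
 > > captured by the FD a -> b of the context constraint F a b, so
 > > Weak Coverage holds even though the Coverage Condition fails.)
 > > 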
 > > Consistency says that for any FD and any two instances the improvement
 > > rules generated must be consistent. In fact, because we assume
 > > that FDs are full and instances are well-behaved (i.e. instances are
 > > non-overlapping) the Consistency Condition is always satisfied here. 
 > > So, why did I mention Consistency then? Well, Theorem 2 in the paper 
 > > from which Result2 is derived is stated in very general terms. 
 > > We only assume that instances are terminating and confluent. Hence,
 > > we could (eventually) extend Result2 to overlapping instances
 > > and *then* Consistency becomes important (But PLEASE let's ignore
 > > overlapping instances for the moment).
 > > Okay, let's consider the last condition.
 > > We say an FD is full
 > > if all class parameters appear either in the domain or the range of the FD.
 > > For example, 
 > >   class F a b c | a -> b is not full but
 > >   class F a b c | a b -> c is full
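 > > 
 > > For comparison, here are the FD rules the CHR translation generates
 > > for a full and for a non-full FD (a rough sketch):
 > > 
 > > rule F a b c, F a b d ==> c=d   -- from the full FD   a b -> c
 > > rule F a b c, F a d e ==> b=d   -- from the non-full FD  a -> b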
 > > 
 > > Again, if we leave out any of the above conditions, we lose completeness.
 > > 
 > > Result2 covers in my opinion a rich class of FD programs.
 > > In my experience, the fullness condition is not a problem.
 > > (In some situations, we can transform a non-full program into a full
 > > one, but I'd rather reject non-full FDs).
 > > We could even drop the Bound Variable Condition and check
 > > for this condition "locally", similar to the way we check
 > > for termination now locally.
 > > 
 > > Somebody might ask, will ATs (associated types) behave any better?
 > > Unfortunately not, they share the same problem as FDs. AT type inference
 > > has the potential to become undecidable and incomplete.
 > > 
 > > Here's the above example recast in AT terms.
 > > 
 > > class F a where
 > >   type T a
 > > instance F a => F [a] where
 > >   type T [a] = [T a]
 > > 
 > > Assume some program text generates the constraint T [b] = b.
 > > Naively, we could apply the above type definition, which results
 > > in [T b] = b, and so on:
 > > from T [b] = [T b] and T [b] = b we get [T b] = b, which in turn
 > > leads to [T [T b]] = b, and so forth.
 > > However, the current AT description rejects the constraint T [b] = b
 > > right away: AT applies the occurs-check rule, and clearly b occurs
 > > in fv(T [b]).
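 > > 
 > > (As in the FD case, such a constraint can arise from ordinary
 > > program text. Hypothetically, extending the class with a made-up
 > > method f:
 > > 
 > > class F a where
 > >   type T a
 > >   f :: a -> T a
 > > 
 > > g x = f [x] `asTypeOf` x
 > > 
 > > With x :: b, f [x] has type T [b], and asTypeOf forces T [b] = b.)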
 > > 
 > > Important point to note: The occurs check is a "dynamic" check. 
 > > We could impose a similar check on FDs. E.g. the type class
 > > constraint F [a] a contains a cycle. Hence, we should immediately
 > > reject this constraint. Of course, cycles may span across
 > > several constraints/type equations. For example, consider
 > > F [a] b, F [b] a  and T [a]=b, T [b]=a.
 > > 
 > > Probably, more could be said but I'll stop here.
 > > 
 > > Martin
 > > 
_______________________________________________
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime
