Marcin 'Qrczak' Kowalczyk:
> > AAA! Thank you, it works now!
> 
> But there are problems with type inference.

I did mention that you would need explicit type declarations left, 
right and center, and that we are operating at the (quite irregular) 
borders of extended Haskell's type class system, didn't I?-)

Now, the first problem might not be as bad as it seems: for
big type signatures, type synonyms are your friend, and often
one type declaration in the right place can save lots of
others. I agree, though, that it can be a nuisance.

The second problem is more interesting, and related to the first, 
but as this is one of my pet topics, let me make a few comments 
first, for those Haskellers who wonder how such strange effects 
are possible with what the language report still explains to them 
as just "a structured way to introduce overloaded functions".

--

In truth, and with current extensions, the type class language does 
very much look like a restricted logic programming language over 
types. It has long outgrown its original interpretation as a systematic 
overloading mechanism, and I tend to find the logic programming 
picture more helpful than the overloading picture:

    - classes declare predicates/relations over types
    - plain instances define facts (predicates known to be true)
    - instances with contexts define implications (predicates known
        to be true if certain other predicates can be shown to be true)
    - functional programs can be attached to both facts and rules
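This reading can be sketched directly in (extended) Haskell. The
class Collects below is the familiar collection example from the
multi-parameter class literature, not anything from our callN
problem; the comments give the logic-programming translation:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- a two-parameter class, read as a predicate over types:
-- Collects(e, ce) -- "ce is a collection with elements e"
class Collects e ce where
  insert :: e -> ce -> ce

-- plain instance = fact:  Collects(e, [e]).
instance Collects e [e] where
  insert = (:)

-- instance with context = rule:
--   Collects(e, e -> Bool)  :-  Eq(e).
instance Eq e => Collects e (e -> Bool) where
  insert x set = \y -> y == x || set y
```

The attached functional programs are the method definitions: which
`insert` you get depends on which facts and rules the proof used.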

Whenever a type is qualified by a class context, this establishes
a proof obligation for the class mechanism, and while this mechanism
tries to complete the proof, it also collects and composes the 
functional program (fragment)s attached to the rules and facts
used in the proof. If the proof is successful, we get some dictionaries
full of functional programs, composed during the proof, which 
serve as evidence for the successful proof and are passed 
implicitly to the original program that had a qualified type.
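The evidence can be made explicit by hand. A minimal sketch of the
dictionary translation for Eq, with a made-up record type EqDict
standing in for the compiler's internal dictionaries:

```haskell
-- a dictionary: a record of the functions a class promises
data EqDict a = EqDict { eq :: a -> a -> Bool }

-- the fact "instance Eq Int" becomes a concrete dictionary
eqInt :: EqDict Int
eqInt = EqDict (==)

-- the rule "instance Eq a => Eq [a]" becomes a function from
-- evidence to evidence, composed during the proof
eqList :: EqDict a -> EqDict [a]
eqList d = EqDict eqL
  where eqL []     []     = True
        eqL (x:xs) (y:ys) = eq d x y && eqL xs ys
        eqL _      _      = False

-- a function of qualified type, Eq a => a -> [a] -> Bool,
-- receives its evidence as an explicit extra argument
member :: EqDict a -> a -> [a] -> Bool
member d x = any (eq d x)
```

Proving Eq [Int] means composing eqList with eqInt; that composed
dictionary is what gets passed to member behind the scenes.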

[This should also explain Simon's comment that there is no
recursion at the function level in our example problem. The 
recursion is at the level of type predicates, and as it unfolds
over finite types, it can terminate statically, leaving a suitable
non-recursive variant of callN/callIO behind.]

We still hang on to the old interpretation of classes, and so the
restrictions we use to ensure determinism and termination are
mostly brute force approximations (if we don't allow this and
that in the syntax, we know that we get a deterministic and
terminating system). Otherwise, type classes would give us
a logic programming-based counterpart to the 
functional-programming-based Cayenne.

The result is that the boundaries of the type class mechanism 
are quite irregular, and whenever someone finds a way to still 
guarantee the properties we think of as essential (determinism, 
termination) under relaxed restrictions, it leads to an extension 
of the mechanism, and few of these extensions are easily 
understood in the "overloading" interpretation of 
multiple-parameter type constructor classes. 

[In fact, termination is no longer seen as a must, as long as 
programmers don't get bitten. Anyone willing to relax the 
determinism restriction, letting the programmer choose at 
compile-time from a set of alternatives enumerated by the 
compiler?-)]

--

Back to our callN-problem:
If I'm not mistaken, you've actually encountered two limitations 
of the current class mechanism at once:

1. Invoking the class mechanism to resolve a type constraint
    will never instantiate variables in the original qualified type.
    The mechanism will try to find instance declarations whose
    head *matches* the constraints in the qualified type (which
    may lead to new constraints, etc..), so the substitutions will
    be one-way only (giving up one of the advantages of logic
    programming..).

2. As you noticed, there is no way to get the equivalent of
    Prolog's closed-world assumption - we can't tell the class
    mechanism that the instances currently in sight are all the
    instances that will ever be available.

So your example can be reduced further:

----------------------------------
module Class where

class C t
instance C Double 

f :: C a => a
f = 1

g :: C Int => Int
g = 2
----------------------------------

Even if we intend never to create any other instances of C, we can't
express this intention, so instead of a type error complaining about a
wrong type in g's signature, we'll get a complaint about a missing 
instance. And even if we could express our intention, f's type would
still not be instantiated to match the only instance of C [Hugs says
"cannot justify constraints" when trying to load f, and "unresolved 
overloading" when I try to use g after commenting out f; what do
batch-compilers say, and do they change messages when we rename
the module to Main and include some definition of main?].

> What I can't tell the compiler is "there will not be any other instances
> FA [foo] [bar], so if you need one, you may assume that foo == bar".

Both seem reasonable from a logic-programming perspective, as well 
as a few other modifications, but with the current lack of control over 
the scope of instance declarations, it is hard to define "these" in "these 
are the only instances of class C".

Does anyone else agree that Haskell's type classes are a logic 
programming language in disguise, and that this interpretation makes 
it easier to understand what is going on? It certainly seems more 
faithful to the semantics of type classes in terms of qualified types..

Claus
