Re: deriving over renamed types
Ashley Yakeley wrote:
> At 2002-04-08 12:45, Lennart Augustsson wrote:
> > I just wanted to say that I agree with almost everything Conor said.
> > I find it a little odd that the extension to Haskell that allows
> > explicit forall does not also allow you to use explicit type
> > application (and type lambda).
>
> What did you have in mind?

Actually, what you are proposing was not at all what I was thinking of
(even if I think something like what you propose would be useful).  I was
referring to the expression language.  So this is already allowed:

    f :: (forall a . a -> a) -> (b, c) -> (c, b)
    f i (x, y) = (i y, i x)

I'd like to be able to have explicit type applications.  If we denote
type application with infix # I'd like to write

    f i (x, y) = (i#c y, i#b x)

And where you invoke f you could use a type lambda

    ... f (/\a -> \x -> (x::a)) ...

It doesn't make much sense in this example, but there are others where
the implicit stuff just doesn't allow you to do what you want.

        -- Lennart

> data Zero;
> data Succ n;
>
> type Add Zero b = b;
> type Add (Succ a) b = Succ (Add a b);
> type Mult Zero b = Zero;
> type Mult (Succ a) b = Add b (Mult a b);
> type Fact Zero = Zero;
> type Fact (Succ n) = Mult (Succ n) (Fact n);
>
> data Foo f = MkFoo (f ());
>
> type Succ' = Succ;
> type Succ'' n = Succ n;
>
> -- which of these types are the same?
> f1 = MkFoo undefined :: Foo Succ;
> f2 = MkFoo undefined :: Foo Succ';
> f3 = MkFoo undefined :: Foo Succ'';
> f4 = MkFoo undefined :: (Add (Succ Zero));
>
> -- Ashley Yakeley, Seattle WA

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell
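[Editorial note: Lennart's hypothetical infix # later materialised in GHC as the @ of the TypeApplications extension. A minimal sketch of his example in that modern notation (GHC 8.0 or later, not the Haskell of 2002):]

```haskell
{-# LANGUAGE RankNTypes, ScopedTypeVariables, TypeApplications #-}

-- Lennart's f, with his hypothetical "i#c" spelt "i @c".  The explicit
-- forall brings b and c into scope so the polymorphic argument i can be
-- instantiated at them by name.
f :: forall b c. (forall a. a -> a) -> (b, c) -> (c, b)
f i (x, y) = (i @c y, i @b x)

-- At a call site the type lambda stays implicit: any sufficiently
-- polymorphic function will do.
example :: (Bool, Char)
example = f id ('x', True)
```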
Re: deriving over renamed types
Hi

Lennart wrote:
> I was referring to the expression language.  So this is already allowed:
>
>     f :: (forall a . a -> a) -> (b, c) -> (c, b)
>     f i (x, y) = (i y, i x)
>
> I'd like to be able to have explicit type applications.  If we denote
> type application with infix # I'd like to write
>
>     f i (x, y) = (i#c y, i#b x)
>
> And where you invoke f you could use a type lambda
>
>     ... f (/\a -> \x -> (x::a)) ...
>
> It doesn't make much sense in this example, but there are others where
> the implicit stuff just doesn't allow you to do what you want.

There's a plentiful supply of good examples arising naturally from
higher-kind polymorphism.  * -> * is full of functions with good
properties (eg being functorial or monadic) which aren't datatype
constructors.  Haskell's inability to express them and to instantiate
type schemes with them is a serious hindrance to the good functional
programming practice of identifying and abstracting common structure.

Similarly, it's not too hard to find examples where the potential of
local `forall' is thwarted by the lack of the corresponding lambda.

These examples are already in circulation.  Check out...

Richard Bird and Ross Paterson:
  de Bruijn notation as a nested datatype
  Generalized folds for nested datatypes

My own:
  Faking it

and there are plenty more.  These papers are too long, because they
contain the same program written several times with different types.
The abstraction which combines them just isn't available.

Worse, multi-parameter classes contribute a second source of `functions
from * to *' which just don't fit in with higher-kind polymorphism.
Some rationalization would help.

Type theory has already addressed lots of these issues.  I'm not making
these proposals in a vacuum.  Type-level abstraction and application,
with defaults computed by unification but explicit overriding available,
is standard practice in several implementations.  Further...
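[Editorial note: for readers who don't know the papers cited, a sketch of the kind of nested datatype Bird and Paterson study -- de Bruijn-style lambda terms, where going under a binder changes the variable type. Here Maybe does type-level work as a * -> * function that happens to be a constructor; the cited "generalized folds" abstract over such functors, which is where the missing type-level lambda bites:]

```haskell
-- A nested datatype: Lam's recursive occurrence is at Term (Maybe v),
-- not Term v, so each binder adds exactly one fresh variable (Nothing).
data Term v
  = Var v                 -- a free variable
  | App (Term v) (Term v) -- application
  | Lam (Term (Maybe v))  -- body has one extra variable in scope

-- A closed term: \x -> x x, with no free variables (v = never-inhabited).
identityApp :: Term v
identityApp = Lam (App (Var Nothing) (Var Nothing))
```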
Ashley wrote:
> data Zero;
> data Succ n;
>
> type Add Zero b = b;
> type Add (Succ a) b = Succ (Add a b);

...you can write this program with type classes, but much less clearly
and directly.  They're a good tool, but the wrong tool for this job, and
this job is worth doing well.

In that respect, I'd caution against pattern matching over * and higher
kinds, because these kinds are not closed.  That's why I argue in favour
of datakinds (given that using genuine term-level data would involve a
seismic shift in Haskell's static/dynamic divide).  But, imagining that
Zero and Succ are the constructors of a datakind Nat over which we can
write these programs safely...

> data Foo f = MkFoo (f ());
>
> type Succ' = Succ;
> type Succ'' n = Succ n;
>
> -- which of these types are the same?
> f1 = MkFoo undefined :: Foo Succ;
> f2 = MkFoo undefined :: Foo Succ';
> f3 = MkFoo undefined :: Foo Succ'';
> f4 = MkFoo undefined :: Foo (Add (Succ Zero));

...as has already been pointed out, they all have the same eta-long
normal form, being Foo (/\n -> Succ n).  Machines are good at figuring
this stuff out, which is why I like their help when I'm programming.
Working with types which contain computations, it's a great help if you
can typecheck your code as you go, and ask for (head) normal forms
anytime.

This stuff doesn't require a leap in the dark.  It just takes a bit of
borrowing and adaptation.

Cheers

Conor
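[Editorial note: Conor's datakind proposal eventually surfaced in GHC, years after this thread, as the DataKinds extension plus closed type families. A sketch of Ashley's program in that later notation -- pattern matching over the kind Nat is safe precisely because the kind is closed:]

```haskell
{-# LANGUAGE DataKinds, TypeFamilies #-}

-- DataKinds promotes Nat to a kind whose only inhabitants are the
-- promoted constructors 'Zero and 'Succ.
data Nat = Zero | Succ Nat

-- A closed type family gives ordered, pattern-matching type-level
-- definition, exactly the style Ashley wrote with hypothetical syntax.
type family Add (a :: Nat) (b :: Nat) :: Nat where
  Add 'Zero     b = b
  Add ('Succ a) b = 'Succ (Add a b)

-- GHCi can compute the (head) normal forms Conor asks for:
--   :kind! Add ('Succ 'Zero) ('Succ 'Zero)
```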
Re: deriving over renamed types
At 2002-04-09 04:53, C T McBride wrote:
> There's a plentiful supply of good examples arising naturally from
> higher-kind polymorphism.  * -> * is full of functions with good
> properties (eg being functorial or monadic) which aren't datatype
> constructors.  Haskell's inability to express them and to instantiate
> type schemes with them is a serious hindrance to the good functional
> programming practice of identifying and abstracting common structure.
> Similarly, it's not too hard to find examples where the potential of
> local `forall' is thwarted by the lack of the corresponding lambda.

I agree.  I've come across them.

...

> Ashley wrote:
> > data Zero;
> > data Succ n;
> > type Add Zero b = b;
> > type Add (Succ a) b = Succ (Add a b);
>
> ...you can write this program with type classes, but much less clearly
> and directly.

For instance:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/*checkout*/truth-framework/Source/TypeCalc/NaturalType.hs?rev=HEAD&content-type=text/plain

There's more in that folder, including concatenatable mixed lists:
http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/truth-framework/Source/TypeCalc/

This would have all been _much_ easier with typed lambda and
pattern-matching.  The biggest problem is that instances are unordered
and must be disjoint, whereas pattern cases are of course ordered and
don't need to be disjoint.

> They're a good tool, but the wrong tool for this job, and this job is
> worth doing well.  In that respect, I'd caution against pattern
> matching over * and higher kinds, because these kinds are not closed.
> That's why I argue in favour of datakinds (given that using genuine
> term-level data would involve a seismic shift in Haskell's
> static/dynamic divide).

Hmm.  After value lambda and type lambda, are we going to one day want
datakind lambda?  Alternatively, could one extend classes to be
datakinds?  Some way of closing them?

> But, imagining that Zero and Succ are the constructors of a datakind
> Nat over which we can write these programs safely...
> data Foo f = MkFoo (f ());
>
> type Succ' = Succ;
> type Succ'' n = Succ n;
>
> -- which of these types are the same?
> f1 = MkFoo undefined :: Foo Succ;
> f2 = MkFoo undefined :: Foo Succ';
> f3 = MkFoo undefined :: Foo Succ'';
> f4 = MkFoo undefined :: Foo (Add (Succ Zero));
>
> ...as has already been pointed out, they all have the same eta-long
> normal form, being Foo (/\n -> Succ n).

...and what about this one:

    f5 = MkFoo undefined :: Foo (/\a ::: Nat => Add a (Succ Zero));

...?  Depending on how 'Add' is defined, f5 may very well be
extensionally equivalent to the others.  Isn't this going to trip up the
programmer some time?

> Machines are good at figuring this stuff out, which is why I like
> their help when I'm programming.

OK.  But there needs to be a line drawn in what Haskell is going to
figure out.  We know it's never going to be able to calculate whether
any two functions are extensionally equivalent.

-- 
Ashley Yakeley, Seattle WA
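[Editorial note: a sketch of the type-class encoding Ashley alludes to -- not his actual NaturalType.hs, just the technique: addition becomes a three-place relation, with a functional dependency making the result a function of the two arguments. His complaint is visible in the shape of the code: the instances are unordered and must not overlap, where pattern cases would be ordered:]

```haskell
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}

data Zero
data Succ n

-- "Add a b c" means a + b = c; the fundep "a b -> c" lets the compiler
-- compute c from a and b.
class Add a b c | a b -> c
instance Add Zero b b
instance Add a b c => Add (Succ a) b (Succ c)

-- A term-level witness for driving the computation in GHCi, e.g.
--   :type add (undefined :: Succ Zero) (undefined :: Succ Zero)
add :: Add a b c => a -> b -> c
add _ _ = undefined
```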
Re: deriving over renamed types
I just wanted to say that I agree with almost everything Conor said.  I
find it a little odd that the extension to Haskell that allows explicit
forall does not also allow you to use explicit type application (and
type lambda).

        -- Lennart

C T McBride wrote:
> [Conor's message of Fri, 5 Apr 2002, quoted in full; it appears in its
> own right elsewhere in this digest]
Re: deriving over renamed types
At 2002-04-08 12:45, Lennart Augustsson wrote:
> I just wanted to say that I agree with almost everything Conor said.
> I find it a little odd that the extension to Haskell that allows
> explicit forall does not also allow you to use explicit type
> application (and type lambda).

What did you have in mind?

data Zero;
data Succ n;

type Add Zero b = b;
type Add (Succ a) b = Succ (Add a b);
type Mult Zero b = Zero;
type Mult (Succ a) b = Add b (Mult a b);
type Fact Zero = Zero;
type Fact (Succ n) = Mult (Succ n) (Fact n);

data Foo f = MkFoo (f ());

type Succ' = Succ;
type Succ'' n = Succ n;

-- which of these types are the same?
f1 = MkFoo undefined :: Foo Succ;
f2 = MkFoo undefined :: Foo Succ';
f3 = MkFoo undefined :: Foo Succ'';
f4 = MkFoo undefined :: (Add (Succ Zero));

-- 
Ashley Yakeley, Seattle WA
Re: deriving over renamed types
Ashley Yakeley [EMAIL PROTECTED] writes:

[...]

> -- which of these types are the same?
> f1 = MkFoo undefined :: Foo Succ;
> f2 = MkFoo undefined :: Foo Succ';
> f3 = MkFoo undefined :: Foo Succ'';
> f4 = MkFoo undefined :: Foo (Add (Succ Zero));

yeah, why not!  Have them all be the same thing.
(i added the missing Foo on f4)

    add Zero b = b
    add (Succ a) b = Succ (add a b)
    mult Zero b = Zero
    mult (Succ a) b = add b (mult a b)
    fact Zero = Zero
    fact (Succ n) = mult (Succ n) (fact n)

    data Foo f = MkFoo (f ())

    succ' = Succ
    succ'' n = Succ n

    -- which of these types are the same?
    f1 = MkFoo undefined :: Foo Succ
    f2 = MkFoo undefined :: Foo succ'
    f3 = MkFoo undefined :: Foo succ''
    f4 = MkFoo undefined :: Foo (add (Succ Zero))

Add to Haskell:
- eta-expansion of (add (Succ Zero)) into (\x -> add (Succ Zero) x)
- compile-time evaluation of type expressions (handling unbound
  parameters like x above)

and that should do it!  I don't think this is as crazy as it sounds.
Cayenne has already started computing types at compile-time.

I'm currently working on merd (http://merd.net) which has such a type
system.  eg:

    range(n, m) = if n < m then n | range(n + 1, m) else n

    Two_to_eight = range(2, 8)    #= 2|3|4|5|6|7|8

    restricted_identity !! Two_to_eight -> Two_to_eight
    restricted_identity(x) = x

    restricted_identity(4)    #= 4
    restricted_identity(1)    #= compile-time error: 1 is not in 2|3|4|5|6|7|8
Re: deriving over renamed types
Hi

On Fri, 5 Apr 2002, Ashley Yakeley wrote:
> At 2002-04-04 05:57, C T McBride wrote:
> > > ...which would be very useful, but would probably have unpleasant
> > > consequences for type inference...
> >
> > To my mind, this is not a credible objection.  The horse has already
> > bolted; there's no point in trying to shut the stable door.
>
> Perhaps I should say type decidability.  Currently Haskell can always
> calculate whether one type-constructor is a substitution-instance of
> another, and therefore whether two type-constructors are the same.
> This may not be possible if you have full type lambdas, as in general
> there is no way of calculating whether two functions are the same.

That's fair enough, up to a point.  Higher-order unification is
undecidable, and does not in any case yield unique most general
unifiers.  It's inevitable that if there are more things we could
possibly be saying, it's harder for the machine to guess which one we
mean.

My point, however, is that we should not restrict what we can say, just
because machines have limitations.  Instead, we should ensure that we
who (hopefully) know what we mean should be able to tell the machine.
In this respect, it's reasonable to allow both `big lambda', abstracting
over types and other higher-kind things, and `big application' (absent,
thus far, from the relevant proposals), allowing those abstracted
parameters to be instantiated explicitly.  That's what we do in
dependent type theory (except that there's no distinction between big
and little lambda), and proof assistants like Agda, Coq and Lego all
have decidable typechecking, with much richer type systems than
Haskell's.

I have one foot on the platform and one on the train here: I'm currently
implementing a dependently typed programming language in Haskell, and
thinking about type system extensions for Haskell is an occupational
hazard.

The joy of Hindley-Milner is that you never have to write a big lambda
or a big application, but it doesn't mean they aren't there.
The machine inserts big lambdas via let-polymorphism and big
applications via first-order unification.  By careful restriction of the
solutions available for higher-kind unknowns---only type constructors or
polymorphic parameters---Haskell ekes out a little more automation.  Not
unreasonable.

But instead of restricting the usage of big lambda and big application
to only those which can be kept implicit, why not allow them optionally
to be made explicit, and use the existing mechanisms to provide
reasonable (but obviously not most general) defaults when this option is
not exercised?

Again, there's an established precedent for this in type theory.
Pollack's `Implicit Syntax' has been around for ten years, and precisely
allows you to indicate that an argument will by default be inferred (via
unification, if possible) but can in any case be given explicitly.

So, if we had, say, a monadically lifted fold operator

    mfoldr :: forall m. Monad m => forall a b.
              (a -> b -> m b) -> m b -> [a] -> m b

we could use this for Maybe, [], IO, etc as it stands, but we could also
define foldr by something like

    foldr = mfoldr{Id}

That's certainly cheaper, and much less frustrating than the
`packaging' option

    newtype Id a = An a
    instance Monad Id ...

A proposal which serves a different purpose but pushes in this general
direction is Kahl and Scheffczyk's `Named Instances' idea.  They're
interested (rightly, in my opinion) in providing an explicit but
optional language of witnesses for the class constraints which appear in
type schemes.  Their proposal recognizes and does not interfere with the
fact that, for many simple programs, there's an obvious default choice
of instance which the machine can safely be left to fill in.  However,
they provide the means to specify an instance whenever a non-obvious
behaviour is desired, or when (as is often the case with multi-parameter
classes) there is no obvious behaviour anyway.  It's a good idea, which
I would like to see taken further.
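[Editorial note: Conor's mfoldr, made concrete. The role of his hypothetical "mfoldr{Id}" is played today by instantiating the monad at Identity from base's Data.Functor.Identity -- exactly the "packaging" newtype he describes, now provided by the library:]

```haskell
import Data.Functor.Identity (Identity (..))

-- The monadically lifted fold from Conor's message.
mfoldr :: Monad m => (a -> b -> m b) -> m b -> [a] -> m b
mfoldr _ e []       = e
mfoldr f e (x : xs) = mfoldr f e xs >>= f x

-- The unlifted foldr, recovered by running the fold in Identity.
foldr' :: (a -> b -> b) -> b -> [a] -> b
foldr' f e xs =
  runIdentity (mfoldr (\a b -> Identity (f a b)) (Identity e) xs)
```

The same mfoldr works unchanged at Maybe, [], and IO, which is the point of lifting it in the first place.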
It seems to me desirable to seek a better integration of type-level
programming via classes and the type-level nearly-programming which is
made available by higher-kind polymorphism.  I'm beginning to have some
ideas about how this can be done---again based on existing technology in
dependent type theory---but now's not the right time to push that
particular boat out.

So, let me summarize with two points:

[Scientific] If we want to do wacky stuff, we should be under no
illusion about preserving type inference (as opposed to type
checking)---we must say what we mean.  The maintenance of type inference
is an untenable excuse for preventing us from saying what we mean.  If
we make big lambda and big application available but optional, we can
keep the existing level of annotation in existing programs by using the
existing inference engines as defaulting mechanisms.

[Political] There is a great deal of work in type theory which is
becoming more and more relevant to functional
Re: deriving over renamed types
At 2002-04-04 05:57, C T McBride wrote:
> > ...which would be very useful, but would probably have unpleasant
> > consequences for type inference...
>
> To my mind, this is not a credible objection.  The horse has already
> bolted; there's no point in trying to shut the stable door.

Perhaps I should say type decidability.  Currently Haskell can always
calculate whether one type-constructor is a substitution-instance of
another, and therefore whether two type-constructors are the same.  This
may not be possible if you have full type lambdas, as in general there
is no way of calculating whether two functions are the same.

-- 
Ashley Yakeley, Seattle WA
RE: deriving over renamed types
It's a bug in GHC 5.02 and earlier; fixed in 5.03

Simon

| -----Original Message-----
| From: Hal Daume III [mailto:[EMAIL PROTECTED]]
| Sent: 04 April 2002 00:15
| To: Haskell Mailing List
| Subject: deriving over renamed types
|
| Why can't I do this:
|
|     import FiniteMap
|     type FM = FiniteMap
|     instance (Show a, Show b) => Show (FiniteMap a b) where
|         show = show . fmToList
|     data X = X (FM Int Int)
|         deriving (Show)
|
| if I replace
|
|     type FM = FiniteMap
|
| with
|
|     type FM a b = FiniteMap a b
|
| it works fine.  I wasn't aware there was (supposed to be) a
| difference between these two declarations?  Is there?
|
|  - Hal
|
| --
| Hal Daume III
|
| Computer science is no more about computers    | [EMAIL PROTECTED]
|  than astronomy is about telescopes. -Dijkstra | www.isi.edu/~hdaume
Re: deriving over renamed types
Hi all,

At 2002-04-03 15:14, Hal Daume III wrote:
> type FM = FiniteMap
> type FM a b = FiniteMap a b
>
> I wasn't aware there was (supposed to be) a difference between these
> two declarations?  Is there?

On Wed, 3 Apr 2002, Ashley Yakeley wrote:
> type FM a b = FiniteMap a b
>
> ...This defines FM as a pseudo-type-constructor.  It has no kind, but
> must be specified with two arguments each of kind (*) so as to become
> a type-constructor (a type) of kind (*).

In effect, type synonyms introduce a macro facility, but because they
must be fully applied, they introduce no new normal forms for types.
This makes things simpler from one point of view, but simplicity is
subjective.  Here, we trade simplicity of comprehension (for the
machine) for simplicity of composition (for us).  I don't think we've
landed the best deal.

> Unfortunately Haskell does not have lambda over types, otherwise the
> two could be the same.  And then one could do things such as
>
>     type M b a = a -> b;
>     instance Cofunctor (M b) where ...

Yes, please!  More, please!

    type Id a = a
    type Comp f g x = f (g x)

    instance Monad Id where
        return = id
        a >>= f = f a

    instance (Functor f, Functor g) => Functor (f `Comp` g) where
        fmap = fmap . fmap

In particular, if Id were a Monad, we could adopt the useful habit of
writing monadically lifted combinators (fold operators, etc), from which
the unlifted versions could be readily recovered.

> ...which would be very useful, but would probably have unpleasant
> consequences for type inference...

To my mind, this is not a credible objection.  The horse has already
bolted; there's no point in trying to shut the stable door.  The
existing post-Hindley-Milner aspects of Haskell's type system are too
useful for type inference to be worthy of such veneration.
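[Editorial note: Conor's Id and Comp cannot be written as type synonyms in Haskell, but the newtype "packaging" workaround does exist, and base now ships it as Data.Functor.Identity and Data.Functor.Compose. A sketch of the hand-rolled version, assuming nothing beyond Haskell 98:]

```haskell
-- Identity and composition as honest type constructors, at the cost of
-- wrapping and unwrapping.
newtype Id a = MkId a
newtype Comp f g x = MkComp (f (g x))

instance Functor Id where
  fmap f (MkId a) = MkId (f a)

-- Composition of two functors is a functor: map under both layers.
instance (Functor f, Functor g) => Functor (Comp f g) where
  fmap h (MkComp fgx) = MkComp (fmap (fmap h) fgx)
```

The wrapping is exactly the friction Conor is complaining about: with type-level lambda, fmap at Comp Maybe [] would just be fmap . fmap, no MkComp in sight.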
[warning: drum-thumping Ulsterman on soap box]

(*) Two points in this regard

Pragmatically, there is no reason why extensions of the type system
which break type inference should preclude the use of type inference for
`old' programs which sit within the sublanguage where inference remains
possible.  If the machine is smart enough to figure out a program's
type, by all means let it.  But if not, we must be able to say what we
mean.  This is already the Haskell approach: we can and should take it
further.  Design remains important.  The division of labour between
humans and machines should be as clear as possible---we should be able
to tell which parts of the program we must write, and which the machine
can construct automatically.  `Macbeth programming' (throwing
miscellaneous rats, bats and insects into the cauldron until the magic
happens) must be avoided.

Dogmatically, type inference is in any case a bad rationalization of how
it is that a program comes to have a type.  In a strongly typed
language, types define the spaces where programs are found, not the
other way around!  If we don't have at least some idea what the type is,
we have no business writing the program.  We should always be willing to
write type signatures.

(*) Further analysis

The trouble, as things stand, is that we don't get any payoff for
writing type signatures, other than the machine's grudging acceptance of
our weirder programs, and a vague sensation of puritan worthiness.
Explicit or not, we are forced to maintain the type discipline in our
heads as we code.  Typechecking is _summative_: the typechecker is a
policeman who shows up at the end of the process and thumps us if we've
got it wrong.  Whilst the standard undergraduate cry of `Nasty
typechecker won't let my program work!' is misguided (I typically
respond `What program?'), it is at least understandable given that the
typechecker plays no _formative_ role in the construction of programs
which _do_ work.
Type-directed editing (as opposed to mere syntax-directed editing) is a
way to extract benefit from the work we put into writing a type
signature.  A choice of type has consequences for the choice of program,
and we can use the machine to propagate those consequences, where
current practice requires us to engineer coincidences between types and
programs.  In effect, the production of non-criminal code can benefit
from parenting as well as policing.  Rebellious teenagers will doubtless
persist in loitering on street corners, writing their programs with a
spraycan.  This crime is its own punishment.

As I've argued before, the need for type-directed editing, as pioneered
in proof assistants based on dependent type theory, arises even without
dependent types.  The currently implemented extensions to Haskell's type
system already introduce enough computation at the type level to make
self-imposed type discipline a major headache.  Polymorphic recursion
requires care; multi-parameter type classes require serious dedication.
It doesn't take a genius to see that things are getting out of hand,
although it may take a non-genius to see the need for help.

(*) You can probably tell where I'm going with this
RE: deriving over renamed types
| ...which would be very useful, but would probably have unpleasant
| consequences for type inference...
|
| To my mind, this is not a credible objection.  The horse has
| already bolted; there's no point in trying to shut the stable
| door.  The existing post-Hindley-Milner aspects of Haskell's
| type system are too useful for type inference to be worthy of
| such veneration.

I agree with this.  The goal of 100% type inference is a red herring.
This point of view is articulated at some length in a draft paper Mark
Shields and I have written on scoped type variables:

    http://research.microsoft.com/~simonpj/papers/scoped-tyvars

Type inference is still terrifically useful to fill in the gaps between
programmer-supplied type signatures.  The trouble with full type lambdas
is that they make unification essentially impossible, and that has a
huge impact on how many type annotations you need.

So I'm completely in agreement with moving towards allowing the
programmer to supply more explicit type information; but I have not yet
seen a proposal that supports anything like full type lambdas in a
system that looks anything like an extension of Haskell, and that does
not require quite burdensome type annotations.  But I may well simply be
ignorant.

Simon
Re: deriving over renamed types
At 2002-04-03 15:14, Hal Daume III wrote:
> if I replace
>
>     type FM = FiniteMap
>
> with
>
>     type FM a b = FiniteMap a b
>
> it works fine.  I wasn't aware there was (supposed to be) a difference
> between these two declarations?  Is there?

I don't know about your example, but there is a difference.

    type FM = FiniteMap

...This defines FM as a type-constructor of kind (* -> * -> *)
equivalent to FiniteMap.

    type FM a b = FiniteMap a b

...This defines FM as a pseudo-type-constructor.  It has no kind, but
must be specified with two arguments each of kind (*) so as to become a
type-constructor (a type) of kind (*).

Unfortunately Haskell does not have lambda over types, otherwise the two
could be the same.  And then one could do things such as

    type M b a = a -> b;
    instance Cofunctor (M b) where ...

...which would be very useful, but would probably have unpleasant
consequences for type inference...

-- 
Ashley Yakeley, Seattle WA
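[Editorial note: Ashley's distinction, demonstrated. FiniteMap predates the hierarchical libraries; Data.Map plays its role below (an assumption of the sketch, not part of the original thread). The nullary synonym can be partially applied; the expanded one, being a macro, cannot:]

```haskell
import qualified Data.Map as Map

type FM  = Map.Map          -- a real synonym for a * -> * -> * constructor
type FM2 a b = Map.Map a b  -- a macro: must always get both arguments

newtype Wrap f = Wrap (f Int)

-- FM Int has kind * -> *, so it can be passed where a constructor is
-- expected...
ok :: Wrap (FM Int)
ok = Wrap Map.empty

-- ...but FM2 Int is rejected: "type synonym FM2 should have 2 arguments".
-- bad :: Wrap (FM2 Int)
-- bad = Wrap Map.empty
```

This is exactly why Hal's `deriving` example behaved differently under the two declarations, and (per Simon's message) the version that failed was a compiler bug rather than the intended semantics.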