As far as I remember, this was considered by the original Haskell committee 
in 1988.
The argument against non-linear patterns at the time was that, in the
interests of equational reasoning, it was desirable to define a function
using disjoint cases, and there is no way of writing a pattern that matches
exactly those values which fail to match a non-linear pattern.

The suggested translation of non-linear patterns using guards covers only a
very simple case and does not obviously generalise. For example, given the
left-to-right semantics of pattern matching, with
g x x  1 = e1 ; g x y z = e2
h x x' 1 | x == x' = e1 ; h x y z = e2
should
g 1 2 bottom
and
h 1 2 bottom
have the same value?

I think that g 1 2 bottom should be e2, and h 1 2 bottom should be bottom.
A possible translation of g would then be

g x y z = let
            g' x y | x == y    = v
                   | otherwise = \ z -> e2
            v 1 = e1
            v _ = e2
          in  g' x y z
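
For concreteness, here is a small runnable sketch of that translation next to
the guard-based h (the names gT and hT, and the string results "e1" and "e2",
are just mine for illustration, not part of the proposal above):

gT :: Integer -> Integer -> Integer -> String
gT x y z = let
             g' x y | x == y    = v
                    | otherwise = \ _ -> "e2"
             v 1 = "e1"
             v _ = "e2"
           in g' x y z

hT :: Integer -> Integer -> Integer -> String
hT x x' 1 | x == x' = "e1"
hT _ _  _           = "e2"

-- gT 1 2 undefined  evaluates to "e2": the third argument is never forced.
-- hT 1 2 undefined  diverges: matching the literal 1 forces the third argument.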

Suppose we had g x x x = e1 .... Given that == is not required to be 
transitive for every user-defined overloading, it would seem that 3 
equality tests would be necessary!
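
Spelt out, that would be something like the following sketch (right-hand
sides again instantiated to strings purely for illustration):

g3 :: Integer -> Integer -> Integer -> String
g3 x y z | x == y && y == z && x == z = "e1"   -- all three pairwise tests
g3 _ _ _                              = "e2"
-- With an Eq instance that is not transitive, x == y && y == z does not
-- let us drop the x == z test.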

My view is that non-linear patterns are a succinct way of defining a very
limited class of constraints, but they probably do not buy enough to justify
inclusion in Haskell.

--brian


On Wednesday, May 05, 1999 9:16 AM, Peter Thiemann 
[SMTP:[EMAIL PROTECTED]] wrote:
> A friend and I recently discussed why patterns in Haskell are
> restricted to be linear. He found it unintuitive due to his background
> in term rewriting and logic. And he noted that it is easy to compile
> away as in:
>
> f (x, x) = e
> ==>>
> f (x, x') | x == x' = e
>
> It is also easy to transform away in list comprehensions:
>
> (x, x) <- e
> ==>>
> (x, x') <- e, x == x'
>
> My main argument against it was a language design issue, namely that
> suddenly x is required to have an Eq type which cannot be explained by
> looking at its uses in e.
>
> Another problem is that comparing x with x' makes this kind of pattern
> matching super-strict (since x may be reduced to normal form).
>
> Can someone enlighten me on other arguments for or against non-linear
> patterns?
>
> NB:
> If I remember the Haskell98 discussion correctly, there was a
> discussion on Monad and (the now dead) MonadZero, where the MonadZero
> appeared "magically" in the context, whenever someone used (refutable)
> patterns in the do-notation. This discussion (which was resolved by
> hacking class Monad and dropping class MonadZero) is imho related to
> the question raised above; in both cases, the use of some language
> feature changes/restricts the type.
>
> -Peter

