I think one crucial point is being lost in the ongoing discussion of pattern-matching and newtype:

newtype is supposed to permit *erasure* of construction and pattern matching. There is *no runtime cost*, because the type disappears at compile time; even a non-optimising Haskell implementation can exploit this fact. For data types like state monads, which incorporate internal functions, this makes a huge difference to program performance, as it opens up vast numbers of beta-reductions, eta-abstractions, and so forth. This ties in a bit with the recent discussions of the efficiency of monadic computations.
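To make that concrete, here is a minimal sketch of the sort of type I have in mind (the names S, runS, and bindS are made up, not from any library):

> -- A newtype-wrapped state monad: the S constructor exists only at
> -- compile time, so building and unwrapping it costs nothing at run time.
> newtype S s a = S { runS :: s -> (a, s) }
>
> -- Matching on a newtype constructor is irrefutable, so bindS compiles
> -- down to plain function plumbing; with "data S s a = S (s -> (a, s))"
> -- every use of runS would have to inspect a real heap object first.
> bindS :: S s a -> (a -> S s b) -> S s b
> bindS m k = S (\s -> let (a, s') = runS m s in runS (k a) s')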
Yes, it is possible to write a strict type declaration:

> data C2 = C2 !Bool

and then always match lazily:

> f' ~(C2 x) ~(C2 y) = C2 (f x y)

And if you *always* did that, it would be safe for the compiler to erase all occurrences of C2 from your program. One omitted ~, one accidental strict match in your code, and you lose. If you export C2, a strict match *might* exist in some other module. You lose.

Instead, you can write "newtype", secure in the knowledge that The Right Thing happens. Indeed, it's impossible to get erasure any other way if you're exporting your type (unless you have a whole-program optimizer, and no one's ever actually released one of those for Haskell).

A side note: if one wants to pare the Haskell language down a bit, one need look no further than strictness annotations on fields! They can easily be simulated by wrapper functions (*), as the quick check after the footnote below illustrates:

> data C3 = C3 Bool
>
> c3 b = C3 $! b

-Jan-Willem Maessen
 Eager Haskell project
 [EMAIL PROTECTED]

(*) Of course, then the compiler doesn't get to assume anything about the field when pattern matching happens. While I'm not sure that current compilers make much of this information anyhow, I do, and am not actually advocating the removal of strict fields from Haskell!
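As a quick sanity check of the wrapper-function simulation above (checkLazy and friends are made-up names; the comments assume the usual semantics of seq and $!):

> checkLazy, checkWrapper, checkStrict :: ()
> checkLazy    = C3 undefined `seq` ()   -- ()     : the bare C3 constructor leaves its field lazy
> checkWrapper = c3 undefined `seq` ()   -- bottom : c3 forces the field at construction time
> checkStrict  = C2 undefined `seq` ()   -- bottom : C2's strict field forces it too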