Adding seq ruins eta reduction. In the normal-order lambda calculus
we have '\x.f x = f' (x not free in f). If we add seq, this is no
longer true.
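A minimal GHC sketch of the failure (run without optimisation, so the
eta-expanded definition is kept as written; the names f and g are my own):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A bottom at function type:
f :: Int -> Int
f = undefined

-- Its eta-expansion. As a lambda it is already in WHNF,
-- even though f itself is bottom.
g :: Int -> Int
g x = f x

main :: IO ()
main = do
  -- seq on the eta-expanded version succeeds: g is a value.
  r1 <- try (evaluate (g `seq` "eta-expanded: fine"))
          :: IO (Either SomeException String)
  print r1
  -- seq on f diverges: forcing f to WHNF hits undefined.
  r2 <- try (evaluate (f `seq` "unreached"))
          :: IO (Either SomeException String)
  print r2
```

So seq can tell f and \x.f x apart, which is exactly what eta forbids.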
I'm not sure why you bring up lazy evaluation (I presume you
mean lazy evaluation as in call-by-need). Call-by-need versus
call-by-name is unobservable, with or without seq.
I'm a fan of eta: it makes reasoning easier, and it means
the compiler can do more transformations.
-- Lennart
On Feb 12, 2007, at 10:22 , Yitzchak Gale wrote:
Lennart Augustsson wrote:
I'm not sure what you're asking. The (untyped) lambda calculus is
Turing complete.
How could seq improve that?
Obviously, it can't. But how can it hurt?
Classical lambda calculus does not model the
semantics of laziness, so seq is equivalent to
flip const there, just like foldl' is equivalent
to foldl. If we modify the lambda calculus to
model laziness - let's say, by restricting
beta-reduction - then the interesting
properties of seq are revealed.
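(In Haskell itself the two are observably different, which is one way to
see those properties. A small sketch, with a hypothetical flipConst
standing in for flip const:)

```haskell
import Control.Exception (SomeException, evaluate, try)

flipConst :: a -> b -> b
flipConst = flip const

main :: IO ()
main = do
  -- flipConst never looks at its first argument:
  print =<< (try (evaluate (flipConst undefined (42 :: Int)))
               :: IO (Either SomeException Int))
  -- seq forces its first argument, so this one throws:
  print =<< (try (evaluate (undefined `seq` (42 :: Int)))
               :: IO (Either SomeException Int))
```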
Why should we treat seq differently in Haskell
just because its interesting properties are not
modeled in the classical lambda calculus?
Haskell is not a classical language, it is
non-strict (among other differences).
Regards,
Yitz
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe