apfelmus,
The question is the following: how big is the gap between strict languages
with lazy constructs and Haskell? Does default laziness have an
irrefutable advantage over default strictness?
> Laziness is needed to achieve true compositionality. This point is
> elaborated in "Why functional programming matters" by John Hughes.
Yes, I like the paper. But its examples can simply be decomposed into a
generator and the rest, and the generator can be made lazy even in a
strict language. For instance, the "Newton-Raphson Square Roots" example
can be expressed in a strict language (Scala again):
def repeat(f: Double => Double, a: Double): Stream[Double] =
  Stream.cons(a, repeat(f, f(a)))
or in a hypothetical language "L" with some syntactic sugar:

repeat f a = *lazy* cons a (repeat f (f a))
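To make the decomposition concrete, here is a minimal runnable sketch in
Scala 2 (the names next and within follow Hughes' paper; the starting
guess n/2 and the 1e-9 tolerance are illustrative choices of mine):

object NewtonRaphson {
  // Lazy generator: the infinite stream a, f(a), f(f(a)), ...
  def repeat(f: Double => Double, a: Double): Stream[Double] =
    Stream.cons(a, repeat(f, f(a)))

  // One Newton-Raphson step for the square root of n
  def next(n: Double)(x: Double): Double = (x + n / x) / 2

  // Strict consumer: stop when two successive approximations agree
  def within(eps: Double, s: Stream[Double]): Double = {
    val a = s.head
    val b = s.tail.head
    if (math.abs(a - b) <= eps) b else within(eps, s.tail)
  }

  def sqrt(n: Double): Double = within(1e-9, repeat(next(n), n / 2))
}

Note that only the generator repeat needs laziness; next and within are
ordinary strict code.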
To be clear, I have nothing against laziness at all; I just want to know
what I gain if I take laziness everywhere as the default semantics.
> I also think that the laziness in Haskell is already so implicit that
> 90% of the Haskell code written so far will simply break irreparably if
> you experimentally remove it.
Yes, I understand that existing Haskell code relies heavily on laziness,
but I'm asking about the problem in general: how much do I gain if I
switch from default strictness to default laziness in my hypothetical
language L? Or, from the other side, how much do I throw away in the
reverse case?
> By the way, lazy evaluation is strictly more powerful than eager
> evaluation (in a pure language, that is) with respect to asymptotic
> complexity:
>
> Richard Bird, Geraint Jones and Oege de Moor.
> "More Haste, Less Speed."
> http://web.comlab.ox.ac.uk/oucl/work/geraint.jones/morehaste.html
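If I understand the point, a familiar small illustration (my own example,
not the paper's construction, which proves a stronger separation) would
be: with a lazy quicksort over Streams, asking only for the minimum forces
just the partitions on the path to it, O(n) expected work, while a strict
sort always does the full O(n log n):

// Lazy quicksort over Scala 2 Streams. filter is lazy, and the
// right-hand recursive calls stay suspended behind #::: / Stream.cons,
// so only as much of the sort is performed as the consumer demands.
def qsort(s: Stream[Int]): Stream[Int] =
  if (s.isEmpty) Stream.empty
  else {
    val p = s.head
    qsort(s.tail.filter(_ < p)) #:::
      Stream.cons(p, qsort(s.tail.filter(_ >= p)))
  }

val smallest = qsort(Stream(5, 3, 8, 1, 9)).head  // a quickselect, not a full sort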
You gave me a lot of food for thought. Thank you for the link.
Best regards,
Nick.