Hi,

I happen to like laziness, because it means that when I'm not thinking
about performance, I don't have to think about evaluation order _at
all_. And since my computer is a 750 MHz Athlon running Hugs, I never
find any need to worry about performance :) If it ever becomes an issue
I can move to GHC or buy a faster computer without too much hassle.

1) What's the advantage of being able to define if2?
What about && and ||? Should they be "built in"? What about 'and', which
is just && a lot of times; should that be lazy? At what point do you
say no? Should I be able to define 'implies' correctly?
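
For concreteness, here's how those read as ordinary definitions in
Haskell today, no special forms needed, each as lazy in its second
argument as you'd hope (a sketch; myAnd, andAll and implies are my own
names, not Prelude ones):

```haskell
-- (&&) as an ordinary function: the second argument is only
-- evaluated when the first is True.
myAnd :: Bool -> Bool -> Bool
myAnd True  b = b
myAnd False _ = False

-- 'and' is just (&&) folded over a list; it short-circuits
-- at the first False.
andAll :: [Bool] -> Bool
andAll = foldr myAnd True

-- 'implies' defined correctly: a false antecedent never
-- touches the consequent.
implies :: Bool -> Bool -> Bool
implies False _ = True
implies True  b = b
```

Note that myAnd False undefined and implies False undefined both
return results without ever touching the undefined argument; in a
strict language each of these would have to be a built-in form.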

3) "Lazy lists as glue" can easily be replaced by force/delay lists plus
an extension to pattern matching, where matching against [a] forces the
argument and the syntax [h|t] is used as in Prolog, instead of h:t. (This
would also free : to be used for "with type" or "with partial type"
instead of ::.)
That seems like more "thought" when writing the program; maybe it's
worth it, maybe it's not, but it doesn't seem as "neat" as what we
already have.
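
To make that concrete, here's a rough sketch (in Haskell itself, with
the suspension spelled out as a function from ()) of what an explicit
force/delay list looks like; the names DList, delayNats and takeD are
mine:

```haskell
-- An explicitly delayed list: the tail is a suspension that must
-- be forced by applying it to ().
data DList a = DNil | DCons a (() -> DList a)

-- The naturals from n, with the delay written out by hand.
delayNats :: Int -> DList Int
delayNats n = DCons n (\() -> delayNats (n + 1))

-- Taking a prefix forces each tail explicitly.
takeD :: Int -> DList a -> [a]
takeD 0 _            = []
takeD _ DNil         = []
takeD n (DCons x xs) = x : takeD (n - 1) (xs ())
```

Compare take 3 [0..] in ordinary Haskell: with force/delay lists every
suspension and every force is the programmer's job.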

4) Other examples of the utility of laziness can turn out to be impractical
chimeras. For example, the famous repmin replaces two traversals of a tree
with the dubious "advantage" of traversing it "only once", while building
up a cluster of expensive thunks instead. Since the thunks effectively
encode the structure of the tree, evaluating them effectively constitutes
the second traversal. So nothing is gained except the difficulty of
understanding the all-too-clever code (very bad software engineering
practice, IMHO), more heap consumption, and more time consumption.
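
For readers who haven't met it: repmin (Bird's classic circular
program) replaces every leaf of a tree with the tree's minimum in a
single traversal, by lazily tying the result tree to the
not-yet-computed minimum. A sketch, with my own Tree type and a
flatten helper for inspection:

```haskell
data Tree = Leaf Int | Node Tree Tree

-- One traversal: 'go' returns the subtree minimum and the rebuilt
-- tree, whose leaves all refer to the final minimum 'm', a value
-- that only exists once the traversal's thunks are forced.
repmin :: Tree -> Tree
repmin t = t'
  where
    (m, t') = go t
    go (Leaf n)   = (n, Leaf m)
    go (Node l r) =
      let (ml, l') = go l
          (mr, r') = go r
      in (min ml mr, Node l' r')

-- Collect the leaves left to right.
flatten :: Tree -> [Int]
flatten (Leaf n)   = [n]
flatten (Node l r) = flatten l ++ flatten r
```

Forcing any leaf of the result demands m, which demands the whole
(m, t') computation: exactly the hidden second traversal complained
about above.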
Laziness doesn't have to be exploited in complex ways: minimum = head
. sort is a nice example. isSubstr x y = any (isPrefixOf x) (tails y)
is another (note it has to be tails, not inits: a substring is a prefix
of some suffix). Often, just by stating a definition, laziness gives
you the performance for free. Of course, if you wanted to think harder
(and I never do), you could write better-performing and strict-safe
versions of these, but again it's more effort.
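
Both of those fit in a line of Haskell. A runnable version (smallest
is my name, to avoid clashing with the Prelude's minimum; the
substring test uses Data.List's tails and isPrefixOf):

```haskell
import Data.List (isPrefixOf, sort, tails)

-- A lazy sort only does enough work to deliver the head, so this
-- can be much cheaper than sorting the whole list.
smallest :: Ord a => [a] -> a
smallest = head . sort

-- 'any' stops at the first suffix of y that starts with x, so the
-- rest of 'tails y' is never built.
isSubstr :: Eq a => [a] -> [a] -> Bool
isSubstr x y = any (x `isPrefixOf`) (tails y)
```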

The other thing you lose when moving to strictness is the ability to
inline functions arbitrarily. Consider:

if2 c t f = if c then t else f

Consider the expression:
if2 True 1 undefined

Now let's inline it and expand it: in Haskell we get 1, which
matches the evaluation. In strict Haskell the inlining is now invalid,
and that's quite a useful optimisation to make. While it seems that
compilers can get round this, my concern is for the poor programmer -
this nice property of viewing functions as just "replace this with
that" has disappeared.
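
Spelled out, the hand-inlining goes like this; under lazy evaluation
every comment line is a meaning-preserving rewrite (example is my own
name for the expression):

```haskell
if2 :: Bool -> a -> a -> a
if2 c t f = if c then t else f

--   if2 True 1 undefined
-- = if True then 1 else undefined   -- unfold if2
-- = 1                               -- if on a True condition
example :: Int
example = if2 True 1 undefined
```

Under strict evaluation the original expression diverges (undefined is
evaluated before the call), so the first rewrite step is already
unsound.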

I suspect that in years to come, lazy languages will also have the
upper hand when it comes to theorem proving and formal reasoning, but
I guess that's a matter for future consideration.

While laziness may not be all good, it's certainly not all bad :)

Thanks

Neil
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
