Moore Paul <[EMAIL PROTECTED]> writes:
> [..]
> fact1 1 = 1
> fact1 n = n * fact1 (n-1)
>
> factsupp 1 a = a
> factsupp n a = factsupp (n-1) (a*n)
>
> fact2 n = factsupp n 1
>
> where fact1 has to be recursive, but fact2 can be expected to run in
> constant space. For example, Scheme guarantees this optimisation.
>
> Now, I tried this in Hugs98 and found inconclusive results. Both fact1 10000
> and fact2 10000 failed with a control stack overflow.
>
This is because Haskell is "lazy". Programs that "eager" programmers
would call "iterative" are often evaluated by Haskell systems as
"recursive" ones: the accumulator argument accumulates a chain of
unevaluated thunks, and forcing that chain at the end is what
overflows the control stack.
Sometimes a clever compiler improves this, but only in the simplest
cases. Maybe this is why strictness annotations were brought into
Haskell.
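As a small illustration (my own sketch, not code from the posting
above): the standard way to make the accumulating version run in
constant stack space is to force the accumulator with `seq` before
each recursive call, so no chain of multiplication thunks builds up.

```haskell
-- Accumulator version of factorial with a strict accumulator.
-- `seq` forces a' to weak head normal form before the recursive
-- call, so the tail call proceeds without piling up thunks.
factStrict :: Integer -> Integer
factStrict n = go n 1
  where
    go k a | k <= 1    = a
           | otherwise = let a' = a * k
                         in  a' `seq` go (k - 1) a'
```

With this version, `factStrict 10000` should evaluate without a
control-stack overflow (the result is, of course, a very large
Integer).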
To my mind, this whole subject is the weakest point of principle in
Haskell.
In practice, I write a program in the simplest way and do not worry
about "recursion". On average, the resulting programs are not so bad
in performance.
You can look into manual.txt contained in the archive
ftp.botik.ru:/pub/local/Mechveliani/docon/2/docon-2.zip,
http://www....
It discusses this unneeded laziness feature (section 'i.lg1') and
also provides several average-performance tests from mathematical
practice (section 'pe').
------------------
Sergey Mechveliani
[EMAIL PROTECTED]