Hi,
I'm new to Haskell, so I don't even know whether this question is meaningful
in the context of Haskell, but I'm curious to know if a Haskell
implementation is expected/required to optimise tail-recursive expressions.

The standard example would be

    fact1 1 = 1
    fact1 n = n * fact1 (n-1)

    factsupp 1 a = a
    factsupp n a = factsupp (n-1) (a*n)

    fact2 n = factsupp n 1

where fact1 is not tail-recursive (it builds up a chain of pending
multiplications), but fact2 is, and so might be expected to run in constant
stack space. Scheme, for example, guarantees this optimisation.
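One complication I've since read about (this is my understanding of lazy
evaluation in general, not anything specific to Hugs): even a tail call like
factsupp (n-1) (a*n) can pile up unevaluated thunks in the accumulator,
which then overflow the stack when the final result is demanded. A sketch
that forces the accumulator at each step with `seq` (factStrict is my own
name for it; base case at 1, as in the examples above):

```haskell
-- Sketch: a tail-recursive factorial whose accumulator is forced
-- at every step, so no chain of deferred multiplications builds up.
factStrict :: Integer -> Integer
factStrict n = go n 1
  where
    go 1 a = a
    go k a = let a' = a * k in a' `seq` go (k - 1) a'

main :: IO ()
main = print (factStrict 10)  -- 3628800
```

With the accumulator forced like this, stack usage stays constant; any
remaining growth is just the bignum result itself.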

Now, I tried this in Hugs 98 and got inconclusive results. Both fact1 10000
and fact2 10000 failed with a control stack overflow. However, when I was
experimenting earlier today (I didn't save the results :-() I got a
variation on fact2 which went well beyond 10000, although it eventually
failed when a garbage collection couldn't reclaim enough space. That
suggests Hugs may *still* not be fully optimising to an iterative form -
unless the numbers had got so huge that the bignums involved were filling
my RAM, but that seems unlikely...
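As a rough back-of-the-envelope check on that bignum theory (my own sketch,
not anything from Hugs): the number of decimal digits in n! is
1 + floor(log10 n!), and the logarithm can be summed without ever building
the factorial itself.

```haskell
-- Sketch: estimate the size of n! without computing it.
-- digits = 1 + floor (log10 (n!)) = 1 + floor (sum of log10 k for k <= n).
digitsInFactorial :: Int -> Int
digitsInFactorial n =
  1 + floor (sum [logBase 10 (fromIntegral k) | k <- [2 .. n]] :: Double)

main :: IO ()
main = print (digitsInFactorial 10000)  -- 35660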

Two questions, really: first, does the Haskell standard offer any guarantee
here (the way Scheme's does), and second, what does Hugs actually do? (I'd
ask the latter on the Hugs list, but that seems to have been dead since
about May...)

Thanks for any help,
Paul.
