Andrew Lentvorski wrote:
Summary: Anywhere you would iterate you can also recurse.

I don't believe that is true, but I'd have to think about it.

I've been watching the Abelson and Sussman SICP lectures available here:

http://swiss.csail.mit.edu/classes/6.001/abelson-sussman-lectures/

and this is one of the first things they point out and demonstrate.

The problem is in cases where side effects are the end result.

I'm not sure where or how side effects fit into the theory of computability.

Also, iteration is *by definition* O(1) in space. That is not true for generalized recursion; it is true only for tail recursion.

Right. Fortunately, most of the commonly used kinds of iteration can be implemented in a tail-recursive way, and it seems that anything can be made tail recursive using a CPS transform. But then you start using space on the heap instead of the stack. It seems to work out the same (in those situations you use O(n) space no matter what), except that with a functional/recursive approach the programmer has fewer opportunities to mess up the state. And that is the whole point.

Although, you might as well just do ordinary recursion on the stack even if it is not tail recursive, since you are going to use the space regardless; at least on the stack you can free it by decrementing the stack pointer instead of involving the garbage collector, which will tie up a lot of CPU cycles in this situation.

And really, tail recursion is iteration as far as the control flow of the machine code goes, except the compiler worries about the state so you don't have to. Again, that would seem to be a good thing.
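To make the space tradeoff above concrete, here is a minimal sketch in Python (purely illustrative; CPython does not actually eliminate tail calls, so a Scheme or Haskell compiler is assumed for the O(1)-space claim). It shows the three shapes: a loop, a tail-recursive version with an accumulator, and a CPS transform where every call is a tail call but the pending work lives in heap-allocated closures.

```python
def fact_iter(n):
    # Iterative version: O(1) space, state lives in mutable variables.
    result = 1
    while n > 1:
        result *= n
        n -= 1
    return result

def fact_tail(n, acc=1):
    # Tail-recursive version: the accumulator parameter replaces the
    # mutable variable. With tail-call elimination this runs in O(1)
    # space, with the same control flow as the loop above.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_cps(n, k=lambda v: v):
    # CPS transform: every call is now a tail call, but the pending
    # multiplications accumulate in closures on the heap -- the O(n)
    # space moves from the stack to the heap, it doesn't disappear.
    if n <= 1:
        return k(1)
    return fact_cps(n - 1, lambda v: k(n * v))
```

All three compute the same result; the difference is only in where the intermediate state is kept and who manages it.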

But it usually results in less code and fewer opportunities for bugs among other nice properties.

I don't believe that.  The main problem is that iteration is overloaded.

I have transformed a few iterative processes into recursive processes just for fun, and watched them do the same in the videos, and it usually works out that way. The simple example of summing a list that I pasted in my original mail was a small illustration of the general idea of why this is so.
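The original mail's example isn't reproduced here, but a typical version of that iterative-to-recursive transformation looks like this (sketched in Python rather than Scheme):

```python
def sum_iterative(xs):
    # Imperative style: an explicit accumulator mutated in a loop.
    total = 0
    for x in xs:
        total += x
    return total

def sum_recursive(xs):
    # Recursive style: a base case plus "first element + sum of the rest".
    # No mutable state for the programmer to get wrong.
    if not xs:
        return 0
    return xs[0] + sum_recursive(xs[1:])
```

The two compute the same thing; the recursive version just expresses the loop's state implicitly in the call structure.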

One of the main uses for iteration in an imperative language is "do action X to every element of Y". This always winds up with fencepost and side effect errors.

How can you end up with fencepost errors if you say "do action X to every element of Y"? That sort of notation would seem to rule them out. It is much better than writing a for loop over n elements, incrementing i each time through to index into the array, and then messing up your termination condition by using the wrong comparison.

Recursion doesn't solve this. What removes that type of bug is that functional languages generally have "apply", "map", or "collect". Imperative languages are finally growing "foreach", which can be used equivalently.

Maybe you mistyped or I misunderstood above. What is the difference between "foreach x in y" and "do action x to every element of y"?
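A small Python sketch (not from the thread) of the contrast being argued over: the index-based loop has a termination condition you can get wrong, while "map" and "foreach" have no index at all.

```python
xs = [1, 2, 3, 4]

# Index-based loop: the bounds are a place to go wrong. Writing
# range(len(xs) - 1) here -- or `i <= len(xs)` in a C-style loop --
# is the classic fencepost bug.
doubled_loop = []
for i in range(len(xs)):
    doubled_loop.append(xs[i] * 2)

# "Do action X to every element of Y": no index, no fencepost to miss.
doubled_map = list(map(lambda x: x * 2, xs))

# The "foreach" equivalent that imperative languages are growing.
doubled_foreach = []
for x in xs:
    doubled_foreach.append(x * 2)
```

All three produce the same list; only the first gives the programmer an index to mismanage.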

Even so, in a functional language without iteration, when you have to maintain multiple pieces of state simultaneously, those algorithms break down and you wind up carrying a *lot* of extra state into the recursion to do the same work as you would with an iteration.

Perhaps. Haskell has something called the State monad. I'm not sure how it works yet, but it might be one way to help with that situation. I understand this is all tradeoffs, and before anyone accuses me of being dazzled by the shiny newness of functional programming, let me say that I'm just trying to broaden my horizons.
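A sketch (in Python, hypothetical example, not from the thread) of the state-carrying cost being described: computing a running total and a count in one pass. The iterative version mutates two variables; the pure recursive version has to thread both through every call as extra parameters, which is exactly the bookkeeping Haskell's State monad is meant to hide.

```python
def stats_iterative(xs):
    # Two pieces of state, mutated in place inside the loop.
    total, count = 0, 0
    for x in xs:
        total += x
        count += 1
    return total, count

def stats_recursive(xs, total=0, count=0):
    # The same two pieces of state threaded explicitly through the
    # recursion as accumulator parameters -- the "carrying a lot of
    # extra state" cost. With more state, the parameter list grows.
    if not xs:
        return total, count
    return stats_recursive(xs[1:], total + xs[0], count + 1)
```

Both return the same (total, count) pair; the recursive version is tail recursive, so a compiler with tail-call elimination could run it in constant space.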

--
Tracy R Reed                  Read my blog at http://ultraviolet.org
Key fingerprint = D4A8 4860 535C ABF8 BA97  25A6 F4F2 1829 9615 02AD
Non-GPG signed mail gets read only if I can find it among the spam.

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
