On 10/27/06, Bob Rogers <[EMAIL PROTECTED]> wrote:
>    From: "Ben Tilly" <[EMAIL PROTECTED]>
>    Date: Thu, 26 Oct 2006 19:36:36 -0700
>
>    On 10/26/06, Bob Rogers <[EMAIL PROTECTED]> wrote:
>    >    From: "Ben Tilly" <[EMAIL PROTECTED]>
>    >    Date: Thu, 26 Oct 2006 17:09:35 -0700
>    >
>    >    [...]
>    >
>    >    While I agree that continuations are insanely powerful, I likewise
>    >    prefer generators and iterators to continuations.  Why?  Well suppose
>    >    that you write some complex code using continuations.  Suppose that
>    >    something goes wrong.  Now you want to debug it.  So you'd want some
>    >    useful debugging information.  Something like, say, a stack backtrace.
>    >
>    >    You're outta luck.
>    >
>    > That is not true -- or, at least, not necessarily true.  In a full CPS
>    > language like Scheme, tail-merging (or the equivalent CP transformation)
>    > is obligatory, so you may in fact be in luck.  (I've never used Scheme,
>    > so I don't know how a real Schemer goes about debugging.)  However, a
>    > Scheme continuation looks just like a closure at the source level.  It
>    > is possible to write nearly identical code in Common Lisp that uses
>    > closures, and since CL does not tail-merge by default, you get the same
>    > functionality plus useful backtraces in the debugger.
>
>    Ummm...you've mixed three unrelated things together.
>
>    1. Continuations.
>    2. Closures.
>    3. Tail call elimination.
>
> "Unrelated"??  I don't think so.

These are all distinct features, and none implies the others.  A
language may implement any one or two of them without implementing the
rest.  In fact I can think of languages and compiler settings that
realize 6 of the 8 possible combinations of these features.  (Perl 5
actually implements one and a half: tail-call elimination is not done
automatically, but you can write goto &foo.)
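To make the independence concrete, here is a small sketch (in Python, purely for illustration) of a language that has closures but no automatic tail-call elimination: the closure works fine, while a syntactic tail call still burns one stack frame per iteration.

```python
import sys

def make_counter():
    # A closure: `count` lives on in the enclosing lexical scope.
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

c = make_counter()
c(); c(); c()

def countdown(n):
    # A tail call in source form, but CPython keeps every frame:
    # no tail-call elimination, so deep recursion overflows.
    if n == 0:
        return "done"
    return countdown(n - 1)

sys.setrecursionlimit(100)
try:
    countdown(1000)
    print("tail calls were eliminated")
except RecursionError:
    print("closures: yes; tail-call elimination: no")
```

So the two features really are independent knobs: CPython turns one on and the other off.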

>    Taking them in reverse order, here is what they are:
>
>    3. Tail call elimination is the process of eliminating call frames
>    that have nothing left to do except return a later call.  So in other
>    words if foo ends with return bar(), then one can wipe out foo's call
>    frame and replace it with bar.
>
> Mostly true, but you also have to consider dynamic environment.  For
> example, if (in Perl 5) you have
>
>         sub foo {
>             local $arg = shift;
>
>             return bar($arg+1);
>         }
>
> the dynamic binding of $arg means that foo has some cleanup to do after
> bar returns, so the call to bar is not a tail call.  If foo used "my"
> instead of "local" (which probably makes more sense in any case), then
> the call to bar would indeed be a tail call.

Sorry, but how does this disagree with what I said?  The fact that it
can be non-obvious whether a call is a tail call doesn't change what a
tail call is.

Incidentally, the existence of reliable destruction mechanics in Perl
5 means that even if you'd used "my" in this example, the call is not
necessarily a tail call.  See ReleaseAction on CPAN for a module that
abuses this fact.
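Here is a sketch of that destruction effect, in Python rather than Perl (the Guard class is a made-up stand-in for ReleaseAction): because the lexical's cleanup must run after the inner call returns, the caller's frame cannot simply be discarded.

```python
events = []

class Guard:
    """Hypothetical stand-in for ReleaseAction: runs a cleanup
    action when the enclosing scope exits."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        events.append("cleanup " + self.name)

def bar():
    events.append("bar runs")
    return "result"

def foo():
    with Guard("foo's lexical"):
        # `return bar()` looks like a tail call, but the Guard's
        # cleanup must run *after* bar returns, so foo's frame
        # cannot be thrown away before the call.
        return bar()

foo()
print(events)  # ['bar runs', "cleanup foo's lexical"]
```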

>    Tail call elimination can make some recursive algorithms as efficient
>    as looping.  At the expense of losing potentially useful debugging
>    information.
>
> In reducing the stack requirement to that of a loop, you also reduce the
> amount of debugging information to that of a loop.  But what "useful"
> information about the previous iterations of a loop would you want to
> keep?  Are you saying that loops are therefore intrinsically harder to
> debug than (non-optimized) recursions?

If the recursive algorithm is just a rewritten loop, then there is no
difference.  The potential issue comes with a more complex recursive
algorithm where you're bouncing between multiple functions and it
isn't obvious to a human that the calls can be optimized.  Then you
might indeed get into trouble and want to ask, "How did I get here?",
only to be confused that one line calls foo, the next is somehow in
bar, and the backtrace gives no hint of the path between them.

Making this a compiler setting resolves the issue though.
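A sketch of the "How did I get here?" problem, again in Python (which never eliminates tail calls): with mutual recursion, every hop between the two functions is recorded on the stack.  A compiler that merged those tail calls would leave at most one frame, and the alternating chain below would vanish from any backtrace.

```python
import inspect

def is_even(n):
    # Each mutual tail call adds a frame in Python, so the full
    # is_even/is_odd alternation survives for a backtrace to show.
    return (True, len(inspect.stack())) if n == 0 else is_odd(n - 1)

def is_odd(n):
    return (False, len(inspect.stack())) if n == 0 else is_even(n - 1)

base = len(inspect.stack())
answer, depth = is_even(6)
print(answer)        # True: 6 is even
print(depth - base)  # 7: one frame per hop between the two functions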

>    2. Closures are anonymous functions that carry their own environment
>    (ie variables that were in lexical scope when the function was
>    created).  I love closures.  I'm very glad that Perl 5 has them.
>
> Agreed!  (Except that they need not be anonymous; that's just an
> unfortunate quirk of Perl 5.)

Oops, they need not be anonymous in Perl 5 either.  The unfortunate
quirk of Perl 5 is that you cannot lexically scope the name of a
function in Perl 5, therefore many things that one might want to do
with named lexical closures can't be.  However one can work around
this by just "naming" them through sticking them in a lexical variable
and calling that.
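The workaround looks like this, sketched in Python terms (where a lexical slot plays the role of Perl's my $fact):

```python
def lexical_factorial(n):
    # Perl:  my $fact;
    #        $fact = sub { $_[0] <= 1 ? 1 : $_[0] * $fact->($_[0] - 1) };
    # The anonymous function is "named" only through the lexical
    # variable `fact`, so no global namespace is touched.
    fact = lambda k: 1 if k <= 1 else k * fact(k - 1)
    return fact(n)

print(lexical_factorial(5))  # 120
```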

See http://www.perlmonks.org/?node_id=138442 for a code example
demonstrating why the conflict between global naming and lexical scope
can get you into trouble.  See any of the various modules and
discussions about "inside-out objects" for a practical application of
non-anonymous closures in Perl 5.

>    1. Continuations are a representation of the execution state of a
>    program at a given time.
>
> The definition I'm accustomed to is much broader.  A continuation is a
> funcallable object that represents the *rest* of the computation.
> Most, if not all, of our disagreement is due to this difference.

Part of the reason I made a point of trying to define these features
clearly was to surface exactly this kind of difference on a basic
definition.  That said, I think the two definitions are closer than
you think.  If the rest of the computation depends on, for instance,
global variables, then you haven't really packaged up the rest of the
computation unless you capture those variables as well.  Therefore you
need to package up everything.

Of course in Scheme, "everything" is less than in some other languages
I could name. ;-)

>    FWIW, Wikipedia [1] agrees with you, but I think this definition is
> too limited, in that it defines "the rest of the computation" as the
> resumption of some old context.  In short, it leaves out closures as an
> implementation mechanism.

Hmmm...

Perl 5 has closures but I'm having trouble figuring out how, for
instance, one could create an exit continuation out of Perl 5's
closures.
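The closest one can get with closures alone is a one-shot, upward-only escape, usually built on exceptions.  Here is a sketch (call_with_exit and _Escape are names invented for illustration); note what it cannot do: this "continuation" dies once call_with_exit returns, whereas a real exit continuation from call/cc could be stored and invoked later.

```python
class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def call_with_exit(body):
    # One-shot, upward-only "exit continuation": `body` receives a
    # function that aborts call_with_exit with a value.
    def exit_k(value):
        raise _Escape(value)
    try:
        return body(exit_k)
    except _Escape as e:
        return e.value

def search(items, want, exit_k):
    for i, x in enumerate(items):
        if x == want:
            exit_k(i)   # jump straight out, skipping the fall-through
    return -1

print(call_with_exit(lambda k: search("abcde", "c", k)))  # 2
print(call_with_exit(lambda k: search("abcde", "z", k)))  # -1
```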

>    . . .
>
>    If that didn't make any sense, let me make it simpler.  A language
>    like Common Lisp can be implemented with all of the function calls
>    represented on a stack.  And therefore when you ask for a stack
>    backtrace, there is something to give you.  However continuations
>    introduce too much complexity to be implemented with a stack, and
>    therefore it is impossible to show what is going on with something as
>    simple as a stack backtrace.
>
> Parrot has both continuations and backtraces.  Each context (aka
> activation record) has a "return continuation" to the calling context,
> so Parrot follows the chain of continuations back to the beginning [2].
> Of course, you can use continuations to jump between contexts that are
> unrelated, as when using coroutines, and in such cases the backtrace
> won't show these jumps.  But if you *are* using coroutines, then you
> should have known the job was dangerous when you took it.

That is interesting.  I had wondered how they were planning to mix the
two, and this approach makes sense to me.

>    Ironically, I believe that a Parrot continuation (which is only
> glancingly related to a closure) is more or less what you mean when you
> say "continuation."

Yes, a Parrot continuation is what I mean by "continuation".  As is
callcc in Ruby.  As is my understanding of call/cc aka
call-with-continuation in Scheme.  As is the idea of a continuation in
Stackless Python.  And so on.

Here is a simple litmus test.  Suppose we have a function that
presents us with a callback interface, for instance File::Find's find
function.  If we have continuations, then by passing the right
callback we can turn it into an iterator and then, for instance,
iterate over it in a while loop.  If we do not have continuations,
then we cannot convert a callback interface into an iterator.
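Without continuations the usual escape hatch is a thread, which is really just a very expensive continuation substitute.  A sketch of the inversion in Python (find here is a toy stand-in for File::Find's find):

```python
import threading, queue

def find(callback):
    # Toy stand-in for File::Find::find: it owns the traversal loop
    # and hands each result to a callback.
    for name in ("a.txt", "b.txt", "c.txt"):
        callback(name)

def callbacks_to_iterator(callback_api):
    # Run the callback-driven traversal in its own thread; the
    # thread's suspended state plays the role a captured
    # continuation would play in Scheme or Ruby.
    q = queue.Queue(maxsize=1)
    DONE = object()
    def run():
        callback_api(q.put)
        q.put(DONE)
    threading.Thread(target=run, daemon=True).start()
    while True:
        item = q.get()
        if item is DONE:
            return
        yield item

for name in callbacks_to_iterator(find):
    print(name)   # the callback interface, consumed as a plain loop
```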

>    While the Scheme continuations may look similar to closures to you,
>    they're NOT similar to closures, and the implications are VERY
>    different from closures.
>
> That is true only for the limited definition of continuation you propose
> above, and not even generally true of Scheme.  Allow me to quote from a
> post I wrote last summer [3] that was part of a discussion of exception
> handling in Parrot.  I was responding to a question by Chip Salzenberg
> about why I thought a Scheme compiler for Parrot would not need Parrot
> continuations:

It is not even generally true of Scheme?  Not according to my
understanding of the Scheme standard.

http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_idx_566

I think that what is described there is what I described.  See also
http://community.schemewiki.org/?call-with-current-continuation for a
gentler description.  If there is a way to do what is discussed there
with closures, then I'm failing to see it.  And
http://en.wikipedia.org/wiki/Lisp_programming_language#Control_structures
backs my impression up.

Oh, you can use closures to IMPLEMENT a language that does that.  But
support for closures doesn't give you a general-purpose facility to do
what continuations provide.

>         . . . I believe that Scheme has a different idea of continuation
>         than Parrot.  In brief, if you rewrite call-and-return into pure
>         continuation-passing calling [4], it looks like a call into the
>         sub followed by another call to the continuation.  All of these
>         can in fact be implemented as tail calls, since there are no
>         returns, and Scheme stipulates that you *must* tail-merge where
>         possible.  This means you don't need a stack -- there is never
>         more than one activation record at any given time -- so all of
>         the state required for a continuation can be captured by a
>         closure.  In the Lisp community generally, "closure" and
>         "continuation" are therefore often used interchangeably, though
>         this does sweep some distinctions under the rug.
>
>            So all a CPS language really needs is support for closures.
>         Such an implementation is truly and utterly stackless, which
>         means that dynamic-wind needs to keep its own stack explicitly,
>         and similarly for dynamic binding (which, IIUC, is generally
>         implemented in terms of dynamic-wind).

All that quoting yourself proves is that your misunderstanding is
consistent over time. :-P

Speaking more seriously for a second, I think it is important to not
confuse what you need to implement a language with its internal
semantics.  You can implement Scheme in, say, C.  And your implemented
Scheme will have continuations, closures, and tail-call optimization.
However that doesn't mean that C has continuations, closures, or
tail-call optimization.

> This is also the prevailing terminology in Common Lisp, despite the
> difference in semantics.  I often see CL programmers label functional
> arguments "cont" or "continuation" when those functions are only used to
> return value(s) in a straightforward way.  For instance, here is some CL
> I wrote [5], vintage 2000:

I think enlightenment is glimmering here for me.

Lisp programmers are very accustomed to building languages on top of
Lisp and then programming in those languages.  If you're building an
infrastructure on top of Lisp that you will then program in, you start
to think in that infrastructure.  And if that infrastructure is
effectively a little language with continuations, then it makes sense
to talk about them as continuations.  They aren't really continuations
in CL, but they are in your little language.

I'll have to think about this more.

>         (defmethod map-parts ((def has-parts-mixin) continuation)
>           (dolist (part (def-parts def))
>             (funcall continuation part)))
>
>         (defmethod map-extent-parts ((def has-parts-mixin) x1 y1 x2 y2 
> continuation)
>           (dolist (part (def-parts def))
>             (when (bbox-overlaps-extent? part x1 y1 x2 y2)
>               (funcall continuation part))))
>
> The point of these abstractions, BTW, is to allow for the possibility of
> a more elaborate class that replaces the simple list of parts with a
> geometric search data structure, such as a quadtree.  If that happens,
> the execution trace can be quite elaborate, though it is still all
> stack-based.

I think I see what that is doing.  It would help to know what (def
has-parts-mixin) returns.  I think my thought about what you're doing
applies.
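In Python terms, the same idiom is just a functional argument used in continuation-passing style (map_parts below mirrors the Common Lisp map-parts; the part values are invented):

```python
def map_parts(parts, continuation):
    # "continuation" here is the Lisp idiom: a plain callback used
    # to hand each result onward, not a captured execution state.
    for part in parts:
        continuation(part)

collected = []
map_parts(["wheel", "axle", "frame"], collected.append)
print(collected)  # ['wheel', 'axle', 'frame']
```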

>    Note that this common usage is legitimate under the "rest of the
> computation" definition, but not the "goto execution state" definition.
> And please remember that this is not just my idiom; I think I started
> seeing this sort of thing in the Lisp Machine source code about 20 years
> ago.

If my glimmering enlightenment is accurate, then I can understand why it would.

>    >    Of course, you may also blow out the stack if you try to write
>    > Scheme-style loops without tail-merging.  But Common Lisp allows you to
>    > enable tail-merging selectively, as a compiler optimization.
>
>    I believe that "tail-merging" is usually called "tail-call
>    elimination."  If you mean something else, then please explain.
>
> I've also heard "tail-call optimization" used.  AFAIK, all terms are
> current in the Lisp compiler community, but since Lispers use them more
> often than the rest of the world, we generally gravitate towards the
> Huffman-coded label.  ;-}  I apologize if this is a source of confusion.

OK, then we're on the same page here.

>    >    I also believe (from having written complex CL applications) that the
>    > use of functional arguments (closures, continuations, whatever) reduces
>    > complexity, sometimes considerably.  This is a direct consequence of
>    > Graham's "succinctness = power" thesis . . .
>
>    I agree on the power and utility of closures.  I do not agree on
>    continuations.  Please note that your CL experience is of no utility
>    on the question of whether continuations are good because CL does not
>    have continuations.
>
> Only per your definition.  (And your assumption that my understanding is
> limited to my CL experience.)

What assumption?  I'm saying that the experience you are quoting is
not applicable.  I didn't say that you don't have further experience
that might be.

>    Furthermore this is not an accidental oversight, the CL committee
>    looked at continuations, decided that continuations are Bad Things,
>    and explicitly chose to leave them out.
>
> X3J13 decided that continuations were Bad Things because X3J13 was
> mostly made up of representatives from Lisp vendors who had invested
> heavily in their stack-based runtime systems, and many figured they
> would go broke trying to produce a heap-based runtime.  If you have
> other evidence, I'd love to hear it.

My understanding is based on personal communication from Mark Jason
Dominus, who is usually a good authority on this kind of thing.  As I
remember it, he said that many people dislike continuations both
because of the strict requirements they put on the implementation (so
far this matches what you're saying) and because continuations are
horrible abstractions that lead to very confusing code.  They are
powerful, even more so than goto, but they lead to problems just as
goto does.

Proponents would say (with some justice) that the problem is that
you're not supposed to use continuations directly.  Instead you wrap
them in useful abstractions, like coroutines, and then use those
abstractions.
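Python's generators are one such disciplined wrapper: a restricted, resumable coroutine that gives much of the day-to-day benefit without exposing raw jumps.  A sketch:

```python
def accumulator():
    # A generator-based coroutine: suspension and resumption are
    # packaged behind next()/send(), never exposed as raw jumps.
    total = 0
    while True:
        value = yield total
        total += value

acc = accumulator()
next(acc)           # prime: run to the first yield
print(acc.send(5))  # 5
print(acc.send(7))  # 12
```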

>    >    Abstraction.  Speed.  Debuggability.  Pick any three.  ;-}
>
>    If you were arguing for closures, I'd agree with you.  But you're
>    arguing for continuations.
>
>    Cheers,
>    Ben
>
> I'm arguing for both, because my operational definition of continuation
> includes the possibility of using closures in their implementation,
> without necessarily needing hairy state-munging magic.  But there are
> times (I further argue) when you need the magic, too.

I agree that there are things which can only be done with the magic.
I've already given a very useful one.  (Turn a callback interface into
an iterator.)  However I continue to disagree with you on their effect
on debuggability.

Cheers,
Ben
 
_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm
