On 10/29/06, Bob Rogers <[EMAIL PROTECTED]> wrote:
>    From: "Ben Tilly" <[EMAIL PROTECTED]>
>    Date: Sat, 28 Oct 2006 09:25:32 -0700
>
[...]
>    >    Ummm...you've mixed three unrelated things together.
>    >
>    >    1. Continuations.
>    >    2. Closures.
>    >    3. Tail call elimination.
>    >
>    > "Unrelated"??  I don't think so.
>
>    These are all distinct and none implies the others . . .
>
> True enough; they are "distinct" and "independent."  That does not rule
> out "related."

Point.  However it does suggest that they shouldn't be confused with each other.

>    But I think this may be a red herring.  I suspect your definition of
> "continuation" makes you unwilling to consider anything in Common Lisp a
> continuation, and hence what I said above does not make sense to you.
> That is understandable, especially since I did not explain that the
> above recipe was not general.  If you really need call/cc, you can't
> rewrite it in Common Lisp.

Exactly true.

>    Of course, you're probably better off debugging your problematic code
> under a Scheme implementation that has a trace facility [1].  (If I had
> bothered to search for this earlier, we might have stopped typing at
> each other sooner.  ;-)

Which would have resulted in less confusion/enlightenment/boredom on
the part of our audience. :-)

>    >    Taking them in reverse order, here is what they are:
>    >
>    >    3. Tail call elimination is the process of eliminating call frames
>    >    that have nothing left to do except return a later call.  So in other
>    >    words if foo ends with return bar(), then one can wipe out foo's call
>    >    frame and replace it with bar.
>    >
>    > Mostly true, but you also have to consider dynamic environment.  For
>    > example, if (in Perl 5) you have
>    >
>    >         sub foo {
>    >             local $arg = shift;
>    >
>    >             return bar($arg+1);
>    >         }
>    >
>    > the dynamic binding of $arg means that foo has some cleanup to do after
>    > bar returns, so the call to bar is not a tail call.  If foo used "my"
>    > instead of "local" (which probably makes more sense in any case), then
>    > the call to bar would indeed be a tail call.
>
>    Sorry, but how does this disagree with what I said?  The fact that it
>    is possible for it to be non-obvious that a call is not a tail call
>    doesn't change what a tail call is.
>
> I considered what you said to be incomplete.  You stated only part of
> the requirement for a call to be in tail position, so I filled in the
> rest.  One could argue that this is nitpicking on my part, but one could
> also argue that "if foo ends with return bar()" is simplified to the
> point of error.

Ah.  I had said it correctly, then restated it for clarity.  The
restatement was correct for some languages (e.g. Scheme), but not for
Perl.  You noticed; I didn't.

We're on the same page now.

>    Incidentally the existence of reliable destruction mechanics in Perl 5
>    means that even if you'd used my in this example, it is not
>    necessarily a tail call.  See ReleaseAction on CPAN for a module that
>    abuses this fact.
>
> I don't follow.  That reliable destruction happens when all pointers to
> $arg are destroyed; tail-calling bar would also destroy them when bar
> returned, though in the opposite order (bar's reference last instead of
> foo's reference last).  In other words, it doesn't matter whether foo or
> bar destroys the value, as long as it's the last reference.
>
>    It's true that ReleaseAction can call its actions prematurely in the
> event of a tailcall, depending on where the object reference is kept,
> but that's a separate issue.

That's not just a separate issue, that's the whole issue.  Perl has
precise semantics for when DESTROY is called.  Tail call optimization
is supposed to be an optimization that is done only when it will
change nothing about the behaviour of the program.  But in Perl,
trying to do it changes those semantics, and therefore may change what
the program does.  So in Perl, having a lexical variable inside a
function may be enough to make that function not a candidate for tail
call elimination.

Now you could say that the fault lies with my module for abusing
internal details of Perl's implementation.  I'm a bad user.  But one
of Perl's core modules is SelectSaver, which relies on the same thing.
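The timing issue is easy to demonstrate outside Perl too.  Here's a
small Ruby sketch of the same point (the method names are invented for
illustration): any cleanup that must run after the inner call returns,
an `ensure` clause here playing the role of `local`'s restoration or a
lexical's guaranteed DESTROY, means the inner call is not in tail
position.

```ruby
$events = []

def bar
  $events << :bar_ran
  :result
end

# The `ensure` clause plays the role of Perl's `local` restoration (or
# a lexical's DESTROY): foo still has work to do after bar returns, so
# the call to bar is NOT a tail call, and eliminating foo's frame would
# change the order of observable effects.
def foo
  return bar
ensure
  $events << :foo_cleanup
end

foo
$events  # => [:bar_ran, :foo_cleanup] -- cleanup runs after bar returns
```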

>    . . .
>
>    >    2. Closures are anonymous functions that carry their own environment
>    >    (ie variables that were in lexical scope when the function was
>    >    created).  I love closures.  I'm very glad that Perl 5 has them.
>    >
>    > Agreed!  (Except that they need not be anonymous; that's just an
>    > unfortunate quirk of Perl 5.)
>
>    Oops, they need not be anonymous in Perl 5 either.  The unfortunate
>    quirk of Perl 5 is that you cannot lexically scope the name of a
>    function in Perl 5, therefore many things that one might want to do
>    with named lexical closures can't be.  However one can work around
>    this by just "naming" them through sticking them in a lexical variable
>    and calling that.
>
> I've done this on occasion, but it's pretty ugly, and the benefit is
> slim; I usually don't bother.  If you do
>
>     my $foo = sub {
>         ...
>     };
>
> you will find that $foo is not even in its own scope.  And those names
> don't show up in backtraces, so the sub is still anonymous as far as the
> rest of Perl is concerned.

Write that as:

    my $foo;
    $foo = sub {
        ...
    };

As for giving names to anonymous functions in backtraces,
http://search.cpan.org/~xmath/Sub-Name-0.02/lib/Sub/Name.pm may assist
you. :-)
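For anyone following along, the point of the two-step version is that
the variable must already be declared before the sub body refers to it,
or the closure can't capture it.  The same naming-through-a-variable
pattern, sketched in Ruby (where, as it happens, the parser brings the
local into scope at the assignment, so recursion works either way):

```ruby
# "Name" an anonymous function by sticking it in a lexical variable and
# calling through that -- here a recursive factorial.  This mirrors
# Perl's  my $foo; $foo = sub {...};  two-step pattern.
fact = nil
fact = lambda { |n| n <= 1 ? 1 : n * fact.call(n - 1) }

fact.call(5)  # => 120
```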

>    . . .
>
>    >    1. Continuations are a representation of the execution state of a
>    >    program at a given time.
>    >
>    > The definition I'm accustomed to is much broader.  A continuation is a
>    > funcallable object that represents the *rest* of the computation.
>    > Most, if not all, of our disagreement is due to this difference.
>
>    Part of the reason why I made a point of trying to clearly define
>    these features is in case there was exactly this kind of difference on
>    a basic definition.  That said, I think the two definitions are closer
>    than you think.  If the rest of the computation depends on, for
>    instance, global variables, then you haven't really packaged up the
>    rest of the computation unless you capture those variables as well.
>    Therefore you need to package up everything.
>
> Are you aware of an implementation of continuations that really does it
> this way?  If calling the continuation really rolls back EVERYTHING,
> then the called function can only change the state of the computation by
> returning values.  That would be OK in a pure FP language, of course,
> where there are no side effects, but if there are no side effects, then
> it is not necessary to capture the values of globals.  So what is the
> point?

My memory said that continuations in Ruby worked this way, but I just
double-checked and they don't restore global variables.  That seems to
have been a misunderstanding on my part.

[...]
>    >    Ironically, I believe that a Parrot continuation (which is only
>    > glancingly related to a closure) is more or less what you mean when you
>    > say "continuation."
>
>    Yes, a Parrot continuation is what I mean by "continuation".  As is
>    callcc in Ruby.  As is my understanding of call/cc aka
>    call-with-current-continuation in Scheme.  As is the idea of a continuation in
>    Stackless Python.  And so on.
>
> But this doesn't shed much light, because I claim these are different
> beasts (at least for Scheme vs. Parrot; I do not know how the other two are
> implemented).  Your mental model of "continuation" is pretty clear to me
> by now, but you keep saying things that suggest there is only one way to
> implement them, and I know that is not true.  Are you familiar with the
> runtime internals of any of these languages?

I'm aware that there are multiple ways to implement continuations.
But to answer your question, I'm not deeply familiar with the runtime
internals of any of these languages.

>    Here is a simple litmus test.  Suppose we have a function that
>    presents us with a callback interface.  For instance File::Find's find
>    function.  If we have continuations then by passing the right callback
>    we can turn this into an iterator and then, for instance, iterate over
>    it in a while loop.  If we do not have continuations, then we cannot
>    convert a callback interface into an iterator.
>
> This sounds like a coroutine to me.  Coroutines can also be implemented
> by using stack-copying without providing continuation semantics.

You heard right then.  And yes, coroutines can be implemented without
implementing full continuations.  You may be amused to know that you
can even (if you're sufficiently evil) create coroutines in C by
abusing switch statements and the preprocessor.  And this technique is
used in at least one fairly widely used program.  (PuTTY.)

http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html

However it's still a fairly good sanity check.  If you see something
called a continuation that cannot be used to implement coroutines, it
certainly isn't a continuation (in the sense in which I think of one).
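For what it's worth, the litmus test is passable with a coroutine
facility alone.  A sketch in Ruby, using Fiber (a coroutine, weaker
than a full continuation) to turn a callback interface into an
iterator -- `walk` here is a made-up stand-in for something like
File::Find's find:

```ruby
# A made-up callback-style interface, standing in for File::Find::find.
def walk(&callback)
  [1, 2, 3].each { |x| callback.call(x) }
end

# Wrap it in a Fiber: each callback invocation suspends the walk and
# hands one value out, turning push (callbacks) into pull (an iterator).
fiber = Fiber.new do
  walk { |x| Fiber.yield(x) }
  nil  # signal exhaustion to the consuming loop
end

results = []
while (v = fiber.resume)
  results << v
end
results  # => [1, 2, 3]
```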

[...]
>    See also
>    http://community.schemewiki.org/?call-with-current-continuation for a
>    gentler description.
>
> Thank you for pointing this out; this is a good description.

I selected it for that reason. :-)

>    If there is a way to do what is discussed there with closures, then
>    I'm failing to see it.
>
> The key is CP transformation.  Start with the "Examples" section of the
> Wikipedia "Continuation-passing style" article [2].  A traditional sort
> of Scheme compiler converts the "Direct style" into "Continuation
> passing style" during the analysis phase.  All of those extra lambdas
> are continuations, and all of those continuations are tail-called.
> Indeed, except for primitives, the transformed code has ONLY tail calls.
> Because all tail calls are optimized away, it follows that the leaving
> activation record is not needed after the call; any state that it may
> contain that is still relevant is captured by the continuation.  Since
> these continuations do not have to return to an existing AR (as you have
> already observed, Scheme needs no stack), they can be implemented as
> closures.  And since all Scheme programs can be CP-transformed, closures
> are adequate for the implementation even of what we may think of as "exit
> continuations," though the distinction is unimportant to Scheme.

I think we're talking past each other again.

You've just described how to implement continuations using closures.
However the effect is that within Scheme, you write normal code and
exit continuations are available.  To use the technique within CL or
Perl you'd have to first rewrite all of your existing code in an unusual
style, and then you'd have something that works like an exit
continuation, but only for your rewritten code.  (Doing it in Perl
would waste tons of stack space, but that is neither here nor there.)
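To make the flavor of that rewriting concrete, here is a tiny
continuation-passing sketch in Ruby (the function names are mine):
computing 1 + 2 * 3 with every intermediate result handed to a closure
k standing for the rest of the computation, so that every call is a
tail call.

```ruby
# Each primitive takes an explicit continuation k instead of returning.
def mul_cps(a, b, k)
  k.call(a * b)
end

def add_cps(a, b, k)
  k.call(a + b)
end

# Direct style: 1 + (2 * 3).  In CPS, the "rest of the computation"
# after the multiply is reified as the closure passed to mul_cps.
result = nil
mul_cps(2, 3, ->(v) { add_cps(1, v, ->(r) { result = r }) })
result  # => 7
```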

>    Oh, you can use closures to IMPLEMENT a language that does that.  But
>    support for closures doesn't give you a general-purpose facility to do
>    what continuations provide.
>
> I'm not sure I understand the difference between "implementing a
> language that does that" and "a language that does that."

Can I take someone else's library that gives me a callback interface
and turn it into an iterator? In "a language that does that" I can.

Have I implemented something so that someone who programs using what
I've implemented has the capacity to turn what is meant to be a
callback-based interface into an iterator?  If I have, then I've
"implemented a language that does that".

Your comments above about CP transformations are a good case.  By
writing a bunch of continuation passing code, one can achieve the
effect of continuations within CL.  That doesn't mean that CL has real
continuations though.

>    >         . . . I believe that Scheme has a different idea of continuation
>    >         than Parrot.  In brief, if you rewrite call-and-return into pure
>    >         continuation-passing calling, it looks like a call into the
>    >         sub followed by another call to the continuation.  All of these
>    >         can in fact be implemented as tail calls, since there are no
>    >         returns, and Scheme stipulates that you *must* tail-merge where
>    >         possible.  This means you don't need a stack -- there is never
>    >         more than one activation record at any given time -- so all of
>    >         the state required for a continuation can be captured by a
>    >         closure.  In the Lisp community generally, "closure" and
>    >         "continuation" are therefore often used interchangeably, though
>    >         this does sweep some distinctions under the rug.
>    >
>    >            So all a CPS language really needs is support for closures.
>    >         Such an implementation is truly and utterly stackless, which
>    >         means that dynamic-wind needs to keep its own stack explicitly,
>    >         and similarly for dynamic binding (which, IIUC, is generally
>    >         implemented in terms of dynamic-wind).
>
>    All that quoting yourself proves is that your misunderstanding is
>    consistent over time. :-P
>
> Assuming, of course, that it's my misunderstanding, and not yours.  ;-}
> Besides, I wasn't attempting to prove anything by it, just trying to
> save typing.

I was just teasing.  Hence the :-P.  And hence my next sentence
starting off with, "Speaking more seriously..."

>    Speaking more seriously for a second, I think it is important to not
>    confuse what you need to implement a language with its internal
>    semantics.  You can implement Scheme in, say, C.  And your implemented
>    Scheme will have continuations, closures, and tail-call optimization.
>    However that doesn't mean that C has continuations, closures, or
>    tail-call optimization.
>
> True, but if I'm confused, that's not it.

Well, we have a disagreement.  Each of us thinks we're right, and in
some sense we both are.  You're absolutely right in what you told Chip:
Parrot doesn't need to supply continuations for Scheme to be
implemented.  However, from within the implemented Scheme, you will
have continuations in my sense of the word.

>    > This is also the prevailing terminology in Common Lisp, despite the
>    > difference in semantics.  I often see CL programmers label functional
>    > arguments "cont" or "continuation" when those functions are only used to
>    > return value(s) in a straightforward way.  For instance, here is some CL
>    > I wrote, vintage 2000:
>
>    I think enlightenment is glimmering here for me.
>
>    Lisp programmers are very accustomed to building languages on top of
>    Lisp, and then programming in those languages.  So if you're building
>    an infrastructure on top of Lisp that you will then program in, you
>    start to think in that infrastructure.  So if your infrastructure is
>    effectively a little language that has continuations, then it makes
>    sense to talk about them as continuations.  They aren't really
>    continuations in CL.  But they are in your little language.
>
> Your insight is good, but not really relevant here.  The sublanguage
> still doesn't have continuations in the sense you mean; I am presenting
> a different sense of "continuation" here.

Hrm.  I think I see your point then.  You have a function that you're
thinking about as "continue here".  And because you're thinking about
it that way, you call it a continuation.

I grant that the usage makes some sense, but it doesn't match my
understanding, and I think it doesn't match most people's
understanding of what it means for a language to support
continuations.

>    >         (defmethod map-parts ((def has-parts-mixin) continuation)
>    >           (dolist (part (def-parts def))
>    >             (funcall continuation part)))
>    >
>    >         (defmethod map-extent-parts ((def has-parts-mixin) x1 y1 x2 y2 
> continuation)
>    >           (dolist (part (def-parts def))
>    >             (when (bbox-overlaps-extent? part x1 y1 x2 y2)
>    >               (funcall continuation part))))
>    >
>    > The point of these abstractions, BTW, is to allow for the possibility of
>    > a more elaborate class that replaces the simple list of parts with a
>    > geometric search data structure, such as a quadtree.  If that happens,
>    > the execution trace can be quite elaborate, though it is still all
>    > stack-based.
>
>    I think I see what that is doing.  It would help to know what (def
>    has-parts-mixin) returns.  I think my thought about what you're doing
>    applies.
>
> Sorry; I didn't explain that part because I didn't think it was
> pertinent.  "(def has-parts-mixin)" in the formal parameter list of a
> defmethod means that the variable "def" is bound to an object of class
> "has-parts-mixin".  Since the second formal "continuation" is a symbol,
> it is not specialized, and may be of any type (recall that CL uses
> multiple dispatch).

I've never programmed CL.  I have read a couple of books about it.
I've never tried to understand CLOS.

I was badly misreading the code.  I thought that def was a function or
macro of some sort, and therefore (def has-parts-mixin) was being
expanded into something that was interpolated into the function
signature.  Now that I have that straight, I think that the first
method is similar (modulo the complexities of CLOS) to:

    sub map_parts {
        my ($def, $continuation) = @_;
        for my $part ($def->def_parts()) {
            $continuation->($part);
        }
    }

And the other to:

    sub map_extent_parts {
        my ($def, $x1, $y1, $x2, $y2, $continuation) = @_;
        for my $part ($def->def_parts()) {
            if ($part->bbox_overlaps_extent($x1, $y1, $x2, $y2)) {
                $continuation->($part);
            }
        }
    }

In that case, the name $continuation seems misleading to me.
Certainly it would give me the wrong expectations.  (In fact it
did...)

>    >    Note that this common usage is legitimate under the "rest of the
>    > computation" definition, but not the "goto execution state" definition.
>
> I take that back.  In both cases, the functional passed as the
> "continuation" arg still returns to the caller, so it's not even a
> continuation by the "rest of the computation" definition.

Right.  I was trying to figure out how your example was going to give
continuation the ability to do something "continuation-like".  And was
failing.  (This assisted my misunderstanding of what the code did,
because I was actively trying to figure out how to make it mean
something other than what it meant.)

>    So I'm guilty of muddying the waters by using "continuation" in two
> different senses.  My apologies; I will try to be more precise.  I guess
> I'm a product of my Lisp upbringing, where both senses are used, often
> imprecisely.  :-}

And I'm the product of my Perl upbringing, where continuations are
something that other languages have. :-)

Still, even if I accept the fact that there is a population somewhere
that uses continuation in a very different sense, I think I'm being
reasonable to say that when most people speak about a language having
continuations, they do not mean something that acts similarly to
closures.  (Even if they are just closures under the hood.)

>    >    Furthermore this is not an accidental oversight, the CL committee
>    >    looked at continuations, decided that continuations are Bad Things,
>    >    and explicitly chose to leave them out.
>    >
>    > X3J13 decided that continuations were Bad Things because X3J13 was
>    > mostly made up of representatives from Lisp vendors who had invested
>    > heavily in their stack-based runtime systems, and many figured they
>    > would go broke trying to produce a heap-based runtime.  If you have
>    > other evidence, I'd love to hear it.
>
>    My understanding is based on personal communication from Mark Jason
>    Dominus, who is usually a good authority on this kind of thing.
>
> I was speaking of X3J13 deliberations, and your assertion that the X3J13
> committee "decided that continuations are Bad Things."

So was I.  I may have misunderstood what Mark was saying, but he was
definitely talking about the decision to not include continuations in
CL.

>    According to my memory, he said that many people dislike continuations
>    both because of the strict requirements they put on the implementation
>    (so far matches what you're saying), and because continuations are
>    horrible abstractions that lead to very confusing code.  They are
>    powerful, even more so than goto, but they lead to problems just like
>    goto.
>
> I would say "can lead to problems" or "may lead to confusing code."  So
> can the other powerful language features I've encountered, but I often
> find uses for them -- in code that (IMO) seems no more confusing than
> necessary.

We all claim that features are only misused by others. :-P

Still I think we're saying the same thing.  Continuations are very
powerful, and can lead to confusion.  (And not just confusion over
what the definition of a continuation is!)  We're just in disagreement
about whether that trade-off is a net win.

>    Proponents would say (with some justice) that the problem is that
>    you're not supposed to use continuations directly.  Instead you wrap
>    them in useful abstractions, like co-routines.  And then use those
>    abstractions instead.
>
> Agreed.

Agreement is good. :-)

>    >    >    Abstraction.  Speed.  Debuggability.  Pick any three.  ;-}
>    >
>    >    If you were arguing for closures, I'd agree with you.  But you're
>    >    arguing for continuations.
>    >
>    >    Cheers,
>    >    Ben
>    >
>    > I'm arguing for both, because my operational definition of continuation
>    > includes the possibility of using closures in their implementation,
>    > without necessarily needing hairy state-munging magic.  But there are
>    > times (I further argue) when you need the magic, too.
>
>    I agree that there are things which can only be done with the magic.
>    I've already given a very useful one.  (Turn a callback interface into
>    an iterator.)  However I continue to disagree with you on their effect
>    on debuggability.
>
>    Cheers,
>    Ben
>
> Complicated algorithms tend to be hard to debug no matter how they are
> implemented.  Even so, consider that an algorithm implemented with
> (exit) continuations may be easier to debug than one without if using
> continuations makes the implementation appreciably cleaner and/or more
> compact.

Hey, you don't have to convince me that exiting early is a good thing.
Several years ago I read
http://www.cis.temple.edu/~ingargio/cis71/software/roberts/documents/loopexit.txt
and was easily convinced.

>    Indeed, error handling itself makes a good case in point.  The
> equivalent of try/recover in many languages makes it easier to separate
> error handling from "normal" code, thereby simplifying the normal
> control flow, but requires (exit) continuation semantics to make it
> happen.  (I was going to say that I rarely use exit continuations in my
> day-to-day programming, but then I remembered error handling.)

I would agree.  Many others would not.  You can get a sampling of
opinions at discussions such as
http://c2.com/cgi/wiki?AvoidExceptionsWheneverPossible.
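Ruby's catch/throw sits in an interesting middle ground here: it is
essentially an exit continuation dressed up as control flow, without
the full exception machinery.  A sketch (the function and data are
invented for illustration):

```ruby
# catch/throw behaves like an exit continuation: throw unwinds straight
# to the matching catch, skipping every frame in between -- here,
# escaping two levels of iteration at once.
def find_first_negative(rows)
  catch(:found) do
    rows.each do |row|
      row.each { |x| throw(:found, x) if x < 0 }
    end
    nil  # the catch block's value when nothing is thrown
  end
end

find_first_negative([[1, 2], [3, -4], [5]])  # => -4
find_first_negative([[1], [2]])              # => nil
```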

Cheers,
Ben
 
_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm
