From: "Ben Tilly" <[EMAIL PROTECTED]>
Date: Sat, 28 Oct 2006 09:25:32 -0700
   On 10/27/06, Bob Rogers <[EMAIL PROTECTED]> wrote:
> From: "Ben Tilly" <[EMAIL PROTECTED]>
> Date: Thu, 26 Oct 2006 19:36:36 -0700
>
> On 10/26/06, Bob Rogers <[EMAIL PROTECTED]> wrote:
> > From: "Ben Tilly" <[EMAIL PROTECTED]>
> > Date: Thu, 26 Oct 2006 17:09:35 -0700
> >
> > [...]
> >
> > While I agree that continuations are insanely powerful, I likewise
> > prefer generators and iterators to continuations. Why? Well suppose
> > that you write some complex code using continuations. Suppose that
> > something goes wrong. Now you want to debug it. So you'd want some
> > useful debugging information. Something like, say, a stack backtrace.
> >
> > You're outta luck.
> >
> > That is not true -- or, at least, not necessarily true. In a full CPS
> > language like Scheme, tail-merging (or the equivalent CP transformation)
> > is obligatory, so you may in fact be in luck. (I've never used Scheme,
> > so I don't know how a real Schemer goes about debugging.) However, a
> > Scheme continuation looks just like a closure at the source level. It
> > is possible to write nearly identical code in Common Lisp that uses
> > closures, and since CL does not tail-merge by default, you get the same
> > functionality plus useful backtraces in the debugger.
>
> Ummm...you've mixed three unrelated things together.
>
> 1. Continuations.
> 2. Closures.
> 3. Tail call elimination.
>
> "Unrelated"?? I don't think so.
   These are all distinct and none implies the others . . .
True enough; they are "distinct" and "independent." That does not rule
out "related."
But I think this may be a red herring. I suspect your definition of
"continuation" makes you unwilling to consider anything in Common Lisp a
continuation, and hence what I said above does not make sense to you.
That is understandable, especially since I did not explain that the
above recipe was not general. If you really need call/cc, you can't
rewrite it in Common Lisp.
Of course, you're probably better off debugging your problematic code
under a Scheme implementation that has a trace facility [1]. (If I had
bothered to search for this earlier, we might have stopped typing at
each other sooner. ;-)
> Taking them in reverse order, here is what they are:
>
> 3. Tail call elimination is the process of eliminating call frames
> that have nothing left to do except return the result of a later call.
> So in other words, if foo ends with "return bar()", then one can wipe
> out foo's call frame and replace it with bar's.
>
> Mostly true, but you also have to consider dynamic environment. For
> example, if (in Perl 5) you have
>
> sub foo {
>     local $arg = shift;
>
>     return bar($arg+1);
> }
>
> the dynamic binding of $arg means that foo has some cleanup to do after
> bar returns, so the call to bar is not a tail call. If foo used "my"
> instead of "local" (which probably makes more sense in any case), then
> the call to bar would indeed be a tail call.
   Sorry, but how does this disagree with what I said? The fact that it
   is possible for it to be non-obvious that a call is not a tail call
   doesn't change what a tail call is.
I considered what you said to be incomplete. You stated only part of
the requirement for a call to be in tail position, so I filled in the
rest. One could argue that this is nitpicking on my part, but one could
also argue that "if foo ends with return bar()" is too simple to the
point of error.
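To pin down what counts as being in tail position, here is a small sketch in Python (chosen only for compactness; CPython, like Perl 5, does not actually eliminate tail calls, so the elimination step is shown as a hand rewrite, and all the function names are made up):

```python
def bar(n):
    return n * 2

def foo_tail(n):
    # Tail call: foo_tail has nothing left to do after bar returns
    # except pass the value along, so its frame could be discarded.
    return bar(n + 1)

def foo_not_tail(n):
    # Not a tail call: the "+ 1" still has to happen after bar
    # returns, so foo_not_tail's frame must survive the call.
    return bar(n + 1) + 1

def countdown(n):
    # What a compiler that eliminates self-tail-calls effectively
    # produces; here the loop rewrite is done by hand.
    while n > 0:
        n -= 1
    return "done"
```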
   Incidentally the existence of reliable destruction mechanics in Perl 5
   means that even if you'd used "my" in this example, it is not
   necessarily a tail call. See ReleaseAction on CPAN for a module that
   abuses this fact.
I don't follow. That reliable destruction happens when all pointers to
$arg are destroyed; tail-calling bar would also destroy them when bar
returned, though in the opposite order (bar's reference last instead of
foo's reference last). In other words, it doesn't matter whether foo or
bar destroys the value, as long as it's the last reference.
It's true that ReleaseAction can call its actions prematurely in the
event of a tailcall, depending on where the object reference is kept,
but that's a separate issue.
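Here is a sketch of that timing argument in Python, whose reference counting frees a value at the last reference much as Perl 5's does (the Tracked class and function names are hypothetical, purely to log destruction):

```python
class Tracked:
    """Hypothetical class that logs when instances are destroyed."""
    log = []

    def __init__(self, name):
        self.name = name

    def __del__(self):
        Tracked.log.append(self.name)

def bar(x):
    return x.name          # bar's reference to x dies when bar returns

def foo():
    arg = Tracked("arg")   # foo's reference dies with foo's frame
    return bar(arg)

result = foo()
# Whichever frame held the last reference, the destructor has run by
# the time foo has returned; only the destruction *order* could differ
# under tail-call elimination.
```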
. . .
> 2. Closures are anonymous functions that carry their own environment
> (ie variables that were in lexical scope when the function was
> created). I love closures. I'm very glad that Perl 5 has them.
>
> Agreed! (Except that they need not be anonymous; that's just an
> unfortunate quirk of Perl 5.)
   Oops, they need not be anonymous in Perl 5 either. The unfortunate
   quirk of Perl 5 is that you cannot lexically scope the name of a
   function, so many things that one might want to do with named lexical
   closures can't be done. However, one can work around this by "naming"
   them: stick the sub in a lexical variable and call through that.
I've done this on occasion, but it's pretty ugly, and the benefit is
slim; I usually don't bother. If you do
    my $foo = sub {
        ...
    };
you will find that $foo is not even in its own scope. And those names
don't show up in backtraces, so the sub is still anonymous as far as the
rest of Perl is concerned.
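For comparison, the same "name it in a variable" workaround sketched in Python (illustrative only): the recursion works there because the name is looked up at call time, while in Perl 5 the body of "my $foo = sub {...}" cannot see $foo, so self-reference needs a declare-then-assign split.

```python
# An anonymous function "named" by storing it in a lexical variable.
# This recursion works in Python only because "fact" is looked up at
# call time, not when the lambda is defined.
fact = lambda n: 1 if n < 2 else n * fact(n - 1)

# The Perl 5 equivalent needs the declaration split from the
# assignment so the sub body can see the variable:
#     my $fact;
#     $fact = sub { my $n = shift; $n < 2 ? 1 : $n * $fact->($n - 1) };
```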
. . .
> 1. Continuations are a representation of the execution state of a
> program at a given time.
>
> The definition I'm accustomed to is much broader. A continuation is a
> funcallable object that represents the *rest* of the computation.
> Most, if not all, of our disagreement is due to this difference.
   Part of the reason why I made a point of trying to clearly define
   these features is in case there was exactly this kind of difference on
   a basic definition. That said, I think the two definitions are closer
   than you think. If the rest of the computation depends on, for
   instance, global variables, then you haven't really packaged up the
   rest of the computation unless you capture those variables as well.
   Therefore you need to package up everything.
Are you aware of an implementation of continuations that really does it
this way? If calling the continuation really rolls back EVERYTHING,
then the called function can only change the state of the computation by
returning values. That would be OK in a pure FP language, of course,
where there are no side effects, but if there are no side effects, then
it is not necessary to capture the values of globals. So what is the
point?
Of course in Scheme, "everything" is less than in some other languages
I could name. ;-)
> FWIW, Wikipedia agrees with you, but I think this definition is
> too limited, in that it defines "the rest of the computation" as the
> resumption of some old context. In short, it leaves out closures as an
> implementation mechanism.
   Hmmm...
   Perl 5 has closures but I'm having trouble figuring out how, for
   instance, one could create an exit continuation out of Perl 5's
   closures.
You can't, at least not without CP transformation. Search for "The key
is CP transformation" below.
. . .
> Ironically, I believe that a Parrot continuation (which is only
> glancingly related to a closure) is more or less what you mean when you
> say "continuation."
   Yes, a Parrot continuation is what I mean by "continuation". As is
   callcc in Ruby. As is my understanding of call/cc aka
   call-with-current-continuation in Scheme. As is the idea of a
   continuation in Stackless Python. And so on.
But this doesn't shed much light, because I claim these are different
beasts (at least for Scheme vs. Parrot; I do not know how the other two are
implemented). Your mental model of "continuation" is pretty clear to me
by now, but you keep saying things that suggest there is only one way to
implement them, and I know that is not true. Are you familiar with the
runtime internals of any of these languages?
   Here is a simple litmus test. Suppose we have a function that
   presents us with a callback interface, for instance File::Find's find
   function. If we have continuations, then by passing the right callback
   we can turn this into an iterator and then, for instance, iterate over
   it in a while loop. If we do not have continuations, then we cannot
   convert a callback interface into an iterator.
This sounds like a coroutine to me. Coroutines can also be implemented
by using stack-copying without providing continuation semantics.
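To make that concrete, here is one such inversion, sketched in Python with no continuations at all: the callback API runs in a worker thread and feeds a queue that the iterator drains. The walk function is a toy stand-in for a traversal like File::Find::find; all names here are made up.

```python
import queue
import threading

def walk(callback):
    # Toy stand-in for a callback-style traversal such as
    # File::Find::find: it just "visits" a few items.
    for item in ["a", "b", "c"]:
        callback(item)

def as_iterator(callback_api):
    # Invert a callback API into an iterator by running it in a
    # worker thread and handing results over a queue. No continuation
    # semantics required, at the cost of one thread per traversal.
    q = queue.Queue()
    done = object()            # sentinel marking end of traversal

    def producer():
        callback_api(q.put)
        q.put(done)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is done:
            return
        yield item

items = list(as_iterator(walk))
```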
> While the Scheme continuations may look similar to closures to you,
> they're NOT similar to closures, and the implications are VERY
> different from closures.
>
> That is true only for the limited definition of continuation you propose
> above, and not even generally true of Scheme . . .
   It is not even generally true of Scheme? Not according to my
   understanding of the Scheme standard.
   http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-9.html#%_idx_566
   I think that what is described there is what I described.
I think it is closer to what I described. From the call/cc rationale:
    The continuation represents an entire (default) future for the
    computation.
If you see something different, please cite it more precisely. As for
closures, the "CP transformation" part is imminent.
   See also
   http://community.schemewiki.org/?call-with-current-continuation for a
   gentler description.
Thank you for pointing this out; this is a good description.
   If there is a way to do what is discussed there with closures, then
   I'm failing to see it.
The key is CP transformation. Start with the "Examples" section of the
Wikipedia "Continuation-passing style" article [2]. A traditional sort
of Scheme compiler converts the "Direct style" into "Continuation
passing style" during the analysis phase. All of those extra lambdas
are continuations, and all of those continuations are tail-called.
Indeed, except for primitives, the transformed code has ONLY tail calls.
Because all tail calls are optimized away, it follows that the departing
activation record is not needed after the call; any state that it may
contain that is still relevant is captured by the continuation. Since
these continuations do not have to return to an existing AR (as you have
already observed, Scheme needs no stack), they can be implemented as
closures. And since all Scheme programs can be CP-transformed, closures
are adequate for the implementation even of what we may think of as
"exit continuations," though the distinction is unimportant to Scheme.
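A hand-done rendition of that transformation, in Python for brevity and following the shape of the Wikipedia article's example: every step receives a closure k that stands for the rest of the computation, and each of those closures is invoked only in tail position.

```python
# Direct style:
def pyth(x, y):
    return (x * x + y * y) ** 0.5

# After a (hand-done) CP transformation: each step takes an extra
# argument k, a closure representing the rest of the computation,
# and every call to k or to another transformed step is a tail call.
def mul_k(a, b, k):
    return k(a * b)

def add_k(a, b, k):
    return k(a + b)

def sqrt_k(a, k):
    return k(a ** 0.5)

def pyth_k(x, y, k):
    return mul_k(x, x,
           lambda x2: mul_k(y, y,
           lambda y2: add_k(x2, y2,
           lambda s: sqrt_k(s, k))))

result = pyth_k(3, 4, lambda v: v)   # identity as the final continuation
```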
   Oh, you can use closures to IMPLEMENT a language that does that. But
   support for closures doesn't give you a general-purpose facility to do
   what continuations provide.
I'm not sure I understand the difference between "implementing a
language that does that" and "a language that does that."
> . . . I believe that Scheme has a different idea of continuation
> than Parrot. In brief, if you rewrite call-and-return into pure
> continuation-passing calling, it looks like a call into the
> sub followed by another call to the continuation. All of these
> can in fact be implemented as tail calls, since there are no
> returns, and Scheme stipulates that you *must* tail-merge where
> possible. This means you don't need a stack -- there is never
> more than one activation record at any given time -- so all of
> the state required for a continuation can be captured by a
> closure. In the Lisp community generally, "closure" and
> "continuation" are therefore often used interchangeably, though
> this does sweep some distinctions under the rug.
>
> So all a CPS language really needs is support for closures.
> Such an implementation is truly and utterly stackless, which
> means that dynamic-wind needs to keep its own stack explicitly,
> and similarly for dynamic binding (which, IIUC, is generally
> implemented in terms of dynamic-wind).
   All that quoting yourself proves is that your misunderstanding is
   consistent over time. :-P
Assuming, of course, that it's my misunderstanding, and not yours. ;-}
Besides, I wasn't attempting to prove anything by it, just trying to
save typing.
   Speaking more seriously for a second, I think it is important not to
   confuse what you need to implement a language with its internal
   semantics. You can implement Scheme in, say, C. And your implemented
   Scheme will have continuations, closures, and tail-call optimization.
   However that doesn't mean that C has continuations, closures, or
   tail-call optimization.
True, but if I'm confused, that's not it.
> This is also the prevailing terminology in Common Lisp, despite the
> difference in semantics. I often see CL programmers label functional
> arguments "cont" or "continuation" when those functions are only used to
> return value(s) in a straightforward way. For instance, here is some CL
> I wrote, vintage 2000:
   I think enlightenment is glimmering here for me.
   Lisp programmers are very accustomed to building languages on top of
   Lisp, and then programming in those languages. So if you're building
   an infrastructure on top of Lisp that you will then program in, you
   start to think in that infrastructure. So if your infrastructure is
   effectively a little language that has continuations, then it makes
   sense to talk about them as continuations. They aren't really
   continuations in CL. But they are in your little language.
Your insight is good, but not really relevant here. The sublanguage
still doesn't have continuations in the sense you mean; I am presenting
a different sense of "continuation" here.
> (defmethod map-parts ((def has-parts-mixin) continuation)
>   (dolist (part (def-parts def))
>     (funcall continuation part)))
>
> (defmethod map-extent-parts ((def has-parts-mixin) x1 y1 x2 y2 continuation)
>   (dolist (part (def-parts def))
>     (when (bbox-overlaps-extent? part x1 y1 x2 y2)
>       (funcall continuation part))))
>
> The point of these abstractions, BTW, is to allow for the possibility of
> a more elaborate class that replaces the simple list of parts with a
> geometric search data structure, such as a quadtree. If that happens,
> the execution trace can be quite elaborate, though it is still all
> stack-based.
   I think I see what that is doing. It would help to know what (def
   has-parts-mixin) returns. I think my thought about what you're doing
   applies.
Sorry; I didn't explain that part because I didn't think it was
pertinent. "(def has-parts-mixin)" in the formal parameter list of a
defmethod means that the variable "def" is bound to an object of class
"has-parts-mixin". Since the second formal "continuation" is a symbol,
it is not specialized, and may be of any type (recall that CL uses
multiple dispatch).
> Note that this common usage is legitimate under the "rest of the
> computation" definition, but not the "goto execution state" definition.
I take that back. In both cases, the functional passed as the
"continuation" arg still returns to the caller, so it's not even a
continuation by the "rest of the computation" definition.
So I'm guilty of muddying the waters by using "continuation" in two
different senses. My apologies; I will try to be more precise. I guess
I'm a product of my Lisp upbringing, where both senses are used, often
imprecisely. :-}
> Furthermore, this is not an accidental oversight; the CL committee
> looked at continuations, decided that continuations are Bad Things,
> and explicitly chose to leave them out.
>
> X3J13 decided that continuations were Bad Things because X3J13 was
> mostly made up of representatives from Lisp vendors who had invested
> heavily in their stack-based runtime systems, and many figured they
> would go broke trying to produce a heap-based runtime. If you have
> other evidence, I'd love to hear it.
   My understanding is based on personal communication from Mark Jason
   Dominus, who is usually a good authority on this kind of thing.
I was speaking of X3J13 deliberations, and your assertion that the X3J13
committee "decided that continuations are Bad Things."
   According to my memory, he said that many people dislike continuations
   both because of the strict requirements they put on the implementation
   (so far this matches what you're saying), and because continuations are
   horrible abstractions that lead to very confusing code. They are
   powerful, even more so than goto, but they lead to problems just like
   goto.
I would say "can lead to problems" or "may lead to confusing code." So
can the other powerful language features I've encountered, but I often
find uses for them -- in code that (IMO) seems no more confusing than
necessary.
   Proponents would say (with some justice) that the problem is that
   you're not supposed to use continuations directly. Instead you wrap
   them in useful abstractions, like co-routines, and then use those
   abstractions instead.
Agreed.
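Generators are a handy example of such a wrapping: the caller sees only the ordinary iterator protocol, never the suspend/resume machinery underneath. A Python sketch (the tree layout is invented for illustration):

```python
def walk_tree(node):
    # A generator hides coroutine-style suspend/resume behind the
    # ordinary iterator protocol. Tree layout (invented for this
    # sketch): a leaf is a bare value, an interior node is a tuple
    # (left, value, right), with None for a missing child.
    if not isinstance(node, tuple):
        yield node
        return
    left, value, right = node
    if left is not None:
        yield from walk_tree(left)
    yield value
    if right is not None:
        yield from walk_tree(right)

tree = ((None, 1, None), 2, (None, 3, None))
values = list(walk_tree(tree))
```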
> > Abstraction. Speed. Debuggability. Pick any three. ;-}
>
> If you were arguing for closures, I'd agree with you. But you're
> arguing for continuations.
>
> Cheers,
> Ben
>
> I'm arguing for both, because my operational definition of continuation
> includes the possibility of using closures in their implementation,
> without necessarily needing hairy state-munging magic. But there are
> times (I further argue) when you need the magic, too.
   I agree that there are things which can only be done with the magic.
   I've already given a very useful one. (Turn a callback interface into
   an iterator.) However I continue to disagree with you on their effect
   on debuggability.

   Cheers,
   Ben
Complicated algorithms tend to be hard to debug no matter how they are
implemented. Even so, consider that an algorithm implemented with
(exit) continuations may be easier to debug than one without if using
continuations makes the implementation appreciably cleaner and/or more
compact.
Indeed, error handling itself makes a good case in point. The
equivalent of try/recover in many languages makes it easier to separate
error handling from "normal" code, thereby simplifying the normal
control flow, but requires (exit) continuation semantics to make it
happen. (I was going to say that I rarely use exit continuations in my
day-to-day programming, but then I remembered error handling.)
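That restricted slice of continuation semantics, a one-shot upward-only escape, is exactly what exceptions provide, and is how most languages without call/cc get try/recover. A sketch in Python (every name here is hypothetical):

```python
class _Escape(Exception):
    def __init__(self, value):
        self.value = value

def call_with_escape(f):
    # Call f with a one-shot, upward-only escape continuation:
    # invoking escape(v) abandons the rest of f and makes v the
    # result of call_with_escape itself. This is the restricted
    # slice of call/cc that try/recover-style error handling needs.
    class Tag(_Escape):
        pass

    def escape(value):
        raise Tag(value)

    try:
        return f(escape)
    except Tag as e:
        return e.value

def search(items, want, exit):
    for i, x in enumerate(items):
        if x == want:
            exit(i)            # non-local exit skips the rest of search
    return -1

found = call_with_escape(lambda exit: search("abc", "b", exit))
missing = call_with_escape(lambda exit: search("abc", "z", exit))
```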
-- Bob
[1] http://pre.plt-scheme.org/docs/html/mzlib/mzlib-Z-H-47.html#node_chap_47
[2] http://en.wikipedia.org/wiki/Continuation-passing_style
_______________________________________________
Boston-pm mailing list
[email protected]
http://mail.pm.org/mailman/listinfo/boston-pm