On Nov 15, 2004, at 3:27 AM, Leopold Toetsch wrote:

Jeff Clites <[EMAIL PROTECTED]> wrote:
On Nov 14, 2004, at 3:03 AM, Leopold Toetsch wrote:

Yes, but Jeff's example wasn't really reflecting the problem.

How come?

Your code didn't reuse C<a> after the call.

Oops.

It seems that, in terms of cache locality, the same problem is there
with more registers vs. spilling vs. lexicals.

Not quite. We can't extend just one register set; we'd have to do that for
all 4 kinds. You cannot easily have 64 PMC registers and just 10 INTVALs.
A lexical access is just an array lookup (at least if the compiler uses
the indexed addressing of lexicals).

Ah. What I don't like, though, is that in the lexical case you have things sitting in an array, from which you need to move things back and forth to registers to work on them. In the "more registers" case, they're sitting in a quite similar array, but you can work on them directly there. Adding 2 such INTVALs is one op, instead of 4 (2 loads, 1 add, 1 store). Since the problem exists for all register types (unless HLLs only use PMCs, which is possible), we either need 4 lexical arrays for maximum locality (so that they can be sized independently), or we need to be able to size the register frames independently for the 4 types. (I realize that currently lexicals must be PMCs, but it seems we have the same issue with the other register types.)


... That is, if you have 100
local variables whose lifetimes overlap (due to continuations), then
you need 100 distinct memory locations to (repeatedly) access.

Sure. If the program is complex you need the storage anyway.

But I think the real problem is that it's only due to CPS that the lifetimes overlap so much--that's what's biting us. (By expanding my example code, you could come up with simple code which uses 100 locals but would need only 1 register without continuations, and needs 100 "spots" with them.) It's a pretty unfortunate price to pay if the feature is not extensively used. Now we're just figuring out how to survive it.
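
Roughly, in Ruby (just a sketch I'm making up here--the names and the $runs guard are mine, and on Ruby 1.9+ callcc needs require 'continuation'):

    require 'continuation'  # not needed on Ruby 1.8, where callcc is built in

    $k = nil
    $runs = 0

    def f
      a = 41                 # without continuations, `a` lives only until f returns
      callcc { |k| $k = k }  # but $k captures the frame; a resume re-enters here...
      a + 1                  # ...so `a` must survive even after f has returned
    end

    puts f                   # => 42
    $runs += 1               # a global, so the count survives the jump below
    $k.call if $runs < 3     # f "returns" two more times, printing 42 each time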


4) Having an explicit syntax construct ("call-with-current-continuation")
that expresses verbatim what's going on, e.g. with a reserved word placed
as a label:


RESUMABLE: x = choose(arr1)

I don't think that really helps either: if you have such a call, then
all the frames higher up the stack can also "return multiple times", so
they have that behavior even w/o the label.

The RESUMABLE label is of course at the invocation that might resume, somewhere up the call chain. Again: the HLL compiler (or the programmer) knows the effect of the continuation; it's just that the PIR code is too dumb, i.e. it lacks this information.

Picture this call stack:

        main --> A --> B --> C --> D --> E

The call from D to E uses the "RESUMABLE" label, and E stores the resulting continuation in a global, and everything up to main returns. Then main invokes the continuation, and we find ourselves back in D. Now C, B, and A will return "again", without any compile-time hint.
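
In Ruby it would look something like this (my own sketch--the function and variable names are made up; on 1.9+ callcc needs require 'continuation'):

    require 'continuation'

    $resume = nil

    def e; callcc { |k| $resume = k }; end  # E captures and stores the continuation
    def d; e; puts "D returns"; end         # the D --> E call is the "RESUMABLE" one
    def c; d; puts "C returns"; end
    def b; c; puts "B returns"; end
    def a; b; puts "A returns"; end

    a                 # first pass: D, C, B, and A each return once
    if $resume        # now "main" invokes the continuation
      k = $resume
      $resume = nil   # a global, so this assignment survives the jump
      k.call          # back into E; D, C, B, and A return "again"
    end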

On the other hand, this Ruby code really bugs me (note: "$" variables
in Ruby are globals):

..., you get an infinite loop, printing increasing integers.)

Sure. With callcc you are calling the function again and again.

I know--the infinite loop was the "desired" behavior (I just mentioned it so that people wouldn't have to run it). What bugs me is that you can get that error, even though, looking at the code, it should be impossible. The author of outer() might have no clue that this could happen, so it's not really his bug, and the person using a continuation needs really detailed knowledge of everything in the call stack to know whether it will work. I guess that's "just how it is", but it seems to mean that continuations have limited usefulness in languages with side effects, except for very local usage (breaking out of a loop and such).
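
(By "very local usage" I mean something like this--an escape-only sketch, where the continuation never outlives the call that created it:)

    require 'continuation'  # Ruby 1.9+; callcc is built into 1.8

    # Early exit from nested loops: `escape` is only ever invoked
    # while the callcc block is still on the stack.
    result = callcc do |escape|
      (1..10).each do |i|
        (1..10).each do |j|
          escape.call([i, j]) if i * j == 42
        end
      end
      nil
    end
    p result  # => [6, 7]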


Jeff


