Hi!
Thanks for the notes on the optimizations that activations afford, and
for the feedback in general.
On Thu, 2012-01-26 at 13:14 -0800, Gavin Barraclough wrote:
> It's certainly true that activations are currently tied to the concept
> of function calls, insomuch as they currently always capture not
> only the local variables but also the arguments. But this isn't
> really an inherent property of activations so much as it is a current
> limitation of the implementation that needs to be fixed – it is
> sub-optimal in cases where variables are captured but arguments are
> not. Activations should support capturing only local variables and
> not the arguments.
If I read you right, this means we need a new class between
JSVariableObject and JSActivation, yes? That way the arguments
mechanism would be specific to function-call activations rather than
block activations.
The phrase "block activation" is pretty bad, though; "static scope" and
"activation" are words that have meaning outside of JSC. How about we
keep the JSActivation name, and rebase JSStaticScope to sit between
JSVariableObject and JSActivation in the hierarchy? Initially we could
repurpose push_new_scope to push an eagerly torn-off JSStaticScope
activation, then refactor to do lazy tear-off. That way we will know
that the implementation is correct. My patches would then rebase fairly
cleanly on top of that approach, the difference being that all scopes
within a function are addressable via registers.
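To make the motivating case concrete, here is a tiny example of my own
(hypothetical, not from your mail): the returned closure captures a
block-scoped local but none of the enclosing function's arguments, so
a block-level activation should not need to copy `a` and `b`:

    function f(a, b)
    {
        {
            let sum = a + b;
            return function() { return sum; };  // captures sum, not a or b
        }
    }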
> an activation is a scope object capable of [...] supporting additional
> variables being introduced dynamically (e.g. from a nested direct
> eval).
Gavin, are you sure that putting let and block const in all modes is a
good idea?
Saying "ES6 builds on strict mode" gives a fairly understandable scoping
story. Eval in strict mode can't introduce bindings. But bringing let
and const to "classic mode" is not in any of the drafts yet, and it
sounds like a bad idea for JSC to introduce yet another incompatible
const implementation.
I guess my specific question is: are you proposing that direct
non-strict eval would be able to introduce block-scoped locals? Or
does direct non-strict eval implicitly create a block scope?
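Concretely, just to pin down the question (a hypothetical):

    function g()
    {
        {
            eval("let y = 1;");
            // Is y now visible here, on the enclosing block scope?
        }
        // And is it gone here, once the block exits?
    }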
Are you proposing that direct non-strict eval follow the same conflict
rules for var declarations (no var duplicating a block-scoped binding
in the same or a containing scope within the function)?
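If so, then something like this would presumably have to throw at
eval time (another hypothetical):

    function h()
    {
        let x = 1;
        {
            // The var hoists to function scope; in static code it would
            // conflict with the block-scoped x above.  Would the direct
            // eval have to detect the same conflict dynamically?
            eval("var x = 2;");
        }
    }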
> An important point to recognize is that we don't need separate
> mechanisms in the engine to support let and var, since the
> requirements for var are a subset of those of let. Beyond the
> differences in parsing, a var variable is almost completely
> indistinguishable from a let variable in the outermost scope:
Except for the temporal dead zone, of course. Should we add a
"block-scoped" flag as a possible attribute in symbol tables? I would
think so. We will also need new opcodes to implement the read and
write barriers in the cases where we can't statically optimize them
away.
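Here is the sort of case I have in mind, where whether the barrier can
fire is a dynamic property, so it can't simply be compiled away:

    function tdz()
    {
        function get() { return x; }
        get();       // must throw: x is still in its dead zone
        let x = 1;
        return get;  // calling it now would return 1
    }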
> a direct eval within a nested block scope will need to be able to
> introduce new let variables onto the nearest block scope
Wow. That sounds... that sounds to me like "var is the new let",
actually ;-) I sent a mail to es-discuss. I'm sure you have the TC39
sentiment down because you were actually there, but I must admit that
I'm skeptical about this direction. Either way it's important to
settle this now, as it can affect many things.
> I believe when we tear off the activation, we tear off all of the
> function's local variables (and arguments) – whereas we should be able
> to restrict to copying only the range of the virtual register bank
> that needs capturing. And that extends to the argument registers, too
> – if a closure only captures local variables, we should not be copying
> the arguments.
This sounds quite tricky, given that you probably want to keep the
variables' indexes as they are. I suppose it's possible, but only for
contiguous ranges of registers.
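For instance (my own example), whether the captured variables even
form a contiguous range depends on allocation order -- unless the
bytecode generator sorts captured locals next to each other, `a` and
`c` here straddle the uncaptured `b`:

    function f()
    {
        var a = 1;
        var b = 2;   // never captured; can stay in a register
        var c = 3;
        b *= 2;
        return function() { return a + c; };  // captures a and c only
    }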
Is the lazy tear-off advantage so great that implementing display
closures with assignment conversion is not worth it? Basically, here
we privilege access to variables declared within a function, while
making access to free variables much more expensive. I'm really not
sure. JS is such a strange language ;-)
Specifically, in this function:
> function f(arg, str)
> {
>     let x = random();
>     let result = [];
>     let y = random();
>
>     result.push(function(){ return x + y; })
>
>     {
>         let i;
>
>         for (i in arg) {
>             let z = arg[i];
>             if (i & 1)
>                 result.push(function(){ return x + y + z; })
>         }
>
>         return result;
>     }
> }
None of the captured variables (x, y, z) is assigned to, except at the
time it is bound. Therefore capturing x, for example, can capture its
value instead of its storage location. Locally to the outer function,
all variables stay in registers. They would be packed away into a
vector when the closures are created -- but only the specific
variables that are captured by each closure. Access to free variables
happens by a single index; there is no scope chain if there is no
eval.
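As a sketch of the representation, written by hand in JS (the names
are illustrative only):

    // What the engine would do for the inner closures: captured values
    // are copied into a per-closure vector at creation time, and each
    // free-variable reference becomes a single indexed load -- no scope
    // chain walk.
    function makeClosure(x, y, z)
    {
        var free = [x, y, z];
        return function() { return free[0] + free[1] + free[2]; };
    }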
Assigned variables would be handled the same way, but boxed, so that
closures capture the box rather than the value at one point in time.
All accesses go through the box. Tight loops that don't call functions
can temporarily unbox, if appropriate.
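For example, assignment conversion done by hand (again with
illustrative names):

    function counter()
    {
        var nBox = { value: 0 };   // heap box for the assigned variable
        return function() { return ++nBox.value; };
    }

    var c = counter();
    c();  // 1
    c();  // 2

Every reader and writer of the variable shares the one box, so nothing
needs to be torn off when the frame dies.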
Dunno, just putting that out there. This dynamic scoping stuff is
pretty terrifying to me :)
> this mechanism should immediately go live for const in all modes of
> execution (one javascript! – no guts no glory!).
Will classic-mode const be block-scoped, do you think?
Anyway, long mail. Please do count on me as part of this
implementation effort -- I'd like to focus on it over the next month.
This is a great discussion to have now, so that I don't hack on the
wrong thing, and I appreciate your feedback!
Cheers,
Andy