On 13/01/2015 13:21, Anne van Kesteren wrote:
A big challenge with self-hosting is memory consumption. A JavaScript
implementation is tied to a realm and therefore each realm will have
its own implementation. Contrast this with a C++ implementation of the
same feature that can be shared across many realms. The C++
implementation is much more efficient.
Why would a JS implementation *have to* be tied to a realm? I understand
that this is how things are done today, but does it need to be?
Asked differently, what is so different about JS (vs C++) as an
implementation language?
It seems like the sharing that is possible in C++ should be possible
in JS.
What is (or can be) shared in C++ that cannot be in JS?
PS: Alternative explanation available here:
https://annevankesteren.nl/2015/01/javascript-web-platform
From your post:
More concretely, this means that an implementation of
|Array.prototype.map| in JavaScript will end up existing in each
realm, whereas an identical implementation of that feature in C++ will
only exist once.
Why? You could have a single privileged-JS implementation, and each
content-JS context (~realm) would only have access to a proxy to
Array.prototype.map (transparently forwarding calls, which I imagine
engines could optimize/inline into a direct call in the optimistic
case). It would cost a proxy per content-JS context, but that is already
much less than a full Array.prototype.map implementation.
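A minimal sketch of that forwarding idea (all names here are hypothetical, and a real engine would do this internally rather than with user-level proxies):

```javascript
// Hypothetical sketch: one privileged implementation of map, shared by
// all content realms; each realm only pays for a thin forwarding proxy.
const sharedMap = function map(callback, thisArg) {
  const result = [];
  for (let i = 0; i < this.length; i++) {
    if (i in this) result[i] = callback.call(thisArg, this[i], i, this);
  }
  return result;
};

// A single handler object can serve every realm's proxy.
const forwardingHandler = {
  apply(target, thisValue, args) {
    return Reflect.apply(target, thisValue, args); // direct forward
  }
};

function makeRealmMap() {
  return new Proxy(sharedMap, forwardingHandler);
}

const realmA = makeRealmMap();
const realmB = makeRealmMap();
console.log(realmA.call([1, 2, 3], x => x * 2)); // [ 2, 4, 6 ]
console.log(realmA !== realmB); // true: distinct proxies, one implementation
```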
In a hand-wavy fashion, I'd say the proxy handler can be shared across
all content-JS contexts. Per-content storage would need to be created
(lazily) in case Array.prototype.map is mutated (a property added,
etc.), but the normal case is fine (no mutation of built-ins means no
cost).
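The lazy per-content storage could look roughly like this copy-on-write sketch (names hypothetical; only get/set are trapped here, whereas a real version would need the full set of traps):

```javascript
// Hypothetical sketch: one shared built-in, with a per-realm
// copy-on-write shadow allocated only on first mutation.
const sharedBuiltin = { name: 'map' };

function makeRealmView() {
  let shadow = null; // the normal (no-mutation) case never allocates this
  return new Proxy(sharedBuiltin, {
    get(target, key, receiver) {
      if (shadow !== null && key in shadow) return shadow[key];
      return Reflect.get(target, key, receiver);
    },
    set(target, key, value) {
      if (shadow === null) shadow = Object.create(null);
      shadow[key] = value; // mutation lands in per-realm storage
      return true;
    }
  });
}

const viewA = makeRealmView();
const viewB = makeRealmView();
viewA.extra = 1;                       // only realm A sees this
console.log(viewA.extra);              // 1
console.log(viewB.extra);              // undefined
console.log('extra' in sharedBuiltin); // false: shared target untouched
```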
One drawback shows up with Object.freeze(Array.prototype.map). For this
to work with proxies as they are, either the privileged-JS
Array.prototype.map needs to be frozen (unacceptable, of course), or
each proxy needs its own target (which is just as bad as one
Array.prototype.map implementation per content-JS context).
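Both halves of the drawback can be demonstrated with today's proxies (a sketch; the variable names are mine):

```javascript
// (1) With default traps, Object.freeze on the proxy reaches through
//     and freezes the shared privileged target -- unacceptable.
const shared = { impl: 'one copy for all realms' };
const proxyA = new Proxy(shared, {});
Object.freeze(proxyA);
console.log(Object.isFrozen(shared)); // true: the shared target froze too

// (2) A handler that merely *claims* frozenness violates the proxy
//     invariants: the engine cross-checks the trap result against the
//     (still extensible) target and throws.
const lying = new Proxy({ x: 1 }, {
  isExtensible() { return false; } // lie: the target is actually extensible
});
try {
  Object.isExtensible(lying);
} catch (e) {
  console.log(e instanceof TypeError); // true: invariant check rejects the lie
}
```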
The solution might be to allow proxies in privileged-JS contexts that
are more powerful than the standard ones (for instance, they can pretend
the object is frozen even when the underlying target isn't).
This is a bit annoying as a suggestion, because it means JS isn't really
implemented in normal JS any longer, but it sounds like a reasonable
trade-off (that's open for debate, of course).
The "problem" with proxies as they are today is that they were
retrofitted into JS, which severely constrained their design, making use
cases like the one we're discussing (or even membranes) possible but
cumbersome.
Privileged-JS taking some liberties with this design sounds reasonable.
(It was pointed out to me that SpiderMonkey has some tricks to share
the bytecode of a JavaScript implementation of a feature across
realms, though not across threads (still expensive for workers). And
SpiderMonkey has the ability to load these JavaScript implementations
lazily and collect them when no longer used, further reducing memory
footprint. However, this requires very special code that is currently
not available for features outside of SpiderMonkey. Whether that is
feasible might be up for investigation at some point.)
For contexts running in parallel to be able to share (read-only) data in
JS, we would need immutable data structures in JS, I believe.
https://mail.mozilla.org/pipermail/es-discuss/2014-November/040218.html
https://mail.mozilla.org/pipermail/es-discuss/2014-November/040219.html
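For reference, the closest JS gets today is Object.freeze, which is shallow, so even single-realm immutability needs a hand-rolled deep freeze (a sketch; neither version makes a structure shareable across threads):

```javascript
// Sketch: Object.freeze only freezes the top level, so immutability of
// a whole structure has to be established by hand today.
function deepFreeze(obj) {
  Object.freeze(obj); // freeze first so cyclic structures terminate
  for (const key of Reflect.ownKeys(obj)) {
    const value = obj[key];
    if (typeof value === 'object' && value !== null && !Object.isFrozen(value)) {
      deepFreeze(value);
    }
  }
  return obj;
}

const record = deepFreeze({ point: { x: 1, y: 2 } });
console.log(Object.isFrozen(record));       // true
console.log(Object.isFrozen(record.point)); // true (plain freeze would leave this mutable)
```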
David
_______________________________________________
es-discuss mailing list
[email protected]
https://mail.mozilla.org/listinfo/es-discuss