I have a few disjoint thoughts here.

1. Here's a link to a previous ES-discuss thread about somebody trying to 
create an exploit by combining observable weak references with conservative GC.
  https://mail.mozilla.org/pipermail/es-discuss/2013-March/029489.html

I think the basic idea is that you stick an integer on the stack, drop all 
real references to an object, and then observe whether the object dies.  If it 
survives, the conservative GC probably treated your integer as a root, which 
tells you the integer is the address of the object.
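
A rough pseudocode sketch of that attack, assuming the strawman's makeWeakRef, 
a conservative stack-scanning collector, and a hypothetical forceGC() standing 
in for whatever lets the attacker trigger a collection:

```
// Pseudocode only -- makeWeakRef is from the strawman, and forceGC() is a
// hypothetical stand-in; neither is real web-exposed API.
function probeAddress(guess) {
  let target = {};                  // object whose address we want to learn
  const weak = makeWeakRef(target);
  const pin = guess;                // integer sitting on the stack; if it equals
                                    // target's address, a conservative GC
                                    // mistakes it for a root and keeps target alive
  target = null;                    // drop the only strong reference
  forceGC();
  // target died => guess was not its address;
  // target survived => guess probably was its address
  return weak.get() !== undefined;
}
```

Repeating the probe over a range of guesses would leak heap addresses, which 
is why combining observable weak references with conservative GC is scary.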

Aside from that, weak references complicate the implementation in ways that may 
not be immediately obvious.  For weak maps, we have special machinery to deal 
with:
   - cross-compartment wrappers
   - C++ reflectors
   - cross-compartment wrappers of C++ reflectors
Mistakes in any of these could lead to critical security vulnerabilities.

2. With regards to the cycle collector, Bill and I discussed this, and I think 
weak references won't affect the cycle collector.  If you consider two heap 
graphs, one with and one without weak references, the set of live objects in 
both graphs must be identical, so the CC shouldn't have to care.  It is only 
the JS engine, which has to decide when to null out the weak references, that 
needs to figure that out.

3. Finally, though I'm generally opposed to weak references, as they are 
complex to implement, I do understand that there's a need for them.  When 
working on the leaks in bug 893012, we had to use weak references to fix some 
of the individual leaks, like bug 894147.  You have iframes that want to 
listen to events, so the event listener has to hold onto the iframe, but you 
don't want the event listener to be the only thing keeping the iframe alive.  
So it feels a little crummy to say "hey, B2G shows that JS is good enough for 
everything!" when the only choices are leaking or using JS that isn't 
web-legal.
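
As a minimal sketch of that listener pattern, here's what it looks like with a 
WeakRef-style API (the strawman's makeWeakRef plays the same role); the names 
WeakListener and fakeIframe are illustrative, not from the bugs above:

```javascript
// The listener holds its target weakly, so the listener alone cannot keep
// the iframe alive.  WeakListener and fakeIframe are made-up names.
class WeakListener {
  constructor(target, handler) {
    this.ref = new WeakRef(target); // does not keep `target` alive
    this.handler = handler;
  }
  handleEvent(event) {
    const target = this.ref.deref(); // undefined once target has been collected
    if (target !== undefined) {
      this.handler(target, event);
    }
    // else: the target is gone, and the listener can unregister itself
  }
}

const fakeIframe = { name: "child-frame" };
const listener = new WeakListener(fakeIframe, (t, e) => {
  console.log(`${t.name} got ${e}`);
});
listener.handleEvent("load"); // prints "child-frame got load"
```

The point is the direction of the reference: the event machinery reaches the 
iframe only through the weak ref, so dropping every other reference lets the 
iframe die instead of leaking.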

Andrew

----- Original Message -----
> On 11/01/2013 09:42 AM, Bobby Holley wrote:
> > From the proposal:
> >
> >> Note that makeWeakRef is not safe for general access since it grants access
> >> to the non-determinism inherent in observing garbage collection.
> >
> > What does that mean? That they don't expect this to be exposed to the web?
> > In that case, why bother speccing it, and why would we need to be concerned
> > with implementing it?
> 
> Yeah, that's some very critical weasel-wording in the strawman. "Let's
> add this to the language, but not expose it to things it shouldn't be
> exposed to." Huh?
> 
> > FWIW, I strongly believe that we should refuse to implement specs that make
> > GC effects any more observable than they already are on the web.
> 
> Why? I agree, but only for some general reasons and some dimly-remembered
> cases I've encountered in the past where the implications turned out to be
> far worse than I would have initially thought. I'd really like to have a
> crisp explanation of exactly *why*
> exposing GC behavior is bad, because otherwise I feel like people will
> end up deciding they can live with the minor drawbacks they can think
> of, and proceed forward with something truly awful. (Like, for example,
> exposing the current strawman to general web content.)
> 
> And there really are compelling use cases for having this sort of stuff.
> As Kevin Gadd said (I think it was him), people are reimplementing
> garbage collection over typed arrays, in JS, just to gain this level of
> control. We need to know why, in order to provide something reasonable
> for whatever those specific use cases happen to be.
> 
_______________________________________________
dev-tech-js-engine-internals mailing list
dev-tech-js-engine-internals@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-tech-js-engine-internals