Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Andreas Rossberg
On 4 December 2014 at 00:54, David Bruant bruan...@gmail.com wrote:
 The way I see it, data structures are a tool to efficiently query data. They
 don't *have* to be arbitrarily mutable anytime for this purpose.
 It's a point of view problem, but in my opinion, mutability is the problem,
 not sharing the same object. Being able to create and share structured data
 should not have to mean it can be modified by anyone anytime. Hence
 Object.freeze, hence the recent popularity of React.js.

I agree, but that is all irrelevant regarding the question of weak
maps, because you cannot freeze their content.

So my question stands: What would be a plausible scenario where
handing a weak map to an untrusted third party is not utterly crazy to
start with? In particular, when can giving them the ability to clear
be harmful, while the ability to add random entries, or attempt to
remove entries at guess, is not?

/Andreas


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Boris Zbarsky

On 11/30/14, 6:12 PM, Mark S. Miller wrote:

On Sun, Nov 30, 2014 at 12:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:

Per the ES6 spec, it seems to me like attempting to define a non-configurable
property on a WindowProxy should throw, and getting a property descriptor for
a non-configurable property that got defined on the Window (e.g. via var)
should report it as configurable.


Yes, both of these conclusions are correct.


OK.  What do we do if we discover that throwing from the defineProperty 
call with a non-configurable property descriptor is not web-compatible? 
 I'm going to try doing it in Firefox, and would welcome other UAs 
doing it ASAP to figure out whether we're in that situation.
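For concreteness, an illustrative sketch of the two behaviors under test
(not from the original message):

  // Per ES6 this should throw a TypeError: a WindowProxy can never
  // honor a non-configurable property, since its target can be swapped
  // out by navigation.
  Object.defineProperty(window, "foo", { value: 1, configurable: false });

  // A var-declared global is non-configurable on the underlying Window,
  // but the proxy should still report it as configurable:
  var bar = 1;
  Object.getOwnPropertyDescriptor(window, "bar").configurable; // true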


-Boris


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread David Bruant

On 04/12/2014 09:55, Andreas Rossberg wrote:

On 4 December 2014 at 00:54, David Bruant bruan...@gmail.com wrote:

The way I see it, data structures are a tool to efficiently query data. They
don't *have* to be arbitrarily mutable anytime for this purpose.
It's a point of view problem, but in my opinion, mutability is the problem,
not sharing the same object. Being able to create and share structured data
should not have to mean it can be modified by anyone anytime. Hence
Object.freeze, hence the recent popularity of React.js.

I agree, but that is all irrelevant regarding the question of weak
maps, because you cannot freeze their content.
The heart of the problem is mutability, and .clear is a mutability 
capability, so it's relevant. WeakMaps are effectively frozen for some 
bindings if you don't have the keys.



So my question stands: What would be a plausible scenario where
handing a weak map to an untrusted third party is not utterly crazy to
start with?
Sometimes you call functions you didn't write and pass arguments to 
them. WeakMaps are new, but APIs will have functions with WeakMaps as 
arguments. I don't see what's crazy. It'd be nice if I didn't have to 
review all NPM packages I use to make sure they don't use .clear when I 
pass a WeakMap.
If you don't want to pass the WeakMap directly, you have to create a new 
object just in case (cloning or wrapping), which carries its own obvious 
efficiency cost. Security then comes at the cost of performance, while 
both could have been achieved if the same safe-by-default WeakMap could 
be shared.
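An illustrative sketch of the hazard, assuming an implementation that
exposes WeakMap.prototype.clear (as engines did at the time of this
thread):

  var cache = new WeakMap();
  var key = {};
  cache.set(key, "expensive result");

  // Third-party code receives the map and exercises its ambient authority:
  function untrustedLibraryFn(map) {
    map.clear(); // nothing stops this
  }
  untrustedLibraryFn(cache);

  cache.has(key); // false: every caller's entry is gone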



In particular, when can giving them the ability to clear
be harmful, while the ability to add random entries, or attempt to
remove entries at guess, is not?

I don't have an answer to this case right now.
That said, I'm uncomfortable with the idea of seeing a decision being 
made that affects the language of the web until its end based on the 
inability of a few people to find a scenario that is deemed plausible by 
a few other people within a limited timeframe. It's almost calling for 
an "I told you so" one day.

I would return the question: can you demonstrate there is no such scenario?

We know ambient authority is a bad thing; examples are endless in JS.
The ability to modify global variables has been the source of bugs and 
vulnerabilities.
JSON.parse implementations were modified by browsers because they used 
malicious versions of Array as a constructor, which led to data leakage.
WeakMap.prototype.clear is ambient authority. Admittedly, its effects 
are less broad and its malicious usage is certainly more subtle.


David


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Katelyn Gadd
There are scenarios where both security and performance matter. I
think this is more than self-evident at this point in the thread since
examples of both have been provided repeatedly. 'Can you demonstrate
there is no such scenario' isn't really a necessary question, because
we already know the answer: no.

That's not the relevant issue here, though. As discussed before, the
basic choice at hand here is whether to optimize the common case for
security or for reasonable performance & usability.

The security use case can be addressed by wrapping the weakmap in an
opaque object that revokes the ability to clear (along with the
ability to write, for example - an ability I think you would want to
deny in most security-sensitive scenarios.) It is true, as you state,
that wrapping the weakmap imposes a performance cost; however, it
seems unlikely that the cost would be more than negligible, given that
the wrapper object would literally consist of one property containing
an object reference, and its methods could be trivially inlined by
every JS runtime I'm aware of.
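A minimal sketch of such a wrapper (illustrative names, not from any
actual library):

  function ReadOnlyWeakMap(backing) {
    this._map = backing; // the single object-reference property
  }
  // Only the query operations forward; set, delete, and clear are
  // simply not exposed.
  ReadOnlyWeakMap.prototype.get = function (key) { return this._map.get(key); };
  ReadOnlyWeakMap.prototype.has = function (key) { return this._map.has(key); };

  var secrets = new WeakMap();
  var view = new ReadOnlyWeakMap(secrets);
  // A hardened version would hide _map in a closure so consumers
  // can't reach the backing map directly.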

The performance use cases can theoretically be addressed by a
sufficiently smart runtime, even if a .clear() method is not present.
However I would argue that the difficulty of *actually* providing good
performance for these use cases without a .clear() method is extremely
high, for example:

Without a clear() method you have to wait for a sufficiently
exhaustive collection to be triggered, likely by memory pressure. If
the values being held alive in the interim do not merely use JS heap -
for example, they are webgl textures or bitmap images - it is likely
that the memory pressure feedback may not be sufficient to trigger an
exhaustive collection soon enough. I have seen this exact issue in the
past (while using weakmap, actually!)

It was previously stated that 'secure by default' is a noble goal, and
I agree. However, in this case secure-by-default is not something a
user will expect from JS containers, because no other JS data
structure offers secure-by-default. WeakMap as currently specced -
with or without clear() - is also not secure by default, since values
can still be written. You would need to disable writes by default
as well, somehow.

On a performance note, I would also argue that it seems profoundly
questionable that a transposed weak map implementation can provide
consistently good performance out in the real world for typical use
cases. I am certain that there *are* use cases where it is optimal,
and it clearly has its advantages, but as someone who spends absurd
amounts of time tuning the performance of software - both JS and
native - the design of a transposed weakmap contains many red flags
that suggest bad performance. I will speculate, based on my
understanding of transposed weak maps and my (incomplete) knowledge of
modern JS runtimes - please correct any errors:

The transposed weak map must store values as hidden properties on the
keys used. This means that any object used as a key - any object
reference, that is - must be able to accept hidden properties. This
means that it is effectively impossible to allocate object instances
with fixed-size, fixed-layout storage unless you reserve space for a
place to store the weakmap values. The only way I can imagine to solve
this is to make really aggressive use of type information gathering
and/or bailouts in the runtime to identify every type used as a
weakmap key - at which point I suppose you would have to convert their
memory layout on the heap in order to ensure consistency, or support
two different memory layouts for the same type.
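To make the transposed idea concrete, here is a rough userland
approximation; real engines do this at the object-layout level, so this
sketch is illustrative only:

  var nextMapId = 0;
  function TransposedWeakMap() {
    // Each map owns a unique name under which it stores its values on
    // the key objects themselves.
    this._id = "__weakMapSlot" + nextMapId++;
  }
  TransposedWeakMap.prototype.set = function (key, value) {
    // The value lives as a hidden property of the key, so it is
    // collected when the key is: the weak lifetime falls out for free.
    Object.defineProperty(key, this._id, {
      value: value, writable: true, configurable: true, enumerable: false
    });
    return this;
  };
  TransposedWeakMap.prototype.get = function (key) {
    return key[this._id];
  };
  // Note the cost described above: every potential key must be able to
  // grow a new property, which defeats fixed-size, fixed-layout object
  // representations (and this userland sketch fails outright on frozen keys).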

I don't consider the above an academic concern, either: Dense memory
layout is essential if you want good locality (and thus, good cache
efficiency) and if you want the ability to cheaply do things like copy
your instances into a typed array and upload them onto the GPU. The
cheap copying use case will matter a lot once typed objects are
introduced since they are all about fixed, dense memory layout and
cheap copying.

A transposed weakmap generally implies poor memory locality, extensive
pointer-chasing, and higher memory overhead for each key/value pair
stored in the map.

If I'm not mistaken, a transposed weakmap may also increase the cost
of GC tracing overall, or at least for any object type that can be
used as a key - the requirement to allocate space for weakmap values
on those types means that the GC must now trace those weakmap value
slots regardless of whether they actually contain a value.

A transposed weakmap probably also implies worse memory fragmentation
or more wasted heap, because you either have to lazily allocate the
space for the weakmap values (which means a separate heap allocation)
or reserve empty space in all instances for the values. Neither of
these feels particularly ideal.

A transposed weakmap may also imply hindrances to a VM's ability to
elide heap allocations or store JS object instances on the stack/in

Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Andreas Rossberg
On 4 December 2014 at 13:58, David Bruant bruan...@gmail.com wrote:
 On 04/12/2014 09:55, Andreas Rossberg wrote:

 On 4 December 2014 at 00:54, David Bruant bruan...@gmail.com wrote:

 The way I see it, data structures are a tool to efficiently query data.
 They
 don't *have* to be arbitrarily mutable anytime for this purpose.
 It's a point of view problem, but in my opinion, mutability is the
 problem,
 not sharing the same object. Being able to create and share structured
 data
 should not have to mean it can be modified by anyone anytime. Hence
 Object.freeze, hence the recent popularity of React.js.

 I agree, but that is all irrelevant regarding the question of weak
 maps, because you cannot freeze their content.

 The heart of the problem is mutability, and .clear is a mutability
 capability, so it's relevant. WeakMaps are effectively frozen for some
 bindings if you don't have the keys.

No, they are not. Everybody can enter additional keys, for example. In
the security- or abstraction-related examples I'm aware of, allowing
that would actually be more disastrous than doing a clear.

 So my question stands: What would be a plausible scenario where
 handing a weak map to an untrusted third party is not utterly crazy to
 start with?

 Sometimes you call functions you didn't write and pass arguments to
 them. WeakMaps are new, but APIs will have functions with WeakMaps as
 arguments. I don't see what's crazy. It'd be nice if I didn't have to
 review all NPM packages I use to make sure they don't use .clear when I
 pass a WeakMap.

Sure, I should have added "security-related" to the above sentence.

 If you don't want to pass the WeakMap directly, you have to create a new
 object just in case (cloning or wrapping), which carries its own obvious
 efficiency cost. Security then comes at the cost of performance, while both
 could have been achieved if the same safe-by-default WeakMap could be shared.

 In particular, when can giving them the ability to clear
 be harmful, while the ability to add random entries, or attempt to
 remove entries at guess, is not?

 I don't have an answer to this case right now.
 That said, I'm uncomfortable with the idea of seeing a decision being made
 that affects the language of the web until its end based on the inability of
 a few people to find a scenario that is deemed plausible by a few other
 people within a limited timeframe. It's almost calling for an "I told you
 so" one day.
 I would return the question: can you demonstrate there is no such scenario?

 We know ambient authority is a bad thing; examples are endless in JS.
 The ability to modify global variables has been the source of bugs and
 vulnerabilities.
 JSON.parse implementations were modified by browsers because they used
 malicious versions of Array as a constructor, which led to data leakage.
 WeakMap.prototype.clear is ambient authority. Admittedly, its effects are
 less broad and its malicious usage is certainly more subtle.

Sure, but WeakMap.prototype.set is no different in that regard. When
you hand out a sensitive weak map you've already lost, with or without
clear. This really seems like a phantom discussion to me (and I'm
saying that although I do care a lot about abstraction and security!).

/Andreas


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Mark S. Miller
On Thu, Dec 4, 2014 at 2:58 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/30/14, 6:12 PM, Mark S. Miller wrote:

 On Sun, Nov 30, 2014 at 12:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 Per the ES6 spec, it seems to me like attempting to define a non-configurable
 property on a WindowProxy should throw, and getting a property descriptor for
 a non-configurable property that got defined on the Window (e.g. via var)
 should report it as configurable.


 Yes, both of these conclusions are correct.


 OK.  What do we do if we discover that throwing from the defineProperty call
 with a non-configurable property descriptor is not web-compatible?

What we always do, for example, when we found that having

 Object.prototype.toString.call(null)

throw was not web compatible. We look into the specifics of the
incompatibility encountered and design a non-web-breaking workaround
that is least painful for the semantics we desire. For example, in
this case, we changed it to return "[object Null]" even though that
string itself had never previously been returned. The specific
web-compatibility constraint we encountered in this case merely
required a non-throw; it did not care what the contents of the string
were. This outcome could not have been predicted from first principles.
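For reference, the ES5.1 resolution:

  Object.prototype.toString.call(null);      // "[object Null]"
  Object.prototype.toString.call(undefined); // "[object Undefined]"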

Other times, as when we found that introducing a new global variable
named JSON was not web compatible, we found we could evangelize the
origin of that incompatibility to fix it at the source, rather than
change the spec.


  I'm
 going to try doing it in Firefox, and would welcome other UAs doing it ASAP
 to figure out whether we're in that situation.

Excellent! Bravo!



 -Boris



-- 
Cheers,
--MarkM


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread C. Scott Ananian
On Thu, Dec 4, 2014 at 9:25 AM, Katelyn Gadd k...@luminance.org wrote:

 The only way I can imagine to solve
 this is to make really aggressive use of type information gathering
 and/or bailouts in the runtime to identify every type used as a
 weakmap key - at which point I suppose you would have to convert their
 memory layout on the heap in order to ensure consistency, or support
 two different memory layouts for the same type.


Yup, this is how it's done.

In JavaScript every object can have new properties added to it at arbitrary
times.
http://jayconrod.com/posts/52/a-tour-of-v8-object-representation
 --scott


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Boris Zbarsky

On 12/4/14, 10:44 AM, Travis Leithead wrote:

So... this will prevent defining non-configurable properties on the global?


It will prevent using

  Object.defineProperty(window, "name", /* non-configurable descriptor */);

to define a property.

Note that window is not the global.  It's a proxy whose target is the 
global.



Combined with [PrimaryGlobal], this seems at odds with what browsers do internally to 
prevent re-definition of some properties like document?


Browsers can define properties on the actual global, so there is no 
problem here.



Are we sure we want this restriction?


Well, good question.  If we don't do this restriction (by which I assume 
you mean defineProperty throwing; I assume getOwnPropertyDescriptor always 
claiming configurable is less controversial), what do we want to do?


Note that I did a bit of digging into the history here and as far as I 
can tell every single UA screwed up when implementing 
Object.getOwnPropertyDescriptor and company in ES5.  ES5 clearly spells 
out the rules for these methods, and browsers just didn't follow those 
rules.  Plus a lack of testing, and here we are.


-Boris


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Mark S. Miller
On Thu, Dec 4, 2014 at 6:25 AM, Katelyn Gadd k...@luminance.org wrote:
[...]
 I should also note that while much of the above is speculative and
 based on intuition/experience, I *have* been shipping a use of WeakMap
 for performance-critical code for over a year now

Hi Katelyn, could you say more about your shipping code? Is the code
something you could post or make available? Thanks.


-- 
Cheers,
--MarkM


%TypedArray%.prototype.includes

2014-12-04 Thread Domenic Denicola
When implementing Array.prototype.includes for V8, we realized suddenly that we 
should probably do the same for typed arrays.

Looking at many of the %TypedArray%.prototype methods, it seems most of them 
are specified as basically the same as Array, but with these minor tweaks. 
E.g.

- 
https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.fill
- 
https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.findindex
- 
https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.foreach

A few though are specified in detail:

- 
https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.filter
- 
https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.map

I was wondering if anyone knew why, so I can tell which to use as guidance 
for speccing %TypedArray%.prototype.includes.

As for process issues, I think it would be reasonable to add a supplement to 
the existing tc39/Array.prototype.includes repo tacking this on? Or would that 
be bad, and I should start a separate proposal?


Re: %TypedArray%.prototype.includes

2014-12-04 Thread Allen Wirfs-Brock

On Dec 4, 2014, at 2:25 PM, Domenic Denicola wrote:

 When implementing Array.prototype.includes for V8, we realized suddenly that 
 we should probably do the same for typed arrays.

Of course!



 
 Looking at many of the %TypedArray%.prototype methods, it seems most of them 
 are specified as basically the same as Array, but with these minor tweaks. 
 E.g.
 
 - 
 https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.fill
 - 
 https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.findindex
 - 
 https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.foreach
 
 A few though are specified in detail:
 
 - 
 https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.filter
 - 
 https://people.mozilla.org/~jorendorff/es6-draft.html#sec-%typedarray%.prototype.map
 
 I was wondering if anyone knew why, so I can tell which to use as guidance 
 for speccing %TypedArray%.prototype.includes?

Because some of the Array.prototype algorithms depend upon the ability of an 
Array to dynamically grow its length, or on other characteristics that are not 
true of Typed Array instances. A new algorithm that is appropriate for a Typed 
Array implementation is specified for those cases. Otherwise, we just reference 
the Array.prototype algorithm. 

For 'includes', you probably can get away with using the same algorithm as 
Array.prototype.includes.
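A sketch of what reusing that algorithm would look like (illustrative,
not spec text; it uses the SameValueZero comparison that
Array.prototype.includes uses):

  function typedArrayIncludes(ta, searchElement, fromIndex) {
    var len = ta.length;
    var k = Math.trunc(fromIndex) || 0; // undefined/NaN become 0
    if (k < 0) k = Math.max(len + k, 0);
    for (; k < len; k++) {
      var v = ta[k];
      // SameValueZero: like ===, except NaN matches NaN
      if (v === searchElement ||
          (v !== v && searchElement !== searchElement)) {
        return true;
      }
    }
    return false;
  }

  typedArrayIncludes(new Float32Array([1, NaN, 3]), NaN); // true

Nothing here grows or shrinks the receiver, which is why the Array
algorithm transfers cleanly.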

 
 As for process issues, I think it would be reasonable to add a supplement to 
 the existing tc39/Array.prototype.includes repo tacking this on? Or would 
 that be bad, and I should start a separate proposal?

Add it to the existing proposal.  It's an oversight that it wasn't included and 
something that should have been caught by reviewers if we were paying 
attention.  In general, we want Array and Typed Arrays to support the same set 
of methods.

Allen 





RE: %TypedArray%.prototype.includes

2014-12-04 Thread Domenic Denicola
From: Allen Wirfs-Brock [mailto:al...@wirfs-brock.com] 

 Because some of the Array.prototype algorithms depend upon the ability of 
 an Array to dynamically grow its length, or on other characteristics that 
 are not true of Typed Array instances. A new algorithm that is appropriate 
 for a Typed Array implementation is specified for those cases. Otherwise, 
 we just reference the Array.prototype algorithm.

Oh I see, that makes sense now that I realize how the return values of `map` 
and `filter` are created and filled. And yeah, agreed that the algorithm should 
be reusable for includes in particular.

 Add it to the existing proposal.  It's an oversight that it wasn't included 
 and something that should have been caught by reviewers if we were paying 
 attention.  In general, we want Array and Typed Arrays to support the same 
 set of methods.

Sounds good!



Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Boris Zbarsky

On 12/4/14, 1:36 PM, Travis Leithead wrote:

Note that window is not the global.  It's a proxy whose target is the global.


Yes, but within a browser UA, there is no way to get a reference to the naked 
global because all entry-points return window proxies ;-)


Well, no way from web script.  The browser internals can do it, 
presumably, right?



Well, good question.  If we don't do this restriction (by which I assume
defineProperty throwing; I assume getOwnPropertyDescriptor claiming
configurable always is less controversial), what do we want to do?


As I look back on your original message, I fail to see what the problem is. You 
seem to think that the window proxy is referring to the same window object 
before and after the navigation.


The window proxy object identity does not change before and after the 
navigation.


The window object the proxy is pointing to changes.


In fact, in most implementations that I'm aware of, there is the concept of the inner 
and outer window.


Yes, I'm well aware.


The outer window is the window proxy, which is the object that implements the 
cross-origin access control.


In Gecko, the cross-origin access control is actually implemented using 
a separate security membrane proxy whose target is the outer window. 
But sure.



In IE's implementation, the window proxy has no storage as a typical JS var--it's only a 
semi-intelligent forwarder to its companion inner window.


That's an IE implementation detail.  In Gecko, the window proxy is a 
JS proxy object with a proxy handler written in C++.  That, too, is an 
implementation detail.


What matters here is what JS consumers see.  Consumers typically (there 
are some exceptions involving scope chains) just see the window proxy, yes?


So when a script does:

  Object.defineProperty(frames[0], "foo", { value: true });

It is defining a property on frames[0].  The fact that this is actually 
a proxy for some other object (the global inside that iframe) is 
somewhat of an implementation detail, again.  From the consumer's and 
the spec's point of view, frames[0] is something with some internal 
methods ([[GetOwnProperty]], [[DefineOwnProperty]], etc) which are 
implemented in some way.  Still from the spec's point of view, the 
implementation of these internal methods must satisfy 
http://people.mozilla.org/~jorendorff/es6-draft.html#sec-invariants-of-the-essential-internal-methods.



So, in your code sample, your defineProperty call forwarded to the inner 
window where the property was defined.


Sure.  I understand that.  As in, the proxy's [[DefineOwnProperty]] 
invoke's the target's [[DefineOwnProperty]].



After the navigation, the inner window was swapped out for a new one (and whole new 
type system at that) which the existing window proxy (outer window) now refers.


Sure.


This gave the appearance of the non-configurable property disappearing


This isn't about appearance.  The relevant spec invariant for 
[[GetOwnProperty]], for example, is:


  If P’s attributes other than [[Writable]] may change over time or
  if the property might disappear, then P’s [[Configurable]] attribute
  must be true.

And Object.getOwnPropertyDescriptor is clearly defined to invoke 
[[GetOwnProperty]].


So when a page does Object.getOwnPropertyDescriptor(window, "foo") this 
is invoking the window proxy's [[GetOwnProperty]].  That's allowed to do 
all sorts of stuff as long as it preserves the invariants involved, 
including the one I quote above.  The fact that the "disappearing" is 
due to the target changing is an implementation detail of the window proxy.



but in reality it would still be there if you could get a reference to the 
inner window


Which doesn't matter, because the consumer is not interacting with the 
inner window.



I wonder if you can capture the inner window in a scope chain or closure 
somehow


Sure, for a scope chain.  Testcase at 
https://web.mit.edu/bzbarsky/www/testcases/windowproxy/use-old-window-1.html 
shows "OLD WINDOW" on the first line in Firefox, Chrome, and Safari.  In 
IE11 it throws a "Can't execute code from a freed script" exception; I 
can't find anything in the specs that allows that, fwiw.



so that you could observe that "foo" is still there even though you can't 
directly see it anymore?


Absolutely.


I think that might work if the executing code was defined in the old iframe's 
environment and executed after navigation...


Right.

But we're not talking about indirect probes like this here, just about 
the basic invariants object internal methods are supposed to preserve.


-Boris


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Mark Miller
On Thu, Dec 4, 2014 at 4:32 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/4/14, 1:36 PM, Travis Leithead wrote:

 Note that window is not the global.  It's a proxy whose target is the
 global.


 Yes, but within a browser UA, there is no way to get a reference to the
 naked global because all entry-points return window proxies ;-)


 Well, no way from web script.  The browser internals can do it, presumably,
 right?

 Well, good question.  If we don't do this restriction (by which I assume
 defineProperty throwing; I assume getOwnPropertyDescriptor claiming
 configurable always is less controversial), what do we want to do?


 As I look back on your original message, I fail to see what the problem
 is. You seem to think that the window proxy is referring to the same window
 object before and after the navigation.


 The window proxy object identity does not change before and after the
 navigation.

 The window object the proxy is pointing to changes.

 In fact, in most implementations that I'm aware of, there is the concept
 of the inner and outer window.


 Yes, I'm well aware.

 The outer window is the window proxy, which is the object that
 implements the cross-origin access control.


 In Gecko, the cross-origin access control is actually implemented using a
 separate security membrane proxy whose target is the outer window. But
 sure.

 In IE's implementation, the window proxy has no storage as a typical JS
 var--it's only a semi-intelligent forwarder to its companion inner window.


 That's an IE implementation detail.  In Gecko, the window proxy is a JS
 proxy object with a proxy handler written in C++.  That, too, is an
 implementation detail.

 What matters here is what JS consumers see.  Consumers typically (there are
 some exceptions involving scope chains) just see the window proxy, yes?

 So when a script does:

   Object.defineProperty(frames[0], "foo", { value: true });

 It is defining a property on frames[0].  The fact that this is actually a
 proxy for some other object (the global inside that iframe) is somewhat of
 an implementation detail, again.  From the consumer's and the spec's point
 of view, frames[0] is something with some internal methods
 ([[GetOwnProperty]], [[DefineOwnProperty]], etc) which are implemented in
 some way.  Still from the spec's point of view, the implementation of these
 internal methods must satisfy
 http://people.mozilla.org/~jorendorff/es6-draft.html#sec-invariants-of-the-essential-internal-methods.

 So, in your code sample, your defineProperty call forwarded to the
 inner window where the property was defined.


 Sure.  I understand that.  As in, the proxy's [[DefineOwnProperty]] invoke's
 the target's [[DefineOwnProperty]].

 After the navigation, the inner window was swapped out for a new one
 (and whole new type system at that) which the existing window proxy (outer
 window) now refers.


 Sure.

 This gave the appearance of the non-configurable property disappearing


 This isn't about appearance.  The relevant spec invariant for
 [[GetOwnProperty]], for example, is:

   If P’s attributes other than [[Writable]] may change over time or
   if the property might disappear, then P’s [[Configurable]] attribute
   must be true.

 And Object.getOwnPropertyDescriptor is clearly defined to invoke
 [[GetOwnProperty]].

 So when a page does Object.getOwnPropertyDescriptor(window, "foo") this is
 invoking the window proxy's [[GetOwnProperty]].  That's allowed to do all
 sorts of stuff as long as it preserves the invariants involved, including
 the one I quote above.  The fact that the "disappearing" is due to the
 target changing is an implementation detail of the window proxy.

 but in reality it would still be there if you could get a reference to the
 inner window


 Which doesn't matter, because the consumer is not interacting with the
 inner window.

 I wonder if you can capture the inner window in a scope chain or closure
 somehow


 Sure, for a scope chain.  Testcase at
 https://web.mit.edu/bzbarsky/www/testcases/windowproxy/use-old-window-1.html

That page demands a client certificate. Is that intentional?


 shows "OLD WINDOW" on the first line in Firefox, Chrome, and Safari.  In
 IE11 it throws a "Can't execute code from a freed script" exception; I can't
 find anything in the specs that allows that, fwiw.

 so that you could observe that "foo" is still there even though you can't
 directly see it anymore?


 Absolutely.

 I think that might work if the executing code was defined in the old
 iframe's environment and executed after navigation...


 Right.

 But we're not talking about indirect probes like this here, just about the
 basic invariants object internal methods are supposed to preserve.


 -Boris



-- 
  Cheers,
  --MarkM

Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Boris Zbarsky

On 12/4/14, 4:45 PM, Mark Miller wrote:

On Thu, Dec 4, 2014 at 4:32 PM, Boris Zbarsky bzbar...@mit.edu wrote:

Sure, for a scope chain.  Testcase at
https://web.mit.edu/bzbarsky/www/testcases/windowproxy/use-old-window-1.html


That page demands a client certificate. Is that intentional?


Er, sorry. 
http://web.mit.edu/bzbarsky/www/testcases/windowproxy/use-old-window-1.html 
should work for everyone.


-Boris


Re: Figuring out the behavior of WindowProxy in the face of non-configurable properties

2014-12-04 Thread Mark Miller
On Thu, Dec 4, 2014 at 4:49 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 12/4/14, 4:45 PM, Mark Miller wrote:

 On Thu, Dec 4, 2014 at 4:32 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 Sure, for a scope chain.  Testcase at

 https://web.mit.edu/bzbarsky/www/testcases/windowproxy/use-old-window-1.html


 That page demands a client certificate. Is that intentional?


 Er, sorry.
 http://web.mit.edu/bzbarsky/www/testcases/windowproxy/use-old-window-1.html
 should work for everyone.

 -Boris


Here's an unexpected weirdness, probably not deeply related. Change
your first helper page to


<script>
var someName = "OLD WINDOW";
var evil = eval;
function f() {
  return someName;
}
function g() {
  return (1,evil)(3);
}
</script>



On FF and Safari, I get 3 as expected. On Chrome, I get this on my console:

Uncaught EvalError: The "this" value passed to eval must be the
global object from which eval originated

Especially weird, because this code doesn't pass any "this" to the
renamed eval. I don't know what this means.



-- 
  Cheers,
  --MarkM


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Katelyn Gadd
JSIL has a shim that emulates the 2D portion of the XNA game
framework's graphics stack using HTML5 canvas (for compatibility).
Many of the stack's features don't have direct equivalents in canvas,
so I have to generate and cache various bits of data and graphics
resources on-demand to implement them.

A main use case here is that in order to do color multiplication of
bitmaps - typically used for
text rendering, but used in other cases as well - I have to take a
given image I intend to draw and split it into images for each
specific color channel (r, g, b, a) and keep the images around. The
lifetime of those images needs to be tied to the lifetime of the image
they are derived from, and I also need the ability to discard them in
response to memory pressure. WeakMap is near-perfect for this.
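The caching pattern described might be sketched like this
(splitIntoChannels is a hypothetical stand-in for the real per-channel
split):

  var channelCache = new WeakMap();

  function getChannelImages(image) {
    var channels = channelCache.get(image);
    if (!channels) {
      channels = splitIntoChannels(image); // hypothetical: returns { r, g, b, a }
      channelCache.set(image, channels);
    }
    return channels;
  }
  // The derived images live exactly as long as their source image,
  // with no manual lifetime bookkeeping.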

I have a complex garbage collection scheme where I manually maintain a
LRU cache of these images and discard the ones that have not recently
been used periodically, and when the cache gets too big I discard the
oldest ones. Ensuring this collector runs often enough without
discarding images too often is a real challenge.

A downside here is that these resources are very heap-light (just
HTML5 canvases/images) but memory-heavy. In the past I have found and
filed bugs related to this where a browser was not properly responding
to the memory pressure from these images. As a result of this I don't
use WeakMap for this feature anymore (but I used to).

Managing the memory pressure here is important so it is very valuable
to have both a way to clear out the entire cache (in response to the
graphics adapter being reinitialized or a pool of game content being
destroyed) and to remove a single value from the cache (in response to
a single image resource being destroyed). The clear scenario is
thankfully not common but it does happen. This is also an area where
the performance is a concern.

I have a similar caching scenario involving textures generated from
bitmap fonts + text strings but WeakMap can't solve that since JS
strings aren't object references. Oh well :-) That one has to use my
terrible hacked-together collector as a result regardless of memory
pressure issues.

I do still use WeakMap in a few other places, for example to implement
Object.GetHashCode. This is a case where the transposed representation
is likely optimal - though in practice, I shouldn't need any sort of
container here, if only the hashing mechanisms clearly built into the
VM were exposed to user JS.
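That use might look roughly like this (an illustrative sketch, not
JSIL's actual code):

  var hashCodes = new WeakMap();
  var nextHashCode = 1;

  function getHashCode(obj) {
    var code = hashCodes.get(obj);
    if (code === undefined) {
      // Identity hash: stable for the object's lifetime, never reused
      // while the object is alive.
      code = nextHashCode++;
      hashCodes.set(obj, code);
    }
    return code;
  }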

Out of the demos on the website, I think
http://hildr.luminance.org/Lumberjack/Lumberjack.html makes the most
significant use of the cache. The value below the framerate (measured
in MB) is the size of the bitmap resource cache. If the demo is
running using WebGL, the cache will be very small because WebGL needs
far fewer temporary resources. If you load
http://hildr.luminance.org/Lumberjack/Lumberjack.html?forceCanvas
instead, it forces the use of the canvas backend and you will see the
cache becomes quite large during gameplay. I think this at least provides
a realistic scenario where you want a good WeakMap implementation that
responds well to all forms of memory pressure.

I also have a more recent use of WeakMap that is used to cache typed
array views for memory buffers. This is necessary to implement various
pointer manipulation scenarios, so that arbitrary data structures can
be unpacked from arbitrary offsets in a given array buffer. You can
effectively view this as a Typed Objects polyfill that predates the
Typed Objects spec work. I should note that this is something that
would be unnecessary if DataView were designed better, but things are
what they are. :)
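A sketch of that view-caching idea (illustrative):

  var viewCache = new WeakMap();

  function getFloat32View(buffer) {
    var views = viewCache.get(buffer);
    if (!views) {
      views = {};
      viewCache.set(buffer, views);
    }
    if (!views.f32) {
      views.f32 = new Float32Array(buffer); // created once, reused after
    }
    return views.f32;
  }
  // Views die with their buffer, and the hot path allocates nothing.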

On 4 December 2014 at 12:16, Mark S. Miller erig...@google.com wrote:
 On Thu, Dec 4, 2014 at 6:25 AM, Katelyn Gadd k...@luminance.org wrote:
 [...]
 I should also note that while much of the above is speculative and
 based on intuition/experience, I *have* been shipping a use of WeakMap
 for performance-critical code for over a year now

 Hi Katelyn, could you say more about your shipping code? Is the code
 something you could post or make available? Thanks.


 --
 Cheers,
 --MarkM


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Alex Russell
I support Katelyn's suggestion to make clear() neuterable on an instance,
perhaps with per-object configuration.

It leaves the API intact while allowing those with security concerns to
address them.
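In the meantime, a userland approximation of per-instance neutering
(illustrative only, not a proposed API):

  function neuterClear(wm) {
    // Shadow the inherited WeakMap.prototype.clear on this one instance.
    Object.defineProperty(wm, "clear", {
      value: function () { throw new TypeError("clear() has been revoked"); },
      writable: false,
      configurable: false,
      enumerable: false
    });
    return wm;
  }

  var wm = neuterClear(new WeakMap());
  wm.set({}, 1); // still works
  // wm.clear() now throws instead of emptying the map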
On 4 Dec 2014 20:01, Katelyn Gadd k...@luminance.org wrote:

 JSIL has a shim that emulates the 2D portion of the XNA game
 framework's graphics stack using HTML5 canvas (for compatibility).
 Many of the stack's features don't have direct equivalents in canvas,
 so I have to generate and cache various bits of data and graphics
 resources on-demand to implement them.

 A main use case here is that in order to do color multiplication of
 bitmaps - typically used for
 text rendering, but used in other cases as well - I have to take a
 given image I intend to draw and split it into images for each
 specific color channel (r, g, b, a) and keep the images around. The
 lifetime of those images needs to be tied to the lifetime of the image
 they are derived from, and I also need the ability to discard them in
 response to memory pressure. WeakMap is near-perfect for this.

 I have a complex garbage collection scheme where I manually maintain a
 LRU cache of these images and discard the ones that have not recently
 been used periodically, and when the cache gets too big I discard the
 oldest ones. Ensuring this collector runs often enough without
 discarding images too often is a real challenge.

 A downside here is that these resources are very heap-light (just
 HTML5 canvases/images) but memory-heavy. In the past I have found and
 filed bugs related to this where a browser was not properly responding
 to the memory pressure from these images. As a result of this I don't
 use WeakMap for this feature anymore (but I used to).

 Managing the memory pressure here is important so it is very valuable
 to have both a way to clear out the entire cache (in response to the
 graphics adapter being reinitialized or a pool of game content being
 destroyed) and to remove a single value from the cache (in response to
 a single image resource being destroyed). The clear scenario is
 thankfully not common but it does happen. This is also an area where
 the performance is a concern.

 I have a similar caching scenario involving textures generated from
 bitmap fonts + text strings but WeakMap can't solve that since JS
 strings aren't object references. Oh well :-) That one has to use my
 terrible hacked-together collector as a result regardless of memory
 pressure issues.

 I do still use WeakMap in a few other places, for example to implement
 Object.GetHashCode. This is a case where the transposed representation
 is likely optimal - though in practice, I shouldn't need any sort of
 container here, if only the hashing mechanisms clearly built into the
 VM were exposed to user JS.

 Out of the demos on the website, I think
 http://hildr.luminance.org/Lumberjack/Lumberjack.html makes the most
 significant use of the cache. The value below the framerate (measured
 in MB) is the size of the bitmap resource cache. If the demo is
 running using WebGL, the cache will be very small because WebGL needs
 far fewer temporary resources. If you load
 http://hildr.luminance.org/Lumberjack/Lumberjack.html?forceCanvas
 instead, it forces the use of the canvas backend and you will see the
 cache becomes quite large during gameplay. I think this at least provides
 a realistic scenario where you want a good WeakMap implementation that
 responds well to all forms of memory pressure.

 I also have a more recent use of WeakMap that is used to cache typed
 array views for memory buffers. This is necessary to implement various
 pointer manipulation scenarios, so that arbitrary data structures can
 be unpacked from arbitrary offsets in a given array buffer. You can
 effectively view this as a Typed Objects polyfill that predates the
 Typed Objects spec work. I should note that this is something that
 would be unnecessary if DataView were designed better, but things are
 what they are. :)

 On 4 December 2014 at 12:16, Mark S. Miller erig...@google.com wrote:
  On Thu, Dec 4, 2014 at 6:25 AM, Katelyn Gadd k...@luminance.org wrote:
  [...]
  I should also note that while much of the above is speculative and
  based on intuition/experience, I *have* been shipping a use of WeakMap
  for performance-critical code for over a year now
 
  Hi Katelyn, could you say more about your shipping code? Is the code
  something you could post or make available? Thanks.
 
 
  --
  Cheers,
  --MarkM


Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Steve Fink
On 12/04/2014 08:00 PM, Katelyn Gadd wrote:
 I do still use WeakMap in a few other places, for example to implement
 Object.GetHashCode. This is a case where the transposed representation
 is likely optimal - though in practice, I shouldn't need any sort of
 container here, if only the hashing mechanisms clearly built into the
 VM were exposed to user JS.

If I am understanding correctly, I don't think there is any such hashing
mechanism in the Spidermonkey VM. We hash on an object's pointer
address, which can change during a moving GC. (We update any hashtables
that incorporate an object's numeric address into their hash key
computations.)

I'm a little curious what you're generating the hashcode from. Is this
mimicking a value object? If the contents of the object change, would
you want the hashcode to change? Or are the hashcodes just
incrementing numerical object ids?

(Sorry for the tangent to the current thread.)



Re: Removal of WeakMap/WeakSet clear

2014-12-04 Thread Katelyn Gadd
.NET's hashing protocol is weird and arguably it's some awful baggage
carried over from its Java influences. All instances, value or
reference type, have GetHashCode. For a given type there is the
'default' implementation, or you can provide a specific one. This
enables anything to be used as a key in a container like a
dictionary/map.

For reference types, GetHashCode's default implementation assigns an
object a semi-unique value that persists for the entire lifetime of
the instance. So in a non-moving GC you could use the pointer address,
but in a moving GC the runtime is basically assigning it a permanent
identifier that sticks with the instance somewhere. For value types,
the default hashing implementation basically walks over the whole type
and hashes the fields to create a hash for the value as a whole.

I'm surprised to hear that JS runtimes don't necessarily have ways to
'hash' a given JS value, but it makes sense. I can see how that is a
great reason for 'get me a hash for this value' to never actually
exist in the API, even if it's unfortunate that I have to recreate
that facility myself in runtimes that do have it.

-kg

On 4 December 2014 at 21:24, Steve Fink sph...@gmail.com wrote:
 On 12/04/2014 08:00 PM, Katelyn Gadd wrote:
 I do still use WeakMap in a few other places, for example to implement
 Object.GetHashCode. This is a case where the transposed representation
 is likely optimal - though in practice, I shouldn't need any sort of
 container here, if only the hashing mechanisms clearly built into the
 VM were exposed to user JS.

 If I am understanding correctly, I don't think there is any such hashing
 mechanism in the Spidermonkey VM. We hash on an object's pointer
 address, which can change during a moving GC. (We update any hashtables
 that incorporate an object's numeric address into their hash key
 computations.)

 I'm a little curious what you're generating the hashcode from. Is this
 mimicking a value object? If the contents of the object change, would
 you want the hashcode to change? Or are the hashcodes just
 incrementing numerical object ids?

 (Sorry for the tangent to the current thread.)
