Re: Good-bye constructor functions?

2013-01-09 Thread Herby Vojčík

Sorry if I was too dense in my reply, so that things weren't understood. I
don't know how to put things efficiently in a reply, because without the
big picture, isolated cherry-picks seem ridiculous, and a properly put big
picture is often tl;dr-ed.

Concrete replies below.

Allen Wirfs-Brock wrote:

On Jan 8, 2013, at 12:45 AM, Herby Vojčík wrote:



Allen Wirfs-Brock wrote:

On Jan 7, 2013, at 4:09 PM, Herby Vojčík wrote:


...

I just don't see such an inconsistency in the current ES6 spec.
draft. A constructor is just a function that is usually called
with a this value. This is true whether it is called by the
[[Constructor]] internal method or via a super/super.constructor
call. In either case, the primary purpose of the constructor is
to perform initialization actions upon the this value. Where is
the inconsistency?

(I claim that) in any circumstances, what developers want to
express when writing `super(...args)` in the constructor of
SubFoo is: on my this, which is now an instance of SubFoo, I want
the identical initialization code to be run which `new
Foo(...args)` would run to initialize a newly created instance of
Foo.

That's not true, because the spec is trying to serve two masters:
F and F.prototype.constructor. It is impossible.

The fixed semantics of [[Construct]] for `class` ((1) above) fixes
this by serving only one master: F.prototype.constructor (in line 3).


I agree with your above statement about initialization.  But I also
contend that that is exactly what the current specification of super does
within a constructor function (subject, of course, to what the
invoked methods are actually coded to do).  What I don't see is why
you


What is? super does call the .prototype.constructor of the superclass,
yes, I know that well, ...

think otherwise.  I need a clearer concrete explanation of what you
see as the problem, preferably without a forward reference to what you
think is the solution.

...but new does not call the .prototype.constructor.

So this does not hold for the `super(...args)` behaviour:
on my this, which is now an instance of SubFoo, I want the identical
initialization code to be run which `new Foo(...args)` would run to
initialize a newly created instance of Foo.

And if you argue they are identical at the beginning, I say they can be
desynchronized, they will be, and it does not matter what the default case is.

This state of serving two masters (new serves F, super serves
F.prototype.constructor) is a design issue / inconsistency / bug in the
core of the language.

And sorry if I mention the solution (it is simply to call
.prototype.constructor in new for `class`), but it saves the model of
super-without-special-cases for constructors, which is fine (special
cases aren't).
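
A minimal sketch of the two-masters split, in plain ES5-style code (names
are illustrative, not from any spec draft):

// `new Foo` runs Foo itself, while a super-style call goes through
// Foo.prototype.constructor; once the two diverge, they stay diverged.
function Foo(x) { this.x = x; }             // master #1: served by `new`
Foo.prototype.constructor = function (x) {  // master #2: served by super()
  this.x = x * 2;
};

var viaNew = new Foo(1);                    // viaNew.x === 1
var viaCtor = Object.create(Foo.prototype);
Foo.prototype.constructor.call(viaCtor, 1); // viaCtor.x === 2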


The only anomaly I see is that a handful of legacy built-ins do
completely different things for new-operator-originated calls in
contrast to regular function calls. There is no confusion in
the spec. about this and the mechanism for accomplishing it.
However, such split behavior can't be declaratively defined in a
normal function or class declaration. In the other thread at
https://mail.mozilla.org/pipermail/es-discuss/2013-January/027864.html
I described how this split behavior can be procedurally described
in such declarations, and also described how the same technique
can be applied to the offending built-in constructors (or any
user-defined class constructor) to discriminate between
initialization and called behavior, even when called via
super.

Yes, but it is a workaround.


Or, alternatively stated, it shows how the objective can be met
without further complicating the actual language.  That is
arguably a good thing, not just a workaround.


Taken ad absurdum: JS is Turing-complete, so you can achieve anything
without complicating the actual language.


The only fix I saw that you made to super was that in your
sketch, constructor methods defined via a class definition are
only used for instance initialization, so there doesn't need to
be any code to discriminate between initialization and call
behavior. Constructor behavior is always initialization
behavior.

Yes, it is there. As a free bonus, btw. The main driver for the
design was fixing the new/super two-masters problem. Then, later (Brendan
Eich will not like this ;-) ), the beauty of breaking the tight coupling
and not needing [[Call]] at all for an object with [[Construct]].

There are in fact TWO freebies:
- [[Call]] is separated from [[Construct]];
- you get the default constructor for free, by not defining it explicitly.
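
A sketch of the proposed [[Construct]] as a user-land function (assuming
ES5 Object.create; this is my reading of the proposal, not spec text):

function construct(Cls, args) {
  var obj = Object.create(Cls.prototype);
  // `new` always dispatches through .prototype.constructor, so a subclass
  // that defines no constructor of its own inherits the superclass's one
  // via the prototype chain: the free default constructor.
  obj.constructor.apply(obj, args);
  return obj;
}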


Sorry, I still don't see it; you have to explain both the problem
and your solution more concretely.


You probably see the [[Call]] / [[Construct]] separation: the class object's
[[Call]] is orthogonal to what [[Construct]] does, since the latter is
specified to call .prototype.constructor. It follows naturally from the
fact that class !== constructor.

So the issue needing explanation is the free default constructor, I
presume (tell me if I am wrong).

It goes this way: if the [[Construct]] has proposed 

Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 08/01/2013 22:59, Erik Arvidsson wrote:
On Tue, Jan 8, 2013 at 4:46 PM, David Bruant bruan...@gmail.com wrote:



One idea would be that, in the short term, implementations reliably
*always* throw when trying to wrap non-ECMAScript objects and play
with them as if they were legit browser objects. If the need
becomes strong, implementations can later allow it (turning an
error into a legit thing is usually not web-breaking).


At least in V8/JSC for WebKit, DOM objects are real ECMAScript objects.

I meant "non-ECMAScript-standard objects" when I wrote "non-ECMAScript
object". Sorry for the confusion.


David


Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 09/01/2013 06:57, Boris Zbarsky wrote:
And specifically, it's not clear what the behavior should be when 
there are two different scripted proxies for the same WebIDL object.  
Say it's a DOM node.  One of the proxies gets passed to appendChild.  
When later getting that node out of the DOM with .firstChild, what 
should be handed back?  The proxy that was passed in, the JS object 
that proxy was wrapping, something else (e.g. an exception is thrown)?
The principle of least surprise would say the proxy that was passed in
(at least for the sake of object identity checking). Proxies are
supposed to be used as a replacement for their target.
Also if you wrap untrusted code in a membrane [1], you don't want this 
untrusted code to be handed the actual target, but the wrapped object 
(so the proxy).
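
A minimal membrane sketch (assuming ES6 Proxy and WeakMap; it only wraps
on [[Get]], so it illustrates the identity argument rather than being a
complete membrane):

var wrappers = new WeakMap();
function wrap(target) {
  if (Object(target) !== target) return target; // primitives pass through
  if (wrappers.has(target)) return wrappers.get(target);
  var proxy = new Proxy(target, {
    get: function (t, name) {
      return wrap(t[name]); // objects crossing the boundary stay wrapped
    }
  });
  wrappers.set(target, proxy); // one stable wrapper per target
  return proxy;
}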


Regardless of the answer to the specific point you bring up, there are
hundreds of such questions to be answered before proxies can wrap
browser objects. W3C specs are nowhere near ready to explain what
would happen if a proxy were passed as an argument to a method.


Proxies expose the guts of algorithms applied to objects (the exact sequence
of internal methods with arguments). This is true for ECMAScript standard
objects, and it would be equally true for browser objects. Since these guts
aren't specified in ECMAScript semantics, exposing them now would
lead at best to non-interoperable behaviors, at worst to security issues
(due to C++ proximity).



I agree with Andreas about the convenience for web developers [2], but I
doubt it would be practical to have it in the short term, both because of
under-specification and implementation complexity.
Let's wait for a combination of 1) authors using proxies, 2)
implementors moving forward on WebIDL compliance and 3) proxies being
introduced in the spec world (WindowProxy, live objects...). When these
3 aspects have moved forward enough, maybe it will be time to think
about wrapping browser objects; but for now, none of the 3 populations
seems mature enough for this to happen.


David

[1] http://soft.vub.ac.be/~tvcutsem/invokedynamic/js-membranes
[2] I regularly meet developers who aren't aware of the
ECMAScript/browser-objects divide and call all of that "JavaScript".



Re: direct_proxies problem

2013-01-09 Thread Andrea Giammarchi
All this reminds me, hilariously, of the Object.defineProperty case in
Internet Explorer 8 ... the infamous implementation that works only on
global and DOM objects. Here we have the opposite behavior: something so
powerful that it could let us redefine W3C specs, or eventually make them
consistent across browsers, where every time something is needed we don't
even need to wait for specs to be approved (I know, this might be the
beginning of the Chaos too), and it will land in the Web development world
half-baked and with known inconsistencies able to break normal code.

So, at least, I would expect an error **when** the new Proxy is not able to
replace the identity of the first argument and not after, because after is
simply too late and I think completely pointless.
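
A hypothetical guard illustrating the fail-early behavior (the function
name is made up, not a proposal):

function createDOMSafeProxy(target, handler) {
  // Assumption: the engine cannot preserve a DOM node's native identity
  // through a proxy, so fail at construction time, not at appendChild time.
  if (typeof Node !== "undefined" && target instanceof Node) {
    throw new TypeError("Cannot proxy a DOM Node");
  }
  return new Proxy(target, handler);
}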

Just my 2 cents.

br


Re: On defining non-standard exotic objects

2013-01-09 Thread David Bruant

On 31/12/2012 13:43, Tom Van Cutsem wrote:

Hi David,

I generally support this view, although it may be too restrictive to 
completely disallow non-standard exotics.


Instead, I would be happy if we could just state that non-standard 
exotics are expected to uphold the invariants of the ES6 object 
model, which will presumably be codified in Section 8.1.6.3.
I disagree with this view, for reasons Marc Stiegler expresses really
well in his "Lazy Programmer's Guide to Secure Computing" talk [1].


The current situation is that ES folks would like to impose some
restrictions on how the language can be extended, especially when it
comes to objects. To the extent possible (and it looks possible), the
limits would be the ones set by what proxies can do.

Now, there are 2 choices:
1) do whatever you want within these boundaries defined in prose
2) proxies are the most powerful extension mechanism at your disposal

The former requires people to read the spec carefully and become very
intimate with it. The recent issue you have solved [2] shows a very
subtle problem. I don't think non-standard exotic object spec writers
should be expected to understand all these subtleties. And I don't think
every subtlety can be documented in the spec. At every non-standard
exotic object spec change, people will have to re-review whether all
properties are properly enforced.
The latter solution is the ocap-style lazy one (in Marc Stiegler's
sense). All properties that have been carefully weighed and crafted on
es-discuss will apply to non-ECMAScript standard objects. Spec writers
for these objects won't need to know the details. They just have to fit
in the box they are provided, and the subtleties are taken care of by
the design of the box.


One point I could understand is that maybe script proxies will not
necessarily make a convenient box for spec writers. If this is really
an issue, ECMAScript could define an intermediate proxy representation
that would be used to spec proxies and by other spec writers. But I
think non-standard exotic object spec writers could work easily with:
"object X is a proxy whose target is a new Y object and whose handler is
a frozen object whose trap methods are [enumerate traps and their
behaviors; specifically mention absent traps and that they're absent on
purpose]"


David

Ps: "non-standard exotic objects" is way too long to type. I'll be
aliasing that to "host objects" from now on.


[1] http://www.youtube.com/watch?v=eL5o4PFuxTY
[2] https://github.com/tvcutsem/harmony-reflect/issues/11


Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 09/01/2013 14:55, Andrea Giammarchi wrote:
All this reminds me, hilariously, of the Object.defineProperty case in
Internet Explorer 8 ... the infamous implementation that works only on
global and DOM objects. Here we have the opposite behavior: something
so powerful that it could let us redefine W3C specs, or eventually make
them consistent across browsers, where every time something is needed
we don't even need to wait for specs to be approved (I know, this
might be the beginning of the Chaos too), and it will land in the Web
development world half-baked and with known inconsistencies able to
break normal code.
With the shadow target idea, you could make a huge number of things
consistent across browsers, I think. It costs twice as many objects, and I
don't know about time performance, but at least you get cross-browser
consistency.


So, at least, I would expect an error **when** the new Proxy is not 
able to replace the identity of the first argument and not after, 
because after is simply too late and I think completely pointless.
I strongly agree with this idea. Either the object can be safely wrapped,
and it is; or, if it cannot, throw on the Proxy constructor call.


David


Re: direct_proxies problem

2013-01-09 Thread Boris Zbarsky

On 1/9/13 5:24 AM, David Bruant wrote:

When later getting that node out of the DOM with .firstChild, what
should be handed back?  The proxy that was passed in, the JS object
that proxy was wrapping, something else (e.g. an exception is thrown)?

The principle of least surprise would say the proxy that was passed in


That's actually not that great either.  If you're handing out proxies as 
membranes, and the caller of .firstChild should be getting a different 
membrane than the caller of appendChild had, you lose.



Also if you wrap untrusted code in a membrane [1], you don't want this
untrusted code to be handed the actual target, but the wrapped object
(so the proxy).


If you want membranes you have to be able to pick the right membrane 
when handing out the object from any WebIDL method/getter, basically.



Proxies expose the guts of algorithms applied on objects (exact sequence
of internal methods with arguments).


Yes, true, for purposes of ES stuff.


it would be equally true for browser objects


I'm not quite sure what this part means.


Since these guts aren't specified in ECMAScript semantics


Again, not sure what that means.

The way the DOM works in practice right now, if one were to implement it 
in ES, is that each script-exposed object is just a normal ES object 
with some getters/setters/methods on its proto chain.  There is also a 
bunch of state that's not stored in the objects themselves, and a Map or 
WeakMap from the objects to their state, depending on the 
implementation; the GC issues are a bit complicated.  The 
getters/setters/methods work on this out-of-band state, for the most 
part (there are some exceptions; e.g. 
http://dev.w3.org/2006/webapi/WebIDL/#dfn-attribute-setter for the 
[PutForwards] case, though that may not match UA behavior well enough; 
we'll see).


So in this view, passing in a proxy should not work, because it can't be 
used to look up the out-of-band state.
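
A sketch of that model (all names illustrative; this is the shape of the
out-of-band-state argument, not how any real engine is written):

var domState = new WeakMap(); // out-of-band state, keyed on the object

function MyNode() {
  domState.set(this, { firstChild: null });
}
Object.defineProperty(MyNode.prototype, "firstChild", {
  get: function () {
    var state = domState.get(this);
    // A proxy wrapping a MyNode is not a key in the WeakMap, so the
    // out-of-band state cannot be found and the getter fails.
    if (!state) throw new TypeError("not a MyNode");
    return state.firstChild;
  }
});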


Now if you happen to have access to engine-level APIs you can unwrap the 
proxy and use the target to index into the Map... but at that point I 
agree that you've left ES semantics land.


Now maybe you're arguing that the above model is wrong and there should 
be some other model here.  I welcome you describing what that model is. 
 But the above model, I believe, is fully describable in ES semantics 
(and in fact dom.js does exist).



I agree with Andreas about the convenience for web developers [2] but I
doubt it would be practical to have it in the short term both because of
under-specification and implementation complexity.


Agreed, at this point.


Let's wait for a combinaison of 1) authors using proxies, 2)
implementors move forward on WebIDL compliance


Of course, the more implementors invest in a rewrite of their DOM stuff,
the less likely they are to want to change it.


So if we think we should be changing things somehow in the future, and
have a good idea of what those changes will look like, it is better to
lay the groundwork now.  Rewriting the binding layer for a browser is
a pretty massive project, and there's a limit to how often UAs want to
do it.  ;)


-Boris


Re: direct_proxies problem

2013-01-09 Thread Andrea Giammarchi
The only part I am slightly skeptical about is that if developers cannot
use proxies with DOM nodes, there's really not much to wait for ... they'll
not use them, end of story.

If you mean waiting for developers to complain the same way I did ... oh
well, I think it's just a matter of time. I'll be here that day :D

Still, meanwhile, I would just change the current behavior to throw when
Proxy cannot work, i.e. with the DOM ... or every library will have to
try/catch a generic appendChild in its code, for example, because of this
misleading possibility of wrapping the DOM without the ability to retrieve
the wrapped content back.

br




Re: On defining non-standard exotic objects

2013-01-09 Thread Brandon Benvie
On Wed, Jan 9, 2013 at 9:07 AM, David Bruant bruan...@gmail.com wrote:

 The current situation is that ES folks would like to impose some
 restrictions on how the language can be extended, especially when it comes
 to objects. To the possible extent (and it looks possible), the limits
 would be the one set by what's possible by proxies.
 Now, there are 2 choices:
 1) do whatever you want within these boundaries defined in prose
 2) proxies are the most powerful extension mechanism at your disposal

 The former requires people to read the spec carefully, become very
 intimate with it.

...

They just have to fit in the box they are provided and subtleties are taken
 care of by the box, by design of how the box is being designed.

 One point I could understand is that maybe script proxies will not
 necessarily make a conveninent box for spec writers. If this is really an
 issue, ECMAScript could define an intermediate proxy representation that
 would be used to spec proxies and by other spec writers.


The crux of the matter is that the ES5 spec doesn't really allow for aspect
oriented use of the internal methods. The granularity provided is pretty
much at the method level: if you want to reuse the core functionality of
say [[DefineOwnProperty]] then you're basically committing to
reimplementing the whole thing, or at best pre- or post- processing the
input/output of the standard ones. Proxies allow for deferring to the spec
implementation and letting it do the heavy lifting of ensuring the internal
consistency that a finely polished object protocol provides, and then
stepping in to make the (usually small) adjustments needed for the exotic
functionality. Proxies are only desirable as a model for implementers
because of a lack of other options provided by the spec. They are actually
pretty poorly suited in many ways because their intended use is for
untrusted code.

The last couple revisions of the ES6 spec directly address this problem in
a much better way for implementers. The core functionality of the internal
methods is being split out into separate abstract operations that can be
composed with the additional exotic functionality an implementer wishes to
add. The flexibility that Proxies provide can be attained without layering
on the added complexity that describing something in terms of Proxies
requires.

Specifically, I'm referring to things
like OrdinaryGetOwnProperty, OrdinaryDefineOwnProperty,
ValidateAndApplyPropertyDescriptor, OrdinaryConstruct,
OrdinaryCreateFromConstructor, OrdinaryHasInstance, etc., as well as
indexed delegated objects, which are a reusable solution for the most
common form of exotic object. Along with these methods are the various
hooks they expose to implementers so that, for some things, it's not even
required to override an internal method at all (@@create and @@hasInstance
being examples).
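
For instance, an @@hasInstance hook lets you customize instanceof without
overriding any internal method (a sketch, assuming an engine that exposes
Symbol.hasInstance and the ES6 instanceof protocol):

var EvenNumber = {};
Object.defineProperty(EvenNumber, Symbol.hasInstance, {
  value: function (x) { return typeof x === "number" && x % 2 === 0; }
});
console.log(4 instanceof EvenNumber); // true
console.log(5 instanceof EvenNumber); // false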


Re: direct_proxies problem

2013-01-09 Thread Alex Russell
On Tuesday, January 8, 2013, Tab Atkins Jr. wrote:

 On Tue, Jan 8, 2013 at 12:40 PM, Andrea Giammarchi
 andrea.giammar...@gmail.com wrote:
  So, I am playing with FF 18 and I have this behavior:
 
  var a = new Proxy([], {});
  console.log(a instanceof Array); // true
  console.log(Array.isArray(a));   // true
  console.log({}.toString.call(a));// [object Array]
 
  Function.apply(null, a); // anonymous()
 
  Cool uh? there's no way to tell that a is not actually an array but
 rather a
  proxy: awesome!!!
 
  Now I go in that dark place called DOM:
 
  var n = new Proxy(document.createElement("p"), {});
  console.log(n instanceof HTMLElement);// true
  console.log({}.toString.call(n)); // [object HTMLParagraphElement]
  document.body.appendChild(n);
  // Error: Could not convert JavaScript argument arg 0
  [nsIDOMHTMLBodyElement.appendChild]
 
  Is this meant? 'cause it looks like we have half power here and once
 again
  inconsistencies ... thanks for explaining this to me.

 As Francois points out, this is a known problem.  DOM objects don't
 live solely in JS - they tend to have C++ backing classes that do a
 lot of the heavy lifting.  Your average JS object, without such C++
 support, simply won't work with the JS methods that, when they call
 back into C++, expect to see the DOM C++ classes.

 This is a problem elsewhere, too - Web Components really wants to make
 it easy to subclass DOM objects, but we've had to twist ourselves into
 knots to do it in a way that alerts the C++ side early enough that
 it can create the new objects with appropriate backing C++ classes.

 It can potentially be solved by efforts like moving the DOM into JS,
 which browsers are pursuing to various degrees, but it's a hard
 problem on the impl side.  Meanwhile, there is unforgeable magic
 behind DOM objects, and nothing we can really do about it.


Well, we can attempt (as much as is practicable at any point) to avoid
blessing this split and the crazy behaviors it begets. Designing for a
DOM-in-JS world is the only way to stay sane. What JS-in-DOM can't do that
the DOM needs, we should add. But not much more.


Re: direct_proxies problem

2013-01-09 Thread Jason Orendorff
On Tue, Jan 8, 2013 at 11:54 PM, Brendan Eich bren...@mozilla.com wrote:

 Boris and I talked more 1:1 -- it is not clear when a direct proxy can be
 safely cast to its target. The internal proxies Gecko uses are known
 implementations where this is safe (with a security check). An arbitrary
 scripted direct proxy? Not safe in general, and stripping the proxy to its
 target may break the abstraction of that certain scripted proxy.


Hard-earned wisdom:

1. Proxies that form membranes always have wrap/unwrap operations that can
be sensibly applied in every situation, except access to an object's
implementation details, like internal properties; then it's unclear.

2. Proxies that exist only to give a particular object magical behavior
should never be cast to their target.

What 1 and 2 have in common is that only the handler ever sees both the
proxy and the target. The main lesson I've drawn from Mozilla's experience
with proxies is that this property is crucial.

Without it, you get a mess that's impossible to reason about. The symptoms
are a proliferation of special cases where something must be wrapped or
unwrapped with a lengthy comment explaining why; code where a particular
variable could be either a target or a proxy; and related bugs due to the
code naively assuming one or the other.

The default direct proxy you get with an empty handler does *not* have this
property; my hard-earned lesson predicts that we would therefore have a lot
of trouble figuring out exactly what it could usefully do, and in fact we
have had trouble. (It's not the only source of trouble; access to object
implementation details is trouble too.)
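
A small sketch of that property failing (assuming ES6 Proxy and Set): with
an empty handler, both the proxy and its target escape to ordinary code,
and identity checks diverge.

var target = {};
var proxy = new Proxy(target, {});
var seen = new Set([target]);
console.log(seen.has(proxy)); // false: two identities for "one" object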

-j


RE: direct_proxies problem

2013-01-09 Thread François REMY
  When later getting that node out of the DOM with .firstChild, what
  should be handed back? The proxy that was passed in, the JS object
  that proxy was wrapping, something else (e.g. an exception is thrown)?
  The principle of least surprise would say the proxy that was passed in

 That's actually not that great either. If you're handing out proxies as
 membranes, and the caller of .firstChild should be getting a different
 membrane than the caller of appendChild had, you lose.

I'm not sure I got your idea, but maybe I did. Okay, let's take a simple
example, then.


Let's say we have a P element and a membrane around it whose goal is to only
allow you to change textContent (but you can read everything else, if you want
to). When you read something and the return value is an object, it's
wrapped in a membrane where you can't change anything (that way, if you take
el.parentElement.firstChild, you get a second, more restrictive membrane for el).

What you first claim is that the membrane is completely ineffective if you
can retrieve the object via the DOM in any form other than the membrane you
used in the first place, because as soon as you added the element to the DOM,
you can use document.getElementById(el.uniqueID) to retrieve it unprotected
(or you can use something else). So whatever membrane was used should continue
to be used. From now on, any DOM API will need to return this
readonly-except-textContent version to preserve compatibility.

What you claimed second is that even if you do that, it's still a
broken design. Indeed, let's say you sent the readonly version to a function
to make sure it doesn't modify the object. Again, if you added the element
to the DOM, the function can retrieve it and will receive the other membrane,
which allows it to change textContent.


However, this second problem is not related to the DOM but to globally
accessible variables. If you maintain an object stored somewhere in the
globally accessible world (be it the DOM or any kind of global variable), even
if you create a membrane around those objects, the user can retrieve an
insecure version of it via the map. No membrane system can be secure through
global variable access, so we should not worry about that.


In my view, the only thing we want to make sure of with a Proxy is that you
can't actually extract the value of a DOMObject that's not added nor addable
to any globally accessible map (like a CustomEvent). If you can create an
element, call dispatchEvent on that element using the membrane, and get
the unmembraned CustomEvent in the event listener, then we have a problem.

I think it should be possible to work around this problem by making sure every
object has a C++ decorator (in this case ProxiedCustomEvent) which inherits
from the base class (CustomEvent), forwards all calls to the extracted object
identity, but has a method like getScriptValue that returns the membrane from
which the native value was extracted.

When a native method is called with a Proxy, a new
ProxiedCustomEvent(proxy.target, proxy) is created and passed to the native
method. When the ProxiedCustomEvent is given back to the script world, its
getScriptValue method is called to return the original proxy.

So, basically, el.appendChild(membrane) will cause el.lastChild === membrane
to be true, and dispatchEvent(membranedEvent) will not allow retrieving an
unmembraned event object.

Albeit possible, this is quite a bit of work, however. In the meantime, we
should probably make it impossible to proxy a host object whose object
identity relies on something other than the prototype chain. That means we
should probably get this concept of object identity specced somewhere.


The question in this case should be: can I create a secure but broken
readonly proxy from a DOM object that cannot be used as a parameter for native
functions (if we allow a Proxy to take the native object identity of the
target when used in native calls OR, in the meantime, if we make the
proxification throw)?

Yes: you can create a new, empty target with no object identity and create
getters/setters to encapsulate the real DOM object. Actually, you didn't
remove any ability by allowing proxies to take the object identity.
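
A sketch of that facade (names illustrative): a plain object with no DOM
identity whose accessors delegate to the real node.

function readonlyExceptTextContent(node) {
  var facade = {};
  Object.defineProperty(facade, "textContent", {
    get: function () { return node.textContent; },
    set: function (v) { node.textContent = v; } // the one writable piece
  });
  Object.defineProperty(facade, "tagName", {
    get: function () { return node.tagName; }   // readonly, like the rest
  });
  return facade;
}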

However, if the Proxy cannot possibly use the identity of the target object in
native calls, you'll not be able to emulate it. So, if we want to use the most
capable solution, we need a way to transfer target-to-proxy identity, and
that means we need to throw "not implemented" for the proxification of native
objects in the meantime.


The only problem I still see with the ProxiedXXX decorator approach is that,
normally, when you are using a proxy, you can control the value returned by
some property calls (let's say you pretend innerHTML is "" while this is not
true) but in the native world, your barrier will not apply and therefore one
can get the innerHTML by using a DOMSerializer.

Re: direct_proxies problem

2013-01-09 Thread Boris Zbarsky

On 1/9/13 11:30 AM, François REMY wrote:

However, this second problem is not related to the DOM but to
globally accessible variables.


Well, right, but the DOM sort of forces the problem on us because of 
window.document.



No membrane system can be secure through global variable access, so
we should not worry about that.


Actually, a membrane system _can_ be secure through global variable
access.  You just have to make sure that all accesses are performed via
the membrane and that you know, for any accessor, what the right membrane is.


An existence proof is the membranes Gecko+SpiderMonkey use for
security checks.  But those require that any time you're returning an
object from anywhere, you check who your caller is and what membrane they
should be getting.



In my view, the only thing we want to make sure of with a Proxy is
that you can't actually extract the value of a DOMObject that's not
added nor addable to any globally accessible map (like a
CustomEvent). If you can create an element, call dispatchEvent on that
element using the membrane, and get the unmembraned CustomEvent in the
event listener, then we have a problem.


OK.  So this assumes that all consumers should get the same membrane, right?


When the ProxiedCustomEvent is given back to the
script world, its getScriptValue method is called to return the
original proxy.


This is doable in theory.  I'd have to think about the performance impact.
One of the goals here, from a UA implementor's point of view, is that there
should be no performance hit from supporting this use case if you
_don't_ proxy WebIDL objects.



Albeit possible, this is quite a bit of work, however.


Yep.


The only problem I still see with the ProxiedXXX decorator approach
is that, normally, when you are using a proxy, you can control the
value returned by some property calls (let's say you pretend
innerHTML is "" while this is not true) but in the native world,
your barrier will not apply and therefore one can get the innerHTML
by using a DOMSerializer.


Yep.  Lots of issues like that, actually...


Because, when you think about it, I can deep-clone any
JS object and, except that o !== o2, they will be the same and work in
the same contexts


Well, you have to know about the underlying identity of the JS object, no?

If I deep-clone (in the sense of copying all own property descriptors 
and copying the proto chain) a Date onto an Object, I don't think that 
will work quite right in current implementations, for example.  Same for 
Array.  So properly deep-cloning involves detecting cases like that 
already and creating a clone of the right type...
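
For example (a sketch, ES5 only): copying a Date's own properties and its
proto chain does not transfer the internal [[DateValue]], so the clone is
broken.

var d = new Date();
var clone = Object.create(Object.getPrototypeOf(d));
Object.getOwnPropertyNames(d).forEach(function (k) {
  Object.defineProperty(clone, k, Object.getOwnPropertyDescriptor(d, k));
});
console.log(d.getTime()); // a timestamp
try {
  clone.getTime();        // throws: `this` is not a real Date
} catch (e) {
  console.log(e instanceof TypeError); // true
}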


-Boris


Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 08/01/2013 22:23, Tab Atkins Jr. wrote:

  Meanwhile, there is unforgeable magic
behind DOM objects, and nothing we can really do about it.
Do you have examples? Besides document.all being falsy, everything seems
to be emulable with ES6 proxies.


David


Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 09/01/2013 16:02, Boris Zbarsky wrote:

On 1/9/13 5:24 AM, David Bruant wrote:

When later getting that node out of the DOM with .firstChild, what
should be handed back?  The proxy that was passed in, the JS object
that proxy was wrapping, something else (e.g. an exception is thrown)?

The principle of least surprise would say the proxy that was passed in


That's actually not that great either.  If you're handing out proxies 
as membranes, and the caller of .firstChild should be getting a 
different membrane than the caller of appendChild had, you lose.



Also if you wrap untrusted code in a membrane [1], you don't want this
untrusted code to be handed the actual target, but the wrapped object
(so the proxy).


If you want membranes you have to be able to pick the right membrane 
when handing out the object from any WebIDL method/getter, basically.
I went out of my way to merge 2 incompatible use cases (putting a wrapped
node in a document, and a membrane for which the document would be wrapped
in the same membrane). Sorry about that.


I still think the object that was introduced is the one that should be
handed back. If you give some code access to a membraned version of
the DOM tree, you know whenever it wants a given node and can pick the
correct membraned node instead (a WeakMap can do this job really well).




it would be equally true for browser objects


I'm not quite sure what this part means.


Since these guts aren't specified in ECMAScript semantics


Again, not sure what that means.

The way the DOM works in practice right now, if one were to implement 
it in ES, is that each script-exposed object is just a normal ES 
object with some getters/setters/methods on its proto chain.  There is 
also a bunch of state that's not stored in the objects themselves, and 
a Map or WeakMap

or private symbols (used to be called private names)

from the objects to their state, depending on the implementation
There is some data associated with the object. Whether it's a (private)
property or a map entry is an implementation detail; my point is that
you can't state "a bunch of state that's not stored in the objects
themselves". Properties or the [[Extensible]] boolean could also not be
stored in the objects themselves; that's an implementation concern.


Choosing symbols or a weakmap would make a huge difference in how proxy
replacement would react to DOM algorithms.
If the DOM is defined in terms of accessing private properties, then
proxies can replace DOM objects transparently. Their unknownPrivateSymbol
trap will be called [2] and, if they don't throw, the access to the private
property will be transparently forwarded to the target without the
private symbol ever leaking (it actually wouldn't need to exist in
implementations).
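
A sketch of what a handler would look like under that trap (note: the
unknownPrivateSymbol trap comes from the strawman cited here and was
never standardized):

var p = new Proxy(document.createElement("div"), {
  unknownPrivateSymbol: function (target) {
    // Return normally: the private access is forwarded to the target
    // without the symbol ever leaking; throw here to deny the access.
  }
});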


That actually could work...

the GC issues are a bit complicated.  The getters/setters/methods work 
on this out-of-band state, for the most part (there are some 
exceptions; e.g. 
http://dev.w3.org/2006/webapi/WebIDL/#dfn-attribute-setter for the 
[PutForwards] case, though that may not match UA behavior well enough; 
we'll see).


So in this view, passing in a proxy should not work, because it can't 
be used to look up the out-of-band state.


Now if you happen to have access to engine-level APIs you can unwrap 
the proxy and use the target to index into the Map... but at that 
point I agree that you've left ES semantics land.

I agree with this analysis.

Now maybe you're arguing that the above model is wrong and there 
should be some other model here.  I welcome you describing what that 
model is.

That would be representing private state with private symbol properties.


But the above model, I believe, is fully describable in ES semantics

I think so too.


(and in fact dom.js does exist).
Dom.js started and is developed in a world where no engine has
implemented symbols.
Also, last I checked, it adds non-standard conveniences [2] and
_properties for private state [3][4]. Am I looking at the wrong version?




I agree with Andreas about the convenience for web developers [2] but I
doubt it would be practical to have it in the short term both because of
under-specification and implementation complexity.


Agreed, at this point.


Let's wait for a combinaison of 1) authors using proxies, 2)
implementors move forward on WebIDL compliance


Of course, the more implementors invest in a rewrite of their DOM stuff,
the less likely they are to want to change it.


So if we think we should be changing things somehow in the future, and
have a good idea of what those changes will look like, it is better to
lay the groundwork now.  Rewriting the binding layer for a
browser is a pretty massive project, and there's a limit to how often
UAs want to do it.  ;)

I hear you :-)
Yet, assuming the private symbol idea would be the way to go, there is 
still a need for specs to define private state in terms of private 
symbols before implementations can start, no? I guess, worst 

Re: direct_proxies problem

2013-01-09 Thread Boris Zbarsky

On 1/9/13 12:13 PM, David Bruant wrote:

There is some data associated with the object. Whether it's a (private)
property or a map entry is an implementation detail

...

Choosing symbols or a weakmap would make a huge difference in how proxy
replacement would react to DOM algorithms.


Then it's not an implementation detail, now is it?  I'm having a hard 
time understanding how you can reconcile those two sentences.



If the DOM is defined in terms of accessing private properties, then
proxies can replace DOM objects transparently. Their unknownPrivateSymbol
trap will be called [2] and, if they don't throw, the access to the private
property will be transparently forwarded to the target without the
private symbol ever leaking (it actually wouldn't need to exist in
implementations).

That actually could work...


I'm not sure I see how yet, but I'm not as familiar with proxies as you 
are.  I assume the link above was actually [1]?  I'm having a hard time 
making sense of it, largely due to missing context, I think.


What happens if unknownPrivateSymbol throws?  Would internal DOM 
algorithms like the serialization algorithm invoke unknownPrivateSymbol? 
 If so, would unknownPrivateSymbol be allowed to modify the DOM tree?



That would be representing private state with private symbols properties.


OK, see above.


Dom.js started and is developed in a world where no engine has
implemented symbols.


It started in a world with no WeakMap or Map either.


Also, last I checked, it adds non-standard conveniences [2] and
_properties for private state [3][4]. Am I looking at the wrong version?


I believe it adds those on the target object but the thing it hands out 
to script is actually a proxy for that object.


I'm not sure how it handles expando property sets that collide with 
its _properties, though.  But again, it was developed in a world without 
WeakMap/Map.



Yet, assuming the private symbol idea would be the way to go, there is
still a need for specs to define private state in terms of private
symbols before implementations can start, no?


Indeed.


implementations would be
non-interoperable only in how many times the unknownPrivateSymbol trap
is called, which I don't think is really a big deal.


Whether it's a big deal depends on when it's called and what it can do. 
 If it can have side-effects, non-interop in how many times it's called 
is a big deal to me.


-Boris


Re: On defining non-standard exotic objects

2013-01-09 Thread Allen Wirfs-Brock
I'm uncomfortable with the blurring of the boundary between specification and 
implementation that I see in this thread.

Proxy is the only standard extension mechanism provided by ES6 for defining 
non-standard exotic objects. If non-proxy-based non-standard exotic objects 
exist within an implementation, then they must be defined using some other 
extension mechanism that is inherently implementation-dependent.  It is 
presumably also up to the implementation (or the host environment) to determine 
who has access to any such extension mechanism.  For example, such a mechanism 
might only be available to components that are statically linked with the 
implementation engine.

Regardless of the interfacing mechanism used to define non-standard exotic 
objects, all exotic objects are required by the standard to conform to the 
essential invariants that will be defined in section 8.1.6.1 (this section 
number will ultimately change).  An implementation-specific extension mechanism 
might actively enforce those invariants, similarly to what the Proxy mechanism 
is specified to do.  Or, it might place that burden upon the component that is 
using the extension mechanism, in which case any component that defines objects 
that violate the 8.1.6.1 invariants would have to be considered a buggy 
component, just as would a component that violated any other interfacing rule 
of the extension mechanism.

David seems to be primarily concerned about people who are writing specs. for 
non-standard exotic objects (e.g., W3C/WebIDL-based spec. writers) rather than 
implementors of such objects.  In that case, it is probably reasonable for such 
a spec. writer to assume that the objects must be implementable using the Proxy 
mechanism.  After all, that is the only extension mechanism that is guaranteed 
to be available in a standards-compliant ES6 implementation.  That still 
doesn't mean that such a spec. writer doesn't need to understand the ES object 
invariants, as they shouldn't be writing any specification requirements that 
violate those invariants. However, it does mean that they should be able to 
test their specification by doing a Proxy-based prototype implementation.  I 
can even imagine that such a prototype implementation could be made a mandatory 
part of the spec. development process.

In the end, specifications don't have any enforcement power, and perhaps not 
even all that much moral authority.  If an implementation really needs to do 
something that is forbidden by a spec., it will do it anyway. Browser 
implementations and HTML5 certainly take this perspective WRT Ecma-262 and 
re-specify things that don't match current browser requirements.

I don't see any need for an intermediate proxy representation, or for 
attempting to limit non-proxy-based extension mechanisms.  However, if Proxy 
is not sufficiently powerful to support everything that needs to be done in 
the real world (and in particular by browsers), then we probably should be 
looking at how to fill those deficiencies.

Allen





Re: On defining non-standard exotic objects

2013-01-09 Thread Allen Wirfs-Brock

On Jan 9, 2013, at 7:24 AM, Brandon Benvie wrote:

 
 [...]

Except that the spec. is not describing a required implementation factoring, 
just required results.  There is no particular reason that an implementation 
must have the same internal factoring as the spec., or that an implementation 
exposes, via an implementation-specific extension mechanism, the equivalent of 
the abstract operations used within the specification.  So while the spec. 
refactoring hopefully makes life easier for future spec. writers (including me) 
and for readers of the spec., I don't think it helps implementors to use object 
aspects in the way you seem to be envisioning.

On the other hand, the @@hooks are intentionally designed to provide extension 
mechanisms that operate above the level of the MOP (internal methods) and its 
object invariants. As much as possible, I want to use that style of extension 
hook and avoid extending or complicating the MOP.

Allen


Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 09/01/2013 19:43, Boris Zbarsky wrote:

On 1/9/13 1:23 PM, David Bruant wrote:

What happens if unknownPrivateSymbol throws?

I'm not sure yet. My guess is that the error should be propagated


So just to make sure we're on the same page here...  Say I have a 
proxy for a div and I put it in the DOM.  Say my page has:


  <style>
    section > div { color: green; }
  </style>

Should matching that selector against the div call 
unknownPrivateSymbol when getting the parent of the div?
Debatable. Here, there is no need to work with private state. The 
nodeType, tagName and parentNode (all public) are enough to do the 
matching, I think.

So the unknownPrivateSymbol trap wouldn't be called, but the get trap would.
But the public properties could also be reflecting the state of private 
properties.



If so, what should it do if it throws?
I guess swallow it in that case. But maybe forward it for qS/qSA... or
swallow it and consider the element as non-matching. I don't know what's
most useful.


Note, by the way, that UAs are working on doing off-main-thread 
selector matching and that the exact timing/ordering of selector 
matching is not defined and won't be (because it's a state thing, not 
a procedural algorithm), so doing anything script-observable anywhere 
here is pretty weird.
Agreed. Maybe this point settles the argument. That's what I was 
referring to when I was talking about exposing the guts of DOM objects. 
The downside is that it forces defining a lot of things as algorithms, 
at the expense of optimizations like the one you describe.






If the serialization algorithm is represented as private-symbol'ed
methods on objects, then doing a [[Get]] with this symbol as argument
would call the unknownPrivateSymbol trap. The result of this trap (throw
or return; the return value is ignored) determines whether the algorithm
is actually called.


That wasn't my point.  My point was what happens to the tree traversal 
the serialization algorithm does if the firstChild member (not the 
getter, the actual internal state that stores the first child) is 
defined to be a private symbol?
Oh, ok, I'm not familiar with this algorithm. If firstChild is a 
private symbol, then the unknownPrivateSymbol trap would be called. If 
the public firstChild is called, the get trap is.



unknownPrivateSymbol is a trap, so I'm not sure I understand your
question.


My question boils down to this: are we talking about introducing 
things that would be able to modify a DOM tree while it's being 
iterated by internal browser algorithms?  Because doing that is not 
acceptable.


It sounds like so far the answer is maybe, depending on how those 
traversals are defined in the specs
Yes, depending on how they are defined, but pretty much anytime you 
touch a proxy, it calls a trap, either the unknownPrivateSymbol or the 
get trap.
Imagine a proxy for which the unknownPrivateSymbol and get traps would 
add a new element randomly anywhere in the DOM tree.

I agree it'd be atrocious!
You've convinced me against proxies for DOM Nodes.
It could still make sense to wrap a DOM Node with a proxy to perform 
[[Get]] and [[Set]], etc., but definitely not to put it in the DOM tree.


Now, the web platform defines a lot of other objects for which wrapping 
them with a proxy could make sense. I guess it would need to be on a 
case-by-case basis.




It can have side-effects; the only important case is whether a public
method call results in 0 or 1+ calls on a given object.


Uh... no.  How can that be the only important case???


What I meant
above (but didn't say) is that whether it's called (0 or 1) is
important, but if it's 1 or 5 times for a given public method call, it
doesn't matter much.


For a function with side-effects this seems blatantly false to me, so 
I must be missing something.  What am I missing?
I hadn't thought of cases like selector matching. I was thinking of 
function calls like appendChild that could be considered atomic: you 
call it once and, whatever happens in the middle (however many trap 
calls), in the end the operation either happened or it didn't, and trap 
call side-effects probably don't affect the appendChild algorithm. It 
actually depends on how it's exactly specified.
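For concreteness, a sketch of counting trap calls per public operation,
assuming (hypothetically) that a DOM node were a workable proxy target;
the exact count would be engine-dependent:

var count = 0;
var node = document.createElement("div");
var p = new Proxy(node, {
  get: function (target, name) {
    count++;
    var v = target[name];
    // Bind methods to the real node so the native call doesn't see the proxy.
    return typeof v === "function" ? v.bind(target) : v;
  }
});
p.appendChild(document.createElement("span")); // one public call
console.log(count); // at least 1 (for "appendChild"); 1 vs 5 shouldn't matter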


David


Re: direct_proxies problem

2013-01-09 Thread Boris Zbarsky

On 1/9/13 2:45 PM, David Bruant wrote:

Debatable. Here, there is no need to work with private state. The
nodeType and tagName and parentNode (all public) are enough to do the
matching, I think.


No, because script can override them, but matching needs to not depend 
on that, right?
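For instance (a small sketch reusing the markup from the top of the
thread; the engine consults internal state, not the script-visible
accessor):

var div = document.querySelector("section > div");
// Shadow the public accessor with an own property that lies:
Object.defineProperty(div, "parentNode", {
  get: function () { return null; }
});
// Selector matching still walks the real (internal) parent chain:
console.log(document.querySelector("section > div") === div); // true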



So the unknownPrivateSymbol trap wouldn't be called, but the get trap
would.
But the public properties could also be reflecting the state of private
properties.


I'm confused again.  The public properties can do anything they want, 
since script can redefine them.



If so, what should it do if it throws?

I guess swallow in that case. But maybe forward for qS/qSA... or swallow
and consider the element as non-matching. I don't know what's most
useful.


What's most useful is not invoking any script at all for selector 
matching.  Note that the main consumer of selector matching is not 
qS/qSA but CSS rendering.



That wasn't my point.  My point was what happens to the tree traversal
the serialization algorithm does if the firstChild member (not the
getter, the actual internal state that stores the first child) is
defined to be a private symbol?

Oh OK, I'm not familiar with this algorithm. If firstChild is a
private symbol, then the unknownPrivateSymbol trap would be called. If
the public firstChild is accessed, the get trap is.


What happens right now is that private state is consulted that cannot be 
changed by script directly and which can be accessed with no side-effects.



Yes, depending on how they are defined, but pretty much anytime you
touch a proxy, it calls a trap: either the unknownPrivateSymbol trap or
the get trap.


OK.  I doubt that's acceptable for internal algorithms like 
serialization, fwiw.



Imagine a proxy for which the unknownPrivateSymbol and get traps would
add a new element anywhere randomly to the DOM tree.


Yes, exactly.   Done that already.  ;)


Now, the web platform defines a lot of other objects for which wrapping
them with a proxy could make sense. I guess it would need to be on a
case-by-case basis.


OK.  That might make sense; we'd have to look at specific cases.

-Boris


Re: direct_proxies problem

2013-01-09 Thread Andrea Giammarchi
David? You said: "It could still make sense to wrap a DOM Node with a
proxy to perform [[Get]] and [[Set]], etc. but definitely not put it in
the DOM tree."

So this is the current scenario... now, can you explain to me a single
case where you would need that? If you can't use the DOM node, then why
would you create a proxy of one of them? I thought you agreed on the
fact that new Proxy() should throw instantly if the target cannot be
proxified... in any case, here is a counter-example of what you guys
are discussing:

var
  o = {},
  p = new Proxy(o, {
    get: function (target, name) {
      console.log(name);
      return target[name];
    }
  })
;

p.test = 123; // no set trap: forwards to o
p.test;       // logs "test"
o.test;       // does nothing (no trap on the plain object)

At this point it would be more about deciding whether the DOM should
internally treat the o or the p.

Having special privileges, it could use o directly and pass back p
when it comes to the JS world. This would preserve performance. At the
same time, this would make the usage of proxies in the DOM world less
useful, because developers would be able to intercept only user-defined
interactions with these nodes. But hey, it's already better than now,
where developers can create DOM proxies and use them only in RAM, for
who knows what reason, because in the DOM, where these identities
belong, they fail.


In summary, as it is now, this from Francois is my favorite outcome of
the discussion:

"I would certainly understand if the ECMAScript group settled on not
working on proxied native elements and specified that it should throw
on creation. However, I would advise creating an Object.hasIdentity(...)
method that returns true if the given object has a native identity."
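As I read the proposal, the intended semantics would be roughly this
(Object.hasIdentity is hypothetical; no such API exists):

var node = document.createElement("div");
Object.hasIdentity(node);                // true: a native DOM object
Object.hasIdentity(new Proxy(node, {})); // false: a proxy around it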

br






Re: direct_proxies problem

2013-01-09 Thread Andrea Giammarchi
Last, but not least, even offline DOM proxies are pointless; here is
another example, without putting them in the DOM:

var p = new Proxy(
  document.createElement("p"),
  {}
);
try {
  p.appendChild(
    document.createElement("span")
  );
} catch(o_O) {
  console.log(o_O.message);
}
try {
  p.appendChild(
    new Proxy(
      document.createElement("span"),
      {}
    )
  );
} catch(o_O) {
  console.log(o_O.message);
}


So, as it is right now, there is not a single reason to make them
valid as a target for new Proxy, IMHO


br





Re: On defining non-standard exotic objects

2013-01-09 Thread David Bruant

On 09/01/2013 20:30, Allen Wirfs-Brock wrote:

David seems to be primarily concerned about people who are writing specs. for 
non-standard exotic objects (eg, W3C/WebIDL-based spec. writers)  rather than 
implementors of such objects.
I am indeed. When there is a spec, implementors only have to read it 
(and ask questions / submit test cases to whoever is writing the spec 
in case of doubt).



In that case, it is probably reasonable for such a spec. writer to assume that 
the objects must be implementable using the Proxy mechanism.  After all, that 
is the only extension mechanism that is guaranteed to be available in a 
standards compliant ES6 implementation.
It's also the most powerful one available and, by design, if I 
understand correctly, the most powerful that will ever be introduced to 
the ECMAScript language (for objects, I mean).
We had data properties in ES3, we got getters/setters in ES5, we're 
about to get proxies in ES6, and I think that's where the train stops 
for objects (maybe something will be introduced for document.all 
falsiness... maybe not? but that would be the actual last step).
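To illustrate the progression (a sketch; each step can intercept
strictly more than the one before):

// ES3: plain data properties, no interception at all.
var o1 = { x: 1 };

// ES5: per-property interception via accessors, but only for names
// declared in advance.
var o2 = {};
Object.defineProperty(o2, "x", {
  get: function () { return 42; }
});

// ES6 (draft): a proxy intercepts every property, even unknown names.
var o3 = new Proxy({}, {
  get: function (target, name) { return "got " + name; }
});
console.log(o3.anything); // "got anything"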



That still doesn't mean that such a spec. writer doesn't need to understand the 
ES object invariants as they shouldn't be writing any specification requirments 
that violates those invariants.
What I'm trying to get at is that these spec writers don't need to 
worry about the invariants if they define proxies. The invariants will 
take care of themselves.
If spec writers define proxies, by design, they won't be able to 
specify requirements that violate these invariants.
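For example, the proxy mechanism itself enforces the [[Get]] invariant
for non-configurable, non-writable data properties:

var target = {};
Object.defineProperty(target, "x", {
  value: 1, writable: false, configurable: false
});
var p = new Proxy(target, {
  get: function () { return 2; } // reports a value other than the real one
});
try {
  p.x; // the engine's [[Get]] invariant check rejects the lie
} catch (e) {
  console.log(e instanceof TypeError); // true
}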



However, it does mean that they should be able to test their specification by 
doing a Proxy-based prototype implementation.  I can even imagine that such a 
prototype implementation could be made a mandatory part of the spec. 
development process.

Agreed.


In the end, specifications don't have any enforcement power and perhaps not even all that 
much moral authority.
The text, no, but the coordination between different specs seems to 
indicate that there is a form of moral authority in what TC39 says. If 
ES6 says that any magic behavior has to fit in a proxy box, I think it 
will be read and understood.



If an implementation really needs to do something that is forbidden by 
a spec. it will do it anyway. Browser implementations and HTML5 
certainly take this perspective WRT Ecma-262 and re-specify things 
that don't match current browser requirements.
I'm more worried about the case where spec writers do want to conform 
to the spec without necessarily having to understand every subtlety of 
the invariants.
If they spec something as proxies, they can spec it, prototype it in 
code and see if it fits their intention or if they missed something. 
This is not true for free-form objects.
As you say, writing "you have to use proxies" isn't that authoritative 
anyway, so it can just be written, and those who feel expert enough can 
go freeride.



I don't see any need for an intermediate proxy representation or for attempting 
to limit non-proxy based extension mechanisms.  However, if Proxy is not 
sufficiently powerful to support everything that needs to be done in the real 
world (and in particular by browsers) then we probably should be looking at how 
to fill those deficiencies.

Agreed.

David


Re: On defining non-standard exotic objects

2013-01-09 Thread Allen Wirfs-Brock

On Jan 9, 2013, at 1:56 PM, David Bruant wrote:

 On 09/01/2013 20:30, Allen Wirfs-Brock wrote:
 ...
 
 I don't see any need for an intermediate proxy representation or for 
 attempting to limit non-proxy based extension mechanisms.  However, if Proxy 
 is not sufficiently powerful to support everything that needs to be done in 
 the real world (and in particular by browsers) then we probably should be 
 looking at how to fill those deficiencies.
 Agreed.
 

I guess I should have also said that Proxy should be viewed as the last 
resort solution and should seldom be needed. Method-dispatch-based 
extension hooks like the @@create, @@hasInstance, and @@ToPrimitive 
hooks in the current ES6 draft and the hooks described in the Object 
Model Reformation [1] strawman operate at a higher meta level and are 
probably generally preferable to proxy-based solutions. It is 
impossible for them to violate the object invariants. If we still have 
self-hosting deficiencies we should probably first look for that style 
of solution before extending Proxy.
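For instance, the @@hasInstance hook customizes instanceof without going
anywhere near [[Get]] or [[DefineOwnProperty]], so no object invariant
can even come into play (a sketch, written with the Symbol.hasInstance
spelling the hook eventually took):

var Even = {};
Even[Symbol.hasInstance] = function (x) {
  return typeof x === "number" && x % 2 === 0;
};
console.log(4 instanceof Even); // true
console.log(3 instanceof Even); // false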

Allen

[1]  http://wiki.ecmascript.org/doku.php?id=strawman:object_model_reformation 



Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 09/01/2013 21:30, Andrea Giammarchi wrote:
David? You said: "It could still make sense to wrap a DOM Node with a 
proxy to perform [[Get]] and [[Set]], etc. but definitely not put it 
in the DOM tree."

so this is the current scenario... now, can you explain to me a single 
case where you would need that? If you can't use the DOM node, then why 
would you create a proxy of one of them?
If you wrap DOM Nodes in a proxy, you can do a membrane around them and 
make things like wrappedNode1.appendChild(wrappedNode2) work without 
needing shadow targets (just unwrap in the traps, perform the native 
action and return a wrapped result).

It sounds like a worthwhile use case.
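A minimal sketch of such a membrane, assuming DOM Nodes were valid proxy
targets (the helper names are mine):

var nodeToProxy = new WeakMap();
var proxyToNode = new WeakMap();

function unwrap(x) {
  return proxyToNode.has(x) ? proxyToNode.get(x) : x;
}

function wrap(value) {
  if (!(value instanceof Node)) return value;   // only membrane DOM nodes
  if (nodeToProxy.has(value)) return nodeToProxy.get(value);
  var proxy = new Proxy(value, {
    get: function (target, name) {
      var v = target[name];
      if (typeof v !== "function") return wrap(v);
      return function () {
        // Unwrap proxied arguments so the native method sees real nodes,
        // then rewrap the result on the way out.
        var args = Array.prototype.map.call(arguments, unwrap);
        return wrap(v.apply(target, args));
      };
    }
  });
  nodeToProxy.set(value, proxy);
  proxyToNode.set(proxy, value);
  return proxy;
}

var w1 = wrap(document.createElement("div"));
var w2 = wrap(document.createElement("span"));
w1.appendChild(w2); // works: the trap unwraps w2 and calls the real node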

I thought you agreed on the fact that new Proxy() should throw instantly 
if the target cannot be proxified...
For the short term I did (but now that the scope is reduced, I'm not 
sure). For the long term, I expected DOM objects could be proxified. 
Boris convinced me otherwise, for DOM Nodes at least.



in any case, here is a counter-example of what you guys are discussing:

var
  o = {},
  p = new Proxy(o, {
    get: function (target, name) {
      console.log(name);
      return target[name];
    }
  })
;

p.test = 123;
p.test; // logs "test"
o.test; // does nothing

At this point it would be more about deciding whether the DOM should 
internally treat the o or the p. Having special privileges, it could 
use o directly and pass back p when it comes to the JS world. This 
would preserve performance. At the same time, this would make the usage 
of proxies in the DOM world less useful, because developers would be 
able to intercept only user-defined interactions with these nodes. But 
hey, it's already better than now, where developers can create DOM 
proxies and use them only in RAM, for who knows what reason, because in 
the DOM, where these identities belong, they fail.
I don't understand that part, especially given that you're dealing with 
normal objects. What is it a counter-example of (we've been discussing 
a lot of things)? How is it a counter-example?



In summary, as it is now, this from Francois is my favorite outcome 
of the discussion:

"I would certainly understand if the ECMAScript group settled on not 
working on proxied native elements and specified that it should throw 
on creation. However, I would advise creating an Object.hasIdentity(...) 
method that returns true if the given object has a native identity."
I missed that part. What's a "native identity"? A non-proxy object? 
It was decided early on that a Proxy.isProxy method should be avoided, 
because proxies should be undetectable from the objects they try to 
emulate, from the ECMAScript perspective (internal 
[[Get]]/[[DefineOwnProperty]]/[[Keys]]...).
Boris has exposed a case (selector matching) where the DOM API is 
complex enough that a proxy wrapping a DOM object could be 
distinguished from a native DOM object. This is a good trade-off, I 
think: allowing proxies to give the impression that the DOM object acts 
correctly from the ECMAScript internal-properties perspective, but not 
from the DOM perspective.
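That undetectability is very real at the language level; for instance,
per the ES6 draft semantics:

var a = [];
var p = new Proxy(a, {});
console.log(Array.isArray(p)); // true: even Array.isArray can't tell
console.log(Object.getPrototypeOf(p) === Array.prototype); // true
// Yet a multi-step internal DOM algorithm (like selector matching) can
// still end up treating a proxy differently from a real node.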


David


Re: direct_proxies problem

2013-01-09 Thread David Bruant

On 09/01/2013 21:57, Andrea Giammarchi wrote:
Last, but not least, even offline DOM proxies are pointless; here is 
another example without putting them in the DOM

What's an offline DOM Proxy?


var p = new Proxy(
  document.createElement("p"),
  {}
);
try {
  p.appendChild(
That's what you call "without putting them in the DOM"? ;-)
What are you going to do when your proxy node has a child? Probably put 
it in the DOM eventually, I guess, no?



    document.createElement("span")
  );
} catch(o_O) {
  console.log(o_O.message);
}
try {
  p.appendChild(
    new Proxy(
      document.createElement("span"),
      {}
    )
  );
} catch(o_O) {
  console.log(o_O.message);
}
So, as it is right now, there is not a single reason to make them valid 
as a target for new Proxy, IMHO
Membrane and avoiding the cost of the shadow target sounds like a good 
enough reason.


David


Re: direct_proxies problem

2013-01-09 Thread Andrea Giammarchi
Do you have any example of what you are saying? All my examples fail 
and I don't understand other use cases.

As you said, if a Proxy should be undetectable, a broken Proxy that 
cannot be used is a pointless object full of inconsistency in the 
environment, IMHO.

Is this the decision then? Let new Proxy accept any target even if the 
target cannot be proxied?




Re: direct_proxies problem

2013-01-09 Thread Andrea Giammarchi
Forgot this: yes, an offline DOM node means a DOM node that is not part 
of the live DOM tree... that is, indeed, disconnected, offline: no CSS, 
no repaint, no reflow, nothing... it is offline. Isn't this term good 
enough? I thought it made sense.




Re: On defining non-standard exotic objects

2013-01-09 Thread Brandon Benvie
This was basically what I was getting at, even if I was wrong in the 
idea of using those specific internal methods rather than the higher 
level methods you describe. Proxies are better than the completely 
magical implementations that were used in the past, in that they will 
end up, by default, closer to internally consistent. But they are still 
capable of being completely inconsistent, and it doesn't take very much 
complexity in a handler's design to end up with subtle inconsistencies. 
Rather, as soon as you stop auto-forwarding everything, you have to put 
a great amount of care into ensuring they do remain internally 
consistent. A higher-level hook that exposes the ability to tweak the 
thing you want to tweak, while ensuring it all remains consistent 
automatically, is more desirable.

