Re: memory safety and weak references

2013-03-27 Thread David Bruant

On 27/03/2013 01:55, David Herman wrote:

But we need to take this into account as we consider what to do about weak 
references in ES7.
From what I understand, doing exact rooting (instead of conservative 
stack scanning) solves the problem, or more precisely prevents the attack 
by design (because the attack relies on numbers being interpreted as 
pointer addresses).
Assuming I understand correctly (and tell me if I don't), this is more 
an attack based on an implementation detail than an attack based on the 
inclusion of weak references in the language, so I'm puzzled as to why 
this attack should be taken into account when discussing the inclusion 
of weak references.


Over the last month, after Opera announced its move to WebKit, people on 
Twitter have gone round and round about the WebKit monoculture and how 
making spec decisions based on specific implementations is a bad thing 
(if specs followed the WebKit implementation, we couldn't have parallel 
rendering engines like Servo, etc.). I don't see why that would be any 
better at the ECMAScript level.


David


Re: Weak event listener

2013-03-27 Thread David Bruant

On 27/03/2013 15:52, Brendan Eich wrote:

Please read the memory safety and weak references thread.

The issue is not just SES, which might remove an iterator in preparing 
the environment. Stock JS must not be vulnerable to jit-spray attacks 
due to enumerable weak maps.
From what I understand of the attack, JS isn't vulnerable; only current 
implementations are. I admit that carries some weight, but let's not 
confuse the two.


David


Re: Weak event listener

2013-03-26 Thread David Bruant

On 26/03/2013 21:26, Brandon Benvie wrote:

On 3/26/2013 1:03 PM, David Bruant wrote:
I'm starting to wonder whether bringing weakrefs is equivalent to 
having iterable WeakMaps... And if so, why not make WeakMaps iterable?
This is a question I had as well. An iterable WeakMap is nearly the 
same as a Map full of WeakRefs, is it not? Just a different API that 
is less usable for single references and more usable for collections.
Interestingly, I think publish-subscribe would probably make better use 
of an iterable WeakSet (a set of observers): when a publication happens, 
what needs to be done is to tell all the (remaining) subscribers. I don't 
think anyone really needs the weakrefs themselves; iterating over the 
remaining observers seems to be enough.
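
As a minimal sketch, here is what that could look like. This assumes a 
hypothetical *iterable* WeakSet, which is exactly the feature being 
debated here, not something the language provides; makePublisher and 
notify are illustrative names:

    // Hypothetical: assumes WeakSet exists *and* is iterable with for-of.
    function makePublisher() {
        var observers = new WeakSet(); // weakly-held observers

        return {
            subscribe: function (observer) {
                observers.add(observer);
            },
            publish: function (data) {
                // Tell all the remaining subscribers; observers that were
                // garbage-collected have silently left the set.
                for (var observer of observers) {
                    observer.notify(data);
                }
            }
        };
    }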


David


Re: Weak event listener

2013-03-26 Thread David Bruant

On 26/03/2013 21:12, Allen Wirfs-Brock wrote:

On Mar 26, 2013, at 12:18 PM, Mark S. Miller wrote:
WeakSet may or may not happen by ES6. But even if it doesn't, WeakSet 
is trivially shimmable on WeakMap.
Set is also shimmable on top of Map. If Set is in, there are as many 
reasons to have WeakSet in. If WeakSet is considered second class, 
so should Set be.

I feel the fates of Set and WeakSet should be bound.

Which is why it isn't in the spec yet.  It was introduced in 
support of the Proxy private Symbol whitelist, but it is still unclear 
whether we will have them, and even if we do, it's not clear that the 
actual internal whitelist needs to be exposed as a WeakSet.
I don't understand the reluctance towards having WeakSet in the spec. It 
has as many uses as WeakMap.
Domenic wrote a couple of messages ago: "I have run into a few use cases 
for [WeakSet] (...), and was hoping it was on-track."
I've had a use case too and even left a comment about it [1]. We can argue 
whether that's a use case more for private symbols than for WeakSet, but 
still, WeakSet sounds like an appropriate tool for appropriate situations.
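
For what it's worth, a minimal sketch of Mark's "trivially shimmable" 
claim above; the method set (add/has/delete, mirroring WeakMap) is an 
assumption:

    function WeakSetShim() {
        var map = new WeakMap();
        return {
            add: function (value) { map.set(value, true); return this; },
            has: function (value) { return map.has(value); },
            delete: function (value) { return map.delete(value); }
        };
    }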


David

[1] 
https://github.com/DavidBruant/HarmonyProxyLab/blob/413a153c01b34bfc281b901b399ac09f3ca8c0d7/ES3AndProxy/ES5ObjectModelEmul.js#L57



Re: Weak event listener

2013-03-26 Thread David Bruant

On 26/03/2013 22:56, Mark S. Miller wrote:
Because the weak-map-maker constructor grants no privilege and can be 
generally accessible, whereas the weak-ref-maker grants the privilege 
of being able to observe the non-determinism of GC, and so should not 
be made accessible to code that shouldn't have such powers. It is the 
same reason why Maps and Sets, which are enumerable, enumerate their 
elements in a deterministic order.


In short, separation of concerns as well as separation of privileges.
If WeakMaps were granted the privilege of observing GC non-determinism 
via iteration, I assume it would be through a default 
WeakMap.prototype.@@iterator (that's how it works for Map).
Removing this authority can be done by providing another WeakMap 
constructor with null as WeakMap.prototype.@@iterator, which is pretty 
much as much work as removing access to the weak-ref-maker.
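
Concretely, under the assumption that WeakMaps were iterable by default 
(they are not), revoking the authority could look like this, writing the 
@@iterator symbol as Symbol.iterator:

    // Hypothetical: assumes a default, iterable WeakMap.prototype[@@iterator].
    WeakMap.prototype[Symbol.iterator] = null; // realm-wide revocation

    // ...or hand out a maker producing non-iterable instances only:
    function makeNonIterableWeakMap() {
        var wm = new WeakMap();
        wm[Symbol.iterator] = null; // shadows the (hypothetical) default
        return wm;
    }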


Thanks to the iterator protocol (and especially the @@iterator symbol), 
privileges can be kept separate, so I don't think it's a sufficient 
reason to disallow iteration over WeakMaps if WeakRefs are in.


David


Re: Mutable Proto

2013-03-20 Thread David Bruant

On 20/03/2013 16:36, Nathan Wall wrote:

I didn't get a direct response to my question about mutating proto on objects 
which don't inherit from Object.prototype, but I'm inferring from [1] that it 
won't be possible. I find this unfortunate, but I realize this issue has seen 
a lot of discussion in the past and there are reasons for the current decision. 
I will see how I can make my code cope with reality.
Could you describe how you use __proto__ on objects not inheriting from 
Object.prototype?


From what I know there are 2 main use cases:
1) object as map
Changing the prototype enables swapping the default values. I guess any 
solution to that problem either loses the object syntax (maybe unless 
using proxies), like using an ES6 Map, or has a non-trivial runtime cost.
Alternatively, the code needs to be reorganized so that the object is 
always created after the prototype (using Object.create for instance); 
see the sketch after this list.


2) Subclassing
ES6 will have classes with inheritance. That's mostly syntactic sugar on 
top of what's already possible, but it works.
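
A small sketch of the reorganization mentioned in 1), with hypothetical 
field names:

    // Create the defaults object first, then the "map" on top of it,
    // instead of mutating the prototype afterwards.
    var defaults = { color: 'black', size: 12 }; // hypothetical defaults
    var settings = Object.create(defaults);
    settings.color = 'red';

    console.log(settings.color); // 'red'  (own property)
    console.log(settings.size);  // 12     (inherited default)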


Do you have a use case that belongs in neither of these categories?

David



Nathan


Brendan Eich wrote:

Mariusz Nowak wrote:

+1!

It would be great if someone would explain in detail why
Object.setPrototypeOf is a no-go.

We've been over this many times, e.g. at

https://mail.mozilla.org/pipermail/es-discuss/2012-May/022904.html

To recap,

1. __proto__ is out in the field, a de-facto standard on mobile, and
not going away. Adding another API doesn't help, it hurts.

2. SES and other secure subsets want same-frame (global object, realm)
mashups of code that may use __proto__ and code that must not, but
Object.setPrototypeOf is a per-frame capability that would have to be
removed, breaking the former class of code.



Any function that blindly extends an object with a provided hash is 
affected, e.g. extend(obj, { __proto__: Error.prototype }).

No, that depends on how extend works. If it uses Object.defineProperty
or equivalent, then nothing is broken and the setter on Object.prototype
for __proto__ is not run.


Additionally it means that we need to serialize any user input which
eventually may be used as key on a dictionary e.g. data[userDefinedName].

Only if you use assignment into an object that delegates to
Object.prototype, but see (1) above: this hazard already exists. Don't
do that; JSON doesn't, and Object.create(null) gives a way to create
dictionaries.

Yes, the problems you cite are real, but they are already part of the
de-facto __proto__ standard (1). Beyond that, Object.setPrototypeOf is a
mistake due to (2).

/be



Re: Mutable Proto

2013-03-20 Thread David Bruant

On 20/03/2013 16:15, Brendan Eich wrote:

To recap,

1. __proto__ is out in the field, a de-facto standard on mobile, and 
not going away. Adding another API doesn't help, it hurts.


2. SES and other secure subsets want same-frame (global object, 
realm) mashups of code that may use __proto__ and code that must 
not, but Object.setPrototypeOf is a per-frame capability that would 
have to be removed, breaking the former class of code.


(...)

Yes, the problems you cite are real, but they are already part of the 
de-facto __proto__ standard (1).

Agreed.
From the spec/implementor point of view, __proto__ has to be added as a 
de-facto standard because it is used.
From the developer point of view, the fact that it's in the language 
doesn't make it a good idea to use. Quite the opposite: I'd like 
to reiterate that devs should make delete Object.prototype.__proto__ 
the second line of their code (the first line being "use strict";).
Devs shouldn't make the mistake of thinking that having __proto__ in the 
standard makes it a good or legitimate feature.
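
Spelled out, the recommended preamble; this assumes the engine exposes 
__proto__ as a configurable accessor on Object.prototype (which is how 
ES6 ended up specifying it):

    "use strict";
    delete Object.prototype.__proto__; // __proto__-based [[Prototype]] mutation now fails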


__proto__ in ES6 is yet another ECMAScript Regret [1]

David

[1] https://github.com/DavidBruant/ECMAScript-regrets (I haven't found 
much time to write more, but issues are more interesting to read than 
just the part that's been written down)



Re: Four static scoping violations in ES5 sloppy

2013-03-18 Thread David Bruant

On 18/03/2013 17:48, Brendan Eich wrote:

Andreas Rossberg wrote:

On 18 March 2013 17:32, Mark S. Miller erig...@google.com wrote:

And why does ES5/strict impose these restrictions, when they are not
necessary for the formal criterion?

Because ES5 strict mode, being an opt-in, gave us a rare opportunity to
clean things up in preparation for yet better scoping in ES6. I'm
pleased to report that it mostly turned out that way. Because of #1 and
#3, ES5 strict code will be easier to refactor into ES6 modules, where
the global object is finally not on their scope chain. At the time we
did this, we didn't anticipate this specific aspect of ES6, but took the
opportunity to clear the ground.


Maybe I misunderstand what you mean, but unfortunately, the global
object will remain at the top of the scope chain in ES6, even with
modules (though complemented with a lexical environment for new
binding forms). We shied away from fixing that mistake.


Don't break the web.

Versioning is an anti-pattern.

I don't think "shied away" is accurate. We couldn't fix that mistake.
I don't understand the mention of "don't break the web". Modules aren't 
used on the web yet, so whatever rules are chosen for them, they can't 
break the web.

I'm probably missing something here.

Maybe "global in the scope chain" has to be divided into 2 different 
meanings: reading from and writing to the global scope.

My current understanding is as follows:
* code in a module body can read globals from the global scope
* code in a module cannot create global properties (unless handed 
the global object specifically)
* the module name is part of the newly introduced lexical binding (file 
local "global" variables).

How close am I to understanding modules and scoping?

David


Re: On Scope And Prototype Security

2013-03-17 Thread David Bruant

Hi Andrea,

I'm really having a hard time understanding where the security issue is 
here.

From what I understand, you've properly hidden the Private constructor.
I am not surprised if code can reach the [[Prototype]] of an instance 
and I wouldn't consider that a flaw. I would consider that the 
[[Prototype]] is part of the object and accessing the [[Prototype]] is 
like accessing a property or the [[Class]], it's just introspection.


David

On 17/03/2013 03:04, Andrea Giammarchi wrote:
That conversation on `fn.caller` left me many doubts about extra 
things too.


As an example, I understand that a function that does not want to be 
accessed should not be reachable when any accepted object could be 
tweaked to retrieve it via caller; that's OK. But what about private 
classes, and the fact that there's no way to ensure they stay private?


Despite the sense, the good and the bad, this is perfectly valid JS code:

var myNameSpace = function () {

  var queue = [];

  function Private() {
this.init();
  }

  function initBeforeDOM() {
queue.push(this);
  }

  function initAfterDOM() {
// do stuff
  }

  Private.prototype.init = initBeforeDOM;
  window.addEventListener('DOMContentLoaded', function(){
Private.prototype.init = initAfterDOM;
queue.forEach(function (instance) {
  initAfterDOM.call(instance);
});
  });

  // trying to make Private inaccessible
  Object.defineProperty(
Private.prototype,
'constructor',
{value: Object,
 enumerable:false,
 writable:false,
 configurable:false}
  );

  return {
generate: function () {
  return new Private;
}
  };
}();

var o = myNameSpace.generate();
var proto = Object.getPrototypeOf(o);
alert(proto.constructor);
alert(proto.init);

The above code is also based on a few concepts I always found cool about 
JS, like the possibility to mutate all objects at once through the 
prototype. It's usually considered a bad practice, but it's technically 
the best/fastest/most memory-friendly way we have in JS to create 
state-machine behaviors across distributed instances, so... **way too cool**


Well, I've got a problem: even if the constructor might be 
unreachable, there is something I cannot secure at all, which is the 
constructor's prototype.


Not a single mechanism in current JS lets me make a prototype safe 
from potentially nasty and disastrous operations such as 
`Object.getPrototypeOf(generic)`.


Thoughts? Thanks.



Re: On Scope And Prototype Security

2013-03-17 Thread David Bruant

On 17/03/2013 18:09, Andrea Giammarchi wrote:
My concern is about being unable to prevent anyone from retrieving that 
property, whether for introspection or to pollute or change it, which 
makes my private constructor insecure.
In the example there, but also in other situations, I cannot freeze the 
prototype, and yet I cannot hide it from the outside in a meaningful way.
I see. I understand better why you change Private.prototype.init in the 
middle.
A solution here would be to have a single function starting with 
if(domReady), in which domReady is a boolean whose value changes at 
the 'DOMContentLoaded' event.

I won't pretend it's a perfect solution, but it works.
A sufficiently smart interpreter could notice that the domReady value is 
never set back to false after it's flipped to true, so it could remove 
the test (but I can already hear Andreas Rossberg about my over-optimism 
regarding what JIT compilers do in practice :-) )
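
A quick sketch of that suggestion, reusing the queue from your original 
example:

    var domReady = false;
    window.addEventListener('DOMContentLoaded', function () {
        domReady = true;
    });

    function init() { // single init function, branching on the flag
        if (domReady) {
            // do the "after DOM" work
        } else {
            queue.push(this); // 'queue' as in the original myNameSpace example
        }
    }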


AFAICT it looks like what is "just introspection" to you is able to make 
private classes basically impossible, yet is not a security concern. So 
thanks for your answer; now I know there's no way to have that 
behavior now, nor tomorrow (so, long story short: good to know)
I'd like you to express more formally what property you're trying to 
defend. My understanding is the following:
You have a constructor (and its .prototype property). You want to be 
able to change properties of the .prototype at any point in time, while 
providing instances generated by the constructor to code of mixed trust.


Expressed this way, one can notice that giving access to an instance 
implies giving access to the full [[Prototype]] chain (because of 
Object.getPrototypeOf and __proto__ in browsers supporting that), so 
anyone with access to an instance has access to the objects that are 
kept mutable, making the problem unsolvable.


In my opinion, the issue is not in allowing access to [[Prototype]] but 
in expecting everything to be always mutable.



Other than the if(boolean) solution above, a different solution could 
involve mixins.


A different (but admittedly complex and too heavy for the problem at 
hand) solution could involve putting a proxy as .prototype. This proxy 
doesn't allow mutation (it throws in the set trap, for instance), but 
whoever has access to the underlying target (which can be kept private) 
can still modify it.
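
A sketch of that proxy idea, assuming ES6 direct proxies:

    // Outsiders who reach the [[Prototype]] only get a mutation-rejecting
    // view; whoever keeps 'target' private can still update behavior.
    var target = {
        init: function () { /* ... */ }
    };
    var readOnlyProto = new Proxy(target, {
        set: function () { throw new TypeError('read-only prototype'); },
        defineProperty: function () { throw new TypeError('read-only prototype'); }
    });

    function Private() { this.init(); } // property reads forward to 'target'
    Private.prototype = readOnlyProto;

    // later, privately:
    target.init = function () { /* new behavior, visible to all instances */ };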


My point is that access to the prototype chain isn't an insecurity in 
essence.


David


Re: Object.is steps are very thin

2013-03-16 Thread David Bruant

On 16/03/2013 19:18, Tom Schuster wrote:

Hey!

Looking at the steps for Object.is, the first sentence just says:

When the is function is called with arguments value1 and value2 the following steps 
are taken:

I don't remember other functions being defined like that. It should at
least say something along the lines of
"When called with less than 2 parameters, return false."
I'd throw a TypeError. Calling Object.is with strictly more or fewer than 
2 arguments is most likely an error, akin to ===, for which anything 
other than exactly 2 operands results in a SyntaxError.
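
As a sketch, the arity-checking behavior suggested above, written as a 
wrapper over an existing Object.is (not what the spec draft does):

    function strictObjectIs(value1, value2) {
        if (arguments.length !== 2)
            throw new TypeError('Object.is expects exactly 2 arguments');
        return Object.is(value1, value2);
    }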


David


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread David Bruant

On 14/03/2013 08:51, Tom Van Cutsem wrote:

[+Allen]

2013/3/13 Nathan Wall nathan.w...@live.com

However, as a matter of principle, my argument is that
`Object.getOwnPropertyDescriptor` should, at the bare minimum,
return a descriptor that can be known to work in
`Object.defineProperty`.  If `Object.defineProperty` doesn't
accept it, then `getOwnPropertyDescriptor` didn't really give
me a valid descriptor.

I think that this behavior (1) limits the creativity of developers
to define properties like `Object.prototype.get`, (2) is a
potential stumbling block, (3) has no real benefit -- really,
there's not anything positive about this behavior, and (4) forces
developers who want to support `Object.prototype.get` to add an
extra layer of cleaning before using `defineProperty`.


While the monkey-patching of Object.prototype (don't do that!) is 
still the culprit, I agree that it would have been better if 
defineProperty looked only at own properties of the descriptor.
In a previous message, Brandon Benvie mentioned he uses inheritance to 
reuse a property descriptor [1] (I think there was another quote of his, 
but I can't find it now). I can imagine it's a pattern in actual use.


I almost always think of descriptors as records rather than 
objects. Similarly, perhaps Object.getOwnPropertyDescriptor should 
have returned descriptors whose [[prototype]] was null.


It's true that Reflect.getOwnPropertyDescriptor and 
Reflect.defineProperty give us a chance to fix this. I'm just worried 
that these differences will bite developers who assume that 
these methods are identical to the Object.* versions.

I doubt differences would be a good idea.

Maybe an idea would be for Object.defineProperty to call the 
descriptor's @@iterator if user-defined, so that a user can restrict 
which property descriptor properties are being traversed.
If that's too heavy a refactoring, maybe an ES6 Map could be accepted 
as the 3rd argument of Object.defineProperty (with Map semantics, not 
object semantics). This way, one could write the copy function as:


function copy(from, to) {
    for (let name of Object.getOwnPropertyNames(from)) {
        let desc = Object.getOwnPropertyDescriptor(from, name);
        // @@iterator; the exact syntax was unsettled at the time,
        // Symbol.iterator is what later drafts converged on
        desc[Symbol.iterator] = ownIterator;

        Object.defineProperty(to, name, new Map(desc));
    }
}

ownIterator only iterates over own properties, as its name indicates, so 
the Map will only list those (a possible ownIterator is sketched below). 
The extra Map allocation isn't that big of a deal since it is very 
short-lived. It could be shared and cleared across iterations if necessary.
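
For completeness, here is what the hypothetical ownIterator could look 
like, assuming generator syntax and Symbol.iterator as the eventual 
@@iterator:

    function* ownIterator() {
        // 'this' is the property descriptor handed to new Map(...):
        // yield [key, value] entries for its *own* (string-named)
        // properties only, ignoring anything inherited
        for (let name of Object.getOwnPropertyNames(this)) {
            yield [name, this[name]];
        }
    }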


Nathan, how do you feel about such a solution?

David

[1] https://mail.mozilla.org/pipermail/es-discuss/2012-November/026081.html


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread David Bruant

On 14/03/2013 17:01, Brandon Benvie wrote:
I also mentioned I thought it was unlikely to be commonly used, since 
I've never seen it used besides some of my own code (which exists in a 
couple libraries used by few or just me).
Sincere apologies for missing an important part of your quote (I remembered 
there was another message besides the one I quoted, but I've been unable to 
find it) :-/


David


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-13 Thread David Bruant

On 12/03/2013 16:45, Tom Van Cutsem wrote:

Hi Nathan,

2013/3/10 Nathan Wall nathan.w...@live.com

Given that `defineProperty` uses properties on the prototype of
the descriptor[1] and `getOwnPropertyDescriptor` returns an object
which inherits from `Object.prototype`, the following use-case is
volatile:

function copy(from, to) {
    for (let name of Object.getOwnPropertyNames(from))
        Object.defineProperty(to, name,
            Object.getOwnPropertyDescriptor(from, name));
}

If a third party script happens to add `get`, `set`, or `value` to
`Object.prototype` the `copy` function breaks.


To my mind, the blame for the breakage lies with `Object.prototype` 
being mutated by the third-party script, not with property descriptors 
inheriting from Object.prototype. Thus, a fix for the breakage should 
address that directly, rather than tweaking the design of property 
descriptors, IMHO.

I agree.

As Object.prototype-jacking threats are discussed more and more 
recently, I'd like to take a step back and meta-discuss JavaScript 
threats.


Currently, by default, any script that runs can mutate the environment it 
is executed in (this can be fixed by sandboxing with things like Caja [1] 
and soon the module loader API used with proxies [2], but even then, 
there could be leaks of native built-ins).
The first (security) decision any JavaScript application should make 
would be to freeze all built-ins, like SES [3][4] does. (In the future, 
it could even make sense to add a CSP [5] directive for that.)
If necessary, the application can first enhance the environment by 
adding polyfills/libraries and such, but that's pretty much the only 
thing that's acceptable to run before freezing everything.
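
A rough sketch of what "freeze all built-ins" means; real SES is 
considerably more thorough (it also walks intrinsics not reachable by 
name, handles the global object, etc.), so treat the root list as 
illustrative:

    function deepFreeze(root, seen) {
        seen = seen || [];
        if (Object(root) !== root || seen.indexOf(root) !== -1)
            return; // skip primitives and already-visited objects
        seen.push(root);
        Object.freeze(root);
        Object.getOwnPropertyNames(root).forEach(function (name) {
            var desc = Object.getOwnPropertyDescriptor(root, name);
            ['value', 'get', 'set'].forEach(function (k) {
                // freeze data values and accessor functions alike,
                // without ever invoking any getter
                if (k in desc) deepFreeze(desc[k], seen);
            });
        });
        deepFreeze(Object.getPrototypeOf(root), seen);
    }

    // after polyfills/libraries have run:
    [Object, Array, Function, String, Number, Boolean,
     RegExp, Date, Math, JSON].forEach(function (builtin) {
        deepFreeze(builtin);
    });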


Given that freezing all built-ins (after polyfills) is a reasonable 
thing to do, I think a JavaScript threat should be considered serious only 
if it still applies assuming the environment is already frozen.
That naturally rules out threats related to property descriptors 
inheriting from Object.prototype, or anything along the lines of "what if 
an attacker switches Array.prototype.push and Array.prototype.pop?"


David

[1] http://code.google.com/p/google-caja/
[2] http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders
[3] http://code.google.com/p/es-lab/wiki/SecureEcmaScript
[4] http://code.google.com/p/es-lab/source/browse/#svn%2Ftrunk%2Fsrc%2Fses
[5] 
https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-13 Thread David Bruant

On 13/03/2013 16:26, Nathan Wall wrote:

David Bruant wrote:

Tom Van Cutsem wrote:

To my mind, the blame for the breakage lies with `Object.prototype`
being mutated by the third-party script, not with property descriptors
inheriting from Object.prototype. Thus, a fix for the breakage should
address that directly, rather than tweaking the design of property
descriptors, IMHO.

I agree.
  
The first (security) decision any JavaScript application should make

would be to freeze all built-ins like SES [3][4] does. (In the future,
it could even make sense to add a CSP [5] directive for that)
If necessary, the application can first enhance the environment by
adding polyfills/libraries and such, but that's pretty much the only
thing that's acceptable to run before freezing everything.

Hey David and Tom.  This is good advice for application authors, but I don't 
work at the application level; I write libraries.  I don't want to freeze 
everything because I want to leave the environment open to monkey-patching and 
shimming by other libraries and the application authors. So this isn't an 
option for me.

Interesting.
As a library author, in theory, you don't know in which environment your 
code is going to be executed (that is, whether the environment has been 
modified or not), so I think one assumption has to be made:
Either you assume the library runs in a non-changing environment 
(whether the client has frozen it or just decided not to change 
anything), or you assume it is a battlefield where anything can happen, 
and you try to capture a reference to all the built-ins you need at first 
load, as well as being defensive against potential changes to the 
environment (like Object.prototype in your case).



what if an attacker switches Array.prototype.push and Array.prototype.pop?

These are issues that are easy to address by using stored late-bound function 
references rather than methods and array-likes instead of true arrays.

 var push = Function.prototype.call.bind(Array.prototype.push),
     arrayLike = Object.create(null);
 arrayLike.length = 0;
 push(arrayLike, 'item-1');

As long as the environment is correct when my script initializes, I get all 
methods I need to use stored inside my library's closure. Freezing isn't needed.

It's also possible to write around the `defineProperty` problem by converting 
the descriptor into a prototype-less object. However, I actually encountered 
some performance problems with this. I was able to improve the performance by 
only dropping the prototype when necessary (as long as `get`, `set`, `value` or 
`writable` haven't been added to `Object.prototype`, it's not necessary). 
However, as a matter of principle, my argument is that 
`Object.getOwnPropertyDescriptor` should, at the bare minimum, return a 
descriptor that can be known to work in `Object.defineProperty`.  If 
`Object.defineProperty` doesn't accept it, then `getOwnPropertyDescriptor` 
didn't really give me a valid descriptor.

I think that this behavior (1) limits the creativity of developers to define 
properties like `Object.prototype.get`
I don't think we should count adding an Object.prototype.get property 
as "creativity". For instance, proxies have an optional get trap, so 
things can become confusing quickly.
Other property names also have meanings for other objects. For 
instance, JSON.parse creates objects with Object.prototype as 
[[Prototype]], so custom Object.prototype properties may be confusing in 
myData.someProp cases. It's possible, but annoying, to prefix every 
[[Get]] with hasOwnProperty checks.

If someone tries to polyfill Array.prototype.contains, they may test:
    'contains' in Array.prototype
If Object.prototype.contains is defined, this can mistakenly return 
true, etc.
Writing defensive code requires a serious relearning of how to write for 
the language.
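
To make the hazard concrete (hypothetical code on both sides):

    // some "creative" third-party script:
    Object.prototype.contains = function () { /* ... */ };

    // a polyfill's feature test elsewhere:
    if (!('contains' in Array.prototype)) {
        // never reached: 'in' walks the prototype chain and finds
        // Object.prototype.contains, so the polyfill is skipped
        Array.prototype.contains = function (x) { return this.indexOf(x) !== -1; };
    }

    // the defensive version has to ask for an *own* property instead:
    if (!Array.prototype.hasOwnProperty('contains')) { /* install it */ }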


David


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-13 Thread David Bruant

On 13/03/2013 16:49, Mark S. Miller wrote:
On Wed, Mar 13, 2013 at 8:26 AM, Nathan Wall nathan.w...@live.com wrote:


David Bruant wrote:
 Tom Van Cutsem wrote:
  To my mind, the blame for the breakage lies with `Object.prototype`
  being mutated by the third-party script, not with property descriptors
  inheriting from Object.prototype. Thus, a fix for the breakage should
  address that directly, rather than tweaking the design of property
  descriptors, IMHO.
 I agree.

 The first (security) decision any JavaScript application should make
 would be to freeze all built-ins like SES [3][4] does. (In the future,
 it could even make sense to add a CSP [5] directive for that)
 If necessary, the application can first enhance the environment by
 adding polyfills/libraries and such, but that's pretty much the only
 thing that's acceptable to run before freezing everything.

Hey David and Tom.  This is good advice for application authors, but I
don't work at the application level; I write libraries.  I don't want to
freeze everything because I want to leave the environment open to
monkey-patching and shimming by other libraries and the application
authors. So this isn't an option for me.

 what if an attacker switches Array.prototype.push and
 Array.prototype.pop?

These are issues that are easy to address by using stored late-bound
function references rather than methods and array-likes instead of true
arrays.

var push = Function.prototype.call.bind(Array.prototype.push),
    arrayLike = Object.create(null);
arrayLike.length = 0;
push(arrayLike, 'item-1');

As long as the environment is correct when my script initializes, I get
all the methods I need stored inside my library's closure. Freezing
isn't needed.


That's correct. I've written a bit of code like that myself. At first, 
for those used to JS or even just conventional OO, the style needed is 
very counter-intuitive and awkward -- it goes against the grain of 
what the language tries to make convenient. But amazingly, it is 
possible, and one even gets used to it after a while.
Would it be easier to teach everyone to freeze their built-ins or to 
(re)write their code using this style?


We need a name for your use case. It is definitely distinct from the 
normal JS library writing practice, which implicitly assumes that the 
primordials haven't been too corrupted but does nothing to detect if 
this is true.
Given what you're describing as normal library writing practice, 
teaching everyone to freeze their built-ins sounds like a safer bet on 
the path to acceptability and to making the web safer overall.


In my opinion, to a large extent, using a library while messing around 
with built-ins the library might be using is a recipe for disaster. This 
practice shouldn't be encouraged.


David


Re: a future caller alternative ?

2013-03-12 Thread David Bruant

On 11/03/2013 22:51, Andrea Giammarchi wrote:
the outer `with` statement ... you see my point? we are dropping 
powerful features in order to make JavaScript the toy everyone thinks 
it is
A while ago I discussed the 'with' trick on es-discuss (I don't remember 
when; it was a message about dynamic loaders IIRC) and I think Mark's 
reply was that it was a temporary (ES5-era) hack and that the proper ES6 
way would be to use a custom loader (using a custom 'global' property).
In that instance, 'with' isn't a powerful feature; it's a convenience 
used so that the relevant sandboxing code is 6 lines instead of a full 
parser, because if there were no 'with', that's probably what Caja would 
be doing anyway.


David


Re: a future caller alternative ?

2013-03-09 Thread David Bruant

On 08/03/2013 22:19, Andrea Giammarchi wrote:

This opens doors to debuggers (introspection) and API magic quite a lot.
If you want to write a debugger, use a debugger API [1], which is only 
available in privileged environments, no?


Debuggers are useful, but they pierce encapsulation, which is needed for 
program integrity. I don't think making a debugger API available to all 
programs is a good idea.


David

[1] 
https://developer.mozilla.org/en-US/docs/SpiderMonkey/JS_Debugger_API_Guide



Re: Dynamic Modules Questions

2013-03-07 Thread David Bruant

On 06/03/2013 23:31, Sam Tobin-Hochstadt wrote:

On Wed, Mar 6, 2013 at 9:46 AM, Kevin Smith khs4...@gmail.com wrote:

(Referencing the module loaders proposal at
http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders)

1)  Loaders have a strict flag which indicates whether code evaluated in
the Loader's context should be implicitly strict.  If modules themselves are
implicitly strict, is this flag superfluous?

myLoader.eval(someJS) is just like regular `eval`, and also loaders
handle `eval` called from inside them.  So no, the flag isn't
superfluous.

I fail to understand the benefit of forcing the mode of the loaded code.
There is a risk of breaking the loaded code by forcing a mode it might not 
have been written for.


David


Re: Dynamic Modules Questions

2013-03-07 Thread David Bruant

On 07/03/2013 13:19, Sam Tobin-Hochstadt wrote:


On Mar 7, 2013 4:53 AM, David Bruant bruan...@gmail.com wrote:


 On 06/03/2013 23:31, Sam Tobin-Hochstadt wrote:

 On Wed, Mar 6, 2013 at 9:46 AM, Kevin Smith khs4...@gmail.com wrote:


 (Referencing the module loaders proposal at
 http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders)

 1)  Loaders have a strict flag which indicates whether code 
evaluated in
 the Loader's context should be implicitly strict.  If modules 
themselves are

 implicitly strict, is this flag superfluous?

 myLoader.eval(someJS) is just like regular `eval`, and also loaders
 handle `eval` called from inside them.  So no, the flag isn't
 superfluous.

 I fail to understand the benefit of forcing the mode of the loaded code.
 There is a risk of breaking the loaded code by forcing a mode it might 
not have been written for.


Then you probably won't want to use this option when constructing 
loaders.  Others, I'm sure, feel differently.


I would most certainly use the option if I understood what use case it 
covers and found myself in the relevant situation.

What's the use case?
Out of curiosity, are there existing loader libraries providing this 
feature too?


What are the semantics of setting strict:false?

David


Re: Throwing StopIteration in array extras to stop the iteration

2013-03-05 Thread David Bruant

On 05/03/2013 00:31, Jason Orendorff wrote:
On Sun, Mar 3, 2013 at 12:45 PM, David Bruant bruan...@gmail.com wrote:


[2, 8, 7].forEach(function(e){
if(e === 8)
throw StopIteration;


This would be taking a piece of one low-level protocol, and using it 
for a sorta-kinda related thing that actually, on closer inspection, 
has nothing to do with that protocol. Array.prototype.forEach doesn't 
even use iterators.
I could not agree more. I'm aiming at a least-worst type of solution, 
rather than something good.


Currently, if one wants to stop an iteration early, it has to be done in 
one of the following ways:

1)
try {
    [2, 8, 7].forEach(function(e){
        if(e === 8)
            throw whatever; // any sentinel value
        console.log(e);
    });
} catch(e) {
    // do nothing, I just want to catch the error in case the iteration
    // stopped before traversing all elements
}

2) (ab)using some and every


Downsides of each solution:
1) If you care about good use of protocols, I hope you'll agree that the 
try/catch protocol is severely abused.

2)
Brendan wrote:

"Are the some and every names confusing? Would any and all be better?"
The problem is that there are 2 of them, and stopping the iteration means 
returning true in one case and false in the other. People are smart and 
memorize what they need to know, but I know I spend more time reading 
some/every code than normal just to be sure of what true and false 
mean. It might be because I'm not a native English speaker that I have 
to extra-think on every/some, but that's not an excuse anyway.


"return true/false to stop an iteration early" does not make very 
readable code.
Recently, I was helping a friend with some code he needed to fix, code 
that had been written in a team. He asked me: "but how do you know 
where the problem is without having to read all the code?". To which I 
answered: "Do you see the 'queue.persist()'? I know it persists the queue 
and does nothing else; I don't need to read that code."
I wish that whenever I want to stop an iteration early, I didn't have to 
flip a coin to choose between some and every and the corresponding 
true/false semantics, but had something as readable as queue.persist to 
say what I mean.


I thought I was doing a clever language hack by reusing 'throw 
StopIteration' precisely in a context where it's not supposed to have a 
meaning.

Apparently I was not.

I'm happy with the outcome of the thread if .findIndex is introduced, but 
I can't help wondering whether a new method is going to be introduced 
every single time someone brings up a pattern that would make good use 
of stopping an iteration early.


David


Re: Add intersections and unions to Set

2013-03-05 Thread David Bruant
I agree on the need but foresee problems with the parametrized equivalence 
operator [1][2], like: which comparator should be used for the union of 2 
sets with different comparators?


The need for set intersection/union/minus/etc. feels more important than 
the need to parametrize the comparator.

David

[1] 
https://github.com/rwldrn/tc39-notes/blob/master/es6/2013-01/jan-29.md#43-parameterize-the-equivalence-operator-for-mapset
[2] 
https://github.com/rwldrn/tc39-notes/blob/master/es6/2013-01/jan-31.md#mapset-comparator


On 04/03/2013 19:08, al...@instantbird.org wrote:

It would be useful to be able to form the intersection and the union of
two Sets. These are natural operations that are currently not part of
the API
(http://wiki.ecmascript.org/doku.php?id=harmony:simple_maps_and_sets).

Similar methods would make sense for Map, but one would have to think
about what to do in the case where the key but not the value matches.

An intersection is equivalent to a particular filter, so an alternative
might be to add a method like Array.filter to Sets instead.

(I filed bug 847355 for this and was told this mailing list was the
right place for this suggestion.)





Re: Throwing StopIteration in array extras to stop the iteration

2013-03-05 Thread David Bruant

On 05/03/2013 18:32, Jason Orendorff wrote:
On Tue, Mar 5, 2013 at 5:42 AM, David Bruant bruan...@gmail.com wrote:


Currently, if one wants to stop an iteration early, it has to be done
in one of the following ways:
1)
try {
    [2, 8, 7].forEach(function(e){
        if(e === 8)
            throw whatever;
        console.log(e);
    });
} catch(e) {
    // do nothing, I just want to catch the error in case the iteration
    // stopped before traversing all elements
}


Well... here's what I would do.

for (var e of [2, 8, 7]) {
    if (e === 8)
        break;   // exiting JS loops since 1994
    console.log(e);
}

Why not use statements for your procedural code?
I love the idea of for-of, and that's probably what I'll use in the 
future indeed (for-of hasn't been available since 1994, though ;-) )


I've realized that in a recent Node.js project I had made no 
off-by-one errors, and I consider the extensive use of forEach & friends 
to be the reason for that.
The iterator protocol, and for-of loops by extension, provide the same 
guarantee as forEach (with the added possibility to break), so I guess 
I'll use that whenever possible.


Thanks,

David


Re: On notification proxies

2013-03-05 Thread David Bruant

On 05/02/2013 16:29, David Bruant wrote:

On 05/02/2013 13:52, Sam Tobin-Hochstadt wrote:

Second, it forces
the use of the shadow target pattern in any wrapper, doubling the
number of allocations required.
I don't understand why more shadow targets would be necessary than 
with direct proxies.

Sorry for the very late understanding, but I finally get it.
In the case of a membrane, the object used as target needs to have the 
wrapped objects as property values, which means one of the following:
1) change the actual target in the pre-trap and change it back in the 
post-trap. This back and forth has to be done at every property access.

2) shadow target

Hmm... actually, because of the constraints of the getPrototypeOf trap, 
membrane implementations have to (lazily) duplicate the entire graph of 
reachable objects.


In cases where it'd be acceptable to share prototypes (because they'd be 
frozen and hold no powerful reference, for instance), one can wonder 
whether 1) is cheaper than the invariant checks notification proxies are 
meant to remove (adding+removing a property and entering/exiting 2 
function calls).


David


Re: Throwing StopIteration in array extras to stop the iteration

2013-03-05 Thread David Bruant

On 05/03/2013 17:37, Brendan Eich wrote:

David Bruant wrote:
I'm happy with the outcome of the thread if .findIndex is introduced, 
but I can't help wondering whether a new method is going to be 
introduced every single time someone brings up a pattern that would 
make good use of stopping an iteration early.


Lacking an iteration protocol, that's the natural tendency, although 
there are only so many methods and method-variations likely to be needed.


With for-of, stopping early is done by a break in the body.
Indeed. The mention of the lack of an iteration protocol made me realize 
that maybe .forEach was just a detour (map/reduce/filter/every/some 
each answer a specific pattern, so they're still useful).


Inventing synonyms for exceptions (endframe) or adding escape 
continuations for more locally structured control effects could be 
done, but why? I think we're nearly done (no pun!).
Discriminating how a frame ended by value works. There is the concern 
about cross-frame protocols, and we're saved by [[Brand]], which is a 
forgeable string that's the same across globals.

But authors don't have this kind of control.
Maybe we'll never need to extend JavaScript with other function-based 
protocols because hacking return true/false (considered so many times 
for proxy traps) and throw StopIteration will be enough.
I don't have a better name than "function-based protocols", but I feel 
we're not done with them. We might be just starting. We'll see.


David


Throwing StopIteration in array extras to stop the iteration

2013-03-03 Thread David Bruant

Hi,

One (minor) annoyance with forEach/map, etc. is that the iteration 
can't be stopped until all elements have been traversed, which doesn't 
suit every use case. One hack to stop the iteration is to throw an 
error, but that requires wrapping the .forEach call in a try/catch block, 
which is annoying for code readability too.


The iterator protocol defines the StopIteration value. What about 
reusing this value in the context of the array extras?


(function(){
    [2, 8, 7].forEach(function(e){
        if(e === 8)
            throw StopIteration;
        console.log(e);
    });

    console.log('yo');
})();

In that case, '2' would be logged, then 'yo'. The idea is to have an 
in-function way to stop the iteration without being forced to throw 
something that has to be caught outside.
Spec-wise, for forEach, it would require changing step 7.c.ii: 
presumably, look at the completion value and, if it's a throw completion 
whose value has the StopIteration brand, don't forward the error and 
just stop the iteration.


For methods that return something, I guess the partially built 
array/value could be returned.


StopIteration is in Firefox (though that's not how it behaves with 
forEach & co there) but in no other engine as far as I know, so it's not 
supposed to be relied on by the web; I don't think there is a major issue 
preventing this change.


David


Re: Throwing StopIteration in array extras to stop the iteration

2013-03-03 Thread David Bruant

On 03/03/2013 19:56, Bjoern Hoehrmann wrote:

* David Bruant wrote:

One (minor) annoyance with forEach/map, etc. is that the enumeration
can't be stopped until all elements have been traversed which doesn't
suit every use case. One hack to stop the enumeration is to throw an
error but that requires to wrap the .forEach call in a try/catch block
which is annoying too for code readability.

The iterator protocol defines the StopIteration value. What about
reusing this value in the context of array extras?

Using exceptions for normal flow control seems like a bad idea to me.
I could not agree more. But JavaScript is what it is. Iterators are 
going to use throw StopIteration [1] too.
It's been discussed recently [2]. There could be slightly more radical 
ideas, like the "endframe" thing I describe in that post, but I have 
no hope that such an idea would be considered seriously; that's why I 
haven't proposed it and only shared it as food for thought.



 (function(){
     [2, 8, 7].forEach(function(e){
         if(e === 8)
             throw StopIteration;
         console.log(e);
     });

     console.log('yo');
 })();

Languages like Haskell and C# would use `takeWhile` for this purpose,
so you would have something like

   [2, 8, 7].takeWhile(x => x !== 8).forEach(x => console.log(x));

That seems much better to me.
Sure. You can already prefix anything with .filter, but in current 
implementations and for the foreseeable future, this systematically 
allocates an extra array (which in turn costs in terms of GC).


How would you make takeWhile work in JS in a way that's as performant 
as throw StopIteration, without breaking an existing website?
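
For what it's worth, one array-free way to get takeWhile is a generator 
over the iterator protocol; it still allocates a generator object, but 
no intermediate array:

    function* takeWhile(iterable, predicate) {
        for (let x of iterable) {
            if (!predicate(x))
                return; // stop pulling from the underlying iterator
            yield x;
        }
    }

    for (let x of takeWhile([2, 8, 7], x => x !== 8))
        console.log(x); // logs 2 only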


David

[1] http://wiki.ecmascript.org/doku.php?id=harmony:iterators
[2] https://mail.mozilla.org/pipermail/es-discuss/2013-February/028674.html


Re: Throwing StopIteration in array extras to stop the iteration

2013-03-03 Thread David Bruant

On 03/03/2013 20:29, Brendan Eich wrote:
If you want some or every and not forEach, they are there -- use them. 
No exception required.
I've found myself multiple times in a situation where I needed the index 
of the first element matching some condition. I solved it the 
following way:


var index;
array.some(function(e, i){
    if(someCondition(e)){
        index = i;
        return true; // 'some' stops as soon as the callback returns true
    }

    return false;
})

It works, but feels a bit awkward. It's a hack on .some, because there is 
no other way to stop an iteration in the other array methods.
Also, spending hours on debugging because someone confused some with 
every (mixed up the meaning of true/false) isn't fun.


var index;
array.forEach(function(e, i){
    if(someCondition(e)){
        index = i;
        throw StopIteration;
    }
})

would look more explicit in my opinion.

David


Re: What is the status of Weak References?

2013-03-02 Thread David Bruant

On 02/03/2013 01:58, Rafael Weinstein wrote:

On Sat, Feb 2, 2013 at 11:02 AM, Brendan Eich bren...@mozilla.com wrote:

David Bruant wrote:

Interestingly, revocable proxies require their creator to think about the
lifecycle of the object, to the point where they know when the object
shouldn't be used anymore by whoever they shared the proxy with. I feel this
is the exact same reflection that is needed to understand when an object
isn't needed anymore within a trust boundary... seriously questioning the
need for weak references.


Sorry, but this is naive. Real systems such as COM, XPCOM, Java, and C#
support weak references for good reasons. One cannot do data binding
transparently without either making a leak or requiring manual dispose (or
polling hacks), precisely because the lifecycle of the model and view data
are not known to one another, and should not be coupled.

See http://wiki.ecmascript.org/doku.php?id=strawman:weak_refs intro, on the
observer and publish-subscribe patterns.

This is exactly right.

I'm preparing an implementation report on Object.observe for the next
meeting, and in it I'll include findings from producing a general
purpose observation library which uses Object.observe as a primitive
and exposes the kind of semantics that databinding patterns are likely
to need.

Without WeakRefs, observation will require a dispose() step in order
to allow garbage collection of observed objects, which is (obviously)
very far from ideal.
There is another approach, taken by the requestAnimationFrame API, that 
consists of one-time listeners (Node.js has that concept too [1]), 
requiring re-subscription if one wants to listen more than once.
I wonder why this approach was taken for requestAnimationFrame, 
which is fired relatively often (60 times a second). I'll ask on 
public-webapps.
I won't say it's absolutely better than WeakRefs, and it may not apply to 
the data binding case (?), but it's an interesting pattern to keep in mind.
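
The pattern in question, sketched with requestAnimationFrame; 'render' 
and 'stillAnimating' are stand-ins for real application state:

    function onFrame(time) {
        render(time);                       // per-frame work
        if (stillAnimating)
            requestAnimationFrame(onFrame); // explicitly opt back in
        // otherwise: the callback is simply never re-registered, and
        // onFrame (plus whatever it closes over) becomes collectable
    }
    requestAnimationFrame(onFrame);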


I'm looking forward to reading your findings in the meeting notes.

David

[1] http://nodejs.org/api/events.html#events_emitter_once_event_listener


Re: What is the status of Weak References?

2013-03-02 Thread David Bruant

On 02/03/2013 12:11, Kevin Gadd wrote:

I don't understand how the requestAnimationFrame approach (to
registering periodic callbacks) applies to scenarios where you want
Weak References (for lifetime management) or to observe an object (for
notifications in response to actions by other arbitrary code that has
a reference to an object). These seem to be significantly different
problems with different constraints.
It's not really about "periodic", but rather about the idea of a 
one-time listener. The reason I talked about 60 frames per second is 
that it's an event that's fired very often, so conditionally 
re-registering may have a significant runtime cost.


The general problem is knowing when someone listening actually wants to 
stop listening. Currently, the default of observing is "observe ad vitam 
æternam", which obviously raises the issue of "what if we actually 
stopped caring about observing a while ago?"
The one-time listener approach is interesting because it doesn't say 
"observe ad vitam æternam", but rather "I'll call you only once, figure 
out the rest on your own" (so, re-subscribe if that's what you want).
Among the huge benefits of this approach, the problem of GC-ing observers 
is completely solved by the API. Well, not exactly: maybe you 
registered, but stop caring before the event happens, so your observer is 
garbage until called once. But after the event happens, it can be released.


In a way, a one-time listener can be seen as an auto-wrapped one-time 
reference (wrapped by the event emitter, not by the one who registered 
it). A very weak reference?



If anything, requestAnimationFrame is an example of an API that poorly
expresses developer intent.
I've asked [1]; we'll see. I'm not very interested in rAF specifically, 
because I mostly agree with you: in all the code snippets I've 
read and written, people re-subscribe unconditionally. Maybe some more 
complex applications don't.



Furthermore, the need to manually trigger further
frame callbacks is error-prone - you are essentially offloading the
cost of lifetime management onto the application developer
The lifetime management *is* on the application developer's shoulders. It 
always has been and always will be. GC and weakrefs are just conveniences 
to make this work (much!) easier. There are cases where a GC isn't 
sufficient. There will always be cases where manual disposal will be 
necessary, even if the language gets WeakRefs.
When a developer wraps an object in a WeakRef before handing it to 
observe an event, the developer is making a lifetime management decision.



For this and other reasons, I
would suggest that it is a horrible idea to use rAF as an example of
how to design an API or solve developer problems - especially problems
as important as those addressed by weak references.

I feel misinterpreted. There is a long way from my
"I won't say it's absolutely better than WeakRefs and it may not apply 
to the data binding case (?), but it's an interesting pattern to keep in 
mind."
to your interpretation of my post.
I shared an idea that hadn't been shared yet on this thread. I didn't 
say it would solve all problems. I've actually been careful to say that 
it may not.


David

[1] http://lists.w3.org/Archives/Public/public-webapps/2013JanMar/0623.html


Re: What is the status of Weak References?

2013-03-02 Thread David Bruant
I'm still undecided on whether/when/how the one-time listener pattern can 
be effective, but I feel like it's an interesting idea to discuss.


On 02/03/2013 21:13, Marius Gundersen wrote:
 I won't say it's absolutely better than WeakRefs and it may not 
apply to the data binding case (?), but it's an interesting pattern to 
keep in mind.


I can't see how this would work in an observer/listener application. 
The listening object has no way of knowing if it will be deleted when 
the event occurs, so it cannot decide whether to resubscribe or not. 
Even if all other references to it were deleted, it would not go away. It 
would therefore need a "shouldNotResubscribe" flag, which must be set 
when it should be deleted. When the next event occurs it can react and 
decide not to resubscribe. This means that a listening object still 
needs a dispose method (to set the "shouldNotResubscribe" flag),
In the weakref case, someone has to keep a strong reference to the 
listener until it's no longer needed. The decision to cut this last 
strong reference is exactly the same decision as deciding when not to 
re-subscribe.


I think the difference is that the last reference might be cut because 
another object got released, and that object got released because yet 
another object got released, etc. Unless the entire application 
is collectable, somewhere in the chain there is an explicit 
strong-reference cut.
My point is that being garbage-collectable needs an explicit action 
somewhere. WeakRef is a convenient construct that benefits from a 
cascading of that explicit action. But an explicit action is needed 
somewhere anyway.


and it also means that it would not be deleted until the next event 
occurs, which could be a very long time away.

There can be a removeEventListener too.

David



Marius


On Sat, Mar 2, 2013 at 7:53 PM, Bill Frantz fra...@pwpconsult.com wrote:


On 3/2/13 at 3:47 AM, bruan...@gmail.com (David Bruant) wrote:

I won't say it's absolutely better than WeakRefs and it may
not apply to the data binding case (?), but it's an
interesting pattern to keep in mind.


Speaking from ignorance here.

One advantage of the resubscribe-for-every-event pattern is that 
if the events are generated in one process -- an animation process 
is the example used here -- and a message is sent to the observer 
in another process, and the observer is slow, the natural outcome 
will be to drop frames instead of queuing up a lot of messages 
pertaining to events that no longer need to be processed.

Cheers - Bill

-
Bill Frantz        | When it comes to the world     | Periwinkle
(408)356-8506      | around us, is there any choice | 16345 Englewood Ave
www.pwpconsult.com | but to explore? - Lisa Randall | Los Gatos, CA 95032








Re: Transitioning to strict mode

2013-02-22 Thread David Bruant

On 21/02/2013 19:16, Mark S. Miller wrote:
On Thu, Feb 21, 2013 at 9:12 AM, David Bruant bruan...@gmail.com wrote:


On 18/02/2013 23:29, Claus Reinke wrote:


What I'd like to understand is why likely static scoping problems
should lead to a runtime error, forcing the dependence on testing.
If they'd lead to compile time errors (for strict code),
there'd be no chance of missing them on the developer engine,
independent of incomplete test suite or ancient customer
engines. Wouldn't that remove one of the concerns against
using strict mode? What am I missing?

I guess it's too late now for ES5 strict mode.
What was the rationale behind making it a runtime error?

I think there were plans to make it a compile-time error... was it
with the ES6 opt-in? :-s
Can it be retrofitted into new syntax forms which are their own opt-in
(module, class...)?



For the ES5 semantics of the interaction of the global scope and the 
global object, how could you make this a static error?

"use hypothetic strict";
var a;
a = 12; // a was declared, no problem
b = b+1; // SyntaxError on the assignment regardless of |'b' in this|

If someone wants to assign to the global 'b', it's still possible to do:
this.b = b+1; // or
window.b = b+1;
Or maybe they forgot to declare b and they just need to declare it 
somewhere to fix the SyntaxError. At least, the intent will be very 
explicit.



What would you statically test?

is the variable being assigned declared in the same script?
And I am specifically speaking about variables assignments, that is 
AssignmentExpression (ES5-11.13) where LeftHandSideExpression is an 
Identifier.
If LeftHandSideExpression is a MemberExpression [ Expression ] or 
MemberExpression . IdentifierName in which MemberExpression resolves 
to the global object, I have no problem with it. At least it's very 
explicit that the global object is being assigned something.


Would you statically reject the following program, where 
someExpression is itself just some valid expression computing a 
value (that might be the string foo)? Note that this below is the 
global object, since it occurs at top level in a program.


"use strict";
this[someExpression] = 8;
console.log(foo);

I would not reject it as I said above.
I think there are 2 different concerns:
1) assigning a value to an undeclared variable
2) adding a property to the global object

I think I only care about preventing the former (I'll talk about the 
latter below), because the intent is ambiguous and the disambiguation 
can be either declare it or add a property to the global object like 
you really mean it.
The rule for strict mode could have been: throw a SyntaxError when 
trying to assign a value to an undeclared variable.
As I suggest in the guide, these errors are almost free to fix. Just add 
"use strict";, read your console, which tells you at which line there is 
an error and what the syntax error is, and fix it.


If people really want to add a property to the global object, they can 
anyway through this[expression] = 8 at the top-level scope as you 
suggest or by aliasing the global in non-top-level scopes as in:

(function(global){
    global[expression] = value;
})(this)
or using an existing alias like window or frames in the web browser.
So preventing people from adding global properties is a lost cause.
Preventing people from assigning to an undeclared variable isn't.

David


Re: Transitioning to strict mode

2013-02-21 Thread David Bruant

On 18/02/2013 23:29, Claus Reinke wrote:
Out of curiosity, what does your favorite test coverage tool report 
for the source below? And what does it report when you comment

out the directive?
:-p Ok, there are exceptions if your code depends on semantic changes 
described in the third section of the article (dynamic 
this/eval/arguments).

That's your case with how you define isStrict (dynamic this).
So: if your code does *not* depend on semantic changes, all instances 
of setting to an undeclared variable will be caught.


Just wanted to shake your faith in testing :-) The example code might
look unlikely, but real code is more complex and might evolve nasty
behavior without such artificial tuning.

You still need more than statement or branch coverage. Otherwise,
we might get 100% coverage while missing edge cases

   function raise() {
     "use strict";
     if( Math.random()<0.5 || (Math.random()<0.5) && (variable = 0))
       console.log(true);
     else
       console.log(false);
   }

   raise();
   raise();
   raise(); // adjust probabilities and call numbers until we get
   // reliable 100% branch coverage with no errors; then
   // wait for the odd assignment to happen anyway, in
   // production, not reproducibly
There is no reliable 100% coverage in this case. The coverage I guess 
is... probabilistic?


Throwing or not throwing Reference Errors is also a semantics change, 
and since errors can be caught, we can react to their presence/absence,

giving another avenue for accidental semantics changes.
I agree it's a semantic change, but it's one that's special in the 
development workflow. The common practice is to fix code that throws, 
whatever that means.
non-directly-throwing semantic changes require a different kind of 
attention and testing.
I understand errors can be caught by a try-catch placed for other 
reasons, but whoever cares about transitioning to strict mode will be 
careful about this kind of issue.



Undeclared variables are likely to be unintended, and likely to lead to
bugs, but they might also still let the code run successfully to 
completion where strict mode errors do or don't, depending on 
circumstances.
I agree. The goal when transitioning to strict mode is also to preserve 
the semantics of the original code. I've tried to provide examples of 
how to fix common errors. For the undeclared variable case, I've 
explained how to legitimately assign a global variable if that's what 
was really intended. This way, there is a quick fix that preserves the 
semantics.

Other fixes for all the error cases are welcome as contributions.


Testing increases confidence (sometimes too much so) but cannot
prove correctness, only the absence of selected errors.

I fully agree.


What I'd like to understand is why likely static scoping problems
should lead to a runtime error, forcing the dependence on testing.
If they'd lead to compile time errors (for strict code), there'd be no 
chance of missing them on the developer engine, independent of 
incomplete test suite or ancient customer engines. Wouldn't that 
remove one of the concerns against using strict mode? What am I missing?

I guess it's too late now for ES5 strict mode.
What was the rationale behind making it a runtime error?

I think there were plans to make it a compile-time error... was it with 
the ES6 opt-in? :-s
Can it be retrofitted into new syntax forms which are their own opt-in (module, 
class...)?


David


Re: get/setIntegrity trap (Was: A case for removing the seal/freeze/isSealed/isFrozen traps)

2013-02-20 Thread David Bruant

On 20/02/2013 21:08, Kevin Reid wrote:
On Wed, Feb 20, 2013 at 11:52 AM, Nathan Wall nathan.w...@live.com wrote:


`Object.isFrozen` and `Object.isSealed` don't really seem that
helpful to me for the very reasons you've discussed: They don't
represent any real object state, so they don't accurately tell me
what can be done with an object.  If I could I would argue in
favor of their removal, though I know it's too late for that.

I would be curious to see legitimate uses of `isFrozen` and
`isSealed` in existing code if anyone has anything to offer.


I just took a look at uses of Object.isFrozen in Caja and I find that 
all but one are either in tests (test that something is frozen) or in 
sanity checks (if this isn't frozen, do not proceed further, or freeze 
it and warn).


The remaining one is in a WeakMap abstraction used for trademarking: 
an object cannot be given a trademark after it is frozen. (The 
rationale here, while not written down, I assume is that a defensive 
object's interface should not change, and it is an implementation 
detail that this particular information is not stored in the object.) 
There is a comment there suggesting we might strengthen this check to 
only permitting _extensible_ objects to be marked.

And in an ES6 world, you'll probably use an actual WeakMap anyway?
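
Something like this minimal sketch, assuming the ES6 WeakMap API 
(stamp/hasTrademark are illustrative names):

var trademarks = new WeakMap();

function stamp(obj){
    // mirrors the Caja check described above
    if(Object.isFrozen(obj)){
        throw new TypeError('cannot trademark a frozen object');
    }
    trademarks.set(obj, true);
}

function hasTrademark(obj){
    return trademarks.has(obj);
}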

Thanks for sharing this experience from Caja,

David



Re: Transitioning to strict mode

2013-02-18 Thread David Bruant

On 18/02/2013 11:10, Claus Reinke wrote:
I'm looking forward to any recommendation you'd have to improve this 
guide, specifically about the runtime errors where I said something 
about 100% coverage test suite and I'm not entirely sure about that.


Talking about 100% coverage and catching all errors is never a
good combination - even if you should have found an example of
where this works, it will be an exception.
There are a couple of things I'm sure of. For instance, direct eval 
aside (eval needs some specific work anyway because its semantics is 
changed a lot), if you have 100% coverage, every instance of setting to 
an undeclared variable will be caught. There is no exception.
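
A minimal illustration: any test that merely executes (covers) the 
faulty line triggers the error.

"use strict";
function save(){
    usrename = 'Alice'; // typo for 'username': ReferenceError in strict mode
}
save(); // any test covering this call catches the typo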

But I wonder if that's the case for all runtime errors I listed.
Otherwise, in general, I agree that a test suite with 100% coverage that 
passes doesn't mean that the program is correct. Specifically, in the 
semantics changes section, I don't talk about test suites, not even 
with 100% coverage.


Also, in practice, for large projects, 100% coverage is a fantasy. I 
know many software contracts are signed agreeing on 80% coverage, 
because 100% is a lot of work and not even necessary.


What I'm trying to convey in the different sections is the type and 
amount of work that is necessary to be sure the code works when moved to 
strict mode.



Then there is the issue of pragmatic concerns (can throw at runtime,
can change semantics on old engines), as expressed in this post

http://scriptogr.am/micmath/post/should-you-use-strict-in-your-production-javascript 



To push adoption of strict mode, it might need one or two refinements.
# we definitely don't want those silent bugs to throw runtime errors to 
our end users on live websites.


I could not agree more. But when I read this sentence, I can't help 
thinking: why would that ever happen?
Transitioning to strict mode does *not* mean putting use strict; at 
the top of the program and pushing to production. That's the very reason 
I wrote the guide actually. I'll expand the intro to talk about that.
People should run their code locally, test it before pushing to 
production. If people don't test locally before pushing to production, 
transitioning to strict mode should be the least of their concerns.
Also, gradually transitioning down to the function granularity means 
that if an error ever slips into production, it's easy to revert just 
the one function that is not strict-ready yet.



# On older browser not running strict mode

That point is a very valid concern (and I should probably expand the 
guide on this point). I think this point can be summarized by 2 rules:
1) Unless you're a language expert and know what you're doing (you don't 
need that guide anyway), just stay away from things where the semantics 
is different

1.1) eval
1.2) arguments (unless you're in a case where you'd use ...args in ES6)
1.3) odd cases of dynamic this (this in non-constructor/method, 
primitive values boxed in objects)
2) Strict mode doesn't make your code throw (either syntactically or 
dynamically)


If those 2 rules are followed, the code will run the same in strict and 
non-strict, no need to worry about it.
Developing new code in strict mode will de facto enforce the second rule 
(assuming people don't want their code to throw as the normal behavior). 
Only discipline (with the help of a static checker watching for the 
this/eval/arguments keywords?) will help to follow the first rule.


Does this sound false to anyone?
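
For concreteness, a minimal illustration of the rule-1 constructs whose 
behavior differs between modes (dynamic this and arguments aliasing):

function sloppyThis(){ return this; }
function strictThis(){ "use strict"; return this; }
sloppyThis(); // the global object
strictThis(); // undefined

function sloppyArgs(a){ arguments[0] = 42; return a; }
function strictArgs(a){ "use strict"; arguments[0] = 42; return a; }
sloppyArgs(1); // 42: arguments aliases the parameter
strictArgs(1); // 1: no aliasing in strict mode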


# Concatenation

If the rules of the previous section are followed the code has the exact 
same semantics in strict and non-strict. So if the code is not run in 
the mode it was initially intended for, it won't make any difference.



Thanks for the feedback and the link to the article :-)

David


Re: Transitioning to strict mode

2013-02-18 Thread David Bruant

On 18/02/2013 16:48, Claus Reinke wrote:

Talking about 100% coverage and catching all errors is never a
good combination - even if you should have found an example of
where this works, it will be an exception.
There are a couple of things I'm sure of. For instance, direct eval 
aside (eval needs some specific work anyway because its semantics is 
changed a lot), if you have 100% coverage, every instance of setting 
to an undeclared variable will be caught. There is no exception.


Out of curiosity, what does your favorite test coverage tool report 
for the source below? And what does it report when you comment

out the directive?
:-p Ok, there are exceptions if your code depends on semantic changes 
described in the third section of the article (dynamic this/eval/arguments).

That's your case with how you define isStrict (dynamic this).
So: if your code does *not* depend on semantic changes, all instances of 
setting to an undeclared variable will be caught.


So I guess the first thing to do when transitioning to strict mode is 
getting rid of all the things that result in non-direct error semantic 
changes (dynamic this/eval/arguments).


Thanks for the feedback,

David



Claus


function test(force) {
  "use strict";

  function isStrict() { return !this }
  console.log(isStrict());

  if (!force && (!isStrict() && (doocument = unndefined))) {
    console.log("we don't have lift-off");
  } else {
    console.log("ready to go!");
    // do stuff
  }

  !isStrict() && console.log(doocument);
}

test(false);
test(true);





Transitioning to strict mode

2013-02-17 Thread David Bruant

Hi,

I'd like to share a piece of documentation I've recently written [1]. 
It's a guide to help developers understand how they can transition to 
strict mode and what they should be aware of while making this transition.
Differences between strict and non-strict are divided into 3 categories: 
syntax errors, runtime errors, semantic changes.
Each category requires a different amount of work and attention from 
developers.


I'm looking forward to any recommendation you'd have to improve this 
guide, specifically about the runtime errors where I said something 
about 100% coverage test suite and I'm not entirely sure about that.


Thanks,

David

[1] 
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Functions_and_function_scope/Strict_mode/Transitioning_to_strict_mode



Re: A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-16 Thread David Bruant

On 16/02/2013 23:31, Allen Wirfs-Brock wrote:

Will this not just shift the current complexity someplace else?
Well, it means that for 100% backwards compatibility, Object.isFrozen 
would have to be something like:


1. Let state = obj.[[GetIntegrity]]();
2. If state is "frozen", return true;
3. If state is "sealed" or non-extensible, then return true if all 
properties are non-configurable and non-writable

nit: You can save the state to "frozen" before returning true.

4. Return false.

The real complexity saving is in simplifying the MOP/Proxy handler 
interface and also in making Proxy invariants  only sensitive to the 
explicit integrity state of an object.


David


Re: Case insensitive String startsWith, contains, endsWith, replaceAll method

2013-02-16 Thread David Bruant

On 17/02/2013 00:58, Biju wrote:

In most time when user want to search something in a text, he/she
wants to do a case insensitive search.
For example to filter items displayed in list on a page.
Also on other applications, say any word processor, or in page search
in Firefox, IE, Chrome etc.

So can we make the default behavior of new methods String.startsWith,
String.contains, String.endsWith case insensitive?
I think all current methods are case-sensitive. If these methods were to 
be made case-insensitive, someone else would come on the list demanding 
consistency.

Also, it doesn't seem that hard to implement:
String.prototype.startsWithI = function(s){
    // note: assumes `s` contains no special regexp characters
    return this.match(new RegExp('^' + s, 'i')) !== null;
};

And sometimes, case-sensitive is what you want.


And to make it case sensitive we should add a third flag parameter matchCase
like...

var startsWith = str.startsWith(searchString [, position [, matchCase] ] );
var contained = str.contains(searchString [, position [, matchCase] ] );
var endsWith = str.endsWith(searchString [, position [, matchCase] ] );


Additionally we should have a String.replaceAll method right now web
developers are using complex logic to achieve the same.

"aA".replace(/a/ig, 'b'); // 'bb'
I feel the 'i' and 'g' flags of regexps aren't that complex. One just 
needs to know about them.


David


Re: A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-15 Thread David Bruant

On 15/02/2013 11:03, Mariusz Nowak wrote:

I've worked a lot with ECMAScript5 features in last two years, and I must say
I never found a good use case for Object.freeze/seal/preventExtensions, it
actually raised more issues than it actually helped (those few times when I
decided to use it). Currently I think that's not JavaScript'y approach and
use cases mentioning untrusted parties sounds logical just in theory, in
practice when actually we never include untrusted modules in our code base
does not make much sense.

However, main point I want to raise is that several times I had a use case
for very close functionality, that with current API seem not possible:
I'd like to be able to *prevent accidental object extensions*.
If something *accidental* can happen, then "untrusted parties" is more 
than theoretical ;-)

Brendan says it better [1]:
In a programming-in-the-large setting, a writable data property is 
inviting Murphy's Law. I'm not talking about security in a mixed-trust 
environment specifically. Large programs become mixed trust, even when 
it's just me, myself, and I (over time) hacking the large amount of code.


Security and "untrusted parties" aren't about terrorist groups trying 
to hack your application to get a copy of your database or corrupt it, 
or about your choice to use some code downloaded from a dark-backgrounded 
website.
They're about you trying to meet a deadline and not having time to read 
carefully the documentation and comments of every single line of the 
modules you're delegating to.
Trust isn't an all-or-nothing notion. Anytime I say "untrusted", I 
should probably say "partially trusted" instead.
Trust also changes over time, mostly because as times passes, our brains 
forget the invariants and assumptions we baked in our code and if those 
aren't enforced at compile time or runtime, we'll probably violate them 
at one point or another and thus create bugs. Or we just make mistakes, 
because we're human and that's exactly the case you're explaining.
Security and untrusted parties are about our inability as human 
beings to remember everything we do and our inability to be perfect. Any 
security mechanism is a mechanism to protect against hostile outsiders 
but also and probably mostly ourselves over time.


It is usually not considered so, but separation of concerns is a 
security mechanism in my opinion. So are most object-oriented so-called 
good practices.


Security is very loaded with the emotions of people afraid of having 
their password stolen and of cyber attacks. It's also loaded with the 
notions of human safety and human integrity, which, as human beings, we 
are sensitive to.

Maybe I should start using a different word...


I want to
control all enumerable properties of the object, so they can only be set via
defineProperty, but any direct assignment of non existing prop e.g.
'x.notDefinedYet = value'  will throw. Imagine some ORM implementation, that
via setters propagates changes to underlying persistent layer, at this time
we cannot prevent accidental property sets that may occur before property
was actually defined (therefore not caught by the setter)
I assume that proxies will make such functionality possible, but maybe some
Object.preventUndefinedExtensions will be even better.
The problem is that there are probably dozens of use cases like yours 
[2] and the Object built-in can't welcome them all.
Hence proxies as an extension mechanism of any random 
micro-abstraction (as Andreas Rossberg puts it ;-) )
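
For what it's worth, a minimal sketch of the requested behavior with a 
direct proxy (preventUndefinedExtensions is a hypothetical name):

function preventUndefinedExtensions(target){
    return new Proxy(target, {
        set: function(t, name, value){
            if(!(name in t)){
                throw new TypeError('assignment to undefined property "' + name + '"');
            }
            t[name] = value;
            return true;
        }
        // no defineProperty trap: explicitly defining a property still works
    });
}

var x = preventUndefinedExtensions({defined: 1});
x.defined = 2;                                  // ok
Object.defineProperty(x, 'later', {value: 3});  // ok, explicit intent
x.notDefinedYet = 4;                            // TypeError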


David

[1] https://mail.mozilla.org/pipermail/es-discuss/2013-February/028724.html
[2] When I learned JS, how many time did I mistyped .innerHTML and 
wasted hours not understanding where some undefined string in my UI 
came from.



Re: A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-14 Thread David Bruant

On 14/02/2013 18:11, Andreas Rossberg wrote:

On 13 February 2013 13:39, David Bruant bruan...@gmail.com wrote:

Warning: In this post, I'll be diverging a bit from the main topic.

On 12/02/2013 14:29, Brendan Eich wrote:


Loss of identity, extra allocations, and forwarding overhead remain
problems.

I'm doubtful loss of identity matters often enough to be a valid argument
here. I'd be interested in being proved wrong, though.

I understand the point about extra allocation. I'll talk about that below.

The forwarding overhead can be made nonexistent in the very case I've exposed
because the traps you care about are absent from the handler, so engines
are free to optimize [[Get]] and friends as operations applied directly
to the target.

You're being vastly over-optimistic about the performance and the
amount of optimisation that can realistically be expected for proxies.
Proxies are inherently unstructured, higher-order, and effectful,
which defeats most sufficiently simple static analyses. A compiler has
to work much, much harder to get useful results. Don't expect anything
anytime soon.

var handler = {set: function(){throw new TypeError}}
var p = new Proxy({a: 32}, handler);

p.a;

It's possible *at runtime* to notice that the handler of p doesn't have 
a get trap, optimize p.[[Get]] as target.[[Get]] and guard this 
optimization on handler modifications. Obviously, do that only if the 
code is hot.
I feel it's not much more work than what JS engines do currently, and the 
useful result is effectively getting rid of the forwarding overhead.

Is this vastly over-optimistic?


I've seen this in a previous experience on a Chrome extension where someone
would seal an object as a form of documentation to express I need these
properties to stay in the object. It looked like:
function C(){
    // play with |this|
    return Object.seal(this)
}

My point here is that people do want to protect their object integrity
against untrusted parties which in that case was just people who'll
contribute to this code in the future.

Anecdotally, the person removed the Object.seal before the return because of
performance reasons, based on a JSPerf test [3].
Interestingly, a JSPerf test with a proxy-based solution [4] might have
convinced to do proxies instead of Object.seal.

Take all these JSPerf micro benchmark games with two grains of salt;

... that's exactly what I said right after :-/
But that's a JSPerf test and it doesn't really measure the GC overhead 
of extra objects.
JSPerf only measures one part of the perf story and its nice conclusion 
graph should be taken with a pinch of salt.



lots of them focus on premature optimisation.
I'm quite aware. I fear the Sphinx [1]. I wrote "might have convinced to 
do proxies instead of Object.seal". I didn't say I agreed, and I 
actually don't.



Also, seal and freeze
are far more likely to see decent treatment than proxies.

Why so?


But more importantly, I think you get too hung up on proxies as the
proverbial hammer. Proxies are very much an expert feature. Using them
for random micro abstractions is like shooting birds with a nuke. A
language that makes that necessary would be a terrible language. All
programmers messing with home-brewed proxies on a daily basis is a
very scary vision, if you ask me.

hmm... maybe.

David

[1] https://twitter.com/ubench_sphinx


Re: Are frozen Objects faster ?

2013-02-14 Thread David Bruant

On 14/02/2013 19:08, Mark S. Miller wrote:
On Thu, Feb 14, 2013 at 10:01 AM, Kevin Gadd kevin.g...@gmail.com wrote:


Frozen and sealed objects are both dramatically slower in most JS
engines I've tested. In the ones where they're not dramatically slower
they are never faster.

The last time I asked on the mozilla and v8 bug trackers I was
informed that there is no plan to optimize for these features and that
the design of the respective JS engines would make such optimizations
difficult anyway.

(I find this extremely unfortunate.)


Likewise. And unlikely.

Based on history, I suggest that the best way to get this situation 
fixed is benchmarks.

Agreed 100%

Either create a new benchmark or a variation of an existing benchmark. 
For example, if someone created a variant of SunSpider in which all 
objects that don't need to not be frozen were frozen, and posted the 
measurements, that would help get everyone's attention. The situation 
might then improve rapidly.
Choice of the specific benchmark aside, this is a very good idea. This 
could also be applied to strict mode.


David


Re: A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-13 Thread David Bruant

Warning: In this post, I'll be diverging a bit from the main topic.

On 12/02/2013 14:29, Brendan Eich wrote:
Loss of identity, extra allocations, and forwarding overhead remain 
problems.
I'm doubtful loss of identity matters often enough to be a valid 
argument here. I'd be interested in being proved wrong, though.


I understand the point about extra allocation. I'll talk about that below.

The forwarding overhead can be made nonexistent in the very case I've 
exposed because the traps you care about are absent from the handler, so 
engines are free to optimize [[Get]] and friends as operations applied 
directly to the target.
A handler-wise write barrier can deoptimize, but in most practical cases 
the deoptimization won't happen because handlers don't change.


It seems to me that you are focusing too much on share ... to 
untrusted parties.

Your very own recent words [1]:
In a programming-in-the-large setting, a writable data property is 
inviting Murphy's Law. I'm not talking about security in a mixed-trust 
environment specifically. Large programs become mixed trust, even when 
it's just me, myself, and I (over time) hacking the large amount of code.

...to which I agree with (obviously?)

And "Be a better language for writing complex applications" is among the 
first goals [2]


Maybe I should use another word than "untrusted parties". What I mean is 
any code that will manipulate something without necessarily caring to 
learn what this something expects as preconditions and its own invariants.
This includes security issues of course, but also buggy code (bugs 
which, in big applications, are often related to a mismatch between a 
precondition/expectation and how something is actually used).


I've seen this in a previous experience on a Chrome extension where 
someone would seal an object as a form of documentation to express I 
need these properties to stay in the object. It looked like:

function C(){
    // play with |this|
    return Object.seal(this)
}

My point here is that people do want to protect their object integrity 
against untrusted parties which in that case was just people who'll 
contribute to this code in the future.


Anecdotally, the person removed the Object.seal before the return 
because of performance reasons, based on a JSPerf test [3].
Interestingly, a JSPerf test with a proxy-based solution [4] might have 
convinced to do proxies instead of Object.seal.
But that's a JSPerf test and it doesn't really measure the GC overhead 
of extra objects. Are there data on this? Are there methodologies to 
measure this overhead? I understand it, but I find myself unable to pull 
up numbers on this topic and convincing arguments that JSPerf only 
measures one part of the perf story and its nice conclusion graph should 
be taken with a pinch of salt.


It's true you want either a membrane or an already-frozen object in 
such a setting.
Not a membrane, just a proxy that protects its target. Objects linked 
from the proxy likely came from somewhere else. They're in charge of 
deciding of their own integrity policy.


And outside of untrusted parties, frozen objects have their uses -- 
arguably more over time with safe parallelism in JS.

Arguably indeed. I would love to see this happen.
Still, if (deeply) frozen POJSOs could be shared among contexts, I 
think we can agree that it wouldn't apply to frozen proxies for a long 
time (ever?)



I went a bit too far suggesting frozen objects could de-facto disappear 
with proxies.
I'm still unclear on the need for specific seal/freeze/isSealed/isFrozen 
traps


David

[1] https://mail.mozilla.org/pipermail/es-discuss/2013-February/028724.html
[2] http://wiki.ecmascript.org/doku.php?id=harmony:harmony#goals
[3] http://jsperf.com/object-seal-freeze/
[4] http://jsperf.com/object-seal-freeze/2


/be

David Bruant wrote:

Hi,

The main use case (correct me if I'm wrong) for freezing/sealing an 
object is sharing an object to untrusted parties while preserving the 
object integrity. There is also the tamper-proofing of objects 
everyone has access to (Object.prototype in the browser)


In a world with proxies, it's easy to build new objects with high 
integrity without Object.freeze: build your object, share only a 
wrapped version to untrusted parties, the handler takes care of the 
integrity.


function thrower(){
    throw new Error('nope');
}
var frozenHandler = {
    set: thrower,
    defineProperty: thrower,
    delete: thrower
};

function makeFrozen(o){
    return new Proxy(o, frozenHandler);
}

This is true to a point that I wonder why anyone would call 
Object.freeze on script-created objects any longer... By design and 
for good reasons, proxies are a subset of script-created objects, 
so my previous sentence contained: I wonder why anyone would call 
Object.freeze on proxies...


There were concerns about Object.freeze/seal being costly on proxies

Re: A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-13 Thread David Bruant

On 13/02/2013 20:36, Tom Van Cutsem wrote:

Hi David,

I went a bit too far suggesting frozen objects could de-facto
disappear with proxies.
I'm still unclear on the need for specific
seal/freeze/isSealed/isFrozen traps


I think Allen and I reached consensus that we might do without those 
traps.

Excellent!

In addition, Allen was considering an alternative design where the 
state of an object (i.e. extensible, non-extensible, sealed or 
frozen) is represented explicitly as an internal property, so that 
Object.isFrozen and Object.isSealed must not derive the state of an 
object from its properties.

Interesting.
So what would happen when calling Object.isFrozen on a proxy? Would 
Object.isFrozen/isSealed/isExtensible reach out directly to the target? 
Or a unique "state" trap returning a string for all of them? ("state" is 
too generic a name, but you get the idea)


Regardless of the final decision on (full) notification proxies, maybe 
these operations (isSealed/isFrozen) could have a notification trap. The 
invariant is that the answer has to be the target's one (all the time), so 
the trap return value is irrelevant. Like the getPrototypeOf trap.
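
A rough sketch of what that could look like (target stands for the 
wrapped object; the trap name is hypothetical, and the engine would 
ignore its return value):

var p = new Proxy(target, {
    isFrozen: function(target){
        // notification only: observe the operation, log it, etc.
        console.log('isFrozen queried');
        // whatever happens here, Object.isFrozen(p) reports the
        // target's actual state, like the getPrototypeOf trap does
    }
});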


David


Re: Private symbols vs property attributes

2013-02-13 Thread David Bruant

On 13/02/2013 21:56, Mark S. Miller wrote:
On Wed, Feb 13, 2013 at 11:17 AM, Tom Van Cutsem tomvc...@gmail.com wrote:


2013/2/10 Mark Miller erig...@gmail.com

How does this interact with Proxies[1]? I know the answer
probably starts with whitelist, but let's spell it out in
this context, and test it against the 8 membrane transparency
cases previously discussed.


When thinking about symbol leaks, we must consider two cases:
a) leaking a symbol by having it show up in reflective query
methods on ordinary objects
b) leaking a symbol by inadvertently applying a symbol-keyed
operation on a proxy

Andreas' proposal of having a symbol's enumerability depend on a
property attribute makes a lot of sense and deals with problem a).
OTOH, it does not address leaks of type b). In order to prevent
those, proxies currently use the whitelist.

If we lose the a-priori distinction between unique and private
symbols and introduce only 1 type of symbol, then proxies must
treat all symbols like they currently treat private symbols.

The annoying thing about that is that well-known symbols like
@@create and @@iterator must be explicitly added to a proxy's
whitelist in order for the proxy to intercept them, but at least
it's doable.

W.r.t. membranes, AFAICT this proposal changes nothing re. the
interaction between private symbols and proxies. Membranes would
still need the unknownPrivateSymbol trap to stop unknown private
symbol access from piercing the membrane.


AFAICT, that trap wouldn't provide transparency for the membrane 
crossing cases. Is there anything about this new proposal that could 
improve on that?
The trap in itself, no, but it's possible to keep track of all exchanged 
symbols and add them to the whitelists as they are observed before being 
shared. It all relies on the fact that for 2 parties to exchange 
symbols, they have to share them through a public communication channel 
first (get trap, etc.).

At some point, I thought it had a runtime cost, but Tom proved me wrong [1].
It seems realistic to consider that all proxies of the same membrane can 
all share the same set instance as a whitelist making the space cost as 
small as it could be.


David

[1] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028405.html


A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-12 Thread David Bruant

Hi,

The main use case (correct me if I'm wrong) for freezing/sealing an 
object is sharing an object to untrusted parties while preserving the 
object integrity. There is also the tamper-proofing of objects everyone 
has access to (Object.prototype in the browser)


In a world with proxies, it's easy to build new objects with high 
integrity without Object.freeze: build your object, share only a wrapped 
version to untrusted parties, the handler takes care of the integrity.


function thrower(){
    throw new Error('nope');
}
var frozenHandler = {
    set: thrower,
    defineProperty: thrower,
    delete: thrower
};

function makeFrozen(o){
    return new Proxy(o, frozenHandler);
}
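
For illustration, a usage sketch (traps absent from the handler, like 
get, forward to the target):

var internal = {secret: 42};
var shared = makeFrozen(internal);
shared.secret;      // 42: get forwards to the target
shared.secret = 0;  // Error('nope'): the set trap rejects writes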

This is true to a point that I wonder why anyone would call 
Object.freeze on script-created objects any longer... By design and for 
good reasons, proxies are a subset of script-created objects, so my 
previous sentence contained: I wonder why anyone would call 
Object.freeze on proxies...


There were concerns about Object.freeze/seal being costly on proxies if 
defined as preventExtensions + enumerate + nbProps*defineProperty. 
Assuming Object.freeze becomes de-facto deprecated in favor of 
proxy-wrapping for high-integrity use cases, maybe that cost is not that 
big of a deal.


David


Re: Private symbols vs property attributes

2013-02-12 Thread David Bruant

On 12/02/2013 16:06, Andreas Rossberg wrote:

On 10 February 2013 19:40, Mark Miller erig...@gmail.com wrote:

How does this interact with Proxies[1]? I know the answer probably starts
with whitelist, but let's spell it out in this context, and test it
against the 8 membrane transparency cases previously discussed. If there are
good answers for all of these, and if we can reuse enumerable: for this
purpose as Brendan suggests, then I'm guardedly positive. I do not want to
introduce a new attribute.

I must have missed the discussion of the 8 membrane transparency
cases. Do you have a pointer?
I think it is 
https://mail.mozilla.org/pipermail/es-discuss/2013-January/028481.html


David


Re: thoughts on ES6+ direction + modules

2013-02-11 Thread David Bruant

On 11/02/2013 00:53, Andrea Giammarchi wrote:
We have transpilers for everything else, we need few better things 
today and FirefoxOS knows it, as example ... I'd love to see 
discussions about all Mozilla proposals for FirefoxOS and not always 
some tedious syntax for classes discussion, you know what I mean.

I actually don't know what you mean :-s
Unless I'm mistaken, extensions for Firefox OS are more hardware related 
APIs (vibration, radio, battery, connectivity, alarm, proximity...) than 
anything else. There are a couple of exceptions like WebActivities, but 
I don't think es-discuss is the right place to talk about any of that.


Other groups at the W3C talk about FirefoxOS additions, like 
public-device-apis and public-sysapps.


My understanding is that this mailing-list is about discussions on 
evolving the language, so that'll be tedious syntax discussions (it's 
tedious largely because of legacy reasons, not because people love 
talking about syntax, I think) and new low-level constructs (WeakMap, 
proxies, symbols...).


Which FirefoxOS would you want to talk about?

David


Re: thoughts on ES6+ direction + modules

2013-02-11 Thread David Bruant

On 11/02/2013 00:53, Andrea Giammarchi wrote:
involve as many developers as possible, rather than provide /already 
decided internal decisions based in already decided internal 
polls/ nobody ever heard about out there (public polls or it didn't 
happen)

hmm... I had skipped that part initially.
There are some accusations here and, as a JS dev and non-TC39 member, I'd 
like to say that I disagree strongly.


Here are a handful of public things related to TC39:
1) es-discuss
2) meeting notes with extra care and formatting since recently 
https://github.com/rwldrn/tc39-notes

3) http://wiki.ecmascript.org where drafts and accepted ideas are documented
4) bugs.ecmascript.org
5) spec drafts [1] are released on a monthly-basis

I recently questioned a feature [2], based on this public material. 
Public discussion happened. I'm balanced on the de-facto conclusion, but 
the least we can agree on is that a public discussion happened.

I'm willing to agree on a lot of things like:
* the different communication channels create confusion
* the wiki isn't always up-to-date (Rick did some good cleaning job 
recently, though)
* some discussions on es-discuss aren't documented in a condensed format 
and re-happen in some cases
* maybe on occasions Allen is too quick in adding things to the spec 
drafts (WeakMap.prototype.clear case), etc.
I personally put all these issues on the fact that TC39 is a group of 
human beings. They make mistakes like any other group of human beings. 
They haven't fully solved the efficient communication problem, but no 
one has. At least, these errors are public. They may make a barrier to 
participation higher than what we'd wish, but I wouldn't think it's on 
purpose and you can propose ideas to solve this problem. I have thought 
about it several times and haven't found a satisfactory solution yet.


Accusations of internal decisions based on internal polls may be a step too 
far. Please be more specific in your accusations so we can discuss 
things as I did with WeakMap.prototype.clear. The blurry finger-pointing 
game isn't moving anything forward.



On listening to JS devs:
1) over the last couple of years, (at least) Dave Herman and Brendan 
Eich have been dev-conf-crawling with ES6/future of JavaScript talks, 
asking for feedback and involvement from the JS devs community. They 
could have chosen to talk about other things or not talk at all.
2) Rick Waldron and Yehuda Katz who could be easily labeled as coming 
from the JS dev community have joined TC39.


What else do you want? "Involve many devs"? Maybe devs should get 
involved. I felt concerned about the future of ECMAScript, so I stepped up.
I find it particularly ironic that some in the Node.js community are 
bitching about what happens for modules after saying [3]: "We have these 
standards body [ECMA is cited] and Node made a very very conscious 
effort to ignore them and have pretty much nothing to do with them."
It feels to me that the Node community is discovering that they are 
a part of the JavaScript ecosystem, that ECMAScript and TC39 are part of 
this ecosystem too and that they should feel concerned about what's 
happening to ECMAScript. Hopefully, they'll discover soon enough that they can 
send feedback based on their experience to affect TC39 decisions.
I feel dev involvement boils down to a very simple cost/benefit 
analysis. Either you feel concerned about the future of JavaScript 
enough to get involved in discussions that affect your future. Or you're 
too busy making things happen [4] and that's cool, but you've chosen 
your priority and that is not the future of JavaScript.


David

[1] http://wiki.ecmascript.org/doku.php?id=harmony:specification_drafts
[2] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028351.html
[3] 
http://www.youtube.com/watch?feature=player_detailpagev=GaqxIMLLOu8#t=1094s

[4] https://twitter.com/substack/status/300085464835174401


Re: Check out Dart's iterators

2013-02-10 Thread David Bruant

On 10/02/2013 13:21, Alex Russell wrote:


FWIW, there continue to be strong misgivings about the pythonesque 
design we have now, but Mozilla insists on the back of their shipping 
implementation. Many feel that exceptions for control-flow are a 
misdesign, myself included


I agree and also think return-true/false protocols aren't any better. In 
an ideal world,

<idealworld>
an extensible way to end a frame would be better for this kind of 
function-based protocols.


function(){
    if(last())
        return next();
    else
        throw StopIteration;
}

// would become

function(){
    if(last())
        return next();
    else
        endframe as StopIteration
}

Return and throw would be mere sugar for endframe as return(value) and 
endframe untilcaught as throw(value). untilcaught would indicate that 
this termination value propagates until being try-caught (though in my 
ideal world, there would be no throw, because I find it too agressive)
What I'm describing here is nothing more than a generic mechanism to 
create new completion value types. I actually find fascinating that the 
SpiderMonkey debugger API completion value documentation [1] has a 
specific note to explain how to recognize the end of an iterator frame.


In this ideal world, the iterator consumer story would be as follow:
// ES6 snippet:
try{
    var value = it.next();
    // code to manipulate the value
}
catch(e){
    if(e instanceof StopIteration){
        // code to run when out of elements
    }
}

// would become:
var complValue = completion it.next()
if(complValue.type === 'return'){
    // code playing with complValue.return;
}
if(complValue.type === 'StopIteration'){
    // code to run when out of elements
}
// or something that looks more legit than the try/catch thing

The proposed "throw ForwardToTarget" would be nothing less than 
"endframe as ForwardToTarget" in this world.


In this ideal world, function protocols are based not on *what* a 
function released (return/throw value), but rather on *how* the function 
ended.

</idealworld>

But we do not live in the "endframe as"+completion world. "throw 
StopIteration" is probably as close as we can get in JavaScript given 
the 3 ways to complete a frame that we have (return/throw/yield). If 
anything, it's very explicit about what it does (stop iteration). More 
than a return true/false protocol.


Maybe Dart could consider something like "endframe as"+completion 
though...


David

[1] 
https://developer.mozilla.org/en-US/docs/SpiderMonkey/JS_Debugger_API_Reference/Completion_values



Re: Check out Dart's iterators

2013-02-10 Thread David Bruant

On 10/02/2013 16:21, David Bruant wrote:

On 10/02/2013 13:21, Alex Russell wrote:


FWIW, there continue to be strong misgivings about the pythonesque 
design we have now, but Mozilla insists on the back of their shipping 
implementation.
I have made a mistake in keeping that part of the quote in my reply. I 
actually disagree with this statement.


Many feel that exceptions for control-flow are a misdesign, myself 
included

That's the only part I disagree with and my answer applied to.

I wrote:
But we do not live in the "endframe as"+completion world. "throw 
StopIteration" is probably as close as we can get in JavaScript given 
the 3 ways to complete a frame that we have (return/throw/yield). If 
anything, it's very explicit about what it does (stop iteration). 
More than a return true/false protocol. 
As I said at the end of my reply, throw StopIteration is probably the 
best thing that can be designed given the backward-compat constraints 
that JavaScript has, so I agree with Mozilla's implementation and 
bringing its design to ES6.


I apologize for the confusion.

David


Re: Check out Dart's iterators

2013-02-10 Thread David Bruant

On 10/02/2013 16:50, David Bruant wrote:

On 10/02/2013 16:21, David Bruant wrote:
Many feel that exceptions for control-flow are a misdesign, myself 
included

That's the only part I disagree with and my answer applied to.

s/disagree/agree...


Re: Private symbols vs property attributes

2013-02-10 Thread David Bruant

On 10/02/2013 08:07, Brendan Eich wrote:

Allen Wirfs-Brock wrote:
Note that the enumerable attribute really only affects for-in 
enumeration (and Object.keys), neither of which enumerates symbols 
anyway. That means that the enumerable attribute really has no 
current meaning for symbol keyed properties.  That means we could 
probably reinterpret the enumerable attribute as a private 
attribute for such symbol keyed properties.


Groovy.

But the private-as-attribute idea still seems to require an access 
control check, which makes it less secure from an OCap perspective and 
experience, compared to symbols as capabilities.

I'm not sure I understand your concern.
Under Andreas' proposal, a symbol would remain an unforgeable token. The 
only thing that changes is how the symbol is shared. In the proposal 
being discussed, setting private:false in the property descriptor would 
be a way to share a symbol. That's less direct than handing access to 
the symbol itself, but it's a very explicit way anyway, so I don't see a 
problem from an ocap perspective.
That said, to force the author to be explicit about sharing symbols 
indirectly through reflection, private:true should probably be the 
default when doing obj[symb] = 34.
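
A small illustration of that suggested default (hypothetical semantics 
of the proposal under discussion):

var sym = new Symbol();
var obj = {};
obj[sym] = 34; // implicitly defined with private: true
Object.getOwnPropertyNames(obj); // []: sym is not leaked through reflection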


David


Re: Private symbols vs property attributes

2013-02-10 Thread David Bruant

On 10/02/2013 17:16, Mark S. Miller wrote:
I do not understand what is being proposed. When I try to imagine a 
proposal starting from what has been said, I have not been able to 
imagine something that works. But that's not a criticism. What is this 
alternate privacy idea?

My understanding is:
* there is only one kind of symbol
* whether the symbol is reflected by Object.getOwnPropertyNames and the 
likes is controlled by a 'private' attribute in property descriptors.


// unique constructor, no boolean since there is only one kind of symbol
var s = new Symbol();
var o = {}, o2 = {};

Object.defineProperty(o, s, {value: 12, writable: true, private: true});
assert(Object.getOwnPropertyNames(o).length === 0);
assert(o[s] === 12);
o[s] = 31;
assert(o[s] === 31);

Object.defineProperty(o2, s, {value: 7, writable: true, private: false});
assert(Object.getOwnPropertyNames(o2)[0] === s);
assert(o2[s] === 7);
o2[s] = 13;
assert(o2[s] === 13);

Pending question:
var o3 = {};
o3[s] = 62;
Object.getOwnPropertyDescriptor(o3, s).private // true or false?

Since private:false implies symbol sharing through 
Object.getOwnPropertyDescriptor, I think private:true should be favored 
to force people to be explicit (see my reply to Brendan)


The main difference with the current proposal is that privacy isn't an 
inherent characteristic of the symbol, but related to how it's been 
configured on the different objects it's been used on.


Was the above one of the things you imagined? If yes, why doesn't it work?

David


Re: Check out Dart's iterators

2013-02-10 Thread David Bruant
I have continued my wanderings on that topic elsewhere. Sharing as food 
for thought:


On 10/02/2013 16:21, David Bruant wrote:

<idealworld>
I initially thought that yield could be the sugar of endframe as 
yield(value), but yield and return/throw are different. In the former 
case, the frame can (and likely will) be re-entered which is not the 
case for the latter. This begs for 2 different keywords. Let's call them 
endframe and yield. yield could come in 2 forms:

// generic
yield as return(value)
yield as throw(value) // which is impossible today?

// sugar
yield value
// which desugars naturally to
yield as return(value)

To re-enter a frame, the following could be used:
reenter generator as return('yo') // equivalent of current generator.send('yo')
reenter generator as throw('yo')  // equivalent of current generator.throw('yo')


This raises an error if generator is not re-entrable (that is if it 
didn't end with yield or one of its sugar).


What is lost is the ability to pass around the send/throw/close/next 
functions. I would consider this a win. From what I've seen of 
generators, there is no loss. At least, task.js doesn't seem to pass 
these functions around.


Since every generator additional methods would be reimplemented with 
syntax, I think that having yield/reenter keywords (and additional sugar 
for usability), generators wouldn't need to be their own new function type.

But that's in an ideal world, of course.

</idealworld>


David


Re: Check out Dart's iterators

2013-02-10 Thread David Bruant

On 10/02/2013 20:55, Oliver Hunt wrote:

Just a couple of questions on this wonderous topic:

* how is 'e instanceof StopIteration' intended to work across multiple global 
objects?
StopIteration has a special StopIteration [[Brand]] [1], so the 
cross-global story shouldn't be a problem for the for-of loop.
Exposing the brand can solve the problem for manual use of iterators. 
(you'd check if the object has a particular brand instead of e 
instanceof StopIteration).


StopIteration could also be a deeply frozen constructor with same 
identity across globals.
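
To make the hazard concrete, a minimal sketch with a naive identity test 
(frameGlobal stands for another global object, e.g. an iframe's; 
makeIterator is a hypothetical function of that global):

var it = frameGlobal.makeIterator();
try {
    it.next(); // exhausted iterator from the other global
} catch (e) {
    e === StopIteration;             // false: e is the other global's singleton
    e === frameGlobal.StopIteration; // true
    // a [[Brand]]-based check answers true in both globals
}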



* how firmly are we wedded to this? I can't imagine there is too much code that 
currently depends on manually catching StopIteration given ES6 is not finalized 
yet, and iterators aren't widely available.

I do dislike the exception based termination, I _think_ i'd prefer next() and 
hasNext() style iteration over exceptions, especially given that for the most 
part these are hidden by clean syntax.
The "for the most part these are hidden by clean syntax" argument 
applies to throwing StopIteration too, no?



My personal concern with all of these is how they deal with nested iterators.
I don't see the concern. Can you provide a use case/code sample where 
nested iterators would be a problem?


I have to note that there is a minor security hazard in code using 
iterators naively:

import process from "m";

var a = [1, 2, 3, 4, 5];
var next = 0;
var it = {
    next: function(){
        if(next < a.length){
            // If the call to process throws StopIteration because it's
            // malicious/buggy, so does this code and that's largely unexpected.
            return process(a[next++]);
        }
        else{
            throw StopIteration;
        }
    }
};

You can always protect yourself by wrapping the call to process with a 
try/catch block.
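
A sketch of such a guard around the call in the snippet above:

// inside it.next(), guard the call to the partially trusted process function
try{
    return process(a[next++]);
}catch(e){
    if(e === StopIteration){
        throw new Error('process misbehaved'); // don't silently end the consumer's loop
    }
    throw e; // other errors propagate unchanged
}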
I'm still on the side of preferring "throw StopIteration" for its better 
readability compared to "return false". Dart has "implements 
Iterator<T>" to help, but JavaScript doesn't.


David

[1] http://wiki.ecmascript.org/doku.php?id=harmony:iterators


Re: Jan 29 TC39 Meeting Notes

2013-02-09 Thread David Bruant

On 09/02/2013 00:39, Claude Pache wrote:

Since BC is not an issue, let's pick the semantics that is most conformant to the existing 
ECMAScript object model, and let's not invent a "weird: true" property 
descriptor just because we think that __proto__ deserves one.
The goal is to standardize the least weird thing possible. Indeed, 
backward-compat doesn't care.
*The* goal if any is to standardize the minimum set of properties that 
the web relies on (and Microsoft can implement to support existing 
mobile websites (ab)using __proto__).

In my opinion, a couple of properties should go along:
* it should be compulsory that __proto__ be deletable
* it would be preferable that there be no usable extractable setter 
(because it increases attack opportunities).
* it would be preferable that __proto__ doesn't work on non-extensible 
objects.


Beyond that, any detail, as long as it's localized to Object.prototype, 
is unimportant. Although __proto__ reaches the spec, that doesn't make it 
a feature people should be encouraged to use. In my opinion, the only 
thing devs should know about it is that it's a de-facto standard, in the 
spec because of economic constraints, and that the only thing they 
should do with it is delete Object.prototype.__proto__ (which is why 
anything beyond "__proto__ must be deletable" is at most a preference, 
in my opinion again).


Given that the good practice is to delete __proto__, both conforming to 
what exists in the object model and aiming at the least weird thing 
possible are probably over-engineering; magic: true is as bad as any 
other idea.

Coin tossing is as good as any other means to decide the details.

David


Re: Jan 29 TC39 Meeting Notes

2013-02-08 Thread David Bruant

On 07/02/2013 18:42, Andreas Rossberg wrote:

On 7 February 2013 18:36, David Bruant bruan...@gmail.com wrote:

I hardly understand the benefit of an unconditionally-throwing setter over a
__proto__ as data property, but I'm fine with either.

Well, it _is_ a setter, and even one that modifies its receiver, not
its holder. What would be the benefit of pretending it's not?
It _is_ an abomination (arguably, it even __is__ an abomination). Any 
resemblance to a real ECMAScript construct, living or dead (?), is purely 
coincidental.


From the notes, a quote from Allen is "involves magic". I don't think I 
will surprise anyone if I say that whatever is decided for __proto__, 
there will be magic involved.


An idea that I don't think has been suggested is to stop pretending 
__proto__ is something else than magic:

$ Object.getOwnPropertyDescriptor(Object.prototype, '__proto__');
{
magic: true,
enumerable: false,
configurable: true
}

Quite exotic but very clear. At will, replace "magic" with 
"abomination", "de facto standard", "wildcard", "don't use __proto__" or 
"Why did you call Object.getOwnPropertyDescriptor on __proto__ anyway?". 
Other better suggestions are welcome, obviously.
Admittedly, the last idea may be a bit long, but it's a string, so 
it can be a property name. I wouldn't rule it out too quickly ;-)


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Jan 29 TC39 Meeting Notes

2013-02-08 Thread David Bruant

Le 08/02/2013 19:35, Allen Wirfs-Brock a écrit :

On Feb 8, 2013, at 10:15 AM, Claude Pache wrote:
The magic is not in the form of the '__proto__' property of the 
Object.prototype object, but in the action that its setter performs.
You're assuming it's a setter already, but that's not a given. Even 
Firefox's recent accessor has some weirdnesses [1], making it 
not-such-an-accessor.



Precisely, as implemented in the latest versions of Safari and Firefox (I 
haven't tested other browsers)
In Chrome and Node, Object.getOwnPropertyDescriptor(Object.prototype, 
'__proto__') is undefined. From that (and the fact that Firefox's move to 
a setter is recent), we can conclude that the backward-compat story is 
"no one cares about this detail".



The magic that was being proposed at the meeting was that
  Object.getOwnPropertyDescriptor(Object.prototype, "__proto__")
would return an accessor property descriptor whose set property (and 
get??) was a function like:
  function () {throw new TypeError()}

This violates the mundane inherited accessor property equivalence for 
situations like:
  obj.__proto__ = foo;
and
  Object.getOwnPropertyDescriptor(Object.prototype, "__proto__").set.call(obj, foo);


But such a relationship between [[Get]]  or [[Set]] and [[GetOwnProperty]] has never been 
identified as an essential invariant and it is easy to create a proxy that does not have 
that behavior.  So what was proposed is only magic from the perspective of 
ordinary objects.
I'm somewhat surprised any form of consistency with existing 
properties/invariants is discussed at all.
__proto__ is in ES6 as a de-facto standard. What is needed is for 
obj.__proto__ = obj2 to work. To avoid problems, it should be possible 
to delete __proto__, and it'd be preferable to avoid having an 
extractable usable setter (the goal is to de-facto standardize, lipstick 
isn't necessary). *Any* solution within these constraints is acceptable. 
Accessor? Data? magic:true property? That's all the same. 
Backward-compat doesn't care.
I understand from the notes that the topic had some... emotion to it. 
Toss a 3-sided coin at next TC39 meeting?


David

[1] https://twitter.com/olov/status/298395329945030657
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Jan 29 TC39 Meeting Notes

2013-02-08 Thread David Bruant

Le 08/02/2013 23:07, David Bruant a écrit :
*Any* solution within these constraints is acceptable. Accessor? Data? 
magic:true property? That's all the same. Backward-compat doesn't care.
I forgot to say that in my opinion, any JS dev in his/her right mind 
would start any new script with:

"use strict";
delete Object.prototype.__proto__;
Object.freeze(Object.prototype);

Making the details of __proto__ not-so-important. Legitimate __proto__ 
use cases are covered by extends in the class syntax. No need to keep 
that around.


Very much like any web dev in their right mind starts their CSS with
*{
box-sizing: border-box;
}

I understand from the notes that the topic had some... emotion to it. 
Toss a 3-sided coin at next TC39 meeting?

Please record the video if you ever do that :-)

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Private symbols vs property attributes

2013-02-07 Thread David Bruant

Le 07/02/2013 12:58, Andreas Rossberg a écrit :

We intend to have both unique and private symbols. The only
difference between the two is that the latter are filtered from
certain reflective operations.

I have come to think that this distinction is backwards. It is
attributing something to symbols that actually is an attribute of
properties. Symbols are just symbols.
This would force proxies to have the third whitelist argument regardless 
of what's decided on the proxy_symbol_decoupled strawman 
http://wiki.ecmascript.org/doku.php?id=strawman:proxy_symbol_decoupled
This is because some symbols (@@iterate, @@create, libraries extending 
built-ins with symbols-as-collision-free-property-names, etc.) need to 
pass through proxies transparently, while what are currently private 
symbols shouldn't pass by default.


I don't have an opinion yet on whether it's a good or bad thing, but I 
just wanted to point it out.



We should not piggyback them
with something that is not actually related to their own semantics as
such, but only their forwarding in specific client contexts.

Let's put the distinction where it belongs. There is no systematic
difference between privateness and non-enumerability, so they should
be handled analogously. I hence propose that we add a new attribute to
property descriptors, say, 'private'. Any property with this attribute
set to true is filtered by the relevant reflective operations. That
is, it is simply a stronger form of the non-enumerable attribute. (For
consistency a logically inverted attribute like 'reflectable' might be
preferable, but that's secondary.)

The only drawback I see with this approach is that we have to pick a
default.
In particular, an assignment o[s] = v where s is a symbol
that does not exist yet on o can only have one meaning, either
consistently introducing a private property or a non-private one.
There are valid arguments for either choice, but I think making the
choice is doable.
Current string semantics beg for private:false (since 
strings-as-property-names are always reflected).
But unique symbols, used as collision-free extensions of built-ins, could 
take the hit. I guess overriding @@iterate and @@create is also rare 
enough that using Object.defineProperty for that is also acceptable, 
leaving the PrivateSymbol semantics as the default. But I'm arguing 
for private:true by default here...
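
For concreteness, here is what the strawman could look like from the defineProperty side (purely a sketch of the proposal, nothing standardized; Symbol stands in for whichever symbol constructor ends up shipping):

var s = Symbol(); // a unique symbol
var o = {};
Object.defineProperty(o, s, {
    value: 42,
    enumerable: false,
    configurable: true,
    private: true // the proposed attribute: filtered from reflection
});
o[s]; // 42, plain access still works
// but reflective operations would not reveal the property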

Valid arguments for either choice indeed :-)

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Jan 29 TC39 Meeting Notes

2013-02-07 Thread David Bruant

Le 07/02/2013 17:25, Rick Waldron a écrit :

## __proto__.

YK: We just need compatibility

LH: We need to just suck it up and standardize

:-)


YK/BE: Discussion re: interop with current implementations.

BE: (Review of latest changes to __proto__ in Firefox)

EA: Matches Safari

BE: __proto__ is configurable (can be deleted), accessor (getter and 
setter throw), reflection does not leak.


AWB: Involves magic

BE: Yes, but minimal. (Confirms that latest __proto__ is out in wild, 
Firefox)


WH: Clarify poisoning?

BE: When you call it, it throws.

WH: So how does it know when not to throw? (If it always throws then 
it won't work.)


EA: Throws if called with object and setter coming from different realms

…Discussion re: MOP semantics with __proto__

BE: Proxy has to stratify the MOP.
Speaking of proxies, what should happen in the following case (setter 
and proxy from same realm):
var protoSetter = Object.getOwnPropertyDescriptor(Object.prototype, 
'__proto__').set

var p = new Proxy({}, handler);
protoSetter.call(p, {});
?

Ideas:
1) add a setPrototypeOf trap (sketched below)
2) throw because it's a proxy (which wouldn't be entirely absurd since 
extracting the setter shouldn't be encouraged).
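
For illustration, idea 1 could look like the following sketch (using a setPrototypeOf trap name and the Reflect API, neither of which was settled at the time):

var p = new Proxy({}, {
    setPrototypeOf(target, proto){
        // decide whether to allow the change, then forward it
        return Reflect.setPrototypeOf(target, proto);
    }
});
protoSetter.call(p, {}); // would trigger the trap instead of throwing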


This problem is nonexistent with a data property. Is an extractable 
setter *required* for web-compatibility? I've seen lots of uses of 
__proto__ as a pseudo-property, but no one extracting the setter yet.


AWB: Another issue… Objects that are non-extensible, can you change 
__proto__? Specifically, now that we're talking about being able to 
change __proto__, what type of objects can be changed?


BE: Wait for Mark?

YK?: Changing __proto__ is a write, not adding a property, so it 
should not be affected by extensibility.


AWB: Agree
How can one defend oneself against abusive __proto__ modification? 
With __proto__ becoming standard, delete Object.prototype.__proto__ is 
hardly a reliable option, because more code will rely on its existence. 
If that's not the [[Extensible]] boolean, another boolean has to be added.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Jan 29 TC39 Meeting Notes

2013-02-07 Thread David Bruant

Le 07/02/2013 18:22, Andreas Rossberg a écrit :

On 7 February 2013 18:09, David Bruant bruan...@gmail.com wrote:

Speaking of proxies, what should happen in the following case (setter and
proxy from same realm):
var protoSetter = Object.getOwnPropertyDescriptor(Object.prototype,
'__proto__').set
var p = new Proxy({}, handler);
protoSetter.call(p, {});
?

The property descriptor for Object.prototype.__proto__ will contain a
poisoned setter that always throws.

So what does the following mean:
"EA: Throws if called with object and setter coming from different realms"?

I hardly understand the benefit of an unconditionally-throwing setter 
over a __proto__ as data property, but I'm fine with either.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Action proxies

2013-02-05 Thread David Bruant

Le 05/02/2013 12:20, Tom Van Cutsem a écrit :

2013/2/4 David Bruant bruan...@gmail.com mailto:bruan...@gmail.com

Le 04/02/2013 22:41, David Bruant a écrit :

Le 04/02/2013 19:57, Tom Van Cutsem a écrit :

The post-trap could be cached and reused, but only if the
post-processing is independent of the specific arguments
passed to the intercepted operation.

Is there any harm in passing the trap arguments to the
post-trap function additionally to the result?

I've played with post-traps a bit. A place I would naturally store
the post-trap to cache it is the handler.

Assuming trap arguments are passed to the post-trap, another idea
is to have pre and post traps. It duplicates the number of
elements in a handler, but not the size of the code (or marginally
assuming pre/post traps are longer than the boilerplate). Having a
post-trap would still be an opt-in, but the protocol to get it
would be callable handler.[[Get]] instead of the current
callable pretrap-return. One allocation per handler would become
the natural default (while the current natural default is a
function literal as return value).


I guess this could work. Borrowing naming conventions from Cocoa, you 
could have an API along the lines of:


willGetOwnPropertyDescriptor(target, ...args) // pre-notification
didGetOwnPropertyDescriptor(target, result, ...args) // post-notification
etc.

bikeshed
In most cases, post-traps aren't necessary. It may be a good idea to 
reflect this asymmetry in the handler API:

getOwnPropertyDescriptor(target, ...args) // pre-notification
didGetOwnPropertyDescriptor(target, result, ...args) // post-notification

Another idea:
var handler = {
getOwnPropertyDescriptor: {
pre(target, ...args){
// ...
},
post(target, ...args){
// ...
}
},
get(target, ...args){
// ...
}
}

The trap (like getOwnPropertyDescriptor here) can expose 2 parts, 
pre/post (which could as well be will/did). If the trap is callable 
(like get here), it's only the pre-trap.


In an earlier message, I hadn't answered one of your points:
"I think the on-prefix is actually pretty important. It signals to 
the proxy writer that the trap is a callback whose return value will 
be ignored."
That's a distinction we're aware of as people who've followed the 
evolution of the API, but people discovering proxies will have to read 
the doc anyway to understand what they can do and how it works. I'm not 
sure the "on" (or any other) prefix will really help that much.

/bikeshed

That just was my bikeshed-ish opinion. I won't fight if there is 
disagreement.


I like the current API better because it allows for a cleaner pairing 
of pre and post-traps, including the ability to share private 
intermediate state through closure capture.
I have to admit, I'm a bit sad to lose that too. But that's the price 
to pay to get rid of invariant checks, I think. It remains possible for 
pre/post traps to share info through the handler.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Action proxies

2013-02-05 Thread David Bruant

Le 04/02/2013 23:11, Brendan Eich a écrit :

Mark S. Miller wrote:
In any case, you may be right that this is a fatal flaw. You're 
making a performance-based argument, and it is certainly premature 
one way or the other to predict how these relative costs will balance 
out. Let's wait till we have more data.


We are not going to defer proxies from ES6 in order to implement 
notification proxies and find they cost too much. We know enough about 
allocation dwarfing other costs (direct + GC indirect), I bet. Happy 
to find out more from experiments but since we are just saying what 
if? I will talk back -- and make a wager on the side if you like.


IOW, I argue that while it's ok to speculate, doing so in one 
direction (in favor of notification proxies) does not mean data must 
gathered to prove a cost is too much in order to not defer proxies 
from ES6. For some applications, any cost is too much, so no data is 
needed.
About the performance argument, I think a performance argument can only 
be made in comparison with what we have, not in absolute terms.
What's at stake with notification proxies is getting rid of invariant 
checks [1]. For some applications, the cost of invariant checks is too 
much too.
The right question for performance isn't "do notification proxies cost?" 
but "do they cost more than direct proxies?", for the main use cases, on 
average, and in the worst case.


Anyway, there are ideas on the table to get rid of the per-invocation 
allocations, so let's explore them. If they fail, the time will come to 
compare post-trap allocations and invariant checks.


David

[1] 
http://wiki.ecmascript.org/doku.php?id=harmony:direct_proxies#invariant_enforcement

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On notification proxies

2013-02-05 Thread David Bruant

Le 05/02/2013 13:52, Sam Tobin-Hochstadt a écrit :

On Tue, Feb 5, 2013 at 7:03 AM, David Bruantbruan...@gmail.com  wrote:

I like the current API better because it allows for a cleaner pairing of pre
and post-traps, including the ability to share private intermediate state
through closure capture.

I have to admit, I'm a bit sad to loose that too. But that's the price to
pay to get rid of invariant checks I think. It remains possible for pre/post
trap to share info through the handler.

I've been holding off on this because I know that Mark is still
working on notification proxies, but I think this short discussion
encapsulates exactly why notification proxies are a bad idea.  The big
win of notification proxies is that it reduces the cost and complexity
of invariant checks [1]. However, I believe this cost is small, and
more importantly, not that relevant.

Another justification from my experience is that a lot of traps end with:
return Reflect.trap(...args)
So notification proxies also make implicit what is otherwise 
boilerplate. That's an important point (more below).



As evidence that this cost is small, in our work on chaperones in
Racket [2], a system that's very similar to proxies [3], we measured
the overhead of the invariant checking required.  In real programs,
even when the proxy overhead was more than *half* the total runtime,
the invariant checks never went above 1% of runtime.  Further, the
design of chaperones, in combination with much greater use of
immutable data in Racket, means that many *more* invariant checks were
performed than would be in a comparable JS system, and the Racket
invariant checks would be significantly *more* expensive.

Interesting stats. Thanks for sharing.


Even more importantly, optimizing the invariant checks is focusing on
the wrong use case.  Regardless of our preferences, very little JS
data is immutable, or requires any invariant checks at all.

At the very least, the engine has to test the following after most traps:
target.[[GetOwnPropertyDescriptor]](name).[[Get]]('configurable') === false
If the property is non-configurable, more invariant checks are needed. 
Otherwise, the code goes on, but it was necessary to run this test before 
knowing it was possible to go on. I'll call this test the "pre-invariant 
check".

So even "no invariant checks" means at least one pre-invariant check 
per invocation. Since most traps end with return 
Reflect.trap(...args), this test feels even more stupid.

Here is what happens in the getOwnPropertyDescriptor trap case:
1) call the trap. It most likely ends with return 
Reflect.getOwnPropertyDescriptor(...args)
2) the runtime does its own Reflect.getOwnPropertyDescriptor(...args) 
call and compares its result with the one returned from the trap.
3) it obviously notices both descriptors are compatible (duh! they are 
the same, because no code could modify the descriptor between the trap 
return and the pre-invariant check)

Most traps have an equivalent story. Only the first step changes because 
it's a different trap, but by design of the invariants, the "duh!" in 3) 
remains.
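
Concretely, a typical forwarding trap looks like this; the trailing comment spells out what the engine then does behind the scenes:

var handler = {
    getOwnPropertyDescriptor(target, name){
        // ...custom pre-processing...
        return Reflect.getOwnPropertyDescriptor(target, name);
    }
};
// After the trap returns, the engine re-reads the target's own
// descriptor and checks the two for compatibility, even though
// both came from the very same target an instant earlier.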


We might count on static analysis or such, but I've been told enough on 
the list not to rely too much on that. Opinions on that are welcome.



We spend a lot of time focusing on re-implementations of built-in ES and DOM
APIs, which often are non-configurable or non-writable, but this is
not the common case in user-written JS.  Whether it's building
free-standing exotic objects or wrapping existing ones, it's very
likely that this will continue to be the case with proxy-implemented
objects.  We should focus on that case.
I could not agree more. For me, getting rid of invariant checks mostly 
means getting rid of the above test. Since most of my objects don't need 
invariant checks (because their traps end with return Reflect.trap), I 
don't know why I should be paying for the above test every single time a 
trap exits. I already know what I want, I made it clear in my code, so 
why am I paying a tax at all, even 0.5%?



In these common cases, I believe that notification proxies are at a
significant disadvantage.  Notification proxies require that all
communication between the handler and the result of an operation
operates via mutation of the target.
This has several problems.
First, it's a tricky pattern that every proxy programmer has to learn,
increasing the burden for an already complex API.
I'm balanced on that point. When writing a set trap, the trap is likely 
to end with Reflect.set(target, name, value, receiver), so when I 
write handler code, I already naturally communicate with the target.
But I agree that there are cases where code returning a different value 
will have to set the value on the target. I'm not entirely happy with 
this, but I wonder if it's because I'm just used to direct proxies. In 
the notification-proxy way of thinking, traps are a notification 
mechanism; their return value doesn't matter, so maybe it's normal that 
communication with the outside 

Re: Action proxies

2013-02-04 Thread David Bruant

Le 04/02/2013 18:51, Brendan Eich a écrit :
If notification proxies require allocation per trap activation, that's 
a fatal flaw in my view.
I assume you mean allocation of trap return values and will discuss 
that; if you mean something else, please expand.


Before stating anything, let's compare notification proxies with direct 
proxies.


# No post-trap case:
// direct proxies
trap(...args){
// pretrap code
return Reflect.trap(...args)
}

//notif proxies
trap(...args){
// pretrap code
}

In both cases, the same return result needs to be allocated and passed to 
whoever the trap caller was.


# With post-trap case:
// direct proxies
trap(...args){
// pretrap code
var ret = Reflect.trap(...args)
// post-trap code
return ret;
}

// notif proxies
trap(...args){
    // pretrap code
    return () => {
        // post-trap code
    }
}

In the last snippet, the engine has to store the result of 
Reflect.trap(...args) internally, but that's not worse than storing it 
in a variable as is currently the case. I think that having 
this storage internal may even open the door to optimizations that would 
be harder, if not impossible, to achieve with current proxies.
Thinking about it more, since the post-trap can't modify the return 
value, it can be seen as a finally block. So the return-value-related 
allocation characteristics of a notification proxy with a post-trap are 
comparable to the allocation characteristics of:


function f(){
try{
return 'https://www.youtube.com/watch?v=08WeoqWilRQ'
}
finally{
console.log('finally')
}
}

David




/be

Mark S. Miller wrote:
On Sun, Feb 3, 2013 at 7:22 AM, David Bruant bruan...@gmail.com 
mailto:bruan...@gmail.com wrote:


[...]
This does indeed get rid of invariant checks while guaranteeing
the invariants anyway and apparently not losing expressiveness. Wow.


;)

Was this discussed during the January TC39 meeting? Do
notification proxies have a chance to replace direct proxies or is
it too late?
In the case it would be too late, could throw ForwardToTarget be
considered?


I mentioned at the January meeting that we'll be experimenting with 
these new notification proxies, to see if they cover all the 
motivating use cases adequately. I'm increasingly hopeful, but have 
nothing to report yet. If they do, then at the March meeting I will 
propose that we do not include direct proxies in ES6. Since it is too 
late to introduce as radical a change as notification proxies into 
ES6, I would propose that proxies as a whole get postponed till ES7.


We'll all be sad to see proxies wait. But given how much better 
notification proxies seem to be, if they work out, it would be a 
terrible shame to standardize the wrong proxies in ES6 just because 
they're ready and sorely needed. Of course, as with Object.observe, 
implementors are free to ship things ahead of formal standardization. 
And notification proxies are vastly simpler to implement correctly 
than direct proxies.


--
Cheers,
--MarkM
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Action proxies

2013-02-04 Thread David Bruant

Le 04/02/2013 19:57, Tom Van Cutsem a écrit :
The post-trap could be cached and reused, but only if the 
post-processing is independent of the specific arguments passed to the 
intercepted operation.
Is there any harm in passing the trap arguments to the post-trap 
function additionally to the result?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Action proxies

2013-02-04 Thread David Bruant

Le 04/02/2013 22:41, David Bruant a écrit :

Le 04/02/2013 19:57, Tom Van Cutsem a écrit :
The post-trap could be cached and reused, but only if the 
post-processing is independent of the specific arguments passed to 
the intercepted operation.
Is there any harm in passing the trap arguments to the post-trap 
function additionally to the result?
I've played with post-traps a bit. A place I would naturally store the 
post-trap to cache it is the handler.


Assuming trap arguments are passed to the post-trap, another idea is to 
have pre and post traps. It duplicates the number of elements in a 
handler, but not the size of the code (or marginally assuming pre/post 
traps are longer than the boilerplate). Having a post-trap would still 
be an opt-in, but the protocol to get it would be callable 
handler.[[Get]] instead of the current callable pretrap-return. One 
allocation per handler would become the natural default (while the 
current natural default is a function literal as return value).


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What is the status of Weak References?

2013-02-03 Thread David Bruant

Le 03/02/2013 06:21, Brandon Benvie a écrit :
Some people would say that garbage collection is the most important 
advancement in computer science in the last 20 
yearshttp://www.codinghorror.com/blog/2009/01/die-you-gravy-sucking-pig-dog.html
Don't get me wrong, I didn't say, nor did I mean to say, that garbage 
collectors as a tool aren't awesome. I'm very happy that most of the 
time I don't need to worry about releasing memory, and that in most of 
the remaining cases, null-ing out a single reference makes an entire 
subgraph collectable.


However, like any tool, I think it's crucially important to understand 
the limitations of it. From the article you posted:
Use your objects, and just walk away when you're done. The garbage 
collector will cruise by periodically, and when he sees stuff you're 
not using any more, he'll clean up behind you and deal with all that 
nasty pointer and memory allocation stuff on your behalf. It's totally 
automatic. 
"It's totally automatic." Here is someone who is apparently unaware of 
the limitations of a garbage collector.


Also from the article:

*I view explicit disposal as more of an optimization than anything else*
Nonsense. I'm sorry, but this is fueling the fantasy. Manual disposal is 
necessary in cases where the GC cannot make the decision. And the GC 
cannot make a decision because it is an algorithm, bound by decidability.


I feel there is a lot of misconception about what a GC can and cannot 
do. Here is an article I wrote about memory management [1]. Here is 
probably the most important part of it:

## Release when the memory is not needed anymore

Most memory management issues come at this phase. The hardest task 
here is to find when the allocated memory is not needed any longer. 
It often requires the developer to determine where in the program such 
a piece of memory is not needed anymore and to free it.


High-level language interpreters embed a piece of software called a 
garbage collector whose job is to track memory allocation and use, in 
order to find when a piece of allocated memory is not needed any 
longer, in which case it will automatically free it. This process is 
an approximation, since the general problem of knowing whether some 
piece of memory is needed is undecidable (can't be solved by an 
algorithm).
The most important part of this section is "approximation". Every GC 
algorithm in existence has to conservatively approximate an answer to 
the question "will this piece of memory be used again?". As an aside, I 
prefer that the GC is conservative and doesn't abusively free memory I 
would actually still need. In other words, memory leaks are the price to 
pay for the memory to be reliable.



## Reference-counting garbage collection

This is the most naive garbage collection algorithm. This algorithm 
reduces the definition of "an object is not needed anymore" to "an 
object has no other object referencing it". An object is considered 
garbage-collectable if there are zero references pointing at this object.

## Mark-and-sweep algorithm

This algorithm reduces the definition of "an object is not needed 
anymore" to "an object is unreachable". [then definition of reachable 
by explaining the roots and the traversal]
And in both cases, I explain the limitations. I couldn't find a simple 
enough example to put in the documentation for the limitations of 
mark-and-sweep.
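
For reference counting, by contrast, the classic limitation is a cycle. A minimal illustration:

function makeCycle(){
    var a = {};
    var b = {};
    a.ref = b; // a references b
    b.ref = a; // b references a
}
makeCycle();
// a and b are unreachable here, yet a pure reference-counting
// collector never frees them: each keeps one incoming reference.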

Let's try to make up a simple example:

function Storage(){
    var storage = [];
    return {
        push(e){ storage.push(e); },
        last(){ return storage[storage.length - 1]; }
    };
}

var s = new Storage();
s.push({});
s.push({});
s.last();

In this example, we know, as human beings understanding the semantics of 
JavaScript, that all but the last element of storage could be collected. 
Because of its limited definition of "unreachable", the mark-and-sweep 
algorithm thinks these elements should be kept in memory.
Some serious static analysis might figure out (as we do, as human beings 
understanding the semantics of JavaScript) that all but the last element 
of storage aren't needed anymore... This analysis needs to prove at some 
point that Array.prototype.push is a frozen property.
As a side note, as far as I know, all GC algorithms are runtime 
algorithms: they deal with runtime objects and references and don't 
exploit static analysis or language-specific information.


Let's see how the example would be with weakrefs:

function Storage(){
    var storage = [];
    return {
        push(e){ storage.push(makeWeakRef(e)); },
        last(){
            var last = storage[storage.length - 1];
            return last.get(); // oops!
        }
    };
}

var s = new Storage();
s.push({});
s.push({});
s.last();

Interestingly, holding the references weakly in storage couldn't be of 
help here: it may be possible that the last element in the array has 
been GC'ed and that the call to .last doesn't 

Re: What is the status of Weak References?

2013-02-03 Thread David Bruant

Le 03/02/2013 12:08, Kevin Gadd a écrit :

On Sun, Feb 3, 2013 at 2:58 AM, David Bruant bruan...@gmail.com wrote:

Let's see how the example would be with weakrefs:

 function Storage(){
     var storage = [];
     return {
         push(e){ storage.push(makeWeakRef(e)); },
         last(){
             var last = storage[storage.length - 1];
             return last.get(); // oops!
         }
     };
 }

 var s = new Storage();
 s.push({});
 s.push({});
 s.last();

What problem is this example supposed to be solving?
None, but that's beside the point. My point was to explain that in some 
cases a human being can see that some objects aren't going to be needed 
any longer while the GC algorithm cannot. The reason is that nowadays, 
state-of-the-art GCs are oblivious to code semantics.



The problem here is not weakrefs, the problem is that the problem is poorly 
specified
I didn't say the problem came from weakrefs, but that weakrefs don't 
help for this particular problem.



When discussing issues as complex as garbage collection, the examples
need to be at least vaguely real-world. Your example does not
demonstrate a limitation of GCs because there's nothing for the GC to
actually do.
It's really hard to show real code, because it often means showing 
several pages of code, and I don't want to force everyone reading to 
understand several pages of code just to make a point.
Although my example is a dummy, it supports the point that a GC is limited 
because it's oblivious to code semantics and limits the information it 
uses to objects and references (and roots for some objects in a 
mark-and-sweep). WeakRefs would just add a word to the GC vocabulary 
("weak reference"), but wouldn't change the obliviousness.


On the obliviousness, here is another example. Let's say jQuery is 
imported with RequireJS (otherwise, it's global and you can't do 
anything really). If you use only one jQuery function, the GC could 
in principle figure out that only one function is used and release the 
rest. It doesn't, because you need a reference to the main jQuery object, 
the rest is attached to it, and the GC has no understanding that you're 
using only one function of this library.
Weakrefs cannot help with this problem. Hopefully, that's real world 
enough even without the code ;-)
There will always be problems that weak references won't be able to 
solve. That's all I'm saying.


[answering to another point from yesterday]

I agree that there are scenarios where .dispose() or a similar
protocol may be come necessary; in those scenarios it would be great
if JS had some mechanism that could be used to simplify the proper use
of the protocol. Given that the need for Weak References in many
scenarios would be reduced because a partially (or wholly) automatic
implementation of that protocol can make it much easier to use the
protocol correctly.
The idea of a protocol is very interesting. I would love to see 
different *deterministic* protocols explored before bringing in weakrefs, 
if possible.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Action proxies

2013-02-03 Thread David Bruant

Le 03/02/2013 01:59, Mark S. Miller a écrit :
Hi David, have you seen 
https://github.com/tvcutsem/harmony-reflect/tree/master/notification ?
I remember seeing the announcement, but I must have forgotten about it. 
My bad :-s


AFAICT, this provides the same flexibility as action proxies with 
hardly any more mechanism and overhead than bare notification proxies. 
The key is that, if the pre trap returns a callable, the proxy calls 
that callable after the action as a post trap. No need to reify an 
action thunk, ever, though in exchange the pre trap must often 
allocate the post trap it returns.

Either allocate or keep the post-trap around for reuse, but yes.
For the post-traps of getOwnPropertyDescriptor and keys, I would pass an 
iterator as argument, because it is not certain the post-trap will 
actually look at the result, so there is no absolute need to re-allocate 
the array. Or maybe wrap the array in a (built-in) read-only proxy (one 
that throws on writing traps).
Speaking of iterators, if the enumerate pre-trap returns an iterator with 
an editable next method, can the post-trap modify the next method? 
Maybe there is some necessary wrapping in that case too.



Is there any remaining advantage of action proxies over this?
Trap names don't start with "on"? :-) I don't think the "on" is 
absolutely necessary, but that's more of a style issue. Otherwise, I 
don't think so. Unless I'm overlooking something, I think there is the 
following equivalence:


// action proxy:
trap: function(){
// pre-trap code
action();
// post-trap code
}

// notification proxy:
trap: function(){
    // pre-trap code
    return () => {
        // post-trap code
    }
}

The post-trap code is optional in the former, just as the return 
statement is in the latter.


This does indeed get rid of invariant checks while guaranteeing the 
invariants anyway and apparently not losing expressiveness. Wow.


Was this discussed during the January TC39 meeting? Do notification 
proxies have a chance to replace direct proxies or is it too late?
In case it is too late, could "throw ForwardToTarget" be 
considered?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What is the status of Weak References?

2013-02-02 Thread David Bruant

Le 02/02/2013 06:41, Nathan Wall a écrit :

David Bruant wrote:
  David Bruant wrote:
  Garbage collectors have evolved and cycles aren't an issue any 
longer, weak

  references or not.
 
  Kevin Gadd wrote:
  Cycles are absolutely an issue, specifically because JS applications
  can interact with systems that are not wholly managed by the garbage
  collector. The problem in this case is a cycle being broken *too
  early* because the application author has to manually break cycles. To
  present a couple simple examples:
 
  I have a top-level application object that manages lower-level 'mode'
  objects representing screens in the application. The screens, when
  constructed, attach event listeners to the application object. Because
  the application manages modes, it needs to have a list of all the
  active modes.
  * The event handler closures can accidentally (or intentionally)

 Last I heard, it's very difficult to accidentally capture a 
reference in

 a closure because modern engines check which objects are actually used
 (looking at variable names), so for an object to be captured in a
 closure, it has to be used. So intentionally.

We had a situation recently where we needed to monitor an element with 
`setInterval` to get information about when it was resized or moved. 
 As library authors we wanted to encapsulate this logic into the 
module so that it would just work.  We wanted someone to be able to 
call `var widget = new Widget();`, attach it to the document, and have 
it automatically size itself based on certain criteria. If a developer 
then moved its position in the document (using purely DOM means), we 
wanted it to resize itself automatically again. We didn't want to make 
a requirement to call a public `resize` method, nor did we want to 
impose `dispose` (it's an easy thing to forget to call and it doesn't 
feel like JavaScript).  Of course, strongly referencing the element in 
the `setInterval` keeps it alive in memory even after the developer 
using the library has long since discarded it.
Since we're discussing the addition of a new feature, let's first see 
how existing or soon-to-exist features can help us solve the 
same problem.


In an ES6 world, new Widget() can return a proxy, and you, as the widget 
library author, can track down any time the element is moved or resized 
(the handler will probably have to do some unwrapping, function binding, 
etc., but that's doable).

DOM mutation observers [1] can be of some help to track this down, I think.
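
A rough sketch of that idea, assuming element is the widget's root node (the actual sizing logic is elided):

var observer = new MutationObserver(function(mutations){
    // if one of the mutations moved `element` in the tree,
    // re-run the widget's sizing logic here
});
observer.observe(document, {childList: true, subtree: true});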

Hmm... For a couple of years I've had the intuition that events should 
be considered part of an object's interface and not some sort of 
external thing, and I think the justification is right here.
Doing setInterval polling as you do forces you to create a function 
unrelated to the object you want to observe and keep a reference in 
that function.
If you were able to attach an event listener to the object itself to be 
notified of just what you need, the observer function would die as soon 
as the object it's attached to died.


In your particular case, events at the object level would solve your 
problem, I think.



[answering separately]
nor did we want to impose `dispose` (it's an easy thing to forget to 
call and it doesn't feel like JavaScript)
I'd like to repeat something I wrote in another message: ...a very 
important point that most developers ignore or forget. GC is an 
undecidable problem, meaning that there will always be cases where a 
human being needs to figure out when in the object lifecycle it is not 
longer needed and either free it in languages where that's possible or 
make it collectable in languages with a GC. There will be such cases 
even in languages where there are weak references. 
And when such a case is found, what will be the solution? Adding a 
new, subtler language construct which exposes a bit more of the GC?


JavaScript has a history of being the language of the client side, where 
a web page lives for a couple of minutes; leaks were largely 
unnoticeable because navigating or closing a tab would make the content 
collectable (well... except in crappy versions of IE in which JS content 
could create browser-wide leaks -_-#).
As soon as we have long-running JavaScript, we have to start caring more 
about our memory usage; we have to question what we assumed/knew of 
JavaScript. The GC does maybe 80-100% of the job in well-written complex 
code, but we must never forget that the GC only computes an approximation 
of an undecidable problem.
In applications where memory matters a lot, maybe a protocol like 
.dispose will become necessary.



In this case, we managed to come up with a solution to refer to 
elements weakly through selectors, retrieving them out of the 
document only when they're attached (we have a single `setInterval` 
that always runs, but it allows objects to be GC'd).  However, this 
solution is far from fool-proof, lacks integrity (any element can 
mimic

Action proxies

2013-02-02 Thread David Bruant

Hi,

Action proxies were born as a fork of notification proxies [1]. Both 
were attempts to get rid of invariant checks, which have some cost. It's 
probably too late to bring such a change into the proxy design, but I have 
given more thought to it, so I'll share it, in the hope it'll fuel 
people's thoughts on proxies.
I had issues with potential dangers of action proxies, but they're 
isolated to the handler author and only in already complex cases. Since 
proxies are an expert feature, the additional complexity in already 
complex cases is probably acceptable.
Tom mentioned a per-trap-invocation cost [2]. Some ideas can make this 
cost disappear, or so small it becomes acceptable.


In my experience, a lot of traps end with the statement:
return Reflect.trap(...trapArgs)
It would be nice if this were made implicit. Notification proxies allow 
for this implicitness. It would be nice if action proxies did too.


# Proposal

The action is the equivalent of Reflect.trap(...trapArgs). It is 
optional to call it. There is one action function per trap (not per 
invocation, only per trap type).
When called, the action performs Reflect.trap(...trapArgs), stores 
the value in a slot and returns the value or throws.
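
In code, a handler for such a proxy might look like this (a sketch of the strawman only; the position of the extra action argument is illustrative):

var handler = {
    set(action, target, name, value, receiver){
        // pre-action code
        var result = action(); // performs Reflect.set(target, name, value, receiver)
        // post-action code; the trap's own return value is ignored
    }
};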


Answers to Mark's questions:
1) what happens if the trap does not call action?
=> Exactly the same thing as with notification proxies: an implicit
return Reflect.trap(...trapArgs)

2) what happens if the trap calls action more than once?
=> The best thing I've come up with is to make the function stateful 
(keep in mind that it's a theoretical model, I'll talk optimizations 
below) and have one slot per trap invocation. Calling action fills the 
slot with the return value; the end of the trap invocation empties the 
slot to use it as the termination value (return value or boolean). This 
slot semantics is necessary anyway for action proxies (to remember the 
values to return in case of nested proxies).

So calling twice just changes the slot value.
Calling the action outside of a related trap invocation timeframe throws 
(there is no slot to fill in).


3) do actions ever return anything, in case the trap wants to pay 
attention to it?

=> yes, the return value of Reflect.trap(...trapArgs)

4) are there any return values from the trap that the proxy would pay 
attention to?

=> No. The return value is ignored.

5) if the original operation performed by action throws, should the 
action still just return to the trap normally?

=> No, forward the thrown exception.

6) what happens if the trap throws rather than returning?
=> The error thrown is forwarded to the caller, regardless of whether 
action has been called.


# Stateful per-trap function and abusive authority
What if malicious code gets all action functions and calls them 
maliciously? In order to be malicious, the code would have to call the 
function in the middle of a trap invocation. The effect of that is 
Reflect.trap(...trapArgs) (it cannot even change the return value), 
which was planned to be done implicitly or explicitly anyway.
The attack case is when action had already been called, a modification 
was performed on the target, and action wasn't planned to be called after 
the modification (and the attacker does call it within the invocation 
timeframe). Arguably, this is so subtle, harmless and easy to protect 
against that stateful actions can't be considered abusive authority.


# Optimization opportunities

Since calling the action is optional, traps that don't call it won't 
even mention it, so the trap-invocation-slot semantics can be bypassed 
and the cost of this kind of action proxy is equivalent to that of 
notification proxies.


# Conclusion

This type of action proxy is sort of a fusion between notification 
proxies and the original action proxies. By design, they remove the need 
for invariant checks. Their cost is one function per trap and the 
slot-per-trap-invocation semantics, which will be ignored if the action 
isn't called explicitly. For the handler author, there is an additional 
action argument for each trap, which is a bit boilerplate-y, but you only 
need it if you call it.


I feel it could work. Too late?

David

[1] https://mail.mozilla.org/pipermail/es-discuss/2012-December/026774.html
[2] https://mail.mozilla.org/pipermail/es-discuss/2012-December/026779.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What is the status of Weak References?

2013-02-02 Thread David Bruant

Le 02/02/2013 15:32, Tom Van Cutsem a écrit :

2013/2/2 David Bruant bruan...@gmail.com mailto:bruan...@gmail.com

About weakrefs, I've read a little bit [2][3] and I'm puzzled by
one thing: the return value of get is a strong reference, so if a
misbehaving component keeps this strong reference around, having
passed a weak reference was pointless.


For use cases where you're passing a reference to some 
plug-in/component and want the referred-to object to be eventually 
collected, we have revocable proxies. Weak references aren't the right 
tool when you want to express the guarantee that the component can no 
longer hold onto the object.
Indeed, it makes weak references a tool only useful within a trust 
boundary (when you don't need to share the object reference with an 
untrusted 3rd party).
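
For reference, the revocable-proxy pattern looks like this (untrustedComponent is a made-up stand-in):

var target = {secret: 42};
var {proxy, revoke} = Proxy.revocable(target, {});
untrustedComponent.use(proxy); // hand out the proxy, never the target
// once the component must no longer reach the object:
revoke(); // any further operation on the proxy throws a TypeError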


Interestingly, revocable proxies require their creator to think about the 
lifecycle of the object, to the point where they know when the object 
shouldn't be used anymore by whoever they shared the proxy with. I feel 
this is the exact same reflection that is needed to understand when an 
object isn't needed anymore within a trust boundary... seriously 
questioning the need for weak references.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What is the status of Weak References?

2013-02-02 Thread David Bruant

Le 02/02/2013 20:02, Brendan Eich a écrit :

David Bruant wrote:
Interestingly, revocable proxies require their creator to think about 
the lifecycle of the object, to the point where they know when the 
object shouldn't be used anymore by whoever they shared the proxy 
with. I feel this is the exact same reflection that is needed to 
understand when an object isn't needed anymore within a trust 
boundary... seriously questioning the need for weak references.


Sorry, but this is naive.

It is, you don't need to apologize.

Real systems such as COM, XPCOM, Java, and C# support weak references 
for good reasons. One cannot do data binding transparently without 
either making a leak or requiring manual dispose (or polling hacks), 
precisely because the lifecycle of the model and view data are not 
known to one another, and should not be coupled.


See http://wiki.ecmascript.org/doku.php?id=strawman:weak_refs intro, 
on the observer and publish-subscribe patterns.
I guess manual dispose would make a lot of sense. A view knows its own 
lifecycle; it involves adding observers in a bunch of places. When the 
view lifecycle comes to an end, for whatever reason, it only makes sense 
that it removes the observers it added. My rule of thumb would be "clean 
up the mess you made".
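
A minimal sketch of that protocol (all names invented for illustration):

function View(model){
    var onChange = function(){ /* re-render */ };
    model.addListener(onChange);
    this.dispose = function(){
        // clean up the mess you made
        model.removeListener(onChange);
    };
}
// whoever ends the view's lifecycle calls view.dispose()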

Memory leaks are bugs. Like off-by-ones. People should just fix their bugs.
Garbage collectors encourage the fantasy that people can forget about 
memory. It is a fantasy. A convenient one, but a fantasy nonetheless. A 
fantasy like we can have a lifestyle that assumes oil is unlimited.

/naivety

acceptance
I guess it's just human nature, so weakrefs are pretty much unavoidable.

If a weakref to a function is passed to Object.observe, will it auto-get 
the function and unobserve automatically if the .get returns null?


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What is the status of Weak References?

2013-02-01 Thread David Bruant

Le 31/01/2013 22:48, Kevin Gadd a écrit :

I ask this because the lack of weak references (or any suitable
substitute mechanism) comes up regularly when dealing with the
challenge of porting native apps to JavaScript, and it leads people to
consider extremely elaborate workarounds just to build working
applications (like storing*all*  their data in a virtual heap backed
by typed arrays and running their own garbage collector against it).
If there is really a firm reason why this must be so, so be it, but
seeing so many people do an end-run around the JS garbage collector
only to implement their own*in JavaScript*  makes me wonder if perhaps
something is wrong. The presence of WeakMaps makes it clear to me that
solving this general class of problems is on the table.
I don't understand the connection between the lack of weak references 
and emulating a heap in a typed array.



Historically the lack of weak references has resulted in various
solutions in libraries like jQuery specifically designed to avoid
cycles being created between event listeners and DOM objects. Many of
these solutions are error-prone and require manual breaking of cycles.
Garbage collectors have evolved and cycles aren't an issue any longer, 
weak references or not.



But on the
other hand I've been told in response to this question before that
TC39 has a general policy against features that allow garbage
collection to be visible to applications.
I'm not part of TC39, but I'm largely opposed to anything that makes GC 
observable. It introduces a source of non-determinism; that is the kind 
of thing that brings bugs you observe in production but unfortunately 
didn't notice and can't reproduce in a development environment. Or, if 
you observe them when running the program, you don't observe them in 
debugging mode.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: What is the status of Weak References?

2013-02-01 Thread David Bruant

Le 01/02/2013 12:21, Kevin Gadd a écrit :

On Fri, Feb 1, 2013 at 2:06 AM, David Bruant bruan...@gmail.com wrote:

I don't understand the connection between the lack of weak references and
emulating a heap in a typed array.

For an algorithm that needs weak references to be correct, the only
way to implement that algorithm in JavaScript is to stop using the JS
garbage collector and write your own collector. This is basically the
model used by Emscripten applications compiled from C++ to JS - you can use a
C++ weak reference type like boost::weak_ptr, but only because the
entire application heap is stored inside of a typed array and not
exposed to the JS garbage collector. This is great from the
perspective of wanting near-native performance, because there are JS
runtimes that can turn this into incredibly fast native assembly, but
the resulting code barely looks like JavaScript and has other
disadvantages, so that is why I bring it up - weakref support in JS
would make it possible to express these algorithms in hand-written,
readable, debuggable JS.
Sorry for repeating myself, but I still don't see the connection between 
the lack of weak references and emulating a heap in a typed array. 
Phrased as a question: would it be possible to compile a C++ program to 
JS with weakrefs without emulating a heap in a typed array? Because of 
pointer arithmetic, I doubt it, but I'm curious to learn if that's the 
case.



Garbage collectors have evolved and cycles aren't an issue any longer, weak
references or not.

Cycles are absolutely an issue, specifically because JS applications
can interact with systems that are not wholly managed by the garbage
collector. The problem in this case is a cycle being broken *too
early* because the application author has to manually break cycles. To
present a couple simple examples:

I have a top-level application object that manages lower-level 'mode'
objects representing screens in the application. The screens, when
constructed, attach event listeners to the application object. Because
the application manages modes, it needs to have a list of all the
active modes.
* The event handler closures can accidentally (or intentionally)
Last I heard, it's very difficult to accidentally capture a reference in 
a closure because modern engines check which objects are actually used 
(looking at variable names), so for an object to be captured in a 
closure, it has to be used. So intentionally.



capture the mode object, creating a real cycle involving a dead mode
that will not be collected by even the most sophisticated GC.
The problem is not about cycles. It's about abusively holding references 
to objects.



* If I am not extremely cautious, when a mode is destroyed I might
forget (or fail) to remove its associated event handlers from the
event handler list, causing the event handler lists to grow over time
and eventually degrade the performance of the entire application.
* I have to explicitly decide when a mode has become dead
Yes. I would say "understand" rather than "decide", but yes. And that's 
a very important point that most developers ignore or forget. GC is an 
undecidable problem, meaning that there will always be cases where a 
human being needs to figure out when in the object lifecycle it is no 
longer needed, and either free it in languages where that's possible or 
make it collectable in languages with a GC. There will be such cases 
even in languages where there are weak references.
Nowadays, making an object collectable means cutting all references 
(even if the object is not involved in a cycle!) that the mark-and-sweep 
algorithm (as far as I know, all modern engines use this algorithm) 
would traverse.




In this scenario, weak references are less essential but still
tremendously valuable: An event handler list containing weak
references would never form a cycle, and would continue to work
correctly as long as the mode is alive. It is also trivial to prune
'dead' event handlers from a list of weak event handlers.
When does the GC decide to prune dead event handlers? Randomly? Or maybe 
when you've performed some action meaning that the corresponding mode is 
dead?



The need to
explicitly tag a mode as dead and break cycles (potentially breaking
ongoing async operations like an XHR) goes away because any ongoing
async operations will keep the object itself alive (even if it has
been removed from the mode list), allowing it to be eventually
collected when it is safe (because the GC can prove that it is safe).

I decide to build a simple pool allocator for some frequently used JS
objects, because JS object construction is slow. This is what
optimization guides recommend.
Are these guides aware of bump allocators? Or that keeping objects alive 
longer than they should pressures generational garbage collectors?



I pull an object instance out of the
pool and use it for a while, and return it to the pool.
* If I forget to return an object to the pool when I'm done

Re: Could | be spelled extends?

2013-02-01 Thread David Bruant

Le 01/02/2013 22:02, Allen Wirfs-Brock a écrit :

Something like this can still be expressed in the current draft of ES6 as:

let p = Proxy(target, {
   __proto__:  VirtualObject.prototype,
   get(key, receiver) {...},
   set(key, value, receiver) {...}
});

This is ugly in its use of __proto__ and unreliable because __proto__ can be 
removed from Object.prototype.
Deleting Object.prototype.__proto__ affects the semantics of the object 
literal? This doesn't smell good.



PrimaryExpression ::
...
extends Expression ObjectLiteral

a desugaring semantics might be:

Object.mixin(new (Expression), ObjectLiteral)

I like this idea :-)
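
For concreteness, a hypothetical instance of that desugaring (neither the extends expression nor Object.mixin ended up shipping as such):

function Point(){ this.x = 0; }
// let p = extends Point {get norm(){ return Math.abs(this.x); }};
// would desugar to roughly:
let p = Object.mixin(new Point, {
    get norm(){ return Math.abs(this.x); }
});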

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proxying Native Objects: Use case

2013-01-31 Thread David Bruant

Le 31/01/2013 16:10, François REMY a écrit :

Hi.

I must admit I didn't follow the whole thread about native element 
proxyfication but when I left the consensus was that the native element should 
not be proxied (ie: it would not work like Object.create(...) do not work for 
them).
Just to clarify: both for Object.create and proxies, your "does not work" 
means "cannot be put in the DOM tree". The output object still behaves 
as expected from an ECMAScript point of view.
It remains possible to do wrappedNode1.appendChild(wrappedNode2) by 
unwrapping both wrapped nodes under the hood.



I've however a compelling use case that may make some of us rethink the 
situation.


Let's say I want to implement a polyfill for MapEvent whose definition is as 
follows:

 interface MapEvent : Event {
 getter any item(DOMString name);
 }
Out of curiosity, where does MapEvent come from? I can't remember having 
read about it in any spec, and Google isn't helping.
As a side note, I hope this is not a new API, because such getters are in 
particularly bad taste. I remember they generated some issues with 
HTMLCollection; maybe something about making it inherit from Array and 
static analysis on websites.
I think such a getter notation exists in WebIDL to formalize scars from 
the past (like HTMLCollection) rather than to be used in new APIs.


For this kind of API, I largely prefer the CustomEvent approach where
the (unique!) detail field is by-spec expected to be a "go crazy" type
of field. The detail attribute also doesn't collide with any existing
field; in the MapEvent case, a getter could shadow an inherited property.



In the case of no-proxy-for-native-objects, I've no way to do it properly, because implementing 'getter' requires me to use a Proxy, but having a functional event will force me to use a natively-branded element (like document.createEvent("CustomEvent")) and change its prototype chain to match my needs.
The conclusion I had personally reached was that every single API will have
to decide how it behaves with proxies. We've seen previously that
proxies in appendChild were a no-go, because browsers traverse the DOM
tree for selector matching, and putting proxies in the tree would reveal
how selector matching is performed, ruining any hope of making it parallel.
However, off the top of my head, I don't see a major obstacle to calling
.dispatchEvent with a proxied event as argument. I'm not expert enough on
this topic; people who know how the APIs are used and implemented will be
better placed to say whether it's appropriate to accept a proxy.


I'll post to public-script-coord to talk about that.

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proxying Native Objects: Use case

2013-01-31 Thread David Bruant

Le 31/01/2013 17:26, François REMY a écrit :

I think such a getter notation exists in WebIDL to formalize scars from
the past (like HTMLCollection) rather than to be used in new APIs

Yes and no. For example, something similar is envisioned to support custom properties in CSS. Something like:

 element.style.myCustomProperty = true;
 // set the my-custom-property custom CSS property

How is this not future-hostile?
Do you have a link to where people are discussing this?


However, off top of my head, I don't see a main restriction to call
.dispatch with a proxied event as argument. I'm not enough expert of
this topic; people who are in how the APIs are used and implemented will
better tell whether it's appropriate to accept a proxy.

I'll post to public-script-coord to talk about that.

Thanks, that's a good idea. But you didn't comment on the possibility to simply 
turn any object into a proxy using

Indeed, sorry.
Turning any object into a proxy is a no-go, because it means that things
that aren't observed today suddenly have to be observable, and that anyone
can insert arbitrary code in the middle of any operation on any object. It's
pretty bad both for security and probably for performance too.


I think having dispatchEvent accepting proxies is the easiest thing to 
do in your case.


If creating proxies for exotic objects becomes a real thing, maybe each
exotic type can define its own reduced version of proxies, like:


document.createProxiedElement('string', proxiedElementHandler)

The power of proxiedElementHandler could be arbitrarily reduced by the spec
of createProxiedElement. It could allow creating reduced proxies that
can be inserted in the DOM without providing the full abusive power that
has bad consequences for selector matching. It's possible in
theory. In practice, I don't believe it'll happen :-s
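To make "reduced" concrete anyway, here is a minimal sketch, assuming a spec like createProxiedElement would only honor a whitelist of traps (the names and the whitelist are purely illustrative):

// keep only the traps the spec decides to allow; everything else
// falls through to the default forwarding behavior of the proxy
function reduceHandler(handler, allowedTraps) {
    var reduced = {};
    allowedTraps.forEach(function (trap) {
        if (typeof handler[trap] === 'function') {
            reduced[trap] = handler[trap];
        }
    });
    return reduced;
}

var fullHandler = {
    get: function (target, name) { return target[name]; },
    defineProperty: function () { /* too much power: dropped below */ }
};

// only the get trap survives the reduction
var p = new Proxy({}, reduceHandler(fullHandler, ['get']));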


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proxying Native Objects: Use case

2013-01-31 Thread David Bruant

Le 31/01/2013 18:34, François REMY a écrit :

I think such a getter notation exists in WebIDL to formalize scars from
the past (like HTMLCollection) rather than to be used in new APIs

Yes and no. For example, something similar is envisioned to support custom properties in CSS. Something like:

element.style.myCustomProperty = true;
// set the my-custom-property custom CSS property

How is this not future-hostile?

Custom properties are guaranteed to start with a prefix (EWCG sports 'my' and Tab sports 'var', but in concept this is identical) so that the getter only gets in the way if the property name starts with that prefix. That prefix being reserved for author additions only, it's impossible for any conflict to happen in the future.

Ok, good to know. Thanks :-)


Do you have a link to where people are discussing this?

Feel free to comment on www-style, but if you want Tab's editor's draft, it's
here:

http://www.w3.org/TR/css-variables/#cssstyledeclaration-interface
The most natural way seems to be to first, set up a getter behavior on the 
interface that deals with variable properties, and second, set up a vars map 
that exposes the variable properties that aren't set to their initial value.

FWIW, I support both additions, given the author prefix.
In this instance, it's possible for you, as a polyfill author, to replace
Element.prototype.style with your own getter which returns special
proxy objects that do what you expect on property sets.
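A minimal sketch of that approach, assuming a browser where Element.prototype.style is a configurable accessor and using the author 'my' prefix as an illustration:

var styleDesc = Object.getOwnPropertyDescriptor(Element.prototype, 'style');
var originalGet = styleDesc.get;

// myCustomProperty -> my-custom-property
function toDashed(camel) {
    return camel.replace(/[A-Z]/g, function (c) { return '-' + c.toLowerCase(); });
}

Object.defineProperty(Element.prototype, 'style', {
    configurable: true,
    get: function () {
        var style = originalGet.call(this);
        return new Proxy(style, {
            set: function (target, name, value) {
                if (typeof name === 'string' && name.lastIndexOf('my', 0) === 0) {
                    target.setProperty(toDashed(name), String(value));
                } else {
                    target[name] = value;
                }
                return true;
            }
        });
    }
});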


For the dispatchEvent/addEventListener case, it's possible for you to 
override these methods (and maybe the Proxy constructor?) to accept 
proxies the way you want them to.


I think web browsers implementing proxies are/will be sufficiently 
WebIDL compliant to make such a polyfill not that hard to write.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Proxying Native Objects: Use case

2013-01-31 Thread David Bruant

Le 31/01/2013 19:12, François REMY a écrit :

In this instance, it's possible for you as a polyfill author to replace
Element.prototype.style by your own getter which returns your special
proxy objects which do what you expect on property set.

For the style case, it's *maybe* possible to do so (in a WebIDL compatible 
browser at least, not sure it would work on Chrome for example).
hmm... we're sidetracking a bit but Chrome doesn't have proxies, so you 
can't polyfill what you want anyway. By the time Chrome does have 
proxies, maybe its WebIDL conformance will be better.



For the dispatchEvent/addEventListener case, it's possible for you to
override these methods (and maybe the Proxy constructor?) to accept
proxies the way you want them to.

How would I do so? It seems impossible to me, or at least very tedious. Do not 
forget that the browser itself will add event handlers on objects via Web 
Components, Decorators, HTML Attributes, ...

I'm not familiar with this.
Do you have links to a spec saying that the browser adds native event
handlers?


A point I hadn't answered:

The problem is that, in the case of polyfilling, you can't be sure the browser will think about proxy usage when it first implements the API; if it doesn't, you're out of luck.
The best that can be done here is to contact the DOM Core people (that's
where events are now, apparently) and ask them to specifically say they
accept proxies in dispatchEvent. Then, write test cases and file (or fix)
bugs in browsers in case some tests fail. Nothing better can be done, I
think.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: WeakMap GC performance

2013-01-24 Thread David Bruant
After an email exchange with Andreas, it seems that some email clients
(most modern ones?) do not handle changes to the email subject well;
something I hadn't noticed in my own email client.


I apologize for the inconvenience to anyone it has bothered and will 
fork threads less often from now on.


David

Le 23/01/2013 11:21, Andreas Rossberg a écrit :

[Meta]

David, I would appreciate if you stopped breaking discussion threads
all the time. There are now about half a dozen threads related to
WeakMap clear, which clutters the discussion view and makes it hard to
properly follow the discussion with delay.

Thanks,
/Andreas


On 23 January 2013 10:49, David Bruant bruan...@gmail.com wrote:

[reordering]
Allen wrote:

We can understand the value of providing a clear method without talking
about GC at all.

I don't doubt there is a case for clearing a data structure, but it can be
fulfilled with clear-less weakmaps. What I'm trying to find is a differentiating
factor. I agree that:
* clearable and clear-less weakmaps both have a use. Which is dominant for
developers has yet to be determined and only tastes and feelings have been
provided so far (including by myself).
* clearable weakmaps and clear-less weakmap can be symmetrically and at
close to no cost implemented on top of one another.

Until evidence (from other languages?) is provided that one case matters
more, I personally call this a tie. That's where my reflection is at.

I think a major remaining point is performance. If clear-less weakmaps
induce an irreducible, significant GC cost, then that is a valid
justification to have a native .clear.
Now, implementors will have to deal with programs where some long-lived
weakmaps aren't manually cleared, the interesting question here is: how far
can they go to reduce the GC cost (without requiring a major breakthrough in
GC research of course ;-) )?
If the cost can be reduced to a marginal difference with manual .clear, I
call the performance argument a tie too (leaving it a matter of
taste/feeling).


Le 23/01/2013 00:36, Allen Wirfs-Brock a écrit :

On Jan 22, 2013, at 2:35 PM, David Bruant wrote:

So, to find out if a weakmap is dead, it has to come from a source other
than the mark-and-sweep algorithm (since it has lost its precision)...
Given the additional prohibitive cost weakmaps seem to have on the GC,
maybe things that would otherwise be considered too costly could make sense
to be applied specifically to WeakMaps. For instance, would the cost of
reference-counting only weakmaps be worth the benefit from knowing early
that the weakmap is dead? (I have no idea how much each costs, so it's hard
for me to compare the costs)
For WeakMapWithClear, reference counting would declare the weakmap dead
as soon as the new weakmap is assigned to the private property, so that's
good. It wouldn't work if some weakmaps are part of a cycle of course... but
maybe it's such an edge case that it's acceptable to ask users doing
that to break their weakmap cycles manually if they don't want the GC to
be too mad at them.


You know, as much as Jason and I enjoy talking about garbage collectors,
this probably isn't the place to revisit the last 40 years of a highly
developed area of specialized CS technology.

Even if there is a .clear method, it doesn't mean people will use it, so the
costs weakmaps induce on GC will have to be taken care of even if people
don't manually clear the weakmap [forking the thread for this reason]. JS
engine implementors will have to solve this problem regardless of the
introduction of a .clear method or not. Since JS engines start having
generational GC and WeakMaps, I feel here and now might be a very good place
and time to revisit these 40 years. Each implementor will have to do this
revisit anyway.
If anything, this thread may become a good resource for developers to
understand why some of their programs using WeakMaps have conjecturally or
inherently bad GC characteristics.

Of all points in this thread, the one that got stuck in my head is when
Jason said: In our current implementation, creating a new WeakMap and
dropping the old one is very nearly equivalent in performance to clear().
What this means is that something is lost when moving to a naive
generational GC regarding WeakMaps. The loss is the knowledge of when
exactly a weakmap is dead. And this loss has a cost related to weakmap GC
cost. Although Mark showed a linear algorithm, one can still wonder whether in
practice this algorithm induces a significant cost (the worst-case complexity
doesn't say much about the most-frequent-case cost of an algorithm).

What I'm trying to find out is whether there is a small-cost
weakmap-specific tracking system that could tell the GC that a weakmap is
dead as soon as possible. First and foremost, what did the research find in
these 40 years on this specific question?
Did it prove that any tracking system doing what I describe would cost so
much that it wouldn't save on what it's supposed

Re: Private symbols auto-unwrapping proxies (was: Security Demands Simplicity (was: Private Slots))

2013-01-24 Thread David Bruant

Le 24/01/2013 09:52, Tom Van Cutsem a écrit :

2013/1/23 David Bruant bruan...@gmail.com

Le 23/01/2013 09:38, Tom Van Cutsem a écrit :

3) because of JS's invoke = get + apply semantics, by
default a proxy always leaves the |this| value pointing at the
proxy.

Looking only at 3), sometimes this is what you want, and
sometimes it isn't.

In which case would it be what you want?


See the example by Brendan just upstream in this thread.

True, I had read this post too quickly.


The example Brandon (and Kevin before him) provided showed
something very intrusive about proxies related to your 3). That
proxies mediate the access to the public method is one thing; that
they pretend to be the object acted on inside the method opens an
entire world.

Even with fixes suggested by Allen, the hazard can still exist if
someone does:
Counter.prototype.increment.call(new Proxy(counter,
maliciousHandler))


I don't understand why this is a hazard. Even without proxies, |this| 
is never reliable, unless you use .bind().
I'm not worried about the |this|-reliability for the method, but rather 
that the target instance can be left in an inconsistent state because of 
a malicious handler. The important part in the above expression isn't 
the .call, but that an actual Counter instance is the proxy target.



I have no idea how this can be mitigated in general without
creating a mechanism that can be abused to unwrap proxies. For
classes specifically, maybe an option could make classes keep
track of generated objects and throw if a non-instance is passed
to a method as |this| (...which is exactly the kind of thing DOM
Node tree manipulation methods will need)


Recall that it was a goal for classes to be a form of sugar over the 
existing object model. That means the use of |this| within a method 
specified using class syntax should really be no different from using 
|this| outside of classes. Let's try to avoid making up special rules 
for class instances.
I agree with you; I suggested adding an option, not changing the
default semantics. Because of the too-dynamic |this| and everyone being
used to it, protecting yourself against attacks like
the one above (method.call(new Proxy(legitObject, maliciousHandler)))
has to be an opt-in. Basically, methods make sure their |this| is an
object that came out of the class constructor.
It would be nice if this opt-in could be made as simple as an optional
keyword in the class syntax. This option would just desugar differently
(put all objects created by the constructor in a WeakSet, add a prologue
to each method verifying |this| is part of the WeakSet, continue if yes,
throw if not).
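A minimal sketch of what that desugaring would amount to, written by hand with a WeakSet and illustrative names:

var instances = new WeakSet();

function Counter() {
    instances.add(this);
    this.count = 0;
}
Counter.prototype.increment = function () {
    // the prologue the option would generate
    if (!instances.has(this)) throw new TypeError('not a genuine Counter');
    return ++this.count;
};

var c = new Counter();
Counter.prototype.increment.call(c); // 1
// a proxy wrapping c has a different identity, so it is rejected:
// Counter.prototype.increment.call(new Proxy(c, {})); // throws TypeError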


Going back to the big discussion thread about proxying DOM objects, I 
maintain that it's a bad idea to try to make existing APIs (that 
expect objects of a very specific type) work with any random proxy, 
either by interacting with it or by unwrapping it. The cleaner thing 
to do would be to replace/wrap the API with one that also recognizes 
and accepts certain proxies (still not just anyone's proxies).
I agree. The selector matching use case convinced me there is no chance 
to put proxies or weird objects in a DOM tree.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Private symbols auto-unwrapping proxies (was: Security Demands Simplicity

2013-01-24 Thread David Bruant

Le 22/01/2013 21:09, David Bruant a écrit :

Le 22/01/2013 20:05, Tom Van Cutsem a écrit :
Symbol-keyed indexing on the A face is distinct from symbol-keyed 
indexing on the B face. But that's OK: it's the job of the membrane 
to separate the A and the B face in the first place.
I don't think that's ok. A goal of the proxy mediation is to give A
and B the impression they communicate with one another as if there
were no mediation (but keeping the right to revoke all communications
when necessary). That's why the membrane faithfully forwards primitive
values and preserves object identities in cases other than
private symbols.
If you created A and B and started to make them communicate, it's 
because you wanted them to collaborate to achieve something for you. 
If A and B share a private symbol, it's in order to communicate using 
it. If the membrane changes the symbol, then A and B don't communicate 
as if there was no mediation anymore. It's even possible that they 
won't be able to work together if their mutual collaboration relied on 
communication via the private symbol they expected to share.
I've come around to think that auto-unwrapping may be a simpler idea in 
the end. Maybe in complicated use cases it will require some additional 
work to not leak private symbols.
As explained above, auto-unwrapping would prevent 2 untrusted membraned
parties from communicating directly through private symbols, but the
language already provides more than enough ways to communicate (public object
properties, arguments/return values in function calls).
I think it is crucially important that each context is consistent and
oblivious to being wrapped, and you've explained it could be done
efficiently (1-1 mapping of private symbols), so I guess there is no
problem on that side.
Securing the use of private symbols will always require some additional
book-keeping and that's unfortunate, but the cost sounds acceptable.
Probably a library can be provided to help out proxy authors with the
book-keeping.


As you noted, one crucially important point is that no built-in private 
symbol is introduced in the language.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


WeakMap GC performance (was: WeakMap.prototype.clear performance)

2013-01-23 Thread David Bruant

[reordering]
Allen wrote:

We can understand the value of providing a clear method without talking about 
GC at all.
I don't doubt there is a case for clearing a data structure, but it can be
fulfilled with clear-less weakmaps. What I'm trying to find is a
differentiating factor. I agree that:
* clearable and clear-less weakmaps both have a use. Which is dominant 
for developers has yet to be determined and only tastes and feelings 
have been provided so far (including by myself).
* clearable weakmaps and clear-less weakmap can be symmetrically and at 
close to no cost implemented on top of one another.


Until evidence (from other languages?) is provided that one case matters 
more, I personally call this a tie. That's where my reflection is at.


I think a major remaining point is performance. If clear-less weakmaps 
induce an irreducible, significant GC cost, then that is a valid
justification to have a native .clear.
Now, implementors will have to deal with programs where some long-lived 
weakmaps aren't manually cleared, the interesting question here is: how 
far can they go to reduce the GC cost (without requiring a major 
breakthrough in GC research of course ;-) )?
If the cost can be reduced to a marginal difference with manual .clear,
I call the performance argument a tie too (leaving it a matter of
taste/feeling).



Le 23/01/2013 00:36, Allen Wirfs-Brock a écrit :

On Jan 22, 2013, at 2:35 PM, David Bruant wrote:

So, to find out if a weakmap is dead, it has to come from a source other than
the mark-and-sweep algorithm (since it has lost its precision)...
Given the additional prohibitive cost weakmaps seem to have on the GC, maybe 
things that would otherwise be considered too costly could make sense to be 
applied specifically to WeakMaps. For instance, would the cost of 
reference-counting only weakmaps be worth the benefit from knowing early that 
the weakmap is dead? (I have no idea how much each costs, so it's hard for me 
to compare the costs)
For WeakMapWithClear, reference counting would declare the weakmap dead as soon
as the new weakmap is assigned to the private property, so that's good. It
wouldn't work if some weakmaps are part of a cycle of course... but maybe
it's such an edge case that it's acceptable to ask users doing that to break
their weakmap cycles manually if they don't want the GC to be too mad at
them.


You know, as much as Jason and I enjoy talking about garbage collectors, this 
probably isn't the place to revisit the last 40 years of a highly developed 
area of specialized CS technology.
Even if there is a .clear method, it doesn't mean people will use it, so 
the costs weakmaps induce on GC will have to be taken care of even if 
people don't manually clear the weakmap [forking the thread for this 
reason]. JS engine implementors will have to solve this problem 
regardless of the introduction of a .clear method or not. Since JS 
engines start having generational GC and WeakMaps, I feel here and now 
might be a very good place and time to revisit these 40 years. Each 
implementor will have to do this revisit anyway.
If anything, this thread may become a good resource for developers to 
understand why some of their programs using WeakMaps have conjecturally 
or inherently bad GC characteristics.


Of all points in this thread, the one that got stuck in my head is when 
Jason said: In our current implementation, creating a new WeakMap and 
dropping the old one is very nearly equivalent in performance to clear().
What this means is that something is lost when moving to a naive 
generational GC regarding WeakMaps. The loss is the knowledge of when 
exactly a weakmap is dead. And this loss has a cost related to weakmap 
GC cost. Although Mark showed a linear algorithm, one can still wonder
whether in practice this algorithm induces a significant cost (the worst-case
complexity doesn't say much about the most-frequent-case cost of an
algorithm).


What I'm trying to find out is whether there is a small-cost 
weakmap-specific tracking system that could tell the GC that a weakmap 
is dead as soon as possible. First and foremost, what did the research 
find in these 40 years on this specific question?
Did it prove that any tracking system doing what I describe would cost 
so much that it wouldn't save on what it's supposed to? If so, I'll be 
happy to read the paper(s) and give up on the topic. I'll assume it's not
the case and continue.

Ideally, the tracking system would have the following properties:
* it costs nothing (or a small startup constant) if there is no weakmap
* the overall cost of the tracking system in normal cases is 
significantly less than what it costs to have a weakmap falsely assumed 
alive.
I say in normal cases because that's what modern GCs are already in 
the business of. Generational GC is an optimistic optimization based on 
the *observation* that in most real-life programs, most objects are 
short-lived. It's possible to craft

Re: Private symbols auto-unwrapping proxies (was: Security Demands Simplicity (was: Private Slots))

2013-01-23 Thread David Bruant

Le 23/01/2013 09:38, Tom Van Cutsem a écrit :
3) because of JS's invoke = get + apply semantics, by default a 
proxy always leaves the |this| value pointing at the proxy.


Looking only at 3), sometimes this is what you want, and sometimes it 
isn't.

In which case would it be what you want?
The example Brandon (and Kevin before him) provided showed something 
very intrusive about proxies related to your 3). That proxies mediate 
the access to the public method is one thing; that they pretend to be
the object acted on inside the method opens an entire world.


Even with fixes suggested by Allen, the hazard can still exist if 
someone does:

Counter.prototype.increment.call(new Proxy(counter, maliciousHandler))

I have no idea how this can be mitigated in general without creating a 
mechanism that can be abused to unwrap proxies. For classes 
specifically, maybe an option could make classes keep track of
generated objects and throw if a non-instance is passed to a method as
|this| (...which is exactly the kind of thing DOM Node tree
manipulation methods will need)


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Ducks, Rabbits, and Privacy

2013-01-22 Thread David Bruant

Le 22/01/2013 07:31, Benoit Marchant a écrit :
Why can we express in a property descriptor the notion of writable, 
configurable or enumerable but not private?
Because strings are forgeable, meaning that someone you may not trust 
can read in your code or guess (maybe with low probability) the name of 
the property, making it not-so-private-after-all.


Also, could be off topic, but the fact that for a getter/setter foo 
property, you have to implement yourself a non-enumerable _foo 
property to actually have some storage, is not particularly convenient.
That's a terrible idea, missing the point of accessors, which are expected
to encapsulate the state they're dealing with, not force you to put it
in everyone's sight. You don't *have* to do that ...


A solution to that would be welcome! One way could be a local variable, named after the property, added to the scope of the getter/setter while it's called on an object; it would certainly encourage encapsulation rather than accessing a private property directly, which would still be possible.
... and you're providing the solution yourself: getters and setters can
share a variable in a common scope. The language can't decide to add its
own variable, because it could collide with or shadow an existing
variable, making the code much harder to understand and reason about. So
you have to create the variable yourself.
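For instance, a minimal sketch of that closure pattern (names are illustrative):

function makePoint() {
    var x = 0; // private storage: a closure variable, not a _x property
    return {
        get x() { return x; },
        set x(v) { x = Number(v); }
    };
}

var p = makePoint();
p.x = 42;
p.x; // 42, and no enumerable or hidden _x property anywhere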


Interestingly, if instead of a non-enumerable _foo property a private 
symbol was used, getters and setters would be the property-wise 
equivalent of proxies; the private symbol playing the role of the target 
and the publicly exposed string property being the proxy.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questioning WeakMap.prototype.clear

2013-01-22 Thread David Bruant

Le 22/01/2013 11:47, Jason Orendorff a écrit :
On Mon, Jan 21, 2013 at 6:04 AM, David Bruant bruan...@gmail.com wrote:


[...] WeakMap.prototype.clear questions the property that was true
before its adoption (you can only modify a weakmap entry if you
have the key)


David, would you please elaborate your argument for this invariant? 
This is the first I've seen it stated.


An invariant can be a powerful thing. Still, I guess my default 
position is that (1) the object-capabilities perspective is only one 
view among many; (2) even looking at things with an eye for o-c 
integrity and security, clearing a data structure seems like a 
reasonable thing to allow, treating a reference to the data structure 
itself as a sufficient capability. It's (2) that I would especially 
like you to address.
I think Rick already suggested your (2), though phrased a bit 
differently [1] (that was his #1). I answered [2]: I thought more about 
how I use weakmaps and [well-encapsulate my weakmaps so that I'm the 
only holder] is a thing I do naturally indeed.
The problem may arise when you start sharing weakmaps around and some 
use cases require you to [3].


Regarding your (1), I don't doubt the need to clear a data structure 
since Allen explained a very compelling use case for that [3].
However, Mark showed an elegant way to implement .clear on top of 
clear-less weakmaps and the class syntax [4][5] (reproducing here the 
final version for clarity)


// note: implements the WeakMap API but does *not* extend WeakMap.
class WeakMapWithClear {
    private let wrapped;
    constructor() {
        wrapped = new WeakMap();
    }
    get(key) = wrapped.get(key),
    set(key, val) = wrapped.set(key, val),
    has(key) = wrapped.has(key),
    delete(key) = wrapped.delete(key),
    clear() { wrapped = new WeakMap(); }
}

Now, the only thing that differentiates the native version from this one
is performance, I think. Allen seems to argue that a native
.clear would have better perf characteristics (related to GC). I still 
fail to see why the difference would be significant (but I need to 
re-read his recent posts about that).
In all likelihood, .clear is a method that is used sporadically. At 
least, one needs to fill up the weakmap a bit before calling it, so I 
don't think a marginal perf difference would matter.


As an implementor, what is your feeling about performance 
characteristics of both the native and the class-based version?


David

[1] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028353.html
[2] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028357.html
[3] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028380.html
[4] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028370.html
[5] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028371.html
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Security Demands Simplicity (was: Private Slots)

2013-01-22 Thread David Bruant

Le 21/01/2013 22:31, Tom Van Cutsem a écrit :
Let's talk through Allen and Brandon's suggestion of auto-unwrapping 
private symbol access on proxies.
If a membrane can intercept all exchanged private symbols I think this 
could be made to work.
Agreed. Unfortunately, I think the condition (If a membrane can 
intercept all exchanged private symbols) cannot be fulfilled 
practically. Let's start with:


var o = {a:{a:{a:new PrivateSymbol()}}}
// send o across the membrane

The membrane has to traverse o to find the symbol. The membrane can do
it, but it requires the complete traversal of every object passed
back and forth: an arbitrarily big cost for arbitrarily
complex objects.


I could stop the argument here, but for the fun of it, I'll go further :-)

function PasswordProtectedSymbol(symbol, password){
    return new Proxy({}, {
        get: function(target, name){
            if(name === password)
                return symbol;
        }
    });
}

If such an object crosses a membrane, the membrane needs to know the
password to find the encapsulated symbol. The membrane can know the
password from previous communication between untrusted parties, but it
requires knowing that this particular string was the password (note that the
property name is not on the target, so Object.gOPN cannot help the
membrane). Things can get trickier if the password is partially computed
on both sides. For the membrane to know the password, the membrane author
has to read and understand the code of both untrusted parties in order to
understand the semantics of the communication between them. This is very
expensive and error-prone.


Let's see another pattern. As an intermediate state in the 
demonstration, consider the following InfiniteObject abstraction:


var infiniteHandler = {
    get: function(){
        return new InfiniteObject();
    }
};
var target = {};

function InfiniteObject(){
    return new Proxy(target, infiniteHandler);
}

var will = new InfiniteObject();
will.this.ever.end ? 'nope' : 'yep';

A slightly different implementation could accept all 1-char strings (why
not even put them in the target) and decide that the proxy chain
stops and provides a private symbol if you pass in a password, as in
obj.p.a.s.s.w.o.r.d. In this case, fully traversing the object means
entering an infinite loop, and the above point about passwords and the
membrane's awareness of communication semantics still stands.


The membrane can always capture the private symbols, I think, but it may
come at an impractical price.



Hmm... all the above tricks require that the untrusted
parties can create their own functions that the membrane is unaware of.
Maybe a Loader option could be considered so that syntax-based
initializations ({}, [], function(){}...) trigger a custom constructor
in loaded code (only in loaded code, not globally of course). This
custom constructor would make *all* new objects


If loaders don't have such an option, it's possible to parse the code 
and wrap all initializations. I have written such a tool recently for a 
completely unrelated purpose [1]. It's an ugly AST hack; please forgive
the naivety of the implementation. Also, the translated code does not
have exactly the same semantics as the source for hoisting reasons
and a couple of other edge cases, but I chose not to care in my case as a
first approximation.


I have no idea if the perf cost would be more practical for either the 
loader or the rewriting solution. It seems worth investigating though.


David

[1] 
https://github.com/DavidBruant/HarmonyProxyLab/blob/ES3AndProxy/ES3AndProxy/tests/tools/Test262Converter/transform.js

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questioning WeakMap.prototype.clear

2013-01-22 Thread David Bruant

Le 22/01/2013 15:19, Jason Orendorff a écrit :
On Tue, Jan 22, 2013 at 5:56 AM, David Bruant bruan...@gmail.com wrote:


Le 22/01/2013 11:47, Jason Orendorff a écrit :

On Mon, Jan 21, 2013 at 6:04 AM, David Bruant bruan...@gmail.com wrote:

[...] WeakMap.prototype.clear questions the property that was
true before its adoption (you can only modify a weakmap
entry if you have the key)


David, would you please elaborate your argument for this
invariant? This is the first I've seen it stated.

An invariant can be a powerful thing. Still, I guess my default
position is that (1) the object-capabilities perspective is only
one view among many; (2) even looking at things with an eye for
o-c integrity and security, clearing a data structure seems like
a reasonable thing to allow, treating a reference to the data
structure itself as a sufficient capability. It's (2) that I
would especially like you to address.

I think Rick already suggested your (2), though phrased a bit
differently [1] (that was his #1). I answered [2]: I thought more
about how I use weakmaps and [well-encapsulate my weakmaps so that
I'm the only holder] is a thing I do naturally indeed.
The problem may arise when you start sharing weakmaps around and
some use cases require you to [3].


What problem exactly?
I was wrong in saying *the* problem. A problem may arise: the risk that
you were relying on some entries that may disappear at any time, making
your code harder to reason about.


[re-ordering]
Also, I don't understand how [3] is a use case for sharing weakmaps 
around. To me it looks like a use case for clearing a WeakMap.
I was imagining that some of the different phases could be performed by 
third-party code. But since the use case is about a cache, there is no 
reason one would rely on the existence of some entries. Maybe a more 
subtle use case needs to be found.


Sharing mutable data structures across abstraction (or trust) 
boundaries is already pretty well understood to be an integrity (or 
security) risk. It's easy to fix: you expose a read-only view instead.
If WeakMap.prototype.clear is part of the built-in API, an attacker 
(including buggy code) can do WeakMap.prototype.clear.call(yourWeakMap), 
so exposing a read-only view means wrapping pretty much the way Mark 
Miller implemented clear:


class WeakMapWithoutClear {
    private let wrapped;
    constructor() {
        wrapped = new WeakMap();
    }
    get(key) = wrapped.get(key),
    set(key, val) = wrapped.set(key, val),
    has(key) = wrapped.has(key),
    delete(key) = wrapped.delete(key)
}

What this and my previous example show is a semantic equivalence between
clearable and clear-less weakmaps. Which should be chosen as the default?

* clear-less weakmaps have better integrity properties.
* clearable weakmaps may have better performance characteristics (I'm 
still not entirely convinced)
Are use cases for .clear so common that they justify being put in the
native API? Or is it acceptable to ask those who want it to wrap in classes?
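For what it's worth, the clear-less wrapping is also expressible today, without the speculative class syntax above, as a plain closure-based wrapper:

function makeWeakMapWithoutClear() {
    var wrapped = new WeakMap();
    return {
        get: function (key) { return wrapped.get(key); },
        set: function (key, val) { return wrapped.set(key, val); },
        has: function (key) { return wrapped.has(key); },
        'delete': function (key) { return wrapped['delete'](key); }
    };
}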


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Security Demands Simplicity (was: Private Slots)

2013-01-22 Thread David Bruant

Le 22/01/2013 16:02, Tom Van Cutsem a écrit :

2013/1/22 David Bruant bruan...@gmail.com

Le 21/01/2013 22:31, Tom Van Cutsem a écrit :

Let's talk through Allen and Brandon's suggestion of
auto-unwrapping private symbol access on proxies.
If a membrane can intercept all exchanged private symbols I
think this could be made to work.

Agreed. Unfortunately, I think the condition (If a membrane can
intercept all exchanged private symbols) cannot be fulfilled
practically. Let's start with:

var o = {a:{a:{a:new PrivateSymbol()}}}
// send o across the membrane

The membrane has to traverse o to find the symbol. The membrane
can do it, but it requires the complete traversal of every
object passed back and forth: an arbitrarily big cost for
arbitrarily complex objects.


This is not my understanding of how membranes work:

- when o is passed through the membrane, a proxy |op| is created for 
it on the other side (let's call this the inside)
- when |op.a| is accessed inside the membrane, the membrane forwards 
the operation, and creates a new proxy |opp| for the value returned by 
|o.a|.
- when |opp.a| is accessed inside the membrane, the membrane forwards 
the operation, so retrieves |o.a.a|, sees that the value is a private 
symbol, and returns a new private symbol instead.


The same argument applies to the other examples you gave: membranes 
only wrap lazily, and it's only at the point where an actual private 
symbol value is about to cross the membrane (as argument or return 
value of a forwarded operation) that one needs to detect (and wrap) 
private symbols.

True. I'm taking back what I said :-)

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Private symbols auto-unwrapping proxies (was: Security Demands Simplicity (was: Private Slots))

2013-01-22 Thread David Bruant

Le 22/01/2013 16:13, Tom Van Cutsem a écrit :
2013/1/22 Allen Wirfs-Brock al...@wirfs-brock.com


We can probably fix the built-ins with some ad hoc language about
them automatically resolving proxies to the target as the this
value. Or perhaps we could expand the internal MOP API to include
a resolve proxy to target operation.

Using private symbols for all of these cases, including the
built-ins also seems like an alternative that may work.


Let me try to summarize:

The proposal: private symbol access auto-unwraps proxies.

In code:

var s = new PrivateSymbol();
var t = {};
var p = Proxy(t, {...});
t[s] = "foo";
p[s]; // doesn't trap, returns "foo"
p[s] = "bar"; // doesn't trap, sets t[s] = "bar"

Pro:
- would solve the issue of wrapping class instances with private state 
stored via private symbols
- would solve the issue of how to proxy built-ins, like Date, if they 
are specified to use private symbols to access internal state

- would get rid of the unknownPrivateSymbol trap in Proxies
- could maybe even get rid of the private symbol whitelist in the 
Proxy constructor, which would make proxies entirely oblivious to
private names


Remaining issue: private symbols can pierce membranes.

This issue is resolved if:
- (base case) there are no built-in private symbols in a standard JS 
environment (i.e. all the built-in symbols are unique)
- (inductive case) a membrane takes care to detect and wrap any 
private symbols that cross the membrane, and keeps a 1-to-1 mapping to 
maintain the identity of the symbols across both sides of the membrane.
Just realizing now, but how does the membrane do the symbol unwrapping
if private symbols pierce it?
2 contexts A and B share a symbol. The symbol initially has to go
through a public channel (a get trap with a string name, for instance), and
if A created a symbol a, the membrane can provide a symbol b to the B
context. But when A does someObject[a] = 2 and B does someObject[b],
both accesses pierce proxies, so the membrane can't do its unwrapping job.


Also, in some cases, the membrane can't switch a value.
// in context A
var s = new PrivateSymbol()
var o = Object.freeze({s:s});
// send o across the membrane

In B, invariant checks mean that the membrane can't answer anything
other than the original symbol when B retrieves the s property.


David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questioning WeakMap.prototype.clear

2013-01-22 Thread David Bruant

Le 21/01/2013 22:42, Allen Wirfs-Brock a écrit :

On Jan 21, 2013, at 12:25 PM, David Bruant wrote:


Le 21/01/2013 20:52, Allen Wirfs-Brock a écrit :

On Jan 21, 2013, at 11:36 AM, Rick Waldron wrote:


This is the reality check I can get behind—I'm hard pressed to come up with a 
use case that isn't contrived or solvable by some other means.


This is easy:

I do phased traversals over a complex data structure.  I have a number of
functions that collaboratively perform the function and they share access to a
WeakMap to cache relationships that they identify over the course of the 
traversal.  When I start a new traversal phase I want to flush the cache so I 
use the clear method to do so.

Creating a new weakmap would work equally well to flush the cache.

Same arguments applies to Map clear.
The difference with maps is that one can already enumerate all the keys;
.clear is just a convenience, not a new capability.



I'm actually more comfortable with a discussion of the utility of the clear 
method for maps, in general.  But, if it has utility for Map then it has the 
same utility for WeakMap, and supplying it on both is a matter of API consistency.
Again, I don't think maps and weakmaps can be compared. They are 
different tools that can be used in different conditions. Like unique 
and private symbols.
My (small) experience is that when I need to associate data with an 
object but don't want to do the book-keeping of which entry I care 
about, I use weakmaps. So far, I've only used maps in places where I 
used to use objects to associate strings with data. So far, I've used
maps as a safe object (no need to worry about inheritance or __properties__).


WeakMap methods can't use anything other than objects as keys; that makes
it very hard to switch from one structure to the other, and I still haven't
found a case where I would have traded one for the other.


I'll fork a new thread about GC-related performance.

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


WeakMap.prototype.clear performance (was: Questioning WeakMap.prototype.clear)

2013-01-22 Thread David Bruant

[Merging a couple of relevant posts]

Le 22/01/2013 15:59, Jason Orendorff a écrit :


Now, the only thing that differentiates the native version from
this one is performance, I think. Allen seems to argue that a
native .clear would have better perf characteristics (related to
GC). I still fail to see why the difference would be significant
(but I need to re-read his recent posts about that).


Definitely re-read them. They made sense to me. If you have questions 
about the implementation of GC through WeakMaps, I'll happily share 
what I know.

That would be:
* https://mail.mozilla.org/pipermail/es-discuss/2013-January/028145.html
* https://mail.mozilla.org/pipermail/es-discuss/2013-January/028387.html



As an implementor, what is your feeling about performance
characteristics of both the native and the class-based version?


Heh! I'm the worst person to ask about this. I'm not comfortable with 
the worst-case GC performance of WeakMaps to begin with. My main 
coping mechanism is not thinking about it!


In our current implementation, creating a new WeakMap and dropping the 
old one is very nearly equivalent in performance to clear(). However 
that's because we don't have a generational GC today. Dead WeakMaps 
are promptly collected. In another year, that will change. If we end 
up with more than two generations, I think it'll lead to exactly the 
problems Allen foresees.

For reference, quote from Allen:
generational collectors can have large latencies between the time the 
last reference to an object is destroyed and when the GC actually
notices.  Many GC cycles may occur during that period and if a 
populated but unneeded large WeakMap is one of these zombie object, 
then it can have perf impacts.

Jason:
Maybe even if we just have two generations. (To some extent, 
long-lived ordinary Maps and Arrays also do this in a generational GC; 
but WeakMaps have much, much worse worst-case GC performance.)
Indeed, the long-lived object being the only root of a big graph is a 
problem unrelated to WeakMaps. If that was the only reason to add a 
clear method on weakmaps, an Object.clear should be considered too.
I don't understand the point about the worst-case GC performance. It may 
be related to Allen's point about ephemeron algorithms, which I don't know
enough about.
I would be interested in knowing more if that's relevant. I'm not 
entirely sure it's relevant since the difference between .clear and 
dropping a weakmap is about the delta during which the storage is 
considered collectable.


Having said all that, I bet we could hack around the worst-case GC 
performance. It'll be a pain, but GC is like that sometimes.
What you said above about the current GC setup, which yields performance
equivalent to .clear, is interesting. In a nutshell, moving to a
(naive?) generational GC means that you're losing something you had
before. I feel there is a middle ground to be found. What about the
following:
WeakMaps are allocated in their own area which is manually GC'ed with 
today's algorithm (which is probably implemented for the last 
generation?). This way, you'll know as soon as possible (next GC) if one 
is dead.

Variations:
* WeakMaps are moved to this area after a given threshold (20 keys?)
* WeakMaps are moved to this area if they survive one GC cycle.
I feel that with this dedicated area, you know soon enough (next GC,
which is what you get with .clear too) whether a big weakmap can be
collected.


I wouldn't consider what I suggested as a hack around worst-case GC 
performance, but rather as WeakMap special-casing. Given WeakMaps'
special memory-management properties, it doesn't sound that crazy
to special-case how they're treated in GC algorithms.
Maybe the ideas I suggested above in 5 minutes are not perfect, but I
feel special-casing weakmaps is a direction to explore regardless of the
debate we're having about .clear, since developers won't
necessarily always use it and the GC needs to be fast in those cases too.



Just to be sure I understand generational GC: old generations are considered
as roots during most GC traversals, right? That's why it may take
time to realize they're actually dead?


Thanks,

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Private symbols auto-unwrapping proxies (was: Security Demands Simplicity (was: Private Slots))

2013-01-22 Thread David Bruant

Le 22/01/2013 20:05, Tom Van Cutsem a écrit :

2013/1/22 David Bruant bruan...@gmail.com

Just realizing now, but how does the membrane do the symbol
unwrapping if private symbols pierce it?
2 contexts A and B share a symbol. The symbol initially has to go
through a public channel (a get trap with a string name, for
instance), and if A created a symbol a, the membrane can provide a
symbol b to the B context. But when A does someObject[a] = 2 and
B does someObject[b], both accesses pierce proxies, so the
membrane can't do its unwrapping job.


The membrane doesn't need to unwrap. Evaluating someObject[a] in A, 
and someObject[b] in B will result in different values.


In the context of membranes, someObject is actually a Harvey 
Two-Face type of object (it's split across two worlds, one in A, and a
proxy representation in B).

Indeed, sorry, using someObject in both cases was a confusing shortcut.

Symbol-keyed indexing on the A face is distinct from symbol-keyed 
indexing on the B face. But that's OK: it's the job of the membrane to 
separate the A and the B face in the first place.
I don't think that's ok. A goal of the proxy mediation is to give A and
B the impression they communicate with one another as if there were no
mediation (but keeping the right to revoke all communications when
necessary). That's why the membrane faithfully forwards primitive values
and preserves object identities in cases other than private symbols.
If you created A and B and started to make them communicate, it's 
because you wanted them to collaborate to achieve something for you. If 
A and B share a private symbol, it's in order to communicate using it. 
If the membrane changes the symbol, then A and B don't communicate as if 
there was no mediation anymore. It's even possible that they won't be 
able to work together if their mutual collaboration relied on 
communication via the private symbol they expected to share.



Also, in some cases, the membrane can't switch a value.
// in context A

var s = new PrivateSymbol()
var o = Object.freeze({s:s});
// send o across the membrane

In B, invariant checks mean that the membrane can't answer
anything other than the original symbol when B retrieves the s
property.


This is independent of private symbols. The same issue occurs if s 
were a String. That's what requires the shadow-target work-around in 
membrane proxies in general.

Good point.

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Questioning WeakMap.prototype.clear

2013-01-22 Thread David Bruant

Le 22/01/2013 15:59, Jason Orendorff a écrit :
A) Are there more WeakMap applications that will want .clear() or 
applications that will want .clear() not to exist? Offhand I would bet 
on the former, by a landslide, but if you think otherwise, or if 
there's some other reason to privilege .clear() not existing, let's 
talk about that.


(...)

Having said all that, I bet we could hack around the worst-case GC 
performance. It'll be a pain, but GC is like that sometimes. This 
decision should hinge on what provides the best API for developers. I 
think we mainly disagree on what developers want, which is a great 
thing to talk about. Let's talk about that.
I agree that which use case is dominant is a crucially important question.
To date, I honestly don't know which of want-clear and don't-want-clear would
be dominant. I agree Allen showed a compelling use case, but I can't
judge how important it is. In my experience, I've not had the need for a
.clear yet. I know that on some occasions my code relies on the
weakmap not being emptied, but I've kept the weakmap well-encapsulated
in these cases, so this experience is not that relevant (because
an intrusive .clear can't happen).


I've written a whole paragraph which is as factual as I can be on
the topic, but it doesn't help the debate much.


When it comes to feelings, I prefer prudence by default. I'd like to say 
a few words about that. I understand that features shouldn't be seen 
only with ocaps eyes, but I'd like to take a moment to describe what I 
care about when it comes to ocaps and what we call security.
Node.js is an interesting ecosystem. There are a lot of modules; it's
not unusual to use 10-20 modules in a project, which can mean 100+
modules when counting recursively. Because it costs a lot of time, it's
not possible to rewrite everything, to contribute the necessary test
coverage to modules, or to do careful security reviews of all used
modules (and updates!).
However, it's possible to apply POLA (Principle Of Least Authority),
that is, give each module the information and capabilities it needs to
do its job and no more. If WeakMap.prototype.clear gets into the
language natively, it means *all* modules have an irrevocable right to
flush any weakmap I hand them.
It's the same sort of problem as if a free operator were brought to
JavaScript (in advance, I agree that .clear is more acceptable), as
suggested once [1] (almost ironically, by Node's lead?). Suddenly, a
module could free objects you hand to it. A module thinks it's
freeing one of its own objects but actually frees one of yours because
of a bug? Too bad, you'll be throwing a TypeError very soon, and who
knows in which state that will leave your application.
Dave Herman made an equivalent case about coroutines [2]. It provides 
abusive authority: you call a module-imported function and for whatever 
good or bad reason, it can suddenly stop the stack. It makes your code 
harder to reason about because when you wrote the code, you probably 
expected the function call to return (or throw).


Back to weakmaps, the issue here is not technical but... cultural, I
would say. I can decide to encapsulate my weakmap in a
WeakMapWithoutClear, but doing so, I cut myself off from modules which take
.clear for granted. A module relies on WeakMap clearability? It will
hand me the weakmap I'm supposed to use, and I know in advance anything
can happen, because I didn't create this object.
If weakmaps don't have a clear, modules using language weakmaps won't
take it for granted, and you can be fearless about sharing your
weakmap... very much like you can hand objects around today without the
fear of them being free'd by mistake or malice via a free operator.


Since I'm on the topic of language-based abusive authority, I've come
across a case where a logging library would throw in my face [3] any time
it had to log an object with a cycle in it [4]. Logging
libraries are really not the ones you'd expect to throw an error, so we
didn't wrap the call in a try-catch. So logging some objects would make
the application crash.
I have come to think that error-throwing with stack-unwinding is an
abusive authority, especially given that try-catch is an opt-in.
I understand now that silently swallowing errors is not a good idea, and
I understand the need to report errors through a different channel than
the return value, but I don't think stack-unwinding is a good default for
that. Promises show an interesting model; one to keep in mind for another
language, maybe.


The point I tried to make here is that POLA allows building applications
using untrusted modules without fearing them, and that's really an
excellent property, because we constantly use huge amounts of code we don't
necessarily trust or have the time to review or test beyond what we
test during the development period.


WeakMap.prototype.clear is at a different scale of danger and abusive 
authority 

Re: WeakMap.prototype.clear performance

2013-01-22 Thread David Bruant

Thanks a lot for these explanations! (Answer below)

Le 22/01/2013 22:46, Jason Orendorff a écrit :



Having said all that, I bet we could hack around the worst-case
GC performance. It'll be a pain, but GC is like that sometimes.

What you said above about the current GC setup that yields
equivalence performance to .clear is interesting. In a nutshell,
moving to a (naive?) generational GC means that you're losing
something you had before. I feel there is a middleground to be found.


What you're losing when you switch to a generational GC is precision. 
The tradeoff is: you do a lot less GC work, but you collect objects 
that survive a generation much less aggressively.


What about the following:
WeakMaps are allocated in their own area which is manually GC'ed
with today's algorithm (which is probably implemented for the last
generation?). This way, you'll know as soon as possible (next GC)
if one is dead.


I don't understand. How do you know if one is dead, short of marking 
the entire heap?
I understand the problem now. An old object may hold a reference to a
weakmap, and you can't know until the entire heap has been marked, which
happens less often.
Specifically, in Mark's pattern, the WeakMapWithClear object outlives
the weakmaps it encapsulates, and until you've found out that this object
is dead, the formerly encapsulated weakmap can't be declared dead.


So, to find out if a weakmap is dead, it has to come from a source other
than the mark-and-sweep algorithm (since it has lost its precision)...
Given the additional prohibitive cost weakmaps seem to have on the GC, 
maybe things that would otherwise be considered too costly could make 
sense to be applied specifically to WeakMaps. For instance, would the 
cost of reference-counting only weakmaps be worth the benefit from 
knowing early that the weakmap is dead? (I have no idea how much each 
costs, so it's hard for me to compare the costs)
For WeakMapWithClear, reference counting would declare the weakmap dead 
as soon as the new weakmap is assigned to the private property so that's 
good. It wouldn't work if some weakmaps are part of a cycle of course... 
but maybe that it's such an edge case that it's acceptable to ask users 
doing that to break their weakmaps cycle manually if they don't want the 
GC not to be too mad at them.
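A sketch of that cycle case: each weakmap is kept alive as a value in 
the other, so neither reference count ever drops to zero, even once 
both are otherwise unreachable.

var k1 = {}, k2 = {}; // keys that stay alive elsewhere
var wm1 = new WeakMap();
var wm2 = new WeakMap();
wm1.set(k1, wm2); // wm1's entry keeps wm2 alive while k1 lives
wm2.set(k2, wm1); // wm2's entry keeps wm1 alive while k2 lives: a cycle
wm1 = wm2 = null;
// reference counting alone can never reclaim either map now; breaking
// the cycle manually means e.g. calling wm1.delete(k1) before dropping
// the last direct reference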


Thanks again for the clarifications!

David
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Security Demands Simplicity (was: Private Slots)

2013-01-21 Thread David Bruant

Le 21/01/2013 03:35, Allen Wirfs-Brock a écrit :


On Jan 20, 2013, at 5:42 PM, David Bruant wrote:


Le 20/01/2013 23:05, Allen Wirfs-Brock a écrit :



On Jan 20, 2013, at 11:12 AM, David Bruant wrote:
"complicated" was an expression. Either proxies don't work with class 
instances, making them largely pointless, or classes need to publicize 
their private symbols (or maybe expose something like 
myClass.acceptProxy, which is marginally better), thus ruining their 
own encapsulation.
Actually this whole discussion makes me question the validity of the 
current Proxy design rather than that of private Symbol.  I may be 
on the road towards getting on the NotificationProxy train.
If there is time to make that big of a change, Mark's idea of action 
proxies could be considered too. I've only expressed reluctance on the 
list because they make it possible to do weird things when badly used, 
but for all the use cases I've had, they would be fine. Tom expressed 
reluctance regarding the cost of action proxies, but I'm not entirely 
sure it's founded.
Although notification and action proxies are good for getting rid of 
the invariant-checking cost, I'm not entirely sure they can help reduce 
the complexity when it comes to private symbols.




(...)

This suggests a possible generalized solution to the Proxy/private 
symbol exposure problem:


The [[Get]] and [[Set]]  (and probably some others) internal methods 
of a proxy never call the corresponding trap when the property key 
is a private Symbol.  Instead, they trace the [[Target]] chain of 
the proxy until a non-proxy object is reached (call this the 
ultimate target).  It then invokes the ultimate target's 
[[Get]]/[[Set]] using that same private Symbol key.  The result of 
that operation is then returned as the value of the original 
[[Get]]/[[Set]].


The private state access is applied to the correct object and 
there is no exposure of the private symbol!
It can work for built-in private state (and could work for private 
class syntax too), but not for user-generated or obtained private 
symbols:
Let's say 2 untrusted parties are in 2 membranes. They share a 
private symbol and each has access to a proxy wrapping a common 
target. With the private symbols semantics you're describing, these 2 
untrusted parties have an unmediated communication channel.
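A sketch of the channel, with ordinary symbols standing in for private 
ones (which don't exist) and an explicitly forwarding handler standing 
in for the proposed always-forward semantics that every membrane proxy 
would be forced into:

var secret = Symbol("shared"); // stands in for the private symbol
var target = {};               // the common target behind both membranes

var forwarding = {
  // mimics the proposal: access goes straight to the target,
  // bypassing whatever mediation the membrane would want to do
  get: function (t, key) { return Reflect.get(t, key); },
  set: function (t, key, value) { return Reflect.set(t, key, value); }
};

var proxyForA = new Proxy(target, forwarding);
var proxyForB = new Proxy(target, forwarding);

proxyForA[secret] = "message from A";
console.log(proxyForB[secret]); // "message from A": an unmediated channel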


How did they originally come to share the private symbol?  Don't they 
have to have some common point of origin with visibility of the symbol, 
or have both been provided the private symbol by some third party?
No, they shared the symbol through mediated communication. As Tom has 
argued multiple times, while symbols are objects, they should be 
considered as primitive values that can't be wrapped by proxies.
So the 2 parties, while using the mediated communication channel, came 
to share a private symbol. They didn't need a third party for that.


In either case, couldn't they have closure-captured references to each 
other and use those references to communicate directly?
No, you created each of these contexts separately, and you are the only 
entity with access to both; for whatever good reason of yours, you 
initially make them share a single object through which their mediated 
communication starts. A setup like the one Mark describes much better 
than I do [1].
For instance, you're a webpage, each untrusted party is a widget, and 
you have an event mechanism through which you allow widgets to 
communicate for some time, for whatever good reason.


An unmediated communication channel defeats the purpose of having put 
the 2 untrusted parties in membranes in the first place.
The semantics of user-generated or obtained symbols has to go through 
proxy mediation because of this use case, hence the whitelist and the 
unknownPrivateSymbol trap in the current proxy design.


This really makes me question even more the viability of Proxy-based 
membranes (direct proxies, at least) as an isolation mechanism. 
Independent of private Symbols, it isn't clear that it is a practical 
approach.
I wonder how you arrive at such a question. It is a practical approach, 
assuming proxies can properly mediate communication.


Also, I think some of the issues discussed in the thread 
https://mail.mozilla.org/pipermail/es-discuss/2012-December/027246.html have 
bearing on this discussion.  It is probably worth taking a second look.
I think too that this thread revealed unclear stratification properties 
in built-in algorithms, but I'm not following how it relates to this 
discussion.


David

[1] 
http://www.youtube.com/watch?v=w9hHHvhZ_HY&feature=player_detailpage#t=2574s 

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss

