Re: Function constants for Identity and No-op

2016-08-10 Thread Kevin Reid
[no quotes because I'm not replying to anyone in particular]

An advantage that has not been mentioned yet, of having a canonical
function instance for particular behaviors, is that it allows for some
library-level optimization by being able to know what a function does
(which is otherwise opaque). For a simple example:

SomeKindOfImmutableCollection.prototype.map = function (f) {
  if (f === Function.IDENTITY) {
    return this;
  }
  ...build a new collection with f applied to elements and return it...
};

Of course, it would be silly to write coll.map(Function.IDENTITY), but a
caller might be passing the function from somewhere else.
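
For instance, the identity might arrive indirectly, as a defaulted argument
(a hypothetical caller, using the proposed constant):

function transformAll(coll, transform) {
  // When no transform is supplied, pass the canonical identity so that the
  // map() above can recognize it and return the collection unchanged.
  return coll.map(transform || Function.IDENTITY);
}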

(I only intend to point out this one benefit, not to claim it justifies the
feature entirely.)


Re: Exponentiation operator precedence

2015-09-27 Thread Kevin Reid
On Thu, Sep 24, 2015 at 8:14 AM, Mark S. Miller  wrote:

> I like #4. Normally in a situation like this I would still argue for #1.
> #4 is a complicated special case that breaks the normal pattern of operator
> precedence elsewhere in the language. The need for ** is not great enough
> to justify introducing a new special case for users to learn.
>
> However, in this case, #4 is only technically complicated -- for those
> writing or reading spec docs like us. For normal users, the only complexity
> is a rarely encountered surprising static error. With a decent (and easy to
> generate) error message, these users will immediately know what they need
> to do to repair their program.
>
> Significant programs are read much more than they are written. Both #2 and
> #3 will lead many readers to misread programs. For programs that are not
> rejected, #4 is no more confusing than #1. Altogether, for readers, #4 is
> better than #1 because ** is more readable than Pow.
>

MarkM, I'm surprised you didn't also mention that there is precedent for #4
from your own E.

The E language chose to place math operations as methods on numbers, rather
than on any static "Math" object, and does not have an exponentiation
operator. In order to avoid precedence surprises of the category we're
discussing, E statically rejects the combination of a unary prefix
(negation) and unary postfix (method call) operator.

-(2).pow(2)   # "ought to be" -4, is a syntax error
-(2).max(2)   # "ought to be" 2, is a syntax error

(The parentheses around the number are not actually required in E, but I
have included them for the sake of comparison to JS despite the lexical
rejection of "1.foo" in JS.)

JavaScript already syntactically accepts the above programs (parsing "-" as
lower precedence than ".foo()"), but #4 is in the same spirit of rejecting
cases where there are conflicting or unclear precedents for operator
precedence.
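
In concrete terms, option #4 reads like this (a sketch of the proposed
behavior, not spec text):

let x = 2;
// -x ** 2   // static SyntaxError under #4: the writer must parenthesize
(-x) ** 2    // 4
-(x ** 2)    // -4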


Re: Math.log2 applied to powers of 2

2014-09-18 Thread Kevin Reid
On Thu, Sep 18, 2014 at 10:56 AM, Tab Atkins Jr. jackalm...@gmail.com
wrote:

 On Thu, Sep 18, 2014 at 3:17 AM, Claude Pache claude.pa...@gmail.com
 wrote:
  Question: Should Math.log2 give exact results for powers of 2?
 
  The same issue holds for Math.log10 (might be applicable for nonnegative
  powers only): Math.log10(1e15) != 15 in Chrome.

 I have no idea of the computation complexity underlying a log
 implementation, so given that: yes, it should totally give exact
 results for powers of 2, and log10 should do the same for (positive)
 powers of 10.  (Negative powers of 10 can't actually be represented by
 a JS number, so there's no need to talk about them.)


It would also be useful, though perhaps not feasible, if they are
guaranteed to be monotonic everywhere and strictly monotonic near those
exact values; that is,

for all x < 2^k, log2(x) < k
for all x > 2^k, log2(x) > k

and similarly for log10. If this property held, then naïve "number of
digits" tests expressed using logarithms would always give the right
answers. (This probably conflicts with generally desirable rounding
properties, however.)
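
For example, a sketch of the kind of naïve test I mean:

function digitCount(n) {
  // Decimal digits of a positive integer, written the naïve way.
  return Math.floor(Math.log10(n)) + 1;
}
// If Math.log10(1e15) comes out just below 15 (cf. the Chrome observation
// above), digitCount(1e15) yields 15 rather than the correct 16.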


Re: A way of explicitly reporting exceptions

2014-06-23 Thread Kevin Reid
On Mon, Jun 23, 2014 at 12:08 PM, Tab Atkins Jr. jackalm...@gmail.com
wrote:

 On Mon, Jun 23, 2014 at 11:54 AM, Boris Zbarsky bzbar...@mit.edu wrote:
for (listener of listeners) {
  try {
listener();
  } catch (e) {
// Now what?
  }
}

 Can't you just pass e into a setTimeout()'d callback, and rethrow it
 from there?  Does that mess with the stack or something?


Yes, but setTimeout may be less prompt than you want depending on the
application (though another possibility is to use promises to queue it).
You might also have an application-specific reason to do something after
all the listeners (some kind of buffer flush).
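
(For reference, the setTimeout variant being discussed is roughly the
following sketch; it reports the error without aborting the loop, just
possibly later than one would like.)

for (listener of listeners) {
  try {
    listener();
  } catch (e) {
    // Rethrow from a fresh task so the error still reaches window.onerror /
    // the console, while the remaining listeners run.
    setTimeout(function () { throw e; }, 0);
  }
}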

However, I'd like to propose a different facility: instead of "catch and
then report", have "try and stop propagation but don't catch". This has a
nifty advantage in debuggability: you can declare that a debugger's "stop
on uncaught exception" should stop on such errors _before the stack is
unwound_. This makes it much easier to debug errors in listeners, because
you don't have to step through all other caught exceptions in order to stop
on that exception.

This can be done without a new language construct by making it a primitive
function:

callAndRedirectErrors(function () {
  listener();
});

which is equivalent to, in the absence of a debugger,

try {
  listener();
} catch (e) {
  // log e to console, etc.
}

In the presence of a debugger, it has the special behavior that any errors
which _would_ be caught by that catch block _instead_ are stopped on if
uncaught exceptions would be — that is, before the stack is unwound.
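
With that, the dispatch loop from the example above becomes simply (a
sketch; callAndRedirectErrors is the proposed primitive, not an existing
function):

for (listener of listeners) {
  callAndRedirectErrors(function () {
    listener();
  });
}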


Re: A way of explicitly reporting exceptions

2014-06-23 Thread Kevin Reid
On Mon, Jun 23, 2014 at 12:44 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 6/23/14, 3:35 PM, Kevin Reid wrote:

 Yes, but setTimeout may be less prompt than you want depending on the
 application


 Note that at least in some browsers window.onerror is called off an event
 loop task anyway.


Clarification: I meant how promptly the listener is invoked (independent of
the error case).


  This has a nifty advantage in debuggability: you can declare that a
 debugger's stop on uncaught exception should stop on such errors
 _before the stack is unwound_.


 Note that such a facility would still fail in cases when a catch examines
 and then rethrows an exception,


Yes, it would stop at the rethrow rather than the original throw. Doing
more than that is hard.


 and in fact allows observably detecting whether an exception is caught and
 rethrown or just not caught.


I did not intend it to do so. Could you explain?

(You can notice rethrows and things by inspecting the stack trace, but I
assume that's not what you meant.)


Re: Specification styles

2014-02-05 Thread Kevin Reid
On Tue, Feb 4, 2014 at 9:21 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/4/14 12:08 PM, Kevin Reid wrote:

 Could not this be done while matching the above principle as follows?

 proxyForB, or more precisely the proxy for the function object
 (windowB.postMessage), does not actually invoke windowB.postMessage
 itself but a corresponding post message from origin A function.


 That's actually pretty complicated.  Now you have two not-object-identical
 representations of windowB.postMessage in the scope of windowB, no?  The
 current membrane in SpiderMonkey has a single per-global representation of
 each object, for sanity's sake.  As a simple example, consider this code in
 window A:

   proxyForB.setTimeout(proxyForB.postMessage, 0, 5, "*");

 What function object should the setTimeout implementation see?  What
 should happen when the timeout fires?


You point out that this is indeed more complicated to get “right” than I
had realized. I think it could still be done but things like setTimeout
would have to be proxied in the same way.

This arguably shows that the legacy policy is even more of a bad idea,
though, because it breaks a property kind of like TCP (I don't know a name
for it offhand, but it's related to the confused deputy). For example,
suppose that in windowB we previously evaluated
function later(f, ...as) { setTimeout(function() { f(...as); }, 0); }
and then in windowA we do
proxyForB.later(proxyForB.postMessage, 5, "*");
then if I understand your description correctly, this would perform a
postMessage from B's origin. But why should it, just because there's an
intervening user-defined HOF?

In particular, what if later instead is something that is built-in, _not_
defined by the DOM, but has the effect of calling a function passed in?
Whether or not one were to implement the mechanics in the way I propose,
there needs to be a well-defined and preferably sensible decision about
what happens upon something like:

proxyForB.Array.prototype.forEach.call([5], proxyForB.postMessage);


Re: Specification styles

2014-02-04 Thread Kevin Reid
On Mon, Feb 3, 2014 at 4:56 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/3/14 7:46 PM, Kevin Reid wrote:

 It is an extremely bad idea to have the consequences of a function call
 depend on properties of the caller rather than of the function and the
 arguments.


 I agree, but the postMessage API is what it is


[...]


  I think I heard that Firefox already handles this via a membrane
 ('otherWindow' is not actually the same as that window's 'window'
 object, but you can't tell because they're substituted as needed when
 passed across origins); can someone confirm this?


 Firefox has a membrane here, but the call actually pierces the membrane in
 this case, so when Firefox needs to find out who the caller was it
 actually just examines the callstack (modulo the cases when called with no
 script on the stack, etc).

 Specifically, in Firefox if code in window A has a reference to window B,
 then it actually has a reference to a proxy.  If it then does
 proxyForB.postMessage that returns a proxy for the actual
 windowB.postMessage method.  When [[Call]] happens on this proxy, it checks
 that you're allowed to make the call at all, then invokes
 windowB.postMessage, passing to it proxies to the arguments it was called
 with.

 But now the windowB.postMessage method would like to determine who called
 it... and that information is no longer available from itself, since it
 itself lives in window B.


Could not this be done while matching the above principle as follows?

proxyForB, or more precisely the proxy for the function object
(windowB.postMessage), does not actually invoke windowB.postMessage itself
but a corresponding post message from origin A function.

Or if the distinction should be made using 'this'-values rather than
function values, make it by the function proxy recognizing this=proxyForB
and not unwrapping it but performing the cross-origin post.

Either should be observably equivalent to the legacy behavior while not
introducing execution-context dependence.
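
A rough sketch of the second variant in direct-proxy terms (this is my
proposal, not a description of what Firefox does; crossOriginPost and
originA are hypothetical stand-ins for the engine's internals):

var postMessageProxy = new Proxy(windowB.postMessage, {
  apply: function (target, thisArg, args) {
    if (thisArg === proxyForB) {
      // Recognized the wrapped window: perform the cross-origin post on
      // behalf of origin A rather than unwrapping and calling directly.
      return crossOriginPost(windowB, originA, args);
    }
    return Reflect.apply(target, thisArg, args);
  }
});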


Re: Specification styles

2014-02-03 Thread Kevin Reid
On Mon, Feb 3, 2014 at 4:16 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/3/14 6:45 PM, Allen Wirfs-Brock wrote:

 In ES6 all functions, including built-ins are permanently associated with
 a Realm when they are created.


 Yes.  That plus every global having an origin gives us the concept of
 origin of a function, which is well defined.  And that's useful and used
 for things.  But it doesn't match what postMessage needs to do, because if
 my code does:

   otherWindow.postMessage(args)

 then the origin of the message is my code, not otherWindow...


It is an extremely bad idea to have the consequences of a function call
depend on properties of the caller rather than of the function and the
arguments. (Hence the removal of .caller and .callee in strict mode.) Even
if it proves necessary for legacy compatibility, no such behavior should be
specified in new systems.

I think I heard that Firefox already handles this via a membrane
('otherWindow' is not actually the same as that window's 'window' object,
but you can't tell because they're substituted as needed when passed across
origins); can someone confirm this? This removes any dependency on the
calling context — in particular, there is no need to have origin
information in contexts/stack frames rather than objects (which naturally
belong to a particular realm and hence origin).


Re: Standard modules?

2014-01-20 Thread Kevin Reid
On Sun, Jan 19, 2014 at 7:21 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 It isn't clear that there's much need for a global name for
 GeneratorFunction.  If you really need to access it you can always get it via:

(function *() {}).constructor

 (as the always helpful generator UML diagram at
 http://people.mozilla.org/~jorendorff/es6-draft.html#sec-generatorfunction-objects
 tells us)


SES needs to visit every 'primordial' / 'singleton' object to ensure
they're made immutable and harmless. (Other 'meta' code might also benefit
though I don't know of any examples offhand.)

This job is easier if all such objects are reachable via traversing data
properties.

ES5 contains only one object which this is not true of: [[ThrowTypeError]].
This would have been fine since [[ThrowTypeError]] as specified is
immutable and harmless, but in practice many implementations have bugs or
extensions which make it mutable. We had to add a special case for it to
ensure that it was traversed.
https://code.google.com/p/google-caja/issues/detail?id=1661
https://codereview.appspot.com/8093043/diff/19001/src/com/google/caja/ses/repairES5.js
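
Concretely, the special case amounts to reaching [[ThrowTypeError]] by
evaluating code rather than by walking data properties; in ES5 terms the
route is (a sketch, not the exact repairES5.js code):

// A strict function's own 'caller' and 'arguments' properties are accessors
// whose getter and setter are the shared [[ThrowTypeError]] function, so it
// can be reached (and then verified and frozen) like this:
var throwTypeError =
    Object.getOwnPropertyDescriptor(function () { 'use strict'; }, 'caller').get;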

It would be nice if there was some way in ES6 to make sure SES doesn't miss
any objects — either that every primordial object is reachable via data
properties (more precisely: that there are no preexisting objects which are
reachable only by way of executing some program construct; e.g. Array is
reachable by [].constructor, but is also named Array in the standard
environment), or there is some other way to enumerate them.


Re: Rename Number.prototype.clz to Math.clz

2014-01-16 Thread Kevin Reid
On Thu, Jan 16, 2014 at 1:12 PM, Jens Nockert j...@nockert.se wrote:

  On 2014/01/16, at 17:40, Jason Orendorff jason.orendo...@gmail.com
 wrote:
  Or maybe: flip the function around so that it returns the number of
  bits in the binary expansion of the value: Math.bitlen(15) === 4. This
  is just (32 - CLZ), so it effectively computes the same thing as clz.
  The advantage is that it extends naturally to integers of any size.

 What is Math.bitlen(-1) then? Isn’t this just the same problem as before,
 except it happens for negative numbers instead of positive?


FWIW: Common Lisp has rigorously transparent (that is, you cannot observe
the machine word size) bigints and quite a few binary operations defined on
them, so it's where I personally would look for precedent on such
questions. It doesn't have clz or bitlen per se, but it has these two
functions which contain positions on the issue:


integer-length
http://www.lispworks.com/documentation/HyperSpec/Body/f_intege.htm
 Returns the number of bits needed to represent 'integer' in binary
two's-complement format.
[Comment: This is equivalent to bitlen + 1 in order to count the sign bit,
and is well-defined for negative numbers.]

logcount
http://www.lispworks.com/documentation/HyperSpec/Body/f_logcou.htm
Computes and returns the number of bits in the two's-complement binary
representation of 'integer' that are `on' or `set'. If 'integer' is
negative, the 0 bits are counted; otherwise, the 1 bits are counted.


(If I had guessed without actually reading the docs, though, I would have
had logcount rejecting negative numbers.)
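
Relating this back to the proposal at hand, a sketch for non-negative values
in the 32-bit range (spelling the 32-bit clz under discussion as Math.clz32):

function bitlen32(n) {
  // Number of bits in the binary expansion of a non-negative 32-bit n;
  // agrees with (integer-length n) for n >= 0, e.g. bitlen32(15) === 4.
  return 32 - Math.clz32(n);
}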


Re: Rename Number.prototype.clz to Math.clz

2014-01-16 Thread Kevin Reid
On Thu, Jan 16, 2014 at 1:58 PM, Brendan Eich bren...@mozilla.com wrote:

 Kevin Reid wrote:

 FWIW: Common Lisp has rigorously transparent (that is, you cannot observe
 the machine word size) bigints and quite a few binary operations defined on
 them, so it's where I personally would look for precedent on such questions.


 (a) we don't have a bignum type yet; (b) we want to JIT to concrete
 machine types where possible. (b) does not require clz32 vs. clz64 in my
 view, because of type inference or AOT type-checking (asm.js). But we don't
 want to require bignums.


Yes, but choices which work for bignum also work for "I am working on
32-bit (or 8-bit or whatever) values which happen to be stored in a larger
(53- or 64-bit) field, and the length of the larger field is irrelevant to
the task."


Re: Rename Number.prototype.clz to Math.clz

2014-01-16 Thread Kevin Reid
On Thu, Jan 16, 2014 at 1:56 PM, Mark S. Miller erig...@google.com wrote:

 Why is logcount called logcount? As the doc on integer-length makes
 clear, it has a strong relation to the log-base-2 of the number. logcount
 does not.


Common Lisp calls most *bitwise* functions of integers logsomething,
that's all.


Re: Array.prototype.slice web-compat issue?

2013-08-29 Thread Kevin Reid
On Wed, Aug 28, 2013 at 10:19 AM, Allen Wirfs-Brock
al...@wirfs-brock.com wrote:

 The problem is that in ES<6 slice always returned a new Array instance
 using the Array of the realm associated with the invoked slice function.
  In ES6 slice returns an object that is determined based upon the actual
 this value passed to slice.  In the default case like above, this will be
 a new Array instance using the Array of the realm associated with the
 this value.


!

This is a hazardous change for SES-style security. For example, I've just
taken a quick look at our (Caja) codebase and found a place where
Array.prototype.slice.call(foo) is used to obtain a “no funny business”
array (i.e. doesn't have side effects when you read it) and another where
it's used to obtain an array which must be in the caller's realm. These
would be easy enough to replace with a more explicit operation, but I
wanted to point out that this is not a harmless change.
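
The first pattern is roughly this (a sketch, not the actual Caja code):

function acceptValues(listLike) {
  // Snapshot into a genuine Array so that later reads cannot run getters or
  // be affected by a foreign realm's (possibly unfrozen) Array.prototype.
  var snapshot = Array.prototype.slice.call(listLike);
  // ... operate on snapshot ...
  return snapshot;
}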


Re: Array.prototype.slice web-compat issue?

2013-08-29 Thread Kevin Reid
On Thu, Aug 29, 2013 at 12:56 PM, Allen Wirfs-Brock
al...@wirfs-brock.com wrote:

 On Aug 29, 2013, at 10:51 AM, Kevin Reid wrote:

 This is a hazardous change for SES-style security. For example, I've just
 taken a quick look at our (Caja) codebase and found a place where
 Array.prototype.slice.call(foo) is used to obtain a “no funny business”
 array (i.e. doesn't have side effects when you read it) and another where
 it's used to obtain an array which must be in the caller's realm. These
 would be easy enough to replace with a more explicit operation, but I
 wanted to point out that this is not a harmless change.


 In the Array.prototype.slice.call(foo) use case what is foo? Is it known
 to be an Array?  Are you saying this is how you clone an Array?


Sorry, both are of that form, if I was unclear. When we want to simply
clone an existing array, belonging to a secured realm, I think we generally
use slice as a method, and there is no security property there.

Of the two cases I refer to, one is a function (the trademarking stamp())
which takes a list of objects as a parameter and needs to ensure that
successive stages of processing operate on exactly the same set of objects
and do not trigger any side effects in the list's implementation. Here,
realm is irrelevant but the list's implementation must be relied on, so in
practice we want an Array from stamp()'s own realm.

The other case is one where it is a cross-frame protocol and we
specifically want an object which belongs to 'our own' realm because its
prototypes are frozen and specially extended, whereas the calling realm's
prototypes notably are not frozen (it's outside of the shiny happy sandbox)
and therefore constitute a risk to least-authority programming which we
want to stop at the boundaries. (Note for MarkM: It's actually a little bit
more complicated than this, but the details are irrelevant to the
principle.)


 For your second use case, that sounds like it is contrary to what is
 implicitly assumed by ES5.  For ES5, every built-in is assumed to be
 associated with a realm when it is created and any references to built-ins
 by a built-in are assumed to use the same realm as the referencing built-in.
  So something like:
var newArray = slice.call( [ ] );


Sorry, when I said the caller I meant *this particular* caller, the
function in our codebase which contains Array.prototype.slice in its
source text and therefore does call the slice belonging to its own realm. I
apologize for the particularly misleading phrasing.


Re: Array.prototype.slice web-compat issue?

2013-08-29 Thread Kevin Reid
On Thu, Aug 29, 2013 at 2:21 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 Of the two cases I refer to, one is a function (the trademarking stamp())
 which takes a list of objects as a parameter and needs to ensure that
 successive stages of processing operate on exactly the same set of objects
 and do not trigger any side effects in the list's implementation. Here,
 realm is irrelevant but the list's implementation must be relied on, so in
 practice we want an Array from stamp()'s own realm.

 The other case is one where it is a cross-frame protocol and we
 specifically want an object which belongs to 'our own' realm because its
 prototypes are frozen and specially extended, whereas the calling realm's
 prototypes notably are not frozen (it's outside of the shiny happy sandbox)
 and therefore constitute a risk to least-authority programming which we
 want to stop at the boundaries. (Note for MarkM: It's actually a little bit
 more complicated than this, but the details are irrelevant to the
 principle.)

 for both cases, are you using Array.isArray to determine that you are
 operating upon an array?


In both cases, all we want is the user-specified set of values, to store or
operate on. So, in accordance with JavaScript idiom, we expect it to be
array-like, but (since we are multi-frame code) not necessarily an Array
from this frame.

Given this, there is no particular reason to perform an isArray test,
unless we wanted to do type checks for linting purposes (you passed a Foo,
not an array of Foos; you probably made a mistake), and we don't.

what would be the appropriate thing to happen (all things considered) in a
 world where subclasses of Array exist?


I don't have any examples to work from; I would think there is value in *
permitting* them to be used to carry the intended set-of-values if the code
calling our code uses them, but I do not see how any subclass could have a
custom behavior which would be *appropriate or useful* to preserve rather
than discarding in these two cases, or any other case where the array being
passed is used in 'functional' fashion (immediately reading as opposed to
either retaining it to look at later or mutating it).

(I admit I favor composition/delegation over inheritance, for public
interfaces, and therefore dislike the notion of working with subclasses of
concrete built-ins. But one could also consider, for example, the reasons
why java.lang.String is final.)


Re: Interface prototype objects and ES6 @@toStringTag

2013-05-13 Thread Kevin Reid
On Mon, May 13, 2013 at 2:01 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 On May 13, 2013, at 1:50 PM, Erik Arvidsson wrote:

 The way that WebIDL require Object.prototype.toString to return [object
 TypePrototype] for the interface prototype object and [object Type] for
 the instances seems to imply that every instance needs to have an own
 @@toStringTag.

 http://people.mozilla.org/~jorendorff/es6-draft.html#sec-15.2.4.2
 http://dev.w3.org/2006/webapi/WebIDL/#es-environment

 If an instance does not have its own @@toStringTag,
 Object.prototype.toString will read through to the [[Prototype]] which
 would return the wrong string.

 Well, toString just does a [[Get]] for @@toStringTag.  You are perfectly
 free to implement it as a get accessor that takes into account whether the
 this value is an instance or a prototype object. Not sure whether the
 complexity is really worth it in most cases. I considered building
 something like that into Object.prototype.toString but it seemed hard to
 justify and there was no (ES) legacy reason for doing so.

 The preferred way to over-ride toString should be via a toString method,
 not via @@toStringTag.


FWIW, oddball implementor's experience:

In Caja's emulated DOM type hierarchy, Node.prototype.toString is an
ordinary method which searches the prototype chain for the appropriate
type-name, including distinguishing prototypes. This seems to work fine
insofar as it gives the answers I want it to give and nobody's ever
complained, and I think it's identical in abilities and effects to your
proposal of an @@toStringTag accessor.
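
For comparison, the accessor Allen describes would look roughly like this (a
sketch, using the Symbol.toStringTag spelling for @@toStringTag; SomeType
stands in for an interface constructor):

function SomeType() {}  // stand-in for a WebIDL interface constructor

Object.defineProperty(SomeType.prototype, Symbol.toStringTag, {
  configurable: true,
  get: function () {
    // Distinguish the interface prototype object from instances without
    // giving every instance an own @@toStringTag property.
    return this === SomeType.prototype ? 'SomeTypePrototype' : 'SomeType';
  }
});

Object.prototype.toString.call(new SomeType());      // "[object SomeType]"
Object.prototype.toString.call(SomeType.prototype);  // "[object SomeTypePrototype]"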


Re: Most current Proxy-as-implemented test suite?

2013-05-02 Thread Kevin Reid
On Wed, May 1, 2013 at 11:42 PM, Tom Van Cutsem tomvc...@gmail.com wrote:

 2013/5/1 Kevin Reid kpr...@google.com

 What is the most current test suite available for this variant of
 proxies?  So far I have found
 http://hg.ecmascript.org/tests/harmony/, which seems to be a more
 recent version of what we are currently using, but has it been
 superseded by something else?


 The test suite you refer to is the first test suite I wrote for the
 original harmony:proxies. I used it mostly to test the early Mozilla
 implementation done by Andreas Gal. It wouldn't surprise me if Mozilla
 extended this test suite to test their implementation more thoroughly.
 Also, since these proxies are also in v8, perhaps Andreas Rossberg can
 point you to the test suite used to test v8 proxies.


FYI, I've gone with the version I linked above — it appears to be
sufficient for our immediate technical needs, and as David Bruant also
said, it is better not to spend too much effort on maintaining an
expected-to-be-obsolete API.


Most current Proxy-as-implemented test suite?

2013-05-01 Thread Kevin Reid
In Caja we have several uses for Proxies, some of which involve
reimplementing or modifying the Proxy API. We are currently following
the original harmony:proxies (rather than direct or notification
proxies) since that's what is available in browsers.

What is the most current test suite available for this variant of
proxies?  So far I have found
http://hg.ecmascript.org/tests/harmony/, which seems to be a more
recent version of what we are currently using, but has it been
superseded by something else?


Re: Most current Proxy-as-implemented test suite?

2013-05-01 Thread Kevin Reid
On Wed, May 1, 2013 at 2:17 PM, David Bruant bruan...@gmail.com wrote:

 On 01/05/2013 22:26, Kevin Reid wrote:

  In Caja we have several uses for Proxies, some of which involve
 reimplementing or modifying the Proxy API.

 Out of curiosity, how are you modifying it? for which use case?


Sorry, I misspoke. Not modifying the API, but patching/wrapping the Proxy
implementation to support other SES features.


 If anything, I would recommend to move away from the initial proxy design
 for Caja, because the harmony:proxies API is meant to never see light in
 the spec (and should probably be removed from Firefox).


I agree that this is what we should be doing. However, we currently have to
maintain ES5/3 (our emulation of ES5 on top of browsers that do not
implement ES5, or do not do so correctly) which includes an implementation
of Proxy. It's less work overall if we don't ditch harmony:proxies until we
can also ditch ES5/3.


 Tom Van Cutsem wrote a direct proxies shim that runs on top of current
 browser implementations [4]. If you want to move to direct proxies, it
 might be something to consider.


Indeed.


Re: Mutable Proto

2013-03-21 Thread Kevin Reid
Correction:

On Thu, Mar 21, 2013 at 2:16 PM, Kevin Reid kpr...@google.com wrote:

 Yes. SES requires 'with' as a means to hook into 'global' variable reads
 and writes; without it, it is impossible


without performing a parse and scope analysis of the code to be evaluated


 to emulate the semantics of browser global environments, such as in:


Re: Observability of NaN distinctions — is this a concern?

2013-03-20 Thread Kevin Reid
On Wed, Mar 20, 2013 at 1:57 PM, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 On Mar 20, 2013, at 1:42 PM, Kevin Reid wrote:

 That normalization on read is is my case 1 above — it is necessary _for
 that implementation_. A conformant implementation could use a different
 strategy which does not normalize on Float64 read, and this would be
 unobservable, so the spec should not bother to specify it.

 However, lack of normalization on Float64 write _is_ potentially
 observable (if the implementation does not normalize all NaNs from all
 sources). Therefore, I argue, the spec should specify that normalization
 happens on write; and it happens that an implementation can omit that as an
 explicit step, with no observable difference, if and only if its
 representation of NaN in JS values (from all possible sources, not just
 typed arrays) is normalized.


 The buffer contents may have come from an external source or the buffer
 may be accessible for writes by an agent that is not part of the ES
 implementation.  The only thing that the ES implementation has absolute
 control over are its own reads from a buffer and the values it propagates
 from those reads.


I don't think we're disagreeing about any facts or principles (everything
in your paragraph above is true), but you're thinking about implementation
strategies and I'm thinking about observable behavior.

This is the important point: normalization on write _or
observably-equivalent behavior_ is implicitly mandatory because otherwise 8.1.5
may fail to hold (standard ES code can use standard ES tools to distinguish
NaNs, as demonstrated by my test results — the behavior I found does not
contradict the spec, to my knowledge). Therefore, the spec should not claim
that it is optional.

_Incidentally_, I observe that normalization on read is not necessary
except as an implementation strategy. It may well be that all
implementations will find it expedient, but there is no need for the spec
to require it, since (as 8.1.5 specifically acknowledges) an implementation
may choose to let the NaN bits vary, as long as all operations on them
(which includes SetValueInBuffer by my above argument) treat them
identically.
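
For concreteness, the kind of observation at issue (a sketch; it assumes the
usual little-endian layout, and the particular NaN bit pattern is arbitrary):

var f64 = new Float64Array(1);
var u8 = new Uint8Array(f64.buffer);
u8[7] = 0x7f; u8[6] = 0xf4;  // store one particular, non-canonical NaN
var nanValue = f64[0];       // read it out as an ES NaN value
f64[0] = nanValue;           // write it back (SetValueInBuffer)
console.log(u8.join(','));   // whether the original bits survive the round
                             // trip is the observable difference at issue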


Where is it specified that new objects are empty, if it is?

2013-03-15 Thread Kevin Reid
I'm doing a little maintenance on SES. Chrome has recently added a new
odd behavior:

> var o = Object.create(null);
> Object.getOwnPropertyNames(o)
[]
> Object.getOwnPropertyDescriptor(o, '__proto__');
Object {value: null, writable: true, enumerable: false, configurable: false}

The two results are clearly non-conformant, in that gOPN and gOPD
should be consistent with each other. However, the problem that I'm
wanting to record accurately is the fact that Object.create(null) has
(however inconsistently) any properties at all (thus interfering with
table-like uses).

15.2.3.5 Object.create refers to 15.2.2.1 which specifies “a newly
created native ECMAScript object”. Where is the initial state of the
collection of properties of a “newly created” object specified? (8.6
defining the Object type doesn't say anything about the existence of
non-internal properties.)

(I recognize that this behavior may well be a deliberate variance to
reconcile __proto__ and ES5/ES6. This is not a complaint; this is a
request to consult spec-lawyers.)


Re: a future caller alternative ?

2013-03-11 Thread Kevin Reid
On Sat, Mar 9, 2013 at 10:13 AM, Andrea Giammarchi 
andrea.giammar...@gmail.com wrote:

 but then again, the list of problems is massive here if it's about
 trustiness.
 Object.prototype can be enriched or some method changed, parseInt and
 parseFloat can be redefined to change payment details and stuff,
 String.prototype.trim can be faked to read all written fields before these
 are sent if the app does the cleanup, Function.prototype.apply/call can be
 redefined too so access to unknown functions is granted again


Yes, taking care of all those things is necessary as well. ES5 provides us
the tools to do so: Object.freeze(). If you recursively freeze all standard
global objects then all of the issues you mention are handled. Secure
ECMAScript (SES), developed by Mark Miller, does this; it provides an
execution environment which _is_ secure (given a sufficiently conformant
ES5 implementation).


 ... and all we worry about is a caller able to access without changing a
 thing a possibly unknown and private scope that if executed all it could do
 is causing an error?


If by “cause an error” you mean throw an exception, that's not all it can
do. Here's a very simplified example, which I have tried to make only
moderately silly; imagine a calculator/spreadsheet web-app with
user-defined functions. In order to make security relevant, imagine that it
(a) has persistent user data (i.e. a script could modify it on the server
by XHR), and (b) allows sharing of prepared calculations with other people.

var input = document.getElementById('input');
var output = document.getElementById('output');
function execute(fun, filter) {
  output.innerHTML = filter(fun(input.value));
}
function basicFilter(value) {
  return (+value).toFixed(4);
}

execute(new Function("x", "return (+x) + 1"), basicFilter);

SES patches the Function constructor so that it cannot access the normal
global scope (e.g. document itself), so the string code here can only
perform computation and use safe constructors (e.g. Object, Array, Date).

However, if the user-written function could use .caller, then it could
invoke the caller (which is the execute() function) with an alternate
'filter' which returns arbitrary HTML, at which point it can take over the
page and therefore the user's session as well.
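
Spelled out, the attack would look something like this (a sketch, assuming
.caller were not censored; doSomethingBad stands for whatever the attacker
wants to run):

execute(new Function("x",
    "var evil = function (ignored) {" +
    "  return '<img src=x onerror=doSomethingBad()>';" +
    "};" +
    // Re-enter the caller (execute) with the malicious filter, then throw
    // so that the legitimate basicFilter never overwrites the injected
    // markup.
    "arguments.callee.caller(function () { return ''; }, evil);" +
    "throw new Error();"
), basicFilter);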

In summary:
• Defending code from other code is not just a theoretical possibility: SES
is a working implementation.
• Not prohibiting .caller is sufficient to defeat this defense.


Re: a future caller alternative ?

2013-03-11 Thread Kevin Reid
On Mon, Mar 11, 2013 at 12:56 PM, Brandon Benvie bben...@mozilla.com wrote:

  On 3/11/2013 12:41 PM, Kevin Reid wrote:

 Yes, taking care of all those things is necessary as well. ES5 provides us
 the tools to do so: Object.freeze(). If you recursively freeze all standard
 global objects then all of the issues you mention are handled. Secure
 ECMAScript (SES), developed by Mark Miller, does this; it provides an
 execution environment which _is_ secure (given a sufficiently conformant
 ES5 implementation).


 I would note, however, that it looks like, at least in browsers, freezing
 the window or even any single property on it will no longer be an option in
 the future. I believe the technique used by SES (correct me if I'm wrong)
 is more complex than simply freezing the window (though I believe
 it does freeze every property recursively from there).


Right. We construct an object which has the bindings specified by ES5
(Object, Array, parseInt, ...), but not the DOM (document, window,
HTMLElement...). Actual access to the DOM by untrusted code is done with
intermediation and is not part of SES per se. Trying to turn window itself
into a sandbox is a non-starter.


 Something like shadowing all whitelisted global names and preventing any
 kind of direct access to the window object at all. This requires some
 amount of source code sandboxing to accomplish.


The minimal solution is to (conservatively) find all (potential) free
variables in the source code and bind them, which we currently do using an
outer 'with' statement (in order to be able to intercept accesses using
accessor properties, for best emulation of legacy global-variable
semantics).
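
A minimal sketch of that arrangement (not SES's actual implementation;
tamedDocument is a hypothetical tamed replacement for the real document):

var scopeObject = {};
Object.defineProperty(scopeObject, 'document', {
  get: function () { return tamedDocument; },
  set: function (v) { throw new TypeError('cannot replace document'); }
});

// Sloppy-mode wrapper so that 'with' is permitted; the guest code itself
// runs as strict code inside it.
function evalGuest(guestSrc) {
  return Function('scopeObject',
      'with (scopeObject) { return (function () { "use strict";\n' +
      guestSrc + '\n}()); }')(scopeObject);
}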


Re: a future caller alternative ?

2013-03-08 Thread Kevin Reid
On Fri, Mar 8, 2013 at 2:13 PM, Kevin Gadd kevin.g...@gmail.com wrote:

 The Error.stack strawman is a great start at making Error.stack's
 contents machine-readable, but doesn't remotely approach being a
 solution for use cases previously addressed by Function.caller.

 I don't really understand the security argument in this case. Being
 able to identify the particular function at offset N in the stack
 shouldn't expose any privileged information


The problem is exposing the ability to invoke the function. Not
'privileged' information, but 'privileged' operations.


 If anything, being able to cheaply and reliably walk
 the stack to - for example - identify your caller would allow you to
 implement some interesting security patterns in your own code, if for
 some reason you were trying to do sandboxing and code access security
 in pure JS. If specified correctly you could make it possible to walk
 the stack and ensure that the information you're getting isn't being
 spoofed, which would allow you to reliably limit callers of a given
 'protected' function to a fixed whitelist of trusted functions,
 something you can't do by parsing a dead stack trace.


Java tried stack inspection. It has failed. It has been responsible for
quite a few vulnerabilities (of the sort which allow Java applets to break
their sandbox) and does not compose well.


 Apologies if I've missed some huge design philosophy underpinning the
 design of ES6/ES7 re: security/sandboxing; I also don't really
 know/understand how Caja fits into the picture.


References constitute permissions. To have a reference to a function is to
be able to invoke it is to have the permission.


Re: Questions/issues regarding generators

2013-03-07 Thread Kevin Reid
On Thu, Mar 7, 2013 at 8:39 AM, Andreas Rossberg rossb...@google.com wrote:

 On 7 March 2013 16:37, Andreas Rossberg rossb...@google.com wrote:
  But, in order to (hopefully) let Brandon calm down a bit, I am NOT making
  yet another proposal for a two-method protocol. Instead I propose
  simply _delivering_ a sentinel object as end-of-iteration marker
  instead of _throwing_ one.

 Forgot to mention one detail: under this approach, it should of course
 be a runtime error if yield is applied to a value that is a
 StopIteration object.


Use of a singleton (or not marked for the specific generator) sentinel
object has a hazard: the sentinel is then a magic value which cannot be
safely processed by library code written to operate on arbitrary values,
which happens to use generators in its implementation.

(ECMAScript already has moderately hazardous values, namely objects-as-maps
which do not implement the 'standard protocol' of Object.prototype methods,
but such a sentinel is more hazardous than that in that it is not even safe
to pass around without operating on it.)
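
To illustrate the hazard (a sketch; theSentinel stands for whatever singleton
end-of-iteration value were adopted):

// A generic utility that re-yields caller-supplied values. Under a
// singleton-sentinel protocol, the yield below either throws (per the
// proposed runtime error) or is misread as end-of-iteration whenever
// values[i] happens to be theSentinel, so the utility cannot safely handle
// arbitrary values.
function* passThrough(values) {
  for (var i = 0; i < values.length; i++) {
    yield values[i];
  }
}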


Re: Questions/issues regarding generators

2013-03-07 Thread Kevin Reid
On Thu, Mar 7, 2013 at 8:56 AM, Andreas Rossberg rossb...@google.com wrote:

 On 7 March 2013 17:50, Kevin Reid kpr...@google.com wrote:
  Use of a singleton (or not marked for the specific generator) sentinel
  object has a hazard: the sentinel is then a magic value which cannot be
  safely processed by library code written to operate on arbitrary values,
  which happens to use generators in its implementation.

 While that is true, it is conceptually no different from using a magic
 exception value, as under the current proposal. That clobbers use of
 that value for the exceptional return path in exactly the same way as
 the proposed alternative does for the regular return path. The only
 way to avoid both (in a single-function protocol) is the approach
 Claus mentioned.


There are conventional expectations about the exceptional return path —
interpretations independent of the specific code exiting via it — which
there are not about the normal return path.


Re: get/setIntegrity trap (Was: A case for removing the seal/freeze/isSealed/isFrozen traps)

2013-02-20 Thread Kevin Reid
On Wed, Feb 20, 2013 at 11:52 AM, Nathan Wall nathan.w...@live.com wrote:

 `Object.isFrozen` and `Object.isSealed` don't really seem that helpful to
 me for the very reasons you've discussed: They don't represent any real
 object state, so they don't accurately tell me what can be done with an
 object.  If I could I would argue in favor of their removal, though I know
 it's too late for that.

 I would be curious to see legitimate uses of `isFrozen` and `isSealed` in
 existing code if anyone has anything to offer.


I just took a look at uses of Object.isFrozen in Caja and I find that all
but one are either in tests (test that something is frozen) or in sanity
checks (if this isn't frozen, do not proceed further, or freeze it and
warn).

The remaining one is in a WeakMap abstraction used for trademarking: an
object cannot be given a trademark after it is frozen. (The rationale here,
while not written down, I assume is that a defensive object's “interface”
should not change, and it is an implementation detail that this particular
information is not stored in the object.) There is a comment there
suggesting we might strengthen this check to only permitting _extensible_
objects to be marked.
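
In outline, that check is simply (a sketch, not the actual Caja code):

var trademarkTable = new WeakMap();  // or an emulation thereof

function stamp(obj) {
  // Refuse to add a trademark once the object has been frozen, so that a
  // defensive object's observable "interface" cannot grow afterward.
  if (Object.isFrozen(obj)) {
    throw new TypeError('cannot stamp a frozen object');
  }
  trademarkTable.set(obj, true);
}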


Re: get/setIntegrity trap (Was: A case for removing the seal/freeze/isSealed/isFrozen traps)

2013-02-20 Thread Kevin Reid
On Wed, Feb 20, 2013 at 12:15 PM, David Bruant bruan...@gmail.com wrote:

 And in an ES6 world, you'll probably use an actual WeakMap anyway?


Using an actual WeakMap does not change matters: the intent is that after
Object.freeze(o), you can't add new trademarks to o. Since the trademark
info is not stored on the object but in the WeakMap (whether emulated or
actual), we have to add an explicit test.

If 'private properties' (in whatever form they come to ES6) were available
to us, then it would be natural to use them instead for this purpose (at
least, so it seems to me at the moment) and so we would not need a test
since non-extensibility would presumably reject the addition of a new
private property.


Re: [].push wrt properties along the [[Prototype]] chain

2013-02-19 Thread Kevin Reid
On Fri, Feb 15, 2013 at 5:30 PM, Jeff Walden jwalden...@mit.edu wrote:

 Consider:

   Object.defineProperty(Object.prototype, 0, { value: 17, writable:
 false, configurable: false });
   [].push(42);

 Per ES5, I think this is supposed to throw a TypeError.  The push should
 be setting property 0 with Throw = true, which means that when [[CanPut]]
 fails, a TypeError gets thrown.  No engine I can test does this, I suspect
 because everyone's mis-implemented an optimization.


FYI, this looks very similar to 
http://code.google.com/p/v8/issues/detail?id=2412, which is one of the
bugs which SES/Caja is concerned about: Array.prototype.push can mutate a
sealed (but not frozen) object.
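
For reference, the sealed-object case from that issue is (a sketch):

var a = Object.seal([1, 2]);
a.push(3);  // per spec this must throw a TypeError, since a is
            // non-extensible; the bug is that the sealed array is
            // mutated instead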


Re: Ducks, Rabbits, and Privacy

2013-01-23 Thread Kevin Reid
On Wed, Jan 23, 2013 at 8:59 AM, Mark S. Miller erig...@google.com wrote:

 Hi Kevin, thanks for pulling this code example out of the gist and posting
 separately. Looking at it only in context before, for some reason I hadn't
 realized how beautiful this is. To support this pattern, your makePrivate()
 could be defined either in terms of either private symbols or weakmaps,
 right?

 Given how concise and beautiful this is, even if this is defined in terms
 of private symbols, I agree this looks much better than the square bracket
 syntax for accessing private fields. It also looks good enough that the
 hypothetical ES7 syntactic support doesn't look much better -- perhaps not
 better enough to be worth adding more sugar. As you say, this will give us
 enough experience with a usable privacy syntax that we can make a more
 informed choice for ES7 when it comes to that. Thanks!


FYI, this is essentially identical to the 'Confidence' abstraction I
developed for domado.js in Caja. Perhaps the choice could be further
informed by looking at how it's worked out there.


Re: Ducks, Rabbits, and Privacy

2013-01-23 Thread Kevin Reid
On Wed, Jan 23, 2013 at 9:45 AM, Kevin Reid kpr...@google.com wrote:


 FYI, this is essentially identical to the 'Confidence' abstraction I
 developed for domado.js in Caja. Perhaps the choice could be further
 informed by looking at how it's worked out there.


Perhaps I should have included a link:
http://code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/plugin/domado.js?spec=svn5223r=5223#359

The idea is that 'Confidence' introduces a 'class with private fields' as
if in Java: each object which has a private state record is considered to
be an instance. The private record is used identically to Kevin Smith's
examples, but my analogue of getPrivate on a new object fails hard rather
than creating one — analogous to a ClassCastException.


Re: Ducks, Rabbits, and Privacy

2013-01-23 Thread Kevin Reid
On Wed, Jan 23, 2013 at 11:15 AM, Russell Leggett russell.legg...@gmail.com
 wrote:

 Perhaps I should have included a link:

 http://code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/plugin/domado.js?spec=svn5223r=5223#359

 The idea is that 'Confidence' introduces a 'class with private fields' as
 if in Java: each object which has a private state record is considered to
 be an instance. The private record is used identically to Kevin Smith's
 examples, but my analogue of getPrivate on a new object fails hard rather
 than creating one — analogous to a ClassCastException.


 I apologize for being lazy, but can you provide an example of this being used
 and not just the implementation?


All of the examples within Caja are rather hairy, so I'll translate the
above Purse-using-makePrivate example into the style which I would write it
using Confidence. Note that *this is code that runs now, under ES5*; if I
had left in the class syntax that was used in the original then it would
have been identical except for the different choices of names.

If you would like me to discuss an actual example from Caja instead, feel
free to ask.

var PurseConf = new Confidence('Purse');
var m = PurseConf.protectMethod;
var p = PurseConf.p;
function Purse() {
  PurseConf.confide(this);
  p(this).balance = 0;
}
Purse.prototype.getBalance = function() {
  return p(this).balance;
};
Purse.prototype.makePurse = function() { return new Purse; };
Purse.prototype.deposit = m(function(amount, srcPurse) {
  p(srcPurse).balance -= amount;
  p(this).balance += amount;
});

Note that the deposit method's protectMethod wrapper ensures that it will
not be invoked with a bogus 'this', which would otherwise allow srcPurse to
be drained to nowhere by crashing on the second line; this is not actually
useful here since purses may be unreferenced, but in other cases such a
precondition may be important.

Note that when both protectMethod and p are used, there is a redundant
WeakMap lookup. An alternate design would be for the protectMethod wrapper
to pass an additional argument to the wrapped function which is the
private-state record. This could be considered to have the advantage of
encouraging defining explicit operators on the private state (wrapped
functions) rather than just 'pulling it out of the object'.
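
That alternative would read roughly like this (hypothetical API, not
Confidence as it currently exists):

Purse.prototype.deposit = m(function (priv, amount, srcPurse) {
  // The wrapper looks up p(this) once and passes it in as priv, avoiding
  // the second WeakMap lookup.
  p(srcPurse).balance -= amount;
  priv.balance += amount;
});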

I am not arguing that something like this is the right abstraction for
private state in ES6; only, given that the idea has arisen independently, I
note that we have some prior experience with it, and it has turned out
mostly all right. However, ignoring efficiency of implementation on current
ES5, I would myself rather see a mechanism which did not have any
private-state-record object, but rather had a separate 'symbol' object
identifying each 'private property'; this has the advantage of ensuring the
smallest 'scope' of the access to private state. For example, when a
'class' has private state and it has a 'subclass' which has additional
private state, the private-state-record pattern encourages the subclass to
use the same record (if it is defined in the same place and so has access)
which, besides increasing the chance of name collisions, also means that
the subtype's methods do not automatically fail when applied to instances
of the supertype and so may have unintended consequences.


Re: New paper: Distributed Electronic Rights in JavaScript

2013-01-15 Thread Kevin Reid
On Tue, Jan 15, 2013 at 12:49 PM, David Bruant bruan...@gmail.com wrote:

  On 15/01/2013 19:19, Kevin Reid wrote:

  From a capability viewpoint, there are non-performance reasons to have
 the same pattern, even within a given event loop, namely resources which
 may be transferred with observably exclusive access (ownership); the
 pattern is to have an operation on the resource which generates a new
 reference to it (i.e. a new object, from the OO perspective) and makes the
 old one useless (revoked).

 Interesting. It reminds me that JavaScript does not have a notion of
 internal and external code with regard to an object. Anyone with access to
 the object can equivalently modify it (add, delete properties for instance).
 An object can't at the same time defend its integrity and modify its own
 shape (I guess it's possible with proxies now).

 I don't know why anyone would want to do that, but if you want to model a
 caterpillar becoming a butterfly, it's a bit complicated in JavaScript
 without proxies. The caterpillar has a crawl method and if we want the
 same object (same identity) to eventually become a butterfly with a fly
 method (but no crawl method), the object has to remain extensible and the
 crawl property has to be configurable.


Becoming useless is different from becoming a different interface; all that
is necessary is that the operations on the object are disabled. A simple
generic answer would be something like 'all getters return undefined, all
setters and methods throw'. Note that this is a change in the behavior of
functions and is therefore allowed to be entirely internal.

However, it is useful to have an application-specific notion of “useless”.
For a made-up-on-the-spot example, suppose the disabled object is some kind
of container; if it reports itself as being empty and only throws in
response to operations attempting to add new elements, then an application
may be able to avoid special-case code by letting the empty container be
handled as a trivial case rather than the caller of the transfer operation
having to make sure the container is removed from other places. Or consider
that objects for e.g. streams and open-files may be _closed_: anything that
designates an external resource often has some kind of well-defined useless
state.

Where have you seen the "create new object from old one" pattern used? It
 sounds interesting, but I can't think of where I'd use it.


Real examples are hard to find because (1) hardly anyone is writing ocap
code with detailed mutually-suspicious interactions, and (2) one generally
prefers to avoid having to manage exclusive access by not having a need for
exclusion (i.e. everyone has their own independent instance of whatever)!

Our canonical example of exclusive access is virtual money: money held in a
'purse' object may be exclusively transferred away by depositing it into a
new purse, leaving the old one with a balance of zero. (In this case, the
old object simply has a zero balance and may be reused.) Similarly, a
virtual board/card game could handle transfers between players of its
virtual 'physical' tokens this way.

If my examples are unconvincing, I leave it to MarkM to provide better ones
:)


Re: A DOM use case that can't be emulated with direct proxies

2012-12-13 Thread Kevin Reid
On Thu, Dec 13, 2012 at 11:47 AM, Jason Orendorff jason.orendo...@gmail.com
 wrote:

 This target, even if dummy, is the one that will be used for invariants
 checks. You can't get away from this by design. This is one of the most
 important part of the direct proxies design.
 Even if you switch of fake target, the engine will still perform checks
 on the dummy internal [[Target]].

 I feel we're cycling in what we say and I feel I can't find the right
 words to explain my point. One idea would be for you to implement a
 target-switching proxy based on direct proxies (Firefox has them natively
 or you can use Tom's shim [1]). I'm confident you'll understand my point
 through this exercise.


 David: https://gist.github.com/4279162

 I think this is what Kevin has in mind. Note in particular that the target
 of the Proxy is just a dummy object, and the handler ignores it entirely.
 The proxy uses it for invariant checks, but the intent is that those would
 always pass.


Yes, exactly. I was just this minute in the process of writing such a proxy
myself, and have not yet confirmed whether it is accepted by the invariant
checks for all the cases I'm thinking of (testing against FF 18.0).

Note that either
(1) all the switched-among targets need to have the same [[Prototype]],
(2) the proxy has to pretend that all inherited properties are actually own,
(3) or mutating [[Prototype]] (i.e. __proto__) needs to be possible.
In my particular use case, (1) is not a suitable option, so I would
implement (2) if (3) is not available. Not that I approve of (3), but one
does what one must to accomplish virtualization.
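
For reference, the shape of such a proxy (a sketch in the spirit of the gist,
showing only a few traps):

function retargetableProxy(initialTarget) {
  var currentTarget = initialTarget;
  var handler = {
    get: function (dummy, name) { return currentTarget[name]; },
    set: function (dummy, name, value) {
      currentTarget[name] = value;
      return true;
    },
    getOwnPropertyDescriptor: function (dummy, name) {
      var desc = Object.getOwnPropertyDescriptor(currentTarget, name);
      // Report everything as configurable so the proxy never commits its
      // dummy [[Target]] to an invariant a later target might not satisfy.
      if (desc) { desc.configurable = true; }
      return desc;
    }
  };
  return [new Proxy({}, handler),
          function setTarget(newTarget) { currentTarget = newTarget; }];
}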


Re: A DOM use case that can't be emulated with direct proxies

2012-12-13 Thread Kevin Reid
On Thu, Dec 13, 2012 at 2:58 PM, David Bruant bruan...@gmail.com wrote:

  On 13/12/2012 20:47, Jason Orendorff wrote:

 David: https://gist.github.com/4279162

 I think this is what Kevin has in mind. Note in particular that the target
 of the Proxy is just a dummy object, and the handler ignores it entirely.
 The proxy uses it for invariant checks, but the intent is that those would
 always pass.

 but they do not; try:

 var [p, setTarget] = retargetableProxy({}); // I love destructuring
 sooo much!
 Object.defineProperty(p, 'a', {configurable: false, value:31});


In my proposal, this would fail ("refuse to commit to any invariant", as I
put it above). The handler specifically refuses anything non-configurable,
and any non-writable data property.


 setTarget({});
 Object.getOwnPropertyDescriptor(p, 'a'); // invariant check throws here

 Any variant that can be written will have the same issue. Even trickeries
 with the defineProperty trap.
 The proxy is enforcing invariants against the dummy [[target]]. The same
 is to be expected from WindowProxy instances even if their underlying
 window changes. It doesn't matter if the invariant is enforced on the dummy
 target or on an actual window instance. It is enforced, and that's the
 problem (with WindowProxy instances, implemented as they are now, not being
 emulable with proxies).

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A DOM use case that can't be emulated with direct proxies

2012-12-12 Thread Kevin Reid
On Wed, Dec 12, 2012 at 11:19 AM, David Bruant bruan...@gmail.com wrote:

 A good question by Anne van Kesteren [1] followed by good remarks by Boris
 Zbarsky [2][3] made me try a little something [4][5].
 The WindowProxy object returned as the 'contentWindow' property of iframes
 never changes; whatever you do when changing the @src, always the same
 object is returned. However, based on whether the @src is changed, the
 WindowProxy proxies to a different Window instance.


I bumped into this myself just recently while attempting to implement
virtualized navigable iframes in Caja — I need to emulate exactly this
behavior.


 [...] I wish to point out that apparently iframe.contentWindow does break
 quite a lot of eternal invariants [7] which isn't really good news,
 because it questions their eternity.


 Indeed!


 Among alternatives I'm thinking of:
 * define a new type of proxies for which the target can be changed (either
 only as a spec device or as an actual object that can be instantiated in
 scripts)
 * change the behavior of WindowProxy instances when it comes to doing
 things that would commit them to eternal invariants to throw instead of
 forwarding. This solution may still be possible, because it's unlikely that
 Object.defineProperty is widely used in web content today. But this change
 should happen pretty fast before content relies on it.


The best option I see at the moment would be that a WindowProxy refuses to
commit, but a Window does. That is, code operating on 'window' within the
iframe can still Object.defineProperty, but from the outside every property
of Window appears to be configurable. This is what I have implemented in my
current draft.

On the other hand, it seems that in browsers either 'window' is also the
same (!) proxy, or === invariants are broken, or the WindowProxy is acting
as a membrane:

> f.contentWindow === f.contentWindow.window
true

This would seem to prohibit the distinction I propose.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A DOM use case that can't be emulated with direct proxies

2012-12-12 Thread Kevin Reid
On Wed, Dec 12, 2012 at 11:39 AM, David Bruant bruan...@gmail.com wrote:

  On 12/12/2012 at 20:29, Kevin Reid wrote:

   On Wed, Dec 12, 2012 at 11:19 AM, David Bruant bruan...@gmail.com wrote:

 The WindowProxy object returned as the 'contentWindow' property of
 iframes never changes; whatever you do when changing the @src, always the
 same object is returned. However, based on whether the @src is changed, the
 WindowProxy proxies to a different Window instance.


  I bumped into this myself just recently while attempting to implement
 virtualized navigable iframes in Caja — I need to emulate exactly this
 behavior.

 Do you have a pointer to the code for that, just out of curiosity?


I haven't made it public yet, but it's just the obvious implementation of
an (old-style, as implemented in Firefox/Chrome) proxy with a switchable
“target”.

  The best option I see at the moment would be that a WindowProxy refuses
 to commit, but a Window does. That is, code operating on 'window' within
 the iframe can still Object.defineProperty, but from the outside every
 property of Window appears to be configurable. This is what I have
 implemented in my current draft.

 Let's say that the window has a non-configurable, non-writable property,
 what happens to Object.getOwnPropertyDescriptor on the WindowProxy? Does it
 throw? (I would be fine with this behavior, but I'm just wondering)


It returns a descriptor which is identical except that it claims to be
configurable. Attempting to actually reconfigure it using defineProperty
would throw.
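
In trap terms, that part of the handler looks something like this (a sketch,
not the actual Domado code; `currentWindow` stands for whichever Window the
WindowProxy currently designates):

  var currentWindow = {}; // placeholder for the currently-navigated Window

  var windowProxy = new Proxy({}, {
    getOwnPropertyDescriptor: function (dummy, name) {
      var desc = Object.getOwnPropertyDescriptor(currentWindow, name);
      if (desc) { desc.configurable = true; } // never admit to an invariant
      return desc;
    },
    defineProperty: function (dummy, name, desc) {
      if (desc.configurable === false) {
        throw new TypeError('refusing to commit to an invariant');
      }
      // Forwarding lets the underlying Window itself reject an attempt to
      // redefine one of its actually non-configurable properties.
      Object.defineProperty(currentWindow, name, desc);
      return true;
    }
    // ...other traps forward to currentWindow in the same style...
  });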

 On the other hand, it seems that in browsers either 'window' is also
 the same (!) proxy, or === invariants are broken, or the WindowProxy is
 acting as a membrane:

   > f.contentWindow === f.contentWindow.window
  true

 I think it's a membrane. The HTML5 spec [1] makes pretty clear that the
 window property isn't a Window, but a WindowProxy.
 HTML5 experts will know better, but I think no one ever manipulates
 directly a Window instance, there is always a WindowProxy mediating the
 access. Of course, the implementation is free to optimize this mediation.


The disturbing thing about "window instanceof WindowProxy", if you will
(given that it accurately reports its mutability), is that since window is
the global environment, it means that the global environment cannot have
immutable things. Of course, SES actually establishes a new environment
(using 'with') for secured eval.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A DOM use case that can't be emulated with direct proxies

2012-12-12 Thread Kevin Reid
On Wed, Dec 12, 2012 at 12:03 PM, David Bruant bruan...@gmail.com wrote:

  On 12/12/2012 at 20:49, Kevin Reid wrote:

  I haven't made it public yet, but it's just the obvious implementation
 of an (old-style, as implemented in Firefox/Chrome) proxy with a switchable
 “target”.

 Interesting. As I said, target-switching won't be possible in direct
 proxies.


I understand that direct proxies have an internal “target” object. Will it
not be possible to simply never place any properties on said object (thus
not constraining future behavior) while still appearing to have properties?
This text suggests that is a possible and expected pattern:

Since this Proxy API requires one to pass an existing object as a target to
 wrap, it may seem that this API precludes the creation of fully “virtual”
 objects that are not represented by an existing JSObject. It’s easy to
 create such “virtual” proxies: just pass a fresh empty object as the target
 to Proxy and implement all the handler traps so that none of them
 defaults to forwarding, or otherwise touches the target.


In my case there is an actual object, of course, but I implement forwarding
to said object myself; the JS implementation never knows that I am
“treating it as a target”.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A DOM use case that can't be emulated with direct proxies

2012-12-12 Thread Kevin Reid
On Wed, Dec 12, 2012 at 12:35 PM, David Bruant bruan...@gmail.com wrote:

 I was a bit too strong in my statement, sorry. Let me rephrase: the
 internal [[Target]] can't be changed, but a proxy can emulate changing of
 fake target as long as what happens with this fake target doesn't
 involve invariant checking.
 That's the reason I was suggesting that WindowProxies could (maybe
 depending on how the object reference was obtained) throw whenever
 invariant checks are involved.


Exactly. So a user-defined switching proxy needs only to:
1. refuse to commit to any invariant (non-configurable property or
preventExtensions)
2. even if its switchable-target has an invariant, do not expose that
invariant (i.e. pretend each property is configurable)
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: A DOM use case that can't be emulated with direct proxies

2012-12-12 Thread Kevin Reid
On Wed, Dec 12, 2012 at 1:23 PM, David Bruant bruan...@gmail.com wrote:

  On 12/12/2012 at 21:42, Kevin Reid wrote:

  Exactly. So a user-defined switching proxy needs only to:
 1. refuse to commit to any invariant (non-configurable property or
 preventExtensions)
 2. even if its switchable-target has an invariant, do not expose that
 invariant (i.e. pretend each property is configurable)

  Pretending that something non-configurable actually is configurable is an
  invariant violation. To be more concrete:
 * There is an webpage with an iframe
 * The same window object is proxied by 2 WindowProxy instances. One
 outside the iframe, one inside.
 * Inside of the iframe, scripts can add a non-configurable property
 azerty to their global.
 * Outside the iframe, what happens when
 Object.getOwnPropertyDescriptor(iframeWindow, 'azerty') is called?
 You're suggesting that {configurable: true} is returned. The problem is
 that on the actual Window instance, there is a non-configurable property,
 so if the WindowProxy handler tries to do that, an error will be thrown
 because of invariant checks.


The JS runtime won't know that the proxy has anything to do with the actual
Window instance. The Proxy's formal target will be just {}; only the
handler interacts with the Window. This is the distinction I meant but did
not state clearly by saying “switchable-target” as opposed to proxy-target.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: possible excessive proxy invariants for Object.keys/etc??

2012-11-21 Thread Kevin Reid
On Wed, Nov 21, 2012 at 12:42 PM, Tom Van Cutsem tomvc...@gmail.com wrote:

 2012/11/21 Mark S. Miller erig...@google.com

 On Wed, Nov 21, 2012 at 8:55 AM, Allen Wirfs-Brock
 al...@wirfs-brock.com wrote:
   [...] Essentially we could internally turn the [[Extensible]] internal
  property into a four-state value: open, non-extensible, sealed, frozen.  [...]

 [...]
 If JS objects could be in only one of four states, things would be a lot
 simpler to reason about. That said, I don't see how we can get there
 without radically breaking with ES5's view of object invariants.


Why can't an implementation cache the knowledge that an object is frozen
(after Object.freeze or after a full pattern-check) exactly as if the
fourth state exists, and get the efficiency benefit (but not the
observability-of-the-test-on-a-proxy benefit) without changing the ES5
model?
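
In user-land terms, the cache I have in mind is just a memo of already-verified
objects (WeakSet here is a stand-in for whatever internal flag or table an
engine would actually use):

  var knownFrozen = new WeakSet();

  function isCertainlyFrozen(obj) {
    if (knownFrozen.has(obj)) { return true; } // the cached "fourth state"
    if (Object.isFrozen(obj)) {                // the full (linear) ES5 check
      knownFrozen.add(obj);                    // frozenness of ordinary objects never reverts
      return true;
    }
    return false;
  }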
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: new function syntax and poison pill methods

2012-10-26 Thread Kevin Reid
On Fri, Oct 26, 2012 at 3:13 PM, David Bruant bruan...@gmail.com wrote:

  I think the oddity I note is a consequence of the too loose paragraph in
 section 2:
 A conforming implementation of ECMAScript is permitted to provide
 additional types, values, objects, properties, and functions beyond those
 described in this specification. In particular, a conforming implementation
 of ECMAScript is permitted to provide properties not described in this
 specification, and values for those properties, for objects that are
 described in this specification.

  Instead of having a "there is no 'caller' nor 'arguments' property at
  all" rule, maybe it would be a good idea to refine this paragraph to say
 what's permitted and what is not.
 For instance, mention that for function objects, there cannot be a
 property (regardless of its name!) providing access to the caller function
 during runtime, etc.
 With this kind of refinement (potentially reminded as a note in the
 relevant subsections), it may be easier to share and document the intent of
 what is acceptable to provide as authority and more importantly what is not.


How about: there must be no *nonstandard non-configurable properties* of
standard objects.

This directly implies “SES can do its job of deleting everything not
whitelisted”, and does not rely on the spec blacklisting undesirable
behaviors.
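
That is, it guarantees the whitelist-driven cleanup can always succeed. A rough
sketch of what I mean by "doing its job" (not the actual SES initialization
code):

  function removeNonWhitelisted(obj, whitelist) {
    Object.getOwnPropertyNames(obj).forEach(function (name) {
      if (!Object.prototype.hasOwnProperty.call(whitelist, name)) {
        var desc = Object.getOwnPropertyDescriptor(obj, name);
        if (!desc.configurable) {
          // A nonstandard non-configurable property cannot be deleted;
          // this is exactly the case the proposed rule would forbid.
          throw new Error('cannot remove non-configurable ' + name);
        }
        delete obj[name];
      }
    });
  }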
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Must built-in prototypes also be valid instances? (Was: Why DataView.prototype object's [[Class]] is Object?)

2012-10-02 Thread Kevin Reid
On Mon, Oct 1, 2012 at 9:56 PM, Brendan Eich bren...@mozilla.com wrote:

 But if we have a solid branding mechanism (like Domado's ideal in latest
 browsers? ;-) then that should be used universally and this becomes a
 don't-care.


MarkM suggested I should expound on what Domado does.

Domado uses an abstraction which I called 'Confidence', which I invented in
order to provide class-like behavior in terms of ES5; it is designed to
provide the security properties we needed with a minimum of implementation
mechanism, and is therefore not purely a branding abstraction. It uses one
WeakMap keyed by the instances (the objects confided in); the value is a
plain {} object which stores all of the “private fields” of the key-object.
There are four operations provided by a Confidence:

1. confide: add an instance to the WeakMap and create its private-state
record.

  function TameNode() {
TameNodeConf.confide(this);
  }

2. guard: test that an instance is in the WeakMap and return it or throw.

  var TameNodeT = TameNodeConf.guard;
  ...
  TameBackedNode.prototype.appendChild = nodeMethod(function (child) {
child = TameNodeT.coerce(child);
...
  });

3. p: given an instance, return its private-state record.

  var np = TameNodeConf.p.bind(TameNodeConf);
  ...
  TameBackedNode.prototype.removeChild = nodeMethod(function(child) {
...
np(this).feral.removeChild(np(child).feral);
...
  });

4. protectMethod: given a function, return a function with a guarded this.

  var nodeMethod = TameNodeConf.protectMethod;
  (usage examples above)


Note that unlike closure-based encapsulation, Confidence provides _sibling
amplification_; that is, a node method on one object can access the private
properties of another object, not only its own. This is not ideal as a
default for writing robust programs, but is useful to Domado since its
siblings interact (e.g. appendChild operates on two nodes from the same
DOM). An alternative abstraction which deemphasized sibling amplification
would be, for example, if protectMethod were defined such that the
wrapped function received an extra private-state argument and there was no
separate p operation (though sibling amplification can still be achieved
by having a protected method not exposed on the prototype).

The WeakMap used is the browser's WeakMap if available; otherwise we use an
emulation layer with inferior but sufficient garbage-collection properties
(implemented by SES or ES5/3; Domado is unaware of the distinction).
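
For reference, the core of a Confidence is only a few lines around that WeakMap
(a simplified sketch, not the actual Domado source):

  function makeConfidence(typeName) {
    var table = new WeakMap(); // instance -> private-state record

    function confide(obj) { table.set(obj, {}); }

    function guard(obj) {
      if (!table.has(obj)) { throw new TypeError('not a ' + typeName); }
      return obj;
    }
    guard.coerce = guard; // supports the TameNodeT.coerce(child) usage above

    function p(obj) { return table.get(guard(obj)); }

    function protectMethod(fn) {
      return function protectedMethod() {
        guard(this); // reject calls whose `this` is not a confided instance
        return fn.apply(this, arguments);
      };
    }

    return { confide: confide, guard: guard, p: p, protectMethod: protectMethod };
  }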
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Why JSON5? (was: Hash style comments)

2012-08-10 Thread Kevin Reid
On Fri, Aug 10, 2012 at 12:00 PM, Mark S. Miller erig...@google.com wrote:
 SES:
 pros: superset of JSON, subset of ES5, includes virtually all of ES5,
 including of course JSON5, supported efficiently starting with ES5
 with no need for a custom parser, de facto standard, already some
 adoption at scale (Google Sites, Google App Script)
 cons: JS specific (rather than language neutral), not (yet) a de jure standard

 As safe midpoints between JSON and full ES5, SES has clear advantages
 over JSON5 and includes JSON5 as a subset. What advantages does JSON5
 have over SES? Is there any use case better addressed by JSON5 than by
 SES?

Loading JSON or JSON5 takes space and time linear in the input size;
SES is unbounded. Therefore SES is significantly less safe for naïve
server-side processing.
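
For example, the following is a tiny, syntactically valid SES payload, but
evaluating it never terminates, which no JSON or JSON5 text of any size can do:

  ({ note: "looks like data",
     oops: (function loop() { while (true) {} })() })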
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: The Name of the Name

2012-08-01 Thread Kevin Reid
Lisp precedent: Objects which are used to name things (that is, they
are used as keys by identity), and need not be globally named
themselves, are called symbols.

On Wed, Aug 1, 2012 at 12:47 PM, Mark S. Miller erig...@google.com wrote:

 Now that we have both private Names and unique Names, the general
 category covering both is simply Names. Properties can therefore be
 indexed by strings or Names. Strings are the ones consisting of a
 sequence of characters that can typically be pronounced. Names are
 anonymous identities.

 In the real world, names and identities are distinct concepts, and
 names are the one corresponding to a unique sequence of characters
 that can be pronounced.

  Is Name exactly the wrong name for an opaque unique identity
  typically used to index a property? Is there a better term?

 --
 Cheers,
 --MarkM
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: The Name of the Name

2012-08-01 Thread Kevin Reid
On Wed, Aug 1, 2012 at 3:56 PM, David Herman dher...@mozilla.com wrote:
 In the Lisp world, strings and symbols are isomorphic, but there's a sense 
 that symbols have identity where strings don't. Now, Lisp symbols are 
 historically forgeable and interned, so this isn't exactly the same concept.

In Common Lisp (which I am most familiar with, and I think is a
relevant design), this is not right.

A symbol is an object; it has a name property, which is a string. The
name need not be unique. Package (namespace) objects are mappings
between strings and symbols, with the constraint that if a symbol is
in a package, then the symbol's name property is also the string that
the package maps to it. (This constraint can be considered a mistake,
in that it prohibits renamed imports.)

Therefore, a symbol which is not in a package (uninterned) is not
forgeable — that is, you cannot obtain it starting from its name
unless it's in some other table you have — with the caveat that since
CL is very far from a capability design, there are plenty of ways to
get your hands on a symbol being used, so this does not have any
security properties.

Uninterned symbols are used for unique generated names (gensyms);
interned symbols (those which exist in packages) are used for names
written in source code. These essentially correspond to the use cases
of the proposed unique-names and strings, respectively; CL's design
gains orthogonality by having even ordinary names be symbols rather
than strings, so the programmer need not have different code paths.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Comments on Object.observe strawman

2012-07-23 Thread Kevin Reid
MarkM asked for my perspective on the Object.observe strawman 
http://wiki.ecmascript.org/doku.php?id=strawman:observe; here it is. These
remarks are written primarily for MarkM but I am CC'ing es-discuss as they
may be of interest; please do not feel obligated to respond. I have not
previously followed discussions of this proposed feature.


UI frameworks often want to provide an ability to databind objects in a
 datamodel to UI elements. A key component of databinding is to track
 changes to the object being bound. Today, JavaScript frameworks which
 provide databinding typically create objects wrapping the real data, or
 require objects being databound to be modified to buy in to databinding.
 The first case leads to increased working set and more complex user model,
 and the second leads to siloing of databinding frameworks.


If the premise is that databinding should be possible without modifying the
model objects, then accessor properties present a problem: at the object
level (that is, not using Object.getPropertyDescriptor or such), accessor
properties may be indistinguishable from data properties, but they cannot
be observed generically. I see in Goals that accessor properties may be
programmed to notify; but this gives additional complexity for property
implementors (and, particularly, a way to work normally but fail in an
obscure case). I do not see a realistic solution, but this is something
to keep in mind. A similar issue is that proxies must implement
notifications much more extensively, of course.
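
To make the accessor problem concrete (Object.observe here is the strawman API;
exactly what it would or would not report is my reading of the proposal):

  var hidden = 0;
  var model = {};
  Object.defineProperty(model, 'x', {
    get: function () { return hidden; },
    set: function (v) { hidden = v; },
    enumerable: true,
    configurable: true
  });

  // To ordinary client code, model.x reads and writes like a data property.
  model.x = 1;

  // But a generic observer registered as below sees no change record for that
  // assignment: the property descriptor of `model` itself never changed, so
  // the setter would have to generate a notification explicitly.
  // Object.observe(model, function (changeRecords) { /* never called for x */ });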

7. Asynchronous notification of changes, but allow synchronous fetching of
 changes pending delivery


This seems to me a very good goal. I'm currently working on an application
which uses a custom notification framework extensively to keep derived
values consistent with their source values. I have refrained from switching
to asynchronous notifications, as I should, because of concerns about
fetching incorrect values during the window between the original field's
notification and the derived field's update. Synchronous fetch allows for
just-in-time updates, and also for laziness (with the caveat that the
derived object cannot be GCed; I have designed a scheme for fixing this,
but it requires the auditor facility).

However, provoked synchronous delivery is not actually sufficient for
consistency, it seems to me. Suppose we have an original value cell (data
property), a derived cell (accessor property) and a third cell derived from
the second. Then if the first is assigned, the second is in a position to
be consistent (because it may deliverChangeRecords in its getter — but it
could just as well simply compare the current value to check for changes),
but the third has no way to know that it should be updated, since the
second does not signal its change. This could be fixed with an
application-level synchronous notification protocol or transitive
dependency calculator, but at that point I must ask the question:

What are the use cases for synchronous changes which are being considered?

Example


I note from the example that "updated" reports an oldValue but
"reconfigured" does not report an oldPropertyDescriptor. This is not a lack
of expressiveness but could be considered an inconsistency.

Object.unobserve
 A new function Object.unobserve(O, callback) is added, which behaves as
 follows:


From a capability perspective, this has a certain weakness: it is not
possible to export a callback and allow multiple clients to make use of it
(attach it to objects) independently, because having the callback also
allows removal of it from an object. The alternative is to use a
clearTimeout-style interface; the observe operation returns the option to
cancel it. However, this can also be implemented by each client wrapping
the callback with its own unique function, which moves the burden of
additional objects to the less-common case, so the current definition is
probably the right thing.
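
That is, a client that wants an independently-revocable registration can do the
wrapping itself, roughly (assuming the strawman's Object.observe/unobserve
signatures):

  // `sharedCallback` may have been handed out to many clients; each client
  // registers its own wrapper, so it can cancel only its own registration.
  function observeRevocably(object, sharedCallback) {
    var wrapper = function (changeRecords) { return sharedCallback(changeRecords); };
    Object.observe(object, wrapper);
    return function cancel() { Object.unobserve(object, wrapper); };
  }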

Object.deliverChangeRecords
 A new function Object.deliverChangeRecords(callback) is added, which
 behaves as follows:


This means that a callback may be invoked *without* an empty stack, which
is a hazard but seems acceptable as it is consistent with the above in
suggesting that callbacks should be closely held.

[[ObserverCallbacks]]
 There is now an ordered list, [[ObserverCallbacks]] which is shared per
 event queue. It is initially empty.


Should it be noted that the implementation is free to discard elements of
[[ObserverCallbacks]] which are otherwise unreferenced (i.e. will never be
invoked again)?

[[DeliverChangeRecords]]
 [[DeliverAllChangeRecords]]


These algorithms return boolean results which do not seem to be used
anywhere.


MarkM asked me to consider hazards arising from the interaction of this
proposal and proxies. I have not thought of any such.
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Protocol for new and instanceof?

2011-10-22 Thread Kevin Reid
On Oct 21, 2011, at 19:08, Axel Rauschmayer wrote:

 Also, in general this sort of well known private method name hook is much 
 more extensible than internal method as currently used in the es spec. 
  They also avoid the need to pollute the Proxy API
 
 Reified names (private or otherwise) are a very powerful mechanism. I’m not 
 aware of another programming language that does this (possibly Common Lisp 
 with its symbols, but I don’t know enough about them). It’s good to have 
 them, because they increase JavaScript’s expressiveness.

Common Lisp symbols are definitely reified names. Since all textual source code 
passes through the reader, which performs symbol lookup, nearly every 
name-of-a-thing is a symbol. Uninterned symbols - those not found in any symbol 
table (package) - are fully usable as names for things but cannot be retrieved 
starting from only textual information.

However, there is no guarantee that something with an uninterned name cannot be 
found by other means, so uninterned symbols do not form a security mechanism. 
They are primarily used for non-conflicting names in generated code.

-- 
Kevin Reid  http://switchb.org/kpreid/

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: new Object

2011-10-11 Thread Kevin Reid
On Oct 11, 2011, at 18:29, Jake Verbaten wrote:

 Point.zero = function () {
   return (new Point).{ x: 0, y: 0 };
 }
 
 why are factory methods special? they are just methods.

I agree with "things should be just methods", but this particular pattern 
doesn't work for always-frozen types.

-- 
Kevin Reid  http://switchb.org/kpreid/

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: new Object

2011-10-11 Thread Kevin Reid
On Oct 11, 2011, at 18:51, Jake Verbaten wrote:
 On Oct 11, 2011 11:46 PM, Kevin Reid kpr...@switchb.org wrote:
 On Oct 11, 2011, at 18:29, Jake Verbaten wrote:
 Point.zero = function () {
 return (new Point)...
 I agree with things should be just methods, but this particular pattern
 doesn't work for always-frozen types.

  So basically you're saying that the only function that can alter an object
  before it's frozen is the constructor; that's why we need multiple
  constructors?
 
 You'll have to subclass with another constructor or come up with syntax for
 multiple constructors.

Well, in JavaScript you can always Object.create(Point.prototype, ...).

In Java, for example, the ability to have multiple constructors can be very 
convenient for that type of use case; but it always can be replaced with 
factory methods and a private constructor with more parameters.
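
For example, a frozen Point can still grow extra "constructors" as plain
factory functions (a sketch):

  function Point(x, y) {
    this.x = x;
    this.y = y;
    Object.freeze(this);
  }

  Point.zero = function () {
    return Object.freeze(Object.create(Point.prototype, {
      x: { value: 0, enumerable: true },
      y: { value: 0, enumerable: true }
    }));
  };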

Actually, my original remark may be off-base anyway; I haven't followed the 
discussion well enough. If the .{ syntax returns a new object rather than 
mutating one, then I was confused.


-- 
Kevin Reid  http://switchb.org/kpreid/

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: July TC39 meeting notes, day 1

2011-08-09 Thread Kevin Reid
On Tue, Aug 9, 2011 at 01:17, Andreas Rossberg rossb...@google.com wrote:
 On 8 August 2011 18:46, Kevin Reid kpr...@google.com wrote:
 On Mon, Aug 8, 2011 at 08:50, Andreas Rossberg rossb...@google.com wrote:
 Arguably, making a proxy trap return getters/setters seems a somewhat
 pointless use case anyway. But nevertheless we need to have some
 reasonable semantics for it.

 It allows a proxy to pretend to be an object which supports
  Object.defineProperty normally.

 It allows a proxy to emulate, or wrap, an ordinary object which
 happens to have some accessor properties, while still being
 transparent to reflection (which I understand is one of the goals of
 the proxy facility).

 Sure, but is that necessarily something that the _default_ traps have
 to be able to mimic? There is no problem programming it up yourself if
 you want it.

Are you proposing a revised division of fundamental vs. derived traps?
If not, what do you propose the default derived get or set trap do in
the event that it gets an accessor property descriptor in response to
getOwnPropertyDescriptor?
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: July TC39 meeting notes, day 1

2011-08-08 Thread Kevin Reid
On Mon, Aug 8, 2011 at 08:50, Andreas Rossberg rossb...@google.com wrote:
 I would welcome removing the extra receiver (or proxy) arguments from
 get and set traps. However, it seems to me that the main reason,
 currently, for having them is that they are needed by the default
 traps, in case the respective descriptor returned by
 getOwnPropertyDescriptor has a getter/setter (which need a receiver).

This is almost the rationale I gave earlier. To be precise, the
default traps themselves need not have behavior which is implementable
as an explicit trap (since they are not exposed as being functions
which take the same parameters as user-supplied traps do). I feel the
receiver should be provided so that user-supplied traps *can mimic the
default traps*, with variations or optimizations.
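
For example, a user-written derived get trap that reproduces the default
behavior needs the receiver as soon as it meets an accessor descriptor (the
trap signatures below follow the old handler API roughly, from memory):

  function makeHandler(obj) {
    return {
      getOwnPropertyDescriptor: function (name) {
        return Object.getOwnPropertyDescriptor(obj, name);
      },
      // A user-supplied derived `get` trap mimicking the default one:
      get: function (receiver, name) {
        var desc = this.getOwnPropertyDescriptor(name);
        if (desc === undefined) { return undefined; }      // (prototype walk omitted)
        if ('value' in desc) { return desc.value; }        // plain data property
        if (desc.get) { return desc.get.call(receiver); }  // accessor: needs the receiver
        return undefined;
      }
    };
  }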

 Arguably, making a proxy trap return getters/setters seems a somewhat
 pointless use case anyway. But nevertheless we need to have some
 reasonable semantics for it.

It allows a proxy to pretend to be an object which supports
Object.defineProperty normally.

It allows a proxy to emulate, or wrap, an ordinary object which
happens to have some accessor properties, while still being
transparent to reflection (which I understand is one of the goals of
the proxy facility).

(As it happens, this doesn't affect the use case which made me notice
this problem; there, I am defining an emulation of DOM nodes (i.e. the
accessors are such things as .innerHTML), and DOM nodes, being host
objects, are allowed to do anything, I am given to understand.
However, it is convenient to define my emulations as ordinary objects,
which are incidentally wrapped by a proxy to implement the
particularly magical parts.)
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss