Strict mode eval
I'm a bit puzzled regarding the meaning of "use strict" in the context of higher-order uses of eval. Consider the following program. What is the expected result of the latter calls?

    var x = 3

    function call_eval() { x = 3; eval("var x = 4"); return x }
    function call_eval_strict() { "use strict"; x = 3; eval("var x = 4"); return x }
    function get_eval() { return eval }
    function get_eval_strict() { "use strict"; return eval }
    function call_f(f) { x = 3; f("var x = 4"); return x }
    function call_f_strict(f) { "use strict"; x = 3; f("var x = 4"); return x }

    call_eval()         // 4
    call_eval_strict()  // 3
    call_f(eval)        // 4
    call_f(get_eval())  // 4
    call_f(get_eval_strict())         // ?
    call_f_strict(eval)               // ?
    call_f_strict(get_eval())         // ?
    call_f_strict(get_eval_strict())  // ?
    (function() { "use strict"; return call_f_strict(get_eval_strict()) })()  // ?

V8 bleeding edge currently returns 4 for all of the latter calls, but that does not seem quite right to me. Especially for the last two cases, that would practically amount to a loophole in strict mode. But where does the spec say differently? I'd be happy for any enlightenment.

Thanks,
/Andreas

_______________________________________________
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss
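For readers puzzling over the same cases: under the ES5 rules cited later in this thread (Sections 10.4.2 and 15.1.2.1), only a *direct* call to eval inherits the caller's strictness and scope; a call through any alias is an indirect eval and always evaluates non-strict code in the global scope. A minimal sketch of the two behaviors (using globalThis so it runs the same at script or module top level):

```javascript
// Under ES5: a *direct* call to eval inherits the caller's strictness and
// scope; an *indirect* call (through any alias) always evaluates non-strict
// code in the global scope.

globalThis.x = 3;

function direct_strict() {
  "use strict";
  eval("var x = 4");    // direct eval in strict code: gets its own scope
  return globalThis.x;  // so the global x is untouched
}

function indirect(f) {
  f("var x = 4");       // indirect eval: non-strict, global scope
  return globalThis.x;  // the global x was overwritten
}

direct_strict();  // 3
indirect(eval);   // 4
```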
Re: Strict mode eval
Thanks to everybody for clearing up my confusion. The thing I had missed in the spec was Section 10.4.2. And of course, my example was too simplistic because it couldn't distinguish caller and global context.

At the risk of reviving old discussions, are there any sources explaining the rationale behind the current design? The obvious solution to me would have been having two internal variants/modes of (or entry points to) the eval function, one strict, one non-strict. And depending on the lexical strict mode, the identifier eval would be bound to the right one.

Thanks,
/Andreas
Re: Strict mode eval
On 11 May 2011 20:30, Mark S. Miller erig...@google.com wrote:

On Wed, May 11, 2011 at 10:31 AM, Andreas Rossberg rossb...@google.com wrote:

On 11 May 2011 18:31, Mark S. Miller erig...@google.com wrote:

On Wed, May 11, 2011 at 4:42 AM, Andreas Rossberg rossb...@google.com wrote:

Thanks to everybody for clearing up my confusion. The thing I had missed in the spec was Section 10.4.2. And of course, my example was too simplistic because it couldn't distinguish caller and global context. At the risk of reviving old discussions, are there any sources explaining the rationale behind the current design? The obvious solution to me would have been having two internal variants/modes of (or entry points to) the eval function, one strict, one non-strict. And depending on the lexical strict mode, the identifier eval would be bound to the right one.

I don't think I understand the suggestion. What would the following code do:

    // non-strict outer context
    function f(eval) {
      var f = eval;
      function g() {
        "use strict";
        eval(str);      // [1]
        (1,eval)(str);  // [2]
        f(str);         // [3]
      }
    }

[1] If the outer eval is bound to the global eval function then this is a direct eval, which therefore lexically inherits strictness. So no problem here. [2] The 'eval' identifier is bound in non-strict code and used in strict code. [3] Strict code makes no use here of a lexical binding of the identifier eval.

A previous approach which we rejected was to make strictness dynamically scoped, so all three of the above calls would do strict evals. This was rejected to avoid the problems of dynamic scoping. Are you suggesting a rule that would affect #2 but not #3? If so, IIRC no such rule was previously proposed.

I'm actually suggesting plain lexical scoping. :) Basically, what I'm saying is that the directive "use strict" could simply amount to shadowing the global eval via an implicit `var eval = eval_strict' (unless global eval has already been shadowed lexically).
So all uses of the identifier `eval' in that scope would refer to its strict version, no matter when, where, or how you ultimately invoke it. Consequently, all your three examples would behave the same, only depending on the argument to f.

I don't understand. If "use strict" implicitly introduces a `var eval = eval_strict;' at the position it occurs, then wouldn't #3 in my example still evaluate non-strict?

Assume that

- we distinguish two variants of the eval function, strict and non-strict -- let's call these values EVAL_s and EVAL_ns.
- initially (in global scope), the identifier `eval' is bound to EVAL_ns.
- in a strict mode scope it will be considered rebound to EVAL_s instead (unless it has already been shadowed by user code anyway).

(In addition, at least in strict mode, the only calls to `eval' that are considered _direct_ calls would be those where `eval' statically refers to the initial binding or one of the implicit strict-mode rebindings -- i.e., where it has not been shadowed by the user.)

In your example, the `eval' identifier is already shadowed by the function parameter, so the inner "use strict" would have no effect on it -- in that scope `eval' is just an ordinary identifier. Consequently, all 3 examples would behave alike and are non-direct calls. Whether strict or not solely depends on what you pass in to f:

    // non-strict scope
    f(eval)                                          // EVAL_ns
    (function() { "use strict"; f(eval) })()         // EVAL_s
    f((function() { "use strict"; return eval })())  // EVAL_s

Does that make sense? The idea is that strict/non-strict is resolved w.r.t. the static scope where the identifier `eval' occurs. With this semantics, there would be no way in strict mode to access non-strict eval, unless it is explicitly provided by someone.
With the current rules that is not the case, because you can easily defeat strict mode by a random indirection, e.g.:

    "use strict";
    var e = eval
    e("var oops = 666")  // pollutes the global object, although
                         // the whole program is in strict mode

I'm not sure whether that was intentional or not, but it feels strange.

/Andreas
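Andreas's loophole is easy to reproduce on today's engines; the following runs without error even though every line of the program itself is strict:

```javascript
"use strict";
// The whole program is strict, yet an alias of eval performs a non-strict
// indirect eval in the global scope and can still create global bindings:
var e = eval;
e("var oops = 666");
globalThis.oops;  // 666 -- the global object was polluted
```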
Re: Strict mode eval
On 11 May 2011 23:15, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

Direct eval (or the eval operator, as Oliver refers to it) is a way to (mostly) statically identify eval calls and to do special case processing to make the caller environment information available for eval processing. Indirect evals are just regular function calls, and no special environment information is passed or otherwise made available. So, the built-in eval function, when indirectly invoked, is limited to using the global environment. This has nothing to do with strict mode.

Sorry, I should have been clearer. To clarify: my follow-up question was only tangentially related to the question of direct calls -- it mainly was about how eval inherits strict mode. The one bit where these questions are somewhat related is the "(mostly)" bit in your reply. Is there a reason for this "mostly", which, I would argue, is a form of dynamic scoping?

/Andreas
Re: Strict mode eval
On 12 May 2011 17:47, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

On May 12, 2011, at 2:10 AM, Andreas Rossberg wrote:

The one bit where these questions are somewhat related is the "(mostly)" bit in your reply. Is there a reason for this "mostly", which, I would argue, is a form of dynamic scoping?

[...] This is necessary, because the global binding for eval might be changed

Ouch, right.

or because there might be a binding of eval in a surrounding scope.

I see -- which, specifically, might have been introduced after the fact via `with' or non-strict eval. I guess I still have a blind spot on the various ways of messing up static scoping. But I'm working on it. :) Anyway, thanks for pointing it out.

/Andreas
Re: Strict mode eval
Sure, sounds good. I will look into it.

Thanks,
/Andreas

On 14 May 2011 03:18, Mark S. Miller erig...@google.com wrote:

I think this is the kind of incremental refinement of the details of existing features that we can legitimately consider after May without setting a bad precedent. Would you be interested in turning these ideas into a strawman for, say, the July meeting? Unless there's a problem with this approach I'm not noticing, I think it would be a welcome cleanup of a messy part of the language -- conditioned on an ES-next opt in of course.

On Fri, May 13, 2011 at 2:05 AM, Andreas Rossberg rossb...@google.com wrote:

On 13 May 2011 01:50, Mark S. Miller erig...@google.com wrote:

Assume that

- we distinguish two variants of the eval function, strict and non-strict -- let's call these values EVAL_s and EVAL_ns.
- initially (in global scope), the identifier `eval' is bound to EVAL_ns.
- in a strict mode scope it will be considered rebound to EVAL_s instead (unless it has already been shadowed by user code anyway).

(In addition, at least in strict mode, the only calls to `eval' that are considered _direct_ calls would be those where `eval' statically refers to the initial binding or one of the implicit strict-mode rebindings -- i.e., where it has not been shadowed by the user.)

I think the core insight here is good, and had it been made in time, could have led to a better semantics than what we adopted into ES5. I like the idea that "use strict;" effectively inserts a DeclarativeEnvironmentRecord binding 'eval' to EVAL_s, though I'd put this record on the stack at the strict/non-strict boundary rather than just above the global object.

Yes, my previous description of shadowing `eval' at the point of "use strict" was meant to describe just that.
Even better, since 'eval' cannot be rebound by ES5/strict, ES-next, or SES code, and since eval(str) is effectively a special form anyway, why not remove the dynamic "and if 'eval' is bound to the original global eval function" condition from direct eval? Why not just treat eval(str) as a direct eval special form independent of what 'eval' is bound to in that scope?

That's what I tried to suggest in the parenthesized paragraph above, and it was the reason for my question to Allen. The difficulty in ES5 would be that scoping is not really static -- not even in strict-mode code, which might still be surrounded by non-strict scopes shadowing `eval' dynamically (esp. `with'). But for Harmony it'd be nice.

Thanks,
/Andreas

-- 
Cheers,
--MarkM
Re: Private Names in 'text/javascript'
Separating out the functionality of abstract names certainly is a good idea. But is there any reason to make it a method of Object? In essence, private names form a new primitive type, so there should be a separate global object or module for them. Assuming for a minute it was called Name (which clearly is a suboptimal choice), then you'd rather invoke Name.create(), or perhaps simply Name() (by analogy with calling String(v) to create primitive strings, although I'm not sure I like the notational abuse behind it).

/Andreas
Re: Private Names in 'text/javascript'
On 18 May 2011 18:27, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

In general, it is a good idea to avoid new global names that aren't going to be in modules. In particular, there is no particular reason these factory methods shouldn't be visible via the Harmony ES5 global object. In that case hanging them off an existing constructor carries less risk of collisions (but not no risk) with user defined names. Name seems like it might be a particularly risky global to grab. Luke suggested hanging them off Object and in my working draft I suggest String. Either is probably safer than adding new globals.

Hm, making name creation a method of String seems equally odd -- unless you also plan to have typeof String.createPrivateName() == "string". Of course, I see the concern with the global object (although 'Name' was only a strawman suggestion). I assume that we need a future-proof solution for adding new built-in objects anyway, most likely based on modules. And that accessing built-in objects through the global object will be deprecated in Harmony code. So, since private names will be Harmony-specific, their constructor doesn't have to be visible through the old ES5 global object. Wouldn't introducing a new built-in constructor in some module scope actually have less risk (none?) of producing name clashes than messing with an existing object?

/Andreas
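(With hindsight: private names eventually shipped in ES2015 as Symbol -- a separate global factory for a new primitive type, much along the lines sketched here. For comparison, using the API that finally landed:)

```javascript
// ES2015 Symbols: a new primitive with its own global factory -- invoked as
// Symbol(), not new Symbol(), echoing the String(v) analogy above.
const key = Symbol("private");

const obj = {};
obj[key] = 42;

typeof key;        // "symbol" -- a distinct primitive type, not "string"
Object.keys(obj);  // [] -- symbol-keyed properties don't leak to enumeration
obj[key];          // 42
```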
Re: I noted some open issues on Classes with Trait Composition
My apologies if this has been discussed to death before -- well, actually, I'd be surprised if it hasn't (pointers would be welcome).

I think it is worth noting that the baroque notation for defining constructors that we see in the C++ / Java / C# world primarily is an artefact of the desire to allow multiple constructors with overloading in those languages. We don't have that issue in JS, so I wonder why we cannot go for something more elegant?

There is precedent in other OOPLs (off the top of my head, e.g. Scala and OCaml) for putting the constructor arguments on the class head directly, and executing the class body like a block when the constructor is invoked. AFAICS:

-- This approach is significantly slimmer (and, I'd argue, more readable) than the discussed alternatives, without needing any keywords:

    class Point(x0, y0) {
      public x = x0
      public y = y0
    }

-- It naturally allows what Bob was suggesting:

    class Point {  // no argument list is shorthand for (), just like when invoking new
      public x = 0
      public y = 0
    }

-- It avoids additional hoops with initializing const attributes:

    class ImmutablePoint(x0, y0) {
      const x = x0  // just like elsewhere
      const y = y0
    }

-- The constructor arguments naturally are in the scope of the entire object, so often you do not even need to introduce explicit (private) fields to store them:

    class Point(x, y) {
      public function abs() { return Math.sqrt(x*x + y*y) }
    }

/Andreas

On 19 May 2011 03:31, Mark S. Miller erig...@google.com wrote:

On Wed, May 18, 2011 at 6:29 PM, Brendan Eich bren...@mozilla.com wrote:

On May 18, 2011, at 5:57 PM, Bob Nystrom wrote:

    class Point {
      public x = 0, y = 0;
    }

    let p = new Point();
    p.x;  // 0

This is pretty rare, in my experience. A hard case? If the constructor does set x and y from parameters, then you have double-initialization. If some properties are non-writable, you can't do this. YAGNI?

+1. If you're gonna initialize them somewhere, why not always do so in the constructor and avoid special cases?
/be

-- 
Cheers,
--MarkM
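For what it's worth, the last point above -- constructor arguments in scope of the entire object -- is already expressible in ES5 with the objects-as-closures pattern that the proposed syntax would sugar. A rough, hand-written desugaring (illustrative only, not the proposal's actual semantics):

```javascript
// A hand-desugared version of the proposed `class Point(x, y) { ... }`:
// the constructor arguments stay in scope for every method, so no explicit
// private fields are needed.
function Point(x, y) {
  return {
    abs: function () { return Math.sqrt(x * x + y * y); }
  };
}

var p = Point(3, 4);
p.abs();  // 5
```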
Re: I noted some open issues on Classes with Trait Composition
On 19 May 2011 15:36, Mark S. Miller erig...@google.com wrote:

Hi Andreas, yes we have a long history of considering this shape, in fact much longer than the current shape. The final state of proposals along these lines is http://wiki.ecmascript.org/doku.php?id=strawman:classes_with_trait_composition&rev=1299750065.

I'll have a closer look, thanks for the pointer!

1) Starting from scratch, there's no problem engineering a VM to make objects-as-closures efficient, especially given the semi-static analysis that class proposal was designed to enable. However, JS VM implementors are not starting from scratch. Fitting such a new optimization into existing heavily optimized engines was thought to be a hard sell. Especially since all major VM implementors would need to agree.

2) The conventional JS pattern is to place methods on the prototype, not the instance, and many felt that the main thing classes need to provide is a syntax to make this traditional semantics easier to express. Another variation of your suggestion that Tom suggested is that you mix instance initialization and class/prototype initialization together in the class body. This obscures both time-of-execution and scope. Methods on the prototype cannot have the constructor parameters in scope.

Like Luke I was wondering why a change of syntax should affect the semantics. But after seeing your reply pointing out the problem with `this', I see your point. Although it doesn't seem specific to constructor parameters, but would apply to instance fields in general with the class-as-block approach. Still sad. I agree that the ideal solution would be objects-as-closures. Personally, though, I could also live with requiring instance variables and constructor arguments to always be accessed through `this', even in that syntactic approach.
True, it somewhat obscures scoping -- but arguably, public or private declarations are not ordinary declarations, so it's not too hard to argue that they do not actually bind anything in the current scope. Same goes for constructor arguments vs ordinary function arguments.

Thanks,
/Andreas
Re: I noted some open issues on Classes with Trait Composition
On 19 May 2011 16:05, David Herman dher...@mozilla.com wrote:

Yes, we've talked about this. One of the issues I don't know how to resolve is if we want to allow the specification of class properties aka statics, then those need *not* to be in the scope of the constructor arguments, which ends up with very strange scoping behavior:

    var x = "outer"
    class C(x) {
      static foo = x  // "outer" -- whoa!
    }

I'm not 100% up on the current thinking of the group that's been working on classes, and whether they are including statics in the design, but I think they are.

Oh, it wasn't clear to me that we really want to have static members. I may be biased here, but I always viewed static members as just a poor man's substitute for a proper module system. Fortunately, it looks like we will have a real one instead!

To be honest, I'm a bit worried that there will be a _lot_ of semantic redundancy in the end. After adding modules + classes + static members (+ traits?), there would be at least three or four different, complicated constructs that evaluate to some kind of object, with significant functional overlap.

Thanks,
/Andreas
Re: Private Names in 'text/javascript'
On 19 May 2011 17:09, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

Hm, making name creation a method of String seems equally odd -- unless you also plan to have typeof String.createPrivateName() == "string".

That is indeed the plan in the particular version of the proposal that started this thread.

You mean the unique_string_values proposal that Luke mentioned? Hm... Doesn't that have similar potential for breaking code? Existing code could make lots of assumptions about what it can do with a value if its type equals "string". Unless I misunderstand something, secret strings would break some of those (valid) assumptions.

Of course, I see the concern with the global object (although 'Name' was only a strawman suggestion). I assume that we need a future-proof solution for adding new built-in objects anyway, most likely based on modules. And that accessing built-in objects through the global object will be deprecated in Harmony code. So, since private names will be Harmony-specific, their constructor doesn't have to be visible through the old ES5 global object.

This thread initially was specifically about how to make private name creation available in code that does not opt-in into new Harmony syntax.

My apologies, you are right. But how would such a proposal affect the Harmony side of things? Is the intention to have it replace a proper type of private names in Harmony, or merely complement it?

Thanks,
/Andreas
Re: prototype for operator proposal for review
On 21 May 2011 01:16, felix feli...@gmail.com wrote:

how about the fish operator, easy to type.

Whow, apparently you are not cursed with a German keyboard. ;) Seriously, "easy to type" is an argument that is highly subject to i18n-related concerns. The majority of JS programmers do not have an English keyboard layout. (I wish the guy who invented SGML syntax had also known this...)

/Andreas
Re: I noted some open issues on Classes with Trait Composition
On 20 May 2011 18:00, David Herman dher...@mozilla.com wrote:

I think "modules are a construct that evaluates to an object" is the wrong way to think about them. Syntactic modules are a second-class construct that is not an expression. You can reflect on modules at runtime, and that reflection is provided as an object, but that's because almost all compound data structures in JS are objects. But I would advise against describing modules as a kind of object. And I think an important aspect of classes is that they are providing a declarative convenience for doing things that people *already* do with objects in JS today.

I see what you are saying, and yes, they are intended to serve a different purpose. But they still share a lot of semantic overlap. And I foresee that the overlap will increase over time, as the language evolves.

Take just one specific example: there already is the proposal for extending modules with module functions (http://wiki.ecmascript.org/doku.php?id=strawman:simple_module_functions) -- which makes a lot of sense, is straightforward, and I'm sure that people will demand something along these lines sooner or later. But for better or worse, modules now actually have become classes! Compare:

    class Point {
      private x, y
      constructor(x0, y0) { x = x0; y = y0 }
      public function move(dx, dy) { x += dx; y += dy }
      public function abs() { return Math.sqrt(x*x + y*y) }
    }

    let p = new Point(3, 4)
    p.abs()

with:

    module Point(x0, y0) {
      let x = x0, y = y0
      export function move(dx, dy) { x += dx; y += dy }
      export function abs() { return Math.sqrt(x*x + y*y) }
    }

    let p = Point(3, 4)  // assuming module functions are reflected into functions
    p.abs()

Almost the same effect, even though the underlying semantics differs somewhat. You can even express simple inheritance with import and export, depending on how general they will be in the end. Obviously, there are aspects that you still cannot express with modules but can with classes, and vice versa.
But my point is that at their core, they end up being pretty similar things. And their differences might eventually start looking rather accidental. I would feel better if we thought a bit harder about ways to utilize the commonalities before we grow the size of the language too quickly.

/Andreas
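The overlap described here is easiest to see by desugaring both Point examples into plain ES5, where they collapse into the same closure pattern (a sketch, not either proposal's actual semantics):

```javascript
// Both the class and the module-function version of Point reduce to roughly:
function Point(x0, y0) {
  var x = x0, y = y0;  // private state captured by the closure
  return {
    move: function (dx, dy) { x += dx; y += dy; },
    abs:  function () { return Math.sqrt(x * x + y * y); }
  };
}

var p = Point(3, 4);
p.abs();      // 5
p.move(1, 2);
p.abs();      // Math.sqrt(16 + 36)
```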
Re: I noted some open issues on Classes with Trait Composition
On 20 May 2011 15:42, Mark S. Miller erig...@google.com wrote:

Modules aren't generative.

If you mean that you cannot create several objects from them, then yes, but see my reply to Dave. However, I was primarily wondering about static members, which don't provide any generativity in that sense either.

/Andreas

On May 20, 2011 7:58 AM, Andreas Rossberg rossb...@google.com wrote:

On 19 May 2011 16:05, David Herman dher...@mozilla.com wrote:

Yes, we've talked about this. One of the issues I don't know how to resolve is if we want to allow the specification of class properties aka statics, then those need *not* to be in the scope of the constructor arguments, which ends up with very strange scoping behavior:

    var x = "outer"
    class C(x) {
      static foo = x  // "outer" -- whoa!
    }

I'm not 100% up on the current thinking of the group that's been working on classes, and whether they are including statics in the design, but I think they are.

Oh, it wasn't clear to me that we really want to have static members. I may be biased here, but I always viewed static members as just a poor man's substitute for a proper module system. Fortunately, it looks like we will have a real one instead!

To be honest, I'm a bit worried that there will be a _lot_ of semantic redundancy in the end. After adding modules + classes + static members (+ traits?), there would be at least three or four different, complicated constructs that evaluate to some kind of object, with significant functional overlap.

Thanks,
/Andreas
Re: I noted some open issues on Classes with Trait Composition
On 23 May 2011 19:42, Mark S. Miller erig...@google.com wrote:

On Mon, May 23, 2011 at 2:16 PM, Andreas Rossberg rossb...@google.com wrote:

Compare:

    class Point {
      private x, y
      constructor(x0, y0) { x = x0; y = y0 }
      public function move(dx, dy) { x += dx; y += dy }
      public function abs() { return Math.sqrt(x*x + y*y) }
    }

    let p = new Point(3, 4)
    p.abs()

with:

    module Point(x0, y0) {
      let x = x0, y = y0
      export function move(dx, dy) { x += dx; y += dy }
      export function abs() { return Math.sqrt(x*x + y*y) }
    }

    let p = Point(3, 4)  // assuming module functions are reflected into functions
    p.abs()

Almost the same effect, even though the underlying semantics differs somewhat.

Regarding the scoping of private instance variables, the version with generative module functions is actually much better[1]. In fact, it's the same as the objects-as-closures pattern, or the earlier http://wiki.ecmascript.org/doku.php?id=strawman:classes_with_trait_composition&rev=1299750065 classes strawman, which is essentially a codification of objects-as-closures.

Yes. I didn't want to stress that point, but it may be relevant, too, for better or worse.

In the above example, do you anticipate an allocation per method per instance? Specifically, does each call to Point allocate a new instance of the point module (fine) exporting newly allocated move and abs closures (bad)? Some codification of objects-as-closures only becomes a viable alternative to the current manual class pattern if it can avoid these extra allocations.

Well, this was rather an observation than a proposal, so I didn't anticipate anything specific. But if the language grows in that direction, people might start using it that way, intended or not, whether we have classes as well, or not. So it potentially becomes an issue either way.

Putting methods on a shared prototype essentially treats the prototype as a vtable. This implementation path is messy, but is well trodden in current JS implementations.
Avoiding extra allocations for objects-as-closures, whether sugared by your generative module pattern or my earlier classes strawman, would seem to require a new vtable mechanism not directly mapped onto prototype inheritance. Perhaps the "hidden classes" optimization already provides a context in which we can reuse implementation machinery between these two vtable-like mechanisms?

I'm afraid I don't know the "hidden classes" optimization. But speculating a bit, one implementation technique I could envision is the following. It amounts to moving the closure environment from individual functions to the module instance object:

- Add the module's (internal) lexical environment as a hidden property Env to the module instance object.
- Store (direct) function members as mere code objects, not closures.
- Treat projection from a module M (which we can distinguish statically) specially. If we project a yet unclosed function:
  * if it's called right away, pass M.Env as the environment,
  * if it's not called, allocate a proper closure for it with environment M.Env.

Allocating the individual closure is effectively deferred to the rarer case where we extract a function without calling it immediately. This technique would only apply to functions that are direct members of the module, but that's usually the vast majority. For others, you'd close immediately, as you'd do now. Note that this is merely an implementation trick, nothing the spec would or should have to worry about. Example:

    module M {
      let x = 2  // goes into M.Env
      export function f() { return x }
    }

    let f = M.f  // M is a module (statically), f a function (dynamically), so close over M.Env
    f()          // 2

The May meeting is the close of the additive phase of designing ES-next. Following that, we hope for some consolidation and subtraction, among other activities (prototype implementations, web testing, spec writing, etc). Modules are already in.
If classes get accepted in May, then I would consider it in bounds after May to grow modules slightly in order to remove classes completely. This would seem to be an excellent tradeoff. As I recall, you were planning to be at the July meeting? I think this would be a good focus topic for July.

OK, I will try to think this through in a bit more detail until then.

Thanks,
/Andreas
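The deferred-closure trick above can be simulated in user-level JS to see the allocation behavior it is after (names like makeModule and env are illustrative only -- this models an engine internal, not any proposed API):

```javascript
// The module instance holds one shared environment; exported functions are
// stored as "code objects" that take the environment explicitly. A real
// closure is allocated only when a member is extracted without being called.
function makeModule(init) {
  var env = { x: init };                  // plays the role of M.Env
  var code = {
    f: function (env) { return env.x; }   // code object: no captured state
  };
  return {
    call: function (name) {               // direct call: reuse env, no closure
      return code[name](env);
    },
    get: function (name) {                // extraction: allocate a closure now
      return function () { return code[name](env); };
    }
  };
}

var M = makeModule(2);
M.call("f");         // 2, without allocating a per-member closure
var f = M.get("f");  // the closure is allocated only here
f();                 // 2
```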
Re: May the defineProperty method of a proxy handler throw a TypeError?
On a somewhat related note, I recently noticed that the semantics of [[GetProperty]] and [[GetOwnProperty]], according to http://wiki.ecmascript.org/doku.php?id=harmony:proxies_semantics, contain a possible "reject" step, which doesn't seem to be well-defined given that these methods have no Throw parameter either.

/Andreas

On 26 May 2011 15:36, Tom Van Cutsem tomvc...@gmail.com wrote:

Hi David,

The short answer: if you also define a 'set' trap, throwing a TypeError from defineProperty() to signal rejection is appropriate.

The longer answer: defineProperty() is based on [[DefineOwnProperty]] (ES5 section 8.12.9) whose rejection behavior depends on an explicit 'Throw' parameter, just like [[Put]] and [[Delete]]. Why then, do set() and delete() return a boolean success flag to determine their rejection behavior, while defineProperty() does not?

I believe that, until now, we were convinced that in the case of proxies, defineProperty() actually does _not_ depend on strict mode, because the built-in Object.defineProperty and Object.defineProperties methods (ES5 section 15.2.3.6-7) call the built-in [[DefineOwnProperty]] method with its 3rd argument 'Throw' set unconditionally to 'true', meaning it should never reject silently, independent of the strictness of the code.

However, checking other parts of the spec where [[DefineOwnProperty]] is invoked, I now notice it's also invoked by [[Put]], which passes its own 'Throw' argument to [[DefineOwnProperty]], and this time, the 'Throw' argument is dependent on strict mode. This complicates matters because of derived traps. If a proxy handler does _not_ define a 'set' trap (which is derived), the proxy implementation will fall back on the fundamental defineProperty() trap, whose rejection behavior in that context now _should_ depend on strict mode. However, the current defineProperty() API doesn't allow the programmer to express this.
I see two options here:

1) modify the defineProperty() trap such that it also returns a boolean flag to indicate rejection, like set() and delete(). This is still possible as defineProperty() currently has no useful return value.

2) specify unambiguously that defineProperty() on proxies should always reject explicitly by throwing a TypeError, even if the defineProperty() trap was called as part of the default 'set' trap. This can be justified if we specify that the default 'set' trap invokes the 'defineProperty' trap as if by calling Object.defineProperty, which is specified to never reject silently.

Option #1 would most closely follow the current ES5 spec, but would disallow the default 'set' trap behavior from being written in Javascript itself, since it's impossible to specify the value of the 'Throw' parameter in Javascript. Option #2 is in line with how the current default 'set' behavior is specified at http://wiki.ecmascript.org/doku.php?id=harmony:proxies#trap_defaults.

Cheers,
Tom

2011/5/25 David Flanagan dflana...@mozilla.com:

I'm using a proxy to implement the DOM NodeList interface. A NodeList object is array-like and has read-only, non-configurable array index properties. In my handler's set() and delete() methods, I just return false if an attempt is made to set or delete an indexed property.

The defineProperty() method is not parallel to set() and delete(), however. I can't just return false, since the return value is ignored. And I can't tell from the proposal whether I am allowed to throw a TypeError from this method. In getOwnPropertyDescriptor() I know that I have to lie and return a descriptor with configurable:true for the indexed properties. So in defineProperty() should I just silently ignore any attempts to set an indexed property, or should I actively reject those attempts with a TypeError?
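For the record, option #1 is essentially what the final ES6 Proxy API adopted: the defineProperty trap returns a boolean, and the call site decides whether a false result throws (Object.defineProperty always throws a TypeError; Reflect.defineProperty just reports false). A sketch of David's NodeList case in the shipped API:

```javascript
// Reject attempts to (re)define array-index properties, as a read-only
// NodeList would; non-index properties pass through to the target.
const target = { 0: "firstNode", length: 1 };
const nodeList = new Proxy(target, {
  defineProperty(t, key, desc) {
    if (typeof key === "string" && String(Number(key)) === key) {
      return false;  // indexed property: signal rejection
    }
    return Reflect.defineProperty(t, key, desc);
  }
});

Reflect.defineProperty(nodeList, "0", { value: "x" });  // false, no throw
// Object.defineProperty(nodeList, "0", { value: "x" })  // would throw TypeError
```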
___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Converting an existing object to a proxy
I'm puzzled about this idea. I thought that one of the main design goals of proxies was that they are transparent, i.e. you cannot tell a proxy apart from a proper object. How can this be maintained for Proxy.createFrom? AFAICS, there is no way you can perform this operation on an object that already is a proxy. /Andreas On 27 May 2011 04:13, Mark S. Miller erig...@google.com wrote: On Thu, May 26, 2011 at 5:04 PM, Cormac Flanagan cor...@cs.ucsc.edu wrote: [documenting/expanding some ideas briefly discussed at today's meeting] The current proxy proposal has a method to create a new proxy: var proxy = Proxy.create(handler, proto); We could extend this proposal to allow an existing object to be converted to a proxy, via: var proxy = Proxy.createFrom(object, handler, proto); Here, the return value 'proxy' is the same address as the argument 'object'. The original object thus becomes a proxy. Any state of the original object is discarded. This extension appears to support additional applications, such as registering an observer on an existing object. The target object would first be cloned, then the target object would be converted into a proxy that dispatches to the cloned object, but which also notifies observers about accesses/updates to the (now proxified) object. There are a number of open issues relating to security etc: In particular, what objects can be proxified in this way - perhaps not frozen object, or objects with non-configurable properties or with unique names. In today's meeting, I made two suggestions along these lines: * Given the current proxy semantics, we should allow this only if the object-to-be-proxified is extensible and has no non-configurable own properties. * We have on occasion discussed modifying the proxy proposal so that individual properties could be fixed rather than just the proxy as a whole. (Note: I am not in favor of such a change, but it could be done soundly.) 
Given that this change to proxies were done, then we should allow proxification only if the object-to-be-proxified is extensible, period. In both cases, as you state, one effect of the operation is to remove all configurable own properties from the object. In both cases, we can adopt the rationale that the object-to-be-proxified could not have taken any action inconsistent with it always having been a proxy. In both cases, we need the further restriction that it is a kind of object that can be emulated by a proxy. Today, this is technically only objects of [[Class]] Object or Function, but we're talking about relaxing that in any case. A design goal is that for any object that could be proxified, we can replace it with a proxy in a way that is semantically transparent. - Cormac -- Cheers, --MarkM
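[Editor's note: Mark's first condition for the proposed Proxy.createFrom can be expressed in ES5 code. The helper name below is invented for illustration; it checks that the object is extensible and that all own properties are configurable, so a proxy could have behaved identically all along.]

```javascript
// Hypothetical precondition check (the name canProxify is invented here)
// for the proposed Proxy.createFrom: the object must be extensible and
// must have no non-configurable own properties.
function canProxify(obj) {
  return Object.isExtensible(obj) &&
    Object.getOwnPropertyNames(obj).every(function (propName) {
      return Object.getOwnPropertyDescriptor(obj, propName).configurable;
    });
}

var plain = { a: 1 };                 // extensible, 'a' is configurable
var frozen = Object.freeze({ a: 1 }); // not extensible, 'a' non-configurable
// canProxify(plain) is true, canProxify(frozen) is false
```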
Bug in 10.5 (Declaration Binding Instantiation)?
Is it too late to incorporate errata for 5.1 already? :) It seems that the algorithm specified in 10.5 is wrong. In order to make sense, and match the informal description at the beginning of Section 10.6, step 8 needs to take place before step 6. (Noticed by Steven Keuchel.) /Andreas
Minor type confusion in proxies proposal?
I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field's name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) 2. In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas
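[Editor's note: the record-vs-object distinction above is observable in ES5 engines. ToPropertyDescriptor, invoked by Object.defineProperty, reads only the standard attribute names from the attributes object, so a non-standard field never reaches the internal Property Descriptor record and is absent from the reified result.]

```javascript
var o = {};
// 'index' is a non-standard field on the attributes object:
Object.defineProperty(o, 'x', { value: 1, enumerable: true, index: 5 });

// Only standard attributes survive the round trip through the internal
// Property Descriptor record:
var d = Object.getOwnPropertyDescriptor(o, 'x');
// 'index' in d is false; d.value === 1; d.enumerable === true
```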
Re: Minor type confusion in proxies proposal?
On 1 July 2011 12:12, Andreas Rossberg rossb...@google.com wrote: I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field‘s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) I forgot to say: step 5 of the definition invokes the defineProperty trap of the handler passing Desc as the second argument. But the handler expects a reified attributes object. 2. In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Minor type confusion in proxies proposal?
Yes, I understand the intent, I'm just pointing out that the current spec is both - incomplete, because it seems to assume that the semantics of the internal object descriptor type is extended, without specifying that, and - inconsistent, because attributes objects are passed to functions expecting descriptors (Object.defineProperty invoking [[DefineOwnProperty]] for proxies), and vice versa, ([[DefineOwnProperty]] invoking the proxy trap), without proper conversions. Of course, that can be fixed. The easiest fix (and what Tom perhaps had in mind) probably is to leave alone the definition of property descriptors, and have Object.defineProperty call the proxy trap directly, without going through [[DefineOwnProperty]]. That avoids the redundant conversion back and forth and trivially allows to keep additional properties on the attributes object. Or do you envision additional ways where extended attributes could come in? /Andreas On 1 July 2011 21:21, David Bruant david.bru...@labri.fr wrote: Hi Andreas, Property descriptors as specific type is an internal construct of the ES spec. Their definition in ES5 was used in the context of ES5 (with normal objects, host objects but no proxies). The proxy API needed a way to represent them. Objects sound like the natural construct to do so. First, you have to notice that the object is copied, so a different object is passed as argument to the defineProperty trap and as argument within the trap. Same for the return value of getOwnPropertyDescriptor. So there is no way the proxy can magically change a property descriptor (since within the proxy, there are only proxies) Then, the intention behind letting custom attribute pass is to encourage innovation. Proxies have the potential to be arbitrarily complicated; so should be their dialog interface (defineProperty, getOwnPropertyDescriptor). For instance, in an experiment of mine [1], I use a custom index property attribute. 
If some good idea come out (unlike my experiment), they could be integrated to a next version of ECMAScript. So I agree that objects as property descriptors within traps instead of a custom type are a derivation from the spec, but I think it's a good thing. Tom: about ...except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty., what do you mean by any? Is it Object.keys(desc)? Object.enumerate(desc) (I mean the prop list enumerated over with for-in)? Object.getOwnPropertyNames(desc)? David [1] https://github.com/DavidBruant/PropStackObjects (see the HTMLs to see how it works) Le 01/07/2011 14:54, Andreas Rossberg a écrit : On 1 July 2011 12:12, Andreas Rossberg rossb...@google.com wrote: I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field‘s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) 
I forgot to say: step 5 of the definition invokes the defineProperty trap of the handler passing Desc as the second argument. But the handler expects a reified attributes object. 2. In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas
Re: Minor type confusion in proxies proposal?
Hi Tom. On 2 July 2011 13:50, Tom Van Cutsem tomvc...@gmail.com wrote: Hi Andreas, First, you're right about the typing issue: In ES5, for values of type Object, the signature for [[DefineOwnProperty]] would be: [[DefineOwnProperty]](P: a property name, Desc: an internal property descriptor, Throw: a boolean) On trapping proxies, that signature would need to change to: [[DefineOwnProperty]](P: a property name, Desc: an Object, Throw: a boolean) I don't think such a change is consistent. [[DefineOwnProperty]] is invoked in a number of places in the spec, and I think in many of them the type of the receiver is not distinguished and may well be a proxy, so a proxy may then receive both kinds of descriptors. Moreover, I think it would be a mistake to make the appropriate case distinction everywhere -- you really want the internal method to have the same signature in all cases. With that, I believe the strawman is otherwise internally consistent. In [[DefineOwnProperty]] step 5, what will be passed to the user-defined defineProperty trap is a proper Object, not an internal descriptor. I did clarify the note you referred to, to be more explicit in this regard. I don't see an alternative to changing the signature of [[DefineOwnProperty]]. It can't just receive an internal descriptor, as it doesn't preserve any non-standard attributes. How about simply bypassing [[DefineOwnProperty]] in Object.defineProperty for proxies, as I suggested in my reply to David? That seems to be the only place where additional objects can occur, or am I wrong? Cheers, /Andreas 2011/7/1 Andreas Rossberg rossb...@google.com On 1 July 2011 12:12, Andreas Rossberg rossb...@google.com wrote: I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. 
In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field‘s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) I forgot to say: step 5 of the definition invokes the defineProperty trap of the handler passing Desc as the second argument. But the handler expects a reified attributes object. 2. In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Minor type confusion in proxies proposal?
On 3 July 2011 13:29, Tom Van Cutsem tomvc...@gmail.com wrote: Comments? Looks good to me. I agree with Mark's comment that it should do a (shallow) copy of the attributes object, though. I guess the obvious point would be in Object.defineProperty, before passing it to DefineProxyProperty. /Andreas 2011/7/2 Andreas Rossberg rossb...@google.com Hi Tom. On 2 July 2011 13:50, Tom Van Cutsem tomvc...@gmail.com wrote: Hi Andreas, First, you're right about the typing issue: In ES5, for values of type Object, the signature for [[DefineOwnProperty]] would be: [[DefineOwnProperty]](P: a property name, Desc: an internal property descriptor, Throw: a boolean) On trapping proxies, that signature would need to change to: [[DefineOwnProperty]](P: a property name, Desc: an Object, Throw: a boolean) I don't think such a change is consistent. [[DefineOwnProperty]] is invoked in a number of places in the spec, and I think in many of them the type of the receiver is not distinguished and may well be a proxy, so a proxy may then receive both kinds of descriptors. Moreover, I think it would be a mistake to make the appropriate case distinction everywhere -- you really want the internal method to have the same signature in all cases. With that, I believe the strawman is otherwise internally consistent. In [[DefineOwnProperty]] step 5, what will be passed to the user-defined defineProperty trap is a proper Object, not an internal descriptor. I did clarify the note you referred to, to be more explicit in this regard. I don't see an alternative to changing the signature of [[DefineOwnProperty]]. It can't just receive an internal descriptor, as it doesn't preserve any non-standard attributes. How about simply bypassing [[DefineOwnProperty]] in Object.defineProperty for proxies, as I suggested in my reply to David? That seems to be the only place where additional objects can occur, or am I wrong? 
Cheers, /Andreas 2011/7/1 Andreas Rossberg rossb...@google.com On 1 July 2011 12:12, Andreas Rossberg rossb...@google.com wrote: I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field‘s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) I forgot to say: step 5 of the definition invokes the defineProperty trap of the handler passing Desc as the second argument. But the handler expects a reified attributes object. 2. In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Type of property names, as seen by proxy traps
On 7 July 2011 13:11, Tom Van Cutsem tomvc...@gmail.com wrote: 2011/7/6 Andreas Rossberg rossb...@google.com While putting together some test cases for Object.keys, I wondered: is it intended that property names are always passed to traps as strings? That is indeed the intent. It seems like a reasonable assumption, but is not currently the case everywhere (e.g. the default implementation for `keys' can violate this assumption when passing names to this.getOwnPropertyDescriptor). How so? The default implementation for the keys trap relies on the return value of the getOwnPropertyNames() trap, whose return value is coerced to an array of Strings. Not quite. The coercion is taking place in Object.getOwnPropertyNames, but the default `keys' trap doesn't go through that, but instead calls the trap directly. Moreover, it has to do it like that, because it doesn't even have a reference to the proxy itself. /Andreas
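[Editor's note: the issue can be sketched with a bare handler object standing in for the proxy; Proxy.create itself is not available in current engines, so no proxy is constructed here. The derived keys trap, as quoted later in this thread, calls this.getOwnPropertyNames() directly, so whatever values that trap returns reach getOwnPropertyDescriptor without string coercion.]

```javascript
var seen = [];
var handler = {
  getOwnPropertyNames: function () { return [0, 1, 'length']; },
  getOwnPropertyDescriptor: function (propName) {
    seen.push(typeof propName); // record what the derived trap passes along
    return { value: 42, enumerable: propName !== 'length' };
  },
  // The default derived keys trap, as given in the proposal:
  keys: function () {
    return this.getOwnPropertyNames().filter(function (propName) {
      return this.getOwnPropertyDescriptor(propName).enumerable;
    }.bind(this));
  }
};
var ks = handler.keys();
// ks is [0, 1]; seen is ['number', 'number', 'string'] —
// the numeric names were never coerced to strings.
```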
Re: Type of property names, as seen by proxy traps
On 7 July 2011 14:32, David Bruant david.bru...@labri.fr wrote: However, if we assume that the getOwnPropertyNames trap is able to do type coercion on its output, there is no reason for the keys trap to not do that too, regardless of how it was implemented. Yes, that's what I would propose, too. It's just a bit ugly that we have to do that in two places now. /Andreas
Re: Type of property names, as seen by proxy traps
On 7 July 2011 15:09, David Bruant david.bru...@labri.fr wrote: Yes, that's what I would propose, too. It's just a bit ugly that we have to do that in two places now. Three if counting the enumerate trap for for-in loops. Regardless of ugliness, it's necessary. keys and enumerate are derived traps. They have a default implementation for developer convenience, however, developers could decide to reimplement the trap and the proxy engine implementation has to enforce types anyway. Each trap has to be guarded independently. Derived traps as shown are written in JS for expository purposes. Engines will be free to optimize as they wish internally as long as the observed behavior is the same. True, but optimizing that actually is more tricky than you might think, since in general it would change the semantics if an engine decided to call toString only once. It has to make sure that none of the names are objects, or at least none of their toString methods was modified and they are all free of side effects. Specifically, I think that type inference engines can be of a great help in ensuring that types are correct without having to pay the price of looking at every single element independently. I don't think that the type checks are the biggest cost. Doing the actual conversion several times for those cases where the type is _not_ string is potentially much more expensive. I guess it's fine if programmers suffer for returning objects as property names. But something like integers might be a valid use case. /Andreas
Re: Type of property names, as seen by proxy traps
On 7 July 2011 16:12, David Bruant david.bru...@labri.fr wrote: Derived traps as shown are written in JS for expository purposes. Engines will be free to optimize as they wish internally as long as the observed behavior is the same. True, but optimizing that actually is more tricky than you might think, since in general it would change the semantics if an engine decided to call toString only once. It has to make sure that none of the names are objects, or at least none of their toString methods was modified and they are all free of side effects. Interesting. However, I'm not sure side-effects are a problem. - var o = {a:1, toString:function(){o.b = 12; return 'a'; }}; console.log(o[o], o.b); // 1, 12 on Firefox 5 - Here, o[o] triggers a side effect and that sounds like the normal behavior. I'm not sure I understand what your example is intended to show. But consider this: var i = 0 var o = {toString: function() { ++i; return 'a' }} var p = Proxy.create({getOwnPropertyNames: function() { return [o] }, ...}) var k = Object.keys(p) // What's the value of i now? /Andreas
Re: Type of property names, as seen by proxy traps
On 7 July 2011 17:58, Brendan Eich bren...@mozilla.com wrote: On Jul 7, 2011, at 8:32 AM, Andreas Rossberg wrote: On 7 July 2011 16:12, David Bruant david.bru...@labri.fr wrote: Derived traps as shown are written in JS for expository purposes. Engines will be free to optimize as they wish internally as long as the observed behavior is the same. True, but optimizing that actually is more tricky than you might think, since in general it would change the semantics if an engine decided to call toString only once. It has to make sure that none of the names are objects, or at least none of their toString methods was modified and they are all free of side effects. Interesting. However, I'm not sure side-effects are a problem. - var o = {a:1, toString:function(){o.b = 12; return 'a'; }}; console.log(o[o], o.b); // 1, 12 on Firefox 5 - Here, o[o] triggers a side effect and that sounds like the normal behavior. I'm not sure I understand what your example is intended to show. But consider this: var i = 0 var o = {toString: function() { ++i; return 'a' }} var p = Proxy.create({getOwnPropertyNames: function() { return [o] }, ...}) var k = Object.keys(p) // What's the value of i now? Fresh tracemonkey tip js shell:

js var i = 0
js var o = {toString: function() { ++i; return 'a' }}
js var p = Proxy.create({getOwnPropertyNames: function() { return [o] }, getOwnPropertyDescriptor: function() { return {value:42} }})
js var k = Object.keys(p)
js i
1

Where would there be a double-conversion? Well, with the canonical fix to the spec we discussed further up the thread (adding a conversion to string in the default trap for `keys') there would (have to) be. So my concern was that that is perhaps not the best fix, despite its simplicity. /Andreas
Re: Type of property names, as seen by proxy traps
On 7 July 2011 19:35, David Bruant david.bru...@labri.fr wrote: No, with the current keys default trap (calling this.getOwnPropertyNames()) there is no double conversion. Only one at the exit of the keys trap. There would be 2 conversions if the keys trap had the proxy argument (based on http://wiki.ecmascript.org/doku.php?id=strawman:handler_access_to_proxy) and if internally, the default keys trap was calling Object.getOwnPropertyNames(proxy) (which would call the trap and do type coercion). But the current implementation and a type coercion only when going out of traps would do double-conversion. not. would not do double-conversion, sorry. I thought the fix we were discussing was changing the `keys' default trap from

keys: function() {
  return this.getOwnPropertyNames().filter(
    function (name) { return this.getOwnPropertyDescriptor(name).enumerable }.bind(this));
}

to something along the lines of

keys: function() {
  return this.getOwnPropertyNames().filter(
    function (name) { return this.getOwnPropertyDescriptor('' + name).enumerable }.bind(this));
}

That would fix passing non-strings to the getOwnPropertyDescriptor trap, but introduce double conversions when you invoke Object.keys. I'm not sure what alternative you are proposing now. /Andreas
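[Editor's note: the double conversion Andreas describes can be counted with a bare handler object standing in for the trap mechanism; no actual proxy is created, since Proxy.create is not available to run against in ordinary engines, and the final coercion the engine would apply on the way out of the trap is modelled here by map(String).]

```javascript
var conversions = 0;
var key = { toString: function () { conversions++; return 'a'; } };
var handler = {
  getOwnPropertyNames: function () { return [key]; },
  getOwnPropertyDescriptor: function (n) { return { enumerable: true }; },
  keys: function () {
    return this.getOwnPropertyNames().filter(function (n) {
      // the proposed fix: coerce before calling getOwnPropertyDescriptor
      return this.getOwnPropertyDescriptor('' + n).enumerable;
    }.bind(this));
  }
};
// The engine must also coerce the trap's result to strings on the way out:
var ks = handler.keys().map(String);
// ks is ['a'] and conversions is 2: the name was converted twice.
```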
Re: Minor type confusion in proxies proposal?
It seems like we need to make a change to Object.defineProperties, too (regardless of the other issue). With the current wording, it will not forward user attributes to the defineProperty trap. The most modular fix (that seems compatible with your proposal below) probably is to change the spec of defineProperties to go through defineProperty, instead of calling the internal [[DefineOwnProperty]] directly. /Andreas On 6 July 2011 14:55, Andreas Rossberg rossb...@google.com wrote: On 3 July 2011 13:29, Tom Van Cutsem tomvc...@gmail.com wrote: Comments? Looks good to me. I agree with Mark's comment that it should do a (shallow) copy of the attributes object, though. I guess the obvious point would be in Object.defineProperty, before passing it to DefineProxyProperty. /Andreas 2011/7/2 Andreas Rossberg rossb...@google.com Hi Tom. On 2 July 2011 13:50, Tom Van Cutsem tomvc...@gmail.com wrote: Hi Andreas, First, you're right about the typing issue: In ES5, for values of type Object, the signature for [[DefineOwnProperty]] would be: [[DefineOwnProperty]](P: a property name, Desc: an internal property descriptor, Throw: a boolean) On trapping proxies, that signature would need to change to: [[DefineOwnProperty]](P: a property name, Desc: an Object, Throw: a boolean) I don't think such a change is consistent. [[DefineOwnProperty]] is invoked in a number of places in the spec, and I think in many of them the type of the receiver is not distinguished and may well be a proxy, so a proxy may then receive both kinds of descriptors. Moreover, I think it would be a mistake to make the appropriate case distinction everywhere -- you really want the internal method to have the same signature in all cases. With that, I believe the strawman is otherwise internally consistent. In [[DefineOwnProperty]] step 5, what will be passed to the user-defined defineProperty trap is a proper Object, not an internal descriptor. 
I did clarify the note you referred to, to be more explicit in this regard. I don't see an alternative to changing the signature of [[DefineOwnProperty]]. It can't just receive an internal descriptor, as it doesn't preserve any non-standard attributes. How about simply bypassing [[DefineOwnProperty]] in Object.defineProperty for proxies, as I suggested in my reply to David? That seems to be the only place where additional objects can occur, or am I wrong? Cheers, /Andreas 2011/7/1 Andreas Rossberg rossb...@google.com On 1 July 2011 12:12, Andreas Rossberg rossb...@google.com wrote: I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field‘s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) I forgot to say: step 5 of the definition invokes the defineProperty trap of the handler passing Desc as the second argument. But the handler expects a reified attributes object. 2. 
In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas
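[Editor's note: the restructuring Andreas suggests for Object.defineProperties can be sketched as follows. The function name is invented for illustration, and the sketch deliberately ignores ES5's two-phase order, which first converts all descriptors and only then defines them; the point is only that routing each attributes object through Object.defineProperty would let a proxy's defineProperty trap see every attributes object via the same path.]

```javascript
// Hypothetical sketch (name invented): delegate each property definition
// to Object.defineProperty instead of calling the internal
// [[DefineOwnProperty]] directly.
function definePropertiesViaDefineProperty(obj, props) {
  Object.keys(props).forEach(function (propName) {
    Object.defineProperty(obj, propName, props[propName]);
  });
  return obj;
}

var r = definePropertiesViaDefineProperty({}, {
  a: { value: 1, enumerable: true },
  b: { value: 2 } // non-enumerable by default
});
// r.a === 1, r.b === 2, Object.keys(r) is ['a']
```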
Re: Minor type confusion in proxies proposal?
Likewise, invoking Object.getOwnPropertyDescriptor on a proxy does not return user attributes. That actually is explicitly noted in the semantics for [[GetOwnProperty]], but I'm not sure I see the rationale behind it. I would prefer a more coherent story with respect to proxies and user attributes on descriptor objects. That is, we should either support such attributes properly, i.e. have them consistently flow both ways (from clients to traps and vice versa). Or we do not support them at all, i.e. filter them out everywhere. /Andreas On 8 July 2011 10:59, Andreas Rossberg rossb...@google.com wrote: It seems like we need to make a change to Object.defineProperties, too (regardless of the other issue). With the current wording, it will not forward user attributes to the defineProperty trap. The most modular fix (that seems compatible with your proposal below) probably is to change the spec of defineProperties to go through defineProperty, instead of calling the internal [[DefineOwnProperty]] directly. /Andreas On 6 July 2011 14:55, Andreas Rossberg rossb...@google.com wrote: On 3 July 2011 13:29, Tom Van Cutsem tomvc...@gmail.com wrote: Comments? Looks good to me. I agree with Mark's comment that it should do a (shallow) copy of the attributes object, though. I guess the obvious point would be in Object.defineProperty, before passing it to DefineProxyProperty. /Andreas 2011/7/2 Andreas Rossberg rossb...@google.com Hi Tom. On 2 July 2011 13:50, Tom Van Cutsem tomvc...@gmail.com wrote: Hi Andreas, First, you're right about the typing issue: In ES5, for values of type Object, the signature for [[DefineOwnProperty]] would be: [[DefineOwnProperty]](P: a property name, Desc: an internal property descriptor, Throw: a boolean) On trapping proxies, that signature would need to change to: [[DefineOwnProperty]](P: a property name, Desc: an Object, Throw: a boolean) I don't think such a change is consistent. 
[[DefineOwnProperty]] is invoked in a number of places in the spec, and I think in many of them the type of the receiver is not distinguished and may well be a proxy, so a proxy may then receive both kinds of descriptors. Moreover, I think it would be a mistake to make the appropriate case distinction everywhere -- you really want the internal method to have the same signature in all cases. With that, I believe the strawman is otherwise internally consistent. In [[DefineOwnProperty]] step 5, what will be passed to the user-defined defineProperty trap is a proper Object, not an internal descriptor. I did clarify the note you referred to, to be more explicit in this regard. I don't see an alternative to changing the signature of [[DefineOwnProperty]]. It can't just receive an internal descriptor, as it doesn't preserve any non-standard attributes. How about simply bypassing [[DefineOwnProperty]] in Object.defineProperty for proxies, as I suggested in my reply to David? That seems to be the only place where additional objects can occur, or am I wrong? Cheers, /Andreas 2011/7/1 Andreas Rossberg rossb...@google.com On 1 July 2011 12:12, Andreas Rossberg rossb...@google.com wrote: I believe there is some type confusion in the proxy proposal spec wrt property descriptors and their reification into attributes objects. 1. In a note on the def of [[DefineOwnProperty]] for proxies, the proposal says: The Desc argument to this trap is a property descriptor object validated by ToPropertyDescriptor, except that it also retains any non-standard attributes present in the original property descriptor passed to Object.defineProperty. See the semantics of the modified Object.defineProperty built-in, below. That seems fishy, since according to ES5 8.10: Values of the Property Descriptor type are records composed of named fields where each field‘s name is an attribute name and its value is a corresponding attribute value as specified in 8.6.1. 
In particular, I take this to mean that property descriptors are not objects (but abstract records), and that there cannot be any fields whose name is not an attribute name. (In fact, in V8 we currently encode property descriptors using objects, but the encoding is different from the reified attributes object representation, and not quite compatible with the idea of adding arbitrary other fields.) I forgot to say: step 5 of the definition invokes the defineProperty trap of the handler passing Desc as the second argument. But the handler expects a reified attributes object. 2. In the modified definition of Object.defineProperty, the proposal says in step 4.c: Call the [[DefineOwnProperty]] internal method of O with arguments name, descObj, and true. This is passing descObj, which in fact is _not_ a descriptor, but its reification as an attributes object. /Andreas
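The descriptor-versus-attributes-object mismatch is observable even without proxies: ToPropertyDescriptor reads only the standard attribute names, so any extra property on the attributes object is silently dropped. A minimal ES5 sketch (plain objects, no proxy machinery involved):

```javascript
var o = {};

// The attributes object carries a non-standard "metadata" property.
Object.defineProperty(o, "x", { value: 1, enumerable: true, metadata: "lost" });

// ToPropertyDescriptor retained only the standard fields, so the
// reified descriptor we read back shows no trace of "metadata".
var desc = Object.getOwnPropertyDescriptor(o, "x");
console.log(desc.value);         // 1
console.log("metadata" in desc); // false
```

This is exactly why a proxy's defineProperty trap cannot receive an internal descriptor if non-standard attributes are to survive.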
Re: using Private name objects for declarative property definition.
On 8 July 2011 21:16, Allen Wirfs-Brock al...@wirfs-brock.com wrote: The current versions of the private names proposal http://wiki.ecmascript.org/doku.php?id=harmony:private_name_objects simply exposes a constructor for creating unique values that can be used as property keys: Of the several private names proposals around, I find this one preferable. It is clean and simple, and provides the functionality needed in an orthogonal manner. It seems worth exploring how well this works in practice before we settle on something more complicated. One minor suggestion I'd have is to treat names as a proper new primitive type, i.e. typeof key == "name", not "object". That way, it can be defined much more cleanly what a name is, where its use is legal (as opposed to proper objects), and where it may enjoy special treatment. Another alternative that avoids using the 'private' prefix is to allow the property name in a property definition to be enclosed with brackets:

const __x = Name.create();
const __y = Name.create();
const __validate = Name.create();

Point = {
  // private members
  [__x]: 0,
  [__y]: 0,
  [__validate](x, y) { return typeof x == 'number' && typeof y == 'number' },
  // public members
  new(x, y) {
    if (!this[__validate](x, y)) throw "invalid";
    return this <| { [__x]: x, [__y]: y }
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y])
  }
}

I like this notation most, because it can be generalised in a consistent manner beyond the special case of private names: there is no reason that the bit in brackets is just an identifier, we could allow arbitrary expressions. So the notation would be the proper dual to bracket access notation. From a symmetry and expressiveness perspective, this is very appealing. Notation-wise, I think people would get used to using brackets. I see no good reason to introduce yet another projection syntax, like @. Whether additional sugar is worthwhile -- e.g. private declarations -- remains to be explored. 
(To be honest, I haven't quite understood yet in what sense such sugar would really be more declarative. Sure, it is convenient and perhaps more readable. But being declarative is a semantic property, and cannot be achieved by simple syntax tweaks.) /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
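For what it's worth, the bracketed form discussed here did later become real syntax (computed property names), with symbols playing roughly the role of private name objects. A sketch of the Point example in that later syntax, substituting Symbol for the proposal's Name.create and Object.create/Object.assign for the <| operator:

```javascript
const __x = Symbol("x");
const __y = Symbol("y");
const __validate = Symbol("validate");

const Point = {
  // "private" members, keyed by symbols instead of strings
  [__x]: 0,
  [__y]: 0,
  [__validate](x, y) { return typeof x === "number" && typeof y === "number"; },
  // public members
  new(x, y) {
    if (!this[__validate](x, y)) throw new Error("invalid");
    return Object.assign(Object.create(Point), { [__x]: x, [__y]: y });
  },
  add(anotherPoint) {
    return this.new(this[__x] + anotherPoint[__x], this[__y] + anotherPoint[__y]);
  }
};

const p = Point.new(1, 2).add(Point.new(3, 4));
console.log(p[__x], p[__y]); // 4 6
```

The symbol-keyed properties are invisible to for-in, Object.keys, and JSON.stringify, giving much of the encapsulation the names proposal was after.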
Re: using Private name objects for declarative property definition.
On 9 July 2011 00:24, Brendan Eich bren...@mozilla.com wrote: On Jul 8, 2011, at 2:43 PM, Andreas Rossberg wrote: One minor suggestion I'd have is to treat names as a proper new primitive type, i.e. typeof key == "name", not "object". That way, it can be defined much more cleanly what a name is, where its use is legal (as opposed to proper objects), and where it may enjoy special treatment. We went back and forth on this. I believe the rationale is in the wiki (but perhaps in one of the strawman:*name* pages). There are a couple of reasons: 1. We want private name objects to be usable as keys in WeakMaps. Clearly we could extend WeakMaps to have either object (but not null) or name typeof-type keys, but that complexity is not warranted yet. I can see that being a relevant use case for weak maps. But the same logic applies to using, say, strings or numbers as keys. So isn't the right fix rather to allow weak maps to be keyed on any JS value? 2. Private name objects are deeply frozen and behave like value types (since they have no copy semantics and you can only generate fresh ones). Thus they are typeof-type "object" but clearly distinct from string-equated property names that JS has sported so far. Oh, of course you meant to distinguish private names via typeof precisely to tell that they are not converted to strings when used as property names. For that test, the proposal http://wiki.ecmascript.org/doku.php?id=harmony:private_name_objects proposes an isName predicate function exported from the @name built-in module. Yes, I know. But this is introducing a kind of ad-hoc shadow type mechanism. Morally, this is a type distinction, so why not make it one? Moreover, I feel that ES already has too many classification mechanisms (typeof, class, instanceof), so adding yet another one through the back door doesn't seem optimal. [...] in the case of private name objects, we don't think we have good enough reason to add a typeof "name" -- and then to complicate WeakMap. 
Why do you think that it would make WeakMap more complicated? As far as I can see, implementations will internally make that very type distinction anyways. And the spec also has to make it, one way or the other. I like this notation most, because it can be generalised in a consistent manner beyond the special case of private names: there is no reason that the bit in brackets is just an identifier, we could allow arbitrary expressions. Then the shape of the object is not static. Perhaps this is worth the costs to implementations and other analyzers (static program analysis, human readers). We should discuss a bit more first, as I just wrote in reply to Allen. I don't think that the more general form would be a big deal for implementations. And it is still easy to identify object expressions with static shape syntactically: they don't use [_]. Analyses shouldn't be harder than for a series of assignments (which you probably could desugar this into). Here, with obj = { [expr]: value } as the way to compute a property name in an object initialiser (I must not write object literal any longer), we are proceeding up another small and separate hill. But, is this the right design for object initialisers (which the normative grammar does call ObjectLiterals)? (If you care about that, then that's a misnomer already, since the property values have always been arbitrary expressions.) Thanks, /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
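As a historical footnote to this exchange: the typeof-based distinction Andreas argues for is essentially what later editions adopted for symbols, which are a genuine primitive type rather than frozen objects:

```javascript
const key = Symbol("key");

// Symbols got their own typeof result instead of an isName predicate.
console.log(typeof key);            // "symbol"
console.log(key instanceof Object); // false

// They are still usable as property keys, without conversion to string.
const obj = { [key]: 42 };
console.log(obj[key]); // 42
```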
Re: Array generation
On 10 July 2011 22:23, David Herman dher...@mozilla.com wrote: Another common and useful fusion of two traversals that's in many Schemes is map-filter or filter-map: a.filterMap(f) ~~~ [res for [i,x] of items(a) let (res = f(x, i)) if (res !== void 0)] I rather arbitrarily chose to accept both null and undefined here as a way to say no element -- a reasonable alternative would be to accept *only* undefined as no element. \bikeshed{ The SML lib calls this one mapPartial, which I think is a much better name. } /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
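The fused traversal can be sketched as a plain function, without comprehension syntax; this version follows the second convention Dave mentions, treating only undefined (not null) as "no element":

```javascript
function filterMap(a, f) {
  var out = [];
  for (var i = 0; i < a.length; i++) {
    var res = f(a[i], i);
    if (res !== void 0) out.push(res); // undefined means "skip this element"
  }
  return out;
}

filterMap([1, 2, 3, 4], function (x) {
  return x % 2 === 0 ? x * 10 : void 0;
}); // [20, 40]
```

A single pass replaces the map-then-filter pair, which is the whole point of the fusion.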
Re: using Private name objects for declarative property definition.
On 9 July 2011 14:42, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote: Unlike Names, strings and numbers are forgeable, so if they were allowed as keys in WeakMaps, the associated value could never be safely collected. Names, by contrast, have identity. Of course you are right, and I shouldn't post in the middle of the night. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: using Private name objects for declarative property definition.
On 9 July 2011 17:48, Brendan Eich bren...@mozilla.com wrote: Adding names as a distinct reference and typeof type, extending WeakMap to have as its key type (object | name), adds complexity compared to subsuming name under object. It seems to me that you are merely shifting the additional complexity from one place to another: either weak maps must be able to handle (object + name) for keys (instead of just object), or objects must handle (string + object * isName) as property names (instead of just string + name). Moreover, the distinction between names and proper objects will have to be deeply engrained in the spec, because it changes a fundamental mechanism of the language. Whereas WeakMaps are more of an orthogonal feature with rather local impact on the spec. (The same is probably true for implementations.) Why do you think that it would make WeakMap more complicated? As far as I can see, implementations will internally make that very type distinction anyways. No, as proposed private name objects are just objects, and WeakMap implementations do not have to distinguish (apart from usual GC mark method virtualization internal to implementations) between names and other objects used as keys. And the spec also has to make it, one way or the other. Not if names are objects. I think an efficient implementation of names in something like V8 will probably want to assign different internal type tags to them either way. Otherwise, we'll need extra tests for each property access, and cannot specialise as effectively. I'm not sure which class you mean. The [[ClassName]] disclosed by Object.prototype.toString.call(x).slice(8,-1) is one possibility, which is one of the many and user-extensible ways of distinguishing among objects. ES.next class is just sugar for constructor/prototype patterns with crucial help for extends and super. I meant the [[Class]] property (I guess that's what you are referring to as well). 
Not sure what you mean when you say it is user-extensible, though. Is it in some implementations? (I'm aware of the somewhat scary note on p.32 of the spec.) Or are you just referring to the toString method? I appreciate the ongoing discussion, but I'm somewhat confused. Can I ask a few questions to get a clearer picture?

1. We seem to have (at least) a two-level nominal type system: the first level is what is returned by typeof, the second refines the object type and is hidden in the [[Class]] property (and then there is the oddball function type, but let's ignore that). Is it the intention that all type testing predicates like isArray, isName, isGenerator will essentially expose the [[Class]] property?

2. If there are exceptions to this, why? Would it make sense to clean this up? (I saw Allen's cleanup strawman, but it seems to be going the opposite direction, and I'm not quite sure what it's trying to achieve exactly.)

3. If we can get to a uniform [[Class]] mechanism, maybe an alternative to various ad-hoc isX attributes would be a generic classof operator?

4. What about proxies? Is the idea that proxies can *never* emulate any behaviour that relies on a specific [[Class]]? For example, I cannot proxy a name. Also, new classes can only be introduced by the spec.

5. What are the conventions by which the library distinguishes between regular object properties and operations, and meta (reflective) ones? It seems to me that part of the confusion(?) in the discussion is that the current design makes no real distinction. I think it is important, though, since e.g. proxies should be able to trap regular operations, but not reflective ones (otherwise, e.g. isProxy wouldn't make sense). Also, modern reflective patterns like mirrors make the point that no reflective method should be on the reflected object itself.

Thanks, /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
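Regarding the classof idea: it is already expressible as a helper over Object.prototype.toString, which is exactly what the .slice(8,-1) trick mentioned earlier extracts:

```javascript
function classOf(x) {
  // "[object Array]" -> "Array", and so on.
  return Object.prototype.toString.call(x).slice(8, -1);
}

console.log(classOf([]));   // "Array"
console.log(classOf("s"));  // "String"
console.log(classOf(null)); // "Null"
console.log(classOf(/re/)); // "RegExp"
```

(In later editions this tag did become user-extensible via Symbol.toStringTag, which bears on the user-extensibility question above.)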
Re: Proxy.isProxy (Was: using Private name objects for declarative property definition.)
I fully agree that isProxy sounds like a bad idea. It just breaks the proxy abstraction. /Andreas On 13 July 2011 10:26, Andreas Gal g...@mozilla.com wrote: I really don't think IsProxy is a good idea. It can lead to subtle bugs depending on whether an object is a DOM node, or a wrapper around a DOM node (or whether the embedding uses a proxy to implement DOM nodes or not). In Firefox we plan on making some DOM nodes proxies for example, but not others. I really don't think there is value in exposing this to programmers. Andreas On Jul 13, 2011, at 1:23 AM, Tom Van Cutsem wrote: Perhaps Proxy.isProxy was used merely as an example, but wasn't the consensus that Proxy.isProxy is not needed? Dave pointed out that it breaks transparent virtualization. Also, there is Object.isExtensible which always returns |true| for (trapping) proxies. That means we already have half of Proxy.isProxy without exposing proxies: if !Object.isExtensible(obj), obj is guaranteed not to be a proxy. Cheers, Tom 2011/7/9 Brendan Eich bren...@mozilla.com Also the Proxy.isTrapping, which in recent threads has been proposed to be renamed to Proxy.isProxy or Object.isProxy. ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
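The transparent-virtualization argument can be illustrated with the Proxy constructor that eventually shipped (rather than the Proxy.create of this thread): a proxy forwarding to an array is indistinguishable from the array through standard operations, and an isProxy predicate would break exactly that property:

```javascript
const target = [1, 2, 3];
const proxy = new Proxy(target, {}); // empty handler: forward everything

// Standard reflection cannot tell the wrapper from the real thing...
console.log(Array.isArray(proxy)); // true
console.log(proxy.length);         // 3

// ...so there is deliberately no Proxy.isProxy to pierce the abstraction.
```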
Re: Type of property names, as seen by proxy traps
Very much appreciated. Given all the nasty mutability in JS, I think for the non-normative JS implementations you also might want to note that it assumes that nobody has hampered with the primordial Object object. /Andreas On 14 July 2011 12:34, Tom Van Cutsem tomvc...@gmail.com wrote: Follow-up: I updated http://wiki.ecmascript.org/doku.php?id=harmony:proxies_semantics with a more precise specification of the default behavior of all derived traps. This should also resolve the double-coercion issue with Object.keys. In summary: - for has, hasOwn and get it's easy to fall back on specifications of existing built-ins (Object.[[HasProperty]] for has, Object.prototype.hasOwnProperty for hasOwn and Object.[[Get]] for get) - for set, falling back on Object.[[Put]] is not ideal, as this built-in performs redundant invocations of [[Get{Own}Property]] through [[CanPut]]. Starting from Object.[[Put]] and the default set trap as specified in JS itself, I formulated a new DefaultPut algorithm that avoids this redundancy. - for keys and enumerate, there is no proper built-in to fall back on. I added two algorithms (FilterEnumerableOwn and FilterEnumerable) that take the uncoerced result of the get{Own}PropertyNames trap, and filter out the enumerable properties, specced after Array.prototype.filter. I also updated http://wiki.ecmascript.org/doku.php?id=harmony:proxies#trap_defaults so that it is clear that that section is only a non-normative description of how derived traps could be implemented in pure Javascript. 
Cheers, Tom 2011/7/8 Tom Van Cutsem tomvc...@gmail.com I believe the alternative that David is talking about is the following (pending the acceptance of http://wiki.ecmascript.org/doku.php?id=strawman:handler_access_to_proxy) keys: function(proxy) { return Object.getOwnPropertyNames(proxy).filter( function (name) { return Object.getOwnPropertyDescriptor(proxy, name).enumerable }); } (assuming that Object here refers to the built-in Object) With this definition, I don't see the need for double coercion: the handler's getOwnPropertyNames trap is called, and its result is coerced. Then, the proxy implementation knows that each of the above |name|s passed to getOwnPropertyDescriptor will be a String already, so it doesn't need to coerce again. Finally, `keys' does not need to coerce its own result array, since it is simply a filtered version of an already fresh, coerced array. Perhaps all self-sends to fundamental traps should be expressed in terms of the operation that causes the trap, rather than a direct trap invocation. Similar issues could arise in the default 'set' trap behavior when it calls 'this.defineProperty' rather than 'Object.defineProperty(proxy,...)'. 2011/7/7 Andreas Rossberg rossb...@google.com On 7 July 2011 19:35, David Bruant david.bru...@labri.fr wrote: No, with the current keys default trap (calling this.getOwnPropertyNames()) there is no double conversion. Only one at the exit of the keys trap. There would be 2 conversions if the keys trap had the proxy argument (based on http://wiki.ecmascript.org/doku.php?id=strawman:handler_access_to_proxy) and if internally, the default keys trap was calling Object.getOwnPropertyNames(proxy) (which would call the trap and do type coercion). But the current implementation and a type coercion only when going out of traps would do double-conversion. not. would not do double-conversion, sorry. 
I thought the fix we were discussing was changing the `keys' default trap from

keys: function() {
  return this.getOwnPropertyNames().filter(
    function (name) { return this.getOwnPropertyDescriptor(name).enumerable }.bind(this));
}

to something along the lines of

keys: function() {
  return this.getOwnPropertyNames().filter(
    function (name) { return this.getOwnPropertyDescriptor('' + name).enumerable }.bind(this));
}

That would fix passing non-strings to the getOwnPropertyDescriptor trap, but introduce double conversions when you invoke Object.keys. I'm not sure what alternative you are proposing now. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
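The single-coercion structure Tom describes can be sketched as a standalone function over ordinary objects (enumerableOwnKeys is a hypothetical name): the name list is obtained once, already as strings, and those same strings feed the descriptor lookups, so nothing is converted twice:

```javascript
function enumerableOwnKeys(obj) {
  // getOwnPropertyNames already yields strings, so the filter step can
  // pass them to getOwnPropertyDescriptor without any re-coercion.
  return Object.getOwnPropertyNames(obj).filter(function (name) {
    return Object.getOwnPropertyDescriptor(obj, name).enumerable;
  });
}

var o = Object.create(null, {
  a: { value: 1, enumerable: true },
  b: { value: 2, enumerable: false }
});
enumerableOwnKeys(o); // ["a"]
```

Applied to a proxy, the outer calls would go through the getOwnPropertyNames and getOwnPropertyDescriptor traps, coercing each result exactly once on the way out.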
Re: Type of property names, as seen by proxy traps
tampered, not hampered, of course... On 14 July 2011 12:52, Andreas Rossberg rossb...@google.com wrote: Very much appreciated. Given all the nasty mutability in JS, I think for the non-normative JS implementations you also might want to note that it assumes that nobody has hampered with the primordial Object object. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: July TC39 meeting notes, day 1
On 28 July 2011 10:35, David Bruant david.bru...@labri.fr wrote: Le 28/07/2011 06:21, Brendan Eich a écrit : == Handler access to proxies == Proxy handler traps need to receive the proxy as a parameter: first, or last? Last allows trap implementors to leave |proxy| off. It's also a compatible extension to the proposal and its prototype implementations. Putting |proxy| last may also steer implementors away from touching proxy, reducing the bugs where you infinitely diverge. First is more normal-order (proxy, name) and some find it more aesthetically pleasing. Another alternative: the proxy could be passed via a data property on the handler. I think we discussed already the idea of proxy being passed as a data property to the handler and came to the conclusion that it may not be a good idea, because it breaks the stratification. If two proxies use the same handler as in [2], then, there is an ambiguity on what the value of this property should be. The solution we discussed is to simply use prototypes. That is, share handler methods by putting them on a (single) prototype object, and have per-proxy instances that carry the individual proxy references (or other per-proxy data, for that matter). /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
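The prototype-based sharing Andreas describes might look as follows, sketched with the modern Proxy constructor (this thread predates that API): trap logic lives once on a shared prototype, while each proxy gets its own handler instance carrying per-proxy data reachable through this inside the traps.

```javascript
// Shared trap methods live on a single prototype object.
const sharedTraps = {
  get(target, name) {
    // Per-proxy data is found on the handler instance via `this`,
    // since traps are invoked as methods of the handler.
    return name in target ? target[name] : this.defaultValue;
  }
};

function makeProxy(target, defaultValue) {
  // Each proxy gets its own handler instance carrying its own data.
  const handler = Object.create(sharedTraps);
  handler.defaultValue = defaultValue;
  return new Proxy(target, handler);
}

const p1 = makeProxy({ a: 1 }, "missing-1");
const p2 = makeProxy({}, "missing-2");
console.log(p1.a); // 1
console.log(p1.b); // "missing-1"
console.log(p2.b); // "missing-2"
```

This keeps stratification intact: no proxy reference needs to be stored on the shared trap code itself.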
Re: July TC39 meeting notes, day 1
On 28 July 2011 20:34, David Bruant david.bru...@labri.fr wrote: Le 28/07/2011 19:52, Andreas Rossberg a écrit : On 28 July 2011 10:35, David Bruant david.bru...@labri.fr wrote: Le 28/07/2011 06:21, Brendan Eich a écrit : == Handler access to proxies == Proxy handler traps need to receive the proxy as a parameter: first, or last? Last allows trap implementors to leave |proxy| off. It's also a compatible extension to the proposal and its prototype implementations. Putting |proxy| last may also steer implementors away from touching proxy, reducing the bugs where you infinitely diverge. First is more normal-order (proxy, name) and some find it more aesthetically pleasing. Another alternative: the proxy could be passed via a data property on the handler. I think we discussed already the idea of proxy being passed as a data property to the handler and came to the conclusion that it may not be a good idea, because it breaks the stratification. If two proxies use the same handler as in [2], then, there is an ambiguity on what the value of this property should be. The solution we discussed is to simply use prototypes. That is, share handler methods by putting them on a (single) prototype object, and have per-proxy instances that carry the individual proxy references (or other per-proxy data, for that matter). This is a pattern that I have seen used by Tom a lot and that I really like too, but you can't force a user to do that. So I assume, you would systematically add a base object and use the argument handler as its prototype? --- // h is a handler object var p1 = Proxy.create(h); var p2 = Proxy.create(h); --- When a user does this, what does he want? To use the exact same handler (same object identity)? Or to use the same logic but different internal properties? The solution you discussed seems to assume the latter, but who knows? And how do I implement the former if the proxy spec imposes that the object I pass internally becomes another object? 
I'm not sure I understand what you are asking. The solution I mentioned is purely user-side. There is no magic assumed in the proxy semantics. If you pass the same handler twice, it will be the same handler. If you need proxy-specific state, pass different handlers. If you still want some form of code sharing, use prototypal delegation. Now that I think about it, it's a bit weird that the proxy API allows to create several proxies with the same handler (same object identity). Maybe the API could be reworked in order to prevent it? Maybe Proxy.create should return the same proxy object if provided the same handler (p1 === p2, here)? I agree that there probably aren't too many useful examples for using the same handler. However, I also don't see a good reason for disallowing it, nor to require Proxy.create to memoise all handlers. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Adding methods to {Array,String}.prototype
On 29 July 2011 19:08, Brendan Eich bren...@mozilla.com wrote: I did not mean multimethods (generic functions is a confusing term, since it also means functions that work for parameters of any type; also generic suggests generics, i.e. type parameters). Generic is a heavily overloaded term. I forgot who said it, but I remember a quote along the lines of by `generic' people always refer to the sort of polymorphism that your favourite language doesn't have. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: An experiment using an object literal based class definition pattern
On 4 August 2011 22:57, Allen Wirfs-Brock al...@wirfs-brock.com wrote: Using these two new ideas and other active object literal enhancement proposals it is pretty easy to compose a class-like declaration. For example the SkinnedMesh from the http://wiki.ecmascript.org/doku.php?id=harmony:classes proposal can be coded as:

const SkinnedMesh = THREE.Matrix4.Mesh <| function(geometry, materials) {
  super.constructor(geometry, materials);
  this.{
    identityMatrix: new THREE.Matrix4(),
    bones: [],
    boneMatrices: []
  };
}.prototype.{
  update(camera) {
    ...
    super.update(camera);
  }
}.constructor.{
  default() { return new this(THREE.defaultGeometry, THREE.defaultMaterials); }
};

Sorry for chiming in late, but I don't understand this example. My understanding so far was that <| takes a literal on its right-hand side. But that doesn't seem to be the case here. So what is the intended semantics? Does <| mutate the rhs object (that would make it equivalent to exposing mutable __proto__, which seems bad)? Or does it copy the object (then how is the copy defined)? Or do you consider <| to take precedence over .{}? In that case, the example wouldn't make sense, because <| wouldn't see the prototype property. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
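Since neither <| nor .{ ever shipped, the rough intent of the example can be approximated with standard primitives -- Object.create for the prototype wiring and Object.assign for property extension. This is a loose sketch using hypothetical stand-in names (BaseMesh for the THREE superclass), and it elides the super-binding and the class-side default method of Allen's original:

```javascript
// Hypothetical stand-in: BaseMesh plays the role of the THREE superclass.
function BaseMesh(geometry, materials) {
  this.geometry = geometry;
  this.materials = materials;
}

function SkinnedMesh(geometry, materials) {
  BaseMesh.call(this, geometry, materials); // ~ super.constructor(...)
  Object.assign(this, {                     // ~ this.{ ... }
    bones: [],
    boneMatrices: []
  });
}
// ~ BaseMesh <| function ... : wire up the prototype chain explicitly.
SkinnedMesh.prototype = Object.create(BaseMesh.prototype);
SkinnedMesh.prototype.constructor = SkinnedMesh;

const m = new SkinnedMesh("geom", "mats");
console.log(m.bones.length);        // 0
console.log(m instanceof BaseMesh); // true
```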
Re: July TC39 meeting notes, day 1
On 4 August 2011 19:34, Brendan Eich bren...@mozilla.com wrote: On Jul 31, 2011, at 1:04 PM, Sean Eagan wrote: A 'receiver' argument is not needed because it would never be different from the proxy, and the proxy can either be passed as an argument or stored either as an own property of the handler, or as a value keyed by the handler in a weak map, which there seems to have been TC39 consensus on. Ok, right -- even without the extra proxy parameter in addition to receiver, dropping receiver makes sense. Sorry to go in a circle on this. It's a trap API change, and I agree with Mark that we need Tom to bless it. I would welcome removing the extra receiver (or proxy) arguments from get and set traps. However, it seems to me that the main reason, currently, for having them is that they are needed by the default traps, in case the respective descriptor returned by getOwnPropertyDescriptor has a getter/setter (which need a receiver). Arguably, making a proxy trap return getters/setters seems a somewhat pointless use case anyway. But nevertheless we need to have some reasonable semantics for it. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
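The getter/setter case Andreas mentions is the crux: an accessor fetched from the target must run with the original receiver as this. In the API as it later shipped, the get trap's third argument exists precisely for this, threaded through Reflect.get. A sketch:

```javascript
const target = {
  _x: 1,
  get x() { return this._x; } // needs the right `this` to see _x
};

const proxy = new Proxy(target, {
  get(t, name, receiver) {
    // Forwarding the receiver keeps accessor semantics intact.
    return Reflect.get(t, name, receiver);
  }
});

// Start the lookup on an object inheriting from the proxy.
const derived = Object.create(proxy);
derived._x = 42;
console.log(derived.x); // 42 -- the getter ran with `derived` as receiver
```

Dropping the receiver would force the getter to run with the target as this, yielding 1 here instead of 42.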
Re: July TC39 meeting notes, day 1
On 8 August 2011 18:46, Kevin Reid kpr...@google.com wrote: On Mon, Aug 8, 2011 at 08:50, Andreas Rossberg rossb...@google.com wrote: I would welcome removing the extra receiver (or proxy) arguments from get and set traps. However, it seems to me that the main reason, currently, for having them is that they are needed by the default traps, in case the respective descriptor returned by getOwnPropertyDescriptor has a getter/setter (which need a receiver). This is almost the rationale I gave earlier. To be precise, the default traps themselves need not have behavior which is implementable as an explicit trap (since they are not exposed as being functions which take the same parameters as user-supplied traps do). I feel the receiver should be provided so that user-supplied traps *can mimic the default traps*, with variations or optimizations. Arguably, making a proxy trap return getters/setters seems a somewhat pointless use case anyway. But nevertheless we need to have some reasonable semantics for it. It allows a proxy to pretend to be an object which supports Object.defineOwnProperty normally. It allows a proxy to emulate, or wrap, an ordinary object which happens to have some accessor properties, while still being transparent to reflection (which I understand is one of the goals of the proxy facility). Sure, but is that necessarily something that the _default_ traps have to be able to mimic? There is no problem programming it up yourself if you want it. I'm not saying yes or no, just raising the question. At least the additional arguments seem like a significant complication (and asymmetry) to the proxy interface for very limited benefit. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: July TC39 meeting notes, day 1
On 9 August 2011 19:25, Kevin Reid kpr...@google.com wrote: On Tue, Aug 9, 2011 at 01:17, Andreas Rossberg rossb...@google.com wrote: On 8 August 2011 18:46, Kevin Reid kpr...@google.com wrote: On Mon, Aug 8, 2011 at 08:50, Andreas Rossberg rossb...@google.com wrote: Arguably, making a proxy trap return getters/setters seems a somewhat pointless use case anyway. But nevertheless we need to have some reasonable semantics for it. It allows a proxy to pretend to be an object which supports Object.defineOwnProperty normally. It allows a proxy to emulate, or wrap, an ordinary object which happens to have some accessor properties, while still being transparent to reflection (which I understand is one of the goals of the proxy facility). Sure, but is that necessarily something that the _default_ traps have to be able to mimic? There is no problem programming it up yourself if you want it. Are you proposing a revised division of fundamental vs. derived traps? If not, what do you propose the default derived get or set trap do in the event that it gets an accessor property descriptor in response to getOwnPropertyDescriptor? I guess my point was that there is no natural law demanding that the default traps have perfect semantics. So we could e.g. pass null to the accessors in that case. If you need a different semantics, program it. I'm not necessarily proposing that path, just pointing out the possibility. On the other hand, it is not at all necessary that we are able to express the default traps as _closed_ JS functions. If the proxy object is simply bound to a free variable in the get/set default code, for example, I don't see that as a real problem. So that might be an alternative solution as far as the spec is concerned (without resorting to something more low-level). 
Your argument against the latter kind of approach, as I understood it, was that you as a programmer want to be able to simulate the _exact_ behaviour of the individual default traps in real code, _in isolation_. I'm not sure why this has to be a goal. As you long as you can simulate the overall behaviour easily, that seems good enough. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: July TC39 meeting notes, day 1
On 12 August 2011 13:53, Tom Van Cutsem tomvc...@gmail.com wrote: I think I found a compelling and easy-to-understand rule for determining whether or not a trap needs access to proxy/receiver: if the trap deals with inherited properties, it needs access to |proxy|. Using that rule, the following traps require access to |proxy|: get, set, getPropertyDescriptor, getPropertyNames, has, enumerate (incidentally, all of these traps are or can be made derived) All other traps deal only with own properties, and do not need |proxy|: getOwnPropertyDescriptor, getOwnPropertyNames, defineProperty, delete, fix, hasOwn, keys Although that rule seems fairly simple, I still find a half/half situation unnecessarily confusing and error-prone. I would strongly vote for making the API consistent. That is, either equip all methods with a proxy argument (preferably as first), or none. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Where in spec does it explain why setting the value of an existing property will change its [[Enumerable]] attribute.
On 17 August 2011 21:24, John-David Dalton john.david.dal...@gmail.com wrote: Another odd thing is that V8 uses the `Array#push` internally for `Object.defineProperties`. I noticed that if I set `Array.prototype.push = 1;` using `Object.defineProperties(…)` would error complaining about `push` not being a function. Yes, there are still several bugs like that in the V8 built-ins. The reason is that most of these built-ins are written in JS itself, and in some places the code isn't careful enough about applying the original function instead of just invoking an object method. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
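The failure mode Andreas describes can be illustrated with a small sketch (a hypothetical example, not actual V8 built-in code): self-hosted code must apply a saved original function rather than a method looked up on the object at call time.

```javascript
// Robust built-ins save the original up front and apply it explicitly,
// instead of relying on arr.push still being the original function.
const originalPush = Array.prototype.push;

const arr = [];
arr.push = 1;                 // user code shadows push on the instance
// arr.push(2) would now throw: arr.push is not a function
originalPush.call(arr, 2);    // the saved original still works
console.log(arr[0]); // 2
```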
Block scoping and redeclarations
We are currently in the process of implementing block scoping for V8 (http://wiki.ecmascript.org/doku.php?id=harmony:block_scoped_bindings). Brendan and Dave suggest that certain combinations of `let' and `var' should be an error (more precisely, a syntax error, I assume). However, there are various possibilities to interpret this. Assume that each line in the following is a function scope: { let x; var x } // 1a { var x; let x } // 1b { let x; { var x } } // 2a { var x; { let x } } // 2b { { let x } var x } // 3a { { var x } let x } // 3b { { let x } { var x } } // 4a { { var x } { let x } } // 4b 1a-2a should clearly be errors. Same for 3b arguably, because the var is hoisted to the same scope as the let. In 2b, 3a, and 4a/b, a var is shadowed by a let, which isn't a problem in principle. OTOH, strictly speaking, at least 3a and 4a actually introduce a var-declaration that has already been shadowed by a let-declaration (Dave's words). There are lots of arguments that can be made here, but ultimately, my feeling is that any rule that allows some of the examples above, but not others, is both brittle and confusing, and potentially too complicated to memorize correctly for the average programmer. Consequently, we propose a very simple rule instead: * It is a syntax error if a given identifier is declared by both a let-declaration and a var-declaration in the same function. (And similarly, for const vs. var, or function vs. var -- the latter being an incompatible change for the global scope, but it seems like we may abolish that anyway.) We could go even further with the first point: we could make it a syntax error to mix var and let _at all_ in a single function, regardless of what identifiers they declare. I would be perfectly fine with that, too, but expect that others would disagree. * In a similar vein, I think we should probably forbid `var' to coexist with _any_ other form of binding for the same variable in a single function.
In particular, for Harmony mode we should rule out the infamous try .. catch(x) { var x = 666; ...}. * Finally, do we allow redeclaring functions with let or const, like it used to be the case with var? I propose disallowing it. What do you think? Thanks, /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Block scoping and redeclarations
On 23 August 2011 19:14, Allen Wirfs-Brock al...@wirfs-brock.com wrote: On Aug 23, 2011, at 5:31 AM, Andreas Rossberg wrote: We are currently in the process of implementing block scoping for V8 (http://wiki.ecmascript.org/doku.php?id=harmony:block_scoped_bindings). The wiki page spec. is neither complete nor up to date so I wouldn't depend too much on what it says. I'm in the process of writing the actual draft specification for block scoped declaration and expect to have it ready for review in advance of the next TC39 meeting. It's up to you, but it might be more economical to put off your implementation for a couple weeks until that spec. is ready That's OK, we are aware that the spec is not finalized yet. For most part, the semantics is rather obvious, and we try to avoid putting too much work into aspects that aren't clear yet (such as the global scope). Brendan and Dave suggest that certain combinations of `let' and `var' should be an error (more precisely, a syntax error, I assume). However, there are various possibilities to interpret this. Assume that each line in the following is a function scope: { let x; var x } // 1a { var x; let x } // 1b { let x; { var x } } // 2a { var x; { let x } } // 2b { { let x } var x } // 3a { { var x } let x } // 3b { { let x } { var x } } // 4a { { var x } { let x } } // 4b 1a-2a should clearly be errors. Same for 3b arguably, because the var is hoisted to the same scope as the let. In 2b, 3a, and 4a/b, a var is shadowed by a let, which isn't a problem in principle. OTOH, strictly speaking, at least 3a and 4a actually introduce a var-declaration that has already been shadowed by a let-declaration (Dave's words). I think the July meeting discussion covers all of these cases and I agree that 1a,1b, 2a,3b are errors and 2b,3a,4a,4b are not. Hm, I'm not sure I remember that we discussed this particular aspect in detail in July. Sorry if I missed something. I don't think Dave's quote applies to 3a and 4a. 
The var declaration is always logically hoisted to the top of the function so it is already in place before the let block shadows it. Another way to look at it is that within any scope contour, the same name can not be used within multiple declarations (except for multiple vars for the same name) that occur or are hoisted into that contour. The order of the multiple declaration doesn't really matter. Oh, but that description does not cover Dave's exact example, which actually was { { let x; { var x } } } Here, the var is hoisted across the let's scope, but doesn't end up in the same scope. And we clearly want to rule that out, too. But then, you also want to properly distinguish this case from, say { { { let x } { var x } } } So, while I see what you tried to say there, the fact that it didn't quite nail it reinforces my feeling that any actual rule might be more complicated to specify accurately than worthwhile. * It is a syntax error if a given identifier is declared by both a let-declaration and a var-delaration in the same function. (And similarly, for const vs. var, or function vs. var -- the latter being an incompatible change for the global scope, but it seems like we may abolish that anyway.) I'm not sure that this actually simplifies anything. We still need hoisting rules for let and we still need something like the multiple declaration rules above so just is yet another rule that has to be specified, implemented, and remembered by users. If we think cases such as 3a and 4a are real bug farms then maybe the additional rule carries its weight. But I'm not sure that we have all that much of a hazard even without it. I just don't expect to see much code that looked like 3a or 4a. I'm not sure I follow. It's not an additional rule -- the way I view it it is a rule that replaces a (set of) more complicated rule(s). 
And if we don't expect the cases in question to show up often, then doesn't that seems rather like an argument for the simplification than against it? That's not to say that I couldn't live with more fine-grained rules, I just don't consider them worthwhile. Ultimately, we want to morally deprecate var for Harmony mode, so introducing too many extra rules around it seems a bit unjustified, unless there is a very good reason. Thanks, /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Block scoping and redeclarations
On 23 August 2011 21:18, Brendan Eich bren...@mozilla.com wrote: I think the rules we have discussed are: 1. Any hoisting of var across let binding the same name (whether explicit, or as in catch blocks and comprehensions, implicit) is an early error. 2. Redeclaration of a name in the same block scope via let, const, or function-in-block is an error. That's it. Shadowing via let, const, and function-in-block is allowed (alpha conversion). That's fine, although for (1) you probably meant hoisting of var across a scope that contains a let binding the same name (or, if you assume that let is hoisted to the beginning of its block already, then you have to be very careful about specifying the exact order in which all the hoisting happens). And for (2), you have to specify whether this applies before or after hoisting. In fact, I think it's both, since I assume that we want to make both of these an error: { let x; { var x } } { { let x; var x } } Also, I wouldn't necessarily have regarded catch variables as implicitly let-bound. Seems a bit odd, but I guess it's OK to define it that way if it does the right thing. Consequently, we propose a very simple rule instead: * It is a syntax error if a given identifier is declared by both a let-declaration and a var-declaration in the same function. (And similarly, for const vs. var, or function vs. var -- the latter being an incompatible change for the global scope, but it seems like we may abolish that anyway.) Anywhere in the same function? This seems unnecessarily restrictive. People will migrate large functions into ES6 and start let-converting while maintaining. We have seen this happen over the last five years in which we've supported let, in Firefox front-end and add-on JS. Does it really harm migration?
I can see only two scenarios that would end up with an illegal redeclaration: 1) The original function contained a single var x somewhere, and somebody is adding a let x now -- this is no big deal, since it is a new variable anyway and can easily be chosen differently. (And in general, it has to anyway; perhaps better to have a uniform rule of thumb here.) 2) The original code contained several var x, and somebody starts changing some of them into let incrementally -- attempting this does change the meaning of the code and is extremely likely to break it in subtle ways. It's probably preferable to flag it as an error. * In a similar vein, I think we should probably forbid `var' to coexist with _any_ other form of binding for the same variable in a single function. In particular, for Harmony mode we should rule out the infamous try .. catch(x) { var x = 666; ...}. That is covered by my (1) above, no need for special cases. The catch variable is an implicit let binding. * Finally, do we allow redeclaring functions with let or const, like it used to be the case with var? I propose disallowing it. That's also been a point of recurring TC39 agreement, specifically to future-proof for guards. OK, I'm glad to hear that. :) Thanks, /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
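For reference, the rules Brendan lists can be checked mechanically. A small sketch (hypothetical helper `isEarlyError`, using `new Function` to get a fresh function scope; the results shown are those of a modern engine implementing the rules as they were eventually standardized, which match rules 1 and 2 for the cases from the original numbering):

```javascript
// Returns true if |src|, used as a function body, is an early error.
function isEarlyError(src) {
  try { new Function(src); return false; }
  catch (e) { return e instanceof SyntaxError; }
}

console.log(isEarlyError("let x; var x;"));      // true:  1a
console.log(isEarlyError("let x; { var x; }"));  // true:  2a -- var hoists across the let
console.log(isEarlyError("var x; { let x; }"));  // false: 2b -- shadowing is fine
console.log(isEarlyError("{ let x } var x;"));   // false: 3a -- no let scope is crossed
```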
Language modes (Was: Block scoping and redeclarations)
That's not to say that I couldn't live with more fine-grained rules, I just don't consider them worthwhile. Ultimately, we want to morally deprecate var for Harmony mode, so introducing too many extra rules around it seems a bit unjustified, unless there is a very good reason. Btw, we had this amusing discussion in July how to name the different language modes without stepping on people's toes. How about classic, strict, and modern? /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Object.methods
On 24 August 2011 01:07, Allen Wirfs-Brock al...@wirfs-brock.com wrote: Also there can be real problems with exposing too much program metadata directly to the application layer. I've had lots of experience with Smalltalk environments where this was the case and it leads to a muddling of the metalayers and the application layers of a system because many developers don't understand the concepts of stratification well enough to know which methods are not really appropriate for use in application logic. That is true, although you cannot really blame the programmers when the language designers already muddled it up. That very sin has been committed in JavaScript, for example, by putting arbitrary reflective methods into innocent intrinsics like Object. I guess programmers would be much less likely to abuse them if they had a separate home in an object/module explicitly named Meta, Reflect, or something. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Block scoping and redeclarations
On 24 August 2011 18:03, Brendan Eich bren...@mozilla.com wrote: On Aug 24, 2011, at 2:03 AM, Andreas Rossberg wrote: On 23 August 2011 21:18, Brendan Eich bren...@mozilla.com wrote: I think the rules we have discussed are: 1. Any hoisting of var across let binding the same name (whether explicit, or as in catch blocks and comprehensions, implicit) is an early error. 2. Redeclaration of a name in the same block scope via let, const, or function-in-block is an error. That's it. Shadowing via let, const, and function-in-block is allowed (alpha conversion). That's fine, although for (1) you probably meant hoisting of var across a scope that contains a let binding the same name (or, if you assume that let is hoisted to the beginning of its block already, then you have to be very careful about specifying the exact order in which all the hoisting happens). It doesn't matter whether the hoisting is in the same block, or the var is in a block nested within the let's block (or body) scope. And for (2), you have to specify whether this applies before or after hoisting. In fact, I think it's both, since I assume that we want to make both of these an error: { let x; { var x } } { { let x; var x } } There is no before and after or both here. Hoisting first, with rule 1 enforced; then rule 2 checking. Relative source order of declarations is irrelevant. <pedantic>Well, only when you're implicitly assuming a somewhat non-standard meaning of "across", as rather "across or from". Clarifying that amounts to the same thing.</pedantic> Also, I wouldn't necessarily have regarded catch variables as implicitly let-bound. Seems a bit odd, but I guess it's OK to define it that way if it does the right thing. That is explicit in ES3-in-reality (ES3 was broken), real engines use block scoped catch variable bindings, and those engines that support 'let' (Rhino and SpiderMonkey at least) use exactly the same block-scoping machinery for catch variables as for let bindings.
We went over this during ES3.1 and ES4 days, here on-list and in TC39 meetings. No problem reiterating, I realize that was long before your time :-). Sorry for digging up old bones then. :) Not a big deal, I just wasn't aware that you seem to be equating let-bound with block-scoped. That's a bit surprising for the uninitiated. Does it really harm migration? I can see only two scenarios that would end up with an illegal redeclaration: 1) The original function contained a single var x somewhere, and somebody is adding a let x now -- this is no big deal, since it is a new variable anyway and can easily be chosen differently. (And in general, it has to anyway; perhaps better to have a uniform rule of thumb here.) This costs, you are special-pleading. Oh, migrators can absorb *my* preferred tax, for the greater good *I* see. That's not how the game is played. Fair enough. My main point here was that migrators are generally paying that tax anyway, we are just talking about a tax cut for some cases. Anyway, I'll rest my case. I remain somewhat unconvinced that the extra complexity is nil and worthwhile, but there obviously are more critical topics. We'll go ahead and implement the rules that you sketched above. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Block scoping and redeclarations
On 25 August 2011 18:57, Brendan Eich bren...@mozilla.com wrote: On Aug 25, 2011, at 6:17 AM, Andreas Rossberg wrote: There is no before and after or both here. Hoisting first, with rule 1 enforced; then rule 2 checking. Relative source order of declarations is irrelevant. <pedantic>Well, only when you're implicitly assuming a somewhat non-standard meaning of "across", as rather "across or from". Clarifying that amounts to the same thing.</pedantic> Please don't over-formalize my words here, that would be a big mistake! Hoisting is a very physical metaphor. Hoist that crate! By "across" I meant across the let declaration in its source position. Not across the let binding which is also hoisted (so I think I see what you mean by "across or from" -- do I?). Well, in that case you would not capture the order-independence properly. Consider e.g.: { { var x; ... let x } } Neither is the var hoisted across the source position of the let, nor across its scope. Still it's supposed to be an error. I'm not being entirely academic here. It's exactly details like this that crop up when you try to implement (and, presumably, specify) those rules, and they require some ugliness if you want to resolve everything in one pass. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: length property value for functions with parameter enhancements
On 27 August 2011 00:34, Allen Wirfs-Brock al...@wirfs-brock.com wrote: Something we need to define for ES.next is how to compute the length property value for functions whose formal parameter list includes optional and/or rest parameters. True, and actually, there are more issues with length for function proxies. I don't have my notes with me right now, but for example, it is not clear at all what length Function.prototype.bind should set when called on a function proxy. 0? 1? Should it try to get the length property from the proxy and subtract N? What if length is not defined on the proxy, or not a (natural) number? This is probably something the proxy proposal has to resolve eventually, but it's worth keeping in mind for the broader picture. So, what is a length determination algorithm that recognizes optional/rest arguments and is consistent with the stated intent of length and (as much as possible) existing section 15 definitions? Here is one proposal: The length is 0 only if the formal parameter list is empty. For example: function foo() {}; //foo.length==1 You meant ==0 here, right? If the formal parameter list includes any non-optional, non-rest formal parameters, the length is the total number of non-optional/non-rest formal parameters. For example: function bar(a,b,c) {} //bar.length=3 function baz(a,b,c=0) {} //baz.length=2 function bam(a,b,c=0,...d) {} //bam.length==2 BTW, is this legal? Makes sense. (And yes, I don't see why the latter shouldn't be legal.) If there are no non-optional or non-rest formal parameters the length is 1. function bar1(a=0) {} //bar1.length=1 function baz1(a=0,b=1,c=2) {} //baz1.length=1 function bam1(...a) {} //bam1.length==1 I'm not so sure about this, it seems incoherent. Why not 0, especially for the first two? You mentioned builtins like Array above, but I would rather count them as the exception to the rule (especially given that the builtins don't seem entirely consistent wrt length anyway).
FWIW, one could also argue for setting length to +infinity for functions with only rest parameters. :) /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
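Allen's proposal can be summarized as a small decision procedure (hypothetical helper `proposedLength`; this encodes the proposal as stated in the thread, not what any spec draft mandates -- note in particular that the only-optionals case yields 1 here, which is exactly the point Andreas questions):

```javascript
// length per the proposal: count of non-optional, non-rest formals;
// but a non-empty list of only optional/rest formals still gets length 1.
function proposedLength(requiredCount, optionalCount, hasRest) {
  if (requiredCount > 0) return requiredCount;
  if (optionalCount > 0 || hasRest) return 1;
  return 0;  // truly empty formal parameter list
}

console.log(proposedLength(0, 0, false)); // 0: function foo() {}
console.log(proposedLength(3, 0, false)); // 3: function bar(a,b,c) {}
console.log(proposedLength(2, 1, false)); // 2: function baz(a,b,c=0) {}
console.log(proposedLength(0, 3, false)); // 1: function baz1(a=0,b=1,c=2) {}
console.log(proposedLength(0, 0, true));  // 1: function bam1(...a) {}
```

Andreas's discontinuity objection shows up in the second branch: dropping it (returning 0 there) would treat () and (a=0) uniformly with (x) vs. (x, a=0).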
Re: length property value for functions with parameter enhancements
On 29 August 2011 01:36, Allen Wirfs-Brock al...@wirfs-brock.com wrote: On Aug 27, 2011, at 6:12 AM, Andreas Rossberg wrote: True, and actually, there are more issues with length for function proxies. I don't have my notes with me right now, but for example, it is not clear at all what length Function.prototype.bind should set when called on a function proxy. 0? 1? Should it try to get the length property from the proxy and subtract N? What if length is not defined on the proxy, or not a (natural) number? The ES5.1 spec. defines how bind determines the length for the function it creates based upon the length property of the target function. I would expect the same rules would apply when the target is a function proxy. Ah, right, I was looking at the 5.0 spec 8-}. However, that is still not good enough for function proxies, because you have no guarantee that they define length at all, or make it a natural number. So we at least have to include additional error cases, and a solution for them. function bam(a,b,c=0,...d) {} //bam.length==2 BTW, is this legal? Makes sense. (And yes, I don't see why the latter shouldn't be legal.) Because there is a potential for misinterpreting the user intent on such a call. For bam('a','b',1,2,3) we surely have to interpret the argument/parameter mapping as: a='a',b='b',c=1,d=[2,3] but it is easy to imagine a programmer intending a='a',b='b',c=0,d=[1,2,3] Making it illegal to have a formal parameter list that has both optional and rest parameters might reduce the likelihood of that confusion. Hm, that doesn't sound like a very JavaScripty argument :). If you buy into optional arguments at all, then I can certainly envision valid use cases for combining them with rest arguments. If there are no non-optional or non-rest formal parameters the length is 1. function bar1(a=0) {} //bar1.length=1 function baz1(a=0,b=1,c=2) {} //baz1.length=1 function bam1(...a) {} //bam1.length==1 I'm not so sure about this, it seems incoherent.
Why not 0, especially for the first two? You mentioned builtins like Array above, but I would rather count them as the exception to the rule (especially given that the builtins don't seem entirely consistent wrt length anyway). In my proposal, I decided to make a clear distinction between truly empty formal parameter lists and those with only various forms of optional formal parameters by only giving a 0 length to the empty formals case. That's a debatable decision but it seems desirable to distinguish the two cases and the built-ins are the only precedent that we have to follow. I guess I don't see what is special about empty argument lists. Why would you want to make a clearer distinction between the argument lists () and (a=0), than between (x) and (x, a=0)? You seem to be introducing a discontinuity. FWIW, one could also argue for setting length to +infinity for functions with only rest parameters. :) But there is no precedent for that and surely infinity is not the typical number of arguments. Yeah, I wasn't being serious. I'm not sure if there is any real use case for the length property of ECMAScript functions. Does anybody know of one? Regardless, I do think we can get rid of it. Do not, I suppose? (Unfortunately, as I was wondering the same.) /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: Function proxy length (Was: length property value for functions with parameter enhancements)
On 30 August 2011 18:41, David Bruant david.bru...@labri.fr wrote: This would facilitate the author work when it comes to creating functions that look like functions without having to always include some initialization code for .length, .prototype and such. It will still be possible to opt-out of .length or .prototype if the author doesn't I don't think it's worth introducing special cases in the semantics, especially not for something like length. Proxy authors already have to do a lot of similar work, e.g. to make all the standard Object.prototype and Function.prototype methods available on proxies. I think this problem is something that should be solved by the library, not the proxy API itself. Maybe we can find some nice building blocks for handlers that make available the basic functionality in a convenient and extensible way? Another interesting case btw (although outside the standard proper) is __proto__... /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
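A sketch of what such a library-level building block might look like (hypothetical helper `withFunctionDefaults`; written against a Proxy API shape with a get trap so it can actually run -- the handler API under discussion in this thread differs in its details):

```javascript
// A reusable handler decorator that supplies default function-like
// properties (here just |length|) on top of a user-supplied handler,
// instead of baking that behaviour into the proxy semantics itself.
function withFunctionDefaults(handler, length) {
  return Object.assign({}, handler, {
    get(target, name, receiver) {
      if (name === "length") return length;
      return Reflect.get(target, name, receiver);
    }
  });
}

const f = new Proxy(function () {}, withFunctionDefaults({}, 2));
console.log(f.length); // 2
console.log(typeof f); // "function"
```

A fuller version would chain to the wrapped handler's own get trap before falling back; the point is only that this initialization code lives in a library, not in the proxy API.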
Re: Function proxy length (Was: length property value for functions with parameter enhancements)
On 31 August 2011 15:21, Tom Van Cutsem tomvc...@gmail.com wrote: I coded up a hypothetical Proxy.createConstructor function that creates a function proxy initialized according to ES5.1 section 13.2: https://gist.github.com/1183514 Nit: invoking callTrap.call assumes that callTrap has an actual call method. But you want this to work for call traps that are themselves proxies and don't necessarily have that (or don't we?). So you need to do Function.prototype.apply.call(callTrap, instance, arguments) (assuming nobody messed with that either, of course). This turns up quite frequently, in fact. With proxies, it is really error-prone that all these functions have been made available as methods on {Object,Function}.prototype, instead of being separate. Morally, these methods used to be part of the implicit contracts of object and function types that everybody relies on. But proxies break those contracts! At least for functions, this is really a problem IMO (for plain objects, the contract was already invalidated by allowing Object.create(null)). /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
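The pattern Andreas describes, sketched as runnable code: a callable that lacks the Function.prototype methods can still be invoked by borrowing apply from Function.prototype directly. (The prototype-less function below is a stand-in for a call trap that is itself a proxy.)

```javascript
// A function stripped of its Function.prototype methods -- a stand-in
// for a call trap whose handler doesn't supply .call/.apply:
const trap = function (x) { return this.base + x; };
Object.setPrototypeOf(trap, null);

console.log(trap.call);   // undefined -- trap.call(...) would throw

// Robust invocation: go through the original Function.prototype.apply.
const result = Function.prototype.apply.call(trap, { base: 40 }, [2]);
console.log(result); // 42
```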
Re: __doc__ for functions, classes, objects etc.
On 4 September 2011 21:45, Brendan Eich bren...@mozilla.com wrote: On Sep 6, 2011, at 1:52 PM, Dmitry Soshnikov wrote: (1) to standardize `toString` (for this particular case -- do not remove comments inside); If the (1) is not possible (why by the way?), Because comments are not saved in the compilation process and doing so would slow parsing down and take more space. It's not obvious this would matter in head-to-head competition with other browsers (esp. with minified benchmarks) -- we would have to find out. Switching to source recovery will entrain more space but may be tolerable -- except that switching to source recovery is work, competing with other demands. There's no free lunch. Plus, it breaks all function-based data abstraction if you can reliably reflect on its implementation and then even reify it through eval. I am indifferent about the general idea of a doc interface, but: having to peek at the _implementation_ of something (which is what toString does) in order to gather its _interface_ description sounds like a fundamental violation of basic principles and exactly the wrong way to go about it. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
Re: IDE support?
On 13 September 2011 09:33, Brendan Eich bren...@mozilla.com wrote: You are simply way out of date on JS optimizing VMs, which (based on work done with Self and Smalltalk) all now use hidden classes aka shapes and polymorphic inline caching to optimize to exactly the pseudo-assembly you show, prefixed by a short (cheap if mispredicted) branch. What's more, SpiderMonkey bleeding edge does semi-static type inference, which can eliminate the guard branch. Please don't keep repeating out of date information about having to seek through a dictionary. It simply isn't true. True. On the other hand, all the cleverness in today's JS VMs neither comes for free, nor can it ever reach the full performance of a typed language. * There are extra costs in space and time to doing the runtime analysis. * Compile time is runtime, so there are severe limits to how smart you can afford to get in a compiler. * A big problem is predictability, it is a black art to get the best performance out of contemporary JS VMs. * The massive complexity that comes with implementing all this affects stability. * Wrt limits, even in the ideal case, you can only approximate the performance of typed code -- e.g. for property access you have at least two memory accesses (type and slot) plus a comparison and branch, where a typed language would only do 1 memory access. * Type inference might mitigate some more of these cases, but will be limited to fairly local knowledge. * Omnipresent mutability is another big performance problem in itself, because most knowledge is never stable. So despite all the cool technology we use these days, it is safe to assume that we will never play in the performance league of typed languages. Unless we introduce real types into JS, of course. :) /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
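The property-access cost Andreas enumerates can be made concrete with a toy model (illustrative only, nothing like real VM code): even a monomorphic inline-cache hit pays a hidden-class load, a compare, and a branch before the slot load that a typed language would compile to a single memory access.

```javascript
// Toy inline cache: caches the (shape, slot) pair from the last lookup.
function makeInlineCache(name) {
  let cachedShape = null, cachedSlot = -1;
  return function (obj) {
    if (obj.shape === cachedShape) {      // guard: load shape, compare, branch
      return obj.slots[cachedSlot];       // fast path: the actual slot load
    }
    cachedSlot = obj.shape.layout[name];  // slow path: dictionary lookup,
    cachedShape = obj.shape;              // then fill the cache
    return obj.slots[cachedSlot];
  };
}

// Two objects sharing a hidden class ("shape"):
const pointShape = { layout: { x: 0, y: 1 } };
const p = { shape: pointShape, slots: [3, 4] };
const q = { shape: pointShape, slots: [5, 6] };

const getX = makeInlineCache("x");
console.log(getX(p)); // 3 (slow path; cache filled)
console.log(getX(q)); // 5 (fast path; same shape)
```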
Re: {Weak|}{Map|Set}
On 15 September 2011 09:10, Brendan Eich bren...@mozilla.com wrote: On Sep 14, 2011, at 11:09 PM, Allen Wirfs-Brock wrote: I would prefer ObjectMap (the keys are restricted to objects). Now that you point it out (again), I agree. I don't. :) It is true to some extent that WeakMap is GC jargon -- but as Mark points out, the normal use case for weak maps _is_ to ensure a certain space behaviour closely related to GC. So why obfuscate the very intent by purposely avoiding what is more or less standard terminology for it (if slightly ambiguous)? If I was a programmer looking for something like weak referencing in JS for the first time, weak is what I'd be searching for. ObjectMap would be too generic a name to catch my immediate attention. /Andreas ___ es-discuss mailing list es-discuss@mozilla.org https://mail.mozilla.org/listinfo/es-discuss
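The normal use case referenced here, sketched: a weak map ties auxiliary data to object keys without keeping those keys alive, which is precisely the GC-related guarantee the name advertises.

```javascript
// Per-object private state that does not leak memory: once a Counter
// instance becomes unreachable, its entry in |privates| is collectable too.
const privates = new WeakMap();

function Counter() {
  privates.set(this, { count: 0 });
}
Counter.prototype.increment = function () {
  return ++privates.get(this).count;
};

const c = new Counter();
console.log(c.increment()); // 1
console.log(c.increment()); // 2
```

An ObjectMap in name would describe only the key restriction; the space behaviour, which is why one reaches for this structure at all, would be invisible.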
Re: IDE support?
On 13 September 2011 16:48, Brendan Eich bren...@mozilla.com wrote:
> On Sep 13, 2011, at 5:33 AM, Andreas Rossberg wrote:
>> * There are extra costs in space and time to doing the runtime analysis.
>> * Compile time is runtime, so there are severe limits to how smart you can afford to get in a compiler.
> These are bearable apples to trade against the moldy oranges you'd make the world eat by introducing type annotations to JS. Millions of programmers would start annotating for performance, i.e., gratuitously, making a brittle world at high aggregate cost. The costs borne in browsers by implementors and (this can hit users, but it's marginal) at runtime when evaluating code are less, I claim.

Depends on how good you want to optimize. Aggressive compilers can be really slow. There are limits to what you can bear at runtime, especially when you have to iterate the process.

>> * The massive complexity that comes with implementing all this affects stability.
> This one I'm less sympathetic to, since we won't get rid of untyped JS up front. A sunk cost fallacy? If we could make a clean break (ahem), sure. Otherwise this cost must be paid.

Well, the counter-argument would be that you wouldn't need to care about optimising untyped code as much if the user had the option to switch to a typed sublanguage for performance.

>> * Wrt limits, even in the ideal case, you can only approximate the performance of typed code -- e.g. for property access you have at least two memory accesses (type and slot) plus a comparison and branch, where a typed language would only do 1 memory access.
> That's *not* the ideal case. Brian Hackett's type inference work in SpiderMonkey can eliminate the overhead here. Check it out.

I'm actually pretty excited about that, and hope to see more on that front. Cool stuff. However, that ideal case is achieved in a relatively small percentage of cases only. Otherwise we should probably not see an (already impressive) 20-40% speed-up (IIRC), but rather something closer to 200-400%.

>> * Type inference might mitigate some more of these cases, but will be limited to fairly local knowledge.
> s/might/does/ -- why did you put type inference in a subjunctive mood? Type inference in SpiderMonkey (Firefox nightlies) is not local.

Fair enough re the subjunctive. Still, there are fundamental limitations to what type inference can do, especially for OO; you hit undecidable territory very quickly. You also have to give up at boundaries such as native bindings, calls to eval, or (in ES6) to the module loader, unless you're given extra information by the programmer (this is basically the separate compilation problem). So the inferencer has to work with approximations and fallbacks. For a language like JS, where a lot of conceptual polymorphism, potential mutation, and untypable operations are going on, those approximations will remain omnipresent, except, mostly, in sufficiently local contexts. Not to say that it is not worth extending the boundaries -- it definitely is. But it will only get you that far.

>> * Omnipresent mutability is another big performance problem in itself, because most knowledge is never stable.
> Type annotations or (let's say) guards as for-all-time monotonic bounds on mutation are useful to programmers too, for more robust programming-in-the-large. That's a separate (and better IMHO) argument than performance. It's why they are on the Harmony agenda.

Of course I'd never object to the statement that there are far better reasons to have typy features than performance. :) I just didn't mention it because it wasn't the topic of the discussion.

>> So despite all the cool technology we use these days, it is safe to assume that we will never play in the performance league of typed languages. Unless we introduce real types into JS, of course. :)
> Does JS need to be as fast as Java? Would half as fast be enough?

No, it doesn't have to be as fast. Not yet, at least... I estimate we have another 3 years. ;)

/Andreas
Re: IDE support?
On 13 September 2011 21:32, Wes Garland w...@page.ca wrote:
> When I write shell programs, and JS programs, I keep an extra terminal window open to a spare shell or a JS REPL. I try stuff. Stuff that works, I copy into my program. Then I run my program - which happens quickly, because the compiler is super-fast and the program is a contained entity which probably runs in a dynamically configured environment.

REPLs and quasi-instant compile turnarounds are indeed great features, but by no means exclusive to untyped languages, and never have been. It just happens that the typed languages dominating the mainstream suck badly in this area.

/Andreas
Re: IDE support?
On 14 September 2011 00:00, Brendan Eich bren...@mozilla.com wrote:
> So, static+dynamic. The static side has some powerful algorithms to bring to bear. Dynamic is necessary due to eval and kin, and gives strictly more information (and more relevant information!).

Nitpick: I believe you are mistaken about the "strictly more" bit. There is information that _only_ static analysis can derive. Consider e.g. aliasing or escape analysis, or other kinds of global properties.

/Andreas
Re: {Weak|}{Map|Set}
On 15 September 2011 17:47, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
> No, the normal use case for WeakMaps is simply to make associations between objects and arbitrary values. The special GC behavior is necessary to avoid memory leaks, but that is a quality-of-implementation issue, not a use case.

Just like with tail calls, certain space optimizations are semantically relevant, because code will rely on them and potentially break if they are not performed.

/Andreas
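[Editorial note: the tail-call analogy can be made concrete. Without guaranteed tail calls, a program that is logically a loop can fail on its normal input; a hand-written trampoline (a sketch, assuming no engine-level TCO) recovers constant stack space.]

```javascript
// A logically-iterative loop written as recursion. Without proper tail
// calls, countdown(1e6) may overflow the stack on typical engines.
function countdown(n) {
  return n === 0 ? "done" : countdown(n - 1);  // tail position
}

// Trampoline: tail calls return thunks instead of recursing, and a
// driver loop runs them in constant stack space.
function trampoline(step) {
  while (typeof step === "function") step = step();
  return step;
}
function countdownT(n) {
  return n === 0 ? "done" : () => countdownT(n - 1);
}

trampoline(countdownT(1000000));  // completes without stack growth
```

The point of the thread is exactly that code like the untrampolined version is only correct if the space optimization is guaranteed.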
Re: IDE support?
On 16 September 2011 02:12, Mike Shaver mike.sha...@gmail.com wrote:
> On Thu, Sep 15, 2011 at 9:02 AM, Andreas Rossberg rossb...@google.com wrote:
>> On 14 September 2011 00:00, Brendan Eich bren...@mozilla.com wrote:
>>> So, static+dynamic. The static side has some powerful algorithms to bring to bear. Dynamic is necessary due to eval and kin, and gives strictly more information (and more relevant information!).
>> Nitpick: I believe you are mistaken about the "strictly more" bit. There is information that _only_ static analysis can derive. Consider e.g. aliasing or escape analysis, or other kinds of global properties.
> There are systems that handle escape analysis cases via write barriers, no? Alias detection (or more importantly non-alias determinations) seem amenable to the assume-and-guard model used for PICs and trace selection and other code specialization patterns seen all over modern JS engines.

Being able to detect when a condition is violated is not equivalent to knowing that it always holds. Take the property access example: you want to eliminate the extra check. For that you have to know that the typecheck would _never_ fail at this point. You use type inference to find that out -- a static analysis.

In general, whenever the correctness of an optimization or code transformation depends on a non-trivial _invariant_, you have to prove that this invariant holds. You can only do that statically, because it implies a quantification over all possible executions. No amount of dynamic checking can give you that. (Of course, you can often do something else instead that involves dynamic checks, but then you are in fact doing a _different_ optimization. Property access is a good example. Stack-allocating local variables is another one, where you need escape analysis.)

/Andreas
Re: {Weak|}{Map|Set}
On 16 September 2011 13:52, David Bruant david.bru...@labri.fr wrote:
> Furthermore, let's imagine for a minute that I need an ECMAScript implementation for programs I write which I know (for some reason) are all short-lived and use a maximum finite amount of memory I know. Based on this knowledge (I admit it is weird to have this knowledge), I could decide to not implement a garbage collector. My programs are short-lived and use little memory; I may not need to care about the memory. In this imaginary ECMAScript environment without garbage collection, why would I care about references being weak or strong? I don't, I write programs.

Right, but why would you care about WeakMap either? The canonical Map is perfectly fine for that situation.

> Also, the notion of garbage collection is a separate concern from programming. It comes from the necessity of being careful of resources.

Well yes, but that's part of programming. In practice, all resources are finite. And the difference between finite and infinite space usage is a correctness criterion.

Consider writing a server. If I cannot rely on tail call optimization, then writing its message loop as a recursive function (e.g. actors style) would be incorrect. If I cannot rely on GC, then allocating an object for each received message would be incorrect. If I cannot rely on weak maps, then, say, mapping every message object to a return IP would be incorrect.

/Andreas
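[Editorial note: the message/return-address example can be sketched with the WeakMap API as it eventually shipped. Names like `returnAddress` and `onMessage` are invented for this sketch; the point is that entries whose key objects become unreachable do not keep the map growing, which is exactly the space guarantee the server relies on.]

```javascript
// Associate per-message metadata without leaking: once a message object
// is dropped by the rest of the program, its entry becomes collectable.
const returnAddress = new WeakMap();

function onMessage(msg, replyTo) {
  returnAddress.set(msg, replyTo);
}

function reply(msg, payload) {
  const to = returnAddress.get(msg);   // undefined if never registered
  return to !== undefined ? { to, payload } : null;
}

const msg = { body: "ping" };
onMessage(msg, "client-42");
reply(msg, "pong");   // routes back to "client-42"
```

With a strong Map here, a long-running server would retain every message object it ever saw, so the weak semantics is a correctness property, not merely an optimization.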
Fwd: {Weak|}{Map|Set}
[Forgot the list.]

---------- Forwarded message ----------
From: Andreas Rossberg rossb...@google.com
Date: 16 September 2011 15:35
Subject: Re: {Weak|}{Map|Set}
To: David Bruant david.bru...@labri.fr

On 16 September 2011 15:17, David Bruant david.bru...@labri.fr wrote:
>> Well yes, but that's part of programming. In practice, all resources are finite. And the difference between finite and infinite space usage is a correctness criterion. Consider writing a server. If I cannot rely on tail call optimization then writing its message loop as a recursive function (e.g. actors style) would be incorrect. If I cannot rely on GC, then allocating an object for each received message would be incorrect. If I cannot rely on weak maps, then, say, mapping every message object to a return IP would be incorrect.
> You are making a connection between program correctness and implementation considerations of what you use to write these programs. It doesn't sound right. "If I cannot rely on tail call optimization then writing its message loop as a recursive function (e.g. actors style) would be incorrect." => This is not true. If your server ever receives only one message, you should be fine.

Obviously, I was talking about a server that is expected to get an unbounded number of messages during its uptime.

> The problem is in implementation limitation, not correctness of your program. It turns out your programming style with nowadays reasonable use cases (size of input, number of messages...) makes current implementations fail. I agree that it is annoying, but it doesn't make your program incorrect. If we start considering implementation limitations as sources of program incorrectness, then some ECMAScript programs will always be incorrect.

The difference is, this limitation would be hit for the _normal_ use case of the server. If my program cannot deal with its expected use case, then it is incorrect.

> Regarding tail call optimization, as far as I'm concerned, an agreement between implementors sounds like a more reasonable approach. There is no way in the language to test this feature. In this video [1], David Herman explains that a test can be written (at 40:40), but the test relies on implementation limitations rather than the language by itself (unlike all tests that can currently be found on test262).

That is true, but whether some property can be observed from within the language itself, or only by its environment, is not relevant. There is no test that you can reliably write for a 'print' function. Still you want to be able to rely on it printing what you gave it. Or consider a sleep(secs) function. How would you test it?

> "If I cannot rely on GC, then allocating an object for each received message would be incorrect." => Can you refine this point? I don't understand the connection between garbage collection and correctness of your program. I allocate objects on a daily basis and have never /relied/ on garbage collection.

I think you implicitly do, all the time. Just try turning off GC and see whether your programs still work reliably.

/Andreas
Re: {Weak|}{Map|Set}
On 16 September 2011 19:42, Mark S. Miller erig...@google.com wrote:
> Does anyone see anything wrong with EphemeralMap?

Yes. It's a longish name, and one that I will never be able to remember how to spell correctly. And to most programmers it probably sounds about as reassuring as "endofunctor" or "catamorphism". ;)

/Andreas
Re: Implementation considerations in the ECMAScript standard (was {Weak|}{Map|Set})
I think we are digressing. There are three separate questions:

1. Intent
2. Specification
3. Testing

Sometimes the intent is hard or impossible to specify formally, sometimes a specification is hard or impossible to test for. That doesn't necessarily invalidate such an intent or such a specification. Something you cannot test you might still be able to prove, for example.

The whole point of having weak maps is space complexity considerations. So the name should be chosen accordingly IMO, regardless of whether we have a good solution to (2) and (3), which are really separate problems. But I'll leave it at that, nobody wants another longish bike-shedding discussion.

On 16 September 2011 18:50, David Bruant david.bru...@labri.fr wrote:
> Changing the subject to something more relevant.
> Le 16/09/2011 15:36, Andreas Rossberg a écrit :
>> On 16 September 2011 15:17, David Bruant david.bru...@labri.fr wrote:
>>>> Well yes, but that's part of programming. In practice, all resources are finite. And the difference between finite and infinite space usage is a correctness criterion. Consider writing a server. If I cannot rely on tail call optimization then writing its message loop as a recursive function (e.g. actors style) would be incorrect. If I cannot rely on GC, then allocating an object for each received message would be incorrect. If I cannot rely on weak maps, then, say, mapping every message object to a return IP would be incorrect.
>>> You are making a connection between program correctness and implementation considerations of what you use to write these programs. It doesn't sound right. "If I cannot rely on tail call optimization then writing its message loop as a recursive function (e.g. actors style) would be incorrect." => This is not true. If your server ever receives only one message, you should be fine.
>> Obviously, I was talking about a server that is expected to get an unbounded number of messages during its uptime.
>>> The problem is in implementation limitation, not correctness of your program. It turns out your programming style with nowadays reasonable use cases (size of input, number of messages...) makes current implementations fail. I agree that it is annoying, but it doesn't make your program incorrect. If we start considering implementation limitations as sources of program incorrectness, then some ECMAScript programs will always be incorrect.
>> The difference is, this limitation would be hit for the _normal_ use case of the server. If my program cannot deal with its expected use case, then it is incorrect.
> What is the definition of normal use? Size of input? When machines will be powerful enough to handle your current normal case without tail call optimization, will the definition of normal use change? Once again, program correctness [1] (tell me if you use "incorrect" differently, and please define it if so) has nothing to do with implementation considerations of the platform that runs your program. There is no contradiction in having a correct program which fails when implemented, because of implementation issues (of the platform, not the program). I do not think the ECMAScript standard is the place where implementation considerations should be addressed. For that matter, people have been writing JavaScript programs for years and the spec doesn't say a word on implementations. Also, why should tail call optimization be standardized? There is a use case, but aren't there other implementation optimizations that could be considered? Should they all be standardized? Why this one in particular? Really, once again, an appendix named "hints for implementors", or a different document, or a page on the wiki would be better than a normative section. Saying that ECMAScript implementations aren't standard because they do not support one programming style sounds like a lot.
>>> Regarding tail call optimization, as far as I'm concerned, an agreement between implementors sounds like a more reasonable approach. There is no way in the language to test this feature. In this video [1], David Herman explains that a test can be written (at 40:40), but the test relies on implementation limitations rather than the language by itself (unlike all tests that can currently be found on test262).
>> That is true, but whether some property can be observed from within the language itself, or only by its environment, is not relevant.
> It is not for people who write programs, but it is for implementors, because it directly impacts their work. They are not implementing a language anymore (which they still were as of ES5.1), but a language and some implementation constraints. Let's imagine for a minute that tomorrow implementors find another implementation trick which allows the use cases that motivated the proper tail calls proposal to run without crashing -- why would it be problematic? Why does it have to be this optimization in particular? There is no test
Minor issues with proxies
Hi Mark, Tom!

I understand that you are currently working on finalizing a number of aspects of the proxies proposal, so I thought I'd send my current notes on issues I discovered. (Sorry if I'm a bit late with that, but I just returned from travelling.) Here is a list of minor issues. I'll send a separate mail describing what I think is a more fundamental problem with the current spec.

- Proxy.create: What if the handler passed is not an object? Should we throw right away?

- Proxy.create: What if the prototype passed is neither an object nor null? FF silently sets it to null in all other cases, but that seems inconsistent with Object.create, which throws.

- Proxy.createFunction: More of a question, but do we really want to support a separate construct trap for function proxies? I would argue that it was a mistake to ever make a distinction between a regular and a construct call. Even if we cannot clean that up, we should perhaps avoid having it proliferate further, in the proxy interface.

- Derived get/set traps: They use .call on accessor functions taken from a user-defined descriptor. Such a function might itself be a proxy, in which case .call is not necessarily defined. Should invoke it through Function.prototype.call.call instead. (There may be other places in the current ES spec that assume that all functions have a call method. I think they should all be changed.)

- Also, we should specify that the JS code assumes that all used intrinsic properties are the original methods.

- Object.{seal,freeze,preventExtensions}: When sealing a function proxy, how do we initialize the standard properties length, constructor, prototype, caller, and arguments? What if the proxy does not define them already, or returns unsuitable values?

- Function.prototype.toString: Should this work for function proxies?

- Function.prototype.bind: Requires additional language explaining how the length property is set if the curried function is a proxy.

- JSON: Don't we need some changes here, too? For example, step 6a of the JO operation (15.12.3) talks about the names of all the own properties of an object. Clearly, for a proxy we need to invoke the appropriate trap here.

- Outside the (current?) standard, but pragmatically: how should we treat .__proto__ on a proxy? FF and V8 both treat it as an ordinary property for proxies, but that implies that Object.getPrototypeOf(p) != p.__proto__ in general.

- ToStringArray, step 6.a: s/array/O/

/Andreas
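[Editorial note: the .call hazard in the "Derived get/set traps" bullet is easy to reproduce even without proxies: any function whose own `call` property has been shadowed breaks a naive `f.call(...)` invocation, while going through the original `Function.prototype.call` still works. A minimal sketch (names invented for illustration):]

```javascript
// A function whose `call` property is shadowed -- analogous to a
// function proxy that does not expose a usable `call` method.
function greet(name) { return "hi " + this.prefix + name; }
greet.call = null;   // naive greet.call(...) would now throw a TypeError

// Robust invocation: use the original Function.prototype.call, which
// cannot be shadowed by a property on the callee itself.
const result = Function.prototype.call.call(greet, { prefix: "~" }, "tom");
// result === "hi ~tom"
```

This is why the bullet suggests the spec invoke accessors via Function.prototype.call.call rather than via the callee's own .call.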
Proxy-induced impurity of internal methods
Proxies invalidate one fundamental assumption of the current ES spec, namely that (most) internal methods are effectively pure. That has a couple of consequences which the current proxy proposal and semantics seem to ignore, but which we need to address.

OBSERVABILITY & EFFICIENCY

In ES5, internal methods essentially are an implementation detail of the spec. AFAICS, there is no way their interaction is actually observable in user code. This gives JS implementations significant leeway in implementing objects (and they make use of it).

This changes drastically with proxies. In particular, since most internal methods may now invoke traps directly or indirectly, we can suddenly observe many internal steps of property lookup and similar operations through potential side effects of these traps. (Previously, only the invocation of getters or setters was observable.) Take the following simple example:

  var desc = {configurable: true,
              get: function() { return 8 },
              set: function() { return true }}
  var handler = {getPropertyDescriptor: function() { seq += "G"; return desc }}
  var p = Proxy.create(handler)
  var o = Object.create(p)

  var seq = ""
  o.x
  var seq1 = seq
  seq = ""
  o.x = 0
  var seq2 = seq

According to the proxy spec, we should see seq1=="G" and seq2=="GG". In my local version of V8, I currently see seq1=="G" and seq2=="G". In Firefox 7, I see seq1=="GG" and seq2=="GG". Obviously, both implementations are unfaithful to the spec, albeit in reverse ways. At least for V8, implementing the correct behaviour may require significant changes.

Also, I wonder whether the current semantics forcing seq2=="GG" really is what we want, given that it is unnecessarily inefficient (note that it also involves converting the property descriptor twice, which in turn can spawn numerous calls into user code). Optimizing this would require purity analysis on trap functions, which seems difficult in general.

HIDDEN ASSUMPTIONS

In a number of places, the ES5 spec makes hidden assumptions about the purity of internal method calls, and derives certain invariants from that, which break with proxies. For example, in the spec of [[Put]] (8.12.5), step 5.a asserts that desc.[[Set]] cannot be undefined. That is true in ES5, but no longer with proxies. Unsurprisingly, both Firefox and V8 do funny things for the following example:

  var handler = {
    getPropertyDescriptor: function() {
      Object.defineProperty(o, "x", {get: function() { return 5 }})
      return {set: function() {}}
    }
  }
  var p = Proxy.create(handler)
  var o = Object.create(p)
  o.x = 4

Firefox 7: InternalError on line 1: too much recursion
V8: TypeError: Trap #<error> of proxy handler #<Object> returned non-configurable descriptor for property x

More generally, there is no guarantee anymore that the result of [[CanPut]] in step 1 of [[Put]] is in any way consistent with what we see in later steps. In this light (and due to the efficiency reasons I mentioned earlier), we might want to consider rethinking the CanPut/Put split.

This is just one case. There may be other problematic places in other operations. Most of them are probably more subtle, i.e. the spec still prescribes some behaviour, but that does not necessarily make any sense for certain cases (and would be hard to implement to the letter). We probably need to check the whole spec very carefully.

FIXING PROXIES

A particularly worrisome side effect is fixing a proxy. The proxy semantics contains a lot of places saying "If O is a trapping proxy, do steps I-J." However, there generally is no guarantee that O remains a trapping proxy through all of I-J! Again, an example:

  var handler = {
    get set() { Object.freeze(p); return undefined },
    fix: function() { return {} }
  }
  var p = Proxy.create(handler)
  p.x

Firefox 7: TypeError on line 1: getPropertyDescriptor is not a function
V8: TypeError: Object #<Object> has no method 'getPropertyDescriptor'

The current proxy semantics has an (informal) restriction forbidding reentrant fixing of the same object, but that is only a very special case of the broader problem. Firefox 7 rejects fixing a proxy while one of its traps (most of them, at least) is executing (this seems to be a recent change, and the above case probably is an oversight). But it is not clear to me what the exact semantics is there, and whether it is enough as a restriction. V8 currently even crashes on a few contorted examples.

In summary, I'm slightly worried. The above all seems fixable, but is that all? Ideally, I'd like to see a more thorough analysis of how the addition of proxies affects properties of the language and its spec. But given the state of the ES spec, that is probably too much to wish for... :)

/Andreas
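[Editorial note: the same observability phenomenon can be demonstrated with the Proxy API as it later shipped (the direct `new Proxy` constructor, which postdates this thread): side effects in a trap expose exactly when and how often the engine consults the handler during a prototype-chain lookup. A sketch:]

```javascript
// A proxy on the prototype chain: reads and writes on `o` that miss its
// own properties delegate to the proxy's traps, and the side effect on
// `log` makes each internal delegation step observable.
let log = "";
const proto = new Proxy({}, {
  get(target, prop, receiver) { log += "G"; return 8; },
  set(target, prop, value, receiver) { log += "S"; return true; }
});
const o = Object.create(proto);

log = "";
o.x;                 // own lookup misses, get trap fires once
const seq1 = log;    // "G"

log = "";
o.x = 0;             // own lookup misses, set trap fires once
const seq2 = log;    // "S"
```

In the shipped design a prototype-chain assignment consults the handler only once, which resolves the double-lookup inefficiency this mail complains about for the old getPropertyDescriptor-based semantics.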
Re: Proxy-induced impurity of internal methods
On 5 October 2011 18:57, Andreas Rossberg rossb...@google.com wrote:
> FIXING PROXIES
>
> A particularly worrisome side effect is fixing a proxy. The proxy semantics contains a lot of places saying "If O is a trapping proxy, do steps I-J." However, there generally is no guarantee that O remains a trapping proxy through all of I-J! Again, an example:
>
>   var handler = {
>     get set() { Object.freeze(p); return undefined },
>     fix: function() { return {} }
>   }
>   var p = Proxy.create(handler)
>   p.x
>
> Firefox 7: TypeError on line 1: getPropertyDescriptor is not a function
> V8: TypeError: Object #<Object> has no method 'getPropertyDescriptor'

Whoops, sorry, I just saw that I screwed up that example. That behaviour is perfectly fine, of course. I don't have my notes here; I'll deliver the proper example tomorrow.

/Andreas
Re: Proxy-induced impurity of internal methods
On 6 October 2011 06:34, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
> On Oct 5, 2011, at 9:57 AM, Andreas Rossberg wrote:
>> In summary, I'm slightly worried. The above all seems fixable, but is that all? Ideally, I'd like to see a more thorough analysis of how the addition of proxies affects properties of the language and its spec. But given the state of the ES spec, that is probably too much to wish for... :)
> I'm not sure what you mean by the last sentence. I have not yet done any work to incorporate proxies into the ES6 draft.

Oh, sorry, my remark was unintentionally ambiguous -- it wasn't directed at you. Just the generic rant that the whole ES spec is a horribly ad-hoc, utterly unanalysable beast using the state-of-the-art of language specification from 1960. :) Clearly nothing the editor could or should just fix at this point.

> If you have specific issues like these, a good way to capture them is to file bugs against the proposals component of the harmony products at bugs.ecmascript.org. Proposed resolutions would be good too. I definitely look at reported proposal bugs when I work on incorporating new features into the draft specification. On the other hand, I don't guarantee that I will spot or remember all issues raised on this list. So file bugs.

Fair enough. This time, however, my comments were mainly meant for Tom & Mark, who are working on the proposal right now, I think. I refrained from suggesting concrete fixes because they probably have a better idea what semantics they envision.

/Andreas
Re: Proxy-induced impurity of internal methods
On 5 October 2011 21:00, Andreas Rossberg rossb...@google.com wrote:
> Whoops, sorry, I just saw that I screwed up that example. That behaviour is perfectly fine, of course. Don't have my notes here, I'll deliver the proper example tomorrow.

Here we go (the last line should have been an assignment):

  var handler = {
    get set() { Object.freeze(p); return undefined },
    fix: function() { return {} }
  }
  var p = Proxy.create(handler)
  p.x = 4

Firefox 7: TypeError on line 1: proxy was fixed while executing the handler
V8: TypeError: Object #<Object> has no method 'getOwnPropertyDescriptor'

So Firefox rejects this (consistently with its treatment of other methods), while V8 tries to go on with the DefaultPut, using the traps from the handler that it still happens to have around. This is not quite what the rules of DefaultPut imply, but what the (inconsistent) note says.

A related nit: even for freeze and friends, the restriction on recursive fix is NOT enough as currently stated in the proxy semantics. Consider:

  var handler = {
    get fix() { Object.seal(p); return {} }
  }
  var p = Proxy.create(handler)
  Object.freeze(p)

Strictly speaking, there actually is no recursive execution of fix() -- the recursion occurs a few steps earlier, when we try to _get_ the fix function. Firefox rejects this nevertheless:

  TypeError on line 2: proxy was fixed while executing the handler

V8 bails out with a stack overflow:

  RangeError: Maximum call stack size exceeded

While this might merely be a nit, it shows that it is _not_ generally enough to only prevent fixing while _executing_ traps. To be conservative, it seems like we perhaps have to disallow any reentrant use of freeze/seal/preventExtensions at any point in _any_ internal method of the same object. But how to spec that?

/Andreas
Re: holes in spread elements/arguments
On 7 October 2011 17:47, David Herman dher...@mozilla.com wrote:
> I don't think we can get away with repurposing _ as a pattern sigil, since it's already a valid identifier and used by popular libraries:
>   http://documentcloud.github.com/underscore/
> In my strawman for pattern matching, I used * as the don't-care pattern:
>   http://wiki.ecmascript.org/doku.php?id=strawman:pattern_matching

I reckoned that _ would be infeasible. But * is fine too, although it might cause more headache in a fused pattern/expression grammar.

/Andreas
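[Editorial note: for the destructuring-pattern case, the language as it later standardized sidesteps the sigil question with plain elision: a hole in an array pattern is already a don't-care position, and `_` stays an ordinary identifier. A sketch:]

```javascript
// Holes (elisions) in array destructuring act as don't-care slots, so
// neither `_` nor `*` is needed for this position.
const [, second, , fourth] = [10, 20, 30, 40];
// second === 20, fourth === 40

// Meanwhile `_` remains a perfectly normal identifier, which is exactly
// why libraries like underscore.js can claim it.
const _ = (x) => x;
_(second);  // 20
```

This only covers array positions, of course; a general pattern-matching construct still needs an explicit don't-care form as the strawman discusses.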
Re: Proxy-induced impurity of internal methods
On 10 October 2011 15:38, Tom Van Cutsem tomvc...@gmail.com wrote: This point was previously noted, see: http://wiki.ecmascript.org/doku.php?id=strawman:proxy_set_trap It was brought up at the March 2011 meeting, and IIRC we were in agreement that the spec. should be adapted to remove the redundant getPropertyDescriptor call. Ah, thanks. I wasn't aware of that. Good to hear. In a number of places, the ES5 spec makes hidden assumptions about the purity of internal method calls, and derives certain invariants from that, which break with proxies. For example, in the spec of [[Put]] (8.12.5), step 5.a asserts that desc.[[Set]] cannot be undefined. That is true in ES5, but no longer with proxies. Unsurprisingly, both Firefox and V8 do funny things for the following example: var handler = { getPropertyDescriptor: function() { Object.defineProperty(o, x, {get: function() { return 5 }}) return {set: function() {}} } } var p = Proxy.create(handler) var o = Object.create(p) o.x = 4 Firefox 7: InternalError on line 1: too much recursion V8: TypeError: Trap #error of proxy handler #Object returned non-configurable descriptor for property x (are you sure this tests the right behavior? It seems the V8 TypeError is simply due to the fact that the descriptor returned from getPropertyDescriptor is configurable.) You are right, of course. If I make it configurable, V8 returns without error. However, by modifying the example somewhat you can see that it executes the setter from the descriptor then. That is not quite right either (Though neither wrong, I suppose :) ). I agree. In fact, proxies already abandon the CanPut/Put split: they implement CanPut simply by always returning true, and perform all of their assignment logic in [[Put]]. Related to this refactoring: Mark has previously proposed introducing a [[Set]] trap that simply returns a boolean, indicating whether or not the assignment succeeded. 
The [[Put]] trap would simply call [[Set]], converting a false result into a TypeError when appropriate (cf. http://wiki.ecmascript.org/doku.php?id=harmony:proxy_defaulthandler#alternative_implementation_for_default_set_trap). We don't have consensus on this yet. I would propose to discuss the CanPut/Put refactoring and the [[Set]] alternative together during the Nov. meeting. Yes, that makes sense. On 10 October 2011 16:01, Tom Van Cutsem tomvc...@gmail.com wrote: I will go over the proposed proxies spec to check whether there is actually any harm in allowing a proxy to become non-trapping during an active trap. If the proxy describes a coherent object before and after the state change, there is no reason to disallow this. The new proposal Mark and I have been working on may help here, since it enforces more invariants on proxies. I'm not sure I understand what you mean by becoming non-trapping, can you elaborate? What would it do instead? /Andreas
Re: Minor issues with proxies
On 11 October 2011 20:49, Tom Van Cutsem tomvc...@gmail.com wrote: Proxy.create{Function} is now present on http://wiki.ecmascript.org/doku.php?id=harmony:proxies_semantics. Let us know if you spot any further holes. Great, thanks! One comment only:

  1. Let handler be ToObject(O)

I wonder, is that useful at all? I don't see how ToObject can ever produce a useful handler from a non-object. It may be more helpful to throw a TypeError right away if the handler is not an object (like you do for non-object protos). Cheers, /Andreas
Re: Minor issues with proxies
On 12 October 2011 11:00, Andreas Rossberg rossb...@google.com wrote: On 11 October 2011 20:49, Tom Van Cutsem tomvc...@gmail.com wrote: Proxy.create{Function} is now present on http://wiki.ecmascript.org/doku.php?id=harmony:proxies_semantics. Let us know if you spot any further holes. Great, thanks! One comment only: 1. Let handler be ToObject(O) I wonder, is that useful at all? I don't see how ToObject can ever produce a useful handler from a non-object. It may be more helpful to throw a TypeError right away if the handler is not an object (like you do for non-object protos). I think it might also be useful to have the prototype argument default to null (i.e. convert undefined to null in/before step 2). /Andreas
Re: Feedback request: a ES spec. organization experiment
On 12 October 2011 02:32, Allen Wirfs-Brock al...@wirfs-brock.com wrote: The experiments are shown in http://wiki.ecmascript.org/lib/exe/fetch.php?id=harmony%3Aspecification_draftscache=cachemedia=harmony:11.1.5-alternatives.pdf. This contains four versions of section 11.1.5 (Object literals). Each version is about 4 pages long and contains the same specification text, but organized in slightly different ways. The first version is what is currently in the specification. All the semantic definitions are lumped together in a single Semantics section, in roughly the same order as the productions occur in the grammar. Each definition includes the grammar production it applies to, so the order doesn't have any semantic significance. The second version regroups the definitions by semantic function. First come all the static semantics definitions for all the productions, then all the PropertyDefinitionList function definitions for the productions that define it, and so on, until finally there are the evaluation function definitions for all the productions. The third version is ordered just like the second, but uses explicit subsection headings for each function group in order to make them more visible. The fourth version orders everything by grammar production: it shows a production and immediately gives all the functions that apply to that production. The third version seems far superior. It makes a proper, visible separation between static semantics and dynamic semantics, which is very helpful, and standard practice as well. In fact, you could be even more consistent about the triptych syntax/static semantics/dynamic semantics by grouping all of the latter in a subsection as well. That would be my preferred structure. (If you worry about having too many levels of numbering, you might want to avoid numbering the individual bits of the dynamic semantics in that case. I think their section numbers are a minor concern.)
(BTW, in addition to a better structure, I have to agree with Claus that having hyperlinks in the PDF would be a real blessing. I don't know how much work that would be, though.) /Andreas
Re: Feedback request: a ES spec. organization experiment
On 12 October 2011 18:34, Allen Wirfs-Brock al...@wirfs-brock.com wrote: http://wiki.ecmascript.org/lib/exe/fetch.php?id=harmony%3Aspecification_draftscache=cachemedia=harmony:11.1.5-alternatives-2.pdf has a 5th alternative version that follows your suggested structure. Note that I classified semantic functions that only depend upon the static structure of the program under static semantics. Unless somebody comes up with some better ideas, I think this is the one I will go with. Nice, me likes. (BTW, in addition to a better structure, I have to agree with Claus that having hyperlinks in the PDF would be a real blessing. I don't know how much work that would be, though.) I agree that internal hyperlinks would be useful, but I don't intend to spend any time on them until this edition is much closer to completion. They're too hard to maintain in a rapidly changing document. Sure, I can definitely understand that. /Andreas
Re: Direct proxies strawman
On 18 October 2011 17:08, David Bruant bruan...@gmail.com wrote: Ok for typeof. But there are other places where [[Call]] is used and the proxy is expected to (indirectly) expose it. For instance bind:

  var fpb = Function.prototype.bind;
  var bind = fpb.bind(fpb);
  var p = Proxy.for(function(){}, {}); // purposefully no 'call' trap
  var p2 = bind(p, {}); // ?

Here, bind will look for an internal [[Call]] on p. What is it? It cannot be the call trap, since this one doesn't exist. Fallback to target.[[Call]]? If target.[[Call]] is a fallback, it means that the internal [[Call]] of an object can be changed; actually, just changing the call trap makes [[Call]] dynamic. I'm not sure what the ramifications of this are. For instance, when binding a function, should it take the [[Call]] value at the time of the bind call, or the dynamic one? (The current ES5.1 definition says dynamic, but both are equivalent with today's objects.) I don't think the presence of [[Call]] itself is dynamic. It's always there, but it checks for the presence of the call trap. /Andreas
Re: Direct proxies strawman
On 18 October 2011 17:48, Tom Van Cutsem tomvc...@gmail.com wrote: 2011/10/18 Andreas Rossberg rossb...@google.com First, Proxy.startTrapping (a.k.a. Proxy.attach). As far as I can see, this implies a significantly more general 'become' operation than the current semantics. I don't see how we can implement that without substantial (and potentially costly) changes to VMs, like support for transparent forwarding references. I also share Dave's concerns regarding conflation of non-attachability and non-extensibility. Your concerns are justified and this is why we need implementors to study the strawman. I'm not an implementor, so I have no clue as to the actual implementation costs of supporting Proxy.startTrapping. Yes, it is a very powerful operation. Not as powerful as Smalltalk's become, but getting close. Since the target would become a fresh proxy object, perhaps tricks similar to those used to make the old fix() behavior work could be used (swapping the internals of two objects). I realize this is highly VM-specific. The trick that you can use with the current proposal won't work any longer. In the current semantics, the only two possible 'become' transitions are proxy -> JS object and function proxy -> JS function. It is easy to implement 'become' by just overwriting the object itself. All you need to ensure is that the representation of a proxy is large enough for the representation of a regular object (and similarly for functions). Proxy.startTrapping, however, goes the other way, with more general transitions. To use the same trick, one would need to make any object to which a proxy can be attached at least as large as a proxy (or, the other way round, be able to represent proxies in a way that takes at most as much space as the smallest object you can attach to). With some extra work and indirection overhead, we could implement a proxy object in two words. But that might not be good enough, e.g. when attaching to foreign (host) objects or some internal ones.
Proxy.stopTrapping worries me, too, especially in combination with attaching. It seems like I could create a proxy p, then later want to deactivate trapping for my handler. But in the meantime, somebody else morphed my p into his own proxy by invoking startTrapping(p). So stopTrapping(p) would deactivate his handler, not mine. So, the proposed interface seems broken to me. Good point. Yet another reason why I prefer the alternate Proxy.temporaryFor API I sketched in reply to Dave Herman. That API does not necessarily suffer from this issue. Yes, I think that interface, while less slick, is the right one. Finally, I'm not sure I fully understand the performance implications of direct proxies vs the current proposal. From looking at your prototype implementation, it seems that we need quite a number of additional checks and calls, even for non-fixed properties. Can you perhaps quantify that overhead a bit? Taken together, lots of checks are needed, but the amount of checks per trap is fairly limited. Also, most checks reuse the pathways of existing primitives like delete and defineProperty. In the scenario where we are wrapping a target object, we're trading one type of overhead for another: in the current proxy proposal, say I only want to intercept gets on a target object. I am still forced to implement a full ForwardingHandler (in JS itself), and override only its 'get' trap. All operations other than get incur an unnecessary overhead: the operation must trap to the handler, only to have the handler forward the operation anyway (IOW: the operation is lifted to the meta-level, only to be lowered to base-level immediately afterward). With direct proxies, all traps other than get should incur very little overhead. I envision that a direct proxy can very efficiently forward an operation to the target, with no need to lift & lower.
On the other hand, now the get operation will incur an additional check to verify that its reported result is consistent with the target object (only if the property was previously exposed as non-configurable). I'm speculating at this stage, but I assume that the vast majority of existing JS code does not use Object.getOwnPropertyDescriptor, hence has no way of determining whether a property is non-configurable, hence does not activate the more expensive checks. The overhead then is mostly checking whether the corresponding target object's property is non-configurable. In any case, I'm not sure that performing micro-benchmarks on my DirectProxies.js prototype implementation will generate useful results: I think it's too dependent on the current proxy implementation, and moreover I'm sure that many of my checks can be done _way_ more efficiently at the VM-level. For instance, to test whether a target property is non-configurable, I check Object.getOwnPropertyDescriptor(target, name).configurable. In a VM I presume this can be made considerably more efficient. No need
Re: Direct proxies strawman
On 19 October 2011 05:08, David Herman dher...@mozilla.com wrote: It’s still as easy to create such “virtual” proxies: just pass a fresh empty object (or perhaps even null?) Please, make it null. So much more pleasant (and avoids needless allocation). (The only downside of allowing null to mean no target would be if you wanted to future-proof for virtualizable primitives, including a virtualizable null.) If I understand the proposal correctly, you cannot avoid the allocation, because the target is used as a backing store for fixed properties. /Andreas
Re: Why isn't FunctionExpression a PrimaryExpression?
One concern might be that we probably cannot make arrow notation (if we introduce it) a primary expression, and it might be confusing if they have different precedence. I also think it is easier to parse for the human reader when he sees (function f() { ... })() instead of function f() { ... }(), especially when this occurs as a statement. (Mh, actually, could we even distinguish between function declarations and expression statements starting with a function expr in LALR(1), without heavy grammar transformation?) /Andreas On 20 October 2011 01:20, Brendan Eich bren...@mozilla.com wrote: On Oct 19, 2011, at 3:29 PM, Allen Wirfs-Brock wrote: On Oct 19, 2011, at 2:53 PM, Brendan Eich wrote: On Oct 19, 2011, at 8:16 AM, Allen Wirfs-Brock wrote: Function expressions were added in ES3. Were they just added at the wrong place in the grammar? Thanks for raising this, I keep forgetting to. Oddly enough, SpiderMonkey always had them (prior to ES3 even being drafted) as PrimaryExpressions. No one can observe the difference, as you note. Either way, one can write var fun_member = function () {}.member; On aesthetic grounds, I would prefer the grammar to make function expressions primary. Good, I want to make that change because for semantic specification purposes FunctionExpression works better as PrimaryExpression. I just wanted to make sure, before I make the change, that there wasn't some grammatical subtlety I was overlooking. Toy grammar (| is meta, other punctuators after the : are concrete):

  E: ME
  ME: PE | FE | ME [ E ] | ME . ID | ...
  PE: ( E ) | ...

Given this grammar, if there's no way that ME -> FE would be reduced where ME -> PE -> FE was not possible, *and* there are no PE occurrences on the RHS of a production whose LHS is *not* ME, then FE can move down one precedence level from being the sole RHS part of ME, to being the sole RHS of PE. (Lotta abbreviation there, sorry.)
Trivial search shows PrimaryExpression occurs in only one RHS, as the sole RHS part produced from MemberExpression. /be
Re: Protocol for new and instanceof?
On 22 October 2011 01:08, Axel Rauschmayer a...@rauschma.de wrote: Reified names (private or otherwise) are a very powerful mechanism. I’m not aware of another programming language that does this (possibly Common Lisp with its symbols, but I don’t know enough about them). It’s good to have them, because they increase JavaScript’s expressiveness. Dynamically generated abstract names are actually a fairly standard approach to achieve information hiding, at least in languages that cannot enforce it by other means (e.g. through a type system). /Andreas
Re: missing non-rest_parameters should throw error
On 26 October 2011 20:27, Allen Wirfs-Brock al...@wirfs-brock.com wrote: 1) arguments is needed for backwards compatibility (no migration tax) 2) it is useful with destructuring parameters:

  function ({a, b, c}, [x, y]) {
    if (arguments.length < 2) ...
    ...
    var arg1obj = arguments[0];
    ...
  }

All languages I know with pattern matching facilities provide a simple solution for that. E.g. in ML:

  fn (arg1 as {a, b, c}, arg2 as [x, y]) => ...

We should really get rid of any need for using `arguments'. If your example is a relevant use case, then supporting something along these lines in Harmony seems preferable, at least to me. /Andreas
Re: testable specification
On 27 October 2011 13:35, David Bruant bruan...@gmail.com wrote: +1. Where the spec is already almost pseudo-code, its readability would improve if it was, in fact, pseudo-code. But would an extra interpreter be needed, or couldn’t one just implement the ES-262 constructs (execution contexts etc.) in an existing language (Python, Rust, Scheme, Smalltalk, etc.)? Why choose a completely different language? Why not ECMAScript 5.1? It will be one less language to learn, as people who read the ES6 spec are very likely to be familiar with ES5.1. I personally wouldn't feel comfortable reading a spec in any of the 4 languages you cited. Or maybe define the couple of things that can't be fully implemented in ES5.1 (proxies, private names) and use ES5.1 + these constructs to define ES6. To spec a beast like ES, you want something with a considerably simpler and cleaner semantics than ES. Otherwise, all you end up with is a circular definition. Ideally, a good executable spec would become the normative spec at some point, so this is not just a philosophical point. /Andreas
Re: Globalization API working draft
On 3 November 2011 01:12, David Herman dher...@mozilla.com wrote: ES6 modules are not extensible, for a number of reasons including compile-time variable checking. But of course API evolution is critical, and it works; it just works differently. Monkey-patching says let the polyfill add the module exports by mutation, e.g.:

  // mypolyfill.js
  ...
  if (!SomeBuiltinModule.newFeature) {
    load("someotherlib.js", function(x) { SomeBuiltinModule.newFeature = x; });
  }

With modules, you instead say let the polyfill provide the exports, e.g.:

  // mypolyfill.js
  ...
  export let newFeature = SomeBuiltinModule.newFeature;
  if (!newFeature) {
    load("someotherlib.js", function(x) { newFeature = x; });
  }

The difference is that clients import from the polyfill instead of importing from the builtin module. I'm not 100% satisfied with this, but it's not any more code than monkey-patching. I believe the more modular and more convenient solution (for clients) is to create an adapter module, and let clients who care about new features import that instead of the original builtin. With module loaders, you should even be able to abstract that idiom away entirely, i.e. the importing code doesn't need to know the difference. It is easy to maintain such adapters as a library. This is a common approach in module-based languages. It is a more robust solution than monkey patching, because different clients can simply import different adapters if they have conflicting assumptions (or, respectively, have a different loader set up for them). One issue perhaps is that the modules proposal doesn't yet provide a convenient way to wrap an entire module. Something akin to include in ML, which is a bit of a two-edged sword, but perhaps too useful occasionally to ignore entirely. /Andreas
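The adapter idea described above can be sketched with plain objects standing in for module instances (all names here are made up for illustration; a real version would use the module system itself):

```javascript
// What the platform provides, possibly lacking the new API:
const SomeBuiltinModule = { oldFeature: () => "old" };

// What a polyfill library would supply (e.g. loaded from a separate file):
const polyfillNewFeature = () => "new";

// The adapter re-exports everything from the builtin, filling in
// newFeature only when it is missing. Clients who care about the new
// feature import the adapter instead of the builtin module.
const SomeBuiltinModuleAdapter = {
  ...SomeBuiltinModule,
  newFeature: SomeBuiltinModule.newFeature ?? polyfillNewFeature,
};

console.log(SomeBuiltinModuleAdapter.oldFeature()); // "old"
console.log(SomeBuiltinModuleAdapter.newFeature()); // "new"
```

The point of the pattern is that two clients with conflicting assumptions can each import a different adapter, while the builtin itself is never mutated.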
Re: An array destructing specification choice
On 5 November 2011 17:44, Brendan Eich bren...@mozilla.com wrote: Destructuring is irrefutable in that it desugars to assignments from properties of the RHS. It is not typed; it is not refutable. I don't think that's true, at least not in the usual sense of irrefutable pattern. Because you can write

  let {x} = 666

which will be refuted, by raising a TypeError. Of course, the real question is, what does this do:

  let {} = 666

/Andreas
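For what it's worth, the question can be answered empirically in engines that later shipped ES2015 destructuring: only null and undefined are refuted, and every other primitive goes through ToObject. A small probe (the helper name is made up):

```javascript
function tryDestructure(value) {
  try {
    const {} = value;   // the object pattern coerces its RHS to an object
    return "ok";
  } catch (e) {
    return e.constructor.name;
  }
}

console.log(tryDestructure(666));       // "ok" — 666 is wrapped, nothing extracted
console.log(tryDestructure("abc"));     // "ok"
console.log(tryDestructure(null));      // "TypeError"
console.log(tryDestructure(undefined)); // "TypeError"
```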
Re: An array destructing specification choice
On 5 November 2011 19:55, Brendan Eich bren...@mozilla.com wrote: On Nov 5, 2011, at 9:38 AM, Allen Wirfs-Brock wrote: In a similar vein, what is the value of r in:

  let [z, y, ...r] = {0: 0, 1: 1, 2: 2, length: 3, 3: 3, 4: 4};

should it be [2] or [2,3,4] (and if the latter, how is that determined)? The inspiration for ... in the past came from (among other sources) Successor ML: http://successor-ml.org/index.php?title=Functional_record_extension_and_row_capture Since I actually wrote half of that, I feel obliged to say that it does not answer the questions raised here. ML is a typed language, and contrary to popular belief, many language design problems are much easier to solve in a typed setting. However, there is some inspiration in the way SML treats tuples as special cases of records, very much like arrays are a special case of objects in JS. In particular, all of SML's pattern matching rules for tuples follow just from the way they desugar into records with numeric labels. For Harmony, this kind of equivalence would imply that

  let [x, y, z] = e

is simply taken to mean

  let {0: x, 1: y, 2: z} = e

and the rest follows from there. The only problem is rest patterns. One possible semantics could be treating

  let [x, y, z, ...r] = e

as equivalent to

  let {0: x, 1: y, 2: z, ..._r} = e
  let r = [].slice.call(_r, 3)

where I assume the canonical matching semantics for object rest patterns that would make _r an ordinary object (not an array) accumulating all properties of e not explicitly matched (even if e itself is an array, in which case _r includes a copy of e's length property). Of course, engines would optimize properly. (But yes, row capture for objects introduces a form of object cloning, as Allen points out.) /Andreas
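The numeric-label equivalence for the non-rest part can be checked directly in any engine with destructuring (using const rather than let, purely for the sketch):

```javascript
const e = [10, 20, 30];

const [a0, a1, a2] = e;           // array pattern
const {0: b0, 1: b1, 2: b2} = e;  // the object pattern it mirrors

console.log(a0 === b0, a1 === b1, a2 === b2); // true true true

// The same equivalence explains why array-likes match object patterns:
const {0: first, length: len} = {0: "x", length: 1};
console.log(first, len); // "x" 1
```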
Re: An array destructing specification choice
On 7 November 2011 17:07, Allen Wirfs-Brock al...@wirfs-brock.com wrote: let {x} = 666 which will be refuted, by raising a TypeError. No, it does ToObject(666) and then looks for the x property of the resulting wrapper object. Ouch, really? I don't see that in the proposal (http://wiki.ecmascript.org/doku.php?id=harmony:destructuring), and to be honest, it sounds like a horrible idea. It is just another way to silently inject an `undefined' that is tedious to track down. We already have too many of those... When would this ever be useful behaviour instead of just obfuscating bugs? /Andreas
Re: An array destructing specification choice
On 7 November 2011 17:34, Allen Wirfs-Brock al...@wirfs-brock.com wrote: It is just another way to silently inject an `undefined' that is tedious to track down. We already have too many of those... It is how the language currently behaves in all situations where an object is needed but a primitive value is provided. We want consistency in language design, not a hodgepodge of special cases and different rules. Hm, I don't quite buy that. There are plenty of places in ES today where we don't convert but throw, e.g. in, instanceof, various methods of Object, etc. Destructuring arguably is closely related to operators like in. Implicit conversion would violate the principle of least surprise for either, IMHO. I agree that consistency is a nice goal, but it seems like that train is long gone for ES. Also, if consistency implies proliferating an existing design mistake, then I'm not sure it should have the highest priority. When would this ever be useful behaviour instead of just obfuscating bugs? let {toFixed, toExponential} = 42; OK, I guess useful is a flexible term. Would you recommend using that style as a feature? /Andreas
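Engines that implement ES2015 destructuring let one test how useful Allen's example really is: the methods do come out (via the Number wrapper's prototype), but they arrive detached from their receiver. A sketch, not an endorsement of the style:

```javascript
const {toFixed, toExponential} = 42;

console.log(typeof toFixed); // "function" — ToObject(42) made this work

// ...yet calling it detached from a Number receiver fails:
try {
  toFixed(2);
} catch (e) {
  console.log(e instanceof TypeError); // true — no Number as `this`
}

// It has to be explicitly re-bound to be of any use:
console.log(toFixed.call(42, 2)); // "42.00"
```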
Re: An array destructing specification choice
On 7 November 2011 18:42, Allen Wirfs-Brock al...@wirfs-brock.com wrote: or let [first, second] = "abc"; Yes, that's a more convincing example -- although we should probably be aware that users will then also do let [x, y, ...s] = somestring and expect it to slice a string efficiently. /Andreas
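For reference, this is close to what ES2015 engines ended up shipping, with the caveat that string destructuring became iteration-based, so a rest element yields an array of characters rather than a substring:

```javascript
const [first, second] = "abc";
console.log(first, second); // "a" "b"

// A rest element works too, but note the result type:
const [x, y, ...s] = "abcde";
console.log(s); // ["c", "d", "e"] — an array, not the string "cde"
```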
Re: An array destructing specification choice
On 7 November 2011 22:46, Brendan Eich bren...@mozilla.com wrote: On Nov 7, 2011, at 3:04 AM, Andreas Rossberg wrote: One possible semantics could be treating

  let [x, y, z, ...r] = e

as equivalent to

  let {0: x, 1: y, 2: z, ..._r} = e
  let r = [].slice.call(_r, 3)

where I assume the canonical matching semantics for object rest patterns that would make _r an ordinary object (not an array) accumulating all properties of e not explicitly matched (even if e itself is an array, in which case _r includes a copy of e's length property). Of course, engines would optimize properly. Right, but why the 3 passed to slice.call, if _r captured all enumerable properties except those with ids 0, 1, and 2 (stringified, of course)? I was assuming that we want

  let [x, y, z, ...r] = [1, 2, 3, 4, 5]

to bind r to [4, 5]. For that to hold, you have to shift down the numeric indices in _r by 3, which is what the slice call was intended to do. At least that's the behaviour I'd expect from an array rest pattern, and Allen's earlier example in this thread seems consistent with this assumption. But looking at the proposal, I now see that it does not actually do the shifting. So now I'm confused about what the intended semantics actually is. Anyway, you've hit what I was advocating over the weekend as the answer to the pair of questions I posed: [no, no]. Lasse makes a good case for [yes, yes]. The call to .slice implicitly reads the length, so it rather seems to implement [no, yes]. Using [no, no] would work, too, but requires a somewhat non-standard form of slicing. I have a slight preference for being consistent with the existing slicing semantics. I don't like [yes, yes] that much. I prefer to view array patterns merely as straightforward sugar for object matching, and [yes, yes] kind of breaks that and puts more special cases into the language. So I'd actually turn around Lasse's argument. :) I still think we should argue about row capture in object patterns a bit before concluding.
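The binding assumed above, and the index-shifting role of the slice call, can be sketched in a runnable way today (the intermediate _r is built by hand, since object rest capture over arrays was only a proposal at the time):

```javascript
// The intended outcome for the array rest pattern:
const [x, y, z, ...r] = [1, 2, 3, 4, 5];
console.log(r); // [4, 5]

// The desugaring from the earlier mail, spelled out: _r keeps the
// original numeric indices (and the length), so slice must shift by 3.
const _r = {3: 4, 4: 5, length: 5};  // what the object rest would capture
const r2 = [].slice.call(_r, 3);
console.log(r2); // [4, 5] — indices shifted down as intended
```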
What do you think? Well, I think that row capture in object patterns is indeed a useful feature, esp. for record-like use cases. I agree that shallow cloning isn't a big problem -- in any case, it's no worse than doing the same sort of cloning for array-like objects. It also seems more consistent to me to have rest patterns in both forms. If object rows enable maintaining the syntactic-sugar explanation for array patterns, then the overall result might even be a slightly simpler language. /Andreas
Re: [Proxies] Refactoring prototype climbing in the spec
On 7 November 2011 16:54, Tom Van Cutsem tomvc...@gmail.com wrote: I wrote up an initial (but fairly complete) draft of a proposed refactoring of the ES5 [[Get]], [[Put]] and [[HasProperty]] algorithms to change the way in which these algorithms climb the prototype chain: http://wiki.ecmascript.org/doku.php?id=strawman:refactoring_put Looks good, and as far as I can see from a first read, it solves the issues we were discussing so far. But I have a follow-up request. :) Regarding redundant trap calls with proxies, there is another, more pervasive problem with the current spec: in lots of places it first calls [[HasProperty]] and then [[Get]]. With proxies, this always implies two trap calls, which seems wasteful. Would it be possible to refactor that, too? It seems more difficult, because we would need to enable [[Get]] (and hence the get trap) to signal lookup failure. (Too bad that we cannot reuse `undefined' for it.) But I think the current situation isn't satisfactory. /Andreas
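The double-trap cost described above is easy to observe with the modern Proxy API, used here as a stand-in for the 2011 proxy strawman:

```javascript
const calls = [];
const p = new Proxy({x: 1}, {
  has(target, key) { calls.push("has"); return key in target; },
  get(target, key) { calls.push("get"); return target[key]; },
});

// The [[HasProperty]]-then-[[Get]] idiom used in many spec algorithms:
if ("x" in p) { void p.x; }

console.log(calls); // ["has", "get"] — two trap activations for one lookup
```

A refactored algorithm that asked [[Get]] alone (with a way to signal lookup failure distinct from undefined) would trigger only one trap here.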
Re: Security and direct proxies (Was: Re: Lecture series on SES and capability-based security by Mark Miller)
On 8 November 2011 18:47, David Bruant bruan...@gmail.com wrote: Given that direct proxies are not in a position to violate any of the *non-configurability or non-extensibility constraints* of their wrapped target, it should be safe to replace an existing normal object by a direct proxy wrapping that object. My understanding is that regarding the issue you mention, you cannot do more with startTrapping than redefining built-ins by (re)setting a property. That may be true for plain objects, but I think the situation is quite different for functions, because there is no equivalent to non-configurable for the [[Call]] and [[Construct]] properties. On 8 November 2011 19:13, Mark S. Miller erig...@google.com wrote: The reason Proxy.attach may not be fatal is that it only allows attachment to extensible objects. Our hypothesis is that any ES5 object that is interested in defending itself has already made itself non-extensible. This is why we must key this off of non-extensibility, rather than introducing a new orthogonal bit -- to avoid breaching the defenses of those ES5-era objects that tried to defend themselves. I don't think that addresses the issue I was describing. The problem is: the object itself can be frozen, non-extensible, non-attachable just fine, but that doesn't achieve much by itself anymore, because an attacker can still attach to each individual method, since those are entirely separate objects! So instead of just freezing an object, you _additionally_ would have to make all its _individual methods_ non-attachable (by whatever means). AFAICS, that affects assumptions of existing ES5 code quite severely. /Andreas
Re: Security and direct proxies (Was: Re: Lecture series on SES and capability-based security by Mark Miller)
On 8 November 2011 20:29, Andreas Rossberg rossb...@google.com wrote: On 8 November 2011 18:47, David Bruant bruan...@gmail.com wrote: Given that direct proxies are not in a position to violate any of the *non-configurability or non-extensibility constraints* of their wrapped target, it should be safe to replace an existing normal object by a direct proxy wrapping that object. My understanding is that regarding the issue you mention, you cannot do more with startTrapping than redefining built-ins by (re)setting a property. That may be true for plain objects, but I think the situation is quite different for functions, because there is no equivalent to non-configurable for the [[Call]] and [[Construct]] properties. BTW, a similar issue applies to getters and setters: even if a property is non-configurable, as long as it is defined by accessors an attacker could attach to the underlying JS functions and thereby essentially redefine the property without actually modifying it. /Andreas