Re: Peano Arithmetic
On Aug 14, 2008, at 10:22 PM, Michael Haufe wrote:

> function add2(x,y){
>     if(!x){
>         return y;
>     } else {
>         return add2(x--,y++); //recursion error
>     }
> }
>
> function add3(x,y){
>     if(!x){
>         return y;
>     } else {
>         x--;
>         return add3(x,y)++; //error: cannot assign to a function result
>     }
> }
>
> The first function works as expected. The 2nd function goes into an infinite loop since the variables are apparently not assigned as expected before being passed into the function.

You want --x and ++y as the arguments to add2 -- pre-decrement and pre-increment, not post-. The postfix forms evaluate to the old values, so add2 keeps receiving the same x.

> The 3rd function throws an assignment error once it reaches the return.

Functions return rvalues, not lvalues, to use the C jargon.

> So my question is whether this behavior is by design or by accident?

Design.

/be

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss
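For reference, here is a corrected sketch of both functions from the message above. The fixes follow Brendan's advice (pre-decrement/pre-increment, and incrementing the returned value instead of assigning to it); the exact rewrites are illustrative, not from the original post.

```javascript
// Pre-decrement and pre-increment pass the *updated* values, so the
// recursion makes progress and terminates when x reaches 0.
function add2(x, y) {
  if (!x) {
    return y;
  }
  return add2(--x, ++y);
}

// A call result is an rvalue; add 1 to it rather than trying to
// apply ++ to the returned value.
function add3(x, y) {
  if (!x) {
    return y;
  }
  x--;
  return add3(x, y) + 1;
}
```

Both now compute Peano-style addition: add2(3, 4) and add3(3, 4) each yield 7.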
Re: types
On Aug 13, 2008, at 11:24 PM, Neil Mix wrote:

> I do get the idea that a consumer of an API is helped with some good error messages. Sometimes I have public API functions of a library that start something like
>
>     function(a, b) {
>         /* begin debug */
>         isInt(a); // throws
>         isStr(b); // throws
>         /* end debug */
>         ...
>
> The debugging code is stripped when deploying for production for better download and runtime performance.

Dating myself here: I had to deal with Fortran, long ago. Kernighan and Plauger created a structured programming language based on Fortran, which translated to Fortran: Ratfor (Rational Fortran). Not saying JS is Fortran, but really, unless it floats your boat, why not improve the language directly so it can support, declaratively if possible, things you have to do with hand tools and magic comments and offline sanitizers and so on? Those extra things may be necessary now, but not forever. And they're not a good use of programmer time, even if the initial development cost is sunk (see the sunk cost fallacy).

> Does type checking catch misspelling bugs?

It can, yes. Without some notion of an object type,

    var o = foo();
    o.mispelled

can't be caught (suppose foo returns {misspelled: 42}). AS3 does this, even for unqualified references if you are within a package. But see below.

> To the extent that there's static type checking available to catch errors early on, it's a huge productivity gain. Is Harmony type checking even going to be static?

These are great questions that I don't know the answer to. It's ironic that ES4 already dropped its optional static type checker. The 'use strict' proposal that was circulated, which was the subject of efforts at subsetting and unification with the 3.1 folks, was like Perl's use strict -- use good taste and sanity. It was not AS3's static type + sanity checker. I think the fear and hate around any whiff of static typing tainted ES4 -- I'm glad to get past it, with Harmony if not with our earlier, unnoticed cuts.
> Clearly this needs time to sort out in committee. But I'll weigh in with an early opinion that the option to do static type checking in JavaScript would, IMO, be a very good thing.

Although ES4 dropped the optional static type checker, ActionScript was going to keep its checker, hooked up via the toolchain IIRC. There's nothing preventing such extensions, but I don't foresee the committee specifying an optional checker. The runtime type system, however, needs to be fully specified, and if it's done right, then any static type checker will be a conservative partial evaluation of some sort.

I've written that interoperation problems could result if different browsers used different degrees of conservatism and partial evaluation, but I don't expect static checker extensions in most browsers, and I'm also not too worried about people testing in the more lenient one where the stricter one has enough market share for the interop failure to be a problem. The issue will be what works in browsers with their dynamic type checking (whatever it ends up being).

Before we cut the type checker from ES4, we were wrestling with when a browser might try to check. Code is loaded often; windows and frames can reference one another; documents and script tags come and go. Cormac suggested just checking when things seem stable or quiescent, repeatedly, and issuing warnings. Not your "must go to editor, fix stupid error, and recompile" experience, but arguably a better one than the Java heads have.

/be
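Neil's strip-on-deploy pattern from earlier in this thread might look like the following sketch. The isInt/isStr helper names come from his example; their bodies and the wrapped function are illustrative, not from the thread.

```javascript
// Hypothetical debug-time guards, following Neil's naming.
function isInt(v) {
  if (typeof v !== "number" || v !== Math.floor(v)) {
    throw new TypeError("expected an integer, got: " + v);
  }
}
function isStr(v) {
  if (typeof v !== "string") {
    throw new TypeError("expected a string, got: " + v);
  }
}

// A public API function guarded at entry; the marked region would be
// stripped by a build step before deploying to production.
function repeat(a, b) {
  /* begin debug */
  isInt(a); // throws on a non-integer count
  isStr(b); // throws on a non-string value
  /* end debug */
  var out = "";
  for (var i = 0; i < a; i++) out += b;
  return out;
}
```

With the guards in place, repeat(3, "ab") returns "ababab", while repeat("x", "y") throws a TypeError during development.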
Re: types
On Aug 13, 2008, at 11:37 PM, Peter Michaux wrote:

>> How large are the JS programs you write? What other programming languages do you use, and at what scale?
>
> I'm not sure if these two questions are rhetorical or not but anyway...

No, I was curious. It's helpful to get scale data samples from different folks in the community.

> Just out of curiosity, are there many folks on the committee that write JavaScript applications? I ask that question with zero snark factor. I have the impression the majority of members are language implementers and that may be a completely unfounded impression.

That's fair, and Doug made the point at the first meeting he attended. We have included folks like Alex Russell and now Kris Zyp from Dojo; also Adam Peller of IBM, a Dojo committer. The committee members are not just language implementors or experts, but representatives of their organizations. Apple is all about web as platform. The Mozilla community's JS hackers let me know what they think and set me straight often. Opera has a web apps / widgets team. Etc.

But like many other programming language standards bodies, selection for language implementors is inevitable. I've also invited experts from academia, specifically Dave Herman and Cormac Flanagan, to help with specification and provide balance. Without growing too large, I favor a mix from several disciplines on the committee.

Majority rule is not the committee's way; rather consensus (general agreement). The majority of JS hackers may not agree on much, from what I can tell. I'll say this: I've seen an effect with some JS hackers, possibly a Stockholm Syndrome variant, where because the language was frozen for nine years, and because one has to struggle to get things to work cross-browser, necessity becomes a virtue, and change is viewed as a threat.
"Don't turn it into Java" or "I like billing hours working around current deficiencies" are two kinds of comments (paraphrased slightly, or somewhat pointedly in the second case) that I hear. This effect seems entirely malign to me. I'm not talking about you here, note well! My point is that the current JS users have been abused: stuck with something that didn't grow at all after a premature standardization cycle in the '90s. Many are worth hearing from, via this email list, in blogs, at sites like ajaxian.com; but some (sore abused and loving it ;-)) are not the best guide on how to get unstuck.

> I often wonder if it is necessary that ECMAScript is a language which covers all bases and is good at all scales. We have plenty of languages that already try to do that (and don't.)

No language should try to be everything. My position is that JS needs to evolve to meet users' needs, not be frozen or stunted, or have its users reeducated (in camps? that won't work) to use it as is, or in subset form. If one has only Fortran, then a subset if not a pre-processor (Ratfor!) may be a win. But if JS can evolve in an open-standards fashion, then it should, and we can put away some of the hand tools and braces.

Hand tools as a metaphor should not be taken to disrespect doing things by hand or using simple tools where possible. Students should definitely do that, and JS users do it well when prototyping or even deploying small-ish codebases. Some do it well at very large scale. But it's not for everyone and all programs.

> I'm not arguing against type annotations with this comment but I don't see much difference in terms of remembering to have type checks between writing function(a) {isInt(a); ... and function(a:int) { ...

There are big differences. With the first, a might be reassigned later in the function and isInt wouldn't know. Also, for a mutable object type, 'a' might remain the same *reference*, but the object it refers to might mutate to violate the type constraint.
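The difference described here can be made concrete. A sketch (the helper and function names are illustrative, not from the thread): an entry check like isInt(a) only validates the value at the moment of the call, while an a:int annotation would act as a write barrier on every later assignment.

```javascript
// An entry check validates the value only at call time.
function isInt(v) {
  return typeof v === "number" && v === Math.floor(v);
}

function scale(a) {
  if (!isInt(a)) throw new TypeError("a must be an int");
  // ...later in the same function, nothing stops this:
  a = "oops"; // an a:int write barrier would reject this reassignment
  return a;
}

// Likewise for mutable objects: the reference passes a shape check,
// then the object mutates away from the checked shape.
function sum(point) {
  if (!isInt(point.x) || !isInt(point.y)) throw new TypeError("bad point");
  mutate(point); // point is still the same *reference*...
  return point.x + point.y; // ...but no longer satisfies the check
}
function mutate(p) { p.x = "not a number"; }
```

Here scale(3) quietly returns the string "oops", and sum({x:1, y:2}) returns a string concatenation instead of a number, even though both checks passed at entry.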
Mutation is not your friend. You're thinking more of 'like', and indeed we proposed

    function (a) { if (!(a like T)) throw new TypeError; ... }

But that's obviously too much boilerplate, so the shorthand

    function (a like T) { ... }

has been proposed too, in ES4 and (by me, to some favorable comment, depending on how T is defined -- specifically that it can be a value expression, an object or array containing types for example -- no distinct type expression grammar, binding rules, or evaluation model) in Oslo.

> I believe that a:int actually is intended to assign a type to the variable a and the isInt only checks the contents of a at the moment.

Right.

> I'd rather the second be sugar for the former and the variable not be typed at all.

That's not a good meaning to assign to an 'a:int' annotation, based on precedent in many languages. It's also not the right minimal-syntax default meaning when mutable object types are in play. Another difference is the ability for some system (not necessarily part of the standard) to do
Re: types
On Aug 13, 2008, at 11:52 PM, Peter Michaux wrote:

> I see.

To which I say, why not both? First-class, lexical functions are a _relatively_ easy thing upon which to agree, and were added to the language before such a large process was put in place for evolving it. Lambda can do anything, we know this. But that doesn't mean it should be the only tool in the belt.

Macros won't be ready for the next major edition. The research on well-defined, sound hygiene is just being done now. Then there's the lovely C-based syntax. Macros may eventually make it into some future edition (my gut says). Recasting sugar added before they arrive, once they're there, should be possible.

In the meantime, we have extensions from the *not*-large-process Mozilla JS1.x world. We have concrete proposals and sketches from a few people on the committee acting more as champions than dictators, but not subjecting everything to design-by-committee. And we have a pretty high-quality committee. Nevertheless, I think you are right on to warn:

> A committee agreeing by consensus on an OOP system seems like a nightmare that may not happen. The single-inheritance, multiple-inheritance, mixins, interfaces, public, private, protected mess seems intractable since no one seems to think any language does it all perfectly. A CLOS-type system might be best. Who really knows? It is a huge problem and perhaps requires dictator-style leadership (and that may not work either, as multiple implementations must get on board.)

We won't get the benevolent dictator, and we won't try for the big CLOS-style system (this latter point is part of Harmony). We can't iterate the standard too quickly, because of intrinsic overhead and latency, and the tendency to enshrine mistakes and get painted into corners by iterating too quickly (ignoring overhead/latency).
I'm a realist, but I've pushed dialectically against the tendency of languages to stagnate and go into decline at high switching cost (dying only years later, if ever). To the extent that JS1.x in Mozilla, AS3 in Flash/Flex, and ES4 as a love-it-or-hate-it proposal have moved the needle away from stagnation, great. Without Firefox and Safari growing market share, we probably wouldn't even be talking right now about JS futures. Who knew four years ago that any of this was possible?

So cheer up! ;-) Things are looking brighter, in spite of the inevitable standards-committee setting.

> I use OOP frequently in JavaScript but it isn't usually the style in ES3 or class-based like proposed ES4. It's the style I think is appropriate to the situation. That may draw a slight performance penalty but the code is certainly more robust than the ES3 style.

Sorry, curious again: what do you mean by ES3 style?

/be
Re: types
On Aug 14, 2008, at 12:33 AM, Brendan Eich wrote:

> Another difference, for the future but worth future-proofing against: having the syntax gives us further hooks for contacts

Grr, when spellcheckers fail: contracts.

/be
Re: types
On Aug 14, 2008, at 10:40 AM, Steven Johnson wrote:

> On 8/14/08 6:25 AM, Neil Mix [EMAIL PROTECTED] wrote:
>
>> It sounds like static type checking implies a certain amount of hard failure, i.e. you can't run this until you fix your code. That's not really what I'm voting for. I just want it to be possible, somehow, to catch simple type errors very early in the development process, and then to run the same type-annotated code unchanged in the browser.
>
> Depends on what you mean by hard failure. If we have a function like
>
>     // n must be a numeric type
>     function sqrt(n) { ... }
>
> then the ability to say
>
>     function sqrt(n:Number) { ... }
>
> only requires that n be convertible-to-Number at runtime.

(If not, ES4/AS3 required that a TypeError be thrown.) That allows sqrt("foo") to compute the square root of a NaN, which is a NaN. For this reason, ES4 last year moved to avoid implicit conversions except among number types.

> This does make it possible to do some static type checking at compile time, of course (and the AS3 compiler can optionally do this).

IIRC that's because AS3's strict mode (a static type checker) changes the rules to avoid the implicit conversion (from the standard or dynamic mode, which converts freely as you say). May be water under the bridge, but at least for ES4 we chose to get rid of the strict vs. standard implicit-conversion difference, and require a number in both.

> Personally, I like catching stupid mistakes of this sort as early as possible, so I tend to use type annotations everywhere I can when I code in AS3. Your mileage may vary.

There are pluses and minuses. We've all been saved by simple types in C, compared to assembly or pre-historic C which allowed ints and pointers to convert, and treated struct member names as offset macros usable with any int or pointer. But JS is a higher-level language, and one doesn't need to annotate types to avoid memory-safety bugs.
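The convert-freely behavior under discussion can be simulated by hand in plain ES3-era JavaScript. A sketch (sqrtTyped is my name for what the annotated form roughly means at runtime under those rules, not committee terminology):

```javascript
// Roughly what function sqrt(n:Number) means at runtime when the
// annotation only requires convertibility to Number: the argument is
// implicitly converted, so a non-numeric string becomes NaN.
function sqrtTyped(n) {
  n = Number(n);       // implicit conversion; Number("foo") is NaN
  return Math.sqrt(n); // so sqrtTyped("foo") is NaN, not an error
}
```

That is the soft failure Brendan describes: sqrtTyped("foo") quietly returns NaN, while sqrtTyped("16") happily returns 4. ES4's later rule (TypeError except among number types) would turn the first call into a hard failure.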
For numeric code, there can even be a de-optimizing effect if one uses int or uint in AS3, I'm told, because evaluation still promotes everything to Number (double). Is this right?

I took Neil's point to favor not only a separate lint-like tool (which some find painful to procure and remember to run), but possibly something like Cormac's idea I mentioned a few messages ago: a type checker that runs when the code loaded in a web app seems complete, and gives warnings to some console -- but which does not prevent code from running. Neil, how'd that strike you? It could be built into some developer extension for the browser, so you wouldn't have to remember to run jslint.

/be
ECMAScript Harmony
It's no secret that the JavaScript standards body, Ecma's Technical Committee 39, has been split for over a year, with some members favoring ES4, a major fourth edition to ECMA-262, and others advocating ES3.1 based on the existing ECMA-262 Edition 3 (ES3) specification. Now, I'm happy to report, the split is over.

The Ecma TC39 meeting in Oslo at the end of July was very productive, and if we keep working together, it will be seen as seminal when we look back in a couple of years. Before this meeting, I worked with John Neumann, TC39 chair, and ES3.1 and ES4 principals, especially Lars Hansen (Adobe), Mark Miller (Google), and Allen Wirfs-Brock (Microsoft), to unify the committee around shared values and a common roadmap. This message is my attempt to announce the main result of the meeting, which I've labeled Harmony.

Executive Summary

The committee has resolved in favor of these tasks and conclusions:

1. Focus work on ES3.1 with full collaboration of all parties, and target two interoperable implementations by early next year.

2. Collaborate on the next step beyond ES3.1, which will include syntactic extensions but which will be more modest than ES4 in both semantic and syntactic innovation.

3. Some ES4 proposals have been deemed unsound for the Web, and are off the table for good: packages, namespaces and early binding. This conclusion is key to Harmony.

4. Other goals and ideas from ES4 are being rephrased to keep consensus in the committee; these include a notion of classes based on existing ES3 concepts combined with proposed ES3.1 extensions.

Detailed Statement

A split committee is good for no one and nothing, least of all any language specs that might come out of it. Harmony was my proposal based on this premise, but it also required (at least on the part of key ES4 folks) intentionally dropping namespaces.
This is good news for everyone, both those who favor smaller changes to the language and those who advocate ongoing evolution that requires new syntax if not new semantics. It does mean that some of the ideas going back to the first ES4 proposals in 1999, implemented variously in JScript.NET and ActionScript, won't make it into any ES standard. But the benefit is collaboration on unified successor specifications to follow ES3, starting with ES3.1 and continuing after it with larger changes and improved specification techniques.

One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code-loading scenario like the web requires a prioritization or reservation mechanism to avoid early- versus late-binding conflicts. Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder.

For these reasons, namespaces and early binding (like packages before them, this past April) must go. This is final; they are not even a future possibility. To achieve harmony, we have to focus not only on nearer-term improvements -- on what's in or what could be in -- we must also strive to agree on what's out.

Once namespaces and early binding are out, classes can desugar to lambda-coding plus Object.freeze and friends from ES3.1. There's no need for new runtime semantics to model what we talked about in Oslo as a harmonized class proposal (I will publish wiki pages shortly to show what was discussed). We talked about desugaring classes in some detail in Oslo. During these exchanges, we discussed several separable issues, including classes, inheritance, like patterns, and type annotations.
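To illustrate the kind of desugaring meant here, the following is a sketch of the general idea only, not the committee's actual translation: a hypothetical class with private state can be lambda-coded as a closure-based factory, with the proposed ES3.1 Object.freeze providing non-extensible, tamper-proof instances.

```javascript
// A hypothetical class
//     class Point { constructor(x, y) {...}; getX() {...}; ... }
// might desugar roughly to:
function Point(x, y) {
  // x and y live in the closure: private, unreachable from outside.
  var self = {
    getX: function () { return x; },
    getY: function () { return y; },
    moveBy: function (dx, dy) { x += dx; y += dy; }
  };
  // Freeze for a non-extensible instance with fixed methods.
  return Object.freeze(self);
}
```

Instances made this way cannot gain or lose properties, and the private x and y can only change through the methods that close over them.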
I'll avoid writing more here, except to note that there were clear axes of disagreement and agreement, grounds for hope that the committee could reach consensus on some of these ideas, and a general preference for starting with the simplest proposals and keeping consensus as we go.

We may add runtime helpers if lambda-coding is too obscure for the main audience of the spec, namely implementors who aim to achieve interoperation, but who may not be lambda-coding gurus. But we will try to avoid extending the runtime semantic model of the 3.1 spec, as a discipline to guard against complexity.

One possible semantic addition to fill a notorious gap in the language, which I sketched with able help from Mark Miller: a way to generate new Name objects that do not equate as property identifiers to any string. I also showed some sugar, but that is secondary at this point. Many were in favor of this new Name object idea.

There remain challenges, in particular getting off of the untestable and increasingly unwieldy ES1-3.x spec formalism. I heard some generally agree, and no one demur, about the ES4 approach of using an SML + self-hosted
Re: ECMAScript Harmony
In light of Harmony, and the recurrent over- and under-cross-posting, I'd like to merge the [EMAIL PROTECTED] and es4-[EMAIL PROTECTED] lists into [EMAIL PROTECTED]. The old archives will remain available via the web, and the old aliases will forward to the new list. Any objections?

/be
Re: ECMAScript Harmony
On Aug 13, 2008, at 2:36 PM, Douglas Crockford wrote:

> Brendan Eich wrote:
>> In light of Harmony, and the recurrent over- and under-cross-posting, I'd like to merge the [EMAIL PROTECTED] and es4-[EMAIL PROTECTED] lists into [EMAIL PROTECTED]. The old archives will remain available via the web, and the old aliases will forward to the new list. Any objections?
>
> Yes. I'd like to keep ES3.x to discuss ES3.1 business until we get that finalized.

I think Allen Wirfs-Brock was in favor of merging the lists, but we could just rename es4-discuss to es-discuss. Is it worth dropping that 4?

/be
Re: ECMAScript Harmony
On Aug 13, 2008, at 2:50 PM, P T Withington wrote:

> [Trimmed recipients]
>
> On 2008-08-13, at 17:26 EDT, Brendan Eich wrote:
>> One possible semantic addition to fill a notorious gap in the language, which I sketched with able help from Mark Miller: a way to generate new Name objects that do not equate as property identifiers to any string. I also showed some sugar, but that is secondary at this point. Many were in favor of this new Name object idea.
>
> Is this Name object what we old Lispers would call a SYMBOL?

I said gensym in Oslo and had it in a draft, but kids today do not appreciate how l33t it is. The inspiration came from several sources, and one idea for sugar came from PLT-Scheme's define-local-member-name.

/be
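For context: the Name/gensym idea discussed here is essentially what later shipped as Symbol in ES2015. A sketch of the behavior being described, using the modern API:

```javascript
// Each Symbol is a fresh, unforgeable property key that never
// collides with any string-named property -- a gensym.
var secret = Symbol("secret");

var obj = {};
obj[secret] = 42;
obj["secret"] = "a plain string key, entirely unrelated";

// The two keys do not equate as property identifiers, and no one
// else can mint the same key: Symbol("secret") !== secret.
```

Here obj[secret] and obj["secret"] are distinct properties, so the symbol-keyed slot is reachable only by whoever holds the secret value.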
Re: types
On Aug 13, 2008, at 8:02 PM, Peter Michaux wrote:

> Reading the recent news about ES4 removing more of its old features and morphing into Harmony, it seems that the related ideas of classes, types and type checking are the surviving major new features.

It would be more accurate to say that members of the committee see value in some or all of:

1. Classes as higher-integrity factories than constructor functions that use closures for private variables;
   1(a). possibly related by single or multiple inheritance, but in the simplest case with zero inheritance;
   1(b). with this-bound (self-bound) methods, non-extensible instances, and other possible differences from function constructors.

2. Like patterns, which are syntactic sugar for shape tests: assertions about the structural type *at the present moment* of a given object.

3. Structural types, related by record-width subtyping.

4. Optional type annotations, which provide both
   4(a). a write barrier on a variable, to prevent it from referring to an object that does not satisfy the type's constraints;
   4(b). a guarantee that the object denoted by the annotated variable will not mutate away from the type's constraints.

Opinions vary. I think it's fair to say some favor 1 (think Smalltalk) and possibly 3 (structural types do not require shared type definitions that are related by name; this can be winning in large systems involving multiple decoupled programmers), but are skeptical of 4. Some (overlapping people here) point out the benefits of nominal types for security: nominal type annotations can act as auditors or guards that can't be bypassed from outside the lexical scope in which the type name is hidden; private nominal types bound the set of implementations that have to be inspected; generative nominal types can act as keys/nonces; nominal types make feasible information-flow analyses that otherwise can't bound effects along all flows, so must leave the program counter tainted.
Most seem in favor of like patterns, from what I heard in Oslo. These are not types anyway, but convenient syntax for the kinds of shape tests that are open-coded (sometimes with coverage gaps, sometimes skipped altogether) in modern Ajax libraries (isArrayLike, etc. -- you call this duck typing below). Like patterns and structural types could be thought of as JSON schemas. If you only need a spot-check that an object tree matches a structural pattern, use a like test or annotation. If you need the kind of monotonic guarantees listed in 4(a-b), then use structural types.

> Personally I've never quite understood why classes, types and type-checking have been such a fundamental part of the proposal. I've kept my fingers still about this for several reasons (mostly "does my opinion even matter?") but all of this type business seems more trouble than it is worth to me (i.e. a daily JavaScript programmer.)

How large are the JS programs you write? What other programming languages do you use, and at what scale?

JS programmers write type checks and code with latent type systems in mind already. You do it. Doing this by hand, and often implicitly, can be pleasant at smaller scale, with no need for shorthands such as like patterns, or always/everywhere guarantees such as type annotations provide. Closures and constructors are enough for (1). Scaling up across people, space, and time (even with only one programmer, time is a problem; one forgets latent types that the program relies on) leads many people to want more automation. YMMV.

Automation of type checks or shape tests does not mean static typing. Runtime checks and more advanced systems such as PLT-Scheme's Contracts can go a long way. But writing all the checks by hand is a tedious waste of programmer time, and it's costly enough that programmers tend not to do it. This seems good when prototyping, but it backfires at scale.
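The kind of open-coded shape test meant here looks something like the following sketch, in the style of Ajax-library helpers such as isArrayLike (this particular body is illustrative, not any specific library's implementation):

```javascript
// A hand-written "like" test: does the value, *right now*, have the
// shape of an array -- an object with a non-negative numeric length?
// A spot-check of current shape, not a type: the object could mutate
// away from this shape a moment later.
function isArrayLike(v) {
  return v != null &&
         typeof v === "object" &&
         typeof v.length === "number" &&
         v.length >= 0;
}
```

This accepts real arrays, arguments objects, and plain objects such as {length: 2, 0: "a", 1: "b"}, while rejecting strings, null, and objects without a numeric length; a like pattern would express the same check declaratively.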
> Are these type-related features what the community of ECMAScript 3 programmers were really asking for emphatically years ago?

Some are; you've heard from Neil already. But you're dismissing the other things than classes and types that I mentioned, which could be grouped into two categories:

1. Better modularity and integrity through name hiding. This includes the generative Name object idea, but also let as the new var for block scope (const and function in block too).

2. Conveniences: destructuring, spread-operator/rest-params, iterators/generators, expression closures.

> Is the community really asking for now with the surge of functional programming?

False dichotomy alert. Please look into functional programming more, including SML and OCaml; also Typed Scheme and prior optionally-typed Scheme systems. Not only is functional programming not antithetical to type systems (even static type systems), it covers a range of languages from Lisp to Dylan to JS to Scala. You use it as if it means one thing,
Re: How to escape implicit 'with (this)' of a method body
On Aug 1, 2008, at 2:43 PM, Garrett Smith wrote:

> What is dynamically inserted? I guess that would mean properties added to an instance of a non-sealed class.

Right. Those should not be addressable by unqualified names in method scope -- you have to use this.

> so all references continue to be bound at compile time and this sort of brittleness does not come up? I think I remember discussion that 'this' in a static context was not valid.

Maciej meant static in the compile-time or lexical sense, not static in the class-singleton object-property sense.

/be
Re: function hoisting like var
On Jul 30, 2008, at 3:13 PM, Ingvar von Schoultz wrote:

> Regarding my explanations quoted below, did they clarify things?

No, and I don't have time right now to deal with the great number of words you have dedicated to promoting your ideas. This is a shame, since you could have a point, but I simply can't expend the effort to try to find it, given other priorities. Sorry, this is not something I'm happy about, but it's not entirely due to my being busy (i.e., it's not just me -- it's you :-/).

May I suggest using fewer words when replying? Generally proportionate responses would be best. I'm assuming fair play, i.e., that people are not resorting to too short and dismissive a style of replying. I'm not doing that here; I'm just letting you know I don't have time to plow through what you have written. Maybe someone else on the list has the time.

/be
Re: How to escape implicit 'with (this)' of a method body
On Jul 28, 2008, at 3:22 PM, Michael Haufe wrote:

> function foo () { return 'global'; }
>
> class bar {
>     function foo () { return 'local'; }
>     function zot () {
>         // How can I call the global foo from here?
>         without (this) { foo(); }
>     }
> }

It's the same as if you lambda-coded the above (here shown in JS1.8 [Firefox 3]; note the expression closures):

    function bar() {
        function foo() 'local';
        function zot() global.foo();
    }
    function foo() 'global';

This example uses ES4's global synonym for the global object, but you could capture this in a global var at top level:

    var global = this;
    print(new bar().zot()); // prints 'global'

in ES3 or JS1.8 to get the same effect.

> You could use window["foo"](); or whatever the global object is named in the environment

No need to quote and bracket, of course -- window.foo() is fine too.

/be
Re: How to escape implicit 'with (this)' of a method body
On Jul 28, 2008, at 9:46 PM, Brendan Eich wrote:

> It's the same as if you lambda-coded the above (here shown in JS1.8 [Firefox 3]; note the expression closures):
>
>     function bar() {
>         function foo() 'local';
>         function zot() global.foo();
> +       return {foo: foo, zot: zot};
>     }
>     function foo() 'global';
>
> This example uses ES4's global synonym for the global object, but you could capture this in a global var at top level:
>
>     var global = this;
>     print(new bar().zot()); // prints 'global'
>
> in ES3 or JS1.8 to get the same effect.

/be
Re: How to escape implicit 'with (this)' of a method body
On Jul 28, 2008, at 10:05 PM, Jon Zeppieri wrote:

> Isn't the 'with' statement in the original example significant? In the general case, assuming that you don't know what properties 'this' has (as it may have dynamic properties in addition to the fixtures determined by its class), you have no way of knowing whether 'global' or 'window' refers to the global object or to some arbitrary property of 'this'.

The original code used without (this), not with, which I took to mean "avoid instance properties shadowing globals". If you read the original as with, then there is no such problem. But if you construct a problematic case using 'with' and dynamic properties, then I concede that 'global' could be shadowed. This is a reason to avoid 'with'.

In the ES4 proposals last sent out, you could always use __ES4__::global if you really wanted to avoid conflicts -- unless someone perversely added '__ES4__' as a dynamic instance property. There's no solution to this problem other than reserving at least one name, and we can't do that compatibly. We could reserve __ES4__ in version-selected ES4 mode, but that seems unnecessary.

/be
Re: function hoisting like var
On Jul 26, 2008, at 4:03 AM, Ingvar von Schoultz wrote:

>>>> I'm trying to keep the language relatively simple.
>>>
>>> You can't get away from supporting this:
>>>
>>>     { function a(){} var b = a; }
>>
>> What do you mean? This is a syntax error in both ES3 and ES3.1.
>
> It works fine in Firefox 2, Konqueror 3, Opera 9, Internet Explorer 6, and server-side Rhino with JavaScript 1.6.

Waldemar meant precisely what he wrote: ES3 and draft ES3.1 -- the specifications, not random JS implementations.

> Five platforms out of five. Can you throw a syntax error here and claim to be compatible?

The implementations are not compatible. Please see the earlier es4-discuss thread with subject "Function declarations in statements" at:

https://mail.mozilla.org/pipermail/es4-discuss/2007-March/thread.html#527

>> It does not already exist in ES3 or ES3.1.
>
> It exists on platforms as described above. I assumed that ES4 would be compatible.

No, because it is impossible to be compatible with conflicting extensions to ES3 that browsers have implemented. The conflicts and undesirable intersection semantics are why ES4 proposes, and ES3.1 considered but deferred, block-scoped functions that must be direct children of braced blocks. This requires opt-in versioning, which is why ES3.1 deferred it.

/be
Re: function hoisting like var
On Jul 26, 2008, at 2:07 PM, Ingvar von Schoultz wrote: You can't get away from supporting this: { function a(){} var b = a; } ES4 is planning to support function declarations locally bound in blocks, so the above is valid ES4 code. What you see above is function b() hoisting like var. (I said b, not a.) What you said does not make sense. It's true that var b is hoisted to the top of the program or function body. But it is not initialized until control flows through the assignment b = a that is part of the var declaration. So there is no capture problem. There is no far-too-complicated split-scope complexity. There is no capturing of variables that haven't been declared yet. It's simple, intuitive, well-defined and well-behaved. Thanks, I agree. But it is not what you proposed. Again, from Waldemar's original reply, but with your proposed {{}} interpolated and the elided code amended to say what the consequence is: // outer scope function c() ...; // inner scope {{ if (foo) { const c = 37; } ... c in your proposal must be hoisted to the {{, so it can't be function c -- yet it can't be initialized to 37 if foo is falsy ... }} You could reply that const is new (sort of -- two browsers already implement it one way, another treats it as var) and therefore should always scope to { or {{, whichever is closer. But the point stands if you replace const with function or var and hoist to the {{. Repeating the next counter-example, with {{}} changes again, to track your proposal since the original exchange with Waldemar: // outer scope function c() ...; // inner scope {{ function f() { return c; } a = f(); if (foo) { const c = 37; } b = f(); ... just what do a and b hold here? Was f's captured variable rebound by the if statement? ... }} And so on. The above is the /exact/ functionality of function hoisting like var, apart from using two names. You can refuse the clearer syntax, but you can't refuse the above code and functionality. I think I see the confusion now. 
Do you believe that in the var b = a; code you wrote, both the binding of the var named b *and* its initialization with the value of the function object denoted a are hoisted? Hoisted up to what point? Waldemar wrote a while back: Keep in mind that function assignments hoist to the beginning of the scope in which the function is defined, so your proposal won't work. The word assignment, where definition would perhaps have been more precise (function definitions replace extant properties of the same name in the variable object; they are not equivalent to assignment expressions), may have misled you. From the context and the long-standing spec and implementation behavior with functions not in blocks or any other sub-statement position, it was clear (I think) what was meant, but I can see how this could be confusing. Assignment expressions and initializers in var statements do not hoist or otherwise move in the flow of control. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
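[Concretely, the distinction between a hoisted binding and an in-place initializer can be observed like this (names are illustrative):]

```javascript
// What "hoisting" actually moves is the declaration of b, not its
// initializer: b exists (as undefined) from function entry, and is
// assigned only when control flows through `var b = a`.
function demo() {
  var before = typeof b;   // "undefined": the binding exists, uninitialized
  function a() { return 1; }
  var b = a;               // initialization happens here, in control flow
  return [before, typeof b];
}
console.log(demo()); // ["undefined", "function"]
```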
Re: function hoisting like var
On Jul 26, 2008, at 2:06 PM, Ingvar von Schoultz wrote: How sad! It seemed such a simple and intuitive notation! Opinions vary, but all the ones I heard at the Ecma TC39 meeting found it neither simple nor intuitive, and some abhorred it on aesthetic grounds to boot. I think all of these would be unambiguous: {. code .} {: code :} {| code |} {[ code ]} [[ code ]] [ code ] These are either syntax errors without opt-in versioning, or (the last three) do create incompatible ambiguity (consider array initialisers). What's more, as Waldemar pointed out many threads (and too many words) ago, they create capture problems. Please work through the last mail I sent before replying; if some vocabulary or infelicitous word choice is causing any confusion, feel free to mail me privately and ask pointed questions. Thanks, /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Function hoisting
On Jul 19, 2008, at 7:20 AM, Neil Mix wrote: On Jul 19, 2008, at 12:25 AM, Brendan Eich wrote: * ES3.1 and ES4 will both allow ‘function’ inside control flow statements [directly inside an explicit block, not as the lone unbraced consequent statement of an if, while, etc. /be], and it will be hoisted to the top of the block and initialized on block entry (to be compatible with how functions behave in ES3) What's the behavior of an if/else control flow statement that contains a function definition in each explicit block? if (true) { function x() { return 1; } } else { function x() { return 2; } } Block scope -- an x in each branch's block. This avoids capturing problems. To make a binding that extends beyond either block one just uses let or var in an outer scope, and assigns. Richard Cornford made this point in a different context last week (about memoizing top level functions via var, not by overwriting a function definition). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
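[The outer-scope idiom described above -- declare once outside, assign in each branch -- can be written in ES3-portable form like this (`cond` is a stand-in name):]

```javascript
// Bind in an outer scope, assign inside the blocks; x survives the
// end of either block regardless of which branch ran.
var cond = true;  // stand-in for the real condition
var x;
if (cond) {
  x = function () { return 1; };
} else {
  x = function () { return 2; };
}
console.log(x()); // 1 when cond is true
```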
Re: Function hoisting
On Jul 19, 2008, at 9:51 AM, Brendan Eich wrote: if (true) { function x() { return 1; } } else { function x() { return 2; } } Block scope -- an x in each branch's block. This avoids capturing problems. To make a binding that extends beyond either block one just uses let or var in an outer scope, and assigns. Richard Cornford made this point in a different context last week (about memoizing top level functions via var, not by overwriting a function definition). Note also how this neatly separates duties with respect to const vs. var/let. If you want a const binding to the function that survives the end of block, you can do it in ES4, where const is assign-once: const x; if (truthy) { x = function () 1; } else { x = function () 2; } // use x here There is no free lunch, though. A typed function would probably want the signature restated as an annotation on the const. And of course if you want the function to have an intrinsic name, you'll need named function expressions above, which may redundantly restate x as the name of the function (as well as of the binding). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 18, 2008, at 9:02 AM, Allen Wirfs-Brock wrote: We ultimately concluded that the best way to think about what we currently provide is that it is a set of primitive mechanisms that could be used to build higher level reflection facilities. If we had a strong use case we could reintroduce getOwnProperties as such a primitive, but so far it seems non-essential. Incidentally, when we removed getOwnProperties we had to add getOwnPropertyNames, because otherwise you won't necessarily know what properties to ask for using getOwnProperty. This trade-off comes up often with widely-targeted languages. JS is a case in point, but since it was over-minimized at the start, prematurely standardized, and then not evolved in the standard form for too long, the libraries have had to make up for the over-minimization, with inevitable gratuitous differences (minor, usually, in the case of functions like Object.extend). Providing minimal, low-level facilities is not necessarily the best approach. If the audience demonstrably needs and already uses sugar, or a clichéd composite op, the language should give the people what they want. If the low-level primitive (call/cc) is too hard to use for most users, at least Lars and I have argued that simpler, easier-to-use, higher-level functionality (generators, based on Python -- just one example) wins. If Object.extend is the common case, but as Igor shows it can be built on (new name alert, and own only) Object.getPropertyDescriptors and defineProperties, it may be fine to provide only Object.extend and defer Object.getPropertyDescriptors. My own view is that at this point, given the proposed API, it's better to provide both: the low-level primitive for the uncommon cases not yet seen or thought of, and the higher-level API for the many libraries that roll their own equivalents. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
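[Igor's construction isn't reproduced in this thread, but its shape can be sketched with the spellings engines eventually adopted -- the draft's Object.getPropertyDescriptors corresponds, for own properties, to ES2017's Object.getOwnPropertyDescriptors:]

```javascript
// An extend-alike built from the two primitives: read all own property
// descriptors from source, define them on destination.
function extend(destination, source) {
  Object.defineProperties(destination, Object.getOwnPropertyDescriptors(source));
  return destination;
}
var merged = extend({ a: 1 }, { b: 2 });
console.log(merged.a + merged.b); // 3
```

Note this is not behaviorally identical to prototype.js's Object.extend: it copies descriptors (getters stay getters, non-enumerables included) rather than doing [[Get]]/[[Put]] over enumerable properties, inherited ones included.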
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 10:14 PM, Kris Zyp wrote: Arguably, some of the need for direct prototype access is alleviated by providing the clone method. However, there are still plenty of other situations where it is useful. I observe that __proto__ in SpiderMonkey- and Rhino-based JS is mostly used for cases covered by Object.create, with a minority use-case that we've discussed before: initializing it to null in object initialisers to make maps (dictionaries). I am curious how Object.create covers this __proto__ use case of making objects with a defined proto. Doesn't Object.create create a new object and copy properties over? ES3.1 draft dated 15-July-08: 15.2.3.6 Object.create ( O [, Properties] ) The create method creates a new object with a specified prototype. When the static create method is called, the following steps are taken: 1. If Type(O) is not Object throw a TypeError exception. 2. Create a new object as if by the expression new Object() where Object is the standard built-in constructor with that name 3. Call the standard built-in function Object.defineProperties with arguments Result(2) and Properties. 4. Set the internal [[Prototype]] property of Result(2) to Result(1). 5. Return Result(4). __proto__ allows objects with existing properties to have their proto defined in constant time, but isn't Object.create still O(n), with n being the number of properties? Object.create allows creation of a new Object instance with a designated prototype object initializing [[Prototype]]. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
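[For readers following Kris's complexity question: the drafted steps amount to roughly this userland sketch (it cannot express a null prototype, which the real primitive can; `createLike` is a hypothetical name):]

```javascript
// Approximate userland rendering of the drafted Object.create steps.
function createLike(proto, props) {
  function F() {}                          // throwaway constructor
  F.prototype = proto;
  var obj = new F();                       // [[Prototype]] is proto -- delegation, no copying
  if (props !== undefined) {
    Object.defineProperties(obj, props);   // only the Properties argument is walked
  }
  return obj;
}
var base = { greet: function () { return "hi"; } };
var o = createLike(base, { x: { value: 42 } });
console.log(o.greet(), o.x); // hi 42
```

No per-property copying of the prototype occurs -- the new object delegates to proto -- so the cost is proportional to the Properties argument, not to the prototype's property count.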
Re: ES3.1 Object static methods rationale document
On Jul 15, 2008, at 10:30 PM, Allen Wirfs-Brock wrote: I’ve uploaded to the wiki a new document titled: “Proposed ECMAScript 3.1 Static Object Functions: Use Cases and Rationale” It’s available as both a pdf and as a Word doc file: http://wiki.ecmascript.org/lib/exe/fetch.php?id=es3.1%3Aes3.1_proposal_working_draftcache=cachemedia=es3.1:rationale_for_es3_1_static_object_methods.pdf http://wiki.ecmascript.org/lib/exe/fetch.php?id=es3.1%3Aes3.1_proposal_working_draftcache=cachemedia=es3.1:rationale_for_es3_1_static_object_methods.doc Hi Allen, Good to see rationales. A few comments: * No rationale responding to the thread containing this message: https://mail.mozilla.org/pipermail/es4-discuss/2007-September/001114.html that questions the wisdom of getPrototypeOf. The other rationales are helpful; the lack of one responding to this public thread questioning essentially the same design element is a lack -- what do you think? * getProperty and getProperties seem misnamed in light of common usage of get, [[Get]], getProperty, etc. all connoting value-getting, not descriptor-getting. getPropertyDescriptor is a bit long, but not fatally so. Worth renaming? * Did you consider prototype's Object.extend method: Object.extend = function(destination, source) { for (var property in source) destination[property] = source[property]; return destination; }; (see http://www.prototypejs.org/assets/2007/11/6/prototype.js)? It's a commonly encountered shallow enumerable property clone. John Resig enquired about it being in ES3.1 drafts, but it's not there. Any particular reason why not? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 12:09 AM, Brendan Eich wrote: On Jul 15, 2008, at 11:50 PM, Brendan Eich wrote: * getProperty and getProperties seem misnamed in light of common usage of get, [[Get]], getProperty, etc. all connoting value-getting, not descriptor-getting. getPropertyDescriptor is a bit long, but not fatally so. Worth renaming? Shorter alternative verbs to get: lookup, query. The analogy is lookup : define :: get : put. That was unclear, sorry. I meant to suggest that lookupProperty is a shorter alternative to getPropertyDescriptor. Using lookup or query relieves the need for Descriptor at the end to disambiguate value- from descriptor-getting. So: // returns descriptor if (name in obj), else null or something falsy [1] Object.lookupProperty(obj, name) It's still longer than Object.getProperty, but Object.getProperty seems like a misnomer every time I read it, since it does not do a [[Get]] or [[GetProperty]]. ECMA-262 does not need more overloadings of get-property names. Similar comments apply to Object.getOwnProperty. /be [1] The 15 July 2008 draft specifies false, not null, as the return value of Object.getProperty(O, P) when !(P in O) -- is this intended? ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 5:39 AM, Douglas Crockford wrote: * Did you consider prototype's Object.extend method: Object.extend = function(destination, source) { for (var property in source) destination[property] = source[property]; return destination; }; Yes we did. And? The doc gives rationales for design decisions. What's the rationale for leaving Object.extend out? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 8:28 AM, Allen Wirfs-Brock wrote: I didn't specifically respond to that thread because I wasn't aware of it. I had intended to mention __proto__ as a precedent but it slipped through the cracks. No problem. I wanted to point it out so that the rationale doc might include it. It's true that __proto__ or getPrototypeOf breaks an object's encapsulation barrier and reveals implementation details that perhaps were intended to be hidden. The same could be said about the proposed getProperty function which, among other things, gives an observer access to the functions that implement a getter/setter property. In general, that's the nature of reflection. Overall, I think that this is a situation that is inherent in our current generation of dynamic languages. They tend to depend upon the use of idioms that require penetration of the encapsulation barrier. Yeah, I mentioned that in the thread. It's more fundamental than a temporary lack in the current generation of dynamic languages. Reflection breaks abstraction, removing some free theorems -- news at 11 ;-). Some of the concerns expressed in that thread are addressed by other aspects of the static Object methods proposal. For example, the integrity of prototype objects can be protected by sealing them in whole or in part to prevent tampering. This is a good point. SpiderMonkey and Rhino have had Seal APIs for years for this reason (shared prototypes across thread and trust boundaries must be immutable). One feature of both Seal APIs is the ability to seal an entire object graph. This is a sharp tool, since most graphs are fully connected and if you seal the world, life gets boring fast. But it is handy for setting up sealed standard object constructors/prototypes/methods trees with one API call, at the beginning of the world in a throwaway global object that (because of [[Scope]] links) gets sealed too due to the transitive closure. 
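[The sealing idea discussed here later landed in ES5 as Object.seal/Object.freeze; a sketch of the shared-prototype protection, in that eventual ES5 spelling rather than the SpiderMonkey/Rhino Seal APIs themselves:]

```javascript
// Freeze a shared prototype so untrusted code can't tamper with it.
function Point(x, y) { this.x = x; this.y = y; }
Point.prototype.norm = function () { return Math.abs(this.x) + Math.abs(this.y); };
Object.freeze(Point.prototype);

try {
  // Attempted tampering: ignored in non-strict code, TypeError in strict.
  Point.prototype.norm = function () { return 0; };
} catch (e) { /* strict-mode engines throw here */ }

console.log(new Point(3, 4).norm()); // 7 -- the original method survived
```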
Also note that while we support inspection of the prototype value, we don't support modification of it. I noticed ;-). As Doug implies below, one reason for making these operations static methods was to make it easier to control access to them. If you are creating some sort of sandbox, you may not want to make them available within it. Yes, the static Object method suite is a good idea for that reason, as well as for not intruding on the prototype-delegated property space of all objects. That could be taken as an argument in favor of hanging them off of a dedicated global Meta object rather than off of Object. It may be slightly easier to wholesale restrict access to Meta than it would be to restrict access to just these methods while still providing access to the Object constructor. Let's not bring back Meta to ES3.1; it is not wanted in ES4. We should reconcile strict modes too, but that's a different topic -- except insofar as 3.1's strict mode locks down Object.defineProperty, Object.getPrototypeOf, etc. So the host code that removes Object.getPrototypeOf from a guest's sandbox can't be running in strict mode. I'm not suggesting this is a problem, just noting it. Another already-available technique for obtaining the same information in many situations, which wasn't mentioned in the thread, is to use Object.prototype.isPrototypeOf as a probe to discover the prototypes of objects. It isn't something that you would want to do in production code, but I don't think that anyone who was trying to crack an application would hesitate to do so. Searching the reachable object graph would not only be time-consuming, it could fail to find prototypes that were hidden in disconnected components. The case of an otherwise-unreachable prototype object was discussed in that thread. Arguably, some of the need for direct prototype access is alleviated by providing the clone method. However, there are still plenty of other situations where it is useful. 
I observe that __proto__ in SpiderMonkey- and Rhino-based JS is mostly used for cases covered by Object.create, with a minority use-case that we've discussed before: initializing it to null in object initialisers to make maps (dictionaries). I'm convinced based on this experience that __proto__ is the tempting but wrong low-level API to handle these use-cases. I'm in favor of the higher level APIs such as create, clone, and ES4 Map or similar, provided they have the sweet syntax needed to keep kids off the attractive nuisances with which they compete. Crossing the object implementation boundary is generally required for defining object abstractions in this language Not generally. Constructor functions and .prototype properties go a long way, and no one has been able to use __proto__ in portable JS on the web, yet life goes on -- and people do define a great many object abstractions in JS
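[The maps/dictionaries use-case mentioned above reads like this in the API that eventually shipped (Object.create(null); at the time, the idiom was the non-standard {__proto__: null} initialiser extension):]

```javascript
// A dictionary object with no prototype: keys can never collide with
// inherited names such as toString or hasOwnProperty.
var dict = Object.create(null);
dict["toString"] = 1;                   // safe -- nothing inherited to shadow
console.log("hasOwnProperty" in dict);  // false -- the prototype chain is empty
console.log(dict.toString);             // 1
```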
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 10:26 AM, Mark S. Miller wrote: On Wed, Jul 16, 2008 at 10:11 AM, Brendan Eich [EMAIL PROTECTED] wrote: And? The doc gives rationales for design decisions. What's the rationale for leaving Object.extend out? If the document needs to give rationales for leaving out each thing we did not include, it would be quite a long document. It's pretty long already, yet it dotes on some issues that are less relevant than Object.extend, as demonstrated by all the Ajax code that uses Object.extend but does without, e.g., Object.getPrototypeOf (or __proto__). Do what you want with the doc, but please don't dismiss particular requests for rationales with general fretting about document length. The issue of draft ES3.1 adding a great many Object APIs, yet not adding one of the most common APIs from popular Ajax libraries, is legitimate to raise. The answer to my question may not lead to a rationale being added to the document, but there ought to be an answer other than no -- or onlookers will rightly suspect that something is wrong in the reasoning behind the rationales. What is the argument for adding Object.extend()? A pointer to Resig's message or a prior discussion is an adequate response. https://bugzilla.mozilla.org/show_bug.cgi?id=433351 The argument for Object.extend is similar to the one for Function.bind. Different use-cases, but common re-implementation in real-world code. Both can be built using ES3, but relieving everyone from having to re-invent and re-download these wheels is one of the main purposes of the standard library. /be___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
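[For the Function.bind half of the analogy, the ES3-expressible wheel that libraries kept re-inventing looks roughly like this (a simplified sketch, not the eventual ES5 algorithm -- no [[Construct]] handling):]

```javascript
// Partial application plus fixed `this`, using only ES3 facilities.
function bind(fn, thisArg) {
  var fixed = Array.prototype.slice.call(arguments, 2);
  return function () {
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(thisArg, fixed.concat(rest));
  };
}
var counter = { n: 10 };
var addToN = bind(function (k) { return this.n + k; }, counter);
console.log(addToN(5)); // 15
```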
Re: A read/write __proto__ that can vanish
On Jul 16, 2008, at 10:29 AM, Ingvar von Schoultz wrote: Some people yearn hotly for __proto__, preferably writable if at all possible, while others point to problems with security and software privacy. I wrote recently that __proto__ should be viewed as call/cc without macros for the common use-case (and the average user) -- too sharp and low-level a tool for a language like JS. I get the impression that this could be solved by adding a fourth flag among the property flags Enumerable, Writable and Flexible. There might be a flag called Visible, so you could make __proto__ apparently vanish by setting Visible to false. There's no point in Visible if the property could be deleted altogether. What would be the difference? Note that a proto or __proto__ property reflecting [[Prototype]] is *not* the same as the internal [[Prototype]] property, which would always be visible in the sense of checked by [[Get]], [[Put]], etc. We should not add property attributes that can mutate lightly. The motivation for __proto__ is suspect (I argue, based on our experience -- and I perpetrated __proto__ a long time ago). The need for Visible is non-existent IMHO, while the costs and ramifications of another single-bit attribute, one that causes the property to appear to be deleted, are undesirable. Visibility control over names is an important topic, but it can't be served by a single-bit attribute. ES4 as proposed has namespaces to serve (among other use-cases) the cheap and easily expressed private members use-case. That's not this __proto__ case, which anyway depends on a suspect predicate (the need for __proto__). Better to settle the predicate issue first, and avoid adding general mechanism prematurely. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 11:35 AM, Allen Wirfs-Brock wrote: I could live with lookup, although I think it focuses the meaning on the access process rather than on the result. Another, slightly longer alternative would be retrieve. What do you say to Ingvar's suggestion of inspect? Regarding what getOwnProperty returns, what you currently see in the spec. is probably a bug. Are you tracking these somewhere? I think bugs.ecmascript.org is a fine way to keep trac(k). :-) My intent was for it to return undefined, although somebody more steeped in JavaScript idioms could convince me that null is more appropriate if that really is the case. JS has two bottoms: null means no object and undefined means no value, so for this kind of "descriptor object if the property exists, else bottom" API, null is better. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 11:44 AM, Brendan Eich wrote: On Jul 16, 2008, at 11:35 AM, Allen Wirfs-Brock wrote: I could live with lookup, although I think it focuses the meaning on the access process rather than on the result. Another, slightly longer alternative would be retrieve. What do you say to Ingvar's suggestion of inspect? Or (drum roll) describe: describeProperty, which returns a property descriptor. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 12:31 PM, Allen Wirfs-Brock wrote: (I'm not going to get you to take the bait on reify, am I?) (no way! ;-) I think I like describe better than inspect for no particularly tangible reason, although it does have more characters. I generally find the Thesaurus a useful tool in this process and it turned up depict which is shorter but also seems to capture the same core distinction as describe. Length is less of an issue, given the rationale doc's points in favor of keyword parameters via object literals, etc. I think that the currently named getOwnProperty is more fundamental than getProperty so in considering length we should probably use the former as our benchmark. BTW, I'm open to arguments that we don't really need getProperty (as long as getPrototypeOf is kept). (Oh shit ... do we need to rename that one, too??) No, that's a value-get, not a descriptor-get. But you raise a good point: defineProperty creates an own property. Is there really a need for getProperty as drafted? If not, I'd favor making describeProperty return null if the named property is not own, but in a prototype. What are use-cases for getProperty as distinct from getOwnProperty? I think we've pretty much covered the name space and would be content, at this point, to sit back for a few days and see if anybody else is brave enough to argue for one name over another. If not I think we can reach agreement on one of these that we have been discussing. Cool. I'm standing pat on describeProperty. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Object static methods rationale document
On Jul 16, 2008, at 1:41 PM, David Flanagan wrote: Brendan, I think you were correct when you originally wrote: lookup : define :: get : put. I think that lookupProperty is much nicer than describeProperty, since lookup captures the getter nature of the method in a way that describe does not. Connotations are many, ambiguity without a noun phrase (not just overloaded old property) saying what's being got or described or looked up is inevitable. This means the stolid, safe name getPropertyDescriptor is least likely to confuse. I see what you mean about describe in the context of setting a description (depict in a graphics context is problematic too) -- thanks. Thesaurus doesn't include mental concept filtering, dammit. I'm sure we'll get this right, but I'm also pretty sure getProperty isn't the droid we are seeking. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Two interoperable implementations rule
On Jul 11, 2008, at 3:01 PM, Maciej Stachowiak wrote: On Jul 10, 2008, at 6:29 AM, [EMAIL PROTECTED] wrote: In a message dated 7/10/2008 3:03:12 A.M. Eastern Daylight Time, [EMAIL PROTECTED] writes: I do not believe that ECMA has the two interoperable implementations rule that the IETF and W3C have, but since ECMAScript is a standard of equal importance to the Web, I think we should adopt this rule for any future edition of ECMAScript. Such a rule is needed precisely to avoid such casual breakage relative to Web reality. Can we make that a binding TC39 resolution? While it is true that no such rule exists in Ecma, it has been used in work I am familiar with (optical storage) within TC 31. Early work on MO storage resulted in TC 31 agreeing that at least two implementations must demonstrate interoperability before approval of the standard. This meant that both disk manufacturers and drive manufacturers had to work together to demonstrate that the product resulting from the standard would work together. The committee always followed this rule without question, and the CC and GA of Ecma did not interfere with its implementation. We can add this subject to discussion at Oslo, but this is a question that I would put to an internal vote of TC 31 since it has wider impact than may be represented in Oslo. Since there is precedent within ECMA, I definitely think we should take a formal vote on adopting this rule for TC39, in particular that we must have two interoperable implementations for any of our specs before it progresses outside our committee. There are also some details to be worked out: 1) Is two interoperable implementations at feature granularity, or whole spec granularity? In particular, is it ok to cite two implementations for one feature, but two other implementations for another? 2) How is interoperability to be demonstrated? Do we accept good-faith claims of support, or do we need a test suite? 
Given the nature of programming languages and the high stakes of Web standards, I would personally prefer whole-spec granularity (different implementations having different mixes of features does not prove real interoperability), and a test suite rather than just bare claims of support. To be clear, I propose this rule not to block ES3.1, but to make it successful. The WebKit project will accept patches for any feature of 3.1 that has been reconciled with 4, and we will likely devote Apple resources to implementing such features as well, so SquirrelFish will likely be a candidate for one of the interoperable implementations. Mozilla also has an extensive test suite for ECMAScript 3rd edition, which could be a good starting point for an ES3.1 test suite. I also note that the strong version of the interoperable implementations rule will be an even higher hurdle for ES4. Any comments? You don't need another huzzah from me. The hurdle is certainly higher for ES4, although it may be less high given its reference implementation, which could pass the tests. Should a reference implementation, even if slow, count? Of course tests are never complete, but we need not pretend they are to have confidence in interoperation. I am interested in real programmers banging on draft implementations, which will produce bug reports beyond what tests find, and lead to more tests being developed. /be___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Two interoperable implementations rule
On Jul 11, 2008, at 4:06 PM, Geoffrey Garen wrote: Should a reference implementation, even if slow, count? My own opinion on this is no. Since, for the most part, a reference implementation doesn't face the performance and maintainability challenges that shipping software faces, I don't think it fleshes out the same issues that a real-world implementation would. I happen to agree, but this means there's more than a shared test suite in answer to Maciej's second question: 2) How is interoperability to be demonstrated? Do we accept good-faith claims of support, or do we need a test suite? If only a test suite were enough, then the RI would have to count. The chicken-and-egg problems with prototype implementations and draft specs suggest that we need all of tests, users banging on prototypes and causing new (reduced) tests to be written, and of course specs (ideally testable, which is the primary reason for the RI). It will take nice judgment along with hard work to reach the point where we believe the specs should be standardized. It's clear some vendors won't want to risk implementing and shipping something that has not yet been standardized. I don't want to over-formalize at this point, but I'm happy to exclude the RI in the Two interoperable implementations rule. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 9, 2008, at 11:37 PM, Mark Miller wrote: What are the compatibility relationships between ES4 opt-in, ES4 opt-in strict, ES4 strict, and ES4? I think it's clear that ES4 opt-in strict is a subset of ES4 opt-in. You have to opt into ES4 to even utter the pragma |use strict|, so there's no ES4 strict without opt-in. ES4 without opt-in is also not meaningfully different from the compatible default version that browsers more-or-less agree on (which has been called ES3 + reality, which was one idea for ES3.1). Keywords new in ES4, even though only contextually reserved under opt-in versioning, would not be reserved in any context in the default version. As for new syntax not involving new keywords, and new library APIs, we have talked about supporting these additions in the default version, since doing so cannot break existing code. Note that new syntax not involving new keywords does not cover function sub-statements, as noted in this thread -- there, compatibility issues exist. Also, some implementations have extended ES3 with, e.g., const -- again not compatibly with ES4 as proposed. So opt-in versioning may (or may not, depending on browser-specific content) be required to change the meaning of const or function in a block or other sub-statement context. Are there any other subset relationships among them? I hadn't realized till just now how large the gulf might be between ES4 opt-in and ES4. What is ES4 without opt-in version selection? How could it differ from <script>...</script> or <script type="application/javascript">...</script>? Do opt-in and strict define orthogonal switches? Can opt-in and non-opt-in programs co-exist in the same frame (global object)? We all need to clarify these issues. Have you read http://wiki.ecmascript.org/doku.php?id=proposals:versioning ? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 10, 2008, at 12:02 AM, Maciej Stachowiak wrote: I do not believe that ECMA has the two interoperable implementations rule that the IETF and W3C have, but since ECMAScript is a standard of equal importance to the Web, I think we should adopt this rule for any future edition of ECMAScript. Agreed -- I've been saying this, and I'm trying to line up at least four non-reference ES4 implementation efforts. Such a rule is needed precisely to avoid such casual breakage relative to Web reality. Can we make that a binding TC39 resolution? It's really up to the TC. Ecma does not have a rule (the w3c has broken its own rule, as you know). The binding part is the honor system, not words on paper -- as with any resolution. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 10, 2008, at 12:09 AM, Brendan Eich wrote: On Jul 10, 2008, at 12:02 AM, Maciej Stachowiak wrote: I do not believe that ECMA has the two interoperable implementations rule that the IETF and W3C have, but since ECMAScript is a standard of equal importance to the Web, I think we should adopt this rule for any future edition of ECMAScript. Agreed -- I've been saying this, and I'm trying to line up at least four non-reference ES4 implementation efforts. At *least* four: SpiderMonkey Rhino ESC+Tamarin MbedThis Opera is a hoped-for fifth. But this is in the future, along with draft specs sufficient to prototype, and tests added to today's ES3-ish suites (which seem to have common ancestry in Mozilla's js/tests). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 9, 2008, at 11:59 PM, Brendan Eich wrote: As for new syntax not involving new keywords, and new library APIs, we have talked about supporting these additions in the default version, since doing so cannot break existing code. This is not true of ES4, although it's true we have talked about trying to unconditionally add properties. The problem is that if existing code object-detects a new API property, expecting its own same-named (but possibly very different) method or value, it could use an ES4 namesake and go very wrong. ES4 as proposed uses the __ES4__ namespace to hide new names from the default version. Opting into ES4 opens the __ES4__ namespace. This is in the docs on http://ecmascript.org/ and the draft specs that have been circulated. There may be problems yet to solve with this approach, but it's a bona fide attempt to avoid potentially breaking property additions to standard objects. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 10, 2008, at 12:41 AM, Mark S. Miller wrote: On Wed, Jul 9, 2008 at 11:59 PM, Brendan Eich [EMAIL PROTECTED] wrote: Have you read http://wiki.ecmascript.org/doku.php?id=proposals:versioning ? I had read it, but rereading it in the current context was illuminating. Thanks for the pointer. Is current document the same as current frame and current global object? It turns out that they are coterminous and coextensive, because closures entrain the global object -- yet the window object as returned by window.open or accessed otherwise via the DOM must have persistent object identity -- you can write 'var w = window.open (...);' and no matter how many docs load in w, its object-reference identity is the same. This duality requires something called split windows, where w is the outer window object that persists across navigation, and each document gets a fresh inner window object to use as the ECMA-262 global object. All browsers do this now (Safari in seed 4 versions, if I recall Maciej's post here the other week correctly). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Update on ES3.1 block scoped function declarations
On Jul 10, 2008, at 2:03 PM, Brendan Eich wrote: On Jul 10, 2008, at 1:58 PM, Allen Wirfs-Brock wrote: Maybe, I’m missing something subtle, but 21 is clearly the right answer and is what I believe is specified by the version of section 10 that I sent out yesterday regardless of the scoping of block nested functions. Of course, that’s just spec-ware… 21 is the right answer, I agree. Previously. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Update on ES3.1 block scoped function declarations
On Jul 10, 2008, at 3:28 PM, Mark S. Miller wrote: On Thu, Jul 10, 2008 at 2:51 PM, Allen Wirfs-Brock Allen.Wirfs- [EMAIL PROTECTED] wrote: I see, yes there is a potential eval tax. If I thought this was really a concern (and as you say, we already have the issue for catch and such) I'd be more inclined to fiddling with the scoping rule of eval rather than discarding lexically scoped consts. BTW, I think many of the use cases for such const are more in support of code generators than actual end user programming. Could you explain the eval tax issue you guys are concerned about? I don't get it. Thanks. In ES1-3, the scope chain is a linked list of objects. Every function call creates an activation object to be the variable object used when entering the execution context for that function's code. Thus when entering the execution context for eval code, one uses the caller's scope chain. Real implementations do not reify objects for all activations -- doing so is a good way to be slow. Separately, we aspire to lexical scope. This does not necessarily mean block scope, see e.g. ES4 comprehensions. But however it maps onto syntax, lexical scope holds out the hope that the binding information is compile-time only. After that, the implementation can forget about bindings and the lexical scopes they inhabit. That's not possible if eval can see const bindings in ES3.1 as proposed, or let/const/sub-statement-function bindings in ES4. The Previously thread I cited talks about an alternative where eval cannot see lexical bindings. But that thread concluded (IMHO) with the victorious and inevitable usability argument that programmers would be greatly put out, as well as surprised, if eval could not see its caller's lexical bindings. So implementations have to save lexical binding information when compiling, and reify or otherwise propagate it to eval. 
Implementations that support indirect eval must not only save the lexical binding information, they must reify bindings as properties and scopes as objects (or something morally equivalent), since the compiler cannot see all eval calls and make a private arrangement to pass private binding/scope data structures preserved with the function or script that calls eval. The indirect eval activation really does need to see objects on a scope chain. This can be done on demand, but it is not pretty. Imposing this tax on implementations of ES3.1 and not giving them let and function sub-statements seems half-hearted, and implementations are likely to extend. Block scope is nice, but it's a big change for ES3.1. The alternative is to a. confine const grammatically to top level where it can be treated like a property of the global or activation object in the spec, and b. deal with eval referencing a catch variable or named lambda specially. The (b) cases were specified in ES3 using as if by new Object or equivalent, which is a bug, but some implementations ignored ES3 and used lexical binding machinery. I'm not sure whether all such implementations allow eval to see such bindings. Firefox 2 and 3 do for catch variables. I'll test Opera and report back. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
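To make the eval-visibility point concrete, here is a minimal sketch in modern JavaScript (let/const postdate this thread, so the binding forms are stand-ins for the proposed lexical bindings): direct eval sees the caller's lexical scope, while indirect eval sees only the global scope.

```javascript
function demo() {
  const hidden = 7;            // a lexical binding in the caller's scope
  // Direct eval runs in the caller's scope, so it can read `hidden` --
  // which is why implementations must preserve binding info for eval.
  return eval('hidden + 1');
}

// Indirect eval runs in the global scope and cannot see `hidden`.
var indirect = (0, eval)('typeof hidden');

console.log(demo());     // 8
console.log(indirect);   // "undefined"
```

This is exactly the tax Brendan describes: because the compiler cannot prove which functions call eval indirectly, the binding information cannot simply be discarded after compilation.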
Re: Newly revised Section 10 for ES3.1.
On Jul 10, 2008, at 6:01 PM, Waldemar Horwat wrote: The key criterion here is whether you can come up with a language that makes sense. None of the existing behaviors make sense because they would make 'function' hoist differently from 'const' hoist differently from declaring other kinds of things in ES4, etc., with the only way of fixing it being introducing yet more ways of declaring things. The net result would be gratuitously hostile to everyone in the long term. Agreed. On the other hand, Maciej is probably right [1] that a non-trivial amount of web content depends on intersection semantics today, loading scripts under the default version (no type or else one of the version-free javascript types). This is why I think opt-in versioning is required to change the meaning of a function definition in a block. /be [1] http://bugs.webkit.org/show_bug.cgi?id=13790 is about a script at starcraft2.com that once looked like this: if (ie||ns6) //var tipobj = document.getElementById(dhtmltooltip); function ietruebody(){ Someone carelessly commented out the consequent, making the if (ie||ns6) govern the definition of function ietruebody without bracing that definition. In no proposed ES3.1 or ES4 would this be legal. Anyway, the error has since been fixed, and last I looked, the page did this: //if (ie||ns6) //var tipobj = document.getElementById(dhtmltooltip); function ietruebody(){ See view-source:http://www.starcraft2.com/js/tooltip.js. I'm interested in learning of more sites that seem to depend on intersection semantics. Please post URLs to the lists. ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
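A hedged sketch of the portable way to express what the starcraft2.com code presumably intended: assign a function expression under the condition, instead of relying on browser-divergent intersection semantics for a function declaration in an if body. The `ie`/`ns6` feature-test results and the fallback branch here are hypothetical stand-ins.

```javascript
var ie = false, ns6 = false;   // hypothetical feature-test results
var ietruebody;

if (ie || ns6) {
  ietruebody = function () { return document.body; };
} else {
  ietruebody = function () { return null; };   // fallback for this sketch
}

console.log(typeof ietruebody);  // "function"
```

Function expressions assigned to a hoisted var behave identically in every ES3 implementation, so no opt-in versioning is needed for this form.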
Re: Update on ES3.1 block scoped function declarations
On Jul 10, 2008, at 6:08 PM, Waldemar Horwat wrote: Brendan, You're beating a dead horse here. Sorry, no -- the question of whether and how much of ES4 is pulled into ES3.1, requiring costly and untestable work within the framework of the ES3 spec, is a live one, and it should be for anyone who cares about either version being done, and about 3.1 being done before 4. If this call to eval is allowed, the only reasonable answer is 21. All that means is that you must be able to recreate the bindings if the function uses eval. Unless you're proposing to take block-scoped declarations out of ES4, what's the harm with ES3.1 having a compatible subset of them? The harm is of two kinds: 1. That ES3.1 will be pushed through Ecma standardization this calendar year, then to ISO fast track, with zero implementations. 2. That ES3.1 spec work, using and extending the clumsy formalisms of ES1-3, will take a lot of time from everyone involved, with opportunity costs on other work including ES4, actual implementation improvements, better subsets like Caja, etc. We could turn ES3.1 into ES4 but I think you see the problem there. I'm suggesting it is a problem that 3.1 is growing to formalize lexical scope. It contradicts the stated aspiration of at least some (Mark was among them) at the January face to face that ES3.1 avoid mission creep, reflect ES3 + reality, and be done this calendar year. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 10, 2008, at 4:02 PM, Richard Cornford wrote: A new specification probably should pin down which of these is correct. It should either reinforce the implication that the linkage is intended to be long term or state the lifespan of the linkage (possibly saying that it cannot be expected to hold past the lifespan of the execution context for which the arguments object was created (thus categorizing longer term linkage as a possible non-standard extension)). Hi Richard, thanks for bringing this to light. It's one of those implementation secrets I try to forget, and actually manage to forget sometimes. Indeed SpiderMonkey does not alias arguments[i] and the activation (variable) object property for the corresponding formal parameter after the underlying stack frame has been popped. This is old as the hills, and even goes back to the original Netscape 2 (Mocha) runtime. I would favour the latter I do too, and not simply because that's what Mozilla's implementation does (I don't know what Rhino does -- anyone?). as the inconsistency in existing implementations makes it unlikely that anyone is using this linkage outside of the functions whose calls create the arguments objects, and there is nothing that could be done with this linkage that could not be better (less obscurely) achieved using closures. ES4 is actually deprecating the arguments object. To do this with any hope of being effective, we provide sweeter syntax without the aliasing cruft: optional (default value given in the function declaration) and rest parameters. Carrot, not stick. I agree that specs should address this divergence of implementations from the ES3 (ES1, IIRC) language. Would you be willing to file a bug in the trac at http://bugs.ecmascript.org/ ? It would save me from copying and pasting your fine message, and you would get email notification of updates to the ticket. Thanks, /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 10, 2008, at 4:02 PM, Richard Cornford wrote: Something like:- fucntion getSomething(arg){ if(caseOneTest){ getSomething = function(x){ //actions appropriate for case one }; }else if(caseTwoTest){ getSomething = function(x){ //actions appropriate for case two }; }else{ getSomething = function(x){ //actions appropriate for other cases }; } return getSomething(arg); } It is the ability to do this sort of thing that helps make javascript so well suited to browser scripting. Oliver Steele blogged about this kind of memoization a couple of years ago: http://osteele.com/archives/2006/04/javascript-memoization although his examples did not show top-level function being replaced. But it's a good point: strict mode wants to break useful (and used) patterns that change the value of a property created by a defining form. From Allen's list: ·Illegal for a function to have duplicately named formal parameters ·Illegal for a function to contain a top level function declaration with a function name that is the same as a formal parameter. ·Illegal to have multiple top level function declarations for the same function name ·Illegal to have a function declaration with the same name as var declaration. ·Illegal for a function to contain a var declaration with the same name as a formal parameter. ·Illegal to assign to a top-level function name. I could see banning duplicate formal parameter names (I still have no memory of why Shon from Microsoft wanted these standardized in ES1 -- possibly just because JScript already allowed them). Shadowing a formal parameter with a nested function name also seems likely to be a mistake. Multiple top-level function definitions having the same name? That must be allowed if the definitions are in separate scripts. In the same script, it could be a mistake, or a fast patch of some kind. Without #if 0 or nested comments (ES1-3 do not require them, I don't know of any implementations that do them either) it's hard to hide bulk code. 
Anyway, this seems less likely to be fruitful as a good-taste strict-mode check, and somewhat likely to bite back. Within the same program, function vs. var name conflict is probably a mistake to catch. I don't see it in web JS, but I'm not sure how uncommon it is. Anyone have insights? Function containing a var x and taking formal parameter x? That's allowed and might be tolerated if the var has no initialiser, but if the var has an initialiser then it is very likely to be a mistake. Even when hacking and debugging it's rare to nullify an actual argument by declaring a var and assigning to it in the var declaration -- one would just assign without adding a var at the front. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
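A runnable sketch of the self-replacing ("lazy definition") pattern Richard describes -- the kind of code the proposed "illegal to assign to a top-level function name" rule would outlaw. The feature test here is a made-up stand-in for his `caseOneTest`:

```javascript
function getSomething(arg) {
  // First call: pick the appropriate implementation once...
  if (typeof JSON === 'object') {            // hypothetical caseOneTest
    getSomething = function (x) { return 'case one: ' + x; };
  } else {
    getSomething = function (x) { return 'other cases: ' + x; };
  }
  // ...then delegate; later calls skip the tests entirely.
  return getSomething(arg);
}

console.log(getSomething(1));  // "case one: 1"
console.log(getSomething(2));  // "case one: 2" -- replacement already happened
```

The first call overwrites the binding created by the function declaration, which is exactly the assignment-to-a-defining-form that a strict mode could flag as an error.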
Re: Newly revised Section 10 for ES3.1.
On Jul 9, 2008, at 6:54 PM, Mark S. Miller wrote: On Wed, Jul 9, 2008 at 6:47 PM, Mike Shaver [EMAIL PROTECTED] wrote: 2008/7/9 Maciej Stachowiak [EMAIL PROTECTED]: Although the standard does not allow block-level function declarations I'd understood that, while ES3 didn't specify such declarations, it was not a violation of the standard to have them. I agree with your assessment of the compatibility impact, certainly. I believe the prohibition is in the ES3 syntax definition. ES3 chapter 16: An implementation shall report all errors as specified, except for the following: • An implementation may extend program and regular expression syntax. To permit this, all operations (such as calling eval, using a regular expression literal, or using the Function or RegExp constructor) that are allowed to throw SyntaxError are permitted to exhibit implementation-defined behaviour instead of throwing SyntaxError when they encounter an implementation-defined extension to the program or regular expression syntax. As Maciej notes, all four browsers extend syntax to support functions in sub-statement contexts. There's no prohibition given the chapter 16 language allowing such extensions. Is ES3.1 specifying reality (intersection semantics), or something not in the intersection or union of existing browsers' syntax and semantics, that is restrictive and therefore not compatible without a similar allowance for extensions? Chapter 16 is important to carry forward in any 3.1 or 4 successor edition. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Newly revised Section 10 for ES3.1.
On Jul 9, 2008, at 6:58 PM, Mark S. Miller wrote: Hi Maciej, IIUC, these examples work the same in Allen's proposal as they do in ES4. If this does break the web, doesn't ES4 have exactly the same problem? The idea for ES4 was to change the meaning of function sub-statements only under opt-in versioning. Implementations would do whatever they do today without an explicit type=application/ecmascript;version=4 or equivalent application/javascript;version=2 on the script tag. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
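The opt-in mechanism described above, sketched as markup. The version values are the ones named in this thread; this is proposal-era syntax that no shipping browser ended up honoring:

```html
<!-- Default version: today's intersection semantics, unchanged -->
<script src="legacy.js"></script>

<!-- Explicit opt-in to proposed ES4 semantics via the MIME type -->
<script type="application/ecmascript;version=4" src="new.js"></script>
```

Under this scheme, scripts loaded without the version parameter keep whatever browser-specific behavior they depend on today.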
Re: Newly revised Section 10 for ES3.1.
On Jul 9, 2008, at 10:05 PM, Allen Wirfs-Brock wrote: I’m also confused about this. My understanding was, other than perhaps some of the details I was specifically looking for feedback on, that what I specified was generally what ES4 was planning on doing. See my reply to Mark citing http://wiki.ecmascript.org/doku.php?id=meetings:minutes_mar_27_2008#technical_notes [W]hat ES4 was planning on doing needs to be qualified with under the default version of JS, or under opt-in versioning. Again since default versions get differing function statement semantics depending on browser, unless all browsers can afford to break existing browser-specific content, the change to unify on proposed ES4 semantics may need to be under opt-in version selection only. If some browsers implemented in a way that happens to work for most browser-specific content (it's hard to be sure), then perhaps those implementations could just make the change. But for cross-browser portability, web scripts would want to select the explicit version that guarantees the new semantics (and syntax, for that matter) in all browsers that support that version. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: use subset introductory material
On Jul 1, 2008, at 10:55 AM, Mark S. Miller wrote: If the ES4 and ES3.1 folks can agree on their respective meanings of the subset named strict, such that * ES3.1 ⊂ ES4 * ES3.1 strict ⊂ ES3.1 * ES4 strict ⊂ ES4 * ES3.1 strict ⊂ ES4 strict then we'd only need the two subset names needed for these distinctions. Even then, making the syntax change from use strict; to use subset strict; still seems like a good idea to me. I'm not in favor of uniformity over usability, especially if for whatever reason we do not add subsets to future standards. 'use strict' is short and sweet, familiar from Perl and other languages, and IMHO much better than the pedantic (sorry, but it fits :-P) 'use subset strict'. If we do standardize more subsets, we can cross the longer bridge then. This is a small point, but it highlights tension between usability (read as well as write, I claim) and completeness (for want of a better word). Anyway the ES4 pragma syntax prefers one-word terms, although there have been ad-hoc two-word forms. At the last F2F, Lars and Brendan raised the possibility that they might like to define stricter modes in successors of ES4. The notion of named subsets would seem to accommodate that. Brendan Lars, in the absence of named subsets, how would you introduce such stricter modes in that future? What would you propose? Mainly what I recall we talked about at that meeting was simply the idea that (ESn+1 strict ⊂ ESn strict) -- that one could make strict mode stricter over time. Adding 'use stricter' was not a serious proposal, IIRC. I recall also we did not want strict mode to be part of the MIME- typed version parameter, or the version to be selectable by a pragma. So the only issue was upgrading from ESn to ESn+1 and facing a stricter strict mode. Standard mode would be highly backward compatible, with unpredictable and slow obsolescence leading to some amount of deadwood shedding over time -- maybe. 
Of the above desired subset relationships, the problematic one is the last bullet above. To wit: * ES3.1 strict ⊂ ES4 strict Agreed. The outstanding disagreements are variable arity strict functions and with. Hopefully we can resolve these in Oslo. In the meantime, to avoid stepping on each other's toes, we've renamed ES3.1 strict to cautious. That's all that's going on. I'm surprised there's such a fuss. Probably you are right, and there's too much fuss. On the other hand, we now have a straw cautious mode to argue about. It would be better to get agreement on strict before expending the effort and seeming to up the subset violation ante (not entirely a matter of perception; every documented and jargonized change tends to take on a life of its own, and fight for survival). That's my point -- profiled standards may pretend that implementations can pick and choose among profiles to support, but on the web, winner-take-all network effects force every implementation to agree on the full standard. Aren't you, Crock, and the rest of us vigorously agreeing? I am pretty sure we agree on a lot -- more than you might think ;-). But let's be clear on what I am arguing against: multiple subsets taking precious standards-making bandwidth, adding to the complexity (even temporarily) of the work within ES3.1 and ES4, and of course between them. Yes, subsets are useful -- so useful there are a great many out there. No, we should not try to standardize them all, or invent overlong syntax with which to select them, if we are only talking about strict mode. Users choosing subsets from full implementations do not simplify the implementation space, Agreed. That's not the point. But that's my point, one of them: the specs are in large part for the benefit of implementors, to make interoperation a more likely outcome. 
Since web browser implementations have to handle the whole language (and then some: quirks and deprecated features may remain for a long while), the spec should not overreach for subsets at the expense of clarity, compatibility, and completeness. Complexity due to unnecessary subset additions, or mooting, or future- proofing, is therefore not welcome, all else equal -- IMHO. and too many subsets make a mess of the spec and its implementation. Agreed. Let's continue to try to reconcile strict and cautious. And let's avoid adding any more choices to the menu in the ES3.1/ES4 timeframe. I agree completely -- good to read this. What users -- as opposed to es3.x-discuss participants who may have different and seemingly contradictory goals -- have asked for these subsets? To my ears, I've been quite amazed at how convergent our thinking has been on these matters. What disagreements and contradictions do you see among the es3.x-discuss participants? I was quoting Allen there. He wrote (two messages up on this thread): This is an interesting exercise because I'm trying to find a
Re: use subset introductory material
On Jun 30, 2008, at 9:08 PM, Mark S. Miller wrote: [+es4-discuss] On Mon, Jun 30, 2008 at 7:37 PM, Maciej Stachowiak [EMAIL PROTECTED] wrote: JSON be handled with a generic subset mechanism? I expect not, since a pragma inside the JSON source in the form of an initial quoted string would be (a) invalid JSON and (b) ineffective as a way to validate incoming JSON, since malicious alleged JSON would not use such a pragma. Whether or not it's a good idea, given use subset JSON as a recognized/enforced subset directive, one could trivially implement JSON.parse(str) in terms of eval('use subset JSON; (' + str + ')') That's nothing like how JSON parsing is implemented in Mozilla. If the idea is to add a mode to the ES parser, then I'm worried about missed exclusion tests and false economies in a hacked up JS parser trying to serve two (or more) masters. JSON is defined by http://www.ietf.org/rfc/rfc4627.txt -- not by ES1-3 or any future spec. It's a subset of Python and other languages -- it's more accurately its own language. It's better off with its own parser implementation, unit tests, etc. -- browsers want this for application/json handling anyway (no pragma or restrictive API mode required). Given that JSON.parse is a proposed extension in ES3.1 (and was slated to be in ES4, after we rejected the old json.org API), why does the above trivial (except for possibly non-trivial risks in subsetting a real JS parser) re-implementation via eval matter? I do think JSON should be supported natively, but it does not seem at all analogous to strict mode / cautious subset. I think I agree. In any case, I agree that JSON is not by itself a compelling case for use subset X. My point is only that JSON is a huge counter-example to Brendan's statement that profiled (subsetted) standards are meaningless to harmful on the web. JSON is not huge, and that's one point in favor of keeping it separate from ES futures. It is not defined as a subset in any ES spec. 
It's also not an intentional, new-in-the-last-month, paper-spec-only subset of JavaScript -- it is a subset after the fact. As Doug has written, he discovered it. Inventing new, multiple, as-yet-unused subsets for ES3.1 -- and not implementing any of them in any experimental-to-beta released browser, especially not in IE8 -- is a bad idea. It will cause general and widespread opposition to any attempt to standardize such an ES3.1 this year. At least OOXML and E4X (to name two Ecma standards of mixed repute) each had one implementation -- however buggy or deficient some have argued those specs and their single implementations were. ES3.1 has none, not even a buggy work-in-progress reference implementation. My point is to recall the original ES3 + reality anti-mission-creep goal for ES3.1, which you among others espoused. Right now it's on a road to completion in the same time frame as a cut-down ES4, which will make for a busy 2009 -- assuming its supporters actually demonstrate it in several testable, interoperating implementations. JSON was defined as an enforced subset of JavaScript, and it has been extraordinarily helpful to the web. Except where people used JS parsers naively. Which is one variation on a theme that you are still playing in advocating use subset JSON. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
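For contrast, the native-API direction the thread converges on -- a real JSON parser with its own grammar, rather than an eval subset -- can be sketched with the later-standardized JSON object:

```javascript
// A real parser accepts only the RFC 4627 grammar...
var data = JSON.parse('{"answer": 42}');
console.log(data.answer);  // 42

// ...and rejects input that a naive eval-based "parser" would happily
// execute as JavaScript:
var rejected = false;
try {
  JSON.parse('({valueOf: function () { return 0; }})');
} catch (e) {
  rejected = true;  // SyntaxError: this is JS, not JSON
}
console.log(rejected);  // true
```

This is the "own parser implementation, unit tests, etc." point in runnable form: the JSON grammar is enforced by construction, with no pragma or parser mode needed.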
Re: Side-effects of some Array methods ...
On Jun 27, 2008, at 3:45 PM, Garrett Smith wrote: to list - I am not the one replying to sender only -- all of my replies to you have cc'ed the list. You have replied twice to me only, then resent as reply-alls. What mailer are you using? Again, we don't know what failing faster (you mean throwing a new error as an exception) would break. The shell session above shows how fail- soft could leave scripts executing and even behaving well. Throwing an exception that's not caught would rain on such scripts' parades. I hardly call that a parade. It looks like a toy program aimed at My shell example is not the parades plural referenced above, merely a demo of fail-soft behavior. The unknown web scripts that might depend on that behavior could be doing useful work based on the current semantics (having parades). How do you address these concerns? Is it better to fail fast or fail later? If later, and in the case of attempting to set a ReadOnly property, then should the failure be silent? (String example). What about the NodeList example? This is not a green-field design exercise. My point is that browsers do what ES1-3 said (depending on the Array method; generics were there all along, but some were added IIRC after ES1). Code tends to depend on detailed semantics (not always, but more often than you'd think). Why rock the boat? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
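The generics under discussion, sketched on an array-like object. The String case is where fail-soft vs. fail-fast bites, since a string's length and index properties are read-only:

```javascript
// ES3's intentionally generic Array methods work on any object with a
// numeric length property:
var arrayLike = { length: 0 };
Array.prototype.push.call(arrayLike, 'a', 'b');

console.log(arrayLike.length);  // 2
console.log(arrayLike[0]);      // "a"

// Applied to a string, the same call cannot update the read-only length;
// whether that fails silently (fail-soft) or throws (fail-fast) is the
// behavior being debated above.
```

Scripts that rely on the fail-soft outcome are the "unknown web scripts" Brendan is reluctant to break.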
Re: Side-effects of some Array methods ...
On Jun 27, 2008, at 10:18 PM, Garrett Smith wrote: What is a green-field design exercise? Sorry for the confusing phrase -- I should have written it's not a clean-slate design opportunity. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES 3.1 implementations?
On Jun 25, 2008, at 11:46 PM, Allen Wirfs-Brock wrote: Yes, and of course SpiderMonkey. For no particularly good reason I simply have the other two positioned in my mind as perhaps being more (technically) approachable for somebody who wanted to plunge into such an effort. I may well be misguided in that perception. (Was that a cut? :-P) It's true, SpiderMonkey won't win any beauty pageants, but it gets the job done and people do hack on it. No mature and/or optimized engine is all that easy to hack on, and C or C++ is the wrong language for implementing interpreters and compilers, if the goal is clarity and extensibility. Java is better, SML is much better -- to pick non-random examples. Which reminds me, any thoughts on the RI subset? /be -Original Message- From: Brendan Eich [mailto:[EMAIL PROTECTED] Sent: Wednesday, June 25, 2008 10:16 PM To: Allen Wirfs-Brock Cc: Robert Sayre; es4-discuss; [EMAIL PROTECTED] Subject: Re: ES 3.1 implementations? On Jun 25, 2008, at 8:53 PM, Allen Wirfs-Brock wrote: It would be great if somebody wanted to work on a proof of concept ES 3.1 implementation in an open code base such as WebKit or Rhino. Don't forget SpiderMonkey. If anybody is interested in volunteering send a note to es3.x-[EMAIL PROTECTED] There's the ES4 RI as well -- did you have anyone already lined up to work on the 3.1 subset of it? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Semantics of indexed string access
On Jun 24, 2008, at 9:34 PM, Allen Wirfs-Brock wrote: Note that steps 1,2, and 5 amount to ToObject(GetValue(“abc”)). GetValue(“abc”) yields “abc” so this is really just ToObject (“abc”). 9.9 says ToObject when applied to a primitive string: “Create a new String object whose [[value]] property is set to the value of the string.”. That string object becomes the base object of the resulting Reference. So the literal gets converted into an object that has a [[Get]] Oh, I agree -- that's why I wrote No in reply to your Am I wrong? :-). /be From: Brendan Eich [mailto:[EMAIL PROTECTED] Sent: Tuesday, June 24, 2008 7:34 PM To: Allen Wirfs-Brock; Maciej Stachowiak Cc: [EMAIL PROTECTED] x-discuss; Pratap Lakshman (VJ#SDK); es4-discuss@mozilla.org es4-discuss Subject: Re: Semantics of indexed string access On Jun 24, 2008, at 6:39 PM, Allen Wirfs-Brock wrote: Actually, the intent was to support “indexed” access to both string values and string wrapper objects. I just didn’t make it clear in the example. The case analysis was intended to apply to both. My reading of section 11.2.1 is that a string value is to be transformed into an object before any actual property access semantics are applied. Am I wrong? No, the primitive string type (called String, confusingly, in ES1-3 when it uses type names) is not an object. It has no internal methods such as [[Get]]. Specifying the indexed unit-string access semantic based on the wrapper String (spelled as in the language) object seems ok. I noted a Result(4) that should have been Result(6) in step 7, via private email to Allen (this type of error is going to happen a lot; count on it). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
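The semantics being specified, in runnable form (indexed string access was still an extension at the time; the ToObject conversion is why the wrapper object's [[Get]] applies to a primitive string):

```javascript
var s = "abc";

// Indexed access converts the primitive via ToObject, then does a [[Get]]:
console.log(s[1]);               // "b"
console.log(new String(s)[1]);   // "b" -- the wrapper object the spec text describes
console.log(s.charAt(1));        // "b" -- the ES3-portable equivalent
```

All three forms agree because the indexed access is defined in terms of the String wrapper, not the primitive itself.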
Re: ES 3.1 implementations?
On Jun 25, 2008, at 8:53 PM, Allen Wirfs-Brock wrote: It would be great if somebody wanted to work on a proof of concept ES 3.1 implementation in an open code base such as WebKit or Rhino. Don't forget SpiderMonkey. If anybody is interested in volunteering send a note to es3.x-[EMAIL PROTECTED] There's the ES4 RI as well -- did you have anyone already lined up to work on the 3.1 subset of it? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Why are global variables *non-deletable* properties of the global object?
On Jun 19, 2008, at 8:40 PM, Mark S. Miller wrote: Try putting this in a Firefox address toolbar: javascript:alert('foo' in window); var foo = 42; alert(delete foo); alert(foo) You will get true (because the var binding -- not initialization -- is hoisted to the top of the program, right after the javascript:), false (because var makes a DontDelete binding outside of eval), and 42. I did. Thanks for suggesting that experiment. But given the above behavior, I don't understand javascript:alert('foo' in window); var foo = 42; window.foo = 43; alert(delete window.foo); alert(window.foo) I get true, true, undefined. Works correctly (true, false, 43) in Firefox 3 (try it, you'll like it!). That looks like a Firefox 2 bug, probably to do with inner and outer windows. Don't ask (security requirement of the DOM level 0, implemented similarly in IE and Firefox, IIRC coming soon to WebKit, very probably in Opera; but thanks for pointing this out!). Using var in closures for object state has higher integrity than using plain old properties. This makes closures better in addition to the name-hiding (private variable) benefits. I don't understand this paragraph, and it seems crucial. Could you expand? Thanks. var makes a DontDelete property, unlike assignment expressions or object initialisers (which desugar to assignments). This is better for implementations (name-to-slot optimizations) and for integrity (although without ReadOnly, the benefit is only knowing that no one can remove a variable from its scope object -- the value could still change). It should even be possible to eval(s) in a scope with var bindings and be sure those vars were not removed or replaced by s. Unfortunately a bug in ES3 that I mentioned earlier this week allows s to replace a var or function in its caller's scope with a function that s defines. This is not supported consistently or at all in popular implementations. It's fixed in ES4 (see http://bugs.ecmascript.org/ticket/235).
I'll probably be happy with that. But I'd like to understand the remaining anomaly above first. If it's considered correct, then I don't see how any of these benefits follow. It's just a bug in Firefox 2. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: const formal parameters?
On Jun 19, 2008, at 8:17 PM, Mark S. Miller wrote: So long as it parses, that would allow code to version sniff and adapt, with conditionals, to different execution environments. That's why we're using the parses on 3/4 browsers criterion. (Thanks Maciej, I think) Adapt means do without on Opera, in this case. But IE is out already, so I guess the user-agent sniffing should put Opera in the same no-const branch that IE gets. This again makes me ask: what's the plan for getting alpha implementations of ES3.1 interoperating before the standard is pushed through Ecma to ISO? I don't know if the ES3.1 WG has discussed how to get to ISO. I've only participated in discussions re an Ecma std, for which we're planning to leverage the ES4 RI. What would you suggest for ISO? Ecma specs go to ISO via the JTC1 fast-track process, mostly polishing and picking nits. The time to get implementor and user feedback is before Ecma stamps the standard as done. This was obviously the case for ES1, and ES2 followed implementations adopting features such as do-while and switch. ES3 had some innovations beyond what implementations had already supported -- some of these did not work so well while others were ignored by vendors of already-shipped code. I certainly appreciate the sentiment, and I agree on this case. It just seems weird to be able to declare local variables const but not be able to declare parameter variables const. Oh well, it's not the weirdest thing that we've decided to live with. const parameters are supported in ES4, FWIW. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Why are global variables *non-deletable* properties of the global object?
On Jun 19, 2008, at 11:20 PM, Brendan Eich wrote: It's just a bug in Firefox 2. The bug was https://bugzilla.mozilla.org/show_bug.cgi?id=369259 in case anyone is interested. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: How can eval code not have a calling context?
On Jun 19, 2008, at 4:49 PM, Mark S. Miller wrote: ES3 says: 10.2.2 Eval Code When control enters an execution context for eval code, the previous active execution context, referred to as the calling context, is used to determine the scope chain, the variable object, and the this value. If there is no calling context, then initialising the scope chain, variable instantiation, and determination of the this value are performed just as for global code. I am baffled by "If there is no calling context". How could the possibility arise? How would eval get called if no one calls it? A call from native code, the host program. Some browsers support indirect eval, allowing this: setTimeout(eval, 0, "alert('hi mom')") The window used is the one in which setTimeout was found along the scope chain, so myFrame.setTimeout(eval, 0, "alert(x)") should show myFrame.x, not the calling frame or window's x. This is not something patched Firefox major versions support. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
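[Editorial note] The "no calling context" path is observable without setTimeout. A sketch, assuming Node.js with `globalThis.x` standing in for a browser global `var x`: in ES5+ engines, calling eval through any name other than `eval` ("indirect eval") evaluates in global scope rather than the caller's scope:

```javascript
globalThis.x = "global";   // stand-in for a global var in a browser

function f() {
  var x = "local";
  var geval = eval;        // indirect reference to eval
  // Direct eval sees the caller's scope; indirect eval sees only
  // the global scope.
  return [eval("x"), geval("x")];
}

console.log(f());          // ["local", "global"]
```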
Re: Why are global variables *non-deletable* properties of the global object?
On Jun 19, 2008, at 6:33 PM, Brendan Eich wrote: If you are using the squarefree.com then you're not testing an ES-anything- conformant global object implementation! I meant to hyperlink shell after the squarefree.com in this sentence to http://www.squarefree.com/shell/shell.html Indeed it uses eval, which makes var bindings deletable per ES1-3. (I don't recall the rationale for ES1 making var bindings created by eval deletable -- it was not something the original Netscape implementation did.) /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: [Caja] Open Templates in ES3.1 strict (was: Refactoring out unused material)
On Jun 16, 2008, at 8:28 PM, Mike Samuel wrote: From my reading of ES4, the eval operator works that way, and the eval function works like (new Function(textToEval))(). Is that correct? The latter (the eval function, called indirectly, e.g.) is detectably different: var bindings at top level in the textToEval do not persist in the global object, but make local variables in the new Function. Is there no way to supply an environment to resolve free variables in textToEval? The usual way to do this is to wrap the eval call in a function whose arguments name the free variables, and return the result. The usual shared-prototype and global-mutation hazard warnings apply, but any var bindings in the program will be confined. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
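[Editorial note] The wrapping technique Brendan describes can be sketched as follows. `evalWith` is a hypothetical helper name, not an API from the thread; each key of the environment object becomes a function parameter, so free references to those names in the evaluated text resolve to the supplied values, while any var bindings in that text stay confined to the new function:

```javascript
// Hypothetical helper: evaluate `text` with the free variables named
// by the keys of `env` bound to the corresponding values.
function evalWith(text, env) {
  var names = Object.keys(env);
  var values = names.map(function (n) { return env[n]; });
  // Function(name1, ..., body) compiles in global scope, so only the
  // parameters and true globals are visible to `text`.  Wrapping the
  // body in `return (...)` restricts this sketch to expressions.
  var fn = Function.apply(null, names.concat("return (" + text + ");"));
  return fn.apply(null, values);
}

console.log(evalWith("x + y", { x: 2, y: 40 })); // 42
```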
Re: Subset relation (was: RE: ES3.1 Draft: 11 June 2008 version available)
On Jun 16, 2008, at 10:48 PM, Mark S. Miller wrote: On Mon, Jun 16, 2008 at 10:19 PM, Brendan Eich [EMAIL PROTECTED] wrote: I am not going to repeat what I wrote at an earlier point in this thread (13 June at 10:24) -- you didn't reply to what I wrote then. Did that message not reach you? Are you referring to https://mail.mozilla.org/pipermail/es3.x-discuss/2008-June/ 000108.html? Yes. It was the closest match I could find. I responded to this message. What remaining point in this message do you feel still needs to be addressed? I'm not being difficult. I just reread this message and couldn't spot it. You replied only to the bit about reformed with, where I wrote: Reformed with was an attempt to restore lexical scope by exact type annotation. That's what people voted down, not the ES1-3 with statement. but that was not the main point (since reformed with was withdrawn), it was just setting the record straight (reformed with, not strict with -- and the fact that it was voted down with red on the spreadsheet does not argue against plain old with being allowed in strict mode). The main point was that (a) 'with' is widespread and popular; therefore (b) strict mode that bans 'with' could fail to be used. The question isn't whether an existing statement is good enough, it's whether a strict mode that bans it is usable enough. A strict mode which doesn't ban is clearly not. Why clearly? Usability depends on users and ergonomics. Something about 'with' is usable enough that users persist in writing programs using it. These users say (when they speak up coherently at all) that 'with' makes the language more convenient. Well-known JS hackers say this, to me even, and get annoyed by nagging such as was found in older Firefoxes (console warnings about deprecated with). 
If you get rid of with, then the static analysis rule in ES4 becomes very simple: all free variables in a program (script, compilation unit, whatever) are global references, to be looked up as properties of that program's global object, whether or not those properties are present. That allows lexical-reference typos through to run-time, where they become errors -- that is not something the old, withdrawn strict mode in ES4 allowed. It's true that 'with' prevents analysis from deciding where a free name will be found, but with the old strict mode, that's actually a useful escape hatch. Otherwise (no 'with') the name would have to resolve to a compiler-visible global definition, or you would have to reach for eval. This old notion of strict mode was to be an optional feature, at the implementation's discretion. We dropped it in favor of 'use strict' a la Perl -- use good taste and sanity. And is with either in good taste or sane? I avoid 'with', but I try not to confuse my taste with others' tastes (plural), or with necessity. Why is it necessary for strict mode to ban 'with'? The global object makes the contents of the global scope unknown. But it does not ambiguate which variable name occurences are to be interpreted as references into this global scope. Without with, ES4 strict scopes would be statically analyzable. I'm surprised you're willing to give that up. As I wrote previously, all browser implementations have to support 'with' and deoptimize code in its body. There's no savings to be had in rejecting it from strict mode, and doing so takes a tiny bit of extra code. On the other hand, such a strict mode may be less used than 'with', because of 'with' perduring. Is 'with' any worse than eval, for the purposes of the analysis you have in mind, if you already allow the operator form of eval in strict mode? so is kicking 'with' out of strict mode worth it, especially if it impairs adoption of use strict? Yes. Otherwise I don't see the point of use strict. 
Can you define the point of 'use strict' other than by appealing to taste? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: frozen v. mutable
[Cross-posting, interesting topics here.] On Jun 14, 2008, at 12:07 PM, Mark S. Miller wrote: On Sat, Jun 14, 2008 at 11:50 AM, Allen Wirfs-Brock [EMAIL PROTECTED] wrote: While ugly, const meets my criteria. Aesthetics aside, const is shallow, analogous to a const reference type in C++, not a reference to a const type -- so you can set properties on an object denoted by a const reference, you just can't change the reference. This is disturbing. I've had a hard time thinking of a better term than const, though. BTW, earlier in this (es3.x-discuss only) thread you mentioned http://www.erights.org/elang/kernel/auditors/, which I enjoyed and wanted to pass on to es4-discuss. The taxonomy you cited there (frozen, deep frozen, etc.) is helpful. It doesn't include anything so shallow as const, though, as E uses def to create such a constant binding. I wouldn't propose to add def, it doesn't fit the full-word keyword style of JS. But it's arguably less overloaded than const. (Another interesting point in that auditors page: the makeBrand example, which should be implementable in ES4 without problems -- but that is for another message.) What did we decide to do with the const declaration? Would this usage also be enough additional motivation to include it? On variables, we did decide to include const, as it meets the "parses on 3/4 browsers" criterion and it helps integrity. Oh, I didn't realize you had decided to include const in ES3.1 on this basis. IIRC, Opera treats const like var, which does at least let it parse. This could cause trouble -- we need to find out by testing (more below). IIUC, our meaning for it is exactly the meaning in ES4: the variable is letrec-scoped to its containing block (as with ES4 let) but the variable is also unassignable. The variable declaration must provide an initial value. An assignment to a const variable causes the program to be statically rejected. The variable is initialized when its initializing declaration is executed.
(i.e., unlike functions, a const variable's initialization is not hoisted to the beginning of its block.) Any attempt to read the variable before it is initialized fails. In strict mode this failure is a throw. In non-strict mode, the failure is the reading of undefined. Did we agree to allow undefined to be read before the declaration was evaluated? I thought at least Waldemar wanted const use before def always to be an error, in standard as well as strict mode. Note that all binding forms must be parented by a top-level program, function body, or block. No if (condition) const FOO = 42; where FOO is bound in the scope of the block or body enclosing the if. Apart from these nit-picks, this is a nice write-up, and it describes const in ES4 as proposed, but does anyone actually implement it yet in a proto-ES3.1 implementation? I think we should want interoperating implementations in some kind of alpha if not beta release, before standardization of ES3.1 as well as ES4. At this point we will get such implementations before standardizing ES4. The bug tracking ES4 const in SpiderMonkey is https://bugzilla.mozilla.org/show_bug.cgi?id=229756 I'll try to get this going for the 3.1 release of Firefox that's slated for late this year. Given all the weird ways in which JavaScript likes to think of variables and properties as similar, these uses of const are compatible. Activation objects do exist in the ES1-3 specs, although they can't escape as references through which you could mutate a local variable behind the back of active function code. This matches some of the early implementations. It's another example of how I over-minimized in a hurry, 13 years and one month ago. :-/ /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
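[Editorial note] The semantics sketched in this message (block scoping, mandatory initializer, no assignment, failure on read-before-initialization) landed in ES2015 essentially as described, with the "strict" flavor -- a throw on early read -- adopted unconditionally, and assignment rejected at run time with a TypeError rather than statically. A sketch runnable in any modern engine:

```javascript
function readBeforeInit() {
  try {
    return x;        // x is hoisted to the block but not yet initialized
  } catch (e) {
    return e.name;   // "ReferenceError": the temporal dead zone
  }
  const x = 1;       // initialization point (never reached here)
}

function assignToConst() {
  const y = 1;
  try {
    y = 2;           // assignment to a const binding
  } catch (e) {
    return e.name;   // "TypeError" at run time in modern engines
  }
}

console.log(readBeforeInit());  // "ReferenceError"
console.log(assignToConst());   // "TypeError"
```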
Re: ES3.1 Draft: 11 June 2008 version available
On Jun 16, 2008, at 8:39 AM, Mark S. Miller wrote: On Sat, Jun 14, 2008 at 11:43 PM, Garrett Smith [EMAIL PROTECTED] wrote: The spec doesn't mention that FunctionExpression with Identifier can affect scope chain. Example:- (function f() { var propertyIsEnumerable = 0; (function f() { alert(propertyIsEnumerable); //= native code })(); })(); Hi Garrett, thanks for alerting us to this bizarre behavior. I had no idea. Lars included it in Compatibility Between ES3 and Proposed ES41, section 1.6, although the example there shows only the catch variable case. IIRC, Pratap pointed the problem out well over a year ago, but I can't find the reference at the moment. I did reproduce this behavior and minor variants. However, other variants didn't work, indicating that I don't yet understand what's happening here. For example, on Firefox 2.0.0.14 in squarefree:

var g = function f() { return x; }
g()
ReferenceError on line 1: x is not defined
g.x = 3;
3
g()
ReferenceError on line 1: x is not defined

As I thought I understood this example, I would have expected the last call to g() to return 3. Can someone explain why it doesn't? Ad-hoc properties on the function object do not show up as variables referenced lexically; the bug is different. ES3 says (13, third production's semantics): The production FunctionExpression : function Identifier ( FormalParameterListopt ) { FunctionBody } is evaluated as follows:

1. Create a new object as if by the expression new Object().
2. Add Result(1) to the front of the scope chain.
3. Create a new Function object as specified in section 13.2 with parameters specified by FormalParameterListopt and body specified by FunctionBody. Pass in the scope chain of the running execution context as the Scope.
4. Create a property in the object Result(1). The property's name is Identifier, value is Result(3), and attributes are { DontDelete, ReadOnly }.
5. Remove Result(1) from the front of the scope chain.
6. Return Result(3).
Therefore one bad case for a named function expression goes like this:

js var g = function f(){return x}
js Object.prototype.x = 'wrong'
wrong
js var x = 'right'
js g()
wrong

Garrett showed an example using a standard property of Object.prototype, propertyIsEnumerable. Worse, the ES3 spec says as if by the expression new Object(). So by the book, one could do this (doesn't work in Firefox, Opera, perhaps others who wisely ignore the spec):

js var fake = {x:'fake'}
js Object = function(){return fake}
js var g = function f(){return x}
js g()
fake

but you would need to be careful to restore Object to fake.constructor or equivalent before going too far. http://wiki.ecmascript.org/doku.php?id=clarification:which_prototype talks about the inconsistencies in ES3 between original value of ... and as if by the expression. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Draft: 11 June 2008 version available
On Jun 16, 2008, at 10:49 AM, Brendan Eich wrote: Lars included it in Compatibility Between ES3 and Proposed ES41, Meant to write Compatibility Between ES3 and Proposed ES4[1] there. No ES4.1 or ES41 in sight! /be [1] http://www.ecmascript.org/es4/spec/incompatibilities.pdf ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Draft: 11 June 2008 version available
On Jun 14, 2008, at 11:43 PM, Garrett Smith wrote: The spec doesn't mention that FunctionExpression with Identifier can affect scope chain. Example:- (function f() { var propertyIsEnumerable = 0; (function f() { alert(propertyIsEnumerable); //= native code })(); })(); Both catch variables and named function expression bindings based on Object properties are bugs in ES3, fixed in ES4 proposals and specs for a while now, and fixed in some JS implementations (both cases are fixed in Opera, IIRC; catch variables are let-based in Firefox 2 and 3). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES3.1 Draft: 11 June 2008 version available
On Jun 13, 2008, at 10:11 AM, Mark S. Miller wrote: On Thu, Jun 12, 2008 at 9:45 AM, Sam Ruby [EMAIL PROTECTED] wrote: p69 12.10. Disallowing the with statement in strict mode breaks the ES3.1 - ES4 subset relationship (we've found no compelling reason to ban it). Regarding whether there's a compelling reason to ban with, what about the issue that with is an insanely confusing construct? The horse has left the barn. On the spreadsheet, how much red was accumulated on strict with? Reformed with was an attempt to restore lexical scope by exact type annotation. That's what people voted down, not the ES1-3 with statement. IIRC, it was a lot. Does anyone think with is a valuable construct? Why? Anyone care to post a defense of with? There's no point tilting at windmills. with is absolutely required for web compatibility, and it won't go away for a long, long time -- if ever. It's insanely popular. It's not only common in extant or legacy JS, new uses crop up all the time. You might hope to cause with to go away by forbidding it in a new, optional ES3.1 mode, but the chances of that seem at least as small as the chances that with popularity will simply make people avoid such a strict mode. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Member Ordering
Using Lars's reply to respond to Mark's proposal, since it was not cross-posted. Mark's proposal text, one level of cite-quotation removed, first: -Original Message- From: Mark S. Miller [mailto:[EMAIL PROTECTED] Sent: 4. juni 2008 17:59 To: Douglas Crockford Cc: Lars Hansen; [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: Member Ordering On Wed, Jun 4, 2008 at 8:42 AM, Douglas Crockford [EMAIL PROTECTED] wrote: That is indeed discouraging. Perhaps ES3 did the right thing by leaving the order unspecified. No. Please specify some deterministic order. I don't much care which. But leaving it unspecified complicates testing unnecessarily. Also, please see http://www.cs.berkeley.edu/~amettler/purecomp.pdf (draft paper) for further benefits that can only be realized after removing such unavoidable sources of hidden non-determinism. (Brendan, I've added you to the cc to make sure you see this paper. It makes another interesting bridge between information-flow and ocaps.) (Thanks!) I agree that we need to specify more than what ES1-3 specified. The de-facto standard of insertion order remains mandatory, as far as I can tell, for most objects and named properties. You can't make a web-compatible browser without it. But index-named properties, specifically in Array instances, could be treated differently -- maybe. We'll find out with very-large-scale Firefox 3 testing how often any web content cares that index order is not necessarily insertion order. A proposal: * For direct instances of Array: Use the Opera ordering for the own properties. Then the normal prototype-following insertion order for ascending the superclass chain. Note that Array.prototype is not an instance of Array, and so should enumerate by insertion order. 
Array.prototype is an Array instance, but with a [[Prototype]] linking to Object.prototype instead of to Array.prototype (as all other Array instances have) -- see ECMA-262 Edition 3 15.4.4, first two paragraphs: 15.4.4 Properties of the Array Prototype Object The value of the internal [[Prototype]] property of the Array prototype object is the Object prototype object (section 15.2.3.1). The Array prototype object is itself an array; its [[Class]] is Array, and it has a length property (whose initial value is +0) and the special internal [[Put]] method described in section 15.4.5.1. On the other hand, Mark's proposal happens to match Firefox 3 (SpiderMonkey JS1.8) because Array.prototype is not dense -- it has a bunch of named properties (to wit, the standard methods plus some extensions). But this is just a bug. I like that the proposal does not generalize to indexed properties in any object. That seems better than obligating all implementations to segregate named from indexed properties in their property-map implementations. Array, Vector, other array-likes can opt in. ES4 should specify which ones do. /be___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
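[Editorial note] Brendan's correction is easy to verify in a modern engine (Array.isArray is ES5, so it postdates this thread):

```javascript
// Array.prototype is itself an array ([[Class]] "Array") ...
console.log(Array.isArray(Array.prototype)); // true
// ... but its [[Prototype]] is Object.prototype, not Array.prototype:
console.log(Object.getPrototypeOf(Array.prototype) === Object.prototype); // true
// ... and it carries the length property that ES3 15.4.4 mentions:
console.log(Array.prototype.length); // 0
```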
Re: Member Ordering
On Jun 3, 2008, at 9:25 AM, Douglas Crockford wrote: ES3 says that objects are unordered with respect to the sequence produced by for...in. Nevertheless, many web developers have an expectation that the sequence should reproduce the insertion order. That implied order is strictly complied with in all browsers except Opera, which can optimize numeric sequences. We like the Opera optimization. Web developers don't seem to notice that it diverges from common practice, except that for-in on arrays seems to work correctly in cases that seem to be confused on other browsers. Confused in other browsers *only* if one creates the array's indexed properties in other than index order. That's rare, so it's not a huge hardship and I hope we can do what you propose, even though it's an incompatible change from past releases of most browsers. From Firefox 2's JavaScript 1.7, in the SpiderMonkey shell:

js a = []
js a[3]=3
3
js a[2]=2
2
js a[1]=1
1
js a[0]=0
0
js for(i in a)print(i)
3
2
1
0

We are also reluctant to slap Opera for having produced the best implementation of this feature in a way that fully complies with the current standard. You would not be slapping only Opera. Firefox 3's JavaScript 1.8 matches it:

js a = []
[]
js a[3]=3
3
js a[2]=2
2
js a[1]=1
1
js a[0]=0
0
js for(i in a) print(i)
0
1
2
3

but only for dense arrays: length = 32 or load factor = .25 -- and no ad-hoc named, direct properties. I hear Opera has similar restrictions, but I haven't tested. There are other hard cases:

js a = []
js a[3]=3
3
js a[2]=2
2
js Array.prototype[1] = 1
1
js Object.prototype[0] = 0
0
js for (i in a) print(i)
2
3
1
0

We want to better specify the ordering of for...in because the developer community has expectations. We are reluctant to impose a particular implementation of objects, but we do like that Opera's optimization seems to best match expectations in a way that is the least surprising, and possibly the best performing.
How would the other three feel about having to adopt Opera's convention? We need an exact spec of what Opera does to agree, or better: we need a spec that matches Opera and Firefox 3 for the easy case of dense arrays with no prototype indexed properties and no ad-hoc named properties. I'm in favor of for-in using index order for arrays, provided we can get the hard cases right, for some reasonable definition of right. We'd adapt future Mozilla releases to match. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
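[Editorial note] The ordering Opera and Firefox 3 pioneered was eventually standardized: the own-property-key ordering introduced in ES2015 (and applied to for-in by ES2020) puts integer-like keys in ascending numeric order, so the out-of-order insertion case from the shell sessions above now enumerates the way Doug hoped in any modern engine:

```javascript
var a = [];
a[3] = 3; a[2] = 2; a[1] = 1; a[0] = 0;  // inserted in descending index order

var seen = [];
for (var i in a) seen.push(i);
console.log(seen);                        // ["0", "1", "2", "3"]
```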
Re: ES3.1 Draft: Array generics
On May 31, 2008, at 3:37 PM, Douglas Crockford wrote: The hazards of the misbinding of this are a particular problem for mashup platforms, so the use of the thisObject parameter will not be allowed in strict mode. So IIUC, in ES3.1 strict mode, given var a = [1,2,3]; this is an error: var b = a.map(function(e,i,a){print(e,i,a, this.z);return e}, {z: 42}); but this is not: var b = a.map(function(e,i,a){print(e,i,a, this.z);return e}.bind({z: 42})); Why is it ok to do something one (novel) way, but not another (existing in 3 of 4 browsers) way? That bind allocates an extra object may give the second way greater integrity, but unless the function is stashed somewhere and called on a different thisObject by other code, the above forms are equivalent. The map implementation will only call the passed-in function, and only with the given or bound thisObject, once per existing (non-hole) array element. That bind requires an extra (function) object to be allocated is undesirable overhead, in the absence of threats. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
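[Editorial note] Both spellings became standard in ES5 (map's thisArg parameter and Function.prototype.bind), so the equivalence Brendan asserts can be checked directly; this example reuses the thread's { z: 42 } receiver:

```javascript
var a = [1, 2, 3];

// The thisObject (thisArg) parameter to map:
var viaThisArg = a.map(function (e) { return e + this.z; }, { z: 42 });

// The bind form, allocating an extra bound-function object:
var viaBind = a.map(function (e) { return e + this.z; }.bind({ z: 42 }));

console.log(viaThisArg);  // [43, 44, 45]
console.log(viaBind);     // [43, 44, 45]
```

As the message notes, the two differ only if the callback escapes and is invoked with a different receiver; within the map call they behave identically.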
Re: Namespaces as Sugar (was: complexity tax)
On May 28, 2008, at 12:48 AM, Maciej Stachowiak wrote: On May 27, 2008, at 1:07 PM, Brendan Eich wrote: I forgot one of the main use-case for 4, one you've expressed interest in already: 'use namespace intrinsic'. Without (4), this does not allow early binding for s.charAt(i) given var s:string to string.prototype.intrinsic::charAt. Yeah, but you can't early bind this anyway if s lacks a type annotation (or you otherwise have inferred that s is of string type). In fact if you can't infer the type of s then you get worse performance than if you hadn't opened any namespaces. That depends on the implementation, but let's say it's true. You still get the better-typed fixtures (the string class's intrinsic::charAt method) instead of possibly overridden prototype properties. The 'use namespace intrinsic' design for early binding grew out of the compact profile of ES3, which is a dead letter on the web because it's overtly incompatible (the spec is short enough to read in a few minutes: http://www.ecma-international.org/publications/ standards/Ecma-327.htm). A pragma for prioritizing fixed methods with stricter type signatures was prototyped by the strict mode in AS3. During collaborative ES4 development in TC39 TG1, we synthesized aspects of these precursors into 'use namespace intrinsic'. This design is implemented in the ES4 RI, and it will probably be in a couple of practical open source implementations this year. We want to get user as well as implementor feedback. But you do have my number - I like the potential for early binding both for program understandability and performance. I would love to see a way to make it possible without at the same time making property lookup in the face of unknown types slower. I will think about it myself but perhaps someone else will come up with something clever. Great. In the mean time, we'll be working on clever solutions to speed up all property lookups, even with open namespaces. 
Maybe we'll convince you that cleverness is better applied by implementors to enable programmers who benefit from new things like cross-cutting intrinsic, debugging, version2, etc. namespaces. As for my motherhood-and-apple-pie lecture, it is based on experience working on a high-performance, production-quality JavaScript implementation. We managed to pretty much rewrite the core interpreter for a significant performance boost in about two months, and that was struggling against the existing complexity of de facto JavaScript (ES3 + some Mozilla extensions). Along the way we found some correctness bugs that exist in pretty much all existing implementations (sequencing considerations in the face of exceptions, for instance). If the language were much more complex, then it would be much harder to make architectural changes of the implementation. Two months. Sorry, but if it took you three or four months to do that and also include namespace optimizations, but the benefit to the great many JS or would-be ES4 programmers were enough, then why wouldn't you take the extra month or three? Given the needs of the many and the scale of the web, I continue to believe that the main argument should be about maximizing usable utility for programmers writing in the new version of the language -- only secondarily about hardships for implementors who make the big bucks making JS go fast :-/. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Namespaces as Sugar (was: complexity tax)
On May 27, 2008, at 8:45 AM, Maciej Stachowiak wrote: I don't see how any of this argues for namespacing of properties on non-global objects, or why that case in particular requires unqualified import. The topic was any single object, so if these are good for the global object, they may (not must) be good for other objects. That's the general (uniformity) argument, made further below (I'm hoping for a response). One can certainly question the need for any namespacing of object properties, let alone unqualified import of namespaces for such purposes. I hope so. I'd rather have someone question unqualified import than implicitly dismiss it, which is what seems to be happening. I don't see how your statement is responsive. Hey, I was agreeing with you (Mr. Assistant District Attorney Sir :- P). Questioning is fine. What was responsive about your non-sequitur One can certainly question the need ... anyway? No one objects to asking clear questions that are not of the when did you stop beating your wife kind. What's at issue is whether and why unqualified import matters in any object, even the global object only, since the NAS proposal did not allow unqualified import even at global level, and the use-case for unqualified import was dismissed as not compelling. Namespacing of non-object properties is poorly justified, in my opinion. Unqualified import of top-level names is well-justified. I don't see why you keep mixing the two together. Sure you do, below where I gave particulars that led to the generalized namespace scheme in ES4. but does not mean namespaces need to generalize beyond the global scope. For example, global unqualified namespace import could desugar (logically) into the injection of scope chain items instead of into a general property lookup mechanism. That's an interesting idea, although we use namespace qualification along the prototype chain all over the place in ES4, and for what seem like good reasons. 
Other languages with successful namespacing features don't have such a mechanism, so I am dubious of the goodness of these ideas. I am concerned that the namespace lookup algorithm for object property access is too complicated. Agreed, this is the big issue. I share your concern, but the conservative approach (esp. with reference to C++) of throwing out non-global open namespaces looks like an overreaction, and it may not save much complexity. It makes object property lookup depend on the set of open namespaces, which means obj.property may compile to entirely different code depending on the context, Lexical context, no dynamic open-namespaces scope. and it seems likely it will slow down property lookup when multiple namespaces are open but static type info is missing. It certainly could, although we think not in implementations under way. Opening multiple namespaces is not free in a dynamic language. Is the name lookup algorithm much simpler if namespaces are top-level only? Since obj.prop could end up with obj referring to the (or I should write a) global object, I don't see it. Unless you're proposing outlawing such object references using the namespaces open at top-level when obj is the global object. If the only real justification is that it's a nice generalization, then I do not think it is worth the performance hit. The nice generalization followed from particular use-cases; it did not precede them. I cited those cases (briefly). How about being responsive to them? ES (any version) has objects as scopes, as well as prototypes. It's hard to keep the gander -- objects below the top level, or on the prototype chain -- from wanting the same sauce that the goose -- the global object -- is trying to hog all to itself. Is it really? Is there any other language where namespacing of the global namespace has led to namespacing at the sub-object level? C++, Java and Python all get by fine without namespacing of individual object properties. 
C++ and Java are not the right paradigms for JS/ES. Python is better, but Python *does* allow import in local scope. The reason namespacing at top level is essential to programming in the large is that the global namespace is a shared resource and must be partitioned in controlled ways to avoid collision in a large system. But I do not see how this argument applies to classes or objects. See Mark's big post, which discusses (in item (b)) extending objects, including standard ones. Saying the global object is a shared resource that must be partitioned, etc., but no others reachable from it, particularly class objects, are shared resources, is begging the question: what makes any object a shared resource? That the global is the only necessarily shared object does not reduce the benefit, or make the cost prohibitive, of sharing other objects reachable from it. Prototype (the Ajax
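For concreteness, here is the ES3 status quo both sides take for granted: global-namespace partitioning is done purely by convention, hanging a whole library off one top-level object. The MyLib name below is illustrative, not from any real library:

```javascript
// The conventional ES3 workaround: one top-level object per library
// partitions the shared global namespace. But there is no way to "open"
// such a namespace -- every use must be fully qualified.
var MyLib = MyLib || {};   // reuse the object if another file made it first
MyLib.util = {
  add: function (x, y) { return x + y; }
};
console.log(MyLib.util.add(2, 3)); // 5
```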
Re: Namespaces as Sugar (was: complexity tax)
On May 27, 2008, at 11:42 AM, Maciej Stachowiak wrote: On May 27, 2008, at 11:00 AM, Brendan Eich wrote: What's at issue is whether and why unqualified import matters in any object, even the global object only, since the NAS proposal did not allow unqualified import even at global level, and the use-case for unqualified import was dismissed as not compelling. There's really 4 separable issues:

1) Namespacing of names at global scope (via lexically scoped reference).
2) Unqualified import of names into global scope.
3) Namespacing of arbitrary object properties.
4) Unqualified import of namespaces for arbitrary object properties.

I would claim 1 and 2 are essential, 3 can be done by convention in the absence of 4 (a la the NAS proposal) and 4 is unnecessary and harmful to performance. Thanks, this is helpful, since the argument you joined was about (2) and/or (4) -- there is no unqualified import in the NAS sketch. I forgot one of the main use-cases for 4, one you've expressed interest in already: 'use namespace intrinsic'. Without (4), this does not allow early binding for s.charAt(i) given var s:string to string.prototype.intrinsic::charAt. So you can't add one pragma and realize speedups or better type signatures for built-in methods. All you can do is access global intrinsic::foo bindings, which (in ES4, where opt-in versioning gets you immutable Object, Array, String, etc. without needing to open intrinsic) are few and not likely to realize a speed-up or more precise typing. So early binding via namespacing would be gone. It could be reintroduced via an ad-hoc pragma. But that takes up complexity budget in the spec and real implementations too. It could save a lot of complexity, by not requiring any first-class support for namespace lookup on arbitrary objects. If I understand your proposal, meta::get or iterator::get could still be defined in an arbitrary object, and called with full qualification. Lexical context, no dynamic open-namespaces scope. 
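Without first-class namespaces, qualified names like meta::get or iterator::get can only be faked by string convention in ES3. A rough sketch -- the qualify helper and the '::' separator are hypothetical, purely illustrative:

```javascript
// Hypothetical sketch: simulating a namespace-qualified property name by
// string convention. Nothing here is ES4 syntax; it only shows what
// "full qualification on every access" amounts to without language support.
function qualify(ns, name) {
  return ns + '::' + name;
}

var obj = {};
obj[qualify('iterator', 'get')] = function () { return 42; };

// There is no analogue of 'use namespace iterator' to make the short
// name visible; every access spells out the qualifier.
console.log(obj[qualify('iterator', 'get')]()); // 42
```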
Note I said compile to so I think this was clear. Just dotting an i, especially for everyone following along at home :-). * internal namespace per compilation unit for information hiding -- hardcode as a special case? I'm not sure how this applies to unqualified import of namespaces on arbitrary object properties. Per the spec, internal is open in a separate set on the list of open namespaces. You don't have to qualify references to internal::foo. * iterator and meta hooks in objects. Ugly __get__, etc., names instead? Unqualified import is not necessary for iterator or meta hooks. Namespaces by convention (or __decoratedNames__) would be enough. Agreed :-). * helper_foo() and informative_bar() in the RI? I don't think any language feature should exist solely for the convenience of the RI. It's not just the RI. Namespacing for informative purposes in the RI is good programming style, akin to using helper namespaces in C++, usually at top level (but often class-level, as you can see from reading RI builtins/*.es). `grep 'use namespace ' builtins/*.es` runs to 41 lines. If more effort must be spent on the necessary complexity of the language just to preserve current levels of performance, No, to increase the levels of performance for the current version of the language. That's the horse going over the first hill from the barn. Adding namespaces to the next major version could slip into the optimization frameworks under way in a couple of engines I know of. But you may doubt. That's ok. I maintain that the high order bit should not be the cost to implementors, rather utility vs. complexity facing users. then that takes away resources from implementing unnecessary complexity to improve performance beyond current levels. Or harmonizes with the unnecessary complexity that implementors trying to compete already face. Let's not prejudge the ability of competing implementations to (a) speed up; (b) optimize namespace search for common code. 
The first target has to be the programmer using the language, not the implementor. This does not mean implementor concerns do not matter, as I keep trying to reassure you -- only that they come second. In general, keeping the language simpler is good for performance. Yet focusing on performance can complicate the language for programmers. It certainly has for other languages. Other things being equal, simpler implementations are easier to change, and have more room to add complexity for optimization without becoming too complex for human understanding. I will agree that some added language features are essential but I think minor improvements in expressiveness that have large complexity cost are a poor tradeoff. Hard to disagree without more details. This is motherhood and apple pie stuff. This is a good trade-off if it can be done in reasonable footprint, since
Re: ES3 Specification oddness.
On May 19, 2008, at 12:56 AM, Steven Mascaro wrote: The use of Date makes things look odd, but I assume the spec means something like the following:*

js> function f(){}
js> f.prototype = {const prop1: 1, prop2: 2}
js> o = new f
js> o.prop1 = 42
42
js> o.prop1
1

which is the behaviour I'd expect. * I can't find the notation for const's in object literals. You're using the ES4 syntax; there's no const in ES3. Yeah, I picked Date but any standard constructor will do (because standard constructors have DontDelete + ReadOnly attributes for their .prototype properties). So you could s/Date/Object/g, but Date suffices and its toString is less anonymous (i.e., not [object Object]). Other suitable DontDelete + ReadOnly properties in ES3, besides the standard constructors' prototype properties, include String length, Number.MAX_VALUE, etc. Recollecting more from ES1 days, we did not think about const per se, but we did want to prevent ReadOnly properties from being overridden in delegates where related properties and methods that depended on the ReadOnly value were not overridden, therefore found in a prototype. Integrity again. /be
Re: Odd idea
On May 18, 2008, at 1:17 AM, Brendan Eich wrote: On May 17, 2008, at 9:00 PM, Mark S. Miller wrote: On Sat, May 17, 2008 at 8:54 PM, Brendan Eich [EMAIL PROTECTED] wrote: No, we want a number line that goes up sensibly. JS3.1 if it follows 1.7 would have everything on board for ES3.1 + other stuff not in ES3 that prefigures ES4. I couldn't parse that. Could you restate? JS version number line: Wrapped badly by mailman. Shrinking horizontally:

1.0  1.1  1.2  1.3  1.4  ES3  1.5  1.6  1.7  1.8
      ^          ^
      |          |
basis for ES1  close to ES2

Now, where does JS3.1 go? If it's exactly the same as anything like what's proposed for ES3.1, it does not fit on the number line above. The line must fork somewhere between ECMAv3 and 1.6 (inclusive), since ES3.1 as proposed does not have much (if anything) from JS1.7 or 1.8. The line must fork, because there are things in ES3.1 not in any version on the line above. Not sure if lack of replies means I was unclear, but the above number line should help highlight an awkward truth: ES3.1 is a step sideways (and in some ways backward) for JS as represented by Mozilla's implementations (Rhino is tracking SpiderMonkey). That's ok, standardizing post-hoc can be good (making up new stuff for 3.1 is less clearly good in this light -- more work needed to uphold the ES3.1 ES4 subset relation). Since JS has evolved ahead of the standard since 1999 (and did before then, resulting in ES1 and ES2), a JS3.1 does not make sense. Any ES3.1 standard would be folded into JS2 or possibly JS1.9 (the numbers are decimals, so 1.10, 1.11, etc. are possible too, but unlikely in my opinion). Separately from JS3.1, my belief is that jumping from JS2 to JS4 is not helpful to half the audience (not truly half; who knows? could be by far the majority, since ECMAScript, .es suffix, etc. have not caught on) who think in terms of the JS1.x evolution, however much it might help those focused on the ES numbers. 
It's hard to argue strongly for either half since I claim so little is at stake in terms of confusion. If we end up seeing <script type="application/javascript;version=4"> proliferate by accident, I'll eat my words. I rather suspect we will see untyped or default-typed script tags continue to dominate, and some amount of user-agent sniffing used server-side to deliver JS2/ES4 code to up-rev clients. I would bet real money that the .js suffix and the unversioned application/x-javascript (and even the unfortunate HTML-4.0-promulgated text/javascript) continue to be common for a long time, too. Comments welcome. /be
Re: ES4 Security
On May 18, 2008, at 7:50 AM, Steven Mascaro wrote: On Sun, May 18, 2008 at 7:54 PM, Brendan Eich [EMAIL PROTECTED] wrote: Brendan wrote: I think you kept it too short. :-/ I've been accused of being verbose, so I tried to keep my *opening* statement concise. It was meant as an invitation to discussion, not a final statement. I got your intent, but vague feel-good words (sorry, that's unfortunately what security and too often privacy are) do not help us make progress in this forum. Kinda like appealing to motherhood and apple pie ;-). From a post to es4-discuss I wrote last fall: Security is an issue (not in the Oprah sense), all right. A bit more precisely, it is a set of end-to-end properties that need solid enforcement mechanisms at many layers of abstraction. Security is however not a unitary good or product. So we should talk about specific, well-defined properties that can be verified somehow (whether statically or not). Cross-site exploits happen on the server side too. They're possible in proxies and other kinds of gateways. They arise when data originating from different trust domains mix in ways that lose track of each datum's trust label (ignore policy questions, including the folly of putting the user in the loop). The mixing involves control flow, so the problem is not solvable by data-labeling alone. I'm confused. Aren't these man-in-the-middle attacks? Yes, there are 3 parties, but the structure and solution (usually encryption/signatures/hash checks) is different. No MITM -- think mashups, user-generated content hosting (myspace, lj, etc.). Firefox and other Mozilla-based apps are indeed the ur-mashup (code and markup loaded from various origins mixing in a single JS runtime). For example, suppose that it were possible to retrieve the text of any <script src=...></script> element using '.textContent' from javascript, regardless of origin. You'll agree that this is unthinkable today. 
But I assume you'll also agree that there is no security problem in doing this if no cookies (or other private data) are sent in the initial request to retrieve the script page? Absolutely not. Why do you think that would be safe? Shaver's followup shows a Princeton attack against inside-the-firewall known-intranet-address data. That's just one problem. You can blame the firewall or doc-hosting server admin, but it's a real-world problem. In the Netscape 3 data tainting model, with a much simpler DOM, properties of equivalent asset value would be tainted with the page's origin. They could freely flow into scripts loaded from other origins, but the results (*any* results) could not leak back (feasibly, or at least in practically cheap-enough flows) to the other origins' servers. But as I've noted in the last message and in recent talks about this experiment, the purely dynamic information flow system cannot untaint the virtual machine pc, so taint mixtures accumulate badly. The system is too pessimistic, for want of static analysis -- a hybrid approach I'm experimenting with now. In the same-origin model under which we all suffer still, there's no taint or trust label associated exactly with .textContent's value, so to protect this asset, ignoring cookie leaks, we would be relying (as we do for all such DOM assets today) on the access-control checks done between trust containers (window objects, frames in general). This is a bad bet, because browsers constantly patch bugs in their leaky inter-frame reference monitors. This is the bigger problem I alluded to above (That's just one problem). The same issues affect XMLHttpRequest. The solution adopted by 'AJAX' developers is to ask their own server for the page, which is equivalent to asking for the page without cookies. 
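A toy sketch of the data-tainting idea described above -- nothing like the real Netscape 3 implementation, just the label-propagation core: values carry origin labels, mixing unions the labels, and a result may only flow back to a server whose origin covers every label. All names and origins here are invented for illustration:

```javascript
// Toy model of dynamic data tainting (illustrative only). Each datum
// carries the set of origins whose data contributed to it.
function tainted(value, origins) {
  return { value: value, origins: origins };
}

// Mixing two data (here, string concatenation) unions their taint labels.
function concat(a, b) {
  var merged = {};
  a.origins.concat(b.origins).forEach(function (o) { merged[o] = true; });
  return tainted(a.value + b.value, Object.keys(merged));
}

// A send back to a server is allowed only if every label matches it.
function canSend(datum, origin) {
  return datum.origins.every(function (o) { return o === origin; });
}

var secret = tainted('intranet data', ['https://intranet.example']);
var local  = tainted('greeting: ',    ['https://evil.example']);
var mixed  = concat(local, secret);

console.log(canSend(local, 'https://evil.example')); // true
console.log(canSend(mixed, 'https://evil.example')); // false: carries intranet taint
```

As the message notes, this purely dynamic scheme is too pessimistic in practice: once labels mix, they never come back off, and control-flow (implicit) leaks are not captured at all.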
The recently suggested cross-site XMLHttpRequest extensions still do not solve the problem completely (the original page sends cookies to the 3rd party server, which may not be what either the original page or the user wants). Right, and cookies are a hot point of contention in the W3C Web API working group, and among Mozillians, right now. We pulled XS-XHR based on the draft recommendation from Firefox 3 in order to get it right, and avoid spreading a de-facto standard before the de-jure one was finished (this is a chicken and egg situation; some on the working group would want us to set precedent -- I think that's wrong, but I understand it, and so [evidently] does Microsoft). This is getting far afield from es4-discuss, however. Suggest we take it to a w3c public list, if there is one. If there are non-cookie examples of XSS, please point me to them (setting aside equivalents like DOM storage, but also the :visited example from below). Yikes. Where to start? https://bugzilla.mozilla.org/buglist.cgi?query_format=specific&order=relevance+desc&bug_status=__open__&content=XSS Once this problem is solved, ES4 *does* *not* need RO
Re: Odd idea
On May 19, 2008, at 4:22 PM, Mark S. Miller wrote: Not sure if lack of replies means I was unclear, but the above number line should help highlight an awkward truth: ES3.1 is a step sideways (and in some ways backward) for JS as represented by Mozilla's implementations (Rhino is tracking SpiderMonkey). Only backward if more complex is forward ;) No, I mean backward in this sense: Mozilla's implementations have had getters and setters since 1999 or so. Other minority share browsers were forced to reverse-engineer them because Microsoft live.com launched with user-agent testing that expected them in non- IE browsers. This is old news, and backwards -- not progress, except to catch IE up. Good for developers, for sure. Enough after nine years? Hardly. That's ok, standardizing post-hoc can be good (making up new stuff for 3.1 is less clearly good in this light -- more work needed to uphold the ES3.1 ES4 subset relation). But ES4 is also sideways in this sense. There's a bunch of stuff in Mozilla's JS1.8 that didn't make it into ES4. Namely? As noted, some pieces are prototypes that will be adjusted to match the ES4 type-based counterpart (the iteration protocol hook, e.g.). What bunch of stuff is in 1.8 that did not make it into the latest ES4 drafts? Also, there's a tremendous amount of stuff in ES4 that was never in a JavaScript. Except under the hood, off limits to programmers, reserved for the built-ins and the DOM. Since JS has evolved ahead of the standard since 1999 (and did before then, resulting in ES1 and ES2), a JS3.1 does not make sense. Any ES3.1 standard would be folded into JS2 or possibly JS1.9 (the numbers are decimals, so 1.10, 1.11, etc. are possible too, but unlikely in my opinion). Glad to hear it's decimal. (Or at least binary floating point ;).) If ES4 does become known as JS2, then, taking up the doubling suggestion liorean mentioned, I suggest ES3.1 also be known as JS1.55. Its successor could then be JS1.57, etc... 
I'm going to risk missing the joke and repeat that we wouldn't fold any 3.1 into a distinct *JS* version number. This is a serious point, since you proposed the unification of version number lines. Any ES3.1 that's a small upgrade to ES3 should not require a new JS version number. With no new syntax (apart from getters and setters), programmers should be able to object-detect new methods, not resort to duplicative whole-script versioning. Right? Separately from JS3.1, my belief is that jumping from JS2 to JS4 is not helpful to half the audience (not truly half; who knows? could be by far the majority, since ECMAScript, .es suffix, etc. have not caught on) who think in terms of the JS1.x evolution, however much it might help those focused on the ES numbers. Surely you don't mean to suggest that ES4 represents a small evolutionary step beyond JS1.8? Wouldn't a larger increment be less misleading? Larger than what? 0.2? The numbers are decimal tuples, so 2 - 1.8 is arbitrarily large in the second place. We don't know until we get there. The main point is to have a total order, not to market (or counter-market, in your case :-/) by fudging the gap to be small (or big). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: complexity tax
On May 19, 2008, at 4:32 PM, Douglas Crockford wrote: Brendan Eich wrote: On Mar 26, 2008, at 10:01 AM, Ric Johnson wrote: Let us take action instead of throwing opinions around: Brendan: What new features that can not be implemented via code constructs now? This is reductionism, therefore between silly and wrong, but I will list a few things:

* you can't make read-only properties in ES3;
* you can't make don't-delete properties of plain old objects (only vars in closures);
* you can't make objects that cannot be extended;
* you can't make don't-enum properties in plain old objects;

It looks like these omissions will be corrected in ES3.1 by the Object.defineProperties function. Of course, Ric asked What new features [...] can not be implemented via code constructs now? Code constructs added in 3.1, with or without new syntax, don't count. These essential features will be added without resorting to new syntax. New syntax is what's needed to make these usable. Who wants to use Object.defineProperties on a closure to bind a read-only property when you could use 'const'? The problem with Object.defineProperties, apart from standardizing it in two committees, is the verbosity and (at the limit) overhead. There's really no reason not to have better UI for the underlying semantics. /be
Re: Odd idea
On May 19, 2008, at 5:41 PM, Mark S. Miller wrote: On Mon, May 19, 2008 at 4:42 PM, Brendan Eich [EMAIL PROTECTED] wrote: On May 19, 2008, at 4:22 PM, Mark S. Miller wrote: But ES4 is also sideways in this sense. There's a bunch of stuff in Mozilla's JS1.8 that didn't make it into ES4. Namely? As noted, some pieces are prototypes that will be adjusted to match the ES4 type-based counterpart (the iteration protocol hook, e.g.). What bunch of stuff is in 1.8 that did not make it into the latest ES4 drafts? Ok, I looked, and it's a lot less than I expected. watch is an example. (Unless I didn't notice its inclusion). Object.prototype.watch and unwatch have been in SpiderMonkey for ~12 years (memory fades -- in rev 1.1 on cvs.mozilla.org, I recall implementing in Netscape's private CVS years before mozilla.org was founded). Also, there's a tremendous amount of stuff in ES4 that was never in a JavaScript. Except under the hood, off limits to programmers, reserved for the built-ins and the DOM. Huh? Classes, Type declarations, Namespaces!, Classes under the hood (interfaces too) in built-ins and the DOM and browser object models. You cannot bootstrap JS in JS without something like classes. Type declarations mean several things, but let's pick just structural types: under the hood in SpiderMonkey at least. Namespaces: fair enough, although the built-ins still get special treatment. What else are the [[Get]], etc. internal property names in ES1-3, but non-default namespace prefixes disguised as semantic brackets. These names are mangled in a way that cannot be spelled in the language, yet they are property names. perhaps Packages and/or Units, if these are still on the table. They were cut -- please try to keep up, we ES4-ers are spending time keeping up with 3.1 :-/. Namespaces is a huge addition to the complexity of the language, and the one I'm least happy about. 
Actually, I agree with you that namespaces add more complexity than classes alone (if that's what you mean). We've been working on them for a long time (Waldemar can tell you about the first go-round). They're too useful to lose in favor of privileged built-in names and __UGLY__ conventions for the lusers. and repeat that we wouldn't fold any 3.1 into a distinct *JS* version number. Ok then, I'm happy to stop arguing about this. I just thought that this odd idea might be seen as helpful. If not, forget it. We have enough substantive issues to argue about ;). Will do, although a transcendental version number will come in handy some day, I'm sure ;-). /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: complexity tax
On May 19, 2008, at 6:54 PM, Mark Miller wrote: If I could have a language with some of the syntactic conveniences of ES4 but without ES4's semantics, I'd be quite happy. What semantics in particular? Can you pick on something specific that's not in your classes-as-sugar proposal (rather than have me guess)? BTW, since you missed the cuts in the spreadsheet, you may have missed the optional type checker being cut too: 'use strict' is good-taste mode, a la Perl and in accord with discussions we've had at the last two TC39 meetings. Thanks for the kind words, although since neither 3.1 nor 4 is done yet, specific constructive criticism is even better. /be
Re: ES4 Security
On May 17, 2008, at 11:55 PM, Steven Mascaro wrote: The current browser security model is broken. Ok, sure. Any security exploit that has 'cross-site' in its name need not exist today. Lots of problems need not exist today, but they do. Solving them requires making changes. We need (I do, at any rate) to be specific and detailed about exact problem and proposed solution(s), to talk sensibly and evaluate the likely effects of changes from this point forward. I'll try to be specific below, without getting stuck in details. Cross-site exploits happen on the server side too. They're possible in proxies and other kinds of gateways. They arise when data originating from different trust domains mix in ways that lose track of each datum's trust label (ignore policy questions, including the folly of putting the user in the loop). The mixing involves control flow, so the problem is not solvable by data-labeling alone. The solution for browsers is simple: do not *automatically* transmit private information (usually cookies) to 3rd parties in a transaction. This is so vague it's hard to respond to usefully, but notice that (in Parkerian Hexad terms) you're talking about Confidentiality here. The parenthetical non-vague bit about cookies does not help, because cookies are only one asset threatened by XSS. Browsers have cookie controls already, we're working on improvements to them and their defaults, but XSS goes way beyond cookies. Once this problem is solved, ES4 *does* *not* need RO/DD/IH for security. (IH=information hiding.) Now you've changed the subject to Integrity. Note, this post is *only* about security (and privacy). We are not out to solve security or any such fluffy problem-name in ES4. Anyone claiming to deliver security solely by means of a secure programming language is selling you a bridge. See http://lambda-the-ultimate.org/node/2773 for a recent LtU thread (and cue David Teller and Mark Miller ;-). 
It is not about whether RO/DD/IH can make development/maintenance easier. The main issue in ES4 is not development/maintenance, it's being able to make sane type judgments (static or dynamic, doesn't matter), at all. A secondary issue is Integrity as an information security property. Integrity alone doesn't really solve whole problems on the real- world level of prevent entire class of security exploit (XSS) or make a usable and 'safe' yet powerful browser-based programming language. But Integrity is an end-to-end property that you can build proofs and helpful automations on top of, and without it you're too often pretty much screwed. I implemented a dynamic data tainting security model (opt-in) for Netscape 3. Helpful testers actually beat on it, hard. Its goal of Confidentiality was frustrated by (a) lack of static analysis to help untaint the pc (tainting the pc is required to deal with Denning's implicit flows -- see above about data labeling being insufficient); (b) lack of Integrity to make such analysis practical, if not feasible in the first place. Ignoring ES4, browsers have struggled with mutable (and shadowable, for initialisers per ES3!) Object and Array bindings, and mutability in general. Check out the open source bug databases at bugzilla.mozilla.org and webkit.org (both, one can cite bugs in either; and closed-source browsers' bug databases, if they're worth anything, should have similar bugs). Just one example: https://bugzilla.mozilla.org/show_bug.cgi?id=376957 Jesse Ruderman's comment 0 is worth reading in full. See how browser have to engineer defense-in-depth (so should everyone; I'm not whining here). In the real world this means considering the likelihood of web app developers failing to authenticate carefully, or configure and check MIME types. And of course, even if web app devs were perfect, there'd still be browser bugs, mashup novelties, wireless network IP addressing and DNS threats, etc. 
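Denning's implicit flows, mentioned above as the reason pure data labeling fails and the pc must be tainted, fit in a few lines. This is a hypothetical snippet for illustration, not code from the Netscape 3 experiment:

```javascript
// No tainted value is ever assigned directly, yet branching on the
// secret copies one bit of it into 'guess' via control flow alone.
// A purely data-labeled system sees only an untainted constant write,
// which is why the pc itself must carry taint inside the branch.
var secret = true;   // imagine this value is tainted
var guess = false;
if (secret) {
  guess = true;
}
console.log(guess === secret); // true: the bit leaked
```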
Here's another privacy issue, not solved in any browser I know of, dictated by web standards and user expectations: https://bugzilla.mozilla.org/show_bug.cgi?id=14 In the bug, David Baron even explores cross-copying (padding execution times along alternate paths to close timing channels). This is still a research problem. And what's made this bug less severe, besides its ubiquity in all popular browsers, is the fact that remote tracking of browser visited history is unfortunately easy using a number of techniques, ignoring CSS (Cascading Style Sheets, not XSS). Still, I would like to fix bug 14. I have some ideas, which involve hybrid information flow propagating through CSS layout, but they're vague and challenging -- researchy, in a word. Security is a big topic, not something to simplify with too few words. You cannot reduce a difficult argument to a short-hand formula that proves ES4 should let Object or Array be overridden (or
Re: ES3 Specification oddness.
On May 18, 2008, at 8:10 PM, Mark S. Miller wrote: Is this intended or a mistake? It's intended; it goes back to ES1 drafts written by the Microsoft lead with agreement from all participating in TG1 back then (1996-1997). Do JavaScript implementations obey this peculiar behavior? Yes:

js> function f(){}
js> f.prototype = Date
function Date() { [native code] }
js> o = new f
Function.prototype.toString called on incompatible object
js> o.prototype = 42
42
js> o.prototype
Invalid Date

(SpiderMonkey shell, same as in Firefox.) Rhino follows the spec, and I'm pretty sure (can't test, no PCs at home) IE does too. Safari does not, looks like a JavaScriptCore bug. IIRC, Opera does now, having recently fixed a long-standing bug. Do programs depend on it? I don't know of any, but we've been down this road before. Without a widely distributed browser trying out a change from the standard (de-facto or de-jure) at beta scale or better, we can't prove a negative. We know from past attempts of similar changes that the odds are not good. It's not encouraging if IE and Mozilla-based engines agree with ES1-3 here, as IE and (long ago) the Mozilla progenitor implementation in Netscape browsers set the de-facto standard. There's no point borrowing trouble here, IMHO. I agree we want to separate read-only from no-override-allowed, but in new, compatible-because-opt-in territory. Hence ES4 classes with const and final as separate attributes. /be
Re: ES4 stable draft: object initializers
On Apr 17, 2008, at 5:15 PM, Erik Arvidsson wrote: 2008/4/17 Brendan Eich [EMAIL PROTECTED]: These are wanted by Ajax library hackers, jresig and shaver testify. Rather than cut a long-standing proposal because a recent evolution of its *syntax* (not its substance) led to something problematic, why not return to the original syntax: Yes, this feature is indeed wanted. The syntax is not that important. There is also a desire to allow these to be defined on an existing object. Can you say more about this last sentence? What existing objects would want meta-programming fixtures somehow added to their properties? I know, I'm to blame for __defineGetter__ and it has use- cases that o = {get x() {...}} can't satisfy, but a named getter is one thing -- a catch-all is different and scarier. Anyway, I'm interested in details about your use-cases. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
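For reference, the two forms under discussion -- the initializer getter and the Mozilla __defineGetter__ extension (nonstandard at the time, though widely cloned) -- side by side:

```javascript
// A named getter in an object initializer: fine when you create the object.
var a = { get x() { return 42; } };

// The legacy Mozilla extension can also target an *existing* object,
// which the literal form cannot -- the gap Erik's "desire to allow these
// to be defined on an existing object" refers to.
var b = {};
b.__defineGetter__('x', function () { return 42; });

console.log(a.x, b.x); // 42 42
```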
Re: ES4 stable draft: object initializers
On Apr 10, 2008, at 9:05 AM, Lars Hansen wrote: Here is the third draft, which I have tentatively labeled as stable. Please note the OPEN ISSUES, the input of everyone on these would be appreciated. No one has addressed this OPEN ISSUE: * The meta::prototype facility does not allow 'null' as a value. I'm already spoken for in the stable draft. I did want to add this information: http://lxr.mozilla.org/mozilla/search?string=__proto__%3A which shows no one in the cross-referenced cvs.mozilla.org sources using obj = {__proto__: null, ...}. I don't have results for other repositories, or for Mozilla-specific content on the web. So I'll withdraw the request to allow null, even though the extent to which it cripples the expressed object is not different in kind from what you can do with obj = {toString: undefined, hasOwnProperty: undefined, /* etc. */}. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Iteration in ES4
On Apr 28, 2008, at 6:04 PM, Waldemar Horwat wrote: Brendan Eich wrote: On Apr 25, 2008, at 2:08 PM, Brendan Eich wrote: for (prop in obj) { ... obj[prop] ... } to look like for each (value in obj) { ... value ... } where obj might be an Array. The symmetry between for-each-in and for-in that E4X half-supports (viz, prototype property enumeration with shadowing, and deleted-after-loop-starts coherence) is broken. Just in case this is not well-known, SpiderMonkey starting in Firefox 1.5 supported E4X and made the for-each-in loop work for all object types, not just XML/XMLList. But not on Array element (indexed property) values only, in index order -- again property creation order, and named as well as indexed enumerable properties, are visited. This shares code with for-in and preserves the equivalence shown in the rewrite example above. I'm baffled trying to figure out what you're trying to say in the last paragraph. Let me try again: I added for-each-in support for all types when implementing E4X in SpiderMonkey, not just for XMLList and XML types. But I did not make for-each-in do anything different given an array object on the right of 'in' from what the for-in would do if you used the loop variable to index into the array to get the value produced in the loop variable by for-each-in. Does that help? /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
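E4X's for-each-in never made it into later editions, but the equivalence Brendan describes can be sketched with plain for-in: for each (v in obj) visited exactly the values you would get by taking for-in's keys and indexing. The forEachIn helper below is a hypothetical stand-in, not SpiderMonkey's implementation:

```javascript
// Sketch of the for-each-in / for-in equivalence: same keys, same
// order (property creation order, named and indexed alike), but the
// loop produces values instead of keys.
function forEachIn(obj, visit) {
  for (const prop in obj) {
    visit(obj[prop]);
  }
}

const a = ["x", "y"];
a.expando = "z";           // named enumerable properties are visited too

const seen = [];
forEachIn(a, (v) => seen.push(v));
console.log(seen);         // ["x", "y", "z"]
```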
Re: Iteration in ES4
On Apr 25, 2008, at 12:09 PM, Jason Orendorff wrote: Here are some more comments on iteration in ES4; still more to come. Great to have feedback on a spec derived from a pretty old proposal -- better late than not in time. :-) === for-each on Array objects === The planned behavior, as far as I can discern it, is like this: - Properties will be visited in the order they were added. - Enumerable properties of Array.prototype will be visited. (This will hurt libraries that add extra Array methods there, like Prototype http://www.prototypejs.org/api/array. There are also more obscure cases.) You are concerned with for-each-in, I know, but the same concern arises with for-in. This has come up many times, and there is a converging (I hope) proposal to add Object.defineProperty or something named similarly, which allows DontEnum properties to be added to objects, including to standard constructor prototypes. - Non-numeric (expando) properties will be visited. Similar to for-in, which is bound by backward compatibility. One improvement: the iteration protocol allows swapping in better behavior. From the proposal: Array.prototype.iterator::get = iterator::DEFAULT_GET_VALUES; Array.prototype.iterator::contains = function (v) this.indexOf(v) != -1; Then one does not have to use for-each-in at all, and Arrays become much more pleasant to use in a small-world or modular program that customizes Array.prototype like that. I think users will find all these details astonishing and undesirable. The first seems especially perverse. No prior standard requires it. You probably have read 4.2 in http://www.ecmascript.org/es4/spec/incompatibilities.pdf but I thought I'd point to it for the list. The de-facto standard set by competing browser implementations starting in 1995 trumps the de-jure standard regarding for-in enumeration following property creation order, even for arrays. But for-each-in could do otherwise. 
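The Prototype.js hazard and the DontEnum escape hatch discussed above can be shown concretely -- the fix eventually shipped as ES5's Object.defineProperty, exactly the "converging proposal" mentioned. The method names here are illustrative:

```javascript
// A library adding an enumerable method to Array.prototype (as
// Prototype.js did) makes it show up in for-in over every array:
Array.prototype.compact = function () {
  return this.filter((v) => v != null);
};

const keys = [];
for (const k in [10, 20]) keys.push(k);
console.log(keys);   // ["0", "1", "compact"] -- the method leaks in

// The DontEnum escape hatch: a non-enumerable prototype addition
// stays out of for-in.
Object.defineProperty(Array.prototype, "quiet", {
  value() { return this; },
  enumerable: false,
});

const keys2 = [];
for (const k in [10, 20]) keys2.push(k);
console.log(keys2.includes("quiet")); // false -- invisible to for-in
```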
That's a fair point, since in E4X (ECMA-357), for-each-in follows XMLList child index order. So let's say we make for-each-in special for Array, as it is for XMLList (and XML, but vacuously) in E4X. Now for-each-in and for-in differ more substantially than in the former enumerating values and the latter keys. This could be a good thing, but it might be annoying if one is rewriting code that does for (prop in obj) { ... obj[prop] ... } to look like for each (value in obj) { ... value ... } where obj might be an Array. The symmetry between for-each-in and for-in that E4X half-supports (viz, prototype property enumeration with shadowing, and deleted-after-loop-starts coherence) is broken. The latter two are kind of implicitly specified in E4X. The conversations I've had with E4X principals suggest they did not intend for-each-in to consider prototype properties at all. But the spec flatly contradicts that intention in the Semantics section: The order of enumeration is defined by the object (steps 6 and 6a in the first algorithm and steps 7 and 7a in the second algorithm). When e evaluates to a value of type XML or XMLList, properties are enumerated in ascending sequential order according to their numeric property names (i.e., document order for XML objects). The mechanics of enumerating the properties (steps 7 and 7a in the first algorithm, steps 8 and 8a in the second) is implementation dependent. Properties of the object being enumerated may be deleted during enumeration. If a property that has not yet been visited during enumeration is deleted, then it may not be visited. If new properties are added to the object being enumerated during enumeration, the newly added properties are not guaranteed to be visited in the active enumeration. 
Enumerating the properties of an object includes enumerating properties of its prototype and the prototype of the prototype, and so on, recursively; but a property of a prototype is not enumerated if it is shadowed because some previous object in the prototype chain has a property with the same name. (end of E4X spec citation) So intent and spec may be out of whack, and we should consider doing something more aligned with intent in ES4. A cost-benefit analysis applies here. The cost of following E4X is real, e.g. Web pages that use Prototype can't use for-each on Arrays. This is a bigger problem for for-in, and most Ajax libraries steer clear of adding properties to any standard constructor prototypes. So it's a good reminder of a deeper problem, but not as compelling with for-each-in as with for-in -- and the ship sailed 13 years ago. Again I'm not sure if rescuing for-each-in is going to pay off, if the price is loss of symmetry with for-in -- and assuming programmers can save themselves by customizing via the iteration protocol. I don't see any offsetting benefit. Note that several ES4 classes will have
Re: Iteration in ES4
On Apr 25, 2008, at 2:08 PM, Brendan Eich wrote: for (prop in obj) { ... obj[prop] ... } to look like for each (value in obj) { ... value ... } where obj might be an Array. The symmetry between for-each-in and for-in that E4X half-supports (viz, prototype property enumeration with shadowing, and deleted-after-loop-starts coherence) is broken. Just in case this is not well-known, SpiderMonkey starting in Firefox 1.5 supported E4X and made the for-each-in loop work for all object types, not just XML/XMLList. But not on Array element (indexed property) values only, in index order -- again property creation order, and named as well as indexed enumerable properties, are visited. This shares code with for-in and preserves the equivalence shown in the rewrite example above. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: defineProperty/getProperty design sketch
On Apr 23, 2008, at 8:36 AM, John Resig wrote: I'm confused as to why an API is being proposed which clashes with existing JavaScript-style APIs. The one case that I had seen previously, at least related to the implementation within Mozilla, is that it would look something like: Object.defineProperty(obj, name, value, Object.NOT_WRITABLE | Object.NOT_ITERABLE | Object.NOT_DELETABLE) Which makes much more sense than the proposal (not forcing the user to create temporary objects just to insert values). I agree it's better to have minimal-cost APIs that do not require object allocations (or fancy optimizations to avoid allocations). One overloaded API with an object parameter with too many degrees of freedom is also undesirable compared to targeted APIs, if the names and differences in use-case among the targeted set clearly distinguish them from one another. Another point (and I've said this to Allen already): I think browser vendors who have had to implement __defineGetter__, __defineSetter__, __lookupGetter__, and __lookupSetter__ would rather either standardize those as-is, or standardize Object.defineGetter, etc., counterparts to which the Object.prototype methods could forward. The clarity of object initialisers for flags is better, but only slightly. Per Neil Mix's suggestion (in his March 13, 2008 message to es4-discuss), the Object static constants would be named to avoid double-negative English hazards: WRITABLE, ENUMERABLE, REMOVABLE. A slight improvement to bias for conciseness in the common case, if it is the common case, would invert the sense of WRITABLE and require a READONLY flag for consts. An inverted sense for REMOVABLE would be named PERMANENT, if that is likely to be the exceptional case (is it?). This brings up the point Tucker made recently: READONLY with REMOVABLE is pointless. The Wheat prototype-based programming language found its initially-Unix-like mode bits had fewer sane than total linear combinations, too. 
StrawName | READONLY ENUMERABLE REMOVABLE
----------+------------------------------
FIXTURE   |    0         0          0
PROTOTYPE |    0         0          1
VAR       |    0         1          0
PROP      |    0         1          1
CONST     |    1         0          0
N/A       |    1         0          1
ENUM      |    1         1          0
N/A       |    1         1          1

The StrawName column contains some proposed names for the combinations or N/A where the combination is nonsense. These flag-bit-combination names do not always add clarity compared to |'ed flag-bit manifest constant names, IMHO. Comments? We need to come to agreement here quickly, for both ES3.1 and ES4 to hang together. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
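The flag-constant API John Resig describes maps directly onto what eventually shipped as ES5's Object.defineProperty with descriptor objects. A hedged sketch of the flag style layered on top -- the WRITABLE/ENUMERABLE/REMOVABLE constants and the defineWithFlags helper are hypothetical, not part of any standard:

```javascript
// Bit flags in the positive sense Neil Mix suggested, avoiding
// double negatives like NOT_WRITABLE.
const WRITABLE = 1, ENUMERABLE = 2, REMOVABLE = 4;

function defineWithFlags(obj, name, value, flags) {
  Object.defineProperty(obj, name, {
    value,
    writable:     !!(flags & WRITABLE),
    enumerable:   !!(flags & ENUMERABLE),
    configurable: !!(flags & REMOVABLE), // REMOVABLE ~ DontDelete unset
  });
  return obj;
}

// An ENUM-style property per the table: enumerable, read-only, permanent.
const o = defineWithFlags({}, "FOO", 42, ENUMERABLE);
const desc = Object.getOwnPropertyDescriptor(o, "FOO");
console.log(desc.writable, desc.enumerable, desc.configurable);
// false true false
```

The descriptor-object form won out in ES5, but the no-allocation concern was real; engines later optimized the descriptor path.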
Re: defineProperty/getProperty design sketch
On Apr 23, 2008, at 4:56 PM, P T Withington wrote: On 2008-04-23, at 18:35 EDT, Brendan Eich wrote: The StrawName column contains some proposed names for the combinations or N/A where the combination is nonsense. These flag-bit-combination names do not always add clarity compared to |'ed flag-bit manifest constant names, IMHO. I have to admit, those don't clarify it for me either. I'm trying to understand: what would I say in a class attribute declaration to get the corresponding properties. The names were based, where possible, on class declaration syntax:

StrawName | READONLY ENUMERABLE REMOVABLE
----------+------------------------------
FIXTURE   |    0         0          0
PROTOTYPE |    0         0          1
VAR       |    0         1          0
PROP      |    0         1          1
CONST     |    1         0          0
N/A       |    1         0          1
ENUM      |    1         1          0
N/A       |    1         1          1

In ES4, class C { var x; function f() ...; } makes FIXTUREs. For compatibility with ES1-3, you can declare a method to be prototype: prototype function charAt(pos) string.charAt(this.toString(), pos); (this is from the RI), and it will be defined on the class prototype (String.prototype in this case, shared with string.prototype) as REMOVABLE, since you can do this:

js> String.prototype.charAt
function charAt() {[native code]}
js> delete String.prototype.charAt
true
js> String.prototype.charAt
js>

in ES1-3. My choice of name for VAR is confusing since you can use var in a class to make a fixture, but the reference is to global variables:

js> var x = 1
js> for (i in this) print(i)
x
js> delete x
false

and functions too:

js> function f(){}
js> for (i in this) print(i)
f
js> delete f
false

These are indeed not REMOVABLE, but they are ENUMERABLE, per ES1-3. PROP is my cheesy name for a plain old ad-hoc property in ES1-3, which is ENUMERABLE, REMOVABLE, and not READONLY. CONST is READONLY but not ENUMERABLE or REMOVABLE. ENUM is CONST with ENUMERABLE. If the name were mnemonic for that, it might help. Let me know if you have better name suggestions, based on the above (please, no EXPANDO for PROP or I'll do something dire :-/). 
If there is no way to get the corresponding properties with a class attribute declaration, why would we support doing so dynamically? The combinations arise in different places in the existing language. Classes define nominal types, so they are not meant to model all of the language, in particular PROP additions to dynamic class instances (such as good ol' Object instances) have no declarative syntax. The object initialiser syntax is sugar, not declarative in the same sense as class syntax -- it desugars to property sets or [[Put]] calls to use ES1-3's meta-method, and that checks for readonly prototype properties. The short answer is that we have a core ES1-3 language for creating global PROTOTYPE, VAR, and PROP attribute-sets, but users of the language can't declare PROTOTYPE -- only the magic builtins get to make DontEnum properties in ES1-3. ES4 supports FIXTURE and CONST declarations. I had not thought of an example of ENUM in ES4, but I believe this is one: obj = {const FOO: 42}. FOO is not CONST because it should be enumerable by for (i in obj) constructs. ES4 purists might object that the FOO initialiser makes a fixture, but not a FIXTURE -- but if the game here is to name all sane attribute bit combinations, then it's hard to win with short names and avoid dropping FIXTURE from all the fixture variants. So I went with ENUM, and that name could be expanded to ENUMERABLE_CONST where CONST implies fixture. So we would have long-winded ENUMERABLE_CONST_FIXTURE, CONST_FIXTURE, GLOBAL_VAR, etc. names (instead of ENUM, CONST, and VAR respectively). But anything so long-winded loses to the disjunction of flag bit constants. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: ES4 draft: enumerability
On Apr 22, 2008, at 2:06 PM, Jason Orendorff wrote: On 2008/4/10, Lars Hansen [EMAIL PROTECTED] wrote: Here's the first draft explaining how enumeration works in ES4. * It's unclear which parts of this, if any, are intended to be normative. * Are the terms enumeration, iteration, and itemization defined somewhere? They are defined in http://wiki.ecmascript.org/doku.php?id=proposals:iterators_and_generators, at least. Even if they are, I can't stop reading the first two as synonyms, so I'd welcome a change in terminology. Perhaps for-in iteration for enumeration, and for-each-in iteration for itemization? Those are mouthfuls. After some up-front definition, would not the shorter terms of art start to sink into your subconscious? They did in mine, but probably that's because enumeration is such a beast in JS, and iteration a la Python so much less full of baggage (unless the iterator author wants baggage). Also, Pythonic keys/values/items naming is short and sweet. Renaming GET and DEFAULT_GET to something like FOR_ITERATOR and DEFAULT_FOR_ITERATOR would leave room for FOR_EACH_ITERATOR and DEFAULT_FOR_EACH_ITERATOR. See DEFAULT_GET_VALUES and DEFAULT_GET_ITEMS in the proposal. * The iterator proposal specifies additional special behavior when the iterator IT is a generator-iterator. Intentionally omitted here? Do you mean it.close() automation? See also http://bugs.ecmascript.org/ticket/47. * Is a more general introspection facility planned? If so, I hope DEFAULT_GET can be defined in terms of that and the Enumerator class can be dropped. Something has to specify enumeration, and we prefer self-hosting to prose for library code. Why is Enumerator objectionable? * The design of Enumerator doesn't make sense to me, especially that it's a parameterized class. What's the design goal here? 
This is going to sound too technical and it immediately raises another question, but it's not arbitrary, and I'll answer the further question too: To satisfy iterator::IteratorType.T while implementing deep and shallow enumeration (a la ES1-3 for (i in o) loop) and itemization (E4X's for each (v in o) loop). Ok (you ask), why satisfy that generic structural type? Because that's the return type for the one iteration protocol hook, iterator::get, which ES4 checks to allow custom and superior iteration under the common, desired, and well-known for-in syntax. Alternatives adding different syntax lose for not being Pythonic (ignore mandatory parens) and adding more special forms. Really, enumeration is a one-size-does-not-fit-all default that we want to keep for compatibility, but allow objects and classes to customize. Just like in Python (if you squint with those beer^H^H^H^HJS-colored glasses ;-). To get obj's public, enumerable properties, just like ES3 for-in, a static method suffices: Object.getEnumerableProperties(obj: Object!): Iterable.string Not quite. See the discussion at the top of the proposal. * Something has to filter property identifiers deleted after enumeration or itemization starts but before those ids are visited in the snapshot. * Something has to implement shadowing for deep properties (those found in a prototype object but having the same identifier as a property in the directly referenced object or a nearer object along its prototype chain). These two, plus conversion of indexes to string type, are the hairy aspects of enumeration that we wish to implement under a uniform iteration protocol. Also, Iterable.string must mean IterableType.string, as there's no type named Iterable in the proposal at least (did it get renamed? Checking... no, still IterableType). If the goal is to expose a general API for getting various slices of an object's set of property names, ... while also matching IteratorType. 
a static method still suffices: A static method does not suffice for the matching IteratorType requirement. IOW, the for-in construct (whether loop statement or comprehension, with or without 'each') is layered on a general iteration protocol hook a la Python's __iter__. But that hook is a typed function, and the return value is of type IteratorType.T. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
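The "hairy aspects" of enumeration listed above -- filtering identifiers deleted after the snapshot, and prototype-chain shadowing -- can be sketched in modern JavaScript with a generator. The enumerate helper below is hypothetical, a mimic of for-in semantics rather than the ES4 Enumerator class:

```javascript
// Shadow-aware, deep enumeration over the prototype chain, skipping
// ids deleted between snapshot time and visit time.
function* enumerate(obj) {
  const seen = new Set();
  for (let o = obj; o !== null; o = Object.getPrototypeOf(o)) {
    // Snapshot this object's own enumerable keys before yielding any.
    for (const key of Object.keys(o)) {
      if (seen.has(key)) continue;  // shadowed by a nearer object
      seen.add(key);
      if (key in o) yield key;      // skip if deleted meanwhile
    }
  }
}

const proto = { a: 1, b: 2 };
const obj = Object.assign(Object.create(proto), { b: 3, c: 4 });
console.log([...enumerate(obj)]);   // ["b", "c", "a"] -- proto's b shadowed
```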
Re: Crockfords isEmpty
On Apr 17, 2008, at 1:23 PM, Keryx Web wrote: Especially I wanted to see if isEmpty (and function.name) had been discussed (two features I would use a lot that I now have to program around). The wiki is kind of silent on discussion. That's because no one has proposed isEmpty for ES4. The only hit from this site-specific search: http://www.google.com/search?hl=en&q=site%3Aecmascript.org+isEmpty is this: http://wiki.ecmascript.org/doku.php?id=es3.1:targeted_additions_to_array_string_object_date The function object name property has come up: http://www.google.com/search?hl=en&hs=F6X&q=site%3Aecmascript.org+%22name+property%22 See in particular the first hit: http://bugs.ecmascript.org/ticket/303 The suggestions are not even discussed in the context of ES 4. Are there any meeting notes somewhere? Or IRC logs? http://wiki.ecmascript.org/doku.php?id=es3.1:targeted_additions_to_array_string_object_date was created on 2007/04/15 with isEmpty as part of the proposal. Since ES4 is supposed to be a superset of ES3.1 I'm surprised this hasn't been proposed to the ES4 working group. Probably we're supposed to keep track of 3.1 proposals, and I (among others) have failed to do so. On the other hand, everyone has had trouble keeping up with what's current in the wiki. So we probably need both the 3.1 and 4 groups helping each other keep up, and I'm talking to 3.1 folks about this right now. So, the hope is that isEmpty will make it into new editions of the standard. I'm in favor, FWIW. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
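isEmpty never did land in the standard; what users write by hand is along these lines (a minimal sketch -- the exact semantics on the ES3.1 wiki page may differ):

```javascript
// True when the object has no own enumerable properties.
function isEmpty(obj) {
  for (const key in obj) {
    if (Object.prototype.hasOwnProperty.call(obj, key)) return false;
  }
  return true;
}

console.log(isEmpty({}));              // true
console.log(isEmpty({ a: 1 }));        // false
console.log(isEmpty(Object.create({ inherited: 1 }))); // true -- own only
```

(Modern code often uses Object.keys(obj).length === 0 instead; function.name, the other wish in this message, did eventually standardize in ES2015.)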
Re: ES4 stable draft: object initializers
On Apr 17, 2008, at 4:06 PM, Jeff Dyer wrote: The catchall syntax seems heavy handed for the use cases it serves. It introduces new syntax, not just special meaning for ‘meta’ qualified names, and it is otherwise possible to create object values with catchalls by using classes. I propose that we remove the productions that begin with “meta::get”, “meta::set”, “meta::has”, “meta::delete” and “meta::invoke”. These are wanted by Ajax library hackers, jresig and shaver testify. Rather than cut a long-standing proposal because a recent evolution of its *syntax* (not its substance) led to something problematic, why not return to the original syntax: obj = {get *(id) ..., set *(id, value) ...}; If on the other hand, the syntax is heavy either way, but the substance is valuable because the use-cases are compelling enough to serve, then we can stick with meta::get, etc. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
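The catch-all get/set under discussion never shipped in this form, but the use-cases were eventually served by ES2015's Proxy. A sketch of the same idea in modern JavaScript (the fallback message is illustrative):

```javascript
// A catch-all getter and setter, a la the proposed
// obj = {get *(id) ..., set *(id, value) ...} syntax.
const obj = new Proxy({}, {
  get(target, id) {
    return id in target ? target[id]
                        : `no such property: ${String(id)}`;
  },
  set(target, id, value) {
    target[id] = value;  // an Ajax library might intercept/log here
    return true;
  },
});

obj.x = 1;
console.log(obj.x); // 1
console.log(obj.y); // "no such property: y" -- the catch-all fires
```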
Re: Namespaces on definitions
On Apr 16, 2008, at 2:50 AM, Yuh-Ruey Chen wrote: I agree. I don't see why there should be multiple syntaxes that are as concise as each other and both have about equal precedent (AS3 vs. E4X). If in some future spec, properties can inhabit multiple namespaces, then we can consider the |ns1 ns2 ... var foo| syntax again. The syntaxes are not equally concise, not only because :: is heavier visually and in terms of keyboard input (two shifted chars) than spaces. Consider ns var foo, bar; vs. var ns::foo, ns::bar; It's true you can't distribute one type across several variables: var foo:T, bar:T; but that's not a reason to restrict namespace syntax per se. Cases of ns including public, protected, private, and internal may be the most useful ones for this distributive syntax, but are those namespaces? Either way, the ns var foo syntax is more concise. /be___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Strict mode recap
On Apr 11, 2008, at 10:22 AM, Lars Hansen wrote: (It _is_ an indication that the syntax used in the object initializers is not fully general, though, since it only allows simple identifiers in the namespace position. Sigh.) I've argued that JS's literal property identifiers in object initialisers, instead of mandatory quoted strings for literal names and evaluated expressions for runtime naming, is a virtue, pace Python. It certainly reduces the quote burden compared to JSON or Python. It allows readers and compilers to make static judgments about what names are bound in the object created for the initialiser. Anyway, it's an old decision, hard to change now. I'm mailing mainly to ask whether this restriction is something considered harmful in ES4 with namespaces, or for any other reason. I think Jon and I have agreed in the past on namespaces being constant, but argument has evolved since then. My reason for agreeing with Jon then was that readers, never mind compilers, otherwise can have a hard time figuring out the meaning of names. This is always hard with globals, less so with outer names in closures, and no picnic with property initialisers if you add computed namespaces to them. I don't have a stronger reason than favoring comprehension and easing implementation, though. The second is less important than the first, but we consider efficiency too. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Strict mode recap
On Apr 11, 2008, at 12:51 PM, Lars Hansen wrote: There might be a slight misunderstanding here. In my example, the name C.ns is constant, not a general expression; C needs to be a class, and ns needs to be a static namespace definition inside that class (suitably available). Oh, ok. The general expression for namespace qualifier syntax, whatever it will be, is what I was concerned about. If qualifiers in object literals must be identifiers that resolve to namespace definitions, then I'm not concerned about object initialisers being harder to analyze (for people or programs). Although this may be too restrictive, and I should share your concern about the loss of computed namespace qualifier use-cases. In my (repentant) opinion the ns in _any_ ns::id expression must reference a namespace binding that was not introduced into the scope by with (and I'm happy to outlaw all such expressions in the body of a with, if that helps keep it simple). Great. I think you're trying to say something else too but I can't figure out what it is, something about the ns in ns::id being a literal in a stronger sense than what I just outlined? Let me try to be clearer. In ES3, obj = {prop: value} is sugar for

$tmp = new Object   // fixed in ES4 to not evaluate 'new Object'
                    // but instead use memoized Object type
$tmp.prop = value   // evaluate value only, not prop the literal id
obj = $tmp

All I am noting is that obj = {ns::prop: value} might want to involve no further evaluation of arbitrary expressions than the ES3 case. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Dynamic class default (was Re: Class method addition...)
On Apr 6, 2008, at 8:10 PM, Kris Zyp wrote: Since you grant use-cases for sealing objects against mutation, are you simply arguing about what the default should be (that 'dynamic class' should not be required to get an extensible-instance factory, that 'class' should do that)? Well if it is up for debate... Can we have classes be dynamic by default, and non-dynamic if the class is declared to be final? 'final' already means can't be overridden for methods and can't be extended by subclassing for classes in several languages. Adding another meaning, even if it's of the same mood, seems like a bad idea to me. What's the point of your request? If you mean to promote AOP (a sacred cow, per my last message to you, reply-less :-P), you risk degrading overall integrity, or merely imposing a syntax tax as most class users have to say inextensible class (kidding, but it would have some contextual keyword in front -- and not static). The default should match the common case as decided by programmers using classes because they want greater integrity than they get with closures. Even if a class's instances are extensible, it doesn't mean the fixed properties (fixtures) can be AOP'ed. It just means certain objects can be dressed up to resemble others, by some like relation -- for good or ill. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
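What eventually shipped for this extensible-by-default debate is worth sketching: objects stay dynamic unless someone opts out, via what became ES5's Object.preventExtensions (and seal/freeze for stronger locks):

```javascript
// Dynamic by default: any object accepts expando properties.
const open = {};
open.expando = 1;

// The opt-out: an inextensible object rejects new properties.
const sealed = Object.preventExtensions({ fixed: 1 });
const added = Reflect.set(sealed, "expando", 2); // rejected, no throw
console.log(added);                // false
console.log("expando" in sealed);  // false
console.log(sealed.fixed);         // 1 -- existing properties untouched
```

So the default Brendan defends (integrity on request, not by surprise) is roughly what the language settled on, though at the object level rather than via a class attribute.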
Re: ES4 draft: Object initializers
var (outside of eval, an ES1 flaw) means DontDelete. /be On Apr 7, 2008, at 1:50 PM, Mark S. Miller wrote: On Mon, Apr 7, 2008 at 10:21 AM, Lars Hansen [EMAIL PROTECTED] wrote: IMO it ought to be possible to use 'var' in those same ways but we didn't discuss that much (if at all). I don't understand. What would it mean? -- Cheers, --MarkM ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
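Brendan's point, including the "(outside of eval, an ES1 flaw)" caveat, is observable today. DontDelete corresponds to the modern configurable: false attribute, and var bindings created by eval are the deletable exception. A sketch using modern equivalents:

```javascript
// DontDelete, modern style: a non-configurable property resists delete.
const scope = {};
Object.defineProperty(scope, "viaVar", { value: 1, configurable: false });
console.log(Reflect.deleteProperty(scope, "viaVar")); // false

// The eval exception: var bindings introduced by (indirect, global)
// eval are configurable, hence deletable -- the ES1 flaw mentioned.
(0, eval)("var viaEval = 1");
console.log(Reflect.deleteProperty(globalThis, "viaEval")); // true
```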
Re: Class method addition and replacement (was Re: AOP Compatibility)
On Apr 5, 2008, at 12:42 AM, Nathan de Vries wrote: On 04/04/2008, at 1:35 PM, Brendan Eich wrote: EIBTI! Perhaps not the best reference given the topic, given that Python makes class overriding and resurrection of overridden classes possible with their __builtin__ module. Good point. Slogans like EIBTI are too binary to handle the fullness of Python, never mind JS. Let's stick to ES3 and ES4 here. Since no one is proposing to do away with dynamic typing or mutable objects in ES4... I'm actually quite surprised to hear that there are so many on this list who are happy to drop EcmaScript's usual dynamicism in favour of so called integrity. ... drop here is false. We're adding missing tools to make immutable properties and objects that can't be extended, that's all. Monkey patching is prevalent on the web, and I believe that the practice should be supported, not feared. How about both? ;-) I picked mutable by default in JS1 intentionally, because it allowed content authors to monkey-patch or wholesale-mutate the built-in objects, to work around bugs or simply suit their own needs. This was assuming a same-origin single trust domain. Two things have happened since 1995 (really, both started happening right away in 1996, but few noticed): 1. JS code has scaled up to programming-in-the-large domains, where even without hostile code or mutual suspicion, producers and consumers of library code want greater integrity properties than they can enforce with closures for private variables (which are still mutable objects). 2. JS from different trust domains is being mixed, most obviously via script injection (for advertising, among other things), to overcome same-origin restrictions. Fixing this takes more than integrity in my view (confidentiality via secure information flow is something we're researching at Mozilla, partnering with others). But integrity is foundational, non-optional. 
Sure, developers will be able to explicitly mark areas in their code which they deem appropriate for another developer to change, but that strikes me as a bit of a fantasy land. The fantasy here would be that JS has been kept down on the same-origin and small-scale storybook farm where it was born. It's in the big city now. ;-) The majority of code which requires patching by external developers was never written to be patched, but people do it anyway. This is good, don't you agree? See above. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
Re: Class method addition and replacement (was Re: AOP Compatibility)
On Apr 3, 2008, at 8:23 AM, Kris Zyp wrote: the moment, but I assume you can't replace a method on a user class with another ad-hoc function. Absolutely not with fixtures, I was thinking about this, is there any reason why you can't replace a class's method with another method or install a method on an instance object that overrides the class's method, assuming that the method signature remains the same, the body has correct typing use of |this|, and the class is non-final? This seems to have the same integrity as method overriding in subclasses. Being able to do this (and possibly dynamically adding methods to classes) would bring the level of dynamicism that Mark had suggested with his ES4 sugar proposal (being able to create classes on the fly) Mark's sketch did not allow method replacement, however. AOP is not the root password to mutation barriers added to enforce integrity properties. It is not even formally sound, last I looked. And its main use-case is logging or other such post-hoc, cross-cutting instrumentation. If the universe of objects already contains some (like certain built-in objects including DOM nodes in most browsers) whose methods cannot be replaced, which must therefore be wrapped for AOP-style hacking, then why wouldn't we want classes to behave as proposed in ES4? Wrappers will be required; they already are for security in browsers I've studied (or they are coming soon, at any rate). Any code not insisting on a nominal type relation, i.e., using * (implicitly as in untyped code today, or explicitly), or a like test, or a structural subtype test, could let wrappers through. Just as DOM wrappers can satisfy hand-coded shape tests in today's untyped libraries that use AOP. /be ___ Es4-discuss mailing list Es4-discuss@mozilla.org https://mail.mozilla.org/listinfo/es4-discuss
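The wrapper approach Brendan describes -- AOP-style logging without mutating the wrapped object's methods, which fixtures would forbid -- can be sketched with a modern Proxy (anachronistic for 2008, and the helper name is hypothetical):

```javascript
// Cross-cutting logging via a wrapper, leaving the target untouched.
function loggingWrapper(target, log) {
  return new Proxy(target, {
    get(t, name) {
      const v = t[name];
      if (typeof v !== "function") return v;
      return function (...args) {
        log(`${String(name)}(${args.join(", ")})`);   // post-hoc instrumentation
        return v.apply(t, args);
      };
    },
  });
}

const node = { greet: (who) => `hi ${who}` };  // stands in for a sealed object
const calls = [];
const wrapped = loggingWrapper(node, (m) => calls.push(m));

const greeting = wrapped.greet("es4");
console.log(greeting);  // "hi es4"
console.log(calls);     // ["greet(es4)"]
```

Duck-typed code that only checks the object's shape, as Brendan notes, lets such a wrapper through where a nominal-type test would not.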