Re: ECMAScript Harmony

2008-08-13 Thread Maciej Stachowiak

On Aug 13, 2008, at 2:30 PM, Brendan Eich wrote:

 In light of Harmony, and the recurrent over- and under-cross-posting,
 I'd like to merge the [EMAIL PROTECTED] and es4-
 [EMAIL PROTECTED] lists into [EMAIL PROTECTED]. The old
 archives will remain available via the web, and the old aliases will
 forward to the new list. Any objections?

I support this move.

  - Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: How to escape implicit 'with (this)' of a method body

2008-08-01 Thread Maciej Stachowiak

On Jul 31, 2008, at 5:24 AM, Dave Herman wrote:

 We should take this problem seriously. ...

 Dynamic scope is very bad.

 Specifically:

 - Classes are supposed to provide integrity, but dynamic scope makes  
 the
 internals of code brittle; any variable reference inside the
 implementation could be subverted by the seemingly innocuous insertion
 of a property.

 - Dynamic dispatch has a reasonably understandable cost model, but  
 only
 if it's confined to explicit property references. With dynamic scope,
 any variable reference could potentially be very expensive.

 - Generally, code within a `with' block is brittle and hard to
 understand, and as Tucker says, the implicit `this.' means that all  
 code
 inside class methods is within a `with' block... this means that all
 code inside class methods is brittle!

 - In the past, this has been enough for many programmers to deprecate
 all use of `with' -- we should certainly hope to avoid the same
 happening for classes.

I'm not sure of the overall benefit of an implicit 'this' for class
methods, but isn't it plausible to apply it only to statically declared
properties and not dynamically inserted ones, so that all references
continue to be bound at compile time and this sort of brittleness does
not come up?
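
For illustration, here is a rough sketch of the brittleness in question
and of the static-binding alternative, using hypothetical ES4-style
class syntax (the names are illustrative only):

class Logger {
    var prefix = "log: ";            // statically declared ('fixed') property
    function write(message) {
        // With an implicit 'with (this)' around the body, the reference to
        // 'print' below could be silently captured if someone later added a
        // 'print' property to the instance. Binding only statically declared
        // properties such as 'prefix' keeps every reference resolvable at
        // compile time.
        print(prefix + message);
    }
}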

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Object static methods rationale document

2008-07-16 Thread Maciej Stachowiak

On Jul 16, 2008, at 2:36 PM, Allen Wirfs-Brock wrote:

 Just wait, reify may yet end up as the last name standing...

Methods don't reify things; the language definition does. Property
descriptors are reified in ES3.1 whether or not you ever call the
method.

I think getPropertyDescriptor is the best name suggested so far: it
has no chance of being confused with a method that would get the
property value, and it does not use obscure CS jargon in an incorrect
way. I don't think brevity is critical for these
metaprogramming/reflection methods - they are not the kind of thing
that will be commonly used by most programmers. Mostly they will be
used by frameworks such as Ajax libraries or secure language subsets.
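
For concreteness, a hypothetical usage sketch under the name discussed
here (the exact shape of the API is still being settled in the drafts):

var point = { x: 1 };
var desc = Object.getPropertyDescriptor(point, "x");
// desc would describe the property's attributes (for example whether it
// is enumerable or writable) rather than return its value, which is why
// a name that cannot be mistaken for a plain getter matters.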

Regards,
Maciej



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 ] On Behalf Of Brendan Eich
 Sent: Wednesday, July 16, 2008 2:27 PM
 To: David Flanagan
 Cc: es4-discuss@mozilla.org es4-discuss
 Subject: Re: ES3.1 Object static methods rationale document

 On Jul 16, 2008, at 1:41 PM, David Flanagan wrote:

 Brendan, I think you were correct when you originally wrote:

 lookup : define :: get : put.

 I think that lookupProperty is much nicer than describeProperty,  
 since
 lookup captures the getter nature of the method in a way that
 describe does not.


 Connotations are many, ambiguity without a noun phrase (not just
 overloaded old property) saying what's being got or described
 or looked up is inevitable. This means the stolid, safe name
 getPropertyDescriptor is least likely to confuse.

 I see what you mean about describe in the context of setting a
 description (depict in a graphics context is problematic too) --
 thanks. Thesaurus doesn't include mental concept filtering, dammit.
 I'm sure we'll get this right, but I'm also pretty sure getProperty
 isn't the droid we are seeking.

 /be
 ___
 Es4-discuss mailing list
 Es4-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es4-discuss

 ___
 Es4-discuss mailing list
 Es4-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es4-discuss

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Object static methods rationale document

2008-07-16 Thread Maciej Stachowiak

On Jul 16, 2008, at 4:10 PM, Allen Wirfs-Brock wrote:

 The most common use case seems to be the one where the target object  
 is a newly instantiated object without any properties of its own.  
 That use case (at least for variants of extend that only take a  
 single source object) is most directly supported by the Object.clone  
 function in our proposal. However, Object.clone is defined to be a  
 more comprehensive object duplication process than is performed by  
 extend.  It duplicates all own properties and their attributes and  
 any internal properties such as its [[Value]] property if it has one.

1) It seems like Object.clone as you have described it is not suitable
for the mixin-type use case where an object gets properties/methods
from two others. Or at least, it only does half the job (see the sketch
below).

2) Is Object.clone expected to work on host objects (in particular DOM- 
related objects)? I think thorough cloning of all state is not a  
practical semantic in that case, and would be a very large burden on  
implementations. In the case of some classes (Window or Location for  
instance) allowing true cloning might even be a security risk. And if  
it does not support host objects then copying internal state like the  
[[Value]] or [[Class]] property for ES standard object types will be  
more confusing than helpful.
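
For reference, a plain ES3 sketch of the extend-style mixin pattern
referred to in point 1; the helper name and the behaviorA/behaviorB
source objects are illustrative, not part of any proposal:

function extend(target, source) {
    // Copy the source's own enumerable properties onto the target.
    for (var key in source) {
        if (source.hasOwnProperty(key))
            target[key] = source[key];
    }
    return target;
}

// Mixing behavior from two source objects into one new object, which a
// clone-of-a-single-source API only half covers:
var mixed = extend(extend({}, behaviorA), behaviorB);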

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Two interoperable implementations rule

2008-07-14 Thread Maciej Stachowiak

On Jul 14, 2008, at 8:12 AM, Sam Ruby wrote:

 Maciej Stachowiak wrote:
 The WebKit project will accept patches for any feature of 3.1 that   
 has been reconciled with 4, and we will likely devote Apple  
 resources to implementing such features as well, so SquirrelFish  
 will likely be a candidate for one of the interoperable  
 implementations. Mozilla also has an extensive test suite for  
 ECMAScript 3rd edition, which could be a good starting point for an  
 ES3.1 test suite.

 Not being familiar with the webkit code base or process for  
 accepting patches, can you point me to where I can find out more?

Sure!

Here's basic instructions on how to check out, build and debug the  
code (applicable to Windows and Mac, the Gtk and Qt ports have their  
build processes documented elsewhere):
http://webkit.org/building/checkout.html
http://webkit.org/building/build.html
http://webkit.org/building/run.html

Here's an overview of the process for contributing:
http://webkit.org/coding/contributing.html

And here is contact info:
http://webkit.org/contact.html

These links and a lot more info are all on the front page of http://webkit.org/

Regards,
Maciej
___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Two interoperable implementations rule

2008-07-11 Thread Maciej Stachowiak


On Jul 10, 2008, at 6:29 AM, [EMAIL PROTECTED] wrote:

In a message dated 7/10/2008 3:03:12 A.M. Eastern Daylight Time, [EMAIL PROTECTED] 
 writes:

I do not believe that ECMA has the two interoperable implementations
rule that the IETF and W3C have, but since ECMAScript is a standard of
equal importance to the Web, I think we should adopt this rule for any
future edition of ECMAScript. Such a rule is needed precisely to avoid
such casual breakage relative to Web reality. Can we make that a
binding TC39 resolution?
While it is true that no such rule exists in Ecma, it has been used  
in work I am familiar with (optical storage) within TC 31.  Early  
work on MO storage resulted in TC 31 agreeing that at least two  
implementations must demonstrate interoperability before approval of  
the standard.  This meant that both disk manufacturers and drive  
manufacturers had to work together to demonstrate that the product  
resulting from the standard would work together.  The committee  
always followed this rule without question, and the CC and GA of  
Ecma did not interfere with its implementation.


We can add this subject to discussion at Oslo, but this is a  
question that I would put to an internal vote of TC 31 since it has  
wider impact than may be represented in Oslo.


Since there is precedent within ECMA, I definitely think we should
take a formal vote on adopting this rule for TC39, in particular that
we must have two interoperable implementations of any of our specs
before they progress outside our committee.


There are also some details to be worked out:

1) Is the two-interoperable-implementations requirement at feature
granularity, or whole-spec granularity? In particular, is it OK to cite
two implementations for one feature, but two different implementations
for another?


2) How is interoperability to be demonstrated? Do we accept good-faith  
claims of support, or do we need a test suite?


Given the nature of programming languages and the high stakes of Web  
standards, I would personally prefer whole-spec granularity (different  
implementations having different mixes of features does not prove real  
interoperability), and a test suite rather than just bare claims of  
support.


To be clear, I propose this rule not to block ES3.1, but to make it  
successful. The WebKit project will accept patches for any feature of  
3.1 that  has been reconciled with 4, and we will likely devote Apple  
resources to implementing such features as well, so SquirrelFish will  
likely be a candidate for one of the interoperable implementations.  
Mozilla also has an extensive test suite for ECMAScript 3rd edition,  
which could be a good starting point for an ES3.1 test suite.


I also note that the strong version of the interoperable  
implementations rule will be an even higher hurdle for ES4.


Any comments?

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Two interoperable implementations rule

2008-07-11 Thread Maciej Stachowiak

On Jul 11, 2008, at 3:49 PM, Jeff Dyer wrote:


 On 7/11/08 3:01 PM, Maciej Stachowiak wrote:

 2) How is interoperability to be demonstrated? Do we accept good- 
 faith
 claims of support, or do we need a test suite?

 I'd say that good faith is good enough. It's easy enough for us to  
 check
 each other's work. And the blogosphere will not be kind to liars.

I'm less concerned about cheating than about honest mistakes, which  
may nonetheless affect interoperability, Web compatibility, or  
practical implementability of the spec.

For the WebKit project, we always make our best effort to correctly  
implement Web standards, and even make our own test cases as we go.  
However, once an independently developed test suite appears it always  
finds mistakes in our implementation. I think we are not unusual in  
this regard.

 One more detail:

 3) What constitutes a qualifying implementation? Does Rhino, EJScript
 (mbedthis.com), or Flash Player qualify? Or must it be one of the four
 leading browsers?

That is a good point to raise. I think limiting to browser-hosted  
implementations might be too extreme. On the other hand, if a spec  
qualifies based solely on non-browser-hosted implementations, then we  
have not done much to verify that the standard is compatible with the  
real-world Web. I think a reasonable middle ground would be to require  
at least one of the implementations to be browser-hosted. For these  
purposes, I would count an implementation that works as a browser  
extension replacing the scripting engine so long as it actually gets  
wide testing, so for example ScreamingMonkey could qualify.

It should also be required that the two implementations are  
independent (so a single vendor presenting two implementations would  
not qualify). This may be tricky to define, since many possible  
candidate implementations are open source and developed  
collaboratively by community contributors and overlapping sets of  
vendors. For example, would Rhino and SpiderMonkey count as  
sufficiently independent implementations?


 Given the nature of programming languages and the high stakes of Web
 standards, I would personally prefer whole-spec granularity  
 (different
 implementations having different mixes of features does not prove  
 real
 interoperability), and a test suite rather than just bare claims of
 support.

 Again, it will be hard to get away with cheating. But, on the other  
 hand an
 unofficial test suite (such as Spidermonkey's) would make it easier  
 for
 implementors to be honest.

Again, I am less worried about cheating than mistakes. If we had a  
semi-official test suite, it would not be normative, only the spec is  
normative. It would only be a tool for verifying interoperability has  
been achieved to a reasonable degree. The concern is less about  
deliberate deception than about having at least the minimal evidence  
needed to make a fact-based claim.

Regards,
Maciej



___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Newly revised Section 10 for ES3.1.

2008-07-10 Thread Maciej Stachowiak

On Jul 9, 2008, at 10:01 PM, Allen Wirfs-Brock wrote:

 I completely agree, chapter 16 needs to carry forward.  We don't  
 want to forbid implementations from experimenting with future  
 extensions.

 When there has been broad agreement across major implementations on an  
 extension (including the full semantics), I think it makes sense to  
 standardize that consensus. If there isn't such agreement, I'm not  
 so sure it makes sense to only standardize the compatible  
 intersection of major implementations as that may not be a useful  
 subset of functionality.

Sure, but your proposal is actively incompatible with all the existing  
implementations, because real web content does in fact do things like  
this:

if (someConditionThatIsAlwaysTrue) {
    function someFuncIThoughtWasConditionallyDeclared() {
        // some code
    }
}

and then the content counts on the function declaration being hoisted.  
Your proposal breaks that. It would also in some cases cause parse  
errors on content that would currently parse in all the browsers,  
meaning that even a mistake in a function that never gets called would  
start causing the whole script to fail.

I do not know if it is possible to make a proposal that both is useful  
and doesn't break the Web, within the premises of ES3.1. But it seems  
to me that a proposal that is almost sure to break the Web would be  
unacceptable under the ES3.1 assumptions, and locking it into a spec  
without first doing some form of widespread testing seems like a  
totally broken process.

I do not believe that ECMA has the two interoperable implementations  
rule that the IETF and W3C have, but since ECMAScript is a standard of  
equal importance to the Web, I think we should adopt this rule for any  
future edition of ECMAScript. Such a rule is needed precisely to avoid  
such casual breakage relative to Web reality. Can we make that a  
binding TC39 resolution?

Regards,
Maciej




 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 ] On Behalf Of Brendan Eich
 Sent: Wednesday, July 09, 2008 7:00 PM
 To: Mark S. Miller
 Cc: [EMAIL PROTECTED]; es4-discuss@mozilla.org; Herman Venter
 Subject: Re: Newly revised Section 10 for ES3.1.

 On Jul 9, 2008, at 6:54 PM, Mark S. Miller wrote:

 On Wed, Jul 9, 2008 at 6:47 PM, Mike Shaver [EMAIL PROTECTED]
 wrote:
 2008/7/9 Maciej Stachowiak [EMAIL PROTECTED]:
 Although the standard does not allow block-level function
 declarations

 I'd understood that, while ES3 didn't specify such declarations, it
 was not a violation of the standard to have them.  I agree with your
 assessment of the compatibility impact, certainly.

 I believe the prohibition is in the ES3 syntax definition.

 ES3 chapter 16:

 An implementation shall report all errors as specified, except for
 the following:
 * An implementation may extend program and regular expression syntax.
 To permit this, all operations (such as
 calling eval, using a regular expression literal, or using the
 Function or RegExp constructor) that are allowed
 to throw SyntaxError are permitted to exhibit implementation-defined
 behaviour instead of throwing SyntaxError
 when they encounter an implementation-defined extension to the
 program or regular expression syntax.

 As Maciej notes, all four browsers extend syntax to support functions
 in sub-statement contexts. There's no prohibition given the chapter
 16 language allowing such extensions. Is ES3.1 specifying
 reality (intersection semantics), or something not in the
 intersection or union of existing browsers' syntax and semantics,
 that is restrictive and therefore not compatible without a similar
 allowance for extensions?

 Chapter 16 is important to carry forward in any 3.1 or 4 successor
 edition.

 /be
 ___
 Es4-discuss mailing list
 Es4-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es4-discuss

 ___
 Es3.x-discuss mailing list
 [EMAIL PROTECTED]
 https://mail.mozilla.org/listinfo/es3.x-discuss

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Update on ES3.1 block scoped function declarations

2008-07-10 Thread Maciej Stachowiak


On Jul 10, 2008, at 3:28 PM, Mark S. Miller wrote:

On Thu, Jul 10, 2008 at 2:51 PM, Allen Wirfs-Brock [EMAIL PROTECTED] 
 wrote:
I see, yes there is a potential eval tax.  If I thought this was  
really a concern (and as you say, we already have the issue for  
catch and such) I'd be more inclined to fiddling with the scoping  
rule of eval rather than discarding lexically scoped consts.  BTW, I  
think many of the use cases for such const are more in support of  
code generators than actual end-user programming.


Could you explain the eval tax issue you guys are concerned about?  
I don't get it. Thanks.


Because eval exposes lexical bindings by name in a way that is not  
statically detectable, it defeats implementation techniques like  
renaming or symbol versioning for block scope, and forces the use of  
actual environment objects for blocks if they contain a call to eval.  
For example:


function g(x)
{
    if (x) {
        const C = 0;
        return eval(s1);
    } else {
        const C = 1;
        return eval(s2);
    }
}

Assuming s1 and s2 reference C, you at minimum need a separate runtime  
symbol table for each block to allow the eval lookup to succeed. When  
combined with closures and mutable block-scoped variables this can  
force the creation of a full activation object per block that  
introduces bindings.
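
For concreteness, hypothetical values for the free variables s1 and s2
used above; both strings mention C, so each eval call has to resolve C
against its own block's bindings:

var s1 = "C + 10";   // inside the first block this evaluates to 10
var s2 = "C + 10";   // inside the second block this evaluates to 11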


And to avoid creating an activation object for every block (a huge  
performance cost), an implementation would have to detect which blocks  
introduce bindings, which contain calls to eval, and which contain  
closures. Reasonable implementations do this anyway but I think it is  
an unfortunate cost, even in ES4. In ES4, however, the benefit is  
arguably greater.


Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Newly revised Section 10 for ES3.1.

2008-07-10 Thread Maciej Stachowiak

(Adding lists back to Cc, which I assume you meant to do)

On Jul 10, 2008, at 5:06 PM, Garrett Smith wrote:

 Authors who assume that the function was conditionally declared in IE
 and Opera (and who knows what else) would be making a false assumption.

That's true, but what I have seen in practice is that code can end up  
depending on its false assumption being violated, or working by luck.  
For example, you see code like:

if (!isIE) {
 function somethingEssential() { }
}

where somethingEssential is then unconditionally called.



 If subsequent code relied on that false assumption, and produced
 intended results, it would be only by coincidence:

 if (!isIE && !isOpera) {
  function a(){ return 0; }
 }
 else {
  function a(){ return 1; } // IE gets here (coincidence unrelated to else)
 }

 That would be very lucky programming, because if reversed, it would
 not produce the same results:

 if (isIE && isOpera) {
  function a(){ return 1; }
 }
 else {
  function a(){ return 0; } // IE gets here (coincidence unrelated to else)
 }

 FunctionDeclarations  are processed during variable instantiation,
 before code execution. IE and Opera (and probably other browsers)
 would always take the last one because in these environments, it would
 have the effect of:-

 function each(){ return 1; }
 function each(){ return 0; }
 if (isIE && isOpera) {
 }
 else {
 }

 I do not know if it is possible to make a proposal that both is  
 useful
 and doesn't break the Web, within the premises of ES3.1. But it seems
 to me that a proposal that is almost sure to break the Web would be
 unacceptable under the ES3.1 assumptions, and locking it into a spec
 without first doing some form of widespread testing seems like a
 totally broken process.


 Is it possible to know how much breakage would occur?

I don't know, but I am pretty sure it would be a non-zero amount. I  
think it is up to proponents of this proposal to somehow demonstrate  
that the level of breakage would be low enough; the presumption should  
be in favor of compatibility.

 Web scripts that attempt to work across IE and Mozilla that use a
 FunctionDeclaration in a Block either exhibit a bug or get lucky. If
 FunctionStatement is used, it is not likely to be used on public
 websites.

I know it has been used on public web sites.



 A major US defense company I once dealt with had an app that was
 Mozilla-only (due to security concerns in IE). It is possible that
 they successfully used a FunctionStatement. It seems that making
 FunctionStatement a syntax error could cause a problem in a
 Mozilla-only application, whereas making FunctionStatement a standard
 would not cause problems (unless the problem was that the original
 code was so badly confused):-

The ES3.1 proposal (now withdrawn, I believe) would make your example  
below throw a runtime error, since the declaration of function a would  
have only block scope and so would be unavailable after the if  
statement completes.



 if(IE || Opera) {
   function a(){ throw Error(); } // Error, but never reached.
 } else {
   function a(){} // No Error
 }

 a();

 Implementation of the FunctionStatement would cause the Error to be
 thrown when IE or Opera were true. How likely is such code to exist?
 Such code is arguably already broken and would seem to perform against
 the author's intention in IE.

 That sounds to me to be:
 1) Somewhat useful
 2) Probably wouldn't cause much breakage (test).

I'm not sure what specifically you propose, but it would make me happy  
if block-level function declarations could be standardized in a way  
that tries to be compatible with existing practice. I realize this is  
a challenge.

Regards,
Maciej


___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Newly revised Section 10 for ES3.1.

2008-07-09 Thread Maciej Stachowiak


On Jul 9, 2008, at 5:16 PM, Allen Wirfs-Brock wrote:



Const and function declarations within blocks must be uniquely named;
such a declaration may not overwrite a preceding declaration in the
same block, and an attempt to do so is a syntax error. Such
declarations, of course, shadow any like-named declarations in
enclosing scopes. Since consts and function declarations in blocks are
new, this is new semantics.


Although the standard does not allow block-level function  
declarations, the following will parse and give identical results in  
all four of the major browsers (it will alert 2):


<script>
function g() {
    if (true) {
        function f() { alert(1); }
        function f() { alert(2); }
    }
    f();
}
g();
</script>

This example will interoperably alert 1:

<script>
function g() {
    if (true) {
        function f() { alert(1); }
    }
    f();
}
g();
</script>

As I understand it, your proposal would make the first example a  
syntax error and the second a runtime error (unless a global function  
named f is defined).


I know from experience that sites do accidentally depend on the  
intersection of the different IE and Firefox extensions for block- 
level function declarations (and the Safari and Opera attempts to  
emulate them). Do you have evidence that leads you to conclude that  
your proposed behavior is compatible with the Web? I am almost certain  
it is not.



Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: strict mode becomes use subset cautious

2008-06-26 Thread Maciej Stachowiak


On Jun 26, 2008, at 1:34 PM, Allen Wirfs-Brock wrote:

At today’s ES 3.1 conference call (see http://wiki.ecmascript.org/doku.php?id=meetings:minutes_jun_24_2008) 
 we agreed to adopt the essence of the proposal below and to use the  
subset name “cautious” to refer to the set of restrictions
formerly known as “strict mode”.


Will ES4 also be making this change? If not, we need to add it to the  
list of subset rule violations. Is anyone keeping such a list? Does  
the ES3.1 committee intend to address all such issues by the July  
timeframe that ES3.1 is apparently targeting?


Regards,
Maciej



_
From: Allen Wirfs-Brock
Sent: Wednesday, June 25, 2008 12:38 PM
To: Pratap Lakshman (VJ#SDK); Adam Peller; Sam Ruby; Mark S. Miller
Cc: [EMAIL PROTECTED] x-discuss; es4-discuss@mozilla.org es4- 
discuss
Subject: RE: Brief Minutes [RE: ES3.1 WG phone conference 24 June  
08:00 PT]



Following up on the “Strict Mode” discussion…

As I advocated on the call, I think that by selecting “strict mode”  
a developer is really choosing to restrict themselves to using a  
subset of the complete language.  The future-proofing issues  of  
this relate to the possibility that there might be multiple such  
subsets that a developer might need to choose among.  Should there  
be multiple named “strict modes” to choose among, how should they be  
named, can “strictness” of a mode increase in future versions, etc?


I think some of the controversy could be eliminated if we simply  
eliminate the term “Strict Mode”.  Instead I propose we have a “Use  
Subset” directive  and that we name specific subsets in a meaningful  
and generally positive manner.  For example,  since the motivation  
of most of the proposed restrictions in ES3.1 strict mode is to  
provide a safer subset language I suggest that we call this subset  
“safety”  (or perhaps “safety1” or “safetyA”  or “safety2008”  
implying that in the future different safe subsets might be defined  
and we don’t want to give undo importance to this initial one).


So, the first line of a “strict mode” compilation unit would now  
look like:

“use subset safety”

I would suggest that we actually define “use subset” such that a  
comma separated list of subsets is allowed.  So, if somebody decided  
to define a subset that only contained the features of ES3 you might  
see something like this:

“use subset safety,ES3”

Since subsets are sets of restrictions, listing multiple subsets  
means to take the union of the restrictions imposed by all of the  
listed subsets.  So “use subset safety,ES3” means that this  
compilation unit may only use those features defined by ECMA 262-3  
and which are not excluded by the “safety” subset.  So, assuming  
that “safety” excludes use of the with statement, such a compilation  
unit could not include use of the with statement nor could it  
include any use of a feature that is new in ES3.1 or ES4.


Future versions of ECMAScript may add exclusions to a subset defined  
by an earlier version as long as the added exclusions only restrict  
capabilities that didn’t exist in the earlier version. For example,  
ES4, in supporting the ES3.1 “safety” subset, could add to it any
features that are added in ES4 but which are considered to be
unsafe.


A future version may not add exclusions to a pre-existing subset
that restrict features that existed when the original subset was
defined.  For example, if ES3.14 or ES5 decided that the for-in
statement was unsafe, it could not add that restriction to the
“safety” subset.  However, it could define a new subset named
perhaps “safety2010” that includes all the restrictions of the
“safety” subset and in addition disallows use of the “for-in” statement.


If a compilation unit specifies a subset that is not known to the  
implementation that is processing it, that subset restriction is  
simply ignored. The code in the unit is still either valid or
invalid on its own merits, just as is the case when no subset had
been specified.




_
From: Pratap Lakshman (VJ#SDK)
Sent: Tuesday, June 24, 2008 11:43 AM
To: Adam Peller; Sam Ruby; Mark S. Miller; Allen Wirfs-Brock; Pratap  
Lakshman (VJ#SDK)
Subject: Brief Minutes [RE: ES3.1 WG phone conference 24 June 08:00  
PT]



Here are brief minutes from our call.
Please take a look, and let me know if you want any changes by your  
EOD.
I’ll upload it to the wiki and send a copy to Patrick Charollais  
(ECMA) for posting on the ECMA site tomorrow night (Redmond time).


Attendees
Adam Peller (IBM)
Sam Ruby (IBM)
Mark Miller (Google)
Allen Wirfs-Brock (Microsoft)
Pratap Lakshman (Microsoft)

Agenda
On posting the latest draft to the wiki
Getters/Setters
Decimal
Setting up a review based on Lars' feedback on the 11 June draft

Minutes
Would like to add a couple more items to agenda that we can get to  
if we have the time (1) inconsistence 

Re: ES 3.1 implementations?

2008-06-26 Thread Maciej Stachowiak

On Jun 25, 2008, at 8:53 PM, Allen Wirfs-Brock wrote:

 It would be great if somebody wanted to work on a proof of concept
 ES 3.1 implementation in an open code base such as WebKit or
 Rhino.

 If anybody is interested in volunteering, send a note to [EMAIL PROTECTED]

We would gladly accept SquirrelFish patches for any part of ES3.1 that  
is not in conflict with the corresponding part of ES4.

Regards,
Maciej



 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
 ] On Behalf Of Robert Sayre
 Sent: Wednesday, June 25, 2008 5:35 PM
 To: es4-discuss; [EMAIL PROTECTED]
 Subject: ES 3.1 implementations?

 I am putting together feedback on the JSON features proposed for ES
 3.1, and I was wondering if there any ES 3.1 implementations
 available.

 Given the limited scope of the spec, I would expect to see at least
 one implementation completed soon if there isn't one. Maybe in Rhino
 or something?

 --

 Robert Sayre

 I would have written a shorter letter, but I did not have the time.
 ___
 Es3.x-discuss mailing list
 [EMAIL PROTECTED]
 https://mail.mozilla.org/listinfo/es3.x-discuss

 ___
 Es3.x-discuss mailing list
 [EMAIL PROTECTED]
 https://mail.mozilla.org/listinfo/es3.x-discuss

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: More string indexing semantics issues

2008-06-25 Thread Maciej Stachowiak

On Jun 25, 2008, at 4:00 PM, Garrett Smith wrote:

 In spec terms, WebKit's behavior can be explained in
 terms of strings having additional DontDelete ReadOnly properties.

 Let me get this straight:

 Webkit's behavior can be explained in terms of String objects having
 additional properties with numeric names and the attributes
 {DontDelete ReadOnly}

 Is that what you meant?

Yes.

 The
 Mozilla behavior can be explained as strings having those same  
 additional
 properties, but they are not ReadOnly. In both cases, index  
 properties past
 the last character do not exist ahead of time.


 My observations indicate otherwise. Webkit does not appear to create
 additional properties to String objects.

 javascript:alert(Object("foo").hasOwnProperty(0));


 FF2 - true
 Sf3 - false
 Op9 - false

 Where does the 0 property exist, Maciej? Is this bug related to
 hasOwnProperty?

I just tried this in Safari 3.1 and Safari alerted true. The same  
happens in WebKit trunk. If it alerted false I would say that is a bug.

 It appears to me that Mozilla and Opera and Webkit all implement a
 specialized [[Get]], where Opera and Mozilla do:

 1) Look for property P on String object.
 2) Look for String instance charAt( P )
 3) Look in String prototype.

 Webkit does:-
 1) Look for String instance charAt( P )
 2) Call the [[Get]] method on S with argument P.

You could model it in many ways. I have not looked at Mozilla's or  
Opera's actual implementations. What I am saying is that Safari/WebKit  
tries to publicly present the logical model that these are ReadOnly  
DontDelete properties. How it's actually implemented isn't really  
relevant. In WebKit's implementation we implement all sorts of JS  
properties internally in ways other than putting them in the generic  
object property map.

It is true that in spec-speak you could define it as special [[Get]]  
and [[Put]] behavior (and other operations like [[Delete]] and  
[[HasOwnProperty]]) instead of special properties.
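
A small sketch of that logical model as described above (assumed
behavior, not a spec citation):

var s = new String("abc");
s[0] = "x";                  // silently ignored: index properties act ReadOnly
delete s[0];                 // fails: they act DontDelete
alert(s[0]);                 // "a"
alert(s.hasOwnProperty(0));  // true, as reported earlier in this thread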

 javascript:var f = Object(1234567);void(String.prototype[0] = 0);
 void(f[0] = 8); alert(f[0]);

 8 in Opera9 and FF2.
 0 in Saf3.

 In Opera, the object doesn't have numeric properties, and only appears
 to have special [[Get]]:-
 javascript:alert(0 in Object("foo"));
 javascript:alert(Object("foo")[0]);

 Op9 - false and f
 FF2 - true and f

 Mozilla has the properties on the object and Opera doesn't.

 (this explains why - Object("foo").hasOwnProperty(0) - is false in
 Opera.)

 The reason for the way WebKit does things, for what it's worth, is  
 because
 index properties of the string are checked first before normal  
 properties
 (because they can't be overwritten), so "abc"[1] can be as fast as  
 an array
 access instead of going at the speed of normal property access.


 So the [[Put]] method on a String instance is different in Webkit.

What I am talking about above is the equivalent of the spec's [[Get]],  
not [[Put]]. The specialization I describe is for performance, and  
behaviorally transparent. However, our JavaScript implementation  
doesn't have things that correspond exactly to the spec's [[Get]] and  
[[Put]] formalisms.

Regards,
Maciej



___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: More string indexing semantics issues

2008-06-25 Thread Maciej Stachowiak

On Jun 25, 2008, at 2:33 PM, Garrett Smith wrote:

 On Wed, Jun 25, 2008 at 1:52 PM, Maciej Stachowiak [EMAIL PROTECTED]  
 wrote:

 I have not seen any reports of such problems. If it were common to  
 put
 random numeric properties on String objects, I expect we would have  
 had a
 bug report by now.


 Why?


What I meant is this:

1) When Safari/WebKit/JavaScriptCore diverges from other browsers in  
JavaScript behavior, in ways that Web content depends on, we have  
historically gotten bug reports even when the issue is very obscure.  
See my earlier comments about things like function declarations in  
statement position for examples.

2) We have not gotten bug reports that involve a site breaking because  
it set a low numeric property on a String object, and did not get the  
expected value back. At least, none have been found to have this as  
the cause. In other words, we have not seen cases like this:

var s = new String("abc");
s[0] = "expected";
if (s[0] != "expected")
    alert("EPIC FAIL");


3) Therefore, I think it is unlikely that a lot of public Web content  
depends on being able to do this. If this were at all common, odds are  
that we would have heard about it by now. As Brendan suggests,  
deploying the behavior in beta versions of other browsers would give  
us more data points.



 Are there many Webkit-only applications? Do these
 applications take advantage of string indexing via property access?

I do not think the existence of WebKit-only applications is relevant.  
There are in fact a fair number (for example Dashboard widgets and  
iPhone-specific Web apps), but they do not tell us anything about  
whether public Web content at large depends on the behavior of  
allowing any numeric property of a String object to be successfully  
assigned. (I do not think any of this content depends on the WebKit  
behavior either).

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Why are global variables *non-deletable* properties of the global object?

2008-06-20 Thread Maciej Stachowiak

On Jun 19, 2008, at 11:20 PM, Brendan Eich wrote:

 On Jun 19, 2008, at 8:40 PM, Mark S. Miller wrote:

 Try putting this in a Firefox address toolbar:

 javascript:alert('foo' in window); var foo = 42; alert(delete foo);
 alert(foo)

 You will get true (because the var binding -- not initialization
 -- is
 hoisted to the top of the program, right after the javascript:),
 false
 (because var makes a DontDelete binding outside of eval), and 42.

 I did. Thanks for suggesting that experiment. But given the above
 behavior, I don't understand

  javascript:alert('foo' in window); var foo = 42; window.foo = 43;
 alert(delete window.foo); alert(window.foo)

 I get true, true, undefined.

 Works correctly (true, false, 43) in Firefox 3 (try it, you'll like
 it!).

Also works correctly in Safari 3.1 and the Safari 4 Developer Preview  
(which implements split window support).

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Draft: 11 June 2008 version available

2008-06-14 Thread Maciej Stachowiak

On Jun 14, 2008, at 12:21 PM, Mark S. Miller wrote:

 On Fri, Jun 13, 2008 at 2:44 PM, Mark Miller [EMAIL PROTECTED]  
 wrote:
 On Fri, Jun 13, 2008 at 11:20 AM, Lars Hansen [EMAIL PROTECTED]  
 wrote:
 What
 other with do people imagine is compatible with strict
 mode? I must have missed something.

 The one in ES3.  What makes it not compatible with strict mode?

 Doesn't it make static scope analysis impossible?

 Deleting a variable also makes static scope analysis impossible.
 (Deleting a property is fine.)

Deleting a variable is only possible for variables introduced by eval,  
and I gather eval will not be able to inject bindings into local scope  
in strict mode.
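
A minimal ES3-level sketch of that distinction (assuming ordinary,
non-strict eval as in ES3):

function f() {
    eval("var x = 1");       // binding introduced by eval: no DontDelete
    var removed = delete x;  // true: the eval-created binding goes away
    return typeof x;         // "undefined"
}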

 Once try/catch is fixed, I can't think of anything else that prevents
 static scope analysis of ES3.1 strict or ES4 strict. Does anyone know
 of any other cases?

Named function expressions have the same kind of problem as try/catch  
(assuming the problem is a random non-activation object being on the  
scope chain).
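
A sketch of the named-function-expression case, assuming a literal
reading of ES3 (where the name is bound on an object created as if by
new Object() and pushed onto the scope chain):

var stray = "outer";
Object.prototype.stray = "oops";
var f = function named() {
    // Under the literal reading, the object holding the name 'named'
    // inherits Object.prototype.stray, which shadows the outer variable --
    // the same hazard as the ES3 catch-variable object.
    return stray;   // "oops" under the literal reading, "outer" otherwise
};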

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Draft: 11 June 2008 version available

2008-06-12 Thread Maciej Stachowiak

On Jun 12, 2008, at 9:45 AM, Sam Ruby wrote:

 I'm trying to understand the nature of the ES3.1 - ES4 subset
 relationship that this committee has agreed to.

 p69 12.10.  Disallowing the with statement in strict mode breaks the
 ES3.1 - ES4 subset relationship (we've found no compelling reason to
 ban it).

 How does having ES4 support *more* than ES3.1 supports break the
 subset relationship?

Having ES3.1 strict mode forbid features that ES4 strict mode does not  
(but which are in both languages) feels like breaking the subset  
relationship to me. The reason is that a program that is purely ES3 but  
has the strict mode pragma added could then be legal in one language  
but not the other, which seems problematic.

To the extent that strict mode limits ES3-with-pragma programs, I  
think it should impose the same limits in ES3.1 and ES4.
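
A hedged illustration of the concern (the pragma spelling below is only
a placeholder for whatever opt-in form is adopted):

"use strict";            // placeholder opt-in pragma
var settings = { verbose: true };
with (settings) {        // plain ES3 code
    if (verbose) alert("hello");
}
// Under the draft as described, this program would be rejected by ES3.1
// strict mode (with is disallowed) but accepted by ES4 strict mode, so
// identical source is legal in one language and not the other.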

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Draft: Array generics

2008-05-31 Thread Maciej Stachowiak

On May 20, 2008, at 7:35 AM, Douglas Crockford wrote:

 Erik Arvidsson wrote:
 I know for a fact that not passing the thisObject as the third param
 in methods like map etc will break real world applications such as
 Gmail.  If JScript does not follow the defacto standard, developers
 will have to add detection for this abnormality.  I think  
 compatibility
 with other browser should be enough for ES3.1 to standardize that the
 thisObject should be passed as the third parameter.

 I disagree. Gmail can continue patching Array.prototype as it does  
 now, so
 nothing breaks. But going forward, new applications should be using  
 bind instead
 of a thisObject.

I've heard it mentioned that ES3.1 has a 3 out of 4 browsers rule.  
What exactly is the scope of this rule, and why does it not apply here?

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Draft: Array generics

2008-05-31 Thread Maciej Stachowiak

On May 31, 2008, at 6:17 AM, Pratap Lakshman (VJ#SDK) wrote:

 [I'll take shot at replying (and Doug can clarify)]

 A feature that is present in 3 out of 4 browsers makes it a  
 candidate for inclusion into ES3.1. However, that does not guarantee  
 that it will get included.

 In the present case, here are a couple of data points:
 (1) It was felt that providing the thisObj could potentially  
 introduce a security issue.
 (2) In ES3.1 we want to support Function.prototype.bind.

To counter that, I would mention:

(1) 3 out of 4 browsers currently support the thisObj parameter, and
I do not expect they would remove it, so the spec would be incompatible
with 3 out of 4 implementations.
(2) The proposed function.bind can be used to defend against this
rebinding, so the security issue can be avoided.
(3) Many existing ECMAScript functions can be used to perform this  
rebinding, including Function.prototype.call,  
Function.prototype.apply, and the ES3.1 proposed  
Function.prototype.bind itself!

So, specifying map, filter, forEach etc as they are already  
implemented in 3 out of 4 browsers does not create any consideration  
that does not already exist, and the spec itself creates the means to  
mitigate it.
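
A short sketch of the two styles at issue; bind here refers to the
Function.prototype.bind proposed for ES3.1, not something already in
ES3:

var context = { factor: 10 };

// thisObj parameter, as currently shipped by 3 of the 4 browsers:
var a = [1, 2, 3].map(function (n) { return n * this.factor; }, context);

// the same effect via the proposed bind:
var b = [1, 2, 3].map(function (n) { return n * this.factor; }.bind(context));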

 Taken together, this led us to propose codifying Array generics as  
 called out in the proposal I sent out (i.e. without the thisObj  
 parameter). As Doug mentiones below, apps could continue patching  
 Array.prototype if they so desired, but going forward new apps  
 should use bind instead of the thisObj.

If apps can redefine Array.prototype.map to do what existing  
implementations do, doesn't that by definition mean the same security  
issue still exists? Security is based on what bad things you *actually  
can* do, not on what the spec condones as acceptable or what is
common practice. I imagine the reason this rebinding is of concern
is for secure dialects trying to prevent malicious code from doing it,
not to protect against someone doing it accidentally.

Thus, I can't see a sound reason to be incompatible with existing  
practice.

Regards,
Maciej

 Kris Zyp then made the observation that apps on ES3.1 that relied on  
 feature-testing (before patching Array.prototype) would end up using  
 the 'incompatible' implementation if one was present! At that point, we  
 thought we would be better off not including the proposal for now.

 pratap
 PS: I'll have this on the agenda for further discussion in our next  
 conf. call.

 -Original Message-
 From: Maciej Stachowiak [mailto:[EMAIL PROTECTED]
 Sent: Saturday, May 31, 2008 2:22 AM
 To: Douglas Crockford
 Cc: Erik Arvidsson; [EMAIL PROTECTED]; Pratap Lakshman  
 (VJ#SDK); es4-discuss@mozilla.org
 Subject: Re: ES3.1 Draft: Array generics


 On May 20, 2008, at 7:35 AM, Douglas Crockford wrote:

 Erik Arvidsson wrote:
 I know for a fact that not passing the thisObject as the third param
 in methods like map etc will break real world applications such as
 Gmail.  If JScript does not follow the defacto standard, developers
 will have to add detection for this anormality.  I think
 compatibility
 with other browser should be enough for ES3.1 to standardize that  
 the
 thisObject should be passed as the third parameter.

 I disagree. Gmail can continue patching Array.prototype as it does
 now, so
 nothing breaks. But going forward, new applications should be using
 bind instead
 of a thisoObject.

 I've heard it mentioned that ES3.1 has a 3 out of 4 browsers rule.
 What exactly is the scope of this rule, and why does it not apply  
 here?

 Regards,
 Maciej


___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1: Draft 1

2008-05-29 Thread Maciej Stachowiak


On May 28, 2008, at 10:38 PM, Pratap Lakshman (VJ#SDK) wrote:

I have uploaded to the wiki (link, see bottom of the page) a first  
draft of the specification for ES3.1. This is in the form of in- 
place edits and markups to the ES3 specification. As you will notice  
when reading through, there are still some open issues, and details  
on a few features to be filled in. This spec shall be updated as we  
make progress on these.




I have only skimmed the beginning of the document. The following  
change seems like a technical error:


All constructors are objects, but not all objects are constructors.
  --
All constructors are functions, but not all functions are usefully  
treated as constructors.


ECMAScript constructors need not be functions; an object may implement  
the [[Construct]] internal property without also implementing the  
[[Call]] internal property, thus it would be a constructor but not a  
function. I assume it was not the intent to deliberately remove this  
possibility.


Also, the addition of "usefully treated as" does not make sense. An
object that does not implement the [[Construct]] internal property is
not a constructor at all, rather than merely not "usefully treated as"
a constructor.


The definition from the original spec that actually seems wrong is in
4.3.4: "A constructor is a Function object that creates and
initialises objects."



With a lot of the changes, I could not tell if the intent was to  
change language semantics or to make a clarifying editorial change.



Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Namespaces as Sugar (was: complexity tax)

2008-05-28 Thread Maciej Stachowiak

On May 27, 2008, at 12:18 PM, Mike Shaver wrote:

 2008/5/27 Maciej Stachowiak [EMAIL PROTECTED]:
 It could save a lot of complexity, by not requiring any first-class  
 support
 for namespace lookup on arbitrary objects.

 Is the expectation then that having two lookup models, one for global
 objects and the other for non-global objects, going to provide less
 complexity?

My proposal is that there is one for lexical scope lookup along the  
scope chain (which considers open namespaces when binding unqualified  
names) and another for property lookup qualified to an object  
reference (which does not). Yes, I believe this will provide less  
complexity, because the scope chain and prototype chain algorithms are  
quite complex in the current proposal and not the same as each other.  
My rough proposal would greatly simplify the prototype chain algorithm  
and not make the scope chain algorithm any more complex (it may become  
simpler).

 In a browser, window is the global object; would property lookup on
 window be namespaced when referenced as such?  When you have a
 handle to another window?  When you use |this.prop| in a global
 function?

My proposed answer to all of these would be no.

 If we have namespace-aware lookup, it seems to me that it would be
 less complex for implementors and script authors alike for it to
 always apply, rather than just for one magical object.

I think it would be simpler to limit the effect of namespaces to  
lexical bindings (where we know we can bind to the right name and  
namespace statically in any case of interest) and not have them affect  
object property lookup in general.

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Namespaces as Sugar (was: complexity tax)

2008-05-27 Thread Maciej Stachowiak


On May 27, 2008, at 11:00 AM, Brendan Eich wrote:

What's at issue is whether and why unqualified import matters in any  
object, even the global object only, since the NAS proposal did not  
allow unqualified import even at global level, and the use-case for  
unqualified import was dismissed as not compelling.


There's really 4 separable issues:

1) Namespacing of names at global scope (via lexically scoped  
reference).

2) Unqualified import of names into global scope.
3) Namespacing of arbitrary object properties.
4) Unqualified import of namespaces for arbitrary object properties.

I would claim 1 and 2 are essential, 3 can be done by convention in  
the absence of 4 (a la the NAS proposal) and 4 is unnecessary and  
harmful to performance.


That's an interesting idea, although we use namespace  
qualification along the prototype chain all over the place in ES4,  
and for what seem like good reasons.


Other languages with successful namespacing features don't have  
such a mechanism, so I am dubious of the goodness of these ideas. I  
am concerned that the namespace lookup algorithm for object  
property access is too complicated.


Agreed, this is the big issue. I share your concern, but the  
conservative approach (esp. with reference to C++) of throwing out  
non-global open namespaces looks like an overreaction, and it may  
not save much complexity.


It could save a lot of complexity, by not requiring any first-class  
support for namespace lookup on arbitrary objects.


It makes object property lookup depend on the set of open  
namespaces, which means obj.property may compile to entirely  
different code depending on the context,


Lexical context, no dynamic open-namespaces scope.


Note I said "compile to" so I think this was clear.



and it seems likely it will slow down property lookup when multiple  
namespaces are open but static type info is missing.


It certainly could, although we think not in implementations under  
way. Opening multiple namespaces without is not free in a dynamic  
language.


Is the name lookup algorithm much simpler if namespaces are top- 
level only? Since obj.prop could end up with obj referring to the  
(or I should write a) global object, I don't see it. Unless you're  
proposing outlawing such object references using the namespaces open  
at top-level when obj is the global object.


I would propose that unqualified import only affects lexically scoped  
lookups, not object property access, even if the object in question is  
the global object. In particular, if you are willing to say  
global.property instead of property, it is not such a hardship to  
say global.ns::property.
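
A rough ES4-style sketch of the distinction being proposed (syntax
approximate, based on the current proposals):

namespace ns;
use namespace ns;        // opens ns for lexical (scope chain) lookup only

ns var counter = 0;

counter;                 // unqualified lexical reference: open namespaces apply
global.ns::counter;      // property reference: qualification stays explicit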



If the only real justification is that it's a nice generalization,  
then I do not think it is worth the performance hit.


The nice generalization followed from particular use-cases, it did  
not precede them. I cited those cases (briefly). How about being  
responsive to them?


I think many (perhaps all) of those cases either use namespaces  
gratuitously or work fine without unqualified import (and so could use  
namespaces by convention). For example it doesn't seem important to  
allow unqualified import of the meta namespace.





ES (any version) has objects as scopes, as well as prototypes.  
It's hard to keep the gander -- objects below the top level, or on  
the prototype chain -- from wanting the same sauce that the goose  
-- the global object -- is trying to hog all to itself.


Is it really? Is there any other language where namespacing of the  
global namespace has led to namespacing at the sub-object level? C++,
Java and Python all get by fine without namespacing of
individual object properties.


C++ and Java are not the right paradigms for JS/ES. Python is  
better, but Python *does* allow import in local scope.


Python allows import from inside a local namespace, but does it allow  
import from outside a local namespace to affect lookup into that  
namespace? I am not aware of such a feature but I'm not a Python expert.




The reason namespacing at top level is essential to programming in  
the large is that the global namespace is a shared resource and  
must be partitioned in controlled ways to avoid collision in a  
large system. But I do not see how this argument applies to classes  
or objects.


See Mark's big post, which discusses (in item (b)) extending  
objects, including standard ones.


Saying the global object is a shared resource that must be  
partitioned, etc., but no others reachable from it, particularly  
class objects, are shared resources, is begging the question: what  
makes any object a shared resource? That the global is the only  
necessarily shared object does not reduce the benefit, or make the  
cost prohibitive, of sharing other objects reachable from it.


The benefit is less, because you can use separate objects in different  
namespaces instead of a shared object with namespaces inside it. The  
cost is greater 

Spreadsheet access

2008-03-26 Thread Maciej Stachowiak

I'd appreciate it if someone could give me access to the feature  
spreadsheet to enter Apple's votes:

http://spreadsheets.google.com/pub?key=pFIHldY_CkszsFxMkQOReAQ&gid=2

(My Google account is maciej at gmail dot com).

  - Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 draft: Object

2008-03-11 Thread Maciej Stachowiak

On Mar 10, 2008, at 7:01 PM, Lars Hansen wrote:

 We are the WG.  Are you saying that substantive discussions
 of your proposals are not welcome?  Not sure what the point
 of participating is if that's the case.

 Sorry, I didn't realize that "I find it abhorrent" qualified as
 substantive discussion.  My fault.  Won't happen again.

The optional second argument to make propertyIsEnumerable a setter has  
some practical problems:

1) It violates the very strong norm that getter and setter functions  
are separate and have their own different arguments. It will make the  
feature harder to use and code using it harder to understand, to some  
extent.
2) It makes it impossible to feature-test for the ability to disable
enumerability, compared to a separate setter (see the sketch below).
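
A sketch of the feature-testing point, using the hypothetical separate
setter discussed further below:

var obj = { internal: 1 };
if (typeof Object.prototype.setPropertyIsEnumerable === "function") {
    obj.setPropertyIsEnumerable("internal", false);   // detectable before use
}
// By contrast, obj.propertyIsEnumerable("internal", false) cannot be
// distinguished, by feature test, from the ES3 one-argument getter.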

Against the argument that it is too risky compatibility-wise to add a
new method to the Object prototype (apparently the only reason things
were done this way), I would say that concern is overblown.
methods and properties on all objects. We have copied some in Safari  
and seen no web compatibility issues, I assume Mozilla has not had any  
as well. Specifically, I am thinking of __defineSetter__,  
__defineGetter__, __lookupSetter__, __lookupGetter__, and __proto__.

Has any study been done on how many sites currently make use of the  
property names of a likely setter for propertyIsEnumerable?

 I'm dealing with a serious insurrection of folks who believe
 that the ES4 working group has a bad attitude, based on
 Brendan's public comments and responses to issues like this
 one.  They're quite visible.

 Debate is only good.  I merely pointed out the obvious thing, namely
 that until there is an alternative proposal written up to deal with
 this issue, the current design stands unless the WG, as a group,
 decides to just get rid of it (leaving the problem it was designed
 to solve solution-less).

Surely reviewing this spec is the appropriate time to revisit this  
issue. I'd like to propose the following three alternatives to the  
current proposal:

1) Remove the feature entirely from ES4 (as part of the judicious  
feature cuts process) until a more appropriate syntax is found
2) Replace two-argument form of propertyIsEnumerable with  
setPropertyIsEnumerable
3) Replace two-argument form of propertyIsEnumerable with  
__setPropertyIsEnumerable__

I think each of these options is so trivial as to not require a  
lengthy write-up.

What is the process for the WG deciding whether to make one of these  
changes, or something else?

 I like the idea of making non-public-namespaced properties be
 not-enumerable and getting rid of DontEnum.  We've talked loosely
 about it for a while.  But it's remained loose talk, it has never
 made it to the stage where it is a coherent proposal.

I don't like syntax-based alternatives since they cannot be made to  
degrade gracefully in ES3 UAs.

Regards,
Maciej


___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: Insurrection (was: ES4 draft: Object)

2008-03-11 Thread Maciej Stachowiak

On Mar 10, 2008, at 9:54 PM, Mark Miller wrote:

 ES3 has several abstraction mechanisms:
 * lambda abstraction, which it gets approximately as right as Scheme!
 * objects as a generalization of records, which has some pros and cons
 * prototype-based sharing of common behavior, which is used almost
 exclusively by JavaScript programmers to express only class-like
 patterns.

 Altogether, ES3 has many virtues and many problems. One of its great
 virtues is its almost perfect support for lexical nesting. Virtually
 any thisless construct that could appear at top level can also appear
 within a nested lexical context with the same meaning. ES3 also avoids
 the CommonLisp trap of multiple namespaces, instead siding with
 Scheme's single namespace approach.

 Even ignoring ES4's type system, ES4 adds all the following
 abstraction mechanisms to those in ES3:
 * classes
 * packages
 * units
 * namespaces

You forgot interfaces (and the type system also adds record types,  
(sort-of)-tuples, typed arrays and parametric types). That does seem  
like a lot.

  - Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 draft: Object

2008-03-11 Thread Maciej Stachowiak

On Mar 10, 2008, at 11:14 PM, Maciej Stachowiak wrote:

 The optional second argument to make propertyIsEnumerable a setter has
 some practical problems:

 1) It violates the very strong norm that getter and setter functions
 are separate and have their own different arguments. It will make the
 feature harder to use and code using it harder to understand, to some
 extent.
 2) It makes it impossible to feature test for the ability to disable
 enumerability, compared to a separate setter.

 Against the argument that it is too risky compatibility-wise to add a
 new method to the object prototype (apparently the only reason things
 were done this way), I submit that this concern is overblown. Mozilla
 has defined new methods and properties on all objects. We have copied
 some in Safari and seen no web compatibility issues; I assume Mozilla
 has not had any either. Specifically, I am thinking of
 __defineSetter__, __defineGetter__, __lookupSetter__,
 __lookupGetter__, and __proto__.

 Has any study been done on how many sites currently make use of the
 property names of a likely setter for propertyIsEnumerable?

I forgot to mention that giving the two-argument form of
propertyIsEnumerable setter semantics carries a small compatibility
risk of its own: code that accidentally calls it with two arguments
today will not expect it to act as a setter. Do we have any way to
quantify the relative compatibility risk of the current design vs. a
separate setter?
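
For example (purely illustrative):

  var o = { p: 1 };
  // In ES3 today the extra argument is silently ignored:
  var enumerable = o.propertyIsEnumerable("p", false);
  // Under the current proposal the same call would also act as a setter,
  // changing the property's enumerability rather than just querying it.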

Regards,
Maciej


___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 draft: Object

2008-03-11 Thread Maciej Stachowiak

On Mar 10, 2008, at 11:35 PM, Mark S. Miller wrote:

 On Mon, Mar 10, 2008 at 11:14 PM, Maciej Stachowiak [EMAIL PROTECTED]  
 wrote:
 [...] I'd like to propose the following three alternatives to the
 current proposal:

 1) Remove the feature entirely from ES4 (as part of the judicious
 feature cuts process) until a more appropriate syntax is found
 2) Replace two-argument form of propertyIsEnumerable with
 setPropertyIsEnumerable
 3) Replace two-argument form of propertyIsEnumerable with
 __setPropertyIsEnumerable__

 So long as setPropertyIsEnumerable is a method of Object.prototype, it
 raises the otherwise pointless question of the meaning of overriding
 it.

I don't see how it raises any special questions - it's not called  
internally by the implementation or anything.

 At the last ES3.1 face-to-face, we agreed instead on the following
 static method on Object, as recorded at
 http://wiki.ecmascript.org/doku.php?id=es3.1:targeted_additions_to_array_string_object_date
  
 :

That proposal seems to depart markedly from the past plan that ES 3.1  
is to be a subset of ES4. Has that plan been abandoned? ES3.1  
organizers, what's up?

(If you'd like to propose this design for ES4 as well, the basic
approach seems sound, in that it avoids whatever risk there is of
polluting the namespace of all objects without being conceptually
confusing. However, the names lean too far towards terseness rather
than descriptiveness for such obscure operations. For instance,
Object.readOnly(o, p) would read much better as something like
Object.makePropertyReadOnly(o, p).)
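
For comparison (the names here are purely illustrative; nothing is
settled):

  Object.readOnly(o, "p");              // terse, but reads like a query
  Object.makePropertyReadOnly(o, "p");  // longer, but the effect is unambiguous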

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 draft last call: line continuation in string and regex literals

2008-03-08 Thread Maciej Stachowiak

On Mar 8, 2008, at 8:20 AM, Lars Hansen wrote:

 Last call for the line continuation spec:

 http://wiki.ecmascript.org/doku.php?id=spec:line_continuation_in_strings

 (Last call = it will be taken into the language spec within a week  
 or
 so unless there's opposition now.)

This (and the line terminator normalization draft) seem to be in  
restricted parts of the wiki. Could they go somewhere public?

  - Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 draft: Line terminator normalization

2008-02-26 Thread Maciej Stachowiak

On Feb 26, 2008, at 1:36 AM, Lars Hansen wrote:

 Please comment.  --lars

 line-terminator- 
 normalization.txt___
 Es4-discuss mailing list
 Es4-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es4-discuss

Has the web compatibility impact of this proposal been evaluated?

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 draft: Line terminator normalization

2008-02-26 Thread Maciej Stachowiak

On Feb 26, 2008, at 6:12 PM, Brendan Eich wrote:

 On Feb 26, 2008, at 3:46 PM, Maciej Stachowiak wrote:

 On Feb 26, 2008, at 1:36 AM, Lars Hansen wrote:

 Please comment.  --lars

 line-terminator-
 normalization.txt___
 Es4-discuss mailing list
 Es4-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es4-discuss

 Has the web compatibility impact of this proposal been evaluated?

 SpiderMonkey has converted \r and \r\n into \n since 1996 -- my  
 memory is dim, but IIRC I did that in the original Netscape 2  
 Mocha runtime, because anything else hurt interop (back then  
 people routinely authored HTML docs with inline scripts on Mac using  
 \r for line termination, never mind Windows using \r\n ;-)). Does  
 JavaScriptCore not canonicalize to \n?

That sounds like sufficient evaluation of the impact to me. I thought  
it might be possible that scripts would expect \r\n to appear as two  
characters but what you describe makes that seem pretty unlikely.

  - Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Proposal Working Draft

2008-02-25 Thread Maciej Stachowiak

On Feb 25, 2008, at 2:15 AM, Mike Cowlishaw wrote:

 Pentium basic arithmetic operations take from 1 cycle (pipelined add,
 rarely achieved in practice) up to 39 cycles (divide).  The figures  
 at the
 URL above for decimal FP software are worst-cases (for example, for  
 Add, a
 full-length subtraction that requires pre-alignment and post- 
 rounding).  A
 simple x=x+1 is much faster.

Then I will ignore the details of the chart and assume it is much
slower, unless you have better data.

 That's roughly the same speed on
 current processors as the hardware binary floating-point available
 when
 ECMAScript was first written.

 That's not really a relevant comparison. When ECMAScript was first
 written, people weren't using it to write complex web apps. Nowadays
 it would be unacceptable even for a high-end phone to deliver the
 ECMAScript performance as slow as consumer desktops from that era.

 That's a fair comment (phones).  However, the path length for  
 rendering
 (say) a web page is huge compared to the time spent in arithmetic.   
 (I did
 a search for 'math-heavy' examples of several programming languages 3
 years ago and didn't find any ECMAScript examples.)

There can be many factors affecting page loading and web application  
performance. There are many cases where JavaScript execution time is a  
significant component.

 But if arithmetic performance really is an issue, one could provide  
 an option or attribute
 to request binary arithmetic, perhaps.

No, shipping a huge performance regression with an opt-out switch is  
not an acceptable option.

 In today's (unpipelined) decimal FP hardware it is much faster than
 those
 software measurements, of course, and there's no reason why future
 implementations should not be within 10%-15% of binary FP.

 I do all my browsing on a MacBook Pro and an iPhone. As far as I  
 know,
 neither of these has any kind of decimal FP hardware, nor do I expect
 their successors to support it any time soon (though I don't have
 inside knowledge on this). These systems are towards the high end of
 what is available to consumers.

 Intel are studying decimal FP hardware, but have not announced any  
 plans.
 Of course, PowerPC (as of POWER6) has a decimal FPU...

Apple completed the Intel switch some time ago; since then, PowerPC
has not really been relevant to the devices on which people browse the
web. My point remains: decimal FP hardware is not relevant for any
current performance evaluation and will not be for some time.

 This is not directly related to my main point, which is about
 performance and which I think still stands.

 In summary: software floating point (binary or decimal) is between  
 one and
 two orders of magnitude slower than hardware for individual  
 instructions.
 If (say) 5% of the instructions in an application are floating-point
 arithmetic (a high estimate for applications such as parsers and  
 browsers,
 I suspect), that means the application would be about twice as slow  
 using
 software FP arithmetic.  That's not really a 'showstopper' (but might
 justify a 'do it the old way' switch).

If you don't think imposing a 2x slowdown on web apps is a showstopper  
then clearly we have very different views on performance. (Note, using  
your high estimate of two orders of magnitude it would be a 6x  
slowdown if 5% of an application's time [not instructions] is spent in  
floating point arithmetic.)
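
For concreteness, the arithmetic behind those figures (f = fraction of
baseline execution time spent in floating-point arithmetic, s =
per-operation software penalty; both are rough assumptions):

  overall slowdown = (1 - f) + f * s
  f = 0.05, s = 20  :  0.95 + 1.0 = ~2x  (the "twice as slow" figure)
  f = 0.05, s = 100 :  0.95 + 5.0 = ~6x  (the two-orders-of-magnitude case)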

 From my point of view, this would be a massive regression and  
conclusively rules out the idea of replacing binary floating point  
with decimal floating point in ECMAScript.

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES3.1 Proposal Working Draft

2008-02-21 Thread Maciej Stachowiak

On Feb 21, 2008, at 2:46 AM, Mike Cowlishaw wrote:

 Maciej wrote on Wed Feb 20 14:28:33 PST 2008:

 Besides compatibility issues, this would be a significant performance
 regression for math-heavy code. I would consider this a showstopper  
 to
 implementing such a change.

 I'm inclined to agree that it is (unfortunately) probably not a good  
 idea
 to simply replace the default binary arithmetic with decimal128 --  
 even
 though this would give better precision for math applications, as  
 well as
 all the other benefits of decimal arithmetic.

 But I don't buy the performance argument -- decimal math packages are
 respectably fast nowadays.  See, for example, the measurements at
 http://www2.hursley.ibm.com/decimal/dnperf.html -- a decDouble add  
 is a
 couple of hundred cycles in software.

Those benchmarks aren't very useful because they don't compare against
hardware binary floating point, and because they are microbenchmarks
it's hard to tell how much impact there would be on a real app.
However, hundreds of cycles even for simple operations
like add sounds to me like it would be hundreds of times slower than  
hardware floating point.

 That's roughly the same speed on
 current processors as the hardware binary floating-point available  
 when
 ECMAScript was first written.

That's not really a relevant comparison. When ECMAScript was first  
written, people weren't using it to write complex web apps. Nowadays  
it would be unacceptable even for a high-end phone to deliver the
ECMAScript performance as slow as consumer desktops from that era.

 In today's (unpipelined) decimal FP hardware it is much faster than  
 those
 software measurements, of course, and there's no reason why future
 implementations should not be within 10%-15% of binary FP.

I do all my browsing on a MacBook Pro and an iPhone. As far as I know,  
neither of these has any kind of decimal FP hardware, nor do I expect  
their successors to support it any time soon (though I don't have  
inside knowledge on this). These systems are towards the high end of  
what is available to consumers.

 I also agree with Mark's comment that arbitrary-precision integers  
 and
 arbitrary-precision rationals seem like more generally useful types
 than decimal floating point, if any numeric types are to be added,  
 but
 that seems like an issue more for ES4 than 3.1.

 I really do not understand that comment.  Almost every numerate human
 being on the planet uses decimal arithmetic every day; very few need  
 or
 use arbitrary-precision integers or rationals of more than a few  
 (decimal)
 digits.  And almost every commercial website and server deals with
 currency, prices, and measurements.

I don't think currency calculations are the only interesting kind of
math. So if we need to add a software-implemented, more accurate
numeric type, why not go all the way? At least that is my first
impression.
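
For what it's worth, the usual illustration of what decimal fixes
versus what exact rationals would additionally cover:

  0.1 + 0.2   // 0.30000000000000004 with binary doubles; exactly 0.3 with decimal
  1 / 3       // inexact in both binary and decimal floating point, but exact
              // with arbitrary-precision rationals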

This is not directly related to my main point, which is about  
performance and which I think still stands.

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: ES4 implementation process, teams, and meetings

2008-02-21 Thread Maciej Stachowiak

On Feb 21, 2008, at 8:14 AM, Geoffrey Garen wrote:

 Is there a published specification that all these implementors will be
 using?

To expand a bit on Geoff's comment:

I'd like Apple and the WebKit project to get involved with ES4  
implementation. But right now, as far as I can tell, there isn't a  
written record for any of ES4's features that I could point an  
engineer to and say implement this. The proposals on the wiki are  
way out of date, it's not easy to find what trac tickets modified  
them, and there seem to be commonly understood planned changes that  
aren't even reflected in trac.

Before attempting interoperable implementations of particular  
features, I think we need at minimum a form of the proposal for that  
feature that is complete and up to date. It doesn't have to be formal  
specification quality, but there has to be something accurate.

Now, it may be that by telling someone to reverse engineer another  
implementation, or ask the ES4 crowd about every detail of how a  
feature should be implemented, someone could succeed in implementing.  
But it seems to me that this undermines the unstated assumption of  
interoperable *independent* implementations.

In contrast, with CSS, Web API or HTML WG specifications, I can point  
engineers to a spec that is more or less accurate for a given feature  
and they only have to ask questions about the few missing details. I  
would raise HTML5 as a particularly laudable example because it  
achieves this even though much implementation work is happening in  
parallel with writing the spec.

I think we should strive to achieve the same standard for ES4. At  
feature granularity, someone should first write an up to date accurate  
document and implementations should be done against that, not against  
some separate shared understanding of the feature.

Regards,
Maciej



 Thanks,
 Geoff

 On Feb 20, 2008, at 3:38 PM, Brendan Eich wrote:

 As Jeff has laid out, with helpful comments from Michael O'Brien,
 Lars, and Graydon, we are entering a phase of ES4 work where
 practical implementations will adopt and implement proposed parts of
 the new language. We need to do this to shake out proposals and gain
 critical feedback from implementors. We hope to get usability results
 from programmers too.

 I agree with Michael's point about the RI being both alpha and omega
 among implementations, so RI work will continue. But practical
 implementations, besides enabling usability testing with real
 programmers, will help weed out potential problems to do with
 performance and security that the RI blissfully ignores.

 As Graydon and Michael point out, the waterfall diagram (even if you
 put the RI earlier in the waterfall) does not reflect the wheels
 within wheels (waterwheels?) that must cycle at several levels of
 design, implementation, and even usability testing, in order to reach
 the ES4 spec we aspire to write. So take that waterfall diagram more
 as a management crutch ;-).

 Finally, we absolutely aspire to build our existing testsuites up to
 cover as much of the new language as we can. Test-driven development
 is the only way to go (I speak from painful experience back in the
 Netscape days :-/).

 The good news is that I believe we will have many ES4 implementations
 coming up in the next few months, working in parallel to improve the
 spec and RI. I know of at least these already under way:

 * ES4 RI (SML + ES4 self-hosted)
 * MbedThis (C + Java)
 * Rhino (Java)
 * SpiderMonkey (C)
 * Tamarin+ESC (C++ + ES4 self-hosted)

 If you are implementing any part of ES4 and want to join forces,
 please reply.

 We aim to track progress using the infrastructure created by John
 Resig:

 http://ejohn.org/blog/state-of-ecmascript-4-dec-07/
 http://spreadsheets.google.com/pub?key=pFIHldY_CkszsFxMkQOReAQgid=2

 I believe that the shared spreadsheet URL given above is correct, but
 John can provide the latest information as well as grant write
 access. My hope is that implementors can edit the spreadsheet to
 record progress, providing verifiable builds and even open source for
 their implementations, as they go. Again I'll defer to John on this.

 We propose to communicate among implementation teams using es4-
 [EMAIL PROTECTED], since (a) the list is not terribly high-traffic,
 (b) we aim to operate transparently, and (c) we believe most of you
 are interested at least as onlookers, if not as implementors. We can
 split lists if we run into a problem, but I don't foresee one.

 To provide focused face-to-face time working together and building
 rapport among the principals, we are thinking of meeting roughly
 every month, with this strawman schedule:

 March 17-21 - Mountain View, CA
 April 14-18 - Newton, MA
 May 12-16   - Vancouver, BC

 This is very straw, so please don't flame it or it will ignite! But
 please do feel free to suggest alternative dates and locations. We
 hope to host anyone who has known reputation on this list and who
 wants 

Re: ES4 implementation process, teams, and meetings

2008-02-21 Thread Maciej Stachowiak


On Feb 21, 2008, at 10:41 AM, Brendan Eich wrote:

 On Feb 21, 2008, at 8:30 AM, Maciej Stachowiak wrote:

 I'd like Apple and the WebKit project to get involved with ES4
 implementation. But right now, as far as I can tell, there isn't a
 written record for any of ES4's features that I could point an
 engineer to and say implement this.

 There's certainly no such spec, or you would be a passive observer
 of a standardization process that was all but done. That's not
 reality, and it arguably is not what you should want -- Apple people
 could be valued peers in the remaining work on ES4.

 If you want to be passive implementors of a finished spec, then wait
 till next year.

We'd like to be active participants. However, it seems that, as
newcomers/outsiders, we do not have enough information available to
participate in early implementation. I am not asking for a finished,
formal, final detailed spec.

What I am asking is this: for each proposal where you'd like early
implementations, before implementation commences please write down
enough information about that proposal in some reasonably
understandable form to represent the current shared understanding of
the insiders/old-timers. That would be enough info for us relative
outsiders/newcomers to participate. I don't think it's too much to ask
for a rough but up-to-date and accurate first draft. I'm not sure how
we are supposed to participate otherwise. Maybe it is not expected
that we would or should.

 The proposals on the wiki are
 way out of date, it's not easy to find what trac tickets modified
 them, and there seem to be commonly understood planned changes that
 aren't even reflected in trac.

 That's a failure to file trac tickets -- could you please list these
 changes that aren't in the trac? There's no other bug system to
 track these planned changes, so they had better show up at
 http://bugs.ecmascript.org/ soon or they won't happen.

I have no idea what changes aren't in trac. In the past I've asked
questions on #jslang or elsewhere about particular proposals (such as
the parametric types proposal) and been told that many things about it
had been changed, and the person telling me wasn't sure if all these
changes had trac tickets, or if so, what they were.

It really seems to me like in many cases there is a shared
understanding among many of the insiders that is only recorded inside
people's heads. Maybe that's not right, but that is certainly the
impression I've gotten every time I have asked questions about where
something is documented.

 Before attempting interoperable implementations of particular
 features, I think we need at minimum a form of the proposal for that
 feature that is complete and up to date. It doesn't have to be formal
 specification quality, but there has to be something accurate.

 I've worked pretty hard to keep proposals such as iterators and
 generators up to date; it depends on other proposals which are also
 not formal spec quality, but stable and meaningful (structural
 types, type parameters). Cormac has done work recently in
 formalizing the type system which was important to Graydon's RI work.

Great, if some proposals are accurate and up to date enough to drive
an initial implementation, then my concern is addressed for those
features. But I don't know how to tell which ones those are. Is there
a list of which proposals are up to date?

Furthermore, this won't help when it comes time to implement proposals
that *aren't* up to date. All I'm asking is that they be brought up to
date first.

 So I think you are generalizing unfairly here.

 It's true that decimal is out of date in the wiki, and there are
 open trac issues. This is true of other proposals.

 Now, it may be that by telling someone to reverse engineer another
 implementation, or ask the ES4 crowd about every detail of how a
 feature should be implemented, someone could succeed in implementing.

 Nice strawmen, but no one proposed those things.

Then what is proposed? If I ask an engineer on my team to implement a
feature such as type annotation, how should I ask them to proceed?

 But it seems to me that this undermines the unstated assumption of
 interoperable *independent* implementations.

 In contrast, with CSS, Web API or HTML WG specifications, I can point
 engineers to a spec that is more or less accurate for a given feature
 and they only have to ask questions about the few missing details.

 And then Hixie goes and rewrites it. I am calling b.s. here, Maciej.
 We implemented offline web app support early in Firefox 3, based on
 such WHAT-WG (ok, not HTML WG at the time) specs. They changed a
 great deal later.

I'm not asking for a spec that won't substantially change. The whole
point of early implementations is to improve the spec, and sometimes
that may take significant redesign. Safari has been hit by this as
well, and we accept that as a risk we take on as early

Re: ES4 implementation process, teams, and meetings

2008-02-21 Thread Maciej Stachowiak

On Feb 21, 2008, at 10:31 AM, Graydon Hoare wrote:

 Maciej Stachowiak wrote:

 To expand a bit on Geoff's comment:
 I'd like Apple and the WebKit project to get involved with ES4   
 implementation.

 Great! Though please keep in mind a point in the remainder of your  
 comments: WebKit (and Rhino) are operating from a somewhat  
 newcomer perspective, relative to the committee. The other 5  
 implementations in question (RI, spidermonkey, mbedthis, futhark, ESC 
 +tamarin) are all written by engineers who are and have been closely  
 following the tickets, proposals and discussion, and modifying the  
 RI to encode their thoughts / experiment with a feature.

 So the implication that the language designers are a disjoint set  
 from the implementors, or that they haven't been writing their  
 thoughts down, is not historically accurate. If that's becoming more  
 true now, ok, maybe we need to make some adjustments. But understand  
 where we're coming from.

I don't think the sets are disjoint, but they are not identical either.

 Before attempting interoperable implementations of particular   
 features, I think we need at minimum a form of the proposal for  
 that  feature that is complete and up to date. It doesn't have to  
 be formal  specification quality, but there has to be something  
 accurate.

 I agree this would be nice, but it'd also be nice to have 9 finished  
 implementations and a finished spec! We don't have these yet. So: is  
 your team *completely* stalled until we have such documents,  
 presumably in english rather than SML? If so, I (or anyone else who  
 understands the issues clearly enough -- they're not tremendously  
 complex) can make a priority of building up foundational  
 implementors documents covering such basic concepts as namespaces,  
 names, multinames, types and fixtures. I think we had hoped the RI  
 and tickets on it to serve this role.

I don't think foundational documents are what we need to implement  
specific features. What we need are rough cut but accurate specs for  
the features to implement. I don't think SML + trac is a form that  
anyone here can easily understand.

 Now, it may be that by telling someone to reverse engineer another   
 implementation, or ask the ES4 crowd about every detail of how a   
 feature should be implemented, someone could succeed in  
 implementing.  But it seems to me that this undermines the unstated  
 assumption of  interoperable *independent* implementations.

 I do not think it undermines the assumption of independent  
 implementations, but I also don't think there's a perfectly clear  
 line between dependent and independent. Impls inform themselves  
 from somewhere, be it spec or spec-that-is-informed-from-other-impls  
 or other technical reference material. Information flows somehow,  
 and often between impls (even if indirectly).

 You're not going to be doing WebKit by studying Futhark or  
 Spidermonkey; but I *would* recommend studying the RI (and  
 contributing to it!) I would not worry greatly about the risk of  
 being dependent on it, since it is in a very different language  
 (SML) than WebKit's interpreter and is surely structured quite  
 differently. Study it and understand it, though, as it's as precise  
 as we currently get. The RI was meant to be studied (a.k.a. reverse  
 engineered) and the risk of overspecificity from that is something  
 we all explicitly agreed was better than the risk of deploying  
 underspecified and incompatible impls to the field.

Well, neither I nor anyone on my team know SML. Nor do we know the  
internals of the reference implementation, what aspects of it are  
normative, which are implementation details, and which are considered  
bugs and are intended to change. Nor would I know where in the RI to  
look to understand how to implement particular features. For example,  
let binding was raised as an example of a possible early  
implementation feature. I don't know where in the ~40 klocs of SML in  
the repository I should look.


 In contrast, with CSS, Web API or HTML WG specifications, I can  
 point  engineers to a spec that is more or less accurate for a  
 given feature  and they only have to ask questions about the few  
 missing details. I  would raise HTML5 as a particularly laudable  
 example because it  achieves this even though much implementation  
 work is happening in  parallel with writing the spec.

 HTML5 is a laudable example, and I hope we wind up producing  
 something of similar quality. It has also had more energy put into  
 it, more eyes on it, and is a much wider and flatter spec (fewer  
 subtle interlocking issues).

 Web browsers are also stunningly more complex than programming  
 languages, so the concept of a reference implementation is  
 completely fanciful (though you could do a reference implementation  
 of the parsing rules, say).

 The ES4 RI is small and digestible. Discounting the builtins and  
 some obvious

Re: ES4 implementation process, teams, and meetings

2008-02-21 Thread Maciej Stachowiak

On Feb 21, 2008, at 4:34 PM, Graydon Hoare wrote:

 Maciej Stachowiak wrote:

 I don't think the sets are disjoint, but they are not identical  
 either.

 Agreed. I am trying to arrive at an understanding of which camp  
 Apple aspires to (designer, implementor or both) and in  
 particular how you wish to enact that role.

Apple is not monolithic. Some of us hope to participate more in the  
design process, but it's pretty likely that people who don't deeply  
involve themselves in the design process will do a lot of the  
implementation work. Those people who are implementing need to have  
something to follow.

 Any Rhino hackers (or other implementors) may also wish to chime in.  
 It sounds to me like not having anything to do with the RI is  
 characteristic of how you wish to participate. Is this correct?

We're unlikely to have much interest in working on the RI. We have a
small team and our hands are full implementing our own JS engine (and
the rest of the browser engine). And we think that ultimately shipping
a production-quality implementation in our own engine is more
valuable.

While we try to do our part to help with development and validation of  
important web standards, learning a new programming language and  
coding in it is a pretty high barrier to entry. Is that really the  
only way to meaningfully participate in the process?

As for reading the RI, it seems a lot harder to understand than specs  
written in prose. As far as I can tell, only people who have coded  
significant portions understand it.

 I don't think foundational documents are what we need to implement  
 specific features. What we need are rough cut but accurate specs  
 for the features to implement. I don't think SML + trac is a form  
 that anyone here can easily understand.

 Ok. Some of the features are deep and some are shallow. Many of the  
 shallow ones are not in much flux, do not require much insight, and  
 are well-described by the proposals on the wiki. You could readily  
 show one to an engineer and have them implement, say, destructuring  
 assignment, triple quotes, date and time extensions, regexp  
 extensions, a hashcode function, getters and setters, block  
 expressions (let expressions), enumerability control,  
 String.prototype.trim, expression closures.

Great, can we start recording this list somewhere? Perhaps part of  
breaking down features and figuring out the dependencies should be  
recording which features have an up to date and accurate wiki proposal.

 Others, as you've noted, require some clean-up / consolidation to be  
 brought into line with tickets. Decimal and numerics turned out to  
 be an ongoing source of trouble, as did tail calls. We're still  
 discussing these. We can probably do this in parallel with your team  
 implementing the ones that are more stable.

A clear list of proposals that are clearly not ready would be valuable  
as well. So let's also start recording this list.

 The thing we don't have feature sketches for are deep, systemic  
 issues that affect the whole language: types, fixtures and  
 namespaces. Also the rules governing packages, classes and  
 interfaces. These came in implicitly from AS3, and the type system  
 in particular has been modified extensively: Cormac's paper and the  
 code in type.sml is probably the best we have to describe the  
 intended type system. This is what I meant by foundational  
 documents. Do you still feel that they are not needed?

Documentation will certainly be needed when it comes time to implement  
said features.

 Well, neither I nor anyone on my team know SML. Nor do we know the  
 internals of the reference implementation, what aspects of it are  
 normative, which are implementation details, and which are  
 considered bugs and are intended to change. Nor would I know where  
 in the RI to look to understand how to implement particular  
 features. For example, let binding was raised as an example of a  
 possible early implementation feature. I don't know where in the  
 ~40 klocs of SML in the repository I should look.

 I am trying to give you a guide, but am unsure if you want this form  
 of guidance. SML is a small functional language, and we use mostly  
 applicative style. Nothing crazy.

 Here is a general rule for researching an aspect of the language:  
 start by identifying the AST node for a feature; if you can't find  
 it by guesswork and reading ast.sml, ask someone on IRC or here.  
 Then depending on what you want to do for that feature, read the  
 parser.sml function that produces that node, the defn.sml function  
 that elaborates it, the type.sml rule that checks it, or the  
 eval.sml rule that evaluates it.

 It would also be good to read some of mach.sml to familiarize  
 yourself with the runtime machine model eval.sml manipulates, if  
 you're investigating a runtime entity. The interesting types are  
 VAL, OBJ, VAL_TAG, MAGIC, SCOPE, REGS, PROP_STATE

Re: ES3.1 Proposal Working Draft

2008-02-20 Thread Maciej Stachowiak

On Feb 20, 2008, at 1:00 PM, Adam Peller wrote:

 Each of us has some pet addition we think would be a great addition  
 to
 the language. const, decimal, getters and setters, destructing
 assignment -- all these have come up just this morning!. Each of  
 these
 makes the language larger and more complex, imposing a general  
 diffuse
 cost on everyone.

 Mark, as I recall, the discussion at the March meeting in Newton  
 involved implementing decimal arithmetic in ES3.1 to *replace* the  
 floating point implementation in ES3, thus no new syntax. Yes,  
 this would have unexpected results for those who actually have code  
 logic which expects a value of 46.19 pounds, in Mike's example  
 (see Numbers thread) but the benefits here seemed to far outweigh  
 this discrepancy. I can't speak to the technical level of detail  
 that Mike can, but at a high level it's seen as a bug by the vast  
 majority of users, and for all practical purposes, that's what it is


Besides compatibility issues, this would be a significant performance  
regression for math-heavy code. I would consider this a showstopper to  
implementing such a change.

I also agree with Mark's comment that arbitrary-precision integers and  
arbitrary-precision rationals seem like more generally useful types  
than decimal floating point, if any numeric types are to be added, but  
that seems like an issue more for ES4 than 3.1.

  - Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: proper tail calls

2008-01-21 Thread Maciej Stachowiak

On Jan 18, 2008, at 10:49 PM, Brendan Eich wrote:


 If, in order make the presence of an explicit form convenient, we  
 have to add sugar for it as an additional form of expression-closure  
 -- goto call-expr() means {goto call-expr();} -- I don't think  
 it's the end of the world. I do think, at present, we're meandering  
 aimlessly towards a system that claims to provide a way to make tail  
 calls, but in which nobody can ever figure out where or when they're  
 actually getting tail calls. I don't think it'll be useful to ship  
 such a language.

Is "goto" the only option being considered for how to identify tail
position? It seems to me this could easily be confused with what
"goto" means in languages like C, Pascal or C#. "return" might be a
good choice of syntax if it weren't for the implicit conversion
problem. How about something like "tailcall" or "tailreturn"?


Here is another thing I don't understand: if goto must flag runtime  
errors in cases where otherwise implicit type conversion would occur,  
then does that not itself break proper tail recursion (something has  
to hang around to do the typecheck on the actually returned type)? Or  
is the intent that it throws a runtime error before the call if it  
can't statically prove that return types match? Does that mean you can  
never goto an untyped function from one with a declared return type?
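
To make the question concrete, here is a sketch using the goto
call-expr() surface syntax under discussion (illustrative only; the
semantics are exactly what I am unsure about):

  function g(x) { return x; }      // untyped: result type unknown until runtime
  function f(x): double {
      goto g(x);   // a proper tail call? an error before the call? or must a
                   // frame stay behind to check that the result is a double?
  }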

Regards,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Re: proper tail calls

2008-01-21 Thread Maciej Stachowiak

On Jan 21, 2008, at 10:52 PM, Brendan Eich wrote:

 On Jan 21, 2008, at 8:02 PM, Maciej Stachowiak wrote:

 On Jan 21, 2008, at 12:35 PM, Brendan Eich wrote:

 Conversions (implicit and hardcoded among the
 built-in types representing and wrapping primitives) that might
 defeat PTC may not be evident until runtime, where the result would
 be a TypeError or possibly a new Error subtype.

 Isn't this case (implicit conversion) exactly what motivated the idea
 that programmers may not be able to easily tell if a call is in tail
 position?


 Indeed:

 ES4 has proper tail calls, but their constraints are sometimes  
 subtle, especially with regard to conversions or type checks  
 inserted at the return point. It may be that the Explicit Is Better  
 Than Implicit principle once again finds application here.

 First paragraph in http://bugs.ecmascript.org/ticket/323. Again, the  
 ticket is just sitting there, you don't need me transcribing it into  
 this list :-/.

What I meant to point out is that the motivating use case for  
additional up-front checking can't in general be checked until  
runtime, which somewhat undermines the point you made that many non- 
tail cases could be caught at compile time.

Cheers,
Maciej

___
Es4-discuss mailing list
Es4-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es4-discuss


Close review of Language Overview whitepaper

2007-11-14 Thread Maciej Stachowiak

Hello ES4 fans,

I have now read the recently posted whitepaper. I marked up my printed  
copy with many comments in the margins, and I am sharing them with the  
list now.

Please note that this does not constitute an official Apple position,  
just some personal off-the-cuff opinions. I have discussed the  
proposal with some of my colleagues, including Geoff Garen who  
attended the recent f2f, but we have not figured out a consensus  
overall position or anything. With the disclaimers out of the way,  
here are my review comments:

Section I.

Goals: I strongly agree with the stated goals of compatibility and  
enabling large software development. I wonder if perhaps performance  
should be added as a goal. At the very least we want it to be possible  
to achieve performance on par with ES3 engines, and ideally we want to  
enable better performance.


Section II.

Programming in the small: "... make the writing and reading of
fragments of code simpler and more effortless." That is somewhat
dubious grammatically; I suggest (with additional style fixes) "make
the reading and writing of code fragments easier."


Portability: This section first says that the full language must be
supported - subset profiles are not desirable. Then it says that, to
allow ES4 to be practically implementable on small devices and in
hosted environments, certain features, like extensive compile-time
analysis and stack marks, cannot be part of the language. Then it says
those features are part of the language, but optional.

I hope the problems here are clear: first, the section plainly  
contradicts itself. It argues against subsets and certain classes of  
features, and then says the spec includes such features as optional,  
thus defining a subset. So that needs to be fixed in the whitepaper.  
More significantly, I think this may be an indication that the  
language has failed to meet its design goals. My suggestion would be  
to remove all optional features (though I could be convinced that  
strict mode is a special case).


Section III.

Syntax: The new non-contextual keywords, and the resulting need to  
specify dialect out of band, are a problem. I'll have more to say  
about compatibility under separate cover.
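
For example (purely illustrative; I am not claiming this particular
word is affected):

  // Perfectly legal ES3, and common on the web today:
  var namespace = "http://example.org/";
  // If "namespace" (or any other identifier in wide use) became a
  // non-contextual keyword, this line would become a syntax error, which is
  // why the dialect would have to be signalled out of band.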

Behavior:
- This section says that variation among ES3 implementations
entails a license to specify behavior more precisely for ES4.
However, the example given is a case where behavior among two
implementations was already the same, due to compatibility
considerations. I actually think that both convergence on a single
behavior where variation is allowed, and variation that leads to
practical compatibility issues, are license to spec more precisely.

- The RegExp change - is this really a bug fix? It's likely that this
is not a big compatibility issue (Safari's ES3 implementation had
things the proposed ES4 way for some time), but I think ES3's approach
may be more performant, and generating a new object every time does
not seem especially helpful.
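
For reference, the observable difference as I understand it:

  function f() { return /x/; }
  f() === f();   // true under ES3 (the literal denotes one shared object),
                 // false if each evaluation creates a new RegExp object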

Impact: This section talks a lot about incompatibilities between ES4  
and ES3, however I think incompatibilities with ES3 as specced are in  
themselves almost irrelevant. What matters is incompatibilities with  
existing implementations and the content that depends on them. This  
section also appears to talk disparagingly about some implementations  
prioritizing compatibility over ES3 compliance, implies that any  
deviations may be due to inadequate engineering practices, and  
implies that only some implementations are not compatible with ES3.  
Is there any significant implementation that anyone would claim is  
100% free of ECMAScript 3 compliance bugs? I doubt it, and so I think  
we should make this section less judgmental in tone.

The web: Here especially, the actual concern is real-world  
compatibility, not compatibility with the ES4 spec. Furthermore, it  
completely ignores forward compatibility (the ability to serve ES4 to  
older browsers that do not support it). It implies that this is just  
an issue of aligning the timing of implementations. Ignoring for the  
moment how impractical it is to expect multiple implementations to  
roll out major new features in tandem, I note that there were similar  
theories behind XHTML, XSL, XHTML 2, and many other technologies that  
have largely failed to replace their predecessors. Again, I'll say  
more about compatibility (and in particular how the WHATWG approach to  
compatibility can be applied to ES4) under separate cover.



Section IV.

Classes: If any of the new type system is worthwhile, surely this is.  
The impedance mismatch between the class model used by most OO  
languages and by specifications like the DOM, and ES3's prototype  
model, is needlessly confusing to authors. So I approve of adding  
classes in a reasonable and tasteful way.

Dynamic properties: the fact that the dynamic behavior is not
inherited makes class inheritance violate the Liskov