Re: JS syntax future-proofing, Macros, the Reader (was: Performance concern with let/const)

2012-09-24 Thread Waldemar Horwat

On 09/18/2012 09:47 AM, Brendan Eich wrote:

2. Tim Disney, with help from Paul Stansifer (both Mozilla grad student 
interns), has figured out how to implement a Reader (in the Scheme sense) for 
JS, which does not fully parse JS but nevertheless correctly disambiguates 
/-as-division-operator from /-as-regexp-delimiter. See

https://github.com/mozilla/sweet.js

This Reader handles bracketed forms: () {} [] and /re/. Presumably it could 
handle quasis too. Since these bracketed forms can nest, the Reader is a PDA 
and so more powerful than the Lexer (a DFA or equivalent), but it is much 
simpler than a full JS parser -- and you need a Reader for macros.
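
[Editorial illustration -- a minimal sketch, not the sweet.js algorithm, of 
why reading needs a stack (a PDA) and how the preceding token can classify 
'/'. Note it treats every identifier-like token the same way, which is 
exactly where contextual keywords such as 'yield' cause trouble, as Waldemar 
points out below:]

    // Sketch only: track bracket nesting on a stack and classify each '/'
    // as division or the start of a regexp by the preceding character.
    function readSlashes(src) {
      var stack = [], found = [], prev = "";
      for (var i = 0; i < src.length; i++) {
        var ch = src.charAt(i);
        if (/\s/.test(ch)) continue;                 // skip whitespace
        if ("({[".indexOf(ch) >= 0) stack.push(ch);  // the stack is what makes
        if (")}]".indexOf(ch) >= 0) stack.pop();     //   this a PDA, not a DFA
        if (ch === "/") {
          // After an identifier character, digit, or closing bracket, '/'
          // is division; otherwise it opens a regexp literal, which this
          // naive sketch skips to its closing '/' (no escapes or classes).
          if (/[\w$)\]]/.test(prev)) {
            found.push({ at: i, kind: "division" });
          } else {
            found.push({ at: i, kind: "regexp" });
            i = src.indexOf("/", i + 1);
            if (i < 0) break;
            ch = "/";
          }
        }
        prev = ch;
      }
      return found;
    }
    // readSlashes("x = a / b")  -> [{ at: 6, kind: "division" }]
    // readSlashes("x = /ab/")   -> [{ at: 4, kind: "regexp" }]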


That's not possible.  See, for example, the case I showed during the meeting:

boom = yield/area/
height;

Is /area/ a regexp or two divisions and a variable?  You can't tell if you're 
using a purported universal parser based on ES5.1 and are unaware that yield is 
a contextual keyword which will be introduced into the language in ES6.  And 
yes, you can then get arbitrarily out of sync:

boom = yield/a+3/ string?  ...
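
[Editorial note -- to spell out the two readings of the first example above: 
in ES5 code, where yield is an ordinary identifier, the slashes are division; 
in an ES6 generator body, where yield is a keyword, /area/ is a regexp 
literal and ASI terminates the statement:]

    // ES5 reading: 'yield' is just a variable, so '/' is division, twice.
    boom = yield / area / height;

    // ES6 generator reading: 'yield' is a keyword, '/area/' is a regexp
    // literal, and ASI makes 'height;' a statement of its own.
    boom = yield /area/;
    height;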

Waldemar



Re: JS syntax future-proofing, Macros, the Reader (was: Performance concern with let/const)

2012-09-24 Thread Brendan Eich

Waldemar Horwat wrote:

On 09/18/2012 09:47 AM, Brendan Eich wrote:
2. Tim Disney, with help from Paul Stansifer (both Mozilla grad student 
interns), has figured out how to implement a Reader (in the Scheme sense) 
for JS, which does not fully parse JS but nevertheless correctly 
disambiguates /-as-division-operator from /-as-regexp-delimiter. See


https://github.com/mozilla/sweet.js

This Reader handles bracketed forms: () {} [] and /re/. Presumably it 
could handle quasis too. Since these bracketed forms can nest, the 
Reader is a PDA and so more powerful than the Lexer (a DFA or 
equivalent), but it is much simpler than a full JS parser -- and you 
need a Reader for macros.


That's not possible.  See, for example, the case I showed during the 
meeting:


boom = yield/area/
height;

Is /area/ a regexp or two divisions and a variable?  You can't tell if 
you're using a purported universal parser based on ES5.1 and are 
unaware that yield is a contextual keyword which will be introduced 
into the language in ES6.  And yes, you can then get arbitrarily out 
of sync:


boom = yield/a+3/ string?  ...


As I said to you at the meeting, there may be no problem if we simply 
stop adding contextual keywords and add macros instead.


We already gave up on adding /re/x and good riddance. Getting macros 
while freezing special form syntax strikes me as possibly a very good 
trade-off. What do you think?


/be


Re: Performance concern with let/const

2012-09-18 Thread Andreas Rossberg
On 17 September 2012 18:37, Luke Hoban lu...@microsoft.com wrote:
 'let' is certainly not going to be faster than 'var' in any case

There is at least one very important counterexample to that claim: the
global scope. Assuming lexical global scope (as we tentatively agreed
upon at the last meeting), using 'let' in global scope will be
_significantly_ faster than 'var', without even requiring any
cleverness from the VM.
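
[Editorial sketch of the reasoning, assuming the tentatively agreed 
lexical-global-scope semantics:]

    // Global 'var' creates a property of the global object, so reads go
    // through a property lookup that must tolerate the object changing:
    var a = 1;
    this.a;   // 1 -- the same binding, reachable as an object property

    // A global 'let' is a declarative binding with a fixed slot that the
    // compiler can address directly; it is not a property of the global
    // object at all:
    let b = 2;
    this.b;   // undefined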

/Andreas


Re: Performance concern with let/const

2012-09-18 Thread Allen Wirfs-Brock

On Sep 18, 2012, at 7:27 AM, Andreas Rossberg wrote:

 On 17 September 2012 19:51, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 
 On Sep 17, 2012, at 12:37 PM, Luke Hoban wrote:
 
 These are good questions.  Paul will be attending the TC39 meeting this 
 week, and can likely talk to specific details.   High level though, we 
 statically eliminate the TDZ checks for references to 'let' within the same 
 closure body as the declaration.
 
 
 The other major check that I would expect to be significant is whether an 
 inner function that references an outer TDZ binding is (potentially) called 
 before initialization of the binding. E.g.:
 
 {
    function f() { return x }
    f();        // TDZ check of x in f cannot be eliminated
    let x = 1;
 }
 
 {
    function f() { return x }
    let x = 1;  // TDZ check of x in f should be eliminated
    f();
 }
 
 Unfortunately, detecting this case in general requires significant
 static analysis, since f might be called indirectly through other
 functions (even ignoring the case where f is used in a first-class
 manner).
 
Yes, but there are fairly simple heuristics that approximate that result. For 
example: if no function calls dominate the initialization of x, then TDZ 
checks will never need to be made for x.
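
[Editorial illustration of the heuristic:]

    {
       function f() { return x }
       let y = 1 + 2;   // no function call can occur before x is initialized,
       let x = y;       // so no closure can read x during its TDZ, and the
       f();             // check inside f can be statically discharged
    }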
   
   


Re: Performance concern with let/const

2012-09-18 Thread Andreas Rossberg
On 18 September 2012 13:41, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 Yes, but there are fairly simple heuristics that approximate that result. 
 For example: if no function calls dominate the initialization of x, then TDZ 
 checks will never need to be made for x.

Yes, except that in JS, a function call can hide behind so many
seemingly innocent operations...
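
[Editorial illustration: a property read can hide a getter that calls back 
into a closure while the binding is still in its TDZ, so the check in f below 
cannot be dropped even though no call is syntactically visible before the 
initialization:]

    {
       function f() { return x }             // x is in its TDZ until 'let x = 1'
       var obj = { get p() { return f() } };
       var y = obj.p + 1;                    // looks like a plain read plus an
                                             // add, but the getter calls f()
                                             // -> ReferenceError
       let x = 1;
    }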

/Andreas


Re: JS syntax future-proofing, Macros, the Reader (was: Performance concern with let/const)

2012-09-18 Thread François REMY
My point is: how do you make sure you don't redefine an existing syntax? Or 
that the syntax you're defining for your personal use will not be reclaimed 
by a future version of ES?


   // Polyfill for syntaxFeature:
   /*@cc_on
   /*@if !support syntaxFeature */
   code that emulates the syntaxFeature
   /* @endif */
   */

There's probably an answer to that question; it's just that I'm not aware of 
it yet :-)






-Original Message- 
From: Brendan Eich

Sent: Tuesday, September 18, 2012 7:17 PM
To: François REMY
Cc: Luke Hoban ; Andreas Rossberg ; Paul Leathers ; es-discuss@mozilla.org
Subject: Re: JS syntax future-proofing, Macros, the Reader (was: 
Performance concern with let/const)


François REMY wrote:

|  But if we have macros, then many progressive
|  enhancement and anti-regressive polyfill approaches can be done, even
|  with new syntax (not just with APIs).

Seems like another good way of fixing the issue :-) However, this seems to 
require some form of conditional compilation to work, right? [1]


[1] http://www.javascriptkit.com/javatutors/conditionalcompile.shtml


Macros have nothing to do with conditional compilation _per se_. When
you read 'macros' in a JS context, don't think of the C preprocessor;
think of Scheme!

/be 




Re: JS syntax future-proofing, Macros, the Reader (was: Performance concern with let/const)

2012-09-18 Thread Brendan Eich

François REMY wrote:
My point is: how do you make sure you don't redefine an existing 
syntax? Or that the syntax you're defining for your personal use 
will not be reclaimed by a future version of ES?


   // Polyfill for syntaxFeature:
   /*@cc_on
   /*@if !support syntaxFeature */
   code that emulates the syntaxFeature
   /* @endif */
   */

There's probably an answer to that question; it's just that I'm not 
aware of it yet :-)


Yes, polyfilling means has-syntax testing. I'm told the sweet.js project 
is considering how to do this, but I bet a donut that it'll not look 
like CPP or @-CPP or any such ugly blast from the past.


More in a bit.

/be






-Original Message- From: Brendan Eich
Sent: Tuesday, September 18, 2012 7:17 PM
To: François REMY
Cc: Luke Hoban ; Andreas Rossberg ; Paul Leathers ; 
es-discuss@mozilla.org
Subject: Re: JS syntax future-proofing, Macros, the Reader (was: 
Performance concern with let/const)


François REMY wrote:

|  But if we have macros, then many progressive
|  enhancement and anti-regressive polyfill approaches can be done, even
|  with new syntax (not just with APIs).

Seems like another good way of fixing the issue :-) However, this 
seems to require some form of conditional compilation to work, 
right? [1]


[1] http://www.javascriptkit.com/javatutors/conditionalcompile.shtml


Macros have nothing to do with conditional compilation _per se_. When
you read 'macros' in a JS context, don't think of the C preprocessor;
think of Scheme!

/be




Re: JS syntax future-proofing, Macros, the Reader (was: Performance concern with let/const)

2012-09-18 Thread David Herman
It's still early to say. But my feeling is that if we can get macros working, 
we can introduce new syntax via modules, rather than unconditionally throwing 
it in everywhere. Then you don't have to do these kinds of global conditional 
things; rather, you just import the syntax from modules. The dynamic testing 
would then be done during app setup, when you construct your global environment 
before running the client code that uses it.

The goal of macros is to make syntax more modular.
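
[Editorial sketch of the idea. The macro is written in the sweet.js rule 
style; the export/import of syntax from a module is purely hypothetical, 
since neither the module design nor the macro-export design was settled:]

    // In a module (hypothetical macro-export):
    macro swap {
      rule { ($a, $b) } => {
        var tmp = $a; $a = $b; $b = tmp;
      }
    }
    export swap;

    // In client code: importing the *syntax*, not a runtime value, from
    // a made-up module name:
    import { swap } from "macros/utils";
    var x = 1, y = 2;
    swap (x, y)   // expands at compile time into the three statements above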

Dave

On Sep 18, 2012, at 2:23 PM, Brendan Eich bren...@mozilla.com wrote:

 François REMY wrote:
 My point is: how do you make sure you don't redefine an existing syntax? Or 
 that the syntax you're defining for your personal use will not be reclaimed 
 by a future version of ES?
 
   // Polyfill for syntaxFeature:
   /*@cc_on
   /*@if !support syntaxFeature */
   code that emulates the syntaxFeature
   /* @endif */
   */
 
 There's probably an answer to that question; it's just that I'm not aware of 
 it yet :-)
 
 Yes, polyfilling means has-syntax testing. I'm told the sweet.js project is 
 considering how to do this, but I bet a donut that it'll not look like CPP or 
 @-CPP or any such ugly blast from the past.
 
 More in a bit.
 
 /be
 
 
 
 
 
 -Original Message- From: Brendan Eich
 Sent: Tuesday, September 18, 2012 7:17 PM
 To: François REMY
 Cc: Luke Hoban ; Andreas Rossberg ; Paul Leathers ; es-discuss@mozilla.org
 Subject: Re: JS syntax future-proofing, Macros, the Reader (was: 
 Performance concern with let/const)
 
 François REMY wrote:
 |  But if we have macros, then many progressive
 |  enhancement and anti-regressive polyfill approaches can be done, even
 |  with new syntax (not just with APIs).
 
 Seems like another good way of fixing the issue :-) However, this seems to 
 require some form of conditional compilation to work, right? [1]
 
 [1] http://www.javascriptkit.com/javatutors/conditionalcompile.shtml
 
 Macros have nothing to do with conditional compilation _per se_. When
 you read 'macros' in a JS context, don't think of the C preprocessor;
 think of Scheme!
 
 /be
 


Re: Performance concern with let/const

2012-09-17 Thread Allen Wirfs-Brock
some comments below

On Sep 16, 2012, at 9:35 PM, Luke Hoban wrote:

 We've begun deeper investigations of implementation practicalities related to 
 let/const, and two significant performance concerns have been raised.  I 
 think these both merit re-opening discussion of two aspects of the let/const 
 design.
 
 
 __Temporal dead zones__
 
 For reference on previous discussion of temporal dead zone see [1].
 
 I've expressed concerns with the performance overhead required for temporal 
 dead zones in the past, but we did not at the time have any data to point to 
 regarding the scale of the concern.  
 
 As an experiment, I took the early-boyer test from V8 and changed 'var' to 
 'let'.  In Chrome preview builds with 'let' support, I saw a consistent ~27% 
 slowdown.  That is, the 'let is the new var' mantra leads to 27% slower code 
 in this example for the same functionality.  

Without evaluating the quality of the Chrome implementation, this isn't a 
meaningful observation. And since the Chrome implementers have stated that 
they have not done any optimization, it is actually a misleading one. You 
really should strike this assertion from your argument and start with your 
actual experiments as the evidence to support your position.


 
 However, we are aware that there is a class of dynamic checks that can be 
 removed by static analysis - in particular intra-procedural use before 
 assignment checks.  We implemented these checks in a Chakra prototype, and 
 even with these, we still see an ~5% slowdown.  
 
 Our belief is that any further removal of these dynamic checks 
 (inter-procedural checks of accesses to closure captured let references) is a 
 much more difficult proposition, if even possible in any reasonable 
 percentage of cases.  

To understand the general applicability of these results, we need to know 
what specific static optimizations you performed and to evaluate them against 
the list of plausible optimizations. I'd also like to verify that the 
specific coding patterns within the test program could not have been 
statically optimized.

It's great to use experimental results to support your point, but it needs to 
be possible to independently validate and analyze those results.

 
 Unless we can be sure that the above perf hit can indeed be easily overcome, 
 I'd like to re-recommend that temporal dead zones for let and const be 
 removed from the ES6 specification.  Both would remain block scoped binding, 
 but would be dynamically observable in 'undefined' state - including that 
 'const' would be observable as 'undefined' before single assignment.  

We really don't have enough evidence to come to that conclusion.

 
 In particular - the case against temporal dead zones is as follows:
 
 1. The value of temporal dead zones is to catch a class of programmer errors. 
  This value is not overly significant (it's far from the most common error 
 that lint-like tools catch today, or that affects large code bases in 
 practice), and I do not believe the need/demand for runtime-enforced 
 protection against this class of errors has been proven.  This feature of 
 let/const is not the primary motivation for either feature (block scoped 
 binding, inlinability and errors on re-assignment to const are the motivating 
 features).

As far as I'm concerned, the motivating feature for TDZs is to provide a 
rational semantics for const. There was significant technical discussion of 
that topic, and TDZs emerged as the best solution. An alternative argument you 
could make would be to eliminate const. Is there a reason you aren't making 
that argument?
 
 2. The stated goal of 'let' is to replace 'var' in common usage (and if this 
 is not the goal, we should not be adding 'let'). 

There is actually some disagreement about that statement of the goal. The goal 
of let is to provide variables that are scoped to the block level. That is the 
significant new semantics being added. The slogan-ism isn't the goal.

As stated above, let isn't the motivator for TDZ; const is. Let could easily 
be redefined to not need a TDZ (if that really proved to be a major area of 
concern). So you either need to argue against const, or argue against block 
scoping in general, rather than against let.

 
 3. Unless the above performance hit can be overcome, and given #2 above, *let 
 will slow down the web by ~5%*.

As covered above, this is a bogus assertion without data to support it.

 
 4. Even if the above performance hit can be (mostly) overcome with net new 
 engine performance work, that is performance work being forced on engine 
 vendors simply to not make the web slower, and comes at the opportunity cost 
 of actually working on making the web *faster*.  

Again, isn't this really a question about the value of const?
 
 5. We are fairly confident that it is not possible to fully remove the 
 runtime overhead cost associated with temporal dead zones. That means that, 
 as a rule, 'let' will be slower than 'var'.

RE: Performance concern with let/const

2012-09-17 Thread Luke Hoban
From: Andreas Rossberg [mailto:rossb...@google.com] 
 On 17 September 2012 03:35, Luke Hoban lu...@microsoft.com wrote:
  __Temporal dead zones__
 
  As an experiment, I took the early-boyer test from V8 and changed 'var' to 
  'let'.  In Chrome preview builds with 'let' support, I saw a consistent 
  ~27% slowdown.  That is, the 'let is the new var' mantra leads to 27% 
  slower code in this example for the same functionality.

 Just to be clear, the V8 implementation of block scoping and 'let' has not 
 seen any serious performance optimisation yet. So I'd take these numbers 
 (which I haven't verified) with a large grain of salt.

Yes - sorry I didn't make this more clear.  This baseline was relevant mostly 
because it motivated trying to gather data with some of the key optimizations 
implemented.


 Also, I conjecture that the main cost for 'let' isn't the temporal dead-zone, 
 but block allocation. In particular, a 'let' in a loop adds significant extra 
 cost if no static analysis is performed that would allow hoisting the 
 allocation out of the loop.

That may well be another significant perf concern. For early-boyer in 
particular, I believe the structure of the code ensures that this particular 
issue will not come into play - the code largely hoists variable declarations 
to the top of the function scope.
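
[Editorial illustration of the allocation cost Andreas describes: if a 
closure captures a per-iteration 'let' binding, the engine must, absent 
analysis, allocate a fresh environment each time around the loop:]

    var fns = [];
    for (let i = 0; i < 3; i++) {
       // under per-iteration binding each pass gets a fresh 'i', so each
       // closure below captures a different environment record
       fns.push(function () { return i });
    }
    fns[0](); fns[1](); fns[2]();   // 0, 1, 2 -- with 'var i' all would be 3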


  However, we are aware that there is a class of dynamic checks that can be 
  removed by static analysis - in particular intra-procedural use before 
  assignment checks.  We implemented these checks in a Chakra prototype, and 
  even with these, we still see an ~5% slowdown.

 I would like to understand this better. AFAICT, you don't necessarily need 
 much static analysis. In most cases (accesses from within the same closure) 
 the check can be trivially omitted. 

Yes - this is what we implemented in the Chakra prototype.  Though note that 
the 'switch' issue raised on the list recently leads to cases even within a 
closure body where static analysis is insufficient - such as this (though I 
would guess this case won't be the primary perf culprit):

function f(x) {
   do {
      switch (x) {
      case 0:
         return x;      // always a runtime error
      case 1:
         let x;
         x = 'let';     // never a runtime error
      case 2:
         return x;      // sometimes a runtime error
      }
   } while (foo());
}


 __ Early Errors__

 This ultimately means that any script which mentions 'const' will defeat a 
 significant aspect of deferred AST building, and therefore take a load time 
 perf hit.

This is indeed a concern. However, I don't think 'const' is the only problem; 
other ES6 features (such as modules) will probably introduce similar classes 
of early errors.

Agreed - this concern is broader than 'let'/'const'.


 More generally - this raises a concern about putting increasingly more 
 aggressive static analysis in early errors.  It may, for example, argue for 
 a 3rd error category, of errors that must be reported before any code in 
 their function body executes.

 I agree that this is a worthwhile possibility to consider. I mentioned this 
 idea to Dave once and he didn't like it much, but maybe we should have a 
 discussion.

I understand the potential concern with this sort of thing - it weakens the 
up-front feedback from early errors. But I'm not sure JavaScript developers 
really want to pay a performance cost in return for more up-front static 
analysis during page load.


Luke





Re: Performance concern with let/const

2012-09-17 Thread François REMY

(Just one opinion)

I'm all in favor of function-level parse errors. This reminds me of an article 
by Ian Hickson in which he wondered why, unlike CSS, the ECMAScript language 
didn't define a generic syntax for well-formed programs (tokens, balanced 
parentheses and brackets, ...) that would let a parser replace any block it 
didn't understand with a { throw ParseError() } block.

   function A() {
       ooops {
           return 3;
       }
   }

would be the same as

   function A() {
       do { throw new ParseError(...); }
   }

I'm not saying we need to go that far (in fact, ASI probably makes it 
impossible), but any progress toward more modularity and lazy compilation 
is worth taking. 




Re: Performance concern with let/const

2012-09-17 Thread Brendan Eich

Agree with your points in reply to Luke, one clarification here:

Allen Wirfs-Brock wrote:

As stated above, let isn't the motivator for TDZ; const is. Let could easily 
be redefined to not need a TDZ (if that really proved to be a major area of 
concern). So you either need to argue against const, or argue against block 
scoping in general, rather than against let.


TC39 has been divided on this but managed to reach TDZ consensus. 
Waldemar argued explicitly for TDZ for let, as (a) future-proofing for 
guards; (b) to enable let/const refactoring without surprises.


One could argue that (a) can be deferred to let-with-guards, should we 
add guards. (b) I find more compelling.


/be


RE: Performance concern with let/const

2012-09-17 Thread Luke Hoban
From: Allen Wirfs-Brock [mailto:al...@wirfs-brock.com] 
 On Sep 16, 2012, at 9:35 PM, Luke Hoban wrote:
 
 As an experiment, I took the early-boyer test from V8 and changed 'var' to 
 'let'.  In Chrome preview builds with 'let' support, I saw a consistent ~27% 
 slowdown.  That is, the 'let is the new var' mantra leads to 27% slower code 
 in this example for the same functionality.  

 Without evaluating the quality of the Chrome implementation, this isn't a 
 meaningful observation. And since the Chrome implementers have stated that 
 they have not done any optimization, it is actually a misleading one. You 
 really should strike this assertion from your argument and start with 
 your actual experiments as the evidence to support your position.

Yes - this was definitely not a significant aspect of the argument - it was 
just the initial datapoint which motivated us to do a deeper performance 
investigation. 


 However, we are aware that there is a class of dynamic checks that can be 
 removed by static analysis - in particular intra-procedural use before 
 assignment checks.  We implemented these checks in a Chakra prototype, and 
 even with these, we still see an ~5% slowdown.  
  
  Our belief is that any further removal of these dynamic checks 
  (inter-procedural checks of accesses to closure captured let references) is 
  a much more difficult proposition, if even possible in any reasonable 
  percentage of cases.  

 To understand the general applicability of these results, we need to know 
 what specific static optimizations you performed and to evaluate them against 
 the list of plausible optimizations. I'd also like to verify that the 
 specific coding patterns within the test program could not have been 
 statically optimized.

These are good questions.  Paul will be attending the TC39 meeting this week, 
and can likely talk to specific details.   High level though, we statically 
eliminate the TDZ checks for references to 'let' within the same closure body 
as the declaration. 


  Unless we can be sure that the above perf hit can indeed be easily 
  overcome, I'd like to re-recommend that temporal dead zones for let and 
  const be removed from the ES6 specification.  Both would remain block 
  scoped binding, but would be dynamically observable in 'undefined' state - 
  including that 'const' would be observable as 'undefined' before single 
  assignment.  

 We really don't have enough evidence to come to that conclusion.

I'm not as sure. I'm not convinced we have evidence that TDZ is actually 
demanded by developers. I'm more convinced that we have evidence that TDZ 
makes let strictly slower than var. The only question seems to be how much, 
and whether this is significant enough to counterbalance the perceived 
developer demand for TDZ.


 In particular - the case against temporal dead zones is as follows:
 
 1. The value of temporal dead zones is to catch a class of programmer 
 errors.  This value is not overly significant (it's far from the most common 
 error that lint-like tools catch today, or that affects large code bases in 
 practice), and I do not believe the need/demand for runtime-enforced 
 protection against this class of errors has been proven.  This feature of 
 let/const is not the primary motivation for either feature (block scoped 
 binding, inlinability and errors on re-assignment to const are the 
 motivating features).

 As far as I'm concerned, the motivating feature for TDZs is to provide a 
 rational semantics for const. There was significant technical discussion of 
 that topic, and TDZs emerged as the best solution. An alternative argument you 
 could make would be to eliminate const. Is there a reason you aren't making 
 that argument?

I'm not as convinced that a const which is undefined until singly assigned is 
irrational.  When combined with a 'let' which can be observed as 'undefined', 
I believe developers would understand this semantic.  The optimization 
opportunity for 'const' remains the same - it can be inlined whenever the same 
static analysis needed to avoid TDZ checks would apply.  

I am not arguing for eliminating const because 'const' at least has a potential 
performance upside and thus can motivate developer usage. I am honestly more 
inclined to argue for eliminating 'let' if it ends up having an appreciable 
performance cost over 'var', as its upside value proposition is not strong.


 2. The stated goal of 'let' is to replace 'var' in common usage (and if this 
 is not the goal, we should not be adding 'let'). 

 There is actually some disagreement about that statement of the goal. The 
 goal of let is to provide variables that are scoped to the block level. That 
 is the significant new semantics being added. The slogan-ism isn't 
 the goal.

This strikes at a critical piece of the discussion around 'let'. Adding a new 
fundamental block scoped binding form ('let') has a very significant conceptual 
cost to the language. If it is not the expectation of the committee that new 
code will nearly universally adopt 'let' instead of 'var', and that books will 
be able to state 'use let instead of var', then I think that brings into 
question whether 'let' is still passing the cost/value tradeoff. This tradeoff 
gets increasingly weaker as additional performance overheads are entered into 
the cost bucket.

RE: Performance concern with let/const

2012-09-17 Thread Luke Hoban
From: Brendan Eich [mailto:bren...@mozilla.org] 
 Allen Wirfs-Brock wrote:
  As stated above, let isn't the motivator for TDZ; const is. Let could 
  easily be redefined to not need a TDZ (if that really proved to be a major 
  area of concern). So you either need to argue against const, or argue 
  against block scoping in general, rather than against let.

 TC39 has been divided on this but managed to reach TDZ consensus. 
 Waldemar argued explicitly for TDZ for let, as (a) future-proofing for 
 guards; (b) to enable let/const refactoring without surprises.

 One could argue that (a) can be deferred to let-with-guards, should we add 
 guards. (b) I find more compelling.

That's right - I referenced the original mail with the detailed writeup of the 
discussion leading to those decisions and the consensus you note.  At the time, 
I raised concerns about performance overhead of TDZ, and I continue to believe 
it's important to weigh those performance concerns significantly in the 
discussion about TDZ.

My actual proposal is to remove TDZ for both 'let' and 'const', which addresses 
the refactoring concern.  But it leads to 'const' being observable as 
undefined, which I expect is the more controversial aspect (though I'm not 
personally convinced this is a particularly significant practical concern). 

Luke




Re: Performance concern with let/const

2012-09-17 Thread Allen Wirfs-Brock

On Sep 17, 2012, at 12:37 PM, Luke Hoban wrote:

 
 These are good questions.  Paul will be attending the TC39 meeting this week, 
 and can likely talk to specific details.   High level though, we statically 
 eliminate the TDZ checks for references to 'let' within the same closure body 
 as the declaration. 
 

The other major check that I would expect to be significant is whether an inner 
function that references an outer TDZ binding is (potentially) called before 
initialization of the binding. E.g.:

{
   function f() { return x }
   f();        // TDZ check of x in f cannot be eliminated
   let x = 1;
}

{
   function f() { return x }
   let x = 1;  // TDZ check of x in f should be eliminated
   f();
}
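
[Editorial note: for concreteness, the dynamic check under discussion amounts 
to something like the following sketch -- not any engine's actual code 
generation:]

    var UNINITIALIZED = {};       // engine-internal sentinel (sketch only)
    var slot_x = UNINITIALIZED;   // what 'let x' sets up: a slot, uninitialized

    function read_x() {           // what a checked read of x compiles to
       if (slot_x === UNINITIALIZED) {
          throw new ReferenceError("x is not initialized");
       }
       return slot_x;
    }
    // ...and 'let x = 1' then becomes: slot_x = 1;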

Allen




Re: Performance concern with let/const

2012-09-17 Thread Domenic Denicola

 2. The stated goal of 'let' is to replace 'var' in common usage (and if 
 this is not the goal, we should not be adding 'let'). 
 
 There is actually some disagreement about that statement of the goal. The 
 goal of let is to provide variables that are scoped to the block level. That 
 is the significant new semantics being added. The slogan-ism isn't 
 the goal.
 
 This strikes at a critical piece of the discussion around 'let'.  Adding a 
 new fundamental block scoped binding form ('let') has a very significant 
 conceptual cost to the language.  If it is not the expectation of the 
 committee that new code will nearly universally adopt 'let' instead of 'var', 
 and that books will be able to state 'use let instead of var', then I think 
 that brings into question whether 'let' is still passing the cost/value 
 tradeoff.  This tradeoff gets increasingly weaker as additional performance 
 overheads are entered into the cost bucket.

To provide an (admittedly single) developer perspective: let/const are 
attractive because they bring us closer to eliminating the confusion inherent 
in hoisting and to achieving the same semantics as C-family languages.
seems that disallowing use before declaration is not possible, hoisting to 
block-level plus TDZ checks for the intermediate code gives a reasonable 
approximation, at least assuming I've understood the proposals and email 
threads correctly.

There are also a number of auxiliary benefits like the fresh per-loop binding 
and of course const optimizations/safeguards (which eliminate the need for a 
dummy object with non-writable properties to store one's constants).

Personally in the grand scheme of things even a 5% loss of speed is unimportant 
to our code when weighed against the value of the saner semantics proposed. We 
would immediately replace all vars with let/const if we were able to program 
toward this Chakra prototype (e.g. for our Windows 8 app).

I am almost hesitant to bring up such an obvious argument but worrying about 
this level of optimization seems foolhardy in the face of expensive DOM 
manipulation or async operations. Nobody worries that their raw JS code will 
run 5% slower because people are using Chrome N-1 instead of Chrome N. Such 
small performance fluctuations are a fact of life even with ES5 coding patterns 
(e.g. arguments access, getters/setters, try/catch, creating a closure without 
manually hoisting it to the outermost applicable level, using array extras 
instead of for loops, …). If developers actually need to optimize at a 5% level 
solely on their JS they should probably consider LLJS or similar.

That said I do understand that a slowdown could make the marketing story harder 
as not everyone subscribes to my views on the speed/clarity tradeoff.


Performance concern with let/const

2012-09-16 Thread Luke Hoban
We've begun deeper investigations of implementation practicalities related to 
let/const, and two significant performance concerns have been raised.  I think 
these both merit re-opening discussion of two aspects of the let/const design.


__Temporal dead zones__

For reference on previous discussion of temporal dead zone see [1].

I've expressed concerns with the performance overhead required for temporal 
dead zones in the past, but we did not at the time have any data to point to 
regarding the scale of the concern.  

As an experiment, I took the early-boyer test from V8 and changed 'var' to 
'let'.  In Chrome preview builds with 'let' support, I saw a consistent ~27% 
slowdown.  That is, the 'let is the new var' mantra leads to 27% slower code in 
this example for the same functionality.  

However, we are aware that there is a class of dynamic checks that can be 
removed by static analysis - in particular intra-procedural use before 
assignment checks.  We implemented these checks in a Chakra prototype, and even 
with these, we still see an ~5% slowdown.  

Our belief is that any further removal of these dynamic checks 
(inter-procedural checks of accesses to closure captured let references) is a 
much more difficult proposition, if even possible in any reasonable percentage 
of cases.  

Unless we can be sure that the above perf hit can indeed be easily overcome, 
I'd like to re-recommend that temporal dead zones for let and const be removed 
from the ES6 specification.  Both would remain block scoped binding, but would 
be dynamically observable in 'undefined' state - including that 'const' would 
be observable as 'undefined' before single assignment.  
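
[Editorial sketch of the observable difference under this proposal:]

    {
       console.log(c);   // proposal: undefined; ES6 draft with TDZ: ReferenceError
       const c = 1;      // single assignment would still be enforced
       console.log(c);   // 1 under either semantics
    }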

In particular - the case against temporal dead zones is as follows:

1. The value of temporal dead zones is to catch a class of programmer errors.  
This value is not overly significant (it's far from the most common error that 
lint-like tools catch today, or that affects large code bases in practice), and 
I do not believe the need/demand for runtime-enforced protection against this 
class of errors has been proven.  This feature of let/const is not the primary 
motivation for either feature (block scoped binding, inlinability and errors on 
re-assignment to const are the motivating features).

2. The stated goal of 'let' is to replace 'var' in common usage (and if this is 
not the goal, we should not be adding 'let').

3. Unless the above performance hit can be overcome, and given #2 above, *let 
will slow down the web by ~5%*.

4. Even if the above performance hit can be (mostly) overcome with net new 
engine performance work, that is performance work being forced on engine 
vendors simply to not make the web slower, and comes at the opportunity cost of 
actually working on making the web *faster*.  

5. We are fairly confident that it is not possible to fully remove the runtime 
overhead cost associated with temporal dead zones.  That means that, as a rule, 
'let' will be slower than 'var'.   And possibly significantly slower in certain 
coding patterns. Even if that's only 1% slower, I don't think we're going to 
convince the world to use 'let' if its primary impact on their code is to make 
it slower.  (The net value proposition for let simply isn't strong enough to 
justify this).

6. The only time-proven implementation of let/const (SpiderMonkey) did not 
implement temporal dead zones.  The impact of this feature on the practical 
performance of the web is not well enough understood relative to the value 
proposition of temporal dead zones.


__ Early Errors__

Let and const introduce a few new early errors (though this general concern 
impacts several other areas of ES6 as well).  Of particular note, assignment to 
const and re-declaration of 'let' are spec'd as early errors. 

Assignment to const is meaningfully different from previous early errors, 
because detecting it requires binding references *before any code runs*.  
Chakra today parses the whole script input to report syntax errors, but avoids 
building and storing ASTs until function bodies are executed [2].  Since it is 
common for significant amounts of script on typical pages to be downloaded but 
not ever executed, this can save significant load time performance cost.  

However, if scope chains and variable reference binding for all scopes in the 
file need to be established before any code executes, significantly more work 
is required during this load period.  This work cannot be deferred (and 
potentially avoided entirely if the code is not called), because early errors 
must be identified before any code executes.
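
[Editorial illustration: under the draft's rules, a script like the following 
must be rejected before any of it runs, which requires resolving a reference 
inside a function body that may never execute:]

    const k = 1;

    function neverCalled() {
       k = 2;   // spec'd as an early error: assignment to a const binding.
                // Detecting it requires binding this 'k' to the outer
                // declaration before any code runs, even though this
                // function is never called.
    }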

This ultimately means that any script which mentions 'const' will defeat a 
significant aspect of deferred AST building, and therefore take a load time 
perf hit.  

More generally - this raises a concern about putting increasingly more 
aggressive static analysis in early errors. It may, for example, argue for a 
3rd error category, of errors that must be reported before any code in their 
function body executes.