Re: excluding features from sloppy mode

2012-12-27 Thread Brendan Eich

Kevin Smith wrote:


It does not even contain the word "strict". IIRC (and I asked
about this at the last TC39 meeting and got verbal confirmation),
the idea of module {...} implying strict mode was latent, or
intended. I'm not sure about out of line modules.

At this point, best thing is to summon Dave.


Since any new code will likely be written as a module (even in the 
near-term, transpiled back to ES5), this would be the ideal scenario.


Which "this" do you mean? Modules (in or out of line) implying strict 
mode can target ES5 strict, no problem.



But I'm trying to think through the implications while waiting.



One more thought from me, then I'll shut up for a bit:

Mark wants no micro-modes but really (and I appreciate his candor) 
wants no sloppy mode extension if possible. I see things differently but 
I've started coming down on the side of more implicit strictness: 
module, class, function*, perhaps we should revisit arrows. (Allen has 
to spec something in the way of poisoned or absent .caller, etc. on 
arrow function objects.)


IOW, I want more strict extensions too, but implicitly! Again, having to 
write "use strict"; itself makes for more sloppy code over time, but new 
syntax can be its own reward for the new semantics.


So I'm not convinced your slippery slope argument should prevail.

/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Changing [[Prototype]]

2012-12-27 Thread David Bruant

On 27/12/2012 06:40, David Herman wrote:
We've given up on the (non-existent) invariant that [[Prototype]] is 
immutable. That doesn't mean we should set caution to the wind and 
specify standard libraries that mutate [[Prototype]] links whenever it 
happens to solve some problem.
As you'll see in the www-dom thread, I've tried really hard to find 
better ideas (or I guess rather understanding the problem better). The 
window (no pun) of opportunity is really small

"because it'd be web-breaking" comes back often.

I'm in hope but doubtful a better solution can be found.

David


Re: The 1JS experiment has failed. Let's return to plan A.

2012-12-27 Thread David Bruant

On 27/12/2012 02:52, Brandon Benvie wrote:
As an aside, ES itself can't self-host its own builtins in strict mode 
because of two of the very few semantic differences that exist 
between strict mode and non-strict mode: non-strict thrower properties 
(which I've come to consider an annoying blight that punishes 
developers in order to influence implementers) and strict this-mode 
differences. Every semantic difference you mandate furthers this gap.
I fail to understand why built-ins can't be implemented in strict mode. 
Can you provide a concrete example of something that can't?
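(One of the differences Brandon alludes to, the strict this-mode rule, can be sketched as follows; `globalThis` is used here only as a portable, modern name for the global object, and this is an illustration rather than an actual built-in implementation:)

```javascript
// In sloppy mode, a bare function call coerces `this` to the global
// object; in strict mode it stays undefined.
function sloppyThis() { return this; }
function strictThis() { "use strict"; return this; }

console.log(sloppyThis() === globalThis); // true  -- coerced to the global
console.log(strictThis() === undefined);  // true  -- no coercion in strict code
```

A self-hosted built-in written in strict mode would therefore see `undefined` where legacy callers of, say, extracted methods expect the global object.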


Thanks,

David


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 01:50, David Herman dher...@mozilla.com wrote:

 On Dec 11, 2012, at 2:45 AM, Andreas Rossberg rossb...@google.com wrote:
  The question, then, boils down to what the observation should be: a

  runtime error (aka temporal dead zone) or 'undefined'. Given that
  choice, the former is superior in almost every way, because the latter
  prevents subtle initialisation errors from being caught early, and is
  not an option for most binding forms anyway.

 You only listed good things (which I agree are good) about TDZ, but you
 don't list its drawbacks. I believe the drawbacks are insurmountable.


 Let's start with TDZ-RBA. This semantics is *totally untenable* because it
 goes against existing practice. Today, you can create a variable that
 starts out undefined and use that on purpose:


I think nobody ever proposed going for this semantics, so we can put that
aside quickly. However:


 var x;
 if (...) { x = ... }
 if (x === undefined) { ... }

 If you want to use let instead, the === if-condition will throw. You would
 instead have to write:

 let x = undefined;
 if (...) { x = ... }
 if (x === undefined) { ... }


That is not actually true, because AFAICT, let x was always understood to
be equivalent to let x = undefined.


OK, so now let's consider TDZ-UBI. This now means that an initializer is
 different from an assignment, as you say:

  They are initialisations, not assignments. The difference, which is
  present in other popular languages as well, is somewhat important,
  especially wrt immutable bindings.

 For `const`, I agree that some form of TDZ is necessary. But `let` is the
 important, common case. Immutable bindings (`const`) should not be driving
 the design of `let`. Consistency with `var` is far more important than
 consistency with `const`.


There is not just 'let' and 'const' in ES6, but more than a handful of
declaration forms. Even if nothing else mattered, I think it
would be rather confusing if 'let' had semantics completely
different from all the rest.

And for `let`, making initializers different from assignments breaks a
 basic assumption about hoisting. For example, it breaks the equivalence
 between

 { stmt ... let x = e; stmt' ... }

 and

 { let x; stmt ... x = e; stmt' ... }

 This is an assumption that has always existed for `var` (mutatis mutandis
 for the function scope vs block scope). You can move your declarations
 around by hand and you can write code transformation tools that move
 declarations around.


As Dominic has pointed out already, this is kind of a circular argument.
The only reason you care about this for 'var' is because 'var' is doing
this implicitly already. So programmers want to make it explicit for the
sake of clarity. TDZ, on the other hand, does not have this implicit
widening of lifetime, so there is no need to make anything explicit.

It's true that with TDZ, there is a difference between the two forms above,
but that is irrelevant, because that difference can only be observed for
erroneous programs (i.e. where the first version throws, because 'x' is
used by 'stmt').
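(The observable difference Andreas describes can be sketched concretely; this runs under the eventual ES6 semantics:)

```javascript
// Form 1: reading x before its `let x = e;` line throws under TDZ.
{
  let threw = false;
  try { x; } catch (e) { threw = e instanceof ReferenceError; }
  console.log(threw); // true -- x is in its temporal dead zone
  let x = 1;
}

// Form 2: `let x;` at the top of the block initialises x to undefined,
// so the same early read succeeds -- which is exactly why the two forms
// differ only for programs that (erroneously) touch x too early.
{
  let x;
  console.log(x); // undefined
  x = 1;
}
```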

/Andreas


Re: excluding features from sloppy mode

2012-12-27 Thread Mark S. Miller
On Thu, Dec 27, 2012 at 12:24 AM, Brendan Eich bren...@mozilla.com wrote:
 Kevin Smith wrote:


 It does not even contain the word "strict". IIRC (and I asked
 about this at the last TC39 meeting and got verbal confirmation),
 the idea of module {...} implying strict mode was latent, or
 intended. I'm not sure about out of line modules.

 At this point, best thing is to summon Dave.


 Since any new code will likely be written as a module (even in the
 near-term, transpiled back to ES5), this would be the ideal scenario.


 Which "this" do you mean? Modules (in or out of line) implying strict mode
 can target ES5 strict, no problem.


 But I'm trying to think through the implications while waiting.


 One more thought from me, then I'll shut up for a bit:

 Mark wants no micro-modes but really (and I appreciate his candor) wants
 no sloppy mode extension if possible. I see things differently but I've
 started coming down on the side of more implicit strictness: module, class,
 function*, perhaps we should revisit arrows. (Allen has to spec something in
 the way of poisoned or absent .caller, etc. on arrow function objects.)

 IOW, I want more strict extensions too, but implicitly! Again, having to
 write "use strict"; itself makes for more sloppy code over time, but new
 syntax can be its own reward for the new semantics.

Geez I find this tempting. But I cannot agree. Code is read more often
than it is written, and ease of opting into strict mode isn't worth
the price of making it harder to tell which code is in strict mode. I
agree with Kevin's point #3. function* and arrow functions, being
functions, have function bodies. For function functions, they opt
into strict if they begin with "use strict". It would be confusing to
a reader of code for some functions to do this implicitly. It would
not be confusing for *readers* to not have function* or arrow
functions available in sloppy mode. When reading sloppy code, these
new function forms wouldn't appear without a "use strict" pragma, and
so wouldn't raise any new strictness questions for readers.

Class is an interesting case though, for three reasons.
1) Its body is not a function body, and so it would be yet more syntax
to enable a class to opt into strict mode explicitly.
2) It is a large-grain abstraction mechanism, much like modules, and
often used as the only module-like mechanism in many existing
programming languages. (Yes, JavaScript is a different language. But
we called it class to leverage some of that prior knowledge.)
3) It looks as foreign to old ES3 programmers as does module.

So I recommend no implicit opt-in, except for module (of course) and
possibly class. If class does not implicitly opt in, we need to extend
the class body syntax to accept a "use strict" pragma.

As for what function forms or heads require explicit opt-in, that
hangs on the micro-mode issue. If you're right that we would not make
things simpler if these were available only in strict mode, then I
agree with your conclusion. More later after I review where these
micro-modes ended up, especially the scoping issues on default
argument expressions. What's the best thing to read to understand the
current state of these? How well does the current draft spec reflect
the current agreements?


I do think let should only be available in strict mode, rather than
under the crazy syntactic rules we started to invent at the last meeting.

In writing this list, I realize that the specific issue that set me
off, f-i-b (function-in-block), is a red herring. Because of ES3
practice, everyone will continue to support f-i-b somehow in sloppy mode.
everyone to adopt the block-lexical semantics for sloppy that they
have in strict mode, that's simpler than maintaining the current de
facto crazy semantics for these in sloppy mode and having them have
block lexical semantics in strict code. So I'm on board with
evangelizing the problem web sites to fix their f-i-b code.
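(The block-lexical semantics Mark wants sloppy mode to converge on, i.e. what strict mode specifies, can be sketched as:)

```javascript
"use strict";
// A function declaration in a block ("f-i-b") is scoped to that block:
{
  function f() { return 1; }
  console.log(f()); // 1 -- visible inside its block
}
console.log(typeof f); // "undefined" -- not visible outside the block
```

Legacy sloppy-mode engines disagreed here, variously hoisting `f` out of the block, which is the "de facto crazy semantics" referred to above.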



 So I'm not convinced your slippery slope argument should prevail.

 /be



--
Cheers,
--MarkM


Re: Object model reformation?

2012-12-27 Thread Axel Rauschmayer
Good points. I rarely (if ever) miss more powerful collections. I do miss more 
functionality along the lines of the standard libraries of functional 
programming languages. And I sometimes miss working with arrays of objects (or 
maps) like in relational calculus (or, possibly, LINQ). That last one is 
clearly beyond the scope of ECMA-262. More standard library things might not be 
(range, “reverse” the mappings of a map or an object, zip, etc.).

You don’t like the idea of read-only views? You’d rather copy? Even without 
that use case, faking arrays (without resorting to proxies) still seems useful, 
but that’s apparently considered for ES7 or later, anyway.

On Dec 27, 2012, at 5:53 , Brendan Eich bren...@mozilla.com wrote:

 Axel Rauschmayer wrote:
 http://wiki.ecmascript.org/doku.php?id=strawman:object_model_reformation
 
 Is the object model reformation (OMR) still on the table for ES6?
 
 It never was -- it missed the cutoff by over five months.
 
 The reason I’m asking is that I recently remembered a technique from the 
 Java collections API: you could wrap any collection in a read-only “view”. 
 That made it possible to have aliases to internal data structures without 
 worrying about them being modified. The OMR would allow one to implement 
 such wrappers for arrays.
 
 Java, schmava :-P.
 
 I have a theory: hashes and lookup tables (arrays or vectors) have displaced 
 most other data structures because most of the time, for most programs 
 (horrible generalizations I know), you don't need ordered entries, or other 
 properties that might motivate a balanced tree; or priority queue operations; 
 or similar interesting data structures we all studied in school and used 
 earlier in our careers.
 
 It's good to have these tools in the belt, and great to teach them, know 
 their asymptotic complexity, etc.
 
 But they just are not that often needed.
 
 So if JS grows a big collections API with some nominal-ish interface-like 
 faceting, I'll be surprised -- and disappointed. I don't think we need it. 
 This isn't that language.  Those minority-use-case data structures are 
 usually one-offs. These aren't the hash-and-array droids you're looking for.
 
 Iteration protocols for collections? Sure, but don't fit all pegs into the 
 one octagonal hole. Lists want value iteration, dicts want [key, value].
 
 Abstraction is two-edged, especially when piled up to handle collections 
 that might need to scale off of data cache memory. That's another point: if 
 you really want to scale in the modern world, you want functional data 
 structures, maps and folds, immutability. Abstracting at that level is cool 
 and people do it even in the small, in the dcache. Doesn't work the other 
 way, especially with mutation and eager effects.
 
 Anyone disagree strongly?
 
 /be
 

-- 
Dr. Axel Rauschmayer
a...@rauschma.de

home: rauschma.de
twitter: twitter.com/rauschma
blog: 2ality.com



Re: excluding features from sloppy mode

2012-12-27 Thread David Bruant

On 27/12/2012 06:32, Brendan Eich wrote:

Mark S. Miller wrote:

Superstition aside, and once pre-ES5 browsers are not significant, the
only purpose of sloppy mode is for old code that must be kept running
without active maintenance.


That is a teleological statement -- you're talking about purpose, 
designed intent, goal, The Good (Life).


Very philosophical, I dig it ;-). However, in reality as Dave said in 
a recent message, the Law of Least Effort says people will forget to 
write "use strict"; and we'll have sloppy mode code till the cows come 
home.
The Law of Least Effort also brought us compile-to-JS languages. 
Coffeescript still doesn't compile with use strict; by default [1], 
but it seems open to it when all non-strict browsers will have died and 
the perf issues are solved.

In the future, I expect:
* more usage of compile-to-JS languages
* all compile-to-JS languages to compile to strict mode by default

Interestingly, this would turn the result of the Law of Least Effort 
from "some will forget" to "people won't forget".
Of course people will probably always write handwritten JS and some will 
forget to put "use strict"; but I feel new non-strict code will 
eventually become a rare exception, asymptotically approaching but never 
reaching nonexistence.



  For any code being actively maintained, it
should move into strict mode.


Very hortatory, but the kids are alright and they don't all follow a 
single "should". Between dissenters and LoLE or Law of Murphy ;-), I 
bet your "should" will become an ineffectual nag over the next few years.


If, one fine day, virtually everyone does as in Perl and starts their 
programs with "use strict"; (or module { with closing } after), I 
will raise a toast to you and others who helped teach that practice. 
It's nowhere near a certainty, and "should" isn't "would" or "will".
I feel this will happen when compile-to-JS languages compile to strict 
mode by default. Only years will tell.



  Sloppy mode will become a relic only for code no one touches.
Perhaps, but not on a predictable schedule and not (if I'm right) 
within the next few years, when we want ES6 adoption -- including new 
syntax.

I sadly agree.

Finally, to connect to the first point, strict mode has some overhead 
(LoLE works against it, people forget to type the directive). I know 
developers who do not use strict mode, but who will rapidly adopt rest 
and default parameters, destructuring in general, and other new forms. 
This adoption of ES6 is partly subjective, distributed over time and 
(head-)space. It should not be yoked to strict mode adoption.

+1

Yoking the two multiplies the likelihood of adoption to get a smaller 
product. That's why I favor implicit strict mode only for bodies of 
new head syntaxes, starting with module as Dave proposed.


I'm ok with class opting its body into strict mode, although did we 
decide that one way or the other? I forget.
I don't know, but I'd be in favor of implicit strict in classes. Moving 
code to module and classes would be the gradual move to strict mode Mark 
talks about.
If people notice perf issues against equivalent not-in-module/class 
code, they'll report it. Hopefully, that'll be an incentive enough for 
implementors to make strict mode at least as fast as sloppy mode.


David

[1] https://github.com/jashkenas/coffee-script/pull/2337


Re: Changing [[Prototype]]

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 06:38, David Herman dher...@mozilla.com wrote:

 On Dec 24, 2012, at 1:48 AM, Anne van Kesteren ann...@annevk.nl wrote:
  It seems ES6 has __proto__ which also allows modifying [[Prototype]]
  so presumably this is nothing particularly bad, although it is very
  ugly :-(

 It is never safe to assume that just because something is out there on the
 web that it is "nothing particularly bad"... (FML)


I'm not surprised to read this, though. Putting mutable proto into the
language is far more than just regulating existing practice. It is blessing
it. That is a psychological factor that should not be underestimated. I
fully expect to see significantly more code in the future that considers it
normal to use this feature, and that no amount of evangelization can
counter the legislation precedent.

That is, if we have it at all, I'd still think it much wiser to banish it
to some Appendix.

/Andreas


Re: Changing [[Prototype]]

2012-12-27 Thread Anne van Kesteren
On Thu, Dec 27, 2012 at 6:38 AM, David Herman dher...@mozilla.com wrote:
 Thanks for the heads up. I'll chat with bz to get more of the back story. This 
 is pretty effing awful. It may in fact be unavoidable but I'd like to make 
 sure I understand why people feel there's no alternative; otherwise I may 
 have to consider throwing my already mangled body in front of yet another 
 train... ;-)

For what it's worth, I'm good either way. I just want implementations
to do the same thing. If we can avoid having to mutate [[Prototype]]
in DOM and HTML I'd prefer that as it's less work for me. :-)


 It seems ES6 has __proto__ which also allows modifying [[Prototype]]
 so presumably this is nothing particularly bad, although it is very
 ugly :-(

 It is never safe to assume that just because something is out there on the 
 web that it is nothing particularly bad... (FML)

Fair enough. I expect the reactions here would have been worse though
if the prevailing notion had been that [[Prototype]] is not mutable.
;-)


-- 
http://annevankesteren.nl/


Re: Object model reformation?

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 05:53, Brendan Eich bren...@mozilla.com wrote:

 I have a theory: hashes and lookup tables (arrays or vectors) have
 displaced most other data structures because most of the time, for most
 programs (horrible generalizations I know), you don't need ordered entries,
 or other properties that might motivate a balanced tree; or priority queue
 operations; or similar interesting data structures we all studied in school
 and used earlier in our careers.

 It's good to have these tools in the belt, and great to teach them, know
 their asymptotic complexity, etc.

 But they just are not that often needed.


Not often used =/= not often needed.

Seriously, I contest your theory. I think such observations usually suffer
from selection bias. In imperative languages, you see arrays used for
almost everything, often to horrible effect. In the functional world many
people seem to think that lists is all you need. In scripting languages
it's often hashmaps of some form. I think all are terribly wrong. Every
community seems to have its predominant collection data structure, but the
main reason it is dominant (which implies vastly overused) is not that it
is superior or more universal but that it is given an unfair advantage via
very convenient special support in the language, and programmers would
rather shoe-horn something into it than lose the superficial notational
advantage. Languages should try harder to get away from that partisanship
and achieve égalité without baroque.

But yes, ES is probably not the place to start fixing this. :)

/Andreas


Re: excluding features from sloppy mode

2012-12-27 Thread Kevin Smith
 Since any new code will likely be written as a module (even in the
 near-term, transpiled back to ES5), this would be the ideal scenario.


 Which this do you mean? modules (in or out of line) implying strict mode
 can target ES5 strict, no problem.


"This" meaning all module code (in or out-of-line) is implicitly strict.
If that's the case, then implicit rules for anything else essentially
become moot: module code will dominate by far. Even in the near term,
many developers will start writing in ES6 modules and transpiling back to
ES5. If all modules are strict, then the transpiler will insert the
required "use strict" directive, and all is good.
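(That transpilation can be sketched as follows; the in-line module syntax in the comment follows the then-current ES6 draft, and the ES5 output shape is illustrative, not any particular tool's:)

```javascript
// ES6 draft (2012) in-line module -- implicitly strict under this proposal:
//
//   module Counter { export var count = 0; }
//
// A transpiler targeting ES5 inserts the directive itself:
var Counter = (function () {
  "use strict";           // added by the transpiler: module bodies are strict
  var exports = {};
  exports.count = 0;
  return exports;
}());
console.log(Counter.count); // 0
```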

To put another spin on it, how often will we see a class that is outside of
*any* module?

{ Kevin }


Re: Object model reformation?

2012-12-27 Thread Brendan Eich

Andreas Rossberg wrote:
On 27 December 2012 05:53, Brendan Eich bren...@mozilla.com 
mailto:bren...@mozilla.com wrote:


I have a theory: hashes and lookup tables (arrays or vectors) have
displaced most other data structures because most of the time, for
most programs (horrible generalizations I know), you don't need
ordered entries, or other properties that might motivate a
balanced tree; or priority queue operations; or similar
interesting data structures we all studied in school and used
earlier in our careers.

It's good to have these tools in the belt, and great to teach
them, know their asymptotic complexity, etc.

But they just are not that often needed.


Not often used =/= not often needed.


True, and hard to prove.

Seriously, I contest your theory. I think such observations usually 
suffer from selection bias.


Definitely the big weakness of my theory. It's not mine alone of 
course, and it is not falsifiable _in situ_ or by 
simulation/reproduction, like a lab experiment. But my gut says there's 
something going on here beyond hashes-and-arrays selection bias.


Hashes and arrays are strong in memory-safe languages, but one still sees 
other data structures even in dynamically typed languages, especially with 
the rise of functional data structures.


What I do not see yet: more elaborate Collections APIs that were 
popular decades ago, but that do not scale well out of the d-cache.


In imperative languages, you see arrays used for almost everything, 
often to horrible effect. In the functional world many people seem to 
think that lists are all you need. In scripting languages it's often 
hashmaps of some form. I think all are terribly wrong.


JS hackers use objects and arrays, but also lately more map/fold-based 
stuff. At small scale its overhead doesn't matter, and with the right 
discipline it can scale up to very large working sets.


We hope with RiverTrail and other research to go beyond that. I've seen 
v8-cgi-programmed 48 core boxes with global (slow) and local (to the 
core, but only that core) memory using this style, it can work with 
enough care.


Every community seems to have its predominant collection data 
structure, but the main reason it is dominant (which implies vastly 
overused) is not that it is superior or more universal but that it is 
given an unfair advantage via very convenient special support in the 
language, and programmers would rather shoe-horn something into it than 
lose the superficial notational advantage. Languages should try 
harder to get away from that partisanship and achieve égalité without 
baroque.


That is a good reason for more languages and language innovation. JS 
ain't everything to all programmers, thank goodness. Mozilla is 
investing in Rust to elevate safety and concurrency beyond C++, and this 
requires new thinking (an ownership system with enough richness to 
capture all the safe-ish C++ idioms).



But yes, ES is probably not the place to start fixing this. :)


Not generally. However, the functional style is on the rise in JS, and 
we're pushing it farther with things like RiverTrail. It's not 
unthinkable that JS could evolve into a better eager/mutating functional 
language, with more functional data structures and fewer arrays and 
hashes in-the-large.


/be


Re: Changing [[Prototype]]

2012-12-27 Thread Brendan Eich

Andreas Rossberg wrote:
On 27 December 2012 06:38, David Herman dher...@mozilla.com 
mailto:dher...@mozilla.com wrote:


On Dec 24, 2012, at 1:48 AM, Anne van Kesteren ann...@annevk.nl
mailto:ann...@annevk.nl wrote:
 It seems ES6 has __proto__ which also allows modifying [[Prototype]]
 so presumably this is nothing particularly bad, although it is very
 ugly :-(

It is never safe to assume that just because something is out
there on the web that it is nothing particularly bad... (FML)


I'm not surprised to read this, though. Putting mutable proto into the 
language is far more than just regulating existing practice.


Your point may be general, but in case there's confusion about this 
new demand for mutable [[Prototype]]: nothing is new here. The 
adoptNode API is pretty old, a de-facto standard. The horse left the 
barn many years ago.


It is blessing it. That is a psychological factor that should not be 
underestimated. I fully expect to see significantly more code in the 
future that considers it normal to use this feature, and that no 
amount of evangelization can counter the legislation precedent.


Noted, and known. But then:

That is, if we have it at all, I'd still think it much wiser to banish it 
to some Appendix.


What earthly good would that do?

/be


Re: Dynamically changing of loader global

2012-12-27 Thread Mark S. Miller
On Wed, Dec 26, 2012 at 3:03 PM, David Bruant bruan...@gmail.com wrote:
 On 26/12/2012 23:14, David Herman wrote:

 On Dec 24, 2012, at 2:34 PM, David Bruant bruan...@gmail.com wrote:

 I've reading the loader API [1] and I was wondering if it was possible to
 dynamically change the global. I think it is by doing the following, but
 tell me if I'm wrong:

 That wasn't the intention. It probably wasn't written out since the full
 semantics isn't spelled out yet (though Sam and I have been making good
 progress working through the details of the semantics), but the idea was
 that the properties of the options object are read in up-front and stored
 internally. The getter always returns that internally closed-over value that
 was obtained when the loader was first created.

 Ok, thanks for the clarification. The strawman as it was didn't explain the
 full semantics hence my question.


 In other words, is the global in the loader the initial or the dynamic
 value of the global option?

 The initial value. We can look into what it would mean to make it
 modifiable, but we'd probably not make that the API; we'd probably just have
 a setter.

 Good point.
 [Adding MarkM into the mix for this part]
 I wish to point out a potential security/convenience issue regarding
 inherited getter/setters. My point is broader than the 'global' loader
 situation (it includes everything covered by WebIDL for instance), but let's
 assume a 'global' setter is added to Loader.prototype and I'll draw the
 general conclusion from this example.
 If I want to share a single loader instance with someone else, but not provide
 access to the loader global, I have to delete Loader.prototype.global
 (otherwise, someone can extract the getter and use the reference to the
 loader instance to retrieve the global)
 The problem with deleting Loader.prototype.global is that it's deleted for
 every single instance, which in turn makes the code harder to write (because,
 defensively, one needs to extract the getter/setter pair and then use that
 instead of the more convenient myLoader.global syntax). The opposite way,
 if freezing Loader.prototype, it becomes *impossible* to revoke the
 capability to introspect instances using the inherited getters/setters.

 In essence, an inherited accessor means that the choice allowing access to a
 given property isn't a per-instance choice anymore. It's a per-class
 choice (keep it or remove it for every instance) which is probably fine for
 most situations, but certainly too coarse for some others.

This is a very good point. Is there any reason other than legacy
compat why WebIDL specifies inherited accessors rather than own
properties?



 Since the amount of storage is equivalent anyway (you need some sort of
 private property to associate the global to the loader instance), I would
 suggest going with an own global data property for loaders... and I guess to
 stay away from inherited accessors when describing a per-instance property
 for the reasons I described.

 David



--
Cheers,
--MarkM


Re: Dynamically changing of loader global

2012-12-27 Thread David Bruant

On 27/12/2012 20:04, Mark S. Miller wrote:
This is a very good point. Is there any reason other than legacy 
compat why WebIDL specifies inherited accessors rather than own 
properties?
There is no legacy compat issue. Before WebIDL, the ECMAScript 
representation of DOM objects was an absurd, under-specified, and 
consequently non-interoperable mess.
We are still in this mess. IE9 is following WebIDL quite closely. I 
assume IE10 is doing better (I've never had a look, but Microsoft is 
following a good path, so I assume progress). All other browsers are 
still very far from WebIDL. Firefox is making fast progress [1]. I don't 
think I have seen progress in other browsers, but I haven't been 
following that closely either.


I don't know why inherited accessors were chosen, but I'm very 
interested in learning if someone has the answer. Since we can decently 
assume that no web content really relies on WebIDL, there is certainly 
still time to change WebIDL if necessary.


David

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=580070


Re: Dynamically changing of loader global

2012-12-27 Thread Brandon Benvie
It's definitely been my experience that accessors, as found in IE and
Firefox, are much more fidgety and error prone than own data properties, as
found in WebKit. The fact that it becomes possible to call them on invalid
targets and that it's no longer possible for a debugger to simply display
own properties exacerbates this fidgetiness. Even worse,
the prototypes which the accessors live on are themselves not valid
targets, which basically invites errors.


Re: Dynamically changing of loader global

2012-12-27 Thread Mark S. Miller
So does anyone know why? Own properties were the obvious choice, so
there must have been some reason to choose inherited accessors
instead.

On Thu, Dec 27, 2012 at 11:32 AM, Brandon Benvie
bran...@brandonbenvie.com wrote:
 It's definitely been my experience that accessors, as found in IE and
 Firefox, are much more fidgety and error prone than own data properties, as
 found in WebKit. The fact that it becomes possible to call them on invalid
 targets and that it's no longer possible for a debugger to simply display
 own properties exacerbates this fidgetyness. Even worse, the prototypes
 which the accessors live on are themselves not valid targets, which
 basically invites errors.



-- 
Cheers,
--MarkM
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread David Herman
On Dec 27, 2012, at 1:23 AM, Andreas Rossberg rossb...@google.com wrote:

 Let's start with TDZ-RBA. This semantics is *totally untenable* because it 
 goes against existing practice. Today, you can create a variable that starts 
 out undefined and use that on purpose:
 
 I think nobody ever proposed going for this semantics, so we can put that 
 aside quickly. However:

OK, well, it wasn't clear to me.

 var x;
 if (...) { x = ... }
 if (x === undefined) { ... }
 
 If you want to use let instead, the === if-condition will throw. You would 
 instead have to write:
 
 let x = undefined;
 if (...) { x = ... }
 if (x === undefined) { ... }
 
 That is not actually true, because AFAICT, `let x` was always understood to 
 be equivalent to `let x = undefined`.

Well that's TDZ-UBI. It *is* true for TDZ-RBA. Maybe I was the only person who 
thought that was a plausible semantics being considered, but my claim (P => Q) 
is true. Your argument is ~P. Anyway, one way or another hopefully everyone 
agrees that TDZ-RBA is a non-starter.
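For concreteness, here is a runnable sketch of the contrast (in the TDZ-UBI semantics that ES6 ultimately adopted; the function names are mine, not from the thread): `var` hoists and reads as `undefined`, while reading a `let` binding before its initialization throws.

```javascript
"use strict";

function varStyle() {
  var before = x;  // var hoists; x reads as undefined here
  var x = 1;
  return before;
}

function letStyle() {
  try {
    return x;      // TDZ: x is bound in this scope but uninitialized
  } catch (e) {
    return e.name;
  }
  let x = 1;       // never reached, but its binding covers the whole body
}

console.log(varStyle()); // undefined
console.log(letStyle()); // "ReferenceError"
```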

 This is an assumption that has always existed for `var` (mutatis mutandis 
 for the function scope vs block scope). You can move your declarations 
 around by hand and you can write code transformation tools that move 
 declarations around.
 
 As Dominic has pointed out already, this is kind of a circular argument. The 
 only reason you care about this for 'var' is because 'var' is doing this 
 implicitly already. So programmers want to make it explicit for the sake of 
 clarity. TDZ, on the other hand, does not have this implicit widening of life 
 time, so no need to make anything explicit.

OK, I'll accept that Crock's manual-hoisting style only matters for `var`. I 
just want to be confident that there are no other existing benefits that people 
get from the equivalence (either in programming patterns or 
refactoring/transformation patterns) that will break.

 It's true that with TDZ, there is a difference between the two forms above, 
 but that is irrelevant, because that difference can only be observed for 
 erroneous programs (i.e. where the first version throws, because 'x' is used 
 by 'stmt').

Can you prove this? (Informally is fine, of course!) I mean, can you prove that 
it can only affect buggy programs?

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread David Herman
On Dec 27, 2012, at 1:51 AM, Andreas Rossberg rossb...@google.com wrote:

 I think hoisting can mean different things, which kind of makes this debate a 
 bit confused.

Yep. Sometimes people mean the scope extends to a region before the syntactic 
position where the declaration appears, sometimes they mean the scope extends 
to the function body, and sometimes they mean function declaration bindings 
are dynamically initialized before the containing function body or script 
begins executing.

 There is var-style hoisting. Contrary to what Rick said, I don't think 
 anybody can seriously defend that as an excellent feature. First, because 
 it hoists over binders, but also second, because it allows access to an 
 uninitialized variable without causing an error (and this being bad is where 
 Dave seems to disagree).

Are you implying that my arguments are not serious? :-(

 Then there is the other kind of hoisting that merely defines what the 
 lexical scope of a declaration is. The reason we need this backwards-extended 
 scope is because we do not have an explicit let-rec or something similar that 
 would allow expressing mutual recursion otherwise -- as you mention. But it 
 does by no means imply that the uninitialized binding has to be (or should 
 be) accessible.

No, it doesn't. I'm not interested in arguments about the one true way of 
programming languages. I think both designs are perfectly defensible. All 
things being equal, I'd prefer to have my bugs caught for me. But in some 
design contexts, you might not want to incur the dynamic cost of the 
read(/write) barriers -- for example, a Scheme implementation might not be 
willing/able to perform the same kinds of optimizations that JS engines do. In 
our context, I think the feedback we're getting is that the cost is either 
negligible or optimizable, so hopefully that isn't an issue.

But the other issue, which I worry you dismiss too casually, is that of 
precedent in the language you're evolving. We aren't designing ES1 in 1995, 
we're designing ES6 in 2012 (soon to be 2013, yikes!). People use the features 
they have available to them. Even if the vast majority of 
read-before-initialization cases are bugs, if there are some cases where people 
actually have functioning programs or idioms that will cease to work, they'll 
turn on `let`.

So here's one example: variable declarations at the bottom. I certainly don't 
use it, but do others use it? I don't know.

 - It automatically makes forward references work, so you can:
 * order your definitions however it best tells the story of your code, 
 rather than being forced to topologically sort them by scope dependency
 * use (mutual) recursion
 
 Right, but that is perfectly well supported, and more safely so, with TDZ.

My point here was just about hoisting (perhaps a bit OT, but the question came 
up whether hoisting is bad) -- specifically, of having declarations bind 
variables in a scope that extends to a surrounding region that can cover 
expressions that occur syntactically earlier than the declaration itself. TDZ 
is orthogonal.

 - It binds variables without any rightward drift, unlike functional 
 programming languages.
 
 I totally don't get that point. Why would a rightward drift be inherent to 
 declarations in functional programming languages (which ones, anyway?).

Scheme:

(let ([sq (* x x)])
  (printf "sq: ~a~n" sq)
  (let ([y (/ sq 2)])
    (printf "y: ~a~n" y)))

ML:

let sq = x * x in
  print ("sq: " ^ (toString sq) ^ "\n");
  let y = sq / 2 in
    print ("y: " ^ (toString y) ^ "\n")

ES6:

let sq = x * x;
console.log("sq: " + sq);
let y = sq / 2;
console.log("y: " + y);

Obviously functional programming languages can do similar things to what ES6 
does here; I'm not saying functional programming sucks. You know me. :)

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 21:08, David Herman dher...@mozilla.com wrote:

 On Dec 27, 2012, at 1:23 AM, Andreas Rossberg rossb...@google.com wrote:
  var x;
  if (...) { x = ... }
  if (x === undefined) { ... }
 
  If you want to use let instead, the === if-condition will throw. You
 would instead have to write:
 
  let x = undefined;
  if (...) { x = ... }
  if (x === undefined) { ... }
 
  That is not actually true, because AFAICT, let x was always understood
 to be equivalent to let x = undefined.

 Well that's TDZ-UBI. It *is* true for TDZ-RBA. Maybe I was the only person
 who thought that was a plausible semantics being considered, but my claim
  (P => Q) is true. Your argument is ~P. Anyway, one way or another hopefully
 everyone agrees that TDZ-RBA is a non-starter.


Even with TDZ-RBA you can have that meaning for let x (and that semantics
would be closest to 'var'). What TDZ-RBA gives you, then, is the
possibility to also assign to x _before_ the declaration.

But anyway, I think we agree that this is not a desirable semantics, so it
doesn't really matter.

 It's true that with TDZ, there is a difference between the two forms
 above, but that is irrelevant, because that difference can only be observed
 for erroneous programs (i.e. where the first version throws, because 'x' is
 used by 'stmt').

 Can you prove this? (Informally is fine, of course!) I mean, can you prove
 that it can only affect buggy programs?


Well, I think it's fairly obvious. Clearly, once the
assignment/initialization x = e has been (successfully) executed, there
is no observable difference in the remainder of the program. Before that
(including while evaluating e itself), accessing x always leads to a TDZ
exception in the first form. So the only way it cannot throw is if stmt
and e do not access x, in which case both forms are equivalent.
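Andreas's argument can be checked mechanically with ES6 semantics (the helper names, and the literal initializer standing in for `e`, are mine): the two forms only diverge when `stmt` touches `x`, and in that case the first form throws.

```javascript
"use strict";

// Form 1: { stmt; let x = e; } -- x is in its TDZ while stmt runs
function form1(stmtTouchesX) {
  try {
    if (stmtTouchesX) x = 1;  // accessing x here -> TDZ ReferenceError
    let x = 2;                // e is just the literal 2 in this sketch
    return x;
  } catch (err) {
    return err.name;
  }
}

// Form 2: { let x; stmt; x = e; } -- x is initialized (to undefined) first
function form2(stmtTouchesX) {
  let x;
  if (stmtTouchesX) x = 1;
  x = 2;
  return x;
}

console.log(form1(false), form2(false)); // 2 2 -- equivalent
console.log(form1(true));  // "ReferenceError" -- the only observable divergence
console.log(form2(true));  // 2
```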

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 21:23, David Herman dher...@mozilla.com wrote:

 On Dec 27, 2012, at 1:51 AM, Andreas Rossberg rossb...@google.com wrote:

  I think hoisting can mean different things, which kind of makes this
 debate a bit confused.

 Yep. Sometimes people mean the scope extends to a region before the
 syntactic position where the declaration appears, sometimes they mean the
 scope extends to the function body, and sometimes they mean function
 declaration bindings are dynamically initialized before the containing
 function body or script begins executing.


Maybe we shouldn't speak of hoisting for anything else but the var case. As
I mentioned elsewhere, I rather like to think of it as recursive (i.e.
letrec-style) block scoping. :)


  There is var-style hoisting. Contrary to what Rick said, I don't think
 anybody can seriously defend that as an excellent feature. First, because
 it hoists over binders, but also second, because it allows access to an
 uninitialized variable without causing an error (and this being bad is
 where Dave seems to disagree).

 Are you implying that my arguments are not serious? :-(


You are not defending the first part, are you? ;)


  Then there is the other kind of hoisting that merely defines what the
 lexical scope of a declaration is. The reason we need this
 backwards-extended scope is because we do not have an explicit let-rec or
 something similar that would allow expressing mutual recursion otherwise --
 as you mention. But it does by no means imply that the uninitialized
 binding has to be (or should be) accessible.

 No, it doesn't. I'm not interested in arguments about the one true way
 of programming languages. I think both designs are perfectly defensible.
 All things being equal, I'd prefer to have my bugs caught for me. But in
 some design contexts, you might not want to incur the dynamic cost of the
 read(/write) barriers -- for example, a Scheme implementation might not be
 willing/able to perform the same kinds of optimizations that JS engines do.
 In our context, I think the feedback we're getting is that the cost is
 either negligible or optimizable, so hopefully that isn't an issue.


Right, from our implementation experience in V8 I'm confident that it isn't
in almost any practically relevant case -- although we haven't fully
optimised 'let', and consequently, it currently _is_ slower, so admittedly
there is no proof yet.

But the other issue, which I worry you dismiss too casually, is that of
 precedent in the language you're evolving. We aren't designing ES1 in 1995,
 we're designing ES6 in 2012 (soon to be 2013, yikes!). People use the
 features they have available to them. Even if the vast majority of
 read-before-initialization cases are bugs, if there are some cases where
 people actually have functioning programs or idioms that will cease to
 work, they'll turn on `let`.

 So here's one example: variable declarations at the bottom. I certainly
 don't use it, but do others use it? I don't know.


Well, clearly, 'let' differs from 'var' by design, so no matter what,
you'll probably always be able to dig up some weird use cases that it does
not support. I don't know what to say to that except that if you want 'var'
in all its beauty then you know where to find it. :)

 - It binds variables without any rightward drift, unlike functional
 programming languages.
 
  I totally don't get that point. Why would a rightward drift be inherent
 to declarations in functional programming languages (which ones, anyway?).

 Scheme:

 (let ([sq (* x x)])
   (printf "sq: ~a~n" sq)
   (let ([y (/ sq 2)])
     (printf "y: ~a~n" y)))

 ML:

 let sq = x * x in
   print ("sq: " ^ (toString sq) ^ "\n");
   let y = sq / 2 in
     print ("y: " ^ (toString y) ^ "\n")


I don't feel qualified to talk for Scheme, but all Ocaml I've ever
seen (SML uses more verbose 'let' syntax anyway) formatted the above as

let sq = x * x in
 print ("sq: " ^ toString sq ^ "\n");
 let y = sq / 2 in
 print ("y: " ^ toString y ^ "\n")


Similarly, in Haskell you would write

do

   let sq = x * x
putStr ("sq: " ++ show sq ++ "\n")
let y = sq / 2
putStr ("y: " ++ show y ++ "\n")


/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 23:38, Andreas Rossberg rossb...@google.com wrote:

 I don't feel qualified to talk for Scheme, but all Ocaml I've ever
 seen (SML uses more verbose 'let' syntax anyway) formatted the above as

 let sq = x * x in
 print ("sq: " ^ toString sq ^ "\n");

 let y = sq / 2 in
 print ("y: " ^ toString y ^ "\n")


 Similarly, in Haskell you would write

 do

let sq = x * x
putStr ("sq: " ++ show sq ++ "\n")

let y = sq / 2
putStr ("y: " ++ show y ++ "\n")


Don't know where the empty lines in the middle of both examples are coming
from, weird Gmail quote-editing glitch that didn't show up in the edit box.
Assume them absent. :)

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Changing [[Prototype]]

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 18:25, Brendan Eich bren...@mozilla.com wrote:

 That is, if having it at all, I'd still think it much wiser to ban it to
 some Appendix.


 What earthly good would that do?


Marketing and psychology (as I said, being important). It would send a
clear message that it is just ES adopting some bastard child because it has
to for political reasons, but with no intention of ever making it a true
bearer of its name. In other words, it isn't noble.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Changing [[Prototype]]

2012-12-27 Thread Brendan Eich

Andreas Rossberg wrote:
On 27 December 2012 18:25, Brendan Eich bren...@mozilla.com 
mailto:bren...@mozilla.com wrote:


That is, if having it at all, I'd still think it much wiser to
ban it to some Appendix.


What earthly good would that do?


Marketing and psychology (as I said, being important). It would send a 
clear message that it is just ES adopting some bastard child because 
it has to for political reasons, but with no intention of ever making 
it a true bearer of its name. In other words, it isn't noble.


In one sense, whatever floats your boat.

In a more serious vein, we are at cross purposes with reality. Mutable 
__proto__ just *is*. It is a de-facto standard. Doesn't mean we 
shouldn't fight [[Prototype]] changes where better methods of achieving 
desirable semantics exist. But calling mutable __proto__ a bad thing, 
deprecating it, will not work, and therefore the attempt degrades the 
coin of TC39's realm: our attitude and opinion on normativity.


Self had writable parent slots. One can disagree with the design 
decision, but it's not unique to JS or uniquely evil. We swallowed this 
turd. No point whinging about it in appendices that either no one reads, 
or else people read and think less of the spec on that account.
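The de-facto standard in question can be shown in a few lines (this mutation works in every major engine of the era, and was later codified in ES6 Annex B as `Object.prototype.__proto__`):

```javascript
var quacker = { speak: function () { return "quack"; } };
var barker  = { speak: function () { return "woof"; } };

var pet = Object.create(quacker);
console.log(pet.speak()); // "quack"

pet.__proto__ = barker;   // mutating [[Prototype]] after creation
console.log(pet.speak()); // "woof" -- method lookup follows the new parent
```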


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread Brendan Eich

David Herman wrote:

ES1 in 1995


JS if you please! No ES till 1996 November at earliest, really till 
June 1997.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-27 Thread David Herman
On Dec 27, 2012, at 2:13 PM, Andreas Rossberg rossb...@google.com wrote:

 It's true that with TDZ, there is a difference between the two forms above, 
 but that is irrelevant, because that difference can only be observed for 
 erroneous programs (i.e. where the first version throws, because 'x' is 
 used by 'stmt').
 
 Can you prove this? (Informally is fine, of course!) I mean, can you prove 
 that it can only affect buggy programs?
 
 Well, I think it's fairly obvious. Clearly, once the 
 assignment/initialization x = e has been (successfully) executed, there is 
 no observable difference in the remainder of the program. Before that 
 (including while evaluating e itself), accessing x always leads to a TDZ 
 exception in the first form. So the only way it can not throw is if stmt and 
 e do not access x, in which case the both forms are equivalent.

That doesn't prove that it was a *bug*. That's a question about the 
programmer's intention. In fact, I don't think you can. For example, I 
mentioned let-binding at the bottom:

{
console.log(x);
let x;
}

If the programmer intended that to print `undefined`, then TDZ would break the 
program. Before you accuse me of circularity, it's *TDZ* that doesn't have 
JavaScript historical precedent on its side. *You're* the one claiming that 
programs that ran without error would always be buggy.

Here's what it comes down to. Above all, I want let to succeed. The absolute, 
#1, by-far-most-important feature of let is that it's block scoped. TDZ, while 
clearly adding the bonus of helping catch bugs, adds several risks:

- possible performance issues

- possibly rejecting non-buggy programs based on existing JavaScript 
programming styles

Are those risks worth taking? Can we prove that they won't sink let? "It's 
fairly obvious" doesn't give me a lot of confidence, I'm afraid.

Dave

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: excluding features from sloppy mode

2012-12-27 Thread Russell Leggett

On Dec 27, 2012, at 8:54 AM, Kevin Smith khs4...@gmail.com wrote:

 
 Since any new code will likely be written as a module (even in the 
 near-term, transpiled back to ES5), this would be the ideal scenario.
 
 Which "this" do you mean? modules (in or out of line) implying strict mode 
 can target ES5 strict, no problem.
 
 "This" meaning all module code (in or out-of-line) is implicitly strict.  If 
 that's the case, then implicit rules for anything else essentially becomes 
 moot:  module code will dominate by far.  Even in the near term, many 
 developers will start writing in ES6 modules, and transpiling back to ES5.  
 If all modules are strict, then the transpiler will insert the required "use 
 strict" directive, and all is good.
 
 To put another spin on it, how often will we see a class that is outside of 
 *any* module?
What about node code?

- Russ
 
 { Kevin }
 
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


whither TDZ for 'let' (was: On dropping @names)

2012-12-27 Thread Brendan Eich
This thread needs a new subject to be a spin-off on the very important 
topic of to TDZ or not to TDZ 'let'. IMHO you neatly list the risks below.


I had swallowed TDZ in the face of these risks. I'm still willing to do 
so, for Harmony and for greater error catching in practice. I strongly 
suspect declare-at-the-bottom and other odd styles possible with 'var' 
won't be a problem for 'let' adoption. However, we need implementors to 
optimize 'let' *now* and dispel the first item below.


/be

David Herman wrote:

Here's what it comes down to. Above all, I want let to succeed. The absolute, 
#1, by-far-most-important feature of let is that it's block scoped. TDZ, while 
clearly adding the bonus of helping catch bugs, adds several risks:

- possible performance issues

- possibly rejecting non-buggy programs based on existing JavaScript 
programming styles

Are those risks worth taking? Can we prove that they won't sink let? "It's fairly 
obvious" doesn't give me a lot of confidence, I'm afraid.

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: excluding features from sloppy mode

2012-12-27 Thread Brendan Eich

Mark S. Miller wrote:

On Thu, Dec 27, 2012 at 12:24 AM, Brendan Eichbren...@mozilla.com  wrote:

IOW, I want more strict extensions too, but implicitly! Again, having to
write "use strict"; itself makes for more sloppy code over time, but new
syntax can be its own reward for the new semantics.


Geez I find this tempting. But I cannot agree. Code is read more often
than it is written, and ease of opting into strict mode isn't worth
the price of making it harder to tell which code is in strict mode. I
agree with Kevin's point #3. function* and arrow functions, being
functions, have function bodies. For function functions, they opt
into strict if they begin with "use strict".


No, not only by that syntax. They also may be *opted-in* by their 
outermost enclosing function that uses such a prologue directive.


That's what makes this less a readability win under your argument. To 
find the governing use strict; in a large program or (real bugs bit 
ES5 here) concatenation is not easy. One is not reading at that point, 
one is grepping or searching in an editor that can match brackets if not 
do deeper analysis.



  It would be confusing to
a reader of code for some functions to do this implicitly.


An outer function having "use strict"; can implicitly do this to an 
inner function, and rightly so, but at arbitrary distance in KLOCs or 
other metrics.


So I don't see the argument against implicit strict as overriding.


  It would
not be confusing for *readers* to not have function* or arrow
functions available in sloppy mode. When reading sloppy code, these
new function forms wouldn't appear without a "use strict" pragma, and
so wouldn't raise any new strictness questions for readers.


Again, we want ES6 features such as arrows and generators to be adopted, 
whether authors can afford to adopt strict mode in enclosing functions 
or top-level programs, or not. Do not multiply risks.


It's one thing to have arrows and generators be implicitly strict, and 
get them working without early errors on load and without runtime errors 
under test.


It's another to say that anyone who wants to use such good new features 
must migrate the entire enclosing function or program to strict mode. 
That may be a large top-level script, with legacy issues compounded by 
concatenation.



Class is an interesting case though, for three reasons.
1) Its body is not a function body, and so it would be yet more syntax
to enable a class to opt into strict mode explicitly.


Right. I don't think we've considered this carefully in TC39 yet.


2) It is a large-grain abstraction mechanism, much like modules, and
often used as the only module-like mechanism in many existing
programming languages. (Yes, JavaScript is a different language. But
we called it class to leverage some of that prior knowledge.)


Won't quibble ;-).


3) It looks as foreign to old ES3 programmers as does module.


More positive: it looks like a unit of new and stricter code, so it 
could be strict by fiat, implicitly.



So I recommend no implicit opt-in, except for module (of course) and
possibly class. If class does not implicitly opt in, we need to extend
the class body syntax to accept a "use strict" pragma.


Good, happy to have support for class bodies implicitly strict!


As for what function forms or heads require explicit opt-in, that
hangs on the micro-mode issue. If you're right that we would not make
things simpler if these were available only in strict mode, then I
agree with your conclusion. More later after I review where these
micro-modes ended up, especially the scoping issues on default
argument expressions. What's the best thing to read to understand the
current state of these? How well does the current draft spec reflect
the current agreements?


Allen's latest draft already covers a lot of the function head 
new-syntax-is-its-own-opt-in (including-extra-strictish-checks ;-).



I do think let should only be available in strict mode, rather than
the syntactic crazy rules we started to invent at the last meeting.


We should discuss at the next meeting, with more homework to do before then.


In writing this list, I realize that the specific issue that set me
off, f-i-b, is a red herring. Because of ES3 practice, everyone will
continue to support f-i-b somehow in sloppy mode. If we can get
everyone to adopt the block-lexical semantics for sloppy that they
have in strict mode, that's simpler than maintaining the current de
facto crazy semantics for these in sloppy mode and having them have
block lexical semantics in strict code. So I'm on board with
evangelizing the problem web sites to fix their f-i-b code.


Evangelization costs, even with volunteer help. It's work. Need to 
organize it a bit and take all the fun out, and make it matter. More 
under separate cover.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Real World Func Decl in Block Scope Breakages

2012-12-27 Thread Brendan Eich

Charles McCathie Nevile wrote:
On Thu, 27 Dec 2012 00:26:48 +0100, Brendan Eich bren...@mozilla.com 
wrote:



Brian Terlson wrote:

20 sites, however, will likely be broken by this change in some way.


I am guessing this is the key point. But there is a scope error, and 
this is undefined. It seems you're trying to find out why we are one 
of the few sites that do *something*, because you'd like to change 
something in ES that would break what we are doing, right? I am not 
sure exactly what the *something* or the change are, and that makes it 
hard for me to find the person who knows the answer (yes, folks, I 
confess that I didn't write this particular bit of code and haven't 
even carefully deconstructed it in my head ;) ).


So, any more clues? (No list-archive header to follow :( I can of 
course search, but someone here might be able to give a better reply 
more efficiently).


Sure, and sorry for lack of context.

The issue is that

   function foo() { return 42; }

and

  var foo = function() { return 42; }

and variations on the latter function expression syntax are all 
standardized. However,


  if (cond) {
function bar() { return 43; }
  }
  console.log(bar());

is not standardized. This syntax is not produced by any ECMA-262 
standard grammar, but it is supported by all major implementations, with 
varying semantics. Call it function-in-block (f-i-b for short).


In IE and browsers that reverse-engineered IE JScript's implementation 
of f-i-b, bar is hoisted whether cond evaluates to truthy or falsy, so 
the console.log(bar()) always works.


In SpiderMonkey in Firefox and other apps, only if cond evaluates to 
truthy will bar be defined, leaving the bar() call within the 
console.log(bar()) expression possibly failing due to bar not being 
found in the scope chain, or resolving to an outer bar that might not be 
callable.


There is also a chance that the tool used to identify breakages has 
missed some code that will break.


Below are some examples of code on the web today that will be 
broken. For each I include a snippet of code that is heavily edited 
in an attempt to convey the pattern used and the developer intent. I 
also attempt to identify what functionality will actually be broken.


What is the proposed change?


ES6 proposes that f-i-b always bind bar in the nearest enclosing 
curly-braced block, hoisted in the manner of function in JS today but to 
block scope, not function or program scope, so that the function can be 
used in expressions evaluated before control flows to evaluate the 
declaration of bar.


This means the example can fail if !cond. Only within the then block 
(anywhere, thanks to hoisting) could one safely use bar to mean the 
function declared within that block.


Clearly we had an incompatible change in mind. The fallback if we can't 
get away with this compat-break is to have ES6's f-i-b semantics only 
under use strict and have the old mess (browser-dependent) in 
non-strict (sloppy) mode code.


Any help you can give on the one of 20 hard cases Brian found will be 
gratefully received.


/be
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss