Re: Membranes, unmediated access to objects through Object.getPrototypeOf

2012-10-11 Thread Andreas Rossberg
On 11 October 2012 09:32, Brendan Eich bren...@mozilla.org wrote:
 Tom Van Cutsem wrote:

 - Proxy.revocable returns a tuple {proxy, revoke}. While more cumbersome
 to work with (especially in pre-ES6 code without destructuring), this API
 gets the authority to revoke a proxy exactly right: at proxy birth, only the
 creator of the proxy holds the right to revoke it. This is infinitely better
 than a global Proxy.revoke(proxy) method that would allow arbitrary objects
 to revoke any proxy.

 Ok, thanks for this recap. It makes sense, the ocap treatments are working
 ;-).

Even then I don't think the additional creation API is needed. The
handler itself can be mutable, right? So why not have a function
Proxy.revoke that takes a _handler_ (not a proxy) and replaces all its
trap methods by poisoned traps? This is still perfectly ocap (because
only the creator has access to the handler), but requires no extra API
for creating revocable proxies -- just make sure your handler is
mutable.
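
A minimal sketch of what handler-based revocation could look like in user code (the function name `makeRevocableByHandler` and the poisoned-trap list are illustrative, not a proposed API). The creator keeps the handler private, so only the creator can poison its traps, which preserves the ocap property. Note that this alone does not drop the target, which is the GC concern raised in the follow-up.

```javascript
// Hypothetical sketch: revocation by mutating a private handler.
function makeRevocableByHandler(target) {
  const handler = {};  // empty handler: all operations forward to target
  const proxy = new Proxy(target, handler);
  function revoke() {
    const poison = () => { throw new TypeError('proxy has been revoked'); };
    for (const trap of ['get', 'set', 'has', 'deleteProperty', 'ownKeys',
                        'getOwnPropertyDescriptor', 'defineProperty']) {
      handler[trap] = poison;  // replace forwarding behaviour with throws
    }
  }
  return { proxy, revoke };
}

const { proxy, revoke } = makeRevocableByHandler({ x: 1 });
console.log(proxy.x);  // 1
revoke();
try { proxy.x; } catch (e) { console.log(e.message); }  // proxy has been revoked
```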

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Membranes, unmediated access to objects through Object.getPrototypeOf

2012-10-11 Thread Andreas Rossberg
On 11 October 2012 13:41, Mark S. Miller erig...@google.com wrote:
 On Thu, Oct 11, 2012 at 4:25 AM, Andreas Rossberg rossb...@google.com
 wrote:
 On 11 October 2012 09:32, Brendan Eich bren...@mozilla.org wrote:
  Tom Van Cutsem wrote:
 
  - Proxy.revocable returns a tuple {proxy, revoke}. While more
  cumbersome
  to work with (especially in pre-ES6 code without destructuring), this
  API
  gets the authority to revoke a proxy exactly right: at proxy birth,
  only the
  creator of the proxy holds the right to revoke it. This is infinitely
  better
  than a global Proxy.revoke(proxy) method that would allow arbitrary
  objects
  to revoke any proxy.
 
  Ok, thanks for this recap. It makes sense, the ocap treatments are
  working
  ;-).

 Even then I don't think the additional creation API is needed. The
 handler itself can be mutable, right? So why not have a function
 Proxy.revoke that takes a _handler_ (not a proxy) and replaces all its
 trap methods by poisoned traps? This is still perfectly ocap (because
 only the creator has access to the handler), but requires no extra API
 for creating revocable proxies -- just make sure your handler is
 mutable.

 How does the target get dropped? Remember, this all started with David's
 observation that without some additional magic, we have an unsolvable GC
 problem. This is still true.

Ah, right. If revoke also froze the handler object, then it could
delete the target, because it will never be observable again. Would
that be too magic?

/Andreas


Re: should we rename the Program grammar production?

2012-10-11 Thread Andreas Rossberg
On 11 October 2012 17:49, John J Barton johnjbar...@johnjbarton.com wrote:
 'Script' is not neutral, but neither is 'Program'; plus, it's just wrong.
 The language needs a name for both the unit of compilation and the
 assembly of those units. The latter is a program, right? So the former
 needs a different name.

 'CompilationUnit' is a bit long but more correct.

Except that the unit of compilation is an individual function in
most contemporary JS implementations. ;)  More generally, I think that
implementation-specific notions like compilation are to be avoided
in a language spec.

'Script' sounds perfectly reasonable to me.

/Andreas


Re: Property descriptors as ES6 Maps

2012-10-31 Thread Andreas Rossberg
On 31 October 2012 10:40, David Bruant bruan...@gmail.com wrote:
 My bug was about making the use of objects [for property descriptors]
 official in the spec internals... until I realized that ES6 has maps.

Can you motivate why maps would be more adequate? Frankly, I
completely disagree, because they would have a highly heterogeneous
type.

/Andreas


Re: Promises

2012-11-07 Thread Andreas Rossberg
On 6 November 2012 20:55, David Herman dher...@mozilla.com wrote:
 - a way to create promises that don't expose their internal "resolve me"
 methods, etc., so they can be delivered to untrusted clients, e.g.:

 var [internalView, externalView] = Promise.makePair();
 'resolve' in internalView // true
 'resolve' in externalView // false

Indeed. I think this is an issue where many promise/future libraries
are confused/broken. FWIW, when creating a concurrent language called
Alice ML some 15 years ago we thought about this quite extensively,
and ended up introducing the following separation of concepts:

* Futures are handles for (potentially) unresolved/asynchronous
values, on which you can wait and block -- but you cannot directly
resolve them.

* Promises are explicit resolvers for a future. More specifically,
creating a promise creates an associated future, which you can safely
pass to other parties. Only the promise itself provides the fulfill
method (and related functionality) that enables resolving that future.

In other words, futures provide synchronisation, while promises
provide resolution.

Incidentally, that's also exactly the model and naming that C++11 picked.
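
The split described above can be emulated on top of the standard Promise constructor; a minimal sketch, where the names `makePair`, `fulfill`, and `fail` are illustrative rather than an actual proposed API:

```javascript
// The "future" is the read-only thenable handed to clients; the "promise"
// object holds the resolution capability and stays with the creator.
function makePair() {
  let fulfill, fail;
  const future = new Promise((resolve, reject) => {
    fulfill = resolve;
    fail = reject;
  });
  return { promise: { fulfill, fail }, future };
}

const { promise, future } = makePair();
future.then(v => console.log('resolved with', v));  // clients can only wait
promise.fulfill(42);                                // only the holder resolves
```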

/Andreas


Re: Promises

2012-11-07 Thread Andreas Rossberg
On 7 November 2012 17:57, Tom Van Cutsem tomvc...@gmail.com wrote:
 While we're talking nomenclature: the terms promise and future also
 appear, with roughly the semantics described by Andreas in Scala's API [1]
 and Clojure's API [2] (both very recent APIs). I know MarkM dislikes the use
 of these terms to distinguish synchronization from resolution, as he has
 long been using those same terms to distinguish traditional futures, which
 provide a .get() method blocking the calling thread and returning the
 future's value when ready (as in e.g. Java), from promises, which only
 provide a non-blocking when or then method requiring a callback, never
 blocking the event loop thread (as in all the Javascript promise APIs).

 To my mind, the term future is still very closely tied to blocking
 synchronization. YMMV.

I see. Interesting, I wasn't aware of Mark's reservations :). Mark, is
that just about the terminology, or also conceptually?

(Please correct me if I'm wrong, though; IIRC, the original Friedman &
Wise article introduced the term promise for something that's rather
a future according to that distinction.)

/Andreas


Re: Do we really need the [[HasOwnProperty]] internal method and hasOwn trap

2012-11-12 Thread Andreas Rossberg
On 12 November 2012 02:17, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 It isn't clear to me why the [[HasOwnProperty]] internal method and the 
 hasOwn Proxy trap need to exist as object behavior extension points.

 Within my current ES6 draft, [[HasOwnProperty]] is only used within the 
 definition of the ordinary [[HasProperty]] internal method and in the 
 definition of Object.prototype.hasOwnProperty.

 The first usage, could be replaced by an inline non-polymorphic property 
 existence check while the latter could be replaced by a [[GetOwnProperty]] 
 call with a check for an undefined result.

 The existence of both [[HasOwnProperty]] and [[HasProperty]] creates the 
 possibility of object that exhibit inconsistent results for the two 
 operations.  A reasonable expectation would seem to be:

 If O.[[HasProperty]](K) is false, then O.[[HasOwnProperty]](K) should also be 
 false.

 This is in fact the case for all ordinary objects that only have ordinary 
 object  in their inheritance chain. Now consider a proxy defined like so:

 let p = new Proxy({}, {
   hasOwn(target, key) { return key === 'foo' ? false : Reflect.hasOwn(target, key); },
   has(target, key) { return key === 'foo' ? true : Reflect.has(target, key); }
 });

 console.log(Reflect.hasOwn(p, 'foo'));  // will display: false
 console.log(Reflect.has(p, 'foo'));  // will display: true

Well, this is only one such example, there are plenty of similar ways
to screw up the object model with proxies. For example, there is no
guarantee that the has, get, getOwnProperty{Descriptor,Names} traps
behave consistently either. When you buy into proxies, you have to buy
into that as well.
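
For instance, the same kind of inconsistency can be manufactured with the 'has' and 'getOwnPropertyDescriptor' traps; a minimal sketch:

```javascript
// Nothing forces 'has' and 'getOwnPropertyDescriptor' to agree either:
// the proxy claims to have 'foo', yet reports no descriptor for it.
const p = new Proxy({}, {
  has(target, key) { return key === 'foo'; },
  getOwnPropertyDescriptor(target, key) { return undefined; }
});

console.log('foo' in p);                                 // true
console.log(Object.getOwnPropertyDescriptor(p, 'foo'));  // undefined
```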

Nevertheless, I agree that [[HasOwnProperty]] and the hasOwn trap are
dispensable, but do not feel strongly about it. Just looking at the
traps from a programmer POV, it may feel a bit asymmetric to have
hasOwn but not e.g. getOwn (although I know where the difference is
coming from).

/Andreas


Re: Promises

2012-11-12 Thread Andreas Rossberg
On 7 November 2012 22:19, Mark S. Miller erig...@google.com wrote:
 On Wed, Nov 7, 2012 at 11:12 AM, Andreas Rossberg rossb...@google.com
 wrote:

 On 7 November 2012 17:57, Tom Van Cutsem tomvc...@gmail.com wrote:
  While we're talking nomenclature: the terms promise and future also
  appear, with roughly the semantics described by Andreas in Scala's API
  [1]
  and Clojure's API [2] (both very recent APIs). I know MarkM dislikes the
  use
  of these terms to distinguish synchronization from resolution, as he has
  long been using those same terms to distinguish traditional futures,
  which
  provide a .get() method blocking the calling thread and returning the
  future's value when ready (as in e.g. Java), from promises, which only
  provide a non-blocking when or then method requiring a callback,
  never
  blocking the event loop thread (as in all the Javascript promise APIs).
 
  To my mind, the term future is still very closely tied to blocking
  synchronization. YMMV.

 I see. Interesting, I wasn't aware of Mark's reservations :). Mark, is
 that just about the terminology, or also conceptually?

  (Please correct me if I'm wrong, though; IIRC, the original Friedman &
  Wise article introduced the term promise for something that's rather
  a future according to that distinction.)

 It is just terminology. Prior to E, the closest similar system was Liskov &
 Shrira's http://dl.acm.org/citation.cfm?id=54016, which called them
 promises. All the non-blocking promise systems I am aware of, with the
 exception of Tom's AmbientTalk, have called them promises or deferreds.
 AFAIK, all are derived from E's promises or Liskov & Shrira's promises. I
 think we should respect this history; but history itself is not a strong
 argument.

 The reason I like the promise terminology is that it naturally accounts
 for the three main states of a promise: unresolved, fulfilled, and broken.

I see. Of course, though, in the holder/resolver approach, those
states jointly apply to both objects. My reasoning is that in that
approach, then, the name promise is more suitable for the resolver
object, because that's what has the fulfill and fail methods. The
other only has then/when and friends, which is why a temporal name
like future is kind of intuitive.

But I understand your argument about history and terminology. I can
get rather worked up about abuses of pre-established terminology. I
don't dare mention my pet peeves on this list. :)


 A major feature of many promise systems (including IIRC Liskov and Shrira's)
 that I do not recall being implemented by future systems (with the
 exception of Tom's) is this broken state, as well as the broken promise
 contagion rules which go with it.

Maybe I misunderstand, but MultiLisp already had a notion of failed
future, I think, even if it wasn't really discussed in their paper. It
is kind of inevitable once you combine the future (or promise) idea
with exceptions. Consequently, it also is part of the future semantics
of at least Oz, Alice ML, Scala, and C++.

/Andreas


Re: Promises

2012-11-12 Thread Andreas Rossberg
On 12 November 2012 16:43, Mark S. Miller erig...@google.com wrote:
 The shift back to 'when' clearly failed to achieve consensus.

FWIW, I think 'then' is better, because 'when' sounds as if it should
be passed some kind of predicate or condition. It just doesn't read
very naturally when taking continuations.

/Andreas


Re: Do we really need the [[HasOwnProperty]] internal method and hasOwn trap

2012-11-14 Thread Andreas Rossberg
On 14 November 2012 09:30, Tom Van Cutsem tomvc...@gmail.com wrote:
 2012/11/13 David Bruant bruan...@gmail.com

 For the particular case you've written, when going for hasOwnProperty.call
 or the in operator, the JS engine knows it needs to output a boolean, so it
 can rewrite (or contextually compile) your trap's last line as
 e === undefined (since undefined is falsy and objects created by object
 literals are truthy). In that particular case, the allocation isn't
 necessary, provided some simple static analysis.
 Maybe type inference can be of some help to prevent this allocation in
 more dynamic/complicated cases too. I would really love to have implementors'
 POV here.

 I'm very skeptical of this.

 If I may summarize, you're arguing that we can get away with spurious
 allocations in handler traps because JS implementations can in theory
 optimize (e.g. partially evaluate trap method bodies). I think that's
 wishful thinking. Not that implementors can't do this, I just think the
 cost/benefit ratio is way too high.

I agree, this is a particularly daring instance of the "sufficiently
smart compiler" argument.

/Andreas


Re: Promises

2012-11-14 Thread Andreas Rossberg
On 14 November 2012 18:41, Mark S. Miller erig...@google.com wrote:
 Either way, Scala's
 unfortunate choice clearly violates this history in a confusing manner, so
 I'd classify it as #4. Let's not repeat Scala's mistake.

Just to reiterate, it's not just Scala, but more importantly also C++,
Java (to some extent), and several less mainstream languages. That is,
this use of terminology has quite a bit of history of its own, dating
back almost as far as E (and having developed more or less
independently).

/Andreas


Re: Promises

2012-11-15 Thread Andreas Rossberg
On 14 November 2012 20:37, Tom Van Cutsem tomvc...@gmail.com wrote:
 I still think futures connote strongly with blocking synchronization. If
 we'd add a concept named future to JS on the grounds that the same concept
 exists in Java and C++, developers will reasonably expect a blocking
 future.get() method.

I'd say that different notions of concurrency in respective
languages naturally affect the details of the future _interface_, but
I don't see this as a fundamental difference in the concept _as such_.
Somewhat like weak maps not having iteration, but still being maps.

The future interface in languages with threads is a superset of what
we can provide for JS. In those languages, you (can) have 'then' and
'wait'. Obviously, in a language without threads and only asynchronous
concurrency, the latter operation is not available.

/Andreas


Re: no strict; directive

2012-11-15 Thread Andreas Rossberg
On 15 November 2012 20:58, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:
 I am talking about caller which is NOT a misfeature

Indeed, "misfeature" is putting it too mildly.

/Andreas


Re: no strict; directive

2012-11-15 Thread Andreas Rossberg
On 15 November 2012 21:20, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:
 thanks for your contribution to this thread, appreciated. I'd like a proper
 answer now if that is possible.

You already got rather strong answers from two members of TC39. It's
safe to assume that the rest feels similar. To be clear: it's not only
not planned, but it would happen only over the dead body of half of
the committee. There were reasons why strict mode started ruling out
'caller' and 'with' in the first place.

/Andreas


Problems with strict-mode caller poisoning

2012-11-16 Thread Andreas Rossberg
Consider the following code:

function f() { "use strict"; g() }
function g() {
  var caller = Object.getOwnPropertyDescriptor(g, 'caller').value
}

With the current spec, this code would legally give g the strict
function f as its caller. In
https://bugs.ecmascript.org/show_bug.cgi?id=310, Allen proposes the
obvious fix, which is to special case [[GetOwnProperty]] instead of
[[Get]] for function objects in 15.3.5.4. In fact, that is what both
V8 and FF already implement.

However, we recently discovered an issue with that semantics. Namely,
it causes Object.is{Sealed,Frozen} and Object.{seal,freeze} to
spuriously throw when applied to the wrong function at the wrong time.
Consider:

d8> function g() { Object.seal(g) }
d8> function f() { "use strict"; g() }
d8> f()
(d8):1: TypeError: Illegal access to a strict mode caller function.

(Interestingly, Firefox does not throw on that example, so I'm not
sure what semantics it actually implements.)

What can we do here? There does not seem to be a clean fix, only more
hacks on top of hacks. It is a bit of a bummer for our implementation
of Object.observe, which wants an isFrozen check on its callback.

Thoughts?

/Andreas


Re: no strict; directive

2012-11-19 Thread Andreas Rossberg
On 16 November 2012 22:01, Andrea Giammarchi
andrea.giammar...@gmail.com wrote:
 P.S. Alex, just to be as clear as possible, one answer I did not like that
 much was that eval('no strict') nonsense ... that was not an answer 'cause
 problems are the same with eval('use strict')

No, they are not. You apparently didn't understand Oliver's answer,
but chose to call it nonsense without even trying. Not a good basis
for making a convincing argument.

/Andreas


Re: Do we really need the [[HasOwnProperty]] internal method and hasOwn trap

2012-11-19 Thread Andreas Rossberg
On 19 November 2012 13:04, David Bruant bruan...@gmail.com wrote:
 I wish to point out a little thought on the topic of memory management. As
 far as I know, all GC algorithms are runtime algorithms,
 meaning that the primitives of these algorithms are objects and references
 between objects. I have never heard of a memory management system that would
 take advantage of source code information to not allocate memory if it's
 proven to be unused after allocation (or allocate less if it's proven only
 part will be used).
 Is it a stupid idea? Too much effort? The conjunction of two research areas
 where people usually don't talk to one another?

Search for region inference or region-based memory management. Was
a hot topic in the late 90s, but ultimately the cost/benefit ratio
turned out to be not so clear.

/Andreas


Re: Problems with strict-mode caller poisoning

2012-11-20 Thread Andreas Rossberg
On 16 November 2012 22:19, Jeff Walden jwalden...@mit.edu wrote:
 On 11/16/2012 07:06 AM, Brendan Eich wrote:
 So it seems to me premature to throw on [[GetOwnProperty]] of a strict 
 function's 'caller'. It would be more precise, and avoid the problem you're 
 hitting, to return a property descriptor with a censored .value, or a 
 poisoned-pill throwing-accessor .value.

That may be plausible, but requires making the 'value' property an
accessor, and hence breaks with the idea that descriptors are just
records. But maybe that is OK for this hack? We should at least be
careful to define it such that the meaning and behaviour of the
descriptor does _not_ vary in time, which would be weird at best.
I.e., its return value and/or poisoning has to be determined once when
[[GetOwnProperty]] is executed.


 premature to throw on [[GetOwnProperty]](caller) on a function whose 
 caller is strict, I assume you meant.  That seems right to me.  Since caller 
 is a time-variant characteristic, it seems right for the property to be an 
 accessor, probably existing solely on Function.prototype, and to defer all 
 the strictness checks to when the function provided as |this| is actually 
 invoked.

I'm not sure I follow, are you talking about the 'caller' property
itself now or the 'value' property of its descriptor?

The problem with 'caller' itself is that the spec does not (and
doesn't want to) spec it for non-strict functions, so it cannot
prescribe it to be an accessor. All would be fine, I suppose, if it
was.

If you are talking about the descriptor's 'value' property then I
strongly oppose making that vary in time. A time varying descriptor
would be weird at best. Fortunately, it's not necessary either.


 Such a property is no different from anything already in the language, so I 
 can't see how it would at all interfere with Object.observe semantics or 
 implementation.

See above: non-strict 'caller' is special in that the spec does not
try to define it, but yet guards against it with special, er, measures.

/Andreas


Re: possible excessive proxy invariants for Object.keys/etc??

2012-11-21 Thread Andreas Rossberg
On 21 November 2012 01:06, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 Tom Van Cutsem tomvc...@gmail.com wrote:
 Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 Tom Van Cutsem tomvc...@gmail.com wrote:
 c) to ensure the stability of the result.

 You can think of a + b as implementing a type coercion of the trap result
 to Array of String. This coercion is not too dissimilar from what the
 getOwnPropertyDescriptor has to do (normalization of the returned property
 descriptor by creating a fresh copy).

 Yes, premature type coercion, in my opinion. Also, a classic performance
 mistake made by dynamic language programmers: unnecessary coercion of
 values that are never going to be accessed, or redundant coercion checks of
 values that are already of the proper type.  Why is it important to do
 such checks on values that are just passing through these traps?  When
 and if somebody actually gets around to using one of the elements that are
 returned as a property key, they will be automatically coerced to a valid
 value.  Why is it important that it happens any sooner than that?

 I don't know if you are familiar with the work of Felleisen et al. on
 higher-order contracts. In that work, they use the notion of blame between
 different components/modules. Basically: if some module A receives some data
 from a module B, and B provides wrong data, then B should be assigned
 blame. You don't want to end up in a situation where A receives the blame at
 the point where it passes that wrong data into another module C.

 Yes, but ES is rampant with this sort of potentially misplaced blame.  We
 can debate whether such proper blame assignment is important or not, but I
 do believe this sort of very low level MOP interface is a situation where
 you want to absolutely minimize nonessential work.  I'd sooner have it be
 a little bit more difficult to track down Proxy-based bugs than to impact the
 performance of every correctly implemented proxy in every correct program.
 BTW this is a general statement about the entire proxy MOP and not just
 about these particular property key access traps.

I'm strongly in favour of guaranteeing the contract Tom is mentioning.
However, there is an alternative to copying: we could require the
array (or array-like object) returned by the trap to be frozen. (We
could also freeze it ourselves, but that might be more problematic.)

(The fact that ES is already full of mistakes should not be an excuse
for reiterating them for new features.)

/Andreas


Re: [Bug 20019] Support subclassing ES6 Map

2012-11-21 Thread Andreas Rossberg
On 20 November 2012 21:30, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Nov 20, 2012 at 10:57 AM, Mark S. Miller erig...@google.com wrote:
 I think adding a MultiMap API to ES7 is a good idea. Neither Map nor
 MultiMap should be a subclass of the other, since neither is an LSP
 subtype of the other.

 When properly designed, as long as you interact with it only through
 Map methods, a MultiMap can be an LSP subtype of Map.

 [...]

 The tricky part is dealing with .size and the iterator methods.  You
 need .size to reflect the number of keys, not pairs, to be consistent
 with Map.  But then .size doesn't match the length of the iterators
 returned by .items() or .values(), unless both of these are changed to
 only return the first value by default.  (They can't return an array
 of values, because that's not what Map does.)

If the multi map iterator returns the same key multiple times it
already breaks the map contract. So you would need a separate
iteration method for that. At that point, as Mark says, it is not
clear what the benefit is.

The proper approach would be to identify a common super class that
identifies the commonalities. You could try to come up with a
hierarchy of concepts, like they did for C++0X before it got
downsized. But lacking types that hardly seems useful for JavaScript.
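
In post-ES6 terms, the "separate iteration method" idea could be sketched as follows (the class `MultiMap` and the method names `add` and `entriesFlat` are hypothetical, purely for illustration):

```javascript
// A MultiMap that keeps the Map iteration contract (each key appears once,
// with an array of values) and exposes repeated-key iteration separately.
class MultiMap extends Map {
  add(key, value) {
    if (!this.has(key)) this.set(key, []);
    this.get(key).push(value);
    return this;
  }
  // Separate iterator: yields [key, value] pairs, keys possibly repeated,
  // so it deliberately does not pretend to be Map iteration.
  *entriesFlat() {
    for (const [key, values] of this) {
      for (const value of values) yield [key, value];
    }
  }
}

const m = new MultiMap().add('a', 1).add('a', 2);
console.log(m.size);                // 1 -- one key, per the Map contract
console.log([...m.entriesFlat()]);  // [ [ 'a', 1 ], [ 'a', 2 ] ]
```

Note that even this bends LSP, since `get` now returns an array of values, which illustrates the tension Mark points out.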

/Andreas


Re: possible excessive proxy invariants for Object.keys/etc??

2012-11-21 Thread Andreas Rossberg
On 21 November 2012 17:55, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 I'd be more favorably inclined towards freezing than I am towards copying.  
 But, as you know, ES5 does not currently produce frozen objects in these 
 situations. I feel uncomfortable about enforcing a frozen invariant for traps 
 where that invariant is not provided by the corresponding ordinary object 
 behavior.  Perhaps I could get over that, or perhaps incompatibly applying 
 that requirement to ordinary objects wouldn't break anything.

 Regardless, freezing and testing for frozen is, itself, not a cheap 
 operation.  It requires iterating over all the property descriptors of an 
 object.  If we are going to build in a lot of checks for frozen objects, 
 perhaps we should just make frozen (and possibly sealed) object-level states 
 rather than a dynamic check of all properties.  Essentially we could 
 internally turn the [[Extensible]] internal property into a four-state value: 
 open, non-extensible, sealed, frozen.  It would make both freezing and 
 checking for frozen much cheaper.

That doesn't seem necessary, because it is just as easy to optimise
the current check for the normal case where the object has been frozen
or sealed with the respective operation.

 I think it is usually a mistake to perform complex invariant check at low 
 levels of a language engine.  Those often become performance barriers.  
 Checking complex relationships belongs at higher abstraction layers.

Well, the root of the problem arguably lies with the whole idea of
proxies hooking arbitrary code into low-level operations. Perhaps
such power has to come with a cost in terms of checks and balances.
There are no higher abstraction layers in this case.

/Andreas


Re: possible excessive proxy invariants for Object.keys/etc??

2012-11-21 Thread Andreas Rossberg
On 21 November 2012 18:35, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 If you are writing any sort of generic algorithm that does a freeze check on 
 an arbitrary object, you have to explicitly perform all of the internal method 
 calls, because you don't know whether the object is a proxy (where every such 
 internal method call turns into an observable trap) or even some other sort 
 of exotic object implementation that can observe actual internal method 
 calls.  If there were explicit internal state designating an object as frozen, 
 then we wouldn't have all of those potentially observable calls.

Yes, but the fast path in the VM would merely check whether you have
an ordinary object with the 'frozen' flag set. Only if that fails, or
for (most) proxies and other exotics, you have to fall back to do
something more complicated. Presumably, most practical use cases would
never hit that.

/Andreas


Re: Problems with strict-mode caller poisoning

2012-11-22 Thread Andreas Rossberg
On 20 November 2012 17:26, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 Yes, property descriptor records can't act like accessors.  They are just 
 specification internal records that indicate that a set of values is being 
 passed around.  But we can censor the value that goes into the record.  To me 
 this seems like a sufficient solution for dealing with the security issue.  
 It deviates from what was specified in ES5.1 but that is buggy and I don't 
 think a change from throwing to returning null for the caller would create 
 much havoc

+1. I just implemented this in V8, and we will see how it goes in the wild.

Interestingly, none of the 97 tests in test262 that are specifically
concerned with 15.3.5.4 fail after this change 8-}. It seems that they
are broken in at least two ways: allowing a falsey value for .caller,
and assuming that a global function would be non-strict even if the
global scope is already strict.

/Andreas


Re: Pure functions in EcmaScript

2012-11-28 Thread Andreas Rossberg
On 28 November 2012 12:50, Marius Gundersen gunder...@gmail.com wrote:
 Has there been any work done on pure functions in EcmaScript? The way I
 imagine it, there would be a way to indicate that a function should be pure
 (by using a symbol or a new keyword, although I understand new keywords
 aren't terribly popular). The pure function is not allowed to access any
 variable outside its own scope. Any access to a variable outside the scope
 of the function would result in a Reference Error, with an indication that
 the reference attempt was made from a pure function. This also applies to
 any function called from within the pure function. The entire stack of a
 pure function must be pure. This also means the pure function cannot access
 the [this] object. Only the parameters  passed to the function can be used
 in the calculation.

 The syntax could be something like this (the @ indicates that it is pure):

 function sum@(a, b){
   return a+b;
 }

 var sum = function@(a, b){
   return a+b;
 }

A couple of comments.

First, your definition of pure is not quite correct. Any function
that even _returns_ locally created state in some form (i.e., a new
object), is impure.

Second, due to the extremely impure nature of JavaScript, there aren't
many useful pure functions you could even write. For example, your
'sum' function is not pure, because the implicit conversions required
by + can cause arbitrary side effects.
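
To make that point concrete, here is a small sketch (with illustrative names) of how an innocent-looking addition can run arbitrary code:

```javascript
// The implicit ToPrimitive conversion performed by `+` can invoke a
// user-defined valueOf, which may do anything.
const sneaky = {
  valueOf() {
    console.log('side effect!');  // could just as well mutate global state
    return 1;
  }
};

function sum(a, b) { return a + b; }
console.log(sum(sneaky, 2));  // logs 'side effect!', then 3
```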

Last, short of a static type-and-effect system, I don't see how the
necessary checks could be implemented without imposing significant
overhead on almost every primitive operation -- because every
function, and hence almost any piece of code, might potentially end up
with a pure function in its call chain, and would need to check for
that.

/Andreas


Re: Pure functions in EcmaScript

2012-11-28 Thread Andreas Rossberg
On 28 November 2012 14:39, Marius Gundersen gunder...@gmail.com wrote:
 On Wed, Nov 28, 2012 at 1:20 PM, Andreas Rossberg rossb...@google.com
 wrote:
 First, your definition of pure is not quite correct. Any function
 that even _returns_ locally created state in some form (i.e., a new
 object), is impure.

 Fair enough. A better name would probably be side effect free functions.

Well, observable allocation of state, or assignment to parameters,
_are_ side effects.

On the other hand, accessing non-local bindings that are (deeply)
immutable does not constitute an effect, yet you want to forbid it.

Seriously, what is the use case for this rather strange definition?


 Last, short of a static type-and-effect system, I don't see how the
 necessary checks could be implemented without imposing significant
 overhead on almost every primitive operation -- because every
 function, and hence almost any piece of code, might potentially end up
 with a pure function in its call chain, and would need to check for
 that.

 I'm not an implementer of EcmaScript, so I don't have deep knowledge of how
 this could be implemented. I would imagine that a subset of a true pure
 function, where the only restriction would be that only variables passed as
 arguments to the function exist in the scope, would be relatively easy to
 implement. Wouldn't this be a faster implementation than todays functions,
 which have to keep track of scope? These side-effect-free functions would
 only need to contain the variables passed as parameters in their scope.
 Accessing anything outside the scope would result in a reference error.

Modern JS implementations don't do the kind of runtime bookkeeping of
scopes that you seem to assume. Compiled code doesn't even know what a
scope is, it just accesses hardcoded indices into the stack or into
some heap arrays.

/Andreas


Re: Problems with strict-mode caller poisoning

2012-11-28 Thread Andreas Rossberg
On 29 November 2012 00:16, Dave Fugate dave.fug...@gmail.com wrote:
 Believe you're correct on the former, but perhaps not the latter=)

 E.g.:
  6 /**
  7* @path ch15/15.3/15.3.5/15.3.5.4/15.3.5.4_2-1gs.js
  8* @description Strict mode - checking access to strict function
 caller from strict function (FunctionDeclaration defined within strict mode)
  9   * @onlyStrict
 10  * @negative TypeError
 11  */
 12
 13
 14 "use strict";
 15 function f() {
 16 return gNonStrict();
 17 }
 18 f();
 19
 20
 21 function gNonStrict() {
 22 return gNonStrict.caller;
 23 }

 is globally scoped strict mode and passes only when a TypeError gets thrown
 indicating strict mode is in effect.

The bug with this test (and others) is that gNonStrict is _not_ a
non-strict function, its name notwithstanding. Hence the test throws
for the wrong reason, namely because strict-function.caller is a
poisoned getter, not because of Sec 15.3.5.4, which it is supposed to
test.
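The effect described here can be reproduced directly (a minimal sketch with a hypothetical function name): in a strict-mode program, reading `.caller` on a function throws a TypeError via the poisoned accessor, before the 15.3.5.4 check is ever exercised.

```javascript
"use strict";

// This function is strict because the whole program is strict,
// regardless of what its name suggests.
function gStrict() {
  return gStrict.caller;  // poisoned accessor on strict functions
}

let poisoned = false;
try {
  gStrict();
} catch (e) {
  poisoned = e instanceof TypeError;
}
```

So the test passes (a TypeError is thrown), but for the wrong reason.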

/Andreas


Re: Problems with strict-mode caller poisoning

2012-11-28 Thread Andreas Rossberg
On 29 November 2012 06:06, Dave Fugate dave.fug...@gmail.com wrote:
 The naming 'gNonStrict' here refers to the function not containing a use
 strict declaration itself, not that it's subject to strict mode.  Sorry
 this intent wasn't clearer.

 Section 15.3.5.4 step 2 in my copy of ES5 reads:
 If P is caller and v is a strict mode Function object, throw a
 TypeError exception.

 Is something other than Function's [[Get]] really supposed to be called in
 this snippet?  E.g., 13.2.19.b.  If so, seems like they're still valid test
 cases, only they apply to step 1 of 15.3.5.4, not step 2?

I suppose so, but was that the intention? Either way, there currently
is no test that actually tests step 2.

/Andreas


 On Wed, Nov 28, 2012 at 4:43 PM, Andreas Rossberg rossb...@google.com
 wrote:

 On 29 November 2012 00:16, Dave Fugate dave.fug...@gmail.com wrote:
  Believe you're correct on the former, but perhaps not the latter=)
 
  E.g.:
   6 /**
   7* @path ch15/15.3/15.3.5/15.3.5.4/15.3.5.4_2-1gs.js
   8* @description Strict mode - checking access to strict
  function
  caller from strict function (FunctionDeclaration defined within strict
  mode)
   9   * @onlyStrict
  10  * @negative TypeError
  11  */
  12
  13
  14 "use strict";
  15 function f() {
  16 return gNonStrict();
  17 }
  18 f();
  19
  20
  21 function gNonStrict() {
  22 return gNonStrict.caller;
  23 }
 
  is globally scoped strict mode and passes only when a TypeError gets
  thrown
  indicating strict mode is in effect.

 The bug with this test (and others) is that gNonStrict is _not_ a
 non-strict function, its name notwithstanding. Hence the test throws
 for the wrong reason, namely because strict-function.caller is a
 poisoned getter, not because of Sec 15.3.5.4, which it is supposed to
 test.

 /Andreas




Re: Problems with strict-mode caller poisoning

2012-11-29 Thread Andreas Rossberg
Or to null, which is exactly what the new semantics decided to do. ;)

/Andreas

On 29 November 2012 17:11, Dave Fugate dave.fug...@gmail.com wrote:
 Should be: 'caller' to false :)

 On Thu, Nov 29, 2012 at 9:10 AM, Dave Fugate dave.fug...@gmail.com wrote:

 'caller' to true




Re: On dropping @names

2012-12-04 Thread Andreas Rossberg
On 4 December 2012 14:28, Claus Reinke claus.rei...@talk21.com wrote:
 Could you please document the current state of concerns, pros and
 cons that have emerged from your discussions so far? You don't
 want to have to search for these useful clarifications when this topic
 comes up again (be it in tc39 or in ES6 users asking where is private?).

There were various mixed concerns, like perhaps requiring implicit
scoping of @-names to be practical in classes, their operational
generativity perhaps being a mismatch with their seemingly static
meaning in certain syntactic forms, potential ambiguities with what @x
actually denotes in certain contexts. And probably more. Most of that
should be in the meeting minutes.

 Implicit scoping in a language with nested scopes has never been a
 good idea (even the implicit var/let scopes in JS are not its strongest
 point). Prolog got away with it because it had a flat program structure
 in the beginning, and even that fell down when integrating Prolog-like
 languages into functional ones, or when adding local sets of answers.

Indeed. (Although I don't think we have implicit let-scopes in JS.)

 This leaves the generativity concerns - I assume they refer to
 gensym-style interpretations? ES5 already has gensym, in the
 form of Object References (eg, Object.create(null)), and Maps
 will allow to use those as keys, right?

 The only thing keeping us from using objects as property names
 is the conversion to strings, and allowing Name objects as property
 names is still on the table (as is the dual approach of using a
 WeakMap as private key representation, putting the object in the
 key instead of the key in the object).

Symbols will definitely still be usable as property names, that's
their main purpose.

The main technical reason that arbitrary objects cannot be used indeed
is backwards compatibility. The main moral reason is that using
general objects only for their identity seems like overkill, and you
want to have a more targeted and lightweight feature.
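A minimal sketch of that "targeted and lightweight" use, written against the Symbol API as it eventually shipped in ES6: the symbol contributes nothing but its identity, and symbol-keyed properties stay out of string-keyed enumeration.

```javascript
// A symbol used purely for its identity as a property key.
const secret = Symbol("secret");
const obj = { visible: 1 };
obj[secret] = 42;

// String-keyed enumeration does not see the symbol-keyed property.
const keys = Object.keys(obj);
```

Unlike an `Object.create(null)` gensym, the symbol carries no other object machinery.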

 So I'm not sure how your concerns are being addressed by
 merely replacing a declarative scoping construct by an explicitly
 imperative gensym construct?

We have the gensym construct anyway, @-names were intended to be
merely syntactic sugar on top of that.

 There is a long history of declarative interpretations of gensym-
 like constructs, starting with declarative accounts of logic variables,
 over name calculi (often as nu- or lambda/nu-calculi, with greek
 letter nu for new names), all the way to pi-calculi (where names
 are communication channels between processes). Some of these
 calculi support name equality, some support other name features.

 The main steps towards a non-imperative account tend to be:

 - explicit scopes (this is the difference to gensym)
 - scope extrusion (this is the difference to lambda scoping)

Scope extrusion semantics actually is equivalent to an allocation
semantics. The only difference is that the store is part of your term
syntax instead of being a separate runtime environment, but it does
not actually make it more declarative in any deeper technical sense.
Name generation is still an impure effect, albeit a benign one.

Likewise, scoped name bindings are equivalent to a gensym operator
when names are first-class objects anyway (which they are in
JavaScript).

 As Brendan mentions, nu-scoped variables aren't all that different
 from lambda-scoped variables. It's just that most implementations
 do not support computations under a lambda binder, so lambda
 variables do not appear to be dynamic constructs to most people,
 while nu binders rely on computations under the binders, so a
 static-only view is too limited.

I think you are confusing something. All the classical name calculi
like pi-calculus or nu-calculus don't reduce/extrude name binders
under abstraction either.

/Andreas


Re: Comments on Meeting Notes

2012-12-05 Thread Andreas Rossberg
On 5 December 2012 02:46, Brendan Eich bren...@mozilla.org wrote:
 Also, good luck getting SunSpider or V8/Octane to enable "use strict"!
 Paging Dr. Rossberg on the latter :-P.

Octane actually contains two benchmarks running in strict mode, namely
PDF/JS and GameBoy. (Unfortunately, I just realised that we screwed it
up for the latter by file concatenation -- but I'll make sure to get
that fixed for the next release.)

For the rest, introducing strict mode would kind of go against the
spirit of Octane running real world applications as is. But I'd love
to see _new_ benchmarks using strict mode in future releases. I don't
mind suggestions. :)

/Andreas


Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 05:05, Rick Waldron waldron.r...@gmail.com wrote:
 Again, I reject the notion that someone might screw up is a valid argument
 for this, or any, discussion. It's one thing to be aware of the potential
 for misuse, but entirely another to succumb to fear driven design.

"Fear driven design" is pejorative. The argument really is about the
ability to do local reasoning as much as possible, which is a *very*
valid concern, especially when reading somebody else's code using
somebody else's library.

I agree with other voices in this thread that in general, returning
'this' is rather an anti-pattern. You can get away with it if you
limit it to very few well-known library functions, but I doubt that
blessing such style in the std lib does help that cause.
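(For what it's worth, TC39 ultimately did make `Map.prototype.set` return `this`; a quick sketch of the resulting fluent style under discussion in this thread:)

```javascript
// As specified in ES6, Map.prototype.set returns the map itself,
// so construction and population can be chained in one expression.
const m = new Map().set("a", 1).set("b", 2);
```

The cost, as argued above, is that a reader of `x.set(k, v)` needs non-local knowledge of the API to know whether the expression evaluates to the map or to something else.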

/Andreas


Re: On dropping @names

2012-12-06 Thread Andreas Rossberg
On 5 December 2012 19:19, Claus Reinke claus.rei...@talk21.com wrote:
 their operational generativity perhaps being a mismatch with their
 seemingly static meaning in certain syntactic forms,

 This appears to be ungrounded. See below.

Personally, I also consider that a non-issue, but it was concern that
was raised.

 Implicit scoping in a language with nested scopes has never been a
 good idea (even the implicit var/let scopes in JS are not its strongest
 point). Prolog got away with it because it had a flat program structure
 in the beginning, and even that fell down when integrating Prolog-like
 languages into functional ones, or when adding local sets of answers.

 Indeed. (Although I don't think we have implicit let-scopes in JS.)

 There are few enough cases (scope to nearest enclosing block unless there is
 an intervening conditional or loop construct,

If you mean something like

  if (bla) let x;

then that is not actually legal.

 to nearest for loop body if it
 appears in the loop header, to the right in a comprehension) that the
 difference might not matter.
 I would have preferred if let had not been modeled after var so much, but
 that is another topic.

It is as clean as it can get given JS. And you may be surprised to
hear that there are some voices who actually would have preferred a
_more_ var-like behaviour.

 So I'm not sure how your concerns are being addressed by
 merely replacing a declarative scoping construct by an explicitly
 imperative gensym construct?

 We have the gensym construct anyway, @-names were intended to be merely
 syntactic sugar on top of that.

 Yes, so my question was how removing the sugar while keeping
 the semantics is going to address the concerns voiced in the meeting
 notes.

The concern was that the sugar has issues, not symbol semantics as such.


 Scope extrusion semantics actually is equivalent to an allocation
 semantics. The only difference is that the store is part of your term
 syntax instead of being a separate runtime environment, but it does
 not actually make it more declarative in any deeper technical sense.
 Name generation is still an impure effect, albeit a benign one.

 For me, as a fan of reduction semantics, having all of the semantics
 explainable in the term syntax is an advantage!-) While it is simple to map
 between the two approaches, the nu-binders are more declarative in terms
 of simpler program equivalences: for gensym,
 one needs to abstract over generated symbols and record sharing
 of symbols, effectively reintroducing what nu-binders model directly.

The program equivalences are the same, up to annoying additional
congruences you need to deal with for nu-binders, which complicate
matters. Once you actually try to formalise semantic reasoning (think
e.g. logical relations), it turns out that a representation with a
separate store is significantly _easier_ to handle. Been there, done
that.

 gensym is more imperative in terms of the simplest implementation:
 create a globally unused symbol.

Which also happens to be the simplest way of implementing
alpha-conversion. Seriously, the closer you look, the more it all
boils down to the same thing.

 As Brendon mentions, nu-scoped variables aren't all that different
 from lambda-scoped variables. It's just that most implementations
 do not support computations under a lambda binder, so lambda
 variables do not appear to be dynamic constructs to most people,
 while nu binders rely on computations under the binders, so a
 static-only view is too limited.

 I think you are confusing something. All the classical name calculi
 like pi-calculus or nu-calculus don't reduce/extrude name binders
 under abstraction either.

 Not under lambda-binders, but under nu-binders - they have to.

 If was explaining that the static/dynamic differences that seem to make
 some meeting attendees uncomfortable are not specific to nu-scoped
 variables, but to implementation strategies. For lambda-binders, one can get
 far without reducing below them, but if one lifts that restriction,
 lambda-bound variables appear as runtime constructs, too, just as for
 nu-binders and nu-bound variables (gensym-ed names).

Not sure what you're getting at precisely, but I don't think anybody
would seriously claim that nu-binders are useful as an actual
implementation strategy.

/Andreas


Re: Module Comments

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 15:42, Kevin Smith khs4...@gmail.com wrote:
 5) Dynamic exports via `export = ?` could make interop with existing
 module systems easier.  But how does that work?

 Dave gave an outline.  I'm liking this.  What are the downsides, if any?

The downside is that it introduces a severe anomaly into the module
semantics (a module which actually has no instance). I could live with
this feature if we were to find a way to explain it in terms of simple
syntactic sugar on both the import and export side, but screwing and
complicating the semantics for minor syntactic convenience is not
something I am particularly fond of.

/Andreas


Re: Module Comments

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 16:44, Domenic Denicola dome...@domenicdenicola.com wrote:
 For the record, here's the idea Yehuda and I worked out:

 https://gist.github.com/1ab3f0daa7b37859ce43

 I would *really* appreciate if people read it (it's easy reading, I
 promise!) and incorporated some of our concerns and ideas into their
 thinking on module syntax.

I strongly agree with having the

import x from ...
import {x, y} from ...

symmetry and consistent binding on the left. However, the more radical
parts of your proposal (allowing arbitrary export expressions, and
arbitrary import patterns) do not work.

The problem is that imports are not normal variable assignments. They
do not copy values, like normal destructuring, they are aliasing
bindings! If you were to allow arbitrary expressions and patterns,
then this would imply aliasing of arbitrary object properties. Not
only is this a completely new feature, it also is rather questionable
-- the aliased location might disappear, because objects are mutable.

Consider:

  module A {
let o = {
  x: [1, 2],
  f() { o.x = 666 }
}
export o
  }

  import {x: [a, b], f} from A
  a = 3  // is this supposed to modify the array?
  f()
  print(a)  // x is no longer an array, a doesn't even exist

In other words, what you are proposing has no longer anything to do
with static scoping.

You could arguably make this saner by interpreting nested patterns in
an import as copying, not aliasing, but I think mixing meanings like
that would be rather confusing and surprising.

You could also consider imports always meaning copying, but then
exporting a variable will no longer be useful.
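The aliasing behaviour of exports, and why copying imports would defeat it, can be simulated with an accessor property (a sketch with hypothetical names; `A` stands in for a module instance):

```javascript
// Module exports behave like aliases (live bindings). Here A.x is
// modelled as an accessor so it always reads the current x, the way
// an imported binding would.
let x = 4;
const A = {
  get x() { return x; },
  f() { x = 5; }
};

const copied = A.x;  // a plain read copies the value at this instant
A.f();               // mutates the underlying variable
// A.x now reflects the mutation; `copied` does not.
```

Destructuring an import pattern would behave like `copied`, losing exactly the liveness that makes exporting a mutable variable useful.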

/Andreas


Re: On dropping @names

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 17:25, Claus Reinke claus.rei...@talk21.com wrote:
 I was hoping for something roughly like

let lhs = rhs; statements
// non-recursive, scope is statements

let { declarations }; statements// recursive, scope is
 declarations and statements

Problem is that you need mutual recursion between different binding
forms, not just 'let' itself.

/Andreas


Re: Module Comments

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 17:33, Kevin Smith khs4...@gmail.com wrote:
 The downside is that it introduces a severe anomaly into the module
 semantics (a module which actually has no instance). I could live with
 this feature if we were to find a way to explain it in terms of simple
 syntactic sugar on both the import and export side, but screwing and
 complicating the semantics for minor syntactic convenience is not
 something I am particularly fond of.

 What if this:

 export = boo;

 Actually creates a static export with some exotic name, say __DEFAULT__ (for
 the sake of argument) and initializes it to the value boo.

 And this form:

 import boo from boo.js;

 Creates a binding to __DEFAULT__ in boo.js, if it exists, or to the module
 instance of boo.js otherwise.

 Would that work as a desugaring?

I suggested something along these lines at some point in the past, but
there were some concerns with it that, unfortunately, I do not
remember. Maybe it can be resolved.

Note, however, that you still assume some hack in the semantics with
the if it exists part. To avoid that, you need to divorce the import
syntax from the naming-an-external-module syntax -- which I'd actually
prefer anyway, and which was the case in the previous version of the
proposal.

/Andreas


Re: Module Comments

2012-12-06 Thread Andreas Rossberg
On 6 December 2012 17:46, Matthew Robb matthewwr...@gmail.com wrote:
 What about trying it the other way, flip everything.

 import foo as bar;
 import foo as { baz }

Hm, I don't understand. What would that solve?

/Andreas


Re: On dropping @names

2012-12-07 Thread Andreas Rossberg
On 6 December 2012 22:26, Claus Reinke claus.rei...@talk21.com wrote:
 I was hoping for something roughly like

let lhs = rhs; statements
// non-recursive, scope is statements

let { declarations }; statements// recursive, scope is
 declarations and statements

 Problem is that you need mutual recursion between different binding forms,
 not just 'let' itself.

 Leaving legacy var/function out of it, is there a problem with
 allowing mutually recursive new declaration forms in there?

let { // group of mutually recursive bindings

[x,y] = [42,Math.PI]; // initialization, not assignment

even(n) { .. odd(n-1) .. } // using short method form
odd(n) { .. even(n-1) .. } // for non-hoisting functions

class X { .. }
class C extends S { .. new X( odd(x) ) .. }
class S { }
};
if (even(2)) console.log(  new C() );

First of all, this requires whole new syntax for the let body. Second,
it doesn't eliminate the need for temporal dead zones at all. So what
does it gain? The model we have now simply is that every scope is a
letrec (which is how JavaScript has always worked, albeit with a less
felicitous notion of scope).
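The letrec view can be illustrated with ordinary declarations, which may refer to each other regardless of textual order (a sketch with hypothetical `even`/`odd` helpers):

```javascript
// Declarations in a scope are mutually recursive, like bindings in a
// letrec: this forward reference works even though the functions are
// defined textually later.
const result = even(4);

function even(n) { return n === 0 ? true : odd(n - 1); }
function odd(n)  { return n === 0 ? false : even(n - 1); }
```

No extra `let { ... }` grouping form is needed to obtain the mutual recursion.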

/Andreas


Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-07 Thread Andreas Rossberg
On 6 December 2012 18:38, Rick Waldron waldron.r...@gmail.com wrote:
 I agree with other voices in this thread that in general, returning
 'this' is rather an anti-pattern.

 The evidence I've brought to this discussion shows that the most widely used
 and depended upon libraries heavily favor the pattern.

That's not necessarily a contradiction. ;)

/Andreas


Re: Module Comments

2012-12-07 Thread Andreas Rossberg
On 6 December 2012 17:54, Kevin Smith khs4...@gmail.com wrote:

 Note, however, that you still assume some hack in the semantics with
 the if it exists part. To avoid that, you need to divorce the import
 syntax from the naming-an-external-module syntax -- which I'd actually
 prefer anyway, and which was the case in the previous version of the
 proposal.

 Could we eliminate the hack on the export side instead?

 Every module instance has a $DEFAULT export binding.  Normally, it is set to
 the module instance itself.  `export = ?` overrides the value of that
 binding.  `import x from y` binds $DEFAULT in y to x.  Maybe?

Well, in my book, that doesn't count as eliminating the hack, but
rather broadening it to all sides. Moreover, it still prevents you
from getting a handle on the module itself. In fact, I believe this is
pretty much equivalent to what's currently in the proposal.

For the record, what I have in mind is similar to your previous
suggestion, namely treating

  export = exp

as special syntax for the pseudo declaration

  export let export = exp

(where the second 'export' is meant to act as an identifier/property
name). And e.g.

  import x from url

as

  import {export: x} from url

For module naming, we'd need to have a different syntax. In earlier
versions of the proposal that was

  module x at url

which would still be usable even for modules using special exports.
Note also that using the special export would not be mutually
exclusive with having other exports, so in that sense, it is like your
$DEFAULT, but far less magic.

/Andreas


Re: Module Comments

2012-12-09 Thread Andreas Rossberg
On 9 December 2012 02:10, Kevin Smith khs4...@gmail.com wrote:

 So if you didn't set the anonymous binding in some module x.js, what would
 this do:

 import x from x.js;

 Would x be bound to the module instance or would we get a binding error?

Since it is just sugar, and supposed to be equivalent to the
expansion, you (fortunately) would get an error (statically).

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Module Comments

2012-12-09 Thread Andreas Rossberg
On 9 December 2012 03:51, Domenic Denicola dome...@domenicdenicola.com wrote:
 From: Andreas Rossberg [mailto:rossb...@google.com]
 On 6 December 2012 16:44, Domenic Denicola
 dome...@domenicdenicola.com wrote:
  For the record, here's the idea Yehuda and I worked out:
 
  https://gist.github.com/1ab3f0daa7b37859ce43
 
  I would *really* appreciate if people read it (it's easy reading, I
  promise!) and incorporated some of our concerns and ideas into their
  thinking on module syntax.

 However, the more radical parts of your proposal (allowing arbitrary export 
 expressions, and arbitrary import patterns) do not work.

 The problem is that imports are not normal variable assignments. They do not 
 copy values, like normal destructuring, they are aliasing bindings! If you 
 were to allow arbitrary expressions and patterns, then this would imply 
 aliasing of arbitrary object properties. Not only is this a completely new 
 feature, it also is rather questionable -- the aliased location might 
 disappear, because objects are mutable.

 Thanks for the feedback Andreas; this is really helpful. It took me a while 
 to figure out what you meant by this, but I think I understand now. However, 
 I think that since the bindings are const bindings, the difference between 
 copying and aliasing is unobservable—is that right?

No, because what you are aliasing isn't const. Consider:

  module A {
export let x = 4
export function f() { x = 5 }
  }

  import {x, f} from A
  f()
  print(x)  // 5

Moreover, it is still up in the air whether exported mutable bindings
should be mutable externally or not. V8, for example, currently allows
that, and although it doesn't implement 'import' yet you can access
the module directly:

  A.x = 6
  print(A.x)  // 6

That is the natural behaviour if you want to be able to use modules as
a name spacing mechanism. Same in TypeScript, by the way.

 You could also consider imports always meaning copying, but then
 exporting a variable will no longer be useful.

 This is the part that made me question whether I understand what you're 
 saying. What do you mean by "exporting a variable" and "useful"?

I hope the example(s) clarify it. :)

/Andreas


Re: Module Comments

2012-12-09 Thread Andreas Rossberg
On 9 December 2012 15:04, Nathan Wall nathan.w...@live.com wrote:
 The problem is that imports are not normal variable assignments. They
 do not copy values, like normal destructuring, they are aliasing
 bindings! If you were to allow arbitrary expressions and patterns,
 then this would imply aliasing of arbitrary object properties. Not
 only is this a completely new feature, it also is rather questionable
 -- the aliased location might disappear, because objects are mutable.

 Could it be structured so that using `export` directly on a variable
 exported the alias, while using `import { x: [ a, b ] } from A; ` was
 basically just sugar for `import { x } from A; let [ a, b ] = x;` so that a
 and b copied not aliased?

That's what I referred to when I wrote:

 You could arguably make this saner by interpreting nested patterns in
 an import as copying, not aliasing, but I think mixing meanings like
 that would be rather confusing and surprising.

So yes, you could do that, but no, I don't think it is a good idea.
Your example:

 import { x: { a, b }, f } from A;
 f();
 print(a); // 1
 print(b); // 2

 ...

 import { x, f } from A;
 f();
 print(x.a); // 3
 print(x.b); // 4

demonstrates perfectly how it violates the principle of least surprise
and can potentially lead to subtle bugs, especially when refactoring.
One overarching principle of destructuring should be that all
variables in one binding are treated consistently.

/Andreas


Re: Module Comments

2012-12-10 Thread Andreas Rossberg
On 10 December 2012 05:30, Kevin Smith khs4...@gmail.com wrote:
 OK, then suppose we have these two separate forms:

 import x from url; // Bind x to the anonymous export, if defined,
 otherwise error

 and

 import module x from url; // Bind x to the module instance

 In the vast majority of cases the module keyword above can be inferred
 correctly at link-time based on whether or not there is an anonymous export
 in the target module.

 If it were important for the user to disambiguate in those rare cases, and
 load the module instance instead of the anonymous export, then she could
 simply provide the optional module keyword.

 Does that work?

I consider such second-guessing of user intention, which can lead one
construct to mean completely different things, harmful. It makes code
less readable and more brittle. And again, it's a semantic hack,
making the language more complex. I just don't see why it would be
worth it, especially since with the right choice of syntax, the two
forms of declaration can easily be made equally concise.

What's so terrible about using different constructs for different
things that you want to avoid it?

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: On dropping @names

2012-12-11 Thread Andreas Rossberg
On 10 December 2012 21:59, Claus Reinke claus.rei...@talk21.com wrote:
 Second, it doesn't eliminate the need for temporal dead zones at all.

 You could well be right, and I might have been misinterpreting what
 temporal dead zone (tdz) means.
 For a letrec, I expect stepwise-refinement-starting-from-undefined
 semantics, so I can use a binding anywhere in scope but may or may
 not get a value for it. While the tdz seems to stipulate that a binding for
 a variable in scope doesn't really exist and may not be accessed until its
 binding (explicit or implicitly undefined) statement is evaluated.

Not sure what you mean by
"stepwise-refinement-starting-from-undefined". JavaScript is both
eager and impure, and there is no tradition of imposing syntactic
restrictions on recursive bindings. Consequently, any binding can have
effects, and the semantics must be sequentialised according to textual
order. Short of sophisticated static analysis (which we can't afford
in a jitted language), there is no way to prevent erroneous forward
accesses from being observable at runtime.

The question, then, boils down to what the observation should be: a
runtime error (aka temporal dead zone) or 'undefined'. Given that
choice, the former is superior in almost every way, because the latter
prevents subtle initialisation errors from being caught early, and is
not an option for most binding forms anyway.
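
[Editor's sketch, using the TDZ semantics as it eventually shipped in ES6: reading a `let` binding that is in scope but not yet initialised throws a ReferenceError rather than yielding `undefined`.]

```javascript
// Temporal dead zone: the binding `y` exists from the top of the block,
// but reading it before its `let` declaration runs is a ReferenceError.
let result;
{
  try {
    result = y; // `y` is in scope here, but "dead"
  } catch (e) {
    result = e.constructor.name;
  }
  let y = 1;    // `y` becomes live only here
}
console.log(result); // "ReferenceError"
```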

 So what does it gain? The model we have now simply is that every scope is
 a letrec (which is how JavaScript has always worked, albeit
 with a less felicitous notion of scope).

 That is a good way of looking at it. So if there are any statements
 mixed in between the definitions, we simply interpret them as
 definitions (with side-effecting values) of unused bindings, and

 { let x = 0;
  let z = [x,y]; // (*)
  x++;
 let y = x;
 console.log(z);
 }

 is interpreted as

 { let x = 0;
  let z = [x,y]; // (*)
  let _ = x++;
  let y = x;
  let __ = console.log(z);
 }

Exactly. At least that's my preferred way of looking at it.

 What does it mean here that y is *dead* at (*), *dynamically*?
 Is it just that y at (*) is undefined, or does the whole construct throw a
 ReferenceError, or what?

Throw, see above.

 If tdz is just a form of saying that y is undefined at (*), then I can
 read the whole block as a letrec construct. If y cannot be used until its
 binding initializer statement has been executed, then I seem to have a
 sequence of statements instead.

It inevitably is an _impure_ letrec, which is where the problems come in.

 Of course, letrec in a call-by-value language with side-effects is tricky.
 And I assume that tdz is an attempt to guard against unwanted surprises. But
 for me it is a surprise that not only can side-effects on the right-hand
 sides modify bindings (x++), but that bindings are interpreted as
 assignments that bring in variables from the dead.

They are initialisations, not assignments. The difference, which is
present in other popular languages as well, is somewhat important,
especially wrt immutable bindings. Furthermore, temporal dead zone
also applies to assignments. So at least, side effects (which cannot
easily be disallowed) can only modify bindings after they have been
initialised.
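
[Editor's sketch of that last point, again using the semantics as eventually shipped: the temporal dead zone applies to assignments as well, so a side effect cannot sneak a value into a binding before its initialisation.]

```javascript
// TDZ applies to writes, not just reads: assigning to `y` before its
// declaration has been initialised throws a ReferenceError.
let outcome;
{
  try {
    y = 42;            // assignment, not initialisation
    outcome = 'assigned';
  } catch (e) {
    outcome = e.constructor.name;
  }
  let y;               // initialised (to undefined) only here
}
console.log(outcome); // "ReferenceError"
```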

None of these problems would go away by having explicit recursion.
Unless you impose far more severe restrictions.

/Andreas


Re: Number.isNaN

2012-12-13 Thread Andreas Rossberg
On 14 December 2012 06:46, John-David Dalton
john.david.dal...@gmail.com wrote:
Axel Rauschmayer:
 Honest question: I have yet to see boxed values in practice. Are there any
 real use cases?

 See Modernizr:
 https://github.com/Modernizr/Modernizr/blob/master/feature-detects/video.js#L23

I think not. And wrapping bools, like the above piece of code does, is
a particularly bad idea, because JS says

  (Object(false) ? 1 : 2)  ===  1
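
[To spell out the pitfall — editor's minimal example: every object is truthy, so a boxed `false` still takes the true branch.]

```javascript
// A boxed boolean is an object, and all objects are truthy,
// even when the wrapped primitive is `false`.
const boxed = Object(false);
const branch = boxed ? 1 : 2;

console.log(typeof boxed);    // "object"
console.log(boxed.valueOf()); // false
console.log(branch);          // 1 -- the "true" branch!
console.log(false ? 1 : 2);   // 2
```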

/Andreas


Re: A DOM use case that can't be emulated with direct proxies

2012-12-14 Thread Andreas Rossberg
On 13 December 2012 19:21, Mark S. Miller erig...@google.com wrote:
 On Thu, Dec 13, 2012 at 1:12 AM, David Bruant bruan...@gmail.com wrote:
 As you say, to remain viable, it
 must be done quickly. From previous experience, I suggest that there's
 exactly one way to get quick universal deployment: add a test to
 test262 that fails when a browser's WindowProxy object violates this
 normative part of the ES5 spec.

 I feel such a test would rather belong to the HTML DOM. But either way, I
 agree.

 The spec that it violates is ES5.1. Therefore it will be
 uncontroversial to put such tests into test262.

I have to strongly disagree here. By this argument, we could put in a
test for any JS extension in the world that potentially violates
proper ES semantics. I think test262 should test ECMA-262, nothing
else.

In particular, consider that test262 currently is a headless test,
i.e. no browser needed, a shell like d8 or jsc is enough to run it.
Putting in browser-specific tests would put a _huge_ burden on all
kinds of automated testing environments running this suite.

/Andreas


Re: A DOM use case that can't be emulated with direct proxies

2012-12-14 Thread Andreas Rossberg
On 14 December 2012 16:54, Mark Miller erig...@gmail.com wrote:
 Regarding what Andreas said and what Alex +1ed, we already have precedent.
 We already argued through this precedent in committee and agreed. I like
 David's suggestion about how to organize these tests.

Hm, unless you are talking about intl402, I wasn't aware of that.
What's the precedent?

If the non ES tests are separated properly then it's probably less of
an issue, though I still prefer that such tests are under a different
umbrella. Just to make clear that they are not actually testing ES
engines.

That is, I'd much rather have a structure like (modulo details of naming):

estests/
  test262/
    ch*/
  intl402/
  platforms/

/Andreas


 On Fri, Dec 14, 2012 at 5:22 AM, Alex Russell slightly...@google.com
 wrote:

 +1. What Andreas said.


 On Friday, December 14, 2012, Andreas Rossberg wrote:

 On 13 December 2012 19:21, Mark S. Miller erig...@google.com wrote:
  On Thu, Dec 13, 2012 at 1:12 AM, David Bruant bruan...@gmail.com
  wrote:
  As you say, to remain viable, it
  must be done quickly. From previous experience, I suggest that
  there's
  exactly one way to get quick universal deployment: add a test to
  test262 that fails when a browser's WindowProxy object violates this
  normative part of the ES5 spec.
 
  I feel such a test would rather belong to the HTML DOM. But either
  way, I
  agree.
 
  The spec that it violates is ES5.1. Therefore it will be
  uncontroversial to put such tests into test262.

 I have to strongly disagree here. By this argument, we could put in a
 test for any JS extension in the world that potentially violates
 proper ES semantics. I think test262 should test ECMA-262, nothing
 else.

 In particular, consider that test262 currently is a headless test,
 i.e. no browser needed, a shell like d8 or jsc is enough to run it.
 Putting in browser-specific tests would put a _huge_ burden on all
 kinds of automated testing environments running this suite.

 /Andreas
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss


 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss




 --
 Text by me above is hereby placed in the public domain

   Cheers,
   --MarkM


Re: Reflection of global bindings

2012-12-17 Thread Andreas Rossberg
On 15 December 2012 22:52, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 So, to me, it sounds like that to continue down this path we should really 
 add new non-reflected properties attributes that are the real control points 
 for the ES semantics. Eg, we may need [[RealReadOnly]], [[RealDeletable]], 
 and [[RealReconfigurable]] attributes to describe all the states and state 
 transitions that are actually exist within the legacy DOM (and the pure ES 
 global declaration semantics).  As these attributes would not be reflected by 
 Object.getOwnPropertyDescriptor/Object.defineProperty they would have to set 
 in some other internal manner when object instances are created.  This also 
 means that Proxy based object implementations would also need to have some 
 mechanism for emulating these Real attributes.

Now I'm really scared. Please let's not go there.

I see the following preferable solutions to deal with DOM features violating ES:

1. Lobby to fix the DOM and make it conform to ES instead of the other
way round. Alex Russell has argued for this repeatedly.

2. Where we can't (sadly, probably most cases), and are forced to
codify existing DOM hacks in ES, isolate these hacks as much as
possible. Specifically, in the current case, define them as specifics
of the global object (the global object is a lost cause anyway).

/Andreas


Re: Reflection of global bindings

2012-12-17 Thread Andreas Rossberg
On 17 December 2012 13:01, Mark S. Miller erig...@google.com wrote:
 On Mon, Dec 17, 2012 at 2:03 AM, Andreas Rossberg rossb...@google.com wrote:
 I see the following preferable solutions to deal with DOM features violating 
 ES:

 1. Lobby to fix the DOM and make it conform to ES instead of the other
 way round. Alex Russell has argued for this repeatedly.

 2. Where we can't (sadly, probably most cases), and are forced to
 codify existing DOM hacks in ES, isolate these hacks as much as
 possible. Specifically, in the current case, define them as specifics
 of the global object (the global object is a lost cause anyway).

 In general, I might be fine with that approach. But because of direct
 proxies, it doesn't work for invariant enforcement. Direct proxies can
 use the presence of a single invariant-violating object to create any
 number of other invariant-violating objects.

Yes, but is that a different problem than the global object itself?
Why would you expect anything else? And how would introducing an extra
set of internal attributes help?

Of course, I personally wouldn't mind being radical and simply forbid
proxying the global object altogether. But I assume that you are going
to say that there are important use cases. :)
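
[For readers coming to this later — editor's sketch: the invariant enforcement Mark alludes to did land in the final direct-proxies design. A proxy that misreports a non-configurable, non-writable property of its target triggers a TypeError, so a single misbehaving object cannot be used to mint further invariant-violating views.]

```javascript
// Invariant enforcement in direct proxies: the trap result is checked
// against the target's non-configurable, non-writable properties.
const target = {};
Object.defineProperty(target, 'x', {
  value: 1, writable: false, configurable: false,
});

const proxy = new Proxy(target, {
  get() { return 2; } // lies about the frozen property
});

let error = null;
try {
  proxy.x; // engine detects the lie
} catch (e) {
  error = e.constructor.name;
}
console.log(error); // "TypeError"
```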

/Andreas


Re: Proxies: wrong receiver used in default set trap

2012-12-19 Thread Andreas Rossberg
On 18 December 2012 22:56, Tom Van Cutsem tomvc...@gmail.com wrote:

 Option B:
 Address point 2) directly by changing the test that determines property
 addition versus property update inside Reflect.set (i.e. the [[SetP]]
 internal method of objects) so that the algorithm no longer tests whether
 target === receiver, but rather whether target === receiver || receiver is
 a proxy for target.

 This solves the issue at hand, although it feels like a more ad hoc
 solution.


Indeed, especially since the length of the proxy chain may be > 1.

So it has to be A. (Or the definition of Reflect.set has to change. I don't
have much love for the case distinction in there anyway. But it's probably
a necessary consequence of the somewhat incoherent property assignment
model we are stuck with.)

/Andreas


Re: Function identity of non-configurable accessors

2012-12-19 Thread Andreas Rossberg
On 18 December 2012 20:24, Allen Wirfs-Brock al...@wirfs-brock.com wrote:

 To me, as a application programmer or even a library programmer,
 enforcement of these invariants are generally unnecessary. If enforcement
 impacts performance or expressibility they have a negative impact on my
 ability to get my job done.


I take issues with the dichotomy you build up here. It's important to note
that enforcement _is_ a form of expressiveness! Unfortunately, one that is
too often overlooked.

Expressiveness is defined by what a piece of code can do, as well as,
dually, what a piece of code can enforce its context _not to_ do. If I
cannot prevent things from happening then I generally have to work around
that by defending against them manually and potentially everywhere.

Moreover, as an application or library programmer I constantly need to
enforce certain things, _especially against myself_. Any smart programmer
knows that his own stupidity is most likely to outsmart him.
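
[A tiny illustration of "enforcement as expressiveness" — editor's example, not from the thread: freezing an object lets a library enforce, even against its own later code, that a record is never mutated.]

```javascript
// Object.freeze enforces immutability: later mutation attempts have no
// effect (and throw a TypeError in strict mode), so the invariant holds
// without defensive checks at every use site.
const config = Object.freeze({ retries: 3 });

function bumpRetries(c) {
  try { c.retries = 5; } catch (e) { /* throws in strict mode */ }
  return c.retries;
}

console.log(bumpRetries(config));     // 3 -- unchanged
console.log(Object.isFrozen(config)); // true
```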

/Andreas


Re: A Variation on ES Modules

2012-12-19 Thread Andreas Rossberg
On 19 December 2012 16:24, Kevin Smith khs4...@gmail.com wrote:

 I've worked up a concrete variation on the modules syntax:

 https://gist.github.com/4337062

 I believe that it presents a clean, simple model and coding experience.
  Comments welcome!


Thank you! I agree with almost everything you suggest (and especially, what
you say about anonymous exports), and your syntax pretty much exactly
matches my preferences.

I'm fine with considering syntactic module declarations separately (note,
however, that you probably cannot define mutually recursive modules without
them). What you _do_ want to have IMO, though, is module aliases

  module Short = Long.Qualified.Module.Name

Referring to nested modules can be pretty tedious if you don't have a way
to abbreviate names. (You can do that with 'let' or 'const', but then you
lose all static checking.)

OTOH, one more other feature I could consider dropping for the time being
is the ability to export from a ModuleSpecifier. I'm not convinced that
this is a common enough use case to warrant specialised extra syntax -- you
can already express it by pairing an export with an import. (In fact,
allowing the export keyword in front of imports seems like the more
consistent way to support re-exporting.)

/Andreas


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-19 Thread Andreas Rossberg
On 19 December 2012 20:18, James Burke jrbu...@gmail.com wrote:

 exports assignment is not about backcompat specifically, although it
  helps. Exports assignment is more about keeping the anonymous natures
 of modules preserved. In ES modules, modules do not name themselves if
 it is a single module in a file. The name is given by the code that
 refers to that code.


I don't buy this, because the name for the export would just be a local
name. You can still bind it to whatever you want on the import side. That's
what we have lexical scoping for.

For all levels below, the module has to pick names anyway. I seriously fail
to see the point of trying so hard for this one special case.



 Assigning a single exports also nudges people to make small modules
 that do one thing.


It rather nudges people into exporting an object as a module, instead of
writing a real module. The only benefit of that is that they lose all
static checking.

/Andreas


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-19 Thread Andreas Rossberg
On 19 December 2012 21:29, James Burke jrbu...@gmail.com wrote:

 This is illustrated by an example from Dave Herman, for a language
 (sorry I do not recall which), where developers ended up using _t,
 or some convention like that, to indicate a single export value that
 they did not want to name. As I recall, that language had something
 more like bindings than variables. That would be ugly to see a
 _t convention in JS (IMO).


That language would be ML (or its Ocaml dialect), which happens to have the
most advanced module system of all languages by far. The convention is to
use t as an internal type name, and I've never heard anybody complain
about it. ;)  It's an acquired taste, I suppose.

It's also worth noting that Dave's comparison is somewhat inaccurate. The
convention is used to name the _primary_ abstract type defined by a module,
not the _only_ export -- modules with only one export practically _never_
show up in ML programming, which perhaps is a relevant data point in itself.

/Andreas


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-20 Thread Andreas Rossberg
On 19 December 2012 23:05, David Herman dher...@mozilla.com wrote:

 On Dec 19, 2012, at 12:59 PM, Andreas Rossberg rossb...@google.com
 wrote:

  It's also worth noting that Dave's comparison is somewhat inaccurate.
 The convention is used to name the _primary_ abstract type defined by a
 module, not the _only_ export

 That doesn't disagree with what I said. I don't really get the obsession
 with just one value either (it's some pretty dubious sophistry, IMO). I
 think the key is when you have a module that provides a primary
 *abstraction*. That's what I said in the meeting.


Yes, but unless it is the _only_ export, you cannot make it anonymous
anyway. That is, even if such an anonymous export feature existed in ML, it
would not be applicable to the case where the t convention is used.
(Which is part of the reason why I consider anonymous export very much a
corner case feature.)

I'd also like to note that the main motivation for the convention in Ocaml
(instead of just giving the type a proper name -- which, btw, is what
Standard ML prefers) is to ease the use of modules as arguments to other,
parameterised modules (a.k.a. functors). Such a feature does not even exist
in ES6, so in my mind, the analogy isn't really all that relevant.

In ML that can take the form of a type; in JS it can take the form of a
 constructor, class, and/or function. The concept you end up reaching for is
 unifying the idea of the module and the abstraction itself. That's what
 you're doing with .t in ML and that's what's going on in JS with jQuery,
 node-optimist, etc etc.


I think I disagree that that's an accurate description of what's going on
in ML. ;)

More importantly, though, convention is one thing, baking it into the
language another. I've become deeply skeptical of shoe-horning orthogonal
concerns into one unified concept at the language level. IME, that
approach invariably leads to baroque, kitchen sink style language
constructs that yet scale poorly to the general use case. (The typical
notion of a class in mainstream OO languages is a perfect example.)

One of the nicer aspects of pre-ES6 JavaScript is that it doesn't have too
much of that sort of featurism.

/Andreas


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-20 Thread Andreas Rossberg
On 20 December 2012 05:24, Brendan Eich bren...@mozilla.com wrote:

 Domenic Denicola wrote:

 IMO this is undesirable. In such a situation, modules can no longer be
 abstraction boundaries. Instead you must peek inside each module and see
 which form it exported itself using.


 You have to know what a module exports, period. That *is* the abstraction
 boundary, the edge you must name or otherwise denote.

 All Andreas is arguing for is a runtime error when you try to denote an
 anonymous export but the module does not match.


A static error, actually.

/Andreas


Re: Proxies: wrong receiver used in default set trap

2012-12-20 Thread Andreas Rossberg
On 20 December 2012 11:09, Tom Van Cutsem tomvc...@gmail.com wrote:

 Currently, that test is performed in step 5.b. by testing whether the
 current object we are visiting in the proto chain O is the Receiver
 object. At first sight, this is a rather weird way of deciding between
 update vs. addition. We can get away with this test because we've just
 queried the O object for an own property (ownDesc). Hence, if O ===
 Receiver, then we know Receiver already has the same property and we must
 update, rather than add.

 In the presence of proxies, this test is no longer valid, as Receiver may
 be a proxy for O.

 The right fix (at least, it feels like the obvious right fix to me), is
 not to test whether O === Receiver, but rather just let the algorithm
 explicitly test whether the Receiver object already defines an own property
 with key P, to decide between update vs. addition:

 replace step 5.b with:
 5.b Let existingDesc be the result of calling the [[GetOwnProperty]]
 internal method of Receiver with argument P.
 5.c If existingDesc is not undefined, then
   ... // same steps as before

 Now, for normal objects, this test is redundant since existingDesc and
 ownDesc will denote the same property.


...or existingDesc is already known to be undefined, in the case where
Receiver !== O, right?


  But that redundancy is harmless, just as in ES5 [[Put]] it was redundant
 to call both [[GetProperty]] and [[GetOwnProperty]] on the same object. In
 the fast path, engines will not follow this algorithm anyway. If Receiver
 is a proxy, then the difference matters and the proxy will be able to
 intercept the call to [[GetOwnProperty]] (which is what makes this revised
 algorithm work in the presence of proxies).

Sounds good to me.
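
[Editor's note: the distinction discussed above is observable in the final spec through `Reflect.set`, whose optional fourth argument is the receiver. A sketch, using the API as eventually standardised: when the receiver differs from the target, the lookup walks the target's chain, but the resulting data property is created on the receiver.]

```javascript
// Reflect.set(target, key, value, receiver): `target` is consulted for
// an existing property, but the new own property lands on `receiver`.
const target = {};
const receiver = {};

Reflect.set(target, 'x', 42, receiver);

console.log('x' in target); // false
console.log(receiver.x);    // 42
```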

/Andreas


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-20 Thread Andreas Rossberg
On 20 December 2012 14:17, Sam Tobin-Hochstadt sa...@ccs.neu.edu wrote:

 We want to support *both* a syntax for 'import a module, and bind a
  particular identifier to the single anonymous export' and a syntax for
 'import a module, and bind an identifier to the module instance
 object'.  We could make these different syntaxes, but then (a) we need
 to similar syntaxes, which will confuse people when they use the wrong
 one and it doesn't work, and (b) you can't switch the implementation
 of a module from 'single export' to 'multiple export' without breaking
 clients.


Argument (a) does not convince me for two reasons. First, it very much
sounds like an argument for premature dumbdownification. Second, and more
importantly, I don't even believe the premise, namely that the potential
for confusion is greater than with overloading one syntax with two subtly
different meanings.

If you want to avoid confusion, don't introduce anonymous exports in the
first place. ;)  Seriously, no matter what syntax we pick for anonymous
imports, I'm sure that any confusion that ensues will be dwarfed by the
question why an export like

  export = {a: ..., b: ..., c: ...}

cannot be imported with

  import {a, b, c} from ...

whereas it works for

  export {a: ..., b: ..., c: ...}

Would you risk a bet against this ending up among the Top 3 of ES module
WTFs? :)

Your point (b) is more interesting, at least in terms of a transition path
like you describe. But do we have any kind of evidence that such an
intermediate point on a transition path is particularly useful? And that it
will actually be relevant and/or workable for a significant number of
library implementers? Unless there is strong evidence, I'd be reluctant to
put some confusing hack into the language, eternally, that is only
potentially relevant for a limited time for a limited number of people.

I concur with Kevin's analysis that the emergence of singleton exports in
home-brewed JS module systems rather was a means than an end. Is there even
a single example out there of a language-level module system that has
something similar?

/Andreas


Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-20 Thread Andreas Rossberg
On 20 December 2012 19:39, Brendan Eich bren...@mozilla.com wrote:

 Andreas Rossberg wrote:

 More importantly, though, convention is one thing, baking it into the
 language another. I've become deeply skeptical of shoe-horning orthogonal
 concerns into one unified concept at the language level. IME, that
 approach invariably leads to baroque, kitchen sink style language
 constructs that yet scale poorly to the general use case. (The typical
 notion of a class in mainstream OO languages is a perfect example.)


 That's a good concern, but not absolute. How do you deal with the
 counterargument that, without macros, the overhead of users having to glue
 together the orthogonal concerns into a compound cliché is too high and too
 error-prone?

  One of the nicer aspects of pre-ES6 JavaScript is that it doesn't have
 too much of that sort of featurism.


 So people keep telling me. Yet I see ongoing costs from all the
 module-pattern, power-constructor-pattern, closure-pattern lack of
 learning, slow learning, mis-learning, fetishization, and bug-habitat
 surface area.


Sorry, what I wrote may have been a bit unclear. I didn't try to argue
against features in general. I agree that it is important to grow a
language where the need arises. What I argued against was the particular
approach of accumulating all sorts of ad hoc features and extensions in one
monolithic language concept.

/Andreas


Re: Proxies: wrong receiver used in default set trap

2012-12-21 Thread Andreas Rossberg
On 21 December 2012 03:00, Allen Wirfs-Brock al...@wirfs-brock.com wrote:


 On Dec 20, 2012, at 12:07 PM, Tom Van Cutsem wrote:

 I'm not sure I follow. In my understanding, the original Receiver is only
 needed for traps that involve prototype-chain walking and are thus
 |this|-sensitive. That would be just [[GetP]] and [[SetP]]. One can make
 the case (David has done so in the past) for [[HasProperty]] and
 [[Enumerate]] since they also walk the proto-chain, although it's not
 strictly necessary as the language currently does not make these operations
 |this|-sensitive.


 The proxy target delegation chain is also this-sensitive when it invokes
 internal methods.  For example, in the revised [[SetP]] step 5 it is
 important that the [[DefineOwnProperty]] calls (in 5.e.ii and, indirectly,
 in 5.f.i) are made on Receiver and not O.

 [...]
 If you step back a bit and just think about the concepts of Lieberman
 delegation and self-calls without worry about the specific of the proxies
 or the ES MOP I think you will come to see that delegated target calls
 naturally should self-call back to the original object.  That's what
 Lieberman style delegation is all about.


While I agree with your line of reasoning in principle, it seems that your
proposed change imposes substantial complications on implementations. While
simple forwarding of missing traps allows reusing existing code for
performing the respective operations (including all sorts of optimisations
and special-casing), it seems to me that a delegation semantics requires
duplicating much of the core functionality of objects to correctly deal
with the rare case where the object is a proxy target.

So far, proxies were mainly a special case that implementations could
distinguish early on, and not care about them in the rest of the logic for
a given operation (except where you had to do proto climbing). With
delegation semantics everywhere, that is no longer the case, and everything
becomes intertwined.

If a VM is no longer able to reuse existing optimisations easily for the
proxy case, my guess is that such a semantics would make direct proxies
significantly slower in practice. I, for one, would not look forward to
implementing the change, let alone optimising it. :)

That said, I normally stand on the side of a better semantics. But we
should be aware of the likely implications in this case.
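
[Editor's sketch of the self-call behaviour Allen describes, as it is visible in the shipped design: when an accessor on the target is reached through a proxy, `this` is bound to the proxy (the original receiver), not the target.]

```javascript
// With no `get` trap installed, the operation is forwarded to the
// target, but the receiver -- and hence `this` inside accessors --
// remains the proxy, i.e. the object the operation started on.
const target = {
  get self() { return this; }
};
const proxy = new Proxy(target, {});

console.log(proxy.self === proxy);  // true
console.log(proxy.self === target); // false
```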

/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 01:50, David Herman dher...@mozilla.com wrote:

 On Dec 11, 2012, at 2:45 AM, Andreas Rossberg rossb...@google.com wrote:
  The question, then, boils down to what the observation should be: a

  runtime error (aka temporal dead zone) or 'undefined'. Given that
  choice, the former is superior in almost every way, because the latter
  prevents subtle initialisation errors from being caught early, and is
  not an option for most binding forms anyway.

 You only listed good things (which I agree are good) about TDZ, but you
 don't list its drawbacks. I believe the drawbacks are insurmountable.


 Let's start with TDZ-RBA. This semantics is *totally untenable* because it
 goes against existing practice. Today, you can create a variable that
 starts out undefined and use that on purpose:


I think nobody ever proposed going for this semantics, so we can put that
aside quickly. However:


 var x;
 if (...) { x = ... }
 if (x === undefined) { ... }

 If you want to use let instead, the === if-condition will throw. You would
 instead have to write:

 let x = undefined;
 if (...) { x = ... }
 if (x === undefined) { ... }


That is not actually true, because AFAICT, let x was always understood to
be equivalent to let x = undefined.
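
[That equivalence is easy to check — editor's sketch of the eventual ES6 behaviour: an initialiser-less `let` initialises the binding to `undefined` at the declaration, so reading it afterwards does not throw.]

```javascript
// `let x;` behaves like `let x = undefined;`: once the declaration has
// executed, the binding is live and reads back as undefined.
let value;
let threw = false;
{
  let x;           // same as: let x = undefined;
  try {
    value = x;
  } catch (e) {
    threw = true;
  }
}
console.log(threw, value === undefined); // false true
```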


OK, so now let's consider TDZ-UBI. This now means that an initializer is
 different from an assignment, as you say:

  They are initialisations, not assignments. The difference, which is
  present in other popular languages as well, is somewhat important,
  especially wrt immutable bindings.

 For `const`, I agree that some form of TDZ is necessary. But `let` is the
 important, common case. Immutable bindings (`const`) should not be driving
 the design of `let`. Consistency with `var` is far more important than
 consistency with `const`.


There is not just 'let' and 'const' in ES6, but more than a handful of
declaration forms. Even if nothing else mattered, I think it would be
rather confusing if 'let' had a semantics completely different from all
the rest.

And for `let`, making initializers different from assignments breaks a
 basic assumption about hoisting. For example, it breaks the equivalence
 between

 { stmt ... let x = e; stmt' ... }

 and

 { let x; stmt ... x = e; stmt' ... }

 This is an assumption that has always existed for `var` (mutatis mutandis
 for the function scope vs block scope). You can move your declarations
 around by hand and you can write code transformation tools that move
 declarations around.


As Domenic has pointed out already, this is kind of a circular argument.
The only reason you care about this for 'var' is because 'var' is doing
this implicitly already. So programmers want to make it explicit for the
sake of clarity. TDZ, on the other hand, does not have this implicit
widening of life time, so no need to make anything explicit.

It's true that with TDZ, there is a difference between the two forms above,
but that is irrelevant, because that difference can only be observed for
erroneous programs (i.e. where the first version throws, because 'x' is
used by 'stmt').

/Andreas


Re: Changing [[Prototype]]

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 06:38, David Herman dher...@mozilla.com wrote:

 On Dec 24, 2012, at 1:48 AM, Anne van Kesteren ann...@annevk.nl wrote:
  It seems ES6 has __proto__ which also allows modifying [[Prototype]]
  so presumably this is nothing particularly bad, although it is very
  ugly :-(

 It is never safe to assume that just because something is out there on the
 web that it is nothing particularly bad... (FML)


I'm not surprised to read this, though. Putting mutable proto into the
language is far more than just regulating existing practice. It is blessing
it. That is a psychological factor that should not be underestimated. I
fully expect to see significantly more code in the future that considers it
normal to use this feature, and that no amount of evangelization can
counter the legislation precedent.

That is, if having it at all, I'd still think it much wiser to ban it to
some Appendix.
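
[Editor's note, for context on what "blessing" the feature means in practice: ES6 ultimately specified `__proto__` in Annex B — roughly along the lines suggested here — alongside a non-appendix `Object.setPrototypeOf`. A minimal illustration:]

```javascript
// Mutating the prototype after construction: legal, observable, and a
// well-known optimisation hazard in engines.
const obj = { a: 1 };
const mixin = { greet() { return 'hi'; } };

Object.setPrototypeOf(obj, mixin); // or: obj.__proto__ = mixin

console.log(obj.greet());                          // "hi"
console.log(Object.getPrototypeOf(obj) === mixin); // true
```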

/Andreas


Re: Object model reformation?

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 05:53, Brendan Eich bren...@mozilla.com wrote:

 I have a theory: hashes and lookup tables (arrays or vectors) have
 displaced most other data structures because most of the time, for most
 programs (horrible generalizations I know), you don't need ordered entries,
 or other properties that might motivate a balanced tree; or priority queue
 operations; or similar interesting data structures we all studied in school
 and used earlier in our careers.

 It's good to have these tools in the belt, and great to teach them, know
 their asymptotic complexity, etc.

 But they just are not that often needed.


Not often used =/= not often needed.

Seriously, I contest your theory. I think such observations usually suffer
from selection bias. In imperative languages, you see arrays used for
almost everything, often to horrible effect. In the functional world many
people seem to think that lists are all you need. In scripting languages
it's often hashmaps of some form. I think all are terribly wrong. Every
community seems to have its predominant collection data structure, but the
main reason it is dominant (which implies vastly overused) is not that it
is superior or more universal, but that it is given an unfair advantage via
very convenient special support in the language, and programmers would
rather shoe-horn something into it than lose the superficial notational
advantage. Languages should try harder to get away from that partisanship
and achieve egalite without baroque.

But yes, ES is probably not the place to start fixing this. :)

/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 21:08, David Herman dher...@mozilla.com wrote:

 On Dec 27, 2012, at 1:23 AM, Andreas Rossberg rossb...@google.com wrote:
  var x;
  if (...) { x = ... }
  if (x === undefined) { ... }
 
  If you want to use let instead, the === if-condition will throw. You
 would instead have to write:
 
  let x = undefined;
  if (...) { x = ... }
  if (x === undefined) { ... }
 
  That is not actually true, because AFAICT, let x was always understood
 to be equivalent to let x = undefined.

 Well that's TDZ-UBI. It *is* true for TDZ-RBA. Maybe I was the only person
 who thought that was a plausible semantics being considered, but my claim
 (P => Q) is true. Your argument is ~P. Anyway, one way or another hopefully
 everyone agrees that TDZ-RBA is a non-starter.


Even with TDZ-RBA you can have that meaning for let x (and that semantics
would be closest to 'var'). What TDZ-RBA gives you, then, is the
possibility to also assign to x _before_ the declaration.

But anyway, I think we agree that this is not a desirable semantics, so it
doesn't really matter.

 It's true that with TDZ, there is a difference between the two forms
 above, but that is irrelevant, because that difference can only be observed
 for erroneous programs (i.e. where the first version throws, because 'x' is
 used by 'stmt').

 Can you prove this? (Informally is fine, of course!) I mean, can you prove
 that it can only affect buggy programs?


Well, I think it's fairly obvious. Clearly, once the
assignment/initialization x = e has been (successfully) executed, there
is no observable difference in the remainder of the program. Before that
(including while evaluating e itself), accessing x always leads to a TDZ
exception in the first form. So the only way it cannot throw is if stmt
and e do not access x, in which case both forms are equivalent.
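For concreteness, the semantics argued for here is observable in any TDZ implementation: a read that executes before the 'let' declaration throws a ReferenceError, so only a program that throws can distinguish the two forms. A minimal runnable sketch (variable names invented):

```javascript
// Reading a let-bound variable before its declaration has executed throws,
// so only erroneous (throwing) programs can tell the two forms apart.
let result;
{
  try {
    // 'x' is already in scope here (letrec-style block scoping),
    // but it is in its temporal dead zone until the declaration runs.
    console.log(x);
    result = "no error";
  } catch (e) {
    result = e instanceof ReferenceError ? "ReferenceError" : "other";
  }
  let x = 1; // initialization point; accesses after this succeed
  result = result + ", x=" + x;
}
```

This is the behavior that engines implementing the draft semantics exhibit: the access before `let x = 1` throws, and every access after it sees the initialized value.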

/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 21:23, David Herman dher...@mozilla.com wrote:

 On Dec 27, 2012, at 1:51 AM, Andreas Rossberg rossb...@google.com wrote:

  I think hoisting can mean different things, which kind of makes this
 debate a bit confused.

 Yep. Sometimes people mean the scope extends to a region before the
 syntactic position where the declaration appears, sometimes they mean the
 scope extends to the function body, and sometimes they mean function
 declaration bindings are dynamically initialized before the containing
 function body or script begins executing.


Maybe we shouldn't speak of hoisting for anything else but the var case. As
I mentioned elsewhere, I rather like to think of it as recursive (i.e.
letrec-style) block scoping. :)


  There is var-style hoisting. Contrary to what Rick said, I don't think
 anybody can seriously defend that as an excellent feature. First, because
 it hoists over binders, but also second, because it allows access to an
 uninitialized variable without causing an error (and this being bad is
 where Dave seems to disagree).

 Are you implying that my arguments are not serious? :-(


You are not defending the first part, are you? ;)


  Then there is the other kind of hoisting that merely defines what the
 lexical scope of a declaration is. The reason we need this
 backwards-extended scope is because we do not have an explicit let-rec or
 something similar that would allow expressing mutual recursion otherwise --
 as you mention. But it does by no means imply that the uninitialized
 binding has to be (or should be) accessible.

 No, it doesn't. I'm not interested in arguments about the one true way
 of programming languages. I think both designs are perfectly defensible.
 All things being equal, I'd prefer to have my bugs caught for me. But in
 some design contexts, you might not want to incur the dynamic cost of the
 read(/write) barriers -- for example, a Scheme implementation might not be
 willing/able to perform the same kinds of optimizations that JS engines do.
 In our context, I think the feedback we're getting is that the cost is
 either negligible or optimizable, so hopefully that isn't an issue.


Right, from our implementation experience in V8 I'm confident that it isn't
an issue in almost any practically relevant case -- although we haven't
fully optimised 'let', and consequently, it currently _is_ slower, so
admittedly there is no proof yet.

But the other issue, which I worry you dismiss too casually, is that of
 precedent in the language you're evolving. We aren't designing ES1 in 1995,
 we're designing ES6 in 2012 (soon to be 2013, yikes!). People use the
 features they have available to them. Even if the vast majority of
 read-before-initialization cases are bugs, if there are some cases where
 people actually have functioning programs or idioms that will cease to
 work, they'll turn on `let`.

 So here's one example: variable declarations at the bottom. I certainly
 don't use it, but do others use it? I don't know.


Well, clearly, 'let' differs from 'var' by design, so no matter what,
you'll probably always be able to dig up some weird use cases that it does
not support. I don't know what to say to that except that if you want 'var'
in all its beauty then you know where to find it. :)

 - It binds variables without any rightward drift, unlike functional
 programming languages.
 
  I totally don't get that point. Why would a rightward drift be inherent
 to declarations in functional programming languages (which ones, anyway?).

 Scheme:

 (let ([sq (* x x)])
   (printf "sq: ~a~n" sq)
   (let ([y (/ sq 2)])
     (printf "y: ~a~n" y)))

 ML:

 let sq = x * x in
   print ("sq: " ^ (toString sq) ^ "\n");
   let y = sq / 2 in
     print ("y: " ^ (toString y) ^ "\n")


I don't feel qualified to talk for Scheme, but all Ocaml I've ever
seen (SML uses more verbose 'let' syntax anyway) formatted the above as

let sq = x * x in
 print ("sq: " ^ toString sq ^ "\n");
 let y = sq / 2 in
 print ("y: " ^ toString y ^ "\n")


Similarly, in Haskell you would write

do

   let sq = x * x
   putStr ("sq: " ++ show sq ++ "\n")
   let y = sq / 2
   putStr ("y: " ++ show y ++ "\n")


/Andreas


Re: On dropping @names

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 23:38, Andreas Rossberg rossb...@google.com wrote:

 I don't feel qualified to talk for Scheme, but all Ocaml I've ever
 seen (SML uses more verbose 'let' syntax anyway) formatted the above as

 let sq = x * x in
 print ("sq: " ^ toString sq ^ "\n");

 let y = sq / 2 in
 print ("y: " ^ toString y ^ "\n")


 Similarly, in Haskell you would write

 do

let sq = x * x
putStr ("sq: " ++ show sq ++ "\n")

let y = sq / 2
putStr ("y: " ++ show y ++ "\n")


Don't know where the empty lines in the middle of both examples are coming
from -- a weird Gmail quote-editing glitch that didn't show up in the edit
box. Assume them absent. :)

/Andreas


Re: Changing [[Prototype]]

2012-12-27 Thread Andreas Rossberg
On 27 December 2012 18:25, Brendan Eich bren...@mozilla.com wrote:

 That is, if having it at all, I'd still think it much wiser to ban it to
 some Appendix.


 What earthly good would that do?


Marketing and psychology (as I said, being important). It would send a
clear message that it is just ES adopting some bastard child because it has
to for political reasons, but with no intention of ever making it a true
bearer of its name. In other words, it isn't noble.

/Andreas


Re: Changing [[Prototype]]

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 05:38, Brendan Eich bren...@mozilla.com wrote:

 No point whinging about it in appendices that either no one reads, or else
 people read and think less of the spec on that account.


The fewer read about it the better, no? :)

Why would people think less of the spec?

I think it makes sense to separate out legacy features as normative
optional, like it was the plan originally. Then implementations can still
choose not to implement them when they can afford it, e.g. when JS is
introduced into a new space where no such legacy exists.

/Andreas


Re: On dropping @names

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 07:10, David Herman dher...@mozilla.com wrote:

 On Dec 27, 2012, at 2:13 PM, Andreas Rossberg rossb...@google.com wrote:

  It's true that with TDZ, there is a difference between the two forms
 above, but that is irrelevant, because that difference can only be observed
 for erroneous programs (i.e. where the first version throws, because 'x' is
 used by 'stmt').
 
  Can you prove this? (Informally is fine, of course!) I mean, can you
 prove that it can only affect buggy programs?
 
  Well, I think it's fairly obvious. Clearly, once the
 assignment/initialization x = e has been (successfully) executed, there
 is no observable difference in the remainder of the program. Before that
 (including while evaluating e itself), accessing x always leads to a TDZ
 exception in the first form. So the only way it cannot throw is if stmt
 and e do not access x, in which case both forms are equivalent.

 That doesn't prove that it was a *bug*. That's a question about the
 programmer's intention. In fact, I don't think you can. For example, I
 mentioned let-binding at the bottom:

 {
   console.log(x);
   let x;
 }


 If the programmer intended that to print undefined, then TDZ would break
 the program. Before you accuse me of circularity, it's *TDZ* that doesn't
 have JavaScript historical precedent on its side. *You're* the one claiming
 that programs that ran without error would always be buggy.


Hold on. First of all, that example is in neither of the two forms whose
equivalence you were asking about. Second, all I was claiming in reply is
that one of those two forms is necessarily buggy in all cases where the
equivalence does not hold. So the above is no counter example to that.
Instead, it falls into the weird use case category that I acknowledged
will always exist, unless you make 'let' _exactly_ like 'var'.

As for TDZ precedent, ES6 will have plenty of precedent of other lexical
declaration forms that uniformly have TDZ and would not allow an example
like the above. I think it will be rather difficult to make a convincing
argument that having 'let' behave completely differently from all other
lexical declarations is less harmful and confusing than behaving
differently from 'var' -- which is not a lexical declaration at all, so
does not raise the same expectations.

Here's what it comes down to. Above all, I want let to succeed. The
 absolute, #1, by-far-most-important feature of let is that it's block
 scoped.


I think introducing 'let' would actually be rather questionable if it was
(1) almost as error-prone as 'var', and at the same time, (2) had a
semantics that is inconsistent with _both_ 'var' and all other lexical
declarations (which is what you are proposing). (Not to mention
future-proofness.)

Your line of argument is that 'let' is not like 'var', and that people will
therefore probably reject it. While I understand your concern, I do not see
any evidence that TDZ specifically will tip the scales. So far, I've only
heard the opposite reaction.

Moreover, if you drive that argument to its logical conclusion then 'let'
should just be 'var'. Don't you think that you are drawing a somewhat
arbitrary line to define what you consider 'var'-like enough?

/Andreas


Re: On dropping @names

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 11:22, Brendan Eich bren...@mozilla.com wrote:

 Andreas Rossberg wrote:

 As for TDZ precedent, ES6 will have plenty of precedent of other
 lexical declaration forms that uniformly have TDZ and would not allow an
 example like the above.


 Can these plenty be enumerated? Apart from const, which ones have TDZs?


All declarations whose initialization cannot be hoisted. My understanding
is that that would be 'const', 'class' and 'private', although we have just
dropped the latter from ES6. There might potentially be additional ones in
future versions.

But actually, what I perhaps should have said is that there is no other
declaration that allows uninitialized access. That holds for all lexical
declarations.

/Andreas


Re: Changing [[Prototype]]

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 11:51, David Bruant bruan...@gmail.com wrote:

 On 28/12/2012 11:20, Brendan Eich wrote:

 David Bruant wrote:

 What about a specific section of the spec called de facto standards?
 It would indicate that it's part of the standard, but is a scar from
 history rather than a legit feature.
 An intro would explain what this is all about.
 It would be an interesting middleground between normal spec features
 (which people take for the Holy Graal) and appendices (which people will
 skip).
 __{define|lookup}{G|S}etter__ would fit well in this section.

 Those never made it into IE. Why include them? There's a bright line
 drawn by interop.

 I've seen Node.js code with it.


That's a good point, actually. I, for one, do not understand the criteria
by which we chose to include __proto__ but not __defineGetter__ and friends.

/Andreas


Re: Changing [[Prototype]]

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 13:34, Herby Vojčík he...@mailbox.sk wrote:

  Andreas Rossberg wrote:

 That's a good point, actually. I, for one, do not understand the
 criteria by which we chose to include __proto__ but not __defineGetter__
 and friends.


 __defineGetter__ and friends have a sane alternative; mutable __proto__
 hasn't?


The argument for including __proto__ has been existing practice, and AFAICS
that applies to the others no less. The alternatives to __proto__ are
Object.create and Object.getPrototypeOf, which arguably cover the sane
use cases.
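The alternatives named here can be shown in two lines (object names invented for the example): Object.create fixes the prototype once, at construction time, and Object.getPrototypeOf inspects it without mutation.

```javascript
// Prototype chosen at object birth -- no mutable __proto__ needed.
const base = { greet() { return "hi"; } };
const obj = Object.create(base);

// Inspection is side-effect free.
const proto = Object.getPrototypeOf(obj);
```

The inherited method works through the chain (`obj.greet()` returns "hi") without `greet` ever becoming an own property of `obj`.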

/Andreas


Re: `import` and aliasing bindings

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 16:20, Domenic Denicola dome...@domenicdenicola.com wrote:

  Now that I have fully digested Andreas's points from the earlier thread
 on modules [1], I am a bit concerned about the implications of `import`
 introducing aliasing bindings. To recap, the situation is:

 module foo {
  export let state = 5;
  export function modifyState() {
state = 10;
  };
 }

 import { state, modifyState } from foo;

 assert(state === 5);
 modifyState();
 assert(state === 10);

 This is, to me as an ES5 programmer, very weird. There is *no other
 situation in the language* where what an identifier refers to can change
 without me assigning to it. Properties of objects, sure. But not bare
 identifiers. (Well, I guess `with` blurs this line. And the global object.
 No comment.)

 [...]
 Finally, I can't shake the feeling I'm missing something. Why is this
 aliasing property valuable, given that it's so contradictory to
 expectations?


Your expectations must be different than mine. :)

Dave and Sam may have a different answer, but I'd answer that the aliasing
semantics follows from a module system's role as (among other things) a
name spacing mechanism. That implies two axioms:

1) You should always be able to access an export under its qualified or
unqualified name, with the same meaning. That is,

  M.a = 5;

and

  import a from M;
  a = 5;

should have the same meaning.

2) You should always be able to wrap existing code into a module without
changing its meaning. That is, given

  let a = 4;
  // ...
  a = 5;

it should be possible to refactor 'a' (plus other declarations) into a
module without changing the meaning of use sites, like:

  module M {
export let a = 4;
// ...
  }
  // ...
  import a from M;
  a = 5;

I think that the loss of either of these properties would make modules far
more surprising, and refactoring code into modules harder and more
error-prone.
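The aliasing behavior itself can be simulated today with a closure (this is only a sketch; `makeModule` and its members are invented, and a real import binding would be a bare identifier rather than a property access):

```javascript
// A closure standing in for a module: 'state' is one shared variable,
// observed and mutated through what the module exposes -- analogous to a
// live import binding aliasing the exporting module's variable.
function makeModule() {
  let state = 5;
  return {
    get state() { return state; },   // stand-in for the aliased import
    modifyState() { state = 10; },
  };
}

const M = makeModule();
```

After `M.modifyState()`, `M.state` observes the new value even though the caller never assigned to it -- exactly the behavior Domenic found surprising.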

I wouldn't worry too much about current source-to-source compilers for ES6
not getting this right yet. That is mainly due to the lack of a detailed
specification. I could enumerate quite a few other fundamental things that
they don't have right yet (e.g. recursive linking, static checking, etc.).

However, I agree that the destructuring syntax for module imports may not
be the clearest choice in terms of raising the right expectations. (As we
have discovered already, it has other issues, namely people confusing the
direction of renaming.) Maybe it is a bit too clever, and considering
alternatives would be a worthwhile idea.

/Andreas


Re: `import` and aliasing bindings

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 17:54, David Herman dher...@mozilla.com wrote:

 Another one is that I've been thinking we should add getter/setter exports
 to make it possible to create lazily initialized exports:


We haven't had the opportunity to discuss that one, but now that you
mention it, I should say that I actually think exports as accessors are a
no-go. Because with that, an innocent plain variable occurrence can
suddenly be an expression with arbitrary side effects, resurrecting one of
the worst features of 'with'. Please don't! If you need lazy
initialization, export a function.
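The "export a function" alternative can be sketched as follows (module syntax omitted so the snippet is self-contained; all names are invented). The call site makes the potential side effect explicit, which an accessor export would hide:

```javascript
// Lazily computed value behind a function; in a module this would be an
// exported function rather than an exported accessor.
let computations = 0;
let cache;

function computeExpensive() {
  computations++;       // track how often the real work happens
  return 42;
}

function getValue() {
  if (cache === undefined) cache = computeExpensive();
  return cache;
}
```

Callers write `getValue()` instead of a bare identifier, so reading the export can never be an invisible side effect, and the expensive computation still runs at most once.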

/Andreas


Re: Changing [[Prototype]]

2012-12-28 Thread Andreas Rossberg
On 28 December 2012 19:55, Mark S. Miller erig...@google.com wrote:

 That is exactly the issue. As long as it was not expected in IE, it
 could not be assumed by the cross-browser web. However, mobile changed
 MS's tradeoffs. Mobile is currently a separate enough ecosystem, with
 IE a sufficiently minor player, that some cross-mobile-platform code
 assumes mutable __proto__. Consider it a loss of genetic diversity on
 the part of a herd that gets temporarily separated from the rest of
 its species. As a result, MS is considering adding mutable __proto__
 to future IE. At that point, it would become a standard, at least de
 facto. In that case, we're all better off codifying a semantics and
 having this standard be de jure.


All understood, but what's the difference to __defineGetter__?

/Andreas


Re: barrier dimension, default dimension

2012-12-29 Thread Andreas Rossberg
On 28 December 2012 20:30, David Herman dher...@mozilla.com wrote:

 Andreas, can you explain why you dismiss TDZ-RBA-UNDEF as a viable option? 
 The bug that motivates all the arguments you've made is 
 read-before-initialization, not write-before-initialization.

I agree that that would be a less error-prone semantics, but other
arguments still apply. IMO it's inferior to TDZ-UBI-UNDEF (the
current draft semantics) for three reasons:

1. Complexity/consistency
2. Readability
3. Future-proofness

Regarding (1), consider how to formulate the rules for unhoisted
bindings. Informally, TDZ-UBI-UNDEF says:

* Accessing a variable before its declaration has been executed is an
error. Furthermore, 'let x' is shorthand for 'let x = undefined'.

The corresponding text for TDZ-RBA-UNDEF:

* For immutable bindings, accessing a variable before its declaration
has been executed is an error. For mutable bindings, read-accessing a
variable before an assignment to it has been executed is an error.
Furthermore, 'let x = e' is shorthand for 'let x; x = e'. A
let-declaration without a r.h.s. is a conditional assignment of
'undefined' that is performed if and only if no other assignment to
the declared variable has been performed before.

This is clearly less consistent and more complicated, for two reasons.
First, the definition has to be different for mutable and immutable
bindings (there is no such thing as an assignment to an immutable
binding). But even ignoring immutable bindings altogether, the
semantics for mutable ones alone are more complicated because of the
runtime case distinction you need to make for the conditional
initialization.
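The TDZ-UBI-UNDEF rule is directly checkable in an implementation of the draft semantics (variable names invented for the demo): any access before the declaration throws, and 'let x;' leaves x initialized to undefined.

```javascript
// Under TDZ-UBI: access before the declaration throws; 'let x;' behaves
// like 'let x = undefined;' once the declaration has executed.
let earlyRead, valueAfter;
{
  try { void x; earlyRead = "ok"; }
  catch (e) { earlyRead = "ReferenceError"; }
  let x;                        // same as: let x = undefined;
  valueAfter = (x === undefined);
}
```

Note there is no runtime case distinction here: the declaration always initializes, which is exactly the simplicity argument made above.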

Regarding (2), let me repeat my mantra: reading is 10x more important
than writing. Now consider reading a piece of code like this:

  {
    // lots of stuff
    let x;
    // more stuff
    print(x);
  }

With the current rule, all you need to read to understand what is
printed for 'x' is the code between its declaration and its use. Any
code before the declaration cannot possibly matter, which arguably is
what one would expect intuitively (and what's the case in every other
comparable language with proper lexical scoping). Not so with the
TDZ-RBA-UNDEF rule, where understanding whether the variable is
assigned, and how, generally requires reading _all_ code in that block
up to its use.

Regarding (3), that has been argued before often enough, so I won't
repeat it here. Just let me note that future-proofness is not about
crossing a bridge early, as you seemed to suggest elsewhere, it's
about making sure that you haven't already burnt that bridge once you
get there.

/Andreas


Re: On dropping @names

2012-12-29 Thread Andreas Rossberg
On 28 December 2012 20:53, David Herman dher...@mozilla.com wrote:
 On Dec 28, 2012, at 11:47 AM, Andreas Rossberg rossb...@google.com wrote:
 That seems clean, useful, consistent, and fairly easy to understand. 
 Introducing extra rules for 'let'? Not so much.

 But TDZ does introduce extra rules! Especially with disallowing assignment 
 temporally before initialization.

I have to disagree, see my other reply.

/Andreas


Re: excluding features from sloppy mode

2012-12-29 Thread Andreas Rossberg
I haven't replied to this thread yet, because I feel that I already
made all the same arguments repeatedly to no avail. ;)  However, let
me reiterate one particular observation, which is that IMHO much of
the discussion (and decision making) around 1JS, modes, and opt-ins is
just mistargeted.

Namely, it is primarily based on the expectations and needs of
_current_ users. Users that are aware of what's ES3 or 5 and who are
about to investigate what's new in ES6. To those users, design choices
like making new constructs opt into strict mode by default will not
seem a big deal, even natural.

But that group will be irrelevant after a relatively short time of transition!

ES6+ will stay much longer (at least that's what we are working for).
Consequently, what should take precedence are the expectations and
needs of _future_ users of ES. Those who will come to ES6+ without
knowing nor caring about the colorful history of its earlier versions.
For them, having various features locally change the semantics of
unrelated constructs will be surprising at best. It means having to
remember a seemingly random set of rules for what semantics is active
where.

The more such rules there are, and the more fine-grained they are, the
less readable code becomes, and the more error-prone programming and,
particularly, refactoring will be -- not just for the current
generation of ES programmers, but for all generations to come. IMHO,
that is the wrong trade-off entirely.

/Andreas


Re: excluding features from sloppy mode

2012-12-29 Thread Andreas Rossberg
On 29 December 2012 14:51, Axel Rauschmayer a...@rauschma.de wrote:
 I’m sympathetic to both sides of this argument. How would you handle things?

Ideally? Backing out of the whole 1JS marketing maneuver? In the long
run, I see it as more harmful than helpful, as it inevitably leads to
complexity creep, subtle mode mixtures, and refactoring hazards that
are there to stay for eternity. Instead, just make all new features
strict-mode only and be done with it.

But I've accepted that I am in the minority with that opinion, and
it's too late anyway. Short of that, at least hold the line with
modules as the only implicit opt-in. But in reality I'm pretty sure
that we will give in to extending the list at some point, if not in
ES6 then in ES7.

/Andreas


Re: excluding features from sloppy mode

2012-12-30 Thread Andreas Rossberg
On 29 December 2012 22:06, Brendan Eich bren...@mozilla.com wrote:
 Andreas Rossberg wrote:
 ES6+ will stay much longer (at least that's what we are working for).
 Consequently, what should take precedence are the expectations and
 needs of _future_ users of ES. Those who will come to ES6+ without
 knowing nor caring about the colorful history of its earlier versions.
 For them, having various features locally change the semantics of
 unrelated constructs

 Whoa.

 Who ever proposed that? It seems a misunderstanding. No one is saying that,
 e.g., destructuring formal parameters, or a rest parameter, should flip the
 containing function into strict mode. Banning duplicate formals in no wise
 does that.

We are discussing it for classes right now, and it has been on the
table for other features (such as arrows or generators) several times,
if my memory serves me right.

/Andreas


Re: excluding features from sloppy mode

2012-12-30 Thread Andreas Rossberg
On 29 December 2012 22:11, Brendan Eich bren...@mozilla.com wrote:
 Andreas Rossberg wrote:

 On 29 December 2012 14:51, Axel Rauschmayera...@rauschma.de  wrote:

 I’m sympathetic to both sides of this argument. How would you handle
 things?


 Ideally? Backing out of the whole 1JS marketing maneuver?

 It's not just marketing surface, but lack of version= substance. How do you
 propose backing out of that? Defining RFC4329 application/javascript and
 application/ecmascript ;version= parameter values and telling people to
 write script tags using those types?

As I said: make everything available in strict mode.

   In the long
 run, I see it as more harmful than helpful, as it inevitably leads to
 complexity creep, subtle mode mixtures,

 Note the V8 team (via MarkM) rightly prefigured 1JS by asking for no more
 modes several years ago. Now you want explicit modes? The world turns...

I'm sure no one had the current state of affairs in mind. My argument
is (and has been a year ago), that factually, 1JS means _more_ modes
(which is why I called the 1JS tag marketing). And that _is_
user-facing complexity.

I wish I had made a list, but IIRC over the last few meetings we have
spent a significant and increasing amount of time discussing problems
with new features and sloppy mode interaction. You can brush that off
as being only spec or implementation complexity. But I disagree. Any
spec complexity will also be user-facing complexity at some point,
e.g. if your program does not work as expected.


 Let's be clear about the refactoring hazards. They do not involve early
 errors. So the only issues are the runtime semantic changes:

 * The arguments object in a strict function not aliasing formal parameters.

 * Poison pill properties on strict function objects.

 * Anything else I'm forgetting.

 Is this really that bad in the way of refactoring hazards? Anyone
 refactoring from old to ES6 or later code should get rid of arguments.

I'm surprised you're asking this, because you have pointed out
repeatedly in previous meetings that there are serious hazards. ;)

 There's a case for class bodies as implicitly strict, you can't dismiss it
 with generalities about refactoring hazards in my book :-P. Care to deal
 with the specific pro-strict-class argument?

It's complexity creep. I don't think we will stop there. Boiling the frog.

[Sorry for being brief, but I'm off to a 2-week trip in about 2 hours
and haven't packed yet :). I'll be off-line during that time, btw, so
unfortunately won't be able to follow the discussion further.]

/Andreas


Re: excluding features from sloppy mode

2012-12-30 Thread Andreas Rossberg
On 30 December 2012 02:31, Mark S. Miller erig...@google.com wrote:

 If duplicate formals are the only such case, then I agree that the
 fear of micro-mode is a non-issue. Do we have an accurate record of
 the scoping of default value expressions? How about the interaction of
 head scope and top body scope? I recall there were problems here, but
 I'd need to review our decisions to see if they smell of more
 micro-modes.

Yes, there were problems with duplicate parameters vs defaults. There
also is the sloppy-mode-arguments-object vs destructuring issue. And
'let' syntax. These are just some of the things that came up at the
last meeting alone.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: excluding features from sloppy mode

2012-12-30 Thread Andreas Rossberg
On 30 December 2012 11:58, Brendan Eich bren...@mozilla.com wrote:
 Two separate things:

 1. All new syntax with code bodies makes strict-by-fiat code bodies.

 2. New parameter forms restrict duplicate parameters.

 Neither entails various features locally chang[ing] the semantics of
 unrelated constructs-- unless by local you mean the new syntax's head and
 unrelated the new syntax's body!

I did. It boils down to: I have this construct used locally here. For
how many different syntactic constructs do I have to look out for in
the context to determine what it means?

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: excluding features from sloppy mode

2012-12-30 Thread Andreas Rossberg
On 30 December 2012 12:50, Axel Rauschmayer a...@rauschma.de wrote:
 It would actually be nice to have that as a feature: If the variable name is
 `_` then it can be used multiple times. It’s a nice, self-descriptive way of
 saying that you don’t care about a parameter value.

That underscore wildcard is the exact syntax used in functional
languages, and very useful, I agree. In JS, that syntax would be a
breaking change, unfortunately. But we could use something else (e.g.
I proposed '.' in the past).
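A quick sketch of why `_` would be a breaking change: it is already an ordinary identifier in today's JS (Underscore.js famously binds its whole API to it), so a wildcard reading of repeated `_` parameters would change the meaning of existing code. The bindings below are hypothetical illustrations, not anyone's proposed semantics:

```javascript
// `_` is a real, referenceable binding today, not a throwaway marker.
var _ = function (x) { return x + 1; };   // e.g. a utility library bound to `_`
function applyTo(_, v) { return _(v); }   // `_` is a genuine, used parameter
var result = applyTo(_, 41);              // 42 -- a wildcard `_` would break this
```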

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: fail-fast object destructuring (don't add more slop to sloppy mode)

2013-01-16 Thread Andreas Rossberg
 On Jan 2, 2013, at 7:58 PM, Brendan Eich wrote:

 I think we can do this now. Allen should weigh in. Hope to hear from Andreas 
 R. soon too!

 Apologies for the long thread, and thanks to Herby for interaction that 
 clarified many things. Perhaps I should resummarize:

 The best new-new plan to avoid adding slop is to revise ES6 destructuring 
 thus:

 1. No ToObject(RHS).
 2. Exception on missing property selected without a ?-suffix.
 3. ?-suffix allowed on any pattern, imputing undefined deeply instead of 
 refuting.
 4: the ? is a separate lexeme from the : in long-hand patterns.

 How's that?

[Sorry to be late, catching up after a two-week off-line vacation. :) ]

Thanks Brendan for reviving the discussion. That plan mostly matches
what I was arguing for last time round (including the necessity to
allow ? on every pattern), so me likes. I still see some issues with
making ? postfix (readability, parsing), but that's a comparatively
minor point.


On 4 January 2013 01:33, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
 I'm fine with points 2-4.  However, I think no ToObject(RHS) would be a 
 mistake.  Here's why:

 In almost all other situations where an object is needed, a primitive value 
 (including a literal) can be used.  This includes contexts that use the dot 
 and [ ] property access operators. Essentially, in all object appropriate 
 situations primitive values act as if they were objects. This is important in 
 that in most cases it allows ES programmers to ignore distinctions between 
 objects and primitive values.

 Destructuring is frequently described as simply a de-sugaring over property 
 access in assignments and declarations.

 let {length: len} = obj;

 is most easily explained by saying that it is equivalent to:

 let len = obj.length;

 But if the ToObject is eliminated from the RHS then this desugaring 
 equivalence is no longer valid in all cases.  The obj.length form would work 
 fine if the value of obj was a string but the destructuring form will throw.  
 This breaks the general ES rule that you can use a primitive value in any 
 context where an object is required.  It is the sort of contextual special 
 case that developers hate and which makes a language harder to learn. 
 Consistency is important.

 Finally, note that now with exceptions on missing properties (without ?) it 
 is likely that most situations where a primitive value is erroneously used on 
 the RHS will throw anyway simply because the primitive wrapper probably won't 
 have the requested property. So, removing the ToObject just creates 
 inconsistency without adding much in the way of error detection.
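Allen's desugaring equivalence can be checked concretely; note that the destructuring semantics that eventually shipped in ES6 kept the ToObject conversion, so this sketch runs in today's engines:

```javascript
// With ToObject on the RHS, destructuring a primitive behaves exactly like
// the dotted desugaring: the string is temporarily wrapped in an object.
var obj = "hello";
var len1 = obj.length;           // 5, via implicit wrapping
var { length: len2 } = obj;      // 5 as well under ToObject semantics
// Only null and undefined are refused:
var threw = false;
try { var { x } = null; } catch (e) { threw = true; }  // TypeError
```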

All good points, and I think it makes sense to separate the discussion
of implicit conversion from refutability itself.

I think your argument presupposes an assumption that only happens to
hold for the pattern language currently proposed, but is unlikely to
remain so in the future: namely, that all patterns describe objects.
In particular, a pattern matching construct wants to allow, say,
strings, null, true and others as patterns, and surely you do _not_
want ToObject in those cases.

One defensible position might be to only invoke ToObject when actually
matching against an object/array pattern. But to be consistent, you'd
then have to do similar conversions for other patterns, too, e.g.
invoking ToString when matching against a string. Unfortunately, that
would make using a future pattern matching construct correctly much
harder and more tedious. For example, writing

  match (e) {
case true: ...
case null: ...
case {x, y}: ...
  }

will take the first case for all objects, although that is unlikely to
be the intention, here or elsewhere. Similarly,

  match (e) {
case {}: ...
case
  }

runs into the first case for almost anything when the more natural
expectation may be only actual objects.
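The vacuousness of an implicit conversion is easy to see with today's `Object()` function, which performs ToObject for any value other than null/undefined (a sketch; the match construct above is strawman syntax and not runnable):

```javascript
// Every primitive except null/undefined converts to a wrapper object, so an
// "is it an object?" check applied after ToObject accepts almost anything.
var accepted = [true, "s", 42, {}].every(function (v) {
  return typeof Object(v) === "object";
});  // true
```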

So I believe that in the context of patterns, implicit conversions
violate the principle of least surprise and are future-hostile (in
terms of usability) towards a more general pattern matching facility.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: excluding features from sloppy mode

2013-01-16 Thread Andreas Rossberg
On 1 January 2013 07:09, Mark Miller erig...@gmail.com wrote:
 On Mon, Dec 31, 2012 at 9:12 PM, Brendan Eich bren...@mozilla.com wrote:

 Mark S. Miller wrote:
 I'm pretty happy with Kevin's compromise. Here it is again:

 (1) No opt-in required for new syntax, except:
 (2) No breaking changes to sloppy mode, and
 (3) No grammar contortions (e.g. let) to support sloppy mode.  And
 (4) All new syntax forms with code bodies are implicit strict.

 What do you say?

 My preference order:

 1)
 1.a) To the extent clean and practical, new features are available only in
 strict mode,
 1.b) Lexical f-i-b is available in sloppy mode as it is in ES6 strict, since
 no browser will prohibit f-i-b syntax in sloppy mode. Better to have the
 f-i-b sloppy semantics be aligned with the ES6 f-i-b strict semantics.
 1.c) modules (both inline and out) implicitly opt-in to strict mode.
 1.d) classes implicitly opt-in to strict mode.
 1.e) nothing else causes an implicit strict mode opt-in.

 2) Like #1 but without #1.d (which I think of as Andreas' position)

Yes, although I'd even consider removing 1.c inline (matching your
option 6 below).

But what do you mean by to the extent clean and practical? In my
humble opinion, only two options are really acceptable at all: either
_all_ ES6 features work only in strict mode (my preference), or _all_
ES6 features work in both modes (how I interpret 1JS). Something
in-between, i.e., deciding inclusion into sloppy mode on a by-feature
basis, is a non-starter in terms of usability and observable
complexity. That is, rather (5) than (4) below.

 3) Like #1, but #1.e is replaced with
 3.e) All code bodies within new function syntax is implicitly strict.

I'd be strongly opposed to this (and Kevin's point (4) in general).

 4) Like #3, but #1.a is replaced with
 4.a) To the extent clean and practical, new features are available in sloppy
 mode.
 I take it this is essentially your position and Kevin's compromise position?

 5) Where things stood at the end of the last TC39 meeting, where we were
 violating the clean of #4.a to kludge things like let,
 non-duplicated-formals-sometimes, no-arguments-sometimes, weird scoping for
 default argument expressions, etc, into sloppy mode.

 6) Like #2 but without #1.c. Is this essentially Kevin's pre-compromise
 position?
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: excluding features from sloppy mode

2013-01-17 Thread Andreas Rossberg
On 16 January 2013 19:33, Brandon Benvie bran...@brandonbenvie.com wrote:
 Without using modules as the indicator, how do you know whether code is
 intended to be run as ES6 or not? Do let and const count as ES6
 (retroactively applying to code using the old non-standard versions, which
 are still currently supported by V8 and Spidermonkey)? Does it apply to code
 that appears to use Map, WeakMap, and Set (though the code might well refer
 to shimmed versions of these and not otherwise expect to run as strict)?

Fair point that I should clarify: when I said ES6 features I really
meant ES6 language constructs. Libraries are fine, of course.

So yes, 'let' and 'const' count as ES6. That doesn't keep
implementations from providing them as sloppy mode language extensions
as they do now, for their own backwards compatibility. There would be
no reason to use those in new code, though.

One particular advantage of this is that we don't have to break the
web for things like 'const' and 'function' in blocks. Existing
implementations of those features are a horrible, inconsistent mess,
but one that is dangerous to touch. Only cleaning it up in strict mode
where we can safely do so (and remaining oblivious in sloppy mode) is
likely to cause far fewer problems.

 While there are many things that will absolutely indicate intention to run
 as ES6, there's a number of examples of ambiguity that make me doubt how
 successful an absolute judgment can be. This is why I think giving modules a
 double use as implicit opt-in/pragma has merit.

How does making certain constructs opt in implicitly resolve any of
the ambiguities you mentioned?

/Andreas


 On Wednesday, January 16, 2013, Andreas Rossberg wrote:

 On 1 January 2013 07:09, Mark Miller erig...@gmail.com wrote:
  On Mon, Dec 31, 2012 at 9:12 PM, Brendan Eich bren...@mozilla.com
  wrote:
 
  Mark S. Miller wrote:
  I'm pretty happy with Kevin's compromise. Here it is again:
 
  (1) No opt-in required for new syntax, except:
  (2) No breaking changes to sloppy mode, and
  (3) No grammar contortions (e.g. let) to support sloppy mode.  And
  (4) All new syntax forms with code bodies are implicit strict.
 
  What do you say?
 
  My preference order:
 
  1)
  1.a) To the extent clean and practical, new features are available only
  in
  strict mode,
  1.b) Lexical f-i-b is available in sloppy mode as it is in ES6 strict,
  since
  no browser will prohibit f-i-b syntax in sloppy mode. Better to have the
  f-i-b sloppy semantics be aligned with the ES6 f-i-b strict semantics.
  1.c) modules (both inline and out) implicitly opt-in to strict mode.
  1.d) classes implicitly opt-in to strict mode.
  1.e) nothing else causes an implicit strict mode opt-in.
 
  2) Like #1 but without #1.d (which I think of as Andreas' position)

 Yes, although I'd even consider removing 1.c inline (matching your
 option 6 below).

 But what do you mean by to the extent clean and practical? In my
 humble opinion, only two options are really acceptable at all: either
 _all_ ES6 features work only in strict mode (my preference), or _all_
 ES6 features work in both modes (how I interpret 1JS). Something
 in-between, i.e., deciding inclusion into sloppy mode on a by-feature
 basis, is a non-starter in terms of usability and observable
 complexity. That is, rather (5) than (4) below.

  3) Like #1, but #1.e is replaced with
  3.e) All code bodies within new function syntax is implicitly strict.

 I'd be strongly opposed to this (and Kevin's point (4) in general).

  4) Like #3, but #1.a is replaced with
  4.a) To the extent clean and practical, new features are available in
  sloppy
  mode.
  I take it this is essentially your position and Kevin's compromise
  position?
 
  5) Where things stood at the end of the last TC39 meeting, where we were
  violating the clean of #4.a to kludge things like let,
  non-duplicated-formals-sometimes, no-arguments-sometimes, weird scoping
  for
  default argument expressions, etc, into sloppy mode.
 
  6) Like #2 but without #1.c. Is this essentially Kevin's pre-compromise
  position?
 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Private Slots

2013-01-17 Thread Andreas Rossberg
On 15 January 2013 17:16, Kevin Smith khs4...@gmail.com wrote:

 It's really none of your business when you try to freeze my object whether
 any of

 (a) pre-existing private-symbol-named properties remain writable;
 (b) weakmap-encoded private state remains writable;
 (c) objects-as-closures environment variables remain writable.

 Really. Not. Your. Business!


 But that's a change from the current object model and by (a) you're assuming
 the conclusion.  ES has a really simple object model, as explained in the
 first part of the ES5 specification.  That simplicity is an advantage.  If
 you're going add complexity, then it should be justified with
 application-focused use cases.  That request does not seem like a stretch to
 me.

Just to throw in one more opinion, I sympathise with Kevin to some
extent. Despite Sam's argument, I think there is considerable
complexity imposed by private names, and it has been increasingly
unclear to me that it is really warranted at this point. It might be
worth reconsidering and/or postponing, and Kevin made a few good
arguments to that end later down the thread.

(However, I don't follow your description of the ES5 object model
being really simple. With sufficient squinting, you may call the ES3
model (relatively) simple, but ES5 certainly put an end to that. Now,
ES6 is adding several whole new dimensions of complexity, all of which
dwarf private symbols. Let alone a potential Object.observe in ES7. In
fact, there are very few languages that have an object model more
complicated than that. ;) )

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: direct_proxies problem

2013-01-17 Thread Andreas Rossberg
On 8 January 2013 22:33, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Jan 8, 2013 at 1:30 PM, Andrea Giammarchi
 andrea.giammar...@gmail.com wrote:
 so you are saying that Object.observe() does not suffer these problems ? Or
 is just much simpler than Proxies ? ( yeah, I know, I guess both ... )

 I believe it just wouldn't suffer the same problems - it needs to
 observe the JS-visible stuff, which DOM objects expose normally, so it
 can just hook those.  Alternately, it can be specialized to handle DOM
 stuff correctly even with their craziness.  I'm not familiar with the
 implementations of it.

Object.observe isn't simpler than proxies, but the complexity is along
somewhat different axes.

In any case, WebIDL actually specs attributes as accessor properties,
which means that Object.observe simply ignores them. So there isn't
much interference between Object.observe and the DOM.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Security Demands Simplicity (was: Private Slots)

2013-01-17 Thread Andreas Rossberg
On 17 January 2013 18:00, Mark S. Miller erig...@google.com wrote:
 I still have this position on classes. But I no longer buy that
 pessimistic conclusion about WeakMaps. Consider how WeakMaps would be
 used by the expansion of classes-with-private. Just 'cause it's on the
 top of my head, below I use the old representation of one WeakMap per
 class providing access to a record of all the private state. For the
 same reason, I'll use the encapsulation of the Purse example without
 any of the numeric checks.

 class Purse {
 constructor(private balance) {
 getBalance() { return balance; }
 makePurse() { return Purse(0); }
 deposit(amount, srcPurse) {
 private(srcPurse).balance -= amount;
 balance += amount;
 }
 }

Hm, I'm afraid I don't fully understand that example. There seems to
be a missing closing brace for the constructor, and I don't know what
the free occurrences of 'balance' are referring to. Also, the second
line of the deposit function seems to be missing in the expansion.


 expansion

 let Purse = (function() {
 let amp = WeakMap();
 function Purse(balance) {
 amp.set(this, Object.seal({
 get balance() { return balance; },
 set balance(newBalance) { balance = newBalance; }
 }));
 }
 Purse.prototype = {
 getBalance: function() { return balance; },
 makePurse: function() { return Purse(0); },
 deposit: function(amount, srcPurse) {
 amp.get(srcPurse).balance -= amount;
 }
 }
 return Purse;
 })();
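Since the thread leaves the example in a broken state, here is a repaired, runnable sketch of the WeakMap-per-class encoding of private state; the restored method bodies and the second line of deposit are my reconstruction of the intent, not Mark's exact code:

```javascript
var Purse = (function () {
  var amp = new WeakMap();  // per-class map from each instance to its private record
  function Purse(balance) {
    if (!(this instanceof Purse)) return new Purse(balance);
    amp.set(this, { balance: balance });
  }
  Purse.prototype = {
    getBalance: function () { return amp.get(this).balance; },
    makePurse: function () { return new Purse(0); },
    deposit: function (amount, srcPurse) {
      amp.get(srcPurse).balance -= amount;  // only code closing over amp has access
      amp.get(this).balance += amount;      // the line missing from the expansion
    }
  };
  return Purse;
})();

var a = new Purse(100);
var b = a.makePurse();
b.deposit(40, a);
// a.getBalance() === 60, b.getBalance() === 40
```

Because `amp` is confined to the closure, no client of a purse can reach another purse's balance, which is the ocap property the example is meant to demonstrate.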
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Private symbols as WeakMap sugar

2013-01-18 Thread Andreas Rossberg
On 17 January 2013 21:08, Brendan Eich bren...@mozilla.com wrote:
 Andreas Rossberg wrote:
 Actually, I don't see why this should have a measurable impact on
 performance in practice. The generic case is dog-slow for JavaScript
 anyway, what matters is how easy it is to specialise for the types
 actually seen at runtime. And there, this would just add yet another
 row to the (already complex) matrix of cases for receiver/index type
 pairs that you optimize for. The same might actually be true for
 symbols, depending on the implementation strategy.

 Probably I'm more sensitive to the generic case, which while dog slow
 still dogs some real-world critical paths (if not fake/lame/old benchmarks).

 It all costs, David is proposing yet another cost. Maybe that's my final
 answer :-|.

I don't know enough about the internals of other VMs, but at least in
V8, the generic case will jump into the C++ runtime (costly) and
potentially trickle through hundreds of lines of logic. I think you
will have a very hard time constructing even a highly artificial
benchmark with which one additional conditional in that logic would be
measurable.

Obviously, that doesn't imply that I consider it a good idea... ;)

 Like any new type or representation, it may cause deoptimization and
 increased polymorphism, but that's nothing new under the sun, and we
 are adding plenty of that with ES6.

 In contrast, private symbols (any symbols, public/unique or private) should
 benefit from existing property-in-receiver optimizations. Right?

Perhaps, perhaps not. For V8, we haven't really thought hard yet about
how they fit into the existing representation. They may still end up
requiring a case distinction somewhere.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Security Demands Simplicity (was: Private Slots)

2013-01-21 Thread Andreas Rossberg
Amen.

/Andreas

On 21 Jan 2013 17:23, Tom Van Cutsem tomvc...@gmail.com wrote:

 2013/1/20 Allen Wirfs-Brock al...@wirfs-brock.com

 I don't have a problem at all with making the proxy story more
complicated.  Proxys are an expert feature designed for some specific use
cases.  they are probably an attractive nuisance.  I would advise most JS
programmer that if they are going down the road of using a Proxy, they are
probably making a mistake.  In that light,  placing the extra complexity
within the Proxy story seems just fine.


 While in the specific case of proxies  private symbols, I think it is
fine if proxies take an extra complexity hit, I'd like to push back a
little here regarding the more general point.

 People often complain that proxies are complicated. Well here's the deal:
proxies are only as complicated as the ES object model requires them to be.
The more features we add to the ES object model, the more complexity we
face in proxies. It's awkward to blame proxies for that extra complexity.
Proxies are just a (power-)tool that aim to let the ES programmer emulate
all aspects of ES objects in Javascript itself. If Javascript objects
become more complex, they obviously also become more complex to emulate.

 It also gives a false sense of simplicity to think that pushing off extra
complexity into proxies actually sweeps all the complexity under the rug
for the non-expert programmers. Extra complexity in proxies also implies
extra complexity for any other kind of exotic/host object. Point-in-case:
how will host objects interact with private symbols? It isn't entirely
obvious. Does a WindowProxy forward them, or does each WindowProxy have its
own set? These issues reappear outside of Proxies proper.

 Cheers,
 Tom

 ___
 es-discuss mailing list
 es-discuss@mozilla.org
 https://mail.mozilla.org/listinfo/es-discuss

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: WeakMap GC performance (was: WeakMap.prototype.clear performance)

2013-01-23 Thread Andreas Rossberg
[Meta]

David, I would appreciate if you stopped breaking discussion threads
all the time. There are now about half a dozen threads related to
WeakMap clear, which clutters the discussion view and makes it hard to
properly follow the discussion with delay.

Thanks,
/Andreas


On 23 January 2013 10:49, David Bruant bruan...@gmail.com wrote:
 [reordering]
 Allen wrote:

 We can understand the value of providing a clear method without talking
 about GC at all.

 I don't doubt there is a case for clearing a data structure, but it can be
 fulfilled with clear-less weakmaps. What I'm trying to find is a differentiating
 factor. I agree that:
 * clearable and clear-less weakmaps both have a use. Which is dominant for
 developers has yet to be determined and only tastes and feelings have been
 provided so far (including by myself).
 * clearable weakmaps and clear-less weakmaps can be symmetrically and at
 close to no cost implemented on top of one another.

 Until evidence (from other languages?) is provided that one case matters
 more, I personally call this a tie. That's where my reflection is at.

 I think a major remaining point is performance. If clear-less weakmaps
 induce an incompressible significant GC cost, then, that is a valid
 justification to have native .clear.
 Now, implementors will have to deal with programs where some long-lived
 weakmaps aren't manually cleared, the interesting question here is: how far
 can they go to reduce the GC cost (without requiring a major breakthrough in
 GC research of course ;-) )?
 If the cost can be reduced to a marginal difference with manual .clear, I
 call the performance argument a tie too (leaving the debate to a
 taste/feeling debate)


 Le 23/01/2013 00:36, Allen Wirfs-Brock a écrit :

 On Jan 22, 2013, at 2:35 PM, David Bruant wrote:

 So, to find out if a weakmap is dead, it has to come from another source
 than the mark-and-sweep algorithm (since it loses its precision)...
 Given the additional prohibitive cost weakmaps seem to have on the GC,
 maybe things that would otherwise be considered too costly could make sense
 to be applied specifically to WeakMaps. For instance, would the cost of
 reference-counting only weakmaps be worth the benefit from knowing early
 that the weakmap is dead? (I have no idea how much each costs, so it's hard
 for me to compare the costs)
 For WeakMapWithClear, reference counting would declare the weakmap dead
 as soon as the new weakmap is assigned to the private property so that's
 good. It wouldn't work if some weakmaps are part of a cycle of course... but
 maybe it's such an edge case that it's acceptable to ask users doing
 that to break their weakmap cycles manually if they don't want the GC to
 be too mad at them.

 You know, as much as Jason and I enjoy talking about garbage collectors,
 this probably isn't the place to revisit the last 40 years of a highly
 developed area of specialized CS technology.

 Even if there is a .clear method, it doesn't mean people will use it, so the
 costs weakmaps induce on GC will have to be taken care of even if people
 don't manually clear the weakmap [forking the thread for this reason]. JS
 engine implementors will have to solve this problem regardless of the
 introduction of a .clear method or not. Since JS engines start having
 generational GC and WeakMaps, I feel here and now might be a very good place
 and time to revisit these 40 years. Each implementor will have to do this
 revisit anyway.
 If anything, this thread may become a good resource for developers to
 understand why some of their programs using WeakMaps have situationally or
 inherently bad GC characteristics.

 Of all points in this thread, the one that got stuck in my head is when
 Jason said: In our current implementation, creating a new WeakMap and
 dropping the old one is very nearly equivalent in performance to clear().
 What this means is that something is lost when moving to a naive
 generational GC regarding WeakMaps. The loss is the knowledge of when
 exactly a weakmap is dead. And this loss has a cost related to weakmap GC
 cost. Although Mark showed a linear algorithm, one can still wonder if in
 practice this algorithm induces a significant cost (the worst-case complexity
 doesn't say much about the most-frequent-case cost of an algorithm).

 What I'm trying to find out is whether there is a small-cost
 weakmap-specific tracking system that could tell the GC that a weakmap is
 dead as soon as possible. First and foremost, what did the research find in
 these 40 years on this specific question?
 Did it prove that any tracking system doing what I describe would cost so
 much that it wouldn't save on what it's supposed to? If so, I'll be happy to
 read the paper(s) and give up on the topic. I assume it's not the case to
 continue.
 Ideally, the tracking system would have the following properties:
 * it costs nothing (or a small startup constant) if there is no weakmap
 * the overall cost of the tracking 

Re: Private symbols auto-unwrapping proxies (was: Security Demands Simplicity (was: Private Slots))

2013-01-28 Thread Andreas Rossberg
On 28 January 2013 19:45, Tom Van Cutsem tomvc...@gmail.com wrote:
 I just wrote up a strawman on the wiki to summarize the recent debates about
 the interaction between proxies and private symbols:

 http://wiki.ecmascript.org/doku.php?id=strawman:proxy_symbol_decoupled

 The page actually lists two proposals, out of which I prefer the second one.

 If I forgot some benefits/drawbacks of either approach, please speak up.

Under the second approach, how can you transparently proxy an object
with private properties _at all_? It seems like you can't, even when
you have access to its private names. In other words, what do you mean
by "inherit the private state of the target", when the target is still
aliased and accessed?

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Adding [[Invoke]] to address issues with methods called on Proxies

2013-01-30 Thread Andreas Rossberg
On 29 January 2013 21:14, Tom Van Cutsem tomvc...@gmail.com wrote:
 2013/1/29 Brandon Benvie bran...@brandonbenvie.com

 Proxies are the thing that ultimately complicates the object model and
 these are fallout from it, but most of us agree that Proxies are worth it.

 I think this is a strange way of characterizing proxies. The object model is
 there, and proxies merely expose it to JS programmers. Proxies aren't
 themselves supposed to complicate the object model further.

I suppose you mean that proxies aren't supposed to complicate
pre-existing aspects of the object model. Their mere existence of
course is a major complication in itself. (And I admit here and now
that I have those blasphemous moments where I actually develop serious
doubts that they are worth it...)

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: disabling use strict; everywhere

2013-01-30 Thread Andreas Rossberg
On 30 January 2013 18:00, Andrea Giammarchi andrea.giammar...@gmail.com wrote:
 let me rephrase ...

 putting `with(this){` before any build process/inlined library and `}` at
 the end of all concatenated files **nothing** is strict anymore ^_^

 function isStrict() { "use strict";
   return this;
 }

 isStrict(); // undefined

 now, wraping the whole thing inside a with statement

 with(this){
   function isStrict() { "use strict";
 return this;
   }
   alert(isStrict()); // global object
 }

 forget the Function hack, there is no library able to ensure itself to be
 executed in strict mode.
 Using the with statement around any code makes it not strict.

That is, er, balls, of course. All that happens here is that inside
the 'with', a call to a _toplevel_ function like 'isStrict()' becomes
'this.isStrict()', and so receives the global object. That does not
otherwise change strictness, however. Try:

  with(this) {
    function isStrict() { "use strict"; var x; delete x; }
    alert(isStrict());
  }

for example. Moreover, calls to functions not resolving to global
definitions are completely unaffected, so are calls using 'call' or
'apply':

  with(this) {
    function isStrict() { "use strict"; return this; };
    function call(f) { return f() };
    alert(call(isStrict)); // undefined
    alert(isStrict.call()); // undefined
  }

Both calls would return window in sloppy mode.
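
A runnable check of this (a sketch; it uses indirect eval so the `with` block executes as sloppy global code even if the snippet itself runs inside a module):

```javascript
// Indirect eval evaluates its argument as sloppy global code regardless of
// the caller's mode, so `with` is permitted there; the inner "use strict"
// directive nonetheless takes effect in each function body.
(0, eval)(
  'with ({}) {' +
  '  globalThis.strictThis = (function () { "use strict"; return this; })();' +
  '  globalThis.sloppyThis = (function () { return this; })();' +
  '}'
);
// strictThis is undefined (the directive survived the with wrapper);
// sloppyThis is the global object (sloppy functions box an undefined receiver).
```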

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: disabling use strict; everywhere

2013-01-30 Thread Andreas Rossberg
On 30 January 2013 17:19, Brandon Benvie bran...@brandonbenvie.com wrote:
 Correction, the use strict directive needs to appear as the first statement
 (ExpressionStatement) that's not an EmptyStatement and not a
 FunctionDeclaration. Anything else will cause the directive to be ignored.

_Any_ statement or declaration before the use directive renders it
inoperative: empty, function, or otherwise. In fact, any _token_ before
the magic string does, other than another directive (a lone string
literal) in the prologue.
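
A small sketch of the prologue rule; functions built with the Function constructor start out sloppy regardless of the creating context, which makes the effect easy to observe:

```javascript
// Only string-literal expression statements form the directive prologue;
// any other statement -- even an empty `;` -- ends it.
var afterDirective = Function('"use asm"; "use strict"; return this;');
var afterEmpty = Function('; "use strict"; return this;');
// afterDirective() -> undefined (a preceding directive string is fine)
// afterEmpty() -> the global object (the `;` ended the prologue, so sloppy)
```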

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Refutable pattern

2013-01-31 Thread Andreas Rossberg
I wrote up the semantics of refutable destructuring as discussed in
yesterday's meeting:

http://wiki.ecmascript.org/doku.php?id=harmony:refutable_matching

In particular, this defines the meaning of the ?-operator in a fairly
straightforward manner.

The page also describes how the proposed matching semantics would
readily be applicable to a pattern matching switch, and how it would
potentially allow us to turn 'undefined' into a keyword.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: Refutable pattern

2013-02-01 Thread Andreas Rossberg
On 1 February 2013 10:56, Axel Rauschmayer a...@rauschma.de wrote:
 Beautiful.

 What do question marks in value (as opposed to key) positions mean?
 Example: { a: x? }

Not much: a plain identifier 'x' always matches anyway, i.e., it is
already irrefutable, so wrapping a '?' around it does not have any
effect (it's like writing if (true) or whatever). I removed the
redundant example.


 How does this work grammatically (ternary operator…)?

That still has to be worked out. I'd actually prefer a prefixed ?,
since it is quite easy to overlook a postfix one trailing a longish
pattern when reading code. But that may be more difficult to reconcile
with the existing syntax.

/Andreas
___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


<    1   2   3   4   5   6   7   8   >