Re: Proposal: Additional Meta Properties for ES7

2015-02-28 Thread Claus Reinke

I think the fact that you had to write two solutions - where one attaches
the listener to the object and the other needs a double arrow (rainbow) -
already shows we have a hole in the language once covered by arguments.callee.


just to make sure we are not misunderstanding each other: I wrote
two solutions because the initial examples I saw only needed recursion
for anonymous functions, a problem which has standard solutions. 

Your example had more stringent conditions in that the self-reference 
also needed to preserve the identity of the anonymous function (for
de-registering listeners). The modified solution should serve both 
examples as a replacement for arguments.callee (I think).


The only example it hasn't solved is the one with concise methods,
where the syntactic sugar keeps us from wrapping the function at
the definition site. Since wrapping functions at call-sites is awkward,
I suggested alternatives.

The more you write examples, the more you convince me we are 
missing callee ;-)


Again, this is just my opinion. There are ways to work around this,
it just feels wrong that we lost the callee feature.


We just got rid of the 'this' workarounds, and it cost us a whole 
second set of function expressions. We still haven't solved all of
the 'super' issues. Do you really want to multiply these issues by 
introducing yet more implicitly scoped meta-level references?


Everyone is welcome to their opinions, but I'd rather avoid taking
the bait of minor conveniences, only to run into solid issues later.

To me, this looks like one of the cases where language designers
have to be more careful than language users in what they wish for.

My reference to Lisp was only half kidding: Lisp was born from
meta-level concepts, so everything was possible, programs talking 
about and rewriting themselves were cool and seemed to offer
easy solutions to everything; later languages like Scheme, ML, 
and Haskell have largely followed a path of trying to achieve 
comparable (or better) expressiveness while reducing the 
reliance on meta-level (and other too powerful) features. 

Their language designers had to work hard to get there while 
avoiding the seemingly simple path, but the result is that it is 
much easier to reason about and refactor a Haskell program 
than the equivalent Lisp program (which also makes optimization
easier, which makes seemingly complex features cheap to use).

Ok, getting off my soapbox now, sorry for the interruption :-)
Claus



Re: Proposal: Additional Meta Properties for ES7

2015-02-27 Thread Claus Reinke

For concise methods, the problem is already solved by 'this',
isn't it?

  ({ f(n){ return n>1 ? n*this.f(n-1) : 1 } }.f)(6)
  720


No, not for the general case. You could have arrived here via a 'super' method call, in which case
'this.f' will take you back to a subclass' f rather than recurring on this specific function.


Sometimes, this might be what you want in such a case. If it isn't,
then how about:

   class Super { f(n) { return n>1 ? n*Super.prototype.f(n-1) : 1 } }

   class Sub extends Super { f(n) { return 1+super.f(n) }};

   console.log((new Sub()).f(5));  // 121, rather than 326

Claus




Re: Proposal: Additional Meta Properties for ES7

2015-02-27 Thread Claus Reinke

and yet you haven't removed any anonymous arrow listener. Assign first?
Mostly nobody will do that, it's just less natural than `obj.on(something,
()=>happening)`


personally? Yes, I tend to assign listeners somewhere, at least when I
intend to remove them later. I've even been known to assign them to a
virtual event object, so that I could translate the event names later
(eg, click vs touch). But that is just me.

One could also hide the assignment in an `on` helper (jQuery does
something similar):


   function on (obj, event, listener) {
     (obj._events = obj._events || {})[event] = listener;  // remember the reference
     return obj.on(event, listener);
   }
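
A matching `off` helper would then retrieve the stored reference - a
hypothetical sketch, assuming the emitter's removal method is called
removeListener, as in node's EventEmitter:

   function off (obj, event) {
     var listener = obj._events[event];  // stored by 'on' above
     delete obj._events[event];
     return obj.removeListener(event, listener);
   }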

Claus



Re: Proposal: Additional Meta Properties for ES7

2015-02-27 Thread Claus Reinke

Can you show an example of how callee is used with a fat arrow function?

((n) => n>1 ? n*function.callee(n-1) : 1)


meta-level tools are powerful, which makes them ever so tempting.

They are too powerful to be used for tasks for which current 
language-level tools are sufficient. Using a call-by-value 
fixpoint combinator


   let fix = f => x => f( x => fix(f)(x) )(x)
   undefined

we can use plain functional abstraction instead of meta properties

   let f = self => n => n>1 ? n*self(n-1) : 1
   undefined

   fix(f)(6)
   720

   fix(f)(7)
   5040

(if you're worried about optimization, provide a built-in 'fix')

For concise methods, the problem is already solved by 'this',
isn't it?

   ({ f(n){ return n>1 ? n*this.f(n-1) : 1 } }.f)(6)
   720

Like most powerful tempting things, referring to meta-levels comes
at a cost, even though that may not be immediately visible (ie no
lexical scoping, cannot extract as part of a function body). So the
easiest route (of introducing the most powerful feature) is not
necessarily the best route.

You're still working to get rid of anomalies that hamper functional
abstraction and composition (arrow functions help with 'this'; and
wasn't the missing toMethod an attempt to handle the newly 
introduced 'super' special case?). I'm surprised to see everyone 
so eager to introduce new trouble.


just saying... :-)
Claus
http://clausreinke.github.com/




Re: Proposal: Additional Meta Properties for ES7

2015-02-27 Thread Claus Reinke

nope, you are limiting your object to have only one listener per event; I
think that's not quite how reality is. You're going to lose those listeners
the next time somebody uses the same name with the same object.

true. For cases where that isn't enough, I assume you're thinking of
canceling from within the handler.

Here goes another attempt, preserving identity while providing a
self-reference.

let arg = ff => {
  let arg = {};
  let f = ff(arg);
  arg.callee = f;
  return f;
};
let f = arg => n => n>1 ? n*arg.callee(n-1) : 1;
console.log(arg(f)(5));  // 120
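
The same wrapper serves the listener use case - a sketch, assuming an
EventEmitter-style `emitter` (the name is illustrative) with
on/removeListener methods:

emitter.on("data", arg(self => x => {
  // self.callee is identical to the registered listener
  if (x === "stop") emitter.removeListener("data", self.callee);
}));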

Perhaps I'm going to run out of ideas soon, but my point is that it is
worth looking for less powerful alternatives that achieve the same ends.
Else we'd all be writing Lisp, right? ;-)

Claus


Re: Reserving await within Arrow Functions

2013-12-12 Thread Claus Reinke

1) ... functions express mappings
2) Generators express sequences
3) Don't muddy the waters


If only!-(

In current ES6 design, functions are mixed with generators (probably
based on the notion that the generator function call is suspended, even 
though, in reality, generator functions return generator objects that 
can be prompted to return next results).


In current ES6 design, generators represent iterators, which are
ephemeral pointers into stateful sequences (calling next has a
side-effect; previous sequence pointers are invalidated, and you need
to make copies manually).

waters are clearly muddy :-(

Claus

PS. my standard suggestion for water purification:

   - have a syntax for generator blocks, not generator functions

       do*{...}

     equivalent to an immediately applied generator function

       (function*(){...}())

   - have side-effect-free iterators

   - combine modular language features at will, eg (spelled out in
     current ES6 terms below):

       x = do*{ yield 1; yield 2 }

       function(x) { return do*{ yield 1; yield 2 } }

   - profit! (simpler, more modular semantics, better compositionality)
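
For concreteness, here is what the first do* example above would desugar
to in current ES6 terms (a sketch of the stated equivalence):

   x = (function*(){ yield 1; yield 2 }());
   console.log([...x]);  // [ 1, 2 ]
   // with side-effect-free iterators, x could be traversed again;
   // with today's stateful iterators, the spread above consumes it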



Re: Generator Arrow Functions

2013-11-16 Thread Claus Reinke

What I don't understand is why generator expressions are not used
as the only way to create generators, leaving 'function' alone.


We have been over this before: to support flows that for-of loops cannot
express, specifically coroutine libraries such as http://taskjs.org/.


Which is why I keep suggesting block-style generator expressions
in addition to comprehension-style generator expressions. The
equivalent of today's

   function*() { ... yield value ... }

would be 


   function() { return do* { ... yield value ... }}

or, if 'function' peculiarities don't matter, the simpler

  () => do* { ... yield value ... }

As far as I can tell, no functionality would go missing. 'function' and
arrow would remain on par, and functions and generators would
remain separate (but composable) building blocks, leading to a more
modular language spec. You could keep 'function*' as syntactic sugar.

Claus



Re: Generator Arrow Functions

2013-11-14 Thread Claus Reinke
Everybody should probably review 
esdiscuss.org/topic/why-do-generator-expressions-return-generators where we discussed this before.


which suggests using generator expressions as arrow bodies to make
generator functions with arrows

   () => (for (y of gen) y)

What I don't understand is why generator expressions are not used
as the only way to create generators, leaving 'function' alone. There
would be

- comprehension-style generator expressions, with implicit yield:
    (for (...) ...)

- block-style generator expressions, with explicit yield:
    (do*{ ... yield value ... })

and generator functions would be built from generator expressions
and functions (arrows or classic). No need to mix up functions and
generators. At least none I can see...

Claus




Re: Weak callbacks?

2013-11-14 Thread Claus Reinke
I mean the promises-unwrapping proposal that has now been accepted into 
ES6 and the DOM, and IIUC is being implemented by several browser makers. 
Let's call these JS Promises. 


The unwrapping means that there is forced recursive synchronization,
doesn't it? When trying to work with nested promises, implicit unwrapping
leaves a single level of asynchrony, assuming synchrony beyond that.


* One cannot ignore network latency and partitioning.

Latency:
JS Promises are asynchronous and were carefully designed to not preclude
future promise pipelining. Q makes use of this pipelining.

* Sync RPC is the wrong model (it already yielded to async calls,
futures/promises, and the like).

Exactly!


And yet JS promises ended up with a synchronous model behind a
single layer of asynchronous access. Only the maximal latency is
explicit, with an async interface; any nested layers of access are
implicit, with a sync interface.

Claus



function .name property in draft spec?

2013-10-11 Thread Claus Reinke
According to 
http://wiki.ecmascript.org/doku.php?id=harmony:function_name_property ,
This proposal has progressed to the Draft ECMAScript 6 Specification.

I can't seem to find it in 6th Edition / Draft September 27, 2013, though.

Claus


Re: Clarification on function default param values

2013-10-01 Thread Claus Reinke

Generally variables are brought into scope by an explicitly appearing
defining occurrence. Two exceptions are this and arguments, which
functions bring into scope implicitly. These remain in scope until
shadowed by a nested function or by an explicit definition. Note that
this can never be explicitly defined, and arguments can only be
explicitly defined in non-strict code.

As of ES6, a variety of other function-defining constructs, like
function, implicitly bring into scope a new this and arguments.
Arrow functions are not one of these. Within an arrow function, both this
and arguments are lexically captured from the enclosing context.


Also super (implicitly bound in function, available lexically scoped
in arrow function body). 

Still following the rule: arrow functions have no implicit bindings and 
do not interfere with lexical scope (other than adding explicit bindings 
for their parameters).


Claus



Re: 'function *' is not mandatory

2013-08-31 Thread Claus Reinke

I am one of those on TC39 that want the visible flag. Since, in my view,
the only non-mistaken need to preserve sloppy mode is as an ES3
compatibility mode and ES3 has no generators, I consider this flagging
issue to be the important one. Yes, you have to read the function to know
*what* it generates. But even before you've figured that out, your whole
effort to read the function is different once you know you're reading a
generator function. Better to know it early.


But that is a *semantic* property - you can't force it into *syntax*.

Consider this code:

   // run with node --harmony
   function* gen() { yield 1 }
   function f(g) { return g() }
   console.log( f(gen).next() ); // { value: 1, done: false }

You can't see by looking at 'f' that it can return an iterator. And 'f's
parameter and return value could even vary dynamically.

I could imagine a type system for this, which would be nice (just as it 
would be nice to have a type system telling you whether a callback-taking 
function uses the callback async, sync, or both). Then your IDE could tell
you (an approximation of) what you're dealing with, providing verified
API documentation. But I don't see how to do that with syntactic tools only.


Code is read much more than it is written -- at least code that matters.


For this reason, I would still suggest separating generators from 'function' -
there is nothing function-specific about generators (apart from generator
implementations using stack frames, perhaps), so I find it confusing to
mix up these two concepts. It also keeps us from using arrow functions 
freely with generators.


I did suggest using something like 'do* { yield 1 }' as generator syntax
(ie, 'do*{}' would be an expression denoting an iterable, and 'yield' could
only appear in 'do*{}'). It still has the syntactic flag, but it separates functions 
and generators. We could then recombine the two as needed, without 
complicating the 'function'-related specification machinery:


   let gen = v => do* { yield v };
   gen(1).next()  // { value: 1, done: false }

Claus



Re: A couple of questions about generators & iterators

2013-07-29 Thread Claus Reinke

languages that use imperative iterators, like Python and PHP.
And JS -- JS has mutation and objects. It's not going to swerve 
toward Haskell (sorry, Claus).


I never understood your automated dislike of Haskell.


You misread me pretty badly here. Why?


Your dislike of Haskell as a reference/model for JS evolution is
explicit in your quoted message. It has been obvious from earlier
exchanges. 


My message didn't even mention Haskell - it said that I would prefer
functional APIs to complement JS imperative APIs where possible,
and functional APIs if it is not possible to have both (because
imperative APIs can be implemented in terms of functional APIs
more easily than the other way round).

I never wrote anything showing dislike of Haskell. Rather, I said JS's 
future standards-based evolution is not going to *swerve* toward 
Haskell. 


There is no technical argument in there, so it is an opinion, or a
preference/like/dislike.

We are not going to make extra allocations for unwanted next 
funargs.


Ok, that touches on a technical argument. I remain to be convinced 
that the cost would be high, for the reasons I gave.


I like functional programming. I'm multi-paradigm. JS is too, but not to 
this FP-till-it-hurts extent. Objects and functions, not functions first.


FP-till-it-hurts? Why the hyperbole? I want objects and functions,
not imperative first.


But if you are
speaking for tc39 when claiming that JS has no aspirations towards
supporting functional APIs when possible, that would be a serious
disappointment. Personally, I believe you're selling JS short here.


Besides misreading me, your recent points have misread deep as a 
modifier to continuation, and pushed internal iteration exclusively 
(which does not work in all cases as Andy said).


Do you now have a problem with interpreting deep continuations?

https://mail.mozilla.org/pipermail/es-discuss/2011-September/016484.html
https://mail.mozilla.org/pipermail/es-discuss/2011-October/017596.html

I did not push internal iteration. I am concerned about composition,
abstraction, and refactoring. In the other thread, I started from the
observation that .forEach does not play with yield, then I tried to
re-implement the built-in for-of as a user-level function and showed
some examples of abstraction being hampered.

These errors really get in the way of your making grander claims or 
plans about JS!


Is that necessary?

Claus



Re: generators vs forEach

2013-07-26 Thread Claus Reinke

I have no idea why both you and Brendan assert that I was arguing/
rehashing for deep delimited continuations (let alone call/cc). 


Because you wrote:

(1) can be worked around, but not with the usual tools of function
definitions and calls - yield forces use of function* and yield*
for abstracting over expressions containing it.


and later:

For instance, we cannot write

   function* g() {  (function(){ yield 1 })() }

This certainly looks like you want a deep continuation.


Ah, thanks, that explains it. Yes, I was listing examples that trouble
me about the current design, I was suggesting changes to that
design, and some of the examples could only be solved by deeper
continuations. 


The point of departure is that my suggested changes wouldn't actually
solve all the cases that trouble me (in particular, not those cases that
would depend on deep continuations). Issues that I've tried to address
include:

1. the example above doesn't require deep continuation any more
   than local variable declarations in a generator require them. 

   Let me change the example to use immediately applied arrow 
   functions (to avoid any special once-per-function-body handling 
   of this and arguments); then I would expect


   function* g() {  (()=>{ let x = 1; yield x })() }

   to be equivalent to

   function* g() {  { let x = 1; yield x } }

   and if the latter is considered valid/shallow, I would expect the former 
   to be valid/shallow, too.


2. I'd like to decouple generators from function, to avoid interference
   between the two features. 

   For concreteness, let me assume a block form of generators as 
   do* { ... } (delimiting continuations to the block, giving a generator-
   valued expression). Then the example would read (ignoring item 1 
   above for now, so we have to use yield*):


   var g = () => do* { yield* (() => do* { yield 1 })() }

   With function, this would be slightly longer than with the current
   spec, but since generators are now decoupled from function, we
   can use (arrow) functions (our means of functional abstraction) freely -
   generators are simply another class of object to write functions over.

   We could even re-introduce

   function* f() { ... }

   as mere syntactic sugar for

   function f() { return do* { ... } }

3. I'd like to see a standard iterator library, with things like zip and
   feed (the exact contents of such a library would evolve in practice,
   not from a spec, but the spec could provide a seed, and organize
   the evolution), and I would like to see more support for composing 
   generators.


   Using the current spec, we could define

   function* then(g1,g2) { yield* g1; yield* g2 }

   and use this to combine generators via ES5 array iterations
   (a runnable sketch follows this list)

   [1,2,3].map(function*(x) { yield x }).reduce(then)

   or, assuming item 2 above,

   function then(g1,g2) { return do* { yield* g1; yield* g2 } }

   [1,2,3].map(x => do* { yield x }).reduce(then)

   This, as well as my generators-as-monads gist, suggests that we
   could let generators return their completion value and have them
   implement monadic .then, for easy composition using the monadic
   set of tools.

   And since yield* is essentially a mini-interpreter built on top of
   yield, the composition library could include alternative interpreters
   (eg, support for early return).
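
For reference, a runnable version of the reduce(then) combination from
item 3, in current-spec terms (engine support permitting):

   // composing generators via then and reduce (runnable sketch)
   function* wrap(x) { yield x }
   function* then(g1,g2) { yield* g1; yield* g2 }
   let g = [1,2,3].map(wrap).reduce((a,b) => then(a,b));
   console.log([...g]);  // [ 1, 2, 3 ]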

So, none of my suggestions require deep continuations. Nevertheless,
I'm having trouble distinguishing local blocks in shallow-continuation
generators from deep-continuation generators. So I'd be interested to 
hear the precise arguments against deep delimited continuations (link 
to meeting notes/mailing list thread would be fine).


Claus




Re: A couple of questions about generators & iterators

2013-07-26 Thread Claus Reinke

languages that use imperative iterators, like Python and PHP.


And JS -- JS has mutation and objects. It's not going to swerve toward 
Haskell (sorry, Claus).


I never understood your automated dislike of Haskell. But if you are
speaking for tc39 when claiming that JS has no aspirations towards
supporting functional APIs when possible, that would be a serious
disappointment. Personally, I believe you're selling JS short here.


Your point about allocation is on target too.


With a generational collector, short-lived allocation should be free
(provided all fields need to be filled anyway). When using generators
in standard iterator loop patterns, the first copy can be reused in
the second iteration, and so on, so only one extra copy per loop
should be needed.

Claus

___
es-discuss mailing list
es-discuss@mozilla.org
https://mail.mozilla.org/listinfo/es-discuss


Re: generators vs forEach

2013-07-25 Thread Claus Reinke

2 generators do not compose as freely as iteration functions,
   because they are tied to special syntax and restricted contexts


You place blame on generators here, but beside the laments about deep
coroutines -- totally understandable, but Brendan is right that
they are pointless -- your examples apply just as well to iterators of
all kinds.  It just happens that generators are a convenient way to
implement iterators.


First, let me clarify that I am on record arguing for even shallower
continuations. I restated this preference in this very thread:

https://mail.mozilla.org/pipermail/es-discuss/2013-July/031967.html
(scroll down to FYI)

I have no idea why both you and Brendan assert that I was arguing/
rehashing for deep delimited continuations (let alone call/cc). But as
this makes two of you, I've extracted my suggestions/questions to a
separate thread, where they are less likely to be lost/misread.

Second, my worries are about generators, because they introduce
a special built-in form that interferes with functional abstraction.


Your point sounds like external iteration does not compose as freely as
internal iteration -- which is strictly not true!  You can't implement
zip, for example, with internal iteration, whereas you can with external
iteration.


My point is about expressiveness in the composition/abstraction/
refactoring/extract reusable components sense, not in the Turing
sense. Also,

   zip = (a,b) => a.reduce(function(x,y,i){ return x.concat([[y,b[i]]]) }, [])
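
For comparison, a zip over ES6 external iterators is equally direct - a
sketch in current-draft terms (Symbol.iterator as the iteration entry point):

   function* zipG(as, bs) {
     let ia = as[Symbol.iterator](), ib = bs[Symbol.iterator]();
     for (;;) {
       let ra = ia.next(), rb = ib.next();
       if (ra.done || rb.done) return;
       yield [ra.value, rb.value];
     }
   }
   console.log([...zipG([1,2,3], "ab")]);  // [ [1,"a"], [2,"b"] ]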


- if you compare the versions that use for-of with those (ending with
a _) that use a user-defined abstraction forofG, you'll see a lot
of syntax noise, even worse than with the old long-hand function - in
terms of making functional abstraction readable, this is going in the
wrong direction, opposite to arrow functions.


I humbly suggest that these abstractions are simply in the wrong place.


Since I was merely trying to re-implement parts of for-of in user
land, I don't see how that could be the case.

Claus



A couple of questions about generators & iterators

2013-07-25 Thread Claus Reinke

I do not understand why (1) iterators are specified using a self-updating
API when a functional API would seem preferable, or why (2) generators 
are linked to functions, when block-level generators would seem to be 
sufficient and less complex. In some more detail:


1. Why do iterators have an imperative API?

   As currently specified, an Iterator's next returns a {done,value} pair
   (ItrResult), updating the Iterator in place. A functional alternative 
   would have next return a {done,value,next} triple, leaving the original 
   Iterator unmodified. 


   It seems a lot easier to implement the imperative API in terms of
   the functional one than the other way round, if one needed both. 
   In practice, I do not see any advantage to forcing an imperative API.


2. Why are generators linked to functions, and not to blocks?

   It is not difficult to implement a form of generators in ES5. Here
   is one using TypeScript for classes and arrow functions (it uses
   a functional Iterator API, and .then-chained expressions instead
   of ;-chained statements):

   definition:
   https://gist.github.com/clausreinke/5984869#file-monadic-ts-L91-L125

   usage examples:
   https://gist.github.com/clausreinke/5984869#file-monadic-ts-L492-L529

   The main source-level disadvantages of these user-defined
   generators wrt built-in generators are:

   (a) lack of syntax sugar for .then-chaining

   (b) no coverage of statement blocks and their built-in control
   structures

   (a) would be cheap to fix (monad comprehensions have been
   suggested here multiple times), (b) could be fixed by introducing
   block-level generators as built-ins. 


   Block-level generators are shallower than function-level ones
   (continuation reaches to the end of the current block),
   expression-level generators are even shallower (continuation
   reaches to the end of the current .then-chained expression).

   This would allow us to introduce shallow generators without
   messing with function - generators would simply be another
   class of object that function allows us to write abstractions
   over. Composing shallow continuations could be left to user-
   level functions, not built-ins, so everything but the block-level
   generators would remain (re-)programmable.

Claus
http://clausreinke.github.com/ 


Re: generators vs forEach

2013-07-24 Thread Claus Reinke

And why not? Because yield is a statement


Yield is an expression.


Thanks for the correction. Yes, yield expr is an expression, syntactically.

It doesn't have the nice composition and code transformation properties
that I usually associate with expressions; it imposes unusual restrictions
on its *context* and impedes functional abstraction:

1. though yield constructs expressions from expressions, it isn't a
   function (can't pass yield around or store it in a variable), nor is
   yield expr a function call.

2. (1) can be worked around, but not with the usual tools of function
   definitions and calls - yield forces use of function* and yield*
   for abstracting over expressions containing it.

3. yield is disappointingly similar to this, in being implicitly bound to
   the next enclosing function* (function, for this). Expressions referencing
   either this or yield cannot be wrapped in functions (btw, can
   generator bodies reference an outer this?), because this would
   cut the implicit binding.

   For this, workarounds include arrow functions or bind; for yield,
   the only workaround is yield*+function* (or diving even deeper,
   with hand-written iterators). Having to use different function
   mechanisms for the latter is by design, so it is a workaround only
   from the perspective of wanting to use uniform tools for functional
   abstraction.


For instance, we cannot write

   function* g() {  (function(){ yield 1 })() }
   function* g() {  function y(x){ yield x } y(1) }

but have to write

   function* g() {  yield* (function*(){ yield 1 })() }
   function* g() {  function* y(x){ yield x } yield* y(1) }

and when I try to write a polyfill for for-of, I end up with two partial
fills (neither models early return):

   function forof(g,cb) {  // non-generator callbacks only
     var r;
     while (true) {
       r = g.next();
       if (r.done) break; // skip return value?
       cb(r.value);
     }
   }

   function* forofG(g,cb) {  // generator callbacks only
     var r;
     while (true) {
       r = g.next();
       if (r.done) break; // skip return value?
       yield* cb(r.value);
     }
   }

We could switch on the type of cb, and go down to handwritten iteration,
to unify the two partials into one, but then we'd still have to cope with
different usage patterns at the call sites (call with function vs. yield*
with function*).
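
Such a switch might look as follows - a crude sketch that dispatches on
whether the callback's result looks like a generator object (the call-site
asymmetry remains):

   function* forofBoth(g,cb) {
     var r, c;
     while (true) {
       r = g.next();
       if (r.done) break; // skip return value?
       c = cb(r.value);
       if (c && typeof c.next === "function") yield* c;  // generator callback
     }
   }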



Why shouldn't I be able to traverse an array, using the ES5 standard
operations for doing so, yielding intermediate results from the
traversal (recall also that yield can return data sent in via .next,
for incorporation into such traversals)?


You certainly can, with one modification: using *ES6* standard
operations (external iterators vs the ES5 forEach internal iterator).
Generators and non-generator iterators and for-of and comprehensions
hang together really nicely in practice.


function* g(){
  for (x of [1,2,3]) yield transform(x);
}


You're suggesting we abandon ES5 array iteration patterns in favor
of more general ES6 iterator patterns. That would be okay (*), but

1. it leaves fairly new (ES5) API surface as legacy

2. generators do not compose as freely as iteration functions,
   because they are tied to special syntax and restricted contexts

(*) if we want to go down that route, then why join TypedArrays
    with Arrays, according to the old-style iteration API? Shouldn't both
    be covered by a common iterator-based API instead?


Hand-written iterators don't suffer from (2), but are somewhat awkward
to write in place, and expose their lower-level protocol. Perhaps the
solution is a rich enough standard iterator library, with generators as
local glue and iterator library functions for supporting more general
functional abstraction and composition.

Perhaps we need to play a bit more with such iterator library functions, 
to get a better feeling for the limitations imposed by generators, and to
give my concerns a concrete form? 

I've put up a gist with a few obvious things I'd want to have (something
like zip and feed should really be standard; the former often has
syntax support in the form of parallel comprehensions, the latter is
needed if we want to use an input-dependent generator in a for-of):

   https://gist.github.com/clausreinke/6073990

and there are several things I don't like, even at this simple stage:

- if you compare the versions that use for-of with those (ending 
   with a _) that use a user-defined abstraction forofG, you'll see 
   a lot of syntax noise, even worse than with the old long-hand 
   function - in terms of making functional abstraction readable, 
   this is going in the wrong direction, opposite to arrow functions.


- I haven't yet figured out how to end an outer generator early
   from within a yield*-nested one (as needed for take_), without
   replacing yield* with a micro-interpreter. That might just be
   my incomplete reading of the draft spec, though?


Methods can be replaced by 

Re: Chained comparisons from Python and CoffeeScript

2013-07-19 Thread Claus Reinke

I'd like to see this implemented, at least for greater/less than (-or equal
to).

   a < b < c
   a <= b <= c

Desugars to

   a < b && b < c
   a <= b && b <= c


As a workaround, consider that ES6 arrow functions are going to
make something like this readable:

   function sortedBy(op, ...args) {
     for (var i = 1; i < args.length; i++) {
       if (!op( args[i-1], args[i] )) return false;
     }
     return true;
   }

   console.log( sortedBy( (a,b) => a<b,  2,3,4) );  // true
   console.log( sortedBy( (a,b) => a<b,  2,4,3) );  // false

   console.log( sortedBy( (a,b) => a<b,  2,2,3) );  // false
   console.log( sortedBy( (a,b) => a<=b, 2,2,3) );  // true

Claus



Re: generators vs forEach

2013-07-17 Thread Claus Reinke

// this doesn't work

   function* generator(){
   [1,2,3].forEach( function(x){ yield x } )
   }


I have been thinking and with for..of, I can't find a good reason to use
.forEach instead of for..of.
for..of does what you need here with generators too.


I've been looking at this example and thinking the same thing.


That's what you get for trying to use examples :-) long code doesn't
get read, short code is taken too seriously. As I said in my reply to
David, my point is not dependent on this example. Still, given the
readiness to abandon .forEach completely, it might be worthwhile
to try and find a more realistic example, to see how big the damage
is in practice.


Since we're talking about not completely implemented features, I
don't have anything concrete yet, but perhaps in the direction of
other callback-based APIs? Is there a way to use generators to
enumerate directory trees in nodejs, or is it back to iterators?

Better examples welcome,
Claus



Re: generators vs forEach

2013-07-17 Thread Claus Reinke

   // this doesn't work
   function* generator(){
     [1,2,3].forEach( function(x){ yield x } )
   }
I have been thinking and with for..of, I can't find a good reason to use 
.forEach instead of for..of.

for..of does what you need here with generators too.


Perhaps you're right that .forEach is going to die (there are also
generator expressions to consider, covering some of the other
standard methods). It was the smallest example I could think of
to illustrate the point.

However, the argument is not about a specific operation but about
being able to define such operations in user code (eg, array
comprehensions can usually be mapped to uses of .map, .concat,
.filter; loops can be mapped to tail recursion; ...). User-defined
control structures can be extended/modified without waiting for
the language as a whole to evolve. If the equivalence between
built-in and user-defined operation is broken, that option is no
longer fully functional.


For the specific case of forEach et al
What do you mean by et al? I don't believe .map, .reduce or .filter
are all that interesting to use alongside generators.


And why not? Because yield is a statement, and because those
operations have not been (cannot be) extended to work with
generators. Why shouldn't I be able to traverse an array, using the
ES5 standard operations for doing so, yielding intermediate results
from the traversal (recall also that yield can return data sent in via 
.next, for incorporation into such traversals)?



Even if so, for..of can work too and is decently elegant (YMMV):

function* g(){
  [1,2,3].map(x => {yield transform(x)})
}


I fell for this, too :-) arrow functions have no generator equivalents.


becomes

function* g(){
  for (x of [1,2,3]) yield transform(x);
}


Methods can be replaced by built-ins. It is the reverse that
is now broken.

Claus



Re: generators vs forEach

2013-07-16 Thread Claus Reinke

   // this doesn't work
   function* generator(){
     [1,2,3].forEach( function(x){ yield x } )
   }


This would make generators deep, violating the non-interleaving assumptions
of intermediate callers on the call stack. This is why we accepted
generators only on condition that they be shallow. We knew at the time that
this privileges built-in control structures over user defined ones. The
alternative would have been to omit generators completely. We agree that
shallow generators were worth it, despite this non-uniformity.


While I understand the compromise, and the wish to get in some form
of generators anyway, the discrimination against user-defined control
structures troubles me deeply. It introduces a new language construct
that defies abstraction. It means that we can no longer use functional
abstraction freely, but have to worry about interactions with generators.

For the specific case of forEach et al, another way to avoid intermediate
stack frames would be guaranteed inlining. If we always inline .forEach
before execution, then specialize the resulting code wrt the callback,
any yields in the callback would be directly in the caller. Consider this
chain of code transformations:

   // inline forEach; this still doesn't work
   function* generator(){
     (function forEach(arr,cb) {
       for (var i=0; i<arr.length; i++) cb(arr[i]);
     })([1,2,3], function(x){ yield x } );
   }

   // instantiate inlined forEach; still doesn't work
   function* generator(){
     let arr = [1,2,3];
     let cb = function(x){ yield x };
     for (var i=0; i<arr.length; i++) cb(arr[i]);
   }

   // inline cb; still doesn't work
   function* generator(){
     let arr = [1,2,3];
     for (var i=0; i<arr.length; i++) (function(x){ yield x })(arr[i]);
   }

   // instantiate inlined cb; this should work
   function* generator(){
     let arr = [1,2,3];
     for (var i=0; i<arr.length; i++) yield arr[i];
   }

If such inlining and instantiating of functions changes the validity
of code in ES6, then the opposite path - building abstractions from
concrete code examples - is also affected. I find that worrying.

The final form of the code can be handled with shallow generators,
and it should be semantically equivalent to the initial form (just
function application and variable instantiation in between). So why
shouldn't both forms be valid and doable without overcomplicating
the shallow generator ideal?

In pragmatic terms, perhaps introducing inline annotations for
operations like .forEach and for their callback parameters could avoid
nested stack frames here without forcing user-side code duplication.
Such annotation-enforced inlining should also help with performance
of .forEach et al (currently behind for-loops).


[in conventional pre-compiling FPL implementations, such worker/
wrapper staging plus inlining is done at compile time (stage recursive
higher-order function into non-recursive wrapper and recursive but
not higher-order worker; inline wrapper to instantiate the functional
parameters in the nested worker; finally apply standard optimizer);
it is an easy way to avoid deoptimizations caused by higher-order
parameters interfering with code analysis, provided the library
author helps with code staging and inline annotations]

Put another way, shallow generators are equivalent to a local cps 
transform of the generator function itself. Deep generators would 
require the equivalent of CPS transforming the world -- violating 
the stateful assumptions of existing code.


FYI:

I'm not sure what you mean by violating the stateful assumptions
but there is an even more local transform than that for ES6 generators:
writing code in monadic style always captures the local continuation
only. That allows for generator monads that compose those local
continuations back together. 

An example of such a generator monad can be found here (using a
list of steps for simplicity; code is TypeScript v0.9, to make use of
ES6 classes with class-side inheritance and arrow functions)

   https://gist.github.com/clausreinke/5984869#file-monadic-ts-L91-L125

with example code (using user-defined forOf) at

   https://gist.github.com/clausreinke/5984869#file-monadic-ts-L492-L529

This differs from ES6 generators in using a functional API (next returns
{done,value,next}) and in building on expressions and user-defined
control-flow operations instead of statement blocks and built-in
control-flow structures. Still, this style does seem to allow more reuse
of existing ES5 array operations than ES6 generators will, as this
small example demonstrates:

   console.log("\n// mixing yield with higher-order array ops (prefix ^)");
   var generator4 = () => [1,2,3].map( x => G.yield(x) )
                                 .reduce( (x,y) => x.then( _ => y ),
                                          G.of(undefined) );
   MonadId.forOf( generator4(), y => (console.log("^ "+y), MonadId.of(y)) );

append example to the end of that gist, execute with tsc -e (TS v0.9 
required, 

generators vs forEach

2013-07-15 Thread Claus Reinke
[prompted by this nodejs list thread:
"Weird error with generators (using suspend or galaxy)"
https://groups.google.com/forum/#!topic/nodejs/9omOdgSPkz4 ]

1. higher order functions are used to model control structures

2. generators/yield are designed to allow for suspend/resume
   of control structure code

These two statements come in conflict if one considers the restriction
that generators be based on flat continuations, which is sufficient to
span built-in control structures like for but not predefined control
structures like forEach. The support for nested generators (yield*)
differs from normal function call operation.

I have not seen this conflict discussed here, so I wanted to raise it
in case it was an oversight and something can be done about it. As
far as I can tell, there are two issues:

- current predefined operations like forEach, map, filter, ..
   are not fully integrated with generators, even though they
   model synchronous operations; expecting users to duplicate
   their functionality for use with generators seems wrong;

- is it even possible to define higher-order operations that can be
   used both normally (without yield inside their callbacks, without
   yield wrapping their result) and with generators (with yield
   inside their callbacks, with yield wrapping their result)?

Claus
http://clausreinke.github.com/



Re: [proposal] Function calls, syntax sugar

2013-07-12 Thread Claus Reinke

A slightly less ambitious suggestion:

   consider f() as syntax for the implicit arguments array
   (which, as of ES6, can be considered deprecated), then
   make the parens in this syntax optional

In other words, you could write 


   f 1      // single parameter
   f(1,2)   // single parameter, implicit arguments pseudo-array
   f [1,2]  // single parameter, explicit array

Things get more interesting when you consider currying (functions
returning functions):

   f(1)(2)  // conventional, implicit arguments pseudo-arrays
   f 1 2    // paren-free, single parameters, no arrays

For your nested call example, you'd have the choice between

   foo(1, 2, bar(a, b))  // uncurried, implicit pseudo-arrays
   foo[1, 2, bar[a, b]]  // uncurried, explicit arrays
   foo 1 2 (bar a b)     // curried, single parameters

In the latter variant, () are used for grouping, consistent with their 
use in the rest of the language.


Nice as this would be, I don't know whether this can be fitted into 
ES grammar and ASI... (probably not?).


Claus

PS. could es-discuss-owner please check their mailbox
   (and update the mailing list info page)?



Re: Maps and Sets, goodbye polyfill ?!

2013-07-12 Thread Claus Reinke

In general, generators are very hard to polyfill.  (Not impossible, as
you can do a CPS transform of the source code, but very difficult.)


It depends on what you want. For concise specification of iteration,
you can do something without full CPS transform, by using monadic
coding style. My scratch area for monadic generators and promises:

   monadic javascript/typescript: promises and generators
   https://gist.github.com/clausreinke/5984869

   (which you can run with TypeScript 0.9, playground or npm)

Given that JS control-structures aren't predefined but built-in, we can't 
redefine them but have to define our own, but it still isn't too bad. For 
instance, the simple generator example


   https://gist.github.com/clausreinke/5984869#file-monadic-ts-L492-L506

outputs

   // Generator.forIn, with yield, plain iteration (prefix #)
   # yield1 1
   (yield1 returns 0)
   # yield2 1
   (yield2 returns 1)
   # yield1 2
   (yield1 returns 2)
   # yield2 4
   (yield2 returns 3)
   # yield1 3
   (yield1 returns 4)
   # yield2 9
   (yield2 returns 5)
   # 1,2,3

Note from the iteration loop that I've implemented a functional API
(next returns {done,value,next}) instead of an imperative one (next
returns {done,value} and modifies its host).
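
In miniature, the functional variant looks like this (a sketch; the names
are illustrative, not the gist's API):

   // functional next: returns a fresh {done,value,next} triple and
   // leaves the current position untouched
   function arrayIter(arr, i) {
     i = i || 0;
     return { next: () =>
       i < arr.length
         ? { done: false, value: arr[i], next: arrayIter(arr, i+1).next }
         : { done: true } };
   }
   var r = arrayIter([1,2]).next();  // { done:false, value:1, next }
   r.next(); r.next();               // both calls return the same second element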

The standard recursive tree generator

   https://gist.github.com/clausreinke/5984869#file-monadic-ts-L521-L529

even looks readable without special syntax

   function iterTree(tree) {
     return Array.isArray(tree)
       ? tree.map( iterTree ).reduce( (x,y) => x.then( _ => y ),
                                      G.of(undefined) )
       : G.yield(tree);
   }

   var generator3 = iterTree([1,[],[[2,3],4],5]);
   MonadId.forOf( generator3, y => (console.log("* "+y), MonadId.of(y)) );

and outputs

   // MonadId.forOf, iterTree recursive generator() (prefix *)
   * 1
   * 2
   * 3
   * 4
   * 5

With a very little syntactic sugar for monads (monad comprehensions,
monadic do notation), it could even be made to look like conventional
code. This has come up several times here, and would have a very high
value for very small cost, if done right.

Claus



Re: [proposal] Function calls, syntax sugar

2013-07-12 Thread Claus Reinke

 function f(a, b) {
   return [a, b];
 }

Currently:

 f(1, 2); // [1, 2]

Whereas...

 // single parameter, implicit arguments pseudo-array:
 f(1, 2);

|a| would magically be treated like a ...rest param that wasn't really
an array, but instead an implicit arguments pseudo-array?

 // [[1, 2], undefined]


No, just another way to describe the current situation, where

   function f() { return [...arguments] }   // pseudo code
   f(1,2)  // [1,2]

or, if we make the arguments explicit

   function f(...arguments) { return [...arguments] }   // pseudo code
   f(1,2)  // [1,2]

and explicit formal parameters would be destructured from arguments,
so

   function f(a,b) { return [a,b] }
   f(1,2)  // [1,2]



The solutions shown above using [] also create ambiguity with:

- MemberExpression[ Expression ]
- CallExpression[ Expression ]

Given:

 function foo(value) {
   return value;
 }
 foo.prop = "Some data";

Currently:

 foo("prop"); // "prop"

 foo["prop"]; // "Some data"


ah, yes, I knew there had to be a serious flaw somewhere... sigh

Claus



Why is .bind so slow?

2013-07-12 Thread Claus Reinke

The TypeScript project tries to emulate arrow functions through the
_this = this pattern and keeps running into corner cases where a
semi-naïve renaming is not sufficient.

I have been trying to suggest using .bind to emulate arrow functions
instead, but the counter-arguments are (a) .bind might not be available
(supporting pre-ES5 targets) and (b) .bind is slow.

The polyfill isn't the problem, but I'm a bit shocked every time
someone reminds me of the performance hit for using .bind. Given
that a bound function has strictly more info than an unbound one,
I wouldn't expect that (I expected a bound function to be roughly
the same as an unbound function that does not use this). Unless
there is no special casing for the just-add-this case, and .bind is
always treated as a non-standard (meta-level) call.

While playing with test-code, I also found that v8 does a lot better
than other engines when using an .apply-based .bind emulation.
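
(The emulation in question is essentially the following - a minimal
sketch, ignoring bound arguments and 'new' semantics, so not a full
ES5 Function.prototype.bind; the name bindEmul is illustrative:)

   function bindEmul(fn, self) {
     // close over the receiver; forward all arguments via .apply
     return function () { return fn.apply(self, arguments); };
   }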

Can anyone explain what is going on with .bind, .apply and the
performance hits?

The TypeScript issue is https://typescript.codeplex.com/workitem/1322 .
My test code (*) is attached there as bind-for-arrows.html.

Claus
http://clausreinke.github.com/

(*) I also tried to make a jsperf test case, but the way jsperf
   runs the loop seems to prevent the optimization that makes
   v8 look good for the .apply-based bind.




Re: Why is .bind so slow?

2013-07-12 Thread Claus Reinke
Thanks, kg! Your message represents the kind of discussion/information
I was hoping for. If your hunch as to the reason is correct, it would seem
an easy target for optimization. Partially and efficiently emulating arrow
functions in ES6 transpilers should be a strong argument in favor, though
not the only one (eg bind keeps coming up as a recommendation when
using class methods as callback parameters, etc.).

For those interested, I've put my (micro) bench in a gist:

   https://gist.github.com/clausreinke/5987876

   (note in particular the performance difference between
   .bind and an .apply-based polyfill; other engines do worse)

I used es-discuss for this thread because:

- all engines are slow on .bind, so it is likely a general issue

- all engines are slow on .bind, so recommending .bind as freely as
   I (and several people on this list) used to do does not seem realistic;
   that puts a serious dent in the usability of this part of the spec

- even if that issue may turn out not to be spec-related, this is the only
   list I know of where I can reach all engine developers and es language
   gurus at once.

   If this kind of es implementation/performance discussion is not
   welcome here, a dedicated cross-engine list for such topics would
   be nice. It would only work if all engines had developers listening in.

   As long as there isn't enough traffic to warrant a dedicated list, I
   (as one of the list owners there) welcome such threads on js-tools

   http://groups.google.com/group/js-tools/about

   (on the basis that engines are our most fundamental js tools ;-)

Please let me know where to raise such cross-engine threads in future.
Claus


I've had some back and forth with v8 devs about this since it affects my
compiler. I believe they already have open issues about it but I don't know
the bug #s.

In general, the problem seems to be that Function.bind creates functions
that have different type information from normal functions you wrote in
pure JS; they're 'special' native functions in the same fashion as say, a
DOM API:
... 



Re: [module] dynamic namespace/scope

2013-07-11 Thread Claus Reinke

Why allow global scope to leak into a new module?


That would require a tedious preamble for pretty much any bit of code you
want to write.


We agree, that's why we haven't tried to do this.


You could have a standard preamble, implicitly included, with
an option to override. That way, everything comes from a module,
you don't have to import from the standard modules explicitly, and
you could still override the standard imports if necessary.
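
Sketched in the module syntax of the day (the standard module name
"std" and the chosen bindings are hypothetical):

   // implicitly prepended to every module, unless overridden:
   import { Object, Array, Math, JSON } from "std";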

Claus



Re: Creating your own errors

2013-07-10 Thread Claus Reinke

```javascript
function CustomError(message) {
   this.message = message || '';
}
CustomError.prototype = new Error;

// whenever you need
throw new CustomError;
```


At best, this will not preserve the stack trace property; at worst, this will
lead to a bad one.


Because the location info will be that of the new Error? One could
try this instead, which seems to work for me:

throw { __proto__ : new Error(), reason: "Monday" }
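
Usage sketch (the stack property is non-standard, so this is
engine-dependent):

   try {
     throw { __proto__ : new Error(), reason: "Monday" };
   } catch (e) {
     console.log(e instanceof Error, e.reason);  // true "Monday"
     console.log(typeof e.stack);  // "string", from the throw site
   }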

Claus




Re: Where'd Promise#done go?

2013-06-21 Thread Claus Reinke

   x.a().then( t1p =>
   y.b().then( t2p =>
   let t3p = { then(cb) { t1p.then( t1 => t2p.then( t2 =>
   t1.c(t2).then( t3 => cb(t3) ) ) ) };
   ...' ) )

(where ...' is ..., transformed to work with a promise t3p instead of t3)

Now, waiting for the .a and .b roundtrips would be delayed until
some code in ...' actually needs to look at t3. One could further
delay looking at t2 if t1.c() could deal with a promise t2p.


[colon removed from above quote per Tab's suggestion]


oops, force of habit, as Tab guessed correctly.


Ok, you've delayed sending the .c until you needed t3p. But that's the
opposite of the promise pipelining optimization! The point is not to delay
sending .c till even later. The point is to send it before the round trips
from .a or .b complete. This code still cannot do that.


I do not expect to be able to emulate pipelining fully in user-land
(at least not in JavaScript).

My aims were to demonstrate that .then does not need to stand in
the way of such an optimization, and that the additional flexibility/
expressiveness provided by non-flattening .then is relevant here.

Back to your objection: there is nowhere to send the .c until you
have t1 at hand. You could, however, move waiting on the t2
dependency to later, by passing the t2 receiver promise t2p to .c:


   x.a().then( t1p =>
   y.b().then( t2p =>
   let t3p = { then(cb) { t1p.then( t1 =>
   t1.c'(t2p).then( t3 => cb(t3) ) ) };
   ...' ) )

   (where t1.c' is t1.c, modified to work with a promise)

Now, the call to t1.c' can go out after the .a roundtrip yet before
the .b roundtrip completes. If you have also moved the callback
code to the remote site, then the call to t1.c' could happen even
without the .a roundtrip completing (from the perspective of
the local site that triggered the chain) because t1 would be on
the same site as the callback code and the remaining data. 

This latter aspect of pipelining is simpler to do in the language
implementation (unless the language itself supports sending of
instantiated code, aka closures) - my point was merely to question
the statement that .then would be in the way of such optimization.

Claus




Re: Where'd Promise#done go?

2013-06-21 Thread Claus Reinke


https://code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/ses/makeQ.js
supports promise pipelining in user land, using the makeRemote and makeFar
extension points.


Hmm. If you are moving JS code (from the callbacks) to another site,
some local references (including some that aren't even behind promises)
become remote and vice versa. How do you manage this? And could you
please link to an application that shows how makeRemote would be used
in context?


If we distinguish .then vs .there, you are describing .there above. With
this distinction, do you agree that .then prevents this optimization?


No. I described how a specific variant of .then, passing promises to
callbacks, could account for more flexibility in resolution, time-wise, 
than a flattening .then could. Providing an interface that fits with the
protocol of a remote-executing .there is just one application of this 
additional flexibility (and my code left the remote-executing aspects 
implicit).


For language-level futures, the lack of explicit nesting gives the 
implementation the freedom to rearrange resolution as needed.

For JS promises, ruling out nesting robs programmers of the
freedom to rearrange resolution explicitly.

Claus



Re: Where'd Promise#done go?

2013-06-20 Thread Claus Reinke



I'm worried that you may be suffering from and spreading a terminology
confusion. Promise pipelining is an important latency reduction
optimization when using promises over a network. See Chapter 16 of
http://erights.org/talks/thesis/markm-thesis.pdf. Using .then, either
with or without the return result, **prevents** promise pipelining, which
is another reason to emphasize asynchronous message sending and
deemphasize .then.


I couldn't imagine why you would think that using .then would prevent
promise pipelining. A properly designed, monadic .then is nothing but
a more powerful let. Perhaps you could elaborate?

Naively translating the standard pipeline example gives

   x.a().then( t1 =>
   y.b().then( t2 =>
   t1.c(t2).then( t3 =>
   ... ) ) )

where x, y, t1, and t2 are meant to live on the same remote machine,
computation is meant to proceed without delay into the ... part, and
the implementation is meant to take care of avoiding unnecessary
round-trips for the intermediate results.

This is naïve because the synchronous method calls should really
be asynchronous message sends. If we assume local proxies that
forward local method calls to remote objects and remote results
to local callbacks, then y.b() will not start until t1 comes back.

But if t1 is itself a promise, then it can come back immediately,
and the same applies to t2 and t3. So computation can proceed
directly to the ... part, with delays happening only when required
by data dependencies.
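
To make the proxy idea concrete, here is a rough sketch (remoteProxy
and send are my names, not part of any proposal): every method access
yields a function that fires an asynchronous message send and
immediately returns a promise for the result.

   // sketch: a proxy whose method calls return promises right away,
   // so a chain of sends need not wait for earlier round trips;
   // send(msg) is an assumed messaging primitive returning a promise
   function remoteProxy(objId, send) {
     return new Proxy({}, {
       get: (target, method) => (...args) =>
         send({ target: objId, method, args })
     });
   }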

A user-level promise will not have the same opportunities for
network traffic optimization as an implementation-level future
(an obvious one would be moving the callback code to where
the data is), but the .then itself does not prevent optimizations.

Unless, that is, one insists on flattening promises (no promises
passed to .then-callbacks), which would sequentialize the chain...

What am I missing here?
Claus





Re: Where'd Promise#done go?

2013-06-20 Thread Claus Reinke

Naively translating the standard pipeline example gives

   x.a().then( t1 =>
   y.b().then( t2 =>
   t1.c(t2).then( t3 =>
   ... ) ) )
..
This is naïve because the synchronous method calls should really
be asynchronous message sends. If we assume local proxies that
forward local method calls to remote objects and remote results
to local callbacks, then y.b() will not start until t1 comes back.

But if t1 is itself a promise, then it can come back immediately,


I think this is what you are missing. If x.a() returns, for example, an
int, then x!a() returns a promise that will turn out to be a
promise-for-int. In that case, x!a().then(t1 => ...t1...), the callback
will only be invoked with t1 bound to the int itself. This can't happen
prior to the completion of the round trip.


As I was saying, that restriction is not necessary - it is a consequence
of the flatten-nested-promises-before-then-callback philosophy. Instead,
the local proxy can send the remote message and locally pass a receiver
promise to its callback. That way, the callback can start to run until it
actually needs to query the receiver promise for a value.

If we did this for the .a and .b calls, the translation would change to

   x.a().then( t1p =>
   y.b().then( t2p =>
   t1p.then( t1 =>
   t2p.then( t2 =>
   t1.c(t2).then( t3 =>
   ... ) ) ) ) )

and the .b call could be triggered before the .a call roundtrip completes.
If we want to push the lazy evaluation into the ... part, things get
more interesting, as one would need to model the data-dependencies
and delay looking at t1p/t2p further. One could define an inline
then-able to capture this:

   x.a().then( t1p =>
   y.b().then( t2p => {
   let t3p = { then(cb) { t1p.then( t1 => t2p.then( t2 =>
   t1.c(t2).then( t3 => cb(t3) ) ) ) } };
   ...' } ) )

(where ...' is ..., transformed to work with a promise t3p instead of t3)

Now, waiting for the .a and .b roundtrips would be delayed until
some code in ...' actually needs to look at t3. One could further
delay looking at t2 if t1.c() could deal with a promise t2p.

This additional flexibility is not available in a flat-promise design,
which is why I think such a design is a mistake. Of course, even
if one wants to accept the restrictions of a flat-promise design,
the flattening should happen in promise construction, not in .then.

Claus




Re: Conflicts using export *

2013-06-14 Thread Claus Reinke

I am confused: I thought import * was removed because, in the
presence of dynamically configured loaders, it would leave tools
(and programmers) unable to infer the local scope without executing code.
Now we have the same issue back via export *, just need a re-exporting
intermediate module?


No, you don't. `import *` affects the names bound in a module.
`export *` doesn't.  You still can't import a name without listing it
explicitly, meaning that it's always easy to determine the local
scope.

Sam


Ah, thanks. That makes sense.
Claus



do we have a thisclass? (for abstract static methods)

2013-06-07 Thread Claus Reinke

We do have this/super for references along the instance prototype
chain, and we have this.constructor for getting to the class of an instance
method. But what about getting the current class from a static method,
for class-side inheritance?

   // abstract
   class Super {
     static f(x) { thisclass.g(x) } // how to write this?
     static g(x) { throw "abstract static method g" }
   }

   class Sub extends Super {
     static g(x) { console.log(x) } // how to call this from f?
   }

   Sub.f("how?")

The idea being that Super is partially abstract, with static f starting
to work in subclasses once static g is properly implemented. How do 
I get the definition of static f in Super to pick up the definition of static 
g in Sub (without naming Sub explicitly, could be anywhere in the chain)?


I thought a solution for this had been discussed, but have no idea
how to search for this in the list or spec.

Claus
http://clausreinke.github.com/



Re: do we have a thisclass? (for abstract static methods)

2013-06-07 Thread Claus Reinke

We do have this/super for references along the instance prototype
chain, and we have this.constructor for getting to the class of an instance
method. But what about getting the current class from a static method,
for class-side inheritance?

Can't you just use this?


Exactly, that should work. The constructors form their own prototype chain (somewhat independently 
of the instance prototypes, but reflecting their chain), so everything should work out. In other 
words, Sub is just an object, so its methods can use `this` to refer to each other.


Kind of obvious from the desugaring... My thinking in that
direction was blocked by associating wrong ideas with the
class syntax.
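
For reference, a minimal sketch of that answer against the classes
from the original question (the error message string is mine):

   class Super {
     static f(x) { this.g(x) }   // `this` is the class f was invoked on
     static g(x) { throw new Error("abstract static method g") }
   }

   class Sub extends Super {
     static g(x) { console.log(x) }
   }

   Sub.f("works")   // logs "works": inside f, this === Sub, so this.g is Sub.g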

Thanks,
Claus




Re: The Paradox of Partial Parametricity

2013-05-30 Thread Claus Reinke

The fact that we can make it monadic is just a bonus; any time you see
"monad" just think "container++" - it's just a proven useful bit of
structure on top which makes it easy to work with the values inside
the container.  (It's a bit more abstract than that, of course, but
thinking in terms of containers works for many monads, and helps 
guide understanding toward the more abstract notion.)


You are aware of the over-simplification in that suggestion, but it
can still be harmful to readers here (who may dismiss the simple 
examples and never get to the abstract notion). So, please pardon
a little elaboration/clarification:

Monads are not about containers. Even functors (map-supporting 
things) are not about containers. That is confusing class-level 
constructs (Array<String>) with object-level constructs (["hi"]) and
misses out on some of the most interesting applications of monadic 
coding. The container analogy stops working when class-level 
constructs correspond to object-level control structures (eg, 
mapping over a promise means attaching a post-processor).
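
To illustrate with a sketch (using today's built-in Promise for
concreteness):

   // mapping over an array transforms values stored in a container...
   [1, 2, 3].map(x => x + 1);             // [2, 3, 4]

   // ...whereas "mapping" over a promise attaches a post-processor
   // to a value that may not even exist yet - no container in sight:
   Promise.resolve(1).then(x => x + 1);   // eventually 2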


To give a use-case relevant to ES6 language design: generators were
carefully designed to capture the flattest continuation that allows
code in ES control structures to be stopped and resumed without
changing them. Monadic code captures even flatter continuations, and
allows equivalent control structures (including generators and
exceptions) to be defined in libraries. In other words, good support
for monadic coding moves language design decisions to libraries, and
gives library authors expressive powers formerly reserved to
language designers.

Monads' first use case in programming languages was modular
specification of language semantics for features like exceptions
and non-determinism. That was then translated into modular
program design for things like parsers, solution search, and
embedded language interpreters. Some of the coding patterns
go back to the 1980s, at least, but bringing them under the 
common head of monads and coding against a common monadic 
API started in the early 1990s. 

This latter development made it possible to work out commonalities
between these coding patterns, as well as to share code between
control structure implementations: whether you need to implement a
parser, embed a prolog interpreter, support exceptions, implement a
type system, or a strategic rewrite library for code analysis and
transformation passes - in the past, you started from scratch each
time; these days, a good monad library gets you most of the way,
provides valuable design guidelines, and pushes towards modular
specifications and implementations.

In effect, monads and their cousins have started to give us similar 
structuring, sharing, and reuse tools for control structures as those
we take for granted for data structures. 

And because monads have helped us to see commonalities in 
different control-structure problems and their solutions, adding 
language support for monadic code supports all of these solutions 
at once. Instead of languages growing bigger with problem-specific
constructs (generators, exceptions, promises, ...), languages can grow 
simpler again, off-loading specific solutions to library code while 
adding generic expressiveness to the language.


A lot of the early practical adoption of monads happened in a 
non-strict language, where data structures can stand in for control 
structures (eg, lazy lists for infinite iterators or promises). Also,
monads where the class-level constructor corresponds to a simple
object-level constructor are easier to present in monad tutorials. So
it has become popular to present monads as "containers with extras",
but that is a very limited view. And it does not explain why monads
have become so important to language designers and library
authors alike.

Claus




Re: Module naming and declarations

2013-05-17 Thread Claus Reinke

I'm still trying to make sense of the conflicting module naming design
goals and usage consequences.


You seem to believe otherwise, but I think you still need to explain
how any of the above cases is not sufficiently (or even superiorly)
supported by lexical modules + the loader API.


The most important flaw of this is staging. The loader API lets you
*dynamically* modify the registry, but those changes cannot be
used by code compiled at the same time as the code that does the
modification.


If loader/registry manipulation and module declaration happen
in different stages, and we have sub-projects that both provide
and consume common libraries, then we have a staging problem.

This happens when the loader configuration stage cannot refer
to lexical names in the project loading stage.

As I said above, this is broken. If we don't provide a declarative way 
to register modules, then they have to be added to the registry *in a 
different stage* from any code that uses them. This forces you to 
sequentialize the loading of different packages of your application, 
which is a non-starter.


Replacing module name declarations with strings that are registered
in-stage works around that problem, at the price of replacing scoped 
declaration with (compilation-time single-)assignment and storing/
referencing all local modules in the same (loader-)global registry.

Lexical module names and string-like module registry names
fulfill different purposes, and trouble comes from trying to make 
one serve the uses of the other: the earlier modules design only
had lexical names, which is awkward wrt configuration; the current
design has registry entries, which is awkward wrt local references.

The way to avoid such awkward mismatches of provided concepts
and use cases, then, seems to require both lexical modules and
registry entries, as separate concepts, each with their own uses.

Since we also need to get external modules from somewhere, this
leaves us with three levels of references:

1. module declarations and module references for import/export
   use lexical naming

   module m { ... }   // local declaration
   import { ... } from m   // local reference

2. registry entries for module registration and reference

   (a) use string-like naming

   module m as "jquery"   // non-local/loader registry entry
   module m from "jquery"   // non-local/loader registry lookup

   (b) use property-like naming

   module m as Registry.jquery // non-local/loader registry entry
   module m from Registry.jquery // non-local/loader registry lookup

   (c ) use modules of modules

   export { jquery: m } to Registry // non-local/loader registry entry
   import { jquery: m } from Registry // non-local/loader registry lookup

3. external references use urls, here marked via plugin-style prefix,
   to separate from registry references

   module m from "url!<jquery url>"   // external resource reference

The main points of registry manipulation are that it happens before
runtime and is single-assignment, so that it can affect the loader that
is currently active. 

I'm not sure that I'd call this declarative (to begin with, it seems 
order-dependent), and string names (2a) do not seem to be necessary, 
either - they just make it easy to embed URLs in names.


String names (2a) make registry entries look like external references,
or rather, they put what looks like external references under control
of the loader. There could be a convention (3) that "url!<url>" refers
to an external reference, via <url>, but -by design- all string-named
module references are configurable. If lexical module names (1) are 
not included, all module references are configurable.


Property names (2b) make registry entries look like lexical references,
the only indication of (load-time) configurability being the Registry.
prefix. That is even more apparent in the import/export variation (2c).

No matter which of the three variations of (2) is used, the part about
register-a-module is a little odd, and my variations are meant to
highlight this oddity:


   module m as "jquery"   // (2a)
   module m as Registry.jquery   // (2b)
   export { jquery: m } to Registry   // (2c)

Other variations would obscure the oddity, e.g., mixing definition and
registration in a form that suggests (local) naming

   module "jquery" { ... }

To illustrate the oddity a little further: if we consider a project SPa with
sub-projects SP1 and SP2, whose modules need to use some common
library like jQuery, we end up with two phases for SPa:


phase 1: configure and register jquery (versions/locations)
phase 2: load SP1 and SP2, do the SPa things

The proposed spec makes it possible to load configuration script and
sub-projects in one go, because the configuration script modifies the
loader that is used by the sub-projects. Which means that the phases
have to be loaded in this order, and that re-configuration has to be an 
error to preserve single-assignment.


However, this only works because neither SP1 nor 

Re: Module naming and declarations

2013-05-09 Thread Claus Reinke

A possible alternative might be to switch defaults, using generic
relative syntax (scheme:relative) to keep the two uses apart 
while avoiding having to introduce a new scheme


   import $ from "http:jquery"; // it's a URL, don't mess with it
   import $ from "jquery"; // it's a logical name, do your thing


Actually, that has a serious flaw for in-browser delivery, in that it
would force naming a single scheme for relative URLs, whereas
the same browser code can be delivered via several base schemes.

So, this option seems out, and I agree that burdening the common
case of logical names with a new scheme (jsp:) isn't nice, either. 


Currently, tagging the location refs as URLs (url(<url>)) appeals
most (unless there are conflicts hiding there, too).

Claus  


Re: Module naming and declarations

2013-05-08 Thread Claus Reinke

That is not my position.  My position has always been that if you want
logical names, then a reasonable way to do that is via a scheme:

   import $ from "package:jquery";


A possible alternative would be to switch the defaults



Re: Module naming and declarations

2013-05-08 Thread Claus Reinke

[sorry if you saw an earlier empty message - unknown keycombo!-(]


That is not my position.  My position has always been that if you want
logical names, then a reasonable way to do that is via a scheme:

   import $ from "package:jquery";


A possible alternative might be to switch defaults, using generic
relative syntax (scheme:relative) to keep the two uses apart 
while avoiding having to introduce a new scheme


   import $ from "http:jquery"; // it's a URL, don't mess with it
   import $ from "jquery"; // it's a logical name, do your thing

The default loader could still cache URL-based resources (permitting
bundling), but should not impose non-URL semantics.

Claus



Re: Module naming and declarations

2013-05-04 Thread Claus Reinke

0. No effort: modules are loaded relative to the document base url,
with .js appended. So `import "jquery"` maps to the relative URL
jquery.js.

2. A few lines: you can use System.ondemand() to set the URL for each
module you use. If you call
`System.ondemand({"https://example.com/jquery-1.9.1.js": "jquery"})`
then `import "jquery"` maps to the URL you specified (imports for
modules that aren't in the table will fall back on the loader's
baseURL).


I think part of Andreas' concerns was that you now have a conflict
between 'import "jquery"' referring to a relative (0.) or registered (2.)
thing, because all names just look URL-ish. Another part was that
both times, the import may look URL-ish but doesn't behave like one.

Using something like 'import "registered:jquery"' for 2 would
remove the conflict, without changing the functionality. That would
still leave the implicit rewriting involved in 0 - perhaps one could
specify that every protocol-free name refers to a module (with
rewriting) and names with protocol prefixes refer to URLs?

Claus



Re: Promise/Future: asynchrony in 'then'

2013-05-04 Thread Claus Reinke

That part I wouldn't be so sure about: in all monads, the .of equivalent
is effect-free (in an IO monad, it does no IO; in a non-determinism
monad, it is deterministic; in a failure/exception monad, it does not
fail; in a count-steps monad, it doesn't count).
If you look at those identity laws at the top again, you'll see that
Promise.of cannot introduce a delay for these laws to work out
(otherwise, the left- and right-hand sides would have different
numbers of ticks/turns).


As I said, the number of ticks is unobservable if you're writing
effect-free code.


In my use of the the term above, the effect of the Promise monad
would be to provide a value maybe now, maybe later, and an
effect-free '.of' would be an already resolved Promise (value
available now).

Adding ticks in operations that should allow for effect-free passing
of intermediate results is observable in slow-downs. That was the
topic of the blog post that got me to look into this in the first place
(performance issues with promise implementations, see thread 
opening message).


The point of insisting that promises implement a monadic interface
is that promises can reuse abstractions built for monads - that also
means that passing intermediate values around should not cause
additional delays. For instance, in the 'liftA2' example from one of 
the issue tracker threads:


https://github.com/promises-aplus/promises-spec/issues/94#issuecomment-16193265

there are several occurrences of '.then', via 'map' and 'ap', that 
should not delay the result by several additional turns - the only 
asynchrony in using 'liftA2' over promises should come from the 
promise parameters and possibly from the callback parameter.
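
A sketch of that expectation (my rendering, not the thread's exact
code; Promise.resolve stands in for the monadic .of):

   // lift a binary function over two promises; the only waiting
   // should come from pa and pb themselves, not from extra turns
   const liftA2 = (f) => (pa, pb) =>
     pa.then(a => pb.then(b => Promise.resolve(f(a, b))));

   liftA2((x, y) => x + y)(Promise.resolve(1), Promise.resolve(2))
     .then(sum => console.log(sum));   // 3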


However, you seem to be referring to side-effects instead (effects 
beyond returning a value in an expression, beyond the specified 
effect of a given monad).


Side-effect-free code is difficult to write in JS - I would be surprised 
if most promise implementations were not full of side-effects
(internal queues, shared pipelines, resolution). Also, so many
examples of using promises involve side-effects that this seems
to count as an established practice.

Which means that the additional code queuing will also be 
observable in code reorderings, not just delays. Which is, indeed,
the rationale for attempting to add delays in a normalized fashion,
as you state below:


If you're not writing effect-free code, then as I said before, keeping
the number of ticks the same regardless of the state of the promise
when you call .then() on it is important for consistency, so it's easy
to reason about how your code will run.


Given that many JS APIs still are heavily side-effect biased, we'll
need to take that into account. And in that world, adding delays in
parts of the promise API that should implement the common
monadic interface is very much observable, and will cause code 
written against this common interface to behave differently when
run over a promise than when run over another monad.

Claus



Re: Promise/Future: asynchrony in 'then'

2013-05-03 Thread Claus Reinke

   Promise.of(value).then(cb) = cb(value)
   promise.then(Promise.of) = promise


My interpretation of these laws for promises is that attaching a callback to
a resolved promise should execute that callback synchronously (though the
callback itself may create an asynchronous promise, introducing its own
delays).


It's not that the callback may create an async promise: it *must*
create an async promise, if you want to reason about it in terms of
the monad laws.  The cb must have the signature a -> M<b>, where M in
this case is Promise.  If cb returns a non-promise value, then you're
not following the monad laws, and you can't reason about monadic
behavior.


We need cb :: a -> Promise<b> in order to avoid the non-monadic
overloads of 'then' in current promise specs, but I was referring to
the choice of such a callback returning a resolved-now promise
(Promise.of) or a resolve-later promise (via some asynchronous
operation like nextTick, ajax, ...).

Assuming it does follow the monad laws properly, then the return 
value of cb is *always* accessible in the next tick only, regardless of
whether it runs synchronously or not.  


That part I wouldn't be so sure about: in all monads, the .of equivalent
is effect-free (in an IO monad, it does no IO; in a non-determinism
monad, it is deterministic; in a failure/exception monad, it does not
fail; in a count-steps monad, it doesn't count). 


If you look at those identity laws at the top again, you'll see that
Promise.of cannot introduce a delay for these laws to work out
(otherwise, the left- and right-hand sides would have different
numbers of ticks/turns).

Almost all monads have other monad constructors that do have
effects (do IO, add non-determinism, throw an exception, ...). It is
just that the monad laws are about the effect-free part only.

At least that is my current reading of the situation;-)
Claus



Re: Promise/Future: asynchrony in 'then'

2013-05-02 Thread Claus Reinke

Thanks for the various references and explanations - it took me a while
to follow them. So, while discussion is still ongoing on the details (both 
of how to spec and what to spec), all specs seem to agree on trying to 
force asynchrony, and on doing something in 'then' to achieve this.


I suspect that at least the latter part is wrong - at least it is in conflict
with decades of design and coding experience with general monadic 
APIs, especially with the idea of providing one effect-free form of 
creation that is left and right identity to composition:



   Promise.of(value).then(cb) = cb(value)
   promise.then(Promise.of) = promise


My interpretation of these laws for promises is that attaching a 
callback to a resolved promise should execute that callback 
synchronously (though the callback itself may create an asynchronous 
promise, introducing its own delays).


Similarly, a callback creating a resolved promise from a future
result should not add further delays beyond those of the original
future-result-creating promise.

This does not affect design decisions about promise resolution,
so the motivating examples could still work.

One of the examples in the linked threads was roughly:

   { promise, resolve } = ...
   promise.then( r => Promise.of( console.log( r ) ) );
   console.log(1);
   resolve(2);
   console.log(3);

expecting output order 1 3 2. This would still be possible if
resolve itself was asynchronous (queuing the callbacks for the 
next or end of current turn instead of the current one) - no need 
to introduce asynchrony in 'then', it seems, not even in the implicit 
result lifting.
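
A toy sketch of that alternative (makePromise is my name, and this is
not a spec-compliant promise - its then returns nothing and all
queueing lives in resolve):

   function makePromise() {
     let cbs = [], value, resolved = false;
     return {
       promise: { then(cb) { resolved ? cb(value) : cbs.push(cb) } },
       resolve(v) {
         setTimeout(() => {              // asynchrony sits in resolve,
           value = v; resolved = true;   // not in then
           cbs.forEach(cb => cb(value));
         }, 0);
       }
     };
   }

   let { promise, resolve } = makePromise();
   promise.then( r => console.log(r) );
   console.log(1); resolve(2); console.log(3);   // logs 1 3 2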


Explicitly providing for both synchronous and asynchronous
promises also seems more predictable and performance-tunable 
than leaving next-ticking to implementation optimization efforts.


At least some of the alternatives discussed also violate the third
law of monadic interfaces, associativity of composition:

   promise.then( cb1 ).then( cb2 )
   =
   promise.then( r => cb1( r ).then( cb2 ) )


*if* one was to add nextTicks in 'then' (which I think is a bad idea
anyway), then cb1 and cb2 should *not* be queued for the *same* 
next turn. Which would lead to the accumulation of delays that
were reported for some promise implementations, just for using
monadic callback composition.

Associativity is less of a problem for alternatives that propose
to move resolved promise callback execution to the end of the 
current turn (with or without some means to protect separate
execution threads from each other). Though that leaves the 
question of starving other queued tasks by continuing to extend 
the current turn with presumably asynchronous tasks. Again,
having explicit control over synchronous vs asynchronous
resolution of intermediate promises would help with tuning
the queuing.

Claus



Promise/Future: asynchrony in 'then'

2013-04-30 Thread Claus Reinke

The promises-aplus spec has a note that confuses me

   https://github.com/promises-aplus/promises-spec#notes

   1. In practical terms, an implementation must use a mechanism such 
   as setTimeout, setImmediate, or process.nextTick to ensure that 
   onFulfilled and onRejected are not invoked in the same turn of the 
   event loop as the call to then to which they are passed.


I have not yet been able to decide whether DOMFuture has a
similar provision, or how this note is meant to be interpreted.

The aspect that worries me is that this note is attached not to the
creation of promises but to the definition of 'then'. Is that because
of the implicit return lifting (if 'then' callbacks do not return promises,
wrap the return in a new promise), or is there something else going on?

As long as the 'then' callbacks return Promises, the idea of resolved 
Promise creation as left and right identity of 'then'


   Promise.of(value).then(cb) = cb(value)
   promise.then(Promise.of) = promise

would seem to require no additional delays introduced by 'then' 
(promise creation decides semantics/delays, 'then' only passes on 
intermediate results).


Could someone please clear up this aspect? How is that note meant
to be interpreted, and do other Promise/Future specs have similar
provisions?

Claus

PS. Prompted by this blog post:
   http://thanpol.as/javascript/promises-a-performance-hits-you-should-be-aware-of/ 


Re: Module naming and declarations

2013-04-28 Thread Claus Reinke

users are going to rewrite their code bases twice just because modules
are going to be delivered in two stages?


What are you talking about?

People are not going to rewrite more than once. Current NPM/AMD 
modules do not nest, so there's no basis for asserting they'll be rewritten 
twice, first to ES6-as-proposed modules, then to add lexical naming and 
nesting.


Talking for myself, I've been using node modules, AMD modules, my
own module loader, and have even tried, on occasion, to make my
code loadable in two module systems (though I've shied away from
the full complexity of UMD). I'm tired of that needless complexity - I
want to build on modules, not fight with them (and I don't want tool
builders having to guess what kind of module system a given code
base might be using and what its configuration rules might be).

I have high hopes for getting to use ES6 modules early, via transpilers,
but that cannot happen until that spec settles down - we have users
of implementations that are waiting for that spec to tell them what
to converge on.

As soon as the dust settles, I'll try to stop using legacy modules 
directly, switching to ES6 modules, transpiled to whatever (until the
engines catch up).

But what I really want are lexical modules as the base case. The
use cases that led to the new design are important, so a design
not covering them would be incomplete, but if ES6 modules are
not lexical, I'll be rewriting my code again once ES7 true modules 
come out. That is twice for me, and I doubt there is anything 
untypical about me in this situation.


I understand that David is snowed under (he has an unfortunate
habit of taking on too much interesting work?-) but given the
importance of this particular feature, perhaps more of tc39 could
give a helping hand? The earlier and the more complete that
spec is, the earlier there will be user feedback, and the greater
the chance that ES6, or at least ES6.1, will have a module system
that works in practice.

Claus



Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-26 Thread Claus Reinke
A Future for a Future seems like a corner case compared to the 
broader simplicity of an implicit unwrap.


The argument is not about whether Future<Future<...>> is a common
case. The argument is that Future<...> and Array<...> and Optional<...>
and things that may raise catchable errors and other types have enough 
structure in common that it makes sense to write common library code 
for them. 


One example is a map method, other examples may need more structure -
eg, filter would need a way to represent empty structures, so not all 
wrapper types can support filter.
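
As a sketch of such common library code (mapM and Type are my names;
Type.of is the monadic constructor, then the monadic bind):

   // map written once against the of/then interface, reusable for
   // any type that supplies both
   const mapM = (Type) => (f) => (m) => m.then(v => Type.of(f(v)));

For a promise-like Type this post-processes the eventual value; for a
list-like Type whose then binds over each element, the same line
transforms every element.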


The goal is to have types/classes with common structure implement 
common interfaces that represent their commonalities. On top of those
common interfaces, each type/class will have functionality that is not
shared with all others. As long as the common and non-common
interfaces are clearly separated, that is not a problem. 


It is only when non-common functionality is mixed into what could
be common interface methods, for convenience in the non-common
case, that a type/class loses the ability to participate in code written
to the common interface. That is why recursive flattening and implicit
lifting of Promises is okay, as long as it isn't mixed into 'then'.

Claus



Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-26 Thread Claus Reinke

I'm still wading through the various issue tracker threads, but only two
concrete rationales for flattening nested Promises have emerged so far:

1 library author doesn't want nested Promises.
2 crossing Promise library boundaries can create unwanted nesting

Perhaps you didn't read my post then?
https://mail.mozilla.org/pipermail/es-discuss/2013-April/030192.html
I've shared experience on why flattening promises are convenient (easier 
refactoring, easier to reason about) and why non-flattening would be 
annoying (impose some sort of boilerplate somewhere to get to the actual 
value you're interested in).


Yes, I had seen that, but it doesn't explain where those nested Promises
are supposed to come from. For a normal thenable thread (without
implicit flattening or lifting), the nesting level should remain constant - 
.of(value) wraps value in a Promise, .then(cb) unwraps the intermediate 
result before passing it to cb, and cb constructs a new Promise.


In a later message, you suspect the reason for implicit flattening is
fear of buggy code that may or may not wrap results in Promises. You
say that such code may result from refactoring but, in JS, Promise<value>
is different from value, so trying to hide the level of promise nesting is 
likely to hide a bug. Yes, it is more difficult to spot such bugs in JS, but 
they need to be fixed nevertheless. Wrapping them in more duct-tape 
isn't helping.


Beyond rationale, I'd like non-flattening advocates to show use cases 
where a Future<Future<T>> can be useful; more useful than just
Future<T> and T.


My own argument is not for nested futures themselves, but (1) for 
futures to offer the same interface (.of, .then) as other thenables, which
(2) implies that there is to be no implicit lifting or flattening in .then.

In other words, you are worried about others giving you arbitrary
nested promises and want to protect against that by implicit flattening,
whereas I want to have control over the level of nesting and keep that
level to one. For promises, I don't expect to use nested promises much, 
but I do expect to define and use thenable methods that should work 
for promises, too.


Claus




Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-26 Thread Claus Reinke
Can you point to any code in wide use that makes use of this 
"thenables = monads" idea you seem to be implicitly assuming?
Perhaps some of this generic thenable library code? I have never
seen such code, whereas the use of thenable to mean "object with
a then method, which we will try to treat as a promise" as in
Promises/A+ seems widely deployed throughout libraries that are
used by thousands of people judging by GitHub stars alone.


Thus I would say it's not promise libraries that are harming the 
thenable operations, but perhaps some minority libraries who 
have misinterpreted what it means to be a thenable.


Instead of rehashing the arguments from the various issue tracker
threads, where examples have been presented even in the half (or
less) I've read so far, let me try a different track: consider the case 
of Roman vs Arabic numerals.


As a user of Roman numerals, you might point to centuries of real
world use in the great Roman empire, complain that Arabic numerals 
don't have explicit numbers for things like IX or LM etc, ask why anyone
would need an explicit symbol representing nothing at all, or ask for 
examples of real world use of Arabic numerals in Roman markets, 
or say that Roman numerals don't need to follow the same rules 
as Arabic numerals, and that instead users of Arabic numerals have 
misinterpreted what it means to work with numbers.


All those arguments are beside the point, though. The point is that
Arabic numerals (with 0) are slightly better than Roman numerals
at representing the structure behind the things they represent, making
it slightly easier to work with those things. And that is why Arabic 
numerals have won and Roman numerals are obsolete, after centuries 
of real-world use in a great empire.


Thenables in the JS-monadic sense represent common structure
behind a variety of data types and computations, including Promises, 
they represent that structure well, and they give JS an equivalent to 
vastly successful computational structures in other languages. 

And that isn't because I or someone famous says so, but because lots 
of people have worked hard for lots of years to figure out what those 
common structures are and how they might be represented in 
programming languages, refining the ideas against practice until
we have reached a state where the only question is how and when
to translate those ideas to another language, in this case JS.

Promises differ from other thenables, but there is no reason to
burden the common interface with those differences.

Claus



Re: Module naming and declarations

2013-04-26 Thread Claus Reinke

You argue for a two-level system of non-lexical names to support
configuration - okay. But why does that imply you have to drop
the lexical naming altogether, instead of using a three-level system
(from external to internal to lexical names)?


You don't, it's an orthogonal concern. Note that Sam was *not* arguing 
against the existence of lexical modules.


Good to hear that confirmed.

But it's not nearly as important as the rest of the core system -- as Sam 
describes, coordination and separate development are the most important 
piece that the module system needs to address. We dropped lexical 
modules mostly in the interest of working out the core and eliminating 
parts that weren't necessary for ES6. Kevin's been urging us to reconsider 
dropping them, and I'm open to that in principle. In practice, however, we 
have to ship ES6.


But let's keep the question of having lexical *private* modules separate 
from this thread, which is about Andreas's suggestion to have lexical 
modules be the central way to define *public* modules.


There are a couple of problems I see with that: it proposes adding yet 
another imperative API to JS where a declarative API would do (adding
modules to the internal registry instead of the local scope); and it misses
the big-rewrite-barrier that is going to accompany ES6 introduction -
modules are the most urgent of ES6 improvements, but do you think
users are going to rewrite their code bases twice just because modules
are going to be delivered in two stages?

You believe you have worked out the core parts that caused you to
postpone lexical modules, and you had a lexical module proposal
before that. What is standing in the way of re-joining those two parts?

Claus



Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-25 Thread Claus Reinke

I think we see a correlation -- not a 1.0 correlation, but something. Those
who've actually used promise libraries with this flattening property find
it pleasant. Those who come from either a statically typed or monadic
perspective, or have had no experience with flattening promises, generally
think they shouldn't flatten. 


I think the dispute could be settled easily: 


- flattening 'then' is a convenience
- non-flattening 'then' is necessary for promises being thenables
   (in the monad-inspired JS patterns sense)

Why not have both? Non-flattening 'then' for generic thenable
coding, and a convenience method 'then_' for 'then'+flattening.

That way, coders can document whether they expect convenience
or standard thenable behavior. And we can have convenience for
Promise coding without ruining Promises for more general thenable 
coding patterns.
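
A toy illustration of the split (Box is my stand-in for a thenable;
today's built-in Promise flattens, so it cannot play the
non-flattening role):

   const Box = (v) => ({
     value: v,
     // monadic then: cb must return another Box, nesting is preserved
     then: (cb) => cb(v),
     // convenience then_: plain returns are lifted (recursive
     // flattening of nested boxes could be added here as well)
     then_: (cb) => {
       const r = cb(v);
       return r && typeof r.then === "function" ? r : Box(r);
     }
   });

   Box(1).then( x => Box(x + 1) ).value;   // 2 - caller controls nesting
   Box(1).then_( x => x + 1 ).value;       // 2 - plain result lifted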


Claus



Re: A Challenge Problem for Promise Designers (was: Re: Futures)

2013-04-25 Thread Claus Reinke

I'm still wading through the various issue tracker threads, but only two
concrete rationales for flattening nested Promises have emerged so far:

1 library author doesn't want nested Promises.
2 crossing Promise library boundaries can create unwanted nesting

There is little to be said about 1, only that those library authors still 
have a choice: add a separate recursive flattening operation and keep 
the thenable operations unharmed, or give up on Promises being 
thenables in the monad-inspired JS patterns sense (and hence give 
up on profiting from generic thenable library code).


The second point is somewhat more interesting, as it stems from yet
another convenience-driven thenable deviation: if a then-callback does
not return a Promise, its result is implicitly lifted into a Promise; the
unwanted nesting apparently comes from different libs not recognizing
each other's promises, mistaking foreign promises for values, and lifting
them into their own promises. Recursive flattening (assimilation) is
then intended as a countermeasure to recursive lifting of foreign
promises.

It will come as no surprise that I think implicit lifting is just as mistaken
as recursive flattening;-) Both should be moved to explicit convenience
methods, leaving the generic 'then'/'of' interface with the properties
needed for generic thenable library code.

Claus


I think we see a correlation -- not a 1.0 correlation, but something. Those
who've actually used promise libraries with this flattening property find
it pleasant. Those who come from either a statically typed or monadic
perspective, or have had no experience with flattening promises, generally
think they shouldn't flatten. 


I think the dispute could be settled easily: 


- flattening 'then' is a convenience
- non-flattening 'then' is necessary for promises being thenables
   (in the monad-inspired JS patterns sense)

Why not have both? Non-flattening 'then' for generic thenable
coding, and a convenience method 'then_' for 'then'+flattening.

That way, coders can document whether they expect convenience
or standard thenable behavior. And we can have convenience for
Promise coding without ruining Promises for more general thenable 
coding patterns.


Claus



Re: Module naming and declarations

2013-04-25 Thread Claus Reinke

Module names play a role in three processes, in general:

1. As a way to identify local components.
2. As a way to find the physical resource that is the source code (or
object code) of the module.
3. As a way for two separately developed components to coordinate
about which module they mean.

In the current design, the internal names (eg, jquery) serve role 1,
and URLs (as generated by the loader hooks) serve role 2. The
coordination role is played by internal names in a shared registry.


You argue for a two-level system of non-lexical names to support
configuration - okay. But why does that imply you have to drop
the lexical naming altogether, instead of using a three-level system
(from external to internal to lexical names)?

Also, in a two-level system of external and lexical names, could one
not model the coordination level by a registry/configuration module?

   // using loose syntax
   module registry {
     module jquery = external "remote or local path"
     export jquery
   }

   module client {
     import { jquery: $ } from registry
   }

Claus



Re: Futures (was: Request for JSON-LD API review)

2013-04-24 Thread Claus Reinke

Now, instead of a ducktest for a `then` method the promise check would
instead be specified as `instanceof Promise`. 


Picking a message at random for an interjection, there is something that 
seems to be missing in this discussion: 

*Promises are only one kind of thenable (the asynchronous thenables)*.

Ducktesting for 'then' will match things that aren't thenables (in the JS 
monadic sense), and identifying thenables will match things that aren't 
Promises. 

The type separation between thenables and Promises makes sense 
because there are library routines generically based on thenables that 
will work with Promises and with other thenables. At least, that is the
experience in other languages.

Also, much of the discussion seems not to be specific to Promises, asking
for a standard answer to the question of reliable dynamic typing in JS.

Claus



Re: Modules: Curly Free

2013-04-21 Thread Claus Reinke
Anonymous export is simply about allowing library authors to indicate 
a module's main entry point. Semantically, we're talking about the 
difference between a string and a symbol; syntactically, we're talking 
about one production. It's all cleanly layered on top of the rest of the 
system. Let's keep some perspective.


If you put it like this (entry point), it recalls another issue, namely
that of scripts-vs-modules (executable code vs declaration container).

Would it be possible to combine the two issues, with a common
solution? 

Something like: modules are importable and callable; importing a
module gives access to its (named) declarations but doesn't run
any (non-declaration) code; calling a module gives access to a
single anonymous export (the return value) while also running
any non-declaration code in the module.


Claus




Re: Modules: Curly Free

2013-04-21 Thread Claus Reinke
But then you went too far and made that entry-point, which with 
anonymous export is often (but not always) a function, with the body 
of the module, its top-level code.


I suggested that modules be callable, executing the module body and
returning what would be the anonymous export. I did not suggest that 
the exports themselves need to be callable.


A module is not a function. It is not generative when nested. The
current proposal doesn't support nesting, but earlier versions did, and 
that was critical to the second-class nature of modules when declared.


Yes, one would want caching of execution results, to keep modules
singletons (and to keep anonymous export consistent across imports).

The real fly in the ointment is that JS does not separate side-effects
from pure code, so it isn't possible to separate declarations and
code execution entirely (the module code needs to be executed,
once, somewhere between first import and first use). This currently
happens implicitly, and is part of the problem that brought down
the earlier lexical modules.

Still, making modules (singleton-)callable would provide a simple 
syntax for accessing anonymous exports, without interfering with
the existing import/export features. Existing AMD and node
(single-export) modules could be translated to this, where a
full translation to statically checked, named export/import is
not wanted.
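
In today's terms, the behaviour sketched here is roughly what a
memoized module-as-function gives in node/AMD style (a rough analogy,
not proposed syntax):

   let cached;
   function myModule() {   // "calling the module"
     if (cached) return cached;
     console.log("module body runs once");   // non-declaration code
     return cached = { /* the anonymous export */ };
   }

   myModule() === myModule();   // true: singleton, body ran once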

Claus

Anonymous export is simply about allowing library authors to 
indicate a module's main entry point. Semantically, we're talking 
about the difference between a string and a symbol; syntactically, 
we're talking about one production. It's all cleanly layered on top 
of the rest of the system. Let's keep some perspective.


If you put it like this (entry point), it recalls another issue, namely
that of scripts-vs-modules (executable code vs declaration container).

Would it be possible to combine the two issues, with a common
solution?
Something like: modules are importable and callable, importing a 
module gives access to its (named) declarations but doesn't run any 
(non-declaration) code, calling a module gives access to a single 
anonymous export (the return value) while also running any 
non-declaration code in the module.


Claus






Re: Coordination

2013-04-13 Thread Claus Reinke
But as I wrote privately, I don't think we can firehose all the new APIs 
through public-script-coord and get good API review results. We could go 
API by API in a more focused forum or meeting-like setting, with 
public-script-coord hosting the notices for upcoming reviews, progress 
updates, and review conclusions. Thoughts?


In principle, github would offer the means for review input from
the public (those who are meant to be using those APIs later on).

It might be too firehosey (everybody on the web submitting tiny
yet mutually exclusive suggestions), but perhaps a protocol of
"serious reviews only, quality before quantity, please" could be
established? Or a filter by proven-to-be-useful contributions?


Claus

PS. I would permit public-script-coord to archive my messages,
   but I refuse to support an archive that doesn't even try to
   obfuscate email addresses. Hence they won't appear there:-(



Re: Coordination (was: ES6 Modules)

2013-04-12 Thread Claus Reinke

The DOM side should all be subscribed to es-discuss and read it on a
regular basis. Additionally, our f2f meeting notes are a great way for them
to keep up to date, as well as providing a good jump off for questions and
concerns.


Given the number of people working on platform APIs that seems
ever less likely to become a reality. We need a different
strategy.


Parts of the DOM side have weekly summaries, eg

   http://www.w3.org/QA/2013/03/openweb-weekly-2013-03-24.html

Having such weeklies for all relevant spec groups, including TC39,
with specific feeds (not just part of general news feeds) and then a 
feed aggregator on top (only for the various weeklies), might help 
giving interested parties an idea of when to dive in where. It might
also establish a lower barrier on what everyone might be expected 
to follow?


https://twitter.com/esdiscuss almost fits into that gap, but twitter
isn't quite the right format, and suitable editing & labeling are
needed to guide readers to the right firehose at the right time.

Claus



Re: Module Execution Order

2013-04-10 Thread Claus Reinke

1)  Just to be explicit, this is a different execution order than
node/CommonJS modules.  Nothing wrong with that, just pointing it out.


Yes.


Execute-in-order-of-import is used in practice to emulate parameterized
modules (for instance, set a global config, *then* import RequireJS).

As long as executing a module cannot change its set of exports, only
the value of exports, it should be possible to separate the binding
phase (topological sorted, establish import bindings) from the 
execution phase.


If the execution phase is to use the same order as the binding phase,
a suggested alternative to the parameterized modules pattern should 
be documented. 


For simple parameterized modules, that would involve an exported
setter, called after the imported module is executed.
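
A minimal sketch of that pattern in ES6 module syntax (file names
are mine):

   // config.js
   let config = { verbose: false };
   export function configure(opts) { config = opts }   // exported setter
   export function isVerbose() { return config.verbose }

   // main.js
   import { configure } from "config";
   configure({ verbose: true });   // explicit call, no import-order tricks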

For RequireJS-like uses, that would involve separate module loader
phases (first phase, load RequireJS-like import; second phase, configure
RequireJS and use for loading rest of dependencies).

Most likely, the classic use of modules as executable scripts should be 
discouraged, in favour of modules as declarative providers of bindings.


Claus


2)  The execution order is then just a topological sort of the dependency
graph, breaking cycles where appropriate.  Is that correct?


Yes.  Cycles are broken by reference to the order in which the
relevant imports appear in the source of the importing module.




Re: On Scope And Prototype Security

2013-03-19 Thread Claus Reinke

var public = (function(){
 var private = {
 };
 return Object.freeze(
   Object.create(private)
 );
}());

// why I cannot avoid this? I'd **LOVE** to!
Object.getPrototypeOf(public).test = 123;
alert(public.test); // 123


At first, I thought you were right - __proto__ is an object property,
so there should be a way to turn it into a private property (assuming
ES6 will have such).

Then I thought, it would have to be protected, not private - if I extend
the prototype chain further down, I should still be able to go up 
through this __proto__ here, right?


My current thinking is still different: __proto__ is *not* a normal
object property, it is an implementation shorthand for extending
an object. If we were to copy all the methods from the prototype
chain into a single class object, that would serve the same 
purpose, the __proto__ links just save space.


In other words, you want to protect/make private the properties 
of the objects that __proto__ points to, and those objects themselves, 
not the __proto__ link.


For that purpose, a deep chain freeze, following the prototype chain,
and freezing all objects in it, would be less confusing/error prone 
than the shallow Object.freeze we have. Apart from the fact that
sharing of objects in the chain might freeze someone else's prototypes.
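
Such a deep chain freeze is easy to sketch (freezeChain is my name):

   // freeze an object and every object on its prototype chain
   function freezeChain(obj) {
     for (let o = obj; o !== null; o = Object.getPrototypeOf(o)) {
       Object.freeze(o);
     }
     return obj;
   }

Applied to the example above, this would also freeze the private
prototype, closing the getPrototypeOf loophole - but it would freeze
Object.prototype too, which is exactly the sharing problem mentioned.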

Claus



Re: Self-recursion and arrow functions

2013-03-17 Thread Claus Reinke

I understand, but it's still a limitation of arrow functions that they rely
on arguments.callee to self-reference. Relying on the defined name
they're assigned to suffers from the can be redefined problem. NFE's
don't suffer this problem and can completely avoid `arguments` in ES6
for all use cases Arrow functions, currently, cannot.


Neither arguments.callee (not available in strict) nor let (clumsy to
use in expressions) are needed for self-reference

   var rec = (f) => f((...args) => rec(f)(...args));

   var f = (self) => (n) => n > 1 ? n * self(n-1) : n;

   [1,2,3,4,5,6].map((n) => rec(f)(n));

You can try it out in traceur (example link at the bottom)

   http://traceur-compiler.googlecode.com/git/demo/repl.html

Claus

http://traceur-compiler.googlecode.com/git/demo/repl.html#var%20rec%20%3D%20(f)%20%3D%3E%20f((...args)%3D%3Erec(f)(...args))%3B%0A%0Avar%20f%20%3D%20(self)%3D%3E(n)%3D%3E%20n%3E1%20%3F%20n*self(n-1)%20%3A%20n%3B%0A%0A%5B1%2C2%2C3%2C4%2C5%2C6%5D.map((n)%3D%3Erec(f)(n))%3B





Re: Questions/issues regarding generators

2013-03-07 Thread Claus Reinke

But, in order to (hopefully) let Brandon calm down a bit, I am NOT making
yet another proposal for a two-method protocol. Instead I propose
simply _delivering_ a sentinel object as end-of-iteration marker
instead of _throwing_ one. The zip function above would then be written as:

  function zip(iterable1, iterable2) {
    let it1 = iterable1.iterator()
    let it2 = iterable2.iterator()
    let result = []
    while (true) {
      let x1 = it1.next(), x2 = it2.next()
      if (isStopIteration(x1) || isStopIteration(x2)) return result
      result.push([x1, x2])
    }
  }


'it.next()' needs to serve two purposes: yielding an arbitrary object
or signaling end of iteration. Using exceptions gives a second channel, 
separate from objects, but now there is potential for confusion with
other exceptions (and using exceptions comes with unwanted cost).

So, really, both the arbitrary object channel and the exception channel
are already taken and not freely available for iteration end.

How about lifting the result, to separate yielded objects and end
iteration signalling?

   { yields: obj }   // iteration yields obj
   {}                // iteration ends

Then we could use refutable patterns to generate end exceptions

  while (true) {
    let { yields: x1 } = it1.next(), { yields: x2 } = it2.next() // throw if no match
    ...


or test for end without exceptions

  while (true) {
    let x1 = it1.next(), x2 = it2.next()
    // test for presence of the 'yields' property, so that falsy
    // yielded values (0, false, ...) are not mistaken for the end
    if (!('yields' in x1) || !('yields' in x2)) return result
    result.push([x1.yields, x2.yields])
  }
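
For concreteness, a minimal sketch of an iterator speaking this
lifted protocol (liftedIterator is my name):

   // wraps an array as an iterator that yields { yields: value }
   // and signals the end with a plain {}
   function liftedIterator(arr) {
     let i = 0
     return { next: () => i < arr.length ? { yields: arr[i++] } : {} }
   }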

Claus



Re: Questions/issues regarding generators

2013-03-07 Thread Claus Reinke

How about lifting the result, to separate yielded objects and end
iteration signalling?

   { yields: obj }// iteration yields obj
   {} // iteration ends


Yes, that would be the proper encoding of an Option/Maybe type, which
in the abstract is the ideal (the end object might carry a return
value, though).


So, more of an Either type, which isn't yet easy to match in ES6.


However, I did not propose that because some around here would
probably be unhappy about the extra allocation that is required for
every iteration element under this approach.


One of the reasons for avoiding exceptions is to enable optimizations,
though, and looking through the call, one might be able to avoid the
intermediate allocation for the frequently used path (yield), falling
back to extra allocation only for the iteration end. Or allocate the
wrapper once, then reuse/fill it on each iteration and overwrite it
on iteration end.
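
A sketch of that reuse idea (assuming callers do not retain the
result object across calls to next):

   function reusingIterator(arr) {
     let i = 0, box = { yields: undefined }
     return { next() {
       if (i < arr.length) { box.yields = arr[i++]; return box }
       delete box.yields   // signal iteration end by removing the field
       return box
     } }
   }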

Not sure how difficult that (inline and match up construct/deconstruct
to avoid intermediate allocation) would be for JS engines... but using
exceptions would close most doors for optimization, so it might be
more costly overall?

Claus



Re: Transitioning to strict mode

2013-02-21 Thread Claus Reinke

You still need more than statement or branch coverage. Otherwise,
we might get 100% coverage while missing edge cases

   function raise() {
     "use strict";
     if( Math.random()>0.5 || (Math.random()>0.5) && (variable = 0))
       console.log(true);
     else
       console.log(false);
   }

   raise();
   raise();
   raise(); // adjust probabilities and call numbers until we get
            // reliable 100% branch coverage with no errors; then
            // wait for the odd assignment to happen anyway, in
            // production, not reproducibly

There is no reliable 100% coverage in this case. The coverage I guess is... 
probabilistic?


Yes, and yet current test coverage tools, even if they go beyond
statement coverage and test for branch coverage, will happily
give you a 100% coverage report (with a little tuning, even
repeatedly). That is why the page you linked to talks about levels
of test coverage beyond branch coverage, and why it is important
to know about such limitations.

Given the complexities of test suites and experience with irreproducible
bugs, testers might even be tempted to overlook the occasional random
test suite failure if it doesn't happen again on re-running the suite?

I understand errors can be caught by a try-catch placed for other reasons, but whoever cares about
transitioning to strict mode will be careful about this kind of issue.


I was thinking of services that need to stay up, no matter what
(restart first, check what happened later). At least, one can hope
that they would notice a Reference error in their logs.


Other fixes for all the error cases are welcome as contributions.


Providing fixes is good. It would be even better if there was a tool
that pointed out any and all potential problem spots related to strict
mode introduction, flagging non-strict-safe functions for review
(Ariya's validator does only check syntax, not scoping or this, I think,
but might serve as a starting point).

But now we're into debugging strict-mode-related issues, instead
of using strict mode to reduce the likelihood of issues.

I used to be firmly in the (naïve?) strict-mode-is-better camp and
couldn't understand why switching everything to strict mode was
considered a bad idea. This thread has documented the reasons
why some coders are wary about the idea. If your page can help
to steer a way around the obstacles, even better.

Claus




Re: Transitioning to strict mode

2013-02-21 Thread Claus Reinke

For the ES5 semantics of the interaction of the global scope and the global
object, how could you make this a static error? What would you statically
test? Would you statically reject the following program, where
someExpression is itself just some valid expression computing a value
(that might be the string "foo")? Note that 'this' below is the global
object, since it occurs at top level in a program.

"use strict";
this[someExpression] = 8;
console.log(foo);


My first reaction would be to reject the 3rd line statically. We can't
hope to check dynamic scoping statically, but we could enforce
safety of the language parts that look like they invoke static scoping.

Either declaring 'foo' or logging this["foo"] would be available as
workarounds. So I don't see this example as an argument for a
runtime error on the 3rd line.
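
In code, assuming someExpression evaluates to the string "foo":

   "use strict";
   this[someExpression] = 8;   // dynamic - not statically checkable
   console.log(this["foo"]);   // workaround 1: explicit dynamic lookup
   var foo;                    // workaround 2: declare the binding, so
   console.log(foo);           //   the static reference is legal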

Claus



Re: Array subclassing, .map and iterables (Re: Jan 30 TC39 Meeting Notes)

2013-02-20 Thread Claus Reinke

I'd be interested to see your alternative design suggestion
(or, in fact, any more general approach to the issue at hand
that would fit into ES). 


From ES4, http://wiki.ecmascript.org/doku.php?id=proposals:static_generics.


Thanks for the pointer. However, I'd like to have even more generic
methods - e.g., map is useful for structures that do not have .length
or indexed elements. 

To give you an idea of the possibilities I've got in mind, I've sketched
an overly simplistic system where classes can implement interfaces,
generic methods can be written over these implementations, and
generic code can be written using only the generic methods. The
code is available as

   https://gist.github.com/clausreinke/4997274

(you can run the html in a browser or with nodejs)

It does implement map for Array, String and Int32Array (also for
makePromise, if you're running this in nodejs, with q installed),
without extending the classes or their objects (no mixins, and
the generic map is class-independent and extensible).
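
The gist's actual code differs, but as a rough sketch of the idea,
assuming an ES6 Map keyed by constructor:

   const Mappable = new Map()   // constructor -> implementation

   Mappable.set(Array,  { map(xs, f) { return xs.map(f) } })
   Mappable.set(String, { map(s, f) {
     let r = ""
     for (let i = 0; i < s.length; i++) r += f(s[i])
     return r
   } })

   function map(x, f) {         // generic, class-independent, extensible
     return Mappable.get(x.constructor).map(x, f)
   }

   map([1,2,3], n => n*2)             // [2,4,6]
   map("abc", c => c.toUpperCase())   // "ABC"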

To understand why this is simplistic, think about the copy&modify
involved with doing this for all typed array variations - that is
where class-level constructors and de-structurable classes would
come in handy, so that one could separately reuse the array-
and element-type-specific interface implementations (*).

Claus

(*) for those who do not switch off when they hear the word
   'types' - even without a static type system, many of the ideas
   could be carried over from how Haskell implements type classes;
   it'll just be a little less convenient, and a little more explicit code
   to write; and it'll require some language design work to translate
   ideas away from the static typing context, into EcmaScript concepts;

   if you want to read Haskell papers for ES language design ideas, 
   here are the beginnings of a little dictionary:


   for 'type', read 'class'
   for 'type class', read 'interface'
   for 'type class instance', read 'interface implementation'

   With this translation and the paper I referred to earlier in this
   thread, one notices that interfaces in Haskell tend to relate 
   multiple classes, and that interface-implementing classes can
   often be de-structured (e.g., Array<Int32> instead of Int32Array), 
   to facilitate access to the interfaces implemented by the class 
   components - that avoids a lot of duplicated code;




Re: Transitioning to strict mode

2013-02-18 Thread Claus Reinke

Talking about 100% coverage and catching all errors is never a
good combination - even if you should have found an example of
where this works, it will be an exception.


There are a couple of things I'm sure of. For instance, direct eval 
aside (eval needs some specific work anyway because its semantics is 
changed a lot), if you have 100% coverage, every instance of setting to 
an undeclared variable will be caught. There is no exception.


Out of curiosity, what does your favorite test coverage tool report
for the source below? And what does it report when you comment
out the directive?

Claus


function test(force) {
  "use strict";

  function isStrict() { return !this }
  console.log(isStrict());

  if (!force && (!isStrict() && (doocument="unndefined"))) {

    console.log("we don't have lift-off");

  } else {

    console.log("ready to go!");
    // do stuff

  }

  !isStrict() && console.log(doocument);

}

test(false);
test(true);



Re: A case for removing the seal/freeze/isSealed/isFrozen traps

2013-02-18 Thread Claus Reinke

as a high-integrity function:

var freeze = Object.freeze,
    push = Function.prototype.call.bind(Array.prototype.push);
function makeTable() {
  var array = [];
  return freeze({
    add: function(v) { push(array, v); },
    store: function(i, v) { array[i >>> 0] = v; },
    get: function(i) { return array[i >>> 0]; }
  });
}


Careful there, you're not done!-) With nodejs, adding the following

   var table = makeTable();
   table.add(1);
   table.add(2);
   table.add(3);

   var secret;
   Object.defineProperty(Array.prototype,42,{get:function(){ secret = this;}});

   table.get(42);
   console.log(secret);
   secret[5] = "me, too!";

   console.log( table.get(5) );

to your code prints

   $ node integrity.js
   [ 1, 2, 3 ]
   me, too!

Couldn't resist,
Claus



Re: Transitioning to strict mode

2013-02-18 Thread Claus Reinke
Out of curiosity, what does your favorite test coverage tool report
for the source below? And what does it report when you comment
out the directive?
:-p Ok, there are exceptions if your code depends on semantic changes 
described in the third section of the article (dynamic this/eval/arguments).

That's your case with how you define isStrict (dynamic this).
So: if your code does *not* depend on semantic changes, all instances of 
setting to an undeclared variable will be caught.


Just wanted to shake your faith in testing :-) The example code might
look unlikely, but real code is more complex and might evolve nasty
behavior without such artificial tuning.

You still need more than statement or branch coverage. Otherwise,
we might get 100% coverage while missing edge cases

   function raise() {
     "use strict";
     if( Math.random()>0.5 || (Math.random()>0.5) && (variable = 0))
       console.log(true);
     else
       console.log(false);
   }

   raise();
   raise();
   raise(); // adjust probabilities and call numbers until we get
            // reliable 100% branch coverage with no errors; then
            // wait for the odd assignment to happen anyway, in
            // production, not reproducibly

Throwing or not throwing Reference Errors is also a semantics change,
and since errors can be caught, we can react to their presence/absence,
giving another avenue for accidental semantics changes.

Undeclared variables are likely to be unintended, and likely to lead to
bugs, but they might also still let the code run successfully to completion 
where strict mode errors do or don't, depending on circumstances.


Testing increases confidence (sometimes too much so) but cannot
prove correctness, only the absence of selected errors.

What I'd like to understand is why likely static scoping problems
should lead to a runtime error, forcing the dependence on testing. 

If they'd lead to compile time errors (for strict code), there'd be no 
chance of missing them on the developer engine, independent of 
incomplete test suite or ancient customer engines. Wouldn't that 
remove one of the concerns against using strict mode? What am I 
missing?


Claus



Re: Array subclassing, .map and iterables (Re: Jan 30 TC39 Meeting Notes)

2013-02-17 Thread Claus Reinke

More immediately relevant for this thread, I would like to see

   Array <: Container

with map, from, filter, and perhaps some others, moving from
Array to Container. Then Map and Set would be Containers, 
supporting operations currently limited to Array 


This is not gonna happen for several reasons, one being backward 
incompatibility.


It's also unnecessary. The generic methods could (and should, some 
think -- I prototyped this years ago in SpiderMonkey) have their |this| 
parameters uncurried and be provided as functions. That would be a 
better route than OOP tyranny of hierarchy.


That particular suggestion was somewhat tongue-in-cheek, to get
the thread back to technical issues;-) - I don't want the operations 
limited to Array (and subclasses) but I also don't think that single-
inheritance prototype hierarchy is offering a good solution here. 

To begin with, the usefulness of map isn't limited to container-like 
classes, and map's implementation cannot always be inherited, so 
we're really talking about interfaces (or abstract classes).


And if existing code was written to interfaces, rather than taking
a specific prototype chain into account, backward incompatibility
would be less of a problem.

The whole thread is about using a specific and acute example
decision (what to do with .map/.from) to trigger a discussion of
the larger language design issues and options that would help
addressing this kind of problem (which combines programming
to interfaces, generic methods outside the class hierarchy, and
selection of such methods based on the class of not-yet-existing
objects). Preferably before individual cases are fixed that might
not fit a later general solution pattern.

I mentioned type constructor classes not because of the type
system they are embedded in (or, rather, built over), but because 
they offer a logic and implementation pattern for dealing with 
interfaces and with generic code written to interfaces. 


The un-optimized implementation of generic code in that system
is to pass dictionaries of methods around, combining the method
implementations in a way analogous to the way the types are
constructed. It is a systematic generalization of the target-class-
passing illustrated in Rick's code fragment earlier in this thread.

I'd be interested to see your alternative design suggestion
(or, in fact, any more general approach to the issue at hand
that would fit into ES).

Claus



Re: Array subclassing, .map and iterables (Re: Jan 30 TC39 Meeting Notes)

2013-02-15 Thread Claus Reinke

I'd say that either we properly clean up the Array hierarchy, or we
leave it alone. A half-baked solution that only applies to typed
arrays, and divorces them from the Array hierarchy, seems less
attractive than just doing the naive thing, i.e., TypedArray <: Array.


Agree with that, and I'll go further: we should leave alone what's 
already shipped and in use for a long time.


TypedArray <: Array sounds good to me.


The question is how to clean up/refine the class hierarchy with
the existing language means. Consider a hypothetical

   FixedLengthArray <: Array

and a FixedLengthTypedArray that inherits from both branches.

More immediately relevant for this thread, I would like to see

   Array <: Container

with map, from, filter, and perhaps some others, moving from
Array to Container. Then Map and Set would be Containers, 
supporting operations currently limited to Array (WeakMap 
is probably too special to be a normal Container).


Claus



Re: Array subclassing, .map and iterables (Re: Jan 30 TC39 Meeting Notes)

2013-02-12 Thread Claus Reinke
[to limit the length of my reply, I had to avoid responding to every 
detail, trying to answer the gist of your message instead; please let 
me know if I missed anything important]



Of course, you might argue that I could just call it like:

  NodeList.from( [ "div", "span", "p" ].map(nodeName =>
document.createElement(nodeName)) );


Indeed, this would be my preferred choice. It would be more modular
than packaging the combination of container change and element
conversion into a single operation.

However, I understand the conflict between type-changing element 
maps and element-type-constrained containers. 

Let me change the example to bring out this point: if we convert from
an array of Int32 to an array of Double, we cannot map on the source
array nor can we map on the target array. So we do have to map the
elements in transit, after extracting them from the Int32 source and
before placing them in the Double target, preferably without creating
an intermediate Array.

Since the container change (.from()) is specced via iterators, I suggest
to ensure support for .map() on iterators and map the elements in the
iteration, after extraction from source, before integration into target.
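
A sketch of such an iterator-level map, phrased in terms of the
{ value, done } result shape (the protocol was still in flux at the
time):

   function mapIterator(it, f) {
     return { next() {
       const r = it.next()
       return r.done ? r : { value: f(r.value), done: false }
     } }
   }

The target's .from() could then consume mapIterator(source.iterator(), f),
converting elements in transit without an intermediate Array.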

Integrating the element map into the container conversion (.from())
instead, which is a static method to enforce target-type specification,
solves the issue you were trying to address, for Array subclasses, but
it leaves us with a host of residual issues:

- there are now two operations for one generic task,
   Array.prototype.map( function )      // type-preserving
   TargetClass.from( source, function ) // type-changing

- the two -clearly related- operations do not have a common interface,
   in fact, one is an object method, the other a static/class method

- the latter operation is really a family of operations, and the static
   type prefixes of the family members are difficult to abstract over 
   (do I try to get the target type from the target object context or 
   from the function result, or do I force the user to pass it in at 
   every call site)


- even with this additional complexity, we still do not have support
   for mapping over the elements of other, non-Array containers

I suspect that the problem of establishing target container types
is separate from the element mapping, so I would like to keep
.from() and .map() separate. But even in the merged design
programming will be awkward.

...But the arraylike or iterable might not have a .map() method 
of its own, which will cause issues if I'm in a JS-target transpilation 
scenario...


And I would like not only to root out any arraylike or iterable
that do not support .map(), but would also like to extend the reach
of .map() to other cases where it makes sense (I've listed examples
in previous messages).


(function( root ) {
 root.converter = function( ctor, iterable, map ) {
   return this[ ctor ].from( iterable, map );
 }.bind(root);
}( this ));


What you're saying here is that (1) .from() should support .map()-like
functionality for all iterables (even if they do not support .map()), that 
(2) we can't use .map() because it may not be supported for all iterables 
and the 'map' parameter might be type-changing, and that (3) you 
don't know how to get the target type generically, so it'll have to be 
passed in at each call site. 


None of this is promising when I think of writing generic code that
employs mapping over different container types, even if we assume
that the mapping .from() replaces .map() as the general interface.

Are you going to pass around 'ctor's as dynamic type hints? Since
we need the target classes, we can't even extract the class from
the source arrays. This, and the inability to de-structure things
like Int32Array into its components, are among the outstanding 
language design issues generated in this area.



My point was that map is far more widely useful, not limited to
Array (Array.prototype.map), and not limited to Array construction
from Iterables (Array.prototype.from with second parameter). 
Consider map on event emitters, on promises, on exceptions, on 
generators, ..


I don't have an alternative solution that would cover all use cases
in ES uniformly, because the existing solutions in other languages
do not translate directly. However, I wanted to ring a warning bell
that adding a different partial solution for every new use case is
not going to scale well (especially with things being so difficult
to change once they are in ES), and misses potential for writing
generic library code.


Can you show an example of this?


Example of what (can't resolve 'this';-)? I listed several examples of
classes that I'd like to see map() support on. You gave an example
of how you couldn't write generic code using map() because not
all relevant classes support that method (using .from() on iterables
doesn't work, either). If you mean difficulties of evolving ES designs
after release, think no further than existing code

Re: Array subclassing, .map and iterables (Re: Jan 30 TC39 Meeting Notes)

2013-02-10 Thread Claus Reinke
Thanks for the explanations and additional details. Let me first try 
to rephrase, to see whether I've understood your reasoning:


The problem comes from the partial integration of types in ES, 
specifically having typed arrays but no easy way to express and 
control the types of the functions mapped over them.


And your solution is to fix Array.map to being type-preserving, and 
to use an auxiliary map in Container.from instead of Array.map 
when type changing mappings have to be expressed. 

Using  for type parameters, = for function types, and suppressing 
a few details (such as prototype, this, index parameter), we can write 
the types of the two groups of operations as


   Array<Elem>.map :
       (Elem => Elem) =>
       Array<Elem>

   Container<Elem2>.from :
       Iterable<Elem1> =>
       (Elem1 => Elem2) =>
       Container<Elem2>


where the ES5 Array is seen as Array<Any> (so arbitrary mappings
are still supported at that level), and Array<Int32>, etc are written
as type-level constants Int32Array, for lack of type-level constructor
syntax (the parameterized interface Iterable is also inexpressible).


Since ES cannot guarantee that the mappings have the expected
types, an implicit conversion of the mapped elements to the
expected element type will be enforced (probably with a type
check to avoid unexpected conversions?).

So 

   int32Array.map( f ) 


will be read as roughly

   int32Array.map( (elem) => Number( f(elem) ) )

and

   Int32Array.from( iterable, f )

as

   Int32Array.from( iterable, (elem) => Number( f(elem) ) )

Do I have this right, so far?

var intArray = new Int32Array([42,85,127649,32768]); 
//create a typed array from a regular array

var strArray = intArray.map(v=>v.toString());

If intArray.map() produces a new intArray then the above map
function is invalid.  If intArray.map() produces an Array instance
then your intArray.map() instanceof intArray.constructor desire
won't hold.  We can't have it both ways without providing some
additional mechanism that probably involves additional
parameters to some methods or new methods.


It is this additional mechanism which I'm after. In typed languages,
it is long-established practice to put the additional parameters at
the type level and to hang the interface on the type-level constructor,
and I wonder how much of that could be translated for use in ES.

For instance, being able to specify an overloaded map function
was the motivating example for introducing type constructor
classes in Haskell

   A system of constructor classes: overloading and implicit 
   higher-order polymorphism
   Mark P. Jones, 
   In FPCA '93: Conference on Functional Programming Languages 
   and Computer Architecture, Copenhagen, Denmark,  June 1993.

   http://web.cecs.pdx.edu/~mpj/pubs/fpca93.html

1) Array.prototype.map produces the same kind of array that it
was applied to, so:

for the above example
  m instanceof V will be true.
  intArray.map(v=>v.toString()) produces an Int32Array.
The strings produced by the map function get converted back to numbers.


Given the history of implicit conversions in ES, have you considered 
just doing runtime type checks, without those new implicit conversions?


2) If you want to map the elements of an array to different kind of array 
use ArrayClass.from with  a map function as the second parameter:


var strArray = Array.from(intArray, v=>v.toString());

This seemed like a less invasive change then adding additional target kind
parameters to Array.prototype.map.  Also it seems like a very clear way for
programmers to state their intent.
 
ES isn't Java or C#.  We don't have formalized interfaces (although it
is useful to think and talk about informal interfaces) and since we are
dynamically typed we don't need to get sucked into the tar pit of generics.


If a programming concept is as useful as interfaces are, it usually pays
to think about language support for it. And I was certainly not thinking
of Java or C#, more of TypeScript, where the team seems to be working
on JavaScript-suited generics for the next version, to be able to type
current JavaScript library code.


Btw, parametric polymorphism in ML and its refinements and
extensions in Haskell were elegant and concise tools before they got
watered down in a multi-year process to fit into Java. If you have bad
experiences with generics, they probably come from Java's adaptation.


How would you produce an Array of strings from an Int32Array?


Somewhat like

   Array.from( int32Array ).map( (elem) => elem.toString() )

Implementations would be free to replace the syntactic pattern
with an optimized single pass (in more conventional optimizing
language implementations, such fusion of implicit or explicit loops 
is standard, but even ES JIT engines -with their limited time for 
optimization- should be able to spot the syntactic pattern).
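
A sketch of that fused single pass, with a hypothetical helper name:

   // one loop, no intermediate Array between .from and .map
   function fromMapped(src, f) {
     var out = new Array(src.length)
     for (var i = 0; i < src.length; i++) out[i] = f(src[i])
     return out
   }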



- instead of limiting to Array, .from-map is now limited to iterables
  (it would work for Set, which is really 

Array subclassing, .map and iterables (Re: Jan 30 TC39 Meeting Notes)

2013-02-09 Thread Claus Reinke

I am trying to understand the discussion and resolution of
'The Array Subclassing Kind Issue'. The issue (though not its
solution) seemed simple enough

   class V extends Array { ... }
   m = (new V()).map(val => val);
   console.log( m instanceof V ); // false :(

and I was expecting solutions somewhere along this path:

1. .map should work for Array subclasses, preserving class

2. .map is independent of Array and its subclasses, there are
   lots of types for which it makes sense (Sets, EventEmitters, ..)

3. there should be an interface Mapable, implemented by
   Array and its subclasses, but also by other relevant classes,
   such that

   class M implements Mapable { ... }
   m = (new M()).map(val => val);
   console.log( m instanceof M ); // true

   (in typed variants of JS, this would call for generics, to 
   separate structure class -supporting map- from element 
   class -being mapped)


Instead, the accepted approach -if I understood it correctly-
focuses on conversion and iterables:

   Array.from( iterable ) => Array.from( iterable, mapFn )

such that

   SubArray.from( iterable, val => val ) instanceof SubArray

This seems very odd to me, because

- it introduces a second form of .map, in .from

- instead of limiting to Array, .from-map is now limited to iterables
   (it would work for Set, which is really OrderedSet, but it wouldn't
   work for WeakMap)


- it doesn't address the general problem: how to inherit structural
   functionality (such as mapping over all elements or a container/
   iterable) while preserving class

With a general solution to the issue, I would expect to write

   SubArray.from( iterable ).map( val => val ) instanceof SubArray

while also getting

   new Mapable().map( val => val ) instanceof Mapable

Could someone please elaborate why the committee went with 
an additional map built into structure conversion instead?


Claus

PS. What about array comprehensions and generator expressions?



Modules spec procedure suggestion (Re: Jan 31 TC39 Meeting Notes)

2013-02-07 Thread Claus Reinke

There has been a great deal of pressure from users wanting details
about whether the modules spec will cover their use cases, from
module library authors wanting to determine whether their most
important features will be covered (so that they can retire their
systems), and -more recently- from transpiler authors and users
needing to know what spec to target (existing implementations
being invalidated by late spec changes doesn't help, either).

It looks like this has finally blown up at the last meeting (excerpts
included below). The main concern seems not to be with specific
features but with lack of confidence and missing details.

One way to restore confidence would be to go the executable spec
route: implement the modules spec in JS, as a transpiler.

That would force the spec to be described in implementable detail
(no "to be determined later"), would allow use cases to be written as
tests (no "I can't tell whether the spec covers this"), and would
make concrete discussions possible (no "this doesn't feel right"
vs "don't worry, it'll all work out in the end") - all the usual
advantages of executable specifications.


There are some existing ES.next modules shims/transpilers that
could be used as a starting point. 


Just a suggestion,
Claus


LH: One of the things the module system needs, to move forward, there
should be 5-10 real world scenarios. Each scenario needs to be met and
_recorded_, each time a change is made, it needs to be applied to each
scenario to ensure that we're covering the needs.
.. 
BE: This is not design by committee, we _really_ need user testing.

..
LH: We need to drive this through with concrete examples and use those
steer the development here. I don't think we're going to get anywhere here
today.
..
LH: As much as it would be great to deliver this, it's simply not ready and
not nearly developed or concrete enough to continue forward with. We'd need
a year to user test to get the best results from the feature. Need to
understand and meet the needs.
.. 
STH: There is a great deal of completed work for this and with valuable
feedback, we can still deliver. It's not fair to say "We're providing you
with feedback, therefore you're not done."

BE: Yes, we need algorithms in the spec and user testing in time.

EA: Modules are the most important feature that we have to ship, but I
agree with Doug that it might not be in time.

BE: One option is to move it to ES7
..
STH: I believe strongly that the state of the module system is being
mis-perceived.



...



EA: From implementation, modules is the highest priority, but there is
nothing to implement.

AWB: And much of the spec will rely on modules.

MM: Comparisons of the lexical scoped modules and the new string name
modules

BE: ...Recalls the argument from 3 years ago. Notes that IIFE is NOT what
`module {}` is and cannot be compared.

DC: I've spent years promoting this, but I can't see how it will make it.

AWB: We should aim for a maximally minimal modules proposal, that we can
then move forward with to be successful.

WH: I like modules but also share Doug's skepticism about them being ready
for ES6

ARB: I've been working on implementing modules for a year now and the
changes you made in November invalidated most of that work, and they were
not changes that I could agree with.

STH: To Clarify, we should not do a design along these lines AND you're
concerned about the schedule.

ARB/AWB: Don't want to go back to ground zero.

LH: Start with a more robust design exercise, instead of trying to patch.

YK: To address the notion that the need has eroded, the systems that have
been developed have actually put us in a worse position.

EA: Can we get champions of an alternate proposal? Andreas?

AWB: If we're going to defer or decouple modules from ES6, we need to know
sooner, by the next meeting.

STH: I believe strongly that the state of the module system is being
mis-perceived.

EA: I'm really confused by the current system and I feel like i used to
understand the old module proposal well.

ARB: Same here.

LH: I don't think the current system is grounded or addresses the real
problems that it needs to address.
...What we need to accomplish is much deeper.
...Prior art and experience dictates how the experience will be received
and this doesn't seem to come up.

YK: I see modules as largely desugaring to two things that can be done
with defining and loading in AMD, YUI etc.

LH: I'm not seeing all of the needs being met. My intuition is to say this
is a sugar for AMD, I think that could be a solid guiding principle.

YK: I agree that it's easier to see a path forward if it's largely the same
thing as something that is already in use.

STH: re: possibly maximally minimal? There are too many details and
requirements that are closer to surface, syntactically. It's harder to
jettison the complicated parts to reduce the work, but also gives me
confidence that the fundamental design is not changing as much as those in
the room may believe.

Re: Ducks, Rabbits, and Privacy

2013-01-22 Thread Claus Reinke
It's my opinion that saying that closures should be used for an object
to hold onto private data, as you are advocating, is in conflict with ES's
prototypal model of inheritance. Methods cannot both (A) be on a
constructor's prototype and (B) live inside the scope used to house
private data. The developer is forced to make a decision: Do I want
my methods to be defined on the constructor's prototype or do I
want them to have access to private data?


That used to worry me, too, when I came up with my pattern for
implementing (TypeScript-style) private slots via bind[1], but 
currently I think it is inherent (no pun intended;) in private data.


You could have your methods on the prototype and extend/bind 
them from the constructor to give them access to private data.


However, private here means instance-private, so if you have
a method that needs access to instance-private data, what is that
going to do on the prototype? You could store it there, but have
to remember to provide it with that instance-private data when 
borrowing it.


This is different from class-private static data, and also from
protected slots, where each object or each method in the
prototype chain is supposed to have access. I suppose those
could also be modeled using private symbols - private symbols
do more than just (instance-)private slots.

Claus

[1] https://mail.mozilla.org/pipermail/es-discuss/2013-January/028073.html



Re: Proxy target as __proto__? (Re: Proxy's optional target)

2013-01-18 Thread Claus Reinke

Hi Tom,


I'm not sure I fully understand your proposal, but could you not achieve it
by simply doing:

var target = ...; // might be frozen
var p = Proxy( Object.create(target), handler);


Ah, too obvious for me, thanks! 

Also, proxy wrappers often modify functions, which tend to be on 
a non-frozen prototype. So perhaps it isn't as big an issue as I thought.


Claus

PS. I probably shouldn't mention that Proxies' invariant checks
   "only against own properties" behave differently from how non-
   Proxy objects behave, if a target prototype property happens
   to be frozen (override prohibition "non-mistake")?


var x = Object.freeze({foo: 88});
var y = Object.create(x);

y.foo = 99; // fail
console.log( y.foo ); // 88

var yp = Proxy(y,{get:function(t,n,r){ return n==="foo" ? 99 : t[n] } });
console.log( yp.foo ); // 99



Re: Private Slots

2013-01-17 Thread Claus Reinke

I have suggested before that it would be good to put control
over object iteration into the hands of the object authors, by
enabling them to override the slot iteration method.


I might be missing something, but isn't this basically covered 
with the enumerable flag? 


There are several object iteration protocols, of which enumerable
implicitly specifies one, and this one is (a) somewhat out of favor
as the default and (b) does not cover private slots.

One could think about a 'freezable' flag, to allow private slots
to participate in the iteration behind .freeze.

Claus



Re: Private Slots

2013-01-17 Thread Claus Reinke
It's really none of your business when you try to freeze my object 
whether any of


(a) pre-existing private-symbol-named properties remain writable;
(b) weakmap-encoded private state remains writable;
(c) objects-as-closures environment variables remain writable.

Really. Not. Your (user). Business!


But it is Your (author) business, and turning everything into
a proxy to get control over how the authored objects behave
seems a little excessive.


What have proxies to do with any of a-c? I don't follow.


I was assuming that proxies would be able to intercept freeze
and implement matching behavior for private slots.


It has been pointed out that the issue is one of implicitly called
iterators: standard methods for freezing or cloning call iterators 
that only handle public API.


I think you're missing the point. Object.freeze deals in certain 
observables. It does not freeze closure state, or weakmap-encoded 
state, or (per the ES6 proposal) private-symbol-named property state 
where such properties already exist. Those are not directly observable 
by the reflection API.


True, private symbols as first-class objects can hide anywhere.

I was thinking in terms of private slots as limited to the instance
(hoping to translate existing private symbol property names to 
fresh private symbols, thereby supporting mixins without exposing
the existing private symbols), but even if that was the case, one would 
quickly end up with the complexity of a deep-cloning operation, which 
could only be provided as a primitive/built-in.


For the special case of freeze, perhaps a 'freezable' attribute is all
that is needed to include private slots in the freeze iteration, without
exposing them.

You -- as in You the abstraction implementor -- may indeed make
such state observable.


But if I do not have a way to hook into freeze, etc, how do I make
my objects with hidden state behave like objects with exposed state?

Claus



Proxy target as __proto__? (Re: Proxy's optional target)

2013-01-17 Thread Claus Reinke
The proxy target is important because it specifies some invariants 
about the proxy (typeof, builtin brand, behavior of forwarding for 
unspecified traps, values of internal properties like [[DateValue]], 
[[NumberValue]], etc.).


That is probably the most important difference between direct
proxies and old-style proxies. Yet I find it slightly limiting and
accident-prone: it uses invariants and target to make proxies
not just behave like an object, but to behave like the target.

Presentations on direct proxies tend to present two usage
patterns: wrapped objects and virtual objects.


My problem is: if I am using proxies as wrappers, I want to use
the target object as a -wait for it- prototype, to be *modified* by
the proxy. But if the target object happens to be frozen, modified
returns are no longer allowed by the invariants. To cover this
eventuality, I should use the virtual object pattern even if I
just want to wrap an object!

Would it be possible/helpful to use the target merely as a 
__proto__ instead of a harness, inheriting the target's internal 
properties without over-constraining the proxy's ability to
modify the wrapped target's behavior? Invariants could still 
use an implied object for freezing the *proxy*, so the proxy 
would behave as an object, not necessarily the same as the 
target object.


Claus



Re: Private Slots

2013-01-16 Thread Claus Reinke

Below is a slight variation of the closure hack that allows using
private properties through this. The idea is to use the public 'this'
as prototype for the private 'this', using 'bind' to give instance
methods access to the private 'this'

Bound methods. Smart!
I come to wonder why you even need the Object.create at all.


I need both a private 'this' (for the methods to use) and a public 
'this' (for the constructor to return, and for the public/clients to 
use), and I need them to be related.


Using Object.create allows to put the private fields before the
public fields in the prototype chain (the public 'this' is the prototype
of the private 'this'), so instance methods can access both while
the public can only access the public 'this'.

Claus


var MyClass = (function () {

   function MyClass(id) {
      this.id = id;

      var private_this = Object.create(this);  // this and more
      private_this.secret = Math.random().toString();

      this.guess = guess.bind(private_this);
   }

   function guess(guess) {
      var check = guess===this.secret
         ? "I'm not telling!"
         : "That guess is wrong!";
      console.log("("+this.id+"'s secret is: "+this.secret+")");
      console.log(this.id+' says: '+check);
   }

   return MyClass;
})();

var myObj1 = new MyClass("instance1");
var myObj2 = new MyClass("instance2");

console.log(myObj1,myObj2);

var guess = Math.random().toString();
console.log("guessing: "+guess);
myObj1.guess(guess);
myObj2.guess(guess);






Re: Private Slots

2013-01-15 Thread Claus Reinke
It's really none of your business when you try to freeze my object 
whether any of


(a) pre-existing private-symbol-named properties remain writable;
(b) weakmap-encoded private state remains writable;
(c) objects-as-closures environment variables remain writable.

Really. Not. Your (user). Business!


But it is Your (author) business, and turning everything into
a proxy to get control over how the authored objects behave
seems a little excessive. And the same thing applies to clone/mixin.

It has been pointed out that the issue is one of implicitly called
iterators: standard methods for freezing or cloning call iterators 
that only handle public API.


I have suggested before that it would be good to put control
over object iteration into the hands of the object authors, by
enabling them to override the slot iteration method. 

One would need to find a way of doing so without exposing
private names, but it should allow object authors to handle
your a-c, as well as define what cloning/mixing should do in
the presence of private state (however encoded, although
private slots might make this easier/more explicit).

Claus



Re: Private Slots

2013-01-14 Thread Claus Reinke


From the teachability perspective, I'm tired of explaining the closure 
hack to explain private properties. Even to some who are experienced 
webdevs, I have to explain that they can't access the private property 
through this..
The language needs to evolve to the point where people can write 
this[something] to retrieve private state. Symbols work for that.


Below is a slight variation of the closure hack that allows using
private properties through this. The idea is to use the public 'this'
as prototype for the private 'this', using 'bind' to give instance
methods access to the private 'this'

Claus


var MyClass = (function () {

   function MyClass(id) {
      this.id = id;

      var private_this = Object.create(this);  // this and more
      private_this.secret = Math.random().toString();

      this.guess = guess.bind(private_this);
   }

   function guess(guess) {
      var check = guess===this.secret
         ? "I'm not telling!"
         : "That guess is wrong!";
      console.log("("+this.id+"'s secret is: "+this.secret+")");
      console.log(this.id+' says: '+check);
   }

   return MyClass;
})();

var myObj1 = new MyClass("instance1");
var myObj2 = new MyClass("instance2");

console.log(myObj1,myObj2);

var guess = Math.random().toString();
console.log("guessing: "+guess);
myObj1.guess(guess);
myObj2.guess(guess);



Re: excluding features from sloppy mode

2012-12-30 Thread Claus Reinke
Ease of teaching != successfully imparted knowledge at scale. Sorry, but 
it's true. People don't use "use strict"; at top level enough, and
teaching them all will take time. Even then, because of the Law of Least 
Effort, it'll be left out.


This is the major objection some of us keep raising, and you don't 
engage with it. Please do!


   The only occasions when I don't use strict mode is when
   I forget to write "use strict", which is most of the time.


Ideally, I would like to get rid of the pragma, while making strict
mode the default for ES6. But it would be ES6 strict mode, which
does not have to be the same as ES5 strict mode.

If there is anything in ES5 strict mode that cannot be implemented 
efficiently or that gives other reasons for not wanting to use strict 
mode, on purpose (rather than by accident/lack of knowledge), 
then ES6 strict mode could perhaps be refined to get rid of those 
stumbling blocks?


So, my question is: is it possible to merge the needs of ES5 
sloppy mode users and the advantages of ES5 strict mode and 
come up with a one-mode-only ES6?


Claus



Re: excluding features from sloppy mode

2012-12-30 Thread Claus Reinke

It would actually be nice to have that as a feature: If the variable name is
`_` then it can be used multiple times. It’s a nice, self-descriptive way of
saying that you don’t care about a parameter value.


That underscore wildcard is the exact syntax used in functional
languages, and very useful, I agree. In JS, that syntax would be a
breaking change, unfortunately. But we could use something else (e.g.
I proposed '.' in the past).


Some languages even interpret any id with '_'-prefix as wildcard, and
warn about uses as likely errors. Btw, the existing duplicate-parameter
error feature smells of should-be-a-warning-not-an-error.

But '_' is in popular use in JS, there being few good short identifiers.

How about moving the early error from parameter list to parameter
use? If a parameter isn't used, even if duplicated, it isn't likely to be
an error (*), and this would allow for wildcard use. If a duplicate
parameter is used at all, that seems to be the case to guard against.

Claus

(*) Though one can always construct a case where something is
   an error, eg: function(a,a) { return b } // meant a,b instead of a,a
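
Under that suggested rule (hypothetical, not part of any ES draft),
the early error would move from the parameter list to the parameter
use:

   function(_, _, x) { return x }   // ok: duplicate '_' is never used
   function(a, a)    { return a }   // early error: duplicate 'a' is used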




Re: excluding features from sloppy mode

2012-12-30 Thread Claus Reinke

Another thought: What will JavaScript code look like once 99% of browsers in
use support ES6? Will we have a language with coherent semantics and a
simple structure? That is: is there a way to drop some of the trickiness,
long term? And which of the approaches gets us there?


In the short term, while people are making the transition, the rule
would be stated as above: “If you want the new stuff, turn on strict
mode or wrap a module around it.” Later, once ES6 is everywhere, it
would instead be stated as: “Turn on strict mode or code in a module in
order to code in JavaScript. If you don't, you'll be coding instead in
an insane JavaScript fossil that exists merely for compatibility with
old ES3 code. No one even understands its scoping rules.”


I'd like to add one JS coder's view, and a suggestion (at the end).

Not long ago, the ES situation -to me- looked like this:

- ES3 engines are dying

- ES5 is about to be in-practice standard (a state now achieved?)

- if you want to look ahead or catch more silly errors, use ES5 strict

- because ES3 engines aren't quite dead, ES5 strict mode
   is a pragma that will be ignored by those engines

- ES6 will start from ES5 strict mode

While complicated as status quo, there was a clear progression
path: use ES5 now, move to ES5 strict when possible, in anticipation
of ES6. In particular, it was clear that ES3 dependence or ES5 sloppy
mode were temporary, with very short remaining intended life-time,
and their problematic features, as far as addressed in ES5 strict mode,
were on their way out for ES6 (such as 'with').

These days, 'with' and sloppy mode are still in ES6, and there is
talk of supporting them and ES3-dependent code in combination
with ES6-new-features, perhaps forever (can't deprecate the web),
together with in-the-wild code that depends on non-ES features.

The quotes at the top of this message echo Andreas' argument
(ES future is longer than ES past) against a state of discussion that
gives high prominence to the past (support unmaintained code).

In other languages, I would suggest tooling as a remedy (automated
code upgrade, as done for Cobol and the year 2000 issue), but for
JS, that route seems impractical (analysis being undecidable, and
owners of unmaintained JS code being greater in number and
having smaller budgets than those of unmaintained Cobol code).

// suggestion

Perhaps there is a way to make the automated upgrade
problem solvable/cheap? Instead of ES6+ supporting sloppy
mode and strict mode and mixtures of new features with
sloppy mode indefinitely, how about turning the situation
on its head:

- ES6 engines default to strict mode, with new features
   (the cleaner future)

- ES6 engines support a "use ES5" pragma
   that switches off both new features and strict mode
   (give a helping hand to support old code)

- ES6 engines ignore "use strict"
- ES3/5 engines ignore "use ES5"

// end suggestion

That way, you don't have to version to use new language
(the current standard), only to use old language. And all
code owners have to do to support unmaintained code
is to slap "use ES5" in front of it - easily automated.

ES engine implementers would have to support ES5 mode,
but at least they wouldn't have to support mixtures of ES5
sloppy mode and ES6+ features. ES5 mode would treat
ES6 features the way an ES5 engine would.

As an added bonus, future ES committees would have an
easier time querying the web for code still relying on ES5.
Perhaps some day, ES5 mode will no longer be needed,
but meanwhile ES6+ won't be complicated by it.

Just a thought,
Claus




Re: Do Anonymous Exports Solve the Backwards Compatibility Problem?

2012-12-21 Thread Claus Reinke

* The module loader API exposes a runtime API that is not new
syntax, just an API. From some earlier Module Loader API drafts, I
thought it was something like System.get() to get a dependency,
System.set() to set the value that will be used as the export.

* Base libraries that need to live in current ES and ES.next worlds
(jquery, underscore, backbone, etc…) would *not* use the ES.next
module syntax, but feature detect the System API and call it to
participate in an ES.next module scenario, similar to how a module
today detects if it wants to register for node, AMD or browser
globals:


There is a slightly annoying mismatch here, though: ES6 modules
are *compile-time* constructs, so jquery et al cannot completely
integrate by using ES6 *runtime* APIs. If code depends on jquery,
jquery will need to be loaded explicitly, by hand, before the
dependency resolution for the caller starts (i.e., a separate script
element), even if jquery starts to use System.set to register itself.
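
To illustrate with the draft System API named above (exact signatures
varied between drafts), jquery.js could end with something like

   if (typeof System !== "undefined" && System.set) {
     System.set("jquery", jQuery)   // runtime registration
   }

but a module's compile-time import of "jquery" cannot trigger loading
that script; it has to have run already.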

This wasn't an issue with ES5 module libraries, where everything
was runtime and nothing was checked - you could have dependencies
that registered themselves (or were registered by shims) on load.

One might be able to have a special-purpose loader, though,
which knows about jquery and handles it in its resolve/load
hooks, similar to config shim?


* Modules using the ES.next module syntax will most likely be
contained to app logic at first because not all browsers will have
ES.next capabilities right away, and only apps that can restrict
themselves to ES.next browsers will use the module syntax. Everything
else will use the runtime API.


I'd prefer to use transpilers, mapping new syntax to runtime constructs
in old engines. That way, all newly-written code can use the same, new
syntax, but the compile-time checking advantages only come into play
when the transpilation step is removed, and ES6 engines are used.

We are now in the odd situation that there is a user base for ES6
modules in TypeScript, but since the ES6 module spec is still in progress,
TS has a mix of partially-implemented old spec and not-yet-implemented
new spec.

The idea is to use modern module syntax, and transpile to AMD or
CommonJS or ES6, as needed. Currently, TS coders try out external
modules, find them cumbersome, and fall back to reference paths
and internal modules (which translates to includes+iifes), but that
is merely a result of the current spec and implementation state.


For using ES5 libraries that do not call the ES Module Loader runtime
API, a shim declarative config could be supported by the ES Module
Loader API, similar to the one in use by AMD loaders:

http://requirejs.org/docs/api.html#config-shim

this allows the end developer to consume the old code in a modular
fashion, and the parsing is done by the ES Module Loader, not userland
JS.


I'd very much like to see a config-shim-look-alike implemented in
terms of the updated ES6 modules spec, just to be sure it is possible.
This is important enough that it should be part of the ES6 modules
test suite.

Claus





Re: Bringing ES6 to programmers as soon as possible

2012-12-19 Thread Claus Reinke

http://www.2ality.com/2012/12/es6-workflow.html


I would really like to see a shared resource collecting ES6 shims
and techniques.

Since this involves checking those shims against the evolving
spec, it would be good to have this as a wiki page moderated
by tc39 (otherwise we'll keep seeing new blog posts showing
how to use old specs). 

Then transpilation ideas could be shared and reused, but also 
be whittled down and improved.


For instance, traceur maps let to try/throw/catch, which is clever
for a prototype, but not practical for production use (defeating
optimizers). TypeScript maps arrow functions to a 'that=this'
pattern, which is clever, but would be rather complicated to get
right, and TypeScript doesn't (better to use .bind, and shim that
for ES3). Mapping let to IIFE is simple, but needs refinement for
'this'/'arguments'/.. . And so on, and so on.
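
For instance, the naive let-to-IIFE mapping and the needed 'this'
refinement (a sketch):

   // { let x = 1; f(x, this); }  naively becomes:
   (function (x) { f(x, this); }(1));   // wrong: 'this' (and 'arguments')
                                        // are now the IIFE's own
   // refined for 'this' ('arguments' still needs an outer capture):
   (function (x) { f(x, this); }).call(this, 1);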


Claus

Smaller notes on the blog post

- harmonizr != modernizr (url typo)

- for transpilers, feel free to link to 
   http://clausreinke.github.com/js-tools/resources.html#group:language-extensions


- it is good to see the TypeScript project holding their ES6+types
   position against all those .net folks who want to change it into
   something else; if that wall was ever breached, TS would hold 
   the same disadvantages as CS





Re: On dropping @names

2012-12-10 Thread Claus Reinke

   let lhs = rhs; statements          // non-recursive, scope is statements

   let { declarations }; statements   // recursive, scope is
                                      // declarations and statements


   let { // group of mutually recursive bindings, *no statements*

   [x,y] = [42,Math.PI]; // initialization, not assignment

   even(n) { .. odd(n-1) .. } // using short method form
   odd(n) { .. even(n-1) .. } // for non-hoisting functions

   class X { .. }
   class C extends S { .. new X( odd(x) ) .. }
   class S { }
   };
   if (even(2)) console.log(  new C() );


First of all, this requires whole new syntax for the let body. 


Yes and no - I'm borrowing definition syntax from other parts of
the language. Part of the appeal of having a declarations-only block
was to be able to use things like short method form there. The main 
appeal was to have no statements or hoisted constructs between 
declarations in a letrec.


[by separating recursive and non-recursive forms, the non-recursive
form would have no rhs-undefineds for the ids being defined, which
would circumvent the separate, lexical form of dead zone]

Second, it doesn't eliminate the need for temporal dead zones at all. 


You could well be right, and I might have been misinterpreting what
temporal dead zone (tdz) means. 

For a letrec, I expect stepwise-refinement-starting-from-undefined 
semantics, so I can use a binding anywhere in scope but may or may
not get a value for it. While the tdz seems to stipulate that a binding 
for a variable in scope doesn't really exist and may not be accessed 
until its binding (explicit or implicitly undefined) statement is evaluated.


So what does it gain? The model we have now simply is that every
scope is a letrec (which is how JavaScript has always worked, albeit
with a less felicitous notion of scope).


That is a good way of looking at it. So if there are any statements
mixed in between the definitions, we simply interpret them as
definitions (with side-effecting values) of unused bindings, and

{ let x = 0;
  let z = [x,y]; // (*)
  x++;
  let y = x;
  let __ = console.log(z);
}

is interpreted as

{ let x = 0;
  let z = [x,y]; // (*)
  let _ = x++;
  let y = x;
  let __ = console.log(z);
}

What does it mean here that y is *dead* at (*), *dynamically*?
Is it just that y at (*) is undefined, or does the whole construct 
throw a ReferenceError, or what? 


If tdz is just a form of saying that y is undefined at (*), then I can
read the whole block as a letrec construct. If y cannot be used 
until its binding initializer statement has been executed, then I 
seem to have a sequence of statements instead.
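
A small probe can separate the two readings; this sketch assumes
an engine implementing the drafted let semantics:

try {
  {
    let z = y; // y is in scope here, but not yet initialized
    let y = 1;
  }
  console.log('letrec reading: y was undefined');
} catch (e) {
  console.log('dead zone reading: ' + e.name); // ReferenceError
}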


Of course, letrec in a call-by-value language with side-effects is 
tricky. And I assume that tdz is an attempt to guard against 
unwanted surprises. But for me it is a surprise that not only can 
side-effects on the right-hand sides modify bindings (x++), but 
that bindings are interpreted as assignments that bring in 
variables from the dead.


The discussion of dead zone varieties in

https://mail.mozilla.org/pipermail/es-discuss/2008-October/007807.html

was driven by the interplay of old-style, hoisted, definitions with
initialization desugaring to assignment. The former mimics a letrec,
with parallel definitions, the latter means a block of sequential
assignments.

So I was trying to get the old-style hoisting and initialization by
assignment out of the picture, leaving a block of recursive
definitions that has a chance of being a real letrec. Perhaps
nothing is gained wrt temporal dead zones. But perhaps this is a
way to clean up the statement/definition mix, profit from short
definition forms, and provide for non-recursive let without a
lexical dead zone.

Claus



Re: Do we really need the [[HasOwnProperty]] internal method and hasOwn trap

2012-12-10 Thread Claus Reinke

Also compile-time garbage collection or compile-time memory
management. Then there is the whole area of linear types or
uniqueness types, 


affine types


which allow for in-place updating (reusing
memory) without observable side-effects when absence of other
references can be proven statically. Perhaps also fusion, which
avoids the allocation of intermediate structures, when it can be
proven statically that construction will be followed immediately
by deconstruction.


This is all great fun, but really off-target for JS. JS has no type 
system, lots of aliasing, and a never-ending need for speed.


Seems I haven't replied yet. You do not necessarily need a language
level type system to profit from compile-time memory management.

I don't have any references but -at the time when soft typing had
made the rounds but static typing seemed to rule the functional
programming world in terms of speed- there were some approaches 
for working on internal byte code generated from dynamically typed 
source, to approximate some of the performance gains of source 
level static analysis. 

One way of looking at it was to make information providers and 
consumers explicit as instructions in the byte code, then to try to 
move related instructions closer to each other by byte code 
transformations.


If you managed to bring together a statement that made an
abstract register a string and a test that checked for string-ness,
then you could drop the test. If you managed to bring together
statements for boxing and unboxing, you could drop the pair,
saving memory traffic. If you managed to limit the scope of a
set of reference count manipulations, you could replace them
by bulk updates. No matter how many references there are to
an unknown object coming in, after you make a copy, and until
you pass out a reference to that copy, you know you have the
only copy and might be able to update in place.
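
As a toy illustration of the provider/consumer reordering idea (the
instruction format and names are invented for this sketch, not taken
from any real engine):

// drop a string-ness test that immediately follows an instruction
// known to produce a string in the same abstract register
function peephole(code) {
  var out = [];
  code.forEach(function (instr) {
    var prev = out[out.length - 1];
    if (instr.op === 'checkString' && prev
        && prev.op === 'makeString' && prev.reg === instr.reg) {
      return; // provider and consumer adjacent: test is redundant
    }
    out.push(instr);
  });
  return out;
}

console.log(peephole([
  { op: 'makeString',  reg: 0 },
  { op: 'checkString', reg: 0 }, // eliminated
  { op: 'concat',      reg: 0 }
]));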

Most of these weren't quite as effective as doing proper
static analysis in a language designed for it, but such tricks
allowed implementations of dynamically typed functional
languages to come close to the performance of statically
typed language implementations.

Claus



Re: lexical 'super' in arrow functions?

2012-12-10 Thread Claus Reinke

That's related to a feature I have on my list to implement:
cross-referencing actions in a step-through debugger/action record with
their specific origin in the spec. So as you step into a function, see a
sidebar scrolling by with Function Declaration Instantiation, multiple hits
on Create(Mutable|Immutable)Binding, InstantiateArgumentsObject, Binding
Initialization, etc.


I've followed your occasional near-announcements with interest
but cannot help the feeling that there is a great effort the purpose
of which I do not fully appreciate yet. Sometimes it seems you
understate what you've already implemented, at other times it
seems as if this is mostly a fascinating project for you, with little
concern about how others might find it useful.

Even as an executable version of the spec, it would be great to
have. If single-stepping and cross-referencing helps program
understanding, tool support for single-stepping through a cross-
referenced spec should help getting used to the spec internals
(not to mention helping to verify spec consistency).

Or am I misunderstanding again?-)
Claus



Re: (Map|Set|WeakMap)#set() returns `this` ?

2012-12-08 Thread Claus Reinke

If one thing this is clear from this discussion, it is that different
programmers have different preferences (perhaps even changeable
depending on use case). So, no single standard API will suit everyone
and the language support for different patterns could be improved.

Meanwhile, it seems one can get both via proxies (wrapping selected
target methods to return this or the argument). I had some trouble finding
a direct proxy implementation in released engines, so I'm using Tom's
harmony-reflect shim in node. See code below, which outputs:


   $ node --harmony proxy-chain.js
   undefined { xs: [ 5 ] }
   { xs: [ 5, 6 ] }
   { xs: [ 5, 6, 7, 8 ] }
   9 { xs: [ 5, 6, 7, 8, 9 ] }

The original collection's add method returns undefined (line 1), the 
this-chained proxy's add returns this (lines 2,3), the value-chained 
proxy's add returns the added value (line 4).


Claus


// install: npm install harmony-reflect
// run: node --harmony proxy-chain.js

var Reflect = require('harmony-reflect'); // also shims direct proxies

// enable chainable this-return for methods in target
function chain_this(target, methods) {
  return Proxy(target, {
    get: function(target, name, receiver) {
      return methods.indexOf(name) !== -1
        ? function() {
            target[name].apply(target, arguments);
            return receiver;
          }
        : target[name];
    }
  });
}

// enable chainable value-return for unary methods in target
function chain_value(target, methods) {
  return Proxy(target, {
    get: function(target, name, receiver) {
      return methods.indexOf(name) !== -1
        ? function(arg) {
            target[name](arg);
            return arg;
          }
        : target[name];
    }
  });
}

function X() { this.xs = []; }
X.prototype.add = function(x) { this.xs.push(x); }; // returns undefined

var x = new X();

console.log( x.add(5), x );

var xt = chain_this(x, ['add']);

console.log( xt.add(6) );
console.log( xt.add(7).add(8) );

var xv = chain_value(x, ['add']);

console.log( xv.add(9), xv );



Re: Module Comments

2012-12-06 Thread Claus Reinke
Well, the thing is it isn't consistent with the destructuring meaning: dropping the curlies here 
means extracting a single export (aka property), which is not what it means in destructuring 
assignment/binding anywhere else.


But that said, the convenience may well still trump the inconsistency.


I think I'd prefer consistency here, as it also allows us to get rid of

   import foo as foo;

and replace it with

   import foo from foo;

which keeps the order of

   import {x,y} from foo

and it is all just module-level destructuring (fewer new concepts).

Claus




Re: On dropping @names

2012-12-06 Thread Claus Reinke

I would have preferred if let had not been modeled after var so much, but
that is another topic.


It is as clean as it can get given JS. 


I was hoping for something roughly like

   let lhs = rhs; statements
   // non-recursive, scope is statements

   let { declarations }; statements 
   // recursive, scope is declarations and statements


No hoisting needed to support recursion, no temporal deadzones,
no problem with referring to old x when defining x non-recursively.
And less mixing of declarations and statements.

And you may be surprised to hear that there are some voices who 
actually would have preferred a _more_ var-like behaviour.


Well, in the beginning let was meant to replace var, so it had to
be more or less like it for an easy transition. Later, even that transition
was considered too hard, so var and let coexist, giving more freedom
for let design. At least, that is my impression.


The program equivalences are the same, up to annoying additional
congruences you need to deal with for nu-binders, which complicate
matters. Once you actually try to formalise semantic reasoning (think
e.g. logical relations), it turns out that a representation with a
separate store is significantly _easier_ to handle. Been there, done
that.


Hmm, I used to find reasoning at term level quite useful (a very long
time ago, I was working on a functional logic language, which had
something like nu-binders for logic variables). Perhaps it depends on
whether one reasons about concrete programs (program development)
or classes of programs (language-level proofs).


gensym is more imperative in terms of the simplest implementation:
create a globally unused symbol.


Which also happens to be the simplest way of implementing
alpha-conversion. Seriously, the closer you look, the more it all
boils down to the same thing.


Yep. Which is why I thought to speak up when I saw those concerns
in the meeting notes;-)


Not under lambda-binders, but under nu-binders - they have to.

I was explaining that the static/dynamic differences that seem to make
some meeting attendees uncomfortable are not specific to nu-scoped
variables, but to implementation strategies. For lambda-binders, one can get
far without reducing below them, but if one lifts that restriction,
lambda-bound variables appear as runtime constructs, too, just as for
nu-binders and nu-bound variables (gensym-ed names).


Not sure what you're getting at precisely, but I don't think anybody
would seriously claim that nu-binders are useful as an actual
implementation strategy.


More as a user-level representation of whatever implementation
strategy is used behind the scenes, just as lambda-binders are a
user-level representation of efficient implementations. 


But to clarify the point:

Consider something like:

   (\x. (\y. [y, y]) x)

Most implementations won't reduce under the \x., nor will they
bother to produce any detailed result, other than 'function'. So
those x and y are purely static constructs.

However, an implementation that does reduce under the \x.
will need to deal with x as a dynamic construct, passing it to
\y. to deliver the result (\x. [x,x]).

Now, the same happens with nu-binders, or private names:
after bringing them in scope, computation continues under
the nu-binder, so there is a dynamic representation (the
generated symbol) of the variable.

My point is that there isn't anything worrying about variables
appearing at dynamic constructs, nor is it specific to private
names - normal variables appearing to be static is just a
consequence of limited implementations. What is static is
the binding/scope structure, not the variables.

Since we mostly agree, I'll leave this here. Perhaps it helps
the meeting participants with their concerns.

Claus



Re: On dropping @names

2012-12-06 Thread Claus Reinke

I was hoping for something roughly like

   let lhs = rhs; statements
   // non-recursive, scope is statements

   let { declarations }; statements
   // recursive, scope is declarations and statements


Problem is that you need mutual recursion between different 
binding forms, not just 'let' itself.


Leaving legacy var/function out of it, is there a problem with
allowing mutually recursive new declaration forms in there?

   let { // group of mutually recursive bindings

     [x,y] = [42,Math.PI]; // initialization, not assignment

     even(n) { .. odd(n-1) .. } // using short method form
     odd(n)  { .. even(n-1) .. } // for non-hoisting functions

     class X { .. }
     class C extends S { .. new X( odd(x) ) .. }
     class S { }
   };
   if (even(2)) console.log( new C() );
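
(For comparison, a sketch of how far plain block scoping already
gets without new syntax, assuming an ES6 engine - function
declarations in a block are mutually recursive, classes are not
hoisted:)

{
  const [x, y] = [42, Math.PI];
  function even(n) { return n === 0 ? true  : odd(n - 1); }
  function odd(n)  { return n === 0 ? false : even(n - 1); }
  class S { }
  class C extends S { constructor() { super(); this.ok = even(2); } }
  if (even(2)) console.log(new C());
}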

Or did I misunderstand your objection?
Claus



Re: lexical 'super' in arrow functions?

2012-12-05 Thread Claus Reinke

Language specification is a difficult task, especially when handling a
complex language, legacy spec style, and wide variety of audience
background, not to mention a committee with lots of feedback and opinions.
We are very lucky that Allen does the job he does.


Yes. That doesn't mean he should have to do everything alone,
or that the results of his work shouldn't be usable for a wider
audience. And documentation helps collaboration. Here's an 
illustrative example from another context:


I've recently been digging through non-trivial, insufficiently
documented project source code (namely the TypeScript compiler
and services). So have several others. Like a spec, the whole thing is
completely specified by its source, but with just a few more high-level
comments, we would have had a much easier time, would have
wasted less time, and would be more certain of the understanding
we've reached. More coders would have tried, or more would have
succeeded, in building on the services, and earlier. There still isn't
as much plugin developer uptake as the code itself warrants.

If you believe in commenting source code, you should believe in
annotating the reference implementation that is the ES spec.


That also means we shouldn't make it harder, or ask the spec to bear
burdens it doesn't need to handle.  JavaScript is blessed with numerous
excellent books describing how to use the language and what various
features are for, including Dave's new book. That's the place to go for
description and explanation, not the spec.


I'll try to answer several related concerns in one reply:

1. Back when I last looked at Standard ML, it had a formal definition
   and an informal, but official commentary

   I don't want that - it explains the formalism, instead of the language

2. David Flanagan does a very nice job of providing an informal
   reference to the language (and its environment)

   I hope he'll do it again for ES6, and the ES6 spec should not 
   include all the useful text that he adds


3. There have been community efforts to explain or annotate the
   spec, and I hope there will be such efforts for ES6

   This is getting closer, but represents more work than is needed 
   for documenting the core spec


4. There are concerns about annotations (a) extending the failure
   surface and (b) being used instead of the normative parts

   (a) yes, definitely want that! It is about saying the same thing
   twice, in different forms and level of detail. That means one
   can check for internal consistency, and file either a spec or
   a documentation bug.

   (b) if implementors are tempted to treat prose as normative,
   that only confirms that the normative formal parts are too
   difficult to interpret for normal JS hackers;

   By all means, put the normative, formal parts first, then call
   the informal parts notes on the spec, to avoid any genuine
   misunderstandings about which parts are normative; then
   community and tests need to ensure that engines implement
   spec, not notes.

What do I want, then? Well, good comments give intent and big
picture instead of explaining the code. And some parts of the spec
do have both already. So I'm mainly asking for very brief notes
in the existing style (on language features, not on the formalism).

For instance, chapter 8 is all about the big picture, and the return 
(12.9) and with statements (12.10) have short notes explaining 
what they are about. This shows that no huge effort is needed.


   Given that with was well on its way out of the language, and that 
   so many coders do not care about strict mode, there ought to be 
   an explanation that use of 'with' is discouraged, and why (it is 
   useful, but too powerful for its use cases).


However, right before that, the continue (12.7) and break (12.8)
statements have no such explanatory notes. This shows that the
level of documentation is not consistent, and that it is not just a
question of focusing on the formalization of new features first.

Moving to new stuff, the only notes in arrow function definitions
(13.2) are explanations of the formalism, and the only notes about
lexical this (never mind lexical super) in arrows are in the sections
on *.forEach. The super keyword section (11.2.4) has no notes
at all - if you were browsing the spec trying to figure out what
super is about in JS, how much of the spec would you have to
read to answer that question, and how many readers would succeed?

So, I am not asking for great extra efforts, just for a couple of
sentences indicating what each language feature section is
trying to formalize. If the big picture sections exist and are
readable, then simply using language NOTEs consistently 
everywhere would help.


Claus



Re: On dropping @names

2012-12-05 Thread Claus Reinke

There were various mixed concerns, like perhaps requiring implicit
scoping of @-names to be practical in classes, 


Like implicitly scoping this, super, and arguments, this would cause
problems with nested scopes. Unless the name of the class was made
part of the implicitly named scope reference?

their operational generativity perhaps being a mismatch with their 
seemingly static meaning in certain syntactic forms, 


This appears to be ungrounded. See below.

potential ambiguities with what @x actually denotes in certain 
contexts. And probably more. Most of that should be in the meeting 
minutes.


Can't say about ambiguities. And I started asking because I couldn't
find (valid) reasons in the minutes;-)


Implicit scoping in a language with nested scopes has never been a
good idea (even the implicit var/let scopes in JS are not its strongest
point). Prolog got away with it because it had a flat program structure
in the beginning, and even that fell down when integrating Prolog-like
languages into functional one, or when adding local sets of answers.


Indeed. (Although I don't think we have implicit let-scopes in JS.)


There are few enough cases (scope to nearest enclosing block unless 
there is an intervening conditional or loop construct, to nearest for 
loop body if it appears in the loop header, to the right in a 
comprehension) that the difference might not matter. 

I would have preferred if let had not been modeled after var so 
much, but that is another topic.



Symbols will definitely still be usable as property names, that's
their main purpose.

The main technical reason that arbitrary objects cannot be used indeed
is backwards compatibility. The main moral reason is that using
general objects only for their identity seems like overkill, and you
want to have a more targeted and lightweight feature.


Having specific name objects sounds like the right approach.
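
For concreteness, a sketch of such a name object in use, assuming
the Symbol API from the current proposals (the dropped @-name
sugar appears only in the comment):

var secret = Symbol('secret'); // gensym: a fresh, unforgeable property key

var counter = {
  [secret]: 0, // computed property key, not a string name
  next() { return ++this[secret]; }
};

console.log(counter.next());    // 1
console.log(counter['secret']); // undefined - the string key is unrelated
// with @-names, roughly: counter.@secret instead of counter[secret]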


So I'm not sure how your concerns are being addressed by
merely replacing a declarative scoping construct by an explicitly
imperative gensym construct?


We have the gensym construct anyway, @-names were intended 
to be merely syntactic sugar on top of that.


Yes, so my question was how removing the sugar while keeping
the semantics is going to address the concerns voiced in the meeting
notes.


- explicit scopes (this is the difference to gensym)
- scope extrusion (this is the difference to lambda scoping)


Scope extrusion semantics actually is equivalent to an allocation
semantics. The only difference is that the store is part of your term
syntax instead of being a separate runtime environment, but it does
not actually make it more declarative in any deeper technical sense.
Name generation is still an impure effect, albeit a benign one.


For me, as a fan of reduction semantics, having all of the semantics
explainable in the term syntax is an advantage!-) While it is simple
to map between the two approaches, the nu-binders are more
declarative in terms of simpler program equivalences: for gensym,
one needs to abstract over generated symbols and record sharing
of symbols, effectively reintroducing what nu-binders model directly.

gensym is more imperative in terms of the simplest implementation:
create a globally unused symbol.


As Brendan mentions, nu-scoped variables aren't all that different
from lambda-scoped variables. It's just that most implementations
do not support computations under a lambda binder, so lambda
variables do not appear to be dynamic constructs to most people,
while nu binders rely on computations under the binders, so a
static-only view is too limited.


I think you are confusing something. All the classical name calculi
like pi-calculus or nu-calculus don't reduce/extrude name binders
under abstraction either.


Not under lambda-binders, but under nu-binders - they have to.

I was explaining that the static/dynamic differences that seem to make
some meeting attendees uncomfortable are not specific to nu-scoped 
variables, but to implementation strategies. For lambda-binders, one 
can get far without reducing below them, but if one lifts that restriction,
lambda-bound variables appear as runtime constructs, too, just as for 
nu-binders and nu-bound variables (gensym-ed names).


Claus



Re: lexical 'super' in arrow functions?

2012-12-04 Thread Claus Reinke

Is 'super' currently limited to method bodies, excluding local functions?
Given that 'this' is lexical in arrow functions, I expected any enclosing
'super' to be available, as well, but I cannot confirm this from the spec.


Yes, clearly super should be able to be used in an arrow function that is lexically scoped to an 
enclosing super binding.  The mechanisms for describing this are mostly in the spec., but I just
checked and there are a few loose ends that I will clean-up in the next spec. draft.


That would be good.


OK, I looked more closely and anything needed for super references from within Arrow
functions is already in the current draft.  Just trace through the algorithms in section 11.2.4.
Particularly, steps 1-4 of the Evaluation algorithms.  However, I did add a few clarifying notes in
the next draft.


Before I make another attempt to extract this info from the current
draft, let me make some general comments:

Like everyone else on this list, I have grown familiar with the current
spec - not as familiar as tc39 members, but enough to find answers
to questions when I need them.

But with the evolving drafts of the new spec, I'm back in the situation
most JS coders are wrt the spec: trying to find answers in the spec is
just a little demoralizing, often unsuccessful, and will remain a hidden
art for those who do not read/study most of it at some point.

Language specs, for those languages that have them, fall somewhere
on a scale from informal, readable to formal, unreadable.

ES, for all its faults, has a spec on the formal side -which is a very
good thing!- but unfortunately also on the not directly readable side.

The reason is that the spec is essentially a reference implementation -
even though it doesn't use a formal language, it consists of what to
do with this piece of code instructions. Understanding these
instructions requires knowledge and understanding of the reference
machine code patterns, instructions and system libraries.

This makes the spec not so useful for quick lookups or for
understanding what those language features are for.

It would enhance the usefulness of this important asset -the spec-
if each section would start with one or two informal paragraphs
on the most salient points of each feature.

The formal parts would still be there to confirm the details, to guide
implementers, and as the normative part of the spec. But the informal
parts would make quick lookups succeed, would give guidance on
what is being formalized, and would support program construction
("what is this good for?" rather than just "how do I implement this?").

Claus




  1   2   3   4   >