Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread Tom Van Cutsem
[+Allen]

2013/3/13 Nathan Wall nathan.w...@live.com

 However, as a matter of principle, my argument is that
 `Object.getOwnPropertyDescriptor` should, at the bare minimum, return a
 descriptor that can be known to work in `Object.defineProperty`.  If
 `Object.defineProperty` doesn't accept it, then your
 `getOwnPropertyDescriptor` didn't really give me a valid descriptor.

 I think that this behavior (1) limits the creativity of developers to
 define properties like `Object.prototype.get`, (2) is a potential stumbling
 block, (3) has no real benefit -- really, there's not anything positive
 about this behavior, and (4) forces developers who want to support
 `Object.prototype.get` to add an extra layer of cleaning before using
 `defineProperty`.


While the monkey-patching of Object.prototype (don't do that!) is still
the culprit, I agree that it would have been better if defineProperty
looked only at own properties of the descriptor. I almost always think of
descriptors as records rather than objects. Similarly, perhaps
Object.getOwnPropertyDescriptor should have returned descriptors whose
[[prototype]] was null.
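
For concreteness, here is a minimal reproduction of the hazard Nathan
describes (ES5.1 behavior at the time of this thread; the Object.prototype
mutation is shown only to simulate a misbehaving third-party script):

    // Don't do this in real code -- it simulates a third-party script
    // polluting Object.prototype:
    Object.prototype.get = function () {};

    var from = { x: 1 };
    var desc = Object.getOwnPropertyDescriptor(from, 'x');
    // desc has its own value/writable/enumerable/configurable, but it also
    // *inherits* `get`, so ToPropertyDescriptor sees something that is both
    // a data and an accessor descriptor:
    Object.defineProperty({}, 'x', desc); // TypeError

    delete Object.prototype.get; // clean up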

It's true that Reflect.getOwnPropertyDescriptor and Reflect.defineProperty
give us a chance to fix this. I'm just worried that these differences will
bite developers who assume that these methods are identical to the
Object.* versions.

I'd like to hear Allen's opinion on this issue.

Cheers,
Tom


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread Herby Vojčík



Tom Van Cutsem wrote:

[+Allen]

2013/3/13 Nathan Wall nathan.w...@live.com

However, as a matter of principle, my argument is that
`Object.getOwnPropertyDescriptor` should, at the bare minimum,
return a descriptor that can be known to work in
`Object.defineProperty`.  If `Object.defineProperty` doesn't accept
it, then your `getOwnPropertyDescriptor` didn't really give me a
valid descriptor.

I think that this behavior (1) limits the creativity of developers
to define properties like `Object.prototype.get`, (2) is a potential
stumbling block, (3) has no real benefit -- really, there's not
anything positive about this behavior, and (4) forces developers who
want to support `Object.prototype.get` to add an extra layer of
cleaning before using `defineProperty`.


While the monkey-patching of Object.prototype (don't do that!) is
still the culprit, I agree that it would have been better if
defineProperty looked only at own properties of the descriptor.

No, there are legitimate uses of Object.create(descriptorTemplate) with
descriptors.



I almost always think of descriptors as records rather than objects.
Similarly, perhaps Object.getOwnPropertyDescriptor should have returned
descriptors whose [[prototype]] was null.


Herby


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread David Bruant

On 14/03/2013 08:51, Tom Van Cutsem wrote:

[+Allen]

2013/3/13 Nathan Wall nathan.w...@live.com

However, as a matter of principle, my argument is that
`Object.getOwnPropertyDescriptor` should, at the bare minimum,
return a descriptor that can be known to work in
`Object.defineProperty`.  If `Object.defineProperty` doesn't
accept it, then your `getOwnPropertyDescriptor` didn't really give
me a valid descriptor.

I think that this behavior (1) limits the creativity of developers
to define properties like `Object.prototype.get`, (2) is a
potential stumbling block, (3) has no real benefit -- really,
there's not anything positive about this behavior, and (4) forces
developers who want to support `Object.prototype.get` to add an
extra layer of cleaning before using `defineProperty`.


While the monkey-patching of Object.prototype (don't do that!) is 
still the culprit, I agree that it would have been better if 
defineProperty looked only at own properties of the descriptor.
In a previous message, Brandon Benvie mentioned that he uses inheritance to
reuse a property descriptor [1] (I think there was another quote from him,
but I can't find it now). I can imagine it's a pattern in actual use.


I almost always think of descriptors as records rather than 
objects. Similarly, perhaps Object.getOwnPropertyDescriptor should 
have returned descriptors whose [[prototype]] was null.


It's true that Reflect.getOwnPropertyDescriptor and 
Reflect.defineProperty give us a chance to fix this. I'm just worried 
that these differences will bite developers who assume that 
these methods are identical to the Object.* versions.

I doubt differences would be a good idea.

Maybe an idea would be for Object.defineProperty to call
Attributes.@@iterate if it is user-defined, so that a user can restrict which
property-descriptor properties get traversed.
If that's too heavy a refactoring, maybe an ES6 Map could be accepted
as the 3rd argument of Object.defineProperty (with Map semantics, not
object semantics). This way, one could write the copy function as:


function copy(from, to) {
    for (let name of Object.getOwnPropertyNames(from)) {
        let desc = Object.getOwnPropertyDescriptor(from, name);
        desc[@iterator] = ownIterator; // is that the proper syntax? I'm a bit lost :-/
        Object.defineProperty(to, name, new Map(desc));
    }
}

ownIterator only iterates over own properties, as its name indicates, so
the Map will only contain those. The extra Map allocation isn't that big of
a deal since it is very short-lived. It could be shared and cleared
across iterations if necessary.
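
For what it's worth, one plausible reading of that ownIterator (purely
illustrative: it is written with the Symbol.iterator name that eventually
replaced @iterator, and a Map-accepting Object.defineProperty is of course
still only a proposal here):

    // Hypothetical sketch: yield [key, value] entries for own properties only,
    // which is the shape that new Map(...) consumes.
    function* ownIterator() {
      for (let key of Object.getOwnPropertyNames(this)) {
        yield [key, this[key]];
      }
    }

    // Usage, following the copy() sketch above:
    //   desc[Symbol.iterator] = ownIterator;
    //   Object.defineProperty(to, name, new Map(desc)); // only under the proposal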


Nathan, how do you feel about such a solution?

David

[1] https://mail.mozilla.org/pipermail/es-discuss/2012-November/026081.html


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread Brandon Benvie

On 3/14/2013 2:12 AM, David Bruant wrote:

On 14/03/2013 08:51, Tom Van Cutsem wrote:

[+Allen]

2013/3/13 Nathan Wall nathan.w...@live.com


However, as a matter of principle, my argument is that
`Object.getOwnPropertyDescriptor` should, at the bare minimum,
return a descriptor that can be known to work in
`Object.defineProperty`.  If `Object.defineProperty` doesn't
accept it, then your `getOwnPropertyDescriptor` didn't really give
me a valid descriptor.

I think that this behavior (1) limits the creativity of
developers to define properties like `Object.prototype.get`, (2)
is a potential stumbling block, (3) has no real benefit --
really, there's not anything positive about this behavior, and
(4) forces developers who want to support `Object.prototype.get`
to add an extra layer of cleaning before using `defineProperty`.


While the monkey-patching of Object.prototype (don't do that!) is 
still the culprit, I agree that it would have been better if 
defineProperty looked only at own properties of the descriptor.
In a previous message, Brandon Benvie mentioned that he uses inheritance to 
reuse a property descriptor [1] (I think there was another quote from 
him, but I can't find it now). I can imagine it's a pattern in actual use.


I also mentioned I thought it was unlikely to be commonly used, since 
I've never seen it used besides some of my own code (which exists in a 
couple of libraries used by few people other than me). I think, though, the 
other thing I mentioned, which you're probably thinking of, is that methods 
on the Descriptor class prototypes can be useful. Here's a simple version 
that demonstrates the potential utility:



  const fields = new Set(['enumerable', 'configurable', 'writable',
                          'value', 'get', 'set', 'key']);

  class Descriptor extends null {
    constructor(desc){
      if (desc) {
        for (let [key, value] of items(desc)) {
          if (fields.has(key) && value === undefined ||
              Object.prototype[key] !== value) {
            this[key] = value;
          }
        }
      }
    }
    hide(){
      this.enumerable = false;
    }
    lock(){
      this.configurable = false;
    }
    define(object, key = this.key){
      Object.defineProperty(object, key, this);
    }
    /** etc **/
  }

  class AccessorDescriptor extends Descriptor {
    setOn(object, value){
      if (typeof this.set === 'function') {
        return call(this.set, object, value) !== false;
      }
      return false;
    }
    getOn(object){
      if (typeof this.get === 'function') {
        return call(this.get, object);
      }
    }
    /** etc **/
  }
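
Note that `items` and `call` in the sketch above are assumed library helpers
rather than built-ins; plausible stand-ins (hypothetical, not taken from
Brandon's actual libraries) would be:

    // items(obj): own enumerable [key, value] pairs, suitable for destructuring
    function items(obj) {
      return Object.keys(obj).map(key => [key, obj[key]]);
    }

    // call(fn, thisArg, ...args): an uncurried Function.prototype.call
    const call = Function.prototype.call.bind(Function.prototype.call);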


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread David Bruant

On 14/03/2013 17:01, Brandon Benvie wrote:
I also mentioned I thought it was unlikely to be commonly used, since 
I've never seen it used besides some of my own code (which exists in a 
couple libraries used by few or just me).
Sincere apologies for missing an important part of your message (I remember 
there was another message besides the one I quoted, but I've been unable to 
find it) :-/


David


Re: Nuking misleading properties in `Object.getOwnPropertyDescriptor`

2013-03-14 Thread Andrea Giammarchi
This is an excellent point, Mark; I wish I had mentioned this earlier in
redefine.js.

For the sake of already-panicking developers: the problem Mark describes can be
avoided by defining the property explicitly instead of assigning it, so that this
won't work:

Object.freeze(Object.prototype);
function Point(x, y) {
  this.x = x;
  this.y = y;
}
Point.prototype.toString = function() {
  return '<' + this.x + ',' + this.y + '>';
};

alert(new Point(1, 2)); // [object Object]

but this will:
Object.defineProperty(
  Point.prototype,
  'toString', {
    value: function() {
      return '<' + this.x + ',' + this.y + '>';
    }
  }
);

alert(new Point(1, 2)); // <1,2>

br




On Wed, Mar 13, 2013 at 8:36 AM, Mark S. Miller erig...@google.com wrote:

 The one qualification everyone should be aware of is that if they simply
 freeze these themselves, rather than using the tamper-proofing abstractions
 defined by SES[1][2], then they will suffer from the override mistake[3]
 and conventional code such as the following will break:

   function Point(x, y) {
 this.x = x;
 this.y = y;
   }
   Point.prototype.toString = function() {
  return '<' + x + ',' + y + '>';
   };

 This normal looking assignment to Point.prototype.toString fails because
 Point.prototype inherits from Object.prototype, on which toString is a
 non-writable non-configurable data property.

 [1]
 https://code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/ses/repairES5.js#466
 [2]
 https://code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/ses/startSES.js#869
 [3]
 http://wiki.ecmascript.org/doku.php?id=strawman:fixing_override_mistake




 On Wed, Mar 13, 2013 at 7:18 AM, David Bruant bruan...@gmail.com wrote:

 On 12/03/2013 16:45, Tom Van Cutsem wrote:

 Hi Nathan,

  2013/3/10 Nathan Wall nathan.w...@live.com

 Given that `defineProperty` uses properties on the prototype of the
 descriptor[1] and `getOwnPropertyDescriptor` returns an object which
 inherits from `Object.prototype`, the following use-case is volatile:

 function copy(from, to) {
     for (let name of Object.getOwnPropertyNames(from))
         Object.defineProperty(to, name,
             Object.getOwnPropertyDescriptor(from, name));
 }

 If a third party script happens to add `get`, `set`, or `value` to
 `Object.prototype` the `copy` function breaks.


  To my mind, the blame for the breakage lies with `Object.prototype`
 being mutated by the third-party script, not with property descriptors
 inheriting from Object.prototype. Thus, a fix for the breakage should
 address that directly, rather than tweaking the design of property
 descriptors, IMHO.

 I agree.

 As Object.prototype-jacking threats are discussed more and more recently,
 I'd like to take a step back and meta-discuss JavaScript threats.

 Currently, by default, any script that runs can mutate the environment it
 is executed in (this can be fixed by sandboxing with things like Caja [1] and,
 soon, the module loader API used with proxies [2], but even then native
 built-ins could leak).
 The first (security) decision any JavaScript application should make
 would be to freeze all built-ins, like SES [3][4] does. (In the future, it
 could even make sense to add a CSP [5] directive for that.)
 If necessary, the application can first enhance the environment by adding
 polyfills/libraries and such, but that's pretty much the only thing that's
 acceptable to run before freezing everything.
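
 As a rough illustration (a minimal sketch only -- real SES, linked below as
 [3][4], does far more, including working around the override mistake Mark
 describes):

     // Freeze the primordials after any polyfills have run.
     [Object, Array, Function, String, Number, Boolean, Date, RegExp, Error,
      JSON, Math].forEach(function (builtin) {
       Object.freeze(builtin);
       if (builtin.prototype) Object.freeze(builtin.prototype);
     });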

 Given that freezing all built-ins (after polyfills) is a reasonable thing
 to do, I think a JavaScript threat should be considered serious only if it
 still applies assuming the environment is already frozen.
 It naturally rules out threats related to property descriptors inheriting
 from Object.prototype, or anything along the lines of "what if an attacker
 switches Array.prototype.push and Array.prototype.pop?"

 David

 [1] http://code.google.com/p/google-caja/
 [2] http://wiki.ecmascript.org/doku.php?id=harmony:module_loaders
 [3] http://code.google.com/p/es-lab/wiki/SecureEcmaScript
 [4]
 http://code.google.com/p/es-lab/source/browse/#svn%2Ftrunk%2Fsrc%2Fses
 [5]
 https://dvcs.w3.org/hg/content-security-policy/raw-file/tip/csp-specification.dev.html




 --
 Cheers,
 --MarkM





Re: Module Loader Comments

2013-03-14 Thread Kevin Smith
 The way to invoke the default behavior will be to just return
 `undefined` from a hook (or fulfill with `undefined` for the fetch
 hook).


That's one way to do it.  It doesn't allow the overrider to do any
transformation on the result though.  If the overrider can explicitly call
the default behavior, then source translation could just be done from
within the fetch hook.

We'll be updating the wiki in the next week or two with these
 sorts of issues.


The sooner the better!  Modules are (IMO) the most important new feature of
ES6 and it's getting close...

- It ought to be possible to override the translation behavior without
 mucking with fetching.  Coffeescript translators don't need to perform
 their own fetching.


First, if the default behavior is explicitly invoked (as opposed to
implicitly by returning undefined), then fetching doesn't really have to be
mucked with.

Also (and this is a separate point), I don't see how in the current design
a CoffeeScript translator could be implemented as a custom loader.  As far
as I can tell, all loaders encapsulate their own module instance table.
 For CoffeeScript, you want the JS module instance table and the CS module
instance table to be the same.  Is there a way to have a custom loader
share an instance table with another loader, then?


 - calls to `eval` go through the translate hook, but not through the
 fetch hook, since there's nothing to fetch.


So we're going to have eval execute arbitrary languages other than
javascript?  I didn't realize this...  I'm going to have to think on that
for a while.

Thanks for debating, BTW!

{ Kevin }


Re: Module Loader Comments

2013-03-14 Thread Sam Tobin-Hochstadt
On Thu, Mar 14, 2013 at 11:03 AM, Kevin Smith khs4...@gmail.com wrote:

 The way to invoke the default behavior will be to just return
 `undefined` from a hook (or fulfill with `undefined` for the fetch
 hook).


 That's one way to do it.  It doesn't allow the overrider to do any
 transformation on the result though.  If the overrider can explicitly call
 the default behavior, then source translation could just be done from within
 the fetch hook.

That's why, for several of the hooks, we plan to provide the hook with an
explicit value that is what the default would do.  For `translate`
this isn't needed, since the default is not to transform, and for
`fetch` browsers already provide ways of doing remote fetching.  I
don't think we want to force engines to reify these operations as new
JS functions.

 We'll be updating the wiki in the next week or two with these
 sorts of issues.

 The sooner the better!  Modules are (IMO) the most important new feature of
 ES6 and it's getting close...

Glad you're excited about modules.

 - It ought to be possible to override the translation behavior without
 mucking with fetching.  Coffeescript translators don't need to perform
 their own fetching.

 First, if the default behavior is explicitly invoked (as opposed to
 implicitly by returning undefined), then fetching doesn't really have to be
 mucked with.

 Also (and this is a separate point), I don't see how in the current design a
 CoffeeScript translator could be implemented as a custom loader.  As far as
 I can tell, all loaders encapsulate their own module instance table.  For
 CoffeeScript, you want the JS module instance table and the CS module
 instance table to be the same.  Is there a way to have a custom loader share
 an instance table with another loader, then?

For this use case, you'd probably want to just modify the default
System loader to understand Coffeescript, via file extension
detection, AMD-style plugins, or some other mechanism.
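
For illustration, a file-extension-based translate hook might look roughly
like this (the hook API was still in flux at the time, so the shape of the
`load` record and the way the hook is installed are assumptions, and
CoffeeScript.compile assumes the compiler has already been loaded):

    System.translate = function (load) {
      if (/\.coffee$/.test(load.address)) {
        return CoffeeScript.compile(load.source);
      }
      return load.source; // plain JS passes through untouched
    };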

If you want a new loader, you'll have to explicitly share the modules
that you want to share.  But the point of a new loader is to have a
distinct table.

 - calls to `eval` go through the translate hook, but not through the
 fetch hook, since there's nothing to fetch.


 So we're going to have eval execute arbitrary languages other than
 javascript?  I didn't realize this...  I'm going to have to think on that
 for a while.

For Coffeescript, it's easy to actually translate `eval` calls.

The more important reason to handle `eval` in a loader is so that if
you're enforcing some invariant on code executed in a specific loader,
you don't want the code to be able to escape that invariant via
`eval`.

 Thanks for debating, BTW!

Hopefully it doesn't need to be a debate. :)

Sam


Re: Module Loader Comments

2013-03-14 Thread Brandon Benvie

On 3/14/2013 11:03 AM, Kevin Smith wrote:


Also (and this is a separate point), I don't see how in the current 
design a CoffeeScript translator could be implemented as a custom 
loader.  As far as I can tell, all loaders encapsulate their own 
module instance table.  For CoffeeScript, you want the JS module 
instance table and the CS module instance table to be the same.  Is 
there a way to have a custom loader share an instance table with 
another loader, then?


You can't share the table, but you can manually use `loaderOne.set(mrl, 
loaderTwo.get())` after the translate hook? (I think that would be the 
closest to the final evaluation.)



Re: Module Loader Comments

2013-03-14 Thread Brandon Benvie

On 3/14/2013 12:04 PM, Brandon Benvie wrote:

loaderOne.set(mrl, loaderTwo.get())
Sorry, I meant `loaderOne.set(mrl, loaderTwo.get(mrl))`. Assuming their 
resolve hooks are the same I guess.
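
In other words, something along these lines (a sketch only; the Loader
get/set method names follow the 2013 drafts and may not match what finally
ships):

    function shareModule(fromLoader, toLoader, mrl) {
      var instance = fromLoader.get(mrl); // already-instantiated module, if any
      if (instance) {
        toLoader.set(mrl, instance);
      }
    }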



Re: Module Loader Comments

2013-03-14 Thread Kevin Smith

 The more important reason to handle `eval` in a loader is so that if
 you're enforcing some invariant on code executed in a specific loader,
 you don't want the code to be able to escape that invariant via
 `eval`.


This makes sense now.  The loader has the opportunity to analyze and
transform external code, so it should also have the same ability with
respect to dynamic code.  Maybe this rationale could be captured on the
wiki.

So it's really more of a transform hook than a translate hook.  Would
naming it transform make the intention more obvious?

{ Kevin }


Re: Module Loader Comments

2013-03-14 Thread Sam Tobin-Hochstadt
On Thu, Mar 14, 2013 at 2:14 PM, Kevin Smith khs4...@gmail.com wrote:
 So it's really more of a transform hook than a translate hook.  Would
 naming it transform make the intention more obvious?

Transformations of languages are translations, so I think that
`translate` is clearer.  CS -> JS is a translation.

Sam


Re: Module Loader Comments

2013-03-14 Thread Kevin Smith
  So it's really more of a transform hook than a translate hook.  Would
  naming it transform make the intention more obvious?

 Transformations of languages are translations, so I think that
 `translate` is clearer.  CS -> JS is a translation.


Sure, but to me translate implies translation from one programming
language to another.  But the loader concept can be fully expressed
without dependence on the concept of other languages.  Transform, to
me, implies the more generic concept.

Anyway, my questions are cleared up (for now).  Can't wait for those wiki
updates!  ; )


{ Kevin }


Re: Questions/issues regarding generators

2013-03-14 Thread Brendan Eich

Consider:

var i = getSomeIterator();
var x0 = i.next(), x1 = i.next(), x2 = i.next();
for (let x of i) {
...
}

Should the loop iterate over x0, x1, and x2? That's what would (have to) 
happen if i[@iterator]() returned a "clone" of the iterator i "reset" to the 
starting position.


Of course the iteration protocol we have in Harmony has no notion of 
position, or memory, or any particular thing that might be needed to 
replay x0, x1, and x2.


Cloning i at its current position (if such a notion exists) has the 
problem that Andreas objected to in the o.p.


Not cloning i, making iter[@iterator]() === iter as in Python, solves 
the problem minimally.
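
For concreteness, the Pythonic convention looks roughly like this in user
code (written with the Symbol.iterator name that eventually replaced
@iterator; the object shape is illustrative only):

    function counter(limit) {
      let i = 0;
      return {
        next() {
          return i < limit ? { value: i++, done: false }
                           : { value: undefined, done: true };
        },
        [Symbol.iterator]() { return this; } // the iterator is its own iterable
      };
    }

    let it = counter(5);
    it.next(); it.next();               // consume 0 and 1
    for (let x of it) console.log(x);   // 2, 3, 4 -- no replay of consumed values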


I don't see a way to specify iterator cloning as part of the iteration 
protocol. What am I missing?


/be

Dmitry Lomov wrote:


(I'll address comments from both your e-mails here)

On Tue, Mar 12, 2013 at 7:56 AM, Jason Orendorff 
jason.orendo...@gmail.com wrote:


On Tue, Mar 12, 2013 at 1:06 AM, Dmitry Lomov dslo...@google.com wrote:

At the risk of repeating myself, the 'open()' scenario is handled
perfectly well with the iterable (see my example). Examples
where things truly cannot be reiterated (I am not sure why a
network stream is such an example - the network connection can
always be opened twice) are rare. One possibility would be to
throw on the second call to iterator().



Gosh, maybe we are irreconcilable then. Implicitly opening network
connections many times seems bad to me. Same for reopening files,
actually.


I do not think we are irreconcilable. Clearly there is a library 
design choice here. A designer of a particular library for 
file/network IO may or may not consider opening a file on 'iterator()' 
call too implicit. I think it is not too implicit, while you appear to 
think otherwise.


In the world with Iterables, the library designer can easily disallow 
iterating the result of open() a second time - as I suggested above, if 
for whatever reason the sequence cannot be re-iterated, the iterator() 
method can throw on a second call. In that case, an attempt to zip a file 
with itself will throw when zip calls the iterator method a second 
time, and that will be an early failure with a clear cause.


However, non-reiterable iterables are a fringe case - maybe 10% of 
iterators are non-re-iterable even by the standards you suggest 
(expensive operations on iteration start). [I am being generous here; 
it seems that all allegedly non-reiterable examples suggested so far have 
been related to I/O; given that I/O libraries are generally 
asynchronous in ES, I/O is generally not very amenable to being 
implemented as iterators, since in general the results of I/O operations 
are only available in a callback, and not on demand as the next() method 
would require]. My educated guess would be that 90% of 
iterators/iterables in the wild would be easily re-iterable, as they 
would be the results of operations over collections (such as filter, map, 
zip and similar). That is the baby being thrown out with the bathwater, not 
the non-restartable iterators.



This semantics is sound and consistent, but there is a problem: by
that logic, the first call 'zip(rangeAsArray, rangeAsArray)' also
has all the appearances of a user error! It requires careful
analysis and thinking to convince oneself that it is indeed
correct. Well, maybe not in a simple case when the initializer of
rangeAsArray is an array literal, but as soon as the initializer
is more complicated - say an external function, you can never be sure.


 But you could argue the same way for literally any other operation 
on an object. 'rangeAsArray.length', for example, would also be 
nonsensical if rangeAsArray turns out to be some other sort of object 
and not an array after all.


We are not talking here about arbitrary operations on a random object; we 
are talking about operations mandated by the language and their 
semantics. In fact, length is not a bad example of a precedent in this 
space: after ES5,

   for (var i = 0; i < obj.length; i++) console.log(obj[i]);

works great for all array-like data structures, including arrays, 
strings and typed arrays. It would be nice to achieve the same for 
iterator(), for..of and generators.


 Note that generators return coroutine objects with methods other 
than just the iteration-related methods. The coroutine state, to my 
mind, is inherent to the returned object.


In the Iterable design, coroutine state would be inherent to the result 
of iterator(), i.e. coroutine execution begins once iterator() is called.


If we are to presume that this particular kind of bug will be
common in JS, why isn't it common in Python?
If I'm mistaken about Python and this is actually a common problem
there, then I'd reconsider.


I am not a deep specialist in Python, but my understanding is that the 
problem there is 

Re: Questions/issues regarding generators

2013-03-14 Thread Andreas Rossberg
On 14 March 2013 22:54, Brendan Eich bren...@mozilla.com wrote:
 Consider:

 var i = getSomeIterator();
 var x0 = i.next(), x1 = i.next(), x2 = i.next();
 for (let x of i) {
 ...
 }

 Should the loop iterate over x0, x1, and x2? That's what would (have to)
 happen if i[@iterator]() returned a "clone" of the iterator i "reset" to the
 starting position.

I agree this is an unsatisfactory consequence of the
generatorObject.iterator = cloning proposal, which was meant as a kind
of have-your-cake-and-eat-it-too compromise. It doesn't really achieve
that, so I withdraw it.

That leaves my original proposal not to have generator application
return an iterator, but only an iterable. Under that proposal, your
example requires disambiguation by inserting the intended call(s) to
.iterator in the right place(s).

/Andreas


Re: Questions/issues regarding generators

2013-03-14 Thread Brendan Eich

Andreas Rossberg wrote:

On 14 March 2013 22:54, Brendan Eich bren...@mozilla.com wrote:

Consider:

 var i = getSomeIterator();
 var x0 = i.next(), x1 = i.next(), x2 = i.next();
 for (let x of i) {
 ...
 }

Should the loop iterate over x0, x1, and x2? That's what would (have to)
happen if i[@iterator]() returned a "clone" of the iterator i "reset" to the
starting position.


I agree this is an unsatisfactory consequence of the
generatorObject.iterator = cloning proposal, which was meant as a kind
of have-your-cake-and-eat-it-too compromise. It doesn't really achieve
that, so I withdraw it.


Thanks.


That leaves my original proposal not to have generator application
return an iterator, but only an iterable. Under that proposal, your
example requires disambiguation by inserting the intended call(s) to
.iterator in the right place(s).


That's horribly inconvenient. It takes Dmitry's example:

 function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

 let rangeAsGenerator = enum(1, 4)
 let dup = zip(rangeAsGenerator, rangeAsGenerator)  // Oops!

which contains a bug under the Harmony proposal, to this:

 function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

 let rangeAsGenerator = enum(1, 4)
 let dup = zip(rangeAsGenerator[@iterator](), rangeAsGenerator[@iterator]())


which, while it works, is just silly given JS's mutable object heritage. 
Programmers will not write this. They will instead write


 function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

 let dup = zip(enum(1, 4), enum(1, 4))

which is clearer, shorter, and more truthful and beautiful.

You seem to think iterables are immutable, or something. 'taint so -- JS 
is not ML! :-P


/be




Re: Questions/issues regarding generators

2013-03-14 Thread Andreas Rossberg
On 8 March 2013 18:23, Jason Orendorff jason.orendo...@gmail.com wrote:
 On Thu, Mar 7, 2013 at 1:05 PM, Andreas Rossberg rossb...@google.com
 wrote:

 On 7 March 2013 18:30, Allen Wirfs-Brock al...@wirfs-brock.com wrote:
  Zip's informal contract should state that if iterators are passed as
  arguments they need to be distinct objects. If you want to implement it
  defensively, you can add a  check for that pre-condition.

 I have to disagree here. That is just evading the question what the
 contract for .iterator is. Either it is supposed to create new state
 or it isn't. It's not a very useful contract to say that it can be
 both, because then you cannot reliably program against it.

 In Python, the contract definitely says that it can be both. It's the only
 practical choice. For collections, you want new state. But you also want
 things such as generators, database cursors, and file descriptors to be
 iterable:

 with open(filename, 'r') as f:
     for line in f:
         handle_input(line)

 and you definitely don't want new state here, because what would that even
 mean? A read position is kind of inherent to a file descriptor, right?

A generator is an abstraction that is intended to be invokable many
times. Are you saying that there are generators for which you cannot
do that?

All I'm suggesting (in my original proposal) is that iterator creation
is always performed in the .iterator method. For generators that means
that you can create multiple iterators from one generator application,
but that would be no different from what you can do anyway by invoking
the same generator with the same arguments multiple times.
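
A rough user-land approximation of that stratification, for illustration
(the helper name and the Symbol.iterator spelling are assumptions, not
taken from the draft):

    // Wrap a generator function so that each iterator-method call starts a
    // fresh coroutine, making the result re-iterable.
    function iterable(genFn, ...args) {
      return { [Symbol.iterator]() { return genFn(...args); } };
    }

    function* range(from, to) { for (let i = from; i <= to; ++i) yield i; }

    let r = iterable(range, 1, 4);
    console.log([...r]); // [1, 2, 3, 4]
    console.log([...r]); // [1, 2, 3, 4] -- re-iterable, unlike range(1, 4) itself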

The stratification I suggest reconciles generators with a clean
contractual interpretation of iterables. Among other things, that
allows generators to be used in combination with both abstractions
over iterators as well as abstractions over iterables (which are
different beasts!). Under the current semantics, that does not really
work.

I can see that the suggestion might look like a complication, but I
think it is a fairly minor one, and more importantly, in practice will
almost always be confined to abstractions.

/Andreas


Re: Questions/issues regarding generators

2013-03-14 Thread Andreas Rossberg
On 14 March 2013 23:38, Brendan Eich bren...@mozilla.com wrote:
 Andreas Rossberg wrote:

 That leaves my original proposal not to have generator application
 return an iterator, but only an iterable. Under that proposal, your
 example requires disambiguation by inserting the intended call(s) to
 .iterator in the right place(s).

 That's horribly inconvenient. It takes Dmitry's example:

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let rangeAsGenerator = enum(1, 4)
  let dup = zip(rangeAsGenerator, rangeAsGenerator)  // Oops!

 which contains a bug under the Harmony proposal, to this:

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let rangeAsGenerator = enum(1, 4)
  let dup = zip(rangeAsGenerator[@iterator](), rangeAsGenerator[@iterator]())

No, why? The zip function invokes the iterator method for you.

See my reply to Jason: I think that in most practical cases (in
particular, all abstractions over iterables), the invocation of the
iterator method will happen inside an abstraction, and the programmer
does not have to worry about it.
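
For example, a zip written against the iterable contract would itself call
the iterator method on each argument (sketch only, with Symbol.iterator
standing in for the draft's @iterator):

    function* zip(a, b) {
      let ia = a[Symbol.iterator]();
      let ib = b[Symbol.iterator]();
      while (true) {
        let ra = ia.next(), rb = ib.next();
        if (ra.done || rb.done) return;
        yield [ra.value, rb.value];
      }
    }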

 which while it works, is just silly given JS's mutable object heritage.
 Programmers will not write this. They will instead write

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let dup = zip(enum(1, 4), enum(1, 4))

 which is clearer, shorter, and more truthful and beautiful.

And that is perfectly fine under my proposal. :)

 You seem to think iterables are immutable, or something. 'taint so -- JS is
 not ML! :-P

Not sure what that has to do with anything. 8-}

/Andreas


Re: Questions/issues regarding generators

2013-03-14 Thread Brendan Eich

Andreas Rossberg wrote:

On 14 March 2013 23:38, Brendan Eich bren...@mozilla.com wrote:

Andreas Rossberg wrote:

That leaves my original proposal not to have generator application
return an iterator, but only an iterable. Under that proposal, your
example requires disambiguation by inserting the intended call(s) to
.iterator in the right place(s).

That's horribly inconvenient. It takes Dmitry's example:

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let rangeAsGenerator = enum(1, 4)
  let dup = zip(rangeAsGenerator, rangeAsGenerator)  // Oops!

which contains a bug under the Harmony proposal, to this:

  function* enum(from, to) { for (let i = from; i <= to; ++i) yield i }

  let rangeAsGenerator = enum(1, 4)
  let dup = zip(rangeAsGenerator[@iterator](), rangeAsGenerator[@iterator]())


No, why? The zip function invokes the iterator method for you.


Sure, but only if you know that. I thought you were advocating explicit 
iterator calls.


A call expression cannot be assumed to return a result that can be 
consumed by some mutating protocol twice, in general. Why should 
generator functions be special?


I agree they could be special-cased, but doing so requires an extra 
allocation (the generator-iterable that's returned).


Meanwhile the Pythonic pattern is well-understood, works fine, and 
(contra Dmitry's speculation) does not depend on class-y OOP in Python.


I guess it's the season of extra allocations, but still: in general when 
I consume foo() via something that mutates its return value, I do not 
expect to be able to treat foo() as referentially transparent. Not in JS!


/be


Re: Questions/issues regarding generators

2013-03-14 Thread Brendan Eich

Andreas Rossberg wrote:

See my reply to Jason: I think that in most practical cases (in
particular, all abstractions over iterables), the invocation of the
iterator method will happen inside an abstraction, and the programmer
does not have to worry about it.


Talking 1:1 with you after the TC39 meeting, it came out that the ES6 
spec does not say that iterators are iterables whose @iterator does 
return self. That changes things, but still makes a messier contract 
than you prefer.


The contract you prefer is one where iterables have @iterator and 
calling it gets a (mutable) iterator that is *not* an iterable. It would 
require, as you proposed, making generators return iterables not 
iterators -- an extra allocation.


At the level of contract cleanliness and usability, that may be better 
than the Pythonic convention -- I'm not sure. Cc'ing Jason.


At the level of extra allocations, I still say boo.

/be


Re: Questions/issues regarding generators

2013-03-14 Thread Jason Orendorff
On Thu, Mar 14, 2013 at 3:48 PM, Andreas Rossberg rossb...@google.com wrote:

 On 8 March 2013 18:23, Jason Orendorff jason.orendo...@gmail.com wrote:
  and you definitely don't want new state here, because what would that
 even
  mean? A read position is kind of inherent to a file descriptor, right?

 A generator is an abstraction that is intended to be invokable many
 times. Are you saying that there are generators for which you cannot
 do that?


Eh? No, I'm saying you generally don't want to restart functions
automatically and implicitly.

Currently, if you call a generator, you get a coroutine. I think what
you're suggesting would instead make generators return a coroutine factory,
and have coroutines implicitly created in many situations. That seems like
it might be bad to me. Not all generators are as straightforward as enum.
They can have side effects, etc. Implicitly creating extra copies of these
things which are kind of like new threads of execution sounds potentially
awful to me.

Also—if you wanted to use generators in a really coroutine-like way, like
task.js does, under your scheme you'd have to explicitly call the @iterator
method in order to get the object you want, the one that has .next(),
.send(), .throw(), and so on. (Not a showstopper, as it's going to be
pretty specialized code that does that.)

I can see that the suggestion might look like a complication, but I
 think it is a fairly minor one, and more importantly, in practice will
 almost always be confined to abstractions.


I agree that in most use cases, no difference will be observed.

-j


Re: Questions/issues regarding generators

2013-03-14 Thread Andreas Rossberg
On 15 March 2013 02:33, Jason Orendorff jason.orendo...@gmail.com wrote:
 On Thu, Mar 14, 2013 at 3:48 PM, Andreas Rossberg rossb...@google.com
 wrote:

 On 8 March 2013 18:23, Jason Orendorff jason.orendo...@gmail.com wrote:
  and you definitely don't want new state here, because what would that
  even
  mean? A read position is kind of inherent to a file descriptor, right?

 A generator is an abstraction that is intended to be invokable many
 times. Are you saying that there are generators for which you cannot
 do that?

 Eh? No, I'm saying you generally don't want to restart functions
 automatically and implicitly.

Hm, where do you see automatically and implicitly? The whole point
of the proposal is to be more *explicit* about when a fresh iterator
is created, and who expects whom to do that.

 Also—if you wanted to use generators in a really coroutine-like way, like
 task.js does, under your scheme you'd have to explicitly call the @iterator
 method in order to get the object you want, the one that has .next(),
 .send(), .throw(), and so on. (Not a showstopper, as it's going to be pretty
 specialized code that does that.)

Yes, but that would also occur inside the task.js abstractions, wouldn't it?

/Andreas


Re: Questions/issues regarding generators

2013-03-14 Thread Andreas Rossberg
On 15 March 2013 01:13, Brendan Eich bren...@mozilla.com wrote:
 Andreas Rossberg wrote:

 See my reply to Jason: I think that in most practical cases (in
 particular, all abstractions over iterables), the invocation of the
 iterator method will happen inside an abstraction, and the programmer
 does not have to worry about it.

 Talking 1:1 with you after the TC39 meeting, it came out that the ES6 spec
 does not say that iterators are iterables whose @iterator does return
 self. That changes things, but still makes a messier contract than you
 prefer.

Yes. Iterators have a next method; that's all that makes them an
iterator, according to the wiki, and having an iterator method is
never mentioned there. The only place where such a case shows up in
the proposals is for generator objects.

 The contract you prefer is one where iterables have @iterator and calling it
 gets a (mutable) iterator that is *not* an iterable. It would require, as
 you proposed, making generators return iterables not iterators -- an extra
 allocation.

 At the level of contract cleanliness and usability, that may be better than
 the Pythonic convention -- I'm not sure. Cc'ing Jason.

 At the level of extra allocations, I still say boo.

I'd say that one allocation per loop is perfectly affordable -- and is
likewise required for packaging up the return value. For both it is
easy to avoid ever materializing the extra object in the common case
of a for-loop.

/Andreas